Chances are you’ve already heard something about ChatGPT, a recently released Artificial Intelligence (AI) tool which has the potential to massively transform the way we work with and relate to technology. It also raises fresh concerns and challenges for cybersecurity. As these new risks arise, we explain why a Zero Trust approach can improve your security posture.

What is ChatGPT?
ChatGPT (the “GPT” stands for “Generative Pre-trained Transformer”) is a large language model developed by OpenAI. It’s designed to understand and generate human-like text responses to a wide variety of questions and prompts. It was trained on a massive amount of text data, including books, articles, and websites, using a deep learning architecture known as the transformer.

How is ChatGPT being used?
ChatGPT has practical applications in chatbots, customer service, language translation, content creation, and virtual assistants along the lines of Siri and Alexa. By automating routine enquiries and providing fast, efficient communication across language barriers, ChatGPT can improve efficiency and enhance customer experiences. It can also generate written content for various purposes, including blog posts, news articles, and social media posts.

In fact, we had a conversation with it, and this article was actually written by ChatGPT and edited by our team.

Overall, ChatGPT is a powerful tool that can be applied in many practical ways to improve communication and streamline workflows in various industries.

What does ChatGPT mean for cybersecurity?
As with any tool, ChatGPT can be helpful or harmful depending on who is using it. In the hands of cyber attackers, ChatGPT could be used to mimic trusted people and organisations, increasing the risk of deceptive activities such as scams.

According to a report published by the threat intelligence firm Recorded Future: “ChatGPT is lowering the barrier to entry for malware development by providing real-time examples, tutorials, and resources for threat actors that might not know where to start.”

It is important to put proper measures in place to mitigate these risks and ensure the ethical and responsible use of the technology.

Zero Trust, another recent buzzword, is being heralded as the future of cybersecurity, and it can be one way to mitigate the threats posed by misuse of ChatGPT.

What is Zero Trust?
Zero Trust is a security approach that assumes every user, device, or resource connecting to a network is potentially untrustworthy and must be verified before access is granted. It’s otherwise known as a ‘never trust, always verify’ approach.

Traditional cybersecurity models have assumed that everything inside a network is secure, a bit like being within the walls of a fortress. If an activity like a user requesting access to information happens from inside a network, it is presumed to present no risk. The focus is only on defending the fortress gates.

Zero Trust takes the stance that malicious activity can occur from inside the fortress. A user requesting access is deemed an untrusted source until that user is authenticated. This is particularly relevant now that sophisticated AI like ChatGPT could generate fake profiles that look and sound identical to real users.

How does Zero Trust apply in practice?
In a Zero Trust environment, each access request undergoes multiple layers of security checks, such as authentication, authorisation, and data encryption.

Security controls are implemented at every level of the organisation’s network, and access to resources is granted on a need-to-know basis, based on the user’s identity, device, and the context of their access request. This means that even if an attacker gains access to the organisation’s network, they will not be able to reach sensitive information or systems without additional authentication and authorisation.
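To make this more concrete, here is a minimal sketch in Python of what a Zero Trust policy check could look like. It is purely illustrative: the function, field names, and rules are our own assumptions rather than any particular product’s API, and a real deployment would also involve encryption, logging, and continuous verification.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_id: str
    mfa_verified: bool       # has the user passed multi-factor authentication?
    device_compliant: bool   # is the device managed and patched?
    resource: str            # what is being requested
    network_zone: str        # e.g. "internal", "vpn", "public"

# Illustrative need-to-know mapping: which roles may access which resources.
AUTHORISED_ROLES = {
    "finance-reports": {"finance-team"},
    "hr-records": {"hr-staff"},
}

def evaluate(request, user_roles):
    """Deny by default: a request must pass every check ('never trust, always verify')."""
    if not request.mfa_verified:        # authentication
        return False
    if not request.device_compliant:    # device health
        return False
    allowed = AUTHORISED_ROLES.get(request.resource, set())
    if not (user_roles & allowed):      # authorisation on a need-to-know basis
        return False
    # Context such as network_zone can inform the decision and be logged,
    # but being "internal" never bypasses the checks above.
    return True

# Usage: a request from inside the network still fails without MFA.
request = AccessRequest("jsmith", mfa_verified=False, device_compliant=True,
                        resource="finance-reports", network_zone="internal")
print(evaluate(request, user_roles={"finance-team"}))  # False
```

The key design choice the sketch illustrates is that location on the network carries no implicit trust: every request is denied unless identity, device, and authorisation checks all pass.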

This approach is not about not trusting one’s own people, but rather being mindful that activity inside a network can’t automatically be considered safe. It’s about creating a more secure and resilient network environment that accounts for the possibility of both internal and external threats.

Cutting through the buzz
With all the current excitement and debate around ChatGPT, it pays to keep in mind that it can also be used for nefarious purposes. A Zero Trust environment may sound daunting, but it is an up-to-date way to secure networks and minimise risk.