How ChatGPT is Changing Our World
The artificial intelligence (AI)-based language model ChatGPT has gained a lot of attention recently, and rightfully so. It is arguably the most widely popular technical innovation since the introduction of the now-ubiquitous smart speakers in our homes that enable us to call out a question and receive an instant answer.
But what is it, and why is it relevant to cyber security and data protection?
What is ChatGPT?
The “GPT” in ChatGPT stands for “Generative Pre-Trained Transformer”. It is a state-of-the-art Natural Language Processing (NLP) model developed by OpenAI, based on a deep learning architecture called the “Transformer”, introduced by researchers at Google in 2017.
At a high level, ChatGPT works by taking in a sequence of words as input and predicting the following word in the sequence based on the context of the previous words. This is done using a technique called autoregression, where the model generates one word at a time and feeds it back into the model to generate the next word.
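The autoregressive loop described above can be illustrated with a toy sketch. This is not how GPT models work internally (they predict over learned token probabilities, not lookup tables); it is only a minimal stand-in that shows the generate-one-word, feed-it-back pattern. The corpus and function names here are made up for illustration.

```python
import random

# Toy autoregression: a bigram table predicts the next word from the
# current word, and each predicted word is fed back in as the new input.
corpus = ("the model reads the input and predicts the next word "
          "the next word is fed back and the model predicts again").split()

# Build a table: word -> list of words observed to follow it
following = {}
for prev, nxt in zip(corpus, corpus[1:]):
    following.setdefault(prev, []).append(nxt)

def generate(start, length, seed=0):
    random.seed(seed)  # fixed seed so the sketch is reproducible
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        # Predict the next word, then append it so it becomes context
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the", 6))
```

Real models do the same loop, but the "table" is replaced by a neural network that scores every word in its vocabulary given all the preceding context.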
To train ChatGPT, OpenAI used a technique called unsupervised learning, where the model was trained on a massive amount of text data from the internet. This allows the model to learn patterns and relationships between words and phrases, which it can then use to generate coherent and natural-sounding text.
One of the key features of ChatGPT is its ability to generate contextually relevant and coherent text, prompting one journalism professor to describe it as “Google on Steroids”. This is because the attention mechanism allows the model to focus on different parts of the input sequence depending on the context, rather than treating the entire string uniformly. Most search engines, by contrast, do not consider context when performing a general search.
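The idea of "focusing on different parts of the input" can be sketched numerically. In the minimal example below, each input word is scored against a query and the scores are turned into weights via softmax; higher weight means more focus. The word list and the hand-made vectors are invented for illustration and carry no real linguistic meaning.

```python
import math

def softmax(scores):
    # Subtract the max for numerical stability, then normalise
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_weights(query, keys):
    # Dot-product score between the query and each key vector
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    return softmax(scores)

words = ["the", "bank", "of", "the", "river"]
# Hand-made 2-d vectors; "river" is deliberately most similar to the query
keys = [[0.1, 0.0], [0.5, 0.2], [0.0, 0.1], [0.1, 0.0], [0.9, 0.8]]
query = [1.0, 1.0]

for word, weight in zip(words, attention_weights(query, keys)):
    print(f"{word:>6}: {weight:.2f}")
```

In a real Transformer the queries and keys are learned from data and computed for every position at once, but the principle is the same: the weights shift depending on context, so the model attends to whichever words matter for the word it is currently predicting.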
While still in its early stages of development, ChatGPT is a ground-breaking innovation that holds the promise of changing the world in many ways. As an AI-based language model, it has the potential to revolutionise the way we communicate, learn, and interact with technology. However, like all technology, it brings great opportunities and threats, not just at an individual level but at a societal one too.
One possible opportunity that can be seized by ChatGPT is improved communication.
ChatGPT has the potential to revolutionise the way we communicate with each other. Its advanced language processing capabilities allow it to accurately understand and respond to human language. This can lead to more efficient and effective communication between people, regardless of language barriers or differences in dialect. Ask a typical translation application to translate a letter, and it does a good job, but ChatGPT is like having an interpreter with you.
Organisations are already using AI to improve chatbots so they can interact with customers worldwide without investing in translation services. This can improve customer service by responding in real-time with unprecedented efficiency and quality. Companies can use ChatGPT to interact with customers in a personalised manner, providing quick and practical solutions to their queries without human intervention.
It can also improve learning: because it can process context and understand language, it can explain things in either complex or simple terms.
Struggling to explain what Cryptography is? Ask ChatGPT to “Explain what Cryptography is as if talking to a ten-year-old child, and explain the importance of the technology”.
However, every technology carries risks. For all the positive aspects of ChatGPT and AI, we must remember that it is flawed, and that it may be used by people whose motives are less benign than yours, or by people who don't understand the implications and limitations of the technology.
The threats we face
We have already heard how ChatGPT can help analyse programming code, and while this is good, it's worth noting that ChatGPT can only respond to what it is shown. Unless it sees all the code, its suggested fixes may introduce errors because it lacks the broader context. Is the answer to upload all the code to ChatGPT? Probably not, as anything uploaded to the OpenAI system may become part of its training data. This has serious implications when working with intellectual property.
But these concerns occur at the micro level; what about societal change? ChatGPT and other AI tools will speed up many processes, possibly leading to job losses. Want content for your website? No problem. Tell ChatGPT what you do and ask it to write 200 words explaining how it can benefit your customers. Copywriters are already seeing a drop in their work. Want an image for your site? No problem. Ask other AI tools for an idea, and they will create something unique and free to use.
Of course, Cybercrime is always close by, and organised crime has identified AI as the next big thing. They have created multiple tools claiming to be AI tools and browser extensions, which are, in reality, spyware or malware.
But of all the risks we face, the most significant ones that AI tools like ChatGPT pose are inherent biases, ethical use, and our trust in and reliance on technology. As an AI-based model, ChatGPT can exhibit biases that reflect those of its creators and trainers – it is learning from the internet. This can lead to discriminatory outcomes, especially in areas such as hiring, where ChatGPT may exclude specific candidates based on race, gender, or other characteristics.
From an ethical standpoint, there is a risk of it being used for malicious purposes, such as creating deepfakes for various reasons, from cyberbullying to propaganda.
It is, therefore, essential to have robust ethical frameworks and regulations to ensure that AI is used for the betterment of society.
Conclusion – trust is everything
Remember, to train ChatGPT, OpenAI uses a technique called unsupervised learning. If you placed a child in a library and allowed them to have unsupervised learning, how reliable would that education be? How much would you trust them? ChatGPT has demonstrated that it also gets things wrong. If you ask it to tell you who William Shakespeare is, it is likely to be incredibly accurate, as there is a lot of scholarly data about him. But asking who Kim Kardashian is, or Joe Biden, or Boris Johnson will result in a mix of internet myths and propaganda. ChatGPT is learning from the internet.
ChatGPT is a game-changer that has the potential to transform the way we communicate, learn, and interact with technology. However, its advanced capabilities also bring threats and challenges that we must address. If the purpose of ChatGPT is the betterment of society, we must develop robust ethical frameworks and regulations that prioritise privacy, security, and fairness. We must also invest in education and training to ensure that individuals have the skills to understand when and how to use this amazing technology.
About the Author:
Gary Hibberd is 'The Professor of Communicating Cyber' at Cyberfort, and is a Cybersecurity and Data Protection specialist with 35 years in IT. He is a published author, regular blogger and international speaker on everything from the Dark Web through to Cybercrime and Cyber Psychology. You can follow Gary on Twitter.
Editor’s Note: The opinions expressed in this guest author article are solely those of the contributor, and do not necessarily reflect those of Tripwire.