
Thoughts on AI

Christiaan Ottow

Christiaan Ottow MSc. is director of cyber security and CTO at Northwave, with a background as an ethical hacker. He has a long-standing career as CTO at several cyber security companies, and has seen both the defensive and offensive sides of cyber security.

Gradual revolution 

This year, AI has become an even bigger topic than it already was, with the appearance of a new generation of Large Language Models (LLMs). We’ve seen the emergence of so-called “Generative AI”: models that can generate things such as text, images and audio, based on the data they’ve been trained on.

These new models feel like magic – whether it’s ChatGPT for text, Midjourney or DALL·E for images, or similar projects for speech. From an end-user perspective, they appear to be pure magic: we haven’t yet seen computers interact with us in a way that feels so close to human interaction. From a technology perspective, it looks fancy, but it is not magic.

When looking at how these models developed, two things have changed under the hood compared to the previous generation of machine learning models:

  • Computing has become cheap enough to train models on truly humongous amounts of data, including large quantities of information from the internet – sometimes summarised as “most of the internet”. This has made it feasible to stack many layers of neural networks on top of each other (hence “deep learning”), which was previously too expensive in terms of both computation and data handling (see the sketch after this list).

  • The human feel of, for instance, ChatGPT comes from thousands of hours of humans calibrating the model by giving feedback on its output (a process called Reinforcement Learning from Human Feedback, RLHF). It’s important to realise that the human touch doesn’t come from the “intelligence” of the model, but from thick layers of training built up over thousands of hours of human interaction: under enough pressure, this layer breaks and the model’s personality-feel evaporates (as some famous ChatGPT examples show).
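To make the “many layers” idea from the first point concrete, here is a minimal, illustrative sketch in Python using PyTorch. The layer counts and sizes are arbitrary choices made purely for illustration; real LLMs use far larger, transformer-based stacks, but the principle of stacking layers is the same.

```python
import torch
import torch.nn as nn

# Illustrative only: "deep" learning just means stacking several layers
# of neurons on top of each other. The sizes below are arbitrary; real
# LLMs use far larger, transformer-based stacks.
model = nn.Sequential(
    nn.Linear(128, 64), nn.ReLU(),  # layer 1
    nn.Linear(64, 64), nn.ReLU(),   # layer 2
    nn.Linear(64, 32), nn.ReLU(),   # layer 3
    nn.Linear(32, 8),               # output layer
)

x = torch.randn(1, 128)  # one random input vector
print(model(x).shape)    # torch.Size([1, 8])
```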

It is important to realise that this is a step in a long-term, gradual development. From the perspective of machine learning models and our understanding of AI, LLMs are not new, nor are they a revolution. In fact, there is a long road of development ahead of us to turn LLMs into products that make good on the promises many people think AI owes us. If we want to reap true benefits for our work and our societies, we should nurture this development and give it space to grow into truly transformative products.

How does ChatGPT work? 

When looking at how an LLM like ChatGPT works under the hood, it becomes clear why experts in the field stay away from the term “Artificial Intelligence”. Machine learning models like these take text as input and then, based on the training they received, predict the tokens that are likely to follow.

Tokens, in this case, are parts of words. There is no “intelligence” or “consciousness”: the algorithm has no awareness, feelings or opinions. It simply predicts the next token, time after time, and feels quite human in the way it does this thanks to the human feedback it received. Think of it as autocomplete on steroids, with some layers of human-trained processing on top for things like task awareness and dialogue awareness. In terms of terminology, “machine learning”, “deep learning” or “LLM” are better terms than “AI” when referring to tools like ChatGPT.
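To see this next-token loop in action, here is a small sketch using the Hugging Face transformers library and GPT-2, a small, openly available LLM. This is not ChatGPT itself – ChatGPT is vastly larger and has the RLHF layers described above – but the core mechanic of “predict the next token, append it, repeat” is the same. The prompt is, of course, just an illustration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "Cyber security is"
ids = tokenizer(text, return_tensors="pt").input_ids
print(tokenizer.convert_ids_to_tokens(ids[0].tolist()))  # words split into tokens

# The core loop: predict the most likely next token and append it.
with torch.no_grad():
    for _ in range(10):
        logits = model(ids).logits        # a score for every possible next token
        next_id = logits[0, -1].argmax()  # greedily pick the most likely one
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))
```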

Regardless of how big or small the underlying technological advances are, the possibilities opened up by this new wave of generative AI are impressive. In its simplest form, ChatGPT, for instance, is a highly effective search assistant, a great tool for generating explanations and texts for your clients, a help in puzzling together how things work from the snippets you have, and much more.

By building some plugins we might already be able to leverage much more of its power and value. When it becomes integrated into more products and services, it will truly start changing the way we work: automated minute-taking and tracking of action points for meetings, suggestions for large portions of text you need to write, prioritisation in your mailbox and suggestions for replies, and so on. And these examples only concern general productivity, not even domain-specific work.
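As a small taste of what such an integration could look like, here is a hypothetical sketch using the openai Python package (its v1 interface) to turn meeting notes into action points. The model name and prompt are illustrative choices, not a recommendation – and, keeping the caveats below in mind, don’t feed it real client information.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative, non-sensitive notes; never paste real client data.
notes = """Discussed the Q3 roadmap. Anna will draft the migration plan.
Security review of the new portal postponed to the next sprint."""

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "Extract the action points from these meeting notes as a bullet list."},
        {"role": "user", "content": notes},
    ],
)
print(response.choices[0].message.content)
```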

Safe usage 

And there lies the rub. An LLM needs to be trained on a large dataset and then trained some more with human feedback – assuming you already have the model and a way to acquire the data. If you want a new and cool AI model to do domain-specific work for you, for example automated SOC alarm analysis, you need that data and the ability to build and train a model. We are currently not in a position to do this: it requires much more money, data, time and expertise than we have available. With our size and scope at Northwave, for example, we can do two things:

  • Experiment and investigate: use the available tools (like ChatGPT) to find optimisations for our work, experiment with available open-source models, and create integrations of existing services into our workflow

  • Keep a close eye on products that become available and integrate them as soon as we can (for instance, Microsoft Security Copilot) 

So that’s what you can do: experiment and keep a close eye on new products you could use. I’m sure we will all be surprised in the coming year by the possibilities.

And when experimenting, please keep the following points firmly in mind: 

Privacy. Don’t put personally identifiable information into the systems of organisations unless you have a legal basis for doing so. You probably don’t have such an agreement with OpenAI for ChatGPT, for instance, and I don’t think one is within easy reach.

Security. Don’t share information that, when pieced together with other information you share, could tell an observer something about your clients or their security, or about your company and its confidential information. This includes company names, IP addresses, hostnames, URLs, usernames, email addresses, technologies used, and so on (a minimal scrubbing sketch follows after these caveats).

Correctness. ChatGPT has no notion of what is correct or not; it just performs next-token prediction on text. This means it sometimes generates complete rubbish, which is easy to spot, but also errors that are much harder to catch. It is also unable to give source references for its answers, so the burden of verifying its output before using it falls on you. Don’t trust its output blindly – verify it, then use it.
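To make the privacy and security points practical, here is a minimal, hypothetical sketch of stripping obvious identifiers from text before it leaves your organisation. The patterns and the scrub helper are illustrative assumptions and far from complete – real PII and asset detection is much harder – but they show the idea.

```python
import re

# Hypothetical helper: crude patterns for a few common identifiers.
# Real-world redaction needs far more robust detection (and, for PII,
# a legal basis in the first place).
PATTERNS = {
    "email":    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ipv4":     re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "hostname": re.compile(r"\b[\w-]+\.(?:internal|local|corp)\b"),
}

def scrub(text: str) -> str:
    """Replace obvious identifiers with placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

print(scrub("Alert from 10.0.4.17 (db01.corp), reported by jan@client.example"))
# -> Alert from <ipv4> (<hostname>), reported by <email>
```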

With those caveats in mind: head to https://chat.openai.com to create an account and start getting creative about how you can improve the way you work! 

p.s. for those interested in a bit more information about how LLMs work, and a sniff of the discussion around consciousness and intelligence: https://podcasts.apple.com/nl/podcast/sean-carrolls-mindscape-science-society-philosophy/id1406534739?l=en&i=1000604984879 

p.p.s. interesting thoughts on why the AI revolution is already disruptive, but not for the reasons you think (Dutch): https://berthub.eu/articles/posts/ai-sowieso-ontwrichtend/ and if you’d like to get more hands-on and learn how deep learning works, https://berthub.eu/articles/posts/hello-deep-learning-intro/  

At Northwave we often receive questions from our clients about the use of AI, and of ChatGPT in particular. This prompted us to draw up a guideline for organisations using AI.

We are delighted that we will soon be able to share with you the whitepaper “How to securely use Generative AI in your organisation”, written by our Security Consultant Rob Berends.