
The Expert View: AI and LLMs from a Cyber Security Perspective

Sponsored by WithSecure

The rise of AI cyber security threats and how to defend against them was the topic of discussion for executives at a recent TEISS Breakfast Briefing.


Artificial intelligence (AI) has huge potential but also brings new cyber security risks, from worries about data governance to the possibility of criminals using it to create new attacks. Introducing a TEISS Breakfast Briefing at the Langham Hotel in London, Donato Capitella, Principal Security Consultant at WithSecure, said he was particularly keen to know how concerned businesses were about AI and cyber risk.

 

The attendees, who were all senior executives from a range of sectors, said they were already seeing evidence of AI-driven cyber-attacks and that stopping them was a “cat-and-mouse” game.

 

AI-driven attacks

 

One attendee said his company had been targeted three times by deepfake video calls, purporting to be from the CEO and attempting to convince staff to make money transfers. Fortunately, the staff were suspicious, and all three attacks failed.

 

Similar attacks were reported by an attendee from the insurance industry, who said his company had received fraudulent claims using deepfakes. Others said they had seen the quality of phishing emails improve, which they suspected was because attackers were using AI to write them.

 

They also said attackers are exploiting zero-day vulnerabilities more quickly, and they felt AI was being used to increase the efficiency of cyber criminals. In many cases, AI has lowered the barrier to entry, making it easier for criminals to craft convincing messages and even to use generative AI to write the code for malicious software.

 

Defending against AI-driven attacks

 

Tackling this developing threat is difficult and, attendees agreed, requires action on multiple fronts. Technology firms are already rolling out tools to detect the use of AI in video, for example, by identifying cues such as unnatural muscle movements.

 

However, this is an endless race. AI video quality is constantly improving, so security tools will need constant updates to stay ahead of malicious actors. These tools must also be accurate: attendees reported that tools designed to spot AI-written text already produce lots of false positives.

 

Training staff will be more difficult, too. One participant said companies could make their own deepfake videos to demonstrate to staff just how realistic they can be. Likewise, attendees suggested training staff to spot phishing emails by showing them examples that had succeeded. Spotting these threats is so hard that the best response may simply be to encourage staff to be more suspicious.

 

AI vulnerabilities

 

These aren’t the only risks AI presents, though. As companies start using AI themselves, they increase the risk of a data breach or of opening a new vulnerability. In broad terms, AI tools require two things: the underlying model that performs some kind of analysis to deliver a result; and the data the model is analysing.

 

As one attendee said, it can be hard to verify that a model is safe. Many open-source models have been compromised by attackers to introduce a backdoor. Companies must be vigilant to ensure they do not unwittingly adopt them. An attendee from a bank said his company has 18 separate controls against which any new AI tool must be measured before it can be deployed.

 

When it comes to data, the controls around the AI tool must ensure that the model can access only the data needed for its task. It is particularly important that data interfaces exposed to autonomous AI agents have appropriate, deterministic access controls that sit outside the AI's control, as these agents can easily be subverted by attackers. There is also a data governance issue here: companies must be sure they are complying with their own rules, and any applicable regulatory controls, when applying AI to customer data.
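To make that principle concrete, the sketch below shows one way a deterministic permission check might sit between an AI agent and a data store, in ordinary code the model cannot influence. It is only an illustration: the roles, tables and helper function are hypothetical, not any particular product's API.

```python
# Minimal sketch: a deterministic access-control layer between an AI agent
# and the data it can query. The allow-list is enforced in ordinary code,
# so a subverted or prompt-injected agent cannot broaden its own access.
# All names (roles, tables, columns) are hypothetical.

ALLOWED_QUERIES = {
    # role -> tables and columns the agent may read on behalf of that role
    "support_agent": {"tickets": {"id", "status", "summary"}},
    "claims_agent": {"claims": {"id", "status", "amount"}},
}

def run_agent_query(role: str, table: str, columns: list[str], fetch):
    """Execute a read requested by the AI agent only if policy allows it.

    `fetch` is whatever function actually talks to the data store; the point
    is that this check runs first, regardless of what the model asked for.
    """
    permitted = ALLOWED_QUERIES.get(role, {}).get(table)
    if permitted is None:
        raise PermissionError(f"role {role!r} may not read table {table!r}")
    blocked = [c for c in columns if c not in permitted]
    if blocked:
        raise PermissionError(f"columns not permitted for {role!r}: {blocked}")
    return fetch(table, columns)

# Example: even if the agent is tricked into requesting customer bank details,
# the request is refused before it ever reaches the database.
if __name__ == "__main__":
    fake_fetch = lambda table, cols: [{"id": 1, "status": "open", "summary": "demo"}]
    print(run_agent_query("support_agent", "tickets", ["id", "status"], fake_fetch))
    try:
        run_agent_query("support_agent", "customers", ["bank_account"], fake_fetch)
    except PermissionError as exc:
        print("Refused:", exc)
```

The design point is that the allow-list is evaluated before any query runs, so the scope of access is decided by the organisation's code, not by whatever the model happens to generate.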

 

Supply chain risks

 

Even if the company is very careful in how it adopts AI, suppliers can introduce vulnerabilities of their own, attendees said. A few at the briefing said they had suppliers who had added AI to tools and services without notice. That is a problem: the supplier has already been onboarded and security cleared, but may no longer be secure once its products change.

 

Once that is dealt with, a supplier's AI use raises further questions. Are they allowed to train their model using your data? And if so, will that model be accessible to other customers? Attendees also pointed out that it is not always clear what happens to that data, and to the trained AI, when you leave the supplier.

 

One attendee recommended the NIST AI Risk Management Framework, which organisations can use to develop their own playbooks. AI also has an obvious role to play in cyber defence, with most attendees already using it in some form, whether in endpoint or extended detection and response (EDR/XDR) tools to build a picture of normal behaviour, or as a way of vetting alerts.
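To illustrate what "building a picture of normal behaviour" can mean in practice, the sketch below learns a simple statistical baseline and flags hours that deviate sharply from it. It is deliberately minimal and the figures are invented; real EDR and XDR products use far richer behavioural models than this.

```python
# Minimal sketch of baselining "normal behaviour": learn the typical hourly
# rate of an activity metric (here, failed logins) and flag hours that
# deviate sharply. The data and threshold are invented for illustration.
from statistics import mean, stdev

# Hourly counts of failed logins observed during a "normal" period (hypothetical).
baseline = [3, 2, 4, 3, 5, 2, 3, 4, 3, 2, 4, 3, 5, 3, 2, 4, 3, 3, 2, 4, 3, 5, 2, 3]

mu, sigma = mean(baseline), stdev(baseline)

def is_anomalous(count: int, z_threshold: float = 3.0) -> bool:
    """Flag an hour whose count sits more than z_threshold standard
    deviations above the learned baseline."""
    return (count - mu) / sigma > z_threshold

for hour, count in [("09:00", 4), ("10:00", 47)]:
    status = "ANOMALY" if is_anomalous(count) else "normal"
    print(f"{hour}: {count} failed logins -> {status}")
```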

 

Overall, though, the security picture around AI is still very immature. It is a fast-moving area in which threats are developing rapidly. The main consolation is that the threats themselves are not new, so cyber security experts understand what needs to be defended. The pressure is on to get those defences right.


To find out more, please visit: www.withsecure.com

