
The Expert View: Driving Cyber Security with AI

Sponsored by BT & Palo Alto

As artificial intelligence (AI) reshapes the cyber security landscape, organisations face a dual challenge: harnessing AI’s defensive capabilities while guarding against increasingly sophisticated AI-powered attacks, according to experts at a recent Business Reporter briefing.


"This isn’t something we can tackle as individuals. We have to do it together," said Tristan Morgan, Managing Director & President of BT Security, opening a cyber-security dinner briefing at the House of Lords.

His co-host, Alistair Wildman, VP EMEA North at Palo Alto Networks, told the audience of senior IT experts: "AI is a massive enabler for defenders but also a massive enabler for bad actors. We see 13.5 billion attacks over the networks Palo Alto monitors, and AI is increasing the threat."

 

Navigating AI Defence Strategy


Cautious first steps mark many organisations’ approaches to AI-powered cyber defence. "We’re very risk averse," admitted one attendee, whose organisation cannot allow its data to leave the UK. He said they are exploring an AWS-hosted large language model (LLM) that will be trained on company policies to support their Security Operations Centre (SOC).

 

Vendor upgrades have become the default path to AI adoption for many. Some security teams already use AI to analyse security logs, dramatically cutting analysts’ workload. But simply stockpiling AI-enabled tools misses the mark because these tools aren’t working in concert and might even conflict.

A more sophisticated approach positions AI at platform level, overseeing all processes with the power to adjust security policies in real-time.

 

However, one attendee pointed out that AI tools understand everything except business context, which changes daily. Human analysts can be briefed on these changes, such as the launch of a new application, at the start of each shift. "In time, the AI will ’attend’ daily stand-ups so it can be briefed on evolving context," predicted one participant. "Until then, we’ll need a human in the loop."

 

Accountability questions sparked debate. What if AI misses a security breach? Or shuts down a plant on a false alarm? One attendee said: "When a person does it, you have accountability. How does that work for AI?" The answer varies wildly, many agreed, depending on each organisation’s risk tolerance and its ability to forge clear chains of responsibility for AI-driven decisions.

 

Enterprise AI Adoption


The AI challenge extends far beyond security. Take Microsoft’s Copilot – opinions were divided. "We’re not seeing good use cases. It’s expensive just to get a summary of your emails," one delegate argued. But another painted a different picture: "In our organisation, 35% of staff are using Copilot. It’s helping speed up their work." For those who use Microsoft already, adding Copilot is straightforward and easy to police.

 

Tomorrow’s enterprise will run multiple specialised AIs. These tools might eventually merge, but for now, organisations must meticulously track each AI’s role and potential risks.

Meanwhile, the race to develop secure, customised LLMs trained on proprietary data is already on. Yet current demands on time and resources put this beyond most organisations’ reach – though the technology grows more accessible by the day.

 

The AI Threat Landscape


While organisations deliberate, cyber criminals forge ahead. They’re deploying AI to launch cleverer attacks. "They are professional and using it like we are – to help their staff so less expertise is required or so they can accelerate their knowledge," said one participant.

 

Traditional threats now wear sophisticated disguises. Deepfake voices. AI-generated video. AI job applicants have even appeared in recruitment systems – a puzzling development that had attendees wondering about the attackers’ motives. With threats becoming this sophisticated, employee education isn’t just important – it’s critical.

 

Yet strip away the AI enhancement, and these attacks follow familiar patterns. "None of these attacks does anything new, so we can still deal with them with traditional controls," one attendee pointed out. Take invoice payments: proper verification processes will catch fakes, regardless of how convincing the AI-generated request might be. But this raised an uncomfortable question: "Does that mean we’re making our processes less efficient to defeat attackers?"

The answer might lie in culture, not controls. "It’s powerful to tell your staff that they have the right to question anything," suggested one delegate. Empowering employees to flag suspicious behaviour – even if it occasionally means halting legitimate transactions – could prove decisive in this new landscape. This approach puts security before speed when doubt creeps in.

 

Drawing the discussion to a close, Mr Wildman reflected on how policy and people had dominated the conversation more than technology and process. Mr Morgan highlighted that LLMs and workforce productivity had clearly emerged as the AI challenges top of mind for most participants.

 

The future of cyber security won’t hinge solely on AI tools. Success depends on how organisations adapt their people and policies to this new reality. The challenge is to create an agile, security-conscious culture, supported by the smart addition of AI tools.


To learn more, please visit: www.bt.com and www.paloaltonetworks.com



© 2024, Lyonsdown Limited. teiss® is a registered trademark of Lyonsdown Ltd. VAT registration number: 830519543