
The Expert View: GenAI Security - How to Deploy the Power of Generative AI Safely and Securely

Sponsored by WithSecure

As organisations rush to embrace generative AI, many are adding large language models to existing applications without fully considering security implications, creating new risks that demand careful management.


The hype around generative AI (genAI) is driving rapid adoption, but at what cost? Opening a Business Reporter breakfast briefing at The Goring Hotel in London, Donato Capitella, Principal Security Consultant at WithSecure Consulting, posed a crucial question: "How do you deploy it without generating risk?"

He asked the attendees, all senior cyber security executives from a range of sectors, how they were approaching the challenge of delivering genAI applications within their organisations, particularly since many employees are demanding access to this new technology as soon as possible.

The Push for Adoption


GenAI marks an unusual shift in enterprise technology adoption. Rather than IT departments driving change, ordinary users across organisations are experimenting with these tools - often in their personal lives - and then pushing for them in the workplace. They are keen to realise improvements in two areas: efficiency and insights.

"People want their boring jobs to go away," said one attendee, describing how employees seek to automate routine tasks like report-writing and slideshow creation to focus on work that requires their genuine expertise. Others look to genAI for deeper insights, such as automating access management by identifying when staff leave or change roles.

Client expectations add another layer of complexity. "Some clients are insisting we use genAI on their account because they expect it to reduce costs," one delegate said. "Others insist that we shouldn’t use it at all." This split in customer demand creates a challenging balancing act for service providers.

Understanding the Risks


Data security and governance emerged as primary concerns. Some participants said their organisations have struggled to establish guardrails for using sensitive data in genAI systems, raising concerns about exposure through breaches or inadvertent disclosure in LLM responses.

Traditional data processes follow clear paths from point A to B to C. GenAI, however, can leap from A to Z without any clarity about the steps between. "That raises the problem of explainability," one attendee pointed out. "For example, if someone wants to know why an AI-based system turned them down for a loan." The risk of undetected mistakes, such as hallucinations, adds another layer of concern.

Data quality poses its own challenges. "We can’t make use of genAI without good quality data," said one participant. Rather than tackle enterprise-wide data cleaning, some organisations opt to improve data quality for specific use cases as they arise.

The proliferation of AI features adds complexity to vendor relationships. "Every company is now an AI company," said one delegate, "which adds to the due diligence we must do on their products." Vendors display varying levels of AI maturity, requiring careful assessment of their data processing practices and customer data segregation.

Securing the Future


Without established benchmarks, securing genAI requires a ground-up approach. "We have to start with use cases," Capitella advised. "What is the AI being used for? How would an attacker target it? How do we secure it against those attempts?"

Many organisations are taking a measured approach to implementation. "Anyone who wants to use it must have a clear use case," explained one attendee. "Then a governance committee will evaluate their plan, start to put a framework around it, then start to pilot it." Careful rollouts allow for problem detection and implementation of additional safeguards when needed.

Education plays a crucial role. "People often don’t know the limitations and risks," said one participant whose organisation runs an AI Academy for in-house training. While core security principles still apply, new challenges emerge around data usage for model training.

Test Applications, Not Models


LLMs present unique security challenges. "They’re susceptible to social engineering attacks, just like humans," Capitella explained. His advice? "Don’t try to pen-test the LLM. You already know it has vulnerabilities. Instead, test the application you’re building around it, and the relevant controls." He emphasised the importance of validating outputs and implementing controls to detect and respond to suspicious prompts.
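
To illustrate the kind of application-level controls Capitella describes, the sketch below wraps an LLM call with a simple input screen and an output validator. It is a minimal illustration only, not WithSecure’s methodology: the call_llm function is a hypothetical placeholder for whatever model client an application uses, and the deny-list patterns are examples rather than a complete defence against prompt injection.

```python
import re

# Hypothetical placeholder for the application's model client;
# replace with a real call to your LLM provider.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this up to your model provider")

# Example patterns often seen in prompt-injection attempts.
# A real deployment would use broader detection, logging and alerting.
SUSPICIOUS_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.IGNORECASE),
    re.compile(r"reveal (your )?system prompt", re.IGNORECASE),
    re.compile(r"you are now", re.IGNORECASE),
]

def looks_suspicious(user_input: str) -> bool:
    """Flag inputs that match known injection phrasings."""
    return any(p.search(user_input) for p in SUSPICIOUS_PATTERNS)

def validate_output(text: str, max_len: int = 4000) -> str:
    """Enforce application-level expectations on the model's response."""
    if len(text) > max_len:
        raise ValueError("response longer than the application expects")
    if re.search(r"<script\b", text, re.IGNORECASE):
        raise ValueError("response contains markup the app should never emit")
    return text

def answer(user_input: str) -> str:
    """Screen the prompt, call the model, then validate what comes back."""
    if looks_suspicious(user_input):
        # Refuse and, in a real system, log the attempt for review.
        return "Sorry, this request can't be processed."
    return validate_output(call_llm(user_input))
```

The point of the pattern is that the controls sit around the model: the LLM itself is assumed to be fallible, and the application decides what it will forward to the model and what it will accept back.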

Regulation, meanwhile, looms on the horizon. New rules will provide frameworks for genAI deployment, and organisations must prepare to understand and implement their requirements effectively.

Closing the briefing, Capitella said: "In the last seven months I’ve tested more than 25 different genAI models that clients have built. Only 10 are in production because the rest need tweaks." Thorough testing of genAI applications isn’t just important – it’s vital for safe deployment.

As organisations navigate generative AI’s challenges, success will depend on balancing rapid adoption with robust security measures. The key lies not in avoiding risk entirely, but in understanding and managing it effectively through careful planning, testing, and governance.

To learn more, please visit: www.withsecure.com

Kaltrina Jashari
