Racing to Deploy GenAI? Security Starts With Good Governance

GenAI adoption is moving fast. Here’s what enterprises need to know to keep their deployments secure and on track.

While debates about artificial intelligence (AI) focus on some of the scariest scenarios – a superintelligence that swiftly subjugates its human creators to do its bidding, or a rogue nation-state that uses AI to start conquering the world – the likeliest risks of AI are more mundane. And much more manageable.

That should come as good news to the countless enterprise owners eager to incorporate forms of generative artificial intelligence (GenAI), that is, any systems that can generate new text, images, audio, or code in response to prompts. The risks associated with GenAI are already familiar to business technology executives and chief information security officers (CISOs). They relate to data governance, data security, trustworthy and ethical decision-making, and the security of AI applications themselves.

The difference, and it’s a big one, between these concerns and the more standard cybersecurity risks enterprises have faced over the years is that GenAI technology within the enterprise is new and the security outcomes are unpredictable. While the scale of this unpredictability has many scientists worried, it’s not insurmountable. There are steps enterprises can take to manage the overall risks of GenAI effectively.

AI’s great promise comes with added risk: Ransomware is expected to rise with AI. Get the ultimate guide to ransomware defense and reduce the likelihood of suffering an attack.

“Step one is having a policy, at a minimum, that provides guidance about how to use large language models,” says Walter Haydock, founder and CEO at AI cybersecurity firm StackAware. Of all the new forms of GenAI now available, large language models (LLMs), which generate text or code on demand, are drawing keen interest from companies of all sizes.

Step one is having a policy, at a minimum, that provides guidance about how to use large language models.

Walter Haydock, founder and CEO, StackAware

Enterprises spent an estimated $19.4 billion on GenAI solutions in 2023 – and may invest as much as $151 billion by 2027 – according to figures from research firm International Data Corp. LLMs, in particular, are already in use as chatbots at leading firms in a variety of industries, including travel (Expedia), insurance (Allstate), financial services (Mastercard), and manufacturing (Schneider Electric).

What’s at stake? Enterprises without good internal governance policies will find they can’t trust the output from their home-built LLMs. They’ll be more vulnerable to ethical and copyright violations, or data leaks. And they may lose control of the model itself if they suffer a cyberattack.

Setting effective GenAI governance is faster than you think

John Pescatore, director of emerging security trends at SANS Institute, the world’s leading cybersecurity research and training organization, agrees with Haydock and advises organizations to start with a comprehensive governance model. That includes securing the actual model itself and the data that feeds the model, and monitoring the model’s behavior.

It’s not about building the model and forgetting about it – it’s ongoing management that gets the job done.

John Pescatore, director of emerging security trends, SANS Institute

The process needn’t take months. It can be done in a matter of weeks, with the understanding that you’re not determining a final set of rules and guidelines to oversee all GenAI systems for all time. Your strategy will (and should) be adjusted over time as the technology and your enterprise needs change. “It’s not about building the model and forgetting about it – it’s ongoing management that gets the job done,” Pescatore says.

To effectively “get the job done” over time, experts recommend organizations take the following steps.

1. Define the scope of your AI/LLM and data governance program

This means determining and clearly stating the specific goals of your GenAI system and outlining its limits. Strong boundaries will help prevent outputs that may breach compliance or privacy standards. To confirm roles, responsibilities, and good practices, ask and answer the following: Who is responsible for securing and managing the LLM program, data sources, and classification of data in the LLM? How will the model then collect, store, manage, and use that data?
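
One way to keep such a scope from remaining aspirational is to capture it in a machine-readable form that systems can enforce. Below is a minimal sketch in Python; the `LLMScope` structure, field names, and data classes are illustrative assumptions, not a standard.

```python
# A minimal sketch of capturing program scope as code, so it can be
# enforced rather than just documented. All names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class LLMScope:
    purpose: str                      # the system's stated goal
    owner: str                        # team accountable for securing the program
    allowed_data_classes: set[str] = field(default_factory=set)

    def permits(self, data_class: str) -> bool:
        """Check a data class against the stated scope before it reaches the model."""
        return data_class in self.allowed_data_classes

support_bot = LLMScope(
    purpose="Answer customer FAQs from public documentation",
    owner="it-security",
    allowed_data_classes={"public", "internal"},
)

assert support_bot.permits("public")
assert not support_bot.permits("pii")  # out-of-scope data is rejected up front
```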

[Read also: The 3 biggest GenAI threats (plus 1 other risk) and how to fend them off]

2. Establish strong data hygiene

Organizations must detail how to clean, enrich, and validate the data used to train the LLMs. Again, they should define who will oversee these processes and the expected protocols. Validating the training data (to make sure it’s accurate) and sanitizing it of any sensitive data, regulated data, or intellectual property (if it’s a public-facing model) are essential for preventing data exposures or privacy violations.
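
As a concrete illustration, here is a minimal redaction pass in Python. It is a sketch under stated assumptions: the patterns and placeholder labels are illustrative, and real hygiene pipelines layer in validation, deduplication, and classifier-based PII detection on top of simple pattern matching.

```python
import re

# A minimal sketch of one sanitization pass: redacting obvious sensitive
# patterns from training text before it reaches the model. The patterns
# and placeholder labels are illustrative, not exhaustive.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def sanitize(record: str) -> str:
    """Replace matches with typed placeholders so the text stays usable for training."""
    for label, pattern in PATTERNS.items():
        record = pattern.sub(f"[{label}]", record)
    return record

print(sanitize("Contact jane.doe@example.com, SSN 123-45-6789."))
# -> Contact [EMAIL], SSN [SSN].
```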

3. Demand robust data security

To put it plainly, data security is foundational, notes Ron Reiter, co-founder and CTO at cloud security startup Sentra. “If you take care to secure the sensitive data that’s being moved around, you minimize your attack surface so you are able to build a better security policy,” he says.

It’s also wise to define policies for access to the models and model development, the data that feeds the models, and how the models and the data they contain will be protected. Governance planners must determine how the organization will ensure that the data and the model’s actions comply with government regulations and company standards.
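
One way to picture those access policies: a clearance table consulted before any model or data request is served. The sketch below is illustrative; the roles and data tiers are assumptions, and a real deployment would hook into the enterprise’s identity provider.

```python
# A minimal sketch of gating model and data access on the caller's role.
# Role names and data tiers are hypothetical.
ROLE_CLEARANCE = {
    "analyst": {"public"},
    "finance": {"public", "financial"},
}

def authorize(role: str, data_tier: str) -> bool:
    """Return True only if the role is cleared for the requested data tier."""
    return data_tier in ROLE_CLEARANCE.get(role, set())

assert authorize("finance", "financial")
assert not authorize("analyst", "financial")  # denied; in practice, log and alert
```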

4. Continuously monitor

Organizations must establish systems for monitoring the behavior of the AI, the model’s data feeds, and how the model is used. A model that starts behaving maliciously or spilling sensitive data can bring the business to a halt.
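
One way to make that monitoring concrete is a thin wrapper around the inference call that inspects every output and can pull the model offline. This is a hedged sketch: `call_model` and the leak pattern are placeholders for whatever API and detection rules an enterprise actually uses.

```python
import re

# A minimal monitoring sketch: wrap every inference call, scan the output,
# and trip a kill switch on a suspected leak. `call_model` stands in for
# whatever inference API is actually in use; the pattern is a placeholder.
LEAK_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # SSN-shaped strings

class MonitoredModel:
    def __init__(self, call_model):
        self.call_model = call_model
        self.enabled = True

    def generate(self, prompt: str) -> str:
        if not self.enabled:
            raise RuntimeError("Model disabled pending incident review")
        output = self.call_model(prompt)
        if LEAK_PATTERN.search(output):
            self.enabled = False  # kill switch: stop serving, alert the team
            raise RuntimeError("Sensitive data detected in model output")
        return output
```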

5. Mitigate bias

While it’s impossible to eliminate bias, an enterprise can create and implement ways to reduce biases within LLM systems and identify them when they pop up. Biases from GenAI can affect decisions in loan approvals, credit scoring, hiring practices, and even the spread of misinformation. Mitigation efforts should include data curation (utilizing data from varied sources), bias-detection tools, transparency with data and the LLM, and frequent monitoring. Because enterprise data itself may harbor bias, these checks against biased outcomes must be ongoing.
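
One lightweight form such a check can take is a demographic parity test: compare the rate of favorable decisions across groups and flag gaps beyond a tolerance. The sketch below assumes decisions are already labeled by group; the 0.2 threshold is an assumption shown for illustration, not a standard.

```python
from collections import defaultdict

# A minimal bias-check sketch: measure the spread in approval rates across
# groups (the demographic parity gap).
def parity_gap(decisions: list[tuple[str, bool]]) -> float:
    """decisions: (group, approved) pairs taken from the model's outputs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    rates = [approved[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
if parity_gap(sample) > 0.2:  # threshold is a policy choice, not a standard
    print("Flag for review: approval rates differ across groups")
```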

6. Revise your incident response

Dust off those incident response playbooks and make sure they address GenAI. Your incident response plan must outline step-by-step guidance should your GenAI systems malfunction or fall prey to a cyberattack.

7. Maintain flexibility

Enterprises must create a GenAI governance framework flexible enough to adapt to AI technological advancements and new rules from government regulators both in the U.S. and abroad. A successful GenAI governance policy will also regularly audit the enterprise’s LLM usage for potential violations and evaluate the policy’s effectiveness.

[Read also: Ultimate guide to AI cybersecurity – benefits, risks, and rewards]

8. Keep it uniform

It’s essential that the AI governance framework governs AI usage and security uniformly throughout the organization. An effective strategy should ensure that different departments and business units aren’t each doing their own thing regarding AI governance.

Fine-tuning your GenAI governance policies

Governance plans should also address the type of LLMs an enterprise uses. (You can employ an existing LLM, customize a pretrained LLM, or build your own from scratch.) If building your own, it may seem advantageous to build one large model and virtually segregate the data fed into it for use by different departments. A potentially better option: Build smaller, physically distinct LLMs for areas such as marketing, research, and financials so that staff and other users can only access data they have a right and legitimate need to access. By limiting access to GenAI systems, you lessen the risks of more widespread data leakage. “It’s the only way I believe is possible to protect models regarding access and data compartmentalization,” says Reiter.
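
In practice, that compartmentalization can be as simple as a router that refuses to dispatch a request to a department model the user isn’t entitled to use. A minimal sketch, with hypothetical endpoints and entitlement sets:

```python
# A minimal sketch of physical compartmentalization: one model endpoint per
# department, with a router that only dispatches requests a user is entitled
# to make. The endpoint URLs and entitlement sets are hypothetical.
DEPT_ENDPOINTS = {
    "marketing": "https://llm-marketing.internal/v1",
    "research": "https://llm-research.internal/v1",
    "finance": "https://llm-finance.internal/v1",
}

def route(user_depts: set[str], dept: str) -> str:
    """Return the endpoint for a department model, or refuse outright."""
    if dept not in user_depts:
        raise PermissionError(f"User not entitled to the {dept} model")
    return DEPT_ENDPOINTS[dept]  # each model only ever sees its own department's data

print(route({"marketing"}, "marketing"))
```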

As with any comprehensive cybersecurity policy, third-party risk management is essential to a GenAI governance plan. Enterprise leaders and security teams must look beyond their own network architecture to the governance and security protocols of third-party suppliers, assessing (and continuously monitoring) how their governance and risk practices intersect with yours.

Like your overall governance planning, appraising third-party risk need not be too time-consuming. “In our experience with clients, assessing and enhancing the security environment to protect GenAI can typically be done in 60 days,” notes a recent PricewaterhouseCoopers guide to scaling GenAI for business.

[Read also: Machine learning, another subset of AI, can glean patterns and insights from vast amounts of data – here’s your primer]

Making sure security leaders and C-suite executives are on the same page will help streamline this process, especially when it comes to any prerequisites or procedures for getting third-party AIs on your enterprise’s approved list. “If for some reason the security team disagrees with using that tool or allowing that type of data into that tool, then here’s the process for getting risk acceptance, based on the business justification,” Haydock explains.
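
Reduced to its simplest form, that process is a gate: a tool is usable if it’s on the approved list, or if the specific use case has a documented risk acceptance. The sketch below is illustrative, not Haydock’s actual tooling; the tool names and use cases are hypothetical.

```python
# A minimal sketch of an approved-tool gate with a risk-acceptance path.
# Tool names, use cases, and data structures here are hypothetical.
APPROVED_TOOLS = {"vendor-llm-a"}
RISK_ACCEPTANCES = {("vendor-llm-b", "marketing-copy")}  # (tool, use case)

def may_use(tool: str, use_case: str) -> bool:
    """Allow approved tools, or specific uses covered by a documented risk acceptance."""
    return tool in APPROVED_TOOLS or (tool, use_case) in RISK_ACCEPTANCES

assert may_use("vendor-llm-a", "faq-drafting")
assert may_use("vendor-llm-b", "marketing-copy")
assert not may_use("vendor-llm-b", "customer-pii")  # route to the acceptance process
```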

The key to all of this, experts agree, is committing to and investing in security-by-design.

“The race to deploy and secure generative AI follows the typical pattern: Enterprises rush to get it out, and security falls behind,” says Pescatore. “Unfortunately, security always has to play catch-up.” And playing catch-up against adversaries is a guaranteed losing strategy.

George V. Hulme

George V. Hulme is an information security and business technology writer. He is a former senior editor at InformationWeek magazine, where he covered the IT security and homeland security beats. His work has appeared in CSO Online, Computerworld, and Network Computing.
