Is Generative AI Having Its Oppenheimer Moment?


As one of the most talked-about films of the past year, Oppenheimer – the story of the creation of the atomic bomb – is an object lesson in how any groundbreaking technology can be deployed for a variety of purposes. Nuclear reactions, for instance, can be harnessed for something as productive as generating electricity or as devastating as a weapon of mass destruction.

Generative AI – which burst into the mainstream a little over a year ago – seems to be having an Oppenheimer moment of its own.

On the one hand, generative AI offers bad actors new ways to carry out their nefarious activities, from generating malicious code with ease to launching phishing attacks at a scale they could previously only dream of. At the same time, it puts powerful new capabilities into the hands of the good guys, particularly the ability to analyze and serve up valuable knowledge when responding to security threats.

The technology is out there, so how can we ensure that its capacity for good is leveraged to the fullest extent while its capacity to cause damage is minimized?

The right hands

Making generative AI a force for good starts with making it easily accessible to the good guys. The most effective way to do this is for vendors to incorporate AI securely and ethically into the platforms and products that their customers already use on a daily basis.

There is a long, rich history of just this sort of thing taking place with other forms of AI.

Document management systems, for example, gradually incorporated a layer of behavioral analytics to detect anomalous usage patterns that might indicate the system has been breached. AI gave threat monitoring a “brain”: by examining previous usage patterns, it can determine whether a threat is actually present or the activity is simply legitimate user behavior – helping to reduce disruptive “false alarms”.
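To make the idea concrete, here is a minimal sketch of that kind of behavioral-analytics layer, using scikit-learn’s IsolationForest on per-user activity features. The feature names and numbers are purely illustrative assumptions, not drawn from any particular product:

```python
# Minimal sketch of behavioral anomaly detection over usage logs.
# Feature names and values are illustrative assumptions, not taken
# from any real document management system.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [docs_opened_per_hour, bulk_downloads, off_hours_logins]
normal_activity = np.array([
    [12, 0, 0],
    [9,  1, 0],
    [15, 0, 1],
    [11, 0, 0],
    [14, 1, 0],
    [10, 0, 1],
])

model = IsolationForest(contamination=0.05, random_state=42)
model.fit(normal_activity)  # learn what "typical" usage looks like

# A sudden mass download in the middle of the night should stand out
suspicious = np.array([[240, 35, 1]])
if model.predict(suspicious)[0] == -1:
    print("Anomalous usage pattern - flag for review")
else:
    print("Consistent with legitimate user behavior")
```

The appeal of this approach is that nobody has to write a rule for every possible attack: anything sufficiently far from learned usage patterns simply gets flagged for review.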

AI also made its way into the security stack by beefing up virus and malware recognition tools, replacing signature-based identification – which can only match threats that have already been catalogued – with models that “learn” what malicious code looks like and can flag variants they have never seen before.
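The difference from signature matching can be shown in a few lines. The byte-histogram features, sample data, and labels below are fabricated for illustration – a toy stand-in for the large labeled corpora real products train on:

```python
# Toy illustration of learned malware detection vs. signature matching.
# Samples and labels are fabricated for demonstration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def byte_histogram(data: bytes) -> np.ndarray:
    """Normalized frequency of each byte value - a classic static feature."""
    counts = np.bincount(np.frombuffer(data, dtype=np.uint8), minlength=256)
    return counts / max(len(data), 1)

# Pretend corpus: benign vs. malicious file contents (toy stand-ins)
samples = [b"hello world" * 50, b"normal config data" * 40,
           b"\x90\x90\x90\xcc" * 100, b"\xeb\xfe\x90\xcc" * 120]
labels = [0, 0, 1, 1]  # 0 = benign, 1 = malicious

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit([byte_histogram(s) for s in samples], labels)

# Unlike a fixed signature, the model can score a file it has never seen
unseen = b"\x90\xcc\x90\x90" * 90
print("malicious" if clf.predict([byte_histogram(unseen)])[0] else "benign")
```

Because the classifier generalizes from features rather than exact byte sequences, a slightly mutated sample can no longer slip past the way it would evade a fixed signature.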

Vendors can follow a similar path when folding generative AI into their offerings – helping the good guys to implement a more efficient and effective defense.

A powerful resource for the defenders

The chatbot-style interface of generative AI can serve as a trusted assistant, providing answers, guidance, and best practices to IT professionals on how to deal with any rapidly unfolding security situation they encounter.

The answers that generative AI provides, however, are only as good as the knowledge used to train the underlying large language model (LLM). The old adage “garbage in, garbage out” comes to mind here. It’s crucial, then, that the model draws on approved and vetted content so that it provides relevant, timely, and accurate answers – a process known as grounding.
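In practice, grounding typically means retrieving passages from a vetted knowledge base and handing them to the model alongside the question, so that answers stay anchored to approved content. Here is a minimal sketch; the knowledge-base entries are invented, and llm_complete() is a hypothetical placeholder rather than any real vendor API:

```python
# Minimal retrieval-grounded prompting sketch. The knowledge base,
# query, and llm_complete() are placeholders for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

vetted_kb = [
    "Isolate affected hosts from the network before forensic imaging.",
    "Rotate credentials immediately after confirming a phishing compromise.",
    "Preserve audit logs; do not power off machines under investigation.",
]

vectorizer = TfidfVectorizer()
kb_vectors = vectorizer.fit_transform(vetted_kb)

def grounded_prompt(question: str, top_k: int = 2) -> str:
    """Retrieve the most relevant vetted passages and prepend them."""
    scores = cosine_similarity(vectorizer.transform([question]), kb_vectors)[0]
    context = "\n".join(vetted_kb[i] for i in scores.argsort()[::-1][:top_k])
    return (f"Answer using ONLY the approved context below.\n"
            f"Context:\n{context}\n\nQuestion: {question}")

prompt = grounded_prompt("We suspect a phishing compromise - first steps?")
# response = llm_complete(prompt)  # hypothetical LLM call
print(prompt)
```

Because the model is steered toward the approved passages rather than whatever it absorbed during pre-training, its answers inherit the quality of the vetted knowledge base.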

At the same time, customers need to pay special attention to any potential risk around sensitive content fed to the LLM to train it, including any ethical or regulatory requirements for that data. If the data used to train the model leaks to the outside world – a real possibility when, for instance, using a free third-party generative AI tool whose fine print grants the provider license to peek at your training data – that’s a huge potential liability. Using generative AI applications and services that have been folded into platforms from trusted vendors is a way to mitigate this risk and create a “closed loop” that prevents leaks.
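Where content must cross a trust boundary at all, one common safeguard is scrubbing obvious sensitive identifiers first. The patterns below are a deliberately simplified illustration, not a complete data-loss-prevention filter:

```python
# Simplified pre-send redaction sketch - patterns are illustrative and
# far from exhaustive; real deployments use dedicated DLP tooling.
import re

REDACTIONS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),
]

def scrub(text: str) -> str:
    """Replace sensitive tokens before text is sent to an external LLM."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

print(scrub("Contact jane.doe@example.com, SSN 123-45-6789."))
# -> "Contact [EMAIL], SSN [SSN]."
```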

The end result, when done properly, is a new resource for security professionals – a wellspring of valuable knowledge and collective intelligence that generative AI can serve up to them on demand, augmenting and enhancing their ability to protect and defend the organization.

As with nuclear technology, the genie is out of the bottle when it comes to generative AI: anyone can get their hands on it and put it to use for their own ends. By making this technology available through the platforms that customers already utilize, the good guys can take full advantage of it – helping to keep the more destructive applications of this new force at bay.

About the Author

Manuel Sanchez is Information Security and Compliance Specialist at iManage.

