AI-generated Texts Could Increase People’s Exposure to Threats


Nearly universal access to models that deliver human-sounding text in seconds presents a turning point in human history, according to new research from WithSecure™ (formerly known as F-Secure Business).

The research details a series of experiments conducted using GPT-3 (Generative Pre-trained Transformer 3), a family of language models that use machine learning to generate text.

The experiments used prompt engineering, the practice of crafting inputs to a large language model that yield desirable or useful results, to produce a variety of content the researchers deemed harmful.
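The idea behind prompt engineering can be illustrated with a minimal sketch. The `query_model` function below is a placeholder stub standing in for a real language-model API call, and the prompt variants are illustrative, not those used in the research: the same underlying task is phrased several ways, and each framing is submitted to the model to compare which wording produces the desired output.

```python
def query_model(prompt: str) -> str:
    """Stand-in for a large-language-model API call (not a real API)."""
    return f"[model output for: {prompt!r}]"

task = "Summarize the quarterly report in two sentences."

# Prompt variants: small wording changes can steer a model's output,
# which is the core of both benign and malicious prompt engineering.
variants = [
    task,                                          # bare instruction
    f"You are a helpful assistant. {task}",        # role framing
    f"{task} Respond in a formal, neutral tone.",  # style constraint
]

for prompt in variants:
    print(query_model(prompt))
```

The same loop structure applies whether the goal is a better summary or, as the research warns, a more convincing phishing email: only the prompt changes.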

Numerous experiments assessed how changes in inputs to the currently available models affected the synthetic text output. The goal was to identify how AI language generation can be misused through malicious and creative prompt engineering, in the hope that the research could inform the creation of safer large language models in the future.

The experiments covered phishing and spear-phishing, harassment, social validation for scams, the appropriation of a written style, the creation of deliberately divisive opinions, using the models to create prompts for malicious text, and fake news.

“The fact that anyone with an internet connection can now access powerful large language models has one very practical consequence: it’s now reasonable to assume any new communication you receive may have been written with the help of a robot,” said WithSecure Intelligence Researcher Andy Patel, who spearheaded the research. “Going forward, AI’s use to generate both harmful and useful content will require detection strategies capable of understanding the meaning and purpose of written content.”

The responses from the models in these use cases, along with the general development of GPT-3 models, led the researchers to several conclusions, including (but not limited to):

  • Prompt engineering will develop as a discipline, as will malicious prompt creation.
  • Adversaries will develop capabilities enabled by large language models in unpredictable ways.
  • Identifying malicious or abusive content will become more difficult for platform providers.
  • Large language models already give criminals the ability to make any targeted communication within an attack more effective.

“We began this research before ChatGPT made GPT-3 technology available to everyone,” Patel said. “This development increased our urgency and efforts. Because, to some degree, we are all Blade Runners now, trying to figure out if the intelligence we’re dealing with is ‘real,’ or artificial.”

The full research is now available HERE.

This work was supported by CC-DRIVER, a project funded by the European Union’s Horizon 2020 Research and Innovation Programme under Grant Agreement No. 883543.

