What Does the Commercialization of Generative AI Mean for Society?


ACM TechBrief Lays Out Risks of Generative AI Technologies and Delineates Policy Implications for New “Black Box” Technology

ACM, the Association for Computing Machinery, has released “TechBrief: Generative Artificial Intelligence.” It is the latest in the quarterly ACM TechBriefs series of short technical bulletins that present scientifically grounded perspectives on the impact and policy implications of specific technological developments in computing.

“TechBrief: Generative AI” begins by laying out a core challenge: the rapid commercialization of GenAI poses multiple large-scale risks to individuals, society, and the planet that require a rapid, internationally coordinated response to mitigate.

“While generative AI technologies have seemingly burst onto the scene, we’ve long been familiar with many of the risks they pose, though at a smaller scale,” explains David Leslie, Director of Ethics and Responsible Innovation Research at The Alan Turing Institute, Professor of Ethics, Technology and Society at Queen Mary University of London, and co-lead author of the new TechBrief. “We need swift and global policy action to regulate generative AI applications as well as the so-called foundation models upon which they are built. We see this ACM TechBrief as a first step in that direction.”

The TechBrief defines GenAI as “a broad set of computing technologies and techniques that enable a computer to create content such as text, code, image, audio, speech, voice, music, and video,” also noting that “over the past decade, the development of sophisticated neural network architectures, combined with increasing computing capacity and access to vast bodies of training data, has led to a great expansion in their utility and uptake.”

Potential harms from generative AI identified by the new TechBrief include misinformation, cyberattacks, and even environmental damage. To mitigate these risks, the authors contend that AI law and policy should incorporate end-to-end governance approaches that address risks comprehensively and “by design.” Specifically, laws and policies must address how to govern the multiphase character of GenAI systems and the foundation models (FMs) used to construct them.

“Although the innovation potential of generative AI is clear, so too are risks of this technology that we must address through a complete, end-to-end governance approach that protects business and society,” said Francesca Rossi, IBM AI Ethics Global Leader and co-lead author of the TechBrief. “From critical early design-stage mechanisms such as stakeholder engagement and bias self-assessment, to implementation of guardrails in the application and post-deployment phases grounded in legal and policy measures, as well as technical features such as traceability, auditability and third-party oversight, we must establish a holistic process that ensures a GenAI system’s behavior improves how we live, learn, and work while being aligned to human values.”

Noting that a few major companies have taken the lead in developing and deploying generative AI technologies, the TechBrief also emphasizes that policymakers must pay close attention to potential power imbalances at the ecosystem level that could substantially affect the public interest. The TechBrief states “…when large-scale private sector organizations control the critical digital infrastructures on which the production and use of FMs and GenAI applications depend, this may create distorted financial incentives to further centralize economic power and disadvantage smaller or less well-resourced commercial, academic, and public sector stakeholders. Effective AI policy should consider addressing such power asymmetries directly.”

The key conclusions of “TechBrief: Generative AI” are:

  • AI policy should incorporate end-to-end governance approaches that address risks “by design” and regulate at all stages of the design-to-deployment life cycle of AI products.
  • Governance mechanisms for GenAI technologies must address the entirety of their complex supply chains.
  • Actors should be subject to controls that are proportionate to the scope and scale of the risks their products pose, with corresponding legal liability and other concrete consequences for irresponsible practices.

ACM’s TechBriefs – which collectively have been downloaded nearly 40,000 times to date – are designed to inform policymakers, the public, and others about the nature and implications of information technologies. As with other TechBriefs in the ACM series, “Generative AI” includes an overview of the major policy implications of the technology, key statistics to put the issues in context, a narrative introduction to educate the public, and key conclusions.

