
White House Gets ‘Responsible AI’ Commitments from Amazon, Google, Meta and More

The White House has gotten some of the biggest names in AI to agree to responsible practices.
(Photo credit: assetseller - Adobe Stock)

Some of the biggest names in AI — including Amazon, Google, Meta, Microsoft and OpenAI — have voluntarily agreed to a set of commitments developed by the White House to ensure the “safe, secure and transparent” development of AI technologies. The commitments include developing watermarking technology for AI-generated content, researching and reporting on the security and societal impact of the AI technologies they create, and looking beyond their own business needs to develop AI technology that “addresses society’s greatest challenges.”

AI safety and research company Anthropic and the self-described “AI studio” Inflection also are among the group of companies that have agreed to the commitments, which all seven companies said they will undertake immediately.

“Companies that are developing these emerging technologies have a responsibility to ensure their products are safe,” said the White House in a statement. “To make the most of AI’s potential, the Biden-Harris Administration is encouraging this industry to uphold the highest standards to ensure that innovation doesn’t come at the expense of Americans’ rights and safety.”

The commitments include:

  • Ensuring products are safe through internal and external security testing of AI systems by independent experts prior to release, as well as sharing insights gleaned from their work on managing AI risks across the industry and with governments, civil society and academia.
  • Building systems that put security first by investing in cybersecurity and insider threat safeguards, as well as facilitating third-party discovery and reporting of vulnerabilities in their AI systems. The administration zeroed in specifically on “unreleased model weights,” which are the parts of an AI model that determine how much influence each input has on the output and are therefore a critical determinant of algorithmic bias (a simplified illustration follows this list). “These model weights are the most essential part of an AI system, and the companies agree that it is vital that the model weights be released only when intended and when security risks are considered,” reads the White House statement.
  • Earning the public’s trust by developing “robust technical mechanisms” to ensure that users know when content is AI-generated, such as a watermarking system; publicly reporting their AI systems’ capabilities, limitations and areas of appropriate and inappropriate use with an eye to both security risks and societal risks, such as the effects on fairness and bias; prioritizing research on the societal risks that AI systems can pose, including avoiding harmful bias and discrimination and protecting privacy; and developing and deploying advanced AI systems to help address society’s greatest challenges “from cancer prevention to mitigating climate change.”
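
For readers unfamiliar with the term, the following toy example sketches what “model weights” mean in practice: numbers that scale how strongly each input affects a model’s output. It is purely illustrative — the feature names and values are hypothetical and do not come from the White House or any of the companies.

    # Toy illustration of "model weights": hypothetical inputs and learned weights.
    inputs = {"age": 35.0, "income": 72000.0, "zip_code_risk": 0.8}
    weights = {"age": 0.01, "income": 0.00002, "zip_code_risk": 1.5}
    bias = -0.5

    # The model's output is a weighted sum: each weight scales its input's influence.
    score = bias + sum(weights[name] * value for name, value in inputs.items())
    print(f"score = {score:.3f}")

    # A large weight (here, "zip_code_risk") lets one input dominate the result,
    # which is why weights are closely tied to questions of bias, and why keeping
    # unreleased weights secure amounts to keeping the model itself secure.

A production model has millions or billions of such weights rather than three, but the principle is the same: whoever holds the weights effectively holds the model.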

In addition to securing these commitments, the White House said it is currently developing an AI-focused executive order and will pursue bipartisan legislation. The U.S. government has a bit of catching up to do on the legislative front, with the EU already several drafts into its own set of AI laws, dubbed the EU AI Act, which is poised to become official later this year.
