Tech Companies Agree to Voluntary AI Framework With White House
Amazon, Google, Meta, Microsoft and OpenAI agreed to voluntary measures to ensure AI develops in a “safe, secure and transparent” manner, the White House announced Friday. The companies agreed to internal and external security testing and to share information with
industry, government and researchers to ensure products are safe before they’re released to the public, the administration said. Anthropic and Inflection also agreed to the measures.

The agreement outlines commitments to cybersecurity investment and third-party examinations of AI vulnerabilities. The companies committed to developing mechanisms to inform the public when content is AI-generated, and to researching societal risks in order to avoid “harmful bias and discrimination.” The commitments build on the National Institute of Standards and Technology’s AI Risk Management Framework and the Office of Science and Technology Policy’s Blueprint for an AI Bill of Rights, Microsoft President Brad Smith said.

Senate Majority Leader Chuck Schumer, D-N.Y., said Congress “will continue working closely with the Biden administration and our bipartisan colleagues to build upon their actions and pass the legislation that’s needed.” Senate Intelligence Committee Chairman Mark Warner, D-Va., said voluntary commitments are a good step, but Congress needs to regulate: “While we often hear AI vendors talk about their commitment to security and safety, we have repeatedly seen the expedited release of products that are exploitable, prone to generating unreliable outputs, and susceptible to misuse.”