
Big Tech unveils platform to promote ‘safe’ AI

NewsRescue

On Wednesday, leading companies in the AI industry announced a group dedicated to “safe and responsible development” in the field. Anthropic, Google, Microsoft, and OpenAI are among the founding members of the Frontier Model Forum.

According to a statement released by Microsoft on Wednesday, the forum’s goal is to promote and develop standards for evaluating AI safety, while helping governments, enterprises, policymakers, and the general public understand the risks, limitations, and opportunities of the technology.

The forum also aims to create best practices for dealing with “society’s greatest challenges,” such as “climate change mitigation and adaptation, early cancer detection and prevention, and combating cyber threats.”

Membership is open to anyone involved in developing “frontier models” aimed at achieving breakthroughs in machine-learning technology and committed to the safety of their projects. The forum intends to form working groups and build collaborations with governments, non-governmental organizations, and academics.

“Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control,” said Microsoft President Brad Smith in a statement.

AI thought leaders have increasingly called for serious regulation of a field that some believe could endanger humanity, highlighting the dangers of uncontrolled development. With the exception of Microsoft, the CEOs of all forum participants signed a statement in May urging governments and global organizations to treat “mitigating the risk of extinction from AI” as a priority on par with preventing nuclear war.

Anthropic CEO Dario Amodei warned the US Senate on Tuesday that artificial intelligence is much closer to surpassing human intelligence than most people assume, and he called for strict regulation to prevent nightmare scenarios such as AI being used to create biological weapons.

His views matched those of OpenAI CEO Sam Altman, who warned Congress earlier this year that AI may go “quite wrong.”

In May, the White House formed an AI task force led by Vice President Kamala Harris. Last week, it reached an agreement with the forum participants, as well as Meta and Inflection AI, under which the companies will allow outside audits for security flaws, privacy risks, potential discrimination, and other issues before releasing products to the market, and will report all vulnerabilities to the appropriate authorities.
