The G7, a collection of the world’s most advanced economies, has come to an agreement on a voluntary AI code of conduct for companies involved in the development of artificial intelligence technology. This code serves as a set of recommendations for AI companies, acting as an interim measure until formal regulations are established.
Details about the AI Code of Conduct
The code is an 11-point agreement intended to promote responsible and safe AI practices.
Its primary goal is to encourage the global adoption of safe, reliable, and secure AI. To that end, the code offers voluntary recommendations for organizations working on cutting-edge AI technologies, including advanced foundation models and generative AI systems.
Many AI companies have already developed their own guidelines and are also funding other organizations to study AI safety.
OpenAI, Microsoft, Anthropic, and Google introduced a forum to discuss and study potential AI harms. The companies also pledged $10 million in funding to various organizations.
Along with these companies, Meta, Nvidia, Palantir, and IBM have also agreed to make AI development safe and secure. The G7 agreement is also essential for analyzing the risks associated with AI, which in turn will help companies take the necessary steps at the right time.
What is G7 (Group of 7)?
The G7 consists of seven countries: the US, the UK, Canada, Germany, Japan, Italy, and France. The European Union also participates as a member.
Order by the President of the United States
President Biden has reportedly prepared an executive order directing federal agencies to establish guidelines and to press companies developing AI technology to follow safe and secure practices.
Moreover, the United States Federal Trade Commission is reportedly tasked with closely monitoring AI companies.