China and the US are part of a multilateral agreement to cooperate on AI risks



Image: Globe against a city background (cofotoisme/Getty Images)

A group of 28 countries, including China and the United States, has agreed to work together to identify and manage potential risks from “frontier” artificial intelligence (AI), marking the first such multilateral agreement.

Published by the UK, the Bletchley Declaration on AI Safety outlines the countries’ recognition of the “urgent need” to ensure AI is developed and deployed in “safe, responsible ways” for the benefit of the global community. The effort requires broad international cooperation, according to the declaration, which has been endorsed by countries across Asia, the EU and the Middle East, including Singapore, Japan, India, France, Australia, Germany, South Korea, the United Arab Emirates and Nigeria.

Also: Generative AI can help evolve low code into no code – but with a twist

The countries recognize that significant risks may arise from potential intentional misuse of, or unintended issues of control over, frontier AI, with particular concern around cybersecurity, biotechnology and disinformation risks. The declaration points to potentially serious and catastrophic harm from AI models, as well as risks associated with bias and privacy.

Along with their recognition that the risks and capabilities are not yet fully understood, the countries agreed to collaborate and build a shared “scientific and evidence-based understanding” of frontier AI risks.

Also: As developers learn the ins and outs of generative AI, non-developers will follow.

The declaration describes frontier AI as systems encompassing “highly capable general-purpose AI models”, including foundation models that can perform a wide variety of tasks, as well as relevant specific narrow AI.

“We resolve to work together in an inclusive manner to ensure human-centered, trustworthy, and responsible AI that is safe, and supports the well-being of all through existing international forums and other relevant initiatives,” the declaration said.

“In doing so, we recognize that countries should consider the importance of a pro-innovation and proportionate governance and regulatory approach that maximizes the benefits and considers the risks associated with AI.”

This approach may include establishing risk classifications and categorizations based on a country’s local conditions and applicable legal framework. Collaboration may require new procedures such as common policies and codes of conduct.

Also: Can AI code? Only in baby steps

The group’s efforts will focus on building risk-based policies across the participating countries, collaborating where appropriate while recognizing that approaches may differ at the country level. Alongside calling for increased transparency from private actors developing frontier AI capabilities, these efforts include developing relevant evaluation metrics and tools for safety testing, as well as building public-sector capability and scientific research.

UK Prime Minister Rishi Sunak said: “This is a landmark achievement that sees the world’s greatest AI powers agree on the urgency behind understanding the risks of AI.”

UK Technology Secretary Michelle Donelan added: “We have always said that no single country can tackle the challenges and risks posed by AI alone, and today’s landmark announcement marks the start of a new global effort to build public confidence by ensuring the technology’s safe development.”

A Singapore-led initiative known as Sandbox was also announced this week, aiming to provide a set of criteria for evaluating generative AI products. The initiative draws on resources from major global players including Anthropic and Google, and is guided by a draft catalogue that categorizes the current criteria and methods used to evaluate large language models.

Also: The Ethics of Generative AI: How We Can Use This Powerful Technology

The catalogue compiles commonly used technical testing tools, organizes them according to what they test and their methods, and recommends a baseline set of tests for evaluating generative AI products. The goal is to establish a common language and support “wider, safer and more reliable adoption of generative AI”.

The United Nations (UN) last month set up an advisory group to look at how AI should be managed to mitigate potential risks, with a pledge to adopt a “globally inclusive” approach. The body currently comprises 39 members and includes representatives from government agencies, non-governmental organizations and academia, such as the Singapore government’s chief AI officer, Spain’s secretary of state for digitalization and AI, and OpenAI’s chief technology officer.




