
The United Nations (UN) has formed an advisory group to look at how artificial intelligence (AI) should be managed to mitigate potential risks, with a pledge to adopt a “globally inclusive” approach. The move comes amid new research showing that consumers don’t trust businesses to responsibly adopt generative AI or adhere to regulations governing its use.
UN Secretary-General Antonio Guterres said the new AI advisory body would be multidisciplinary and address issues related to international governance of AI.
Also: Generative AI is everything, everywhere, all at once
Currently, the body comprises 39 members drawn from government agencies, non-governmental organizations, and academia. They include the Chief AI Officer of the Government of Singapore, Spain’s Secretary of State for Digitalization and AI, the CTO of Sony Group, the CTO of OpenAI, the International Policy Director of Stanford University’s Cyber Policy Center, and a professor at the Institute of Data Law at China University of Political Science and Law.
With the emergence of applications such as chatbots, voice cloning, and image generators over the past year, AI has demonstrated both its significant potential and its potential dangers, Guterres noted.
“From predicting and responding to crises to launching public health programs and education services, AI can enhance and expand the work of governments, civil society, and the United Nations across the board. For developing economies, AI offers the potential to leapfrog outdated technologies and bring services directly to the people who need them most,” he said.
Also: Generative AI in commerce: 5 ways industries are changing how they do business
He added that AI can help drive climate action and efforts to achieve the international group’s 17 Sustainable Development Goals by 2030.
“But, this all depends on using AI technologies responsibly and making them accessible to all, including developing countries that need it the most,” he said. “As things stand, AI expertise is concentrated in a handful of companies and countries. This could deepen global inequality and widen digital divides.”
Pointing to concerns about misinformation and disinformation, Guterres said AI could exacerbate bias and discrimination, surveillance and privacy invasions, fraud, and other violations of human rights.
Also: Why companies need to use AI to think differently, and not just to cut costs
The new UN advisory group, therefore, is needed to drive discussions on AI governance and how to contain the associated risks. The body will also assess how the various AI governance initiatives already underway can be integrated, he said, adding that it will be guided by the values outlined in the UN Charter and a commitment to inclusiveness.
He noted that initial recommendations will be ready by the end of the year in three areas: international governance of AI, a shared understanding of risks and challenges, and ways to harness AI to accelerate delivery of the Sustainable Development Goals.
Consumers lack confidence in business adoption of AI
Although rules governing the use of AI are considered necessary, there are questions about whether businesses will adhere to them.
According to survey results from tech consultancy ThoughtWorks, some 56% of consumers don’t trust businesses to follow generative AI regulations. The study surveyed 10,000 respondents across 10 markets, including Australia, Singapore, India, the UK, the US, and Germany. There were 1,000 respondents in each market, all of whom were aware of generative AI.
Also: Machine learning helps this company provide a better online shopping experience
A lack of consumer confidence in business compliance is evident even as 90% believe government regulations are necessary to hold companies accountable for how they implement AI.
Nearly 93% of consumers are concerned about the ethical use of generative AI, with 71% expressing concern that businesses will use their data without consent. Another 67% are concerned about the risks associated with misinformation.
Also: Generative AI will go beyond what ChatGPT can do. Here’s everything about how the technology advances
When asked if they would buy from a company that uses generative AI, 42% of consumers are more likely to do so, while 18% feel less inclined.
Among those likely to buy from generative AI adopters, 59% of consumers believe businesses can use the technology to innovate more, and 51% look for better customer experiences with faster support from companies.
Some 64% of consumers cite a lack of human touch as the reason they are less likely to buy from a business that uses generative AI, while 48% cite data privacy concerns.
Across the board, 91% of consumers express concern about data privacy, specifically, how their information is used, accessed and shared.
Also: I’m using ChatGPT to help fix code faster, but at what cost?
“Consumers are savvy enough to recognize the potential for misuse of technology, which can include privacy violations, intellectual property violations, job losses or degraded customer experiences,” said Mike Mason, chief AI officer at ThoughtWorks.
“At the heart of this fear is a concern that enterprises will not be transparent about their use of generative AI technology,” Mason said.
“For some consumers, government regulation is seen as the best way to mitigate against unscrupulous use of generative AI, but government regulation has inherent problems: we’ve often seen regulators struggle to keep pace with the technology.”
He urged businesses to embrace generative AI in a “responsible manner” by capitalizing on consumer enthusiasm for the technology, rather than relying on regulations.