
AI Seoul Summit 2024: Landmark Commitments on AI Safety, Innovation, Inclusion


AI Seoul Summit 2024. Image: Department of Science, Innovation and Technology, UK Government Official Website

The AI Seoul Summit took place on 21–22 May 2024, co-hosted by the Republic of Korea and the United Kingdom, and led to new commitments on AI ‘safety, innovation and inclusion.’

This Summit is significant as it marks the first time that 16 global AI tech companies have agreed to more concrete “safety commitments on development of AI”, including halting initiatives where risk mitigation measures cannot be met. Additionally, the participating countries adopted the ‘Seoul Declaration for safe, innovative and inclusive AI.’

This marks a significant shift in ‘responsible AI’ trends, with companies and countries firmly agreeing on the need for safety outcomes, risk mitigation, “accountable governance structures and public transparency.”

South Korean President Yoon Suk-yeol at the virtual AI Seoul Summit. Image: AFP

The ‘Frontier AI Safety Commitments’ and the Seoul Declaration

What is the background?

With the rapid expansion of AI, there have been growing concerns about its regulation and safety. The first AI Safety Summit, held at Bletchley Park (UK) in November 2023, marked the beginning of these deliberations. Other existing commitments, such as the US Voluntary Commitments and the Hiroshima Code of Conduct, are also noteworthy. By building on these developments and broadening global commitments, the AI Seoul Summit and Declaration are path-breaking.

Which companies participated and who attended?

The participant AI companies range from major Big Tech leaders such as Google, OpenAI and Meta to others like Zhipu.ai (China) and the Technology Innovation Institute (UAE).

The event also brought together international governments, including the G7 countries, South Korea, Singapore, Australia and the EU, along with “select global industry, academia and civil society leaders for discussions.”

What are the key commitments?

Companies to apply the brakes, when needed: The biggest takeaway is that companies “have committed to not develop or deploy AI models if the risks cannot be sufficiently mitigated.” This is a significant development, especially with the expansion of AI into sectors such as healthcare, life sciences and other high-risk areas, including public safety.

Companies to devise safety frameworks: The next commitment from the AI tech companies deals with publishing their respective “safety frameworks on how they will measure risks of their frontier AI models, such as examining the risk of misuse of technology by bad actors.”

The frameworks will also outline when severe risks, unless adequately mitigated, would be “deemed intolerable” and what companies will do to ensure thresholds are not surpassed.

Building Consensus: On defining these thresholds, “companies will take input from trusted actors including home governments as appropriate.”

Governance and Transparency: The efforts taken forward from this Summit, by both the AI companies and the participating countries, will be targeted at strengthening ‘effective governance and transparency’ mechanisms in order to strike a balance between safety, inclusivity and innovation. The call from world leaders in the Seoul Declaration on AI focused on “human welfare”, the “promotion of democracy, human rights”, “free and open innovation” and “tackling global challenges such as poverty, climate change.”

The story is yet to unfold

The renewed commitments of the Summit are commendable, but there is some scepticism about their actual follow-through. Professor Yoshua Bengio, a world-leading AI researcher, summed up the need for a more holistic approach towards making them a reality: “This voluntary commitment will obviously have to be accompanied by other regulatory measures, but it nonetheless marks an important step forward in establishing an international governance regime to promote AI safety.”

The evolving AI track deserves attention, particularly after the accelerated roll-out of new technologies from OpenAI, Google DeepMind and Android. Further, recent AI-related controversies centred on the misuse of the voices and likenesses of film stars such as Jackie Shroff in India and Scarlett Johansson in the US illustrate the urgent need to address AI safety and transparency concerns.

The next meeting will be held in France in 2025, by which time the actual efforts and implementation of these commitments should become clearer.
