As world leaders gather in Paris for a high-stakes summit on artificial intelligence, experts from various fields are calling for stricter AI regulations to ensure the technology remains under human control.
While past meetings, such as those held in Britain's Bletchley Park in 2023 and Seoul in 2024, focused heavily on AI safety concerns, this year's summit, co-hosted by France and India, takes a different approach.
A Shift Toward AI Governance and Opportunity
Rather than solely emphasizing the risks, France's agenda highlights global AI governance, sustainability, and industry commitments.
"We don't want to talk only about the risks. There's the very real opportunity aspect as well," said Anne Bouverot, AI envoy for French President Emmanuel Macron.
However, not everyone is convinced that focusing on AI's potential should come at the expense of addressing the looming risks. Max Tegmark, head of the Future of Life Institute, warned that France should seize the opportunity to take decisive action.
"France has been a wonderful champion of international collaboration and has the opportunity to lead the rest of the world," the MIT physicist stated.
Addressing AI Risks with Global Initiatives
One of the key initiatives launched ahead of the summit is the Global Risk and AI Safety Preparedness (GRASP) platform. Backed by the Future of Life Institute, GRASP aims to track and mitigate AI-related risks by analyzing global safety solutions. "We've identified around 300 tools and technologies to address these risks," said GRASP coordinator Cyrus Hodes.
Last week, the first-ever International AI Safety Report, compiled by 96 experts and endorsed by 30 countries, the UN, the EU, and the OECD, was unveiled. The report outlines potential threats ranging from misinformation to far more severe concerns, such as biological attacks and cyber warfare.
Renowned computer scientist and 2018 Turing Award winner Yoshua Bengio emphasized the growing fears of losing control over AI systems. "Proof is steadily appearing of additional risks, and in the long term, there's the potential for AI to develop survival instincts of its own, leading to unintended consequences," he explained.
The Looming Threat of Artificial General Intelligence
A major concern among AI experts is the rapid development of artificial general intelligence (AGI), a form of AI that could surpass human intelligence across all domains. OpenAI CEO Sam Altman and Anthropic's Dario Amodei have suggested that AGI could emerge within the next few years.
"If you just look at the pace of advancements, it does make you think that we'll get there by 2026 or 2027," Amodei stated in November. Tegmark, however, warned of a worst-case scenario in which leading AI firms, particularly in the U.S. and China, lose control of AGI, potentially resulting in a world dominated by machines.
AI in Warfare: A Growing Concern
One of AI's more immediate dangers is its potential use in autonomous weapons. Stuart Russell, a professor at UC Berkeley and coordinator of the International Association for Safe and Ethical AI (IASEI), expressed deep concerns over AI-driven weapons systems. "The biggest fear is AI making battlefield decisions, choosing who to attack and when, without human oversight," he said.
Experts argue that governments must act swiftly to regulate AI just as they do other high-risk industries. "If someone wants to build a nuclear reactor, they must prove it's safe before construction. AI should be treated the same way," Tegmark asserted.
As global leaders deliberate in Paris, the challenge remains: How can the world harness AI's potential while ensuring it doesn't spiral out of human control? The coming months will be critical in determining whether AI governance keeps pace with technological advancement, or whether humanity risks falling behind.