Paris, Feb. 10: Experts from around the world have called for greater regulation of AI to prevent it escaping human control, as global leaders gather in Paris for a summit on the technology. France, co-hosting the Monday and Tuesday gathering with India, has chosen to spotlight AI 'action' in 2025 rather than putting safety concerns front and centre, as at the previous meetings at Britain's Bletchley Park in 2023 and in the South Korean capital, Seoul, in 2024.
The French vision is for governments, businesses and other actors to come out in favour of global governance for AI and make commitments on sustainability, without setting binding rules.
"We don't want to spend our time talking only about the risks. There's the very real opportunity aspect as well," said Anne Bouverot, AI envoy for President Emmanuel Macron. Max Tegmark, head of the US-based Future of Life Institute that has regularly warned of AI's dangers, told AFP that France should not miss the opportunity to act. "France has been a wonderful champion of international collaboration and has the opportunity to really lead the rest of the world," the MIT physicist said.
"There is a big fork in the road here at the Paris summit and it should be embraced." - 'Will to survive' - Tegmark's institute has backed the Sunday launch of a platform dubbed Global Risk and AI Safety Preparedness (GRASP) that aims to map major risks linked to AI and solutions being developed around the world.
"We've identified around 300 tools and technologies in answer to these risks," said GRASP coordinator Cyrus Hodes. Results from the survey will be passed to the OECD rich-countries club and members of the Global Partnership on Artificial Intelligence (GPAI), a grouping of almost 30 nations including major European economies, Japan, South Korea and the United States that will meet in Paris Sunday. The past week also saw the presentation of the first International AI Safety Report on Thursday, compiled by 96 experts and backed by 30 countries, the UN, EU and OECD. Risks outlined in the document range from the familiar, such as fake content online, to the far more alarming. "Proof is steadily appearing of additional risks like biological attacks or cyberattacks," the report's coordinator and noted computer scientist Yoshua Bengio told AFP.
In the longer term, Bengio, winner of the 2018 Turing Award, fears a possible "loss of control" by humans over AI systems, potentially motivated by "their own will to survive". "A lot of people thought that mastering language at the level of ChatGPT-4 was science fiction as recently as six years ago, and then it happened," said Tegmark, referring to OpenAI's chatbot.
"The big problem now is that a lot of people in power still have not understood that we're closer to building artificial general intelligence (AGI) than to figuring out how to control it." - Besting human intelligence? - AGI refers to an artificial intelligence that would equal or better humans in all fields. Its approach within a few years has been heralded by the likes of OpenAI chief Sam Altman. (AFP)