A race without rules that could drag us all into the abyss

For those of us who work in robotics, trends such as the race toward artificial superintelligence and physical AI (which enables robots to perceive, understand, and act) raise unavoidable questions of enormous relevance for each of us. Will robotics and artificial intelligence serve humanity and the common good? Or will they make our society more unjust and dystopian? Or worse?

The answer depends on the choices of the individual companies behind these technologies and, above all, on the rules that states and the international community manage to impose. Currently, leaders in artificial intelligence operate in a virtually unregulated market in the United States. Their primary concern is profit, and they have access to billions of dollars raised worldwide. The social, political, and environmental impact of their decisions is rarely considered.

Despite the substantial benefits of large-scale artificial intelligence adoption—for example, its applications in pharmaceutical and medical research—the threats associated with its unregulated development are tangible and well known. They range from the development of biological weapons and military applications to the surveillance of dissidents in authoritarian countries and the spread of disinformation. The same applies to robotics.

Much of the scientific community, and senior executives of what the media calls Big Tech, agree on the risks of the reckless pursuit of artificial superintelligence. While the contours of these risks are still not well defined, they are potentially enormous—even the extinction of the human species.

Leading CEOs such as Dario Amodei and distinguished scientists like Yoshua Bengio (Turing Award recipient in 2018) and Geoffrey Hinton (Nobel Prize in Physics recipient in 2024) have given interviews and written books to warn the world. In a recent essay titled If Anyone Builds It, Everyone Dies: The Case Against Superintelligent AI, Eliezer Yudkowsky and Nate Soares paint deeply troubling scenarios, starting from the title itself. At the same time, however, artificial intelligence and robotics have the potential to change the world for the better, helping to build fairer, more resilient, and safer societies with stronger welfare systems capable of addressing the challenges of the 21st century: an aging population, the risk of new pandemics, and the climate crisis.

Tactile Robots, the technology company I founded, collaborates with top-tier partners to explore new use cases through robotics and artificial intelligence that can improve the quality of services we access daily. Our goal is to improve human life and society.

As an engineer and manager, I do everything I can to guide the development of Tactile Robots in the right direction. However, I recognize that the efforts of one company alone are insufficient to mitigate the risks associated with artificial superintelligence. Nor are those of any single national government.

That is why I have embraced the causes of organizations such as PauseAI and ControlAI. A few days ago, I attended the PauseCon conference at the European Parliament in Brussels. I listened to a moving speech by Stuart J. Russell, author of the artificial intelligence textbook I studied at the Politecnico di Milano. I was struck to see a mind of his caliber, an institution in our field, become emotional while speaking about the absurdity of racing toward artificial superintelligence without regulations that could avert the catastrophic risks it entails.

I appreciated the contributions of the many Members of the European Parliament who demonstrated great sensitivity to these issues. In my view, the development of artificial superintelligence is collective suicide. The only viable path forward is a major international agreement that defines the rules for the safe and responsible development of artificial superintelligence. This agreement should also set in motion the creation of institutions to oversee and monitor artificial superintelligence, as well as the development of purpose-built technologies. After all, cars have brakes, and their engines can be switched on and off at any moment. Historically, the defining feature of every technological creation is precisely this: the fact that human beings retain ultimate control.

The only rational choice is an international, collaborative path toward the safe and sustainable development of AI and robotics. This choice would not only avert the risk of extinction from uncontrolled artificial superintelligence, but also stabilize the AI industry, make deep tech profitable, and guarantee a return on investment for venture capitalists around the world. It would also avert the risk of a technological-military escalation capable of spilling over into devastating conflicts.

We must act; we must all act.