World Geostrategic Insights interview with Professor Toby Walsh on the main risks that artificial intelligence (AI) poses to the present and future of humankind, and on how AI is redefining warfare, particularly in the current conflict in Ukraine.
    Toby Walsh is a professor of artificial intelligence (AI) at the University of New South Wales, Australia, and one of the world’s leading experts on AI. The Australian newspaper called him a “rock star” of Australia’s digital revolution and listed him as one of the 100 most important digital innovators in Australia. He is a strong advocate of limits on AI to ensure it is used responsibly. He was recently banned from entering Russia for debunking the claim that Russian AI-powered mines can distinguish between soldiers and civilians. Toby Walsh is the author of “Android Dreams: The Past, Present and Future of AI”, “2062: The World that AI Made” and “Machines Behaving Badly: The Morality of AI”.
    Q1 – Artificial intelligence has become an essential part of our lives. AI is already being used for an impressive array of purposes that will continue to expand in the coming decades, and it will soon be present in every aspect of our lives. It opens up unprecedented opportunities, but also existential challenges and risks for humankind. In your book “Machines Behaving Badly: The Morality of AI”, you ask whether it is possible to make artificial intelligence machines behave morally, point out the unintended consequences of advances in AI, and offer insights into the oddities of the people behind them. What are the main challenges and risks that AI poses to our present and future? What steps need to be taken now to ensure a secure digital future for humanity?
    A1 – There are near-term and long-term risks. A common mistake is to conflate the two. In the near term, the biggest risk is likely to be giving too much responsibility to machines that are not capable enough. In the longer term, we might have to worry about giving too much responsibility to machines that are too capable.
    Many of the risks concern unintended consequences. The machine learning engineers at Facebook behind the algorithms sorting users’ feeds didn’t intend to promote fake news and hate speech. But building a system that optimizes for likes and clicks, and keeps reinforcing that signal, did exactly that rather than increasing “meaningful” engagement. Of course, because such consequences are unintended, they are inevitably hard to predict.
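    To make that feedback loop concrete, here is a minimal toy sketch in Python. It is entirely hypothetical, not Facebook’s actual system: the posts, the click model and every number in it are invented. It shows how ranking a feed purely by clicks, and folding those clicks back into the ranking, tends to amplify the most sensational content:

```python
# Toy simulation (invented numbers, not any real platform's system) of a
# feed ranker whose only objective is accumulated clicks.
import random

# Each post has a hidden "sensationalism" level; in this toy model,
# more sensational content (including fake news) draws more clicks.
posts = [{"id": i, "sensationalism": random.random(), "score": 0.0}
         for i in range(20)]

def click_probability(post):
    # Toy assumption: click-through rate rises with sensationalism.
    return 0.1 + 0.8 * post["sensationalism"]

for day in range(30):
    # Rank by accumulated engagement; the top five make today's feed.
    feed = sorted(posts, key=lambda p: p["score"], reverse=True)[:5]
    # A little exploration, as real recommenders do, so unseen posts
    # occasionally get a chance to be scored.
    feed[-1] = random.choice(posts)
    for post in feed:
        # Simulate 100 impressions; every click reinforces the post's
        # score, so today's winners dominate tomorrow's feed.
        clicks = sum(random.random() < click_probability(post)
                     for _ in range(100))
        post["score"] += clicks

top = max(posts, key=lambda p: p["score"])
print(f"Most promoted post: sensationalism={top['sensationalism']:.2f}")
# Usually prints a value near 1.0: the loop has amplified the most
# sensational content, although nothing in the code asked for that.
```

    The point of the sketch is that nothing in the code mentions fake news or hate speech: the drift toward sensational content emerges from the objective itself, which is exactly why such consequences are hard to predict in advance.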
    We face some significant challenges, from increasing inequality to fractured democracies and geopolitical instability. AI can help us tackle many of these challenges. But equally, AI can amplify many of these problems if we are not careful.
    Q2 – The development and use of autonomous weapons capable of selecting and killing targets without human intervention is having a growing impact on armed conflicts and raising important ethical questions. You spearheaded a petition, signed by thousands of artificial intelligence experts around the world, on the dangers of autonomous AI in warfare. What are your views on the application of artificial intelligence in weapon systems? What are the associated risks? In major recent conflicts, and in the current war in Ukraine, AI weapons such as semi-autonomous drones have been used extensively. The United States also seems to be considering sending Ukraine electronic equipment that converts unguided aerial munitions into smart bombs. Do you think it is possible, despite the current serious tensions between the great powers, to reach an international agreement regulating the use of autonomous AI weapons in warfare?
    A2 – Every now and again, we see a war being fought that redefines warfare. In the First World War, we saw the introduction of tanks and machine guns, which redefined how we fought. In the Second World War, we saw the introduction of long-range bombers, along with other innovations like computers and nuclear weapons, which again redefined how we fought. I suspect we are seeing war redefined once more in Ukraine.
    A relatively cheap drone helped take out Russia’s billion-dollar Black Sea flagship. And drones are now among the most feared weapons on the front line, directing artillery fire and even, in some cases, dropping munitions onto terrified targets. Militaries around the world are looking closely at how war will now be fought.
    Nevertheless, it is entirely possible, and necessary, to regulate autonomous weapons. A good analogy is chemical weapons, which are equally simple and cheap to obtain but are now regulated. The regulation of chemical weapons is not perfect, but it is nevertheless worth having. We can expect to regulate autonomous weapons similarly.
    Q3 – The Ukrainians are successfully applying advanced artificial intelligence tools to publicly available images, producing critical information with which to counter Russian military forces. Since the beginning of the war, Mykhailo Fedorov, Minister of Digital Transformation and Deputy Prime Minister of Ukraine, has asked the leadership of commercial space companies for direct assistance. Ukraine has partnered with Clearview AI, which has provided facial recognition software, while Elon Musk responded to the appeal by providing Ukraine with the Starlink internet network, which Ukrainian forces use in attacks on Russian forward positions. Other commercial space and AI companies have also responded, and their support has been critical in providing timely information on Russian troop movements and in keeping Ukrainian military communication networks operational. However, some experts have expressed concerns about the direct involvement of private AI technology companies in a major conventional war. What is your opinion?
    A3 – There are many good uses of AI in a military setting, from facial recognition to help identify war crimes, to robots for clearing minefields. It is certainly possible, and even desirable, for private AI technology companies to contribute to these positive uses of AI. And indeed, much useful AI technology can only be found in such companies.
    However, we need to be careful of the risks. What if the facial recognition software makes mistakes? Does a small company have the resources to protect human rights in such settings? Ultimately, it is nation states (with their vast resources and, in our case, democratic legitimacy) that fight wars, not private companies.
    Toby Walsh has been invited to give speeches to the United Nations, heads of state, parliamentary bodies and corporate boards, and he regularly appears in the world’s most prestigious media to talk about the impact of AI and robotics on society.
    Toby Walsh has written popular books on AI, including “Android Dreams: The Past, Present and Future of AI”, “2062: The World that AI Made” and “Machines Behaving Badly: The Morality of AI”. He has been elected a Fellow of the Australian Academy of Science, a Fellow of the Association for the Advancement of AI, and a Fellow of the European Association for AI. His Twitter account, @TobyWalsh, was voted among the top ten for keeping up to date on AI developments, while his blog, The Future of AI, attracts tens of thousands of readers each month.