
Specifically, the news that the race is on to build killer robot armies.
They won’t look anything like James Cameron’s famous images from the dystopian hell of the Terminator movies.
(By the way, watch only the first two of the series. After the 1991 sequel they’re totally derivative crap designed only to pull money from your wallet, a warning from friends that merely confirmed what I had already guessed, so I avoided them.)

Still less will they look like the Cylons in Battlestar Galactica (BSG), such as “Six”, more’s the pity.
No, as is often the way when reality meets fantasy, they’ll look a lot more mundane, probably not too different from the sort of drones you can buy off the shelf nowadays.
And that’s what’s really frightening about them. Unlike nuclear weapons, it doesn’t take a lot of infrastructure or resources to build large numbers of these things.
Also, don’t imagine that an “AI killer robot” is going to have some sort of human-level intelligence, or need to.
That’s not what Artificial Intelligence is really about, despite decades of SF stories like BSG.
The “AI” in this case will amount to little more than the ability to do the following:
- Recognise a human target, which could mean any human at all or a specific person identified via facial or body recognition (or a cellphone signal).
- Control flight and/or other movements towards the target.
- Trigger a lethal munition to kill the target. Lethal meaning something as small as a single bullet.

It should be noted that all these capabilities are here now.
The temptation to open Pandora’s Box is irresistible. In early March, the U.S. National Security Commission on Artificial Intelligence (NSCAI) completed its two-year inquiry, publishing its findings in a dense 750-page report. Its members unanimously concluded that the United States has a “moral imperative” to pursue the use of lethal autonomous weapons, a.k.a. “killer robots.” Otherwise, we risk bringing a rusty knife to a superhuman gunfight.
Citing the threat of China or Russia leading the global artificial intelligence (AI) arms race, the commission’s chairman, former Google CEO Eric Schmidt, urged President Biden to reject a proposed international ban on AI-controlled weapons. Schmidt rightly suspects our major rivals won’t abide by such a treaty, warning U.S. leaders, “This is the tough reality we must face.”
If other superpowers are going to unleash demonic drone swarms on the world, the logic goes, the United States should be the first to open the gates of Hell.
Of course we already have things like the General Atomics Predator drones (“General Atomics”, how 1950s is that?) and others which have been launching missiles at people for over a decade now. But they have humans in the decision loop and they’re still big and relatively expensive, although much cheaper than a human-piloted fighter bomber.
The attack drones currently on the market are plenty dangerous as is. A good example is the KARGU Loitering Munitions System, currently deployed by Turkish forces. This lightweight quadcopter “can be effectively used against static or moving targets through its … real-time image processing capabilities and machine learning algorithms.”
KARGU’s mode of attack is full-on kamikaze. It hovers high in the air as the operator searches for victims. When one is located, the drone dive-bombs its target and explodes. If the concussion doesn’t kill them, the shrapnel will. Just imagine what a thousand could do.
That last line is the future. What we’re talking about here is a swarm of such machines and, again, unlike SF, these don’t need any centrally organised intelligence, human or AI, to operate. Since Craig Reynolds’ Boids model of 1987, computer simulations have mimicked the swarming movements of schools of fish and flocks of birds with just three rules.
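Those three rules are separation, alignment, and cohesion, each computed purely from an agent’s local neighbours. Here is a minimal sketch of that idea; all the radii and weights are illustrative assumptions, not parameters from any real system:

```python
import math

# Minimal Boids-style flocking sketch: each agent reacts only to nearby
# neighbours using three rules, and swarm behaviour emerges with no
# central controller. Radii and weights below are illustrative guesses.

def step(boids, sep_r=1.0, neigh_r=5.0, w_sep=0.05, w_ali=0.05, w_coh=0.01):
    """boids: list of [x, y, vx, vy]; returns the next state of the swarm."""
    new = []
    for i, (x, y, vx, vy) in enumerate(boids):
        sep_x = sep_y = 0.0      # rule 1: separation - steer away from crowding
        avg_vx = avg_vy = 0.0    # rule 2: alignment - match neighbours' heading
        cx = cy = 0.0            # rule 3: cohesion - steer toward local centre
        n = 0
        for j, (ox, oy, ovx, ovy) in enumerate(boids):
            if i == j:
                continue
            d = math.hypot(ox - x, oy - y)
            if d < neigh_r:
                n += 1
                avg_vx += ovx; avg_vy += ovy
                cx += ox; cy += oy
                if 0 < d < sep_r:
                    sep_x += (x - ox) / d
                    sep_y += (y - oy) / d
        if n:
            vx += w_sep * sep_x + w_ali * (avg_vx / n - vx) + w_coh * (cx / n - x)
            vy += w_sep * sep_y + w_ali * (avg_vy / n - vy) + w_coh * (cy / n - y)
        new.append([x + vx, y + vy, vx, vy])
    return new
```

Run it over a handful of agents and their headings converge within a few steps; that is the whole trick, and it is why a swarm needs no commander.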
Once you get into such swarms we’re no longer talking about just picking off a few selected targets:
To raise awareness of this imminent threat, the Future of Life Institute produced the alarming, if poorly acted, film Slaughterbots. The finale shows dissident college students having their brains blown out by bird-sized quadcopters.
…
In a 2018 study conducted for the US Air Force, drone specialist Zachary Kallenborn correctly argued that lethal drone swarms should be declared weapons of mass destruction.
Cheap weapons of mass destruction, too.
Even without that miserable conclusion from the NSCAI I would have found it hard to believe that various nations could be held back from pursuing development of these things.
In the future, how tempted would a POTUS be by the idea that the entire North Korean nuclear team, military and scientists, could be taken out in one hit by such a swarm, leaving nobody to launch a nuclear counter-strike? Or imagine an Israeli leader looking at the Iranian nuclear group? And that’s in democratic nations. What brakes might there be on the likes of Xi Jinping, Putin and Erdogan?
Of course every weapon system has been countered sooner or later. In this case it may be that in the future we’ll each be guarded by a small swarm of counter-drones, starting with the wealthy members of society like Eric Schmidt:
In 2019, PAX published a list of the global corporations most likely to develop lethal autonomous weapon systems. Among the U.S. companies ranked as “high risk” are Amazon, Microsoft, and Oracle, as well as Intel, Palantir, Neurala, Corenova, and Heron Systems. It’s worth noting that the top members of the National Security Commission on AI—all of whom support using these murder machines—include chiefs from Amazon, Microsoft, and Oracle.
What’s new, Tom?
Lovely song, Andrei. Never heard of this couple before.
However, the prospect of an AI-driven LAW scares me more than old-fashioned bombs, because the latter needed heavy infrastructure for manufacture and delivery, which put them out of reach of most nations unless they were supplied.
These things increasingly look like off-the-shelf tech.
I would be surprised if the Biden Administration goes for autonomous weapons that are specifically targeted at humans. There are substantial international efforts to ban them, in the same way that land mines are banned. That treaty has been pretty successful; not universal yet, but the US and its allies largely accept it, except for the DMZ.
New Zealand will be at the forefront of leading the effort for an international convention.
I know you might scoff, but we were also a lead nation on the landmine convention. I have some (minor) involvement by virtue of being Deputy Chair of the Public Advisory Committee on Disarmament and Arms Control (PACDAC).
I’ll resist the urge to scoff, despite a century’s worth of international agreements around warfare that have achieved very little, starting with the Kellogg-Briand Pact of the 1920s. National objectives and fears usually override such things sooner or later.
However, I can only hope that in this case you are correct. Given my background I’m far more afraid of these things than even nuclear weapons, and that commission included some very heavyweight people whose influence inside the USA will last much longer than a single Administration of eight years, let alone four.