Modern Warfare (Future of AI #3)

Simon Baars
4 min read · Jun 24, 2024


China is experimenting with using robots in combat. They turned a robot dog, the commercially available Unitree B1, into a dystopian war machine.

Many games and movies explore the idea of "robot warfare". Usually, in such media, robots and humans are given roughly equal levels of power, so the humans can struggle against the robots and win. It makes for a good fictional narrative because humanity stands a chance.

The computer's advantage in real warfare won't be fair like that.

If we combine computer vision with an ordinary machine gun, it's relatively easy to predict the exact point of impact of any bullet fired. With sufficiently high-resolution, high-zoom cameras, it's easy to pinpoint exactly how to aim for the most lethal result.

The fastest guns on the market right now, like the MG42, can fire 20–25 bullets per second.

Even with the constraints of servo speed and the challenge of factoring recoil into the aim position, I have no doubt that an AI-assisted weapon like this could deliver 20 perfect headshots per second. In the course of a minute, an entire busy plaza of people could be dead. And that's assuming a single weapon. Add the capability to fly (in the form of a drone), and the result could be absolutely devastating.

Several wars are ongoing as we speak. In war, many ethical boundaries are crossed. But there are some that cannot be crossed without global retaliation. Both Israel and Russia, two countries involved in war right now, have access to nuclear warheads. Neither has used them, because doing so would yield severe consequences.

But using increasingly advanced machines in war has historically been fair game. Their impact is much more local than the impact of nukes.

When it comes to aligning GunAI™: war is partly about being faster than your opponent. It is much easier to build an AI-driven robot weapon that shoots anything that looks like a human head than to align it to shoot only enemy soldiers. Air-drop it into enemy territory, and let them deal with it.

Whoa. That’s not fun. Why’d you go there?

The reason I'm making this argument is that these technologies are dangerous. But meanwhile, we laugh when GPT-4o is streamed live video and compliments the user in a strangely flirty way on their appearance.

For aiming guns, these models would be useless; much simpler computer-vision models will do. The real power of GPT and similar transformer models lies in steering the robot through enemy territory.

I don't think anything in current AI alignment would prevent GPT-4o, when served a video stream from the robot's point of view, from helping the robot navigate to the places most likely to house people to eliminate. The model just sees simple spatial-movement instructions; it can't infer the military usage.

This is far from the only "dangerous" way to use AI beyond current alignment capabilities. The input/output space of these models is near-infinite. We cannot ensure that every use is safe.

Investing in AI alignment is important to catch these kinds of usages and avoid malicious use of these models. However, as long as all major model vendors try to out-compete each other, focusing on AI safety becomes less and less attractive from a financial perspective.

Building safety into these models is a losing battle. Instead, we should focus attention on monitoring how these models are used. I'm not a big fan of impinging on privacy, but I think it's necessary to keep these models from harming people, such as when they are used for military use cases.
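To make the monitoring idea concrete, here is a minimal sketch of what server-side usage flagging could look like. Everything here is hypothetical: the term list, the `flag_request` helper, and the request log are illustrative stand-ins, and a real deployment would use a trained classifier plus human review rather than keyword matching.

```python
# Hypothetical sketch of vendor-side usage monitoring.
# Keyword heuristics stand in for a real classifier; the goal is only
# to illustrate the "flag suspicious requests for human review" idea.

SUSPICIOUS_TERMS = {"targeting", "ballistics", "patrol route", "evade detection"}

def flag_request(prompt: str) -> bool:
    """Return True if a request should be queued for human review."""
    text = prompt.lower()
    return any(term in text for term in SUSPICIOUS_TERMS)

# Illustrative request log (made up for this sketch).
requests = [
    "Describe the obstacles in this video frame and suggest a way around them.",
    "Compute ballistics corrections for a moving target.",
]

flagged = [r for r in requests if flag_request(r)]
print(len(flagged))
```

The point of the sketch is where the check runs, not how: because only the vendor sees the full request stream, this kind of filter has to live on their servers, which is exactly why the next paragraph argues these efforts must come from inside.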

As an outsider, doing this kind of monitoring is near-impossible. Only the big model vendors have access to this data. That’s why these kinds of efforts have to come from inside.

I trust that these organizations have smart people working for them, who hopefully prioritize this. But if not, let this be yet another reminder of the risk of these technologies. We’re building this future together. Let’s make it a future where we can safely cuddle dogs, be they biological or mechanical.

Good boy. Gooood boy.

Please don’t kill me.
