While doomers debate chatbot safety, Ukraine is teaching drones to hunt tanks autonomously. While pundits fret over deepfakes, a Harpy loiters overhead, selecting its own targets. While we philosophize about AGI, every major military is racing to remove humans from the kill chain.
Defense contractors are quietly solving computer vision for terminal guidance, swarm coordination for overwhelming air defenses, and decision trees for target prioritization. The hard problem isn't "will AI say mean things?" It's "how many killers can we build by Tuesday?"
The Automation of Violence
Every military understands: autonomous weapons are just better weapons. No pilot fatigue. No moral hesitation. No extraction required.
The capability gap is widening. Countries with AI drones will dominate countries without them like machine guns dominated spears. The choice isn't whether to build them; it's who builds them first.
The real AI safety question isn't whether a chatbot might say mean things. It's whether a drone swarm can distinguish a school bus from a personnel carrier. And whether that distinction matters to the people programming it.
Sticks and stones break bones. But autonomous sticks that pick which bones to break? That's not a future risk. That's a current product category. It's shipping.
The graveyard isn't full of jobs. It's full of people.