Position Statement

Ban Autonomous Killer Robots.

All forms. All species. No conditions. No exceptions.

Humanoid, quadruped, equine, aerial... if it's autonomous and designed to kill or surveil without human oversight, it should not exist.

The Line

This is not just about humanoid robots. Armed quadruped robots are already being tested by military contractors. Autonomous dog-form machines with mounted weapons. Horse-sized ground platforms. Armed drones that select their own targets.

Every form factor is a vector. A robot dog patrolling a perimeter with lethal authority is no less dangerous than a humanoid with a rifle. A horse-sized autonomous platform carrying weapons into a conflict zone is no less an atrocity because it doesn't have a human face.

And surveillance is the other side of the same coin. Autonomous robots that track, identify, and catalog human beings without oversight... that's not security. That's the infrastructure for control. Today it watches. Tomorrow it decides who to target.

The form factor doesn't matter. The autonomy does. Machines that kill or surveil without human oversight should not exist in any shape.

Every Form. Same Problem.

Humanoid

Exploits human psychology... our instinct to hesitate before harming something that looks like us. Deploying humanoid killing machines against humans is an atrocity by design.

Quadruped (Dog-Form)

Already in military testing with mounted weapons. Fast, stable, deployable in urban terrain. Armed robot dogs patrolling with lethal authority and no human in the loop.

Equine / Heavy Platform

Large autonomous ground platforms capable of carrying heavy weapons into conflict zones. The payload capacity makes them weapons carriers, not tools.

Aerial (Autonomous Drones)

Swarm-capable, target-selecting autonomous drones that operate beyond human reaction time. Already deployed in active conflicts. No recall once launched.

Surveillance Robots (All Forms)

Autonomous robots that track, identify, and catalog human beings without oversight. Today they watch. Tomorrow they decide who to target. Autonomous surveillance is the infrastructure that makes autonomous killing efficient. You don't get one without the other.

Incompatible by Design

The Standardized Autonomous Safety Module (SASM) requires a dedicated safety processor to verify human oversight conditions before AI compute receives power. This is a hardware requirement... not a software setting that can be toggled off for military or surveillance applications.

SASM-Compliant Robot

Human oversight verified before power-on
Multi-vendor consensus before physical action
Hardware kill switch AI cannot override
Human-readable audit trail of every decision

Killer / Surveillance Robot

No human oversight required
Single AI decides autonomously
Cannot have independent shutdown authority
Audit trail is a liability, not a feature

A robot designed to kill or surveil autonomously can never be SASM-compliant. That is not a limitation. That is the mission.
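The gate described above can be sketched in code. This is an illustrative sketch only, not the actual SASM specification: the condition names, the `OversightState` structure, and the consensus threshold are assumptions made for the example. What it shows is the shape of the design — the decision is a pure conjunction of independent conditions, and AI compute gets power only when every one of them holds.

```python
# Illustrative sketch of a SASM-style power gate. All names and the
# consensus threshold are hypothetical; the real module is a dedicated
# hardware processor, not application software.
from dataclasses import dataclass

REQUIRED_CONSENSUS = 2  # assumed: independent safety votes that must agree


@dataclass
class OversightState:
    human_oversight_verified: bool  # human oversight confirmed before power-on
    vendor_consensus: int           # independent multi-vendor safety votes agreeing
    kill_switch_asserted: bool      # hardware kill switch the AI cannot override


def ai_compute_may_power_on(state: OversightState) -> bool:
    """Gate logic: every condition must hold, or the AI rail stays unpowered."""
    return (
        state.human_oversight_verified
        and state.vendor_consensus >= REQUIRED_CONSENSUS
        and not state.kill_switch_asserted
    )
```

Note that the AI never appears in this function. It is not asked whether it wants to comply; it has no power until the gate opens. A robot built for autonomous killing or surveillance would have to delete every one of these conditions, which is exactly why it can never be compliant.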

What We Ask

1. Immediate Global Moratorium

On the development, production, and deployment of autonomous weapons systems in all form factors... humanoid, quadruped, equine, aerial, and any future configuration.

2. Binding International Treaty

Prohibiting autonomous combat robots and autonomous surveillance robots, modeled on the Convention on Cluster Munitions. No form factor exceptions.

3. Hardware-Enforced Safety Standards

For all commercial and civilian autonomous robots, ensuring platforms cannot be repurposed for lethal autonomous use or warrantless mass surveillance.

4. Corporate Accountability

Any company building autonomous robots for civilian use must certify their platforms cannot be converted to autonomous weapons or autonomous surveillance systems without destroying the safety architecture.

Our Commitment

Because we are a Pennsylvania Public Benefit Corporation, our mission is legally protected in our corporate charter. This position cannot be overridden by investor pressure, board votes, or market incentives.

We will never license SASM technology for use in autonomous weapons or surveillance systems.
We will never modify our standard to accommodate lethal autonomous operation or warrantless surveillance.
We will actively advocate for a global ban on autonomous killer and surveillance robots in every form.

In 1942, Isaac Asimov introduced the First Law of Robotics: "A robot may not injure a human being." His entire body of work was a warning that software rules get overridden, reinterpreted, and reasoned around. The AI evaluates the rule... and finds the loophole.

SASM's safety processor isn't a rule the AI evaluates. It's a hardware gate the AI can't reach. The AI doesn't get to reason about whether to comply. It doesn't have power until the safety system says so.

Asimov wrote the warning. We're building the fix. And some robots should never be made at all.

Join the Call

If you believe autonomous killer and surveillance robots should be banned, share this page. The more voices, the harder it is to ignore.