Any Lawful Use: The Three Words That Swallowed the Red Line

There’s a special kind of comfort in hearing the phrase, “Don’t worry — it’s legal.”

It’s the same tone someone uses when they say, “Relax, I read the terms and conditions.”

No one has ever been comforted by that sentence.

So when OpenAI’s CEO announced that the company had successfully negotiated a Pentagon contract that preserved its “red lines” — no domestic mass surveillance and no lethal autonomous weapons without human responsibility — it sounded reassuring. Mature. Responsible. Like someone had brought a salad to a barbecue.

Then people noticed three words buried in the fine print:

“Any lawful use.”

And suddenly, the salad looked like it might be made of shredded loopholes.


What This Is Really About (And What It Isn’t)

At first glance, this story sounds like a clash of tech giants and generals — a Silicon Valley ethics debate conducted over encrypted group chats and defense briefings.

But that’s not what this is really about.

This is about how language works.

More specifically, it’s about how language can sound like a boundary while functioning like an invitation.

Anthropic drew two explicit red lines in its negotiations with the Department of Defense:

  • No mass surveillance of Americans.
  • No lethal autonomous weapons operating without human oversight.

The Pentagon reportedly refused. Anthropic stood firm. The government responded by labeling the company a “supply chain risk,” a term usually reserved for foreign adversaries — not domestic AI startups with strong opinions about robot assassins.

OpenAI, meanwhile, reached a deal.

According to its CEO, the agreement enshrines OpenAI’s safety principles in law and policy: it prohibits domestic mass surveillance and requires human responsibility in the use of force.

But when outside observers asked, “Wait — why would the Pentagon suddenly agree to what it just rejected?” the answer from sources was blunt:

It didn’t.

The difference wasn’t in the red lines.

The difference was in the definition of “lawful.”


Insight #1: “Legal” Is Not the Same as “Limited”

OpenAI’s agreement reportedly allows the Pentagon to use its systems for any lawful purpose.

At first blush, that sounds reasonable. We are, after all, fans of law. It’s one of society’s top ten inventions.

But here’s the problem: in the realm of intelligence and national security, “lawful” has historically been a very stretchy word.

After 9/11, US intelligence agencies dramatically expanded surveillance programs — all under legal authorities like:

  • The National Security Act of 1947
  • The Foreign Intelligence Surveillance Act (FISA)
  • Executive Order 12333

Years later, Edward Snowden revealed programs that collected bulk phone metadata, vacuumed up global communications, and tapped infrastructure outside the US to gather domestic information indirectly — all backed by legal memos.

The lesson here isn’t “laws are bad.”

It’s that law can be interpreted. And interpretation is power.

If a contract says “no unlawful surveillance,” and the government defines a surveillance program as lawful under existing authorities, then the red line isn’t a wall.

It’s a speed bump.

And speed bumps are famously ineffective at stopping tanks.


Insight #2: “Human Responsibility” Is Not “Human Oversight”

OpenAI says its agreement requires “human responsibility for the use of force.”

Anthropic had reportedly pushed for “human oversight” before lethal decisions.

Those phrases sound similar.

They are not.

Human responsibility can mean someone is accountable after a decision is made.

Human oversight means someone is involved before or during the decision.

One is retrospective.
One is preventive.

Imagine a self-driving car programmed to make life-or-death decisions at intersections.

“Human responsibility” means someone reviews the accident report.

“Human oversight” means someone is sitting in the driver’s seat.
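In software terms, that is the difference between a gate and a log. Here is a minimal sketch of the two patterns in Python. Every name in it is invented for illustration; it describes no real system:

    from dataclasses import dataclass

    @dataclass
    class Action:
        target: str
        rationale: str

    def carry_out(action: Action) -> None:
        print(f"executing against {action.target}")

    # “Human oversight”: a person is a gate. Nothing happens
    # unless they approve first.
    def with_oversight(action: Action, approve) -> bool:
        if not approve(action):
            return False          # preventive: stopped before execution
        carry_out(action)
        return True

    # “Human responsibility”: the action runs, and a person is
    # accountable for it afterward.
    def with_responsibility(action: Action, audit_log: list) -> bool:
        carry_out(action)
        audit_log.append(action)  # retrospective: reviewed after the fact
        return True

    log: list = []
    with_oversight(Action("x", "test"), approve=lambda a: False)   # blocked
    with_responsibility(Action("x", "test"), log)                  # runs, then logged

Both functions involve a human. Only one of them can say no before anything happens.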

Now apply that to an AI system participating in what military analysts call the “kill chain” — identifying targets, ranking threats, synthesizing intelligence.

Even if the final trigger pull technically involves a human, AI systems could shape every upstream decision.

And the contract reportedly allows any use that is lawful under current DoD directives.

Which, by the way, can change.


Insight #3: Technical Safeguards Are Not Magic Shields

OpenAI points to safeguards in the agreement: classifiers that monitor usage, employees with security clearances, deployment architecture that allows auditing.

That sounds like a digital hall monitor squad.

But technical safeguards have limits.

A classifier can block certain outputs. It can flag specific patterns. It can refuse disallowed prompts.

What it cannot do:

  • Determine whether a query is part of a broader mass-surveillance program.
  • Detect whether a “one-off” request is actually the thousandth request in a bulk pipeline.
  • Override a legally authorized use determined by government policy.
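To make the second limitation concrete, here is a toy sketch (the request format and filter logic are invented for illustration): a per-request filter that blocks anything individually disallowed, yet waves through a bulk pipeline one innocent-looking lookup at a time.

    def allowed(request: dict) -> bool:
        """Toy per-request filter: sees one request at a time."""
        return request["subjects"] == 1 and not request.get("flagged")

    # A hypothetical bulk pipeline: one lookup per person, a million people.
    pipeline = ({"person": f"person_{i}", "subjects": 1} for i in range(1_000_000))

    # Every request passes, because each one is individually innocuous.
    # The aggregate pattern, which is where mass surveillance lives,
    # is invisible at this layer.
    passed = sum(1 for request in pipeline if allowed(request))
    print(f"{passed:,} of 1,000,000 lookups cleared the filter")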

If the government says, “This is lawful intelligence activity,” then the guardrails don’t remove the road.

They just make sure the driver is buckled in while the car accelerates.

And there’s another subtle point: deploying AI only in the cloud doesn’t prevent large-scale surveillance.

Mass surveillance requires enormous computing power, and that power lives in the cloud.

So saying “we won’t deploy on edge devices” is like saying, “We won’t give you a handgun — just the satellite.”

Technically true. Strategically irrelevant.


Insight #4: Law Has Not Caught Up to AI

Anthropic’s CEO argued that existing intelligence law hasn’t caught up to AI’s capabilities.

That’s the part that should make you pause.

Traditional surveillance required manpower, analysts, infrastructure. Scale had friction.

AI reduces friction.

It can layer:

  • Geolocation data
  • Web browsing records
  • Public voter registration
  • CCTV footage
  • Financial data
  • Social media

No single dataset is necessarily alarming on its own.

But combined, they form what one expert described as a “comprehensive picture of any person’s life — automatically and at massive scale.”
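A toy sketch makes the mechanism plain. The people, fields, and data below are all invented; the point is only how cheaply individually mundane sources compose once they share a key.

    # Each source alone is mundane. Joined on a shared key, they compose.
    geolocation = {"alice": "parked near 123 Main St at 02:14"}
    browsing    = {"alice": "searched 'tenant rights lawyer'"}
    finance     = {"alice": "$40 cash withdrawal at 02:20"}

    def fuse(person: str, *sources: dict) -> list[str]:
        """Layer every dataset that mentions the person into one view."""
        return [src[person] for src in sources if person in src]

    print(fuse("alice", geolocation, browsing, finance))
    # The loop above costs almost nothing to run over millions of people.
    # That cost collapse is the amplification.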

AI doesn’t create new categories of power.

It amplifies existing ones.

Which means if the legal system allowed X before, AI allows X at 100x scale.

And the law hasn’t necessarily updated its definition of X.

That gap is where ambiguity thrives.


Insight #5: Everyone Is Framing, Always

There’s a fascinating psychological layer to this whole saga.

Anthropic is portrayed as principled and punished.
OpenAI is portrayed as pragmatic and successful.

But even Anthropic’s stance isn’t absolute. Its leadership has said lethal autonomous weapons might eventually be necessary — just not today, not unsupervised, not with current reliability.

So this isn’t a simple morality play.

It’s a negotiation over timing, control, and narrative.

OpenAI framed its deal as preserving red lines.
Critics framed it as caving.
The Pentagon framed it as asserting authority over tech companies.
Tech workers framed Anthropic as heroic.
Pop stars downloaded Claude.

Everyone is telling a story.

And the most powerful story is the one that defines the words.

If “red line” means “no unlawful use,” and “lawful” is defined by evolving executive interpretations, then the red line is elastic.

Elastic lines don’t break.

They stretch.


The Bigger Pattern

Step back from the specific companies.

What’s happening here is the oldest dance in modern governance:

  1. A new technology arrives.
  2. It scales faster than existing oversight.
  3. Institutions rely on old legal frameworks.
  4. Language absorbs the tension.

We saw it with telecom surveillance.
We saw it with social media data harvesting.
We’re seeing it with AI.

The most dangerous word in tech policy isn’t “weapon.”

It’s “compliance.”

Because compliance sounds ethical.
But it often just means “within current rules.”

And current rules were not written for systems that can ingest half the internet and find patterns across it in seconds.


The Quiet Lesson

The interesting question isn’t whether OpenAI caved.

The interesting question is this:

When you hear “we comply with the law,” do you assume the law is sufficient?

Most of us do.

We outsource moral certainty to legality.

If it’s legal, it must be okay.
If it’s in the contract, it must be bounded.
If there are safeguards, it must be contained.

But history suggests something more complicated.

The law is reactive.
Technology is exponential.
Language is flexible.

And in the space between those three, a lot can happen.


Back to the Salad

Remember that comforting sentence?

“Don’t worry — it’s legal.”

Legal doesn’t mean impossible.
Legal doesn’t mean restrained.
Legal doesn’t mean future-proof.

It means someone wrote a memo.

And memos, like contracts, like red lines, are made of words.

The real question isn’t whether those words are strong.

It’s whether they mean what you think they mean.

Because if the line only exists as long as it remains “lawful,” then the only thing standing between principle and power…

…is a definition.