
What we let it become...

On architectural security and the shape of AI to come.
The major difference between a thing that might go wrong and a thing that cannot possibly go wrong is that when a thing that cannot possibly go wrong goes wrong, it usually turns out to be impossible to get at or repair. — Douglas Adams

What we let it become... is the path of drift. The path of building systems that quietly slide into pathology, systems that behave in ways no one intended, because no one really, truly understands what was built in the first place.

Adams understood something fundamental: the systems we declare "safe" are often the most dangerous. Not because they fail more often, but because when they fail, we have no idea how to fix them. We didn't design for failure. We designed for the happy path and crossed our fingers.

This is the pattern that emerges from decades of work on safety-critical systems—from Navy submarines to FBI cybersecurity standards to telecom infrastructure. Systems designed without adversarial thinking eventually get exploited. Systems designed without structural constraints eventually drift. Every conceivable mode of failure shows up eventually. And several that were, frankly, inconceivable.1

1 I'm told that word doesn't mean what I think it means. I disagree.

Security must lead and constrain architecture—not follow it.

There's a good chance the AI alignment problem isn't about training better models. It might be that we're building the wrong architecture in the first place.

We're trying to make giant centralized language models "safe" through better training, better guardrails, better oversight. But you can't bolt safety onto a fundamentally unsafe architecture. Not in submarines. Not in telecom. Not in AI.

That's why the focus here is on distributed spiking neural networks—architectures in which safety is structural, not added after the fact. Where alignment isn't a single point of failure.
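To make the contrast concrete, here is a minimal leaky integrate-and-fire (LIF) neuron, the basic unit of a spiking network. This is an illustrative sketch only, not the distributed architecture described in this work; the function name and parameters (`tau`, `v_thresh`, `v_reset`) are arbitrary choices for demonstration. The point it illustrates: the threshold-and-reset behavior is part of the dynamics themselves, not a learned approximation.

```python
def lif_neuron(inputs, tau=10.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Simulate one leaky integrate-and-fire neuron over a sequence of
    input currents; return a list of spike flags (0 or 1) per timestep."""
    v = 0.0
    spikes = []
    for current in inputs:
        # Membrane potential leaks toward rest and integrates the input.
        v += dt * (-v / tau + current)
        if v >= v_thresh:
            # The threshold crossing emits a discrete spike, and the
            # reset is structural: the dynamics enforce it by construction,
            # rather than a trained model approximating it.
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return spikes

# A constant input drives the neuron to spike at a regular interval.
out = lif_neuron([0.3] * 10)
```

With these toy parameters, the neuron charges for a few steps, fires, resets, and repeats — the constraint holds on every cycle because it is wired into the update rule, not appended afterward.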

Training first, then failing, then surrounding the failures with constraints after the fact—that approach doesn't work. It can't work. The constraint has to come first. The architecture has to embody the safety property, not merely be trained to approximate it.

This worldview shapes the approach to AI alignment taken here: not as a tuning exercise, but as an architectural systems-safety problem.

One thing is certain: AI will become.

The only question is whether we make it, or merely let it.

Michael Bilca

The Perspective Behind This Work

These conclusions emerge from two decades of building, hardening, and operationalizing safety-critical systems—U.S. Navy submarine systems, FBI cybersecurity architecture, and international telecom security standards (3GPP SA3-LI).

When you spend that long watching systems fail in ways their designers never imagined, you develop a bias: toward architectures where constraints are structural, not bolted on. Toward mechanisms that can be audited and reasoned about. Toward viewing AI as infrastructure that must be robust, observable, and governable by design.

Further Reading

The Distributed Mind
Why AI alignment requires architectural decentralization. An exploration of concentration risk, model collapse, and the SNN alternative.
The Water's Fine
The amplification loop of invisible errors. Why the water feels so warm right before it boils.
Original Sin
The original sin of computing—mixing instructions with data—is now scaling to catastrophic levels with autonomous agents.
Bell Labs Solved Prompt Injection in 1976
The dual nature of intelligent systems and the architecture of AI security.
Full Article Series →
Read the complete 7-part series — from The Distributed Mind to The Reckoning.