Articles

On AI alignment, structural safety, and the case for distributed intelligence

The Thesis

The Distributed Mind

Why AI alignment requires architectural decentralization. Centralized LLMs concentrate risk, distort intelligence through language, and collapse under synthetic feedback loops. A philosophical and technical case for distributed spiking networks.

Read article
A Data Exhaust Antidote

How spiking neural networks can save AI from itself. LLMs are running out of training data, but SNNs don't consume data; they generate it. Spike trains as a novel data modality, grounded in physical reality and endlessly renewable.

Read article
Alignment, Safety, and Security

A practical executive model for three distinct failure modes, how to triage them fast, and how to staff the right fixes when AI goes wrong.

Read article
Proof of Humanity, Proof of You

The only cure for plausible deniability is the elimination of deniability. All deniability. The convergence of AI safety, law enforcement, and privacy.

Read article
The Water's Fine

The amplification loop of invisible errors. Why the water feels so warm right before it boils. A look at how AI assumptions create a reality that diverges from truth.

Read article

The Architecture

Original Sin

We are walking into a wall with our eyes fully open. The original sin of computing—mixing instructions with data—is now scaling to catastrophic levels with autonomous agents.

Read article
Bell Labs Solved Prompt Injection in 1976

The dual nature of intelligent systems and the architecture of AI security. SS7's two-channel separation already solved the problem we're now rediscovering.

Read article
Berkeley Proved the Blueprint Works

A UC Berkeley team validated the two-channel architecture. Here's what they got right, what's missing, and why the hard problems remain.

Read article

Read the thesis. Then read the fix.