On AI alignment, structural safety, and the case for distributed intelligence
Why AI alignment requires architectural decentralization. Centralized LLMs concentrate risk, distort intelligence through language, and collapse under synthetic feedback loops. A philosophical and technical case for distributed spiking networks.
How spiking neural networks can save AI from itself. LLMs are running out of training data—but SNNs don't consume data, they generate it. Spike trains as a novel data modality, grounded in physical reality and endlessly renewable.
A practical executive model for three distinct failure modes, how to triage them fast, and how to staff the right fixes when AI goes wrong.
The only cure for plausible deniability is the elimination of deniability. All deniability. The convergence of AI safety, law enforcement, and privacy.
The amplification loop of invisible errors. Why the water feels so warm right before it boils. A look at how AI assumptions create a reality that diverges from truth.
We are walking into a wall with our eyes fully open. The original sin of computing—mixing instructions with data—is now scaling to catastrophic levels with autonomous agents.
The dual nature of intelligent systems and the architecture of AI security. SS7's two-channel separation already solved the problem we're now rediscovering.
A UC Berkeley team validated the two-channel architecture. Here's what they got right, what's missing, and why the hard problems remain.
Read the thesis. Then read the fix.