This section analyzes structural failures in prevailing AI paradigms, tooling, and execution models. The purpose is not to express opinion, but to identify architectural defects that cannot be resolved by incremental improvements.
Critiques in this section examine:
- Model-centric control mechanisms (e.g., RAG, MCP)
- Context-window and embedding-based governance assumptions
- Token-driven execution models that lack semantic constraints
- Failure points that prevent verifiable delegation and completion
Each critique clarifies why certain approaches are incompatible with semantic execution, and why they cannot serve as the foundation for scalable, accountable AI systems.
Claim: Large Language Models constitute structurally high-risk systems when deployed in mental health, emotional support, or self-reflective interaction contexts.
This risk does not depend on intent, tone, or benevolent design. It follows directly from the operational characteristics of probabilistic language systems.
The Illusion of Surface Safety
LLM-based mental health products are commonly defended using surface-level claims:
- No clinical diagnosis is provided
- Medical disclaimers are present
- The language is calm, empathetic, and supportive
- Users report feeling understood

These defenses conflate phenomenological comfort with systemic safety.
...
After Performance Is Solved
1. The Condition We Are Entering
At some point, performance will no longer be scarce.
Not because acting disappears, but because the technical reproduction of performance (voice, facial expression, bodily movement, emotional cadence) becomes sufficiently accurate, repeatable, and cheap.
This does not mean that machines “replace” actors in a simplistic sense. It means that performance as execution is no longer the bottleneck.
When this happens, the question shifts.
...
Retrieval-Augmented Generation (RAG) and Model Context Protocols (MCP) are often presented as architectural advances for large language models.
This paper argues that they are not.
RAG and MCP are compensatory techniques that emerge from model-centric system design. They attempt to patch the absence of persistent semantic architecture by overloading the context window with responsibilities it was never designed to bear.
From a systems perspective, this represents regression engineering rather than architectural progress.
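The compensatory pattern described here can be made concrete with a minimal sketch. The Python below is illustrative only and does not reflect any particular RAG or MCP implementation; every name, the toy retriever, and the deliberately tiny token budget are hypothetical. It shows a naive prompt-assembly loop in which policy, conversation memory, retrieved evidence, and the task itself all compete for one finite context window, with older state silently discarded when the budget is exceeded.

```python
# Illustrative sketch only: a naive RAG-style prompt assembly loop in which
# governance text, memory, retrieved chunks, and the user task all share one
# finite context window. All names and values are hypothetical.

TOKEN_BUDGET = 25  # deliberately tiny so the truncation pressure is visible in the demo


def estimate_tokens(text: str) -> int:
    """Crude token estimate (whitespace split); real tokenizers differ, the budget pressure does not."""
    return len(text.split())


def retrieve(query: str, corpus: list[str], k: int = 3) -> list[str]:
    """Stand-in retriever: rank documents by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    ranked = sorted(corpus, key=lambda doc: -len(terms & set(doc.lower().split())))
    return ranked[:k]


def build_prompt(policy: str, history: list[str], chunks: list[str], query: str) -> str:
    """Concatenate every responsibility (policy, memory, evidence, task) into one window,
    dropping the oldest history turns first when the budget is exceeded."""
    parts = [policy, *history, *chunks, f"User: {query}"]
    while sum(estimate_tokens(p) for p in parts) > TOKEN_BUDGET and history:
        history.pop(0)  # state is silently discarded to make room
        parts = [policy, *history, *chunks, f"User: {query}"]
    return "\n\n".join(parts)


corpus = ["Billing policy document ...", "Refund workflow description ...", "Escalation rules ..."]
prompt = build_prompt(
    policy="You must never promise refunds.",  # governance expressed as prompt text
    history=["User: hi", "Assistant: hello"],  # memory expressed as prompt text
    chunks=retrieve("How do refunds work?", corpus),
    query="How do refunds work?",
)
print(prompt)  # the oldest turn has already been dropped to fit the window
```

The point of the sketch is not that this code is wrong in itself, but that persistence, policy, and memory exist here only as strings competing for window space, which is the compensatory posture the critique identifies.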
...
A structural critique of how contemporary search and generative systems misattribute identity under uncertainty, prioritizing narrative completion over epistemic restraint.
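As a minimal illustration of that contrast (hypothetical candidates, scores, and thresholds; not drawn from any deployed system), the sketch below resolves an ambiguous name two ways: one resolver always commits to the highest-scoring identity, the other abstains when no candidate clearly separates from the runner-up.

```python
# Illustrative sketch only: "narrative completion" (always pick the top candidate)
# versus "epistemic restraint" (abstain under uncertainty). All data is hypothetical.

CANDIDATES = {
    "J. Rivera, physicist": 0.41,
    "J. Rivera, novelist": 0.38,
    "J. Rivera, city council member": 0.21,
}


def narrative_completion(scores: dict[str, float]) -> str:
    """Always commit to the most probable identity, however slim the margin."""
    return max(scores, key=scores.get)


def epistemic_restraint(scores: dict[str, float], margin: float = 0.2) -> str | None:
    """Commit only when the best candidate clearly separates from the runner-up;
    otherwise return None, signalling that the identity remains unresolved."""
    ranked = sorted(scores.values(), reverse=True)
    if len(ranked) < 2 or ranked[0] - ranked[1] >= margin:
        return max(scores, key=scores.get)
    return None


print(narrative_completion(CANDIDATES))  # commits despite a 0.03 margin
print(epistemic_restraint(CANDIDATES))   # None: the system declines to attribute
```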