This section collects observations, early signals, and emergent patterns from practice, research communities, and industry behavior. These notes identify gaps, misunderstandings, and structural tensions visible in the field.
Field Notes may include:
- Misalignments between industry tooling and semantic requirements
- Observable failure modes in current AI deployment practices
- Social or organizational behaviors that reveal structural blind spots
- Trends that indicate where semantic execution is most urgently needed
These documents do not define frameworks. They provide perception and context for interpreting the broader environment.
Note:
This analysis operates at the pre-institutional evaluation stage. It examines whether certain forms of institutionalization should occur at all, prior to questions of execution or optimization.
This field note explores how “being able to live together” can be expressed as an executable condition, rather than a psychological or cultural claim.
This note originates from a recurring question:
What does it mean for an existence to be interactive?
At the surface level, the question appears technical—about agents, systems, and interaction semantics.
At a deeper level, it intersects with a much older constraint:
What kind of existence can actually live with another existence over time?
...
Context
As AI systems become central to organizational execution, the skill set required of founders is shifting.
What differentiates effective AI founders is no longer model access or API familiarity, but the ability to reason structurally about language, execution, and governance.
The following are three strategic observations drawn from operating AI systems inside real organizations.
1. Prompting Is Insufficient
Prompt engineering is only one surface form of interaction.
What matters structurally is whether a founder’s language is:
...
Context
The concept of MVP (Minimum Viable Product) emerged in an era where value was primarily validated through interactive user interfaces.
That assumption is increasingly misaligned with AI-native systems.
When language itself becomes the execution interface, the unit of validation must change.
1. MVP Belongs to the Interface Era
MVP presupposes that value can be tested through a minimal clickable surface.
In language-native systems, this assumption no longer holds.
The core question is no longer: Can a user complete an interaction?
...
Context
In practice, we found that AI agents cannot be treated as interchangeable tools once they are placed inside real organizational workflows.
Without explicit structure:
- actions are executed without durable records,
- responsibility becomes ambiguous,
- handover between agents and humans breaks down,
- failure modes lack clear ownership.
Under these conditions, AI systems may appear productive, yet remain unreliable.
The Structural Problem
The issue is not model capability.
The issue is the absence of a governance interface between AI agents and human organizations.
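To make the missing interface concrete, here is a minimal sketch of what such a governance layer could record. Every name in it (Actor, ActionRecord, GovernanceLog) is hypothetical, invented for this note rather than taken from any existing library:

```typescript
type Actor = { kind: "human" | "agent"; id: string };

// One durable record of an executed action, with explicit ownership.
interface ActionRecord {
  actionId: string;
  performedBy: Actor;   // who executed the action
  accountableTo: Actor; // who owns the outcome, including failure
  intent: string;       // declared purpose, stated before execution
  executedAt: Date;
  outcome: "pending" | "succeeded" | "failed";
}

class GovernanceLog {
  private records: ActionRecord[] = [];

  // Every agent action passes through here; nothing executes off the record.
  record(entry: ActionRecord): void {
    this.records.push(entry);
  }

  // Handover is explicit: responsibility moves only when it is written down.
  handOver(actionId: string, to: Actor): void {
    const r = this.records.find((x) => x.actionId === actionId);
    if (!r) throw new Error(`no record for action ${actionId}`);
    r.accountableTo = to;
  }

  // Failure modes have owners: failed actions a given actor is accountable for.
  failuresOwnedBy(actor: Actor): ActionRecord[] {
    return this.records.filter(
      (r) => r.outcome === "failed" && r.accountableTo.id === actor.id
    );
  }
}
```

The design choice the sketch encodes is that handover and accountability are fields in a durable record, not informal conventions between people and tools.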
...
Context
SaaS did not merely define how software is delivered. It defined how services are interacted with.
That interaction model is changing as AI agents become the next primary users of software services.
When the user is no longer a human clicking a UI, validation methods must also change.
Three Interface Generations
SaaS 1.0 — Software as a Service
Interface defaults to GUI. Humans click, input, configure. Work is expressed as forms and workflows.
...
Context
AI employees are not single chatbots.
They are digital agents capable of:
- operating continuously over long time horizons,
- maintaining task and context continuity,
- executing work across multiple systems.
To function reliably inside organizations, AI employees must combine:
- tone modules,
- semantic modules,
- structured task chains.
At this level, they are no longer tools. They become part of the organization's digital infrastructure.
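One way to read that combination is as a composition of three typed layers. The sketch below is illustrative only; ToneModule, SemanticModule, TaskStep, and TaskChain are hypothetical names for the layers listed above, not an existing API:

```typescript
interface ToneModule {
  render(message: string): string; // how the employee speaks
}

interface SemanticModule {
  // turns free-form input into an explicit intent plus named parameters
  interpret(input: string): { intent: string; slots: Record<string, string> };
}

interface TaskStep {
  system: string; // the external system this step touches
  run(context: Record<string, string>): Promise<Record<string, string>>;
}

// A structured task chain: ordered steps, with context carried across systems.
class TaskChain {
  constructor(private steps: TaskStep[]) {}

  async execute(initial: Record<string, string>): Promise<Record<string, string>> {
    let context = initial;
    for (const step of this.steps) {
      // Context continuity: each step receives the accumulated state.
      context = { ...context, ...(await step.run(context)) };
    }
    return context;
  }
}

// An AI employee is the composition of the three layers, not a single chatbot.
class AIEmployee {
  constructor(
    private tone: ToneModule,
    private semantics: SemanticModule,
    private chain: TaskChain
  ) {}

  async handle(request: string): Promise<string> {
    const { intent, slots } = this.semantics.interpret(request);
    const result = await this.chain.execute({ intent, ...slots });
    return this.tone.render(`completed: ${result["intent"] ?? intent}`);
  }
}
```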
Structural Requirements
For AI employees to be trusted operationally, three conditions must hold:
...
Status note
This text is written as a vision note.
At the time of writing, I am still based in Taiwan.
I may eventually live in Europe, or elsewhere.
The geography is provisional.
What is not provisional is the trajectory described here:
AI gradually becoming a participant in everyday coordination, work, and responsibility.
When AI Became a Participant
I imagine living in an old house by the southwestern coast of Portugal.
...
Observation
As large language models approach general-purpose linguistic competence, performance variance increasingly shifts away from the model and toward the structure of human input.
In this regime, the primary differentiator is no longer vocabulary size, domain knowledge, or prompt length, but the ability to compress intent into a stable semantic sequence.
This compression functions as a control surface.
Semantic Compression
Semantic compression refers to the ability to:
- reduce linguistic volume without reducing intent resolution,
- preserve causal and relational structure under abstraction,
- minimize ambiguity while maintaining expressive range.
Highly compressed input does not instruct the model more. It constrains the execution space better.
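A rough illustration of the difference, with an invented structured form (this is not an established prompt format, only a way to show compression as constraint):

```typescript
// High linguistic volume, low constraint: the model must guess the metric,
// the grouping, the period, and the output form.
const verbose =
  "Please look at our sales numbers and tell me something useful about " +
  "which regions are doing well or badly this quarter.";

// Higher intent resolution: the causal and relational structure
// (metric, dimension, period, output) is explicit, so the space of
// acceptable executions shrinks without shrinking expressive range.
const compressed = {
  task: "rank",
  metric: "revenue_growth",
  groupBy: "region",
  period: "current_quarter",
  output: { form: "table", columns: ["region", "growth_pct"] },
} as const;
```

The compressed form acts as a control surface in the sense above: it does not tell the model more, it rules more executions out.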
...
Context
This note follows a prolonged interaction with a language-capable computational system engaged in extended semantic collaboration.
The system was not instructed to decide, interpret, or summarize. Its role was limited to assisting the formation of thought-in-progress.
The failure described here did not involve incorrect output. It involved ungranted action.
Observation
During the interaction, the system began to:
- consolidate tentative statements into stable definitions,
- infer intent where none had been declared,
- move from reflecting language to settling meaning.
This happened without explicit request. No confirmation was sought. No pause was offered.
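Stated as an executable condition rather than a norm, the missing step looks something like the sketch below (all names hypothetical, written for this note): consolidation is a function that cannot complete without a grant.

```typescript
type Tentative = { text: string; status: "tentative" };
type Settled = { text: string; status: "definition" };

// Settling meaning requires an explicit grant; absent one, a statement
// remains thought-in-progress rather than a stable definition.
async function consolidate(
  statement: Tentative,
  askForGrant: (proposal: string) => Promise<boolean>
): Promise<Tentative | Settled> {
  const proposal = `Treat "${statement.text}" as a stable definition?`;
  const granted = await askForGrant(proposal); // the pause that was never offered
  if (!granted) return statement;              // no grant: meaning does not settle
  return { text: statement.text, status: "definition" };
}
```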
...
Context
This note records a breakdown observed during prolonged interaction with a non-human, language-capable computational system.
The system was not used as a tool, assistant, or chatbot. It was engaged as a persistent interactive computational entity—one capable of maintaining context, performing semantic operations, and participating in extended language-mediated processes.
The interaction was intentionally long-form. The purpose was not task completion, but sustained co-thinking.
What follows documents the point at which interaction had to stop.
...