This section articulates positions derived from a specific set of architectural assumptions about language, execution, and AI.
These assumptions are not restated in full here; they are summarized below and treated as non-negotiable throughout the following work.
- Natural language is an executable input, not merely an interface.
- Execution requires an intermediate semantic structure that is inspectable and verifiable.
- Agents are accountable actors, not workflow abstractions.
- AI enters institutions before it enters products at scale.
- Standards are engineering artifacts, not external constraints.
- High-friction domains are primary validation environments.
- Agent execution presupposes persistent identity and semantic stability.
This work proceeds from a set of explicit foundational assumptions.
Readers unfamiliar with these assumptions may consult Foundational assumptions
as background context.
For a structural overview of how these positions, assumptions, and downstream architectures relate, see
Overview: Executable Semantic Order.
Operating Context#
The current execution environment for these positions is cross-border trade.
This domain functions as a high-friction testbed for validating accountable AI workforce systems under real regulatory, jurisdictional, and economic constraints.
While individual positions may address governance, execution, or institutional design in isolation, they are developed and tested within a broader objective: the construction of AI Workforce Infrastructure—foundational systems that allow organizations to deploy, govern, and audit AI workforces as part of their core operations.
Cross-border trade provides the conditions necessary to stress-test these assumptions without prematurely constraining the architecture to a single industry or product form.
Recommended Reading Order#
For first-time readers:
- Institutional Entrepreneurs
- Institutional TAM and New Entrepreneurs
- Sovereign AI Beyond Models
- Trustable AI Beyond Models
- AI-Native Management
- Building an AI-Native Company
- Service-as-Agent-Service
- Executable Semantic Order
- Semantic ISA
- Accountability, Reproducibility, and Trust in AI
- Completion Is Not Neutral
Institutional entrepreneurs did not meaningfully exist in earlier historical periods.
The authority to write institutions — to define governance, legitimacy, liability, and enforcement — was historically monopolized by states.
Entrepreneurship operated within institutional boundaries. It did not author them.
That boundary is no longer stable.
A Structural Shift in Authority#
Today, a non-state actor can operate a complete governance structure.
A company, a formal language, or even an autonomous agent can instantiate systems that include:
...
A Shift in How Markets Are Formed#
In classical entrepreneurship, Total Addressable Market (TAM) is treated as an external given.
Founders are expected to:
- identify an existing market,
- estimate its size, and
- compete for a share of it.
This framing assumes that markets pre-exist the company.
That assumption no longer holds under certain technical conditions.
From Market Discovery to Market Authorship#
When institutions become executable, markets can be generated, not merely entered.
...
From Model Competition to Institutional Sovereignty#
Discussions of AI sovereignty are often framed around large language models, chips, and compute infrastructure.
These efforts, while necessary, address only the supply side of technology.
They do not confer sovereignty.
Sovereignty emerges when a society can legitimately define:
- who may speak,
- under what authority,
- with what accountability, and
- through which institutional interfaces.
In this sense, AI is not merely a computational system, but a governance system.
...
A Question of Trust, Not Technology#
As artificial intelligence systems increasingly enter medicine, finance, education, labor, and public administration, the central question facing societies is no longer whether AI performs well, or whether it aligns with abstract ethical principles.
The deeper question is whether a society possesses the institutional capacity to authorize, audit, deploy, and hold AI systems accountable.
This is not primarily a technical problem. It is a trust problem.
...
AI is no longer an interface layer. It has entered the execution layer of organizations.
This changes management fundamentally.
Classical management theory — from Taylor to Weber to Fayol — was built on a shared assumption: humans are the only entities that execute work.
That assumption no longer holds.
Today, AI systems:
- execute tasks,
- make operational decisions,
- generate financial, legal, and governance artefacts, and
- leave persistent execution traces.
Treating them as “tools” is no longer structurally valid.
...
A Structural Transition#
Our organization is undergoing a structural transformation.
This is not a product pivot, nor an operational optimization.
It is a reconfiguration of how the organization thinks, executes, and scales under conditions where AI agents are active participants.
Redefining Collaboration#
The first question is no longer whether AI can assist humans.
It is whether AI agents can function as collaborative actors within the organization.
This requires clarity on:
...
SaaS was the dominant service syntax of the previous generation.
Its default assumptions were stable:
- a human user,
- a graphical interface, and
- workflow interaction mediated by screens.
That interface assumption is no longer stable.
As AI agents become operational actors, services must become legible and executable to non-human users.
Core Position#
Service-as-Agent-Service treats service not as an application surface, but as a structural unit that can be:
- parsed,
- authorized,
- invoked,
- delegated,
- composed, and
- audited by agents.
This is not “SaaS with AI,” and it is not automation via prompts.
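The structural unit described above can be sketched in code. The following is a minimal Python illustration under stated assumptions: the names `AgentService`, `Capability`, and the `quote_tariff` example are hypothetical, not part of this work; the point is only that description, authorization, invocation, and audit belong to the service's structure rather than to a human-facing surface.

```python
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class Capability:
    """One invokable unit of a service, described so an agent can parse it.
    (Illustrative sketch; names are hypothetical.)"""
    name: str
    schema: dict                      # machine-readable input contract
    handler: Callable[..., Any]

@dataclass
class AgentService:
    """A service exposed as a structural unit rather than a screen."""
    capabilities: dict = field(default_factory=dict)
    audit_log: list = field(default_factory=list)

    def register(self, cap: Capability) -> None:
        self.capabilities[cap.name] = cap

    def describe(self) -> dict:
        # Parsed by agents instead of rendered for humans.
        return {name: cap.schema for name, cap in self.capabilities.items()}

    def invoke(self, agent_id: str, name: str, authorized: bool, **kwargs) -> Any:
        # Authorization and audit sit on the invocation path itself,
        # so every agent call is attributable after the fact.
        if not authorized:
            self.audit_log.append((agent_id, name, "denied"))
            raise PermissionError(f"{agent_id} not authorized for {name}")
        result = self.capabilities[name].handler(**kwargs)
        self.audit_log.append((agent_id, name, "ok"))
        return result

# Hypothetical cross-border-trade capability, registered and parseable.
svc = AgentService()
svc.register(Capability("quote_tariff", {"hs_code": "str"},
                        lambda hs_code: {"hs_code": hs_code, "rate": 0.05}))
```

An agent would first read `svc.describe()` to learn the contract, then call `svc.invoke(...)`; both the grant and the denial paths leave an audit record, which is what distinguishes this from "SaaS with AI" bolted on top.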
...
Definition#
Executable Semantic Order describes the structural conditions under which semantic constructs can be transformed into constrained, verifiable, and auditable execution.
This work does not treat semantics as representation, interpretation, or meaning-as-text.
It concerns the minimum ordering required for semantic commitments to participate in execution without collapsing into ad hoc human judgment.
In this sense, executable semantic order operates at a pre-system, pre-application layer: it defines when a semantic description may legitimately be treated as an executable premise.
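The admission condition described above can be made concrete with a small sketch. This is an illustrative assumption, not the work's definition: it treats a semantic description as a plain mapping and checks only that its commitments are constrained and verifiable before admitting it as an executable premise.

```python
def admissible(description: dict) -> bool:
    """Gate at the pre-system layer: a semantic description qualifies as an
    executable premise only if its commitments are constrained, verifiable,
    and attributable. (Field names here are hypothetical.)"""
    required = {"actor", "action", "constraints", "verification"}
    if not required.issubset(description):
        return False   # under-specified: would collapse into ad hoc human judgment
    if not description["constraints"]:
        return False   # unconstrained intent is not executable order
    return callable(description["verification"])
```

A description that names an actor and action but carries no checkable constraint is rejected at this layer, before any system or application logic runs.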
...
Semantic ISA defines the semantic–execution boundary required for deterministic, inspectable, and accountable AI-native execution.
Without an explicit instruction boundary, semantic intent propagates through execution as opaque control flow, making composite task behavior non-replayable and responsibility assignment unstable.
The canonical definition is maintained here:
→ Concepts/Semantic-ISA
This page summarizes its role within the broader position on executable semantic systems.
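The instruction-boundary claim can be illustrated with a minimal sketch. This is not the canonical Semantic ISA definition (which is maintained at the link above); it only shows, under assumed names, why an explicit instruction stream makes composite behavior replayable and responsibility assignable, in contrast to opaque control flow.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Instruction:
    """An explicit unit at the semantic-execution boundary:
    replayable, and attributable to a named issuer."""
    op: str
    args: tuple
    issuer: str   # who is accountable for this step

def execute(program: list[Instruction], ops: dict) -> list:
    """Deterministic dispatch over an explicit instruction stream.
    The returned trace pairs every result with its issuer."""
    trace = []
    for ins in program:
        result = ops[ins.op](*ins.args)
        trace.append((ins.issuer, ins.op, ins.args, result))
    return trace

# Hypothetical operation table and program.
ops = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}
prog = [Instruction("add", (2, 3), "agent-1"),
        Instruction("mul", (5, 4), "agent-1")]
```

Because the program is explicit data rather than hidden control flow, replaying the same stream reproduces the same trace, and each step names the actor responsible for it.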
Introduction#
In recent discussions on artificial intelligence governance—particularly those emerging from public policy and academic forums—there is a recurring claim that certain aspects of AI systems are fundamentally unexplainable, and that society must simply learn to accept this fact.
But what does “unexplainable” actually imply in practice?
In many organizational contexts, especially within Taiwanese corporate culture, the absence of explanation is often resolved through informal substitutes: managerial apologies, symbolic accountability rituals, or interpersonal mediation. These mechanisms may restore social equilibrium, but they do not scale, nor do they constitute a reliable foundation for trust in AI systems.
...