This section documents earlier systems and experiments that informed, but do not constitute, my current research.
These projects were developed prior to the formal articulation of my present theoretical framework. They should not be read as implementations of that framework, but as exploratory systems that surfaced persistent structural constraints later addressed explicitly.
They reflect early structural intuitions around execution, coordination, composition, and constraint, expressed across different layers of system interaction.
Taken together, these works trace a gradual progression: from human input mediation, through execution abstraction and compositional execution order, to falsification mechanisms and coordination models.
Each stage exposed limitations that could not be resolved at the level of tooling or workflow alone; in that sense, these projects ultimately necessitated the later semantic and ontological formulation, rather than merely preceding it.
Censer was an exploratory proposal for governing the deployment and execution of machine learning models under conditions of unclear legal liability.
The project investigated how model execution could be made conditional, revocable, and compensable through institutional mechanisms. Central to the design was the concept of verifiable claims: explicit commitments about model behavior that could be challenged and falsified, and that, if violated, would trigger predefined consequences such as rollback, suspension, or compensation.
Rather than constraining execution at the semantic or runtime level, Censer placed responsibility and enforcement within a governance and smart-contract framework. Deployment rights, auditing incentives, and insurance reserves were distributed among multiple stakeholders, reflecting an early attempt to externalize accountability for machine learning execution.
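The verifiable-claims mechanism can be illustrated with a minimal sketch. All names here (`Claim`, `challenge`, the suspension consequence) are hypothetical and do not reflect Censer's actual interfaces or its smart-contract layer; the sketch only shows the shape of a checkable commitment with a predefined consequence.

```python
from dataclasses import dataclass, field
from typing import Callable, List

# Illustrative sketch of a verifiable claim: an explicit, checkable
# commitment about model behavior with a predefined consequence.
# Names and structure are hypothetical, not Censer's actual design.

@dataclass
class Claim:
    description: str
    check: Callable[[dict], bool]     # returns True if the claim holds
    consequence: Callable[[], str]    # invoked when the claim is violated
    violations: List[dict] = field(default_factory=list)

    def challenge(self, evidence: dict) -> str:
        """Test the claim against submitted evidence."""
        if self.check(evidence):
            return "upheld"
        self.violations.append(evidence)
        return self.consequence()

# A claim that the model's false-positive rate stays below 5%.
claim = Claim(
    description="false positive rate < 0.05",
    check=lambda ev: ev["fpr"] < 0.05,
    consequence=lambda: "suspended",
)

assert claim.challenge({"fpr": 0.03}) == "upheld"
assert claim.challenge({"fpr": 0.12}) == "suspended"
```

The essential point is that the consequence is bound to the claim in advance, so enforcement does not depend on interpreting the model itself.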
...
SenseTW was an open-source civic-tech project under the g0v community, aiming to build a long-term public-issue tracking and civic-participation platform.
Rather than being a typical news site, the project emphasized persistent attention, structured issue mapping, ongoing commentary, and collaborative documentation — seeking to prevent critical social and political issues from fading once media focus moves on.
The initiative explored how public awareness and communal memory could be sustained through digital infrastructure: by turning ephemeral discussions into persistent, structured, and shareable records.
...
TrustableAI was my first startup effort, focused on operationalizing responsibility and risk controls in machine learning deployment pipelines.
Through its product Augit, the company explored integrating fairness testing, dataset versioning, model documentation, compliance-oriented certification concepts, and early notions of AI insurance into a unified CI/CD workflow for machine learning systems. The objective was to determine whether deployment readiness could be assessed and enforced through engineering processes rather than ad-hoc review.
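A deployment-readiness gate of the kind Augit explored can be sketched as below. The check names and thresholds are illustrative assumptions, not Augit's actual criteria; the point is that each requirement is explicit and automatable, so the pipeline, not an ad-hoc reviewer, decides whether a model ships.

```python
# Hypothetical sketch of a CI/CD readiness gate: each check is an
# explicit, automatable criterion, and the model is deployable only if
# every check passes. Check names and thresholds are illustrative.

def readiness_report(model_meta: dict) -> dict:
    checks = {
        "fairness": model_meta.get("demographic_parity_gap", 1.0) < 0.1,
        "dataset_versioned": "dataset_version" in model_meta,
        "documented": bool(model_meta.get("model_card")),
    }
    return {"checks": checks, "deployable": all(checks.values())}

report = readiness_report({
    "demographic_parity_gap": 0.04,
    "dataset_version": "v2.1",
    "model_card": "model-card.md",
})
assert report["deployable"]
```

A model missing any of the three artifacts fails the gate, which is exactly the enforcement-through-process question the company was testing.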
...
ke-e was an experimental library exploring property-based and generative testing techniques, developed to systematically probe input spaces and structural assumptions in software systems.
The project focused on generating constrained yet variable data in order to expose boundary conditions, invariant violations, and hidden failure modes. Testing was framed not as verification against expected outputs, but as falsification through structured perturbation.
At the time, ke-e addressed practical concerns in testing and data robustness. In retrospect, it articulated an early infra-level intuition: meaningful testing requires mechanisms for systematically stressing structural commitments, even before those commitments are semantically articulated.
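The falsification-through-perturbation framing can be made concrete with a minimal generator-and-search loop. ke-e itself was a JavaScript library; this Python sketch uses hypothetical names (`gen_ascii_string`, `falsify`) purely to illustrate the technique.

```python
import random

# Minimal sketch of falsification through structured perturbation:
# generate constrained but variable inputs and search for a
# counterexample to a stated property. Names are illustrative;
# ke-e itself was a JavaScript library.

def gen_ascii_string(rng, max_len=20):
    """Generate a bounded, printable string (a constrained input space)."""
    n = rng.randrange(max_len + 1)
    return "".join(chr(rng.randrange(32, 127)) for _ in range(n))

def falsify(prop, generator, runs=500, seed=0):
    """Return a counterexample to `prop`, or None if none was found."""
    rng = random.Random(seed)
    for _ in range(runs):
        case = generator(rng)
        if not prop(case):
            return case
    return None

# A deliberately false property: "generated strings are at most 10 chars".
counterexample = falsify(lambda s: len(s) <= 10, gen_ascii_string)
assert counterexample is not None and len(counterexample) > 10
```

The test passes or fails not against an expected output, but against a structural commitment the generator systematically stresses.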
...
ICLang was an early experimental system investigating coordination languages and process composition in distributed environments.
The project explored whether heterogeneous processes could be composed and executed through explicit coordination structures, without requiring semantic interpretation of their internal logic. Each process was treated as a black box, constrained only by its input–output behavior and communication patterns.
At the time, the system was motivated by practical questions around orchestration and service composition. In retrospect, it articulated an implicit structural intuition: execution order and behavioral constraints can be defined independently of understanding, intention, or interpretation.
...
HyExec was an experimental system for wrapping Unix shell commands as fluent, composable objects in JavaScript.
Rather than treating command execution as opaque strings or immediate side effects, HyExec exposed execution parameters—arguments, options, flags, ordering, and grouping—as manipulable structures prior to execution. Command invocation was deferred until explicitly triggered, allowing execution to be described, rewritten, and composed before occurrence.
The project focused on separating execution description from execution itself. Fluent chaining and dynamic command grouping were used to express execution order as a first-class interface, independent of the underlying shell semantics.
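The separation of execution description from execution itself can be sketched as follows. HyExec was a JavaScript library; this Python sketch with a hypothetical `Cmd` class only illustrates the underlying idea of a deferred, manipulable command description.

```python
# Python sketch of HyExec's core idea (the real library was JavaScript):
# a command is a manipulable description, built fluently and only
# rendered when explicitly asked. Names are illustrative.

class Cmd:
    def __init__(self, program):
        self.program = program
        self.parts = []

    def opt(self, flag, value=None):
        """Append an option; returns self to allow fluent chaining."""
        self.parts.append(flag)
        if value is not None:
            self.parts.append(str(value))
        return self

    def arg(self, value):
        """Append a positional argument; also chainable."""
        self.parts.append(str(value))
        return self

    def argv(self):
        """Render the deferred description to an argv list (no execution)."""
        return [self.program, *self.parts]

# The invocation is described and composed first, rendered only at the end.
cmd = Cmd("tar").opt("-czf").arg("site.tar.gz").arg("public/")
assert cmd.argv() == ["tar", "-czf", "site.tar.gz", "public/"]
```

Because `argv()` is the only point of contact with the shell, everything before it, reordering, rewriting, grouping, happens on the description rather than on side effects.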
...
Kuansim was a civic technology project initiated within the g0v community to address the persistence of public attention on social and political issues.
Rather than competing with news media on immediacy or coverage, the platform was designed around follow-up as a first-class structure. Issues introduced on Kuansim were intentionally tracked over time, resisting the common pattern in which public discussion fades once mainstream attention shifts elsewhere.
The project treated attention as a finite and fragile resource, requiring structural support to be sustained. By organizing commentary, ongoing updates, and solution-oriented discussion within a single platform, Kuansim explored how civic awareness could be made durable without assuming expert participation from users.
...
Harrow was an experimental Python implementation of the Arrow abstraction from functional programming, realized as executable pipelines.
The project treated execution not as isolated function calls, but as composable structures supporting forward and backward composition, branching, fan-in/fan-out, parallel execution, and looping. Execution order was expressed through formal combinators rather than implicit control flow.
By modeling execution pipelines as Arrow compositions, Harrow made execution order itself a first-class, manipulable object. State propagation, feedback, and trace-like behaviors were represented structurally within the execution model.
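The combinator style can be sketched in a few lines. The combinator names follow Haskell's `Control.Arrow` conventions (`>>>` as `then`, `&&&` as `fanout`); the `Arrow` class itself is an illustrative reconstruction, not Harrow's actual API.

```python
# Sketch of Arrow-style pipeline combinators. `then` plays the role of
# Haskell's >>> (forward composition) and `fanout` the role of &&&.
# The class is illustrative, not Harrow's actual API.

class Arrow:
    def __init__(self, f):
        self.f = f

    def __call__(self, x):
        return self.f(x)

    def then(self, other):
        """Forward composition: run self, feed its output to other (>>>)."""
        return Arrow(lambda x: other.f(self.f(x)))

    def fanout(self, other):
        """Run both arrows on the same input, pairing the results (&&&)."""
        return Arrow(lambda x: (self.f(x), other.f(x)))

double = Arrow(lambda x: x * 2)
inc = Arrow(lambda x: x + 1)

pipeline = double.then(inc)   # x -> 2x + 1
assert pipeline(3) == 7

split = double.fanout(inc)    # x -> (2x, x + 1)
assert split(3) == (6, 4)
```

Execution order lives entirely in the combinator expression, so a pipeline can be inspected, recomposed, or extended before anything runs.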
...
BoLiau was an early experimental framework for task and mission orchestration, developed to manage chained operations and deferred execution in script-based environments.
The system treated tasks as composable units, allowing workflows to be constructed through sequencing, continuation, and lazy execution. Individual tasks were considered operational black boxes, coordinated through explicit control structures rather than semantic interpretation.
At the time, the project addressed practical needs around workflow automation and batch operations. In retrospect, it reflects an early engagement with process composition and execution ordering, without yet articulating semantic constraints or ontological commitments.
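The sequencing-with-lazy-execution pattern can be sketched as below. The `Task` class and its `then`/`run` methods are hypothetical names for illustration, not BoLiau's actual interface.

```python
# Illustrative sketch of composable, lazily executed tasks: each task is
# an opaque callable wired to its predecessor by an explicit control
# structure, and nothing runs until the chain is forced. Names are
# hypothetical, not BoLiau's actual interface.

class Task:
    def __init__(self, step, upstream=None):
        self.step = step          # opaque callable: the task body
        self.upstream = upstream  # previous task in the chain, if any

    def then(self, step):
        """Sequence another step after this one (still deferred)."""
        return Task(step, upstream=self)

    def run(self, value=None):
        """Force the whole chain, from the first task to this one."""
        if self.upstream is not None:
            value = self.upstream.run(value)
        return self.step(value)

log = []
chain = (
    Task(lambda _: log.append("fetch") or [3, 1, 2])
    .then(lambda xs: log.append("sort") or sorted(xs))
    .then(lambda xs: log.append("sum") or sum(xs))
)

assert log == []          # nothing has executed yet
assert chain.run() == 6   # forcing runs the steps in order
assert log == ["fetch", "sort", "sum"]
```

The coordinator never inspects what a step does; it only fixes when each step may run, which is precisely the black-box stance described above.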
...
VSGUI was an early library for mediating human interaction with system execution through constrained graphical dialogs.
Built on top of Zenity and UCLTIP, the project treated user input not as free-form text, but as structured, bounded signals—such as confirmations, file selections, password entries, and progress acknowledgements—suitable for direct integration into execution workflows.
The library focused on reducing human interaction to a set of explicit input primitives that could be safely propagated into automated system execution. Human responses were constrained, typed, and failure-aware, allowing scripts to incorporate human-in-the-loop decisions without collapsing execution structure.
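The input-primitive idea can be sketched as follows. VSGUI itself wrapped Zenity via UCLTIP (a Ruby-side stack); this Python sketch only constructs dialog invocations and interprets exit statuses, without executing anything. `zenity --question` and `zenity --file-selection` are real Zenity modes, and Zenity reports 0 for an affirmative answer and 1 for no/cancel; the function names are hypothetical.

```python
# Sketch of typed, bounded input primitives in the spirit of VSGUI
# (the original wrapped Zenity from Ruby via UCLTIP). Only command
# construction and exit-status interpretation are shown; nothing is
# executed here. Function names are illustrative.

def confirm(text):
    """A yes/no confirmation: the only possible answers are yes or no."""
    return ["zenity", "--question", f"--text={text}"]

def pick_file():
    """A file selection: the answer is constrained to be a path."""
    return ["zenity", "--file-selection"]

def interpret_confirm(returncode):
    """Map Zenity's exit status to a typed result (0 = yes, 1 = no/cancel)."""
    return returncode == 0

assert confirm("Deploy now?") == ["zenity", "--question", "--text=Deploy now?"]
assert pick_file() == ["zenity", "--file-selection"]
assert interpret_confirm(0) is True
assert interpret_confirm(1) is False
```

Because every human response arrives as a typed value or an explicit failure, a script can branch on it without breaking the surrounding execution structure.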
...