Unjustified Confidence and the Violation of Human Subjectivity in ML-Based AI

Scope and Target of This Critique. This essay is deliberately limited in scope. The critique applies to machine-learning–based AI systems, particularly large-scale generative models trained via statistical pattern extraction from historical data. It does not address symbolic systems, rule-based automation, or explicitly constrained decision engines. The focus here is not performance, intelligence, or usefulness, but a specific failure mode that emerges from the dominant ML paradigm. Background: Why ML-Based AI Behaves This Way. Most contemporary AI systems are built on machine learning architectures that: ...

December 21, 2025 · Tyson Chen

What Is a Regulatory Market for Artificial Intelligence?

As global demand for artificial intelligence (AI) regulation increases, regulatory strategies across regions have begun to diverge. The contrast between the European Union and the United States highlights both the challenges and opportunities of transatlantic cooperation. The EU has largely adopted a centralized, comprehensive regulatory approach, while the U.S. favors a more decentralized, risk-management-oriented strategy. This divergence reflects deeper philosophical differences in how technological governance is conceived, and it raises fundamental questions about the future of global AI deployment—and the structure of the regulatory market that will govern it. ...

March 24, 2024 · Tyson Chen

Verifiable Statements as Acceptance Tests in AI OEM Delivery

Observation: In traditional OEM and ODM processes, suppliers provide a manifest alongside delivered hardware or software. This document functions as evidence that contractual deliverables have been met and is later referenced in acceptance tests or disputes. In AI model delivery, this role is weak or absent. Models are often delivered as opaque binaries, leaving the purchaser unable to verify provenance, training conditions, or behavioral claims. This creates a structural asymmetry: the supplier knows what the model is, while the purchaser bears the operational and legal risk. ...
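To make the manifest idea concrete, here is a minimal sketch of what a machine-checkable "verifiable statement" for a model delivery could look like. This is not the author's scheme: the field names, the HMAC-based signature, and the helper functions (model_digest, make_statement, verify_statement) are illustrative assumptions; a production setup would more likely use asymmetric signatures or an attestation format such as in-toto.

# Minimal sketch (illustrative only): a supplier-issued statement that a
# purchaser can re-check during acceptance testing of a delivered model.
import hashlib
import hmac
import json

def model_digest(path: str) -> str:
    """SHA-256 over the delivered model artifact (the opaque binary)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def make_statement(model_path: str, supplier_key: bytes) -> dict:
    """Supplier side: assemble and sign the manifest shipped with the model."""
    body = {
        "artifact_sha256": model_digest(model_path),
        "training_data_snapshot": "dataset-2023-06-30",  # hypothetical claim
        "eval_suite": "acceptance-v1",                    # hypothetical claim
        "claimed_metrics": {"top1_accuracy": 0.91},       # hypothetical claim
    }
    payload = json.dumps(body, sort_keys=True).encode()
    body["signature"] = hmac.new(supplier_key, payload, hashlib.sha256).hexdigest()
    return body

def verify_statement(statement: dict, model_path: str, supplier_key: bytes) -> bool:
    """Purchaser side: signature and artifact hash must both check out."""
    body = {k: v for k, v in statement.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(supplier_key, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, statement["signature"])
            and body["artifact_sha256"] == model_digest(model_path))

In practice an acceptance test would also re-run the claimed evaluation suite and compare the results against claimed_metrics; the point of the statement is simply that every claim is pinned to a specific, hashable artifact rather than to the supplier's word.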

July 18, 2023 · Tyson Chen

On ITRI’s Trustworthy AI Verification Framework

This article, written in 2023 prior to the widespread deployment of large language models, reflects my assessment of Taiwan's early attempts to institutionalize trustworthy AI verification. In recent years, the Industrial Technology Research Institute (ITRI) in Taiwan has invested heavily in artificial intelligence infrastructure. One of its key initiatives is the development of a localized Trustworthy AI Evaluation and Verification System, aligned with international standards. The framework emphasizes several core dimensions: ...

May 19, 2023 · Tyson Chen