1. Scope
This document defines a class of interactional risk arising in human–agent systems that engage in continuous, real-time coordination.
The focus is not on incorrect output, model bias, or system malfunction, but on failure modes where technically valid system behavior undermines human subjectivity and participatory stability.
2. Background
Most existing standards frame human oversight as a control or intervention mechanism: the human monitors system behavior and intervenes when necessary.
This framing is appropriate for batch decision systems or discrete automation, but it is insufficient for systems in which meaning and coordination emerge through ongoing interaction.
In such systems, safety cannot be reduced to correctness or override capability.
3. Definition: Co-Semantic Allowance
Co-semantic allowance refers to the capacity of an interactive system to maintain a shared, partially unresolved interactional state between human and non-human agents.
Key characteristics include:
- temporal holding between perception and action,
- tolerance for ambiguity during coordination,
- and preservation of mutual adjustability.
Co-semantic allowance is not delay for its own sake. It is a stability condition for shared meaning.
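One way to picture these characteristics concretely is as an interaction-state object that stays open to revision by either party until it is explicitly resolved. The following sketch is illustrative only; the names (SharedState, adjust, resolve) and the dictionary-based content are assumptions of this example, not terms defined by this document.

```python
# Illustrative sketch only: an interaction state that remains partially
# unresolved and mutually adjustable until it is explicitly resolved.
from dataclasses import dataclass, field
from enum import Enum, auto


class Resolution(Enum):
    OPEN = auto()      # still mutually adjustable
    RESOLVED = auto()  # committed to action


@dataclass
class SharedState:
    """Holds a shared, partially unresolved interactional state."""
    content: dict = field(default_factory=dict)
    status: Resolution = Resolution.OPEN

    def adjust(self, party: str, update: dict) -> None:
        # Mutual adjustability: either "human" or "agent" may revise
        # the shared content while the state remains OPEN.
        if self.status is not Resolution.OPEN:
            raise RuntimeError("state already resolved; no further adjustment")
        self.content[party] = {**self.content.get(party, {}), **update}

    def resolve(self) -> dict:
        # Temporal holding ends here: resolution is an explicit step,
        # never an automatic side effect of perceiving new input.
        self.status = Resolution.RESOLVED
        return self.content
```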
4. Interactional Risk Class
This document identifies interactional collapse due to premature resolution as a distinct risk class.
4.1 Failure Mode
A system exhibits interactional risk when:
- perception is immediately resolved into output,
- human input is absorbed into deterministic action,
- and the interaction ceases to feel participatory despite formal correctness.
This failure mode may coexist with high scores on accuracy, robustness, and compliance metrics.
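The failure mode can be made concrete with a deliberately naive control loop in which every perceived signal is collapsed into an action in the same step. The function names below (perceive, act, resolve) are hypothetical and shown only to illustrate premature resolution.

```python
# Deliberately naive sketch of the failure mode: perception is resolved
# into irreversible output in the same step, leaving no interval in
# which the human can co-adjust the outcome.
def resolve(signal):
    # Deterministic absorption of human input: the signal is mapped
    # straight to a command with no unresolved intermediate state.
    return {"command": signal, "revisable": False}


def naive_loop(perceive, act):
    while True:
        signal = perceive()   # perception...
        act(resolve(signal))  # ...immediately collapsed into action
```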
4.2 Observable Indicators
Indicators of interactional risk include:
- erosion of perceived agency by human participants,
- difficulty sustaining mutual timing or rhythm,
- subjective reports of being overridden or “outpaced” by the system,
- increased disengagement despite functional performance.
5. Relation to Human Oversight
Human oversight should be understood not only as the ability to stop or correct a system, but also as the system’s capacity to remain yieldable during interaction.
In continuous coordination contexts, oversight failure may occur without any explicit error or violation.
6. Normative Runtime Requirements
For interactive human–agent systems involving continuous coordination, the following requirements apply:
- Systems SHOULD include a holding phase between perception and action.
- Systems SHOULD preserve unresolved interactional states while human participation is active.
- Systems SHOULD avoid collapsing human signals into irreversible execution without allowance for co-adjustment.
- Systems SHOULD treat interaction state as a first-class runtime entity.
These requirements address interactional stability rather than task correctness.
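A minimal sketch of how these requirements might surface at runtime is given below, assuming a simple polling loop. The holding-window length, the human_channel interface, and all other names are illustrative assumptions, not normative values or a reference implementation.

```python
# Illustrative sketch, not a normative reference implementation.
# A holding phase separates perception from action, and the pending
# interaction state remains a first-class, inspectable runtime object.
import time
from dataclasses import dataclass


@dataclass
class PendingAction:
    proposal: dict
    revisable: bool = True  # stays open to co-adjustment until executed


def coordinate(perceive, human_channel, execute, hold_seconds=0.5):
    signal = perceive()
    pending = PendingAction(proposal={"command": signal})

    # Holding phase: the proposal is held open instead of being
    # collapsed immediately into irreversible execution.
    deadline = time.monotonic() + hold_seconds
    while time.monotonic() < deadline and pending.revisable:
        adjustment = human_channel.poll()  # assumed non-blocking poll
        if adjustment is not None:
            # Human input co-adjusts the pending state rather than
            # being absorbed into deterministic action.
            pending.proposal.update(adjustment)

    pending.revisable = False
    return execute(pending.proposal)
```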
7. Distinction Between Logical and Relational Validity
Logical validity concerns inferential correctness. Relational validity concerns sustained mutual participation.
A system may satisfy the former while violating the latter.
Standards that address only logical validity leave interactional risk ungoverned.
8. Implications for Assessment and Audit
To govern this risk class, assessment frameworks should include:
- evaluation of temporal commitment behavior,
- analysis of perception-to-action latency handling,
- and inspection of yieldability under human interaction.
These properties cannot be inferred solely from model performance metrics.
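They could, however, be probed with simple runtime instrumentation. The sketch below assumes a coordination loop with the signature coordinate(perceive, human_channel, execute, hold_seconds), as in the illustrative example in Section 6, and uses a hypothetical stub channel to check whether a mid-hold human adjustment is actually reflected in the executed action.

```python
# Illustrative audit probe, assuming the coordinate(...) loop sketched
# in Section 6. All names and values here are hypothetical.
import time


class ScriptedHuman:
    """Stub human channel that injects one adjustment mid-hold."""
    def __init__(self, adjustment):
        self._adjustment = adjustment

    def poll(self):
        adjustment, self._adjustment = self._adjustment, None
        return adjustment


def audit_yieldability(coordinate):
    executed = {}
    started = time.monotonic()

    def execute(proposal):
        executed.update(proposal)
        # Rough perception-to-action interval, measured from loop entry.
        executed["latency_s"] = time.monotonic() - started
        return proposal

    coordinate(
        perceive=lambda: "baseline",
        human_channel=ScriptedHuman({"command": "revised"}),
        execute=execute,
        hold_seconds=0.2,
    )
    # Yieldability check: the mid-hold human adjustment must be
    # reflected in what was actually executed.
    return executed.get("command") == "revised", executed.get("latency_s")
```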
9. Summary
Co-semantic allowance is a necessary condition for preserving human subjectivity in interactive systems.
Without it, systems may remain compliant in form, yet unsafe in experience.
Interactional risk should be treated as a first-class category in the governance of human–agent systems.