Abstract
As discussions around prompt engineering intensified in early 2023,
a popular metaphor emerged: the user as an “AI chanter”
who recites incantations to elicit responses from a model.
This note argues that the metaphor is not merely inaccurate,
but structurally harmful.
It mispositions human agency, obscures responsibility,
and subtly trains users to relate to AI systems in a subordinate or mystical frame.
The Problem with the Metaphor
The term “chanting” implies a ritual act:
- a predefined utterance
- directed at a higher or opaque force
- producing outcomes without accountability
Even when used jokingly, metaphors shape cognition.
Repeated use of this framing encourages an implicit model where:
- humans request
- systems decide
- responsibility dissolves
This is not how AI systems should be used,
nor how they function in practice.
Delegation, Not Invocation
Interacting with AI systems is not an act of supplication.
It is an act of delegation.
The human:
- assigns tasks
- specifies constraints
- evaluates results
- bears responsibility for outcomes
The system:
- executes within given boundaries
- proposes candidate outputs
- does not own intent or judgment
This relationship is managerial, not mystical.
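To make the contrast concrete, here is a minimal sketch in Python
of what the delegation framing looks like as a control structure.
Everything in it is hypothetical: `generate` stands in for any model backend,
and `meets_constraints` marks where the human’s own acceptance criteria would live.
The point is the shape, not the implementation:
the human assigns and evaluates; the system only proposes.

```python
# A minimal sketch of the delegation framing, not a real integration.
# All names here are hypothetical stand-ins.

from dataclasses import dataclass, field


@dataclass
class Task:
    instruction: str                  # what the human assigns
    constraints: list[str] = field(default_factory=list)  # boundaries to respect
    max_attempts: int = 3             # the delegator decides when to give up


def generate(prompt: str) -> str:
    """Placeholder for a real model call; returns a candidate output."""
    return f"[candidate output for: {prompt[:40]}...]"


def meets_constraints(output: str) -> bool:
    """Human-owned evaluation stub; real acceptance criteria go here."""
    return bool(output.strip())


def delegate(task: Task) -> str | None:
    """Assign, constrain, evaluate. The system proposes; the human disposes."""
    prompt = task.instruction
    if task.constraints:
        prompt += "\nConstraints:\n" + "\n".join(f"- {c}" for c in task.constraints)
    for _ in range(task.max_attempts):
        candidate = generate(prompt)      # the system executes within boundaries
        if meets_constraints(candidate):  # the human evaluates the result
            return candidate              # acceptance is the delegator's decision
    return None                           # so is rejection: escalate, don't absolve


result = delegate(Task(
    instruction="Summarize the incident report in three sentences.",
    constraints=["no speculation", "cite section numbers"],
))
print(result if result is not None else "escalated to human review")
```

Note where responsibility lands in this sketch:
rejection and escalation are decisions the delegator makes,
not outcomes the system owns.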
On Counterarguments
Some argue that “chanting” in fiction does not imply prayer but rather the expenditure of personal resources (e.g., mana points).
While this reading is coherent in fantasy contexts,
the metaphor still fails in technical ones.
The key issue is not belief in gods,
but the absence of responsibility attribution.
Chanting—regardless of its fictional mechanics—suggests:
- the act itself is sufficient
- outcomes are externalized
- failure belongs to the system, not the actor
This is incompatible with real-world AI deployment.
Naming Shapes Posture
What we call a role defines how people inhabit it.
Calling users “chanters” subtly suggests:
- loss of agency
- abdication of control
- playful distance from consequences
Alternative framings—however imperfect—are structurally healthier:
- AI operator
- AI trainer
- task delegator
- system supervisor
Earlier, I half-jokingly used the term
“Etiquette Trainer for Inorganic Life”.
The joke concealed a serious point:
what is being trained is not intelligence,
but norms of interaction, constraint, and response.
Why This Matters
As AI systems move from novelty to infrastructure, language that infantilizes users or mystifies systems becomes dangerous.
Poor metaphors produce:
- sloppy delegation
- blurred accountability
- misplaced trust
- governance failures
This is not a semantic dispute.
It is an early design choice in how society learns to stand in relation to non-human systems.
Status
This note records an early intuition: that terminology around prompting is not neutral.
Later work would reframe this more formally as issues of agency, delegation, and institutional design.
The warning remains valid.
Note on Provenance
This text was originally published on my personal blog in April 2023, during an early phase of public discussion around prompt engineering and human–AI interaction metaphors.
It is preserved here with minimal revision as part of an ongoing effort to document the conceptual lineage behind later work on norms, delegation, and governance.
The original context predates the current wave of agent-based systems and institutional AI deployment.