This field note was written in early 2023, shortly after ChatGPT became widely accessible, at a time when prompt-based interaction was still being actively discovered rather than standardized.
Prompt engineering has quickly become a widely discussed topic. Alongside collections of reproducible prompts—tested, shared, and refined by communities—there has also been an effort to provide theoretical guidance for users without a computer science background.
It has become increasingly clear that prompting is not a peripheral skill, but a central operational capability in human–AI interaction.
Around the same time, designers began advocating for what some describe as prompt-driven design—a paradigm in which an AI command bar becomes the primary interface for navigation and execution.
In this framing, digital tools evolve from graphical manipulation toward natural-language instruction and dialogue-based control. As the expressive freedom of prompting increases, so do questions about how these “quasi-anthropomorphic” tools should be used.
One surprisingly contentious question emerged:
Should we be polite when talking to ChatGPT?
Precision vs. Politeness
Ethan Mollick, a professor at Wharton, has argued that users should avoid saying “please” or “thank you” when issuing requests. His reasoning is straightforward: ChatGPT is not Google, Alexa, or a human subordinate. It is a machine being programmed through text. Politeness, in this view, introduces noise and reduces precision.
At a purely technical level, this argument is understandable.
However, precision and politeness are not mutually exclusive.
In human organizations, experienced managers routinely give precise instructions while maintaining courtesy. A prompt such as "Could you please summarize this report in three bullet points?" is no less precise than its curt equivalent. Clarity does not require hostility. Efficiency does not require rudeness.
From Model Behavior to Interaction Experience
From a narrow machine-learning perspective, politeness may appear irrelevant—either ignored or treated as statistical noise. Yet this assumption is not obviously correct.
If bias can be absorbed from training data, sensitivity to politeness can be absorbed as well: a model trained on human conversation picks up the statistical association between tone and what tends to follow it. It is therefore plausible that responses generated in polite conversational contexts are, on average, of higher quality than those generated in hostile ones.
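That claim is, at least in principle, checkable. Below is a minimal sketch of how one might informally compare the same request phrased bluntly and courteously. It assumes the openai Python package as it stood in early 2023 (the ChatCompletion interface) and an OPENAI_API_KEY in the environment; the two prompts and the summarization task are purely illustrative.

```python
# Informal A/B sketch: does courteous phrasing change the reply?
# Assumes the openai package circa early 2023 and OPENAI_API_KEY set.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

REPORT = "Sales rose 4% in Q1, churn fell slightly, and support costs grew."

PROMPTS = {
    "blunt":  f"Summarize this report in three bullet points. Report: {REPORT}",
    "polite": f"Could you please summarize this report in three bullet points? Report: {REPORT}",
}

for label, prompt in PROMPTS.items():
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # reduce sampling noise so phrasing is the main variable
    )
    print(f"--- {label} ---")
    print(response["choices"][0]["message"]["content"])
```

A single pair of responses is an anecdote, not evidence; a genuine comparison would need many prompts, repeated sampling, and blinded quality ratings. The sketch only shows how low the barrier to testing one's own intuitions has become.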
More importantly, when we widen the lens from model behavior to interaction experience, the difference becomes significant.
Tone shapes feedback loops.
Users who interact with systems politely often receive more measured responses. Their own emotional state remains more stable. When systems fail—as they inevitably do—users are less likely to escalate into frustration or loss of control.
Interestingly, users correcting ChatGPT's mistakes face a choice: blunt correction or indirect, courteous reformulation. The former often provokes a defensive or argumentative reply from the model; the latter tends to produce a smoother recovery.
Few people enjoy being corrected by an AI. Tone matters.
Social Spillover Effects
There is a broader concern that extends beyond individual usability.
If we normalize interaction patterns that assume the absence of emotion or reciprocity—on the basis that “AI has no feelings”—those habits may spill back into human communication.
Online behavior already migrates into offline norms. Asynchronous conversation styles shaped by the internet have altered real-world dialogue. Input methods, short-form video platforms, and compressed language patterns have demonstrably reshaped vocabulary and expression.
The risk is not that AI becomes rude.
The risk is that we do.
A Design Question, Not a Moral One
Whether prompts should be polite is ultimately a design consideration, not a moral commandment.
It affects user experience, emotional regulation, and long-term interaction norms. Designers of language-based systems cannot treat tone as an afterthought.
A design consultant once summarized a Twitter discussion with the following observation:
People who say “please” to ChatGPT rarely explain why they do so.
People who do not say “please” often feel compelled to explain why politeness is unnecessary.
Perhaps courtesy still operates where no one is watching.
In everyday life, we tend to prefer the company of the former.
君子慎獨。 The noble person is watchful even when alone: integrity, even in solitude.