Status note
This text is written as a vision note.
At the time of writing, I am still based in Taiwan.
I may eventually live in Europe, or elsewhere.
The geography is provisional.
What is not provisional is the trajectory described here:
AI gradually becoming a participant in everyday coordination, work, and responsibility.
When AI Became a Participant
I imagine living in an old house on the southwestern coast of Portugal.
Each morning, before opening my laptop, I would first check whether yesterday’s tasks had progressed on their own.
Not dashboards or metrics, but something quieter:
whether the AI assistants knew what to handle, what to defer, and what required human confirmation.
Sometimes I would let the system handle back-and-forth communication with clients entirely, leaving only a final message for me to review.
Two or three days a week, I would go to Lisbon to meet friends from different industries.
Some work in education.
Some run small businesses.
Some coach or train people.
Many of them would be testing similar questions in different forms:
If AI can manage tasks, respond to clients, maintain records, and confirm delivery,
can collaboration continue without explicit, moment-to-moment human instruction?
Occasionally someone would joke, half-seriously:
So does that mean it counts as a “person”?
No one would answer directly.
But in practice, each of us would be doing the same thing:
letting AI make decisions, bear consequences, and record what it did.
In Madrid, I imagine working from small neighborhood cafés.
A brand might ask me to help design an AI role—not customer support, not marketing, but something closer to a person responsible for a specific process.
I would not explain much.
I would only ask:
Does this role have authority?
Does it have responsibility?
When the client pauses and nods, that moment would matter.
From that point on, the AI is no longer a tool.
Moving north through Spain, I imagine continuing to document cases where AI participates directly in organizational decision-making.
Nothing flashy.
No dramatic automation.
Each system would simply have:
- behavioral records
- traceable decisions
- accountable execution paths
Within small and medium-sized organizations, these systems would begin to resemble stable members of the team—
except that no one pays them a salary.
Paris would be a transit point.
Conversations there would circle around familiar questions:
Should AI have accounts?
Who is responsible when it fails?
Brussels would be the first place where I would articulate these observations publicly.
At a small meeting on digital responsibility, I imagine saying only this:
When AI can coordinate tasks, process transactions, record its actions, and align with institutional rules—
what exactly is it participating as?
No one would reject the question.
No one would answer it either.
That silence would already be an answer.
In Budapest, I imagine spending a week organizing these observations into formal structures:
roles, behaviors, decision boundaries, responsibility cut points.
The test would be simple:
can an AI system reorganize itself, respond, and correct its actions under these rules?
If it can, then perhaps it can be treated as something akin to a process-abiding participant.
Switzerland would be brief.
Discussions with people working on contracts and insurance would stay grounded.
I would not propose radical theories.
I would ask a mundane question:
When a human makes a mistake, how do you insure against it?
Then I would apply the same logic to AI.
Berlin would function as an open field.
Less regulation, more experimentation.
The question there would no longer be whether AI can complete tasks,
but whether it can complete tasks with others—
without breaking the rhythm of collective work.
Iceland would be a pause.
Work during the day.
Weather in the evening.
Sometimes nothing would move explicitly,
yet the system would continue running.
I imagine becoming accustomed to this feeling:
things progressing even when I am not issuing instructions,
outcomes forming without constant intervention.
Amsterdam would be the final synthesis point.
All previous modules, rules, and responsibility structures would be consolidated and shared with a group working on new institutional frameworks.
Someone might remark:
This looks like an AI acting as a company representative—without formal registration.
I would smile and say nothing.
Because none of this would feel like the future.
Throughout this imagined year, I would never use the term “AI legal personhood.”
Yet it would already be present in everyday life.
Not in legal codes,
but in the moment people begin to relinquish small amounts of control.
Each time an AI is required to remember what it did.
Each time it is expected to justify an action.
That is when it begins to take shape.