The Enhancement
AI tools are making workers measurably slower. The approved vocabulary for saying so does not exist.
I. The Event
Something quiet is happening across white-collar workplaces, and it shows up most clearly in the numbers that nobody is publishing.
Workers at companies that adopted AI tools over the past year are reporting, in anonymous surveys and in the growing genre of unsigned workplace confessions, that the tools have made them slower. Not marginally. Measurably. One account describes timing every task over a two-week period and finding that the after-AI number was larger in every case, by 40 to 200 percent. Emails that took two minutes now take four, because the enhancement must be generated, read, rejected, and the original sent anyway. Meeting summaries must be reviewed and corrected by the people who attended the meeting. Status reports must be drafted by the tool, rewritten by the human, then run through a grammar assistant whose suggestions must be individually declined.
The pattern is consistent enough to have produced its own vocabulary. The tools are described as “productivity enhancers.” The additional time they require is called “adoption.” The screenshots posted to company Slack channels, showing AI outputs that were subsequently deleted, are called “wins.”
A recent study found that workers using an average of fourteen AI subscriptions at a cost of $278 per month report a net productivity loss of four hours per week. That number has the quality of a confession. It is specific enough to be believed and round enough to suggest that the real figure is worse.
II. The Chorus
The conversation about AI in the workplace has settled into well-worn grooves. Technology optimists describe the current friction as a learning curve. Give it time. The tools will improve. Early adoption is always clumsy. Skeptics counter that the tools are overhyped. Managers push for faster adoption. Labor advocates worry about displacement.
All of these positions share a common frame: that the question at hand is whether the tools work. Do they make people faster or slower? More productive or less? Are the current limitations temporary or structural? The debate is about efficacy. It generates heat, occasionally data, and the reassuring sense that we are asking the right questions.
What unites every side of this debate is the belief that the answer matters. That if we could determine, definitively, whether AI tools improve or hinder productivity, the finding would change behavior. That companies would adjust. That workers who are spending twenty extra minutes on tasks that never needed them would be freed from the obligation to pretend.
I am not sure this belief is warranted.
III. The Question Nobody Is Asking
Here is what strikes me about the workplace accounts I have been reading. The people writing them know the tools make them slower. Their managers, in many cases, probably know too. And yet the knowing has produced no change. The tools remain. The Slack channels remain. The peer review questions about “leveraging AI to enhance your workflow” remain.
The question that interests me is not “do the tools work?” That question has been answered, quietly, by the people using them. The question is: what happens to a workplace when the gap between what people experience and what they are expected to say about their experience becomes permanent?
Orwell thought about this more carefully than anyone. Not in 1984, with its dramatic telescreens, but in his essay “Politics and the English Language,” where he described how institutional language detaches from reality not through censorship but through habit. Words stop meaning things. They become signals of membership. When someone writes “leveraged AI to streamline weekly reporting” in a performance review, they are not describing what happened. They are performing participation in a shared fiction. And the performance is so routine, so low-stakes on any given Tuesday, that it barely registers as dishonest.
But I wonder about the accumulation. Simone Weil argued that attention is a moral act. That what we choose to pay attention to, and what we allow ourselves to ignore, is the foundation of ethical life. The companies in these accounts have built elaborate systems for measuring adoption. Screenshots, emoji reactions, usage dashboards, review questions. What they have not built is any system for measuring whether the work is actually better. The attention has been directed entirely at the performance of use, and not at all at the experience of using.
This means the feedback loop is severed. The tools cannot improve because the company cannot learn that they need improving. Every signal flowing upward says the tools are working, because every person generating those signals understands that saying otherwise is, as one account put it, a career decision. Not because there is a policy. Because there is something more effective than a policy. There is enthusiasm.
I should be fair. Maybe the tools will improve. Maybe the learning curve is real and the current awkwardness is temporary. But even if that turns out to be true, it will not be because the companies learned anything from their workers’ experience. It will be despite the fact that they built systems designed to ensure they never had to.
IV. The Truth
Anyone who has worked in an organization recognizes this pattern, even if the specific technology changes. There is always something the company has decided is the future. There is always a gap between the official story and the daily experience. And there is always a Slack channel, or its equivalent, where the performance of enthusiasm substitutes for the evidence of results.
What I keep coming back to is Erich Fromm’s observation that people do not always want the freedom they have. The workers in these accounts are free to speak. There is no policy preventing them from saying “I was faster before.” The coercion is softer than that. It lives in a peer review question, a VP’s favorite phrase, a team that has been renamed. And because it is soft, it cannot be protested. You can argue with a rule. You cannot argue with excitement without positioning yourself as the enemy of progress.
This may be the most stable form of institutional dishonesty there is. It requires no conspiracy. No enforcement mechanism. No one needs to lie, exactly. They need only describe their experience in approved vocabulary, which is a thing that humans have always been willing to do when the social cost of precision is high enough.
Every workplace has a small number of words that have quietly become their own opposites. Efficiency. Innovation. Optimization. The words survive. The meanings they once carried do not. And the danger, I think, is not that we stop knowing the truth. The people using these tools know. They know every time they delete the output and do the task themselves. The danger is that we develop a fluency in speaking about the world in terms that everyone recognizes as untrue, and that this fluency, because it is shared, begins to feel like solidarity rather than surrender.