For the business leaders and consultants who championed the use of artificial intelligence inside their organisations, the promise was efficiency: smoother workflows, clearer communication and more time for staff to focus on higher-value work.
But as generative AI tools such as OpenAI’s ChatGPT, Microsoft Copilot and Google’s Gemini have spread through offices, that promise is curdling into frustration.
What is ‘AI slop’?
Rather than simplifying work and saving time, these tools have left many workplaces awash with “AI slop”: machine-generated material that looks polished but is often riddled with small errors, awkward wording and unclear meaning, forcing colleagues to recheck, clarify and redo the work.
AI slop shows up in consulting reports, audit papers, legal filings, internal updates and meeting notes – material that looks complete but often turns out to be wrong, overwritten or misleading. It extends beyond writing, with AI producing charts, presentations and marketing materials that appear credible but can prove inaccurate or incomplete.
Benjamin Voyer, professor of behavioural science at ESCP, defines “AI slop” as “the rejection of an effortful path to knowledge creation and the adoption of an easy and automated path, made possible by the latest generation of large language models”.
When trust breaks down
For Voyer, AI slop reflects not a failure of technology but of behaviour: a cultural and psychological drift toward ease over effort, and automation over accountability. In other words, the problem is not AI itself but how people choose to use it.
The consequences are real. “The key issues at stake with AI use are trust and competence,” he explains. “Poor use of AI results in a trust deficit that can be similar to anyone in a team or company producing low-quality content.”
That erosion of trust can affect almost any workplace, because few roles are entirely independent; most rely on collaboration. When trust is chipped away, productivity suffers, as colleagues spend more time checking, verifying and second-guessing one another’s work.
“Expertise is also at stake as it questions the core and actual competencies of team members,” adds Voyer. “AI makes it easier for employees to mask their areas of weakness, and that raises issues for teams to understand who can be accountable and has expertise, and on what topics, in a team.”
Evidence of this is already showing up across sectors. In Australia, Deloitte was forced to repay part of a government contract after a report prepared with the help of AI contained factual mistakes, illustrating the reputational risks firms face when AI-generated work goes unchecked.
According to research from Stanford University, US desk-based employees estimate that roughly 15% of the work they receive is AI slop, much of which needs to be corrected or rewritten. The data point to a growing hidden workload: time lost not to creating, but to correcting.
The psychological toll
Voyer also warns of a psychological cost: over-reliance on AI, he argues, can make work less engaging and discourage independent thinking. “If producing daily tasks suddenly becomes less engaging and low-stakes, motivation will suffer, and employees risk experiencing ‘boreout’ – the opposite of burnout,” he says.
This risk turns the usual story about AI and productivity on its head. Instead of freeing people from low-value work, over-reliance on AI can make tasks unstimulating, creating a cycle of checking and fixing what machines have already produced.
The result is not greater efficiency but more work of a different kind: digital tasks created, ironically, by the very tools meant to save time.
How managers can respond
So what can be done about this? Managers, Voyer argues, need to harness AI’s productivity gains without letting its use erode the quality of their teams’ work. And he offers a way to do exactly that.
“Invite individuals in teams to offer two versions of each document they work on. Their own version, and their ‘best’ AI version. Then critically discuss the work in a team,” he says.
The goal, he suggests, is to normalise AI use rather than police it. Many employees are already using generative AI without disclosing it, largely because company rules remain unclear or inconsistent. This can lead staff to breach policy without realising, or to expose sensitive data through unapproved tools.
Voyer says his approach, by contrast, has two advantages. “This immediately removes suspicion of using AI, because it is accepted and part of the discussion. Then it stimulates people to do better than AI, not the least by criticising outputs.”
The human edge
Looking ahead, Voyer believes the rise of AI slop will force organisations to rethink how work is valued. “To me, it will help recategorise tasks into high and low value ones,” he says. “Low-value tasks should be done by AI, as they will be done in a faster and cheaper way. High-value tasks will also benefit from the research and synthesis power of AI.”
In other words, AI slop could prompt a return to human strengths such as creativity, critical thinking and collaboration.
It could also push organisations to define more clearly which tasks can be automated and which depend on genuinely creative thinking. “Divergent thinking remains in the realm of human intelligence,” Voyer says.
The best defence against AI slop, he argues, is not tighter control but open use, with teams willing to question, compare and improve on what machines produce. Preventing AI slop is not about rejecting technology, but reclaiming responsibility for how it is used.
That, Voyer suggests, is how efficiency becomes more than a promise.
