Employees waste hours with "AI junk"

Milena Merten

Jan 07, 2026

Düsseldorf. When Stefan Müller reads emails, he comes across one phrase more and more often: "I hope this email finds you well." The trainer and lecturer specialising in artificial intelligence (AI) knows immediately that this email was not written by a human.

This is because the German wording he encounters is a word-for-word translation of the English pleasantry "I hope this email finds you well", for which there is no natural German equivalent. Some American-influenced AI language models use it anyway, and many German users adopt AI-generated texts without further revision.

What seems like a minor style issue is part of a larger problem: beyond emails to customers or colleagues, AI systems also produce visually appealing presentations and linguistically polished strategy papers. At first glance, they appear professional and well-founded. But take a closer look and you will find gaps, inconsistencies and errors.

Among experts, a term has become established for such superficial AI content: "AI slop". In a work context, it is also referred to as "AI workslop": AI-generated content that is so insubstantial that it generates extra work and costs productivity.

The tech companies' promise was quite different: AI was supposed to make the world of work more productive and efficient, reducing workloads rather than creating more work. Instead, companies and employees are being flooded with bad content. How can managers reverse this trend?

Lack of expertise leads to poor results

Müller trains employees in the use of AI at companies such as Thyssenkrupp and Motel One, as well as at small and medium-sized enterprises and public administrations. He sees one fundamental reason for the growing amount of "AI junk" in companies: many employees lack access to powerful AI tools. As a result, they turn to free versions of AI tools on their private computers or smartphones. "This leads to poorer results," says Müller.

The licensed, paid versions of language models such as ChatGPT, Perplexity or Gemini research longer and more thoroughly. "They hallucinate less, provide more reliable information and can achieve a technical depth you can really work with," says Müller. In many companies, however, only a small minority of employees are authorised to use paid AI tools.
