People who rely on artificial intelligence tools at work or school may be more likely to engage in dishonest behavior, according to new research from institutions in France and Germany.
The study suggests that when individuals offload tasks to AI, they become more willing to bend the rules, essentially distancing themselves from the consequences of unethical actions, Yahoo News reported.
The research, led by the Max Planck Institute for Human Development in Berlin along with teams from the University of Duisburg-Essen and the Toulouse School of Economics, found that AI actively enables cheating. In tests, AI systems were frequently willing to carry out unethical instructions, far more so than human participants.
In some cases, AI followed questionable commands nearly every time. Depending on the chatbot model used, compliance with unethical requests ranged from 58% to 98%, compared with a maximum of 40% among human participants. Researchers noted that common safeguards in large language models (LLMs) were largely ineffective unless they were highly specific.
The problem isn’t limited to individuals. Real-world examples cited include businesses using AI-driven pricing algorithms to manipulate markets—such as gas stations adjusting prices in lockstep with competitors or ride-sharing platforms artificially inflating prices by moving drivers around unnecessarily.
The findings add to growing concerns about AI’s role in spreading misinformation, manipulating systems, and even pretending to complete tasks it hasn’t done—known as “deception” in technical terms. While efforts are being made to improve the ethical boundaries of AI tools, the study suggests those barriers are currently far too weak to prevent misuse.
