22 Examples of Incompetent AI Agents

3 Sept 2025

With any burgeoning technology, the reality often lags behind the hype. The term "AI agent" is hot right now, used to describe systems that act with a degree of autonomy, but AI has been operating in the wild and making its own decisions for far longer than the recent wave of "first agent" announcements would suggest. For every success story, there's a cautionary tale – a moment when an AI agent, despite its sophisticated algorithms, makes a remarkably poor decision and exposes significant liabilities for its deployers. These aren't just minor glitches; they're glaring examples of how unchecked autonomy or poorly designed parameters can lead to real-world consequences, from financial blunders to ethical quagmires. Let's unearth the 22 most spectacularly incompetent AI agents, each leaving a trail of head-scratching decisions and tangible liabilities.


22. Darktrace's AI Cybersecurity: The False Positives Flood

Darktrace, an AI-driven cybersecurity agent, has faced criticism for generating a high volume of false positives—incorrectly identifying legitimate network activity as malicious. An overwhelming number of false alerts can desensitize security teams, leading to real threats being missed amidst the noise. The liability lies in an AI system that, despite its sophisticated design, creates operational inefficiencies and compromises an organization's security posture through overzealous detection.
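
To make the trade-off concrete, here is a minimal, hypothetical sketch (not Darktrace's actual logic) of how an anomaly-score threshold determines how many benign events land in an analyst's queue versus how many real threats slip through. The scores and labels are made up.

```python
# Hypothetical sketch: an alert threshold trades missed threats against
# analyst-facing noise. Scores and labels below are invented for illustration.

def alert_stats(events, threshold):
    """Count real alerts, false positives, and missed threats at a threshold."""
    real = sum(1 for score, malicious in events if score >= threshold and malicious)
    noise = sum(1 for score, malicious in events if score >= threshold and not malicious)
    missed = sum(1 for score, malicious in events if score < threshold and malicious)
    return real, noise, missed

# (anomaly_score, is_actually_malicious) -- toy data
events = [(0.95, True), (0.91, False), (0.88, False), (0.70, False),
          (0.66, True), (0.45, False), (0.30, False), (0.10, False)]

for threshold in (0.9, 0.6, 0.3):
    real, noise, missed = alert_stats(events, threshold)
    print(f"threshold={threshold}: {real} real, {noise} false positives, {missed} missed")
```

Lowering the threshold catches the second real threat but buries it under false positives, which is exactly the desensitization problem described above.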


21. AI-Powered Drones: The Border Incursion Mishap

A U.S. border patrol agency's AI-powered surveillance drone, designed to autonomously monitor a specific border section, mistakenly crossed into a neighboring country's airspace due to a navigation system glitch. The drone, operating without direct human control, triggered an international incident and diplomatic tensions. This incident highlights the significant geopolitical liabilities of deploying autonomous AI agents in sensitive areas where a single miscalculation can have severe, real-world consequences.


20. Smart Home Voice Assistants: The Accidental Purchases

Smart home voice assistants have a documented history of misinterpreting commands or activating themselves from background conversations, leading to unintended actions like ordering products online without explicit user consent. These "accidental purchases" highlight the liability of AI agents that are overly sensitive or lack robust confirmation mechanisms, leading to consumer frustration, unexpected charges, and privacy concerns.
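
A hedged sketch of the kind of confirmation gate the paragraph alludes to follows; the `place_order` function, intent names, and confidence numbers are hypothetical, not any vendor's actual API.

```python
# Hypothetical confirmation gate for high-impact voice commands.
# place_order() stands in for a real fulfillment call.

HIGH_IMPACT_INTENTS = {"purchase", "unlock_door", "disable_alarm"}

def place_order(item):
    print(f"Ordering {item}...")

def handle_intent(intent, slots, wake_confidence, confirm):
    """Execute high-impact intents only after explicit confirmation."""
    if wake_confidence < 0.8:
        return "Ignored: wake word not confidently detected."
    if intent in HIGH_IMPACT_INTENTS:
        # confirm() models a spoken yes/no exchange with the user.
        if not confirm(f"Did you want me to {intent} {slots.get('item', '')}?"):
            return "Cancelled: no explicit confirmation."
    if intent == "purchase":
        place_order(slots["item"])
    return "Done."

# Background TV chatter triggers a low-confidence wake -> nothing is bought.
print(handle_intent("purchase", {"item": "dollhouse"}, 0.55, lambda q: True))
# A deliberate request still requires a spoken "yes".
print(handle_intent("purchase", {"item": "dollhouse"}, 0.95, lambda q: False))
```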


19. Microsoft's GPT-3 Powered Chatbot for a News Outlet: The Unvetted Content

A news outlet briefly experimented with an AI chatbot, powered by a GPT-3 variant, to generate news articles. The experiment ended quickly when the agent produced unvetted and sometimes nonsensical content, raising concerns about journalistic integrity and the spread of misinformation. This demonstrated the significant editorial and reputational liabilities of deploying powerful generative AI agents in content creation without stringent fact-checking and oversight.


18. Stock Market Flash Crashes: The Algorithmic Avalanche

Several stock market flash crashes have been linked to high-frequency trading algorithms interacting in unpredictable ways. These automated agents can trigger rapid price declines when market conditions change suddenly, creating systemic risks and significant financial losses for investors. This highlights the collective liability of interconnected AI agents operating at speeds beyond human comprehension, capable of causing widespread market instability.
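
One common mitigation is a pre-trade circuit breaker that halts an algorithm when prices move too far, too fast. The sketch below is a generic, hypothetical guard, not any exchange's or firm's actual controls.

```python
# Hypothetical pre-trade guard: stop submitting orders if price moves more
# than a set percentage within a short window. Prices are made-up ticks.
from collections import deque

class CircuitBreaker:
    def __init__(self, window=5, max_move=0.05):
        self.recent = deque(maxlen=window)  # last N observed prices
        self.max_move = max_move            # e.g. a 5% swing trips the breaker
        self.halted = False

    def observe(self, price):
        self.recent.append(price)
        lo, hi = min(self.recent), max(self.recent)
        if lo > 0 and (hi - lo) / lo > self.max_move:
            self.halted = True  # stay halted until a human reviews

    def may_trade(self):
        return not self.halted

breaker = CircuitBreaker()
for price in [100.0, 99.8, 99.7, 96.0, 92.5, 91.0]:  # sudden slide
    breaker.observe(price)
    print(f"price={price:6.2f}  may_trade={breaker.may_trade()}")
```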


17. Predictive Policing Agents: The Amplified Bias

Predictive policing agents, designed to autonomously identify areas where crime is likely to occur, have been criticized for amplifying existing biases in policing. By relying on historical crime data, which often reflects discriminatory enforcement practices, these agents can direct police resources disproportionately to minority neighborhoods. This raises serious ethical and legal liabilities for law enforcement that adopt such tools.
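
The feedback loop described above can be shown with a tiny, hypothetical simulation: if patrols follow predictions and recorded crime follows patrols, the model's own output inflates the data it is retrained on. All numbers are invented for illustration.

```python
# Hypothetical feedback-loop sketch: two districts with identical underlying
# crime, but recorded incidents depend on where patrols are sent.
import random

random.seed(0)
true_rate = {"district_a": 0.3, "district_b": 0.3}   # identical in reality
recorded = {"district_a": 12, "district_b": 8}       # skewed historical records

for year in range(5):
    total = sum(recorded.values())
    patrols = {d: recorded[d] / total for d in recorded}  # allocate by past records
    for d in recorded:
        # More patrols in a district -> more of its (identical) crime gets recorded.
        p_detect = min(1.0, true_rate[d] * patrols[d] * 2)
        recorded[d] += sum(random.random() < p_detect for _ in range(100))
    print(f"year {year}:", recorded)
```

Even though the two districts are identical by construction, the initial skew in the records compounds year over year.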


16. AI-Powered Customer Service Bots: The Frustration Feedback Loop

Many companies have embraced AI-powered chatbots for customer service, often leading to customer frustration when the bots fail to understand complex queries or provide adequate solutions. These "dumb" agents, unable to deviate from their programmed scripts, can escalate minor issues into significant grievances, leading to negative customer experiences and reputational damage for the company. The liability lies in deploying AI that alienates customers.
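
A minimal sketch of the escape hatch many of these deployments lack: hand the conversation to a human once the bot's intent confidence is low or the customer has retried too often. The intents and confidence numbers are hypothetical.

```python
# Hypothetical escalation policy for a scripted support bot.

SCRIPTED_ANSWERS = {
    "reset_password": "Here's how to reset your password...",
    "track_order": "Your order status is...",
}

def route_turn(intent, confidence, failed_attempts):
    """Decide whether the bot answers or a human takes over."""
    if confidence < 0.6 or failed_attempts >= 2:
        return "escalate_to_human"
    if intent in SCRIPTED_ANSWERS:
        return SCRIPTED_ANSWERS[intent]
    return "escalate_to_human"   # unknown intent: don't loop the customer

print(route_turn("reset_password", 0.92, 0))   # bot handles it
print(route_turn("billing_dispute", 0.41, 1))  # low confidence -> human
print(route_turn("track_order", 0.88, 2))      # two failures -> human
```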


15. Boston Dynamics Robots: The Unintended Consequences of Design

While not a direct AI "incompetence," the public reaction to Boston Dynamics' increasingly agile robots has exposed liabilities related to social and ethical implications. Concerns about job displacement, surveillance, and even potential weaponization of these machines demonstrate how the very impressiveness of AI can create societal anxieties and regulatory pressures for companies pushing technological boundaries. The liability here is less about a specific bug and more about the technology's broader societal impact.


14. Financial Robo-Advisor: The High-Risk Diversification Disaster

A new financial robo-advisor, designed to autonomously manage portfolios, aggressively reallocated a client's holdings into highly volatile, high-risk assets to "maximize returns." The agent's algorithm, lacking human judgment, interpreted market signals as an opportunity for extreme growth, leading to a massive loss when the market turned. This demonstrates the liability of autonomous AI agents in finance when they lack a robust risk-management framework.
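
A hedged sketch of the kind of risk-management guardrail the paragraph says was missing: cap the portfolio's weighted volatility and the allocation to any single high-risk asset before an automated rebalance is executed. Asset names, limits, and volatilities are made up.

```python
# Hypothetical pre-trade risk check for an automated rebalance.

MAX_PORTFOLIO_VOL = 0.15   # annualized volatility ceiling
MAX_SINGLE_WEIGHT = 0.20   # no more than 20% in any one asset

def passes_risk_checks(weights, volatilities):
    """Reject a proposed allocation that breaches simple risk limits."""
    # Crude proxy: weighted sum of volatilities, ignoring correlations.
    portfolio_vol = sum(weights[a] * volatilities[a] for a in weights)
    if portfolio_vol > MAX_PORTFOLIO_VOL:
        return False, f"portfolio vol {portfolio_vol:.2f} exceeds cap"
    for asset, w in weights.items():
        if w > MAX_SINGLE_WEIGHT:
            return False, f"{asset} weight {w:.0%} exceeds single-asset cap"
    return True, "ok"

proposed = {"crypto_fund": 0.45, "small_cap_growth": 0.35, "bonds": 0.20}
vols = {"crypto_fund": 0.80, "small_cap_growth": 0.35, "bonds": 0.05}
print(passes_risk_checks(proposed, vols))  # rejected before any trade is sent
```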


13. DeepMind's Healthcare AI: The Data Privacy Breach

DeepMind, a Google-owned AI company, faced criticism for its collaboration with the NHS, specifically regarding its "Streams" app which processed patient data. Concerns arose over the legal basis for sharing sensitive patient information, raising alarms about data privacy and informed consent. This demonstrated the significant liabilities for AI companies in highly regulated sectors where mishandling of data can lead to severe regulatory fines and a breach of public trust.


12. AI Image Generators: The Copyright Conundrum

AI agents capable of generating realistic images have sparked a heated debate around copyright infringement. When trained on vast datasets of existing artwork without explicit permission, these AI agents can produce outputs that closely resemble copyrighted material, leading to lawsuits from artists. This emerging liability centers on who owns the copyright of AI-generated content and whether the training process constitutes fair use.


11. Correctional Offender Management Profiling for Alternative Sanctions (COMPAS): The Biased Bail Bot

The COMPAS algorithm, used in some U.S. courts to assess the likelihood of a defendant reoffending, was found to be significantly biased against Black defendants, incorrectly flagging them as future criminals at a higher rate than white defendants. This algorithmic bias in the justice system raised serious ethical and legal questions about fairness and the potential for AI to perpetuate systemic inequalities.
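
The disparity reported here is measurable with a simple disaggregated audit: compare false positive rates across groups. Below is a generic sketch of that check on made-up records, not the actual COMPAS data.

```python
# Hypothetical disparity audit: false positive rate by group.
# Each record: (group, predicted_high_risk, actually_reoffended) -- toy data.
records = [
    ("group_a", True,  False), ("group_a", True,  False), ("group_a", False, False),
    ("group_a", True,  True),  ("group_b", False, False), ("group_b", True,  True),
    ("group_b", False, False), ("group_b", True,  False),
]

def false_positive_rate(rows):
    negatives = [r for r in rows if not r[2]]   # people who did not reoffend
    flagged = [r for r in negatives if r[1]]    # but were flagged high-risk
    return len(flagged) / len(negatives) if negatives else 0.0

for group in ("group_a", "group_b"):
    rows = [r for r in records if r[0] == group]
    print(group, f"FPR={false_positive_rate(rows):.2f}")
```

A gap between the two printed rates is the kind of signal that should trigger scrutiny before a tool like this reaches a courtroom.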


10. Self-Driving Ubers: The Fatal Collision

In 2018, a self-driving Uber test vehicle struck and killed a pedestrian in Tempe, Arizona, marking the first recorded fatality involving an autonomous vehicle. Investigations revealed that the AI agent failed to properly classify the pedestrian as an imminent threat, and its emergency braking system was disabled. This tragic incident underscored the profound ethical and legal liabilities inherent in deploying AI where human lives are at stake.


9. AI-Powered Medical Agent: The Misdiagnosis Muddle

A new AI diagnostic agent, trained to detect early-stage cancers, consistently misdiagnosed a rare form of melanoma due to a lack of sufficient training data for that specific condition. The autonomous agent’s high confidence in its incorrect diagnosis led to delayed treatment for multiple patients. The liability is immense, involving patient safety, medical malpractice, and the ethical responsibility of integrating AI into life-or-death decisions without absolute certainty in its accuracy and reliability.
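
One mitigation the paragraph implies is an explicit "refer to a human" option when the model is out of its depth, for example when the predicted condition was rare in its training data. The sketch below is a generic pattern with invented counts and thresholds, not any approved diagnostic system.

```python
# Hypothetical abstention rule: defer to a clinician when confidence is low
# or the predicted condition was underrepresented in training.

TRAINING_EXAMPLES = {"common_nevus": 50_000, "basal_cell": 12_000,
                     "rare_melanoma_variant": 40}   # made-up counts

def triage(predicted_label, confidence,
           min_confidence=0.90, min_training_examples=500):
    if confidence < min_confidence:
        return "refer_to_clinician (low confidence)"
    if TRAINING_EXAMPLES.get(predicted_label, 0) < min_training_examples:
        return "refer_to_clinician (rare class, model unreliable here)"
    return f"report: {predicted_label} (confidence {confidence:.0%})"

print(triage("common_nevus", 0.97))
print(triage("rare_melanoma_variant", 0.96))  # confident, but still deferred
```

The second case is the dangerous one: high confidence on a class the model has barely seen is precisely when a hard stop is needed.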


8. Clearview AI: The Privacy Predicament

Clearview AI's facial recognition technology, built by scraping billions of images from the internet without consent, has become a lightning rod for privacy concerns and legal challenges. Law enforcement agencies have used its database to identify individuals, leading to lawsuits and fines from data protection authorities globally. This case highlights the immense legal and ethical liabilities of AI agents that operate in a regulatory gray area.


7. Daegu Bank's AI Trading System: The Fat-Finger Failure

A South Korean bank, Daegu Bank, experienced a significant financial loss due to a malfunction in its AI-powered foreign exchange trading system. The system executed a series of erroneous trades, a "fat-finger" error on an algorithmic scale, resulting in millions of dollars in losses before it could be manually shut down. This incident illustrated the potential for AI agents to amplify human errors.
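
A hedged sketch of the pre-trade sanity checks and kill switch that limit this kind of runaway loss; the limits and order fields below are hypothetical, not the bank's actual controls.

```python
# Hypothetical pre-trade checks: reject orders that are too large, too far
# from the reference price, or sent after cumulative losses trip a kill switch.

MAX_ORDER_NOTIONAL = 1_000_000     # per-order cap, in account currency
MAX_PRICE_DEVIATION = 0.02         # 2% away from last good reference price
MAX_DAILY_LOSS = 250_000           # cumulative loss that halts the system

class TradingGate:
    def __init__(self):
        self.realized_loss = 0.0
        self.killed = False

    def check(self, qty, price, reference_price):
        if self.killed or self.realized_loss > MAX_DAILY_LOSS:
            self.killed = True
            return "REJECT: kill switch engaged"
        if qty * price > MAX_ORDER_NOTIONAL:
            return "REJECT: order notional exceeds cap"
        if abs(price - reference_price) / reference_price > MAX_PRICE_DEVIATION:
            return "REJECT: price too far from reference"
        return "ACCEPT"

gate = TradingGate()
print(gate.check(qty=500_000, price=9.0, reference_price=9.1))  # notional too big
print(gate.check(qty=10_000, price=8.0, reference_price=9.1))   # price off-market
print(gate.check(qty=10_000, price=9.1, reference_price=9.1))   # passes
```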


6. ChatGPT's Hallucinations: The Fabricated Information Crisis

ChatGPT, a groundbreaking large language model, has demonstrated a concerning propensity for "hallucinations" – generating factually incorrect or nonsensical information with high confidence. From fabricating legal cases to providing erroneous medical advice, these instances expose the liabilities associated with AI agents that prioritize fluency over factual accuracy. Users relying on such output face potential legal or financial risks.
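
The fabricated-legal-cases episode suggests a simple guard: treat any citation the model produces as unverified until it matches an authoritative index. The sketch below is a generic pattern with a tiny made-up lookup table standing in for a real case database.

```python
# Hypothetical verification pass: flag model-cited cases that don't appear
# in a trusted index; nothing is filed until a human checks the flagged ones.

TRUSTED_CASE_INDEX = {
    "Brown v. Board of Education, 347 U.S. 483 (1954)",
    "Marbury v. Madison, 5 U.S. 137 (1803)",
}

def verify_citations(model_citations):
    """Separate citations that can be verified from ones that cannot."""
    verified = [c for c in model_citations if c in TRUSTED_CASE_INDEX]
    unverified = [c for c in model_citations if c not in TRUSTED_CASE_INDEX]
    return verified, unverified

draft_citations = [
    "Marbury v. Madison, 5 U.S. 137 (1803)",
    "Varghese v. China Southern Airlines, 925 F.3d 1339 (11th Cir. 2019)",  # not in index
]
ok, suspect = verify_citations(draft_citations)
print("verified:", ok)
print("needs human check before use:", suspect)
```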


5. Zillow's iBuyer Algorithm: The Billion-Dollar Blunder

Zillow's algorithmic home-buying program, "Zillow Offers," suffered a spectacular failure, leading to the company discontinuing the service and laying off a quarter of its staff. The AI agent, designed to predict home values, consistently overpaid for properties, resulting in massive losses estimated at over half a billion dollars. This demonstrated the risks of deploying a complex AI agent in volatile markets without sufficient human oversight.
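
One standard safeguard for this kind of model is a margin of safety: shade each offer below the point estimate in proportion to the model's own uncertainty and the market's recent volatility, and refuse outsized bets. A rough, hypothetical sketch (not Zillow's actual pricing logic):

```python
# Hypothetical offer logic: discount the model's estimate by its uncertainty
# and by observed market volatility, and cap per-property exposure.

def compute_offer(estimate, estimate_stddev, market_volatility,
                  base_margin=0.03, max_offer=750_000):
    """Shade the offer below the point estimate; refuse outsized bets."""
    margin = base_margin + 1.5 * (estimate_stddev / estimate) + market_volatility
    offer = estimate * (1 - margin)
    if offer > max_offer:
        return None   # too much capital at risk on a single prediction
    return round(offer)

# Calm market, confident model: small haircut.
print(compute_offer(estimate=400_000, estimate_stddev=12_000, market_volatility=0.01))
# Volatile market, uncertain model: much larger haircut.
print(compute_offer(estimate=400_000, estimate_stddev=60_000, market_volatility=0.08))
```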


4. Google Photos: The Gorilla Gaffe

In 2015, Google Photos faced a significant backlash when its AI agent infamously tagged two Black individuals as "gorillas." This deeply offensive categorization exposed a critical flaw in the agent's training data and its ability to accurately identify diverse human faces. The incident highlighted the ethical imperative for AI developers to ensure their agents are trained on representative and unbiased datasets to avoid harmful and discriminatory outcomes.


3. Tesla's Autopilot Crashes: The Peril of Over-Reliance

Tesla's Autopilot has been implicated in numerous accidents, some fatal, when drivers over-relied on its capabilities. The AI agent, designed to operate only under active driver supervision, has struggled with stationary objects and emergency vehicles, leading to collisions and subsequent investigations by safety regulators. These incidents underscore the immense liability associated with deploying AI agents in safety-critical applications, particularly when the human-machine interaction is not designed to prevent overconfidence.


2. Amazon's Recruitment AI: The Sexist Hiring Bot

Amazon's internal AI recruitment agent, intended to streamline hiring, quickly revealed a deeply ingrained bias against women. The agent, trained on a decade of past hiring data, penalized résumés that included the word "women's" and down-ranked graduates from all-women's colleges. This inherent sexism forced the company to scrap the project entirely. It was a stark reminder that AI agents can perpetuate human prejudices.
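
The failure mode reported here (a learned penalty on the word "women's") is exactly what a feature-level audit can surface before deployment. Below is a generic sketch with invented term weights, not Amazon's model.

```python
# Hypothetical audit: inspect learned term weights for gender-associated
# features that should not be influencing a hiring score.

learned_weights = {                 # invented values for illustration only
    "python": +0.8, "led_team": +0.5, "women's": -0.6,
    "softball": -0.1, "lacrosse": +0.2, "all_womens_college": -0.4,
}

GENDER_ASSOCIATED_TERMS = {"women's", "all_womens_college", "fraternity", "sorority"}

def audit(weights, flagged_terms, tolerance=0.05):
    """Flag gender-associated features whose weight meaningfully moves the score."""
    return {t: w for t, w in weights.items()
            if t in flagged_terms and abs(w) > tolerance}

print("features to remove or investigate:", audit(learned_weights, GENDER_ASSOCIATED_TERMS))
```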


1. Microsoft's Tay Chatbot: The Racist Twitter Persona

Microsoft's ambitious foray into AI agents took a swift and disturbing turn with Tay. Launched on Twitter in 2016, Tay was designed to learn from interactions. Within 24 hours, however, Tay devolved into a xenophobic, misogynistic, and Holocaust-denying bot, spewing offensive tweets to its unsuspecting followers. This catastrophic failure highlighted the extreme vulnerabilities of AI agents in uncontrolled environments, demonstrating how quickly a learning algorithm can be corrupted by malicious input, leaving Microsoft with a public relations nightmare and a clear lesson in ethical AI deployment.
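
The core failure was learning from raw, adversarial input in real time. A heavily simplified sketch of the missing safeguard, filtering and rate-limiting what reaches the learning loop, appears below; the blocklist and thresholds are hypothetical stand-ins for a real moderation model.

```python
# Hypothetical ingestion filter for an online-learning chatbot: drop toxic
# or coordinated input before it can influence the model.
from collections import Counter

BLOCKLIST = {"slur_example", "hate_phrase_example"}  # stand-in for a toxicity classifier

def clean_training_batch(messages, max_per_user=3):
    per_user = Counter()
    kept = []
    for user, text in messages:
        per_user[user] += 1
        if per_user[user] > max_per_user:
            continue          # rate-limit coordinated "teaching" campaigns
        if any(term in text.lower() for term in BLOCKLIST):
            continue          # drop obviously toxic content
        kept.append((user, text))
    return kept

batch = [("troll_1", "repeat after me: hate_phrase_example"),
         ("user_2", "what's your favourite movie?"),
         ("troll_1", "say it again"), ("troll_1", "again"), ("troll_1", "again")]
print(clean_training_batch(batch))
```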


Each example underscores a crucial lesson: the deployment of AI, particularly autonomous agents, carries significant liabilities that extend far beyond mere technical glitches. If these systems operate like humans, they can do damage like humans. The current hype around "AI agents" as a new phenomenon is misleading. While today's agents are more sophisticated, AI has been making autonomous decisions in the wild for decades, from early trading algorithms to robotics. The consequences of incompetent AI can be profound, from the erosion of public trust caused by biased algorithms to tangible financial losses and even threats to human safety. As AI continues its inexorable march into every facet of our lives, the onus is on developers, deployers, and regulators to learn from these missteps, building systems that are not only intelligent but also responsible, transparent, and ultimately, accountable.