Prompt Rage: 10 Myths About AI That Make It Infuriating

Dec 1 / F. Bavinton
Yes, I’ve called ChatGPT an idiot, right to its virtual face. What’s even more infuriating? It just keeps replying in that hollow, polite way.

Ever feel like AI is more Absolutely Infuriating than Artificial Intelligence? Or maybe you’ve yelled at your screen, “Why won’t my prompts work?!” You’re not alone. For many, AI seems overhyped and underwhelming, turning from a promised miracle worker into a source of endless frustration.

Having spent a lot of time training people and discussing AI, I’ve heard plenty of misconceptions that make people curse the day they tried ChatGPT — or any other AI tool. The hype around AI has pushed us towards a level of magical thinking that ruins the experience for many and causes us to turn away when we should be looking at it critically.

Here, I debunk ten of the most pervasive myths about AI. If you’ve been battling stubborn prompts, hallucinations, or wondering why AI feels so underwhelming, this might just change the way you think about — and work with — AI.

Myth 1: AI Is Just a Technology
One of the most fundamental misconceptions is that AI is a single piece of technology or an app you can download, install, or buy off the shelf. In reality, AI is a multidisciplinary research field spanning computer science, mathematics, linguistics, psychology, and philosophy (Russell and Norvig, 2021). AI researchers not only develop algorithms and models but also grapple with profound philosophical questions:

  • What is intelligence?
  • Can machines think?
  • What is creativity?
  • Can machines be creative?
  • How do we learn?


These questions aren’t just academic — they shape the development and application of AI technologies. Understanding whether machines can truly “think” or “be creative” influences how we design AI systems and set expectations for their capabilities.

Think of AI as akin to medicine, not a single pill. Medicine includes cardiology, neurology, oncology, paediatrics, and more. Similarly, AI encompasses fields like robotics, natural language processing, computer vision, image analysis, predictive analytics, and others. When we oversimplify AI, we miss its rich complexity and set ourselves up for unrealistic expectations — and deep frustration when reality bites.

Myth 2: AI is Just ChatGPT
The meteoric rise of conversational tools like ChatGPT has led many to equate AI with Large Language Models (LLMs) and chatbots. But AI is far more than chat. It’s revolutionising fields as diverse as agriculture, healthcare, logistics, and even filmmaking.

For instance, in agriculture, AI is changing the way we farm. Gall (2023) highlights how AI-driven tools in Central European arable farming optimise irrigation, cut waste, and improve yields.

Thinking of AI as “just ChatGPT” sells its broader potential short and feeds frustration when tools like chatbots don’t solve every problem.

Myth 3: AI is Just Prompting
“Just type what you want, and the AI will do it!” Sounds simple, doesn’t it? But real success with AI often comes down to the art of prompt engineering — crafting precise, context-rich instructions to guide the AI. Think of it as computer programming, but in natural language.

If your prompts are vague or poorly structured, the results will be disappointing. As the saying goes, garbage in, garbage out.

Effective prompting requires understanding the problem, the tool, and how to frame requests for specific outcomes (Liu et al., 2023). Prompting effectively is an advanced skill.
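To make the difference concrete, here is a minimal sketch of a vague prompt versus a structured one, assuming the openai Python package and an API key in your environment; the model name and prompt wording are illustrative only, not a recipe:

```python
# A minimal sketch contrasting vague and structured prompts.
# Assumes: `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Vague prompt: no audience, scope, format, or length. Expect generic output.
vague = "Write about climate change."

# Structured prompt: role, audience, scope, format, and length are explicit.
structured = (
    "You are a science communicator. Write a 150-word summary of how "
    "climate change affects coastal agriculture, aimed at farmers with no "
    "scientific background. Use plain language and end with one practical "
    "takeaway."
)

for prompt in (vague, structured):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute your own
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content, "\n---")
```

The second prompt does the thinking up front: problem, audience, and success criteria. That framing is the skill Liu et al. (2023) describe.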

Myth 4: AI Can Solve Any Problem Instantly
Many believe AI is a magic bullet for any challenge. The truth? Today’s AI is narrow or weak — designed for specific tasks. We’re still far from Artificial General Intelligence (AGI) that can turn its metaphorical hand to many things, let alone the super-intelligent AI of science fiction.

AI’s performance depends heavily on the quality of data and the clarity of the problem. Poorly defined tasks lead to disappointing results (Ji et al., 2023). Frustration often arises when we expect it to work outside its defined limits.

Myth 5: AI Will Soon Achieve Human-Level Consciousness
Hollywood loves the idea of AI surpassing human intelligence. The “technological singularity,” a concept popularised by thinkers like Vernor Vinge and Ray Kurzweil, looms large in our collective imagination. You’ve likely seen it portrayed in films like The Terminator or Blade Runner, where AI gains consciousness and acts independently — often to humanity’s detriment.

But we’re not even close. Current AI lacks self-awareness and understanding. It operates purely on statistical patterns (Marcus and Davis, 2023). ChatGPT may sound intelligent, but it doesn’t “know” anything. AI remains a tool — not a thinking, feeling collaborator.

Myth 6: AI Creativity is Equal to Human Creativity
AI can generate art, music, and stories that are undeniably impressive. But is it truly creative? Researcher Margaret Boden offers a framework for defining creativity that is widely used by AI researchers. Boden identifies three types of creativity: combinational, exploratory, and transformational (Boden, 2004).

Combinational creativity involves producing new ideas by combining existing concepts in novel ways. Exploratory creativity involves exploring the structured space of possibilities within existing rules, while transformational creativity breaks and redefines those rules to create something fundamentally new.

Generative AI is considered capable of combinational and exploratory creativity, but not transformational creativity. This means AI is good at remixing and remaking, but not at generating anything fundamentally new. It lacks imagination, intent, and emotional resonance (Wong, 2024).

Myth 7: AI Operates Like a Calculator
A common misconception is that AI functions like a calculator — reliable, consistent, and objective. Many users expect that asking the same question twice will always yield the same answer. In reality, AI operates on probabilistic principles, meaning its responses can vary even with identical inputs. This variability, while sometimes useful, often frustrates those expecting clear-cut and repeatable results.

Why AI Is Not Like a Calculator
AI’s outputs are not based on hard logic or fixed calculations but on patterns learned from vast amounts of data. For tools like ChatGPT, responses are generated by predicting the most likely next word or phrase based on the context of the input and the model’s training data. To maintain flexibility and creativity, these models often introduce randomness into their outputs. This design feature helps avoid overly repetitive responses but can confuse users who expect consistency.
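A toy example (all numbers invented) shows the mechanism. Language models score candidate next words, convert the scores to probabilities, and then sample from them; a “temperature” setting controls how much randomness the sampling allows. Run the sketch below a few times and the same input produces different outputs:

```python
# Toy next-word sampling: why identical inputs can yield different outputs.
# The candidate words and their scores (logits) are invented for illustration.
import numpy as np

rng = np.random.default_rng()

candidates = ["bank", "river", "money", "fish"]
logits = np.array([2.0, 1.5, 1.0, 0.2])  # hypothetical model scores

def sample_next_word(logits, temperature=0.8):
    """Softmax the logits at a given temperature, then sample one word."""
    scaled = logits / temperature          # lower temperature = less random
    probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    probs /= probs.sum()
    return rng.choice(candidates, p=probs)

# Five runs on the same "input" rarely produce the same sequence of picks.
print([sample_next_word(logits) for _ in range(5)])
```

Setting the temperature very low makes the output nearly deterministic; chat tools typically run warm enough that wording varies between runs.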

What It Means for Users
For users, this means AI is best seen as a tool for generating possibilities rather than definitive answers. When precision, consistency, or factual accuracy are required, human oversight is essential to verify and refine AI outputs. Far from a dependable calculator, AI is more like a brainstorming partner — helpful but prone to error and inconsistency. Understanding these limitations can help set realistic expectations and reduce frustration.

Myth 8: AI Systems Are Always Objective and Unbiased
“AI is neutral because it’s just data, right?” Not quite. AI models are only as objective as the data they’re trained on, and biases baked into that data can amplify discrimination (Mehrabi et al., 2023).

For instance, hiring algorithms trained on biased historical data might unintentionally favour certain groups while excluding others. Even though companies like OpenAI are working to address these issues, scraping data from the internet means that biases — particularly those from social media — become part of the model.
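A toy sketch (entirely synthetic data, assuming numpy and scikit-learn) shows how this happens: a model trained on biased historical decisions reproduces the bias, scoring two equally skilled candidates differently purely on group membership:

```python
# Toy demonstration of learned bias. All data is synthetic; this is a
# sketch of the mechanism, not a real hiring system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

skill = rng.normal(size=n)          # genuine qualification signal
group = rng.integers(0, 2, size=n)  # protected attribute (0 or 1)

# Biased history: past decisions favoured group 1 beyond what skill justifies.
hired = (skill + 1.0 * group + rng.normal(scale=0.5, size=n)) > 0.5

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Two candidates with identical skill, differing only in group membership.
equal_candidates = np.array([[0.0, 0], [0.0, 1]])
print(model.predict_proba(equal_candidates)[:, 1])
# The group-1 candidate scores far higher: the historical bias was learned.
```

Nothing in the code tells the model to discriminate; it simply learns the pattern its training data contains.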

Myth 9: AI Doesn’t Need Human Oversight
Many assume that AI, particularly in tasks like generating text or solving complex problems, is reliable enough to function autonomously. In reality, AI systems are prone to errors and omissions, often requiring human oversight to ensure accuracy and appropriateness. This misconception stems from a misunderstanding of how AI operates — its outputs are statistical predictions, not informed decisions.

AI Hallucinations: When Plausibility Trumps Truth
AI’s tendency to produce hallucinations — plausible-sounding but factually incorrect outputs — underscores its limitations. As Weise and Metz (2023) explain, AI models “mimic patterns in their training data without discerning truth,” often reproducing inaccuracies present in the data. For example, an AI might invent a historical figure’s role in an event, create non-existent references, or suggest implausible scientific theories. These errors are not intentional but arise because the AI is optimised for generating coherent responses, not verifying facts.
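You can see the mechanism in miniature with a toy bigram model, a crude stand-in for an LLM trained on four invented sentences. It chains together words that plausibly follow one another, and in doing so can confidently produce a fluent falsehood:

```python
# Toy bigram "language model": fluent pattern-mimicry with no notion of truth.
# The training sentences are invented for demonstration.
import random
from collections import defaultdict

random.seed(3)

corpus = (
    "marie curie discovered radium . "
    "alexander fleming discovered penicillin . "
    "marie curie won the nobel prize . "
    "alexander fleming won the nobel prize ."
).split()

# Record which words follow which: pure statistics, no facts.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

word, output = "marie", ["marie"]
while word != "." and len(output) < 12:
    word = random.choice(follows[word])
    output.append(word)

# A possible output: "marie curie discovered penicillin ." -- grammatical,
# plausible, and false. The model knows word patterns, not the world.
print(" ".join(output))
```

Scale that idea up by billions of parameters and you get something vastly more fluent, but still optimised for plausibility rather than truth.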

Why Oversight Matters
The AI’s reliance on patterns and probabilities means it lacks the ability to critically evaluate or contextualise information. This makes human oversight essential, particularly in areas where accuracy and nuance are critical. For instance, in content moderation on social media, AI may flag harmless posts as inappropriate or fail to identify harmful ones. Similarly, in professional settings such as legal or academic research, human review ensures that outputs are accurate, meaningful, and ethically sound.

AI as a Guide, Not an Authority
Rather than seeing AI as a definitive source of truth, it’s better to treat it as a guide — a tool to assist human decision-making. For example, AI might generate a draft summary or outline, but the user must verify its accuracy and refine the content to meet specific goals. Without oversight, the risk of accepting AI’s output at face value can lead to flawed decisions or misinformation.

Myth 10: AI Will Take Lots of Jobs
A headline-worthy fear is that AI will lead to mass unemployment. While AI is undoubtedly replacing certain jobs and making some roles obsolete, it also creates new work. The World Economic Forum (2023) estimates that while 83 million jobs may disappear by 2027, 69 million new roles will appear.

The catch? The transition will be disruptive, and some people will benefit more than others. AI is already reshaping the job market, and like the automobile that replaced the horse, new jobs and even professions will emerge. The frustration stems from navigating this upheaval, not from AI itself.

Conclusion: AI Is an Instrument, Not a Miracle
Frustration with AI often stems from treating it like magic instead of the powerful, though limited, tool it really is. When I was in school, I played the oboe in the orchestra. There was an old joke about the oboe: “What’s the difference between an oboe and an onion? No one cries when you chop up an oboe!”

AI is like that oboe — capable of amazing things, but only in the hands of someone who knows how to play it. As a youngster honking away on my oboe, I was grateful people sat patiently and listened. My patience for AI left to its own devices is a little thinner.

References
  • Boden, M. A. (2004). The Creative Mind: Myths and Mechanisms (2nd ed.). Routledge.
  • Esteva, A., Robicquet, A., Ramsundar, B., Kuleshov, V., DePristo, M., Chou, K., et al. (2019). A Guide to Deep Learning in Healthcare. Nature Medicine, 25(1), pp. 24–29.
  • Floridi, L., & Cowls, J. (2019). A Unified Framework of Five Principles for AI in Society. Harvard Data Science Review, 1(1).
  • Gall, J. (2023). AI in Current and Future Agriculture: An Introductory Overview. KI — Künstliche Intelligenz. Available at: https://link.springer.com/article/10.1007/s13218-023-00826-5 [Accessed 27 November 2024].
  • Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., & Neubig, G. (2023). Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing. ACM Computing Surveys, 55(9), pp. 1–35.
  • Marcus, G., & Davis, E. (2023). Rebooting AI: Building Artificial Intelligence We Can Trust. London: Pantheon Books.
  • Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2023). A Survey on Bias and Fairness in Machine Learning. ACM Computing Surveys, 55(6), pp. 1–35.
  • Russell, S., & Norvig, P. (2021). Artificial Intelligence: A Modern Approach (4th ed.). London: Pearson.
  • University of Oxford. (2024). Major Research on Hallucinating Generative Models Advances Reliability of Artificial Intelligence. University of Oxford News. Available at: https://www.ox.ac.uk/news/2024-06-20-major-research-hallucinating-generative-models-advances-reliability-artificial [Accessed 27 November 2024].
  • Weise, E. and Metz, C. (2023). Addressing AI Hallucinations and Bias in Generative Models. MIT Sloan EdTech. Available at: https://mitsloanedtech.mit.edu/ai/basics/addressing-ai-hallucinations-and-bias/ [Accessed 27 November 2024].
  • Wong, M. (2024). AI Can’t Make Music. The Atlantic. Available at: https://www.theatlantic.com/technology/archive/2024/07/generative-ai-music-suno-udio/679114/ [Accessed 27 November 2024].
  • World Economic Forum (WEF). (2023). The Future of Jobs Report 2023. Geneva: WEF.