AI Misconceptions - Radio Segment Scripts
"Emergent AI Technologies" Episode
Created: 2026-02-09
Format: Each segment is 3-5 minutes at conversational pace (~150 words/minute)
Segment 1: "Strawberry Has How Many R's?" (~4 min)
Theme: Tokenization - AI doesn't see words the way you do
Here's a fun one to start with. Ask ChatGPT -- or any AI chatbot -- "How many R's are in the word strawberry?" Until very recently, most of them would confidently tell you: two. The answer is three. So why does a system trained on essentially the entire internet get this wrong?
It comes down to something called tokenization. When you type a word into an AI, it doesn't see individual letters the way you do. It breaks text into chunks called "tokens" -- pieces it learned to recognize during training. The word "strawberry" might get split into "st," "raw," and "berry." The AI never sees the full word laid out letter by letter. It's like trying to count the number of times a letter appears in a sentence, but someone cut the sentence into random pieces first and shuffled them.
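If you want to see that chunking for yourself, here's a minimal sketch using OpenAI's open-source `tiktoken` package. The exact pieces depend on which tokenizer you load, so treat the splits it prints as illustrative rather than universal.

```python
# See how a tokenizer carves up a word (pip install tiktoken).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # one widely used tokenizer
word = "strawberry"
token_ids = enc.encode(word)
pieces = [enc.decode([tid]) for tid in token_ids]

print(f"{word!r} becomes {len(token_ids)} tokens: {pieces}")
# The model works with those chunks -- it never sees the word spelled out
# character by character, which is why letter-counting is hard for it.
print("actual letter count:", word.count("r"))
```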
This isn't a bug -- it's how the system was built. AI processes language as patterns of chunks, not as strings of characters. It's optimized for meaning and flow, not spelling. Think of it like someone who's amazing at understanding conversations in a foreign language but couldn't tell you how to spell half the words they're using.
The good news: newer models released in 2025 and 2026 are starting to overcome this. Researchers are finding signs of "tokenization awareness" -- models learning to work around their own blind spots. But it's a great reminder that AI doesn't process information the way a human brain does, even when the output looks human.
Key takeaway for listeners: AI doesn't read letters. It reads chunks. That's why it can write you a poem but can't count letters in a word.
Segment 2: "Your Calculator is Smarter Than ChatGPT" (~4 min)
Theme: AI doesn't actually do math -- it guesses what math looks like
Here's something that surprises people: AI chatbots don't actually calculate anything. When you ask ChatGPT "What's 4,738 times 291?" it's not doing multiplication. It's predicting what a correct-looking answer would be, based on patterns it learned from training data. Sometimes it gets it right. Sometimes it's wildly off. Your five-dollar pocket calculator will beat it every time on raw arithmetic.
Why? Because of that same tokenization problem. The number 87,439 might get broken up as "874" and "39" in one context, or "87" and "439" in another. The AI has no consistent concept of place value -- ones, tens, hundreds. It's like trying to do long division after someone randomly rearranged the digits on your paper.
The deeper issue is that AI is a language system, not a logic system. It's trained to produce text that sounds right, not to follow mathematical rules. It doesn't have working memory the way you do when you carry the one in long addition. Each step of a calculation is essentially a fresh guess at what the next plausible piece of text should be.
This is why researchers are now building hybrid systems -- AI for the language part, with traditional computing bolted on for the math. When your phone's AI assistant does a calculation correctly, there's often a real calculator running behind the scenes. The AI figures out what you're asking, hands the numbers to a proper math engine, then presents the answer in natural language.
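Here's a toy sketch of that handoff, assuming the language model's only job is to translate the question into an arithmetic expression. The parsing step and names are illustrative; the point is that the actual arithmetic is done by ordinary deterministic code, not by the model.

```python
# Toy "language model + real calculator" handoff.
import ast
import operator

OPS = {ast.Mult: operator.mul, ast.Add: operator.add,
       ast.Sub: operator.sub, ast.Div: operator.truediv}

def calculate(expression: str) -> float:
    """Safely evaluate a simple arithmetic expression -- the 'real calculator'."""
    def walk(node):
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        raise ValueError("unsupported expression")
    return walk(ast.parse(expression, mode="eval").body)

# Hypothetical division of labor: the model turns "What's 4,738 times 291?"
# into "4738 * 291", the calculator computes it, and the model phrases the result.
result = calculate("4738 * 291")
print(f"4,738 times 291 is {result:,.0f}")  # 1,378,758 -- computed, not guessed
```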
Key takeaway for listeners: AI predicts what a math answer looks like. It doesn't compute. If accuracy matters, verify the numbers yourself.
Segment 3: "Confidently Wrong" (~5 min)
Theme: Hallucination -- why AI makes things up and sounds sure about it
This one has real consequences. AI systems regularly state completely false information with total confidence. Researchers call this "hallucination," and it's not a glitch -- it's baked into how these systems are built.
Here's why: during training, AI is essentially taking a never-ending multiple choice test. It learns to always pick an answer. There's no "I don't know" option. Saying something plausible is always rewarded over staying silent. So the system becomes an expert at producing confident-sounding text, whether or not that text is true.
A study published in Science found something remarkable: AI models actually use 34% more confident language -- words like "definitely" and "certainly" -- when they're generating incorrect information compared to when they're right. The less the system actually "knows" about something, the harder it tries to sound convincing. Think about that for a second. The AI is at its most persuasive when it's at its most wrong.
This has hit the legal profession hard. A California attorney was fined $10,000 after filing a court appeal where 21 out of 23 cited legal cases were completely fabricated by ChatGPT. They looked real -- proper case names, citations, even plausible legal reasoning. But the cases never existed. And this isn't an isolated incident. Researchers have documented 486 cases worldwide of lawyers submitting AI-hallucinated citations. In 2025 alone, judges issued hundreds of rulings specifically addressing this problem.
Then there's the Australian government, which spent $440,000 on a report that turned out to contain hallucinated sources. And a Taco Bell drive-through AI that processed an order for 18,000 cups of water because it couldn't distinguish a joke from a real order.
OpenAI themselves admit the problem: their training process rewards guessing over acknowledging uncertainty. Duke University researchers put it bluntly -- for these systems, "sounding good is far more important than being correct."
Key takeaway for listeners: AI doesn't know what it doesn't know. It will never say "I'm not sure." Treat every factual claim from AI the way you'd treat a tip from a confident stranger -- verify before you trust.
Segment 4: "Does AI Actually Think?" (~4 min)
Theme: We talk about AI like it's alive -- and that's a problem
Two-thirds of American adults believe ChatGPT is possibly conscious. Let that sink in. A peer-reviewed study published in the Proceedings of the National Academy of Sciences found that people increasingly attribute human qualities to AI -- and that trend grew by 34% in 2025 alone.
We say AI "thinks," "understands," "learns," and "knows." Even the companies building these systems use that language. But here's what's actually happening under the hood: the system is calculating which word is most statistically likely to come next, given everything that came before it. That's it. There's no understanding. There's no inner experience. It's a very sophisticated autocomplete.
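For anyone who wants the "sophisticated autocomplete" claim made concrete, here's a stripped-down illustration. The tiny vocabulary and the scores are made up for the demo; a real model does the same scoring over tens of thousands of tokens at every step.

```python
# Next-token prediction in miniature: score candidates, turn scores into
# probabilities, pick one. There is no "understanding" step in this loop.
import numpy as np

context = "The cat sat on the"
candidates = ["mat", "roof", "moon", "idea"]   # hypothetical vocabulary
logits = np.array([4.0, 2.5, 0.5, -1.0])       # hypothetical model scores

probs = np.exp(logits - logits.max())
probs /= probs.sum()                            # softmax -> probabilities

for word, p in zip(candidates, probs):
    print(f"P({word!r} | {context!r}) = {p:.2f}")

print("model continues with:", candidates[int(np.argmax(probs))])  # greedy pick
```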
Researchers call this the "stochastic parrot" debate. One camp says these systems are just parroting patterns from their training data at an incredible scale -- like a parrot that's memorized every book ever written. The other camp points out that GPT-4 scored in the 90th percentile on the Bar Exam and solves 93% of Math Olympiad problems -- can something that performs that well really be "just" pattern matching?
The honest answer is: we don't fully know. MIT Technology Review ran a fascinating piece in January 2026 about researchers who now treat AI models like alien organisms -- performing what they call "digital autopsies" to understand what's happening inside. The systems have become so complex that even their creators can't fully explain how they arrive at their answers.
But here's why the language matters: when we say AI "thinks," we lower our guard. We trust it more. We assume it has judgment, common sense, and intention. It doesn't. And that mismatch between perception and reality is where people get hurt -- trusting AI with legal filings, medical questions, or financial decisions without verification.
Key takeaway for listeners: AI doesn't think. It predicts. The words we use to describe it shape how much we trust it -- and right now, we're over-trusting.
Segment 5: "The World's Most Forgetful Genius" (~3 min)
Theme: AI has no memory and shorter attention than you think
Companies love to advertise massive "context windows" -- the amount of text an AI can consider at once. Some models now claim they can handle a million tokens, equivalent to several novels. Sounds impressive. But research shows these systems can only reliably track about 5 to 10 pieces of information before performance degrades to essentially random guessing.
Think about that. A system that can "read" an entire book can't reliably keep track of more than a handful of facts from it. It's like hiring someone with photographic memory who can only remember 5 things at a time. The information goes in, but the system loses the thread.
And here's something most people don't realize: AI has zero memory between conversations. When you close a chat window and open a new one, the AI has absolutely no recollection of your previous conversation. It doesn't know who you are, what you discussed, or what you decided. Every conversation starts completely fresh. Some products build memory features on top -- saving notes about you that get fed back in -- but the underlying AI itself remembers nothing.
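Here's a minimal sketch of what those bolt-on memory features are doing, with `call_model` as a hypothetical stand-in for whatever chat API a product uses. The "memory" is just a list kept by the application and re-sent with every request.

```python
# The model itself is stateless: every call must carry the whole conversation.
from typing import Dict, List

def call_model(messages: List[Dict[str, str]]) -> str:
    # Placeholder: a real implementation would send `messages` to a model API.
    return f"(model reply based on {len(messages)} messages of context)"

history: List[Dict[str, str]] = []   # the "memory" lives in the app, not the model

def chat(user_text: str) -> str:
    history.append({"role": "user", "content": user_text})
    reply = call_model(history)       # the full transcript is re-sent every time
    history.append({"role": "assistant", "content": reply})
    return reply

chat("My name is Dana.")
print(chat("What's my name?"))        # answerable only because we re-sent the history

history.clear()                       # closing the chat window = dropping the list
print(chat("What's my name?"))        # a fresh call arrives with no memory at all
```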
Even within a single long conversation, models "forget" what was said at the beginning. If you've ever noticed an AI contradicting something it said twenty messages ago, this is why. The earlier parts of the conversation fade as new text pushes in.
Key takeaway for listeners: AI isn't building a relationship with you. Every conversation is day one. And even within a conversation, its attention span is shorter than you'd think.
Segment 6: "Just Say 'Think Step by Step'" (~3 min)
Theme: The weird magic of prompt engineering
Here's one of the strangest discoveries in AI: if you add the words "think step by step" to your question, the AI performs dramatically better. On math problems, this simple phrase more than doubles accuracy. It sounds like a magic spell, and honestly, it kind of is.
It works because of how these systems generate text. Normally, an AI tries to jump straight to an answer -- predicting the most likely response in one shot. But when you tell it to think step by step, it generates intermediate reasoning first. Each step becomes context for the next step. It's like the difference between trying to do complex multiplication in your head versus writing out the long-form work on paper.
Researchers call this "chain-of-thought prompting," and it reveals something fascinating about AI: the knowledge is often already in there, locked up. The right prompt is the key that unlocks it. The system was trained on millions of examples of step-by-step reasoning, so when you explicitly ask for that format, it activates those patterns.
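In practice the whole trick is a change to the text you send. Here's a minimal sketch; `build_prompt` and the example question are purely illustrative.

```python
# Chain-of-thought prompting: the only difference is one extra line of text.
def build_prompt(question: str, step_by_step: bool) -> str:
    prompt = f"Question: {question}\n"
    if step_by_step:
        prompt += "Let's think step by step.\n"   # the trigger phrase
    prompt += "Answer:"
    return prompt

question = "A shop sells pencils in boxes of 12. How many pencils are in 7 boxes?"

print(build_prompt(question, step_by_step=False))
print("---")
print(build_prompt(question, step_by_step=True))
# With the trigger phrase, large models tend to emit intermediate reasoning
# ("7 boxes x 12 pencils = 84") first, and each line becomes context for the next.
```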
But there's a catch -- this only works on large models, roughly 100 billion parameters or more. On smaller models, asking for step-by-step reasoning actually makes performance worse. The smaller system generates plausible-looking steps that are logically nonsensical, then confidently arrives at a wrong answer. It's like asking someone to show their work when they don't actually understand the subject -- you just get confident-looking nonsense.
Key takeaway for listeners: The way you phrase your question to AI matters enormously. "Think step by step" is the single most useful trick you can learn. But remember -- it's not actually thinking. It's generating text that looks like thinking.
Segment 7: "AI is Thirsty" (~4 min)
Theme: The environmental cost nobody talks about
Here's a number that stops people in their tracks: if AI data centers were a country, they'd rank fifth in the world for energy consumption -- right between Japan and Russia. By the end of 2026, they're projected to consume over 1,000 terawatt-hours of electricity. That's more than most nations on Earth.
Every time you ask ChatGPT a question, a server somewhere draws power. Not a lot for one question -- but multiply that by hundreds of millions of users, billions of queries per day, and it adds up fast. And it's not just electricity. AI is incredibly thirsty. Training and running these models requires massive amounts of water for cooling the data centers. We're talking 731 million to over a billion cubic meters of water annually -- equivalent to the household water usage of 6 to 10 million Americans.
Here's the part that really stings: MIT Technology Review found that 60% of the increased electricity demand from AI data centers is being met by fossil fuels. So despite all the talk about clean energy, the AI boom is adding an estimated 220 million tons of carbon emissions. The irony of using AI to help solve climate change while simultaneously accelerating it isn't lost on researchers.
A single query to a large language model uses roughly 10 times the energy of a standard Google search. Training a single large model from scratch can emit as much carbon as five cars over their entire lifetimes, including manufacturing.
None of this means we should stop using AI. But most people have no idea that there's a physical cost to every conversation, every generated image, every AI-powered feature. The cloud isn't actually a cloud -- it's warehouses full of GPUs running 24/7, drinking water and burning fuel.
Key takeaway for listeners: AI has a physical footprint. Every question you ask has an energy cost. It's worth knowing that "free" AI tools aren't free -- someone's paying the electric bill, and the planet's paying too.
Segment 8: "Chatbots Are Old News" (~3 min)
Theme: The shift from chatbots to AI agents
If 2025 was the year of the chatbot, 2026 is the year of the agent. And the difference matters.
A chatbot talks to you. You ask a question, it gives an answer. It's reactive -- like a really smart FAQ page. An AI agent does work for you. You give it a goal, and it figures out the steps, uses tools, and executes. It can browse the web, write and run code, send emails, manage files, and chain together multiple actions to accomplish something complex.
Here's the simplest way to think about it: a chatbot is read-only. It can create text, suggest ideas, answer questions. An agent is read-write. It doesn't just suggest you should send a follow-up email -- it writes the email, sends it, tracks whether you got a response, and follows up if you didn't.
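A toy sketch of that loop, with made-up stand-in tools and a hard-coded `decide` policy where a real agent would consult a language model:

```python
# Toy agent loop: decide on an action, call a tool, observe, repeat.
from typing import Callable, Dict, List

def send_email(to: str, body: str) -> str:
    return f"email sent to {to}"          # stand-in for a real side effect

def check_replies(to: str) -> str:
    return "no reply yet"                 # stand-in for checking an inbox

TOOLS: Dict[str, Callable[..., str]] = {"send_email": send_email,
                                        "check_replies": check_replies}

def decide(goal: str, observations: List[str]) -> dict:
    """Hard-coded policy; a real agent asks a model which tool to call next."""
    if not observations:
        return {"tool": "send_email",
                "args": {"to": "client@example.com", "body": "Just following up..."}}
    return {"tool": "check_replies", "args": {"to": "client@example.com"}}

observations: List[str] = []
for step in range(3):                      # cap the loop so the demo terminates
    action = decide("Follow up until the client replies", observations)
    result = TOOLS[action["tool"]](**action["args"])   # read-write: actions, not just text
    observations.append(result)
    print(f"step {step + 1}: {action['tool']} -> {result}")
```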
The market reflects this shift. The AI agent market is growing at 45% per year, nearly double the 23% growth rate for chatbots. Companies are building agents that can handle entire workflows autonomously -- scheduling meetings, managing customer service tickets, writing and deploying code, analyzing data and producing reports.
This is where AI gets both more useful and more risky. A chatbot that hallucinates gives you bad information. An agent that hallucinates takes bad action. When an AI can actually do things in the real world -- send messages, modify files, make purchases -- the stakes of getting it wrong go way up.
Key takeaway for listeners: The next wave of AI doesn't just talk -- it acts. That's powerful, but it also means the consequences of AI mistakes move from "bad advice" to "bad actions."
Segment 9: "AI Eats Itself" (~3 min)
Theme: Model collapse -- what happens when AI trains on AI
Here's a problem nobody saw coming. As the internet fills up with AI-generated content -- articles, images, code, social media posts -- the next generation of AI models inevitably trains on that AI-generated material. And when AI trains on AI output, something strange happens: it gets worse. Researchers call it "model collapse."
A study published in Nature showed that when models train on recursively generated data -- AI output fed back into AI training -- rare and unusual patterns gradually disappear. The output drifts toward bland, generic averages. Think of it like making a photocopy of a photocopy of a photocopy. Each generation loses detail and nuance until you're left with a blurry, indistinct mess.
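You can watch a cartoon version of this with a one-dimensional "model": fit a simple distribution to some data, then train the next generation only on samples from that fit. This toy sketch mirrors the photocopy intuition under those simplified assumptions; it is not the paper's actual experimental setup.

```python
# Toy model-collapse demo: each generation is "trained" (fit) on the previous
# generation's output instead of the original data.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_generations = 50, 300

data = rng.normal(loc=0.0, scale=1.0, size=n_samples)   # gen 0: diverse "human" data
print(f"generation   0: spread (std) = {data.std():.3f}")

for gen in range(1, n_generations + 1):
    mu, sigma = data.mean(), data.std()                      # fit a model to current data
    data = rng.normal(loc=mu, scale=sigma, size=n_samples)   # next gen trains on model output
    if gen % 100 == 0:
        print(f"generation {gen:3d}: spread (std) = {sigma:.3f}")

# Every individual sample still looks plausible, but across generations the rare
# values vanish and the distribution narrows toward a bland average.
```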
This matters because AI models need diverse, high-quality data to perform well. The best AI systems were trained on the raw, messy, varied output of billions of real humans -- with all our creativity, weirdness, and unpredictability. If future models train primarily on the sanitized, pattern-averaged output of current AI, they'll lose the very diversity that made them capable in the first place.
Some researchers describe it as an "AI inbreeding" problem. There's now a premium on verified human-generated content for training purposes. The irony is real: the more successful AI becomes at generating content, the harder it becomes to train the next generation of AI.
Key takeaway for listeners: AI needs human creativity to function. If we flood the internet with AI-generated content, we risk making future AI systems blander and less capable. Human originality isn't just nice to have -- it's the raw material AI depends on.
Segment 10: "Nobody Knows How It Works" (~4 min)
Theme: Even the people who build AI don't fully understand it
Here's maybe the most unsettling fact about modern AI: the people who build these systems don't fully understand how they work. That's not an exaggeration -- it's the honest assessment from the researchers themselves.
MIT Technology Review published a piece in January 2026 about a new field of AI research that treats language models like alien organisms. Scientists are essentially performing digital autopsies -- probing, dissecting, and mapping the internal pathways of these systems to figure out what they're actually doing. The article describes them as "machines so vast and complicated that nobody quite understands what they are or how they work."
A company called Anthropic -- the makers of the Claude AI -- has made breakthroughs in what's called "mechanistic interpretability." They've developed tools that can identify specific features and pathways inside a model, mapping the route from a question to an answer. MIT Technology Review named it one of the top 10 breakthrough technologies of 2026. But even with these tools, we're still in the early stages of understanding.
Here's the thing that's hard to wrap your head around: nobody programmed these systems to do what they do. Engineers designed the architecture and the training process, but the actual capabilities -- writing poetry, solving math, generating code, having conversations -- emerged on their own as the models grew larger. Some abilities appeared suddenly and unexpectedly at certain scales, which researchers call "emergent abilities." Though even that's debated -- Stanford researchers found that some of these supposed sudden leaps might just be artifacts of how we measure performance.
Simon Willison, a prominent AI researcher, summarized the state of things at the end of 2025: these systems are "trained to produce the most statistically likely answer, not to assess their own confidence." They don't know what they know. They can't tell you when they're guessing. And we can't always tell from the outside either.
Key takeaway for listeners: AI isn't like traditional software where engineers write rules and the computer follows them. Modern AI is more like a system that organized itself, and we're still figuring out what it built. That should make us both fascinated and cautious.
Segment 11: "AI Can See But Can't Understand" (~3 min)
Theme: Multimodal AI -- vision isn't the same as comprehension
The latest AI models don't just read text -- they can look at images, listen to audio, and watch video. These are called multimodal models, and they seem almost magical when you first use them. Upload a photo and the AI describes it. Show it a chart and it explains the data. Point a camera at a math problem and it solves it.
But research from Meta, published in Nature, tested 60 of these vision-language models and found a crucial gap: scaling up these models improves their ability to perceive -- to identify objects, read text, recognize faces -- but it doesn't improve their ability to reason about what they see. Even the most advanced models fail at tasks that are trivial for humans, like counting objects in an image or understanding basic physical relationships.
Show one of these models a photo of a ball on a table near the edge and ask "will the ball fall?" and it struggles. Not because it can't see the ball or the table, but because it doesn't understand gravity, momentum, or cause and effect. It can describe what's in the picture. It can't tell you what's going to happen next.
Researchers describe this as the "symbol grounding problem" -- the AI can match images to words, but those words aren't grounded in real-world experience. A child who's dropped a ball understands what happens when a ball is near an edge. The AI has only seen pictures of balls and read descriptions of falling.
Key takeaway for listeners: AI can see what's in a photo, but it doesn't understand the world the photo represents. Perception and comprehension are very different things.
Suggested Episode Flow
For a cohesive episode, consider this order:
- Segment 1 (Strawberry) - Fun, accessible opener that hooks the audience
- Segment 2 (Math) - Builds on tokenization, deepens understanding
- Segment 3 (Hallucination) - The big one; real-world stakes with great stories
- Segment 4 (Does AI Think?) - Philosophical turn, audience reflection
- Segment 6 (Think Step by Step) - Practical, empowering -- gives listeners something actionable
- Segment 5 (Memory) - Quick, surprising facts
- Segment 11 (Vision) - Brief palate cleanser
- Segment 9 (AI Eats Itself) - Unexpected twist the audience won't see coming
- Segment 8 (Agents) - Forward-looking, what's next
- Segment 7 (Energy) - The uncomfortable truth to close on
- Segment 10 (Nobody Knows) - Perfect closer; leaves audience thinking
Estimated total runtime: 40-45 minutes of content (before intros, outros, and transitions)