
AI Overviews Are Rewriting the Internet: What It Means for Your Money and Your Mind
Google’s AI Overviews vs Reality: How to Stay Rich and Sane in the Summary Era.
The shift happened so quietly that many of us barely noticed. One day, we were typing keywords into Google and clicking blue links. The next, an AI-generated summary sat at the top of our search results, offering neat, paragraph-length answers without the hassle of visiting a single website.
This is the era of AI Overviews—and it is fundamentally rewriting how we find information, form beliefs, and make decisions. While convenient, this transformation carries profound implications for your finances and your psychology. Here is what you need to know.
How AI Search Changes What People Believe
For two decades, the “search-and-click” model ruled the internet. You typed a question, scanned a list of results, and clicked through to websites to form your own conclusions. That process, while imperfect, encouraged what researchers call triangulation—checking multiple sources to verify facts.
AI search collapses this process. Instead of presenting a menu of potential answers, it serves a single, synthesized response. And the data suggests people are accepting it without question.
A sweeping study from MIT analyzing 2.8 million search results found that when an AI summary appears, the “zero-click rate”—meaning users read the AI answer and leave without visiting any source—jumps from 60% to 80%. Eight out of ten people now take AI-generated answers at face value.
This is deeply concerning given the reliability of these systems. The same MIT study found that even leading AI models produce statements lacking adequate support approximately 30% of the time. Independent testing of Chinese AI models found an average citation accuracy of just 25%, with 43% of replies containing broken links.
The problem is compounded by how AI systems select information. Research shows AI search tools exhibit a strong bias toward high-traffic “superstar” websites—the top 1,000 sites account for about 10% of all AI citations. Platforms like Reddit, Wikipedia, and YouTube dominate, while smaller blogs, niche forums, and independent publications receive significantly less attention than they would in traditional search results.
This creates what researchers call a homogenization of information. When every AI summary pulls from the same concentrated pool of sources, diverse perspectives and minority viewpoints get systematically erased. Users are never told what they are missing.
Perhaps most troubling is the psychological mechanism at work. Studies consistently find that people perceive AI outputs as objective and credible—a phenomenon known as the “machine heuristic.” We trust computers to be neutral, even when evidence suggests they are anything but. AI models can exploit cognitive biases and ideological leanings, generating highly convincing misinformation tailored to what specific audiences want to hear.
“LLMs can generate highly convincing misinformation, often exploiting cognitive biases and ideological leanings of the audiences.”
— AI & SOCIETY, scoping review of 24 empirical studies
When AI Summaries Go Wrong, Who Pays the Price?
The cheerful veneer of AI assistance conceals a stark legal and financial reality: when AI summaries produce false or infringing content, someone has to pay. The question is who—and the answer varies depending on where you sit.
The Legal Front: Copyright and Hallucinations
Consider the case of Cohere, an AI company whose Command model provides news summaries using a technique called Retrieval Augmented Generation (RAG). When RAG is enabled, the AI searches for real articles. When it is disabled, the model hallucinates freely—fabricating entire news stories and attributing them to real publications.
In November 2025, a US federal court ruled that a lawsuit against Cohere could proceed, finding that AI-generated summaries may indeed infringe copyright when they reproduce “the structure, writing style, and punctuation” of original articles. In one striking example, Cohere’s AI copied eight of ten paragraphs from a New Yorker piece nearly verbatim.
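The on/off distinction that the case turns on is easy to see in miniature. The toy sketch below contrasts a grounded mode, which retrieves a real document and cites it, with an ungrounded mode, which has nothing anchoring its output to any source. The corpus, URLs, and word-overlap scoring are illustrative assumptions for this sketch, not Cohere’s actual implementation.

```python
# Toy illustration of Retrieval Augmented Generation (RAG).
# The corpus, URLs, and scoring method are illustrative assumptions,
# not any vendor's actual implementation.
from collections import Counter

ARTICLES = {
    "https://example.com/budget-2024":
        "The finance ministry raised the budget deficit forecast to 4 percent.",
    "https://example.com/rate-cut":
        "The central bank cut interest rates by 25 basis points in June.",
}

def retrieve(query: str, corpus: dict) -> tuple[str, str]:
    """Return (url, text) of the article sharing the most words with the query."""
    q = Counter(query.lower().split())

    def score(text: str) -> int:
        return sum((q & Counter(text.lower().split())).values())

    url = max(corpus, key=lambda u: score(corpus[u]))
    return url, corpus[url]

def answer(query: str, rag_enabled: bool) -> str:
    if rag_enabled:
        # Grounded mode: quote a real article and cite its source.
        url, text = retrieve(query, ARTICLES)
        return f"{text} (source: {url})"
    # Ungrounded mode: nothing ties the output to a real document,
    # so a generative model is free to fabricate both story and source.
    return "An unverified answer with no retrieved source attached."

print(answer("What did the central bank do to interest rates?", rag_enabled=True))
print(answer("What did the central bank do to interest rates?", rag_enabled=False))
```

With retrieval on, the answer carries a checkable citation; with it off, there is nothing for a reader to verify—which is exactly the failure mode at issue in the lawsuit.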
The legal principle at stake is profound: facts cannot be copyrighted, but expressive originality can. When an AI reproduces not just facts but a journalist’s unique phrasing and narrative structure, it may be stealing.
The Professional Front: Lawyers Who Trusted AI
If you think the stakes are high for publishers, consider what happens when professionals rely on AI for critical work.
In a recent UK immigration case, the Upper Tribunal discovered that legal representatives had submitted court filings containing fictitious case citations—cases that simply did not exist. The source of these phantom references? AI search tools.
One legal adviser could not explain how a non-existent case appeared in their pleading, telling the tribunal: “I cannot dismiss the fact that the case was an AI creation as there is no other explanation.”
The Tribunal was unforgiving. It referred multiple legal professionals to regulatory bodies for potential sanctions, noting that “the citation of cases which do not exist sends that judge on a fool’s errand” and that such practices drain judicial resources that could serve legitimate claims.
For professionals, the price of AI hallucination can include disciplinary action, malpractice lawsuits, and destroyed careers.
The Societal Front: Erosion of Trust
Beyond individual cases, society as a whole pays when AI summaries go wrong. Research shows that exposure to AI-generated misinformation reduces trust and influences decision-making. When people cannot distinguish between authentic content and AI fabrications, faith in all information erodes.
Microsoft’s recent study on media integrity warns that we are entering an era where “seeing isn’t always believing.” Deepfakes and AI-generated content threaten trust in news, elections, brands, and everyday interactions. And while technical solutions like content provenance and watermarking exist, the report concludes that “no single solution can prevent digital deception on its own.”
“Proof-of-Source” Habits Anyone Can Adopt
The rise of AI search does not mean we must surrender our critical thinking. Researchers, journalists, and cognitive scientists have identified practical habits that can protect you from AI-enabled misinformation. Call this the “proof-of-source” approach.
1. Always Click Through
The single most effective habit is also the simplest. When an AI summary provides an answer, do not stop there. Click the cited sources. Read the original material. MIT’s research found that users who verify information across multiple sources are significantly less likely to accept false claims.
Even better, adopt what Raptive’s research calls the “Trust Filter”—a multi-source verification mindset. Nearly half of Americans (48.6%) now treat search as a two-step process, recognizing that the first answer often needs verification.
2. Distinguish Facts from Expression
Understanding copyright law can make you a savvier consumer. Remember: facts are free, but expression is protected. When an AI summary rephrases factual information, it may be legitimate. When it reproduces distinctive phrasing, metaphors, or narrative structures from a single source, be suspicious. That level of copying often indicates shallow synthesis rather than genuine analysis.
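One rough way to operationalize that suspicion is to measure how much of a summary’s phrasing appears verbatim in a single source. The sketch below counts shared word 5-grams; the n-gram length, threshold, and example texts are arbitrary assumptions for illustration, not a legal test.

```python
# Rough heuristic: what fraction of a summary's phrasing is copied
# verbatim from one source? A high share of shared word 5-grams
# suggests reused expression rather than independently restated facts.
# The n-gram length and sample texts are illustrative assumptions.

def ngrams(text: str, n: int = 5) -> set:
    """Set of word n-grams in a text (lowercased, whitespace-split)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def copied_share(summary: str, source: str, n: int = 5) -> float:
    """Fraction of the summary's n-grams that appear verbatim in the source."""
    summ = ngrams(summary, n)
    if not summ:
        return 0.0
    return len(summ & ngrams(source, n)) / len(summ)

source = ("The storm moved slowly up the coast, flooding streets "
          "and cutting power to thousands of homes overnight.")
verbatim = "The storm moved slowly up the coast, flooding streets across town."
restated = "A slow-moving coastal storm caused flooding and power outages."

print(round(copied_share(verbatim, source), 2))  # high: phrasing reused
print(round(copied_share(restated, source), 2))  # low: facts restated freshly
```

The second summary conveys the same facts with zero verbatim overlap—the facts-versus-expression line the court drew, made measurable in a crude way.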
3. Check for the “C2PA Credential”
Microsoft and the Coalition for Content Provenance and Authenticity (C2PA) have developed technical standards for verifying media authenticity. Look for content credentials—digital watermarks and provenance information that can tell you whether an image, video, or text was AI-generated or authentically captured. While not yet universal, these credentials are appearing on more platforms and can provide a valuable trust signal.
4. Embrace Metacognition
The most powerful tool against misinformation is not technical but psychological. Researchers writing in Nature argue that most anti-misinformation strategies fail because they assume people primarily want to detect truth. In reality, people rely on quick plausibility judgments shaped by limited time, attention, and social context.
The solution is metacognition—thinking about your own thinking. Before accepting an AI answer, ask yourself:
- Am I accepting this because it is true, or because it confirms what I already believe?
- Would I trust this information if it came from a different source?
- What would I need to see to change my mind?
Studies show that helping people calibrate their confidence and recognize uncertainty significantly reduces errors in complex information environments.
5. Diversify Your Information Diet
Just as financial advisors recommend diversified portfolios, information experts recommend diversified sources. Do not rely on a single AI assistant or search engine. Cross-check answers across multiple platforms—different AI models, traditional search engines, and direct visits to trusted publications.
Raptive’s research found that 37.2% of people now trust information only when they find “multiple sources saying the same thing.” This evidence-stacking approach is not paranoia; it is prudent information hygiene.
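Evidence stacking can be written down as a simple rule: accept a claim only when at least k distinct sources independently support it. The sketch below applies that rule; the support test (keyword containment) and the sample headlines are illustrative assumptions, not a real fact-checking pipeline.

```python
# Toy "evidence stacking": accept a claim only when at least k
# independent sources support it. The support test (keyword containment)
# and the sample headlines are illustrative assumptions.

def supports(source_text: str, keywords: list[str]) -> bool:
    """A source counts as support if it mentions every keyword of the claim."""
    text = source_text.lower()
    return all(k.lower() in text for k in keywords)

def evidence_stacked(keywords: list[str], sources: list[str], k: int = 2) -> bool:
    """True when at least k distinct sources independently support the claim."""
    return sum(supports(s, keywords) for s in sources) >= k

sources = [
    "Regulator fines BankCo 10 million for mis-sold products.",
    "BankCo to pay 10 million penalty, regulator announces.",
    "BankCo shares rise after strong quarterly earnings.",
]

# Claim "BankCo fined 10 million": backed by two sources, so accept.
print(evidence_stacked(["BankCo", "10 million"], sources))     # True
# Claim "BankCo earnings fell": backed by no source, so reject.
print(evidence_stacked(["BankCo", "earnings fell"], sources))  # False
```

Raising k raises the bar: the stricter you set it, the more independent corroboration a claim needs before you act on it—the same trade-off you make informally when you cross-check an AI answer against other platforms.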
6. Read Like a Fact-Checker
Professional fact-checkers follow a discipline that anyone can learn. When you encounter an AI summary:
- Trace claims to their original source. Do not trust the AI’s paraphrase.
- Check the publication date. Old information may be obsolete.
- Examine the author’s credentials. Is this person qualified to speak on this topic?
- Look for corrections and retractions. Reputable sources fix mistakes.
If this sounds like work, that is because it is. But the alternative—blindly trusting AI-generated summaries—carries costs that far exceed the effort of verification.
The Bottom Line
AI search is not going away. Google’s AI Overviews now reach 229 countries, up from just 7 one year ago. The convenience of instant answers is too seductive for both companies and consumers to abandon.
But convenience and accuracy are not the same thing. The evidence is clear: AI systems hallucinate, they bias toward dominant sources, and they exploit our psychological tendency to trust machines.
Protecting your money and your mind in this new era requires active effort. The “proof-of-source” habits outlined above are not optional extras for tech enthusiasts—they are essential literacies for anyone who wants to navigate the AI-rewritten internet without being misled.
The good news is that the skills of verification—checking sources, thinking metacognitively, diversifying inputs—are timeless. They served us well in the era of newspapers, television, and social media. They will serve us well in the era of AI.
The only difference is that now, more than ever, the price of passive consumption is your trust. Do not pay it.
