Why AI Won’t Kill the Internet, But Could Save It
Sir Tim Berners-Lee doesn’t think artificial intelligence will destroy the web, but he’s deeply concerned that AI could collapse the advertising-driven economic model that currently sustains it. In a recent interview on The Verge’s Decoder podcast with editor-in-chief Nilay Patel, the inventor of the World Wide Web offered a nuanced perspective on AI’s impact: while chatbots and large language models threaten to dismantle the click-through economy that funds most online content, they might also help realize his original vision of a decentralized, user-controlled internet. This matters because the web’s foundational architecture – the system of links, traffic, and revenue that supports everything from independent journalism to niche blogs – is crumbling as AI tools increasingly bypass websites altogether, providing direct answers without sending users to original sources. Speaking in November 2025 to promote his new memoir “This Is For Everyone: The Unfinished Story of the World Wide Web,” Berners-Lee laid out both the existential threats facing today’s web and the technical solutions that could preserve its core values while adapting to an AI-driven future.
How the inventor of HTML and HTTP sees today’s crisis
Tim Berners-Lee occupies a unique position in any discussion about the web’s future. As the British computer scientist who wrote the original proposal for the World Wide Web in 1989, created HTML and HTTP, and built the first web browser, he stands as one of the most influential inventors of the modern digital age. Born in the same year as Bill Gates and Steve Jobs, Berners-Lee distinguished himself by famously sharing his invention for no commercial reward, enabling the widespread adoption that transformed humanity into its first digital species. He currently serves as CTO and co-founder of Inrupt, senior researcher at MIT’s Computer Science and Artificial Intelligence Laboratory, and professor at the University of Southampton School of Electronics and Computer Science. His perspective carries particular weight because he was there at the very beginning, and unlike many tech pioneers who have retreated from public discourse, he remains actively engaged in shaping the web’s evolution through his work at the World Wide Web Consortium (W3C) and his decentralization startup.
From his vantage point, the web that emerged from his 1989 vision has both exceeded and betrayed his expectations. “A lot of it is as I imagined,” he told Patel, noting that his core goal was creating “a general platform to add to other people’s creativity.” What he couldn’t have predicted were the “many wacky things people do” and the concentration of power that has occurred. His central concern today revolves around what he calls the “infrastructure of the web,” particularly the advertising-supported model that funds the vast majority of online content. “I do worry about the infrastructure of the web when it comes to the stack of all the flow of data, which is produced by people who make their money from advertising,” he stated in the interview. “If nobody is actually following through the links, if people are not using search engines, they’re not actually using their websites, then we lose that flow of ad revenue. That whole model crumbles. I do worry about that.”
This isn’t abstract theorizing. The mechanism of destruction is straightforward: when users ask ChatGPT, Claude, Perplexity, or Google’s AI Overviews a question, these systems provide synthesized answers drawn from web content without requiring users to visit the original sources. The click-through that generates advertising revenue disappears. Publishers lose both visibility and their primary funding source. Already, approximately 70% of searches no longer result in clicks, according to industry data referenced in the interview context. The economic implications are staggering given that web advertising generates roughly $398 billion annually, with Google alone posting $100 billion in quarterly revenue and Meta hitting $51.2 billion in a recent quarter. As these figures suggest, the web’s current economic model isn’t just significant; it’s the foundation supporting everything from major media outlets to individual bloggers and niche educational sites.
The monopoly problem predates AI but compounds the crisis
Berners-Lee’s diagnosis extends beyond AI to address the concentration of power that has characterized the web’s evolution over the past two decades. His analysis is precise and unflinching. “When you have a market and a network, then you end up with monopolies. That’s the way markets work,” he explained. He then catalogued the consolidation: “There was a time before Google Chrome was totally dominant, when there was a reasonable market for different browsers. Now Chrome is dominant. There was a time before Google Search came along, there were a number of search engines and so on, but now we have basically one search engine. We have basically one social network. We have basically one marketplace, which is a real problem for people.”
This monopolistic landscape creates what he describes as a fundamentally disempowering environment for users. His critique of Facebook (Meta) illustrates the point: “Well, everybody’s on Facebook, so they don’t have the website. They all use Mark Zuckerberg’s website. When people look you up on Facebook, you don’t control actually what they see … Mark Zuckerberg’s algorithms control what news gets fed to them as they’re looking at your stuff. That’s very disempowering. It is very useful to Facebook. They have a lot of data about people that they use for targeting them with advertisements … but what we’ve lost is the ability for individuals to have power.”
The convergence of AI capabilities with this monopolistic structure creates a particularly troubling dynamic. When a handful of companies control both the platforms where content lives and the AI systems that synthesize and present information, the potential for manipulation and the erosion of diverse perspectives becomes acute. The web’s original promise of decentralized knowledge creation and sharing gives way to algorithmic curation by entities whose primary incentive is maintaining user engagement for advertising purposes.
How AI floods the web with synthetic content, eroding truth and trust
The crisis extends beyond economics to epistemology – how we know what we know, and whether we can trust what we read online. Berners-Lee and other researchers are tracking a disturbing trend: projections suggest 90% of online text could be AI-generated by 2030. This creates what researchers call “model collapse,” a phenomenon documented in a July 2024 Nature paper titled “AI Models Collapse when Trained on Recursively Generated Data.” The finding is stark: “indiscriminate use of model-generated content in training causes irreversible defects in the resulting models, in which tails of the original content distribution disappear.”
What does this mean in practice? As AI systems are increasingly trained on content generated by other AI systems, a recursive loop of synthetic text, the diversity and quality of information degrade. Rare perspectives vanish. Nuanced arguments get smoothed into bland consensus. The “long tail” of specialized knowledge and minority viewpoints disappears from the training data, leaving future models with an increasingly narrow and uniform understanding of human knowledge and culture. This represents an existential threat to the web’s value as a repository of diverse human knowledge and creativity.
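The tail-trimming dynamic can be illustrated with a toy simulation, using a Gaussian as a stand-in for a content distribution rather than an actual language model. Each “generation” is fitted only to samples produced by the previous one, and sampling noise gradually narrows the distribution’s spread, a simplified analogue of the collapse the Nature paper describes:

```python
import random
import statistics

def fit(samples):
    """'Train' a toy model: estimate mean and spread from its data."""
    return statistics.mean(samples), statistics.stdev(samples)

def generate(mu, sigma, n, rng):
    """Sample synthetic 'content' from the fitted model."""
    return [rng.gauss(mu, sigma) for _ in range(n)]

rng = random.Random(42)
data = generate(0.0, 1.0, 10, rng)   # generation 0: diverse "human" data

sigmas = []
for gen in range(300):
    mu, sigma = fit(data)
    sigmas.append(sigma)
    # Each new generation trains ONLY on the previous model's output,
    # so estimation noise repeatedly trims the distribution's tails.
    data = generate(mu, sigma, 10, rng)

print(f"spread at generation 0:   {sigmas[0]:.3f}")
print(f"spread at generation 299: {sigmas[-1]:.3f}")
```

The small per-generation sample size exaggerates the effect for demonstration; the qualitative point is that recursive self-training has a systematic drift toward uniformity, not just added noise.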
The trust dimension compounds the problem. Nearly half the time, according to research referenced in the interview context, AI assistants distort or misrepresent news content when summarizing it. Thousands of automated sites have emerged that spread misinformation and fake news, often optimized to appear credible to both users and AI systems. The uniformity of AI-driven content threatens not just individual truth claims but the entire epistemic infrastructure we rely on to distinguish reliable from unreliable information. When synthetic content becomes indistinguishable from human-created content, and when AI systems train on their own outputs, we risk creating what some researchers call an “information desert” – a web where finding genuine expertise, original research, or authentic human perspective becomes increasingly difficult.
From my perspective as someone who has spent decades working in user-centered web development, this degradation of information quality represents a profound failure of the systems we’ve built. The web was supposed to democratize knowledge and enable direct communication between experts and interested learners. Instead, we’re creating layers of mediation – algorithmic summaries of AI-generated content based on earlier AI-generated summaries – that obscure rather than illuminate. The pedagogical implications alone are troubling: how do students learn to evaluate sources when the sources themselves are increasingly synthetic? How do researchers build on prior work when that work may be algorithmically generated rather than reflecting genuine empirical findings or rigorous analysis?
Why Berners-Lee believes the web will survive and how we can guide it
Despite these stark challenges, Berners-Lee’s core message is one of qualified optimism. The title of his Decoder interview captures his stance: “Sir Tim Berners-Lee doesn’t think AI will destroy the web.” His reasoning rests on several pillars, beginning with the recognition that AI has actually succeeded where his own earlier project failed. For decades, he championed the Semantic Web – a vision of machine-readable data that would allow computers to understand and process web content more intelligently. “We never built the things that would extract semantic data from non-semantic data,” he acknowledged. “Now AI will do that.”
This represents a fundamental shift. AI systems can now extract structured information from websites regardless of how they’re formatted, accomplishing what the Semantic Web project couldn’t achieve through standards and protocols alone. “Now we’ve got another wave of the Semantic Web with AI,” Berners-Lee explained. “You have a possibility where AIs use the Semantic Web to communicate between one and two possibilities and they communicate with each other. There is a web of data that is generated by AIs and used by AIs and used by people, but also mainly used by AIs.” Projects like Schema.org, which Google promoted to help search engines understand website content through structured metadata, have created a foundation of machine-readable information that AI can leverage. The Semantic Web “has succeeded to the extent that there’s the linked open data world of public databases of all kinds of things, about proteins, about geography, the OpenStreetMap, and so on.”
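Schema.org metadata of the kind Berners-Lee mentions is most commonly embedded in pages as JSON-LD inside a `<script type="application/ld+json">` tag. A minimal sketch using only Python’s standard library shows why this matters for machine readers: the structured fields survive round-tripping as plain JSON, so a crawler or AI system can recover them without interpreting the page’s visual markup (the publication date below is illustrative, not a verified fact):

```python
import json

# A minimal schema.org description of a book, as JSON-LD.
# In a real page this JSON would sit inside:
#   <script type="application/ld+json"> ... </script>
book = {
    "@context": "https://schema.org",
    "@type": "Book",
    "name": "This Is For Everyone: The Unfinished Story of the World Wide Web",
    "author": {"@type": "Person", "name": "Tim Berners-Lee"},
    "datePublished": "2025-01-01",   # illustrative placeholder date
}

snippet = json.dumps(book, indent=2)

# Any machine reader can recover the structured fields directly.
parsed = json.loads(snippet)
print(parsed["author"]["name"])
```

This is the pattern that lets heterogeneous sites expose uniform, machine-readable facts regardless of how their HTML is laid out.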
This technical infrastructure provides a basis for optimism, but Berners-Lee’s more compelling argument centers on agency, both individual and collective. Social media platforms, he points out, “are all just code.” They can be changed. The web’s architecture is not fixed in stone; it reflects choices made by engineers, corporate strategists, and policymakers. Different choices remain possible. His proposed solutions fall into several categories, each addressing a different aspect of the crisis.
Personal data sovereignty through the Solid protocol
Berners-Lee’s primary technical solution involves returning data control to individual users through what he calls “pods” – personal online data storage containers. Through his company Inrupt, co-founded with John Bruce, he’s implementing the Solid protocol, which allows users to maintain a single sign-in across services while keeping their data in user-controlled storage rather than corporate silos. “Give everybody a little bit of cloud storage, their own personal cloud storage, call it a Solid pod, they have complete control over that,” he explains in various interviews discussing the technology.
The implications are profound. Instead of websites collecting data about you when you visit, they would request access to specific data from your pod, subject to your permissions. Your fitness information, shopping preferences, reading history, and personal documents would all reside in a space you control. This architecture enables what Berners-Lee calls the “intention economy,” where technology helps users achieve their own goals rather than capturing attention for advertisers. It also addresses the trust problem: when you control your data, you can choose which AI systems access it and for what purposes.
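The inversion of control can be sketched in a toy model. This is not the real Solid API (class and method names here are invented for illustration); it only captures the permission flow the interview describes: data lives with the user, services must be granted access, and the grant can be revoked at any time:

```python
class Pod:
    """Toy model of a user-controlled data store (not the real Solid API)."""

    def __init__(self, owner):
        self.owner = owner
        self._data = {}      # e.g. {"fitness/steps": 8200}
        self._grants = {}    # service name -> set of readable keys

    def put(self, key, value):
        self._data[key] = value

    def grant(self, service, keys):
        self._grants.setdefault(service, set()).update(keys)

    def revoke(self, service):
        self._grants.pop(service, None)

    def read(self, service, key):
        # Services see only what the owner has explicitly granted.
        if key not in self._grants.get(service, set()):
            raise PermissionError(f"{service} may not read {key}")
        return self._data[key]

pod = Pod("alice")
pod.put("fitness/steps", 8200)
pod.grant("coach-app", {"fitness/steps"})
print(pod.read("coach-app", "fitness/steps"))

pod.revoke("coach-app")
# pod.read("coach-app", "fitness/steps") would now raise PermissionError
```

The design choice worth noticing is the default: a service with no grant sees nothing, the opposite of today’s platforms, where the service holds the data and the user requests limited visibility into it.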
Building on this foundation, Berners-Lee envisions personal AI assistants that work for individuals rather than corporations. He refers to this concept as “Charlie”, an AI agent that operates like a lawyer or doctor, with fiduciary responsibility to serve your interests. “Sometimes you have the whole data spectrum – all of the data to do with your collaborations and your coffees and your projects and your dreams. And the books you’re reading and … all of your life, then that is in your pod. You run AI on that. That could be sweet,” he told Patel. Unlike Amazon’s Alexa or Apple’s Siri, which serve their corporate creators’ interests, these personal AI assistants would leverage your complete data profile to provide genuinely useful, personalized assistance while maintaining your privacy.
Alternative economic models for sustainable content creation
Addressing the collapsing advertising model requires rethinking how content creators get compensated. Berners-Lee advocates revisiting micropayments – small transactions for accessing individual pieces of content. “You could write the protocols. One, in fact, is micropayments. We’ve had micropayments projects in W3C every now and again over the decades,” he explained in the interview. “So, suddenly there’s a ‘payment required’ error code in HTTP. The idea that people would pay for information on the web; that’s always been there.”
The difference today is scale and integration. With modern payment infrastructure, micropayments become feasible in ways they weren’t in the web’s early decades. Berners-Lee also supports initiatives like Cloudflare’s pay-per-crawl proposal, where AI systems would compensate content creators for accessing their material. Building such payment requirements into open web standards would create a framework that “AI crawlers and other clients across the ecosystem would have to honor by default,” as the interview discussion noted. Whether users are individuals or AI systems, the key is establishing protocols that enable appropriate compensation: “of course whether you’re an AI crawler or whether you are an individual person, it’s the way you want to pay for things that’s going to be very different.”
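The “payment required” mechanism Berners-Lee refers to really does exist in HTTP: status code 402 has been reserved for this purpose since the protocol’s early days. A hypothetical sketch of a pay-per-request gate shows the shape of the idea; the header name and receipt scheme below are invented for illustration, and real proposals such as pay-per-crawl differ in their details:

```python
# Hypothetical pay-per-request gate built on HTTP's reserved
# 402 Payment Required status code. "X-Payment-Receipt" and the
# receipt format are invented for this sketch.

VALID_RECEIPTS = {"receipt-abc123"}   # receipts issued by a payment provider

def handle_request(headers, content):
    """Return (status_code, body) for a request to paid content."""
    receipt = headers.get("X-Payment-Receipt")
    if receipt not in VALID_RECEIPTS:
        return 402, "Payment Required"
    return 200, content

# Unpaid request: the gate refuses with 402.
status, body = handle_request({}, "full article text")
print(status)

# Paid request: the receipt unlocks the content.
status, body = handle_request(
    {"X-Payment-Receipt": "receipt-abc123"}, "full article text"
)
print(status)
```

The same gate works whether the client is a browser acting for a person or an AI crawler, which is exactly why building it into open standards, rather than per-site paywalls, would matter.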
These aren’t merely technical solutions; they represent a fundamentally different economic logic. Rather than an attention economy where platforms profit by keeping users engaged and serving them advertisements, Berners-Lee envisions an intention economy where users pay (often in tiny amounts) for the value they receive, and creators are directly compensated for their contributions. This model better aligns incentives: creators focus on providing genuine value rather than optimizing for engagement metrics, and users pay for what they actually find useful rather than having their attention extracted as the product being sold to advertisers.
Decentralization, open standards, and structural reform
Underpinning all of Berners-Lee’s proposals is a commitment to decentralization and open standards. He’s careful to distinguish his vision – what he calls “Web 3.0 (with a dot)” – from blockchain-based “Web3” hype. He’s skeptical of cryptocurrencies, calling them “only speculative,” and argues that blockchain technology is neither fast enough nor private enough to serve as the web’s future infrastructure. Instead, his approach focuses on open protocols developed through bodies like the W3C, where diverse stakeholders can contribute to standards that serve broad interests rather than narrow corporate agendas.
Addressing the monopoly problem requires structural changes beyond technical protocols. Berners-Lee points to regulatory efforts like the European Union’s Digital Markets Act as important steps toward promoting competition. But he’s also pragmatic about the nature of network effects: markets and networks naturally tend toward concentration. The solution isn’t to pretend this won’t happen, but to create governance structures and regulatory frameworks that prevent dominant platforms from abusing their positions and that preserve interoperability so users can move between services without losing their data or social connections.
His call for what he describes as “a CERN-like body to oversee global AI research,” as reported in India Today, reflects a similar logic. Just as CERN coordinates international particle physics research, a global body could coordinate AI development in ways that prioritize safety, transparency, and alignment with human values over pure corporate profit-seeking. This represents learning from the mistakes of social media, where largely unregulated platforms optimized for engagement created massive societal problems from misinformation to political polarization.
Reflecting on resilience: why the web’s future depends on choices we make now
As someone who has dedicated my career to user-centered web development, I find Berners-Lee’s analysis both sobering and energizing. The web faces genuine existential threats from AI – not in the sense that the technology infrastructure will disappear, but that the economic models and information ecosystems that make it valuable could collapse, leaving us with what one commentator called “a zombie web” of AI-generated content serving ads to AI agents. The projections are genuinely alarming: 70% of searches generating no clicks, 90% of content potentially AI-generated by 2030, half of AI summaries distorting their source material. These aren’t distant speculative concerns; they’re trends already visible in current data.
Yet I share Berners-Lee’s fundamental optimism about the web’s resilience, precisely because the web is not a natural phenomenon but a human creation. It reflects choices – about protocols, about business models, about governance structures, about technical standards. As Berners-Lee emphasizes, platforms “are all just code.” They can be rewritten. The question is whether we have the collective will to make different choices than the ones that led to our current predicament of concentrated corporate power, surveillance capitalism, and algorithmic manipulation.
His proposed solutions – personal data pods, AI assistants that work for users, micropayment systems, decentralized protocols, regulatory frameworks that promote competition – are technically feasible. The Solid protocol exists and is being deployed. Micropayment infrastructure has been built. The regulatory frameworks are being crafted. What remains uncertain is whether these alternatives can achieve sufficient adoption to create viable alternatives to the dominant platforms, and whether policymakers will implement regulations that genuinely change incentives rather than merely creating compliance costs that further entrench incumbents.
From a user-centered design perspective, Berners-Lee’s vision aligns with fundamental principles our field has long championed: user agency, informed consent, transparency, and designing for human flourishing rather than corporate metrics. The attention economy violates these principles systematically, treating users as resources to be extracted rather than autonomous individuals to be served. The intention economy he envisions restores agency: users control their data, choose how AI assists them, and pay directly for value received. This is not just ethically superior; it creates better incentives for genuine innovation focused on serving user needs.
The stakes extend beyond economics to epistemology and culture. A web where 90% of content is AI-generated synthetic text, trained recursively on earlier AI outputs, loses its value as a repository of human knowledge, creativity, and diverse perspectives. The model collapse problem is real: as rare viewpoints disappear from training data, as nuanced arguments get smoothed into algorithmic consensus, as authentic expertise becomes indistinguishable from synthetic content, we risk creating what researchers call an information desert. Future generations would inhabit a web where finding genuine human insight, original research, or authentic cultural expression becomes increasingly difficult – a profound impoverishment of our collective intellectual and cultural commons.
Conclusion: heeding the inventor’s warning while embracing technological possibility
Tim Berners-Lee’s message in his Decoder interview crystallizes around a central paradox: AI threatens the web’s current economic foundations while potentially enabling the decentralized, user-controlled vision he originally intended. The collapse of the advertising-driven model isn’t speculation but observable reality, with click-through rates plummeting and AI systems increasingly serving as intermediaries that extract value from content while cutting out creators. The flood of synthetic content erodes trust and threatens the diversity of human knowledge. These are genuine crises that demand urgent attention.
Yet the inventor of the web doesn’t believe AI will destroy his creation – because destruction and survival aren’t the only options. Transformation is possible. The technical solutions exist: personal data pods, open protocols, micropayment systems, AI assistants aligned with user interests, regulatory frameworks that promote competition and user control. What’s required is the collective will to implement them – from engineers writing code, to entrepreneurs building alternatives to dominant platforms, to policymakers crafting regulations, to users choosing services that respect their autonomy.
As educators and researchers engaged with web technologies, we bear particular responsibility. We train the engineers who will build these systems. We conduct the research that illuminates both problems and solutions. We model alternatives in our own practices and institutions. We advocate for choices that serve broad human flourishing over narrow corporate profit. Berners-Lee’s optimism isn’t naive; it’s grounded in the recognition that the web’s future remains undetermined, subject to the choices we make individually and collectively in the coming years.
The web’s resilience depends not on technological determinism but on human agency. Berners-Lee has given us both the diagnosis and the prescription. The question is whether we’ll heed them.