The Internet is Dead, and We Have Killed It.

Or: Surviving and Thriving in the Textpocalypse.

The “dead internet theory” goes something like this: the internet died around 2016, when someone (the U.S. government, usually) replaced most organic human activity with bots and algorithms. The purpose, as it always is in such theories, is to control the population.

In other words, “the U.S. government is engaging in an artificial intelligence powered gaslighting of the entire world population.”

Good news: the internet isn’t dead…

Bad news: …yet.

It’s highly improbable (though, I grant, not impossible) that the U.S. government could turn the entire internet into a digital Potemkin village. The internet is not dead: you are here, I am here, and so are millions of others. But forces greater than the U.S. government are at work to kill the internet: human ingenuity and human laziness.

Human ingenuity created a powerful tool: generative AI, with ChatGPT as its most famous example. Ask it a question, and it will happily give you answers (as long as the question is not “how many times does the letter R appear in the word ‘strawberry’?”).

Despite all appearances, ChatGPT is not actually thinking about its responses, at least not the way you and I do. Instead, it is a kind of prediction engine. Its creators trained it on enormous amounts of text, and from that training it learned to predict, word by word, what a plausible answer to your question looks like.
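For the curious, the “prediction engine” idea can be shown with a toy sketch. This is a drastic simplification with a made-up training text; real models like ChatGPT use neural networks trained on billions of examples, not raw word counts, but the core idea of predicting the next word from what came before is the same:

```python
from collections import Counter, defaultdict

# Toy "prediction engine": count which word follows which in some
# training text, then predict the most commonly observed follower.
training_text = (
    "the cat sat on the mat "
    "the dog sat on the rug "
    "the cat chased the dog"
)

followers = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in training."""
    if word not in followers:
        return None
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" -- it followed "the" most often
print(predict_next("sat"))  # "on" -- "sat" was always followed by "on"
```

The toy model has never “understood” a single sentence; it only knows which word tends to come next. Scale that idea up by many orders of magnitude and you get something that looks uncannily like understanding.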

ChatGPT’s predictions look remarkably accurate. So accurate, in fact, that ChatGPT fooled some lawyers into filing briefs with a federal court that contained citations to cases that don’t exist. Because ChatGPT knows what a legal citation is supposed to look like, it created citations that looked real enough to fool those lawyers but that were, in fact, invented out of whole cloth.

Irony abounds: human ingenuity created a tool able to fool human ingenuity.

Human laziness saw an opportunity and started using artificial intelligence to create content for humans—lots and lots of content for humans. So much content, in fact, that Google had to change how it ranks webpages to avoid websites that “feel like they were created for search engines instead of people.”

Remember how these AI models were created in the first place: by feeding them millions upon millions of words, sentences, and paragraphs. Before AI could create content, of course, there was no AI-created content. Humans made the first datasets used to train AI.

But AI has now generated much of the content on the internet. To continue refining their models, AI researchers must feed them more data: more words, sentences, and paragraphs. And they are collecting that data from the internet. The result is a vicious cycle: AI creates the data that trains the AI that creates the data that trains the AI, and so on.
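That cycle can be sketched with a toy simulation. This is purely illustrative, with a made-up five-word vocabulary; the real research on this degradation (often called “model collapse”) is far more involved, but the basic dynamic is the same:

```python
import random
from collections import Counter

random.seed(0)

# A toy "model" that just learns word frequencies from its training
# corpus, then "generates" a new corpus by sampling from what it
# learned. Each generation is trained on the previous generation's
# output -- AI output becoming the next AI's training data.
corpus = (["the"] * 50 + ["cat"] * 20 + ["sat"] * 15 +
          ["quietly"] * 10 + ["yesterday"] * 5)

for generation in range(5):
    print(generation, Counter(corpus))
    # Generate a same-sized corpus by sampling, then retrain on it.
    corpus = random.choices(corpus, k=len(corpus))
```

Nothing new can ever enter the vocabulary, and the rarest words tend to be the first to vanish: run it a few times and you will usually watch the long tail of the distribution wither away, generation by generation.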

This problem has been described as a “textpocalypse.” The thesis runs, more or less, that AI-generated content will one day overrun the internet, drowning our once-mighty digital empire of knowledge in a sea of textual “gray goo.” One can imagine users throwing up their hands in frustration at meaningless, AI-generated search results choking out human-generated content like weeds.

Or, one can be a bit more optimistic.

The internet, as it exists now, has centralized around certain information brokers like Google, Reddit, and YouTube. AI-generated content will have an outsized impact on these centralized sites. Google is running a Red Queen’s race to mitigate, or even benefit from, AI’s impact, but most observers expect the wave to break over it nonetheless.

Communities, however, will not feel the same impact as the large-scale information brokers. Artificial intelligence cannot (for now, and perhaps ever) infiltrate offline or hybrid communities. Small-scale communities built around human relationships (i.e., real-life interactions) are poised to thrive even as (or if) the internet dies.

Print writing is likely to see a return to prominence as “scammy AI-generated book rewrites” flood online e-book markets. Traditional publishers are built on human relationships and human-curated content. Readers will likely turn to traditional publishing houses to sort through the noise and find the quality human content.

Self-published authors need not despair, however. A future full of AI content opens the door to marketing strategies built around human relationships. Private author communities—Facebook groups, forums, newsletters—will allow self-published authors to connect directly with their readers in a far more agile and personal way than traditional publishers. Word-of-mouth referrals from current readers will also carry more weight with potential new readers.

Writers have always been artists. In our brave new AI-generated world, writers are also artisans. “Human made” will be the new “hand made” label, a maker’s mark that commands a higher price. Just as there will always be a demand for AI’s fast fashion, there will also be a demand for the bespoke, tailor-made writing that only human authors can create.

I believe human writers can thrive in spite of, and perhaps even because of, AI. This newsletter is dedicated to testing, exploring, and sharing strategies for writers. Each week or so, I’ll send out an email doing just that. If that’s something you’re interested in, subscribe and join me in writing Words for Humans.