
The Rise of AI: ChatGPT does the opposite of the printing press and it’s making humanity dumber


Note: This is the second part of a two-part series. Read Part I.


PDF: I made a fancy PDF version of this post if you want to print or enjoy offline. Download it here.


In Part I, we discussed warnings from computer science luminaries including Alan Turing, Ray Kurzweil, and Vernor Vinge about the exponential rate at which technology improves. Most of those thinkers expect AI to surpass human intelligence in a future that’s closing in fast. The gap between man and machine has been shrinking for years, and whatever machine flips that power balance could be the last invention humans ever create. 

The smartest species on Earth has always been the most powerful, and it’s not close. Exponential learning means the smarter a machine gets, the faster it can learn. Philosopher Nick Bostrom has argued that a functional superintelligence would be learning so fast that the moment it becomes our equal, it would take only a moment more for it to outsmart us many thousandfold. 

Let’s say humans have an intelligence of 10. If it takes AI a century to reach 10, it might only take an hour for AI to then hit an intelligence of 11, and then just a few minutes for it to hit 20 or 50 or 100. So once you find out that AI is on the brink of hitting human-level smarts, you could see breaking news that same day about the existential threat posed by smarter-than-human bots.
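The thought experiment above can be sketched as a toy simulation. This is purely illustrative: every constant below (the starting cost, the speedup factor, the intelligence scale itself) is invented to match the essay's hypothetical numbers, not drawn from any real model of AI progress.

```python
# Toy model of recursive self-improvement: each point of intelligence
# makes the next point cheaper to earn. All constants are made up to
# mirror the essay's hypothetical ("a century to 10, an hour to 11").

def hours_to_gain_point(level, base_hours=657_000, speedup=4):
    """Hours needed to climb from `level` to `level + 1`.

    base_hours is tuned so the climb from 0 to 10 (the essay's
    human level) takes roughly a century; speedup is how much
    faster learning gets with each point gained.
    """
    return base_hours / (speedup ** level)

def total_hours(target, start=0):
    """Total hours to climb from `start` to `target`."""
    return sum(hours_to_gain_point(level) for level in range(start, target))

century = 100 * 365.25 * 24  # hours in a century

print(f"0 -> 10: {total_hours(10) / century:.2f} centuries")   # ~1.00
print(f"10 -> 11: {total_hours(11, 10) * 60:.0f} minutes")     # ~38
print(f"11 -> 20: {total_hours(20, 11) * 60:.1f} minutes")     # ~12.5
```

The takeaway is the asymmetry: almost all of the elapsed time is spent below human level, and the leap from "our equal" to "far beyond us" happens in minutes.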

Bostrom and other leading thinkers have acknowledged that superintelligence presents a plausible path toward human extinction. But ahead of doomsday, widely accessible AI tools are already showing signs of reshaping the way people think and work. 

Specifically, ChatGPT, the most popular of the so-called large language models, has taken the internet by storm since launching in November 2022. It can write articles, emails, real-estate listings, and more. Microsoft poured $10 billion into OpenAI, the company behind ChatGPT, in January, and the hype has sparked an AI war among the world’s tech giants.

The rest of this essay explores why ChatGPT could catalyze the most dramatic revolution since Johannes Gutenberg invented the printing press, and how the technology makes humans dumber by increasing the gap between knowledge and understanding.

ChatGPT is the anti-printing press

Five hundred years ago, human knowledge was limited by a paltry literacy rate and a depressing lack of literature. Books, handwritten by a small number of scholars, were so rare that libraries chained them to desks. 

But Gutenberg’s movable-type printing press created an information explosion when it hit the scene in 1450. Within 50 years of Gutenberg’s first printed Bible, over 1,000 printing houses had sprung up across Europe. Over ten million copies of books entered circulation, newspapers began using movable type, and a fast-growing number of people were learning to read. 

Gutenberg made it possible to mass-produce abstract thought, and humanity got smarter. This sparked the Renaissance, the scientific method, and eventually modern Enlightenment thought.  

ChatGPT, for its part, does the opposite of the printing press. The tool doesn’t enable the same profusion of knowledge that the printing press did, but instead distills massive amounts of information into bite-sized, plain-language ideas, without citing any sources.

“GPT” stands for Generative Pre-trained Transformer. To be sure, it’s a triumph of human ingenuity. In seconds, it can produce original sentences and paragraphs on high-level topics drawn from billions of data points.

ChatGPT can imitate writing styles if you ask it, and it communicates with authority. The polish of its prose creates an illusion of credibility, a phenomenon known as “automation bias” that long predates chatbots. Manufacturing statements and presenting them as fact is a dangerous game. Think about that in the context of misinformation campaigns, propaganda, and media biases.

As the technology improves, it will become even less distinguishable from human-generated communication. It’s not conscious, yet it can hold a coherent enough conversation. Some claim it’s already passed the Turing Test. There’s room for these tools to be used wisely, but over time it seems inevitable they will reshape our cognition. 

Ted Chiang argued in The New Yorker that ChatGPT indeed makes copies of existing information, but then makes copies of those copies so that the end result is a blurry, low-fidelity rendition of the original data. It’s like the game Telephone — a message becomes more muddled each time it’s passed along. It paraphrases to the Nth degree. 

What really is the value in looking at a lower quality version of something when we still have access to the original? And in what world does that make humans more intelligent?

More knowledge, less understanding

ChatGPT expedites the technological Renaissance that the internet and smartphones started: Knowing how to use tools takes precedence over actual understanding. While it’s difficult to say whether this is a productive trajectory, it surely widens the gap between knowing and understanding. 

To be clear, by “knowledge” here I mean the information in your head, and by “understanding” I mean one’s ability to grasp and comprehend that knowledge. ChatGPT favors the former by a mile.

The difference between the original Renaissance and now is that knowledge is no longer viewed as something to be achieved gradually. There’s a diminishing chance information comes under rigorous scrutiny because the assumption embedded in ChatGPT is that knowledge, as a concept, already exists.

When knowledge is assumed, learning and understanding come to a screeching halt.

For centuries, learning took place through forming hypotheses and testing ideas in an open forum. Research meant reading and digging, and knowledge required synthesizing information to the point of understanding. 

The trial-and-error process that’s inherent to learning can be leapfrogged with ChatGPT. The scary part is we don’t know where its answers are coming from. It’s all hidden within an AI’s black box of a brain. 

With large language models, human intuition and work ethic take a backseat, and “discovery” is only necessary as far as typing in the right query. Our thinking is detached from the wisdom of crowds, because knowledge is bottled up in an opaque program that spits out answers.  

ChatGPT reached 100 million users in under three months. (TikTok took nine months and Instagram took 30 months.) All things considered, this isn’t surprising. It offers an easy, quick-hit solution for students, writers, and other knowledge workers, so much so that many are concerned about ChatGPT’s threat to jobs. I’ve used it myself to produce blog post content.

In his recent article, Chiang maintained that blurry copies of copies of information are a bad starting point for learning and knowledge. Not only that, but he said it’s a poor place to start if you have any aspirations of creating anything original. 

“Some might say that the output of large language models doesn’t look all that different from a human writer’s first draft, but, again, I think this is a superficial resemblance,” Chiang wrote. “Your first draft isn’t an unoriginal idea expressed clearly; it’s an original idea expressed poorly, and it is accompanied by your amorphous dissatisfaction, your awareness of the distance between what it says and what you want it to say. That’s what directs you during rewriting, and that’s one of the things lacking when you start with text generated by an A.I.”

A new crutch and limitation for knowledge

Thinking back to technology’s exponential rate of improvement, we can safely predict ChatGPT to become much more sophisticated over time. That pace also suggests the AI underpinning the tool will become too complex for its engineers to keep up with, which means that an increasingly popular source of distilled, bite-sized information is going to be running under increasingly mysterious operations.

Like the printing press, ChatGPT will increase the net volume of human knowledge. It’s not a stretch to believe it could soon pen travel guides, encyclopedias, academic papers, and even novels. 

But unlike the printing press, and contrary to nearly all progress since the Enlightenment, human understanding is under no obligation to increase alongside these developments. If bots can provide us fast answers to hard questions, mankind will soon have a new crutch that makes knowledge more accessible but minimizes the need to know anything. ChatGPT makes knowledge optional.

In other words, we may soon find out ChatGPT is making us dumber the more we lean on it.

Just as we can’t describe colors we haven’t seen or animals that don’t exist, we do not have language to describe what exactly happens when human knowledge outgrows human understanding. Since society has never had to grapple with technology like this, there’s no blueprint for what comes next. Politicians and technologists are guessing at next steps just as much as you and I. 

We are authoring this story in real-time, but the main character may not even be us.


This is the second part of a two-part series. Read Part I.
