The Case for Being Human
January 1, 2023 seemed like more than a new year.
OpenAI had recently released ChatGPT, and people everywhere were buzzing about what it could do.
Jordan Peterson was one of the first to laud its abilities: “I said, ‘Write me an essay that’s a thirteenth rule for Beyond Order, written in a style that combines The King James Bible and the Tao Te Ching.’ That’s pretty difficult to pull off, you know, any one of those things is hard. The intersection of all three—that’s impossible. Well, it wrote it in about three seconds, four pages long, and, it isn’t obvious to me, for better or worse, that I would be able to tell that I didn’t write it.”
People started using ChatGPT to write and edit code for their websites. Others used it to draft their resumes and cover letters. People used it to take and pass graduate-level business, law, and medical exams. One recent report described an office worker who uses it to perform 80 percent of his job.
Around the same time, a new AI image generator, MidJourney, took the world by storm with shockingly realistic images—of beautiful fashion models who were never born, of believable historical events that never happened, of breathtaking landscapes that don’t exist.
Audio AI has improved to a similar degree. We can now hear “The Beatles” perform a cover of Oasis’ “Don’t Look Back in Anger” and wonder whether the Fab Four used a time machine to reach the ’90s anthem.
It’s impossible to see these images or hear these songs and not be dazzled. But just as quickly as the excitement arrives, the worry sets in: If AI can do my job, does that mean I’m expendable? A WEF report projects that 2% of all current roles could be replaced by AI by 2027—probably a conservative figure. Jordan Peterson has conjectured that the rise of AI will bankrupt a third of universities within the next five years. It’s not a stretch to say that, within those same five years, the entire socio-economic system will become unrecognizable.
And that’s not even the scary part. Beyond economics, what about humanity as a whole? If AI can create art, music, and poetry better than we can, it’s hard not to question whether we matter at all.
The Singularity Is Nigh
Futurist Ray Kurzweil and others have hypothesized about ‘the Singularity’, the point at which AI becomes more intelligent than humans, takes on a life of its own, and continues to improve without human assistance. Many view the Singularity as the moment when humans cease to serve a purpose, ushering in a post-human paradigm.
Though we have not reached that horizon yet, AI like ChatGPT and MidJourney has brought us so close that it is impossible to ignore. As Kurzweil would put it, “The Singularity is near.” That fact alone should give pause no matter what job you currently have. In an age where robots can do everything humans can do, and do it better, the question isn’t properly ‘Can we survive?’ but rather ‘Should we survive?’
There are many—including Kurzweil—who welcome the Singularity as a matter of evolutionary progress. And given the kind of things we’ve seen from AI, it’s tempting to join them. If AI can evolve on its own, faster, and beyond human capabilities, then why shouldn’t we let it? If robots are better humans than we are, then what good are we?
But this is to assume that our main purpose is to evolve and progress. Now is a good time to step back and reassess whether that’s the case.
The Lost Art of Being Human
If you were to ask people what it means to be human, I don’t think many would have an answer, and most of the answers given wouldn’t be good ones. We have become so wrapped up in the modern way of living—in technology, in medicine, in mass culture—that we have lost touch with our own humanness, our purpose, our telos, and, as such, have become incapable of judging our own value. Now, with the rise of AI, we can’t help but slump at the feet of our looming masters.
The fact is that for the last hundred years or so, we have steadily forgotten what it means to be human. Industrialism and technology have altered our world so thoroughly that we no longer fit in it. Unable to fit, we conform, and, once we conform, it becomes our new normal. It starts with conveniences—telegrams, toasters, televisions—but then the conveniences begin to take over. As with Goodhart’s Law, the measure becomes the target, and soon we define ourselves by what was completely foreign a generation ago.
Chesterton said, “This is the huge modern heresy of altering the human soul to fit its conditions instead of altering human conditions to fit the human soul.” Modern times in a maxim.
In Orality and Literacy, Walter J. Ong examines just how this has happened reliably since the dawn of history. He frames his argument with an obvious but nonetheless startling fact: Prior to writing, all human cultures were oral cultures, passing information from generation to generation by the spoken word. We sophisticated modern readers instantly think how limiting it must have been and how much better our thought processes and memory are now that we can record things. Ong shows how the opposite is true. As we have offloaded our thinking and memory to techne like books and Google Docs, our ability to remember has atrophied. Ancients in Homer’s time would memorize epic poems and recite them word-for-word on a regular basis. Today, it takes robust training to even fathom such a feat.
Every technological advance follows a similar pattern: what starts as a novelty turns into a convenience and then into a necessity. Those born before 1990 can recall what life was like before the Internet and cell phones. Now none of us can go on a road trip or order a pizza without the help of the web. What it was like to sit alone at a restaurant, waiting for your date to return without checking social media, is a matter of profound speculation.
It has come to the point that we see those who don’t use all the latest technology as weird, perhaps even sub-human. This might sound strange when applied to tangible things such as the Internet and smartphones, but less so when we consider one kind of technology that has overtaken our culture—medicine. Consider vaccines. What started as a speculative innovation grew to become a life-saving convenience and is now considered mandatory in most public arenas. Those who don’t partake in the technology are ostracized and, as seen in the most recent debacle, dehumanized.
But what if we got it all wrong?
What if popping the pill doesn’t actually make us more human but less? What if the technology we have developed to satisfy our nature is in fact drawing us away from it? What does it mean to be human, anyway? Before we throw up our hands and bow to the robots, it is at least worth considering.
A Preamble
It is my intent, then, to use this space to consider these timely questions. Ultimately, I aim to offer nothing less than a fresh perspective on what it means to be human. But first, it’s necessary to look at how we forgot that basic principle.