“Immersed in Another Consciousness”: Meghan O’Gieblyn

In the summer of 2022, about four months before the release of ChatGPT, I found myself in a state of spiritual confusion. I’d started my dream job the previous year—leading business development for a major university’s AI research institute—but had grown increasingly alarmed by what I learned about the direction artificial intelligence was taking.

I’d already been very interested in the societal impacts of AI when I took the job. I was aware of many of its associated species-level concerns and philosophical quandaries, both the apocalyptic and the slightly more mundane. Once I started the role, however, I was surprised by the dogmatism I saw undergirding AI culture, the prevalence of hierarchical cults of personality, and the ease with which scientific, rational resistance to dominant ideas was crushed. Perhaps these issues subconsciously reminded me of the Catholic Church I grew up with.

A book I read that summer, God, Human, Animal, Machine: Technology, Metaphor, and the Search for Meaning by Meghan O’Gieblyn, radically enlarged my perspective on these problems and helped me understand why my uneasiness felt so familiar. It’s a brilliant work of narrative nonfiction that, though it was published in 2021, does a better job of explaining our relationship to AI—on both a personal and societal level—than anything I have come across since, even as thousands of writers, podcast hosts, and academics have scrambled to understand what AI is doing to us.

The book explores how our spiritual impulses express themselves in our technology and the philosophies that guide its growth, contrasting our behavior today with the behavior that has led to the development and dominance of religious structures throughout human history. Despite my own path to the book, God, Human, Animal, Machine is less about tech culture or religious culture and more about human nature—and the nature of the machines that are coming to occupy an ever-larger place in our lives.

I was thrilled to have the opportunity to correspond with someone whose work has had such a profound impact on my life. This interview, conducted via email over the course of about two months in fall 2024, largely focuses on God, Human, Animal, Machine and its subjects: spirituality, technology, and the place of the writer in our modern milieu.

Miklos Mattyasovszky: In God, Human, Animal, Machine, you describe how your childhood and early adulthood as a fundamentalist Christian shaped your experience with advanced technology such as robotics and artificial intelligence. The scope of the book is incredibly vast, covering the history of philosophy, religion, and science. What are the principal challenges an essayist faces when bringing her individual experience to bear upon such a broad cross-section of human history? How did you overcome them?

Meghan O’Gieblyn: It was certainly a challenge. The hardest thing was finding the right balance between personal narrative and research, which came after a long process of trial and error. When I first embarked on the book, I didn’t want to include my story. I wanted it to be a “serious book” about the intersections between religion and technology, and I was going through a phase where I thought memoir was unserious. It all seems very silly now. Unsurprisingly, it was very hard to write an interesting book, to convince the reader that she should be interested in these topics, when I was not saying anything about why I was interested in them. The arguments I was making felt too abstract, and the topics I felt compelled to address kept ballooning. The whole thing started to feel like an unwieldy Wikipedia article. The truth is that the questions I was curious about were very particular to my experience. They were theological questions that I’d struggled with when I was young, as a doubting believer—questions about immortality, free will, and the possibility of superintelligence that is above human moral systems—and that now were popping up in conversations about emerging technologies.

And as soon as I put myself, my “I,” into the book, there was a new sense of energy, direction, and purpose. The useful thing about including first-person experience—whether it’s an essay or a book—is that it narrows the focus. It relieves you of the burdens of having to be exhaustive because you’re only trying to take on the questions that are relevant to your life. And those are the questions that end up having the most energy because there’s something very real that’s at stake for the writer. The “ideas” books that I love most are those that feel very idiosyncratic, like you’re traveling down the rabbit hole with someone, getting a glimpse into the map of their private obsessions and preoccupations.

MM: That strikes me as a very useful bit of craft advice. The first person does not have to contain everything; in fact, the first-person perspective releases the essayist from the obligation of including anything boring or inapposite. And yet so many of the feelings you evoke in the book are relatable and universal. I’ve heard interviewers tell you that they identify deeply with the period of spiritual and personal crisis you describe in your book, even if they didn’t come from the same faith background. It certainly resonated with me.

Writers have intimate knowledge of this paradox between the universal and the individual. God, Human, Animal, Machine explores similar dilemmas that exist within religion and technology—the third section is titled “Paradox.” How does being a writer affect how you perceive the paradoxes present in the contemporary technological and spiritual landscape?

MO: Yeah, the funny thing in hindsight is that the problem I faced during the writing process—my reluctance to speak from my point of view—was the very thing the book was “about”: the impossibility of accounting for first-person experience in science and technology, which operate from a third-person perspective. Science is supposed to be objective, to describe things that can be measured and weighed without any reference to the person who is measuring and weighing. But that means we can’t really talk about consciousness, which can only be experienced from inside. That’s why there’s all this confusion about whether AI has or could have interior experience: how could we know? Thomas Nagel once noted that many of the big problems in philosophy—the mind-body problem, the question of free will—come down to the paradoxes that emerge when you try to reconcile subjective and objective points of view. Interpretations of quantum physics are similarly split over subjective and objective explanations of reality. We keep trying to describe reality as though we’re gods who can see it from the outside, but sometimes positionality—who’s looking, from where—really matters.

As a writer, I see the first-person voice as an expression of epistemic humility. It’s a way to acknowledge where I’m coming from, but also to establish the limitations of my knowledge and my perspective. People often dismiss personal writers as navel-gazers or narcissists, as though writing about oneself were a kind of self-aggrandizement. But it can be a form of modesty as well—or meekness, to use a Christian term. It’s funny that the beatitudes are also rooted in paradoxes: The first shall be last, the meek shall inherit the earth. So many spiritual koans use paradoxes as a way to acknowledge the limits of human language and logic and our inability to grasp the absolute. What we can see, as individuals or even as a human species, is always just a small slice of reality.

MM: Is what we can make limited by our perspective? Or can our creations—whether machines or books—access greater wisdom or ability than their creators?

MO: Richard Feynman used to have a saying on his chalkboard: “What I cannot create, I do not understand.” He was talking about mathematics—the fact that “understanding” requires knowing how to derive or explain each step of a process. If you can’t do that, then you don’t really understand. I was thinking a lot about that quote when I was writing the book. Or rather, I was thinking about the inverse: What I cannot understand, I cannot create. It seemed to me that the rise of deep learning and black-box models marked a departure from this kind of thinking: it’s now possible to create things that we don’t understand. When AlphaGo defeated the world Go champion, even the lead engineer at DeepMind, who helped design and build the machine, couldn’t explain how or why it made the winning move. I think there’s something similar that happens in the writing process—at least for me. I start out with a rough plan, or a design. But the most crucial elements of the book are emergent features—themes, ideas, and connections that I could not have anticipated in advance and that arise from the process itself. If asked, I don’t think I could reverse-engineer any of my books. They are black boxes. In some ways, they are probably smarter than I am—though I’ll add that, like machines, they are static and predictable. They don’t grow and change like humans do.

MM: In the book’s first section, “Patterns,” you draw deep connections between Christian notions of an immortal soul and ideas, such as mind uploading, explored by transhumanists like Nick Bostrom and Ray Kurzweil. But you point out that the first place in which the term ‘transhuman’ actually appears is Dante’s Paradiso. Has the relationship between literature, spirituality, and technology fundamentally changed since Dante’s time, or are all the core principles still present?

MO: I think there are a lot of similarities, perhaps more than we realize. Many of the questions are the same, but the metaphors have changed. Dante was the one who coined the term “transhuman” (or trasumanar, in Italian), which was supposed to describe the physical change that the human form would undergo during the biblical Resurrection. When I discovered that etymology, it clarified a lot of things for me. I’d been reading about transhumanism for years and kept sensing these quasi-religious undertones in its projections about the future—the promise that we’ll have new bodies, that we’ll become immortal, that the earth will be renewed—which are almost identical to the prophecies that appear in the Judeo-Christian apocalyptic tradition. Dante believed that God would bring about that transformation, while people like Bostrom and Kurzweil were trying to figure out how to do it with technology.

My book really sprang from a desire to point out continuities and patterns that I sensed but that weren’t being overtly acknowledged. Contemporary theories about mind-uploading were almost exactly the same as the debates the early Church Fathers were having about the Resurrection (sometimes, in that case, using the very same metaphors). Debates about superintelligence harkened back to scholastic controversies about the sovereignty of God. There’s always been a vigorous cross-pollination between science and spiritual traditions. The Russian Cosmists were inspired by biblical prophecies to explore space and pursue longevity. Newton was into all kinds of weird occult interpretations of the Bible. Wolfgang Pauli was writing bizarre stuff with Carl Jung in the 1950s that tried to marshal quantum mechanics and the collective unconscious to explain synchronicities. There’s always been this back and forth between science, literature, and spirituality, despite our best efforts to keep those realms separate.

MM: Anyone who read your book when it was released in 2021 would know it was not incidental that the AI ‘boom’ we’ve experienced since November 2022 has been driven by language models. What do you think the foundational role of language in AI means for writers? There are obvious job security concerns, copyright and IP concerns, et cetera—but beyond that, how does it challenge writers’ conceptions of themselves and the world around them?

MO: I was chatting recently with a friend of mine who’s a writer and also works as a translator, dealing with a lot of AI-generated content in her day job. She was talking about how much time she spends thinking about “language surplus”—all the responses that are generated and trashed, or else turned into slop. She kept saying, “It’s such a waste of language!” She was expressing something that everyone, I think, feels right now, which is that there are too many words in the world and that the proliferation of synthetic text is cheapening language, the way that inflation devalues a currency. But she was also speaking as a writer, someone who knows how much it “costs” to produce language when it’s the product of thought. It’s amazing to think what a seismic change has taken place under our noses. Until two years ago, you knew that any language you encountered was produced by a human mind.

It’s hard to say how things will change for writers. For me, reading LLM output has made me realize how much of what I value in good writing comes down to the messiness of another mind at work. I’m the kind of reader who has always turned to books because I’m eager to know how other people think about the world. I want to be immersed in another consciousness, particularly one who thinks in unexpected ways. I don’t care that much about the beauty of language for its own sake. You can tell right away, within the first few sentences of a book or an essay, whether you’re in contact with a mind that you want to spend time with on the page. That’s one of the most difficult things for AI systems to emulate. And the truth is that this kind of writing has always been rare, even before these technologies came about. The vast majority of prose you encounter in the world feels as though it could have been written by anyone. I guess the best-case scenario is that AI will force us, as writers, to lean into the idiosyncratic and irreplicable rhythm of thought, as opposed to merely churning out proficient prose.

MM: In my more optimistic moments, I’ve been thinking that AI might drive amateur and professional literary critics alike to ask, “How human is this writing?” instead of “How good is this writing?” (or maybe as a new way of asking “How good is this writing?”) But there’s a paradox there, too, because the minute we start trying to categorize something, we start thinking like machines!

Is my optimistic vision for the future of literary criticism worth striving for? What’s the most human way to assess the humanity of a piece of writing?

MO: I like that vision too. I think human-ness is mostly what we mean when we talk about good art, even if we haven’t traditionally defined it that way. I’ve often sensed that critics, even when they are speaking in the language of aesthetic criteria, are trying to communicate something much more intuitive—basically, whether the critic found the writer to be trustworthy. Maybe I should speak from experience: I feel this as someone who writes criticism myself. When I’m reviewing a book, I often use technical terms—I might point out that the writer uses too many clichés, or that there are inconsistencies in the narrative voice. But the thing I’m always trying to describe, at the end of the day, is whether or not I felt that the narrator was seeking the truth. Do I believe this person’s vision of the world, or their clarity of mind? Do I think they’re deluding themselves? Machines—at least the language models that we have today—are constantly deluding themselves. They have no access to the world. They hallucinate all the time. This is one reason why I can’t imagine getting anything out of the Great American LLM Novel, should it ever come about. Even if the story is fantastic and the language is beautiful, there would be nothing at stake—no person who’s trying to figure something out, or get down to the truth of the human experience, which is what I’m always looking for as a reader. That’s an intrinsically human project, and I think that’s largely what we mean when we try to articulate the value of a work of literature.

MM: You write about receiving a letter from Ray Kurzweil in which he says, “The difference between so-called atheists and people who believe in ‘God’ is a matter of the choice of metaphor.” After reading his letter, you reflect that much of Christian thought, science, and technology alike might amount “to a singular historical quest, one that was expressed through analogies that were native to each era.” That sentiment has really stuck with me since I first read the book—it seems to point to a universal human desire for meaning. Why do you think we reach for metaphor so quickly when searching for meaning?

MO: It’s a good question, and a complex one. When I first started writing, I was interested primarily in metaphors that we forget are metaphors—dead metaphors, as they’re sometimes called. The notion that the brain is a computer is so ubiquitous today that allusions to the metaphor are basically invisible. We invoke it every time we say we have to “process” new information, or are having trouble “retrieving” something from our memory. Information is also a metaphor, a fairly recent one that is used to describe all kinds of complex systems: brains, forests, swarms. All of those can be described as information-processing systems. These analogies are very useful, in a practical sense. But there are people who insist that the metaphor is real, that the mind is not like a computer, but really is a computer. In religion, when someone refuses to acknowledge metaphors, we call them a literalist, or a fundamentalist. And it seems to me that there’s a strain of science that veers toward fundamentalism.

Of course, it’s impossible to use language without metaphors. It’s not as though we can just abandon them. Even to say something simple, like to refer to the future as being “ahead” of us, or the past as “behind” us, is drawing on metaphorical concepts (making time into space), and if you got rid of all metaphors, there would be nothing left. But since language, at least in this sense, is intrinsically human, this means that we’re often (knowingly or unknowingly) projecting our mental concepts onto the world, onto nature and our own technologies. There’s a kind of anthropomorphism built into the whole scientific endeavor, which often contains these linguistic reflections of ourselves. There’s a moment in Hannah Arendt’s The Human Condition where she talks about how science is always trying to transcend the human gaze, but ends up reflecting it instead. It’s as though nature were a trickster who is constantly thwarting our efforts, she writes, “so that whenever we search for that which we are not, we encounter only the patterns of our own minds.”

In literature classes, metaphors are often described as a bridge from one idea to another, a way to say that a cloud, for example, is like a flower. We try to understand ourselves as humans through these likenesses too. We say that we’re made in God’s image, or find similarities between ourselves and our tools: a human is like a chariot, a clock, or a computer. Those bridges are the most basic form of meaning-making. And I suppose that’s why metaphors are so deeply connected to meaning. They create a kind of map of the world. We’re constantly finding patterns, and yet the patterns that we observe are not in the world—or they’re not just in the world. They’re also the patterns of our own minds.

MM: The Arendt quote makes me wonder if Kurzweil—who you point out rose to fame as an inventor—and other transhumanists are being tricked by their own ambition. In God, Human, Animal, Machine, you write, “As many transhumanists have acknowledged, it’s entirely possible that our new, digital selves will entirely lack subjective experience, the phenomenon we most often associate with words like ‘spirit’ and ‘soul.’”

What do we have left without subjective experience—do you think invention is possible without it? For example, is ‘Generative’ AI generative in any true sense?

MO: It depends on how much value you place on consciousness. Kurzweil and many people who work in tech don’t seem to think it’s very important. It’s just an epiphenomenon—a user illusion that doesn’t really do anything. Given that we now have technologies like generative AI that can perform many tasks we’ve long associated with human minds (writing, art), it’s tempting to see this as evidence that they are right, that our subjectivity is just a pointless sideshow. That seems to be what Sam Altman was saying, for example, when he tweeted “I am a stochastic parrot and so r u.” The implication is that we humans are not really thinking when we speak or write, but just mindlessly generating words, like a language model. That notion is completely at odds with common sense and our most basic experience as thinking persons.

MM: Has humanity’s comfort with metaphor led us to dismiss the gravity of this very serious potential problem?

MO: I do think the confusion stems from a foundational problem with metaphor. Both AI and cognitive science grew out of this analogy between mind and computer. The whole point of that metaphor was to get around the problem of consciousness. The mind was this weird, amorphous thing that was impossible to study in a lab. It was a little too much like the soul. But if you could describe the mind as a pattern of information, then it was possible to conceive of thought as a purely computational, mathematical process. It could be scientific. It’s a useful metaphor—you could even say a ‘generative’ one—in that it’s allowed for all the information technologies that we use today. But it’s also led to so much confusion and wishful thinking, like the idea that a computer might become conscious. We invented the metaphor as a way to avoid having to account for consciousness, so the idea that consciousness will somehow magically “emerge” from these systems just doesn’t make any sense.

What remains to be seen is how much can be done without consciousness. Will generative AI manage to be truly innovative—or inventive, as you put it? I don’t think I’m alone in finding most of its output to be very familiar and bland. It’s possible that this is a technical problem, something that can be solved with a different kind of architecture. Or maybe not. The more interesting question to me is whether creative tasks have any value when they’re divorced from human subjectivity. Will people want to experience art that didn’t come from another mind? Are we creating a world where the creative tasks that we find most meaningful are handed over to bots that don’t get any pleasure out of it (or any feeling at all)?

MM: That seems like a good—if complex and troubling—note to end on. Thank you so much for corresponding with me, Meghan.

*

Meghan O’Gieblyn’s writing can be found in Harper’s Magazine, The New Yorker, n+1, The Point, The Baffler, The New York Review of Books, The Guardian, The New York Times, and other publications. She is the recipient of three Pushcart Prizes and the 2023 Benjamin H. Danks Award from the American Academy of Arts and Letters, and her essays have been included in The Best American Essays and The Contemporary American Essay anthologies. Her first book, Interior States, won the 2018 Believer Book Award for nonfiction. She writes the advice column Cloud Support for Wired. God, Human, Animal, Machine was published by Doubleday in 2021.

Miklos Mattyasovszky is a writer of fiction and nonfiction living in Ithaca, NY, where he is an MFA candidate at Cornell University and an editorial assistant at EPOCH.
