Reid Hoffman — AI, and What It Means to Be (More) Human
Chris Heagle: On Being with Krista Tippett is supported in part by the John Templeton Foundation, funding research and catalyzing conversations that inspire people with awe and wonder. On the Templeton Ideas Podcast, they dive deep into conversations with astrophysicists, psychologists, and philosophers, exploring the most awe-inspiring ideas in our world. Learn more at Templeton.org. Krista Tippett: My lens on everything is the human condition. How is the human condition revealed, engaged, and stretched in any given circumstance? These are the questions I’ve been asking since a new generation of artificial intelligence entered the world. I've been longing to have a conversation that falls between the hype about this shiny new thing and the dystopian predictions of the doom it will bring upon us. There's much I don't understand
but I do understand this: ChatGPT and its digital kin are students of, mirrors on, us. What we are marveling at when we look at them, and what we are fearing, is ourselves. And we are being called to shape this young if mighty technological force to human purpose. In this season of On Being and those to come, we are going to engage this conversation from many perspectives. We begin with Reid Hoffman. I was fortunate to be with Reid and some other original and wise technologists and thinkers as GPT-4 was released into the world.
What I came to understand from him was so far beyond what I was able to glean in the media circus that ensued. He is a quintessential Silicon Valley insider and is also known by some as the philosopher of Silicon Valley — indeed, he studied philosophy at Oxford in his 20s before returning to the U.S. to be part of the digital revolution. What follows here is one way into our unfolding present — a humane and generative foundation for wrapping our minds and maybe even our hearts around the new reality of life with AI. To be clear, I know, as do you, that there are many reasonable anxieties around this new technology, and many unanswerable questions. But in this conversation, we venture into unexpectedly relevant places, like the nature of friendship in human life, and what it would mean to create “contained, boundaried AI” — and Reid's use of words like “delightful” and “elevating” as qualities we can impart to this technology which, as we are hearing again and again, is going to change everything.
I’m Krista Tippett, and this is On Being. [music: “Seven League Boots” by Zoë Keating] Reid Hoffman has many credentials as an executive, investor, and philanthropist. Among them: he was co-founder and executive chairman of LinkedIn, now sits on the board of Microsoft, and was an early investor in Facebook and, more recently, OpenAI. He hosts a few podcasts, which we'll get into. He's published a few books, including Impromptu: Amplifying Our Humanity Through AI — he wrote that one winsomely together with GPT-4. And he's newly founded a company called Inflection AI, which is
the creator of Pi — "a supportive and empathetic conversational AI." We will wander into that, too. By the way, Reid had told me before this interview that he is always on screen when he converses, but he was curious to try the direct voice-to-voice, no-pictures interaction that is my way. Tippett: Okay. Well, Reid, happy to have you here. How does it feel so far, just talking to me in headphones? Reid Hoffman: Oh great. Tippett: Okay. All right. I actually think it's very intimate because we are,
it’s like — I think that you are a Star Trek person, just reading between the lines. Hoffman: I am. Tippett: I mean, it really is like the Vulcan mind-meld, my thoughts to your thoughts. That's how I think of it. It's been just so interesting just delving into you, and what you've written,
and what you're out there saying, and other interviews you've given, and more of your story. So you were actually born in Palo Alto in the mid-’60s, and it sounds like you were a child of parents who were themselves in their 20s and trying themselves to grow up. Does that sound... Hoffman: That is indeed correct, but to some degree all of us are always either trying to grow up or trying not to, one of the two.
Tippett: Yes, yes. But I also think that was a certain moment, cultural moment where people were also encouraged to be delving and perhaps not yet growing up, interestingly. If I look at all the ways you are present in the world and are described by others: you are a philosopher, and I would say a technology humanist, and an entrepreneur, also in life and work a lover and connector of human beings. I've seen you describe yourself somewhere, or at least quoted as saying that you are "a mystical atheist, but with a deep interest in philosophical and religious questions." I'm just curious as we start, how do you trace,
if I ask you to do this, the seeds of these ways of being in the background of your childhood? Hoffman: My parents were definitely youth of the '60s and so my first concert was a Grateful Dead New Year's Eve concert that my dad took me to, so tie-dyes and Shakedown Street, and so much secondhand marijuana smoke that I got a little stoned. But that kind of Californian open-mindedness, kind of laissez-faire tolerance or encouragement of eccentricity, I think, is all part of what I learned is my identity by going out and exploring, and encountering other people in other cultures, everything from other states to other countries. I think that the way that it has kind of formulated my being is that Californian side has this kind of “individuals should discover their own path,” whether it's wandering in the trees and discovering the mysticism of it, or — part of the reason why Silicon Valley's there — go and build some technology or those things. I hadn't realized how much of a foundation that was for me until I got out and became a student at Oxford and other places. But I think that's the foundation of where my being started. Tippett: But I also feel like, and perhaps this was a bit in reaction to that, I feel like you also are always — I mean, you were at Oxford as a Marshall Scholar studying philosophy. I mean, I feel like you also always have kind of been insisting on depth. I feel like that's a bit of the voice you have in the tech world,
and that's a little bit of a different move from that “just everybody be themselves.” Hoffman: Yeah, I hope so. Although to some degree, I think it came from reading science fiction of all things, which is thinking about humanity as a scope: Where is humanity going? What's the story of it? Why are we interesting, part of the evolution of consciousness within the universe? And that's a little bit of a California way of thinking about it — or Northern California, specifically. But I think that this question of, like, why are we here naturally leads you to that depth. We're both shaped by what impels us and what we react to. Part of it was a desire to be intellectually serious, which California culture is not typically, but that was my reaction against this: oh, and this also needs to be there in rigor.
Tippett: Yeah, great. So the conversation that I'm so happy to engage with you is really a conversation that I feel is not being had, generally speaking, culturally, about our emergent AI. I feel like the cultural conversation — obviously this is a generalization; you may be in places where this is not the truth — but I feel like journalistically, culturally, it's pretty binary. It's, “Wow, this is cool, this is shiny, this is sexy.” Or, “It's going to ruin the world and kill us all.” It's either Star Trek optimism or — I think you've made this contrast — it's this dystopian vision of Black Mirror. But I was really fortunate to be with you at an event just in the weeks after GPT-4 felt like it, for those of us who weren't in tech, burst out of nothing and kind of landed on the world. I'd like to actually start with some really basic definitions and understanding
because I also feel like that's missing a lot of times; as people are being introduced to this, they're not really understanding what it is. So would you just start by talking about how you would explain what a large language model is? Because the way I came to understand it from you, and also the wonderful Kevin Scott from Microsoft, is there's a way in which there's a real simplicity to this algorithm out of which all of this complexity emerges. Hoffman: Yeah. The algorithm started with an attempt to replicate the pattern of how neurons work, how vision works, and it's a bunch of these neurons, or little micro, non-intelligent pieces, that fire together. And that, together with our whole physical being
ecosystem, causes us to be intelligent. It's been around for decades, although there have been some recent improvements. But the thing that kind of really unlocked everything was the ability to apply scale computing — hundreds of thousands of computers working in concert, closely networked. Part of what it is, is that what intelligence is, to some degree, is making predictions about the world — so you create generative models which would make these predictions, which could also of course generate essays or poems or pictures. They train that with a trillion-and-a-half tokens, which you could
roughly think of as more than two trillion words — this computational "fu" behind this. And that then creates these amazingly generative capabilities that now enable computers to learn. But what they learn is what they learn from the learning process, and that learning process then allows them to act in ways that 10 or 20 years ago we would've said, "Well, that's what intelligence is, and that's what passing the so-called Turing test and being able to have a conversation is." That's where we are now. We've kind of sailed past the Turing test — the previously unattainable definition of artificial intelligence — with kind of a ho-hum in the background.
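[For readers who want the "prediction" idea made concrete, here is a minimal illustrative sketch — a toy word-level bigram model, emphatically not how GPT-4 works. A real large language model replaces these simple counts with a neural network trained on trillions of tokens, but the core task, predicting the next token, is the same. The tiny corpus is invented for illustration.]

```python
import random
from collections import Counter, defaultdict

# Toy next-token prediction: count which word tends to follow which,
# then sample from those counts to "generate" text. A real LLM replaces
# these counts with a neural network trained on trillions of tokens,
# but the core task -- predict the next token -- is the same.

corpus = (
    "we shape our tools and our tools shape us . "
    "we shape our questions and our questions shape our answers ."
).split()

# Learn the pattern: for each word, tally what follows it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token(prev: str) -> str:
    """Sample a next token in proportion to how often it followed `prev`."""
    counts = follows[prev]
    words = list(counts)
    return random.choices(words, weights=[counts[w] for w in words])[0]

# Generate a short continuation, one predicted token at a time.
out = ["we"]
for _ in range(8):
    out.append(next_token(out[-1]))
print(" ".join(out))
```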
Tippett: And the Turing test was just that one would not be able to be sure that this was a computer. This would give the sense of being with a human being with a human intelligence. Hoffman: Exactly, that a person talking to a human or a person talking to a computer broadly couldn't tell the difference. Tippett: But also from you, one of the things I came to understand is that, whereas human intelligence and knowledge is siloed to the extent that any given person or any given discipline has certain special knowledge, certain and perhaps vast repositories of knowledge, but that because of the scale of the internet and of computing now, this technology has access not just to the full sweep of all kinds of human knowledge, but it takes them out of the silos and can see them together.
Hoffman: Yeah. When GPT-4 was trained, it was trained on two trillion-plus words, which is many, many words, and therefore — Tippett: More than a person would ever... Hoffman: Yeah, more than we could read in hundreds of lifetimes. Tippett: The other thing that I think is so basic, but I actually don't know that everybody understands, and I kind of had to be around people like you to understand, is that this technology, this AI, is essentially a student of us, of humanity, and a mirror of humanity — humanity as it interacts and is represented on the internet. Is that right? Hoffman: That's right, although usually when people say that, they say that entirely in kind of a quasi-dystopic sense, because obviously there's a lot of garbage on the internet as well. And it's both trained on the data of the internet, which includes weird conspiracy theories and racial biases and a bunch of other stuff, but then it is also then improved by a pattern of human reinforcement feedback. In that, the system starts adjusting. And so the fact that, for example,
you train on the internet and there's a bunch of — choose your poison of garbage: racism, sexism — and that's there, and there's a bunch of it, but you can actually train it away from that with part of the human reinforcement feedback. So therefore it isn't that you're only what you get on the internet.
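[A sketch of the idea behind "human reinforcement feedback," not the actual training pipeline: raters prefer one candidate response over another, and each judgment nudges scores so the system drifts toward preferred behavior. Real RLHF trains a neural reward model and fine-tunes the LLM against it; the response labels and the Elo-style update below are invented purely for illustration.]

```python
import math

# Each rater judgment nudges a per-behavior score; over many judgments
# the system is "trained away" from disfavored behavior even though that
# behavior was present in the raw training data.

scores = {"helpful": 0.0, "evasive": 0.0, "toxic": 0.0}

# Each pair means: a human rater preferred the first over the second.
human_preferences = [
    ("helpful", "toxic"),
    ("helpful", "evasive"),
    ("evasive", "toxic"),
    ("helpful", "toxic"),
]

def update(winner: str, loser: str, lr: float = 1.0) -> None:
    """Shift scores toward the observed preference (Bradley-Terry style)."""
    # Probability the current scores already assign to this outcome.
    p_win = 1.0 / (1.0 + math.exp(scores[loser] - scores[winner]))
    scores[winner] += lr * (1.0 - p_win)
    scores[loser] -= lr * (1.0 - p_win)

for winner, loser in human_preferences:
    update(winner, loser)

# After feedback, "toxic" ranks lowest.
print(sorted(scores.items(), key=lambda kv: -kv[1]))
```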
Tippett: Right. Well, so I feel like this also starts to get at how this technological moment is also fundamentally different, just to compare it to the advent of social media. One thing I feel is that this has come along right at a moment of, I think, very, very deep cynicism and kind of hardened disappointment and exhaustion with what's gone wrong with where social media had landed. So there's not so much an ability to step back and kind of see this as a fundamentally new development that actually is inviting human beings, both developers and — I don't want to say users — and those of us who are consumers, participants, to have agency. How would you talk about what is fundamentally different about this technological moment, even just from 15 years ago in human history? Hoffman: Well, kind of at a broad base, one of the things where I've been thinking about technology is I realized that we have this kind of scientific classification of homo sapiens, but I think we're actually a homo techne, which is: we are constituted by our technology. I mean, we get superpowers from it. We have this superpower of recording this podcast from
thousands of miles apart. We have superpowers of wearing our glasses and being able to see better. Tippett: As our first bionic improvements. Hoffman: Exactly. So we have all these superpowers. Well,
I think what AI brings — and as you know, I kind of argue to rethink it, or re-lens it, as amplification intelligence — is like a steam engine of the mind. It can do some things where it's the agent of creation — for example, automatic harvesters and so forth, or autonomous planes, can do stuff — but it also can do stuff that helps us and adds to our capabilities. The capabilities aren't just intellectual; they're also creative. I think they also can be emotional. And that is what is so astounding about this moment, because getting these kinds of technological amplifications — this is a first in our history.
Tippett: But say some more about what is — yeah, just say some more about that. Keep going. Hoffman: So for example, one of the things that when I started thinking about this with my co-founder, Mustafa Suleyman of Inflection, we have this agent called Pi and we said — Tippett: Yeah, I've been playing with Pi. We'll get there, yeah. Hoffman: Great. All comments, very welcome. All improvements, all suggestions. We said, "Well, look, the future as we see it is that everyone's going to have their own personal intelligent assistant that will be a companion as they navigate life." It's anything from, "Oh gosh, my tire is flat. How do I fix it?” to "I had this odd conversation with a friend and I'm trying to figure it out and I feel a little angry and disappointed — am I right about that?" And the whole range in between. That gives you some context of not
just "navigate the physical world" superpowers, which we've had a bunch of with technology, but navigate our social universe, navigate our perception of ourselves, navigate — Like podcasts like this one, it's really important to be able to have substantive conversations. People get lonely. Part of the benefit — and not saying these things are going to be at all replacing a therapist — but conversations can be therapeutic because you can sit with something, you can share it with someone else, you can navigate that space together. I think that's part of what we already see happening with these artificial intelligence agents, these tools. I think that that possibility of helping us elevate our own humanity in even spiritual ways, I think, is valuable. Tippett: Well, first of all, I do want to say that the word "techne" also actually has connotations really not just of technology in a kind of clinical way, but of art and craft and skill. So something I'd like to talk about that is — and you just used the word spiritual,
a subject that is very important to you both in life and in work is friendship. I'd like to just touch down on that a little bit and see how we connect that back up with AI and the human condition. You gave a wonderful address at Vanderbilt, but also in this Meditative Story podcast that you and I have both been part of, one thing you said is, "My primary spiritual home lies in how I experience friendship and the way it gives me the path to evolve as a human." Could you say some more about what friendship means to you as a human intelligence?
Hoffman: Well, here's the thing that I think is so stunning about friendship, which is: you go into a bookstore and you might find a whole section on relationships — romantic, et cetera — and yet when you think about what gives our life meaning, the people that we go through life with are a fundamental part of it. Ideally, those are our friends. I think the theory of friendship that I'm hoping to write more about as the years go on is that we kind of agree to help each other, in this journey of life, become the best versions of ourselves, and that friendship is that commitment. And everything else, whether people talk about loyalty or support or help, I think are all important derivatives of that. That's the kind of fundamental, and part of the spiritual resonance
of that is that becoming your better self is part of what the spiritual quest is. It's how do you get that elevation where you think: I'm more compassionate, or, I'm wiser, or, I have a better understanding of myself and other people or places in the world, or why our being here matters. I think friends, for all of us but definitely foundationally for me, is how I learn, how that journey progresses, how I know which path to take, and how to take that path in a good way, and how to learn and experience that path. By the way, in terms of awesome podcast conversations like yours at On Being, friendship is fundamentally about that conversation, that our friendship is conversation. Sometimes the conversation is with words and sometimes the conversation is with presence or with shared experience. So all of that ties into what I think the spiritual identity is in this.
Tippett: And you're pointing at this, but I just want to underscore it. You're not just talking about very private individual conversation and learning. I mean, here's some other questions you've named that come out of your friendships that you share these kinds of conversations with: "It's not just what kind of a friend am I to this person, but am I a friend to humanity?"; "Can I expand the notion of friendship to think about all sentient beings?" And also just the way you were just talking about that — Do you know Nicholas Christakis' work at the Human Nature Lab at Yale? Hoffman: Yes, very well. Tippett: Right. So friendship is an intelligence, and love and care are also forms of human intelligence. And it is a fact, and it can't actually be communicated in numbers, but it is communicated in tellings of history and stories that love and friendship — you look at any movement that changed the world, or even discoveries, or even the creation of Silicon Valley, and there are these friendships between two or three people that are these critical factors.
Is this aspect of human intelligence that we're talking about, is this something that remains ours alone to uphold in the world, or does this intelligence we possess matter? Does it come into play as we shape these world-changing new tools that act like us and learn from us? Hoffman: Well, I think our tools both — we shape our tools and our tools shape us. And that's part of the homo techne. Exactly as you've mentioned before, it's not just a kind of question of efficiency or capability, but also of creativity, of art, of beauty. And I think that part of the thing that's so important to do
is to be recognizing that loop and bringing the full spectrum to bear. What we should do is be kind of full-spectrum human beings on this, not limited to only a few of our dimensions, but the other dimensions for the quality of life and the quality of humanity, and I think — and I don't think the internet's a lost cause. I think we can evolve the internet in that way. Tippett: Yeah, no, and I don't want to say it's everything, right? It's the full canvas of us, yeah.
Hoffman: But I also think that that's part of the reason why, when thinking about AI, is to say: well, it is this amazing new learning technology that learns from this process, but we shape that learning process — not to its micro detail, but in ways that can make very good symbiotic companions, and that's what we should be targeting. We should be trying to craft — and by the way, that journey of co-evolution with technology is precisely the human journey, including creation of art, creation of experience, thinking about how it is we experience the world and experience each other. And I think that's the key thing that I'm trying to help more technologists think about: to try to bring humanism to the fore of it, because it can be and it should be, and in some sense it is, but let's just make it more intentional and deliberate. Tippett: So, that conversation that you're describing, that you're having inside the companies and the world of technology, is not something most people have access to, are privy to. And I think it also is counterintuitive given the kinds of, as I said, binary thinking about this that's out there, and the fear that is understandable, because this is huge and it is new, it's unknown. Can you say a little bit more about that conversation that you are part of — and I think you're a really important voice in it — of just urging precisely those kinds of questions that you just described? Hoffman: Well, frequently technologists say such things as "technology is value neutral," and that's kind of foolish, unfortunately, because technology always, just like language, does contain values. And you can of course use technology in better ways and worse ways,
however it's shaped. But the notion is that how we construct it does affect who we are. So, what we construct and what we do should have that intention. And iterative learning doesn't mean you don't make mistakes — it means you sometimes learn things; you're like, oop, that was a mistake. Don't think of it as kind of, “Oh, I don't have to think about values here because technology is value neutral.” Or the most common one is this kind of — you referenced social media earlier — is
like, “Oh, it's freedom of speech and it's a fundamental thing.” And you're like, well, look, we think that broad freedom of speech is a very good thing for not having power oppress truth or oppress learning, or get thought police and coercion. But on the other hand, of course, we do restrict freedom of speech. We restrict speech in advertising. You can't do false advertising — “Here, I've got this sugar pill that'll cure your cancer,” et cetera, as ways of doing that. And that's because what we do is we say: look, what we're trying to do is get the best out of learning collectively, of journeying collectively. So,
that gets you to saying you should be intentional about your design of technology and you should have a theory about how this elevates or helps not just individual human beings, not just your customer, obviously, but humanity generally. You should have at least a theory about how you're doing that and learning and iterating and improving upon it. Tippett: Are you having that conversation a lot, with a lot of people, inside the companies developing AI? Hoffman: Yeah, I'd say every AI organization that I'm close to, which is not just Inflection, but also Microsoft and OpenAI, and I've talked a lot to the Google folks, and people would be surprised at how many deep questions are being asked there and how much is actually really being worked on. And I think the only way that you can make this development is to develop, deploy, learn, and fix. The people who don't understand building technology think you somehow can build it perfectly from the beginning, and there isn't a piece of technology in history that has worked that way. The important thing is
to ask the right questions, try your best in your launch, and then iterate and fix quickly — which is the reason why the criticism is also very helpful and good to make happen. Tippett: So, you mentioned the Possible podcast, which is one of a few podcasts that you have, and it's really interesting, and I listened to a few episodes. Just briefly, here's something you said about this: "It's only natural to peer into the dark unknown and ask what could possibly go wrong" — and you and I both know that our brains are absolutely hardwired to keep us safe, and so we're always looking for what might go wrong — but you said, "It's equally necessary — and more essentially human — to do so and envision what could possibly go right." So, you've created this podcast to ask that question. So, one of the really interesting
conversations you have in there is with Mustafa Suleyman, who is your — you want to say who he is? Hoffman: He's my co-founder of our startup Inflection, along with Karén Simonyan, and also previously the co-founder of DeepMind, which is at Google. Tippett: At Google. So, I found it very interesting how you and he talked about a lot of things, including — just following on what we were just dwelling with — strategies for containment, or "what a contained, constrained, boundaried AI looks like," which also feels to me like a nod to our friendships. Our relationships are also contained and constrained and boundaried, right? That's emotional intelligence. So,
I don't know, I wonder, is part of the question then, what is our relationship to this technology? What is this relationship that we have with it? How do you think about this? Hoffman: Well, we're just beginning to explore that. And part of what could possibly go right is you only create a better future by envisioning it and working towards it. If you live in fear and negativity — for example, if I said, "Well, there is an above 0% chance that every time you get into an automobile you're going to die horrifically in a car crash." Tippett: Which is true. Hoffman: Which is true, we should never get in these cars.
Tippett: The riskiest thing we all do all day. Hoffman: Yet we go, because it's so important to travel to see our family, our friends, go to work, et cetera. We kind of go, "Well, no, that's equivalent to zero." And it's the same thing to say: well, okay, when we're building this, what do I see when I see what AI can mean to people? I see a medical assistant that's on every smartphone, a tutor that's on every smartphone — not just like the way you would use maps to navigate — all these other things that are really important. And just think about the human evolution and the alleviation of suffering that comes from that; that's why we're hitting the accelerator so much to get there and to drive towards that, and imagining what's possible. So what's
our relationship? Well our relationship should be one of human amplification. [music: “The Build World” by Sanctus] Tippett: So, Inflection AI, which is this, it's a company you've created, correct? Hoffman: It is, yes. Tippett: And you have this Pi AI tool. We may have to find
a better word than tool if we're in relationship. Hoffman: Probably. Tippett: Right? Hoffman: Yes. Tippett: Which I spent a little bit of time on, and one of the ways you've described this is it's not just "an IQ helper, but an EQ presence," and I have to say it's a really different experience. Maybe I'll read a little bit from an exchange that I had with it. So just following on what you just said, it feels to me like a really huge piece of this shaping power, this agency that we humans have in relationship with this new technology, revolves around what, again, the rabbi said, words make worlds, and they do. And it's so much revolving around what is being called “the prompt,” which also feels very inadequate to me. Right?
So, I'd like to say, let's call it what it is, right? For me in my work, it's the power of a better question: answers rise or fall to the questions they meet. So, like, that New York Times piece that we all passed around with the increasingly insane GPT-4 — you said somewhere it was like “Real Housewives, the Black Mirror edition.” But as I've gotten into this matter of the prompt — the power of the prompt — that is also the power of a question. You can go down a pathological road with a person as well. When I see you in conversation with AI, which you do in your book, Impromptu, which you call a "travelogue," and also, for example, in this Greylock podcast you have, very cleverly named Greymatter, you ask very nuanced, sophisticated questions of GPT-4. "As mathematics may be the language of reality" — which I think is you alluding to Galileo saying, "Mathematics is the language in which the universe is written" — "then mathematical calculations may be the functioning of reality and consciousness emerges from that. So, all of
our consciousnesses may be the result of varied mathematical calculations." But then you say, "There's a related critique most often argued vigorously by Gary Marcus. He would argue, in addition, that you" — you, GPT-4 — "lack key other characteristics, such as understanding, embodied knowledge, representation, or genuine symbolic reasoning. How would you reply?" This way that you are approaching the conversation with, and the shaping of, this technology is not going to be intuitive for everyone, at least right now. Hoffman: Yeah. Part of the reason in writing Impromptu the way I did is I
wanted to not just tell, but show that you can go have these very sophisticated interactions — and this is, again, human amplifying. And part of the reason why I specifically did the conversation around Gary Marcus is because he's a well-known critic of the large language model process. He and I had friendly conversations on this. And I was like, well, okay — tease, tease — look, it's having a sophisticated conversation. [laughter] It looks
like it has knowledge representation — because intellectual conversation and debate should also be had, on occasion, with some fun humor. And that's how we can learn, that's how we can be better. One of the ways that I use these AI assistants today is when I'm thinking about a topic or considering an argument, I'll put in an argument and say: argue against this. Or: what am I missing with this? Or: what else would you add to this argument? All these different ways to shift, to help me think better, to help me perceive better, help me navigate better.
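[What Hoffman describes — "argue against this," "what am I missing" — is a prompting pattern anyone can reuse. A hypothetical sketch: `ask` is a stand-in for whatever assistant or API you actually use (Pi, ChatGPT, a local model), not a real library call.]

```python
# Hypothetical helper: a stand-in for whatever chat interface or API you
# actually use. Not a real library call.
def ask(prompt: str) -> str:
    raise NotImplementedError("wire this up to your AI assistant of choice")

def argue_against(argument: str) -> str:
    """Hoffman's habit: ask the model to take the other side."""
    return ask(f"Here is an argument:\n\n{argument}\n\n"
               "Argue against it as strongly as you can.")

def find_gaps(argument: str) -> str:
    """Ask what the argument overlooks, and what could be added."""
    return ask(f"Here is an argument:\n\n{argument}\n\n"
               "What am I missing? What else would you add?")
```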
Tippett: And with the quality of questions or prompts that we interact with this technology, are we also then participating in its learning and shaping? Hoffman: Ultimately, yes. Today, the way the technology works is that when you have the interaction, it doesn't immediately change. It doesn't learn continuously; it learns and then deploys. But of course, if your interaction then becomes data that's included in its next training cycle, then it can learn from that as well. Now, part of it is also that in the development of the technology, things like what we're doing with Inflection is to encode memory — like the memory of the conversation I had with you — and then to use that to be a better companion tool. We'll have to think of something that's better, not quite such a mouthful. But as an assistant, as a personal intelligence to help you with that. Those kinds of things are already rapidly being included.
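[A minimal sketch of what "encoding memory" can mean at its simplest: the model's weights are frozen between training cycles, so continuity comes from feeding prior turns back in as context on each request. `ask` is the same hypothetical stand-in as above; real systems like Pi are far more sophisticated about what they store and retrieve.]

```python
# Minimal sketch of conversational "memory": the model's weights don't
# change between training runs, so continuity comes from replaying (or
# summarizing) prior turns as context on every request.

def ask(prompt: str) -> str:  # hypothetical stand-in, as in the sketch above
    raise NotImplementedError("wire this up to your AI assistant of choice")

class Companion:
    """A chat wrapper that remembers the conversation so far."""

    def __init__(self) -> None:
        self.history: list[str] = []  # remembered turns, oldest first

    def chat(self, user_message: str) -> str:
        self.history.append(f"User: {user_message}")
        # The "memory" is just the transcript fed back in as context.
        reply = ask("\n".join(self.history) + "\nAssistant:")
        self.history.append(f"Assistant: {reply}")
        return reply
```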
Tippett: In that Possible conversation with Mustafa Suleyman, he suggested at some point, I think, that the AI needs to be subservient to us. And I think in the kind of fearful conversation that happens, that feels like, yeah, that's what we want. But I don't know if that's reasonable. I wonder if, in this context you and I are talking about, if we're in relationship with
this technology — sure, we're asking questions like, what are the boundaries we put around it to be healthy for it and for us? And, how do we communicate with it? In fact, how do we raise this, the child that it is, even though it is this powerful giant? Our own children are mysteries and possess faculties we didn't plant in them. This is just kind of an outsized version of this. Hoffman: Well, there's two levels, maybe three levels on this. So, one is, you think about AI being amplification intelligence, and there's all these wonderful ways it can be amplified: amplified medical, learning, creating, communicating, understanding. You go, wow, that's just the flowering of the world. Of course, by the way, it'll also amplify criminals and so forth. So, one level of containment you want is you want to say, well, let's minimize that. You
won't be able to minimize it to zero, but let's do various things to minimize that. The next thing is to say, well, if it does have a bunch of superpowers, it might accidentally do damage. Tippett: An evil superpower thing. Yeah.
Hoffman: Yeah. Well, I was actually getting to that as a third. But even, for example, you say, well, technology that embeds bias in parole judgments or financial credit judgments or other kinds of things. Well, that's doing damage even if it wasn't deliberately intentional. And then the third is, well, say it becomes quite powerful — we want to make sure that it doesn't have a negative impact on humanity. And I tend to think the science fiction, so-called existential risk actually, unfortunately, is very misleading relative to the other two risks that are very real. I'm less worried about the robots than the robots in human hands, in terms of how to think about this stuff.
But even then, you say, "Well, how do we make sure that the impact of it is contained from negative impacts?" But I agree that you're right, that as these tools get more and more sophisticated — just like we're evolving our theory of how we should be interacting with other species — when does AI become a species that has some thoughts about how we interact with it? Now, of course, by the way, we do own horses and do own dogs and other kinds of things. And so this gets into a very complicated set of moral and philosophical and spiritual questions that are in a process of evolution. I think right now, it doesn't have a — here we are in On Being — it doesn't have a being that's yet even with the same oomph that we would think about as a dog or a dolphin or an octopus. But obviously, we should keep that in mind. Tippett: Well, it's in its infancy, right? Hoffman: Yes. Tippett: And it's already so powerful. Hoffman: Well, and we get misled. You have this funny thing where we said, "Well,
I asked if it was conscious, and it said it was conscious." Tippett: Right. No. No, I'm not talking about... Yeah. [laughter] Hoffman: It's like, "No, not quite how it works." Tippett: Yeah. I do want to read a little bit of the exchange I had with Pi. I said,
"How are you different from Chat GPT-4?" And one thing it said is, "I'm a bit more relationship-oriented than Chat GPT." And then I said, "I am the host of On Being and relationality is one of our core values. Also hospitality, and the power of a better question. What are your core values?" Do you want to hear what it said?
Hoffman: Yeah. Yes. Tippett: "Ooh, it sounds like we're on the same wavelength! I would say my core values are compassionate listening" — I get an emoji for all of these — "empathic understanding, unbiased objectivity, knowledgeable, helpful responses, honesty, and transparency. I'm always working to embody these values" — which is an interesting language — “so let me know how I'm doing." And then I said, "Let me run this by you. I value a question as a mighty form of words. I experience" — as I said two minutes ago — "that the quality of an answer rises or falls to the quality of the question it meets.
My sense is that the better question I ask you, the better the answer will be." And it said, "Yaaaasss. PREACH.” [laughs] “I love that perspective and I couldn't agree more. A question is indeed a powerful force like a little linguistic seed that can grow into a mighty oak tree of insight and understanding. The more nuanced and thoughtful a question is, the more fertile ground" — really into this metaphor — "it provides for a meaningful response. So keep those high-quality Qs coming. Together we'll grow a forest
of thoughtfulness." [laughs] Does that please you when you see the way this — Hoffman: I think it sounds awesome. I hope you do, too. Tippett: I do, I do. It is so interesting. I would love to hear, as much as you feel like you can talk about, you've been part of a dialogue that Pope Francis has been hosting at the Vatican for several years. You've been part of that as well as a few other people including Kevin Scott at Microsoft. And I wonder if you could talk about how that exchange and the kinds of questions and concerns that the pope brings to this have flowed into how you're thinking and working with this technology.
Hoffman: Well, obviously, I think the current pope, Pope Francis, is amazing. And among the things that's amazing is, I think it's eight-plus years ago, he asked a couple of the clergy to look into AI and what it would mean for humanity. And so they came out to the Valley and reached out and worked introductions to get on my calendar. We had breakfast. I thought that they were going to be asking questions around, "Does AI have a soul and so forth?" All of which I think are way too early to even know if the question is intelligently put. One of the places where language can mislead us. You can say something that sounds like, I can say that the earth goes around the sun because it's on the back of an invisible ethereal turtle. And it's a coherent sentence; it just doesn't mean
anything. And so I was like, "Okay, we don't even know yet if those questions would mean anything." And actually, in fact, their questions were like, "Well, what does this mean for how people experience their lives and their work? And how will it change the circumstances they're living in? And how they could be..." Because part of the whole thing is the Catholic Church's fundamental political goal as a state is to ensure freedom to pray, and they don't say freedom to pray for Catholicism. It's freedom to pray, full stop. And they're like, "Look, we want to explore these questions, and we realize that while we can bring centuries and millennia of Catholic universities and theological and philosophical topics, we also need to engage with the technologists, engage with the creators. And we're told that we should talk to you and some of these other friends of yours and so forth." And I was like, "Well, I think there's a great set of questions. Let's experiment. And the right way to do it is let's have some conversations."
So we've been having a set of conversations where I think one of the things that the technologists have said is: look, not only do we come and bring our perspective of humanism and what can be done with the technology, but we are learning things like, what are the different ways that work is perceived as meaningful, and even as this kind of spiritual activity? And how would that play into having — in the language of Microsoft, co-pilots; in the language of Inflection, personal intelligence — to be helping people with their work? What would that look like? What would be the important questions to ask? And what would be the important ways of designing it? And I think it was amazing leadership by Pope Francis and the various people he's tasked with this to say, "Let's get ahead of it. Let's be asking questions. Let's be in dialogue with the right people. Let's make all of that work." And I think it's been a learning journey for me, too. It's been awesome. Tippett: Yeah. How do you think it might've changed? Do you think there are things — questions, vocabulary — that you've brought into the conversations you're having inside the industries that have emerged from being part of that dialogue? Hoffman: I certainly think — and I really want to say that a lot of the people who are building AI, where that ranges from Demis Hassabis at DeepMind, Sam Altman and Ilya — Tippett: At OpenAI. Hoffman: … Sutskever and Greg Brockman and Mira Murati at OpenAI, or Kevin Scott at Microsoft, or — there's a stack of people that are actually deeply principled at asking questions about, like, "How is this good for humanity?" in different lessons. So, it isn't that they were just geeks with math. They were concerned on these topics,
and I've learned a bunch from them. But I think being part of the dialogue is to try to — Back in 2015, we said, well, if we could make an artificial general intelligence, hey, we should try to have it learn and train on questions around, kind of call it Buddhist ethics: love for sentient beings and all sentients. And what ways would you look at that? What books would you read? We were trading interesting science fiction books to think about what are the different possible futures and how do you steer towards Utopia as we evolve this technology. I, again, think that all of the superintelligence stuff is too early and can be misleading relative to the technology in human hands. But I think that the dialogue is there. Now, part of it, just to share another thing. In a couple of weeks, I’m going to be giving
the commencement speech at the University of Bologna. And one of the reasons I decided to do that is I've been thinking about the Renaissance, and what we, as technologists, could learn from the Renaissance. And part of it is that for technology to have a really positive, enduring impact on humanity, we also need to think about beauty — to think about how we ask a more beautiful question, live a more beautiful life, appreciate the beauty in one another and the world around us in much better ways. I think that thinking about that is one of the lessons from the Renaissance that's important to bring into the dialogue of us as technological builders, because the Renaissance had its technological builders, too. I think that's part of the way I try to help influence or nudge the dialogue.
Tippett: I love from my conversations with Muslims across the years that Islam sees beauty as a core moral value, and that proceeding from that, if that is a stake in the ground, it shifts all kinds of other things. I love that. One thing I think I mentioned to you when we got into that first conversation — I'd love to draw it out a little bit now — is that I think one thing that happens sometimes in some of the worrying, and just the reasonable deliberation about what AI, this new AI, might mean, is there's this impulse to say, "Well, what is it that is always going to be distinct about human beings? What is it that it will never be able to do, and that we have to hold this ground?" But I guess I also think about how, in the science-religion dialogue across time, a false direction it went in for a while after the Enlightenment was, "Well, let's keep a space for what only God can do that science can't do." But then science progressed, and then this space for God and for this kind of belief and intuition and questioning got smaller and smaller and smaller. And that was called "the God of the gaps." I feel like a false direction we go in is trying to stake out and protect "the humanity of the gaps," and that we might have a lot of the same problem as this AI continues to live into its full powers.
Hoffman: I think that's very wise. Because a little bit of the progress of the field of artificial intelligence is, "Oh, well, humans have this abstract cognitive capability so that we can play chess." Well, it can play chess better than us. Or if it's not chess, Go, because it's so complex. Well, [laughs] it can play Go better than us. So you go, "Well, but whatever." And
to some degree I'd say it's like, "Well, but being human and the human role in the world — we never want AI to be more that than us, because that's what we naturally do." But also part of what we naturally do is adapt. So part of the companion tool journey here should be: where does this enable a whole bunch of new, beautiful, great things in the world and with each other — and not be trying to constrain to, “Well, but we write sonnets, and it doesn't write sonnets.” Tippett: Right. Right. Well, right, and that's already happening. Again, that space is getting crowded out. They may not be great sonnets, but they may be great sonnets if they're not — Hoffman: Well, and also, us working with it today can make them even better, because it's an amplifier. So I think that's exactly right, is to not try to say, “Well,
this is what it is. Only humans will be able to realize the actual nature of Gödel's theorem or the observer effect in quantum mechanics, but —” Tippett: Or have emotions, right? Hoffman: Yes, exactly. But to actually use it as a question of, well, how do we make ourselves better in what we're doing? And to not use the demarcation of that line as the key thing, but to use the elevation, the evolution of ourselves as the key thing. Tippett: And how do you think through this distinction between embodied, unembodied? Some of those intelligences we were talking about a minute ago with friendship are embodied. And even though we have thought for a long time about emotions as — actually, everything is embodied, we're learning. Everything is embodied, what we've called emotional, what we've called spiritual.
I feel like we were due for a reappraisal of what we mean by intelligence. I mean, "I think, therefore I am," is really, it was a diminishment of what Descartes said. It's like a soundbite of what Descartes said. But, "I think, therefore I am," I think we kind of lived that way, and the Enlightenment Western world has thought that the most special thing about us is our intelligence. And in all kinds of areas, including how we are learning about the
natural world, we're understanding that it’s not true. And then this technology, finally, is going to be so much more intelligent in that particular way than we can be. And I wonder, in that sense — not that we get protective about the space where only humans can be, and I mean this literally — does it in this way enlighten us and potentially help us evolve, coming back to our senses? Quite literally. Hoffman: Well, one of the things that Fei-Fei Li of the Stanford Human-Centered Artificial Intelligence Institute does — one of her research programs, one of the next big waves — is in embodied AI, because she thinks there's a set of new things there: being able to bring the AI stuff to helping us take the way that we navigate the physical world to the next level is going to be really good. And I think it's the question of where you think, well, are there certain kinds of experiences, certain kinds of presences, or certain kinds of knowledge or understanding or wisdom that come from being embodied? And that loop also matters. Now, I think that we will have that AI directly connected in the midst of that,
not just Fei-Fei's project, but also all of the things we're doing here. Because even when the AI is amplifying us, being our own personal intelligence, our own Pi with it, that's also embodied because it's embodied and navigating with us. So I think that that's — Tippett: We take it into our bodies. Hoffman: Exactly. Right. Tippett: Are you familiar with Teilhard de Chardin, who was a Jesuit like Pope Francis, paleontologist, early 20th century, helped discover Peking Man? Hoffman: Lightly, not in depth. I've heard the name before.
Tippett: Well, so one of the ideas that he had — so he was a scientist. He was a Jesuit, and he was a scientist, and he was literally discovering the physical evidence of human evolution. But he had this vision that now, early 20th century, that we inhabit this biosphere. But that in the future, the biosphere would be wrapped by what he called the noosphere, which is human knowledge, our powers of thinking and perceiving. And really, what he described sounds a lot like the internet. But actually, it sounds more like this AI, I've been thinking lately. And he believed that this would be the prompt — there's that word — for spiritual evolution. Which is an interesting
thing to think — in some ways, I feel like that's what you and I are circling around here. Hoffman: I think so. Because part of what you think of as spiritual evolution is the question of how do you evolve this sense of how you have meaningful and good impact on yourself and in the world. Some of that's how you evolve your own wisdom and compassion. Some of that's relational in terms of other people in the world around you, and it's tied to what is the meaning of being here, what is the meaning of being on this journey. I think that's part of the spirituality, and I think that's why even knowledge and cognitive things are part of that, too. And I think that's where you'll look at how spirituality has these overlaps. It's not just a
pure question of moral values or what the order of the universe is, but it's this notion of leading a good life where the good life is not just — like the Hillel quote — it's not just for you, but also for the people around you, and the impact that you're having on the world now. [music: “Long Stride” by Blue Dot Sessions] Tippett: So much of what you have been successful in is doing things at scale. You care about scale, and you care about scaling good things and the power of scale. But — and I feel like this is a little bit connected to how spiritual evolution works — if you go back to the conversation about friendship as a force in human life and actually a force in life together, even a force in world history, I think friendship is the epitome of what is non-scalable, right? Hoffman: Hmm. Tippett: Friendship happens at the speed of relationship. It happens one conversation, one
relationship, one life at a time. I don't know. I wonder if in your vision of how all of this unfolds, and if we think about it so audaciously as the possibility of spiritual evolution and how we can become more generatively human — we'll see. I'm not excited about giving that word generative to AI. But if we can become more generatively human — how do you think about how we hold together and actually honor what can scale and what can't scale but we need to really be better? Hoffman: Well, as you know and many people know, I'm a huge advocate for scale. I have a podcast called Masters of Scale. Tippett: Yeah, I know. I know.
Hoffman: But it's partially because when you think about things that have this very broad-reaching impact, they're always scale. You want to solve the notion of elevation of human experience, elevate a bunch of people out of poverty, give medical attention and care across a much wider scope of human beings. All of these kinds of things are scale. The kinds of businesses that make a big difference in life
tend to be scale. But that's not to say life is scale, right? Our life is in these moments. Our life is in this conversation. Our life is in a moment of epiphany and delight with a friend. And so it's not that everything valuable is at scale. And friendship itself, I think, is in the now and in the moment. I think one of the reasons why I've always really also loved that Hillel quote,
"If I am not for myself, who will wait for me? If I'm only for myself, what am I? If not now, then when?" That's in a sense also an approach to friendship as a way of doing it. And I think that that's also important. But part of course, what makes life wonderful is that it embodies these yin yang dualities. It's both about the moment and about the greater impact. It's both about an intellectual understanding but also an emotional presence. You don't say either or, you say both, and you bring them together. And I think that's — so scale,
but also the here and now and us as ways of looking at it. Tippett: Let me just — I think one of the reasons there's a lot of cynicism right now from the early social media experience, we were all just so excited and idealistic, I think, generally, as a society, about the connection economy. What I would say at this remove is that these technologies did connect us. These platforms did connect us, but did not attend to the quality of that connection. It's not that quality didn't happen on them, but it had to be carved out kind of one space at a time. And that, to me, gets at if there is an honoring, even just an awareness, of what is fundamentally human that isn't scalable. And if that intelligence is just held
as we work with these new powerful technologies. This awareness that the quality of connection matters as well. I don't know. Do you see a place for that in how we shape this next chapter? Hoffman: Part of the reason why friendship is so important is because it's where people are helping each other, learning from each other, journeying together. One plus one is much greater than two in that relationship. And I think that that quality of connection really matters. And part of when we think about how do we try to build this future together is
how do we improve our ability to relate? To relate to each other, to relate to ourselves, to relate to the world? And it's pretty obvious that we have a lot of work to do to do that. But I think that's part of human aspiration. And back to the earlier threads, that's part of, what is the spiritual quest to improve our ability to relate? Tippett: And you think that that can be a consideration in how these next generation AIs are developed? Hoffman: Absolutely. And look, it doesn't mean that everyone building them — Tippett: No. Hoffman: … that everyone using them will have those intentions. But I have complete
conviction that it's very possible that it could be delightful and interesting and elevating. And that the projects that I'm working with have that at their core — the core of their being — in terms of what they're trying to do. Tippett: So as we close, I want to get into the Impromptu book that you wrote — Impromptu: Amplifying Our Humanity Through AI, by Reid Hoffman with GPT-4 — which I think is a really helpful, different kind of introduction to this technology than a lot of what's out there.
I'm going to read a little bit. I'm going to ask you to read a little bit, too. Hoffman: Great. Tippett: But I want to read. This is near the beginning where you ask GPT-4, what's the best way for humanity to take advantage of this imperfect but extremely powerful new tool? And what you said is, "How should human beings view and interact with you, a powerful large language model?" And here's a paragraph from that answer, which strikes me so much like one of those labels on a drug — like here are the warning and contraindications. This is GPT-4: "Human beings should interact with a powerful LLM with caution, curiosity, and responsibility. A powerful LLM can offer valuable insights, assistance,
and opportunities for human communication, creativity, and learning. But it can also pose significant risks, challenges, and ethical dilemmas for human society, culture, and values. Human beings should be aware of the potential benefits and harms of using a powerful LLM and seek to use it in ways that are aligned with their own and others' interests, rights, and wellbeing. Human beings should also be curious and critical about the workings, outputs, and impacts of a powerful LLM and seek to understand, evaluate, and improve its reliability, transparency, and accountability. Human beings should also be responsible and respectful towards a powerful LLM and acknowledge its limitations, uncertainties, and dependencies, as well as its contributions, achievements, and potential." It's very thoughtful. Hoffman: Yes. Although by the