Reid Hoffman confronts the AI critics
Are AI’s harshest critics causing irreversible harm to humanity? How far should leaders lean into AI? And just what would Reid sound like speaking Chinese with a British accent?
Reid Hoffman sits down with CEO of WaitWhat and longtime friend Jeff Berman to tackle these pressing questions and more. Join them in an unguarded exploration of how the AI landscape has continued to shift in the six months since Reid published Impromptu, the book he co-authored with ChatGPT.

Table of Contents:
- Reid Hoffman on the current state of AI development
- How Reid Hoffman would approach writing a sequel to Impromptu
- How criticism of AI has restricted its use
- Solving the data privacy issue
- How non-tech companies can engage with AI
- Utilizing AI text-to-speech capabilities
- Why Reid Hoffman didn’t sign the letter urging AI labs to pause development
Transcript:
Reid Hoffman confronts the AI critics
REID HOFFMAN: I was talking to Jeff Bezos. He was like, “We could give people the ability to do the audiobook in their own voice with AI. And we could also give them the ability to choose other voices.”
If I wanted to have, you know, ‘Reid with a British accent.’ And eventually, here’s him speaking perfect Chinese, which of course I can do zero of.
Next year, I suspect it’ll just be, which version of the Reid voice you want? Do you want the Masters of Scale Reid voice? Do you want the fireside chat Reid voice? Do you want the ‘Reid Hoffman doing his Humphrey Bogart interpretation?’
I mean, whatever the thing is, we’ll have all of that as part of it. And that’s again, human amplification.
JEFF BERMAN: Hi listeners. It’s Jeff Berman, CEO of WaitWhat — the company that brings you Masters of Scale.
As a longtime friend of Reid Hoffman’s, I wanted to capture the sort of conversation we would have off-mic — a conversation about the growing impact of AI and what leaders and aspiring leaders can do not just to survive, but to thrive in this new era of Artificial Intelligence.
It’s been more than six months since Reid released his book Impromptu, written in collaboration with ChatGPT. In this discussion, we explore how the landscape of AI has evolved, and we take a peek at where it’s going.
In this special episode, you’ll hear how early stage business leaders should tap their networks to take advantage of AI. And how leaders at big organizations should create long-term strategic plans that can adapt at the drop of a hat.
[THEME MUSIC]
Reid Hoffman on the current state of AI development
BERMAN: Reid, you and I sat together almost exactly 51 weeks ago. And you told me that it was time to buckle up, because what was coming in AI was going to blow my mind and was going to be a wild ride.
So I’m curious on your perspective about what’s happened in the past year. What’s different from what you expected? What surprised you, and where you think we are?
HOFFMAN: Well, so I had been distracted by the general discourse around superintelligence into thinking of superintelligence as something that’s happening in the future.
And here’s what I realized. We already have superintelligence. GPT-4 is superintelligent. That doesn’t mean, for example, that if you say, "Okay, GPT-4, tell Reid Hoffman how to better invest in artificial intelligence or craft good artificial intelligence," it’ll say anything the experts don’t already know, because it’s not superintelligent in that way. The way it is superintelligent is in its breadth. So for example, if you said, "Hey, I’ve been thinking about neuroscience, can you give me the parallels between modern neuroscience and game theory?"
It’ll do that. And maybe there are a few human beings who can do that. But then you say, "Well, okay, also give me the parallels between modern neuroscience and contemporary oceanography." And it can give you that too. And it’s that use of it for breadth which, you know, humans aren’t capable of, because it’s ingested over a trillion words. But because of that, it is already superintelligence.
BERMAN: It is that breadth of superintelligence that has radically shifted my search behavior. I would say I’m going to GPT at least half as often as I’m going to Google.
Is that happening for you too? Do you see that happening more broadly? Is this going to completely disrupt the search industry?
HOFFMAN: Well, I think it definitely transforms the search industry. And I think that once people start using it, they use search overall less.
And part of the reason for that is very straightforward, which is: sometimes you’re looking for an answer, not 10 blue links. And of course, that’s part of the reason why the Microsoft concept was adding, you know, Bing Chat to Bing. And I think that is fundamental. It’s not just a replacement for things that search did badly; it’s also that I use it differently.
So, for example, part of what I will use the AI systems for is I go, well, I’m thinking about making an argument, for example, that human beings will always be able to adapt their ability to be creative with their tools.
Here’s my argument. And then I can go to the AI and can say “make the better argument.” And then I can say “make the counter argument.” And I can look at that too. And then I can begin to go to, okay, what do I think is the right way to make this argument, balance against the concerns and criticisms? And all of that is done kind of immediately.
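For readers who want to try this pattern themselves, here is a minimal sketch of the argument and counter-argument loop Reid describes, written against the OpenAI Python SDK. The model name, prompts, and structure are illustrative assumptions, not anything used on the show.

```python
# A minimal sketch of the workflow described above: ask a model for the
# strongest version of an argument, then the strongest counter-argument.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

claim = ("Human beings will always be able to adapt their ability "
         "to be creative with their tools.")

def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the model's reply text."""
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative choice; any capable chat model works
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

better = ask(f"Make the strongest possible argument for this claim: {claim}")
counter = ask(f"Now make the strongest counter-argument to this claim: {claim}")

print("ARGUMENT:\n", better)
print("\nCOUNTER-ARGUMENT:\n", counter)
```

Reading the two outputs side by side is the "balance against the concerns and criticisms" step Reid mentions; the final synthesis is still yours to write.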
Now, there’s one other use case that I think is probably worth highlighting, because I think one of the things that people are not tracking is that by use of AI, we can actually make our interactions with each other even more human. And here is a simple way of putting it, which is, you know, many of us get very busy, get a whole bunch of email communications, and don’t have the time to really respond in the way that we would like to respond.
Well, actually, AI today can help with that. Say I get a detailed email. For example, we’re in a work context, and someone writes, "I’d like to initiate the following kind of change in our process."
Your real reaction is, hey, that’s complicated, let’s have a meeting about it. And you could just reply, "Okay, let’s have a meeting." But that’s not appreciative of all the work that someone has put into it.
You go to GPT-4 and it says, "Oh, I can see why you think that’s a very interesting idea. And there’s a bunch of different compelling things to it. I particularly like this, this, and this. On the other hand, it does have some complexities, and we’ll probably be better off if we first get together as a group and talk about it. So why don’t we do that? And I really do appreciate your bringing this up."
That’s a much more human way to respond, but it takes time to write. Your kind of agent of preference can help you craft that.
And actually, in fact, that allows you to raise your game. You might change a sentence or two. You realize that would be the better way to respond.
And part of how we learn as human beings is we learn from each other. So if someone responds compassionately to you, you learn something in that compassion. You learn about them. You learn about the reaction.
Generally speaking, it brings out the better parts of yourself. And that’s the reorientation of perspective that, you know, I think we will all come to. And the quicker we come to it, the stronger and better we’ll be.
How Reid Hoffman would approach writing a sequel to Impromptu
BERMAN: Reid, about six months ago, you published Impromptu, for which you had an unusual co-author, because you “co-wrote” it with ChatGPT.
As you look back, given the evolution that we’ve seen in the space and the technology, how would Impromptu be different if you were writing it together today?
HOFFMAN: It’s interesting. In getting Impromptu out, I wanted to not just tell the story of human amplification, but I wanted to show it.
Now, if I were writing a sequel to it, then I would probably do a different set of chapters. Like I would say, we have a line of sight to a medical assistant that can help everyone who has access to a smartphone navigate medical considerations, including, by the way, directing you to your GP if you have one. But there are, you know, 5 billion smartphones in the world, and, you know, fewer than a billion people have access to a doctor.
So, like, helping all the people who don’t have access to a doctor is a far better thing to do. And even for the people who have access to a doctor, you can do it while you’re talking to your agent. And your agent says, "Hey, here are three things you might want to go talk to your doctor about."
And you show up and say, “here’s the dialogue I already had. And here’s the three things,” because sometimes by the way, doctors make mistakes and actually working together gets you there faster and better and everything else. And I would include that as a chapter.
How criticism of AI has restricted its use
BERMAN: If you’re a parent and you’ve got a kid with a rash on their arm and you can’t reach your doctor in that moment, are we at a point where that parent can take a picture of that rash and ask an AI to diagnose it and recommend a course of action, and trust the answer?
HOFFMAN: We certainly are in capability. Because of medical regulation, all the providers of the AI essentially are tuning down the ability to do that, which is kind of paradoxical. Because you’d say, well, this could be really helpful, but because of legal liability and everything else, we’re saying, you can’t do that.
And that’s absurdly stupid and causing a lot of human pain. And what you should do is say, "Hey, I’m not a GP. You should consult a GP. But if you can’t, here’s a thought, and here’s something you can go talk to your GP about. And, you know, it’s on you to make that decision; I’m not legally liable as your GP." And as long as you’ve said that, that’s fine, and we should enable it.
And for example, a researcher at Microsoft, a guy named Peter Lee, has written an excellent book showing how GPT-4 can already do medical diagnoses better than, you know, the majority of human GPs.
And then people go, well, is it going to replace doctors? And the answer is hopefully not, because while that can be helpful, the doctor can look at it and say, “well, you know, in this case, while it says that’s the top thing, it may not actually be getting this and maybe we should look at the second thing.”
So all of that is there. And the craziness is that many critics of AI believe they’re being virtuous and helpful to humanity by articulating their criticism as vocally as possible, including a lot of journalists and everybody else. They feel virtuous because they’re saying, "I’m pointing out this thing that could be a danger." It’s like, yeah, but that’s all you’re doing. Because you’re not pointing out the raison d’être to make this work, you’re causing thousands to millions of human beings to suffer by not having access to this medical agent, and deferring that, because you’re saying the criticism is the most important thing to understand about this.
And so while you think you are helping humanity, you’re actually hurting humanity. And it doesn’t mean don’t be critical; you know, they say, "Oh, you’re saying never offer criticism." Of course not. Navigate intelligently, but realize there is this amazing set of outcomes that are super important to billions of people, so our real question should be: how do we get there as vigorously and as soon as possible?
Solving the data privacy issue
BERMAN: So the other side of the medical specific example is privacy. How would you advise individual users or business leaders who are uploading sensitive data to AIs to think about the privacy of their data?
HOFFMAN: Well, a lot of work has been done on this and will continue to be. We’re in early days. It’s fast iteration. And generally speaking, I tend to think that people overblow the risks on this stuff. And so you go, “well, I, you know, my kid has a rash. I’m trying to solve the rash.” It’s like, well, actually, in fact, it’s much better to solve the rash than it is to go, “well, I don’t know exactly what happened with the picture of my kid’s rash,” you know, et cetera, et cetera.
And you know, I’m kind of trusting that this fast moving early-stage company is going to be, you know, intelligent and judicious about it. And, you know, there may be an error there, but it’s overall much better.
Now, that being said, as we iterate, part of what’s going to happen is we’re going to get much, much better on this. One of the things that has been kind of concerning to me is if you said, look, if we put all of the medical data into a database and then said, now let’s try to figure out what are mitigating forces against cancer, we could do that if we didn’t have the intense individual privacy thing, relative to every individual user. And the benefit to all of us would be very large. But people obviously first worry about the negative downsides of their own data and they say, “no, no, that I don’t want to participate in that.”
Well, one of the things that’s going to come from AI is: how do you look at a big pile of data and then generate fictional data that has the same characteristics as the original? So you say, well, we should allow this AI to look through all of the medical records, and then it will construct a bunch of data that can never be traced back to anybody, right? That then could be used for all that medical research.
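As a very rough illustration of the idea Reid sketches here, the toy example below derives aggregate statistics from a made-up table of records and samples entirely fictional rows with similar characteristics. The column names and numbers are invented, and this is not how production systems do it; real approaches rely on techniques such as differential privacy or trained generative models.

```python
# Toy sketch: summarize real records, then sample fictional rows that share
# their per-column statistics. Column names and data are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Stand-in for a real medical table.
real = pd.DataFrame({
    "age": rng.normal(55, 12, 1000).round(),
    "blood_pressure": rng.normal(128, 15, 1000).round(),
    "diagnosis": rng.choice(["benign", "malignant"], 1000, p=[0.85, 0.15]),
})

def synthesize(df: pd.DataFrame, n: int) -> pd.DataFrame:
    """Sample fictional rows matching each column's marginal distribution."""
    out = {}
    for col in df.columns:
        if pd.api.types.is_numeric_dtype(df[col]):
            # Numeric column: sample from a normal fit to mean and std.
            out[col] = rng.normal(df[col].mean(), df[col].std(), n).round()
        else:
            # Categorical column: sample from the observed frequencies.
            freq = df[col].value_counts(normalize=True)
            out[col] = rng.choice(freq.index.to_numpy(), n, p=freq.to_numpy())
    return pd.DataFrame(out)

synthetic = synthesize(real, 500)
print(synthetic.head())  # fictional rows with no one-to-one link to any patient
```

Note that this toy only matches per-column statistics and ignores correlations between columns; the approaches Reid is pointing at preserve more of the joint structure and add formal privacy guarantees.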
Ultimately, we’re going to figure out how to do this in a way that we’re going to get a lot of benefit. I think the optimistic step forward is the most important thing to be thinking about, even as you navigate, you know, potential potholes and so forth.
BERMAN: After a short break, we’ll hear why Reid chose not to join his peers in signing an open letter on the existential risk of AI. Plus, Reid shares how his collaboration with Microsoft will amplify his ability to record his voice in ways we’ve never thought possible.
We’ll be right back.
[AD BREAK]
BERMAN: Before the break, we heard Reid Hoffman explore how individual users should consider data privacy while experimenting with AI.
Now, Reid explains how not just business leaders but leaders of all organizations who aren’t tech-savvy can still take advantage of AI’s capabilities. Let’s jump back in.
I’m confident that most tech companies are already well ahead and experimenting with AI, trying to figure out how to work with it.
How non-tech companies can engage with AI
If you’re working in or running a non-tech company and you just don’t have the expertise, you don’t have the knowledge, you don’t have the staff that’s already there, then other than maybe playing with it here and there, what’s your advice to business leaders in those kinds of companies for how to get started?
HOFFMAN: What I would say is figure out within your network, which includes within your company, figure out who you have who’s good at kind of being experimental.
And then say, “hey, can you help me with this?” You know, engage them. And you might say, “look, what I’d like you to do is figure out a kind of micro project about how we could be using this effectively for ourselves and then come back and demonstrate it to all of us,” right? You may discover that, hey, the AI tool today in 2023 is not really fully ready for the thing that I would most want it to be doing, but I’ve now got a sense of it.
And then as the AI tools evolve in 2024, and I’m paying some attention to the evolution, I might suddenly realize, oh, it wasn’t this thing that I was looking for, but this other thing that could be really helpful to us. And then let’s start engaging in that, and let’s engage on an ongoing basis.
BERMAN: Roughly a decade ago, I worked at a big company that engaged in a five-year planning process. They called it the SLURP — the Strategic Long Range Planning Process. And when you’re running digital at a big company and they’re asking you to look out five years and model your business, it’s really tough. But a lot of big businesses do work this way, and they can predictably work this way.
Given the changes in AI, if you were advising a big company business leader, how would you be suggesting they revise their long range planning process, given the pace of change on AI?
HOFFMAN: Well, I think: try to get a sense of what you think the parameters of change will be, potentially, for you and your industry. Probably presume that it’s more than your presumption. And then realize that you’re going to have to replan pretty constantly and quickly.
You know, we have to plan with some ability to replan and some flexibility. We have to pay the price for maintaining some optionality and flexibility. You know, that’s the kind of thing to lean more into, because the thing that is guaranteed is that next year, these tools are going to be even better than they are today. And to give you a microcosm of how good they are today: in the Microsoft Copilot product, developers who are using it for code suggestions as they write are accepting over 50% of those suggestions.
And if you think about that, you say, it’s not quite a 2X productivity increase, but it’s pretty stunning.
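As a back-of-envelope illustration of why a roughly 50% acceptance rate is "not quite a 2X productivity increase," here is a tiny calculation with made-up parameters: it assumes half of the shipped lines come from accepted suggestions and that accepted lines still take a fraction of the time to read and adjust. Both numbers are assumptions for illustration, not figures from the episode.

```python
# Hypothetical back-of-envelope estimate of the speedup from code suggestions.
fraction_from_suggestions = 0.5   # assumed share of shipped lines that came from accepted suggestions
review_cost = 0.3                 # assumed: accepted lines still take ~30% of the hand-writing time to review

time_without = 1.0                # normalize: writing everything by hand
time_with = (1 - fraction_from_suggestions) * 1.0 + fraction_from_suggestions * review_cost

speedup = time_without / time_with
print(f"Estimated speedup: {speedup:.2f}x")  # ~1.54x under these assumptions
```

Under these assumptions the gain is meaningful but well short of 2X, which matches the spirit of Reid's remark.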
Utilizing AI text-to-speech capabilities
BERMAN: So that’s a good transition back to Impromptu, which I chose to listen to. And I enjoyed listening to it in part because I got to hear GPT-4’s answers in a voice different from yours.
So I’m curious, especially given GPT-4’s text-to-speech capabilities, you know, again, in terms of an Impromptu 2.0, whether you would use that, and how it would look different.
HOFFMAN: So what I wanted to show was that there is an ability to be even more human and connect, because I don’t have the time to spend the 80 hours reading the book aloud. And so I said, all right, I’d like to show how the AI voice generation can be trained on the cadences of my voice, reflect the ongoing relationship that I have with my listeners on Masters of Scale, you know, the readers or listeners of the book, and be there and be present for them, but also, you know, in a way that actually works.
So we actually used a Microsoft technology called VALL-E for the text-to-speech part, which they’ve been very careful not to release because they don’t want it to be misused. But of course, I’m not misusing it. I’m using it to read the book in Reid’s voice and then in a totally fictional voice for the AI.
When I was talking to Jeff Bezos about what he was doing, he was like, "Oh, wow. Then one of the things we could do is we could give people the ability to do the audiobook in their own voice with this, and we could also give them the ability to choose other voices."
If I wanted to have, you know, Reid with a British accent. And eventually, for example, when you have Impromptu in a Chinese voice, it’ll go, okay, here’s Reid’s vocal cadence, and here’s him speaking perfect Chinese, which of course I can do zero of. So those things also then become possible as part of the human amplification.
BERMAN: Reid, the ability to create different accents, to speak in different languages, to immediately translate you into a perfect dialect, one can only imagine the different use cases here. I’d love for you to go a little bit deeper into what your audio collaboration with VALL-E might actually sound like.
HOFFMAN: Well, so I think it’s not perfect. We worked on two different channels for it, ’cause we were exploring. One channel is text-to-speech, and that tends to do the best kind of human Reid voice.
And then there’s another, which is speech-to-speech, which is: you have a human actor, you know, read the speech, and then you transform their voice into the Reid voice. That one does better at not having some weird pauses and other things, which the text-to-speech does, but it also doesn’t mimic the cadences as purely as the text-to-speech one does.
And so, because it has a human actor doing the Reid speaking, and there are some people who are very, very good at doing that, it’s more human, but a slightly different human.
That’s what we learned by doing this so far. Now, part of it is, you know, next year, I suspect it’ll just be: upload a text file, then pick which version of the Reid voice you want. Do you want the Masters of Scale Reid voice? Do you want the fireside chat Reid voice? Do you want the Reid Hoffman doing his, you know, kind of, Humphrey Bogart interpretation?
I mean, whatever the thing is, I don’t know if that one will be next year, that one maybe the year after, but we’ll have all of that as part of it. And that’s part of, again, human amplification.
Why Reid Hoffman didn’t sign the letter urging AI labs to pause development
BERMAN: Reid, before we go, I would be remiss if I didn’t ask you about the open letter regarding the looming threats presented by Artificial Intelligence. A number of your peers in tech — people like Elon Musk and Steve Wozniak — signed the statement, but you chose not to. I’d love to hear why that is. Can you give us some insight into why you made that decision?
HOFFMAN: So, you know, it’s very popular to talk about existential risk. And there was a statement that a bunch of my friends signed earlier this year, saying AI should be treated as an existential risk along with climate change, et cetera. And one of the things I think is naive about that statement, and the reason I didn’t sign it, is that while AI can be an existential risk, just like, for example, you create nuclear power and nuclear weapons are an existential risk and so forth, it also sits in the column of things that make the largest positive difference. It can help with climate change. And for example, AI is going to be essential to monitoring and saving us from future pandemics.
And so I think that there’s a bunch of different areas, not just as a language assistant but as an extraordinarily valuable tool for the ongoing thriving of humanity and human society, where it’s going to be, and is being, developed.
And that’s the kind of thing that’s going to be very important to us. So that is kind of a more advanced AI game, but those are areas where this really, essentially, matters.
BERMAN: Reid, it is always fantastic to catch up with you.
HOFFMAN: Jeff, very much likewise. We have infinite things to talk about.
BERMAN: Historically, business leaders have played a critical role in shaping society and leading change. Especially given the pace of innovation and growing volatility, business leaders must focus not just on shareholders but on stakeholders as well, including, most importantly, our own team members.
Reid argues that a human must be at the center of whatever we do in AI. I would add that we need to center humans — and humanity — in all of our work. We hope this episode inspires you to think about how to do that more effectively.
Thank you again to our own Reid Hoffman for his invaluable and candid assessment of the state of AI. If you haven’t heard our recent series of episodes, AI+You, I implore you to dive in. The series functions as an actionable playbook, leaving you primed to tackle this new and exciting chapter.
Thank you for listening.