
Table of Contents:
- How AI transforms human agency
- The mindset of adopting AI technologies
- Winning over AI skeptics
- Who Reid’s new book ‘Superagency’ is for
- AI’s impact on jobs
- Steering AI through this transitional moment
- Is the concern around emerging tech new?
- Reid Hoffman predicts the near future of AI
- Navigating the bad actors who leverage AI
- How the Trump administration will impact AI
- Advice for business leaders during this time
Transcript:
Reid Hoffman on AI ‘Superagency’
REID HOFFMAN: Would you rather have a radiologist read your X-ray scan or would you rather have a radiologist with an AI?
And the answer is with an AI every day of the week, eight days a week, because that then gives me a much better health outcome.
And so, the society that you experience in this kind of super agency is when many people get the same superpowers all together, and we’re all benefiting from our own and from others.
BOB SAFIAN: That’s Reid Hoffman, the co-founder of LinkedIn and founding host of Masters of Scale. Reid is always instructive to talk with, especially about AI, which he has ardently championed as a founder, as a Microsoft board member, and as an author. He’s just released a new book called “Superagency” about the fast-evolving AI era. Today we dig into it. So let’s get started. I’m Bob Safian, and this is Rapid Response.
[THEME MUSIC]
SAFIAN: I’m Bob Safian. I’m here with Reid Hoffman, co-founder of LinkedIn, partner at Greylock, founding host of Masters of Scale. Reid, great to see you.
HOFFMAN: Great to see you as always, Bob.
How AI transforms human agency
SAFIAN: So you have a new book out today called “Superagency” with co-author Greg Beato. Some people have called the book a surprisingly positive take on AI and on humanity. And now I think the surprise is less about you being optimistic than about the topic, that there’s so much skepticism right now about the future of both AI and humanity.
Can you start first by defining the word agency, regular old agency, and then sort of what super agency is beyond that?
HOFFMAN: So agency is our ability to kind of express ourselves in the world, to make choices, to configure our environment, to say “this is, kind of, what I want to have happen to me, to my environment around me.” Obviously, nobody has infinite agency, but we all have some agency and we aspire to that as part of what we do.
AI, like other kind of general-purpose technologies that have come before, gives us superpowers. Superpowers are like a car gives you superpower for mobility, the phone gives you superpowers for connectivity and information. AI gives you superpowers for the entire world of information, navigation, decision-making, etc.
And what super agency is, is not just when you as an individual get the superpower, but when you and many of the people around you, when millions of people throughout society also get that superpower. Just as a car doesn’t just transform your mobility, your ability to go somewhere, when other people’s mobility is similarly transformed, like a doctor can come for a house call, a friend can come to visit. So the society that you experience with this kind of super agency is when many people get the same superpowers, and we’re all benefiting from our own and from others.
SAFIAN: I mean, the fears around AI, I guess, are that AI will eventually limit human control. And when you’re talking about super agency, you’re sort of positing the opposite, that we’re going to have more control.
HOFFMAN: Well, it’s actually different, but more in some important ways. These technological transformations of agency are never only additive. They’re mostly additive. Like the car is broadly additive. But of course, if your agency was previously that you were a driver of a horse carriage, that agency changes.
Like when you have a phone, you can reach out to other people, but other people can also reach out to you. So you’re available. Agency kind of transforms in these cases. You can already see it if you start playing with these agents. You can now do things and accomplish things that you couldn’t accomplish before, which unlocks your ability to learn things, your ability to communicate things, your ability to do things faster and in more interesting ways.
So that’s part of the reason why it’s really important that we actually play with these technologies. We engage with them. We do serious things with them. We do what I call in the book “iterative deployment,” and that’s what’s so important for us all engaging on this path heading towards super agency.
The mindset of adopting AI technologies
SAFIAN: You talk about two different kinds of super agency or two different tracks in some ways. One is where AI helps me complete tasks that otherwise would be hard for me to do alone. And the other is where AI does things for me, without me.
HOFFMAN: You might say, when you’re getting into an Uber, is this a gain of agency or a loss of agency? Because you’re not driving anymore. But of course, because you’re choosing it and because you’re engaging in it, you actually gain agency because you didn’t have to have the car there. You might have had one or two cocktails too many and still have the mobility.
The mindset is important because if you say, “I am getting an Uber because I hate it and because I’ve got a stranger who’s driving me to some different place, and I don’t really want to be doing it, it’s like being carted off to jail,” then of course you’re going to perceive an enormous decrease in agency. But if you adopt it with the right mindset and say, “Hey, this is something I am seeking to do,” and start from curious to positive, like the first time you were on a bike, right? You’re kind of terrified of it. You’re like, do I really need to be learning this? You’ve got training wheels on. But once you begin to learn it, you go, “Ah, I now have increased mobility. I can have new exercise. I can get new experiences.” And that’s the kind of leaning into agency that I think is really important as we adopt new technologies.
SAFIAN: Yeah, and I guess, trusting that agency, I mean, your example about the Uber driver, it makes me think about I’m giving agency over to the driver, but the driver is probably giving agency over to a maps app that is telling them the best way to get to my destination, right? And they’re trusting the map, and I’m trusting that they’re trusting the map, and all those things sort of lay on top of each other.
HOFFMAN: Yeah. And actually, I wouldn’t say ‘giving over agency’ because you always have some agency in that choice and ongoing, but like you’re utilizing the agency. You’re engaging the agency.
SAFIAN: I’m offloading some mental load from myself and allowing myself to spend my time doing something that I feel is more valuable.
HOFFMAN: Yeah, exactly. One of the very earliest things when I was saying, “Why do we want autonomous vehicles?” It’s because drinking and driving goes from a horrific act of negligent or deliberate evil to something that might be actually quite relaxing and enjoyable.
Winning over AI skeptics
SAFIAN: You’ve been preaching about the potential of AI for some time. You wrote a book with ChatGPT to demonstrate the potential. You’ve made digital twins of yourself to try demystifying it.
Not everyone is convinced. What do you feel like you have to sort of fight most in getting people over this, and what prompted you to do the book now as a way to try to make that change?
HOFFMAN: Well, the book is kind of a natural extension from “Impromptu,” which was co-written with AI, trying to show how it’s amplification intelligence and how you could use it in these positive cases.
My biggest hope and persuasion is that people who are AI fearful or skeptical may begin to add some AI curiosity and kind of say, “Hey, look, I should try to play with this.”
Part of what super agency is about is to say, look, it doesn’t just matter for yourself; other people getting exposure to this will also be good for your life. For example, think about the fact that if I have a smartphone, I have a medical assistant that is as good as or better than the average doctor.
Would you rather have a radiologist read your X-ray scan, or would you rather have a radiologist with an AI?
And the answer is with an AI every day of the week, eight days a week, because that then gives me a much better health outcome.
So it’s not just me and my superpowers, but other people gaining superpowers also helps me.
SAFIAN: Even if I’m not engaging quite the way you would most like me to, I’m still going to get some of the benefits of this. It’s going to be part of cultural changes.
HOFFMAN: Ultimately how people get to adopting and adapting their lifestyle to these new technologies is because they begin to see, “Oh, actually, in fact, this is a new, very good thing.” As opposed to when cars were first introduced, they were considered so dangerous that they had to have a person walking in front of them, waving an orange flag.
Now, we got rid of that regulation very quickly. And it’s like, okay, well, they’re dangerous, but okay, can we contain and shape the danger in ways that are small relative to this massive benefit of super agency and mobility?
Who Reid’s new book ‘Superagency’ is for
SAFIAN: When I think about you, Reid, it’s like you have these different slices of audience across your influence. You’ve got your own peers in tech, and then you’ve got business people and entrepreneurs who aren’t in the tech and AI space; it’s not their core competency.
I wonder, when you’re constructing this book, are you thinking about each of those groups separately and hoping to get different reactions from each of them out of it?
HOFFMAN: There are two primary groups in broad categories that I’m writing to. The first and foremost are the people who are skeptical about AI, uncertain about AI, fearful of AI, and I’m trying to say, this is why you should add AI curiosity and AI hopefulness. It doesn’t mean that I’m going to persuade all the skeptics or the uncertain and fearful in one book.
But to add that in and to begin to see that the only way that you can get to the kind of future that you want to get to is when you steer towards a positive future. You can’t get to the future you want by just trying to eliminate the future you don’t want.
You can eliminate a future you don’t want, but you eliminate a lot of other futures too, including a lot of good ones. So that’s the thing about getting to that good future.
Now, that’s my primary group. Secondarily, it’s actually also for technologists, companies that are building these new technologies, to say, look, what are the concerns that people have?
The concerns around jobs, the concerns around misinformation, the concerns around privacy, they all kind of come back to concerns around agency. And so if you then become a technological builder, developer, iterator, etc., with a focus on how you enhance human agency, that is a design lens that I think is actually, in fact, really important. And that’s the reason why that’s the second group.
AI’s impact on jobs
SAFIAN: And this design lens for technologists, it’s not necessarily that like, “Oh, I shouldn’t design something because it might replace someone’s job,” but to be mindful that if there’s a way to design it so that it augments a job instead of replacing a job, to make that choice.
HOFFMAN: Well, or to always be thinking about how would I do that? Could I do that here? It doesn’t mean don’t do replacement because, for example, there’s a lot of jobs where we have human beings trying to act like robots, like customer service, following a script, and robots will do that better.
We want to create new jobs that are essentially human jobs, where you might have a little bit more agency, a little bit more creativity, a little bit more ability to express yourself, etc., versus just following the script, which is the kind of thing we want to create a lot more of.
Steering AI through this transitional moment
SAFIAN: AI acting on its own seems to be what scares people the most about it. But I’ve thought that the likelihood that I’m going to lose my job to an AI alone may happen at some point, but I’m more likely now to lose my job to someone who uses AI better than I do, right? Although if I’m losing my job, maybe it doesn’t matter that much either way, which one I’m losing it to.
HOFFMAN: Yeah, so, part of the thing that I love about thinking about technology is whenever you think there’s a problem, including a problem created by technology, you think about can technology be a solution. So, yes, I do think that a lot of jobs will then start requiring the use of AI and AI agents as part of being professional. It’s a little bit like if you said, “Hey, I’m a professional today, and I don’t use a computer, or I don’t use a smartphone.” It’s like, no, not really good.
So there are technological requirements, which increase with new tool sets for doing jobs, and AI is definitely going to be one of those. That being said, when you go, “Oh, my God, am I going to be out of a job?” part of the solution (and this gets back to the book being for technologists and thinking about human agency) is, well, how do we help people have the agency to learn the new skills and say, “Hey, yes, my job is going to be taken over by a human using an AI.”
Well, how about that human be me? Or, okay, so this particular one doesn’t work, but how can the AI help me find a different job? In many ways, I think we will naturally get there, but I think, you know, just because we’ll naturally get there doesn’t mean we can’t get there better by being intentional in having design.
And it’s one of the reasons I identify myself as a bloomer in the book versus a zoomer, because I actually don’t think that everything will just be great with technology.
I think we actually have to steer it intentionally because when human beings encounter new general-purpose technologies as early as the printing press, all the rest of them, we mess up in various ways. We handle the transition of new technologies badly. And part of the reason why I’m doing this book, this podcast, things like this, is to try to say, let’s do this transition much better. It doesn’t mean we won’t have suffering in the transition. It’s like, well, I don’t want to be learning the new job. I don’t want to be learning the new tool. And it’s like, well, unfortunately, you’re going to have to, right?
But if you embrace it with some agency, we can possibly make that both less painful and more full of opportunities. We are entering the cognitive industrial revolution, and all you have to do is look at any simple book about the industrial revolution to recognize that transitions can be painful.
Let’s do these ones better.
SAFIAN: Reid may not be a zoomer when it comes to AI, but he’s always been a techno-optimist, believing that tech is an essential tool in creating a better future. So how do we handle the risks and trouble spots with today’s AI?
[AD BREAK]
This is Rapid Response, and I’m Bob Safian. Before the break, we heard Reid Hoffman talk about what he means by “Superagency,” the title of a new book he just released. Now he shares his vision of what an AI-infused workday will soon look like, how we can address the risks and trouble spots with the latest AI developments. Let’s jump back in.
Is the concern around emerging tech new?
SAFIAN: Some of the technologies in recent years that we’ve all gotten really excited about, like social media and smartphones, we sort of underestimated the societal impact, right? The wow factor gave way to these consequences that we didn’t foresee, like filter bubbles or too many hours of screen time. How much do you think this sort of alarm about AI is because of that recent history?
HOFFMAN: I think we always hit alarm when we hit new technology. So I don’t think it’s just because of the concerns around social networks, information flows, and children, which are very legitimate concerns and issues that need to be addressed.
But also we would have hit it anyway, even if that history weren’t there. I mean, we had all these concerns. You and I are old enough to remember all the descriptions of the internet as a kind of terrible place, a cyberspace that was very dangerous, where who would ever want to upload their credit card and buy something, since it was probably full of criminals, frauds, and dangerous people?
Well, and there still is a bunch of bad information on the internet, but there’s also a bunch of really good information.
I mean, the kinds of things you can find with Wikipedia in broad cases, listening to random podcasts, such as these two guys talking to each other, there’s all of this stuff that can be quite good.
SAFIAN: In some ways, when new technology comes in, the best parts of it, we get used to so quickly, like the world never existed without this before. Can you imagine if I needed to have a paper map to get around? I wouldn’t even know how to move myself through the world.
And then those trouble things pop up, and I guess that’s why you want to keep iterating and improving because those trouble spots will always pop up.
HOFFMAN: Look, the trouble spots will pop up whether or not we have new technology. And as you know, one of the chapters in the book is on innovation and safety. People always think innovation is just change and risk. But actually, in fact, you create the car, and part of the reason we now have cars that you can drive at 50, 60, 70 miles an hour on the highway is because we innovated on safety: brakes, car construction, seatbelts. And which things to iterate on and do, you only really discover by going down the road. You can’t sit there before you create the car and invent all the things. There’s no way to do that. And so that’s why iterative deployment, engagement, and getting millions of people to engage is so important to how we create these things in ways that help us as a society and as humanity.
Reid Hoffman predicts the near future of AI
SAFIAN: So do you have a vision of what like super agency mode would look like in a so-called average day for a U.S. professional on the job? Like, do you have a vision about what that day looks like? What happens?
HOFFMAN: Well, I’m super curious, and I’m quite certain that some of the things I think are right and some of the things I think are wrong. Look, I think that what will happen, in a relatively short order of years, is that we will be using co-pilots to help us.
Say, Bob, you and I were meeting in person. Today it would be odd if I put the phone on the counter, turned on the agent, and just had the agent taking notes, suggesting certain things in the conversation, and so forth.
I think that will become typical. It may not become typical when, hey, what we’re doing is just talking about the vacations each of us had, or something else like that. But whenever we’re working, or wherever we’re having a conversation with some kind of goal in earnest, I think we’ll do that.
Navigating the bad actors who leverage AI
SAFIAN: I’m curious if there are areas that you are worried about with AI, like what are the ones that are sharpest? I’m thinking about something I read recently. I think it was in Axios about the national security implications, whether it’s military or information security, or whether it’s code breaking. Are those areas that you get more anxious about?
HOFFMAN: I’m less worried about the AI by itself. There’s obviously a bunch of Hollywood things like Terminator robots and so forth.
But since AI gives you superpowers, it also gives rogue states superpowers, terrorists superpowers, criminals superpowers. And, by the way, all of them have an incentive to adopt early, to experiment with it, and so forth. So I’m worried about how we navigate that.
And it’s one of the reasons why I help advise a whole bunch of governments and kind of help set up safety and alignment conferences, talks between the various providers and builders of artificial intelligence. And that’s another area of worry, I think.
While I think there’s this doomer category of existential risk, “oh, someone’s going to build terminators, either autonomous or with humans, and present an existential risk to humanity,” I think that AI is much more naturally going to reduce the number of humanity’s existential risks, like pandemics, asteroids, and other things.
But even so, I think it’s important to say, well, hmm, what should be our international global treaties on the use of autonomous weapons, on kind of building killer robots, and what other things should we do there in order to have the next century be the best human century?
SAFIAN: And I guess you want technologists at the table when those discussions are happening because they understand better than anyone what’s practical and what might unfold. But at the same time, you don’t want just technologists at that table.
HOFFMAN: Oh, for sure not. Part of what we covered in “Superagency” is this notion that, look, part of how technology develops is not just an individual tech company with some potential regulatory body. Part of what makes tech companies do this well is that they have a growing network of customers.
They have a network of investors, they have a network of employees and their families. There’s a whole stack of these interlocking networks that create what we call the consent of the governed. And I think that it’s important to have that kind of broad participation, hence iterative deployment. And I myself, while my undergraduate degree was in artificial intelligence, my master’s degree is in philosophy. And I think, if anything, AI brings out the importance of humanities degrees, when you think about how it should be designed, how we should interact with it, and what should be the kinds of things focused on our agency to increase our super agency.
These are the kinds of considerations that a humanities education can be very helpful for.
How the Trump administration will impact AI
SAFIAN: It’s a little early, but with the new administration in Washington, is there any sense yet about how they’ll look at AI or whether their regulatory environment will be any different?
HOFFMAN: It’s no news to our listeners that I was putting a lot of energy into electing Harris versus Trump, but I think there’s an earnestness around the fact that AI really matters, and there are people specifically appointed from the very beginning of the administration in order to navigate it.
One of the phrases that I’ve been starting to use a lot more is American intelligence. How do we make that American intelligence? And so that’s building on the CHIPS Act but then also in the new administration kind of making a massive shift of regulation for provisioning new energy, provisioning nuclear energy, provisioning data centers.
So I think all of that has a lot of good potential, and so part of what I’m trying to do is help the country build on that potential and build things that really help American citizens as a group.
Advice for business leaders during this time
SAFIAN: I had a guest on Rapid Response recently talking about the impulse by some businesses and brands to sort of go quiet early in an administration, or to be paralyzed to some extent by the uncertain ground rules, guardrails, and repercussions. Do you have advice about how business leaders should act around this inflection point we’re in right now?
HOFFMAN: The short answer is it’s high volatility, and all you have to do is look around the world. It’s not just the various conflicts. But obviously, we have a lot of turmoil in the country.
The margin for Trump over Harris was tiny, less than 2 million people. You’ve got a divided country in various ways. What that means for business leaders is to expect more volatility, expect more crisis, and expect more uncertainty. So the advice that I give folks is that there may be great opportunities here. A bunch of smart deregulation may open up opportunities like nuclear energy or other things, but the questions around the uncertainty also create a lot of volatility.
So you should be kind of protective of that. And that’s the advice that I’m giving folks.
SAFIAN: Well, Reid, this has been great. As always, I love hearing the way you’re thinking about it and the things you’re doing. And thanks for taking the time. I look forward to doing it again.
HOFFMAN: Yes, me too. And Bob, always a pleasure.
SAFIAN: Whenever I talk to Reid about AI, I come away excited and also a little chastened. In part, it’s his acknowledgment of the risks, which I tend not to want to think about. But also his encouragement to engage personally with AI in our work life — I’m reminded that I could experiment more, should experiment more. I’m definitely talking about AI, but I could put more into action. Reid’s observation about volatility, from tech changes, from political changes — it hits home. We all need to be as adaptive as possible, and new AI tools can only help. I’m Bob Safian. Thanks for listening.