Amid global conflict, domestic unrest, and AI’s surging impact in all corners of business, it’s getting harder than ever to separate noise from substance. To help us navigate this challenge, Reid Hoffman returns to Rapid Response, sharing valuable insights about Trump’s public spat with Elon Musk, the crisis in the Middle East, and how his new AI healthcare startup functions in the age of RFK Jr. Plus, Hoffman assesses Meta and Apple’s recent strategies to compete with OpenAI, and whether AI is realistically poised to spark a “white collar bloodbath.”

Table of Contents:
- Global chaos and economic stability
- Trump's erratic policy on economics
- The shifting landscape of US government engagement and influence
- Navigating disruption and opportunity in healthcare
- AI's impact on jobs and the "white collar bloodbath"
- Business strategies for the AI-driven era
- Zuckerberg's race for superintelligence
- Building trust in AI
Transcript:
Trump vs Musk, superintelligence, and the next wave of AI
Reid Hoffman: Most American consumers don’t realize how valuable the global market has been for them. I spent a number of weeks in Europe traveling around, doing conferences, and a number of people talked to me saying, “Look, we really liked having the US as a stable trade partner, but maybe China’s a more stable trade partner for us now.” I think that’s fundamentally very bad for the US economy.
Bob Safian: That’s Reid Hoffman, co-founder at LinkedIn, partner at Greylock, and founding host of Masters of Scale. Today, Reid and I dig in about what’s noise and what matters most in 2025’s fast-changing environment from Trump and Musk’s public spat to the conflict between Israel and Iran to whether AI will really lead to a “white-collar bloodbath,” and what Mark Zuckerberg means when he says Meta is pursuing “superintelligence.” There’s much to get to, so let’s jump in. I’m Bob Safian, and this is Rapid Response.
[THEME MUSIC]
I’m Bob Safian. I’m here with Reid Hoffman, co-founder of LinkedIn, partner at Greylock, founding host of Masters of Scale, and co-host of the Possible podcast. Reid, great as always to chat with you.
HOFFMAN: Bob, it’s great to see you.
Global chaos and economic stability
SAFIAN: There is so much going on in the economy and in tech right now. It’s hard to know where to focus. How would you describe this moment with so much change on so many fronts? Do you compartmentalize topics? Do you say in this bucket is what’s going on in AI, and over here is what’s going on with US trade, or is it all interconnected for you?
HOFFMAN: Well, broadly, it’s about focus, about being able to not get lost in the entire soup of chaos. I organize it into topics, and the connectivity is in key major themes. So obviously AI plays across everything: what does it mean for work, what does it mean for geopolitics, what does it mean for invention and science, what does it mean for medicine? But also, even when you see, let’s send in the National Guard and the Marines to LA, that doesn’t necessarily generate the question of AI, but it does generate the sense that we live in chaotic times. There are a couple of questions that go across all of them.
SAFIAN: And when something happens like Israel starts bombing Iran, is that its own bucket, or does it get you to start rethinking other things that are around it also?
HOFFMAN: It’s mostly its own bucket, but it does feed into the general sense that the world is descending into chaos. We’ve self-ended the post-World War II American world order, and I think there will be thousands of lives and lots of suffering that come from that, because I think the post-World War II American world order has been one of the most golden times in human history, and the fact that we have deliberately destroyed it will have a bunch of different costs. I think those costs include increasing conflict and chaos everywhere in the world. So that has its macro theme, which the Israeli bombing of the Iranians is in.
Now, I also think, of course, the Iranian nuclear program is a problem. In Trump one, I actually thought his ending the treaty deal was a mistake. But the Iranians are a force for global terrorism and global conflict, so the fact that something is being done to try to contain their nuclear ambitions is not in itself a bad thing. It’s probably unfortunate that the way that’s happening is bombing. I would’ve preferred the treaty; negotiation and words are always better than physical violence.
SAFIAN: Yeah. I have to ask you about the public split between Elon Musk and Donald Trump. Maybe it’s a truce now, it’s hard to say. Regardless, Silicon Valley must be buzzing. I’m curious what the reactions are. Are there any lessons that leaders are drawing from it in dealing with the president, in dealing with Elon?
HOFFMAN: Well, I think there’s a whole stack of things. One is that a bunch of Silicon Valley leaders deluded themselves that Trump was the sensible businessman and the Dems were anti-business, that it was that straightforward. They went, look, part of the reason entrepreneurial people like Elon were getting involved is because they felt that, for example, one of the biggest expense lines in the federal government is just the interest payments on the debt. It’s like we as a country have, over decades, borrowed money on credit cards to finance our operations. And as anyone who borrows money on credit cards knows, that puts a huge dent in your operating income and what you can do. That’s what we’ve been doing as a country.
And they said, “See, they’re really sensible. Elon’s going to go in and cut a bunch of costs, and we’re going to have a budget that is not going to be as much of a deficit.” And I think part of what Elon is learning from the Big Beautiful Bill that’s being proposed is that we’re actually not being business-sensible here. And so Elon sparked the way he normally sparks, which is to make a whole bunch of, frankly, massively inflammatory tweets. That caused a general reaction on both sides, and the real issue, that this Big Beautiful Bill is not pro-future-of-American-business because it’s going to increase our debt line a whole lot, got drowned in various power politicking.
Instead it became Elon saying, “The president’s only going to be here for three and a half years. I’m going to be here 40 years. You should listen to me, not him,” et cetera, et cetera. I think the broad lesson everyone’s drawing from it is that this is not, call it, business-rational: not sensible on governance, not sensible on managing the debt, not sensible on how we actually, in fact, help the country get rid of its debt and get back to good business.
SAFIAN: In terms of business-sensible, do people look at the time and effort that Musk has put into politics and spending his time in Washington? Was that time well spent in a business-sensible way?
HOFFMAN: I think most people think it’s probably been a pretty bad net negative. What are the actual things that DOGE has really accomplished? It started with these big claims about shaving off fraudulent expenses to help right-size the deficit, and there were a lot of claims about reducing fraud, and literally zero substantiation that the reductions were particularly fraud-oriented or anything else. So I think that’s not been successful.
And then obviously Elon’s engagement with this has caused… I mean, I see, call it, three times as many Teslas with anti-Elon bumper stickers in the neighborhoods I drive around in as Teslas without them. Most purchasers of Teslas believe in such things as climate change and think that we have to be rational on these topics. And the fact that Elon is staying silent on an administration that claims climate change is a complete fiction… not only has DOGE been a big problem, but I think the Tesla customer base is in clear revolt on this stuff. So I think all of those things have been real challenges.
Trump’s erratic policy on economics
SAFIAN: Trump’s economic policy, certainly the tariff policy has been wildly erratic. The deals he’s talked about remain incomplete. Business rarely thrives in that sort of uncertainty, but so far, at least judging from the stock market, things seem to have stabilized. Has Trump already fundamentally changed the US economy, or so far has it been mostly noise?
HOFFMAN: So I think it’s been fundamentally changed, but the impact hasn’t been fully felt. I think most American businesses, workers, and consumers don’t realize how valuable the global market has been for them, and that includes some of our closest allies. Take Canada, for example: tourism from Canada is way down, and purchases of American products in Canada are way down. It’s entirely based on pronouncements and actions from the White House that have led to this happening. And that’s just a microcosm of things that will have a bad impact on American society, American industry, American companies, American workers, and American consumers. I just think it hasn’t really fully hit the numbers in the market yet.
I spent a number of weeks in Europe earlier this year, traveling around and doing conferences, and a number of people talked to me saying, “Look, we’ve really liked having the US as a stable trade partner, but maybe China’s a more stable trade partner for us now.” I think that’s fundamentally very bad for the US economy.
The shifting landscape of US government engagement and influence
SAFIAN: Who are you talking to these days when it comes to understanding government policy? I mean, you had a lot of access during the Biden administration. Where do you get your information today? Where can you have an impact?
HOFFMAN: My ethos has always been, for decades: any Western democracy leader who has been duly elected, I try to help as much as possible. I have zero connectivity with the current administration. The US is not really looking for input, not just mine but that of a lot of people like me, on how to win in industry with AI, what it should mean for policy, and other kinds of things. More of my advice has been going to the other Western-style democracies. I continue to be willing to help any duly elected government official trying to solve a problem, because it’s part of being pro-humanity, pro-society, pro-Western-society evolution.
SAFIAN: You launched a new AI company in January, Manas AI, focused on drug development. The US healthcare system has often resisted new approaches to improving health outcomes. I mean, lots of companies have become roadkill on that route. How much do the challenges in healthcare appeal to you, seem like they’re an opportunity versus sort of make you cautious?
HOFFMAN: It’s both. You’re not surprised by my answer, Bob. One of the things I think we tend to do badly with the regulatory state and everything else is that, by trying to make things very reliable, we enshrine the past against the future. There are all kinds of ways in which the regulatory system within healthcare makes the future slower, much more expensive, much less willing to take risks in order to accomplish things that are really incredible. And that’s my general play: what are the smart risks you can take that have society-transforming, industry-transforming, human-health-transforming, positive-for-humanity outcomes?
And the healthcare system is so challenging and difficult across all these different fronts. It’s in innovation, it’s in payment systems, it’s in liability, it’s in all of this stuff. So all of that makes me cautious, because you’re in a circumstance where all of these barriers can make it seem impossible to win, impossible to succeed.
On the other hand, obviously it’s extremely important, and if you can figure out ways to navigate the challenges that other people don’t, you can build stuff that’s extraordinary. That’s part of the reason why, when I was talking to Siddhartha Mukherjee about how AI can accelerate drug discovery, he said, “Look, cancer is the exact right thing.” He is obviously the celebrated author, researcher, et cetera, on this, and he’s brought drugs to market successfully, approved by the FDA. He understands the whole ecosystem. And he said, “Okay, let’s partner together and do this.”
And what we anticipate very strongly is that the acceleration we get from AI in doing this kind of drug discovery is going to be so massive that it gives us enough energy and enough acceleration that navigating all of the different challenges in the healthcare system, regulation, payments, et cetera, will be doable, because the benefits will be so huge.
Navigating disruption and opportunity in healthcare
SAFIAN: The tenure of RFK Jr., does that add to the challenges? Is he more open to the opportunities that you’re pursuing, or can you just not even think about those things?
HOFFMAN: I’m always hopeful when someone comes in as a disruptor that the disruption will be positive for society, positive for humanity, positive for industry. On the other hand, everything I’ve seen unfortunately continues to echo the worries I articulated during the confirmation process, which is that the cost of confirming RFK Jr. may be measured in thousands of American lives lost. That kind of, call it, non-intelligence around vaccines and other kinds of things seems to be the top focus of his work so far, versus how do we bring more technology to bear to benefit more Americans, which is obviously what I would prefer new disruptors to focus on and think is the right thing to do.
SAFIAN: Whether it’s RFK Jr. or Elon Musk, the move fast and break things ethos that underpins much of Silicon Valley’s success hasn’t translated smoothly to US federal policy in 2025. How can we better navigate the line between good disruption and bad disruption, especially when it comes to the realm of AI? We’ll talk about that after the break. Stay with us.
[AD BREAK]
Before the break, Reid Hoffman shared his insights on Trump, Musk, RFK, and the US economy. Now, Reid digs in on critical AI developments from claims that AI will spark a white collar bloodbath to Meta and Apple’s efforts to catch up with OpenAI and Microsoft. Let’s dive back in.
AI’s impact on jobs and the “white collar bloodbath”
SAFIAN: I saw that Anthropic CEO Dario Amodei told Axios recently that there was a coming “white collar bloodbath,” that people are dramatically underestimating AI’s impact on jobs. Do you agree with that? Do we know what the impact will be?
HOFFMAN: Well, frankly, no one knows. Anyone who claims to know exactly what the impact will be is either self-deluding or deluding others. Dario is right that over, call it, a decade or three, there will be a massive set of job transformations. And some of that transformation will be replacement. For example, when computer programs like Excel started happening, the job of writing in the accounting ledger went away. But the accountant job didn’t go away. Everyone was predicting that the accountant job would go away, and in fact it got broader, richer, et cetera: scenario planning, financial analysis.
Just because a function is coming in that replaces a certain set of tasks doesn’t mean the whole job is going to get replaced. It does mean there’s going to be a lot of transformation, and some specific things that used to be described in job descriptions will be completely gone. But that doesn’t necessarily mean that a function is going to go way down. So I think there is a massive tsunami of transformation coming.
And I called Dario to talk to him about the bloodbath thing, because I deeply respect his point of view on lots and lots of topics, and we see the world differently here. Bloodbath implies everything going away, versus where I tend to think, having published Superagency, that we at least have many years, if not a long time, of person-plus-AI doing things.
Could I just replace, for example, my accountants with GPT-4? The answer is absolutely not. That would be a disastrous mistake. And it’s just simply like saying, also, let’s replace my marketing department or my sales department with GPT-4, absolutely not.
Now, should each of these departments be using GPT-4 and Copilot and Claude and everything else in order to operate? Yes, they should be starting to experiment with that. There’s various ways in each of these departments you can be amplified with this work, but that’s nowhere close to a bloodbath. The bloodbath is a very good way to grab internet headlines, media headlines.
But, one, it’s not soon. And two, even on the impact in terms of what’s happening, I think transformation is the better word. That does involve some replacement. We do [inaudible 00:19:49] revolutions, but it tends to be when it’s a large percentage of jobs in a very short timeframe.
What jobs are most likely to be replaced? The ones where we’re trying to program human beings to act like robots, like a customer service script, in terms of the way you’re operating; that’s the most natural replacement. But even then, it’s unclear that that will be a bloodbath, because of the question of what company adoption looks like and how it’s refined, even with massive transformation happening in a relatively short timeframe.
So yes, I think people are underestimating AI’s impact on jobs, but I think inducing panic as a response serves media-headline purposes, not actually intelligent industry, economic, and career-path planning.
SAFIAN: We continue to see new advances in AI and new services and new uses, but there are also indications of what some people call AI fatigue. There’s a study from S&P that says that some companies are scrapping AI initiatives at an accelerated pace. Why is it so hard for some companies to figure out how to get positive returns from AI at this stage?
HOFFMAN: It’s partially because of what it requires. This is what can happen with AI generally, but what it requires is retooling how the job works, retooling how the teamwork works, retooling what the expectations and the system are. Because the thing companies most naturally want to do, a little bit like Dario’s white collar bloodbath, is to say, “Okay, I unplug Bob or Reid from job X and I plug in an AI agent to do it,” and then it all just continues to work that way.
And by the way, that’s not what the end workflow is going to look like, period. Even when you’ve gotten these agents to quasi-autonomy and they’re able to function within a workflow process, work is not going to stay organized the same way, and keeping the same organization is the most natural thing for companies to do.
So then they go, okay, well, we don’t really know what to do here. And what that gets back to is individuals going, well, actually, in fact, I have a good sense. It’s like being an early adopter of the computer: I have a good sense of how to use this to do the things I’m doing, I’m moderately changing my job, I’m using it to amplify myself, et cetera, et cetera. So it gets back to the individuals doing this versus the AI initiative.
Business strategies for the AI-driven era
SAFIAN: And so the workers will themselves, in some ways, define what the jobs of the future and the workflow look like through experimenting.
HOFFMAN: Yeah. Well, certainly in the next few, to use the poker analogy, turns of the card. That’s certainly what’s going to be happening. And I’ve seen a whole bunch of that all over the place, including in my own work and in our work.
This is one of the things about industrial-age companies versus AI-age companies: what is the way that strategy and workflows are going to need to work? What industrial-age companies do, and what is still generally taught at most business schools and all the rest, is industrial management. And by the way, this was part of one of the miracles of modern management, which US business schools led and other business schools also participated in: getting to professional management. Industrial management was, how do we build this industrial-efficiency process of evaluating risk, understanding customer engagement, understanding workflow development, understanding efficiencies of supply chain and work and all the rest? It created a massive amount of productivity, and that was a really, really good thing. But that was industrial-age management.
And what happens in AI management, as you begin to get AI agents? What would be the dynamic process of how companies should work? Because this industrialized process is almost certainly incorrect. Within a small number of years, you won’t have, as it were, any fully individual knowledge workers or information workers. Any so-called individual contributor will be deploying multiple AI agents on their team, and they will be managing those AI agents generating stuff. So the notion of managing work becomes much more part of the individual-contributor role. That’s a clear lens on how the world of work will be evolving.
Now, you get back to what companies should be doing now. Well, the answer is the whole thing is going to be much more dynamic, experimental, and evolutionary. To shift to a military metaphor, it’s going to look more like commando teams, Delta Force teams, and less like supply-chain logistics teams. So you need to be building in that kind of experimental, learning, evolving approach, even when the initial experiments aren’t really working for you because you haven’t figured out how to fit them into your industrialized process. You want to keep with that experimental process, because the future will be much more dynamic and fluid in terms of how work works.
And so I think it’s a mistake to be scrapping. It may very well be, well, that one didn’t work, reset, just try it again completely. That’s totally fine. But saying, “Oh, we tried that AI thing. It didn’t work for us.” Like, oh boy, that’s not going to work out.
Zuckerberg’s race for superintelligence
SAFIAN: Mark Zuckerberg has made headlines in recent days pushing for what he calls AI superintelligence, even as many folks have said that artificial general intelligence, AGI, is still far off. Are Mark’s plans for real?
HOFFMAN: Well, Mark’s plans are for real. Mark is a very good strategist and thinker. By having focused on all of the AR and Oculus and glasses, Meta didn’t focus as much on AI, and part of what he’s trying to do is get them to intensely focus on it. Because while they’ve released some good open-source models and so forth, there’s a reason why, when people talk about AI making a difference, they talk about OpenAI, they talk about Copilot, they talk about Anthropic’s Claude, and they talk about Google Gemini, and then after that they start talking about things like DeepSeek from China. Meta hasn’t even gotten into that list, and neither has Amazon. And even though Apple tries to market itself with Apple Intelligence, we have yet to see the intelligent part of Apple Intelligence. I’m sure it’s coming, but it’s still TBD. And so Mark, being metastrategic, is diving into this with intensity.
Here’s the complexity and nuance of it: look, we already have superintelligence in some ways. GPT-4 is already superintelligent in ways that outperform human beings on breadth and synthesis of tasks. If you use deep research, the speed at which you can generate a deep research report is, frankly, superhuman. That doesn’t mean there aren’t errors, and it doesn’t mean it should be off doing it by itself, but that superhuman capability exists.
Now, it doesn’t make me anxious, because I actually think the natural outcome will be very positive for humanity, for society, for industry. That doesn’t mean there won’t be super difficult transitions and transformations in jobs and so forth, but the highest probability is that it will be good.
Building trust in AI
Now, that being said, can it be bad? The answer is yes: it could be bad through its own mistakes, it could be bad at the hands of human beings, it could be bad through carelessness and a set of other things. So trying to increase the probabilities of great outcomes for society, industries, humanity, et cetera, is great, and trying to decrease the bad ones is great. That, we should be doing.
Now, Mark announcing superintelligence as a goal, i.e., please include me on the map with everyone else, I think that’s a good thing to happen. There’s a lot of work between here and there to make it happen.
SAFIAN: Some of the anxiety about AI I feel like is there’s this term, new to me, interpretability. It’s sort of a euphemism for the fact that we don’t really know how generative AI systems arrive at their outputs. Is that something that needs to be solved for people to trust AI in a different way? Or are you not particularly worried that interpretability remains a mystery?
HOFFMAN: It’s critical that it’s solved for certain kinds of tasks. Should we put AI in charge of our nuclear defense grid? Absolutely not, no way, bad idea. But interpretability is a scale, and there’s a bunch of stuff where the current level of interpretability is totally fine for how we operate. We were earlier talking about customer service and you said, “Well, we don’t have really good interpretability. Do we mind putting autonomous agents out there at the front end of customer service calls?” The answer is no, because look, how bad could it ultimately be?
SAFIAN: But you wouldn’t necessarily make that your air traffic controller at this point?
HOFFMAN: No, exactly. Now, that being said, we do want to solve this problem as much as we can. So there are efforts within the major AI labs, OpenAI and Anthropic and everything else, trying to increase interpretability, trying to have better governance of these systems, better absolute-reliability systems. And here’s a prediction: I think over the next 2 to 10 years, we’ll also be building new technological systems, integrated with the current LLMs and generative AI, that will massively increase the reliability of these things, possibly through some interpretability blended into the systems. Essentially, the reliability goes way up even if the interpretability only goes somewhat up. And I think that is something we are intelligently working on.
SAFIAN: Well, as always, Reid, I appreciate your candor in taking my questions on all these wide-ranging things. I really do appreciate it.
HOFFMAN: Well, likewise. And Bob, it’s always fun to talk with you. It may be almost like my Vulcan sign-off, my “live long and prosper”: I look forward to the next conversation too.
SAFIAN: Reid’s grasp of AI and his penchant for nuance always leaves me feeling smarter. One of the key things I take away is that amid the frenzy around AI and the accelerated pace of change, it’s worthwhile to remind ourselves that it’s a marathon and not a sprint. While some tools may seem like plug-and-play solutions, we need to be strategic in how we incorporate AI into our workflow today and how we think about that workflow evolving in the future. We don’t know what tomorrow will bring in tech, in the marketplace, in politics and geopolitics. What we do know, it’ll be the choices of human leaders and each of us individually that will determine how AI and other disruptions define our world. I’m Bob Safian. Thanks for listening.