Your business can’t wait until AI is perfect
Table of Contents:
- Inside Bret’s new AI start-up, Sierra
- A contrarian take on AI hallucinations
- How AI will transform careers
- Why businesses need to get AI ‘at bats’ now
- Why Bret joined the OpenAI board
- On the importance of AI safety
- Scale lessons from Bret’s career across start-ups and tech giants
- The ‘vibe shift’ of how the public sees Big Tech
- How to build a healthy start-up ecosystem
- How AI companies can build trust with users
- Lightning round questions
Transcript:
Your business can’t wait until AI is perfect
BRET TAYLOR: When Google moved from its office in Mountain View to its first campus, it was the Silicon Graphics campus. So, Silicon Graphics was once a very great company that was big enough to have a campus, but had done so poorly that at some point they were selling their campus to another company.
REID HOFFMAN: Bret Taylor. He was there at this time, when Google was ascending to the pantheon of big tech companies. In fact, he was integral to the creation of Google Maps. It was the first of many world-changing companies he’s been a part of. Later, he’d be named Chief Technology Officer at Facebook. And work in another office building with a storied Silicon Valley past.
TAYLOR: When Facebook went from having a building in Palo Alto to having a campus, it was Sun Microsystems' old campus. So, I had this experience so early on in my career of being at these newfangled companies that basically took over the carcass of a once great technology company that at one point was at the top of the stock market and big enough to have these huge campuses. And I just realized that the half-life of big technology companies is not as long as we often think.
HOFFMAN: I’m Reid Hoffman, your host. Bret Taylor is chair of the board at OpenAI, the company responsible for ChatGPT. He is also the founder of a new AI-driven start-up, Sierra. And just to note, I’m an investor in Sierra, and I consider Bret a longtime colleague and friend. I was eager to talk to him about his views on AI and the decades of scale lessons he’s accumulated.
Inside Bret’s new AI start-up, Sierra
HOFFMAN: Bret, well, welcome to Masters of Scale. I’ve been wanting this for a number of years and finally we’ve made this happen. So, I’m very much looking forward to this.
TAYLOR: Thank you. I am too.
HOFFMAN: You have a new AI start-up. Tell me a little bit about Sierra and what you’re doing.
TAYLOR: So Sierra is a conversational AI platform, which is a very big concept, but we have a very specific focus, which is helping consumer brands around the world build branded conversational AI agents.
I have a 14-year-old daughter, and she’s getting her learner’s permit next year, which is insane. And I have a sense that if it were a few years from now, rather than tapping around my insurance company’s website, I’d be chatting with an AI agent to add her to our policy.
Every company will want their own agent and they’ll spend as much care and attention on their AI agent as they do building their website, as they do building their mobile app. This is going to be one of the main digital artifacts as a business that you make.
And I think the concept of an agent isn’t just about AI and automation. I actually think it’s a new type of customer experience. My intuition is that with the advent of large language models, and ChatGPT being that inflection point with large language models, talking to software is perhaps the most ergonomic way we can interact with software. Because you don’t need an instruction manual to have a conversation.
And this isn’t just about automation AI, that’s a big part of it. It’s almost like we’re building your digital experience as a brand. And I think that’s a really exciting business opportunity.
A contrarian take on AI hallucinations
HOFFMAN: So let’s broaden out to AI. How do you see AI transforming the society around business? Like businesses’ engagement with consumers, consumers’ engagement with businesses, businesses operating with businesses, and then the broader social patterns.
TAYLOR: I’ve found it increasingly hard to predict the future since the advent of ChatGPT and the explosion of these foundation and frontier models in software. So I’ll tell you what I think I know, but I’ve really tried to have a lot of humility and have very much an open mind to being wrong about many of my theories. And, one thing is I think it will change our relationship with software. I remember when my family members first encountered ChatGPT, and they first encountered instances of hallucination.
HOFFMAN: A hallucination is what we call it when you enter a prompt into a large language model and it generates an answer that contains false information.
TAYLOR: And I remember how hard it was for my family members to comprehend why. Because we have been trained for the past 30 years to think of computers as these extremely powerful rules engines.
A correctly running computer could look up facts. When you search something in Google, all the results are valid websites that you can click on and visit. And now all of a sudden, with ChatGPT, occasionally a computer would make something up. And it violated all the preconceived notions we had about computers.
And then similarly, when I was recommending to my family members that one of the main uses of ChatGPT was actually a creative foil, it also violated their mental conception of what computers were good at. And I think at the end of the day, large language models are a new category of software that I think society is still wrapping its head around.
HOFFMAN: Thinking about this term hallucination or making something up, how does that impact how we think about trust in these models and tools?
TAYLOR: I think the amazing positive impacts of these generative models and their flaws are very intertwined. What affords their ability to be creative and responsive and aligned with the person using them is also the basis for hallucination. If you want a contrarian take on this, I think it will help with the disinformation problem a bit because I think people don’t assume everything coming out of these agents is 100 percent accurate, and they verify it.
And my hope is the combination of these AI tools as sort of the Iron Man suit to help you verify that information is correct, combined with actually having people double check the bits that they see online, might actually have a surprisingly positive effect on how we view information integrity and digital environments.
HOFFMAN: A techno-optimistic view, which is one I concur with.
TAYLOR: I also think that more and more software will actually complete a job. I think software to date has been a productivity enhancer. Now we’re going to have AI that actually returns the results of doing work. I think that will change our relationship with software. It will create a lot of questions in society about everything from job displacement to personifying or anthropomorphizing software in our workplace.
And I think this wave of software will, like the early waves of software, when we first brought the PC to many businesses, hopefully drive productivity into the workplace in ways that we haven’t seen for a very long time.
How AI will transform careers
HOFFMAN: What’s your advice to young people, your 14-year-old daughter, coming into this new world of careers and productivity given the coming tsunami of AI productivity?
TAYLOR: I don’t think my perspective has changed that much about how to think about education because I’ve always thought of education as something more than learning a skill. While it’s a bit trite, most university presidents would say the role of the university is to teach students how to think. And I think that is broadly true.
And similarly as a software engineer, the act of typing the code into your terminal – I guess I’m showing my age, typing into VS Code these days, I’m still a terminal guy myself – isn’t the job. The job is to create robust software that functions correctly. And you say, okay, we’re gonna have AI emitting a lot of code for software. What is the role of the software engineer? I would hope that it makes the role of software engineering more self-actualized. We might not be typing as many characters. We might be operating code generating machines.
I think the operator of a code generating machine is actually a more strategic role than typing if statements all day. And that’s one example, which is writing code, just because I think it’s one of the professions that has been most immediately impacted by AI. But I think that’s true of a lot of professions. Which is, I think it will make the better people even better, by giving them more leverage. It will probably increase the gap in the distribution between the best people and the worst people. The best people will now have an Iron Man suit of AI capabilities giving them kind of superhuman abilities.
HOFFMAN: How much does that broaden across all of the functions, whether it’s business operations, finance, legal, customer service, sales, etc. Give some broadening from the software engineer into the other job functions.
TAYLOR: I think a new graduate will probably assume that the skills of their craft and the tools of their craft will almost certainly change in their career. Just think about what it meant to be a marketer 20 years ago versus a marketer today. It’s almost a completely unrecognizable profession. It’s almost data science at this point compared to what it was.
And I think you should go into your career understanding that your tools don’t define what you do; your impact does. The folks who become ossified, letting the tools they use define their careers, will get left behind.
And I think employers on the other side of that will need to embrace reskilling, tool learning, things like that. Just imagine being the first accountant to ever use Excel and how intimidating that must have been. We’re all going to have those moments in our career, is my take. And I think having that learning mindset is the most important thing.
I’m a huge believer that when old jobs go away, new jobs get created. And it can feel really uncomfortable. I have a strong intuition that 40, 50 years from now, there’ll be a number of new jobs that might not even match our mental model of what a job is, but they’ll be jobs.
And I think it’s because humans want to work. I think it’s because humans want to differentiate themselves. If you go into a classroom, some kid wants to be the smartest, some kid wants to be the strongest, some kid wants to be the fastest, some kid wants to be the best looking.
We’re all competing for status in this world, and no amount of productivity in the economy will change that aspect of society or human nature. And as a consequence, we’ll create an economy around the technology that exists around us to enable us to be what we’ve always been, as flawed as humanity is.
And it’s on all of us to sort of help mitigate the disruption and downsides of that change. But I’m an optimist in the long term.
Why businesses need to get AI ‘at bats’ now
HOFFMAN: So then, as business owners or executives, what do you think the smart thing to do vis-à-vis this AI transformation is? What’s the way to think about everything from running your business well to dealing with the workforce?
TAYLOR: Let me start with just a really simple rule of thumb that I believe in, which is: get at bats. And what I mean by that is use this next generation of AI, use the foundation and frontier models that are now widely available, in your business as widely as you can. Not every one of those experiments will work but if you believe that this is a force that will change your business forever, you don’t want to start developing experience in the technology once you know it’s perfect. Because probably the reason you’ll know it’s perfect is because your competitor proved that it works. Just being direct about it.
And so I think right now, it’s really important for leaders to give their management teams permission, and in fact demand some experimentation here, recognizing that not all of those investments will bear fruit. But on the other side of it, the companies who get a lot of at bats, get a lot of experience, once the technology is truly mature will be meaningfully ahead of their competitors in applying this technology to drive real business outcomes.
HOFFMAN: If you were kind of just throwing out things to get people more concrete imagery about at bats, like what kind of things? Would it be stuff in sales? Stuff in internal analysis? What would you say are few things to generate “at bat” thinking?
TAYLOR: I’m a big believer in giving your employees access to ChatGPT. I think that right now it’s an incredibly useful tool. Your employees are probably already using it, by the way.
So tell your employees you should be using this as a tool in your day-to-day job. You should be using it to help refine an email. You should be using it for advice. You should be using it for analysis. So that’s just the baseline.
I think every employee should have access to AI. On the big internal jobs, one of the great things about large language models is synthesis and summary. Looking at data and summarizing large amounts of content is an incredible opportunity. If you have transcripts from a call center and want to summarize them, what a great job for AI.
There’s just so many, I would say, I hate this term, but low hanging fruit, in that area internally and back office operations. You don’t need to start with the core of your business. But if you’re not deploying an AI for every customer, if you’re not automating some back office processes, if you don’t have a branded AI agent facing your customers now, when? When are you going to start learning about these experiences?
HOFFMAN: Still ahead, I talk with Bret about his role on the OpenAI board, and scale lessons he’s learned from his time with tech giants like Google, Facebook and Salesforce.
Why Bret joined the OpenAI board
HOFFMAN: Welcome back to Masters of Scale. You can find this episode and more on the Masters of Scale YouTube channel.
You may remember that back in the fall of 2023, conflicts between OpenAI CEO Sam Altman and the board of directors there made news after the board ousted Sam. Soon after, Sam was reinstated, and some people left the board. This was the tense moment when Bret Taylor came on as the board’s new chair.
HOFFMAN: Despite your intense focus on building a company and being a super focused operator, you were persuaded to join the board to help with OpenAI. What was the thing that made you go ‘Okay, I have to step into this’.
TAYLOR: It’s a great question. The personal emotional reason is I think it’s important for OpenAI to exist. I remember when we had dinner, right after ChatGPT had come out and you were on the OpenAI board at the time. And you couldn’t really tell me all the things that were coming, but I was talking about the impact that both DALL-E and then ChatGPT had had on me.
And I could just see the excitement in your eyes. And I’m probably paraphrasing incorrectly, but you sort of said, I’m going to devote my career to this. Like, this is all I want to spend my time on. And I credit OpenAI almost exclusively for that feeling inside of me.
I remember the summer where DALL-E, I think maybe DALL-E or DALL-E 2, the moment with the avocado chair. I don’t know when that was, it was the sort of summer prior to ChatGPT. When I saw that I had this just, purely emotional reaction saying ‘I had no idea that computers could do this’.
And I had spent my career as a software engineer, obviously not in the frontier of AI research, which is why it surprised me so much. And then ChatGPT came out and it just impacted society broadly.
And then when the governance crisis happened, I got some phone calls from both the outgoing board and Sam, and there was some mutual trust that I could help mediate the outcome. And there weren’t a lot of people that those parties had mutual trust in. I felt like I was in a position to help this organization that had such a huge impact on me and so many people I knew, just to help it survive. Because I think there was a period in there when I was pretty concerned it might all fall apart.
HOFFMAN: OpenAI was created in a pretty interesting and unique way. It started as a 501(c)(3) – a not-for-profit – with a mission to benefit humanity broadly. Later, it also added a subsidiary that’s for-profit. And there aren’t many companies of any kind making more waves right now than OpenAI. When Bret joined as the board’s chair, he agreed to uphold what’s called a “duty of care” and a “duty of loyalty” – basically, his job is to steer it honestly through this society-shaping moment.
TAYLOR: And so the mission of OpenAI is to ensure that artificial general intelligence benefits all of humanity. And I joked when I was talking to my attorney, when I was considering joining the board, I’m like, so I’m a fiduciary to humanity? Is that what that means? And you know, he laughed and said, well, kind of, yes. But it really means that when you’re thinking about, what are my obligations as a member of the board here, it has to do with what does it mean to benefit humanity.
And the hard part about a mission is it can be sort of an inkblot test. If you talk to some people about what does it mean to benefit humanity, maybe they’ll fixate on the benefits. They’ll say, ‘how can we make sure that there’s no digital divide’?
How do we ensure that a person with the lowest powered smartphone in the middle of a country with very poor Internet access can access AI? You might talk to another person who’s really focused on safety and say, ‘I’ve seen Terminator. I know what Skynet is. How do you ensure that doesn’t happen’?
So, that’s one of the nuances of the mission, I think, capturing everyone’s most optimistic and most pessimistic takes about the impact of AI in society.
I consider it a privilege. I think everyone’s acting with the best intentions, just to be clear, and I think it comes from the fact that this mission is something that, like the company, the organization has interpreted in a certain way, but not everyone agrees about everything.
And that’s very, very complicated. Technology is neither good nor bad. It’s how it’s used. It’s how it’s applied. So I think the mission really defines what the organization does.
On the importance of AI safety
HOFFMAN: One of the specific things is OpenAI has a safety and security committee. What’s the specific role that you’re taking there? What’s the way the world in general should understand this as part of having good governance and navigating this world-changing technology?
TAYLOR: When you think about OpenAI and its mission, safety is just a hugely important part of it. This is a geopolitical issue, not simply a technology issue. So the committee is a combination of both members of the board and members of management, to provide, broadly, oversight and governance of safety decision-making around the models.
I view all of these things as evolving. I think the goal is to ensure that OpenAI meets its mission. So the creation of that committee was essentially a reflection of just how important that safety and security are individually to the mission. How that committee operates, it would disappoint me if it didn’t evolve as the models become more advanced. But broadly speaking, the OpenAI methodology of responsible iterative deployment, this committee is just obviously a really important part of that process.
Scale lessons from Bret’s career across start-ups and tech giants
HOFFMAN: One of the things that people who have followed your career know is that you’ve already done two successful start-ups, FriendFeed and Quip. Going into Sierra, what are the kind of key lessons from your earlier start-ups? With FriendFeed, which went to Meta, and Quip, which went to Salesforce, what were the start-up lessons that you would give the Masters of Scale audience?
TAYLOR: Both FriendFeed and Quip were new ideas. And I think for Sierra, we are building conversational AI for customer experiences, and it’s a very competitive market, but I’m not convincing people that, hey, this AI thing is kind of a big deal. Usually when I’m walking into the room, people have that worldview already, and they’re saying, why does your thing work? Why is it better than our competitors?
But to get to your question, I would say I’ve had the privilege of not only starting two companies, but working at some of the great companies, Salesforce, Google and Facebook. And I really do view my philosophy on management and technology as a composite of a lot of the pieces that I loved about each of those companies.
What I really liked about Google was its first principles thinking about developing infrastructure and scale. It was a very vertically integrated company, the way it built data centers all the way through the search engine. I think a lot of the great AI companies have very similar philosophies. You have to pay attention to your infrastructure and your cost to serve if you’re going to make a decent margin AI business. And I think for me, Google was one of the great examples of that in the early days of the internet and it would not have succeeded without that, by the way.
Facebook was by far the most interesting and innovative product culture. There was the now-maligned ‘move fast and break things’. But I actually think at the time, what that really meant was giving people permission to iterate and experiment. And I really loved the pace of Facebook, which really comes from Mark Zuckerberg. And then Salesforce, I think it’s the great enterprise software company. Both organizationally, but also in how that company executes its go to market, how it engages with the customers.
And it comes from, I think, Marc Benioff’s just completely unique and amazing ability to create excitement around this technology, how it does branding. That’s something that I was really inspired by.
I love to say that Sierra represents the best in class of all of those. It’s obviously more nuanced than that, but I just actually feel so grateful to have seen greatness so many times. And if I can mimic even a little bit of all of those things, I’ll consider it a great victory.
The ‘vibe shift’ of how the public sees Big Tech
HOFFMAN: You have the perspective of having worked at Google, Facebook, Salesforce, and then also being a start-up person. What would be the thing that you would say ‘Hey, people should think this way differently about these large tech companies’.
TAYLOR: I’ve lived through the transition from technology companies being largely beloved to a vibe shift in popular culture. I remember when I was on airplanes and I had my Facebook bag early on, the worst someone would say is, ‘You need to have a like button. I want a dislike button. You know, I don’t always like everything’. And that was the harshest criticism. And now, if you got on an airplane with a Facebook bag, someone might be like, ‘Why did you cause the downfall of Western civilization?’
And I am much more of an optimist around technology, which is why I’m in this industry. And when I think about some of the largest tech companies, I’m not saying they’re without flaw. But I also am very grateful for, as both a consumer and as a business, what I get from these amazing technologies. Whether it’s Google search or the Chrome browser, Amazon Web Services or Amazon, the commerce platform. So if there’s any high level thing, I think people who are concerned about Big Tech should focus more on how to enable a healthy start-up ecosystem than on trying to limit Big Tech.
It’s a very nuanced thing, and I have so much empathy for regulators working on this. But I do believe the creative destruction in Silicon Valley is greater than any place in the world, period. And the best possible thing to do to hold technology companies accountable is to encourage competition or what I think Andreessen and others call Little Tech.
I think that’s actually probably the most viable thing. And I think it comes from the natural creative destruction inherent in technology. And that’s why I just think that the right way to hold technology companies accountable is really around supporting that destruction, supporting that start-up ecosystem.
How to build a healthy start-up ecosystem
HOFFMAN: I completely agree. What is a healthy start-up ecosystem? If you were like, New York City mayor or British prime minister, whatever, what kinds of things would you say? Do more of X do less of Y for that?
TAYLOR: I think capital, talent, and culture are the three ingredients in my opinion. I’ll start with talent. I just think because of the existence of so many incredible companies here you end up with this sort of natural flow of talent between Big Tech and Little Tech that I think is really healthy. From the early days of Silicon Valley, all the early tech companies were started by alumni of Fairchild Semiconductor. So we’ve just had this natural flow of people going from one company to another.
It’s actually an ecosystem where people work at a company and achieve some modicum of financial independence. Sometimes it’s a lot. Sometimes it’s just simply, you’re willing to take more risk.
And then the existence of the capital here and the existence of other talent means you have the pool to start a new company and the cycle continues. I think we have a culture here that, it doesn’t mean it celebrates failure, but it’s tolerant of really ambitious experiments in a way that is really hard to quantify and certainly hard to replicate for people outside of this.
And it’s why I think you’ll see some founders whose first company doesn’t really work and they start another company and they get a lot of investors. And there’s sometimes some cynical social media posts about it. Like, God, they failed and now they’re investing again. Whereas the investors here are like, well, this person had some great ideas and man, they’re such a better entrepreneur now than they were the first time. I’m going to take a bet on this person. I think this attitude of tolerance, of experimentation, is so unique to Silicon Valley and just something I’m so eternally grateful for.
How AI companies can build trust with users
HOFFMAN: I agree. I also think there’s kind of the thing of AnnaLee Saxenian’s Regional Advantage: the network of learning created by talent moving around, by the conversations, by not having non-competes. I think there’s a stack of stuff there that’s pretty important.
We talked about the vibe question. What are the lessons from the social media companies? What are things that we should be doing as start-ups, as an industry, and what AI companies in specific should be doing to keep trust, build trust?
TAYLOR: I’d say there’s two axes I think a lot about, which is safety and job displacement. So on the safety front, I think it’s really important that AI companies are very cognizant about areas where their technologies can go off the rails or produce really poor experiences.
And I think that’s number one: We need to recognize that if you’re building this technology, all of our actions impact everybody else. On the job displacement side, I do think every company should think about reskilling and the new types of jobs being created. I think treating that as sort of outside the domain of what we software companies work on is not only insensitive, but I think somewhat irresponsible as well.
At our company, Sierra, we’ve allocated some of our equity to be devoted to issues around job displacement and reskilling. I think those will be the most important ingredients to building societal trust.
HOFFMAN: I concur. Super important.
Lightning round questions
HOFFMAN: Before I let Bret go, I asked him a few more lightning round questions to better understand how his brain works.
HOFFMAN: What is the habit that has helped you succeed the most?
TAYLOR: Sleep. Sleeping regularly seven to eight hours a night on a regular schedule. Prior to having children, I was the very stereotypical engineer that would stay up till, you know, as late as I could finishing the project I was working on. It turns out your kids wake up at like 6 am no matter when you go to sleep. So I was sort of waterboarded into sleeping on a normal schedule. And since then have grown to regret not having the sleep schedule for my entire life.
HOFFMAN: When you’re feeling stuck about a big decision, who do you talk to?
TAYLOR: I talk to my co-founder. I think a co-founder relationship is the closest thing to a marriage in business. And I think the greatest strength of co-founders is communication. And he’s like my first phone call.
HOFFMAN: Yep. Completely agree. What’s one book that you think everyone should read?
TAYLOR: This is one I don’t think everyone should read, but I just, for some reason, as an entrepreneur, I like: Endurance. It’s a story of the team that was trying to reach the South Pole and got stuck for years and survived in some remarkable feat of human endurance. And there are low points in starting a company where you say, well, if they could eat seal meat for two years in the middle of a sheet of ice, I can get through this moment right now.
HOFFMAN: Right. And well, by the way, if you ever have a temptation, I highly recommend going down to Antarctica because it’s the closest you get to the phenomenology of being on a different planet with the safety of this one. And if you are tempted, I can give you some pointers. So, all right. Last lightning round question. How do you hope AI will change your future?
TAYLOR: That’s a wonderful question. It’s so hard because I started an AI company. And so, I think the greatest achievement for an entrepreneur is to create a company and a brand that outlives you. And I think you’ve done that with LinkedIn. Though you’re still here, I don’t mean to imply your imminent demise.
But what I mean by that is that LinkedIn has got to the point where it doesn’t need Reid Hoffman anymore. And for me, I hope in Sierra, we’re creating a company that is an enduring brand that will outlive me. And I think AI is the technology shift that’s enabled us to create this company. And so I’m really grateful to be around for this opportunity and having the time of my life doing it.
HOFFMAN: Well, Bret, as always, I could talk to you for hours more, but it’s been a great pleasure and honor having you on Masters of Scale.
TAYLOR: Thank you for having me.
HOFFMAN: The prolific impacts of Bret Taylor’s career have combined to prepare him well for the two pillars of focus in his professional life now. As a start-up operator, he’s intentionally blending the best bits of all his former workplaces into a new company. And as chair of the board at OpenAI, he’s leaning on every ounce of his time spent in Silicon Valley to help guide artificial intelligence toward a future that benefits all of humanity. A pretty big job. And I’m glad someone like Bret has taken it on.
I’m Reid Hoffman, thanks for listening.