Empathy in AI
Mustafa Suleyman and Reid Hoffman — co-founders of Inflection AI, and two of the leading AI minds today — sit down to discuss the human-centered ideals at the core of Mustafa's cutting-edge work. They offer a clear-eyed, optimistic view on how AI will revolutionize our lives.

Table of Contents:
- Mustafa Suleyman’s introduction to AI
- What Mustafa Suleyman learned from DeepMind
- Mustafa Suleyman on trying to launch AI products at Google
- Mustafa Suleyman on designing & scaling Pi
- Breaking down different lenses of scaling AI
- Mustafa Suleyman on his book, The Coming Wave
- How to form trust with new AI products
Transcript:
Empathy in AI
MUSTAFA SULEYMAN: It takes a little bit of naivete to declare that your mission is to build artificial general intelligence ethically and safely.
REID HOFFMAN: The goal of Pi is not to take a lot of time and distract you from human beings. The goal of Pi is to help you in your interactions as human beings.
SULEYMAN: How do I learn more? How do I plan better? How do I get more efficient? And now you are going to have a personal AI that will be working for you in the background.
HOFFMAN: People go like, well, I'm just going to sit down and watch television and have my Pi deal with everything. It's like, no, no, that's not actually, in fact, the design. The world gets built the way we intentionally design and build it.
SULEYMAN: As creators of technology, we suddenly have a new way of creating very, very magical experiences for people.
CHRIS MCLEOD: That was Mustafa Suleyman and Reid Hoffman — co-founders of Inflection AI. I’m Executive Producer Chris McLeod. Reid recently sat down with Mustafa to discuss the ever-changing landscape of artificial intelligence, as well as the ideals that were essential in creating the AI assistant, Pi. And we’re so excited to share this interview with you today, because it’s the perfect prologue to our upcoming miniseries, AI and You, where Reid will talk with an array of AI leaders, including Mustafa, to explore how you can harness AI to scale your productivity, your business, and yourself, while staying safe in the process.
Mustafa Suleyman has been an AI leader for decades. Before he co-founded Inflection AI with Reid, he was co-founder of the AI company DeepMind, which was acquired by Google in 2014. Mustafa’s recent book, The Coming Wave, explores the seismic changes that are on the horizon for humanity, and what we should be doing to build a future that is safe and secure for everyone. The interview you’re about to hear is a peek inside the minds of two of the foremost AI leaders. They’ll give a clear-eyed, but optimistic view on how AI will revolutionize our lives. Now, Reid’s interview with Mustafa Suleyman.
HOFFMAN: Well, I have been looking forward to this Masters of Scale interview for years. Mustafa Suleyman, my co-founder of Inflection, a friend, who I have learned from in multiple vectors, ranging from poker to actually, in fact, really serious things that matter. So Mustafa Suleyman, welcome to Masters of Scale.
SULEYMAN: Thank you, Reid. It is super exciting to be here. I have obviously learned infinite amounts from you over the last decade, and so, I’m super excited to be here having this conversation. It’s been a long time coming.
Mustafa Suleyman’s introduction to AI
HOFFMAN: And so, let's start with, when you dropped out of Oxford, and were thinking, okay, how do I make a contribution? You're like, okay, I'm going to create what in the U.S. is called a 501(c)(3), a nonprofit, but kind of the equivalent of an NGO for helping society. You worked at the mayor's office. What got you into artificial intelligence? What made you suddenly realize, oh gosh, this is something that I should be turning my attention to?
SULEYMAN: Yeah, I guess I’ve always been a systems thinker. I love thinking about the relationship between complex ideas over time. And when I dropped out of Oxford, it was to start a nonprofit charity — a telephone counseling service. And I was really interested in the question: how do we have the biggest possible impact at scale to make the world a better place? And that was genuinely back in 2002, my main motivation. And for the first five years of my career, I worked in nonprofits, I worked in local government. I ended up co-founding a conflict resolution firm and working all over the world as a facilitator and a negotiator at the UN for big companies, for local governments. And all of that experience led me to realize that actually, it’s technology that really has an outsized impact in the world. And, technology was accelerating at an incredible pace even in those days. I think it was 2008 or so, when I really started to pay attention to Facebook’s growth. And it was that realization that this product had gone from a complete non-entity two or three years earlier to then having, I think it was like a hundred million monthly active users at that time. I was like, wow, technology really is the thing that is going to transform our world, and I want to try and participate and try and steer it to deliver the best possible impact that it can for our species.
HOFFMAN: So say a little bit about the conversations that you started with Demis and Shane.
SULEYMAN: Yeah, so I mean, the best way to get involved in a completely new alien paradigm is to throw yourself in at the deep end. And so, I ended up hanging out with Shane Legg and Demis Hassabis at the Gatsby Computational Neuroscience Unit at University College London. And they very kindly snuck me in through the back door to listen in to the lunchtime lecture series, which was about how neuroscience could be a tool for inspiring new machine learning algorithms. And I have to say, whilst I didn’t understand all of it, I found it deeply inspiring. And we started going for lunch together, the three of us at Carluccio’s, which is an Italian restaurant, a couple minutes around the corner from Russell Square in London. And we started thinking big and planning what it would be like if we put together an AGI company. And over the course of about six months, started reading papers, reading textbooks, really teaching myself to understand at a high level, how these machine learning systems work, what their objective function is, how they’re trained.
I think the really cool thing that we had spotted at the time was that models that had the ability to learn were in fact starting to gain traction. And artificial neural networks had been around since the eighties, proposed in various different formats. And traditionally, a lot of the AI models had been handcrafted. So they were kind of just complex sets of rules that interact with one another. And everyone realized that wasn't how the brain worked. What everyone was looking for was sort of more brain-inspired type algorithms. And the idea of course was that a reward signal from an environment could update a set of weights in a network to ensure that the network was a better fit for the learning objective, eventually leading to the kind of models that we see with large language models. And I think looking back, it takes a little bit of naivete to be bold enough to declare that your mission is to build artificial general intelligence ethically and safely. That was the strapline of our 25-page business plan, which we took to Silicon Valley in the summer of 2010, and ended up successfully pitching Peter Thiel at the time for a minuscule 1.5 million pounds, which we were over the moon about. And that then became our series A.
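To make that idea concrete, here is a minimal sketch of the learning loop Suleyman is describing: a reward signal from an environment nudges a network's weights so its behavior becomes a better fit for the objective. The tiny environment, the policy size, and the update rule below are illustrative assumptions, not DeepMind's actual systems.

```python
# Illustrative sketch (not DeepMind's code): a reward signal from an
# environment updates a small set of weights so that rewarded actions
# become more likely. Pure NumPy, REINFORCE-style.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 4, 2
W = np.zeros((n_states, n_actions))  # the "set of weights" being updated

def policy(state):
    # Softmax over the weights for this state gives action probabilities.
    logits = W[state]
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def environment(state, action):
    # Hypothetical environment: it rewards action 1 in odd states
    # and action 0 in even states.
    return 1.0 if action == state % 2 else 0.0

learning_rate = 0.1
for episode in range(2000):
    state = int(rng.integers(n_states))
    probs = policy(state)
    action = int(rng.choice(n_actions, p=probs))
    reward = environment(state, action)
    # Reward-weighted update: nudge the weights so rewarded actions
    # become more likely (gradient of log-probability under a softmax).
    grad = -probs
    grad[action] += 1.0
    W[state] += learning_rate * reward * grad

# After training, the policy prefers the rewarded action in each state.
print(np.round(policy(0), 2), np.round(policy(1), 2))
```

Run for a couple of thousand episodes, the probabilities concentrate on whichever action the environment rewards, which is the essence of the "reward updates the weights" idea.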
What Mustafa Suleyman learned from DeepMind
HOFFMAN: Yep. And then, what were some of the key insights that you were getting from the DeepMind time that kind of shed a light forward to where we are now? What was the lens, kind of DeepMind forward, from the early days?
SULEYMAN: What the team were trying to do was use these models to generate a novel example of a black and white handwritten digit. So the AI would read lots and lots of examples of images. I think they were sort of like 300 pixels by 300 pixels, black and white handwritten digits. And then it would try to generate a brand new seven that didn't exist in the dataset, for example, that wasn't a match. It was a novel style of handwriting. And we sort of saw this almost like a small video, like a short 20-second clip, as the video goes from complete black and white pixelated fuzziness and slowly resolves to a clear black background with a very distinct seven emerging out of the darkness. And it sounds really small, but it was a really mind-blowing moment for me. And in many ways, all of the subsequent breakthroughs, whether they were learning to play Atari games or AlphaGo or AlphaFold or many of the health applications that we did that produced best-in-the-world results in X-ray recognition or ophthalmology eye disease diagnosis, each of these was really a continuation of that very early work back in 2011, 2012.
HOFFMAN: And when did you start seeing that scale compute was going to be an important part of the equation? What was the beginning of that realization?
SULEYMAN: Well, we certainly didn't see scale back in 2013 in the way that we think of scale now, even though, looking back, there were really large amounts of compute used to train the Atari model, for example. The Atari model used two petaFLOPs of compute. A FLOP is a floating point operation; it's a unit of computation. And a petaFLOP, PETA, P-E-T-A, refers to a million billion operations. So at the time, it used 2 million billion operations, which sounds like an enormous number. But to put that into perspective, every year since then, the cutting edge of AI models has used 10 times more compute than the year before. The amount of compute used to train the best and the biggest models in the world has 10x'ed every year: 10 orders of magnitude in 10 years. So that gives you a sense of the trajectory that we've been on over the last decade. It is kind of hard to comprehend.
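As a back-of-the-envelope check on those numbers (the two-petaFLOP Atari figure and the roughly 10x-per-year growth rate are taken from the conversation; the rest is arithmetic), a petaFLOP here means 10^15 floating point operations, and 10x per year compounds to 10 orders of magnitude over a decade:

```python
# Back-of-the-envelope check on the figures quoted above. A petaFLOP is
# taken to mean 10**15 floating point operations ("a million billion").
PETA = 10**15
atari_training_compute = 2 * PETA  # "two petaFLOPs", as stated in the conversation

# Growing 10x per year compounds to 10 orders of magnitude over a decade.
growth_per_year = 10
years = 10
frontier_compute = atari_training_compute * growth_per_year ** years

print(f"Atari-era training run: {atari_training_compute:.1e} FLOPs")
print(f"After {years} years at {growth_per_year}x per year: {frontier_compute:.1e} FLOPs")
```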
Mustafa Suleyman on trying to launch AI products at Google
HOFFMAN: And so when did you shift to the large language model focus, and what brought you to Silicon Valley as a way of building that?
SULEYMAN: Well, I moved to work at Google full-time in 2020, and I was lucky enough to be able to work on an earlier version of LaMDA, which was called Meena at the time. And I joined a team that was maybe five or six people strong at the time, and it was really just a small research project, and the model was significantly smaller than GPT-2. It was sort of, almost incoherent, but you could see glimmers of really impressive text, like every now and then, it would string together a sentence or two sentences, and it seemed really, really impressive. And over the course of my time there, once we renamed it LaMDA, and we scaled it up and trained a much larger model, it was just unbelievable to see how good it had become so quickly. And I think what we built with LaMDA was an interactive back-and-forth agent. So in many ways, it was ChatGPT way before ChatGPT, and we were completely blown away with how good the seventh and tenth turn of conversation with the model was, because it obviously had the prior turns of interaction in its working memory. And I think that was the key insight that made us realize that actually, agents were really going to be the future of this next wave of technology.
HOFFMAN: And this is where at the very earliest glimmers, the Inflection and Pi story begins, because that conversation about like, wait, the agent interaction is part of what unlocks all of this capability into human experience, and creates this productive amplifier in a number of different ways. And obviously one of the things that you were trying to do at the time was kind of say, okay, I'm trying to get this productized and launched at Google. It's part of my job here. And Google at the time, until ChatGPT kind of kicked them in the rear end and got them moving, was kind of like, no, no, no, no, no. We don't want to launch anything that threatens our search business, and we don't know how people respond to this, so let's just keep this as an R&D project. And you're like, well, but wait, this could be so important for humanity.
SULEYMAN: We tried really hard to get that launched at the time, but there just wasn't the appetite for taking that kind of risk. And it was pretty clear, I think, to a lot of people at Google that this was potentially going to unseat Google's existing search business. And so it's super hard for a company to try to compete with itself and upend itself from within. But I think the interactive component of it was just so obvious. I mean, once we had hooked it up to search, we were trying to sort of ground the generations that the AI produced in the context of the search results, and try and make it more factual by sort of training LaMDA to reference a search result when it produced an output. And actually that's what you now see in Bard, Google's new AI. It's pretty much exactly the same model. So it was very clear to me that conversation was going to be the new user interface.
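The grounding Suleyman describes, training the model to reference a search result when it produces an output, is essentially what is now called retrieval-augmented generation. Below is a minimal sketch of that pattern; `search` and `generate` are hypothetical stand-ins for a real search API and language model, not the systems used at Google.

```python
# Minimal sketch of grounding a model's output in search results
# (retrieval-augmented generation). `search` and `generate` are
# hypothetical stand-ins, not real Google or LaMDA APIs.
from typing import List

def search(query: str, k: int = 3) -> List[str]:
    # Placeholder: a real implementation would call a search engine
    # and return the top-k result snippets.
    return [f"[snippet {i} about {query!r}]" for i in range(1, k + 1)]

def generate(prompt: str) -> str:
    # Placeholder for a call to a language model.
    return "<answer grounded in, and citing, the snippets above>"

def grounded_answer(question: str) -> str:
    snippets = search(question)
    numbered = "\n".join(f"({i + 1}) {s}" for i, s in enumerate(snippets))
    prompt = (
        "Answer the question using only the numbered search results below, "
        "and cite the result you used, e.g. (2).\n\n"
        f"Search results:\n{numbered}\n\n"
        f"Question: {question}\nAnswer:"
    )
    return generate(prompt)

print(grounded_answer("Who founded Inflection AI?"))
```

The design point is simply that the model is asked to answer from the retrieved snippets and to cite them, which is what pushes the generation toward being "more factual."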
I think that the future of the web and future of digital interactions in general will be that you just ask your AI. I think everybody is going to have a personal AI, and what you really want is to make sure that it is aligned with your interests on your team, in your corner, because you’re going to turn to your AI for all sorts of important and exciting and entertaining and even sensitive moments in your life. It’s not just going to be about directions or getting factual information. It’s also going to be about venting your frustrations. It’s going to be about asking a stupid question that you’re sort of too embarrassed to ask a friend or a colleague. Or it’s going to be sharing in a quiet, private moment when you sort of want to reflect. And so, I think that that new user interface gives us a kind of clay, it’s like a new design material. As creators of technology, we suddenly have a new way of creating very, very magical experiences for people. And as a result, these sort of more relationship-based, interactive based AI agents are what I think is going to be the future.
Mustafa Suleyman on designing & scaling Pi
HOFFMAN: And so, say a little bit about… we have multiple theories of scale that we're going to actually get to in this discussion, because there's scale in a number of different interesting vectors, which may surprise some people. But let's start with the micro, which is the design of Pi. What did you think were the key things in an agent? What are some of the places where people can understand Pi is different from ChatGPT and other things that they may have heard of? What's, as it were, the 101, or the tactile design of how Pi currently operates?
SULEYMAN: So we've designed Pi, which by the way stands for personal intelligence, to be really sensitive and kind and supportive. Our thesis was: what makes for great conversation? Because that really is going to be the backbone of the new set of surfaces on the web, in apps, and in all of your devices. It's going to be conversational. So Pi needed to be very respectful, very patient, very kind, always curious, right? It's super important that Pi seeks to understand your intent. It doesn't make assumptions. It doesn't stay firm when it's wrong. So it was really important to us that Pi is able to back down and seek feedback and ask clarifying questions. And I think that's slightly different to the other AIs on the market, which try to be more of a sort of conversational Wikipedia, giving you facts and lists and assuming that it knows the right answer straight away. And that was the core hypothesis: if we can design a personal AI that really gets to know you over time, that remembers the conversations that you've had in the past, then increasingly it'll be able to personalize its style and tone to your style and tone, and therefore give you a much better quality experience.
HOFFMAN: So, let's now broaden that out some, to the view of the universe in which Pi is a stepping stone: the upcoming personal intelligence universe, and what that means for amplifying human beings. When this vision gets to scale, what does that vision look like?
SULEYMAN: Well, I think that everybody over the next decade is going to have access to a personal intelligence, a Pi. I think there will be many, many different types of AI in the world. Some of those AIs will represent brands. Some of them will represent businesses or digital influencers. Some of them would be trying to sell you stuff. Maybe you’ll have a healthcare AI or a lawyer AI. Even governments will have their own AIs that help you with government services and your tax return. And so, what you want as a consumer is an AI that is really on your side that can represent you and that can interact with other AIs that are trying to sell you something or persuade you of something. And really, that’s what a personal AI is. I think over time it will start to feel like a browser for your life.
So if you imagine what a browser is today, it represents the sum total of your digital curiosities. Some of your tabs are going to be clusters where you are researching, I don’t know, a new camera that you want to buy. Another set of tabs will be you trying to book a new holiday. Another set of tabs would be like you doing some research for work, or maybe you’re looking for a new job. Each of these are threads or lines of inquiry in your life that require you to maintain state. You sort of have to remember, where am I at on this little mini journey and how do I learn more? How do I plan better? How do I get more efficient? How do I keep up the rhythm of this little inquiry or investigation in my life? And now, you are going to have a personal AI to help you maintain state, to help you dig deeper and learn more, that will be working for you in the background. Your Pi is going to go off and find new articles, give you summaries, find you nice “how-to’s” and little videos, little snippets to help you keep progressing on each of these lines of inquiry. And that’s kind of how I see it — preserving state across all these different areas of your interest and helping make you smarter and save you time.
HOFFMAN: It naturally brings a kind of comedic perspective, like, well, my Pi is going to talk to your Pi, and we'll sort it out, and intersect that with the kind of human amplification, the humanism that's at the core of the design of this. Because obviously to some degree people go like, well, I'm just going to sit down and watch television and have my Pi deal with everything. It's like, no, no, that's not actually, in fact, the design. The world gets built the way we intentionally design and build it. How does that play into the human amplification side?
SULEYMAN: Well, Pi is clearly, in the end, in the next few years, going to save people vast amounts of time. It is going to give you back time and I think it’ll be a question for you as to how you want to spend that time. I hope that it means that it will free us up to spend more time with our kids and our friends and our family and our loved ones and be out in the world, because you’re going to spend less time doing mundane administrative tasks online. In many ways, Pi is going to be a chief of staff for you, or like a PA or a secretary — organizing, planning, booking, buying, arranging. The amount of time that we all spend just processing payments online or just ordering the groceries when it’s a run of the mill order and we know what we need… there’s a lot of wastage in front of our screens. So I think trying to make your life more efficient and more productive hopefully frees you up to spend more time interacting with other people. Equally, it could be that you now have more time to pursue your hobbies and your passions and your new learning interests, or it might be that you’re trying to upskill yourself to change jobs, switch careers, move cities. Each of those things will be something that your personal AI can help you with too.
HOFFMAN: Yeah, and one of the ways that I put a different lens on the same point you've just made, is the goal of Pi is not to take a lot of time and distract you from human beings. The goal of Pi is to help you in your interactions as human beings. So if you've had this difficult discussion with a friend and you come and start talking to Pi, it's not, oh, just talk to me and I'll make you feel totally better and you can ignore your friend totally. It's like, no, let me help you with how you engage with your friend and understand each other and have a great conversation. That's part of the design ethos. It's to help you be your best self and navigate the world you're in, whether it's life or work or anything else. But "from you, out" is, I think, another lens into what you just said.
SULEYMAN: Yeah, totally. I mean, that’s a great way of putting it. I mean, in many ways, Pi is kind of there to absorb your tiredness and help you translate your frustrations on to Pi rather than on to other people in your life. And I think that’s just… it’s hard to wrap your head around. It really is a completely different experience. And that’s where I think we have to really try to make the right choices, really think very, very carefully. I mean, one of the things that you and I have talked about a lot and that we are very clear on is that we don’t want to let Pi be used for romantic relationships. So other people will build those things, potentially. It’s just not our bag. So now if someone sort of approaches that kind of style of conversation with Pi, Pi will be really deliberate and clear that that’s kind of off limits. It’ll be super respectful. That’s the nice thing about Pi. It never judges you. No matter what values or views you come with, it’ll try to talk it through with you and present both sides of the argument, but it’s never going to put you down or reject you. But it does have its own boundaries and it has its own values and they may not work for everybody, but we certainly, we hope that it’ll be a healthy way to proceed for most people.
Breaking down different lenses of scaling AI
HOFFMAN: Yeah, indeed. Now let's go to another version of scale, which is the compute behind this. Say a little bit about what the scale of compute is at Inflection. We just had an announcement recently about H100 clusters. Say a little bit about this.
SULEYMAN: Right. Yeah, I mean, so this entire story of large language models is a story of scale. It's really quite surreal. So the training data sets, for example, use hundreds and hundreds of billions of tokens, or you can think of them as words, more text than any one of us has ever read in our lifetimes or ever could, even if it was the only thing that we did. And you're right, in terms of compute, that is really the core engine powering the progress of this new revolution. And the workhorse of that engine is the Nvidia GPU, the graphics processing unit: these chips that were previously primarily used for gaming and turned out to be brilliant parallel processors when it comes to running computations for neural networks. So we, as you mentioned, have managed to gather together a pretty impressive supercomputer. Recently, we announced our new supercomputer of Nvidia H100s, and on the open source ML performance benchmark, Nvidia, our partner CoreWeave, and us actually demonstrated that it was the fastest computer in the world, which is pretty remarkable in itself. And we also just announced our new funding round, where we've been lucky enough to be able to raise $1.3 billion, and we'll be using that to build the largest supercluster in the world, which we'll have up and running by the autumn of this year. And that's incredibly exciting, and it's just quite surreal that we, as a 40-person, one-year-old startup, have been able to build the largest cluster in the world.
HOFFMAN: Say a little bit about what that size of cluster is relative to how people talk about exascale computers and supercomputers and so forth. Another lens of scale here is the sheer scale of the compute itself.
SULEYMAN: Well, one way to think about it is that for each of the words being read by the AI, the AI needs to learn the relationship between that word and the previous words in a sentence. And there could be billions and billions of possible combinations of previous words in a sentence. So you can think of it as kind of like learning an all-to-all relationship between all the words that it has read, in order to give you the likelihood of the next word in a sentence. Because it's predicting or generating the next word in a sentence, it gives you the likelihood that one word will appear over another, right? So for any sentence that Pi generates, it's also generating scores of other words that could come next. You can think of it, for each word, as a ranking of the next 10 words, and then the next 10 words, and then the next 10 words. So it's kind of surreal that that, albeit sort of simple, method could produce complexity and fluency like what we've got, which is just amazing. That maybe gives you a bit of an intuition for the case that the more compute you have, the more times you can run those computations, and the more the machine can chew over all the possibilities, if you like. And that's why you and I have been pursuing scale over the last year.
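To put a concrete picture on that "ranking of the next 10 words": at each step a language model assigns a score to every word in its vocabulary, a softmax turns the scores into probabilities, and the model samples or ranks from them. The vocabulary and scores below are made up purely for illustration:

```python
# Toy illustration of next-word prediction: the model assigns a score
# (logit) to every word in its vocabulary, a softmax turns the scores
# into "the likelihood that one word will appear over another", and the
# candidates can be ranked. The vocabulary and scores are made up.
import numpy as np

vocab = ["the", "cat", "sat", "on", "a", "mat", "dog", "ran", "slept", "quickly"]
logits = np.array([0.2, 2.5, 0.1, 0.3, 0.1, 3.1, 1.8, 0.4, 1.2, 0.0])

probs = np.exp(logits - logits.max())
probs /= probs.sum()

# "A ranking of the next 10 words" for some context, e.g. "the cat sat on the…":
for word, p in sorted(zip(vocab, probs), key=lambda wp: wp[1], reverse=True):
    print(f"{word:>8s}  {p:.3f}")

# Generation repeats this step: append the chosen word, re-score, rank again.
```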
Mustafa Suleyman on his book, The Coming Wave
HOFFMAN: No, exactly. So let's go to another lens of scale, which is the governance of this. One of the things I said at the very outset of our interview is that there's just a ton of stuff that I've been learning from you. And I think one of our very early serious conversations, maybe our third, was about the question of what the right governance mechanisms are and how this works. And let's start the governance conversation with your coming book, and I deliberately punned on that, The Coming Wave. Say a little bit about The Coming Wave and why you went very retro in writing a book.
SULEYMAN: Yeah, I mean, really writing a book was an excuse to think deeply and seriously about what was happening; it's kind of a meditation or a reflection. And the key question I was sort of trying to answer with the book, The Coming Wave, was: what technology trajectory are we actually on? What does it mean that everything in the history of our species that we have produced has actually got cheaper and easier to use the more it's demanded? So if it's useful, everybody wants it, everybody therefore drives to produce it, that drives the price down, and that means that it spreads far and wide. That has been true for every technology that has been valuable to us, for centuries. And what was clear to me during the pandemic, when I had a little bit of time to not fly and not travel so much and really reflect, was that if this was true for AI, it is both going to be the most incredible boost to creativity and productivity in the history of our species, but also potentially make for an incredibly destabilizing time, because anybody who has an agenda, who wants to amplify their narrative, their ideology, whether it's political or commercial, anybody is now going to have a tool or an aid at their side to turbocharge that agenda.
And I’m a huge optimist for technology, and that’s why I build it. But I also believe that those who are concerned about the potential dark side, need to be actively participating in building and creating and shaping it to the best possible outcomes.
HOFFMAN: Well, let's talk a little bit about that, because your book, and our dialogues, are actually among the places where I treat the concerns most seriously. Most of the folks who voice concerns offer everything from the relatively laughable and silly six-month pause letter, which is kind of wrong in its theory of human nature, wrong in its theory of technological development, wrong in its theory of just all of that, to, oh my God, the sky is falling, or something else. And none of that is constructive or positive towards getting to a better universe. The thing is actually not just, let's steer away from this potential landmine, but here's how we steer to positive and possible futures, hence techno-optimism. So say a little bit about what that steering course needs to look like, and how you're hoping that the book coming out in September will help people frame that and understand that.
SULEYMAN: There's a kind of extreme anxiety and doomerism that is kind of bubbling up at the moment, which I think is being amplified and encouraged by this kind of fear that suddenly the sky is going to fall in. And I think that's a really dangerous attitude. I mean, if you look around you, it's really technology that has delivered all of the progress and benefits that we see, that has created stability and order and civilization, doubled our life expectancy over the last century and a half, lifted billions of people out of poverty, and enabled all of us to be connected and educated. I mean, it's almost ridiculous that we have to make that defense and quantify all of those benefits. At the same time, we have to be eyes wide open about the potential for this to cause instability, but there are many more practical near-term threats to stability which we should be focused on, rather than talking about existential risk and AGIs suddenly emerging and exploding and taking over and manipulating us and taking over armies and all of this kind of fear-mongering. Sorry, I'm just getting a phone call. Sorry about that.
HOFFMAN: It is a classic in recorded interviews.
SULEYMAN: Yeah. Well, obviously I would have it on airplane mode, but I'm connecting to you. And of course it's a spam call.
HOFFMAN: Which of course!
SULEYMAN: Course, of course. What else could it be?
HOFFMAN: It's an AI robocall from something. It was like, wait, I want to be part of this conversation.
SULEYMAN: Yeah, where were we?
HOFFMAN: Well, so it's… where you were was on the existential risk. And part of it, of course, is the fact that one of the ways most of the existential risk people are doing a disservice, and causing an increased likelihood of dystopia, is by focusing on future robot overlords versus how human beings use this technology, including, by the way, criminals, crazy people, bad state actors, all the rest of this. That is the real destabilization point, and misplacing the focus there actually, in fact, increases dystopian outcomes.
SULEYMAN: Yeah, I mean, that's totally right. I mean, so, the near-term threats include things like a massive spread of misinformation, which has the potential to destabilize elections, and a massive reduction in the barrier to entry to causing cyber attacks. And there are very practical security and anti-misinformation steps that we should be taking. But the challenge for those who are sort of more negative about the existential risk and so on, is that addressing this actually requires very practical, operational, roll-your-sleeves-up work to get in and build solutions and make things safe and secure. This is eminently doable. It's going to require a huge lift and probably a lot of changes to the way that content is moderated on platforms. It will require new algorithms, and of course it will require new types of regulation over the next few years. And that's the messy, hard work of trying to operationalize change and make things just incrementally a little bit better. And I think some people have got slightly caught up in the sci-fi conversation, which is an easy way to trigger a nice dopamine hit when you're having dinner with friends or something, and that might be distracting from the practical work we need to get on and do.
HOFFMAN: So obviously we agree intensely that governments need to be involved. Part of the slogan that I've been using in talking to governments, because generally speaking, if you listen to the press and listen to the committees of people, they're like, oh, we should slow these big tech companies down. It's like, well, actually, in fact, that's the wrong approach, because we have line of sight to a medical assistant, a tutor on every phone. Think about the human suffering that you alleviate, the human potential that you enable with these kinds of things. The real question is how we get there. And your real question as a government isn't, how do we slow this stuff down? It's, how do we shape it so we minimize risks? How do we shape it so that we get these benefits to the bulk of our people as fast as possible?
How to form trust with new AI products
Let's shift this to another subject that we've talked about in a lot of depth. Like many things, we've talked about this for 10-plus years: trust. Trust is really important: trust in technology, trust in Pi, trust in governance. What are the key things that all of the actors here should be doing, in everything from product development to companies, products, communicating to people? We can talk about Inflection specifically, what we're doing, and then also, like, media and government. So the elevation of trust is going to be extremely important. And obviously we want it to be a well-founded trust, trust with good purpose.
SULEYMAN: Look, I think fundamentally, we are at the beginning of a new revolution in the history of our species. There’s going to be a completely different quality of object arriving in our world. I mean, just as hardware over the last 60 years has gone from huge big TV screens in your living room to tiny devices in your pocket that stream HD, we are on the same trajectory for access to intelligence. That is going to completely change the landscape of society, culture, politics, religion. It’s really going to change what it means to be human. It’s going to change what governance looks like and what it means to earn an income. It’s going to change what national boundaries actually look like over the next 30 to 40 years. It’s sort of hard to fathom how profound this change is going to be. And so in amongst all of that, really the most important thing is being able to trust the technology that arrives in your environment.
And the way that we form trust, I think, is that we observe behaviors consistently over time. You trust that your iPhone is going to perform well because it’s consistent, it’s reliable, it does the same thing over and over again. And at the moment, you can’t yet fully trust these large language models. They’re not reliable, they’re not robust, they’re fun, they’re good. They can be super useful, you can learn a lot from them, but they still make mistakes. And I think that it’s going to take us another two or three years to really iron out these weaknesses in the models. And so over time, you’ll build more trust in the model, as you can observe their behaviors consistently over time. And what that enables is us to train these models to be very boundaried and respectful and make sure that humans are always in control and at the top of the food chain.
HOFFMAN: And say a little bit about… what are the things, because part of it, obviously, is that the tech industry has been screwing up on maintaining trust, and there's antagonism, because people always attack any pillar in society that has a differential rise in power, whether it's finance or banking or oil or whatever. It always happens that way. And so tech has that too. But on the other hand, of course, in my view, it's partially because they're terrible communicators at saying, here's how we're designing it, here's what our goals are, here's what our purposes are, and we're listening to your concerns. And part of how we show that we're listening is that we reflect what we hear about your concerns back to you as we're doing it. But that trust between the builders of the technology and the rest of society is really important, to keep that communication and trust going. What's your current criticism and what are your suggestions for improvement?
SULEYMAN: Well, I guess my main criticism of technology, of the platform companies, so far is that they have really tried to make the argument that the platform is neutral, that it doesn't actually have responsibility for the content that appears on the platform. And this is the long-debated Section 230 discussion about who's liable for content. And I think that instead of arguing 20 years ago that content really had no publisher liability if you are a platform creator, what we should have done is say, okay, look, none of the existing paradigms work. It's clearly not fair to say that a platform is a publisher, like a newspaper, responsible for anything that it hosts on its platform, but it's also clearly completely unreasonable to go in the other direction. And I think that had we taken a little bit more time to debate that and think it through, then some of the emergent effects that have to do with this kind of spreading of misinformation, this very polarized and outrage-driven environment that we now find ourselves in…
I think potentially some of those could have been a little bit avoided. And so, I think the other component is really thinking about the business model. The commercial relationship between you and the content that you consume really, really matters. And if you don't appear to be paying anything for it, then you are probably part of the commercial process; your attention makes you the product, right? And I think that that means that your interests aren't aligned with the content that you're seeing. Somebody else is paying to put that in front of you, even if it doesn't look and sound like an advert, and it's not explicitly named as an advert. It's clearly pushing a particular idea. And ranking is very much a form of persuasion. The order in which content appears in a feed really shapes what you end up seeing, because you never get to the bottom of your infinite scroll.
So I think in this new wave, we have to think about how to address both of those questions, how to minimize outrage and minimize alarmism and polarization and hatred for one another and anger, because that isn’t what we want to build as product designers. But also think really carefully about the business model. I don’t think anybody wants a personal AI that is funded by advertising that is selling you to the highest possible bidder and sort of trying to persuade you to buy something. So that doesn’t mean that there can’t be sponsored content in the experience. Clearly ads sometimes can be exactly what you are looking for and really, really useful. So just getting this balance right, I think is going to be really, really critical to making sure that your personal AI always serves your interest first and foremost, above all, above anything else.
HOFFMAN: So as part of trust and governance, there’s a lot of different things. It’s everything from, what kind of structures we should have in society. But one of the things that you’ve already been doing, and both within the British context and also within the American context, is what’s the governance of the organizations? And so one of the things that was really key for how we were setting up Inflection was to set up as a PBC. So say a little bit about what a PBC is, how to understand it, what the Inflection PBC is, and how this is an effort to provide good governance and to increase trust.
SULEYMAN: Yeah, I'm glad you brought that up. So obviously, when a business goes around and produces products, typically it tends to only focus on its users and its customers, whoever is paying its bills. So the existing corporate structure maximizes returns to the shareholder without any legal consideration of the impact on the environment, or the impact on wider society in general, or just doing good for humanity. So when you and I got together to co-found Inflection, it was a no-brainer that we would incorporate it as a public benefit corporation. What this is is a new type of corporate structure: it is still a company, and our goal is still to make profits, but we as directors of the company have a fiduciary duty, that is, a legal obligation, to balance the interests of the shareholders with the interests of wider society and people who are affected by our externalities.
You have to factor in the long-term consequences for people who don't pay you, who aren't your customers, but for society in general, and you have to try to do the right thing. And so it's not going to solve all of the issues of companies, but I think it's a first step towards trying to create a more balanced social and commercial mission, integrated into one organizational design. I mean, as you know well, but for listeners, you've been a great support to me personally over the last decade, as I've tried to establish DeepMind's governance structure. When we were acquired by Google, we had an Ethics and Safety Board, which of course you were on, back in 2014, which was a pretty incredible first for its time. And then throughout my time at DeepMind and at Google, we tried to create lots of different oversight boards and different structures to house the future technology.
HOFFMAN: Well, as always, I learned stuff by talking to you. Mustafa Suleyman, thank you for being on Masters of Scale.
SULEYMAN: Reid, thank you. This has been super fun.