How AI adds to human potential
Transcript:
JEFF BERMAN: Hi, it’s Jeff Berman, your host for Masters of Scale. There’s so much talk about artificial intelligence taking things away from people, but on stage recently, I had the opportunity to speak with two leaders in the field about how AI can increase human potential, lowering barriers to entry for fields like content creation.
LAMA NACHMAN: How do you design human-AI systems that can actually get you the essence of what human creativity is? And how do we think about what people need to bring their creativity to the table?
You’re actually democratizing a lot of these technologies. So now you’re enabling a much wider spectrum of people to bring their creativity to the table.
ALEXANDR WANG: It’s a huge boon for storytellers. The question for us isn’t: if we looked at the stories, the movies, or the short films being made today, how efficiently could they be done in the future?
BERMAN: That’s Alex Wang, co-founder of the company Scale AI. And before him, you heard Lama Nachman, Intel’s director of human and AI systems research.
I joined them in conversation at an event called Intel Vision in early April. The event brought together business leaders focused on the challenges and opportunities presented by artificial intelligence.
As AI advances at a blistering pace, it’s a growing part of every business, in addition to being an industry sector all its own.
I took the stage with Lama and Alex to record this episode of Masters of Scale in front of a live audience of active stakeholders.
BERMAN: Oh God, you guys are amazing. All right, take a seat.
[THEME]
[AD BREAK]
Will Sora change Hollywood as we know it?
BERMAN: One of the most breathtaking advances in AI came just a few weeks ago, when OpenAI announced a new tool called Sora. It generates AI video from a text prompt. You type in details of a scene: the characters, the setting, the actions they’re taking, and Sora creates a video from just those words. It’s expected to be released to the public later this year, but Masters of Scale was granted access to experiment with its beta release. Now, it doesn’t do dialogue yet, and of course, you can’t see a video on this audio podcast. But I had to start the show with a Sora demo.
BERMAN: And last night, some of you may have been harassed by me asking for some prompts, some ideas, and I met Kathy and Sarah. I don’t know if Kathy and Sarah are in the room today, but they gave me a great prompt, which, if we can roll the next video, we got last night and turned around within an hour.
Sarah and Kathy said, “Show me a French bulldog wearing a red fedora in Paris, eating a croissant.”
BERMAN: So what pops up on the screen is a realistic, adorable dog in this chic Parisian setting, chomping a delicious pastry.
BERMAN: Who doesn’t like French bulldogs wearing fedoras, eating croissants? So you can see how quickly the pace is going.
BERMAN: That’s just one ambient scene, but recently an entire short film made with Sora was released, with multiple locations, voiceover, character development, and real pathos. It looked a lot like an Oscar-nominated short. All of this has shaken up Hollywood, big time. Tyler Perry stopped work on an $800 million studio expansion because he just doesn’t know what AI is going to do to film and TV production.
Jeffrey Katzenberg, one of the legends of the industry, said 90 percent of animation jobs are likely to disappear in the next few years.
So I started there, with Scale AI’s Alex Wang, and Intel fellow Lama Nachman, and I asked, is this the end of Hollywood as we know it?
NACHMAN: So that’s an interesting question.
What we have seen, as more capabilities have come up, is much more interesting innovation actually coming out. And it’s really a choice that we make, right? Every time there is an interesting technology that has the ability to automate, you can go the path of automation, or you can go and figure out how to design human-AI systems that can actually get you the essence of what human creativity is. It’s the complementarity between AI and humans. Of course, that will only happen if we design systems with that in mind. So it could go that way.
BERMAN: I love that optimism, and I feel that, and yet Alex, are you as optimistic about this? Or, do you feel like we’re entering this interregnum where the old king is dead but the new king is not yet born?
WANG: It is something like an interregnum. I think it’s a huge unlock for what film could be, or what films can be. One way to think about this is a lot of films now have tens of millions, hundreds of millions of dollars of visual effects budgets. It’s true they won’t have to spend that much on visual effects going forward.
The original advent of advanced 3D animation, as Pixar introduced it, was this huge change. There were kinds of movies you could create that you could never have made before, that were somehow significantly more relatable to children and much more meaningful to children, and it created a new canvas.
I really think about it as an evolution in that vein, which is we’re continuing to create more engaging, imaginative and powerful platforms for creative content, and Hollywood as a business is going to have to get on board with that.
BERMAN: Do you think job displacement is coming in a meaningful way before these kinds of new industries are created, new jobs are created?
WANG: The pattern that we see over and over again is something like this: there’s a very talented team or group of people within a company, whether it be engineers and coders, or lawyers or consultants, for whom maybe 20 percent of the job requires very high ingenuity and capability and all of their training and all of their brilliance. But 60 percent of the job is relatively rote, or is something that fundamentally does not require all of their training and brilliance and capabilities. The pattern we see over and over again is: how do you build copilots that can help them do that 60 percent of other work and maximize their time, so they can spend most of it doing the stuff that actually requires their ingenuity and their capability?
That’s what we’ve seen with coding, where the coding assistants and copilots handle all the boilerplate code, or make the simple fixes, or catch bugs more easily, stuff like that. Or the legal assistants help you draft the entire document, but all of the important legal decision-making is still left to the lawyer.
This is the pattern of the future, which I think is a pretty optimistic one, because it does not in itself result in labor displacement: for most of the jobs we’re talking about, there’s a shortage of them in the economy. We need more doctors, we need more engineers, we need more of them in the world.
And I think this goes toward what I see as the longer-term trend, frankly, which is that AI systems are moving toward being superhuman in some ways, but meaningfully subhuman in others. Already GPT-4 is a much better writer than I am, just in terms of not making grammar mistakes, being able to structure very flowing prose, and being able to write very beautifully. But it’s much worse than me at reasoning. It’s much worse than me at thinking over very long forms. It’s much worse than me at getting factuality correct. It’s much worse than me in some other ways. And so the direction of the future is going to be these hybrids between human capability and some superhuman AI capabilities to achieve better outcomes.
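To make the copilot pattern Wang describes a little more concrete, here is a minimal sketch in Python: a model drafts routine boilerplate, and the developer reviews it before anything is accepted. The endpoint URL, payload fields, and helper names are hypothetical stand-ins, not any specific copilot product’s API.

```python
# Minimal sketch of the copilot pattern: the model drafts the rote part
# (boilerplate, docstrings, simple fixes) and the human keeps the judgment calls.
# The endpoint and payload below are hypothetical placeholders, not a real API.
import requests

COMPLETION_URL = "https://example.com/v1/complete"  # hypothetical model endpoint


def draft_boilerplate(code_context: str, instruction: str) -> str:
    """Send the working code plus an instruction and return the model's draft."""
    response = requests.post(
        COMPLETION_URL,
        json={"prompt": f"{instruction}\n\n{code_context}", "max_tokens": 256},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["text"]  # assumed response shape for this sketch


if __name__ == "__main__":
    context = "class Invoice:\n    def __init__(self, items):\n        self.items = items"
    suggestion = draft_boilerplate(context, "Add __repr__ and __eq__ methods.")
    # The human developer reviews the draft; nothing is applied automatically.
    print("Suggested boilerplate (review before accepting):")
    print(suggestion)
```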
How will AI impact job displacement?
BERMAN: Lama, I hear Alex, and I want to be all in with Alex on this and yet I have a friend who runs a small law firm, and because of the productivity gains they’ve been able to make using AI, they’ve been able to reduce the number of paralegals they need.
Right? It’s logical. So that efficiency gain, the ability to focus on higher-level work, makes a ton of sense, as does thinking of AI as a copilot, right? Microsoft did a great job branding their product there. But why, logically, shouldn’t companies then say, well, we can reduce costs by 40 percent at the same time?
NACHMAN: Yeah, and they can, and that will happen. The question is whether this creates an economy where the demand for all the new things that become possible ends up resulting, overall, in more opportunities for jobs. There’s no question that a lot of jobs will also be displaced, right?
Making us more efficient will mean fewer of us will be required for that same amount of work. But the assumption here is that it is the same amount of work. All of that really starts from the notion that it’s a zero-sum game, and I think that’s the missing piece of the puzzle, right? We’re in such a generative world that opportunities and new things we haven’t even dreamt of doing will be made, and that will impact certain jobs that were limited or unneeded in a certain way. But it’s also an opportunity to continue to innovate, and differently from previous innovation. The reason I’m actually optimistic is that if we build AI systems in ways that support and amplify human capability, then that capability is helping the human continue to evolve.
BERMAN: Do you think there will be that gap? Do you think that we’ll see massive job loss over the next few years before new jobs come up? Or are you more optimistic than that?
NACHMAN: It’s really hard to guess whether the ability to transition to different types of jobs, enabled by these AI capabilities, is going to help people bridge that gap or not. Being an optimist, I see this as a place where we can train people very quickly in new types of skills, because AI can actually be a support system. But I would be shocked if there was no dip in certain areas and fields, for sure.
BERMAN: Alex, do you agree?
WANG: I think there’s certainly a possibility, and I think the key for us as a society is that AI fundamentally should not just be a cost-cutter. It should be a tool that creates new business models. It should be a tool that creates a lot of economic growth. It should be something that really spurs a huge number of new products, new business models, new directions, and new capabilities beyond just the displacement of human labor.
AI skills that young people should have
BERMAN: Massive changes are surely coming to the labor market because of AI. And Alex and Lama have very different views of the field, influenced in no small part by where they are in their careers. Alex is a 27-year-old founder of an AI start-up who dropped out of MIT to go into business. Lama is a researcher with decades of experience in studying technology. I asked them what business opportunities and jobs they see coming thanks to AI.
BERMAN: We have just so much uncertainty in the world today, and this feels like it’s yet another layer of it. I’m curious, Alex, you dropped out of MIT, right? College dropout made good. Lama and I were talking before we came on stage; we both have 17-year-olds who are anticipating going on to college in a year and change.
What are the skills that you think young people need to develop today for the economy that’s coming?
WANG: I think prompt engineering is a very important skill. Knowing how to interface with AI systems is an incredibly critical capability that is, I think, very akin to software engineering over the past few decades. But if you take a big step back, what are these algorithms really good at? They’re good at things that are present in their data set. And most of them are trained off of the entirety of the internet.
One thing that’s not present on basically any of the internet, or is very sparse in all of the data that these models are trained on, is very consistent and thoughtful long-form thinking. So let’s say you’re given a very tough problem, at work or in school, and you have to try one thing and it doesn’t work, and you have to try another thing and it doesn’t work, and it takes maybe 30 or 40 steps to really work through it. I think for most of us, in our jobs, most of the hard things that we do are akin to this. If you have to organize a very complex project, there’s a lot of trial and error that goes into it. Oftentimes in the field, we call this agentic behavior, or agent behavior.
So how can the model actually deal with new information and make choices and think through many, many steps? One thing you’ll notice is that the models today are pretty bad at this. They’re very bad at thinking over multiple steps. They usually make a mistake on the third, or fourth, or fifth of what’s called a reasoning step, or chain of thought. I think this is something that humans will always be differentiated on, because we’re very good at long-form reasoning and very good at thinking over very long time horizons. Models are not good at that, fundamentally. They’re very good at predicting the next token. They’re not good at thinking over a very long time period.
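As a toy illustration of the agentic, try-observe-retry behavior Wang is describing, here is a rough sketch of an agent loop. The propose_step function stands in for a model call (here it is a simple bisection so the example runs on its own); in a real agent, keeping that chain of reasoning coherent over 30 or 40 steps is exactly the hard part he points to. All names are illustrative, not a real framework.

```python
# Toy sketch of an agent loop: propose a step, observe feedback, retry.
# propose_step stands in for a model call; here it is a simple bisection so the
# example runs on its own. All names are illustrative, not a real framework.
from typing import List, Tuple

History = List[Tuple[int, str]]  # (guess, feedback) pairs the agent has seen


def propose_step(history: History) -> int:
    """Stand-in for the model: pick the next guess based only on past feedback."""
    lo = max((g for g, fb in history if fb == "too low"), default=0)
    hi = min((g for g, fb in history if fb == "too high"), default=100)
    return (lo + hi) // 2


def check(goal: int, guess: int) -> str:
    """Environment feedback the agent observes after each step."""
    if guess == goal:
        return "done"
    return "too low" if guess < goal else "too high"


def run_agent(goal: int, max_steps: int = 30) -> List[int]:
    """Run the try-observe-retry loop until the goal is reached or steps run out."""
    history: History = []
    guesses: List[int] = []
    for _ in range(max_steps):
        guess = propose_step(history)
        guesses.append(guess)
        feedback = check(goal, guess)
        if feedback == "done":
            break
        history.append((guess, feedback))
    return guesses


print(run_agent(goal=37))  # [50, 25, 37]: each step conditions on what failed before
```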
BERMAN: I again want to be with Alex on this, and yet I look at the evolution just of AI video, for example, and the pace, it’s unbelievable.
Do you agree that the models are going to take a long time to get there on the long form? Or is this actually just one breakthrough away?
NACHMAN: It’s interesting. There is no question that that’s where AI struggles. And a lot of people come to me and ask, “Oh, what do you think people should study?” The one thing that I would say is really critical is critical thinking and reasoning, especially given the fact that you can’t even trust what an AI system will generate, right? That’s part of my concern around a lot of the state of where AI systems are today, even though we think about it as an equalizer that improves equity and democratization, all of that.
In reality, people who are experts can take what they want out of it and know what to ignore. And it’s really the people who don’t have that capacity who are probably more vulnerable to the mistakes that these systems actually make.
BERMAN: This is where what you and Alex are saying dovetails really nicely, because prompt engineering and critical thinking and reasoning actually go very much hand in hand.
Uncovering the next stage of AI development
I want to just give a quick personal example and segue into really personal AI. Seven weeks ago, my 14-year-old and I were having lunch and we were planning a spring break ski trip. We went on ChatGPT, and we asked it to compare this year’s weather data, using Bing, to the past 10 years of weather and snowpack data. We included a few other variables: avoiding altitude sickness, ease of travel, et cetera. It spit out a suggestion, and it ended up being a terrific trip. What I’m imagining is the next step: the integration with my calendar, the integration with my purchase history, my credit card, my bank account, et cetera. And the leap from asking that one question through that one prompt to surfacing flights, car rental, hotel, restaurant reservations, et cetera, feels quite short. Alex, how close to that do you think we are?
WANG: I think what I’ll call L2-level autonomy is already basically here. I’m on the board of Expedia, and Expedia’s launched a generative AI travel assistant. Plenty of customers use it, and they find it very helpful and useful. As part of that, Expedia built models that were fine-tuned and customized to be really good at these travel workflows and these travel agent-like questions and answers.
And so I think that’s more or less here. The key unlock is actually the consumer experience: how to make that really natural and easy, and flow into all the other behaviors. What I’m excited about, and what I think is actually closer than we’d think, is the next unlock we’re talking about, where it’s true autopilot for everything. That reliability boost in the models could come faster than we expect.
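The kind of integration Berman imagines, where calendar data, purchase history, and explicit constraints all feed one request, could be sketched roughly like this. The data sources, field names, and prompt layout are hypothetical; a real assistant would also need permissioned access to each source, which is where the privacy questions below come in.

```python
# Rough sketch of assembling personal context (calendar, past bookings, explicit
# constraints) into a single travel-assistant prompt. The data sources and field
# names are hypothetical; the prompt would be sent to whatever model is in use.
from dataclasses import dataclass
from typing import List


@dataclass
class TripRequest:
    destination: str
    dates: str
    constraints: List[str]


def build_prompt(req: TripRequest, calendar: List[str], past_bookings: List[str]) -> str:
    """Merge the explicit ask with personal context so the assistant can recommend, not just answer."""
    return (
        f"Plan a trip to {req.destination} for {req.dates}.\n"
        f"Constraints: {'; '.join(req.constraints)}.\n"
        f"Calendar conflicts to avoid: {'; '.join(calendar)}.\n"
        f"Past bookings to infer preferences from: {'; '.join(past_bookings)}.\n"
        "Suggest flights, lodging, and restaurants, and explain each choice."
    )


request = TripRequest(
    destination="Phoenix",
    dates="April 8-10",
    constraints=["nonstop flights", "hotel near the convention center"],
)
prompt = build_prompt(request, ["April 7: board meeting"], ["aisle seats", "boutique hotels"])
print(prompt)  # in practice this would go to the assistant model, not stdout
```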
NACHMAN: I think there is a question about what type of an interaction and engagement one wants, right?
I love the example of autopilot. So I have a Tesla and I’ve tried self-driving, and it drove me crazy. And then I thought about it: I use the self-driving ride-share Waymo in San Francisco all the time, and I don’t have a problem with that. But there I’m sitting in the backseat; I’m not trying to control its behavior, right? That’s all I want. If I’m sitting behind the steering wheel, that’s not my natural interaction, right? I want to control that experience. I want it not to change lanes when I don’t want it to. Like, why are you changing lanes? That makes no sense. Because I’m trying to map it onto my own level of control and my thinking of what I want done, right?
So whether we’re ready for a given experience has a lot to do with what type of interaction we’re actually enabling with these systems, right?
BERMAN: It’s context-dependent.
NACHMAN: Exactly.
Should we be worried about privacy?
BERMAN: I’m actually hearing two different things, right? One is, humans are notoriously bad self-reporters, right? We’re really bad witnesses, even about ourselves. And so we say we want this, but really, we kind of want that most of the time, right? So there’s a version where we say, “I’m headed to Phoenix for the Intel Vision Summit, and I’m looking for flights in this time frame,” and whatever, and okay, great.
There’s also a version where it’s got access to my calendar, it’s got access to my email, it’s got access to my flight history, and it’s able to say, “Jeff, it looks like you need to go to Phoenix for the Intel Vision Summit. Based on your past history, here are the three flights I’m recommending for you.” And so I’m going to push into the privacy question here, because this kind of AI personal utopia, where it infers with 99-point-several-nines accuracy what I’m likely to want, sounds beautiful. But should we be scared about the privacy components?
NACHMAN: We should absolutely be scared.
BERMAN: Where do you see this going, and where should it go on the privacy front?
NACHMAN: Clearly, the privacy issue has been around for quite some time, even before Gen AI, right? I think clearly the need for a massive amount of data to train these systems has made the problem much worse, right?
That, in some sense, is a little bit different from “does it know everything about me so that it can hyper-personalize?” Whether you’re talking about a consumer or any other type of application, the reality is that these systems are being trained on our data to be able to actually get to that level of intelligence, irrespective of whether it’s actually personalizing to our needs. So I think that can only be solved by regulation, right? Because…
BERMAN: You don’t think a robust open marketplace will solve that problem. It’s going to require government intervention?
NACHMAN: Yes, an open market is something that will bring privacy as a value proposition, and people can always give up whatever they don’t care about. That’s true. But there needs to be regulation to ensure that the privacy constraints that people are claiming are actually true, right? I think it should absolutely be up to people. It’s your data. You should be able to say, I want to give it up or not give it up. I think the reason people have not taken care of privacy in that way is because they need that data, right?
It’s not because it’s impossible to make these systems private. You can build systems that do that and not have your data taken away and used to train other systems. But the reason it’s not happening today is because that’s not advancing the state of AI more generally.
BERMAN: The need for more and more data to advance AI bumps up against the need for government regulation. We’ll hear more on that from Intel’s Lama Nachman and Scale AI’s Alex Wang after the break.
[AD BREAK]
BERMAN: I’m Jeff Berman, and this is Masters of Scale. You can find videos from our interview catalog over at the Masters of Scale YouTube channel. Here’s more now of our live taping at the Intel Vision summit, on the dizzying pace of AI and the consequences for business leaders.
The path toward government regulation
BERMAN: Alex, I spent four years early in my career as chief counsel to a member of the Senate leadership, which I feel makes me especially well qualified to say how spectacularly unqualified Washington is to regulate AI at this stage.
You’re spending a lot of time in D.C. What are you seeing from Capitol Hill and the administration, and are you bullish on the path toward regulation, or are you bearish?
WANG: What we’ve seen over the past year, ever since the launch of ChatGPT, has been an incredible amount of engagement from all of D.C., I would say: from Capitol Hill, from folks in the White House, from folks across many of the departments, in really understanding what the risks are, which risks we need to be very concerned about, such as deep fakes, and what all the various risk factors of the technology are, as well as what the opportunities are with the technology. If, in the United States, we’re going to have private enterprises investing tens of billions, potentially hundreds of billions of dollars in the future to build very powerful AI systems, we need the proper guardrails and safeguards in place to ensure these technologies serve the needs of humanity and don’t have major risks associated with them.
BERMAN: And the risks are massive. I mean, you know, we got social media spectacularly wrong, right? Every bit of data says so. And yet, if we regulate badly or overregulate early, we’re gonna stifle competition, we’re gonna fall behind the rest of the world. Like, we can’t have that. Here’s the fear I have: we’ve already seen deep fakes. We’ve already seen a deep fake robocall of President Biden in New Hampshire. The consumer platforms are, at best, going to be a half step behind, right?
What do we do about that?
NACHMAN: There are very different risks, and the solutions for these different risks are very different, right? Maybe one simple way of thinking about it is this: there are people who are actually trying to do the right thing, but these systems are so complex that they’re very hard to control. Then there is the other bucket, which is people who actually want to blow up the world and are utilizing AI in those ways; these are the catastrophic events, the terrorist threats, things like that.
BERMAN: And there are a lot of those people out there.
NACHMAN: Right. So let’s talk about deep fakes specifically, because, you know, I hear all of these stories about how robots will destroy the world, all of the Hollywood narratives.
I think the most likely thing to happen is that people start to mistrust everything that they see, and they don’t know what’s real and what’s not. That’s what worries me the most, right? Because then you can manipulate people to do anything. You don’t need to have robots blow up the world. You can make people do that.
BERMAN: Propaganda is a powerful drug.
NACHMAN: Exactly. So you have to invest quite a bit into the issue of detecting these systems, right? Detecting deep fakes: looking at that data in every possible way you can and saying, “Okay, here’s the likelihood that this is actually a deep fake.”
BERMAN: Okay. So, Alex, should Meta, and YouTube, and TikTok, and Snap be investing what’s probably literally billions of dollars right now to protect us?
Is that, is that the answer?
WANG: I think so, but it is a very difficult technical problem to properly mitigate all of the potential AI-generated content that’s out there. Which, to me, leads to a question: should we be funding a lot more research in this area? Research in that direction is not something that’s sufficiently funded today.
BERMAN: I’m a born optimist. I can see all of the AI dystopian scenarios playing out, and I lean into the ones that are more encouraging. As we bring the conversation to a close, what do you think we’ll be looking at a year from now that makes us go, “Wow, that’s incredible”?
NACHMAN: For me, I think if we make inroads into climate change and materials discovery, those are two areas where AI can really transform the way we actually do this work. And if we’re able to make any dent there, that would have a huge impact on society. Unfortunately, knowing how governments run, I think the regulatory side, and where that might fundamentally change in terms of the responsible AI piece of the puzzle, will take longer.
WANG: To me, pharmaceuticals in particular is an area where I think we’ll see very rapid progress. There have been incredible advancements in biological foundation models. Unrelated to AI, there have been huge advancements in synthetic biology, which I think will lead to a dramatic ability for us to help people who are sick. And, you know, there’s probably no nobler mission than that.
BERMAN: Phenomenal. Lama, Alex, thank you for joining us for this live taping of Masters of Scale. Appreciate you both.
NACHMAN: Thank you.
BERMAN: Thank you, everyone.
BERMAN: My conversation with these two leaders in AI makes me both hopeful and concerned. Intel fellow Lama Nachman sees a future where AI is embraced as a tool to further human creativity, not just replace human tasks with automated ones.
But AI systems will only augment work and further ingenuity if they’re designed to do so. On the privacy and safety front, Alex Wang of Scale AI notes that it’s a big business opportunity. And he’s right. Tools that safeguard personal data for AI users or successfully detect deep fakes will add enormous value to the market.
But the tech industry doesn’t exactly have a great track record when it comes to protecting privacy and ensuring security. And we’ll need market incentives to stay ahead of the safety risks. We know for sure that government regulation alone will not be enough.
AI will be even more transformational than social media. And it’s likely better framed as bigger than the industrial revolution. It’s developing much faster than any sweeping change we’ve seen before.
The founders and leaders who stay mindful and run ahead of these risks are those who stand to best serve their customers and our nation through this time of massive change.
I’m Jeff Berman. Thank you for listening.