How to spot a deepfake
Table of Contents:
- Adobe’s push for users to experiment with AI
- Navigating trust in the age of deepfakes
- How deepfakes are created
- Balancing innovation with responsibility
- Weighing the exciting use cases of generative AI against the extreme
- Creating products that can benefit social media marketers and designers
- Inside the Frame.io acquisition
- How AI changes our perception of art
- How copyright will evolve because of AI
- Why Adobe’s Scott Belsky is worried about mid-size businesses
Transcript:
How to spot a deepfake
SCOTT BELSKY: Even though it’s sensational at the moment to write articles about, ‘oh my God, a deepfake!’ It actually is good because it’s popularizing the fact that you shouldn’t be trusting everything you see anymore. When I see the creative use cases for this technology, it’s just extremely exciting, right? We’re going to be living in a hyper-personalized world where the digital experiences around us suddenly meet us where we are. The generative AI capabilities that are making that possible are the same tools that are going to be, you know, abused before they’re responsibly used. And that’s something we all have to help each other navigate.
BOB SAFIAN: That’s Scott Belsky, Chief Strategy Officer and EVP of Design and Emerging Products at Adobe. Anxiety about deepfakes and the malevolent use of AI is growing, especially in the lead-up to this year’s U.S. election. Adobe’s creativity software puts the company at the heart of the controversy. I sat down with Scott to better understand how deepfakes are made, what we all should be looking out for when consuming content, and what the deepfake landscape means for businesses and society. Scott explains how developers like Adobe are trying to verify human-generated content, but he admits that the current situation is very messy. For anyone uneasy about AI’s impact on creative work, this conversation offers some clarity about what’s worth worrying about and what to accept at a moment of chaotic change. I’m Bob Safian, and this is Rapid Response.
I’m Bob Safian. I’m here with Scott Belsky, Adobe’s Chief Strategy Officer and EVP of Design and Emerging Products. Scott, thanks for joining us.
BELSKY: Thanks for having me, Bob.
Adobe’s push for users to experiment with AI
SAFIAN: You and I have known each other for quite a while, both interested in the intersection of technology and design. In the last couple of years, with the rise of generative AI, DALL-E, Midjourney, Adobe’s Firefly… The environment’s become a little more complicated. You know, multiple videos have gone viral in the last year of designers worried about being replaced by AI. It’s kind of a tricky spot for you, sort of embracing the future, really leaning into it without alienating some of your core users.
BELSKY: Yeah, I mean, in some ways though, history rhymes, right? When you look back at the advent of the digital camera, remember that digital photographers weren’t even allowed admission to many of the American photography associations. And go even further back: painters who used to do family portraits were horribly offended by the concept that you could click a button and suddenly capture a family portrait using this new thing called film, right? Technology has always unleashed more creative potential.
So many of our customers report spending most of their time in our tools doing mundane, repetitive work. And they always ask us for features that help them become more productive and expressive on the creative side and do less of the annoying stuff. How can we not use this technology to do that? And yet at the same time, how should we train these models? And how should we make sure that people’s IP is protected? How do we make sure that contributors are compensated? And so, you know, we’ve tried to take the highest road possible on these fronts. And yet at the same time, to your point, you know, there are folks who have yet to start to play with the technology, and we’re trying to push them to experiment with it.
Navigating trust in the age of deepfakes
SAFIAN: Well, people are intimidated by new tools. And as you say, they can be used for all kinds of different purposes, right? I mean, one of the most concerning AI uses is deepfakes — things that look real and sound real, but aren’t, you know? The New York Times had an article today, I don’t know if you’ve seen it yet, about scammers deepfaking Elon Musk. And we’ve seen reports of school kids making fakes of classmates, you know, putting the heads of fellow students onto naked bodies. These risks are real, right? They are.
BELSKY: Yeah. I like to say that we’re going from the era of trust but verify to an era of verify then trust. It’s a new world. We’re going to have to verify the source of content and how it was made before we can determine whether we can trust it or not. One thing Adobe has been trying to lead the pack on, through an open-source nonprofit consortium, is the Content Credentials movement. And the idea is that good actors, people who want to be trusted, can actually add credentials to their content, so they can show how it was made. You can say: this is the model that was used. These are the tools that were used. These are the modifications that I made. And you can look at the content and say, ‘Hey, you know, do I feel that I’ve verified this media enough to trust it or not?’ I think that’s going to become the default in the future. And I think that when we start to see stuff without credentials, in some ways we’re going to question whether it’s true or not.
SAFIAN: These are kind of like nutrition labels for content, right? Like little watermarks on everything. I mean, don’t you worry that AI is just going to adapt, and we’re going to be able to replicate those kinds of things too? I mean, that must be hard tech to try to figure out how to stay ahead of?
BELSKY: Well, the nature of credentials, again, is that it’s not meant to punish bad actors. It’s meant to reward good actors. As humans, we’ve always adapted by changing our bar for what we were willing to believe. There’s that famous story about the War of the Worlds broadcast on radio, and people worrying that the world was ending. But then they quickly realized that there was this thing called fiction in the medium, and a lot of what you heard wasn’t true.
And then fast forward to today: I think we’re also going to be somewhat inoculated from the sensation of the stuff we see, as we learn that it’s often not true. And the question is what technologies and what norms will emerge that help us determine what we can trust and what we act upon. So the jury’s still out on how these things are going to evolve. But humans are pretty resilient. We start to realize quickly that stuff is not true, or needs to be verified before it’s trusted. And I’m confident that that’s going to become part of the norm for us.
How deepfakes are created
SAFIAN: For the layperson who doesn’t interact with creative design tools, can you explain how someone goes about creating a deepfake?
BELSKY: Well, the marvel of generative AI is the speed with which you can tune a model towards something, right? A generative AI model really understands how to make images according to a prompt. And if I use images of you to inform the tuning of this model to output you, as opposed to some random person (which is pretty easy to do these days), then I can actually start to generate images of you. And the same thing, of course, goes for video with video models. The same thing goes for audio. I could even prompt a net new video of Bob saying what I want Bob to say, with the background that I want Bob to have, and with increasing levels of quality, you know? I can actually just generate net new videos.
Now, there are so many incredible use cases for this technology in Hollywood. And in fact, Hollywood has always used products like After Effects and Premiere Pro and other technologies to make, you know, imagination come true and to make, you know, Harrison Ford look 30 years younger for the early sequence of Indiana Jones, the latest one. That’s a great use of this technology.
Now what’s happening is that now everyone’s going to have access to this stuff. And so people are going to do all kinds of things, some of which are really creative and exciting, and some of which are really scary. Technology and Silicon Valley as a whole is notoriously great at being creative about what can go right and notoriously bad at being creative about what can go wrong. And we have an opportunity now to say, ‘Hey, let’s also be imaginative about the downside and let’s develop new technologies in ways for people to navigate this world.’
Balancing innovation with responsibility
SAFIAN: Adobe has been through this before in some ways in still images with Photoshop. Are there lessons from that experience that you apply to this or is this of a different order?
BELSKY: So here’s the predicament, right? We know that we need to make this technology available to our customers because they won’t be able to compete. They won’t be able to survive in a world where it takes them ten times as long to do anything that they could do using tools like generative AI. We’ve trained Firefly only with licensed content because we want to have a commercially safe option for our customers to use. But we certainly need to make it available. There’s no question about that.
Now, these technologies will be available everywhere. And there are certainly, you know, even with things like Content Credentials, which we’re really excited about, and we’ve got Meta on board, and we’ve got Google on board, and YouTube, and, you know, we have a consortium of literally thousands of tools and media companies on board to contribute to and support Content Credentials so people can see how things were made and determine where they can trust it. But there will be many tools that refuse to incorporate this stuff and people will create stuff with those tools, whether we like it or not.
SAFIAN: As a business person, it sounds quite tricky, because on the one hand, you’re saying you have a responsibility — you’re trying to get Silicon Valley to look at the risks and address them. And on the other hand, you know that if you don’t make these tools available quickly, someone else will. So how restrained can you afford to be and still stay on top of the business? I mean, I’m thinking of how OpenAI released ChatGPT in part because they had nothing to lose, and folks like Google, who delayed because of the risks, then had to play catch-up. So you’re balancing all of this.
BELSKY: Well, we are. And I encourage our teams to anchor themselves on our customers. And here’s the thing: When you talk to the average customer, they are excited about superpowers that make them more productive and help them grow their career. They’re cautious about all the things you and I are talking about, and they want to use tools that are both pushing the edge of what’s possible, but also doing it in a responsible way. Of course there are some folks who say, ‘Hey, you know, burn the boats and let’s just do whatever is out there,’ and ‘who cares about how it was trained or whatever else.’ And then there’s also another group on the other end of the spectrum that says, ‘don’t do AI, Adobe.’ Like, ‘just don’t do it. Just ignore it. Why should you have to use this technology? We don’t want anything to change.’ But the average is a large group in the middle that are very pragmatic and responsible about this technology, and those are the folks that we’re taking the pulse of constantly and anchoring ourselves to, and that also feeds the business, right?
SAFIAN: And it’s the biggest customer base.
BELSKY: 100%.
Weighing the exciting use cases of generative AI against the extreme
SAFIAN: In the political sphere, we recently had Donald Trump alleging that images of Kamala Harris rallies were AI-generated. Are we likely to see even more discussion and conversation about deepfake activity during the election? And is that a distraction or is that, like, significant?
BELSKY: Whether it is the internet, you know, whether it is Bitcoin, you know, these are all technologies that were very much used for illicit purposes first, right? This technology is no different. And we will certainly see early use cases that are concerning.
I think the most important thing, if you take a step back, is that we talk about the use cases that we do. You know, even though it’s sensational at the moment to write articles about, ‘oh my God, a deepfake!’ It actually is good, because it’s popularizing the fact that you shouldn’t be trusting everything you see anymore. Or when you hear these stories about someone getting a call in what sounds like their grandmother’s voice asking for money. It’s a very scary thing, but the answer is not to put a stop to the use of the technology. The answer is to make sure people know how it can be abused, and then develop precautions.
And listen, when I see the creative use cases and also the very practical business use cases for this technology, it’s just extremely exciting, right? We’re going to be living in a hyper-personalized world where the media, the commerce, the digital experiences around us suddenly meet us where we are. The generative AI capabilities that are making that possible are the same tools that are going to be abused before they’re responsibly used. And that’s something we all have to help each other navigate.
SAFIAN: I often reflect about how Northern California and Southern California approach technology differently. That Hollywood tends to make tech into, you know, the end of the world and, you know, Terminator and the robots are coming to get you. And Northern California, it seems to be like, technology is going to make everything perfect, it’s utopia. And reality is probably somewhere in between.
BELSKY: You look at the role of sci-fi, right? I mean, sci-fi is a prototype for the future that is intended both to motivate us and to scare us. And when we see sci-fi, it really helps us think about the implications and helps us act accordingly. The Terminator films are decades old now, right? And yet now we’re starting to see the age of robots that are hyper-intelligent. Are the developers of these robots taking into account what they learned from watching Hollywood extrapolate what could go wrong, as they’re building these systems and inserting safeguards? I’m sure, right? What if Hollywood and sci-fi and the imaginative minds of creatives are actually part of the system that keeps humanity safe over time? It’s a fun thing to think about.
SAFIAN: Scott’s perspective on how sci-fi influences real-life technology is intriguing: for all the government committees and regulations and articles about AI’s potential downside, perhaps Hollywood storytellers are subconsciously embedding the most powerful lessons of all about the risks in a headlong race to the future — especially for sci-fi-loving technologists. After the break, we dig into other lessons, about transparency, customer focus and more. Plus, Scott shares why he believes small businesses will thrive in the AI era. Stay with us.
[AD BREAK]
Before the break, Adobe’s Scott Belsky explained how deepfakes work and both the risks and opportunities in new design tools. Now, we talk about the new rules of transparency, and why he believes small businesses will thrive in the age of AI. Let’s jump back in.
Creating products that can benefit social media marketers and designers
SAFIAN: Adobe provides tools, in some ways, to two different sides of a competition. There are individual creators, and then there are big enterprises, right? Like last week you announced the B2B version of Journey Optimizer for marketing. How do you think about who your target customer is? And, as you’re talking about looking to the future, who that customer will be?
BELSKY: It’s interesting because, you know, over the time that I’ve been involved with Adobe, frankly, until just the last couple of years, it’s always felt in some ways like two companies, because the tools were so different, right? And the customers, to your point, were so different, and they oftentimes relied on people to translate between one another. Even when the creators talked to the marketers, it was like you needed someone in between to translate. Fast forward to today, and these worlds are really collapsing into one another. Why? Because the social media marketers, or the people who are delivering these digital experiences, want to do so in real time, because of course social is where a lot of the action is happening these days. They need to be able to take a creative asset, change it on the fly, and then deliver it without having to wait for a process to happen.
Also, the creators want to up their game, too. They want analytics on how their assets are performing. They want to be able to deploy things directly. They don’t want to have to go through these channels and these sort of annoying processes. And as a strategy leader, you know, I’m trying to help connect the dots and get all these teams to work together and ship products that prove this.
Inside the Frame.io acquisition
SAFIAN: I know you acquired Frame.io, tried to acquire Figma… What’s the strategy of what you’re looking for there?
BELSKY: There’s always this desire to either fill gaps or to find adjacencies where the one plus one equals three, right? And, you know, Frame.io is really a great example, the number one platform for collaboration for anyone in the video space. And we have great video tools, but the video tools just serve the editors, not all of the stakeholders that work with them. And it became very clear that that was a major gap to fill. And Frame.io was just the perfect partner.
I think a lot of the great opportunities actually do collapse these worlds. And as a strategy leader internally, when you come across a possible company to buy or to build a partnership with and you’re not sure which business unit should be the sponsor — should it be the marketing folks? Should it be the creative folks? You know, that’s a great sign right now for me because that means that this company is likely a glue that can make these workflows sing together.
How AI changes our perception of art
SAFIAN: I’m curious how your perception of what art means has changed in the age of AI. If a prompt can generate an impressive painting, is that creative work in the same way? As the definition of creativity changes, how much do you think about that stuff?
BELSKY: Well, I love this question. Being involved with the MoMA in New York, that’s one common question that a museum so focused on modern art is always asking: What is art? There’s an artist, Refik Anadol, an AI artist who generates these amazing art experiences through algorithms. MoMA ended up acquiring a piece for its collection. But is it art? Well, of course it’s art, in the sense that he came up with this, and he’s expressing something, and he’s telling a story. But it’s an entirely new medium.
Now, when you go and you use a prompt and you generate a net new asset, is it art until you change a pixel? There’s actually interesting copyright law questions around that. Do you own it yet? Or do you have to change some of it? And how much of it do you have to change for you to own it and have it to be one of your expressions, right?
Listen, this is an age-old debate, and the only comfort I find is that it’s always been an argument. And yet that process in and of itself serves its purpose, because art is made to get us to ask questions. Art is made to perplex us, to surprise us, in some cases to divide and then unite us, right? Art is meant to be provocative. So as long as it is, I would say it’s doing its job, and it is art.
How copyright will evolve because of AI
SAFIAN: The producer of this show asked the design team at our company what I should ask about Adobe, and the issue they were most interested in was copyright rules: where copyright will evolve in this AI world.
BELSKY: The concerns and questions at the top of creatives’ minds are different now than they’ve ever been in my career.
I’ll proactively mention this debacle we had with the updating of our Terms of Use, which had not changed in 11 years. We had made a modification around when content is scanned for child sexual exploitation imagery. But the change was paraphrased by our legal team in a pop-up that customers got when they had to accept the Terms of Use, which said ‘Adobe has updated its content moderation policy.’ That caused some customers to be concerned: ‘Oh, Adobe is looking at my work.’ We were really just doing what’s legally obligated. But then they went in and saw the limited license that Adobe requires (and, by the way, so does every other company on the internet that makes a thumbnail for you or resizes your content to work on different devices; these are normal things that anyone in technology knows are par for the course). Customers were suddenly like, ‘Wait a second, limited license? Does that mean you’re training on our content?’ Now, we’ve never trained on our customers’ content. We’ve always been very, very clear about how our models were trained for generative AI, and about our compensation program for those whose content is part of Adobe Stock and is trained on, et cetera.
But the lesson I took away was that every company now needs to go back and revisit its policies around this stuff to make sure that they are taking into account the concerns of the modern creator. Because the concerns have changed, you know? With the limited license that every company required for the last 10 years, no one thought that that might mean a company is training a model on their work, right? A model that could conceivably even compete with them. And by the way, we took this on within a week. Many late nights to early mornings. We actually went through the entire Terms of Service and annotated it. We said exactly what we do and don’t do. Usually companies don’t say what they won’t do in a Terms of Service; usually it’s just what they do, or what they’re getting customers to agree to. We were explicit about what we don’t do. And I’m actually really proud of it. I would say it’s probably the most creator-friendly Terms of Use on the internet right now, as it relates to tools companies. But it was a wake-up call. And so I think that, to your point about copyright and copyright law, we have to be very proactive now. The legal department is a really important part of everyone’s strategy now, to make sure you do the right thing.
Why Adobe’s Scott Belsky is worried about mid-size businesses
SAFIAN: You have such an interesting perspective because you are inside a very big company in Adobe. You’ve been an entrepreneur. You’ve been an early stage investor in a lot of start-ups. Does AI make it easier for smaller enterprises to compete with bigger ones or harder because they have less data to train their own models and the cost of sort of building and running these models is so high?
BELSKY: I have to say that I’ve been increasingly bullish on both ends of the spectrum and increasingly concerned about the middle. And let me explain. There will be some mega, mega, mega companies that have these huge data moats, right? They might be the companies that own the operating systems we use every day. They might be the companies that store the customer data profiles that power brand experiences. But I don’t think there are many, many of them. And I do think that those companies will only get bigger and more important in the world.
On the other hand, I actually think that small businesses are going to explode, in a good way, because every small business has always been constrained by its inability to build out functions. So if you were a small business that sold, you know, boutique eyeglasses or sweaters, or you were a landscape designer or whatever else, you didn’t have a finance department and a marketing department and a legal department. Well, now you’re going to have these generative AI-powered capabilities for next to nothing per month, that will make your contracts for you, manage your finances, automate your billing. I mean, you’re going to have the functions of a mid-sized company at your disposal as a small business. And as a result of that, I think a lot more people will go into SMBs. I think they’ll be far more profitable. I think they will start to grow. I’ve been thinking about the small team/big business phenomenon and wondering when we’ll have the first three-to-five-person team that becomes a $100 million business. So I’m very, very bullish on small businesses.
Now, here’s the question, though: what happens to the middle? What happens to these businesses with maybe hundreds, if not thousands, of employees that are getting kind of outpaced by both the small businesses and those mega businesses? That’s the area where I think there will be some disruption.
SAFIAN: I guess it’s going to be harder for even those smaller businesses to become the super big ones, right? Because you’re not necessarily building out the capabilities in some of these areas. You’re leveraging off of somebody else.
BELSKY: That’s right. But then I would say: why? Why do you want to be one of them, you know? Being someone who’s in a big company, maybe it’s a case of the grass always being greener, but I’m like, ‘Oh my gosh, it’d be so much easier with such a smaller team, building something we’re passionate about without all this overhead and cognitive load of managing a large enterprise.’
It’s funny, whenever I go to Japan, you have these micro-experiences in Tokyo, where you go to a small cafe or a small restaurant (and I mean small, like 7 to 10 people), these artisanal experiences. I kind of wonder if that will be more of the future for the rest of the world: so many more small businesses and artisanal, crafted experiences where people can do what they’re super passionate about. It would be ironic if there will be more human experiences as a result of this very nonhuman technology.
I think one of the things we as humans love and long for is the old days, hundreds of years ago and for the rest of humanity before that, when we were known in our village, in the small stores that we frequented, the trading posts that we went to. You were recognized by name and by face, and they knew your favorite this, they knew your kids. And then suddenly, only in the last hundred years or so, post-Industrial Revolution, everyone became anonymous, right? Every store was made to be generalized for the masses. The internet was built in that model too, where you go to a website, you click your gender, you click your size; you’re unknown. So I do think that we’re about to enter a world of personalization at scale, where even small businesses can know you, and treat you in ways that make you feel special. That’s a huge opportunity this technology will bring. And I’m excited about that.
SAFIAN: Well, thank you, Scott. Thanks for doing this. It’s always great to talk to you.
BELSKY: Of course. I think we covered a lot of territory.
SAFIAN: What I took away from my conversation with Scott is that when discussing AI, you can’t talk about the good without the bad, and vice versa. The same tools that make media more accessible, or supercharge a bootstrapped company, can also deceive people and spread misinformation. As long as we’re well aware of the angel and demon sitting on AI’s shoulders, we’ll hopefully have the wherewithal to use the technology responsibly.
As for deepfakes, it’ll be a challenge to not fall too far into a conspiracy-riddled mindset. But I think regardless, a more mindful approach to the way we scroll and consume information is ultimately a good thing. At least I hope so. I’m Bob Safian — the real Bob Safian. Thanks for listening.