Make AI your co-pilot
Rapid Response with Bob Safian: Can you truly take advantage of AI before speaking its language? Microsoft’s VP of Design and AI, Dr. John Maeda, discusses AI’s common misconceptions and its misunderstood opportunities. A veteran of AI development, John shares valuable insights for entrepreneurs about how to engage with the new technology — from overcoming trepidation to making AI work harder for you — and AI’s potential to help leaders make better decisions.

Transcript:
Make AI your co-pilot
JOHN MAEDA: When you understand how the machine really works, you’re gonna fear it less. This triangle of technology, design, and business … I can hear the technology folks: this is amazing, let’s do more of it. I can hear the design social science part of it saying, we have to ask questions. And I can hear on the business product side, well how is this gonna lead to more profitable business, happier customers? Navigating those three points is so critical for AI to make a true difference.
Building with AI is important. So you’ll know where the proverbial puck is bouncing. It isn’t just like moving, it’s like bouncing off of everything and it’s moving at light speed. Unless you’re there near the models that are evolving, you’re not gonna know where this is gonna end up.
BOB SAFIAN: That’s John Maeda, VP of design and artificial intelligence at Microsoft.
John is working at the center of AI development, including a new programming tool called Semantic Kernel, and is also author of the book How To Speak Machine.
I’m Bob Safian, former editor of Fast Company, founder of The Flux Group, and host of Masters of Scale: Rapid Response.
I wanted to talk to John because understanding AI and where it’s going has become an essential task for all of us, and John has been deep in the area throughout its evolution.
I first met John when he was president of the Rhode Island School of Design. He has since worked at a string of Silicon Valley VC and tech firms, and as an adviser and board member for both consumer and B2B enterprises. John has a master’s from MIT and an MBA, as well as a doctorate in design, and has written books about leadership and about AI.
For the next few months, this podcast is going to lean into the rapidly evolving impact of AI for individuals, for businesses, and for society. John provides an ideal place to start by offering an accessible foundation for understanding the complexities we face, and why and how to respond to the changes swirling around us.
[THEME MUSIC]
SAFIAN: I’m Bob Safian. I’m here with John Maeda, the VP of Design and Artificial Intelligence at Microsoft, and author of the book, How to Speak Machine. John, thanks for joining us.
MAEDA: Bob. Thanks for having me. I’m so excited.
Why AI is in the “Spring Break” phase
SAFIAN: So, you have been engaged with AI at various levels for many years as a technologist, as a designer, as an executive. You’ve described our current phase for AI as Spring Break in Fort Lauderdale. And I’m curious what you mean by that. Are you talking about, like, bad behavior, like a party destined for a hangover, or is there something else that’s gonna grow from this party?
MAEDA: Bob, you must know a lot about spring break in Fort Lauderdale.
SAFIAN: A little bit too much, yes.
MAEDA: It’s the fact that this “AI winter” phrase, most people aren’t aware that it comes from the AI nuclear winter. And the fact that much of science in the late 1900s was born of war, which is pretty dark and dim. So I like to contrast that darkness to, oh my gosh, a nice spring break in Fort Lauderdale. The drinks are free, they’re blue. It’s all okay. It’s awesome. It is a switch. It’s a switch from something very dark to something very bright. And to your point, could it be irresponsible? Absolutely. Could it be overdone? Absolutely. Because we’re starting from AI nuclear winter.
SAFIAN: The people who are still afraid, cause there is a lot of still fear around AI…
MAEDA: Absolutely.
SAFIAN: Are they still living in the nuclear winter, or is that always part of a new technology?
MAEDA: There was a time when we were all around the fire. We were hungry, we were thirsty, it was dark. And then there’s something rustling out there in the bushes, and one person says, “oh, I wonder what’s out there,” and wanders out and doesn’t come back. Versus: someone goes out to where the bushes were rustling and comes back saying, look, we have something to eat now. I think that story is ingrained in our DNA.
John Maeda on how to speak machines
SAFIAN: You authored a book called How to Speak Machine, explaining for those who aren’t engineers, and maybe some of those who are, what’s involved in computing and AI. The recent blossoming of generative AI seems to be about the machine speaking our language — that it speaks human. Do we not need to speak machine anymore?
MAEDA: In this day and age, where we can give prompts, natural language prompts, it is extremely powerful to understand what’s going on underneath all of that. And that’s the world of people who can speak machine. And I think that when you understand how the machine really works, you’re gonna fear it less. And if you don’t understand it, it’s gonna feel even more scary. So yes, we want to speak more natural language without speaking machine, but if you don’t know what’s happening behind the scenes, it’s always gonna be mystifying.
SAFIAN: And once you do see what’s happening underneath the machine, it’s not scary the same way.
MAEDA: It’s not scary the same way is the point. Can it be scary? Yes. It’s kind of like looking inside your car. Do you remember when cars — you opened the hood? It was like, whoa, if you stick your hand in there, it’s not good. So, when you understand what’s going on underneath the hood, you understand how powerful it is. And something that’s powerful, we tend to respect and treat a little differently.
SAFIAN: The term AI is used to cover so many things right now. There’s enthusiasm for ChatGPT, really, that’s catalyzed the whole tech sector, the stock market, billions and billions of dollars in value. But there are other sorts of AI and machine learning and automation and robotics that have been implemented in a lot of areas for a while. How much of what we’re talking about now is new? How much is talking about old things in a new way?
MAEDA: First off, we’ve had AI for so long, it’s called a computer. A smart engineer could make a machine do things that you couldn’t tell your cat to do. Like, imagine telling your cat, “I want you to move this ball from point A to point B.” You can write a computer program to do that. If your cat did that, you’d say it’s intelligent. There is a phenomenon that has been known since the 1960s, thanks to the inventor of this kind of AI chat — Dr. Joseph Weizenbaum, my AI professor at MIT — who recognized that any human being, when faced with a machine typing at it, is gonna think a human is behind it. He discovered that any above-average human will have the delusion that a human’s behind it. Why? Because we call our car Lola, we anthropomorphize, we can’t help doing that. These models are so much more powerful than we ever could have imagined, and the inference ability is next level. We think, oh, there’s a person behind there, but there isn’t a person behind there. That’s a delusion, and it’s just that computation got a lot better.
SAFIAN: Our listeners are entrepreneurs and business leaders. How do they adjust to this spring break that we’re in? Is there a first mover advantage for every business in getting engaged with AI? Or is it just if you’re building AI products versus using AI products?
MAEDA: Building with AI is important. So you’ll know where the proverbial puck is bouncing. It isn’t just like moving, it’s like bouncing off of everything and it’s moving at light speed. Unless you’re there near the models that are evolving, you’re not gonna know where this is gonna end up. If you’re end-usering AI via the tools, that’s awesome because you get a speed up and you’re competitive. But to be truly competitive, being near this stuff and building with this stuff is…I don’t know if it’s a first mover advantage anymore. I think everyone’s moving.
Inside Microsoft’s Semantic Kernel
SAFIAN: I know you’ve been working on a tool at Microsoft called Semantic Kernel, and I’m not sure I’m gonna describe it correctly. But it’s an open source tool for creating your own AI or getting the most out of AI. You use this phrase: “better inputs, better outputs.” And I wasn’t sure, like, as a guide for engineers or as a guide for non-engineers to be able to code and get the benefits of AI better. Where does this sit? Am I explaining this at all the right way?
MAEDA: Oh, you have walked into the living room perfectly. So, Semantic Kernel does something called AI orchestration, which is basically the core piece of technology needed to go from point A to point B and do the steps along the way. So like, I want to plan a party — well, you have to do these 10 steps. And it figures out how to do each of those 10 steps and does them.
SAFIAN: And Semantic Kernel will then reach out and interact with other software and other tools to be able to create what it is you’re trying to create.
MAEDA: Yes. And classically, programmers have been doing this by hand. This lets you do something that took me a while to understand. This kind of computation is, as we know, non-deterministic — it makes things up. The old way of programming would never make anything up. It was truly Vulcan. When you do both together, it does much more than you expect. So think of those 10 steps from A to B. A few of them might be done with large language model AI, but a few may be done by conventional computation. It’s like a hybrid car drawing from two sources of energy. So what it lets you do is take on many more, and much more complex, computational tasks with AI.
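To make that orchestration idea concrete, here is a minimal Python sketch of the hybrid John describes: deterministic code steps chained with an LLM-backed step to get from point A to point B. It is not the Semantic Kernel API itself; the `complete` helper and the party-planning steps are stand-ins for whatever model client and plan you would actually use.

```python
# Minimal sketch of AI orchestration: chain deterministic steps with an
# LLM-backed step to get from point A ("plan a party") to point B.
# `complete` is a stand-in for a real model call, not a library function.

from datetime import date, timedelta

def complete(prompt: str) -> str:
    # Placeholder for a large language model call; wire this to your provider.
    return f"[model output for: {prompt}]"

def pick_date(weeks_out: int = 3) -> str:
    # Conventional, deterministic computation: it never "makes things up".
    return (date.today() + timedelta(weeks=weeks_out)).isoformat()

def draft_invitation(party_date: str, theme: str) -> str:
    # Non-deterministic step: delegate the creative writing to the model.
    return complete(f"Write a short invitation for a {theme} party on {party_date}.")

def plan_party(theme: str) -> dict:
    # The orchestrator: each step feeds the next, mixing both kinds of computation.
    party_date = pick_date()
    return {"date": party_date, "invitation": draft_invitation(party_date, theme)}

print(plan_party("retro arcade"))
```

The hybrid-car point is the design choice: the date math stays Vulcan and repeatable, the invitation copy comes from the model, and the orchestration layer decides which kind of step to use where.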
SAFIAN: I’ve heard you talk about how there are already chatbots, like the GitHub bot that increases engineer productivity by something like 20%. And that sort of makes it sound like the advantages right now are most useful for people who are already technologists. Is that roughly right, given where the tech is now, or not necessarily?
MAEDA: So if you look at OpenAI’s different models, there’s Ada, Babbage, Curie, and Davinci. You may have heard one of these names used sort of in passing. It’s very simple: Ada is A, Babbage is B, Curie is C, Davinci is D. Ada came first; Davinci is the newest one. Why that’s important is it helps you understand that these models have been evolving, but they’re all valid for different tasks. There are code models in the A, B, C, D hierarchy, because who’s making these models? Developers.
And so the first cool trick that was found was writing code. I’m not saying it’s by accident, but it’s like, oh wait, it can write code. Wait, it can write papers? So yes, the first gen was coding, but by basically gen 1.5, it can write copy for us, it can write this entire podcast. That is what the average non-developer is feeling right now and is perplexed by. Right? Like, wait, do you need us anymore?
SAFIAN: And even as you’re inviting me into this living room that you’re in, how much do I need to understand the coding, and what’s going on in each generation, versus like, oh, I just wanna do better user research?
MAEDA: The reason why I wrote How to Speak Machine is that even in the era pre-amazing AI, I just saw so many people being left behind. And you think of tech and those who understand how to speak machine and those who don’t. And the barrier has been programming. But at the end of the day, if you’re in business, you have to understand it conceptually. So, the Semantic Kernel is made for developers, but I was working with one of our heads of PR comms, and within five minutes, we were able to walk through the kind of thorny bush of the VS Code install.
And he’s like, wait, I just generated an embedding. This is what I’m talking about. It’s that kind of moment that I think is not bad to have.
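For anyone wondering what “I just generated an embedding” looks like in practice, here is a small sketch. It uses the OpenAI Python client as an assumption rather than the Semantic Kernel walkthrough John describes; the point is simply that a sentence becomes a vector of numbers you can compare and search.

```python
# Sketch: turning a sentence into an embedding vector.
# Assumes the `openai` Python package (v1+) and OPENAI_API_KEY set in the environment;
# the model name is one of the Ada-family models mentioned earlier.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.embeddings.create(
    model="text-embedding-ada-002",
    input="Semantic Kernel does AI orchestration.",
)

vector = response.data[0].embedding  # a plain list of floats
print(len(vector), vector[:5])       # 1536 dimensions for this model
```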
SAFIAN: I did hear you talk somewhere about how organizations can tap into and parse data now using AI in ways that maybe they’re not aware of.
MAEDA: I’m sure you remember in your past lives when you wanted data, you’re like, we’re gonna need someone to be able to gather the data. We’re gonna need someone to be able to interpret it. We’re gonna have like a data science person. Oh, I think we need someone to actually understand these problems from a social science perspective. So then a year and a half has passed and it’s like, wow, I wish I had that data; the competitor had the data. This kind of AI is unusually good at gathering data that is unstructured. It’s good at giving it structure so you can use it. It’s good at testing some general assumptions around how the data could be used. And it’s able to present that to you in a way that is useful.
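As a rough illustration of that point, here is a hedged sketch of “unstructured in, structured out”: ask a model to turn free-form customer feedback into rows you could analyze. The prompt, the field names, and the model name are illustrative assumptions, not a specific Microsoft tool.

```python
# Sketch: using an LLM to give structure to unstructured data.
# Assumes the `openai` Python package (v1+); the schema and model name are illustrative.

import json
from openai import OpenAI

client = OpenAI()

FEEDBACK = [
    "Checkout took forever and the coupon code didn't work.",
    "Love the new dashboard, but CSV export is broken on Safari.",
]

prompt = (
    "For each line of customer feedback below, return a JSON array of objects "
    "with the fields: topic, sentiment (positive/negative/neutral), summary. "
    "Return only the JSON.\n\n" + "\n".join(FEEDBACK)
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)

# In practice you would validate or repair this before trusting it downstream.
rows = json.loads(response.choices[0].message.content)
print(rows)
```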
SAFIAN: I know that having a volume of data helps make your AI more effective. So if I’m a bigger organization, that maybe gives me an advantage. On the other hand, if I can access data somehow and interpret it much faster, maybe it levels the playing field a little bit for a smaller player to get in.
MAEDA: Now with these kinds of new models, we’re seeing democratization.
How many meetings have you been at, Bob, where you’re like, can someone get some data? And then someone at the meeting says, wait, that data’s wrong. So I think organizations of every scale have offsites to collect data across the organization, and they can never do it. What is different is that these kinds of models can work with imperfect pictures of data, and they can fill them in with plausible kinds of data.
[AD BREAK]
SAFIAN: Before the break, we heard Microsoft VP of Design and Artificial Intelligence, John Maeda, explain how AI fits into the trajectory of computing, and the advantages for those who stay close to where the puck is moving. Now he talks about how AI can be a tool in leadership decisions, as well as the positive impact of adding friction in an AI world.
Plus, he shares lessons about why we should approach AI as a player rather than a victim, and the work that will be required to guide this tech to its best outcomes.
SAFIAN: Many of us have an emotional relationship with our machines — our phone, our computer, our TikTok account. We’re beholden to them, almost. Does AI make that worse? How do we have to recalibrate our relationship to our machines, to our technology?
MAEDA: I use my phone for timers and reminders. But I am the pilot. So I gave it the instructions. I don’t feel it telling me what to do. I have a few colleagues at Microsoft that are calling this collaborative UX. You’re collaborating with your AI. You have to be a good boss. If you don’t give it a smart goal, it’s not gonna know what to do.
If you let AI do everything in an automatic way and it’s making mistakes, it didn’t hallucinate. It’s your fault for letting it do things that you have to actually think harder around.
We’re now in a new math: make AI think harder. You want to tell AI, “Hold it, I don’t think that data’s right.” Because the co-pilot is controlled by you, the pilot. So if you trust the co-pilot too much, it really is on us now.
How AI could change leadership
SAFIAN: Before you wrote a book about speaking machines, you wrote a book about leadership, reflecting on your transition to being president of a university, the Rhode Island School of Design. What does AI change about leadership?
MAEDA: AI in that era could have been extremely useful for me to think of more “what if scenarios.”
Because when you’re in the middle of a crisis, it’s hard to think. Fast thinking is on fleek, because you’re like, desperate, but you know, you gotta turn on slow thinking. So I think AI can be a partner when a leader is doing slow thinking, trying to figure it out.
Being a leader is lonely. Like, who can you trust? And like, oh my gosh, you’re gonna trust an AI? No, I’m just using a calculator that does inference in a very efficient way. So I can ask myself, am I doing the right things, and I can game out those things.
SAFIAN: It’s giving you perspective. I mean, that’s the idea of what a co-pilot is. It’s not necessarily saying this is the answer you must use. It’s giving you options.
MAEDA: It’s a weird design principle. It’s add friction, which makes no sense, ’cause you’re like, no, we’re supposed to be frictionless. Right? That’s what simplicity is. You have to add friction to remind the executive, “Hey boss, don’t forget, this is just an AI. It’s up to you to use critical thinking.” We have to teach how to use these things better. And a way to use these things well means asking critical questions. So I think critical thinking is going to be the skill that we’re gonna have to teach a lot, to make AI think harder, because AI does not naturally think critically.
SAFIAN: You’ve straddled these realms of design and technology, and sometimes technologists create what’s possible without necessarily focusing on the human implications. Designers tend to start from the human side. And I’m curious how you try to navigate that.
MAEDA: This triangle of technology, design, and business… I’m awkwardly placed across these three points. I can hear the technology folks: this is amazing, let’s do more of it, AI Fort Lauderdale for the win. It’s gonna be incredible. Right? I can hear the design, social science part of it saying, we have to ask questions, critical thinking, how does it impact everything? How does it lead to more unfairness? Really important humanity questions. And I can hear on the business product side: well, how is this going to lead to more profitable business, happier customers? A whole different kind of dimension. They’re all related. I feel like navigating those three points is so critical for AI to make a true difference to business, to the culture, and to advancing technology.
Where are we in the arc of AI development?
SAFIAN: You’ve been part of this AI arc for a long time. So I’m gonna ask you this question. Like, how far along the curve are we? Like in the development and execution of what’s possible, is like ChatGPT, like 50% of the way there, is it 15%? Is it 5%? Or is there no way for us to really know?
MAEDA: Well, having been there in the eighties, working on something called a Lisp machine, which was the AI workstation, the Lamborghini of AI. A lot of computation really owes a debt to that era; things like Python in large part emerged from things that we did in Lisp and on Lisp machines. But that said, it couldn’t do much. I think this kind of AI will do a lot more for corporations and individuals.
This mental model, built out of the machine learning world: it’s gonna take us seven months, and then they build a model that identifies a cat in a forest with 72% likelihood. And you’re like, well, can it identify dogs in a forest? No, no Bob, we’re gonna have to go back; seven months from now, we’re gonna make another model. That’s the way we positioned machine learning over the last five years.
This new AI is just a warm boot. You just show up with your data, you sprinkle it on the foundation model, and it produces heat. This is a whole new kind of AI that’s more on the application side, versus the machine learning side. “How to Speak Machine” teaches you how machine learning works, which resulted in these foundation models. But everything after is brand new. There’s gonna be a new kind of AI app developer. They need new kinds of tools, which don’t exist yet.
What’s at stake with AI development
SAFIAN: Wow. I am deep into your living room now. What’s at stake with all this? How do we parse out the good and the bad?
MAEDA: The stakes are between how easy AI is to vilify, versus what it can do to accelerate you in your career and change your career. That sounds too Pollyanna happy for many people, but it’s more about being a player versus a victim. This stuff has its challenges. There are issues with it. And so you’re going to question whether it’s the right thing. That’s why it matters to boost that critical thinking vitamin in your brain while you approach this, looking for the opportunities both for your business and for humanity, and understanding the technology.
SAFIAN: But we should want to be players in this, right? But be players with our eyes open.
MAEDA: Yeah. It’s not simple. I just think of that mentality, the true player mentality, where they are thinking extremely critically about what they’re doing. That kind of player on the field of AI, seeing more of them, whether in pure tech or pure business or pure design-ish, that’s gonna be interesting. But it requires understanding how to speak machine, how to speak AI, and also how to speak really good human.
SAFIAN: If you’re gonna be a player, you shouldn’t just be a casual player. You have to straddle all of these things in the play that you’re doing.
MAEDA: And it’s different than buying the gear. You know, the people who buy the gear, it’s like, oh my gosh, I’ve got such and such sweatpants, and you see this headband, this headband is authentic. And you’re like, okay, but can you really play? It means work. In this AI world, you think, oh, I can lean back and have it do everything. No, you have to work really hard, because AI will not think harder on its own.
SAFIAN: Well, John, this is great.
MAEDA: Oh, thanks for the chance to hang out again, Bob. It means a lot to me.
SAFIAN: After talking with John Maeda, a key takeaway for me is the importance of both a technical understanding of AI, at least conceptually, and the very human task of managing its tools and evolution.
So many times in life, what seems too good to be true deserves further inspection. I know that my education about AI is only beginning. Maybe it will need to go on indefinitely. But that’s okay. Learning about the new helps me to clarify my uncertainties.
I hope you’ll join in as I continue to explore AI’s intriguing possibilities and evolving impacts in the episodes ahead.
I’m Bob Safian. Thanks for listening.