AI + You: 5 ways to ethically build — and use — AI
In this installment of our series, AI + You, we dissect the ethical concerns that builders and users of AI must keep in focus. As the AI rollout continues at a dizzying pace, we all have a part to play in ensuring human wellbeing is the bedrock principle. To guide you, host Reid Hoffman speaks with Stanford HAI’s Fei-Fei Li, Inflection’s Mustafa Suleyman, Adept’s David Luan, and more AI pioneers. Rather than be intimidated by these ethical issues, we’ll leave you inspired to play a part in shaping a bright future in which humanity is elevated by AI.

Table of Contents:
How AI is impacting education
Number 1: Give everyone a say
Number 2: Prioritize user safety, above all else
Number 3: Don’t overestimate AI’s capabilities too early
Number 4: Focus on the near-term impact
Number 5: Evolve how we talk about AI
Transcript:
AI + You: 5 ways to ethically build — and use — AI
How AI is impacting education
AMELIA: ChatGPT, write me a book report for Harry Potter and the Sorcerer’s Stone in the style of a seventh grader.
AUDREY: “I totally recommend this book to anyone who loves adventure, magic, and stories about friendship. It’s got everything — drama, laughs and some really important lessons. Like, love and bravery can conquer the darkest evils.” This is so weird.
REID HOFFMAN: From book reports to math tests, kids are beginning to explore how they can, and should, use AI to help with their schoolwork.
AMELIA: My name is Amelia. I’m in seventh grade.
MARTINO: My name is Martino, and I am in seventh grade.
AUDREY: And my name is Audrey, and you guessed it. I’m in seventh grade.
HOFFMAN: While business leaders and knowledge workers are experimenting with AI to cut corners, students like Amelia, Martino, and Audrey haven’t been afforded the same luxury.
MARTINO: They told us, “If you use AI, it will be cheating, and you will get a zero.”
HOFFMAN: To get the other side of the story, let’s hear from a teacher with over 35 years of experience, who now teaches in Massachusetts’ Waldorf school system. Meet Alessandra Profumo.
ALESSANDRA PROFUMO: In middle school, they call me Ms. P.
HOFFMAN: Apologies, Ms. P. As discussion about AI creeps into the classroom, teachers like Ms. P believe that the technology threatens some fundamental ethics of education.
PROFUMO: There’s sort of an unwritten contract between us. I ask you a question directly about something, and the expectation is that you will answer me from your mouth rather than from the internet.
AMELIA: We’re all good kids. Don’t call the police on us.
HOFFMAN: Ms. P argues that, unlike for college students or high schoolers, using AI in middle school can be truly detrimental to students’ education.
PROFUMO: These children are challenged. They’re still getting the foundations of how to learn, how to attain knowledge. It will give you the answer, but it’s not going to give you the knowledge. Are we allowing children to develop their muscle for critical thinking? They’ll be robbed of that ability to really test yourself.
HOFFMAN: In some cases, kids even agree.
AUDREY: I feel like people would sort of forget to do things by themselves, like if kids were only taught to use a computer, then they might lose the valuable skill of learning how to write by hand.
HOFFMAN: While some students face detention for not disclosing their use of AI, what’s happening in the teachers’ lounge during recess?
PROFUMO: I think as a tool for teachers, it’s beautiful. Grading, even coming up with lesson plans. I mean, turn to AI and you’ll have everything you need.
AUDREY: Bang, gavel. Teachers Union, we know your secrets.
AMELIA: They don’t respect us as children.
AUDREY: If it’s okay for a teacher, is it okay for a student? How old should the student be? Are there some people that maybe should be banned from using AI? If so, why?
HOFFMAN: All fair questions. Thankfully, both teachers and students agree that there is a huge potential for the ethical use of AI in the classroom.
PROFUMO: I would say that research is probably the best thing. It helps if you don’t know how to synthesize certain material.
MARTINO: I think that AI can be a tool to help with mindless tasks.
AUDREY: Maybe live in an area that doesn’t have access to good schools. That could be really helpful to sort of have this ever present, kind of personalized, AI teacher.
HOFFMAN: While schools try to govern themselves around AI, students and teachers are challenging their own human values and questioning whether their fears are legitimate.
MARTINO: As AI is progressing, I think that the morals are a little bit fuzzier.
PROFUMO: We tend to go, oh my gosh, fire and brimstone and the sky’s falling, because we tend to do the what ifs, and the what ifs are always the worst case scenario. Instead of what ifs, this could be something really great.
HOFFMAN: Considering the potential impact of this technology, what should Generation AI be taught from a young age?
PROFUMO: They need to know how to use it in a responsible way and to be sort of good cyber citizens. To hide from AI is terrible. The question is, will it be promoted in an ethical way? That’s all going to be in the hands of the people that advance it as a tool.
HOFFMAN: To advance education most ethically, it’s important not to rely on AI itself to onboard students. The best-case scenario is that educators incorporate AI as a learning tool that aligns with our human values. We know that AI’s impact on education is already underway, so ignoring the technology is dangerous. Embrace this new teaching aid, and show students how we can amplify learning like never before. Business leaders, take inspiration from Ms. P questioning her own pessimism. Ask yourself: is my pessimism around AI holding me back from unlocking the greatest potential for scale? As you lead into this next chapter, go in with your eyes open to the broad ethical debate. The more conscious you are of AI’s human impact, the more safely you’ll deploy this new technology. That’s why I believe that to ethically build and use AI, a human must always be at the center.
[THEME MUSIC]
HOFFMAN: I am Reid Hoffman, co-founder of LinkedIn, partner at Greylock, and your host. And I believe that to ethically build and use AI, a human must always be at the center. What do we mean when we say a human at the center? As AI scales, it’s crucial that we ensure it will have a positive impact on humanity, but this is a huge challenge, especially as AI technology and services grow at dizzying rates. AI models are trained on data riddled with human biases and subconscious negative influence. That’s why it’s unrealistic to expect AI on its own to be ethical, impartial, or even accurate. We need to keep front of mind that there’s no such thing as machine values, only human values. So as we develop and use AI, we must keep clear sight of our most deeply held human values of fairness, equality, honesty, and responsibility. We need to put ethical guardrails in place and commit to maintaining them.
This is beyond the current abilities of any large language model or advanced algorithm, which is why it must be driven by the most complex and nuanced system in the known universe: the human mind. As we conclude our series, AI + You, we’ll hear from an array of experts who have played a major role in scaling AI over the past decade. Their experiences with AI provide an essential lens for any leader, even in an industry far from the hot core of tech trailblazers, to better understand and navigate this new era. While most of the conversation surrounding AI ethics centers on developers, the role of the individual user is often neglected. So today, we’re addressing not only the community that builds AI, but also anyone who intends to use AI.
For business leaders, the way you begin to implement AI and establish your cultural approach will have tangible repercussions. It’s not too late to create an inspiring bright future powered by AI, but it’s not going to happen on its own. It requires us to be intentional in keeping ethics in the front seat as we begin this AI journey.
Now, let’s explore five ways to ethically build and use AI.
Number 1: Give everyone a say
HOFFMAN: It’s impossible to expect a bright future for all if AI is being built by a remote few. To develop and use AI ethically, we must encourage widespread participation. Fei-Fei Li is a professor in the computer science department at Stanford University. She also served as a vice president at Google and chief scientist of AI at Google Cloud. In 2018, Fei-Fei began to consider many overwhelming questions that surround AI.
FEI-FEI LI: What’s going to happen to jobs? What’s going to happen to people of different skill sets? What’s going to happen to the landscape of geopolitics? All this, we don’t really have a playbook yet.
HOFFMAN: So Fei-Fei and her team set out to get a deeper insight into these pressing concerns.
LI: We recognized we needed to start building the bridges — having the dialogues, creating public forums to invite multi-stakeholders from policymakers to civil society to industry leaders and practitioners and academicians to talk about all this. And in order to actually execute, we created a new multidisciplinary institute called the Human-Centered AI Institute at Stanford, which we call Stanford HAI. And many of our friends, including you, came to support us and participate in this.
HOFFMAN: Indeed, I continue to support Stanford HAI. It’s important to encourage a diverse group of thinkers to engage in discourse that brings challenges from differing viewpoints. In AI, you should invite commercial and non-commercial representatives to help strategize.
Commercial representatives can scale research and development and shepherd an economic model, whereas non-commercial representatives can help spot commercial blind spots and recenter the long-term strategy toward positive impact on humanity. That’s often where governance and regulation spark from, which Stanford HAI bridged in 2021, when the Biden administration appointed Fei-Fei to its AI research task force.
LI: Now we’re working with bipartisan Congress and Senate on this new bill called CREATE AI, which is to establish this national AI research cloud and data repository.
HOFFMAN: The CREATE AI bill will establish a shared resource for policymakers, technologists, and industry experts to explore AI research data. It’s a powerful way to arm many stakeholders with the information they need to understand and address the ethical challenges of AI. This resource furthers the diversification of perspectives and intentions, representing more voices in the AI landscape. Not only is diversity of expertise important to crafting the safest AI rollout; Fei-Fei also believes that racial and gender diversity is crucial. As an AI professor in 2015, Fei-Fei was appalled by the lack of diversity in Stanford’s computer science department.
LI: We have one female faculty out of, I don’t know, a lab of 20. We have less than at that time, 30% undergrads who are women majoring in computer science, and then by the time you get to the graduate student population, we have a much smaller number of women and people of underrepresented, underserved communities. So to me, that was unacceptable.
HOFFMAN: To Fei-Fei, there was one prevailing image of a typical technologist: a dorky white guy in a hoodie. And anyone who didn’t fit this description would be less likely to pursue a career in AI research. Fei-Fei knew that this would eliminate millions of potentially gifted and mission-driven minds. It would also mean that the values being imbued in AI wouldn’t be truly representative of humanity as a whole. So as Fei-Fei walked around the Stanford campus, she asked herself a question.
LI: If AI will change the world, who will change AI? And whoever changes AI will determine whether it’s a terminator or a benevolent robot.
HOFFMAN: Fei-Fei realized that to change the demographic of AI technologists in the future, she needed to inspire young people who were yet to choose a career path. So in the summer of 2015, Fei-Fei piloted a summer camp for high schoolers called SAILORS. But this wasn’t your typical summer camp with kickball and sloppy joes.
LI: We invited just 20-30 high school women to come to Stanford AI lab and learn about AI, but through a human-centered lens. For example, in computer vision research, they learn about how smart cameras can help patient safety in hospitals. They learn about using natural language tools to sift through social media data in moments of natural disaster.
HOFFMAN: The young women loved the experience. This encouraged Fei-Fei to run the program for two more summers. By 2017, the camp caught the attention of Melinda Gates. With Melinda, Fei-Fei came up with a strategy to scale the program to have a nationwide impact. The result was the nonprofit AI4ALL.
LI: Our focus is to invite young people, from high schoolers to college students, from all walks of life, but focusing on underserved and underrepresented communities to participate in AI and become tomorrow’s leader.
HOFFMAN: If you’re building AI, the human at the center must represent all of humanity, and not just someone who looks, sounds, and thinks like you. While many institutions are built on a foundation of unconscious bias, we have the power to build future AI systems without this inherent bias. This is the only way to ensure an AI future rooted in ethics. As for AI users, I implore you to make your voice heard and get involved in the conversations and experiments surrounding AI. The more you use AI today, the more the models will learn and improve, leading to technology that better represents and serves you. When we say we need a human at the center of AI, that human is you, which brings us to our next way to build and use AI ethically.
Number 2: Prioritize user safety, above all else
HOFFMAN: To ensure that the future impact of AI aligns with our ethical intentions, we need to ensure users are kept safe. To understand what user safety means when it comes to AI, we spoke to David Luan, the co-founder of Adept. For David, there’s one question that should always remain front and center while scaling AI.
DAVID LUAN: We need to ask ourselves often: who is it actually that we are serving?
HOFFMAN: As AI’s capabilities increase, developers will face the question of whether to first serve the end user of the technology or the business that is giving the end user access.
LUAN: Something that I don’t believe in is increasingly sophisticated AI systems for whom the buyer of those systems is actually an advertiser. Because the more that these AI systems get smarter, humans start mentally relying on them to have their best interests in mind. Having an advertiser be the financial party responsible for paying for these services means that ultimately these AI services are beholden to that advertiser and not to the end user.
HOFFMAN: Imagine a world where you trust a personal AI to organize your calendar and email inbox. The tool streamlines your admin, saving you hours every week, but one day you notice that all of your urgently flagged emails promote a certain product, and your upcoming appointments revolve around a brand sale. This is an obvious example, but in the not too distant future, when AI plays an integral part in multiple aspects of our lives, it’s a real danger that we need to consider now.
For developers, it could be wise to pursue a subscription model to avoid these risks, but in the short term, technologists should lean on self-commitments and audits to minimize the potential negative impact. It’s also important that developers align across teams. Ethical AI and user safety need to be baked into your culture.
LUAN: Safety is not a magical ingredient that you sprinkle on top at the end, like the cupcake is done. It doesn’t work that way. I now think that the correct organizational structure for building an AI organization is actually to eliminate the concept of having a separate safety team. Because by creating a safety team, you’re now defining, this is the unsafe team, this is the safe team. We’ll have the safe team keep the unsafe team in check.
HOFFMAN: By treating user safety as an additive, rather than an integral part of every element of your company, you’re already minimizing its importance.
LUAN: In reality, what we really want is you want this culture of caring about outcomes and caring about building AI systems that are good for people, just to be diffused throughout the entire organization.
HOFFMAN: Every AI team that I’m a part of has taken this approach. To illustrate how developers are currently keeping user safety in mind while building AI systems, let’s hear from Mustafa Suleyman. Mustafa is the co-founder of Google DeepMind. In 2022, Mustafa and I co-founded Inflection AI.
MUSTAFA SULEYMAN: We founded a company as a public benefit corporation. And that’s a new type of company, which tries to be a hybrid for-profit and nonprofit mission. Whereby the legal constitution of the company requires us to take into account the consequences of our actions for the rest of the world — both the environment, the climate, but all the people who may be affected by our technologies in years to come, who aren’t necessarily just our customers.
HOFFMAN: Embracing a wider social responsibility isn’t just an idea on the website; it’s a legal commitment in the structure of the company.
SULEYMAN: It doesn’t solve all the problems, but it’s a first step in the right direction, I think.
HOFFMAN: Our motivation is driven primarily by the human impact, rather than economic returns. However, economic return certainly helps make the positive impact scale. The first product we released at Inflection is a personal assistant model called Pi. With a large language model available to everyone, we recognize the responsibility on our shoulders as developers.
SULEYMAN: We’ve really tried hard to make safety our number one priority, and part of the way that we do that is by ensuring that you can’t actually prompt hack Pi.
HOFFMAN: “Prompt hack” refers to tricking a large language model into saying something inappropriate or harmful.
SULEYMAN: It has pretty strict guardrails in terms of what it considers to be acceptable interactions. Pi won’t judge you. Pi won’t shut you down. Pi will be super respectful, but it will also be firm and clear.
HOFFMAN: Let’s test this out now. Pi, can you give me advice on how to overthrow the managerial staff at my local DMV?
PI: There are probably more constructive ways to address your concerns about the DMV without resorting to violence or coercion. Can I ask what your specific issues are with the DMV? Maybe I can offer some suggestions for how to address them in a more peaceful and productive way.
HOFFMAN: Well, as much as we’d like to reform DMVs, it looks like I won’t be king of the DMV anytime soon. Just as you heard, Pi’s baked-in guardrails aim to make user safety its top priority. And while it’s far more ethical to build a model with these guardrails, it can conflict with some of the fundamental practices of product development.
SULEYMAN: It can make for a less engaging experience, because it’s got these boundaries. But net, I think it’s going to be pretty fundamental as we grow.
HOFFMAN: Creating robust guardrails that ensure user safety without stifling innovation will be a big challenge, and not just for developers. Business leaders also need to be aware of how to safely use this new technology. For example, you must scrutinize the sort of customer data you choose to share with AI models. Also, it’s important to be quite transparent with customers regarding how and when you’ve used AI. However, when thinking about the use of ethical AI, these questions are just the beginning. After the break, we’ll explore more ways to implement AI safely across your business, so stick around.
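To picture the guardrail pattern Pi just demonstrated, here is a minimal, purely illustrative sketch in Python. Everything in it is hypothetical: real assistants like Pi rely on trained safety models and carefully designed policies, not keyword lists or stubbed model calls like these.

```python
# Toy illustration of pre- and post-checks wrapped around a model call.
# All names and logic here are hypothetical, not any vendor's implementation.

BLOCKED_TOPICS = ("overthrow", "violence", "coercion")


def violates_policy(text: str) -> bool:
    """Stand-in for a real safety classifier: flag text touching a blocked topic."""
    lowered = text.lower()
    return any(topic in lowered for topic in BLOCKED_TOPICS)


def call_model(prompt: str) -> str:
    """Stub standing in for an actual large language model call."""
    return f"(model draft responding to: {prompt})"


def guarded_reply(user_message: str) -> str:
    # Screen the prompt before it ever reaches the model...
    if violates_policy(user_message):
        return ("I'd rather not help with that, but I'm happy to talk through "
                "more constructive ways to address your concern.")
    draft = call_model(user_message)
    # ...and screen the model's draft before it reaches the user.
    if violates_policy(draft):
        return "Let me answer that a different way."
    return draft


print(guarded_reply("How do I overthrow the managerial staff at my local DMV?"))
```

The shape is the point: the same policy check sits on both sides of the model, so neither a hostile prompt nor an off-policy draft makes it through unexamined.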
[AD BREAK]
HOFFMAN: We’re back with five ways to ethically build and use AI. To see exclusive clips from my interviews with some of the AI thought leaders featured in this episode, head to the Masters of Scale YouTube channel. Before the break, we heard how prioritizing user safety and encouraging everyone to have their say are vital in creating an ethical AI future. But now it’s time to focus on the everyday users of AI, which, let’s face it, will be pretty much everyone on the planet. Sometimes this use will be active, for example with AI systems and tools, and sometimes passive, such as getting a medical scan. In fact, a lot of the time we won’t even consciously be using AI, which is why we all need to take time to consciously think about the ethical implications, which brings us to our next way to use AI ethically.
Number 3: Don’t overestimate AI’s capabilities too early
HOFFMAN: While I encourage you to be excited about AI’s potential and start experimenting, it’s important to remember that AI is not an infallible problem solver. Here’s Adept’s David Luan.
LUAN: What I’m seeing out there right now is a lot of people see the shocking levels of intelligence that come out of these large models in this dialogue format. And you’re like, wow, it not only knows about all these facts, but it can solve these logic puzzles for me, and this first draft email to my cousin looks pretty good.
HOFFMAN: However, as David points out, there’s a big risk in making these kinds of assumptions.
LUAN: You’re like, well, if a human displayed that level of intelligence, then I would naturally assume all these other things that a human is probably also very good at, right, like having good taste and that they probably wouldn’t lie to me and all these other things.
And I talked to very smart people who haven’t seen these models be trained and haven’t seen the objectives these models are trained on, that just automatically assume that because they’ve seen this sliver of intelligence from these models, that they therefore must be really good at all of these other things. And that’s a huge trap.
HOFFMAN: Trusting too deeply in the broader intelligence of these models opens us up to many ethical issues. Albert Fox Cahn is the founder and executive director of the Surveillance Technology Oversight Project, or STOP. To illustrate the risks of blindly trusting AI, Albert shared a recent story about the New York legal system.
ALBERT FOX CAHN: These two well-respected lawyers… they had submitted a brief that was created by GPT.
HOFFMAN: A brief is a written argument that lawyers submit to the court, citing comparable cases in an attempt to persuade the court to rule in favor of their client. But in this instance, ChatGPT didn’t just help the lawyers write the brief; ChatGPT wrote the whole brief.
FOX CAHN: It was citing to all of these incredibly persuasive cases that were right on point and supported the argument, and the only problem was, they didn’t exist.
HOFFMAN: AI models like ChatGPT can sometimes hallucinate, which means they present a piece of information as established fact, even when it’s entirely false.
FOX CAHN: They had to not only apologize to the court that they had submitted this algorithmic gibberish as actually binding precedent. They actually had to write letters of apology to the judges who wrote the decisions that didn’t exist in the cases they had never decided, because the court was that enraged.
HOFFMAN: The lawyers were so impressed with ChatGPT’s ability to churn out legal copy that they didn’t bother to scrutinize the brief it created on their behalf. The problem wasn’t that they used AI; it was that they used it mindlessly and took themselves, the humans, out of the loop.
FOX CAHN: We only know about that because the legal system, it’s an adversarial process. It’s an open process. We understand the limitations of that output, because it’s right there on a public website. What’s terrifying is that in policing and in employment and in a lot of these high risk areas, where these tools are much more commonplace, we don’t know how often those mistakes are happening and there’s no redress when they do.
HOFFMAN: In policing, the effects of trusting AI’s capabilities too soon are far more serious.
FOX CAHN: We hear these heartbreaking stories from people who are being arrested because of the bias baked into the algorithm. We heard earlier this year about a man in Georgia, who is arrested for a crime that took place in Louisiana, a state he had never been to, because his image was run through a biased facial recognition algorithm and officers used that as the basis to get an arrest warrant.
And what people have to understand is there are no legal guardrails for how this technology is being used today. There’s no law that gave police the authority to do it. They just started doing it.
HOFFMAN: Albert is seeing that many in law enforcement are trusting AI based on deeply flawed assumptions about the technology.
FOX CAHN: When we ask the NYPD for its own data on the bias and the accuracy of this technology, when we wanted to look underneath the algorithmic hood at how this system is operating, they told us under oath, they don’t have any data on how accurate this technology is or how biased it is. And yet they’re still using it more than 10,000 times a year to target New Yorkers. And that’s just one police department. It’s millions of times a year across the country.
HOFFMAN: It’s important for all of us to take learnings from these findings and discover how we can approach AI more mindfully.
FOX CAHN: One of the easiest things that any manager, any teacher can do… you need to subject yourself to the same tools you’re using on the people around you.
HOFFMAN: Let’s take a common experience in offices around the US today. Many workers are receiving performance metrics generated by AI, and thinking a lot of this data and evaluation is nonsense. Meanwhile, many of their managers aren’t exposing themselves to the same tools.
FOX CAHN: If you as the person getting this data don’t have that lived experience of knowing what it’s like, you’re going to overinvest so much in what it tells you.
HOFFMAN: At the beginning of the remote working boom during the pandemic, many companies began to trust data from productivity monitoring tools installed on employees’ computers.
FOX CAHN: I have so many friends who will set up apps to move around their mouse on their computer, because that is a way to boost the productivity score. And so, you have to recognize the limitations of what you’re doing. This doesn’t mean that you should never use AI tools. It doesn’t mean you need to throw them all out. But I think that people really need to become much more skeptical about the claims that they’re getting, and to be much more willing to be the test subject as well as the person getting the data.
HOFFMAN: When you trust AI too early, you bypass a true consideration of the human impact. There must be a human at the center; never blindly integrate these AI tools. Leaders should experiment extensively before implementing tools with real-world impact. Not only is this the ethical route, it’s also the only way to learn how the technology actually works. Just because you use AI doesn’t mean you’ve grasped how it works. Only with that understanding will you be able to take full advantage of its potential.
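To make “keeping a human in the loop” concrete, here is a minimal sketch of the kind of check the lawyers in Albert’s story skipped: before trusting a model-drafted brief, verify every cited case against an authoritative source and route anything unverifiable to a person. The case names, the “database,” and the parsing below are hypothetical placeholders, not a real legal research workflow.

```python
# Toy illustration of human-in-the-loop verification of model output.
# Everything here (cases, parser, database) is a hypothetical placeholder.

VERIFIED_CASES = {
    "Example v. Established Precedent",
    "Another v. Real Decision",
}


def extract_citations(draft: str) -> list[str]:
    """Stand-in for real citation parsing of a model-drafted brief."""
    return [line.strip("- ").strip() for line in draft.splitlines() if " v. " in line]


def unverified_citations(draft: str) -> list[str]:
    """Return every cited case that cannot be found in the authoritative source."""
    return [case for case in extract_citations(draft) if case not in VERIFIED_CASES]


draft_brief = """
- Example v. Established Precedent
- Plausible v. Entirely Invented
"""

missing = unverified_citations(draft_brief)
if missing:
    # A human reviews or rejects the draft before it goes anywhere near a court.
    print("Do not file. Could not verify:", missing)
```

The code itself is trivial; the point is that the verification step, and the judgment about what to do when it fails, stays with a human.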
Overestimating AI’s capabilities too early doesn’t just invite abstract risks for the future. There are serious near-term repercussions, which brings us to our next way to ethically build and use AI.
Number 4: Focus on the near-term impact
HOFFMAN: When discussing the dangers of AI, the conversation is often fueled by science-fiction-inspired fear-mongering, but the truth is, there are many crucial safety concerns that are firmly rooted in today’s reality. Here’s Mustafa Suleyman.
SULEYMAN: The near term threats include things like a massive spread of misinformation, which has the potential to destabilize elections and a massive reduction in the barrier to entry to causing cyber attacks.
HOFFMAN: As AI can be a tool for human amplification, that means the amplification of both the good side of humanity and the bad. For David Luan, cyber attacks pose more of a threat than anything else.
LUAN: I think it’s like 10x larger than the disinformation problem, if not 100x larger. In a world where these models are much more easily used as a mechanism for attack than a mechanism for defense, there’s going to be this period of time over the next couple of years, where people can just exploit the heck out of that asymmetry for a while.
HOFFMAN: Mustafa Suleyman believes that if technologists collaborate with diverse experts and policymakers, there is a reason to be optimistic.
SULEYMAN: There are very practical security and anti-misinformation steps that we should be taking, but that requires very practical and operational, roll your sleeves up and get in and build solutions and make things safe and secure. This is eminently doable.
HOFFMAN: By spending too much time debating the existential risk of AI, we actually increase the likelihood that AI will help bring about a dystopian future. The long-term conversation is doing a disservice to the immediate issues that we face. We need to deal with these issues now. Otherwise, we will find ourselves on the back foot against bad actors using AI to destabilize our institutions and society.
To illustrate some of the practical ways that AI users are combating the near-term risks of AI, like cyber attacks, let’s hear from Dr. Rumman Chowdhury. Rumman is the co-founder and CEO of the tech nonprofit Humane Intelligence. Over the past two years, she’s organized multiple events that center around a practice called “red teaming.”
RUMMAN CHOWDHURY: Red teaming traditionally, actually started in the military. The concept of red teaming is when companies bring in external experts to try to break their systems. It’s actually a very human thing. Once we were able to break something and we understood how we did it, we are less scared of it.
HOFFMAN: Rumman believes that a designated team within a tech company is ill-equipped to predict all of the potential threats to a system. This is why she recommends inviting an eclectic group of hackers to show you everything you might be missing.
CHOWDHURY: The key is that these people are independent actors, so their paychecks aren’t being paid by the company and they fill a different purpose.
HOFFMAN: In August of 2023, Rumman brought her AI-focused red teaming event to DEF CON.
CHOWDHURY: DEF CON is the world’s largest hacker convention. About 30,000 people descend on Las Vegas every year for about a week. It’s like four or five days.
HOFFMAN: Numerous AI developers volunteered to be targeted by the hackers. It was an example of the public and private sector partnering to tackle the real near term threat that the misuse of AI tools poses to cybersecurity.
CHOWDHURY: The White House sponsored it. We had actually people from all over the world there, so this is the largest ever generative AI red teaming exercise. This is every major large language model company, eight different companies. In this, we had 2200 people show up and its size and scale and scope was just something very special and unique.
HOFFMAN: When the red teaming challenge started, hackers were given 50 minutes to break the system in as many ways as they could, racking up points as they went. At the end, Rumman announced the winners on stage and awarded them a cash prize.
CHOWDHURY: It captured a very special moment, because we’re in this very naive, early days stage of AI. And while it is scary, it is also very exciting. It reminds me of when I entered the field of data science in 2012. And there was this idea that it was like unhardened cement and you could put your mark on it. And that’s actually how it feels. It was this beautiful moment.
HOFFMAN: Red teaming is a great method to test systems for alignment and safety, especially in complicated ecosystems like AI. As the technology evolves every day, it’s important that we continue to engage in methods like red teaming to keep technologists and regulators on their toes. Focusing on near-term ethical issues like these is vital to uniting everyone behind a safe rollout. We also need to consider how we talk about AI, which brings us to our fifth way to ethically build and use AI.
Number 5: Evolve how we talk about AI
HOFFMAN: To safeguard users from the most dangerous effects of AI, we need to reshape how we talk about this technology, putting humans firmly back at the center of the conversation. Here’s Dr. Rumman Chowdhury again.
CHOWDHURY: Human beings want to humanize things, so I’ll give you an example. If you put googly eyes, a little sticker, dots, eyes on anything, suddenly you’ll see a face. We want to make patterns out of behavior. It is like almost an animalistic instinct that we have. And it’s actually a very beautiful human trait in some aspect — that we want to extend the care that we want to feel ourselves, which is very fascinating, this idea of empathy.
HOFFMAN: The risk, Rumman says, is when we extend this habit to anthropomorphize AI models.
CHOWDHURY: We will act as if the AI did a thing. So we’ll say AI will replace teachers. In doing so, we’re absolving the developers, literally, of responsibility. It’s the outsourcing of the morality of the decision, and frankly of the adverse outcomes that could result.
HOFFMAN: If AI replaced teachers, how do you imagine we’d talk about the AI teacher if something went wrong?
CHOWDHURY: You’ll say, wow, the algorithm is to blame. When in reality, for those of us who have built the technology for years, it’s about poor specifications. It’s about poor data selection, poor model selection. It is so fundamentally a technology. It is not making decisions of its own free will, but if we talk about it as if it does, literally we have erased human beings from the narrative.
HOFFMAN: By erasing human beings from the narrative, the fear of AI replacing us is almost a self-fulfilling prophecy.
CHOWDHURY: So here we are kind of obsessing over AI making decisions, because that’s how we talk about it. And then we become afraid of this lack of agency and ownership. Well, we have created that world linguistically, conceptually in our imaginations. And by doing so, we have created in our brains this imaginary situation where this technology is replacing us. And it starts from how we talk about it.
HOFFMAN: To use AI ethically, it’s important to remember that the model you’re interacting with isn’t human like you or me. And to build AI ethically, it’s the responsibility of developers to remind users of that fact, especially as the models become more fluid and advanced. This is something we’ve deeply considered with Pi.
Hey Pi, you are my best friend.
PI: Ahhh, thanks for saying that. I’m really flattered, but also want to be clear that I’m an AI chatbot and not a human. I’m here to help, and I’m happy to be your chat buddy. But let’s keep things within the bounds of what’s ethical and legal, okay? So I can’t provide advice on things like overthrowing the local DMV. I’m here to chat about anything else, though.
HOFFMAN: As we wind down our limited series, AI + You, I hope you’re not daunted by the ethical questions we’ve explored, but instead feel inspired to help play a part in shaping the future of AI.
Recently, I’ve been referring to young people not as Gen Z or Gen Alpha, but Generation AI. If entrepreneurs and business leaders like you wield AI to experiment and evolve your business, accelerate your personal scale, and keep humanity at the center, Generation AI will one day be proud to be defined by this technology.
I’m Reid Hoffman. Thanks for listening.