FEATURING:

Olivia Gambelin
Founder and CEO of Ethical Intelligence

Can AI Be Ethical?

February 26, 2024 | 32:43

As artificial intelligence (AI) continues to evolve, how do we ethically approach a technology with such wide-ranging implications? In this episode of The Active Share, Hugo talks with Olivia Gambelin, founder and CEO of Ethical Intelligence, for a conversation about AI ethics, responsible AI, and how our current systems—legal, social, economic, and political—adopt, and adapt to, new technologies.

Meet Our Moderator

Hugo Scott-Gall, Partner

SHOW NOTES
00:44 Host Hugo Scott-Gall introduces today’s guest, Olivia Gambelin.
01:29 What defines AI ethics?
03:34 Challenges in AI implementation.
04:20 Using ethical AI as a competitive edge.
05:54 The roles of companies vs. legal bodies in AI.
07:00 Who holds accountability in AI decisions?
14:00 The impact of AI on global cooperation and cultural sensitivity.
21:37 Ethical considerations in social media evolution.
26:40 The optimistic outlook on responsible AI.
32:23 Summary and final thoughts on ethical AI.
Transcript

Hugo Scott-Gall: Today, I am pleased and delighted to have with me Olivia Gambelin. Olivia is the founder and CEO of Ethical Intelligence. She works directly with business leaders on the operational and strategic development of responsible AI, and has advised organizations ranging from Fortune 500 companies to Series A startups in utilizing ethics as a decision-making tool. She’s the author of Responsible AI: Implementing an Ethical Approach in Your Organization, scheduled for release in June 2024. Olivia, thank you very much for coming on the show.

Olivia Gambelin: Thank you so much for having me, Hugo. I’m excited to be here.

Hugo Scott-Gall: Great. So, let’s get going. Let’s start with I guess a high-level framing question. How do you define AI ethics? What does it include? What does it not include? Is it your definition? Is it a commonly established definition? How do you think about that?

Olivia Gambelin: Well, I can actually break it down into a few different pieces, Hugo. From the big umbrella perspective, if you meet any ethicist, we’ll define AI ethics as the practice of implementing our human values into our technology, specifically in AI. That is the commonly accepted definition. But, what I like to do is actually split AI ethics and responsible AI into two different things. If you’ve heard of the term “responsible AI,” that’s really an industry term that’s been picking up a lot lately, and it’s also the title of my book.

But, AI ethics to me is a very design-based approach. A design practice where I’m looking at the technology itself, and I’m looking at whether I need to protect for values or align for values. And, I can go into more depth there. But, if you then look from AI ethics to responsible AI, responsible AI is the big umbrella that fits all of these different topics such as AI ethics, regulation, governance, and safety. Responsible AI, if you want to take a step back, is the practice of developing AI in a responsible manner. So, it’s much more focused on the operations and development of AI rather than, let’s say, the exact design and features that you’re looking at.

Hugo Scott-Gall: But, is that still a framework that you’ve developed? Or, is it a commonly accepted frame where you’re importing from elsewhere and advising people on how to implement?

Olivia Gambelin: So, the split that I just told you about between AI ethics and responsible AI is commonly accepted. I think my framework comes into play actually on the responsible AI side. I work specifically through a framework that I’ve designed on how to strategically implement responsible AI and what kind of gaps exist within an organization. But overall, people will see that difference, that split where AI ethics fits underneath this umbrella of responsible AI.

Hugo Scott-Gall: So, when you advise companies, what are the things they are struggling with? What are the most frequently asked questions? And, in response to that, what are the most frequently used answers you give them?

Olivia Gambelin: So, a lot of companies, if they are in a high-risk industry—let’s say financial, health, or anything media-related—are very much focused on risk. So, the questions that I get there are: What am I missing? Am I compliant with the law? What kind of regulation do I need to be watching for that’s incoming? What kind of risks are associated with our specific use cases? It gets really down into this risk-based, “I must protect” mindset. How do I protect to make sure that I am not doing harm unintentionally? That’s one box of questions.

I also get, on the other side, companies that are looking more at taking an innovation-based approach towards AI ethics and responsible AI. Those kinds of questions come in from companies that are in, let’s say, a little bit more of the creative fields, or industries that don’t necessarily have strict regulations or standards already in place. And, those companies are really looking at: How do I make this a competitive edge? How do I turn something like privacy, or fairness, or transparency into a competitive edge where I stand out from my competition in the market? So, I don’t have a blanket answer. There are really these two pockets, these two categories of mindset that companies will fit into.

Hugo Scott-Gall: And, is it your overall view—these are very high-level questions, and I’ve got much more specific ones to come. But, is it your overall view that companies are very much part of setting the standards of commonly accepted practice as this evolves, as the capabilities evolve, versus just taking guidance and legislation and interpreting it? That they can be part of demonstrating best practice?

Olivia Gambelin: So, you’re hitting on a huge debate that’s going on right now, with some of the major players in the European Union saying that the proposed regulation is too strict. It really comes down to this mindset of whether it is regulation and policy that guides the practice of AI, versus companies having space to be able to help shape the face of best practices in AI.

So, it becomes this hot debate of is it companies coming in giving that guidance, being able to influence as the pace of the market goes or as the pace of innovation goes? Or, is it the responsibility of these legal bodies? I can give you my perspective on this. From my point of view as an ethicist, something is hard coded into our regulations once we figure out concrete answers. Ethics is a very grey space. There’s a lot of needing to find balance and different context setting. There are not necessarily black and white answers.

Once we find those black and white answers, though, that’s when it moves into regulation and law. But, that means that regulation, law, legal compliance, that’s baseline. That is what you absolutely must be doing, but it doesn’t mean what you should be doing. That’s where ethics kicks in right above that.

Hugo Scott-Gall: You mentioned an important legislative act coming. How much of a role should governments have within that? This is a really simple question. But, who gets to decide? Presumably, you shouldn’t have the inventor deciding on the uses for a technology. You need an ethicist alongside. You need multi-disciplinary people around the table to really understand the consequences of something. I’m really curious. But, who should get to decide in your opinion? And, is it public sector? Private sector?

Olivia Gambelin: Absolutely. And, it’s a really layered question there because like you were saying, it’s public versus private. The answer is both. I’m gonna drive you a little bit crazy ‘cause most of the time, I’m gonna start an answer with, “It’s both.” But, that’s a classic ethicist there. It’s public and private. Public meaning we need to have the public interest in mind, but private also considering that that is where the development’s coming in. People in the private sector are the ones that know the technology best.

One of the challenges right now with the current regulation is that it’s set by a public sector that doesn’t necessarily have the experience of working on the technology, or really understanding how AI is developed, how it’s conceptualized, or who sits around the table in the private sector. It can feel like regulation is written behind closed doors with the attitude of “we’re not sure, but we’ll try and regulate,” and that causes frustration. So, that balance between public and private is incredibly important. Public brings in that social good, that public interest, while private brings in really the expertise and knowhow of what’s going on in the actual development cycles.

Then, to your second point of who sits around the table, well I would love to see more ethicists around the table, to be honest. I think the challenge that we’re facing right now when it comes to that organic growth of our technology of AI is—the measurements of success that we’re using aren’t necessarily in tune with what we as a culture, as a society, as global citizens really want them to be.

So, if you’re looking for measurements of success like short-term revenue growth, or numbers growth, or financial return, then that’s something you can be looking for around the table. But, if you don’t have someone sitting at the table, like an ethicist, who’s looking at more long-term impact and using that as a success marker, that’s where we start to see some of that imbalance when it comes to that organic growth like you were talking about, and where we need a little more control.

And really, you need people who, I would say, have a different approach to how technology is developed. Like I’ve been saying, I’m an ethicist. I don’t have black and white answers. I exist in the grey. A lot of these questions that we’re facing exist in the grey, and they’re difficult to deal with. So, you need people with that different kind of mindset around the table to balance out the minds that are already sitting there.

Hugo Scott-Gall: I guess as the world became more digital, the legal system—and not just the legal system, but all the things around it that aren’t law but that society clearly values and wants to protect—was challenged. A lot of economic activity shifted online; a lot of human activity shifted into the digital world, away from the physical. Does AI accelerate the need for the legal system to change? We went from physical, to analog, to digital. And now, you’ve got a high level of intellectual capability residing outside of humans that can do more things. It can infer. It can create. It can analyze. It can adapt and change things. It can imitate. Is this a big stress to the legal system? Never mind the ethics on top of that.

Olivia Gambelin: It would depend on who you ask. I would say yes. I would say that this is definitely a point in time where we need to adapt. We need to grow. I keep pointing back to the EU AI Act only because it’s been making such big progress and it’s going to be a huge topic in the next six months, even. But, I want to point to that act because when the act was first put out, when the regulation was first written, there was no mention of generative AI. Nothing whatsoever. And then, we had the release of ChatGPT, and it actually stalled the development of the AI Act because they had to go back and say, “Okay. How do we account for this new model?”

Even though generative AI did exist before ChatGPT, it just wasn’t a prevalently used type of architecture. So, using that as an example, the way the law was written before was looking at specific use cases of AI. Having our AI development outpace even the process of bringing that regulation into effect is a huge indicator that we need to rethink at least how some of these legal systems work. How do we either make them adaptable so that they can grow alongside the development of AI, or shorten those feedback loops to be able to keep pace with the development of our technology?

Hugo Scott-Gall: And, that’s a time-honored problem. Technology moves faster than governments can. Where does democracy come into this? To know what the people think, do you have to start asking them more often about some of these thorny issues?

But, where our democracies are involved, should governments be trying to get ahead of this and asking people what they think? Or, is it more in response: the technology has evolved, we understand as governments that this may not be popular, and now we’re going to ask you what you think. So, is there a democratic deficit versus the pace of technology here?

Olivia Gambelin: I would say there is—I wouldn’t put the democratic deficit on the government. I would actually put it within the enterprise themselves. So, one of the challenges that I see companies facing quite often is a lack of feedback loop in their development processes. And, what I mean by feedback loop is quite literally talking to their users and talking to experts in the field that they’re designing or producing, developing for.

So, if you think about it, take a healthcare startup developing software for nurse practitioners to use. There is this breakdown in remembering that you have to talk to the nurse practitioners to understand what their needs are, and you have to actually do it. Just because you are designing the software doesn’t mean you know the best solutions for that profession. You have to talk to the nurse practitioners. You have to talk to the experts and get that feedback loop in, that kind of democratic input, because it shapes what a software platform, an AI system, any of that would look like. So, I would say it’s less on the government level, and I would put that democratic breakdown specifically within the enterprise.

Hugo Scott-Gall: But, I’m still interested in the role of government in that you could paint I think quite a pessimistic, gloomy picture that the world is becoming more fractured.

So, is AI so serious, in terms of if it becomes so powerful and it starts making big decisions outside of human control, that there does need to be global coordination? And, does it worry you that in a world that is becoming maybe less friendly and a little more hostile, whichever geographic blocs of countries you’re thinking of, this couldn’t have come at a worse time? The requirement for global coordination is up there with bio-pharma, with nuclear, and yet the world is becoming a more fractured, less coordinated place.

Olivia Gambelin: Yeah. There’s absolutely a strong need for that global coordination like you’re talking about, Hugo. I think one of the challenges that is I would say more unique to the case of AI is the fact that we have these systems that can reach a global scale. Hence the need for that global coordination. But, the way that we interact with our technology is heavily influenced by our different cultures.

So, although we need that global communication to be able to work on the main risks of AI and those existential, major risks that we’re seeing, when it comes to specific risks or specific application, it still needs to be culturally sensitive. Which makes it difficult for that global collaboration. You can look at and compare China versus the U.S.—very, very different approaches to AI. How do we account for that if we’re supposed to have some type of global cooperation there?

Hugo Scott-Gall: Yeah. This is clearly going to be a bigger and bigger issue. Maybe it’s not fair to ask you this. Which countries have you seen particularly ahead of the curve on this? Particularly thoughtful around this? And, when you think about countries, is it always reflecting what are that country’s interests? So, if a country’s very exposed to certain industries, does that make them more likely to focus on AI in a certain way? If these countries are islands versus landlocked. The geographic location. Are there any sort of patterns you’ve seen? Or, is it an unfair question to say who’s really thoughtful and ahead of the curve on this?

Olivia Gambelin: I would almost say that it’s too early to ask that question, only because just in this last year alone we’re seeing countries catch up to the AI game and actually understand: oh, we need to play a more active role. So far, it’s really been driven by the private sector. Now, we’re seeing it, for example, with the executive order coming out of the United States. We’re seeing the powerhouses start to play a bigger role. And, you’ll see differences. With the UK, it’s a very innovation-based approach. The EU takes a very risk-based approach around AI.

So, I don’t think there’s any clear winner at the moment. I think it really is any country’s game as things are shaping up both from responsible AI—Again, that’s my lens.—Responsible AI, but also the AI field as a whole.

Hugo Scott-Gall: Can we talk a bit about accountability? So, a decision is made, whether it’s by a government, or a police force, or a company, that turns out to be an incorrect decision and has harmful consequences for someone. It could be—I don’t know—misrecognizing them via facial recognition software, and AI performs some calculation whereby someone is forcefully arrested, and/or a customer is removed from a company’s database, or something happens, but it wasn’t me. It was the AI that did it. Is that okay? Can you say that?

Olivia Gambelin: Fortunately, no. Who is to take the blame? That’s still up for debate. That’s still a big question. But really, at the end of the day, you look to our legal systems. You can’t prosecute an AI system. You have to prosecute a person. A company. Whoever was behind it. So, even though it may feel like we can hide behind these systems, at the end of the day, we really can’t. There will always be a legal ramification. But, also speaking as an ethicist, there is always blame to carry. There is always the pain and blame that needs to be assigned, not to a system, but to a person.

Hugo Scott-Gall: I guess my example was: the machine says you did something, and you didn’t do it. But, what happens when it’s not about something that happened, but it’s more about preventing something? Or, you’ve been excluded from something. You didn’t know you were excluded based on an algorithmic prediction. Those cases can be harder to prove. And, oftentimes, you don’t know a potential harm has been done to you.

So, it’s easy. Facial recognition says you were in New York when this thing happened, and you were nowhere near. You can prove you weren’t. So, that’s an easy mistake. An easy error to correct. But, with the biases within any algorithm, the biases that have caused someone to be rejected, denied, not included, some potential harm may have been done, but it’s harder to prove. Harder to be aware of. Again, is your answer the same one? That it all goes back to the owner or the user. The user of the decision that AI generated, or the owner of the AI system, is still accountable?

Olivia Gambelin: That one depends. Like you were saying, it can be incredibly difficult to pinpoint that there is some harm occurring. You as a user can say, “I think something feels off.” Or, “I don’t even know something feels off ‘cause I’m in my own bubble. My own bubble of user experience. I have no idea that I should actually be experiencing this technology, or this system, or this decision making in a different way.”

It’s difficult to be able to pinpoint those in the first place. But, I would say we’re actually moving in a better direction now. There’s been so much research and brilliant work done in the space of responsible AI and ethics that we are able to preemptively catch that kind of breakdown in the system more and more, with more accuracy. So, we’re actually moving away from a time where you could skirt accountability by saying, “Oh, we didn’t know,” and no one knew because it’s such a brand new technology.

That gap, that window of grace, is very quickly shrinking right now because of the pace at which the responsible AI field has been able to develop our understanding, our risk databases, our ability to pinpoint: no, you should have known better. There was a very clear indicator. You should have checked how your data set was being labeled. That is common knowledge now. At least in my field, that is common knowledge now. You can no longer claim that you didn’t know biases could exist in data labeling. It’s well known now. So, that gap is closing.

Hugo Scott-Gall: When you look at the evolution of social media, from how it started to where it is now, clearly it’s been very hotly discussed. With recent events, you certainly had the CEO of Meta up in front of Congress. So, I think it’s fair to characterize it as an area where legislators and regulators have found it hard to come to a commonly agreed upon set of governing criteria or principles. When you look at the evolution of social media and its impact on society, it has clearly had big benefits, depending on where you live, your context, and how you live your life.

But, there are certainly enough people in the world to say, “This has been beneficial.” Otherwise, you wouldn’t see billions of people using it, you assume. But, do you think with hindsight, it could’ve been managed better from an ethical point of view? That actually clearer operating principles from an ethical point of view could’ve been applied? And, that the industry maybe just ran so fast that the government and the regulators couldn’t keep up?

Olivia Gambelin: Yes. I firmly believe that there could have been tighter feedback loops, in terms of an ethics perspective, where we could have caught some of these—I’m not going to use the word “malicious,” but these negative consequences. We could’ve caught them, and we could have changed the core structure of how we approached social media earlier on. The challenge now is that with social networking, there’s software lock-in. We’ve been using it for so long the way it functions that it’s really difficult to go back and change it.

For example, back when Facebook was still Facebook, when they first put out that Like button, they didn’t necessarily have the right controls in place to take in research that was coming in on the effects of that Like button, and then feed that back into product and feature design. The year after that Like button was released, suicide rates I think doubled or tripled, a significant amount, and they were able to trace it back to that Like button.

Now, Facebook didn’t at that time have the right mechanisms in place to have that reflection, that feedback loop, and say, “This is a direct cause. That’s not good. Those are not our values. That’s not the purpose of what we founded this company on. We need to go back and rethink that feature.” Now, it’s so ingrained that we’re looking at Instagram trying to hide the number of likes a photo gets, and that’s been in the works for at least five years, where they keep testing it, taking the number of likes down, and then they see that engagement drops. And so, it’s so ingrained in how we use social media that even though we know it causes these adverse effects, we can’t quite come off of it.

But, if we had had that values-based, that ethics-based feedback loop earlier on, we would’ve been able to change and adapt. I think the point that I’m trying to make here is that a lot of tech is: move fast, let’s try things. But, instead of just move fast, break things, we need to build in another step. We can move fast and break things, but then we have to have the accountability and the humility to say, “We broke something we weren’t supposed to. We’re going to fix it now, before it’s too ingrained and past being able to change.”

Hugo Scott-Gall: But, you grew up in Silicon Valley. Is that too optimistic? You lived there. You understand that place. And obviously, it’s a hotbed of innovation and progress. Is it too great a responsibility to expect from a bunch of very entrepreneurial people? Or, is it something that they were aware of, but proceeded anyway? Or, is it something where someone like you and people like you could’ve helped more? I guess it’s just the nature of the beast. If you want all the innovation, there are going to be some—negative might be too strong—but, there will be some undesired consequences that go with it, or some unintended consequences that go with it.

Olivia Gambelin: Of course. Always. Life is a balance of both good and bad, and we’re never going to get past that. And, I think this is probably where I differ from a lot of ethicists that you’ll hear speak: I understand I will not be able to reach everyone. It isn’t my goal to reach the companies or the entrepreneurs that are in a different mindset, that I really can’t reach anyway, the ones more focused on those short-term benefits, trying to move fast and break things without any consciousness or any accountability.

Who I work with, and really this growing sentiment that I’m finding in the Valley, not a trend, but a sentiment, is just this hunger for something different. Really, it’s: I’m worked to the bone. I’ve been working for startups, and I’m tired of not having purpose. I’m tired of seeing bad tech come out. I’m tired of building something that I don’t really believe in. I need something more than this. It’s this growing sentiment towards having work be more value driven. Having even the technology, the purpose of that technology, serve some type of greater good or greater purpose beyond just what the marketing team is putting out. Truly making a good impact in people’s lives.

So, I would say we’re always going to have what people will call bad actors. But I don’t just think, I firmly believe, that there is a shift happening, especially in Silicon Valley. There’s a shift towards: hey, wait a second. We changed the world. Why don’t we actually change the world how we want to change it, instead of just changing it for change’s sake?

Hugo Scott-Gall: Okay. So, I’ve got a few closing questions. That’s quite an optimistic answer. Are you overall optimistic here? Or, can you see some optimism but also a big left tail, some big risks here? Where do you come out? Are we mature enough as a society, have we learned from social media, that this is going to test us, but overall the world will respond and be responsible in harnessing this technology for the best of humankind? Or is it more: let’s just go forward, let’s do it?

Olivia Gambelin: I have been lovingly nicknamed the Optimistic Ethicist, and I definitely stick with that nickname. I am an optimist. Because of the work that I’ve done. Because of the people that I’ve worked with and the change that I’ve seen. I think what I’m after is I’m looking at people that want to achieve success. You can hold ethics and success in the same hand. They don’t need to be opposing, and I’ve seen that in practice, and I’ve seen the results that come out of it.

And, I am biased, but I think when you have that value driven, that ethics by design innovation and approach to business, it always results in stronger technology. Stronger products. Stronger companies. Stronger people behind the scenes, which is really the big driver. So, I am optimistic because I’ve seen the change that is already starting to happen. And, the more success stories that we have, the more momentum’s going to build there.

Hugo Scott-Gall: That’s a great answer. I guess these things are hard to predict, with a wide range of outcomes, but I predict you are going to be very busy. I know you are already. But, I think demand for your services will go up, and up, and up. Because as investors, we think about risks in a lot of different ways, but we certainly think about the risk companies carry for what they do, and how societal reaction can change what it is they do. Certainly, I think we see companies in lots of industries who tell us they want to be better because of AI. At decision making. Analytics. Customer relationships. Lots of different ways.

Clearly, it creates risks, and they’re going to be confronted with a society that expects more and more from corporates in terms of leadership on issues. I suspect they will be expected to lead in these areas. So, I think that you’re going to be very, very busy.

Olivia Gambelin: Yes. That would be an understatement. And, I think, Hugo, there’s a really interesting point here that I just want to add real quick about risks and that mindset of risk. So, MIT and BCG put out a report last year that I quote all the time because it put a very beautiful number to the work that I do. They found that companies that engage in responsible AI practices actually reduce the failure rate of their AI by 28%, which is huge. If you don’t already know, the AI failure rate is anywhere between 86% and 93%, which just seems like an insane number.

And typically, with any type of solution that you’re being told will reduce that risk, you’re looking at reducing it by 1% or 2%, because even that is a huge margin. But, 28% is unheard of, that kind of reduction. And, it’s simply because, at the end of the day, responsible AI is good business practice. So, you are de-risking your development processes. And then, when you have this ethics layer, you are establishing yourself as really a leader, both in your industry and in your field. It starts to compound. You start to see that it is more risky not to do responsible AI, not to do ethics, than it is to invest in establishing these practices.

Hugo Scott-Gall: That makes a lot of sense. This is going to be one of those things that delineates winners versus losers. I’m sure you’re well aware of that. So look, I want to say, Olivia, thank you very much for coming on the show. Good luck with your book when it comes out in June. I’m sure it will be voraciously read, as it should be. So, thank you very much for coming on the show. Thank you for spending the time.

Olivia Gambelin: Thank you so much, Hugo.
