Ethics at Scale: Navigating AI Risk | Reid Blackman

Building AI guardrails for reputational risk before it’s too late

This episode explores how ethical concerns surface across departments—from HR to marketing—through AI adoption, and why traditional frameworks often fall short. We unpack why bespoke, agile solutions are critical for managing AI risk, how reputation drives proactive behavior in large brands, and why ethics must evolve as fast as technology does.

What happens when a philosopher, a pyrotechnics entrepreneur, and a tech ethicist walk into a boardroom?

You get Reid Blackman—author of Ethical Machines, host of a podcast by the same name, and founder of Virtue, a consultancy helping Fortune 500 companies navigate the ethical risks of AI. In this episode of Leveraging Thought Leadership, we explore the collision of ethics, emerging tech, and organizational complexity.

Reid shares his unorthodox journey from selling fireworks out of a Honda to advising top executives on responsible AI. He discusses how AI creeps into organizations like a Trojan horse—through HR, marketing, and internal development—bringing serious ethical challenges with it. Reid explains why frameworks are often oversimplified tools, why every client engagement must be bespoke, and why most companies still don’t know who should own AI risk.

We dive into the business realities of AI risk management, the importance of moving fast in low-risk sectors like CPG, and the surprising reluctance of high-risk industries like healthcare to embrace AI. Reid also outlines how startups and tech-native firms often underestimate the need for ethical oversight, and why that’s a gamble few can afford.

If you want to understand how to future-proof your brand’s reputation in an AI-driven world—or just love a good story about risk-taking, philosophy, and Led Zeppelin-fueled entrepreneurship—this is the episode for you.

Three Key Takeaways

AI Risk Is Organizational, Not Just Technical
Ethical AI risk isn’t the sole responsibility of the CIO or tech team—it’s a company-wide issue. AI often enters through non-technical departments like HR or marketing, creating reputational and legal risks that leadership must manage proactively.

Frameworks Are Overrated—Bespoke Solutions Win
Reid challenges the reliance on generic frameworks in thought leadership. Instead, he emphasizes the need for bespoke, agile solutions that are deeply informed by organizational structure, goals, and readiness.

Reputation Drives Readiness for Ethical AI
Large brands in low-risk sectors (like CPG) are often quicker to adopt ethical AI practices because the reputational stakes are high. In contrast, high-risk sectors (like healthcare) are slower due to the complexity and fear surrounding AI implementation.

If the episode with Reid Blackman sparked your interest in the ethical implications of thought leadership in rapidly evolving fields like AI, then you’ll find a compelling parallel in our conversation with Linda Fisher Thornton. Linda dives into the broader responsibilities of thought leaders to ensure their content is not just smart, but also ethical, inclusive, and meaningful. While Reid examines AI as a fast-moving ethical challenge that demands bespoke, responsible oversight, Linda zooms out to highlight how thought leadership, in any domain, must be built on a foundation of trust, transparency, and long-term value creation. Both episodes challenge leaders to do more than inform—they must lead with conscience and intention. Listen to Linda’s episode to explore how ethics can—and must—be the throughline of every thought leadership strategy.


Transcript

Peter Winick And welcome, welcome, welcome. This is Peter Winick. I’m the founder and CEO at Thought Leadership Leverage, and you’re joining us on the podcast, which is Leveraging Thought Leadership. Today my guest is Reid Blackman. He’s the founder and CEO of Virtue, which is an AI ethical risk consultancy. He’s the author of Ethical Machines, which came out fairly recently from HBR Press, and he’s the host of the podcast by the same name. So without getting into all the nitty gritty of the amazing companies he’s worked with and the articles he’s written, I’d rather just talk to him because he’s sitting right here. So, welcome, Reid. How are you?

Reid Blackman So far so good, but you know, the podcast just started. We’ll see. Oh, there we go.

Peter Winick So what is, uh, in terms of categories of jobs that didn’t exist 10 years ago, what does an AI ethical risk consultancy CEO do on a typical day?

Reid Blackman I mean, most of the work consists in, if I’m not writing and that sort of thing, advising mostly senior executives on how to design what I call an AI ethical risk program. They often call them responsible AI programs, but the bottom line is, all right, look, we’re this large organization or a Fortune 500. We’ve come to the point at which AI is coming through the cracks. You know, it’s coming through HR, it’s coming through in-house development, it’s coming through marketing, et cetera, et cetera. How do we get our hands around this thing so that we can manage the risks of it?

Peter Winick Got it. Because, I mean, one of the interesting things that you said there, it’s not just, well, you know, make this the CIO’s problem, because everybody’s touching it in some way, shape, or form. And if they’re not today, certainly they will be very soon.

Reid Blackman Yeah, it’s fine for the CIO to own the program, but yes, you have to think a lot about, we’re not just designing it in-house, every startup wants to sell their new AI solution to every department, every role in your organization. So how do we get our hands around that? So I think that in a lot of cases, AI sort of acts like a Trojan horse because HR brings it in and before you know it, you’re facing down a lawsuit for discriminatory hiring practices that were the result of your AI being used in a certain way.

Peter Winick So give me a little bit about your path. Cause I would imagine that a young Reid didn’t sit around saying, what do you want to be when you grow up, and it was an AI ethical risk consultancy. There was probably

Reid Blackman Yeah, yeah, no, I did not. I didn’t quite see it coming. So no, young Reid became obsessed with philosophy. And, you know, senior year of college, it was either law school or go get a PhD in philosophy. And I realized it would basically be a betrayal of my identity to stop doing philosophy at that point. So I went off and got a PhD in philosophy, and then I became a professor. I was a professor for 10 years. I always specialized in ethics. Of all the branches of philosophy, that was always the most interesting to me. And I can get into details if you like, but at some point it occurred to me that I could start an ethics consultancy of some kind, but I didn’t see any kind of market for that sort of thing. You fast forward a few more years, maybe even more, and I was keeping, sort of, you know, my eyes open. And that’s when I heard things like Me Too and Black Lives Matter and hashtag DeleteUber and hashtag DeleteFacebook, or boycott whatever. And I thought, okay, if companies don’t get their ethical houses in order, that’s gonna be really bad for them on social media and ultimately news media. I can sell that. And so there’s lots more detail to that, of course, and lots more behind my motivation, but that’s how I ultimately came to start this company. It’s because I realized, oh, things are going sideways. And also, I don’t know how exactly, but maybe it was sort of in the academic ether, that AI was coming and things were gonna go sideways with it. And so that was also part of the vision when I started the business: I wanna really focus in on AI at some point, when that becomes sufficiently relevant to the marketplace.

Peter Winick Right. So you saw it early. So it’s this intersection of philosophy and tech, but then, you know, you sort of fast-forwarded through, oh, and then I was a professor for 10 years. Most people don’t leave academia to take the risk to be an entrepreneur. They stay, and retire as a professor after 38 years.

Reid Blackman You’re right. Like.

Peter Winick After the ever after and yeah, sure. You know, whatever. So why, why did you decide to leave the confines of academia?

Reid Blackman Confines is the right word. So there’s a lot to say here. I mean, one thing that I glossed over is that when I was in grad school, I started a fireworks wholesaling company based out of Connecticut. And…

Peter Winick Wait, wait, you’re at a crossroads. Do I become a philosopher or an attorney? But I’ll be somewhat of an arms dealer, selling fireworks.

Reid Blackman Arms dealer is a bit strong, but, uh, you know, as a grad student I was making bupkis, and my father, the bupkis thing is a technical term. Yeah, yeah, Bupkis Inc. So, how did I come to start a fireworks business in particular? That’s also part of the story, because my dad was a wholesaler, is a wholesaler. Anyway, I started selling fireworks out of my two-door Honda, going sort of store to store. The trunk was crammed with fireworks, and the back seat and the passenger seat were crammed with fireworks. My dad told me where he didn’t have any customers. This was in Connecticut. And then I just sort of drove around store to store selling my wares. And that went really well the first year, and went even better the second year. So by the third year, you know, I’m still in grad school, I rented a cargo van and I crammed that full of fireworks. Anyway, the business grew each year, to my surprise. And so, you know, at some point I had a PhD and I was driving around in a van in a tank top, listening to Led Zeppelin, delivering fireworks. And I was like, okay, this has gotta stop. Why don’t I hire someone to do my stuff? And then I’ll go off to a coffee shop, I’ll do my writing, I’ll do my research, whatever, and then I’ll check in on them. And again, to make a long story short, because we’ve only got like 20 minutes total here, that’s when I learned that managing people is a lot of work, because I had four or five people working in the warehouse pulling orders and loading trucks, and I had four guys on the road making deliveries. So I was doing bigger business, but I did not have the free time that I anticipated. But this is what planted the seed. First of all, that was sort of obviously entrepreneurial.
And that explains why, once I was a professor, during those 10 years I became an advisor to startups at the entrepreneurship organization at the university. And I saw all these kids doing cool, new, interesting stuff. And I was like, I want to do something cool, new, and interesting. What would that be? Well, I love philosophy. I love ethics. I don’t want to leave any of that. I wonder if there’s such a thing as an ethics consultancy. And I looked around, I Googled it. I couldn’t find anything outside of, you know, very narrow stuff in healthcare. And so I thought, okay, there’s nothing here. I’m just going to wait. Maybe something comes along, and maybe one day there’s a market for it. Maybe there’s not. And then we get back to Black Lives Matter, delete Facebook, blah, blah. Well, the Cambridge Analytica scandal, that was a big one. Um, it was also around the time when all the big consultancies were coming out with these reports about how millennials want more meaningful work and they’ll take lower pay for higher meaning, that sort of thing. And then also, I was a professor at Colgate University, sort of in the middle of nowhere, and my wife and I are city folk, not country folk. One kid had already arrived and there was another one on the way, and with the back and forth commute, because my wife’s career is Manhattan-based, that didn’t work anymore. Anyway, I sort of put these social factors together with my finally seeing at least what I believed to be an opportunity. And so I left academia and I started Virtue.

Peter Winick This is where things make total sense in the rear view mirror for thought leaders. If I were to sort of dissect the smoothie that is you, it’s like, you’re 10% entrepreneur, 50% philosopher, you know, whatever, and all those come together. So you’ve basically created the perfect situation, the perfect job, that probably only you are suited to have.

Reid Blackman I mean, I don’t want to be particularly braggadocious, but yeah. I mean, you know, I was never a normal academic. My grad student friends were not selling fireworks, you know? I can deal with making deals with oil companies to sell fireworks to all their gas stations and the convenience stores. They don’t have that business experience. They’re not that savvy. I grew up with half my family being from Brooklyn. You know, there’s just a level of sort of street savvy that most academics don’t have. They don’t have experience running a business. So I was sort of, you know, in a pretty unique position. Plus, my wife at the time was running a business, not her own but someone else’s, and she has friends, so I had people to talk to about it.

Peter Winick Now you’re getting paid to solve problems that really haven’t been solved quite the way they need to be solved before. Because ethics has been a thing for, let’s say, quite a long time, but the AI piece, and the scale and the speed and the intensity at which good or bad things can happen reputationally to a brand and all that, it’s huge.

Peter Winick If you’re enjoying this episode of Leveraging Thought Leadership, please make sure to subscribe. If you’d like to help spread the word about our podcast, please leave a five-star review at ratethispodcast.com forward slash LTL and share it with your friends. We’re available on Apple podcasts and on all major listening apps, as well as at thoughtleadershipleverage.com, forward slash podcast.

Peter Winick So, who inside of an organization, I’m assuming Fortune 1000 kind of thing, is ultimately responsible for hiring you or not? Cause I think a lot of thought leaders struggle with: hey, here’s a problem, I’m uniquely qualified to fix it. When I go knocking on doors at Citibank or IBM or whoever, whose door am I knocking on?

Reid Blackman Yeah, okay, so I’m gonna say two things. One, you didn’t ask a question, but I’m going to say yes, that’s exactly right about AI making everything more complicated, and we’ve had corporate ethics. But the truth is that I wasn’t particularly interested in doing Black Lives Matter or DEI or that non-tech ethics consulting early on. I just thought that’s where the only market was right then, and I hoped that the AI ethics stuff would become relevant, that there’d be a market for it, because that’s way more intellectually interesting to me, for the exact reasons that you specified. It’s so fast moving, it’s complicated, they’re novel problems. So the fact that it’s sort of a novel puzzle to solve made it intellectually satisfying and stimulating for me, as opposed to: don’t sexually harass women. We know the answer here. It’s not complicated. It’s not an open-ended question, right? Just don’t do it. Now, you have to get people to stop doing it, but that’s not intellectually challenging.

Peter Winick Well, but stay there a minute, because I think it’s an interesting piece. In many instances, thought leaders have their models, their methods, their frameworks, you know, a change management model, and the thinking is, well, it kind of doesn’t matter what change your organization is going through, I’ve got my model, I’m going to impose it on you, and it’ll probably do better than most at getting you what you need. I would imagine your world, and maybe I’m wrong, is you do have your models, your methods, your frameworks, your research, et cetera, but it’s not like there’s been tens of thousands of cases to go against and 50 years of history. So is it agile, or how much of it is evergreen and how much

Reid Blackman Okay. Okay. So it’s very agile. Everything we do is bespoke, because organizations are constructed in such different ways. I never went in with, like, a framework, something like that. Frankly, I barely have a grip on what the hell people mean when they say they’ve got a framework. It usually means they’ve got a pie chart with four buzzwords in it, or something along those lines. And I just think, honestly, I’ll say the controversial thing, which is I kind of regard frameworks as ways that experts try to convey things to people who don’t know a damn thing. But that’s it. Think of it this way. You’re a master expert surgeon. When she goes into a surgery, what’s the framework? She doesn’t say, oh, I’ve got this framework, it’s got four letters, I’ve got an acronym for how to do heart surgery. No, no, no. She cuts the patient open. She assesses the situation. Of course she’s done this a million times before; she’s organized. But she sees what needs to be done. She sees, what’s the highest priority here? How do I tackle that? But she’s not sort of ham-handedly, ham-fistedly, I don’t know, there’s some body part involved in ham, taking this framework and applying it to this particular heart surgery. If you had a beginner heart surgeon and they came in and they’re like, okay, I cut this person open, let me consult my framework, you’d be like, holy shit, this person is gonna perform a heart surgery? So I think frameworks, frankly, are kind of dumbed down versions of what the experts know. They’re ways of communicating some expert-level insight to non-experts.

Peter Winick But hasn’t the value been on a couple of levels, including as a communication tool, to convey to others? To use change management again: here’s the three phases, here’s where we’d like to be, this is the process. So it’s actually normal to feel stressed or whatever at this phase, but there’s light at the end of the tunnel. Yeah, right. Yeah. There’s value as a communication tool.

Reid Blackman A hundred percent. I’m not saying it’s not valuable, but what I’m saying is, when you asked, you’re an expert, do you come in with your framework? No, no, no. I come with deep experience and understanding about what’s going on. I need to assess the situation and see, okay, who’s actually involved in this conversation when we’re talking about AI ethical risk, or responsible AI, whatever you want to call it? What problem are they trying to solve? Do they have the buy-in that they need? What are they really trying to do? Are they just trying to scrape by regulatory compliance, or are they trying to actually go above and beyond that? There’s a bunch of stuff I need to figure out about them before I say, okay, here’s where I think we should go. Maybe we need a workshop to get the internal alignment that we need, because most companies don’t have it.

Peter Winick So let me ask you this then. Oftentimes with thought leaders, there’s sort of two situations, right? Situation one is, it would be wise for this organization to learn about dot dot dot ethics and AI and all that, and that would be a good thing for us to do because, you know, that market’s moving quickly. It’s capability development and it’s thought of as such inside the organization: let’s teach these leaders or managers the basics of AI and ethics, and we’ll pay a certain price for that. Then there’s the other side, where it’s more triage, right? Like, when do people buy burglar alarms? The day after their home was robbed, right? Not two years before. So are you doing more triage, or prophylactic kind of preventive capability development?

Reid Blackman I’d say it’s primarily prophylactic. I agree with you. I mean, if things go sideways with AI, then they’re calling their PR firm. You know, they’re calling Edelman. They’re not calling me, because I’m not going to sort of tell you how to fix your image or something along those lines. I’m going to tell you how to do it right. I’m going to tell you, moving forward, this is how you do things.

Peter Winick So if it’s an optics problem, they’re calling an Edelman. You’re thinking about how to get to the future so it doesn’t happen again. Yeah, that’s right. A root cause guy.

Reid Blackman Yes. And I think that’s why I get to work with these big brands. Because the big brands have brand reputation to lose. You know, if things go sideways with AI for some small or medium-sized business, maybe they face a lawsuit, in which case they’re going to call their lawyer, again, not me. But these big brands, they want to use AI in a big way, and they know that the journalists are looking at them, social media is looking at them. And so they better get it right.

Peter Winick Interesting, interesting. And any trends that you’re seeing in terms of it’s more likely for a CPG to call you than a professional services firm or what are you seeing in the marketplace in real time?

Reid Blackman I work with several CPGs. What I’ve found is that people don’t want to design a seatbelt if they don’t know if the car works. So they’re designing cars right now. And they’re like, should I invest in AI ethics or responsible AI? Let’s see if the damn car moves from point A to point B. And does it move from point B to point A fast enough? Is there an ROI on it? If yes, okay, let’s design a seatbelt. Now, who has the ability to see if the car can move? Healthcare is nervous as hell. Why? Because you’re literally talking about life and death. CPG is talking about ketchup sales, chocolate sales, right? So CPG is moving faster in AI adoption because they’re in a lower risk environment to begin with, and they are therefore also moving faster in the responsible AI space. Got it. So I would have thought, and I was wrong, that healthcare would be really early on, because healthcare speaks the language of ethics. They’ve been doing it for decades: institutional review boards, ethics committees, IRBs. But they’ve been the slowest, and I think they’ve been the slowest precisely because they’ve been the slowest to adopt AI at all, because they’re in such a high-risk situation.

Peter Winick Well, and then I would imagine that there’s tech that has to. So when you talk about autonomous driving and, you know, the trolley car dilemma and all that sort of stuff, somebody’s got to tell the computers the decisions to make in what could be an ethical situation, or that has to be programmed. Maybe some of that’s becoming.

Reid Blackman Yeah. Well, you know, for whatever reason, I primarily, if not solely, get called upon by non-tech-native companies. So the vast majority of my clients are non-tech-native Fortune 500 companies: healthcare, CPG, insurance, the mining industry, surprisingly, e-commerce. But, like, OpenAI is not calling me. And I think it’s because they think they’ve got it handled.

Peter Winick Well, there’s a little bit of bro culture and arrogance there as well, potentially, where the.

Reid Blackman Yes, I’d say so, I think.

Peter Winick Oh, everything, sir.

Reid Blackman I put it nicely by saying they think they’ve got it handled.

Peter Winick But yes, that’s exactly right. Well, this has been awesome. I appreciate your time, and I love the journey from, you know, Whole Lotta Love in a van and fireworks, to academia, back to entrepreneur. Very, very cool, because the journey is a big piece of it. It’s one of the most interesting things I’ve learned from thought leaders: how the hell did you get here? Cause like you said, if you ask the heart surgeon how they got there, we know: they went to medical school, and then they did this, and they did that, and whatever. But thanks for sharing the journey and-

Reid Blackman I’ll say one last thing about how I got there: just try a bunch of things out. I mean, you know, it’s funny that I’m in a sort of risk management role or something along those lines, because I’m a risk taker. I just try stuff. I’ll just try it out. And so part of that journey was, let’s hire a PR person, let’s see if that works. Let’s try email automation. Bad idea, but let’s try it. Let’s try LinkedIn automation. Let’s try this relationship person. And that’s something that academics usually don’t do, right? They stay in their narrow lane. They’re like, I know this thing, and then they write a billion articles on that thing. But, as you know, part of being an entrepreneur is, let me just, I’m gonna start a podcast, let’s see how that goes. And so part of it was that sort of fireworks mindset.

Peter Winick The ship-it, if you will, mindset has been beaten out of them in order to be a successful academic, right? It’s ready, aim, aim, aim, aim, and then maybe they fire. Entrepreneurs are ready, fire, aim, you know. I just think it’s a different capability to develop, a different mindset. I mean, academics are perfectionists, which is great, right? But they don’t ship enough. They don’t break things. They’re afraid of, you know, what happens if you publish an article and nobody reads it? Okay, well, get over it, you know.

Reid Blackman It’s a lot, it’s a little bit of hedging.

Peter Winick The seven people that read it will probably accept that.

Reid Blackman Exactly, exactly. I’ve had to teach myself: no one cares about you. No one’s hanging on every word. No one’s reading your words like you read your words. Just chill out and give it a go. And if you start a podcast and nobody listens, okay, who cares? It’s not a big deal. That didn’t happen, but still. Or you start a Substack, which I did recently, and no one reads it, okay, who cares? Just give it a shot. It doesn’t matter.

Peter Winick That’s going to be in your obituary. I can see they’re already writing that. That’s gonna go down in history: he started a Substack. No, he started a sub deck.

Reid Blackman Yes, I will certainly be renowned for my Substack. There’s no question.

Peter Winick This has been great. Appreciate your time. This has been awesome. Thank you so much.

Reid Blackman It’s my pleasure, Peter. Thanks.

Peter Winick To learn more about Thought Leadership Leverage, please visit our website at ThoughtLeadershipLeverage.com. To reach me directly, feel free to email me at peter at thoughtleadershipleverage.com, and please subscribe to Leveraging Thought Leadership on iTunes or your favorite podcast app to get your weekly episode automatically.
