Reflecting on company leadership and their understanding of AI

Why we should expect more from the C-Suite when it comes to machine learning and AI

Before I get into my discussion, I should mention that this blog post was prompted by this article: https://venturebeat.com/2021/05/25/65-of-execs-cant-explain-how-their-ai-models-make-decisions-survey-finds/. It’s a quick read and I would suggest reading it before continuing with this blog post!

I’d also recommend getting a copy of Weapons of Math Destruction by Cathy O’Neil. It is a great read and will help you gain an understanding of why ethics are a necessary part of artificial intelligence.

Ok, so even if you chose to ignore my suggestion above, you can read the URL and see something that should be quite alarming: 65% of executives cannot explain how their AI models make decisions. This isn’t a number we should just brush aside. It’s a number that should, at a minimum, be disappointing. It doesn’t make sense that a CEO wouldn’t be able to explain how one of their tools works. Certainly the rapid adoption of AI tools can make it difficult to keep up, but excuses shouldn’t be made for technology that can have life-altering consequences.

To provide some context, I’ll walk you through some hypothetical situations below. After describing each situation, I’ll ask the same general question: “Would you trust this person?” and all you need to do is answer with a yes or no. Assume you have no knowledge of the technology that is presented and there are no financial repercussions if you choose to say no.

Situation 1: You’ve just purchased a $10,000 TV for your new house. You’ve got the perfect wall to mount this TV and you’ve hired somebody to mount it. The person mounting it has no idea how to use a stud finder or a drill. Do you trust them to mount the TV?

Situation 2: The year is 1914 and you have been selected to participate in the first commercial flight. You have the opportunity to speak with the CEO of the airline prior to boarding. You ask the CEO a few questions about how the airplane works and is able to achieve flight. The CEO is unable to answer any of your questions. Do you still get on the plane?

Situation 3: You are in need of an important operation. You’ve found someone who claims to have a surgical method that will fix your ailment. You go in for a consultation and provide them with your medical history. The doctor is unable to describe how the surgery will actually help you, except to say that you should feel better after it’s done. Do you proceed with the operation?

For those of you who answered yes, all I can say is that you enjoy taking risks! I personally would have said no to each of those, and I hope these simple examples provide some context as to why C-Suite execs should be expected to have knowledge of how their product works. If you have a tool and cannot use it, then how do you expect to provide a quality service? If you, the leader of a company, cannot explain the basics of how your AI makes decisions, then why should I trust you or your product? I don’t expect executives to have a full understanding of the mathematics behind the models that are used, because there should be a team of data scientists who can easily explain the math and the decisions made to build a model. What I would expect from the C-Suite is an understanding of what features are used by a model and how those features are used to make a decision.
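
To make that expectation concrete, here’s a minimal sketch of the kind of feature-level summary an executive should be able to ask for and understand. Everything here is hypothetical: a made-up loan-approval dataset with illustrative feature names, and a simple scikit-learn logistic regression standing in for whatever model a company actually runs.

```python
# A minimal, hypothetical sketch: which features does the model use,
# and in which direction do they push a decision? The dataset and
# feature names below are made up for illustration only.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

loans = pd.DataFrame({
    "income": [40_000, 85_000, 52_000, 120_000, 33_000, 76_000],
    "debt_to_income": [0.45, 0.20, 0.38, 0.15, 0.55, 0.25],
    "credit_history_years": [2, 10, 5, 15, 1, 8],
    "approved": [0, 1, 0, 1, 0, 1],
})

X = loans.drop(columns="approved")
y = loans["approved"]

# Standardize the features so the coefficients are comparable.
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)
coefficients = model.named_steps["logisticregression"].coef_[0]

# The plain-language summary: positive values push toward approval,
# negative values push toward denial.
for feature, coef in zip(X.columns, coefficients):
    print(f"{feature}: {coef:+.3f}")
```

No calculus required: the takeaway is simply “higher income and a longer credit history push toward approval; higher debt-to-income pushes toward denial.” If a leadership team can’t produce even this level of summary for their product, that’s the gap I’m talking about.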

It’s also imperative for the C-Suite to understand their AI because they need to be able to identify when their AI tools perpetuate bias, inequality, or harm towards certain groups. If their product negatively affects a particular group of people or customer segment and they choose to ignore it, then they must face the appropriate consequences. And if the company is allowed to continue to do business, then the proper audits need to be performed so that the now-defunct model can be adjusted or rebuilt from the ground up.
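
What might the first step of such an audit look like? One common starting point (not the whole audit, just a smoke test) is a disparate impact check: compare the rate of favorable outcomes across groups. The sketch below uses fabricated data and the informal “four-fifths rule” threshold.

```python
# A minimal sketch of a disparate impact check, a common first step
# in a bias audit. All data here is fabricated for illustration.
import pandas as pd

results = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [ 1,   1,   1,   0,   1,   0,   0,   0 ],
})

# Selection rate: the share of favorable outcomes within each group.
rates = results.groupby("group")["approved"].mean()
print(rates)

# Disparate impact ratio; the informal "four-fifths rule" flags
# anything below 0.8 as worth a closer look.
ratio = rates.min() / rates.max()
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential adverse impact -- audit this model before shipping it.")
```

A passing ratio doesn’t mean the model is fair, and a failing one doesn’t prove intent; it just tells leadership where to start asking questions, which is exactly the conversation a C-Suite should be equipped to have.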

Should you be gravely concerned that the AI used today will become sentient and self-serving like Ultron? No, probably not (at least not in this lifetime). Should you be ready and willing to question how AI used in finance and healthcare could cause you harm? Absolutely. This blog post is not meant to be alarmist, and the general idea is not to fear the adoption and implementation of new technologies, but rather to be willing to hold the proper people accountable. Profit can sometimes be a hindrance to accountability, so implementing ethical AI practices will require just as much, if not more, effort than the implementation of the AI itself.

I’ll close this blog with an anonymous quote that I find to be quite applicable:

“No individual raindrop considers itself responsible for the entire flood.”

Thanks for reading! I’m a graduate of the Flatiron School Data Science bootcamp and if you would like to connect you can reach out to me on LinkedIn or Twitter.
