Ethics That Must Be Built Into Artificial Intelligence




And forever enshrined into legislation


So if you’ve been following my content, you know I’ve been writing a lot about artificial intelligence. I’ve shown some of the positive and negative developments in this area, and argued that we should harness this immensely powerful technology for the common good of humanity, not exploit it for evil the way we historically have with nuclear weaponry.

Why an authoritative declaration is vital

Of course, I’m not the only one saying it. Many thinkers and innovators have been advocating for this. In line with that, today’s piece is all about the ethical principles that pundits feel should be programmed into AI and kept clearly in view as the technology moves forward.

I’ve been loosely basing these artificial intelligence articles on an incredible book entitled 2084, written by Professor John Lennox. In it, he sets out a number of generally applicable principles surrounding the development and production of such technology.

He has drawn these from the so-called Asilomar AI Principles, which were drafted at a conference back in 2017 in – you guessed it – Asilomar, California. Apparently, they have been endorsed by over one thousand AI researchers. Other supporters include the late, great Stephen Hawking, Jaan Tallinn, and last but predictably not least, Elon Musk.

Before we review it, though, why is such a declaration necessary? The answers to that question are subjective, imperfect, variable and relative. But that in no way diminishes the need for one. One has only to take even a truncated view of history to notice one salient human frailty: mankind is foolish and hasty, especially when we deliberate on things in groups. Why else does the word deliberate connote fussiness, tedium, meticulousness? Because genuine deliberation is patient and thorough.

The sluggish pace at which we typically address the ethical concerns of technological progress will be our undoing. Brinksmanship and a lack of due care have brought us to the edge of extinction repeatedly. We can’t afford to repeat such folly.

Declarations of this magnitude are necessarily lofty and out of reach. That’s precisely why they’re necessary. Without a vision and an iron-clad commitment, we are destined to fall far shorter than we would have if we had espoused such ideals in the first place.

The principles Professor Lennox includes in his book are a selection of Asilomar’s most salient features. Here they are.

Asilomar’s Most Noteworthy AI Principles

1) Research Goal: The motivation and direction of AI research should not just be the advancement and proliferation of digital, algorithmic and machine-learning ends; another term for this would be undirected intelligence. Rather, the goals should aim higher and always be set with specific benefit to the common good of humanity in mind.

6) Safety: “AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.” So clearly, the conference agreed that it shouldn’t just be technological development for its own sake, but advancement for a specific use, tested for verifiable safety.

10) Value Alignment: This seems to build on number one. It refers to how autonomous AI systems should be designed to always operate in harmony with human values.

Further to this end,

11) Human Values: So not only should there be value alignment with autonomous machine-learning and robotic systems (which could be responsible for producing machines and AI of their own), but these need to align with what humans value, not just what we might find useful. This would include compatibility with, and preference given to, human dignity, rights, freedoms, and cultural diversity.

12) Personal Privacy: “People should have the right to access, manage and control the data they generate, given AI systems’ power to analyze and utilize that data.” And I would add that citizens of the so-called free world should be guaranteed in law, as a basic universal democratic endowment, direct access to education on the privacy controls built into these systems.

This is huge, in my opinion. The Senate in the United States has come down pretty hard on Facebook and YouTube specifically, pressing them not just to change their privacy policies but to be more straightforward about how their products actually work. So there’s already precedent for this. But I believe it should be a generally applicable maxim.

13) Liberty and Privacy: “The application of AI to personal data must not unreasonably curtail people’s real or perceived liberty.”

14) Shared Benefit: This means the development and production of AI should benefit and empower as many people as possible, and not just a few oligarchs, bureaucrats and/or billionaires.

To that end,

15) Shared Prosperity: Which is pretty self-explanatory.

16) Human Control: “Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.”

17) Non-subversion: This should represent an effort on the part of developers, producers, and legislators to design machines with built-in mechanisms that respect and improve, and can never be used to subvert, the social and civic processes of our society.

18) AI Arms Race: This should be avoided at all costs. Many of the generally applicable principles we attach to nuclear arms and energy should apply equally to artificial intelligence.

There are other points John Lennox draws out from that conference which I won’t include here. But I agree that, while these items are by nature not perfectly definable or achievable, some commitment to these guiding ideals should be reflected in AI research, development and legislation.

This may sound sententious. But that’s the nature of ideals, isn’t it? The fact that principles can never be perfectly executed hardly diminishes their importance. The UN, and virtually all of its members, have failed its charter at one time or another. Does this invalidate its imprimatur? It shouldn’t!

All that said, feel free to contribute your views on this in the responses below. I’m curious to read other generally applicable values you think we should codify into the development and proliferation of AI systems.
