How Artificial Intelligence Threatens World Peace

Is another Cold War on the way?

Image from Pixabay.com

If you follow my blogs, you know that I’ve been focusing a fair amount of attention on artificial intelligence, and how it has given us reasons for both optimism and extreme ethical pause. In this one, I want to discuss the potential for a new conflict not dissimilar to the Cold War that accompanied the development and proliferation of nuclear arms; but this time AI will take centre stage in the theatre.

Any technology can be used as either a sword or a plowshare

Very much akin to nuclear expansion, artificial intelligence comes with its own bag of pros and cons. Indubitably, nuclear energy has been harnessed for the common good of mankind. How so? According to one source:

Water Desalination — Reducing the saline content of seawater is extremely costly and energy-intensive, but nuclear energy can provide the low-cost power the process requires.

Medical — Nuclear technology has been harnessed in various medical applications ranging from imaging to killing tumors and sanitizing surgical equipment.

Space Exploration & Reliable Energy — The heat given off by decaying plutonium can be exploited to produce electricity. Such generators can operate independently for long stretches of time, eliminating the need for supervision. In fact, it’s reported that Voyager 1 (launched some 44 years ago!) is still transmitting data. Nuclear energy, when harvested safely, of course, is far more environmentally friendly than burning fossil fuels.

Agriculture & Food — Irradiation can be used as a kind of pesticide, but not in the sense of killing undesirable bugs. It merely prevents them from reproducing without harming the crop or making food radioactive. This form of sterilization is, in fact, the only way of destroying bacteria in frozen and raw foods effectively.

Throughout this most recent series of articles I’ve detailed extensively how artificial intelligence can be used positively, too. I will be speaking in a different tone today.

Beating plowshares into swords (Ick! Stop with the clichés already.)

Most of the AI blogs I’ve been writing are loosely framed around a very thoughtful book entitled 2084, written by Professor John Lennox. In it, he touches on how artificial intelligence is being used for warfare. He quotes a Chatham House report as indicating:

“Both military and commercial robots will…incorporate ‘artificial intelligence’ (AI) that could make them capable of undertaking…missions of their own.”

This new technology has sparked much debate, outrage even, over whether this should be permitted, especially where innocent human life is at risk. Elon Musk has expressed alarm that it could touch off WWIII, and Vladimir Putin has speculated that leadership in AI will be essential to global power in the very near future.

The Pentagon reportedly plans to spend $2 billion to update and develop its weaponry, all in an effort to compete with superpowers like Russia and China. Professor Lennox writes that many commanders in the American military are voicing concern about relinquishing control to AI systems tasked with identifying, seeking out and eliminating human targets.

Collateral Damage

Apparently, Google has voiced so much concern over the development of such technology that it has discontinued its involvement in the programme. Speaking of Google, by the way, Timnit Gebru (a prominent ex-AI researcher at the tech giant) has exposed some extremely worrying racial biases built into facial recognition software. Before you give Google a pass, however, Gebru was allegedly fired from the company over that embarrassing publicity; her employers predictably denied the allegation. We can table that debacle for now, I suppose. But again, more importantly, imagine the potential loss of innocent life if such technology is misused. It is just one more way that minorities will pay a steep price simply for belonging to a certain ethnicity. How sad! How cavalier!

Many experts feel that mere algorithms are not equipped to adapt to complex situations and are prone to malfunction in unpredictable ways. Anyone familiar with AI (in virtually any form, even basic computing) can relate to such frustrations.

It was actually Bill Gates who drew the analogy between AI and the development and proliferation of nuclear weapons during the Cold War, a technology that brought us to the brink on several occasions!

Other concerns surrounding military implementation of AI

Just as nuclear energy has proved rewarding in other fields, artificial intelligence has its better and worse applications. I mean, can you imagine how deadly robots capable of identifying, seeking out and eliminating human targets would be in the hands of terrorists? Many AI alarmists warn of a potential I, Robot situation in which a vast army of androids subjugates mankind for the (supposed) better interest and survival of humanity.

But you don’t even need to resort to such speculations. I believe the I, Robot scenario is science fiction anyway (I don’t want to rule it out, but I’m calling it SF). There are so many other latent dangers in this experiment besides that one! And it seems that techno-utopians are rushing headlong into these advancements with the giddiness of a schoolgirl’s pubescent fascination with boys.

Let me know your thoughts below in the comments. Do you feel that combining AI with weaponry and military applications raises the same worries that surround nuclear proliferation?
