How To Join The Applied AI Revolution



Vijay Raghavan, Executive Vice President and CTO of LexisNexis Risk Solutions, part of RELX

Have you ever wondered whom to thank for some of the modern conveniences you might have started taking for granted, like Siri, Cortana or Alexa (assuming you agree these are conveniences)? The people at the Association for Computing Machinery (ACM) decided to thank Geoffrey Hinton, Yoshua Bengio and Yann LeCun in April of this year by honoring them with the Turing Award for their contributions to deep learning and neural networks.

These contributions are put to use every time you log into your smartphone using fingerprint or facial recognition or when you use Google Photos or a voice assistant, and likely every time you use Amazon, Netflix, Facebook or Instagram. The advances in automatic language translation and autonomous cars in recent years arguably wouldn’t have progressed as rapidly had it not been for the contributions of these three researchers.

All of that is still an understatement of their contributions to artificial intelligence (AI). What they really did was take neural networks from a somewhat marginalized set of concepts as of a couple of decades ago and thrust them back into the limelight via a series of ground-breaking discoveries, even though the rest of the AI community was deeply skeptical of their endeavors. Beyond the algorithms they discovered, their tenacity and willingness to go against the grain of what was then accepted wisdom in AI are truly remarkable.

This is relevant now that “applied AI” is going mainstream and is becoming accessible to many of us from a consumer standpoint (courtesy of gadgets like Amazon Echo), from a commercial applicability standpoint and most certainly from a software engineering discipline standpoint. Within technology organizations, we can make use of AI to do our jobs more effectively using open source AI tools (e.g., TensorFlow or Keras) or cloud-hosted AI APIs (e.g., Alexa APIs from Amazon or Dialogflow APIs from Google). Given how AI is being democratized for the rest of us — because of the work initiated by people like this year’s Turing winners — we have the ability to harness the power of AI quite easily and effectively.

Again, I’m not talking about theoretical AI as discussed in the rarefied atmosphere occupied by Hinton et al., but applied AI that can help us with the automation of coding, testing, software estimation, error correction, anomaly detection, model construction, information security, system uptime, disaster recovery and failover, etc. Understanding the capabilities of these AI frameworks and APIs can potentially reduce our costs, improve our efficiencies and decrease our risk.

So, how does one get started?

There’s plenty we can do to educate our teams via free or inexpensive online courses pertaining to AI and machine learning (via Coursera, Udacity and Udemy) or vendor-specific courses (like those offered by Microsoft, Google and Amazon). Then again, no one is going to become an AI expert overnight by watching a few online courses. But that’s the point of applied AI and is what I mean by AI democratization: We don’t all need to become experts per se to use AI in simple ways internally and iteratively. And it’s become ridiculously easy — and cheap — to experiment and learn. I have several tabletop experiments running on my desk using Raspberry Pi 3 boards (each costing $35) running TensorFlow (which is free). The main investment required is the time to learn and experiment.
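To illustrate just how little machinery a first tabletop experiment needs, here is a sketch of my own (plain Python rather than TensorFlow, so it runs anywhere, including a Raspberry Pi): a single artificial neuron learning the logical AND function by gradient descent. The data, learning rate and epoch count are all illustrative choices, not anything prescribed by a framework.

```python
import math

# Training data for logical AND: (inputs, target output).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

# A single neuron: two weights and a bias, trained by gradient
# descent on squared error with a sigmoid activation.
w1, w2, b = 0.0, 0.0, 0.0
lr = 1.0  # learning rate (illustrative choice)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

for epoch in range(5000):
    for (x1, x2), target in data:
        out = sigmoid(w1 * x1 + w2 * x2 + b)
        # Gradient of squared error, passed back through the sigmoid.
        grad = (out - target) * out * (1.0 - out)
        w1 -= lr * grad * x1
        w2 -= lr * grad * x2
        b  -= lr * grad

# After training, rounding the neuron's output classifies all
# four cases correctly.
predictions = [round(sigmoid(w1 * x1 + w2 * x2 + b)) for (x1, x2), _ in data]
print(predictions)  # [0, 0, 0, 1]
```

That is the whole experiment: no GPU, no cloud account, and the same core idea — weights adjusted by gradients — that the frameworks scale up.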

Let’s get back to one of our Turing Award winners, Geoffrey Hinton, for a minute. There was an annual image recognition competition commonly known as ImageNet (formally, the ImageNet Large Scale Visual Recognition Challenge, or ILSVRC), in which teams competed to write the best code to classify images and recognize objects in a very large collection of pre-assembled, annotated images. Entries were judged on their classification error rate — for example, whether a given program failed to identify a tiger within an image or misidentified a tiger as a zebra. Until 2012, the error rate of the winning entry was over 25%, as you can see from this chart from Measuring the Progress of AI Research by EFF (CC BY-SA).
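The judging criterion is simple to state in code. A toy sketch (the labels and predictions below are invented for illustration; the real benchmark scored top-5 error over 1,000 classes, whereas this version uses top-1 over a handful of labels):

```python
# An ILSVRC-style entry is scored by its error rate: the fraction
# of images the classifier labels incorrectly.
ground_truth = ["tiger", "zebra", "tiger", "elephant", "zebra"]
predictions  = ["tiger", "tiger", "tiger", "elephant", "zebra"]

errors = sum(1 for truth, guess in zip(ground_truth, predictions)
             if truth != guess)
error_rate = errors / len(ground_truth)
print(f"error rate: {error_rate:.0%}")  # one miss in five -> 20%
```

The competition's entire leaderboard reduces to driving this one number down on a held-out set of images.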

In 2012, Hinton and two of his Ph.D./postdoc students jumped into the fray with their AlexNet submission and rocked the AI world by lowering the error rate down to about 16% — a huge leap — by using a convolutional neural network. Since then, similar techniques (in many cases, based on Hinton’s and his colleagues’ body of work) have been used to knock the error rate down even further. The horizontal red line in the chart represents human performance — at a 5% error rate — which means that by around 2016, a neural network could classify images (or at least, the images in ImageNet) better than humans could.
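The building block behind those gains, the convolution, is itself quite small. Here is a minimal sketch (plain Python, no framework; a real CNN like AlexNet stacks many such filters with learned weights, nonlinearities and pooling, while this example uses a hand-picked filter) of sliding a 2x2 edge-detecting filter over a tiny image:

```python
def convolve2d(image, kernel):
    """Valid-mode 2D convolution (strictly, cross-correlation, as in
    most deep learning libraries) of a grayscale image with a kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    output = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            # Sum of elementwise products over the kernel window.
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        output.append(row)
    return output

# A 4x4 "image" with a vertical edge down the middle...
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
# ...and a filter that responds to left-to-right brightness changes.
kernel = [[-1, 1],
          [-1, 1]]

print(convolve2d(image, kernel))  # [[0, 2, 0], [0, 2, 0], [0, 2, 0]]
```

The output lights up exactly where the edge is. In a trained network, the kernel values are not hand-picked as here but learned from data, which is what made AlexNet's filters so effective.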

Impressive stuff. These kinds of advances have implications for the way in which we build and deliver software. It’s therefore incumbent on companies today to get up to speed on these tools and techniques. Until recently, “learning AI” would have sounded formidable to most people — even those in the software engineering profession. But new discoveries and improvements are being made every week, resulting in AI in varying forms being made more and more accessible. For example, MIT just announced a new general-purpose probabilistic programming system for AI that can be used by novices and experts alike.

I realize there are different schools of thought when it comes to such democratization of once-esoteric concepts. Some revolve around the adage “A little knowledge is a dangerous thing.” That’s true, but good tools in inexperienced hands don’t make the tools bad; ideally, they make those hands more experienced, more quickly.

There will always be great programmers and mediocre programmers, regardless of what tools they use (as in any field). Making AI tools easier to use can only accelerate progress, and I’m betting on it improving the software development life cycle itself, so we can build products better, cheaper and faster.

Originally published as part of the Forbes Technology Council.


