I Want the Best of Both Worlds



Mixing Machine Vision and New AI Capabilities

Written by Ed Goffin, Sr. Marketing Manager at Pleora Technologies.

“I want the best of both worlds.” I don’t think Sammy Hagar was singing about machine vision and AI while fronting Van Halen (as a second-rate replacement for David Lee Roth, but that’s a topic for a whole other blog), but it’s a hot topic for manufacturers.

Undoubtedly, there’s a lot of hype around AI. Trade shows have a heavy emphasis on AI, from companies developing models to more intelligent self-navigating factory floor robotics. Venture capital invested $75 billion in AI startups in 2020. Beyond startups, a number of well-established players in the vision industry — including Pleora — are now applying their machine vision expertise to new AI solutions.

But the hype around new technologies is often a few leaps ahead of reality for end-users. That's not unusual; the best technology companies develop solutions in anticipation of our future needs.

In my opinion, based in part on observation along with a good dose of speculation, we are somewhere between hype and reality for AI and quality inspection. It reminds me of seeing the first-generation iPhone a few months before market launch. I was just barely past the flip phone stage, still amazed at receiving email on my portable device. Where was the keyboard on this first-gen? Why would I need Internet access on a mobile device when I had a laptop in my briefcase?


My device was simple to use, comfortable and did the job I needed it to do. Yet, as more users adopted the next generation of mobile devices, I quickly saw gaps in my technology choice. You can see maps on your phone? Browse the web using apps? Wait, there’s a camera with an editor?

Machine vision and AI are similar. Speaking with a number of manufacturers at a recent event, the general consensus became clear: "machine vision does a good job." It's a proven technology backed by decades of deployments and technology investments. But, as manufacturing becomes more complex and end-users increasingly demand perfection, it has capability gaps.

Rules-based inspection excels until manufacturers are producing products with different thresholds for what is considered an error. That may include regional or customized products, or products graded for different end markets. Traditional rules-based inspection also struggles with irregular materials such as textiles, and with metal and glass where reflection is an issue. As a result, manufacturers face downtime and costs as they complete a secondary inspection following an inconclusive machine vision decision.

AI promises to help fill that gap, primarily by bringing a degree of consistency to those subjective decisions. We’ve had a few customer conversations with textile manufacturers, where a certain level of inconsistency is acceptable, or even desirable depending on the end market. A good example is hardwood flooring, where scratches and defects are unacceptable but a certain amount of grain and inconsistency is desirable. It’s a subjective decision, difficult to program into a rules-based system, but well-suited to a more adaptive AI approach.

So this brings me back to Van Halen. Manufacturers can mix “the best of” machine vision and new AI capabilities with a hybrid AI approach. This means retaining your existing machine vision infrastructure, processing, and end-user applications while taking advantage of new edge processing power to add in AI capabilities.

In this scenario, the AI algorithm is trained and deployed to an edge processing device, which acts as an intermediary between the camera and the host PC. The embedded device “mimics” the camera for existing applications and automatically acquires the images and applies the required AI skills. Processed data is then sent to the inspection application, which receives it as if it were still connected directly to the camera.
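To make the data flow concrete, here is a minimal sketch of that intermediary role. All names (`EdgeDevice`, `run_inference`) and the brightness-based "AI skill" are hypothetical stand-ins; a real deployment would use the vendor's camera SDK and a trained model rather than plain Python lists.

```python
def run_inference(frame):
    """Hypothetical AI skill: flag frames whose mean brightness is anomalous."""
    mean = sum(frame) / len(frame)
    return {"frame": frame, "defect": mean < 50}  # toy threshold for illustration


class EdgeDevice:
    """Sits between the camera and the host PC; it 'mimics' the camera so the
    existing inspection application needs no changes."""

    def __init__(self, camera_frames):
        # Stand-in for a live camera stream.
        self.camera_frames = camera_frames

    def stream_to_host(self):
        # Acquire each frame, apply the AI skill, forward processed data
        # to the host application as if it came straight from the camera.
        for frame in self.camera_frames:
            yield run_inference(frame)


frames = [[200, 210, 190], [10, 20, 15]]  # one bright frame, one dark frame
results = list(EdgeDevice(frames).stream_to_host())
print([r["defect"] for r in results])  # → [False, True]
```

The key design point is that the host-side application's interface is unchanged: it consumes the same stream it always did, only now each frame arrives with an AI decision attached.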

As a first step, manufacturers could begin using AI as a secondary inspection tool, processing imaging data with AI skills in parallel to traditional processing tools. If a defect is detected, processed video from the embedded device can confirm or reject the result as a secondary inspection. Images and data can also be used to continue training the AI model until the manufacturer has complete confidence in the results.
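The confirm/reject workflow above can be sketched as a small decision function. Both checker functions, their thresholds, and the retraining queue are hypothetical illustrations, not Pleora's actual pipeline:

```python
def rules_based_check(measurement, limit=5.0):
    """Traditional rules-based pass/fail: defect if deviation exceeds limit."""
    return measurement > limit


def ai_check(measurement):
    """Stand-in for a trained AI skill; here just a looser learned threshold."""
    return measurement > 6.0


def inspect(measurement, retrain_queue):
    if not rules_based_check(measurement):
        return "pass"
    # Rules flagged a defect: run the AI skill as a secondary inspection.
    if ai_check(measurement):
        return "reject"  # both methods agree it is a defect
    # Disagreement: hold for operator review and feed back into model training.
    retrain_queue.append(measurement)
    return "review"


queue = []
print([inspect(m, queue) for m in [3.0, 5.5, 8.0]])  # → ['pass', 'review', 'reject']
print(queue)  # → [5.5]
```

The "review" branch is where confidence is built over time: every disagreement becomes labeled training data, so the AI gradually takes over the inconclusive cases that previously forced a manual secondary inspection.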

Alongside advanced edge processing, new "no-code" AI algorithm training tools help bypass what have traditionally been time-consuming and potentially expensive steps to train, optimize, and deploy models. With a more intuitive drag-and-drop approach, manufacturers can develop computer vision and AI plug-ins without requiring specialized skills.

You can learn more about hybrid AI in this Whitepaper or download our AI Solutions brochure for an overview of edge processing and algorithm development.


