Defining Bias | The Juice


Zumo Labs presents The Juice, a weekly newsletter focused on computer vision problems (and sometimes just regular problems). Get it while it’s fresh.

Week of May 31–June 4, 2021


Nearly every week, we feature at least one story centered on algorithmic bias. We aren't cherry-picking. It's just that pervasive, as pervasive as unconscious bias, precisely because it's a direct byproduct of it. So, what can we do about it?

As with unconscious bias, the first step is educating ourselves as to the ways algorithmic bias manifests. How is it defined? What does it look like in practice? And why is it so dangerous? Elena has the answers in our most recent blog post. Only through awareness and hypervigilance (and, yes, some synthetic data) can we build fairer AI systems for a more equitable future.

Now do your part. Share the link.



Tabular synthetic data, which is completely made up but maintains the statistical distributions of a real columns-and-rows dataset, preserves privacy in a way traditional data cannot. Protecting the privacy of respondents is what Census Bureau statisticians had in mind recently when they announced they’re looking to phase in synthetic data over the next three years. Researchers, however, are concerned the algorithmically generated data “will not be suitable for research.”

Census Bureau’s use of ‘synthetic data’ worries researchers, via AP.
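The core idea behind tabular synthesis, preserving a real table's statistical distributions without reproducing any real row, can be illustrated with a toy sketch. Note this is only an illustration we wrote for this newsletter, not the Census Bureau's method: it resamples each column independently, which preserves marginal distributions but deliberately severs cross-column links (production systems model the joint distribution and add formal privacy guarantees on top).

```python
import random

def synthesize_table(rows, n_samples, seed=0):
    """Generate synthetic rows by resampling each column independently.

    Each synthetic value really occurred somewhere in the real column,
    so per-column statistics are preserved, but whole rows need not
    match any real respondent. Toy illustration only; real tabular
    synthesis models joint distributions and privacy budgets.
    """
    rng = random.Random(seed)
    columns = list(zip(*rows))  # transpose: list of columns
    return [
        tuple(rng.choice(col) for col in columns)  # one draw per column
        for _ in range(n_samples)
    ]

# Hypothetical respondent table: (age, state)
real = [(34, "WA"), (29, "OR"), (51, "WA"), (42, "CA")]
fake = synthesize_table(real, n_samples=100)
```

The researchers' worry quoted above maps directly onto this trade-off: the more aggressively the synthesis decouples fields to protect privacy, the less faithfully the data supports analyses that depend on relationships between columns.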

Meanwhile, we often discuss data scarcity as one of the core value propositions of graphical synthetic data. You can’t train an algorithm without enough of the right images. Here, an AI consultancy partnered with a major auto manufacturer to automate warranty claims, but privacy restrictions on user-submitted data meant they had to turn to synthetic training data instead. (Spoiler: it worked great.)

The power of synthetic images to train AI models, via VentureBeat.


Last week, several digital rights organizations announced that they’ve filed legal complaints against facial recognition service provider Clearview AI. “Extracting our unique facial features or even sharing them with the police and other companies goes far beyond what we could ever expect as online users,” said one Privacy International legal officer, about Clearview’s nasty habit of hoovering up images from services like Instagram, LinkedIn, and YouTube.

Clearview AI hit with sweeping legal complaints over controversial face scraping in Europe, via The Verge.


While some look at manicures as a welcome respite from the day-to-day, others see them as time-consuming maintenance. Now several companies have launched devices intended to streamline the process, and at least two of them are using computer vision to learn to execute the perfect French tip. It may seem scary jamming your fingertips into a box that looks like an electric pencil sharpener, but one co-founder allays those concerns by saying they use “a plastic-tipped cartridge that will not pierce a finger.” 💅

Want your nails done? Let a robot do it, via The Seattle Times.

A toaster-sized computer vision-powered nail painting device, via Nimble.


In April, a Consumer Reports review of Tesla’s Autopilot found that “the system not only failed to make sure the driver was paying attention, but it also couldn’t tell if there was a driver there at all.” Perhaps in response to that, or perhaps in response to other recent news, Tesla has pushed an update activating the in-cabin cameras to monitor drivers. According to the release notes, camera data is processed locally — so only your car will know if you’re misbehaving.

Tesla has activated its in-car camera to monitor drivers using Autopilot, via TechCrunch.


“I don’t see much of a path forward for ethics at Google in any kind of substantive way.” That’s one of the ten people on Google’s own ethical AI team, speaking in response to the current state of things over there. Color us surprised that a group designed to highlight ethical issues in AI systems is not receiving institutional support from the company selling those systems.

Google says it’s committed to ethical AI research. Its ethical AI team isn’t so sure., via Recode.


📄 Paper of the Week

AndroidEnv: A Reinforcement Learning Platform for Android

Go, Atari, or CartPole might come to mind when thinking about reinforcement learning environments. This paper presents an RL environment that controls the touchscreen of an Android phone: the agent can swipe, tap, and wait for visual input on an emulated screen while a clone of the Android OS responds. The researchers run standard RL algorithms against a test set of phone apps, reaching human-level performance on some. Given that a massive share of the world uses phones as their primary device, anything trained in this environment could instantly operate at huge scale. Releasing it as open source, immediately available to anyone in the world, is powerful.
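The interaction loop the paper describes (observe screen pixels, emit a touch action, repeat) looks like any other RL environment loop. The sketch below uses a mock environment we invented so it runs anywhere; the real AndroidEnv API (github.com/deepmind/android_env) differs in its action and observation specs.

```python
import random

class MockTouchscreenEnv:
    """Toy stand-in for a touchscreen environment: the agent must tap a
    hidden target location. Not the actual AndroidEnv API."""

    def __init__(self, seed=0):
        self.rng = random.Random(seed)

    def reset(self):
        # New episode: place a target somewhere on the unit-square screen.
        self.target = (self.rng.uniform(0, 1), self.rng.uniform(0, 1))
        return {"screen": "pixels-would-go-here"}

    def step(self, action):
        # action: {"type": "tap" | "wait", "position": (x, y) in [0, 1]^2}
        if action["type"] == "tap":
            x, y = action["position"]
            tx, ty = self.target
            hit = abs(x - tx) < 0.05 and abs(y - ty) < 0.05
            return {"screen": "pixels-would-go-here"}, float(hit), hit
        return {"screen": "pixels-would-go-here"}, 0.0, False

# A random-tapping agent, the usual baseline before real RL training.
env = MockTouchscreenEnv()
obs, total_reward = env.reset(), 0.0
for _ in range(200):
    action = {"type": "tap", "position": (random.random(), random.random())}
    obs, reward, done = env.step(action)
    total_reward += reward
    if done:
        obs = env.reset()
```

Swapping the mock for a real emulated phone is exactly what makes the release interesting: the same loop, unchanged, suddenly points at the entire Android app ecosystem.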


Think The Juice was worth the squeeze? Sign up here to receive it weekly.


