🍊 The Juice: Check Me Out
Zumo Labs presents The Juice, a weekly newsletter focused on computer vision problems (and sometimes just regular problems). Get it while it’s fresh.
Week of February 15–19, 2021
In the before times, it wasn’t uncommon to skip right over the self checkout option at a grocery store in favor of a traditional checkout experience. Cashier-supported transactions provided a brief moment of human interaction, while also sparing you the ordeal of correctly identifying your chosen produce. Now, with things unlikely to ever return to “normal,” lots of folks are betting that computer vision-powered autonomous checkout systems are the next big thing.
This week Standard Cognition announced they’ve raised a $150 million Series C at a $1 billion valuation, making them the first autonomous checkout unicorn. Standard’s approach to autonomous checkout relies exclusively on cameras — unlike Amazon Go’s approach, which includes shelf sensors as well — as they hope to create a solution that’s an easy retrofit for grocery stores. But replacing human cashiers with computer vision isn’t going to happen overnight. Standard Cognition is aiming to be in just 100 stores by the end of 2021. As their CEO, Jordan Fisher, says in this Bloomberg piece announcing the raise, “We were definitely off the mark in appreciating how hard this was going to be.”
How long until you’re using a smart cart at your local supermarket, or shopping at the neighborhood Amazon Go? Let us know what you think on Twitter.
Coral reefs are delicate ecosystems that are vital to ocean health. Unfortunately, they’re also incredibly vulnerable to pollution, ocean acidification, and overfishing. Efforts to monitor reef health have been ongoing, but to date most have required human analysis and labeling, slowing preservation efforts. Now a team at Accenture has partnered with the Australian Institute of Marine Science to automate a portion of that process using computer vision.
How computer vision can protect coral reefs, via Computer Weekly.
LA has tons of parking. It’s a consequence of the city’s minimum parking laws. And while many see that as a problem (since it reduces the space available for housing and drives up its cost), one company sees it as an opportunity. Metropolis, an LA-based startup, uses computer vision at parking facilities to enable frictionless entry and exit. And they just raised a $41 million Series A.
The world’s first humanoid robot artist, Ai-Da, is set to debut a series of self portraits she composed while looking in a mirror. Ai-Da relies on computer vision to translate what she sees into coordinates that move her (deeply unsettling) robot arms. The show is expected to kick off at the Design Museum in London this May, pandemic permitting.
Google announced today that it’s restructuring its AI teams under Marian Croak, Vice President of Engineering. Both Google’s Responsible AI Research and Engineering Center of Expertise teams will report into Croak, who will in turn report to Senior Vice President of Google AI, Jeff Dean. The move comes on the heels of the contentious dismissal of Timnit Gebru, and subsequent employee backlash. And it seems the reorg caught at least some folks inside Google off guard.
Google to Reorganize AI Teams in Wake of Researcher’s Departure, via Bloomberg.
This piece in The New Yorker explores the idea of social responsibility in ML research, but it’s generating its own conversation on Twitter as to whether it engages with some of its central issues critically enough. From the piece, “Michael Kearns, a computer scientist at the University of Pennsylvania and a co-author of ‘The Ethical Algorithm,’ told me that we are in ‘a little bit of a Manhattan Project moment’ for A.I. and machine learning. ‘The academic research in the field has been deployed at massive scale on society,’ he said. ‘With that comes this higher responsibility.’”
Who Should Stop Unethical A.I.?, via The New Yorker.
📄 Paper of the Week
DeepMind breaks the game yet again by releasing a SOTA image classification model that is smaller and faster to train. This paper explores what batch normalization has actually been doing this whole time and arrives at an adaptive gradient clipping technique that achieves a similar regularization effect without all the extra training overhead. High fives all around, except for a sneaky sentence mentioning a “large private dataset of 300 million labelled images.” What is that dataset, DeepMind, and why is it private? The people demand an answer!
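The core trick is simpler than it sounds: instead of clipping gradients to a fixed norm, you clip each unit’s gradient relative to the norm of the weights it updates. Here’s a minimal NumPy sketch of that idea; the function name, `clip` threshold, and `eps` floor are our own illustrative choices, not the paper’s exact implementation.

```python
import numpy as np

def adaptive_grad_clip(param, grad, clip=0.01, eps=1e-3):
    """Sketch of adaptive gradient clipping (AGC) for one weight tensor.

    The gradient for each output unit (row) is rescaled whenever its norm
    exceeds `clip` times the norm of the corresponding weights, so updates
    stay proportional to the weights they modify.
    """
    # Flatten to (units, fan-in) so we get one norm per output unit.
    p = param.reshape(param.shape[0], -1)
    g = grad.reshape(grad.shape[0], -1)
    # Floor the weight norm with eps so freshly initialized (near-zero)
    # weights can still receive a gradient.
    p_norm = np.maximum(np.linalg.norm(p, axis=1, keepdims=True), eps)
    g_norm = np.linalg.norm(g, axis=1, keepdims=True)
    max_norm = clip * p_norm
    # Rescale only the rows whose gradient norm is too large.
    scale = np.where(g_norm > max_norm, max_norm / np.maximum(g_norm, 1e-12), 1.0)
    return (g * scale).reshape(grad.shape)
```

Small gradients pass through untouched; only the rows that would move a unit too far relative to its current weights get scaled down, which is what lets the network train stably without batch norm’s implicit smoothing.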
Think The Juice was worth the squeeze? Sign up here to receive The Juice weekly.