This AI newsletter is all you need #22
What happened this week in AI by Louis
One word: Galactica.
Galactica, Meta’s most recent large language model, which can store, combine, and reason about scientific knowledge, was shut down after many users reported misleading or incorrect results. Much of the controversy centers on the gap between Meta’s confidence in the model and its rightfully questionable outputs. The demo was not as catastrophic as Microsoft’s Tay incident of 2016, but it too quickly crossed the line between fun experimental tool and dangerous propagator of misinformation. Galactica represents a big advancement for large language models, but given that it was intended for scientific use, it fell far short of the required level of rigor.
On my end, I really liked a tweet shared by my friend Lior, which summarizes my thoughts well. I’d like to quote it here:
“The drama surrounding Galactica baffles me. Let’s remember we’re all on the same team trying to make our tiny field progress.”
Was Galactica perfect? No. But GPT3, StableDiffusion, and Dall-E weren’t either. It’s by releasing it into the world that the feedback loop starts, and these insights help us build better tools over time.
To add the ethical perspective from Lauren: let’s not forget the effects this might have on the world, and our responsibility as AI co-creators to handle those effects, whether negative or positive. This is neither the first nor the last language model to accidentally spread falsehoods, but understanding and learning from these mistakes ensures that the progress we work toward in AI forges the future we want.
- Achieving Individual — and Organizational — Value With AI: A report
The report has many interesting findings and suggests that employees tend to underestimate how much they use AI technologies at work. Key findings: a majority of individual workers personally obtain value from AI and regard it as a coworker rather than a job threat; requiring individuals to use AI encourages adoption more than building trust in AI does, and mandatory use, despite seeming heavy-handed, still delivers individual value; and organizations gain value when individuals do, not at their expense.
- Design app Canva released a beta version of its own text-to-image generator
Yes, another one! I actually like this news. I create all my YouTube thumbnails using Canva and I really like their product. They also have a background removal tool that works quite well and other AI-based tools. This new one might be really powerful too and useful for AI-related thumbnails 😎
- More layoffs…
Twitter and Meta have announced layoffs, and now Amazon is planning to lay off approximately 10,000 employees, one of the largest cuts in the company’s history! For those of you looking for a job, please be patient and try not to be discouraged: you will find something! In the meantime, my best recommendation is to work on your portfolio. Build a cool little app, implement Stable Diffusion, and join one or more Kaggle competitions! Try to enjoy the “free time” you have and leverage it to improve your chances of landing your future dream job 🙂
Most interesting papers of the week
- Galactica: A Large Language Model for Science
Galactica: a large language model that can store, combine and reason about scientific knowledge.
- Latent-NeRF for Shape-Guided Generation of 3D Shapes and Textures
An efficient NeRF approach based on Latent Diffusion Models.
- Extreme Generative Image Compression by Learning Text Embedding from Diffusion Models
“We propose a generative image compression method that demonstrates the potential of saving an image as a short text embedding which in turn can be used to generate high-fidelity images which is equivalent to the original one perceptually.”
Enjoy these papers and news summaries? Get a daily recap in your inbox!
The Learn AI Together Community section!
Meme of the week!
Featured Community post from the Discord
JacobBum#7456 just published “Breaking it Down: K-Means Clustering,” a great article that explores and visualizes the fundamentals of K-means clustering with NumPy and scikit-learn. If you write articles and publish them on your blog or on our Medium publication, share them on our Discord server and you might get a chance to be featured here too!
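For a quick taste of what the article covers, here is a minimal K-means sketch with NumPy and scikit-learn. This is a toy example of my own (synthetic two-blob data, not code from the article):

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy 2-D data: two well-separated Gaussian blobs
rng = np.random.default_rng(0)
blob_a = rng.normal(loc=(0, 0), scale=0.5, size=(50, 2))
blob_b = rng.normal(loc=(5, 5), scale=0.5, size=(50, 2))
X = np.vstack([blob_a, blob_b])

# Fit K-means with k=2; n_init controls how many random restarts
# of the algorithm are run before keeping the best result
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

print(kmeans.cluster_centers_)  # two centroids, near (0, 0) and (5, 5)
print(kmeans.labels_[:5])       # cluster assignments of the first five points
```

With data this clean, the two centroids land near the blob centers regardless of initialization; the article goes deeper into how the iterative assign-and-update steps actually get there.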
AI poll of the week!
TAI Curated section
Article of the week
Training machine learning models can be time- and memory-consuming, especially when your data is large. It is important to optimize the workflow to save computational time and memory, especially when training the model multiple times with different hyperparameters in search of the best configuration. This article shares six practical tips to decrease computational time and memory consumption while training a machine learning model.
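The article’s six tips aren’t reproduced here, but as one illustrative example of the idea (my own, not necessarily one of the article’s tips): downcasting feature arrays from NumPy’s default float64 to float32 halves memory use, usually with negligible impact on model quality.

```python
import numpy as np

# One million rows of synthetic feature data, stored as float64 by default
X = np.random.rand(1_000_000, 10)
print(X.nbytes / 1e6, "MB")  # 80.0 MB

# Downcasting to float32 halves the memory footprint
X32 = X.astype(np.float32)
print(X32.nbytes / 1e6, "MB")  # 40.0 MB
```

Savings like this compound during hyperparameter search, since the same data is loaded and processed for every training run.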
Our must-read articles
If you are interested in publishing with Towards AI, check our guidelines and sign up. We will publish your work to our network if it meets our editorial policies and standards.
Interested in sharing a job opportunity here? Contact firstname.lastname@example.org.