The NLP Cypher | 07.04.21




Hey, welcome back! Want to wish everyone in the US a happy 4th of July 🎆🎇! Also want to quickly mention that the NLP Index has doubled in size since its inception and now houses over 6,000 repos, pretty cool!!! 😎 And as always, it gets updated weekly. But first, this week we asked 100 NLP developers: name one thing Microsoft got for paying $7.5 billi for GitHub and $1 billi to OpenAI? SURVEY SAYS:

7.5B + 1B = GitHub CoPilot 👍

If you want to hear GitHub’s take on their new code-generating assistant, read here:

Also, it turns out CoPilot is a serial killer 👀, but at least the code is readable. 💪

from @minimaxir

Microsoft’s Hub of Goodies

Hey, did you know Microsoft has a stash of models tucked away in their repository, spanning NLU, document understanding, cross-lingual pre-training and more? If these models interest you, follow this page (a quick loading sketch follows the list):

UniLM (v1@NeurIPS'19 | v2@ICML'20 | v3@ACL'21): unified pre-training for language understanding and generation

InfoXLM (v1@NAACL'21 | v2@ACL'21): multilingual/cross-lingual pre-trained models for language understanding and generation

DeltaLM (NEW): encoder-decoder pre-training for language generation and translation by augmenting pretrained multilingual encoders

MiniLM (v1@NeurIPS'20 | v2@ACL'21): small and fast pre-trained models for language understanding and generation

AdaLM (v1@ACL'21): domain, language, and task adaptation of pre-trained models

LayoutLM (v1@KDD'20 | v2@ACL'21): multimodal (text + layout/format + image) pre-training for document understanding (e.g. scanned documents, PDF, etc.)

LayoutXLM (NEW): multimodal (text + layout/format + image) pre-training for multilingual document understanding

BEiT (NEW): BERT Pre-Training of Image Transformers

s2s-ft: sequence-to-sequence fine-tuning toolkit

XLM-T (NEW): Multilingual NMT w/ pretrained cross-lingual encoders
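Several of these checkpoints are also mirrored on the Hugging Face Hub under the “microsoft” org, so you can try them without cloning the repo. A minimal loading sketch; the exact model IDs below are our assumption from the Hub, not from the page above:

```python
# Minimal sketch: pulling two of the Microsoft checkpoints from the HF Hub.
# Model IDs are our assumption from the Hub, not from the page above.
from transformers import AutoModel, AutoTokenizer

# LayoutLM for document understanding
tok = AutoTokenizer.from_pretrained("microsoft/layoutlm-base-uncased")
layoutlm = AutoModel.from_pretrained("microsoft/layoutlm-base-uncased")

# MiniLM as a small, fast general-purpose encoder
minilm = AutoModel.from_pretrained("microsoft/MiniLM-L12-H384-uncased")
```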

Multimodal Few-Shot Learning with Frozen Language Models

DeepMind took the few-shot learning ability of models like GPT-3 and applied it in the multimodal arena.

LINK
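The gist of the “Frozen” recipe: train a vision encoder to emit a couple of embeddings that are prepended to the text sequence, while the pretrained language model itself stays frozen. A toy sketch of our reading of the paper (not DeepMind’s code), assuming a Hugging Face-style LM that accepts inputs_embeds; dimensions and names are illustrative:

```python
# Toy sketch of the "Frozen" idea: a trainable vision encoder maps an image
# to a few prefix embeddings, and a frozen pretrained LM conditions on them
# as if they were ordinary tokens. Our illustration, not the paper's code.
import torch
import torch.nn as nn

class FrozenStylePrefix(nn.Module):
    def __init__(self, lm, vision_encoder, vis_dim=2048, lm_dim=768, n_prefix=2):
        super().__init__()
        self.lm = lm
        for p in self.lm.parameters():        # the language model stays frozen
            p.requires_grad = False
        self.vision_encoder = vision_encoder  # trained on image-caption pairs
        self.to_prefix = nn.Linear(vis_dim, n_prefix * lm_dim)
        self.n_prefix, self.lm_dim = n_prefix, lm_dim

    def forward(self, image, token_embeds):
        v = self.vision_encoder(image)                     # (B, vis_dim)
        prefix = self.to_prefix(v).view(-1, self.n_prefix, self.lm_dim)
        inputs = torch.cat([prefix, token_embeds], dim=1)  # image-as-tokens
        # assumes an HF-style LM that accepts inputs_embeds
        return self.lm(inputs_embeds=inputs)
```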

HuggingFace Course Notes Summary

A summary of HF’s free Transformers NLP course:
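For a taste of what the course opens with: the pipeline API gets you a working model in a few lines (weights download on the first call; the default checkpoint is whatever transformers ships for the task):

```python
# Minimal example of the pipeline API the course starts from.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
print(classifier("The NLP Index now houses over 6,000 repos!"))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```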

Vision Transformers Introduction

An overview of the inner workings of Vision Transformers by Paperspace.
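The core ViT move the post walks through: slice an image into fixed-size patches, embed each patch as a “token”, and hand the sequence to a plain Transformer encoder. A minimal sketch of the patch-embedding step; the patch size and width follow the standard ViT-Base config, not the Paperspace post specifically:

```python
# Patch embedding: one conv with stride = kernel = patch size turns the image
# into a grid of patch embeddings, which we flatten into a token sequence.
import torch
import torch.nn as nn

patch_size, dim = 16, 768
patch_embed = nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size)

image = torch.randn(1, 3, 224, 224)            # (B, C, H, W)
patches = patch_embed(image)                   # (1, 768, 14, 14)
tokens = patches.flatten(2).transpose(1, 2)    # (1, 196, 768): one token per patch
cls_token = nn.Parameter(torch.zeros(1, 1, dim))
tokens = torch.cat([cls_token.expand(tokens.size(0), -1, -1), tokens], dim=1)
print(tokens.shape)                            # torch.Size([1, 197, 768])
```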

Prompting With LM-BFF

After reading the DeepMind paper above, the following article dovetails nicely into prompting (thank you, GPT-3). The blog is written by Tianyu Gao, whose paper was featured at ACL 2021:

Paper: https://arxiv.org/pdf/2012.15723.pdf

The paper discusses a new prompting technique for smaller models called LM-BFF.
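To make the prompting idea concrete: wrap the input in a template and compare a masked LM’s scores for a handful of label words. The template (“It was [MASK].”) and label words (great/terrible) follow the paper’s sentiment example; the model choice and helper code below are our own illustration, not the authors’ released code:

```python
# Prompt-based classification in the LM-BFF spirit: score label words at the
# [MASK] position of a template. Hand-rolled sketch, not the authors' code.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("roberta-base")
mlm = AutoModelForMaskedLM.from_pretrained("roberta-base")

text = "No reason to watch it."
prompt = f"{text} It was {tok.mask_token}."    # the paper's sentiment template
inputs = tok(prompt, return_tensors="pt")

with torch.no_grad():
    logits = mlm(**inputs).logits
mask_pos = (inputs.input_ids == tok.mask_token_id).nonzero()[0, 1]

label_words = {" great": "positive", " terrible": "negative"}
ids = [tok.encode(w, add_special_tokens=False)[0] for w in label_words]
scores = logits[0, mask_pos, ids].softmax(-1)
print(dict(zip(label_words.values(), scores.tolist())))
```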

Gradient Blog

Code

70+ Python Projects

An aggregation of Python project tutorials ranging from web scraping, building a blockchain, and face detection to building your own ciphers and more…
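For a taste of the starter projects on lists like this, here’s a tiny scraper sketch with requests and BeautifulSoup (the URL is just a placeholder):

```python
# Minimal web scraper: fetch a page and print every link it contains.
import requests
from bs4 import BeautifulSoup

resp = requests.get("https://example.com")  # placeholder URL
soup = BeautifulSoup(resp.text, "html.parser")
for link in soup.find_all("a"):
    print(link.get("href"), link.get_text(strip=True))
```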

Repo Cypher 👨‍💻

A collection of recently released repos that caught our 👁
