The NLP Cypher | 07.18.21


Sometimes… cool things happen. Facebook AI released a new chatbot this Friday with remarkable features. The bot, BlenderBot 2.0, is an improvement on their previous bot from last year: it has better long-term memory and can search the internet for information during a conversation! This is a welcome improvement over traditional bots, since information is no longer statically “memorized” but can instead stay up to date via the internet. 🤯

I’ve recently tested the model, trialing the smaller 400M variant. Currently, there exist two variants:

  • BlenderBot 2.0 400M: --model-file zoo:blenderbot2/blenderbot2_400M/model
  • BlenderBot 2.0 2.7B: --model-file zoo:blenderbot2/blenderbot2_3B/model

Getting started with ParlAI is straightforward. For this model, you need to specify two arguments: the model file and a search server (for the internet queries). FYI, I have yet to figure out the search server argument. I tried different approaches, such as specifying a normal search URL (e.g. ‘https://www.google.com/search?q=’) and even an API endpoint (the Bing API), but both were unsuccessful. The ParlAI documentation doesn’t give an example URL to use, so right now I’m still debugging this issue. You can still download and interact with the bot; it just won’t leverage the internet for queries and will rely on its memory instead. At least we’re in prime position to get the model fully up and running once the search server issue is resolved.
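
For reference, here’s a rough sketch of what launching the 400M variant from Python could look like via ParlAI’s script interface; the search_server value is a placeholder, and passing it as a keyword here is my assumption rather than something I’ve confirmed:

```python
# Rough sketch: start an interactive session with BlenderBot 2.0 400M
# (the Python equivalent of `parlai interactive` on the CLI).
from parlai.scripts.interactive import Interactive

Interactive.main(
    model_file="zoo:blenderbot2/blenderbot2_400M/model",
    # Placeholder endpoint: this is exactly the argument I haven't cracked yet,
    # so treat both the value and the kwarg name as assumptions.
    search_server="0.0.0.0:8080",
)
```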

For the 400M variant, you’ll need a minimum of 20–25GB of RAM for the model to fit in memory, and at least 6GB of VRAM for GPU inference (my testing used a V100 🥶). You’ll want a virtual machine instance in the cloud, since you’ll most likely run into issues if you use Colab (this is also confirmed in ParlAI’s GitHub issues thread).

Most bots thus far have been pretty boring, but I think this new internet-search feature will make them a lot more useful. An AI model that queries the internet for information is likely to be the most scalable solution for open-ended dialogue in the near future.

…Talkin’ bout Search…

Someone built a StackOverflow search engine 😁

https://searchoverflow.com

Copilot Interaction Video

Still interested in seeing GitHub Copilot get a workout in real time? Watch this vid:

HTLM Language Models — Structure is All You Need

New paper out on using HTML markup as input prompts for generative tasks. The authors show that by directly modeling HTML through a BART-like objective, they can do structured zero-shot prompting by representing tasks in HTML, such as summarization.

LINK
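
To make the idea concrete, here’s a rough sketch of what a summarization prompt in this style could look like (my paraphrase of the approach, not the paper’s exact template): the article goes in the body, and the model infills the masked title, which then serves as the summary.

```python
# Illustrative only: a summarization task expressed as HTML infilling, in the
# spirit of the paper (the exact template and mask token may differ).
document = "Full text of the article to be summarized."

prompt = (
    "<html>"
    "<title><mask></title>"        # the infilled title acts as the summary
    f"<body>{document}</body>"
    "</html>"
)
print(prompt)
```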

Multi-Lingual Language Models Survey

If you work with multi-lingual or even low-resource languages, this paper is a must-read. The survey discusses the architectural structure of these types of models, cross-lingual transfer attributes, multi-lingual vs. monolingual model performance, and more…

LINK

Reasoning with Language Models and Knowledge Graphs for Question Answering

What happens when LMs and KGs come together for question answering:

A Billi Rows and SQLite

One man’s quest to insert 1 billion rows into SQLite in under a minute (33 seconds, to be exact).

The schema and the final Rust implementation are shown in the blog post. With Rust, anything’s possible.

Blog
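
As a generic illustration of the usual recipe (batched inserts inside a single transaction, plus relaxed durability pragmas), here’s a quick Python sketch; the table and values are made up, and this is not the author’s code:

```python
# Generic bulk-insert sketch for SQLite; the `user` table here is a made-up example.
import random
import sqlite3

conn = sqlite3.connect("speed_test.db")
# Relax durability settings, a common trick for one-off bulk loads.
conn.execute("PRAGMA journal_mode = OFF")
conn.execute("PRAGMA synchronous = 0")
conn.execute("PRAGMA cache_size = 1000000")
conn.execute(
    "CREATE TABLE IF NOT EXISTS user (id INTEGER PRIMARY KEY, age INTEGER, active INTEGER)"
)

rows = ((i, random.randint(18, 99), random.randint(0, 1)) for i in range(1_000_000))
with conn:  # one transaction for the whole batch instead of one per row
    conn.executemany("INSERT INTO user VALUES (?, ?, ?)", rows)
conn.close()
```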

FAISS Tutorial w/ Code

For peeps interested in scalable search:
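
If you want a taste before the full tutorial, here’s a bare-bones sketch of exact nearest-neighbour search with FAISS (mine, not taken from the linked tutorial):

```python
# Minimal FAISS example: exact (brute-force) L2 search over random vectors.
import numpy as np
import faiss

d = 128                                                 # vector dimensionality
xb = np.random.random((10_000, d)).astype("float32")    # database vectors
xq = np.random.random((5, d)).astype("float32")         # query vectors

index = faiss.IndexFlatL2(d)            # exact L2 index
index.add(xb)                           # add database vectors to the index
distances, ids = index.search(xq, 5)    # top-5 nearest neighbours per query
print(ids)
```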

Bloomberg and Spotify Stunts at SIGIR 2021

For the information extraction peeps:

Bloomberg:

https://www.techatbloomberg.com/blog/bloombergs-ai-researchers-engineers-publish-3-papers-at-sigir-2021

Spotify:

Repo Cypher 👨‍💻

A collection of recently released repos that caught our 👁

