GPT-3 Scared You? Meet Wu Dao 2.0: A Monster of 1.75 Trillion Parameters

Wu Dao 2.0’s fantastic capabilities

Multitasking

In an article for VentureBeat, Kyle Wiggers emphasized Wu Dao 2.0’s multimodal capabilities: It has “the ability to perform natural language processing, text generation, image recognition, and image generation tasks. […] as well as captioning images and creating nearly photorealistic artwork, given natural language descriptions.”

Andrew Tarantola writes for Engadget that Wu Dao 2.0 can “both generate alt text based off of a static image and generate nearly photorealistic images based on natural language descriptions. [It can also] predict the 3D structures of proteins, like DeepMind’s AlphaFold.”

Leading researcher Tang Jie highlighted Wu Dao 2.0’s skills in “poetry creation, couplets, text summaries, human setting questions and answers, painting” and even acknowledged that the system “ha[s] been close to breaking through the Turing test, and competing with humans.”

Wu Dao 2.0 has nothing to envy GPT-3 or any other existing AI model. Its multitasking abilities and multimodal nature grant it the title of most versatile AI to date. These results suggest that multimodal, multitasking AIs will dominate the field in the years ahead.

Benchmark achievements

Wu Dao 2.0 reached or surpassed state-of-the-art (SOTA) levels on 9 benchmark tasks widely recognized by the AI community, as reported by BAAI. Each entry below lists the benchmark followed by the claimed achievement.

  • ImageNet (zero-shot): SOTA, surpassing OpenAI CLIP.
  • LAMA (factual and commonsense knowledge): Surpassed AutoPrompt.
  • LAMBADA (cloze tasks): Surpassed Microsoft Turing NLG.
  • SuperGLUE (few-shot): SOTA, surpassing OpenAI GPT-3.
  • UC Merced Land Use (zero-shot): SOTA, surpassing OpenAI CLIP.
  • MS COCO (text-to-image generation): Surpassed OpenAI DALL·E.
  • MS COCO (English image retrieval): Surpassed OpenAI CLIP and Google ALIGN.
  • MS COCO (multilingual image retrieval): Surpassed UC² (best multilingual and multimodal pre-trained model).
  • Multi30K (multilingual image retrieval): Surpassed UC².

These results are undeniably impressive: Wu Dao 2.0 reaches excellent levels on key benchmarks across tasks and modalities. However, BAAI hasn’t released a quantitative comparison between Wu Dao 2.0 and the previous SOTA models on these benchmarks. Until a paper is published, we can’t tell by how much Wu Dao 2.0 actually outperforms them.
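
To make the zero-shot entries above (ImageNet and UC Merced Land Use) concrete: zero-shot classification means the model labels images it was never fine-tuned on, simply by matching them against text descriptions of the candidate classes. The sketch below illustrates that setup with OpenAI’s CLIP, the model Wu Dao 2.0 is compared against; the image path and label list are placeholders, and this is not Wu Dao 2.0’s own code, which hasn’t been released.

```python
# Illustrative zero-shot image classification with CLIP (not Wu Dao 2.0 itself).
# Requires the `transformers` and `Pillow` packages; "photo.jpg" is a placeholder.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("photo.jpg")
labels = ["a photo of a cat", "a photo of a dog", "a photo of a car"]

# Encode the image and the candidate labels, then score image-text similarity.
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=1)  # per-label probabilities
print(dict(zip(labels, probs[0].tolist())))
```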

A virtual student

Hua Zhibing, Wu Dao 2.0’s “child,” is the first Chinese virtual student. She can learn continuously, compose poetry, and draw pictures, and she will learn to code in the future. In contrast with GPT-3, Wu Dao 2.0 can reportedly learn different tasks over time without forgetting what it has learned previously. This feature seems to bring AI a step closer to human memory and learning mechanisms.
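
BAAI hasn’t explained how Wu Dao 2.0 avoids forgetting earlier tasks (the “catastrophic forgetting” problem). One common technique in continual learning is rehearsal: keep a buffer of examples from earlier tasks and replay them while training on a new one. The PyTorch sketch below is a toy illustration of that idea using made-up datasets; it is not Wu Dao 2.0’s actual training procedure.

```python
# Toy rehearsal-based continual learning with a replay buffer (illustration only;
# NOT Wu Dao 2.0's method, which BAAI has not published).
import random
import torch
from torch import nn, optim

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
opt = optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

replay_buffer = []   # stores (x, y) batches from earlier tasks
BUFFER_SIZE = 1_000

def train_task(task_data, epochs=1, replay_ratio=0.5):
    """Train on a new task while replaying batches from previous tasks."""
    for _ in range(epochs):
        for x, y in task_data:
            # Mix in a stored batch from old tasks to reduce forgetting.
            if replay_buffer and random.random() < replay_ratio:
                xr, yr = random.choice(replay_buffer)
                x, y = torch.cat([x, xr]), torch.cat([y, yr])
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
            # Remember some of the current data for future replay.
            if len(replay_buffer) < BUFFER_SIZE:
                replay_buffer.append((x.detach(), y.detach()))

# Hypothetical usage: random tensors stand in for two real task datasets.
task_a = [(torch.randn(16, 32), torch.randint(0, 10, (16,))) for _ in range(50)]
task_b = [(torch.randn(16, 32), torch.randint(0, 10, (16,))) for _ in range(50)]
train_task(task_a)
train_task(task_b)  # rehearsal keeps task A performance from collapsing
```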

Tang Jie went as far as to claim that Hua Zhibing has “some ability in reasoning and emotional interaction.” People’s Daily Online reported that Peng Shuang, a member of Tang’s research group, “hoped that the virtual girl will have a higher EQ and be able to communicate like a human.”

When people started playing with GPT-3, many went crazy with the results. “Sentient,” “general intelligence,” and capable of “understanding” were some of the attributes people ascribed to GPT-3. So far, there’s no proof any of this is true. Now the ball is in Wu Dao 2.0’s court to show the world it’s capable of “reasoning and emotional interaction.” For now, I’d be prudent before jumping to conclusions.
