Power, Limitations, and Use Cases of GPT-3 From My Tests and Prototype Apps You Can Replicate Right Away
Since OpenAI released it some time ago, I have been putting GPT-3 to tests of all kinds. After taking GPT-3 through exams aimed at quantifying how much it knows about science, I started using it to assist my own writing (where it turned out especially useful for overcoming the blank-page block) and to create chatbots for various purposes: welcoming and guiding visitors of my website, chatting about whatever topic you feel like (even by talking naturally), and commanding apps through natural speech. Let’s review here all the articles I wrote about this.
First things first: what is GPT-3?
GPT-3 is a latest-generation artificial neural network for natural language processing, developed by OpenAI. It reads in text and produces new text, in such a way that input/output pairs could correspond to questions/answers, text/summaries, guidelines/long texts, sentences in language 1/translations to another language, natural-language requests for a command/corresponding source code, etc.
The input is usually accompanied by a header of information that GPT-3 doesn’t know about and/or examples of the kinds of inputs it will receive and outputs it must produce. This full input is called the “prompt” in GPT-3 terms, and the output is the “completion”. The strategy of crafting customization into a prompt is called “few-shot learning”. While you can learn more about it in general terms at OpenAI’s website, most of my examples include some few-shot learning, so you’ll easily grasp how it works and how it can help you better use GPT-3.
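As a minimal sketch of what few-shot prompt crafting looks like in code (the function name and the “Q:”/“A:” labels are my own illustrative choices, not an official format), you can assemble a prompt by concatenating a header and example input/output pairs before the real query:

```javascript
// Build a few-shot prompt: a header, then example Q/A pairs, then the real
// question with a trailing "A:" that GPT-3 will complete.
function buildPrompt(header, examples, question) {
  const shots = examples
    .map(({ input, output }) => `Q: ${input}\nA: ${output}`)
    .join("\n\n");
  return `${header}\n\n${shots}\n\nQ: ${question}\nA:`;
}

const prompt = buildPrompt(
  "Answer chemistry questions concisely.",
  [
    { input: "Symbol for sodium?", output: "Na" },
    { input: "Symbol for iron?", output: "Fe" },
  ],
  "Symbol for gold?"
);
console.log(prompt);
```

The examples teach GPT-3 the format and style you want; the trailing “A:” invites it to continue the pattern with the answer to the new question.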
Costs and related notes
When you get an account from OpenAI to use GPT-3, you also get some free credits to start using it, and you might even get more by simply asking (though there’s no guarantee they’ll grant them). After that, you pay for the tokens you use. It’s not too expensive (in fact, it just became cheaper this month, see here), and of course, if you use GPT-3 in a paid service or program you could include GPT-3 access in your costs. Or, as I do in all my examples, you can let users enter their own API keys, which means they can try your app without spending anything if they use a free key, and then buy more GPT-3 credits if they want to keep using your GPT-3-powered app.
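The user-supplied-key pattern is straightforward to sketch. The endpoint URL, `Authorization: Bearer` header, and body fields below follow OpenAI’s completions API; the model name is only an example and the request-builder function is my own helper, not part of any SDK:

```javascript
// Build a request to the GPT-3 completions endpoint using a key the user
// typed into the app themselves, so the app owner never pays for their usage.
function buildCompletionRequest(apiKey, prompt) {
  return {
    url: "https://api.openai.com/v1/completions",
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`, // the user's own key, never ours
      },
      body: JSON.stringify({
        model: "text-davinci-002", // example model name, may need updating
        prompt,
        max_tokens: 150,
        temperature: 0.7,
      }),
    },
  };
}

// In the app you would then do something like:
// const { url, options } = buildCompletionRequest(userKey, myPrompt);
// const data = await (await fetch(url, options)).json();
// console.log(data.choices[0].text);
```

Since the key lives only in the user’s browser session, each visitor draws on their own free or paid credits.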
Calling GPT-3 through PHP, to then use it from web apps!
If you browse through my programming and development articles, not only those on GPT-3 but all of them, you’ll see that > 90% of what I do is tailored to the web. I’m a strong advocate for web development, especially client-based when possible, because of the many advantages that I won’t delve into here (you can have a look at this and this article to get a grasp of these motivations).
How much does GPT-3 “know”, and how can we “explain to it” what we want it to do? Intrinsic “knowledge” and few-shot learning
Here we come to my early tests, from when I hadn’t yet experienced the awe of GPT-3 myself. At the beginning, I was very curious about how much GPT-3 knew about stuff. So, being a scientist myself, I decided to push the model with harder and harder questions in a kind of evaluation across various sciences. Here’s how I set up my evaluations, including some basic questions on chemistry, biology, and physics; my next two articles then presented deeper evaluations, with some very interesting conclusions: some positive, and others rather negative, somewhat expected but explored there in detail.
After these two articles, somebody got rather angry, saying that I seemed to be suggesting that GPT-3 could “think”. Of course it doesn’t: it is only a statistical model that “simply” propagates the most likely output tokens based on the input tokens. If the input tokens make up a coherent question whose answer GPT-3 has seen in training, then it has a good chance of replying correctly. That’s why it performed better in my biology tests. I clarified all this in a dedicated article.
Smart GPT-3 bots to chat with, talk with, and control commands
GPT-3 is so versatile and has been trained on such a large and rich dataset (which covers several human languages and even non-human languages such as source code) that it can perform very complex tasks and easily engage in conversations that look extremely realistic. These tasks include summarization, text creation, translation, source-code writing, question answering, and other crazy functions that enable a range of applications.
Question answering is the basis for chatbots, which I explored in articles that also covered web-based calls through the PHP module introduced above, as well as tailoring the bot’s “knowledge” so it properly answers questions about topics it cannot have learned during training, such as my CV, for the bot I built to welcome visitors of my website.
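The “tailored knowledge” trick is again just prompt construction: a header of facts GPT-3 never saw in training (e.g., lines from a CV), followed by the running conversation. A minimal sketch, with all names and labels being my own illustrative choices:

```javascript
// Build a chatbot prompt: a knowledge header of custom facts, then the
// conversation so far, ending with an open "Bot:" turn for GPT-3 to complete.
function buildChatPrompt(facts, history, userMessage) {
  const header =
    "You are a website assistant. Answer using only these facts:\n" +
    facts.map((f) => `- ${f}`).join("\n");
  const turns = history
    .map(({ user, bot }) => `Visitor: ${user}\nBot: ${bot}`)
    .join("\n");
  return `${header}\n\n${turns}\nVisitor: ${userMessage}\nBot:`;
}

const chatPrompt = buildChatPrompt(
  ["The site owner is a scientist.", "The site covers GPT-3 experiments."],
  [{ user: "Hi!", bot: "Welcome! How can I help?" }],
  "What is this site about?"
);
console.log(chatPrompt);
```

After each completion you append the new Visitor/Bot pair to `history`, so the bot keeps context across turns while staying anchored to the facts in the header.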
It’s even more impressive if you couple GPT-3 to speech recognition and synthesis engines, creating bots with which you can engage in a natural conversation just by speaking! Here’s how I coded it:
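The browser’s built-in Web Speech API (`SpeechRecognition`/`webkitSpeechRecognition` for input, `speechSynthesis` for output) is enough for this coupling. A minimal sketch, where `askGpt3` is a placeholder for your own completion call and the wiring is guarded so the file also loads outside a browser:

```javascript
// Wire GPT-3 text in/out to the browser's Web Speech API.
const inBrowser = typeof window !== "undefined";

// Say a piece of text out loud (no-op outside the browser).
function speak(text) {
  if (inBrowser && "speechSynthesis" in window) {
    window.speechSynthesis.speak(new SpeechSynthesisUtterance(text));
  }
  return text; // returned so callers (and tests) can inspect what was said
}

// Listen for one utterance and pass its transcript to a callback.
function listen(onTranscript) {
  if (!inBrowser) return;
  const Rec = window.SpeechRecognition || window.webkitSpeechRecognition;
  if (!Rec) return;
  const rec = new Rec();
  rec.onresult = (e) => onTranscript(e.results[0][0].transcript);
  rec.start();
}

// Conversation loop sketch: hear the user, get a completion, say it back.
// listen((heard) => askGpt3(heard).then(speak));
```

The recognition engine gives you plain text, which you feed into the same prompt-building machinery as a typed chat; the synthesized voice then reads the completion back.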
Likewise, exploiting GPT-3’s power to cast natural language into app-specific commands when properly tuned in the prompt, here I made an app that controls a molecular viewer through natural speech, which GPT-3 converts into commands:
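The core idea can be sketched in two parts: a few-shot prompt that maps spoken requests to a fixed command vocabulary, and a tiny dispatcher that executes the completion against the viewer. The command names (`color`, `show`) and the viewer object below are made up for illustration, not taken from any real viewer’s API:

```javascript
// Few-shot prompt teaching GPT-3 the app's command vocabulary by example.
function commandPrompt(request) {
  return [
    "Convert the request into a viewer command.",
    "Request: paint the protein red\nCommand: color red",
    "Request: display the surface\nCommand: show surface",
    `Request: ${request}\nCommand:`,
  ].join("\n\n");
}

// Parse GPT-3's completion ("color blue") and call the matching viewer method.
function dispatch(completion, viewer) {
  const [cmd, ...args] = completion.trim().split(/\s+/);
  if (typeof viewer[cmd] === "function") return viewer[cmd](...args);
  throw new Error(`Unknown command: ${cmd}`);
}

// Example with a mock viewer standing in for the real molecular viewer:
const mockViewer = { color: (c) => `colored ${c}`, show: (r) => `showing ${r}` };
console.log(dispatch(" color blue", mockViewer));
```

Restricting the prompt to a small set of exemplified commands keeps completions predictable, and the dispatcher rejects anything outside that vocabulary instead of executing arbitrary text.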