Software 3.0 — How Prompting Will Change the Rules of the Game

Software 1.0, 2.0, and 3.0 — Understanding the whole picture

“The future of [software 2.0] is bright because it is increasingly clear that when we develop AGI, it will certainly be written in Software 2.0.”

— Andrej Karpathy

I disagree with Karpathy’s prediction. AGI (artificial general intelligence) will emerge from a combination of the three programming paradigms (and maybe others yet to be found). The best way to understand why is to draw an analogy with the only instance of general intelligence we know of: us.

Software 1.0

Before neural networks rose to fame in the early 2010s, all code was written by hand. If you wanted a system to have a specific behavior, you had to write down every instruction it had to follow in every situation. The hard-coded program did exactly what it was programmed to do. No black boxes, no learning.
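
To make the contrast concrete, here is a purely illustrative sketch (the rules, word lists, and labels are hypothetical, not from the article): every behavior is a rule someone typed in by hand, and any case the programmer didn’t anticipate simply isn’t handled.

```python
# A hypothetical "software 1.0" program: every behavior is an explicit,
# hand-written rule. Nothing is learned from data.

def classify_review(text: str) -> str:
    """Label a movie review using rules the programmer wrote by hand."""
    text = text.lower()
    positive_words = {"great", "excellent", "loved"}
    negative_words = {"boring", "awful", "hated"}
    pos = sum(word in text for word in positive_words)
    neg = sum(word in text for word in negative_words)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"  # every unanticipated case falls through to a default

print(classify_review("I loved it, the cast was excellent"))  # -> positive
```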

That’s exactly what evolution had in mind when it created genetic code. We aren’t born a tabula rasa; we come into the world with a set of hard-wired modules ingrained in the brain, which are there because our genes say so. That’s our biological software 1.0. My genes encoded that I’d have two eyes, two arms, and two legs, and that I’d be blond with brown-green eyes.

Software 2.0

Karpathy says software 2.0 involves the curation of a dataset and the design of the neural net architecture. We define the structure of the program and the “food” we’ll feed it, but not the content. It’s the neural net itself that “writes” its inner “program” (weights) to learn to behave as we want.
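
As a minimal sketch of that workflow (assuming PyTorch, with random tensors standing in for a real curated dataset), notice that we only write the structure and pick the data; the weights are adjusted by gradient descent, not typed in by us:

```python
import torch
from torch import nn

# The "program" structure we define by hand: a small feed-forward network.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

# The "food" we feed it: here just random stand-ins for a curated dataset.
X = torch.randn(256, 10)
y = torch.randn(256, 1)

# Training: the network "writes" its own inner program (the weights).
for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
```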

Biological software 2.0 comprises the experiences and perceptions we have growing up. We’re born a semi-empty structure waiting to be fed all kinds of physical and social information. We may be hardwired to have two arms, but whether we’ll learn Spanish or English, when we’ll learn to walk, or how well we’ll grasp how the world works isn’t hard-coded in our genes. It depends on the environment and our interactions with the world, which form the dataset we’re fed to optimize our inner programs.

Software 3.0

Prompting allows us to condition GPT-3 to “specialize.” In some sense, when we prompt GPT-3, we’re coaching it to do a task that other instances of the system haven’t learned. Each GPT-3 instance has the potential to learn the task, but the learning only happens when we prompt it (see the sketch below).
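
As a sketch of what that looks like in practice (using the legacy, pre-1.0 openai Python client that accompanied the original GPT-3 API; engine names and the client interface have since changed), a handful of in-context examples is enough to “specialize” the same model as an English-to-French translator, without updating a single weight:

```python
import os
import openai  # legacy (pre-1.0) client; newer versions expose a different interface

openai.api_key = os.getenv("OPENAI_API_KEY")

# A few-shot prompt: the examples "coach" GPT-3 into the translation task.
prompt = (
    "Translate English to French.\n"
    "English: cheese\nFrench: fromage\n"
    "English: good morning\nFrench: bonjour\n"
    "English: thank you\nFrench:"
)

response = openai.Completion.create(
    engine="davinci",   # the original GPT-3 engine name; current models differ
    prompt=prompt,
    max_tokens=10,
    temperature=0,
    stop="\n",
)
print(response.choices[0].text.strip())  # likely completion: "merci"
```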

Biological software 3.0 would be the specific learning we go through as adults: the tasks we’re taught, the books we read, the jobs we work at, the abilities and skills we develop. All of us arguably have a similar potential to learn any of this, but each of us chooses to learn some skills and not others, or to acquire some knowledge and dismiss the rest. Those choices transform our brains just as prompts transform GPT-3’s capabilities.
