Artificial Intelligence Can Write Flawless Code! Will Coding Be Useless?





Learning to write quality computer code can take years. But what if a computer could program itself, learn a language faster than we do, converse fluently, and even model human cognition? You are probably already wondering how a machine could handle such a difficult task.

Well, SourceAI, a Paris-based firm, believes that programming shouldn’t be so difficult.

SourceAI and similar programs are built on GPT-3, a powerful AI language model released in May 2020 by OpenAI, a San Francisco company committed to making fundamental improvements in AI. Among the first few hundred people to gain access to GPT-3 were the founders of SourceAI. Although OpenAI has not released the code for GPT-3, it does provide an API through which select users can access the model.

Microsoft, for its part, uses GPT-3 to convert natural language into Power Fx, a very basic programming language similar to Excel formulas, which it released in March. SourceAI's tool, meanwhile, generates Python: for example, if you want to multiply two numbers, just tell it to "multiply two numbers given by a user," and it will whip up a dozen or so lines of Python to do just that.
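For a sense of what such a tool returns, here is a minimal sketch of the kind of Python a code-generation model might produce for that prompt. This is an illustrative guess at the output, not actual SourceAI or GPT-3 output:

```python
# Hypothetical example of generated code for the prompt
# "multiply two numbers given by a user".

def multiply(a: float, b: float) -> float:
    """Return the product of two numbers."""
    return a * b

def main() -> None:
    # Read two numbers from the user and print their product.
    a = float(input("Enter the first number: "))
    b = float(input("Enter the second number: "))
    print(f"{a} x {b} = {multiply(a, b)}")

if __name__ == "__main__":
    main()
```

The task is trivial, which is exactly the point: today's demos shine on small, well-specified prompts like this one.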

“If you can describe what to do with natural language, GPT-3 will generate the list of the most relevant formulas for you to choose from and then the code writes itself,” said Microsoft CEO Satya Nadella.

Also, Microsoft invested $1 billion in OpenAI in 2019 and has agreed to license GPT-3.


SourceAI promises to enable its users to create a greater range of programs in a variety of languages, thereby helping automate the creation of more software. "Developers will save time in coding, while people with no coding knowledge will also be able to develop applications," Bettes says.

Hendricks thinks AI that suggests your next line of code could improve the productivity of human programmers, and could potentially reduce the demand for programmers or allow smaller teams to accomplish the same goals.

But they were not the first to notice GPT-3's potential. Soon after GPT-3 was released, one programmer demonstrated that it could remix pieces of code to create customized web apps with buttons, text input fields, and colors. Rebuild, another startup, intends to commercialize the technique. A third firm, TabNine, used a prior version of OpenAI's language model to build a tool that offers to auto-complete a line or a function when a developer starts typing.

According to Brendan Dolan-Gavitt, an assistant professor in NYU's Computer Science and Engineering Department, language models like GPT-3 will most likely be used to assist human programmers.

The list doesn't end here. At Microsoft's Build conference, OpenAI CEO Sam Altman demonstrated a language model fine-tuned on GitHub code that generates lines of Python automatically. There are many examples like this.


But some researchers and scientists believe that using AI to generate and analyze code can be risky. In a paper published online in March, researchers at MIT demonstrated that an AI model trained to verify that code will run securely can be fooled by a few minor modifications, such as swapping particular variables, into approving a dangerous program.

"Once these models go into production, things can get nasty pretty quickly," one of the researchers says.


Dolan-Gavitt, the NYU professor, raises challenging questions. "I think using language models directly would probably end up producing buggy and even insecure code," he says. "After all, they're trained on human-written code, which is very often buggy and insecure."

In a recent test on beginner programming problems put together by a group of AI researchers, the best model succeeded only 14 percent of the time. Some researchers also caution that while automating code generation may change software development, the limits and blind spots of modern AI can create new issues of their own.

For example, recent research found that GPT-3, when set to the task of answering questions and creating content, generated text involving sexual acts with children, as well as offensive text about Black people, women, and Muslims.

An AI program can make thousands of assumptions. For example, a simple command to an AI assistant, "Buy me toilet paper," has a lot of assumptions baked in, and these could be interpreted wrongly if not coded as constraints in advance. How important is the price? Softness? Quality?
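The hidden assumptions above can be made concrete by writing them out as explicit constraints. The following is a hypothetical sketch (the class, fields, and thresholds are all invented for illustration) of what "buy me toilet paper" looks like once price, softness, and quality must be stated up front:

```python
# Hypothetical sketch: turning the unstated assumptions in
# "buy me toilet paper" into explicit, checkable constraints.
from dataclasses import dataclass

@dataclass
class PurchaseConstraints:
    max_price: float   # how important is price?
    min_softness: int  # softness rating, 1-5
    min_quality: int   # quality rating, 1-5

def acceptable(item: dict, c: PurchaseConstraints) -> bool:
    """Return True only if an item satisfies every explicit constraint."""
    return (item["price"] <= c.max_price
            and item["softness"] >= c.min_softness
            and item["quality"] >= c.min_quality)
```

Every field the user never mentions is a decision the AI would otherwise make silently.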

AI-generated code still needs to be tested, and the tests themselves are code. Given that AI may generate code for anything, the output space may be limitless: you can't develop tests that cover an endless space of inputs and domains, just as you can't monitor a self-driving car for 100 million kilometers to guarantee its safety.

AI may not be trusted with mission-critical systems. It's simple to write faultless code within a single function; it's significantly more difficult across an entire app. And what are the consequences if the AI is hacked, or writes bad code? The military, for one, can't afford to trust it.

“Artificial intelligence is potentially more dangerous than nukes,” said Elon Musk.


AI cannot replace software engineers. This may matter less at large dev shops, but software engineers at startups do a lot more than just write code, including:

- writing and reviewing tickets and code
- discussing the user experience
- discussing constraints on hypothetical features

As Eliezer Yudkowsky put it, "By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it."

Some researchers and scientists also believe this should be approached cautiously, at least in these early stages of AI's development. An AI doesn't have to be evil to destroy humanity: if humanity simply happens to be in its way, it will destroy us as a matter of course, without even thinking about it, no hard feelings.

We should never rely completely on AI, and we should keep focusing on the broader set of problems ourselves. By doing this, we can harness the power of AI before it overpowers us.

How Microsoft, OpenAI, and GitHub will work together on AI for coding is still unclear. In 2018, soon after Microsoft acquired GitHub, the company announced plans to deploy language models to power semantic code search, but GitHub representatives declined to comment on the project's status.

It remains to be seen how well SourceAI's tool actually works.


Trending AI/ML Article Identified & Digested via Granola by Ramsey Elbasheer; a Machine-Driven RSS Bot
