Automating code with Machine Intelligence


This post was originally published by Nathan Lambert at Towards Data Science

And many more implications from OpenAI’s GPT-3 language processing API.

What is it

OpenAI’s newest language model, GPT-3 (Generative Pre-trained Transformer 3), is creating a variety of front-end code snippets from just two example snippets. (Front-end code is code that renders on a website; it is often repeated in chunks to get variations of the same designs, which is why it is an easy initial target for automation.)

You can engage with the author of the tool here (Twitter). You can find a collection of more creative examples here, or another code generation example here. One I particularly liked was creative fiction, and an auto-generated text game built on the last generation of the model.

How this works

The language model, the Generative Pre-trained Transformer, is available through a new pay-to-play API from OpenAI. Here is an excerpt on NLP and transformers from my post (AI & Arbitration of Truth — which seems to need to be revisited every week).

The Tech — Transformers & NLP

Natural Language Processing (NLP) is the subfield of machine learning concerned with manipulating and extracting information from text. It’s used in smart assistants, translators, search engines, online stores, and more. NLP (along with computer vision) is one of a few monetized state-of-the-art machine learning developments. It’s also the candidate technology for interpreting truth online.

The best NLP tool to date is a neural network architecture called the transformer. Long story short, transformers use an encoder and decoder structure that encodes words into a latent space and decodes the result into a translation, typo fix, or classification (you can think of an encoder-decoder as compressing a complicated feature space into a simpler space via a neural network — nonlinear function approximation). A key tool in the NLP space is a mechanism called attention, which learns which words to focus on and for how long (rather than having that hard-coded into an engineered system).
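To make the attention idea concrete, here is a minimal sketch of scaled dot-product attention, the core operation inside a transformer. It uses only NumPy; the shapes, variable names, and example sizes are illustrative choices of mine, not OpenAI’s implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)   # how strongly each query attends to each key
    weights = softmax(scores)       # each row is a distribution over the words
    return weights @ V, weights

# Example: a "sentence" of 4 tokens, each embedded in 8 dimensions.
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
out, w = attention(Q, K, V)   # out: 4 re-weighted token representations
```

The learned part in a real transformer is the projections that produce Q, K, and V from the input embeddings; the attention computation itself is exactly this weighted average.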

A transformer combines these tools with a couple of other advancements that allow the models to be trained efficiently in parallel. Below is a diagram showing how data flows through a transformer.

A visualization from an awesome tutorial I found.

Why it matters

This is the first application I have seen that could genuinely replace engineering time. Front-end designers can drastically increase their speed with this tool. It will likely be sold to many existing companies, and new businesses will use it to create valuable services. Finding the best applications takes creativity, so the tool is certainly limited by us human designers — and it will soon be superseded by the next state-of-the-art model [More].

This is eyebrow-raising for more reasons because of OpenAI’s famous charter. In short: we will work towards AGI, and if another company looks to be getting there first, we will join them. The claim behind this product is that the funds will help them execute AI research, but their leadership has in the past withheld models out of fear that they are “too dangerous to share.” This fine line of AI danger will only get sharper.

Nerd corner: the training compute for this model, 50 petaflop/s-days (what exactly does this mean?), amounts to over $12 million in training costs alone [Source]. That’s a bit of a cost to recoup in fees. I like to think about how this model compares to the shallow neural networks I use for regression tasks — it has over 100 million times the number of parameters. That is a totally different regime of function approximation. For the nerdy-nerds, the academic paper is here.
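The “100 million times” comparison is easy to check back-of-envelope. GPT-3’s published parameter count is about 175 billion; the size of the shallow regression network here is my assumption (a small MLP with on the order of a thousand parameters):

```python
# Back-of-envelope parameter comparison. The GPT-3 count is from the paper;
# the shallow-network size is an assumed typical small regression MLP.
gpt3_params = 175e9       # GPT-3: ~175 billion parameters
shallow_params = 1.5e3    # assumed: a small MLP for regression
ratio = gpt3_params / shallow_params
print(f"GPT-3 is ~{ratio:.0f}x larger")
```

Any reasonable choice for the small network lands the ratio in the hundreds of millions, which is the point: this is a different regime of function approximation, not a bigger version of the same thing.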

I requested access to the beta for robotics research. I am interested to see what level of planning a language model (big neural network) can achieve given context in the form of a game. Does language capture the basic intent in a game and the structure of a solution?

Longer term I think language integration into robotic rewards is of interest — it will allow humans who work with the robots to give the machines verbal tasks (verification of said tasks is a problem for another day).

Examples:

  • Given an embedding of a game board (written, grid, other methods), say “where should I move.”
  • Given a description of an environment: “the block is on the ball which is to the right of the chair,” ask “is the ball above the chair?”
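The first example above could be sketched as a prompting exercise: serialize a game board into text, then ask the model where to move. This is a hypothetical sketch of mine — the board encoding and prompt wording are my assumptions, not something from the API or the author:

```python
def board_to_text(board):
    """Render a tic-tac-toe-style grid as plain text, row by row.

    Empty cells (None) are shown as '.'.
    """
    return "\n".join(" ".join(cell or "." for cell in row) for row in board)

def make_prompt(board, player="X"):
    """Build a natural-language prompt a language model could complete."""
    return (
        "Here is a game board:\n"
        f"{board_to_text(board)}\n"
        f"You are playing {player}. Where should I move?"
    )

board = [["X", None, "O"],
         [None, "X", None],
         [None, None, None]]
prompt = make_prompt(board)
```

The open question from the post is exactly whether a model prompted like this captures the intent of the game and the structure of a solution, rather than just pattern-matching on the text.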

These are very rudimentary examples, but I think bridges from commercialized machine learning fields, such as deep learning for vision and language, into robotics have high potential.
