AI at DTonomy and OpenAI’s GPT-3

What is GPT-3

Yash Vasani
3 min read · Jun 23, 2021

A paper titled “Language Models Are Few-Shot Learners”, published by OpenAI in May 2020, has been all the hype among the developer community for a while now. The paper demonstrated working examples of GPT-3, a language model that is currently the world’s largest neural network with 175 billion parameters; the second-largest is Microsoft’s Turing NLG with 17 billion parameters. This sheer number of parameters allows GPT-3 to perform state-of-the-art natural language processing tasks, including translation and cloze tasks, without any task-specific fine-tuning. In a blind test on generated news articles, participants were able to distinguish them from human-written text with a near-chance accuracy of 52%.
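The “few-shot” setting the paper describes amounts to packing a handful of worked examples plus a new query into a single text prompt, with no gradient updates. A minimal sketch of how such a prompt is assembled (the helper `build_few_shot_prompt` is hypothetical; the English-to-French pairs mirror the style of the paper’s own translation demos):

```python
def build_few_shot_prompt(task_description, examples, query):
    """Assemble a few-shot prompt: a task description, K solved
    examples, then the unsolved query the model should complete."""
    lines = [task_description, ""]
    for source, target in examples:
        lines.append(f"English: {source}")
        lines.append(f"French: {target}")
        lines.append("")
    # The prompt ends mid-pattern; the model is expected to continue it.
    lines.append(f"English: {query}")
    lines.append("French:")
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Translate English to French.",
    [("cheese", "fromage"), ("sea otter", "loutre de mer")],
    "peppermint",
)
print(prompt)
```

The resulting string would then be sent to the model’s text-completion endpoint; the model “learns” the task purely from the pattern in the prompt.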

Use cases for GPT-3 and its limitations

To illustrate GPT-3’s capabilities: a plugin for Figma (a widely used software design tool) was given the prompt “An app that has a navigation bar with a camera icon, ‘Photos’ title and a message icon. A feed of photos with each photo having a user icon, a photo, a heart icon and a chat bubble”. The output was an application interface design that closely resembled Instagram.

GPT-3 creating an Instagram lookalike from a single instruction

A Twitter user posted his demo here.

Thus, in short, GPT-3 can create anything that follows a grammar or a structure. It was able to achieve such a feat thanks to “pre-training” on a vast body of text, 570 GB to be precise; the computation is estimated to have cost OpenAI $4.6 million. It learnt how to use each word through semantic analysis, i.e., it captures not only the meaning of a word but also its situational use.

You can apply for free access using this Google form.

GPT-3 is not the ultimate Jarvis the world has been waiting for. According to Sam Altman, CEO of OpenAI, it is just an early glimpse, and there is too much hype around it. The main limitations currently identified are that it is expensive to run, and that it is still unreliable at producing complex language, with some output described as “gibberish”. Melanie Mitchell of Portland State University described GPT-3’s mistakes as “most unhumanlike errors”.

DTonomy’s AI Assistant

DTonomy AIR has a similar AI-powered translation engine that converts plain-text language commands into workflows, which can then be used to take automated actions on alerts and automate security operations. For example, given the command “query asn for ip and send me an email”, the AI will create an automated workflow that does exactly that (the videos can be found here). The AI can also be asked to generate complex workflows that involve situation-based decision making on an individual alert or a group (case) of alerts.
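The idea of turning a plain-text command into an ordered sequence of workflow steps can be sketched as follows. This is a toy illustration only: the action catalogue, the step names, and the keyword matching are all hypothetical, and DTonomy’s real engine uses an AI model rather than phrase lookup.

```python
# Hypothetical catalogue mapping trigger phrases to workflow actions.
ACTION_CATALOGUE = [
    ("query asn", "Lookup-ASN"),
    ("query whois", "Lookup-WHOIS"),
    ("send me an email", "Send-Email"),
    ("create a ticket", "Create-Ticket"),
]

def command_to_workflow(command):
    """Return workflow steps whose trigger phrase appears in the
    command, ordered by where each phrase occurs in the text."""
    command = command.lower()
    hits = []
    for phrase, action in ACTION_CATALOGUE:
        pos = command.find(phrase)
        if pos != -1:
            hits.append((pos, action))
    return [action for pos, action in sorted(hits)]

print(command_to_workflow("query asn for ip and send me an email"))
# → ['Lookup-ASN', 'Send-Email']
```

Ordering the steps by their position in the command preserves the user’s intended sequence, which matters when a later step (the email) consumes the result of an earlier one (the ASN lookup).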

DTonomy’s AI

Instead of writing code or assembling workflows manually, AI technology is dramatically simplifying how people achieve automation. I am optimistic that it will change many industries and benefit lots of people. For more visit