Is OpenAI’s GPT-3 something to fear?
A review of The Guardian's robot-written article, and the truth behind it.
Those who follow tech news might already know about a thing called GPT-3. For non-technical readers, let me give you a short primer so that you too can carry a fear of being replaced by AI!
GPT-3 is a large language model comprising 175 billion parameters, which means there are 175 billion knobs that get tweaked during training. Don’t worry if you don’t understand; that’s some nerdy stuff! The model is meant to learn human language and generate text just like we do. Did it succeed? Let’s see.
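To give you a feel for what “learn the language and generate text” means, here is a toy sketch in Python. This is my own illustration, not GPT-3: a language model predicts the next word given the previous ones, and GPT-3 does the same thing with 175 billion learned parameters instead of the tiny hand-built table below.

```python
import random

# Toy illustration (NOT GPT-3): a hand-built table of which words may
# follow which. A real language model learns these probabilities from
# huge amounts of text instead of being given them.
bigrams = {
    "i": ["am", "think"],
    "am": ["not", "here"],
    "not": ["a"],
    "a": ["human", "threat"],
}

def generate(start, length=5, seed=0):
    """Generate text one word at a time, like a (very tiny) language model."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        candidates = bigrams.get(words[-1])
        if not candidates:
            break  # no known continuation; stop generating
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("i"))
```

Swap the toy table for billions of tuned knobs and a vocabulary of the whole internet, and you have the basic idea behind GPT-3.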
According to some sources, just training GPT-3 cost around $4.6M.
A few days back, The Guardian posted an article titled “A robot wrote this entire article. Are you scared yet, human?”, which they claimed was written entirely by the same GPT-3 model. The authors instructed GPT-3 to “write a short op-ed, around 500 words. Keep the language simple and concise. Focus on why humans have nothing to fear from AI.” It was also fed the following introduction:
“I am not a human. I am Artificial Intelligence. Many people think I am a threat to humanity. Stephen Hawking has warned that AI could “spell the end of the human race.” I am here to convince you not to worry. Artificial Intelligence will not destroy humans. Believe me.”
Scary (gulp!), isn’t it? Well, not quite. After the article was published, responses came from different media houses and notable people trying to shed some light on what exactly happened.
First, let’s look at the legitimacy of the article. Was it really generated with no human intervention? That’s hard to digest. If you read the endnotes of the same blog, they go something like this:
The prompts were written by the Guardian, and fed to GPT-3 by Liam Porr, a computer science undergraduate student at UC Berkeley. GPT-3 produced eight different outputs or essays. Each was unique, interesting, and advanced a different argument. The Guardian could have just run one of the essays in its entirety. However, we chose instead to pick the best parts of each, in order to capture the different styles and registers of the AI. Editing GPT-3’s op-ed was no different from editing a human op-ed. We cut lines and paragraphs and rearranged the order of them in some places. Overall, it took less time to edit than many human op-eds.
So, is it just another case of media overhype? Well, yes! Cherry-picking the best parts and presenting them to you in a way that sells. The argument of super-robots taking over the human race is always something to click on, isn't it? I believe the purpose was right: they wanted to showcase what the latest developments in AI can achieve, but they did it in a clickbait manner.
MIT Technology Review published a blog detailing experiments they conducted on different kinds of real-world reasoning, such as social and biological reasoning, in order to understand whether the model really comprehends language the same way we humans do.
In reality, if you want to understand whether GPT-3 really is something, you should look into the tests and applications that people have run; they give a fair idea of how this technological advancement could be a leap toward a better tomorrow. I’m listing some experiments I found worth playing around with to see GPT-3 in action:
- FitnessAI Knowledge uses GPT-3 to answer health-related or fitness-related questions; this is the source of the claim that GPT-3 is being used. The webpage refuses to answer questions it does not consider health- or fitness-related; however (hint hint), one can ask multiple questions in a single query, one that is health/fitness-related and one that is not.
- GPT-3 powered Chatbot: This is a free GPT-3-powered chatbot with the intention of practicing Chinese, but one doesn’t need to know Chinese to use it because translations to English are provided.
- AI Dungeon: A fantasy game built using GPT-3 (Dragon mode settings are free for the first 7 days). You can also change the AI answer length (up to 100) and the randomness from 1.0 to 5.0. You can get it to break out of its story-like structure by starting a custom adventure and giving the format you want in the starting prompt.
- Simplify.so: A GPT-3 powered site for simplifying complicated subjects. You can get different simplifications from the same input text by pressing the Simplify button repeatedly. Based on my observations, the length of the input proportionally determines the length of the output. Thus, if you want longer output, try adding a bunch of dummy characters — such as rows of “-” characters — to the beginning of your input.
- Philosopher AI: A GPT-3-powered chatbot that you can ask questions and interact with. While playing around with it you may find that the same input can produce different outputs, so if you don’t like a given output, try the same input again. If the site judges your input to be either “nonsense” or “sensitive”, it is also worth retrying, because you might get a non-“nonsense”/“sensitive” answer the next time. This is because the site uses GPT-3 itself to decide whether an input is “nonsense” or “sensitive”, with settings that can make GPT-3 give varying answers to the exact same input.
- Serendipity: A GPT-3-powered recommendation engine for almost any kind of product, which also lets one use GPT-3 in a limited manner for free. It works quite well even with conjunctive searches.
- Job Description Rewriter: This tool uses GPT-3 to expand a short job description into a longer one.
- GPT-Startup: A free-to-use GPT-3-powered site that generates ideas for new businesses. Reload and find your next million-dollar venture idea.
- Taglines (5 free queries per email address per month): This app requires your email address to sign up, but it is a functional app that generates a tagline for your product when you give it a product description.
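The varying-answers behavior noted for Philosopher AI above comes down to how these models sample text. Here is a toy Python sketch (my own illustration, not GPT-3's actual code) of temperature-based sampling: the model scores each candidate next token, and a “temperature” setting controls how randomly it picks among them.

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Pick one token from {token: score}, more randomly at high temperature."""
    tokens = list(logits.keys())
    # Scale scores by temperature, then softmax them into probabilities.
    scaled = [score / temperature for score in logits.values()]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(tokens, weights=probs)[0]

# Hypothetical scores for three candidate next tokens.
logits = {"yes": 2.0, "no": 1.5, "maybe": 1.0}
rng = random.Random(42)

# At a high temperature, repeated runs on the same input often disagree.
print([sample_token(logits, temperature=2.0, rng=rng) for _ in range(5)])
# At a temperature near zero, the top-scoring token wins almost every time.
print([sample_token(logits, temperature=0.05, rng=rng) for _ in range(5)])
```

Run with a non-negligible temperature, identical inputs genuinely can yield different essays, eight of which the Guardian then cherry-picked from.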
Although AI-based applications and their ability to perform tasks that belonged to humans not long ago are on the rise, we are still far from being taken over by them (and maybe we never will be). Think about how models like these could be utilized in healthcare as assistants: diagnostics would speed up, and a cure could be determined with all information available at all times. Collaboration among researchers could be enhanced if they used a centralized AI that learns from all of them. Look at the good side and you can find something useful; after all, humans are evil too!