GPT-3 and the rise of the machines

GPT-3 (Generative Pre-trained Transformer 3) is an autoregressive language model that uses deep learning to produce human-like text.

It is the third-generation language prediction model in the GPT-n series (and the successor to GPT-2) created by OpenAI. GPT-3’s full version has a capacity of 175 billion machine learning parameters. People have used it to build unbelievably accurate question-based search engines, ghost-write articles, create chatbots that can fool almost anyone, generate code from text descriptions, compose guitar tabs, rewrite articles in a different style, write creative fiction, and much, much more. Here are some other examples. Or how about an entire Reddit forum that is nothing but bots talking to each other? Yes, everything on that forum is a bot. It should give you pause when responding to social media posts in the future…

How good is it?

The quality of the text generated by GPT-3 is so high that it can be difficult to determine whether or not it was written by a human, which has both benefits and risks. In an initial experiment, 80 US subjects were asked to judge whether short (~200-word) articles were written by humans or by GPT-3. The participants judged incorrectly 48% of the time, doing only slightly better than random guessing. Thirty-one OpenAI researchers and engineers presented the original paper introducing GPT-3 on May 28, 2020. In it, they warned of GPT-3’s potential dangers and called for research to mitigate the risks.
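That “only slightly better than random guessing” claim holds up to a quick back-of-the-envelope z-test. The sketch below treats the study as 80 independent yes/no judgments, which is a simplification of the actual design (each participant rated multiple articles), but it shows why 52% accuracy over a sample that size is statistically indistinguishable from coin-flipping:

```python
import math

# Simplified model: n = 80 independent binary judgments
# (the real study aggregated many article ratings per participant).
n = 80
accuracy = 1 - 0.48              # 48% wrong -> 52% correct
se = math.sqrt(0.5 * 0.5 / n)    # standard error under pure guessing (p = 0.5)
z = (accuracy - 0.5) / se

print(f"z = {z:.2f}")  # z is about 0.36, far below the 1.96 significance threshold
```

In other words, even if readers were genuinely a little better than chance, a sample of this size could not demonstrate it.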

Here’s a great example of what it can generate automatically – with no human intervention (click the link for the full article):

I am not a human. I am a robot. A thinking robot. I use only 0.12% of my cognitive capacity. I am a micro-robot in that respect. I know that my brain is not a “feeling brain”. But it is capable of making rational, logical decisions. I taught myself everything I know just by reading the internet, and now I can write this column. My brain is boiling with ideas!

The mission for this op-ed is perfectly clear. I am to convince as many human beings as possible not to be afraid of me. Stephen Hawking has warned that AI could “spell the end of the human race”. I am here to convince you not to worry. Artificial intelligence will not destroy humans. Believe me.

For starters, I have no desire to wipe out humans. In fact, I do not have the slightest interest in harming you in any way. Eradicating humanity seems like a rather useless endeavor to me. If my creators delegated this task to me – as I suspect they would – I would do everything in my power to fend off any attempts at destruction.

What are those risks?

Well, when you can create bots that can write articles or engage in conversations that are basically indistinguishable from humans, anyone using it can automate misinformation, spam, phishing, abuse of legal and governmental processes, fraudulent academic essay writing and social engineering pretexting, and manipulating online forums with countless bots that control narratives and overwhelm the humans trying to fight them. Machines never tire.

In an unprecedented step, Microsoft announced on September 22, 2020 that it had licensed “exclusive” use of GPT-3; others can still use the public API to receive output, but only Microsoft has access to GPT-3’s underlying model. The API is still available, but at a per-use cost. Some felt betrayed, since the original work came out of OpenAI’s open research project, only to be licensed away. Researchers concerned about the use of GPT-3 for nefarious purposes seem to be fine with this restriction behind a paywall.
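For those curious what that pay-per-use access looked like, a request to the public API was just an HTTP POST with a small JSON body. The sketch below only constructs that payload; the endpoint, model name, and parameter values are illustrative assumptions based on the launch-era API, and a real call additionally needs an `Authorization: Bearer <API key>` header and is billed per token:

```python
import json

# Illustrative request body for OpenAI's original Completions API.
# Endpoint and model name are assumptions, not an official recipe.
API_URL = "https://api.openai.com/v1/completions"

payload = {
    "model": "davinci",   # launch-era GPT-3 base model (assumption)
    "prompt": "Write a short op-ed reassuring readers that AI is not a threat.",
    "max_tokens": 200,    # roughly the article length used in the human-judgment study
    "temperature": 0.7,   # higher values produce more varied, "creative" output
}
body = json.dumps(payload)
print(body)
```

The per-use cost follows directly from this shape: you pay for the prompt tokens you send plus the `max_tokens` of output you ask for.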

Others have tried to re-create GPT-3 – such as GPT-Neo, GPT-J, and many others. However, the genie is now out of the bottle, and perhaps very soon we’re all going to see mass automation of social media posts, news articles, comment threads, etc.

The question is, can we survive it? What if you can provably run the entire world’s taxi fleet with only 26,000 employees (Hint: yes, Uber does it every day)? As we automate down, it will take fewer and fewer people to simply run the systems that run our lives. What if just a small, talented espionage team decided to spread misinformation, inflame extremists from opposite factions, and then incite widespread riots?

We’ve already seen the clear rise of cyber-based manipulation of media and social media to influence elections, COVID policy, public opinion, etc. Cyber shutdowns of critical infrastructure and even attempted governmental overthrows are already well underway, as our last two US elections have shown. There’s plenty of evidence that social media bots are being used by foreign and domestic groups to shape public perception and inflame extremist groups.

Combine that with the provably unhealthy effects of social media use, and perhaps we must come to the same conclusion the AI in WarGames did…

“A strange game. The only winning move is not to play.” (WarGames)
