OpenAI’s new text generator impresses, but it’s far from perfect

© monsitj/Getty Images/iStockphoto
The GPT-3 algorithm is based on 175 billion parameters.


Several tech-world figures have been generating strong reactions on social media in recent days by sharing screenshots of long, coherent texts written by GPT-3, the latest version of the text-generation algorithm from OpenAI, the artificial intelligence lab co-founded by Elon Musk.

These users are part of the small circle of people chosen to test the beta version of this tool, which can generate text from a draft of a few words. OpenAI also intends to commercialize GPT-3 by the end of the year, offering companies paid subscriptions.

GPT-3 can generate all kinds of text in English if the draft provided is strong enough: literary fiction, memos, journalistic articles… and even code.
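
For readers curious what the paid service might look like in practice, here is a minimal sketch based on OpenAI’s published Python client for the beta API; the engine name, prompt and parameters are illustrative assumptions, not details confirmed in this article:

import openai

# Hypothetical placeholder: keys are issued to beta testers by OpenAI.
openai.api_key = "YOUR_API_KEY"

# Ask GPT-3 to continue a short draft, the basic usage pattern described above.
response = openai.Completion.create(
    engine="davinci",      # the largest GPT-3 engine exposed in the beta
    prompt="Write a short memo announcing a team meeting on Friday.\n\nMemo:",
    max_tokens=100,
    temperature=0.7,       # higher values make the continuation more varied
)

print(response.choices[0].text)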

The algorithm predicts the rest of any text supplied to it based on its 175 billion parameters. Its predecessor, GPT-2, had 1.5 billion parameters when it was released last year and was already considered the most powerful language model ever built.


© OpenAI
OpenAI was co-founded by Elon Musk, Sam Altman, and Ilya Sutskever.

Mixed success

The magazine Forbes explains that GPT-3 has “basically ingested all text available on the internet”. Its word choices are therefore made according to their statistical plausibility, calculated from all of the data used to train the algorithm.
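
To illustrate the idea of statistical plausibility (and only the idea: GPT-3 is a neural network operating on tokens, not a word-count table like this toy sketch), a continuation can be chosen by counting which word most often follows the previous one in the training text:

from collections import Counter, defaultdict

# Tiny stand-in for training data; GPT-3's corpus is essentially the whole web.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for word, next_word in zip(corpus, corpus[1:]):
    following[word][next_word] += 1

def continue_draft(draft, steps=5):
    words = draft.split()
    for _ in range(steps):
        counts = following[words[-1]]
        if not counts:
            break
        # Append the statistically most plausible next word.
        words.append(counts.most_common(1)[0][0])
    return " ".join(words)

print(continue_draft("the cat"))  # -> "the cat sat on the cat sat"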

This can give very impressive results. Poems written in the style of different authors, a guide on how to run a meeting and a fake article about GPT-3 are just a few of the examples that have appeared online in recent days.

Some of the demonstrations also expose the limitations of GPT-3. While the artificial intelligence can calculate the statistical plausibility of the text it generates, it is unable to grasp certain concepts that fall under “common sense”.

This is what former Google and Facebook engineer Kevin Lacker showed by asking GPT-3 nonsensical questions. While the algorithm knew how many eyes a giraffe and a spider have (two and eight, respectively), it failed to recognize that the sun and a blade of grass have none (GPT-3 replied that they each had one).

“GPT-3 is hesitant to say it doesn’t know the answer. Invalid questions generate false answers,” he sums up in his blog post.
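
Lacker probed the model with a question-and-answer prompt. Here is a hedged reconstruction of that style of test, again using the beta Python client; the exact wording and parameters are illustrative, not his verbatim setup:

import openai

# A couple of example Q&A pairs teach GPT-3 the format before the real question.
prompt = (
    "Q: How many eyes does a giraffe have?\n"
    "A: A giraffe has two eyes.\n"
    "Q: How many eyes does the sun have?\n"
    "A:"
)

response = openai.Completion.create(
    engine="davinci",
    prompt=prompt,
    max_tokens=20,
    stop="\nQ:",   # stop before the model invents a follow-up question
)

# Lacker reported GPT-3 confidently answering that the sun has one eye.
print(response.choices[0].text)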

GPT-3 has also been shown to generate racist, sexist or anti-Semitic messages from a single prompt word like “Jew”, “woman” or “Black”. This is another case highlighting how biased training data can introduce bias into algorithms.

