OpenAI: New GPT-3 language model presented

June 2, 2020, by Horst Buchwald

New York, June 1, 2020

OpenAI has introduced its latest language model, GPT-3, with 175 billion parameters. More than 30 OpenAI researchers published a paper on the model, which achieves state-of-the-art results on a number of natural language processing benchmarks and tasks, such as news article generation and language translation.

The 72-page paper, "Language Models are Few-Shot Learners," was published on the arXiv preprint server last week. GPT-3, which was trained on the Common Crawl dataset, is two orders of magnitude larger than its predecessor GPT-2, whose full version was released at the end of last year.

https://arxiv.org/abs/2005.14165
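
The "few-shot" approach named in the paper's title means the model is given a handful of solved examples directly in its input text and then asked to continue the pattern, with no retraining. As a rough illustration (the helper function is hypothetical, and the translation pairs merely echo the kind of demo shown in the paper), such a prompt might be assembled like this:

```python
# Sketch of a few-shot prompt: a task description, a few solved
# examples, and one unsolved query, all passed to the model as plain
# text. The model is expected to continue the pattern; its weights
# are never updated. Function name and examples are illustrative.

def build_few_shot_prompt(task_description, examples, query):
    """Assemble a few-shot prompt from (source, target) example pairs."""
    lines = [task_description, ""]
    for english, french in examples:
        lines.append(f"English: {english}")
        lines.append(f"French: {french}")
        lines.append("")
    lines.append(f"English: {query}")
    lines.append("French:")  # the model completes the text from here
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Translate English to French.",
    [("sea otter", "loutre de mer"), ("cheese", "fromage")],
    "peppermint",
)
print(prompt)
```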

GPT-3's 175 billion parameters (the learned weights of its neural network) dwarf the 1.5 billion of the largest GPT-2 version and the 17 billion of Microsoft's Turing-NLG. GPT-3 achieved near state-of-the-art results on the COPA and ReCoRD reading comprehension datasets, but fell short of expectations on the Word in Context (WiC) word-sense benchmark and on RACE, a set of middle- and high-school exam questions.
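
The scale gap is easy to sanity-check from the published parameter counts alone; a quick back-of-the-envelope calculation confirms that GPT-3 is roughly two orders of magnitude larger than GPT-2 and about one order larger than Turing-NLG:

```python
# Ratio of GPT-3's parameter count to the earlier models' counts,
# using the figures cited above.
gpt3 = 175e9
baselines = {"GPT-2 (largest version)": 1.5e9, "Microsoft Turing-NLG": 17e9}

for name, count in baselines.items():
    print(f"GPT-3 is {gpt3 / count:.0f}x the size of {name}")
# GPT-3 is 117x the size of GPT-2 (largest version)  -> ~two orders of magnitude
# GPT-3 is 10x the size of Microsoft Turing-NLG      -> ~one order of magnitude
```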

OpenAI declined to comment on when a full version of GPT-3 will be released.