OpenAI: How words are turned into works of art
11 April 2022
San Francisco, 11 April 2022
OpenAI has unveiled a new version of DALL-E, its AI model that generates photorealistic images from text descriptions. The second-generation text-to-image generator transforms words into works of art.
DALL-E, a portmanteau of Salvador Dalí and WALL-E, was first introduced in January 2021. The neural network generates photorealistic images from simple written descriptions.
OpenAI contributor Aris Konstantinidis, for example, used the tool to generate images of bandana-wearing pandas riding motorcycles in the desert.
DALL-E 2 is faster and generates higher-resolution images than its predecessor. The latest version has not been released to the public, but researchers can sign up for a preview.
OpenAI plans to release it to third-party developers via an API at a later date. This could be particularly useful for graphic designers and app developers.
What is OpenAI?
The San Francisco-based AI research lab was founded in late 2015 to develop safe and broadly beneficial AI for the public.
The original founders were Sam Altman, Elon Musk, Greg Brockman, Ilya Sutskever, Wojciech Zaremba and John Schulman, who collectively pledged more than $1 billion to the venture at launch. Musk later stepped down from the board, in part to avoid conflicts of interest with Tesla.

In 2019, the nonprofit lab restructured around a for-profit arm and announced a $1 billion investment from Microsoft. In 2020, OpenAI released its popular GPT-3 language model, which underpins applications that generate code and text for websites, advertising, and social media.

DALL-E 2 builds on CLIP, OpenAI's computer vision system that links images with natural-language descriptions; the original DALL-E was itself a version of GPT-3 adapted to generate images from text.