Creating images with AI: DALL-E starts paid beta phase

OpenAI has officially opened the beta phase for DALL-E. The AI system creates images from text descriptions and can modify existing images based on text input. The project name is a portmanteau of the Spanish artist Salvador Dalí's last name and the title of the Pixar film "WALL-E".

At the start of the beta, OpenAI announced that it would invite one million interested parties from the waiting list to the program in the coming weeks. Previously, only a limited number of users had access.

With the wider opening of the system, the completely free trial phase also ends. In the beta, users receive 50 credits in the first month and 15 free credits in each following month. One credit yields four images for a text prompt. Alternatively, users can modify uploaded images or images created by DALL-E with text descriptions, or generate variations of an original image; these operations return three results per credit.
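From these figures, the effective per-image price can be worked out. A quick sketch (the per-image costs below are derived from the article's numbers, not stated by OpenAI):

```python
# Toy cost calculation for the DALL-E beta credit system.
# Figures from the article: 115 credits cost $15; one credit yields
# four images per text prompt, or three results per edit/variation.

PRICE_USD = 15.0
CREDITS = 115

def cost_per_image(results_per_credit: int) -> float:
    """Derived cost of a single result, in US dollars."""
    cost_per_credit = PRICE_USD / CREDITS
    return cost_per_credit / results_per_credit

generation = cost_per_image(4)   # text-to-image: 4 results per credit
variation = cost_per_image(3)    # edits/variations: 3 results per credit

print(f"~${generation:.3f} per generated image")
print(f"~${variation:.3f} per edited image or variation")
```

That works out to roughly three to four US cents per result when buying credits.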

The image was created with the command “Faust as Super Mario and Mephistopheles as Wario, photorealistic”.

(Image: Vladimir Alexeev)

Those who want to generate or modify more content can buy 115 credits for $15. Alongside the paid offering, OpenAI now permits the previously prohibited commercial use. Users may use images created with DALL-E for illustrations, newsletters, or game characters, among other things. This also applies retroactively to works created during the preview phase.

In the blog post announcing the start of the beta, OpenAI once again points to its precautionary measures and rules for the use of DALL-E. Uploading realistic portraits or attempting to imitate well-known personalities is prohibited, as is depicting violent, political, or sexual content. On the technical side, a filter is intended to block the upload of such material. The company also recently modified the system to achieve more diversity when generating images of people.

DALL-E is an AI system that creates images based on descriptions. OpenAI released the first version in January 2021. It builds on the GPT-3 language model, which also comes from OpenAI. While the latter draws its basic knowledge from a large collection of texts, OpenAI trained DALL-E and its successor on numerous images with associated descriptions. On this basis, the system can generate, for example, an astronaut on a horse in the style of Andy Warhol.

The successor DALL-E 2, released in April 2022, combines two techniques that OpenAI has developed since the release of the first version: CLIP (Contrastive Language-Image Pre-training), an artificial neural network that maps visual concepts to natural-language categories, and GLIDE (Guided Language to Image Diffusion for Generation and Editing), a text-guided diffusion model that, according to a paper, outperformed the original DALL-E primarily in photorealism and caption similarity.

The system can create variations of artwork and other images.

(Image: OpenAI)

Unlike its predecessor, DALL-E 2 can subsequently change images and add contextual content. In addition, the system can redesign existing images. The project page shows variations of well-known works of art such as “The Girl with a Pearl Earring” by Jan Vermeer van Delft or “The Kiss” by Gustav Klimt.

More details can be found on the OpenAI blog. Those interested can register on the waiting list. Although the most recent blog post speaks of DALL-E, the system is technically its successor, DALL-E 2.

(rm)
