Incredible AI system DALL-E 2 enters public beta


Last month, we shared unique AI-generated camera designs inspired by pop culture icons like Darth Vader and Batman. The camera ideas were developed by photographer Mathieu Stern, but the designs themselves were created by DALL-E 2, an AI system built by OpenAI. At that time, DALL-E 2 was limited to a private beta test, so readers wanting to use AI to create cameras of their own were out of luck. However, OpenAI has announced that DALL-E is now available as a public beta. The company will invite a million people from its waitlist in the coming weeks.

If you’re invited into the public beta, you’ll be able to create with DALL-E using free credits that refill each month. During the first month, users receive 50 free credits; each subsequent month, the free allocation drops to 15. Each text prompt costs one credit and returns four images, while an edit or variation request returns three additional images. If those aren’t enough, OpenAI sells additional credits in 115-credit blocks for $15. It’s worth noting that OpenAI describes the number of images per credit as ‘approximate.’
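For a rough sense of what that pricing works out to per image, here is a quick back-of-the-envelope sketch in Python. The dollar and credit figures are the ones quoted above; the per-image numbers are simple division, and actual counts may differ since OpenAI calls them approximate.

```python
# Back-of-the-envelope cost math using the figures quoted above.
# OpenAI describes the image counts as approximate, so treat these as rough estimates.
PRICE_USD = 15.00          # price of one additional credit pack
CREDITS_PER_PACK = 115     # generations included in that pack
IMAGES_PER_PROMPT = 4      # images returned for a new text prompt
IMAGES_PER_EDIT = 3        # images returned for an edit or variation

cost_per_credit = PRICE_USD / CREDITS_PER_PACK
print(f"Cost per generation: ${cost_per_credit:.3f}")                                # ~$0.130
print(f"Cost per image (new prompt): ${cost_per_credit / IMAGES_PER_PROMPT:.3f}")    # ~$0.033
print(f"Cost per image (edit/variation): ${cost_per_credit / IMAGES_PER_EDIT:.3f}")  # ~$0.043
```

At those figures, a fresh prompt works out to roughly three cents per image once the free credits run out.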

In case you aren’t familiar with DALL-E, it’s an AI system that generates realistic images and art from a natural language text description. For example, if you typed ‘a painting of a fox sitting in a field at sunrise in the style of Claude Monet,’ that’s what DALL-E 2 would generate.

‘A painting of a fox sitting in a field at sunrise in the style of Claude Monet.’ DALL-E 1 (left) vs DALL-E 2 (right). DALL-E 2 generates more realistic, accurate results with four times the resolution of the original DALL-E.

It’s the successor to the original DALL-E that OpenAI released in 2021. Compared to that first iteration, DALL-E 2 promises more realism and accuracy, plus four times greater resolution. In OpenAI’s evaluations, nearly 72% of respondents preferred DALL-E 2 for caption matching and about 89% preferred it for photorealism.

DALL-E 2 works because it has learned how descriptive text relates to images. The system uses a process called ‘diffusion’: it starts with a pattern of random dots and gradually alters that pattern toward an image as it ‘recognizes specific aspects of that image.’
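To make the ‘pattern of random dots’ idea concrete, here is a minimal, purely illustrative sketch of that kind of loop in Python. It is not OpenAI’s model: the dummy denoiser, step count, and text embedding are all placeholder assumptions, but the overall structure (start from noise, repeatedly nudge it toward an image conditioned on the prompt) mirrors how diffusion samplers work.

```python
import numpy as np

# Illustrative reverse-diffusion loop: start from pure noise and repeatedly
# "denoise" toward an image. The denoiser here is a stand-in (it just pulls
# pixels toward a flat gray target); a real system like DALL-E 2 uses a large
# neural network conditioned on the text prompt instead.

def dummy_denoiser(noisy_image, text_embedding, step, total_steps):
    """Placeholder for a learned model that predicts a less-noisy image."""
    target = np.full_like(noisy_image, 0.5)   # stand-in for "what the prompt describes"
    blend = (step + 1) / total_steps          # trust the "prediction" more as steps advance
    return (1 - blend) * noisy_image + blend * target

def generate(prompt, size=(64, 64), steps=50, seed=0):
    rng = np.random.default_rng(seed)
    text_embedding = rng.normal(size=128)     # stand-in for a real text encoder
    image = rng.normal(size=size)             # start from a pattern of random dots
    for step in range(steps):
        image = dummy_denoiser(image, text_embedding, step, steps)
        # A real sampler would also re-inject a controlled amount of noise here.
    return np.clip(image, 0.0, 1.0)

img = generate("a painting of a fox sitting in a field at sunrise")
print(img.shape, img.min(), img.max())
```

DALL-E 2’s actual sampler differs in nearly every detail, but the start-from-noise, refine-step-by-step loop is the essence of diffusion.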

Once DALL-E 2 has created an image from a text prompt, you can edit the results. You can change the location of requested objects, add or remove elements, and adjust shadows, reflections, colors, textures, and more.

DALL-E 2 allows users to edit the results. For example, what if you wanted to add a corgi to the scene on the left? You can describe where you want it located, such as in the painting of a man (right).

When DALL-E 2 entered private beta, one reason access was restricted was concern about harmful content. OpenAI has been developing safety mitigations. DALL-E 2’s training data has been filtered to remove the most explicit content, and the team has developed techniques to prevent the photorealistic generation of real people, including public figures. DALL-E 2 also has a content filter that flags text prompts and image uploads violating OpenAI’s policies, and there are automated and human monitoring systems in place.

There’s no doubt you could have a lot of fun with DALL-E 2. Whether you want a children’s book illustration of astronauts playing basketball in space with cats, teddy bear scientists mixing chemicals, or something else entirely, DALL-E 2 can generate it. If you’d like to try it for yourself, visit OpenAI and sign up for the waitlist.

