Stability AI Unveils New Text-to-Image Model, Stable Diffusion 3: Enhancing Image Quality and Mitigating Potential Harms
London-based AI lab Stability AI has revealed an early preview of its new text-to-image model, Stable Diffusion 3. The generative artificial intelligence (AI) model aims to create high-quality images from text prompts, with improved performance across several key areas.
Stability AI’s latest iteration handles multi-subject image generation significantly better than previous versions, allowing users to write more detailed prompts containing multiple elements and achieve better results. The new model also improves overall image quality and spelling accuracy (the rendering of text within images).
Stability AI Previews Stable Diffusion 3: A Safe and Accessible Generative AI Model
Stability AI has opened registration for early access to Stable Diffusion 3, allowing the company to gather feedback and continue refining the model ahead of a full release planned for later this year. The company is also working with experts to test Stable Diffusion 3 and ensure it mitigates potential harms, similar to OpenAI’s approach with Sora.
Stability AI Aims for Balance Between Creative Performance and Accessibility
Stable Diffusion 3 is offered in a range of model sizes, from 800 million parameters at the low end to 8 billion at the high end. Stability AI aims to balance creative performance and accessibility by providing a spectrum of options for users with varying computational resources.
Stability AI’s Commitment to Safe, Responsible AI Practices
“We believe in safe, responsible AI practices. This means we have taken and continue to take reasonable steps to prevent the misuse of Stable Diffusion 3 by bad actors. Safety starts when we begin training our model and continues throughout the testing, evaluation, and deployment,” said Stability AI.