Stable Diffusion is a machine learning model used to generate photorealistic digital images from natural language descriptions. The model can also be used for other tasks, such as generating an improved image from a sketch and textual description.
Released in 2022, Stable Diffusion is primarily used to generate detailed images conditioned on text descriptions, but it can also be applied to other tasks such as inpainting, outpainting, and text-guided image-to-image translation. It is a latent diffusion model, a type of deep generative neural network developed by the CompVis group at LMU Munich. The model was released through a collaboration between Stability AI, CompVis LMU, and Runway, with support from EleutherAI and LAION. In October 2022, Stability AI raised $101 million in funding led by Lightspeed Venture Partners and Coatue Management.
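The core idea behind diffusion models is iterative denoising: sampling starts from pure Gaussian noise, and a learned noise predictor is applied repeatedly until a clean image (or, in Stable Diffusion's case, a clean latent) emerges. The toy sketch below illustrates only this loop with NumPy; the `toy_denoiser` is a hypothetical stand-in for the U-Net noise predictor, not the real model.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_denoiser(x, target):
    # Hypothetical "noise predictor": in a real diffusion model this is a
    # trained U-Net; here we cheat and return the displacement from the
    # clean target so the loop visibly converges.
    return x - target

def sample(target, steps=50, step_size=0.1):
    x = rng.standard_normal(target.shape)   # start from Gaussian noise
    for _ in range(steps):
        predicted_noise = toy_denoiser(x, target)
        x = x - step_size * predicted_noise  # remove a fraction of the noise
    return x

clean = np.ones((4, 4))   # stand-in for a "clean" latent
result = sample(clean)
print(np.abs(result - clean).max())  # shrinks toward 0 as steps increase
```

Each step contracts the sample toward the clean latent by a factor of (1 - step_size), so after 50 steps the residual noise is reduced by roughly 0.9^50 ≈ 0.005. Stable Diffusion runs this kind of loop in a compressed latent space rather than pixel space, which is what makes it tractable on consumer hardware.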
Stable Diffusion is an open-source model: its code and model weights are publicly available, and it can run on most consumer-grade hardware equipped with a GPU with at least 8 GB of VRAM. This marked a departure from previous proprietary text-to-image models such as DALL-E and Midjourney, which were accessible only through cloud services.