
Introduction to diffusion models for machine learning


The quest for multimodal AI that can create, imagine, and innovate like humans has been a driving force in machine learning research. In this pursuit, diffusion models emerged as a novel solution in the generative AI industry.

Diffusion models are prominent in generating high-quality images, video, and sound. They are named after the natural diffusion process in physics, which describes how molecules move from high-concentration to low-concentration areas. In machine learning, diffusion models generate new data by reversing a diffusion process, that is, the gradual loss of information caused by added noise. The main idea is to add random noise to data and then learn to undo that process, recovering the original data distribution from noisy data.

The famous DALL-E 2, Midjourney, and the open-source Stable Diffusion, which create realistic images from a user's text input, are all examples of diffusion models. This article covers what diffusion models are, how they work, and some of their common applications.

What are diffusion models?

Diffusion models are advanced machine learning algorithms that uniquely generate high-quality data by progressively adding noise to a dataset and then learning to reverse this process. This innovative approach enables them to create remarkably accurate and detailed outputs, from lifelike images to coherent text sequences. Central to their function is the concept of gradually degrading data quality, only to reconstruct it to its original form or transform it into something new. This technique enhances the fidelity of generated data and offers new possibilities in areas like medical imaging, autonomous vehicles, and personalized AI assistants.

How diffusion models work

Diffusion models work through a dual-phase mechanism: they first train a neural network to introduce noise into the dataset (the forward diffusion process) and then methodically reverse this process. Here's a detailed breakdown of the diffusion model lifecycle.

Data preprocessing

Before the diffusion process begins, data needs to be appropriately formatted for model training. This process involves data cleaning to remove outliers, data normalization to scale features consistently, and data augmentation to increase dataset diversity, especially in the case of image data. Standardization is also applied to achieve normal data distribution, which is important for handling noisy image data. Different data types, such as text or images, may require specific preprocessing steps, like addressing class-imbalance issues. Well-executed data processing ensures high-quality training data and contributes to the model's ability to learn meaningful patterns and generate high-quality images (or other data types) during inference.
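
To make this concrete, here is a minimal preprocessing sketch, an assumed pipeline rather than any specific framework's API, that scales image pixels to the [-1, 1] range commonly used before noise is added:

```python
import numpy as np

def preprocess(images: np.ndarray) -> np.ndarray:
    """images: uint8 array of shape (N, H, W, C) with values in [0, 255]."""
    x = images.astype(np.float32) / 255.0   # scale to [0, 1]
    return x * 2.0 - 1.0                    # shift to [-1, 1], matching the Gaussian noise range
```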

Introducing noise: Forward diffusion process

The forward diffusion process begins with a sample drawn from the real data distribution, for example, a training image. This sample then undergoes a series of small, incremental modifications through a Markov chain, where each step adds a controlled amount of Gaussian noise. The structure in the data is gradually destroyed, and after enough steps the sample is essentially pure noise. The purpose of this controlled corruption is to define a path from the complex data distribution to a simple, tractable one, a path the model will later learn to reverse so that rich, detailed outputs can be generated from almost no information.

forward diffusion
Source

In the forward diffusion process, small amounts of Gaussian noise are incrementally added to the data over T steps, resulting in a series of increasingly noisy samples. The noise added at each step is regulated by a variance schedule β_1, ..., β_T. If the variance schedule is well behaved, x_T will be nearly an isotropic Gaussian for sufficiently large T.

forward diffusion formula
Source
mean

Here, q(x_t | x_{t-1}) is a Gaussian whose mean μ and variance are fixed by the variance schedule: its mean is √(1 − β_t) · x_{t-1} and its variance is β_t · I.
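
For illustration, here is a minimal NumPy sketch of the forward process using the standard closed form q(x_t | x_0) = N(√ᾱ_t · x_0, (1 − ᾱ_t) · I), where α_t = 1 − β_t and ᾱ_t is the cumulative product of the α's; the schedule values below are illustrative assumptions:

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)          # linear variance schedule β_1..β_T (assumed values)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)             # ᾱ_t = α_1 · ... · α_t

def q_sample(x0: np.ndarray, t: int, rng=np.random.default_rng()):
    """Sample x_t ~ q(x_t | x_0) in closed form instead of looping over t noise steps."""
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * noise
```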

Reverse diffusion process

This reverse process separates diffusion models from other generative models, such as generative adversarial networks (GANs). The reverse diffusion process involves recognizing the specific noise patterns introduced at each step and training the neural network to denoise the data accordingly. This isn't a simple process but rather involves complex reconstruction through a Markov chain. The model uses its acquired knowledge to predict the noise at each step and then carefully removes it.
But how does reverse diffusion actually work?

As T gets very large, x_T behaves like an isotropic Gaussian distribution. If we learn to reverse the transition q(x_{t-1} | x_t), we can start with x_T drawn from a normal distribution N(0, I), go backward, and create a new data point similar to the original dataset.

Modeling the reverse process with a neural network

We can't calculate q(x_{t-1} | x_t) directly because doing so would require knowledge of the entire data distribution, which is intractable.
Instead, we use a model (a neural network) p_θ to approximate it. Since q(x_{t-1} | x_t) is approximately Gaussian when β_t is small enough, we set p_θ to be Gaussian as well and simply have the network predict its mean and variance.

modeling the reverse process with a neural network

reverse diffusion
Source

If we apply the reverse formula across all time steps (the trajectory), we can trace our way back to the original data distribution. By doing this at every timestep, the model learns to predict characteristics such as the mean and spread of the data at each point in time.

 reverse process with a neural network

Additionally, by tuning the model to focus on each specific time step, it gets better at estimating these characteristics. This way, it becomes more accurate in predicting how the data behaves at different stages.
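
As a rough sketch, a single reverse (denoising) step can be written as follows, assuming a trained noise-prediction network eps_model (a hypothetical name) and the same precomputed schedule tensors as in the forward-process sketch:

```python
import torch

@torch.no_grad()
def p_sample(eps_model, x_t, t, betas, alphas, alpha_bars):
    """One reverse step x_t -> x_{t-1}; eps_model(x_t, t) is assumed to predict the added noise."""
    eps = eps_model(x_t, t)
    coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
    mean = (x_t - coef * eps) / torch.sqrt(alphas[t])   # predicted posterior mean
    if t == 0:
        return mean                                     # no noise is added at the final step
    z = torch.randn_like(x_t)
    sigma = torch.sqrt(betas[t])                        # a common choice: σ_t² = β_t
    return mean + sigma * z
```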

Model training

model training
Lvlb is an upper bound (the negative of the ELBO) which we’re trying to minimize.

A diffusion model is trained by finding the reverse Markov transitions that maximize the likelihood of the training data. In practice, this means minimizing the variational upper bound on the negative log-likelihood of the data.

We aim to express Lvlb using Kullback-Leibler (KL) divergences. The KL divergence is an asymmetric measure of how much one probability distribution, P, differs from a reference distribution, Q. Expressing Lvlb in terms of KL divergences is useful in our case because the transitions in our Markov chain are Gaussian, and the KL divergence between Gaussian distributions has a closed form.

KL divergence

In diffusion models, Kullback-Leibler (KL) divergence is used to measure how one probability distribution diverges from a second, expected probability distribution. It helps quantify the difference between the actual transition of data in the model and what the model predicts should happen. Essentially, KL divergence gives a mathematical way to see how well the model's predictions align with the actual changes observed in the data as it moves through various stages of the diffusion process. This is crucial for refining the model to make more accurate predictions.
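
Because the distributions being compared here are Gaussian, each KL term can be computed exactly. A small sketch for diagonal Gaussians (array shapes are assumptions for illustration):

```python
import numpy as np

def kl_diag_gaussians(mu_p, var_p, mu_q, var_q):
    """KL( N(mu_p, diag(var_p)) || N(mu_q, diag(var_q)) ), summed over dimensions."""
    return 0.5 * np.sum(
        np.log(var_q / var_p) + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0
    )
```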

kl divergence

As mentioned earlier, we can express Lvlb in terms of KL divergences.

kullback-leibler divergence

From here,

lvlb in terms of kl divergences.

When we condition the forward process on x_0 in the L_{t-1} terms, the expression simplifies, and all of the KL divergences become comparisons between Gaussian distributions. This means we can compute them exactly with closed-form expressions instead of relying on approximate Monte Carlo estimates.
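
In practice, training commonly uses the simplified objective popularized by the original DDPM work: sample a random timestep, noise the data with the closed-form forward process, and regress the added noise with an MSE loss. A minimal PyTorch-style sketch, again assuming a hypothetical eps_model and a precomputed alpha_bars tensor:

```python
import torch
import torch.nn.functional as F

def training_step(eps_model, x0, alpha_bars, T, optimizer):
    """One optimization step on the simplified objective E||ε - ε_θ(x_t, t)||²."""
    t = torch.randint(0, T, (x0.shape[0],), device=x0.device)          # random timestep per sample
    noise = torch.randn_like(x0)
    a_bar = alpha_bars[t].view(-1, *([1] * (x0.dim() - 1)))            # broadcast over image dims
    x_t = torch.sqrt(a_bar) * x0 + torch.sqrt(1.0 - a_bar) * noise     # forward process in closed form
    loss = F.mse_loss(eps_model(x_t, t), noise)                        # predict the added noise
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```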

Diffusion model techniques

Central to the diffusion model's operation are several key mechanisms that collectively drive its performance. Understanding these elements is vital for grasping how diffusion models function. These include score-based generative modeling, denoising diffusion probabilistic models, and stochastic differential equations, each playing a critical role in the model's ability to process and generate complex data.

Stochastic differential equations (SDEs)

SDEs are mathematical tools that describe the noise addition process in diffusion models. They provide a detailed blueprint of how noise is incrementally added to the data over time. This framework is essential because it gives diffusion models the flexibility to work with different types of data and applications, allowing them to be tailored for various generative tasks.
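
For intuition, here is a simple Euler-Maruyama simulation of a variance-preserving forward SDE, dx = −½ β(t) · x · dt + √β(t) · dw; the linear β(t) schedule and the step count are illustrative assumptions:

```python
import numpy as np

def forward_vp_sde(x0, n_steps=1000, beta_min=0.1, beta_max=20.0, rng=np.random.default_rng()):
    """Euler-Maruyama simulation of the forward SDE dx = -0.5 β(t) x dt + sqrt(β(t)) dw."""
    x = x0.copy()
    dt = 1.0 / n_steps
    for i in range(n_steps):
        t = i * dt
        beta_t = beta_min + t * (beta_max - beta_min)   # linear noise schedule β(t)
        drift = -0.5 * beta_t * x
        diffusion = np.sqrt(beta_t)
        x = x + drift * dt + diffusion * np.sqrt(dt) * rng.standard_normal(x.shape)
    return x                                            # approximately Gaussian noise for large t
```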

reverse sde
Reverse SDE (noise → data): Source

Score-based generative models (SGMs)

This is where the model learns to understand and reverse the process of noise addition. Imagine adding layers of noise to an image until it's unrecognizable. Score-based generative modeling teaches the model to do the opposite: it learns the score, the gradient of the log data density at each noise level, and uses it to progressively remove noise from noisy data until clear, detailed images emerge. This process is critical to creating realistic outputs from random noise.
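
A minimal sketch of this idea is Langevin-dynamics sampling with a learned score function; score_model is a hypothetical, already-trained network returning the score at a given noise level, and the step sizes are illustrative:

```python
import torch

@torch.no_grad()
def langevin_sample(score_model, x, sigma, n_steps=100, step_size=1e-4):
    """Langevin dynamics at a fixed noise level: x ← x + (ε/2)·s_θ(x, σ) + √ε·z."""
    for _ in range(n_steps):
        z = torch.randn_like(x)
        x = x + 0.5 * step_size * score_model(x, sigma) + (step_size ** 0.5) * z
    return x
```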

score based generative models
Score-based generative modeling through SDEs. We can map data to a noise distribution (the prior) with an SDE and reverse this SDE for generative modeling: Source

Denoising diffusion probabilistic models (DDPMs)

Denoising diffusion probabilistic models (DDPMs) are a specific type of diffusion model that focuses on probabilistically removing noise from data. During training, they learn how noise is added to data over time and how to reverse this process to recover the original data. This involves using probabilities to make educated guesses about what the data looked like before noise was added. This approach is essential for the model's capability to accurately reconstruct data, ensuring the outputs aren’t just noise-free but also closely resemble the original data.

ddpms
Source

Together, these components enable diffusion models to transform simple noise into detailed and realistic outputs, making them powerful tools in generative AI. Understanding these elements helps in appreciating the complex workings and capabilities of diffusion models.

GAN vs diffusion model

Let's talk about the benefits of diffusion models, why they're necessary, and their advantages over GANs.

Image quality

A primary advantage of diffusion models over GANs and VAEs is that they are easier to train, relying on simple and efficient loss functions, and they generate highly realistic images. They excel at closely matching the distribution of real images, outperforming GANs in this respect. This proficiency stems from the distinct mechanisms of diffusion models, which allow more precise replication of real-world imagery.

dm image quality
Source

Training stability

Regarding training stability, generative diffusion models have an edge over GANs. GANs often struggle with 'mode collapse,' a failure mode in which they produce only a narrow variety of outputs. Diffusion models largely avoid this issue through their gradual noising and denoising process, leading to a more diverse range of generated images.

Input types

It's also important to mention that diffusion models handle various input types. They perform diverse generative tasks like text-to-image synthesis, layout-to-image generation, inpainting, and super-resolution tasks.

Techniques for speeding up diffusion models

Generating a sample from DDPM using the reverse diffusion process is quite slow because it involves many steps, possibly up to a thousand. For instance, according to Song et al. (2020), it takes about 20 hours to generate 50,000 small images with a DDPM, while a GAN can create the same amount in less than a minute using an Nvidia 2080 Ti GPU.

There is an alternative method called Denoising Diffusion Implicit Model (DDIM) that stands out for its efficiency and quality. Unlike traditional models, DDIM needs fewer steps to create clear images from noisy data.

When you compare DDIM with the traditional denoising diffusion probabilistic model (DDPM), the key benefits are:

  • Speed: DDIM works much faster, needing fewer steps to achieve results that used to take much longer.
  • Predictability: With DDIM, the results are more predictable. If you start with the same seed, you'll get very similar images every time. This reliability is perfect for tasks where you can't afford surprises.
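
For reference, the deterministic DDIM update (η = 0) can be sketched as below, reusing the hypothetical noise-prediction network and ᾱ schedule from the earlier sketches; this is an illustration, not a drop-in implementation:

```python
import torch

def ddim_step(eps_model, x_t, t, t_prev, alpha_bars):
    """Deterministic DDIM update (η = 0) from timestep t to an earlier timestep t_prev.
    Wrap calls in torch.no_grad() when sampling."""
    eps = eps_model(x_t, t)
    a_t, a_prev = alpha_bars[t], alpha_bars[t_prev]
    x0_pred = (x_t - torch.sqrt(1.0 - a_t) * eps) / torch.sqrt(a_t)    # predicted clean sample
    return torch.sqrt(a_prev) * x0_pred + torch.sqrt(1.0 - a_prev) * eps
```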

New techniques that improve DDIM

There are techniques that improve DDIM even more:

  • Progressive distillation: Progressive distillation takes a complex model (the teacher) and distills it into a simpler one (the student). The student model learns to achieve in one step what the teacher does in two, speeding up the whole process without losing accuracy (a schematic sketch follows after this list).
progressive distillation
  • Consistency models: In 2023, Song and colleagues introduced a method that helps any point in the image generation process trace its way back to the start. This "Consistency Model" ensures that all points that follow the same path lead back to the same origin. It's like having a reliable map that guides every point back to where it began, which is crucial for maintaining the integrity and consistency of the images generated.
consistency models
Consistency models learn to map any data point on the trajectory back to its origin: Source
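
To make the progressive distillation idea above more tangible, here is a heavily simplified, schematic sketch of one distillation iteration, reusing the hypothetical ddim_step from the DDIM section; the published method uses a specific parameterization and loss weighting, so treat this purely as an illustration:

```python
import torch
import torch.nn.functional as F

def distillation_step(teacher, student, x_t, t, t_mid, t_next, alpha_bars, optimizer):
    """Schematic: the student learns to cover two teacher steps (t -> t_mid -> t_next) in one step."""
    with torch.no_grad():                                      # teacher targets need no gradients
        x_mid = ddim_step(teacher, x_t, t, t_mid, alpha_bars)
        x_target = ddim_step(teacher, x_mid, t_mid, t_next, alpha_bars)
    x_pred = ddim_step(student, x_t, t, t_next, alpha_bars)    # one student step
    loss = F.mse_loss(x_pred, x_target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```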

Applications of diffusion models

There are very diverse applications of diffusion models, one of the most exciting being digital art creation. Artists can use these models to transform abstract concepts or textual descriptions into detailed, visually striking images. This capability allows for a new form of artistic expression where the boundary between technology and art blurs, enabling creators to explore new styles and ideas previously difficult or impossible to achieve.

Graphic design

In graphic design and illustration, diffusion models provide a tool for rapidly generating visual content. Designers can input sketches, layouts, or rough ideas, and the models can flesh these out into complete, polished images. This can significantly speed up the design process, offering a range of possibilities from the initial concept to the final product.

Here's a diffusion model example – a tuned model for graphic design:

diffusion model for graphic design
Source

Film and animation

Another creative application is in the field of film and animation. Diffusion models can generate realistic backgrounds, characters, or even dynamic elements within scenes, reducing the time and effort required for traditional production methods. This streamlines the workflow and allows for greater experimentation and creativity in visual storytelling.

An artist used a set of Stable Diffusion algorithms to produce the first full AI animation. The movie, which is less than two minutes long, is a collaboration between the artist, AI, and several software tools like Daz3D, Unreal Engine, Adobe Photoshop, Adobe After Effects, and Adobe Premiere. It's the latest in a series of AI-generated movies, which also includes anime-style shorts.

animation diffusion models
AI animation: Source

Music and sound design

In music and sound design, generative diffusion models can be adapted to generate unique soundscapes or represent music, offering new ways for artists to visualize and create auditory experiences.

A paper titled "Controllable Music Production with Diffusion Models and Guidance Gradients" discusses a diffusion model example used in the music industry. The authors demonstrate how conditional generation from diffusion models, with sampling-time guidance, can tackle a variety of realistic tasks in producing 44.1kHz stereo audio. The scenarios they consider include continuation, inpainting, and regeneration of musical audio, creating smooth transitions between two music tracks, and transferring desired stylistic characteristics to existing audio clips.

Media and gaming industry

The interactive media and gaming industry also stands to benefit from diffusion models. They can be used to create detailed environments, characters, and other assets, adding realism and immersion to games and interactive experiences previously challenging to achieve.

In essence, diffusion models are a powerful tool for anyone in the creative field, offering a blend of precision, efficiency, and artistic freedom. These models allow creators to push the boundaries of traditional mediums, explore new forms of expression, and bring imaginative concepts to life with unprecedented ease and detail.

Here's a complete guide on applications and techniques to use AI image generation tools in video games.

Image generation in SuperAnnotate

SuperAnnotate's GenAI playground allows users to try ready-made templates for their LLM and GenAI use cases or build their own. Among the most used templates are GPT fine-tuning, supervised fine-tuning, chat rating, RLHF for image generation, and others. For diffusion models, we'll talk about the image generation template.

The RLHF for image generation template looks like this:

rlhf for image generation
RLHF for image generation

You can either use this template or build your own based on the project at hand. The customizability of the tool allows you to bring ideas into reality and adjust the template accordingly. If you build your use case, we'll walk you through the steps from scratch.

The first step is building the UI. Here's the cool thing about the tool—just look at the set of builders you can use to customize your form.

For this RLHF case, we'll build it this way.


Input => button => select => images in your desired number => annotation in ranking form => annotation in text form

The next step is renaming the components.

Let's put the template to work.

Imagine you're designing ads for Lego and have this cool idea to feature the Great Wall of China, built from Lego bricks, on a poster. But you don't want to be limited to just a few options. For this kind of project, you want to rank your model's outputs and add a textual annotation explaining why the highest-ranked output is the best. In the Lego scenario, the third option ended up being our favorite because it perfectly captures the beauty of the Great Wall in all its glory.

SuperAnnotate's RLHF for image generation tool offers a solid set of features, including:

  • High customization
  • Strong data governance and security
  • Efficient data curation
  • Domain-specific training

This means if you're looking to create images for a particular project, you won't have to constantly prompt existing diffusion models. Continuous prompting might get annoying if the results are not quite right for your needs. Instead, with SuperAnnotate, you can select or build a template, generate images, review and rank the outcomes, and repeat as needed to collect training data.

Whether you're making a few unique images or launching a large marketing campaign, this tool has you covered. Its annotation and ranking features help ensure you get the best results possible.

Popular diffusion tools

Some of the most popular diffusion models, which have gained widespread attention for their impressive capabilities in image generation, include:

DALL-E 2

Developed by OpenAI, DALL-E 2 is known for generating highly detailed and creative images from textual descriptions. It uses advanced diffusion techniques to produce images that are both imaginative and realistic, making it a popular tool in creative and artistic applications.

dalle 2
Source

DALL-E 3

DALL-E 3 is the latest version of OpenAI's image generation models and a major advancement over DALL-E 2. The most notable change is that it isn't a standalone app but is integrated into ChatGPT. It also stands out for its image generation quality.

Here's a comparison of DALL-E 2 vs DALL-E 3 with the same prompt.

dall e 2 vs dall e 3
DALL-E 2 vs. DALL-E 3: Source

Sora

Sora is the latest model from OpenAI, and it's a game-changer. The AI community has been waiting for this release since it's OpenAI's first text-to-video model. Sora can generate videos at up to 1080p resolution and up to a minute long, and the videos it creates are strikingly realistic. Sora is currently limited to a select group of users and red-teamers, which is commendable: it shows OpenAI is being cautious about the model's ethical implications. You don't want fully realistic deepfakes of politicians on every corner of the internet.

Here are a few Sora examples that left people speechless.

Prompt: The camera directly faces colorful buildings in Burano, Italy. An adorable dalmatian looks through a window on a building on the ground floor. Many people are walking and cycling along the canal streets in front of the buildings.

Prompt: A stylish woman walks down a street in Tokyo filled with warm glowing neon and animated city signage. She wears a black leather jacket, a long red dress, and black boots, while carrying a black purse. She wears sunglasses and red lipstick. She walks confidently and casually. The street is damp and reflective, creating a mirror effect of the colorful lights. Many pedestrians walk about.

Prompt: Archeologists discover a generic plastic chair in the desert, excavating and dusting it with great care.

Stable Diffusion

Stable Diffusion was created by researchers at Stability AI who had previously helped invent the latent diffusion model architecture it is built on. This model stands out for its efficiency and effectiveness in converting text prompts into realistic images and has been recognized for its high-quality image generation capabilities.

stable diffusion
Source

Stable Diffusion 3 is Stability AI's most recent release, and it's impressive. It's their most capable text-to-image model with greatly improved performance in multi-subject prompts, image quality, and spelling abilities. Look at the angled text "Stable diffusion" on the side of the bus. This used to be a dream for image generation tools.

stable diffusion 3
Source

The Stable Diffusion 3 collection spans models from 800M to 8B parameters, reflecting Stability AI's commitment to widening access for everyone. This variety ensures users can find the right fit for their scalability and quality requirements, fueling their creativity. They've just opened the waitlist for an early preview.

Prompt: Resting on the kitchen table is an embroidered cloth with the text 'good night' and an embroidered baby tiger. Next to the cloth, there is a lit candle. The lighting is dim and dramatic.

tiger sd3

Stable diffusion also features an exciting application that can extend an image in various directions. This is called stable diffusion outpainting and is used to expand an image beyond its original borders.

Midjourney

Midjourney is another recently released diffusion model used for image generation from text prompts, much like the other models. The recent hype around its latest release, Midjourney v6, underscores its advancements and enhanced capabilities in generating even more refined and creative images.

midjourney

Midjourney is exclusively available through Discord — probably the most unorthodox approach of all the models listed.

Here's a portrait image comparison between Midjourney V5.2 vs. Midjourney V6.

midjourney v5.2 vs midjourney v6
Midjourney V5.2 vs. Midjourney V6

NAI diffusion

NovelAI Diffusion gives users a unique image generation experience. It's a creative tool for visualizing your visions without limitations, letting you paint the stories of your imagination. It sounds intriguing, and it actually is. Here are the key features of the NovelAI image generator:

  • Text-to-image generation: Input a phrase called a 'prompt,' and the AI will generate an image for you.
  • Image-to-image generation: Upload an image (or use a previously generated one) to generate a new image.
  • Inpainting: Paint over a part of the image and regenerate it.
nai diffusion
Source

Imagen

Developed by Google, Imagen is a text-to-image diffusion model known for its photorealism and deep language understanding. It uses large transformer language models for text encoding and achieves high-fidelity image generation. Imagen has been noted for its strong FID score on the COCO benchmark (lower is better), indicating its effectiveness in producing images that closely align with human-rated quality and text-image alignment.

Omnigen

Omnigen is the newest diffusion model available, and it's pretty impressive. This diffusion framework unifies various image generation tasks within a single model, removing the need for task-specific networks or fine-tuning. Unlike other popular models like Stable Diffusion, Omnigen doesn't require additional modules like ControlNet or IP-Adapter to process diverse control conditions.

Omnigen supports text-to-image, multimodal-to-image, and few-shot-to-image use cases. The authors haven't open-sourced the related resources yet but plan to do so on their GitHub.

omnigen
Source

Comparing the latest diffusion models

We've already seen comparisons of Midjourney and DALL-E with their older versions. Now let's compare different diffusion models against each other and see which outperforms where.

Stable Diffusion 3 vs. DALL-E 3

Some users say Stable Diffusion 3 beats DALL-E 3 at image generation, especially at rendering text and following instructions.

Prompt: cinematic photo of a red apple on a table in a classroom, on the blackboard are the words "go big or go home" written in chalk.

stable diffusion 3 vs dall-e 3
Stable Diffusion 3: Source

Prompt: a painting of an astronaut riding a pig wearing a tutu holding a pink umbrella, on the ground next to the pig is a robin bird wearing a top hat, in the corner are the words "stable diffusion".

dall-e 3 vs stable diffusion 3
dall-e 3 and stable diffusion 3
Top: Stable Diffusion 3, Bottom: DALL-E 3

Midjourney v6 vs. DALL-E 3

The most intriguing comparison would be between the latest models of the image generation giants – DALL-E 3 and Midjourney v6. So, which one's the better model?

  • Midjourney is better at photorealism and fine detail, while DALL-E 3 excels at quality and consistency but lacks photorealism.
  • Midjourney runs on Discord and is more user-friendly, while DALL-E requires OpenAI access or third-party tools.
  • Midjourney v6 requires a subscription; DALL-E's cost depends on the plan.

Prompt: a realistic closeup photo of an elderly man in an urban setting, leaning against a door opening with his hand to his face smoking a cigar. No emotion. Distant look in eyes. Moody misty.

dall e old man
DALL-E: Source
midjourney old man
Midjourney: Source

Prompt: a graphic designed logo for a company called Buzz Coffee that sells very strong coffee.

dall e coffee logo
DALL-E: Source
midjourney coffee logo
Midjourney: Source

Prompt: a creative graphic design square poster for an electronic music band from the late 1990s. Flyer art style.

dall e poster
DALL-E: Source
midjourney poster
Midjourney: Source

Midjourney v6 vs. SDXL

A few comparisons of Midjourney v6 vs. SDXL by Stable Diffusion:

midjourney v6 vs sdxl 1
Source
midjourney v6 vs sdxl 2
Source
midjourney v6 vs sdxl 3
Source

Diffusion model limitations

Deploying diffusion models like those used in DALL-E can be challenging. They are computationally intensive and require significant resources, which can be a hurdle for real-time or large-scale applications. Additionally, their ability to generalize to unseen data can be limited, and adapting them to specific domains may require extensive fine-tuning or retraining.

Integrating these models into human workflows also presents challenges, as it's essential to ensure that the AI-generated outputs align with human intentions. Ethical and bias concerns are prevalent, as diffusion models can inherit biases from their training data, necessitating ongoing efforts to ensure fairness and ethical alignment.

Also, the complexity of diffusion models makes them difficult to interpret, posing challenges in applications where understanding the reasoning behind outputs is crucial. Managing user expectations and incorporating feedback to improve model performance is an ongoing process in the development and application of these models.

Another big downside is their slow sampling: generating high-quality samples takes hundreds or thousands of model evaluations. There are two main ways to address this issue. The first is new parameterizations of diffusion models that provide increased stability when using a few sampling steps. The second is distillation of guided diffusion models: progressive distillation repeatedly distills a trained deterministic diffusion sampler into a new diffusion model that needs only half as many sampling steps.

diffusion models distillation

Wrapping up

Diffusion models have truly transformed the way we think about AI's ability to generate images, videos, and sounds. These models operate by initially introducing noise into data and then cleverly removing it, which allows them to create complex and high-quality patterns. In this article, we dived into how these models work, their innovative techniques, and where they're being used. Beyond enhancing creativity in art and design, diffusion models are making strides in medical imaging and the development of self-driving cars. Their versatility offers a fascinating look at the ongoing evolution and expanding possibilities of AI.
