High-Resolution and Quality Settings with Latent Consistency Models

Abstract

Diffusion models have become powerful generative models capable of synthesizing high-quality images across various domains. This paper explores Stable Diffusion, focusing primarily on Latent Diffusion Models, and examines how Latent Consistency Models can accelerate inference to only a few iterations. It demonstrates their performance on image in-painting and class-conditional synthesis tasks. Through experiments with different datasets and parameter configurations, the paper highlights trade-offs among image quality, processing time, and parameter settings. It also discusses future directions, including a trigger-based implementation and emotion-based themes as a replacement for explicit prompts.
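
To illustrate the few-step inference that Latent Consistency Models enable, the sketch below shows a minimal text-to-image call using the Hugging Face diffusers library. The checkpoint name, prompt, and parameter values are illustrative assumptions, not the configuration used in the paper.

import torch
from diffusers import DiffusionPipeline

# Illustrative sketch (not the paper's code): few-step inference with a
# Latent Consistency Model distilled from Stable Diffusion.
pipe = DiffusionPipeline.from_pretrained(
    "SimianLuo/LCM_Dreamshaper_v7",  # example community LCM checkpoint (assumption)
    torch_dtype=torch.float16,
).to("cuda")

# LCMs converge in a handful of denoising steps (here 4) instead of the
# 25-50 steps typical of standard latent diffusion samplers.
image = pipe(
    prompt="a high-resolution photo of a mountain lake at sunrise",
    num_inference_steps=4,
    guidance_scale=8.0,
).images[0]
image.save("lcm_sample.png")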

Faculty Advisor/Mentor

Dr. Rui Ning

Document Type

Paper

Disciplines

Artificial Intelligence and Robotics

DOI

10.25776/2dwf-4v45

Publication Date

4-12-2024
