QuEST: Low-bit Diffusion Model Quantization via Efficient Selective Finetuning

by Haoxuan Wang and 5 other authors

Abstract: The practical deployment of diffusion models is still hindered by high memory and computational overhead. Although quantization paves the way for model compression and acceleration, existing methods struggle to achieve low-bit quantization efficiently. In this paper, we identify imbalanced activation distributions as a primary source of quantization difficulty, and propose to adjust these distributions through weight finetuning to make them more quantization-friendly. We provide both theoretical and empirical evidence supporting finetuning as a practical and reliable solution. Building on this approach, we further distinguish two critical types of quantized layers: those responsible for retaining essential temporal information and those particularly sensitive to bit-width reduction. By selectively finetuning these layers under both local and global supervision, we mitigate performance degradation while enhancing quantization efficiency. Our method demonstrates its efficacy across three high-resolution image generation tasks, obtaining state-of-the-art performance across multiple bit-width settings.
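
The selective finetuning recipe the abstract describes can be sketched in PyTorch. This is a minimal illustration, not the authors' released code: the UNet call signature unet(x_t, t), the "time_emb" name heuristic for temporal layers, the tapped layer list, and the loss weight lam are all assumptions made for the sketch.

import torch
import torch.nn.functional as F

def select_finetune_layers(unet_q, sensitive_names):
    # Freeze all weights, then unfreeze only the selected layers:
    # those carrying temporal information (time-embedding projections,
    # matched by name here as an assumption) and those flagged as
    # sensitive to bit-width reduction.
    for p in unet_q.parameters():
        p.requires_grad_(False)
    trainable = []
    for name, module in unet_q.named_modules():
        if "time_emb" in name or name in sensitive_names:
            for p in module.parameters(recurse=False):
                p.requires_grad_(True)
                trainable.append(p)
    return trainable

def finetune_step(unet_fp, unet_q, x_t, t, optimizer, taps, lam=0.5):
    # One optimization step combining the two supervision signals:
    # a local loss matching per-layer outputs of the quantized model
    # to the full-precision teacher, and a global loss matching the
    # final noise prediction.
    feats_fp, feats_q, hooks = [], [], []
    for name, m in unet_fp.named_modules():
        if name in taps:
            hooks.append(m.register_forward_hook(
                lambda _m, _i, o: feats_fp.append(o.detach())))
    for name, m in unet_q.named_modules():
        if name in taps:
            hooks.append(m.register_forward_hook(
                lambda _m, _i, o: feats_q.append(o)))

    with torch.no_grad():
        eps_fp = unet_fp(x_t, t)   # frozen full-precision teacher
    eps_q = unet_q(x_t, t)         # quantized student

    local_loss = sum(F.mse_loss(q, f) for q, f in zip(feats_q, feats_fp))
    global_loss = F.mse_loss(eps_q, eps_fp)
    loss = global_loss + lam * local_loss

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    for h in hooks:
        h.remove()
    return loss.item()

Because only the selected parameters have requires_grad set, the optimizer touches a small fraction of the network, which is what keeps the finetuning efficient relative to full retraining.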

Submission history

From: Haoxuan Wang
[v1] Tue, 6 Feb 2024 03:39:44 UTC (8,502 KB)
[v2] Tue, 13 Feb 2024 05:22:34 UTC (8,502 KB)
[v3] Fri, 6 Sep 2024 02:02:41 UTC (7,242 KB)
[v4] Thu, 26 Jun 2025 17:36:29 UTC (6,569 KB)
[v5] Wed, 9 Jul 2025 03:25:08 UTC (6,570 KB)
