One-Step Residual Shifting Diffusion for Image Super-Resolution via Distillation


Category: cs.CV

Statistics

Citations: 5
References: 66
Authors

Daniil Selikhanovych, David Li, Aleksei Leonov, Nikita Gushchin, Sergei Kushneriuk, Alexander Filippov, Evgeny Burnaev, Iaroslav Koshelev, Alexander Korotin
Project Resources

- ArXiv Paper (arXiv)
- Semantic Scholar Paper (Semantic Scholar)
Abstract

Diffusion models for super-resolution (SR) produce high-quality visual results but incur high computational costs. Several methods have been developed to accelerate diffusion-based SR models, yet some (e.g., SinSR) fail to produce realistic perceptual details, while others (e.g., OSEDiff) may hallucinate non-existent structures. To overcome these issues, we present RSD, a new distillation method for ResShift, one of the top diffusion-based SR models. Our method trains the student network to produce images such that a new fake ResShift model trained on them coincides with the teacher model. RSD achieves single-step restoration and outperforms the teacher by a large margin. We show that our distillation method surpasses SinSR, the other distillation-based method for ResShift, putting it on par with state-of-the-art diffusion-based SR distillation methods. Compared to SR methods based on pre-trained text-to-image models, RSD produces competitive perceptual quality, aligns better with the degraded input images, and requires fewer parameters and less GPU memory. We provide experimental results on various real-world and synthetic datasets, including RealSR, RealSet65, DRealSR, ImageNet, and DIV2K.
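The core idea in the abstract, training a one-step student so that a fake model re-fit to the student's outputs matches the frozen teacher, can be illustrated on a toy 1-D Gaussian in the spirit of distribution-matching distillation. Everything below (the Gaussian teacher, the scalar student parameter `theta`, and the closed-form fake-model fit) is an illustrative assumption for this sketch, not the paper's actual RSD implementation.

```python
import numpy as np

# Toy distribution-matching distillation sketch (NOT the paper's RSD code).
# "Teacher" distribution: N(0, sigma_T^2), whose score at x is -x / sigma_T^2.
# One-step "student": maps noise z to x = theta * z.
# "Fake" model: re-fit to the student's current outputs (here, simply their
# empirical variance). The student is updated with the score difference,
# which pulls the fake model toward the teacher.
rng = np.random.default_rng(0)
sigma_T = 2.0   # teacher std (assumed toy value)
theta = 0.5     # student parameter, initialized far from sigma_T
lr = 0.05

for _ in range(300):
    z = rng.standard_normal(1024)
    x = theta * z                     # one-step student samples
    var_fake = x.var() + 1e-8         # fake model fit on student outputs
    s_fake = -x / var_fake            # score of the fake model at x
    s_teacher = -x / sigma_T**2       # frozen teacher score at x
    grad = np.mean((s_fake - s_teacher) * z)  # d x / d theta = z
    theta -= lr * grad                # student update

print(round(theta, 2))  # theta converges toward sigma_T
```

In RSD itself the student, teacher, and fake model are full ResShift networks and the fake model is trained by gradient steps rather than a closed-form fit; this sketch only mirrors the alternating structure.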
