Paper Detail
Danil Tokhchukov, Aysel Mirzoeva, Andrey Kuznetsov, Konstantin Sobolev
In this paper, we uncover hidden potential in Diffusion Transformers (DiTs) for enhancing generative tasks. Through an in-depth analysis of the denoising process, we demonstrate that introducing a single learned scaling parameter can markedly improve the performance of DiT blocks. Building on this insight, we propose Calibri, a parameter-efficient approach that optimally calibrates DiT components to elevate generative quality. Calibri frames DiT calibration as a black-box reward optimization problem, solves it efficiently with an evolutionary algorithm, and modifies only ~100 parameters. Experimental results reveal that despite its lightweight design, Calibri consistently improves performance across various text-to-image models. Notably, Calibri also reduces the number of inference steps required for image generation while maintaining high-quality outputs.
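The calibration scheme described in the abstract — treating ~100 per-block scaling parameters as a black-box reward optimization problem solved by an evolutionary algorithm — could be sketched roughly as follows. All names are illustrative assumptions, the simple (1+λ) evolution strategy stands in for whatever algorithm the paper actually uses, and `reward_fn` is a stand-in for a real generation-quality score (e.g., an aesthetic or CLIP-based reward); this is not the authors' implementation.

```python
import random

def calibrate_scales(num_blocks, reward_fn, generations=50, pop_size=16,
                     sigma=0.05, seed=0):
    """Hypothetical sketch: evolve one scalar scale per DiT block.

    `reward_fn(scales)` is a black box supplied by the caller that scores
    images generated with the given per-block scales. A (1+lambda)
    evolution strategy is used here purely for illustration.
    """
    rng = random.Random(seed)
    best = [1.0] * num_blocks          # identity scaling as the starting point
    best_r = reward_fn(best)
    for _ in range(generations):
        # Mutate the incumbent with Gaussian noise to form a small
        # population, keeping any candidate that improves the reward.
        for _ in range(pop_size):
            cand = [s + rng.gauss(0.0, sigma) for s in best]
            r = reward_fn(cand)
            if r > best_r:
                best, best_r = cand, r
    return best, best_r
```

Because only the scalar scales are optimized and the reward is queried as a black box, no gradients through the diffusion model are needed — which is what makes the ~100-parameter search tractable with an evolutionary method.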
@misc{tokhchukov2026calibri,
  title         = {Calibri: Enhancing Diffusion Transformers via Parameter-Efficient Calibration},
  author        = {Danil Tokhchukov and Aysel Mirzoeva and Andrey Kuznetsov and Konstantin Sobolev},
  year          = {2026},
  abstract      = {In this paper, we uncover the hidden potential of Diffusion Transformers (DiTs) to significantly enhance generative tasks. Through an in-depth analysis of the denoising process, we demonstrate that introducing a single learned scaling parameter can significantly improve the performance of DiT blocks. Building on this insight, we propose Calibri, a parameter-efficient approach that optimally calibrates DiT components to elevate generative quality. Calibri frames DiT calibration as a black-box reward optimization problem, which is efficiently solved using an evolutionary algorithm and modifies just ~100 parameters. Experimental results reveal that despite its lightweight design, Calibri consistently improves performance across various text-to-image models. Notably, Calibri also reduces the inference steps required for image generation, all while maintaining high-quality outputs.},
  url           = {https://huggingface.co/papers/2603.24800},
  keywords      = {Diffusion Transformers, denoising process, learned scaling parameter, parameter-efficient approach, DiT blocks, generative quality, black-box reward optimization, evolutionary algorithm, code available, huggingface daily},
  eprint        = {2603.24800},
  archiveprefix = {arXiv},
}