Adaptive Blind All-in-One Image Restoration

Computer Vision Center · Universitat Autònoma de Barcelona · Universidad Autónoma de Madrid

Adapting to diverse degradations for blind all-in-one IR

Our Adaptive Blind All-in-One Image Restoration (ABAIR) model combines a powerful baseline trained on images with synthetic degradations, low-rank decompositions for task-specific adaptation, and a lightweight estimator to handle complex distortions. It achieves state-of-the-art performance on multiple benchmarks while remaining efficient and highly adaptable.


ABAIR architecture


Our method comprises three key components: a baseline model, a set of Low-Rank Adapters, and a lightweight degradation estimator. First, the baseline model is trained on a large dataset of natural images with synthetically generated degradations, including noise, blur, rain, haze, and low-light conditions. To enhance diversity, we introduce a CutMix strategy that blends multiple degradation types within a single image, paired with a Cross-Entropy Loss for pixel-wise degradation classification. Second, the baseline model is adapted to each task using Low-Rank Adaptation (LoRA). Third, a lightweight estimator is trained to identify the input degradations. This estimator enables either blending or selecting the most suitable adapters to adjust the pre-trained baseline weights. Consequently, our method is highly flexible: it generalizes effectively to unseen and mixed degradations, and it extends readily to new image restoration tasks. Notably, adapting to a new task requires only training a new adapter and updating the estimator, without compromising the knowledge gained from previously trained tasks.
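The adapter-blending step above can be sketched as a simple weighted combination of low-rank updates on top of frozen base weights. This is an illustrative sketch, not the paper's implementation: the function name `lora_blend` and the per-layer matrix formulation are assumptions for exposition.

```python
import numpy as np

def lora_blend(W0, adapters, weights):
    """Blend task-specific low-rank updates into a frozen base weight matrix.

    W0       : (d_out, d_in) frozen pre-trained weights
    adapters : list of (B, A) pairs, B: (d_out, r), A: (r, d_in),
               one pair per degradation type
    weights  : length-num_tasks soft scores from the degradation estimator

    Returns the effective weight W0 + sum_t w_t * B_t @ A_t.
    A one-hot `weights` selects a single adapter; a soft distribution
    blends several, as for mixed degradations.
    """
    W = W0.copy()
    for (B, A), w in zip(adapters, weights):
        W += w * (B @ A)
    return W
```

With a one-hot score vector this reduces to picking the single most suitable adapter; with soft scores it interpolates between per-degradation experts.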


Pre-training with Synthetic Degradations

We propose a pre-training strategy based on synthetic degradations, which are parameterized to control both the type and severity of degradations, including rain, blur, noise, haze, and low-light conditions.
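As a rough illustration of what "parameterized to control both type and severity" can mean, the sketch below applies a few toy degradation operators scaled by a severity value. The operator choices (Gaussian noise, gamma darkening, a uniform haze blend) and the function name `synth_degrade` are assumptions for exposition; the paper's exact degradation pipelines may differ.

```python
import numpy as np

def synth_degrade(img, kind, severity, rng=None):
    """Apply a synthetic degradation of a given type and severity.

    img      : float array in [0, 1]
    kind     : one of "noise", "low_light", "haze" (illustrative subset)
    severity : scalar controlling degradation strength
    """
    if rng is None:
        rng = np.random.default_rng(0)
    if kind == "noise":
        # Additive Gaussian noise; sigma grows with severity.
        return np.clip(img + rng.normal(0.0, 0.05 * severity, img.shape), 0.0, 1.0)
    if kind == "low_light":
        # Gamma darkening; higher severity darkens more.
        return img ** (1.0 + severity)
    if kind == "haze":
        # Blend toward a white airlight with severity-dependent transmission.
        t = 1.0 - 0.2 * severity
        return img * t + (1.0 - t)
    raise ValueError(f"unknown degradation kind: {kind}")
```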

Building a Robust Baseline Model

To establish a strong weight initialization for subsequent fine-tuning, we propose a degradation CutMix method. This approach seamlessly blends multiple degradation types and severity levels within a single image, enhancing the model’s ability to generalize. Additionally, we incorporate an auxiliary segmentation head and optimize the model using a Cross-Entropy Loss to perform pixel-wise degradation type estimation. Notably, the segmentation head is only utilized during Phase I and is excluded in Phases II and III.
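The degradation CutMix idea can be sketched as pasting a patch corrupted by one degradation onto an image corrupted by another, while recording a per-pixel label map for the auxiliary Cross-Entropy Loss. This is a minimal sketch assuming standard CutMix box sampling; the function name and sampling details are hypothetical.

```python
import numpy as np

def degradation_cutmix(clean, degrade_fns, rng=None):
    """Mix two degradation types within one image.

    clean       : clean image, shape (H, W) or (H, W, C)
    degrade_fns : list of callables, each applying one degradation type
    Returns the mixed degraded image and an integer (H, W) mask giving
    the per-pixel degradation type, used as the segmentation target.
    """
    if rng is None:
        rng = np.random.default_rng()
    h, w = clean.shape[:2]
    i, j = rng.integers(len(degrade_fns), size=2)
    img = degrade_fns[i](clean)
    mask = np.full((h, w), i, dtype=np.int64)
    # Standard CutMix box: area proportional to (1 - lambda), lambda ~ Beta(1, 1).
    lam = rng.beta(1.0, 1.0)
    cut_h = int(h * np.sqrt(1.0 - lam))
    cut_w = int(w * np.sqrt(1.0 - lam))
    y0 = rng.integers(0, h - cut_h + 1)
    x0 = rng.integers(0, w - cut_w + 1)
    patch = degrade_fns[j](clean)
    img[y0:y0 + cut_h, x0:x0 + cut_w] = patch[y0:y0 + cut_h, x0:x0 + cut_w]
    mask[y0:y0 + cut_h, x0:x0 + cut_w] = j
    return img, mask
```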

Simple yet Adaptive Method

Our approach is highly effective at handling known degradations while remaining adaptable to new image restoration tasks, enabled by its ability to learn a disentangled representation for each degradation type. In contrast, current methods require retraining the entire architecture with all degradation types to accommodate a new task, making them computationally expensive and inefficient. By building on a robust baseline model and a disentangled representation, our method requires training only a small set of parameters for new tasks, preserving the knowledge acquired from previous degradations.
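To give a sense of why adding a task is cheap, the sketch below computes the fraction of a dense layer's parameters that a rank-r adapter trains. The function name and the example layer size are illustrative, not figures from the paper.

```python
def lora_param_fraction(d_in: int, d_out: int, rank: int) -> float:
    """Fraction of a dense (d_out x d_in) layer's parameters that a
    rank-r adapter (B: d_out x r, A: r x d_in) actually trains."""
    return rank * (d_in + d_out) / (d_in * d_out)

# For a hypothetical 512x512 layer with rank-4 adapters, only about
# 1.6% of the layer's weights are trained when adding a new task.
```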

Abstract

Blind all-in-one image restoration models aim to recover a high-quality image from an input degraded with unknown distortions. However, these models require all possible degradation types to be defined during the training stage and show limited generalization to unseen degradations, which limits their practical application in complex cases. In this paper, we propose a simple but effective adaptive blind all-in-one restoration (ABAIR) model that can address multiple degradations, generalizes well to unseen degradations, and efficiently incorporates new degradations by training only a small fraction of its parameters. First, we train our baseline model on a large dataset of natural images with multiple synthetic degradations, augmented with a segmentation head that estimates per-pixel degradation types, resulting in a powerful backbone able to generalize to a wide range of degradations. Second, we adapt our baseline model to varying image restoration tasks using independent low-rank adapters. Third, we learn to adaptively combine adapters for diverse inputs via a flexible and lightweight degradation estimator. Our model is both powerful in handling specific distortions and flexible in adapting to complex tasks: it not only outperforms the state-of-the-art by a large margin on five- and three-task IR setups, but also shows improved generalization to unseen degradations and composite distortions.

Quantitative results

We evaluate ABAIR in a comprehensive five-degradation setup designed for all-in-one image restoration. Furthermore, we test its performance in a three-degradation setup, on unseen datasets excluded from training, on novel degradation types, and in mixed degradation scenarios. The table below summarizes our method’s performance on the five-degradation setup, demonstrating its superiority over previous state-of-the-art methods. Additionally, we present a radial plot that highlights ABAIR’s performance across all degradation scenarios: known, unseen, and mixed. For a more detailed analysis and quantitative results, please refer to our paper.

BibTeX

@article{serrano2024abair,
      title={Adaptive Blind All-in-One Image Restoration},
      author={Serrano-Lozano, David and Herranz, Luis and Su, Shaolin and Vazquez-Corral, Javier},
      journal={arXiv preprint arXiv:2411.18412},
      year={2024}
}