Deep Image Regularization

Deep Regularization via Bilevel Optimization

Two sub-projects, presented at the Asilomar Conference on Signals, Systems, and Computers (2022) and at BASP Frontiers (2023).

Inverse problems in imaging — such as denoising, deblurring, and medical image reconstruction — require estimating signals from incomplete or noisy data. Classical approaches rely on handcrafted convex regularizers like total variation or wavelet sparsity, which offer convergence guarantees but limited expressiveness. Deep learning has introduced powerful data-driven priors, yet most lack mathematical structure, leading to unstable or non-convergent optimization. This research aims to bridge that gap: to design learnable deep regularizers that combine the interpretability and convergence of convex methods with the flexibility and accuracy of modern neural networks.

We propose a bilevel learning framework that trains deep regularizers \( r_\theta(x) \) within a variational optimization setup \( J(x) = d(x; y) + r_\theta(x) \), where \( d \) enforces data fidelity and \( r_\theta \) encodes prior structure. The regularizer's gradient is constrained through adversarial monotonicity and Lipschitz penalties, ensuring that the learned operator behaves like a monotone mapping, a key condition for convergence.
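As a hedged illustration, a minimal PyTorch sketch of this objective and of the operator \( \nabla J \) might look as follows; the least-squares fidelity term, the forward operator `A`, and the callable `r_theta` are illustrative assumptions, not the exact models from the papers:

```python
import torch

def J(x, y, A, r_theta, lam=1.0):
    """Variational energy J(x) = d(x; y) + lam * r_theta(x), with a
    least-squares data fidelity d (an illustrative choice) and a
    learned scalar-valued regularizer r_theta."""
    return 0.5 * ((A(x) - y) ** 2).sum() + lam * r_theta(x)

def grad_J(x, y, A, r_theta, lam=1.0):
    """The operator ∇J whose monotonicity governs convergence; computed
    with autograd so it always matches the current r_theta."""
    x = x.detach().requires_grad_(True)
    return torch.autograd.grad(J(x, y, A, r_theta, lam), x)[0]
```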

  • Convex variant: enforces monotonicity directly on \( \nabla r_\theta \).
  • Non-convex variant: relaxes the constraint to the gradient of the overall energy, \( \nabla J \), improving expressivity while maintaining stability.
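A minimal sketch of how such penalties can be implemented is shown below; the random-perturbation sampling stands in for the adversarial search for violating pairs, and the perturbation scale and Lipschitz target are assumptions:

```python
import torch

def monotonicity_penalty(F, x, eps=0.1):
    """Penalize violations of <F(x) - F(z), x - z> >= 0 on sampled pairs.
    F is the operator being constrained: the gradient of r_theta (convex
    variant) or of J (non-convex variant). Random perturbations z stand
    in for the adversarially chosen pairs."""
    z = x + eps * torch.randn_like(x)
    inner = ((F(x) - F(z)) * (x - z)).flatten(1).sum(dim=1)
    return torch.relu(-inner).mean()  # nonzero only where monotonicity fails

def lipschitz_penalty(F, x, target=1.0, eps=0.1):
    """One-sided penalty pushing a finite-difference estimate of the local
    Lipschitz constant of F toward `target` along random directions."""
    d = eps * torch.randn_like(x)
    num = (F(x + d) - F(x)).flatten(1).norm(dim=1)
    den = d.flatten(1).norm(dim=1)
    return torch.relu(num / den - target).pow(2).mean()
```

For the convex variant, `F` would wrap the autograd gradient of `r_theta` alone; for the non-convex variant, it would be the `grad_J` operator from the sketch above.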

Training proceeds via gradient-step and bilevel optimization, differentiating end to end either through the unrolled inner-solver iterations (automatic differentiation) or through the solver's optimality conditions (implicit differentiation).
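The sketch below illustrates the unrolled-autodiff route on a toy setup; the small CNN regularizer, box-blur operator, step size, and iteration count are all illustrative assumptions (implicit differentiation through the inner solver's fixed point is the alternative, not shown):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins (assumptions, not the models from the papers): a small CNN
# whose summed output is the scalar energy r_theta(x), and a 5x5 box blur
# standing in for the Gaussian forward operator A.
class ScalarRegularizer(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.SiLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x).sum()

kernel = torch.ones(1, 1, 5, 5) / 25.0
A = lambda x: F.conv2d(x, kernel, padding=2)

def reconstruct(y, r_theta, steps=20, tau=0.5, lam=1e-3):
    """Inner problem: unrolled gradient descent on
    J(x) = 0.5 * ||A x - y||^2 + lam * r_theta(x).
    create_graph=True keeps the iterations differentiable so the
    outer loss can backpropagate into theta."""
    x = y.clone().requires_grad_(True)  # warm start at the observation
    for _ in range(steps):
        J = 0.5 * ((A(x) - y) ** 2).sum() + lam * r_theta(x)
        g = torch.autograd.grad(J, x, create_graph=True)[0]
        x = x - tau * g
    return x

# Outer problem: fit theta so inner minimizers match clean images.
r_theta = ScalarRegularizer()
opt = torch.optim.Adam(r_theta.parameters(), lr=1e-4)
x_true = torch.rand(4, 1, 32, 32)                # synthetic clean batch
y = A(x_true) + 0.01 * torch.randn_like(x_true)  # blurred, noisy data
x_hat = reconstruct(y, r_theta)
loss = F.mse_loss(x_hat, x_true)
opt.zero_grad()
loss.backward()
opt.step()
```

In practice the monotonicity and Lipschitz penalties from the previous sketch would be added to the outer loss with suitable weights.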

Experiments on Gaussian deblurring and other image-recovery tasks show that these monotone-regularized networks outperform both structurally convex and unconstrained baselines, achieving higher PSNR and SSIM with provable convergence guarantees. The approach demonstrates that deep regularizers need not sacrifice stability for accuracy — by embedding monotonicity and smoothness into learning, one can build data-driven inverse solvers that are both expressive and convergent. This framework lays the foundation for interpretable deep optimization in imaging and beyond.