Reading Notes on "FFDNet: Toward a Fast and Flexible Solution for CNN-Based Image Denoising"
These are personal reading notes, intended only for my own study and understanding.
Paper link: https://ieeexplore.ieee.org/abstract/document/8365806
Code: https://github.com/cszn/FFDNet
Most existing deep-learning denoisers learn a specific model for each noise level and therefore require multiple models to denoise images with different noise levels. To address this, the paper proposes FFDNet, a fast and flexible denoising convolutional neural network. FFDNet speeds up processing by working on downsampled sub-images and uses orthogonal regularization to improve generalization. Compared with existing discriminative denoisers, the proposed network has the following advantages:
1. A single network can handle a wide range of noise levels.
2. The ability to remove spatially variant noise by specifying a non-uniform noise level map.
3. Fast speed.
The performance of the proposed FFDNet is verified on both synthetic and real noisy images. In this paper, the noise is assumed to be AWGN and the noise level is assumed to be given. To cope with practical denoising problems, a flexible denoiser should have the following properties:
1. It can denoise with a single model.
2. It is effective, efficient, and easy to use.
3. It can handle spatially variant noise.
When the noise level is unknown or difficult to estimate, the denoiser should allow the user to adaptively control the trade-off between noise reduction and detail preservation. Furthermore, the noise can be spatially variant, and the denoiser should be flexible enough to handle such noise.
In FFDNet, the noise level map is modeled as an input and the model parameters are invariant to the noise level, so FFDNet provides a flexible way to handle various types of noise with a single network.
The proposed FFDNet works on downsampled sub-images, which largely accelerates training and testing and also enlarges the receptive field.
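To make this pipeline concrete, here is a minimal PyTorch sketch (not the authors' code) of the FFDNet idea: reversible downsampling into four sub-images, concatenating a noise level map as an extra channel, a plain convolutional body, and pixel shuffling back to full resolution. The layer count and channel width below are illustrative assumptions rather than the paper's exact configuration.

```python
# A minimal sketch of the FFDNet pipeline (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class FFDNetSketch(nn.Module):
    def __init__(self, in_channels=1, features=64, num_layers=15):
        super().__init__()
        # after pixel_unshuffle(2): in_channels*4 sub-image channels + 1 noise level map channel
        layers = [nn.Conv2d(in_channels * 4 + 1, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(num_layers - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1),
                       nn.BatchNorm2d(features),
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(features, in_channels * 4, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, noisy, sigma):
        # noisy: (N, C, H, W) with even H, W; sigma: (N,) noise std, normalized to [0, 1]
        sub = F.pixel_unshuffle(noisy, 2)                           # (N, 4C, H/2, W/2) sub-images
        m = sigma.view(-1, 1, 1, 1).expand(-1, 1, *sub.shape[2:])   # uniform noise level map
        out = self.body(torch.cat([sub, m], dim=1))                 # denoised sub-images
        return F.pixel_shuffle(out, 2)                              # back to (N, C, H, W)

# usage: denoised = FFDNetSketch()(noisy_batch, torch.full((noisy_batch.size(0),), 25 / 255.0))
```

Because the convolutions operate at half resolution, each 3x3 layer covers a larger area of the original image, which is how the sub-image trick enlarges the receptive field while reducing computation.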
Current denoising methods can be divided into two categories:
1. Model-based methods, e.g. BM3D and WNNM, are flexible in handling denoising problems with various noise levels, but they suffer from several drawbacks: they are time-consuming, they cannot be directly used to remove spatially variant noise, and they rely on hand-crafted image priors such as nonlocal self-similarity.
2. Discriminative learning based methods (CNN-style) learn the underlying image prior and fast inference from a training set of degraded and ground-truth image pairs. The learned model is usually tailored to a specific noise level and is hard to deploy directly on images with other noise levels, and all existing discriminative learning based methods lack the flexibility to deal with spatially variant noise.
DnCNN uses batch normalization and residual learning to effectively remove uniform Gaussian noise, and it can suppress noise over a certain range of noise levels. However, real noise is not uniform Gaussian: it is signal-dependent, correlated across color channels, non-uniform, and may vary with spatial position. In this situation, FFDNet uses a noise level map as input to trade off suppression of uniform noise against detail preservation, which lets it cope with more complex real-world scenes. CBDNet takes this advantage further by implementing the noise level estimation step as a sub-network, so that the whole network can perform blind denoising.
Characteristics of the FFDNet network:
- The noise level map is taken as a network input, so the network can cope with more complex noise, such as noise of different levels and spatially variant noise; the noise level map also acts as a weight that balances noise suppression against detail preservation.
- The input image is downsampled into several sub-images that are fed into the network, and the output sub-images are upsampled to produce the final result. While keeping the result accurate, this operation effectively reduces the number of network parameters and enlarges the receptive field, making the network more efficient and faster.
- The network parameters are initialized with orthogonal matrices, which makes training more efficient (see the sketch after this list).
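A hedged sketch of how orthogonal initialization of convolution weights could be applied in PyTorch. The paper describes orthogonal regularization of the convolution filters (also enforced during training); this snippet only illustrates the initialization side of that idea.

```python
# Apply orthogonal initialization to every Conv2d layer of a model.
import torch.nn as nn

def init_orthogonal(model: nn.Module) -> None:
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            # the (out_channels, in_channels*k*k) kernel matrix is made (row-)orthogonal
            nn.init.orthogonal_(module.weight)
            if module.bias is not None:
                nn.init.zeros_(module.bias)

# usage: init_orthogonal(FFDNetSketch())  # FFDNetSketch from the earlier sketch
```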
By taking a tunable noise level map as input, FFDNet becomes more flexible with respect to the noise level. To improve efficiency, the input image is downsampled before processing. In addition, to make the network insensitive to the bias between the input and ground-truth noise levels and to generate fewer artifacts, orthogonal regularization is applied to the convolution layers.
A noise estimation sub-network (as in CBDNet) converts the noisy observation into an estimated noise level map.
The network structure is shown in a figure in the paper (not reproduced here).
Noise Level Map
First, recall why model-based image denoising methods can handle different noise levels, as shown in the following formulation (the first term is the data fidelity term, the second is the regularization term):
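The formula referenced above was an image in the original post; below is a reconstruction of the standard model-based formulation used in the paper, assuming AWGN with standard deviation σ:

```latex
% Model-based denoising objective (AWGN with standard deviation \sigma):
% data fidelity term + regularization term
\hat{x} = \arg\min_{x} \; \frac{1}{2\sigma^{2}} \lVert y - x \rVert^{2} + \lambda \, \Phi(x)
```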
In this formulation, the solution can be viewed as an implicit function of the noisy image and the noise level; FFDNet replaces the scalar noise level with a noise level map M, which is fed to the network as an input.
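Written out, this is the implicit-function view of the model-based solution, and the FFDNet reformulation with a noise level map M in place of the scalar noise level:

```latex
% implicit-function view of denoising, and the FFDNet reformulation
\hat{x} = \mathcal{F}(y, \sigma; \Theta)
\qquad\longrightarrow\qquad
\hat{x} = \mathcal{F}(y, M; \Theta)
```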
This paper feeds the noisy image and the noise level map into the network, so that denoising can be performed under different noise levels. But the question is: how do we obtain the noise level map? This should be discussed in CBDNet.
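For the AWGN setting assumed in this paper, the noise level map does not need to be estimated at all: it is simply a uniform map filled with the known σ (CBDNet is what later learns to estimate it from the observation). A minimal sketch, assuming σ is normalized to [0, 1]:

```python
# Build a uniform noise level map for a known AWGN standard deviation.
import torch

def uniform_noise_level_map(sigma: float, height: int, width: int) -> torch.Tensor:
    # sigma is assumed normalized to [0, 1], e.g. 25/255 for 8-bit images
    return torch.full((1, 1, height, width), sigma)

# usage, assuming a 256x256 input downsampled by 2:
# m = uniform_noise_level_map(25 / 255.0, 128, 128)
```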
The noise level map may not be accurately estimated from the noisy observation, and a mismatch between the input and real noise levels is inevitable. If the input noise level is lower than the real noise level, the noise cannot be completely removed, so users often prefer to set a higher noise level to remove more noise. However, this may also remove too much image detail along with the noise.
A practical denoiser should tolerate a certain mismatch of noise levels.
An approximation of the non-uniform noise level map can then be obtained.
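As an illustration (my own example, not taken from the paper), spatially variant AWGN can be synthesized from a per-pixel sigma map, and that same map, or an approximation of it, is what FFDNet takes as its noise level map input:

```python
# Corrupt an image with spatially variant Gaussian noise driven by a per-pixel sigma map.
import torch

def add_spatially_variant_noise(clean: torch.Tensor, sigma_map: torch.Tensor) -> torch.Tensor:
    # clean: (N, C, H, W) in [0, 1]; sigma_map: (N, 1, H, W) of per-pixel noise std
    return clean + torch.randn_like(clean) * sigma_map

# example: noise level increasing horizontally from 5/255 to 50/255
# H, W = 256, 256
# ramp = torch.linspace(5 / 255.0, 50 / 255.0, W).view(1, 1, 1, W).expand(1, 1, H, W)
# noisy = add_spatially_variant_noise(clean, ramp)
```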
FFDNet exhibits noise level sensitivity similar to BM3D and DnCNN in balancing noise reduction and detail preservation, and it also performs well when the ground-truth noise level is unknown.
Regarding the noise level, see Section E of the experiments.
Reference blog posts:
https://blog.csdn.net/zbwgycm/article/details/82848893
https://blog.csdn.net/zbwgycm/article/details/82052003 (on CBDNet)