Deblurring is the removal of blur from a blurred image. It is extremely important in fields such as medical imaging and astronomy. In medical imaging, deblurring and the closely related problem of denoising find extensive application in ultrasonic and CT images. Similarly, in astronomy there is a need to denoise and deblur images taken by space telescopes, e.g. the Hubble telescope.
Deblurring is a genuinely hard problem, even though it has been studied for ages. While going through some of the white papers on deblurring I have been particularly fascinated by the results: blurred images restored to their pristine, original sharp form. It is almost magical, and it is amazing that a computer program is able to detect and remove blur, almost bordering on "computer perception".
However, the solution to the problem is very involved. Almost every white paper I have read on deblurring techniques involves abstruse mathematical concepts, enough to make one break into a cold sweat, as I did several times. There are several well-known methods of removing blur from images, chief among them the Richardson-Lucy method, the Wiener filter (do read my post "Dabbling with Wiener filter using OpenCV"), the Radon transform, and various forms of Bayesian filtering. All of these methods perform the converse of convolution, known as de-convolution. Almost all approaches are based on the following equation:
b(x) = k(x) * i(x) + n(x)    (1)
where b(x) is the blurred image, i(x) is the latent (good) image, k(x) is the blur kernel, also known as the Point Spread Function (PSF), and n(x) is random noise.
The key fact to note in the above equation is that the blurred image b is the convolution of a good latent image with a blur kernel, plus some additive noise. In other words, the good latent image is convolved with the blurring function, or point spread function, and this results in the blurred image.
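To make the convolution in equation (1) concrete, here is a minimal, self-contained sketch in plain C; the tiny 5 x 5 image and the 3 x 3 averaging kernel are made-up values purely for illustration, and the noise term is omitted.

#include <stdio.h>

#define H 5
#define W 5
#define K 3

/* Naive 'same'-size convolution of a small grayscale image with a 3x3
   kernel, i.e. b = k * i from equation (1) without the noise term.
   Strictly, convolution flips the kernel, but with a symmetric kernel
   correlation and convolution coincide. Borders are handled by clamping. */
int main(void)
{
    float img[H][W] = {
        {10,10,10,10,10},
        {10,50,50,50,10},
        {10,50,90,50,10},
        {10,50,50,50,10},
        {10,10,10,10,10}
    };
    float kern[K][K] = {
        {1/9.0f, 1/9.0f, 1/9.0f},
        {1/9.0f, 1/9.0f, 1/9.0f},
        {1/9.0f, 1/9.0f, 1/9.0f}
    };
    float blurred[H][W];
    int x, y, i, j;

    for (y = 0; y < H; y++) {
        for (x = 0; x < W; x++) {
            float acc = 0.0f;
            for (i = 0; i < K; i++) {
                for (j = 0; j < K; j++) {
                    int yy = y + i - K/2;
                    int xx = x + j - K/2;
                    if (yy < 0) yy = 0; if (yy >= H) yy = H - 1;
                    if (xx < 0) xx = 0; if (xx >= W) xx = W - 1;
                    acc += kern[i][j] * img[yy][xx];
                }
            }
            blurred[y][x] = acc;  /* b(x) = k(x) * i(x), n(x) omitted */
        }
    }

    for (y = 0; y < H; y++) {
        for (x = 0; x < W; x++)
            printf("%6.1f ", blurred[y][x]);
        printf("\n");
    }
    return 0;
}

Running this shows the sharp central peak (90) smeared out over its neighbours, which is exactly the effect we want to undo.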
Now there are two unknowns in the above equation: the blur kernel and the latent image. To obtain the latent image, a process known as de-convolution has to be performed.
Converting equation (1) to the frequency domain with the Fourier transform, convolution becomes multiplication, so we have

B(w) = K(w) I(w) + N(w)

where B, K, I and N are the DFTs of b, k, i and n respectively. Ignoring noise we have

I(w) = B(w)/K(w)

and hence

i(x) = Inverse DFT [B(w)/K(w)]

If K(w) is written as A + iB, then dividing by K(w) is the same as multiplying by its complex conjugate and dividing by its squared magnitude:

i(x) = Inverse DFT [B(w) * (A - iB) / (A**2 + B**2)]

where A - iB is the complex conjugate of the DFT of the blur kernel.
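As a sanity check on that algebra, here is a minimal sketch in plain C (the coefficient values are arbitrary, chosen just for illustration) that divides a single complex DFT coefficient of the blurred image by a kernel coefficient using the conjugate:

#include <stdio.h>

/* Divide one complex coefficient of the blurred image's DFT (br + i*bi)
   by one coefficient of the kernel's DFT (a + i*b), via the conjugate:
   (br + i*bi) * (a - i*b) / (a**2 + b**2) */
int main(void)
{
    double br = 4.0, bi = 2.0;   /* blurred-image coefficient B(w) */
    double a  = 1.0, b  = 1.0;   /* kernel coefficient K(w) = A + iB */
    double denom = a*a + b*b;    /* A**2 + B**2 */

    double re = (br*a + bi*b) / denom;  /* real part of latent coefficient */
    double im = (bi*a - br*b) / denom;  /* imaginary part */

    printf("I(w) = %.3f %+.3fi\n", re, im);  /* prints I(w) = 3.000 -1.000i */
    return 0;
}

This is exactly the per-element operation the OpenCV code below performs on whole spectra with cvMulSpectrums and cvDiv.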
This is known as de-convolution. There are two types of de-convolution: non-blind and blind. In non-blind de-convolution the blur kernel is assumed known and is used directly to de-blur the image. In blind de-convolution an initial estimate of the blur kernel is made, and the kernel and latent image are successively refined, typically using a maximum-a-posteriori (MAP) approach that iterates through better and better kernel estimates. The Richardson-Lucy algorithm likewise iteratively refines its estimate of the latent image from the blurred image.
In this post I perform de-convolution using an assumed blur kernel. As can be seen, there is a slight reduction of the blur. A larger kernel, perhaps a 100 x 100 matrix, would probably give a better result.
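If you want to experiment with the kernel size without hard-coding the values (as the code below does for a 3 x 3 kernel), a normalized k x k averaging kernel can be built programmatically. This is only a sketch under the assumption that a simple box blur is an acceptable PSF estimate; make_box_kernel is a hypothetical helper of mine, not part of OpenCV.

#include "cxcore.h"

/* Build a k x k averaging (box) blur kernel whose entries sum to 1.
   'vals' must point to at least k*k floats and must outlive the CvMat,
   since cvMat() wraps the buffer without copying it. */
CvMat make_box_kernel(int k, float* vals)
{
    int n;
    for (n = 0; n < k * k; n++)
        vals[n] = 1.0f / (float)(k * k);  /* divide by number of elements */
    return cvMat(k, k, CV_32FC1, vals);
}

For example, float v[25]; CvMat kernel = make_box_kernel(5, v); would give a 5 x 5 averaging kernel to de-convolve with.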
You can clone this code from GitHub at “Deblurring by deconvolution in OpenCV”
Kindly share any thoughts or ideas you have. I have included the full code below; feel free to use it, tweak it as you see fit, and please share any findings you come across.
Steps in the de-convolution process
1. Perform the DFT of the blurred image. Also perform the inverse DFT to verify that the DFT step is correct. Then make sure the line performing the inverse is commented out, as it overwrites the DFT array.

// Perform DFT of original image
dft_A = cvShowDFT(im, dft_M1, dft_N1, "original");
//Perform inverse (check)
//cvShowInvDFT(im, dft_A, dft_M1, dft_N1, fp, "original");

2. Perform the DFT of the blur kernel. Also perform the inverse DFT to get back the original contents. Again, make sure the inverse line is commented out, as it overwrites the DFT array.

// Perform DFT of kernel
dft_B = cvShowDFT(k_image, dft_M1, dft_N1, "kernel");
//Perform inverse of kernel (check)
//cvShowInvDFT(k_image, dft_B, dft_M1, dft_N1, fp, "kernel");

3. Multiply the DFT of the image with the complex conjugate of the DFT of the blur kernel.

// Multiply numerator with complex conjugate
dft_C = cvCreateMat( dft_M1, dft_N1, CV_64FC2 );
cvMulSpectrums(dft_A, dft_B, dft_C, CV_DXT_MUL_CONJ);

4. Compute A**2 + B**2, the squared magnitude of the kernel's DFT.

// Split Real and imaginary parts
cvSplit( dft_B, image_ReB, image_ImB, 0, 0 );
cvPow( image_ReB, image_ReB, 2.0);
cvPow( image_ImB, image_ImB, 2.0);
cvAdd(image_ReB, image_ImB, image_ReB, 0);

5. Divide the numerator by A**2 + B**2.

//Divide Numerator/A^2 + B^2
cvDiv(image_ReC, image_ReB, image_ReC, 1.0);
cvDiv(image_ImC, image_ReB, image_ImC, 1.0);

6. Merge the real and imaginary parts.

// Merge Real and complex parts
cvMerge(image_ReC, image_ImC, NULL, NULL, complex_ImC);

7. Finally, perform the inverse DFT.

cvShowInvDFT(im, complex_ImC, dft_M1, dft_N1, fp, "deblur");
I have included the complete re-worked code below for the process of de-convolution. There was a small bug in the initial version. This has been fixed and the code below should work as is.
Note: You can clone this code from GitHub at “Deblurring by deconvolution in OpenCV”
Do also take a look at my post “Reworking the Lucy Richardson algorithm in OpenCV”
/*
============================================================================
Name : deconv.c
Author : Tinniam V Ganesh
Version :
Copyright :
Description : Deconvolution in OpenCV
============================================================================
*/
#include <stdio.h>
#include <stdlib.h>
#include "cxcore.h"
#include "cv.h"
#include "highgui.h"
int main(int argc, char ** argv)
{
int height,width,step,channels;
uchar* data;
uchar* data1;
int i,j,k;
float s;
CvMat *dft_A;
CvMat *dft_B;
CvMat *dft_C;
IplImage* im;
IplImage* im1;
IplImage* image_ReB;
IplImage* image_ImB;
IplImage* image_ReC;
IplImage* image_ImC;
IplImage* complex_ImC;
IplImage* image_ReDen;
IplImage* image_ImDen;
FILE *fp;
fp = fopen("test.txt", "w+");
int dft_M,dft_N;
int dft_M1,dft_N1;
CvMat* cvShowDFT(IplImage* im, int dft_M, int dft_N, char* src);
void cvShowInvDFT(IplImage* im, CvMat* dft_A, int dft_M, int dft_N, FILE* fp, char* src);
im1 = cvLoadImage( "kutty-1.jpg", 1 );
cvNamedWindow("original-color", 0);
cvShowImage("original-color", im1);
im = cvLoadImage( "kutty-1.jpg", CV_LOAD_IMAGE_GRAYSCALE );
if( !im )
return -1;
cvNamedWindow("original-gray", 0);
cvShowImage("original-gray", im);
// Create blur kernel (non-blind)
//float vals[]={.000625,.000625,.000625,.003125,.003125,.003125,.000625,.000625,.000625};
//float vals[]={-0.167,0.333,0.167,-0.167,.333,.167,-0.167,.333,.167};
float vals[]={.055,.055,.055,.222,.222,.222,.055,.055,.055};
CvMat kernel = cvMat(3, // number of rows
3, // number of columns
CV_32FC1, // matrix data type
vals);
IplImage* k_image_hdr;
IplImage* k_image;
// Wrap the kernel matrix in an IplImage header; cvGetImage fills the
// header from the CvMat, so no pixel data is copied
k_image_hdr = cvCreateImageHeader(cvSize(3,3), IPL_DEPTH_32F, 1);
k_image = cvGetImage(&kernel, k_image_hdr);
/*IplImage* k_image;
k_image = cvLoadImage( "kernel4.bmp", 0 );*/
cvNamedWindow("blur kernel", 0);
height = k_image->height;
width = k_image->width;
step = k_image->widthStep;
channels = k_image->nChannels;
//data1 = (float *)(k_image->imageData);
data1 = (uchar *)(k_image->imageData);
cvShowImage("blur kernel", k_image);
dft_M = cvGetOptimalDFTSize( im->height - 1 );
dft_N = cvGetOptimalDFTSize( im->width - 1 );
//dft_M1 = cvGetOptimalDFTSize( im->height+99 - 1 );
//dft_N1 = cvGetOptimalDFTSize( im->width+99 - 1 );
dft_M1 = cvGetOptimalDFTSize( im->height+3 - 1 );
dft_N1 = cvGetOptimalDFTSize( im->width+3 - 1 );
// Perform DFT of original image
dft_A = cvShowDFT(im, dft_M1, dft_N1, "original");
//Perform inverse (check & comment out) - commented as it overwrites dft_A
//cvShowInvDFT(im, dft_A, dft_M1, dft_N1, fp, "original");
// Perform DFT of kernel
dft_B = cvShowDFT(k_image, dft_M1, dft_N1, "kernel");
//Perform inverse of kernel (check & comment out) - commented as it overwrites dft_B
//cvShowInvDFT(k_image, dft_B, dft_M1, dft_N1, fp, "kernel");
// Multiply numerator with complex conjugate
dft_C = cvCreateMat( dft_M1, dft_N1, CV_64FC2 );
printf("%d %d %d %d\n", dft_M, dft_N, dft_M1, dft_N1);
// Multiply DFT(blurred image) * complex conjugate of blur kernel
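// CV_DXT_MUL_CONJ makes cvMulSpectrums compute DFT(image) * conj(DFT(kernel))
// element-wise; the division by A**2 + B**2 below completes the complex
// division I(w) = B(w)/K(w) derived earlier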
cvMulSpectrums(dft_A,dft_B,dft_C,CV_DXT_MUL_CONJ);
// Split Fourier in real and imaginary parts
image_ReC = cvCreateImage( cvSize(dft_N1, dft_M1), IPL_DEPTH_64F, 1);
image_ImC = cvCreateImage( cvSize(dft_N1, dft_M1), IPL_DEPTH_64F, 1);
complex_ImC = cvCreateImage( cvSize(dft_N1, dft_M1), IPL_DEPTH_64F, 2);
printf("%d %d %d %d\n", dft_M, dft_N, dft_M1, dft_N1);
cvSplit( dft_C, image_ReC, image_ImC, 0, 0 );
// Compute A^2 + B^2 of denominator or blur kernel
image_ReB = cvCreateImage( cvSize(dft_N1, dft_M1), IPL_DEPTH_64F, 1);
image_ImB = cvCreateImage( cvSize(dft_N1, dft_M1), IPL_DEPTH_64F, 1);
// Split Real and imaginary parts
cvSplit( dft_B, image_ReB, image_ImB, 0, 0 );
cvPow( image_ReB, image_ReB, 2.0);
cvPow( image_ImB, image_ImB, 2.0);
cvAdd(image_ReB, image_ImB, image_ReB,0);
//Divide Numerator/A^2 + B^2
cvDiv(image_ReC, image_ReB, image_ReC, 1.0);
cvDiv(image_ImC, image_ReB, image_ImC, 1.0);
// Merge Real and complex parts
cvMerge(image_ReC, image_ImC, NULL, NULL, complex_ImC);
// Perform Inverse
cvShowInvDFT(im, complex_ImC, dft_M1, dft_N1, fp, "deblur");
cvWaitKey(-1);
return 0;
}
CvMat* cvShowDFT(IplImage* im, int dft_M, int dft_N, char* src)
{
IplImage* realInput;
IplImage* imaginaryInput;
IplImage* complexInput;
CvMat* dft_A, tmp;
IplImage* image_Re;
IplImage* image_Im;
char str[80];
double m, M;
realInput = cvCreateImage( cvGetSize(im), IPL_DEPTH_64F, 1);
imaginaryInput = cvCreateImage( cvGetSize(im), IPL_DEPTH_64F, 1);
complexInput = cvCreateImage( cvGetSize(im), IPL_DEPTH_64F, 2);
cvScale(im, realInput, 1.0, 0.0);
cvZero(imaginaryInput);
cvMerge(realInput, imaginaryInput, NULL, NULL, complexInput);
dft_A = cvCreateMat( dft_M, dft_N, CV_64FC2 );
image_Re = cvCreateImage( cvSize(dft_N, dft_M), IPL_DEPTH_64F, 1);
image_Im = cvCreateImage( cvSize(dft_N, dft_M), IPL_DEPTH_64F, 1);
// copy A to dft_A and pad dft_A with zeros
cvGetSubRect( dft_A, &tmp, cvRect(0,0, im->width, im->height));
cvCopy( complexInput, &tmp, NULL );
if( dft_A->cols > im->width )
{
cvGetSubRect( dft_A, &tmp, cvRect(im->width, 0, dft_A->cols - im->width, im->height));
cvZero( &tmp );
}
// No need to pad the bottom part of dft_A with zeros because of the
// nonzero_rows parameter in the cvDFT() call below
cvDFT( dft_A, dft_A, CV_DXT_FORWARD, complexInput->height );
strcpy(str, "DFT -");
strcat(str,src);
cvNamedWindow(str, 0);
// Split Fourier in real and imaginary parts
cvSplit( dft_A, image_Re, image_Im, 0, 0 );
// Compute the magnitude of the spectrum Mag = sqrt(Re^2 + Im^2)
cvPow( image_Re, image_Re, 2.0);
cvPow( image_Im, image_Im, 2.0);
cvAdd( image_Re, image_Im, image_Re, NULL);
cvPow( image_Re, image_Re, 0.5 );
// Compute log(1 + Mag)
cvAddS( image_Re, cvScalarAll(1.0), image_Re, NULL ); // 1 + Mag
cvLog( image_Re, image_Re ); // log(1 + Mag)
cvMinMaxLoc(image_Re, &m, &M, NULL, NULL, NULL);
cvScale(image_Re, image_Re, 1.0/(M-m), 1.0*(-m)/(M-m));
cvShowImage(str, image_Re);
return(dft_A);
}
void cvShowInvDFT(IplImage* im, CvMat* dft_A, int dft_M, int dft_N, FILE* fp, char* src)
{
IplImage* realInput;
IplImage* imaginaryInput;
IplImage* complexInput;
IplImage * image_Re;
IplImage * image_Im;
double m, M;
char str[80];
realInput = cvCreateImage( cvGetSize(im), IPL_DEPTH_64F, 1);
imaginaryInput = cvCreateImage( cvGetSize(im), IPL_DEPTH_64F, 1);
complexInput = cvCreateImage( cvGetSize(im), IPL_DEPTH_64F, 2);
image_Re = cvCreateImage( cvSize(dft_N, dft_M), IPL_DEPTH_64F, 1);
image_Im = cvCreateImage( cvSize(dft_N, dft_M), IPL_DEPTH_64F, 1);
//cvDFT( dft_A, dft_A, CV_DXT_INV_SCALE, complexInput->height );
cvDFT( dft_A, dft_A, CV_DXT_INV_SCALE, dft_M);
strcpy(str, "DFT INVERSE - ");
strcat(str,src);
cvNamedWindow(str, 0);
// Split Fourier in real and imaginary parts
cvSplit( dft_A, image_Re, image_Im, 0, 0 );
// Compute the magnitude of the spectrum Mag = sqrt(Re^2 + Im^2)
cvPow( image_Re, image_Re, 2.0);
cvPow( image_Im, image_Im, 2.0);
cvAdd( image_Re, image_Im, image_Re, NULL);
cvPow( image_Re, image_Re, 0.5 );
// Compute log(1 + Mag)
cvAddS( image_Re, cvScalarAll(1.0), image_Re, NULL ); // 1 + Mag
cvLog( image_Re, image_Re ); // log(1 + Mag)
cvMinMaxLoc(image_Re, &m, &M, NULL, NULL, NULL);
cvScale(image_Re, image_Re, 1.0/(M-m), 1.0*(-m)/(M-m));
//cvCvtColor(image_Re, image_Re, CV_GRAY2RGBA);
cvShowImage(str, image_Re);
}
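To build this with the legacy OpenCV C API, something like the following should work; this compile line is an assumption, as the exact flags depend on your OpenCV installation and on whether a pkg-config entry named opencv is present:

gcc deconv.c -o deconv `pkg-config --cflags --libs opencv`

Place kutty-1.jpg in the working directory (or change the filename in the code) before running.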
Check out my book 'Deep Learning from first principles Second Edition - In vectorized Python, R and Octave'. My book is available on Amazon in paperback ($18.99) and Kindle ($9.99/Rs 449) versions.
You may also like my companion book 'Practical Machine Learning with R and Python: Second Edition - Machine Learning in stereo', available on Amazon in paperback ($12.99) and Kindle ($9.99/Rs 449) versions.
See also
– My book ‘Practical Machine Learning with R and Python’ on Amazon
– De-blurring revisited with Wiener filter using OpenCV
– Dabbling with Wiener filter using OpenCV
– Deblurring with OpenCV: Wiener filter reloaded
– Re-working the Lucy-Richardson Algorithm in OpenCV
You may also like
1. What’s up Watson? Using IBM Watson’s QAAPI with Bluemix, NodeExpress – Part 1
2. Bend it like Bluemix, MongoDB with autoscaling – Part 1
3. Informed choices through Machine Learning : Analyzing Kohli, Tendulkar and Dravid
4. A crime map of India in R: Crimes against women
Hi,
I didn't take a look at your code, but I think you can devise a better test to make sure everything is working the way you want. Deblurring is inherently an ill-posed problem, so you have to make certain assumptions about the blur kernel to reduce the number of parameters to solve for as much as possible.
Try this. Load an image in OpenCV. Use a very simple averaging blur kernel. Let's say you pick a 3x3 averaging kernel: it will look like 1/9 * ones(3,3) (a matrix of 1's divided by 9). You might not see a lot of blur, but you can increase the size of the kernel to 5x5 or 7x7. Be sure to divide by the number of elements in the window.
When you estimate the blur kernel, make sure you impose a symmetry constraint and the constraint that the values in the blur kernel sum to 1. Symmetry means that flipping the kernel horizontally and vertically gives you the same blur kernel; this is a common assumption for rotationally invariant blur. The sum-to-1 constraint conserves the energy in the image.
Mustafa, thanks for your response. Will give your suggestion a try. I just wanted to give de-convolution a try, as it is a common thread in almost all research papers and articles.
Mustafa, now my code works. I have used a simple blur kernel and also removed a bug: I was performing an inverse DFT of the original image and kernel, which overwrote the DFT arrays. I have commented these out and I get a better result.
I wonder if you’ve seen this work? http://www.cse.cuhk.edu.hk/~leojia/projects/motion_deblurring/index.html
Supposedly, this algorithm is to be released in Adobe Photoshop. They have software you can try too.
Yes, I did see this. A few other papers on deblurring are
http://cs.nyu.edu/~fergus/research/deblur.html
Cho_11_Blur_Kernel_Estimation.pdf
However I wanted to give it a try on my own. Am sure I will figure this out soon 😉
Hi, I am still a newbie in image deblurring and I wish to learn more about it.
I have looked through a few papers on blind deconvolution and was wondering how the blur kernel is constructed from the image itself.
Would you mind sharing some thoughts on this?
Leo, estimating a blur kernel from the image is quite involved and uses some form of linear regression technique. I am still working on that. In the meanwhile, I would suggest that you look at my posts on the Wiener filter; it is also supposed to be quite effective at removing blur.
Sir,
I am working on a project to remove motion blur from SAR images. I ran the code above but, being new to OpenCV, I did not get a completely deblurred image. Can you help me with your inputs?
Thank you.
Anu,
The code works, but there is still a lot more to be done with the algorithm. If you read my other posts you will understand that you have to progressively improve the blur kernel with which you de-convolve. I hope to give it another shot when I get time and will post the results to my blog. Do keep reading.
Hi,
I am working on a project to remove blur from videos, and I want to use OpenCV to do so.
Which algorithm, in your view, is good for detecting blur in videos?
My first goal is to determine the blur.
Siddhesh,
I would suggest that you give Lucy-Richardson a try. I did not have great success with it; I will re-work the algorithm later. – Ganesh
Hi, I want to perform this deblur operation in EmguCV. Is this logic applicable in EmguCV? If yes, how can I implement it?