Inverse FFT from scipy returns almost blank image - scipy

I've started using fftpack from scipy, and I have a weird problem. The following basic script returns a blank image:
import numpy as np
import matplotlib.pyplot as plt
from scipy import fftpack

image = plt.imread("moonlanding.png")
plt.imshow(image)
img_fft = np.abs(fftpack.fft2(image))
plt.show()
final_image = fftpack.ifft2(img_fft).real
plt.imshow(final_image)
plt.show()
If the result matrix is clipped between 0 and 1,
final_image = fftpack.ifft2(img_fft).real.clip(0, 1)
the final image looks like this:
Any idea what is happening here? I am expecting it to return the same image.
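For reference, the likely culprit is that np.abs discards the phase of the spectrum, and the phase carries most of the spatial structure. A minimal sketch of a round-trip that does reproduce the image, keeping the complex output of fft2 (assuming a grayscale image like the moonlanding.png used here):
import numpy as np
import matplotlib.pyplot as plt
from scipy import fftpack

image = plt.imread("moonlanding.png")

# Keep the complex spectrum; ifft2(fft2(x)) recovers x up to floating-point noise.
img_fft = fftpack.fft2(image)
final_image = fftpack.ifft2(img_fft).real

plt.imshow(final_image, cmap='gray')
plt.show()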

Related

Downsizing and rescaling an image using for loops

I'm relatively new to Matlab, and trying to understand why a piece of code isn't working.
I have a 512x512 image that needs to be downsized to 256x256 and then resized back up to 512x512.
As I understand the mathematics, I would need to average the pixels in the image to get the 256x256 version, and then sum them back to get the 512x512. Is that correct? Below is the code I'm looking at; if someone can explain to me what's wrong (it's giving a blank white image), I would appreciate it:
w = double(imread('walkbridge.tif'));  % read the image
w = w(:,:,1);
for x = 1:256
    for y = 1:256
        s256(x,y) = (w(2*x,2*y) + w(2*x,(2*y)-1) + w((2*x)-1,2*y) + w((2*x)-1,(2*y)-1))/4;
    end
end
for x = 1:256
    for y = 1:256
        for x1 = 0:1
            for y1 = 0:1
                R1((2*x)-x1,(2*y)-y1) = s256(x,y);
            end
        end
    end
end
imshow(R1)
I got your code to work, so you might have some bad values in your image data. Namely, if your image has values in the range 0..127 or something similar, it will most likely show as all white: by default, imshow expects floating-point color channels to be in the range 0..1.
You might also want to simplify your code a bit by indexing the original array instead of accessing individual elements. That way the code is also easier to change:
half_size = 256;
w = magic(2*half_size);
w = w / max(w(:));
figure()
imshow(w)
s = zeros(half_size);
for x = 1:half_size
    for y = 1:half_size
        ix = w(2*x-1:2*x, 2*y-1:2*y);
        s(x,y) = sum(ix(:))/4;
    end
end
for x = 1:half_size
    for y = 1:half_size
        R1(2*x-1:2*x, 2*y-1:2*y) = s(x,y);
    end
end
figure()
imshow(R1)
I imagine the calculations could even be vectorised in some way instead of looping, but I didn't bother.
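For illustration, here is the same idea without loops, sketched in NumPy rather than MATLAB: the 2x2 block average falls out of a reshape, and the upsample is a Kronecker product (MATLAB's kron offers the same trick):
import numpy as np

w = np.random.rand(512, 512)  # stand-in for the normalized image

# Downsample: view as 256x2x256x2 blocks and average each 2x2 block.
s = w.reshape(256, 2, 256, 2).mean(axis=(1, 3))

# Upsample: replicate every pixel into a 2x2 block.
R1 = np.kron(s, np.ones((2, 2)))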

Deploy and use LeNet on Matlab using MatCaffe

I am having an issue with MatCaffe. I trained LeNet on my own dataset (two-class classification, 0 or 1) in Python successfully and am now trying to deploy it on Matlab. The net architecture is from caffe/examples/mnist/lenet.prototxt. All the input images I feed into the net always return 1 (I tried both positive and negative images from training).
Below is my code:
deployNet = 'lenet_deploy.prototxt';
caffeModel = 'weight.caffemodel';
caffe.set_mode_cpu();
net = caffe.Net(deployNet, caffeModel, 'test');
net.blobs('data').reshape([28 28 1 1]);
net.reshape();
patch_data = imread('cropped.jpg'); % already in greyscale
patch_data = imresize(patch_data, [28 28],'bilinear');
imshow(patch_data)
input_data = {patch_data};
scores = net.forward(input_data);
highest = max(scores{1});
disp(i);
disp(highest);
The highest score is always 1, even for negative images. I tried deploying it in Python and it works great, so I am guessing the issue is with the way I pre-process the input.
Found out the issue. I forgot to multiply the image by the training scale and to transpose the width and height: since Matlab is 1-indexed and column-major, the usual 4 blob dimensions in Matlab are [width, height, channels, num], and width is the fastest dimension. So just add in 2 more lines of code:
deployNet = 'lenet_deploy.prototxt';
caffeModel = 'weight.caffemodel';
caffe.set_mode_cpu();
net = caffe.Net(deployNet, caffeModel, 'test');
net.blobs('data').reshape([28 28 1 1]);
net.reshape();
patch = imread('cropped.jpg'); % already in greyscale
patch = single(patch) * 0.00390625; % multiply by the training scale (1/256)
patch = permute(patch, [2,1,3]);    % swap width and height for column-major order
input_data = {patch};
scores = net.forward(input_data);
highest = scores{1};
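For comparison, a rough pycaffe version of the same pipeline (file names carried over from the question; the 1/256 scale is an assumption matching the training setup). Note that pycaffe blobs are [num, channels, height, width] and row-major, so no width/height permute is needed there:
import numpy as np
import caffe
from PIL import Image

caffe.set_mode_cpu()
net = caffe.Net('lenet_deploy.prototxt', 'weight.caffemodel', caffe.TEST)

# Load, resize and scale the patch the same way as during training.
patch = Image.open('cropped.jpg').convert('L').resize((28, 28))
patch = np.asarray(patch, dtype=np.float32) * 0.00390625  # 1/256 scale

net.blobs['data'].reshape(1, 1, 28, 28)
net.blobs['data'].data[...] = patch
scores = net.forward()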

Using a clear portion of a picture to recreate a PSF

I'm trying to unblur the blurred segments of the following picture.
The original PSF was not given, so I proceeded to analyze the blurred part and see whether there was a word I could roughly make out. I found that I could make out "of" in the blurred section. I cropped out both the blurred "of" and its counterpart in the clear section, as seen below.
From lectures on the FFT, I understood that you divide the blurred image (frequency domain) by a particular blurring function (frequency domain) to recreate the original image.
I thought that if I could do Unblurred (frequency domain) \ Blurred (frequency domain), the original PSF could be retrieved. Please advise on how I could do this.
Below is my code:
img = im2double(imread('C:\Users\adhil\Desktop\matlab pics\image1.JPG'));
Blurred = imcrop(img,[205 541 13 12]);
Unblurred = imcrop(img,[39 140 13 12]);
UB = fftshift(Unblurred);
UB = fft2(UB);
UB = ifftshift(UB);
F_1a = zeros(size(B));
for idx = 1 : size(Blurred, 3)
    B = fftshift(Blurred(:,:,idx));
    B = fft2(B);
    B = ifftshift(B);
    UBa = UB(:,:,idx);
    tmp = UBa ./ B;
    tmp = ifftshift(tmp);
    tmp = ifft2(tmp);
    tmp = fftshift(tmp);
    [J, P] = deconvblind(Blurred, tmp);
end
subplot(1,3,1);imshow(Blurred);title('Blurred');
subplot(1,3,2);imshow(Unblurred);title('Original Unblurred');
subplot(1,3,3);imshow(J);title('Attempt at unblurring');
This code, however, does not work, and I'm getting the following error:
Error using deconvblind
Expected input number 2, INITPSF, to be real.
Error in deconvblind>parse_inputs (line 258)
validateattributes(P{1},{'uint8' 'uint16' 'double' 'int16' 'single'},...
Error in deconvblind (line 122)
[J,P,NUMIT,DAMPAR,READOUT,WEIGHT,sizeI,classI,sizePSF,FunFcn,FunArg] = ...
Error in test2 (line 20)
[J, P] = deconvblind(Blurred,tmp);
Is this a good way to recreate the original PSF?
I'm not an expert in this area, but I have played around with deconvolution a little bit and have written a program to compute the point spread function given a clear image and a blurred one. Once I had obtained the PSF with this program, I verified it was correct by using it to deconvolve the blurry image, and it worked fine. The code is below. I know this post is extremely old, but hopefully it will still be of use to someone.
import numpy as np
import matplotlib.pyplot as plt
import cv2
def deconvolve(normal, blur):
    blur_fft = np.fft.rfft2(blur)
    normal_fft = np.fft.rfft2(normal)
    return np.fft.irfft2(blur_fft / normal_fft)
img = cv2.imread('Blurred_Image.jpg')
blur = img[:,:,0]
img2 = cv2.imread('Original_Image.jpg')
normal = img2[:,:,0]
psf_real = deconvolve(normal, blur)
fig = plt.figure(figsize=(10,4))
ax1 = plt.subplot(131)
ax1.imshow(blur)
ax2 = plt.subplot(132)
ax2.imshow(normal)
ax3 = plt.subplot(133)
ax3.imshow(psf_real)
plt.gray()
plt.show()
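One caveat with the plain spectral division: it blows up at frequencies where normal_fft is close to zero. A common safeguard is to regularize the denominator; a sketch, with eps as a tunable assumption rather than a recommended value:
import numpy as np

def deconvolve_regularized(normal, blur, eps=1e-6):
    # eps guards against division by near-zero frequency components;
    # tune it for your images.
    blur_fft = np.fft.rfft2(blur)
    normal_fft = np.fft.rfft2(normal)
    return np.fft.irfft2(blur_fft / (normal_fft + eps))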

Matlab - After dithering, RGB image can't divide into R-G-B

I'm new to Matlab.
I have an image with dimensions 512x512x3 uint8, and I use rgb2ind with the 'dither' option like this:
[Myimagedither, Myimagedithermap] = rgb2ind(img, 16, 'dither');
imwrite(Myimagedither,Myimagedithermap,'step_4_RGB_D_U_16.tiff');
After that, I use imread to read the image like this:
new_img = imread('step_4_RGB_D_U_16.tiff');
but after that, the dimensions change to 512x512 uint8 only. I need to split that image into R, G and B. Can anyone help me solve this?
You need to read the map separately, like this:
[new_img new_img_map] = imread('step_4_RGB_D_U_16.tiff');
Then convert the image to RGB using ind2rgb() and split the color channels into three separate images, like this:
new_img_RGB = ind2rgb(new_img,new_img_map);
g1_16 = new_img_RGB(:,:,1);
g2_16 = new_img_RGB(:,:,2);
g3_16 = new_img_RGB(:,:,3);
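If it helps to see what ind2rgb is doing, it is essentially a per-pixel palette lookup. A small NumPy sketch of the same operation (toy data, just for intuition):
import numpy as np

indexed = np.array([[0, 1],
                    [2, 0]])          # HxW array of palette indices
palette = np.array([[1.0, 0.0, 0.0],  # red
                    [0.0, 1.0, 0.0],  # green
                    [0.0, 0.0, 1.0]]) # blue

rgb = palette[indexed]  # fancy indexing gives an HxWx3 RGB image
r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]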

How to convert a grayscale image to binary in MATLAB

I have to write a program that converts intensity images into black-and-white ones. I just figured I could take a value from the original matrix, I, and if it's above the mean value, make the corresponding cell in another array equal to 1, otherwise equal to zero:
for x = 1:X
    for y = 1:Y
        if I(x,y) > mean(I(:))
            bw(x,y) = 1;
        elseif I(x,y) < mean(I(:))
            bw(x,y) = 0;
        end
    end
end
image(bw)
Unfortunately, the image I get is all black. Why?
I is uint8, by the way, from the 2-D Lena.tiff image.
Use this:
bw = im2bw(I, graythresh(I));
Here is the documentation for im2bw.
Note that imshow(I,[]) does not scale the display between 0 and 255, but between min(I(:)) and max(I(:)).
EDIT
You can replace graythresh(I) with any other level. For instance, you can still use the mean of your image, normalized between 0 and 1:
maxI = double(max(I(:)));
minI = double(min(I(:)));
bw = im2bw(I, (mean(I(:)) - minI)/(maxI - minI));
Use imagesc(bw) (instead of image(bw)). That automatically scales the image range.
Also, note that you can replace all your code by this vectorized, more efficient version:
bw = double(I>mean(I(:)));
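For reference, the same vectorized thresholding in NumPy (the random array is just a stand-in for a 2-D intensity image):
import numpy as np

I = np.random.randint(0, 256, size=(512, 512))  # stand-in for the image
bw = (I > I.mean()).astype(float)  # boolean mask cast to 1.0/0.0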
One problem is that you recompute mean(I(:)) on every loop iteration.
What you will want to do is compute it once, before the loop:
myMean = mean(I(:));
and then replace all following occurrences of mean(I(:)) with myMean.
Recomputing the mean as you currently do will definitely slow things down, but it is not the reason the image comes out all black.