Replicate Photoshop sRGB to LAB Conversion - matlab

The task I want to achieve is to replicate Photoshop's RGB to LAB conversion.
For simplicity, I will describe what I did to extract only the L Channel.
Extracting Photoshop's L Channel
Here is an RGB image which includes all RGB colors (please click and download):
In order to extract Photoshop's L channel, what I did is the following:
Loaded the image into Photoshop.
Set Mode to LAB.
Selected the L Channel in the Channel Panel.
Set Mode to Grayscale.
Set mode to RGB.
Saved as PNG.
This is the L channel of Photoshop (this is exactly what is seen on screen when the L channel is selected in LAB mode):
sRGB to LAB Conversion
My main reference is Bruce Lindbloom's great site.
It is also known that Photoshop uses the D50 white point in its LAB mode (see also Wikipedia's LAB color space page).
Assuming the RGB image is in sRGB format the conversion is given by:
sRGB -> XYZ (White Point D65) -> XYZ (White Point D50) -> LAB
Assuming the data is in float format within the [0, 1] range, the stages are given by:
Transform sRGB into XYZ.
The conversion Matrix is given by RGB -> XYZ Matrix (See sRGB D65).
Converting from XYZ D65 to XYZ D50
The conversion is done using a chromatic adaptation matrix. Since the previous step and this one are both matrix multiplications, they can be combined into a single matrix which goes from sRGB -> XYZ D50 (see the bottom of the RGB to XYZ Matrix page). Note that Photoshop uses the Bradford adaptation method.
Convert from XYZ D50 to LAB
The conversion is done using the XYZ to LAB steps (a compact sketch of the full pipeline is given right after these steps).
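For reference, here is a minimal MATLAB sketch of the full pipeline above. It assumes the Bradford adapted sRGB -> XYZ (D50) matrix and the D50 reference white from Bruce Lindbloom's tables (double check the constants against the site before relying on them):

function [ mLab ] = ConvertSrgbToLab( mRgb )
% mRgb: H x W x 3, double, in the [0, 1] range, assumed to be sRGB encoded.
mRgbToXyzD50 = [0.4360747, 0.3851535, 0.1430804; ...
                0.2225045, 0.7168786, 0.0606169; ...
                0.0139322, 0.0971045, 0.7141733]; %<! Bradford adapted, per Lindbloom

% Remove the sRGB companding (linearize)
vLinIdx        = mRgb < 0.04045;
mRgb(vLinIdx)  = mRgb(vLinIdx) ./ 12.92;
mRgb(~vLinIdx) = ((mRgb(~vLinIdx) + 0.055) ./ 1.055) .^ 2.4;

% Linear RGB -> XYZ (D50), per pixel
mXyz = reshape(reshape(mRgb, [], 3) * mRgbToXyzD50.', size(mRgb));

% Normalize by the D50 white point
mXyz(:, :, 1) = mXyz(:, :, 1) / 0.96422;
mXyz(:, :, 3) = mXyz(:, :, 3) / 0.82521; %<! Y is already relative to 1

% XYZ -> LAB
mF            = mXyz .^ (1 / 3);
vSmallIdx     = mXyz <= 0.008856;
mF(vSmallIdx) = (903.3 * mXyz(vSmallIdx) + 16) / 116;

mLab = cat(3, 116 * mF(:, :, 2) - 16, ...
              500 * (mF(:, :, 1) - mF(:, :, 2)), ...
              200 * (mF(:, :, 2) - mF(:, :, 3)));

end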
MATLAB Code
Since, for a start, I'm only after the L channel, things are a bit simpler. The images are loaded into MATLAB and converted into the float [0, 1] range.
This is the code:
%% Setting Environment Parameters
INPUT_IMAGE_RGB = 'RgbColors.png';
INPUT_IMAGE_L_PHOTOSHOP = 'RgbColorsL.png';
%% Loading Data
mImageRgb = im2double(imread(INPUT_IMAGE_RGB));
mImageLPhotoshop = im2double(imread(INPUT_IMAGE_L_PHOTOSHOP));
mImageLPhotoshop = mImageLPhotoshop(:, :, 1); %<! All channels are identical
%% Convert to L Channel
mImageLMatlab = ConvertRgbToL(mImageRgb, 1);
%% Display Results
figure();
imshow(mImageLPhotoshop);
title('L Channel - Photoshop');
figure();
imshow(mImageLMatlab);
title('L Channel - MATLAB');
Where the function ConvertRgbToL() is given by:
function [ mLChannel ] = ConvertRgbToL( mRgbImage, sRgbMode )

OFF = 0;
ON  = 1;

RED_CHANNEL_IDX   = 1;
GREEN_CHANNEL_IDX = 2;
BLUE_CHANNEL_IDX  = 3;

RGB_TO_Y_MAT  = [0.2225045, 0.7168786, 0.0606169]; %<! D50
Y_CHANNEL_THR = 0.008856;

% sRGB Compensation
if(sRgbMode == ON)
    vLinIdx = mRgbImage < 0.04045;

    mRgbImage(vLinIdx)  = mRgbImage(vLinIdx) ./ 12.92;
    mRgbImage(~vLinIdx) = ((mRgbImage(~vLinIdx) + 0.055) ./ 1.055) .^ 2.4;
end

% RGB to XYZ (D50)
mY = (RGB_TO_Y_MAT(1) .* mRgbImage(:, :, RED_CHANNEL_IDX)) + (RGB_TO_Y_MAT(2) .* mRgbImage(:, :, GREEN_CHANNEL_IDX)) + (RGB_TO_Y_MAT(3) .* mRgbImage(:, :, BLUE_CHANNEL_IDX));

vYThrIdx = mY > Y_CHANNEL_THR;
mY3 = mY .^ (1 / 3);

mLChannel = ((vYThrIdx .* (116 * mY3 - 16.0)) + ((~vYThrIdx) .* (903.3 * mY))) ./ 100;

end
As one can see, the results are different.
Photoshop is much darker for most colors.
Does anyone know how to replicate Photoshop's LAB conversion?
Can anyone spot an issue in this code?
Thank you.

Latest answer (we know that it is wrong now, waiting for a proper answer)
Photoshop is very old and messy software. There's no clear documentation as to why this or that happens to the pixel values when you perform conversions from one mode to another.
Your problem happens because when you convert the selected L* channel to greyscale in Adobe Photoshop, there is a change in gamma. Natively, the conversion uses a gamma of 1.74 for single-channel to greyscale conversion. Don't ask me why; I would guess this is related to old laser printers (?).
Anyway, this is the best way I found to do it:
Open your file, turn it to LAB mode, select the L channel only
Then go to:
Edit > Convert to profile
You will select "custom gamma" and enter the value 2.0 (don't ask me why 2.0 works better, I have no idea what's in the mind of Adobe's software makers...)
This operation will turn your picture into a greyscale one with only one channel
Then you can convert it to RGB mode.
If you compare that result with your result, you will see differences of up to 4-point-something percent, all located in the darkest areas.
I suspect this is because the gamma curve application does not apply to LAB mode in the dark values (cf. as you know, all XYZ values below 0.008856 are treated linearly in LAB).
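For what it's worth, the gap can be quantified directly in MATLAB by reusing the variables from the question (just a sanity-check sketch, not part of the conversion):

% Both images are in the [0, 1] range.
mDiff = abs(mImageLMatlab - mImageLPhotoshop);
fprintf('Max difference:  %5.2f%%\n', 100 * max(mDiff(:)));
fprintf('Mean difference: %5.2f%%\n', 100 * mean(mDiff(:)));

figure();
imshow(mDiff, []);
title('Absolute Difference - MATLAB vs. Photoshop');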
CONCLUSION:
As far as I know, there is no properly implemented way in Adobe Photoshop to extract the L channel from LAB mode to greyscale mode!
Previous answer
This is the result I get with my own method:
It seems to be exactly the same result as the Adobe Photoshop one.
I am not sure what went wrong on your side, since the steps you describe are exactly the ones that I followed and that I would have advised you to follow. I don't have MATLAB, so I used Python:
import cv2, Syn
# your file
fn = "EASA2.png"
#reading the file
im = cv2.imread(fn,-1)
#openCV works in BGR, i'm switching to RGB
im = im[:,:,::-1]
#conversion to XYZ
XYZ = Syn.sRGB2XYZ(im)
#white points D65 and D50
WP_D65 = Syn.Yxy2XYZ((100,0.31271, 0.32902))
WP_D50 = Syn.Yxy2XYZ((100,0.34567, 0.35850))
#bradford
XYZ2 = Syn.bradford_adaptation(XYZ, WP_D65, WP_D50)
#conversion to L*a*b*
LAB = Syn.XYZ2Lab(XYZ2, WP_D50)
#picking the L channel only
L = LAB[:,:,0] /100. * 255.
#image output
cv2.imwrite("result.png", L)
The Syn library is my own stuff; here are the functions (sorry for the mess):
import numpy as np

def sRGB2XYZ(sRGB):
    sRGB = np.array(sRGB)
    aShape = np.array([1,1,1]).shape
    anotherShape = np.array([[1,1,1],[1,1,1]]).shape
    origShape = sRGB.shape
    if sRGB.shape == aShape:
        sRGB = np.reshape(sRGB, (1,1,3))
    elif len(sRGB.shape) == len(anotherShape):
        h,d = sRGB.shape
        sRGB = np.reshape(sRGB, (1,h,d))
    w,h,d = sRGB.shape
    sRGB = np.reshape(sRGB, (w*h,d)).astype("float") / 255.
    m1 = sRGB[:,0] > 0.04045
    m1b = sRGB[:,0] <= 0.04045
    m2 = sRGB[:,1] > 0.04045
    m2b = sRGB[:,1] <= 0.04045
    m3 = sRGB[:,2] > 0.04045
    m3b = sRGB[:,2] <= 0.04045
    sRGB[:,0][m1] = ((sRGB[:,0][m1] + 0.055 ) / 1.055 ) ** 2.4
    sRGB[:,0][m1b] = sRGB[:,0][m1b] / 12.92
    sRGB[:,1][m2] = ((sRGB[:,1][m2] + 0.055 ) / 1.055 ) ** 2.4
    sRGB[:,1][m2b] = sRGB[:,1][m2b] / 12.92
    sRGB[:,2][m3] = ((sRGB[:,2][m3] + 0.055 ) / 1.055 ) ** 2.4
    sRGB[:,2][m3b] = sRGB[:,2][m3b] / 12.92
    sRGB *= 100.
    X = sRGB[:,0] * 0.4124 + sRGB[:,1] * 0.3576 + sRGB[:,2] * 0.1805
    Y = sRGB[:,0] * 0.2126 + sRGB[:,1] * 0.7152 + sRGB[:,2] * 0.0722
    Z = sRGB[:,0] * 0.0193 + sRGB[:,1] * 0.1192 + sRGB[:,2] * 0.9505
    XYZ = np.zeros_like(sRGB)
    XYZ[:,0] = X
    XYZ[:,1] = Y
    XYZ[:,2] = Z
    XYZ = np.reshape(XYZ, origShape)
    return XYZ
def Yxy2XYZ(Yxy):
    Yxy = np.array(Yxy)
    aShape = np.array([1,1,1]).shape
    anotherShape = np.array([[1,1,1],[1,1,1]]).shape
    origShape = Yxy.shape
    if Yxy.shape == aShape:
        Yxy = np.reshape(Yxy, (1,1,3))
    elif len(Yxy.shape) == len(anotherShape):
        h,d = Yxy.shape
        Yxy = np.reshape(Yxy, (1,h,d))
    w,h,d = Yxy.shape
    Yxy = np.reshape(Yxy, (w*h,d)).astype("float")
    XYZ = np.zeros_like(Yxy)
    XYZ[:,0] = Yxy[:,1] * ( Yxy[:,0] / Yxy[:,2] )
    XYZ[:,1] = Yxy[:,0]
    XYZ[:,2] = ( 1 - Yxy[:,1] - Yxy[:,2] ) * ( Yxy[:,0] / Yxy[:,2] )
    return np.reshape(XYZ, origShape)
def bradford_adaptation(XYZ, Neutral_source, Neutral_destination):
    """should be checked if it works properly, but it seems OK"""
    XYZ = np.array(XYZ)
    ashape = np.array([1,1,1]).shape
    siVal = False
    if XYZ.shape == ashape:
        XYZ = np.reshape(XYZ, (1,1,3))
        siVal = True
    bradford = np.array(((0.8951000, 0.2664000, -0.1614000),
                         (-0.750200, 1.7135000, 0.0367000),
                         (0.0389000, -0.068500, 1.0296000)))
    inv_bradford = np.array(((0.9869929, -0.1470543, 0.1599627),
                             (0.4323053, 0.5183603, 0.0492912),
                             (-.0085287, 0.0400428, 0.9684867)))
    Xs,Ys,Zs = Neutral_source
    s = np.array(((Xs),
                  (Ys),
                  (Zs)))
    Xd,Yd,Zd = Neutral_destination
    d = np.array(((Xd),
                  (Yd),
                  (Zd)))
    source = np.dot(bradford, s)
    Us,Vs,Ws = source[0], source[1], source[2]
    destination = np.dot(bradford, d)
    Ud,Vd,Wd = destination[0], destination[1], destination[2]
    transformation = np.array(((Ud/Us, 0, 0),
                               (0, Vd/Vs, 0),
                               (0, 0, Wd/Ws)))
    M = np.mat(inv_bradford)*np.mat(transformation)*np.mat(bradford)
    w,h,d = XYZ.shape
    result = np.dot(M,np.rot90(np.reshape(XYZ, (w*h,d)),-1))
    result = np.rot90(result, 1)
    result = np.reshape(np.array(result), (w,h,d))
    if siVal == False:
        return result
    else:
        return result[0,0]
def XYZ2Lab(XYZ, neutral):
    """transforms XYZ to CIE Lab
    Neutral should be normalized to Y = 100"""
    XYZ = np.array(XYZ)
    aShape = np.array([1,1,1]).shape
    anotherShape = np.array([[1,1,1],[1,1,1]]).shape
    origShape = XYZ.shape
    if XYZ.shape == aShape:
        XYZ = np.reshape(XYZ, (1,1,3))
    elif len(XYZ.shape) == len(anotherShape):
        h,d = XYZ.shape
        XYZ = np.reshape(XYZ, (1,h,d))
    N_x, N_y, N_z = neutral
    w,h,d = XYZ.shape
    XYZ = np.reshape(XYZ, (w*h,d)).astype("float")
    XYZ[:,0] = XYZ[:,0]/N_x
    XYZ[:,1] = XYZ[:,1]/N_y
    XYZ[:,2] = XYZ[:,2]/N_z
    m1 = XYZ[:,0] > 0.008856
    m1b = XYZ[:,0] <= 0.008856
    m2 = XYZ[:,1] > 0.008856
    m2b = XYZ[:,1] <= 0.008856
    m3 = XYZ[:,2] > 0.008856
    m3b = XYZ[:,2] <= 0.008856
    XYZ[:,0][m1] = XYZ[:,0][XYZ[:,0] > 0.008856] ** (1/3.0)
    XYZ[:,0][m1b] = ( 7.787 * XYZ[:,0][m1b] ) + ( 16 / 116.0 )
    XYZ[:,1][m2] = XYZ[:,1][XYZ[:,1] > 0.008856] ** (1/3.0)
    XYZ[:,1][m2b] = ( 7.787 * XYZ[:,1][m2b] ) + ( 16 / 116.0 )
    XYZ[:,2][m3] = XYZ[:,2][XYZ[:,2] > 0.008856] ** (1/3.0)
    XYZ[:,2][m3b] = ( 7.787 * XYZ[:,2][m3b] ) + ( 16 / 116.0 )
    Lab = np.zeros_like(XYZ)
    Lab[:,0] = (116. * XYZ[:,1] ) - 16.
    Lab[:,1] = 500. * ( XYZ[:,0] - XYZ[:,1] )
    Lab[:,2] = 200. * ( XYZ[:,1] - XYZ[:,2] )
    return np.reshape(Lab, origShape)

All conversions between colour spaces in Photoshop go through a CMM, which was sufficiently fast on circa-2000 hardware but is not quite accurate. You can get a lot of 4-bit errors and some 7-bit errors with the Adobe CMM if you check a "round robin" conversion - RGB -> Lab -> RGB. That may cause posterisation. I always base my conversions on formulae, not on CMMs. However, the average deltaE of the error with the Adobe CMM and the Argyll CMM is quite acceptable.
Lab conversions are quite similar to RGB ones, except that the non-linearity (gamma) is applied at the first step; something like the following (a short sketch is given after these steps):
normalize XYZ to white point
bring the result to gamma 3, i.e. raise it to the power 1/3 (keeping the shadow portion linear; this depends on the implementation)
multiply the result by [0 116 0 -16; 500 -500 0 0; 0 200 -200 0]'
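A minimal MATLAB sketch of that matrix formulation, as I read it (the -16 offset is handled by appending a constant 1 as a fourth component, and the shadow handling below assumes the usual 0.008856 / 903.3 linear segment):

% vXyz: 1 x 3 vector, already normalized to the white point (step 1).
vF            = vXyz .^ (1 / 3);        %<! "gamma 3" (step 2)
vSmallIdx     = vXyz <= 0.008856;       %<! keep the shadow portion linear
vF(vSmallIdx) = (903.3 * vXyz(vSmallIdx) + 16) / 116;

mM   = [0, 116, 0, -16; 500, -500, 0, 0; 0, 200, -200, 0];
vLab = [vF, 1] * mM.';                  %<! step 3: [L, a, b]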

Related

Convert Fisheye Video into regular Video

I have a video stream coming from a 180 degree fisheye camera. I want to do some image-processing to convert the fisheye view into a normal view.
After some research and lots of read articles I found this paper.
They describe an algorithm (and some formulas) to solve this problem.
I tried to implement this method in MATLAB. Unfortunately it doesn't work, and I failed to make it work. The "corrected" image looks exactly like the original photograph, with no removal of distortion. Additionally, I only get the top-left part of the image rather than the complete image; changing the value of 'K' to 1.9 gives me the whole image, but it is still exactly the same image.
Input image:
Result:
When the value of K is 1.15 as mentioned in the article
When the value of K is 1.9
Here is my code:
image = imread('image2.png');
[Cx, Cy, channel] = size(image);
k = 1.5;
f = (Cx * Cy)/3;
opw = fix(f * tan(asin(sin(atan((Cx/2)/f)) * k)));
oph = fix(f * tan(asin(sin(atan((Cy/2)/f)) * k)));
image_new = zeros(opw, oph, channel);
for i = 1:opw
    for j = 1:oph
        [theta, rho] = cart2pol(i, j);
        R = f * tan(asin(sin(atan(rho/f)) * k));
        r = f * tan(asin(sin(atan(R/f))/k));
        X = ceil(r * cos(theta));
        Y = ceil(r * sin(theta));
        for k = 1:3
            image_new(i,j,k) = image(X,Y,k);
        end
    end
end
image_new = uint8(image_new);
warning('off', 'Images:initSize:adjustingMag');
imshow(image_new);
This is what solved my problem (a MATLAB sketch of this pseudocode is given after the listing).
input:
strength as floating point >= 0. 0 = no change, high numbers equal stronger correction.
zoom as floating point >= 1. (1 = no change in zoom)
algorithm:
set halfWidth = imageWidth / 2
set halfHeight = imageHeight / 2
if strength = 0 then strength = 0.00001
set correctionRadius = squareroot(imageWidth ^ 2 + imageHeight ^ 2) / strength
for each pixel (x,y) in destinationImage
set newX = x - halfWidth
set newY = y - halfHeight
set distance = squareroot(newX ^ 2 + newY ^ 2)
set r = distance / correctionRadius
if r = 0 then
set theta = 1
else
set theta = arctangent(r) / r
set sourceX = halfWidth + theta * newX * zoom
set sourceY = halfHeight + theta * newY * zoom
set color of pixel (x, y) to color of source image pixel at (sourceX, sourceY)
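For completeness, here is a minimal MATLAB sketch of that pseudocode (my own translation: the function and variable names are mine, it uses 1-based indexing and plain nearest-neighbour sampling, so treat it as a starting point rather than a reference implementation):

function [ mOut ] = SimpleLensCorrection( mIn, strength, zoom )
[numRows, numCols, numChannels] = size(mIn);
halfWidth  = numCols / 2;
halfHeight = numRows / 2;
if (strength == 0)
    strength = 0.00001;
end
correctionRadius = sqrt(numCols ^ 2 + numRows ^ 2) / strength;

mOut = zeros(numRows, numCols, numChannels, 'like', mIn);
for y = 1:numRows
    for x = 1:numCols
        newX = x - halfWidth;
        newY = y - halfHeight;
        distance = sqrt(newX ^ 2 + newY ^ 2);
        r = distance / correctionRadius;
        if (r == 0)
            theta = 1;
        else
            theta = atan(r) / r;
        end
        srcX = round(halfWidth  + theta * newX * zoom);
        srcY = round(halfHeight + theta * newY * zoom);
        if (srcX >= 1 && srcX <= numCols && srcY >= 1 && srcY <= numRows)
            mOut(y, x, :) = mIn(srcY, srcX, :);
        end
    end
end
end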

How to reduce the time consumed by the for loop?

I am trying to implement a simple pixel-level center-surround image enhancement. The center-surround technique uses statistics between the center pixel of a window and its surrounding neighborhood to decide what enhancement needs to be done. In the code given below I compare the center pixel with the average of the surrounding information and, based on that, switch between two cases to enhance the contrast. The code that I have written is as follows:
im = normalize8(im,1); %to set the range of pixel from 0-255
s1 = floor(K1/2); %K1 is the size of the window for surround
M = 1000; %is a constant value
out1 = padarray(im,[s1,s1],'symmetric');
out1 = CE(out1,s1,M);
out = (out1(s1+1:end-s1,s1+1:end-s1));
out = normalize8(out,0); %to set the range of pixel from 0-1
function [out] = CE(out,s,M)
B = 255;
out1 = out;
for i = s+1 : size(out,1) - s
    for j = s+1 : size(out,2) - s
        temp = out(i-s:i+s,j-s:j+s);
        Yij = out1(i,j);
        Sij = (1/(2*s+1)^2)*sum(sum(temp));
        if (Yij>=Sij)
            Aij = A(Yij-Sij,M);
            out1(i,j) = ((B + Aij)*Yij)/(Aij+Yij);
        else
            Aij = A(Sij-Yij,M);
            out1(i,j) = (Aij*Yij)/(Aij+B-Yij);
        end
    end
end
out = out1;

function [Ax] = A(x,M)
if x == 0
    Ax = M;
else
    Ax = M/x;
end
The code does the following things:
1) Normalize the image to 0-255 range and pad it with additional elements to perform windowing operation.
2) Calls the function CE.
3) In the function CE obtain the windowed image(temp).
4) Find the average of the window (Sij).
5) Compare the center of the window (Yij) with the average value (Sij).
6) Based on the result of the comparison, perform one of the two enhancement operations (a small worked example follows this list).
7) Finally set the range back to 0-1.
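As a small worked example of the formula in steps 4-6 (the numbers are arbitrary), a centre pixel brighter than its surround gets pushed further up:

M = 1000; B = 255;
Yij = 150; Sij = 100;                   % centre pixel brighter than its surround
Aij = M / (Yij - Sij);                  % what A(Yij-Sij, M) returns: 20
out = ((B + Aij) * Yij) / (Aij + Yij)   % = (275*150)/170, roughly 242.6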
I have to run this for multiple window sizes (K1, K2, K3, etc.) and the images are of size 1728*2034. When the window size is selected as 100, the time consumed is very high.
Can I use vectorization at some stage to reduce the time for loops?
The profiler result (for window size 21) is as follows:
The profiler result (for window size 100) is as follows:
I have changed the code of my function and have written it without the sub-function. The code is as follows:
function [out] = CE(out,s,M)
B = 255;
Aij = zeros(1,2);
out1 = out;
n_factor = (1/(2*s+1)^2);
for i = s+1 : size(out,1) - s
    for j = s+1 : size(out,2) - s
        temp = out(i-s:i+s,j-s:j+s);
        Yij = out1(i,j);
        Sij = n_factor*sum(sum(temp));
        if Yij-Sij == 0
            Aij(1) = M;
            Aij(2) = M;
        else
            Aij(1) = M/(Yij-Sij);
            Aij(2) = M/(Sij-Yij);
        end
        if (Yij>=Sij)
            out1(i,j) = ((B + Aij(1))*Yij)/(Aij(1)+Yij);
        else
            out1(i,j) = (Aij(2)*Yij)/(Aij(2)+B-Yij);
        end
    end
end
out = out1;
There is a slight improvement in the speed from 93 sec to 88 sec. Suggestions for any other improvements to my code are welcomed.
I have tried to incorporate the suggestions given to replace the sliding window with convolution and then vectorize the rest of it. The code below is my implementation, and I'm not getting the expected result.
function [out_im] = CE_conv(im,s,M)
B = 255;
temp = ones(2*s,2*s);
temp = temp ./ numel(temp);
out1 = conv2(im,temp,'same');
out_im = im;
Aij = im-out1; %same as Yij-Sij
Aij1 = out1-im; %same as Sij-Yij
Mij = Aij;
Mij(Aij>0) = M./Aij(Aij>0); % if Yij>Sij Mij = M/Yij-Sij;
Mij(Aij<0) = M./Aij1(Aij<0); % if Yij<Sij Mij = M/Sij-Yij;
Mij(Aij==0) = M; % if Yij-Sij == 0 Mij = M;
out_im(Aij>=0) = ((B + Mij(Aij>=0)).*im(Aij>=0))./(Mij(Aij>=0)+im(Aij>=0));
out_im(Aij<0) = (Mij(Aij<0).*im(Aij<0))./ (Mij(Aij<0)+B-im(Aij<0));
I am not able to figure out where I'm going wrong.
A detailed explanation of what I'm trying to implement is given in the following paper:
Vonikakis, Vassilios, and Ioannis Andreadis. "Multi-scale image contrast enhancement." In Control, Automation, Robotics and Vision, 2008. ICARCV 2008. 10th International Conference on, pp. 856-861. IEEE, 2008.
I've tried to see if I could get those times down by processing with colfilt and nlfilter, since both are usually much faster than for-loops for sliding-window image processing.
Both worked fine for relatively small windows. For an image of 2048x2048 pixels and a window of 10x10, the solution with colfilt takes about 5 seconds (on my personal computer). With a window of 21x21 the time jumped to 27 seconds, but that is still a relative improvement on the times displayed on the question. Unfortunately I don't have enough memory to colfilt using windows of 100x100, but the solution with nlfilter works, though taking about 120 seconds.
Here is the code.
Solution with colfilt:
function outval = enhancematrix(inputmatrix,M,B)
%Inputmatrix is a 2D matrix or column vector, outval is a 1D row vector.
% If inputmatrix is made of integers...
inputmatrix = double(inputmatrix);
%1. Compute S and Y
normFactor = 1 / (size(inputmatrix,1) + 1).^2; %Size of column.
S = normFactor*sum(inputmatrix,1); % Sum over the columns.
Y = inputmatrix(ceil(size(inputmatrix,1)/2),:); % Center row.
% So far we have all S and Y, one value per column.
%2. Compute A(abs(Y-S))
A = Afunc(abs(S-Y),M);
% And all A: one value per column.
%3. The tricky part. If Y(i)-S(i) > 0 do something.
doPositive = (Y > S);
doNegative = ~doPositive;
outval = zeros(1,size(inputmatrix,2));
outval(doPositive) = ((B + A(doPositive)) .* Y(doPositive)) ./ (A(doPositive) + Y(doPositive));
outval(doNegative) = (A(doNegative) .* Y(doNegative)) ./ (A(doNegative) + B - Y(doNegative));
end
function out = Afunc(x,M)
% Input x is a row vector. Output is another row vector.
out = x;
out(x == 0) = M;
out(x ~= 0) = M./x(x ~= 0);
end
And to call it, simply do:
M = 1000; B = 255; enhancenow = @(x) enhancematrix(x,M,B);
w = 21 % windowsize
result = colfilt(inputImage,[w w],'sliding',enhancenow);
Solution with nlfilter:
function outval = enhanceimagecontrast(neighbourhood,M,B)
%1. Compute S and Y
normFactor = 1 / (length(neighbourhood) + 1).^2;
S = normFactor*sum(neighbourhood(:));
Y = neighbourhood(ceil(size(neighbourhood,1)/2),ceil(size(neighbourhood,2)/2));
%2. Compute A(abs(Y-S))
test = (Y>=S);
A = Afunc(abs(Y-S),M);
%3. Return outval
if test
    outval = ((B + A) * Y) / (A + Y);
else
    outval = (A * Y) / (A + B - Y);
end

function aval = Afunc(x,M)
if (x == 0)
    aval = M;
else
    aval = M/x;
end
And to call it, simply do:
M = 1000; B = 255; enhancenow = @(x) enhanceimagecontrast(x,M,B);
w = 21 % windowsize
result = nlfilter(inputImage,[w w], enhancenow);
I didn't spend much time checking that everything is 100% correct, but I did see some nice contrast enhancement (hair looks particularly nice).
This answer is the implementation that was suggested by Peter. I debugged the implementation and am presenting the final working version of the fast implementation.
function [out_im] = CE_conv(im,s,M)
B = 255;
im = ( im - min(im(:)) ) ./ ( max(im(:)) - min(im(:)) )*255;
h = ones(s,s)./(s*s);
out1 = imfilter(im,h,'conv');
out_im = im;
Aij = im-out1; %same as Yij-Sij
Aij1 = out1-im; %same as Sij-Yij
Mij = Aij;
Mij(Aij>0) = M./Aij(Aij>0); % if Yij>Sij Mij = M/(Yij-Sij);
Mij(Aij<0) = M./Aij1(Aij<0); % if Yij<Sij Mij = M/(Sij-Yij);
Mij(Aij==0) = M; % if Yij-Sij == 0 Mij = M;
out_im(Aij>=0) = ((B + Mij(Aij>=0)).*im(Aij>=0))./(Mij(Aij>=0)+im(Aij>=0));
out_im(Aij<0) = (Mij(Aij<0).*im(Aij<0))./ (Mij(Aij<0)+B-im(Aij<0));
out_im = ( out_im - min(out_im(:)) ) ./ ( max(out_im(:)) - min(out_im(:)) );
To call this use the following code
I = imread('pout.tif');
w_size = 51;
M = 4000;
output = CE_conv(I(:,:,1),w_size,M);
The output for the 'pout.tif' image is given below
The execution time for the bigger image with a 100*100 block size is around 5 seconds with this implementation.

Different intensity values for same image in OpenCV and MATLAB

I'm using Python 2.7 and OpenCV 3.x for my project for omr sheet evaluation using web camera.
While finding the number of white pixels around the center of a circle, I came to know that the intensity values are wrong, but MATLAB shows the correct values when I'm using imtool('a1.png').
I'm using a .png image (datatype uint8).
Just run the code, go to the coordinates [360:370, 162:172] in the image, and look at the intensity values; they should not be 0.
find the images here -> a1.png a2.png
Why is this happening?
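One thing to keep in mind when comparing the two tools (not necessarily the cause here): the NumPy slice th2[360:370, 162:172] is 0-based and ordered (row, column), while MATLAB indexing is 1-based and imtool reports positions as (X, Y) = (column, row). The equivalent check in MATLAB would therefore be something like:

img = imread('th2.png');      % the thresholded image written out by the script below
img(361:370, 163:172)         % rows 361-370, columns 163-172 (1-based)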
import numpy as np
import cv2
from matplotlib import pyplot as plt
#select radius of circle
radius = 10;
#function for finding white pixels
def thresh_circle(img,ptx,pty):
    centerX = ptx;
    centerY = pty;
    cntOfWhite = 0;
    for i in range((centerX - radius),(centerX + radius)):
        for j in range((centerY - radius), (centerY + radius)):
            if(j < img.shape[0] and i < img.shape[1]):
                val = img[i][j]
                if (val == 255):
                    cntOfWhite = cntOfWhite + 1;
    return cntOfWhite
MIN_MATCH_COUNT = 10
img1 = cv2.imread('a1.png',0) # queryImage
img2 = cv2.imread('a2.png',0) # trainImage
sift = cv2.SIFT()# Initiate SIFT detector
kp1, des1 = sift.detectAndCompute(img1,None)# find the keypoints and descriptors with SIFT
kp2, des2 = sift.detectAndCompute(img2,None)
FLANN_INDEX_KDTREE = 0
index_params = dict(algorithm = FLANN_INDEX_KDTREE, trees = 5)
search_params = dict(checks = 50)
flann = cv2.FlannBasedMatcher(index_params, search_params)
matches = flann.knnMatch(des1,des2,k=2)
good = []# store all the good matches as per Lowe's ratio test.
for m,n in matches:
    if m.distance < 0.7*n.distance:
        good.append(m)
if len(good)>MIN_MATCH_COUNT:
    src_pts = np.float32([ kp1[m.queryIdx].pt for m in good ]).reshape(-1,1,2)
    dst_pts = np.float32([ kp2[m.trainIdx].pt for m in good ]).reshape(-1,1,2)
    M, mask = cv2.findHomography(src_pts, dst_pts, cv2.LMEDS,5.0)
    #print M
    matchesMask = mask.ravel().tolist()
    h,w = img1.shape
else:
    print "Not enough matches are found - %d/%d" % (len(good),MIN_MATCH_COUNT)
    matchesMask = None
img3 = cv2.warpPerspective(img1, M, (img2.shape[1],img2.shape[0]))
blur = cv2.GaussianBlur(img3,(5,5),0)
ret3,th3 = cv2.threshold(blur,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
ret,th2 = cv2.threshold(blur,ret3,255,cv2.THRESH_BINARY_INV)
print th2[360:370,162:172]#print a block of image
plt.imshow(th2, 'gray'),plt.show()
cv2.waitKey(0)
cv2.imwrite('th2.png',th2)
ptyc = np.array([170,200,230,260]);#y coordinates of circle center
ptxc = np.array([110,145,180,215,335,370,405,440])#x coordinates of circle center
pts_src = np.zeros(shape = (32,2),dtype=np.int);#x,y coordinates of circle center
ct = 0;
for i in range(0,4):
    for j in range(0,8):
        pts_src[ct][1] = ptyc[i];
        pts_src[ct][0] = ptxc[j];
        ct = ct+1;
boolval = np.zeros(shape=(8,4),dtype=np.bool)
ct = 0;
for j in range(0,8):
    for i in range(0,4):
        a1 = thresh_circle(th2,pts_src[ct][0],pts_src[ct][1])
        ct = ct+1;
        if(a1 > 50):
            boolval[j][i] = 1
        else:
            boolval[j][i] = 0

Matlab FFT and home brewed FFT

I'm trying to verify an FFT algorithm I should use for a project against the same computation in MATLAB.
The point is that with my own C FFT function I always get the right (second) part of the double-sided FFT spectrum evaluated in MATLAB, and not the first one as expected.
For instance, if my third bin is in the form a+i*b, the third bin of MATLAB's FFT is a-i*b. The a and b values are the same, but I always get the complex conjugate of MATLAB's.
I know that in terms of amplitudes and power there's no trouble (because of the abs value), but I wonder if in terms of phases I'm going to read wrong angles all the time.
I'm not skilled enough in MATLAB to know (and I have not found useful information on the web) whether MATLAB's FFT returns the spectrum with negative frequencies first and then positive ones, or whether I have to fix my FFT algorithm, or whether it is all fine because phases are unchanged regardless of which part of the FFT we choose as the single-sided spectrum (but I doubt this last option).
Example:
If S is the sample array with N=512 samples, Y = fft(S) in MATLAB returns the FFT as follows (the signs of the imaginary parts in the first half of the array are arbitrary, just to show the complex-conjugate relationship with the second part):
1 A1 + i*B1 (DC, B1 is always zero)
2 A2 + i*B2
3 A3 - i*B3
4 A4 + i*B4
5 A5 + i*B5
...
253 A253 - i*B253
254 A254 + i*B254
255 A255 + i*B255
256 A256 + i*B256
257 A257 - i*B257 (Nyquist, B257 is always zero)
258 A256 - i*B256
259 A255 - i*B255
260 A254 - i*B254
261 A253 + i*B253
...
509 A5 - i*B5
510 A4 - i*B4
511 A3 + i*B3
512 A2 - i*B2
My FFT implementation returns only 256 values (and that's ok) in the Y array as:
1 1 A1 + i*B1 (A1 is the DC, B1 is Nyquist, both are pure Real numbers)
2 512 A2 - i*B2
3 511 A3 + i*B3
4 510 A4 - i*B4
5 509 A5 + i*B5
...
253 261 A253 + i*B253
254 260 A254 - i*B254
255 259 A255 - i*B255
256 258 A256 - i*B256
Where the first column is the proper index of my Y array and the second is just a reference to the corresponding row in the MATLAB FFT output.
As you can see, my FFT implementation (DC apart) returns the FFT like the second half of MATLAB's FFT (in reverse order).
To summarize: even if I use fftshift as suggested, it seems that my implementation always returns what in the MATLAB FFT should be considered the negative part of the spectrum.
Where is the error?
This is the code I use:
Note 1: the FFT array is not declared here and it is changed inside the function. Initially it holds the N samples (real values) and at the end it contains the N/2+1 bins of the single-sided FFT spectrum.
Note 2: the N/2+1 bins are stored in N/2 complex slots only, because the DC component is always real (it is stored in FFT[0]) and so is the Nyquist bin (it is stored in FFT[1]); this exception apart, every even element K holds a real part and the odd element K+1 holds the corresponding imaginary part.
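To compare the two layouts directly, a small MATLAB sketch that packs MATLAB's own fft() output into the layout described in Note 2 may help (this is my reading of the storage scheme described above, so double check it against your code):

N    = 512;
S    = randn(1, N);              % any real test signal
Y    = fft(S);                   % full double-sided spectrum
half = Y(1:N/2 + 1);             % single-sided part: DC .. Nyquist

packed          = zeros(1, N);
packed(1)       = real(half(1));        % DC (purely real)
packed(2)       = real(half(end));      % Nyquist (purely real)
packed(3:2:end) = real(half(2:end-1));  % real parts of bins 2 .. N/2
packed(4:2:end) = imag(half(2:end-1));  % imaginary parts of the same bins

% If your routine returns the conjugates of these bins, they match the second
% half of Y instead: for real input, the fft output is conjugate symmetric.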
void Fft::FastFourierTransform( bool inverseFft ) {
    double twr, twi, twpr, twpi, twtemp, ttheta;
    int i, i1, i2, i3, i4;
    double c1, c2;
    double h1r, h1i, h2r, h2i, wrs, wis;
    int nn, ii, jj, n, mmax, m, j, istep, isign;
    double wtemp, wr, wpr, wpi, wi;
    double theta, tempr, tempi;
    // NS is the number of samples and it must be a power of two
    if( NS == 1 )
        return;
    if( !inverseFft ) {
        ttheta = 2.0 * PI / NS;
        c1 = 0.5;
        c2 = -0.5;
    }
    else {
        ttheta = 2.0 * PI / NS;
        c1 = 0.5;
        c2 = 0.5;
        ttheta = -ttheta;
        twpr = -2.0 * Pow( Sin( 0.5 * ttheta ), 2 );
        twpi = Sin(ttheta);
        twr = 1.0+twpr;
        twi = twpi;
        for( i = 2; i <= NS/4+1; i++ ) {
            i1 = i+i-2;
            i2 = i1+1;
            i3 = NS+1-i2;
            i4 = i3+1;
            wrs = twr;
            wis = twi;
            h1r = c1*(FFT[i1]+FFT[i3]);
            h1i = c1*(FFT[i2]-FFT[i4]);
            h2r = -c2*(FFT[i2]+FFT[i4]);
            h2i = c2*(FFT[i1]-FFT[i3]);
            FFT[i1] = h1r+wrs*h2r-wis*h2i;
            FFT[i2] = h1i+wrs*h2i+wis*h2r;
            FFT[i3] = h1r-wrs*h2r+wis*h2i;
            FFT[i4] = -h1i+wrs*h2i+wis*h2r;
            twtemp = twr;
            twr = twr*twpr-twi*twpi+twr;
            twi = twi*twpr+twtemp*twpi+twi;
        }
        h1r = FFT[0];
        FFT[0] = c1*(h1r+FFT[1]);
        FFT[1] = c1*(h1r-FFT[1]);
    }
    if( inverseFft )
        isign = -1;
    else
        isign = 1;
    n = NS;
    nn = NS/2;
    j = 1;
    for(ii = 1; ii <= nn; ii++) {
        i = 2*ii-1;
        if( j>i ) {
            tempr = FFT[j-1];
            tempi = FFT[j];
            FFT[j-1] = FFT[i-1];
            FFT[j] = FFT[i];
            FFT[i-1] = tempr;
            FFT[i] = tempi;
        }
        m = n/2;
        while( m>=2 && j>m ) {
            j = j-m;
            m = m/2;
        }
        j = j+m;
    }
    mmax = 2;
    while(n>mmax) {
        istep = 2*mmax;
        theta = 2.0 * PI /(isign*mmax);
        wpr = -2.0 * Pow( Sin( 0.5 * theta ), 2 );
        wpi = Sin(theta);
        wr = 1.0;
        wi = 0.0;
        for(ii = 1; ii <= mmax/2; ii++) {
            m = 2*ii-1;
            for(jj = 0; jj <= (n-m)/istep; jj++) {
                i = m+jj*istep;
                j = i+mmax;
                tempr = wr*FFT[j-1]-wi*FFT[j];
                tempi = wr*FFT[j]+wi*FFT[j-1];
                FFT[j-1] = FFT[i-1]-tempr;
                FFT[j] = FFT[i]-tempi;
                FFT[i-1] = FFT[i-1]+tempr;
                FFT[i] = FFT[i]+tempi;
            }
            wtemp = wr;
            wr = wr*wpr-wi*wpi+wr;
            wi = wi*wpr+wtemp*wpi+wi;
        }
        mmax = istep;
    }
    if( inverseFft )
        for(i = 1; i <= 2*nn; i++)
            FFT[i-1] = FFT[i-1]/nn;
    if( !inverseFft ) {
        twpr = -2.0 * Pow( Sin( 0.5 * ttheta ), 2 );
        twpi = Sin(ttheta);
        twr = 1.0+twpr;
        twi = twpi;
        for(i = 2; i <= NS/4+1; i++) {
            i1 = i+i-2;
            i2 = i1+1;
            i3 = NS+1-i2;
            i4 = i3+1;
            wrs = twr;
            wis = twi;
            h1r = c1*(FFT[i1]+FFT[i3]);
            h1i = c1*(FFT[i2]-FFT[i4]);
            h2r = -c2*(FFT[i2]+FFT[i4]);
            h2i = c2*(FFT[i1]-FFT[i3]);
            FFT[i1] = h1r+wrs*h2r-wis*h2i;
            FFT[i2] = h1i+wrs*h2i+wis*h2r;
            FFT[i3] = h1r-wrs*h2r+wis*h2i;
            FFT[i4] = -h1i+wrs*h2i+wis*h2r;
            twtemp = twr;
            twr = twr*twpr-twi*twpi+twr;
            twi = twi*twpr+twtemp*twpi+twi;
        }
        h1r = FFT[0];
        FFT[0] = h1r+FFT[1]; // DC
        FFT[1] = h1r-FFT[1]; // FS/2 (NYQUIST)
    }
    return;
}
In MATLAB, try using fftshift(fft(...)). MATLAB doesn't automatically shift the spectrum after the FFT is called, which is why they implemented the fftshift() function.
It is simply a MATLAB formatting thing. Basically, MATLAB arranges the Fourier transform in the following order:
DC, (DC-1), .... (Nyquist-1), -Nyquist, -Nyquist+1, ..., DC-1
Let's say you have an 8-point sequence: [1 2 3 1 4 5 1 3]
In your signal processing class, your professor probably draws the Fourier spectrum based on a Cartesian system (negative -> positive for the x axis), so your DC should be located at 0 (the 4th position in your fft sequence, assuming the position index here is 0-based) on your x axis.
In MATLAB, the DC is the very first element in the fft sequence, so you need to fftshift() to swap the first half and second half of the fft sequence such that DC will be located at the 4th position (0-based indexing).
I am attaching a graph here so you may have a visual:
where a is the original 8-point sequence; FT(a) is the Fourier transform of a.
The matlab code is here:
a = [1 2 3 1 4 5 1 3];
A = fft(a);
N = length(a);
x = -N/2:N/2-1;
figure
subplot(3,1,1), stem(x, a,'o'); title('a'); xlabel('time')
subplot(3,1,2), stem(x, fftshift(abs(A),2),'o'); title('FT(a) in signal processing'); xlabel('frequency')
subplot(3,1,3), stem(x, abs(A),'o'); title('FT(a) in matlab'); xlabel('frequency')

Implementation of shadow free 1d invariant image

I implemented a method for removing shadows based on invariant color features found in the paper Entropy Minimization for Shadow Removal. My implementation seems to be yielding similar computational results sometimes, but they are always off, and my grayscale image is blocky, maybe as a result of incorrectly taking the geometric mean.
Here is an example plot of the information potential from the horse image in the paper as well as my invariant image. Multiply the x-axis by 3 to get theta (which goes from 0 to 180):
And here is the grayscale image my code outputs for the correct maximum theta (mine is off by 10):
You can see the blockiness that their image doesn't have:
Here is their information potential:
When dividing by the geometric mean, I have tried using NaN and thresholding the image so the smallest possible value is .01, but it doesn't seem to change my output.
Here is my code:
I = im2double(imread(strname));
[m,n,d] = size(I);
I = max(I, .01);
chrom = zeros(m, n, 3, 'double');
for i = 1:m
    for j = 1:n
        % if ((I(i,j,1)*I(i,j,2)*I(i,j,3))~= 0)
        chrom(i,j, 1) = I(i,j,1)/((I(i,j,1)*I(i,j,2)*I(i,j, 3))^(1/3));
        chrom(i,j, 2) = I(i,j,2)/((I(i,j,1)*I(i,j,2)*I(i,j, 3))^(1/3));
        chrom(i,j, 3) = I(i,j,3)/((I(i,j,1)*I(i,j,2)*I(i,j, 3))^(1/3));
        % else
        %     chrom(i,j, 1) = 1;
        %     chrom(i,j, 2) = 1;
        %     chrom(i,j, 3) = 1;
        % end
    end
end
p1 = mat2gray(log(chrom(:,:,1)));
p2 = mat2gray(log(chrom(:,:,2)));
p3 = mat2gray(log(chrom(:,:,3)));
X1 = mat2gray(p1*1/(sqrt(2)) - p2*1/(sqrt(2)));
X2 = mat2gray(p1*1/(sqrt(6)) + p2*1/(sqrt(6)) - p3*2/(sqrt(6)));
maxinf = 0;
maxtheta = 0;
data2 = zeros(1, 61);
for theta = 0:3:180
    M = X1*cos(theta*pi/180) - X2*sin(theta*pi/180);
    s = sqrt(std2(X1)^(2)*cos(theta*pi/180) + std2(X2)^(2)*sin(theta*pi/180));
    s = abs(1.06*s*((m*n)^(-1/5)));
    [m, n] = size(M);
    length = m*n;
    sources = zeros(1, length, 'double');
    count = 1;
    for x=1:m
        for y = 1:n
            sources(1, count) = M(x , y);
            count = count + 1;
        end
    end
    weights = ones(1, length);
    sigma = 2*s;
    [xc , Ak] = fgt_model(sources , weights , sigma , 10, sqrt(length) , 6 );
    sum1 = sum(fgt_predict(sources , xc , Ak , sigma , 10 ));
    sum1 = sum1/sqrt(2*pi*2*s*s);
    data2(theta/3 + 1) = sum1;
    if (sum1 > maxinf)
        maxinf = sum1;
        maxtheta = theta;
    end
end
InvariantImage2 = cos(maxtheta*pi/180)*X1 + sin(maxtheta*pi/180)*X2;
Assume the Fast Gauss Transform is correct.
I don't know whether this makes any difference since it is more than a month later now, but the blockiness and the different information potential plot are simply caused by compression of the image you used. You can't expect to get the same results using this image as they did, because they used a raw, high-resolution, uncompressed version of it. I have to say I am fairly impressed with your results, especially with implementing the information potential. That thing went over my head a little.
John.