This question already has answers here:
In MATLAB, can I have a script and a function definition in the same file?
(7 answers)
Closed 8 years ago.
I have written a MATLAB program to render a 'Delta E' or color difference image for 2 given images. However, when I run the program, I receive this error:
Error: File: deltaE.m Line: 6 Column: 1
Function definitions are not permitted in this context.
Here is the program:
imageOriginal = imread('1.jpg');
imageModified = imread('2.jpg');
function [imageOut] = deltaE(imageOriginal, imageModified)
[imageHeight, imageWidth, imageDepth] = size(imageOriginal);
% Convert image from RGB colorspace to lab color space.
cform = makecform('srgb2lab');
labOriginal = applycform(im2double(imageOriginal),cform);
labModified = applycform(im2double(imageModified),cform);
% Extract out the color bands from the original image
% into 3 separate 2D arrays, one for each color component.
L_original = labOriginal(:, :, 1);
a_original = labOriginal(:, :, 2);
b_original = labOriginal(:, :, 3);
L_modified = labModified(:,:,1);
a_modified = labModified(:,:,2);
b_modified = labModified(:,:,3);
% Create the delta images: delta L, delta A, and delta B.
delta_L = L_original - L_modified;
delta_a = a_original - a_modified;
delta_b = b_original - b_modified;
% This is an image that represents the color difference.
% Delta E is the square root of the sum of the squares of the delta images.
delta_E = sqrt(delta_L .^ 2 + delta_a .^ 2 + delta_b .^ 2);
imageOut = delta_E;
end
I might have made a beginner error, since I'm just 17 and I'm starting out with MATLAB. It'd be great if you could tell me what I'm doing wrong.
You can't define a function within a script. You need to define functions in a separate file, or turn the script into a (main) function, in which case your other functions become subfunctions of it. See also here.
EDIT: From MATLAB R2016b you can define local functions within a script; see here.
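For example, a minimal restructuring of your file (a sketch assuming R2016b or later; on older releases, save the function as its own deltaE.m file instead):
imageOriginal = imread('1.jpg');
imageModified = imread('2.jpg');
imageOut = deltaE(imageOriginal, imageModified);
imshow(imageOut, []); % display the Delta E image, scaled to the full range
function [imageOut] = deltaE(imageOriginal, imageModified)
% ... the body of deltaE exactly as posted in the question ...
end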
I have a clean image and a noisy image. I created a denoiser and applied it to the noisy image, and that denoised image is my final output. To check how close this output is to the clean image I need to compare them using PSNR and SSIM, but because the images are positioned differently I am unable to compare them.
At the moment I get an SSIM of 0.5, which is very low because the two images are not properly aligned. If the images were registered properly, I guess the SSIM would come out around 0.80+, but I have not been able to accomplish this.
How can I align these two images to obtain a good SSIM value?
I have two coin images for comparison: the first is the clean image, the second is a denoised (improved) version of a noisy image.
Clean Img:
Noisy Img:
Because the coins sit at different positions in the two images, ssim(img1,img2) gives a misleading result. I tried cropping, but that did not work.
Here is what I have tried so far:
Attempt 1:
function [valPSNR,valSSIM,badpict] = getSSIM(clean_img,img2)
    % pad reference image since object is so close to edges
    refpict = padarray(mat2gray(clean_img),[20 20],'replicate','both');
    % crop test image down to extract the object alone
    badpict = imcrop(mat2gray(img2),[2.5 61.5 357 363]);
    % maximize normalized cross-correlation to find offset
    szb = size(badpict);
    c = normxcorr2(badpict,refpict);
    [idxy, idxx] = find(c == max(c(:)));
    osy = idxy-szb(1);
    osx = idxx-szb(2);
    % crop the reference pict to the ROI
    refpict = refpict(osy:idxy-1,osx:idxx-1);
    %imshow(imfuse(badpict,refpict,'checkerboard'));
    %imagesc(badpict);
    valSSIM = ssim(badpict,refpict);
    valPSNR = getPSNR(badpict,refpict);
    img2 = badpict;
    clean_img = refpict;
    figure; imshowpair(clean_img,img2);
    figure; montage({mat2gray(clean_img),mat2gray(img2)}, 'Size', [1 2], 'BackgroundColor', 'w', 'BorderSize', [2 2]);
end
Attempt 2:
function [valPSNR,valSSIM,badpict] = getSSIM2(clean_img,img2)
    % binarize both images to isolate the coin in each
    bw1 = im2bw(mat2gray(clean_img));
    bw2 = imclose(im2bw(mat2gray(img2),0.3),strel('disk',9));
    bw2 = bwareafilt(bw2,1);
    % make same size
    [r,c] = find(bw1);
    clean_img = clean_img(min(r):max(r),min(c):max(c));
    [r,c] = find(bw2);
    img2 = img2(min(r):max(r),min(c):max(c));
    img2 = imresize(img2, size(clean_img),'bilinear');
    valPSNR = getPSNR(mat2gray(clean_img),mat2gray(img2));
    valSSIM = ssim(mat2gray(clean_img),mat2gray(img2));
    badpict = img2;
    figure; imshowpair(clean_img,img2);
    figure; montage({mat2gray(clean_img),mat2gray(img2)}, 'Size', [1 2], 'BackgroundColor', 'w', 'BorderSize', [2 2]);
end
As others have pointed out, the resampling required by registration will introduce some non-zero error. But here is some sample code that takes you through the registration part, which is the crux of your question.
% SSIM isn't defined on RGB images, convert to grayscale.
ref = rgb2gray(imread('https://i.stack.imgur.com/tPKEJ.png'));
X = rgb2gray(imread('https://i.stack.imgur.com/KmU4y.png'));
% The input image data has bright borders at the edges that create
% artifacts in resampling; best to just crop those, or maybe there are
% acquisitions that don't have these borders?
X = X(3:end-2,3:end-2);
ref = ref(4:end-3,4:end-3);
figure
montage({X,ref});
tform = imregcorr(X,ref,"translation");
Xreg = imwarp(X,tform,OutputView=imref2d(size(ref)),SmoothEdges=true);
figure
imshowpair(Xreg,ref)
ssim(Xreg,ref)
Maybe you can refer to my GitHub.
I implemented a template-matching algorithm in OpenCV; you can use its NCC-based pattern matching to find targets and obtain a similarity score.
You can then use this score to decide whether the image is clean.
Porting the C++ code may be an issue for you, but you just need to find the corresponding MATLAB function for each OpenCV call.
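For instance, the core NCC step has a direct MATLAB counterpart in normxcorr2. Here is a minimal sketch (template and scene are placeholder images, and it omits the rotation and pyramid handling of the full algorithm):
c = normxcorr2(template, scene);         % normalized cross-correlation map
[score, idx] = max(c(:));                % best similarity score
[peakY, peakX] = ind2sub(size(c), idx);  % peak location in the correlation map
offsetY = peakY - size(template, 1);     % top-left corner of the match in scene
offsetX = peakX - size(template, 2);
isMatch = score > 0.85;                  % the 0.85 threshold mentioned below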
Here are the results (red blocks mark areas with similarity above the 0.85 threshold compared with the golden sample):
The whole function is too long to be posted here.
Part of the function:
for (int i = 0; i < iSize; i++)
{
    Mat matRotatedSrc, matR = getRotationMatrix2D (ptCenter, vecAngles[i], 1);
    Mat matResult;
    Point ptMaxLoc;
    double dValue, dMaxVal;
    double dRotate = clock ();
    Size sizeBest = GetBestRotationSize (vecMatSrcPyr[iTopLayer].size (), pTemplData->vecPyramid[iTopLayer].size (), vecAngles[i]);
    float fTranslationX = (sizeBest.width - 1) / 2.0f - ptCenter.x;
    float fTranslationY = (sizeBest.height - 1) / 2.0f - ptCenter.y;
    matR.at<double> (0, 2) += fTranslationX;
    matR.at<double> (1, 2) += fTranslationY;
    warpAffine (vecMatSrcPyr[iTopLayer], matRotatedSrc, matR, sizeBest);
    MatchTemplate (matRotatedSrc, pTemplData, matResult, iTopLayer);
    minMaxLoc (matResult, 0, &dMaxVal, 0, &ptMaxLoc);
    vecMatchParameter[i * (m_iMaxPos + MATCH_CANDIDATE_NUM)] = s_MatchParameter (Point2f (ptMaxLoc.x - fTranslationX, ptMaxLoc.y - fTranslationY), dMaxVal, vecAngles[i]);
    for (int j = 0; j < m_iMaxPos + MATCH_CANDIDATE_NUM - 1; j++)
    {
        ptMaxLoc = GetNextMaxLoc (matResult, ptMaxLoc, -1, pTemplData->vecPyramid[iTopLayer].cols, pTemplData->vecPyramid[iTopLayer].rows, dValue, m_dMaxOverlap);
        vecMatchParameter[i * (m_iMaxPos + MATCH_CANDIDATE_NUM) + j + 1] = s_MatchParameter (Point2f (ptMaxLoc.x - fTranslationX, ptMaxLoc.y - fTranslationY), dValue, vecAngles[i]);
    }
}
FilterWithScore (&vecMatchParameter, m_dScore - 0.05 * iTopLayer);
You can refer to section 4.3.1 in this article if you want.
If pI is any pixel/intensity on this image, and dS is the (rho, theta) of that line in Hough space, what is the meaning of the following statement?
Is the following a correct implementation?
function val = gaussC(pI, sigma, dS)
x = pI(1);
y = pI(2);
rho = dS(1);
theta = dS(2);
exponent = ((x-rho).^2 + (y-theta).^2)./(2*sigma);
val = (exp(-exponent));
end
EDIT:
My second proposal,
I = gray_imread('Scratch1.png');
dimension = 5;
sigma = 1;
pI = [22, 114];
dS = [-108, -80];
J = get_matrix_from_image(I, pI, dimension);
var = normpdf(J(:), dS(2), sigma);
get_matrix_from_image.m
function mat = get_matrix_from_image(input_image, ctr_point, dimension)
    [height, width] = size(input_image);
    col_count = width;
    row_count = height;
    xxx = col_count;
    yyy = row_count;
    if(ctr_point(1) < 1 && ctr_point(2) < 1)
        mat = zeros(dimension, dimension);
    else
        x = ctr_point(1);
        y = ctr_point(2);
        start_x = x - floor(dimension/2);
        end_x = start_x + dimension - 1;
        start_y = y - floor(dimension/2);
        end_y = start_y + dimension - 1;
        if(start_x > xxx || end_x > xxx || start_y > yyy || end_y > yyy || ...
           start_x < 1 || end_x < 1 || start_y < 1 || end_y < 1)
            mat = zeros(dimension, dimension);
        else
            mat = input_image(start_x:end_x, start_y:end_y);
        end
    end
end
Not quite. Basically you are manually coding formula (9) from here. Then:
...
exponent = ((x-rho).^2 + (y-theta).^2)./(2*sigma^2); % sigma is also squared
val = exp(-exponent); % superfluous bracket removed
val = val./(2*pi*sigma^2); % you also forgot the denominator part
end
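For reference, here is the complete corrected function, i.e. the question's code with the three fixes above applied:
function val = gaussC(pI, sigma, dS)
    x = pI(1);
    y = pI(2);
    rho = dS(1);
    theta = dS(2);
    exponent = ((x-rho).^2 + (y-theta).^2)./(2*sigma^2);
    val = exp(-exponent);
    val = val./(2*pi*sigma^2);
end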
Of course you could write the whole thing a bit more efficiently. But unless you actually want to apply this formula to a lot of data, I would keep it like this (it's very readable).
If you value performance, just use the built-in function:
val = normpdf(pI,dS,sigma)
Note that this evaluates the two 1-D normal densities (for x with mean rho, and for y with mean theta); since the bivariate Gaussian with a common sigma factorizes into their product, prod(val) gives the 2-D value above.
For new readers of this question: The OP reopened this questions after editing it heavily, completely changing the nature of the question. Therefore this answer now seems a bit off.
Your code is an incorrect implementation of the PDF of the normal distribution. The PDF of the normal distribution is:
f(x) = (1/(sigma*sqrt(2*pi))) * exp(-(x-mu)^2/(2*sigma^2))
IMO, if pI is defined by x and y, i.e. pI(x,y), and dS is defined by rho and theta, i.e. dS(rho,theta), then you cannot simply subtract rho from x and theta from y. You have to convert one of them to the other. In my code, I have converted dS(rho,theta) to dS(x,y) and then used it as the mean μ in the PDF formula.
Furthermore, I think pI would be a matrix of 5 rows and 2 columns (5 pixels with x and y values); I say this on the basis of the following figure, which is Figure 6 of the linked paper:
Now coming to the statement,
g(pi , ds) is a Gaussian function, evaluated
in pi, with a peak in correspondence with the detected
scratch direction ds and a constant standard deviation.
IMO, the author(s) of the paper suggest taking 5 pixels, calculating the PDF, and finding where the peak is.
Based on my understanding, its implementation should be:
function val = gaussC(pI, dS, sigma)
    x = pI(:,1); % x values of all pixels
    y = pI(:,2); % y values of all pixels
    rho = dS(1);
    theta = dS(2);
    % Converting polar coordinates to rectangular coordinates to get the mean
    % values in the x and y directions
    exponent = ((x - rho*cos(theta)).^2 + (y - rho*sin(theta)).^2) ./ (2*sigma^2);
    val = exp(-exponent) ./ (sigma*sqrt(2*pi));
end
After calculating the PDF, find the peak value.
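For example, a hypothetical call (pI's first row and dS come from the question's edit; the remaining pixel coordinates are made up for illustration):
pI = [22 114; 23 114; 24 115; 25 115; 26 116]; % 5 pixels as (x, y) rows
dS = [-108, -80];                              % (rho, theta)
sigma = 1;
val = gaussC(pI, dS, sigma);   % PDF value at each pixel
[peakVal, peakIdx] = max(val); % the pixel where the Gaussian peaks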
I want to find how similar a picture is to some samples that I have (black and white).
I found the sum-of-absolute-differences code below, but because I'm new to MATLAB I couldn't figure out how to use it. How does this algorithm work? Does it give a measure of how similar the pictures are?
I= imread('img1.jpg');
image2= imread('img2.jpg');
% J = uint8(filter2(fspecial('gaussian'), I));
K = imabsdiff(I,image2);
figure, imshow(K,[])
Well, I think you pretty much answered your question yourself. It is the sum of the absolute differences. So let's say you have img1 and img2, which are the same size and type.
To find the difference, do subtraction
img1-img2
To find the absolute difference, use the absolute value function abs
abs(img1-img2)
To find the sum, use the sum function. Note that you will need to apply it once for each "dimension" your image has. If you are not sure, type size(img1) and see whether 2 or 3 numbers show up; that is how many nested sum(...) calls you need.
For a color image (3 dimensions):
sum(sum(sum(abs(img1-img2))))
The line above is the sum of the absolute differences. Whichever sample has the smallest sum can be considered the closest image.
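Putting it together for your matching task, a minimal sketch (the filenames are placeholders, all images are assumed to be the same size, and im2double avoids the uint8 saturation pitfall discussed in another answer):
query = im2double(imread('picture.jpg'));
samples = {'sample1.jpg', 'sample2.jpg', 'sample3.jpg'};
sad = zeros(1, numel(samples));
for k = 1:numel(samples)
    s = im2double(imread(samples{k}));
    sad(k) = sum(abs(query(:) - s(:))); % flattening with (:) lets one sum cover all dimensions
end
[~, best] = min(sad); % index of the most similar sample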
If you have images of different sizes, you need to use the normxcorr2 function. It returns a correlation map showing how well the template (small) image fits into the big image at each point. Find the maximum value of that map; that is how well the image fits.
For instance:
correlation = normxcorr2(smallImg, bigImg);
compareMe = max(correlation(:))
It is best practice to use MATLAB's built-in function imabsdiff. In contrast to the other suggested answers, it takes care of the range boundaries if your image is formatted as uint8. Consider:
img1 = uint8(10);
img2 = uint8(20);
sum(abs(img1(:)-img2(:)))
gives you 0 (the uint8 subtraction saturates at zero before abs is applied), whereas
imabsdiff(img1(:),img2(:))
correctly gives 10.
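To reduce this to a single SAD number, a small sketch reusing the variable names from the question:
K = imabsdiff(I, image2); % per-pixel absolute difference, saturation-safe
sadValue = sum(K(:));     % scalar sum of absolute differences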
If you use the command im2col in MATLAB, you will be able to do this in a vectorized manner.
Just arrange each neighborhood in columns (for each frame).
Put them in a 3D matrix and apply your operation on the 3rd dimension.
Code Snippet
I used Wikipedia's definition of "Sum of Absolute Differences".
The demo script:
```
% Sum of Absolute Differences Demo
numRows = 10;
numCols = 10;
refBlockRadius = 1;
refBlockLength = (2 * refBlockRadius) + 1;
mImgSrc = randi([0, 255], [numRows, numCols]);
mRefBlock = randi([0, 255], [refBlockLength, refBlockLength]);
mSumAbsDiff = SumAbsoluteDifferences(mImgSrc, mRefBlock);
```
The Function SumAbsoluteDifferences:
```
function [ mSumAbsDiff ] = SumAbsoluteDifferences( mInputImage, mRefBlock )
% SUMABSOLUTEDIFFERENCES Sliding-window Sum of Absolute Differences
%   Computes, for each pixel, the SAD between the block centered on it
%   and the reference block mRefBlock.
    numRows = size(mInputImage, 1);
    numCols = size(mInputImage, 2);
    blockLength = size(mRefBlock, 1);
    blockRadius = (blockLength - 1) / 2;
    mInputImagePadded = padarray(mInputImage, [blockRadius, blockRadius], 'replicate', 'both');
    mBlockCol = im2col(mInputImagePadded, [blockLength, blockLength], 'sliding');
    mSumAbsDiff = sum(abs(bsxfun(@minus, mBlockCol, mRefBlock(:))));
    mSumAbsDiff = col2im(mSumAbsDiff, [blockLength, blockLength], [(numRows + blockLength - 1), (numCols + blockLength - 1)]);
end
```
Enjoy...
This question already has answers here:
eigenfaces are not showing correctly and are very dark
(2 answers)
Closed 3 years ago.
I have a set of 17 grayscale face pictures, and when I try to view the eigenfaces I get black images instead of ghost-like pictures.
input_dir = 'images';
image_dims = [60, 60];
filenames = dir(fullfile(input_dir, '*.jpg'));
num_images = numel(filenames);
images = [];
for n = 1:num_images
    filename = fullfile(input_dir, filenames(n).name);
    img = imresize(imread(filename), [60, 60]);
    if n == 1
        images = zeros(prod(image_dims), num_images);
    end
    images(:, n) = img(:);
end
% Training
% steps 1 and 2: find the mean image and the mean-shifted input images
mean_face = mean(images, 2);
shifted_images = images - repmat(mean_face, 1, num_images);
% steps 3 and 4: calculate the ordered eigenvectors and eigenvalues
[evectors, score, evalues] = princomp(images');
% step 5: only retain the top 'num_eigenfaces' eigenvectors (i.e. the principal components)
num_eigenfaces = 20;
evectors = evectors(:, 1:num_eigenfaces);
% step 6: project the images into the subspace to generate the feature vectors
features = evectors' * shifted_images;
and to view the eigenfaces (the eigenvectors reshaped into images) I used this code
figure;
for n = 1:num_eigenfaces
    subplot(2, ceil(num_eigenfaces/2), n);
    evector = reshape(evectors(:,n), image_dims);
    imshow(evector);
end
I don't think it is supposed to look like this. Can someone point out what I did wrong?
You should check each step in the code and make sure they pass sanity checks. My guess is this
features = evectors' * shifted_images;
Should be this
features = shifted_images * evectors;
Which makes me wonder if shifted_images has the correct dimensions. The evectors should be a matrix where each column represents a component vector. The matrix will be [pics x n]. The shifted images should be a [pixcount x pics] matrix. "pixcount" is the amount of pixels in each picture and "pics" is the number of pictures. If evectors' * shifted_images works without a dimensions error, I wonder if one quantity isn't being calculated correctly. I think this transpose is the culprit:
princomp(images');
The eigenvector values are tiny fractions (each column is unit-norm), so when displayed directly they are all close to zero and show up black; imshow(img, []) rescales the data to the full display range. Try scaling the image:
for i = 1:num_eigenfaces
    subplot(2, ceil(num_eigenfaces/2), i); % enough tiles for all eigenfaces
    img = reshape(evectors(:,i), image_dims);
    img = img';
    % scale image to full display range
    imshow(img, []);
end
I am using Gonzalez's frdescp function to get the Fourier descriptors of a boundary. With the code below I get two totally different sets of numbers describing two identical shapes that differ only in scale.
So what is wrong?
im = imread('c:\classes\a1.png');
im = im2bw(im);
b = bwboundaries(im);
f = frdescp(b{1}); % Fourier descriptors for the boundary of the first object (my pic only contains one object anyway)
% Normalization
f = f(2:20); % getting the first 20 & deleting the dc component
f = abs(f);
f = f/f(1);
Why do I get different descriptors for two identical circles that differ only in scale?
The problem is that the frdescp code (I used this code, which should be the same one you refer to) is also written to center the Fourier descriptors.
If you want to describe your shape correctly, it is mandatory to keep some descriptors that are symmetric with respect to the one representing the DC component.
The following image summarizes the concept:
In order to solve your problem (and others like yours), I wrote the following two functions:
function descriptors = fourierdescriptor( boundary )
    % I assume that the boundary is a N x 2 matrix
    % Also, N must be an even number
    np = size(boundary, 1);
    s = boundary(:, 1) + 1i*boundary(:, 2);
    descriptors = fft(s);
    descriptors = [descriptors((1+(np/2)):end); descriptors(1:np/2)];
end
function significativedescriptors = getsignificativedescriptors( alldescriptors, num )
    % num is the number of significative descriptors (in your example, it was 20)
    % In the following, I assume that num and size(alldescriptors,1) are even numbers
    dim = size(alldescriptors, 1);
    if num >= dim
        significativedescriptors = alldescriptors;
    else
        a = (dim/2 - num/2) + 1;
        b = dim/2 + num/2;
        significativedescriptors = alldescriptors(a : b);
    end
end
Now, you can use the above functions as follows:
im = imread('test.jpg');
im = im2bw(im);
b = bwboundaries(im);
b = b{1};
%force the number of boundary points to be even
if mod(size(b,1), 2) ~= 0
    b = [b; b(end, :)];
end
%define the number of significative descriptors I want to extract (it must be even)
numdescr = 20;
%Now, you can extract all fourier descriptors...
f = fourierdescriptor(b);
%...and get only the most significative:
f_sign = getsignificativedescriptors(f, numdescr);
I just went through the same problem as you.
According to this link, if you want invariance to scaling, make the comparison ratio-like, for example by dividing every Fourier coefficient by the DC coefficient: f*[1] = f[1]/f[0], f*[2] = f[2]/f[0], and so on. So you need to divide by the actual DC coefficient; but after your step "f = f(2:20); % getting the first 20 & deleting the dc component", the f(1) in your code is no longer the DC coefficient. The problem can be solved by saving the value of the DC coefficient first, so the adjusted code should be as follows:
% Normalization
DC = f(1);
f = f(2:20); % getting the first 20 & deleting the dc component
f = abs(f) ; % use magnitudes to be invariant to translation & rotation
f = f/DC; % divide the Fourier coefficients by the DC-coefficient to be invariant to scale
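A quick sanity check of the scale invariance (a synthetic sketch: a circle and its 2x-scaled copy should yield identical normalized descriptors):
t = (0:63)' * (2*pi/64);
b1 = [10 + 3*cos(t), 10 + 3*sin(t)]; % circle of radius 3 centered at (10,10)
b2 = 2 * b1;                         % the same shape, scaled by 2
f1 = fft(b1(:,1) + 1i*b1(:,2));
f2 = fft(b2(:,1) + 1i*b2(:,2));
n1 = abs(f1(2:20)) / abs(f1(1));     % magnitudes divided by the DC coefficient
n2 = abs(f2(2:20)) / abs(f2(1));
max(abs(n1 - n2))                    % ~0: the descriptors match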