I have a set of 17 grayscale face pictures, and when I try to view the eigenfaces I get black images instead of the expected ghost-like pictures.
input_dir = 'images';
image_dims = [60, 60];
filenames = dir(fullfile(input_dir, '*.jpg'));
num_images = numel(filenames);
images = [];
for n = 1:num_images
    filename = fullfile(input_dir, filenames(n).name);
    img = imresize(imread(filename), image_dims);
    if n == 1
        images = zeros(prod(image_dims), num_images);
    end
    images(:, n) = img(:);
end
% Training
% steps 1 and 2: find the mean image and the mean-shifted input images
mean_face = mean(images, 2);
shifted_images = images - repmat(mean_face, 1, num_images);
% steps 3 and 4: calculate the ordered eigenvectors and eigenvalues
[evectors, score, evalues] = princomp(images');
% step 5: only retain the top 'num_eigenfaces' eigenvectors (i.e. the principal components)
num_eigenfaces = 20;
evectors = evectors(:, 1:num_eigenfaces);
% step 6: project the images into the subspace to generate the feature vectors
features = evectors' * shifted_images;
And to view the eigenfaces I used this code:
figure;
for n = 1:num_eigenfaces
    subplot(2, ceil(num_eigenfaces/2), n);
    evector = reshape(evectors(:,n), image_dims);
    imshow(evector);
end
I don't think it is supposed to look like this. Can someone point out what I did wrong?
You should check each step in the code and make sure it passes sanity checks. My guess is that this:
features = evectors' * shifted_images;
should be this:
features = shifted_images * evectors;
This makes me wonder whether shifted_images has the correct dimensions. evectors should be a matrix whose columns are the component vectors; that matrix will be [pics x n]. shifted_images should be a [pixcount x pics] matrix, where "pixcount" is the number of pixels in each picture and "pics" is the number of pictures. If evectors' * shifted_images runs without a dimension error, I wonder whether one of the quantities is being calculated correctly. I think this transpose is the culprit:
princomp(images');
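For reference, here is a quick dimension sanity check (a sketch; pca replaces the deprecated princomp in newer MATLAB releases):

```
% images is [pixcount x pics] (3600 x 17 here); pca expects rows to be
% observations, so images' is the correct orientation.
[coeff, ~, latent] = pca(images');
size(coeff)    % [3600 x k]: one principal component per column, k <= 16
size(latent)   % [k x 1]: eigenvalues, largest first
```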
Try scaling the image to its full range when displaying:
for i = 1:num_eigenfaces
    % use the same grid as in the question so all 20 eigenfaces fit
    subplot(2, ceil(num_eigenfaces/2), i);
    eigface = reshape(evectors(:,i), image_dims);  % renamed from 'image' to avoid shadowing the built-in
    eigface = eigface';
    % scale image to full range
    imshow(eigface, []);
end
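To see why the unscaled versions render black (a small check, reusing the variables from the question):

```
% Eigenvectors are unit-norm doubles, so individual pixel values are tiny.
% imshow treats a double image as having range [0,1], so values near 0
% display as black.
v = evectors(:,1);
fprintf('min %g, max %g\n', min(v), max(v));   % both far below 1
imshow(reshape(v, image_dims), []);            % [] rescales to the data range
% equivalently: imshow(mat2gray(reshape(v, image_dims)))
```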
I have a clean image and a noisy image. I created a denoiser and applied it to the noisy image; that denoised image is my final output. To measure how close this output is to the clean image I need to compare them using PSNR and SSIM, but because the two images are positioned differently I am unable to compare them.
I am currently getting an SSIM of 0.5, which is very low, due to the improper placement of the two images. If the images were registered properly, I would expect the SSIM to come out around 0.80+, but I have not been able to accomplish this.
How can I align these two images to obtain a good SSIM value?
I have two coin images for comparison: the first is clean, the second is a denoised version of a noisy image.
Clean image:
Noisy image:
Because the images are positioned differently, ssim(img1,img2) gives a misleading result. I tried cropping, but that did not work.
Here is what I have tried so far:
Attempt 1:
function [valPSNR,valSSIM,badpict] = getSSIM(clean_img,img2)
    % pad reference image since object is so close to edges
    refpict = padarray(mat2gray(clean_img),[20 20],'replicate','both');
    % crop test image down to extract the object alone
    badpict = imcrop(mat2gray(img2),[2.5 61.5 357 363]);
    % maximize normalized cross-correlation to find offset
    szb = size(badpict);
    c = normxcorr2(badpict,refpict);
    [idxy, idxx] = find(c == max(c(:)));
    osy = idxy - szb(1);
    osx = idxx - szb(2);
    % crop the reference pict to the ROI
    refpict = refpict(osy:idxy-1, osx:idxx-1);
    %imshow(imfuse(badpict,refpict,'checkerboard'));
    %imagesc(badpict);
    valSSIM = ssim(badpict,refpict);
    valPSNR = getPSNR(badpict,refpict);
    img2 = badpict;
    clean_img = refpict;
    figure; imshowpair(clean_img,img2);
    figure; montage({mat2gray(clean_img),mat2gray(img2)}, 'Size', [1 2], 'BackgroundColor', 'w', 'BorderSize', [2 2]);
end
Attempt 2:
function [valPSNR,valSSIM,badpict] = getSSIM2(clean_img,img2)
    % binarize both images to isolate the coin
    bw1 = im2bw(mat2gray(clean_img));
    bw2 = imclose(im2bw(mat2gray(img2),0.3),strel('disk',9));
    bw2 = bwareafilt(bw2,1);
    % crop both to the object's bounding box and make them the same size
    [r,c] = find(bw1);
    clean_img = clean_img(min(r):max(r), min(c):max(c));
    [r,c] = find(bw2);
    img2 = img2(min(r):max(r), min(c):max(c));
    img2 = imresize(img2, size(clean_img), 'bilinear');
    valPSNR = getPSNR(mat2gray(clean_img),mat2gray(img2));
    valSSIM = ssim(mat2gray(clean_img),mat2gray(img2));
    badpict = img2;
    figure; imshowpair(clean_img,img2);
    figure; montage({mat2gray(clean_img),mat2gray(img2)}, 'Size', [1 2], 'BackgroundColor', 'w', 'BorderSize', [2 2]);
end
As others have pointed out, the resampling required by registration will have some non-zero error. But, here is some sample code that will take you through the registration part that is the crux of your question.
% SSIM isn't defined on RGB images, convert to grayscale.
ref = rgb2gray(imread('https://i.stack.imgur.com/tPKEJ.png'));
X = rgb2gray(imread('https://i.stack.imgur.com/KmU4y.png'));
% The input image data has bright borders at the edges that create
% artifacts in resampling, best to just crop those or maybe there are
% acquisitions that don't have these borders?
X = X(3:end-2,3:end-2);
ref = ref(4:end-3,4:end-3);
figure
montage({X,ref});
tform = imregcorr(X,ref,"translation");
Xreg = imwarp(X,tform,OutputView=imref2d(size(ref)),SmoothEdges=true);
figure
imshowpair(Xreg,ref)
ssim(Xreg,ref)
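Once the images are registered, PSNR can be computed on the same aligned pair (a small addition, assuming the Image Processing Toolbox):

```
% Compare both metrics on the registered pair.
fprintf('SSIM: %.3f, PSNR: %.2f dB\n', ssim(Xreg,ref), psnr(Xreg,ref));
```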
Maybe you can refer to my GitHub.
I implemented a template-matching algorithm in OpenCV; you can use NCC-based pattern matching to find targets and get a similarity score.
You can then use this score to decide whether the image is clean.
Translating the C++ code may take some work, but every function used has a corresponding MATLAB equivalent.
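In MATLAB, the core NCC step corresponds to normxcorr2 (a rough sketch of the scoring idea only, not the full pyramid/rotation search below; template and scene are hypothetical variables):

```
c = normxcorr2(template, scene);   % normalized cross-correlation map
score = max(c(:));                 % best-match similarity in [-1, 1]
isClean = score > 0.85;            % threshold used in the examples below
```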
Here is the effect (red blocks are areas whose similarity to the golden sample exceeds the 0.85 threshold):
The whole function is too long to post here. Part of it:
for (int i = 0; i < iSize; i++)
{
    Mat matRotatedSrc, matR = getRotationMatrix2D(ptCenter, vecAngles[i], 1);
    Mat matResult;
    Point ptMaxLoc;
    double dValue, dMaxVal;
    double dRotate = clock();
    Size sizeBest = GetBestRotationSize(vecMatSrcPyr[iTopLayer].size(), pTemplData->vecPyramid[iTopLayer].size(), vecAngles[i]);
    float fTranslationX = (sizeBest.width - 1) / 2.0f - ptCenter.x;
    float fTranslationY = (sizeBest.height - 1) / 2.0f - ptCenter.y;
    matR.at<double>(0, 2) += fTranslationX;
    matR.at<double>(1, 2) += fTranslationY;
    warpAffine(vecMatSrcPyr[iTopLayer], matRotatedSrc, matR, sizeBest);
    MatchTemplate(matRotatedSrc, pTemplData, matResult, iTopLayer);
    minMaxLoc(matResult, 0, &dMaxVal, 0, &ptMaxLoc);
    vecMatchParameter[i * (m_iMaxPos + MATCH_CANDIDATE_NUM)] = s_MatchParameter(Point2f(ptMaxLoc.x - fTranslationX, ptMaxLoc.y - fTranslationY), dMaxVal, vecAngles[i]);
    for (int j = 0; j < m_iMaxPos + MATCH_CANDIDATE_NUM - 1; j++)
    {
        ptMaxLoc = GetNextMaxLoc(matResult, ptMaxLoc, -1, pTemplData->vecPyramid[iTopLayer].cols, pTemplData->vecPyramid[iTopLayer].rows, dValue, m_dMaxOverlap);
        vecMatchParameter[i * (m_iMaxPos + MATCH_CANDIDATE_NUM) + j + 1] = s_MatchParameter(Point2f(ptMaxLoc.x - fTranslationX, ptMaxLoc.y - fTranslationY), dValue, vecAngles[i]);
    }
}
FilterWithScore(&vecMatchParameter, m_dScore - 0.05 * iTopLayer);
I am trying to create training and testing sets out of my ground-truth (observation) data, which is provided in TIFF (raster) format.
Specifically, I have a hyperspectral (satellite) image with 200 dimensions (channels/bands), along with the corresponding labels (17 classes) stored in another image. My goal is to implement a classification algorithm and then check its accuracy on the testing set.
My problem is that I do not know how to tell my algorithm which pixel belongs to which class, and then how to split the pixels into training and testing sets.
I have sketched a rough idea of my goal, which is as follows:
But I do not want to do it this way, since my image is 145 x 145 pixels, so it is not easy to locate these pixels and manually assign them to their corresponding classes.
Note that the following example is for a 3-band image while mine has 200 bands, and I already have the labels (ground truth), so I do not need to specify them as in the code below; I just want to assign each pixel to its class.
% Assigning pixel(by their location)to different groups.
tpix=[1309,640 ,1;... % Group 1
1218,755 ,1;...
1351,1409,2;... % Group 2
673 ,394 ,2;...
285 ,1762,3;... % Group 3
177 ,1542,3;...
538 ,1754,4;... % Group 4
432 ,1811,4;...
1417,2010,5;... % Group 5
163 ,1733,5;...
652 ,677 ,6;... % Group 6
864 ,1032,6];
row = tpix(:,1);   % y-value
col = tpix(:,2);   % x-value
group = tpix(:,3); % group number
ngroup = max(group);
% create training set
train = [];
for i = 1:length(group)
    train = [train; r(row(i),col(i)), g(row(i),col(i)), b(row(i),col(i))];
end %for
Do I understand this right? In the second-to-last line, the train variable accumulates the values it has so far plus the pixel's red, green, and blue values? Do you want them displayed only in red, green, and blue? Only certain ones, or all of them? I could imagine defining an image matrix and then placing the values in the image's red, green, and blue layers. Would that help? I'll write the code for you if this is your issue :)
Edit: Solution
% download the .mat files from the website and put them in the script's folder
load 'Indian_pines_corrected.mat';
load 'Indian_pines_gt.mat';
ipc = indian_pines_corrected;
gt = indian_pines_gt;
% initialize cell array, one cell per class
train = cell(16,1);
% loop to look up the class number at each (i,j) pixel coordinate
for c = 1:16
    for i = 1:145
        for j = 1:145
            % if the class number equals the number in the gt pixel,
            % place the pixel from ipc(i,j,:) into train{classnumber}(i,j,:)
            if gt(i,j) == c
                train{c}(i,j,:) = ipc(i,j,:);
            end %if
        end %for j
    end %for i
end %for c
Now you get the train cell array, which holds one matrix per cell. Each cell is one class and contains only the pixels you want. You can check for yourself that the classes correspond to the expected shapes.
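For a quick visual check (a sketch; band 30 is an arbitrary band choice):

```
% Display one spectral band of class 1's pixels; non-class pixels stay 0.
figure;
imagesc(train{1}(:,:,30));
axis image; colorbar;
```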
Eventually, I was able to solve my problem. The following code reshapes the matrix (raster) to a vector, and then I index the ground-truth data to find the corresponding pixel locations in the hyperspectral image.
Note that I am still looking for an efficient way to construct the training and testing sets.
GT = indian_pines_gt;
data = indian_pines_corrected;
data_vec = reshape(data, 145*145, 200);
GT_vec = reshape(GT, 145*145, 1);
[GT_vec_sort, idx] = sort(GT_vec);
% INDEXING: keep only labeled pixels (classes 1..16)
index = find(and(GT_vec_sort > 0, GT_vec_sort <= 16));
classes_num = GT_vec_sort(index);
% length(index)
for k = 1:length(index)
    classes(k,:) = data_vec(idx(index(k)),:);
end
figure(1)
plot(GT_vec_sort)
Update:
I have done the following to create training and testing sets for hyperspectral images (the Indian Pines dataset); no for loop is needed.
clear all
load('Indian_pines_corrected.mat');
load('Indian_pines_gt.mat');
GT = indian_pines_gt;
data = indian_pines_corrected;
% Convert image from raster to vector.
data_vec = reshape(data, 145*145, 200);
% Find the locations of the desired classes.
GT_loc = find(and(GT > 0, GT <= 16));
GT_class = GT(GT_loc);
data_value = data_vec(GT_loc,:);
% Explanatory variables plus response variable:
% [200 (variables/channels) + 1 (labels) = 201]
dat = [data_value, GT_class];
% Create random test and training sets.
[m,n] = size(dat);
P = 0.70;
idx = randperm(m);
Train = dat(idx(1:round(P*m)),:);
Test = dat(idx(round(P*m)+1:end),:);
X_train = Train(:,1:200); y_train = Train(:,201);
X_test = Test(:,1:200); y_test = Test(:,201);
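From here, fitting and scoring a classifier is straightforward (a minimal sketch, assuming the Statistics and Machine Learning Toolbox; k-NN is just one arbitrary choice):

```
% Train a simple k-nearest-neighbor classifier and report test accuracy.
mdl = fitcknn(X_train, y_train, 'NumNeighbors', 5);
y_pred = predict(mdl, X_test);
accuracy = mean(y_pred == y_test)
```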
I'm trying to estimate the (unknown) original datapoints that went into calculating a (known) moving average. However, I do know some of the original datapoints, and I'm not sure how to use that information.
I am using the method given in the answers here: https://stats.stackexchange.com/questions/67907/extract-data-points-from-moving-average, but in MATLAB (my code below). This method works quite well for large numbers of data points (>1000), but less well with fewer data points, as you'd expect.
window = 3;
datapoints = 150;
data = 3*rand(1,datapoints)+50;
moving_averages = [];
for i = window:size(data,2)
    moving_averages(i) = mean(data(i+1-window:i));
end
% use 'len' instead of 'length' to avoid shadowing the built-in function
len = size(moving_averages,2)+(window-1);
a = (tril(ones(len,len),window-1) - tril(ones(len,len),-1))/window;
a = a(1:len-(window-1),:);
ai = pinv(a);
daily = mtimes(ai,moving_averages');
x = 1:size(data,2);
figure(1)
hold on
plot(x,data,'Color','b');
plot(x(window:end),moving_averages(window:end),'Linewidth',2,'Color','r');
plot(x,daily(window:end),'Color','g');
hold off
axis([0 size(x,2) min(daily(window:end))-1 max(daily(window:end))+1])
legend('original data','moving average','back-calculated')
Now, say I know a smattering of the original data points. I'm having trouble figuring out how I might use that information to reconstruct the rest more accurately. Thank you for any assistance.
You should be able to calculate the original data exactly whenever you can determine one window's worth of data at some point, i.e. n-1 samples in a window of length n. (In your case) if you know A, B, and (A+B+C)/3, you can solve for C. Then when you have (B+C+D)/3 (your next moving average) you can solve for D. Rinse and repeat. This logic works going backwards too.
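That recurrence is easy to verify in code (a small sketch; a_true and the known leading samples are hypothetical stand-ins):

```
% Each average m(k) = mean(a(k:k+win-1)) yields the next unknown sample:
% a(k+win-1) = win*m(k) - sum(a(k:k+win-2)).
win = 3;
a_true = 3*rand(1,150) + 50;                 % pretend original data
m = conv(a_true, ones(1,win)/win, 'valid');  % the known moving averages
a_rec = zeros(1,150);
a_rec(1:win-1) = a_true(1:win-1);            % assume first win-1 samples known
for k = 1:numel(m)
    a_rec(k+win-1) = win*m(k) - sum(a_rec(k:k+win-2));
end
max(abs(a_rec - a_true))                     % ~0 up to round-off
```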
Here is an example with the same idea:
% the actual vector of values
a = cumsum(rand(150,1) - 0.5);
% compute moving average
win = 3; % sliding window length
idx = hankel(1:win, win:numel(a));
m = mean(a(idx));
% coefficient matrix: m(i) = sum(a(i:i+win-1))/win
A = repmat([ones(1,win) zeros(1,numel(a)-win)], numel(a)-win+1, 1);
for i = 2:size(A,1)
    A(i,:) = circshift(A(i-1,:), [0 1]);
end
A = A / win;
% solve linear system
%x = A \ m(:);
x = pinv(A) * m(:);
% plot and compare
subplot(211), plot(1:numel(a),a, 1:numel(m),m)
legend({'original','moving average'})
title(sprintf('length = %d, window = %d',numel(a),win))
subplot(212), plot(1:numel(a),a, 1:numel(a),x)
legend({'original','reconstructed'})
title(sprintf('error = %f',norm(x(:)-a(:))))
You can see the reconstruction error is very small, even using the data sizes in your example (150 samples with a 3-samples moving average).
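If you only know scattered original samples rather than a full window, you can fold them into the same linear system as extra constraint rows (a sketch built on the variables above; knownIdx is a hypothetical set of known positions):

```
% Append one row per known sample, encoding x(knownIdx(k)) == a(knownIdx(k)).
knownIdx = [5 40 90];                        % positions of known samples
C = zeros(numel(knownIdx), numel(a));
C(sub2ind(size(C), 1:numel(knownIdx), knownIdx)) = 1;
Aaug = [A; C];
baug = [m(:); a(knownIdx)];
x2 = pinv(Aaug) * baug;                      % constrained least-squares solve
norm(x2 - a)                                 % error should drop vs. x
```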
I want to find how similar a picture is to some samples that I have (black and white).
I use the sum-of-absolute-differences code below, but because I'm new to MATLAB I haven't figured out how to use it. How does this algorithm work? Does it give a measure of how similar the pictures are?
I= imread('img1.jpg');
image2= imread('img2.jpg');
% J = uint8(filter2(fspecial('gaussian'), I));
K = imabsdiff(I,image2);
figure, imshow(K,[])
Well, I think you pretty much answered your own question. It is the sum of the absolute difference. So let's say you have img1 and img2, which are the same size and type.
To find the difference, use subtraction:
img1-img2
To find the absolute difference, use the absolute value function abs:
abs(img1-img2)
To find the sum, use the sum function. Note that you need one sum(...) per dimension of your image. If you are not sure, type size(img1) and see whether 2 or 3 numbers show up; that corresponds to how many sum(...) calls you need.
For a color image (3 dimensions):
sum(sum(sum(abs(img1-img2))))
That is the sum of the absolute differences. Whichever image has the smallest sum can be considered the closest match.
If you have differently sized images, you can use the normxcorr2 function. It returns a matrix of correlation scores showing how well the template (small) image fits the big image at each offset. Find the maximum value of that matrix; that is how well the images match.
For instance:
correlation = normxcorr2(smallImg, bigImg);
compareMe = max(correlation(:))
It is best practice to use MATLAB's built-in function imabsdiff. In contrast to the other suggested answers, it handles the range boundaries when your image is stored as uint8. Consider:
img1 = uint8(10);
img2 = uint8(20);
sum(abs(img1(:)-img2(:)))
gives you 0, whereas
imabsdiff(img1(:),img2(:))
correctly gives 10.
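So, for whole images, a saturation-safe SAD is simply (a one-line sketch; sum(...,'all') requires R2018b or later):

```
sad = sum(imabsdiff(img1, img2), 'all');   % avoids uint8 saturation
```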
You can use MATLAB's im2col command to do this in a vectorized manner. Just arrange each neighborhood into a column (for each frame), put the results in a 3D matrix, and apply your operation along the 3rd dimension.
Code Snippet
I used Wikipedia's definition of "Sum of Absolute Differences".
The demo script:
```
% Sum of Absolute Differences Demo
numRows = 10;
numCols = 10;
refBlockRadius = 1;
refBlockLength = (2 * refBlockRadius) + 1;
mImgSrc = randi([0, 255], [numRows, numCols]);
mRefBlock = randi([0, 255], [refBlockLength, refBlockLength]);
mSumAbsDiff = SumAbsoluteDifferences(mImgSrc, mRefBlock);
```
The Function SumAbsoluteDifferences:
```
function [ mSumAbsDiff ] = SumAbsoluteDifferences( mInputImage, mRefBlock )
% Computes the Sum of Absolute Differences between mRefBlock and every
% sliding neighborhood of mInputImage (replicate padding at the borders).
numRows = size(mInputImage, 1);
numCols = size(mInputImage, 2);
blockLength = size(mRefBlock, 1);
blockRadius = (blockLength - 1) / 2;
mInputImagePadded = padarray(mInputImage, [blockRadius, blockRadius], 'replicate', 'both');
mBlockCol = im2col(mInputImagePadded, [blockLength, blockLength], 'sliding');
mSumAbsDiff = sum(abs(bsxfun(@minus, mBlockCol, mRefBlock(:))));
mSumAbsDiff = col2im(mSumAbsDiff, [blockLength, blockLength], [(numRows + blockLength - 1), (numCols + blockLength - 1)]);
end
```
Enjoy...
I'd like to resize a matrix of size 72x144x156 to a 180x360x156 grid. For a single slice I can do it with this command: resizem(precip,2.5). The first two dimensions are latitude and longitude, while the last dimension is time, and I don't want time to be resized.
This works if the matrix is of size 72x144, but it doesn't work for size 72x144x156. Is there a way to resize the first two dimensions without resizing the third?
Also, what is the fastest way to do this (preferably without a for loop)? If a for loop is necessary, that's fine.
As I hinted in my comment, you could use interp3 like this:
outSize = [180 360 156];
[nrows,ncols,ntimes] = size(data);
scales = [nrows ncols ntimes] ./ outSize;
% query points chosen so output pixel centers align with input pixel centers
xq = (1:outSize(2))*scales(2) + 0.5 * (1 - scales(2));
yq = (1:outSize(1))*scales(1) + 0.5 * (1 - scales(1));
zq = (1:outSize(3))*scales(3) + 0.5 * (1 - scales(3));
[Xq,Yq,Zq] = meshgrid(xq,yq,zq);
dataLarge = interp3(data,Xq,Yq,Zq);
But the problem is simpler if you know you don't want to interpolate between time points: you can loop over time as in Daniel R's answer. Note that this approach will not increase the number of time points.
D = ...;  % your existing 72x144x156 matrix
scale = 2.5;
E = zeros(size(D,1)*scale, size(D,2)*scale, size(D,3));
for depth = 1:size(D,3)
    E(:,:,depth) = resizem(D(:,:,depth), scale);
end
This should provide the expected output.
% s = zeros(72, 144, 156);
% whos s;
% news = resize2D(s, 2.5);
% whos news;
function [result] = resize2D(input, multiply)
    [d1, d2, d3] = size(input);
    result = zeros(d1*multiply, d2*multiply, d3);
    % resize each time slice; imresize scales only the first two dimensions
    for depth = 1:d3
        result(:,:,depth) = imresize(input(:,:,depth), multiply);
    end
end