Create training and testing sets with ground truth for hyperspectral satellite imagery - MATLAB

I am trying to create training and testing sets from my ground truth (observation) data, which are provided in TIF (raster) format.
I have a hyperspectral (satellite) image with 200 dimensions (channels/bands), along with the corresponding labels (17 classes), which are stored in another image. My goal is to implement a classification algorithm and then check its accuracy on the testing dataset.
My problem is that I do not know how to tell my algorithm which pixel belongs to which class, and then how to split the pixels into training and testing sets.
I have sketched a rough idea of my goal, which is as follows:
But I do not want to do it this way, since the image is 145 x 145 pixels, so it is not easy to locate these pixels and manually assign them to their corresponding classes.
Note that the following example is for a 3-band (RGB) image, whereas mine has 200 bands, and I already have the labels (ground truth), so I do not need to specify them as in the code below; I only want to assign them to their member pixels.
% Assigning pixels (by their location) to different groups.
tpix = [1309,640 ,1;... % Group 1
        1218,755 ,1;...
        1351,1409,2;... % Group 2
        673 ,394 ,2;...
        285 ,1762,3;... % Group 3
        177 ,1542,3;...
        538 ,1754,4;... % Group 4
        432 ,1811,4;...
        1417,2010,5;... % Group 5
        163 ,1733,5;...
        652 ,677 ,6;... % Group 6
        864 ,1032,6];
row   = tpix(:,1); % y-value
col   = tpix(:,2); % x-value
group = tpix(:,3); % group number
ngroup = max(group);
% create training set
train = [];
for i = 1:length(group)
    train = [train; r(row(i),col(i)), g(row(i),col(i)), b(row(i),col(i))];
end %for

Do I understand this right? In the second-to-last line, the train variable accumulates the values it had so far plus the pixel values in red, green and blue? That is, do you want them displayed only in red, green and blue? Only certain ones or all of them? I could imagine defining an image matrix and then placing the values in the image's red, green and blue layers. Would that help? I'd write the code for you if this is your issue :)
Edit: Solution
% download the .mat files from the website and put them in the folder of the script
load 'Indian_pines_corrected.mat';
load 'Indian_pines_gt.mat';
ipc = indian_pines_corrected;
gt = indian_pines_gt;
% initialize a cell array with one cell per class
train = cell(16,1);
% loop over the class numbers and the x and y pixel coordinates
for c = 1:16
    for i = 1:145
        for j = 1:145
            % if the class number equals the number in the gt pixel,
            % then copy the pixel from ipc(i,j,:) into train{classnumber}(i,j,:)
            if gt(i,j) == c
                train{c}(i,j,:) = ipc(i,j,:);
            end %if
        end %for j
    end %for i
end %for c
Now you get the train cell array, which holds one matrix per cell. Each cell corresponds to one class and contains only the pixels that belong to it. You can check for yourself whether the classes correspond to the expected shapes.
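If you then need each class as rows of 200-band feature vectors (which is what most classifiers expect), a possible follow-up sketch, reusing gt and ipc from above, would be:
features = cell(16,1);
vec = reshape(ipc, [], size(ipc,3));       % 145*145 x 200, one row per pixel
for c = 1:16
    mask = (gt == c);                      % logical mask of the pixels in class c
    features{c} = vec(mask(:), :);         % keep only the rows of that class
end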

Eventually, I was able to solve my problem. The following code reshapes the matrix (raster) into a vector, and then I index the ground truth data to find the location of the corresponding pixels in the hyperspectral image.
Note that I am still looking for an efficient way to construct the training and testing sets.
GT = indian_pines_gt;
data = indian_pines_corrected;
data_vec = reshape(data, 145*145, 200);
GT_vec = reshape(GT, 145*145, 1);
[GT_vec_sort,idx] = sort(GT_vec);
% INDEXING.
index = find(and(GT_vec_sort>0, GT_vec_sort<=16));
classes_num = GT_vec_sort(index);
%length(index)
for k = 1:length(index)
    classes(k,:) = data_vec(idx(index(k)),:);
end
figure(1)
plot(GT_vec_sort)
New: I have done the following to create training and testing sets for the hyperspectral (Indian Pines) dataset. No need to use a for loop.
clear all
load('Indian_pines_corrected.mat');
load('Indian_pines_gt.mat');
GT = indian_pines_gt;
data = indian_pines_corrected;
% Convert image from raster to vector.
data_vec = reshape(data, 145*145, 200);
% Find the locations of the desired classes.
GT_loc = find(and(GT>0, GT<=16));
GT_class = GT(GT_loc);
data_value = data_vec(GT_loc,:);
% Explanatory variables plus response variable.
% [200 (variables/channels) + 1 (labels) = 201]
dat = [data_value, GT_class];
% Create random training and test sets.
[m,n] = size(dat);
P = 0.70;
idx = randperm(m);
Train = dat(idx(1:round(P*m)),:);
Test  = dat(idx(round(P*m)+1:end),:);
X_train = Train(:,1:200); y_train = Train(:,201);
X_test  = Test(:,1:200);  y_test  = Test(:,201);
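To then run a classifier and check its accuracy on the held-out pixels, a minimal sketch (assuming the Statistics and Machine Learning Toolbox is available; k-NN is just one possible choice) could look like this:
mdl = fitcknn(X_train, y_train, 'NumNeighbors', 5);   % train on the 200-band pixels
y_pred = predict(mdl, X_test);                        % classify the test pixels
accuracy = mean(y_pred == y_test) * 100;              % overall accuracy in percent
fprintf('Overall accuracy = %.2f %%\n', accuracy);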

Related

Matlab implementation of light speed labeling

I am trying to implement code for the light speed labeling technique described in this article (I cannot use the Image Processing Toolbox): https://pdfs.semanticscholar.org/ef31/7c257603004d818ca1e2a2aa67d36d40147e.pdf (see section 2, page 7).
Here is my Matlab code for LSL equivalence construction (algorithm 14, step 2).
function [EQ,ERAi,nea] = LSL_equivalence(EQ,ERim1,RLCi,ERAim1,ERAi,NERi,nea,lImg)
% LSL_EQUIVALENCE build the associative table between er and ea
% GOAL: create a look-up table to be applied to ERi to create EAi.
for er = 1:2:NERi % check segments one by one
    % read the boundaries of each segment to obtain the relative labels
    % of every adjacent segment in the previous line
    j0 = RLCi(er);
    j1 = RLCi(er+1);
    er0 = ERim1(j0+1); % label of the first segment
    er1 = ERim1(j1+1); % label of the last segment
    % check label parity: segments are odd, background is even
    % bitand([1 2 3 4 5],1) == [1 0 1 0 1]
    if bitand(er0,1) == 0 % if er0 is even
        er0 = er0 + 1;
    end
    if bitand(er1,1) == 0 % if er1 is even
        er1 = er1 - 1;
    end
    if er1 >= er0 % if there is an adjacency
        ea = ERAim1(er0+1); % absolute label of the first segment
        a = EQ(ea+1); % a is the ancestor (smallest label of the equivalence class)
        for erk = (er0+2):2:er1
            eak = ERAim1(erk+1);
            ak = EQ(eak+1);
            % min extraction and propagation
            if a < ak
                EQ(eak+1) = a;
            else
                a = ak;
                EQ(ea+1) = a;
                ea = eak;
            end
        end
        ERAi(er+1) = a; % the global min of all ak ancestors
    else % if there are no adjacent labels make a new label
        nea = nea + 1;
        ERAi(er+1) = nea;
    end
end
end
I am having some trouble with indices, as the pseudocode described in the article uses indices starting at 0 while Matlab indexing starts at 1. I have already found some C++ code in this Stack Overflow post, Implementing LSL for Connected Component Labeling/Blob Extraction (I applied the suggested changes), and also in this git repo https://github.com/prittt/YACCLAB/blob/master/include/labeling_lacassagne_2016_code.inc, but I fail to see the differences.
Also, I'm having some trouble understanding what an equivalence class is (which is what goes in matrix EQ). Thanks ahead of time!
I realize this is coming a bit late, but I just put up a piece of code that is similar to what the light speed labeling algorithm does. I'll give a brief description of how I solved the index problem.
The first step of LSL is to take each column of pixels and find the start and stop positions of each run of consecutively labeled pixels. You can do that in Matlab like this:
I = imread('text.png');
[rows,cols] = find(xor(I(2:end,:),I(1:end-1,:)));
What this gives you is the row and column of the start and stop position of each run of pixels in a column, except that it uses non-inclusive indexing. By non-inclusive indexing I mean that the indices of pixel run n go from I(r(2*n-1),c(2*n-1)) to I(r(2*n)-1,c(2*n)). You should note that the paper operates along rows where the above code operates along columns, but the same principle applies. You should also note that the above code does not cover the circumstance of labeled pixels on the edge of the image.
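Here is a tiny sketch on hypothetical one-column data showing what those transition indices look like (my reading of the indexing; the shift comes from comparing I(2:end) with I(1:end-1)):
col = [0 0 1 1 1 0 1 1 0 0]';                 % one column of a binary image
edges = find(xor(col(2:end), col(1:end-1)));  % transition positions -> [2; 5; 6; 8]
% the runs of ones in col occupy rows 3-5 and 7-8; each consecutive pair of
% transition indices brackets one run (start = odd edge + 1, stop = even edge)
starts = edges(1:2:end) + 1;                  % -> [3; 7]
stops  = edges(2:2:end);                      % -> [5; 8]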
If you want to see a complete implementation, I posted my code on the Matlab File Exchange. I don't claim that it copies LSL exactly, but it works on many of the same principles.
https://www.mathworks.com/matlabcentral/fileexchange/70323-ftllabel?s_tid=prof_contriblnk

Matlab SVM example

I am trying to implement SVM for classification. The goal is to output the correct grid of origin of a power signal (.wav file). The grids are titled A-I and there are 93 total signals for the training set and 49 practice signals. I have a 93x10x36 matrix of feature vectors. Does anyone know why I get the errors shown? TrainCorrectGrid and Training_Cepstrum1 both have 93 rows so I don't understand what the problem is. Any help is greatly appreciated.
My code is shown here:
clc; clear; close all;
load('avg_fft_feature (4).mat'); %training feature vectors
load('practice_fft_Mag_all (2).mat'); %practice feature vectors
load('practice_GridOrigin.mat'); %correct grids of origin for practice data
load PracticeCorrectGrid.mat;
load Training_Cepstrum1;
load Practice_Cepstrum1a;
load fSet1.mat %load in correct practice grids
TrainCorrectGrid=['A';'A';'A';'A';'A';'A';'A';'A';'A';'B';'B';'B';'B';'B';'B';'B';'B';'B';'B';'C';'C';'C';'C';'C';'C';'C';'C';'C';'C';'C';'D';'D';'D';'D';'D';'D';'D';'D';'D';'D';'D';'E';'E';'E';'E';'E';'E';'E';'E';'E';'E';'E';'F';'F';'F';'F';'F';'F';'F';'F';'G';'G';'G';'G';'G';'G';'G';'G';'G';'G';'G';'H';'H';'H';'H';'H';'H';'H';'H';'H';'H';'H';'I';'I';'I';'I';'I';'I';'I';'I';'I';'I';'I'];
%[results,u] = multisvm(avg_fft_feature, TrainCorrectGrid, avg_fft_feature_practice);%avg_fft_feature);
[results,u] = multisvm(Training_Cepstrum1(93,:,1), TrainCorrectGrid, Practice_Cepstrum1a(49,:,1));
disp('Grids of Origin (SVM)');
%Display SVM Results
for i = 1:numel(u)
    str = sprintf('%d: %s', i, u(i));
    disp(str);
end
%Display Percent Correct
numCorrect = 0;
for i = 1:numel(u)
    %if (strcmp(TrainCorrectGrid(i,1), u(i))==1) %compare training to training
    if (strcmp(PracticeCorrectGrid(i,1), u(i))==1) %compare practice data to training
        numCorrect = numCorrect + 1;
    end
end
numberOfElements = numel(u);
percentCorrect = numCorrect / numberOfElements * 100;
% percentCorrect = round(percentCorrect, 2);
dispPercent = sprintf('Percent Correct = %0.3f%%', percentCorrect);
disp(dispPercent);
error shown here
The multisvm function is shown here:
function [result, u] = multisvm(TrainingSet,GroupTrain,TestSet)
%Models a given training set with a corresponding group vector and
%classifies a given test set using an SVM classifier according to a
%one vs. all relation.
%
%This code was written by Cody Neuburger cneuburg#fau.edu
%Florida Atlantic University, Florida USA and slightly modified by Renny Varghese
%This code was adapted and cleaned from Anand Mishra's multisvm function
%found at http://www.mathworks.com/matlabcentral/fileexchange/33170-multi-class-support-vector-machine/
u = unique(GroupTrain);
numClasses = length(u);
result = zeros(length(TestSet(:,1)),1);
%build models
for k = 1:numClasses
    %Vectorized statement that binarizes Group
    %where 1 is the current class and 0 is all other classes
    G1vAll = (GroupTrain==u(k));
    models(k) = svmtrain(TrainingSet,G1vAll);
end
%classify test cases
for j = 1:size(TestSet,1)
    for k = 1:numClasses
        if(svmclassify(models(k),TestSet(j,:)))
            break;
        end
    end
    result(j) = k;
end
mapValues = 'ABCDEFGHI';
u = mapValues(result);
You state that Training_Cepstrum1 has size [93,10,36]. But when you call multisvm, you are only passing in Training_Cepstrum1(93,:,1) which has size [1,10]. Since TrainCorrectGrid has size [93,1], there is a mismatch in the number of rows.
It looks like you make the same error when passing in Practice_Cepstrum1a.
Try replacing your call to multisvm with
[results,u] = multisvm(Training_Cepstrum1(:,:,1), TrainCorrectGrid, Practice_Cepstrum1a(:,:,1));
This way Training_Cepstrum1(:,:,1) has size [93,10], the same number of rows as TrainCorrectGrid.
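If you actually want to use all 36 slices rather than just the first one, a hedged alternative sketch (assuming every slice holds additional features for the same 93 and 49 signals) would be to flatten each tensor into one feature matrix before the call:
X_train_full = reshape(Training_Cepstrum1, size(Training_Cepstrum1,1), []);   % 93 x 360
X_test_full  = reshape(Practice_Cepstrum1a, size(Practice_Cepstrum1a,1), []); % 49 x 360
[results,u] = multisvm(X_train_full, TrainCorrectGrid, X_test_full);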

Plot portfolio composition map in Julia (or Matlab)

I am optimizing a portfolio of N stocks over M levels of expected return. After doing this I get the resulting weights (i.e. an M x N matrix where each row is the combination of stock weights for a particular level of expected return). The weights add up to 1.
Now I want to plot something called a portfolio composition map (right plot in the picture), which is a plot of these stock weights over all levels of expected return, each stock with a distinct color and with a length (at every level of return) proportional to its weight.
My questions is how to do this in Julia (or MATLAB)?
I came across this and the accepted solution seemed so complex. Here's how I would do it:
using Plots
@userplot PortfolioComposition
@recipe function f(pc::PortfolioComposition)
    weights, returns = pc.args
    weights = cumsum(weights,dims=2)
    seriestype := :shape
    for c=1:size(weights,2)
        sx = vcat(weights[:,c], c==1 ? zeros(length(returns)) : reverse(weights[:,c-1]))
        sy = vcat(returns, reverse(returns))
        @series Shape(sx, sy)
    end
end
# fake data
tickers = ["IBM", "Google", "Apple", "Intel"]
N = 10
D = length(tickers)
weights = rand(N,D)
weights ./= sum(weights, dims=2)
returns = sort!((1:N) + D*randn(N))
# plot it
portfoliocomposition(weights, returns, labels = tickers)
matplotlib has a pretty powerful polygon plotting capability, e.g. this link on plotting filled polygons:
plotting filled polygons in python
You can use this from Julia via the excellent PyPlot.jl package.
Note that the syntax for certain things changes; see the PyPlot.jl README and e.g. this set of examples.
You "just" need to calculate the coordinates from your matrix and build up a set of polygons to plot the portfolio composition graph. It would be nice to see the code if you get this working!
So I was able to draw it, and here's my code:
using PyPlot
using PyCall
@pyimport matplotlib.patches as patch
N = 10
D = 4
weights = Array(Float64, N,D)
for i in 1:N
    w = rand(D)
    w = w/sum(w)
    weights[i,:] = w
end
weights = [zeros(Float64, N) weights]
weights = cumsum(weights,2)
returns = sort!([linspace(1,N, N);] + D*randn(N))
##########
#  Plot  #
##########
polygons = Array(PyObject, 4)
colors = ["red","blue","green","cyan"]
labels = ["IBM", "Google", "Apple", "Intel"]
fig, ax = subplots()
fig[:set_size_inches](5, 7)
title("Problem 2.5 part 2")
xlabel("Weights")
ylabel("Return (%)")
ax[:set_autoscale_on](false)
ax[:axis]([0,1,minimum(returns),maximum(returns)])
for i in 1:(size(weights,2)-1)
    xy = [weights[:,i] returns;
          reverse(weights[:,(i+1)]) reverse(returns)]
    polygons[i] = matplotlib[:patches][:Polygon](xy, true, color=colors[i], label = labels[i])
    ax[:add_artist](polygons[i])
end
legend(polygons, labels, bbox_to_anchor=(1.02, 1), loc=2, borderaxespad=0)
show()
# savefig("CompositionMap.png",bbox_inches="tight")
Can't say that this is the best way to do this, but at least it is working.
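For the MATLAB part of the question, a minimal sketch of the same idea (hypothetical random data; assumes a MATLAB version with implicit expansion, otherwise use bsxfun) could stack the cumulative weights with fill:
N = 10; D = 4;
labels = {'IBM','Google','Apple','Intel'};
weights = rand(N, D);
weights = weights ./ sum(weights, 2);          % each row of weights sums to 1
returns = sort((1:N)' + D*randn(N,1));
cw = [zeros(N,1) cumsum(weights, 2)];          % cumulative weight boundaries
figure; hold on;
for c = 1:D
    x = [cw(:,c); flipud(cw(:,c+1))];          % left and right boundary of band c
    y = [returns; flipud(returns)];
    fill(x, y, rand(1,3), 'DisplayName', labels{c});
end
xlabel('Weights'); ylabel('Return (%)'); legend('show');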

Bag of words not correctly labeling responses

I am trying to implement bag of words in OpenCV and have come up with the implementation below. I am using the Caltech 101 database. However, since it is my first time and I am not familiar with it, I have planned to use two image sets from the database: the chair image set and the soccer ball image set. I have coded the SVM using this.
Everything went all right, except that when I call classifier.predict(descriptor), I do not get the label value as intended. I always get 0 instead of '1', irrespective of my test image. There are 10 images in the chair dataset and 10 in the soccer ball dataset. I labelled chair as 0 and soccer ball as 1. The links represent samples of each category; the top 10 are chairs, the bottom 10 are soccer balls.
function hello
clear all; close all; clc;
detector = cv.FeatureDetector('SURF');
extractor = cv.DescriptorExtractor('SURF');
links = {
'http://i.imgur.com/48nMezh.jpg'
'http://i.imgur.com/RrZ1i52.jpg'
'http://i.imgur.com/ZI0N3vr.jpg'
'http://i.imgur.com/b6lY0bJ.jpg'
'http://i.imgur.com/Vs4TYPm.jpg'
'http://i.imgur.com/GtcwRWY.jpg'
'http://i.imgur.com/BGW1rqS.jpg'
'http://i.imgur.com/jI9UFn8.jpg'
'http://i.imgur.com/W1afQ2O.jpg'
'http://i.imgur.com/PyX3adM.jpg'
'http://i.imgur.com/U2g4kW5.jpg'
'http://i.imgur.com/M8ZMBJ4.jpg'
'http://i.imgur.com/CinqIWI.jpg'
'http://i.imgur.com/QtgsblB.jpg'
'http://i.imgur.com/SZX13Im.jpg'
'http://i.imgur.com/7zVErXU.jpg'
'http://i.imgur.com/uUMGw9i.jpg'
'http://i.imgur.com/qYSkqEg.jpg'
'http://i.imgur.com/sAj3pib.jpg'
'http://i.imgur.com/DMPsKfo.jpg'
};
N = numel(links);
trainer = cv.BOWKMeansTrainer(100);
train = struct('val',repmat({' '},N,1),'img',cell(N,1), 'pts',cell(N,1), 'feat',cell(N,1));
for i=1:N
    train(i).val = links{i};
    train(i).img = imread(links{i});
    if ndims(train(i).img) > 2
        train(i).img = rgb2gray(train(i).img);
    end;
    train(i).pts = detector.detect(train(i).img);
    train(i).feat = extractor.compute(train(i).img,train(i).pts);
end;
for i=1:N
    trainer.add(train(i).feat);
end;
dictionary = trainer.cluster();
extractor = cv.BOWImgDescriptorExtractor('SURF','BruteForce');
extractor.setVocabulary(dictionary);
for i=1:N
    desc(i,:) = extractor.compute(train(i).img,train(i).pts);
end;
a = zeros(1,10)';
b = ones(1,10)';
labels = [a;b];
classifier = cv.SVM;
classifier.train(desc,labels);
test_im =rgb2gray(imread('D:\ball1.jpg'));
test_pts = detector.detect(test_im);
test_feat = extractor.compute(test_im,test_pts);
val = classifier.predict(test_feat);
disp('Value is: ')
disp(val)
end
These are my test samples:
Soccer Ball
(source: timeslive.co.za)
Chair
Searching through this site, I think that my algorithm is okay, even though I am not quite confident about it. If anybody can help me find the bug, it would be appreciated.
Following Amro's code, this was my result:
Distribution of classes:
Value Count Percent
1 62 49.21%
2 64 50.79%
Number of training instances = 61
Number of testing instances = 65
Number of keypoints detected = 38845
Codebook size = 100
SVM model parameters:
svm_type: 'C_SVC'
kernel_type: 'RBF'
degree: 0
gamma: 0.5063
coef0: 0
C: 62.5000
nu: 0
p: 0
class_weights: 0
term_crit: [1x1 struct]
Confusion matrix:
ans =
29 1
1 34
Accuracy = 96.92 %
Your logic looks fine to me.
Now I guess you'll have to tweak the various parameters if you want to improve the classification accuracy. This includes the clustering algorithm parameters (such as the vocabulary size, clusters initialization, termination criteria, etc..), the SVM parameters (kernel type, the C coefficient, ..), the local features algorithm used (SIFT, SURF, ..).
Ideally, whenever you want to perform parameter selection, you ought to use cross-validation. Some methods already have such mechanism embedded (CvSVM::train_auto for instance), but for the most part you'll have to do this manually...
Finally you should follow general machine learning guidelines; see the whole bias-variance tradeoff dilemma. The online Coursera ML class discusses this topic in detail in week 6, and explains how to perform error analysis and use learning curves to decide what to try next (do we need to add more instances, increase model complexity, and so on..).
With that said, I wrote my own version of the code. You might wanna compare it with your code:
% dataset of images
% I previously saved them as: chair1.jpg, ..., ball1.jpg, ball2.jpg, ...
d = [
dir(fullfile('images','chair*.jpg')) ;
dir(fullfile('images','ball*.jpg'))
];
% local-features algorithm used
detector = cv.FeatureDetector('SURF');
extractor = cv.DescriptorExtractor('SURF');
% extract local features from images
t = struct();
for i=1:numel(d)
    % load image as grayscale
    img = imread(fullfile('images', d(i).name));
    if ~ismatrix(img), img = rgb2gray(img); end
    % extract local features
    pts = detector.detect(img);
    feat = extractor.compute(img, pts);
    % store along with class label
    t(i).img = img;
    t(i).class = find(strncmp(d(i).name,{'chair','ball'},4));
    t(i).pts = pts;
    t(i).feat = feat;
end
% split into training/testing sets
% (a better way would be to use cvpartition from Statistics toolbox)
disp('Distribution of classes:')
tabulate([t.class])
tTrain = t([1:7 11:17]);
tTest = t([8:10 18:20]);
fprintf('Number of training instances = %d\n', numel(tTrain));
fprintf('Number of testing instances = %d\n', numel(tTest));
% build visual vocabulary (by clustering training descriptors)
K = 100;
bowTrainer = cv.BOWKMeansTrainer(K, 'Attempts',5, 'Initialization','PP');
clust = bowTrainer.cluster(vertcat(tTrain.feat));
fprintf('Number of keypoints detected = %d\n', numel([tTrain.pts]));
fprintf('Codebook size = %d\n', K);
% compute histograms of visual words for each training image
bowExtractor = cv.BOWImgDescriptorExtractor('SURF', 'BruteForce');
bowExtractor.setVocabulary(clust);
M = zeros(numel(tTrain), K);
for i=1:numel(tTrain)
    M(i,:) = bowExtractor.compute(tTrain(i).img, tTrain(i).pts);
end
labels = vertcat(tTrain.class);
% train an SVM model (perform parameter selection using cross-validation)
svm = cv.SVM();
svm.train_auto(M, labels, 'SvmType','C_SVC', 'KernelType','RBF');
disp('SVM model parameters:'); disp(svm.Params)
% evaluate classifier using testing images
actual = vertcat(tTest.class);
pred = zeros(size(actual));
for i=1:numel(tTest)
    descs = bowExtractor.compute(tTest(i).img, tTest(i).pts);
    pred(i) = svm.predict(descs);
end
% report performance
disp('Confusion matrix:')
confusionmat(actual, pred)
fprintf('Accuracy = %.2f %%\n', 100*nnz(pred==actual)./numel(pred));
Here are the output:
Distribution of classes:
Value Count Percent
1 10 50.00%
2 10 50.00%
Number of training instances = 14
Number of testing instances = 6
Number of keypoints detected = 6300
Codebook size = 100
SVM model parameters:
svm_type: 'C_SVC'
kernel_type: 'RBF'
degree: 0
gamma: 0.5063
coef0: 0
C: 312.5000
nu: 0
p: 0
class_weights: []
term_crit: [1x1 struct]
Confusion matrix:
ans =
3 0
1 2
Accuracy = 83.33 %
So the classifier correctly labels 5 out of 6 images from the test set, which is not bad for a start :) Obviously you'll get different results each time you run the code due to the inherent randomness of the clustering step.
What is the number of images you are using to build your dictionary, i.e. what is N? From your code, it seems that you are only using the 20 images listed in links (10 per class). I hope this list was truncated down for this post, because otherwise that would be too few. Typically you need on the order of 1000 or more images to build the dictionary, and the images need not be restricted to only the 2 classes that you are classifying. Otherwise, with so few images and 100 clusters your dictionary is likely to be messed up.
Also, you might want to use SIFT as a first choice as it tends to perform better than the other descriptors.
Lastly, you can also debug by checking the detected keypoints. You can get OpenCV to draw the keypoints. Sometimes your keypoint detector parameters are not set properly, resulting in too few keypoints getting detected, which in turn gives poor feature vectors.
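For example, a quick sanity-check sketch (assuming the mexopencv wrapper used above exposes cv.drawKeypoints, mirroring OpenCV's drawKeypoints, and reusing detector and links from the question) could be:
img = imread(links{1});
if ndims(img) > 2, img = rgb2gray(img); end
pts = detector.detect(img);
fprintf('Detected %d keypoints\n', numel(pts));   % too few keypoints -> poor histograms
imshow(cv.drawKeypoints(img, pts));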
To understand more about the BOW algorithm, you can take a look at these posts here and here. The second post has a link to a free PDF of an O'Reilly book on computer vision using Python. The BOW model (and other useful stuff) is described in more detail in that book.
Hope this helps.

Eigenfaces shows emptyblack image in matlab [duplicate]

This question already has answers here:
eigenfaces are not showing correctly and are very dark
I have a set of 17 grayscale face pictures, and when I try to view the eigenfaces I get black images instead of ghost-like pictures.
input_dir = 'images';
image_dims = [60, 60];
filenames = dir(fullfile(input_dir, '*.jpg'));
num_images = numel(filenames);
images = [];
for n = 1:num_images
    filename = fullfile(input_dir, filenames(n).name);
    img = imresize(imread(filename), [60,60]);
    if n == 1
        images = zeros(prod(image_dims), num_images);
    end
    images(:, n) = img(:);
end
% Training
% steps 1 and 2: find the mean image and the mean-shifted input images
mean_face = mean(images, 2);
shifted_images = images - repmat(mean_face, 1, num_images);
% steps 3 and 4: calculate the ordered eigenvectors and eigenvalues
[evectors, score, evalues] = princomp(images');
% step 5: only retain the top 'num_eigenfaces' eigenvectors (i.e. the principal components)
num_eigenfaces = 20;
evectors = evectors(:, 1:num_eigenfaces);
% step 6: project the images into the subspace to generate the feature vectors
features = evectors' * shifted_images;
And to see the eigenfaces I used this code:
figure;
for n = 1:num_eigenfaces
    subplot(2, ceil(num_eigenfaces/2), n);
    evector = reshape(evectors(:,n), image_dims);
    imshow(evector);
end
I don't think it is supposed to look like this. Can someone point out what I did wrong?
You should check each step in the code and make sure they pass sanity checks. My guess is this
features = evectors' * shifted_images;
Should be this
features = shifted_images * evectors;
Which makes me wonder if shifted_images has the correct dimensions. The evectors should be a matrix where each column represents a component vector; that matrix will be [pics x n]. The shifted images should be a [pixcount x pics] matrix, where "pixcount" is the number of pixels in each picture and "pics" is the number of pictures. If evectors' * shifted_images works without a dimension error, I wonder whether one of the quantities isn't being calculated correctly. I think this transpose is the culprit:
princomp(images');
Try scaling the image:
for i=1:num_eigenfaces
    subplot(1,7,i);
    image = reshape(evectors(:,i), image_dims);
    image = image';
    % scale image to full range
    imshow(image, []);
end