Encode each training image as a histogram of the number of times each vocabulary element shows up for Bag of Visual Words - matlab

I want to implement bag of visual words in MATLAB. I used SURF features to extract features from the images and k-means to cluster those features into k clusters. I now have k centroids and I want to count how many times each cluster is used by assigning each image feature to its closest neighbor. Finally, I'd like to create a histogram of this for each image.
I tried to use the knnsearch function but it doesn't work in this case.
Here is my MATLAB code:
clc;
clear;
close all;
folder = 'CarData/TrainImages/cars';
filePattern = fullfile(folder, '*.pgm');
f=dir(filePattern);
files={f.name};
for k=1:numel(files)
    fullFileName = fullfile(folder, files{k});
    H = fspecial('log');
    image=imfilter(imread(fullFileName),H);
    temp = detectSURFFeatures(image);
    [im_features, temp] = extractFeatures(image, temp);
    features{k}= im_features;
end
features = vertcat(features{:});
image_feats = [];
[assignments,centers] = kmeans(double(features),500);
vocab = centers';
I now have all the image features in the features array and the cluster centres in centers (transposed into vocab).

You're almost there. You don't even need to use knnsearch at all. The assignments variable tells you which input feature mapped to which cluster. assignments is an N x 1 vector, where N is the total number of examples you have, i.e. the total number of rows in the input matrix features. Each value assignments(i) tells you which cluster example i (row i of features) maps to, and the corresponding cluster centroid is centers(assignments(i), :).
Therefore, given how you've called kmeans, assignments will be an N x 1 vector where each element is an integer from 1 to 500, with 500 being the total number of clusters you asked for.
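If it helps, here is a tiny toy check of those shapes (hypothetical random data, not your SURF descriptors):
X = rand(100, 64);                          % 100 hypothetical 64-D descriptors
[assignments, centers] = kmeans(X, 10);     % 10 clusters instead of 500
size(assignments)                           % 100 x 1, each entry between 1 and 10
size(centers)                               % 10 x 64
firstCentroid = centers(assignments(1), :); % centroid that descriptor 1 was assigned to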
Let's do the simple case where you only have one image. If this is the case, all you have to do is create a histogram of the assignments variable. The output histogram h will be a 500-element vector with each element h(i) being the number of times an example used centroid i as its representation in your codebook.
Just use the histcounts function and make sure that you specify the bin edges so that they coincide with the cluster IDs. Each bin includes its left edge and excludes its right edge (except for the last bin), so supply edges running from 1 to 501 so that each of the 500 cluster IDs gets its own bin.
Something like this will work:
h = histcounts(assignments, 1 : 501);
If you want something simpler and you don't want to worry about specifying the end bin, you can use accumarray to achieve the same result:
h = accumarray(assignments, 1);
With accumarray we supply key-value pairs, where the key is the centroid that the example mapped to and the value is simply 1 for every example. accumarray bins together all values in assignments that share the same key and applies a function to them; the default behaviour is to sum them, which effectively computes the histogram.
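One caveat with accumarray: the length of its output is max(assignments), so if the highest-numbered clusters are never used the histogram comes out shorter than 500 (this matters more for the per-image histograms below). You can force the output size with the third argument:
h = accumarray(assignments, 1, [500 1]);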
However, you want to do this for multiple images, not just a single image.
For Bag of Visual Words problems we will certainly have more than one training image in our database, so you want to find the histogram of the features for each image. We can still use the above concept, but I suggest you maintain a separate variable that records how many features were detected per image. You can then index into the assignments variable to extract the assigned centroid IDs for each image and build a histogram of those individually, producing a 2D matrix where each row is the histogram of one image. Remember that in kmeans, each row of assignments tells you which cluster each example was assigned to, independently of the other examples in your data. So run kmeans once on the entire training dataset, then be smart about how you access the assignments variable to extract the assigned clusters for each input image.
Therefore, modify your code so that it looks something like this:
clc;
clear;
close all;
folder = 'CarData/TrainImages/cars';
filePattern = fullfile(folder, '*.pgm');
f=dir(filePattern);
files={f.name};
num_features = zeros(numel(files), 1); % New - for keeping track of # of features per image
for k=1:numel(files)
    fullFileName = fullfile(folder, files{k});
    H = fspecial('log');
    image=imfilter(imread(fullFileName),H);
    temp = detectSURFFeatures(image);
    [im_features, temp] = extractFeatures(image, temp);
    num_features(k) = size(im_features, 1); % New - # of features per image
    features{k}= im_features;
end
features = vertcat(features{:});
num_clusters = 500; % Added to make the code adaptive
[assignments,centers] = kmeans(double(features), num_clusters);
counter = 1; % Keeps track of where we need to slice in assignments
% Go through each image and find their histograms
features_hist = zeros(numel(files), num_clusters); % Records the per image histograms
for k = 1 : numel(files)
    a = assignments(counter : counter + num_features(k) - 1); % Get the assignments for this image
    h = histcounts(a, 1 : num_clusters + 1);
    % Or:
    % h = accumarray(a, 1, [num_clusters 1]).'; % Transpose to make it a row
    % Place in final output
    features_hist(k, :) = h;
    % Increment counter
    counter = counter + num_features(k);
end
features_hist will now be an N x num_clusters (here, N x 500) matrix where each row is the histogram of one image, which is what you are seeking. The final job would be to use a supervised machine learning algorithm (SVM, neural networks, etc.) where the input features are the histograms and the expected outputs are the labels you have assigned to each image. The result is a learned model: when you get a new image, compute its SURF features, represent them as a histogram of visual words as we did above, then feed that histogram into the classification model to obtain the expected class or label for the image.
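As a rough sketch of that last step (assuming a labels vector with one class per training image exists, that you have the Statistics and Machine Learning Toolbox, and that fitcecoc is just one possible classifier choice):
% Sketch - `labels` (numel(files) x 1) and `new_features` are assumed, not from the original post
model = fitcecoc(features_hist, labels);              % multiclass SVM on the histograms

% For a new image: filter it, extract SURF descriptors into new_features, then
% assign each descriptor to its nearest visual word and build the histogram
idx       = knnsearch(centers, double(new_features)); % nearest centroid per descriptor
new_hist  = histcounts(idx, 1 : num_clusters + 1);
predicted = predict(model, new_hist);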
P.S. Deep Learning / CNNs do a much better job at this, but require much more time to train. If raw performance is what you're after, don't use Bag of Visual Words; however, it is very quick to implement and is known to perform moderately well, though that of course depends on the kinds of images you want to classify.

Related

Append for MATLAB

I am training an ANN, and I want to have different instances of training. In each instance, I want to find the maximum difference between the actual and predicted output. Then I want to take the average of all these maximums.
My code so far is:
maximum = [];
k=1;
for k = 1:5
    %Train network
    layers = [ ...
        imageInputLayer([250 1 1])
        reluLayer
        fullyConnectedLayer(100)
        fullyConnectedLayer(100)
        fullyConnectedLayer(1)
        regressionLayer];
    options = trainingOptions('sgdm','InitialLearnRate',0.1, ...
        'MaxEpochs',1000);
    net = trainNetwork(nnntrain,nnnfluidtrain,layers,options);
    net.Layers
    %Test network
    predictedn = predict(net,nnntest);
    maximum = append(maximum, max(abs(predictedn-nnnfluidtest)));
    k=k+1
end
My intent is to produce a list named 'maximum' with five elements (the max of each ANN training instance) that I would then like to take the average of.
However, it keeps giving me the error:
wrong number of input arguments for obsolete matrix-based syntax
when it tries to append. The first input is a list while the second is a 1x1 single.
Appending in MATLAB is a native operation. You append elements by actually building a new vector where the original vector is part of the input.
Therefore:
maximum = [maximum max(abs(predictedn-nnnfluidtest))];
If for some reason you would like to do it in function form, the function you are looking for is cat, which is short for concatenate. An append function appears in several toolboxes, but none of them does what you want here. cat is what you want, but you still need to provide the original vector as part of the arguments:
maximum = cat(2, maximum, max(abs(predictedn-nnnfluidtest)));
The first argument is the dimension you want to concatenate along. To match the code above, you want the number of columns to grow as you extend your vector, so that is the second dimension, i.e. 2.
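Putting it together, a sketch of the loop with preallocation (the training and prediction lines are unchanged from your question) and the final average you're after:
maximum = zeros(1, 5);                       % preallocate instead of growing
for k = 1:5
    % ... build layers/options, train the network and compute predictedn as in the question ...
    maximum(k) = max(abs(predictedn - nnnfluidtest));
end
avg_maximum = mean(maximum);                 % average of the five maxima
Note that the k=1 before the loop and the k=k+1 inside it are unnecessary; the for statement manages k itself.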

comparing generated data to measured data

We have measured data for which we determined the distribution type it follows (Gamma) and its parameters (A, B).
We then generated n = 10000 samples from the same distribution, with the same parameters and in the same range (between 18.5 and 59), using a for loop:
for i=1:1:10000
    tot=makedist('Gamma','A',11.8919,'B',2.9927);
    tot= truncate(tot,18.5,59);
    W(i,:) =random(tot,1,1);
end
Then we tried to fit the generated data using:
h1=histfit(W);
After this we tried to plot the Gamma curve, to compare the two curves on the same figure, using:
hold on
h2=histfit(W,[],'Gamma');
h2(1).Visible='off';
The problem is that the two curves are shifted, as shown in the attached figures (Figure 1 is the generated data from the previous code and Figure 2 is without truncating the generated data).
Does anyone know why?
Thanks in advance
By default histfit fits a normal probability density function (PDF) on the histogram. I'm not sure what you were actually trying to do, but what you did is:
% fit a normal PDF
h1=histfit(W); % this is equal to h1 = histfit(W,[],'normal');
% fit a gamma PDF
h2=histfit(W,[],'Gamma');
Obviously that will result in different fits, because a normal PDF is not a gamma PDF. What you see is simply that the gamma PDF fits the data better, because you sampled the data from that distribution.
If you want to check whether the data follows a certain distribution you can also use a KS-test. In your case
% check if the data follows the distribution specified in tot
[h, p] = kstest(W,'CDF',tot)
If the data follows a gamma dist. then h = 0 and p > 0.05, else h = 1 and p < 0.05.
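For example, putting the pieces together with the distribution object from your own code:
tot = makedist('Gamma','A',11.8919,'B',2.9927);
tot = truncate(tot,18.5,59);
W   = random(tot,10000,1);
[h, p] = kstest(W,'CDF',tot)   % expect h = 0 (fail to reject) since W was drawn from tot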
Now some general comments on your code:
Please look up preallocation of memory; it will speed up loops greatly. E.g.
W = zeros(10000,1);
for i=1:1:10000
    tot=makedist('Gamma','A',11.8919,'B',2.9927);
    tot= truncate(tot,18.5,59);
    W(i,:) =random(tot,1,1);
end
Also,
tot=makedist('Gamma','A',11.8919,'B',2.9927);
tot= truncate(tot,18.5,59);
does not depend on the loop index and can therefore be moved in front of the loop to speed things up further. It is also good practice to avoid using i as a loop variable, since it shadows the imaginary unit.
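With both changes applied, the loop version becomes:
tot = makedist('Gamma','A',11.8919,'B',2.9927);
tot = truncate(tot,18.5,59);
W = zeros(10000,1);
for k = 1:10000
    W(k) = random(tot,1,1);
end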
But you can actually skip the whole loop, because random() can return multiple samples at once:
tot=makedist('Gamma','A',11.8919,'B',2.9927);
tot= truncate(tot,18.5,59);
W =random(tot,10000,1);

K-Means centroids getting marginalized to having no data points [Matlab]

So I have a sort of strange problem. I have a dataset with 240 points and I'm trying to use k-means to cluster it into 100 clusters. I'm using Matlab but I don't have access to the statistics toolbox, so I had to write my own k-means function. It's pretty simple, so that shouldn't be too hard, right? Well, it seems something is wrong with my code:
function result=Kmeans(X,c)
[N,n]=size(X);
index=randperm(N);
ctrs = X(index(1:c),:);
old_label = zeros(1,N);
label = ones(1,N);
iter = 0;
while ~isequal(old_label, label)
    old_label = label;
    label = assign_labels(X, ctrs);
    for i = 1:c
        ctrs(i,:) = mean(X(label == i,:));
        if sum(isnan(ctrs(i,:))) ~= 0
            ctrs(i,:) = zeros(1,n);
        end
    end
    iter = iter + 1;
end
result = ctrs;
function label = assign_labels(X, ctrs)
[N,~]=size(X);
[c,~]=size(ctrs);
dist = zeros(N,c);
for i = 1:c
    dist(:,i) = sum((X - repmat(ctrs(i,:),[N,1])).^2,2);
end
[~,label] = min(dist,[],2);
It seems what happens is that when I go to recompute the centroids, some centroids have no datapoints assigned to them, so I'm not really sure what to do with that. After doing some research on this, I found that this can happen if you supply arbitrary initial centroids, but in this case the initial centroids are taken from the datapoints themselves, so this doesn't really make sense. I've tried re-assigning these centroids to random datapoints, but that causes the code to not converge (or at least after letting it run all night, the code never converged). Basically they get re-assigned, but that causes other centroids to get marginalized, and repeat. I'm not really sure what's wrong with my code, but I ran this same dataset through R's k-means function for k=100 for 1000 iterations and it managed to converge. Does anyone know what I'm messing up here? Thank you.
Let's step through your code one piece at a time and discuss what you're doing with respect to what I know about the k-means algorithm.
function result=Kmeans(X,c)
[N,n]=size(X);
index=randperm(N);
ctrs = X(index(1:c),:);
old_label = zeros(1,N);
label = ones(1,N);
This looks like a function that takes in a data matrix X of size N x n, where N is the number of points in your dataset and n is the dimension of a point. The function also takes in c, the desired number of output clusters. index provides a random permutation of 1 to as many data points as you have, and we then select c points at random from this permutation, which you have used to initialise your cluster centres.
iter = 0;
while ~isequal(old_label, label)
    old_label = label;
    label = assign_labels(X, ctrs);
    for i = 1:c
        ctrs(i,:) = mean(X(label == i,:));
        if sum(isnan(ctrs(i,:))) ~= 0
            ctrs(i,:) = zeros(1,n);
        end
    end
    iter = iter + 1;
end
result = ctrs;
For k-means, we basically keep iterating until the cluster membership of each point from the previous iteration matches with the current iteration, which is what you have going with your while loop. Now, label determines the cluster membership of each point in your dataset. Now, for each cluster that exists, you determine what the mean data point is, then assign this mean data point as the new cluster centre for each cluster. For some reason, should you experience any NaN for any dimension of your cluster centre, you set your new cluster centre to all zeroes instead. This looks very abnormal to me, and I'll provide a suggestion later. Edit: Now I understand why you did this. This is because should you have any clusters that are empty, you would simply make this cluster centre all zeroes as you wouldn't be able to find the mean of empty clusters. This can be solved with my suggestion for duplicate initial clusters towards the end of this post.
function label = assign_labels(X, ctrs)
[N,~]=size(X);
[c,~]=size(ctrs);
dist = zeros(N,c);
for i = 1:c
    dist(:,i) = sum((X - repmat(ctrs(i,:),[N,1])).^2,2);
end
[~,label] = min(dist,[],2);
This function takes in a dataset X and the current cluster centres for this iteration, and it should return a label list saying which cluster each point belongs to. This also looks correct: for each column of dist you are calculating the distance from each point to each cluster centre, with those distances stored in the ith column for the ith cluster. One optimization trick I would use is to avoid repmat here and use bsxfun, which handles the replication internally. Therefore, do this instead:
function label = assign_labels(X, ctrs)
[N,~]=size(X);
[c,~]=size(ctrs);
dist = zeros(N,c);
for i = 1:c
    dist(:,i) = sum(bsxfun(@minus, X, ctrs(i,:)).^2, 2);
end
[~,label] = min(dist,[],2);
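(On R2016b and newer you can also drop bsxfun entirely, since implicit expansion handles the replication for you:)
dist(:,i) = sum((X - ctrs(i,:)).^2, 2); % implicit expansion, R2016b+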
Now, this all looks correct. I also ran some tests myself and it all seems to work out, provided that the initial cluster centres are unique. One small problem with k-means is that we implicitly assume that all cluster centres are unique. Should they not be unique, then you'll run into a problem where two clusters (or more) have the exact same initial cluster centres.... so which cluster should the data point be assigned to? When you're doing the min in your assign_labels function, should you have two identical cluster centres, the cluster label that the point gets assigned to will be the minimum of these two numbers. This is why you will have a cluster with no points in it, as all of the points that should have been assigned to this cluster get assigned to the other.
As such, you may have two (or more) initial cluster centres that are the same after randomization. Even though the permuted indices you select are unique, the actual data points themselves may not be unique. One thing you can do is keep drawing permutations until you get a set of initial clusters without repeats. As such, try doing this at the beginning of your code instead.
[N,n]=size(X);
index=randperm(N);
ctrs = X(index(1:c),:);
while size(unique(ctrs, 'rows'), 1) ~= c
    index=randperm(N);
    ctrs = X(index(1:c),:);
end
old_label = zeros(1,N);
label = ones(1,N);
iter = 0;
%// While loop appears here
This will ensure that you have a unique set of initial clusters before you continue on in your code. Now, going back to your NaN stuff inside the for loop. I honestly don't see how any dimension could result in NaN after you compute the mean if your data doesn't have any NaN to begin with. I would suggest you get rid of this in your code as (to me) it doesn't look very useful. Edit: You can now remove the NaN check as the initial cluster centres should now be unique.
This should hopefully fix your problems you're experiencing. Good luck!
"Losing" a cluster is not half as special as one may think, due to the nature of k-means.
Consider duplicates. Let's assume that all of your first k points are identical: what would happen in your code? There is a reason you need to handle this case carefully. The simplest solution would be to leave the centroid as it was before, and live with degenerate clusters.
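In the update loop from the question, that simplest fix would look something like this (a sketch; an empty cluster simply keeps its previous centroid):
for i = 1:c
    members = (label == i);
    if any(members)
        ctrs(i,:) = mean(X(members,:), 1);   % mean along rows, even for a single member
    end
    % else: keep ctrs(i,:) as it was (degenerate cluster)
end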
Given that you only have 240 points, but want to use k=100, don't expect too good results. Most objects will be on their own... choosing a much too large k is probably a reason why you do see this degeneration effect a lot. Let's assume out of these 240, fewer than 100 are unique... Then you cannot have 100 non-empty clusters... Plus, I would consider this kind of result "overfitting", anyway.
If you don't have the toolboxes you need in Matlab, maybe you should move on to free software. Octave, R, Weka, ELKI, ... there is plenty of software, some of which is much more powerful when it comes to clustering than pure Matlab (in particular, if you don't have the toolboxes).
Also benchmark. You will be surprised by the performance differences.

MATLAB Cronbach's Alpha if item deleted

I was wondering if there were a way to run a complete Cronbach's Alpha analysis (like that available in 'Reliability Analysis' in SPSS), including an Alpha value if item is deleted.
I've created a Cronbach function from Mathworks File Exchange, giving me:
% Calculate the number of items
k=size(X,2);
% Calculate the variance of the items' sum
VarTotal=nanvar(nansum(X'));
% Calculate the item variance
SumVarX=nansum(nanvar(X));
% Calculate the Cronbach's alpha
a=k/(k-1)*(VarTotal-SumVarX)/VarTotal;
In a 1000x60 matrix, I'd like to know the Alpha when each item across dimension 2 is deleted.
Is there an in-built function for something like this? Is it possible to update this code (or write new code) to that effect?
OK, so it turns out it was simply a case of building the correct for loop.
as(60)=NaN; % preallocate the output vectors
varargout(60)=NaN;
for ques = 1:size(twodm,2) % loop across items
    cols = 1:size(twodm,2);
    cols(ques)=[]; % use every item except `ques`
    [as(ques), varargout(ques)] = CronbachAlpha(twodm(:,cols)); % alpha with item `ques` deleted
end
The function CronbachAlpha was taken from this File Exchange submission, which calculates both standardised and unstandardised alphas and is better than the one used in the question.
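For reference, if you only need the unstandardised alpha, a minimal self-contained version of such a function can be built from the formula quoted in the question (a sketch; with it, the loop call becomes as(ques) = CronbachAlpha(twodm(:,cols)) since there is only one output):
function a = CronbachAlpha(X)
% Unstandardised Cronbach's alpha, ignoring NaNs (same formula as in the question)
k        = size(X, 2);                 % number of items (columns)
VarTotal = nanvar(nansum(X, 2));       % variance of the per-respondent total scores
SumVarX  = nansum(nanvar(X));          % sum of the individual item variances
a        = k/(k-1) * (VarTotal - SumVarX) / VarTotal;
end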

Find a Binary Data Sequence in a Signal

Here's my goal:
I'm trying to find a way to search through a data signal and find (index) locations where a known, repeating binary data sequence is located. Then, because the spreading code and demodulation are known, pull out the corresponding chip of data and read it. Currently, I believe xcorr will do the trick.
Here's my problem:
I can't seem to interpret my result from xcorr or xcorr2 to give me what I'm looking for. I'm either having a problem cross-referencing from the vector location of my xcorr function to my time vector, or a problem properly identifying my data sequence with xcorr, or both. Other possibilities may exist.
Where I am at/What I have:
I have created a random BPSK signal that consists of the data sequence of interest and garbage data over a repeating period. I have tried processing it using xcorr, which is where I am stuck.
Here's my code:
%% Clear Variables
clc;
clear all, close all;
%% Create random data
nbits = 2^10;
ngarbage = 3*nbits;
data = randi([0,1],1,nbits);
garbage = randi([0,1],1,ngarbage);
stream = horzcat(data,garbage);
%% Convert from Unipolar to Bipolar Encoding
stream_b = 2*stream - 1;
%% Define Parameters
%%% Variable Parameters
nsamples = 20*nbits;
nseq = 5 %# Iterate stream nseq times
T = 10; %# Number of periods
Ts = 1; %# Symbol Duration
Es = Ts/2; %# Energy per Symbol
fc = 1e9; %# Carrier frequency
%%% Dependent Parameters
A = sqrt(2*Es/Ts); %# Amplitude of Carrier
omega = 2*pi*fc %# Frequency in radians
t = linspace(0,T,nsamples) %# Discrete time from 0 to T periods with nsamples samples
nspb = nsamples/length(stream) %# Number of samples per bit
%% Creating the BPSK Modulation
%# First we have to stretch the stream to fit the time vector. We can quickly do this
%# using simple matrix manipulation.
% Replicate each bit nspb/nseq times
repStream_b = repmat(stream_b',1,nspb/nseq);
% Transpose and replicate nseq times to be able to fill to t
modSig_proto = repmat(repStream_b',1,nseq);
% Transpose column by column, then rearrange into a row vector
modSig = modSig_proto(:)';
%% The Carrier Wave
carrier = A*cos(omega*t);
%% Modulated Signal
sig = modSig.*carrier;
Using XCORR
I use xcorr2() to eliminate the zero padding effect of xcorr on unequal vectors. See comments below for clarification.
corr = abs(xcorr2(data,sig)); %# pull the absolute correlation between data and sig
[val,ind] = sort(corr(:),'descend') %# sort the correlation data and assign values and indices
ind_max = ind(1:nseq); %# pull the nseq highest valued indices and send to ind_max
Now, I think this should pull the five highest correlations between data and sig. These should correspond to the end bit of data in the stream for every iteration of stream, because I would think that is where the data would most strongly cross-correlate with sig, but they do not. Sometimes the maxes are not even one stream length apart. So I'm confused here.
Question
In a three part question:
Am I missing a certain step? How do I use xcorr in this case to find where data and sig are most strongly correlated?
Is my entire method wrong? Should I not be looking for the max correlations?
Or should I be attacking this problem from another angle, i.e., not use xcorr and maybe use filter or another function?
Your overall method is great and makes a lot of sense. The problem you're having is that you're getting some actual correlation with your garbage data. I noticed that you shifted all of your stream to be zero-centered, but didn't do the same to your data. If you zero-center the data, your correlation peaks will be better defined (at least that worked when I tried it).
data = 2*data -1;
Also, I don't recommend using a simple sort to find your peaks. If you have a wide peak, which is especially possible with a noisy signal, you could have two high points right next to each other. Find a single maximum, and then zero that point and a few neighbors. Then just repeat however many times you like. Alternatively, if you know how long your epoch is, only do a correlation with one epoch's worth of data, and iterate through the signal as it arrives.
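A rough sketch of that find-zero-repeat peak picking (the half-width w and the working copy are illustrative assumptions, not part of the original post):
npeaks  = nseq;                 % number of peaks you expect
w       = 50;                   % half-width of the neighbourhood to zero out (tune this)
work    = corr;                 % work on a copy of the correlation result
peakIdx = zeros(1, npeaks);
for p = 1:npeaks
    [~, peakIdx(p)] = max(work);            % strongest remaining peak
    lo = max(1, peakIdx(p) - w);
    hi = min(length(work), peakIdx(p) + w);
    work(lo:hi) = 0;                        % suppress the peak and its neighbours
end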
With @David K's and @Patrick Mineault's help I managed to track down where I went wrong. First, @Patrick Mineault suggested I flip the signals. The best way to see what you would expect from the result is to slide the small vector along the larger, searched vector. So
corr = xcorr2(sig,data);
Then I like to chop off the end there because it's just extra. I did this with a trim function I made that simply takes the signal you're sliding and trims its irrelevant pieces off the end of the xcorr result.
trim = @(x,s2) x(1:end - (length(s2) - 1));
trim(corr,data);
Then, as #David K suggests, you need to have the data stream you're looking for encoded the same as your searched signal. So in this case
data = 2*data-1;
Second, if you just have your data at its original bit length, and not at its stretched, iterated length, it can be found in the signal but it will be VERY noisy. To reduce the noise, simply stretch the data to match its stretched length in the iterated signal. So
rdata = repmat(data',1,nspb/nseq);
rdata = repmat(rdata',1,nseq);
data = rdata(:)';
Now finally, we should have crystal clear correlations for this case. And to pull out the maxes that should correspond to those correlations I wrote
[sortedValues sortIndex] = sort(corr(:),'descend');
c = 0;
for r = 1 : length(sortedValues)
    if sortedValues(r,:) == max(corr)
        c = c + 1;
        maxIndex(1,c) = sortIndex(r,:);
    else
        break % If you don't do this, you get loop lock
    end
end
Now c should end up being nseq for this case, and you should have five indices marking where the correlation peaks are! You can easily pull out the bits with another loop using c or length(maxIndex). I've also made this into a more "real world" toy script, where there is a data stream, Doppler, fading, and it runs over a time vector in seconds instead of samples.
Thanks for the help!
Try flipping the signal, i.e.:
corr = abs(xcorr2(data,sig(end:-1:1)));
Is that any better?