We have measured data for which we determined the distribution type it follows (Gamma) and its parameters (A, B). We then generated n = 10000 samples from the same distribution with the same parameters, restricted to the same range (between 18.5 and 59), using a for loop:
for i=1:1:10000
tot=makedist('Gamma','A',11.8919,'B',2.9927);
tot= truncate(tot,18.5,59);
W(i,:) =random(tot,1,1);
end
Then we tried to fit the generated data using:
h1=histfit(W);
After this we tried to plot the Gamma curve to compare the two curves on the same figure using:
hold on
h2=histfit(W,[],'Gamma');
h2(1).Visible='off';
The problem is that the two curves are shifted, as in the following figures (Figure 1 shows the generated data from the code above; Figure 2 shows the same without truncating the generated data).
Does anyone know why?
Thanks in advance
By default histfit fits a normal probability density function (PDF) on the histogram. I'm not sure what you were actually trying to do, but what you did is:
% fit a normal PDF
h1=histfit(W); % this is equal to h1 = histfit(W,[],'normal');
% fit a gamma PDF
h2=histfit(W,[],'Gamma');
Obviously that results in different fits, because a normal PDF is not a gamma PDF. What you are seeing is simply that the gamma PDF fits the data better, because you sampled the data from a gamma distribution.
If you want to check whether the data follows a certain distribution you can also use a KS-test. In your case
% check if the data follows the distribution specified in tot
[h, p] = kstest(W,'CDF',tot)
If the data follows a gamma dist. then h = 0 and p > 0.05, else h = 1 and p < 0.05.
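For a self-contained illustration (same parameters as in the question, using the vectorised sampling mentioned further down; the exact p-value will vary between runs):
tot = makedist('Gamma','A',11.8919,'B',2.9927);
tot = truncate(tot,18.5,59);
W = random(tot,10000,1); % 10000 samples drawn directly from tot
[h, p] = kstest(W,'CDF',tot) % h should come out 0 and p well above 0.05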
Now some general comments on your code:
Please look up preallocation of memory; it will speed up loops greatly. E.g.
W = zeros(10000,1);
for i=1:1:10000
tot=makedist('Gamma','A',11.8919,'B',2.9927);
tot= truncate(tot,18.5,59);
W(i,:) =random(tot,1,1);
end
Also,
tot=makedist('Gamma','A',11.8919,'B',2.9927);
tot= truncate(tot,18.5,59);
does not depend on the loop index and can therefore be moved in front of the loop to speed things up further. It is also good practice to avoid using i as a loop variable (it shadows the imaginary unit).
But you can actually skip the whole loop, because random() can return multiple samples at once:
tot=makedist('Gamma','A',11.8919,'B',2.9927);
tot= truncate(tot,18.5,59);
W =random(tot,10000,1);
Related
I want to implement bag of visual words in MATLAB. I used SURF features to extract features from the images and k-means to cluster those features into k clusters. I now have k centroids and I want to know how many times each cluster is used, by assigning each image feature to its closest neighbor. Finally, I'd like to create a histogram of this for each image.
I tried to use the knnsearch function, but it doesn't work in this case.
Here is my MATLAB code:
clc;
clear;
close all;
folder = 'CarData/TrainImages/cars';
filePattern = fullfile(folder, '*.pgm');
f=dir(filePattern);
files={f.name};
for k=1:numel(files)
fullFileName = fullfile(folder, files{k});
H = fspecial('log');
image=imfilter(imread(fullFileName),H);
temp = detectSURFFeatures(image);
[im_features, temp] = extractFeatures(image, temp);
features{k}= im_features;
end
features = vertcat(features{:});
image_feats = [];
[assignments,centers] = kmeans(double(features),500);
vocab = centers';
I have all the image features in the features array and the cluster centers in the centers array.
You're almost there. You don't even need to use knnsearch at all. The assignments variable tells you which input feature mapped to which cluster: it is an N x 1 vector, where N is the total number of examples you have, i.e. the total number of rows (features) in the input matrix features. Each value assignments(i) tells you which cluster example i (row i of features) maps to, and the corresponding cluster centroid is centers(assignments(i), :).
Given how you've called kmeans, each element of assignments is an integer from 1 to 500, with 500 being the total number of clusters you requested.
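For example, a minimal illustration of that indexing (reusing the variable names from your code):
ex = 10; % any example, i.e. any row of features
cluster_id = assignments(ex); % the cluster that example ex was assigned to
nearest_centroid = centers(cluster_id, :); % the corresponding centroid (one row of centers)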
Let's do the simple case where you only have one training image. In that case, all you have to do is create a histogram of the assignments variable. The output histogram h will be a vector with 500 elements, where each element h(i) is the number of times an example used centroid i as its representation in your codebook.
Just use the histcounts function and make sure you specify the bin edges so that they coincide with the cluster IDs. You must account for the ending bin: the bins are exclusive on the right edge (except for the last one), so add one additional edge at the end.
Something like this will work:
h = histcounts(assignments, 1 : 501);
If you want something simpler and you don't want to worry about specifying the end bin, you can use accumarray to achieve the same result:
h = accumarray(assignments, 1);
With accumarray we assign key-value pairs, where the key is the centroid that the example mapped to and the value is simply 1 for every key. accumarray bins together all values in assignments that share the same key and applies a function to them; the default behaviour is to sum those values, which effectively computes the histogram.
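As a tiny illustration of that behaviour (toy assignments, not your data):
a = [1; 2; 2; 3; 3; 3]; % toy cluster assignments
h = accumarray(a, 1) % returns [1; 2; 3], i.e. the count for each cluster ID
One caveat: accumarray only returns counts up to the largest ID that actually occurs, so if the highest-numbered clusters are empty you can force the full length with accumarray(a, 1, [500 1]).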
However, you want to do this for multiple images, not just a single image.
For Bag of Visual Words problems, we will certainly have more than one training image in our database, so you want to find the histogram of the features for each image. We can still use the above concept, but one thing I can suggest is that you maintain a separate variable that tells you how many features were detected per image; then you can index into the assignments variable to extract the correct assigned centroid IDs and build a histogram of those individually. We can build a 2D matrix where each row is the histogram of one image. Remember that in kmeans, each row tells you which cluster each example was assigned to, independently of the other examples in your data. Using that, you run kmeans on the entire training dataset, then be careful about how you access the assignments variable to extract the assigned clusters for each input image.
Therefore, modify your code so that it looks something like this:
clc;
clear;
close all;
folder = 'CarData/TrainImages/cars';
filePattern = fullfile(folder, '*.pgm');
f=dir(filePattern);
files={f.name};
num_features = zeros(numel(files), 1); % New - for keeping track of # of features per image
for k=1:numel(files)
fullFileName = fullfile(folder, files{k});
H = fspecial('log');
image=imfilter(imread(fullFileName),H);
temp = detectSURFFeatures(image);
[im_features, temp] = extractFeatures(image, temp);
num_features(k) = size(im_features, 1); % New - # of features per image
features{k}= im_features;
end
features = vertcat(features{:});
num_clusters = 500; % Added to make the code adaptive
[assignments,centers] = kmeans(double(features), num_clusters);
counter = 1; % Keeps track of where we need to slice in assignments
% Go through each image and find their histograms
features_hist = zeros(numel(files), num_clusters); % Records the per image histograms
for k = 1 : numel(files)
a = assignments(counter : counter + num_features(k) - 1); % Get the assignments
h = histcounts(a, 1 : num_clusters + 1);
% Or:
% h = accumarray(a, 1).'; % Transpose to make it a row
% Place in final output
features_hist(k, :) = h;
% Increment counter
counter = counter + num_features(k);
end
features_hist will now be a N x 500 matrix where each row is the histogram of one image. The final job is to train a supervised machine learning algorithm (SVM, neural network, etc.) where the expected labels are the class you have assigned to each image and the input features are the per-image histograms. The final result is a learned model, so that when you have a new image, you calculate its SURF features, represent them as a histogram of features as we did above, then feed that histogram into the classification model to get the expected class or label the image represents.
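As a rough sketch of that last step (the labels below are hypothetical placeholders just to make the snippet self-contained, and fitcecoc is only one of several classifiers you could use):
% Hypothetical labels: in practice these come from your training set, e.g. car vs. non-car
nHalf = floor(numel(files)/2);
labels = categorical([repmat({'car'}, nHalf, 1); repmat({'noncar'}, numel(files) - nHalf, 1)]);
% Train a multiclass-capable SVM on the per-image histograms
model = fitcecoc(features_hist, labels);
% For a new image: extract its SURF features, assign each one to its nearest row of
% centers, build its 1 x 500 histogram new_hist exactly as above, then classify:
% predicted_label = predict(model, new_hist);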
P.S. Deep Learning / CNNs do a much better job at this, but require much more time to train. If raw accuracy is what you're after, don't use Bag of Visual Words, but it is very quick to implement and is known to perform moderately well, although that of course depends on the kinds of images you want to classify.
I use the following code to collect data from two text files, join them together, and then plot them. For some reason, I appear to get two plots rather than one, and I am not sure why this is happening.
load MODES1.dat; % read data into the MODES1 matrix
x1 = MODES1(:,1); % copy first column of MODES1 into x1
y1 = MODES1(:,2); % and second column of MODES1 into y1
load MODES.dat; % read data into the MODES matrix
x = MODES(:,1); % copy first column of MODES into x
y = MODES(:,2); % and second column of MODES into y
% Joining the two sets of data
endx = [x1;x];
endy = [y1;y];
figure(1)
plot(endx,endy)
xlabel('Unique Threshold Strains','FontSize',12);
ylabel('Probabilities of occurrence','FontSize',12);
title('\it{Unique Values versus frequencies of occurrence}','FontSize',16);
Thanks
Your problem is quite a simple one. Matlab's plot command creates a point for each data point defined by the parameters and connects those points in the order in which they appear in the input vectors. To get an idea of this behavior, try
x = [0;1;-1;2;-2;3;-3;4;-4;5];
plot(x,x.^2);
You won't get the quadratic function graph you might expect.
To fix this, you must sort your input arrays identically. Sorting one array is simple (sort(endx)), but you want to sort both in the same way. Matlab actually gives you a function to do this, but it only works on matrices, so you need to do some concatenating/separating:
input = sortrows( [endx endy] );
endx = input(:,1);
endy = input(:,2);
This will sort the rows of the matrix built by putting endy right of endx with respect to the first column (endx). Now your inputs are correctly sorted and the resulting plot should only show one line. (More accurately, one line which does not at some point go back where it came from.)
Another way to achieve this, depending on your actual use case and data origin, would be to take the mean of both parts of x, so instead of endx = [x1;x]; you'd build endx = mean([x1 x],2); (this assumes x1 and x have the same length).
Yet another way is to drop the line altogether and go with
plot(endx,endy,'.');
or
plot(endx,endy,'LineStyle','none');
But this is only useful if your data points are very close to each other.
As seen in the code below, I am currently generating random numbers from a normal distribution and selecting the ones within the -3*sigma to 3*sigma interval. However, I now want to generate numbers such that there is a higher probability that I select numbers from outside that interval. For example, a number from [-4*sigma, -3*sigma) should have a 35% probability of being chosen, and the same for [3*sigma, 4*sigma). Basically, I'll be calling this function several times and am wondering if there is a way for me to select a higher proportion of random numbers from the "tails" of the normal distribution, without actually altering the shape of the normal distribution. I'm struggling to do this.
function [new_E11, new_E22] = elasticmodulusrng()
new_E11 = normrnd(136e9,9.067e9,[1 1]);
new_E22 = normrnd(8.9e9,2.373e9,[1 1]);
% Re-draw until each sample lies within mean +/- 3*sigma
while new_E11 <= 136e9 - 3*9.067e9 || new_E11 >= 136e9 + 3*9.067e9
new_E11 = normrnd(136e9,9.067e9,[1 1]);
end
while new_E22 <= 8.9e9 - 3*2.373e9 || new_E22 >= 8.9e9 + 3*2.373e9
new_E22 = normrnd(8.9e9,2.373e9,[1 1]);
end
Thanks
The question does not make much sense, as pointed out by Jojo: this is not a normal distribution anymore.
What you could do is create your own probability density function (PDF) and draw from it.
For instance,
N = pdf('Normal',-5:0.2:5,0,1);
gives you the normal PDF with a good resolution.
You could alter it, say
Z = N;
Z(5:15)=3*Z(5:15);
Z(35:45)=3*Z(35:45);
and use Direct Methods, Inversion Methods, or Acceptance-Rejection Methods as explained here
There is an implementation in the FileExchange:
http://www.mathworks.com/matlabcentral/fileexchange/27590-simple-rejection-sampling
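For instance, a minimal inversion-method sketch built on the modified PDF Z from above (same -5:0.2:5 grid; this samples from a discretised approximation, not the exact distribution):
x = -5:0.2:5;
N = pdf('Normal',x,0,1);
Z = N;
Z(5:15) = 3*Z(5:15); % boost the left tail
Z(35:45) = 3*Z(35:45); % boost the right tail
Z = Z / trapz(x,Z); % renormalise so the PDF integrates to 1
C = cumtrapz(x,Z); % numerical CDF on the grid
C = C / C(end);
u = rand(10000,1); % uniform draws
[Cu, iu] = unique(C); % interp1 needs strictly increasing sample points
samples = interp1(Cu, x(iu), u); % invert the CDF
histogram(samples,50) % visibly heavier tails than a standard normal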
I was wondering if there is a way to run a complete Cronbach's Alpha analysis (like that available in 'Reliability Analysis' in SPSS), including the alpha value if each item is deleted.
I've adapted a Cronbach's alpha function from the MathWorks File Exchange, giving me:
% Calculate the number of items
k=size(X,2);
% Calculate the variance of the items' sum
VarTotal=nanvar(nansum(X'));
% Calculate the item variance
SumVarX=nansum(nanvar(X));
% Calculate the Cronbach's alpha
a=k/(k-1)*(VarTotal-SumVarX)/VarTotal;
For a 1000x60 matrix, I'd like to know the alpha when each item (along dimension 2) is deleted in turn.
Is there an in-built function for something like this? Is it possible to update this code (or write new code) to that effect?
OK, so it turns out it was simply a case of building the correct for loop.
as(60)=NaN; % preallocate the output vectors
varargout(60)=NaN; % (varargout is used here as an ordinary variable, not the special function output)
for ques = 1:size(twodm,2) % loop across items
cols = 1:size(twodm,2);
cols(ques)=[]; % use only the items other than `ques`
[as(ques), varargout(ques)] = CronbachAlpha(twodm(:,cols)); % perform the test
end
The function CronbachAlpha was taken from this file, which calculates both standardised and unstandardised Alphas and is better than the one used in the question.
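If you would rather avoid the File Exchange dependency, a minimal helper for the unstandardised alpha, built from the same formula as in the question, could look roughly like this (a sketch with a single output, not the File Exchange implementation):
function a = cronbach_alpha(X)
% Unstandardised Cronbach's alpha for an observations-by-items matrix X,
% using NaN-tolerant variance/sum as in the question.
k = size(X,2); % number of items
VarTotal = nanvar(nansum(X,2)); % variance of the per-observation totals
SumVarX = nansum(nanvar(X)); % sum of the per-item variances
a = k/(k-1) * (VarTotal - SumVarX) / VarTotal;
end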
Here's my goal:
I'm trying to find a way to search through a data signal and find (index) locations where a known, repeating binary data sequence is located. Then, because the spreading code and demodulation is known, pull out the corresponding chip of data and read it. Currently, I believe xcorr will do the trick.
Here's my problem:
I can't seem to interpret my result from xcorr or xcorr2 to give me what I'm looking for. I'm either having a problem cross-referencing from the vector location of my xcorr function to my time vector, or a problem properly identifying my data sequence with xcorr, or both. Other possibilities may exist.
Where I am at/What I have:
I have created a random BPSK signal that consists of the data sequence of interest and garbage data over a repeating period. I have tried processing it using xcorr, which is where I am stuck.
Here's my code:
%% Clear Variables
clc;
clear all, close all;
%% Create random data
nbits = 2^10;
ngarbage = 3*nbits;
data = randi([0,1],1,nbits);
garbage = randi([0,1],1,ngarbage);
stream = horzcat(data,garbage);
%% Convert from Unipolar to Bipolar Encoding
stream_b = 2*stream - 1;
%% Define Parameters
%%% Variable Parameters
nsamples = 20*nbits;
nseq = 5; %# Iterate stream nseq times
T = 10; %# Number of periods
Ts = 1; %# Symbol Duration
Es = Ts/2; %# Energy per Symbol
fc = 1e9; %# Carrier frequency
%%% Dependent Parameters
A = sqrt(2*Es/Ts); %# Amplitude of Carrier
omega = 2*pi*fc; %# Frequency in radians
t = linspace(0,T,nsamples); %# Discrete time from 0 to T periods with nsamples samples
nspb = nsamples/length(stream); %# Number of samples per bit
%% Creating the BPSK Modulation
%# First we have to stretch the stream to fit the time vector. We can quickly do this using
%# simple matrix manipulation.
% Replicate each bit nspb/nseq times
repStream_b = repmat(stream_b',1,nspb/nseq);
% Transpose and replicate nseq times to be able to fill to t
modSig_proto = repmat(repStream_b',1,nseq);
% Transpose column by column, then rearrange into a row vector
modSig = modSig_proto(:)';
%% The Carrier Wave
carrier = A*cos(omega*t);
%% Modulated Signal
sig = modSig.*carrier;
Using XCORR
I use xcorr2() to eliminate the zero padding effect of xcorr on unequal vectors. See comments below for clarification.
corr = abs(xcorr2(data,sig)); %# pull the absolute correlation between data and sig
[val,ind] = sort(corr(:),'descend'); %# sort the correlation data and assign values and indices
ind_max = ind(1:nseq); %# pull the nseq highest valued indices and send to ind_max
Now, I think this should pull the five highest correlations between data and sig. These should correspond to the end bit of data in the stream for every iteration of stream, because I would think that is where the data would most strongly cross-correlate with sig, but they do not. Sometimes the maxes are not even one stream length apart. So I'm confused here.
Question
In a three part question:
Am I missing a certain step? How do I use xcorr in this case to find where data and sig are most strongly correlated?
Is my entire method wrong? Should I not be looking for the max correlations?
Or should I be attacking this problem from another angle, i.e., not use xcorr and maybe use filter or another function?
Your overall method is great and makes a lot of sense. The problem you're having is that you're getting some actual correlation with your garbage data. I noticed that you shifted all of your stream to be zero-centered, but didn't do the same to your data. If you zero-center the data, your correlation peaks will be better defined (at least that worked when I tried it).
data = 2*data -1;
Also, I don't recommend using a simple sort to find your peaks. If you have a wide peak, which is especially possible with a noisy signal, you could have two high points right next to each other. Find a single maximum, and then zero that point and a few neighbors. Then just repeat however many times you like. Alternatively, if you know how long your epoch is, only do a correlation with one epoch's worth of data, and iterate through the signal as it arrives.
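A rough sketch of that 'find a maximum, zero its neighbours, repeat' idea (w is an assumed half-width you would pick from your chip/epoch length):
cwork = corr; % work on a copy of the correlation result
w = 50; % assumed half-width of a peak, in samples
nPeaks = nseq; % number of peaks you expect
peakIdx = zeros(1,nPeaks);
for p = 1:nPeaks
[~, idx] = max(cwork); % strongest remaining peak
peakIdx(p) = idx;
lo = max(1, idx - w);
hi = min(length(cwork), idx + w);
cwork(lo:hi) = 0; % suppress the peak and its neighbours
end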
With @David K's and @Patrick Mineault's help I managed to track down where I went wrong. First, @Patrick Mineault suggested I flip the signals. The best way to see what you would expect from the result is to slide the small vector along the larger, searched vector. So
corr = xcorr2(sig,data);
Then I like to chop off the end there because it's just extra. I did this with a trim function I made that simply takes the signal you're sliding and trims its irrelevant pieces off the end of the xcorr result.
trim = @(x,s2) x(1:end - (length(s2) - 1));
corr = trim(corr,data);
Then, as @David K suggests, you need to have the data stream you're looking for encoded the same way as your searched signal. So in this case
data = 2*data-1;
Second, if you just have your data at its original bit length, and not at its stretched, iterated length, it can be found in the signal but it will be VERY noisy. To reduce the noise, simply stretch the data to match its stretched length in the iterated signal. So
rdata = repmat(data',1,nspb/nseq);
rdata = repmat(rdata',1,nseq);
data = rdata(:)';
Now finally, we should have crystal clear correlations for this case. And to pull out the maxes that should correspond to those correlations I wrote
[sortedValues sortIndex] = sort(corr(:),'descend');
c = 0 ;
for r = 1 : length(sortedValues)
if sortedValues(r,:) == max(corr)
c = c + 1;
maxIndex(1,c) = sortIndex(r,:);
else
break % If you don't do this, you get loop lock
end
end
Now c should end up being nseq for this case, and you should have five indices marking where the correlation peaks are! You can easily pull out the bits with another loop using c or length(maxIndex). I've also made this into a more "real world" toy script, where there is a data stream, Doppler, fading, and it runs over a time vector in seconds instead of samples.
Thanks for the help!
Try flipping the signal, i.e.:
corr = abs(xcorr2(data,sig(end:-1:1)));
Is that any better?