Given a lower triangular matrix (100x100) containing cross-correlation
values, where entry 'ij' is the correlation value between signal 'i'
and signal 'j' (so a high value means that these two signals likely belong to
the same class of objects), and knowing there are at most four distinct
classes in the data set, does someone know of a fast and effective way
to classify the data and assign all the signals to the four different
classes, rather than searching and cross-checking all the entries against
each other? The following 7x7 matrix may help illustrate the point:
1 0 0 0 0 0 0
.2 1 0 0 0 0 0
.8 .15 1 0 0 0 0
.9 .17 .8 1 0 0 0
.23 .8 .15 .14 1 0 0
.7 .13 .77 .83 .11 1 0
.1 .21 .19 .11 .17 .16 1
There are three classes in this example:
class 1: rows <1 3 4 6>,
class 2: rows <2 5>,
class 3: rows <7>
This is a good problem for hierarchical clustering. Using complete linkage clustering you will get compact clusters; all you have to do is determine the cutoff distance at which two clusters should be considered different.
First, you need to convert the correlation matrix to a dissimilarity matrix. Since correlation is between 0 and 1, 1 - correlation works well: high correlations get a score close to 0, and low correlations get a score close to 1. Assume that the correlations are stored in an array corrMat:
%# remove diagonal elements
corrMat = corrMat - eye(size(corrMat));
%# convert to a dissimilarity vector (same layout as pdist output);
%# note this assumes no off-diagonal correlation is exactly zero
dissimilarity = 1 - corrMat(find(corrMat))';
%# decide on a cutoff
%# remember that 0.4 corresponds to corr of 0.6!
cutoff = 0.5;
%# perform complete linkage clustering
Z = linkage(dissimilarity,'complete');
%# group the data into clusters
%# (cutoff is at a correlation of 0.5)
groups = cluster(Z,'cutoff',cutoff,'criterion','distance')
groups =
2
3
2
2
3
2
1
To confirm that everything is great, you can visualize the dendrogram
dendrogram(Z,0,'colorthreshold',cutoff)
You can use the following method instead of creating the dissimilarity matrix yourself:
Z = linkage(corrMat,'complete','correlation')
This tells Matlab to use the correlation distance directly, and then you can plot the dendrogram as follows:
dendrogram(Z);
One way to verify that your dendrogram is correct is to check its maximum height, which should correspond to 1 - min(corrMat). If the minimum value in corrMat is 0, then the maximum height of your tree should be 1. If the minimum value is -1 (negative correlation), the height should be 2.
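To inspect that height programmatically, a minimal sketch following the check described above (linkage stores the merge heights in the third column of Z):
treeHeight = max(Z(:,3));   %# tallest merge in the dendrogram
%# compare treeHeight against 1 - min(corrMat(:)) as described above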
Since it is given that there are going to be four groups, I'd start with a pretty simplistic two-stage approach.
In the first stage you find the maximum correlation among any two elements, place those two elements in a group, then zero out their correlation in the matrix. Repeat, finding the next highest correlation between two elements and either adding them to an existing group or creating a new one, until you have the correct number of groups.
Finally, check which elements aren't in a group yet; for each one, go to its column and find the other element it is most highly correlated with. If that element is already in a group, place the ungrouped element in that group as well; otherwise skip to the next element and come back to it later.
The approach is simplistic, but if you don't need to verify the number of groups I think it should be effective; a rough sketch is below.
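A minimal sketch of that greedy idea (my own illustration, not tested on the original data; it assumes the correlations are in a matrix corrMat with ones on the diagonal, as in the example, and that numGroups is known):
C = tril(corrMat, -1);       % strictly lower triangle, diagonal removed
n = size(C,1);
numGroups = 4;               % known number of classes
group = zeros(n,1);          % group label per signal, 0 = unassigned
nextLabel = 0;

% stage 1: seed groups from the strongest remaining correlations
while nextLabel < numGroups
    [~, idx] = max(C(:));
    [i, j] = ind2sub(size(C), idx);
    C(i,j) = 0;                          % consume this entry
    if group(i)==0 && group(j)==0
        nextLabel = nextLabel + 1;       % start a new group
        group([i j]) = nextLabel;
    elseif group(i)==0
        group(i) = group(j);             % join the existing group
    elseif group(j)==0
        group(j) = group(i);
    end
end

% stage 2: attach each leftover element to the group of its most
% correlated already-grouped partner (a simplification of the
% "come back later" pass described above)
S = tril(corrMat, -1); S = S + S';       % symmetric view for lookups
for k = find(group==0)'
    [~, order] = sort(S(k,:), 'descend');
    partner = order(find(group(order) > 0, 1));
    group(k) = group(partner);
end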
Hi, my question is a bit long, so please bear with me and read it till the end.
I am working on a project with 30 participants. We have two data sets (the first has 30 rows and 160 columns; the second has the same 30 rows and 200 columns as outputs = y, and these outputs are independent). What I want to do is use the first data set to predict the outputs of the second. As the first data set was rectangular and high-dimensional, I used factor analysis and now have 19 factors that cover up to 98% of the variance. Now I want to use these 19 factors to predict the outputs of the second data set.
I am using neuralnet and backpropagation, and everything goes well; my results are really close to the outputs.
My questions :
1- As my inputs are the factors (they are between -1 and 1) and my outputs are integers on a scale from 4 to 10000, should I still scale them before running the neural network?
2- I scaled the data (both inputs and outputs) and then predicted with neuralnet; when I check the MSE it is very high, around 6000, even though my predictions and the real outputs are very close to each other. But if I rescale the predictions and outputs and then check the MSE, it is near zero. Is it unbiased to rescale and then check the MSE?
3- I read that it is better not to scale the output from the beginning, but if I scale only the inputs, all my predictions are 1. Is it correct not to scale the outputs?
4- If I want to plot the ROC curve, how can I do it, given that my results are never exactly equal to the real outputs?
Thank you for reading my question
[edit#1]: There is a publication on how to produce ROC curves using neural network results
http://www.lcc.uma.es/~jja/recidiva/048.pdf
1) You can scale your values (using min-max scaling, for example), but only fit the scaling to your training data set. Save the parameters used in the scaling process (for min-max scaling these would be the min and max values by which the data is scaled). Only then can you scale your test data set, WITH the min and max values you got from the training data set. Remember, with the test data set you are trying to mimic the process of classifying unseen data, and unseen data must be scaled with the scaling parameters from the training data set.
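A minimal sketch of that workflow (my own illustration; Xtrain and Xtest are hypothetical names for the training and test matrices, one exemplar per row):
% fit the scaling parameters on the training set only
mn = min(Xtrain, [], 1);
mx = max(Xtrain, [], 1);
% scale both sets with the *training* parameters
XtrainScaled = (Xtrain - repmat(mn, size(Xtrain,1), 1)) ./ repmat(mx - mn, size(Xtrain,1), 1);
XtestScaled  = (Xtest  - repmat(mn, size(Xtest,1), 1))  ./ repmat(mx - mn, size(Xtest,1), 1);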
2) When talking about errors, do mention which data set the error was computed on. You can compute an error function (in fact, there are different error functions; one of them is the mean squared error, or MSE) on the training data set, and another on your test data set.
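If you want the error on the original output scale, undo the scaling before computing the MSE (a sketch, reusing hypothetical min-max parameters ymin and ymax that were used to scale the targets):
% map scaled predictions back to the original output range
predOrig = predScaled .* (ymax - ymin) + ymin;
% mean squared error on the original scale
mseOrig = mean((predOrig - yTestOrig).^2);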
4) Think about this: let's say you train a network with the training data set, and it has only 1 neuron in the output layer. Then you present it with the test data set. Depending on which transfer function (activation function) you use in the output layer, you will get a value for each exemplar. Let's assume you use a sigmoid transfer function, whose max and min values are 1 and 0. That means the predictions will be limited to values between 0 and 1.
Let's also say that your target labels ("truth") only contain the discrete values 0 and 1 (indicating which class the exemplar belongs to).
targetLabels=[0 1 0 0 0 1 0 ];
NNprediction=[0.2 0.8 0.1 0.3 0.4 0.7 0.2];
How do you interpret this?
You can apply a hard-limiting function so that the NNprediction vector only contains the discrete values 0 and 1. Let's say you use a threshold of 0.5:
NNprediction_thresh05 = [0 1 0 0 0 1 0];
vs.
targetLabels =[0 1 0 0 0 1 0];
With this information you can compute your false positives (FP), false negatives (FN), true positives (TP), and true negatives (TN), and a bunch of additional derived metrics such as the True Positive Rate = TP/(TP+FN).
If you plotted the False Positive Rate vs. the True Positive Rate, this would be a single point on the ROC plot. However, if you vary the threshold in the hard-limit function, you can get all the values you need for a complete curve.
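A minimal sketch of that threshold sweep (my own illustration, using the targetLabels and NNprediction vectors from above):
thresholds = 0:0.01:1;
TPR = zeros(size(thresholds));
FPR = zeros(size(thresholds));
for k = 1:numel(thresholds)
    pred = NNprediction >= thresholds(k);   % hard-limit at this threshold
    TP = sum(pred==1 & targetLabels==1);
    FP = sum(pred==1 & targetLabels==0);
    FN = sum(pred==0 & targetLabels==1);
    TN = sum(pred==0 & targetLabels==0);
    TPR(k) = TP / (TP + FN);
    FPR(k) = FP / (FP + TN);
end
plot(FPR, TPR)   % one point per threshold traces out the ROC curve
xlabel('False Positive Rate'); ylabel('True Positive Rate');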
Makes sense? See how each step depends on the others?
I have a problem with graphing different plots on the same graph within a 'for' loop. I hope someone can point me in the right direction.
I have a 2-D array, with discrete chunks of data in and amongst zeros. My data is the following:
A=
0 0
0 0
0 0
3 9
4 10
5 11
6 12
0 0
0 0
0 0
0 0
7 9.7
8 9.8
9 9.9
0 0
0 0
A chunk of data is defined as a contiguous set of rows, without the interruption of a [0 0] row. So in this example, the 1st chunk of data would be
3 9
4 10
5 11
6 12
And the 2nd chunk is
7 9.7
8 9.8
9 9.9
The first column is x and the second column is y. I would like to plot y as a function of x (x on the horizontal axis, y on the vertical axis). I want to plot these data sets on the same graph as a scatter plot, and put a line of best fit through the points whenever I come across a chunk of data. In this case, I will have 2 sets of points and 2 lines of best fit (because I have 2 chunks of data). I would also like to calculate the R-squared value.
The code that I have so far is shown below:
fh1 = figure;
hold all;
ah1 = gca;
% plot graphs:
for d = 1:max_number_zeros+num_rows
if sequence_holder(d,1)==0
continue;
end
c = d;
while sequence_holder(c,1)~=0
plot(ah1,sequence_holder(c,1),sequence_holder(c,num_cols),'*');
%lsline;
c =c+1;
continue;
end
end
sequence_holder is the array with the data in it. I can only plot the first set of data, with no line of best fit. I tried lsline, but that didn't work.
Can anyone tell me how to
-plot both sets of data
-draw a line of best fit and get the regression coefficient?
The first part could be done in a number of ways. I would test the second column for zeros:
zerodata = A(:,2) == 0;
which will give you a logical array of ones and zeros like [1 1 1 0 1 0 0 ...]. Then you can use this to split up your input. You could look at the diff of that array and test it for positive or negative sign: a -1 marks the first row of a chunk, and a +1 marks the row after a chunk ends. diff produces no transition for the very first element, so a chunk starting on row 1 would be missed; your data starts with zeros, so that case doesn't arise here, but you'd need some way to deal with it (or the opposite case) unless you know for certain it will always be one way or the other. You could just test the first element, or you could insert a known value at the start of your input array.
You will then have to store your chunks. As they may be of variable length and variable number, you can't put them into one big matrix, but you still want to be able to use a loop. I would use either a cell array, where each cell in a row contains the x or y data for a chunk, or a struct array where, say, structarray(1).x and structarray(1).y hold your data values.
Then you can iterate through your struct array and call plot on each chunk separately, as in the sketch below.
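A minimal sketch of that splitting-and-plotting idea (my own illustration, assuming the data is in A as posted):
zerodata = A(:,2) == 0;            % 1 for [0 0] rows, 0 inside a chunk
d = diff([1; zerodata]);           % pad so a chunk starting on row 1 is caught
starts = find(d == -1);            % first row of each chunk
stops  = find(d == 1) - 1;         % last row of each chunk
if numel(stops) < numel(starts)    % last chunk runs to the end of A
    stops(end+1) = size(A,1);
end

chunks = struct('x', {}, 'y', {});
for k = 1:numel(starts)
    chunks(k).x = A(starts(k):stops(k), 1);
    chunks(k).y = A(starts(k):stops(k), 2);
end

figure; hold all;
for k = 1:numel(chunks)
    plot(chunks(k).x, chunks(k).y, '*');   % scatter each chunk
end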
As for the fitting, you can use the fit command (it comes with the Curve Fitting Toolbox). It's complex and has lots of options, so you should check out the help first (type doc fit in the console to get the inline help, which has the same content as the website help). The short version is that you can do a simple linear fit like this:
[fitobject, gof] = fit(x, y, 'poly1');
where 'poly1' specifies that you want a first-order polynomial (i.e. a straight line). The output arguments give you a fit object, which you can do various things with, like plot or interpolate, and a struct containing, among other things, the r^2 and adjusted r^2. The fit object also contains your fit coefficients.
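Putting it together with the chunks from the sketch above (again just an illustration):
for k = 1:numel(chunks)
    [fitobject, gof] = fit(chunks(k).x, chunks(k).y, 'poly1');
    plot(fitobject);               % draws the best-fit line on the current axes
    fprintf('chunk %d: slope %.3f, R^2 = %.4f\n', k, fitobject.p1, gof.rsquare);
end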
Introduction to the problem:
I'm modelling a system where I have a matrix X = [0,0,0; 0,1,0; ...] in which each row represents a point in 3D space. I then choose a random row, r, take all following rows, rotate them around the point represented by r, and make a new matrix, X_rot, from these rows. I now want to check whether any of the rows of X_rot is equal to any of the rows of X (i.e. two vertices on top of each other), and if that is the case, reject the rotation and try again.
Actual question:
Until now I have used the following code:
X_sim = [X; X_rot];
if numel(unique(X_sim,'rows')) == numel(X_sim)
    X(r+1:N+1,:,:) = X_rot;
end
This works, but it takes up over 50% of my running time, and I was wondering if anybody here knows a more efficient way to do it, since I don't need all the information that unique returns.
P.S. If it matters, I typically have between 100 and 1000 rows in X.
Best regards,
Morten
Additional:
My X matrix contains N+1 rows, and I have 12 different rotational operations that I can apply to the sub-matrix x_rot:
step=ceil(rand()*N);
r=ceil(rand()*12);
x_rot=x(step+1:N+1,:);
x_rot=bsxfun(@minus,x_rot,x(step,:));
x_rot=x_rot*Rot(:,:,:,r);
x_rot=bsxfun(@plus,x_rot,x(step,:));
Two possible approaches (I don't know if they are faster than using unique):
Use pdist2:
d = pdist2(X, X_rot, 'hamming'); %// 0 if rows are equal, 1 if different.
%// Any distance function will do, so try those available and choose fastest
result = any(d(:)==0);
Use bsxfun:
d = squeeze(any(bsxfun(@ne, X, permute(X_rot, [3 2 1])), 2));
result = any(d(:)==0);
result is 1 if there is a row of X equal to some row of X_rot, and 0 otherwise.
How about ismember(X_rot, X, 'rows')?
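ismember with the 'rows' flag returns one logical per row of X_rot, so the original test could become (a sketch, following the poster's indexing):
% accept the rotation only if no rotated row already exists in X
if ~any(ismember(X_rot, X, 'rows'))
    X(r+1:N+1,:,:) = X_rot;
end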
I am new to Hidden Markov Models. I understand the main idea, and I have tried some Matlab built-in HMM functions to help me understand more.
If I have a sequence of observations and corresponding states,
e.g.
seq = 2 6 6 1 4 1 1 1 5 4
states = 1 1 2 2 2 2 2 2 2 2
and I can use the hmmestimate function to calculate the transition and emission probability matrices:
[TRANS_EST, EMIS_EST] = hmmestimate(seq, states)
TRANS_EST =
0.5000 0.5000
0 1.0000
EMIS_EST =
0 0.5000 0 0 0 0.5000
0.5000 0 0 0.2500 0.1250 0.1250
In that example, the observation is just a single value.
The example picture below describes my situation: I have states {Sleep, Work, Sport} and a set of observations {light off, light on, heart rate > 100, ...}.
If I use a number to represent each observation, then in my situation each state has multiple observations at the same time:
seq = {2,3,5} {6,1} {2} {2,3,6} {4} {1,2} {1}
states = 1 1 2 2 2 2 2
I have no idea how to implement this in Matlab to get the transition and emission probability matrices. I am quite lost; what should I do in the next step? Am I using the right approach?
Thanks!
If you know the hidden state sequence, then maximum likelihood estimation is trivial: it's just the normalized empirical counts. In other words, count up the transitions and emissions and then divide the elements in each row by the total count in that row.
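A minimal sketch of that counting estimate (my own illustration; for the single-observation sequence above it reproduces the hmmestimate output):
seq    = [2 6 6 1 4 1 1 1 5 4];
states = [1 1 2 2 2 2 2 2 2 2];
nStates = 2; nSymbols = 6;

TR = zeros(nStates, nStates);   % transition counts
EM = zeros(nStates, nSymbols);  % emission counts
for t = 1:numel(seq)
    EM(states(t), seq(t)) = EM(states(t), seq(t)) + 1;
    if t > 1
        TR(states(t-1), states(t)) = TR(states(t-1), states(t)) + 1;
    end
end
% normalize each row so it sums to 1
TR = bsxfun(@rdivide, TR, sum(TR, 2));
EM = bsxfun(@rdivide, EM, sum(EM, 2));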
In the case where you have multiple observation variables, code the observations as a vector where each element gives the value of one of the random variables at that time step, e.g. {lights=1, computer=0, heart rate>100 = 1, location=0}. The key is that you need to have the same number of observations at each time step, or else things will be much more difficult.
I think you have two options.
1) Code multiple observations into one number. For example, if you know that the maximum possible value of an observation is N, and at each state you may have at most K observations, then you can code any combination of observations as a number between 0 and N^K - 1. By doing this, you are assuming that {2,3,6} and {2,3,5} do not share anything; they are two completely different observations.
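A minimal sketch of such an encoding (my own illustration; it treats the sorted, zero-padded observation set as digits of a base-(N+1) number, which gives codes in 0..(N+1)^K - 1 rather than the N^K - 1 above, the extra room being for the padding symbol):
N = 6;  K = 3;                 % max observation value, max observations per step
obs = [2 3 5];                 % one time step's observation set
obs = sort(obs);               % sort so the same set always yields the same code
obs(end+1:K) = 0;              % pad with 0, meaning "no observation"
code = sum(obs .* (N+1).^(K-1:-1:0));   % unique integer for this combination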
2) Or you can have multiple emission distributions for each state. I haven't used the built-in Matlab functions for HMM estimation, so I have no idea whether they support that. But the idea is: if you have multiple emission distributions at a state (one per observation variable, assumed conditionally independent given the state), the emission likelihood is just their product. This is what jerad suggests.
I have a line of code in Matlab with which I am selecting a subset of a matrix:
A(3:5,1:3);
Now I want to adapt this line to select only the rows for which all three values are larger than zero:
(A(3:5,1:3) > 0);
But apparently I am not doing this right. How do I select part of the matrix, and also make sure that only the rows for which all three values are larger than zero are selected?
EDIT: To clarify: let's say that I have a matrix of coordinates called A that looks like this:
Matrix A [5,3]
3 4 0
0 1 0
0 3 1
0 0 0
4 8 7
Now I want to select only the part [3:5,1:3], and of that part I only want to select rows 3 and 5. How do I do that?
The expression:
subA = A(3:5,:);
subA(sum(subA,2)~=0,:)
will return only the rows of A(3:5,:) which have a row-sum not equal to zero. (Note that indexing A directly with find(sum(A(3:5,:),2)~=0) would not work, because those indices are relative to the sub-matrix, not to A.)
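If you literally want only the rows in which all three values are larger than zero, a variant (a sketch; for the example matrix this keeps just [4 8 7], since [0 3 1] contains a zero):
subA = A(3:5,:);
subA(all(subA > 0, 2), :)   % keep rows whose values are all positive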
If you had posted syntactically correct Matlab it would have been easier for me to cut and paste your test data into my Matlab session.
I'm modelling this answer off of A(find(A > 0)):
d = pdist(medoidContainer(i,1:3));
distances = d(find(d > 0));
This will give you a vector of values in the distances variable. The reason pdist(medoidContainer(i,1:3) > 0) does not work is that the > 0 comparison is applied to medoidContainer(i,1:3) itself, before pdist is ever called, so pdist operates on a logical matrix instead of selecting the positive distances; since medoidContainer(i,1:3) and pdist's output likely have different dimensions, the comparison does not give the right indexes.