How do I write a script to find the cost of each item in a list if only the total cost is known? (MATLAB)

I am trying to write a script in MATLAB for my class. The scenario is that there are four different types of pens. I only know the total cost of all four pens (total is not actually given to me). I am trying to find the individual cost of each different type of pen. My 3 "friends" also each bought the four pens themselves. That makes for a total of 16 pens among 4 people. Everyone's total cost should be the same. The book suggests creating a matrix for the pens made up of columns for each different type of pen and rows for each of the people (4x4). It also says to have a column vector for the totals each person spent on the pens, which I presume would all be the same. I am stuck and really not sure how to go about solving this since I do not know the cost of even one of the pens. Any help would greatly be appreciated.

@TTT is right: linear algebra solves your task. The great thing about MATLAB is that it can do the linear algebra directly, without the fuss of building for-loops.
Here is a simple example that should suit your case.
Footnote:
Note that the matrix inversion with inv() will be flagged as inefficient by the MATLAB editor, because it is faster and more accurate to solve the system directly as NumPens\total than to explicitly compute the inverse of the matrix first -- but for teaching linear algebra, the explicit inverse is clearer.
total = [17;13;12;27]; % vector 4x1 (number of persons x 1)
NumPens = [1 1 3 1
1 0 1 1
0 1 0 2
3 0 1 1]; % matrix 4x4 (number of persons x number of pen types)
% total = NumPens * x % original system
x = inv(NumPens) * total % solve for the cost of each pen type
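For completeness, the backslash form mentioned in the footnote solves the same system without forming the inverse explicitly:
x = NumPens \ total % solves NumPens * x = total directly (preferred in practice)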

Related

Is my implementation of confusion matrix correct? Or is something else at fault here?

I have trained a multi class svm classifier with 5 classes, i.e. svm(1)...svm(5).
I then used 5 images that were not used during the training of these classifiers for testing.
These 5 images are then tested with their respective classifier, i.e. if 5 images were taken from class one, they are tested against the classifier for that same class.
predict = svmclassify(svm(i_t),test_features);
The variable predict holds a 5-by-1 vector showing the result:
-1
1
1
1
-1
I sum these and then insert the sum into a diagonal matrix.
Ideally it should be a diagonal matrix with 5 written diagonally when all images are correctly classified. But the result is very poor, and in some cases I am even getting negative results. I just want to verify whether this poor result is because my confusion matrix is not accurate or whether I should use some other feature extractor.
Here is the code I wrote:
svm_table = [];
for i_t = 1:numel(svm)
    test_folder = [Path_training folders(i_t).name '\']; % select writer
    feature_count = 1; % initialize count for feature vector accumulation
    for j_t = 6:10 % these 5 images were not used for training
        [img,map] = imread([test_folder imlist(j_t).name]);
        test_img = imresize(img, [100 100]);
        test_img = imcomplement(test_img);
        % Features extracted here for each image.
        % The feature vector for each image is a 1 x 16 vector.
        test_features(feature_count,:) = Features_extracted;
        % The feature vectors are accumulated in a single matrix. Each row is an image.
        feature_count = feature_count + 1; % increment the count
    end
    test_features(isnan(test_features)) = 0; % locate NaN and replace with 0
    % I was getting NaN in some images, which was causing problems with svm, so I just replaced them with 0
    predict = svmclassify(svm(i_t),test_features); % produce column vector of predictions
    svm_table(end+1,end+1) = sum(predict); % sum them and add to the matrix diagonally
end
This is what I am getting. It looks like a confusion matrix, but the result is very poor.
-1 0 0 0 0
0 -1 0 0 0
0 0 3 0 0
0 0 0 1 0
0 0 0 0 1
So I just want to know what is at fault here: my implementation of the confusion matrix, my way of testing the SVM, or my selection of features?
I would like to add some issues:
You mention that: << These 5 images are then tested with their respective classifier. i.e. If 5 images were taken from class one they are tested against the same class. >>
You are never supposed to know the class (category) of test images. Of course, you need to know the test category labels for calculating various metrics such as accuracy, precision, confusion matrix etc. Apart from that, when you are using SVM to determine which class the example belongs to, you have to try all the SVMs.
There are two popular ways of training and testing multi-class SVMs, namely one-vs-all and one-vs-one approach. Read this answer and its corresponding question to understand them in detail.
I don't know if MATLAB's SVM is capable of doing multiclass classification, but if you use LIBSVM then it uses the one-vs-one approach. It will also do the testing for you correctly. However, if you want to design your own one-vs-one classifier, this is how you should proceed:
Say you have 5 classes; then train all possible pairs of classes = 5C2 = 10 pairs ({1,2}, {1,3}, ..., {1,5}, {2,3}, ..., {4,5}). While testing, you have to apply all 10 models and count the votes to decide the final result. For example, suppose we train models for 4 pairs, ({1 vs 2}, {1 vs 3}, {2 vs 1}, {2 vs 3}), and the binary outputs of the 4 models are {1,1,0,1} respectively, each output indicating which of the two classes in that pair won. That means your 4 predicted classes are {1,1,1,2}. Therefore, the final class is 1, since it collects the most votes.
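As a rough sketch of that voting step (purely illustrative; pairwiseModels is a hypothetical cell array holding the 10 pairwise SVMs, each trained with its two class labels as the group vector):
votes = zeros(numel(pairwiseModels), 1);
for k = 1:numel(pairwiseModels)
    % each binary model returns one of the two class labels it was trained on
    votes(k) = svmclassify(pairwiseModels{k}, test_features(1,:));
end
finalClass = mode(votes); % the class with the most votes wins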
Once you get all the predicted labels, then you can actually use the command confusionmat to get the confusion matrix. If you want to make your own, then make a 5x5 matrix of zeros. Add a 1 to the position (actual label, predicted label) i.e. if the actual class was 2 and you predicted it as 3, then add 1 at the position (2nd row, 3rd col) in the matrix.
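For instance, once actualLabels and predictedLabels (hypothetical vectors with one entry per test image, values 1..5) are available, a minimal sketch would be:
C = confusionmat(actualLabels, predictedLabels); % 5x5 confusion matrix
% or build the same matrix by hand:
C2 = zeros(5);
for n = 1:numel(actualLabels)
    C2(actualLabels(n), predictedLabels(n)) = C2(actualLabels(n), predictedLabels(n)) + 1;
end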
Several issues that I can see...
1) What you're using is not really a multi class SVM. You're taking several different SVM models and applying them to the same test data (not really the same thing). You need to look at the documentation for svmtrain. When you use it you give it two kinds of data: the training data (parameter vectors for each training image) and the Group data (a vector of classes for the images associated with those vectors). What you get will be one SVM model which will decide between the options. (I usually use libsvm, so I'm not that familiar with MATLAB's SVM implementation, but that should be the gist of it.)
2) Your confusion matrix is derived incorrectly (see: http://en.wikipedia.org/wiki/Confusion_matrix). Start by making a 5x5 zeros matrix to hold the confusion matrix. Loop through each of your test images and let the SVM model classify the image (it should pick 1 of the five possibilities). Add 1 at the proper position of the confusion matrix. So if the image should classify as a 3 and the SVM classifies it as a 4, you should add 1 to the (3,4) position...

Efficient chi-squared test for independence in MATLAB

I have a bunch of X values and a bunch of Y1, Y2, Y3 values. I want to test the independence between X and Y1, X and Y2, and X and Y3. How can I do this efficiently in MATLAB? My variables are categorical.
I can use crosstab, like crosstab(X,Y1), and get the p-values to see the independence/dependence. But I would have to iterate over Y1, Y2 and Y3 separately, and this will take a lot of time.
I have around 20000 Ys. So is there any way to efficiently get all 20,000 p-values at once in MATLAB?
X Y1
1 0
1 0
2 0
2 1
3 0
3 1
3 1
3 1
I think that to find out whether the vectors are linearly dependent, you can try to find coefficients that combine them to zero: vectors v1, ..., vn are linearly dependent if there exist scalars c1, ..., cn, not all zero, such that c1*v1 + ... + cn*vn = 0. If you can find such not-all-zero scalars for your vectors, then they are linearly dependent.
You can check this video for more info.
But if you want to know which of those vectors are more similar, the best way is analysis of variance (or covariance).
Analysis of variance (ANOVA) is a collection of statistical models used to analyze the differences between group means and their associated procedures (such as "variation" among and between groups). In ANOVA setting, the observed variance in a particular variable is partitioned into components attributable to different sources of variation.
ANOVA is implemented in MATLAB and can be done for single and multiple factors.
The functions are well documented in MATLAB and you can find them here.
To learn it easily you can check this lesson on youtube.
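Coming back to the crosstab approach from the question: crosstab returns the chi-squared p-value as its third output, so a minimal sketch (assuming X is a column vector and Ys is a matrix with one Y variable per column) is simply to preallocate and loop:
numY = size(Ys, 2);           % e.g. 20000 columns, one per Y variable
p = zeros(numY, 1);
for k = 1:numY
    [~, ~, p(k)] = crosstab(X, Ys(:, k)); % third output is the chi-squared p-value
end
This still loops, but with preallocation it is usually manageable; a fully vectorized version would require building the contingency tables manually.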

How to resample with interp1 in Matlab when input vectors are of different length

I have two variables in a .mat file here:
https://www.yousendit.com/download/UW13UGhVQXA4NVVQWWNUQw
testz is a vector of cumulative distance (in meters, monotonically and regularly increasing)
testSDT is a vector of integrated (cumulative) sound wave travel time (in milliseconds) generated using the distance vector and a vector of velocities
(there is an intermediate step of creating interval travel times)
Since velocity is a continuously varying function, the resulting interval travel times and also the integrated travel times are non-integer and variable in magnitude.
What I want is to resample the distance vector at regular time intervals (e.g. 1 ms, 2 ms, ..., n ms)
What makes it difficult is that the maximum travel time, 994.6659, is less than the number of samples in the 2 vectors, therefore it is not straightforward to use interp1.
i.e.:
X=testSDT -> 1680 samples
Y=testz -> 1680 samples
XI=[1:1:994] -> 994 samples
This is the code I've come up with. It is a working code and it is not too bad I think.
%% Initial chores
M=fix(max(testSDT));
L=(1:1:M);
%% Create indices
% this loops finds the samples in the integrated travel time vector
% that are closest to integer milliseconds and their sample number
for i=1:M
    [cl(i) ind(i)] = min(abs(testSDT-L(i)));
    nearest(i) = testSDT(ind(i));
end
%% Remove duplicates
% this is necessary to remove duplicates in the index vector (happens in this test).
% For example: 2.5 ms would be the closest to both 2 ms and 3 ms
[clsst,ia,ic] = unique(nearest);
idx=(ind(ia));
%% Interpolation
% this uses the index vectors to resample the depth vectors at
% integer times
newz=interp1(clsst,testz(idx),[1:1:length(idx)],'cubic')';
As far as I can see there is one issue with this code:
I rely on the vector idx as my XI for interpolation. Vector idx is 1 sample shorter than vector ind (one duplicate was removed).
Therefore my new times will stop one millisecond short. This is a very small issue, and duplicates are unlikely, but I am wondering if anybody can think of a workaround, or of a different way to approach the problem altogether.
Thank you
If I understand you correctly, you want to extrapolate to that extra point.
You can do this in many ways; one is to add that extra point to the interp1 line.
If you have some function you expect to follow your data, you can use it by fitting it to the data and then obtaining that extra point, or with a tool like fnxtr.
But I have a problem understanding what you want because of the way you used the line. The third argument you use, [1:1:length(idx)], is just the series [1 2 3 ...]. Usually when interpolating, one uses some vector x_i of points of interest, and I doubt your points of interest happen to be the integers 1:length(idx); what you probably want is [1:length(idx) xi], where xi is the x-axis value of that extra point.
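As a minimal sketch of the extrapolation variant (assuming the variables from the question; 'pchip' here plays the role of the legacy 'cubic' method used above):
M = fix(max(testSDT));
newz = interp1(clsst, testz(idx), 1:M, 'pchip', 'extrap')'; % extrapolate to reach the last integer millisecond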
EDIT:
Instead of the loop, just build matrix forms of L and testSDT; the matrix operation is somewhat faster at doing the min(abs(...)):
MM = ones(numel(testSDT),1)*L;   % replicate the row vector L down the rows
TT = testSDT*ones(1,numel(L));   % replicate the column vector testSDT across the columns
[cl ind] = min(abs(TT-MM));      % column-wise minima: closest sample to each integer millisecond
nearest = testSDT(ind);

Creating a new probabilistic matrix from two existing ones according to prespecified rules in MATLAB

I have a problem in my MATLAB code. Let me first give you some explanation about the issue. I have two matrices which represent probabilities of specific outcomes of events. The first one is called DemandProbabilityMatrix or in short DemandP. Entry (i,j) shows the probability that item i is demanded j many times. Similarly, we have a ReturnProbabilityMatrix, i.e. ReturnP. An element of type (i,j) stores the probability that item i is returned j many times.
We want to compute the net demand probability out of these two matrices. For an example:
DemandP=[ .4 .5 .1]
ReturnP=[ .2 .3 .5]
In this case we have 1 item and it can be demanded or returned either 1, 2 or 3 times with the given probabilities. To be more specific, that item will be demanded exactly once with probability .4.
Then we need to compute the net demand. In this case, net demand can be -2,-1,0,1 or 2. For instance in order to get a net demand of -1 we can either have a demand of 1 and return of 2 or demand of 2 and return of 3. Thus we have
NetDemandP(1,2)= DemandP(1,1)*ReturnP(1,2)+DemandP(1,2)*ReturnP(1,3).
Thus NetDemandP should look like:
NetDemandP=[.20 .37 .28 .13 .02]
I can do this with nested for loops but I'm trying to come up with a faster way. In case it helps, I have the following for-loop solution, where I denotes the number of rows in ReturnP and DemandP, and J+1 denotes the number of columns in those matrices.
NetDemandP=zeros(I,2*J+1);
for i=1:I
    for j=1:J+1
        for k=1:J+1
            NetDemandP(i,j-k+J+1)=NetDemandP(i,j-k+J+1)+DemandP(i,j)*ReturnP(i,k);
        end
    end
end
Thanks in advance
What you want is the convolution of your probability density functions. Or, more specifically, you want the convolution of the demand density with the reverse of the return density. This is easily achieved in Matlab. For example:
DemandP = [.4 .5 .1];
ReturnP = [.2 .3 .5];
NetDemandP = conv(DemandP,fliplr(ReturnP))
If you have matrices instead of vectors, then just iterate through the rows:
for i = 1:size(DemandP,1)
    NetDemandP(i,:) = conv(DemandP(i,:),fliplr(ReturnP(i,:)));
end
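For the example in the question this indeed reproduces the expected result: conv([.4 .5 .1], fliplr([.2 .3 .5])) returns [0.20 0.37 0.28 0.13 0.02]. If speed matters, preallocating NetDemandP = zeros(size(DemandP,1), 2*size(DemandP,2)-1) before the loop avoids growing the array on every iteration.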

Matlab: how to find which variables from dataset could be discarded using PCA in matlab?

I am using PCA to find out which variables in my dataset are redundant due to being highly correlated with other variables. I am using the MATLAB function princomp on the data, previously normalized using zscore:
[coeff, PC, eigenvalues] = princomp(zscore(x))
I know that eigenvalues tell me how much of the dataset's variation each principal component covers, and that coeff tells me how much of the i-th original variable is in the j-th principal component (where i - rows, j - columns).
So I assumed that to find out which variables out of the original dataset are the most important and which are the least I should multiply the coeff matrix by eigenvalues - coeff values represent how much of every variable each component has and eigenvalues tell how important this component is.
So this is my full code:
[coeff, PC, eigenvalues] = princomp(zscore(x));
e = eigenvalues./sum(eigenvalues);
abs(coeff)/e
But this does not really show anything - I tried it on the following set, where variable 1 is fully correlated with variable 2 (v2 = v1 + 2):
v1 v2 v3
1 3 4
2 4 -1
4 6 9
3 5 -2
but the results of my calculations were the following:
v1 0.5525
v2 0.5525
v3 0.5264
and this does not really show anything. I would expect the result for variable 2 to show that it is far less important than v1 or v3.
Which of my assumptions is wrong?
EDIT I have completely reworked the answer now that I understand which assumptions were wrong.
Before explaining what doesn't work in the OP, let me make sure we'll have the same terminology. In principal component analysis, the goal is to obtain a coordinate transformation that separates the observations well, and that may make it easy to describe the data, i.e. the different multi-dimensional observations, in a lower-dimensional space. Observations are multidimensional when they're made up of multiple measurements. If there are fewer linearly independent observations than there are measurements, we expect at least one of the eigenvalues to be zero, because e.g. two linearly independent observation vectors in a 3D space can be described by a 2D plane.
If we have an array
x = [ 1 3 4
2 4 -1
4 6 9
3 5 -2];
that consists of four observations with three measurements each, princomp(x) will find the lower-dimensional space spanned by the four observations. Since there are two co-dependent measurements, one of the eigenvalues will be near zero, since the space of measurements is only 2D and not 3D, which is probably the result you wanted to find. Indeed, if you inspect the eigenvectors (coeff), you find that the first two components are extremely obviously collinear
coeff = princomp(x)
coeff =
0.10124 0.69982 0.70711
0.10124 0.69982 -0.70711
0.9897 -0.14317 1.1102e-16
Since the first two components are, in fact, pointing in opposite directions, the values of the first two components of the transformed observations are, on their own, meaningless: [1 1 25] is equivalent to [1000 1000 25].
Now, if we want to find out whether any measurements are linearly dependent, and if we really want to use principal components for this, because in real life measurements may not be perfectly collinear and we are interested in finding good vectors of descriptors for a machine-learning application, it makes a lot more sense to consider the three measurements as "observations", and run princomp(x'). Since there are thus only three "observations", but four "measurements", the fourth eigenvalue will be zero. However, since there are two linearly dependent observations, we're left with only two non-zero eigenvalues:
eigenvalues =
24.263
3.7368
0
0
To find out which of the measurements are so highly correlated (not actually necessary if you use the eigenvector-transformed measurements as input for e.g. machine learning), the best way would be to look at the correlation between the measurements:
corr(x)
ans =
1 1 0.35675
1 1 0.35675
0.35675 0.35675 1
Unsurprisingly, each measurement is perfectly correlated with itself, and v1 is perfectly correlated with v2.
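As a small, hedged sketch of turning that correlation check into an automatic screen for redundant variables (the 0.95 threshold is an arbitrary choice, not from the original answer):
C = corr(x);                            % pairwise correlations of the measurements
[i, j] = find(triu(abs(C) > 0.95, 1));  % strongly correlated pairs, upper triangle only
candidates = [i j]                      % each row lists a pair; one member of each pair could be dropped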
EDIT2
but the eigenvalues tell us which vectors in the new space are most important (cover the most of variation) and also coefficients tell us how much of each variable is in each component. so I assume we can use this data to find out which of the original variables hold the most of variance and thus are most important (and get rid of those that represent small amount)
This works if your observations show very little variance in one measurement variable (e.g. where x = [1 2 3;1 4 22;1 25 -25;1 11 100];, and thus the first variable contributes nothing to the variance). However, with collinear measurements, both vectors hold equivalent information, and contribute equally to the variance. Thus, the eigenvectors (coefficients) are likely to be similar to one another.
In order for @agnieszka's comments to keep making sense, I have left the original points 1-4 of my answer below. Note that #3 was in response to the division of the eigenvectors by the eigenvalues, which to me didn't make a lot of sense.
1) The vectors should be in rows, not columns (each vector is an observation).
2) coeff returns the basis vectors of the principal components, and its order has little to do with the original input.
3) To see the importance of the principal components, you use eigenvalues/sum(eigenvalues).
4) If you have two collinear vectors, you can't say that the first is important and the second isn't. How do you know that it shouldn't be the other way around? If you want to test for collinearity, you should check the rank of the array instead, or call unique on normalized (i.e. norm equal to 1) vectors.
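A tiny illustrative sketch of that rank check (standardizing with zscore first is my own assumption, so that the affine relation v2 = v1 + 2 also collapses the rank):
if rank(zscore(x)) < size(x, 2)
    disp('At least one measurement is a linear combination of the others.')
end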