Limiting the number of decimal places used by MATLAB for calculation - matlab

I was checking the similarity of two random matrices using two methods. After addition, the sum of the elements of the first matrix comes out as 0.7095, and the same 0.7095 for the second matrix, but when I tried to find the difference of the sums, instead of zero it gave a value very close to zero. Later I checked in the workspace and found out that the first number is actually 0.709485632903040 and the second is 0.709485632903037. It is extremely important for me that the difference vector be zero, because I use that zero in later stages of my program. If MATLAB did its calculations to a precision of only 4 or 5 digits I could achieve that. I want to limit the calculation to 4 or 5 digits; I tried digits(4) but it is not working. I want MATLAB to calculate to a precision of 4 decimal places only (not just for display, but for the calculation inside MATLAB). Is there a way to do that?
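For context: digits(4) only affects variable-precision arithmetic (vpa), not ordinary double arithmetic, which is why it had no effect here. A common workaround (a minimal sketch, not part of the original question) is to round both results to the desired number of decimals before comparing, or to compare against a tolerance instead of testing for exact equality:
a = 0.709485632903040;
b = 0.709485632903037;
% Option 1: round both values to 4 decimal places before comparing
isequal(round(a*1e4)/1e4, round(b*1e4)/1e4)   % true
% Option 2: compare with a tolerance instead of exact equality
tol = 1e-10;
abs(a - b) < tol                              % true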

Related

How to quickly/easily merge and average data in matrix in MATLAB?

I have got a matrix of AirFuelRatio values at certain engine speeds and throttle positions (e.g. the AFR is 14 at 2500 rpm and 60% throttle).
The matrix is currently 25x10; the engine speed ranges from 1200-6000 rpm in steps of 200 rpm, and the throttle ranges from 0.1-1 in steps of 0.1.
Say I have measured new values, e.g. an AFR of 13.5 at 2138 rpm and 74.3% throttle. How do I merge that into the matrix? The closest matrix values are 2000 or 2200 rpm and 70 or 80% throttle. Also, I don't want the new data to replace the older data. How can I make the matrix take this value in and adjust its values to take the new value into account?
Simplified, I have the following x-axis values (top row) and 1x4 matrix (below):
2 4 6 8
14 16 18 20
I just measured an AFR value of 15.5 at x = 3. If you interpolate the AFR matrix you would have gotten 15, so this value is out of the ordinary.
I want the matrix to take this data and adjust the other values to it, i.e. average everything, so that the more data I put in, the more reliable and accurate the matrix becomes. So in the simplified case the matrix would become something like:
2 4 6 8
14.3 16.3 18.2 20.1
So it averages between old and new data. I've read the documentation about concatenation, but I believe my problem can't be solved with that function.
EDIT: To clarify my question, here is a visual clarification.
The 'matrix' keeps the same size of 5 points while a new data point is added. It takes the new data into account and adjusts the matrix accordingly. This is what I'm trying to achieve. The more scattered data I get, the more accurate the matrix becomes. (And yes, the green dot in this case would be an outlier, but it explains my case.)
Cheers
This is not a matter of a simple merge/average. I don't think there's a quick method to do this unless you make simplifying assumptions. What you want is a statistical inference of the underlying trend. I suggest using Gaussian process regression to solve this problem. There's a great MATLAB toolbox by Rasmussen and Williams called GPML: http://www.gaussianprocess.org/gpml/
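For a flavor of what that might look like, here is a minimal sketch on the simplified 1-D data from the question. This assumes the GPML toolbox is on the path and uses its v4 naming (gp, minimize, infGaussLik, etc.); the hyperparameter initializations are placeholders, not tuned values:
x  = [2; 3; 4; 6; 8];           % measured x-values
y  = [14; 15.5; 16; 18; 20];    % measured AFR values
xs = (2:2:8)';                  % grid points where estimates are wanted
meanfunc = @meanConst;          % constant mean function
covfunc  = @covSEiso;           % squared-exponential covariance
likfunc  = @likGauss;           % Gaussian measurement noise
hyp = struct('mean', mean(y), 'cov', [0; 0], 'lik', log(0.5));
% Optimize the hyperparameters, then predict on the grid
hyp = minimize(hyp, @gp, -100, @infGaussLik, meanfunc, covfunc, likfunc, x, y);
[mu, s2] = gp(hyp, @infGaussLik, meanfunc, covfunc, likfunc, x, y, xs);
% mu holds the smoothed AFR estimates, s2 their predictive variances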
This sounds more like a data-fitting task to me. What you are suggesting is that you have a set of measurements for which you wish to get the best linear fit. Instead of producing a table of data, what you need is a table of values, and then to find the best fit to those values. So, for example, I could create a matrix, A, which has all of the recorded values. Let's start with:
A=[2,14;3,15.5;4,16;6,18;8,20];
I now need a matrix of points for the inputs to my fitting curve (which, in this instance, let's assume is linear, so it is the set of values 1 and x):
B=[ones(size(A,1),1), A(:,1)];
We can find the linear fit parameters (where it cuts the y-axis and the gradient; for this data, roughly 12.2586 and 0.9655) using:
B\A(:,2)
Or, if you want the points that the line goes through for the values of x:
B*(B\A(:,2))
This results in the points:
2    14.1897
3    15.1552
4    16.1207
6    18.0517
8    19.9828
which represents the best fit line through these points.
You can manually extend this to polynomial fitting if you want, or you can use the MATLAB function polyfit. To manually extend the process you should use a revised B matrix. You can also produce only a specified set of points in the last line. The complete code would then be:
% Original measurements - could be read in from a file,
% but for this example we will set it to a matrix
% Note that not all tabulated values need to be present
A=[2,14; 3,15.5; 4,16; 5,17; 8,20];
% Now create the polynomial values of x corresponding to
% the data points. Choosing a second order polynomial...
B=[ones(size(A,1),1), A(:,1), A(:,1).^2];
% Find the polynomial coefficients for the best fit curve
coeffs=B\A(:,2);
% Now generate a table of values at specific points
% First define the x-values
tabinds = 2:2:8;
% Then generate the polynomial values of x
tabpolys=[ones(length(tabinds),1), tabinds', (tabinds').^2];
% Finally, multiply by the coefficients found
curve_table = [tabinds', tabpolys*coeffs];
% and display the results
disp(curve_table);
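For comparison (not part of the original answer), the same quadratic fit can be obtained with polyfit/polyval. Note that polyfit returns coefficients in descending order of powers, whereas the backslash solution above produces them in ascending order:
% Equivalent fit using polyfit (coefficients in descending powers)
pcoeffs = polyfit(A(:,1), A(:,2), 2);
curve_table2 = [tabinds', polyval(pcoeffs, tabinds')];
disp(curve_table2);   % matches curve_table up to rounding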

spdiags and feature scaling

According to the libsvm FAQ, the following one-line code scales each feature to the range [0,1] in MATLAB:
(data - repmat(min(data,[],1),size(data,1),1))*spdiags(1./(max(data,[],1)-min(data,[],1))',0,size(data,2),size(data,2))
so I'm using this code:
v_feature_trainN=(v_feature_train - repmat(mini,size(v_feature_train,1),1))*spdiags(1./(maxi-mini)',0,size(v_feature_train,2),size(v_feature_train,2));
v_feature_testN=(v_feature_test - repmat(mini,size(v_feature_test,1),1))*spdiags(1./(maxi-mini)',0,size(v_feature_test,2),size(v_feature_test,2));
where I use the first one to train the classifier and the second one to classify...
In my humble opinion, scaling should instead be performed like this:
v_feature_trainN2=(v_feature_train -min(v_feature_train(:)))./(max(v_feature_train(:))-min((v_feature_train(:))));
v_feature_test_N2=(v_feature_test -min(v_feature_train(:)))./(max(v_feature_train(:))-min((v_feature_train(:))));
Now I compared the classification results using these two scaling methods, and the first one outperforms the second one.
The questions are:
1) What exactly does the first method do? I don't understand it.
2) Why does the code suggested by libsvm outperform the second one (e.g. 80% vs 60%)?
Thank you so much in advance
First of all:
The code described in the libsvm FAQ does something different from your code:
It maps every column independently onto the interval [0,1].
Your code, however, uses the global min and max to map all the columns with the same affine transformation, instead of a separate transformation for each column.
The first code works in the following way:
(data - repmat(min(data,[],1),size(data,1),1))
This subtracts each column's minimum from the entire column. It does this by computing the row vector of minima min(data,[],1) which is then replicated to build a matrix the same size as data. Then it is subtracted from data.
spdiags(1./(max(data,[],1)-min(data,[],1))',0,size(data,2),size(data,2))
This generates a diagonal matrix. The entry (i,i) of this matrix is 1 divided by the difference of the maximum and the minimum of the ith column: max(data(:,i))-min(data(:,i)).
Right-multiplying by this diagonal matrix means: multiply each column of the left matrix by the corresponding diagonal entry. This effectively divides column i by max(data(:,i))-min(data(:,i)).
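A tiny concrete example of this column scaling (illustrative only, not from the original answer):
M = [1 10; 2 20];
D = diag([1, 0.1]);   % dense stand-in for the spdiags matrix
M*D                   % [1 1; 2 2]: column 1 scaled by 1, column 2 by 0.1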
Instead of using a sparse diagonal matrix, you could do this even more efficiently with bsxfun:
bsxfun(@rdivide, ...
       bsxfun(@minus, data, min(data,[],1)), ...
       max(data,[],1) - min(data,[],1))
which is the MATLAB way of writing: divide the difference of each column and its respective minimum by the difference of that column's max and min.
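As an aside (not in the original answer): in MATLAB R2016b and later, implicit expansion makes bsxfun unnecessary here:
% Same column-wise scaling using implicit expansion (R2016b+)
scaled = (data - min(data,[],1)) ./ (max(data,[],1) - min(data,[],1));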
I know this has already been answered correctly, but I would like to present another solution that I think is also correct and that I found more intuitive/shorter than the one presented by knedlsepp. I am new to MATLAB, and as I was studying knedlsepp's solution I found it more intuitive to solve this problem with the following formula:
function [ output ] = feature_scaling( y )
    % Scale each column of y to [0,1] using a dense diagonal matrix
    output = (y - repmat(min(y),size(y,1),1)) * diag(1./(max(y) - min(y)));
end
I find it a bit easier to use diag this way instead of spdiags, but I believe it produces the same result for the purpose of this exercise.
Right-multiplying the first term by the diagonal matrix effectively multiplies each column of (y - min(y)) by its own entry 1/(max(y(:,i)) - min(y(:,i))), i.e. it divides each column by that column's range, achieving the desired result.
In case someone prefers a shorter version, maybe this can be of help.
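A quick usage check with made-up data (hypothetical values, just to illustrate):
y = [2 10; 4 20; 6 40];
scaled = feature_scaling(y)
% scaled =
%        0        0
%   0.5000   0.3333
%   1.0000   1.0000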

Using corrcoef and all p values returned as 1 or 0

I'm trying to test the correlation of 4 sets of data using the corrcoef function in MATLAB. Each one contains 50 values. Starting out, the data are in a .csv file with 4 columns of 50 values each. I've put all 4 sets into a matrix labeled y:
y=[[set1] [set2] [set3] [set4]]
So y is a matrix with 4 columns and 50 rows.
I've called the corrcoef function using
[r,p]=corrcoef(y)
When I run this code, I get a 4x4 r matrix with a diagonal of 1's and sets of identical values above and below it. This seems correct, because the 1's must be the correlation of each set with itself, and the identical values above and below the diagonal are just the correlations of the same two sets repeated (i.e. set2 vs. set1 is the same as set1 vs. set2).
However, the matrix of p-values returned seems all wrong, and I'm not sure why. I get a 4x4 matrix with a diagonal of 1's, and all the values above and below are 0. Clearly this is incorrect, because it's saying that the probability of getting the perfect correlations by chance is 100%, while getting the "imperfect" correlations by chance is almost impossible.
Can anyone help show me what I'm doing wrong here? I can supply more details if needed.
Edit: I just wanted to mention that I'm trying to follow the instructions from the MathWorks help page:
http://www.mathworks.com/help/matlab/ref/corrcoef.html
In their example they do show the p-values along the diagonal as all being equal to 1; can anyone tell me why that is?
Also, the p-values aren't exactly equal to 0; they're just absurdly small, like 5.9601e-10, which is what is making me feel like something is wrong.
Edit 2: I've also tried computing the correlation coefficient between two of the sets using Excel's CORREL function, and it gives me the same value for r as MATLAB does.
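(A quick sanity check, not from the original post: with the default format short, tiny entries in a matrix that also contains 1's are displayed as 0, which is likely what happened here. Switching the display format shows the actual values:)
format long
[r, p] = corrcoef(y);
disp(p)        % off-diagonal entries are small but nonzero, e.g. 5.9601e-10
format short   % restore the default display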

How does matlab compare two complex numbers?

I saw a file in MATLAB which used max() on a matrix whose entries are complex numbers. I can't understand how MATLAB compares two complex numbers.
ls1=max(tfsp');
Here, tfsp contains complex numbers.
The complex numbers are compared first by magnitude, then by phase angle (if there is a tie for the maximum magnitude). Note also that ' is the complex-conjugate transpose; for a plain transpose of complex data use .' instead.
From help max:
When X is complex, the maximum is computed using the magnitude
MAX(ABS(X)). In the case of equal magnitude elements, then the phase
angle MAX(ANGLE(X)) is used.
NaN's are ignored when computing the maximum. When all elements in X
are NaN's, then the first one is returned as the maximum.
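For instance (a small illustrative example, not from the quoted help text), these three numbers all have magnitude 5, so the tie is broken by phase angle:
v = [3+4i, 5, 4+3i];   % abs(v) = [5 5 5], a three-way magnitude tie
angle(v)               % approximately [0.9273, 0, 0.6435]
max(v)                 % returns 3+4i, the element with the largest angle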

Probability of generating a particular random number, such as in MATLAB

In real probability, there is a 0% chance that a random number p, selected uniformly from all of the real numbers in the interval (0,1), will be exactly 0.5. However, what are the odds that
rand == 0.5
in MATLAB? I suppose this is like asking how many double-precision numbers lie between zero and one, or maybe there are other factors at play.
I have no particular info on MATLAB's generator...
In general, even simple pseudo-random generators have cycles long enough to cover all values representable by a double.
If MATLAB uses some other form of generating random numbers it would be even better - so assume it uniformly covers the whole range of double values.
I believe the probability would be: the distance between representable numbers around the value you are interested in, divided by the length of the interval. See "What is the minimal step in double data type? (.NET)" for a discussion of the distance.
Looking at this question, we see that there are 2^62 - 2^52 doubles in the interval (0,1). Therefore, the probability of picking any single one (like 0.5) would be roughly equal to one divided by this number, or
>> p = 1/(2^62-2^52)
p =
2.170523997312134e-019
However, as horchler already indicated, it also depends on the type of random number generator you use, as well as MATLAB's implementation thereof. Sadly, I have only basic knowledge of the implementation details for each, but you can look here for a list of available random number generators in MATLAB and google a bit further for more precise numbers.
I am not sure whether Alexei was trying to say this, but inspired by him I think the probability will indeed be approximately the distance between adjacent doubles around 0.5.
Therefore I expect the probability to be approximately:
eps(0.5)
which evaluates to 1.1102e-16.
Given the monotonic nature of the gaps between consecutive doubles, I would actually expect this to hold:
eps(0.5-eps(0.5)) <= yourprobability <= eps(0.5)
implying a range of 5.5511e-17 to 1.1102e-16.
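These bounds are easy to check directly (a small verification sketch):
eps(0.5)              % 1.1102e-16 = 2^-53, the gap just above 0.5
eps(0.5 - eps(0.5))   % 5.5511e-17 = 2^-54, the gap just below 0.5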