I have the following model:
<bound method Model.summary of Class : LinearRegression
Schema
------
Number of coefficients : 18
Number of examples : 21613
Number of feature columns : 17
Number of unpacked features : 17
Hyperparameters
---------------
L1 penalty : 10000000000.0
L2 penalty : 0.0
Training Summary
----------------
Solver : fista
Solver iterations : 10
Solver status : Completed (Iteration limit reached).
Training time (sec) : 1.2776
Settings
--------
Residual sum of squares : 2842629034369063.5
Training RMSE : 364204.5762
Highest Positive Coefficients
-----------------------------
(intercept) : 274873.056
bathrooms : 8468.5311
grade : 842.068
sqft_living_sqrt : 350.0606
sqft_living : 24.4207
Lowest Negative Coefficients
----------------------------
No Negative Coefficients :
Does this mean that my equation would be:
Prediction = 274873.056 + 8468.5311[bathroom] + 842.068[grade]^2 + 350.0606[sqft_living_sqrt]^3 + 24.4207[sqft_living]^4
If that is correct, then how does the model know which features belong to power 2, power 3, etc.? If I change the order of the features, will the coefficients change?
I am not sure I follow you with the ^2 ... ^3 ... ^4?
The squaring (^2) happens inside the RMSE computation, not in the prediction equation.
This is what RMSE does:
Root Mean Squared Error
Find the differences between the original and predicted values.
Square the differences.
Sum all the squared differences.
Divide that sum by the number of values to get the mean.
Take the square root of the mean.
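In MATLAB that boils down to a one-liner. A minimal sketch (y and yhat are illustrative names for the original and predicted values, the numbers are made up):
y    = [200000; 350000; 500000];   % original values (made-up numbers)
yhat = [210000; 340000; 520000];   % predicted values (made-up numbers)
rmse = sqrt(mean((y - yhat).^2));  % square, average, then take the root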
You can see my worked example over at Math SE:
https://math.stackexchange.com/questions/3650442/simple-calculation-from-formula-rmse/3843518#3843518
Just substitute your values into the RMSE formula and calculate the RMSE.
Regards,
//Will
I have a matrix X which contains x and y values in columns 1 and 2 respectively.
I compute the distance between each pair of points:
Distance = pdist2(X,X);
But sometimes I run out of memory.
However, I use this matrix in a loop like this :
for i=1:n
find(Distance(i,:) <= epsilon);
.....
end
So, do you know how to compute, inside the loop, just row i of the matrix Distance?
Thanks
This is what I was looking for:
pdist2(X(i,:),X)
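With that, a minimal sketch of the loop from the question becomes (epsilon and n as defined there):
for i = 1:n
    d = pdist2(X(i,:), X);          % 1-by-n: distances from point i only
    neighbors = find(d <= epsilon); % points within epsilon of point i
    % .....
end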
G'day
Firstly, apologies for the poor wording - I'm at a bit of a loss as to how to describe this problem. I'm trying to calculate a conservative interpolation between two different vertical coordinate systems.
I have a vector of ocean transport values Ts that describes the amount of transport at different depth values S. These depths are unevenly spaced (and size(S) equals size(Ts)+1, as the values in S are the depths at the top and bottom over which each transport value applies). I want to interpolate (project?) this onto a vector of regularly spaced depths Z, where each new transport value Tz is formed from the values of Ts weighted by the amount of overlap.
I've drawn a picture of what I mean (sorry for the bad-quality webcam picture). I want to go from Ts1, Ts2, Ts3, ..., TsN (bottom lines) to Tz1, Tz2, ..., TzN (top lines). The locations in the x direction for these are s0, s1, s2, ..., sN and z0, z1, z2, ..., zN. An example of the 'weighted overlap' would be:
Tz1 = a/(s1-s0) * Ts1 + b/(s2-s1) * Ts2 + c/(s3-s2) * Ts3
where a, b and c are shown in the image as the length of overlap.
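In general (the same rule written as a formula, with overlap([a,b],[c,d]) denoting the length of the intersection of the two intervals):
T_{z,j} = \sum_i \frac{\mathrm{overlap}\left([z_{j-1}, z_j],\,[s_{i-1}, s_i]\right)}{s_i - s_{i-1}}\, T_{s,i}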
Some more details:
Example of z and s follow:
z = 0:5:720;
s = [222.69;...
223.74
225.67
228.53
232.39
237.35
243.56
251.17
260.41
271.5
284.73
300.42
318.9
340.54
365.69
394.69
427.78
465.11
506.62
551.98
600.54
651.2];
Note that I'm free to define z, but not s. Typically, z will be bigger than s (i.e. the smallest value in z will be smaller than in s, while the largest value in z will be larger than in s).
Help or tips greatly appreciated. Cheers,
Dave
I don't think there is an easy solution, as stated in the comments. I'll give it a go though:
One hypothesis first: we assume z0 > s0 in order for your problem to be defined.
The idea (for your example) would be to get to the array below:
1 (s1-z0) s1-s0 Ts1
1 (s2-s1) s2-s1 Ts2
1 (z1-s2) s3-s2 Ts3
2 (s3-z1) s3-s2 Ts3
2 (z2-s3) s4-s3 Ts4
3 (z3-z2) s4-s3 Ts4
......
Then we would be able to compute, for each row, overlap x transport / source-cell width (column2.*column4./column3), and then use accumarray to sum the results with respect to the indexes in the first column.
Now the hardest part is to get this array.
Suppose you have:
An Nx1 vector Ts
Two (N+1)x1 vectors s and z, with z(1) > s(1).
Vectsz=sort([s(2:end);z]); % Sorted vector of s and z values
In your case this vector should look like:
z0
s1
s2
z1
s3
z2
z3
...
The first column will serve as the subscript input to accumarray, so we'll want it to increase each time there is a z value in our vector Vectsz:
First=interp1(z,1:length(z),Vectsz,'previous'); % index of the last z value at or below each element
Second=[diff(Vectsz);0]; % Padded with a 0 to keep the right size
Temp=diff(s); % source-cell widths
Third=interp1(s(1:end-1),Temp,Vectsz,'previous'); % width of the source cell each interval falls in
This will just repeat the diff value every time you have a z value in your vector Vectsz.
The last column is built exactly like the third one:
Fourth=interp1(s(1:end-1),Ts,Vectsz,'previous'); % transport of the source cell each interval falls in
Now that the array is built, a call to accumarray is enough to get the final result:
Res=accumarray(First,Second.*Fourth./Third);
EDIT: There is actually no need for interp1 with the 'previous' option:
Vectsz=sort([s(2:end);z]);               % merged breakpoints of s and z
First=cumsum(ismember(Vectsz,z));        % target-cell subscript for each interval
Second=[diff(Vectsz);0];                 % interval widths (the overlaps), padded to size
idx=cumsum(ismember(Vectsz,s(2:end)))+1; % source-cell index for each interval
Diffs=[diff(s);Inf];                     % source-cell widths (Inf pad: no data past s(end))
Third=Diffs(idx);
Tspad=[Ts;0];                            % zero pad keeps Ts(idx) in bounds at and beyond s(end)
Fourth=Tspad(idx);
Res=accumarray(First,Second.*Fourth./Third);
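A hypothetical toy run (made-up numbers, chosen so that s(1) < z(1) < s(2) and z(end) > s(end)):
s  = [0; 1; 2; 3];         % source-cell edges
Ts = [10; 20; 30];         % transport in each source cell
z  = [0.5; 1.5; 2.5; 3.5]; % target-cell edges
Running the lines above then gives Res = [15; 25; 15; 0], where Res(k) is the transport falling into [z(k), z(k+1)); only the part of each target cell that overlaps the source cells contributes.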
I am training a one-vs-all SVM classifier. I used a 200 by 459 matrix to train the classifier using the VLFeat SVM trainer. (http://www.vlfeat.org/matlab/vl_svmtrain.html)
[W B] = vl_svmtrain(train_image_feats', tmp', .00001);
where train_image_feats' is a 200 by 459 matrix, and tmp' is the label vector, which is 1 by 459.
The above command trains the SVM with no problem, but I then get an error when computing the scores on the test matrix. The test matrix is obviously not the same size as the training matrix.
scores(i, :) = W'*test_image_feats' + B;
Where test_image_feats' is a 200 by 90 matrix. scores is a 9 by 459 matrix: 9 because there are 9 categories (labels) to classify, and 459 is the number of training images.
The above command gives the error:
Subscripted assignment dimension mismatch.
Error in svm_classify (line 56)
scores(i, :) = W'*test_image_feats' + B;
Edit: Full code added..
categories = unique(train_labels);
num_categories = length(categories);
scores = zeros([num_categories size(train_labels, 1)]); %train_labels is 459 by 1 size
for i=1:num_categories %there are 9 categories
tmp = strcmp(train_labels, categories{i});
tmp = tmp - (1-tmp); % turn logical 0/1 into -1/+1 labels
[W B] = vl_svmtrain(train_image_feats', tmp', .00001);
scores(i, :) = W'*test_image_feats' + B;
end
predicted_categories = cell(size(train_labels));
parfor i=1:size(test_image_feats,1)
image_scores = scores(:, i);
label_index = find(image_scores==max(image_scores));
predicted_categories{i}=categories(label_index);
end
Conceptually you are training a model with 459 training samples to predict the scores of 90 test samples.
scores = zeros([num_categories size(train_labels, 1)]);
isn't right, as it will have the size of the training set. In fact, you don't need to care about the size of the training set at all; you could train the model with 20 or 20,000 images and the prediction step wouldn't be any different.
scores has to be defined with the test set in mind:
scores = zeros([num_categories size(test_labels, 1)]);
Using 459 for both would only work if size(test_labels, 1) happened to equal size(train_labels, 1).
The problem is not with the right-hand side of the assignment, but with scores(i,:): you are trying to assign a 1-by-90 row vector into a row of scores that has 459 columns - this simply won't fit.
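A minimal corrected sketch of the loop (keeping the question's variable names, and assuming test_image_feats is 90-by-200, one row per test image):
num_test = size(test_image_feats, 1);      % 90 test images
scores = zeros(num_categories, num_test);  % 9-by-90: one row of scores per class
for i = 1:num_categories
    tmp = strcmp(train_labels, categories{i});
    tmp = tmp - (1-tmp);                   % 0/1 -> -1/+1 labels
    [W, B] = vl_svmtrain(train_image_feats', tmp', .00001);
    scores(i, :) = W'*test_image_feats' + B;  % 1-by-90 row now fits
end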
In the LIBLINEAR docs, we have
matlab> model = train(training_label_vector, training_instance_matrix [,'liblinear_options', 'col']);
-training_label_vector:
An m by 1 vector of training labels. (type must be double)
-training_instance_matrix:
An m by n matrix of m training instances with n features.
It must be a sparse matrix. (type must be double)
-liblinear_options:
A string of training options in the same format as that of LIBLINEAR.
-col:
if 'col' is set, each column of training_instance_matrix is a data instance. Otherwise each row is a data instance.
However, even after reading the homepage and looking at the docs, I can't find what the options for liblinear_options are.
Is this listed somewhere but I am clearly missing it?
Furthermore, since I am unable to find liblinear_options listed anywhere, I am stuck with the following question:
Does the train method use a linear SVM to develop a model?
LIBLINEAR is a linear classifier. Besides SVMs, it also includes a logistic-regression-based classifier. And yes, as its name indicates, the SVM uses a linear kernel.
You may check their GitHub page for the liblinear_options. I copied them here as well:
"liblinear_options:\n"
"-s type : set type of solver (default 1)\n"
" 0 -- L2-regularized logistic regression (primal)\n"
" 1 -- L2-regularized L2-loss support vector classification (dual)\n"
" 2 -- L2-regularized L2-loss support vector classification (primal)\n"
" 3 -- L2-regularized L1-loss support vector classification (dual)\n"
" 4 -- multi-class support vector classification by Crammer and Singer\n"
" 5 -- L1-regularized L2-loss support vector classification\n"
" 6 -- L1-regularized logistic regression\n"
" 7 -- L2-regularized logistic regression (dual)\n"
"-c cost : set the parameter C (default 1)\n"
"-e epsilon : set tolerance of termination criterion\n"
" -s 0 and 2\n"
" |f'(w)|_2 <= eps*min(pos,neg)/l*|f'(w0)|_2,\n"
" where f is the primal function and pos/neg are # of\n"
" positive/negative data (default 0.01)\n"
" -s 1, 3, 4 and 7\n"
" Dual maximal violation <= eps; similar to libsvm (default 0.1)\n"
" -s 5 and 6\n"
" |f'(w)|_1 <= eps*min(pos,neg)/l*|f'(w0)|_1,\n"
" where f is the primal function (default 0.01)\n"
"-B bias : if bias >= 0, instance x becomes [x; bias]; if < 0, no bias term added (default -1)\n"
"-wi weight: weights adjust the parameter C of different classes (see README for details)\n"
"-v n: n-fold cross validation mode\n"
"-q : quiet mode (no outputs)\n"
There have possibly been some new developments since this was posted. Running train with no arguments at the MATLAB prompt will print all the options - at least on R2020b with the version of LIBLINEAR that I just downloaded.
>> train
Usage: model = train(training_label_vector, training_instance_matrix, 'liblinear_options', 'col');
liblinear_options:
-s type : set type of solver (default 1)
for multi-class classification
0 -- L2-regularized logistic regression (primal)
1 -- L2-regularized L2-loss support vector classification (dual)
2 -- L2-regularized L2-loss support vector classification (primal)
3 -- L2-regularized L1-loss support vector classification (dual)
4 -- support vector classification by Crammer and Singer
5 -- L1-regularized L2-loss support vector classification
6 -- L1-regularized logistic regression
7 -- L2-regularized logistic regression (dual)
for regression
11 -- L2-regularized L2-loss support vector regression (primal)
12 -- L2-regularized L2-loss support vector regression (dual)
13 -- L2-regularized L1-loss support vector regression (dual)
for outlier detection
21 -- one-class support vector machine (dual)
-c cost : set the parameter C (default 1)
-p epsilon : set the epsilon in loss function of SVR (default 0.1)
-n nu : set the parameter nu of one-class SVM (default 0.5)
-e epsilon : set tolerance of termination criterion
-s 0 and 2
|f'(w)|_2 <= eps*min(pos,neg)/l*|f'(w0)|_2,
where f is the primal function and pos/neg are # of
positive/negative data (default 0.01)
-s 11
|f'(w)|_2 <= eps*|f'(w0)|_2 (default 0.0001)
-s 1, 3, 4, 7, and 21
Dual maximal violation <= eps; similar to libsvm (default 0.1 except 0.01 for -s 21)
-s 5 and 6
|f'(w)|_1 <= eps*min(pos,neg)/l*|f'(w0)|_1,
where f is the primal function (default 0.01)
-s 12 and 13
|f'(alpha)|_1 <= eps |f'(alpha0)|,
where f is the dual function (default 0.1)
-B bias : if bias >= 0, instance x becomes [x; bias]; if < 0, no bias term added (default -1)
-R : not regularize the bias; must with -B 1 to have the bias; DON'T use this unless you know what it is
(for -s 0, 2, 5, 6, 11)
-wi weight: weights adjust the parameter C of different classes (see README for details)
-v n: n-fold cross validation mode
-C : find parameters (C for -s 0, 2 and C, p for -s 11)
-q : quiet mode (no outputs)
col:
if 'col' is setted, training_instance_matrix is parsed in column format, otherwise is in row format
From this site,
The output node has a "threshold" t.
Rule:
If summed input ≥ t, then it "fires" (output y = 1).
Else (summed input < t) it doesn't fire (output y = 0).
How does y equal zero? Any ideas appreciated.
Neural networks have a so-called "activation function"; it's usually some form of sigmoid-like function that maps the summed input onto the output.
http://zephyr.ucd.ie/mediawiki/images/b/b6/Sigmoid.png
In your case the output happens to be 0 or 1, using a comparison instead of a sigmoid function, so your activation curve will be even sharper than the graph above. In that graph, your threshold t sits at 0 on the X axis.
So, as pseudocode:
sum = w1 * I1 + w2 * I2 + ... + wn * In
sum is the weighted sum of all the inputs to the neuron; now all you have to do is compare that sum to the threshold t:
if sum >= t then y = 1 // your neuron is activated
else y = 0 // your neuron doesn't fire
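The same thing as a small MATLAB sketch (the weights, inputs and threshold are made-up numbers):
w = [0.5 0.5];        % made-up weights
I = [1 0];            % made-up inputs
t = 0.7;              % made-up threshold
s = sum(w .* I);      % weighted sum = 0.5
y = double(s >= t);   % 0.5 < 0.7, so the neuron doesn't fire: y = 0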
You can use the last neuron's output as the network's output to predict something as 1/0, true/false, etc.
If you're studying NNs, I'd suggest you start with the XOR problem; then it will all make sense.