Coefficients, p-values & residuals in linear regression in PySpark using MLlib?

I am training a linear regression model in PySpark using the MLlib library.
I am looking to get a full summary of the linear regression model. The summary should include coefficients, p-values, residuals, and any other details we get when calling summary() on a model in R.
from pyspark.ml.regression import LinearRegression
ship_lr=LinearRegression(featuresCol='features_lr',labelCol='target_residual_col')
trained_ship_model=ship_lr.fit(lr_data)
Expected Output:
Coefficients:
Feature Estimate Std Error T Value P Value
(Intercept) -1.3079 0.0705 -18.5549 0.0000
Coefficient1 0.1248 0.0158 7.9129 0.0000
Coefficient2 0.0239 0.0209 1.1455 0.2520
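One way to get most of these statistics is through the model's summary object, which PySpark's ML LinearRegression populates when the model is solved via the normal equations (p-values and standard errors are not exposed by the iterative solver, so it is safest to set solver='normal' explicitly). A minimal sketch, assuming the lr_data DataFrame and column names from the question:
from pyspark.ml.regression import LinearRegression

# request the normal-equation solver so the summary carries std errors, t-values and p-values
ship_lr = LinearRegression(featuresCol='features_lr',
                           labelCol='target_residual_col',
                           solver='normal')
trained_ship_model = ship_lr.fit(lr_data)

summary = trained_ship_model.summary
print(trained_ship_model.intercept)        # intercept estimate
print(trained_ship_model.coefficients)     # coefficient estimates
print(summary.coefficientStandardErrors)   # standard errors (the intercept's entry comes last)
print(summary.tValues)                     # t-statistics
print(summary.pValues)                     # p-values
summary.residuals.show()                   # residuals as a DataFrame
print(summary.r2, summary.rootMeanSquaredError)
Unlike R's summary(), these come back as plain Python lists in feature order, so you assemble the table yourself.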

Related

Hotelling transformation does not give desired result

I wanted to apply the Hotelling transformation to the given vectors and, to get some practice, I wrote the following code in MATLAB:
function [Y covariance_matrix]=hotteling_trasform(X)
% this function takes X1,X2,...,Xn as a matrix and applies the Hotelling
%transformation to get a new set of vectors y1, y2,...,ym so that the covariance
%matrix of the matrix consisting of the yi vectors is almost diagonal
%% determine size of given matrix
[m n]=size(X);
%% compute mean of columns of given matrix
means=mean(X);
%% subtract mean from given matrix
centered=X-repmat(means,m,1);
%% calculate covariance matrix
covariance=(centered'*centered)/(m-1);
%% Apply eigenvector decomposition
[V,D]=eig(covariance);
%% determine dimension of V
[m1 n1]=size(V);
%% arrange matrix so that eigenvectors are as rows,create matrix with size n1 m1
A1=zeros(n1,m1);
for ii=1:n1
A1(ii,:)=V(:,ii);
end
%% apply Hotelling transformation
Y=A1*centered; %% because centered matrix is original minus means
%% calculate covariance matrix
covariance_matrix=cov(Y);
Then I tested it on the given matrix
A
A =
4 6 10
3 10 13
-2 -6 -8
and after running the code
[Y covariance_matrix]=hotteling_trasform(A);
covariance_matrix
covariance_matrix =
8.9281 22.6780 31.6061
22.6780 66.5189 89.1969
31.6061 89.1969 120.8030
This is definitely not a diagonal matrix, so what is wrong? Thanks in advance.
As you're dealing with row vectors instead of column vectors, you need to adjust for it in the eigenvalue/eigenvector decomposition. Instead of Y=A1*centered you need Y=centered*V. Then you'll get
covariance_matrix =
0.0000 -0.0000 0.0000
-0.0000 1.5644 -0.0000
0.0000 -0.0000 207.1022
So you'll get two nonzero components, which is what you would expect from only three points in 3D space. (They can only span a plane, not a volume.)
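For reference, here is a minimal NumPy sketch of the corrected transformation (projecting the centered rows onto the eigenvector columns, i.e. Y = centered*V), using the matrix A from the question, just to confirm that the covariance of the result comes out (numerically) diagonal:
import numpy as np

A = np.array([[ 4,  6, 10],
              [ 3, 10, 13],
              [-2, -6, -8]], dtype=float)

centered = A - A.mean(axis=0)                    # subtract the column means
cov = centered.T @ centered / (A.shape[0] - 1)   # sample covariance of the rows
eigvals, V = np.linalg.eigh(cov)                 # eigenvectors are the columns of V
Y = centered @ V                                 # project the (row) observations
print(np.cov(Y, rowvar=False))                   # off-diagonal entries are ~0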

Scale output of FFT (MATLAB)

I am doing some Fourier transforms of audio (.wav) data using the FFT command in MATLAB. The input values are numbers between -1.0 and 1.0.
My understanding is that after taking the absolute value (modulus) of the output of the FFT, I should get values that have units of amplitude, but the actual values are on the order of thousands. This doesn't make sense as theoretically I should be able to sum the Fourier components to get the original signal back. I feel like the output should then also be between 0 and 1, so what's up here? My guess is that the FFT algorithm blows it out of proportion but I'm not sure what value to use to scale it back.
The FFT is an algorithm for computing the Discrete Fourier Transform (DFT). The inverse DFT (IDFT) has a 1/N scaling factor in its definition. Perhaps that's what's confusing you. From Wikipedia:
DFT (from the finite sequence x to the Fourier coefficients X):
X[k] = sum_{n=0}^{N-1} x[n] * exp(-i*2*pi*k*n/N)
IDFT (from X back to x):
x[n] = (1/N) * sum_{k=0}^{N-1} X[k] * exp(i*2*pi*k*n/N)
So, just apply ifft to the result of fft and you'll get the original result. For example:
>> x = linspace(-1,1,5)
x =
-1.0000 -0.5000 0 0.5000 1.0000
>> y = fft(x)
y =
0 -1.2500 + 1.7205i -1.2500 + 0.4061i -1.2500 - 0.4061i -1.2500 - 1.7205i
>> abs(y)
ans =
0 2.1266 1.3143 1.3143 2.1266 %// note values greater than 1
>> ifft(y)
ans =
-1.0000 -0.5000 0.0000 0.5000 1.0000
In fact, the IDFT can be expressed in terms of the DFT by applying complex conjugation and the aforementioned scaling factor. Denoting the DFT by F, the IDFT by F^-1 and the complex conjugate by *:
F^-1(X) = (1/N) * (F(X*))*
In the above example,
>> 1/numel(y) * conj(fft(conj(y)))
ans =
-1.0000 -0.5000 0.0000 0.5000 1.0000
In MATLAB, use the following code to scale the magnitude spectrum so that it ranges from 1 down to (roughly) 0:
dataDFT=abs(fft(data)); % Take the complex magnitude of the fft of your data
dataDFTScaled=dataDFT/max(dataDFT); % Divide by the maximum value
You don't want it to scale to zero because that would make it impossible to view on a log plot.
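If the goal is amplitudes in the original signal's units rather than a 0-to-1 normalization, the usual fix is to divide the magnitude spectrum by the number of samples N (and double every bin except DC and Nyquist in a one-sided spectrum). A minimal NumPy sketch with a made-up 5 Hz test tone of amplitude 0.7:
import numpy as np

fs = 1000                              # sample rate in Hz (assumed for the example)
t = np.arange(0, 1, 1/fs)              # one second of samples, so N = 1000
x = 0.7 * np.sin(2*np.pi*5*t)          # 5 Hz sine with amplitude 0.7

N = len(x)
X = np.fft.rfft(x)                     # one-sided spectrum
amp = np.abs(X) / N                    # scale by the number of samples
amp[1:-1] *= 2                         # fold the negative-frequency half back in
print(amp[5])                          # ~0.7 at the 5 Hz bin (1 Hz resolution)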

MATLAB: fir1 command is incompatible with fir2 command

MATLAB said "The fir2 function also designs windowed FIR filters, but with an arbitrarily shaped piecewise linear frequency response.
This is in contrast to fir1, which only designs filters in standard lowpass, highpass, bandpass, and bandstop configurations."
I found FIR filter coefficients with the fir1 command and got the frequency response using freqz as follows:
b1=fir1(M,wn,'high') % b1:highpass FIR filter coefficients
hd=freqz(b1,1,w) %FIR filter frequency responses with respect to b1
Then I pass this frequency response (hd) to fir2 as follows:
b2=fir2(M,w,hd) % get FIR filter coefficient from same frequency samples(w) and frequency responses(hd)
b1 should equal b2 according to MATLAB,
but for an FIR filter of order 13
this is the result:
b1=0.0042 0.0063 -0.0000 -0.0403 -0.1221 -0.2103 0.7470 -0.2103 -0.1221 -0.0403 -0.0000 0.0063 0.0042
b2=0.0017 -0.0044 0.0180 -0.0937 0.2075 -0.1097 -0.0012 0.0105 -0.0081 0.0050 -0.0025 0.0010 -0.0005
b1 isn't equal to b2. This is supposed to be right in theory. I don't understand what's wrong.

Multivariate Linear Regression in MATLAB

I already have my data prepared in terms of:
p1=input1 %load of today current hour
p2=input2 %load of today past one hour
p3=input3 %load of today past two hours
a1=output %load of next day current hour
I have the following code below:
%Input Set 1 For Weekday Load(d+1,t)
%(d,t),(d,t-1), (d,t-2)
L=xlsread('input_set1_weekday.xlsx',1); %2011
k=1;
size(L,1);
for a=5:2:size(L,1)-48 % L load for 2011
P(1,k)= L(a,1);
P(2,k)= L(a-2,1);
P(3,k)= L(a-4,1);
P(4,k)= L(a+48,1);
k=k+1;
end
I have my data arranged in such a way that in every column, p1, p2, p3 are my predictor variables and a1 is my response variable.
How do I now fit a linear model to this set of data to check the performance of my predictions? By the way it is electrical load forecasting model.
My other doubt is that in the examples shown by most sources, they use the last column of the data as the response variable, and this is the part I'm struggling with.
fitlm will be able to do this for you quite nicely. You use fitlm to train a linear regression model, so you provide it the predictors as well as the responses. Once you do this, you can then use predict to predict the new responses based on new predictors that you put in.
The basic way for you to call this is:
lmModel = fitlm(X, y, 'linear', 'RobustOpts', 'on');
X is a data matrix where each column is a predictor and each row is an observation. Therefore, you would have to transpose your matrix before running this function. Basically, you would do P(1:3,:).' as you only want the first three rows (now columns) of your data. y would be your output values for each observation and this is a column vector that has the same number of rows as your observations. Regarding your comment about using the "last" column as the response vector, you don't have to do this at all. You specify your response vector in a completely separate input variable, which is y. As such, your a1 would serve here, while your predictors and observations would be stored in X. You can totally place your response vector as a column in your matrix; you would just have to subset it accordingly.
As such, y would be your a1 variable, and make sure it's a column vector, and so you can do this a1(:) to be sure. The linear flag specifies linear regression, but that is the default flag anyway. RobustOpts is recommended so that you can perform robust linear regression. For your case, you would have to call fitlm this way:
lmModel = fitlm(P(1:3,:).', a1(:), 'linear', 'RobustOpts', 'on');
Now to predict new responses, you would do:
ypred = predict(lmModel, Xnew);
Xnew would be your new observations that follow the same style as X. It has to have the same number of columns as X, but it can have as many rows as you want. The output ypred will give you the predicted response for each observation of Xnew that you have.
As an example, let's use a dataset that is built into MATLAB, split the data into a training and a test set, fit a model with the training set, then use the test set and see what the predicted responses are. Let's split the data at a 75% / 25% ratio. We will use the carsmall dataset, which contains 100 observations for various cars with descriptors such as Weight, Displacement, Model... typically used to describe cars. We will use Weight, Cylinders and Acceleration as the predictor variables, and try to predict the miles per gallon MPG as our outcome. Once that's done, let's calculate the difference between the predicted values and the true values and compare them. As such:
load carsmall; %// Load in dataset
%// Build predictors and outcome
X = [Weight Cylinders Acceleration];
y = MPG;
%// Set seed for reproducibility
rng(1234);
%// Generate training and test data sets
%// Randomly select 75 observations for the training
%// dataset. First generate the indices to select the data
indTrain = randperm(100, 75);
%// The above may generate an error if you have anything below R2012a
%// As such, try this if the above doesn't work
%//indTrain = randperm(100);
%//indTrain = indTrain(1:75);
%// Get those indices that haven't been selected as the test dataset
indTest = 1 : 100;
indTest(indTrain) = [];
%// Now build our test and training data
trainX = X(indTrain, :);
trainy = y(indTrain);
testX = X(indTest, :);
testy = y(indTest);
%// Fit linear model
lmModel = fitlm(trainX, trainy, 'linear', 'RobustOpts', 'on');
%// Now predict
ypred = predict(lmModel, testX);
%// Show differences between predicted and true test output
diffPredict = abs(ypred - testy);
This is what happens when you echo out what the linear model looks like:
lmModel =
Linear regression model (robust fit):
y ~ 1 + x1 + x2 + x3
Estimated Coefficients:
Estimate SE tStat pValue
__________ _________ _______ __________
(Intercept) 52.495 3.7425 14.027 1.7839e-21
x1 -0.0047557 0.0011591 -4.1031 0.00011432
x2 -2.0326 0.60512 -3.359 0.0013029
x3 -0.26011 0.1666 -1.5613 0.12323
Number of observations: 70, Error degrees of freedom: 66
Root Mean Squared Error: 3.64
R-squared: 0.788, Adjusted R-Squared 0.778
F-statistic vs. constant model: 81.7, p-value = 3.54e-22
This all comes from statistical analysis, but for a novice, what matters are the p-values for each of our predictors. The smaller the p-value, the more suitable this predictor is for your model. You can see that the first two predictors: Weight and Cylinders are a good representation on determining the MPG. Acceleration... not so much. What this means is that this variable is not a meaningful predictor to use, so you should probably use something else. In fact, if you were to remove this predictor and retrain your model, you would most likely see that the predicted values would closely match those where the Acceleration was included.
This is a truly bastardized version of interpreting p-values, and so I defer you to an actual regression modeling or statistics course for more details.
This is what we have predicted the values to be, given our test set and beside it what the true values are:
>> [ypred testy]
ans =
17.0324 18.0000
12.9886 15.0000
13.1869 14.0000
14.1885 NaN
16.9899 14.0000
29.1824 24.0000
23.0753 18.0000
28.6148 28.0000
28.2572 25.0000
29.0365 26.0000
20.5819 22.0000
18.3324 20.0000
20.4845 17.5000
22.3334 19.0000
12.2569 16.5000
13.9280 13.0000
14.7350 13.0000
26.6757 27.0000
30.9686 36.0000
30.4179 31.0000
29.7588 36.0000
30.6631 38.0000
28.2995 26.0000
22.9933 22.0000
28.0751 32.0000
The fourth actual output value from the test data set is NaN, which denotes that the value is missing. However, when we run the observation corresponding to this output value through our linear model, it predicts a value anyway, which is to be expected. There are other observations that helped train the model, so when this observation is used to find a prediction, the model naturally draws on those other observations.
When we compute the difference between these two, we get:
diffPredict =
0.9676
2.0114
0.8131
NaN
2.9899
5.1824
5.0753
0.6148
3.2572
3.0365
1.4181
1.6676
2.9845
3.3334
4.2431
0.9280
1.7350
0.3243
5.0314
0.5821
6.2412
7.3369
2.2995
0.9933
3.9249
As you can see, there are some instances where the prediction was quite close, and others where the prediction was far from the truth... that's the crux of any prediction algorithm really. You'll have to play around with which predictors you want, as well as with the training options. Have a look at the fitlm documentation for more details on what you can play around with.
Edit - July 30th, 2014
As you don't have fitlm, you can easily use LinearModel.fit. You would call it with the same inputs as fitlm. As such:
lmModel = LinearModel.fit(trainX, trainy, 'linear', 'RobustOpts', 'on');
This should give you exactly the same results. predict should exist pre-R2014a, so that should be available to you.
Good luck!

matlab correlation and significant values

I have a rather simple question that needs addressing in MATLAB. I think I understand, but I need someone to confirm that I'm doing this correctly:
In the following example I'm trying to calculate the correlation between two vectors and the p values for the correlation.
dat = [1,3,45,2,5,56,75,3,3.3];
dat2 = [3,33,5,6,4,3,2,5,7];
[R,p] = corrcoef(dat,dat2,'rows','pairwise');
R2 = R(1,2).^2;
pvalue = p(1,2);
From this I have an R2 value of 0.11 and a p value of 0.38. Does this mean that the vectors are correlated by 0.11 (i.e. 11%), and that this would be expected to occur 38% of the time, so 62% of the time a different correlation could occur?
>> [R,p] = corrcoef(dat,dat2,'rows','pairwise')
R =
1.0000 -0.3331
-0.3331 1.0000
p =
1.0000 0.3811
0.3811 1.0000
The correlation is -0.3331 and the p-value is 0.3811. The latter is the probability of getting a correlation as large in magnitude as 0.3331 by random chance, when the true correlation is zero. The p-value is large, so we cannot reject the null hypothesis of no correlation at any reasonable significance level.
The correlation coefficient here is
R(1,2)
ans =
-0.3331
which is a correlation of -33.3%, which tells you that the two datasets are negatively linearly correlated. You can see this by plotting them:
plot(dat, dat2, '.'), grid, lsline
The p-value of the correlation is
p(1,2)
ans =
0.3811
This tells you that even if there was no correlation between two random variables, then in a sample of 9 observations you would expect to see a correlation at least as extreme as -33.3% about 38.1% of the time.
By at least as extreme we mean that the measured correlation in a sample would be below -33.3%, or above 33.3%.
Given that the p-value is so large, you cannot reliably conclude that the null hypothesis of zero correlation should be rejected.
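As a cross-check outside MATLAB, the same numbers can be reproduced with SciPy (scipy.stats.pearsonr returns the Pearson correlation together with its two-sided p-value); this is only an illustrative sketch, but it should match the corrcoef output above:
import numpy as np
from scipy.stats import pearsonr

dat  = np.array([1, 3, 45, 2, 5, 56, 75, 3, 3.3])
dat2 = np.array([3, 33, 5, 6, 4, 3, 2, 5, 7])

r, p = pearsonr(dat, dat2)   # Pearson correlation and two-sided p-value
print(r, p)                  # should come out near -0.3331 and 0.3811
print(r**2)                  # R^2, the proportion of variance explained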