I'm in the process of coding up what I'm learning about linear regression from the Coursera Machine Learning course (in MATLAB). I found a similar post here, but I don't seem to be able to understand everything, perhaps because my fundamentals in machine learning are a bit weak.
The problem I'm facing is that for some data, both gradient descent (GD) and the closed form solution (CFS) give the same hypothesis line. However, on one particular dataset, the results are different. I've read something about the results depending on whether the data is singular, but I have no idea how to check whether or not my data is singular.
I will try to illustrate the best I can:
1) Firstly, here is the MATLAB code, adapted from here. For the given dataset everything turned out well: both GD and the CFS gave similar results.
The Dataset
X Y
2.06587460000000 0.779189260000000
2.36840870000000 0.915967570000000
2.53999290000000 0.905383540000000
2.54208040000000 0.905661380000000
2.54907900000000 0.938988900000000
2.78668820000000 0.966847400000000
2.91168250000000 0.964368240000000
3.03562700000000 0.914459390000000
3.11466960000000 0.939339440000000
3.15823890000000 0.960749710000000
3.32759440000000 0.898370940000000
3.37931650000000 0.912097390000000
3.41220060000000 0.942384990000000
3.42158230000000 0.966245780000000
3.53157320000000 1.05265000000000
3.63930020000000 1.01437910000000
3.67325370000000 0.959694260000000
3.92564620000000 0.968537160000000
4.04986460000000 1.07660650000000
4.24833480000000 1.14549780000000
4.34400520000000 1.03406250000000
4.38265310000000 1.00700090000000
4.42306020000000 0.966836480000000
4.61024430000000 1.08959190000000
4.68811830000000 1.06344620000000
4.97773330000000 1.12372390000000
5.03599670000000 1.03233740000000
5.06845360000000 1.08744520000000
5.41614910000000 1.07029880000000
5.43956230000000 1.16064930000000
5.45632070000000 1.07780370000000
5.56984580000000 1.10697580000000
5.60157290000000 1.09718750000000
5.68776170000000 1.16486030000000
5.72156020000000 1.14117960000000
5.85389140000000 1.08441560000000
6.19780260000000 1.12524930000000
6.35109410000000 1.11683410000000
6.47970330000000 1.19707890000000
6.73837910000000 1.20694620000000
6.86376860000000 1.12510460000000
7.02233870000000 1.12356720000000
7.07823730000000 1.21328290000000
7.15142320000000 1.25226520000000
7.46640230000000 1.24970650000000
7.59738740000000 1.17997060000000
7.74407170000000 1.18972990000000
7.77296620000000 1.30299340000000
7.82645140000000 1.26011340000000
7.93063560000000 1.25622670000000
My MATLAB code:
clear all; close all; clc;
x = load('ex2x.dat');
y = load('ex2y.dat');
m = length(y); % number of training examples
% Plot the training data
figure; % open a new figure window
plot(x, y, '*r');
ylabel('Height in meters')
xlabel('Age in years')
% Gradient descent
x = [ones(m, 1) x]; % Add a column of ones to x
theta = zeros(size(x(1,:)))'; % initialize fitting parameters
MAX_ITR = 1500;
alpha = 0.07;
for num_iterations = 1:MAX_ITR
    thetax = x * theta;
    % for theta_0 and x_0
    grad0 = (1/m) .* sum( x(:,1)' * (thetax - y));
    % for theta_1 and x_1
    grad1 = (1/m) .* sum( x(:,2)' * (thetax - y));
    % Here is the actual update
    theta(1) = theta(1) - alpha .* grad0;
    theta(2) = theta(2) - alpha .* grad1;
end
% print theta to screen
theta
% Plot the hypothesis (a.k.a. linear fit)
hold on
plot(x(:,2), x*theta, 'ob')
% Plot using the Closed Form Solution
plot(x(:,2), x*((x' * x)\x' * y), '--r')
legend('Training data', 'Linear regression', 'Closed Form')
hold off % don't overlay any more plots on this figure
[EDIT: Sorry for the wrong labeling... It's not Normal Equation, but Closed Form Solution. My mistake]
The results for this code are shown below (which is peachy :D, same results for both GD and CFS):
Now I am testing my code with another dataset, linked here: GRAY KANGAROOS. I converted it to CSV and read it into MATLAB. Note that I scaled the data (divided by the maximum); if I didn't, no hypothesis line appeared at all and the thetas came out as NaN in MATLAB.
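That NaN behaviour is exactly what an unscaled step size produces. Here is a minimal sketch in Python/NumPy (synthetic data standing in for the kangaroo CSV, names are mine) showing the alpha = 0.07 update blowing up on raw-scale x while converging on max-scaled data:

```python
import numpy as np

def gd(x, y, alpha=0.07, iters=1500):
    """Batch gradient descent for y ~ theta0 + theta1*x."""
    X = np.column_stack([np.ones_like(x), x])
    theta = np.zeros(2)
    for _ in range(iters):
        grad = X.T @ (X @ theta - y) / len(y)
        theta = theta - alpha * grad
    return theta

x_raw = np.linspace(500.0, 900.0, 45)   # raw scale, like the kangaroo data
y_raw = 0.3 * x_raw + 60.0              # synthetic linear relation

theta_raw = gd(x_raw, y_raw)            # steps overshoot; overflows to inf/NaN
theta_scaled = gd(x_raw / x_raw.max(), y_raw / y_raw.max())  # stays finite

print(np.isfinite(theta_raw).all())     # divergence shows up as non-finite thetas
print(np.isfinite(theta_scaled).all())
```

With x in the hundreds, the eigenvalues of X'X/m are huge, so alpha = 0.07 is far above the stable step size; dividing by the maximum shrinks them to order one.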
The Gray Kangaroo Dataset:
X Y
609 241
629 222
620 233
564 207
645 247
493 189
606 226
660 240
630 215
672 231
778 263
616 220
727 271
810 284
778 279
823 272
755 268
710 278
701 238
803 255
855 308
838 281
830 288
864 306
635 236
565 204
562 216
580 225
596 220
597 219
636 201
559 213
615 228
740 234
677 237
675 217
629 211
692 238
710 221
730 281
763 292
686 251
717 231
737 275
816 275
The changes I made to the code to read in this dataset
dataset = load('kangaroo.csv');
% scale?
x = dataset(:,1)/max(dataset(:,1));
y = dataset(:,2)/max(dataset(:,2));
The results that came out were like this: [EDIT: Sorry for the wrong labeling... It's not Normal Equation, but Closed Form Solution. My mistake]
I was wondering if there is any explanation for this discrepancy? Any help would be much appreciated. Thank you in advance!
I haven't run your code, but let me throw some theory at you:
If your code is right (it looks like it is): increase MAX_ITR and it will look better.
Gradient descent is not guaranteed to converge within MAX_ITR iterations, and it is actually a quite slow method (convergence-wise).
The convergence of gradient descent for a "standard" convex function (like the one you are trying to solve) looks like this (image from the Internet):
Forget about the iteration number, as it depends on the problem, and focus on the shape. What may be happening is that your max iteration count falls somewhere around "20" in this image. Thus your result is good, but not the best!
However, solving the normal equations directly gives you the minimum squared error solution (I assume by normal equation you mean x = (A'*A)^(-1)*A'*b). The problem is that there are loads of cases where you cannot store A in memory, or where the problem is ill-posed and the normal equation leads to ill-conditioned matrices that are numerically unstable; in those cases gradient descent is used.
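To make the convergence point concrete, here is a small sketch in Python/NumPy (synthetic noise-free data, not the course dataset) comparing gradient descent after many iterations against the closed-form solution. (In MATLAB, you can also check whether X'*X is close to singular with cond(X'*X) or rank(X).)

```python
import numpy as np

# Synthetic, noise-free linear data: y = 1 + 2x, x in [0, 1]
x = np.linspace(0.0, 1.0, 50)
y = 1.0 + 2.0 * x
X = np.column_stack([np.ones_like(x), x])

# Closed form: theta = (X'X)^(-1) X'y, solved without an explicit inverse
theta_cf = np.linalg.solve(X.T @ X, X.T @ y)

# Gradient descent with enough iterations to actually converge
theta_gd = np.zeros(2)
alpha, iters = 0.5, 5000
for _ in range(iters):
    theta_gd -= alpha * X.T @ (X @ theta_gd - y) / len(y)

print(theta_cf)   # recovers [1, 2]
print(theta_gd)   # approaches the closed-form answer as iters grows
```

With too few iterations, theta_gd stops partway down the error curve, which is exactly the discrepancy described in the question.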
more info
I think I figured it out.
I naively thought that a maximum of 1500 iterations was enough. I tried higher values (i.e. 5k and 10k), and both algorithms started to give similar solutions. So my main issue was the number of iterations; it needed more iterations to converge properly for that dataset :D
I am given some data (a table of T and Cp values) and I am asked to calculate the integral of Cp/T dT from T = 113.7 to T = 264.4.
I am unsure how to solve this. If I want to use the integral command, I need a function, but I don't know what my function should be in that case.
I have tried:
function func = Cp./T
T = [...]
Cp=[...]
end
but that didn't work.
Use the cumtrapz function in MATLAB.
T = [...]
Cp=[...]
CpdivT = Cp./T
I = cumtrapz(T, CpdivT)
You can read more about the function at https://www.mathworks.com/help/matlab/ref/cumtrapz.html
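For readers outside MATLAB, the trapezoidal idea behind cumtrapz is easy to sketch in Python/NumPy (this is my own minimal re-implementation, not MATLAB's):

```python
import numpy as np

def cumtrapz(x, y):
    """Cumulative trapezoidal integral of y(x); result[i] approximates
    the integral from x[0] to x[i], matching MATLAB's cumtrapz(X, Y)."""
    steps = np.diff(x) * (y[:-1] + y[1:]) / 2.0
    return np.concatenate(([0.0], np.cumsum(steps)))

t = np.linspace(0.0, 1.0, 101)
I = cumtrapz(t, t)     # integral of y = t from 0 to t is t^2/2
print(I[-1])           # the trapezoidal rule is exact for linear data: 0.5
```

The last element of the result is the integral over the full range; for the limits 113.7 and 264.4 you would still need to interpolate to those endpoints first.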
A simple approach using interp1 and integral with plain-vanilla settings.
I would only use more sophisticated numerical techniques if the application required them. You can examine the 'RelTol' and 'AbsTol' options in the documentation for integral.
Numerical Integration: (w/ linear interpolation)
% MATLAB R2017a
T = [15 20 30 40 50 70 90 110 130 140 160 180 200 220 240 260 270 275 285 298]';
Cp = [5.32 10.54 21.05 30.75 37.15 49.04 59.91 70.04 101.59 103.05 106.78 ...
110.88 114.35 118.70 124.31 129.70 88.56 90.07 93.05 96.82]';
fh = @(t) interp1(T,Cp./T,t,'linear');
t1 = 113.7;
t2 = 264.4;
integral(fh,t1,t2)
ans = 91.9954
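Since the linear interpolant is piecewise linear, this result can be cross-checked exactly with a short Python/NumPy sketch (same T/Cp data; np.interp and np.trapz stand in for interp1 and integral):

```python
import numpy as np

T = np.array([15, 20, 30, 40, 50, 70, 90, 110, 130, 140, 160, 180,
              200, 220, 240, 260, 270, 275, 285, 298], dtype=float)
Cp = np.array([5.32, 10.54, 21.05, 30.75, 37.15, 49.04, 59.91, 70.04,
               101.59, 103.05, 106.78, 110.88, 114.35, 118.70, 124.31,
               129.70, 88.56, 90.07, 93.05, 96.82])

t1, t2 = 113.7, 264.4
# Evaluation grid: the data points inside [t1, t2] plus the endpoints.
# The trapezoidal rule is then exact for the piecewise-linear interpolant.
grid = np.concatenate(([t1], T[(T > t1) & (T < t2)], [t2]))
val = np.trapz(np.interp(grid, T, Cp / T), grid)
print(val)    # ~91.9954, matching integral() above
```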
Alternate Methods of Interpolation:
Your results will depend on your method of interpolation (see code & graphic below).
% Results depend on method of interpolation
Linear = integral(@(t) interp1(T,Cp./T,t,'linear'),t1,t2) % = 91.9954
Spline = integral(@(t) interp1(T,Cp./T,t,'spline'),t1,t2) % = 92.5332
Cubic  = integral(@(t) interp1(T,Cp./T,t,'pchip'),t1,t2)  % = 92.0383
Code for graphic:
tTgts = T(1):.01:T(end);
figure, hold on, box on
p(1) = plot(tTgts,interp1(T,Cp./T,tTgts,'linear'),'b-','DisplayName','Linear')
p(2) = plot(tTgts,interp1(T,Cp./T,tTgts,'spline'),'r-','DisplayName','Spline')
p(3) = plot(tTgts,interp1(T,Cp./T,tTgts,'pchip'),'g-','DisplayName','Cubic')
p(4) = plot(T,Cp./T,'ks','DisplayName','Data')
xlim([floor(t1) ceil(t2)])
legend('show')
% Cosmetics
xlabel('T')
ylabel('Cp/T')
for k = 1:4, p(k).LineWidth = 2; end
A poor approximation (to get a rough order-of-magnitude estimate):
tspace = T(2:end)-T(1:end-1);
midpt = mean([Cp(1:end-1) Cp(2:end)],2);
sum(midpt.*tspace)./sum(tspace)
And you can see we're in the ballpark (makes me feel more comfortable at least).
Other viable MATLAB Functions: quadgk | quad
% interpolation method affects answer if using `interp1()`
quadgk(@(t) interp1(T,Cp./T,t,'linear'),t1,t2)
quad(@(t) interp1(T,Cp./T,t,'linear'),t1,t2)
Functions that would require more work: trapz | cumtrapz
Notice that trapz(Y) and cumtrapz(Y) assume unit spacing; for nonuniformly spaced data you must also pass the coordinates, e.g. trapz(T, Cp./T). To integrate between the exact limits 113.7 and 264.4 you would still need to interpolate to those endpoints first.
Related Posts: (found after answer already completed)
Matlab numerical integration
How to numerically integrate vector data in Matlab?
This is probably better suited to your problem. Note that I have assumed a 2nd-order polynomial fits your data well; you may want a better model structure if the fit is unsatisfactory.
% Data
T = [15 20 30 40 50 70 90 110 130 140 160 180 200 220 240 260 270 275 285 298];
Cp = [5.32 10.54 21.05 30.75 37.15 49.04 59.91 70.04 101.59 103.05 106.78 110.88 114.35 118.70 124.31 129.70 88.56 90.07 93.05 96.82];
% Fit function using 2nd order polynomial
f = fit(T',Cp'./T','poly2');
% Compare fit to actual data
plot(f,T,Cp./T)
% Create symbolic function
syms x
func = f.p1*x*x + f.p2*x + f.p3;
% Integrate function
I = int(func,113.7,264.4);
% Convert solution from symbolic to numeric value
V = double(I);
The result is 92.7839
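As a cross-check, the same fit-then-integrate idea can be sketched in Python/NumPy: polyfit is the least-squares analogue of fit(...,'poly2'), and polyint gives the antiderivative analytically (same data as the answer above):

```python
import numpy as np

T = np.array([15, 20, 30, 40, 50, 70, 90, 110, 130, 140, 160, 180,
              200, 220, 240, 260, 270, 275, 285, 298], dtype=float)
Cp = np.array([5.32, 10.54, 21.05, 30.75, 37.15, 49.04, 59.91, 70.04,
               101.59, 103.05, 106.78, 110.88, 114.35, 118.70, 124.31,
               129.70, 88.56, 90.07, 93.05, 96.82])

p = np.polyfit(T, Cp / T, 2)    # least-squares quadratic fit of Cp/T vs T
P = np.polyint(p)               # antiderivative coefficients
val = np.polyval(P, 264.4) - np.polyval(P, 113.7)
print(val)                      # ~92.78, matching the symbolic result above
```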
I'm having a surprisingly difficult time figuring out something that appears so simple. I have two known coordinates on a graph, (X1,Y1) and (X2,Y2). What I'm trying to identify are the coordinates for (X3,Y3).
I thought of using sin and cos, but once I get here my brain stops working. I know that

sin(theta) = y/R
cos(theta) = x/R

so I thought of simply plugging in the length of the line (in this case 2) and the known angles. It seems very simple, but for the life of me my brain won't wrap around this.
The reason I need this is that I'm trying to print a line onto an image using poly2mask in MATLAB. The code has to work in 2D space, as I will be building movies using the line.
X1 = [134 134 135 147 153 153 167]
Y1 = [183 180 178 173 164 152 143]
X2 = [133 133 133 135 138 143 147]
Y2 = [203 200 197 189 185 173 163]
YZdist = 2;
for aa = 1:length(X2)
XYdis(aa) = sqrt((x2(aa)-x1(aa))^2 + (Y2(aa)-Y1(aa))^2);
X3(aa) = X1(aa) * tan(XYdis/YZdis);
Y3(aa) = Y1(aa) * tan(XYdis/YZdis);
end
polmask = poly2mask([Xdata X3],[Ydata Y3],50,50);
One approach would be to first construct a vector l connecting the points (x1,y1) and (x2,y2), rotate this vector 90 degrees clockwise, and add it to the point (x2,y2).
Thus l = (x2-x1, y2-y1), its rotated version is l' = (y2-y1, x1-x2), and therefore the point of interest is P = (x2, y2) + f*(y2-y1, x1-x2), where f is the desired scaling factor. If the lengths are supposed to be the same, then f = 1 and thus P = (x2 + y2 - y1, y2 + x1 - x2).
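The same construction as a small Python helper (the function name is mine, not from the question):

```python
def third_point(x1, y1, x2, y2, f=1.0):
    """Rotate the vector (x2-x1, y2-y1) 90 degrees clockwise,
    scale it by f, and attach it at (x2, y2)."""
    return (x2 + f * (y2 - y1), y2 + f * (x1 - x2))

# Segment pointing straight up from (0,0) to (0,1); the perpendicular
# of the same length lands at (1, 1).
x3, y3 = third_point(0.0, 0.0, 0.0, 1.0)
print(x3, y3)

# Perpendicularity check: the dot product of the segment vector and
# the offset vector from (x2, y2) is zero.
assert (0.0 - 0.0) * (x3 - 0.0) + (1.0 - 0.0) * (y3 - 1.0) == 0.0
```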
Does someone know how to make a graph similar to this one with MATLAB?
To me it seems like a continuous stacked bar plot.
I did not manage to download the same data, so I used other data.
I tried the following code:
clear all
filename = 'C:\Users\andre\Desktop\GDPpercapitaconstant2000US.xlsx';
sheet = 'Data';
xlRange = 'AP5:AP259'; %for example
A = xlsread(filename,sheet,xlRange);
A(isnan(A))=[]; %remove NaNs
%create four subsets
A1=A(1:70);
A2=A(71:150);
A3=A(151:180);
A4=A(181:end);
edges=80:200:8000; %bins of the distributions
[n_A1,xout_A1] = histc(A1,edges); %distributions of the first subset
[n_A2,xout_A2] = histc(A2,edges);
[n_A3,xout_A3] = histc(A3,edges);
[n_A4,xout_A4] = histc(A4,edges);
%make stacked bar plot
for ii=1:numel(edges)
y(ii,:) = [n_A1(ii) n_A2(ii) n_A3(ii) n_A4(ii)];
end
bar(y,'stacked', 'BarWidth', 1)
and obtained this:
It is not so bad... maybe with other data it would look nicer... but I was wondering if someone has better ideas. Maybe it is possible to adapt fitdist in a similar way?
First, define the x axis. If you want it to follow the rules of bar, then use:
x = 0.5:numel(edges)-0.5;
Then use area(x,y), which produces a filled/stacked area plot:
area(x,y)
And if you want the same colors as the example you posted at the top, define the colormap and call colormap as:
map = [
218 96 96
248 219 138
253 249 199
139 217 140
195 139 217
246 221 245
139 153 221]/255;
colormap(map)
(It may not be exactly the one you posted, but I think I got it quite close. Also, not all colors are shown in the result below since there are only 4 subsets, but all colors are defined.)
Result:
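For reference, the same histogram-then-stack pipeline translates directly to Python/NumPy (synthetic data here, since the original spreadsheet isn't available; matplotlib's stackplot plays the role of area):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.gamma(shape=2.0, scale=1500.0, size=255)   # stand-in for the GDP column
subsets = [A[:70], A[70:150], A[150:180], A[180:]]

edges = np.arange(80, 8000, 200)                   # same bins as the histc call
# One histogram column per subset; rows line up with the bins
y = np.column_stack([np.histogram(s, bins=edges)[0] for s in subsets])
xc = (edges[:-1] + edges[1:]) / 2                  # bin centers for the x axis

print(y.shape)
# A filled/stacked plot is then e.g.:
#   import matplotlib.pyplot as plt
#   plt.stackplot(xc, y.T)
```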
I'm trying to loop a pattern of numbers using a for loop in MATLAB/Octave.
The pattern I'm looking for is
40, 80, 160, 320, 280, 200, and then 1 is added to each one, so the pattern would look like this:
40,80,160,320,280,200,41,81,161,321,281,201,42,82,162,322,282,202
I tried using a for loop below
clear all
numL_tmp=[40;80;160;320;280;200]
numL=[numL_tmp];
for ii=1:length(numL_tmp)
for jj=1:4
numL=[numL;numL_tmp(ii,1)+jj]
end
end
But I get
40,80,160,320,280,200,41,42,81,82,161,162,321,322,281,282,201,202
How can I fix this?
For the problem stated, nested loops are unnecessary. You could simply do the following:
clear all;
numL_tmp=[40;80;160;320;280;200];
numL = numL_tmp;
for ii=1:2
numL = [numL;numL_tmp+ii];
end
numL
This would yield:
numL =
40
80
160
320
280
200
41
81
161
321
281
201
42
82
162
322
282
202
This works because MATLAB expands the scalar: numL_tmp+ii is equivalent to numL_tmp + ii*ones(size(numL_tmp)).
You can avoid loops completely:
N = 3;
numL = kron(ones(N,1),numL_tmp) + kron((0:N-1)',ones(numel(numL_tmp),1));
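The kron trick is easy to sanity-check; here is the same construction in Python/NumPy (np.kron mirrors MATLAB's kron):

```python
import numpy as np

numL_tmp = np.array([40, 80, 160, 320, 280, 200])
N = 3   # the original block plus two incremented copies

# Repeat the block N times, then add 0 to the first copy, 1 to the
# second, 2 to the third, exactly as in the MATLAB expression above.
numL = (np.kron(np.ones(N, dtype=int), numL_tmp)
        + np.kron(np.arange(N), np.ones(numL_tmp.size, dtype=int)))
print(numL.tolist())
# [40, 80, 160, 320, 280, 200, 41, 81, 161, 321, 281, 201, 42, 82, 162, 322, 282, 202]
```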
There are easier ways to do it, but the fundamental problem with your code is that the inner and outer loops are in the wrong order. See what happens if you leave your code as is, but simply interchange the order of the two loops:
...
numL=[numL_tmp];
for jj=1:4
for ii=1:length(numL_tmp)
numL=[numL;numL_tmp(ii,1)+jj]
end
end
In the 2D array plotted below, we are interested in finding the "lump" region. As you can see, it is not a continuous graph. Also, we know the approximate dimensions of the "lump" region. A set of data is given below: the first column contains the y values and the second the x values. Any suggestions as to how to detect lump regions like this?
21048 -980
21044 -956
21040 -928
21036 -904
21028 -880
21016 -856
21016 -832
21016 -808
21004 -784
21004 -760
20996 -736
20996 -712
20992 -684
20984 -660
20980 -636
20968 -612
20968 -588
20964 -564
20956 -540
20956 -516
20952 -492
20948 -468
20940 -440
20936 -416
20932 -392
20928 -368
20924 -344
20920 -320
20912 -296
20912 -272
20908 -248
20904 -224
20900 -200
20900 -176
20896 -152
20888 -128
20888 -104
20884 -80
20872 -52
20864 -28
20856 -4
20836 16
20812 40
20780 64
20748 88
20744 112
20736 136
20736 160
20732 184
20724 208
20724 232
20724 256
20720 280
20720 304
20720 328
20724 352
20724 376
20732 400
20732 424
20736 448
20736 472
20740 496
20740 520
20748 544
20740 568
20736 592
20736 616
20736 640
20740 664
20740 688
20736 712
20736 736
20744 760
20748 788
20760 812
20796 836
20836 860
20852 888
20852 912
20844 936
20836 960
20828 984
20820 1008
20816 1032
20820 1056
20852 1080
20900 1108
20936 1132
20956 1156
20968 1184
20980 1208
20996 1232
21004 1256
21012 1280
21016 1308
21024 1332
21024 1356
21028 1380
21024 1404
21020 1428
21016 1452
21008 1476
21004 1500
20992 1524
20980 1548
20956 1572
20944 1596
20920 1616
20896 1640
20872 1664
20848 1684
20812 1708
20752 1728
20664 1744
20640 1768
20628 1792
20628 1816
20620 1836
20616 1860
20612 1884
20604 1908
20596 1932
20588 1956
20584 1980
20580 2004
20572 2024
20564 2048
20552 2072
20548 2096
20536 2120
20536 2144
20524 2164
20516 2188
20512 2212
20508 2236
20500 2260
20488 2280
20476 2304
20472 2328
20476 2352
20460 2376
20456 2396
20452 2420
20452 2444
20436 2468
20432 2492
20432 2516
20424 2536
20420 2560
20408 2584
20396 2608
20388 2628
20380 2652
20364 2676
20364 2700
20360 2724
20352 2744
20344 2768
20336 2792
20332 2812
20328 2836
20332 2860
20340 2888
20356 2912
20380 2940
20428 2968
20452 2996
20496 3024
20532 3052
20568 3080
20628 3112
20652 3140
20728 3172
20772 3200
20868 3260
20864 3284
20864 3308
20868 3332
20860 3356
20884 3384
20884 3408
20912 3436
20944 3464
20948 3488
20948 3512
20932 3536
20940 3564
It may be just a coincidence, but the lump you show looks fairly parabolic. It's not completely clear what you mean by "know the approximate dimension of the lump region" but if you mean that you know approximately how wide it is (i.e. how much of the x-axis it takes up), you could simply slide a window of that width along the x-axis and do a parabolic fit (a.k.a. polyfit with degree 2) to all data that fits into the window at each point. Then, compute r^2 goodness-of-fit values at each point and the point with the r^2 closest to 1.0 would be the best fit. You'd probably need a threshold value and to throw out those where the x^2 coefficient was positive (to find lumps rather than dips) for sanity, but this might be a workable approach.
Even if the parabolic look is a coincidence, I think this would be a reasonable approach: a downward-pointing parabola is a pretty good description of a general "lump" by any definition I can think of.
Edit: Attempted Implementation Below
I got curious and went ahead and implemented my proposed solution (with slight modifications). First, here's the code (ugly but functional):
function [x, p] = find_lump(data, width)
n = size(data, 1);
f = plot(data(:,1),data(:,2), 'bx-');
hold on;
bestX = -inf;
bestP = [];
bestMSE = inf;
bestXdat = [];
bestYfit = [];
spanStart = 0;
spanStop = 1;
spanWidth = 0;
while (spanStop < n)
if (spanStart > 0)
% Drop first segment from window (since we'll advance x):
spanWidth = spanWidth - (data(spanStart + 1, 1) - x);
end
spanStart = spanStart + 1;
x = data(spanStart, 1);
% Advance spanStop index to maintain window width:
while ((spanStop < n) && (spanWidth <= width))
spanStop = spanStop + 1;
spanWidth = data(spanStop, 1) - x;
end
% Correct for overshoot:
if (spanWidth > width)
spanStop = spanStop - 1;
spanWidth = data(spanStop, 1) - x;
end
% Fit parabola to data in the current window:
xdat = data(spanStart:spanStop, 1);
ydat = data(spanStart:spanStop, 2);
p = polyfit(xdat, ydat, 2);
% Compute fit quality (mean squared error):
yfit = polyval(p,xdat);
r = yfit - ydat;
mse = (r' * r) / size(xdat,1);
if ((p(1) < -0.002) && (mse < bestMSE))
bestMSE = mse;
bestX = x;
bestP = p;
bestXdat = xdat;
bestYfit = yfit;
end
end
x = bestX;
p = bestP;
plot(bestXdat,bestYfit,'r-');
...and here's a result using the given data (I swapped the columns so that column 1 holds the x values and column 2 the y values) with a window-width parameter of 750:
Comments:
I opted to use mean squared error between the fit parabola and the original data within each window as the quality metric, rather than correlation coefficient (r^2 value) due to laziness more than anything else. I don't think the results would be much different the other way.
The output is heavily dependent on the threshold value chosen for the quadratic coefficient (see the bestMSE condition at the end of the loop). Truth be told, I cheated here by outputting the fit coefficients at each point and then selecting the threshold based on the known lump shape. This is equivalent to using a lump template as suggested by @chaohuang and may not be very robust depending on the expected variance in the data.
Note that some sort of shape-control parameter seems to be necessary if this approach is used. The reason is that any random (smooth) run of data can be fit nicely to some parabola, but not necessarily around the maximum value. Here's a result where I set the threshold to zero and thus only restricted the fit to downward-pointing parabolas:
An improvement would be to add a check that the fit parabola at least has a maximum within the window interval (that is, check that the first derivative goes to zero within the window so we at least find local maxima along the curve). This alone is not sufficient as you still might have a tiny little lump that fits a parabola better than an "obvious" big lump as seen in the given data set.
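The interior-maximum check suggested here is a one-liner once you have the quadratic coefficients. A sketch in Python (using the polyfit coefficient ordering p = [a, b, c], so the vertex is at -b/(2a)):

```python
def has_interior_maximum(p, x_lo, x_hi):
    """True if the parabola p[0]*x^2 + p[1]*x + p[2] opens downward
    and its vertex -b/(2a) lies strictly inside [x_lo, x_hi]."""
    a, b = p[0], p[1]
    if a >= 0:                 # opens upward (or degenerate): a dip, not a lump
        return False
    vertex = -b / (2.0 * a)
    return x_lo < vertex < x_hi

print(has_interior_maximum([-1.0, 4.0, 0.0], 0.0, 5.0))   # vertex at x = 2: inside
print(has_interior_maximum([-1.0, 4.0, 0.0], 3.0, 5.0))   # vertex outside window
```

Adding this test before the bestMSE comparison would reject windows whose best-fit parabola peaks outside the window, though, as noted, it still would not distinguish a tiny lump from an obvious big one.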