coordinates of perpendicular line segment in 2d Cartesian space - matlab

I'm having a surprisingly difficult time figuring out something that appears so simple. I have two known coordinates on a graph, (X1,Y1) and (X2,Y2). What I'm trying to identify are the coordinates for (X3,Y3).
I thought of using sin and cos, but once I get here my brain stops working. I know that
sin(θ) = y/R
cos(θ) = x/R
so I thought of simply plugging in the length of the line (in this case 2) and using the known angles. It seems very simple, but for the life of me my brain won't wrap around it.
The reason I need this is that I'm trying to print a line onto an image using poly2mask in MATLAB. The code has to work in 2D space, as I will be building movies using the line.
X1 = [134 134 135 147 153 153 167]
Y1 = [183 180 178 173 164 152 143]
X2 = [133 133 133 135 138 143 147]
Y2 = [203 200 197 189 185 173 163]
YZdist = 2;
for aa = 1:length(X2)
    XYdis(aa) = sqrt((X2(aa)-X1(aa))^2 + (Y2(aa)-Y1(aa))^2);
    X3(aa) = X1(aa) * tan(XYdis(aa)/YZdist);
    Y3(aa) = Y1(aa) * tan(XYdis(aa)/YZdist);
end
polmask = poly2mask([Xdata X3],[Ydata Y3],50,50);

One approach would be to first construct a vector l connecting the points (x1,y1) and (x2,y2), rotate this vector 90 degrees clockwise, and add it to the point (x2,y2).
Thus l = (x2-x1, y2-y1), its rotated version is l' = (y2-y1, x1-x2), and therefore the point of interest is P = (x2, y2) + f*(y2-y1, x1-x2), where f is the desired scaling factor. If the lengths are supposed to be the same, then f = 1 and thus P = (x2 + y2-y1, y2 + x1-x2). For a fixed perpendicular length d (here the poster wants d = 2), use f = d/|l| instead.
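A minimal MATLAB sketch of this rotated-vector approach, applied to the poster's data and assuming the perpendicular segment should have length YZdist = 2:
% Rotate each segment 90 degrees clockwise, scale it to length YZdist,
% and attach it at (X2,Y2) to obtain (X3,Y3).
X1 = [134 134 135 147 153 153 167];
Y1 = [183 180 178 173 164 152 143];
X2 = [133 133 133 135 138 143 147];
Y2 = [203 200 197 189 185 173 163];
YZdist = 2;
L  = sqrt((X2-X1).^2 + (Y2-Y1).^2); % segment lengths |l|
f  = YZdist ./ L;                   % scaling factor f = d/|l|
X3 = X2 + f .* (Y2 - Y1);           % clockwise-rotated direction (y2-y1, x1-x2)
Y3 = Y2 + f .* (X1 - X2);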

Recursive function in Matlab returns only first iteration result

I'm trying to make a recursive function in Matlab that constrains an angle to be within -180:180 degrees. So if I input an angle of 721 degrees, it should return 1 degree (two full revolutions and then 1).
Somehow, it only returns the value after the first iteration. I can see from the command window that it does the calculations correctly and feeds the updated values into the function, but it ends up returning the wrong value.
1st: 721 - 360 = 361
2nd: 361 - 360 = 1
It returns 361 instead of 1 and it's driving me bonkers! :)
I've searched around, and it seems that newer versions might have an issue with recursive functions?
Here is my Matlab function:
function [constrainedTo180] = constrainingTo180(inputVector)
% Returns numbers to be constrained within +/- 180 degrees
% So 358 degrees is returned as -2 degrees
% fprintf('Running constraining function \n')
[ir,ic,ip] = size(inputVector);
constrainedTo180 = nan(ir,ic,ip);
for r = 1:ir % Iterate over rows
    for c = 1:ic % Iterate over columns
        for p = 1:ip % Iterate over pages
            if inputVector(r,c,p) > 180
                constrainedTo180(r,c,p) = inputVector(r,c,p) - 360;
                fprintf('%d is Over 180 \nResult: %d \n\n\n',inputVector(r,c,p),constrainedTo180(r,c,p))
            elseif inputVector(r,c,p) < -180
                constrainedTo180(r,c,p) = inputVector(r,c,p) + 360;
                fprintf('Under -180 \n')
            else
                constrainedTo180(r,c,p) = inputVector(r,c,p);
                fprintf('else...\n')
            end
        end
    end
end
% Repeat until no value is outside [-180, 180]
if max(abs(constrainedTo180(:))) > 180
    pause(1)
    fprintf('Max is %2.2f\n', max(abs(constrainedTo180(:))))
    inputVectorTemp = constrainedTo180;
    fprintf('Input to recursive function is %d \n', inputVectorTemp)
    constrainingTo180(inputVectorTemp);
end
If you simply run it with a 1x1 matrix, constrainingTo180([721]), the command window outputs:
721 is Over 180
Result: 361
Max is 361.00
Input to recursive function is 361
361 is Over 180
Result: 1
ans =
361
Let me know if I left any important information out. Thanks a lot!
You are never assigning the result of the recursive call back to the function's output. The second-to-last line should be
constrainedTo180 = constrainingTo180(inputVectorTemp);
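As an aside, the wrapping can also be done without recursion or loops; a minimal vectorized sketch using mod (note the boundary convention differs slightly from the original: exactly 180 maps to -180):
% Wrap any array of angles into [-180, 180) in one step
constrainedTo180 = mod(inputVector + 180, 360) - 180;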

Gradient Descent and Closed Form Solution - Different Hypothesis Lines in MATLAB

I'm in the process of coding what I'm learning about linear regression from the Coursera Machine Learning course (MATLAB). There was a similar post that I found here, but I don't seem to be able to understand everything. Perhaps that is because my fundamentals in machine learning are a bit weak.
The problem I'm facing is that, for some data, both gradient descent (GD) and the Closed Form Solution (CFS) give the same hypothesis line. However, on one particular dataset, the results are different. I've read something about the results being the same if the data is singular, but I have no idea how to check whether or not my data is singular.
I will try to illustrate this the best I can:
1) Firstly, here is the MATLAB code, adapted from here. For the given dataset, everything turned out well: both GD and the CFS gave similar results.
The Dataset
X Y
2.06587460000000 0.779189260000000
2.36840870000000 0.915967570000000
2.53999290000000 0.905383540000000
2.54208040000000 0.905661380000000
2.54907900000000 0.938988900000000
2.78668820000000 0.966847400000000
2.91168250000000 0.964368240000000
3.03562700000000 0.914459390000000
3.11466960000000 0.939339440000000
3.15823890000000 0.960749710000000
3.32759440000000 0.898370940000000
3.37931650000000 0.912097390000000
3.41220060000000 0.942384990000000
3.42158230000000 0.966245780000000
3.53157320000000 1.05265000000000
3.63930020000000 1.01437910000000
3.67325370000000 0.959694260000000
3.92564620000000 0.968537160000000
4.04986460000000 1.07660650000000
4.24833480000000 1.14549780000000
4.34400520000000 1.03406250000000
4.38265310000000 1.00700090000000
4.42306020000000 0.966836480000000
4.61024430000000 1.08959190000000
4.68811830000000 1.06344620000000
4.97773330000000 1.12372390000000
5.03599670000000 1.03233740000000
5.06845360000000 1.08744520000000
5.41614910000000 1.07029880000000
5.43956230000000 1.16064930000000
5.45632070000000 1.07780370000000
5.56984580000000 1.10697580000000
5.60157290000000 1.09718750000000
5.68776170000000 1.16486030000000
5.72156020000000 1.14117960000000
5.85389140000000 1.08441560000000
6.19780260000000 1.12524930000000
6.35109410000000 1.11683410000000
6.47970330000000 1.19707890000000
6.73837910000000 1.20694620000000
6.86376860000000 1.12510460000000
7.02233870000000 1.12356720000000
7.07823730000000 1.21328290000000
7.15142320000000 1.25226520000000
7.46640230000000 1.24970650000000
7.59738740000000 1.17997060000000
7.74407170000000 1.18972990000000
7.77296620000000 1.30299340000000
7.82645140000000 1.26011340000000
7.93063560000000 1.25622670000000
My MATLAB code:
clear all; close all; clc;
x = load('ex2x.dat');
y = load('ex2y.dat');
m = length(y); % number of training examples
% Plot the training data
figure; % open a new figure window
plot(x, y, '*r');
ylabel('Height in meters')
xlabel('Age in years')
% Gradient descent
x = [ones(m, 1) x]; % Add a column of ones to x
theta = zeros(size(x(1,:)))'; % initialize fitting parameters
MAX_ITR = 1500;
alpha = 0.07;
for num_iterations = 1:MAX_ITR
    thetax = x * theta;
    % gradient for theta_0 (the bias term)
    grad0 = (1/m) .* sum( x(:,1)' * (thetax - y));
    % gradient for theta_1
    grad1 = (1/m) .* sum( x(:,2)' * (thetax - y));
    % Here is the actual update
    theta(1) = theta(1) - alpha .* grad0;
    theta(2) = theta(2) - alpha .* grad1;
end
% print theta to screen
theta
% Plot the hypothesis (a.k.a. linear fit)
hold on
plot(x(:,2), x*theta, 'ob')
% Plot using the Closed Form Solution
plot(x(:,2), x*((x' * x)\x' * y), '--r')
legend('Training data', 'Linear regression', 'Closed Form')
hold off % don't overlay any more plots on this figure
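As an aside, the two per-parameter gradients in the loop above can be collapsed into a single vectorized update; a sketch of the equivalent loop body:
% Equivalent vectorized form of the gradient step
theta = theta - (alpha/m) * (x' * (x*theta - y));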
[EDIT: Sorry for the wrong labeling... It's not Normal Equation, but Closed Form Solution. My mistake]
The results for this code are as shown below (which is peachy :D same results for both GD and CFS):
Now, I am testing my code with another dataset. The URL for the dataset is here - GRAY KANGAROOS. I converted it to CSV and read it into MATLAB. Note that I did scaling (divided by the maximum, since if I didn't do that, no hypothesis line appears at all and the thetas come out as Not A Number (NaN) in MATLAB).
The Gray Kangaroo Dataset:
X Y
609 241
629 222
620 233
564 207
645 247
493 189
606 226
660 240
630 215
672 231
778 263
616 220
727 271
810 284
778 279
823 272
755 268
710 278
701 238
803 255
855 308
838 281
830 288
864 306
635 236
565 204
562 216
580 225
596 220
597 219
636 201
559 213
615 228
740 234
677 237
675 217
629 211
692 238
710 221
730 281
763 292
686 251
717 231
737 275
816 275
The changes I made to the code to read in this dataset:
dataset = load('kangaroo.csv');
% scale?
x = dataset(:,1)/max(dataset(:,1));
y = dataset(:,2)/max(dataset(:,2));
The results that came out were like this: [EDIT: Sorry for the wrong labeling... It's not Normal Equation, but Closed Form Solution. My mistake]
I was wondering if there is any explanation for this discrepancy? Any help would be much appreciate. Thank you in advance!
I haven't run your code, but let me throw some theory at you:
If your code is right (it looks like it): increase MAX_ITR and it will look better.
Gradient descent is not guaranteed to have converged by MAX_ITR, and it is actually quite a slow method (convergence-wise).
The convergence of gradient descent for a "standard" convex function (like the one you are trying to solve) looks like this (from the Internets):
Forget about the iteration number, as it depends on the problem, and focus on the shape. What may be happening is that your MAX_ITR falls somewhere around "20" in this image. Thus your result is good, but not the best!
However, solving the normal equations directly will give you the minimum squared error solution (by normal equation I assume you mean x = (A'*A)^(-1)*A'*b). The problem is that there are plenty of cases where you cannot store A in memory, or where, in an ill-posed problem, the normal equations lead to ill-conditioned matrices that are numerically unstable; in those cases gradient descent is used.
more info
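A quick way to probe this numerically (a sketch, reusing x, y, and theta from the code above) is to compare the two solutions directly and check the conditioning of x'*x:
theta_cfs = (x' * x) \ (x' * y);             % closed-form least-squares fit
fprintf('cond(x''*x) = %g\n', cond(x' * x)); % very large values indicate ill-conditioning
fprintf('||theta_GD - theta_CFS|| = %g\n', norm(theta - theta_cfs));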
I think I figured it out.
I naively thought that a maximum of 1500 iterations was enough. I tried higher values (i.e. 5k and 10k), and both algorithms started to give similar solutions. So my main issue was the number of iterations: it needed more iterations to properly converge for that dataset :D

how to make a continuous stacked bar graph

Does someone know how to make a graph similar to this one with MATLAB?
To me it seems like a continuous stacked bar plot.
I did not manage to download the same data, so I used other data.
I tried the following code:
clear all
filename = 'C:\Users\andre\Desktop\GDPpercapitaconstant2000US.xlsx';
sheet = 'Data';
xlRange = 'AP5:AP259'; %for example
A = xlsread(filename,sheet,xlRange);
A(isnan(A))=[]; %remove NaNs
%create four subsets
A1=A(1:70);
A2=A(71:150);
A3=A(151:180);
A4=A(181:end);
edges=80:200:8000; %bins of the distributions
[n_A1,xout_A1] = histc(A1,edges); %distributions of the first subset
[n_A2,xout_A2] = histc(A2,edges);
[n_A3,xout_A3] = histc(A3,edges);
[n_A4,xout_A4] = histc(A4,edges);
%make stacked bar plot
for ii=1:numel(edges)
    y(ii,:) = [n_A1(ii) n_A2(ii) n_A3(ii) n_A4(ii)];
end
bar(y,'stacked', 'BarWidth', 1)
and obtained this:
It is not so bad... Maybe with other data it would look nicer, but I was wondering if someone has better ideas. Maybe it is possible to adapt fitdist in a similar way?
First, define the x axis. If you want it to follow the rules of bar, then use:
x = 0.5:numel(edges)-0.5;
Then use area(x,y), which produces a filled/stacked area plot:
area(x,y)
And if you want the same colors as the example you posted at the top, define the colormap and call colormap as:
map = [
218 96 96
248 219 138
253 249 199
139 217 140
195 139 217
246 221 245
139 153 221]/255;
colormap(map)
(It may not be exactly the same as the one you posted, but I think I got it quite close. Also, not all colors are shown in the result below since there are only 4 stacked subsets, but all colors are defined.)
Result:
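As for the fitdist idea in the question: one way to get a smoother, truly continuous look is to stack kernel density estimates instead of histogram counts. A sketch, assuming the same subsets A1..A4 as above (ksdensity needs the Statistics Toolbox; the grid limits are arbitrary choices):
xi = linspace(80, 8000, 200);                 % common evaluation grid
d  = [ksdensity(A1, xi); ksdensity(A2, xi); ...
      ksdensity(A3, xi); ksdensity(A4, xi)]'; % one density per subset
area(xi, d)                                   % stacked, filled densities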

Carrying out edge detection on the ROI of an image in Matlab

How can I do edge detection on the ROI (only) of an image, without processing the rest of the image? I have tried the following, but it is not working:
h4 = @(x) edge(x,'log');
Edge_map = roifilt2(Foregound_Newframe,roi_mask,h4);
roi_mask is the binary mask that I am using, and Foregound_Newframe is the grayscale image to be processed. Kindly provide an example. Thanks.
The error I see is that the function you are using for the filtering requires an input argument of type double; otherwise, your calling syntax should work fine.
i.e. use
YourFilter = @(x) edge(double(x),'log');
When I apply this to an example from the roifilt2 docs it works fine (OK, it looks weird in this case...):
clc
clear
FullImage = imread('eight.tif');
roi_col = [222 272 300 270 221 194];
roi_row = [21 21 75 121 121 75];
ROI = roipoly(FullImage,roi_col,roi_row);
YourFilter = @(x) edge(double(x),'log');
J = roifilt2(FullImage,ROI,YourFilter);
figure, imshow(FullImage), figure, imshow(J)
with the following output:
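If you want to avoid running edge over the whole image at all, one possible sketch (my own suggestion, not part of the original answer) is to crop the ROI's bounding box, detect edges on the crop only, and paste the result back:
% Process only the ROI's bounding box, then mask to the exact ROI
stats = regionprops(ROI, 'BoundingBox');
bb   = round(stats(1).BoundingBox);                   % [x y width height]
rows = bb(2):bb(2)+bb(4)-1;
cols = bb(1):bb(1)+bb(3)-1;
subEdges = edge(double(FullImage(rows,cols)), 'log'); % edge detection on the crop only
EdgeMap = false(size(FullImage));
EdgeMap(rows,cols) = subEdges;
EdgeMap = EdgeMap & ROI;                              % keep edges strictly inside the ROI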

Clustering with ambiguous input values

Imagine the following scenario:
I have two 4x100 matrices ang1_stab and ang2_stab. These contain four angles along the columns, like this:
ang1_stab:
195.7987   16.2722   14.4171  198.5878  199.2693 ...
 80.2062   86.7363   89.2861   89.5454   89.3411 ...
 68.7998  -83.8318  -80.3138   69.0636  -96.4913 ...
 -5.3262  -23.3030  -20.6823  -18.9915  -16.7224 ...

ang2_stab:
 95.3450  183.8212  171.0686  151.8887  177.9041 ...
 21.4780   27.2389   23.4016   27.6631   17.2893 ...
-13.2767 -103.5548 -115.0615   39.6790 -112.3568 ...
 -5.3262  -23.3030  -20.6823  -18.9915  -16.7224 ...
The fourth angle is always the same for both matrices, so it can be neglected.
The problem: some of the columns of ang1_stab and ang2_stab are swapped, so I need to find the columns that would fit better into the other matrix and then swap the respective columns.
The complication: the calculation of the given angles is ambiguous, and multiples of 90° might have been added or subtracted; e.g. the angle 16° should be considered closer to 195° than to 95°.
What I have tried so far:
fp1 = []; % define clusters
fp2 = [];
for j = 1:size(ang1_stab,2) % loop over all columns
    tmp1 = ang1_stab(:,j); % temporary columns
    tmp2 = ang2_stab(:,j);
    if j == 1 % assign first cluster center
        fp1 = [fp1, tmp1];
        fp2 = [fp2, tmp2];
    else
        mns1 = median(fp1(1:3,:),2); % calculate cluster centers
        mns2 = median(fp2(1:3,:),2);
        % calculate distances to cluster centers, wrapping by multiples of 90
        dif11 = sum(abs((mns1-tmp1(1:3))-round((mns1-tmp1(1:3))/90)*90));
        dif12 = sum(abs((mns1-tmp2(1:3))-round((mns1-tmp2(1:3))/90)*90));
        dif21 = sum(abs((mns2-tmp1(1:3))-round((mns2-tmp1(1:3))/90)*90));
        dif22 = sum(abs((mns2-tmp2(1:3))-round((mns2-tmp2(1:3))/90)*90));
        if min([dif11,dif21])<min([dif12,dif22]) % assign to cluster
            if dif11<dif21
                fp1 = [fp1,tmp1];
                fp2 = [fp2,tmp2];
            else
                fp1 = [fp1,tmp2];
                fp2 = [fp2,tmp1];
            end
        else
            if dif12<dif22
                fp1 = [fp1,tmp2];
                fp2 = [fp2,tmp1];
            else
                fp1 = [fp1,tmp1];
                fp2 = [fp2,tmp2];
            end
        end
    end
end
However:
This approach seems overly complicated, and I was wondering if I could somehow replace it with an appropriate algorithm, e.g. kmeans. However, I don't know how to account for the ambiguity in the angles in that case.
The code is working, but the clustering currently still throws points into the wrong cluster. I just cannot figure out why.
I would appreciate it if someone could tell me how to adapt this to work with built-in routines like kmeans.
Edit:
A small toy example:
This could be the output that I am getting:
ang1_stab = [30 10 80 100; 28 15 90 95; 152 93 180 102];
ang2_stab = [150 90 3 100; 145 92 5 95; 32 10 82 102];
What I would like to achieve:
fp1 = [30 10 80 100; 28 15 90 95; 32 10 82 102];
fp2 = [150 90 3 100; 145 92 5 95; 152 93 180 102];
Note that the last rows have been swapped (the toy matrices here are written with the angle sets along the rows, i.e. transposed relative to the 4x100 matrices above).
Also note that the third element in the last row of fp2 (180) is approximately the mean of the elements above it in that column (3 and 5), but 180° higher. I still need to be able to identify that this is the right cluster.
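In case it is useful, here is a minimal sketch of a simpler formulation: instead of clustering, treat each column as a binary keep-or-swap decision against per-matrix reference angles, with a distance that wraps modulo 90°. It assumes the 4xN orientation of the original matrices, and the medians are taken over all columns, which is a crude reference if many columns are swapped:
% Per-column swap test with a wrap-around (mod 90) distance
wrapdist = @(a,b) sum(abs(mod(a - b + 45, 90) - 45)); % angle distance modulo 90 degrees
fp1 = ang1_stab;
fp2 = ang2_stab;
m1 = median(fp1(1:3,:), 2); % reference angles for matrix 1
m2 = median(fp2(1:3,:), 2); % reference angles for matrix 2
for j = 1:size(fp1, 2)
    keep = wrapdist(fp1(1:3,j), m1) + wrapdist(fp2(1:3,j), m2);
    swap = wrapdist(fp2(1:3,j), m1) + wrapdist(fp1(1:3,j), m2);
    if swap < keep % the swapped assignment fits both references better
        [fp1(:,j), fp2(:,j)] = deal(fp2(:,j), fp1(:,j));
    end
end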