How can I improve my sobel operator edge detection? - matlab

I'm trying to get into the field of computer vision, and to start I implemented a Sobel filter in MATLAB, which I read about here: http://en.wikipedia.org/wiki/Sobel_operator
Here is the code:
image = double(image);
kernelx = [ -1, 0, 1;
            -2, 0, 2;
            -1, 0, 1];
kernely = [  1, 2, 1;
             0, 0, 0;
            -1, 0, 1];
height = size(image,1);
width = size(image,2);
channel = size(image,3);
for i = 2:height - 1
    for j = 2:width - 1
        for k = 1:channel
            magx = 0;
            magy = 0;
            for a = 1:3
                for b = 1:3
                    magx = magx + (kernelx(a, b) * image(i + a - 2, j + b - 2, k));
                    magy = magy + (kernely(a, b) * image(i + a - 2, j + b - 2, k));
                end
            end
            edges(i, j, k) = sqrt(magx^2 + magy^2);
        end
    end
end
Here is an image I tested it on:
This is the result:
I don't know where to go from here. I've tried looking at line thinning and thresholding; what steps should I take to make this work better?

Your kernel in the y direction seems to be incorrect; it should be
[ 1,  2,  1;
  0,  0,  0;
 -1, -2, -1];
Further, if you want to improve edge detection, you can look into hysteresis thresholding; it's an easy way to complete some obvious contours in an image that might otherwise be missed:
http://en.wikipedia.org/wiki/Canny_edge_detector#Tracing_edges_through_the_image_and_hysteresis_thresholding
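For reference, here is a minimal sketch of hysteresis thresholding applied to a gradient-magnitude image. It assumes the Image Processing Toolbox (for imreconstruct) is available and that E holds a 2-D edge magnitude such as sqrt(magx^2 + magy^2), or one channel of the result from the answer below; the two thresholds are arbitrary and will need tuning:
E = E / max(E(:));           % scale the gradient magnitude to [0, 1]
lowT  = 0.1;                 % weak-edge threshold (tune for your image)
highT = 0.3;                 % strong-edge threshold (tune for your image)
weak   = E > lowT;           % everything that could possibly be an edge
strong = E > highT;          % pixels we are confident about
% keep weak pixels only if they are connected to at least one strong pixel
edges_hyst = imreconstruct(strong, weak);
imshow(edges_hyst);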

Results first
1 Typo in Kernel
As Bharat Singh pointed out, your y-kernel looks wrong. (Later analysis shows that it changes the results, but it isn't the main problem.) If you want, you can use your original kernel in my code below to see what the result is. (For posterity: kernely(3,:) = [-1, 0, 1];) Basically, it looks like the input image.
2 Use Convolution to make debugging fast
Before you do anything else, just use the convolution function that is provided by Matlab.
Also, to speed things up, use conv2. While you are experimenting, you might want to use parfor instead of the outer for-loop. The convolution is instantaneous on my machine and the parfor version takes minutes.
3 imshow is causing problems
There is an alternative, imagesc, that scales the image automatically. Or you can call imshow(image, []). But the problem in this case is that each channel looks right while the automated mixing of the channels by Matlab doesn't work. (Remember, they are not RGB anymore but more like the magnitude of the R-channel derivative |dR|, etc.)
Check this by looking at each resulting channel individually (imshow(E(:,:,1), [])) or as a Euclidean average (see my code). If you run imshow(E, []) you get the blown-out highlights. The same happens if you pass it through rgb2gray first.
Code
image = imread('cat.jpg');
image = double(image);
kernelx = [ -1, 0, 1;
            -2, 0, 2;
            -1, 0, 1];
kernely = [  1,  2,  1;
             0,  0,  0;
            -1, -2, -1];
height = size(image, 1);
width = size(image, 2);
channel = size(image, 3);
edges = zeros(height, width, channel);
if exist('chooseSlow', 'var')
    parfor i = 2:height - 1
        for j = 2:width - 1
            for k = 1:channel
                magx = 0;
                magy = 0;
                for a = 1:3
                    for b = 1:3
                        magx = magx + (kernelx(a, b) * image(i + a - 2, j + b - 2, k));
                        magy = magy + (kernely(a, b) * image(i + a - 2, j + b - 2, k));
                    end
                end
                edges(i, j, k) = sqrt(magx^2 + magy^2);
            end
        end
    end
end
%% Convolution approach
E = zeros(height, width, channel);
for k = 1:channel
    Magx = conv2(image(:,:,k), kernelx, 'same');
    Magy = conv2(image(:,:,k), kernely, 'same');
    E(:,:,k) = sqrt(Magx.^2 + Magy.^2);
end
imshow(sqrt(E(:,:,1).^2 + E(:,:,2).^2 + E(:,:,3).^2), []);
print('result.png', '-dpng');

Related

Optimizing DP in matlab

I have the following DP, which I am applying to a binarized image (values 0 or 1) in Matlab:
[x, y] = size(img);
dp = zeros(x, y);
dp(1,:) = img(1,:);
dp(:,1) = img(:,1);
for i = 2:x
    for j = 2:y
        if img(i, j) == 0
            dp(i, j) = min([dp(i, j - 1), dp(i - 1, j), dp(i - 1, j - 1)]) + 1;
        end
    end
end
For large x and y the code takes a lot of time, maybe because of the if condition and the use of for loops instead of vectorized code.
Can anyone optimize it?
Or is there an approach that optimizes the above code by exploiting the fact that the matrix img contains only 0s and 1s (with fewer 1s than 0s)?
Also, is it possible to use parallel for loops to speed it up?
As far as I am aware, you cannot really speed up this computation in general. But if you know that there are only very few entries where img(i,j) == 0, the following approach might save you a little bit of time:
[x, y] = size(img);
dp = zeros(x, y);
dp(1,:) = img(1,:);
dp(:,1) = img(:,1);
[i, j] = find(img(2:end, 2:end) == 0); % extract only the pixels where we actually need to do something
i = i + 1; % correct for removing the first row and column
j = j + 1;
for k = 1:numel(i)
    dp(i(k), j(k)) = min([dp(i(k), j(k) - 1), dp(i(k) - 1, j(k)), dp(i(k) - 1, j(k) - 1)]) + 1;
end
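One thing worth verifying is that the order in which find visits the pixels (column-major) still respects the DP dependencies: each update only reads cells above and to the left, which are already final in either traversal order. A quick sanity check against the original double loop, as a sketch with my own random test image:
img = double(rand(500) > 0.05);                  % mostly 1s, a few 0s
[x, y] = size(img);
% original double loop
dp1 = zeros(x, y); dp1(1,:) = img(1,:); dp1(:,1) = img(:,1);
for r = 2:x
    for c = 2:y
        if img(r, c) == 0
            dp1(r, c) = min([dp1(r, c-1), dp1(r-1, c), dp1(r-1, c-1)]) + 1;
        end
    end
end
% find-based version from above
dp2 = zeros(x, y); dp2(1,:) = img(1,:); dp2(:,1) = img(:,1);
[i, j] = find(img(2:end, 2:end) == 0);
i = i + 1; j = j + 1;
for k = 1:numel(i)
    dp2(i(k), j(k)) = min([dp2(i(k), j(k)-1), dp2(i(k)-1, j(k)), dp2(i(k)-1, j(k)-1)]) + 1;
end
isequal(dp1, dp2)                                % expected: 1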

Precision of calculations [duplicate]

This question already has an answer here:
How to obtain Fortran precision in MatLAB
(1 answer)
Closed 7 years ago.
I am doing a calculation in Fortran on a double-precision variable, and after the calculation the variable gets the value -7.217301636365630e-24.
However, when I do the same computation in MATLAB, the variable just gets the value 0. Is there a way to increase the precision of MATLAB calculations so that I would also be able to get something on the order of 7e-24?
Ideally, it would be something I could apply to all calculations in the script and not just a single variable. Something similar to when using format long.
For me this kind of precision is crucial, as I need to determine whether a variable is indeed negative or not.
I have added the code. It is rather long, but I couldn't trim it further without throwing away variables and their precision. The last term, Ax(i,:,:), is the one that I would like to have a very high precision on. So the important stuff occurs only in the last line.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% CONSTANTS
clc
clear all
sym_weight = [4/9, 1/9,1/9,1/9,1/9, 1/36,1/36,1/36,1/36];
dir_x = [ 0, 1, 0, -1, 0, 1, -1, -1, 1];
dir_y = [ 0, 0, 1, 0, -1, 1, 1, -1, -1];
ly = 11; lx = ly;
xC = 5; yC=xC;
density_high = 1.0;
density_low = 0.1;
radius = 2;
interface_w = 1;
sigma_st = 0.0001;
beta = 12*sigma_st/(interface_w*(density_high-density_low)^4);
kappa = 1.5*sigma_st*interface_w/(density_high-density_low)^2;
saturated_density = 0.5*(density_high+density_low);
for x = 1:lx
    for y = 1:ly
        for i = 1:9
            fIn(i, x, y) = sym_weight(i)*density_high;
            gIn(i, x, y) = 3*sym_weight(i);
            test_radius = sqrt((x-xC)^2 + (y-yC)^2);
            if (test_radius <= (radius + interface_w))
                fIn(i, x, y) = sym_weight(i)*( saturated_density - 0.5*(density_high-density_low)*tanh(2*(radius-sqrt((x-xC)^2 + (y-yC)^2))/interface_w) );
            end
        end
    end
end
density_2d = ones(lx)*saturated_density;
for i = 1:lx
    density_aux(:,:,i) = abs(density_2d(:, i)');
end
density_local = sum(fIn);
L_density_local = (+1.0*(circshift(density_local(1,:,:), [0, +1, +1]) + circshift(density_local(1,:,:), [0, -1, +1]) + circshift(density_local(1,:,:), [0, +1, -1]) + circshift(density_local(1,:,:), [0, -1, -1])) + ...
                   +4.0*(circshift(density_local(1,:,:), [0, +1, +0]) + circshift(density_local(1,:,:), [0, -1, +0]) + circshift(density_local(1,:,:), [0, +0, +1]) + circshift(density_local(1,:,:), [0, +0, -1])) + ...
                   -20.0*density_local(1,:,:));
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
chem_pot = 4*beta*(density_local-density_low).*(density_local-density_high).*(density_local-density_aux) - kappa*L_density_local/6;
for i = 3
    Ax(i,:,:) = (+circshift(chem_pot(1,:,:), [0,-2*dir_x(i),-2*dir_y(i)]) - chem_pot(1,:,:));
end
You have not shown the Fortran code, but be aware that in Fortran, when you do this:
density_low = 0.1
The literal 0.1 is single precision, regardless of the type of density_low.
All of those literals need to be expressed as 0.1D0 or 0.1_k where k is the appropriate kind integer.
(Sorry if you knew that, but it's a common mistake.)
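To get a feeling for how large that error is, here is a small MATLAB illustration (a sketch; the exact numbers in your Fortran code will differ):
err = double(single(0.1)) - 0.1
% about 1.5e-09, i.e. roughly fifteen orders of magnitude larger than
% the -7.2e-24 you are trying to resolve
eps(1)
% about 2.2e-16: double-precision rounding near 1, so a -7e-24 result
% obtained by cancelling terms of order one is below rounding noise
% in double precision as well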

data fitting an ellipse in 3D space

I've got a set of data that apparently forms an ellipse in 3D space (not an ellipsoid, but a curve in 3D).
Being inspired by the following thread http://au.mathworks.com/matlabcentral/newsreader/view_thread/65773
and with help from someone, I managed to get the optimization code running, and it outputs a set of best parameters x (a vector). However, when I try to use this x to replicate the ellipse, the outcome is a strange straight line in space. I have been stuck on this for days and still don't know what went wrong; I'm pretty much devastated. I hope someone can shed some light on this. The mathematical formulation of the ellipse follows the same one as in the above thread, where
The 3D ellipse is given by: (x; y; z) = (z1; z2; z3) + R(alpha, beta, gamma)·(a*cos(phi); b*sin(phi); 0)
where:
* z is the translation vector.
* R is the rotation matrix (using Euler angles, we first rotate alpha rad around the x-axis, then beta rad around the y-axis and finally gamma rad around the z-axis again).
* a is the long axis of the ellipse
* b is the short axis of the ellipse.
Here is my target function for optimization (ellipsefit.m)
function [merit]= ellipsefit(x, vmatrix) % x is the initial parameters, vmatrix stores the datapoints
load vmatrix.txt % In vmatrix, the data are stored: N rows x 3 columns
a = x(1);
b = x(2);
alpha = x(3);
beta = x(4);
gamma = x(5);
z = [x(6),x(7),x(8)];
t = z';
[dim1, dim2]=size(vmatrix);
% construction of rotation matrix R(alpha,beta,gamma)
R1 = [cos(alpha), sin(alpha), 0; -sin(alpha), cos(alpha), 0; 0, 0, 1];
R2 = [1, 0, 0; 0, cos(beta), sin(beta); 0, -sin(beta), cos(beta)];
R3 = [cos(gamma), sin(gamma), 0; -sin(gamma), cos(gamma), 0; 0, 0, 1];
R = R3*R2*R1;
% first compute vector phi (in the length of the data) by minimizing for every
% point in the data set the distance of this point to the ellipse
% (with its initial parameters a,b,alpha,beta,gamma, z1,z2,z3 held fixed) with respect to phi.
for i = 1:dim1
    point = vmatrix(i,:)';
    dist = @(phi) sum((R*[a*cos(phi); b*sin(phi); 0] + t - point)).^2;
    phi(i) = fminbnd(dist, 0, 2*pi);
end
v = [a*cos(phi); b*sin(phi); zeros(size(phi))];
P = R*v;
%The targetfunction is: g = (xi1,xi2,xi3)' -(z1,z2,z3)'-R(alpha,beta,gamma)(a cos(phi), b sin(phi),0)'
% Construction of distance function
merit = [vmatrix(:,1)-z(1)-P(1),vmatrix(:,2)-z(2)-P(2),vmatrix(:,3)-z(3)-P(3)];
merit = sqrt(sum(sum(merit.^2)))
end
here is the main function for parameters initialization and call for opts (xfit.m)
function [y] = xfit (x)
x= [1 1 1 1 1 1 1 1] % initial parameters
[x] = fminsearch(@ellipsefit, x) % set the distance minimum as the target function
y=x
end
code to reconstruct the ellipse in scatter points (ellipsescatter.txt)
x= [0.655,0.876,1.449,2.248,1.024,0.201,-0.11,0.002] % values obtained according to above routines
a = x(1);
b = x(2);
alpha = x(3);
beta = x(4);
gamma = x(5);
z = [x(6),x(7),x(8)];
R1 = [cos(alpha), sin(alpha), 0; -sin(alpha), cos(alpha), 0; 0, 0, 1];
R2 = [1, 0, 0; 0, cos(beta), sin(beta); 0, -sin(beta), cos(beta)];
R3 = [cos(gamma), sin(gamma), 0; -sin(gamma), cos(gamma), 0; 0, 0, 1];
R = R3*R2*R1;
phi=linspace(0,2*pi,100)
v = [a*cos(phi); b*sin(phi); zeros(size(phi))];
P = R*v;
u = P'
and last the data points (vmatrix)
0.002037965 0.004225765 0.002020202
0.005766671 0.007269193 0.004040404
0.010004802 0.00995638 0.006060606
0.014444336 0.012502725 0.008080808
0.019083408 0.014909533 0.01010101
0.023967745 0.017144827 0.012121212
0.03019849 0.01969697 0.014591289
0.038857283 0.022727273 0.017839321
0.045443501 0.024730475 0.02020202
0.051213405 0.026346492 0.022222222
0.061038174 0.028787879 0.02555121
0.069408829 0.030575164 0.028282828
0.075785861 0.031818182 0.030321465
0.088818543 0.033954681 0.034343434
0.095538223 0.03490652 0.036363636
0.109421234 0.036499949 0.04040404
0.123800737 0.037746182 0.044444444
0.131206601 0.038218171 0.046464646
0.146438211 0.038868525 0.050505051
0.162243245 0.039117883 0.054545455
0.178662839 0.03893748 0.058585859
0.195740664 0.038296774 0.062626263
0.204545539 0.037790433 0.064646465
0.222781268 0.036340005 0.068686869
0.23715887 0.034848485 0.071748051
0.251787024 0.033009003 0.074747475
0.26196429 0.031542949 0.076767677
0.278510276 0.028787879 0.079919236
0.294365342 0.025757576 0.082799669
0.306221108 0.023197784 0.084848485
0.31843759 0.020305704 0.086868687
0.331291367 0.016967964 0.088888889
0.342989936 0.013636364 0.090622484
0.352806191 0.010606061 0.091993214
0.36201461 0.007575758 0.093211986
0.376385537 0.002386324 0.094949495
0.386214665 -0.001515152 0.096012
0.396173756 -0.005800677 0.096969697
0.406365393 -0.010606061 0.097799682
0.417897899 -0.016666667 0.098516141
0.428059375 -0.022727273 0.098889844
0.436894505 -0.028787879 0.09893196
0.444444123 -0.034848485 0.098652697
0.45074522 -0.040909091 0.098061305
0.455830971 -0.046969697 0.097166076
0.457867157 -0.05 0.096591789
0.46096663 -0.056060606 0.095199991
0.461974832 -0.059090909 0.094368708
0.462821268 -0.063709158 0.092929293
0.46279206 -0.068181818 0.091323015
0.462224312 -0.071212121 0.090097745
0.461247257 -0.074242424 0.088770148
0.459194871 -0.07812596 0.086868687
0.456406121 -0.0818267 0.084848485
0.45309565 -0.085162601 0.082828283
0.449335762 -0.088184223 0.080808081
0.445185841 -0.090933095 0.078787879
0.440695103 -0.093443633 0.076767677
0.435904796 -0.095744683 0.074747475
0.429042582 -0.098484848 0.072052312
0.419877272 -0.101489369 0.068686869
0.41402731 -0.103049401 0.066666667
0.407719192 -0.104545455 0.064554798
0.395265308 -0.106881864 0.060606061
0.388611992 -0.107880111 0.058585859
0.374697979 -0.10945186 0.054545455
0.360058411 -0.11051623 0.050505051
0.352443612 -0.11084211 0.048484848
0.336646801 -0.111097219 0.044444444
0.320085063 -0.110817414 0.04040404
0.31150078 -0.110465333 0.038383838
0.293673303 -0.109300395 0.034343434
0.275417637 -0.107575758 0.030396076
0.265228963 -0.106361993 0.028282828
0.251914589 -0.104545455 0.025603647
0.234385536 -0.101745907 0.022222222
0.223443994 -0.099745394 0.02020202
0.212154519 -0.097501571 0.018181818
0.20046153 -0.09497557 0.016161616
0.188298809 -0.092121085 0.014141414
0.17558878 -0.088883868 0.012121212
0.162241674 -0.085201142 0.01010101
0.148154337 -0.081000773 0.008080808
0.136529019 -0.077272727 0.006507255
0.127611912 -0.074242424 0.005361311
0.116762238 -0.070350086 0.004040404
0.103195122 -0.065151515 0.002507114
0.095734279 -0.062121212 0.001725236
0.081719652 -0.056060606 0.000388246
0 0 0
This answer is not a direct fit in 3D; instead, it first rotates the data so that the plane of the points coincides with the xy plane, and then fits an ellipse to the data in 2D.
% input: data, a N x 3 array with one set of Cartesian coords per row
% remove the center of mass
CM = mean(data);
datap = data - ones(size(data,1),1)*CM;
% now rotate all points into the xy plane ...
% start by finding the plane:
[u s v]=svd(datap);
% rotate the data into the principal axes frame:
datap = datap*v;
% fit the equation for an ellipse to the rotated points
x= [0.25 0.07 0.037 0 0]'; % initial parameters
options=1;
xopt = fmins(@fellipse, x, options, [], datap) % set the distance minimum as the target function
This is the function fellipse, based on the function provided:
function [merit]= fellipse(x,data) % x is the initial parameters, data stores the datapoints
a = x(1);
b = x(2);
alpha = x(3);
z = x(4:5);
R = [cos(alpha), sin(alpha), 0; -sin(alpha), cos(alpha), 0; 0, 0, 1];
data = data*R;
merit = 0;
[dim1, dim2]=size(data);
for i = 1:dim1
    dist = @(phi) sum(([a*cos(phi); b*sin(phi)] + z - data(i,1:2)').^2);
    phi = fminbnd(dist, 0, 2*pi);
    merit = merit + dist(phi);
end
end
Also, note again that this can be turned into a fit directly in 3D, but this answer is just as good if you can assume the data points lie approximately in a 2D plane. The current solution is likely much more efficient than a solution in 3D with additional parameters.
Hopefully the code is self-explanatory. I recommend looking at the link included in the OP, it explains the purpose of the loop over phi.
And this is how you can inspect the result of the fit:
a = xopt(1);
b = xopt(2);
alpha = xopt(3);
z = [xopt(4:5) ; 0]';
phi = linspace(0,2*pi,100)';
simdat = [a*cos(phi) b*sin(phi) zeros(size(phi))];
R = [cos(alpha), -sin(alpha), 0; sin(alpha), cos(alpha), 0; 0, 0, 1];
simdat = simdat*R + ones(size(simdat,1), 1)*z ;
figure, hold on
plot3(datap(:,1),datap(:,2),datap(:,3),'o')
plot3(simdat(:,1),simdat(:,2),zeros(size(simdat,1),1),'r-')
Edit
The following is a 3D approach. It does not appear to be very robust as it is quite sensitive to the choice of starting parameters. Some improvements may be necessary.
CM = mean(data);
datap = data - ones(size(data,1),1)*CM;
xopt = [ 0.07 0.25 1 -0.408976 0.610120 0 0 0]';
options=1;
xopt = fmins(@fellipse3d, xopt, options, [], datap) % set the distance minimum as the target function
The function fellipse3d is
function [merit]= fellipse3d(x,data) % x is the initial parameters, data stores the datapoints
a = abs(x(1));
b = abs(x(2));
alpha = x(3);
beta = x(4);
gamma = x(5);
z = x(6:8)';
[dim1, dim2]=size(data);
R1 = [cos(alpha), sin(alpha), 0; -sin(alpha), cos(alpha), 0; 0, 0, 1];
R2 = [1, 0, 0; 0, cos(beta), sin(beta); 0, -sin(beta), cos(beta)];
R3 = [cos(gamma), sin(gamma), 0; -sin(gamma), cos(gamma), 0; 0, 0, 1];
R = R3*R2*R1;
data = (data - z(ones(dim1,1),:))*R;
merit = 0;
for i = 1:dim1
    dist = @(phi) sum(([a*cos(phi); b*sin(phi); 0] - data(i,:)').^2);
    phi = fminbnd(dist, 0, 2*pi);
    merit = merit + dist(phi);
end
end
You can visualize the results with
a = xopt(1);
b = xopt(2);
alpha = -xopt(3);
beta = -xopt(4);
gamma = -xopt(5);
z = xopt(6:8)' + CM;
dim1 = 100;
phi = linspace(0,2*pi,dim1)';
simdat = [a*cos(phi) b*sin(phi) zeros(size(phi))];
R1 = [cos(alpha), sin(alpha), 0; ...
-sin(alpha), cos(alpha), 0; ...
0, 0, 1];
R2 = [1, 0, 0; ...
0, cos(beta), sin(beta); ...
0, -sin(beta), cos(beta)];
R3 = [cos(gamma), sin(gamma), 0; ...
-sin(gamma), cos(gamma), 0; ...
0, 0, 1];
R = R1*R2*R3;
simdat = simdat*R + z(ones(dim1,1),:);
figure, hold on
plot3(data(:,1),data(:,2),data(:,3),'o')
plot3(simdat(:,1),simdat(:,2),simdat(:,3),'r-')

Filter matrix rows depending on values in a second matrix

Given a 2x3 matrix x and a 4x2 matrix y, I'd like to use each row of y to index into x. If the value in x is not equal to -1 I'd like to remove that row from y. Here's an example that does what I'd like, except I'd like to do it in a fast, simple way without a loop.
x = [1, 2, 3; -1, 2, -1];
y = [1, 1; 1, 3; 2, 1; 2, 3];
for i = size(y,1):-1:1
    if x(y(i,1), y(i,2)) ~= -1
        y(i,:) = [];
    end
end
This results in:
y =
2 1
2 3
A raw approach to what sub2ind does internally (as used in this pretty nice-looking solution posted by Luis) would be this:
y = y(x((y(:,2)-1)*size(x,1)+y(:,1))==-1,:)
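A tiny sanity check of the linear-index arithmetic, using my own example values:
x = [1, 2, 3; -1, 2, -1];          % 2x3, stored column-major
sub2ind(size(x), 2, 3)             % returns 6
(3 - 1)*size(x, 1) + 2             % the same linear index, computed by hand
x(6)                               % -1, i.e. x(2,3)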
Benchmarking
Benchmarking Code
N = 5000;
num_runs = 10000;
x = round(rand(N,N).*2)-1;
y = zeros(N,2);
y(:,1) = randi(size(x,1),N,1);
y(:,2) = randi(size(x,2),N,1);
disp('----------------- With sub2ind ')
tic
for k = 1:num_runs
    y1 = y(x(sub2ind(size(x), y(:,1), y(:,2)))==-1,:);
end
toc, clear y1
disp('----------- With raw version of sub2ind ')
tic
for k = 1:num_runs
    y2 = y(x((y(:,2)-1)*size(x,1)+y(:,1))==-1,:);
end
toc
Results
----------------- With sub2ind
Elapsed time is 4.095730 seconds.
----------- With raw version of sub2ind
Elapsed time is 2.405532 seconds.
This can be easily vectorized as follows (see sub2ind):
y = y(x(sub2ind(size(x), y(:,1), y(:,2)))==-1,:);
>> x = [1, 2, 3; -1, 2, -1];
>> y = [1, 1;
        1, 2;
        1, 3;
        2, 1;
        2, 2;
        2, 3];
>> row_idx = reshape((x == -1)', 1, 6);
>> y = y(row_idx,:);
I think you didn't include all the indices of x in y, so I included all of them in y. Have a look.
Generalized version:
>> x = [1, 2, 3; -1, 2, -1];
>> y = [1, 1;
        1, 2;
        1, 3;
        2, 1;
        2, 2;
        2, 3];
>> row_idx = reshape((x == -1)', 1, size(x,1)*size(x,2));
>> y = y(row_idx,:);

Finding an equation of linear classifier for two separable sets of points using perceptron learning

I would like to write a MATLAB function to find the equation of a linear classifier for 2 separable sets of points using a single-layer perceptron. I have got 2 files:
script file - run.m:
x_1 = [3, 3, 2, 4, 5];
y_1 = [3, 4, 5, 2, 2];
x_2 = [6, 7, 5, 9, 8];
y_2 = [3, 3, 4, 2, 5];
target_array = [0 0 0 0 0 1 1 1 1 1];
[ func ] = classify_perceptron([x_1 x_2; y_1 y_2], target_array);
x = -2:10;
y = arrayfun(func, x);
plot(x_1, y_1, 'o', x_2, y_2, 'X', x, y);
axis([-2, 10, -2, 10]);
classify_perceptron.m
function [ func ] = classify_perceptron( points, target )
% points - matrix of x,y coordinates
% target - array of expected results
% func - function handler which appropriately classifies a point
% given by x, y arguments supplied to this function
target_arr = target;
weights = rand(1, 2);
translation = rand();
for i = 1:size(points, 2)
    flag = true;
    while flag
        result = weights * points(:, i) + translation;
        y = result > 0;
        e = target_arr(1, i) - y;
        if e ~= 0
            weights = weights + (e * points(:, i))';
            translation = translation + e;
        else
            flag = false;
        end
    end
end
func = #(x)(-(translation + (weights(1, 1) * x)) / weights(1, 2));
return
end
The problem is that I don't know where I am making the mistake that leads to the incorrect result. It looks like the slope of the line is right; however, the translation should be a bit bigger. I would be really thankful if you could point me in the right direction. The result I get is shown in the picture below:
OK, so I have made significant progress. In case someone runs into the same problem, here is the solution: the problem was solved by adding a variable learning_rate = 0.1 and wrapping the loop that iterates over the points in an outer loop that runs for as many iterations as specified in the variable epochs (e.g. 300); a sketch of the modified training loop follows.
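A minimal sketch of that modified training routine, with hypothetical names (classify_perceptron_epochs, learning_rate, epochs) and assuming the learning rate scales both the weight and translation updates, which the original post did not spell out:
function [ func ] = classify_perceptron_epochs( points, target )
% points - matrix of x,y coordinates
% target - array of expected results
learning_rate = 0.1;                 % step size for the updates
epochs = 300;                        % number of passes over the whole data set
weights = rand(1, 2);
translation = rand();
for epoch = 1:epochs
    for i = 1:size(points, 2)
        result = weights * points(:, i) + translation;
        y = result > 0;
        e = target(1, i) - y;
        % scale the perceptron updates by the learning rate
        weights = weights + learning_rate * (e * points(:, i))';
        translation = translation + learning_rate * e;
    end
end
% decision boundary: weights(1)*x + weights(2)*y + translation = 0
func = @(x)(-(translation + weights(1) * x) / weights(2));
end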