Sigmoid and its derivative - neural network

public double Sigmoid(double x)
{
    return 2 / (1 + Math.Exp(-2 * x)) - 1;
}

public double Derivative(double x)
{
    double s = Sigmoid(x) - (Sigmoid(x) * Sigmoid(x));
    return s;
}
When I train the network, it gives this output:
0,0 = 0, it is always 0 // I don't know why
0,1 = 0.67 and it is going up // good, but after 1000 repeats it gets to 0.20 and it is going down
1,0 = 0.50 and it is going up // good, but after 1000 repeats it gets to 0.20 and it is going down
1,1 = 0.80 and it is going up // wrong, it should go down
Where is the mistake?
Neural network (XOR and back propagation)
int pw = Convert.ToInt32(textBox1.Text);
for (int i12 = 0; i12 < pw; i12++)
{
    //i1 = Convert.ToDouble(textBox2.Text);
    //i2 = Convert.ToDouble(textBox3.Text);
    //desired = Convert.ToDouble(textBox1.Text);
    for (int i = 0; i < 4; i++)
    {
        if (i == 0)
        {
            i1 = 1;
            i2 = 1;
            desired = 0;
        }
        else if (i == 1)
        {
            i1 = 1;
            i2 = 0;
            desired = 1;
        }
        else if (i == 2)
        {
            i1 = 0;
            i2 = 1;
            desired = 1;
        }
        else if (i == 3)
        {
            i1 = 0;
            i2 = 0;
            desired = 0;
        }
        //double[] questions = new double[2];
        //questions[0] = 1;
        //questions[1] = 0;
        //Random rnd = new Random();
        //double s = questions[rnd.Next(0, 2)];
        //double s1 = questions[rnd.Next(0, 2)];
        //i1 = s;
        //i2 = s1;

        // hidden layer weighted sums
        h1 = i1 * w1 + i2 * w2;
        h2 = i1 * w3 + i2 * w4;
        h3 = i1 * w5 + i2 * w6;

        // hidden layer activations
        h1v = Sigmoid(h1);
        h2v = Sigmoid(h2);
        h3v = Sigmoid(h3);

        // final output
        output = h1v * w7 + h2v * w8 + h3v * w9;
        outputS = Sigmoid(output);

        // BACKPROPAGATION
        // margin of error
        Error = desired - outputS; // desired - the target value, outputS - the guessed value

        // delta output sum
        deltaoutputsum = Derivative(output) * Error * 0.05; // pre-sigmoid output and the error

        // old weights w7, w8, w9
        w7b = w7; // 0.3
        w8b = w8; // 0.5
        w9b = w9; // 0.9
        w7 = w7 + deltaoutputsum * h1v; // weight w7
        w8 = w8 + deltaoutputsum * h2v; // weight w8
        w9 = w9 + deltaoutputsum * h3v; // weight w9

        // delta hidden sum
        h1 = deltaoutputsum * w7b * Derivative(h1);
        h2 = deltaoutputsum * w8b * Derivative(h2);
        h3 = deltaoutputsum * w9b * Derivative(h3);

        // weights w1-w6
        w1 = w1 - h1 * i1;
        w2 = w2 - h1 * i2;
        w3 = w3 - h2 * i1;
        w4 = w4 - h2 * i2;
        w5 = w5 - h3 * i1;
        w6 = w6 - h3 * i2;
    }
}
Why, after training, does it give:
1,0 == close to 0, should be close to 1
1,1 == close to 1, should be 0
0,0 == good, it is close to 0
0,1 == close to 0, should be close to 1
This is the code used after training (i1 and i2 are the inputs, 1 or 0):
i1 = Convert.ToDouble(textBox4.Text);
i2 = Convert.ToDouble(textBox5.Text);

// hidden layer weighted sums
h1 = i1 * w1 + i2 * w2;
h2 = i1 * w3 + i2 * w4;
h3 = i1 * w5 + i2 * w6;

// hidden layer activations
h1v = Sigmoid(h1);
h2v = Sigmoid(h2);
h3v = Sigmoid(h3);

// final output
output = h1v * w7 + h2v * w8 + h3v * w9;
outputS = Sigmoid(output);
MessageBox.Show(outputS.ToString());
w1-w10 are the weights, h1v/h2v/h3v are the values of the hidden nodes, and h1/h2/h3 are the weighted sums of the hidden nodes.

Related

MATLAB Kalman filter coding

T = 0.2;
A = [1 T; 0 1];
B = [T^2 / 2 T];
H = [1 0];
G = [0 1]';
Q = 0.00005;
R = 0.006;
x1(1) = 0;
x2(1) = 0;
x1e(1) = 0;
x2e(1) = 0;
xest = [x1e(1) x2e(1)]';
x1p(1) = 0;
x2p(1) = 0;
PE = [R 0; 0 0];
PP = A * PE(1) * A' + Q;
for i = 1:25
    if i < 10
        u = 0.25;
    else
        u = 0;
    end
    x1(i+1) = x1(i) + T * x2(i) + (T^2 / 2) * u;
    x2(i+1) = x2(i) + T * u + sqrt(Q) * randn;
    y(i+1) = x1(i+1) + sqrt(R) * randn;
    PP = A * PE * A' + G * Q * G';
    K = PP * H' * inv(H * PP * H' + R);
    PE = [eye(2) - K * H] * PP;
    xpredict = A * xest + B * u;
    xest = xpredict + K * (y(i+1) - H * xpredict);
    x1e(i+1) = [1 0] * xest;
    x2e(i+1) = [0 1] * xest;
end
Unable to perform assignment because the left and right sides have a different number of elements.
Error in rrrr (line 34)
x1e(i+1) = [1 0] * xest;
How can I solve this error?
xpredict must be a 2x1 vector. To solve it, you need to transpose B in line 5, i.e., B = [T^2 / 2 T]', since from Newton's laws of motion with constant velocity we have the state-update equations already used inside your loop.
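As a worked form of that statement, the two update lines from the question's loop can be written in matrix form (reading x1 as position and x2 as velocity is an interpretation consistent with the code, not something stated in the post):

\[
\begin{bmatrix} x_1(i+1) \\ x_2(i+1) \end{bmatrix}
=
\begin{bmatrix} 1 & T \\ 0 & 1 \end{bmatrix}
\begin{bmatrix} x_1(i) \\ x_2(i) \end{bmatrix}
+
\begin{bmatrix} T^2/2 \\ T \end{bmatrix} u
\]

So B has to be the 2x1 column [T^2/2; T]. With the row vector B = [T^2/2 T], the product B * u is 1x2, xpredict = A * xest + B * u is no longer 2x1, and the later assignment x1e(i+1) = [1 0] * xest fails with the size-mismatch error shown.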

Straighten contours OpenCV

Hello guys, I would like to "straighten" some contours using OpenCV/Python. Is there any way to accomplish this?
I have attached two images: one showing the current stage and one showing how I would like it to look.
Bounding boxes resolve the majority of the problem, but there are some exceptions that do not produce the desired outcome (see the top-right contour in the image).
Thank you very much!
Current Contours
Squared Contours
import cv2 as cv
import numpy as np
from math import degrees


def approximate_contours(image: np.ndarray, eps: float, color=(255, 255, 255)):
    contours, _ = cv.findContours(image, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE)
    image = cv.cvtColor(image, cv.COLOR_GRAY2BGR)
    approx_contours = []
    for cnt in contours:
        epsilon = eps * cv.arcLength(cnt, True)
        approx = cv.approxPolyDP(cnt, epsilon, True)
        cv.drawContours(image, [approx], -1, color=color)
        approx_contours.append(approx)
    return image, approx_contours


def get_angle(pts: np.ndarray):
    a = np.array([pts[0][0][0], pts[0][0][1]])
    b = np.array([pts[1][0][0], pts[1][0][1]])
    c = np.array([pts[2][0][0], pts[2][0][1]])
    ba = a - b
    bc = c - b
    unit_vector_ba = ba / np.linalg.norm(ba)
    unit_vector_bc = bc / np.linalg.norm(bc)
    dot_product = np.dot(unit_vector_ba, unit_vector_bc)
    angle_rad = np.arccos(dot_product)
    angle_deg = degrees(angle_rad)
    try:
        int(angle_deg)
    except Exception:
        raise Exception("nan value detected")
    return int(angle_deg)


def move_points(contour: np.ndarray, pts: np.ndarray, angle: int, ext: list, weight=1):
    (ext_left, ext_right, ext_bot, ext_top) = ext
    a = np.array([pts[0][0][0], pts[0][0][1]])
    b = np.array([pts[1][0][0], pts[1][0][1]])
    c = np.array([pts[2][0][0], pts[2][0][1]])
    right_angle = False
    if 45 < angle < 135:
        right_angle = True
        diff_x_ba = abs(b[0] - a[0])
        diff_y_ba = abs(b[1] - a[1])
        diff_x_bc = abs(b[0] - c[0])
        diff_y_bc = abs(b[1] - c[1])
        rap_ba = diff_x_ba / max(diff_y_ba, 1)
        rap_bc = diff_x_bc / max(diff_y_bc, 1)
        if rap_ba < rap_bc:
            a[0] = int((a[0] * weight + b[0]) / (2 + weight - 1))
            b[0] = a[0]
            c[1] = int((c[1] + b[1]) / 2)
            b[1] = c[1]
        else:
            c[0] = int((c[0] + b[0]) / 2)
            b[0] = c[0]
            a[1] = int((a[1] * weight + b[1]) / (2 + weight - 1))
            b[1] = a[1]
    else:
        diff_x_ba = abs(b[0] - a[0])
        diff_y_ba = abs(b[1] - a[1])
        diff_x_bc = abs(b[0] - c[0])
        diff_y_bc = abs(b[1] - c[1])
        if (diff_x_ba + diff_x_bc) > (diff_y_ba + diff_y_bc):
            a[1] = int((a[1] * weight + b[1] + c[1]) / (3 + weight - 1))
            b[1] = a[1]
            c[1] = a[1]
        else:
            a[0] = int((a[0] * weight + b[0] + c[0]) / (3 + weight - 1))
            b[0] = a[0]
            c[0] = a[0]
    return a, b, c, right_angle


def straighten_contours(contours: list, image: np.ndarray, color=(255, 255, 255)):
    image = cv.cvtColor(image, cv.COLOR_GRAY2BGR)
    for cnt in contours:
        idx = 0
        ext_left = cnt[cnt[:, :, 0].argmin()][0]
        ext_right = cnt[cnt[:, :, 0].argmax()][0]
        ext_top = cnt[cnt[:, :, 1].argmin()][0]
        ext_bot = cnt[cnt[:, :, 1].argmax()][0]
        while idx != int(cnt.size / 2):
            try:
                angle = get_angle(cnt[idx:idx + 3])
            except Exception:
                idx += 1
                continue
            (a, b, c, right_angle) = move_points(cnt, cnt[idx:idx + 3], angle, [ext_left, ext_right, ext_bot, ext_top])
            cnt[idx][0] = a
            cnt[idx + 1][0] = b
            cnt[idx + 2][0] = c
            idx += 1
            if not right_angle:
                idx -= 1
                cnt = np.delete(cnt, (idx + 1), 0)
            if idx == 1:
                cnt = np.append(cnt, cnt[:2], axis=0)
                cnt = np.delete(cnt, [0, 1], 0)
        cv.drawContours(image, [cnt], -1, color=color)
    return image
I managed to do some workarounds. The straighten_contours function is applied to the approximate_contours result (the first image in the question). It is not as good as I would have wanted it to be, but it works.

Assignment to an array defined outside parloop inside parfor

Consider the following code.
Wx = zeros(N, N);
for ii = 1 : 1 : N
    x_ref = X(ii); y_ref = Y(ii);
    nghlst_Local = nghlst(ii, find(nghlst(ii, :))); Nl = length(nghlst_Local);
    x_Local = X(nghlst_Local, 1); y_Local = Y(nghlst_Local, 1);
    PhiU = ones(Nl+1, Nl+1); PhiU(end, end) = 0;
    Phi = ones(Nl+1, Nl+1); Phi(end, end) = 0;
    Bx = zeros(Nl+1, 1);
    for jj = 1 : 1 : Nl
        for kk = 1 : 1 : Nl
            rx = x_Local(jj,1) - x_Local(kk,1);
            ry = y_Local(jj,1) - y_Local(kk,1);
            PhiU(jj, kk) = (1 - U(1,1)) / sqrt(rx^2 + ry^2 + c^2);
        end
        rx = x_ref - x_Local(jj);
        ry = y_ref - y_Local(jj);
        Bx(jj, 1) = ( (Beta * pi * U(1,1)/(2*r_0*norm(U))) * cos( (pi/2) * (-rx * U(1,1) - ry * U(2,1)) / (r_0 * norm(U)) ) ) / sqrt(rx^2 + ry^2 + c^2) - rx * (1 - Beta * sin( (pi/2) * (-rx * U(1,1) - ry * U(2,1)) / (r_0 * norm(U)) )) / (rx^2 + ry^2 + c^2)^(3/2);
    end
    invPhiU = inv(PhiU);
    CX = Bx' * invPhiU; CX = CX(1, 1:end-1); Wx(ii, nghlst_Local) = CX;
end
I want to convert the first for loop into a parfor loop. The rest of the code works fine, but the following assignment statement does not work when I change for to parfor:
Wx(ii, nghlst_Local) = CX;
I want to know what is wrong here and how to remove such errors. Thank you.

How to train this neural network?

I programmed a simple back propagation NN. Here is the code snippet:
for (int i = 0; i < 10000; i++)
{
    //i1 = Convert.ToDouble(textBox1.Text);
    //i2 = Convert.ToDouble(textBox2.Text);
    //desired = Convert.ToDouble(textBox3.Text);
    Random rnd = new Random();
    i1 = rnd.Next(0, 1);
    Random rnd1 = new Random();
    i2 = rnd1.Next(0, 1);
    if (i1 == 1 && i2 == 1)
    {
        desired = 0;
    }
    else if (i1 == 0 && i2 == 0)
    {
        desired = 0;
    }
    else
    {
        desired = 1;
    }

    // hidden layer weighted sums
    h1 = i1 * w1 + i2 * w2;
    h2 = i1 * w3 + i2 * w4;
    h3 = i1 * w5 + i2 * w6;

    // hidden layer activations
    h1v = Sigmoid(h1);
    h2v = Sigmoid(h2);
    h3v = Sigmoid(h3);

    // final output
    output = h1v * w7 + h2v * w8 + h3v * w9;
    outputS = Sigmoid(output);

    // BACKPROPAGATION
    // margin of error
    Error = desired - outputS; // desired - the target value, outputS - the guessed value

    // delta output sum
    deltaoutputsum = Derivative(output) * Error; // pre-sigmoid output and the error

    // old weights w7, w8, w9
    w7b = w7; // 0.3
    w8b = w8; // 0.5
    w9b = w9; // 0.9
    w7 = w7 + deltaoutputsum * h1v; // weight w7
    w8 = w8 + deltaoutputsum * h2v; // weight w8
    w9 = w9 + deltaoutputsum * h3v; // weight w9

    // delta hidden sum
    h1 = deltaoutputsum * w7b * Derivative(h1);
    h2 = deltaoutputsum * w8b * Derivative(h2);
    h3 = deltaoutputsum * w9b * Derivative(h3);

    // weights w1-w6
    w1 = w1 - h1 * i1;
    w2 = w2 - h1 * i2;
    w3 = w3 - h2 * i1;
    w4 = w4 - h2 * i2;
    w5 = w5 - h3 * i1;
    w6 = w6 - h3 * i2;

    label1.Text = outputS.ToString();
    label2.Text = w1.ToString();
    label3.Text = w2.ToString();
    label4.Text = w3.ToString();
    label5.Text = w4.ToString();
    label6.Text = w5.ToString();
    label7.Text = w6.ToString();
    label8.Text = w7.ToString();
    label9.Text = w8.ToString();
    label10.Text = w9.ToString();
}
It is very simple to solve XOR problems, but I don't know how to predict the output. Here I must provide the answer to set the weights, but how do I predict?
It trains 10,000 times on random training data.
Now that it is trained, how do I predict the answer?
Please help.
Sorry for my English, but I don't know it very well.
h1-h3 are the weighted sums of the nodes
h1v are the values of the nodes
w1-w10 are the weights
I believe your problem lies in how you are training.
Do the following and I believe your program will be correct.
Train on each of the data sets one after another instead of at random. Random works for continuous floating-point values, but when you are working with XOR you might run into issues where training too much on one or two sets of values (because of the nature of random) moves the weights away from values that work for the other XOR inputs. So train on [1,1], then immediately [1,0], then [0,1], and then [0,0], and repeat over and over.
Make sure the derivative function is correct; the derivative of a sigmoid should be sigmoid(x) - sigmoid(x)^2.
Name your hidden-sum values something different from h1, h2, etc. if you use those for the hidden-node input values.
If you do those things, it appears you should have something exactly mathematically equivalent to what "how to build a neural-network" has.
I would also recommend initializing values that aren't persistent inside your loop instead of outside. I may be wrong, but I don't think any value except your w1, w2, w3, etc. values needs to be persistent through every training iteration. Not doing this causes hard-to-catch bugs and makes the code harder to read, since you can't guarantee variables aren't being modified elsewhere.
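To make those suggestions concrete, here is a minimal sketch of such a training loop in C#, to be placed inside the training method. It assumes the weight fields w1-w9 already exist and are initialized to small random values, uses the standard logistic sigmoid 1/(1 + e^-x) so that its derivative is sigmoid(x) - sigmoid(x)^2, cycles through the four XOR patterns in a fixed order, and keeps the hidden deltas in separate variables (d1, d2, d3) instead of reusing h1, h2, h3; the learning rate lr is illustrative and not taken from the original code.

// Hedged sketch: fixed-order XOR training with the logistic sigmoid
// and separate hidden-delta variables. w1-w9 are assumed to be fields.
double[][] inputs  = { new[] { 1.0, 1.0 }, new[] { 1.0, 0.0 }, new[] { 0.0, 1.0 }, new[] { 0.0, 0.0 } };
double[]   targets = { 0.0, 1.0, 1.0, 0.0 };
double lr = 0.5; // illustrative learning rate

double Sigmoid(double x) => 1.0 / (1.0 + Math.Exp(-x));
double Derivative(double x) { double s = Sigmoid(x); return s - s * s; } // sigmoid(x) - sigmoid(x)^2

for (int epoch = 0; epoch < 10000; epoch++)
{
    for (int p = 0; p < 4; p++) // [1,1], [1,0], [0,1], [0,0], repeated over and over
    {
        double i1 = inputs[p][0], i2 = inputs[p][1], desired = targets[p];

        // forward pass
        double h1 = i1 * w1 + i2 * w2;
        double h2 = i1 * w3 + i2 * w4;
        double h3 = i1 * w5 + i2 * w6;
        double h1v = Sigmoid(h1), h2v = Sigmoid(h2), h3v = Sigmoid(h3);
        double output = h1v * w7 + h2v * w8 + h3v * w9;
        double outputS = Sigmoid(output);

        // backpropagation
        double error = desired - outputS;
        double deltaOut = Derivative(output) * error;

        // hidden deltas, kept separate from the hidden sums and computed
        // with the output weights as they were before this update
        double d1 = deltaOut * w7 * Derivative(h1);
        double d2 = deltaOut * w8 * Derivative(h2);
        double d3 = deltaOut * w9 * Derivative(h3);

        // update hidden-to-output weights
        w7 += lr * deltaOut * h1v;
        w8 += lr * deltaOut * h2v;
        w9 += lr * deltaOut * h3v;

        // update input-to-hidden weights (same sign convention as above)
        w1 += lr * d1 * i1;  w2 += lr * d1 * i2;
        w3 += lr * d2 * i1;  w4 += lr * d2 * i2;
        w5 += lr * d3 * i1;  w6 += lr * d3 * i2;
    }
}

After training this way, prediction is just the forward-pass half of the loop run with the learned weights, exactly like the post-training snippet shown in the earlier XOR question.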

How can I fix the link between the multiplier and eqn(x)?

I am currently stuck on a problem in MATLAB. I have an equation that is passed into another function which works by the bisection method. But I have a multiplier that I am trying to implement which somehow leads to the function crashing.
Before I introduced the multiplier it all worked. I tried breaking it down by entering the multiplier value manually, and it didn't work.
P{1} = 0.6;
P{2} = 0.2;
P{3} = 0.2;
a_1 = 4/3;
a_2 = -7/3;
b_1 = -1/3;
b_2 = 4/3;
persistent multiplier
multiplier = exp(a_1 * 44 + a_2 * 14 + 0);
eqn = @(x) ((a_1 * x + b_1)^a_1) * ((a_2 * x + b_2)^a_2) * x ...
    - (P{1}^a_1) * (P{2}^a_2) * P{3} * multiplier;
Q{3} = Bisectionmethod(a_1, a_2, b_1, b_2, eqn);
Here is the calculating part of the bisection method.
x_lower = max(0, -b_1 / a_1);
x_upper = -b_2 / a_2;
x_mid = (x_lower + x_upper)/2;
Conditional statement encompassing the method of bisection
while abs(eqn(x_mid)) > 10^(-10)
    if (eqn(x_mid) * eqn(x_upper)) < 0
        x_lower = x_mid;
    else
        x_upper = x_mid;
    end
    x_mid = (x_lower + x_upper)/2;
end
Based on the information you provided, this is what I came up with:
function Q = Stackoverflow
    persistent multiplier
    P{1} = 0.6;
    P{2} = 0.2;
    P{3} = 0.2;
    a1 = 4/3;
    a2 = -7/3;
    b1 = -1/3;
    b2 = 4/3;
    multiplier = exp(a1 * 44 + a2 * 14 + 0);
    eqn = @(x) ((a1 .* x + b1).^a1) .* ((a2 .* x + b2).^a2) .* x - (P{1}.^a1) .* (P{2}.^a2) .* P{3} .* multiplier;
    Q{3} = Bisectionmethod(eqn, max([0, -b1/a1]), -b2/a2, 1E-10);
end

function XOut = Bisectionmethod(f, xL, xH, EPS)
    if sign(f(xL)) == sign(f(xH))
        XOut = [];
        error('Cannot bisect interval because can''t ensure the function crosses 0.')
    end

    x = [xL, xH];
    while abs(diff(x)) > EPS
        % replace the endpoint whose sign matches the midpoint's sign
        x(sign(f(mean(x))) == sign(f(x))) = mean(x);
    end
    XOut = mean(x);
end