Neural Network in MATLAB: How to specify input weights

I need help fixing this code, which implements XOR using a neural network in MATLAB, but I am unable to set the weights from the input to the first layer. The network has an input layer, a hidden layer, and an output layer of 2, 2, and 1 neurons respectively.
Can somebody help me with this?
net=network;
net.numInputs = 1;
net.inputs{1}.size = 2;
net.numLayers = 2;
net.layers{1}.size = 2;
net.layers{2}.size = 1;
net.inputConnect(1) = 1;
net.layerConnect(2, 1) = 1;
net.outputConnect(2) = 1;
net.targetConnect(2) = 1;
net.layers{1}.transferFcn = 'logsig';%>> net.layers{2}.transferFcn = 'purelin';
net.layers{2}.transferFcn = 'logsig';
net.biasConnect = [ 1 ; 1];
net.layers{1}.initFcn = 'initwb';
net.layers{2}.initFcn = 'initwb';
net.inputWeights={1 1;1 1};%ask this. error is not explanatory. probably syntax.
net.biases{1}={-1.5 -0.5};
net.biases{2}=-0.5;
net.layerWeights{2,1}={-2 1};
P=[0 1 0 1;0 0 1 1];
T=[0 1 1 0];
net.initFcn = 'initlay';
net = init(net);
net.adaptFcn = 'adaptwb';
net.inputWeights{1,1}.learnFcn = 'learnp';
net.biases{1}.learnFcn = 'learnp';
net.adaptParam.passes =3;
net.performFcn = 'mse';
y = sim(net,P)

doc network tells me:
If net.inputConnect(i,j) is 1, then net.inputWeights{i,j} is a structure
defining the weight to layer i from input j.
So net.inputWeights{1,1} is a structure of weight properties (learnFcn, delays, and so on), not the numeric weights themselves, which is why assigning a plain cell array to net.inputWeights fails with an unhelpful error. The numeric values are stored separately, in net.IW for input weights, net.LW for layer weights, and net.b for biases, indexed by destination layer first and source input (or layer) second.
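A minimal sketch using the values intended in the question (assuming the sizes and connections have been set as above, so the dimensions match):
net.IW{1,1} = [1 1; 1 1]; % 2x2: weights to layer 1 from input 1
net.LW{2,1} = [-2 1]; % 1x2: weights to layer 2 from layer 1
net.b{1} = [-1.5; -0.5]; % biases of layer 1, as a column vector
net.b{2} = -0.5; % bias of layer 2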

Related

Is there a way to derive an implicit ODE function from a big symbolic expression?

I'm Davide and I have a problem with deriving a function that should later be given to ode15i in MATLAB.
Basically, I've derived a big symbolic expression that describes the motion of a spacecraft with a flexible appendage (like a solar panel). My goal is to obtain a function handle that can be integrated using the built-in implicit solver in MATLAB (ode15i).
The problem I've encountered is the slowness of the symbolic computations, especially in the function daeFunction (I ran it and lost any hope of a response after 3/4 hours had passed).
The system of equations, derived using Lagrange's method, is an implicit ODE.
The complex nature of the system arises from the flexibility modelling of the solar panel.
I am open to any suggestions that would help me in:
running the code properly.
running it as efficiently as possible.
Thanks in advance.
Hereafter I copy the code. Note: MATLAB R2021a was used.
close all
clear
clc
syms t
syms r(t) [3 1]
syms angle(t) [3 1]
syms delta(t)
syms beta(t) [3 1]
mu = 3.986e14;
mc = 1600;
mi = 10;
k = 10;
kt = 10;
Ii = [1 0 0 % for the first link it is different, thus I should write a function that collects everything into an array or a vector
0 5 0
0 0 5];
% Dimension of satellite
a = 1;
b = 1.3;
c = 1;
Ic = 1/12*mc* [b^2+c^2 0 0
0 c^2+a^2 0
0 0 a^2+b^2];
ra_c = [0 1 0]';
a = diff(r,t,t);
ddelta = diff(delta,t);
dbeta = diff(beta,t);
dddelta = diff(delta,t,t);
ddbeta = diff(beta,t,t);
R= [cos(angle1).*cos(angle3)-cos(angle2).*sin(angle1).*sin(angle3) sin(angle1).*cos(angle3)+cos(angle2).*cos(angle1).*sin(angle3) sin(angle2).*sin(angle3)
-cos(angle1).*sin(angle3)-cos(angle2).*sin(angle1).*cos(angle3) -sin(angle1).*sin(angle3)+cos(angle2).*cos(angle1).*cos(angle3) sin(angle2).*cos(angle3)
sin(angle2).*sin(angle3) -sin(angle2).*cos(angle1) cos(angle2)];
d_angle1 = diff(angle1,t);
d_angle2 = diff(angle2,t);
d_angle3 = diff(angle3,t);
dd_angle1 = diff(angle1,t,t);
dd_angle2 = diff(angle2,t,t);
dd_angle3 = diff(angle3,t,t);
d_angle = [d_angle1;d_angle2;d_angle3];
dd_angle = [dd_angle1;dd_angle2;dd_angle3];
omega = [d_angle2.*cos(angle1)+d_angle3.*sin(angle2).*sin(angle1);d_angle2.*sin(angle1)-d_angle3.*sin(angle2).*cos(angle1);d_angle1+d_angle3.*cos(angle2)]; % this should describe correctly omega_oc
d_omega = diff(omega,t);
v1 = diff(r1,t);
v2 = diff(r2,t);
v3 = diff(r3,t);
v = [v1; v2; v3];
[J,r_cgi,R_ci]= Jacobian_Rob(4,delta,beta);
% Perform matrix multiplication
for mm = 1:4
vel(:,mm) = J(:,:,mm)*[ddelta;dbeta];
end
vel = formula(vel);
dr_Ccgi = vel(1:3,:);
omega_ci = vel(4:6,:);
assumeAlso(angle(t),'real');
assumeAlso(d_angle(t),'real');
assumeAlso(dd_angle(t),'real');
assumeAlso(r(t),'real');
assumeAlso(a(t),'real');
assumeAlso(v(t),'real');
assumeAlso(beta(t),'real');
assumeAlso(delta(t),'real');
assumeAlso(dbeta(t),'real');
assumeAlso(ddelta(t),'real');
assumeAlso(ddbeta(t),'real');
assumeAlso(dddelta(t),'real');
omega = formula(omega);
Tc = 1/2*v'*mc*v+1/2*omega'*R*Ic*R'*omega;
% kinetic energy of all appendices
for h = 1:4
Ti(h) = 1/2*v'*mi*v+mi*v'*skew(omega)*R*ra_c+mi*v'*skew(omega)*R*r_cgi(:,h)+mi*v'*R*dr_Ccgi(:,h)+1/2*mi*ra_c'*R'*skew(omega)'*skew(omega)*R*ra_c ...
+ mi*ra_c'*R'*skew(omega)'*skew(omega)*R*r_cgi(:,h)+mi*ra_c'*R'*skew(omega)'*R*dr_Ccgi(:,h)+1/2*omega'*R*R_ci(:,:,h)*Ii*(R*R_ci(:,:,h))'*omega ...
+ omega'*R*R_ci(:,:,h)*Ii*R_ci(:,:,h)'*omega_ci(:,h)+1/2*omega_ci(:,h)'*R_ci(:,:,h)*Ii*R_ci(:,:,h)'*omega_ci(:,h)+1/2*mi*r_cgi(:,h)'*R'*skew(omega)'*skew(omega)*R*r_cgi(:,h)+mi*r_cgi(:,h)'*R'*skew(omega)'*R*dr_Ccgi(:,h)...
+ 1/2*mi*dr_Ccgi(:,h)'*dr_Ccgi(:,h);
Ugi(h) = -mu*mi/norm(r,2)+mu*mi*r'/(norm(r,2)^3)*(R*ra_c+R*R_ci(:,:,h)*r_cgi(:,h));
end
Ugc = -mu*mc/norm(r,2);
Ue = 1/2*kt*(delta)^2+sum(1/2*k*(beta).^2);
U = Ugc+sum(Ugi)+Ue;
L = Tc + sum(Ti) - U;
D = 1/2 *100* (ddelta^2+sum(dbeta.^2));
%% Equation of motion derivation
eq = [diff(jacobian(L,v),t)'-jacobian(L,r)';
diff(jacobian(L,d_angle),t)'-jacobian(L,angle)';
diff(jacobian(L,ddelta),t)'-jacobian(L,delta)'+jacobian(D,ddelta)';
diff(jacobian(L,dbeta),t)'-jacobian(L,beta)'+jacobian(D,dbeta)'];
%% Reduction to first order sys
[sys,newVars,R1]=reduceDifferentialOrder(eq,[r(t); angle(t); delta(t); beta(t)]);
DAEs = sys;
DAEvars = newVars;
%% ode15i implicit solver
pDAEs = symvar(DAEs);
pDAEvars = symvar(DAEvars);
extraParams = setdiff(pDAEs,pDAEvars);
f = daeFunction(DAEs,DAEvars,'File','ProvaSum');
y0est = [6778e3 0 0 0.01 0.1 0.3 0 0.12 0 0 0 7400 0 0 0 0 0 0 0 0]';
yp0est = zeros(20,1);
opt = odeset('RelTol', 10.0^(-7),'AbsTol',10.0^(-7),'Stats', 'on');
[y0,yp0] = decic(f,0,y0est,[],yp0est,[],opt);
% Integration
[tSol,ySol] = ode15i(f,[0 0.5],y0,yp0,opt);
%% Functions
function [J,p_cgi,R_ci]=Jacobian_Rob(N,delta,beta)
% Function to compute the Jacobian; see Robotics by Siciliano
% N: total number of links
% delta [1x1], beta [N-1 x 1]: variables that describe the position of the
% solar panel elements
beta = formula(beta);
L_link = [1 1 1 1]'; % Length of each link elements in [m], later to be derived from file or as function input
for I = 1 : N
A1 = Homog_Matrix(I,delta,beta);
A1 = formula(A1);
R_ci(:,:,I) = A1(1:3,1:3);
if I ~= 1
p_cgi(:,I) = A1(1:3,4) + A1(1:3,1:3)*[1 0 0]'*L_link(I)/2;
else
p_cgi(:,I) = A1(1:3,4) + A1(1:3,1:3)*[0 0 1]'*L_link(I)/2;
end
for j = 1:I
A_j = formula(Homog_Matrix(j,delta,beta));
z_j = A_j(1:3,3);
Jp(:,j) = skew(z_j)*(p_cgi(:,I)-A_j(1:3,4));
Jo(:,j) = z_j;
end
if N-I > 0
Jp(:,I+1:N) = zeros(3,N-I);
Jo(:,I+1:N) = zeros(3,N-I);
end
J(:,:,I)= [Jp;Jo];
end
J = formula(J);
p_cgi = formula(p_cgi);
R_ci = formula(R_ci);
end
function [A_CJ]=Homog_Matrix(J,delta,beta)
% This function is made specifically for the solar panel
% define basic rotation matrices
Rx = @(angle) [1 0 0
0 cos(angle) -sin(angle)
0 sin(angle) cos(angle)];
Ry = @(angle) [ cos(angle) 0 sin(angle)
0 1 0
-sin(angle) 0 cos(angle)];
Rz = @(angle) [cos(angle) -sin(angle) 0
sin(angle) cos(angle) 0
0 0 1];
if isa(beta,"sym")
beta = formula(beta);
end
L_link = [1 1 1 1]'; % Length of each link elements in [m], later to be derived from file or as function input
% Rotation matrix: how C sees B
R_CB = Rz(-pi/2)*Ry(-pi/2); % Notation: R_CB is the rotation matrix that describes how frame B is seen by frame C
% (it could equally be written R_B2C,
% because it brings a vector written in B into
% frame C --> p_C = R_CB p_B;
% same convention used in Siciliano: how C sees frame B)
A_AB = [R_CB zeros(3,1)
zeros(1,3) 1];
A_B1 = [Rz(delta) zeros(3,1)
zeros(1,3) 1];
A_12 = [Ry(-pi/2)*Rx(-pi/2)*Rz(beta(1)) L_link(1)*[0 0 1]'
zeros(1,3) 1];
if J == 1
A_CJ = A_AB*A_B1;
elseif J == 0
A_CJ = A_AB;
else
A_CJ = A_AB*A_B1*A_12;
end
for j = 3:J
A_Jm1J = [Rz(beta(j-1)) L_link(j-1)*[1 0 0]'
zeros(1,3) 1];
A_CJ = A_CJ*A_Jm1J;
end
end
function [S]=skew(r)
S = [ 0 -r(3) r(2); r(3) 0 -r(1); -r(2) r(1) 0];
end
I found your question beautiful. My suggestion is to handle the problem numerically. Symbolic manipulation in MATLAB is good, but it is much slower than numerical calculation. You can easily recast the ODE as a system of first-order ODEs and solve it using numerical integration functions like ode45. Your code is very lengthy and I couldn't manage to follow its details.
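For example, here is a minimal sketch of the numeric route, reusing DAEs, DAEvars and y0est from your code, and assuming the reduced system is linear in the first derivatives (which a Lagrangian system reduced to first order normally is): extract the mass-matrix form once, generate plain numeric functions, and integrate with ode15s instead of ode15i.
[M, F] = massMatrixForm(DAEs, DAEvars); % symbolic M(t,y)*y' = F(t,y)
Mfun = odeFunction(M, DAEvars); % numeric mass matrix function
Ffun = odeFunction(F, DAEvars); % numeric right-hand side
opt = odeset('Mass', Mfun, 'RelTol', 1e-7, 'AbsTol', 1e-7);
[tSol, ySol] = ode15s(Ffun, [0 0.5], y0est, opt);
The symbolic work is then done only once, up front; the integration itself runs on ordinary numeric functions.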
All the Best.
Yasien

Using sim in MATLAB with armax

I wrote the code below, but I get an error that I do not understand. Please help me.
The error is:
Error using idmodel/sim (line 114)
The simulation input data must be specified using an iddata object or a double
matrix.
Error in Untitled (line 17)
y = sim(sys,u);
clc;
clear all ;
close all;
A = [1 -0.5 0.06];
B = [5 -2];
C = [1 -0.2 0.001];
Ts = 1; %sample time
sys = idpoly(A,B,C,'Ts',1);
Range = [-1 1];
Band = [0 1];
u = stairs(idinput(100,'prbs',Band,Range)); %form a prbs input
opt1 = simOptions('AddNoise',true);
y = sim(sys ,u,opt1);
iodata = iddata(y,u,Ts);
na = 3; nb = 2; nc = 3; nk = 1;
me = armax(iodata,[na,nb,nc,nk]);
compare(iodata,me)
Thank you very much.
Your input variable u should be a column vector, but with your code it is a graphics object; use class(u) to check this. If you replace this line
u = stairs(idinput(100,'prbs',Band,Range)); %form a prbs input
with something like this:
u = [zeros(25, 1); ones(25, 1)]; % step input
Then the code no longer crashes.
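Alternatively, if you want to keep the PRBS input from the original code, note that idinput already returns the signal itself as a column vector; call stairs separately, purely for plotting (a sketch):
u = idinput(100,'prbs',Band,Range); % the PRBS signal itself, a column vector
stairs(u); % optional: view the signal as a staircase plot
Either way, sim and iddata then receive numeric data as intended.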

neural network code explanation

The following is an implementation of a simple Perceptron supplied in a blog.
input = [0 0; 0 1; 1 0; 1 1];
numIn = 4;
desired_out = [0;1;1;1];
bias = -1;
coeff = 0.7;
rand('state',sum(100*clock));
weights = -1*2.*rand(3,1);
iterations = 10;
for i = 1:iterations
out = zeros(4,1);
for j = 1:numIn
y = bias*weights(1,1)+...
input(j,1)*weights(2,1)+input(j,2)*weights(3,1);
out(j) = 1/(1+exp(-y));
delta = desired_out(j)-out(j);
weights(1,1) = weights(1,1)+coeff*bias*delta;
weights(2,1) = weights(2,1)+coeff*input(j,1)*delta;
weights(3,1) = weights(3,1)+coeff*input(j,2)*delta;
end
end
I have the following questions:
(1) which one is training data here?
(2) which one is test data here?
(3) which are the labels here?
The training data is [0 0; 0 1; 1 0; 1 1]; viewed another way, every row is one set of training data, as follows:
>> input
input =
0 0
0 1
1 0
1 1
and the target is
desired_out =
0
1
1
1
Now consider desired_out: these are your labels. Every row of the training data (input) has a specific output (label) in the binary set {0,1}, because this example implements an OR logic gate.
In MATLAB you can use the or function, as below, for further understanding:
>> or(0,0)
ans =
0
>> or(1,0)
ans =
1
>> or(0,1)
ans =
1
>> or(1,1)
ans =
1
Note that your code does not include any test phase; it only learns the weights and other parameters of the perceptron. But you can add a test phase with just a little extra code:
NumDataTest = 10 ;
test=randi( [0 , 1] , [ NumDataTest , 2]) ...
+(2*rand(NumDataTest,2)-1)/20;
so the test data will look similar to this:
test =
1.0048 1.0197
0.0417 0.9864
-0.0180 1.0358
1.0052 1.0168
1.0463 0.9881
0.9787 0.0367
0.9624 -0.0239
0.0065 0.0404
1.0085 -0.0109
-0.0264 0.0429
To test this data you can reuse your own program with the code below:
for i=1:NumDataTest
y = bias*weights(1,1)+test(i,1)*weights(2,1)+test(i,2)*weights(3,1);
out(i) = 1/(1+exp(-y));
end
and finally:
table(test(:,1),test(:,2),out,'VariableNames',{'input1' 'input2' 'output'})
The output is:
input1 input2 output
_________ _________ ________
1.0048 1.0197 0.99994
0.041677 0.98637 0.97668
-0.017968 1.0358 0.97527
1.0052 1.0168 0.99994
1.0463 0.98814 0.99995
0.97875 0.036674 0.9741
0.96238 -0.023861 0.95926
0.0064527 0.040392 0.095577
1.0085 -0.010895 0.97118
-0.026367 0.042854 0.080808
Code section:
clc
clear
input = [0 0; 0 1; 1 0; 1 1];
numIn = 4;
desired_out = [0;1;1;1];
bias = -1;
coeff = 0.7;
rand('state',sum(100*clock));
weights = -1*2.*rand(3,1);
iterations = 100;
for i = 1:iterations
out = zeros(4,1);
for j = 1:numIn
y = bias*weights(1,1)+input(j,1)*weights(2,1)+input(j,2)*weights(3,1);
out(j) = 1/(1+exp(-y));
delta = desired_out(j)-out(j);
weights(1,1) = weights(1,1)+coeff*bias*delta;
weights(2,1) = weights(2,1)+coeff*input(j,1)*delta;
weights(3,1) = weights(3,1)+coeff*input(j,2)*delta;
end
end
%% Test Section
NumDataTest = 10 ;
test=randi( [0 , 1] , [ NumDataTest , 2]) ...
+(2*rand(NumDataTest,2)-1)/20;
for i=1:NumDataTest
y = bias*weights(1,1)+test(i,1)*weights(2,1)+test(i,2)*weights(3,1);
out(i) = 1/(1+exp(-y));
end
table(test(:,1),test(:,2),out,'VariableNames',{'input1' 'input2' 'output'})
I hope this helps.

How to use a neural network for non-binary input and output

I tried to use a modified version of the NN backpropagation code by Phil Brierley
(www.philbrierley.com). When I try to solve the XOR problem it works perfectly, but when I try to solve a problem of the form output = x1^2 + x2^2 (output = sum of squares of the inputs), the results are not accurate. I have scaled the input and output between -1 and 1. I get different results every time I run the same program (I understand this is due to the random weight initialization), but the results are very different from one another. I tried changing the learning rate, but the results still converge to inaccurate values.
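As a side note on the run-to-run variation, a minimal sketch: fixing the random seed before the weights are initialized makes runs repeatable while debugging (rng seeds both rand and randn):
rng(0); % fix the seed so the randn weight initialization below is repeatable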
I have given the code below.
%---------------------------------------------------------
% MATLAB neural network backprop code
% by Phil Brierley
%--------------------------------------------------------
clear; clc; close all;
%user specified values
hidden_neurons = 4;
epochs = 20000;
input = [];
for i =-10:2.5:10
for j = -10:2.5:10
input = [input;i j];
end
end
output = (input(:,1).^2 + input(:,2).^2);
output1 = output;
% Maximum input and output limit and scaling factors
m1 = -10; m2 = 10;
m3 = 0; m4 = 250;
c = -1; d = 1;
%Scale input and output
for i =1:size(input,2)
I = input(:,i);
scaledI = ((d-c)*(I-m1) ./ (m2-m1)) + c;
input(:,i) = scaledI;
end
for i =1:size(output,2)
I = output(:,i);
scaledI = ((d-c)*(I-m3) ./ (m4-m3)) + c;
output(:,i) = scaledI;
end
train_inp = input;
train_out = output;
%read how many patterns and add bias
patterns = size(train_inp,1);
train_inp = [train_inp ones(patterns,1)];
%read how many inputs and initialize learning rate
inputs = size(train_inp,2);
hlr = 0.1;
%set initial random weights
weight_input_hidden = (randn(inputs,hidden_neurons) - 0.5)/10;
weight_hidden_output = (randn(1,hidden_neurons) - 0.5)/10;
%Training
err = zeros(1,epochs);
for iter = 1:epochs
alr = hlr;
blr = alr / 10;
%loop through the patterns, selecting randomly
for j = 1:patterns
%select a random pattern
patnum = round((rand * patterns) + 0.5);
if patnum > patterns
patnum = patterns;
elseif patnum < 1
patnum = 1;
end
%set the current pattern
this_pat = train_inp(patnum,:);
act = train_out(patnum,1);
%calculate the current error for this pattern
hval = (tanh(this_pat*weight_input_hidden))';
pred = hval'*weight_hidden_output';
error = pred - act;
% adjust weight hidden - output
delta_HO = error.*blr .*hval;
weight_hidden_output = weight_hidden_output - delta_HO';
% adjust the weights input - hidden
delta_IH= alr.*error.*weight_hidden_output'.*(1-(hval.^2))*this_pat;
weight_input_hidden = weight_input_hidden - delta_IH';
end
% -- another epoch finished
%compute overall network error at end of each epoch
pred = weight_hidden_output*tanh(train_inp*weight_input_hidden)';
error = pred' - train_out;
err(iter) = ((sum(error.^2))^0.5);
%stop if error is small
if err(iter) < 0.001
fprintf('converged at epoch: %d\n',iter);
break
end
end
%Output after training
pred = weight_hidden_output*tanh(train_inp*weight_input_hidden)';
Y = m3 + (m4-m3)*(pred-c)./(d-c);
% Testing for a new set of input
input_test = [6 -3.1; 0.5 1; -2 3; 3 -2; -4 5; 0.5 4; 6 1.5];
output_test = (input_test(:,1).^2 + input_test(:,2).^2);
input1 = input_test;
%Scale input
for i =1:size(input1,2)
I = input1(:,i);
scaledI = ((d-c)*(I-m1) ./ (m2-m1)) + c;
input1(:,i) = scaledI;
end
%Predict output
train_inp1 = input1;
patterns = size(train_inp1,1);
bias = ones(patterns,1);
train_inp1 = [train_inp1 bias];
pred1 = weight_hidden_output*tanh(train_inp1*weight_input_hidden)';
%Rescale
Y1 = m3 + (m4-m3)*(pred1-c)./(d-c);
analy_numer = [output_test Y1']
plot(err)
This is the sample output I get for the problem state after 20000 epochs:
analy_numer =
45.6100 46.3174
1.2500 -2.9457
13.0000 11.9958
13.0000 9.7097
41.0000 44.9447
16.2500 17.1100
38.2500 43.9815
If I run it once more I get different results. As can be observed, for small input values I get totally wrong answers (a negative answer is not possible); for other values the accuracy is still poor.
Can someone tell me what I am doing wrong and how to correct it?
Thanks,
Raman

How to repeat a function with a for loop?

I am trying to generate a PN sequence, and it works. However, when I call the function with different inputs in a for loop, it gives me the same result each time, as if it were not affected by the loop. Why?
This is my code:
%e.g. no. of flip-flops: 4 ==>
function[op_seq]=pnseq(a,b,c)
a = 7;
%generator polynomial x4+x+1 ==>
b = [1 0 0 1 1 0 1 ]
%initial state [1 0 0 0] ==>
c = [1 0 0 0 1 0 1 ]
%refere figure to set a relation between tap function and initial state
%
for j= 1:50,
x = a;
tap_ff =b;
int_stat= c;
for i = 1:1: length(int_stat)
old_stat(i) = int_stat(i);
gen_pol(i) = tap_ff(i);
end
len = (2 ^x)-1;
gen_pol(i+1)= 1;
gen_l = length(gen_pol);
old_l = length(old_stat);
for i1 = 1: 1:len
% feed back input genration
t = 1;
for i2 = 1:old_l
if gen_pol(i2)==1
stat_str(t) = old_stat(gen_l - i2);
i2 = i2+1;
t = t+1;
else
i2 = i2+1;
end
end
stat_l = length(stat_str);
feed_ip = stat_str(1);
for i3 = 1: stat_l-1
feed_ip = bitxor(feed_ip,stat_str(i3 + 1));
feed_ipmag(i1) = feed_ip;
i3 = i3+1;
end
% shifting elements
new_stat = feed_ip;
for i4 = 1:1:old_l
new_stat(i4+1) = old_stat(i4);
old_stat(i4)= new_stat(i4);
end
op_seq(i1) = new_stat(old_l +1);
end
%op_seq;
end
I assume you're doing something like:
for n = 1:10
...
% set a,b,c for this n
...
op_seq =pnseq(a,b,c)
...
end
and that you see the same op_seq output for each case. This is because you have a, b, c as inputs, but you overwrite them at the start of your function. If I remove or comment out the following lines in your function:
a = 7;
b = [1 0 0 1 1 0 1 ]
c = [1 0 0 0 1 0 1 ]
Then I get different results when calling the function with different a, b, c. There is nothing random in your function, so the same inputs give the same outputs.
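A minimal sketch of the corrected function top, with only those three assignments commented out (the rest of the body stays exactly as in the question):
function [op_seq] = pnseq(a,b,c)
% a = 7; % use the input argument instead
% b = [1 0 0 1 1 0 1]; % use the input argument instead
% c = [1 0 0 0 1 0 1]; % use the input argument instead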