Li-ion Cell Model Parallelization Error in Simscape - Simulink

I have a Li-ion cell model in Simscape. It takes
Instantaneous Voltage Value [Volt]
Full Capacity [Ampere-second]
Initial SOC [%]
as inputs and has the following outputs:
Cout : Remaining Capacity [Ampere-second]
SOC : Remaining SOC [%]
"+" and "-" Simscape Electrical terminals
It works when I connect the cells in series, but when I connect them in parallel it gives me the error depicted below. What might be the reason? How can I solve it?
Thanks.
component v_ysk
    inputs
        v_ins = { 0, '1' };
        c_full = { 0, 'A*s' };
        c_initial = { 0, 'A*s' };
    end
    outputs
        c_out = { 0, 'A*s' };
        soc = { 0, '1' };
    end
    nodes
        p = foundation.electrical.electrical; % +:right
        n = foundation.electrical.electrical; % -:right
    end
    parameters (Size = variable)
    end
    variables(Access=private)
        i = { 0, 'A' };
        v = { 0, 'V' };
    end
    branches
        i : p.i -> n.i;
    end
    equations
        c_out == c_initial + integ(i);
        v == p.v - n.v;
        soc == (c_out/c_full)*100;
        if (c_out > 0 && c_out <= c_full)
            v == {v_ins,'V'};
        else
            v == {v_ins,'V'};
        end
    end
end

Here integ is the source of the problem. Instead of integ, I used the der operator, which expresses the same relationship in derivative form.
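For reference, here is a minimal sketch of what a der-based formulation might look like. This is not the asker's exact code: the internal variable c_charge is an assumed helper so that der is applied to a variable rather than to an output, and the identical if/else branches of the original are collapsed into a single equation.
component v_ysk_der
    inputs
        v_ins = { 0, '1' };
        c_full = { 0, 'A*s' };
        c_initial = { 0, 'A*s' };
    end
    outputs
        c_out = { 0, 'A*s' };
        soc = { 0, '1' };
    end
    nodes
        p = foundation.electrical.electrical; % +:right
        n = foundation.electrical.electrical; % -:right
    end
    variables(Access=private)
        i = { 0, 'A' };
        v = { 0, 'V' };
        c_charge = { 0, 'A*s' };    % assumed helper: charge accumulated since t = 0
    end
    branches
        i : p.i -> n.i;
    end
    equations
        der(c_charge) == i;                  % replaces c_out == c_initial + integ(i)
        c_out == c_initial + c_charge;
        v == p.v - n.v;
        soc == (c_out/c_full)*100;
        v == {v_ins,'V'};                    % both branches of the original if/else were identical
    end
end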


Time Varying Transfer Function

I have a discrete transfer function whose numerator and denominator are coming from an input port. At every sample time these numerator and denominator vectors change.
E.g.
# t == 0
den == [1 0 -1]
# t == 1
den == [1 0 0 -1]
What do I have to do in order for the transfer function to work with this?
I have tried:
Having a variable-length signal.
Simulink did not like this and refused to run, complaining that the discrete transfer function block could not handle variable-sized input.
Padding the vector with many leading zeros.
This led to periodic spikes in the signal. Additionally, Simulink does not let you do this if you enter the values by hand rather than as an input, so I don't think this is the way to do it either.
Any help is much appreciated.
This is solved trivially with a single Simulink S-function file, sfuntvf.m, like this.
In it, a fixed-length state vector holding the history of the input u and the output y is maintained, and the time-varying part is simply applied over that history (depicted in cyan and yellow in the original plot); here it is a FIR filter with a random order (depicted in magenta).
Other built-in implementations would have to handle the successive initial conditions between calls and require verification of the initial-condition structure. The implementation shown is the easiest to deploy and understand.
The A and B vectors can be modified accordingly; a sketch follows after the code.
function [sys,x0,str,ts] = sfuntvf(t,x,u,flag)
% Time-varying FIR filter implemented as a level-1 MATLAB S-function.
N = 20;                              % fixed length of the stored input/output history
n = randi(N-2) + 1;                  % random filter order at this call
A = [1; zeros(N-1,1)];               % denominator (monic, no feedback here)
B = [1/n*ones(n,1); zeros(N-n,1)];   % numerator: moving average of order n
switch flag
    case 0, [sys,x0,str,ts] = mdlInitializeSizes(N);
    case 2, sys = mdlUpdate(t,x,u,N,A,B);
    case 3, sys = mdlOutputs(t,x,u,N,n);
    case 9, sys = [];
    otherwise, DAStudio.error('Simulink:blocks:unhandledFlag',num2str(flag));
end

function [sys,x0,str,ts] = mdlInitializeSizes(N)
sizes = simsizes;
sizes.NumContStates  = 0;
sizes.NumDiscStates  = 2*N;          % N input samples + N output samples
sizes.NumOutputs     = 2;            % current filter output and current order
sizes.NumInputs      = 1;
sizes.DirFeedthrough = 0;
sizes.NumSampleTimes = 1;
sys = simsizes(sizes);
x0  = zeros(2*N,1);
str = [];
ts  = [1 0];                         % discrete sample time of 1 second

function sys = mdlUpdate(t,x,u,N,A,B)
un = x(1:N,1);                       % stored input history
yn = x(N+1:2*N,1);                   % stored output history
y  = -A(2:end)'*yn(2:end) + B'*un;   % difference equation with the current A and B
sys = [u; un(1:end-1); y; yn(1:end-1)];

function sys = mdlOutputs(t,x,u,N,n)
sys = [x(N+1); n];                   % latest output sample and the current order n
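As a hedged illustration of how the A and B vectors could be tied to the question's example instead of being randomized, the block near the top of sfuntvf that builds A and B might be replaced by something like the following (the den values come from the question; keeping the numerator as a pass-through is an assumption):
% Sketch only: choose the denominator as a function of time, padded to the
% fixed history length N (assumes numel(den) <= N).
if t < 1
    den = [1 0 -1];
else
    den = [1 0 0 -1];
end
A = [den(:); zeros(N - numel(den), 1)];   % monic denominator, zero-padded
B = [1; zeros(N-1, 1)];                   % numerator: pass the input straight through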

Back Propagation Neural Network: Hidden Layer Output Is Always 1

Everyone, I have created a neural network with 1600 inputs, one hidden layer with a varying number of neurons, and 24 output neurons.
My code shows that I can decrease the error each epoch, but the output of the hidden layer is always 1. Because of this, the adjusted weights always produce the same result for my testing data.
I have tried different numbers of hidden neurons and learning rates, and I also randomly initialize my weights. I use the sigmoid function as my activation function, since each of my outputs is either 1 or 0.
What is the main reason that the output of the hidden layer is always 1, and how should I solve it?
My purpose for this neural network is to recognize 24 hand shapes for the alphabet; I am trying intensity data in the first phase of the project.
I have tried 30 hidden neurons, 100, and even 1000, but the output of the hidden layer is still 1. Because of this, all of the outcomes on the testing data are always similar.
I have added the code for my network below.
Thanks
g = inline('logsig(x)');
[row, col] = size(input);
numofInputNeurons = col;
weight_input_hidden = rand(numofInputNeurons, numofFirstHiddenNeurons);
weight_hidden_output = rand(numofFirstHiddenNeurons, numofOutputNeurons);
epochs = 0;
errorMatrix = [];
while(true)
    if(totalEpochs > 0 && epochs >= totalEpochs)
        break;
    end
    totalError = 0;
    epochs = epochs + 1;
    for i = 1:row
        targetRow = zeros(1, numofOutputNeurons);
        targetRow(1, target(i)) = 1;
        hidden_output = g(input(1, 1:end)*weight_input_hidden);
        final_output = g(hidden_output*weight_hidden_output);
        error = abs(targetRow - final_output);
        error = sum(error);
        totalError = totalError + error;
        if(error ~= 0)
            delta_final_output = learningRate * (targetRow - final_output) .* final_output .* (1 - final_output);
            delta_hidden_output = learningRate * (hidden_output) .* (1-hidden_output) .* (delta_final_output * weight_hidden_output');
            for m = 1:numofFirstHiddenNeurons
                for n = 1:numofOutputNeurons
                    current_changes = delta_final_output(1, n) * hidden_output(1, m);
                    weight_hidden_output(m, n) = weight_hidden_output(m, n) + current_changes;
                end
            end
            for m = 1:numofInputNeurons
                for n = 1:numofFirstHiddenNeurons
                    current_changes = delta_hidden_output(1, n) * input(1, m);
                    weight_input_hidden(m, n) = weight_input_hidden(m, n) + current_changes;
                end
            end
        end
    end
    totalError = totalError / (row);
    errorMatrix(end + 1) = totalError;
    if(errorThreshold > 0 && totalEpochs == 0 && totalError < errorThreshold)
        break;
    end
end
I see a few obvious errors that need fixing in your code:
1) You have no negative weights when initialising. This is likely to get the network stuck. The weight initialisation should be something like:
weight_input_hidden = 0.2 * rand(numofInputNeurons, numofFirstHiddenNeurons) - 0.1;
2) You have not implemented bias. That will severely limit the ability of the network to learn. You should go back to your notes and figure that out. It is usually implemented as an extra column of 1s inserted into the input and activation vectors/matrices before determining the activations of each layer, and there should be a matching additional row of weights (a sketch follows after this list).
3) Your delta for output layer is wrong. This line
delta_final_output = learningRate * (targetRow - final_output) .* final_output .* (1 - final_output);
. . . is not the delta for the output layer activations. It has some extra unwanted factors.
The correct delta for logloss objective function and sigmoid activation in output layer would be:
delta_final_output = (final_output - targetRow);
There are other possibilities, depending on your objective function, which is not shown. Your original code is close to correct for mean squared error, and would probably still work if you changed the sign and removed the factor of learningRate.
4) Your delta for hidden layer is wrong. This line:
delta_hidden_output = learningRate * (hidden_output) .* (1-hidden_output) .* (delta_final_output * weight_hidden_output');
. . . is not the delta for the hidden layer activations. You have multiplied by the learningRate for some reason (combined with the other delta that means you have a factor of learningRate squared).
The correct delta would be:
delta_hidden_output = (hidden_output) .* (1-hidden_output) .* (delta_final_output * weight_hidden_output');
5) Your weight update step needs adjusting to match fixes to (3) and (4). These lines:
current_changes = delta_final_output(1, n) * hidden_output(1, m);
would need to be adjusted to get the correct sign and learning-rate multiplier:
current_changes = -learningRate * delta_final_output(1, n) * hidden_output(1, m);
That's 5 bugs from looking through the code; I may have missed some. But I think that's more than enough for now.
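To make points (2) to (5) concrete, here is a hedged, vectorized sketch of what the per-sample forward pass and weight update could look like with a bias term and the corrected deltas. A logloss objective is assumed, and enlarging each weight matrix by one row to hold the bias weights is an assumption not present in the original code.
% Initialization with negative weights allowed (point 1) and room for the bias row (point 2):
weight_input_hidden  = 0.2*rand(numofInputNeurons + 1, numofFirstHiddenNeurons) - 0.1;
weight_hidden_output = 0.2*rand(numofFirstHiddenNeurons + 1, numofOutputNeurons) - 0.1;
% Inside the training loop, for sample i:
% forward pass with a constant 1 appended as the bias input (point 2)
input_b       = [input(i, :), 1];
hidden_output = g(input_b * weight_input_hidden);
hidden_b      = [hidden_output, 1];
final_output  = g(hidden_b * weight_hidden_output);
% corrected deltas (points 3 and 4) and weight updates (point 5), vectorized
delta_final_output  = final_output - targetRow;
delta_hidden_output = hidden_b .* (1 - hidden_b) .* (delta_final_output * weight_hidden_output');
% the last element of delta_hidden_output belongs to the bias unit and is dropped below
weight_hidden_output = weight_hidden_output - learningRate * (hidden_b' * delta_final_output);
weight_input_hidden  = weight_input_hidden  - learningRate * (input_b' * delta_hidden_output(1:end-1));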

Neural Network convergence speed (Levenberg-Marquardt) (MATLAB)

I was trying to approximate a function (single input and single output) with an ANN. Using the MATLAB toolbox I could see that with 5 or more neurons in the hidden layer I can achieve a very nice result, so I am trying to do it manually.
Calculations:
As the network has only one input and one output, the partial derivative of the error (e = d - o, where 'd' is the desired output and 'o' is the actual output) with respect to a weight which connects a hidden neuron j to the output neuron will be -hj (where hj is the output of hidden neuron j);
The partial derivative of the error with respect to the output bias will be -1;
The partial derivative of the error with respect to a weight which connects the input to a hidden neuron j will be -woj*f'*i, where woj is the output weight of hidden neuron j, f' is the tanh() derivative and 'i' is the input value;
Finally, the partial derivative of the error with respect to the hidden layer bias will be the same as above (with respect to the input weight), except that here we don't have the input:
-woj*f'
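Written out compactly (a restatement of the above under the stated assumptions: tanh hidden layer, linear output, error e = d - o):
$$
\frac{\partial e}{\partial w_{hj}} = -\,w_{oj}\,(1-h_j^2)\,i, \qquad
\frac{\partial e}{\partial w_{oj}} = -\,h_j, \qquad
\frac{\partial e}{\partial b_{hj}} = -\,w_{oj}\,(1-h_j^2), \qquad
\frac{\partial e}{\partial b_{o}} = -1,
$$
where $h_j = \tanh(w_{hj}\,i + b_{hj})$, so $f' = 1-h_j^2$. These four blocks appear in the same order in the Jacobian row built in the code below, J(i,:) = [-x*wo'.*fhd -ij -wo'.*fhd -1].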
The problem is:
The MATLAB algorithm always converges faster and better. I can achieve the same curve as MATLAB does, but my algorithm requires many more epochs.
I've tried removing the pre- and post-processing functions from the MATLAB algorithm. It still converges faster.
I've also tried creating and configuring the network and extracting the weight/bias values before training, so I could copy them into my algorithm to see if it converges faster, but nothing changed (is the weight/bias initialization inside the create/configure functions or the train function?).
Does the MATLAB algorithm have some kind of optimization inside the code?
Or is the difference only in the organization of the training set and the weight/bias initialization?
In case anyone wants to look at my code, here is the main loop which performs the training:
Err2 = N;
epochs = 0;
%compare MSE of error2
while ((Err2/N > 0.0003) && (u < 10000000) && (epochs < 100))
    epochs = epochs+1;
    Err = 0;
    %input->hidden weight vector
    wh = w(1:hidden_layer_len);
    %hidden->output weight vector
    wo = w((hidden_layer_len+1):(2*hidden_layer_len));
    %hidden bias
    bi = w((2*hidden_layer_len+1):(3*hidden_layer_len));
    %output bias
    bo = w(length(w));
    %start forward propagation
    for i=1:N
        %take next input value
        x = t(i);
        %propagate to hidden layer
        neth = x*wh + bi;
        %propagate through neurons
        ij = tanh(neth)';
        %propagate to output layer
        neto = ij*wo + bo;
        %propagate to output (purelin)
        output(i) = neto;
        %calculate difference from target (error)
        error(i) = yp(i) - output(i);
        %Backpropagation:
        %tanh derivative
        fhd = 1 - tanh(neth').*tanh(neth');
        %jacobian matrix
        J(i,:) = [-x*wo'.*fhd -ij -wo'.*fhd -1];
        %SSE (sum square error)
        Err = Err + 0.5*error(i)*error(i);
    end
    %calculate next error with updated weights and compare with old error
    %start error2 from error1 + 1 to enter while loop
    Err2 = Err+1;
    %while error2 is > than old error and Mu (u) is not too large
    while ((Err2 > Err) && (u < 10000000))
        %Weight update
        w2 = w - (((J'*J + u*eye(3*hidden_layer_len+1))^-1)*J')*error';
        %New Error calculation
        %New weights to propagate
        wh = w2(1:hidden_layer_len);
        wo = w2((hidden_layer_len+1):(2*hidden_layer_len));
        %new bias to propagate
        bi = w2((2*hidden_layer_len+1):(3*hidden_layer_len));
        bo = w2(length(w2));
        %calculate error2
        Err2 = 0;
        for i=1:N
            %forward propagation again
            x = t(i);
            neth = x*wh + bi;
            ij = tanh(neth)';
            neto = ij*wo + bo;
            output(i) = neto;
            error2(i) = yp(i) - output(i);
            %Error2 (SSE)
            Err2 = Err2 + 0.5*error2(i)*error2(i);
        end
        %compare MSE from error2 with a minimum
        %if greater still running
        if (Err2/N > 0.0003)
            %compare with old error
            if (Err2 <= Err)
                %if less, update weights and decrease Mu (u)
                w = w2;
                u = u/10;
            else
                %if greater, increment Mu (u)
                u = u*10;
            end
        end
    end
end
It's not easy to know the exact implementation of the Levenberg-Marquardt algorithm in MATLAB. You may try running the algorithm one iteration at a time and checking whether it is identical to your algorithm. You can also try other implementations, such as http://www.mathworks.com/matlabcentral/fileexchange/16063-lmfsolve-m--levenberg-marquardt-fletcher-algorithm-for-nonlinear-least-squares-problems, to see if the performance can be improved. For simple learning problems, convergence speed may be a matter of learning rate. You might simply increase the learning rate to get faster convergence.
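As a hedged sketch of the "one iteration at a time" suggestion, the toolbox network can be trained for a single epoch per call so its weight updates can be compared against the manual implementation. fitnet, configure and train are Neural Network / Deep Learning Toolbox functions; treating t and yp as the same 1-by-N input and target vectors used above is an assumption.
net = fitnet(5);                     % 5 hidden neurons; trainlm is the default
net = configure(net, t, yp);
net.divideFcn = 'dividetrain';       % no validation/test split, for a fair comparison
net.trainParam.epochs = 1;           % a single Levenberg-Marquardt iteration per call
net.trainParam.showWindow = false;
for k = 1:100
    net = train(net, t, yp);
    % inspect net.IW{1,1}, net.LW{2,1}, net.b{1}, net.b{2} here and compare with w
end
% Caveat: trainlm restarts its damping parameter mu from net.trainParam.mu on every
% call to train, so this only approximates one continuous 100-epoch run.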

Problems with Plotting Matlab Function

I am a beginner in MATLAB. I would like to plot a system concentration vs. time curve over a certain time interval; the following is the code that I have written.
%Input function of 9 samples with activity and time calibrated with Well
%counter value approx : 1.856 from all 9 input values of 3 patients
function c_o = Sample_function(td,t_max,A,B)
    t = (0 : 100 : 5000); % time of the sample post injection in mins
    c = (0 : 2275.3 : 113765);
    A_max = max(c); %Max value of Concentration (Peak of the curve)
    if (t >= 0 && t <= td)
        c_o(t) = 0;
    else if (td <= t && t <= t_max)
        c_o(t) = A_max*(t-td);
    else if (t >= t_max)
        c_o(t) = (A(1)*exp(-B(1)*(t-t_max))) + (A(2)*exp(-B(2)*(t-t_max))) + ...
                 (A(3)*exp(-B(3)*(t-t_max)));
    end
    fprintf('plotting Data ...\n');
    hold on;
    figure;
    plot(c_o);
    xlabel('Activity of the sample Ba/ml ');
    ylabel('time of the sample in minutes');
    title(' Input function: Activity sample VS time ');
    pause;
end
I am getting the following error:
Operands to the || and && operators must be convertible to logical scalar values.
Error in Sample_function (line 18)
if (t >=0 && t <= td)
Kindly let me know if my logic is incorrect.
Your t is not a single value, so comparing it with 0 cannot evaluate to a single true or false.
You want to do this with logical indexing:
c_o = zeros(size(t));
c_o(t>=0 & t<=td) = 0; % this line is actually redundant and unnecessary since we initialized the vector to zeros
c_o(t>td & t<=t_max) = A_max*(t(t>td & t<=t_max)-td);
c_o(t>t_max) = (A(1)*exp(-B(1)*(t(t>t_max)-t_max))) + (A(2)*exp(-B(2)*(t(t>t_max)-t_max))) ...
    + (A(3)*exp(-B(3)*(t(t>t_max)-t_max)));
You could also make this a little prettier (and easier to read) by assigning the logical indexes to variables:
reg1 = (t>=0 & t<=td);
reg2 = (t>td & t<=t_max);
reg3 = (t>t_max);
Then, for instance, the second assignment becomes the much more readable:
c_o(reg2) = A_max*(t(reg2)-td);
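Putting the pieces together, here is a hedged sketch of the full vectorized version (td, t_max, A, B and A_max as defined in the question; placing time on the x-axis, which swaps the axis labels relative to the original code, is an assumption):
t = 0:100:5000;                        % sample times in minutes
c_o = zeros(size(t));                  % region 1 (t <= td) stays at zero
reg2 = (t > td & t <= t_max);
reg3 = (t > t_max);
c_o(reg2) = A_max*(t(reg2) - td);
c_o(reg3) = A(1)*exp(-B(1)*(t(reg3)-t_max)) + A(2)*exp(-B(2)*(t(reg3)-t_max)) ...
          + A(3)*exp(-B(3)*(t(reg3)-t_max));
figure;
plot(t, c_o);
xlabel('time of the sample in minutes');
ylabel('Activity of the sample Ba/ml');
title('Input function: Activity sample VS time');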
t is an array of numbers, so the whole array cannot be compared with a scalar value such as 0 using &&, which requires scalar logical operands.
Try it in a for loop:
for i = 1:length(t)
    if (t(i) >= 0 && t(i) <= td)
        c_o(i) = 0;
    elseif (td <= t(i) && t(i) <= t_max)
        c_o(i) = A_max*(t(i)-td);
    elseif (t(i) >= t_max)
        c_o(i) = (A(1)*exp(-B(1)*(t(i)-t_max))) + (A(2)*exp(-B(2)*(t(i)-t_max))) ...
               + (A(3)*exp(-B(3)*(t(i)-t_max)));
    end
end

How to get back to the first index of matrices

I have to implement a single-layer perceptron using MATLAB.
The problem I am facing is that when I run my program it gives me an output for every input (it shows results 4 times), but I want to go back to the first index of the matrix after it has reached the fourth, and I can't figure out how to get back to the first index.
I want to train my program so that it yields the same result as in b by iterating over the matrix in every loop.
This is my current code:
a = [ 1  1
      1 -1
     -1  1
     -1 -1 ];
b = [ 1
     -1
     -1
     -1 ];
disp(a);
disp(b);
x = a(:,1);
disp(x);
y = a(:,2);
disp(y)
learningrate = 0.1;
maxiteration = 10;
weight(1) = 0.1;
weight(2) = 0.1;
weight(3) = 0.1;
count = length(x);
for p = 1:count
    s = (x(p) * weight(1)) + (y(p) * weight(2)) + weight(3);
    if s >= 0
        result = 1;
        if result ~= b(p)
            weight(1) = weight(1)+learningrate*(b(p)-result)*x(p);
            weight(2) = weight(2)+learningrate*(b(p)-result)*y(p);
            weight(3) = weight(3)+learningrate*(b(p)-result);
            disp(result);
            disp(x(p));
            disp(y(p));
            disp(weight(1));
            disp(weight(2));
            disp(weight(3));
        end
    else
        if s <= 0
            result = -1;
            disp(result);
            if result ~= b(p)
                weight(1) = weight(1)+learningrate*(b(p)-result)*x(p);
                weight(2) = weight(2)+learningrate*(b(p)-result)*y(p);
                weight(3) = weight(3)+learningrate*(b(p)-result);
                disp(x(p));
                disp(y(p));
                disp(weight(1));
                disp(weight(2));
                disp(weight(3));
            end
        end
    end
end
@Amro has posted an elaborate answer on implementing a single-layer perceptron with MATLAB. His post is valuable not only in terms of getting some code, but also in showing how a technical problem should be approached: it starts with a graphical representation of the perceptron showing the signal flow and a description of the problem, and it goes on with excellent comments in the code as part of the solution.
Just replacing the variables a and b in your code with meaningful names could make a big difference.
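Regarding the specific "go back to the first index" part of the question, here is a minimal hedged sketch: the single pass over the four samples is simply wrapped in an outer epoch loop. All variable names are taken from the question's code; numErrors is an assumed helper.
for epoch = 1:maxiteration
    numErrors = 0;
    for p = 1:count                      % p restarts at the first index every epoch
        s = (x(p) * weight(1)) + (y(p) * weight(2)) + weight(3);
        if s >= 0
            result = 1;
        else
            result = -1;
        end
        if result ~= b(p)
            numErrors = numErrors + 1;
            weight(1) = weight(1) + learningrate*(b(p)-result)*x(p);
            weight(2) = weight(2) + learningrate*(b(p)-result)*y(p);
            weight(3) = weight(3) + learningrate*(b(p)-result);
        end
    end
    if numErrors == 0                    % every sample already matches b
        break;
    end
end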