I have to implement a single-layer neural network, or perceptron. For this I have two data-set files, one for the input and one for the output. I have to do this in MATLAB without using the Neural Network Toolbox. The format of the two files is given below.
In:
0.832 64.643
0.818 78.843
1.776 45.049
0.597 88.302
1.412 63.458
Out:
0 0 1
0 0 1
0 1 0
0 0 1
0 0 1
The target output is 1 for the particular class that the corresponding input belongs to, and 0 for the remaining two outputs.
I tried to do this, but it is not working for me.
load in.data
load out.data
x = in(:1);
y = in(:2);
learning rate = 0.2;
max_iteration = 50;
function result = calculateOutput(weights,x, y)
s = x*(weights(1) +weight(2) +weight(3));
if s>=0
result = 1
else:
result = -1
end
end
Count = length(x);
weights[0] = rand();
weights[1] = rand();
weights[2] = rand();
iter = 0;
do {
iter++;
globalerror = 0;
for(p=0; p<count;p++){
output = calculateoutput(weights,x[p],y[p]);
localerror = output[p] - output
weights[0]+= learningrate *localerror*x[p];
weights[1]+= learningrate *localerror*y[p];
weights[2]+= learningrate *localerror;
globalerror +=(localerror*localerror);
}
}while(globalerror != 0 && iter <= max_iteration);
Where is the mistake in this algorithm?
I am referring to the example given in the link below:
Perceptron learning algorithm not converging to 0
Here's a list of what I see wrong:
deep breath
The indexing syntax (:1) is incorrect. Perhaps you mean (:,1) as in "all rows of column 1".
There is no such thing as a do...while loop in MATLAB. Only FOR and WHILE loops. Also, your for loop is defined wrong. MATLAB has a different syntax for that.
There are no ++ or += operators in MATLAB.
The "not equal" operator in MATLAB is ~=, not !=.
Indexing of vectors and matrices needs to be done with parentheses (), not square brackets [].
Is all of this inside a function or a script? You can not define functions, namely calculateOutput, in a script. You would have to put that in its own m-file calculateOutput.m. If all of the code is actually inside a larger function, then calculateOutput is a nested function and should work fine (assuming you have ended the larger enclosing function with an end).
You have a number of apparent typos for variable names:
weight vs. weights (as per phoffer's answer)
Count vs. count (MATLAB is case-sensitive)
calculateOutput vs. calculateoutput (again, case-sensitivity)
learning rate vs. learningrate (variables can't have spaces in them)
heavy exhale ;)
In short, it needs quite a bit of work.
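For illustration, a do...while-style loop is typically emulated in MATLAB like this (a generic sketch, not the asker's actual update code):
max_iteration = 50;
iter = 0;
while true                       %# MATLAB has no do...while; use while true with a break at the bottom
    iter = iter + 1;             %# no ++ operator: write the increment out
    globalerror = 0;             %# placeholder for the real per-pass error computation
    if globalerror == 0 || iter > max_iteration   %# use == and ~=, never !=
        break;                   %# the exit test runs after the body, just like do...while
    end
end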
The main mistake is that this is not written using Matlab syntax. Here is an attempt to do what I think you were trying to do.
Unfortunately, there is a fundamental problem with your algorithm (see the comments in the code). Also, I think you should have a look at the very good MATLAB documentation; reading the manual will quickly show you how to format this.
function neuralNetwork
%# load data
load in.data
load out.data
x = in(:,1);
y = in(:,2);
%# set constants
learningrate = 0.2;
max_iteration = 50;
% initialize parameters
count = length(x);
weights = rand(1,3); % creates a 1-by-3 array with random weights
iter = 0;
globalerror = inf; %# must be initialized before the first while-test (the original do-while checked it after the body)
while globalerror ~= 0 && iter <= max_iteration
iter = iter + 1;
globalerror = 0;
for p = 1:count
output = calculateOutput(weights,x(p),y(p));
%# the following line(s) cannot possibly work
%# output is not a vector, since the previous line
%# assigns it to a scalar
%# Also, arrays are accessed with parentheses
%# and indexing starts at 1
%# and there is no += operator in Matlab
localerror = output[p] - output
weights[0]+= learningrate *localerror*x[p];
weights[1]+= learningrate *localerror*y[p];
weights[2]+= learningrate *localerror;
globalerror +=(localerror*localerror);
end %# for-loop
end %# while-loop
%# subfunctions in Matlab are put at the end of the file
function result = calculateOutput(weights, x, y)
    s = x*(weights(1) + weights(2) + weights(3));
    if s >= 0
        result = 1;
    else
        result = -1;
    end
end
On line #10, you have weights(1) +weight(2) +weight(3); but the rest of the code has weights with an s.
EDIT: Also, MATLAB does not have the ++ operator; your for loop will cause an error. In MATLAB, construct a for loop like this:
for p = 1:count
blah blah blah
end
Also, MATLAB does not use the += operator either, as Jonas pointed out in his code. You need to write the update out in full (and note that MATLAB indexing starts at 1, so weights[0] becomes weights(1)):
weights(1) = weights(1) + learningrate * localerror * x(p)
Related
I am trying to use the Metropolis-Hastings algorithm with a random-walk sampler to simulate samples from a target density f in MATLAB, but something is wrong with my code. The proposal density is the uniform PDF on the ellipse 2s^2 + 3t^2 ≤ 1/4. Can I use the acceptance-rejection method to sample from the proposal density?
N=5000;
alpha = @(x1,x2,y1,y2) (min(1,f(y1,y2)/f(x1,x2)));
X = zeros(2,N);
accept = false;
n = 0;
while n < 5000
accept = false;
while ~accept
s = 1-rand*(2);
t = 1-rand*(2);
val = 2*s^2 + 3*t^2;
% check acceptance
accept = val <= 1/4;
end
% and then draw uniformly distributed points checking that u< alpha?
u = rand();
c = u < alpha(X(1,i-1),X(2,i-1),X(1,i-1)+s,X(2,i-1)+t);
X(1,i) = c*s + X(1,i-1);
X(2,i) = c*t + X(2,i-1);
n = n+1;
end
figure;
plot(X(1,:), X(2,:), 'r+');
You may just want to use MATLAB's native implementation, mhsample.
Regarding your code, there are a few things missing:
- the function f that alpha relies on,
- the loop variable i (it might be meant to be n, but n is not suited for indexing since it starts at zero).
You should also always preallocate memory in MATLAB if you want to fill an array dynamically, i.e. X in your case.
To expand on the suggestions by @max, the code appears to work if you change the i indices to n and replace
n = 0;
with
n = 2;
X(:,1) = [.1,.1];
It would probably be better to assign X(:,1) to random values within your accept region (using the same code you use later), and/or include a burn-in period.
Depending on what you are going to do with this, it may also make things cleaner to wrap the argument of sin in the f function so it stays within 0 to 2*pi (likely by shifting the value by 2*pi if it exceeds those bounds).
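For concreteness, here is a minimal sketch of the loop with those changes applied. The f below is only a placeholder density (the question's actual f, which apparently involves a sin term, is not shown), so treat this as a template rather than a drop-in fix:
N = 5000;
f = @(x1,x2) exp(-(x1.^2 + x2.^2));                %# placeholder target density, NOT the asker's f
alpha = @(x1,x2,y1,y2) min(1, f(y1,y2)/f(x1,x2));  %# Metropolis-Hastings acceptance probability
X = zeros(2,N);
X(:,1) = [.1; .1];               %# starting point (ideally drawn inside the accept region)
n = 2;
while n <= N
    %# draw a step (s,t) uniformly from the ellipse 2s^2 + 3t^2 <= 1/4
    accept = false;
    while ~accept
        s = 1 - rand*2;
        t = 1 - rand*2;
        accept = (2*s^2 + 3*t^2) <= 1/4;
    end
    %# accept the proposed move with probability alpha, otherwise stay put
    u = rand();
    c = u < alpha(X(1,n-1), X(2,n-1), X(1,n-1)+s, X(2,n-1)+t);
    X(1,n) = c*s + X(1,n-1);
    X(2,n) = c*t + X(2,n-1);
    n = n + 1;
end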
This question is a continuation of a previous one: Matlab : Plot of entropy vs digitized code length.
I want to calculate the entropy of a random variable that is the discretized version (0/1) of a continuous random variable x. The random variable denotes the state of a nonlinear dynamical system called the Tent Map. Iterating the Tent Map yields a time series of length N.
The code should exit as soon as the entropy of the discretized time series becomes equal to the entropy of the dynamical system. It is known theoretically that the entropy of the system is log_2(2). The code exits, but the first 3 values of the entropy array are erroneous: entropy(1) = 1, entropy(2) = NaN and entropy(3) = NaN. I am scratching my head as to why this is happening and how I can get rid of it. Please help in correcting the code. Thank you.
clear all
H = log(2)
threshold = 0.5;
x(1) = rand;
lambda(1) = 1;
entropy(1,1) = 1;
j=2;
tol=0.01;
while(~(abs(lambda-H)<tol))
if x(j - 1) < 0.5
x(j) = 2 * x(j - 1);
else
x(j) = 2 * (1 - x(j - 1));
end
s = (x>=threshold);
p_1 = sum(s==1)/length(s);
p_0 = sum(s==0)/length(s);
entropy(:,j) = -p_1*log2(p_1)-(1-p_1)*log2(1-p_1);
lambda = entropy(:,j);
j = j+1;
end
plot( entropy )
It looks like one of your probabilities is zero. In that case, you'd be trying to calculate 0*log(0) = 0*-Inf = NaN. The entropy should be zero in this case, so you can just check for this condition explicitly.
A couple of side notes: it looks like you're declaring H = log(2), but your post says the entropy is log_2(2). p_0 is always 1 - p_1, so you don't have to count everything up again. Growing the arrays dynamically is inefficient because MATLAB has to re-copy the entire contents at each step; you can speed things up by pre-allocating them (only worth it if you're going to run for many timesteps).
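As a sketch, the NaN values can be avoided by handling the p_1 = 0 or p_1 = 1 case explicitly before calling log2; this slots into the asker's loop, using their variable names:
if p_1 == 0 || p_1 == 1
    entropy(:,j) = 0;            %# by the usual convention 0*log2(0) = 0, so the entropy is 0
else
    entropy(:,j) = -p_1*log2(p_1) - (1-p_1)*log2(1-p_1);
end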
I am trying to parallelize PageRank using MATLAB's parfor command as shown below.
However, the line that is marked by ** ** and is also written below:
p_t1(L{j,i}) = p_t1(L{j,i}) +constant;
is causing this error:
"Undefined function 'p_t1' for input arguments of type 'int32'."
When I comment it out, the code works. When I replace the parfor with a normal for, it works again. What is the problem?
Thanks
S='--------------------------------------------------';
n = 6;
index=1;
a = [2 6 3 4 4 5 6 1 1];
b = [1 1 2 2 3 3 3 4 6];
a = int32(a);
b=int32(b);
a=transpose(a);
b=transpose(b);
vec_len=length(a);
%parpool(2);
for i=1:vec_len
G(1,i)=a(i);
G(2,i)=b(i);
con_size(i)=0;
end
%n=916428
for j = 1:vec_len
from_idx=a(j);
con_size(from_idx)=con_size(from_idx)+1;
L{from_idx,con_size(from_idx)}=b(j);
end
% Power method
max_error=100;
p = .85;
delta = (1-p)/n;
p_t1 = ones(n,1)/n;
p_t0 = zeros(n,1);
cnt = 0;
tic
while max_error > .0001
p_t0 = p_t1;
p_t1 = zeros(n,1);
parfor j = 1:n
constant=p_t0(j)/con_size(j);
constant1=p_t0(j)/n;
if con_size(j) == 0
p_t1 = p_t1 + constant1;
else
for i=1:con_size(j)
**p_t1(L{j,i}) = p_t1(L{j,i}) +constant;**
end
end
end
p_t1;
sum(p_t1);
p_t1 = p*p_t1 + delta;
cnt = cnt+1;
max_error= max(abs(p_t1-p_t0));
%disp(S);
end
toc
%delete(gcp)
Each variable inside parfor loop must be classified into one of several categories. Your p_t1 variable cannot be classified as any. It appears that you are trying to accumulate result in this variable across multiple iterations, which can be done using reduction variables. This is possible only if the result does not depend on the order of iterations (which appears to be the case in your example), but as you can see from the documentation, only certain types of expressions are allowed for reduction variables - mostly just simple assignments. In particular, you cannot index into a reduction variable - neither on the left-hand side, nor on the right-hand side of the assignment.
MATLAB should also be producing plenty of warnings and M-Lint (static code analyzer) warnings to highlight this issue.
You need to rewrite your code so that MATLAB can clearly classify each variable used inside parfor into one of the allowed categories.
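One common restructuring (a sketch only, reusing the asker's variable names and leaving the surrounding while-loop unchanged) is to accumulate each iteration's contribution in a local vector and then add that whole vector to p_t1, so p_t1 becomes a plain reduction variable:
parfor j = 1:n
    contrib = zeros(n,1);                    %# temporary vector, local to this iteration
    if con_size(j) == 0
        contrib = contrib + p_t0(j)/n;       %# dangling node: spread its rank evenly
    else
        constant = p_t0(j)/con_size(j);
        for i = 1:con_size(j)
            contrib(L{j,i}) = contrib(L{j,i}) + constant;   %# indexing a local temporary is allowed
        end
    end
    p_t1 = p_t1 + contrib;                   %# whole-variable addition => valid reduction
end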
I'm simulating the equations of motion for a (somewhat odd) system with mass-springs and a double pendulum, for which I have a mass matrix and a function f(x), and I call ode45 to solve
M*x' = f(x,t);
I have 5 state variables, q = [QDot, phi, phiDot, r, rDot]'; (I removed Q because nothing depends on it; QDot is the current.)
Now, to calculate some forces, I would also like to save the values of rDotDot, which ode45 computes at each integration step; however, ode45 doesn't give these back. I've searched around a bit, but the only two solutions I've found are to
a) turn this into a 3rd order problem and add phiDotDot and rDotDot to the state vector. I would like to avoid this as much as possible, as it's already non-linear and this really makes matters a lot worse and blows up computation time.
b) augment the state to directly calculate the function, as described here. However, in the example he says to add a row of zeros in the mass matrix. It makes sense, since otherwise it would integrate the derivative instead of just evaluating it at that one point, but on the other hand it makes the mass matrix singular. It doesn't seem to work for me...
This seems like such a basic thing (wanting the derivative values of the state vector); is there something quite obvious that I'm not thinking of? (Or something not so obvious would be OK too....)
Oh, and global variables are not so great because ode45 calls the f() function several times while refining its step, so the sizes of the global variable and the returned state vector q don't match at all.
In case someone needs it, here's the code:
%(Initialization of parameters are above this line)
options = odeset('Mass',@massMatrix);
[T,q] = ode45(@f, tspan,q0,options);
function dqdt = f(t,q,p)
% q = [qDot phi phiDot r rDot]';
dqdt = zeros(size(q));
dqdt(1) = -R/L*q(1) - kb/L*q(3) +vs/L;
dqdt(2) = q(3);
dqdt(3) = kt*q(1) + mp*sin(q(2))*lp*g;
dqdt(4) = q(5);
dqdt(5) = mp*lp*cos(q(2))*q(3)^2 - ks*q(4) - (mb+mp)*g;
end
function M = massMatrix(~,q)
M = [
1 0 0 0 0;
0 1 0 0 0;
0 0 mp*lp^2 0 -mp*lp*sin(q(2));
0 0 0 1 0;
0 0 mp*lp*sin(q(2)) 0 (mb+mp)
];
end
The easiest solution is to just re-run your function on each of the values returned by ode45.
The hard solution is to try to log your DotDots to some other place (a pre-allocated matrix or even an external file). The problem is that you might end up with unwanted data points if ode45 secretly does evaluations in weird places.
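As a sketch of that first suggestion (assuming f and massMatrix from the question are callable here): because ode45 with a mass matrix solves M(t,q)*dqdt = f(t,q), the derivatives at the returned points can be recovered with a backslash solve:
%# after [T,q] = ode45(@f, tspan, q0, options) has returned:
rDotDot = zeros(size(T));
for k = 1:numel(T)
    qk         = q(k,:)';                             %# ode45 returns each state as a row
    dqdt       = massMatrix(T(k), qk) \ f(T(k), qk);  %# M(t,q)*dqdt = f(t,q)  =>  dqdt = M\f
    rDotDot(k) = dqdt(5);                             %# q(5) is rDot, so dqdt(5) is rDotDot
end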
Since you are using nested functions, you can use their scoping rules to mimic the behavior of global variables.
It's easiest to (ab)use an output function for this purpose:
function solveODE
% ....
%(Initialization of parameters are above this line)
% initialize "global" variable
rDotDot = [];
% Specify output function
options = odeset(...
'Mass', @massMatrix,...
'OutputFcn', @outputFcn);
% solve ODE
[T,q] = ode45(@f, tspan,q0,options);
% show the rDotDots
rDotDot
% derivative
function dqdt = f(~,q)
% q = [qDot phi phiDot r rDot]';
dqdt = [...
-R/L*q(1) - kb/L*q(3) + vs/L
q(3)
kt*q(1) + mp*sin(q(2))*lp*g
q(5)
mp*lp*cos(q(2))*q(3)^2 - ks*q(4) - (mb+mp)*g
];
end % q-dot function
% mass matrix
function M = massMatrix(~,q)
M = [
1 0 0 0 0;
0 1 0 0 0;
0 0 mp*lp^2 0 -mp*lp*sin(q(2));
0 0 0 1 0;
0 0 mp*lp*sin(q(2)) 0 (mb+mp)
];
end % mass matrix function
% the output function collects values for rDotDot at the initial step
% and each successful step
function status = outputFcn(t,q,flag)
status = 0;
% at initialization, and after each successful step
if isempty(flag) || strcmp(flag, 'init')
deriv = f(t,q);
rDotDot(end+1) = deriv(end);
end
end % output function
end
The output function only computes the derivatives at the initial and all successful steps, so it's basically doing the same as what Adrian Ratnapala suggested: re-running the derivative at each of the outputs of ode45. I think that would even be more elegant (+1 for Adrian).
The output function approach has the advantage that you can plot the rDotDot values while the integration is being run (don't forget a drawnow!), which can be very useful for long-running integrations.
I'm trying to run this loop as a parfor-loop:
correlations = zeros(1,N);
parfor i = 1:(size(timestamps,1)-1)
j = i+1;
dts = timestamps(j) - timestamps(i);
while (dts < T) && (j <= size(timestamps,1))
if dts == 0 && detectors(i) ~= detectors(j)
correlations(1) = correlations(1) + 2;
elseif detectors(i) ~= detectors(j)
dts = floor(dts/binning)+1;
correlations(dts) = correlations(dts) + 1;
end
j = j + 1;
if j <= size(timestamps,1)
dts = timestamps(j) - timestamps(i);
end
end
end
Matlab gives me the following error:
Error: File: correlate_xcorr.m Line: 18 Column: 17
The variable correlations in a parfor cannot be classified.
See Parallel for Loops in MATLAB, "Overview".
Line 18 is the following:
correlations(1) = correlations(1) + 2;
I cannot understand why this shouldn't be possible. The final value of correlations doesn't depend on the order in which the loop is executed, but only on dts and detectors. I found similar examples in the documentation which work fine.
Why can't Matlab execute this code and how can I fix it?
I found the following solution and it seems to work. The program looks a little different, but it has the same shape. This way MATLAB is forced to treat X/correlations as a reduction variable.
X = zeros(1,5);
parfor i= 1:1000
a = zeros(1,5);
dts = randi(10)-1;
if dts == 0
a(1) = (a(1) + 2);
elseif dts <= 5
a(dts) = a(dts) +1;
end
X = X + a;
end
MATLAB cannot determine that your loop is order-independent because of the way you're accessing correlations(1) from multiple iterations of the PARFOR loop. Since that value appears to be 'special' in some way, it should work to make it a separate 'reduction' variable, i.e. replace correlations(1) with correlations_1 or something.
The next problem you'll encounter is that you are not correctly 'slicing' the remainder of correlations. For MATLAB to analyse a PARFOR loop, it needs to be able to tell that each loop iteration is writing only to its 'slice' of the output variables. In practice, this means that you must index the outputs using literally the loop index.
More about PARFOR variable classification here: http://www.mathworks.com/help/distcomp/advanced-topics.html#bq_of7_-1
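For reference, a sliced output variable looks like this (a generic, self-contained sketch, not the asker's code):
N = 100;
out = zeros(1, N);
parfor i = 1:N
    out(i) = sin(i);     %# indexed directly by the loop variable => 'sliced' output, allowed
end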
EDIT:
If you want correlations to behave strictly as a reduction variable, as I believe you imply in your comments, you need to give PARFOR a clue that that's what you mean: in particular, you need to add to the whole variable each time you update it. In other words, something more like:
parfor ...
dummyVec = zeros(1, N);
dummyVec(elementToIncrement) = 1;
correlations = correlations + dummyVec;
end
I agree this is non-obvious. See http://blogs.mathworks.com/cleve/2012/11/26/magic-squares-meet-supercomputing/ for more.