I'm trying to optimise and validate a neural network using Netlab in MATLAB.
I'd like to find the error value for each iteration, so I can see convergence on a plot. This can be done either by storing the errors printed in the command window (which happens when options(1) is set to 1) or by using errlog, which is an output of netopt.
However, these errors are not the same as mlperr, which gives an error value of 0.5*(sum-of-squares error) for the last iteration. I can't validly use them if I don't know how they're calculated.
Does anybody know what the errors displayed in the command window represent (I'm using scaled conjugate gradient as my optimisation algorithm)?
Is there a way of storing the mlperr for each iteration that the network
runs?
Any help is greatly appreciated, many thanks!
NB:
I have tried doing something similar to this:
ftp://ftp.dcs.shef.ac.uk/home/spc/com336/neural-lab-wk6.html
However, for some reason it gives different results from running the network with the number of iterations specified under options(14) rather than k.
Yes, certainly.
The ERRLOG vector is created as an output of the network optimisation function netopt, called with the following syntax:
[NET, OPTIONS, ERRLOG] = netopt(NET, OPTIONS, X, T, ALG)
Each row of ERRLOG gives 0.5*SSE (sum of squares error) for the corresponding iteration of network optimisation. This error is calculated between the predicted outputs (y) and the target outputs (t).
The MLPERR function has the following syntax:
E = mlperr(NET, X, T)
It also gives 0.5*SSE between the predicted outputs (y) and target outputs (t), but since the network parameters are constant (NET should be pre-trained), E is a single value.
If netopt was run with an ERRLOG output, and MLPERR was then run with the same network and variables, E should equal the value in the final row of ERRLOG (the error after the final iteration of network optimisation).
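For example, a minimal sketch of plotting the training curve and comparing it with MLPERR (the network sizes, the random data x and t, and the iteration count are placeholders; substitute your own):
nin = 2; nhidden = 5; nout = 1;               % assumed network dimensions
x = randn(20, nin); t = randn(20, nout);      % dummy training data for illustration
net = mlp(nin, nhidden, nout, 'linear');      % create the MLP
options = zeros(1, 18);
options(1)  = 1;                              % print error values to the command window
options(14) = 100;                            % number of training iterations
[net, options, errlog] = netopt(net, options, x, t, 'scg');   % scaled conjugate gradient
plot(errlog)                                  % 0.5*SSE at each iteration
xlabel('Iteration'); ylabel('0.5*SSE');
e = mlperr(net, x, t);                        % should match errlog(end)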
Hope this is of some use to someone!
I need to implement an if/else in Simulink to find out whether an input is a scalar value or a matrix. Please see the diagram below:
Given:
Block(1) - is an input that can be a scalar "1" or a matrix "[[0 15];[5 10]]"
Block(2) - must return the signal dimension of the input, e.g. 1 for a scalar and >1 for a matrix
The requirements are:
Everything must work interpreted or compiled (Simulink Coder)
The final outputs of blocks (4) and (5) are scalars
I have an average understanding of C MEX S-functions, so if I need to implement one to solve the problem that is OK
So far, I have had the following problems:
I don't know at all whether what I am planning to do is feasible
I don't know how to implement Block(2) so that it works in compiled mode
Even though there is an if/else, Simulink performs a pre-check before running to verify that all signal dimensions are OK. During this check it gives an error saying, for example, that Block(5) has a matrix input
Any clues?
Block(2) is the easiest part; it can be implemented using the "Probe" block from the Simulink library. Your input at port 1 must be a variable-sized signal, since you are expecting either a scalar or a matrix.
I assume you are feeding Input(1) to blocks 4 and 5. At model compile time Simulink does not know which of these blocks is going to run, because that depends on the input size, so Simulink has to assume that both blocks may receive a scalar or a matrix. You need to make blocks 4 and 5 not throw errors for either a scalar or a matrix, even though each will only be used for one type at run time.
If you are not able to do this, a simple workaround for the scalar case is to place a Selector before block 5 that always selects the first sample. This lets Simulink know that the input to block 5 is always a scalar.
Essentially I want to have fminsearch run over a variety of parameters.
So I have the following snippet of code running:
%Setting up the changeable WIRX parameters:
L = 0.15; %Length along the electrodes in meters
I = 3000; %Current in amps
%Running the fminsearch:
TeNe = fminsearch(@(params) TeNe(params,L,I), [5,1.5e21], optimset('MaxFunEvals', 100000, 'MaxIter', 100000));
What I want to do is be able to run this in a for loop with an array of values for L and I. However, what I noticed is that I cannot even run this piece of code twice in a row without getting the error:
Subscript indices must either be real positive integers or logicals.
Any insight would be much appreciated!
I assume that TeNe is a function which you call with the following inputs: (params,L,I).
However, the output of fminsearch is also assigned to TeNe.
That is why you get the error after the first run: L has been set to 0.15, which makes no sense as an index into an array called TeNe, which is what you end up with after running fminsearch.
Consider changing the name of the output variable.
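For instance, a sketch of the loop over L and I with the result stored under a different name (the value arrays here are purely illustrative):
L_values = [0.15 0.20 0.25];        % illustrative lengths in meters
I_values = [3000 4000];             % illustrative currents in amps
opts = optimset('MaxFunEvals', 100000, 'MaxIter', 100000);
results = cell(numel(L_values), numel(I_values));
for i = 1:numel(L_values)
    for j = 1:numel(I_values)
        % TeNe stays a function; the output goes into the results cell array
        results{i,j} = fminsearch(@(params) TeNe(params, L_values(i), I_values(j)), ...
                                  [5,1.5e21], opts);
    end
end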
I've finally just updated to MATLAB 2014a. I have loads of scripts that use the Symbolic Math Toolbox that used to work fine, but they now hit the following error:
Error using mupadmex
Error in MuPAD command: Division by zero. [_power]
Evaluating: symobj::trysubs
I can't post my actual code here, but here is a simplified example:
syms f x y
f = x/y
results = double(subs(f, {'x','y'}, {1:10,-4:5}))
In my actual script I'm passing two 23x23 grids of values to a complicated function, and I don't know in advance which of these values will result in the divide by zero. Everything I can find on Google just tells me not to attempt an evaluation that will result in a divide by zero. Not helpful! I used to get 'Inf' (or 'NaN' - I can't specifically remember) for the values it could not evaluate, which I could easily filter for when doing the next steps on this data.
Does anyone know how to force MATLAB 2014a back to that behaviour rather than throwing the error? Or am I doomed to running an older version of MATLAB forever, or to going through the significant pain of changing my approach to avoid the divide by zero?
You could define a division that has the behaviour you want; the division function below returns Inf for division by zero:
syms x y
% dirac(y) is 0 for any nonzero y, so away from zero this is simply x/y;
% the extra dirac(y) term is what produces Inf at y == 0
mydiv = @(x,y) x/(dirac(y)+y) + dirac(y)
f = mydiv(x,y)
results = double(subs(f, {'x','y'}, {1:10,-4:5}))
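If mydiv behaves as described, the divide-by-zero entries show up as Inf (or NaN) and can be filtered out afterwards, for example:
bad = isinf(results) | isnan(results);   % positions where the division blew up
clean = results(~bad);                   % values that are safe to carry forward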
I am working on a project that needs to use hidden Markov models, so I downloaded Kevin Murphy's toolbox. I have some problems with its usage. On the toolbox webpage, he says that the first input of dhmm_em and dhmm_logprob is the symbol sequence data, and in the examples they give row vectors as data. So when I give my symbol sequence as a row vector, I get this error:
??? Error using ==> assert at 9
assertion violated:
Error in ==> fwdback at 105
assert(approxeq(sum(alpha(:,t)),1))
Error in ==> dhmm_logprob at 17
[alpha, beta, gamma, ll] = fwdback(prior, transmat, obslik, 'fwd_only', 1);
Error in ==> mainCourseProject at 110
loglik(train_act) = dhmm_logprob(orderedSymbols, hmm{train_act}.prior, hmm{train_act}.trans, hmm{act}.emiss);
However, before giving this error, the code works for some symbol vectors. When I give my data as a column vector, the functions work fine with no errors. So why exactly am I getting this error?
You might say that I should be giving not single vectors but sets of vectors; I also tried collecting my feature vectors in a struct and giving row vectors that way, but nothing changed - I still get the assertion error.
By the way, my symbol sequence does not contain any zeros, and I am doing everything almost exactly as shown in their examples, so I would be grateful if anyone could help.
I'm not sure, but from the function call stack shown above, shouldn't the last line be hmm{train_act}.emiss instead of hmm{act}.emiss?
In other words, when you compute the log-probability of a sequence, you should pass components that all belong to the same HMM model (transition matrix, emission matrix, and prior probabilities).
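Concretely, the call from the stack trace would become (keeping the questioner's variable and field names):
loglik(train_act) = dhmm_logprob(orderedSymbols, ...
                                 hmm{train_act}.prior, ...
                                 hmm{train_act}.trans, ...
                                 hmm{train_act}.emiss);   % was hmm{act}.emiss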
By the way, the ASSERT in the code is a sanity check that a vector of probabilities should sum to 1. Oftentimes, when working with very small values (log-probabilities), numerical stability issues can creep in... You could edit the APPROXEQ function to relax the comparison a bit by giving it a bigger margin of error.
This error message and the code it refers to are human-readable. An assertion is a guard put in by the programmer to ensure that certain conditions are met. In this case, what is the condition? approxeq(sum(alpha(:,t)),1). I'd venture to say that approxeq wants the values to be approximately equal, so the assertion fails because sum(alpha(:,t)) is not approximately equal to 1.
Without knowing anything about the code, I'd also guess that these refer to probabilities. The probabilities of a node's edges must sum to one. Hopefully this starts you down a productive debugging path. If you can't figure out what's wrong with your input that produces this condition, start wading into the code a bit to see where this alpha vector comes from, and how it ended up invalid.
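One place to start is a quick sanity check on the model parameters before calling dhmm_logprob (the field names follow the question; the tolerance is just illustrative):
tol = 1e-6;
assert(abs(sum(hmm{train_act}.prior) - 1) < tol)            % prior sums to 1
assert(all(abs(sum(hmm{train_act}.trans, 2) - 1) < tol))    % each transition-matrix row sums to 1
assert(all(abs(sum(hmm{train_act}.emiss, 2) - 1) < tol))    % each emission-matrix row sums to 1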
I am having some issues with MATLAB's fminsearch. I have defined TolX and TolFun as follows:
options = optimset('TolFun',1e-8, 'TolX', 1e-8)
Then I tried to estimate the parameters of my function using:
[estimates, val] = fminsearch(model, start_point, options)
However, val is around 3.3032e-04. Even though I specified TolFun to be 1e-8, it still terminates early with a value around 3.3032e-04. Actually, the desired value of the parameter is obtained at something around 1.268e-04, which is why I tried setting TolFun. Why is it not working? It should have converged to the least value of the function, shouldn't it?
There are other reasons for termination of the search, for example the maximum number of function evaluations or the maximum number of iterations being reached. fminsearch provides additional output arguments that give you information about the reason for termination. You especially want the full OUTPUT argument, which provides the number of iterations, the termination message, etc.
[X,FVAL,EXITFLAG,OUTPUT] = fminsearch(...) returns a structure
OUTPUT with the number of iterations taken in OUTPUT.iterations, the
number of function evaluations in OUTPUT.funcCount, the algorithm name
in OUTPUT.algorithm, and the exit message in OUTPUT.message.
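For instance, a short sketch (using the model, start_point, and options from the question) that surfaces the termination reason:
[estimates, val, exitflag, output] = fminsearch(model, start_point, options);
disp(exitflag)           % 1 = converged to TolX/TolFun, 0 = MaxIter or MaxFunEvals reached
disp(output.message)     % the termination message in plain text
disp(output.iterations)  % iterations actually taken
disp(output.funcCount)   % function evaluations actually used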
Another possibility is that you've gotten stuck in a local minimum. There's not much to be done about that, except to choose a different start point or a different optimizer.
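If the termination message looks fine and you still land at 3.3032e-04, one common tactic is to restart from several start points and keep the best result; a rough sketch (the perturbation scheme is only illustrative):
best_val = Inf;
for k = 1:5
    sp = start_point .* (1 + 0.5*randn(size(start_point)));   % randomly perturbed start point
    [est_k, val_k] = fminsearch(model, sp, options);
    if val_k < best_val
        best_val  = val_k;
        estimates = est_k;
    end
end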