From the MATLAB HMM documentation:
Using hmmtrain. If you do not know the sequence of states, but you have initial guesses for TRANS and EMIS, you can still estimate TRANS and EMIS using hmmtrain.
I cannot figure out how to choose the initial guesses for the transition and emission matrices.
And if I just make a random guess, how do I tweak these matrices afterwards?
An initial guess is just that, a guess. You can make them whatever you want, but it's not a bad idea to try to understand how your system works first. A well-behaved (ergodic) system should be able to converge regardless of this guess given an infinite number of samples. Once you're done, you can tweak your guess and then run the code again if you'd like.
You can also try to play around with the tolerance values for the algorithm or the max number of iterations, which can be a limiting factor.
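For instance, a minimal sketch (the number of states, the number of symbols, and the observed sequence seq are placeholders to adapt; Tolerance and Maxiterations are documented hmmtrain options):

    % Two hidden states, six emission symbols (e.g. a fair/loaded die),
    % starting from deliberately uninformative uniform guesses.
    trans_guess = [0.5 0.5; 0.5 0.5];
    emis_guess  = ones(2, 6) / 6;
    [est_trans, est_emis] = hmmtrain(seq, trans_guess, emis_guess, ...
        'Tolerance', 1e-6, 'Maxiterations', 1000);

If the estimates look off, perturb the guesses (keeping each row a valid probability distribution) and rerun; with enough data the result should not depend strongly on the starting point.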
I'm trying to understand the Unity physics engine (PhysX). Can somebody explain what exactly Default Solver Iterations and Default Solver Velocity Iterations are?
This is from the Unity documentation:
Default Solver Iterations: Define how many solver processes Unity runs
on every physics frame. Solvers are small physics engine tasks which
determine a number of physics interactions, such as the movements of
joints or managing contact between overlapping Rigidbody components.
This affects the quality of the solver output and it’s advisable to
change the property in case non-default Time.fixedDeltaTime is used,
or the configuration is extra demanding. Typically, it’s used to
reduce the jitter resulting from joints or contacts.
Please provide an example of how it works, and how increasing or decreasing it affects the final result.
I asked this question on the Unity Forum and Hyblademin answered it:
In mathematics, an iterative solution method is any algorithm which
approximately solves a system of unknown values like [x1, x2, x3 ...
xn] by repeating a set of steps (iterating). Often, the system of
interest is a set of linear equations exactly like those seen in
algebra class but with a prohibitively high number of unknowns.
Starting with a guess for the solution to each unknown, which could be
based on a similar, known system or could be from a common starting
point like [1, 1, 1 ... 1], a procedure is carried out which gives an
approximate solution which will be closer to the exact values. After
only one iteration, the approximation won't be a very good one unless
the initial guess was already close. But the procedure can be repeated
with the first approximation as the new input, which will give a
closer approximation.
After repeating a few more times, we can expect a reliable
approximation. It still isn't exact, which we could confirm by just
plugging in our answers into our original system and seeing that it
isn't quite right (after simplifying, we would end up with things like
10=10.001 or something to that effect). That said, if the
approximation is close enough for our application, we stop iterating
and use it.
These lecture notes courtesy of a Notre Dame course give a nice
example of this in action using the well-known Jacobi method. Carrying
out an iteration of an iterative method outputs an approximation that
is better than the input because the methods are defined in a way that
causes this to happen, and this is a property called convergence. When
looking at why any given method converges, things get abstract pretty
quickly. I think this is outside the scope of your question,
especially since I don't know what method(s) Unity uses anyway.
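To make this concrete, here is a small MATLAB sketch of the Jacobi method mentioned above (an illustration only; as the answer notes, it isn't necessarily what Unity actually runs):

    % Solve A*x = b iteratively. A is strictly diagonally dominant,
    % so the Jacobi iteration is guaranteed to converge for it.
    A = [4 -1 0; -1 4 -1; 0 -1 4];
    b = [15; 10; 10];
    x = ones(3, 1);              % common starting guess [1; 1; 1]
    D = diag(diag(A));           % diagonal part of A
    R = A - D;                   % off-diagonal remainder
    for k = 1:25                 % each pass is one "solver iteration"
        x = D \ (b - R*x);       % output is closer to the exact solution
    end
    disp(x)                      % compare against the direct solve A\b

Stopping after fewer iterations leaves a rougher x, which is exactly the precision-versus-cost trade-off the solver iteration count controls.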
When physics is calculated in Unity, we end up with a lot of systems
of equations. We could draw a free-body diagram to show forces and
torques during a collision for a given FixedUpdate in a Unity runtime
to show this. We could try to solve them "directly", which means to
use logical relationships to determine the exact results of the values
(like solving for x in algebra class), but even if the systems are on
the simple side, doing a lot of them will slow the execution to a
crawl. Luckily, iterative, "indirect" methods can be used to get a
pretty good approximation at a fraction of the computing cost.
Increasing the number of iterations will lead to more precise
approximate solutions. There is a point where increasing the number of
iterations gives an increase in precision that is not at all worth the
processing overhead of doing another iteration. But the number of
iterations for this point depends on what you need your project to do.
Sometimes a given arrangement of physics objects will result in jitter
with the default settings that might be improved with more solver
iterations, which is mentioned in the manual entry. There isn't a
great way to determine if changing solver iteration counts will
improve behavior or performance in the way that you need, except for
just trial and error (use the Profiler for a more-objective indication
of performance impact).
https://forum.unity.com/threads/what-does-default-solver-iteration-means.673912/#post-4512004
I'm using MATLAB to work with some spectrograms. I'm new to this kind of thing and come from more of a CS background than a signals background, so I'm not sure what I'm missing here, although it may turn out to be fairly basic.
I'm trying to compute the spectral difference, which I'm fairly sure I understand conceptually. I've got a signal, and I can do the STFT on it with MATLAB's spectrogram() function. Then I try to loop over the spectrogram for the entire signal and, at each sampled point, compute the difference from the previous point by looping over the powers at each frequency and subtracting. I thought I had the concept down, but when I run it I see that the points returned by MySpectrogram(n,k) are complex numbers, or at least look like it: the first part of the number is negative, followed by a +coefficient*i term (-0.07+0.0061i, for example). I tried squaring these results, but after squaring they still appear as complex numbers. Now I am totally lost. Can someone explain what's happening?
I'm calling s = spectrogram(x,window,noverlap,nfft).
Here's the documentation for spectrogram
The values in s are the complex STFT coefficients, not powers, so squaring them just gives another complex number. Take the magnitude first, or ask spectrogram for the power spectrum directly:
[s,f,t,ps] = spectrogram(x,window,noverlap,f,fs)
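A minimal sketch of the magnitude route (the window length, overlap, and fs here are placeholders):

    % Columns of s are time frames; abs(s).^2 is the real-valued power
    % at each frequency/time bin.
    [s, f, t] = spectrogram(x, hamming(256), 128, 256, fs);
    P = abs(s).^2;                     % power spectrogram (real)
    specdiff = sum(diff(P, 1, 2), 1);  % frame-to-frame spectral difference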
I have a question about curve fitting. I have many curves like the one in the picture.
X axis: time
Y axis: temperature
A new sample arrives every 30 s.
GOAL: predict the value at the end of the transient
What would you do in this situation?
What I am doing is this:
For every new sample I start a new fit (so each fit is independent of the previous ones) and check the value of the fitted curve 2 hours after the start of the measurement (all the curves I have settle before 2 h). If, for a number of subsequent fits (let's say 5), the predicted value stays more or less the same (within ±0.2 °C), I assume the estimate is the right one.
This approach seems far too simple to me, and I think I am not exploiting all the information. For example, the error I make at each point in time (e.g. at minute 4:00 I predict, and at 4:30 I can see how wrong I was).
In the picture, the red part of the curve is excluded from the fit (but the real data later passes through it); the estimate is the blue curve. You can see that in this case I don't get a good prediction. In general I also have flatter curves.
Based on the comments above, I tried to formulate an answer, as no one else has given any input.
I think you are using a good basic procedure. Better results may be obtained by using a more appropriate fitting curve, one which includes all the dominant dynamics but avoids overfitting the data. Based on your figure, the simplest form I can think of is:
s + a*(1 - exp(-t/tau))
with parameters s (the initial temperature), a (the amplitude, so s + a is the steady-state value) and tau (the dominant time constant). As you mentioned yourself, limiting the allowed range of the parameters may avoid overfitting and improve the quality of your estimate.
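A minimal sketch with lsqcurvefit from the Optimization Toolbox (the data vectors t_data/y_data and the bound values are placeholders to adapt):

    % p = [s, a, tau]; the bounds keep the fit physically plausible.
    model = @(p, t) p(1) + p(2) * (1 - exp(-t / p(3)));
    p0 = [y_data(1), y_data(end) - y_data(1), 600];   % rough initial guess
    lb = [-50,   0,    1];                            % lower bounds
    ub = [150, 100, 7200];                            % upper bounds
    p  = lsqcurvefit(model, p0, t_data, y_data, lb, ub);
    y_final = p(1) + p(2);              % predicted end-of-transient value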
A generic high-order function, like the one you are using now, may give good interpolation results, but it is dangerous to use for extrapolation, because strange effects may occur outside the fitting region.
Alternatives
Using the error (e.g. correcting for the extrapolated error) may be possible, but is tricky and may not always give good results.
Training a neural network to perform the estimation is probably overkill, but may give better results if applied correctly. Note that you need a lot of training data, representative of the data on which you will later use the network.
First, for those who are not familiar with Simulink, a partial solution outside Simulink is imaginable:
I need to create a vector satisfying the following conditions:
known initial value a1
known final value a2
it has a pre-defined step size, but the length is not pre-determined
the first derivative is limited to ±v_max over the whole range
the second derivative is limited to ±a_max over the whole range
the third derivative is limited to ±j_max over the whole range
at the first and the final point, all derivatives are zero.
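One imaginable outside-Simulink construction (my own sketch, not a complete answer): a minimum-jerk quintic from a1 to a2, with the duration T chosen just large enough that all three limits hold. Velocity and acceleration are exactly zero at both endpoints; the jerk stays within ±j_max but steps at the endpoints, so a 7th-order polynomial would be needed if the jerk itself must also vanish there:

    % a1, a2, dt (step size), v_max, a_max, j_max assumed given.
    D = abs(a2 - a1);
    % Peak |v|, |a|, |j| of the unit quintic are 1.875/T, (10/sqrt(3))/T^2
    % and 60/T^3; pick T so that none of the three limits is exceeded.
    T = max([1.875*D/v_max, sqrt(10/sqrt(3)*D/a_max), (60*D/j_max)^(1/3)]);
    t = 0:dt:T;
    s = t / T;
    x = a1 + (a2 - a1) * (10*s.^3 - 15*s.^4 + 6*s.^5);  % the ramp vector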
Before you ask "what have you tried so far": I only just had the idea to solve it outside Simulink, and I tried all the stuff below ;)
But maybe you guys have a good idea, while I keep working on my own solution.
I'd like to generate smooth ramp signals (3rd derivative limited) based on a trigger signal in Simulink.
To get a triggered step I created a triggered subsystem propagating the trigger output. It looks like this:
But I actually don't want a step, I need a very smooth ramp with limited derivatives up to the 3rd order. The math behind it is:
displacement: x
speed: v = x'
acceleration: a = v' = x''
jerk: j = a' = v'' = x'''
(If this looks familiar to you, I once had a very similar question. I thought about a bounty on it, but after the necessary edit of the question both answers would have been invalid)
As there are only 1st-order rate limiters, I used two derivatives and a double integration to solve my problem. But there is a major drawback I can no longer ignore. For the sake of illustration, I chose a relatively big step size of 0.1.
The complete minimal example (Fixed Step, stepsize: 0.1, ode4): Download here
It can be seen that the signal does not even reach the intended step height of 10, and furthermore is not constant at the end.
Over the development of my whole model, this approach was satisfactory for small step sizes. But I have reached the point where I really need the smooth ramp as intended. That means I need a signal that ends up constant at exactly the value specified by the step height gain.
I have already spent days trying to resolve the problem, and hope to find some help here now.
Some of my ideas:
dynamically increase the step height beyond the actually desired value and saturate the final output. If the rate limits, step height, and simulation step size weren't flexible, one could probably find a satisfying solution. But as everything has to be flexible, there are too many cases where the acceleration and jerk limits are violated.
I tried to use the MATLAB Function block and write my own 3rd-order rate limiter. Though it seems feasible for the trigger moment, I have no solution for smoothing the "deceleration" at the end of the ramp. Also, I'd need C compilers, which would make it hard to use my model on other systems without problems. (At least I think so.)
The solver cannot be changed significantly (either ode3 or ode4) and a fixed step size is mandatory (0.00001 to 0.01).
Currently used, not really useful approach:
For a dynamic amplification of 1.07 I get the following output (all values normalised on their limits):
Though the displacement looks nice, the violation of the acceleration limit is very harmful.
For a dynamic amplification of 1.05 I get the following output (all values normalised on their limits):
The acceleration stays within its boundaries, but the displacement does not reach the intended value (not really clear in the picture). The jerk is still too big. (I could live with that, but it's not nice.)
So it appears to me that an inside-Simulink solution is far from reality. Any ideas on how to create a well-behaved custom function block?
Simulation step size, step height, and the rate limits are known before the simulation starts. (But I have a lot of these triggered smooth ramps in a row; they should feed an event-discrete control.) So I could imagine creating the whole smooth ramp outside Simulink, saving it as a timeseries object, and appending it to the current signal when the trigger is activated.
The problems you see arise because the difference is not conditioned very well.
Taking the difference amplifies the numerical noise that exists in your simulation.
Also, the jerk will always be large if you try to apply an actual step.
I guess for your approach it would be better to work the other way around:
i.e. generate the jerk, acceleration and velocity profiles with which your step is achieved, and integrate upwards (see the sketch below).
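A minimal sketch of that idea in plain MATLAB (my own illustration; the values are placeholders, and it assumes the step is small enough that v_max and a_max are never reached, otherwise constant-acceleration and constant-velocity cruise phases have to be inserted):

    % Integrate a bang-bang jerk profile instead of differentiating a step.
    D  = 10;                         % step height
    J  = 1;                          % jerk limit
    dt = 0.001;                      % fixed simulation step size
    T  = (D / (2*J))^(1/3);          % quarter duration; total move is 4*T
    t  = (0:dt:4*T)';
    jerk = J * ((t < T) - (t >= T & t < 3*T) + (t >= 3*T));  % +J, -J, +J
    a = cumtrapz(t, jerk);           % acceleration: triangle up, down, up
    v = cumtrapz(t, a);              % velocity: smooth bell, zero at both ends
    x = cumtrapz(t, v);              % displacement: reaches ~D, up to O(dt)
                                     % error at the jerk switching instants

The peak acceleration is J*T and the peak velocity is J*T^2, so you can check the assumed limits directly before using the profile.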
I think you're looking for something like the ref3 block:
http://www.dct.tue.nl/home_of_ref3.htm
Note the disclaimer on the site and that it is a little cumbersome to use.
An easy (yet to be improved) way is to use a rate limiter followed by a state-space model with a filter. From the filter you get the velocity, to which you can in turn apply a rate limiter. You continue with rate limiters and filters until you have the desired curve.
Otherwise you can come up with numerical rate limiters of higher order using e.g. Runge-Kutta formulas or finite differences. However, as was pointed out, they may suffer from bad conditioning.
What I usually do is use one rate limiter and a 3rd-order filter, and just tune the time constant (one triple pole) such that my needs are met. This works well.
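For example, a sketch of that shaping outside Simulink using Control System Toolbox functions (tau, the ramp rate, and the height are placeholders to tune):

    tau = 0.5;                        % time constant of the triple pole
    G = tf(1, [tau 1])^3;             % 3rd-order filter 1/(tau*s + 1)^3, DC gain 1
    t = (0:0.001:10)';
    u = min(2*t, 10);                 % rate-limited ramp to height 10
    y = lsim(G, u, t);                % smoothed ramp; higher derivatives softened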
Integrator chains of length > 1 are unstable!
There is a huge field of research dealing with trajectory planning. The easiest way might be to use FIR filters (Biagiotti et al.) or to implement an online trajectory planner (Ezair et al. 2014 / Knierim et al. 2012).
I'm trying my best to work it out with fmincon in MATLAB. When I call the function, I get one of the two following errors:
Number of function evaluations exceeded, or
Number of iterations exceeded.
And when I look at the solution so far, it is way off the intended one (I know so because I constructed a known minimum vector).
Even if I increase the tolerances or the maximum number of iterations, I still get the same problem.
Any help is appreciated.
First, if your problem can actually be cast as linear or quadratic programming, do that first.
Otherwise, have you tried seeding it with different starting values x0? If it's starting in a bad place, it may be much harder to get to the optimum.
If it's possible for you to provide the gradient of the function, that can help the optimizer tremendously (though obviously only if you can find it some way other than numerical differentiation). Similarly, if you can provide the (full or sparse) Hessian relatively cheaply, you're golden.
You can also try using a different algorithm in the solver.
Basically, fmincon by default has almost no info about the function it's trying to optimize, and providing more can be extremely helpful. If you can tell us more about the objective function, we might be able to give more tips.
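A minimal sketch of those knobs (myfun, x0, and the constraint matrices are placeholders; the option names are from recent MATLAB releases, and with SpecifyObjectiveGradient set, myfun must return the gradient as a second output):

    opts = optimoptions('fmincon', ...
        'Algorithm', 'sqp', ...                  % try a different algorithm
        'SpecifyObjectiveGradient', true, ...    % use your analytic gradient
        'MaxFunctionEvaluations', 1e5, ...
        'MaxIterations', 1e4);
    [x, fval, exitflag] = fmincon(@myfun, x0, [], [], Aeq, beq, lb, ub, [], opts);

Always check exitflag: a positive value means the solver actually converged rather than hitting one of the limits above.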
The L1 norm is not differentiable. That can make it difficult for the algorithm to converge to a point where one of the residuals is zero. I suspect this is why the iteration limits are exceeded. If your original problem is
    min  norm(residual(x), 1)
    s.t. Aeq*x = beq
you can reformulate the problem differentiably, as follows:
    min  sum(b)
    s.t. -b(i) <= residual(x,i) <= b(i)
         Aeq*x = beq
where residual(x,i) is the i-th residual, x is the original vector of unknowns, and b is a further unknown vector of bounds that you add to the problem.
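A minimal sketch of this reformulation with fmincon (residual, x0, Aeq, and beq are placeholders; the anonymous-function deal trick returns the [c, ceq] pair fmincon expects):

    % Stack z = [x; b] and let fmincon handle the now-smooth problem.
    m = numel(residual(x0));                 % number of residuals
    obj = @(z) sum(z(end-m+1:end));          % minimise sum of the bounds b
    nonlcon = @(z) deal( ...
        [ residual(z(1:end-m)) - z(end-m+1:end);      %  r_i - b_i <= 0
         -residual(z(1:end-m)) - z(end-m+1:end)], ... % -r_i - b_i <= 0
        []);                                 % no nonlinear equalities
    Aeq_z = [Aeq, zeros(size(Aeq,1), m)];    % Aeq acts on x only
    z0 = [x0; abs(residual(x0))];            % start with b_i = |r_i(x0)|
    z = fmincon(obj, z0, [], [], Aeq_z, beq, [], [], nonlcon);
    x = z(1:end-m);                          % recover the original unknowns

If residual happens to be linear, the same construction turns the problem into a linear program, which linprog handles far more reliably (see the first suggestion above).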