I'm trying to build a system in Simulink, but I get errors about Algebraic Loops.
Could you please help?
The goal of this system is to observe the behaviour of a double pendulum with a spring attached to its lower part.
Here's my system: http://1drv.ms/1GPqeeQ
I can't post pictures, because I don't have enough points on Stack Overflow.
Yep, it's a common problem.
The problem is that Simulink tries to use a variable's value to calculate that same variable within a single simulation step.
You can solve this problem easily: you just need to add a Unit Delay block for this variable. Like this:
You can see I was using the variable Vd to calculate itself again at every step.
I added a Unit Delay, so Simulink now uses the value of Vd from the PREVIOUS step!
It works perfectly!
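For completeness, the same fix can also be scripted rather than done in the model editor. A minimal sketch, where the model name and the block names around Vd are hypothetical placeholders:

% Break the algebraic loop by routing the fed-back signal through a Unit Delay.
mdl = 'double_pendulum';                                   % hypothetical model name
load_system(mdl);
add_block('simulink/Discrete/Unit Delay', [mdl '/Vd_delay']);
delete_line(mdl, 'Vd_source/1', 'Vd_sink/1');              % hypothetical block names
add_line(mdl, 'Vd_source/1', 'Vd_delay/1', 'autorouting', 'on');
add_line(mdl, 'Vd_delay/1',  'Vd_sink/1',  'autorouting', 'on');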
I am doing parameter estimation in MATLAB using the lsqnonlin function.
In my work, I need to plot a graph showing the error as a function of the lsqnonlin iteration, so I need to know which iteration is running at each point in time. Could anybody tell me how to extract the iteration number while lsqnonlin is running?
Thanks,
You want to pass it an options structure with 'Display' set to either 'iter' or 'iter-detailed':
http://www.mathworks.com/help/optim/ug/lsqnonlin.html#f265106
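A minimal sketch of that (the residual function and start point are placeholders; on older releases optimset works the same way):

options = optimoptions('lsqnonlin', 'Display', 'iter');   % or 'iter-detailed'
x = lsqnonlin(@myResidualFun, x0, [], [], options);       % myResidualFun, x0 are yours

This only prints the iteration log to the command window; to capture the numbers for plotting, see the OutputFcn approach in the next answer.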
Never used it myself, but looking at the help of lsqnonlin, it seems there is an option to set a custom output function, which gets called during every iteration of the solver. Looking at the specification, it seems that the values optimValues.iteration and optimValues.fval get passed into the function, which are probably the things you are interested in.
You should thus define your own function with the right signature and, depending on your wishes, have it print to the command line, make a plot, save the intermediate results in a vector, etc. Finally, you need to pass this function as a function handle to the solver through the options structure ('OutputFcn', @your_outputfun).
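A minimal sketch of such an output function, with names of my own choosing; for lsqnonlin the residual norm is reported in optimValues.resnorm:

% --- my_outputfun.m ---
function stop = my_outputfun(x, optimValues, state)
    stop = false;                                   % never ask the solver to stop
    if strcmp(state, 'iter')
        fprintf('iteration %d: resnorm = %g\n', ...
                optimValues.iteration, optimValues.resnorm);
    end
end

% --- in your estimation script ---
options = optimoptions('lsqnonlin', 'OutputFcn', @my_outputfun);
x = lsqnonlin(@myResidualFun, x0, [], [], options);   % myResidualFun, x0 are yours

Instead of printing, the function could just as well append optimValues.resnorm to a vector (e.g. a persistent variable) that you plot afterwards.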
The simple way to do this would be:
1. Start with a low number of (maximum) iterations
2. Get the result
3. Increase the number of iterations
4. Get the result
5. If the maximum number of iterations was used, go to step 3
This is what I would recommend in most cases when performance is not a big issue.
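A rough sketch of that loop, warm-restarting the solver in small bursts (the residual function and start point are placeholders; note that restarting discards the solver's internal state, so the path can differ slightly from one long run):

x = x0;
resnorms = [];
for burst = 1:20
    options = optimoptions('lsqnonlin', 'MaxIterations', 5, 'Display', 'off');
    [x, resnorm, ~, exitflag] = lsqnonlin(@myResidualFun, x, [], [], options);
    resnorms(end+1) = resnorm;        % error after roughly 5*burst iterations
    if exitflag ~= 0                  % exitflag 0 means the iteration limit was hit
        break;                        % anything else: the solver stopped on its own
    end
end
plot(resnorms), xlabel('burst'), ylabel('residual norm')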
However, if you cannot afford to do it like this, try edit lsqnonlin and go digging until you find the point where the number of iterations is tracked. Then change the function to make sure you store the results you need at that point (don't forget to change it back afterwards).
The good news is that all relevant files seem to be editable, the bad news is that it is not so clear where you can find the current number of iterations. A quick search led me to fminbnd, but I did not manage to confirm that this is actually used by lsqnonlin.
Sorry about this noob question; I have never worked with MATLAB or signal processing before.
Here is what I want to do: I have a fixed-length byte array X, and I want to encode it into a sound file. I also want this process to be reversible, meaning the sound can be converted back to X with no error. I searched online and found the following code:
M = 16;
x = randint(5000,1,M);
y=modulate(modem.qammod(M),x);
My question is: is QAM the best way to do this, and how do I use it? A small code example would be really appreciated. Thank you!
Update #1: I tried to play y with sound(y), but MATLAB does not allow me to do so; it says I can only output floating-point numbers. How can I solve this? Thank you!
If you need to transmit over the air, you have quite a lot of work in front of you, I think. The most difficult problem to solve in a telecommunications system is often synchronization, meaning that your receiver has to know where the QAM symbols are placed in time. This is not easy. If you choose to go ahead, I agree with mtrw that you should try dsp.stackexchange.com.
Try, for example, to imagine a simple modulation scheme where each bit is converted to a short piece of sine wave whose frequency depends on whether the bit is one or zero. How would you go about decoding this on the receiver end? You need to detect the onset of the first bit and have some self-maintaining clock running on the receiver for synchronization, to find the bits in case they do not change, a.k.a. a PLL (Phase-Locked Loop). This could possibly be made easier by using Manchester coding, but you would still have to do quite a lot to get it running.
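Just to make that concrete, here is a toy version of such a scheme (all numbers are arbitrary); note there is no receiver here, which is exactly the hard part:

fs   = 8000;                          % sample rate
Tbit = 0.05;                          % 50 ms per bit
t    = (0:round(fs*Tbit)-1)/fs;       % time axis for one bit
bits = [1 0 1 1 0];                   % hypothetical payload
sig  = [];
for b = bits
    f   = 500 + 500*b;                % 500 Hz for a 0, 1000 Hz for a 1
    sig = [sig, sin(2*pi*f*t)];
end
sound(sig, fs);                       % real-valued, so it plays (unlike raw QAM symbols)
% Decoding requires knowing exactly where each 50 ms window starts, which is
% the synchronization problem described above.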
As you see, there are no easy solutions once you leave the safe MATLAB harbor :-)
Best regards
I have a problem with ode45. I've defined a function and am trying to solve it with ode45, but when I run it, it takes very long. I tried to display the "t" input in my function, and it showed the time step was 10^-8! [I do not get any error from ode45]
So I put a breakpoint at the end of my function, and after I step once, it goes into the sym.m file and calls the function delete(h).
function dxr=Dynfun(t,x)
...
dxr=[A;B]
After stepping, it goes to
function delete(h)
if builtin('numel',h)==1 && inmem('-isloaded','mupadmex') && builtin('numel',h.s)==1 && ~isa(h.s,'maplesym')
mupadmex(h.s,1);
end
end
and that's what makes it so slow, because it keeps looping in there.
What's the problem?! Thanks
Sounds like a "stiff" problem to me. I would recommend using a solver that is designed for stiff problems. I would also recommend trying a fixed-step solver with a small step size, ~0.001, and seeing what the output looks like. If you are breaking in sym.m, it sounds like you've got some symbolic logic going on in there. Is there a way you could take your symbolic expression and convert it to a plain MATLAB function?
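A minimal sketch of both suggestions, assuming tspan and x0 are whatever you currently pass to ode45:

[t, x] = ode15s(@Dynfun, tspan, x0);            % stiff solver instead of ode45

% Crude fixed-step (explicit Euler) march at dt ~ 0.001, just to inspect the output:
dt = 1e-3;
tf = tspan(1):dt:tspan(end);
xf = zeros(numel(x0), numel(tf));
xf(:,1) = x0(:);
for k = 1:numel(tf)-1
    xf(:,k+1) = xf(:,k) + dt*Dynfun(tf(k), xf(:,k));
end
plot(tf, xf)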
As indicated by macduff, your problem could be stiff. Try ode15s (which is designed for stiff problems) and see if the stepsize still decreases to unacceptably low values.
If that is indeed the case, then your problem might contain a singularity for the initial values you give it. If your problem has dimensions lower than 3, you can define a small event function to get insight into the values at each step, and plot them to see if there is indeed something problematic going on.
Then: do you really need symbolic math? The philosophy behind it is that it's easier for humans to read, which makes it terrible for computers to deal with :) If you can transform it into something non-symbolic, please do; this will noticeably increase performance.
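A minimal sketch of that conversion using matlabFunction, assuming the slow part of Dynfun is some symbolic expression dx_sym built from a symbolic time t_sym and a symbolic state vector x_sym (all hypothetical names):

t_sym = sym('t');
x_sym = sym('x', [n 1]);                                % n = length of your state vector
% ... dx_sym = your symbolic right-hand side in t_sym and x_sym ...
fh = matlabFunction(dx_sym, 'Vars', {t_sym, x_sym});    % plain numeric function fh(t, x)
% Do this ONCE, outside Dynfun, and call fh(t, x) inside it; the per-step
% symbolic evaluation (and the delete/mupadmex overhead above) then disappears.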
Also, more as a word of advice: delete is also a MATLAB built-in function. It is generally a bad idea to name your functions after MATLAB built-ins; it's confusing and can cause a lot of overhead while MATLAB is deciding which one to use.
I used MATLAB's fminsearch for a negative maximum-likelihood model for a binomially distributed function. I don't get any error notice, but the parameters I want to estimate always stay at their start values. Apparently, there is a mistake. I know I am asking a very general question, but is it possible that anybody has run into the same mistake and knows how to deal with it?
Thanks a lot,
@woodchips, thank you a lot. Step by step, I've tried to do what you advised me. First of all, I do already use (-log(likelihood)), so this is not the problem. I think I found the problem, but I still have some questions, if I may. I have a model(param) to optimize with start value paramstart = p1. This model is built for (-log(likelihood(F))), and my F is a vectorized function like F(t,Z,X,T,param,m2,m3,k,l). I have data (tdata,kdata,ldata), X and T are grids, Z is a function on this grid, and (m1,m2,m3) are given parameters. When I look at the value of F(tdata,Z,X,T,m1,m2,m3,kdata,ldata), I get a good output. But I think fminsearch treats F(tdata,Z,X,T,p,m2,m3,kdata,ldata) like a constant, and that's why I always get the start value back as the estimated parameter. I would be happy if you have any advice on how to fix that.
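For what it's worth, here is a minimal sketch (function and variable names are my guesses at your setup) of how the objective is usually wrapped so that fminsearch sees a function of the parameter only, with all data and grids captured by the anonymous function:

obj = @(p) negloglik_model(p, tdata, kdata, ldata, Z, X, T, m2, m3);   % hypothetical wrapper
[p_hat, fval, exitflag] = fminsearch(obj, p1);
% Quick check: if obj(p1) and obj(1.01*p1) return exactly the same number,
% p is being ignored somewhere inside the model, which matches your symptom.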
You have some options you can try to tweak; I'd start with the algorithm.
It's also problematic when the function value practically doesn't change around your start point. Maybe switching to the log-likelihood helps.
I always use fminunc or fmincon. They also allow providing the Hessian (typically better than an estimated one) or 'typical values', so the algorithm doesn't spend time in infeasible regions.
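A minimal sketch of that route, assuming negloglik(p) returns your negative log-likelihood (a gradient or Hessian can be added later via the options):

opts = optimoptions('fminunc', ...
                    'Algorithm', 'quasi-newton', ...   % or 'trust-region' once a gradient is supplied
                    'Display',   'iter');
[p_hat, fval] = fminunc(@negloglik, p1, opts);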
It is virtually always true that you should NEVER maximize a likelihood function, but ALWAYS maximize the log of that function. Floating point issues will almost always corrupt the problem otherwise. That your optimization starts and stops at the same point is a good indicator this is the problem.
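To make that concrete for a binomial model (k successes out of n trials, both hypothetical data vectors):

% Bad:  prod(binopdf(k, n, p))             underflows to 0 for even modest data sizes
% Good: minimize the NEGATIVE LOG-likelihood instead
nll   = @(p) -sum(log(binopdf(k, n, p)));
p_hat = fminsearch(nll, 0.5);              % 0.5 is just a starting guess for p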
You may well need to dig a little deeper than the above, but even so, the following is the test I recommend that all users of optimization tools run on every one of their problems BEFORE they throw a function into an optimizer. Evaluate your objective at several points in the vicinity of the start point. Does it yield significantly different values? If not, then look at why not. Are you creating a non-smooth objective to optimize, or a zero objective, i.e., zero to within the supplied tolerances?
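A minimal sketch of that test, with hypothetical names for your objective and start point:

p0 = paramstart;                                 % your start value
f0 = objective(p0);
for k = 1:5
    pk = p0 .* (1 + 0.01*randn(size(p0)));       % points in the vicinity of p0
    fprintf('trial %d: f = %.10g  (f0 = %.10g)\n', k, objective(pk), f0);
end
% If every trial prints essentially the same number, the objective is flat (or
% constant to within the tolerances) near p0 and the optimizer has nothing to work with.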
If it does yield different values but the solver still does not converge, then make sure you know how to call the optimizer correctly. Yeah, right, like nobody has ever made that mistake before. This is actually a very common cause of optimizer failure.
If it does yield good values that vary, and you ARE calling the optimizer correctly, then think if there are regions into which the optimizer is trying to diverge that yield garbage results. Is the objective generating complex or imaginary results?