I am trying to solve a mixed-integer linear program via YALMIP using the Gurobi optimizer. I solve a problem whose result I already know. However, when I start MATLAB on my computer for the first time, the problem is solved in 0.5 seconds. I clear everything and run the code again, and the same result now takes 0.8 seconds, then 1 second, and so on. How can I fix this? I am clearing the workspace after each run!
Related
I solved an optimal power flow (OPF) problem modeled as a second-order cone program (SOCP). At first I solved it using the CPLEX solver from AMPL, and it took 0.08 s; then I solved the same problem with CPLEX for MATLAB (command cplexqcp) and it took 0.86 s. The times shown correspond only to the time spent in the solver (CPLEX). Does anyone know what causes such a time difference?
Time results for CPLEX/AMPL:
Elapsed AMPL time : 0.430s
Elapsed Solve time : 0.080s
Elapsed CPU time: 0.510s
Time results for CPLEX/MATLAB:
:
tic
[x,fval]=cplexqcp([],f,[],[],Aeq,beq,l,Qc,r,Li,Ls);
toc
:
Elapsed time is 0.860856 seconds.
Note: The problem has 542 variables.
Hard to say without access to the details, but my first guess would be preprocessing. Before passing the problem to the solver, AMPL will attempt to simplify it, e.g. by eliminating variables that depend on other variables. This can make a significant difference to the solution time.
(For large problems, it can also make a big difference to the data I/O time between AMPL and the solver, but with only 542 variables that's probably not a big issue.)
Another possibility is that AMPL and Matlab are invoking CPLEX with different options (e.g. different solution tolerances).
I have the following problem:
I have to use an ODE solver to solve a chemical reaction equation. The rate constants are functions of time and can change suddenly (a pulse from an electric discharge).
One way to solve this is to keep the step size very small, hmax < dt, but this results in high computational effort and is very time consuming. My question is: is there an efficient way to make this work? I thought about defining hmax(puls_ON) with puls_ON=True during the pulse and puls_ON=False between pulses. However, since the solver's step size grows over time, it may not even notice the pulse, because the interval between steps keeps growing (hmax = hmax(t)).
A time grid would be the best option, I think, but I don't think this is possible with odeint?
Or is it possible to somehow force the solver to integrate at specific points in time (e.g. t0 -> (hmax=False) -> tpuls_1_start -> (hmax=dt) -> tpuls_1_end -> (hmax=False) -> tpuls_2_start ...)?
thx
There is an optional parameter tcrit for odeint that you could try:
Vector of critical points (e.g. singularities) where integration care should be taken.
I don't know what it actually does but it may help to not simply step over the pulse.
If that does not work, you can of course manually split your integration into different intervals: integrate until tpuls_1_start, then restart the integration using the results of the previous segment as initial values.
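If you go the splitting route, the restart logic is only a few lines with scipy.integrate.odeint. Here is a minimal sketch; the rate function, pulse windows, and rate constants are made up purely for illustration:

import numpy as np
from scipy.integrate import odeint

def rhs(y, t, k_background, k_pulse, pulse_windows):
    # Toy one-species decay whose rate constant jumps during a pulse.
    in_pulse = any(t0 <= t <= t1 for (t0, t1) in pulse_windows)
    k = k_pulse if in_pulse else k_background
    return -k * y

pulse_windows = [(1.0, 1.001), (2.0, 2.001)]            # short discharge pulses
breakpoints = [0.0] + [t for w in pulse_windows for t in w] + [3.0]

y0 = np.array([1.0])
t_parts, y_parts = [], []
for t_start, t_end in zip(breakpoints[:-1], breakpoints[1:]):
    t = np.linspace(t_start, t_end, 50)
    # Restart on each sub-interval; inside a pulse you could additionally
    # cap the internal step size with hmax if the pulse is very short.
    seg = odeint(rhs, y0, t, args=(0.1, 50.0, pulse_windows))
    y0 = seg[-1]                                        # final state -> next initial state
    t_parts.append(t)
    y_parts.append(seg)

t_full = np.concatenate(t_parts)
y_full = np.concatenate(y_parts)

Because every pulse boundary is also a segment boundary, the solver is forced to evaluate exactly at those points and cannot step over the pulse.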
I want to simulate a model in Dymola in real time for HiL use. In the results I see that the simulation is advancing about 5% too fast.
Integration terminated successfully at T = 691200
CPU-time for integration : 6.57e+005 seconds
CPU-time for one GRID interval: 951 milli-seconds
I already tried increasing the grid interval to reduce the relative error, but the simulation still advances too fast. So far I have only read about approaches that reduce model complexity to allow simulation within the defined time steps.
Note that the simulation does keep up with real time and is even faster. How can I, in this case, match simulated time and real time?
Edit 1:
I used the Lsodar solver with the "Synchronize with realtime" option checked on the Realtime tab. I have the real-time simulation licence option. I use Dymola 2013 on Windows 7. Here is the result for a step size of 15 s:
Integration terminated successfully at T = 691200
CPU-time for integration : 6.6e+005 seconds
CPU-time for one GRID interval : 1.43e+004 milli-seconds
The deviation is still roughly 4.5%.
I did, however, not use inline integration.
Do I need hard real time or inline integration to improve these results? It should be possible to get a deviation lower than 4.5% using soft real time, or not?
Edit 2:
I took the Python27 block from the Berkeley Buildings library to read the system time and compare it with the simulation's progress. The result shows that 36 hours after the simulation start, the simulation slows down slightly (compared to real time). About 72 hours after the start of the simulation it starts getting about 10% faster than real time. In addition, the jitter in the result increases after those 72 hours.
Any explanations?
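For reference, the drift measurement itself amounts to something like the following Python sketch. It assumes the block calls a Python function once per sample with the current simulation time; the function name and calling convention are purely illustrative and need to be adapted to the actual block interface:

import time

_wall_start = None

def real_time_drift(sim_time):
    # Returns (elapsed wall-clock time) - (elapsed simulation time).
    # Positive values: the simulation is running slower than real time;
    # negative values: it is running too fast.
    global _wall_start
    if _wall_start is None:            # first call: remember the wall-clock origin
        _wall_start = time.time()
    return (time.time() - _wall_start) - sim_time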
Next steps will be:
-changing to a fixed-step solver (this might well be a big part of the solution)
-changing from the DDE server to an OPC server, which at the moment does not seem to be possible in Dymola 2013, however
Edit 3:
Nope... using a fixed-step solver does not seem to solve the problem. In the first 48 hours of simulation time the deviation is about equal to the deviation with a variable-step solver. In this example I used the Rkfix3 solver with an integrator step of 0.1.
Does nobody know how to get rid of those huge deviations?
If I recall correctly, Dymola has a special compilation option for real-time performance. However, I think it is a licensed option (not sure).
I suspect that Dymola is picking up the wrong clock speed.
You could use the "Slowdown factor" that is in the Simulation Setup, on the Realtime tab just below "Synchronize with realtime". Set this to 1/0.95.
There is a parameter in Dymola that you can use to set the CPU speed, but I could not find it just now; I will have another look for it later.
I solved the problem by switching to an embedded OPC server. The error between real time and simulation time in this case is shown below.
Compiling Dymola models with an embedded OPC server requires administrator rights (which I did not have before), and the active Dymola working folder must not be write-protected.
I am trying to run a real-time simulation in Simulink using Real-Time Workshop. The target is grt (I have tried rtwin, but my simulation refuses to compile for it). I need the simulation to run in real time, so that one second in the simulation lasts one second of real time. grt ignores real time and finishes the simulation as fast as possible. Is there any way to synchronise it?
I have tried http://www.mathworks.com/matlabcentral/fileexchange/3175 but could not get it to work (it does not compile).
Thank you for any suggestions.
Looks like it is impossible. I was able to slow down the execution by using the Sleep(time in ms) function from the WinAPI and the clock function from time.h, which looked quite good for low sample rates. However, when I increased the sample rate the Sleep function slept for too long, which resulted in errors, with one second in the simulation lasting more than one real-world second.
The idea was to say that one iteration period should last, say, 200 ms. Then time how long one iteration of the code takes using the clock function and call Sleep(200 - u), where u is the length of the iteration. The problem is that the Sleep function suspends the process and wakes it up when the OS decides to, not exactly after the time you pass as the argument.
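In outline, the pacing scheme looked like the following. This is a Python sketch of the idea for readability only; the real code was the C code generated by Real-Time Workshop, and the names here are illustrative:

import time

PERIOD = 0.200          # target length of one iteration, in seconds

def run_paced(step, n_steps):
    # Naive pacing: time one iteration, then sleep away the rest of the period.
    # This drifts at high sample rates because sleep() only guarantees a
    # minimum delay, and the OS timer granularity (roughly 10-16 ms on
    # Windows) is coarse, so every oversleep accumulates as lag.
    for _ in range(n_steps):
        start = time.perf_counter()
        step()                                   # one iteration of the model
        elapsed = time.perf_counter() - start
        if elapsed < PERIOD:
            time.sleep(PERIOD - elapsed)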
I know this is not a solution, but I am posting it so that anyone facing the same problem will not try this dead-end approach. I had to rewrite the simulation for rtwin and now it works fine.
Another idea would be to somehow use interrupts, but I guess it would be quite complicated and not worth the trouble.
This is my simple Fortran program:
program accel
  implicit none
  integer, dimension(5000) :: a, b, c
  integer :: i
  real :: t1, t2

  do i = 1, 5000
    a(i) = i + 1
    b(i) = i + 2
  end do

  call cpu_time(t1)
  do i = 1, 5000
    c(i) = a(i) * b(i)
  end do
  call cpu_time(t2)

  write (*,*) 'Elapsed CPU time = ', t2 - t1, ' seconds'
end program accel
but cpu_time shows 0.0000 seconds. Why?
Short answer
Use this to display your answer:
write(*,'(A,F12.10,A)')'Elapsed CPU time = ',t2-t1,' seconds.'
Longer answer
There can be at least two reasons why you get zero, as suggested in the answers by @Ernest Friedman-Hill and @Klas Lindbäck:
The computation is taking less than 0.00005 seconds
The compiler optimizes away the whole loop
In the first case, you have a few options:
You can display more digits of t2-t1 using a format like the one I gave above, or alternatively print the result in milliseconds: 1000*(t2-t1)
Add more iterations: if you do 50000 iterations instead of 5000, it should take ten times longer.
Make each iteration longer: you can replace your multiplication by a sequence of complicated operations possibly using math functions
In the second case, you can:
Disable optimization by passing the appropriate flag to your compiler (-O0 for gfortran)
Use c somewhere in your program after the loop
I compiled your program with gfortran 4.2.1 on OS X Lion and it worked out of the box, displaying the time in exponential notation; the format from the short answer worked fine too. I tried both enabling and disabling optimization, and it still worked.
The accuracy of cpu_time is probably platform dependent so that may also explain different behaviors across different machines, but with all this you should be able to solve your problem.
It doesn't take very long to do 5000 multiplications -- it may simply be taking less than one unit of cpu_time()'s resolution. Crank that 5000 up to 100000 or so, and then you'll likely see something.
The optimiser has seen that c is never read, so the calculation of c can be skipped.
If you print the value of c, the loop will not be optimised away.