What time units does the simulation stop time use? Is it seconds or milliseconds? And is there any method to measure this time? Sometimes I feel that one unit of this time is not of constant length.
It's seconds. But Simulink does not run in real time, so one second of simulation time can take a lot less than a second of real time (if your model runs very fast) or a lot more (if your model runs very slowly).
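A quick way to see the difference for yourself is to time a run from the MATLAB command line; a minimal sketch, assuming a model named myModel:

    tic;
    sim('myModel', 'StopTime', '1');  % one second of *simulation* time
    wallTime = toc;                   % real (wall-clock) seconds elapsed
    fprintf('1 s of simulation time took %.3f s of real time\n', wallTime);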
If your model runs "too fast", you can use utilities such as Simulink Block for Real Time Execution, Simulink® Real Time Execution, Real-Time Blockset 7.1 for Simulink, Real-Time Pacer for Simulink or RTsync Blockset (there are plenty to choose from) to slow it down to real-time.
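All of those pacers boil down to the same idea: stall execution until wall-clock time catches up with simulation time. A minimal sketch of that principle, assuming it is called once per step with the current simulation time t (e.g. from a MATLAB Function block with tic, toc and pause declared extrinsic):

    function paceToRealTime(t)
        persistent t0
        if isempty(t0)
            t0 = tic;          % wall-clock reference at the first call
        end
        lag = t - toc(t0);     % how far simulation time is ahead of real time
        if lag > 0
            pause(lag);        % stall until real time catches up
        end
    end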
I'm trying to simulate some electronics in Simulink: current-mode PWM, so I need to compare the MOSFET current with a constant.
I tried using the "Relational Operator" and "Compare To Zero" blocks, with the same result.
In continuous simulation mode, the comparison fires erratically, off by around ±5e-6 s, which is no help when the PWM period is 20e-6 s.
When I run it in discrete mode with a 5e-8 s sample time, it works all right.
In tutorials, all electronics simulations run in continuous mode, and the computation blocks seem to work correctly, but not these comparison blocks. Are there some settings for this?
The system is based on Simscape / Electrical / Specialized Power Systems blocks, like this example: https://www.mathworks.com/help/physmod/sps/ug/boost-converter.html
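For reference, the discrete setup that worked above can also be applied programmatically; a hedged sketch, where myPWMModel is a placeholder model name:

    mdl = 'myPWMModel';
    load_system(mdl);
    set_param(mdl, 'SolverType', 'Fixed-step', ...
                   'Solver',     'FixedStepDiscrete', ...
                   'FixedStep',  '5e-8');   % the sample time that worked
    sim(mdl);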
I've created the following model:
The simulation works as expected, but it is immensely slow. When listening to the output, there's just a small burst of noise every once in a while, and the T indicator barely increases by 0.005 per second. I understand the software has to run audio samples through the algorithm constantly, but the simulation being this slow makes me concerned about using it in practice, since it will eventually have to process a microphone input on a Raspberry Pi in real time.
Am I using the wrong blocks? Is my signal set up in a bad manner? What can I do to increase the performance of the system?
EDIT - Info from my profiler:
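For anyone reproducing this: the profiler data can also be captured programmatically; a sketch, with myAudioModel as a placeholder model name:

    mdl = 'myAudioModel';
    set_param(mdl, 'Profile', 'on');   % enable the Simulink Profiler
    sim(mdl);                          % run; the profile report is generated
    set_param(mdl, 'Profile', 'off');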
I have a Simulink model with a master clock of 4410 Hz. I know for a fact that the computation time of some algorithms (e.g. cubic spline interpolation on a 4410-sample frame being accumulated in real time) is much longer than the master clock period (the spline takes approximately 0.7 seconds to compute). I would expect Simulink to output the frame elements AFTER the initial 1 second plus a propagation delay (as in hardware description languages, e.g. VHDL), but it actually starts outputting the elements of the frame right after one second (the length of one frame, 4410/4410 = 1 second). This wouldn't be a problem if my output values weren't unexpected/wrong.
How does Simulink build the simulation in this case? It would appear that it stops the simulation for larger computation times, then continues it afterwards.
A Simulink simulation assumes infinite computation capacity; it does not simulate computation times. It does not stop the simulation, and it does not use a real clock at all. While Simulink is a bit more complicated because of its different solvers, you can take a look at discrete-event simulation, which gives a simple example of isolating the simulation clock from your real clock.
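A toy illustration of that point: the simulation clock below advances by fixed ticks no matter how long each update takes in real (wall-clock) time. Ts matches the 4410 Hz master clock from the question:

    Ts = 1/4410;             % simulated master clock period
    t  = 0;                  % simulation clock
    for k = 1:4410           % one simulated second
        % ... block updates for tick k go here; even if this body took
        % 0.7 wall-clock seconds (like the spline), t still advances
        % by exactly Ts ...
        t = t + Ts;
    end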
I have an agent-based simulation (using the Java-based Repast) that generates a time series in its output for my different treatments. I am measuring performance through time, and at each time tick the performance is the mean of 30 runs (30 samples). In all of the treatments the performance starts near 0 and ends at 100%, but at different speeds. I was wondering if there is any stochastic model or probabilistic way to compare the speed or growth of these time series. I want to find out which one grows "significantly" faster.
Thanks.
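One possible approach (a sketch in MATLAB, since the rest of this page is MATLAB-centric, and not Repast-specific): fit a logistic growth curve to each treatment's mean series and compare the fitted rate r across treatments, e.g. via confidence intervals built from the 30 runs. Here ticks and perf are placeholder vectors of time ticks and mean performance, and lsqcurvefit requires the Optimization Toolbox:

    % P(t) = 100 / (1 + exp(-r*(t - t0))), parameters p = [r, t0]
    logistic = @(p, t) 100 ./ (1 + exp(-p(1) .* (t - p(2))));
    p0 = [0.1, median(ticks)];                  % rough initial guess
    p  = lsqcurvefit(logistic, p0, ticks(:), perf(:));
    fprintf('growth rate r = %.4f, midpoint t0 = %.1f\n', p(1), p(2));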
I profiled the sim(net,input) function (part of the Neural Network Toolbox) and noticed it spends a relatively large amount of time in the initialization phase, calling net=obj2struct(net) every time sim is called. Is there any way (besides writing my own ad-hoc sim function) to pass the already-converted struct-type net as a parameter, so as to avoid wasting time on the conversion at every call? (Convert once, run multiple times.)
This would be very useful for relatively small networks (like the one I'm using), whose conversion takes more time than the simulation itself.
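One workaround sketch, assuming a reasonably recent Neural Network Toolbox (R2013b or later): generate a standalone function for the trained net once with genFunction, then call that in the loop, which avoids the per-call object-to-struct conversion. myNetFcn is a placeholder name:

    genFunction(net, 'myNetFcn', 'MatrixOnly', 'yes');  % convert once
    y = myNetFcn(x);   % repeated calls: no obj2struct overhead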