Bulk conveyors are rounding batch sizes and losing material... What causes this and how do you fix it?

In my model I have bulk conveyors filling tanks throughout the model, and Delay blocks that are stopped with a delay.stopDelayForAll() command once the tanks become full. However, occasionally a tank fills up to 999.999 / 1000 and never completely fills, because some amount is rounded off and lost on the bulk conveyor (as shown in the console warning in the picture), so my agents in the Delay blocks get stuck and never leave. When this happens, the conveyor shows that it is still moving and still holds some material (on the order of 1E-6 to 1E-3 kilograms), but this material never actually flows into the tank.
I have a variable (type double) called d_conveyorThroughputTPH representing the throughput in tons per hour, and in the conveyor I set the tons/sec input to d_conveyorThroughputTPH/3600. I initially thought the machine precision of this division was causing the rounding error, but the issue persists even after changing that conveyor parameter to roundToDecimal(d_conveyorThroughputTPH/3600, 3).
My conveyor lengths are 'Defined by conveyor belt shape' and the speeds are 500 fpm.
Does anybody know what may be the cause of this issue, or how to solve it?
Thanks

The Fluid Library has some tolerance levels that can cause this problem:
https://anylogic.help/library-reference-guides/fluid-library/index.html#handling-numeric-errors
The biggest issue with this is that, in a model, it might happen with such an extremely low probability that it's easy to get confused.
It's not super easy to find a solution.
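One pragmatic workaround (a sketch, not an official AnyLogic fix) is to treat the tank as full once it is within a small tolerance of capacity, instead of waiting for an exact match. In plain Java terms, with the tolerance chosen to exceed the residue lost on the conveyor (1E-6 to 1E-3 kg in the question):

public class FullWithTolerance {
    // Tolerance in kg; must exceed the material lost to rounding (1E-6..1E-3 here).
    static final double EPS = 1e-2;

    static boolean isEffectivelyFull(double amount, double capacity) {
        return amount >= capacity - EPS;
    }

    public static void main(String[] args) {
        System.out.println(isEffectivelyFull(999.999, 1000.0)); // true
        System.out.println(isEffectivelyFull(990.0, 1000.0));   // false
    }
}

In the model, this comparison would live wherever the full-tank condition is checked (for example, a cyclic event that calls delay.stopDelayForAll() once the tank's amount is within EPS of capacity), rather than relying on the tank's own "full" trigger, which the lost residue prevents from ever firing.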

Related

Simulink counting switching frequency?

I need to create some kind of counter that counts how many times each of the transistors I have used (signal_1, signal_2, signal_3, signal_4) turns ON (and OFF) per second. I need to show the difference in switching frequency between a 2-level and a 3-level inverter; the signal is PWM. I have no clue how to do it, I'm really lost here.
This is my schematic (just 1/3 of it; the other 2/3 just duplicates this).

Matlab Simulink: while loop with subtraction

I am hoping somebody here will be able to help me out with my small issue in one of my Simulink/Matlab models. It is quite similar to a problem I discussed earlier, but a little more complicated, and now it is more of a Simulink issue than a Matlab one.
So I have a turbine whose speed is controlled by the gate's opening, hence by the control voltage. By controlling the gate opening I accelerate the turbine, and at some point in time I need to introduce a saturation effect (since I am testing the code now, this will be done with an external signal). This effect won't change the control voltage, but it affects other components of the system, so at the same control voltage the turbine's speed will go up. At the same time, I need to keep the speed at the same value as it was before the saturation effect (let's say it was 320 rpm). To do so, I need to decrease the control voltage and keep doing so until I reach the speed from before. There is no need to do it instantly (this approach will later be implemented in hardware), but it will be a nice way to check the algorithm in these synthetic tests.
In terms of the model, I was planning to use a while loop with the speed requirement "if speed > 320", again just to simplify things. To decrease the control voltage, I was planning to subtract 0.25 (u2) from the original 50 (% opening) at first, and then keep increasing this subtracted value by 0.25 until the speed drops below 320. I can't know the exact opening at which this requirement will be satisfied, hence I need some kind of algorithm to "track" this voltage.
So it should be something like this:
u2 = 0;
while speed > 320   % speed is updated by the model between iterations
    u2 = u2 + 0.25;
end
u2 is initially zero since we have a predefined initial control voltage. And obviously, once the motor's speed drops below 320, I need to keep the latest value of u2 (and of the control voltage).
Overall, it is a small piece of logic and should be done in Simulink (I don't want to introduce any other Fcn blocks into the model). I've never used While and If blocks in Simulink, but so far I have come up with this system. It's a simplified version of my model, but the control principle is the same.
We are getting a motor speed of 350, compared with 320 (the speed before "saturation"), and if our speed after saturation is higher, we need to reduce the control voltage. To trigger the while loop block I've decided to use a simple switch. The while block meanwhile is:
Definitely not the best implementation, but I tried a lot of different combinations without any real success. I always get the same error:
I tried using a step signal instead of the constant "7" to model the acceleration of the motor, and got the same error at the moment of acceleration above the 320 threshold. So it looks like the approach is almost right, but mathematically it fails to find a suitable solution. I've tried implementing a transport delay in the memory part of the while subsystem, but kept getting errors during compilation.
Are there any obvious (and not so obvious) mistakes? Or maybe I should have chosen another approach from the beginning… I really hope that somebody will be able to help. Thank you in advance and have a great day.
I do not think that you have used the While block correctly.
This is what I have done: I used a "MATLAB Function" block instead of the "While" block, as follows.
The function in the MATLAB Function block is:
function u2 = fcn(speed, u2d)
% u2d is the previous value of u2, fed back from the block's output.
% While the speed is above the 320 rpm target, keep stepping the
% correction up; otherwise hold the last value.
if speed > 320
    u2 = u2d + 0.25;
else
    u2 = u2d;
end
And these are the results I got (Scope 1 and Scope plots).
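To see why feeding back the previous value (u2d) matters, here is the same hold-and-accumulate logic as a minimal standalone Java sketch; the plant line speed = 350 - 10*u2 is an invented stand-in for the turbine response, purely for illustration:

public class HoldAndAccumulate {
    public static void main(String[] args) {
        final double TARGET = 320.0; // rpm to restore
        double u2 = 0.0;             // accumulated correction, starts at zero
        double speed = 350.0;        // speed right after the saturation effect
        while (speed > TARGET) {
            u2 += 0.25;                // step the correction, as in the question
            speed = 350.0 - 10.0 * u2; // hypothetical plant response (made up)
        }
        // The loop exits holding the latest u2 (here u2 = 3.00, speed = 320.0).
        System.out.printf("u2 = %.2f, speed = %.1f%n", u2, speed);
    }
}

In the Simulink version, the delayed feedback path plays the role of the loop variable: it carries the last u2 into the next sample so the correction accumulates instead of resetting.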
Edit
As you prefer a function-free model, the following may do the same.

Why are computer/game physics engines often non-deterministic?

Developing with various game physics engines over the years, I've noticed that on the same machine I observe widely different results in physics simulations between runs. Most recently, the Unity engine does this, even though physics are calculated at set intervals of time (FixedUpdate) -- as far as I can determine it should be completely independent of frame-rate.
I've asked this question on game forums before, and was told it was due to chaotic motion: see double pendulum. But, even the double pendulum is deterministic if the starting conditions are exactly controlled, right? On the same machine, shouldn't floating point math behave the same way?
I understand that there are problems with floating point math accuracy, but I understand those problems (as outlined here) not to be an issue on the same hardware -- isn't floating point inaccuracy still deterministic? What am I missing?
tl;dr: If running a simulation on the same machine, using the same floating point math(?), shouldn't the simulation be deterministic?
Thank you very much for your time.
Yes, you are correct. A program executed on the same machine will give identical results each time (at least ideally---there might be cosmic rays or other external things that affect memory, but I would say these are not of our concern). The CPU's arithmetic is deterministic, so a program that runs the same instructions on the same inputs will necessarily produce the same results (which is the reason it's so hard to make true random number generators)!
Most likely the randomness you see is implemented in the program with some random number generator, and the seed for the random numbers varies from run to run. Should you start the simulation with the same seed, you will see the same result.
Edit: I'm not familiar with Unity, but some further research seems to indicate that the FixedUpdate routine might be the problem.
Except for the audio thread, everything in unity is done in the main thread if you don't explicitly start a thread of your own. FixedUpdate runs on the very same thread, at the same interval as Update, except it makes up for lost time and simulates a fixed time step.
source
If this is the case, and the function itself looks somewhat like:
void physicsUpdate(double currentTime, double lastTime)
{
double deltaT = currentTime - lastTime;
// do physics using `deltaT`
}
Here we will naturally get different behaviour due to deltaT not being the same between two different runs. It depends on what other processes are running in the background, as they can delay the main thread. The function would be called irregularly, and you would observe different results from run to run. Note that these irregularities are mostly not due to floating point imprecision, but due to inaccuracies in the numerical integration. (E.g. velocity is often updated by v += a*deltaT, which assumes a constant acceleration since the last update. This is in general not true.)
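To see how much the step pattern alone matters, here is a tiny sketch (semi-implicit Euler with constant acceleration a = 1 over one second; the particular step splits are arbitrary examples):

public class StepSizeDemo {
    // Integrate v += a*dt; x += v*dt over the given steps, with a = 1.
    static double integrate(double[] steps) {
        double v = 0.0, x = 0.0;
        for (double dt : steps) {
            v += 1.0 * dt; // velocity update assumes constant a over dt
            x += v * dt;   // position update assumes constant v over dt
        }
        return x;
    }

    public static void main(String[] args) {
        // Same one-second interval, chopped up differently:
        System.out.println(integrate(new double[] {1.0}));      // prints 1.0
        System.out.println(integrate(new double[] {0.5, 0.5})); // prints 0.75
        System.out.println(integrate(new double[] {0.3, 0.7})); // prints ~0.79
    }
}

Every run of this program prints exactly the same numbers; the differences come purely from how the interval is subdivided, which is exactly what irregular deltaT values do to a physics simulation.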
However, if the function would look like this:
void physicsUpdate(double deltaT)
{
// do physics using `deltaT`
}
Every time you run a simulation using this version, you will get exactly the same result.
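For completeness, the usual way to get this second, deterministic form is the fixed-timestep "accumulator" pattern. A minimal sketch in Java (the 50 Hz step, the frame count, and the stub physics/render calls are illustrative assumptions, not Unity's implementation):

public class FixedStepLoop {
    static final double DT = 1.0 / 50.0; // fixed physics step (50 Hz, arbitrary)

    static void physicsUpdate(double deltaT) { /* advance the simulation by deltaT */ }
    static void render() { /* draw the current state */ }

    public static void main(String[] args) throws InterruptedException {
        double accumulator = 0.0;    // unsimulated wall-clock time
        long last = System.nanoTime();
        for (int frame = 0; frame < 200; frame++) { // stand-in for the main loop
            long now = System.nanoTime();
            accumulator += (now - last) / 1e9;
            last = now;
            while (accumulator >= DT) { // catch up in fixed-size steps
                physicsUpdate(DT);      // same deltaT on every call
                accumulator -= DT;
            }
            render();
            Thread.sleep(5);            // pretend to do frame work
        }
    }
}

Frame timing still varies between runs, but it only decides how many fixed steps run per frame, never the step size itself, so the sequence of physics states is reproducible.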
I haven't got much experience with Unity or its physics simulations, but I've found the following forum post, which links to an article suggesting it comes down to precision in the floating point calculations.
As you've mentioned, a lot of people seem to keep rehashing this question!
The forum also links to this blog post which may shed some light on the issue.

High latency when transmitting with HackRF One from GNU Radio

So I reverse-engineered an RC toy's wireless protocol and wrote a flow graph in GNU Radio Companion to reproduce the toy's commands.
The sink block is an osmocom sink, and the transmitting device is a HackRF One.
The problem I encountered is severe lag between pressing a button and the toy reacting, about 1-2 seconds.
Quickly pressing, for example, the "forward" button twice results in the toy jerking forward twice in quick succession as well, about a second after the presses. So it's not as if the flow graph itself is slow. The CPU usage is also fairly low. This looks like a buffering issue.
I'm pretty sure the flow graph itself reacts to button presses instantly, as debugging prints appear at the same time as the button is pressed/released, and square waves appear in the GUI scope sink instantly as well.
I tried lowering the number of HackRF buffers to one (by setting the device arguments to hackrf,buffers=1), but it didn't help.
I also set "Max Number of Outputs" to 10 for the flow graph, and it didn't make a difference either (I tried some other values, too). Given that the signal appears in the GUI scope much sooner than after 1 second, it shouldn't have helped anyway.
How can I reduce the latency then?
EDIT: I followed @Manos's suggestion and tried adjusting the sample rate.
Adding a rational resampler block with an interpolation of 8 (and adjusting the sink sample rate accordingly) AND setting hackrf,buffers=1 makes the latency almost non-existent.
However, reducing the output sample rate of my custom block to 500k and using 16x interpolation still causes noticeable lag, maybe 400-500 ms (still not as dramatic as when it's 1M at both source and sink). I'm not sure how to fix that. Unfortunately, running my custom block at 1M consumes 100% CPU and causes occasional underflows.
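For what it's worth, the numbers are consistent with simple buffer arithmetic: the latency is roughly the number of samples queued in the transmit buffers divided by the sample rate, which is why both fewer buffers and a higher sink sample rate (via the resampler) help. A back-of-envelope sketch in Java; the buffer count and size below are illustrative guesses, not HackRF or osmocom documentation:

public class BufferLatency {
    public static void main(String[] args) {
        int buffers = 16;               // hypothetical number of device buffers
        int samplesPerBuffer = 131_072; // hypothetical samples per buffer
        long queued = (long) buffers * samplesPerBuffer;
        for (double rate : new double[] {1e6, 8e6}) { // sink sample rates
            System.out.printf("%.0f Msps -> ~%.2f s of buffering latency%n",
                    rate / 1e6, queued / rate);
        }
    }
}

With these made-up numbers, 1 Msps corresponds to roughly 2.1 s of queued signal and 8 Msps to roughly 0.26 s, which matches the direction of the observed improvement.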

Dymola_InlineAfterIndexReduction

This question is related to an issue I encountered while playing with some blocks. Here's the model I have:
As you can see, there are two kinds of connections; the inputs of the first connection (from top to bottom) are u[1], u[2], u[3], and the other blocks are quite self-explanatory (all default values, except startTime = 5 for the step input block).
From my understanding, the first connection only outputs an angular velocity, but not an angle or angular acceleration (they are both zero), which is a bit unrealistic (I'll explain why I did this). The second connection outputs an angular velocity as well.
My problem is that in the second connection the clutch seems to work all right (after 5 seconds the clutch is engaged, with relative angular velocity w_rel = 0).
However, the first connection behaves quite differently. We can see that these are all flange connections, and the angular velocities are all calculated from flange_a/b.phi, so we should expect no angular velocity difference in the clutch no matter what the input (realExpression1) is. But the interesting thing is that when I simulate the model, the left flange of the clutch is not moving; the right flange is rotating instead. Here are two plots of my results.
(Plots: Connection1 and Connection2.)
Actually, I expected to see flange_a.phi and flange_b.phi both at zero. Then I accidentally removed the annotation __Dymola_InlineAfterIndexReduction = true in the Move block, and the model behaved as I expected. I would really appreciate it if anyone could help explain what I saw. Thanks a lot!
The documentation for the Move model clearly says: "The user has to guarantee that the input signals are consistent to each other."
In your case, they are not consistent: the Move block expects u[2] to be the time derivative of u[1] and u[3] to be the derivative of u[2], but here a nonzero velocity is fed alongside an identically zero angle. So I'm not too surprised you get a strange answer. It was not clear to me why you even attempted to go this route; you imply in the message that you would explain why, but I certainly didn't understand your motivation. I suspect that the Move model exists to allow the user to provide their own explicit functions for position, velocity, and acceleration, which Dymola will use during index reduction instead of generating those functions from the underlying equations. Unless you can provide consistent functions, you really shouldn't use this block at all.
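For instance, if the only signal you actually have is a constant angular velocity w, a consistent triple is u[1] = w*t, u[2] = w, u[3] = 0. A small illustrative sketch (plain Java, just to show the consistency requirement; not Dymola code):

public class ConsistentMoveInputs {
    public static void main(String[] args) {
        double w = 2.0; // desired constant angular velocity [rad/s]
        for (double t = 0.0; t <= 1.0; t += 0.25) {
            double u1 = w * t; // angle = integral of the velocity
            double u2 = w;     // angular velocity = d(u1)/dt
            double u3 = 0.0;   // angular acceleration = d(u2)/dt
            System.out.printf("t=%.2f  u=[%.2f, %.2f, %.2f]%n", t, u1, u2, u3);
        }
    }
}

Feeding u[1] = 0 while u[2] is nonzero, as in the first connection, breaks exactly this relationship, which is why the two flanges disagree.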
You should really be using a source where you specify only one of position, velocity, and acceleration. If that isn't possible, then I'm afraid you'll have to explain why, so we can try to understand what you are really trying to achieve here.