I just tried conflate and extrapolate in akka-streams.
While conflate makes total sense to me, I don't get the use case for extrapolate.
Why should we create more work for the downstream when the upstream hasn't delivered anything new?
From the Scaladoc:
Allows a faster downstream to progress independent of a slower upstream.
Here's one example:
Game development
In video games, it's common to have at least two "loops": a logic/game loop and a rendering loop. Usually, the rate of the game loop (the "tick rate") is slower than the rate of the rendering loop (the "frame rate"). For example, a logic tick might occur 10 times a second, but the frame rate should usually be at least 60 frames per second. In order for there to be something to render in between ticks, game developers use either extrapolation or interpolation. As you might have guessed, the extrapolate function is well suited for extrapolation. Here's an example with a tick rate of 10 ticks per second and no frame rate limit:
Source.tick(0.millis, 100.millis, 0)
  .scan(initialGameState) { (g, _) => tick(g) }
  .extrapolate(extrapolateFn)
  .runForeach(render)
Now extrapolateFn just needs to return an iterator that provides extrapolated game states on demand:
def extrapolateFn(g: GameState) = Iterator.continually {
  // Compute how long it has been since `g` was created
  // Advance the state by that amount of time
  // Return the new state
}
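As a rough sketch of what that could look like (assuming a hypothetical GameState that records the wall-clock time of its last tick in lastTickNanos and exposes an advanceBy method; neither name comes from Akka or the snippet above):
def extrapolateFn(g: GameState): Iterator[GameState] =
  Iterator.continually {
    // How long has it been since `g` was produced by the game loop?
    val elapsedMillis = (System.nanoTime() - g.lastTickNanos) / 1000000L
    // Advance the known state by that much, e.g. position += velocity * elapsed
    g.advanceBy(elapsedMillis)
  }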
Given a sequence of numbers that trend over time, I would like to use Reactive Extensions to raise an alert when there is a sudden absolute spike or drop, e.g. 101.2, 102.4, 101.4, 100.9, 95, 93, 85... and then increasing slowly back to 100.
The alert would be triggered on the drop from 100.9 to 95. Each value has a timestamp, and I'm looking for an alert of the form:
LargeChange
TimeStamp
Distance
Percentage
I believe I need to start with Buffer(60, 1) for a 60-sample moving average (with a one-minute interval between samples).
Whilst that would give the average value, I can't assign an arbitrary % to trigger the alert, since this could vary from signal to signal - one may have more volatility than the other.
To get volatility I would then take a longer historical time frame, Buffer(14, 1) (these would be 14 days of daily averages of the same signal).
I would then calculate the difference between each value in the buffer and the 14-day average, square and sum these deviations, and divide by the number of samples.
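In other words, that buffer computation is just the variance, variance = (1/N) * Σ(x_i - mean)^2 over the 14 daily averages, and its square root is the standard deviation, which is the usual volatility measure.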
My questions are please:
How would I perform the above volatility calculation, or is it better to do this outside of Rx and update the volatility value once a day, external to the observable stream calculation? (This may make more sense, to avoid having to run 14 days' worth of 1-minute samples through it.)
How would we combine the fast moving average and the volatility level (updated once per day) to give alerts? I have seen Scan and DistinctUntilChanged in posts on SO, but can't work out how to put them together.
I would start by breaking this down into steps. (For simplicity I'll assume the original data source is an observable called values.)
1. Convert values into a moving averages observable (we'll call this averages here).
2. Combine values and averages into an observable that can watch for "extremes".
For step 1, you may be able to use the built-in Window method that Slugart mentioned in a comment or the similar Buffer method. A Select call after the Window or Buffer can be used to process the array into a single average value object. Something like:
averages = values.Buffer(60, 1)
    .Select((buffer) => { /* do average and std dev calculation here */ });
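As a rough sketch of what that Select body might look like (assuming values is an IObservable<double>; the anonymous-type result shape is my own choice, not anything prescribed by Rx):
// requires System.Linq and System.Reactive.Linq
var averages = values
    .Buffer(60, 1)
    .Select(buffer =>
    {
        // mean of the 60-sample window
        var mean = buffer.Average();
        // population standard deviation of the window
        var variance = buffer.Sum(v => (v - mean) * (v - mean)) / buffer.Count;
        return new { Mean = mean, StdDev = Math.Sqrt(variance) };
    });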
If you need sliding windows, you may have to implement your own operator, but I could easily be unaware of one that already exists. Scan along with a queue seems like a good basis for such an operator if you need to write it.
For step 2, you will probably want to start with CombineLatest followed by a Where clause. Something like:
extremes = values.CombineLatest(averages, (v, a) => new { Current = v, Average = a })
    .Where((value) => { /* check if value.Current is out of deviation from value.Average */ });
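Continuing the sketch above, the Where clause could be a simple deviation test (the 3-sigma threshold below is an arbitrary example, not something Rx prescribes):
var extremes = values
    .CombineLatest(averages, (v, a) => new { Current = v, Stats = a })
    .Where(x => Math.Abs(x.Current - x.Stats.Mean) > 3.0 * x.Stats.StdDev);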
The nice part of this approach is that you can choose between having averages computed directly from values inline, like we did here, or coming from some other source of volatility information, with minimal effect on the rest of the code.
Note that the CombineLatest call may cause two subscriptions to values, one directly and one indirectly via a subscription to averages. If the underlying implementation of values makes this undesirable, use Publish and RefCount to get around this.
Also note that CombineLatest will output a value each time either values or averages outputs a value. This means that you will get two events every time averages updates, one for the values update and one for the averages update triggered by the value.
If you are using sliding windows, that would mean a double update on every value, and it would probably be better to simply include the current value on the Scan output and skip the CombineLatest altogether. You would have something like this instead:
averages = values.Scan(initialWindow, (window, v) => { /* add v to the sliding window, drop the oldest sample, and attach the current value */ });
extremes = averages.Where((a) => { /* check if the current value is out of deviation for the window */ });
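Spelled out a little more, a Scan-based sliding window might look like this (the WindowState helper, the 60-sample window and the 3-sigma check are all assumptions for illustration; mutating the state inside Scan like this is only safe with a single subscription):
// Hypothetical helper holding the sliding window plus the value that produced it.
sealed class WindowState
{
    public Queue<double> Window { get; } = new Queue<double>();
    public double Current { get; set; }
}

var averages = values.Scan(new WindowState(), (state, v) =>
{
    state.Window.Enqueue(v);
    if (state.Window.Count > 60) state.Window.Dequeue();   // keep only the last 60 samples
    state.Current = v;
    return state;
});

var extremes = averages.Where(s =>
{
    var mean = s.Window.Average();
    var sd = Math.Sqrt(s.Window.Sum(x => (x - mean) * (x - mean)) / s.Window.Count);
    return sd > 0 && Math.Abs(s.Current - mean) > 3.0 * sd;
});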
Once you have extremes, you can subscribe to it and trigger your alerts.
Time.deltaTime gives you the time passed in the last frame.
Input.accelerationEvents contains the accelerometer readings that occurred during the last frame, each with the time since the previous reading.
I'd guess that after
float totalTime = 0f;
foreach (AccelerationEvent element in Input.accelerationEvents) {
    totalTime += element.deltaTime;
}
the result would be equal to Time.deltaTime, but it isn't. What am I missing?
The AccelerationEvent.deltaTime value is the amount of time since the previous sampling of the device's accelerometer. However, this sampling is not guaranteed to be synchronized with the game's frame rate (even though both aim for 60 Hz), and as such the sum of the deltaTime of all Input.accelerationEvents during a frame may not equal the Time.deltaTime of that frame.
The Unity documentation mentions something to this effect:
[...] In reality, things are a little bit more complicated – accelerometer
sampling doesn’t occur at consistent time intervals, if under
significant CPU loads. As a result, the system might report 2 samples
during one frame, then 1 sample during the next frame.
One way to visualize this is with the following (assume each dash is one arbitrary unit of time):
Frames completed:
1-----2-----3-----4-----5-----6-----7-----8-----9-----
Accelerometer samples made:
1-----2-----3-----4------5-----6---7-----8-----9-----
Note that while frame6 is being completed, both sample6 and sample7 are made. However, although frame6.deltaTime = 5, the sum sample6.deltaTime + sample7.deltaTime = 5 + 3 = 8. As a result, the times don't match up.
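If you want to see this for yourself, a minimal sketch along these lines (a hypothetical MonoBehaviour, not anything from the question) logs both values each frame:
using UnityEngine;

public class AccelTimingProbe : MonoBehaviour
{
    void Update()
    {
        float accelTime = 0f;
        foreach (AccelerationEvent e in Input.accelerationEvents)
            accelTime += e.deltaTime;

        // The two values drift apart because accelerometer sampling
        // is not synchronized with the frame rate.
        Debug.Log("frame dt = " + Time.deltaTime + ", summed accel dt = " + accelTime);
    }
}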
Hope this helps! Let me know if you have any questions.
Here's what they say in the Unity documentation regarding the accelerometer: http://docs.unity3d.com/Manual/MobileInput.html
Unity samples the hardware at a frequency of 60Hz and stores the result into the variable. In reality, things are a little bit more complicated – accelerometer sampling doesn't occur at consistent time intervals, if under significant CPU loads. As a result, the system might report 2 samples during one frame, then 1 sample during the next frame.
Also don't forget that
AccelerationEvent.deltaTime is the amount of time passed since the last accelerometer measurement.
And Time.deltaTime is the time in seconds it took to complete the last frame.
Those values are independent and there is no reason for them to be equal to each other.
I am new to wxWidgets and am looking to do a continuous real-time plot. I have done some research into the real-time (continuous) plotting tools available for wxWidgets.
I am most interested in using wxMathPlot for the plotting, with mpFXYVector for feeding in points.
(I know there's the Numerix library as well, but it seems like there isn't much documentation on it.)
However, I would like to do about 100 updates per second, i.e. 100 new points coming in every second.
Is this something that is feasible using wxMathPlot with mpFXYVector, or would this approach be too slow?
Or should OpenGL be considered?
Displaying a real-time graph of data updated at 100 Hz using wxMathPlot is feasible.
The following is a screenshot of an app using three wxMathPlot instances, all showing data that is being updated at 128 Hz.
If you want the sort of real time graph where the old data vanish on the left as the new data appear on the right, but the amount of data shown is constant at, say, the last 10 seconds, then mpFXYVector is awkward to use. For a discussion of how to deal with this, see this answer.
However, for a first pass, this framework should get you started:
// ... in the MyFrame constructor
vector<double> xs, ys;
mpFXYVector * data_layer;
mpWindow * graph;
// initialize
...
// Set up 100 Hz timer
...

MyFrame::OnTimer( ... )
{
    // add new data point(s)
    ...
    // copy data into layer
    data_layer->SetData(xs, ys);
    // redraw
    graph->Fit();
    /* Note that at this point we have requested a redraw,
       but it hasn't taken place yet.
       We need to return so that the windowing system can update the screen etc.
       We will be back in 1/100 sec to do it all again.
    */
}
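For completeness, the constructor wiring could look roughly like this (a sketch only; the member names, the use of wxID_ANY and the 10 ms interval are assumptions, not taken from the question):
// In the MyFrame constructor (sketch)
graph      = new mpWindow(this, wxID_ANY);
data_layer = new mpFXYVector(wxT("data"));
graph->AddLayer(data_layer);

myTimer = new wxTimer(this);
Bind(wxEVT_TIMER, &MyFrame::OnTimer, this);
myTimer->Start(10);   // 10 ms period, i.e. 100 Hz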
So, now the question is, how fast can we make this scheme go? I have experimented with a timer event handler that does nothing more than keep track of how many times it is called.
void MyFrame::OnTimer2(wxTimerEvent& )
{
    static int ticks = 0;
    ticks++;
    static clock_t start;
    if( ticks == 1 ) {
        start = clock();
    }
    if( ticks == 1000 ) {
        double duration = (double)(clock() - start) / CLOCKS_PER_SEC;
        wxMessageBox(wxString::Format("1000 ticks in %f secs\n", duration));
        myTimer->Stop();
        return;
    }
}
The fastest I can make this go, on a powerful desktop, is 66 Hz.
Does this matter?
IMHO, it does not. There is no way a human eye can appreciate a graph being updated at 100Hz. The important thing is that the graph shows data acquired at 100Hz, without any losses or lags, but the graph display does not need to be updated so frequently.
So, in the app that produced the screenshot at the top of this answer and other similar apps I have developed, the graph is being updated at 10Hz, which the wxWidgets framework has no trouble maintaining, and the data acquisition is occurring at a much higher frequency in another thread or process. Ten times per second, the graph display code copies the data that has been acquired in the meantime and updates the display.
I am confused by the hybrid modelling paradigm in Modelica. On one hand, events are useful, on the other hand, they are to be avoided. Let me explain my case:
I have a large model consisting of multiple buildings in a neighborhood that is simulated over one year. Initially, the model ran very slowly. Adding noEvent() around as many if-conditions as possible drastically improved the speed.
As the development continued, the control of the model got more complicated, and I have again many events, sometimes at very short intervals. To give an idea:
Number of (model) time events : 28170
Number of (U) time events : 0
Number of state events : 22572
Number of step events : 0
These events blow up the output (for correct post-processing I need the variables at events) and slow down the simulation. Moreover, I have the feeling that some of the noEvent(if ...) constructs lead to unexpected behavior.
I wonder if it would be a solution to force my events to occur at certain points in time and prohibit them in between? Ideally, I would like to trigger these 'forced events' based on certain conditions. For example: during the day they should occur every 15 minutes, at high solar radiation every minute, and during the night I don't want any events at all.
Is this a good idea to do? I guess this will be faster as many of the state events will become time events? How can this be done with Modelica 3.2 (in Dymola)?
Thanks in advance for all answers.
Roel
A few comments.
First, if you have a simulation with lots of events (relative to the total duration of the simulation), the first thing I would encourage you to do is use a lower order integrator. The point here is that higher-order integrators normally allow you to take longer time steps. But if those steps are constantly truncated by events, they just end up being really expensive.
Second, you could try a fixed-step integrator. Depending on the tool, it may implement this kind of "pool events and fire them all at once" approach in the context of fixed-time-step integration. But the specification doesn't really say anything about how tools should deal with events that occur between fixed time steps.
Third, another way to approach this would be to "pool" your events yourself. The simplest way I can imagine doing this would be to take all the statements that currently generate events and wrap them in a "when sample(...,...) then" clause. This way, you can make sure that the events are only triggered at specific intervals. This would be more portable than the fixed-time-step approach. I think this is what you were actually proposing in your question, but it is important to point out that it should not be based on time steps (the model has no concept of a time step) but rather on a model-specified sampling interval (which will, in practice, be completely independent of the time steps).
As you point out, using "sample(...,...)" will turn these into time events and, yes, this should be faster.
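As a minimal illustration of that idea (the model, the dummy signal and the 900 s sampling interval are made up; 900 s corresponds to the 15-minute daytime interval from the question):
model SampledControl "Evaluate a switching condition only at sampling instants"
  Real T = 20 + 5*sin(2*Modelica.Constants.pi*time/86400) "dummy temperature signal";
  discrete Real heaterOn(start=0, fixed=true);
equation
  when sample(0, 900) then
    // The comparison is only evaluated at the sampling instants,
    // so it generates time events instead of state events.
    heaterOn = if T < 21 then 1 else 0;
  end when;
end SampledControl;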
I'm not sure if the term real-time is being misused here, but the idea is that many players on a server have a city producing n resources per second. There might be a thousand such cities. What's the best way to reward all of the player cities?
Is the best way a loop like such placed in an infinite loop running whenever the game is "live"? (please ignore the obvious faults with such simplistic logic)
foreach(City c in AllCities){
    if(c.lastTouched < DateTime.Now.AddSeconds(-10)){
        c.resources += (DateTime.Now - c.lastTouched).Seconds * c.resourcesPerSecond;
        c.lastTouched = DateTime.Now;
        c.saveChanges();
    }
}
I don't think you want an infinite loop, as that would waste a lot of CPU cycles. This is basically a common simulation situation (see the Wikipedia article on simulation software), and there are a few approaches I can think of:
1. A discrete-time approach, where you increment the clock by a fixed amount and recalculate the state of your system. This is similar to your approach above, except that you do the calculation periodically and remove the 10-second if clause.
2. A discrete-event approach, where you have a central event queue, each event with a timestamp, sorted by time. You sleep until the next event is due and then dispatch it. E.g. an event could mean adding a single resource (see the Wikipedia article on discrete-event simulation).
3. Whenever someone asks for the number of resources, calculate it based on the rate, the initial time, and the current time. This can be very efficient when the number of queries is expected to be small relative to the number of cities and the elapsed time.
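A small sketch of the third approach (the class shape, property names and use of UtcNow are my own assumptions, not from the question):
class City
{
    public double ResourcesPerSecond { get; set; }
    private double storedResources;
    private DateTime lastSettled = DateTime.UtcNow;

    // Settle accrued production only when someone actually reads the value.
    public double Resources
    {
        get
        {
            var now = DateTime.UtcNow;
            storedResources += (now - lastSettled).TotalSeconds * ResourcesPerSecond;
            lastSettled = now;
            return storedResources;
        }
    }
}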
While you can store the last-ticked time per object, as in your example, it's often easier to just have a global timestep:
while(1) {
    currentTime = now();
    dt = currentTime - lastUpdateTime;
    foreach(whatever)
        whatever.update(dt);
    lastUpdateTime = currentTime;
}
If you have different systems that don't need such frequent updates:
while(1) {
    currentTime = now();
    dt = currentTime - lastUpdateTime;
    subsystem.timer += dt;
    while (subsystem.timer > subsystem.updatePeriod) { // need to be careful
        subsystem.timer -= subsystem.updatePeriod;     // that subsystem.update()
        subsystem.update(subsystem.updatePeriod);      // runs faster than
    }                                                  // subsystem.updatePeriod
    // ...
    lastUpdateTime = currentTime;
}
(which you'll notice is pretty much what you were doing on a per city basis)
Another gotcha is that with different subsystem clock rates, you can get overlaps (i.e. ticking many subsystems in the same frame), leading to inconsistent frame times, which can sometimes be an issue.