Controlling events in a hybrid Modelica model - simulation

I am confused by the hybrid modelling paradigm in Modelica. On the one hand, events are useful; on the other hand, they are to be avoided. Let me explain my case:
I have a large model consisting of multiple buildings in a neighborhood that is simulated over 1 year. Initially, the model ran very slowly. Adding noEvent() around as many if-conditions as possible drastically improved the speed.
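To illustrate what I mean, here is a minimal, made-up example of the noEvent() idiom (the variable names and values are invented; the expression is continuous at the switching point, so suppressing the event is harmless here):

```modelica
model NoEventExample "Minimal made-up sketch of the noEvent() idiom"
  Real dT = 5*sin(2*Modelica.Constants.pi*time/86400) "dummy temperature difference";
  parameter Real k = 10 "dummy heat transfer coefficient";
  Real Q_flow "one-way heat flow";
equation
  // The if-expression is evaluated smoothly instead of generating a state event.
  Q_flow = noEvent(if dT > 0 then k*dT else 0.0);
end NoEventExample;
```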
As development continued, the control of the model became more complicated, and I again have many events, sometimes at very short intervals. To give an idea:
Number of (model) time events : 28170
Number of (U) time events : 0
Number of state events : 22572
Number of step events : 0
These events blow up the output (for correct post-processing I need the variables at events) and slow down the simulation. Moreover, I have the feeling that some of the noEvent(if ...) constructs lead to unexpected behavior.
I wonder whether it would be a solution to force my events at certain instants and prohibit them in between. Ideally, I would like to trigger these 'forced events' based on certain conditions. For example: during the day they should occur every 15 minutes, at high solar radiation every minute, and during the night I don't want events at all.
Is this a good idea? I guess it will be faster, as many of the state events will become time events. How can this be done with Modelica 3.2 (in Dymola)?
Thanks in advance for all answers.
Roel

A few comments.
First, if you have a simulation with lots of events (relative to the total duration of the simulation), the first thing I would encourage you to do is use a lower-order integrator. The point here is that higher-order integrators normally allow you to take longer time steps. But if those steps are constantly truncated by events, they just end up being really expensive.
Second, you could try fixed-step integrators. Depending on the tool, they may implement this kind of "pool events and fire them all at once" approach in the context of fixed-step integration. But the specification doesn't really say anything about how tools should deal with events that occur between fixed time steps.
Third, another way to approach this would be to "pool" your events yourself. The simplest way I could imagine doing this would be to take all the statements that currently generate events and wrap them in a "when sample(...,...) then" clause. This way, you could make sure that the events are only triggered at specific intervals. This would be more portable than the fixed-step approach. I think this is what you were actually proposing in your question, but it is important to point out that it should not be based on time steps (the model has no concept of a time step) but rather on a model-specified sampling interval (which will, in practice, be completely independent of the time steps).
As you point out, using "sample(...,...)" will turn these into time events and, yes, this should be faster.
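As a rough illustration of the pooling idea (all names and numbers below are made up, not taken from your model), wrapping the comparison in a when sample(...) clause means it is only evaluated at the 15-minute time events:

```modelica
model SampledShading "Made-up sketch: pool a comparison into 15-minute time events"
  Real irradiance = max(0.0, 800*sin(2*Modelica.Constants.pi*time/86400)) "dummy solar radiation";
  discrete Boolean shadingOn(start=false, fixed=true);
equation
  when sample(0, 900) then
    // The relation is only evaluated at the sampled time events,
    // so it does not generate state events in between.
    shadingOn = irradiance > 250;
  end when;
end SampledShading;
```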

Related

Machine Learning to predict time-series multi-class signal changes

I would like to predict the switching behavior of time-dependent signals. Currently the signal has 3 states (1, 2, 3), but it could be that this will change in the future. For the moment, however, it is absolutely okay to assume three states.
I can make the following assumptions about these states (see picture):
1. the signals repeat periodically, possibly with variations depending on the time of day.
2. the duration of state 2 is always constant and relatively short for all signals.
3. the durations of states 1 and 3 are also constant, but vary between the different signals.
4. the switching sequence is always the same: 1 --> 2 --> 3 --> 2 --> 1 --> [...]
5. there is a constant but unknown time reference between the different signals.
There is no constant time reference between my observations for the different signals. They are simply measured one after the other, but always at different times.
I am able to rebuild my model periodically after I have obtained more samples.
I have the following problems:
I can only observe one signal at a time.
I can only observe the signals at different times.
I cannot trigger my measurement on the state transition. That means that when I measure, I am always "in the middle" of a state. Therefore I don't know when this state started, nor exactly when it will end.
I cannot observe a certain signal for a long duration, so I am not able to observe a complete period.
My samples (observations) are spread widely over time.
I would like to get a prediction either for the state change or for the current state at the current time. It is likely that I will never have measured my signals at the requested time.
So far I have tested the TimeSeriesPredictor from the ML.NET Toolbox, as it seemed suitable to me. However, as far as I can tell, this algorithm requires that you always pass in the data of only one signal. This means that assumption 5 is not included in the prediction, which is probably suboptimal. Also, in this case I had the problem that the prediction did not change when I queried multiple predictions, even though it should change in a time-dependent way. This behaviour led me to believe that only the order of the values enters the model, but not the associated timestamp. If I have understood everything correctly, then exactly this timestamp is my most important "feature"...
So far I have not done any tests with regression-based approaches, e.g. FastTree, since my data is not linear but keeps changing states. Maybe this assumption is not valid and regression-based methods could also be suitable?
I also don't know whether a multiclass classifier is required, because I understood that the TimeSeriesPredictor would also be suitable for this, since it works with the Single data type. Whether the prediction is 1.3 or exactly 1.0 would be fine for me.
To sum it up:
I am looking for an algorithm which is able to recognize the switching patterns based on loose and widely spread samples. It would be okay to define boundaries, e.g. state 3 of signal 1 will never last longer than 30 s, or state 1 of signal 3 will never last longer than 60 s.
Then, after the algorithm has obtained an approximate model of the switching behaviour, I would like to request a prediction of a certain signal's state at a certain time.
Which methods can I use to get the best prediction, preferably using the ML.NET toolbox or based on matlab?
Not sure if this is quite what you're looking for, but if detecting spikes and changes in signals is what you need, check out the anomaly detection algorithms in ML.NET. Here are two tutorials that show how to use them:
Detect anomalies in product sales
Spike detection
Change point detection
Detect anomalies in time series
Detect anomaly period
Detect anomaly
One way to approach this would be to first determine the periodicity of each of the signals independently. This could be done by looking at the frequency distribution of time differences between measurements of state 2 only, separately for each signal.
This will give a multimodal distribution. The shortest time difference will be the duration of the switching event (after discarding time differences shorter than the maximum duration of state 2). The peak at the second shortest difference will be the duration between the end of one switching event and the start of the next.
Once you have the periodicity calculations for the three signals, you can simply calculate the differences between them. Given that you have the timestamps of the state-2 measurements for each signal, you should be able to calculate the switching times for all other signals.
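A rough sketch of that first step (Python/NumPy is used here purely for illustration; the bin count and the state-2 duration cut-off are placeholders you would tune on real data):

```python
import numpy as np

def estimate_period(t_state2, max_state2_duration):
    """Estimate a signal's periodicity from the times (s) at which it was seen in state 2."""
    t = np.sort(np.asarray(t_state2, dtype=float))
    # all pairwise time differences between state-2 observations
    i, j = np.triu_indices(len(t), k=1)
    diffs = t[j] - t[i]
    # drop differences that fall inside a single state-2 visit
    diffs = diffs[diffs > max_state2_duration]
    # the distribution of the remaining differences is multimodal; its peaks reveal
    # the recurring intervals between state-2 visits
    counts, edges = np.histogram(diffs, bins=200)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers[np.argmax(counts)]   # crude estimate: the most frequent difference
```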

Suggestions for optimising FPGA design

I need to figure out optimisations for this FPGA design. I've got a few ideas and I'd like to know if they sound reasonable for my design. I'd also like to ask if anyone has any other ideas to improve my design's efficiency.
The design I have to optimise is an ensemble of neurons; I've included two images below.
My current ideas
Add pipeline registers between each neuron and each adder
'Register the inputs and outputs' by inserting registers in-between each logic block
Convert the adder tree into an adder chain
Use time division multiplexers to share the LUTs between logic blocks
Do my current ideas for improving performance make any sense? I don't know very much about FPGAs at all, so I'm not sure if my optimisations will make much of an improvement or if they even make sense.
Any help would be greatly appreciated.
Links to PDFs of my neuron and ensemble (the image quality is higher):
https://francismcnamee.com/pdfs/neuron_ensemble.pdf
https://francismcnamee.com/pdfs/single_neuron.pdf
Ensemble of neurons (Each subsystem is a single neuron, the design of each neuron is shown below)
A single neuron
To start with "less area usage and/or faster speed": forget about the "and". You can optimise for area or for speed; both at once will not work.
"Use time division multiplexers to share the LUTs between logic blocks"
Multiplexers are also built from LUTs, so you lose area before you gain some. Then the TDM needs a controller, and the interim results need to be stored and retrieved. All in all it is not trivial, and I would only do it if you are rather good at logic design. You may gain area, but you will lose speed.
Convert the adder tree into an adder chain.
No, don't touch the adder tree. The FPGA synthesis tool will select the optimal adder configuration for you. It will balance area against speed and come up with a much better result than you can yourself.
In fact this applies to every part of the design: let the synthesis tool do its work. You will not be able to outperform it.
Add pipeline registers between each neuron and each adder
'Register the inputs and outputs' by inserting registers in-between each logic block
Sorry again but: Nope! Working with registers is not that simple.
You need to balance the registers. Ideally the logic delay between each pipeline stage should be the same.
Let's say your multiplier takes** 10 ns and an adder takes 3 ns. Then you should place a pipeline stage after a set of three adders: each stage then has a delay of roughly 10 ns, and the total delay will be ~20 ns. If you placed a pipeline stage after each adder, the clock would still be limited by the 10 ns multiplier stage and the total delay would be ~40 ns.
Now you get to the core of speeding up a design: do you use 4 pipeline stages so you can run at 200 MHz, or 2 pipeline stages and run at 100 MHz? In both cases the total latency through the pipeline is the same; what changes is how often you can feed new data in.
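To put rough numbers on that (using the made-up figures above), the latency is simply the number of stages times the clock period:

$$\text{latency} = N_{\text{stages}} \times T_{\text{clk}}: \quad 4 \times 5\,\text{ns} = 2 \times 10\,\text{ns} = 20\,\text{ns}$$

while the 200 MHz version can accept a new input twice as often.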
Beware that each register stage also costs you time: you need to meet the setup time of the register. As such, the fastest design is the one with no registers: the data falls through at the maximum speed. But then you may need to wait a long time before you can present the next set of data.
As you may gather: balancing registers is not easy and is rather an art. The best way would be to run the design without any registers through the synthesis tool, then run a timing analysis on it and look at the worst-case timing path. From that, try to figure out where to put the register stages. But again, that is easier said than done. To me, reading those timing analysis reports is easy, but to a novice they might seem like abracadabra.
Sorry if I leave you hanging here, but unfortunately there is no "magic trick" in these cases. Ideally you could let an experienced designer play with your code for a few hours and see what (s)he can do.
**The numbers I use have been made up

MATLAB timer object pitfalls and poor usage

I created a timer that executes every 0.1 seconds. It calls a function that reads data and then updates the property of an object. When I start the timer, MATLAB displays the "Busy" signal at the bottom of the command window. MATLAB becomes unresponsive and I cannot halt the timer using the stop() function. My only recourse is to use Ctrl-C.
I determined that the problem was that the processing time of the timer callback function was longer than the calling period, and I presume no other MATLAB code could squeeze into the stack/queue. This makes me somewhat worried about relying upon timers. My goal is to continuously make measurements from several devices, store them in an object, and have MATLAB do other things in between these measurements. Also, I cannot afford to miss a measurement.
I am creating an app that responds to user input and provides the user with real-time information, so I chose a fast period thinking it would produce a snappy UX. Since I am committed to using MATLAB, I could not think of a better way of implementing this functionality than using a timer object. So the first question is: do timer objects seem like the right tool for the job I describe above?
Second, if I am to use timer objects would someone please share their experience about common mistakes or pitfalls using timers? Or has someone any advice on how to best implement timer objects? Is there a practical limit to the number of timer objects that can be used simultaneously? What is the best way to determine the optimal frequency of a timer object?
Thanks!
I would think that 0.1 seconds is pushing it for reliability, especially if you have multiple timers going at once, and especially if you want a user interface to be responsive at the same time as well.
MATLAB is basically single-threaded. There are exceptions, for example the lower level math routines call BLAS in a multithreaded way that speeds them up a lot. You can also write MEX code in C that is multithreaded, and call that from MATLAB. But there's basically one true thread that all your code runs on.
Timer objects are also an exception in a way. When you create a timer object, underneath that there's a Java timer object, which runs on a separate Java thread. But when any of its callbacks fire, it calls back into MATLAB to execute them, which happens on the one true thread.
If you cannot afford to miss a measurement, you'll need to set the BusyMode property of your timers to queue rather than the default drop, which will mean that if any of the callbacks take longer than 0.1 seconds to execute, they will back up - and you have to fit in any user interface actions as well.
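For what it's worth, a minimal sketch of such a timer (the callback here just prints the event time; in your case it would read the devices and store the result):

```matlab
% Fixed-rate timer; 'queue' makes MATLAB queue overlapping callbacks instead of dropping them.
t = timer('ExecutionMode', 'fixedRate', ...
          'Period',        0.1, ...
          'BusyMode',      'queue', ...
          'TimerFcn',      @(obj, evt) disp(datestr(evt.Data.time)));
start(t);
pause(1);    % MATLAB is free to do other work here; callbacks run on the main thread
stop(t);
delete(t);
```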
In addition, MATLAB doesn't (and couldn't) make any real guarantees about the precision of how fast or regularly the timer callbacks will execute. If Windows suddenly decides to run a virus scan, or update itself etc, MATLAB will lose priority and that will mess things up. If you were firing timers every 10 seconds, or even every 1 second, it's in all likelihood going to be roughly accurate. But if you're firing them every millisecond, you can't expect it to be reliable - you'd need a proper real time environment for that. 0.1 seconds seems borderline to me, and I'd expect its reliability to depend on what exactly you were doing in those 0.1 seconds, and what else was going on as well (and what computer you're running, as well).
To answer your last questions (max number of timers, optimal timer frequency etc) - there's no general answer, just try it and work out what happens with a range of values in your particular situation.
If it turns out that it's not reliable enough, you could either try:
Doing the data acquisition in another faster way (e.g. in C), maybe in a separate process, then calling that from MATLAB via MEX, perhaps with some sort of buffering to smooth things out.
Turning to some of the other MathWorks products (e.g. Simulink, Simulink Coder, some of the System Toolboxes) that are designed for developing proper real-time systems.
Hope that helps!

Processing accelerometer data

I would like to know if there are some libraries/algorithms/techniques that help to extract the user context (walking/standing) from accelerometer data (extracted from any smartphone)?
For example, I would collect accelerometer data every 5 seconds for a definite period of time and then identify the user context (ex. for the first 5 minutes, the user was walking, then the user was standing for a minute, and then he continued walking for another 3 minutes).
Thank you very much in advance :)
Check out the new activity recognition APIs:
http://developer.android.com/google/play-services/location.html
It's still a research topic; please look at this paper, which discusses the algorithms:
http://www.enggjournals.com/ijcse/doc/IJCSE12-04-05-266.pdf
I don't know of any such library.
It is a very time-consuming task to write such a library. Basically, you would build a database of the "user contexts" that you wish to recognize.
Then you collect data and compare it to those in the database. As for how to compare, see Store orientation to an array - and compare; the same holds for accelerometer data.
Walking/running data is analogous to heart-rate data in a lot of ways. In terms of filtering the noise and getting smooth peaks, look into noise filtering and peak detection algorithms. The following is used to obtain heart-rate information for heart patients, and it should be a good starting point: http://www.docstoc.com/docs/22491202/Pan-Tompkins-algorithm-algorithm-to-detect-QRS-complex-in-ECG
Think about how you want to filter out the noise and detect peaks; the filters will obviously depend on the raw data you gather, but it's good to have a general idea of what kind of filtering you'd want to do on your data. Then think about what needs to be done once you have filtered data. In your case, think about how you would go about designing an algorithm to find out when the data indicates activity (like walking, running, etc.) and when it shows the user being stationary. This is a fairly challenging problem to solve, once you consider the dynamics of the device itself (how it's positioned when the user is walking/running) and the fact that there are very few (if any) benchmarked algorithms that do this with raw smartphone data.
Start with determining the appropriate algorithms, and then tackle the complexities (mentioned above) one by one.
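As a very rough illustration of the filter-then-detect idea (this is a simple moving-average filter plus peak counting, not the Pan-Tompkins algorithm itself; all thresholds are made-up placeholders you would tune on real data):

```python
import numpy as np
from scipy.signal import find_peaks

def classify_windows(acc_xyz, fs, window_s=5.0, min_steps=4):
    """Label each 5-second window of accelerometer data as 'walking' or 'standing'."""
    magnitude = np.linalg.norm(np.asarray(acc_xyz, dtype=float), axis=1)
    magnitude -= magnitude.mean()                       # crude removal of the gravity offset
    k = max(1, int(0.2 * fs))                           # ~0.2 s moving-average low-pass filter
    smooth = np.convolve(magnitude, np.ones(k) / k, mode="same")
    win, labels = int(window_s * fs), []
    for start in range(0, len(smooth) - win + 1, win):
        peaks, _ = find_peaks(smooth[start:start + win],
                              height=0.5,                      # minimum peak height (placeholder)
                              distance=max(1, int(0.3 * fs)))  # at least ~0.3 s between steps
        labels.append("walking" if len(peaks) >= min_steps else "standing")
    return labels
```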

real-time in context of a game

I have a problem grokking the concept of real-time (IMO badly named, since it has different meanings in different contexts). I understand real-time software as software where time is a key variable. Events must occur at a given time. Say, a railway switch changes at 15:02, and the next one must change at 15:05 no matter what.
But how about this example: in a game, when the player's FPS drops below 16, the game exits and tells the user to upgrade their hardware or kill other applications. So when one iteration of the game loop takes more than 1/16 of a second, the output of the program is completely different.
Is it real-time(ish)? Can it be considered Real Time Computing?
Your question is hard to understand, are you referring to Real Time Computing, or simulating real time, or something completely different?
Simulating real time: It is possible to simulate real-time in a game by polling for events. Store the time of an event, and then when it comes time to render a frame, the game should repeatedly 'fast forward' by moving the current time to the time of the next event and handle the event. This should repeat until there are no more events, or the time is 'current'.
This requires you to have anything that is a function of time (such as velocity, position, acceleration) be calculated according to the current time. This means you would not have these attributes periodically updated, and allows your game to be deterministic, as the 'game time' is no longer dependent upon real time. It also makes things like game speed and pausing very simple to implement.
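A rough sketch of that idea (Python is used purely for illustration; the event objects and handle_event callback are hypothetical placeholders):

```python
import heapq

def advance_to(real_time, event_queue, handle_event):
    """Fast-forward simulated time to real_time, handling queued (time, event) pairs in order."""
    while event_queue and event_queue[0][0] <= real_time:
        event_time, event = heapq.heappop(event_queue)
        # jump simulated time straight to the event and handle it; anything that is a
        # function of time (position, velocity, ...) is evaluated at event_time
        handle_event(event, event_time)
    return real_time    # no more pending events: simulated time catches up to 'current'
```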
If you're referring to the concept of real-time systems, then I would say there's not enough information to determine whether that 'game loop' is 'real-time'. It depends on the operating environment of the game, and the logic in the 'game loop'. According to wikipedia, a real-time deadline must be met, regardless of system load.
In the article Fix your Timestep!, which is rapidly approaching canonical status, Glenn Fiedler addresses numerous ways to handle this issue. While the article focuses primarily on physics, the key points are applicable to any system that represents a function of time, to wit, things dealing with moving things.
The executive summary of that article (which is well worth reading) is this:
You can make your physics deterministic (well, as much as can be achieved with imperfect input) by using discrete physics timesteps. It looks like this:
Render as fast as possible
Pass in a time delta that represents how long the previous frame took
Run as many fixed physics steps as fit into that delta (delta divided by the timestep, rounded down)
Store the remainder of the delta that you couldn't process in an accumulator
That accumulator gets added to the next frame's time buffer. This requires some fine tuning such that temporary lag spikes due to e.g. a rapidly spinning player (which necessitates a lot of visibility determination over time) don't end up putting you in an inescapable time debt. If you wanted to intelligently guard against such an occurrence, you could have a sentry look for dangerous levels of accumulated time, which you could respond to by perhaps dropping a video frame.
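A bare-bones sketch of that loop (Python used purely for illustration; update_physics and render are placeholders):

```python
import time

TIMESTEP = 1.0 / 60.0       # fixed physics step, in seconds
MAX_FRAME_TIME = 0.25       # clamp long frames to avoid an inescapable time debt

accumulator = 0.0
previous = time.perf_counter()
running = True

while running:
    now = time.perf_counter()
    frame_time = min(now - previous, MAX_FRAME_TIME)   # how long the previous frame took
    previous = now
    accumulator += frame_time

    # consume the accumulated time in fixed-size physics steps
    while accumulator >= TIMESTEP:
        # update_physics(TIMESTEP)                      # placeholder: advance the simulation
        accumulator -= TIMESTEP

    # render()                                          # placeholder: draw as fast as possible
    running = False                                     # stop after one pass in this sketch
```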
Another advantage of using discrete timesteps is that they behave well in multiplayer games. If you have an authoritative server or node in a peer-to-peer configuration, the server can ensure that all clients' physics simulations are running on the same physics timeline. Discrete time blocks also simplify things in rollback-based multiplayer.
Edit:
Disclaimer: I've never written real-time software myself, only worked at a company that did!
In response to really-real real-life Real Time software, it's unlikely that anyone has made a game that could be qualified as this, at least in software. (I'm not sure how one would qualify games on ROMs or games that don't run under a host OS?) While your example would be an attempt at real-time software, most real-time software goes through a period of certification in which the maximum amount of time spent per instruction or on a logical block of operation is determined. Games might come close to this in a sense when, for example, platform licensors have requirements (as I believe XBLA does) regarding minimum 30fps or similar. However, these certifications are usually established through a period of testing rather than through mathematical proof.