How to make my algorithm depend on time? - MATLAB

I used MATLAB to simulate cascading failure across two interdependent networks/layers (I generated the two layers with the small-world Watts-Strogatz algorithm).
My code works fine, but it is not time-dependent.
I want to have time steps: for example, the initial attack on one node happens at time t1, then after some delay the next vulnerable nodes fail at a later time t2, and so on for the subsequent failure events. My code currently emulates telecommunication nodes only (all events happen instantaneously); I want it to work for other networks, logistic or social networks for example, where the timestamp of every event might be in minutes or hours. Your thoughts and ideas would be appreciated.
Note: I can provide my code if this helps.
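One common way to make a cascade time-dependent (an illustration, not the poster's code) is discrete-event scheduling: keep a priority queue of pending failure events ordered by timestamp, pop the earliest, process it, and push the failures it triggers with whatever delay fits the domain (near-zero for telecom, minutes or hours for logistic or social networks). A minimal Java sketch of that loop follows; the node ID, delay, and propagation rule are placeholders, and the same structure maps directly to MATLAB:

import java.util.Comparator;
import java.util.PriorityQueue;

public class CascadeSketch {
    public static void main(String[] args) {
        // Each event is {time, nodeId}; the queue always yields the earliest failure next.
        PriorityQueue<double[]> queue =
                new PriorityQueue<>(Comparator.comparingDouble((double[] e) -> e[0]));
        queue.add(new double[]{0.0, 17}); // initial attack on node 17 at t = 0 (made-up values)
        while (!queue.isEmpty()) {
            double[] ev = queue.poll();
            double t = ev[0];
            int node = (int) ev[1];
            System.out.printf("t = %.2f: node %d fails%n", t, node);
            // Here you would mark `node` failed in its layer, find the nodes that become
            // vulnerable in the other layer, and schedule their failures with a
            // domain-specific delay, e.g. queue.add(new double[]{t + delay, neighbourId});
        }
    }
}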

Related

Calculate the travel time from one point to another in AnyLogic

I am developing a logistics simulation of a factory in AnyLogic. It is a pick-up and delivery problem, where AGVs need to pick up parcels and deliver them to target locations. All the AGVs travel along paths, and the paths have different speed limits.
My goal is to reduce the time lost to traffic jams and the waiting time before jobs are picked up.
I have the lead time: job delivered time - job generated time.
From here, I want to isolate the time spent in traffic jams or waiting.
Is there a way to calculate the travel time from one spot to another that takes the different path speed limits into account but excludes waiting time and traffic jams? Then I could subtract it from the lead time.
Let me know if I need to clarify something.
There is no built-in way to do this; you have to do it yourself. I have 3 ideas:
You compute this mathematically in the model yourself, i.e. write a function that computes the total path length; you already have the ideal speed, voilà (see the sketch below).
You run a separate experiment with all speed limits and other traffic turned off: record the time in that ideal case and use it for comparison.
Similarly, you could do this in the same experiment during a warmup period: drive a fake transporter along the path and compute the perfect durations.
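For idea 1, a minimal sketch of the arithmetic (names and units are my assumptions, not AnyLogic API): the ideal travel time is the sum over path segments of segment length divided by the effective speed, i.e. the lower of the AGV's top speed and the segment's limit.

// Ideal (jam-free) travel time in seconds for a sequence of path segments.
double idealTravelTimeSec(double[] segLengthM, double[] segLimitMps, double agvTopSpeedMps) {
    double t = 0;
    for (int i = 0; i < segLengthM.length; i++)
        t += segLengthM[i] / Math.min(agvTopSpeedMps, segLimitMps[i]);
    return t;
}

Subtracting this from the measured lead time leaves the waiting plus traffic-jam share.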

AnyLogic - How to define and redefine an agent's parameters?

We are generating the data that we might get from a shop floor in order to run, test, and validate our machine learning models. We start with a discrete-event simulation model of our manufacturing system. Each production order is treated as an agent, which then goes through different processes with queues (waiting time) and delays (first production time, then logistics time).
But sometimes we have one process, for example printing (code 5A, after the second Select5Output), with three different machines that have no particular capacity. That is when we divide the order into parts and send them to those machines (very randomly, subjectively).
The data we take is from flowchart_process_states_log in Database.
My questions here are:
How can we define the number of products in each order? E.g. we are printing cards: for one order it may be 10k, for another 8k or 33k. Can we define it as an agent parameter? And how can we vary it (stochastically, no exact number needed)?
How can we split those 10k cards over the three different machines? And how do we then get back one complete agent with 10k cards? The agent ID should remain the same, because we trace and analyse the agents in our ML model. Is it reasonable to treat an order as an agent?
How can we multiply the number of products of an agent after a process? E.g. after cutting, 10k pieces become 20k.
We have a distribution for each delay, e.g. a triangular distribution. But we want some disturbances, where the delay suddenly takes 2 days instead of the normal 3-4 hours. How can we do that?
Thank you in advance for your effort. Every bit of help is highly appreciated, because we are here learning together. Thank you!!
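Not a complete answer, but a minimal sketch of how the first and last points could be expressed in AnyLogic (the bounds and the 5% disturbance probability are assumptions, not from the question). Splitting an order over the three machines and reassembling it afterwards is typically done with the Split and Batch blocks while keeping a reference to the original order agent, so its ID is preserved.

// Source block, "On exit" action: draw the order size stochastically per order.
// uniform_discr() is a standard AnyLogic function; the bounds are made up.
agent.orderSize = uniform_discr(8000, 33000);

// Delay-time expression (in hours) with a rare disturbance: normally triangular,
// but with an assumed 5% probability the delay jumps to 2 days (48 h).
// randomTrue() and triangular(min, max, mode) are standard AnyLogic functions.
randomTrue(0.05) ? 48 : triangular(3, 4, 3.5)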

AnyLogic - Substantial variance in the outputs of identical arrival rate schedules

I am currently completing some verification checks on an AnyLogic DES simulation model, and I have two source blocks with identical hourly arrival rate schedules, broken down into 24 x 1h blocks.
The issue I am encountering is a significant difference in the number of agents generated by one block compared with the other. I understand that the arrival rate is based on the Poisson distribution, so there is some randomness in the instants of agent generation, but I would expect the overall numbers generated by the two blocks to be similar, if not identical. For example, in one operating scenario one block generates 78 agents while the other generates only 67 over the 24h period. This occurs across all operating scenarios.
Is there an idiosyncrasy within AnyLogic that might explain this?
Any pointers would be welcomed.
I think it occurs because the arrivals follow a Poisson distribution. To solve this, you could use the interarrival time mode of the source block; in that case you would have the same number of arrivals for the different source blocks. However, I'm not sure whether that fits a schedule. If not, you could use the getHourOfDay() function together with a parameter representing the interarrival time. You then have to write code like the line below for every hour of the day:
if (getHourOfDay() == 14) parameter = 5;
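Rather than 24 separate if statements, a lookup table indexed by getHourOfDay() does the same job (hourlyInterarrival and its values are placeholders):

// 24 interarrival times, one entry per hour of the day
double[] hourlyInterarrival = {5, 5, 5, 5, 5, 5, 4, 3, 2, 2, 2, 2, 3, 3, 4, 4, 3, 2, 2, 3, 4, 5, 5, 5};
parameter = hourlyInterarrival[getHourOfDay()];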
Using sources with Poisson distributions will definitely not produce the same results... That's the magic of stochastic models.
An alternative solution to this problem is the following:
the sources generate agents through the inject() function
dynamic events are in charge of calling source.inject()
Let's imagine you have R trains coming per day, and this is a fixed value you want to use. You can then distribute the trains across the day by doing this:
for (int i = 0; i < R; i++) {
    create_DynamicEvent1(uniform(0, 1), DAY); // for source1
    create_DynamicEvent2(uniform(0, 1), DAY); // for source2
}
This doesn't follow a Poisson distribution, but it generates a predefined number of train arrivals throughout the day, and you can use another distribution of your choice if the uniform one is not good enough for you.
Run this for every day.
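For completeness, a sketch of the remaining piece of this approach (assuming each source's arrival type is set to "Calls of inject() function"): the loop above runs in a cyclic event with a one-day recurrence, and each dynamic event's action is a single line.

// Action of DynamicEvent1 (DynamicEvent2 is analogous for source2):
source1.inject(1); // inject exactly one agent at the scheduled random instant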

Machine Learning to predict time-series multi-class signal changes

I would like to predict the switching behavior of time-dependent signals. Currently the signal has 3 states (1, 2, 3), but this could change in the future. For the moment, however, it is absolutely fine to assume three states.
I can make the following assumptions about these states (see picture):
1. The signals repeat periodically, possibly with variations depending on the time of day.
2. The duration of state 2 is always constant and relatively short for all signals.
3. The durations of states 1 and 3 are also constant, but vary between the different signals.
4. The switching sequence is always the same: 1 --> 2 --> 3 --> 2 --> 1 --> [...]
5. There is a constant but unknown time reference between the different signals.
6. There is no constant time reference between my observations of the different signals; they are simply measured one after the other, but always at different times.
I am able to rebuild my model periodically after I obtain more samples.
I have the following problems:
I can only observe one signal at a time.
I can only observe the signals at different times.
I cannot trigger my measurement on the state transition. That means that when I measure, I am always "in the middle" of a state; I know neither when the state started nor exactly when it will end.
I cannot observe a given signal for a long duration, so I am not able to observe a complete period.
My samples (observations) are spread widely over time.
I would like a prediction of either the next state change or the current state at the current time. It is likely that I will never have measured my signals at exactly the requested time.
So far I have tested the TimeSeriesPredictor from the ML.NET toolbox, as it seemed suitable to me. However, as far as I can tell, this algorithm requires that you always pass the data of only one signal, which means assumption 5 is not included in the prediction; this is probably suboptimal. I also had the problem that the prediction did not change when I queried multiple predictions, although it should change with time. This behavior led me to believe that only the order of the values enters the model, but not the associated timestamps. If I have understood everything correctly, exactly this timestamp is my most important "feature"...
So far I have not done any tests with regression-based approaches, e.g. FastTree, since my data is not linear but keeps changing states. Maybe this assumption is not valid and regression-based methods could also be suitable?
I also don't know whether a multiclass classifier is required; I had understood that the TimeSeriesPredictor would be suitable for this as well, since it works with the Single data type. Whether the prediction is 1.3 or exactly 1.0 would be fine with me.
To sum it up:
I am looking for an algorithm that is able to recognize the switching patterns based on loose, widely spread samples. It would be okay to define boundaries, e.g. state 3 of signal 1 will never last longer than 30 s, or state 1 of signal 3 will never last longer than 60 s.
Then, after the algorithm has obtained an approximate model of the switching behavior, I would like to request a prediction of a certain signal's state at a certain time.
Which methods can I use to get the best prediction, preferably using the ML.NET toolbox or MATLAB?
Not sure if this is quite what you're looking for, but if detecting spikes and changes in signals is what you need, check out the anomaly detection algorithms in ML.NET. Here are two tutorials that show how to use them:
Detect anomalies in product sales
  Spike detection
  Change point detection
Detect anomalies in time series
  Detect anomaly period
  Detect anomaly
One way to approach this would be to first determine the periodicity of each signal independently. This could be done by looking at the frequency distribution of the time differences between measurements of state 2 only, separately for each signal.
This will give a multimodal distribution. The shortest time difference will be the duration of one switching event (after discarding time differences less than the maximum duration of state 2). The second shortest peak will be the duration between the end of one switching event and the start of the next.
Once you have the three periodicity calculations, you can simply calculate the differences between them. Given that you have the timestamps of the state-2 measurements for each signal, you should be able to calculate the switching times of all the other signals.
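A sketch of the first step (my own illustration, with made-up timestamps and bin width): histogram the pairwise time differences between the state-2 observations of one signal; the peaks at the lowest lags then approximate the quantities described above.

import java.util.Map;
import java.util.TreeMap;

public class PeriodicitySketch {
    public static void main(String[] args) {
        double[] state2Times = {3.0, 48.2, 93.1, 138.4}; // seconds, one signal only
        double binWidth = 5.0; // histogram resolution in seconds
        Map<Long, Integer> histogram = new TreeMap<>();
        // All pairwise differences, binned; peaks appear at the period and its multiples.
        for (int i = 0; i < state2Times.length; i++)
            for (int j = i + 1; j < state2Times.length; j++)
                histogram.merge(Math.round((state2Times[j] - state2Times[i]) / binWidth),
                        1, Integer::sum);
        histogram.forEach((bin, count) ->
                System.out.printf("lag ~%.0f s: %d pair(s)%n", bin * binWidth, count));
    }
}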

Controlling events in a hybrid Modelica model

I am confused by the hybrid modelling paradigm in Modelica. On the one hand, events are useful; on the other hand, they are to be avoided. Let me explain my case:
I have a large model consisting of multiple buildings in a neighborhood that is simulated over 1 year. Initially, the model ran very slowly. Adding noEvent() around as many if-conditions as possible drastically improved the speed.
As development continued, the control of the model got more complicated, and I again have many events, sometimes at very short intervals. To give an idea:
Number of (model) time events : 28170
Number of (U) time events : 0
Number of state events : 22572
Number of step events : 0
These events blow up the output (for correct post-processing I need the variables at the events) and slow down the simulation. Moreover, I have the feeling that some of the noEvent(if ...) constructs lead to unexpected behavior.
I wonder whether it would be a solution to force my events onto certain time instants and prohibit them in between. Ideally, I would like to trigger these 'forced events' based on certain conditions. For example: during the day they should occur every 15 minutes, at high solar radiation every minute, and during the night I don't want events at all.
Is this a good idea? I guess it will be faster, as many of the state events will become time events. How can this be done with Modelica 3.2 (in Dymola)?
Thanks in advance for all answers.
Roel
A few comments.
First, if you have a simulation with lots of events (relative to the total duration of the simulation), the first thing I would encourage you to do is use a lower order integrator. The point here is that higher-order integrators normally allow you to take longer time steps. But if those steps are constantly truncated by events, they just end up being really expensive.
Second, you could try fixed-step integrators. Depending on the tool, they may implement a "pool events and fire them all at once" approach in the context of fixed-step integration. But the specification doesn't really say anything about how tools should deal with events that occur between fixed time steps.
Third, another way to approach this would be to "pool" your events yourself. The simplest way I can imagine doing this would be to take all the statements that currently generate events and wrap them in a "when sample(...,...) then" clause. This way, you can make sure the events are only triggered at specific intervals. This would be more portable than the fixed-time-step approach. I think this is what you were actually proposing in your question, but it is important to point out that it should not be based on time steps (the model has no concept of a time step) but rather on a model-specified sampling interval (which will, in practice, be completely independent of the time steps).
As you point out, using "sample(...,...)" will turn these into time events and, yes, this should be faster.
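To make the third idea concrete, a minimal Modelica sketch (names, values, and the threshold are made up): the condition below only produces time events at the sampling instants, instead of a state event every time the signal crosses the threshold.

model PooledEvents
  parameter Modelica.SIunits.Time Ts = 900 "sampling interval (15 min)";
  Real solarRadiation = 400 + 400*sin(2*Modelica.Constants.pi*time/86400);
  Boolean highRadiation(start = false);
equation
  when sample(0, Ts) then
    // evaluated only at t = 0, Ts, 2*Ts, ...
    highRadiation = solarRadiation > 600;
  end when;
end PooledEvents;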