How to increase frequency of validation in fastai? - callback

In fastai, during training, the validation loss and evaluation metric are calculated every epoch, and the best epoch is saved if we use the SaveModelCallback() callback. However, I would like to increase the frequency of this process and evaluate the metric after every n training steps (e.g. n = 32, 64, etc.) in order to better capture the moment where the model starts to overfit. This is very easy to do in repos like detectron2 via the BestCheckpointer class. Any ideas on how to implement such a callback in fastai?
For reference, this was discussed in this forum, but no solution was found.
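For what it's worth, here is a rough, untested sketch of what such a callback might look like with the fastai v2 Callback API, assuming one input and one target per batch; the class name IntervalValidationCallback, the default interval, and the loss-only comparison are illustrative choices rather than an established fastai recipe:

import torch
from fastai.basics import *  # Callback, store_attr, etc.

class IntervalValidationCallback(Callback):
    "Sketch: validate every `every_n` training batches and keep the best weights."
    order = 60  # run after fastai's built-in training callbacks

    def __init__(self, every_n=500, fname='best_interval'):
        store_attr()
        self.best = float('inf')

    def after_batch(self):
        if not self.training or self.train_iter % self.every_n: return
        self.model.eval()
        total, n = 0., 0
        with torch.no_grad():
            for xb, yb in self.dls.valid:  # assumes (input, target) batches
                total += self.loss_func(self.model(xb), yb).item() * len(yb)
                n += len(yb)
        self.model.train()
        val_loss = total / n
        if val_loss < self.best:
            self.best = val_loss
            self.learn.save(self.fname)  # written under learn.path/learn.model_dir

It would be attached like any other callback, e.g. learn.fit_one_cycle(3, cbs=IntervalValidationCallback(every_n=200)), and the same loop could compute the metric of interest instead of the raw validation loss.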

Related

AnyLogic - Substantial variances in identical arrival rate schedule outputs

I am currently completing some verification checks on an AnyLogic DES simulation model, and I have two source blocks with identical hourly arrival rate schedules, broken down into 24 x 1 h blocks.
The issue I am encountering is significant differences in the number of agents generated by one block compared with the other. I understand that the arrival rate is based on the Poisson distribution, so there is some level of randomness in the instants of agent generation, but I would expect that the overall numbers generated by these two blocks should be similar, if not identical. For example, in one operating scenario one block generates 78 agents, whilst the other generates only 67 over the 24 h period. This seems to be a common issue across all operating scenarios.
Is there a potential explanation regarding idiosyncrasies within AnyLogic that might explain this?
Any pointers would be welcomed.
I think this occurs because it follows a Poisson distribution. To solve it, you could use the interarrival time option of the source block; in that case you would get the same number of arrivals for the different source blocks. However, I'm not sure whether this fits a schedule. If not, you could use the getHourOfDay() function together with a parameter representing the interarrival time. You then have to write a line like the one below for every hour of the day:
if (getHourOfDay() == 14) parameter = 5; // interarrival time to use during the 14:00 hour
Using sources with Poisson distributions will definitely not produce the same results... That's the magic of stochastic models.
An alternative to solve this problem is the following:
the sources will generate agents using the inject() function
use dynamic events that are in charge of calling source.inject();
let's imagine you have R trains coming per day, and this is a fixed value you want to use; you can then distribute the trains across the day by doing this:
for (int i = 0; i < R; i++) {
    create_DynamicEvent1(uniform(0, 1), DAY); // for source1
    create_DynamicEvent2(uniform(0, 1), DAY); // for source2
}
This doesn't follow a Poisson distribution, but it generates a predefined number of train arrivals throughout the day, and you can use another distribution of your choice if the uniform is not good enough for you.
Run this for every day.

Machine Learning to predict time-series multi-class signal changes

I would like to predict the switching behavior of time-dependent signals. Currently the signal has 3 states (1, 2, 3), but it could be that this will change in the future. For the moment, however, it is absolutely okay to assume three states.
I can make the following assumptions about these states (see picture):
the signals repeat periodically, possibly with variations concerning the time of day.
the duration of state 2 is always constant and relatively short for all signals.
the durations of states 1 and 3 are also constant, but vary between the different signals.
the switching sequence is always the same: 1 --> 2 --> 3 --> 2 --> 1 --> [...]
there is a constant but unknown time reference between the different signals.
There is no constant time reference between my observations for the different signals. They are simply measured one after the other, but always at different times.
I am able to rebuild my model periodically after I have obtained more samples.
I have the following problems:
I can only observe one signal at a time.
I can only observe the signals at different times.
I cannot trigger my measurement with the state transition. That means, when I measure, I am always "in the middle" of a state. Therefore I don't know when this state has started and also not exactly when this state will end.
I cannot observe a certain signal for a long duration, so I am not able to observe a complete period.
My samples (observations) are widespread in time.
I would like to get a prediction either for the state change or for the current state at the current time. It is likely that I will never have measured the signals at exactly that requested time.
So far I have tested the TimeSeriesPredictor from the ML.NET Toolbox, as it seemed suitable to me. However, in my opinion, this algorithm requires that you always pass only the data of one signal. This means that assumption 5 is not included in the prediction, which is probably suboptimal. Also, in this case I had problems with the prediction not changing, which should actually happen time-dependently when I query multiple predictions. This behavior led me to believe that only the order of the values entered the model, but not the associated timestamp. If I have understood everything correctly, then exactly this timestamp is my most important "feature"...
So far, I have not done any tests with regression-based approaches, e.g. FastTree, since my data is not linear but keeps changing states. Maybe this assumption is not valid and regression-based methods could also be suitable?
I also don't know if a multi-class classifier is required, because I had understood that the TimeSeriesPredictor would also be suitable for this, since it works with the Single data type. Whether the prediction is 1.3 or exactly 1.0 would be fine for me.
To sum it up:
I am looking for an algorithm which is able to recognize the switching patterns based on loose and widespread samples. It would be okay to define boundaries, e.g. the duration of state 3 of signal 1 will never last longer than 30 s, or the duration of state 1 of signal 3 will never last longer than 60 s.
Then, after the algorithm has obtained an approximate model of the switching behaviour, I would like to request a prediction of a certain signal's state at a certain time.
Which methods can I use to get the best prediction, preferably using the ML.NET toolbox or based on MATLAB?
Not sure if this is quite what you're looking for, but if detecting spikes and changes using signals is what you're looking for, check out the anomaly detection algorithms in ML.NET. Here are two tutorials that show how to use them.
Detect anomalies in product sales
Spike detection
Change point detection
Detect anomalies in time series
Detect anomaly period
Detect anomaly
One way to approach this would be to first determine the periodicity of each of the signals independently. This could be done by looking at the frequency distribution of time differences between measurements of state 2 only and separately for each signal.
This will give a multimodal distribution. The shortest time difference will be the duration of the switching event (after discarding time differences less than the maximum duration of state 2). The second shortest peak will be the duration between the end of one switching event and the start of the next.
When you have the 3 calculations of periodicity you can simply calculate the difference between each of them. Given you have the timestamps of the measurements of state 2 for each signal you should be able to calculate the time of switching for all other signals.
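To illustrate the histogram idea (sketched here in Python/numpy, since the approach itself is tool-agnostic; the timestamps and the state-2 duration bound below are made-up placeholders):

import numpy as np

# Made-up example: timestamps (seconds) at which one signal was observed in state 2.
ts = np.sort(np.array([12.4, 13.1, 98.0, 98.7, 184.2, 184.9, 270.3]))

diffs = np.diff(ts)                        # differences between consecutive state-2 observations
max_state2_duration = 2.0                  # assumed upper bound on the (short) state-2 duration
gaps = diffs[diffs > max_state2_duration]  # drop differences taken within the same switching event

# Frequency distribution of the remaining differences; its peaks correspond to the
# characteristic durations described above.
hist, edges = np.histogram(gaps, bins=20)
peak = np.argmax(hist)
print("dominant gap ~", 0.5 * (edges[peak] + edges[peak + 1]), "s")

Repeating this per signal, and then comparing the state-2 timestamps between signals, gives the offsets mentioned above.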

Get the same arrival rate across models

I have a model that I have duplicated and made some adjustments to it. When I run both models with the same fixed seed I don’t get the same results, which I understand because I have other sources of randomness in the model. Regardless, in both models, I am using a source block, such that the arrivals are defined by a rate schedule, the schedule is of type rate, and is provided from the database. Now, I know the following pieces of information:
Generally, I can use my own random number generator (RNG) in distributions, for example triangular(5, 10, 25, myRNG), such that Random myRNG = new Random(2)
By default, a schedule with "type" rate follows a Poisson distribution that utilizes the default RNG.
At any time in the model, I can substitute the default RNG with my own by calling setDefaultRandomGenerator(Random r).
The question is: Is it possible to use a fixed seed for the arrival rate to make sure I am getting the same exact input in both models?
In AnyLogic, rates are always equivalent to a Poisson distribution with lambda equal to the rate you set.
Interarrival times don't follow any particular distribution by themselves, but using exponential(lambda) in that field is equivalent to using arrivals defined by rate with a rate of lambda.
But Poisson and exponential are closely related, which is why, if you use poisson(1.0/lambda) as the interarrival time, you get the same average number of arrivals as if you use exponential(lambda).
It is not possible to set a seed for the arrival rate, and that's why you need to use interarrival times in your source instead.
But you need to create a variable first, let's call it rand, of type Random, with initial value new Random(seed),
where seed is any integer you want (a long, to be more exact).
Then, in the interarrival time field, you need to use:
exponential(lambda, 0, rand)
This makes the arrival stream reproducible across simulation runs, no matter what randomness configuration you have in the AnyLogic experiment.
Sure, simply set both Source blocks to "Interarrival time" in the "arrivals defined by".
Then, use the same code poisson(1, myRNG) in the field, making sure that the myRNG RNG uses the same initial seed (i.e. new Random(1234)).
(The "Rate" setting is the same as using poisson(1) for the interarrival time.)

Set Batch Size *and* Number of Training Iterations for a neural network?

I am using the KNIME Doc2Vec Learner node to build a Word Embedding. I know how Doc2Vec works. In KNIME I have the option to set the parameters
Batch Size: The number of words to use for each batch.
Number of Epochs: The number of epochs to train.
Number of Training Iterations: The number of updates done for each batch.
From Neural Networks I know that (lazily copied from https://stats.stackexchange.com/questions/153531/what-is-batch-size-in-neural-network):
one epoch = one forward pass and one backward pass of all the training examples
batch size = the number of training examples in one forward/backward pass. The higher the batch size, the more memory space you'll need.
number of iterations = number of passes, each pass using [batch size] number of examples. To be clear, one pass = one forward pass + one backward pass (we do not count the forward pass and backward pass as two different passes).
As far as I understand, it makes little sense to set both batch size and the number of iterations, because one is determined by the other (given the data size, which is given by the circumstances). So why can I change both parameters?
This is not necessarily the case. You can also train "half epochs". For example, in Google's inceptionV3 pretrained script, you usually set the number of iterations and the batch size at the same time. This can lead to "partial epochs", which can be fine.
Whether or not it is a good idea to train half epochs may depend on your data. There is a thread about this, but no conclusive answer.
I am not familiar with KNIME Doc2Vec, so I am not sure if the meaning is somewhat different there. But from the definitions you gave, setting batch size + iterations seems fine. Setting the number of epochs as well could cause conflicts, though, leading to situations where the numbers don't add up to reasonable combinations.
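To make the "partial epoch" arithmetic concrete, here is a tiny sketch in Python (all numbers invented):

dataset_size = 10_000
batch_size = 32
iterations = 200                                        # set independently of the dataset size

examples_seen = iterations * batch_size                 # 6,400 examples
epochs_covered = examples_seen / dataset_size           # 0.64 -> a partial ("half") epoch
iters_per_full_epoch = -(-dataset_size // batch_size)   # ceil(10_000 / 32) = 313
print(epochs_covered, iters_per_full_epoch)

If iterations were forced to equal iters_per_full_epoch times the number of epochs, only two of the three parameters would be free; allowing all three is what makes partial epochs possible.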

Change frequency of incoming data

I have a sensor (a gyro) that is connected to my Python program (via a UDP socket) and sends data to the Python console in real time, at a frequency of 200 Hz.
I want to change the frequency of the data arriving at my console, but could not find a good way to do it.
I was thinking about doing it with filters, like a mean filter, and am waiting for ideas.
If you want to have regular updates, use a windowing mechanism: take the last n values and store their average, then wait for the next k incoming values before computing the average of the last n values again. The output frequency is then 200 Hz / k, e.g. 100 Hz for k = 2.
If you only want to see events when changes have occurred, store the last value, compare the current value with the last one, and emit an event if it has changed, updating the stored value. As you're dealing with sensors (and thus a little fuzziness), you probably want to implement a hysteresis.
You can even raise the frequency by creating extra values in between the received ones through interpolation. For a steady frequency, you would have to take care of your timing, though.
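As a rough sketch of the windowing idea above (WINDOW plays the role of n, STRIDE the role of k; read_sample() is a hypothetical stand-in for whatever call you already use to receive one gyro value from the UDP socket):

from collections import deque

WINDOW = 8   # average over the last 8 samples
STRIDE = 4   # emit one value per 4 incoming samples -> 200 Hz / 4 = 50 Hz output

window = deque(maxlen=WINDOW)
count = 0
while True:
    window.append(read_sample())         # hypothetical: returns one reading as a float
    count += 1
    if count % STRIDE == 0 and len(window) == WINDOW:
        print(sum(window) / WINDOW)      # downsampled, smoothed value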