Utilization of a Resource - AnyLogic

Is there any way to calculate the utilization of a given resource in a specific time frame? I have a machine that runs 24 hours a day, but during daytime hours its utilization is higher than during nighttime hours.
In the final statistics, using the function "machine.utilization()" I get a low result, which is influenced by the night hours. How can I split the two statistics?

Utilization is calculated as (work time)/(available time excluding maintenance), which means the measure described in your question can be achieved in two ways:
Make the machine 'unavailable' during the night, so that that time is excluded from the calculation.
Use the ResourcePool object's two callback properties, 'On seize' and 'On release', to record the individual stretches of work time, sum them up, and divide by a period of (8 h * (number of days)) instead of the total time since model start.
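For intuition, with invented numbers: a machine that is busy for 6 h during an 8 h daytime window has a daytime utilization of 6/8 = 75%, whereas the built-in statistic over the full 24 h day would report only 6/24 = 25%.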
For a little more detail and a link to the AnyLogic help, please see the answer to another question here.
Update:
In ResourcePool's 'On seize' and 'On release', AnyLogic provides a parameter called unit, which is the specific resource unit agent being processed. So getting the actual use time per unit requires the following:
Two collections of type LinkedHashMap that map Agent -> Double: one to store start times (let's call it col_start) and one to store total use times (let's call it col_worked).
On seize should contain this code: col_start.put(unit, time());
On release should contain:
double updated = col_worked.getOrDefault(unit, 0.0) + (time() - col_start.get(unit));
col_worked.put(unit, updated);
This way at any given point during model execution, col_worked will contain a mapping of resource unit Agent to the total sum of time it was utilised expressed as a double value in model time units.
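As a minimal sketch (the function name and the fixed 8-hour daytime window are assumptions, not part of the original answer), a helper defined e.g. on Main could then turn col_worked into a daytime-only utilization figure:

// Hypothetical helper: utilization of one resource unit over daytime hours only.
// Assumes model time units are hours, col_worked is filled as described above,
// and availability is 8 h of daytime per elapsed day.
double daytimeUtilization(Agent unit, int daysElapsed) {
    double worked = col_worked.getOrDefault(unit, 0.0);
    double available = 8.0 * daysElapsed; // the (8 h * (number of days)) from above
    return available > 0 ? worked / available : 0.0;
}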

Related

Measuring system time of specific agent in AnyLogic

I've got three different product types of agents, each of which follows its own path through the factory. How can I measure the average time each product type spends in the system?
My logic looks like this (screenshots omitted): I start the measurement in the first service and complete it in the last service.
Now I get some really high numbers, which are absolutely wrong. The process itself works fine: if I run the measurement with the code "agent.enteredSystemP1 = time()", I get a mean of 24 minutes per product. But how can I get the mean per product type?
Just use the same if-elseif-else distinction in the 2nd Service block as well.
Currently, any agent leaving the system adds its time to a systemTimeDistribution regardless of its product type.
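A sketch of that distinction (field and distribution names such as productType and systemTimeDistributionP1 are assumed from the question's naming): stamp the time in the first Service block's 'On enter' and branch on the product type in the last Service block's 'On exit':

// First service, On enter: remember when this agent entered the system
agent.enteredSystem = time();

// Last service, On exit: add the elapsed time to the distribution
// that matches this agent's product type
double elapsed = time() - agent.enteredSystem;
if (agent.productType == 1) {
    systemTimeDistributionP1.add(elapsed);
} else if (agent.productType == 2) {
    systemTimeDistributionP2.add(elapsed);
} else {
    systemTimeDistributionP3.add(elapsed);
}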

AnyLogic - modeling large number of different ResourcePools

I would like to model a larger number of employees (about 30) as a resource pool. Each employee is given parameters before the model starts, which the simulation end user can enter manually. Each employee has different working hours (shift work, different on each day of the week), different duration of the shift and different tasks assigned to them.
My first thought was to model each employee individually as a resource with their own shift schedule. That would be easiest, but I bet there is a nicer solution - anyone have any ideas?
If your workers have different settings, such as different shift hours, they cannot belong to the same ResourcePool.
You must build an agent type that contains a ResourcePool (so that you can use it as a resource) along with its other parameters, such as capacity.
In my opinion, the cleanest approach is to build a population of these agents, where each item in the population represents a group of workers with identical parameters.
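As a plain-Java sketch of that structure (all names are illustrative; in AnyLogic the agent type and its population would be created in the graphical editor, with the embedded ResourcePool's capacity and schedule driven by these parameters):

// One population item: a group of identical workers.
class WorkerGroup {
    int capacity;          // how many workers share these settings
    String shiftSchedule;  // e.g. "Mon-Fri 06:00-14:00"
    String assignedTask;   // which task type this group serves
    // In AnyLogic, this agent would embed a ResourcePool whose capacity
    // and schedule are taken from the fields above.
}
// The population: one entry per distinct parameter combination, e.g.
// List<WorkerGroup> workforce = new ArrayList<>();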

Graphite: keepLastValue for a given time period instead of number of points

I'm using Graphite + Grafana to monitor (by sampling) queue lengths in a test system at work. The data that gets uploaded to Graphite is grouped into different series/metrics by properties of the payloads in the queue. These properties can be somewhat arbitrary, at least to the point where they are not all known at the time when the data collection script is run.
For example, a property could be the project that the payload belongs to and this could be uploaded as a separate series/metric so that we can monitor the queues broken down by the different projects.
This has the consequence that Graphite sends a lot of null values for certain metrics if the queues in the test system did not contain any payloads with properties that would group it into that specific series/metric.
For example, if a certain project did not have any payloads in the queue at the time when the data collection was run.
In Grafana this is not so nice as the line graphs don't show up as connected lines and gauges will show either null or the last non-null value.
For line graphs I can just choose to connect null values in Grafana, but for gauges that's not possible.
I know about the keepLastValue function in Graphite. It includes a limit on how long to keep the value, which I actually like, as I only want to keep the last value until the next time data collection is run; collection runs periodically at known intervals.
The problem with keepLastValue is that it expects this limit as a number of points; I would rather give it a time period. In Grafana the relationship between time and data points is very dynamic, so it's not easy to hard-code a good point limit for keepLastValue.
Thus, my question is: Is there a way to tell Graphite to keep the last value for a given time instead of a given number of points?
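For reference, keepLastValue's limit counts consecutive missing data points, so a time window can only be translated into a limit by hand using the storage resolution. A hedged example, assuming the series is stored at one point per 10 seconds and collection runs every 5 minutes, so 5 min / 10 s = 30 points (both the series name and the resolution are invented for illustration):

keepLastValue(queues.myProject.length, 30)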

Google Cloud: Metrics Explorer: "Aggregator" vs "Aligner" - What's the difference?

Trying to understand the difference between the two: Aggregator vs Aligner.
The docs were not helpful for me.
What I'm trying to achieve is to get the bytes of logs generated within a week for each namespace and container combination. For example, I want to see that container C in namespace N generated 10 GB of logs during the last 7 days.
This is how far I got:
Resource type = Kubernetes Container
Metric = Log bytes
Group by = namespace_name and container_name
Aggregator = sum(?) mean(?)
Minimum alignment period = 1(?) 7(?) days
Aligner = sum(?) mean(?)
I was confused with this until I realized that a single metric, such as kubernetes.io/container/cpu/core_usage_time is available in multiple different resources in my cluster.
So when you search for that metric, you'll get a whole lot of different resources that emit that metric. Aggregation is adding up all the data from those different resources WITH THAT SAME METRIC.
This all combines into one "time series" for that metric, an aggregation of all the individual time series from each of those different resources.
Now, alignment is the process of using that time series and putting all the data points through a function (over a period of time, known as the alignment period) which results in one single data point (per alignment period).
So aggregation combines the same metric across multiple resources, while alignment combines multiple data points in the same time series into one data point (per alignment period, which is why that field is required when using alignment).
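Translated to the original question, this suggests: Aligner = sum with a 7-day alignment period (collapse each container's series into one weekly total), and Aggregator = sum grouped by namespace_name and container_name. A rough sketch using the Cloud Monitoring Java client; the metric type string and the label paths are my assumptions and may need adjusting:

import com.google.cloud.monitoring.v3.MetricServiceClient;
import com.google.monitoring.v3.Aggregation;
import com.google.monitoring.v3.ListTimeSeriesRequest;
import com.google.monitoring.v3.ProjectName;
import com.google.monitoring.v3.TimeInterval;
import com.google.monitoring.v3.TimeSeries;
import com.google.protobuf.Duration;
import com.google.protobuf.util.Timestamps;

public class WeeklyLogBytes {
    public static void main(String[] args) throws Exception {
        long now = System.currentTimeMillis();
        TimeInterval interval = TimeInterval.newBuilder()
            .setStartTime(Timestamps.fromMillis(now - 7L * 24 * 3600 * 1000))
            .setEndTime(Timestamps.fromMillis(now))
            .build();

        // Aligner: collapse each per-container series into a single 7-day sum.
        // Aggregator: sum those per-series results across resources, keeping
        // namespace and container as grouping keys.
        Aggregation aggregation = Aggregation.newBuilder()
            .setAlignmentPeriod(Duration.newBuilder().setSeconds(7L * 24 * 3600).build())
            .setPerSeriesAligner(Aggregation.Aligner.ALIGN_SUM)
            .setCrossSeriesReducer(Aggregation.Reducer.REDUCE_SUM)
            .addGroupByFields("resource.label.namespace_name")
            .addGroupByFields("resource.label.container_name")
            .build();

        ListTimeSeriesRequest request = ListTimeSeriesRequest.newBuilder()
            .setName(ProjectName.of("my-project").toString())
            // "Log bytes" is assumed to be logging.googleapis.com/byte_count here.
            .setFilter("metric.type=\"logging.googleapis.com/byte_count\""
                + " AND resource.type=\"k8s_container\"")
            .setInterval(interval)
            .setAggregation(aggregation)
            .build();

        try (MetricServiceClient client = MetricServiceClient.create()) {
            for (TimeSeries ts : client.listTimeSeries(request).iterateAll()) {
                System.out.println(ts.getResource().getLabelsMap() + " -> "
                    + ts.getPoints(0).getValue().getInt64Value() + " bytes");
            }
        }
    }
}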

How to make sense of the micrometer metrics using SpringBoot 2, InfluxDB and Grafana?

I'm trying to configure a SpringBoot application to export metrics to InfluxDB to visualise them using a Grafana dashboard. I'm using this dashboard as an example which uses Prometheus as a backend.
For some metrics I have no problem figuring out how to create graphs, but for others I don't know how, or even whether it's possible at all. So I enumerate the things I'm not really sure about in the following points:
Is there any documentation where the value units are described? The application I'm using as an example doesn't have any load on it, so sometimes I don't know whether a value is a bit, a byte, a second, a millisecond, a count, etc.
Some measurements contain the tag 'metric_type = histogram' with fields 'count', 'sum', 'mean' and 'upper'. Again, here I don't know what the value units are, what 'upper' means, or how I'm supposed to plot them. Examples of this are 'http_server_requests' and 'jvm_gc_pause'.
From what I see in the Grafana dashboard example, it seems I should use these histogram measurements to create both a graph with counts and a graph with durations. For example, I should be able to create a graph with the number of requests and another with their duration; or, for the garbage collector, a graph with the number of minor and major GCs and another with their duration.
As an example of the measurements that get inserted into InfluxDB:
time=1625579637946000000 count=1 exception=None mean=0.892144 method=GET metric_type=histogram outcome=SUCCESS status=200 sum=0.892144 upper=0.892144 uri=/actuator/health
or
time=1625581132316000000 action="end of minor GC" cause="Allocation Failure" count=1 mean=2 metric_type=histogram sum=2 upper=2
I agree the documentation for Micrometer is not great. I've had to dig through the code to find answers...
Regarding your questions about jvm_gc_pause, it is a Timer and the implementation is AbstractTimer which is a class that wraps a Histogram among other components. This particular metric is registered by the JvmGcMetrics class. The various measurements that are published to InfluxDB are determined by the InfluxMeterRegistry.writeTimer(Timer timer) method:
sum: timer.totalTime(getBaseTimeUnit()) // The total time of recorded events
count: timer.count() // The number of times stop has been called on the timer
mean: timer.mean(getBaseTimeUnit()) // totalTime()/count()
upper: timer.max(getBaseTimeUnit()) // The max time of a single event
The base time unit is milliseconds.
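To see these four fields in isolation, here is a small self-contained sketch that records one timed event against a SimpleMeterRegistry (instead of the Influx registry, purely for illustration) and reads back the same statistics:

import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;
import java.util.concurrent.TimeUnit;

public class TimerFieldsDemo {
    public static void main(String[] args) {
        MeterRegistry registry = new SimpleMeterRegistry();
        Timer timer = registry.timer("jvm.gc.pause", "action", "end of minor GC");
        timer.record(2, TimeUnit.MILLISECONDS); // simulate one 2 ms GC pause

        // These correspond to the InfluxDB fields listed above:
        System.out.println("count: " + timer.count());                          // count
        System.out.println("sum:   " + timer.totalTime(TimeUnit.MILLISECONDS)); // sum
        System.out.println("mean:  " + timer.mean(TimeUnit.MILLISECONDS));      // mean
        System.out.println("upper: " + timer.max(TimeUnit.MILLISECONDS));       // upper
    }
}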
Similarly, http_server_requests appears to be a Timer as well.
I believe you are correct that the sensible thing is to chart these on two separate Grafana panels: one panel for GC pause duration using sum (or mean or upper), and one panel for the number of GC events using count.