I have created a simple model (see first attachment) in AnyLogic. Resource unit W1 is seized in service and resource unit W2 is seized in service 1. The delay time of both service and service 1 is 5 minutes. The interarrival time of source is 10 minutes and the interarrival time of source 1 is 6 minutes.
Now I would like to analyse the usage state of both resource units, but in the dataset resource_unit_states_log only the state "usage_busy" is logged. Is there any possibility to also log the usage state "idle" in this dataset? Later in my evaluation I want to know the exact date and time when the resource was in the "idle" state. Currently I can only read the exact date and time for the "busy" state from the dataset (see screenshot in first attachment). Theoretically, I could manually calculate the date and time of the "idle" state from the existing values, but with thousands of entries that would take a long time.
Another attempt was to track the "idle" state using a time plot. If I use W1.time() as the x-axis value, I get the model time (e.g. 0, 1, 2 ...) in the dataset. But instead I want the exact date, as in the dataset resource_unit_states_log, e.g. 27-12-2021 00:06:00.
Does anyone have an idea how I can solve either of these problems?
AnyLogic internal tables/logs are not modifiable; they are as they are. If you want the data in any other format, you need to collect it yourself using your own data-collection functions/code. In your case, the second approach is pretty good: you are collecting information every minute and you can export it. I usually do the post-processing in Python. I work with millions of rows and it takes a few minutes; your thousands of rows should take a few seconds. Here is how I would do it:
Export the data (in your second plot approach) into Excel. The data should look like this:
Open Jupyter notebook (or any IDE).
Read the data into Python. Let's say you have saved your data as data.xlsx.
Input your start_datetime, i.e. the starting date and time of your simulation.
Then just add the minutes from your data to the start_datetime.
Write the modified data to a new Excel file called data_modified.xlsx. It will look like this:
Here is the full code:
import pandas as pd

# Read the exported plot data; column 'x' holds the model time in minutes
df = pd.read_excel('data.xlsx')

# Input your start date and time below:
start_datetime = pd.to_datetime('2021-12-31 00:00:00')

# Add the model-time minutes to the start datetime
df['datetime'] = start_datetime + pd.to_timedelta(df['x'], unit='m')

df.to_excel('data_modified.xlsx')
Another approach:
You can use the On seize unit and On exit fields inside the Service block to log the times at which the resource is seized and released, using the function time(), and write this information into a dataset. Then do the calculations.
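For example, a minimal sketch (the dataset name ds_busyLog is hypothetical): with a DataSet object ds_busyLog on Main, the two Service block fields could log the busy/idle transitions directly:

// On seize unit:
ds_busyLog.add(time(), 1); // resource becomes busy at this model time
// On exit:
ds_busyLog.add(time(), 0); // resource is released and is idle again

The gaps between a 0 and the next 1 are exactly the idle intervals you are after.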
You can also use some of AnyLogic's conversion functions, as below:
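For instance, a sketch assuming AnyLogic's conversion function timeToDate(), which turns a model time value into a calendar Date:

// e.g. in On seize unit, log a calendar timestamp instead of the bare model time:
traceln(timeToDate(time()) + " - W1 seized"); // the label is illustrative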
Related
I'm using Graphite + Grafana to monitor (by sampling) queue lengths in a test system at work. The data that gets uploaded to Graphite is grouped into different series/metrics by properties of the payloads in the queue. These properties can be somewhat arbitrary, at least to the point where they are not all known at the time when the data collection script is run.
For example, a property could be the project that the payload belongs to and this could be uploaded as a separate series/metric so that we can monitor the queues broken down by the different projects.
This has the consequence that Graphite sends a lot of null values for certain metrics if the queues in the test system did not contain any payloads with properties that would group them into that specific series/metric.
For example, if a certain project did not have any payloads in the queue at the time when the data collection was run.
In Grafana this is not so nice, as the line graphs don't show up as connected lines and gauges will show either null or the last non-null value.
For line graphs I can just choose to connect null values in Grafana, but for gauges that's not possible.
I know about the keepLastValue function in Graphite, which includes a limit for how long to keep the value. I actually like that very much, as I only want to keep the last value until the next time data collection is run; data collection runs periodically at known intervals.
The problem with keepLastValue is that it expects a number of points as this limit; I would rather give it a time period instead. In Grafana the relationship between time and data points is very dynamic, so it's not easy to hard-code a good limit for keepLastValue.
Thus, my question is: Is there a way to tell Graphite to keep the last value for a given time instead of a given number of points?
I'm trying to calculate the overtime of each resourcePool. What I have come up with is time-consuming and almost entirely manual, so I'm looking for the correct way to do this, please.
What I'm doing is as follows. For example, I have the pharmacists' resource pools (a regular-hours pool and an overtime pool; I split them to make it easier to get statistics for each shift and for optimization later on):
I added the following traceln in the On release action of the pharmacists:
traceln(date());
traceln(agent.OrderID);
So it returns the date and time of all passing agents. I then run the model, copy the traceln output from the console, and process it in Excel:
overtime hours of a day = last recorded time on that date - start of the overtime shift
I repeat the above for each day, which takes too much time, especially since there are other resources whose overtime I also need.
Is there any simplified and quick method to get the daily overtime (and the daily mean overtime) of a resourcePool?
Thank you.
I have a daily count metric being pushed to Prometheus. It's important to have the measurement every few minutes, but I also want the measurement at a specified time (end of the day) to see the daily total. Is there a way to specify the time of the measurement?
I have set the min_step (time step) to 24h. Doing so gives me measurements at 20:00:00 each day. Ideally this would be 23:50:00 through 23:59:59.
The chart type is a Graph, and the PromQL query is:
max(table_row_count) by (table)
with min_step = 24h, format = time series, and min time interval = 24h. Relative time is set to 7d to get a weekly view of the tables.
I am expecting some way to be able to set the timestamp of the query that should be run every 24h.
Prometheus doesn't have any cron features; you would have to resort to scheduling it yourself.
This means that the first requirement is to get the data you want at the given time. This can easily be done with a GET on the URL of the metric you want (for example, using curl).
Now, the question is how to feed it to Prometheus. I see three possibilities:
dump the content into a file and let node_exporter expose it to Prometheus (and erase it after a time); a careful rewrite of the metrics can be used in Prometheus to sanitize it (see the sketch after this list)
write your own exporter to expose it (easy to do, especially since you already have the data in the right format)
push it to a Pushgateway, but there is currently no way to make the data expire.
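Here is a minimal sketch of the first option, as a small Java program to be run from cron at the desired time; the endpoint URL, the filesystem path, and the metric name are all assumptions. It fetches the exposition text, keeps the series of interest under a new name, and writes it where node_exporter's textfile collector will pick it up:

import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class DailySnapshot {
    public static void main(String[] args) throws IOException, InterruptedException {
        // GET the raw exposition text from the exporter you want to sample
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://app-host:8080/metrics")) // hypothetical endpoint
                .build();
        String body = client.send(request, HttpResponse.BodyHandlers.ofString()).body();

        // Keep only the series of interest, renamed so it cannot clash with
        // the directly scraped metric
        StringBuilder out = new StringBuilder();
        for (String line : body.split("\n")) {
            if (line.startsWith("table_row_count")) {
                out.append("daily_").append(line).append('\n');
            }
        }

        // Write via a temp file and an atomic move so node_exporter never
        // sees a half-written file
        Path dir = Path.of("/var/lib/node_exporter/textfile"); // hypothetical path
        Path tmp = dir.resolve("daily_snapshot.prom.tmp");
        Files.writeString(tmp, out.toString());
        Files.move(tmp, dir.resolve("daily_snapshot.prom"), StandardCopyOption.ATOMIC_MOVE);
    }
}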
I want to set the warm-up period in AnyLogic Personal Learning Edition. I searched for a warm-up period setting in AnyLogic, but I couldn't find anything about it.
Is there a warm-up period in AnyLogic, or something like it?
There is no default warm-up setting as it would not make sense given the vast flexibility of the tool and user needs.
It is easy, however, to set it up yourself. As usual, there are many different options; here is one:
create a variable v_WarmupDuration on Main and set it to however many time units you need
for any data object you want to record only after the warm-up period, ensure it only captures data if time() > v_WarmupDuration (see the sketch after this list)
Events can have a custom initial time, for which you can use v_WarmupDuration.
Functions that log data should only do so if time() > v_WarmupDuration, and so on.
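A minimal sketch of that guard (the recurring event, the dataset ds_queueLength, and the sampled queue block are all hypothetical):

// e.g. in the action of a cyclic event that samples a queue every minute:
if (time() > v_WarmupDuration) {
    ds_queueLength.add(time(), queue.size()); // only record after the warm-up
}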
Alternatively, log all your data as normal but add time stamps to it. Then, you can filter out everything recorded before the end of the warm-up period when you analyze the data.
Creating a warmup variable works fine for metrics you create yourself.
But if you want to use the built-in functionality, such as histograms created from the timeMeasureStart and timeMeasureEnd blocks in AnyLogic, you will also need an extra select block before the timeMeasureEnd. For example, assuming you set v_WarmupDuration to 60 minutes, you need a select block whose false branch goes straight to a sink block or to the next element after the timeMeasureEnd.
Condition if true: time(MINUTE) > v_WarmupDuration
That way, the warm-up period will not accumulate into the dataset of the timeMeasureEnd.
If you want to set this as a parameter of an experiment, then ...
Add a variable to the experiment page, off the screen, e.g. v_warmupMins
Add a control like a slider on the experiment page and link it to the variable v_warmupMins
Add a parameter to hold the warm-up time on the Main canvas, e.g. p_warmupMins
In the experiment properties, set the parameter p_warmupMins = v_warmupMins
To programmatically add this time onto the stop time, add this to the experiment's Before simulation runs code:
getEngine().setStopTime( getEngine().getStopTime(MINUTE) + v_warmupMins );
Now when I run the experiment with the slider set to 60 mins, it adds 60 mins onto the stop time and runs the experiment without accumulating metrics until that time has passed.
Hope that helps.
I have a JMeter project with multiple GET and POST requests and assertions for these. I use the Aggregate Results and View Results Tree listeners, but neither of these can store results on an hourly basis. I tried the JMeterPlugins-Standard and JMeterPlugins-Extras packages and the jp@gc - Graphs Generator listener, but all of them use aggregated data instead of hourly data. So I would like to get the number of successful and failed requests/assertions per hour; maybe a bar chart would be most suitable for this purpose.
I'm going to suggest a non-conventional, design-level solution: name your samplers dynamically with the hour (or date and hour), so that each hour the name changes and the results land in a different category, i.e.:
The code for such a name is:
${__time(dd:hh,)} the rest of sampler name
Such a sampler will appear the following way in the Aggregate Report (here I simulated it with minutes/seconds, but the same will happen with days/hours, just on a larger scale):
Pros and cons of such an approach:
Simple: you can aggregate anything by hour, minute, or any other time slice while the test is running, rather than by analysis after execution.
Not listener-dependent; it can be used with pretty much any listener or visualizer.
If you also want overall stats, you will have to sum up every sub-category. So it alters the data, but in a way that can still be added back to the original relatively easily.
Calculating __time before every sampler will not go completely unnoticed from a performance perspective, but I don't think it will add visible overhead to a script.
You could get the same data by properly aggregating the JTL or CSV file (whichever you use) after execution, so it doesn't provide anything that is impossible to achieve using standard methods.
The script needs altering to make this happen. If you have 100s of samplers, it's going to take a while. And if you want to change back...
You might want to use the Filter Results Tool, which has --start-offset and --end-offset parameters; you can "cut" your results file into "interesting" pieces and plot them according to your requirements.
You can install the Filter Results Tool using the JMeter Plugins Manager.
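For example, a hypothetical invocation that keeps only the first hour of a test (the file names are assumptions; the offsets are given in seconds):

FilterResults.bat --input-file results.jtl --output-file results_hour1.jtl --start-offset 0 --end-offset 3600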
Also be aware that, according to JMeter Best Practices, you should:
Use as few Listeners as possible; if using the -l flag as above they can all be deleted or disabled.
Don't use "View Results Tree" or "View Results in Table" listeners during the load test, use them only during scripting phase to debug your scripts.
You can get whatever information you need from the .jtl results file; you can specify the test results location via the -l command-line argument.
To get summarized results per hour, add a Generate Summary Results listener to your test plan:
Generates a summary of the test run so far to the log file and/or standard output
Update the interval in jmeter.properties to your needs, i.e. 1 hour = 3600 seconds:
summariser.interval=3600
You will get a summary of your requests per hour.
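For reference, the relevant summariser settings in jmeter.properties would look like this (with the hourly interval from above):

summariser.name=summary
summariser.interval=3600
summariser.log=true
summariser.out=true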
You can try the JMeter Backend Listener. It has integrations with Graphite and InfluxDB. After storing the results in one of these time-series databases, you can display them in a Grafana dashboard. Grafana has its own filtering for showing results on an hourly, daily, or monthly basis, and so on.