In my model I want to calculate the utilized time, in seconds, of a service block and display it in a bar chart as a percentage, i.e. the percentage of the service time of service block 1 out of the total service time of the model.
For example:
service block 1 = 60 seconds
service block 2 = 10 seconds
service block 3 = 400 seconds
Total time of the service blocks = 470 seconds
Service block 1 utilized time is (60/470)*100 ≈ 12.7%
So I have calculated the utilized time as shown in the picture below; TimeIn is a variable in the agent.
Picture 2 shows the variable, statistics, and dataset used for the calculation, and the bar chart display. D2 is the dataset used in the bar chart's value display.
D2.add((agent.TimeIn-agent.TimeOut)/X)
My Question:
How can I get the bar chart to display only the 12.7% out of 100%? Currently it shows 100% every time I run the model.
I have used the following in the bar chart value window:
D2.getYMean()
Any suggestions?
Thanks
One of the most straightforward ways to measure time utilization in a service station is to use a Time Measure block, as shown in the figure below.
There are other ways to find the utilization of a service station. One method, mentioned in the AnyLogic help, is double utilization() (returns the mean utilization of this block; the returned value is the average, collected over time, of the number of agents being serviced). I have tried using utilization() with a Service block, but it didn't work. However, it does work with a Delay block (x.stats.Utilization.mean(); see https://paginas.fe.up.pt/~ee01260/AnyLogic%20Models/Bank/AnyLogic_6_Enterprise_Library_Tutorial.pdf).
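To make that concrete, here is a minimal sketch of the second approach as AnyLogic action code (Java). The block name delay is an assumption, and the statistic accessor follows the AnyLogic 6 tutorial linked above, so it may differ in newer versions:

// "delay" is an assumed Delay block name; D2 is the dataset from the question.
// The stats.Utilization accessor is taken from the AnyLogic 6 tutorial above.
double meanUtilization = delay.stats.Utilization.mean(); // value in [0, 1]
D2.add(meanUtilization * 100.0); // store as a percentage for the bar chart

With the dataset holding percentages directly, the bar chart's value field can show D2.getYMean() (or the latest value) without further scaling.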
I understand that rate(xyz[5m]) * 60 is the rate of xyz per minute, averaged over 5 mins.
How then would $__rate_interval and $__interval be defined, possibly in the same syntax?
In what unit is rate being measured here, in my panel? Per minute or per second?
What does the interval = 30s in my panel mean? My scrape interval is set to 5s.
How do I change the rate unit?
See New in Grafana 7.2: $__rate_interval for Prometheus rate queries that just work.
Rate is always per second. See Grafana documentation for the rate function.
Click on Query options, then click on the Info-Symbol. An explanation will be displayed.
To get the rate per minute, just multiply the rate by 60.
Edit: $__rate_interval and $__interval
Prometheus periodically fetches data from your application. Grafana periodically fetches data from Prometheus. Grafana does not know how often Prometheus polls your application for data; it estimates this time by looking at the data. The $__interval variable expands to the duration between two data points in the graph. (Note that this is only true for small time ranges and high resolution, as the intended use case for $__interval is reducing the number of data points when the time range is wide. See Approximate Calculation of $__interval.)
If the time distance between any two data points in each series is 15 seconds, it does not make sense to use anything less than [15s] as the interval in the rate function. The rate function works best with at least 4 data points, so [1m] would be much better than anything between [15s] and [1m]. This is what $__rate_interval tries to achieve: guessing a minimal sensible interval for the rate function.
Personally, I think this does not always work if your application delivers sparse data. I prefer using fixed intervals like 10m or even 1h or 1d in these situations. The interval needs to be large enough to give the rate function enough data points to work with.
A different approach would be to use either $__rate_interval or $__interval but also set the Min step parameter for the query in the Grafana UI to a sufficiently large value.
My problem is as follows: I would like to create a graph of the percentage use of boxes over 24 hours. However, the box.utilization() function is cumulative, so I tried to solve the problem by creating a dataset that collects the values every hour and an event that resets the utilization so that the next hour is not affected by the previous hour's utilization.
(I attach a picture of the graph I created).
Is there a more efficient way?
I have faced the same issue before. Here is how I handled it:
Instead of cumulative utilization, I calculate the maximum hourly utilization. That is, I record the number of seized resources every minute and get an array of 60 elements, then divide the maximum number in that array by the total number of resources available. An example:
I have 100 machines
During an hour, a maximum of 60 of them were busy
60/100 = 60% maximum utilization during that hour
Then I plot these values for each hour; a rough sketch of this approach follows below.
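Here is roughly how that could look as AnyLogic action code (Java). The names machines, sampleMinute, rollupHour, and hourlyUtilization are illustrative assumptions, not built-ins; machines is assumed to be a ResourcePool whose busy() and size() report the seized and total units:

// Variable declared on the agent (type int):
int maxBusy = 0;

// Action of a cyclic event "sampleMinute" (recurrence: 1 minute):
maxBusy = Math.max(maxBusy, machines.busy()); // track the busiest minute so far

// Action of a cyclic event "rollupHour" (recurrence: 1 hour):
hourlyUtilization.add(100.0 * maxBusy / machines.size()); // max utilization, in %
maxBusy = 0; // reset for the next hour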
I have a question. I send a counter to Graphite; it increases every time somebody uses an endpoint, so it grows slowly over the day. I want to display on a dashboard the number of connections over time (a histogram with one bar per 5 minutes). For example, I currently have something like this:
And I want Grafana to display the change over time (per 5 minutes). It started at 13:31, so I want one bar (from 13:31 to 13:36) with value 12, the next bar with value 0, and so on (for example, if the counter increases by 3, the next bar will have value 3). I have no idea how to do this and would be glad if you could help.
For the rate of change over time, have a look at the perSecond function of Graphite.
For the actual change (i.e. the derivative) in your use case, I'd look at the nonNegativeDerivative function:
https://graphite.readthedocs.io/en/latest/functions.html
I used this (as per the example there) to calculate network traffic.
I use Prometheus to monitor an API service. Currently, I use a Counter to count the number of requests received and a Gauge for the response time in milliseconds.
I've tried to use something like count_over_time(response_time_ms[1m]) to count requests during a time range. However, I get a result where each point has a value of 10.
Why doesn't this work?
count_over_time(response_time_ms[1m]) will tell you the number of samples, not the number of times your Gauge was updated within (what I assume to be) a Java process. Based on the value of 10 you're seeing, I'm assuming your scrape interval is 6 seconds.
For an explanation of why this doesn't work as you would expect it, a Gauge is simply a Java object wrapping a double value. Every time you set its value, that value changes, but nothing more. There's no count of how many times the value changed or any notification sent to Prometheus that this happened. Prometheus simply polls every 6 seconds and collects whatever value was there at the time (never the wiser that the value changed 15 times since the last time it was collected). This is why gauges are intended to measure single values that go up and down (such as memory utilization: it's now 645 MB, in 6 seconds it's 648 MB, in 12 seconds 543 MB): you know the value constantly changes, but the best you can do is sample it every now and then.
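As an illustration, here is a sketch using the Prometheus Java simpleclient (the metric name is made up for this example):

import io.prometheus.client.Gauge;

public class GaugeExample {
    static final Gauge responseTimeMs = Gauge.build()
        .name("response_time_ms")
        .help("Last observed response time in ms.")
        .register();

    static void recordResponses() {
        // Each set() simply overwrites the stored value; nothing records how
        // often it changed between scrapes:
        responseTimeMs.set(42.0);
        responseTimeMs.set(17.0); // if no scrape happens in between, 42.0 is never seen
    }
}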
For something like request latency, you should use a Histogram: it's basically a counter for the number of observations (i.e. number of requests); a counter for the sum of all observations (i.e. how long all requests put together took); and separate counters for each bucket (i.e. how many requests took less than 1 ms; how many requests took less than 10 ms; etc.). From this you can get an accurate average over any multiple of your scrape interval (i.e. change in total time divided by change in number of requests) as well as estimates for any percentile (including the median). How precise said percentiles are depends on the bucket sizes you choose (and how well they actually match the actual measurements).
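For example, a minimal sketch with the Prometheus Java simpleclient (the metric name and bucket boundaries are illustrative choices):

import io.prometheus.client.Histogram;

public class LatencyMetrics {
    static final Histogram responseTime = Histogram.build()
        .name("response_time_seconds")
        .help("API response time in seconds.")
        .buckets(0.005, 0.01, 0.05, 0.1, 0.5, 1, 5) // bucket upper bounds, in seconds
        .register();

    void handleRequest() {
        Histogram.Timer timer = responseTime.startTimer(); // starts the clock
        try {
            // ... handle the request ...
        } finally {
            timer.observeDuration(); // records the elapsed seconds into the histogram
        }
    }
}

The exporter then exposes response_time_seconds_count, response_time_seconds_sum, and per-bucket series, so the average latency over the last minute is rate(response_time_seconds_sum[1m]) / rate(response_time_seconds_count[1m]).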
Or, if all you're interested in is the number of requests, then a counter that's incremented on every request will be enough. To adjust for counter resets (e.g. job restarts), you should use increase() rather than the simple difference suggested in the other answer:
increase(number_of_requests_total[1m])
If you want to count the number of requests over some specific time window ending now (the last 1m in this case), just use
number_of_requests_counter - number_of_requests_counter offset 1m
If you want something like requests per second, then use
rate(number_of_requests_counter[1m])
I can tell you why it's not working with your Gauge, but first of all, specify what you assign to this metric. I mean, do you assign some average, the last response time, or something else?
For response time you should use a Summary or a Histogram (more info here).
I am having trouble understanding the output of my scope in this simple Simulink model:
I am using a fixed step solver (tried with ode3 and ode8).
The Pulse type of the pulse generator is set to Sample based, and I varied the Period and Pulse Width.
First I set the simulation time to 10 and set the pulse generator to Period = 10 and Pulse width = 5. The output of the scope is as expected:
But when I tried simulation time 10,000 with Period = 1,000 and Pulse width = 500, it seems my scope is wrong:
Why is the first falling edge at 5,500? I used the Autoscale button every time.
Using simulation time 100,000 with Period = 10,000 and Pulse width = 5,000 I don't even get a single falling edge:
Even with a longer simulation time there seems to be only a single rising edge at the end of the scope window.
What am I doing wrong? Is the scope not suitable for such long simulation times using fixed step solver? Or is it not "safe" to use the Autoscale button?
All of the plots you show are correct. Simulink is fine with long simulation times. It is "safe" to use the Autoscale button.
By default a scope is set to only display the last 5000 simulation time steps. Since your model is taking a step size of 1s (this is based on using the default step size of the Pulse Generator, which is 1s), in your second plot you are only seeing points from t=5000 to t=10000 (so the first down step in that time period is at 5500), and in your third plot you are only seeing points from t=95000 to t=100000 (which is a period in which the value of the pulse is low/zero).
To see all simulation times, open the Scope block's parameters (by clicking the button with a picture of a cog on it), go to the History tab, and deselect the Limit data points to last: check box.
Then rerun your simulation and press the autoscale button. You'll then see what (I think) you are expecting.