I would like to get the mean time that units spend in my queue for every hour (so between 7-8 am, for example, 4 minutes; 8-9 am, 10 minutes; and so on). That is my current queue with my time measure. Is there a way to do so?
Create a normal dataset and call it datasetHourly. Deactivate the option Use time as horizontal value. This is where we will store your hourly data.
Create a cyclic event and set the trigger to cyclic, once every hour.
This cyclic event will take the current mean of your time measurement (waiting time + service time in your example) and save this single value in the extra dataset.
We also have to clear the dataset that is built into timeMeasureEnd, in order to get clean statistics again for the next hourly interval.
// record the hourly mean: current model time (in hours) vs. the mean of the measured times
datasetHourly.add(time(HOUR), timeMeasureEnd.dataset.getYMean());
// reset the TimeMeasureEnd dataset so the next hour starts with clean statistics
timeMeasureEnd.dataset.reset();
You can now visualise the hourly development by adding datasetHourly to a normal plot.
I want to stop the model at a specific model time.
Do I have to work with a counter variable for the time and then call stopSimulation(), or is there another possibility? My simulation will run for one week in model time. I want to stop the simulation 5 minutes before it ends, so 5 minutes before one week of model time is over.
You can specify the stop time in the simulation experiment properties. See below:
In the settings of Simulation:main you can define exactly when you want your simulation to stop.
See attached image:
To set one week minus 5 minutes, replace the number 100 with 10,075 (assuming your model runs in units of minutes).
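For reference, the arithmetic behind that value: 7 days × 24 hours × 60 minutes = 10,080 minutes, and 10,080 − 5 = 10,075 minutes.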
Good luck
I am working on a VRPTW and want to minimize the total time (travel time + waiting time) cumulated for all vehicles. So if we have 2 vehicles one that starts at time 0 and returns at time 50 and one that starts at time 25 and returns at time 100, then the objective value would be 50+75=125.
Currently I have implemented the following code:
for i in range(data['num_vehicles']):
    routing.AddVariableMinimizedByFinalizer(
        time_dimension.CumulVar(routing.End(i)))
However, this seems like it is only minimizing the time we arrive back at the depot.
Also it results in very high waiting times.
How do I implement it correctly in Google OR-Tools?
This quantity (the difference between each vehicle's end time and start time) is called the span.
See the SetSpanCostCoefficientForVehicle method for one vehicle.
You can also set it for all vehicles.
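A minimal sketch of what that could look like in Python, assuming time_dimension is the RoutingDimension from your model and using a cost coefficient of 1 so the span enters the objective unweighted:

# Add each vehicle's span (end time minus start time, i.e. travel + waiting time)
# to the objective, instead of only the arrival time back at the depot.
time_dimension.SetSpanCostCoefficientForAllVehicles(1)

# Equivalent per-vehicle form:
for vehicle_id in range(data['num_vehicles']):
    time_dimension.SetSpanCostCoefficientForVehicle(1, vehicle_id)

SetSpanCostCoefficientForAllVehicles is simply a convenience that applies the same coefficient to every vehicle.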
I have a periodic backend process and I would like to visualize the history of the length of cycles on my dashboard. Is it possible?
I have full control over the data/metrics I generate, so I could perhaps increment a counter every time a cycle completes (a cycle takes about 3 days), so I would get counter updates every 3 days or so. Then how could I get Grafana to report the length of each cycle? (for instance: 72h; 69h; 74h; etc.) The actual widget doesn't matter, but I need something visual to tell me at once if cycles are getting faster or slower.
Any pointers or ideas are welcome.
It looks like a standard time series: X-axis - time, Y-axis - duration [s]:
Then you may add:
trend line
aggregations (min/max/avg/derivative/diff/...)
moving average
other math functions that are available in the datasource you use
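If the metrics end up in Prometheus (just one possible datasource; yours may differ), a minimal sketch of recording each completed cycle's duration with the Python client could look like this; the metric name and run_cycle() are placeholders for your own process:

import time
from prometheus_client import Gauge, start_http_server

# Hypothetical metric: duration of the most recently completed cycle, in seconds.
cycle_duration = Gauge('backend_cycle_duration_seconds',
                       'Duration of the last completed backend cycle')

start_http_server(8000)  # expose /metrics for Prometheus to scrape

while True:
    started = time.time()
    run_cycle()  # placeholder for your periodic backend work
    cycle_duration.set(time.time() - started)

Grafana can then plot backend_cycle_duration_seconds as the time series described above.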
I have a job scheduling engine which can run jobs on various machines. I have a queue of pending jobs coming in as a stream (usually at least 10s of thousands of jobs waiting for execution). I have an algorithm to execute jobs on different machines.
One of the core metrics to track is how long after a job gets requested does it get scheduled for execution (usually it is less than 5 minutes, but can be up to 1 hour due to various reasons).
Is there a way to plot the percentiles of how long the current unassigned jobs have been in there for using Prometheus + Grafana (or mix of prometheus and other solutions like Redis)? I want to know what is Median waiting time, 95 and 99 percentiles of waiting times for the jobs.
The issue is that until the job gets scheduled for execution there is no event generated, and the longer we wait, the higher the bucket the job will move into. Furthermore, since jobs can take very different times to get scheduled (not every job is the same), simply relying on how long the past few jobs took to get scheduled is wrong.
One simple way would be to iterate over all pending jobs and compute the percentiles continuously, but that would be very expensive.
The Prometheus histogram implementations assume a fixed set of buckets (e.g. less than 1 second, less than 2 seconds, less than 5 seconds etc.) which may only be incremented (together with all buckets above them).
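For reference, a minimal sketch of such a fixed-bucket histogram with the Python client; the metric name and bucket boundaries are only illustrative:

from prometheus_client import Histogram

# Buckets are fixed when the histogram is created; observations can only increment them.
job_wait = Histogram('job_wait_seconds',
                     'Time from job request to scheduling',
                     buckets=(60, 120, 300, 600, 1800, 3600))

job_wait.observe(42.0)  # e.g. a job that waited 42 seconds before being scheduled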
In your case, you have 2 options:
Record the duration each job has been queued for in the histogram. The problems with this approach are that (a) you would have to keep "moving" every job up the histogram as time goes on; and (b) you can't remove a job from the histogram once it is processed (because of the monotonicity requirement).
Record the time when each job was added into a histogram (e.g. records added before 1 minute past the hour, records added before 2 minutes past the hour etc.). The problem here is that your histogram is not static in size and will grow indefinitely (assuming your Prometheus client allowed it in the first place).
You are thus left with a couple of alternatives:
Iterate over your queue and create a fresh histogram (or directly the percentiles you're interested in) every time you're scraped by Prometheus; a sketch of this approach follows below. Tens of thousands of jobs to iterate over doesn't sound all that bad; it should take milliseconds to do. You could even replace the data structure you use for your queue with e.g. a binary search tree, which would make it really easy to figure out the exact percentiles you are interested in, in logarithmic time.
Give up on recording queuing times for pending jobs and only do it for processed jobs. Every time a job is processed, you increment a histogram. It doesn't get simpler than that.
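A minimal sketch of the first alternative, assuming a Python service that keeps the pending queue in memory and uses prometheus_client; the metric name, the pending_jobs structure, and the percentile math are illustrative:

import time
from prometheus_client.core import GaugeMetricFamily, REGISTRY

pending_jobs = {}  # job_id -> enqueue timestamp (stand-in for your real queue)

class QueueWaitCollector:
    # Computes wait-time percentiles over the pending jobs at scrape time.
    def collect(self):
        now = time.time()
        waits = sorted(now - t for t in pending_jobs.values())
        metric = GaugeMetricFamily('pending_job_wait_seconds',
                                   'Wait-time percentiles of unassigned jobs',
                                   labels=['quantile'])
        if waits:
            for q in (0.5, 0.95, 0.99):
                metric.add_metric([str(q)], waits[int(q * (len(waits) - 1))])
        yield metric

REGISTRY.register(QueueWaitCollector())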
Background: I’m doing analysis of call detail record (CDR) data in order to segment customers with respect to their call duration, time of call (holiday or non-holiday call, business or non-business call), age group of the subscriber, and gender. The data comes from two tables: cdr (with card_number, service_key, calling, called, start_time, clear_time, and duration columns) and subscriber_detail (with subscriber_name, subscriber_address, DOB, and gender columns).
I have designed the OLAP schema as given below.
Call_date includes the date of the call with year, month, and day. Call_time is the time the call happens, in seconds.
Question: if we take call_time in seconds, the dimension has 86,400 members for each day (possibly a curse of dimensionality), so we are thinking of reducing its dimensionality by using a 30-second time pulse (telecom operators charge on the basis of pulses, and 30 seconds is the pulse duration in our context). The first question is: is replacing time with the pulse duration the best way? The second is: if one subscriber makes more than one call within the range of a single pulse, it may cause a problem, e.g. the first call starts at 21:01:00 and ends at 21:01:05, and the second call starts at 21:01:15 and ends at 21:01:20. How do we resolve this type of problem?
If I were you, I would divide the time into 10-minute slots and use a linked list to store the multiple call durations that fall within a given time slot, so the total size of the time dimension is 144 members (which restricts drill-down to 10-minute granularity only).
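For illustration, one way to compute the slot index, assuming call_time is expressed as seconds since midnight (the function name is hypothetical):

# Map a call start time (seconds since midnight) to a 10-minute slot index in 0..143.
def time_slot(call_time_seconds):
    return call_time_seconds // 600  # 600 s = 10 min; 86400 / 600 = 144 slots

slot = time_slot(21 * 3600 + 1 * 60)  # 21:01:00 falls into slot 126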
I would keep start_call_time, end_call_time and elapsed_call_time in seconds.
Then having elapsed_time does not mean the cube would have a dimension of 86,400 members; you could set up a 'ranged/banded' dimension, i.e. a dimension that is built using intervals instead of instants. This is possible, for example, with icCube (www).