I am working on a VRPTW and want to minimize the total time (travel time + waiting time) accumulated over all vehicles. So if we have two vehicles, one that starts at time 0 and returns at time 50 and one that starts at time 25 and returns at time 100, then the objective value would be 50 + 75 = 125.
Currently I have implemented the following code:
# Tell the solver's finalizer to minimize the time at which each vehicle arrives back at the depot.
for i in range(data['num_vehicles']):
    routing.AddVariableMinimizedByFinalizer(
        time_dimension.CumulVar(routing.End(i)))
However, this seems to only minimize the time at which each vehicle arrives back at the depot.
It also results in very high waiting times.
How do I implement this correctly in Google OR-Tools?
This is called the span.
See the SetSpanCostCoefficientForVehicle method for one vehicle.
You can also set it for all vehicles at once with SetSpanCostCoefficientForAllVehicles.
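For example, a minimal sketch assuming time_dimension is the dimension from your snippet (the coefficient of 100 is an arbitrary illustration):

# Add each vehicle's span (route end time minus route start time,
# i.e. travel time plus waiting time) to the objective.
time_dimension.SetSpanCostCoefficientForVehicle(100, 0)  # vehicle 0 only

# Or apply the same coefficient to every vehicle at once:
time_dimension.SetSpanCostCoefficientForAllVehicles(100)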
I would like to get the mean waiting time each unit spends in my queue, for every hour (so between 7-8 am for example 4 minutes, 8-9 am 10 minutes, and so on). That's my current queue with my time measure. Is there a way to do so?
Create a normal dataset and call it datasetHourly. Deactivate the option Use time as horizontal value. This is where we will store your hourly data.
Create an event and set its trigger type to cyclic, firing once every hour.
This cyclic event will get the current mean of your time measurement (waiting time + service time in your example) and save this single value in the extra dataset.
We also have to clear the dataset that is built into the timeMeasureEnd block, in order to get clean statistics again for the next hour interval.
// save this hour's mean measured time (waiting + service) into the hourly dataset
datasetHourly.add(time(HOUR), timeMeasureEnd.dataset.getYMean());
// clear the built-in dataset so the next hour starts with fresh statistics
timeMeasureEnd.dataset.reset();
You can now visualise the hourly development by adding datasetHourly to a normal plot.
I use Prometheus to monitor an API service. Currently, I use a Counter to count the number of requests received and a Gauge for the response time in milliseconds.
I've tried to use something like count_over_time(response_time_ms[1m]) to count requests during a time range. However, I got a result where each point has a value of 10.
Why doesn't this work?
count_over_time(response_time_ms[1m]) will tell you the number of samples, not the number of times your Gauge was updated within (what I assume to be) a Java process. Based on the value of 10 you're seeing, I'm assuming your scrape interval is 6 seconds.
For an explanation of why this doesn't work as you would expect it, a Gauge is simply a Java object wrapping a double value. Every time you set its value, that value changes, but nothing more. There's no count of how many times the value changed or any notification sent to Prometheus that this happened. Prometheus simply polls every 6 seconds and collects whatever value was there at the time (never the wiser that the value changed 15 times since the last time it was collected). This is why gauges are intended to measure single values that go up and down (such as memory utilization: it's now 645 MB, in 6 seconds it's 648 MB, in 12 seconds 543 MB): you know the value constantly changes, but the best you can do is sample it every now and then.
For something like request latency, you should use a Histogram: it's basically a counter for the number of observations (i.e. number of requests); a counter for the sum of all observations (i.e. how long all requests put together took); and separate counters for each bucket (i.e. how many requests took less than 1 ms; how many requests took less than 10 ms; etc.). From this you can get an accurate average over any multiple of your scrape interval (i.e. change in total time divided by change in number of requests) as well as estimates for any percentile (including the median). How precise said percentiles are depends on the bucket sizes you choose (and how well they match the actual measurements).
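As an illustration, here is a minimal sketch using the Python client (prometheus_client); the metric name and bucket boundaries are just examples:

from prometheus_client import Histogram, start_http_server

# Bucket boundaries are examples; pick them to match your expected latencies.
REQUEST_LATENCY = Histogram(
    'http_request_duration_seconds',
    'Time spent handling a request',
    buckets=(0.005, 0.01, 0.05, 0.1, 0.5, 1.0, 5.0),
)

def handle_request():
    with REQUEST_LATENCY.time():  # records the elapsed time as one observation
        ...  # do the actual work here

start_http_server(8000)  # expose /metrics for Prometheus to scrape

The average latency over the last five minutes is then rate(http_request_duration_seconds_sum[5m]) / rate(http_request_duration_seconds_count[5m]), and histogram_quantile() over the bucket rates gives percentile estimates.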
Or, if all you're interested in is the number of requests, then a counter that's incremented on every request will be enough. To adjust for counter resets (e.g. job restarts), you should use increase() rather than the simple difference suggested in the other answer:
increase(number_of_requests_total[1m])
If you want to count the number of requests over some specific time range ending now (the last 1m in this case), just use
number_of_requests_counter - number_of_requests_counter offset 1m
If you want something like requests per second, then use
rate(number_of_requests_counter[1m])
I can tell you why it's not working with your Gauge, but first of all, specify what you assign to this metric. I mean, do you assign some average, the last response time, or something else?
For response time you should use a Summary or a Histogram (more info here).
We are writing a message count per application to InfluxDB every 10 seconds. I want to be able to generate an alert if that number has not changed in the last 15 minutes.
I tried derivative, but that gives the change for each data point. The unit parameter just scales the result. Derivative works well for our chattier apps where we can check if a message was sent every 10s, but the 15 minute window is not working.
I tried using spread with a batched query grouped by time, but that gives me the change in whole quarters of the hour (00 to 15, 15:01 to 30, 30:01 to 45...). I want to be able to check the last 15 minutes and check it every minute or so.
I tried using a windowed stream with spread, but it seems to be grabbing points outside the window since it is giving a non-zero answer.
Background: I'm doing analysis of call detail record (CDR) data in order to segment customers with respect to their call duration, time of call (holiday or non-holiday call, business or non-business call), age group of the subscriber, and gender. The data comes from two tables: cdr (with card_number, service_key, calling, called, start_time, clear_time and duration columns) and subscriber_detail (with subscriber_name, subscriber_address, DOB and gender columns).
I have designed the OLAP schema as given below.
Call_date holds the date of the call with year, month, and day. Call_time is the time the call happened, in seconds.
Question: if we take call_time in seconds, the dimension has 86400 members for each day (possibly a curse of dimensionality), so we are thinking of reducing its dimensionality by using a 30-second pulse (telecoms charge on the basis of the pulse, and 30 seconds is the pulse duration in our context). The first question is: is replacing the time by the pulse duration the best approach? The second is: if one subscriber makes more than one call within the range of a single pulse it may cause a problem, e.g. the first call starts at 21:01:00 and ends at 21:01:05, and the second call starts at 21:01:15 and ends at 21:01:20. How do we resolve this type of problem?
If I were you, I would divide the time into 10-minute slots and use a linked list to store the multiple call durations within a given time slot, so the total size of the time dimension is 144 (which restricts drill-down to 10-minute granularity only).
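As a rough sketch of the mapping (hypothetical Python, just to show how a call's start time lands in one of the 144 slots):

from datetime import datetime

def ten_minute_slot(start_time: datetime) -> int:
    # Map a call start time to one of 144 ten-minute slots per day (0..143).
    return (start_time.hour * 60 + start_time.minute) // 10

# Both calls from the example (21:01:00 and 21:01:15) fall into slot 126,
# so their durations would be stored together under that slot.
print(ten_minute_slot(datetime(2015, 1, 1, 21, 1, 0)))  # -> 126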
I would keep start_call_time, end_call_time and elapsed_call_time in seconds.
Having elapsed_call_time in seconds does not mean the cube would have a dimension of 86400 members; you could set up a 'ranged/banded' dimension, i.e., a dimension that is built using intervals instead of instants. This is possible, for example, with icCube (www).
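A rough sketch of the banding idea (the band boundaries below are made up for illustration; tools like icCube let you declare such ranges on the dimension itself):

# Hypothetical duration bands in seconds: each call maps to one band member
# instead of one of 86400 distinct second values.
BANDS = [(0, 30, '0-30s'), (30, 60, '30-60s'), (60, 300, '1-5min'), (300, None, '5min+')]

def duration_band(elapsed_call_time: int) -> str:
    for low, high, label in BANDS:
        if low <= elapsed_call_time and (high is None or elapsed_call_time < high):
            return label
    return 'unknown'

print(duration_band(125))  # -> '1-5min'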
I want to be able to rank users based on how quickly they have completed each level. I want this to be an overall leaderboard, i.e. the shortest overall time across all levels.
The problem here is that for each level completed, the total completion time goes up. But I want to ensure that the leaderboard takes that into account, so that a user who has completed 10 levels will rank more highly than someone with only 1 completed level.
How can I create some kind of score based on this?
Do this before submitting the time to the leaderboard.
You could adjust the total time based on the number of levels completed: for each level completed, reduce the total by a set amount, so that people who complete all levels with a given average time score better than people with the same average time but fewer completed levels.
My preferred method:
Alternatively, you could express it as a score value.
Level complete = 1,000 points.
Each level has a set time-limit bonus: the longer you take, the less bonus you get.
E.g. I complete the level in 102 secs and the goal time is 120 secs.
I get 1,000 points for completion and 1,500 points for each second
that I beat the goal time by.
This way I will get 1,000 + (18 * 1,500) = 28,000 points.
The next guy completes in 100 secs.
He gets 1,000 + (20 * 1,500) = 31,000 points.
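In code, the formula above could look like this (a sketch; the constants are the ones from the example):

def level_score(completion_secs, goal_secs, completion_bonus=1000, per_second_bonus=1500):
    # Flat bonus for completing the level, plus a bonus for every second under the goal time.
    seconds_under_goal = max(0, goal_secs - completion_secs)
    return completion_bonus + seconds_under_goal * per_second_bonus

print(level_score(102, 120))  # -> 28000
print(level_score(100, 120))  # -> 31000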
I suggest adding a default amount of time to the total for each incomplete level. So, say, if a player beats a new level in 3 minutes, that replaces a 10 minute placeholder time, and they 'save' 7 minutes from the total.
Without that kind of trick, the iPhone has no provision for multi-factor rankings.
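A sketch of that approach (shown in Python for brevity; the 10-minute placeholder and the level list are just examples):

PLACEHOLDER_SECS = 600  # 10 minutes charged for every level not yet completed
ALL_LEVELS = ['level1', 'level2', 'level3']

def leaderboard_total(best_times):
    # Sum the player's best time per level, substituting the placeholder for incomplete levels.
    return sum(best_times.get(level, PLACEHOLDER_SECS) for level in ALL_LEVELS)

# Beating level1 in 3 minutes 'saves' 7 minutes relative to the placeholder:
print(leaderboard_total({'level1': 180}))  # -> 1380 (180 + 600 + 600)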
Leaderboard scores in GameKit have to be expressed as a single number (see this section of the GameKit Programming Guide), so that won't be possible.
Your best bet would be to just have a completion time leaderboard for people who have completed all the levels, and maybe another leaderboard (or a few) for people who have completed a smaller number of levels.