iOS Leaderboard: Rank users on overall shortest time

I want to be able to rank users based on how quickly they complete each level. I want this to be an overall leaderboard, i.e. the shortest overall time across all levels.
The problem here is that each completed level adds to the total completion time. But I want the leaderboard to take that into account, so that a user who has completed 10 levels ranks more highly than someone with only 1 completed level.
How can I create some kind of score based on this?

Before submitting the time to the leaderboard:
You could normalize the total time by the number of levels completed, then reduce it by a set amount for each completed level, so that people who complete all levels with a given average time will score better than people with the same average time but fewer levels.
My Preferred Method:
Or you could express it with a score value.
Level complete = 1,000 points.
Each level has a goal time with a bonus attached; the longer you take, the less bonus you get.
E.g. I complete the level in 102 secs and the goal time is 120 secs.
I get 1,000 points for completion and 1,500 points for each second that I beat the goal time by.
This way I will get 1,000 + (18 * 1,500) = 28,000 points.
The next guy completes in 100 secs.
He gets 1,000 + (20 * 1,500) = 31,000 points.
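A minimal sketch of that scheme in Python (the constants mirror the example above; the function names are made up for illustration):

COMPLETION_BONUS = 1000
PER_SECOND_BONUS = 1500

def level_score(completion_secs, goal_secs):
    # Seconds under the goal time earn the time bonus; going over earns none.
    secs_under_goal = max(0, goal_secs - completion_secs)
    return COMPLETION_BONUS + secs_under_goal * PER_SECOND_BONUS

def total_score(results):
    # results: list of (completion_secs, goal_secs) pairs, one per completed level
    return sum(level_score(c, g) for c, g in results)

print(level_score(102, 120))  # 1000 + 18 * 1500 = 28000
print(level_score(100, 120))  # 1000 + 20 * 1500 = 31000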

I suggest adding a default amount of time to the total for each incomplete level. So, say, if a player beats a new level in 3 minutes, that replaces a 10 minute placeholder time, and they 'save' 7 minutes from the total.
Without that kind of trick, Game Center has no provision for multi-factor rankings.
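A quick sketch of the placeholder idea (the 10-minute placeholder and the level count are assumptions for illustration):

PLACEHOLDER_SECS = 600  # assumed 10-minute placeholder per incomplete level
NUM_LEVELS = 20         # assumed total number of levels

def leaderboard_time(completed_times_secs):
    # Every incomplete level contributes the placeholder, so finishing a
    # level always lowers the submitted total.
    incomplete = NUM_LEVELS - len(completed_times_secs)
    return sum(completed_times_secs) + incomplete * PLACEHOLDER_SECS

# Beating a new level in 3 minutes (180 s) replaces a 600 s placeholder,
# 'saving' 7 minutes from the submitted total.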

Leaderboard scores in GameKit have to be expressed as a single number (see this section of the GameKit Programming Guide), so that won't be possible.
Your best bet would be to just have a completion time leaderboard for people who have completed all the levels, and maybe another leaderboard (or a few) for people who have completed a smaller number of levels.


What period is P99 or P95 calculated over?

When committing to/setting SLAs for a service, what time period should the SLA be calculated over?
For example, if I wanted all the services in my organization to commit to P95 latency, and one of the services commits to 500ms, what is the time window - because the P95 will be different based on the time window we look at.
It depends on the cycles in which your latency fluctuates.
No daily or hourly peaks? A couple thousand samples will do just fine.
Daily fluctuations (e.g. peak usage, concurrent backups etc.)? Then you will need to measure at least a whole day.
Weekly fluctuations (e.g. tied to work hours or evening activities etc.)? Then you will need to sample over a full week.
There is no strict requirement to sample everything over the chosen time window, but your time window had better be representative, or you may be held liable. Also make sure to be fair when you under-sample.
If you want to be on the safe side, take the worst-case scenario in your load cycle, and within that scenario take a full minute's worth of samples. That gives you a good estimate of what will be held against you.
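To see how much the window matters, here is a small Python sketch (the sample data is entirely made up) that computes P95 over different windows of a simulated day with one slow peak hour:

import numpy as np

rng = np.random.default_rng(0)
day = rng.lognormal(mean=5.0, sigma=0.3, size=86_400)     # ~150 ms typical, 1 sample/sec
day[43_200:46_800] = rng.lognormal(6.2, 0.3, size=3_600)  # peak hour, ~500 ms typical

print("P95 over the full day:  %.0f ms" % np.percentile(day, 95))
print("P95 over the peak hour: %.0f ms" % np.percentile(day[43_200:46_800], 95))
print("P95 over a quiet hour:  %.0f ms" % np.percentile(day[:3_600], 95))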

What is the best way to represent a chart of distribution of time intervals in Datadog?

I have a server that processes packets from different devices. Devices can report in different intervals.
I would like to make a chart showing the distribution of intervals by the count of devices (how many devices are reporting within 5 sec/10 sec/60 sec ...)
Intervals for each device can vary.
Right now I'm sending a Set metric using the deviceId as the value, with tags that represent the interval (5 sec, 10 sec, 30 sec, and more), but I'm not sure that this is correct.
What is the best way to implement this?
Set is almost never the right custom metric type to use. It sends a count of the number of unique items per given tag. The underlying items' details are stripped from the metric, meaning that from one time slice to the next you have no idea of the actual number of unique items over time.
For example:
3:00:17-3:00:32 | 5 second bucket: [device1, device4, device7] -> 3 values
3:00:32-3:00:47 | 5 second bucket: [device1, device3] -> 2 values
Your time series to Datadog will report 3, and then 2. But because the underlying device info is stripped, you have no idea how to combine that 2 and 3 if you zoom out in time and roll the numbers up to show 1 data point per minute. It could be any number from 3 to 5, but the Datadog backend has no idea (even though we know that across those 30 seconds there were 4 unique values in total).
Plus, even if it were somehow accurate, you couldn't create an alert off it or notify anyone, because you won't know which device is having issues if you see a spike of devices in the 60-second bucket.
So let's go through other metric options.
The only metric types usually worth using are distributions, gauges, or counts.
A gauge metric is just a measurement of a value at a point in time. It's usually good for things like the CPU or memory usage of a computer, or the temperature in a room: numbers for which it's impossible to collect every data point, so you just take measurements every 10 seconds, or every minute, or however often you need to get an idea of the behavior.
A count metric is more exact: it's the number of things that happened. It's usually good for the number of requests to a server, or the number of files processed. Even something like the number of bytes flowing through something qualifies, although that is usually treated like a gauge by most people.
Distributions are good for when you want to create a gauge metric but you need detailed measurements for every single event that happens. For example, a web server is handling hundreds of requests per second and we need to know the latency metrics of that server. It's not possible to send a latency metric for every request as a gauge: gauges have a built-in limit of 1 data point per second (in Datadog), and anything more sent within a 1-second interval gets dropped. But we need stats for every request, so a distribution summarizes the data: it keeps a running count, min, max, average, and optionally several percentiles (p50, p75, p99).
I haven't seen many good use cases for metric types outside of those 3. For your scenario, it seems like you would want to send a distribution metric for that device interval. So device1 sends a value of 10.14, device3 sends a value of 2.3, and so on.
Then you can use a distribution widget in a dashboard to show the number of devices for each interval bucket.
Of course make sure you tag each metric by the device that is generating the metric.
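For example, with the datadog Python client (a sketch; the metric name and tags are made up):

from datadog import initialize, statsd

initialize(statsd_host="localhost", statsd_port=8125)

# Each device reports its observed interval (in seconds) as one distribution
# point, tagged with the device that generated it.
statsd.distribution("devices.report_interval", 10.14, tags=["device:device1"])
statsd.distribution("devices.report_interval", 2.3, tags=["device:device3"])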

Anylogic - How to measure work in process inventory (WIP) within simulation

I am currently working on a simple simulation that consists of 4 manufacturing workstations with different processing times and I would like to measure the WIP inside the system. The model is PennyFab2 in case anybody knows it.
So far, I have measured throughput and cycle time, and I am calculating WIP using Little's Law; however, the results don't match the expectations. The cycle time is measured using the time measure start and time measure end blocks, and the throughput by simply counting how many pieces flow through the end of the simulation.
Any ideas on how to directly measure WIP without using Little's law?
Thank you!
For Little's Law you count the arrivals, not the exits... but maybe it doesn't make a difference...
Otherwise, there are many ways:
you can count the number of agents inside your system using a RestrictedAreaStart block and the entitiesInside() function
you can just have a variable that adds +1 if something enters and -1 if something exits
No matter what, you need to add the information into a dataset or a statistics object, and then you get the mean number of agents in your system
Little's Law defines the relationship between:
Work in Process (WIP)
Throughput (or Flow rate)
Lead Time (or Flow Time)
This means that if you have two of the three, you can calculate the third.
Since you have a simulation model you can record all three items explicitly and this would be my advice.
Little's Law should then be used to validate if you are recording the 3 values correctly.
You can record them as follows.
WIP = Record the average number of items in your system
Simplest way would be to count the number of items that entered the system and subtract the number of items that left it. You simply do this calculation every time unit that makes sense for the resolution of your model (hourly, daily, weekly, etc.) and save the values to a DataSet or Statistics object.
Lead Time = The time a unit takes from entering the system to leaving the system
If you are using the Process Modelling Library (PML) simply use the timeMeasureStart and timeMeasureEnd Blocks, see the example model in the help file.
Throughput = the number of units out of the system per time unit
If you run the model and your average WIP is 10 units and on average a unit takes 5 days to exit the system, your throughput will be 10 units/5 days = 2 units/day
You can validate this by taking the total units that exited your system at the end of the simulation and dividing it by the number of time units your model ran
If you run a model with the above characteristics for 10 days, you would expect 20 units to have exited the system.
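A small Python sketch of the enter/exit counting and the Little's Law cross-check (the event times are made up; in AnyLogic itself you would use a variable plus a Statistics object as described above):

entries = [0.0, 0.2, 0.9, 1.5, 2.1, 3.0]  # hypothetical entry times (days)
exits   = [2.5, 3.1, 3.9, 4.4, 5.0, 6.0]  # hypothetical exit times (days)

def wip_at(t):
    # +1 for every entry so far, -1 for every exit so far
    return sum(e <= t for e in entries) - sum(x <= t for x in exits)

horizon = 10                               # run length in days
avg_wip = sum(wip_at(t) for t in range(horizon)) / horizon

throughput = len(exits) / horizon          # units out per day
lead_time = sum(x - e for e, x in zip(entries, exits)) / len(entries)

# Little's Law: average WIP should be close to throughput * lead time.
print(avg_wip, throughput * lead_time)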

Google OR-Tools: Minimize Total Time

I am working on a VRPTW and want to minimize the total time (travel time + waiting time) cumulated for all vehicles. So if we have 2 vehicles one that starts at time 0 and returns at time 50 and one that starts at time 25 and returns at time 100, then the objective value would be 50+75=125.
Currently I have implemented the following code:
for i in range(data['num_vehicles']):
    routing.AddVariableMinimizedByFinalizer(
        time_dimension.CumulVar(routing.End(i)))
However, this seems to be minimizing only the time at which each vehicle arrives back at the depot.
It also results in very high waiting times.
How do I implement this correctly in Google OR-Tools?
This is called the span.
See the SetSpanCostCoefficientForVehicle method for one vehicle.
You can also set it for all vehicles at once with SetSpanCostCoefficientForAllVehicles.
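A sketch of how that looks on the time dimension (the coefficient value is an assumption; a larger coefficient makes minimizing the span dominate the objective):

time_dimension = routing.GetDimensionOrDie('Time')

# Penalize each vehicle's span (end cumul - start cumul), i.e. travel time
# plus waiting time, instead of only the arrival time back at the depot.
time_dimension.SetSpanCostCoefficientForAllVehicles(100)

# Or per vehicle:
for i in range(data['num_vehicles']):
    time_dimension.SetSpanCostCoefficientForVehicle(100, i)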

How can a Neural Network learn from testing outputs against external conditions which it cannot directly control

In order to simplify the question and hopefully the answer I will provide a somewhat simplified version of what I am trying to do.
Setting up fixed conditions:
Max Oxygen volume permitted in room = 100,000 units
Target Oxygen volume to maintain in room = 100,000 units
Maximum air processing cycles per second = 3.0 (minimum is 0.3)
Energy (watts) used per second = (100 * cycles_per_second)^2
Maximum Oxygen Added to Air per "cycle" = 100 units (minimum 0 units)
1 person consumes 10 units of O2 per second
Max occupancy of room is 100 people (1 person is the minimum)
Inputs are processed every cycle and outputs can be changed each cycle; however, if an output is fed back in as an input, it can only affect the next cycle.
Let's say I have these inputs:
A. current oxygen in room (range: 0 to 1000 units for simplicity - could be normalized)
B. current occupancy in room (0 to 100 people at max capacity) OR/AND could be changed to total O2 used by all people in room per second (0 to 1000 units per second)
C. current cycles per second of air processing (0.3 to 3.0 cycles per second)
D. Current energy used (which is the above current cycles per second * 100 and then squared)
E. Current Oxygen added to air per cycle (0 to 100 units)
(possible outputs fed back in as inputs?):
F. previous change to cycles per second (+ or - 0.0 to 0.1 cycles per second)
G. previous cycles O2 units added per cycle (from 0 to 100 units per cycle)
H. previous change to current occupancy maximum (0 to 100 persons)
Here are the actions (outputs) my program can take:
Change cycles per second by increment/decrement of (0.0 to 0.1 cycles per second)
Change O2 units added per cycle (from 0 to 100 units per cycle)
Change current occupancy maximum (0 to 100 persons) - (basically allowing for forced occupancy reduction and then allowing it to normalize back to maximum)
The GOALS of the program are to maintain a homeostasis of:
as close to 100,000 units of O2 in room
do not allow room to drop to 0 units of O2 ever.
allow for current occupancy of up to 100 people per room for as long as possible without forcibly removing people (as O2 in the room is depleted over time and nears 0 units, people should be removed from the room down to the minimum, and then the maximum should be allowed to recover back up to 100 as more and more O2 is added back to the room)
and ideally use the minimum energy (watts) needed to maintain the above two conditions. For instance, if the room was down to 90,000 units of O2 and there are currently 10 people in the room (using 100 units of O2 per second), then instead of running at 3.0 cycles per second (90 kW) and 100 units per cycle to replenish 300 units per second total (a surplus of 200 units over the 100 being consumed) for 50 seconds to replenish the deficit of 10,000 units, at a total of 4,500 kilowatt-seconds used, it would be more ideal to run at, say, 2.0 cycles per second (40 kW), which would produce 200 units per second (a surplus of 100 units over consumed units) for 100 seconds to replenish the deficit of 10,000 units, using a total of 4,000 kilowatt-seconds.
NOTE: occupancy may fluctuate from second to second based on external factors that cannot be controlled (let's say people are coming and going from the room at liberty). The only control the system has is to forcibly remove people from the room and/or prevent new people from entering, by changing the max capacity permitted at the next cycle (let's just say the system can do this). We don't want the system to impose a permanent reduction in capacity just because, running at full power, it can only output enough O2 per second for 30 people. We have a large volume of available O2, and it would take a while before that was depleted to dangerous levels and the system was required to forcibly reduce capacity.
My question:
Can someone explain to me how I might configure this neural network so it can learn from each action (cycle) it takes by monitoring for the desired results? My challenge here is that most articles I find on the topic assume that you know the correct output answer (i.e., if inputs A, B, C, D, E all have specific values, then Output 1 should be to increase cycles per second by 0.1).
But what I want is to meet the conditions laid out in the GOALS above. So each time the program does a cycle, let's say it decides to try increasing the cycles per second, and the result is that the available O2 is either declining by a smaller amount than in the previous cycle or is now increasing back towards 100,000; then that output could be considered more correct than reducing or maintaining the current cycles per second. I am simplifying here, since there are multiple variables that together create the "ideal" outcome, but I think I've made the point of what I am after.
Code:
For this test exercise I am using a Swift library called Swift-AI (specifically its NeuralNet module: https://github.com/Swift-AI/NeuralNet).
So if you want to tailor your response to that library, that would be helpful but not required. I am more just looking for the logic of how to set up the network and then configure it to do initial and iterative retraining of itself based on the conditions I listed above. I would assume that at some point, after enough cycles and different conditions, it would have the appropriate weightings set up to handle any future condition, and retraining would become less and less impactful.
This is a control problem, not a prediction problem, so you cannot just use a supervised learning algorithm. (As you noticed, you have no target values for learning directly via backpropagation.) You can still use a neural network (if you really insist): have a look at reinforcement learning. But if you already know what happens to the oxygen level when you take an action like forcing people out, why would you learn such simple facts through millions of trial-and-error evaluations instead of encoding them into a model?
I suggest looking at model predictive control. If nothing else, you should study how the problem is framed there. Or maybe even just plain old PID control. It seems really easy to build a good dynamical model of this process with a few state variables.
You may have a few unknown parameters in that model that you need to learn "online". But a simple PID controller can already tolerate and compensate for some amount of uncertainty, and it is much easier to fine-tune a few parameters than to learn the general cause-and-effect structure from scratch. That can be done, but it involves trying all possible actions. For all your algorithm knows, the best action might be to reduce the number of oxygen consumers to zero permanently by killing them, and then collect a huge reward for maintaining the oxygen level with little energy. When the algorithm knows nothing about the problem, it will have to try everything out to discover the effects.
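To get a feel for the PID suggestion, here is a minimal Python sketch that drives cycles_per_second from the O2 error, using the toy plant from the question (the gains are illustrative assumptions, not a tuned controller):

TARGET_O2 = 100_000
O2_PER_CYCLE = 100                     # units added per cycle (held fixed here)
KP, KI, KD = 0.0005, 0.000001, 0.001   # assumed gains, not tuned

o2 = 90_000            # start with a 10,000-unit deficit
occupants = 10         # each consumes 10 units of O2 per second
integral = prev_error = 0.0

for second in range(300):
    error = TARGET_O2 - o2
    integral += error
    derivative = error - prev_error
    prev_error = error

    # PID output, clamped to the allowed actuator range (0.3 to 3.0 cycles/sec).
    cycles_per_second = min(3.0, max(0.3, KP * error + KI * integral + KD * derivative))

    # Plant update: production minus consumption, capped at the room maximum.
    o2 = min(100_000, o2 + O2_PER_CYCLE * cycles_per_second - occupants * 10)
    energy_watts = (100 * cycles_per_second) ** 2  # the question's energy formula

    if second % 60 == 0:
        print(f"t={second:3d}s  O2={o2:9.0f}  cps={cycles_per_second:.2f}  W={energy_watts:7.0f}")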