The Apple Bluetooth Design Guidelines say that the connection interval should be set as follows on the peripheral:
Interval Min ≥ 20 ms
Interval Min + 20 ms ≤ Interval Max
When setting min to 20 ms and max to 40 ms, I expect the iPhone to accept the request and use the lowest available interval, but it always sets a 37.5 ms connection interval. Trying to push the max value down gives a rejected status from the iPhone, which then sets the connection interval to ~100 ms.
Is it possible to get this down to 20 ms (the minimum in Apple's guidelines) in some way? What is the actual minimum? According to my observations, Interval Min can be raised to 30 ms without making any difference.
I have been experimenting with this recently. This does not follow their guidelines so I'm not sure why it works, but using the following connection setting I was able to get an interval of 18.75 ms from an iPad:
min interval = 10 ms
max interval = 20 ms
latency = 0
timeout = 100 ms
The Bluetooth SIG defines the connection interval min and max range as 7.5 ms to 4000 ms. An implementation can choose any value in this range as connection interval min or max. However, the connection interval min shall not be greater than the connection interval max.
The minimum value depends on the battery considerations of the peripheral, and the maximum connection interval depends on the buffers available on the peripheral. The iPhone settling on 37.5 ms gives us a hint that the buffers available on its side are constant. You can try changing this parameter and see whether the connection interval comes out different.
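On the wire, BLE connection intervals are expressed in units of 1.25 ms, which is why the observed values (37.5 ms, 18.75 ms) are multiples of 1.25. A small sketch of that conversion, plus a check of the two guideline constraints quoted above (function names are mine; Apple's full guideline list has additional rules):

```python
# BLE link-layer connection intervals are encoded in units of 1.25 ms.
UNIT_MS = 1.25

def ms_to_units(ms):
    """Convert a connection interval in milliseconds to link-layer units."""
    return round(ms / UNIT_MS)

def follows_apple_guidelines(min_ms, max_ms):
    """Check only the two constraints quoted above:
    Interval Min >= 20 ms and Interval Min + 20 ms <= Interval Max."""
    return min_ms >= 20 and min_ms + 20 <= max_ms

print(ms_to_units(37.5))                 # 30 units - the interval the iPhone picks
print(follows_apple_guidelines(20, 40))  # True
print(follows_apple_guidelines(10, 20))  # False - yet this setting still worked
```

This also shows why the 10/20 ms setting from the answer above is outside the quoted guidelines even though the iPad accepted it.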
So I'm modelling a production line (simple, with 5 processes which I modelled as Services). I'm simulating one month, during which my line stops approximately 50 times (due to machine breakdowns). A stop lasts between 3 and 60 min with an average of 12 min (following a triangular distribution). How could I implement this in the model? I'm trying to create an event but can't figure out what type of trigger I should use.
Have your services require a resource. If they are already seizing a resource like labor, that is ok, they can require more than one. On the resourcePool, there is an area called "Shifts, breaks, failures, maintenance..." Check "Failures/repairs:" and enter your downtime distribution there.
If you want to use a triangular, you need min/MODE/max, not min/AVERAGE/max. If you really wanted an average of 12 minutes with a minimum of 3 and maximum of 60; then this is not a triangular distribution. There is no mode that would give you an average of 12.
Average of a triangular distribution, where X is the mode:
( 3 + X + 60 ) / 3 = 12
Solving gives X = 36 - 63 = -27, so the mode would have to be negative - not possible for a delay time.
Look at using a different distribution. Exponential is used often for time between failures (or poisson for failures per hour).
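The arithmetic above, plus one possible exponential alternative, can be sketched in Python (the rejection sampling to stay inside the observed 3-60 minute range is my assumption, not the only way to model it):

```python
import random

# Mean of a triangular distribution is (min + mode + max) / 3, so the mode
# required for a target mean can be solved directly.
def required_mode(tri_min, tri_max, target_mean):
    return 3 * target_mean - tri_min - tri_max

print(required_mode(3, 60, 12))  # -27: negative, so no valid triangular exists

# An exponential with mean 12 is a common choice for repair times; here we
# rejection-sample to keep draws within the observed 3..60 minute range.
def repair_time(mean=12.0, lo=3.0, hi=60.0):
    while True:
        t = random.expovariate(1.0 / mean)
        if lo <= t <= hi:
            return t
```

Note that truncating the exponential shifts its mean slightly above 12; if that matters, the rate parameter would need adjusting.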
Using a PostgreSQL database, what is the best way to store time as hours, minutes and seconds? E.g. "40:21" as in 40 minutes and 21 seconds.
Example data:
20:21
1:20:02
12:20:02
40:21
time would be the obvious candidate to store time as you describe it. It enforces the range of daily time (00:00:00 to 24:00:00) and occupies 8 bytes.
interval allows arbitrary intervals, even negative ones, or even a mix of positive and negative ones like '1 month - 3 seconds' - doesn't fit your description well - and occupies 16 bytes. See:
How to get the number of days in a month?
To optimize storage size, make it an integer (4 bytes) signifying seconds. To convert time back and forth:
SELECT EXTRACT(epoch FROM time '18:55:28'); -- 68128 (int)
SELECT (interval '1 sec' * 68128)::time; -- '18:55:28' (time)
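The integer-seconds scheme is easy to mirror on the application side. A minimal Python sketch (function names are mine):

```python
def to_seconds(hh, mm, ss):
    """Pack h:m:s into the integer-seconds representation."""
    return hh * 3600 + mm * 60 + ss

def to_hms(total):
    """Unpack integer seconds back into (h, m, s)."""
    hh, rem = divmod(total, 3600)
    mm, ss = divmod(rem, 60)
    return hh, mm, ss

print(to_seconds(18, 55, 28))  # 68128, matching the EXTRACT(epoch ...) result
print(to_hms(68128))           # (18, 55, 28)
```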
It sounds like you want to store a length of time, or interval. PostgreSQL has a special interval type to store a length of time, e.g.
SELECT interval '2 hours 3 minutes 20 seconds';
This can be added to a timestamp to form a new timestamp, or multiplied, so that 2 * interval '2 hours' = interval '4 hours'. The interval type seems tailor-made for your use case.
I have a metric in Prometheus called unifi_devices_wireless_received_bytes_total, which represents the cumulative total amount of bytes a wireless device has received. I'd like to convert this to the download speed in Mbps (or even MBps to start).
I've tried:
rate(unifi_devices_wireless_received_bytes_total[5m])
Which I think is saying: "please give me the rate of bytes received per second", over the last 5 minutes, based on the documentation of rate, here.
But I don't understand what "over the last 5 minutes" means in this context.
In short, how can I determine the Mbps based on this cumulative amount of bytes metric? This is ultimately to display in a Grafana graph.
You want rate(unifi_devices_wireless_received_bytes_total[5m]) / 1000 / 1000 for MBps; multiply by 8 as well for Mbps (megabits per second).
But I don't understand what "over the last 5 minutes" means in this context.
It's the average over the last 5 minutes.
The rate() function returns the average per-second increase rate for the counter passed to it. The average rate is calculated over the lookbehind window passed in square brackets to rate().
For example, rate(unifi_devices_wireless_received_bytes_total[5m]) calculates the average per-second increase rate over the last 5 minutes. It returns a lower-than-expected rate when 100MB of data is transferred in 10 seconds, because it divides those 100MB by 5 minutes and returns the average data transfer speed as 100MB/5minutes ≈ 333KB/s instead of 10MB/s.
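The dilution effect can be checked with a toy model of rate(): roughly the counter increase across the window divided by the window length (ignoring counter resets and extrapolation details):

```python
# Sketch: rate() is approximately (last sample - first sample) / window length,
# assuming the counter never resets inside the window.
def avg_rate(first_bytes, last_bytes, window_seconds):
    return (last_bytes - first_bytes) / window_seconds

burst = 100 * 1000 * 1000        # 100 MB transferred in a 10-second burst
print(avg_rate(0, burst, 300))   # ~333333 B/s (~333 KB/s) over a 5m window
print(avg_rate(0, burst, 10))    # 10000000 B/s (10 MB/s) over the burst itself
```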
Unfortunately, using 10s as a lookbehind window doesn't work as expected - rate(unifi_devices_wireless_received_bytes_total[10s]) would likely return nothing. This is because rate() in Prometheus expects at least two raw samples in the lookbehind window, so for a [10s] window new samples must be written into Prometheus at least every 5 seconds. The solution is to use the irate() function instead of rate():
irate(unifi_devices_wireless_received_bytes_total[5m])
This query is likely to return a data transfer rate closer to the expected 10MB/s if the interval between raw samples (aka scrape_interval) is lower than 10 seconds.
Unfortunately, it isn't recommended to use irate() function in general case, since it tends to return jumpy results when refreshing graphs on big time ranges. Read this article for details.
So the ultimate solution is to use rollup_rate function from VictoriaMetrics - the project I work on. It reliably detects spikes in counter rates by returning the minimum, maximum and average per-second increase rate across all the raw samples on the selected time range.
According to the MongoDB docs, when using nearest mode for read preference,
the driver reads from a member whose network latency falls within the acceptable latency window. However, the docs do not say what exactly this "network latency" is.
Does anyone know how is it evaluated? Using something like ping round trip time or the avg query time or through some specific query?
Does anyone know how is it evaluated? Using something like ping round trip time or the avg query time or through some specific query?
The following is applicable for MongoDB v3.2, v3.4 and current version v3.6. Based on MongoDB Specifications: Server Selection:
For every available server, clients (i.e. the driver) must track the average Round Trip Times (RTT) of server monitoring isMaster commands. When there is no average RTT for a server, the average RTT must be set equal to the first RTT measurement (i.e. the first isMaster command after the server becomes available).
After the first measurement, average RTT must be computed using an exponentially-weighted moving average formula, with a weighting factor (alpha) of 0.2. If the prior average is denoted (old_rtt), then the new average (new_rtt) is computed from a new RTT measurement (x) using the following formula:
alpha = 0.2
new_rtt = alpha * x + (1 - alpha) * old_rtt
A weighting factor of 0.2 was chosen to put about 85% of the weight of the average RTT on the 9 most recent observations.
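The EWMA update and the "about 85%" claim are easy to verify numerically (a sketch of the spec's formula, not any driver's actual code):

```python
ALPHA = 0.2  # weighting factor from the server-selection spec

def update_rtt(old_rtt, x):
    """Exponentially-weighted moving average of RTT samples:
    new_rtt = alpha * x + (1 - alpha) * old_rtt."""
    return ALPHA * x + (1 - ALPHA) * old_rtt

# Total weight carried by the 9 most recent observations: 1 - (1 - alpha)^9
print(1 - (1 - ALPHA) ** 9)  # ~0.866, i.e. "about 85%"

rtt = 100.0                  # first measurement seeds the average
rtt = update_rtt(rtt, 50.0)
print(rtt)                   # 90.0
```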
See also Blog: Server Selection in Next Generation MongoDB Drivers.
You may also be interested in selecting servers within the latency window:
When choosing between several suitable servers, the latency window is the range of acceptable RTTs from the shortest RTT to the shortest RTT plus the local threshold. E.g. if the shortest RTT is 15ms and the local threshold is 200ms, then the latency window ranges from 15ms - 215ms.
For example, in the MongoDB Python driver (PyMongo) the default value of localThresholdMS is 15ms, which means only members whose average RTT is within 15ms of the member with the shortest RTT are used for queries.
I want to know how to convert MIDI ticks to actual playback seconds.
For example, if the MIDI PPQ (Pulses per quarter note) is 1120, how would I convert it into real world playback seconds?
The formula is 60000 / (BPM * PPQ) (milliseconds).
Where BPM is the tempo of the track (Beats Per Minute).
E.g. a 120 BPM track with a PPQ of 192 would give 60000 / (120 * 192) ≈ 2.604 ms per tick.
If you don't know the BPM then you'll have to determine that first. MIDI times are entirely dependent on the track tempo.
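A minimal Python sketch of this formula (function name is mine), applied both to the 120 BPM / 192 PPQ example and to the 1120 PPQ from the question:

```python
def ms_per_tick(bpm, ppq):
    """Milliseconds per MIDI tick: 60000 / (BPM * PPQ)."""
    return 60000 / (bpm * ppq)

print(round(ms_per_tick(120, 192), 3))   # 2.604 - the example above
print(round(ms_per_tick(120, 1120), 4))  # 0.4464 - the PPQ from the question
```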
You need two pieces of information:
PPQ (pulses per quarter note), which is defined in the header of a midi file, once.
Tempo (in microseconds per quarter note), which is defined by "Set Tempo" meta events and can change during the musical piece.
Ticks can be converted to playback seconds as follows:
ticks_per_quarter = <PPQ from the header>
µs_per_quarter = <Tempo in latest Set Tempo event>
µs_per_tick = µs_per_quarter / ticks_per_quarter
seconds_per_tick = µs_per_tick / 1000000
seconds = ticks * seconds_per_tick
Note that PPQ is also called "division" or "ticks per quarter note" in the document linked above.
Note that Tempo is commonly represented in BPM (a frequency) but raw MIDI represents it in µs per quarter (a period).
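The steps above can be sketched in Python; the multiplication and divisions are combined into one expression to avoid intermediate rounding (function name is mine):

```python
def ticks_to_seconds(ticks, ppq, tempo_us_per_quarter):
    """Convert MIDI ticks to seconds using the latest Set Tempo value:
    seconds = ticks * (tempo / ppq) / 1_000_000, combined into one step."""
    return ticks * tempo_us_per_quarter / (ppq * 1_000_000)

# 120 BPM corresponds to 500000 µs per quarter note (60e6 / 120), so one
# quarter note's worth of ticks should always come out as 0.5 s.
print(ticks_to_seconds(1120, 1120, 500_000))  # 0.5
```

Remember to re-derive the tempo term every time a Set Tempo meta event occurs, since it can change mid-piece.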