If the minimum time taken by quicksort algorithm to sort 1000 elements is 100 seconds, what will be the minimum time taken by it to sort 100 elements?
The correct answer is that we don't know. The O(N log N) behaviour only describes the highest order part of the time dependency.
If we assume that the implementation we are looking at follows time = k * N * log N (that is, we assume that there are no lower order parts), then the answer would be:
100 * (100 * log 100) / (1000 * log 1000) = 20/3, or approximately 6.7 seconds
The average-case time complexity of quicksort is O(n log n), and so is the best case. So if it takes 100 seconds for 1000 elements, it will take roughly 6.7 seconds for 100 elements under that assumption.
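As a quick sanity check of that arithmetic, here is a small Python sketch (the function name and structure are mine, not from the question) that scales a measured time under the assumption time = k * N * log N, with no lower-order terms:

import math

def estimate_time(known_n, known_time, target_n):
    """Estimate sort time for target_n elements, assuming time = k * n * log(n)."""
    k = known_time / (known_n * math.log(known_n))
    return k * target_n * math.log(target_n)

# 1000 elements take 100 seconds -> estimate for 100 elements
print(estimate_time(1000, 100.0, 100))  # ~6.67 seconds (20/3)

The base of the logarithm cancels out, so natural log works as well as log base 10.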
I have a metric in Prometheus called unifi_devices_wireless_received_bytes_total, it represents the cumulative total amount of bytes a wireless device has received. I'd like to convert this to the download speed in Mbps (or even MBps to start).
I've tried:
rate(unifi_devices_wireless_received_bytes_total[5m])
Which I think is saying "please give me the rate of bytes received per second" over the last 5 minutes, based on the documentation for rate().
But I don't understand what "over the last 5 minutes" means in this context.
In short, how can I determine the Mbps based on this cumulative amount of bytes metric? This is ultimately to display in a Grafana graph.
You want rate(unifi_devices_wireless_received_bytes_total[5m]) / 1000 / 1000
But I don't understand what "over the last 5 minutes" means in this context.
It's the average over the last 5 minutes.
The rate() function returns the average per-second increase rate for the counter passed to it. The average rate is calculated over the lookbehind window passed in square brackets to rate().
For example, rate(unifi_devices_wireless_received_bytes_total[5m]) calculates the average per-second increase rate over the last 5 minutes. It returns a lower-than-expected rate when 100MB of data is transferred in 10 seconds, because it divides those 100MB by 5 minutes and reports the average transfer speed as 100MB / 5 minutes = 333KB/s instead of 10MB/s.
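To make that dilution concrete, here is a small Python sketch of the same arithmetic (the 100MB-in-10-seconds burst is the example from the paragraph above; variable names are mine):

# 100 MB transferred in a 10-second burst, seen through different windows
burst_bytes = 100 * 1000 * 1000   # 100 MB
burst_duration = 10               # seconds

# rate(...[5m]) averages the increase over the whole 5-minute window
window = 5 * 60                   # 300 seconds
avg_over_window = burst_bytes / window         # ~333,333 bytes/s (~333 KB/s)

# the actual speed during the burst itself
avg_over_burst = burst_bytes / burst_duration  # 10,000,000 bytes/s (10 MB/s)

print(avg_over_window / 1000, "KB/s averaged over 5 minutes")
print(avg_over_burst / 1000 / 1000, "MB/s during the 10-second burst")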
Unfortunately, using 10s as a lookbehind window doesn't work as expected - rate(unifi_devices_wireless_received_bytes_total[10s]) would likely return nothing. This is because rate() in Prometheus expects at least two raw samples in the lookbehind window, which means new samples would have to be written into Prometheus at least every 5 seconds for a [10s] window to contain two of them. The solution is to use the irate() function instead of rate():
irate(unifi_devices_wireless_received_bytes_total[5m])
This query is likely to return a data transfer rate closer to the expected 10MB/s, provided the interval between raw samples (aka scrape_interval) is lower than 10 seconds.
Unfortunately, irate() isn't recommended in the general case, since it tends to return jumpy results when refreshing graphs over big time ranges. Read this article for details.
So the ultimate solution is to use the rollup_rate function from VictoriaMetrics - the project I work on. It reliably detects spikes in counter rates by returning the minimum, maximum and average per-second increase rate across all the raw samples in the selected time range.
Suppose a multiprogramming operating system allocated time slices of 10 milliseconds and the machine executed an average of five instructions per nanosecond.
How many instructions could be executed in a single time slice?
Please help me: how do I work this out?
This sort of question is about cancelling units out after finding the respective ratios.
There are 1,000,000 nanoseconds (ns) per 1 millisecond (ms) so we can write the ratio as (1,000,000ns / 1ms).
There are 5 instructions (ins) per 1 nanosecond (ns) so we can write the ratio as (5ins / 1ns).
We know that the program runs for 10ms.
Then we can write the equation so that the units cancel out:
instructions = (10ms) * (1,000,000ns/1ms) * (5ins/1ns)
instructions = (10 * 1,000,000ns)/1 * (5ins/1ns) -- milliseconds cancel
instructions = (10,000,000 * 5ins)/1 -- nanoseconds cancel
instructions = 50,000,000ins -- left with instructions
We can reason that this is at least the 'right kind' of ratio setup - even if an individual ratio were wrong - because the unit we are left with is instructions, which matches the type of unit expected in the answer.
In the above I started with the 1,000,000ns/1ms ratio, but I could also have used 1,000,000,000ns/1,000ms (= 1 second / 1 second) and ended with the same result.
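A minimal Python sketch of the same unit-cancellation arithmetic (variable names are mine):

NS_PER_MS = 1_000_000        # nanoseconds per millisecond
INSTRUCTIONS_PER_NS = 5      # average instructions executed per nanosecond
time_slice_ms = 10           # time slice length in milliseconds

instructions = time_slice_ms * NS_PER_MS * INSTRUCTIONS_PER_NS
print(instructions)          # 50,000,000 instructions per time slice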
I am a student taking an Operating Systems course for the first time. I have a doubt about the performance-degradation calculation when using demand paging. In the Silberschatz book on operating systems, the following lines appear.
"If we take an average page-fault service time of 8 milliseconds and a
memory-access time of 200 nanoseconds, then the effective access time in
nanoseconds is
effective access time = (1 - p) x (200) + p (8 milliseconds)
= (1 - p) x 200 + p x 8,000,000
= 200 + 7,999,800 x p.
We see, then, that the effective access time is directly proportional to the
page-fault rate. If one access out of 1,000 causes a page fault, the effective
access time is 8.2 microseconds. The computer will be slowed down by a factor
of 40 because of demand paging! "
How did they calculate the slowdown here? Are 'performance degradation' and 'slowdown' the same thing?
This whole thing is nonsensical. It assumes a fixed page-fault rate P, which is not realistic in itself. That rate is the fraction of memory accesses that result in a page fault.
1-P is the fraction of memory accesses that do not result in a page fault.
T = (1-P) x 200ns + P x 8ms is then the average time of a memory access.
Expanded:
T = 200ns + P x (8ms - 200ns)
T = 200ns + P x 7,999,800ns
The whole thing is rather silly.
All you really need to know is that a nanosecond is a billionth of a second and a millisecond is a thousandth of a second.
Using these figures, memory access is measured in nanoseconds while the page-fault (disk) service time is measured in milliseconds - a factor of a million between the units.
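For what it's worth, here is a sketch of the arithmetic behind the book's "factor of 40", using p = 1/1000 as in the quoted passage (variable names are mine):

memory_access_ns = 200               # memory-access time
page_fault_service_ns = 8_000_000    # 8 ms expressed in nanoseconds
p = 1 / 1000                         # one access out of 1,000 causes a page fault

# effective access time = (1 - p) x 200 + p x 8,000,000 = 200 + 7,999,800 x p
eat_ns = (1 - p) * memory_access_ns + p * page_fault_service_ns
print(eat_ns)                        # 8199.8 ns, i.e. ~8.2 microseconds
print(eat_ns / memory_access_ns)     # ~41, which the book rounds to "a factor of 40"

So the slowdown is simply the effective access time divided by the plain memory-access time.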
Suppose I have a time interval (from 0 to 3,600,000, that is, one hour in milliseconds). I have to generate entities with an average of 3 per hour, and I use an exponential distribution. The mean is 3,600,000/3, which is how I sample the distribution. If in a particular run I obtain 0 entities created, is that wrong, or can it be a correct result? Can anyone help me?
It's not an error to get zero. With exponential interarrival times and a rate of 3 per hour, the number of occurrences in an hour has a Poisson distribution with λ=3. The probability of getting n outcomes is
e^(-λ) λ^n / n!
which for n=0 is just under 0.05. In other words, you would see a zero roughly one out of every 20 times.
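If you want to convince yourself empirically, here is a small Python simulation sketch (names are illustrative) that draws exponential interarrival times with mean 3,600,000/3 ms and counts how often a one-hour run produces zero arrivals:

import math
import random

HOUR_MS = 3_600_000
MEAN_INTERARRIVAL_MS = HOUR_MS / 3   # rate of 3 per hour

def arrivals_in_one_hour():
    """Count arrivals in [0, HOUR_MS) using exponential interarrival times."""
    t = random.expovariate(1 / MEAN_INTERARRIVAL_MS)
    count = 0
    while t < HOUR_MS:
        count += 1
        t += random.expovariate(1 / MEAN_INTERARRIVAL_MS)
    return count

runs = 100_000
zeros = sum(1 for _ in range(runs) if arrivals_in_one_hour() == 0)
print(zeros / runs)          # simulated P(0 arrivals), roughly 0.05
print(math.exp(-3))          # theoretical value e^-3 ~= 0.0498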
Given a BPM (beats per minute) MIDI delta time (leftmost bit off) with speed of 192:
0x00C0
I want to convert it to an FPS/TPM (frames per second / ticks per minute) delta time (leftmost bit on), but it should represent the same (or the closest possible) speed, if you know what I mean.
For more info about MIDI Delta Time please take a look at Midi File Format under Header Chunk -> Time Division.
I am looking for a formula that will convert between these two delta-time types.
If you're talking about 0x00C0 being the time division field, what you're referring to is not 192 beats per minute, but rather 192 ticks per beat, quite a different beast. BPM is specified indirectly via "Set Tempo" events, given in microseconds per beat (with the lamentably ubiquitous 120 BPM being assumed to begin with). The trickiness of time division with this format is that the length of a tick will grow and shrink depending on the tempo changes in the song.
Let's say the time division you want to convert to has F as the frames per second (24, 25, 29.97, or 30) and G as the ticks per frame (note: it's not ticks per minute!). Further, let's assume that the current tempo in microseconds per beat is p. Then the formula to convert a given duration from ticks_old to ticks_new (the unit analysis really helps!) is:
y = x ticks_old * (1/192) beat/tick_old * p μsec/beat * (1/10^6) sec/μsec * F frames/sec * G ticks_new/frame
  = ((x * p * F * G) / (192 * 10^6)) ticks_new
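Here is a small Python sketch of that formula. The names and default values are mine: the tempo defaults to 500,000 μs/beat (the assumed 120 BPM mentioned above), and 25 fps with 40 ticks per frame is just an example target time division.

def convert_ticks(x_old, ticks_per_beat=192, tempo_us_per_beat=500_000,
                  fps=25, ticks_per_frame=40):
    """Convert a duration in beat-based ticks to frames-per-second-based ticks.

    x_old             -- duration in old ticks (ticks_per_beat resolution)
    tempo_us_per_beat -- current 'Set Tempo' value, microseconds per beat
    fps               -- frames per second (24, 25, 29.97 or 30)
    ticks_per_frame   -- ticks per frame in the new time division
    """
    return x_old * tempo_us_per_beat * fps * ticks_per_frame / (ticks_per_beat * 1_000_000)

# One beat (192 old ticks) at 120 BPM lasts 0.5 s = 12.5 frames * 40 ticks/frame
print(convert_ticks(192))  # 500.0 new ticks

Remember that p can change whenever a "Set Tempo" event occurs, so the conversion has to be redone per tempo segment.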