Loading a program from the disk

How long does it take to load a 64-KB program from a disk whose average seek time is 10 msec, whose rotation time is 20 msec, and whose tracks hold 32 KB, using a 2-KB page size?
The pages are spread randomly around the disk, and the number of cylinders is so large
that the chance of two pages being on the same cylinder is negligible.
My solution:
The 64-KB program will be spread over 2 tracks, because each track's capacity is 32 KB.
Loading an entire track takes 20 msec, so loading 2 KB takes 1.25 msec.
I/O time = seek time + avg. rotational latency + transfer time
10 msec + 10 msec + 1.25 msec = 21.25 msec
Since the 64-KB program occupies 2 tracks, the I/O time will be 2 * 21.25 = 42.5 msec.
Is this correct? If so, why does the seek time equal the average rotational latency?

Since it's mentioned that "the number of cylinders is so large that the chance of two pages being on the same cylinder is negligible", it simply means that to load each page we always need to move to a different cylinder. Therefore we have to add the seek time for each page, because the seek time is the time it takes to move the arm over the appropriate cylinder.
Number of pages = 64 KB / 2 KB = 32
Rotational latency, by definition, is the time it takes to get to the appropriate sector once we have reached the appropriate cylinder.
So the time taken would be = 32*10*20 = 6400 msec

I found another solution:
The seek plus rotational latency is 20 msec. For 2-KB pages, the transfer time is 1.25 msec, for a total of 21.25 msec. Loading 32 of these pages will take 680 msec. For 4-KB pages, the transfer time is doubled to 2.5 msec, so the total time per page is 22.50 msec. Loading 16 of these pages takes 360 msec.
Now I'm confused.
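For what it's worth, here is the arithmetic behind the 680 msec figure written out as a small sketch (assuming one seek plus average rotational latency per page, since every page lands on a different cylinder; note that the average rotational latency is half a rotation, 20/2 = 10 msec, which happens to equal the 10 msec seek time in this particular problem):
# Per-page disk timing, using the values from the problem statement.
seek = 10.0              # average seek time, msec
rotation = 20.0          # full rotation time, msec
track_kb = 32.0          # data per track
program_kb = 64.0

for page_kb in (2.0, 4.0):
    latency = rotation / 2                       # average rotational latency = half a rotation = 10 msec
    transfer = rotation * page_kb / track_kb     # fraction of a rotation needed to read one page
    per_page = seek + latency + transfer
    pages = program_kb / page_kb
    print(f"{page_kb:.0f}-KB pages: {per_page:.2f} msec/page * {pages:.0f} pages = {pages * per_page:.1f} msec")
# Prints 680.0 msec for 2-KB pages and 360.0 msec for 4-KB pages, matching the
# found solution; the 42.5 msec attempt charged one seek + latency per track
# instead of per page.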

Related

How far is it safe to regularly write to the ESP32 flash, considering the flash MTBF?

What would be the best practice for writing to the flash on a regular basis?
Considering the hardware I am working on is supposed to have 10 to 20 years of longevity, what would be your recommendation? For example, is it OK if I write some state variables every 15 minutes through Preferences?
That depends on:
the number of erase cycles your Flash supports,
the size of the NVS partition where you store data, and
the size and structure of the data that you store.
Erase cycles mean how many times a single sector of Flash can be erased before it's no longer guaranteed to work. The number is found in the datasheet of the Flash chip that you use. It's usually 10K or 100K.
Preferences library uses the ESP-IDF NVS library. This requires an NVS partition to store data, the size of which determines how many Flash sectors get reserved for this purpose. Every time you store a value, NVS writes the data together with its own overhead (total of 32 bytes for primitive data types like ints and floats, more for strings and blobs) into the current Flash sector. When the current sector is full, it erases the next sector and proceeds to write there; thereby using up sectors in a round robin fashion as write requests come in.
If we assume that your Flash has 100K erase cycles, your NVS partition is 128 KiB in size, and you store a set of 8 primitive values (any int or float) every 15 minutes:
Each store operation uses 8 * 32 = 256 bytes (32 B per data value).
You can repeat that operation 131072 / 256 = 512 times before you've written to every sector of your 128 KiB NVS partition (i.e. erased every sector once).
You can repeat that cycle 100K times, so you can do 512 * 100000 = 51200000 or roughly 51M store operations before you've erased every sector its permitted maximum number of times.
Considering the interval of 15 minutes creates 365 * 24 * 4 = 35040 operations per year, you'd have 51200000 / 35040 = 1461 years until the Flash is dead.
Obviously, if your Flash chip is rated at 10K erase cycles, it drops to only 146 years.
There's probably some NVS overhead in there somewhere that I didn't account for, and the Flash erase cycle ratings are not 100% reliable so I'd cut it in half for good measure - I would expect 700 or 70 years in real life.
If you store non-primitive values (strings, blobs) then the estimate changes based on the length of that data. I don't know how to calculate the exact Flash space used by those, but I'd guess 32 B plus the length of your data, with maybe 10% of NVS overhead on top. Plug in the numbers and see for yourself.
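If you want to plug in your own numbers, here is the same estimate written out as a small sketch (the 32 B per primitive entry and the ideal round-robin wear model are the assumptions described above, so treat the result as an order-of-magnitude figure):
# Back-of-the-envelope Flash endurance estimate for NVS-style round-robin writes.
erase_cycles = 100_000              # per-sector rating from the Flash datasheet (10K for cheaper parts)
nvs_partition_bytes = 128 * 1024    # 128 KiB NVS partition
bytes_per_entry = 32                # NVS entry size for a primitive value (int/float)
entries_per_store = 8
stores_per_day = 24 * 4             # one store every 15 minutes

bytes_per_store = entries_per_store * bytes_per_entry           # 256 B per store
stores_per_full_pass = nvs_partition_bytes // bytes_per_store   # 512 stores erase the whole partition once
total_stores = stores_per_full_pass * erase_cycles              # ~51.2M stores before wear-out
years = total_stores / (stores_per_day * 365)
print(f"{total_stores:,} stores -> ~{years:.0f} years")         # ~1461 years (146 with 10K erase cycles)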

How should I interpret this Grafana-visualized Prometheus histogram buckets heatmap?

I visualized Prometheus histogram buckets as a heatmap with Grafana; the pic below shows the query and the resulting graph. How should I interpret this?
According to my attacker, I sent exactly 300 requests in total in that period, but when I sum up the numbers on the graph above I can never get exactly 300,
and those numbers also seem to fluctuate as time elapses. How should I interpret this graph in a meaningful way?
And if I want those numbers to be the exact request counts that land in each of those buckets in that time window, what should I do?
Oh, and for the X-Axis Mode I chose Series and for the Value I chose Current.
There are real reasons why you can't always get a precise rate/increase value out of Prometheus. One of them is failed scrapes, i.e. every now and then a scrape will fail or time out due to a slow service, slow Prometheus or network issue.
The other reason is the fact that collected samples are never exactly scrape_interval apart: there will always be a few milliseconds or seconds of delay here and there. So (to take an extreme example) how can you tell the precise increase over the past 1 minute if you only have 2 samples 63 seconds apart? Is it the difference between the two values? Is it that difference adjusted to 60 seconds (i.e. / 63 * 60)?
That being said, Prometheus further boxes itself into a corner by only looking at samples falling strictly within the requested time range. To explain myself: how would a reasonable person calculate the increase of a counter over the last 30 minutes? They would likely take the value of said counter now and the value 30 minutes ago and subtract them. I.e. in PromQL terms (adjusting for counter resets where necessary):
request_duration_bucket - request_duration_bucket offset 30m
What Prometheus does instead (assuming a scrape_interval of 1m and an ideal timeseries with samples spaced exactly 1m apart) is essentially this:
(request_duration_bucket - request_duration_bucket offset 29m) / 29 * 30
I.e. it takes the increase over 29 minutes and extrapolates it to 30. Because of self-imposed limitations, nothing to do with the nature of the problem at hand.
Note that this works fine with counters that increase smoothly and continuously. E.g. if you have a counter that increases by 500 every minute, then taking the increase over 29 minutes and extrapolating to 30 is exactly correct. But for anything that increases in jumps and fits (which is most real-life counters) it will either slightly overestimate the increase if it occurs during the 29 minutes it actually samples (by exactly 1/29) or seriously underestimate it (if the increase occurs in the 1 minute not included in the sampling). This is even worse if you compute a rate/increase over a range covering fewer samples. E.g. if your range only covers 5 samples on average, the overestimate will be 20%, i.e. 1 / (5 - 1) and (each of) your increases will totally disappear 1 minute out of 5.
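To make the over/underestimate concrete, here is a toy model of the simplified behaviour described above (it assumes an ideal 1m scrape_interval, so a 30m window sees 30 samples spanning 29 minutes; it is not Prometheus' actual extrapolation code, which also handles counter resets and boundary conditions):
# Simplified increase(): take last - first over the samples inside the window,
# then scale the covered 29 minutes up to the requested 30.
def naive_increase(samples, window_min=30):
    covered_min = len(samples) - 1
    return (samples[-1] - samples[0]) / covered_min * window_min

# A counter that jumps by 500 exactly once:
jump_inside  = [1000] * 15 + [1500] * 15   # the jump fell inside the sampled 29 minutes
jump_outside = [1500] * 30                 # the jump fell in the 1 minute that was cut off

print(naive_increase(jump_inside))         # ~517.2 -> overestimates the 500 jump by exactly 1/29
print(naive_increase(jump_outside))        # 0.0    -> the jump disappears entirely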
The only way I've found to work around this limitation is (again, assuming a scrape_interval of 1m) to reverse engineer Prometheus' extrapolation:
increase(request_duration_bucket[31m]) / 31 * 30
But this requires you to be aware of your scrape_interval and adjust for it and is very brittle (if you ever change your scrape_interval all your careful tweaking goes to hell).
Or, if you are OK with your increase falling to zero every time an instance is restarted:
clamp_min(request_duration_bucket - request_duration_bucket offset 30m, 0)
I do actually have a proposed patch to Prometheus to add xrate/xincrease functions that actually behave more as you would expect them to (and as described above) but it doesn't look very likely to be accepted: https://github.com/prometheus/prometheus/issues/3806

Calculating Mbps in Prometheus from cumulative total

I have a metric in Prometheus called unifi_devices_wireless_received_bytes_total; it represents the cumulative total amount of bytes a wireless device has received. I'd like to convert this to the download speed in Mbps (or even MBps to start).
I've tried:
rate(unifi_devices_wireless_received_bytes_total[5m])
Which I think is saying: "please give me the rate of bytes received per second", over the last 5 minutes, based on the documentation of rate, here.
But I don't understand what "over the last 5 minutes" means in this context.
In short, how can I determine the Mbps based on this cumulative amount of bytes metric? This is ultimately to display in a Grafana graph.
You want rate(unifi_devices_wireless_received_bytes_total[5m]) / 1000 / 1000
But I don't understand what "over the last 5 minutes" means in this context.
It's the average over the last 5 minutes.
The rate() function returns the average per-second increase rate for the counter passed to it. The average rate is calculated over the lookbehind window passed in square brackets to rate().
For example, rate(unifi_devices_wireless_received_bytes_total[5m]) calculates the average per-second increase rate over the last 5 minutes. It returns a lower than expected rate when 100 MB of data is transferred in 10 seconds, because it divides those 100 MB by 5 minutes and returns the average data transfer speed as 100MB / 5minutes = 333KB/s instead of 10MB/s.
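To make the averaging effect concrete, here is that hypothetical 100 MB burst spelled out (the burst is assumed to be the only traffic inside the 5-minute window):
# Averaging effect of rate() over a 5-minute lookbehind window.
burst_bytes = 100 * 1000 * 1000     # 100 MB transferred in a 10-second burst
window_s = 5 * 60

avg_rate = burst_bytes / window_s   # what rate(...[5m]) reports
peak_rate = burst_bytes / 10        # the actual transfer speed during the burst
print(avg_rate / 1000)              # ~333 KB/s
print(peak_rate / 1000 / 1000)      # 10 MB/s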
Unfortunately, using 10s as a lookbehind window doesn't work as expected - it is likely that rate(unifi_devices_wireless_received_bytes_total[10s]) would return nothing. This is because rate() in Prometheus expects at least two raw samples in the lookbehind window, which means new samples would have to be written into Prometheus at least every 5 seconds for a [10s] lookbehind window. The solution is to use the irate() function instead of rate():
irate(unifi_devices_wireless_received_bytes_total[5m])
It is likely this query would return a data transfer rate closer to the expected 10 MB/s if the interval between raw samples (aka scrape_interval) is lower than 10 seconds.
Unfortunately, it isn't recommended to use the irate() function in the general case, since it tends to return jumpy results when refreshing graphs on big time ranges. Read this article for details.
So the ultimate solution is to use rollup_rate function from VictoriaMetrics - the project I work on. It reliably detects spikes in counter rates by returning the minimum, maximum and average per-second increase rate across all the raw samples on the selected time range.

How to detect whether an ultrasonic range sensor is out of range or not?

I am using a JSN-SRF04T ranging sensor (25 cm to 5 m range). I want to know when it goes out of range (when under 25 cm).
The problem is that when it goes under 25 cm, the sensor output sometimes jumps to 90-95 cm or 100-120 cm, and this makes it impossible to tell whether it is really out of range or not.
Is there any solution?
This question is not directly related, but I thought I'd post a suggestion/answer anyway.
SRF04s can detect distances as small as 3 cm. Please measure the width of the output echo pulse using an oscilloscope. It can be from 100 µs to 18 ms, and if there is no object within range, the echo pulse is 36 ms.
If the pulse width measured on the oscilloscope agrees with what you describe, then presumably the SRF04 is faulty, or there is a problem with its mounting, etc.
If the width of the pulse is measured in µs, then dividing by 58 will give you the distance in cm, and dividing by 148 will give the distance in inches.
The SRF sensors can be triggered as fast as every 50 ms, or 20 times each second. You should wait 50 ms before the next trigger to ensure the ultrasonic "beep" has faded away and will not cause a false echo on the next ranging.
Otherwise, check your timer configuration. Ensure that it can measure a pulse on the order of hundreds of microseconds with a resolution of at least tens of microseconds.
If you are using this, then perhaps you are at the lowest possible level.
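As a rough illustration of the µs-to-cm conversion and the 36 ms no-echo pulse mentioned above, here is a minimal sketch (the 10% margin around the 36 ms figure is my own assumption; check the exact thresholds against your sensor's datasheet):
# Convert an echo pulse width (microseconds) to distance, treating anything
# close to the 36 ms "no object" pulse as out of range.
NO_ECHO_US = 36_000         # ~36 ms pulse when nothing is in range
US_PER_CM = 58              # pulse width / 58 -> distance in cm

def read_distance_cm(pulse_us):
    if pulse_us >= NO_ECHO_US * 0.9:    # allow some margin around the 36 ms figure
        return None                     # out of range / no echo
    return pulse_us / US_PER_CM

for pulse in (1450, 29_000, 36_000):
    print(pulse, "->", read_distance_cm(pulse))   # 25 cm, 500 cm, out of range (None)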

Interrupt time in DMA operation

I'm facing difficulty with the following question:
Consider a disk drive with the following specifications:
16 surfaces, 512 tracks/surface, 512 sectors/track, 1 KB/sector, rotation speed 3000 rpm. The disk is operated in cycle-stealing mode whereby whenever a 1-byte word is ready it is sent to memory; similarly, for writing, the disk interface reads a 4-byte word from the memory in each DMA cycle. Memory cycle time is 40 ns. What is the maximum percentage of time that the CPU gets blocked during DMA operation?
The solution to this question provided on the only site is:
Revolutions Per Min = 3000 RPM
or 3000/60 = 50 RPS
In 1 Round it can read = 512 KB
No. of tracks read per second = (2^19/2^2)*50
= 6553600 ............. (1)
Interrupt = 6553600 takes 0.2621 sec
Percentage Gain = (0.2621/1)*100
= 26 %
I have understood it up to (1).
Can anybody explain to me where the 0.2621 comes from? How is the interrupt time calculated? Please help.
Reversing from the numbers you've given: it's 6553600 * 40 ns that gives 0.2621 sec.
One quite obvious problem is that the comments in the calculations are somewhat wrong. It's not
Revolutions Per Min = 3000 RPM ~ or 3000/60 = 50 RPS
In 1 Round it can read = 512 KB
No. of tracks read per second = (2^19/2^2)*50 <- WRONG
The numbers are 512K / 4 * 50. So it's in bytes. How could that be called the 'number of tracks'? Reading a full track takes 1 full rotation, so the number of tracks readable in 1 second is 50, as there are 50 RPS.
However, the total number of bytes readable in 1 s is then just 512K * 50, since 512K is the amount of data on one track.
But then it is further divided by 4...
So, I guess, the actual comments should be:
Revolutions Per Min = 3000 RPM ~ or 3000/60 = 50 RPS
In 1 Round it can read = 512 KB
Interrupts per second = (2^19/2^2) * 50 = 6553600 (*)
Interrupt triggers one memory op, so then:
total wasted: 6553600 * 40ns = 0.2621 sec.
However, I don't really like how the 'number of interrupts per second' is calculated. I currently don't see/feel/guess how or why it's just bytes/4.
The only VAGUE explanation of that "divide by 4" I can think of is:
At each byte written to the controller's memory, an event is triggered. However, the DMA controller can read only PACKETS of 4 bytes. So the hardware DMA controller must WAIT until there are at least 4 bytes ready to be read. Only then does the DMA kick in and halt the bus (or part of it) for the duration of the one memory cycle needed to copy the data. As the bus is frozen, the processor MAY have to wait. It doesn't NEED to - it can be doing its own ops and working on the cache - but if it tries touching the memory, it will need to wait until the DMA finishes.
However, I don't like a few things in this "explanation". I cannot guarantee you that it is valid. It really depends on what architecture you are analyzing and how the DMA/CPU/bus are organized.
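For completeness, here is the quoted solution's arithmetic spelled out, under the 'one memory cycle per 4-byte word' reading discussed above:
# Quoted solution: one DMA memory cycle per 4-byte word.
rps = 3000 / 60                                  # 50 revolutions per second
bytes_per_track = 512 * 1024                     # 512 sectors * 1 KB
words_per_second = bytes_per_track / 4 * rps     # 6,553,600 DMA cycles per second
memory_cycle_s = 40e-9
blocked = words_per_second * memory_cycle_s      # 0.262144 s out of every second
print(f"{blocked:.4f} s -> {blocked * 100:.0f} % of CPU time")   # ~26 %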
The only mistake is that it's not
no. of tracks read
It's actually the number of interrupts that occurred (the number of times the DMA came up with its data; that many times the CPU will be blocked).
But again, I don't know why it has been multiplied by 50 - probably because of the 1 second - but I wish to solve this without multiplying by 50.
My solution:
Here, in 1 rotation the interface can read 512 KB of data. 1 rotation time = 0.02 sec. So one byte's preparation time = 39.1 nsec, and for 4 B it takes 156.4 nsec. Memory cycle time = 40 ns. So the % of time the CPU gets blocked = 40/(40+156.4) = 0.2036, i.e. roughly 20%. But in the answer booklet the options are given as A) 10 B) 25 C) 40 D) 50. Tell me if I'm doing something wrong.
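For comparison, here is that alternative per-word calculation as a small sketch (it takes 512 KB as 512,000 bytes, which is where the 39.1 nsec figure comes from):
# Alternative view: compare the time to prepare a 4-byte word off the platter
# with the memory cycle time needed to transfer it.
rotation_s = 0.02                    # 3000 rpm -> 20 ms per revolution
track_bytes = 512_000                # 512 KB taken as 512,000 bytes here
byte_time = rotation_s / track_bytes             # ~39.1 ns to read one byte
word_time = 4 * byte_time                        # ~156 ns per 4-byte word
memory_cycle = 40e-9
blocked_fraction = memory_cycle / (memory_cycle + word_time)
print(f"{blocked_fraction * 100:.1f} %")         # ~20.4 %, vs the ~26 % from the quoted solution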