How to calculate simple CPU load by formula - operating-system

I missed a class and now I'm confused.
I'm trying to solve this task:
On a server with 2 CPUs, 3 processes are running.
Each process waits for I/O 10% of its time.
How high is the CPU load?
The only formula I have is:
CPU load of a 1-CPU system = 1 - p^n
p = fraction of time a process waits on I/O
n = number of processes
I have no clue how to account for the second CPU in the formula.
Or can I say one CPU runs 2 processes and the other runs only 1?

One processor is idle if fewer than 2 processes can run, i.e. if at least 2 of the 3 processes are blocked. The probability of at least one processor being idle is
3 x (.1 x .1 x .9) + .1 x .1 x .1 = 0.028
Both processors are idle if all 3 processes cannot run. The probability of both processors being idle is:
.1 x .1 x .1 = 0.001
Is the question then whether one processor is idle or whether both processors are idle? If the former, and one processor is still running, do you count this as being half idle?
I am amazed at the useless busywork they put students through.
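To sanity-check those numbers, here is a small Python sketch. It assumes the three processes block independently, and it also reports one common (but assumed, since the exercise doesn't define it) notion of load: the expected number of busy CPUs divided by the number of CPUs.

from itertools import product

p = 0.1            # probability a given process is blocked on I/O
n_cpus = 2

p_one_idle = 0.0   # at least one CPU idle: fewer than 2 processes ready
p_both_idle = 0.0  # both CPUs idle: no process ready
exp_busy = 0.0     # expected number of busy CPUs

# Enumerate all 2^3 blocked/ready combinations of the three processes.
for blocked in product([True, False], repeat=3):
    prob = 1.0
    for b in blocked:
        prob *= p if b else (1 - p)
    ready = blocked.count(False)
    if ready < n_cpus:
        p_one_idle += prob
    if ready == 0:
        p_both_idle += prob
    exp_busy += prob * min(ready, n_cpus)

print(round(p_one_idle, 4))         # 0.028  -> at least one CPU is idle
print(round(p_both_idle, 4))        # 0.001  -> both CPUs are idle
print(round(exp_busy / n_cpus, 4))  # 0.9855 -> ~98.6% average CPU load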

Related

How many of the available cores can I take up on a Mac without crashing the computer?

I need to run a parfor loop in Matlab and assign the number of workers to the loop.
Total Number of Cores: 10 (8 performance and 2 efficiency).
What is the maximum number of cores that I can take up, without crashing the computer and still being able to use the computer for browsing and email? Is leaving only 1 core free enough?
Below is an example in which 3 cores are selected.
n = 3         % select the number of workers that you need and set n to that integer
parpool(n)    % start a parallel pool with n workers
parfor i = 1:n
    c(:,i) = eig(rand(100));   % example workload: eigenvalues of a random matrix
end

Capacity Provisioning for Server Farms Markov Chain Queues

Suppose we are using an M/M/N model: how many servers would we need to keep the probability that an arriving job has to wait below 0.2, given that jobs arrive at a rate of 400/second and the processing times are exponentially distributed with a mean of 1 second?
So I used Erlang's C-formula:
P[queueing] = ((lambda/mu)^c / c!) * (1/(1-rho)) * pi_0
And got an answer of 4 servers, however when I used the model they showed in the textbook:
rho = lambda/(k*mu) < 1  ->  k > lambda/mu
I get k = 400 servers, and I'm not sure which is right.
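For what it's worth, here is the Erlang C search written out as a small Python sketch (assuming the standard M/M/c formulas, with lambda = 400/s and mu = 1/s; the function name erlang_c is just my own):

def erlang_c(c, lam, mu):
    """P(an arriving job waits) in an M/M/c queue (Erlang C formula)."""
    a = lam / mu              # offered load in Erlangs
    rho = a / c               # per-server utilization
    if rho >= 1:
        return 1.0            # unstable: effectively everyone waits
    term = 1.0                # a^k / k!, starting at k = 0
    s = term                  # running sum over k = 0 .. c-1
    for k in range(1, c):
        term *= a / k
        s += term
    tail = (term * a / c) / (1 - rho)   # (a^c / c!) / (1 - rho)
    return tail / (s + tail)

lam, mu = 400.0, 1.0
k = int(lam / mu) + 1         # stability requires k > lambda/mu, so start at 401
while erlang_c(k, lam, mu) >= 0.2:
    k += 1
print(k, erlang_c(k, lam, mu))

The two results aren't really in conflict: rho < 1 is only the stability floor (so an answer of 4 servers can't be right when lambda/mu = 400), and the C-formula tells you how far above that floor you must staff to push the waiting probability under 0.2.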

AnyLogic: how to block a line by a probability?

So I'm modelling a production line (a simple one, with 5 processes which I modelled as Services). I'm simulating one month, and during this month my line stops approximately 50 times (due to machine breakdowns). Each stop lasts between 3 and 60 minutes, with an average of 12 minutes (following a triangular distribution). How can I implement this in the model? I tried to create an event but can't figure out what type of trigger I should use.
Have your Services require a resource. If they are already seizing a resource such as labor, that's fine; they can require more than one. On the resourcePool there is a section called "Shifts, breaks, failures, maintenance...". Check "Failures/repairs:" and enter your downtime distribution there.
If you want to use a triangular distribution, you need min/MODE/max, not min/AVERAGE/max. And if you really want an average of 12 minutes with a minimum of 3 and a maximum of 60, then this is not a triangular distribution: there is no mode that would give you an average of 12.
The average of a triangular distribution, where X is the mode:
( 3 + X + 60 ) / 3 = 12
Solving gives X = 36 - 63 = -27, so the mode would have to be negative, which is not possible for a delay time.
Look at using a different distribution. The exponential is often used for time between failures (or the Poisson for failures per hour).
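A quick Python check of that arithmetic (the exponential alternative at the end is only an illustration of the suggestion above, not AnyLogic syntax):

import random

# The mean of triangular(min, mode, max) is (min + mode + max) / 3.
# Forcing a mean of 12 with min = 3 and max = 60 requires:
mode = 3 * 12 - (3 + 60)
print(mode)   # -27: impossible, no valid mode gives a mean of 12

# One alternative: exponentially distributed repair times with a mean
# of 12 minutes (assumed here purely for illustration).
repairs = [random.expovariate(1 / 12) for _ in range(100_000)]
print(sum(repairs) / len(repairs))   # ~12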

CPU utilization calculation

I've read in many places that a simple and decent way to get the % of CPU utilization is by this formula:
CPU utilization = 1 - p^n
where:
p - the fraction of time each process is blocked (waiting on I/O)
n - number of processes
But I can't find an explanation for it. It seems to have something to do with statistics, but I can't wrap my head around it.
My starting point is: if I have 2 processes with 50% wait time each, then the formula yields 1 - 1/4 = 75% CPU utilization. But my (broken) logic begs the question: if one process is blocked on I/O and the other is swapped in to run while the first is blocked (whatever the burst is), then while one waits the other runs, and their wait times overlap. Isn't that 100% CPU utilization? I think that holds only when the first half of each program is guaranteed to run without needing I/O.
The question is: how does the formula take every other possibility into account?
You need to think in terms of probabilities. If the probability of each process being blocked (waiting for I/O) is 0.5, then the probability of the CPU being idle is the probability of all of the processes being blocked at the same time. That is 0.5 * 0.5 = 0.25, and so the probability the CPU is doing work is 1 - 0.25 = 0.75 = 75%.
CPU utilisation is given as 1 minus the probability of the CPU being in the idle state, and the CPU remains idle only when all of the processes loaded in main memory are blocked on I/O. So if each of n processes has a wait time of 50%, the probability that all the processes are in the blocked (I/O) state at once is 0.5^n, and the utilisation is 1 - 0.5^n.
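The overlap intuition is exactly what the model averages over: at a random instant each process is (assumed independently) blocked with probability p, and the CPU is idle only at the instants where all of them are blocked at once. A small Monte Carlo sketch in Python reproduces the 75% figure:

import random

def utilization(p, n, steps=200_000):
    """Fraction of sampled instants at which at least one of n processes
    is ready, assuming each is independently blocked with probability p."""
    busy = sum(
        any(random.random() >= p for _ in range(n))
        for _ in range(steps)
    )
    return busy / steps

print(utilization(0.5, 2))   # ~0.75
print(1 - 0.5 ** 2)          # 0.75, the formula's prediction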

Interrupt time in DMA operation

I'm facing difficulty with the following question:
Consider a disk drive with the following specifications.
16 surfaces, 512 tracks/surface, 512 sectors/track, 1 KB/sector, rotation speed 3000 rpm. The disk is operated in cycle stealing mode whereby whenever 1 byte word is ready it is sent to memory; similarly for writing, the disk interface reads a 4 byte word from the memory in each DMA cycle. Memory Cycle time is 40 ns. The maximum percentage of time that the CPU gets blocked during DMA operation is?
The solution to this question provided on one site is:
Revolutions Per Min = 3000 RPM
or 3000/60 = 50 RPS
In 1 Round it can read = 512 KB
No. of tracks read per second = (2^19/2^2)*50
= 6553600 ............. (1)
Interrupt = 6553600 takes 0.2621 sec
Percentage Gain = (0.2621/1)*100
= 26 %
I have understood up to (1).
Can anybody explain to me where the 0.2621 comes from? How is the interrupt time calculated? Please help.
Reversing from the numbers you've given, that's 6553600 * 40 ns, which gives 0.2621 sec.
One quite obvious problem is that the comments in the calculations are somewhat wrong. It's not
Revolutions Per Min = 3000 RPM ~ or 3000/60 = 50 RPS
In 1 Round it can read = 512 KB
No. of tracks read per second = (2^19/2^2)*50 <- WRONG
The numbers are 512K / 4 * 50, so the count is byte-based. How could that be called 'number of tracks'? Reading a full track takes 1 full rotation, so the number of tracks readable in 1 second is 50, as there are 50 RPS.
However, the total number of bytes readable in 1 s is then just 512K * 50, since 512K is the amount of data on one track.
But here it is further divided by 4...
So, I guess, the actual comments should be:
Revolutions Per Min = 3000 RPM ~ or 3000/60 = 50 RPS
In 1 Round it can read = 512 KB
Interrupts per second = (2^19/2^2) * 50 = 6553600 (*)
Interrupt triggers one memory op, so then:
total wasted: 6553600 * 40ns = 0.2621 sec.
However, I don't really like how the 'number of interrupts per second' is calculated. I currently don't see/feel/guess how or why it's just bytes/4.
The only VAGUE explanation of that "divide it by 4" I can think of is:
At each byte written to the controller's memory, an event is triggered. However, the DMA controller can read only PACKETS of 4 bytes. So the hardware DMA controller must WAIT until there are at least 4 bytes ready to be read. Only then does the DMA kick in and halt the bus (or part of it) for the duration of the one memory cycle needed to copy the data. While the bus is frozen, the processor MAY have to wait. It doesn't NEED to; it can keep doing its own ops and working in cache, but if it tries touching memory, it will have to wait until the DMA finishes.
However, I don't like a few things in this "explanation", and I cannot guarantee you that it is valid. It really depends on which architecture you are analyzing and how the DMA/CPU/bus are organized.
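If it helps, here is that arithmetic transcribed literally into Python, under the (assumed) reading that bytes/4 counts 4-byte DMA words:

bytes_per_rotation = 512 * 1024     # 512 KB on one track, read in one rotation
rps = 3000 // 60                    # 3000 rpm = 50 rotations per second

words_per_second = bytes_per_rotation // 4 * rps   # 4-byte DMA words
print(words_per_second)             # 6553600 DMA cycles per second

cycle = 40e-9                       # memory cycle time, 40 ns
print(words_per_second * cycle)     # ~0.2621 s stolen per second -> ~26%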
The only mistake is that it's not
no. of tracks read
It's actually the no. of interrupts that occurred (the no. of times the DMA came up with its data; that is how many times the CPU will be blocked).
But again, I don't know why it has been multiplied by 50 (probably because of the 1 second), and I wish to solve this without multiplying by 50.
My solution:
Here, in 1 rotation the interface can read 512 KB of data. 1 rotation time = 60/3000 = 0.02 sec. So the preparation time for one byte of data = 0.02 / 2^19 ≈ 38.1 ns, and for 4 B it takes ≈ 152.6 ns. Memory cycle time = 40 ns. So the % of time the CPU gets blocked = 40/(40+152.6) ≈ 0.208 ≈ 21%. But in the answer booklet the options are given as A) 10 B) 25 C) 40 D) 50. Tell me if I'm doing it wrong?
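For comparison, here is the same per-word arithmetic in Python. Whether you land near 26% or near 21% comes down to whether the stolen memory cycle is assumed to overlap the preparation of the next word, which the question doesn't pin down:

rotation_time = 60 / 3000               # 0.02 s per rotation
bytes_per_rotation = 512 * 1024         # 512 KB per track

t_byte = rotation_time / bytes_per_rotation   # ~38.1 ns to prepare one byte
t_word = 4 * t_byte                           # ~152.6 ns per 4-byte word
cycle = 40e-9                                 # one stolen memory cycle, 40 ns

print(cycle / t_word)             # ~0.262 -> ~26% if transfer overlaps preparation
print(cycle / (t_word + cycle))   # ~0.208 -> ~21% if it does not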