SCAN and CSCAN algorithm - operating-system

I am having a hard time understanding the working of the SCAN and C-SCAN disk-scheduling algorithms. I understood FCFS and Closest Cylinder Next, but I heard that SCAN resembles an elevator mechanism and got confused.
My book says that for the incoming order [10 22 20 2 40 6 38] (with the disk head currently at 20), SCAN, moving up at the start, serves [(20) 20 22 38 40 10 6 2]; this requires moves of [0 2 16 2 30 4 4] cylinders, a total of 58 cylinders.
Where does the pattern [(20) 20 22 38 40 10 6 2] come from?

Let's look at what the SCAN (elevator) disk-scheduling algorithm says:-
It scans down towards the nearest end and then when it hits the bottom
it scans up servicing the requests that it didn't get going down. If a
request comes in after it has been scanned it will not be serviced
until the process comes back down or moves back up.
So, in your case, the current position of the disk head is 20. As per the SCAN algorithm, the head sweeps toward one end, servicing requests on the way (here 20, 22, 38, 40), and after hitting that end it reverses and services the remaining requests (10, 6, 2) on the way back.
The order is :-
 |                                |
 |  * current position            |  * ...then it reverses and
 |  |                             |    services the remaining
 |  v  it sweeps toward one end,  |    requests on the way back
 |     servicing requests,        |
 |     until it hits that end...  |
 |________________________________|

Fig :- Demonstration of the SCAN algorithm
So, as per the given data, the order will be [(20) 20 22 38 40 10 6 2];
EDIT :-
The only difference between SCAN and C-SCAN is that C-SCAN begins its
scan toward the nearest end and works its way all the way to the end
of the disk. Once it hits the bottom or top, it jumps to the other end
and moves in the same direction, unlike SCAN, which sweeps back over
the same path.
As per C-SCAN, the direction of movement is the same up to the end; after that, instead of reversing, the head jumps to the other end and continues moving in the same direction.
So, as per the given data, the order will be [(20) 20 22 38 40 2 6 10]; notice the change in the last three disk positions.
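A small sketch (Python; it assumes the head sweeps toward the higher-numbered cylinders first, as the book's example does, and it ignores the seek cost of C-SCAN's jump) reproduces both orders and the 58-cylinder total:

```python
def scan_order(requests, head):
    # SCAN sketch: sweep toward higher cylinders first, then reverse
    # and serve the remaining requests on the way back.
    up = sorted(r for r in requests if r >= head)
    down = sorted((r for r in requests if r < head), reverse=True)
    return up + down

def cscan_order(requests, head):
    # C-SCAN sketch: sweep up, jump to the far end, keep moving up.
    up = sorted(r for r in requests if r >= head)
    wrapped = sorted(r for r in requests if r < head)
    return up + wrapped

def total_movement(head, order):
    # Sum of head moves between consecutive serviced cylinders.
    return sum(abs(b - a) for a, b in zip([head] + order, order))

reqs = [10, 22, 20, 2, 40, 6, 38]
print(scan_order(reqs, 20))                      # [20, 22, 38, 40, 10, 6, 2]
print(total_movement(20, scan_order(reqs, 20)))  # 58 cylinders
print(cscan_order(reqs, 20))                     # [20, 22, 38, 40, 2, 6, 10]
```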
I hope it is clear. Feel free to ask remaining doubts.

Octave/Matlab: Arranging Space-Time Data in a Matrix

This is a question about common coding practice, not about a specific error or malfunction.
I have a matrix of values of a variable that changes in space and time. What is the common practice: use columns for the time values or for the space values?
(That is, if there is a definite common practice in the first place.)
Update: Here is an example of the data in tabular form. The time vector is much longer than the space vector.
 t   y(x1)  y(x2)
 1   100    50
 2   100    50
 3   100    50
 4    99    49
 5    99    49
 6    99    49
 7    98    49
 8    98    48
 9    98    48
10    97    48
It depends on your goal and ultimately doesn't matter that much; it's more a question of your convenience.
If you do care about performance, there is a slight difference. Your code achieves maximum cache efficiency when it traverses monotonically increasing memory locations. In MATLAB, data is stored column-wise, so processing data column-by-column gives maximum cache efficiency. Thus, if you frequently access all the data at certain time layers, store space in columns. If you frequently access all the data at certain spatial points, store time in columns.
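For illustration, the column-major layout can be mimicked outside of MATLAB too; this sketch uses numpy's `order="F"` (Fortran order, the same layout MATLAB uses) to show which elements end up contiguous in memory:

```python
import numpy as np

# MATLAB/Octave store arrays column-major ("Fortran order"); numpy can
# mimic that layout with order="F".
a = np.arange(6).reshape(2, 3, order="F")
# a == [[0, 2, 4],
#       [1, 3, 5]]
col = a[:, 0]               # one column: its elements sit side by side
mem = a.flatten(order="F")  # the actual memory order
print(list(mem))            # [0, 1, 2, 3, 4, 5] -> columns stored back to back
```

Traversing `a` column by column therefore visits memory in increasing order, which is the cache-friendly pattern described above.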

Pure Data - Get adc value in a particular duration

I'm trying to get the adc values over, say, 50 seconds. I ended up with the patch in the picture below.
I set the metro to 50, which is 0.05 sec, and the tabwrite size to 1000. I got a list of values as below.
But I feel it isn't right: when I speak louder for a few seconds, the entire graph changes. Can anyone point out what I did wrong? Thank you.
the [metro 50] will retrigger every 50 milliseconds (20 times per second).
So the table will get updated quite often, which explains why it reacts immediately to your voice input.
To record 50 seconds worth of audio, you need:
a table that can hold 2205000 (50*44100) samples (as opposed to the default 64)
a [metro] that triggers every 50 seconds:
[tgl]
|
[metro 50000]
|
| [adc~]
|/
[tabwrite~ mytable]
[table mytable 2205000]
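The buffer-size arithmetic above can be checked quickly (a sketch; it assumes Pd's default 44100 Hz sample rate and a single channel):

```python
# Table size needed to hold `seconds` of audio at `sample_rate` Hz
# (assumes Pd's default of 44100 Hz, one channel).
sample_rate = 44100
seconds = 50
table_size = sample_rate * seconds   # slots needed in [table mytable ...]
metro_ms = seconds * 1000            # [metro] period in milliseconds
print(table_size, metro_ms)          # 2205000 50000
```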

I literally don't know what I need to do for this but it needs to be done and evaluated within a couple of days

It's an emergency.
For one of my programming units we have to create a 'UCAS' points calculator in NetBeans. UCAS sorts out all of the university admissions stuff in the UK.
So I need to create a GUI that has text boxes and buttons, where I enter the number of passes, merits, and distinctions I've got on my course, and it gives me the total number of UCAS points.
Pass = 70 points
Merit = 80 points
Distinction = 90 points
There are 18 units in total, so if I enter that I have 4 passes (a pass for 4 units), the box with the total number of UCAS points needs to go up.
I need three boxes, one each for passes, merits, and distinctions, plus a box for the total number of UCAS points. It needs a calculate button of course, and a 'reset facility'. It says that the current date needs to be visible on the interface.
It also says that there should be an option to quit the program at any time.
It has to be a **GUI** and I literally don't know what to do, and I don't want to fail again.
Can someone help me? I literally don't know how to make the buttons work or anything; I just need someone to show me exactly what to do because I haven't got a clue.
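The points arithmetic itself is the easy part; here is a sketch of just that calculation (written in Python for brevity, with a hypothetical function name and the point values taken from the brief) — this is what the Calculate button's handler would compute, with the Swing/NetBeans wiring done separately:

```python
# Point values from the brief: Pass = 70, Merit = 80, Distinction = 90.
POINTS = {"pass": 70, "merit": 80, "distinction": 90}

def ucas_points(passes, merits, distinctions):
    # Total UCAS points for the entered unit counts.
    return (passes * POINTS["pass"]
            + merits * POINTS["merit"]
            + distinctions * POINTS["distinction"])

print(ucas_points(4, 0, 0))   # 4 passes -> 280 points
print(ucas_points(6, 6, 6))   # all 18 units -> 1440 points
```

In the GUI, the three text boxes supply the three arguments, the result goes into the total box, and the reset button simply sets all four boxes back to zero/empty.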

Round Robin Scheduling : Two different solutions - How is that possible?

Problem :
Five batch jobs, A through E, arrive at a computer center at almost the same time. They have estimated running times of 10, 6, 2, 4, and 8 minutes. Their (externally determined) priorities are 3, 5, 2, 1, and 4, respectively, with 5 being the highest priority. Determine the mean process turnaround time. Ignore process-switching overhead. For round-robin scheduling, assume that the system is multiprogramming and that each job gets its fair share of the CPU. All jobs are completely CPU bound.
Solution #1 The following solution comes from this page :
For round robin, during the first 10 minutes, each job gets 1/5 of the
CPU. At the end of the 10 minutes, C finishes. During the next 8
minutes, each job gets 1/4 of the CPU, after which time D finishes.
Then each of the three remaining jobs get 1/3 of the CPU for 6
minutes, until B finishes and so on. The finishing times for the five
jobs are 10, 18, 24, 28, 30, for an average of 22 minutes.
Solution #2 the following solution comes from Cornell University, can be found here, and is obviously different from the previous one even though the problem is given in exactly the same form (this solution, by the way, makes more sense to me) :
Remember that the turnaround time is the amount of time that elapses
between the job arriving and the job completing. Since we assume that
all jobs arrive at time 0, the turnaround time will simply be the time
that they complete. (a) Round Robin: The table below gives a break
down of which jobs will be processed during each time quantum. A *
indicates that the job completes during that quantum.
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30
A B C D E A B C* D E A B D E A B D* E A B E A B* E A E A E* A A*
The results are different: In the first one C finishes after 10 minutes, for example, whereas in the second one C finishes after 8 minutes.
Which one is correct, and why? I'm confused. Thanks in advance!
The problems are different. The first problem does not specify a time quantum, so you have to assume the quantum is very small compared to a minute. The second problem clearly specifies a one minute scheduler quantum.
The mystery with the second solution is why it assumes the tasks run in letter order. I can only assume that this is an assumption made throughout the course, and so students would be expected to make it here.
In fact, there is no such thing as a 'correct' RR algorithm. RR is merely a family of algorithms, based on the common concept of scheduling several tasks in a circular order. Implementations may vary (for example, you may consider task priorities or you may discard them, or you may manually set the priority as a function of task length or whatever else).
So the answer is - both algorithms seem to be correct, they are just different.
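Both readings can be sketched in Python (hypothetical helper names; the first models the infinitesimal quantum as processor sharing, the second a 1-minute quantum with jobs cycling in letter order):

```python
from collections import deque

def rr_finish_times(bursts):
    # Processor-sharing sketch of RR with quantum -> 0: all jobs arrive
    # at t = 0 and the runnable jobs share the CPU equally.
    t, prev = 0.0, 0.0
    finish = []
    jobs = sorted(bursts)
    n = len(jobs)
    for i, b in enumerate(jobs):
        t += (b - prev) * (n - i)   # (n - i) jobs still share the CPU
        finish.append(t)
        prev = b
    return finish

def rr_quantum(jobs, quantum=1):
    # Discrete-quantum sketch: jobs cycle in the order given.
    remaining = dict(jobs)
    queue = deque(name for name, _ in jobs)
    t, finish = 0, {}
    while queue:
        name = queue.popleft()
        run = min(quantum, remaining[name])
        t += run
        remaining[name] -= run
        if remaining[name] == 0:
            finish[name] = t        # job completes during this quantum
        else:
            queue.append(name)
    return finish

print(rr_finish_times([10, 6, 2, 4, 8]))
# [10.0, 18.0, 24.0, 28.0, 30.0] -> mean 22, matching Solution #1
print(rr_quantum([("A", 10), ("B", 6), ("C", 2), ("D", 4), ("E", 8)]))
# C at 8, D at 17, B at 23, E at 28, A at 30, matching Solution #2's table
```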

Can't understand Belady's anomaly

So Belady's Anomaly states that when using a FIFO page replacement policy, when adding more page space we'll have more page faults.
My intuition says that we should have fewer page faults, or at most the same number, as we add more page space.
If we think of a FIFO queue as a pipe, adding more page space is like making the pipe bigger:
____
O____O size 4
________
O________O size 8
So, why would you get more page faults? My intuition says that with a longer pipe, you'd take a bit longer to start having page faults (so, with an infinite pipe you'd have no page faults) and then you'd have just as many page faults and just as often as with a smaller pipe.
What is wrong with my reasoning?
The reason that, when using FIFO, increasing the number of pages can increase the fault rate for some access patterns is that when you have more pages, recently requested pages can remain at the bottom of the FIFO queue longer.
Consider the third time that "3" is requested in the wikipedia example here:
http://en.wikipedia.org/wiki/Belady%27s_anomaly
Page faults are marked with an "f".
3 frames:

Page Requests  3   2   1   0   3   2   4   3   2   1   0   4
Newest Page    3f  2f  1f  0f  3f  2f  4f  4   4   1f  0f  0
                   3   2   1   0   3   2   2   2   4   1   1
Oldest Page            3   2   1   0   3   3   3   2   4   4

4 frames:

Page Requests  3   2   1   0   3   2   4   3   2   1   0   4
Newest Page    3f  2f  1f  0f  0   0   4f  3f  2f  1f  0f  4f
                   3   2   1   1   1   0   4   3   2   1   0
                       3   2   2   2   1   0   4   3   2   1
Oldest Page                3   3   3   2   1   0   4   3   2
In the first example (with fewer pages), there are 9 page faults.
In the second example (with more pages), there are 10 page faults.
When using FIFO, increasing the size of the cache changes the order in which items are removed, which in some cases can increase the fault rate.
Belady's anomaly does not state anything about the general trend of fault rates with respect to cache size, so your reasoning (viewing the cache as a pipe) is not wrong in the general case.
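The two traces can be reproduced with a short FIFO sketch (Python; `fifo_faults` is a hypothetical helper name):

```python
from collections import deque

def fifo_faults(requests, frames):
    # Count page faults under FIFO replacement with `frames` page slots.
    queue, faults = deque(), 0
    for page in requests:
        if page not in queue:        # miss: fault and (maybe) evict
            faults += 1
            if len(queue) == frames:
                queue.popleft()      # evict the oldest resident page
            queue.append(page)
        # on a hit, FIFO leaves the queue untouched
    return faults

reqs = [3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4]
print(fifo_faults(reqs, 3))  # 9 faults
print(fifo_faults(reqs, 4))  # 10 faults -- the anomaly
```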
In summary:
Belady's anomaly points out that, under a particular (and possibly rare) access pattern, a larger cache can end up with a higher fault rate, because with more room in the cache, items can be re-enqueued in the FIFO later than they would be with a smaller cache.
This statement is wrong because it is overgeneralized:
Belady's Anomaly states that when using a FIFO page replacement policy, when adding more page space we'll have more page faults
This is a corrected version:
"Belady's Anomaly states that when using a FIFO page replacement policy, when adding more page space, some memory access patterns will actually result in more page faults."
In other words... whether the anomaly is observed depends on the actual memory access pattern.
Belady's anomaly occurs in page replacement algorithms that do not follow the stack property: the pages held in memory with fewer frames should be a subset of the pages held with more frames, i.e. on increasing the number of page frames, the pages that were present before must still be there. The anomaly can occur with FIFO, and even with random page replacement, but not with LRU or the optimal algorithm.
I could not understand Belady's anomaly even after reading the Wikipedia article and the accepted answer. After writing out the trace I kind of got it. Here I'd like to share my understanding.
Key to understanding Belady's anomaly:
Unlike LRU, FIFO just pushes out the oldest elements regardless of
access frequency. So staying in the FIFO longer means falling victim
to eviction.
From here on, I refer to the 3-page and 4-page FIFO queues as FIFO3 and FIFO4.
To understand the Wikipedia example, I divided it into two parts: when FIFO3 catches up with FIFO4, and when FIFO3 overtakes FIFO4.
How FIFO3 catches up with FIFO4 on the 9th request
Look at 3 in both FIFOs. In FIFO3, 3 is evicted on the 4th request and comes back on the 5th, so it is still there on the 8th and a cache hit happens.
In FIFO4, 3 is a HIT on the 5th, but that hit means 3 stays in place and is evicted on the 7th, right before the next 3 arrives on the 8th.
2 behaves the same as 3: the second 2 (6th) is a MISS in FIFO3 and a HIT in FIFO4, but the third 2 (9th) is a HIT in FIFO3 and a MISS in FIFO4.
It might help to think of it like this: in FIFO3, 3 was re-enqueued on the 5th, so it stayed until the 8th; in FIFO4, 3 was old and got evicted on the 7th, right before the next 3 came.
How FIFO3 overtakes FIFO4
Because there are two cache misses, on the 8th and 9th requests, in FIFO4, 4 is pushed down and evicted on the 11th.
FIFO3 still retains 4 on the 12th because of the cache hits on the 8th and 9th, so 4 was not pushed down.
I think this is why Wikipedia's article says "penny wise, pound foolish".
Conclusion
FIFO is a simple and naive algorithm that does not take access frequency into account.
You might get a better understanding by applying LRU (Least Recently Used) to Wikipedia's example: under LRU, 4 frames do better than 3.
Belady's anomaly happens in a FIFO scheme only when the page currently being referenced is the page that was evicted most recently from main memory; only in that case can increasing the page space give you more page faults.