SJF Scheduling: Selecting Process based on predicted CPU Burst Time

In the SJF algorithm, we predict the next CPU burst time using the exponential-averaging formula:
τ(n+1) = α·t(n) + (1−α)·τ(n), where t(n) is the length of the most recent actual CPU burst and τ(n) is the previous prediction. We then select the process with the shortest predicted burst time.
Now my question is: do we already have an idea about the CPU burst times of the processes arriving?
If yes, then why predict the CPU burst time? We could just schedule the process with the shortest time directly.
If no, i.e., we do not have any idea about the burst times of the processes, how does the predicted burst time τ(n+1) help us pick a process?
Hope I am able to explain my confusion.
Thanks.

The answer is in the question itself: the latter condition is true. We don't have any idea about the burst times of incoming processes, and that is why we predict their burst time τ(n+1). Our prediction may not be 100% right all the time, but it serves the purpose of SJF to a very great extent!
I hope you have coded this and seen the results; if not, I recommend doing so, as it helps a lot in understanding this.
This is the application I developed for my teacher on some scheduling techniques.
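As a rough illustration, here is a minimal sketch of how the exponential-averaging prediction can drive the selection; the process data, α value, and variable names below are made up for illustration, not taken from the question.

# Minimal sketch: SJF selection using exponentially averaged burst predictions.
ALPHA = 0.5  # weight given to the most recent actual burst

def predict_next_burst(last_burst, last_prediction, alpha=ALPHA):
    # tau(n+1) = alpha * t(n) + (1 - alpha) * tau(n)
    return alpha * last_burst + (1 - alpha) * last_prediction

# Each ready process keeps its last observed burst t(n) and last prediction tau(n).
processes = {
    "P1": {"last_burst": 6, "last_prediction": 10},
    "P2": {"last_burst": 4, "last_prediction": 8},
    "P3": {"last_burst": 13, "last_prediction": 9},
}

# Compute tau(n+1) for every ready process and pick the smallest.
predictions = {pid: predict_next_burst(p["last_burst"], p["last_prediction"])
               for pid, p in processes.items()}
next_process = min(predictions, key=predictions.get)
print(predictions)   # {'P1': 8.0, 'P2': 6.0, 'P3': 11.0}
print(next_process)  # P2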

Related

STM32 ADC: leave it running at 'high' speed or switch it off as much as possible?

I am using a G0 with one ADC and 8 channels. Works fine. I use 4 channels. One is temperature, which is measured constantly, and I am interested in the value every 60s. Another one is almost the opposite: it measures sound waves for a couple of minutes per day and I need those samples at 10kHz.
I solved this by letting all 4 channels sample at 10kHz and have the four readings moved to memory by DMA (array of length 4 with 1 measurement each). Every 60s I take the temperature and when I need the audio, I retrieve the audio values.
If I had two ADCs, I would have the temperature ADC do one conversion every 60s, non-stop, and I would only start the audio ADC for the couple of minutes a day that it is needed. But with the one-ADC solution, it seems simple to let all conversions run at this high speed continuously, and that raised my question: is there any true downside to doing 40,000 conversions per second, 24 hours per day? If not, the code is simple; I just have the most recent values in memory all the time. But maybe I ruin the chip? I know I use too much energy, but there is plenty of it in this case.
You aren't going to "wear it out" by running it when you don't need to.
The main problems are wasting power and RAM.
If you have enough of both, then the lesser problems are:
The wasted power becomes heat, which may upset your temperature measurements (though the amount is very small).
Having the DMA running will increase your interrupt latency and maybe also slow down the processor slightly, if it encounters bus contention (this only matters if you are close to capacity in these regards).
Having it running all the time may also have the advantage of more stable readings, since they are not perturbed by turning things on and off.

Shortest Process Next Scheduling Algorithm

I am wondering which answer is correct. For answer 1, when P5 finishes executing, we compare P3, P6, and P4; if we compare them according to arrival time, then P3 will execute first. So my question is: do we need to follow the arrival time? Which answer is correct? Thanks.
This is the question image
The first answer is correct.
A case could be made that both are correct, but the first is "more correct." The tiebreaker for this scheduling algorithm should be the arrival time. Your goal is to minimize the process wait time, and it makes more sense to first run the process that has been waiting longer.
Algorithms like this will often use a priority queue, which would sort the elements from shortest burst time to longest burst time. Queues use FIFO (first in, first out), which means if there are two elements with the same burst time, the one that was added first would be selected first.
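As an illustration of that tie-breaking rule, here is a minimal sketch using Python's heapq; the burst and arrival values below are made up, not taken from the question image.

import heapq

# Minimal sketch: Shortest Process Next selection with arrival time as tiebreaker.
# Ready-queue entries are (burst_time, arrival_time, name); heapq pops the
# smallest tuple, so equal burst times fall back to the earlier arrival.
ready = [
    (5, 3, "P3"),
    (5, 6, "P6"),
    (7, 4, "P4"),
]
heapq.heapify(ready)

while ready:
    burst, arrival, name = heapq.heappop(ready)
    print(f"run {name} (burst={burst}, arrived={arrival})")
# -> run P3, then P6, then P4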

How does the CPU know how much time a process takes to complete?

I am studying various CPU scheduling algorithms in OS. I was asked this question once and was not able to answer it. Can anyone explain it to me?
The OS can estimate the total time needed for each task by first calculating its CPI (cycles per instruction); there is a weighted-average CPI for each job. I hope this answers the question. But if you are talking about burst time, then a default value is assumed for unknown processes.

Possible way to speed up SUMO simulation

Hi all, I am a new SUMO user. I am running simulations iteratively with DUAROUTER and SUMO. The simulation consists of 20,000 trips on a Singapore network and it is very slow; it takes an hour or more to complete one simulation.
Does anyone know a way to speed up the process? I need to do 50 iterations, and 1 hour per iteration is too slow.
My commands are as follows:
duarouter --net-file sg_left_v1.net.xml --trip-files trips20000_merged.trips.xml --output-file 0.20000.route.xml --ignore-errors true --no-warnings true --repair true
sumo -c simulation_sg_20000.sumocfg --tripinfo-output 0.20000.trip.output.xml --no-warnings true --tripinfo-output.write-unfinished true --vehroute-output 0.20000.individual.output.xml --link-output 0.20000.link-state.output.xml
The number X in X.20000.something.xml is increased on each iteration by my python code.
Thank you all in advance.
There are different things you can do to speed up the process by analyzing the bottlenecks. I would do the following:
1. Check whether the traffic flow is smooth. If big jams pile up, the simulation slows down.
2. Check whether the vehicles depart at the times you expect them to. Even if there is no visible jam, the backlog slows the simulation down. A good indicator is that vehicles with an intended departure time near the end of the simulation take much longer to depart (this is also in the tripinfo output).
3. Recheck whether you need all outputs. To get a feeling for whether it helps, disable them one by one and look at the running time.
3a. Extend SUMO to aggregate your data. It is open source, after all, so if the outputs are the bottleneck, aggregate inside the simulation.
4. Think about parallel execution. Maybe you do not need to start the iterations one after another (see the sketch after this list)?
5. Make the scenario smaller.
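As a sketch of the parallel-execution idea, the Python snippet below launches several independent duarouter/sumo iterations at once with subprocess; the file names follow the X.20000.*.xml pattern from the question, some arguments are trimmed for brevity, and it only applies if the iterations do not feed into each other.

import subprocess
from concurrent.futures import ProcessPoolExecutor

# Minimal sketch: run independent iterations in parallel instead of one after another.
def run_iteration(i):
    subprocess.run(
        ["duarouter",
         "--net-file", "sg_left_v1.net.xml",
         "--trip-files", "trips20000_merged.trips.xml",
         "--output-file", f"{i}.20000.route.xml",
         "--ignore-errors", "true", "--no-warnings", "true", "--repair", "true"],
        check=True)
    subprocess.run(
        ["sumo", "-c", "simulation_sg_20000.sumocfg",
         "--tripinfo-output", f"{i}.20000.trip.output.xml",
         "--no-warnings", "true"],
        check=True)
    return i

# Run a few iterations at a time; only valid when iteration i does not depend
# on the results of iteration i-1.
with ProcessPoolExecutor(max_workers=4) as pool:
    for done in pool.map(run_iteration, range(50)):
        print(f"iteration {done} finished")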
To accelerate the simulation, you can also pass a parameter to SUMO called step-length,
which is the duration of a single simulation step in seconds; larger values mean fewer steps to compute per simulated second, at the cost of accuracy.
sumo your-other-args-here --step-length 1
It should enable you to get the wanted result.

How does the operating system know the execution time of a process?

I was revisiting operating systems CPU job scheduling and suddenly a question popped into my mind: how does the OS know the execution time of a process before its execution? I mean, in scheduling algorithms like SJF (shortest job first), how is the execution time of a process calculated a priori?
From Wikipedia:
Another disadvantage of using shortest job next is that the total execution time of a job must be known before execution. While it is not possible to perfectly predict execution time, several methods can be used to estimate the execution time for a job, such as a weighted average of previous execution times.[1]
More on http://en.wikipedia.org/wiki/Shortest_job_next
Also, the OS can compute the total needed time for each task by first calculating its CPI
(CPI: cycles per instruction).
There is a weighted-average CPI for each job.
For example, floating-point instructions weigh much more than fixed-point instructions, meaning they take more time to perform. So a job dealing with fixed-point operations, like add or increment, is perceived to be shorter. Hence, under shortest job first, it will be executed before the aforementioned job.
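To make the weighted-average idea concrete, here is a minimal sketch; the CPI values, instruction mixes, and clock frequency below are made-up numbers for illustration, not figures from any real CPU.

# Minimal sketch: estimating job length from a weighted-average CPI.
CLOCK_HZ = 1_000_000_000  # assume a 1 GHz clock for the example

cpi = {"fixed_point": 1, "load_store": 2, "floating_point": 4}

def estimated_seconds(instruction_mix):
    # instruction_mix maps instruction class -> instruction count
    total_cycles = sum(cpi[kind] * count for kind, count in instruction_mix.items())
    return total_cycles / CLOCK_HZ

job_a = {"fixed_point": 8_000_000, "load_store": 1_000_000}    # mostly adds/increments
job_b = {"floating_point": 6_000_000, "load_store": 1_000_000} # mostly floating-point work

print(estimated_seconds(job_a))  # shorter estimate -> runs first under SJF
print(estimated_seconds(job_b))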