(this was a question asked in the 5th semester of my computer engineering degree)
What will the order of process execution be in the following scenario, given that the scheduling used is Round Robin?
QUANTUM SIZE = 4
Process | Arrival time | Burst time
A | 0 | 3
B | 1 | 5
C | 3 | 2
D | 9 | 5
E | 12 | 5
My real doubt comes at time 9. At this point, A and C have finished execution, B is in the queue, and D has just entered. Which one will be executed first: B or D?
Should the overall order be A-B-C-D-E-B-D-E
or A-B-C-B-D-E-D-E?
In Round Robin, processes are executed for a fixed time period called the quantum. The algorithm gives each process an equal share of CPU time in a circular fashion, and at a point of ambiguity it falls back on First Come First Serve: whichever process entered the ready queue earlier runs first. What you are describing is a tie-breaking situation, not a deadlock, and B should come first.
The order of execution will be A-B-C-B-D-E-D-E. At time 3, i.e. after A finishes, the ready queue holds B and C in that order, so B executes until time 7 (as the quantum is less than B's burst time) and is queued back at the tail of the circular ready queue, giving A-B-C-B so far.
At time 7, C executes until time 9 (its burst time is only 2). D arrives in the ready queue at time 9, behind the already re-queued B, so the queue sequence becomes A-B-C-B-D...
The final chart will be:
Q = | A | B | C | B | D | E | D | E |
T = 0   3   7   9   10  14  18  19  20
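To make the tie-breaking rule explicit, here is a minimal simulation sketch in plain Java (illustrative only; the convention that matters is marked in the comments): a preempted process rejoins the tail of the ready queue at the moment it is preempted, so B, re-queued at time 7, sits ahead of D, which only arrives at time 9.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.LinkedHashMap;
import java.util.Map;

public class RoundRobinDemo {
    public static void main(String[] args) {
        final int quantum = 4;
        // process name -> {arrival time, burst time}, in arrival order
        Map<String, int[]> procs = new LinkedHashMap<>();
        procs.put("A", new int[]{0, 3});
        procs.put("B", new int[]{1, 5});
        procs.put("C", new int[]{3, 2});
        procs.put("D", new int[]{9, 5});
        procs.put("E", new int[]{12, 5});

        Map<String, Integer> remaining = new LinkedHashMap<>();
        procs.forEach((name, at) -> remaining.put(name, at[1]));
        Map<String, Boolean> admitted = new LinkedHashMap<>();
        procs.keySet().forEach(name -> admitted.put(name, false));
        Deque<String> ready = new ArrayDeque<>();

        int time = 0;
        StringBuilder order = new StringBuilder();
        while (remaining.values().stream().anyMatch(r -> r > 0)) {
            admitArrivals(procs, admitted, ready, time);
            if (ready.isEmpty()) { time++; continue; } // CPU idle
            String p = ready.pollFirst();
            int slice = Math.min(quantum, remaining.get(p));
            for (int t = 0; t < slice; t++) {
                time++;
                admitArrivals(procs, admitted, ready, time); // arrivals during the slice
            }
            remaining.put(p, remaining.get(p) - slice);
            order.append(p).append(' ');
            if (remaining.get(p) > 0) ready.addLast(p); // preempted -> tail of the queue
        }
        System.out.println(order.toString().trim()); // prints: A B C B D E D E
    }

    private static void admitArrivals(Map<String, int[]> procs, Map<String, Boolean> admitted,
                                      Deque<String> ready, int time) {
        procs.forEach((name, at) -> {
            if (!admitted.get(name) && at[0] <= time) {
                ready.addLast(name);
                admitted.put(name, true);
            }
        });
    }
}

The only design choice that matters for the question is the line marked "preempted -> tail of the queue": because B re-enters the ready queue at time 7 and D only arrives at time 9, B runs first at time 9.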
Round Robin scheduling is similar to FCFS (First Come First Serve) scheduling, but preemption is added. The ready queue is treated as a circular queue. The CPU scheduler goes around the ready queue, allocating the CPU to each process for a time interval of up to 1 time quantum.
Operating System Concepts (Silberschatz)
Now in your case, the Gantt chart will look like this:
A | B | C | D | E | B | D | E
0   3   7   9   13  17  18  19  20
Notice that in this case we first apply FCFS and start with process A (arrival time 0 ms), then continue dispatching each process based on its arrival time (the same sequence in which you listed them in the question), for 1 time quantum (4 ms each).
If the burst time of a process is less than the time quantum, it releases the CPU voluntarily; otherwise preemption is applied.
So, the scheduling order will be:
A -> B -> C -> D -> E -> B -> D -> E
I know this question is kind of specific, but I have been trying to get this working for a long time now and haven't found any solution that works properly for me.
Let's say:
I have a Sensor1 which sends me data approximately every 5 seconds.
I have a Sensor2 which sends me data approximately every 5 minutes.
So the incoming stream of data often looks like this:
DEVICE | Sensor1 | Sensor2 | TS
A | 4 | NULL | 01:01:05
A | 3 | NULL | 01:01:25
A | 4 | NULL | 01:01:35
A | 4.2 | NULL | 01:01:55
A | 1 | 2 | 01:02:25
A | 2 | NULL | 01:02:15
A | NULL | NULL | 01:02:55
A | 1 | NULL | 01:02:35
I would now like to aggregate the values into their most recent state in windows of one minute.
Therefore I need the K-SQL window functions that provide windowing capabilities for a fixed time range:
SELECT
  DEVICE,
  latest_by_offset(SENSOR1),
  latest_by_offset(SENSOR2)
FROM STREAM
WINDOW TUMBLING (SIZE 60 SECONDS)
GROUP BY DEVICE
EMIT CHANGES;
Now to my problem, which is the output:
DEVICE | Sensor1 | Sensor2 | WINDOWSTART | WINDOWEND
A | 4.4 | NULL | 01:01:00 | 01:02:00
A | 3.4 | 2 | 01:02:00 | 01:03:00
A | 3.2 | NULL | 01:03:00 | 01:04:00
Whenever an aggregated minute contains no data from Sensor2, the aggregation function obviously returns NULL, because values from past time windows are ignored.
But from my side this is not the intended behavior. I want the sensors' values to stay the same once they have been delivered, until new values arrive.
Is there any way to achieve this using K-SQL?
I am looking for some help with displaying a set of numbers on my dashboard. I need to display the latest week whenever the dashboard is opened, but also allow the user to change the week they are looking at through the filters.
My data is the following:
latest_week_rank | week_date | completed_orders
1 | 31/01/2020 | 3500
2 | 24/01/2020 | 6450
3 | 17/01/2020 | 6050
4 | 10/01/2020 | 6110
5 | 03/01/2020 | 4000
6 | 27/12/2019 | 3500
7 | 20/12/2019 | 7500
8 | 13/12/2019 | 7450
9 | 06/12/2019 | 7540
10 | 29/11/2019 | 6900
11 | 22/11/2019 | 7100
12 | 15/11/2019 | 7400
13 | 08/11/2019 | 7550
I am going to be using a Multi KPI extension where I will display the volume of 3500 for the latest week in my data, and then have a second measure displaying a % value to show whether the volume is higher or lower than the previous week.
So the formula (3500 / 6450) gives 54.26% of the previous week's volume, i.e. the latest week is 45.74% down.
The tricky bit is how to write the expression/variable so that it defaults to the latest week, while still letting the user filter and pick another week, which should then also change which week counts as the previous week.
I would really appreciate it if somebody could advise on how I could tackle this, as I am fairly new to Qlik and still getting my head around how everything works.
I have managed to write an expression which gives me the latest week's volume and also allows me to filter and view previous weeks' data:
Sum({<week_date={">=$(=WeekStart(Max(week_date)))<=$(=WeekEnd(Max(week_date)))"}>} completed_orders)
For the percentage I used the same code, then divided the latest week by the previous week. To get the previous week, all I did was add a -1 to look one week back, and then I changed the number formatting to show it as a %.
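For reference, the previous-week measure could look like the sketch below; it relies on the shift parameter of Qlik's WeekStart()/WeekEnd() functions (that is the -1 mentioned above) and the field names from the question:

// previous week's volume: both week boundaries shifted back one week
Sum({<week_date={">=$(=WeekStart(Max(week_date),-1))<=$(=WeekEnd(Max(week_date),-1))"}>} completed_orders)

// percentage vs. the previous week, usable as the second Multi KPI measure
Sum({<week_date={">=$(=WeekStart(Max(week_date)))<=$(=WeekEnd(Max(week_date)))"}>} completed_orders)
/
Sum({<week_date={">=$(=WeekStart(Max(week_date),-1))<=$(=WeekEnd(Max(week_date),-1))"}>} completed_orders)

Because Max(week_date) is evaluated against the current selection, picking a different week in the filter automatically moves both the "latest" and the "previous" week.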
Code in the Data tab:
set vvWeekOrders = Sum({<week_date={">=$(=WeekStart(Max(week_date)))<=$(=WeekEnd(Max(week_date)))"}>} completed_orders);
But this changes my values to 0. Do I need to change the code if I am using set?
I am using a KSQL stream and counting events coming in every 5 minutes. Here is my query:
SELECT COUNT(*), created_on_date FROM TABLE_NAME WINDOW TUMBLING (SIZE 5 MINUTES) GROUP BY created_on_date;
It produces these results:
2 | 2018-11-13 09:54:50
3 | 2018-11-13 09:54:49
3 | 2018-11-13 09:54:52
3 | 2018-11-13 09:54:51
3 | 2018-11-13 09:54:50
And the query without window tumbling:
SELECT COUNT(*), created_on_date FROM OP_UPDATE_ONLY GROUP BY created_on_date;
Result:
1 | 2018-11-13 09:55:08
2 | 2018-11-13 09:55:09
1 | 2018-11-13 09:55:10
3 | 2018-11-13 09:55:09
4 | 2018-11-13 09:55:12
Both queries return the same kind of results, so what difference does the window tumbling make?
The tumbling window gives a windowed aggregation: it counts the number of events per key within each window of time. The window is based on the timestamp of your stream, inherited from your Kafka message by default but overridable with WITH (TIMESTAMP='my_column'). So you could declare created_on_date as the timestamp column and then aggregate on event time.
The second one aggregates over the entire stream of messages. Since you happen to have a timestamp in the message itself, grouping by it gives the illusion of a time-based aggregation. However, if you wanted to find out, for example, how many events arrived within an hour, it would be no use: you can only count at the grain of created_on_date.
So the first example, with a window, is usually the correct way to do it because you usually want to answer a business question about an aggregation within a given time period, not over the course of an arbitrary stream of data.
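As a hedged sketch of that timestamp override (stream and column names are taken from the question; the format string is an assumption about how created_on_date is serialized, and recent ksqlDB syntax such as EMIT CHANGES is assumed, since older KSQL releases spell some of this differently):

-- re-declare the stream so windowing is driven by the event's own timestamp
CREATE STREAM OP_UPDATE_ONLY_BY_EVENT_TIME
  WITH (TIMESTAMP='created_on_date',
        TIMESTAMP_FORMAT='yyyy-MM-dd HH:mm:ss') AS
  SELECT * FROM OP_UPDATE_ONLY;

-- each count now lands in the 5-minute bucket containing the event time,
-- and WINDOWSTART exposes the bucket boundary
SELECT WINDOWSTART, created_on_date, COUNT(*)
FROM OP_UPDATE_ONLY_BY_EVENT_TIME
WINDOW TUMBLING (SIZE 5 MINUTES)
GROUP BY created_on_date
EMIT CHANGES;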
I am building an app that deals with times and durations, and intersections between given units of time and start/end times in a database, for example:
Database:
Row # | start - end
Row 1 | 1:00 - 4:00
Row 2 | 3:00 - 6:00
I want to be able to select sums of time between two certain times, or GROUP BY an INTERVAL such that the returned recordset will have one row for each sum during a given interval, something like:
SELECT length( (start, end) ) WHERE (start, end) INTERSECTS (2:00,4:00)
(in this case (start,end) is a PERIOD which is a new data type in Postgres Temporal and pg9.2)
which would return
INTERVAL 3 HOURS
since Row 1 has two hours between 2:00 - 4:00 and Row 2 has one hour during that time.
Further, I'd like to be able to:
SELECT "lower bound of start", length( (start, end) ) GROUP BY INTERVAL 1 HOUR
which I would like to return:
1:00 | 1
2:00 | 1
3:00 | 2
4:00 | 1
5:00 | 1
which shows one row for each hour in the given range, with the sum of time that falls inside the hour starting at that timestamp.
I think that the PERIOD type can be used for this, which is in Postgres Temporal and Postgres 9.2. However, as far as I can tell, these are not available on Heroku at this time. So,
How can I enable this sort of maths on Heroku?
Try running:
heroku addons:add heroku-postgresql:dev --version=9.2
That should give you the 9.2 version, which has range types supported. As this is currently very alpha, any feedback would be greatly appreciated at dod-feedback@heroku.com.
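On 9.2 you get range types rather than PERIOD, but they can express the same queries. A sketch against the data above, assuming a table events(span tsrange) with the two rows anchored to an arbitrary date; && (overlaps) and * (intersection) are the built-in range operators:

-- total overlap with the 2:00-4:00 window: returns 03:00:00 for the two rows
SELECT sum(upper(span * tsrange('2012-01-01 02:00','2012-01-01 04:00'))
         - lower(span * tsrange('2012-01-01 02:00','2012-01-01 04:00'))) AS total
FROM events
WHERE span && tsrange('2012-01-01 02:00','2012-01-01 04:00');

-- one row per hourly bucket: how much time falls inside each hour
SELECT h AS bucket_start,
       sum(upper(span * tsrange(h, h + interval '1 hour'))
         - lower(span * tsrange(h, h + interval '1 hour'))) AS busy
FROM events,
     generate_series(timestamp '2012-01-01 01:00',
                     timestamp '2012-01-01 05:00',
                     interval '1 hour') AS h
WHERE span && tsrange(h, h + interval '1 hour')
GROUP BY h
ORDER BY h;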
I am working with Spring Batch. The batch will read several records at a time from the database, which look like this:
personId | fromDate | toDate | someCode
*100 | 05-05-2011 | 31-12-2011 | A
*100 | 01-01-2012 | 31-12-2012 | A
100 | 01-01-2013 | 03-03-2013 | B
101 | 05-05-2011 | 31-12-2011 | A
*periods to be merged.
What I want to do is merge the periods which have the same code and the same personId, but not those with a different code or personId.
The first question is: can I chunk this step? The problem is that commit intervals are static, and I might not get all the periods for a person in one chunk. Is it possible to have dynamic chunks based on how many records a person has in the table?
The next question is: what is the best way to merge the periods? Periods should be merged if the toDate is 31-12 and the next period starts on 01-01 of the next year.
I solved the problem by using two pointers in each object: one pointing to the previous period and one to the next period.
For the chunking, I needed to read all rows with the same person id and aggregate them.
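A minimal sketch of the merge rule itself (the Period record and its field names are assumed from the table in the question; the Spring Batch wiring around it is omitted, and records need Java 16+):

import java.time.LocalDate;
import java.util.ArrayList;
import java.util.List;

// assumed shape of one database row
record Period(long personId, LocalDate fromDate, LocalDate toDate, String someCode) {}

class PeriodMerger {
    // Merges consecutive periods of a single person. Assumes the list holds
    // one personId only and is sorted by fromDate (all rows for the person
    // are read first, as described above).
    static List<Period> merge(List<Period> sorted) {
        List<Period> out = new ArrayList<>();
        for (Period p : sorted) {
            if (!out.isEmpty()) {
                Period last = out.get(out.size() - 1);
                boolean sameCode = last.someCode().equals(p.someCode());
                boolean yearBoundary = last.toDate().getMonthValue() == 12
                        && last.toDate().getDayOfMonth() == 31
                        && p.fromDate().equals(last.toDate().plusDays(1));
                if (sameCode && yearBoundary) {
                    // extend the previous period instead of emitting a new one
                    out.set(out.size() - 1,
                            new Period(last.personId(), last.fromDate(), p.toDate(), last.someCode()));
                    continue;
                }
            }
            out.add(p);
        }
        return out;
    }
}

For the chunking part, the usual approach is a grouping reader: keep reading rows until the personId changes, then hand the whole group to the processor as one item, so the commit interval counts persons rather than rows.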