I have one server with a FIFO queue; the server capacity is 1 and the simulation stops at 5 minutes. I have 2 events: arrive and leave the server.
I'm calculating the average delay in queue, which is the sum of each customer's delay in queue divided by the number of customers delayed.
If during the simulation all my events are of type "Arrive", won't my average delay be 0?
Because no one who joined the queue leaves the server before the end of the simulation.
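For what it's worth, here is a minimal Python sketch of how that statistic is usually tallied (the structure and all names are my own, not from any particular textbook): a delay is recorded only at the moment a customer enters service, so customers still waiting in the queue when the simulation ends contribute nothing, and the division is guarded so an empty tally doesn't crash.

```python
SIM_END = 5.0                          # simulation stops at 5 minutes
SERVICE_TIME = 10.0                    # so long that nobody finishes in time
arrivals = [0.5, 1.0, 2.0, 3.5]        # hypothetical example arrival times

total_delay = 0.0
num_delayed = 0
server_free_at = 0.0

for t in arrivals:
    starts_service = max(t, server_free_at)
    if starts_service >= SIM_END:
        break                          # still in queue at the end: nothing recorded
    total_delay += starts_service - t  # delay is recorded on entering service
    num_delayed += 1
    server_free_at = starts_service + SERVICE_TIME

avg_delay = total_delay / num_delayed if num_delayed else 0.0
print(num_delayed, avg_delay)          # -> 1 0.0: only the first customer counts
```

Under these assumptions the first customer enters the idle server with delay 0 and everyone behind it is never recorded, so the average does come out as 0.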
I need to model a waiting line in a call centre in AnyLogic. This is what I don't understand. It says:
If all of the service representatives are busy, an arriving customer is placed on hold, but ties up one of the phone lines.
I am not sure which block to use or how to model the customers waiting on hold. Can someone help me? Thank you!
Here is my solution, although I'm absolutely sure other methods exist.
To represent an arrival rate of 15/hr, use a source block with arrivals defined by rate, with the rate set to 15 per hour.
To represent 24 phone lines and 3 service reps, use a queue block (callsWaiting) followed by a delay block (service). The queue block should have capacity = 21 (the 24 lines minus the 3 calls in service) and the delay block should have capacity = 3 with a delay time of exponential(0.1, 0) minutes, representing the exponential service time with mean 10 min.
To represent losing calls when all of the phone lines are tied up, place a selectOutput block before the callsWaiting queue and set its condition to: callsWaiting.canEnter(). It will return false if the queue is at maximum capacity. On the false branch for that selectOutput, place a sink block for dropped calls.
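If it helps to sanity-check the block layout, here is a rough SimPy equivalent (not AnyLogic; the structure and every name in it are my own assumptions): callers hold one of 24 lines while waiting for and talking to one of 3 reps, and a call is dropped when all lines are tied up.

```python
import random
import simpy

ARRIVAL_RATE = 15 / 60.0   # 15 calls per hour, expressed per minute
MEAN_SERVICE = 10.0        # mean service time in minutes
PHONE_LINES = 24           # lines shared by callers on hold and in service
REPS = 3                   # service representatives

def run(sim_minutes=100_000, seed=1):
    random.seed(seed)
    env = simpy.Environment()
    lines = simpy.Resource(env, capacity=PHONE_LINES)
    reps = simpy.Resource(env, capacity=REPS)
    stats = {"served": 0, "dropped": 0}

    def call(env):
        if lines.count >= PHONE_LINES:   # every line tied up: drop the call
            stats["dropped"] += 1
            return
        with lines.request() as line:    # tie up a phone line
            yield line
            with reps.request() as rep:  # on hold until a rep is free
                yield rep
                yield env.timeout(random.expovariate(1 / MEAN_SERVICE))
        stats["served"] += 1

    def source(env):
        while True:                      # Poisson arrivals
            yield env.timeout(random.expovariate(ARRIVAL_RATE))
            env.process(call(env))

    env.process(source(env))
    env.run(until=sim_minutes)
    return stats

print(run())
```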
I have searched the internet for examples of the CPU scheduling algorithms, and I have never seen any example where processes have the same arrival time.
Is it possible to make the processes have the same arrival time?
For example:
Algorithm: Round Robin
Process | Arrival Time | Burst Time
P1      | 3            | 4
P2      | 1            | 5
P3      | 1            | 3
Quantum = 1
What would the Gantt chart look like?
Is it possible to make the processes have the same arrival time?
Yes, for normal definitions of "same time" (e.g. excluding "in the same Planck time quantum") it's possible for processes to have the same arrival time.
For example, imagine 100 tasks that sleep until midnight. When midnight occurs, a timer IRQ handler processes the list of tasks waiting to wake up and wakes all 100 tasks at the "same" time.
Now, for this example you could argue for a stricter meaning of "same time": the timer IRQ handler processes the list of tasks sequentially and adds them to the scheduler's queues sequentially, so they only wake "almost at the same time". In that case it's still possible to have N CPUs running in parallel (with a different timer for each CPU) that happen to wake (up to) N tasks at the same time.
Of course multiple processes can have the same arrival time, i.e. the time at which they arrive asking for the CPU to execute them. It is then the responsibility of the scheduler to handle and schedule them according to the appropriate scheduling algorithm.
When the arrival times of two or more processes are the same, RR scheduling falls back to a FCFS (first come, first served) approach for the tie. Here, with round-robin scheduling and quantum = 1, we have:
Gantt Chart
At time 0, no process has arrived, so the CPU is idle.
At time 1, we have P2 and P3, with P2 first; after one quantum RR switches to P3.
At time 3, we have all three processes in queue order P2, P3, P1, and the RR algorithm keeps switching between them until they complete their burst times.
All three finish execution at time 13 (the sketch below reproduces this schedule).
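To make the tie-breaking concrete, here is a small Python sketch of RR with quantum = 1 (my own code, not from any textbook). It uses the convention that a preempted process re-joins the ready queue just before a process arriving at that same instant:

```python
from collections import deque

procs = [("P1", 3, 4), ("P2", 1, 5), ("P3", 1, 3)]  # (name, arrival, burst)
quantum = 1

arrivals = sorted(procs, key=lambda p: p[1])
remaining = {name: burst for name, _, burst in procs}
ready, gantt, t, i = deque(), [], 0, 0

while remaining:
    # Admit everything that has arrived by time t (FCFS tie-breaking).
    # A process preempted at time t was already re-queued below, so it
    # sits ahead of a process arriving at exactly time t.
    while i < len(arrivals) and arrivals[i][1] <= t:
        ready.append(arrivals[i][0])
        i += 1
    if not ready:                        # CPU idle until the next arrival
        gantt.append(("idle", t, t + 1))
        t += 1
        continue
    name = ready.popleft()
    run = min(quantum, remaining[name])
    gantt.append((name, t, t + run))
    t += run
    remaining[name] -= run
    if remaining[name] > 0:
        ready.append(name)               # re-queue the preempted process
    else:
        del remaining[name]

for name, start, end in gantt:
    print(f"{start:2}-{end:2}: {name}")
```

Running it prints the idle slot at 0-1 followed by the slices P2, P3, P2, P3, P1, P2, P3, P1, P2, P1, P2, P1, ending at time 13, matching the walkthrough above.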
New to Stack Overflow, but I had a quick question about queues. I'm not entirely sure how to phrase the question, but here it goes:
Say you had a drive-in: there is an order queue and a pickup queue. Normally you order and pay at the order queue and then, once done, switch to the pickup queue. The average time between customer arrivals is 3 minutes, the average time in the order queue is 1 minute, and the average time in the pickup queue is 1 minute.
In a similar situation, you have an order queue, a pay queue, and a pickup queue. Now the average wait time for the order queue is half a minute, the average wait time for the pay queue is half a minute, and the average wait time for the pickup queue is still 1 minute, again with an average of 3 minutes between customers entering the store.
My questions are:
1) Would you expect the total average wait time for a customer to be higher in case 1 or case 2?
2) How would you expect the queues to differ if the average time between customers entering the store is increased or decreased?
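Not an answer key, but one way to explore both questions empirically is to simulate the two tandem queues. The sketch below is my own and makes assumptions the question leaves open: each stage is a single-server station, the quoted per-stage times are treated as mean service times, and both arrivals and service are exponential.

```python
import random
import simpy

def run(stage_means, mean_interarrival=3.0, horizon=200_000, seed=7):
    """Tandem queue: each customer passes through every station in order."""
    random.seed(seed)
    env = simpy.Environment()
    stations = [simpy.Resource(env, capacity=1) for _ in stage_means]
    waits = []

    def customer(env):
        start = env.now
        for station, mean in zip(stations, stage_means):
            with station.request() as req:
                yield req
                yield env.timeout(random.expovariate(1 / mean))
        waits.append(env.now - start)    # total time from arrival to pickup

    def source(env):
        while True:
            yield env.timeout(random.expovariate(1 / mean_interarrival))
            env.process(customer(env))

    env.process(source(env))
    env.run(until=horizon)
    return sum(waits) / len(waits)

# Case 1: order (1 min) then pickup (1 min).
# Case 2: order (0.5 min), pay (0.5 min), then pickup (1 min).
print("case 1:", run([1.0, 1.0]))
print("case 2:", run([0.5, 0.5, 1.0]))
# For question 2, re-run with different mean_interarrival values to see
# how arrival rate (and hence utilization) drives the waiting times.
```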
This was an exam question I could not solve, even after reading up on response time.
I thought the answer should be 220 and 120.
The effectiveness of RR scheduling depends on two factors: the choice of q, the time quantum, and the scheduling overhead s. If a system contains n processes and each request by a process consumes exactly q seconds, the response time (rt) for a request is rt = n(q + s). This means a response is produced after the whole CPU burst has been spent and the CPU has been handed over to the next process (i.e. after q + s). For case (i) below, with n = 10, q = 20 msec and s = 2 msec, this formula gives rt = 10 × (20 + 2) = 220 msec, which is how I got 220.
Assume that an OS contains 10 identical processes that were initiated at the same time. Each process contains 15 identical requests, and each request consumes 20 msec of CPU time. A request is followed by an I/O operation that consumes 10 sec. The system consumes 2 msec in CPU scheduling. Calculate the average response time of the first requests issued by each process for the following two cases:
(i) the time quantum is 20msec.
(ii) the time quantum is 10 msec.
Note that I'm assuming you meant 10ms instead of 10s for the I/O wait time, and that nothing can run on-CPU while an I/O is in progress. In real operating systems, the latter assumption is not true.
Each process should take time 15 requests * (20ms CPU + 10ms I/O)/request = 450ms.
Then, divide by the time quantum to get the number of scheduling delays, and add that to 450ms:
450ms / 20ms = 22.5, but it should actually be 23 because you can't have a partial reschedule. This gives the answer 450ms + 2ms/reschedule * 23 reschedules = 496ms.
450ms / 10ms = 45. This gives the answer 450ms + 2ms/reschedule * 45 reschedules = 540ms.
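As a quick check of that arithmetic, here is a throwaway Python snippet using the same method and the same 10 ms I/O assumption:

```python
import math

requests = 15
cpu_ms = 20        # CPU time per request
io_ms = 10         # assuming the question meant 10 ms, not 10 s
overhead_ms = 2    # scheduling overhead per reschedule

total_ms = requests * (cpu_ms + io_ms)              # 450 ms per process

for quantum_ms in (20, 10):
    reschedules = math.ceil(total_ms / quantum_ms)  # no partial reschedules
    print(f"q = {quantum_ms} ms: {total_ms} + {overhead_ms} * {reschedules}"
          f" = {total_ms + overhead_ms * reschedules} ms")  # 496, then 540
```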
I'm currently learning about interrupts but don't understand how you calculate the data rate for the question below. I have the answers, but I have no idea how to get there. If someone could please explain to me how it is calculated, it would be really appreciated.
Here is the question...
This question concerns the use of interrupts to handle the input and storage in memory of data arriving at an input interface, and the consideration of the data rates that can be achieved using this mechanism. In this particular question, the arrival of each new data item triggers an interrupt request to input and store the data item in a queue in memory. The question is about calculating the maximum data rate achievable in this scenario.

You are first required to calculate the time to respond to an interrupt from the interface, run the interrupt service routine (ISR) and return to the interrupted program. From this and the number of data bits input on each interrupt, you are required to calculate the maximum data rate, in bits per second, that can be handled. Below you are given: the number of clock cycles the CPU requires to respond to the interrupt and switch to the ISR, the number of instructions executed by the ISR, the average number of clock cycles executed per instruction in the ISR, the number of bits in the data item input on each interrupt, and the clock frequency. [You can assume that the CPU can be immediately interrupted again as soon as the ISR completes, but not before this.]
clock cycles to respond to interrupt = 15
instructions executed in ISR = 70
average clock cycles per instruction = 5
number of bits per data item = 32
clock frequency = 10 MHz
Questions
a) What is the time in microseconds to respond to an interrupt from the interface, run the interrupt service routine (ISR) and return to the interrupted program?
b) What is the maximum data rate in Kbits/second?
Answers
a) 36.5 - I understand this
b) 876.7 - ????
Because each ISR takes 36.5 µs, the absolute maximum number of ISRs that can happen in one second is 1 s / 36.5 µs ≈ 27,397.26.
In each ISR, 32 bits of data are input.
Therefore 27,397.26 × 32 bits ≈ 876,712.3 bits per second, i.e. about 876.7 Kbits/second.
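The same arithmetic in a few lines of Python (variable names are mine):

```python
CLOCK_HZ = 10_000_000        # 10 MHz
RESPOND_CYCLES = 15          # cycles to respond and switch to the ISR
ISR_INSTRUCTIONS = 70
CYCLES_PER_INSTR = 5
BITS_PER_ITEM = 32

isr_cycles = RESPOND_CYCLES + ISR_INSTRUCTIONS * CYCLES_PER_INSTR  # 365 cycles
isr_seconds = isr_cycles / CLOCK_HZ                                # 36.5 us

print(f"(a) {isr_seconds * 1e6:.1f} us per interrupt")             # 36.5
print(f"(b) {BITS_PER_ITEM / isr_seconds / 1000:.1f} Kbits/s")     # 876.7
```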