How can we schedule cloudlets using the Round-Robin scheduling algorithm if the quantum time = 5?

How can we schedule cloudlets using the Round-Robin scheduling algorithm if they are sorted according to arrival time?
For example, there are 2 VMs and 6 cloudlets. Cloudlet execution times = 12, 5, 10, 7, 15, 18; quantum = 5. Can you guide me through an example? Thank you.

CloudSim doesn't provide a CloudletScheduler that considers quantum time (time slices).
There is an answer for a similar question that explains exactly what you are looking for. Check it here. It uses CloudSim Plus, a modern version of CloudSim that provides lots of exclusive features.
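Outside CloudSim, the mechanics of the question's example can be worked through with a plain simulation. This is a hypothetical sketch (the class and method names are mine, not CloudSim API), assuming cloudlet i is dispatched to VM i % 2 and each VM time-slices its own FIFO queue with quantum 5:

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class RoundRobinDemo {

    /**
     * Simulates round-robin time slicing with the given quantum.
     * Cloudlets are dispatched to VMs round-robin (cloudlet i -> VM i % vms),
     * each VM keeps its own FIFO queue, and a cloudlet that still has work
     * left after its slice goes to the back of that queue.
     * Returns the finish time of each cloudlet.
     */
    static int[] simulate(int[] lengths, int quantum, int vms) {
        int[] finish = new int[lengths.length];
        for (int v = 0; v < vms; v++) {
            Queue<int[]> queue = new ArrayDeque<>(); // {cloudlet id, remaining time}
            for (int i = v; i < lengths.length; i += vms)
                queue.add(new int[] { i, lengths[i] });
            int time = 0;
            while (!queue.isEmpty()) {
                int[] task = queue.poll();
                int slice = Math.min(quantum, task[1]);
                time += slice;
                task[1] -= slice;
                if (task[1] == 0) finish[task[0]] = time; // done
                else queue.add(task);                     // back of the queue
            }
        }
        return finish;
    }

    public static void main(String[] args) {
        int[] finish = simulate(new int[] {12, 5, 10, 7, 15, 18}, 5, 2);
        for (int i = 0; i < finish.length; i++)
            System.out.println("Cloudlet " + i + " finishes at t=" + finish[i]);
    }
}
```

With the question's numbers, the finish times come out as 32, 5, 25, 17, 37 and 30 for cloudlets 0 through 5 (VM0 interleaves cloudlets 0, 2, 4; VM1 interleaves 1, 3, 5).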

Related

In Vert.x, why is the DEFAULT_EVENT_LOOP_POOL_SIZE = 2 * no. of cores?

Vert.x seems to create up to 2 * NUM_OF_CORES event loop threads by default.
And this seems to be a fairly old change (7+ years).
On a machine with 4 physical cores (8 logical cores with hyper-threading), it creates 16 event loop threads.
Shouldn't NUM_OF_CORES (i.e., 8 in above example) number of event loop threads be ideal?
The only explanation I could find was from Tim Fox (the original author of Vert.x):
we use 2 * number of cores by default - in practice this gives better results as OSes don't always distribute threads evenly across cores.
But a few load tests I did gave better results when I used 8 instead of 16. So I want to understand: under what conditions should the default give better results?
For CPU-bound calculations, having about the same number of threads as logical cores is good practice, because we want each thread to use as much CPU power as possible without interfering with other threads.
Usually Vert.x is not used for CPU-intensive computations; for the most common Vert.x use cases it can help to have some extra threads ready to be used when needed, rather than having to create new ones on the go.
Why not use 10 * NUM_OF_CORES threads, then? Because of the thread-creation overhead and the risk of creating too many unused threads (which would lower system performance). So this choice is (probably) the result of a tradeoff between thread responsiveness and wasted system resources.
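As a concrete reference point, the default derives directly from the core count, and it can be tuned through VertxOptions. A minimal sketch (the defaultEventLoopPoolSize helper is mine for illustration; Vert.x's real computation uses its own CPU sensor, which is usually equivalent to Runtime.availableProcessors()):

```java
public class EventLoopDefault {

    // Mirrors how Vert.x arrives at DEFAULT_EVENT_LOOP_POOL_SIZE:
    // twice the number of available processors.
    static int defaultEventLoopPoolSize() {
        return 2 * Runtime.getRuntime().availableProcessors();
    }

    public static void main(String[] args) {
        System.out.println("Default pool size here: " + defaultEventLoopPoolSize());

        // In a real Vert.x application (vertx-core on the classpath) you can
        // override the default and benchmark it, e.g.:
        //
        //   Vertx vertx = Vertx.vertx(new VertxOptions()
        //           .setEventLoopPoolSize(Runtime.getRuntime().availableProcessors()));
    }
}
```

If your own load tests consistently favour NUM_OF_CORES, overriding the default this way is entirely legitimate; the 2x value is a heuristic, not a hard requirement.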
Your benchmarks can produce bad results with 2 * NUM_OF_CORES for a variety of reasons, such as:
OS thread management (allocation time and context switches);
lack of system resources (a lot of other programs running alongside the one you are testing);
measurement issues (did the measurement start before the thread allocation? did the test last long enough to make the thread-creation time negligible?);
probably something else I can't figure out right now 😅
Hope it helped!

Does "concurrency" limit of 10 guarantee 10 parallel slice runs?

In ADF we can define a concurrency limit of up to 10. So, assuming we set it to 10, and slices are waiting to run (not waiting on a dataset dependency etc.), is there always a guarantee that at any given time 10 slices will be running in parallel? I have noticed that even after setting it to 10, sometimes only a couple of them are in progress, or maybe the UI doesn't show it properly. Is it subject to the resources available? But after all, it's the cloud; resources are virtually infinite. Has anyone noticed anything like this?
If there are 10 slices to be run in parallel and, for each of them, all their dependencies have been met, then 10 slices would run in parallel. Do raise an Azure support ticket if you do not see this happening and we will look into it. There may be a small delay in kicking all 10 off, but 10 should run in parallel.
Thanks, Harish

Completely Fair Scheduler vs Round Robin

The textbooks say that the major advantage of CFS is that it is very fair in allocating CPU to different processes. However, I don't understand how CFS with its RB-tree achieves a better form of fairness than a simple Round-Robin queue.
If we forget about CFS grouping and other features, which could also somehow be incorporated into a simple RR queue, can anybody tell me how CFS is fairer than RR?
Thanks in advance
I believe the key difference relates to the concept of "sleeper fairness".
With RR, each of the processes on the ready queue gets an equal share of CPU time, but what about the processes that are blocked/waiting for I/O? They may sit on the I/O queue for a long time, but they don't get any built-up credit for that once they get back into the ready queue.
With CFS, processes DO get credit for that waiting time, and will get more CPU time once they are no longer blocked. That helps reward more interactive processes (which tend to use more I/O) and promotes system responsiveness.
Here is a good detailed article about CFS, which mentions "sleeper fairness": https://developer.ibm.com/tutorials/l-completely-fair-scheduler/
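To make "sleeper fairness" concrete, here is a toy sketch of CFS's pick-next rule (class and field names are mine; the real kernel keeps runnable tasks in a red-black tree keyed by vruntime and also clamps a waking task's vruntime, which this ignores):

```java
import java.util.Collection;
import java.util.Comparator;
import java.util.List;

public class SleeperFairness {

    // Toy model: CFS always runs the runnable task with the smallest
    // vruntime, i.e. the one that has received the least CPU so far.
    static class Task {
        final String name;
        final long vruntime; // virtual runtime accumulated while on the CPU
        Task(String name, long vruntime) { this.name = name; this.vruntime = vruntime; }
    }

    static String pickNext(Collection<Task> runnable) {
        return runnable.stream()
                .min(Comparator.comparingLong(t -> t.vruntime))
                .map(t -> t.name)
                .orElse("none");
    }

    public static void main(String[] args) {
        // Two CPU hogs have been running and piling up vruntime...
        Task hogA = new Task("hogA", 120);
        Task hogB = new Task("hogB", 115);
        // ...while an I/O-bound task slept, so its vruntime lagged behind.
        Task interactive = new Task("interactive", 40);

        // The moment it wakes, its low vruntime puts it first in line,
        // whereas plain RR would just append it to the tail of the queue.
        System.out.println(pickNext(List.of(hogA, hogB, interactive)));
    }
}
```

That is the essence of the difference: RR's position in the queue forgets how little CPU a sleeper has actually consumed, while vruntime remembers it.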

Which is more efficient preemptive or nonpreemptive scheduler?

I am just learning about preemptive and nonpreemptive schedulers, so I was wondering: which is more efficient, a preemptive or a nonpreemptive scheduler? Or are they equally efficient? Or are they each specialized for certain tasks and efficient in their own way?
Use a non-preemptive scheduler if you want I/O and inter-thread comms to be slower than Ruby running on an abacus.
Use a preemptive scheduler if you want to be saddled with locks, queues, mutexes and semaphores.
[I've also heard that there are positive characteristics too, but you'll have to Google for that, since googling your exact title results in: 'About 55,900 results']
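Snark aside, the point about locks and mutexes is real: once a scheduler can preempt a thread between a read and a write, shared state needs explicit synchronization. A minimal Java sketch (the PreemptionCost class is mine, purely for illustration):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class PreemptionCost {

    // Two threads each bump a shared counter 100,000 times. Because a
    // preemptive scheduler may suspend a thread between the read and the
    // write of a plain `count++`, the shared state uses an AtomicInteger.
    static int runCounters() {
        AtomicInteger counter = new AtomicInteger();
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) counter.incrementAndGet();
        };
        Thread a = new Thread(work), b = new Thread(work);
        a.start(); b.start();
        try {
            a.join(); b.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return counter.get();
    }

    public static void main(String[] args) {
        // With the atomic, the total is exact however the threads interleave;
        // with a plain int, increments can be lost whenever a preemption
        // lands between a load and a store.
        System.out.println(runCounters()); // 200000
    }
}
```

Under a purely cooperative (non-preemptive) scheduler the plain `count++` would be safe, because a thread only yields at points it chooses; the price is that one thread blocking on I/O can stall everything else, which is the other half of the answer's jab.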

What is your experience with Sun CoolThreads technology?

My project has some money to spend before the end of the fiscal year and we are considering replacing a Sun-Fire-V490 server we've had for a few years. One option we are looking at is the CoolThreads technology. All I know is the Sun marketing, which may not be 100% unbiased. Has anyone actually played with one of these?
I suspect it will be of no value to us, since we don't use threads or virtual machines much and we can't spend a lot of time retrofitting code. We do spawn a ton of processes, but I doubt CoolThreads will be of help there.
(And yes, the money would be better spent on bonuses or something, but that's not going to happen.)
IIRC the CoolThreads technology refers to the fact that, rather than just ramping the clock speed ever higher to improve performance, they are now looking at multi-core processors with hyperthreading, effectively giving you loads of processors on one chip. Overall the processing capacity available is higher, but without the additional electrical power and air-conditioning requirements you would expect (hence "cool").
Its usefulness definitely depends on what you are planning to run on it. If you are running Apache with its multi-threaded core, it will love it, as it can run the individual response threads on the individual CPU cores. If you are simply running single-threaded processes you will get some performance increase over a single-CPU box, but not as great (any old-fashioned non-mod_perl/mod_python CGI processes would still be sharing the CPU a bit). If your application consists of one single-threaded process running maxed out on the box, you will get very little improvement over a single-core CPU running at the same speed.
Peter
Edit:
Oh, and for a benchmark: we compared a T2000 in our server farm to our current V240s (may have been V480s, I don't recall). The T2000 took the load of 12-13 of the older boxes in a live test without any OS tweaking for performance. As I said, Apache loves it :-)
Disclosure: I work for Sun (but as an engineer in client software).
You don't necessarily need multithreaded code to make use of these machines. Having multiple processes will make use of multiple hardware threads on multiple cores.
The old T1 processors (T1000 and T2000 boxes) did have only a single FPU, and weren't really suitable for tasks with much more than about 1% floating point. The newer T2 and T2+ processors have an FPU per core. That's probably still not great for massive floating point crunching, but is much more respectable.
(Note: Hyper-Threading Technology is a trademark of Intel. Sun uses the term Chip MultiThreading (CMT).)
We used Sun Fire T2000s for my last system. The boxes themselves far exceeded our capacity requirements in terms of processing power. For us the decision was based on the lower power consumption and space requirements. We successfully ran WebSphere 6, Oracle 10g and SunONE Directory Server on the same box.
My info may be a bit out of date (I last used these servers 2 years ago), but as I recall one big gotcha was that all the cores on a single CPU shared the same FPU, so if your code did a lot of floating point (we were doing GIS), the FPU was a massive bottleneck and you didn't get much benefit from the large number of threads.
For any process with high parallelism these machines (e.g. the T1000/T2000) are great for their cost. I've been running Oracle on them for about 18 months now and it works great.
If your task is a single-threaded/single-process one, then you'd be better off with a high-speed dual- or quad-core Intel machine.
If your application has lots of threads/lots of processes, then these machines will likely be great for it.
Best of all, Sun will send you one for 60 days to evaluate; that is what we did before committing. We ended up getting 2 T2000s and have recently purchased another 4 T1000s.
It hit me last night that our core processes aren't multi-threaded, but the machine in question does have a bunch of system processes that are. In particular, it acts as an NFS server. It sounds like running hundreds of processes will benefit from all those cores, as well.
I'll see if we can get a demo unit to test on first.
Sun has been selling the Niagara machines as all things to all comers. They do have their place: web services is the best deployment. We have run Oracle on some T2000s and it worked well for highly parallelized operations. But the machines fall flat on single-threaded operations, where performance is rather bad. If you have floating-point work to do, look elsewhere; even the newer chips with an FPU per core are inadequate. Also, these machines cannot take an enterprise-class pounding for long, and we've had reliability problems.
Multi-core technology is more hype than substance. Sandia National Labs researched it and found that four to eight cores is about the top end of usefulness, and that a 16-core chip has the same throughput as a dual-core chip. So a 16-core chip is a waste of a lot of money. Also, as the number of cores increases, the clock speed must decrease because of the thermal wall. Most manufacturers will probably settle on quad-core chips until memory technology improves (you can't keep 16 cores fed with memory, so most of the cores are stalled).
Finally, given the chaos at Sun, you'd do better to look elsewhere.