I have a problem which is similar to the assignment problem, described as follows:
The problem instance has a number of workers and a number of tasks. Each task can only be assigned to a subset of the workers, and each task must be assigned to exactly one worker. Tasks have different difficulties, so a worker will spend a different amount of time on each task. The requirement is to assign tasks as uniformly as possible: all workers should spend almost the same total time finishing their assigned tasks.
I can translate this into an integer programming problem where the objective is to minimize the variance of the total time spent by each worker (strictly speaking, that objective is quadratic rather than linear).
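For concreteness, here is a minimal sketch of one common way to keep the model linear: instead of the variance, minimise the maximum total time of any worker (a min-max, "makespan"-style proxy for balance). It uses MATLAB's intlinprog, and the time matrix t and eligibility matrix are invented placeholders, not data from the question.

```matlab
% Minimal sketch (not from the original post): balanced assignment as a MILP
% that minimises the maximum worker load, a linear stand-in for the variance.
% t(w,k)        = time worker w needs for task k         (made-up data)
% eligible(w,k) = true if task k may be given to worker w (made-up data)
nW = 4;  nT = 10;
t        = randi([1 20], nW, nT);
eligible = rand(nW, nT) > 0.2;

nX = nW*nT;                       % binary x(w,k), stored column-major, plus z = max load
f  = [zeros(nX,1); 1];            % minimise z

% Each task goes to exactly one worker.
Aeq = zeros(nT, nX+1);  beq = ones(nT,1);
for k = 1:nT
    Aeq(k, (k-1)*nW + (1:nW)) = 1;
end

% Every worker's total time stays below z:  sum_k t(w,k)*x(w,k) - z <= 0.
A = zeros(nW, nX+1);  b = zeros(nW,1);
for w = 1:nW
    for k = 1:nT
        A(w, (k-1)*nW + w) = t(w,k);
    end
end
A(:, end) = -1;

ubx = ones(nX,1);  ubx(~eligible(:)) = 0;     % forbid ineligible worker/task pairs
lb  = zeros(nX+1,1);  ub = [ubx; Inf];

x = intlinprog(f, 1:nX, A, b, Aeq, beq, lb, ub);
assignment = reshape(x(1:nX) > 0.5, nW, nT);  % assignment(w,k) true  =>  task k -> worker w
```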
Is this problem well studied? Are there algorithms that solve it more efficiently? (An approximation is acceptable.)
Thanks in advance!
I'm quite a beginner in AnyLogic, so my question may be a naive one.
What I'm trying to do is to create a model of M/M/1 with reneging, i.e. an agent waits in queue for a (random) amount of time and then exits the queue via timeOut.
Also, I've inserted timeMeasureStart and timeMeasureEnd in order to find the mean time spent in queue by the agents that left the queue via timeOut: MM1 with reneging.
I've tried constant, uniform, triangular and normal random times; the mean time (and the deviation) came out as the theory predicts.
But when I tried the exponential (and Weibull) distributions, the mean time was significantly less than the mean value of the distribution.
Could someone explain to me why this happens?
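Not an AnyLogic answer, just a plain MATLAB sketch of the same statistic. One likely reason for the effect: an agent only leaves via timeOut when its sampled patience runs out before it would have been served, and averaging only over those agents pulls the result below the distribution's mean for a skewed distribution such as the exponential (a constant timeout is unaffected, since every timed-out agent waited exactly that constant). The time-until-service W below is an invented stand-in, not the real M/M/1 waiting time.

```matlab
% Reproduce "mean time in queue of agents that left via timeOut" outside AnyLogic.
rng(1);
n  = 1e6;
mu = 5;                          % mean of the exponential patience
T  = exprnd(mu, n, 1);           % sampled patience (Statistics Toolbox)
W  = exprnd(4,  n, 1);           % hypothetical time until service would start (assumed)
reneged = T < W;                 % these agents leave via timeOut after waiting T
fprintf('mean of the patience distribution : %.2f\n', mu);
fprintf('mean recorded for timed-out agents: %.2f\n', mean(T(reneged)));
```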
I have a fairly large-scale optimization problem, although the problem itself is fairly simple: a quadratic-plus-linear objective with linear constraints, so it is solvable with cplexqp. Each problem has around 1300 variables, but I need to solve ~200 independent problems.
If I just loop 200 times and call cplexqp as usual, it takes about 16 minutes to solve all the problems. I considered parallel computing, so I changed the loop to parfor, and it now takes around 14 minutes. I would have expected a much bigger speedup, considering that we have 12 cores and 12 workers.
I made sure the parallel workers were already initialized (so MATLAB does not have to spend time starting them). I also verified in Task Manager that all 12 worker processes were active and that each was using a non-trivial amount of CPU.
My question is: do you think cplexqp has a locking mechanism, i.e. it can't be called with more than one problem at a time (from different threads)? What if I use separate MATLAB processes? (For example, I could save the inputs to a file and start several MATLAB sessions that consume the file, with each session knowing which subset of the problems to solve.)
16 minutes is not bad, but we may need to do this several times a day (with potentially different inputs), so I was wondering whether we can speed up the process even more.
TIA
The problem is that by default CPLEX will use all cores on your machine to solve one problem. So if you attempt to solve multiple problems in parallel then you are heavily oversubscribing the CPUs. This is likely to result in an overall slowdown.
So you should carefully select how many models you solve in parallel and how many cores you allow for each solve. If you use parfor, then you should use the Cplex.Param.threads parameter to limit the number of cores for a single solve, or alternatively select the simplex algorithm to solve your QPs.
Whether this parallelization gives you an overall speedup depends on how much the individual solves slow down when you limit their thread counts.
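A sketch of what this could look like, assuming the 200 problems live in cell arrays (Hs, fs, As, bs, lbs, ubs are placeholder names) and that the CPLEX options structure exposes the thread limit as a threads field; the exact field name may vary between CPLEX versions.

```matlab
% Cap CPLEX at one thread per solve so the 12 parfor workers do not
% oversubscribe the 12 cores.
opts = cplexoptimset('cplex');
opts.threads = 1;                       % one core per CPLEX solve (assumed field name)

nProb = numel(Hs);                      % Hs, fs, As, bs, lbs, ubs: your ~200 problems
xs = cell(nProb, 1);
parfor i = 1:nProb
    xs{i} = cplexqp(Hs{i}, fs{i}, As{i}, bs{i}, [], [], lbs{i}, ubs{i}, [], opts);
end
```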
I am struggling to find a way to implement an asynchronous evolutionary process in MATLAB.
I've already implemented a "standard" GA that evaluates the fitness of individuals in parallel using MATLAB's parfor routine. However, this also means that the workers in my parallel pool have to wait for the last worker to finish its assessment before I can create a new generation and continue with the parallel fitness evaluation. When fitness-assessment times are very heterogeneous, such an approach can involve significant idle time.
The solution suggested in the literature is what is sometimes called asynchronous evolution. There are no generations anymore; instead, every time a worker becomes idle we immediately let it breed a new individual and evaluate its fitness. As a result there is hardly any idle time, since no worker has to wait for the others to complete their fitness assessments. The only problem is that those parallel workers need to operate on the same population, i.e. they need to communicate with each other whenever they want to breed a new individual (because they need to select parents from a population that is constantly being altered by the workers).
Ideally you would control each worker separately, letting it access the joint population to select parents, evaluate the offspring's fitness and then replace a parent in the joint population. That is an ongoing iterative process in which parallel workers exchange and alter joint information while performing independent fitness evaluations.
My big question is: How can this be achieved in MATLAB?
Things I've already tried:
MATLAB's spmd functionality with the labSend and labReceive functions, but this also seems to involve waiting for the other workers.
creating jobs and assigning them tasks. This does not seem to work, as multiple jobs are processed sequentially on the cluster. Only tasks within a job use multiple workers, but that fails since you can't assign tasks dynamically (you always have to submit the whole job anew to the queue).
parfeval in a recursive while statement. I do not see how this would reduce waiting time (see the sketch after this question).
So does anyone know a way how to implement something like this in MATLAB?
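Regarding the parfeval point above: the usual way to make this asynchronous is to keep one evaluation in flight per worker and react with fetchNext, which returns as soon as any future completes, so no worker ever waits at a generation barrier. Below is a rough steady-state sketch; the fitness function and the breed/tournament operators are toy placeholders, and the "shared" population lives on the client, which serialises the breeding and replacement steps.

```matlab
% Rough sketch of an asynchronous steady-state GA on top of parfeval/fetchNext.
% fitfun is a cheap placeholder for the expensive, heterogeneous evaluation.
pool    = gcp();
nPop    = 40;  nGenes = 10;  nBudget = 500;
fitfun  = @(x) sum(x.^2);              % stand-in fitness (minimisation)

pop = rand(nPop, nGenes);              % the joint population (held on the client)
fit = zeros(nPop, 1);
for i = 1:nPop
    fit(i) = fitfun(pop(i,:));         % evaluate the initial population
end

% Keep exactly one evaluation in flight per worker.
pending = cell(pool.NumWorkers, 1);
for w = 1:pool.NumWorkers
    pending{w} = breed(pop, fit);
    futs(w)    = parfeval(pool, fitfun, 1, pending{w}); %#ok<SAGROW>
end

for done = 1:nBudget
    [slot, childFit] = fetchNext(futs);          % returns as soon as ANY worker finishes

    % Steady-state replacement: overwrite the current worst if the child is better.
    [worstFit, worstIdx] = max(fit);
    if childFit < worstFit
        pop(worstIdx,:) = pending{slot};
        fit(worstIdx)   = childFit;
    end

    % Immediately refill the freed slot with a new offspring.
    pending{slot} = breed(pop, fit);
    futs(slot)    = parfeval(pool, fitfun, 1, pending{slot});
end

function child = breed(pop, fit)
% Toy operators: tournament-select two parents, uniform crossover, Gaussian mutation.
    p1 = tournament(fit);  p2 = tournament(fit);
    mask        = rand(1, size(pop,2)) < 0.5;
    child       = pop(p1,:);
    child(mask) = pop(p2, mask);
    child       = child + 0.05*randn(size(child));
end

function idx = tournament(fit)
    c = randperm(numel(fit), 2);
    [~, best] = min(fit(c));
    idx = c(best);
end
```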
The classical assignment problem deals with assigning N agents to M jobs while minimising the total time spent on the jobs. It can be solved in polynomial time by the Hungarian algorithm, which takes a cost matrix C as input and returns the optimal list of assignments.
In my case I have the same problem, but with one difference: each job requires a pair of two workers to be assigned to it. The number of agents is chosen such that N is an even number in order for this to be possible.
I am fairly new to assignment problem related questions, so I am not sure how to tackle this problem.
How would one solve this problem?
Edit: Note that an agent can be assigned to at most one task; it cannot be assigned to multiple tasks. One can assume N (agents) = 2M (jobs), and otherwise introduce dummy agents or tasks.
Since there are twice as many workers as tasks, you can make the cost matrix square by doubling the number of tasks: duplicate each task (e.g. Task 1 becomes Task 1a and Task 1b) with the same costs. You then have an equal number of workers and tasks, and after running the Hungarian algorithm you can find your pairs of workers by looking at who was assigned to the two copies of each task.
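A hedged MATLAB sketch of that duplication trick, using matchpairs (R2019a+) as the Hungarian-style solver; the cost matrix is invented, M here is the number of jobs, and there are 2*M agents.

```matlab
% Duplicate every job column, solve the square assignment problem, then map
% the duplicated columns back to the original jobs to read off the pairs.
M  = 3;                          % number of jobs
C  = randi([1 10], 2*M, M);      % made-up agent-by-job cost matrix
C2 = kron(C, [1 1]);             % job j -> identical columns 2j-1 and 2j
match = matchpairs(C2, 1e6);     % [agent, duplicatedJob]; big penalty forces a full matching
job   = ceil(match(:,2) / 2);    % duplicated column back to its original job
pairs = accumarray(job, match(:,1), [M 1], @(a) {sort(a)'});  % the two agents per job
celldisp(pairs)
```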
I was revisiting operating-systems CPU job scheduling when a question suddenly popped into my mind: how the hell does the OS know the execution time of a process before it runs? I mean, in scheduling algorithms like SJF (shortest job first), how is the execution time of a process determined a priori?
From Wikipedia:
Another disadvantage of using shortest job next is that the total execution time of a job must be known before execution. While it is not possible to perfectly predict execution time, several methods can be used to estimate the execution time for a job, such as a weighted average of previous execution times.[1]
More on http://en.wikipedia.org/wiki/Shortest_job_next
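One standard instance of the "weighted average of previous execution times" mentioned in the quote is the exponential average found in OS textbooks, tau(n+1) = alpha*t(n) + (1-alpha)*tau(n). A small illustration with invented burst lengths and an invented initial guess:

```matlab
% Exponential averaging of CPU burst lengths to predict the next burst.
alpha  = 0.5;
bursts = [6 4 6 4 13 13 13];     % observed CPU burst lengths (made up)
tau    = zeros(1, numel(bursts)+1);
tau(1) = 10;                     % initial guess for the first burst
for n = 1:numel(bursts)
    tau(n+1) = alpha*bursts(n) + (1-alpha)*tau(n);
end
disp(tau)                        % tau(n+1) is the prediction for burst n+1
```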
Also, the OS can estimate the total time needed for each job by first estimating its CPI (cycles per instruction).
There is a weighted-average CPI for each job. For example, floating-point instructions weigh much more than fixed-point instructions, meaning they take more cycles (and hence more time) to execute. So a job consisting of fixed-point operations such as add or increment is expected to be shorter, and under shortest job first it would be executed before the floating-point-heavy job.
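A back-of-the-envelope illustration of the CPI argument, with invented instruction counts, CPI values and clock rate: estimated time = sum over instruction classes of (count x CPI) / clock rate.

```matlab
% CPI-weighted run-time estimate; all figures are invented.
clockHz = 2e9;                                  % 2 GHz clock
cpi     = [1 4];                                % [fixed-point, floating-point] CPI (assumed)
jobA    = [8e8 0];                              % mostly adds/increments
jobB    = [2e8 3e8];                            % heavy floating-point work
tA = dot(jobA, cpi) / clockHz;                  % ~0.40 s
tB = dot(jobB, cpi) / clockHz;                  % ~0.70 s
fprintf('job A ~ %.2f s, job B ~ %.2f s -> SJF runs A first\n', tA, tB);
```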