AnyLogic: How do customers choose the smallest queue? - simulation

If we have 2 queues, we can simply use a SelectOutput block: if (queue1.size() < queue2.size()) go to Queue1, else go to Queue2.
But what if we have 12 queues?
Using only if/else will be a nightmare. So what should our approach be?
Note:
Going through all the queues with a for loop could be the answer. If that's possible, then how?

I do not know the purpose of your conditions, i.e. whether you need the smallest size, the biggest, etc.
You can probably use a PriorityQueue; the complexity will be higher, but the code will look cleaner.
How do I use a PriorityQueue?
Each time you poll from the PQ you can choose whether the smallest or the biggest (in size, I assume) comes first, and you insert it back right after, so it can be used again if needed.
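A minimal sketch of that idea in plain Java; the lists here are hypothetical stand-ins for whatever holds your customers (in AnyLogic it would be the Queue blocks themselves):

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

public class SmallestQueueDemo {
    public static void main(String[] args) {
        List<List<String>> queues = new ArrayList<>();
        for (int i = 0; i < 12; i++) queues.add(new ArrayList<>());

        // order the queues by current size, smallest first
        PriorityQueue<List<String>> bySize =
                new PriorityQueue<>(Comparator.comparingInt(List::size));
        bySize.addAll(queues);

        // poll() hands you the smallest queue; re-insert it after routing a
        // customer so its new size is taken into account next time
        List<String> smallest = bySize.poll();
        smallest.add("new customer");
        bySize.add(smallest);
    }
}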
If you still want to do this with a loop, please give more details on what you are trying to achieve, since it's kind of hard to guess.
Good luck!

You can add all the queues into a collection and then do this:

Queue queue = top(collection, q -> -q.size());

and route the agent to that queue. top() returns the element of the collection with the largest value of the expression, so negating size() picks the queue with the fewest agents.
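If you prefer the for loop mentioned in the question, the same selection can be written by hand; this sketch assumes you have put the 12 Queue blocks into a collection named queues:

Queue smallest = null;
for (Queue q : queues) {
    // keep the block currently holding the fewest agents
    if (smallest == null || q.size() < smallest.size())
        smallest = q;
}
// then route the incoming agent to "smallest"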

Related

How to determine when to start a counter to ensure it never catches the previous counter

I have a problem where I have several events occurring in a project. The events happen semi-concurrently: they do not start at the same time, but multiple can still be in progress at once.
Each event is a team of people working on a linear task, starting at the beginning and then working their way to the end. Their progress is based on a physical distance.
I essentially need to figure out each event's start time so that no two teams are at the same location, or passing each other, at any point.
I am trying to program this in MATLAB so that the output would be the start and end time for each event. The idea would be to optimize the total time taken for the project.
I am not sure where to begin with something like this so any advice would be greatly appreciated.
If I understand correctly, you just want to optimize the "calendar" of events with limited resources (i.e. space/teams).
This kind of problem is NP-hard, and there is no "easy" way to search for the best solution.
You here have two options:
Greedy-like algorithm: you will have your solution in a reasonable time, but it won't necessarily be the best one.
Brute-force-like algorithm: you will find the best solution, but maybe not in the time you need it.
Usually, if the number of events is low you can go for the 2nd option, but if not, you may need to go for the first one.
No matter which one you choose, the first thing you will need to do is compute whether a solution is valid. What does this mean? It means checking, for every event, whether it collides with others in time, space, and teams.
So let's imagine the problem of making the calendar for a university. There you have to think about:
Students
Teacher
Classroom
So for each event I have to check whether another event has the same students, teacher, or classroom at the same time. First I would find the events that overlap in time with the current event, then compare the current event against those.
Once you have this done you can just write a greedy algorithm that places events in time, checking at each placement that it doesn't collide with any event already placed.
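The asker mentioned MATLAB, but the idea is language-independent; here is a toy sketch of that greedy placement in Java, using teacher and classroom from the university example as the clashing resources (all names and values are invented for illustration):

import java.util.ArrayList;
import java.util.List;

class Event {
    int start;                                // chosen start time (what we are solving for)
    final int duration, teacher, room;        // the resources that must not clash

    Event(int duration, int teacher, int room) {
        this.duration = duration;
        this.teacher = teacher;
        this.room = room;
    }

    boolean collidesWith(Event other) {
        boolean overlapInTime = start < other.start + other.duration
                             && other.start < start + duration;
        boolean sharedResource = teacher == other.teacher || room == other.room;
        return overlapInTime && sharedResource;
    }
}

public class GreedyScheduler {
    public static void main(String[] args) {
        List<Event> pending = List.of(
                new Event(3, 1, 1), new Event(2, 1, 2), new Event(4, 2, 1));
        List<Event> placed = new ArrayList<>();

        for (Event e : pending) {
            e.start = 0;
            boolean moved = true;
            while (moved) {                   // bump the start time past every blocker
                moved = false;
                for (Event p : placed) {
                    if (e.collidesWith(p)) {
                        e.start = p.start + p.duration;
                        moved = true;
                    }
                }
            }
            placed.add(e);
            System.out.println("event starts at t=" + e.start);
        }
    }
}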

What is the smallest unit of work that is sensible to parallelize with actors?

While Scala actors are described as lightweight, and Akka actors even more so, there is obviously some overhead to using them.
So my question is: what is the smallest unit of work that is worth parallelising with actors (assuming it can be parallelised)? Is it only worth it if there is some potential latency or a lot of heavy calculation?
I'm looking for a general rule of thumb that I can easily apply in my everyday work.
EDIT: The answers so far have made me realise that what I'm interested in is perhaps actually the inverse of the question that I originally asked. So:
Assuming that structuring my program with actors is a very good fit, and therefore incurs no extra development overhead (or even incurs less development overhead than a non-actor implementation would), but the units of work it performs are quite small - is there a point at which using actors would be damaging in terms of performance and should be avoided?
Whether to use actors is not primarily a question of the unit of work; their main benefit is to make concurrent programs easier to get right. In exchange for this, you need to model your solution according to a different paradigm.
So, you need to decide first whether to use concurrency at all (which may be due to performance or correctness) and then whether to use actors. The latter is very much a matter of taste, although with Akka 2.0 I would need good reasons not to, since you get distributability (up & out) essentially for free with very little overhead.
If you still want to decide the other way around, a rule of thumb from our performance tests might be that the target message processing rate should not be higher than a few million per second.
My rule of thumb, for everyday work, is that if it takes milliseconds then it's potentially worth parallelizing. Although the transaction rates are higher than that (usually no more than a few tens of microseconds of overhead), I like to stay well away from overhead-dominated cases. Of course, it may need to take much longer than a few milliseconds to actually be worth parallelizing. You always have to balance the time taken writing more code against the time saved running it.
If no side effects are expected in the work units, then it is better to decide about work splitting at run time:
protected T compute() {
    // solve directly when the remaining range is small enough, or when the
    // local work queue already holds enough stealable tasks
    if (r - l <= T1 || getSurplusQueuedTaskCount() >= T2)
        return problem.solve(l, r);
    // otherwise decompose into subtasks (fork one half, compute the other)
}
Where:
T1 = N / (L * Runtime.getRuntime().availableProcessors())
N = size of the work in units
L = 8..16, a load factor, configured manually
T2 = 1..3, the max length of the work queue after all stealing
Here is a presentation with many more details and figures:
http://shipilev.net/pub/talks/jeeconf-May2012-forkjoin.pdf
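For reference, a self-contained sketch of that pattern, summing a long array with a RecursiveTask; the concrete threshold values here are assumptions, to be tuned per the formulas above:

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class SumTask extends RecursiveTask<Long> {
    private static final int T1 = 10_000;   // sequential threshold (tune per workload)
    private static final int T2 = 3;        // max surplus queued tasks

    private final long[] a;
    private final int l, r;                 // half-open range [l, r)

    SumTask(long[] a, int l, int r) { this.a = a; this.l = l; this.r = r; }

    @Override
    protected Long compute() {
        // solve directly when the range is small or enough work is already queued
        if (r - l <= T1 || getSurplusQueuedTaskCount() >= T2) {
            long sum = 0;
            for (int i = l; i < r; i++) sum += a[i];
            return sum;
        }
        int mid = (l + r) >>> 1;            // decompose
        SumTask left = new SumTask(a, l, mid);
        left.fork();                        // left half runs asynchronously
        long rightSum = new SumTask(a, mid, r).compute();
        return rightSum + left.join();
    }

    public static void main(String[] args) {
        long[] a = new long[1_000_000];
        java.util.Arrays.fill(a, 1);
        long total = new ForkJoinPool().invoke(new SumTask(a, 0, a.length));
        System.out.println(total);          // prints 1000000
    }
}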

How to approximate processing time?

It's common to see messages like "Installation will take 10 min approx." in desktop applications. So I wonder how I can calculate an approximation of how much time a certain process will take. Of course I won't install anything, but I want to update some internal data, and depending on the user's usage this might take some time.
Is this possible in an iPhone app? How do Cocoa guys do this? Would it be the same way in iPhone apps?
Thanks in advance.
UPDATE: I want to rewrite/edit some files on disk. Most of the time these files are not the same size, so I cannot just time the first iteration and extrapolate the rest from it.
Is there any API that helps on calculating this?
If you have some list of things to process, each "thing" is a unit of work; it's usually better to measure a group of 10 or so "things" at a time. Your goal is to see how long it takes to process a single group and report the estimated time to completion.
One way is to create an NSDate at the start of each group and a new one at the end (the top and bottom of your for loop) for each group. Multiply the difference in seconds by however many groups you have left (minus the one you just processed) and that should be a reasonable estimate of the time remaining.
Of course this gets more complicated if one "thing" takes a lot longer to process than another; the above approach assumes all things take the same amount of time. In that case you may need to keep a moving average across the last n "things" (or groups thereof).
A more detailed response would require more details about your model and what work you're performing.
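The question asks about Cocoa, but the arithmetic is identical in any language; here is a small sketch of the moving-window estimate in Java (the sleep is a stand-in for real work, and all names are invented):

import java.util.ArrayDeque;
import java.util.Deque;

public class EtaEstimator {
    public static void main(String[] args) throws InterruptedException {
        int totalGroups = 50;
        int n = 5;                                  // window size
        Deque<Long> window = new ArrayDeque<>();    // last n group durations (ns)

        for (int g = 0; g < totalGroups; g++) {
            long start = System.nanoTime();
            Thread.sleep(20);                       // stand-in for real work
            long elapsed = System.nanoTime() - start;

            window.addLast(elapsed);
            if (window.size() > n) window.removeFirst();

            // average recent group time, multiplied by the groups remaining
            double avg = window.stream().mapToLong(Long::longValue).average().orElse(0);
            double etaSeconds = avg * (totalGroups - g - 1) / 1e9;
            System.out.printf("~%.1f s remaining%n", etaSeconds);
        }
    }
}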

How can I implement incr/decr on top of a key/value store?

I'm using a key-value store that doesn't support incr and decr, which is why I want to create this. I have used Redis and Memcached incr and decr, so, as mentioned in some of the answers, that is a perfect example of how I want incr and decr to behave; thanks to those who mentioned it.
The point of having an incr() function is that it's all internal to the store: you don't have to pull data out and push it back in.
What you're describing sounds like you want to put logic in your code that pulls the data out, increments it, and pushes it back in. While that's not very hard (I think I've just described how you'd do it), it does defeat the point somewhat.
To get the benefit you'd need to change the source of your key store. Might be easy.
But a lot of caches already have this. If you really need this for speed, perhaps you should find an alternate store like memcached that does support it.
Memcached has this functionality built in.
Edit: it looks like you're not going to get an atomic update without updating the source, as there doesn't appear to be a lock function. If there is one (and this is not pretty), you can lock the value, get it, increment it in your application, put it back, and unlock it. Suboptimal, though.
It kind of seems like without a compare-and-set you are out of luck. But it helps to consider the problem from another angle. For example, if you were implementing an atomic counter that shows the number of upvotes for a question, one way would be to have a "table" per question and to put a +1 for each upvote and a -1 for each downvote. To "get" the count you would sum the "table". For this to work I assume "tables" are inexpensive and that you don't care how long "get" takes to compute; you only mentioned incr/decr.
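Here is a single-process sketch of that "table per counter" idea in Java (names invented; in a real store each delta would be a row or sub-key under the counter's key):

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// One "table" per counter: incr/decr just append a delta, and get() sums
// the table. Appends never contend on a shared current value, at the cost
// of an O(n) read.
public class TableCounter {
    private final List<Integer> deltas = new CopyOnWriteArrayList<>();

    public void incr() { deltas.add(+1); }
    public void decr() { deltas.add(-1); }

    public int get() {
        return deltas.stream().mapToInt(Integer::intValue).sum();
    }

    public static void main(String[] args) {
        TableCounter upvotes = new TableCounter();
        upvotes.incr(); upvotes.incr(); upvotes.decr();
        System.out.println(upvotes.get());   // 1
    }
}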
If you wish to atomically increment or decrement an int value associated with a key (e.g. of type string), and if you know all of the keys in advance of having to perform the atomic operations on any of them, use a Dictionary<string, int[]> and pre-populate it with a single-item array for each key. It will then be possible to perform atomic operations (e.g. increment) on items via code like Threading.Interlocked.Increment(MyDict[keyString][0]);. If you need to deal with keys that are not known in advance, you may need a ConcurrentDictionary instead of Dictionary, but then you need to be careful when two threads try to simultaneously create the dictionary entry for the same key.
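For comparison, a rough Java analogue of the same pattern, where ConcurrentHashMap.computeIfAbsent sidesteps the racy entry-creation problem mentioned above (class and key names invented):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

public class AtomicCounters {
    private final ConcurrentHashMap<String, AtomicLong> counters =
            new ConcurrentHashMap<>();

    // computeIfAbsent creates the per-key counter atomically, so two threads
    // racing on a brand-new key still end up sharing one AtomicLong
    public long incr(String key) {
        return counters.computeIfAbsent(key, k -> new AtomicLong()).incrementAndGet();
    }

    public long decr(String key) {
        return counters.computeIfAbsent(key, k -> new AtomicLong()).decrementAndGet();
    }

    public static void main(String[] args) {
        AtomicCounters c = new AtomicCounters();
        c.incr("pageviews"); c.incr("pageviews"); c.decr("pageviews");
        System.out.println(c.incr("pageviews"));   // 2
    }
}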
Since increment and decrement are simple addition and subtraction operations that are commutative, what you need to implement is a PN-Counter. It is a CRDT (conflict-free replicated data type). Various examples of how to implement this on Riak are available around the web and on GitHub.
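A minimal single-process sketch of the PN-Counter shape in Java (real implementations replicate the two maps across nodes and merge by per-replica maximum; the replica ids here are invented):

import java.util.HashMap;
import java.util.Map;

// PN-Counter: two grow-only maps of per-replica counts. Increments go to P,
// decrements to N, and the value is sum(P) - sum(N). Merging two copies
// takes the per-replica max in each map, which is what makes it commutative.
public class PnCounter {
    private final Map<String, Long> p = new HashMap<>();  // increments per replica
    private final Map<String, Long> n = new HashMap<>();  // decrements per replica
    private final String replicaId;

    PnCounter(String replicaId) { this.replicaId = replicaId; }

    void incr() { p.merge(replicaId, 1L, Long::sum); }
    void decr() { n.merge(replicaId, 1L, Long::sum); }

    long value() {
        long pos = p.values().stream().mapToLong(Long::longValue).sum();
        long neg = n.values().stream().mapToLong(Long::longValue).sum();
        return pos - neg;
    }

    void merge(PnCounter other) {
        other.p.forEach((k, v) -> p.merge(k, v, Long::max));
        other.n.forEach((k, v) -> n.merge(k, v, Long::max));
    }

    public static void main(String[] args) {
        PnCounter a = new PnCounter("a"), b = new PnCounter("b");
        a.incr(); a.incr(); b.decr();
        a.merge(b);
        System.out.println(a.value());   // 1
    }
}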

What are some of the advantages/disadvantages of using SqlDataReader?

SqlDataReader is a faster way to process the results of a stored procedure. What are some of the advantages/disadvantages of using SqlDataReader?
I assume you mean "instead of loading the results into a DataTable"?
Advantages: you're in control of how the data is loaded. You can ask for specific data types, and you don't end up loading the whole set of data into memory all at the same time unless you want to. Basically, if you want the data but don't need a data table (e.g. you're going to populate your own kind of collection) you don't get the overhead of the intermediate step.
Disadvantages: you're in control of how the data is loaded, which means it's easier to make a mistake and there's more work to do.
What's your use case here? Do you have a good reason to believe that the overhead of using a normal (or strongly typed) data table is significantly hurting performance? I'd only use SqlDataReader directly if I had a good reason to do so.
The key advantage is obviously speed; that's the main reason you'd choose a SqlDataReader.
One potential disadvantage not already mentioned is that SqlDataReader is forward-only, so you can only go through the records once, in sequence; that's one of the things that allows it to be so fast. In many cases that's fine, but if you need to iterate over the records more than once, or add/edit/delete data, you'll need to use one of the alternatives.
It also remains connected until you've worked through all the records and closed the reader (of course, you can opt to close it earlier, but then you can't access any of the remaining records). If you're going to perform any lengthy processing on the records as you iterate over them, you may find that you impact other connections to the database.
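The question is about ADO.NET, but for illustration, the analogous forward-only, stay-connected pattern in Java/JDBC looks like this (the connection URL, table, and column names are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class StreamingRead {
    public static void main(String[] args) throws SQLException {
        // try-with-resources closes the reader and connection even on error,
        // which matters because the connection stays busy while you iterate
        try (Connection conn = DriverManager.getConnection("jdbc:...");  // placeholder URL
             PreparedStatement ps = conn.prepareStatement("SELECT id, name FROM customers");
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {                  // forward-only: one pass, in order
                process(rs.getInt("id"), rs.getString("name"));
            }
        }
    }

    static void process(int id, String name) {
        System.out.println(id + ": " + name);
    }
}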
It depends on what you need to do. If you get back a page of results from the database (say 20 records), it would be better to use a data adapter to fill a DataSet, and bind that to something in the UI.
But if you need to process many records, one at a time, use SqlDataReader.
Advantages: Faster, less memory.
Disadvantages: Must remain connected, must remember to close the reader.