AnyLogic: assemble multiple agents at once in Assembler block - simulation

Is it possible to assemble multiple agents at once in an "Assembler" block? I mean that if an agent is composed of two types of parts, there are three parts of each type, and we have enough resources (e.g. 3 workers), is it possible to assemble these three products simultaneously? In my model they are assembled one by one, and there is no option like the queue capacity that does the same job in the "Service" block.
Any ideas?

The Assembler always works one at a time. To have this work in parallel, the model would need to contain 3 separate Assemblers with some sort of routing logic to ensure that parts are spread across the assemblers and don't all end up in one block to be processed serially.
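If you take the 3-assembler route, a small piece of routing logic can spread the parts evenly. A minimal sketch, assuming a SelectOutput5 block set to route by exit number and a counter variable on Main (the variable and function names are illustrative, not AnyLogic defaults):

```java
// Hypothetical Main-level variable used for round-robin routing:
int nextAssembler = 0;

// Helper function on Main, called from the SelectOutput5 "Exit number" field;
// it cycles 1, 2, 3, 1, ... so parts are spread across the three Assemblers.
int nextExit() {
    nextAssembler = nextAssembler % 3 + 1;
    return nextAssembler;
}
```

An alternative is to route each part to the assembler with the shortest input queue, which copes better when assembly times vary.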

Related

Method to create a list of taskings for resource to work on when resource becomes idle

[Image illustrating the point of freezing]
Context:
I am creating a scalable model for a production line to increase the Man Machine Optimization ratio. I will be scaling the model so that an operator (resource) works on multiple machines (of the same type). During the process flow at a machine, the operator will be seized and released multiple times for different taskings.
Problem:
Entire process freezes when the operator is being seized at multiple seize blocks concurrently.
Thoughts:
Is there a way to create a list where taskings are added whenever the resource is currently seized? The resource would then work through the list of taskings whenever it becomes idle. Any other methods to resolve this issue are also appreciated!
If this is going to become a complex model, you may want to consider using a pure agent-based approach.
Your resource has a LinkedList of JobRequest agents that are created and sent by the machines when necessary. They are sorted by some priority.
The resource then simply does one JobRequest after the next.
No ResourcePools or Seize elements required.
This is often the more powerful and flexible approach, as you are no longer bound to the process blocks. But obviously, it needs good control and testing from you :)
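A minimal Java sketch of that pattern, with all class, field and method names invented for illustration (they are not part of any AnyLogic API):

```java
import java.util.Comparator;
import java.util.LinkedList;

// One tasking raised by a machine; the operator works these off by priority.
class JobRequest {
    final String machineId;  // which machine raised the request
    final int priority;      // lower value = more urgent
    final double duration;   // how long the tasking takes (model time units)

    JobRequest(String machineId, int priority, double duration) {
        this.machineId = machineId;
        this.priority = priority;
        this.duration = duration;
    }
}

// The operator keeps its own backlog instead of being seized via a ResourcePool.
class Operator {
    private final LinkedList<JobRequest> backlog = new LinkedList<>();
    private boolean busy = false;

    // Machines call this whenever they need the operator.
    void submit(JobRequest job) {
        backlog.add(job);
        backlog.sort(Comparator.comparingInt(j -> j.priority));
        if (!busy) startNext();
    }

    // Pull the next request off the sorted backlog. In an AnyLogic model this
    // would typically be triggered by an event or statechart transition when
    // the current tasking finishes, rather than called directly.
    void startNext() {
        JobRequest job = backlog.poll();
        if (job == null) { busy = false; return; }
        busy = true;
        // ... perform the tasking for job.duration, then call startNext() again
    }
}
```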
Problem: Entire process freezes when the operator is being seized at multiple seize blocks concurrently.
You need to explain your problem better: it is not possible to "seize the same operator at multiple seize blocks concurrently" (unless you are using a resource choice condition or similar to try to 'force' seizing of a particular resource --- even then, this is more accurately framed as 'I've set up resource choice conditions which mean I end up having no valid resources available').
What does your model "freezing" represent? For example, it could just be a natural consequence of having no resources available, especially if you have long delay times or are using Delay blocks with "Until stopDelay() is called" set --- i.e., you are relying on events elsewhere in your model to free agents (and seized resources) from blocks, and an incorrect model design might mean those events never happen in some circumstances. (If your model is "freezing" because no resources are available, it should 'unfreeze' when one becomes available.)
During the process flow at a machine, the operator will be seized and released multiple times for different taskings.
You can do this bit by breaking down the actions at a machine into a number of Seize/Delay/Release actions with different characteristics (or a process flow that loops around a set of these, driven by some data, if you want it to be more flexible / data-driven).
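For the data-driven variant, one way to sketch it (the parameter names and block wiring below are assumptions chosen only to illustrate the idea): the part agent carries a list of tasking durations and an index, the Delay block reads the current entry, and a SelectOutput loops back to the Seize block while taskings remain.

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical data carried by the part agent; in AnyLogic these would be
// parameters/variables on the agent type rather than a hand-written class.
public class Part {
    public List<Double> taskDurations = Arrays.asList(2.0, 5.0, 1.5); // minutes
    public int taskIndex = 0;
}

// Delay block "Delay time" expression for the current tasking:
//   agent.taskDurations.get(agent.taskIndex)
//
// Release block "On exit" action (advance to the next tasking):
//   agent.taskIndex++;
//
// SelectOutput condition that loops back to the Seize block:
//   agent.taskIndex < agent.taskDurations.size()
```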

How to set service/delay time based on different source and parameter

I am modeling a scenario where my source is incoming orders, but orders may have different characteristics, such as the number of lines, units and SKUs on the order. Based on these characteristics, my service/delay time may differ. For example, my service time may be 1s*lines + 5s*units + 30s*SKUs. How can I set up my source and delay block to model this scenario?
Create an agent type for orders, add parameters for #SKUs etc.
In your delay block's duration field, use agent.numSKU to refer to a parameter numSKU in your agent type Order.
In your source, make sure it creates Order agents (not the default agents).
Lots of example models do this, please follow some of the basic tutorials and check the example models :)
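As a concrete sketch (parameter names like numLines are assumptions, not defaults), the Order agent type carries the order characteristics and the Delay block combines them:

```java
// Hypothetical Order agent type; in AnyLogic you would normally create this
// in the IDE and add numLines, numUnits and numSKU as parameters.
public class Order {
    public int numLines;
    public int numUnits;
    public int numSKU;
}

// Delay block "Delay time" expression (in seconds), mirroring the
// 1s*lines + 5s*units + 30s*SKUs rule from the question:
//   1.0 * agent.numLines + 5.0 * agent.numUnits + 30.0 * agent.numSKU
```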

Multi-instance and Loop in BPMN

I am trying to model a certain behaviour where a couple of activities in different swimlanes are supposed to be processed in a loop. Now, BPMN uses tokens to illustrate the flow and the paths taken. I wonder how such tokens work in the case of loops. Does every activity iteration create a token which consequently travels through the connected activities?
E.g. let's say Activity1 will be performed in a loop 10 times. Will that create 10 tokens, each of which travels through the remaining activities of the process? Such behaviour would be undesirable; however, if I am not mistaken, multi-instance activities work that way.
The only solution that comes to mind which would comply with the BPMN specification would be to create a Call activity for the whole block of activities and then run the Call activity in a loop.
Can anyone clarify for me the use of loops and multi-instances in BPMN from the view of tokens?
Thank you in advance!
Based upon my reading of the documentation (https://www.omg.org/spec/BPMN/2.0/PDF), the answer from @qwerty_so does not seem to conform to the standard, although in part this seems to be because the question itself is imprecise, or at least underspecified.
A token (see glossary) is simply an imaginary object that represents the flow unit in the process diagram. There are at least three different types of loops specified in the standard, which suggest different implications for the flow unit.
Sections 13.2.6 and 13.2.7 describe the Loop Activity and Multiple Instances Activity respectively. While the latter, on its face, might not seem like a loop, the standard defines attributes of the activity that suggest otherwise, including MultiInstanceLoopCharacteristics and its loopCardinality Expression.
In the former case, it seems that the operational semantics suggest a single flow unit that repeats multiple times according to some policy or even unbounded.
In the latter case, the activity has "multiple instances spawned," including a parallel variant.
That multiple instances can flow forward in parallel, on its face, suggests that the system must at least allow for the possibility of spawning multiple tokens (or conceptually splitting the original token) to support multiple threads proceeding simultaneously along different paths.
That said, the Loop Activity (13.2.6) appears to support the OP's desired semantics.

How many IDL files are needed when there are multiple components in a distributed system?

This is the scenario.
There are three components, A, B and C.
Component A can access methods of component B.
Component B can access methods of component A and component C.
Component C can access methods of component B.
To implement this scenario how many interface description language (IDL) files are needed? How many stubs and skeletons do we require?
I was thinking four IDL files. But can we have multiple stubs and skeletons?
Depending on the size of your services/components (number of methods), I would try to split them into logical units rather than basing the split on the number of computers you want to connect.
For example, it can make sense to have two different services implemented by the same server: One for internal purposes and one that is to be published. Or a service is split into two parts, because these two parts do not really belong together logically, although they are for some reason implemented by the same component. All of these services may or may not share the same IDL file.
Last but not least, if you still want to minimize the number of files, keep in mind that declaring, publishing and knowing about a certain service interface does not imply that a given component must implement the service, must be able to reach the service, wants to use the service, or is allowed to call it.
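To make the "split by logical unit" idea concrete, here is a minimal Java sketch (interface and class names are invented): the same server component implements one internal-purpose interface and one published interface, and each interface could live in its own IDL file or share one.

```java
// Internal-purpose service, not advertised to other components.
interface InternalMaintenance {
    void rebuildIndex();
}

// Published service that other components are allowed to call.
interface PublicCatalog {
    String lookup(String sku);
}

// One server component implementing both logical services.
class CatalogServer implements InternalMaintenance, PublicCatalog {
    @Override public void rebuildIndex() { /* internal housekeeping */ }
    @Override public String lookup(String sku) { return "item:" + sku; }
}
```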

How does one represent multiple threads in a flow chart

I have been tasked with creating a flow chart for some client server and start up processes in our organizations software. A lot of our processes run concurrently as they have no impact on one another. How is this traditionally represented in the flow chart?
I was thinking that flowcharts aren't really intended for this, but as it turns out, there actually is a notation for concurrency. Wikipedia says:
Concurrency symbol
Represented by a double transverse line with any number of entry and exit arrows. These symbols are used whenever two or more control flows must operate simultaneously. The exit flows are activated concurrently, when all of the entry flows have reached the concurrency symbol. A concurrency symbol with a single entry flow is a fork; one with a single exit flow is a join.
I did some looking around on Google Images and found this notation: [image of the double-bar fork/join symbol]. But this will only apply to a specific type of parallelism (what if you don't spawn all your threads at once?), and it won't apply to a multiprocess model at all. In the case of a multiprocess model, I would just make a separate flowchart for each process.
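To relate the concurrency symbol to code, here is a minimal Java sketch (the thread contents are placeholders): starting the threads corresponds to the fork, and joining them corresponds to the join, since the single exit flow only continues once all entry flows have arrived.

```java
public class ForkJoinFlow {
    public static void main(String[] args) throws InterruptedException {
        Thread clientStartup = new Thread(() -> System.out.println("client startup"));
        Thread serverStartup = new Thread(() -> System.out.println("server startup"));

        clientStartup.start();  // fork: one entry flow, two concurrent exit flows
        serverStartup.start();

        clientStartup.join();   // join: wait until every entry flow has arrived
        serverStartup.join();

        System.out.println("single exit flow continues");
    }
}
```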
Examples speak louder than words! See the flow chart in a paper.