I have a problem with the SEIZE block. Right now agents access resources by following a priority I set. Is it possible, however, that if more than one agent arrives with the same priority and asks for the same resources, AnyLogic stalls, not knowing which agent to grant the resources to?
If two or more agents with the same priority are asking for the same resource, the first one to ask will get it.
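In case it helps, this is roughly where the priority comes from (a sketch, assuming your agent type has a numeric field named priority, which is a hypothetical name):

```java
// Seize block, "Task priority" property: higher values are served first.
// Ties at the same value are resolved by arrival order (FIFO).
agent.priority
```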
The problem is explained in the image below.
Details of the agent I am using:
Details of the Service block in which I am using the Resource Pool:
Process Flow:
I used seize-move-release in order to move the agent with the resource, as shown in the figure below.
Problem:
Now the only problem is how the 2 agents will wait in the queue for their turn to go to the Delay section. The explanation is in the image below.
So it seems that most of this flow is wrong if your intent is for the resource to take the agent somewhere.
You are seeing the resource move alone because it is probably going back home.
In order to use the resource to move your agent, you need seize-move-release, not a Service block.
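A rough sketch of that chain (block and node names here are placeholders, not taken from your model):

```
Source -> Queue -> Seize (resourcePool) -> MoveTo (destination) -> Release -> Sink
```

If the Seize block is set to send the seized resources to the agent's location, the agent and the resource then travel together through the MoveTo block.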
I hope this helps
I am working on a ticketing model framework, where we receive requests for single or bulk user account creation in an SAP system. The request is an agent which has multiple agents (users) inside it.
So, as you can see in the image, we have:
Source - the request comes in here.
Delay(createRequestNo) - a request number is assigned to the request at this block.
Service(userCreation) - user(s) are created at this block.
Sink - the request (agent) goes out from this block.
resourcePool - a team of 15 who work on creating user accounts; it is linked to the service block.
Imagine a bulk request comes in to create 5 users.
How do the resources at the service block process all 5 user agents that are inside a Request agent?
You say your Request agent that flows through the process has a number of agents in it, but these don't need to be agents; they can be plain Java classes, or requests can simply carry the number of users to create.
It all depends on the granularity you require.
To answer your question: you can access the contents of the agent that travels through the process flow and use that to determine the delay or the number of resources to seize, as follows:
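For example (a minimal sketch, assuming your Request agent type exposes its users in a collection field named users, which is a hypothetical name), the Service block's properties could read:

```java
// Service block, "Delay time" property: scale the work with the number of
// users in the request (10 minutes per user is an illustrative value):
agent.users.size() * 10

// Service block, number of resource units to seize (one worker per user):
agent.users.size()
```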
Just be sure that the Agent type in the block's advanced settings is set to the agent type that you expect in this block. If you set the Source block to create a specific agent type, it will automatically update all the serially connected blocks for you.
Please note: if the user-creation process varies for each user to be created, you need a separate delay for each user, and thus it would be better to split the request into multiple agents, one per user, and have each of them seize, delay, and release a resource separately.
With your current logic, they will all be seized and released at the same time.
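If you do go the splitting route, one option (again a sketch; the Split block expression and the users field are assumptions about your model) is:

```java
// Split block, "Number of copies" property: the original plus size()-1
// copies gives one flow item per user in the request:
agent.users.size() - 1

// Each item then seizes 1 unit of resourcePool, delays for its own
// user-creation time, and releases. E.g. the Delay time could be:
triangular(5, 10, 20) // minutes per user account, illustrative values
```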
I'm developing an Azure DevOps extension with tasks in it. In one of the tasks, I'm starting a process and doing some configuration. In another task, I'm accessing the same process's API to consume it. This works perfectly fine, but I notice that after the job is done, my process is killed. I was planning to allow the user to do the configuration on an agent and then access it in another job or pipeline.
Is there a way to persist a process on an agent? I feel like the agent kills every child process it created during cleanup. Where can I find documentation on this?
Edit: I managed to find this thread, which talks about a certain Process.clean variable, but there isn't any more information about it, and I didn't find documentation on it.
Your feeling is correct. Agents clean up spawned processes when the job finishes, and that's by design. A single machine can have multiple agents on it, and multiple agents can be running tasks in parallel. What if you have one machine with 10 agents on it, and they all start up this process at once?
IMO, the approach you're taking is suspect. If you need to persist information across jobs, there are numerous ways to do so (for example, an output variable containing JSON) that don't involve spawning a service that continues running outside the scope of the job that started it.
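As a minimal sketch of the output-variable route (job, step, and variable names here are made up for illustration):

```yaml
jobs:
- job: Configure
  steps:
  - bash: |
      # Publish the configuration as a JSON output variable instead of
      # keeping a live process around (the payload here is illustrative).
      echo "##vso[task.setvariable variable=appConfig;isOutput=true]{\"port\": 8080}"
    name: setConfig

- job: Consume
  dependsOn: Configure
  variables:
    appConfig: $[ dependencies.Configure.outputs['setConfig.appConfig'] ]
  steps:
  - bash: echo "Using config: $(appConfig)"
```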
I want to monitor a private agent pool for items waiting in its queue. If an item is waiting (which means there are not enough agents to serve the requests), I want to add more VMs with agents. But I could not find any API endpoint that will tell me whether there are any items in the current pool queue.
I was not able to locate any API that can tell me how many tasks are currently queued for an agent pool, so I found a way around it:
Query https://dev.azure.com/{instanceName}/_apis/distributedtask/pools/{poolId}/agents - this shows me how many agents I have and how many of them are online.
Query https://dev.azure.com/{instanceName}/_apis/distributedtask/pools/{poolId}/jobrequests - this shows all jobs in this pool, including running ones (their status will be null).
So, if the number of jobs is lower than the number of online agents, I am OK. As soon as the number of jobs exceeds the number of online agents, I can use the SDK to add more agents to the VMSS (as far as the license permits, though). A sketch of this check follows.
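Here is the check in Java (the org URL and pool id are placeholders; this assumes a PAT with agent-pool read scope, and leaves the JSON parsing to whatever library you prefer):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class PoolMonitor {
    static final String ORG = "https://dev.azure.com/{instanceName}"; // placeholder
    static final int POOL_ID = 1;                                     // placeholder
    static final String PAT = System.getenv("AZDO_PAT");

    static String get(HttpClient client, String url) throws Exception {
        // PATs are sent as basic auth with an empty user name
        String auth = Base64.getEncoder().encodeToString((":" + PAT).getBytes());
        HttpRequest req = HttpRequest.newBuilder(URI.create(url))
                .header("Authorization", "Basic " + auth)
                .build();
        return client.send(req, HttpResponse.BodyHandlers.ofString()).body();
    }

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String agents = get(client, ORG + "/_apis/distributedtask/pools/" + POOL_ID + "/agents");
        String jobs   = get(client, ORG + "/_apis/distributedtask/pools/" + POOL_ID + "/jobrequests");
        // Parse both responses, count online agents and queued/running job
        // requests, then scale the VMSS out when jobs exceed online agents.
    }
}
```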
I need some guidance on how to properly build out a system that will be able to scale. I will give you some information about what I am trying to do and then ask my specific question.
I have a site where I want visitors to send some data to be processed. They input the data into a textarea or upload it in a file. Simple. The data is somewhat preprocessed on the client side before a POST request is made to a REST endpoint.
What I am stuck on is this: what is a good way to take this posted data, store it, and associate an ID with it that references the user, given that I cannot process the data fast enough to return it to the user in a reasonable amount of time?
This question is a bit vague and open to opinion, I admit. I just need a push in the right direction to keep moving. What I have been considering is throwing the data onto a message queue and having some workers process it elsewhere; when the data is processed, alert the user as to where to find it, with some sort of link to an S3 bucket or just a URL to a file. The other idea was to run a request for each item against another endpoint that already processes individual records, in some sort of loop on the client side. The problem with that idea is as follows:
Processing the data may take anywhere from 30 minutes to 2 hours, depending on the amount they want processed. It's not ideal for the user to just sit there and wait for it to finish, so I have mostly ruled this out.
Any guidance would be very much appreciated as I don't have any coworkers to bounce things off of, nor do I know many people with the domain knowledge that I could freely ask. If this isn't the right place to ask this, could you point me in the right direction as to where it should be asked?
Chris
If I've got you right, your pipeline is:
Accept item from user
Possibly preprocess/validate it (?)
Put into some queue
Process data
Return result.
You may use one or several queues at stage (3). An entity from a user gets added to one of the queues. If it's big enough, it can be stored in S3 or similar storage, with only info about it put into the queue: a link, the date added, and a user id (or email or the like). Processors can then pull items from the queue and give feedback to users.
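A minimal sketch of that hand-off, using an in-process BlockingQueue as a stand-in for a real broker (SQS, RabbitMQ, and so on; all names here are illustrative):

```java
import java.time.Instant;
import java.util.UUID;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class QueueSketch {
    // Only a small descriptor goes on the queue; the payload itself lives
    // in S3 or similar storage, referenced by URL.
    record WorkItem(String id, String userId, String payloadUrl, Instant added) {}

    static final BlockingQueue<WorkItem> queue = new LinkedBlockingQueue<>();

    // Called by the REST endpoint: enqueue a descriptor and hand the id back
    // to the user so they can be told where to find the result later.
    static String accept(String userId, String payloadUrl) throws InterruptedException {
        String id = UUID.randomUUID().toString();
        queue.put(new WorkItem(id, userId, payloadUrl, Instant.now()));
        return id;
    }

    // Worker loop, running on separate machines in the real system: pull an
    // item, process it, then notify the user (e.g. a link to the result file).
    static void worker() throws InterruptedException {
        while (true) {
            WorkItem item = queue.take();
            // ... fetch payload from item.payloadUrl(), process, upload result ...
            System.out.println("Finished " + item.id() + " for user " + item.userId());
        }
    }
}
```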
If you have no strict ordering requirements, things get much simpler: you don't need any synchronization between them. Treat all the components (upload acceptors, queues, storage, and processors) as independent pools of processes. Monitor each pool separately. If there is a bottleneck somewhere, add machines to that pool.