In my model, the Seize block seizes a worker agent, which moves a pallet from source to sink; this part is working fine.
But in the Release block I have configured the workers to go to their home node only if there are no more pallets. The workers do go back directly from the sink node to the source node, but there they get stuck at the source node and do not move the next pallet to the sink node.
Flow:
source -> seize -> moveToSink -> Release -> sink
This should work properly the way you built it; however, it is really difficult to determine the cause of the problem from the information you provided. Maybe you can add screenshots showing the parameters you have chosen in your blocks. In any case, make sure you have ticked "Attach seized resources" in the Seize block.
Related
A source is producing "productA". "productA" is moved by a transporter called "agv" from node1 to node2. On the way between the two nodes I want to add several tasks, for example a delay or a queue. These tasks can be implemented with Process Modelling Library blocks, but when using them you have to set an agent type. Which type do I use: "productA" or "agv"? When using "agv", an error occurs because that type does not match the source agent. When using "productA", the Process Modelling Library blocks are only executed for that agent, not for "agv". How do I deal with this? Is there a way to create a new agent which contains both "productA" and "agv" and overrides the source agent? And would that not conflict with the Transporter Fleet block?
You need to use ProductA as this is the main agent flowing through the blocks. It just "uses" an AGV to be moved around, similar to a resource.
If you want to stop and do things with the product and the AGV, you use the SeizeTransporter block first, this makes your product seize an AGV.
Then you can make them do anything by dropping in blocks: queues, delays, MoveByTransporter, e.g. SeizeTransporter -> Queue -> Delay -> MoveByTransporter -> ReleaseTransporter. Just make sure to finish with a ReleaseTransporter block when all is done.
On Rundeck, I want my job to do a lot of work and would like to slice the data by the number of nodes running the job.
Is there a way to query context variables to know how many nodes will run the current task, and which one we are currently running on?
For example:
On my current job execution there is a total of 10 nodes, and my shell step is running on node 3.
Is this provided somehow by the Rundeck context, or would I need to create a state file in an initial step?
The goal is to split the work on the amount of nodes running the task.
To see what is running on each node, just go to the Activity page (left menu) and open the execution (executions are shown by node by default, or just click the "Nodes" link).
To "split the work on the amount of nodes running the task", the best approach is to create separate jobs pointing at the desired nodes and then call them from a parent job using the Job Reference Step; take a look at this answer.
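As a complement to the answer above: Rundeck exposes the current node's name to script steps as the `RD_NODE_NAME` environment variable, but (as far as I know) it does not provide a built-in "node i of N" pair. One workaround, sketched below, is to pass the full target node list as a job option (here assumed to be named `nodes`, arriving as `RD_OPTION_NODES`) and derive the index and total locally. The option name and the item list are assumptions for illustration.

```shell
#!/bin/sh
# Sketch: derive "node INDEX of TOTAL" from the Rundeck context, assuming
# the job passes its full node list in a comma-separated option "nodes".
# RD_NODE_NAME is set by Rundeck for script steps; RD_OPTION_NODES is the
# assumed option value, e.g. "node1,node2,node3".

NODES="${RD_OPTION_NODES}"   # e.g. "node1,node2,node3"
ME="${RD_NODE_NAME}"         # name of the node this step runs on

# Total node count and this node's 1-based position in the list.
TOTAL=$(echo "$NODES" | tr ',' '\n' | wc -l | tr -d ' ')
INDEX=$(echo "$NODES" | tr ',' '\n' | grep -n -x "$ME" | cut -d: -f1)

echo "I am node $INDEX of $TOTAL"

# Slice the work: each node processes every TOTAL-th item, offset by INDEX.
i=0
for item in item1 item2 item3 item4 item5 item6; do
  i=$((i + 1))
  if [ $(( (i - 1) % TOTAL )) -eq $(( INDEX - 1 )) ]; then
    echo "processing $item"
  fi
done
```

This avoids any shared state file: every node computes its own slice from the same option value.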
I have been trying to get druid to fire a kill task periodically to clean up unused segments.
These are the configuration variables responsible for it
druid.coordinator.kill.on=true
druid.coordinator.kill.period=PT45M
druid.coordinator.kill.durationToRetain=PT45M
druid.coordinator.kill.maxSegments=10
From the above configuration, my mental model is: once ingested data is marked unused, a kill task will fire and delete segments older than 45 minutes while retaining 45 minutes' worth of data. period and durationToRetain are the config variables that confuse me; I'm not quite sure how to use them. Any help would be appreciated.
The caveat for druid.coordinator.kill.on=true is that segments are only deleted from whitelisted datasources, and the whitelist is empty by default. (As for the two confusing variables: druid.coordinator.kill.period controls how often the coordinator submits kill tasks, while druid.coordinator.kill.durationToRetain means only segments older than now minus that duration are eligible to be killed.)
To populate the whitelist with all datasources, set killAllDataSources to true. Once I did that, the kill task fired as expected and deleted the segments from S3 (COS). This was tested with Druid version 0.18.1.
Now, while the configuration properties above can be set when you build your image, killAllDataSources needs to be set through an API. It can also be set via the Druid UI.
In the coordinator dynamic configuration dialog, a modal appears that has a Kill All Data Sources option. Set it to True and you should see a kill task firing (under Ingestion -> Tasks) at the specified interval. It would be really nice to have this as part of runtime.properties or some common configuration file that we could set when building the Druid image.
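For reference, the same setting can be made without the UI, via the Coordinator dynamic configuration endpoint. The sketch below assumes a coordinator reachable at `coordinator-host:8081`; note that in some versions POSTing the dynamic config replaces it wholesale, so you may want to GET the current config first and re-POST it with only this field changed.

```shell
# Sketch: set killAllDataSources through the Coordinator dynamic
# configuration API instead of clicking through the console.
# "coordinator-host" is a placeholder for your coordinator address.
PAYLOAD='{"killAllDataSources": true}'

curl -X POST "http://coordinator-host:8081/druid/coordinator/v1/config" \
     -H 'Content-Type: application/json' \
     -d "$PAYLOAD"
```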
Use crontab; it works quite well for us.
If you want control over segment removal from outside Druid, you can use a scheduled task that runs at your desired interval and registers kill tasks in Druid. This gives you more control over your segments, since once they are gone, you cannot recover them. You can use this script as a starting point:
https://github.com/mostafatalebi/druid-kill-task
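To illustrate what such a scheduled script boils down to: it submits a kill task spec to the Overlord's task endpoint. In the sketch below, the Overlord address, the datasource name, and the interval are all placeholders; the task JSON follows Druid's kill task spec (type, dataSource, interval). A cron entry could run this every 45 minutes to match the retention discussed above.

```shell
# Sketch: submit a kill task for one datasource and time interval to the
# Overlord. "overlord-host", "my_datasource", and the interval are
# placeholders; segments killed this way are gone for good.
TASK='{
  "type": "kill",
  "dataSource": "my_datasource",
  "interval": "2020-01-01/2020-06-01"
}'

curl -X POST "http://overlord-host:8090/druid/indexer/v1/task" \
     -H 'Content-Type: application/json' \
     -d "$TASK"
```

Run from cron, e.g. `*/45 * * * * /opt/scripts/kill-segments.sh`, with the interval computed from the current time minus your retention window.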
Agent Comparison/ Priority for queue
I'm building a model in AnyLogic and I have a question about visualization.
I have a model that looks like this: Source -> delay -> queue (with preemption option that goes into a delay2) -> seize -> delay3 -> release -> sink.
What I would like is for the agents to enter the queue but visually keep moving along a circular path while waiting. In my model, I set the queue capacity to 1 and added a preemption option which leads into a delay where the agents move along a circular path. I can make them take a lap along the path this way, but since the queue capacity can't be less than 1, there is always one agent standing still (not taking the preemption exit). I would like all of them to keep moving on the circular path while in the queue. How can I fix this?
Also, since the Seize block has an embedded queue that starts where the previous queue ends, there are actually two agents standing still in the same place.
I'm trying to create a packaging cell for 5 items into 1 package. The 5 items are picked up by a resource (a worker) and placed into a packaging machine, which generates the package; a conveyor moves the packages from the machine to a buffer, and every once in a while (say every 20 packages) the worker stops picking items and goes to the buffer to put all the packages in a box, ideally ready to be shipped. Once the worker has completed the box, he has to go back to his pick-and-place task.
Now, my issues are:
When the worker stops picking the items from the rackSystem and goes to the buffer, the source blocks have to stop generating agents, otherwise the simulation will stop saying that there are no available cells in the rack;
When the worker gets back to his picking task the source blocks have to start generating agents again.
With the hold blocks in the picture I managed to stop the source blocks when the worker stops picking from the rack; however, I could not make the process start again once the box is complete. How can I do this?
Everything works fine except that once the worker returns to the picking location and takes the last 5 items from the rack, no more agents are allowed to enter the rack.
Actually, given this setup, I think you should do the following:
Let your sources create agents continuously. In reality (I suppose) things also do not stop coming in just because the worker is doing something else.
Gather all agents in an infinite queue, as you do
Remove the hold blocks
Instead, make your RackStore and RackPick blocks use the worker resource pool (tick the "use resources" checkbox in the block properties and select your resource pool)
You may also need to adjust the "customize resource choice" option to ensure that your worker only tries to store items when the RackSystem has space