Restricted area object capacity based on available footprint - AnyLogic

What is a feasible way to have the restricted area object capacity be based on available footprint for the agents?
I have agents that are produced at the source and have a randomly assigned footprint. The delay object is inside a restricted area block. Instead of the capacity being based on the number of agents the block can hold, I would like it to be based on footprint availability. I have two parameters, initial sqft and true sqft. Initial sqft is what is available at the start (how much space is available at the facility). At the entry of the queue I subtract the agent's footprint from the true sqft, and at the exit I add the agent's footprint back to the true sqft. I put the true sqft update at the queue entry because I want the agent to wait in the queue if there isn't enough footprint for it to enter the delay. When I run the model, true sqft exceeds initial sqft, which should be impossible.
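A common cause of true sqft drifting above initial sqft is asymmetric bookkeeping: if the footprint is defined as a distribution expression rather than a value stored once on the agent, it is re-sampled at exit, so you add back a different number than you subtracted. A minimal sketch of one way to do the bookkeeping, assuming a plain double field footprint assigned once at the source, variables trueSqft and initialSqft on Main, and a Hold block named hold in front of the queue (all names illustrative):

```java
// Queue "On enter" action: reserve this agent's space exactly once.
// 'footprint' must be a stored double, not an expression that
// re-samples a distribution on every read.
trueSqft -= agent.footprint;
if (trueSqft <= 0)
    hold.block();      // no space left: stop admitting further agents

// Delay "On exit" action: release the same stored amount.
trueSqft += agent.footprint;
if (trueSqft > 0)
    hold.unblock();    // space freed: let waiting agents through again
```

Note this simple gate only blocks once trueSqft reaches zero, so one large agent can still overdraw the space slightly; if that matters, compare trueSqft against the footprint of the next waiting agent before unblocking.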

Related

AnyLogic, how to change the batch size dynamically?

I have a model where agents are created by an event, but in a random manner (uniform_discr(2, 4)). Each group of agents has its own unique number (batchNumber). The size of a group may differ, so the model creates batches of agents depending on the quantity of created agents (batchSize); in this case a batch may contain 2, 3, or 4 agents. Each batch obtains its volume depending on the volume of its agents (Resource_quantity).
How can I change the batch size and its volume dynamically?
I used the solution proposed here and used Wait and Batch blocks.
The problem is that when running the model I receive the error "batch capacity must be greater than 0".
I understand that it must be greater than zero and I defined it in the wait block.
What's the problem here?
In your example you can set the initial batch size to any arbitrary positive number, since no agents will be batched until they are released from the Wait block, and you set the batch size of the Batch block before releasing any agents.
Set the initial batch size inside the Batch block to any number greater than 0 and see if it works.
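A minimal sketch of the pattern, assuming blocks named wait and batch and a hypothetical groupSize known for the current group:

```java
// Wait block "On enter" action: once the whole group has arrived,
// resize the Batch block and release the group in one go.
if (wait.size() == groupSize) {
    batch.set_batchSize(groupSize);  // must happen before freeing
    wait.freeAll();                  // agents flow into the Batch block
}
```

The order matters: set_batchSize() must run before freeAll(), otherwise the freed agents are batched with the stale size.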

Make an idle resource help out a unit already being served

So, the problem I'm facing is the following.
I have a simple architecture where I simulate an unloading process in which 2 cranes unload incoming docked ships. Using a seize-delay-release design, one incoming ship ties up one capacity unit from the "cranes" resourcePool.
In the event of only one arriving ship, how can I make both cranes work on one ship and from there cut the processing time in half?
If a second ship docks in, I want one of the two busy cranes to help out the newly arrived ship with unloading.
Are there any elegant ways of solving this issue?
Another option would be to model the offloading process in a bit more detail. This removes the need to compute the remaining offloading time on a ship when service time depends on the number of cranes serving it at any moment.
You can model the berths or docks where ships can park to be offloaded as waiting blocks or queues
When a ship enters the dock or berth, it generates a number of containers or whatever units need to be offloaded in a separate logic stream, using source.inject(numberOfUnits)
You can model the number of cranes available at each berth programmatically by either increasing or decreasing the number of resources in the resource pool using resourcePool.set_capacity(numberOfUnits)
Thus, if a ship arrives at dock A and there is no ship at dock B, you increase the capacity of the cranes at dock A to 2. If a new ship then arrives at dock B, you set the capacity of the resource pool at both docks to 1.
If the ship at dock A leaves, you then set the capacity of the resource pool at dock B to 2 and at dock A to 0, thus simulating the movement of a crane from one dock to another.
The only minor issue here is that if you want to track the utilization of individual cranes you will need to store them separately as you will be destroying and creating resource units the whole time.
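A minimal sketch of that bookkeeping, assuming two resource pools named cranesA and cranesB and hypothetical booleans shipAtA / shipAtB maintained on ship arrival and departure:

```java
// Call whenever a ship arrives at or leaves a berth.
void reallocateCranes() {
    if (shipAtA && shipAtB) {        // both berths busy: one crane each
        cranesA.set_capacity(1);
        cranesB.set_capacity(1);
    } else if (shipAtA) {            // only A busy: both cranes help there
        cranesA.set_capacity(2);
        cranesB.set_capacity(0);
    } else if (shipAtB) {            // only B busy: both cranes help there
        cranesA.set_capacity(0);
        cranesB.set_capacity(2);
    } else {                         // idle: park one crane at each berth
        cranesA.set_capacity(1);
        cranesB.set_capacity(1);
    }
}
```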
Yes, apply priorities. A new ship should have a higher priority and your second crane (still busy with ship 1) can be seized by the higher priority ship using task preemption. (Check how these work in the help and example models)
The only difficulty will be computing the remaining time on ship 1 if service time depends on the number of cranes serving it at any moment. Not impossible, but it will need some coding; one possible approach is sketched below.
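A sketch of that computation, assuming fields on the ship agent and an illustrative constant offload rate per crane (none of these names are AnyLogic built-ins):

```java
double remainingUnits;          // cargo still to be offloaded
double lastUpdate;              // model time of the last rate change
int    cranes;                  // cranes currently working this ship
static final double RATE = 10;  // units offloaded per crane per time unit

// Call whenever the number of cranes on this ship changes.
void onCraneCountChange(int newCranes) {
    // settle the work done at the old rate before switching
    remainingUnits -= cranes * RATE * (time() - lastUpdate);
    cranes = newCranes;
    lastUpdate = time();
    // remaining service time at the new rate:
    double remainingTime = remainingUnits / (cranes * RATE);
    // ...use remainingTime to reschedule the end-of-service event
}
```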

How can a Neural Network learn from testing outputs against external conditions which it cannot directly control

In order to simplify the question and hopefully the answer I will provide a somewhat simplified version of what I am trying to do.
Setting up fixed conditions:
Max Oxygen volume permitted in room = 100,000 units
Target Oxygen volume to maintain in room = 100,000 units
Maximum Air processing cycles per second = 3.0 (min is 0.3)
Energy (watts) used per second follows this formula: (100 × cycles_per_second)²
Maximum Oxygen Added to Air per "cycle" = 100 units (minimum 0 units)
1 person consumes 10 units of O2 per second
Max occupancy of room is 100 person (1 person is min)
inputs are processed every cycle and outputs can be changed each cycle - however if an output is fed back in as an input it could only affect the next cycle.
Let's say I have these inputs:
A. current oxygen in room (range: 0 to 1000 units for simplicity - could be normalized)
B. current occupancy in room (0 to 100 people at max capacity) OR/AND could be changed to total O2 used by all people in room per second (0 to 1000 units per second)
C. current cycles per second of air processing (0.3 to 3.0 cycles per second)
D. Current energy used (which is the above current cycles per second * 100 and then squared)
E. Current Oxygen added to air per cycle (0 to 100 units)
(possible outputs fed back in as inputs?):
F. previous change to cycles per second (+ or - 0.0 to 0.1 cycles per second)
G. previous cycles O2 units added per cycle (from 0 to 100 units per cycle)
H. previous change to current occupancy maximum (0 to 100 persons)
Here are the actions (outputs) my program can take:
Change cycles per second by increment/decrement of (0.0 to 0.1 cycles per second)
Change O2 units added per cycle (from 0 to 100 units per cycle)
Change current occupancy maximum (0 to 100 persons) - (basically allowing for forced occupancy reduction and then allowing it to normalize back to maximum)
The GOALS of the program are to maintain a homeostasis of:
keep the O2 in the room as close to 100,000 units as possible
never allow the room to drop to 0 units of O2
allow for a current occupancy of up to 100 people per room for as long as possible without forcibly removing people (as the O2 in the room is depleted over time and nears 0 units, people should be removed from the room down to the minimum, and then the maximum should be allowed to recover back up to 100 as more and more O2 is added back to the room)
and ideally use the minimum energy (watts) needed to maintain the above conditions. For instance, if the room was down to 90,000 units of O2 and there are currently 10 people in the room (using 100 units of O2 per second), then instead of running at 3.0 cycles per second (90 kW), adding 100 units per cycle, i.e. 300 units per second total (a surplus of 200 units over the 100 being consumed), for 50 seconds to replenish the deficit of 10,000 units, at a total of 4,500 kJ used, it would be more ideal to run at, say, 2.0 cycles per second (40 kW), which would produce 200 units per second (a surplus of 100 units over consumed units), for 100 seconds to replenish the deficit of 10,000 units, using a total of 4,000 kJ.
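To make that trade-off explicit, writing $c$ for cycles per second, $R$ for consumption (units/s), and $D$ for the O2 deficit, the numbers above follow from:

$$P(c) = (100c)^2\ \text{W}, \qquad S(c) = 100c - R\ \text{units/s}, \qquad E(c) = P(c)\cdot\frac{D}{S(c)}$$

With $D = 10{,}000$ and $R = 100$: $c = 3$ gives $P = 90$ kW, $S = 200$, $t = 50$ s, $E = 4{,}500$ kJ; $c = 2$ gives $P = 40$ kW, $S = 100$, $t = 100$ s, $E = 4{,}000$ kJ.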
NOTE: occupancy may fluctuate from second to second based on external factors that can not be controlled (lets say people are coming and going into the room at liberty). The only control the system has is to forcibly remove people from the room and/or prevent new people from coming into the room by changing the max capacity permitted at that next cycle in time (lets just say the system could do this). We don't want the system to impose a permanent reduction in capacity just because it can only support outputting enough O2 per second for 30 people running at full power. We have a large volume of available O2 and it would take a while before that was depleted to dangerous levels and would require the system to forcibly reduce capacity.
My question:
Can someone explain to me how I might configure this neural network so it can learn from each action (cycle) it takes by monitoring for the desired results? My challenge here is that most articles I find on the topic assume that you know the correct output answer (i.e.: if inputs A, B, C, D, E all have specific values, then Output 1 should be to increase by 0.1 cycles per second).
But what I want is to meet the conditions I laid out in the GOALS above. So each time the program runs a cycle, let's say it decides to try increasing the cycles per second, and the result is that available O2 is either declining by a smaller amount than it was the previous cycle or is now increasing back towards 100,000; then that output could be considered more correct than reducing or maintaining the current cycles per second. I am simplifying here, since there are multiple variables that would create the "ideal" outcome, but I think I have made the point of what I am after.
Code:
For this test exercise I am using a Swift library called Swift-AI, specifically its NeuralNet module: https://github.com/Swift-AI/NeuralNet
So if you want to tailor your response in relation to that library it would be helpful, but not required. I am more just looking for the logic of how to set up the network and then configure it to do initial and iterative re-training of itself based on the conditions I listed above. I would assume that at some point, after enough cycles and different conditions, it would have the appropriate weights set up to handle any future condition, and re-training would become less and less impactful.
This is a control problem, not a prediction problem, so you cannot just use a supervised learning algorithm. (As you noticed, you have no target values for learning directly via backpropagation.) You can still use a neural network (if you really insist); have a look at reinforcement learning. But if you already know what happens to the oxygen level when you take an action like forcing people out, why would you learn such simple facts through millions of trial-and-error evaluations instead of encoding them into a model?
I suggest looking at model predictive control. If nothing else, you should study how the problem is framed there. Or maybe even just plain old PID control. It seems really easy to build a good dynamical model of this process with a few state variables.
You may have a few unknown parameters in that model that you need to learn "online", but a simple PID controller can already tolerate and compensate for some amount of uncertainty. And it is much easier to fine-tune a few parameters than to learn the general cause-effect structure from scratch. That can be done, but it involves trying all possible actions: for all your algorithm knows, the best action might be to reduce the number of oxygen consumers to zero permanently by killing them, and then collect a huge reward for maintaining the oxygen level with little energy. When the algorithm knows nothing about the problem, it has to try everything out to discover the effects.
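For illustration, a minimal discrete PID loop for this problem, sketched in Java (the asker's code is Swift, but the structure is the same; the gains are placeholders that would need tuning):

```java
// Drives cycles-per-second toward the 100,000-unit O2 target.
public class OxygenPid {
    static final double TARGET = 100_000;       // desired O2 in room
    double kp = 1e-3, ki = 1e-5, kd = 5e-4;     // gains: must be tuned
    double integral = 0, prevError = 0;

    /** One control step; dt is seconds since the last step. */
    double step(double currentO2, double dt) {
        double error = TARGET - currentO2;      // positive when O2 low
        integral += error * dt;
        double derivative = (error - prevError) / dt;
        prevError = error;
        double cycles = kp * error + ki * integral + kd * derivative;
        return Math.max(0.3, Math.min(3.0, cycles)); // actuator limits
    }
}
```

Occupancy reduction could then stay a separate, rule-based safety override (e.g. cut the maximum only when projected O2 at full power still falls), which keeps the part that needs tuning or learning small.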

Space between agents in discrete-event simulation

I have a question about space between agents. In my model I have agents generated from a source; they then enter a delay, and after the delay the agents go into a queue with a capacity of 1, but I have a preemption option. The agents that go into the preemption are supposed to move along a circular path (I used a delay block for this), but there should always be a certain space between the agents, e.g. 100 meters. How can I incorporate this in my model to make sure my agents are not too close to each other?
One way you can control the distance between your agents is to move them on a path using a dummy transporter instead of the moveTo block. Transporters allow you to define a minimum distance to obstacle that prevents the agents from getting too close to each other.
Two options if you mean the static queue with agents actually waiting:
1) if your queue path is 500 meters, set the maximum number of agents allowed in that queue to 6 (so you have 100 meters of distance between each agent); the arithmetic is sketched below
2) Use the PML settings block from the PML palette and define an initial capacity of animation location equal to 6 (if your queues are 500 meters)... but this applies to the whole model, so maybe it won't be good enough.
If you want them to keep 100 meters of space while they are moving towards their objective along the path that represents the queue, then the answer depends heavily on your model and cannot be answered with the info provided... in that case you need to control the agent movements by adding some logic, but I don't know what logic is suitable for you.
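The capacity in option 1 is just path length over spacing, plus one for the agent at the start of the path (plain Java, illustrative names):

```java
double pathLength = 500;   // meters of queue path
double minSpacing = 100;   // required gap between neighbouring agents
// agents sit at 0, 100, ..., 500 m -> one more than the gap count
int maxAgents = (int) (pathLength / minSpacing) + 1;   // = 6 here
```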

AnyLogic: batching agents in terms of weight

How do I set the batch size in terms of the weight I want to batch? I am currently simulating a potato plant. The potatoes (agents) all have their own weight due to the natural randomness of potatoes, but now I must batch them into 10 kg bags. The weight should be just over 10 kg, never under; so a bag is, say, 9.9 kg plus one more potato.
The F1 help function suggests using a customized Queue, but I do not know how to go forward with that option.
Any help would be appreciated
You could use a "Wait" object with infinite capacity. Whenever a potato is added, check the total weight; if it is above your threshold, you can use wait.freeAll(). This will send them into a downstream Batch object.
Make sure to change the batch size to the number of potatoes in the "Wait" object before you call the freeAll() method, so that all freed potatoes are batched together. You can do that dynamically using batch.set_batchSize(x); a minimal sketch follows.
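Put together, the Wait block's "On enter" action could look like this (assuming each Potato agent carries a weight field in kg and a double variable totalKg on Main, both hypothetical names):

```java
// Runs once per arriving potato.
totalKg += agent.weight;
if (totalKg >= 10.0) {                 // just crossed the 10 kg mark
    batch.set_batchSize(wait.size());  // batch all potatoes waiting now
    wait.freeAll();                    // release them into the Batch block
    totalKg = 0;                       // start accumulating the next bag
}
```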
cheers