AnyLogic - interrupt delay upon resource availability and move agent to a new task

I am modelling a patient flow as a DES in AnyLogic. Patients, as agents, have a RecoveryTime variable that sets how long they should stay in a delay before they are discharged. Once they are assigned a random RecoveryTime, they are directed to the LBRP delay, but only if an LBRP bed resource is available. If no LBRP bed is available, they are directed to the PP delay instead. Once there, they either spend the remainder of their RecoveryTime and are discharged, or, if an LBRP bed becomes available, they are transferred to the LBRP delay to spend the remainder of their RecoveryTime. I have tried a couple of things in AnyLogic to make this happen, but so far I have not made much progress. Does anyone know what to do?
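Whatever the final flowchart looks like, the part I can already pin down is the bookkeeping for how much RecoveryTime is left when a patient is moved, roughly like this (delayEnterTime is a field I would add to the Patient agent type):

// "On enter" action of the PP delay, recording when the patient started waiting there:
agent.delayEnterTime = time();

// When an LBRP bed frees up and the patient is pulled out of the PP delay early,
// work out how much recovery time is still owed before sending them to the LBRP delay:
double elapsed = time() - agent.delayEnterTime;
double remaining = Math.max(0, agent.RecoveryTime - elapsed);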
Best,

Related

Calculate the travel time from one point to the other on Anylogic

I am developing a factory logistics simulation in AnyLogic. It's a pick-up-and-delivery problem, where AGVs need to pick up parcels and deliver them to target locations. All the AGVs travel along paths, and the paths have different speed limits.
My goal is to reduce the traffic-jam time and the waiting time for jobs to be picked up.
I have the lead time: job delivered time minus job generated time.
From there, I want to isolate the traffic-jam time or waiting time.
Is there a way to calculate the travel time from one spot to another, considering the different speed limits of the paths but without waiting time or traffic jams, so that I can subtract it from the lead time?
Let me know if I need to clarify something.
There is no built-in way to do this; you have to do it yourself. I have three ideas:
You compute this mathematically in the model yourself, i.e. write a function that sums up the total path length segment by segment; you already know the ideal speed on each segment, voilà (see the sketch after this list)
You run a separate experiment and turn off all speed limits and other traffic: record the time in that ideal case and use that to compare
Similarly, you could do this in the same experiment during a warmup period: drive a fake transporter along the path and compute the perfect durations
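For the first idea, a minimal sketch of such a function (segment lengths and speed limits are passed in as plain arrays here, since how you read them off your path network depends on your model):

// Ideal travel time along a route, ignoring waiting and traffic jams:
// sum of (segment length / segment speed limit) over all path segments.
double idealTravelTime(double[] segmentLengthsMeters, double[] speedLimitsMetersPerSec) {
    double total = 0;
    for (int i = 0; i < segmentLengthsMeters.length; i++) {
        total += segmentLengthsMeters[i] / speedLimitsMetersPerSec[i];
    }
    return total; // seconds
}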

Modeling a waiting line in a call centre in AnyLogic

I need to model a waiting line in a call centre in AnyLogic. This is what I don't understand. It says:
If all of the service representatives are busy, an arriving customer is placed on hold but ties up one of the phone lines.
I am not sure which block to use or how to model the waiting customers. Can someone help me? Thank you!
Here is my solution, although I'm absolutely sure other methods exist.
To represent an arrival rate of 15/hr, use a source block with arrivals defined by rate, with the rate set to 15 per hour.
To represent 24 phone lines and 3 service reps, use a queue block (callsWaiting) followed by a delay block (service). The queue block should have capacity = 21 (the 24 phone lines minus the 3 calls in service) and the delay block should have capacity = 3 with a delay time of exponential(0.1, 0) minutes, representing the exponential service time with mean 1/0.1 = 10 min.
To represent losing calls when all of the phone lines are tied up, place a selectOutput block before the callsWaiting queue and set its condition to: callsWaiting.canEnter(). It will return false if the queue is at maximum capacity. On the false branch for that selectOutput, place a sink block for dropped calls.

How to calculate the probability of a customer having to wait more than 5min in the queue?

I have to calculate the probability of a customer waiting more than 5 min in the queue in AnyLogic. I've already implemented the TimeMeasureStart and TimeMeasureEnd blocks, but I have no clue how to compute the probability of a customer waiting longer than 5 min. What do I need to write, and where? Help is highly appreciated!
Thanks!
There are two objects in the Process Modeling Library: TimeMeasureStart and TimeMeasureEnd. You can put those around a queue and record the time for each entity after it exits the queue. Save that time to a Statistics object; from there, the probability of waiting more than 5 min is (number of samples over 5 min) / (total number of entities). Also, make sure that your model time unit is set to minutes to make things easier.
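The final step is then just a ratio; as a sketch, if you have collected every customer's waiting time (alongside the Statistics object), the probability is:

// Fraction of customers whose waiting time exceeded the threshold (5 min here).
// waitingTimesMin stands in for whatever you collected from TimeMeasureEnd.
double probabilityOfWaitingLongerThan(double thresholdMin, double[] waitingTimesMin) {
    int over = 0;
    for (double w : waitingTimesMin) {
        if (w > thresholdMin) over++;
    }
    return (double) over / waitingTimesMin.length;
}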

How to make a reinforcement learning agent learn an endless runner?

I'm trying to train a reinforcement learning agent to play an endless runner game using Unity ML-Agents.
The game is simple: an obstacle approaches from the side and the agent has to jump at the right moment to clear it.
As the observation, I have the distance to the next obstacle. Possible actions are 0 - idle; 1 - jump. Rewards are given for longer playtime.
Unfortunately, the agent fails to learn to overcome even the first obstacle reliably. I suspect this is due to the strong imbalance between the two actions: the ideal policy is to do nothing (0) most of the time and to jump (1) only at very specific points in time. Additionally, all actions taken during a jump are meaningless, since the agent cannot jump again while in the air.
How can I improve the learning so that it nevertheless converges? Any suggestions on what to look into?
Current trainer config:
EndlessRunnerBrain:
    gamma: 0.99
    beta: 1e-3
    epsilon: 0.2
    learning_rate: 1e-5
    buffer_size: 40960
    batch_size: 32
    time_horizon: 2048
    max_steps: 5.0e6
Thanks!
It's difficult to say without seeing the exact code that's being used for the reinforcement learning algorithm. Here are some steps worth exploring:
How long are you letting the agent train? Depending on the complexity of the game environment, it very well may take thousands of episodes for the agent to learn to avoid its first obstacle.
Experiment with the Frameskip property of the Academy object. This permits the agent to take an action only after a number of frames have passed. Increasing this value may speed up learning in simpler games.
Adjust the learning rate. The learning rate determines how heavily the agent weights new information versus old information. You're using a very small learning rate; try increasing it by a couple of orders of magnitude.
Adjust epsilon. Epsilon determines how often a random action is taken. With an epsilon of 0.2, your agent will take a random action 20% of the time; the other 80% of the time, it will choose the (state, action) pair with the highest associated reward. You can try reducing or increasing this value to see if you get better results. Since you know you'll want more random actions at the beginning of training, you can even "decay" epsilon with each episode: start with, say, 0.5 and reduce it by a small amount, say 0.00001, after each completed game episode (see the sketch after this list).
Change the way the agent is rewarded. Instead of rewarding the agent for each frame it stays alive, perhaps you could reward the agent for each obstacle it successfully jumps over.
Are you sure that the given time_horizon and max_steps provide enough runway for the game to complete an episode?
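For the epsilon decay mentioned above, the bookkeeping itself is tiny; a generic sketch, not tied to any particular ML-Agents setting:

// Linear epsilon decay with a lower bound: call once per completed episode.
double epsilon = 0.5;                    // start with plenty of exploration
final double decayPerEpisode = 0.00001;  // amount removed after each episode
final double minEpsilon = 0.01;          // never go fully greedy

void onEpisodeEnd() {
    epsilon = Math.max(minEpsilon, epsilon - decayPerEpisode);
}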
Hope this helps, and best of luck!

What is a good fitness function for an AI of a zero-sum game?

I am making an AI for a zero-sum 4-player board game. (It's actually not zero-sum: the 4 players "die" when they lose all their lives, so there is a player who died first, second, third, and a player who survived. However, I am telling the AI that only surviving counts as a win and anything else counts as a loss.) After some research, I figured I would use a minimax algorithm in combination with a heuristic function. I came across this question and decided to do the same as the OP of that question - write an evolutionary algorithm that gives me the best weights.
However, my heuristic function is different from the one the OP of that question had. Mine takes 9 weights and is a lot slower, so I can't let the agents play 1000 games (takes too much time) or breed them with the crossover method (how do I do a crossover with 9 weights?).
So I decided to come up with my own method of determining fitness and breeding. And this question is only about the fitness function.
Here are my attempts at this.
First Attempt
For each agent A in a randomly generated population of 50 agents, select 3 more agents from the population (with replacement, but never A itself) and let the 4 agents play a game where A is the first player. Select another 3 and play a game where A is the second player, and so on. For each of these 4 games: if A died first, its fitness does not change; if A died second, its fitness is increased by 1; if it died third, by 2; and if it survived, by 3. Therefore, the highest fitness one can get is 12 (surviving/winning all 4 games -> 3 + 3 + 3 + 3).
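In code form the scoring looks roughly like this (Agent, pickOpponents and playGame stand in for my actual game code; playGame returns A's placement, from 1 = died first to 4 = survived):

// Fitness of agent A over 4 games, one per seat position.
// Placement is 1 (died first) .. 4 (survived); the score added is placement - 1,
// so the maximum total fitness is 4 * 3 = 12.
int evaluateFitness(Agent a, Agent[] population, java.util.Random rnd) {
    int fitness = 0;
    for (int seat = 0; seat < 4; seat++) {
        Agent[] opponents = pickOpponents(population, a, 3, rnd); // 3 others, drawn with replacement, never A itself
        int placement = playGame(a, opponents, seat);             // stand-in for the real game engine
        fitness += placement - 1;
    }
    return fitness;
}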
I ran this for 7 generations; already in the first generation the highest fitness was as high as 10. I also calculated the average fitness of the top 10 agents, but it didn't increase at all over the 7 generations. It even decreased a little.
I think the reason this didn't work is that a few agents simply got lucky and drew some poorly performing agents as their opponents.
Second Attempt
The game setups are the same as in my first attempt, but instead of measuring the result of each game, I measured how many moves the agent made before it died.
After 7 generations, the average fitness of the top 10 does increase, but still not as much as I think it should.
I think the reason this failed is that the game is finite, so there is a finite number of moves you can make before you die, and the top-performing agents pretty much reached that limit. There is no room for growth. Another reason is that the fitness of the player who survived and that of the player who died third differ very little.
What I want
From my understanding of EAs (correct me if I'm wrong), the average fitness should increase and the top performing individual's fitness should not decrease over time.
My two attempts failed at both of these. Since the opponents are randomly selected, the top performing agent in generation 1 might get stronger opponents in the next generation, and thus its fitness decreases.
Notes
In my attempts, the agents play 200 games each generation and each generation takes up to 3 hours, so I don't want to let them play too many games.
How can I write a fitness function like this?
Seven generations doesn't seem like nearly enough to get a useful result. Especially for a game, I would expect something like 200+ generations to be more realistic. You could do a number of things:
Implement elitism in order to ensure the survival of the best individual(s) (a minimal sketch follows this list).
The strength of evolution stems from repeated mutation and crossover, so I'd recommend letting the agents play only a few games per generation (say, 5 ~ 10), at least at the beginning, and then evolve the population. You might even want to do only one game per generation.
In this regard, you could adopt a continuous evolution strategy. What this means is that as soon as an agent dies, they are subjected to mutation, and as soon as an agent wins, they can produce offspring. Or any combination of the two. The point is that the tournament is ongoing, everyone can play against anyone else. This is a little more "organic" in the sense that it does not have strictly defined generations, but it should speed up the process (especially if you can parallelise the evaluation).
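For the elitism point above, a minimal sketch (Agent and its fitness field are placeholders for however you represent individuals):

// Carry the best k individuals unchanged into the next generation,
// so the top fitness can never decrease from one generation to the next.
Agent[] nextGeneration(Agent[] population, int k) {
    Agent[] sorted = population.clone();
    java.util.Arrays.sort(sorted, (a, b) -> Double.compare(b.fitness, a.fitness)); // best first
    Agent[] next = new Agent[population.length];
    System.arraycopy(sorted, 0, next, 0, k);  // elites survive as-is
    // ...fill next[k..] via selection, crossover and mutation
    return next;
}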
I hope that helps. The accepted answer in the post you referenced has a good suggestion about the way you could implement crossover.