Calculate the time available in the Service block in AnyLogic

Hi, I am working on a production model that runs in shifts (8:00 AM to 4:30 PM), and the requirement is that a job must be completed within one shift, otherwise it is postponed to the next day. For example, say my agent enters the Service block at 4:00 PM and the shift ends at 4:30 PM, while the task takes about 3 hours on average. I want to check the available time, and if it is not sufficient (in this case only 30 minutes are left), the agent should be sent to a waiting area, and the next day the Service block should start working on that agent first.

One way: add a SelectOutput block ahead of the Service block. In its condition, check
mySchedule.getTimeOfNextValue() - time() > x
where x is the remaining time window you require, i.e. the task duration the agent will need once it enters the Service block.
If there is not enough time left, send the agent to a Wait block; otherwise let it enter.
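As a sketch, assuming the shift calendar is a Schedule named mySchedule, the expected processing time sits in a parameter taskDuration (in model time units), and postponed agents are parked in a Wait block named waitOvernight (all of these names are assumptions):

// Condition of the SelectOutput (true exit leads to the Service block):
mySchedule.getTimeOfNextValue() - time() > taskDuration

// Action of an Event firing at each shift start (e.g. daily at 8:00 AM):
waitOvernight.freeAll();   // the postponed agents re-enter the flow first thing next day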
PS: Also check the other functions that Schedule offers; you might need some of them. Always check all the capabilities a block offers you via code completion (Ctrl+Space, or Option+Space on Mac).

Related

Change machine delay time for rejected agents in AnyLogic

Agents move from the machine block (machine block delay is 9 minutes) to the milling block. At the milling block, 7 percent get rejected and go back to the machine block.
The rejected agents then move from the machine block to the milling block again, but this time the machine block delay time should be 12 minutes. The question is: how can I achieve this 12-minute delay time?
The logic image is given below:
In the agent type of the agents going through the process, create a variable of type boolean, call it rejected, and give it the initial value false. In the "On exit (false)" action of your selectOutput1, write:
agent.rejected = true;
The delay time of the machine block (with the delay time unit set to minutes) should be
agent.rejected ? 12 : 9

AnyLogic - Assembler should stop working for 2 hours after 10 assemblies are done

The "Assembler" should stop working for 2 hours after 10 assemblies are done.
How can I achieve that?
There are many ways to do this, depending on what it means to stop working and what the implications are for the incoming parts, but here's one option:
Create a ResourcePool called Machine; this will be used along with the technicians:
on the "on exit" action of the assembler do this (I use 9 instead of 10 because the out.count() doesn't count until the agent is completely out, so when it counts 9, it means that you have produced 10)
if (self.out.count() == 9) {
    machine.set_capacity(0);
    create_MyDynamicEvent(2, HOUR);
}
In your dynamic event (which you have to create), add the following code:
machine.set_capacity(1);
A second option is to have a variable countAssembler count the number of items produced. Then,
in the "On exit" action you write countAssembler++;
and in the "On enter delay" action you write the following:
if (countAssembler == 10) {
    self.suspend(agent);
    create_MyDynamicEvent(2, HOUR, agent);
}
In the dynamic event, write:
assembler.resume(agent);
Don't forget to add the parameter needed in the dynamic event:
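For completeness, a sketch of that dynamic event, assuming it is called MyDynamicEvent and is defined with a single argument agent of type Agent (matching the third argument passed in create_MyDynamicEvent(2, HOUR, agent)), and relying on the same resume() call shown above:

// Argument of MyDynamicEvent: Agent agent
// Action of MyDynamicEvent:
assembler.resume(agent);   // let the suspended agent continue after the 2-hour stop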
A third option: create a variable called countAssembler of type int and increment it as agents pass through the assembler. Also create a variable called assemblerStopTime; when the assembler stops, record the stop time with assemblerStopTime = time();.
Place a selectOutputOut block before the assembler and let agents in if the countAssembler value is less than 10. Otherwise send them to a Wait block.
Now, to maintain the FIFO rule, the selectOutputOut condition also needs to check whether there is any agent in the Wait block and whether time() - assemblerStopTime is greater than 2 hours. If so, free the waiting agent and send it on to the assembler with the Wait block's free() function, send the current agent to the Wait block instead, and reset countAssembler to zero.
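A sketch of this third option, slightly simplified so that it only uses standard Wait block functions: the condition logic lives in a function on Main (here called canEnterAssembler(), returning boolean, a name chosen for this sketch), and once the 2-hour stop is over all waiting agents are released in FIFO order. The block names wait and assembler, and an hour-based model time unit, are also assumptions:

// "On exit" action of the assembler:
countAssembler++;
if (countAssembler == 10)
    assemblerStopTime = time();           // remember when the stop began

// Body of canEnterAssembler(), used as the selectOutputOut condition:
if (countAssembler < 10)
    return true;                          // assembler still running: let the agent in
if (time() - assemblerStopTime > 2) {     // the 2-hour stop is over
    countAssembler = 0;                   // restart counting
    wait.freeAll();                       // release waiting agents in FIFO order
    return true;                          // the current agent may enter as well
}
return false;                             // still stopped: route the agent to the Wait block

The Wait block's out port is assumed to lead back in front of the assembler (or directly into it).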

AnyLogic: How to refer to an agent in a Delay block

I am building a DES-ABM hybrid model in AnyLogic.
The agents go through the DES blocks, among which multiple Delay blocks.
How do I
access an agent which is in a Delay block
or preferably
access the specific agent which triggered the "On enter" action of the Delay block?
My ultimate goal is to open or close a valve object on the agent frame
So can I/ how do I
A. open or close the valve on the agent frame directly from the main/root frame (on which the Delay block is located)
or if that is not possible
B. send a message or trigger a statechart within the specific agent which will then open or close the valve from the agent's own frame?
I have tried to use the 'DelayBlockName'.agents() function, but this does not work and returns [] when I check it using traceln.
access an agent which is in a Delay block or preferably
Use the keyword agent. These keywords differ between library blocks, so best start learning about the lightbulb and how it can help; see here.
access the specific agent which triggered the 'on enter' action of the delay block?
When you write agent. in the "On enter" action, every agent coming through executes that code, so by definition it is always the specific agent :)
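For illustration, a minimal sketch, assuming your agents are of a custom type Tank that defines openValve() and closeValve() functions on its own frame (hypothetical names), and that the Delay block's "Agent type" property is set to Tank so that agent is typed accordingly:

// "On enter" action of the Delay block:
agent.openValve();    // 'agent' is exactly the agent that just entered this Delay

// "On exit" action of the Delay block:
agent.closeValve();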
My ultimate goal is to open or close a valve object on the agent frame. So can I / how do I A. open or close the valve on the agent frame directly from the main/root frame (on which the Delay block is located) or, if that is not possible, B. send a message or trigger a statechart within the specific agent which will then open or close the valve from the agent's own frame?
This is something completely different from your original question and just... messy. Please limit questions to one topic so it is easy for us to answer :) (see this guide for more)

Trouble with agent state chart

I'm trying to create an agent statechart where a transition should happen every day at 4 pm (except weekends).
I have already tried:
1. A condition-based transition (condition: getHourOfDay() == 16).
2. A timeout transition that "reinserts" my agent into the chart every 1 s and checks whether the hour is 16.
My code is still not running, does anyone have any idea how to solve it?
This is my statechart view. Customer is a single resource that is supposed to "get" the products out of my stock everyday at 4pm. It is supposed to happen in the "Active" state.
I have set a timeout transition (from Active-Active) that runs every 1s.
Inside my "Active" state in the "entrance action" i wrote my code to check if it is 4 pm and run my action if so.
I thought since i set a timeout transition it would check my condition every 1s, but apparently it is not working.
Your agent does not (re)enter the Active state via an internal transition, so its entry action is never executed again.
Redraw the transition to actually go out of the Active state and then enter it again as below:
Don't use condition-based transitions, for performance reasons. In your case, the condition also never triggers, because it is only evaluated when something happens in the model, and nothing happens in your model at exactly 4 pm.
re your timeout approach: Why would you "reinsert" your agent into its own statechart? Not sure I understand...
Why not set up a schedule or event with your recurrence requirement and make it send a message to the statechart: stateChart.fireEvent("trigger!");. In your statechart, add a message-based transition that waits for this message. This will work.
Be careful to understand the difference between the Statechart.fireEvent() and the Statechart.receiveMessage() functions, though.
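For instance, a sketch of that approach, assuming a cyclic Event on Main that fires daily at 4 pm, an agent instance called customer, and a statechart called statechart inside it (all of these names are assumptions):

// Action of the daily 4 pm Event (skips weekends):
if (getDayOfWeek() != SATURDAY && getDayOfWeek() != SUNDAY)
    customer.statechart.fireEvent("trigger!");   // must match the message of the transition

// In the Customer agent: a message-triggered transition from Active back to Active,
// with message type String, firing only on the message "trigger!".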
PS: I agree with Felipe: please start using SO the way it is meant, i.e. also mark replies as solved. It helps us, and future users, to quickly find solutions :-) Cheers

Setting up an OpenAI Gym environment

I've been given a task to set up an OpenAI Gym toy environment which can only be solved by an agent with memory. I've been given an example with two doors: at time t = 0 I'm shown either 1 or -1, and at t = 1 I can move to the correct door and open it.
Does anyone know how I would go about starting out? I want to show that A2C or PPO can solve this using an LSTM policy. How do I go about setting up the environment, etc.?
To create a new environment in the Gym format, it should have the five functions mentioned in the gym.core file:
https://github.com/openai/gym/blob/e689f93a425d97489e590bba0a7d4518de0dcc03/gym/core.py#L11-L35
To lay this out in steps:
Define the observation space and action space for your environment, preferably using the gym.spaces module.
Write the step function, which performs the agent's action and returns a 4-tuple containing: the next observation from the environment, the reward, done (a boolean indicating whether the episode is over), and an info dict for any extra information you want.
Write a reset function for the environment that reinitialises the episode to a random start state and returns the initial observation.
These functions are enough to be able to run an RL agent on your environment.
You can skip the render, seed and close functions if you want.
For the task you have defined, you can model the observation and action space using Discrete(2): 0 for the first door and 1 for the second door.
reset would return, as its observation, which door has the reward.
The agent would then choose one of the doors, 0 or 1.
Then perform an environment step by calling step(action), which will return the agent's reward and the done flag set to true, signifying that the episode is over.
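A minimal sketch of such an environment, using the older 4-tuple Gym API the answer refers to (the class name and reward values are illustrative):

import random
import gym
from gym import spaces

class TwoDoorMemoryEnv(gym.Env):
    """Toy memory task: the cue at t = 0 tells which door is rewarded at t = 1."""

    def __init__(self):
        self.observation_space = spaces.Discrete(2)   # which door the cue points to
        self.action_space = spaces.Discrete(2)        # which door to open
        self.correct_door = 0

    def reset(self):
        self.correct_door = random.randint(0, 1)
        return self.correct_door                      # the cue is only visible here

    def step(self, action):
        reward = 1.0 if action == self.correct_door else 0.0
        done = True                                   # one decision per episode
        return 0, reward, done, {}                    # no informative observation at t = 1

An A2C or PPO agent with an LSTM policy then has to carry the cue returned by reset() through to its decision in step().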
Frankly, the problem you describe seems too simple for any reinforcement learning algorithm, but I assume you have provided it just as an example.
Remembering over longer horizons is usually harder.
You can read the Gym documentation and toy environments to understand how to create one.