I have been working with AnyLogic for about 6 months now, and my goal is to model a generic energy supply chain for an energy demand (e.g. electricity and heat for a house). As a result, I want to evaluate how suitable the components in the energy supply chain are for meeting the energy demand.
My idea is to model the components (e.g. PV -> Battery Storage -> House) as agents. I would model the energy flow within the agents using SD, and the discrete events of the components (e.g. charging and discharging of the battery) via statecharts.
Currently I have two problems:
What possibilities are there to create a variable interconnection of my components (agents)? For example, suppose I do not want to evaluate the scenario PV -> Battery Storage -> House, but rather PV -> Electrolysis -> Tank -> Fuel Cell -> House. My current approach would be to visually connect the agents with ports and connectors and then pass input and output variables for the SD calculation via set and get functions. Are there other possibilities, e.g. realizing such a connection via an Excel input file? I have seen a similar solution in the video "How to Build a True Digital Twin with Self-Configuring Models Using the Material Handling Library" by Benjamin Schumann, but I am not sure whether this approach can be applied to SD.
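Sketched in code, my Excel idea would look something like the following (e.g. in Main's "On startup" action). Only the ExcelFile calls (cellExists, getCellStringValue) are standard AnyLogic API; the Component type, its upstream/downstream lists, and getComponentByName() are illustrative names for things I would have to build myself:

```java
// Read one component link per row from an ExcelFile element named
// "connectionsFile" (sheet "Connections", headers in row 1).
int row = 2;
while (connectionsFile.cellExists("Connections", row, 1)) {
    String from = connectionsFile.getCellStringValue("Connections", row, 1);
    String to   = connectionsFile.getCellStringValue("Connections", row, 2);
    Component source = getComponentByName(from); // illustrative helper
    Component target = getComponentByName(to);
    source.downstream.add(target); // each agent's SD flows then read
    target.upstream.add(source);   // these references in their formulas
    row++;
}
```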
To evaluate the energy supply chain, I would like to attach information to the energy flow, for example its type (electricity, heat), its generation price (depending on which components the energy flow has passed through), and so on. Is there a way to add this information to a flow in SD? My current approach would be to model the energy flow as an agent population with appropriate parameters and variables. Agents could then die when energy is consumed, or be converted from the electricity type to the heat type. However, I don't know whether this fits with the SD modeling of the energy flow.
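Sketched as code, such an energy parcel agent might carry its bookkeeping like this (a plain-Java stand-in; in AnyLogic this would be an agent type with these fields as parameters/variables, and all names are illustrative):

```java
// Stand-in for an AnyLogic agent type representing one parcel of energy.
public class EnergyPacket {
    double amountKWh;     // energy content of this parcel
    String carrier;       // "electricity", "heat", "hydrogen", ...
    double costSoFar;     // generation/conversion cost accumulated so far

    EnergyPacket(double amountKWh, String carrier) {
        this.amountKWh = amountKWh;
        this.carrier = carrier;
    }

    // Called by each component the parcel passes through.
    void convert(String newCarrier, double efficiency, double costPerKWh) {
        amountKWh *= efficiency;              // conversion losses
        carrier = newCarrier;                 // e.g. electricity -> heat
        costSoFar += costPerKWh * amountKWh;  // price depends on the path taken
    }
}
```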
Can you help me with these problems? I would basically be interested in the opinion of more experienced AnyLogic users on whether my approaches are feasible or whether there are other, easier approaches. If you know of any tutorial videos or example models that address similar problems, I would also be happy to learn from them.
Best
Christoph
Sounds like what you need is a model that combines the agent-based and system dynamics approaches, with agents populating the stocks (in your case energy, which then gets converted into heat) depending on their connections. There is an AB-SD combination model among the 'Example models', and I also found one on cloud.anylogic.com, although it is from a different domain.
Perhaps if you can put together a simple example and share it, I'll be able to provide more help.
I'm working on a family practitioner model. My goal is to model the population of a city and all of the family practitioners in that city, taking into account the distribution of treatment times, the arrival schedule of patients, etc. I've started with a basic model with two practitioners and a small population.
Basic Model
Now, if I'm going to model all 10k practitioners by duplicating the same blocks, that's basically undoable. The other solution is to add more units to the ResourcePool, but that does not model the real situation, because every family practitioner has his own queue; increasing the ResourcePool is like modeling 'n' family practitioners in the same clinic with one shared queue of patients. Is there any other solution? (First time using AnyLogic, so I'm basically a newbie.)
thanks
You need to change your model architecture fundamentally: make the practitioner a custom agent type, create a population of them, and put the flowchart inside that agent type. Then you can have 10k of them, each with their own flowchart.
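For example, once each Practitioner agent contains its own Enter/Queue/Delay flowchart, you can push a new patient into his assigned practitioner's chart from code. Enter.take() is a standard Process Modeling Library call; the parameter name is illustrative:

```java
// In the patient Source block's "On exit" action:
Practitioner doc = agent.myPractitioner; // illustrative link parameter on Patient
doc.enter.take(agent);                   // inject the patient into that
                                         // practitioner's own flowchart
```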
Strongly recommend you do all the tutorials in the help first to get a better understanding of the capabilities of AnyLogic. Having some blocks on Main is really just 0.1% of what the tool is capable of :)
I am trying to simulate a finite calling population model in AnyLogic. My population consists of 10 agents and I want them to come back to the Source node after they have been served.
I thought about adding a condition with a SelectOutput block, but the Source block does not have an input port. The best thing that I came up with was to limit the number of customer arrivals to 10. However, in that case the model stops running after 10 arrivals, which is not an appropriate result.
What can I do to be able to simulate such a type of model in AnyLogic?
EDIT: I thought that making agents come back to the Source node could be a solution for building the finite calling population model. The main purpose of my question is to understand how to build this type of model in AnyLogic. Here is the description of the concept of the model.
You cannot send them back to a Source element, as it only acts to create agents.
However, you can send them back to blocks that come after the source as below:
Here, all agents created by the Source block will infinitely loop through the Queue and Delay blocks.
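If you prefer to close the loop in code rather than with a drawn connector, one common pattern is an Exit block whose "On exit" action re-injects the agent into an Enter block placed before the Queue (both are standard Process Modeling Library blocks):

```java
// In the Exit block's "On exit" action: send the agent back to the
// Enter block at the head of the loop instead of destroying it.
enter.take(agent);
```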
In the Thermal Power Library from Modelon, there are two kinds of connectors: flow connector and volume connector.
Based on the tutorial shipped with the library, these two kinds of connectors should NOT be connected with the same kind of connector.
But I checked their code, and it seems the code is the same.
I checked the code in the ThermoSysPro library from EDF and in the ThermoPower library, too. They also use two kinds of connectors, and the recommended connection principle is the same.
So I read the code of "MixVolume" and "SteamTurbineStodola", which include volume connectors and flow connectors respectively, but I am not sure about the difference between these two kinds of connectors.
My question is: could someone explain the philosophy of using these two kinds of connectors in thermo-hydraulic systems, and how, in the code of each component, I should deal with them so that they work as they were designed to?
Here is a very short and simplified explanation applying to thermo-hydraulic systems.
In flow models (pipes, valves, etc.) the enthalpy is unchanged, and mass flow and pressure drop are related by a static equation.
In volume models, pressure and enthalpy are dynamic state variables; that is, mass and energy conservation is "elastic".
As a rule of thumb, you should build thermo-hydraulic system models from alternating flow and volume models (in a staggered grid scheme) to decouple the nonlinear systems.
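Sketched as equations (a simplified picture; f and the heat input term stand for whatever closure relations the component uses):

```latex
% Flow model (static): enthalpy passes through, mass flow is an
% algebraic function of the pressure drop
\dot{m} = f(p_{\mathrm{in}} - p_{\mathrm{out}}), \qquad h_{\mathrm{out}} = h_{\mathrm{in}}

% Volume model (dynamic): p and h are states, conservation is "elastic"
\frac{dM}{dt} = \sum_i \dot{m}_i, \qquad
\frac{dU}{dt} = \sum_i \dot{m}_i\, h_i + \dot{Q}
```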
For the dynamic pipe model in the top figure of your post, the connectors merely indicate that, internally, the pipe model begins with a volume model and ends with a flow model.
Claytex has a nice blog post on the subject here https://www.claytex.com/blog/how-to-avoid-computationally-expensive-fluid-networks-in-dymola/
Also, the authors of the Modelica Buildings Library have made a great effort to explain this in various papers. See e.g. https://buildings.lbl.gov/publications/simulation-speed-analysis-and
These two kinds of connectors are indeed identical, due to the Modelica language specification: you can only connect connectors that are interchangeable, i.e. that have exactly the same number and types of flow and potential variables. At every node, all flows have to sum to zero and all potentials have to be equal, so the connectors have to be type-consistent.
The difference is purely informational, for the modeler or for someone trying to understand the model, and all components have been designed with this convention in mind. It is easiest to understand with electrical components, where you have positive and negative pins which indicate in which direction the current should flow, but this is never actually enforced. Positive and negative pins are, their names aside, identical.
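Concretely, for a thermo-hydraulic connector with pressure as the potential variable and mass flow as the flow variable, connecting n connectors at a node always generates the same equations, regardless of which "kind" the connectors are:

```latex
p_{1} = p_{2} = \dots = p_{n}    \qquad \text{(potentials are equalized)}
\sum_{k=1}^{n} \dot{m}_{k} = 0   \qquad \text{(flows sum to zero)}
```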
Although I don't know the connectors you are talking about, I would assume that the VolumePort is a connector of something that has a volume and passes that information, whereas the FlowPort passes information about the mass flow rate (usually a pipe, I guess?). Broken down to abstract DAE theory, one could say the names indicate whether the potential or the flow variable is considered unknown for the component.
I have to emphasize that these are only indicators and that this is never actually enforced by the model or the compiler. It is just how the system should logically resolve in the end if you respect the restriction of only connecting VolumePort to FlowPort connectors.
I am thinking of implementing a learning strategy for different types of agents in my model. To be honest, I still do not know what kind of questions I should ask first or where to start.
I have two types of agents which I want to learn by experience; they have a pool of actions, each of which has a different reward depending on the specific situations that might occur.
I am new to reinforcement learning methods, so any suggestions on what kind of questions I should ask myself are welcome :)
Here is how I am going forward to formulate my problem:
Agents have a lifetime and keep track of a few things that matter to them; these indicators differ between agents. For example, one agent wants to increase A, while another wants B more than A.
States are points in an agent's lifetime at which it has more than one option. (I do not have a clear definition of states, as a given situation might occur a few times or not at all, because agents move around and might never face it.)
The reward is an increase or decrease in an indicator that an agent gets from taking an action in a specific state, and the agent does not know what the gain would have been had it chosen another action.
The gain is not constant, the states are not well defined, and there is no formal transition from one state to another.
For example, an agent can decide to share with one of the co-located agents (Action 1) or with all of the agents at the same location (Action 2). If certain conditions hold, Action 1 will be more rewarding for that agent, while under other conditions Action 2 will have the higher reward. My problem is that I have not seen any examples with unknown rewards: sharing in this scenario also depends on the other agents' characteristics (which affect the conditions of the reward system), and it will differ from state to state.
In my model there is no relationship between the action and the following state, and that makes me wonder whether it is OK to think about RL in this situation at all.
What I am looking to optimize here is my agents' ability to reason about the current situation in a better way, rather than only responding to the needs triggered by their internal states. They have a few personalities which define their long-term goals and affect their decision making in different situations, but I want them to remember which action in a given situation helped them advance their preferred long-term goal.
In my model there is no relationship between the action and the following state, and that makes me wonder whether it is OK to think about RL in this situation at all.
This seems strange. What do actions do if not change state? Note that agents don't necessarily have to know how their actions will change their state. Similarly, actions could change the state imperfectly (a robot's treads could skid so that it doesn't actually move when it tries to). In fact, some algorithms are specifically designed for this uncertainty.
In any case, even if an agent is moving around the state space without having any control, it can still learn the rewards for the different states. Indeed, many RL algorithms involve moving around the state space semi-randomly to figure out what the rewards are.
I do not have a clear definition of states, as a given situation might occur a few times or not at all, because agents move around and might never face it
You might consider expanding what goes into what you consider to be a "state". For instance, position seems like it should definitely be among the variables identifying a state. Not all states need to have rewards (although good RL algorithms typically infer a measure of goodness for neutral states).
I would recommend clearly defining the variables that determine an agent's state. For instance, the state space could be current-patch X internal-variable-value X other-agents-present. In the simplest case, the agent can observe all of the variables that make up its state, though there are algorithms that don't require this. An agent should always be in some state, even if that state has no reward value.
Now, concerning the unknown reward: that's actually totally okay. The reward can be a random variable, and in that case a simple way to apply standard RL algorithms is to use the variable's expected value when making decisions. If the distribution is unknown, the algorithm can just use the mean of the rewards observed so far.
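To make that concrete, here is a minimal tabular Q-learning sketch in plain Java with epsilon-greedy exploration. With a noisy reward, the incremental update below effectively averages the observed rewards over visits; all names are illustrative and not tied to any particular toolkit:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Random;

// Minimal tabular Q-learner: states are opaque strings, actions are indices.
public class QLearner {
    private final Map<String, double[]> q = new HashMap<>(); // state -> Q per action
    private final int nActions;
    private final double alpha = 0.1;    // learning rate
    private final double gamma = 0.9;    // discount factor
    private final double epsilon = 0.1;  // exploration probability
    private final Random rng = new Random();

    public QLearner(int nActions) { this.nActions = nActions; }

    private double[] qValues(String state) {
        return q.computeIfAbsent(state, s -> new double[nActions]);
    }

    public int chooseAction(String state) {
        if (rng.nextDouble() < epsilon) return rng.nextInt(nActions); // explore
        double[] qs = qValues(state);
        int best = 0;
        for (int a = 1; a < nActions; a++) if (qs[a] > qs[best]) best = a;
        return best; // exploit the current estimate
    }

    // Standard Q-learning update; with a random reward, Q converges toward
    // the expected reward plus the discounted value of the next state.
    public void update(String state, int action, double reward, String nextState) {
        double[] qs = qValues(state);
        double[] next = qValues(nextState);
        double maxNext = next[0];
        for (int a = 1; a < nActions; a++) maxNext = Math.max(maxNext, next[a]);
        qs[action] += alpha * (reward + gamma * maxNext - qs[action]);
    }
}
```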
Alternatively, you can include the variables that determine the reward in the definition of the state. That way, if the reward changes, the agent is literally in a different state. For example, suppose a robot is on top of a building and needs to get to the top of the building in front of it. If it just moves forward, it falls to the ground, so that state has a very low reward. However, if it first places a plank from one building to the other and then moves forward, the reward changes. To represent this, we could include plank-in-place as a state variable, so that putting the board in place actually changes the robot's current state and the state that would result from moving forward. Thus, the reward itself has not changed; the agent is just in a different state.
Hopefully this helps!
UPDATE 2/7/2018: A recent upvote reminded me of the existence of this question. In the years since it was asked, I've actually dived into RL in NetLogo to a much greater extent. In particular, I've made a Python extension for NetLogo, primarily to make it easier to integrate machine learning algorithms with models. One of the demos of the extension trains a collection of agents using deep Q-learning as the model runs.
The Modelica Standard Library comes with the Modelica.Media library which makes available thermodynamic properties of fluids.
Quoting from the Modelica.Media documentation:
Media models in Modelica.Media are provided by packages, inheriting from the partial package Modelica.Media.Interfaces.PartialMedium.
Every package defines:
[...]
A BaseProperties model, to compute the basic thermodynamic properties of the fluid;
setState_XXX functions to compute the thermodynamic state record from different input arguments (such as density, temperature, and composition which would be setState_dTX);
[...]
There are - as stated above - two different basic ways of using the Media library, which will be described in more detail in the following section.
One way is to use the model BaseProperties.
[...]
The second way is to use the setState_XXX functions to compute the thermodynamic state record from which all other thermodynamic state variables can be computed [...]
My colleague prefers BaseProperties (he spends most of his time modeling components); I prefer the setState_XXX functions (I spend most of my time writing a property library).
Now we want to develop a simple and small component library together, and we should probably agree on one of the two approaches.
Can you recommend a publication that explains the advantages/disadvantages of the two approaches? Publications that promote the use of the setState_XXX functions are preferred, of course... ;-)
Are there simple rules for deciding which of the two approaches to use when modeling a component (e.g. a very simple turbine)?
The components in Modelica.Fluid seem to use both.
Both patterns for computing properties can be used for all types of components, but BaseProperties has been designed to make life easy for the modeler in components with dynamic states, i.e. usually those storing mass and energy in volumes. You write just the conservation equations, instantiate BaseProperties, equate the relevant variables, and you are done. That is often overkill (more equations than minimally needed) for components with stationary mass and energy balances, like simple valves, pumps, and turbines. For that type of component (no dynamic states), the setState_XXX functions provide a way to work with the minimal number of equations. I think that is also what you will see in Modelica.Fluid: BaseProperties is used together with dynamic equations for mass and energy storage, and setState elsewhere.
The minimum number of equations is not the whole story with respect to computational efficiency, but in general models should not compute more than what is actually needed.
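To make the contrast concrete, compare a dynamic control volume with a simple static valve (a hedged sketch; C_v is an illustrative valve coefficient):

```latex
% Control volume (BaseProperties is convenient): p and h are states,
% and the full property record is needed anyway
\frac{dM}{dt} = \dot{m}_{\mathrm{in}} - \dot{m}_{\mathrm{out}}, \qquad
\frac{dU}{dt} = \dot{m}_{\mathrm{in}} h_{\mathrm{in}}
              - \dot{m}_{\mathrm{out}} h_{\mathrm{out}} + \dot{Q}, \qquad
M = \rho(p, h)\, V

% Static valve (a setState function suffices): only the upstream density
% is needed, e.g. \rho_{\mathrm{in}} from setState\_ph(p_{\mathrm{in}}, h_{\mathrm{in}})
\dot{m} = C_v \sqrt{\rho_{\mathrm{in}}\,(p_{\mathrm{in}} - p_{\mathrm{out}})}, \qquad
h_{\mathrm{out}} = h_{\mathrm{in}}
```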