Flow Balance in Chemical Engineering Process Flow Diagram - simulation

I am developing a chemical process simulation program that takes as user input:
- definitions of the process units;
- a Process Flow Diagram (PFD) that depicts how the process units are connected and the flow/mass stream directions.
The PFD may have recirculation loops. A simple example may look like this:
PFD:
Feed_Unit --> Chemical_Reactor --> Separator --> Product
    ^                                   |
    |                                   |
    |<---------(recirculation)----------V  (flow split)
                                        |
                                        L------> Waste_Material
The flow of Waste_Material is a function of the Chemical_Reactor and changes during the simulation from one time step to the next.
I can balance the Feed, Waste_Material, and Product flows easily. What would be an efficient approach/algorithm to make sure the inner streams' flows are balanced too?

This seems like a 1000-level mass balance problem. It would be easier to just set up a system of linear equations in MATLAB and use the rref() function, as long as it's not a transient problem.
Just write out all the balances and then plug them into MATLAB.
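For concreteness, here is a minimal MATLAB sketch of that suggestion applied to the example PFD above, at steady state. The stream ordering, the 100 kg/h feed and the splitter fractions are assumed purely for illustration:

```matlab
% Unknowns: x = [feed; reactorIn; sepFeed; recycle; product; waste]
A = [ 1   0    0    0   0   0 ;    % feed flow is specified
      1  -1    0    1   0   0 ;    % mix point: feed + recycle = reactorIn
      0   1   -1    0   0   0 ;    % reactor at steady state: reactorIn = sepFeed
      0   0   0.2  -1   0   0 ;    % split: recycle = 20% of sepFeed (assumed)
      0   0   0.7   0  -1   0 ;    % split: product = 70% of sepFeed (assumed)
      0   0   0.1   0   0  -1 ];   % split: waste   = 10% of sepFeed (assumed)
b = [100; 0; 0; 0; 0; 0];          % feed = 100 kg/h (assumed)

R = rref([A b]);                   % reduced system; last column holds the stream flows
x = R(:, end)                      % equivalently, x = A \ b
```

With these numbers the recycle converges to 25 kg/h and the product and waste streams to 87.5 and 12.5 kg/h, so the overall feed-product-waste balance closes while the inner streams are balanced too.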

Related

Generic modeling of an energy supply chain with AnyLogic

I have been working with AnyLogic for about 6 months now, and my goal is to model a generic energy supply chain for an energy demand (e.g. electricity and heat for a house). As a result I want to evaluate how suitable the components in the energy supply chain are for meeting the energy demand.
My idea would be to model the components (e.g. PV -> Battery Storage -> House) as agents. I would model the energy flow within the agents with SD and the individual events of the components (e.g. charging and discharging of the battery) via state diagrams.
Currently I have two problems:
What possibilities are there to create a variable interconnection of my components (agents)? For example, if I do not want to evaluate the scenario PV -> Battery Storage -> House, but PV -> Electrolysis -> Tank -> Fuel Cell -> House. My current approach would be to visually connect the agents with ports and connectors and then pass input and output variables for the SD calculation via set and get functions. Are there other possibilities, e.g. to realize such a connection via an Excel input file? I have seen a similar solution in the video "How to Build a True Digital Twin with Self-Configuring Models Using the Material Handling Library" by Benjamin Schumann, but I am not sure whether this approach can be applied to SD.
To evaluate the energy supply chain, I would like to attach information to the energy flow, for example its type (electricity, heat), its generation price (depending on which components the energy flow has passed through), and so on. Is there a way to add this information to a flow in SD? My current approach would be to model the energy flow as an agent population with appropriate parameters and variables. Agents could then die when energy is consumed or converted from the electricity type to the heat type. However, I don't know whether this fits with the SD modeling of the energy flow.
Maybe you can help me with these problems? I would basically be interested in the opinion of more experienced AnyLogic users on whether my approaches are feasible, or whether there are other or easier approaches. If you know of any tutorial videos or example models that address similar problems, I would also be happy to learn from them.
Best
Christoph
Sounds like what you need is a model that combines the agent-based and system dynamics approaches, with agents populating the stocks (in your case energy that then gets converted into heat) depending on their connections. There is an example of an AB-SD combination model among the 'Example' models, and I also found one on cloud.anylogic.com, although it is from a different domain.
Perhaps if you can put together a simple example and share it, I'll be able to provide more help.

How to make my algorithm depend on time?

I used MATLAB to simulate the cascading failure of two interdependent networks/layers (I generated the two layers with the small-world Watts-Strogatz algorithm).
My code works fine, but it is not time-dependent.
I want to have time steps: for example, the initial attack on one node happens at t1, then after some time the next vulnerable nodes fail at a later time t2, and so on for the other failure events. My code currently emulates telecommunication nodes (all events happen instantaneously), but I want it to work for other logistic networks, say social networks, where the timestamp for every event might be in minutes or hours. Your thoughts and ideas would be appreciated.
Note: I can provide my code if this helps.
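For illustration, here is a minimal MATLAB sketch of what such time-stamped failures could look like using a simple event list. The network generation, the five-minute propagation delay and the vulnerability rule are placeholders, not the asker's actual model:

```matlab
% Toy event-driven cascade: each failure is a (node, time) event; knock-on
% failures are scheduled "delay" minutes after the failure that caused them.
n        = 20;
A        = rand(n) < 0.15;  A = triu(A,1);  A = A + A';   % placeholder adjacency (stand-in for Watts-Strogatz)
failTime = inf(1, n);          % failure timestamp per node (inf = still alive)
delay    = 5;                  % assumed propagation delay in minutes

events = [1, 0];               % pending events: [node, time]; initial attack on node 1 at t = 0
while ~isempty(events)
    [~, k] = min(events(:,2)); % process the earliest pending event first
    node   = events(k,1);
    t      = events(k,2);
    events(k,:) = [];
    if ~isinf(failTime(node)), continue; end   % node already failed earlier
    failTime(node) = t;
    for v = find(A(node,:))                    % neighbours of the newly failed node
        alive = isinf(failTime);
        if alive(v) && sum(A(v, alive)) < 2    % toy vulnerability rule: fewer than 2 live neighbours
            events(end+1,:) = [v, t + delay];  %#ok<AGROW>  schedule its failure
        end
    end
end
disp(failTime)                 % per-node failure times; inf means the node survived
```

Replacing the fixed delay with a per-edge or per-node delay would give the minute/hour-scale timestamps mentioned for logistic or social networks.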

How do I avoid circular equality errors in a closed media circuit? (Modelica)

I'm attempting to build a model of a pumped-water heat-exchange loop in Modelica using thermal/fluid library components (doing my compiling in OpenModelica). The inlet of the constant-volume pump connects to a flow port of a heated pipe component, which is ostensibly the "last" stage of the circuit driven by the pump, and so the loop is closed.
This system worked when my media flowed from an ambient source, through the pump, through the system, and out to an ambient sink, but now that I've "completed the circuit," the simulation fails and I receive "circular equality" errors. This makes a modicum of sense physically, as pressure-type variables need a baseline to reference, but I would expect the friction-based elements in my system to create pressure losses through the circuit, so that the pump would operate normally and the pressures would arrive at zero in the piping leading up to the pump.
Any ideas on how to clear these "circular equality" errors, or particular pitfalls I should be looking out for? Could it be that the constant-volume pump operation is interfering with the pressure calculations, and I should use the ideal pump library component?
Thanks for your thoughts!

Simulink large scale modeling: best practices for interconnecting blocks

What are the best practices for large scale modeling in Simulink when it comes to connecting blocks? Would you use the same structure for all I/O ports of your blocks to facilitate their interconnection (but obviously there will be a lot of redundant signals) or would you define custom structures for each I/O port type with only the necessary information?
For example:
A reactor is modeled as a single block with 4 inputs and 1 output:
I1. Feed, which is a structure containing Flow and Concentrations (7 species);
I2. Mass flow of enzymes - scalar;
I3. Mass flow of water - scalar;
I4. Outflow, which is adjusted by a controller to keep a constant mass in the tank - scalar;
O1. The outstream, which is a struct: Flow and Concentrations (let's say 10 species).
Now imagine this reactor block is only a tiny piece of an entire process. There are enzymes and water tanks connected to it and some other downstream processes etc.
Would you use a single common structure for all I/O ports (even if it scales up to 50-100 components, while each block needs only a few of them, or even just one, like I2, I3 and I4 above, which are scalars)? Is this regarded as bad programming practice?
Or would you customize the I/O port structure for each block? Of course you would group the structures somehow and reuse them, but with no redundant information.
Thanks!
You might find the following useful: http://www.mathworks.co.uk/videos/tips-and-tricks-for-large-scale-model-based-design-part-2-81873.html.
I would personally use a single bus input and a single bus output for your reactor block. You can then group buses together to form larger bus signals as you move up the hierarchy of your model. Look at the Bus Creator and Bus Selector blocks.
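As a rough sketch of that bus approach (the bus and element names here are made up; only the standard Simulink.Bus and Simulink.BusElement objects are assumed), you could define one reusable bus type per stream and reference it from the ports and Bus Creator blocks:

```matlab
% Define a reusable "ProcessStream" bus type for stream-like signals.
flowElem            = Simulink.BusElement;
flowElem.Name       = 'Flow';

concElem            = Simulink.BusElement;
concElem.Name       = 'Concentrations';
concElem.Dimensions = 10;               % e.g. 10 species for the reactor outstream

ProcessStream          = Simulink.Bus;
ProcessStream.Elements = [flowElem, concElem];

% Make the bus type visible to the model so In/Out ports and Bus Creator
% blocks can use it via their data type setting ('Bus: ProcessStream'):
assignin('base', 'ProcessStream', ProcessStream);
```

Keeping such bus definitions in a data dictionary or a model setup script makes them easy to reuse across the reactor, tanks and downstream blocks without duplicating signal lists.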

Create a new event in an output adapter (StreamInsight)

I have the following problem in StreamInsight. I have a query where new tasks from an order come in and trigger an output adapter to make a prediction. The output adapter writes the predicted task cycle time to a table (in Windows Azure). The prediction is based on neural networks and is plugged into the output adapter. After the prediction is written to the table I want to do something else with all the predicted times. So in a second query I want to count the number of written tasks in a time window of 5 minutes. When the number of predicted values saved in the table equals the number of tasks in an order, I want to get all the predicted values from the table and make a prediction of the order cycle time.
For this idea I need to create a new event in my output adapter so that I know the predicted time has been written to the table. But I don't think it's possible to enqueue new events in the StreamInsight server from an output adapter.
Maybe this figure makes the problem clear:
http://i40.tinypic.com/4h4850.jpg
Hope someone can help me.
Thanks Carlo
First off, I'm assuming you are using pre-2.1 StreamInsight based on your use of the term "output adapter".
From what you've posted, I would strongly recommend that your adapters do either input or output, but not both. This cuts down on the complexity, makes the implementation easier, and depending on how you wrote the adapter, you now have a reusable piece of code in your solution.
If you are wanting to send data from StreamInsight to your neural network prediction engine, you will need to write an output adapter to do that. Then I would create an input adapter that will get the results from the neural network prediction engine and enqueue the data into StreamInsight. After creating your stream from the neural network prediction engine input adapter, you can use dynamic query composition to share the stream to a Windows Azure storage output adapter and your next query.
If your neural network prediction engine can "push" data to your input adapter, that would be the way to go. If not, you'll have to poll for results.
There is a lot more to this, but it's difficult to drill in to more specifics without more details.
Hope this helps.