Can the Modelica "Fluid" library handle choked flow?

I'd like to start off by saying that I'm new to StackOverflow and to Modelica.
My goal is to simulate the injector system of a Rotating Detonation Engine. Essentially this is a piping system from a tank to a rocket engine. This system will change depending on the experimental setup, so I chose Modelica (specifically OpenModelica) because of the re-usability of components. The flows encountered will be at high pressures and high flow rates (sustaining a detonation requires this), and choked flow will occur.
My question is this: does the standard "Fluid" library in Modelica allow for choked flow? I understand that a few valves model this, but will the current library be able to capture "choking" in a long rough pipe, or the small end of a converging pipe (basically anywhere choking can happen, despite it not being the design location for a choke)?
If yes, excellent. If not, is there a non-standard library available? Should I be looking at something other than Modelica? I am happy to work on making a new library, but before going through that work I thought I would check to see if anything already existed.
I have read through most of the "Media" library and the basics of the "Fluid" library, and I get the feeling that compressible flow is modeled as a means of increasing accuracy over incompressible flow, but not to actually handle choked flow.
Thank you for your time. I hope everyone is keeping safe!

The pipe model in the Modelica library does not handle choked flows.
Adding a standard orifice in series with the pipe can help, provided the 'zeta' value is adjusted so that the velocity at the orifice matches the speed of sound in the gas. In other words, the Modelica Standard Library does not provide a ready-made means of modeling choked flow in pipes.
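For reference, the choking criterion and the ideal-gas choked mass flow that such a zeta adjustment would have to reproduce can be sketched in a few lines of Python (isentropic flow with constant gamma; the discharge coefficient Cd is an assumed value, not something taken from the library):

```python
import math

def is_choked(p_downstream, p_upstream, gamma=1.4):
    """Flow chokes once the downstream/upstream pressure ratio drops below the critical ratio."""
    critical = (2.0 / (gamma + 1.0)) ** (gamma / (gamma - 1.0))   # about 0.528 for air
    return p_downstream / p_upstream < critical

def choked_mass_flow(p0, T0, area, gamma=1.4, R=287.0, Cd=0.85):
    """Ideal-gas isentropic choked mass flow through an orifice (p0, T0 are upstream stagnation values)."""
    return (Cd * area * p0 * math.sqrt(gamma / (R * T0))
            * (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0))))

# Example: air at 10 bar, 300 K discharging through a 5 mm orifice to 1 bar ambient
area = math.pi * (0.005 / 2.0) ** 2
print(is_choked(1.0e5, 10.0e5))               # True, since 0.1 < 0.528
print(choked_mass_flow(10.0e5, 300.0, area))  # mass flow in kg/s
```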
However, I found a very interesting library called FreeFluids (https://github.com/CarlosTrujilloGonzalez/FreeFluidsModelica) which does have a very good model for choked pipes. An example is provided with the library for choked air flow in a 10 m long, 50 mm diameter circular pipe. The model returns correct values for air.

Related

Flow and volume connectors in the thermo-hydraulic system

In the Thermal Power Library from Modelon, there are two kinds of connectors: flow connector and volume connector.
Based on the tutorial shipped with the library, these two kinds of connectors should NOT be connected with the same kind of connector.
But when I checked their code, the connector definitions appear to be identical.
I also checked the code in the ThermoSysPro library from EDF and in the ThermoPower library. They also use two kinds of connectors, and the recommended connection principle is the same.
So I read the code of “MixVolume” and “SteamTurbineStodola”, which include volume connectors and flow connectors respectively, but I am still not sure about the difference between these two kinds of connectors.
My question is:
Could someone explain the philosophy behind using these two kinds of connectors in thermo-hydraulic systems, and how I should handle them in the code of each component so that they work as they were designed to?
Here is a very short and simplified explanation applying to thermo-hydraulic systems.
In flow models (pipes, valves, etc.) enthalpy is unchanged and mass flow and pressure drop are related by a static equation.
In volume models pressure and enthalpy are dynamic state variables, that is, mass and energy conservation is "elastic".
As a rule of thumb, you should build thermo-hydraulic system models by alternating flow and volume models (a staggered grid scheme) to decouple nonlinear systems.
For the dynamic pipe model in the top figure in your post the connectors merely indicate that, internally, the pipe model begins with a volume model and ends with a flow model.
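As a rough illustration of the alternating flow/volume (staggered grid) idea, here is a minimal Python sketch, assuming an isothermal ideal gas and a linear valve law; none of the numbers or equations come from the original post:

```python
# Two gas volumes (dynamic pressure states) linked by one flow model (static law),
# i.e. the smallest possible staggered-grid arrangement. Isothermal ideal gas,
# linear valve law, explicit Euler; all numbers are illustrative.
R, T, V = 287.0, 300.0, 0.1      # gas constant [J/(kg.K)], temperature [K], volume [m^3]
k = 1e-6                         # valve conductance [kg/(s.Pa)]
p1, p2 = 2.0e5, 1.0e5            # initial pressures of the two volumes [Pa]
dt = 0.01

for _ in range(1000):
    m_flow = k * (p1 - p2)       # flow model: static relation between pressure drop and mass flow
    # volume models: mass balance turned into a pressure state via p = m*R*T/V
    p1 += dt * (-m_flow) * R * T / V
    p2 += dt * (+m_flow) * R * T / V

print(round(p1), round(p2))      # both volumes relax toward a common pressure
```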
Claytex has a nice blog post on the subject here https://www.claytex.com/blog/how-to-avoid-computationally-expensive-fluid-networks-in-dymola/
The authors of the Modelica Buildings Library have also put a great deal of effort into explaining this in various papers. See e.g. https://buildings.lbl.gov/publications/simulation-speed-analysis-and
These kinds of connectors are indeed identical, which follows from the Modelica language specification: you can only connect connectors that are interchangeable, i.e. that have exactly the same number and types of flow and potential variables. At every node all flows have to sum to zero and all potentials have to be equal, so the connectors have to be type consistent.
The difference is purely informational, for the modeler or someone trying to understand the model, and all components have been designed with that convention in mind. It is easiest to understand with electrical components, where you have positive and negative pins which indicate in which direction the current should flow, but this is never actually enforced. Positive and negative pins are, ignoring their names, identical.
Although I don't know the connectors you are talking about, I would assume that the VolumePort is a connector of something that has a volume and passes that information, whereas the FlowPort passes information about the mass flow rate (usually a pipe, I guess). Broken down to abstract DAE theory, one could say the names indicate whether the potential or the flow variable is considered unknown for the component.
I have to emphasize that these are only indicators and that this is never actually enforced by the model or the compiler. It is just how things should logically resolve in the end if you respect the restriction of only connecting VolumePort to FlowPort connectors.
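To make the connection semantics concrete, here is a tiny Python sketch (with made-up numbers) of what connecting two components at a node implies: the potential variables are forced to be equal and the flow variables sum to zero.

```python
# At a connection point, Modelica's connect() semantics imply that the potential
# variables (here: pressure) are equal and the flow variables (mass flow) sum to zero.
# Two components attach to one node through linear flow laws m_i = k_i * (p_i - p_node).
k1, k2 = 2.0e-6, 1.0e-6          # conductances of the two attached components [kg/(s.Pa)]
p1, p2 = 3.0e5, 1.0e5            # boundary pressures on either side [Pa]

# Solve k1*(p1 - p_node) + k2*(p2 - p_node) = 0 for the shared node pressure.
p_node = (k1 * p1 + k2 * p2) / (k1 + k2)
m1 = k1 * (p1 - p_node)
m2 = k2 * (p2 - p_node)
print(p_node, m1 + m2)           # flows balance: m1 + m2 == 0 (up to round-off)
```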

Programming an intelligent game-playing bot

I am taking part in a programming competition where the objective is writing a bot that can play a specific game.
The objective of the game is to earn a certain number of points. You control multiple airships that you move around, capture islands, and navigate drones that carry treasure. You play against one opponent, turns happen simultaneously, and there is a time limit. You can move multiple ships and drones in one turn. You can program your bot in Python, Java or C#.
The exact details don't matter, just that each ship has around 15 options each turn (moving and shooting) and overall you have around 10,000 different options per turn (different configurations of airship movements and shooting).
Up until now I approached this competition naively and haven't done anything exceptionally clever (for example: if near enemy, shoot). I have read about minimax algorithms, and I would really like to apply one here (or something similar); you can assume that I can tell the value of a state. My problem is the mass of options for each turn, which creates an enormous branching factor that doesn't let me search very deep.
Question 1: Is there a better, applicable approach to this problem? Perhaps deep-learning or something similar?
Question 2: Is there a way to reduce the branching factor? I've read about alpha-beta pruning and similar algorithms, but nothing seems to do the job.
Any help would be much appreciated.
The minimax algorithm seems natural for these kinds of problems. First, the game is modelled in an abstract way, and then a solver is used to find a path from the current situation to a game state that maximizes the number of points. A similar approach to minimax is GOAP, which was implemented in the 1970s for the Shakey robot under the name STRIPS. But GOAP and minimax have two problems: first, an abstract model of the game is needed (perhaps in PDDL or in a Game Description Language), and second, the state space is too big.
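If you want to try minimax anyway, a depth-limited alpha-beta search over a sampled subset of moves is one simple way to keep the branching factor manageable. Below is a minimal Python sketch; `evaluate`, `legal_moves` and `apply_move` are hypothetical callbacks you would supply for your game, the random beam sampling is just one possible move-pruning heuristic, and treating simultaneous turns as if they were sequential is itself an approximation.

```python
import random

def alpha_beta(state, depth, alpha, beta, maximizing,
               evaluate, legal_moves, apply_move, beam=20):
    """Depth-limited alpha-beta search over a sampled subset of moves."""
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)
    if len(moves) > beam:                   # cap the branching factor
        moves = random.sample(moves, beam)  # or keep the best `beam` moves by a cheap heuristic
    if maximizing:
        value = float("-inf")
        for move in moves:
            value = max(value, alpha_beta(apply_move(state, move), depth - 1, alpha, beta,
                                          False, evaluate, legal_moves, apply_move, beam))
            alpha = max(alpha, value)
            if alpha >= beta:               # beta cutoff
                break
        return value
    value = float("inf")
    for move in moves:
        value = min(value, alpha_beta(apply_move(state, move), depth - 1, alpha, beta,
                                      True, evaluate, legal_moves, apply_move, beam))
        beta = min(beta, value)
        if alpha >= beta:                   # alpha cutoff
            break
    return value
```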
A better alternative to planning is to use a behavior tree. That is a static program which describes the behavior of an agent. No solver is needed and no complete model of the game is needed. Instead, a bottom-up approach with multiple edit-compile-run iterations is used to find a good behavior tree (test-driven development). To implement this programming approach, a so-called "reactive planner" has to be implemented first, which is another term for a real-time scheduler: a module that maps the behavior tree onto a schedule (a Gantt chart) so that each action is executed at a specific moment in time. As an introduction, the Unity3D engine is a good starting point, which has a full behavior-tree implementation out of the box.
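As a rough illustration of the idea, here is a minimal behavior-tree sketch in Python; the `ship` methods (`enemy_in_range`, `shoot`, `move_to_nearest_island`) are hypothetical and not tied to any particular engine or to the competition's API.

```python
# Minimal behavior-tree sketch. Each node's tick() returns "success", "failure" or "running".
class Selector:
    """Tries children in order until one does not fail."""
    def __init__(self, *children):
        self.children = children
    def tick(self, agent):
        for child in self.children:
            status = child.tick(agent)
            if status != "failure":
                return status
        return "failure"

class Sequence:
    """Runs children in order until one does not succeed."""
    def __init__(self, *children):
        self.children = children
    def tick(self, agent):
        for child in self.children:
            status = child.tick(agent)
            if status != "success":
                return status
        return "success"

class Condition:
    def __init__(self, predicate):
        self.predicate = predicate
    def tick(self, agent):
        return "success" if self.predicate(agent) else "failure"

class Action:
    def __init__(self, act):
        self.act = act
    def tick(self, agent):
        self.act(agent)
        return "success"

# Hypothetical ship API: shoot if an enemy is in range, otherwise head for the nearest island.
ship_tree = Selector(
    Sequence(Condition(lambda ship: ship.enemy_in_range()),
             Action(lambda ship: ship.shoot())),
    Action(lambda ship: ship.move_to_nearest_island()),
)
# Each turn, call ship_tree.tick(ship) for every ship you control.
```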

2D multi-robot simulation libraries?

background
I'm working on a group project to simulate some consensus algorithms used by a group of independent robots to form an arbitrary shape on a 2D plane. The robots are modeled as unit disks, and all run the same algorithm. Basically, each robot can move, wait, or observe its local environment at any moment, but cannot communicate explicitly with any other robot. We'd like to find a simulation or even a 2D graphics library to help us avoid writing too much from scratch.
Question
Can anyone recommend a simulation library meeting the requirements below, which could be used for a multi-robot 2D simulation?
I've never coded a simulation before, so it's possible some of my concerns are readily addressed by many existing libraries. However, the Mason project is the only resource I've found that seems promising so far. Unfortunately, a few of our team members are not very proficient in Java, so I'd like to find something suitable in a different language, if possible.
Requirements
* language preference (descending order): python, c++, (maybe) java
* open source/FOSS recommendations only
* Option/flag to disable visualization: we plan on running several thousand trials of randomly generated shapes against each algorithm, so for the bulk of trials we don't care about any visual representation, just data. The simulation logic therefore has to be decoupled from the graphics components.
* collision detection
* Customizable visual representations: Within a simulation, we'd like to have several views (or toggles for a single view) that present additional information about each robot like current state, the area it's currently observing etc.
For such simple graphics you can surely get away with either PyQt or wxPython.
The simulation itself should be its own python module; the GUI should just load the module, then call its "timestep" function at regular intervals (timer, GUI idle callback, etc); the step function should evolve the robot system by one small time step.
The GUI should just display the simulation state. Avoid mixing everything (display and simulation) in one module, it'll get pretty messy, plus if your simulation engine is a separate module you can then also run it directly from the command line and look at the output file.
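As a sketch of what that decoupling could look like (the module, class names and "consensus" rule below are all made up for illustration; the rule is just a placeholder drift toward the swarm centroid), a GUI or a headless batch driver can both call `step()`:

```python
# robots_sim.py -- simulation logic only, no graphics
import random

class Robot:
    def __init__(self, x, y, radius=0.5):
        self.x, self.y, self.radius = x, y, radius
        self.state = "observe"   # e.g. "move", "wait", "observe"

class Simulation:
    def __init__(self, n_robots, world_size=50.0, seed=None):
        rng = random.Random(seed)
        self.robots = [Robot(rng.uniform(0, world_size), rng.uniform(0, world_size))
                       for _ in range(n_robots)]

    def step(self, dt=0.1):
        """Advance the whole robot system by one small time step."""
        cx = sum(r.x for r in self.robots) / len(self.robots)
        cy = sum(r.y for r in self.robots) / len(self.robots)
        for r in self.robots:
            # placeholder "consensus" rule: drift slowly toward the swarm centroid
            r.x += 0.1 * dt * (cx - r.x)
            r.y += 0.1 * dt * (cy - r.y)

if __name__ == "__main__":
    # headless batch run: no GUI involved, just data
    sim = Simulation(n_robots=20, seed=1)
    for _ in range(10000):
        sim.step()
    print([(round(r.x, 2), round(r.y, 2)) for r in sim.robots])
```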
It would be pretty easy to write a Python script that reads such an output file and generates commands to represent it graphically in either Excel or PowerPoint using win32com, in which case you don't even need PyQt or wxPython.
For the collision detection, look at pybox2d.

Input connector problems in custom modelica models with custom media

I am currently working with a Modelica model in Dymola to simulate a chemical process. The reactor modeling itself is done to a satisfying extent, but I'm having a hard time implementing these models in Modelica, especially with respect to getting the various Media definitions to interconnect and communicate, so to speak. This interconnection is also the key achievement of the Modelica implementation of the model.
At the moment I'm struggling with a specific type of error which, even though it appears quite obvious and straight-forward, I find relatively hard to solve. The errors are of the type:
Note: The input connector p of coopolReactor_2706_1.medium is not connected from the outside.
It is likely that it should have been connected, and recursive check will assume this.
The missing connection is a likely cause of errors in the model.
Note: The input connector h of coopolReactor_2706_1.medium is not connected from the outside.
It is likely that it should have been connected, and recursive check will assume this.
The missing connection is a likely cause of errors in the model.
The model has the same number of unknowns and equations.
The model has the same number of unknowns and equations.
The model EmulsionPolymerizationToolbox.Test.Test_2706 component coopolReactor_2706_1 is structurally singular.
when assuming the most generic outside couplings to all the flow variables of its connectors.
In the specific code which gave this error message, I've mimicked a simple lumped volume by extending base classes from the Modelica Standard Library, but the error is still the same as for my complete reactor models. That's why understanding and solving this problem is vital to the progress of my assignment.
I've been searching a bit online to find out more about what could cause this problem, without much luck. Could someone please elaborate a bit on these kinds of errors, and maybe even suggest solutions? Any input from this board will be useful to me.
Thanks in advance.
Regards, Fredrik.
It's possible this is actually a red herring. It appears as though this message is generated because of an imbalance in equations; Dymola then searches for the source. It may be that when it sees an imbalance in your component, it also notices that you have an unconnected input and reports that, even if that may not be your actual problem.
Another thing to keep in mind is that one of the new features in Modelica 3.x was the addition of rules about local balancing of equations and unknowns. One consequence of these rules is that, for medium models to be balanced, it was necessary to mark some of the variables as inputs (implying they would be specified from the outside). This use of the input qualifier isn't meant to indicate that these variables need to be connected (or even specified via equations or modifications). Instead, it is really just a way of indicating how many equations are provided by the medium model and how many are provided from outside.
So where does this leave you? Well, I could be completely wrong (let's not overlook that possibility). But if I'm right, it indicates that you have an imbalance that has nothing to do with "unconnected inputs". I suppose the only real help my answer gives is to encourage you to look for other "missing" equations.
If you actually posted code of your simple case, someone might be able to spot the missing equation.

simple speech recognition methods

Yes, I'm aware that speech recognition is fairly complicated (as an understatement). What I'm looking for is a method for distinguishing between maybe 20-30 phrases. An ability to split words (discrete speech is fine) would be nice, but isn't required. The software will be user-dependent (i.e. for use by me). I'm not looking for existing software, but for a good way of going about doing this myself. I've looked into various existing methods, and it seems like splitting the sound into phonemes, while common, is somewhat excessive for my needs.
For some context, I'm just looking for a way to control some aspects of my computer with a few simple voice commands. I'm aware that Windows already has speech recognition software, but I'd like to tackle this myself as a learning exercise. Commands would be simple, like "Open Google" or "Mute". What I had in mind (not sure if this is a good idea) is that some commands would be compound. So "Mute" would just be "Mute", whereas the "Open" command could be recognized individually and then have its suffixes (Google, Photoshop, etc.) recognized with another network/model/whatever. But I'm not sure whether looking for prefixes/word breaks in this way would produce better results than having to deal with an increased number of individual commands.
I've been looking into perceptrons, Hopfield networks (though they're somewhat obsolete, from what I understand) and HMMs, and while I understand the ideas behind these (I've implemented ANNs before) I don't really know which is best suited to this task. I'm assuming that linear vector quantization models would also be appropriate, but I can't really find much literature to this end. Any guidance/resources would be greatly appreciated.
There are some open-source projects in speech recognition:
HTK (Hidden Markov Models Toolkit)
Sphinx
Both have decoder, training, and language-model toolkits: everything needed to build a complete and robust speech recognizer.
Voxforge has acoustic and language models for both open source speech recognition toolkits.
Some time ago, I read a whitepaper about a limited-vocabulary system which used a simple recognition process. The system divided each utterance into a small number of bins (6 in time and 4 in magnitude, if I remember correctly, for 24 total), and all it did was count the number of audio samples that fell in each bin. A fuzzy-logic rule base then interpreted each utterance's 24 bin counts and generated an interpretation.
I imagine that (for some applications) a simple matching process might work just as well, in which the 24 bin counts of the current utterance are simply matched against those of each of your stored prototypes, and the one with the smallest overall difference is the winner.
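As a rough sketch of that bin-counting idea (the bin layout, normalisation and nearest-prototype matching below are assumptions, not details from the whitepaper), assuming each utterance is available as an array of audio samples:

```python
import numpy as np

def bin_features(samples, time_bins=6, mag_bins=4):
    """Count audio samples per (time, magnitude) bin -> flat 24-element feature vector."""
    mags = np.abs(np.asarray(samples, dtype=float))
    mags = mags / (mags.max() + 1e-12)                  # normalise magnitudes to [0, 1]
    edges = np.linspace(0, len(mags), time_bins + 1).astype(int)
    counts = np.zeros((time_bins, mag_bins))
    for t in range(time_bins):
        hist, _ = np.histogram(mags[edges[t]:edges[t + 1]], bins=mag_bins, range=(0.0, 1.0))
        counts[t] = hist
    return counts.ravel() / max(len(mags), 1)           # normalise by utterance length

def classify(utterance, prototypes):
    """prototypes: dict mapping command name -> stored feature vector of a reference utterance."""
    feats = bin_features(utterance)
    return min(prototypes, key=lambda name: np.abs(feats - prototypes[name]).sum())
```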