How do I avoid circular equality errors in a closed media circuit? (Modelica)

I'm attempting to build a model of a pumped water heat-exchange loop in Modelica using thermal/fluid library components (compiling in OpenModelica). The inlet of the constant-volume pump connects to a flow port of a heated pipe component, which is ostensibly the "last" stage of the circuit driven by the pump, so the loop is closed.
This system worked when my media flowed from an ambient source, through the pump, through the system, and out to an ambient sink, but now that I've "completed the circuit," the simulation fails and I receive "circular equality" errors. This makes a modicum of sense physically, as pressure-type variables need a baseline to reference, but it seems that the friction-based elements in my system would create pressure losses around the loop, and the pump would operate normally, as the pressure would arrive back at zero in the piping leading up to the pump.
Any ideas on how to clear these "circular equality" errors, or particular pitfalls I should be looking out for? Could it be that the constant-volume pump operation is interfering with the pressure calculations, and I should use the ideal pump library component?
Thanks for your thoughts!

Can the Modelica "Fluid" library handle choked flow?

I'd like to start off by saying that I'm new to StackOverflow and to Modelica.
My goal is to simulate the injector system of a Rotating Detonation Engine. Essentially this is a piping system from a tank to a rocket engine. This system will change depending on the experimental setup, so I chose Modelica (specifically OpenModelica) because of the re-usability of components. The flows encountered will be at high pressures and high flow rates (sustaining a detonation requires this), and choked flow will occur.
My question is this: does the standard "Fluid" library in Modelica allow for choked flow? I understand that a few valves model this, but will the current library be able to capture "choking" in a long rough pipe, or the small end of a converging pipe (basically anywhere choking can happen, despite it not being the design location for a choke)?
If yes, excellent. If not, is there a non-standard library available? Should I be looking at something other than Modelica? I am happy to work on making a new library, but before going through that work I thought I would check to see if anything already existed.
I have read through most of the "Media" library and the basics of the "Fluid" library, and I get the feeling that compressible flow is modeled as a means of increasing accuracy over incompressible flow, but not to actually handle choked flow.
Thank you for your time. I hope everyone is keeping safe!
The pipe model in the Modelica library does not handle choked flows.
Adding a standard orifice in series with the pipe should help, provided the 'zeta' value is adjusted so that the velocity at the orifice matches the speed of sound in the gas. In other words, the Modelica Standard Library does not provide a valid means of modeling choked flow in pipes out of the box.
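To illustrate, here is a rough sketch of that workaround using Modelica.Fluid components and dry air (the component choices, boundary conditions and the zeta value are only assumptions for illustration; zeta would have to be tuned until the orifice velocity reaches the speed of sound for the conditions of interest):

    model ChokedPipeWorkaround "Pipe with a hand-tuned orifice approximating a choke point"
      replaceable package Medium = Modelica.Media.Air.DryAirNasa;
      inner Modelica.Fluid.System system;
      Modelica.Fluid.Sources.Boundary_pT source(
        redeclare package Medium = Medium, p = 10e5, T = 300, nPorts = 1);
      Modelica.Fluid.Pipes.StaticPipe pipe(
        redeclare package Medium = Medium, length = 10, diameter = 0.05);
      Modelica.Fluid.Fittings.SimpleGenericOrifice orifice(
        redeclare package Medium = Medium, diameter = 0.05, zeta = 5)
        "zeta adjusted by hand so the orifice velocity approaches the speed of sound";
      Modelica.Fluid.Sources.Boundary_pT sink(
        redeclare package Medium = Medium, p = 1e5, T = 300, nPorts = 1);
    equation
      connect(source.ports[1], pipe.port_a);
      connect(pipe.port_b, orifice.port_a);
      connect(orifice.port_b, sink.ports[1]);
    end ChokedPipeWorkaround;

Note that a fixed-zeta orifice only reproduces the pressure loss of a choke at one operating point; it does not limit the mass flow rate the way real choked flow does once the downstream pressure drops further.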
However, I found a very interesting library called FreeFluids (https://github.com/CarlosTrujilloGonzalez/FreeFluidsModelica) which does have a very good model for choked pipes. An example is provided with the library for choked air flow in a 10 m long, 50 mm diameter circular pipe; the model returns correct values for air.

Q# ResourcesEstimator for quantum chemistry of 1000+ qubit systems

This is a Q# question about resource estimation for quantum chemistry problems.
In the documentation for ResourcesEstimator, it states that it works "...by executing the quantum operation without actually simulating the state of a quantum computer; for this reason, it can estimate resources for Q# operations that use thousands of qubits."
I am wondering how we can perform resource estimation for quantum chemistry simulations on thousands of qubits. Although a quantum circuit of thousands of qubits can be an input to the ResourcesEstimator, it is not clear to me how to generate that circuit using the conventional workflow described in the end-to-end documentation with NWChem.
As far as I understand, the .nw file is used to generate the molecular electron integrals, which are output to a Broombridge .yaml file that is then loaded by GetGateCount and similar resource estimators. However, for a 1000+ qubit chemistry simulation, just the generation of the .yaml file would take days on a beefy computer, and the file would be gigabytes or terabytes in size.
My question is: can we do this resource estimation without explicitly calculating the Hamiltonian matrix elements? If not, how do you propose doing these large-scale resource estimations 'up to thousands of qubits'?
Thanks for your help!
It would be more accurate to say "it can estimate resources for Q# operations that use thousands of qubits, if the classical part of the code can be executed in a reasonable time".
The QDK resource estimator is basically a special simulator which still "executes" the Q# program it gets. Unlike the full state or Toffoli simulators, though, it does not simulate the effect of the gates and measurements on the state of the quantum system; instead, it increments certain counters that track the metrics reported by the resource estimator. For example, if you use a T gate, it will increment the counter of T gates but will not touch the counter of Pauli gates or CNOTs.
This means that the resource estimator can run much larger programs than the other simulators (the main restriction on full state simulator comes from the need to update the full state of the system, which grows larger than the available memory around 30-40 qubits). But it still needs to be able to run the program, going through all the gates and all the classical computations involved, even if going through the gates is much more lightweight than on a full state simulator.

Flow and volume connectors in thermo-hydraulic systems

In the Thermal Power Library from Modelon, there are two kinds of connectors: flow connector and volume connector.
Based on the tutorial shipped with the library, these two kinds of connectors should NOT be connected with the same kind of connector.
But I checked their code, and it seems the code for the two connectors is the same.
I also checked the code in the ThermoSysPro library from EDF and in the ThermoPower library. They also use two kinds of connectors, and the recommended connection principle is the same.
So I read the code of "MixVolume" and "SteamTurbineStodola", which include volume connectors and flow connectors respectively, but I am not sure what the difference between these two kinds of connectors is.
My question is: could someone explain the philosophy behind using two kinds of connectors in thermo-hydraulic systems, and how I should handle them in the code of each component so that they work as designed?
Here is a very short and simplified explanation applying to thermo-hydraulic systems.
In flow models (pipes, valves, etc.) enthalpy is unchanged and mass flow and pressure drop are related by a static equation.
In volume models, pressure and enthalpy are dynamic state variables; that is, mass and energy conservation is "elastic".
As a rule of thumb, you should build thermo-hydraulic system models by alternating flow and volume models (a staggered-grid scheme) to decouple nonlinear systems.
For the dynamic pipe model in the top figure of your post, the connectors merely indicate that, internally, the pipe model begins with a volume model and ends with a flow model.
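To make the distinction concrete, here is a deliberately stripped-down, isothermal sketch of my own (not code from any of the libraries mentioned; it uses Modelica.Units.SI from MSL 4.0). The flow component contains only an algebraic relation, while the volume component owns the pressure state:

    connector HydraulicPort
      Modelica.Units.SI.AbsolutePressure p "Potential variable";
      flow Modelica.Units.SI.MassFlowRate m_flow "Flow variable";
    end HydraulicPort;

    model FlowComponent "Static relation between pressure drop and mass flow (valve, short pipe, ...)"
      parameter Real k(unit = "kg/(s.Pa)") = 1e-5;
      HydraulicPort a, b;
    equation
      0 = a.m_flow + b.m_flow "No storage";
      a.m_flow = k*(a.p - b.p) "Algebraic (static) pressure-drop law";
    end FlowComponent;

    model VolumeComponent "Pressure is a dynamic state driven by the mass balance"
      parameter Modelica.Units.SI.Volume V = 1e-3;
      parameter Modelica.Units.SI.Density rho0 = 1000;
      parameter Modelica.Units.SI.AbsolutePressure p0 = 1e5;
      parameter Real beta(unit = "Pa") = 1e9 "Effective bulk modulus";
      HydraulicPort a, b;
      Modelica.Units.SI.AbsolutePressure p(start = p0, fixed = true);
    equation
      a.p = p;
      b.p = p;
      V*rho0/beta*der(p) = a.m_flow + b.m_flow "Linearized dynamic mass balance";
    end VolumeComponent;

Connecting these alternately (volume, flow, volume, flow, ...) gives every flow component known pressures on both sides and every volume a well-posed state, which is exactly the staggered-grid decoupling described above.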
Claytex has a nice blog post on the subject here https://www.claytex.com/blog/how-to-avoid-computationally-expensive-fluid-networks-in-dymola/
Also the authors of the Modelica Buildings Library have done a great effort explaining this in various papers. See e.g. https://buildings.lbl.gov/publications/simulation-speed-analysis-and
These kinds of connectors are indeed identical, due to the Modelica language specification. You can only connect two connectors that are interchangeable, i.e. that have exactly the same number and types of flow and potential variables. At every node, all flows have to sum to zero and all potentials have to be equal, so the connectors have to be type consistent.
The difference is purely informational, for the modeler or for someone trying to understand the model; all components have been designed with this convention in mind. It is easiest to understand with electrical components, where you have positive and negative pins which indicate in which direction the current should flow, but this is never actually enforced. Positive and negative pins are, apart from their names, identical.
Although I don't know the specific connectors you are talking about, I would assume that a VolumePort belongs to a component that has a volume and passes that information on, whereas a FlowPort passes information about the mass flow rate (usually a pipe, I would guess). Broken down to abstract DAE theory, one could say the names indicate whether the potential or the flow variable is considered unknown for the component.
I have to emphasize that these are only indicators and that this is never actually enforced by the model or the compiler. It is just how things should logically resolve in the end if you respect the restriction of only connecting VolumePort to FlowPort connectors.
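For reference, the connector used throughout Modelica.Fluid looks roughly like this (paraphrased from the Modelica Standard Library; FluidPort_a and FluidPort_b extend it and differ only in their icons), which illustrates that nothing in the connector itself distinguishes a "volume" side from a "flow" side:

    connector FluidPort
      replaceable package Medium = Modelica.Media.Interfaces.PartialMedium "Medium model";
      flow Medium.MassFlowRate m_flow "Flow variable: mass flow rate into the component";
      Medium.AbsolutePressure p "Potential variable: pressure at the port";
      stream Medium.SpecificEnthalpy h_outflow "Specific enthalpy carried by outflowing medium";
      stream Medium.MassFraction Xi_outflow[Medium.nXi] "Independent mass fractions of outflowing medium";
      stream Medium.ExtraProperty C_outflow[Medium.nC] "Trace substances of outflowing medium";
    end FluidPort;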

Concurrent execution in Simulink real-time

I have two model references, a Slow model and a Fast model, each running at its own rate for concurrent execution on the grt ("generic real time") target. However, when I attempt to build the block, I get the following error:
Simulink cannot generate code for the signal at output port 1 of block
'Multirate/Fast' because the signal requires data transfer that
generates lock-free code for a rate monotonically scheduled task.
I am not sure what to configure in Simulink to overcome this error. I attempted to add a rate transition block from the Fast model to the Slow model, but the error remains.
Any thoughts?
Since there are many possibilities, I can't give you a simple answer, but you can try the following:
Check whether Simulink can determine your sample rates. Did you configure them correctly? Enable the sample time colors in the view settings; then you can see whether Simulink detects the sample times correctly.
If your Simulink block ('Fast') is contained in a single subsystem, make it an atomic subsystem and configure the sample time in the subsystem properties.
Set the strictest constraints in the rate transition block.
How is your model configuration set up? Is it set to multitasking?

Using a subset of a SUMO scenario for OMNeT++ network simulation (with VEINS)

I'm trying to evaluate an application that runs on a vehicular network using OMNeT++, Veins and SUMO. Because the application relies on realistic traffic behavior, I decided to use the LuST Scenario, which seems to be the state of the art for such data. However, I'd like to use specific parts of this scenario instead of the entire scenario (e.g., a high and a low traffic load fragment, perhaps others). It would be nice to keep the bidirectional functionality that VEINS offers, although I'm mostly interested in getting traffic data from SUMO into my simulation.
One obvious way to implement this would be to use a warm-up period. However, I'm wondering if there is a more efficient way -- simulating 8 hours of traffic just to get a several-minute fragment feels inefficient and may be problematic for simulations with sufficient repetitions.
Does VEINS have a built-in mechanism for warm-up periods, primarily one that avoids sending messages (which is by far the most time-consuming part of the simulation), or does it have a way to wait for SUMO to advance, e.g., to a specific time stamp (which would also avoid creating vehicle objects in OMNeT++ and thus all the initialization code)?
In case it's relevant -- I'm using the latest stable versions of OMNeT++ and SUMO (OMNeT++ 4.6 with SUMO 0.25.0) and my code base is based on VEINS 4a2 (with some changes, notably accepting the TraCI API version 10).
There are a few things you can do here to reduce the number of sent messages in Veins:
Use the OMNeT++ Warm-Up Period as described here in the manual. Basically it means to set warmup-period in your .ini file and make sure your code checks this with if (simTime() >= simulation.getWarmupPeriod()). The OMNeT++ signals for result collection are aware of this.
The TraCIScenarioManager offers a variable double firstStepAt #unit("s") which you can use to delay its start. Again, this can be set in the .ini file.
As the VEINS FAQ states, the TraCIScenarioManagerLaunchd offers two variables to configure the region of interest, based on rectangles or roads (string roiRoads and string roiRects). To reduce the simulated area, you can restrict the simulation to a specific rectangle; for example, *.manager.roiRects="1000,1000-3000,3000" simulates a 2x2 km area between the two supplied coordinates.
With these solutions (best used in combination) you still have to run SUMO, but Veins barely consumes any of the time.