Switch between two flanges - modelica

I am currently working with multibody mechanical systems using the MultiBody library included in the standard Modelica distribution.
I need to implement a switch between flanges, in order to select position or force control for a given joint.
model FlangeSwitch "Switch between flanges"
  Modelica.Mechanics.Translational.Interfaces.Flange_a flange_a_1;
  Modelica.Mechanics.Translational.Interfaces.Flange_b flange_b_1;
  Modelica.Mechanics.Translational.Interfaces.Flange_a flange_a_2;
  Modelica.Mechanics.Translational.Interfaces.Flange_b flange_b_2;
  Modelica.Mechanics.Translational.Interfaces.Flange_a flange_a_exit;
  Modelica.Mechanics.Translational.Interfaces.Flange_b flange_b_exit;
  Modelica.Blocks.Interfaces.BooleanInput u;
equation
  if u then
    flange_a_exit = flange_a_2;
    flange_b_exit = flange_b_2;
  else
    flange_a_exit = flange_a_1;
    flange_b_exit = flange_b_1;
  end if;
end FlangeSwitch;
But this approach does not work: the system is not balanced, with 10 equations and 12 variables.
Is there any way to do this?

I don't think a Modelica tool will allow this operation (even if you have a balanced model), as it would potentially result in a variable-structure system, which is something Modelica does not support at the moment. See a nice introduction here: https://www.modelica.org/events/modelica2017/proceedings/html/submissions/ecp17132291_Stuber.pdf
Without fully knowing the application you could try two approaches:
Use a model that emulates a clutch, such as Modelica.Mechanics.Translational.Components.Brake with the parameter useSupport enabled. This way you can create a "controllable mechanical connection" for connecting either of the flanges to the support connector. If I read your code correctly, you would connect flange_a_2 to the support and flange_a_exit to either flange_a or flange_b of the brake. When the brake is activated via its RealInput, there is a mechanical connection.
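A rough, untested sketch of this idea; the connector names and the parameter value are placeholders, not from the original post:

model BrakeAsSwitch "Sketch: use a Brake with useSupport=true as a controllable connection"
  Modelica.Mechanics.Translational.Components.Brake brake(useSupport=true, fn_max=1000);
  Modelica.Mechanics.Translational.Interfaces.Flange_a flange_in "e.g. flange_a_2";
  Modelica.Mechanics.Translational.Interfaces.Flange_b flange_out "e.g. flange_a_exit";
  Modelica.Blocks.Interfaces.RealInput engage "0 = disconnected, >0 = connected";
equation
  connect(flange_in, brake.support);     // the "other side" acts as the moving housing
  connect(brake.flange_b, flange_out);   // flange that is locked to the support when braking
  connect(engage, brake.f_normalized);   // normalized normal force activates the friction
end BrakeAsSwitch;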
The second thing you can try is to measure either position or force (whichever of the two you want to apply) with a sensor such as Modelica.Mechanics.Translational.Sensors.PositionSensor, and then apply it using the respective source, which in this case would be Modelica.Mechanics.Translational.Sources.Position. Switching between the sources could then be done by switching the Real signals instead of the physical connectors. Mind that this could generate jumps in position when applying positions directly.
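A minimal sketch of the signal-switching idea, assuming a Modelica.Blocks.Logical.Switch selects between two Real position set-points that drive a single Position source (all names are made up):

model PositionSignalSwitch "Sketch: switch Real set-points instead of physical flanges"
  Modelica.Blocks.Interfaces.RealInput s_ref_1 "Position set-point 1";
  Modelica.Blocks.Interfaces.RealInput s_ref_2 "Position set-point 2";
  Modelica.Blocks.Interfaces.BooleanInput u "Selector";
  Modelica.Blocks.Logical.Switch switch1 "y = if u then s_ref_1 else s_ref_2";
  Modelica.Mechanics.Translational.Sources.Position position(exact=false) "Drives the flange";
  Modelica.Mechanics.Translational.Interfaces.Flange_b flange;
equation
  connect(s_ref_1, switch1.u1);
  connect(u, switch1.u2);
  connect(s_ref_2, switch1.u3);
  connect(switch1.y, position.s_ref);
  connect(position.flange, flange);
end PositionSignalSwitch;

As noted above, a discontinuity in the selected set-point will show up as a jump in the commanded position.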

The link you posted is related to non-physical connectors, which are less restrictive than physical connectors, so comparing the two solutions should be done very carefully.
Switching from position as an input to force as an input would require the system of equations to be rebuilt when executing this switch. This will not be possible with current generation Modelica. You will need to find a solution that is based on the same input for the whole simulation.
Would it be enough to initialize the position so that the system starts the simulation at the point where you want to move it to first (using the Position source)? What you lose is the movement of the system to this position.

Integrate a Modelica variable without influencing state selection

I want to integrate a Modelica variable over time, just for convenience in plotting and post-processing. The variable I want to integrate over time is the power of a compressor so that I get the total energy. The first idea would be to add these lines:
Modelica.Units.SI.Power P_comp;
Modelica.Units.SI.Energy E_comp;
equation
P_comp = der(E_comp);
Is that the recommended way, or are there (better?) alternatives? Is it expected to influence the selection of dynamic states?
Assuming that those two lines are the only ones using E_comp, that should work.
Basically E_comp will be part of its own separate state-selection block and changes there shouldn't influence anything else.
However, state selection consists of a number of algorithms and heuristics so it is difficult to formally guarantee that any change does not influence it.
I could imagine some strange possibilities that would break this, but I don't think anyone has implemented them - and I don't see a use-case for them (except to mess up cases like this).
And if you want to differentiate a signal instead of integrating it, things get a lot messier.
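For reference, a minimal self-contained model of this pattern; the constant power value is only a placeholder, and the fixed start value gives E_comp a well-defined initial condition:

model EnergyIntegration "Sketch: integrate a power to obtain an energy"
  Modelica.Units.SI.Power P_comp = 1000 "Placeholder for the actual compressor power";
  Modelica.Units.SI.Energy E_comp(start=0, fixed=true) "Integrated energy";
equation
  P_comp = der(E_comp);
end EnergyIntegration;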

What's wrong with this MATLAB example (User-Added Coordinate Systems)?

Well, I am working on a very interesting project concerning a gear system that will rotate a shaft and some gears. I am following a MATLAB tutorial,
User-Added Coordinate Systems
on how to decouple and have two gears rotating.
I need some help understanding the following figure, which is the output of the above link.
So what I do next is un-weld the two gears by deleting the connection of F1, and later introduce the common gear constraint by connecting it to the SMLINK port on both gear blocks. I get an unusual message that says:
" * Model not assembled: position violation * Resolve this issue in order to simulate the model."
Can someone explain what is happening?
Also, what is the difference between first-generation SimMechanics and Simscape Multibody? Can I have joint actuators in both cases, and if so, how would I implement them in the example given above?
For those who would like to answer but don't have SolidWorks, the gear block boxes and figure are the following:
It looks like the right thing to do, but how have you parameterised your gear constraint? Have a look at mech_user_added_css.mdl for the correct way to do it (it's one of the SimMechanics examples, but it uses the first-generation engine and blocks). Make sure the gear circle radii match what's in the first-generation example. It will also help to answer your question about first generation vs. second generation.
SimMechanics was one of the early physical modelling tools produced by MathWorks. A few years later, they produced Simscape and the Simscape engine for modelling multi-domain systems. This was much more powerful than the original SimMechanics, so over the years they migrated SimMechanics functionality over to Simscape, but kept the original first-generation blocks for compatibility reasons. Have a look at some of the first-generation vs. second-generation examples and blocks to get an idea.

Using a subset of a SUMO scenario for OMNeT++ network simulation (with VEINS)

I'm trying to evaluate an application that runs on a vehicular network using OMNeT++, Veins and SUMO. The application relies on realistic traffic behavior, so I decided to use the LuST Scenario, which seems to be the state of the art for such data. However, I'd like to use specific parts of this scenario instead of the entire scenario (e.g., a high and a low traffic load fragment, perhaps others). It'd be nice to keep the bidirectional functionality that Veins offers, although I'm mostly interested in getting traffic data from SUMO into my simulation.
One obvious way to implement this would be to use a warm-up period. However, I'm wondering if there is a more efficient way -- simulating 8 hours of traffic just to get a several-minute fragment feels inefficient and may be problematic for simulations with sufficient repetitions.
Does Veins have a built-in mechanism for warm-up periods, primarily one that avoids sending messages (which is by far the most time-consuming part of the simulation), or does it have a way to wait for SUMO to advance, e.g., to a specific time stamp (which also avoids creating vehicle objects in OMNeT++ and thus all the initialization code)?
In case it's relevant -- I'm using the latest stable versions of OMNeT++ and SUMO (OMNeT++ 4.6 with SUMO 0.25.0) and my code base is based on VEINS 4a2 (with some changes, notably accepting the TraCI API version 10).
There are a few things you can do here to reduce the number of sent messages in Veins:
Use the OMNeT++ warm-up period as described in the manual. Basically, set warmup-period in your .ini file and make sure your code checks it with if (simTime() >= simulation.getWarmupPeriod()). The OMNeT++ signals for result collection are aware of this.
The TraCIScenarioManager offers a parameter double firstStepAt @unit("s") which you can use to delay its start. Again, this can be set in the .ini file.
As the Veins FAQ states, the TraCIScenarioManagerLaunchd offers two parameters to configure the region of interest, based on rectangles or roads (string roiRoads and string roiRects). To reduce the simulated area, you can restrict the simulation to a specific rectangle; for example, *.manager.roiRects = "1000,1000-3000,3000" simulates a 2x2 km area between the two supplied coordinates.
With both solutions (best used in combination) you still have to run SUMO, but Veins barely consumes any of the time.
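A hypothetical omnetpp.ini fragment combining these options (the module name manager and all numeric values are placeholders; parameter names follow the Veins TraCIScenarioManager):

[General]
# OMNeT++ warm-up period; signal-based result collection ignores this interval
warmup-period = 300s
# Veins TraCIScenarioManager: delay the first TraCI step (and thus vehicle creation)
*.manager.firstStepAt = 300s
# Region of interest: only vehicles inside this rectangle are instantiated in OMNeT++
*.manager.roiRects = "1000,1000-3000,3000"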

2D multi-robot simulation libraries?

Background
I'm working on a group project to simulate some consensus algorithms used by a group of independent robots to form an arbitrary shape on a 2D plane. The robots are modeled as unit disks, and all run the same algorithm. Basically, each robot can move, wait, or observe its local environment at any moment, but cannot communicate explicitly with any other robot. We'd like to find a simulation or even 2D graphics library to help us avoid writing too much from scratch.
Question
Can anyone recommend a simulation library meeting the requirements below, which could be used for a multi-robot 2D simulation?
I've never coded a simulation before, so it's possible some of my concerns are readily addressed by many existing libraries. However, the Mason project is the only resource I've found that seems promising so far. Unfortunately, a few of our team members are not very proficient in Java, so I'd like to find something suitable in a different language, if possible.
Requirements
* language preference (descending order): Python, C++, (maybe) Java
* open source/FOSS recommendations only
* Options/flags to disable visualization: We plan on running several thousand trials of randomly generated shapes against each algorithm, so for the bulk of trials we don't care about any visual representation, just data. So the simulation logic has to be decoupled from the graphics components, if that makes sense.
* collision detection
* Customizable visual representations: Within a simulation, we'd like to have several views (or toggles for a single view) that present additional information about each robot, like its current state, the area it's currently observing, etc.
For such simple graphics you can surely get away with either PyQt or wxPython.
The simulation itself should be its own Python module; the GUI should just load the module, then call its "timestep" function at regular intervals (timer, GUI idle callback, etc.); the step function should evolve the robot system by one small time step.
The GUI should just display the simulation state. Avoid mixing everything (display and simulation) in one module, it'll get pretty messy, plus if your simulation engine is a separate module you can then also run it directly from the command line and look at the output file.
It would be pretty easy to write a Python script that reads such an output file and generates commands to represent it graphically in either Excel or PowerPoint using win32com, in which case you don't even need PyQt or wxPython.
For the collision detection, look at pybox2d.
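A rough sketch of that separation (all module, class, and function names here are made up): the simulation lives in its own module with a timestep function and can be run headless from the command line, while a GUI would simply import it and call timestep on a timer.

# sim_core.py - simulation logic only, no graphics
import random

class Robot:
    """Unit-disk robot; the movement rule below is a placeholder for the consensus algorithm."""
    def __init__(self, x, y):
        self.x, self.y = x, y

    def step(self, dt):
        # placeholder behaviour: small random walk
        self.x += random.uniform(-1.0, 1.0) * dt
        self.y += random.uniform(-1.0, 1.0) * dt

class Simulation:
    def __init__(self, n_robots=10):
        self.robots = [Robot(random.random(), random.random()) for _ in range(n_robots)]

    def timestep(self, dt=0.1):
        # advance every robot by one small time step
        for r in self.robots:
            r.step(dt)

if __name__ == "__main__":
    # headless batch run: no GUI imported or involved
    sim = Simulation()
    for _ in range(1000):
        sim.timestep()
    print([(round(r.x, 2), round(r.y, 2)) for r in sim.robots])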

Is there a way in Simulink to use the same set of blocks on multiple signals (without copying those blocks)?

I am implementing some head tracking and I get two matrices of velocities (a vector field decomposed into vertical and horizontal components). For each of these matrices I do some math to calculate the actual head tracking.
My question is, is there a way to do that math (which is a set of blocks) on both matrices without copying the math blocks onto each signal?
It's hard to explain so here's a screen shot of my model:
You can see that the "complex to real-imag" block has 2 outputs (this is the little one in the middle). The mean block and the integrator circuit then calculate the head velocity and position for the real matrix (horizontal position). I want to do exactly the same routine on the imaginary matrix (vertical direction). Obviously I can just copy the blocks, but surely there must be a better way of doing it? In a way I'm looking for an analogue of a loop in "normal programming" like C or something, where a block of code is executed several times on different inputs.
You can create a Library in Simulink that contains code you can reference multiple times.
Go to File -> New -> Library. In the model window that opens, you can create any number of subsystems with whatever code you want. Then, just drag a subsystem from the library into your model. The subsystem will now appear in your model with a little arrow icon in the lower left. This indicates that the subsystem in the model is a link. You can drag as many instances of the library subsystem into your model as you wish, just as you can call a function as many times as you wish in any other programming language.
If you right-click on the subsystem in your model, you can select "Link Options -> Go To Library Block" to get back to the library. You can make changes in your model and propagate them back to the library as well.
One way to easily reuse a set of blocks is to create a subsystem out of them. In your case, you can create a subsystem by grouping existing blocks, then simply copy and paste your subsystem to use it for your imaginary output.
Although potentially more complicated, you could also look into using mux signals to avoid having to copy parts of your model.