I've created a Dymola model. It has an empty tank, which is connected to the output of a sweptVolume component via a static pipe. The input to the sweptVolume is a constant force, which I want to use to push water from the hydraulic cylinder into the tank.
I've assumed a cross-sectional area for the piston and calculated the force needed to displace the water in the cylinder, assuming the pressure to be atmospheric (101.325 kPa). But somehow the water is not getting displaced: the volume remains constant and the tank does not fill.
Please suggest what type of input should be given to the sweptVolume element (position, move, etc.) in case a constant force is the wrong kind of input.
I would like to thank you for your time and interest.
Setting up the initial conditions is just a matter of the GUI: add "flange(s(start=1, fixed=true))" in the Add modifiers tab of the sweptVolume parameter dialog in Dymola. To get your model to work, just invert the sign of the force; the sign convention for the force block is shown by its arrow, so to compress the piston and fill the tank you have to set the const value to minus something. Also check the fluid volumes, since the simulation will stop when the tank overflows or when the piston reaches the end of its stroke (negative value of s). So you have to set up the forces correctly, or size the tank and piston volumes consistently, or place some kind of mechanical stop on the piston. The model can work fine even without masses added to the piston.
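As a quick numerical sanity check (not part of the original answer), here is a minimal Python sketch of the force magnitude and the sign flip described above; the piston area used below is a made-up example value, not one from the question:

```python
# Minimal sketch (not from the original answer): rough magnitude of the force
# needed to work against atmospheric back-pressure, and the sign flip for the
# force block. The piston area is a hypothetical example value.

p_atm = 101_325.0   # Pa, atmospheric pressure mentioned in the question
area = 0.01         # m^2, hypothetical piston cross-sectional area

force_magnitude = p_atm * area   # F = p * A  ->  about 1013 N for this area
force_input = -force_magnitude   # a negative const value compresses the piston,
                                 # following the arrow on the force block

print(force_magnitude, force_input)
```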
Hope this helps,
Marco
How do I adjust the character's speed when it crosses a cube of water?
And how do I make it move more slowly on the surface of the water, while keeping the ability to sprint with Shift?
Speed up movement when pressing Shift
Water surface blueprint
Water surface blueprint components
Put an OnComponentBeginOverlap event on 'BP_WaterRegion' that checks what kind of actor it's just overlapped with. If it's the player character then you can set the MaxWalkSpeed variable on the character from there. The OnComponentEndOverlap event should be self-explanatory now.
If you select 'WaterMesh' you should find these events at the bottom of the Details panel.
(It's a while since I've worked in UE - you might need to connect the output from the cast node to the == node instead.)
I'm trying to create a realistic physics simulation of a universal joint in Unity. In my case the universal joint is in a vertical position. The whole mechanism consists of four objects: a white ceiling, a green shaft, a blue universal joint and a red shaft. All parts have a Rigidbody component with a specific mass (ceiling: 1 kg, green shaft: 0.05 kg, blue universal joint: 0.01 kg, red shaft: 0.05 kg). The centers of mass of the parts are shown in the pictures. The object dimensions are NOT scaled and represent the real dimensions of the objects in cm. No Box Colliders or Mesh Colliders were added to the objects.
Green shaft center of mass point, Blue universal joint center of mass point, Red shaft center of mass point
The parameters for all objects are shown in the pictures: Green shaft parameters, Blue universal joint parameters and Red shaft parameters. Only a Rigidbody component was added to the ceiling object, with the "Is Kinematic" option enabled and the "Use Gravity" option disabled. One Fixed Joint and two Hinge Joints are used for the objects. My aim is to create realistic physical behavior of this universal joint in simulation. For example, while playing the scene I want to be able to move the end of the red shaft, let it go, and watch the red shaft swing back to its initial position under gravity and friction. I want to do this with object components and project settings, not with a script, if possible. If I succeed with the universal joint physics, I later plan to add a point at the end of the red shaft object with a Constant Force component, which would pull the red shaft end on demand in a certain direction with a force based on real measurements.
The problem I'm currently facing is with the object swinging back to its initial position. If I leave the Rigidbody and Hinge Joint parameters as shown in the pictures and try to move the red shaft end in play mode, the red shaft and blue joint swing back to the initial position very, very slowly, as if they had no mass or very high friction. But if I turn on the Use Spring parameter in the Hinge Joints of the red shaft and blue joint and set the Spring value to, for example, 0.2, the shaft goes back to the initial position much faster, as expected. I also noticed that if I increase the objects' Scale parameters (increase the object size), for example from 1 to 100, they swing faster even without the spring turned on.
My first question: which component or project parameters have the most impact on pulling an object down in the direction of gravity?
My second question: are there more friction parameters I should know about while creating this universal joint model? So far I have only changed Drag and Angular Drag to 0, created a PhysicMaterial with almost no friction and assigned it to the Default Material field in Project Settings->Physics, and increased Default Solver Iterations and Default Solver Velocity Iterations almost to the maximum.
My third question: is it even possible to create realistic universal joint shaft swinging without the Spring option turned on in the Hinge Joint? Or would I be better off writing a script that defines the red shaft's swinging behavior under gravity?
First, Unity physics are not real physics, they are real-looking physics.
Second, Unity units are typically assumed to be in meters. When you set a position to <1, 0, 0> you should assume your object is at 1 meter in x.
Third, Unity physics uses colliders to determine volumes, for the purposes of moments of inertia.
So, when you say that
The centers of mass of the parts are shown in the pictures. The object dimensions are NOT scaled and represent the real dimensions of the objects in cm. No Box Colliders or Mesh Colliders were added to the objects.
It makes me think that (1) you are using the wrong scaling, like 1 = 1 centimeter instead of the assumed 1 = 1 meter, and (2) you're preventing Unity from being able to really run the physics calculations correctly because you're not providing volumetric information to the physics engine (which again is done via the collider volumes).
Also, your masses seem very, very, very small. You've got the universal joint at 10 grams, which is really not much at all.
You're seeing better results when you add a spring because the spring is adding a force, whereas your masses are small and the missing colliders keep the physics engine from doing much.
I would suggest adding mesh colliders and increasing the weights to get the behavior you want to see.
I'm simulating a swing door with a generator using Simscape/Simulink. I imagine there is only one input to the system, which is the force at the knob needed to open the door, so all revolute joints have their torque actuation set to automatically computed. However, I'm getting an error saying:
"In the dynamically coupled component containing Revolute Joint 'SDL/SwingMotion', there are more joint primitive degrees of freedom with automatically computed force or torque (4) than with motion from inputs (0). In general, the equations of motion do not have a unique solution. Solve this problem by increasing the number of joint primitives with motion from inputs or reducing the number of joint primitives with automatically computed force or torque. Resolve this issue in order to simulate the model."
The animation works fine if I set the torque actuation of all 4 revolute joints to none, but the torque produced by the force won't be transferred this way and the generator shaft won't have any torque. However, I am able to measure the output RPM (angular velocity of the generator shaft). I don't quite understand the error. Why does the Revolute Joint block treat the degrees of freedom of other revolute blocks as its own? How do I resolve this?
The block diagram can be seen here.
The Assembly looks like this:
Any help is much appreciated!
For a revolute joint there are three options:
Let the revolute joint just act as a hinge: no torque can be exerted from base to follower or vice versa.
Provide torque and calculate motion from the torque. This is called forward dynamics: motion is determined from torque.
Provide motion and calculate torque from the motion. This is called inverse dynamics: torque is determined from motion.
So you have to select one of these options. If you select automatically computed torque, then you need to provide the motion that it has to follow. If you don't need it to follow a provided motion, then no torque is needed.
If you set the computed torque to 'none', then no torque can be exerted from the base to the follower or vice versa; that is the idea of a degree of freedom.
It seems as if you want the Base and the Follower to be somewhat rigidly connected and follow the same motion. Then you can consider using the Rigid Transform block, which is just a rigid link in which you can define a translation or rotation offset.
EDIT
What you were effectively doing was combining forward and inverse dynamics. You put a force on the doorknob and let Simulink calculate the motion (forward dynamics so far), and then you want the torque on the GenShaft from the motion it is undergoing, which is inverse dynamics. That does not work.
A better way to check the torque on the GenShaft is, for instance, to prescribe a certain door hinge angular velocity, put a proportional controller on it, and check the required torque. Notice that with no friction modeled in the hinge, the required torque will go to zero.
So set the computed torque to 'none' for all joints except the GenShaft, and set that one to provided by input. Then put a proportional controller on the angular velocity of the door hinge. You can then check the torque needed.
Updated model
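To make the suggestion above a bit more concrete, here is a minimal Python sketch (deliberately not a Simscape model, just plain numerical integration with made-up numbers) of prescribing a hinge angular velocity with a proportional controller and checking the torque it must supply; note how the torque goes to zero when no friction is modeled:

```python
# Minimal sketch of the proportional-controller idea: prescribe a hinge angular
# velocity, let a P-controller supply the torque, and watch what torque is
# needed. All numbers (inertia, gain, friction) are made-up examples.

I_hinge = 2.0      # kg*m^2, hypothetical door inertia about the hinge axis
kp = 5.0           # proportional gain
omega_ref = 1.0    # rad/s, desired hinge angular velocity
dt, t_end = 0.001, 5.0

for friction in (0.0, 0.5):          # 0.0 models a frictionless hinge
    omega, t = 0.0, 0.0
    while t < t_end:
        torque = kp * (omega_ref - omega)              # proportional controller
        alpha = (torque - friction * omega) / I_hinge  # angular acceleration
        omega += alpha * dt
        t += dt
    print(f"friction={friction}: steady-state torque ~ {kp * (omega_ref - omega):.3f} N*m")
    # Frictionless: the required torque decays to ~0 once the speed is reached.
    # With friction: it settles near the friction torque it has to overcome.
```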
I'm trying to use GA to train an ANN whose job is to move a bar vertically so that it makes a ball bounce without hitting the wall behind the bar, in other words, a single bar pong.
I'm going to ask directly, because I think I know what the problem is.
The game window is 200x200 pixels, so I created 40000 input neurons.
The obvious doubt is: can a GA handle chromosomes of 40000 (input) * 10 (hidden) * 2 elements (genes)?
Since I think the answer is no (I implemented this solution and it doesn't seem to work), the solution seems simple: I feed the NN with only 4 parameters, the x, y coordinates of the bar and the ball. Nailed it.
Nice solution, but the problem is: how can I apply such a solution in a game like Super Mario, where the number of enemies on screen is not fixed? Surely I cannot create an NN with a dynamic number of inputs.
I hope you can help me.
You have to use features to represent your state. For example, you can divide the screen into tiles and assign each tile a value computed by a function that takes the enemies into account (e.g., a boolean indicating whether an enemy is in the tile, or the distance to the closest enemy), as in the sketch below.
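A minimal Python sketch of that tile idea, assuming you can query the enemy (x, y) positions each frame (the grid size, function names and values are made-up): the feature vector has a fixed length regardless of how many enemies are on screen, which also addresses the Super Mario concern above.

```python
# Minimal sketch of tile-based features for a fixed-size NN input.
import math

SCREEN_W, SCREEN_H = 200, 200
TILE = 20                                   # 10 x 10 grid of tiles

def tile_features(enemies, use_distance=False):
    """Return one value per tile: 1.0 if an enemy is inside it, or
    (optionally) a value based on the distance to the closest enemy."""
    n_x, n_y = SCREEN_W // TILE, SCREEN_H // TILE
    features = []
    for ty in range(n_y):
        for tx in range(n_x):
            cx, cy = tx * TILE + TILE / 2, ty * TILE + TILE / 2  # tile center
            if use_distance:
                d = min((math.hypot(ex - cx, ey - cy) for ex, ey in enemies),
                        default=float("inf"))
                features.append(1.0 / (1.0 + d))   # closer enemy -> larger value
            else:
                inside = any(tx * TILE <= ex < (tx + 1) * TILE and
                             ty * TILE <= ey < (ty + 1) * TILE
                             for ex, ey in enemies)
                features.append(1.0 if inside else 0.0)
    return features                                # fixed length: n_x * n_y

print(len(tile_features([(35, 120), (150, 60)])))  # -> 100, whatever the enemy count
```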
You can still use pixels but you might need to preprocess them in order to reduce their size (e.g., use a recurrent NN).
By the way, an NN might not be able to handle 200x200 raw pixels directly, but one was able to learn to play Atari games using a state representation of preprocessed pixels of size 84x84x4 (see this paper).
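A minimal Python sketch of that kind of preprocessing (assuming OpenCV and NumPy are available; the 84x84x4 shape is the one from the paper, everything else here is a made-up example):

```python
# Minimal sketch of Atari-style frame preprocessing: grayscale, downscale to
# 84x84, and stack the last 4 frames into one state.
from collections import deque
import numpy as np
import cv2

def preprocess(frame_rgb):
    gray = cv2.cvtColor(frame_rgb, cv2.COLOR_RGB2GRAY)        # drop color
    small = cv2.resize(gray, (84, 84), interpolation=cv2.INTER_AREA)
    return small.astype(np.float32) / 255.0                   # scale to [0, 1]

history = deque(maxlen=4)                                     # last 4 frames

def state_from(frame_rgb):
    history.append(preprocess(frame_rgb))
    while len(history) < 4:                                    # pad at episode start
        history.append(history[-1])
    return np.stack(history, axis=-1)                          # shape (84, 84, 4)

dummy = np.zeros((200, 200, 3), dtype=np.uint8)                # a fake 200x200 frame
print(state_from(dummy).shape)                                 # -> (84, 84, 4)
```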
I am developing an app which uses LK for tracking and POSIT for pose estimation. I successfully get the rotation matrix and projection matrix and am able to track perfectly, but the problem is that I cannot translate the 3D object properly: the object does not fit into the place where it should.
Will someone help me with this?
Check these links; they may give you some ideas.
http://computer-vision-talks.com/2011/11/pose-estimation-problem/
http://www.morethantechnical.com/2010/11/10/20-lines-ar-in-opencv-wcode/
Now, you must also check whether the intrinsic camera parameters are correct. Even a small error in estimating the field of view can cause trouble when trying to reconstruct 3D space. And from your details, it seems that the problem is bad fov (field of view) angles.
You can try to measure them, or feed half or double the value to your algorithm.
There are two conventions for fov: half-angle (from the image center to the top or to the left) and full-angle (from bottom to top, or from left to right, respectively). Maybe you just mixed them up, using the full angle instead of the half angle, or vice versa.
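A minimal Python sketch of why the convention matters when you turn a fov into a focal length for the intrinsics (the image width and the 60-degree value are made-up examples): feeding a half-angle where the formula expects the full angle roughly doubles the focal length, which throws off the reconstructed pose.

```python
# Minimal sketch of how the fov convention changes the intrinsics you feed to
# the pose estimator. Image width and the 60-degree fov are made-up examples.
import math

width = 640                    # image width in pixels
fov_full_deg = 60.0            # full horizontal field of view
fov_half_deg = fov_full_deg / 2

# focal length in pixels from the *full* fov: f = (w / 2) / tan(fov / 2)
f_correct = (width / 2) / math.tan(math.radians(fov_full_deg) / 2)

# the same formula accidentally fed with the half-angle as if it were the full one
f_mixed_up = (width / 2) / math.tan(math.radians(fov_half_deg) / 2)

print(f_correct, f_mixed_up)   # ~554 px vs ~1194 px -> very different pose scale
```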
Maybe you can show us how you build a transformation matrix from R and T components?
Remember that the cv::solvePnP function returns the transformation that maps object points into the camera frame (i.e., it finds the object pose in a space where the camera is at (0, 0, 0)). For almost all cases you need to invert it to get the camera pose: R' = R^T, T' = -R^T * T.
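In case it helps, here is a minimal OpenCV/Python sketch of building the 4x4 transformation from solvePnP's R and T and inverting it as described above; all point coordinates and intrinsics below are placeholder values, not values from the question.

```python
# Minimal sketch: build a 4x4 transform from solvePnP's R and T, then invert it
# to get the camera pose. The object points, image points and camera matrix are
# placeholders for illustration only.
import numpy as np
import cv2

object_points = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=np.float64)
image_points = np.array([[320, 240], [400, 242], [398, 320], [318, 318]], dtype=np.float64)
K = np.array([[554.0, 0.0, 320.0],
              [0.0, 554.0, 240.0],
              [0.0, 0.0, 1.0]])                   # intrinsics (placeholder values)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
R, _ = cv2.Rodrigues(rvec)                        # 3x3 rotation from the rotation vector

# object -> camera transform, as returned by solvePnP
T_obj_in_cam = np.eye(4)
T_obj_in_cam[:3, :3] = R
T_obj_in_cam[:3, 3] = tvec.ravel()

# camera -> object (camera pose): R' = R^T, T' = -R^T * T
T_cam_in_obj = np.eye(4)
T_cam_in_obj[:3, :3] = R.T
T_cam_in_obj[:3, 3] = -R.T @ tvec.ravel()

print(T_obj_in_cam @ T_cam_in_obj)                # ~identity if the inversion is right
```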