What is the difference between UPrimitiveComponent's GetMass and CalculateMass? - unreal-engine4

In a Udemy course I just went through a lecture where we needed to calculate the mass of some stuff.
I ended up using CalculateMass, but the instructor used GetMass.
The Unreal documentation for CalculateMass shows it accepts a parameter FName BoneName, but I did not use this parameter and it still works. The documentation also mentions that CalculateMass can be roughly 0.1 kilograms off from the actual mass, but that looks like an insignificant amount.
What is the important difference between these two functions? When should one be used over the other?

float CalculateMass(FName BoneName)
Returns the mass for the specified bone / body. In a physics asset you can set up multiple physics bodies for a single mesh and query their masses separately, hence the bone name parameter. If a mass override was specified, it returns that overridden mass instead. This is a mass estimated by the engine and may be slightly different from GetMass. It also seems to be faster.
On the other hand...
float GetMass() const
is pretty much equivalent to calling CalculateMass(NAME_None), except that it returns the real mass of the body instance as calculated by the physics engine.
To read that exact value it takes a lock on the physics thread, so it is a little slower to execute.
I would use CalculateMass unless the mass must be exact, and cache the calculated mass in a variable if it isn't changing frequently.
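As a rough illustration (the class AMyActor, the component pointer MeshComp, and the member CachedMassKg are hypothetical names for this sketch, not anything from the course), caching the estimated mass once might look like this:

// AMyActor, MeshComp (a UPrimitiveComponent*) and CachedMassKg (a float member)
// are hypothetical names used only for this sketch.
void AMyActor::BeginPlay()
{
    Super::BeginPlay();

    // CalculateMass() (BoneName defaults to NAME_None) estimates the mass of the
    // whole body without locking the physics thread, so it is cheap enough to cache.
    CachedMassKg = MeshComp->CalculateMass();

    // GetMass() asks the physics engine for the exact value; it is slower because
    // it has to lock to read from the physics scene.
    const float ExactMassKg = MeshComp->GetMass();
    UE_LOG(LogTemp, Log, TEXT("Estimated mass: %f kg, exact mass: %f kg"),
        CachedMassKg, ExactMassKg);
}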
You can also dig into the engine code to better understand where these values come from ;)

Related

Why are computer/game physics engines often non-deterministic?

Developing with various game physics engines over the years, I've noticed that on the same machine I observe widely different results in physics simulations between runs. Most recently, the Unity engine does this, even though physics are calculated at set intervals of time (FixedUpdate) -- as far as I can determine it should be completely independent of frame-rate.
I've asked this question on game forums before, and was told it was due to chaotic motion: see double pendulum. But, even the double pendulum is deterministic if the starting conditions are exactly controlled, right? On the same machine, shouldn't floating point math behave the same way?
I understand that there are problems with floating point math accuracy, but I understand those problems (as outlined here) to not be problems on the same hardware -- isn't floating point inaccuracy still deterministic? What am I missing?
tl;dr: If running a simulation on the same machine, using the same floating point math(?), shouldn't the simulation be deterministic?
Thank you very much for your time.
Yes, you are correct. A program executed on the same machine will give identical results each time (at least ideally - there might be cosmic rays or other external things that affect memory and whatnot, but I would say these are not our concern). All calculations on a computer are deterministic, and so all algorithms run on a computer will necessarily be deterministic (which is the reason it's so hard to make random number generators)!
Most likely the randomness you see is implemented in the program with some random number generator, and the seed for the random numbers varies from run to run. Should you start the simulation with the same seed, you will see the same result.
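As a tiny illustration of that point (plain C++ rather than Unity code), two generators given the same seed produce exactly the same "random" sequence, so a simulation driven by them is repeatable:

#include <iostream>
#include <random>

int main()
{
    // Same seed -> identical stream of numbers on every run.
    std::mt19937 a(12345);
    std::mt19937 b(12345);
    std::uniform_real_distribution<double> dist(0.0, 1.0);

    for (int i = 0; i < 3; ++i)
        std::cout << dist(a) << " == " << dist(b) << "\n";
    return 0;
}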
Edit: I'm not familiar with Unity, but doing some more research seems to indicate that the FixedUpdate routine might be the problem.
Except for the audio thread, everything in unity is done in the main thread if you don't explicitly start a thread of your own. FixedUpdate runs on the very same thread, at the same interval as Update, except it makes up for lost time and simulates a fixed time step.
source
If this is the case, and the function itself looks somewhat like:
void physicsUpdate(double currentTime, double lastTime)
{
    double deltaT = currentTime - lastTime;
    // do physics using `deltaT`
}
Here we will naturally get different behaviour, because deltaT will not be the same between two different runs. It depends on what other processes are running in the background, since they can delay the main thread. The function would be called at irregular intervals and you would observe different results between runs. Note that these irregularities are mostly not due to floating point imprecision, but due to inaccuracies in the integration. (E.g. velocity is often calculated by v = a*deltaT, which assumes a constant acceleration since the last update. This is in general not true.)
However, if the function would look like this:
void physicsUpdate(double deltaT)
{
    // do physics using `deltaT`
}
Every time you run the simulation through this version with the same fixed deltaT, you will get the exact same result.
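For context, here is a minimal sketch (plain C++, not Unity internals; the function names are placeholders) of the usual fixed-timestep accumulator pattern, which is roughly what a FixedUpdate-style loop does: the number of physics steps per frame varies with load, but every step sees the same deltaT.

#include <chrono>

// Hypothetical stand-in for the game's physics step.
void physicsUpdate(double deltaT);

void runLoop()
{
    using clock = std::chrono::steady_clock;
    const double fixedDeltaT = 1.0 / 50.0;   // e.g. a 50 Hz physics rate
    double accumulator = 0.0;
    auto previous = clock::now();

    while (true)  // replace with the real game-loop condition
    {
        auto now = clock::now();
        accumulator += std::chrono::duration<double>(now - previous).count();
        previous = now;

        // Step the simulation in fixed increments.
        while (accumulator >= fixedDeltaT)
        {
            physicsUpdate(fixedDeltaT);
            accumulator -= fixedDeltaT;
        }

        // rendering and input handling would go here
    }
}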
I've not got much experience with Unity or its physics simulations, but I've found the following forum post which also links to an article which seems to indicate it's down to precision with the floating point calculations.
As you've mentioned, a lot of people seem to keep rehashing this question!
The forum also links to this blog post which may shed some light on the issue.

Where is the jplephem ephemerides API documented?

I am working on what is likely a unique use case - I want to use Skyfield to do some calculations on a hypothetical star system. I would do this by creating my own ephemeris and using that instead of the actual one. The problem I am finding is that I cannot find documentation on the API for replacing the ephemerides with my own.
Is there documentation? Is skyfield something flexible enough to do what I am trying?
Edit:
To clarify what I am asking, I understand that I will have to do some gravitational modeling (and I am perfectly willing to configure every computer, tablet, cable box and toaster in this house to crunch on those numbers for a few days :), but before I really dive into it, I wanted to know what the data looks like. If it is just a module with a number of named numpy 2d arrays... that makes it rather easy, but I didn't see this documented anywhere.
The JPL-issued ephemerides used by Skyfield, like DE405 and DE406 and DE421, simply provide a big table of numbers for each planet. For example, Neptune’s position might be specified in 7-day increments, where for each 7-day period from the beginning to the end of the ephemeris the table provides a set of polynomial coefficients that can be used to estimate Neptune's position at any moment within that 7-day period. The polynomials are designed, if I understand correctly, so that their first and second derivatives mesh smoothly with the previous and following 7-day polynomials at the moment where one ends and the next begins.
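(To make that table structure concrete, here is a minimal sketch of looking up the 7-day segment covering a given time and evaluating its polynomial with Horner's rule. The layout is hypothetical and greatly simplified - the real JPL files store Chebyshev coefficients with their own normalization - but the shape of the data is the same: one list of coefficients per segment, per coordinate, per body.)

#include <cstddef>
#include <vector>

// One 7-day segment of a simplified, hypothetical ephemeris table: polynomial
// coefficients for the x coordinate, valid from tStart to tStart + 7 days.
struct Segment {
    double tStart;                 // Julian date at which this segment begins
    std::vector<double> xCoeffs;   // c0 + c1*s + c2*s^2 + ... in normalized time s
};

// Evaluate the x position at Julian date t from the full table of segments.
double xPosition(const std::vector<Segment>& table, double t)
{
    const double segmentLength = 7.0;  // days per segment
    const std::size_t i =
        static_cast<std::size_t>((t - table.front().tStart) / segmentLength);
    const Segment& seg = table[i];

    // Normalize t to [0, 1] within the segment and apply Horner's rule.
    const double s = (t - seg.tStart) / segmentLength;
    double x = 0.0;
    for (auto it = seg.xCoeffs.rbegin(); it != seg.xCoeffs.rend(); ++it)
        x = x * s + *it;
    return x;
}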
The JPL generates these huge tables by taking the positions of the planets as we have recorded them over human history, taking the rules by which we think an ideal planet would move given gravitational theory, the drag of the solar wind, the planet's own rotation and dynamics, its satellites, and so forth, and trying to choose a “real path” for the planet that agrees with theory while passing as close as it can to the actual observed positions.
This is a big computational problem that, I take it, requires quite a bit of finesse. If you cannot match all of the observations perfectly — which you never can — then you have to decide which ones to prioritize, and which ones are probably not as accurate to begin with.
For a hypothetical system, you are going to have to start from scratch by doing (probably?) a gravitational dynamics simulation. There are, if I understand correctly, several possible approaches that are documented in the various textbooks on the subject. Whichever one you choose should let you generate x,y,z positions for your hypothetical planets, and you would probably instantiate these in Skyfield as ICRS positions if you then wanted to use Skyfield to compute distances, observations, or to draw diagrams.
Though I have not myself used it, I have seen good reviews of:
http://www.amazon.com/Solar-System-Dynamics-Carl-Murray/dp/0521575974

Implementing runtime-constant persistent "LUT"s in MATLAB functions

I am implementing (for class, so no built-ins!) a variant of the Hough Transform to detect circles in an image. I have a working product, and I get correct results. In other words, I'm done with the assignment! However, I'd like to take it one step further and try to improve the performance a little bit. Also, to forestall the inevitable responses: I know MATLAB isn't exactly the most performance-oriented language, and I know that the Hough transform isn't exactly performance-friendly either, but hear me out.
When generating the Hough accumulation space, I end up needing to draw a LOT of circles (approximately 75 for every edge pixel in the image; one for each search radius). I wrote a nifty function to do this, and it's already fairly optimized. However, I end up recalculating lots of circles of the same radius (expensive) at different locations (cheap).
An easy way for me to optimize this was to precalculate a circle of each radius centered at zero once, and just select the proper circle and shift it into the correct position. This was easy, and it works great!
The trouble comes when trying to access this lookup table of circles.
I initially made it a persistent variable, as follows:
function [x_subs, y_subs] = get_circle_indices(circ_radius, circ_x_center, circ_y_center)
    persistent circle_lookup_table;
    % Make sure the table's already been generated; if not, generate it.
    if (isempty(circle_lookup_table))
        circle_lookup_table = generate_circles(100); % upper bound on circle size
    end
    % Get the right circle from the struct, and center it at the requested center
    x_subs = circle_lookup_table(circ_radius).x_coords + circ_x_center;
    y_subs = circle_lookup_table(circ_radius).y_coords + circ_y_center;
end
However, it turns out this is SLOW!
Over 200,000 function calls, MATLAB spent on average 9 microseconds per call just to establish that the persistent variable exists! (Not the isempty() call, but the actual variable declaration.) This is according to MATLAB's built-in profiler.
This added back most of the time gained from implementing the lookup table.
I also tried implementing it as a global variable (similar time to check if the variable is declared) or passing it in as a variable (made the function call much more expensive).
So, my question is this:
How do I provide fast access inside a function to runtime-constant data?
I look forward to some suggestions.
It is NOT runtime-constant data, since your function has the ability to generate the table. So your main task is to get that generation step out of the function: before any calls to this critical function, make sure the table has been generated elsewhere, outside the function.
However, there is a nice trick I've seen in MATLAB's own files, more specifically in bwmorph. For each piece of functionality in that particular function which requires a LUT, they have created a function which returns the LUT itself (the LUT is written explicitly in the file). They also add the instruction coder.inline('always') to ensure that this function will be inlined. It seems to be quite efficient!

Dymola_InlineAfterIndexReduction

This question is related to an issue that I encountered when I was playing with some blocks. Here's the model that I have,
As you can see, there are two kinds of connections; the inputs of the first connection (from top to bottom) are u[1], u[2], u[3], and the other blocks are quite self-explanatory (all default values, except that startTime = 5 for the step input block).
From my knowledge, the first kind of connection only outputs angular velocity, not angle or angular acceleration (they are both zero), which is a bit unrealistic (I'll explain why I did this). The second connection outputs an angular velocity as well.
My problem was that, in the second connection, the clutch seems to work all right (after 5 seconds the clutch is engaged, with relative angular velocity w_rel = 0).
However, the first connection behaves quite differently. We can see that they are all flange connections, and angular velocities are all calculated from flange_a/b.phi, so we should expect that there is no angular velocity difference across the clutch no matter what the input (realExpression1) is. But the interesting thing is that when I simulate the model, the left flange of the clutch is not moving, while the right flange is rotating instead. Here are two plots of my results.
Connection1
Connection2
Actually, I expected to see flange_a.phi and flange_b.phi both stay at zero. Then I accidentally removed the annotation __Dymola_InlineAfterIndexReduction = true in the Move block, and the model behaved as I expected. I would really appreciate it if anyone could help explain what I saw. Thanks A LOT!
The documentation for the Move model clearly says
The user has to guarantee that the input signals are consistent to each other
In your case, they are not consistent: the Move block expects u[1], u[2], and u[3] to be the angle, its angular velocity, and its angular acceleration, each the time derivative of the previous one, whereas you feed in a nonzero velocity while the angle and acceleration stay at zero. So I'm not too surprised you get a strange answer. It was not clear to me why you even attempted to go this route. You implied in your message that you would explain why, but I certainly didn't understand your motivation. I suspect that the Move model exists to allow the user to provide their own explicit functions for position, velocity and acceleration that Dymola will use during index reduction instead of generating those functions from the underlying equations. Unless you can provide consistent functions, you really shouldn't use this block at all.
You should really be using a source where you specify only one of position, velocity and acceleration. If that isn't possible, then I'm afraid you'll have to explain why so we can try to understand what you are really trying to achieve here.

Getting displacement from accelerometer data with Core Motion

I am developing an augmented reality application that (at the moment) wants to display a simple cube on top of a surface, and be able to move in space (both rotating and displacing) to look at the cube in all the different angles. The problem of calibrating the camera doesn't apply here since I ask the user to place the iPhone on the surface he wants to place the cube on and then press a button to reset the attitude.
Finding the camera rotation is very simple with the gyroscope and Core Motion. I do it this way:
if (referenceAttitude != nil) {
    [attitude multiplyByInverseOfAttitude:referenceAttitude];
}
CMRotationMatrix mat = attitude.rotationMatrix;
GLfloat rotMat[] = {
    mat.m11, mat.m21, mat.m31, 0,
    mat.m12, mat.m22, mat.m32, 0,
    mat.m13, mat.m23, mat.m33, 0,
    0, 0, 0, 1
};
glMultMatrixf(rotMat);
This works really well.
More problems arise anyway when I try to find the displacement in space during an acceleration.
The Apple Teapot example with Core Motion just adds the x, y and z values of the acceleration vector to the position vector. This (apart from not making much sense) has the result of returning the object to its original position after an acceleration, since the acceleration goes from positive to negative or vice versa.
They did it like this:
translation.x += userAcceleration.x;
translation.y += userAcceleration.y;
translation.z += userAcceleration.z;
What should I do to find out the displacement from the acceleration at some instant (with a known time difference)? Looking at some other answers, it seems like I have to integrate twice to get velocity from acceleration and then position from velocity. But there is no example in code whatsoever, and I don't think that is really necessary. Also, there is the problem that when the iPhone is still on a plane, the accelerometer values are not null (there is some noise, I think). How much should I filter those values? Am I supposed to filter them at all?
Cool, there are people out there struggling with the same problem, so it is worth spending some time on it :-)
I agree with westsider's statement, as I spent a few weeks experimenting with different approaches and ended up with poor results. I am sure that there won't be an acceptable solution for either larger distances or slow motions lasting for more than 1 or 2 seconds. If you can live with some restrictions like small distances (< 10 cm) and a given minimum velocity for your motions, then I believe there might be a chance to find a solution - no guarantee at all. If so, it will take a pretty hard stretch of research and a lot of frustration, but if you get it, it will be very very cool :-) Maybe you will find these hints useful:
First of all, to make things easy, just look at one axis, e.g. x, but consider both left (-x) and right (+x) to have a representative situation.
Yes, you are right, you have to integrate twice to get position as a function of time. And for further processing you should store the first integration's result (== velocity), because you will need it at a later stage for optimisation. Do it very carefully, because every tiny bug will lead to huge errors after a short period of time.
Always bear in mind that even a very small error (e.g. < 0.1%) will grow rapidly after integrating twice. The situation becomes even worse after one second if you configure the accelerometer at, say, 50 Hz: 50 ticks are processed and the tiny, seemingly negligible error will outrun the "true" value. I would strongly recommend not relying on the trapezoidal rule but using at least Simpson's rule or a higher-degree Newton-Cotes formula.
If you manage this, you will have to keep an eye on setting up the right low-pass filtering. I cannot give a general value, but as a rule of thumb, experimenting with filtering factors between 0.2 and 0.8 is a good starting point. The right value depends on the business case, for instance what kind of game it is, how fast you need to react to events, ...
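To give the double integration plus filtering a concrete shape, here is a minimal sketch for a single axis (plain C++; the names, the exponential filter, and the trapezoidal rule are placeholders - the advice above recommends at least Simpson's rule, so this is only the simplest possible version and will drift as described below):

// One-axis state for dead reckoning from userAcceleration samples.
struct AxisState {
    double filteredA = 0.0;   // low-pass filtered acceleration (m/s^2)
    double v = 0.0;           // integrated velocity (m/s)
    double x = 0.0;           // integrated position (m)
    double lastA = 0.0;       // previous filtered sample, for the trapezoid
};

// Called once per Core Motion sample; dt is the time since the previous sample,
// alpha is the filtering factor (try values between 0.2 and 0.8 as suggested above).
void step(AxisState& s, double rawA, double dt, double alpha)
{
    // Simple exponential low-pass filter to knock down sensor noise.
    s.filteredA = alpha * rawA + (1.0 - alpha) * s.filteredA;

    // Trapezoidal rule: acceleration -> velocity -> position.
    double vNew = s.v + 0.5 * (s.lastA + s.filteredA) * dt;
    s.x += 0.5 * (s.v + vNew) * dt;
    s.v = vNew;
    s.lastA = s.filteredA;
}

Keeping the velocity around (s.v) is what lets you apply the virtual-force corrections described next.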
Now you will have a solution which works pretty well under certain circumstances and within a short period of time. But then, after a few seconds, you will run into trouble because your object drifts away. Now you will enter the difficult part of the solution, which I eventually failed to handle within the given time scope :-(
One promising approach is to introduce something I call "synthetic forces" or "virtual forces". This is a strategy for reacting to several bad situations that trigger the object to drift away even though the device remains still (not a native speaker; I mean without moving) in your hands. The most troubling one is a velocity greater than 0 without any acceleration. This is an unavoidable result of error propagation and can be handled by slowing down artificially, that is, introducing a virtual deceleration even if there is no real counterpart. A very simplified example:
// damp vX if no significant X acceleration has been seen for 0.3 seconds
if (vX > 0 && (now - lastAccelerationXTimeStamp) > 0.3 /* sec */) {
    vX *= 0.9;
}
You will need a combination of such conditions to tame the beast. A lot of trial and error is required to get a feeling for the right way to go, and this will be the hard part of the problem.
If you ever managed to crack the code, pleeeease let me know, I am very curious to see if it is possible in general or not :-)
Cheers Kay
When the iPhone 4 was very new, I spent many, many hours trying to get an accurate displacement using the accelerometers and gyroscope. There shouldn't have been much concern about incremental drift, as the device needed to move only a couple of meters at most and the data collection typically ran for a few minutes at most. We tried all sorts of approaches and even had help from several Apple engineers. Ultimately, it seemed that the gyroscope wasn't up to the task. It was good for 3D orientation but that was it ... again, according to very knowledgeable engineers.
I would love to hear someone contradict this - because the app never really turned out as we had hoped, etc.
I am also trying to get displacement on the iPhone. Instead of using integration I used the basic physics formula of d = .5a * t^2, assuming an initial velocity of 0 (though it doesn't sound like you can assume an initial velocity of 0). So far it seems to work quite well.
My problem is that I'm using deviceMotion and the values are not correct. deviceMotion.gravity reads near 0. Any ideas? - OK, fixed: apparently deviceMotion.gravity has x, y, and z values. If you don't specify which one you want, you get back x (which should be near 0).
Finding this question two years later: I just found an AR project in the iOS 6 docset named pARk. It provides an approximate displacement capture and calculation using the gyroscope, i.e. the Core Motion framework.
I'm just starting to learn the code.
to be continued...