OpenAL global sounds

Is it possible to create sound sources that are "global", meaning that their volume and pitch are not affected by the listener's position and velocity at all?
I want both system sounds (not affected by the listener) and 3-dimensional sounds in my application, and I could not find any solution.

If your sounds are stereo, you may place them anywhere: stereo sources are not affected by relative distance, rotation, or velocity at all.
But if they are mono, you need to do some more work:
Every audio source has a flag that makes its position and velocity relative to the listener. Turn it on and leave your source at the default position {0,0,0} with zero velocity.
alSourcei(source, AL_SOURCE_RELATIVE, 1); // This turns on the flag
This will only work if you use one of the clamped distance models. The clamped inverse-distance model is the default, so you should not have any problems with it. (A distance model is an equation that determines how a sound's volume changes as its distance to the listener changes; you select one with alDistanceModel(). A clamped distance model is one in which a sound does not get infinitely loud when it is placed exactly on the listener: the volume is clamped to some maximum value.)
But if you use a non-clamped distance model, you should turn on the flag and place your sound exactly one reference distance away from the listener, for example at {ref_distance,0,0}. The default reference distance is 1.0, but you can change it per source via AL_REFERENCE_DISTANCE. (The reference distance is the distance at which a sound has a volume of exactly 1.)
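Putting it together, a minimal sketch might look like this (written in Swift for an Apple platform, where the OpenAL C API imports directly; the same calls apply in C, and source is assumed to be a source you have already generated and given a buffer):

import OpenAL

// Make a mono source "global": it sits on the listener and never moves relative to it.
func makeSourceGlobal(_ source: ALuint) {
    alSourcei(source, AL_SOURCE_RELATIVE, AL_TRUE)  // coordinates are now relative to the listener
    alSource3f(source, AL_POSITION, 0, 0, 0)        // exactly on the listener
    alSource3f(source, AL_VELOCITY, 0, 0, 0)        // zero relative velocity => no Doppler shift
    // Safe because the clamped model caps the gain at zero distance.
    // AL_INVERSE_DISTANCE_CLAMPED is the default model anyway.
    alDistanceModel(AL_INVERSE_DISTANCE_CLAMPED)
}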

Related

Why does a big moving attractor object slingshot other objects when using the gravity formula, rather than holding them?

I have a big spherical GameObject which moves forward in 3D with constant velocity. I have other spherical objects that the big object needs to attract to itself. I am using Newton's law of universal gravitation to attract the other objects, but, as expected, they perform a slingshot movement much like spacecraft do when they use a planet's gravity to accelerate.
What I actually want is a magnetic effect in which, without taking the masses into account, all other objects are caught by the big object. How can I do that? Do I need a different formula? Or do I need to change the movement behavior of the objects altogether?
If I understood correctly, you expect something like this: https://www.youtube.com/watch?v=33EpYi3uTnQ
You can do a sphere cast, or use a sphere collider as a trigger, to detect the objects that are inside your magnetic field.
Once you know those objects, you can calculate the distance from each of them to the magnetic ball.
You can then use an inverse interpolation to work out how much strength/"magnetism" that object receives: the closer it is, the stronger the pull.
Then you apply a force on the attracted object towards the magnetic ball's center.
Something like this algorithm:
// objectsInsideField: the rigidbodies currently inside the trigger sphere
foreach (Rigidbody o in objectsInsideField) {
    Vector3 toCenter = center.position - o.position;  // pull towards the ball, not away from it
    float distance = toCenter.magnitude;
    // Inverse interpolation: strength 1 at the center, 0 at the field's edge
    float strength = 1f - (distance / fieldRadius);   // fieldRadius == the trigger sphere's radius
    o.AddForce(toCenter.normalized * strength * forceMultiplier);  // forceMultiplier: a tuning constant
}
Of course, you will need to make some adjustments and probably add some multipliers (like forceMultiplier above) to make the force meaningful.
The final result should be: on each frame, if the object is inside the magnetic field, it moves towards the center a bit. The next frame it is even closer to the center, so the strength is even bigger, and so on.
First of all, since the big object is moving with constant velocity, a coordinate frame with axes parallel to those of the original coordinate system will move uniformly with constant velocity. This new moving coordinate system is therefore also inertial, so you can write all your equations of motion in it, calculate everything with respect to it, and at the end add the uniform movement back to the results. The benefit is that the big object is stationary in this coordinate system, so simpler physics applies.
The slingshot effect most likely occurs because your objects are treated as point masses rather than extended 3D bodies (like spheres), whose centers of mass can never get arbitrarily close: with point masses, the 1/r^2 force grows without bound as the distance approaches zero. Hence some sort of collision detection may eliminate this problem, especially if you significantly decrease, or completely remove, the elastic collision response.
All of what I am saying is a bit speculative as I have no access to details.

Unity: Create AnimationClip With World Scale AnimationCurves

I've been looking for a solution to this for quite a while now (meaning several days) and I haven't found anything yet. Maybe I'm thinking about it wrong and there isn't a way, but let's try!
I'm recording hand-data on a Hololens (the Unity Hololens Input Simulation for now). This essentially gives me one float AnimationCurve for each hand joint for each transform.position.x to z and rotation.x to w. Now my goal is to put these curves into an AnimationClip and add it to an AnimatorController (via an AnimatorOverrideController) that animates a hand rig and replay the recordings. Everything so far works!
However, the recorded hand-data from the Hololens is in world space, not in local space (which makes sense, since you usually want absolute coordinates when you want to know where the hand is). But to animate the hand, it seems I'm only able to set local coordinates, which I don't have.
Example:
clip.SetCurve("", typeof(Transform), "localPosition.x", curve.PositionX);
Here, the clip takes the x-coordinates from some hand joint and assigns them to localPosition.x of the corresponding hand-rig joint. The problem: curve.PositionX holds world-space (absolute) coordinates, but localPosition.x expects local coordinates (relative to the joint's parent).
I can't simply change "localPosition.x" to "position.x", like so:
clip.SetCurve("", typeof(Transform), "position.x", curve.PositionX);
even though the Transform class has both properties, and position is the object's world-space position. I'm not sure why this doesn't work, but it gives me the following error:
Cannot bind generic curve on Transform component, only position, rotation and scale curve are supported.
I'm aware that it doesn't make much sense to use absolute coordinates for an animation, but I simply don't have anything else.
Does anyone have an approach how I can deal with this in a sensible, not-too-cumbersome way? It seems I have all the important parts, I just can't figure out how to put them together. Thanks so much already! :)
From my basic understanding, it seems you are using the input animation recording service provided by MRTK. Unfortunately, MRTK does not provide a localPosition version of the curve data. However, you can modify the data in the recordingBuffer after the InputRecordingService stops recording.
So this is a method worth trying: the handJointCurves dictionary property of the recordingBuffer field stores a set of pose curves for each joint. Then, based on the joint pose curves table, subtract the position value of the None key from the position value of every other joint in each keyframe, so that you obtain localPositions relative to the None joint.

Calculating Lean Angle with Core Motion

I have a record session feature in my application. When the user starts a record session, I start collecting data from the device's CMMotionManager object and store it in Core Data to process and present later. The data I'm collecting includes GPS data, accelerometer data, and gyro data. The sampling frequency is 10 Hz.
Currently I'm struggling to calculate the lean angle of the device from the motion data. It is possible to work out which side of the device faces the ground by using the gravity data, but I want to calculate the left or right lean angle between the user and the ground regardless of travel direction.
This problem requires some linear algebra to solve; for example, at some point I must calculate the equation of a 3D line on a calculated plane. I have been working on this for a day and it keeps getting more complex, and I'm not good at math at all. Some math examples related to the problem would be appreciated too.
It depends on what you want to do with the collected data and in what ways the user will move with the recording iPhone in her/his pocket. The reason is that Euler angles are not a safe and, especially, not a unique way to express a rotation. Consider a situation where the user puts the phone upright into a jeans back pocket and then turns left by 90°. Because CMAttitude is given relative to a device lying flat on the table, you get two subsequent rotations for (pitch=x, roll=y, yaw=z):
pitch +90° for getting the phone upright => (90, 0, 0)
roll +90° for turning left => (90, 90, 0)
But you can get the same position by:
yaw +90° for turning the phone left => (0, 0, 90)
pitch -90° for making the phone upright => (-90, 0, 90)
You see two different representations, (90, 90, 0) and (-90, 0, 90), for reaching the same rotation, and there are more of them. So you press the Start button, do some fancy rotations to put the phone into the pocket, and you are in trouble because you can't rely on Euler angles when doing more complex motions (see gimbal lock for more headaches on this ;-).
Now the good news: you are right, linear algebra will do the job. What you can do is force your users to always put the phone in the same position, e.g. fixed upright in the right back pocket, and calculate the angle(s) relative to the ground by building the dot product of the gravity vector g = (x, y, z) from CMDeviceMotion and the position vector p, which is the -Y axis (0, -1, 0) in the upright position:
g • p = x*0 + y*(-1) + z*0 = -y = ||g|| * 1 * cos(alpha)
=> alpha = arccos(-y/9.81) as the total angle. Note that gravitational acceleration is roughly constant at about 9.81 m/s²; if you use CMDeviceMotion's gravity property, which is expressed in units of g, divide by the vector's magnitude (≈ 1) instead.
To get the left-right lean angle and the forward-back angle we use the tangent:
alphaLR = arctan (x/y)
alphaFB = arctan (z/y)
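A minimal Swift sketch of these formulas, assuming device motion updates are already running (atan2 is used instead of a plain arctan so the axis signs select the correct quadrant; the exact sign convention is something to verify on the device):

import CoreMotion
import Foundation

func leanAngles(from motion: CMDeviceMotion) -> (total: Double, leftRight: Double, forwardBack: Double) {
    let g = motion.gravity                      // in units of g, so its magnitude is ~1.0
    let norm = sqrt(g.x*g.x + g.y*g.y + g.z*g.z)
    let total = acos(-g.y / norm)               // angle against the upright position (0, -1, 0)
    let alphaLR = atan2(g.x, -g.y)              // left-right lean
    let alphaFB = atan2(g.z, -g.y)              // forward-back lean
    return (total, alphaLR, alphaFB)            // radians; multiply by 180/.pi for degrees
}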
[UPDATE:]
If you can't rely on having the phone in a predefined position like (0, -1, 0) in the equations above, you can only calculate the total angle, not the specific ones alphaLR and alphaFB. The reason is that you only have one axis of the new coordinate system where you would need two of them. The new Y axis y' will then be defined as the average gravity vector, but you don't know your new X axis, because every vector perpendicular to y' would be valid.
So you have to provide further information, like letting the users walk a longer distance in one direction without deviating, and use GPS and magnetometer data to get the second axis z'. That sounds pretty error-prone in practice.
The total angle is no problem as we can replace (0, -1, 0) with the average gravity vector (pX, pY, pZ):
g • p = x*pX + y*pY + z*pZ = ||g|| * ||p|| * cos(alpha) = ||g||^2 * cos(alpha)
alpha = arccos((x*pX + y*pY + z*pZ) / 9.81^2)
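A sketch of this calibrated variant in Swift, where the reference parameter is assumed to be the averaged gravity vector captured while the phone rests in its normal pose; dividing by the actual vector norms means you don't have to care whether the values are in units of g or m/s²:

import CoreMotion
import Foundation

func totalLeanAngle(current g: CMAcceleration, reference p: CMAcceleration) -> Double {
    let dot = g.x*p.x + g.y*p.y + g.z*p.z
    let normG = sqrt(g.x*g.x + g.y*g.y + g.z*g.z)
    let normP = sqrt(p.x*p.x + p.y*p.y + p.z*p.z)
    return acos(dot / (normG * normP))          // total angle between current and reference gravity
}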
Two more things to bear in mind:
Different persons wear different trousers with different pockets. So the gravity vector will differ even for the same person wearing other clothes, and you might need some kind of normalisation.
CMMotionManager does not work in the background, i.e. the users must not push the standby button.
If I understand your question, I think you are interested in getting the attitude of your device. You can do this using the attitude property of the CMDeviceMotion object that you get from the deviceMotion property of the CMMotionManager object.
There are two different angles in the CMAttitude class that you might be interested in: roll and pitch. If you imagine your device as an airplane with the propeller at the top (where the headphone jack is), pitch is the angle the plane/device would make with the ground if the plane were in a climb or dive. Meanwhile, roll is the angle that the "wings" would make with the ground if the plane were banking or in mid barrel roll.
(BTW, there is a third angle called yaw that I think is not relevant for your question.)
The angles will be given in radians, but it's easy enough to convert them to degrees if that's what you want (by multiplying by 180 and then dividing by pi).
Assuming I understand what you want, the good news is that you may not need to understand any linear algebra to capture and use these angles. (If I'm missing something, please clarify and I'd be happy to help further.)
UPDATE (based on comments):
The attitude values in the CMAttitude object are relative to the ground (i.e., the default reference frame has the Z-axis vertical, pointing in the opposite direction to gravity), so you don't have to worry about cancelling out gravity. For example, if you lay your device flat on a table top and then roll it up onto its side, the roll property of the CMAttitude object will change from 0 to plus or minus 90 degrees (± 0.5π radians), depending on which side you roll it onto. Meanwhile, if you start it lying flat and then gradually stand it up on its end, the same will happen to the pitch property.
While you can use the pitch, roll, and yaw angles directly if you want, you can also set a different reference frame (e.g., a different direction for "up"). To do this, just capture the attitude in that orientation during a "calibration" step and then use CMAttitude's multiplyByInverseOfAttitude: method to transform your attitude data to the new reference frame.
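A minimal Swift sketch of that calibration step (the referenceAttitude variable and the surrounding setup are illustrative; assume startDeviceMotionUpdates() has already been called on the manager):

import CoreMotion

let manager = CMMotionManager()
var referenceAttitude: CMAttitude?

// During calibration, remember the attitude of the chosen "up" orientation:
if let attitude = manager.deviceMotion?.attitude {
    referenceAttitude = attitude.copy() as? CMAttitude
}

// Later, transform each new sample into the calibrated reference frame:
if let current = manager.deviceMotion?.attitude, let reference = referenceAttitude {
    current.multiply(byInverseOf: reference)    // current is now relative to the calibration pose
    print(current.pitch, current.roll, current.yaw)  // radians
}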
Even though your question only mentioned capturing the "lean angle" (with the ground), you will probably want to capture at least 2 of the 3 attitude angles (e.g., pitch and either roll or yaw, depending on what they are doing), potentially all three, if the device is going to be in a person's pocket. (The device could rotate in the pocket in various ways if the pocket is baggy, for example.) For the most part, though, I think you will probably be able to rely on just two of the three (unless you see radical shifts in yaw throughout the course of a recording session). So for example, in my jeans pocket, the phone is usually nearly vertical. Thus, for me, pitch would vary a whole bunch as I, say, walk, sit, or run. Roll would vary whenever I change the direction I'm facing. Meanwhile, yaw would not vary much at all (unless I do cartwheels, which I can't!). So yaw can probably be ignored for me.
To summarize the main point: to use these attitude angles, you don't need to do any linear algebra, nor worry about gravity (although you may want to use this for other purposes, of course).
UPDATE 2 (based on Kay's new post):
Kay just replied and showed how to use gravity and linear algebra to make sure your angles are unique. (And, btw, I think you should give the bounty to that post, fwiw.)
Depending on what you want to do, you may want to use this math. You would want to use the linear algebra and gravity if you need a standardized way of "talking about" and/or comparing attitudes over the course of your recording session. If you just want to visualize them, you can probably still get away with not using the increased complexity. (For example, visualizing (pitch=90, roll=0, yaw=0) should be the same as visualizing (pitch=0, roll=90, yaw=90).) In my approach above, while you could have multiple ways of referring to the "same" attitude, none of them is actually wrong, per se. They will still give you the angles relative to the ground.
But the fact that the gyroscope can switch from one valid description of an attitude to another means that what I wrote above about getting away with only 2 of the 3 components needs to be corrected: because of this, you will need to capture all three components, no matter what. Sorry.

iPhone - Core Motion (relative rotation)

Is there a way to obtain a relative rotation from core motion?
What I need is: how much it rotated in one axis and which direction (+ sign = anti-clockwise, - = clockwise, according to the right-hand rule).
I have found the rotationRate property, but I am not sure how I would extract the angle from it, as it gives me radians per second.
I have tried all kinds of things over the last days, but nothing gives me stable values. I tried taking timed samples of the Core Motion data using an NSTimer and calculating the difference between two samples, so I would know how much it rotated since the last sample, but from time to time it gives me crazy numbers like 13600 degrees even when the iPhone is resting on the table.
Any thoughts on how this can be accomplished?
thanks
There is indeed. You can get what you're looking for by drilling down through the properties of CMMotionManager, via CMDeviceMotion and finally to CMAttitude. The attitude of the device is defined as "the orientation of a body relative to a given frame of reference."
In the case of CMDeviceMotion's CMAttitude, that frame of reference is established by the framework when starting device motion updates. From that point in time on, the attitude of the device is reported relative to that reference frame (not relative to the previous sample).
The CMAttitude class provides some handy built-in functionality to convert a CMAttitude to a form that is actually useful, like Euler angles, a rotation matrix, or a quaternion. It sounds like you're looking for the Euler angle representation (pitch, yaw, roll).
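For a relative rotation between consecutive samples, a Swift sketch might look like this (the 0.1 s interval is an arbitrary choice; multiply(byInverseOf:) is the Swift spelling of multiplyByInverseOfAttitude:):

import CoreMotion

let manager = CMMotionManager()
var previousAttitude: CMAttitude?

manager.deviceMotionUpdateInterval = 0.1
manager.startDeviceMotionUpdates(to: .main) { motion, _ in
    guard let attitude = motion?.attitude else { return }
    if let previous = previousAttitude, let delta = attitude.copy() as? CMAttitude {
        delta.multiply(byInverseOf: previous)      // rotation since the previous sample
        print(delta.pitch, delta.roll, delta.yaw)  // signed radians; the sign gives the direction
    }
    previousAttitude = attitude.copy() as? CMAttitude
}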
The answer provided above isn't quite accurate, though it's probably sufficient to answer this question. Core Motion tries to determine the device's absolute attitude at all times, meaning that the definition of the axes can vary depending on the device's orientation. For example, if the device is face-up, then pitch up/down is a rotation about the y-axis, but if the device is in landscape orientation, then pitch is a rotation about the z-axis (perpendicular to the plane of the screen). This is somewhat helpful if your application will only be used in one orientation, or you want a delta like the question asked for, but makes it excessively complicated if you want to know absolute orientation.

Getting level of rotation with UIAcceleration

Games like FroggyJump for iPhone figure out the rotation of the iPhone. I'm getting confused with the acceleration values. How do I calculate the level of rotation? I suppose I need to consider when the iPhone isn't perfectly upright.
Thank you.
I'm also wanting to use the new Core Motion framework with the "Device Motion" for iPhone 4 for extra precision. I guess I'll have to use that low pass filter for the other devices.
It's the yaw.
Having given Froggy Jump a quick go, I think it's likely using the accelerometer's x value directly as the left/right acceleration on the frog. If the device is stationary, you can think of an accelerometer as giving you the vector that points upward into space, relative to the local axes. For something like a ball rolling, or anything else accelerating due to tilt, you want to use the values directly.
For anything that involves actually knowing angles, you're probably best picking the axis around which you want to detect rotation, then using the C function atan2f on the accelerometer values for the other two axes. With just an accelerometer there are some scenarios in which you can't detect rotation: for example, if the device is flat on a table, an accelerometer can't detect yaw. The general rule is that rotations around the gravity vector can't be detected with an accelerometer alone.
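For example, a small Swift sketch using the raw accelerometer (choosing the x and y axes here detects rotation about the screen's z-axis; which sign counts as clockwise is something to verify on the device):

import CoreMotion
import Foundation

let manager = CMMotionManager()
manager.accelerometerUpdateInterval = 1.0 / 30.0
manager.startAccelerometerUpdates(to: .main) { data, _ in
    guard let a = data?.acceleration else { return }
    // When the device is upright, gravity reads roughly (0, -1, 0),
    // so atan2 gives 0 upright and grows as the device tilts sideways.
    let tilt = atan2(a.x, -a.y)        // radians
    print(tilt * 180 / .pi)            // degrees
}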