I have been struggling with some rotation maths for a feature in my project.
I am basically combining gyroscope input from a phone with touch input in order to recreate the same behaviour as the YouTube 360 video player (using Unity).
In other words, I'm trying to add the touch input (rotation on the X and Y axes only) on top of the gyroscope, which is free to rotate on all axes.
I tried building two quaternions, one for the gyro and one for the touch input. If I start up the app and keep looking in the same direction with the phone, the two combine fine, but if I change my phone's orientation on the Y axis and then use the touch input, dragging up and down becomes roll instead of yaw.
I tried changing the order in which the two quaternions are multiplied together, but it did not fix my issue.
After playing around with a minimal setup in Unity, I figured out that what I need to do is recreate the same relationship a child and parent object have regarding rotation; see the sketch below.
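A minimal sketch of that parent/child rig, assuming a pivot object that accumulates the touch yaw/pitch with a camera child driven by the gyro; the touchSensitivity value and the gyro-to-Unity conversion are the commonly circulated ones, not something specific to your setup:

    using UnityEngine;

    // The pivot carries the touch rotation; the camera child underneath it
    // follows the gyro, so the gyro rotates "inside" the touch-rotated frame.
    public class TouchGyroRig : MonoBehaviour
    {
        public Transform cameraChild;          // child of this pivot, driven by the gyro
        public float touchSensitivity = 0.1f;  // assumption: tune to taste
        private float yaw, pitch;              // accumulated touch rotation

        void Start()
        {
            Input.gyro.enabled = true;
        }

        void Update()
        {
            if (Input.touchCount == 1)
            {
                Vector2 d = Input.GetTouch(0).deltaPosition;
                yaw   += d.x * touchSensitivity;
                pitch -= d.y * touchSensitivity;
            }
            // Parent carries the touch rotation...
            transform.localRotation = Quaternion.Euler(pitch, yaw, 0f);
            // ...child carries the gyro attitude, converted from the sensor's
            // right-handed frame into Unity's left-handed frame.
            Quaternion g = Input.gyro.attitude;
            cameraChild.localRotation = Quaternion.Euler(90f, 0f, 0f)
                                        * new Quaternion(g.x, g.y, -g.z, -g.w);
        }
    }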
Sorry for the lack of captures and screenshots; I'm trying to find the best way to document the issue.
Thanks
Recently I made an application for HTC Vive users to view 360-degree videos. As a point of reference, let's assume that the video has a resolution of Full HD (1920x1080).
The field of view of an HTC Vive is 110° vertically and 100° horizontally.
It would be okay to simplify this to a circular FoV of 100°.
My question is: how can I determine the amount of video information inside my FoV?
Here is what I know so far:
You can model the FoV as a spherical cap and calculate its surface area using the formulas for spherical caps (https://en.wikipedia.org/wiki/Spherical_cap); see the worked example after this list.
There also seems to be a formula for the UV mapping that Unity performs (since this is done in Unity). That formula can be found here: https://en.wikipedia.org/wiki/UV_mapping
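Working out the spherical-cap idea, assuming a circular FoV of full angle 2θ = 100° (so θ = 50°), the cap's share of the sphere is independent of the radius:

    A_cap / A_sphere = 2πR²(1 − cos θ) / (4πR²) = (1 − cos θ) / 2
                     = (1 − cos 50°) / 2 ≈ 0.179

So roughly 18% of the sphere's solid angle lies inside the FoV. This translates directly into a pixel count only if the mapping were area-uniform; an equirectangular UV mapping oversamples the poles, so treat 0.179 × 1920 × 1080 ≈ 370,000 pixels as an approximation.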
Any suggestions are welcome!
I am trying to measure the distance between multiple positions, but I do not want rotation to affect the distance. Conceptually, I want to record the starting transform and, on each update, track the distance traveled without regard to any change in rotation. I am using an HTC Vive controller; people tend to rotate their hands, and I want to control for this.
I've tried resetting the Euler angles, but this doesn't seem to work.
Here is an analogy that should help.
Think of it like trying to draw and measure a line with a pencil, where the tracked position is at the eraser. I can hold the pencil in any number of ways, and even change my grip in the middle of drawing the line, but my line will remain straight and the measurement will remain accurate.
Any help is appreciated.
I believe your problem lies in the position you are tracking. It sounds like you are tracking the transform.position of one of the child elements of the Vive controller model, which leads to the situation you're describing with the pencil-eraser analogy.
Depending on where your script is attached, you could either move it to the top-level element of the Vive controller, or alter it to track transform.parent.position instead, which shouldn't be affected by the rotation of someone's hand.
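A minimal sketch of that approach, assuming the script stays on a child of the controller and accumulates distance from the parent's position (the class and field names are just for illustration):

    using UnityEngine;

    // Accumulates the distance traveled by the controller's root position,
    // so rotating the hand around that point does not inflate the total.
    public class DistanceTracker : MonoBehaviour
    {
        private Vector3 lastPosition;
        public float totalDistance;

        void Start()
        {
            lastPosition = transform.parent.position;
        }

        void Update()
        {
            Vector3 current = transform.parent.position;
            totalDistance += Vector3.Distance(lastPosition, current);
            lastPosition = current;
        }
    }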
I am developing an augmented reality app for Project Tango using Unity3D.
Since I want virtual objects to interact with the real world, I used the Meshing with Physics scene from the examples as my basis and placed the Tango AR Camera prefab inside the Tango Delta Camera (at the relative position (0,0,0)).
I found out that I have to rotate the AR Camera up by about 17°, so that the dynamic mesh matches the room; however, there is still a significant offset to the live preview from the camera.
I was wondering if anyone who has dealt with this before could share their solution for aligning the dynamic mesh with the real world.
How can I align the virtual world with the camera image?
I'm having similar issues. It looks like this is related to a couple of previously answered questions:
Point cloud rendered only partially
Point Cloud Unity example only renders points for the upper half of display
You need to take into account the color camera's offset from the device origin, which requires you to get the color camera's pose relative to the device. You can't query this directly, but you can get the device pose in the IMU frame as well as the color camera pose in the IMU frame, and compose the two to work out the color camera pose in the device frame. The links above show example code.
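The composition itself is plain pose algebra: device_T_camera = inverse(imu_T_device) * imu_T_camera. A sketch in Unity types (the RigidPose struct and variable names here are assumptions; the actual poses come from the Tango queries shown in the linked examples):

    using UnityEngine;

    // A rigid transform (rotation + translation) with compose and inverse,
    // enough to express device_T_camera = inverse(imu_T_device) * imu_T_camera.
    public struct RigidPose
    {
        public Vector3 position;
        public Quaternion rotation;

        // (a * b): apply b first, then a.
        public static RigidPose Compose(RigidPose a, RigidPose b)
        {
            return new RigidPose
            {
                position = a.position + a.rotation * b.position,
                rotation = a.rotation * b.rotation
            };
        }

        public RigidPose Inverse()
        {
            Quaternion inv = Quaternion.Inverse(rotation);
            return new RigidPose { position = inv * -position, rotation = inv };
        }
    }

Usage would then be something like RigidPose deviceTCamera = RigidPose.Compose(imuTDevice.Inverse(), imuTCamera);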
You should be looking at something like (in Unity coordinates) a (0.061, 0.004, -0.001) offset and a 13-degree rotation up around the X axis.
When I try to use the examples, I get broken rotations, so take these numbers with a pinch of salt. I'm also seeing small rotations around Y and Z, which don't match what I'd expect.
I am working on a tabletop game that the user controls using the accelerometer in the phone. It is quite similar to Labyrinth on iOS.
The scene contains a tabletop created from planes and cubes, and a sphere as the ball. As in the Labyrinth game, when the phone tilts, the ball should roll toward the tilted side while the camera stays centered on the table. I am trying to do something similar, where the user tilts the phone and objects on the table move toward the tilted side.
Currently, I add the x and z components to Physics.gravity on tilt. Sadly, this change in gravity does not affect the ball, which stays put on the table. Use Gravity is enabled for the ball; it drops from a height onto the tabletop initially and then comes to a halt. After the initial drop, the ball does not react to any gravity change.
I have also tried rotating the whole table, but using transform.Rotate does not work either. The table rotates perfectly, along with the camera, but the ball stays put, hanging in the air. So currently I am out of my depth on this issue.
Is there another way to register the tilt action, so that the ball moves in the tilted direction? I cannot use AddForce, as there are multiple objects that need to react to the tilt, and it would be difficult to keep track of that many AddForce calls. How else can I get it working?
See this post for some related information.
As for why your sphere is sticking to the table, can I assume your sphere has a Rigidbody? If so, you need to wake it up:
    // A sleeping rigidbody ignores gravity changes until it is woken up.
    if (rigidbody.IsSleeping())
        rigidbody.WakeUp();
Ideally you would make that call only after detecting a change in orientation/gravity, not every frame.
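Putting the two pieces together, a minimal sketch (the axis mapping assumes the phone is held roughly flat, and the field names are just for illustration):

    using UnityEngine;

    // Maps device tilt onto Physics.gravity and wakes any sleeping balls
    // so they actually react to the new gravity direction.
    public class TiltGravity : MonoBehaviour
    {
        public Rigidbody[] balls;              // everything that should react to tilt
        public float gravityStrength = 9.81f;
        private Vector3 lastGravity;

        void FixedUpdate()
        {
            // Phone held flat: device x maps to world x, device y to world z.
            Vector3 tilt = Input.acceleration;
            Vector3 gravity = new Vector3(tilt.x, -1f, tilt.y).normalized * gravityStrength;

            if (gravity != lastGravity)
            {
                Physics.gravity = gravity;
                lastGravity = gravity;
                foreach (Rigidbody rb in balls)
                    if (rb.IsSleeping())
                        rb.WakeUp();           // sleeping bodies ignore gravity changes
            }
        }
    }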
I remember a WWDC talk showing a teapot in OpenGL ES that rotated with the movement of the device, so that the teapot appeared to stand still in space.
When the app launched, the teapot started in a specific position. Then, when the device was rotated, the teapot rotated too, so that it stood still in space.
In that talk, they mentioned that we must get the "reference frame", e.g. upon app launch, which tells us how the user initially held the device.
For instance, take the accelerometer axes: the Y axis runs along the long side of the device, toward the top of the screen.
I want to know the rotation around the Y axis, but relative to how the user holds the device.
So when the user holds it upright and rotates it around Y, I need to know that rotation value.
I think the key is removing gravity from the readings? I am targeting the iPhone 4/4S, which have gyros, but I think CoreMotion would sensor-fuse them automatically.
How can I figure out by how much the user has rotated the device around the Y axis?
From your other question, Why is this CMDeviceMotionHandler never called by CoreMotion?, I know that you are working on iOS 4; things have changed slightly in iOS 5. In general, gyro data, or better still the sensor fusion of accelerometer and gyro data as done in DeviceMotion, is the best approach for getting accurate results.
So once you have this up and running, you will need to work with CMAttitude's multiplyByInverseOfAttitude method to get all CMDeviceMotion results relative to your reference frame. Just store a reference to the very first CMAttitude in a class member and call multiplyByInverseOfAttitude with it on all subsequent updates. Then all members of CMDeviceMotion.attitude will refer to this reference frame.
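What multiplyByInverseOfAttitude does is plain quaternion math; here is the concept sketched in Unity-style C# for illustration (the class and names are mine, not the CoreMotion API):

    using UnityEngine;

    // Returns each new attitude sample relative to the very first one,
    // i.e. relative = inverse(reference) * current.
    public class RelativeAttitude
    {
        private Quaternion? reference;

        public Quaternion Update(Quaternion current)
        {
            if (reference == null)
                reference = current;   // capture the reference frame once
            return Quaternion.Inverse(reference.Value) * current;
        }
    }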
To get the rotation around the Y axis, a first approach is to take the Euler angles, i.e. CMAttitude.roll. If you just need to track small motions, this might be fine. If the motions are more extensive, you will run into trouble with gimbal lock. Then you need more advanced techniques, like quaternion operations, to get stable results, but that sounds like a question of its own.