Recently I made an application for HTC Vive users to view 360-degree videos. To have a point of reference, let's assume the video has a resolution of Full HD (1920x1080). See the picture of a 3D model below for illustration:
The field of view of an HTC Vive is 110° vertically and 100° horizontally.
It would be okay to simplify this to a circular FoV of 100°.
My question is: how can I determine the amount of video information inside my FoV?
Here is what I know so far:
You can model the sphere on paper and calculate the viewed surface area using the formulas for spherical caps -> https://en.wikipedia.org/wiki/Spherical_cap
There also seems to be a formula for the UV mapping that Unity performs (the application is built in Unity). That formula can be found here: https://en.wikipedia.org/wiki/UV_mapping
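Putting those two together, here is a rough back-of-the-envelope estimate of my own (assuming a circular 100° FoV and, for simplicity, a uniform pixel density across the equirectangular frame):

```latex
% Fraction of the sphere covered by a circular FoV of full angle 100°,
% i.e. a spherical cap with half-angle \theta = 50^\circ:
\frac{A_{\mathrm{cap}}}{A_{\mathrm{sphere}}}
  = \frac{2\pi r^2 (1 - \cos\theta)}{4\pi r^2}
  = \frac{1 - \cos 50^\circ}{2}
  \approx 0.18

% Applied naively to the 1920 x 1080 equirectangular frame:
0.18 \cdot 1920 \cdot 1080 \approx 3.7 \cdot 10^{5} \ \text{pixels}
```

Since the equirectangular projection over-samples the poles, the actual pixel count inside the FoV depends on where you look; accounting for that is exactly where the UV-mapping formula would come in.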
Any suggestions are welcome!
I have been struggling with some rotation maths for a feature in my project.
I am basically combining the gyroscope input from a phone with a touch input in order to recreate the same behaviour as the YouTube 360 video player (using Unity).
In other words, I'm trying to add the touch input (rotation on the X and Y axes only) on top of the gyroscope, which is free to rotate in all directions.
I tried building two quaternions, one for the gyro and one for the touch input. If I start the app and keep looking in the same direction with the phone, the two combine fine, but if I change the phone's orientation around the Y axis and then use the touch input, dragging up and down becomes roll instead of yaw.
I tried changing the order in which the quaternions are combined, but that did not fix the issue.
After playing around with a minimal setup in Unity, I figured out that what I need to do is recreate the same relationship a child and parent object have regarding rotation.
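A minimal sketch of that parent/child idea (Unity C#; the rig/camera hierarchy, the touch speed, and the gyro-to-Unity conversion are my own assumptions, not the exact setup described above):

```csharp
using UnityEngine;

public class GyroTouchLook : MonoBehaviour
{
    public Transform rig;      // parent: driven by the gyroscope
    public Transform cam;      // child of rig: driven by the touch input
    public float touchSpeed = 0.2f;

    private float yaw;         // accumulated touch rotation around Y
    private float pitch;       // accumulated touch rotation around X

    void Start()
    {
        Input.gyro.enabled = true;
    }

    void Update()
    {
        // Gyro attitude converted into Unity's left-handed coordinate system.
        Quaternion g = Input.gyro.attitude;
        rig.localRotation = Quaternion.Euler(90f, 0f, 0f) *
                            new Quaternion(g.x, g.y, -g.z, -g.w);

        // Touch drag accumulates a local yaw/pitch offset on the child, so it
        // is always applied relative to the gyro-driven parent (the same
        // relation a child object has to its parent).
        if (Input.touchCount == 1)
        {
            Vector2 d = Input.GetTouch(0).deltaPosition;
            yaw   -= d.x * touchSpeed;
            pitch += d.y * touchSpeed;
        }
        cam.localRotation = Quaternion.Euler(pitch, yaw, 0f);
    }
}
```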
Sorry for the lack of captures and screenshots; I'm still trying to find the best way to document the issue.
Thanks
This is the setting:
I have two cameras in the game scene. The game must be played in a room with screens on the front wall and on the floor. To be able to test it, I simply recreated the two screens in Unity. The goal is to make the game immersive, creating the correct illusion as in the image on the left.
What I've tried so far (and it kind of worked, as you can see from the screenshot) is:
Camera0: renders directly to the frontal display.
Camera1: I created a post-processing effect that deforms the output texture to create the correct perspective illusion.
The problem:
The fact that I'm basically working on a texture creates a blurry effect at the borders, because the pixel density is not the same in the original and the deformed image.
I think the best approach would be to apply the deforming transformation to the projection matrix of Camera1 instead, but I haven't managed to do it. Do you have any idea how to approach this problem correctly?
You can let your perspective cameras do the work for you.
Set the FoV of the floor camera so that it shows only as much as will fit on the screen.
Then, have the cameras at the same position.
Finally, rotate the floor camera around the +X axis by half of the sum of the FoVs of both cameras. For example, if the wall camera's FoV is 80° and the floor camera's FoV is 40°, rotate the floor camera by 60° around the X axis.
This guarantees that the views of the cameras do not overlap, and they will have the correct projection across their surfaces to create the desired illusion.
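In Unity terms, that setup might look roughly like this (a sketch only; the FoV values are just the example numbers above, and the camera references are assumed to be assigned in the inspector):

```csharp
using UnityEngine;

public class TwoScreenRig : MonoBehaviour
{
    public Camera wallCamera;   // renders to the frontal screen
    public Camera floorCamera;  // renders to the floor screen

    void Start()
    {
        wallCamera.fieldOfView  = 80f;   // vertical FoV fitting the wall screen (example)
        floorCamera.fieldOfView = 40f;   // vertical FoV fitting the floor screen (example)

        // Same position for both cameras...
        floorCamera.transform.position = wallCamera.transform.position;

        // ...and the floor camera pitched down around +X by half the sum of the
        // FoVs, so the two frustums meet exactly at the wall/floor edge.
        float pitch = (wallCamera.fieldOfView + floorCamera.fieldOfView) * 0.5f; // 60°
        floorCamera.transform.rotation =
            wallCamera.transform.rotation * Quaternion.Euler(pitch, 0f, 0f);
    }
}
```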
I am developing an augmented reality app for Project Tango using Unity3d.
Since I want virtual objects to interact with the real world, I used the Meshing with Physics scene from the examples as my basis and placed the Tango AR Camera prefab inside the Tango Delta Camera (at the relative position (0,0,0)).
I found out that I have to rotate the AR Camera up by about 17° so that the dynamic mesh matches the room; however, there is still a significant offset from the live camera preview.
I was wondering if anyone who has dealt with this before could share their solution for aligning the dynamic mesh with the real world.
How can I align the virtual world with the camera image?
I'm having similar issues. It looks like this is related to a couple of previously-answered questions:
Point cloud rendered only partially
Point Cloud Unity example only renders points for the upper half of display
You need to take into account the color camera offset from the device origin, which requires you to get the color camera pose relative to the device. You can't do this directly, but you can get the device in the IMU frame, and also the color camera in the IMU frame, to work out the color camera in the device frame. The links above show example code.
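In outline, the composition looks something like this (a sketch only; the two pose helpers are hypothetical placeholders for whatever pose query the linked examples use):

```csharp
// device_T_color = inverse(imu_T_device) * imu_T_color
// The two helpers below are placeholders; fetch the actual poses from the
// Tango pose API as shown in the linked example code.
Matrix4x4 imuTDevice = GetImuToDevicePose();       // hypothetical helper
Matrix4x4 imuTColor  = GetImuToColorCameraPose();  // hypothetical helper

Matrix4x4 deviceTColor = imuTDevice.inverse * imuTColor;

// Apply the resulting offset and rotation to the AR camera relative to the device.
// arCamera is assumed to be a reference to the Tango AR Camera's Transform.
arCamera.localPosition = deviceTColor.GetColumn(3);
arCamera.localRotation = Quaternion.LookRotation(
    deviceTColor.GetColumn(2),   // forward (Z column)
    deviceTColor.GetColumn(1));  // up (Y column)
```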
You should be looking at something like (in Unity coordinates) a (0.061, 0.004, -0.001) offset and a 13-degree rotation up around the X axis.
When I try to use the examples, I get broken rotations, so take these numbers with a pinch of salt. I'm also seeing small rotations around Y and Z, which don't match what I'd expect.
I tried to implement the "Measure It" app in Unity 3D. I started with the PointCloud example scene downloaded from Tango's website.
My problem is that when I look in first-person view, the point cloud doesn't fill the screen, and when I look in third-person view I can see points outside the Unity camera's FOV.
I don't see this problem in the Explorer app, but it appears to be written in Java, so I think it's a Unity compatibility problem.
Does anyone have the same problem, or a solution?
Unity 3D 5.1.1
Google Tango urquhart
Sorry for my poor English.
Regards.
EDIT:
It looks like the ExperimentalAugmentedReality scene uses the point cloud to place markers in the real world, and there the point cloud is right in front of the camera. I don't see any script differences between the two scenes, so I don't understand why it works there. Let me know if you have any ideas.
I think it makes sense to divide your question into two parts.
Why the points are not filling the screen in the point cloud example.
To make the points fill the screen in the first-person view, the render camera's FOV needs to match the physical depth camera's FOV. In the point cloud example, I believe Tango just uses the default Unity camera FOV; that's why you see the points not filling the screen (the render camera).
In the third-person camera view, the frustum is just a visual representation of the device's movement. It doesn't indicate the FOV or any camera intrinsics of the device. For visualization purposes, Tango Explorer might specifically match the camera frustum size to the actual camera FOV, but that's not guaranteed to be 100% accurate.
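For the first part, matching the render camera to the physical camera amounts to deriving the vertical FoV from the camera intrinsics, roughly like this (a sketch; the focal length and image height below are placeholder values, the real ones should come from the device intrinsics):

```csharp
using UnityEngine;

public class MatchDepthCameraFov : MonoBehaviour
{
    // Placeholder intrinsics; read the real values from the device.
    public float focalLengthY = 1042f;  // fy in pixels (example value)
    public float imageHeight  = 720f;   // image height in pixels (example value)

    void Start()
    {
        // Vertical FoV = 2 * atan(height / (2 * fy)), converted to degrees.
        Camera cam = GetComponent<Camera>();
        cam.fieldOfView = 2f * Mathf.Atan(0.5f * imageHeight / focalLengthY) * Mathf.Rad2Deg;
    }
}
```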
Why the AR example works.
In the AR example, we must set the virtual render camera's FOV to match the physical camera's FOV, otherwise the AR view will be off. On the Tango hardware, the color camera and the depth camera are the same camera sensor, so they share the same FOV. That's why the AR example works.
We are trying to achieve the following in an iPhone game:
Using 2D PNG files, set up a scene that looks 3D. As the user moves the device, the individual PNG files warp/distort accordingly to give the effect of depth.
Example of a scene: an empty room, five walls, and a chair in the middle = 6 PNG files layered.
We have successfully accomplished this using native functions like skew and scale. By applying transformations to the various walls and the chair as the device is tilted and moved, the walls skew/scale/translate. However, the problem is that since we are using 6 PNG files, the edges don't meet as we move the device. We need a new solution using a real engine.
Question:
Instead of applying skew/scale transformations, we are thinking that if we had the freedom to move the vertices of the rectangular images, we could precisely distort the images and keep all the edges 100% aligned.
What is the best framework to do this in the LEAST amount of time? Are we going about this the correct way?
You should be able to achieve this effect (at least in regards to the perspective being applied to the walls) using Core Animation layers and appropriate 3-D transforms.
A good example of constructing a scene like this can be found in the example John Blackburn provides here. He shows how to set up layers to represent the walls in a maze by applying the appropriate rotation and translation to them, then gives the scene perspective by using the trick of altering the m34 component of the CATransform3D for the scene.
I'm not sure how well your flat chair would look using something like this, but certainly you can get your walls to have a nice perspective to them. Using layers and Core Animation would let you pull off what you want using far less code than implementing this using OpenGL ES.
Altering the camera angle is as simple as rotating the scene in response to shifts in the orientation of the device.
If you're going to the effort of warping textures as they would be warped in a 3D scene, then why not let the graphics hardware do the hard work for you by mapping the textures to 3D polygons, then changing your projection or moving polygons around?
I doubt you could do it faster by restricting yourself to 2D transformations; the hardware is geared up to do 3x3 (well, 4x4 homogeneous) matrix multiplication.