Unity3D units to Pixels for Oculus Rift

I am wondering how many pixels 1 unit in Unity3D corresponds to for the Oculus Rift. For example, how could a cube of 1 x 1 x 1 units be given its dimensions in pixels?

There's no 1:1 correspondence here. It depends on many factors, such as the distance to the object, the angle you're viewing it from, the field of view of your camera, and the pixel resolution of your headset.
This is sort of like asking how many feet tall an object should be in a movie so that it takes up 6 feet of the movie theater screen. It'll depend on the kind of lens the movie is shot with, how far away the movie camera is, how big the movie theater screen is, etc.
However, at runtime, you can get the current pixel position on the screen of a position in the 3D world using Camera.WorldToScreenPoint. You could then do this for multiple points (say, at each end) of an object of interest to determine how large it is currently appearing on the screen.
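As a minimal sketch of that idea (assuming the object of interest has a Renderer attached and that Camera.main is the camera you care about):

```csharp
using UnityEngine;

// Rough sketch: estimate how many pixels an object currently spans on screen.
// Projecting only two opposite corners of the bounds is an approximation.
public class ScreenSizeProbe : MonoBehaviour
{
    public Renderer target;

    void Update()
    {
        Camera cam = Camera.main;
        Bounds b = target.bounds;

        // Project two opposite corners of the world-space bounds into screen space.
        Vector3 minScreen = cam.WorldToScreenPoint(b.min);
        Vector3 maxScreen = cam.WorldToScreenPoint(b.max);

        float widthPixels  = Mathf.Abs(maxScreen.x - minScreen.x);
        float heightPixels = Mathf.Abs(maxScreen.y - minScreen.y);

        Debug.Log($"Approx. on-screen size: {widthPixels} x {heightPixels} px");
    }
}
```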

Related

Unity3d: How to map real-world coordinates to scene coordinates?

I have a physical (real-world) camera and a Unity3d Scene. I want to map the physical camera coordinate system to the virtual scene, 1:1.
For example, imagine the physical camera is pointed at the sky and an aircraft flies overhead. I want to have the physical aircraft appear in my virtual environment, at the correct location. I can get the ADS-B data (which describes position and altitude of the aircraft) and a generic 3D aircraft model. I can import that 3D aircraft model into my scene, but how do I know where to put it and at which height in the scene? And when I move the physical camera, I want the virtual camera to move in the same way.
Put another way, if you wanted to recreate the Earth (ignoring all textures, lighting, etc.) in Unity3D, how would you ensure that objects in the physical world appear in the same location as in your virtual Earth?
How can I do this?
Unity has a built-in LocationService class to determine your location on the globe. Then there is Input.gyro, which can be used to determine approximately where you are pointing. Use this information and the flight transponder data to compute your position relative to the aircraft. Obviously, this will be wildly inaccurate. But, as others suggested, you can gain additional accuracy by setting your virtual camera up as a Unity physical camera and matching it up with your real-world camera. From your camera footage, extract the clip-space position of the aircraft using some kind of image recognition method (e.g. a ComputeShader retrieving the location of a small dark spot), and then use Camera.ScreenPointToRay on that position to get a vector towards where the airplane should be in the scene. Using this, correct your virtual user position in the 3D scene so that the ray lines up with the vector from your virtual camera to the virtual aircraft.
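A rough sketch of that last alignment step, here correcting the rig's rotation rather than its position (virtualCamera, cameraRig, virtualAircraft and detectedScreenPos are hypothetical names, and the image-recognition step is assumed to happen elsewhere):

```csharp
using UnityEngine;

// Hypothetical sketch: rotate the camera rig so that the ray through the
// detected aircraft pixel lines up with the direction to the virtual aircraft.
public class AircraftAlignment : MonoBehaviour
{
    public Camera virtualCamera;      // set up to match the physical camera
    public Transform cameraRig;       // parent transform of the virtual camera
    public Transform virtualAircraft; // placed from the ADS-B position/altitude

    // detectedScreenPos: pixel coordinates of the aircraft found in the footage
    public void Align(Vector2 detectedScreenPos)
    {
        // Direction in which the aircraft actually appears, per the real footage.
        Ray observed = virtualCamera.ScreenPointToRay(detectedScreenPos);

        // Direction in which it should appear, per the ADS-B-placed model.
        Vector3 expected =
            (virtualAircraft.position - virtualCamera.transform.position).normalized;

        // Apply the correction that maps the observed direction onto the expected one.
        Quaternion correction = Quaternion.FromToRotation(observed.direction, expected);
        cameraRig.rotation = correction * cameraRig.rotation;
    }
}
```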
I think what you want to do is create a 1:1 map of the real world in your Unity scene. For that you will need to read this documentation. Basically, 1 unit in Unity is 1 meter. To make the dimensions of 3D objects match their real-life dimensions 1:1, you will need to resize/change the scale of those objects manually. Good luck with that!
If you want a virtual camera that behaves just like your physical one, there's a toggle in the Camera component called "Physical Camera". There you can set its sensor data and that kind of thing. If that's not what you're after, I don't know.
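For completeness, the same Physical Camera settings can also be set from a script; a minimal sketch, with placeholder sensor values you would replace with your real camera's specs:

```csharp
using UnityEngine;

// Sketch: configure a Unity camera as a "physical camera" so its projection
// matches a real camera. The numbers below are placeholders, not real specs.
[RequireComponent(typeof(Camera))]
public class PhysicalCameraSetup : MonoBehaviour
{
    void Start()
    {
        Camera cam = GetComponent<Camera>();
        cam.usePhysicalProperties = true;
        cam.sensorSize  = new Vector2(23.5f, 15.6f); // sensor width/height in mm
        cam.focalLength = 35f;                       // lens focal length in mm
    }
}
```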

World to Cube projection Unity

That's the setting:
I have 2 cameras in the game scene. The game must be played in a room with screens on the frontal wall and on the floor. To be able to test it, I just recreated the 2 screens in Unity. The goal is to make the game immersive, creating the correct illusion as in the image on the left.
What I've tried so far (and it kinda worked as you can see from the screenshot) is:
Camera0: goes directly in the frontal display.
Camera1: I created a post processing effect that deforms the output texture to create the correct perspective illusion.
The problem:
The fact that I'm basically working over a texture creates some blurriness at the borders, because the pixel density is not the same in the source and the deformed image.
I think the best approach would be to make the deforming transformation happen on the projection matrix of Camera1 instead, but I just failed. Do you have any idea how to approach this problem correctly?
You can let your perspective cameras do the work for you.
Set the fov of the floor camera so that it shows only as much as will fit on the screen.
Then, have the cameras at the same position.
Finally, have the floor camera rotated on the +x axis by half of the sum of the FoVs of both cameras. For example, if the wall camera's FoV is 80° and the floor camera's FoV is 40°, rotate the floor camera by 60° along the x axis.
This will guarantee that the views of the two cameras do not overlap, and they will have the correct projection along their surfaces to create the desired illusion.
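A minimal setup sketch using the example numbers above (wallCamera and floorCamera are assumed to be references to the two cameras):

```csharp
using UnityEngine;

// Sketch: tile a wall camera and a floor camera so their frustums meet
// without overlapping, using the 80°/40° example from above.
public class TwoScreenRig : MonoBehaviour
{
    public Camera wallCamera;   // renders to the frontal display
    public Camera floorCamera;  // renders to the floor display

    void Start()
    {
        wallCamera.fieldOfView  = 80f;  // vertical FoV of the wall camera
        floorCamera.fieldOfView = 40f;  // vertical FoV sized to fit the floor screen

        // Same position for both cameras.
        floorCamera.transform.position = wallCamera.transform.position;

        // Pitch the floor camera down by half the sum of the FoVs: (80 + 40) / 2 = 60.
        floorCamera.transform.rotation =
            wallCamera.transform.rotation * Quaternion.Euler(60f, 0f, 0f);
    }
}
```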

Get pixel-count in FoV inside VR-Sphere

Recently I made an application for HTC Vive users to view 360-degree videos. To have a point of reference, let's assume that this video has a resolution of Full HD (1920x1080). See the picture of a 3D model below for illustration:
The field of view of a HTC Vive is 110° vertically and 100° horizontally.
It would be okay to simplify it to a circular FoV of 100°.
My question would be: How can I determine the amount of video information inside my FoV?
Here is what I know so far:
You can create a sphere on paper and calculate its surface area by using the formulas for spherical caps. -> https://en.wikipedia.org/wiki/Spherical_cap
Also, there seems to be a formula for the UV mapping that Unity performs (since this is done in Unity). That formula can be found here: https://en.wikipedia.org/wiki/UV_mapping
Any suggestions are welcomed!
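Not a full answer, but a rough back-of-the-envelope sketch based on the spherical-cap idea above, assuming a circular 100° FoV and ignoring the uneven pixel density of the equirectangular/UV mapping:

```csharp
using UnityEngine;

// Rough estimate: fraction of the sphere covered by a circular FoV,
// multiplied by the total pixel count of the video. Treat the result as an
// approximation only, since equirectangular pixels are not spread evenly
// over the sphere.
public static class FovPixelEstimate
{
    public static float EstimatePixels(float fovDegrees, int videoWidth, int videoHeight)
    {
        // Half-angle of the viewing cone.
        float theta = fovDegrees * 0.5f * Mathf.Deg2Rad;

        // Spherical cap area / full sphere area = (1 - cos(theta)) / 2.
        float sphereFraction = (1f - Mathf.Cos(theta)) * 0.5f;

        return sphereFraction * videoWidth * videoHeight;
    }
}

// Example: EstimatePixels(100f, 1920, 1080) ≈ 0.179 * 2,073,600 ≈ 370,000 pixels.
```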

Camera-Offset | Project Tango

I am developing an augmented reality app for Project Tango using Unity3d.
Since I want to have virtual objects interact with the real world, I used the Meshing with Physics scene from the examples as my basis and placed the Tango AR Camera prefab inside the Tango Delta Camera (at the relative position (0,0,0)).
I found out that I have to rotate the AR Camera up by about 17°, so that the Dynamic Mesh matches the room; however, there is still a significant offset to the live preview from the camera.
I was wondering if anyone who has had to deal with this before could share their solution to aligning the Dynamic Mesh with the real world.
How can I align the virtual world with the camera image?
I'm having similar issues. It looks like this is related to a couple of previously-answered questions:
Point cloud rendered only partially
Point Cloud Unity example only renders points for the upper half of display
You need to take into account the color camera offset from the device origin, which requires you to get the color camera pose relative to the device. You can't do this directly, but you can get the device in the IMU frame, and also the color camera in the IMU frame, to work out the color camera in the device frame. The links above show example code.
You should be looking at something like (in Unity coordinates) a (0.061, 0.004, -0.001) offset and a 13-degree rotation up around the x axis.
When I try to use the examples, I get broken rotations, so take these numbers with a pinch of salt. I'm also seeing small rotations around y and z, which don't match what I'd expect.
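As a sketch of the frame composition described above (imuFromDevice and imuFromColorCamera are hypothetical placeholders for poses obtained from the Tango pose queries and already converted to Unity matrices; nothing Tango-specific is shown):

```csharp
using UnityEngine;

// Sketch: work out the color camera pose in the device frame from two poses
// that are both expressed in the IMU frame.
public static class TangoFrames
{
    public static Matrix4x4 ColorCameraInDeviceFrame(
        Matrix4x4 imuFromDevice, Matrix4x4 imuFromColorCamera)
    {
        // device_T_color = inverse(imu_T_device) * imu_T_color
        return imuFromDevice.inverse * imuFromColorCamera;
    }
}
```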

Smooth kinetic scrolling in unity3D

I'm developing a game for mobile platforms. I have a menu with levels. There are a lot of levels, so there should be kinetic scrolling. What I did: every frame I read touches[0].position and, based on the difference from the previous position, I move the camera.
But because of the inaccurate touch position (I suppose), the camera doesn't move smoothly. I'm thinking about calculating the average speed over three frames, for example, and moving the camera according to that speed. Can you give me any advice on how to smooth the movement?
Also, touches[0].deltaPosition seems to work incorrectly.
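One common approach, sketched below under the assumption that the old Input.GetTouch API is used: average the touch delta over the last few frames to get a velocity, keep scrolling with that velocity after the finger is lifted, and damp it over time. The damping, frame-count and pixel-to-world values are arbitrary placeholders.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch: kinetic (inertial) scrolling for a level-select camera.
// Attach to the camera. The touch delta is averaged over a few frames to
// hide jitter, and the resulting velocity decays after the finger is released.
public class KineticScroll : MonoBehaviour
{
    public float damping = 5f;          // how quickly the scroll slows down
    public int sampleFrames = 3;        // how many frames to average over
    public float pixelsToWorld = 0.01f; // placeholder screen-to-world scale

    private readonly Queue<float> samples = new Queue<float>();
    private float velocity;             // in pixels per second
    private float lastTouchY;

    void Update()
    {
        if (Input.touchCount > 0)
        {
            Touch touch = Input.GetTouch(0);
            if (touch.phase == TouchPhase.Began)
            {
                lastTouchY = touch.position.y;
                samples.Clear();
                velocity = 0f;
            }
            else if (touch.phase == TouchPhase.Moved || touch.phase == TouchPhase.Stationary)
            {
                // Compute our own delta instead of relying on touch.deltaPosition.
                float delta = touch.position.y - lastTouchY;
                lastTouchY = touch.position.y;

                samples.Enqueue(delta / Time.deltaTime);
                while (samples.Count > sampleFrames) samples.Dequeue();

                // Average the last few frames to smooth out noisy touch input.
                float sum = 0f;
                foreach (float s in samples) sum += s;
                velocity = sum / samples.Count;
            }
        }
        else
        {
            // No finger down: let the scroll coast and slow down over time.
            velocity = Mathf.Lerp(velocity, 0f, damping * Time.deltaTime);
        }

        transform.Translate(0f, velocity * pixelsToWorld * Time.deltaTime, 0f, Space.World);
    }
}
```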