How to position a 3D object on the camera screen in Unity - unity3d

I have a project that deals with AR, so I use ARFoundation in Unity. Everything works fine until I want to position my 3D object on the left of the screen. I have tried many solutions, but none of them work.
I tried taking the width and height of the screen, altering them, and then setting my object's position to those values, but it didn't work.
What should I be doing?

Understanding the space you are working in is crucial.
Screen positions live in a different coordinate space than the 3D world of your scene. To convert between the two you need https://docs.unity3d.com/ScriptReference/Camera.WorldToScreenPoint.html and https://docs.unity3d.com/ScriptReference/Camera.ScreenToWorldPoint.html.
For example, a box can have its transform at (0, 0, 0), but depending on where you look at it from and at what angle, its transform stays the same while its screen position changes.
WorldToScreenPoint tells you where an object in the scene appears on your screen (2D), and ScreenToWorldPoint does the opposite (you give it a 2D point on your screen and it gives you that position in 3D).
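As a minimal sketch of the ScreenToWorldPoint approach (the class and field names here are illustrative, not from the question): a script that keeps an object anchored at the left edge of the screen, a fixed distance in front of the AR camera.

using UnityEngine;

public class PlaceAtLeftOfScreen : MonoBehaviour
{
    public Camera arCamera;               // assign the AR camera in the Inspector
    public float distanceFromCamera = 2f; // how far in front of the camera, in world units

    void LateUpdate()
    {
        // Screen point: x = 0 is the left edge, y = half the screen height,
        // z = the distance from the camera along its view direction.
        Vector3 screenPoint = new Vector3(0f, Screen.height * 0.5f, distanceFromCamera);
        transform.position = arCamera.ScreenToWorldPoint(screenPoint);
    }
}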

Related

World to Cube projection Unity

This is the setup:
I have 2 cameras in the game scene. The game must be played in a room with screens on the frontal wall and on the floor. To be able to test it, I simply recreated the 2 screens in Unity. The goal is to make the game immersive, creating the correct illusion as in the image on the left.
What I've tried so far (and it kinda worked as you can see from the screenshot) is:
Camera0: goes directly in the frontal display.
Camera1: I created a post processing effect that deforms the output texture to create the correct perspective illusion.
The problem:
The fact that I'm basically working over a texture creates a blurry effect on the borders, because the pixel density is not the same in the original and the deformed image.
I think the best approach would be to apply the deforming transformation to the projection matrix of Camera1 instead, but I haven't managed to do that. Do you have any idea how to approach this problem correctly?
You can let your perspective cameras do the work for you.
Set the fov of the floor camera so that it shows only as much as will fit on the screen.
Then, have the cameras at the same position.
Finally, rotate the floor camera around the +x axis by half of the sum of the FOVs of both cameras. For example, if the wall camera's FOV is 80º and the floor camera's FOV is 40º, rotate the floor camera by 60º around the x axis.
This guarantees that the views of the two cameras do not overlap, and they will have the correct projection along their surfaces to create the desired illusion.
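A minimal sketch of that setup (the camera references and script name are assumptions, not from the question): co-locate the two cameras and tilt the floor camera down by half the sum of the two vertical FOVs.

using UnityEngine;

public class AlignFloorCamera : MonoBehaviour
{
    public Camera wallCamera;   // e.g. vertical FOV of 80
    public Camera floorCamera;  // e.g. vertical FOV of 40

    void Start()
    {
        // Both cameras share the same position.
        floorCamera.transform.position = wallCamera.transform.position;

        // Tilt the floor camera down around +x by half the sum of the two FOVs,
        // e.g. (80 + 40) / 2 = 60 degrees, so the frustums meet without overlapping.
        float tilt = (wallCamera.fieldOfView + floorCamera.fieldOfView) * 0.5f;
        floorCamera.transform.rotation = wallCamera.transform.rotation * Quaternion.Euler(tilt, 0f, 0f);
    }
}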

Character is inclined to the sides of the screen

I'm running into normal perspective behavior.
In my 2.5D scenes where I use a background image in a 3D space I have to lift up and rotate the camera to give the right perspective to the 3D character.
Unfortunately, this kind of perspective causes the inclination of the character when it is on the sides of the screen.
In the many forums I visited I could not find anything about it.
Do you think that there are no solutions other than to work in an orthogonal projection and to attach a script to the character to resize it?
If your character should be visible like a 2D game element, just change the camera from Perspective to Orthographic :) (the view height can then be adjusted with the Size slider).
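For reference, a minimal sketch of doing the same from code (the size value is just an example; it is half of the visible height in world units):

using UnityEngine;

public class UseOrthographicCamera : MonoBehaviour
{
    void Start()
    {
        Camera cam = Camera.main;
        cam.orthographic = true;    // switch from Perspective to Orthographic
        cam.orthographicSize = 5f;  // equivalent to the Size slider in the Inspector
    }
}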

Unity 2D - How to make Resolution-Independent colliders on my sprite

I'm currently working on a school project where we have to merge a 2010 game with an arcade-style game. I chose to merge COD:Blackops Zombie mode with Pacman.
Right off the bat I'm having issues. I'm trying to figure out how I can make the colliders for my level scale with the resolution.
I have an orthographic camera that has a size of 5.
This is what my level currently looks like.
As you can see above, the level itself is a single sprite, placed in a canvas that has Scale with Screen Size enabled.
I've placed a couple of 2D colliders on empty game objects on it for testing purposes.
The current resolution is 1920x1080, which is a 16:9 aspect ratio.
However, if I change the resolution to something like 800x600, which is a 4:3 aspect ratio... then the colliders are completely wrong now.
I thought that the colliders would scale with screen size just like the level sprite does, but I was wrong.
I am now trying to find an alternative approach to making the level have colliders that also change with the size of the sprite...
If you need more information please let me know and I will gladly update my question with the requested information.
To keep things simple, first start with a square aspect canvas.
Make an Empty GameObject, and place it at the bottom left corner of your canvas.
Put all your colliders inside this empty GameObject, and set them up so they match your play-field.
Now all you'll need to do is have a script on this empty GameObject that sets the following in Update():
// Normalize the screen dimensions so the scale keeps the screen's proportions.
Vector2 screenScale = new Vector2(Screen.width, Screen.height).normalized;
// Apply that proportion to the collider container so it stretches with the screen.
transform.localScale = new Vector3(screenScale.x, screenScale.y, 1f);
This will make it scale everything inside it to the same proportions as the screen. Alternatively, you could use the Canvas's size if you want to make it based on that instead of the screen itself.
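Put together, a minimal sketch of such a script (the class name is illustrative); attach it to the empty GameObject that holds the colliders:

using UnityEngine;

public class ScaleCollidersWithScreen : MonoBehaviour
{
    void Update()
    {
        // Rescale the collider container every frame so it follows resolution changes.
        Vector2 screenScale = new Vector2(Screen.width, Screen.height).normalized;
        transform.localScale = new Vector3(screenScale.x, screenScale.y, 1f);
    }
}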

Offsetting the rendered result of a camera in Unity

I am trying to make everything that is rendered by my perspective "Camera A" appear 100 points higher. This is due to the fact that my App has an interface with an open space on the upper part.
My app uses face detection to map the face movement onto an in-game avatar. To do this I compute the model-view matrix and set it as the camera's worldToCameraMatrix.
So far this works well, but everything is rendered with the center as the origin; now I want to move this center origin a certain distance "up" so that it matches my interface.
Is there a way to tell Unity to offset the rendered camera result?
An alternative I thought about is to render into a texture, then I can just move the texture itself, but I thought there must be an easier way.
By the way, my main camera is orthographic, and I use it to render the camera texture. In this case, simply moving the rendering quad GameObject up does the trick.
I found a property called "pixelRect", the description says:
Where on the screen is the camera rendered in pixel coordinates.
However moving the center up seems to scale down my objects.
You can set the viewport rect/ortho size so that it's offset, or you can render to a render texture and render that as an overlay with an offset or a difference in scale.
Cheers
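A minimal sketch of the viewport rect approach (the camera reference and offset value are illustrative): shift Camera A's output up by a given number of pixels by offsetting its normalized viewport rect.

using UnityEngine;

public class OffsetCameraOutput : MonoBehaviour
{
    public Camera cameraA;
    public float pixelOffsetY = 100f;

    void Start()
    {
        // Camera.rect is in normalized screen coordinates, so convert the pixel offset first.
        float normalizedOffset = pixelOffsetY / Screen.height;
        Rect r = cameraA.rect;   // the default viewport rect is (0, 0, 1, 1)
        cameraA.rect = new Rect(r.x, r.y + normalizedOffset, r.width, r.height);
    }
}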

Camera-Offset | Project Tango

I am developing an augmented reality app for Project Tango using Unity3d.
Since I want to have virtual object interact with the real world, I use the Meshing with Physics scene from the examples as my basis and placed the Tango AR Camera prefab inside of the Tango Delta Camera (at the relative position (0,0,0)).
I found out that I have to rotate the AR Camera up by about 17°, so that the Dynamic Mesh matches the room; however, there is still a significant offset from the live camera preview.
I was wondering, if anyone who had to deal with this before could share his solution to aligning the Dynamic Mesh with the real world.
How can I align the virtual world with the camera image?
I'm having similar issues. It looks like this is related to a couple of previously-answered questions:
Point cloud rendered only partially
Point Cloud Unity example only renders points for the upper half of display
You need to take into account the color camera's offset from the device origin, which requires you to get the color camera pose relative to the device. You can't do this directly, but you can get the device pose in the IMU frame and the color camera pose in the IMU frame, and from those work out the color camera pose in the device frame. The links above show example code.
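As a minimal sketch of that frame composition (the matrix names are illustrative and this is not the actual Tango API): given the device pose in the IMU frame and the color camera pose in the IMU frame, the color camera pose in the device frame is the inverse of the first multiplied by the second.

using UnityEngine;

public static class TangoFrameMath
{
    // imuT_device : pose of the device in the IMU frame
    // imuT_camera : pose of the color camera in the IMU frame
    // Returns deviceT_camera : the pose of the color camera in the device frame.
    public static Matrix4x4 ColorCameraInDeviceFrame(Matrix4x4 imuT_device, Matrix4x4 imuT_camera)
    {
        return imuT_device.inverse * imuT_camera;
    }
}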
You should be looking at something like (in Unity coordinates) a (0.061, 0.004, -0.001) offset and a 13-degree rotation up around the x axis.
When I try to use the examples, I get broken rotations, so take these numbers with a pinch of salt. I'm also seeing small rotations around y and z, which don't match with what I'd expect.