How to get ARFace mesh coordinates on the ARcamera cpu image Unity/ARFoundation - unity3d

I'm trying to use the AR camera with face tracking to capture some vertices from the face mesh (and mask the area), then pass the image to OpenCV for Unity for further processing.
Vector3 screenPosition = arCamera.GetComponent<Camera>().WorldToScreenPoint(face.transform.position + face.vertices[0]);
I'm using this, but it returns a position relative to the screen, which has a different aspect ratio than the image from "cameraManager.TryAcquireLatestCpuImage" (2340 x 1080 vs. 640 x 480).
I've looked everywhere for how to transform a position from world space to the CPU image, and I've tried to map the screen coordinates to the CPU image using the display matrix and projection matrix, but with no luck.
Any solution would be appreciated!
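One possible approach, sketched below, is to go world → viewport UV with the camera, then undo the display transform that ARFoundation reports per frame to land in camera-image UV space, and finally scale by the CPU image dimensions. This is only a sketch: the direction of the display matrix (and whether a UV flip is needed) varies by platform and ARFoundation version, so it must be verified on device.

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

public class FaceVertexToCpuImage : MonoBehaviour
{
    public Camera arCamera;
    public ARCameraManager cameraManager;

    Matrix4x4 displayMatrix = Matrix4x4.identity;

    void OnEnable()  => cameraManager.frameReceived += OnFrame;
    void OnDisable() => cameraManager.frameReceived -= OnFrame;

    void OnFrame(ARCameraFrameEventArgs args)
    {
        // Cache the transform ARFoundation uses to fit the camera image to the screen.
        if (args.displayMatrix.HasValue)
            displayMatrix = args.displayMatrix.Value;
    }

    // Map a world-space face vertex to pixel coordinates in the CPU image.
    public Vector2 WorldToCpuImage(Vector3 worldPos, int imageWidth, int imageHeight)
    {
        // Normalized screen-space UV (0..1) of the vertex.
        Vector3 vp = arCamera.WorldToViewportPoint(worldPos);

        // Undo the display transform to get back into camera-image UV space.
        // NOTE: whether the matrix or its inverse applies here, and whether
        // the result needs flipping, is platform-dependent -- test on device.
        Vector4 uv = displayMatrix.inverse * new Vector4(vp.x, vp.y, 1f, 0f);

        return new Vector2(uv.x * imageWidth, uv.y * imageHeight);
    }
}
```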

Related

Evaluate depth for orthographic camera

I have a post-processing shader. For simplicity, my post-processing shader only shows the _CameraDepthTexture at the given uv. This shader is written in code.
I'm moving to Shader Graph and I want to have a material for all of my objects that achieves the exact same effect (shows the same depth color), although I can't use the Scene Depth node. How can I generate the exact same color for my objects in Shader Graph?
As the depth is related to the distance between the camera and the objects, I'm trying to set the depth like this:
I take the vector (vertex world position - camera world position).
I project this vector onto the camera's direction vector.
I remap the length of the projected vector from (near plane, far plane) to (1, 0).
It looks like my depth is the same as _CameraDepthTexture, but when objects are too close to the camera, they are different (my version is darker).
How can I write a shader without the Scene Depth node that generates the exact same color as _CameraDepthTexture? My camera is orthographic with orthographic size 10.4, near = -50 and far = 50.
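For reference, the projection-and-remap described in the question can be sketched as C# (the same math applies per-fragment in a shader). This assumes the depth texture stores linear 0–1 depth for orthographic cameras; the (1, 0) remap in the question corresponds to the reversed-Z convention used on many platforms, which is one plausible source of the near-camera mismatch.

```csharp
using UnityEngine;

public static class OrthoDepth
{
    // Linear orthographic depth in the (0, 1) range, as _CameraDepthTexture
    // stores it on non-reversed-Z platforms; reversed-Z platforms store 1 - d.
    public static float Depth01(Vector3 vertexWorld, Camera cam)
    {
        // Signed distance along the camera's forward axis.
        float eyeDepth = Vector3.Dot(vertexWorld - cam.transform.position,
                                     cam.transform.forward);

        // Remap (near, far) to (0, 1); with the question's near = -50, far = 50
        // this is (eyeDepth + 50) / 100.
        return Mathf.InverseLerp(cam.nearClipPlane, cam.farClipPlane, eyeDepth);
    }
}
```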

Resolution for 2d pixel art games

I'm having problems setting the right resolution in Unity so that my pixel art assets don't have pixel distortion. When I create a tile grid, the assets look terrible in the preview tab.
I have a tilemap with a 64x32 resolution for each tile.
I'm using 64 pixels per unit.
The camera size is set to 5 at a 640x360 resolution (using the following formula: vertical resolution / PPU / 2).
What am I doing wrong, and what am I missing?
I don't know how the tiles are defined, but assuming they are rects with textures on top, you could check your texture filter setting and play with it a little, setting it, for example, to "anisotropic".
To solve this problem and get a "pixel perfect" view, you need to apply the following formula:
Camera size = height of the screen resolution / PPU (pixels per unit) / 2
This will do the job!
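A minimal sketch of applying this formula at runtime follows. Note that with the question's numbers the formula gives 360 / 64 / 2 = 2.8125, not the 5 the asker is using (5 = 640 / 64 / 2 comes from the horizontal resolution, whereas the formula uses the vertical one), which may itself be the source of the distortion.

```csharp
using UnityEngine;

[RequireComponent(typeof(Camera))]
public class PixelPerfectCameraSize : MonoBehaviour
{
    public int pixelsPerUnit = 64; // should match the PPU on the sprite import settings

    void Start()
    {
        // Camera size = vertical screen resolution / PPU / 2,
        // e.g. 360 / 64 / 2 = 2.8125 for a 640x360 target resolution.
        GetComponent<Camera>().orthographicSize =
            Screen.height / (float)pixelsPerUnit / 2f;
    }
}
```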

Unity How To Get The Camera borders as a vector3?

I need to get the camera borders as a Vector3 to store and use later in the game.
As the image shows, I need to get these positions, so that wherever the camera moves, I can determine the borders of my game.
You can use Camera.ScreenPointToRay(Vector3 pixelPosition) to get a ray from the camera that crosses a given pixel coordinate (the z position is ignored); then, if you multiply the ray's direction by a distance (your end plane distance?), you'll get the coordinates of your points.
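An equivalent sketch that skips the ray math uses Camera.ViewportToWorldPoint, where (0,0) and (1,1) are the opposite corners of the view and z is the distance from the camera at which to evaluate the corner:

```csharp
using UnityEngine;

public static class CameraBorders
{
    // World-space corners of the camera view at the given distance from the
    // camera (for an orthographic 2D camera any positive distance works).
    public static Vector3[] GetCorners(Camera cam, float distance)
    {
        return new[]
        {
            cam.ViewportToWorldPoint(new Vector3(0f, 0f, distance)), // bottom-left
            cam.ViewportToWorldPoint(new Vector3(1f, 0f, distance)), // bottom-right
            cam.ViewportToWorldPoint(new Vector3(0f, 1f, distance)), // top-left
            cam.ViewportToWorldPoint(new Vector3(1f, 1f, distance)), // top-right
        };
    }
}
```

Calling GetCorners every time the camera moves keeps the stored borders in sync with its position.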

Relationship of video coordinates before/after resizing

I have a 720x576 video that was played full screen on a screen with 1280x960 resolution, along with the corresponding eye-tracker gaze coordinate data.
I have built a gaze tracking visualization code but the only thing I am not sure about is how to convert my input coordinates to match the original video.
So, does anybody have an idea on what to do?
The native aspect ratio of the video (720/576 = 1.25) does not match the aspect ratio at which it was displayed (1280/960 = 1.33). i.e. the pixels didn't just get scaled in size, but in shape.
So assuming your gaze coordinates were calibrated to match the physical screen (1280 × 960), then you will need to independently scale the x coordinates by 720/1280 = 0.5625 and the y coordinates by 576/960 = 0.6.
Note that this will distort the actual gaze behaviour (horizontal saccades are being scaled by more than vertical ones). Your safest option would actually be to rescale the video to have the same aspect ratio as the screen, and project the gaze coordinates onto that. That way, they won't be distorted, and the slightly skewed movie will match what was actually shown to the subjects.
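The independent per-axis scaling described above can be sketched as a small helper; the two scale factors come directly from the resolutions in the question:

```csharp
public static class GazeMapping
{
    // Map a gaze sample calibrated on the 1280x960 screen onto the native
    // 720x576 video frame by scaling each axis independently.
    public static (double x, double y) ScreenToVideo(double gazeX, double gazeY)
    {
        return (gazeX * 720.0 / 1280.0,   // x scale = 0.5625
                gazeY * 576.0 / 960.0);   // y scale = 0.6
    }
}
```

As the answer notes, the alternative (rescaling the video to 1280x960 and leaving the gaze coordinates untouched) avoids distorting the gaze behaviour itself.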

Working with the coordinate system and game screen in Unity 2d?

So I've developed games in other platforms where the x/y coordinate system made sense to me. The top left representing the game screen with coordinates of (0,0) and the bottom right was (width,height). Now I'm trying to make the jump to Unity 2d and I can't understand how the game screen works. If I had a background object and a character object on the screen, when I move the character around his x and y values vary between -3 and 3... very small coordinates and it doesn't match the game resolution I have setup (1024x768). Are there good tutorials for understanding the game grid in Unity? Or can anyone explain how I can accomplish what I'm trying to do?
There are three coordinates systems in Unity: Screen coordinates, view coordinates and the world coordinates.
World coordinates: Think of the absolute positioning of the objects in your scene, using "points". You can choose to have the units represent any length you want, for example 1 unit = 10 meters. What is actually shown on the screen is determined by where the camera is placed and how it is oriented.
View Coordinates: The coordinates in the viewport of a given camera. The viewport is the imaginary rectangle through which the world is viewed. These coordinates are proportional, and range from (0,0) to (1,1).
Screen Coordinates: The actual pixel coordinates denoting the position on the device's screen.
Note that the world coordinates of any given object will always be the same regardless of which camera is used to view it, whereas the view coordinates depend on the camera being used. The screen coordinates additionally depend on the resolution of the device and the placement of the camera view on the screen.
The "Camera" object provides several methods to convert between these different coordinate systems like "ScreenToViewportPoint" "ScreenToWorldPoint" etc.
Example: Place object on top left of screen
float distanceFromCamera = 10.0f;
Vector3 pos = Camera.main.ScreenToWorldPoint (new Vector3 (0, Camera.main.pixelHeight, distanceFromCamera));
transform.position = pos;
The ScreenToWorldPoint function takes a Vector3 as an argument, where the x and y denote the pixel position on the screen ( 0,0 is the bottom left) and the z component denotes the desired distance from the camera. An infinite number of 3D locations can map to the same screen position, so you need to provide this value.
Just make sure that the desired position falls within the clipping region of the camera. Also, you might need to pick a proper pivot for your object depending on which part of your object you want centered on the top left.
Using:
Camera.main.WorldToScreenPoint (transform.position);
lets me convert my GameObject's transform position to the screen's x and y coordinate system.
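Put together as a minimal component, this conversion looks like the following; the returned z component is the object's distance in front of the camera:

```csharp
using UnityEngine;

public class ScreenPositionLogger : MonoBehaviour
{
    void Update()
    {
        // x/y are pixel coordinates with (0,0) at the bottom-left of the
        // screen; z is the distance from the camera in world units.
        Vector3 screenPos = Camera.main.WorldToScreenPoint(transform.position);
        Debug.Log($"Screen position: ({screenPos.x}, {screenPos.y}), depth {screenPos.z}");
    }
}
```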