texture mapping using bilinear interpolation - coordinates

I have been trying to achieve texture mapping using the bilinear interpolation method. I am rasterizing a triangle, so I first find the barycentric weights and then try to interpolate the UV coordinates, but I have had no success. Could someone tell me if I am doing something wrong?
// Barycentric weights of pixel (x4, y4) with respect to the triangle
// (x1, y1), (x2, y2), (x3, y3). b1 + b2 + b3 == 1 inside the triangle.
void baryWeights(int x4, int y4)
{
    float denom = (float)((x1*y2) - (x1*y3) - (x2*y1) + (x2*y3) + (x3*y1) - (x3*y2));
    b1 = ((x4*y2) - (x4*y3) - (x2*y4) + (x2*y3) + (x3*y4) - (x3*y2)) / denom;
    b2 = ((x1*y4) - (x1*y3) - (x4*y1) + (x4*y3) + (x3*y1) - (x3*y4)) / denom;
    b3 = ((x1*y2) - (x1*y4) - (x2*y1) + (x2*y4) + (x4*y1) - (x4*y2)) / denom;
}
// Interpolate the per-vertex UVs with the barycentric weights.
void linearinterpolation()
{
    uc = b1*v1u + b2*v2u + b3*v3u;
    vc = b1*v1v + b2*v2v + b3*v3v;
}
I get distorted texture mapping with this code.
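For what it's worth, here is a minimal sketch (not the posted code; names are illustrative) of the same idea written with the signed-area / edge-function form, which avoids retyping the long denominator three times:

// Signed-area term: the same expression is used for the full triangle and the sub-triangles.
float edge(float ax, float ay, float bx, float by, float px, float py)
{
    return (px - ax) * (by - ay) - (py - ay) * (bx - ax);
}

// Barycentric weights of pixel (px, py) w.r.t. triangle (x1,y1)-(x2,y2)-(x3,y3),
// then the interpolated UV at that pixel.
void interpolateUV(float px, float py,
                   float x1, float y1, float x2, float y2, float x3, float y3,
                   float u1, float v1, float u2, float v2, float u3, float v3,
                   float *u, float *v)
{
    float area = edge(x1, y1, x2, y2, x3, y3);      // twice the signed triangle area
    float b1 = edge(x2, y2, x3, y3, px, py) / area; // weight of vertex 1
    float b2 = edge(x3, y3, x1, y1, px, py) / area; // weight of vertex 2
    float b3 = 1.0f - b1 - b2;                      // weights always sum to 1

    *u = b1 * u1 + b2 * u2 + b3 * u3;
    *v = b1 * v1 + b2 * v2 + b3 * v3;
}

Note that this, like the posted code, is plain affine (screen-space) interpolation. If the triangle's vertices come out of a perspective projection, the usual fix for distorted textures is perspective-correct interpolation: interpolate u/w, v/w and 1/w with the same weights and divide per pixel.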

Related

What is the difference between arView.session.currentFrame.camera.transform and arview.cameraTransform?

I stumbled upon this while applying the camera's rotation to an entity in RealityKit. I thought I had to do some matrix math to get the Euler angles from the arView.session.currentFrame.camera.transform matrix, but retrieving the rotation from arView.cameraTransform.rotation did the trick (found here).
So: what is the difference between the two, and when should which be used?
Your first sample is the transform matrix that defines the camera’s rotation/translation in world coordinates.
open var transform: simd_float4x4 { get }
Your second sample is of a specialized type, Transform, built on top of such a matrix.
public var cameraTransform: Transform { get }
// or
@MainActor var cameraTransform: Transform { get }
Apple's definition:
Note that Transform is a specialized type that does not support all transformations that can be represented by a general 4x4 matrix. Setting the Transform from a 4x4 matrix is a lossy operation. It may result in certain transformations (such as shearing) being dropped!
However, in some situations cameraTransform is a more convenient property than transform.
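For illustration, here is a minimal Swift sketch of both approaches, assuming an existing arView: ARView and an entity to rotate (names are illustrative):

import ARKit
import RealityKit

@MainActor
func applyCameraRotation(to entity: Entity, in arView: ARView) {
    // 1) Raw 4x4 pose from ARKit: rotation and translation packed into one matrix;
    //    you extract the rotation yourself, e.g. as a quaternion.
    if let cameraMatrix = arView.session.currentFrame?.camera.transform {
        entity.orientation = simd_quatf(cameraMatrix)
    }

    // 2) RealityKit's decomposed Transform: rotation, translation and scale
    //    are already separated, so no extraction is needed.
    entity.orientation = arView.cameraTransform.rotation
}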

How do I decode a depthTexture into linear space in the [0-1] range in HLSL?

I've set a RenderTexture as my camera's target texture. I've chosen DEPTH_AUTO as its format so it renders the depth buffer.
I'm reading this texture in my shader, with float4 col = tex2D(_DepthTexture, IN.uv); and as expected, it doesn't show up linearly between my near and far planes, since depth textures have more precision towards the near plane. So I tried Linear01Depth() as recommended in the docs:
float4 col = tex2D(_DepthTexture, IN.uv);
float linearDepth = Linear01Depth(col);
return linearDepth;
However, this gives me an unexpected output. If I sample just the red channel, I get the non-linear depth map, and if I use Linear01Depth(), it goes mostly black.
Question: What color format is chosen when using DEPTH_AUTO in the Render Texture inspector, and how do I convert it to linear [0, 1] space? Is there a manual approach, like a logarithmic function, or an exponential function that I could use?
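For reference, Unity's built-in Linear01Depth (in UnityCG.cginc) is essentially a one-liner over the _ZBufferParams constants, so a manual version of the sampling and conversion might look like this sketch. It assumes the texture stores the same non-linear device depth the camera's z-buffer would hold, and that _ZBufferParams corresponds to the camera that rendered it:

// Sample only the red channel; the depth value lives there, not in all four components.
float rawDepth = tex2D(_DepthTexture, IN.uv).r;
// Manual equivalent of Linear01Depth(rawDepth).
float linear01 = 1.0 / (_ZBufferParams.x * rawDepth + _ZBufferParams.y);
return float4(linear01, linear01, linear01, 1);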

Swift: What matrix should be used to convert 3D point to 2D in ARKit/Scenekit

I am trying to use the ARCamera matrices to convert a 3D point to 2D in ARKit/SceneKit. Previously I used projectPoint to get the projected x and y coordinates, which works fine. However, the app slows down significantly and crashes when appending long recordings.
So I turned to another approach: using the ARCamera parameters to do the conversion myself. The Apple documentation for projectionMatrix did not say much, so I looked into the theory behind projection matrices (The Perspective and Orthographic Projection Matrix and the Metal Tutorial). My understanding is that for a 3D point P = (x, y, z), in theory we should be able to get the 2D point like so: P'(2D) = P(3D) * projectionMatrix.
Assuming that is the case, I did:
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    guard let arCamera = session.currentFrame?.camera else { return }
    // intrinsics: a matrix that converts between the 2D camera plane and 3D world coordinate space.
    // projectionMatrix: a transform matrix appropriate for rendering 3D content to match the image captured by the camera.
    print("ARCamera ProjectionMatrix = \(arCamera.projectionMatrix)")
    print("ARCamera Intrinsics = \(arCamera.intrinsics)")
}
I am able to get the projection matrix and the intrinsics (I even tried the intrinsics to see whether they change), but they are the same for every frame.
ARCamera ProjectionMatrix = simd_float4x4([[1.774035, 0.0, 0.0, 0.0], [0.0, 2.36538, 0.0, 0.0], [-0.0011034012, 0.00073593855, -0.99999976, -1.0], [0.0, 0.0, -0.0009999998, 0.0]])
ARCamera Intrinsics = simd_float3x3([[1277.3052, 0.0, 0.0], [0.0, 1277.3052, 0.0], [720.29443, 539.8974, 1.0]])...
I am not sure I understand what is happening here, as I expected the projection matrix to be different for each frame. Can someone explain the theory behind the projection matrix in SceneKit/ARKit and validate my approach? Am I using the right matrix, or am I missing something in the code?
Thank you so much in advance!
You would likely need to use the camera's transform matrix as well, since that is what changes between frames as the user moves the real-world camera around and the virtual camera's transform is updated to match it. Composing it with the projection matrix should get you into screen space.
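A rough sketch of that composition (the viewport size, near/far values and the function name here are illustrative, not the poster's code; orientation handling may need adjusting):

import ARKit
import UIKit

// Project a world-space point to 2D screen coordinates by composing the
// per-frame view matrix (camera transform inverse) with the projection matrix.
func projectToScreen(_ worldPoint: simd_float3,
                     camera: ARCamera,
                     viewportSize: CGSize,
                     orientation: UIInterfaceOrientation = .portrait) -> CGPoint? {
    let view = camera.viewMatrix(for: orientation)              // this changes every frame
    let proj = camera.projectionMatrix(for: orientation,
                                       viewportSize: viewportSize,
                                       zNear: 0.001, zFar: 1000)
    let clip = proj * view * simd_float4(worldPoint, 1)          // clip space
    guard clip.w != 0 else { return nil }
    let ndc = simd_float3(clip.x, clip.y, clip.z) / clip.w       // normalized device coordinates
    // NDC x,y are in [-1, 1]; flip y to get top-left-origin screen points.
    return CGPoint(x: CGFloat(ndc.x + 1) * 0.5 * viewportSize.width,
                   y: CGFloat(1 - ndc.y) * 0.5 * viewportSize.height)
}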

Adding a collider on a list of vectors

I'm trying to detect contours in a scene and add a collider to every detected object. I used the Canny edge detector to get the coordinates of the detected objects.
Here is my output image
I need to add a collider to each black line to prevent my game object from going in/out of that area but I don't know how to do so exactly.
The findContours function returns a list of detected contours, each stored as a vector of points, but how do I use that to generate a collider?
Thank you for your help.
Update
Here is my source code (for the update method)
void Update()
{
    if (initDone && webCamTexture.isPlaying && webCamTexture.didUpdateThisFrame) {
        // Convert the WebCamTexture frame to an OpenCV Mat.
        Utils.webCamTextureToMat(webCamTexture, rgbaMat, colors);
        // Convert to grayscale.
        Imgproc.cvtColor(rgbaMat, grayMat, Imgproc.COLOR_RGBA2GRAY);
        // Blur the grayscale image before edge detection.
        Imgproc.GaussianBlur(grayMat, blurMat, new Size(7, 7), 0);
        // Canny edge detection.
        Imgproc.Canny(blurMat, cannyMat, 50, 100);
        Mat inverted = ~cannyMat;
        // Convert back to a texture for display.
        Utils.matToTexture2D(inverted, texture, colors);
        // Extract the contours from the edge image.
        Mat hierarchy = new Mat();
        List<MatOfPoint> contours = new List<MatOfPoint>();
        Imgproc.findContours(inverted, contours, hierarchy, Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);
    }
}
Use a PolygonCollider2D.
You can edit the collider at runtime using the SetPath function, to which you pass a list of 2D points (which you already computed with the findContours function).
You can have several paths in the polygon if you want your collider to have holes.
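A rough, untested sketch of that, assuming the contours list from the question's Update method and that this runs on the same MonoBehaviour (pixel coordinates are used directly here; you will still need to map them into your scene's units):

// One collider path per detected contour.
void ApplyContourCollider(List<MatOfPoint> contours)
{
    PolygonCollider2D polygon = gameObject.AddComponent<PolygonCollider2D>();
    polygon.pathCount = contours.Count;

    for (int i = 0; i < contours.Count; i++)
    {
        Point[] points = contours[i].toArray();
        Vector2[] path = new Vector2[points.Length];
        for (int j = 0; j < points.Length; j++)
        {
            // Flip y: image coordinates grow downward, Unity's y grows upward.
            path[j] = new Vector2((float)points[j].x, -(float)points[j].y);
        }
        polygon.SetPath(i, path);
    }
}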

Why do vertices of a quad and the localScale of the quad not match in Unity?

I have a Quad whose vertices I'm printing like this:
public MeshFilter quadMeshFilter;
foreach (var vertex in quadMeshFilter.mesh.vertices)
{
    print(vertex);
}
And, the localScale like this:
public GameObject quad;
print(quad.transform.localScale);
Vertices are like this:
(-0.5, -0.5), (0.5, 0.5), (0.5, -0.5), (-0.5, 0.5)
while the localScale is:
(6.4, 4.8, 0)
How is this possible? The vertices form a square, but the localScale does not.
How do I use vertices and draw another square in front of the quad?
I am not well versed in the matters of meshes, but I believe I know the answer to this question.
Answer
How is this possible
Scale is a factor that your mesh's size is multiplied by along each axis (x, y, z). A scale of 1 is the default size, a scale of 2 is double size, and so on. Your local-space coordinates are then multiplied by this scale.
Say a local-space coordinate is (1, 0, 2) and the scale is (3, 1, 3); the result is (1*3, 0*1, 2*3) = (3, 0, 6).
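In Unity code, that componentwise multiplication is exactly what Vector3.Scale does; a tiny illustration:

// Componentwise scaling, matching the arithmetic above.
Vector3 localVertex = new Vector3(1f, 0f, 2f);
Vector3 scale = new Vector3(3f, 1f, 3f);
Debug.Log(Vector3.Scale(localVertex, scale)); // (3, 0, 6)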
How do I use vertices and draw another square in front of the quad?
I'd personally just create the object and then move it via Unity's Transform system, since that lets you change the world-space position with transform.position = new Vector3(1f, 5.4f, 3f);
You might be able to move each individual vertex in WorldSpace too, but I haven't tried that before.
I imagine it would involve this bit of code, though: vertices[i] = transform.TransformPoint(vertices[i]); since TransformPoint converts from local space to world space based on the Transform using it.
Elaboration
Why do I get lots of 0's and 5's in my local-space coordinates even though the object is positioned elsewhere in the world?
If I print the vertices of a quad using the script below, I get results that have 3 coordinates and can be multiplied componentwise by localScale.
Script:
Mesh mesh = GetComponent<MeshFilter>().mesh;
var vertices = mesh.vertices;
Debug.Log("Local Space.");
foreach (var v in vertices)
{
    Debug.Log(v);
}
This first result is what we call local space.
There is also something called world space, and you can convert between the two.
Local space is the object's mesh vertices relative to the object itself, while world space is the object's location in the Unity scene.
Converting gives the results below: first the local-space coordinates as above, then the world-space coordinates converted from them.
Here is the script I used to print the above result.
Mesh mesh = GetComponent<MeshFilter>().mesh;
var vertices = mesh.vertices;
Debug.Log("Local Space.");
foreach (var v in vertices)
{
    Debug.Log(v);
}
Debug.Log("World Space");
for (int i = 0; i < vertices.Length; ++i)
{
    vertices[i] = transform.TransformPoint(vertices[i]);
    Debug.Log(vertices[i]);
}
Good luck with your future learning process.
This becomes clear once you understand how Transform hierarchies work. It is a tree, in which each parent's transform matrix (built from its position, rotation and scale; rotation is actually a quaternion, but let's assume Euler angles for simplicity so the math works out) is applied to everything below it. By extension of this philosophy, the mesh itself can be seen as a child of the GameObject that holds it.
If you imagine a 1x1 quad (which is what your vertices describe) parented to a GameObject whose Transform has a localScale other than one, all the vertices in the mesh get multiplied by that scale, and the object's position gets added on top.
Now, if you parent that object to another GameObject and give that one its own localScale, all the vertex positions are again multiplied by that scale, translated by its position, and so on.
To answer your question: the global positions of your vertices are different from those stored in the source mesh because they are fed through a chain of Transforms all the way up to the scene root.
This is both the reason we only have localScale and not a world-space scale, and the reason why non-uniform scaling of objects that contain rotated children can sometimes give very strange results. Transforms stack.
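As a small illustration of that stacking (a minimal sketch, not the asker's code): transform.localToWorldMatrix already folds every parent's position, rotation and scale into one matrix, so feeding a vertex through it (or through transform.TransformPoint) yields its final world position:

using UnityEngine;

public class VertexWorldPositions : MonoBehaviour
{
    void Start()
    {
        Mesh mesh = GetComponent<MeshFilter>().mesh;
        foreach (Vector3 v in mesh.vertices)
        {
            // Scale, rotate, then translate, accumulated up the parent chain to the scene root.
            Vector3 world = transform.localToWorldMatrix.MultiplyPoint3x4(v);
            Debug.Log("local " + v + " -> world " + world);
        }
    }
}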