3D points on a Unity Plane to flat 2D pixels on a texture - unity3d

If I had a sharp sword and I were to perfectly slice an object in half, I would like to sample the colours at various points along this flat, freshly cut face, and place these colours on a texture.
Imagine the face is a Unity Plane defined by a Vector3 normal and passing through a location Vector3 p.
Let the texture be a 100 x 100 pixel image.
Let's say the samples I want to take are three 3D points, all on this plane, defined as Vector3 A, B and C.
How do I go about converting the 3D points (x,y,z) from the defined plane into a 2D pixel (x,y) of this texture?
I have read many similar questions but honestly could not understand the answers. In my scenario I don't know whether I'm dealing with an orthographic or a perspective projection, whether I need to create a "conversion matrix", whether I need to be concerned about rotations, or if there is just a simpler solution.
I appreciate any tips or suggestions. Thanks.
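For what it's worth, the usual approach here (a sketch under my own assumptions, not something stated in the question) needs no conversion matrix and no perspective at all: pick two perpendicular directions lying in the plane, express each 3D point as (u, v) distances along those directions measured from p, and scale (u, v) into the 100 x 100 pixel range. The worldSize parameter below, and the choice of p as the texture centre, are assumptions you have to make yourself because the question does not fix a scale or an origin.

using UnityEngine;

public static class PlaneToTexture
{
    // Maps a world-space point lying on the plane (normal n, through point p)
    // to pixel coordinates on a texWidth x texHeight texture.
    // worldSize is the assumed world-space extent covered by the texture,
    // with p used as the texture's centre.
    public static Vector2Int WorldToPixel(Vector3 point, Vector3 n, Vector3 p,
                                          float worldSize, int texWidth, int texHeight)
    {
        n = n.normalized;

        // Build two perpendicular axes (tangent, bitangent) spanning the plane.
        Vector3 reference = Mathf.Abs(Vector3.Dot(n, Vector3.up)) < 0.99f ? Vector3.up : Vector3.right;
        Vector3 tangent = Vector3.Cross(n, reference).normalized;
        Vector3 bitangent = Vector3.Cross(n, tangent);

        // 2D coordinates of the point within the plane, relative to p.
        Vector3 offset = point - p;
        float u = Vector3.Dot(offset, tangent);
        float v = Vector3.Dot(offset, bitangent);

        // Map [-worldSize/2, +worldSize/2] to [0, texWidth) and [0, texHeight).
        int px = Mathf.Clamp(Mathf.RoundToInt((u / worldSize + 0.5f) * (texWidth - 1)), 0, texWidth - 1);
        int py = Mathf.Clamp(Mathf.RoundToInt((v / worldSize + 0.5f) * (texHeight - 1)), 0, texHeight - 1);
        return new Vector2Int(px, py);
    }
}

Each sample then maps to a pixel with something like WorldToPixel(A, normal, p, sliceSize, 100, 100); the only rotation involved is implicit in the choice of tangent and bitangent.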

Related

How to set direction of arrows in shadergraph

I'm pretty new to shader graph and shaders in general. I'm working on a 2D project and I'm trying to make a shader that rotates an arrow to make a flow-like material and use it on a sprite shape.
Basically what I want to do is make a proper version of this:
What I'm currently doing is multiplying the Y position of the Position node by an exposed Vector1 and using it in a Rotate node (which I know is pretty hacky and won't work if the shape is not an arc).
Aligning UVs with an arbitrary mesh seems a bit hard. Why not bend a pre-made mesh instead? The graph below bends vertex positions around the Z axis at a given point and strength (a strength of 0 makes the mesh invisible, though), but you can easily replace that Position node with UV and plug the results into a Sample Texture 2D node. I just guess bending a mesh will give you better/easier results.
Create a subdivided and well UV-mapped rectangle plane
Bend that plane with a vertex shader (the attached graph bends around the Z axis)
The graph is based on code from the Blender source; a rough script-side sketch of the same bend follows below.
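For readers who prefer script code to nodes, here is a minimal C# sketch of the same idea: it rotates each vertex's XY coordinates around the Z axis by an angle proportional to its X distance from a pivot, which curls a flat strip into an arc. The exact formula in the attached graph (and in the Blender source it was taken from) may differ, so treat this as an illustration rather than a transcription.

using UnityEngine;

[RequireComponent(typeof(MeshFilter))]
public class BendAroundZ : MonoBehaviour
{
    public float strength = 1f;      // bend angle (radians) per unit of X distance from the pivot
    public Vector3 pivot = Vector3.zero;

    void Start()
    {
        Mesh mesh = GetComponent<MeshFilter>().mesh;
        Vector3[] vertices = mesh.vertices;

        for (int i = 0; i < vertices.Length; i++)
        {
            Vector3 v = vertices[i] - pivot;
            float angle = v.x * strength;            // angle grows along X
            float cos = Mathf.Cos(angle);
            float sin = Mathf.Sin(angle);
            // Rotate the (x, y) part of the vertex around the Z axis by 'angle'.
            vertices[i] = pivot + new Vector3(v.x * cos - v.y * sin,
                                              v.x * sin + v.y * cos,
                                              v.z);
        }

        mesh.vertices = vertices;
        mesh.RecalculateNormals();
        mesh.RecalculateBounds();
    }
}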

How to transform a non-planar surface onto a plane using pairs of 2D and 3D control points?

I have a set of control point pairs. One part of each pair is in world coordinates (3D). The other one is in pixel coordinates of the image (2D).
My goal is to transform a surface you can see in this image onto a flat plane. The problem is that the surface is not perfectly flat; it looks a bit like a ribbon. Otherwise I could have used OpenCV's getPerspectiveTransform() or Matlab's fitgeotrans().
I know that I can use OpenCV's solvePnP() or Matlab's estimateWorldCameraPose() to get the pose of the camera. The camera matrix is known and the image is rectified. But what is the next step then? How can I transform my ribbon-shaped surface onto a flat plane, i.e. get an orthographic top view? That is the step I'm stuck on.

How do you calculate the 2D Plane on which an array of Vector3 points is sitting?

Assume you have a large array of Vector3 points. These points were plotted by hand by the player when asked to draw a 2D shape (e.g. a 2D triangle) in a 3-dimensional environment (drawing 2D shapes with an HTC Vive controller in a virtual environment).
I would like to project these points onto a plane to 'flatten' the shape they drew. To do this properly I need to know the average plane they're sitting on, so that there is minimal 'squishing/distorting' when projecting them onto it.
Basically, I want to ask them to draw any 2D shape in the air (which won't be perfectly 2D, because humans), rotated any way they want, and convert that shape into a 2D picture with a known plane for further manipulation.
If you would like to know more specifics I would be happy to provide them.
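A rough sketch of the standard approach (my own illustration, not from the thread): use the centroid of the points as a point on the plane, estimate the normal (Newell's method works well for an ordered stroke; a full least-squares fit would instead take the smallest eigenvector of the points' covariance matrix), and then subtract each point's out-of-plane component.

using UnityEngine;

public static class PlaneFit
{
    // Estimates a plane (centroid + normal) from an ordered stroke of points
    // and projects each point onto that plane.
    public static Vector3[] FlattenOntoBestPlane(Vector3[] points, out Vector3 centroid, out Vector3 normal)
    {
        centroid = Vector3.zero;
        foreach (Vector3 p in points) centroid += p;
        centroid /= points.Length;

        // Newell's method: accumulate cross-product terms over consecutive points.
        normal = Vector3.zero;
        for (int i = 0; i < points.Length; i++)
        {
            Vector3 current = points[i];
            Vector3 next = points[(i + 1) % points.Length];
            normal.x += (current.y - next.y) * (current.z + next.z);
            normal.y += (current.z - next.z) * (current.x + next.x);
            normal.z += (current.x - next.x) * (current.y + next.y);
        }
        normal = normal.normalized;

        // Project every point onto the plane through 'centroid' with 'normal'.
        Vector3[] flattened = new Vector3[points.Length];
        for (int i = 0; i < points.Length; i++)
        {
            float distance = Vector3.Dot(points[i] - centroid, normal);
            flattened[i] = points[i] - distance * normal;
        }
        return flattened;
    }
}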

Understanding depth values in 3D point cloud

I have problems understanding the depth (Z) value in a 3D point cloud resulting from 3D sparse reconstruction, like this example in MATLAB: http://www.mathworks.com/help/vision/ug/sparse-3-d-reconstruction-from-multiple-views.html
I have attached a picture showing the reconstructed 3D point cloud in the above example. I have put some datatips on the figure so we know the (x,y,z) coordinates of the points. Here are my questions:
1- What does the Z value in the point cloud represent? Is it the distance in millimeters from the camera? If that's the case then it does not make sense based on the picture I attached, since I am sure the distance of the sphere and checkerboard from the camera must be greater than 200 mm.
Or maybe it is measured from some reference point in space? Then what is this reference point? And how can I make a 3D point cloud in which the Z values indicate the distance from the camera?
2- Why are there negative values for Z? What does that mean in terms of distance to the camera?
I'd appreciate it if someone could explain.
In this example the world coordinates are defined by the checkerboard. The checkerboard defines the X-Y plane, and the Z-axis points into the checkerboard, as explained in the documentation:
Since your 3D points are above the checkerboard, they have negative Z-coordinates.
Your (x,y,z) coordinates are in world units, which are completely disconnected from metric values (unless you establish a scale between world and metric units; there are various methods to do that). So the Z value tells you about the depth of each point in world coordinates.
If you have the pose of each camera and you multiply each point by that camera's extrinsic matrix (the rotation and translation derived from its pose), you get the (x',y',z') points in camera coordinates. At that point, if z' is negative, it means the point is behind the camera.
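As a small worked illustration of that last step (a C# sketch rather than MATLAB, with a made-up rotation R and translation t): p_cam = R * p_world + t, and the sign of the resulting z component tells you whether the point is in front of or behind the camera.

using System;

class WorldToCameraDemo
{
    // p_cam = R * p_world + t, where R (3x3) and t (3x1) come from the camera pose.
    static double[] Transform(double[,] R, double[] t, double[] pWorld)
    {
        var pCam = new double[3];
        for (int row = 0; row < 3; row++)
        {
            pCam[row] = t[row];
            for (int col = 0; col < 3; col++)
                pCam[row] += R[row, col] * pWorld[col];
        }
        return pCam;
    }

    static void Main()
    {
        // Placeholder pose: identity rotation, camera offset 0.5 world units along Z.
        var R = new double[,] { { 1, 0, 0 }, { 0, 1, 0 }, { 0, 0, 1 } };
        var t = new double[] { 0, 0, 0.5 };

        // A point with a negative world Z (i.e. above the checkerboard in this example).
        double[] pCam = Transform(R, t, new double[] { 0.1, -0.2, -0.3 });

        // pCam[2] > 0 means the point is in front of the camera; < 0 means behind it.
        Console.WriteLine($"camera-space z = {pCam[2]}");
    }
}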

What is DOT3 lighting?

An answer to my question suggests that DOT3 lighting can help with OpenGL ES rendering, but I'm having trouble finding a decent definition of what DOT3 lighting is.
Edit 1
iPhone-related information is greatly appreciated.
DOT3 lighting is often referred to as per-pixel lighting. With vertex lighting the lighting is calculated at every vertex and the result is interpolated over the triangle. In per-pixel lighting, as the name implies, the objective is to calculate the lighting at every pixel.
The way this is done on fixed-function hardware such as the iPhone is with so-called register combiners. The name DOT3 comes from this render state:
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_DOT3_RGB);
Look at this entry on Wolfgang Engel's blog for more info on exactly how to set this up.
When doing per-pixel lighting it's popular to also use a so-called normal map. This means that the normal of every point on an object is stored in a special texture map, a normal map. This was popularized in the game DOOM 3 by id Software, where fairly low-polygon models were used together with high-resolution normal maps. The reason for using this technique is that the eye is more sensitive to variation in lighting than to variation in shape.
I saw in your other question that the reason this came up was that you wanted to reduce the memory footprint of the vertex data. This is true: instead of storing three components for a normal in every vertex, you only need to store two components for the texture coordinates into the normal map. Enabling per-pixel lighting comes with a performance cost though, so I'm not sure it will be a net win; as usual, the advice is to try it and see.
Finally, the diffuse lighting intensity at a point is proportional to the cosine of the angle between the surface normal and the direction of the light. For two vectors the dot product is defined as:
a dot b = |a||b| cos(theta)
where |a| and |b| are the lengths of the vectors a and b respectively and theta is the angle between them. If the lengths are equal to one, a and b are referred to as unit vectors and the formula simplifies to:
a dot b = cos(theta)
This means that the diffuse lighting intensity is given by the dot product between the surface normal and the direction of the light, and in that sense all diffuse lighting is a form of DOT3 lighting, even if the name has come to refer to the per-pixel kind.
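To make that concrete, here is a tiny C#/Unity-flavoured sketch of the same computation; clamping to zero for surfaces facing away from the light is a standard addition, not something stated above.

using UnityEngine;

public static class Diffuse
{
    // Lambertian diffuse: intensity = max(0, n . l), where n is the unit surface
    // normal and l is the unit vector pointing from the surface towards the light.
    public static float Intensity(Vector3 surfaceNormal, Vector3 directionToLight)
    {
        float nDotL = Vector3.Dot(surfaceNormal.normalized, directionToLight.normalized);
        return Mathf.Max(0f, nDotL);   // back-facing surfaces receive no diffuse light
    }
}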
From here:
Bumpmapping is putting a texture on a model where each texel's brightness defines the height of that texel.
The height of each texel is then used to perturb the lighting across the surface.
Normal mapping is putting a texture on a model where each texel's color is really three values that define the direction that location on the surface points.
A color of (255, 0, 0) for example, might mean that the surface at that location points down the positive X axis.
In other words, each texel is a normal.
The Dot3 name comes from what you actually do with these normals.
Let's say you have a vector which points in the direction your light source points. And let's say you have the vector which is the normal at a specific texel on your model that tells you which direction that texel points.
If you do a simple math operation called a "dot product" on these two normalized vectors, like so:
Dot = N1x*N2x + N1y*N2y + N1z*N2z
Then the resulting value is a number which tells you how much those two vectors point in the same direction.
If the value is -1, then they point in opposite directions, which actually means that the texel is pointing at the light source, and the light source is pointing at the texel, so the texel should be lit.
If the value is 1, then they point in the same direction, which means the texel is pointing away from the light source.
And if the value is 0, then one vector is at 90 degrees to the other. I.e., if you are standing on the ground looking forward, then your view vector is at 90 degrees to the normal of the ground, which points up.
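Putting the two halves of this answer together, here is a small C# sketch (my own illustration) of decoding a normal-map texel into a direction and taking the DOT3 product with the light direction, using the same sign convention as above (the light vector points the way the light shines, so -1 means the texel faces the light head-on).

using UnityEngine;

public static class Dot3Lighting
{
    // A normal map stores each component of the normal remapped from [-1, 1] into
    // the texel's colour channels. Decode the texel back into a direction vector.
    public static Vector3 DecodeNormal(Color texel)
    {
        return new Vector3(texel.r * 2f - 1f,
                           texel.g * 2f - 1f,
                           texel.b * 2f - 1f).normalized;
    }

    // The DOT3 step: dot the decoded per-texel normal with the light direction.
    // -1: texel faces the light head-on; 0: the light grazes the surface; 1: faces away.
    public static float Dot3(Color texel, Vector3 lightDirection)
    {
        return Vector3.Dot(DecodeNormal(texel), lightDirection.normalized);
    }
}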