2022.02 won't load 2D meshes, .ply format - import

I have some 2D triangle meshes in .ply format that both Blender and Microsoft Paint 3D will open, but MeshLab (2022.02) will not. It gives the error message "No vertex field found". My guess is that it wants 3D meshes only. Or could it be something else?
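If the importer really does require x/y/z vertex fields, one quick test is to rewrite the file with a constant z = 0 and try again. A minimal Python sketch using the third-party plyfile package (file names are placeholders, and the assumption that the 2D file stores only x and y fields is mine):

    # Hypothetical workaround: rewrite a 2D .ply as 3D by appending z = 0
    # to every vertex, so importers that expect x/y/z fields can read it.
    import numpy as np
    from plyfile import PlyData, PlyElement

    ply = PlyData.read("mesh_2d.ply")          # placeholder file name
    verts = ply["vertex"].data                 # structured array; assumed fields: x, y

    # Build a new vertex array with an added z column set to 0.
    verts3d = np.empty(len(verts), dtype=[("x", "f4"), ("y", "f4"), ("z", "f4")])
    verts3d["x"] = verts["x"]
    verts3d["y"] = verts["y"]
    verts3d["z"] = 0.0

    elements = [PlyElement.describe(verts3d, "vertex")]
    if "face" in [el.name for el in ply.elements]:
        elements.append(ply["face"])           # keep the triangle list unchanged

    PlyData(elements, text=True).write("mesh_3d.ply")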

Related

Unity UVs are interpolated and don't work for textures

I have made a script for 3D voxel terrain in Unity, and it works. I have made a triplanar texturing shader in the Universal Render Pipeline; I set the mesh UVs in my script, and the shader uses the UVs to get an index from a Texture2DArray. My problem is that the UVs are interpolated, so between every vertex numerous other textures show through. Is there another way to pass per-vertex data through meshes that is not interpolated between vertices, or can I somehow get the UV coordinate at the nearest vertex in Shader Graph?
I tried using vertex colors, but they are interpolated too.

How to set the direction of arrows in Shader Graph

I'm pretty new to Shader Graph and shaders in general. I'm working on a 2D project, and I'm trying to make a shader that rotates an arrow to create a flow-like material for use on a sprite shape.
Basically, what I want to do is make a proper version of this:
What I'm currently doing is multiplying the Y position of the Position node by an exposed Vector1 and feeding it into a Rotate node (which I know is pretty hacky and won't work if the shape is not an arc).
Aligning UVs with an arbitrary mesh seems a bit hard. Why not bend a pre-made mesh instead? The graph below bends vertex positions around the Z axis at a given point and strength (though a strength of 0 makes the mesh invisible). You can easily replace that Position node with UV and plug the result into Sample Texture 2D, but I'd guess bending the mesh will give you better/easier results.
Create a subdivided and well UV-mapped rectangular plane.
Bend that plane with a vertex shader (the attached graph bends around the Z axis).
The graph is based on code from the Blender source; a rough transcription of the bend math follows below.
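For reference, here is a hedged numpy transcription of that bend (the math follows the bend in Blender's Simple Deform modifier; the function name and pivot handling are my own):

    import numpy as np

    def bend_around_z(positions, strength, pivot_x=0.0):
        """positions: (N, 3) array of vertex positions; strength: radians per unit X.
        A strength of 0 is degenerate (division by zero), which matches the
        'strength 0 makes the mesh invisible' note above."""
        x = positions[:, 0] - pivot_x   # bend starts at the pivot point
        y = positions[:, 1]
        theta = x * strength            # rotation angle grows with distance along X
        r = y - 1.0 / strength          # signed radius of the bending circle
        out = np.empty_like(positions)
        out[:, 0] = -r * np.sin(theta) + pivot_x
        out[:, 1] = r * np.cos(theta) + 1.0 / strength
        out[:, 2] = positions[:, 2]     # Z passes through unchanged
        return out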

How to transform a non-planar surface on a plane using a pair of 2D and 3D control points?

I have a set of control point pairs. One part of each pair is in world coordinates (3D); the other is in pixel coordinates of the image (2D).
My goal is to transform a surface you can see in this image onto a flat plane. The problem is that the surface is not perfectly flat; it looks somewhat like a ribbon. Otherwise I could have used OpenCV's getPerspectiveTransform() or Matlab's fitgeotrans().
I know that I can use OpenCV's solvePnP() or Matlab's estimateWorldCameraPose() to get the pose of the camera. The camera matrix is known and the image is rectified. But what is the next step? How can I transform my ribbon-shaped surface onto a flat plane, i.e. get an orthographic top view? That is the step I'm stuck on.
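Not an authoritative answer, but one way forward is to treat the ribbon as a height field: recover the camera pose from the 3D-2D pairs, interpolate z(x, y) from the control points, project a regular world-space grid into the photo, and resample. A sketch with OpenCV and SciPy (all names and grid sizes are illustrative):

    import cv2
    import numpy as np
    from scipy.interpolate import griddata

    def orthographic_top_view(image, world_pts, image_pts, K, out_size=(800, 600)):
        """image: rectified photo; world_pts: (N, 3); image_pts: (N, 2); K: 3x3 intrinsics."""
        world_pts = np.asarray(world_pts, dtype=np.float32)
        image_pts = np.asarray(image_pts, dtype=np.float32)
        ok, rvec, tvec = cv2.solvePnP(world_pts, image_pts, K, None)

        # Regular top-view grid in world X/Y; Z is interpolated from the
        # control points, treating the ribbon as a height field z(x, y).
        xs = np.linspace(world_pts[:, 0].min(), world_pts[:, 0].max(), out_size[0])
        ys = np.linspace(world_pts[:, 1].min(), world_pts[:, 1].max(), out_size[1])
        gx, gy = np.meshgrid(xs, ys)
        gz = griddata(world_pts[:, :2], world_pts[:, 2], (gx, gy), method="cubic")
        gz = np.nan_to_num(gz)  # crude fill outside the control-point hull

        # Project every grid cell into the photo and pull its pixel back out.
        grid = np.column_stack([gx.ravel(), gy.ravel(), gz.ravel()]).astype(np.float32)
        pix, _ = cv2.projectPoints(grid, rvec, tvec, K, None)
        map_x = pix[:, 0, 0].reshape(gx.shape).astype(np.float32)
        map_y = pix[:, 0, 1].reshape(gx.shape).astype(np.float32)
        return cv2.remap(image, map_x, map_y, cv2.INTER_LINEAR)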

How to apply a texture image in GLGravity Teapot iphone?

For an OpenGL example, Xcode provides the GLGravity project. But instead of showing a plain yellow color, how can I apply a texture image without textureCoordinates?
You need some kind of texture coordinate; otherwise the whole concept of textures makes no sense: a texture is a function mapping a set of n coordinates to some value (depth, luminance, alpha, colour, or a combination of those), defined by the data the samples are taken from and interpolated.
You can generate the texture coordinates, either statically from your mesh or in the vertex shader, or you can supply them directly. But you need some texture coordinates to make this work. A very cheap and simple texture coordinate generator is to use the vertex position as the texture coordinate; this will project your texture along the coordinate axes onto the model. So if you've got a 2D texture, it will be applied in the XY plane, as if there were a parallel-projecting slide projector at coordinates (0, 0, ∞).
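As a minimal sketch of that cheap generator (a numpy stand-in for the idea; GLGravity itself would do this in C on its vertex array), the vertex X/Y positions are used directly as UVs and normalized to the mesh's extent:

    import numpy as np

    def planar_uvs(vertices):
        """vertices: (N, 3) array of positions. Returns (N, 2) UVs that
        parallel-project the texture onto the XY plane, as described above."""
        uv = vertices[:, :2].astype(np.float64)
        # Normalize into [0, 1] so the texture covers the mesh's XY extent.
        uv -= uv.min(axis=0)
        extent = uv.max(axis=0)
        extent[extent == 0] = 1.0  # avoid division by zero on degenerate axes
        return uv / extent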

Getting 3D image from 2D image

I am doing a project in Matlab on image processing.
Is there any possibility of getting a 3D image from a 2D image?
If you have multiple images of the same object and the position of the camera when each picture was taken, then it is possible, but still not easy. You can find two such datasets and links to relevant articles here: http://vision.middlebury.edu/mview/
A 3D image would be a projection from 4D (and to show one of those you've got to project down to 2D). Most images that can be displayed on a computer or in a picture frame are 2D projections of 3D objects. Because this projection in effect selects a slice of the higher-dimensional space, a 2D image doesn't contain the information needed to invert the projection and get back to 3D.
But if you have sufficient sampling of the space, it is possible to reconstruct a 3D object from 2D images of it, though I don't know of any simple ways to do this.
You can't do this without supporting data, such as multiple 2D images describing the same 3D object. You then need to figure out the perspective from which each image was taken, reconcile those into real space, and generate your points using a method such as intersecting stereo lines through each image plane onto the same physical coordinate (sketched below).
You can also attempt a superpixel approach by exploiting lighting data within a single image, though these methods aren't as accurate.
This is a big field.
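For the stereo-line intersection step, OpenCV's linear triangulation does the ray intersection for you. A small sketch (the projection matrices and matched point arrays are assumed inputs):

    import cv2

    def triangulate(P1, P2, pts1, pts2):
        """P1, P2: 3x4 projection matrices K @ [R | t] for the two views;
        pts1, pts2: (2, N) arrays of matched pixel coordinates."""
        pts4d = cv2.triangulatePoints(P1, P2, pts1, pts2)  # homogeneous (4, N)
        return (pts4d[:3] / pts4d[3]).T                    # (N, 3) points in world space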
The Radon transform is used in tomography applications to reconstruct 3D representations (i.e. images) from many 2D projections of the 3D scene. This transform and its inverse are available in Matlab's Image Processing Toolbox; you might want to have a look at it.
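For illustration, the same round trip in Python with scikit-image (its radon and iradon mirror the Matlab toolbox functions of the same names; the phantom stands in for one 2D slice of the scene):

    import numpy as np
    from skimage.data import shepp_logan_phantom
    from skimage.transform import radon, iradon

    phantom = shepp_logan_phantom()
    angles = np.linspace(0.0, 180.0, 180, endpoint=False)

    sinogram = radon(phantom, theta=angles)          # forward: 2D projections
    reconstruction = iradon(sinogram, theta=angles)  # inverse: filtered back-projection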
Hope this helps.
A.