Apply material texture to a 3D model in Unity

I have a 3D model (part of a heart) and I have also created a texture to apply to it.
Unfortunately, when I apply it to the 3D model it doesn't look good, but when I did the same for a cube it worked nicely, as I expected. The figure is below.
You can see the cube looks more realistic; however, when I apply the texture to my model, it does not look right. Any suggestion as to why this is happening?

The cube is "unwrapped" - it has a UV Map. Your Heart-Mesh does not.
You need to UV-Map / Unwrap your Heart-Mesh.
In Blender:
For this, you could try "Smart UV Project" in Edit Mode, but that will create small islands and you will get a lot of seams.
By hand, you could mark seams and choose "Unwrap", which can result in a better UV map.
Alternative: Use a Triplanar Shader. Probably a good idea for a repeating texture like yours.
(I got that image from this reddit post: https://www.reddit.com/r/Unity3D/comments/ndh9ll/simple_triplanar_shader_in_unity/)
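A triplanar shader samples the texture along the world axes and blends by the surface normal, so no UV map is needed. Here is a minimal sketch of the idea as a Unity surface shader; the shader name, property names, and tiling factor are illustrative, not taken from the linked post:

    Shader "Custom/TriplanarSketch" {
        Properties {
            _MainTex ("Texture", 2D) = "white" {}
            _Tiling ("Tiling", Float) = 1.0
        }
        SubShader {
            Tags { "RenderType"="Opaque" }
            CGPROGRAM
            #pragma surface surf Lambert
            sampler2D _MainTex;
            float _Tiling;
            struct Input {
                float3 worldPos;
                float3 worldNormal;
            };
            void surf (Input IN, inout SurfaceOutput o) {
                // Blend weights from the world-space normal (sharpen with a pow() if needed)
                float3 blend = abs(IN.worldNormal);
                blend /= (blend.x + blend.y + blend.z);
                // Sample the same texture projected along each world axis
                fixed4 texX = tex2D(_MainTex, IN.worldPos.zy * _Tiling);
                fixed4 texY = tex2D(_MainTex, IN.worldPos.xz * _Tiling);
                fixed4 texZ = tex2D(_MainTex, IN.worldPos.xy * _Tiling);
                o.Albedo = (texX * blend.x + texY * blend.y + texZ * blend.z).rgb;
            }
            ENDCG
        }
        FallBack "Diffuse"
    }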

Related

Unity Point-cloud to mesh with texture/color

I have a point cloud and an RGB texture from a depth camera that fit together. I procedurally created a mesh from a selected part of the point cloud by implementing the quickhull 3D algorithm for mesh creation.
Now I somehow need to apply the texture I have to that mesh. Note that there can be multiple selected parts of the point cloud, thus making multiple objects that need the texture. The texture is just a basic 720p file that should be applied to the mesh material.
Basically I have to do this: https://www.andreasjakl.com/capturing-3d-point-cloud-intel-realsense-converting-mesh-meshlab/ but inside Unity. (I'm also using a RealSense camera)
I tried with a decal shader but the result is not precise. The UV map is completely twisted from the creation process, and I'm not sure how to generate a correct one.
(Image: the UV map and the mesh)
I only have two ideas, but I don't really know if they'll work or how to do them.
1. Try to create a correct UV map and then wrap the texture around somehow.
2. Somehow bake colors to vertices and then use vertex colors to create the desired effect.
What other things could I try?
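For idea 2, the display side can be a very small surface shader that just reads per-vertex colours (which would be written from C# into Mesh.colors when the mesh is built). A minimal sketch, not specific to the RealSense data:

    Shader "Custom/VertexColorSketch" {
        SubShader {
            Tags { "RenderType"="Opaque" }
            CGPROGRAM
            #pragma surface surf Lambert
            struct Input {
                float4 color : COLOR;   // per-vertex colors, e.g. assigned via Mesh.colors
            };
            void surf (Input IN, inout SurfaceOutput o) {
                o.Albedo = IN.color.rgb;
            }
            ENDCG
        }
        FallBack "Diffuse"
    }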
I'm working on quite a similar problem, but in my case I want to create a complete mesh from the point cloud, not just a quickhull, because I don't want to lose any depth information.
I'm nearly done with the mesh algorithm (I just need to do some optimizations). What is quite challenging now is matching the RGB camera's texture to the depth sensor's point cloud, because the two of course have different viewpoints.
Intel RealSense provides an interesting whitepaper about this problem, and as far as I know the SDK corrects these different perspectives with UV mapping and provides a red/green UV map stream for your shader.
Maybe the short report can help you out. Here's the link. I'm also very interested in what you are doing. Please keep us up to date.
Regards

Casting Civilization V based Hex Grid on Unity Terrain and Select Certain Areas of Grid

I am looking for an approach for casting a hex-based grid onto terrain. For now the terrain is pre-made, but eventually it will be procedural, for my exploration game where you can scan a planet and have elements highlighted / hex cells selected. What could be the approach to making this kind of hex grid, given that my terrain will be uneven?
I have seen approaches like mesh creation, tile-maps, and Unity Projectors, but I feel this should ultimately be done with shaders. What about selection, though?
Can someone please point me in the right direction?
I think this topic is better suited to https://gamedev.stackexchange.com .
My tips for you:
I think the hex grid projection can be solved with Unity's built-in Projector. You can use orthographic projection with it, so it does not matter that your terrain is uneven, and it has a convenient way to select which layers are affected (terrain, your buildings, etc.).
(The Projector is shader magic under the hood, though: it blends the picture you give it with the layers below it.)
If the Projector does not satisfy your needs, I'm pretty sure there are grid shaders already written for Unity.
As for selection, I think you could also solve that with a Projector, or give some trail effect to the grid boundaries; I guess you're still going to store the boundaries anyway.
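A Projector needs a material whose shader reads the projector matrix. A rough sketch of such a shader, modelled on Unity's built-in Projector shaders (the grid texture property and alpha blend mode are assumptions for this use case):

    Shader "Custom/HexGridProjectorSketch" {
        Properties {
            _GridTex ("Grid Texture", 2D) = "white" {}
        }
        SubShader {
            Tags { "Queue"="Transparent" }
            Pass {
                ZWrite Off
                Blend SrcAlpha OneMinusSrcAlpha
                Offset -1, -1
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                #include "UnityCG.cginc"

                sampler2D _GridTex;
                // Set automatically by the Projector component
                float4x4 unity_Projector;

                struct v2f {
                    float4 pos : SV_POSITION;
                    float4 uvProj : TEXCOORD0;
                };

                v2f vert (float4 vertex : POSITION) {
                    v2f o;
                    o.pos = UnityObjectToClipPos(vertex);
                    // Transform the vertex into the projector's frustum space
                    o.uvProj = mul(unity_Projector, vertex);
                    return o;
                }

                fixed4 frag (v2f i) : SV_Target {
                    // Perspective divide and sample the projected grid texture
                    return tex2Dproj(_GridTex, UNITY_PROJ_COORD(i.uvProj));
                }
                ENDCG
            }
        }
    }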
About country borders in Civ:
I think they cast a spline through the hex grid border points, then blend it onto the terrain. I once saw a shader that could draw lines on a terrain, so you might be able to find it!
Keywords to search for: Bézier, Catmull–Rom, spline, terrain shader

How to texture mesh? Shader vs. generated texture

I managed to create a map divided into chunks, each one holding a mesh generated using Perlin noise and so on; the basic procedural map method shown in multiple tutorials.
At this point I took a look at surface shaders and managed to write one which fades between multiple textures depending on the vertex heights.
This gives me a map which is colored smoothly.
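A height-fading surface shader of that sort can be sketched roughly like this; the two textures, property names, and the use of world-space Y as the height are assumptions rather than the asker's exact shader:

    Shader "Custom/HeightBlendSketch" {
        Properties {
            _LowTex ("Low Texture (e.g. grass)", 2D) = "white" {}
            _HighTex ("High Texture (e.g. rock)", 2D) = "white" {}
            _BlendStart ("Blend Start Height", Float) = 0.0
            _BlendEnd ("Blend End Height", Float) = 10.0
        }
        SubShader {
            Tags { "RenderType"="Opaque" }
            CGPROGRAM
            #pragma surface surf Lambert
            sampler2D _LowTex;
            sampler2D _HighTex;
            float _BlendStart;
            float _BlendEnd;
            struct Input {
                float2 uv_LowTex;
                float3 worldPos;
            };
            void surf (Input IN, inout SurfaceOutput o) {
                // 0 at _BlendStart, 1 at _BlendEnd, clamped in between
                float t = saturate((IN.worldPos.y - _BlendStart) / (_BlendEnd - _BlendStart));
                fixed4 low  = tex2D(_LowTex,  IN.uv_LowTex);
                fixed4 high = tex2D(_HighTex, IN.uv_LowTex);
                o.Albedo = lerp(low, high, t).rgb;
            }
            ENDCG
        }
        FallBack "Diffuse"
    }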
In the tutorials I watched, they seem to use different methods to texture a mesh. In this one, for example, a texture is generated for each mesh. The texture holds a different color depending on the noise value. It is applied to the mesh, and after that the mesh vertices are displaced depending on the z-value.
This results in a map with sharper borders between the colors, giving the whole thing a different look. I believe there is a way to create smoother transitions between the tile colors by fading them like I do in my shader.
My question is simply: what are the pros and cons of these methods? Let's call them "shader" and "texture map". I am lost right now, not knowing which direction to go in.

HLSL (Unity-specific ok, not necessary) combining Stencil and worldspace "reverse" clipping

I've built a working surface shader (call it "wonderland") that renders as invisible unless a companion "lookingGlass" shader intersects with it from the viewpoint of the camera. Simple stencil shader arrangement.
Easy peasy.
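(Sketched out, that arrangement is the usual pair of Stencil blocks; the Ref value of 1 here is purely illustrative:)

    // In the "lookingGlass" shader: write a reference value into the stencil buffer
    Stencil {
        Ref 1
        Comp Always
        Pass Replace
    }

    // In the "wonderland" shader: only draw where that value was written
    Stencil {
        Ref 1
        Comp Equal
    }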
I can add shader settings to specify a plane, or even just a minimum worldspace Z value, and use clip() to only render pixels on one side of that plane... (in other words, I could use that to trim the content that's allowed by the Stencil.)
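The plane-cutoff part could look roughly like this inside the surface function, assuming a _CutoffZ float property and worldPos in the Input struct (names are illustrative):

    void surf (Input IN, inout SurfaceOutput o) {
        // Discard any pixel whose world-space Z is below the cutoff plane
        clip(IN.worldPos.z - _CutoffZ);
        o.Albedo = tex2D(_MainTex, IN.uv_MainTex).rgb;
    }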
What I want to do is use the stencil on surfaces "through the looking glass" (to reveal geometry that's inside the looking glass), and always render those surfaces when they're on "our" side of the looking glass (to always show them if they're on this side of the looking-glass portal). E.g., if z < 0, render if the Stencil Ref value is satisfied; if z >= 0, render regardless.
Now, in Unity I can attach two materials to the MeshRenderer component (one with a stencil shader, one with a "plane cutoff" shader) - that works fine. It's pretty awesome, actually, at least visually. But while I haven't benchmarked it yet, I instinctively believe it's going to massively impact framerate if there are a number of objects with fairly complicated geometry, etc., set up with this arrangement.
(I can also manage shader attachment in code, and only do this when I expect something to transition, but I'm really hoping to get a unified shader out of this to avoid unnecessary draw calls.)
As it turns out, what I was looking to do is impossible.
The two shaders I wish to combine are both surface shaders. While you can combine multiple surface shaders into a multi-pass shader, you cannot combine multiple surface shaders with a Stencil and a clip() where the clip is applied to passes the Stencil is not, and vice versa.
There are combinations that can achieve parts of this, or can achieve the entire goal with surface and vert (or other non-surf) shaders, but the combination of requirements stipulated by this question isn't supported as desired.
While this does not answer the question, the workaround in Unity is to create two materials that provide each piece of functionality. They can both exist on the item that needs both pieces, and code can otherwise manage whether one or the other or both is actively in use.
Similar solutions would be available in other packages.

Shader-coding: nonlinear projection models

As I understand it, the standard projection model places an imaginary grid in front of the camera, and for each triangle in the scene, determines which 3 pixels its 3 corners project onto. The color is determined for each of these points, and the fragment shader fills in the rest using interpolation.
My question is this: is it possible to gain control over this projection model? For example, create my own custom distorted uv-grid? Or even just supply my own algorithm:
xyPixelPos_for_Vector3( Vector3 v ) {...}
I'm working in Unity3D, so I think that limits me to Cg or OpenGL.
I did once write a GLES2 shader, but I don't remember ever performing any kind of "ray hits quad" type test to resolve the pixel position of a particular 3D point in space.
I'm going to assume that you want to render 3d images based upon 3d primitives that are defined by vertices. This is not the only way to render images with OpenGL but it is the most common. The technique that you describe sounds much more like Ray-Tracing.
How OpenGL Typically Works:
I wouldn't say that OpenGL creates an imaginary grid. Instead, what it does is take the positions of each of your vertices, and converts them into a different space using linear algebra (Matrices).
If you want to start playing around with this, it would be best to do some reading on Matrices, to understand what the graphics card is doing.
You can easily start warping the positions of Vertices by making a vertex shader. However, there is some setup involved. See the Lighthouse tutorials (http://www.lighthouse3d.com/tutorials/glsl-tutorial/hello-world-in-glsl/) to get started with that! You will also want to read their tutorials on lighting (http://www.lighthouse3d.com/tutorials/glsl-tutorial/lighting/), to create a fully functioning vertex shader which includes a lighting model.
Thankfully, once the shader is set up, you can distort your entire scene to your heart's content. Just remember to do your distortions in the right 'space': world coordinates are very different from eye coordinates!
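Since the question mentions Unity, here is a rough sketch of that idea as a Unity vertex/fragment shader: do the ordinary linear projection first, then warp the projected position non-linearly. The fisheye-style formula and the _Strength property are illustrative assumptions, not a standard API:

    Shader "Custom/NonlinearProjectionSketch" {
        Properties {
            _MainTex ("Texture", 2D) = "white" {}
            _Strength ("Distortion Strength", Float) = 0.3
        }
        SubShader {
            Tags { "RenderType"="Opaque" }
            Pass {
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                #include "UnityCG.cginc"

                sampler2D _MainTex;
                float _Strength;

                struct v2f {
                    float4 pos : SV_POSITION;
                    float2 uv : TEXCOORD0;
                };

                v2f vert (appdata_base v) {
                    v2f o;
                    // Ordinary linear projection first
                    float4 clipPos = UnityObjectToClipPos(v.vertex);
                    // Then warp the projected x/y non-linearly, based on distance
                    // from the screen centre (a crude fisheye-style distortion)
                    float2 ndc = clipPos.xy / clipPos.w;
                    float r = length(ndc);
                    ndc *= 1.0 - _Strength * r * r;
                    clipPos.xy = ndc * clipPos.w;
                    o.pos = clipPos;
                    o.uv = v.texcoord.xy;
                    return o;
                }

                fixed4 frag (v2f i) : SV_Target {
                    return tex2D(_MainTex, i.uv);
                }
                ENDCG
            }
        }
    }

Note that the warp happens per vertex and the rasterizer still interpolates linearly between vertices, so this only approximates a curved projection; densely tessellated meshes, or applying the distortion as a post-process on the rendered image, give better results.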