How to color individual parts of a curve in Unity in real time?

So I have a curve mesh that I've imported from Blender. What I want to do is place individual points along the curve (at fixed distances) and then color each one based on a specific number.
Let's say, for instance, the number 1000 maps to red and the number 10 maps to blue (along a color gradient). These numbers get updated in real time, so I want the colors to update in real time too.
I have no idea how to approach this problem. I've looked up some ideas but couldn't find anything useful.
Thank you.

Bad news: this is by no means easy to do.
Unity is a game engine; it's just not built for this type of problem, you know?
I believe the only real solution would be to do it at the shader level.
As you know, writing shaders is a whole engineering field in itself. I suggest just googling things like "Unity3d color gradient, shader" in the first instance.
For example,
http://answers.unity3d.com/questions/1108472/3-color-linear-gradient-shader.html
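Before going full shader, here is a minimal sketch of the general idea in C#: map each point's number through a Gradient and write the results into the mesh's vertex colors. This assumes the material uses a shader that reads vertex colors, and the values array is a placeholder for wherever your real-time numbers come from:

```csharp
using UnityEngine;

// Sketch: recolor a curve mesh in real time by mapping one value per
// vertex onto a Gradient. Assumes the material's shader reads vertex
// colors (e.g. an unlit vertex-color shader).
[RequireComponent(typeof(MeshFilter))]
public class CurveValueColorer : MonoBehaviour
{
    public Gradient gradient;      // e.g. blue at t = 0, red at t = 1
    public float minValue = 10f;   // maps to gradient time 0
    public float maxValue = 1000f; // maps to gradient time 1

    Mesh mesh;
    Color[] colors;

    void Start()
    {
        mesh = GetComponent<MeshFilter>().mesh;
        colors = new Color[mesh.vertexCount];
    }

    // 'values' is a placeholder: one number per vertex, updated elsewhere.
    public void UpdateColors(float[] values)
    {
        for (int i = 0; i < colors.Length; i++)
        {
            float t = Mathf.InverseLerp(minValue, maxValue, values[i]);
            colors[i] = gradient.Evaluate(t);
        }
        mesh.colors = colors; // cheap enough to do every frame for small meshes
    }
}
```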
One thing you can conceivably do: make the rope white and apply lights to it! That is, colored narrow-beam lights or carefully placed diffuse lights, perhaps in conjunction with layers for the lighting. It's not inconceivable that it could work. Enjoy!
Note: in fact the OP solved the problem just by using colored lights, which is a viable solution in some cases.
Please note, the GL API is also a good fit here.
user2299169 gives an outstanding link in the comments above:
http://gamedev.stackexchange.com/questions/96964/how-to-correctly-draw-a-line-in-unity
GL Doco: https://docs.unity3d.com/ScriptReference/GL.LINES.html
Thanks user2299169.
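To make the GL route concrete, here is a hedged sketch: draw colored segments between sampled points from OnRenderObject. The points and colors arrays are stand-ins for your own sampled curve data, and the material must use a shader that supports vertex colors:

```csharp
using UnityEngine;

// Sketch of the GL.LINES approach: immediate-mode colored line segments.
// 'points' and 'colors' are placeholders for your sampled curve data.
public class GLCurveDrawer : MonoBehaviour
{
    public Material lineMaterial; // needs a vertex-color-capable shader
    public Vector3[] points;      // sampled positions along the curve
    public Color[] colors;        // one color per point, same length

    void OnRenderObject()
    {
        if (points == null || points.Length < 2) return;

        lineMaterial.SetPass(0);
        GL.PushMatrix();
        GL.MultMatrix(transform.localToWorldMatrix);
        GL.Begin(GL.LINES);
        for (int i = 0; i < points.Length - 1; i++)
        {
            GL.Color(colors[i]);      // start of segment
            GL.Vertex(points[i]);
            GL.Color(colors[i + 1]);  // end of segment
            GL.Vertex(points[i + 1]);
        }
        GL.End();
        GL.PopMatrix();
    }
}
```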

Related

Why does a Unity material not render semi-transparency properly?

I have a Unity material whose albedo is based on a spritesheet. The sprite has semi-transparency, and is formatted as RGBA 32-bit.
Now, the transparency renders in the sprite, but not in the material.
How do I get the semi-transparency to render without also making the supposedly opaque parts of the albedo transparent?
I have tried setting render mode to transparent, fade, and unlit/transparent. The result looks like this:
I tried opaque, but it ruins the texture. I tried cutout, but then the semi-transparent parts either drop out or become fully opaque (depending on the cutout threshold).
There is no code involved here.
I expect the output to make the semi-transparent parts of the material semi-transparent, and the opaque parts opaque. The actual output is either fully opaque or fully "semi-transparent", which is super annoying.
Edit
So I put the work off for a while and added a submesh, and it's really close to solving the problem.
It's still showing that glitch, though.
Okay, good news and bad news. The good news is, this problem is not uncommon. It's not even unique to Unity. The bad news is, the reason it's not uncommon or unique to Unity is because it's a universal issue with no perfect solution. But we may be able to find you a work around, so let's go through this together.
There's a fundamental issue in 3D Graphics: In what order do you draw things? If you're drawing a regular picture in real life, the obvious answer is you draw the things based on how far away from the viewer they are. This works fine for a while, but what do you do with objects that aren't cleanly "in front" of other things? Consider the following image:
Is the fruit in that basket in front of the bowl, or behind it? It's kind of neither, right? And even if you can split objects up into front and back, how do you deal with intersecting objects? Enter the Z-Buffer:
The Z-Buffer is a simple idea: When drawing the pixels of an object, you also draw the depth of those pixels. That is, how far away from the camera they are. When you draw a new object into the scene, you check the depth of the underlying pixel and compare it with the depth of the new one. If the new pixel is closer, you overwrite the old one. If the old pixel is closer, you don't do anything. The Z Buffer is generally a single channel (read: greyscale) image that never gets shown directly. As well as depth sorting, it can also be used for various post processing effects such as fog or ambient occlusion.
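If it helps to see the idea as code, here is a purely illustrative C# sketch of the per-pixel test; real GPUs do this in hardware, not in your scripts:

```csharp
using UnityEngine;

// Illustrative only: the per-pixel depth test a GPU performs when drawing.
class ZBufferDemo
{
    const int W = 1920, H = 1080;
    float[,] zBuffer = new float[W, H];     // one depth value per pixel
    Color[,] colorBuffer = new Color[W, H]; // the visible image

    public ZBufferDemo()
    {
        // Each frame starts with the depth buffer cleared to "infinitely far".
        for (int x = 0; x < W; x++)
            for (int y = 0; y < H; y++)
                zBuffer[x, y] = float.PositiveInfinity;
    }

    void WritePixel(int x, int y, float depth, Color color)
    {
        if (depth < zBuffer[x, y])       // the new pixel is closer
        {
            zBuffer[x, y] = depth;       // remember the new closest depth
            colorBuffer[x, y] = color;   // overwrite the old color
        }
        // Otherwise the existing pixel is closer: discard the new one.
    }
}
```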
Now, one key component of the depth buffer is that it can only store one value per pixel. Most of the time, this is fine; After all, if you're just trying to sort a bunch of objects, the only depth you really care about is the depth of the front-most one. Anything behind that front-most object will be rendered invisible, and that's all you need to know or care about.
That is, unless your front-most object is transparent.
The issue here is that the renderer doesn't know how to deal with drawing an object behind a transparent one. To avoid this, a smart renderer (including Unity) goes through the following steps:
Draw all opaque objects, in any order.
Sort all transparent objects by distance from the camera.
Draw all transparent objects, from furthest to closest.
This way, the chances of running into weird depth sorting issues are minimized. But this will still fall apart in a couple of places. When you make your object use a transparent material, the fact that 99% of the object is actually solid is completely irrelevant. As far as Unity is concerned, your entire object is transparent, and so it gets drawn according to its depth relative to other transparent objects in the scene. If you've got lots of transparent objects, you're going to have problems the moment you have intersecting meshes.
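In Unity terms, this ordering shows up as the material's render queue: roughly, queue values up to 2500 are treated as opaque, and anything above that is treated as transparent and sorted back-to-front. A small hedged sketch for inspecting (or, sparingly, overriding) it:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Sketch: inspect or nudge the queue a material is drawn in.
// Built-in values: Geometry = 2000, AlphaTest = 2450, Transparent = 3000.
public class RenderQueueProbe : MonoBehaviour
{
    void Start()
    {
        Material mat = GetComponent<Renderer>().material;
        Debug.Log(mat.name + " draws in queue " + mat.renderQueue);

        // Uncomment to force this material to draw just after the other
        // transparents; use sparingly, as it fights the depth sorter.
        // mat.renderQueue = (int)RenderQueue.Transparent + 1;
    }
}
```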
So, how do you deal with these problems? You have a few options.
The first and most important thing you can do is limit transparent materials to areas that are explicitly transparent. I believe the rendering order is based on materials above all else, so having a mesh with several opaque materials and a single transparent one will probably work fine, with the opaque parts being rendered before the single transparent part, but don't quote me on that.
Secondly, if you have alternatives, use them. The reason "cutout" mode seems to be a binary mask rather than real transparency is because it is. Because it's not really transparent, you don't run into any of the depth sorting issues that you typically would. It's a trade-off.
Third, try to avoid large intersecting objects with transparent materials. Large bodies of water are notorious for causing problems in this regard. Think carefully about what you have to work with.
Finally, if you absolutely must have multiple large intersecting transparent objects, consider breaking them up into multiple pieces.
I appreciate that none of these answers are truly satisfying, but the key takeaway from all this is that this is not a bug so much as a limitation of the medium. If you're really keen you could try digging into custom render pipelines that solve your problem explicitly, but keep in mind you'll be paying a premium in performance if you do.
Good luck.
You said you tried the Transparent shader mode, but did you try changing the Alpha channel value in your material's color after that?
In the second image it looks like the Alpha in RGBA is 0; try changing it.
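If you'd rather test that from code than from the color picker, here is a quick sketch (assuming a shader whose _Color property carries the tint and alpha, as the Standard shader does in Fade/Transparent mode):

```csharp
using UnityEngine;

// Quick test: give the material's main color a non-zero alpha.
public class AlphaTweak : MonoBehaviour
{
    void Start()
    {
        Material mat = GetComponent<Renderer>().material;
        Color c = mat.color;   // reads the shader's "_Color" property
        c.a = 0.5f;            // 0 = invisible, 1 = fully opaque
        mat.color = c;
    }
}
```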

Draw curved lines with texture and glow with Unity

I'm looking for an efficient way to draw curved lines and to make an object follow them in Unity.
I also need to draw them using a custom image and not a solid color.
And on top of that I would like to apply an outer glow to them, and not to the rest of the scene.
I don't ask for a copy/paste solution for each of these elements, I list them all to give some context.
I did something similar in a web app using the HTML5 canvas to draw text progressively. Here's a gif showing the render:
I only used small line segments to draw what you see above. Here's a very big letter with thicker lines so the lines are more visible:
Of course it's not perfect, but the goal was to keep it simple and efficient. And the gaps on the outer edges are not very visible at normal size.
This is used in an educational game running on mobile as a progressive web app. In real-world usage I attach a particle emitter to it for a better effect:
And it runs smoothly even on low-end devices.
I don't want to recreate this exact effect in Unity, but the core functionality is very close.
Because of how I did it the first time, I thought about creating a big list of segments to draw manually, but Unity may have better tools for creating this kind of thing, maybe working directly with Bézier curves.
I'm a beginner in Unity, so I don't really know what the most efficient way to do it is.
I looked at the Line Renderer, which seemed (at first) to be a good choice, but I'm a little bit worried about performance with a list of 500+ points (considering mobiles are a target).
Also, the glow I would like to add may affect which technique to choose.
Do you have any advice or direction to give me?
Thank you very much.
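Not a full answer, but a hedged sketch of the Bézier-plus-LineRenderer direction the question mentions: sample a cubic Bézier into a modest number of points, feed them to a LineRenderer (which takes care of the texture via its material), and move a follower along the same curve. The sample count, travel time, and follower field are illustrative assumptions:

```csharp
using UnityEngine;

// Sketch: sample a cubic Bézier into a LineRenderer and move an object
// along it. A few dozen samples per curve is often enough; 500+ points
// in one LineRenderer can usually be avoided by splitting into segments.
[RequireComponent(typeof(LineRenderer))]
public class BezierPath : MonoBehaviour
{
    public Transform p0, p1, p2, p3; // control points
    public Transform follower;       // object that travels along the curve
    public int samples = 40;
    public float travelTime = 2f;    // seconds from start to end

    LineRenderer line;
    float t;

    static Vector3 Cubic(Vector3 a, Vector3 b, Vector3 c, Vector3 d, float u)
    {
        float v = 1f - u;
        return v * v * v * a + 3f * v * v * u * b
             + 3f * v * u * u * c + u * u * u * d;
    }

    void Start()
    {
        line = GetComponent<LineRenderer>();
        line.positionCount = samples;
        for (int i = 0; i < samples; i++)
        {
            float u = i / (samples - 1f);
            line.SetPosition(i, Cubic(p0.position, p1.position,
                                      p2.position, p3.position, u));
        }
    }

    void Update()
    {
        // Advance the follower along the same parametric curve.
        t = Mathf.Min(t + Time.deltaTime / travelTime, 1f);
        follower.position = Cubic(p0.position, p1.position,
                                  p2.position, p3.position, t);
    }
}
```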

Displaying ARKit nodes in relation to real objects

I am trying to draw a box that can help someone understand the dimensions of an item, but I keep having the issue that, since I first need to recognize a plane and then put my physical item on top of it, my box gets drawn in front of the item.
Is it possible to somehow overcome this?
@John Scalo is right: your problem is not having to first detect a plane; it's that your render engine doesn't know that part of your green box frame is occluded (hidden) by a real-world object.
"…to somehow overcome this"
Yes, and by doing so you might be "solving" your original problem: helping someone understand the dimensions of an item.
(Depending on your choice of render engine, e.g. SceneKit) You can add an invisible 3D object that has the same dimensions as the real-world object; the render engine will then "know" that some parts of your box frame are behind this (invisible to the user) 3D object. Therefore, you can tell it not to draw those parts of your box frame, which will give the illusion (borrowing from Apple here) that your soda can has the box around it.
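If you happen to be doing this in Unity rather than SceneKit, the same invisible-occluder trick looks roughly like the sketch below; "Custom/DepthMask" is a placeholder name for any shader that writes depth but no color (e.g. a pass containing ColorMask 0):

```csharp
using UnityEngine;

// Sketch of the invisible-occluder idea: a box that hides virtual
// geometry behind it while the camera feed still shows the real object.
// "Custom/DepthMask" is a placeholder for a depth-only shader you supply.
public class InvisibleOccluder : MonoBehaviour
{
    public Vector3 size = new Vector3(0.07f, 0.12f, 0.07f); // soda-can-ish

    void Start()
    {
        GameObject occluder = GameObject.CreatePrimitive(PrimitiveType.Cube);
        occluder.transform.SetParent(transform, false);
        occluder.transform.localScale = size;

        // The shader writes depth but no color, so anything rendered
        // behind the occluder is hidden without the box itself showing.
        Material mat = new Material(Shader.Find("Custom/DepthMask"));
        occluder.GetComponent<Renderer>().material = mat;
    }
}
```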
These workarounds are inaccurate, but maybe their accuracy is enough for the level of realism you are trying to achieve:
Option 1: After detecting the desk surface, place a semi-transparent 3D object over the soda can and then resize it (gestures/buttons, your choice) until it roughly matches the dimensions of the soda can. Then confirm that you're done, don't draw a texture on it at all, and just let it occlude the green box frame.
Option 2: Hold your device near the edges of the soda can and add "enough" ARAnchors to be able to create a "bounding shape" that (again) can be used to capture the real-world object and occlude it.
Option 3: (intense, and perhaps the least accurate) Use your finger to "brush" over the object from various angles, and on each touch perform a hit test (hopefully the top/nearest hit is a part of your soda can) and build up a "bounding shape" that way.
Option X: any combination of 1 - 2 - 3.
Good luck, there are lots of people trying to work around this device/ARKit limitation at the moment, so keep your eyes open for good ideas.
The problem you're dealing with is called occlusion, and ARKit doesn't (currently?) include occlusion support. Maybe some day soon iPhones and iPads will begin to ship with LIDAR (or similar), in which case ARKit will be able to detect objects in the scene, making occlusion much easier.

Roads are looking blurry in Unity

I am trying to make some realistic-looking roads in my game, but the problem is that the roads are a little bit blurry at a distance and don't look real. Can anyone please guide me on how to solve this? I have uploaded my pic here.
Is there a possible solution for this?
Yes, it is a common problem with textures rendered at a certain distance/angle. You should increase the aniso level in order to apply anisotropic filtering to the blurry texture (you can change the aniso level in the texture's import settings, as in the picture below).
If the road is created via a Terrain component (which I doubt, since you already have a sandy terrain at the bottom), you should change the basemap distance.
Also check the Quality settings; Anisotropic Textures might be disabled there.
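Both knobs are also scriptable if you'd rather set them from code than the inspector (the values here are just examples):

```csharp
using UnityEngine;

// Example: raise anisotropic filtering for a road texture from code.
public class RoadSharpness : MonoBehaviour
{
    public Texture2D roadTexture; // assign the blurry road texture

    void Start()
    {
        // 1 = off, up to 16; higher values stay sharper at glancing angles.
        roadTexture.anisoLevel = 9;

        // Make sure the project isn't globally disabling it.
        QualitySettings.anisotropicFiltering = AnisotropicFiltering.ForceEnable;
    }
}
```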

What libraries, data, or algorithms exist for the simulation of paint?

I would like to model oil and acrylic paint on a canvas in such a way that I can add a brush stroke to the canvas and have the colours mix.
I don't want to animate this happening, I just want to be able to model the final outcome of a brush stroke on existing paint.
Any suggestions?
The program ArtRage does this very well; I think by looking at it you can get a good idea of how to approach this.
Each pixel on the canvas needs to store several attributes related to paint; you can imagine that each pixel will store, at a minimum, the amount and color of the paint at that point. Painting would then be a matter of starting with a set amount of paint on the brush and, as the mouse traces a path, removing some from the brush and adding some to the affected pixels.
This is just an overview of the most simple way to do this, there are many more details that will make this look much better (such as shading with a light source to get the 'bump' appearance).
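A toy sketch of that per-pixel model (everything here is illustrative, and the weighted average is a crude stand-in for real pigment mixing):

```csharp
using UnityEngine;

// Toy model: each canvas cell stores an amount and a color of paint;
// a brush dab deposits paint and mixes colors weighted by amount.
public class PaintCanvas
{
    struct Cell
    {
        public float amount; // how much paint sits at this pixel
        public Color color;  // its current mixed color
    }

    Cell[,] cells = new Cell[512, 512];

    // Deposit 'deposit' units of 'brushColor' at pixel (x, y).
    public void Dab(int x, int y, float deposit, Color brushColor)
    {
        Cell c = cells[x, y];
        float total = c.amount + deposit;
        c.color = total > 0f
            ? (c.color * c.amount + brushColor * deposit) / total
            : brushColor;
        c.amount = total;
        cells[x, y] = c;
    }
}
```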
You need to study fluid dynamics. It's far too complicated for me, but it might be a good starting point for you. I'm not sure many people will be able to help you with this. Good luck.