I simply need to write text on a 3D sphere, but I want to be able to change it at runtime, so using a premade material is not an option.
I tried doing this with a render texture, and it works pretty well. However, with multiple objects showing different text, I found I needed multiple render textures, cameras, and layers so that the text objects in front of the cameras do not interfere with each other. Unity only provides 32 layers, and I don't know how many objects will be generated, so creating a layer per object doesn't seem like good practice.
Is there another approach besides render textures, or is there a way to reuse render textures for different texts?
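One possible direction (not from the original post): render textures can be created from script, so a single disabled camera can bake each string into its own RenderTexture one at a time, and no extra layers are needed. Below is a minimal sketch under that assumption; the names (TextBaker, bakeCamera, textTemplate) and the 512x512 size are made up for illustration:

    using UnityEngine;

    // Sketch: bake each text string into its own RenderTexture with a
    // single disabled camera, so no per-object layers are required.
    public class TextBaker : MonoBehaviour
    {
        public Camera bakeCamera;     // disabled camera that only sees the text object
        public TextMesh textTemplate; // one TextMesh positioned in front of bakeCamera

        public RenderTexture BakeText(string text)
        {
            var rt = new RenderTexture(512, 512, 16);
            textTemplate.text = text;
            bakeCamera.targetTexture = rt;
            bakeCamera.Render();             // manual one-off render
            bakeCamera.targetTexture = null;
            return rt;
        }
    }

Each sphere would then receive the returned texture via its material's mainTexture, so one camera and one layer can serve any number of objects.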
Overview
I'm working on a shader in Unity 2017.1 to enable UnityEngine.UI.Image components to blur what is behind them.
Like some of the approaches in this Unity forum topic, I use GrabPasses, specifically a tex2Dproj(_GrabTexture, UNITY_PROJ_COORD(<uv with offset>)) call, to look up the pixels I use in my blur summations. I'm doing a basic two-pass box blur and am not looking to optimize performance right now.
This works as expected:
I also want to mask the blur effect based on the image alpha. I use tex2D(_MainTex, IN.uvmain) to look up the sprite's alpha at the pixel I am calculating the blur for, which I then combine with the alpha of the blur.
This works fine when working with just a single UI.Image object:
The Problem
However, when I have multiple UI.Image objects that share the same Material created from this shader, images layered above will cut into the images below:
I believe this is because objects with the same material may be drawn simultaneously and so don't appear in each other's GrabPasses, or at least something to that effect.
That at least would explain why, if I duplicate the material and use each material on its own object, I don't have this problem.
Here is the source code for the shader: https://gist.github.com/JohannesMP/8d0f531b815dfad07823d44bc12b8112
The Question
Is there a way to force objects of the same material to draw consecutively and not in parallel? Basically, I would like the result of a lower object's render passes to be visible to the grab pass of subsequent objects.
I could imagine creating a component that dynamically instantiates materials to force this, or using render textures, but I would really like a solution that doesn't require adding components or creating multiple materials to swap out.
I would love a solution that is entirely self-contained within one shader/one material, but I am unsure if that is possible. I'm still only starting to get familiar with shaders, so I'm positive there are features I am not familiar with.
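For reference, the material-instantiating component imagined above can be quite small. This is only a sketch of that workaround (the class name is made up), and it trades away batching rather than solving the problem in the shader itself:

    using UnityEngine;
    using UnityEngine.UI;

    // Sketch of the workaround: give each Image its own material copy
    // so that each object performs its own GrabPass.
    [RequireComponent(typeof(Image))]
    public class UniqueBlurMaterial : MonoBehaviour
    {
        void Awake()
        {
            var image = GetComponent<Image>();
            image.material = Instantiate(image.material); // per-object instance
        }
    }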
It turns out that it was me re-drawing what I grabbed from the _GrabTexture that was causing the issue. By correctly handling the alpha logic there, I was able to get exactly the desired behavior:
Here is the updated source code: https://gist.github.com/JohannesMP/7d62f282705169a2855a0aac315ff381
As mentioned before, optimizing the convolution step was not my priority.
I'm using the Unreal Engine 4 VR Content Examples, which include a whiteboard you can draw on. It uses render targets to render the lines onto the canvas.
The problem is, when I copy the whiteboard to use somewhere else in the level, it shows the same drawing, like this:
Here is the material and texture I am using:
I tried making a copy of the material and the texture and using them on one of the whiteboards, but it has the same result. I'm not sure why the render target is not instanced/unique. Why does drawing end up on multiple instances of the whiteboard?
Edit (additional details): I made a copy of the original render target and tried specifying that instead, and I also made a material instance of the original and specified that for the copy, but the issue remains. I tried to dynamically create a render target and material instance, as you can see here: https://answers.unrealengine.com/questions/828892/drawing-on-one-whiteboard-render-target-is-copied.html , but then I couldn't draw on it, so I only applied it to two of the whiteboards, and it still had the same issue.
Feeding a different render target into a material works much like using a static texture: there must be multiple render target assets (created either in the editor or at runtime), and there must be different materials, or at least different material instances, with a unique render target assigned to each.
My recommendation is to make a set of material instances of the whiteboard material and duplicate the render target so that each whiteboard gets a unique one, assigned both on the material instance and on the whiteboard actor.
If that doesn't work, there may be some Blueprint logic embedded in the whiteboards that manages the render target at runtime. You could alternatively take this as a challenge and reimplement the whiteboard yourself.
In Unity, is there a way to give slight color variations to a scene (a tinge of purple here, some yellow blur there) without adjusting every single texture? And can that work in VR stereo images too (ideally in a semi-consistent way as one moves around, and preferably without computing-heavy colored lights)? Many thanks!
If your color effect is fixed, a simple way to achieve this is to add a canvas that renders a semi-transparent image over the whole screen. But I suppose you might prefer a dynamic effect.
To achieve that, look at Unity's post-processing stack. It allows you to add many post-process effects, such as chromatic aberration and color grading, that should let you do what you want.
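As a rough sketch of that approach, assuming the Post Processing Stack v2 package is installed and the camera has a PostProcessLayer, a tint can be applied from script at runtime (the class name and color values below are placeholders):

    using UnityEngine;
    using UnityEngine.Rendering.PostProcessing;

    // Sketch: create a global post-process volume at runtime that applies
    // a color filter; post-processing is applied per eye in VR.
    public class SceneTint : MonoBehaviour
    {
        PostProcessVolume volume;

        void Start()
        {
            var grading = ScriptableObject.CreateInstance<ColorGrading>();
            grading.enabled.Override(true);
            grading.colorFilter.Override(new Color(0.9f, 0.8f, 1.0f)); // slight purple
            // QuickVolume builds a global volume on the given layer.
            volume = PostProcessManager.instance.QuickVolume(gameObject.layer, 100f, grading);
        }

        void OnDestroy()
        {
            if (volume != null)
            {
                Destroy(volume.profile);    // QuickVolume creates a runtime profile
                Destroy(volume.gameObject);
            }
        }
    }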
I've been searching around for this one for a bit, and unfortunately I can't seem to find any good, consistent results. In the Unity UI system, buttons can stretch without becoming pixelated or distorted. This is because the texture is split into nine parts: the corners, the middle, and the sides.
This works because the button's middle and sides are stretched, but its corners are not, so the button does not appear pixelated at any dimensions.
So, the question is as follows: How can I do the same thing for a transparent, unlit texture in 3D space? I have a speech bubble texture on a flat plane that I know how to re-scale to fit the text in the speech bubble.
I've set the texture type to Multiple Sprite, and divided it up into 9 parts. However, I cannot seem to find where I can set the texture to act like the UI button does, and I'm not sure that this is even possible in this way in 3D space.
Is there a way, or should I just make the different parts of the texture different objects, and move them together? That would seem very inefficient and ugly compared to this.
To accomplish what you are asking, you would need to create tiles for the speech bubble and then write a script that procedurally builds the bubble based on the plane's scale. You could also try changing the texture's Filter Mode to Point.
However, I really don't think you should be using textures for this anyway. Why not use a Unity Canvas with its Render Mode set to World Space? Then you can make your text box a sprite rather than a texture and set its filter mode to Point (see below). This would also make it a lot easier to add text inside the speech bubble later on.
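As a sketch of that suggestion (all names here are made up, and the sprite is assumed to have its borders already set in the Sprite Editor), a world-space canvas with a sliced Image gives you 9-slice scaling in 3D:

    using UnityEngine;
    using UnityEngine.UI;

    // Sketch: a world-space canvas whose Image uses Sliced (9-slice) mode,
    // so the bubble stretches without distorting its corners.
    public class WorldSpaceBubble : MonoBehaviour
    {
        public Sprite bubbleSprite; // sprite with borders set in the Sprite Editor

        void Start()
        {
            var canvasGO = new GameObject("BubbleCanvas");
            canvasGO.transform.SetParent(transform, false);
            var canvas = canvasGO.AddComponent<Canvas>();
            canvas.renderMode = RenderMode.WorldSpace;

            var imageGO = new GameObject("Bubble");
            imageGO.transform.SetParent(canvasGO.transform, false);
            var image = imageGO.AddComponent<Image>();
            image.sprite = bubbleSprite;
            image.type = Image.Type.Sliced; // the 9-slice mode
            image.rectTransform.sizeDelta = new Vector2(200f, 100f); // resize freely
        }
    }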
I would like to add small text, fewer than 10 letters, to the surfaces of complex 3D models, such as a person or a building.
One way is to add the image texture to a material and then apply the material to the model, but that way I cannot control where the text is placed. For example, if I apply a material with a text texture to a cube, all six faces display the text, which is not what I want. I need the text displayed only once, wherever that may be.
What I need is just to add some text anywhere on the surface, even randomly.
Can I do this in Unity itself, without other software like Maya or Photoshop?
Thanks!
Try using a TextMesh (rendered via a MeshRenderer):
http://docs.unity3d.com/Manual/class-TextMesh.html
As always with Unity, there's a tool for that. :)
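A minimal sketch of that idea (the class and method names are made up, and a font asset is assumed to be assigned): raycast against the model and drop a TextMesh at the hit point, oriented to face away from the surface:

    using UnityEngine;

    // Sketch: place a TextMesh on whatever surface a ray hits.
    public class SurfaceText : MonoBehaviour
    {
        public Font font; // any imported font asset

        public void Place(string text, Vector3 origin, Vector3 direction)
        {
            RaycastHit hit;
            if (!Physics.Raycast(origin, direction, out hit))
                return;

            var go = new GameObject("SurfaceText");
            go.transform.position = hit.point + hit.normal * 0.01f; // lift slightly to avoid z-fighting
            go.transform.rotation = Quaternion.LookRotation(-hit.normal);

            var tm = go.AddComponent<TextMesh>(); // also adds the required MeshRenderer
            tm.text = text;
            tm.anchor = TextAnchor.MiddleCenter;
            tm.characterSize = 0.05f;
            tm.font = font;
            go.GetComponent<MeshRenderer>().material = font.material;
        }
    }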