I'm using the Unreal Engine 4 VR Content Examples where it has a whiteboard you can draw on. It uses render targets to render the line to the canvas.
The problem is, when I copy the whiteboard to use somewhere else in the level, it shows the same drawing, like this:
Here is the material and texture I am using:
I tried making a copy of the material and the texture and using them on one of the whiteboards, but the result is the same. I'm not sure why the render target isn't instanced/unique. Why do multiple instances of the whiteboard draw onto the same thing?
Edit (Additional Details): I made a copy of the original render target and tried specifying that instead. I also made a material instance of the original and assigned it to the copy, but the issue remains. I tried dynamically creating a render target and material instance as described here https://answers.unrealengine.com/questions/828892/drawing-on-one-whiteboard-render-target-is-copied.html , but then I couldn't draw on it; I only applied that approach to two of the whiteboards and they still showed the same issue.
A material that samples a render target treats it much like a static texture: for each whiteboard to show different content, there must be multiple render target assets (created either in the editor or at runtime) and different materials, or at least different material instances, each with its own unique render target assigned.
My recommendation is to make a set of Material Instances of that whiteboard material, and make sure to duplicate the render targets to get a unique one per whiteboard, which is set both on the material instance and the whiteboard actor.
If that doesn't work, there may be some runtime Blueprint logic for managing the render target embedded in the whiteboards. You could alternatively take this as a challenge and reimplement the whiteboard yourself.
I simply need to write text on a 3D sphere, but I want to be able to change it at runtime, so using a premade material is not an option.
I tried doing this with a render texture and it works pretty well, but with multiple objects showing multiple different texts I noticed that I need multiple render textures, cameras, and layers so that the texts in front of the cameras do not interfere with each other. Unity only provides 32 layers, and I don't know in advance how many objects will be generated, so creating a layer per object doesn't seem like good practice.
Is there another approach besides render textures, or is there a way to use a render texture for different texts?
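For reference, the per-object setup described above (one camera, one layer, and one render texture per object) would look roughly like the following C# sketch. The texture size, layer index, and field names are illustrative assumptions, not code from the question.

    using UnityEngine;

    // Per-object render-texture setup: each sphere gets its own camera,
    // layer, and RenderTexture so the texts do not interfere with each other.
    public class TextToSphere : MonoBehaviour
    {
        [SerializeField] Camera _textCamera;   // camera looking at this sphere's text
        [SerializeField] Renderer _sphere;
        [SerializeField] int _textLayer = 8;   // each object needs its own layer

        void Start()
        {
            var rt = new RenderTexture(512, 512, 0);
            _textCamera.targetTexture = rt;
            _textCamera.cullingMask = 1 << _textLayer;

            // Accessing .material gives this renderer its own material instance;
            // otherwise every sphere would sample the same texture.
            _sphere.material.mainTexture = rt;
        }
    }

This is exactly where the 32-layer limit bites: the layer index has to be unique per object.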
Overview
I'm working on a shader in Unity 2017.1 to enable UnityEngine.UI.Image components to blur what is behind them.
Like some of the approaches in this Unity forum topic, I use GrabPasses, specifically a tex2Dproj(_GrabTexture, UNITY_PROJ_COORD(<uv with offset>)) call to look up the pixels that I use in my blur summations. I'm doing a basic 2-pass box blur and am not looking to optimize performance right now.
This works as expected:
I also want to mask the blur effect based on the image alpha. I use tex2D(_MainTex, IN.uvmain) to look up the alpha color of the sprite on the pixel I am calculating the blur for, which I then combine with the alpha of the blur.
This works fine when working with just a single UI.Image object:
The Problem
However when I have multiple UI.Image objects that share the same Material created from this shader, images layered above will cut into the images below:
I believe this is because objects with the same material may be drawn simultaneously and so don't appear in each other's GrabPasses, or at least something to that effect.
That at least would explain why, if I duplicate the material and use each material on its own object, I don't have this problem.
Here is the source code for the shader: https://gist.github.com/JohannesMP/8d0f531b815dfad07823d44bc12b8112
The Question
Is there a way to force objects of the same material to draw consecutively and not in parallel? Basically, I would like the result of a lower object's render passes to be visible to the grab pass of subsequent objects.
I could imagine creating a component that dynamically instantiates materials to force this, or using render textures, but I would really like a solution that doesn't require adding components or creating multiple materials to swap out.
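For completeness, the component idea mentioned above might look something like this hedged C# sketch; the class name is made up, and it assumes the blur material is assigned to a UI.Image:

    using UnityEngine;
    using UnityEngine.UI;

    // Hypothetical helper: clones the shared material per Image so that each
    // object renders with its own material and therefore gets its own GrabPass.
    [RequireComponent(typeof(Image))]
    public class UniqueBlurMaterial : MonoBehaviour
    {
        void Awake()
        {
            var image = GetComponent<Image>();
            // Instantiate() copies the shared material; this object now renders
            // with its own instance instead of the one shared by every Image.
            image.material = Instantiate(image.material);
        }

        void OnDestroy()
        {
            // Destroy the clone to avoid leaking materials.
            Destroy(GetComponent<Image>().material);
        }
    }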
I would love a solution that is entirely self-contained within one shader/one material, but I am unsure whether this is possible. I'm still only starting to get familiar with shaders, so I'm positive there are some features I am not familiar with.
It turns out that the issue was caused by how I was re-drawing what I grabbed from the _GrabTexture. By handling the alpha logic there correctly, I was able to get exactly the desired behavior:
Here is the updated source code: https://gist.github.com/JohannesMP/7d62f282705169a2855a0aac315ff381
As mentioned before, optimizing the convolution step was not my priority.
I have a post-process effect that uses Unity's Graphics.Blit to pixelate or apply a CRT-screen effect to a scene. There are some UI elements that display after the fact (basically making it not a true post-process, but let's put that aside for a moment).
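For context, a minimal sketch of this kind of Graphics.Blit post-process is shown below; the effect material field is an illustrative assumption. Note that OnRenderImage runs before screen-space-overlay UI is drawn, which is why the UI elements escape the effect.

    using UnityEngine;

    [RequireComponent(typeof(Camera))]
    public class ScreenEffect : MonoBehaviour
    {
        [SerializeField] Material _effectMaterial; // hypothetical pixelate/CRT material

        // Runs after the camera renders the scene, but before
        // screen-space-overlay UI is composited on top.
        void OnRenderImage(RenderTexture source, RenderTexture destination)
        {
            Graphics.Blit(source, destination, _effectMaterial);
        }
    }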
Now, I want to apply a second process that performs a screen wipe and transitions out one scene for another. This time I want to include the UI buttons in the effect.
I've looked into using a render texture and then rendering to a second camera, but I feel like there is a smarter/more accurate way to handle this.
1st version of this question:
Can I selectively include the screenspace-overlay UI in the script that applies the post process?
or, 2nd version of this question
Is there a trick to getting the render texture to preserve resolution and display accurately (i.e. without losing quality) when re-rendering to a second camera?
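On the render-texture route mentioned in the question, a hedged sketch of the capture side might look like the following; the field names are illustrative, and it assumes the UI canvas is set to Screen Space - Camera so the buttons are included in the capture. Matching the RenderTexture to Screen.width and Screen.height is what keeps the re-rendered image from losing resolution.

    using UnityEngine;

    public class WipeCapture : MonoBehaviour
    {
        [SerializeField] Camera _sourceCamera;    // renders scene + camera-space UI
        [SerializeField] Material _wipeMaterial;  // hypothetical screen-wipe material

        RenderTexture _capture;

        void Start()
        {
            // Match the screen resolution so nothing is lost when re-rendering.
            _capture = new RenderTexture(Screen.width, Screen.height, 24);
            _sourceCamera.targetTexture = _capture;
            _wipeMaterial.mainTexture = _capture;
        }

        void OnDestroy()
        {
            _sourceCamera.targetTexture = null;
            _capture.Release();
        }
    }

A second camera (or a full-screen quad) would then display _wipeMaterial and drive the transition.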
Let's say for example that I'm making a tile map editor.
We have the editor, which handles the drawing of the tiles, and we have the tileset which is used to determine what tiles are drawn.
The editor needs to depend on the tileset to know which tiles should be drawn, and the tileset needs to depend on the editor to know the dimensions of the tiles to be drawn, as well as other minor details.
This creates tightly coupled code. Is this a code smell? If so, how do I resolve it?
Do I stuff everything into a large class? Do I use a mediator to communicate between the two classes?
Write your tileset first, complete with tests, then move on to the editor. In doing so you will have solved your problem without even considering it: a tileset built and tested in isolation cannot depend on the editor, so the dependency ends up pointing only one way (for example, the tileset owns the tile dimensions and the editor reads them).
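A minimal, engine-agnostic C# sketch of where that tends to lead (all names here are illustrative):

    // The tileset knows nothing about the editor; it owns the tile dimensions.
    public sealed class TileSet
    {
        public int TileWidth { get; }
        public int TileHeight { get; }
        private readonly string[] _tileIds;

        public TileSet(int tileWidth, int tileHeight, string[] tileIds)
        {
            TileWidth = tileWidth;
            TileHeight = tileHeight;
            _tileIds = tileIds;
        }

        public string TileAt(int index) => _tileIds[index];
    }

    // The editor depends on the tileset, never the other way around.
    public sealed class TileMapEditor
    {
        private readonly TileSet _tiles;

        public TileMapEditor(TileSet tiles) => _tiles = tiles;

        // Pixel position of a tile cell, derived from the tileset's dimensions.
        public (int x, int y) TileToPixel(int column, int row) =>
            (column * _tiles.TileWidth, row * _tiles.TileHeight);
    }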
I have created a level editor using the new UI system in Unity3D.
Using the level editor:
I can drag the UI elements,
and I am able to save them in scriptable objects.
But how can I save and load them at runtime?
I mean, if I create a level at some {width x height} resolution and then load it at a different resolution, the whole UI positioning gets distorted.
How can I anchor them correctly programmatically?
Any help would be appreciated.
There are good video tutorials about the 4.6.x UI here:
http://unity3d.com/learn/tutorials/modules/beginner/ui.
Especially for positioning the elements I recommend watching: http://unity3d.com/learn/tutorials/modules/beginner/ui/rect-transform.
I suggest you learn how to use anchors correctly first. If you want to change the position and size of a UI element in a script, you can do that through the RectTransform component. It offers properties like offsetMin, offsetMax, and sizeDelta for positioning your UI element; which ones you need to use depends on your anchor settings.
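For example, a small hedged sketch of driving this from code (the margin and size values are placeholders):

    using UnityEngine;

    public class AnchorExample : MonoBehaviour
    {
        void Start()
        {
            var rect = GetComponent<RectTransform>();

            // Stretch the anchors across the parent; offsetMin/offsetMax then
            // act as margins from the parent's edges, so the element adapts
            // to any resolution.
            rect.anchorMin = Vector2.zero;
            rect.anchorMax = Vector2.one;
            rect.offsetMin = new Vector2(10f, 10f);    // left, bottom margins
            rect.offsetMax = new Vector2(-10f, -10f);  // right, top (negative = inset)

            // With non-stretching anchors you would instead use sizeDelta to
            // set the size and anchoredPosition to place the element:
            // rect.sizeDelta = new Vector2(100f, 30f);
            // rect.anchoredPosition = Vector2.zero;
        }
    }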
As LokiSinclair said, you just need to adjust the anchors that the new UI provides. They are the four arrows shown on each object under the Canvas, and every UI object inherits its scale behavior from its parent.