I am attempting to render a specific section of my scene using a separate camera and a render texture. The object is on a separate layer that the main camera does not render, but the secondary camera does. The secondary camera's target texture is set to a render texture that I have created. Everything is working as intended, except that the object, when rendered to a texture, has a bunch of seams that are not present when rendering directly to the screen.
What it looks like when rendered directly to the screen:
Correct
What it looks like when rendered to a texture, and then displayed on a quad in the scene:
Incorrect
Notice how the second image has a bunch of transparent "lines" in between the sprites where there shouldn't be any.
I am using a basic transparent shader to display the render texture on the quad (since the background isn't part of the render texture, just the black crowd part). I have tried a number of different shaders, and none of them seem to make a difference.
The render texture's settings are: Width: Screen.width, Height: Screen.height, Format: RenderTextureFormat.ARGBFloat.
Unity Version: 5.2.3f1 - iOS Platform
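For reference, the camera and texture are wired up roughly like this (a minimal sketch; the class and field names are just illustrative):

```csharp
using UnityEngine;

public class CrowdRenderSetup : MonoBehaviour
{
    public Camera crowdCamera;   // secondary camera, culling mask limited to the crowd layer
    public Renderer displayQuad; // quad in the scene that shows the result

    RenderTexture crowdTexture;

    void Start()
    {
        // Same settings as listed above.
        crowdTexture = new RenderTexture(Screen.width, Screen.height, 24,
                                         RenderTextureFormat.ARGBFloat);

        // Clear to a fully transparent colour so only the crowd layer ends up in the texture.
        crowdCamera.clearFlags = CameraClearFlags.SolidColor;
        crowdCamera.backgroundColor = new Color(0f, 0f, 0f, 0f);
        crowdCamera.targetTexture = crowdTexture;

        displayQuad.material.mainTexture = crowdTexture;
    }
}
```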
Note: The reason I am doing this is so that I can apply a "Blur" image effect to the texture, and make the crowd in the foreground appear to be out of focus. Any alternative suggestions for how to do this are also welcome.
I'm not quite sure -- but it almost sounds like you have line ghosting. You may want to give this a read and let me know if that's what you're dealing with or not:
This is due to how the texture image was authored, combined with the filtering that most 3d engines use when textures are displayed at different sizes on screen.
Your image may have coloured areas which are completely opaque, coloured areas which are partially transparent, and areas which are completely transparent. However, the areas where your alpha channel is completely transparent (0% opacity) still have a colour value too. PNGs (or at least the way Photoshop exports PNGs) seem to default to white for the completely transparent pixels. With other formats or editors, this may be black. Both are equally undesirable when it comes to use in a 3d engine.
You may think, "why is the white colour a problem if it's completely transparent?". The problem occurs because when your texture appears on screen, it's usually either upscaled or downscaled depending on whether the pixels in the texture's image appear larger or smaller than actual size. For downsizing, a series of downscaled versions gets created during import. These downscaled versions get used when the texture is displayed at smaller sizes or steeper angles in relation to the view; this is intended to improve visual quality and make rendering faster. This process is called "mip-mapping". For upscaling, simple bilinear interpolation is normally used.
The scaled versions are usually created using simple bilinear interpolation, which means that the transparent pixels are mixed with the neighbouring visible pixels. With the mipmaps, the problem of invisible pixel colours mixing into the visible ones compounds at each smaller level (with the result that your nasty white edges become more apparent at further distances away).
The solution is to ensure that these completely transparent pixels have a colour value which matches their neighbouring visible pixels, so that when the interpolation occurs, the colour 'bleed' from the invisible pixels is of the appropriate colour.
To solve this (in Photoshop) I always use the free "Solidify" tool from the Flaming Pear Free Plugins pack, like this:
Download and install the Flaming Pear "Free Plugins" pack (near the bottom of that list)
Open your PNG in photoshop.
Go to Select -> Load Selection and click OK.
Go to Select -> Save Selection and click OK. This will create a new alpha channel.
Now Deselect all (Ctrl-D or Cmd-D)
Select Filter -> Flaming Pear -> Solidify B
Your image will now appear to be entirely made of solid colour, with no transparent areas, however your transparency information is now stored in an explicit alpha channel, which you can view and edit by selecting it in the channels palette.
Now re-save your image, and you should find your white fuzzies have disappeared!
Source: http://answers.unity3d.com/questions/10302/messy-alpha-problem-white-around-edges.html
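If you'd rather not round-trip through Photoshop, roughly the same "solidify" step can be sketched as a one-off Unity editor utility. This is only a sketch: it assumes the selected texture is uncompressed with Read/Write enabled, and the menu path and class name are just examples.

```csharp
using UnityEngine;
using UnityEditor;

public static class SolidifyTexture
{
    [MenuItem("Assets/Solidify Transparent Pixels (sketch)")]
    static void Solidify()
    {
        var tex = Selection.activeObject as Texture2D;
        if (tex == null) return;

        int w = tex.width, h = tex.height;
        Color[] src = tex.GetPixels();
        Color[] dst = (Color[])src.Clone();

        // One dilation pass: bleed the colour of visible neighbours into fully
        // transparent pixels, keeping their alpha at 0. Run it several times
        // for a wider bleed.
        for (int y = 0; y < h; y++)
        {
            for (int x = 0; x < w; x++)
            {
                int i = y * w + x;
                if (src[i].a > 0f) continue; // already visible, leave it alone

                Color sum = Color.clear;
                int count = 0;
                if (x > 0     && src[i - 1].a > 0f) { sum += src[i - 1]; count++; }
                if (x < w - 1 && src[i + 1].a > 0f) { sum += src[i + 1]; count++; }
                if (y > 0     && src[i - w].a > 0f) { sum += src[i - w]; count++; }
                if (y < h - 1 && src[i + w].a > 0f) { sum += src[i + w]; count++; }

                if (count > 0)
                    dst[i] = new Color(sum.r / count, sum.g / count, sum.b / count, 0f);
            }
        }

        tex.SetPixels(dst);
        tex.Apply();
    }
}
```

Note that this only changes the in-memory copy of the texture; a real tool would write the result back to the source PNG (for example via EncodeToPNG) or do the work in an AssetPostprocessor so it survives reimports.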
Turns out that the shader I was using for my scene was using "Blend SrcAlpha OneMinusSrcAlpha" for some reason, when it should have been using "Blend One OneMinusSrcAlpha" (premultiplied alpha). The former was causing objects with alpha less than 1 to make the objects under them semi-transparent as well, exposing the camera's clear colour background.
I am currently making a mobile match-3-like game in Unity. I have made all the graphics for the gems (the objects with which you make the matches) in Inkscape at 256x256 and exported them (PNG files) at 90 dpi (I also tried 360, but nothing changed). My problem is that when I run the game in the editor, the graphics seem to be "pixelated" and blurry. In my sprite settings I've set Pixels Per Unit to 256, checked Generate Mip Maps, I am using the Bilinear filter mode, and the aniso level is 0. I have also set the max size to 256 and compression to high quality (my Main Camera's size is 10, but I tried changing that and it made no difference to the quality of the sprites). What can I do to "perfectly" display my sprites? Do I have to export them in some other way from Inkscape, or do I have to change some of Unity's settings?
Thank you.
NOTE: My sprites are not "pixel art"!
Edit (added photos of the purple gem as a file and how it is shown in the editor):
Because of scaling
Your display resolution doesn't give those images a 256x256 region on screen, which means they must be scaled in some manner to fit the region they are displayed in. Camera rendering is notoriously bad at scaling. As your images aren't vector (and Unity doesn't support vector graphic formats anyway), scaling will always result in a loss of detail, such as hard edges.
Your options are:
smaller images where you have complete control over how the image is scaled down
bilinear filtering (which is fundamentally blurry)
mipmaps (which are automatically scaled down versions of your image in powers of two)
If the latter two aren't giving satisfactory results, your only option is the first.
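If you want to experiment with the latter two options in bulk rather than per sprite in the inspector, the relevant import settings can also be driven from an AssetPostprocessor. This is only a sketch; the folder filter is just an example.

```csharp
using UnityEditor;
using UnityEngine;

public class GemSpriteImporter : AssetPostprocessor
{
    void OnPreprocessTexture()
    {
        if (!assetPath.Contains("/Gems/")) return; // example folder filter

        var importer = (TextureImporter)assetImporter;
        importer.mipmapEnabled = true;             // pre-made downscaled versions
        importer.filterMode = FilterMode.Bilinear; // the smooth-but-blurry option from the list above
        importer.maxTextureSize = 256;
    }
}
```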
I'm looking at using the Camera TargetTexture / RenderTexture functionality for less processing-intensive menu transitions, but I'm having some trouble. Every texture I render out from the camera doesn't have working masks: I can see the whole version of every graphic on the screen. How can I get it rendering with the masks kept intact? It is also failing to render any of my spawned prefabs; either that, or they could be hidden behind the unmasked graphics.
Also, I was told to render to a material. None of the shaders I've tried have supported the masks (don't know if that's really the problem) or have looked like the original image. They all look dark and moody, with the occasional weird alpha channel in the upper left corner. How can I get the image looking just like my screen?
My menus are all on a Screen Space - Overlay canvas, so they shouldn't need to be lit.
I've been searching around for this one for a bit, and unfortunately I can't seem to find any good, consistent results. So, in the Unity UI system, buttons can stretch without becoming pixelated or distorted. This is because the texture is split up into 9 parts - the corners, middle, and sides.
This works because the button's middle and sides are stretched, but not the corners. That way the button does not appear pixelated at any size.
So, the question is as follows: How can I do the same thing for a transparent, unlit texture in 3D space? I have a speech bubble texture on a flat plane that I know how to re-scale to fit the text in the speech bubble.
I've set the texture type to Multiple Sprite, and divided it up into 9 parts. However, I cannot seem to find where I can set the texture to act like the UI button does, and I'm not sure that this is even possible in this way in 3D space.
Is there a way, or should I just make the different parts of the texture different objects, and move them together? That would seem very inefficient and ugly compared to this.
To accomplish what you are asking, you would need to create tiles for this speech bubble and then write a script that procedurally builds a speech bubble based on the plane's scale value. You could also try just changing the texture's Filter Mode to Point.
However, I really don't think you should be using textures for this anyway. Why not just use a Unity Canvas and set the Render Mode to World Space? Then you can just set your text box to be a sprite, not a texture, and set its filter mode to Point (see below). This would also make it a lot easier for when you want there to be text in the speech bubble later on.
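A rough sketch of that suggestion: a World Space canvas attached to the speaker, with a 9-sliced Image for the bubble. The class and field names and the sizes are illustrative, and the sprite needs its 9-slice borders defined in the Sprite Editor for Image.Type.Sliced to do anything.

```csharp
using UnityEngine;
using UnityEngine.UI;

public class SpeechBubble : MonoBehaviour
{
    public Sprite bubbleSprite; // sprite with borders set in the Sprite Editor

    void Start()
    {
        var canvasGO = new GameObject("SpeechBubbleCanvas");
        canvasGO.transform.SetParent(transform, false);
        canvasGO.transform.localScale = Vector3.one * 0.01f; // canvas units are pixel-sized by default

        var canvas = canvasGO.AddComponent<Canvas>();
        canvas.renderMode = RenderMode.WorldSpace;

        var bubbleGO = new GameObject("Bubble");
        bubbleGO.transform.SetParent(canvasGO.transform, false);

        var image = bubbleGO.AddComponent<Image>();
        image.sprite = bubbleSprite;
        image.type = Image.Type.Sliced; // corners stay fixed, middle and edges stretch

        // Resize the bubble to fit the text; only the middle and edges stretch.
        image.rectTransform.sizeDelta = new Vector2(300f, 150f);
    }
}
```

Growing or shrinking that RectTransform then behaves like a UI button's background, without the distortion you get from scaling a plain textured plane.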
I'm importing a sprite into Unity, and adding it to a Screen Space Overlay canvas to use for a UI.
The image I'm importing looks exactly as I want it, but in Unity the anti-aliased edges look like they're going to a white background color, instead of just fading over whatever is actually behind them.
I'm using these import settings:
I'm using a default UI/Image component to add it to the canvas.
This is the image I'm importing - it's a 32 bit PNG exported from Fireworks: (also shown over a black background)
Just to confirm, this looks fine everywhere else in Unity, preview panels, pickers etc. I am packing this sprite using the built in Sprite Packer if that changes anything.
And the final result:
How can I get rid of these artifacts on the corners?
The problem was the RGB values of the transparent pixels. By default they are white, and any scaling operations cause this white color to be blended with the partially transparent pixels.
I essentially made a slightly larger version of the button background shape, put it on a layer behind everything, and then wrote the original alpha channel back in so those pixels stayed transparent. This means the neighbouring pixels are then the same color as the partially transparent ones.
The end result:
I have a scene with a background image (a lit room), and a black image (shadow) over that. I need to be able to move my finger over the background and reveal some parts of the scene, simulating a dim light source in a dark room.
My current approach was to generate a mask depending on the position of the touch, and then apply that mask to the shadow image. The problem is that I'm generating a new mask and applying it every time I receive a touch event. It's a large image (800x600), and this causes the performance to drop and the memory usage to increase a lot, eventually crashing the game (I think I don't have any memory leaks, but that's not guaranteed... in any case, the performance itself isn't acceptable).
Can anyone think of a better approach (which doesn't involve using OpenGL ES -- that's not an option in this project) to do this?
To go with my comments above.
Maybe, to get around the different shadow levels, you could also have a grid of views (squares) between the image and the shadow view. Each grid square has a different alpha opacity; when the spot is over a grid square, that square's alpha opacity changes to 0, and when the spot moves off the grid square, its alpha opacity changes back to its default.
Without more information it is a little difficult to know whether this approach will work in your case, but what you could do is generate a single mask image (say, a radial alpha gradient) and then apply an affine transform to it to shape it according to the touches. This can be used to simulate a torch/flashlight beam.
I would try this: use one view with a custom drawRect implementation. First draw the shadow image (in grayscale), then a bright spot image in white with alpha, and finally the background image in a 'multiply' blend mode.
Just a thought: does the shadow have to be an image? Perhaps you could simply fill the shadow layer with a color and mask that instead? This way the memory usage should be lower and the effect should be nearly identical (if not exactly the same).
There is no reason to generate a new mask on every touch move. Instead, initialize the mask once and manipulate it (reset its frame) as needed upon touch events.