Pixel-perfect shader in Unity ShaderLab - unity3d

In Unity, when writing shaders,
is it possible for the shader itself to "know" what the screen resolution is, and indeed for the shader to control single physical pixels?
I'm thinking only of the case of writing shaders for 2D objects (such as for UI use, or in any event with an orthographic camera).
(Of course, normally to show a physical-pixel-perfect PNG on screen, you merely have a, say, 400-pixel PNG and arrange the scaling so that the shader happens to be drawing to precisely 400 physical pixels. What I'm wondering about is a shader that just draws, for example, a physical-pixel-perfect black line - it would have to "know" exactly where the physical pixels are.)

There is a ShaderLab built-in value called _ScreenParams.
_ScreenParams.x is the screen width in pixels.
_ScreenParams.y is the screen height in pixels.
Here's the documentation page: http://docs.unity3d.com/462/Documentation/Manual/SL-BuiltinValues.html
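For illustration, here is a minimal sketch (the shader name is made up) of a fragment program that reads _ScreenParams to work out which pixel it is shading; it assumes the material is drawn on a quad that exactly fills the screen, otherwise i.uv * _ScreenParams.xy is not a screen pixel coordinate:
Shader "Custom/ScreenPixelGrid"  // illustrative name
{
    SubShader
    {
        Pass
        {
            CGPROGRAM
            #pragma vertex vert_img
            #pragma fragment frag
            #include "UnityCG.cginc"  // provides _ScreenParams, v2f_img and vert_img

            fixed4 frag (v2f_img i) : SV_Target
            {
                // i.uv runs 0..1 across the quad; scaling by the screen size in
                // pixels gives the pixel coordinate being shaded, provided the
                // quad exactly covers the screen.
                float2 pixel = floor(i.uv * _ScreenParams.xy);

                // Paint every 100th pixel column black, one pixel wide.
                return fmod(pixel.x, 100.0) < 1.0 ? fixed4(0, 0, 0, 1) : fixed4(1, 1, 1, 1);
            }
            ENDCG
        }
    }
}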

I don't think this is going to happen. Your rendering is tied to the currently selected video mode, and that doesn't even have to match your physical screen resolution (if that is what you mean by pixel-perfect).
The closest you are going to get is to render at the recommended (native) resolution for your display device and use a pixel shader to shade the entire screen. That way, one "physical pixel" is going to be roughly equal to one actual rendered pixel. Other than that, it is impossible to associate physical pixels (that is, your display's pixels) with rendered ones.
This is unless, of course, I somehow misunderstood your intentions.

is it possible for the shader itself to "know" what the screen resolution is
I don't think so.
and indeed for the shader to control single physical pixels?
Yes. A pixel (fragment) shader knows which pixel it is drawing, and it can also sample other pixels from the textures it reads.
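As a hedged sketch of the first half of that point (the shader name is made up): with shader model 3.0, Unity's Cg/HLSL fragment programs can take the VPOS semantic (documented on Unity's shader semantics page) to receive the pixel coordinate they are shading, independent of where the object sits on screen:
Shader "Custom/PixelCheckerboard"  // illustrative name
{
    SubShader
    {
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #pragma target 3.0  // VPOS needs shader model 3.0
            #include "UnityCG.cginc"

            float4 vert (float4 vertex : POSITION) : SV_POSITION
            {
                return UnityObjectToClipPos(vertex);
            }

            fixed4 frag (UNITY_VPOS_TYPE screenPos : VPOS) : SV_Target
            {
                // screenPos.xy is the pixel coordinate currently being shaded.
                // Shade a one-pixel checkerboard to make individual pixels visible.
                float checker = fmod(floor(screenPos.x) + floor(screenPos.y), 2.0);
                return fixed4(checker, checker, checker, 1);
            }
            ENDCG
        }
    }
}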

First of all, please define 'Pixel perfect' and 'Physical pixel'.
If by physical pixel you mean your display's pixel (monitor, laptop display, any other hardware you might use) then you are out of luck. Shaders don't operate on those, they operate on their own 'abstract pixels'.
You can think about it in this way:
Your graphics are rendered into a picture with some configurable resolution (say 800x600 pixels). You can still display this picture full screen on a 1920x1080 display no problem; it would look crappy, though. This is what happens with the actual display and video card rendering. What determines the actual number of rendered pixels is your video mode (the picture's resolution in the example above), while physical pixels are your display's pixels. When rendering, you can only operate on the first kind.
This leads to the conclusion that when you render the graphics at exactly your display's native resolution, you can safely say that you are indeed rendering to "physical pixels".
In Unity, you can pass the renderer some external data; this might include your current screen resolution, for example as a Vector2.
However, you most likely don't need any of this, since pixel shaders already operate on pixels (rendered pixels, determined by your current video mode). That means that if you use a resolution lower than your native one, you most likely will not be able to address a single physical pixel exactly.
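If you did want to pass the resolution in yourself, a minimal sketch could look like the following; the _Resolution property name is made up, and a script would fill it with something like material.SetVector("_Resolution", new Vector4(Screen.width, Screen.height, 0, 0)):
Shader "Custom/ExplicitResolution"  // illustrative name
{
    Properties
    {
        // Set from a script, e.g.
        // material.SetVector("_Resolution", new Vector4(Screen.width, Screen.height, 0, 0));
        _Resolution ("Screen Resolution", Vector) = (1920, 1080, 0, 0)
    }
    SubShader
    {
        Pass
        {
            CGPROGRAM
            #pragma vertex vert_img
            #pragma fragment frag
            #include "UnityCG.cginc"

            float4 _Resolution;

            fixed4 frag (v2f_img i) : SV_Target
            {
                // Pixel coordinate, assuming the material is drawn on a full-screen quad.
                float2 pixel = floor(i.uv * _Resolution.xy);

                // Visualise the coordinate as a simple gradient.
                return fixed4(pixel / _Resolution.xy, 0, 1);
            }
            ENDCG
        }
    }
}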
Hope it helped.

Related

Unity blurry and pixelated sprites in editor (no pixel art)

I am currently making a mobile match-3-like game in Unity. I have made all the graphics for the gems (the objects with which you make the matches) in Inkscape at 256x256 and exported them (PNG files) at 90 dpi (I also tried 360 but nothing changed). My problem is that when I run the game in the editor the graphics seem to be "pixelated" and blurry. In my sprite settings I've set Pixels per Unit to 256, checked Generate Mip Maps, I am using Bilinear Filter Mode, and the aniso level is 0. I have also set the max size to 256 and compression to high quality (my Main Camera's size is 10, but I tried to change that and nothing changed as far as the quality of the sprites goes). What can I do to "perfectly" display my sprites? Do I have to export them in some other way from Inkscape, or do I have to change some of Unity's settings?
Thank you.
NOTE: My sprites are not "pixel art"!
Edit (added photos of the purple gem as a file and how it is shown in the editor):
Because scaling
Your display isn't giving those images a 256x256 region to be shown in, which means that they must be scaled in some manner in order to fit the region where they are displayed. Camera rendering is notoriously bad at scaling. As your images aren't vector (and Unity doesn't support vector graphics formats anyway), scaling will always result in a loss of detail. Detail like hard edges.
Your options are:
smaller images where you have complete control over how the image is scaled down
bilinear filtering (which is fundamentally blurry)
mipmaps (which are automatically scaled down versions of your image in powers of two)
If the latter two aren't giving satisfactory results, your only option is the first.

Unity - Render Texture from Camera's targetTexture produces seams

I am attempting to render a specific section of my scene using a separate camera, and a render texture. That object is on a separate layer that the main camera is not rendering, but a separate camera is. The secondary camera has a target texture set to be a render texture that I have created. Everything is working as intended except for the fact that the object, when rendered to a texture, has a bunch of seams that are not present when rendering directly to the screen.
What it looks like when rendered directly to the screen:
Correct
What it looks like when rendered to a texture, and then displayed on a quad in the scene:
Incorrect
Notice how the second image has a bunch of transparent "lines" in between the sprites where there shouldn't be any.
I am using a basic transparent shader to display the render texture on the quad (since the background isn't part of the render texture, just the black crowd part). I have tried a number of different shaders, and none of them seem to make a difference.
The render texture's settings are: Width = Screen.width, Height = Screen.height, Format = RenderTextureFormat.ARGBFloat.
Unity Version: 5.2.3f1 - iOS Platform
Note: The reason I am doing this is so that I can apply a "Blur" image effect to the texture, and make the crowd in the foreground appear to be out of focus. Any alternative suggestions for how to do this are also welcome.
I'm not quite sure -- but it almost sounds like you have line ghosting. You may want to give this a read and let me know if that's what you're dealing with or not:
The reason for this is due to how the texture image was authored, combined with the filtering that most 3d engines use when textures are displayed at different sizes on screen.
Your image may have coloured areas which are completely opaque, coloured areas which are partially transparent, and areas which are completely transparent. However, the areas where your alpha channel is completely transparent (0% opacity) still have a colour value too. PNGs (or at least the PNGs Photoshop exports) seem to default to using white for the completely transparent pixels; with other formats or editors this may be black. Both are equally undesirable when it comes to use in a 3D engine.
You may think, "why is the white colour a problem if it's completely transparent?". The problem occurs because when your texture appears on screen, it's usually either upscaled or downscaled depending on whether the pixels in the texture's image appear larger or smaller than actual size. For downsizing, a series of downscaled versions gets created during import. These downscaled versions are used when the texture is displayed at smaller sizes or steeper angles in relation to the view, which is intended to improve visual quality and make rendering faster. This process is called "mip-mapping" - read more about mip-mapping here. For upscaling, simple bilinear interpolation is normally used.
The scaled versions are usually created using simple bilinear interpolation, which means that the transparent pixels are mixed with the neighbouring visible pixels. With mipmaps, the problem of invisible pixels mixing into the visible pixel colours increases at each smaller level (with the result that your nasty white edges become more apparent at further distances).
The solution is to ensure that these completely transparent pixels have a colour value which matches their neighbouring visible pixels, so that when the interpolation occurs, the colour 'bleed' from the invisible pixels is of the appropriate colour.
To solve this (in Photoshop) I always use the free "Solidify" tool from the Flaming Pear Free Plugins pack, like this:
Download and install the Flaming Pear "Free Plugins" pack (near the bottom of that list)
Open your PNG in photoshop.
Go to Select -> Load Selection and click OK.
Go to Select -> Save Selection and click OK. This will create a new alpha channel.
Now Deselect all (Ctrl-D or Cmd-D)
Select Filter -> Flaming Pear -> Solidify B
Your image will now appear to be entirely made of solid colour, with no transparent areas, however your transparency information is now stored in an explicit alpha channel, which you can view and edit by selecting it in the channels palette.
Now re-save your image, and you should find your white fuzzies have disappeared!
Source: http://answers.unity3d.com/questions/10302/messy-alpha-problem-white-around-edges.html
Turns out that the shader I was using for my scene was using "Blend SrcAlpha OneMinusSrcAlpha" for some reason, when it should have been using "Blend One OneMinusSrcAlpha". This was causing objects with alpha less than 1 to make the objects under them become semi-transparent as well, exposing the camera's clear-colour background.
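For anyone hitting the same thing, here is a minimal sketch (the shader name is made up) of an unlit transparent shader that composites a render texture using the "Blend One OneMinusSrcAlpha" mode described above:
Shader "Custom/PremultipliedComposite"  // illustrative name
{
    Properties
    {
        _MainTex ("Render Texture", 2D) = "white" {}
    }
    SubShader
    {
        Tags { "Queue" = "Transparent" "RenderType" = "Transparent" }
        Blend One OneMinusSrcAlpha  // instead of Blend SrcAlpha OneMinusSrcAlpha
        ZWrite Off

        Pass
        {
            CGPROGRAM
            #pragma vertex vert_img
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _MainTex;

            fixed4 frag (v2f_img i) : SV_Target
            {
                // The render texture was rendered against a transparent clear colour,
                // so its RGB is treated as already multiplied by alpha.
                return tex2D(_MainTex, i.uv);
            }
            ENDCG
        }
    }
}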

OpenGL ES Tiled Texture Mipmap problem - iPad/iPhone

I'm running into the traditional tile/mipmap problem on the iPad using OpenGL ES. Basically, if you have a large texture (larger than 1k X 1k), you can break it up into pieces and map those pieces onto individual polygons. You can clamp the texture coordinates to the edges and it mostly works, but you get artifacts along the boundaries.
Now I know why you get this and know what the traditional solution is. To wit, you make a border around the outside of each littler texture (say 6 pixels). You sample from the little textures into the big one so you're just using those inside pixels (say 256-2*6). Then you smear the valid pixels out into the border area. Lastly, you map your texture coordinates to just use those valid inside pixels. Works okay.
If you're not nodding along at this point, don't try to answer. :-)
Anyway, OpenGL introduced clamping modes way back in the day to solve this. I don't see those modes in OpenGL ES (at least on this hardware), and I see other references to this problem. What I'm wondering is whether I'm missing something. Is there a newer way to solve the tile/edge problem that I'm not aware of?
[Update]
A screen shot of the result is attached here. The visible line is at the end of one texture and the start of another. This is using CLAMP_TO_EDGE.
GLES supplies GL_CLAMP_TO_EDGE but not GL_CLAMP, which clamps to the centres of the outermost pixels in a texture rather than to the extreme edges. So out-of-bounds (border or wraparound) accesses are completely prevented with CLAMP_TO_EDGE but not with CLAMP.
CLAMP_TO_EDGE is part of the GL ES specification (as per here for 1.1 and here for 2.0), so if your hardware doesn't support it then it's not technically GL ES compliant. It's also available in full OpenGL, but I think only as of version 1.2. It's implied that CLAMP_TO_EDGE made the leap to ES but CLAMP didn't because the former is considered to be a fixed version of the latter.
It sounds to me like CLAMP_TO_EDGE should be suitable for what you're doing — have I misunderstood?
In the end the problem was related to texture compression. The lines were due to the compression method assuming the texture wrapped around.
I solved the problem by building slightly larger textures than needed, compressing and then using only an area within each texture, thus leaving a border.

Large scrolling background in OpenGL ES

I am working on a 2D scrolling game for iPhone. I have a large background image, say 480×6000 pixels, of which only a part is visible at any time (exactly one screen's worth, 480×320 pixels). What is the best way to get such a background on the screen?
Currently I have the background split into several textures (to get around the maximum texture size limit) and draw the whole background in each frame as a textured triangle strip. The scrolling is done by translating the modelview matrix. The scissor box is set to the window size, 480×320 pixels. This is not meant to be fast, I just wanted a working code before I get to optimizing.
I thought that maybe the OpenGL implementation would be smart enough to discard the invisible portion of the background, but according to some measuring code I wrote it looks like the background takes 7 ms to draw on average and 84 ms at maximum. (This is measured in the simulator.) This is about half of the whole render loop, i.e. quite slow for me.
Drawing the background should be as easy as copying some 480×320 pixels from one part of the VRAM to another, or, in other words, blazing fast. What is the best way to get closer to such performance?
That's the fast way of doing it. Things you can do to improve performance:
Try different texture-formats. Presumably the SDK docs have details on the preferred format, and presumably smaller is better.
Cull out entirely offscreen tiles yourself
Split the image into smaller textures
I'm assuming you're drawing at a 1:1 zoom-level; is that the case?
Edit: Oops. Having read your question more carefully, I have to offer another piece of advice: Timings made on the simulator are worthless.
The quick solution:
Create a geometry matrix of tiles (quads preferably) so that there is at least one row/column of off-screen tiles on all sides of the viewable area.
Map textures to all those tiles.
As soon as one tile is outside the viewable area you can release this texture and bind a new one.
Move the tiles using a modulo of the tile width and tile height as the position (so that a tile repositions itself at its starting position once it has moved exactly one tile length). Also remember to remap the textures during that operation. This allows you to have a very small grid and very little texture memory loaded at any given time, which I guess is especially important in GL ES.
If you have memory to spare and are still plagued by slow load speeds (although you shouldn't be for that number of textures), you could build a texture-streaming engine that preloads textures into faster memory (whatever that may be on your target device) when you reach a new area. Texture mapping will in that case read from that faster memory when needed. Just be sure that you are able to preload without using up all memory, and remember to release textures dynamically when they are no longer needed.
Here is a link to a GL (not ES) tile engine. I haven't used it myself so I cannot vouch for its functionality but it might be able to help you: http://www.mesa3d.org/brianp/TR.html

iPhone camera images as OpenGL ES textures

Is it possible to use an image captured with the iPhone's camera as a texture that is then manipulated in OpenGL ES (flag wave effect, etc.)? The main problem is that the iPhone screen is 320x480 (no status bar), so the image won't have power-of-two dimensions. Is the main option copying it into a 512x512 texture and adjusting the vertices?
Yes, that's the way to do it.
Just use a larger texture. It's a waste of memory but unfortunately there is no way around this problem.
An alternative would be dividing the picture into squares with a width and height of 32 pixels (aka tiling), resulting in 15x10 tiles. Displaying it would, however, involve many texture switches while drawing, which might become a bottleneck. On the other hand, you would save a lot of memory using a tiled approach.