Unity blurry and pixelated sprites in editor (no pixel art)

I am currently making a mobile match-3-like game in Unity. I have made all the graphics for the gems (the objects with which you make the matches) in Inkscape at 256x256 and exported them as PNG files at 90 dpi (I also tried 360, but nothing changed). My problem is that when I run the game in the editor the graphics look "pixelated" and blurry. In my sprite settings I've set Pixels Per Unit to 256, checked Generate Mip Maps, set the Filter Mode to Bilinear, and left the aniso level at 0. I have also set the Max Size to 256 and the compression to high quality. (My Main Camera's size is 10, but changing it made no difference to the quality of the sprites.) What can I do to display my sprites "perfectly"? Do I have to export them from Inkscape in some other way, or do I have to change some Unity settings?
Thank you.
NOTE: My sprites are not "pixel art"!
Edit (added photos of the purple gem as a file and how it is shown in the editor):

Because scaling
Your display isn't showing those images in an exact 256x256 region, which means they must be scaled in some manner to fit the region where they are actually drawn. Camera rendering is notoriously bad at scaling. Since your images aren't vector graphics (and Unity doesn't support vector formats anyway), scaling will always cost detail, such as hard edges.
Your options are:
smaller images where you have complete control over how the image is scaled down
bilinear filtering (which is fundamentally blurry)
mipmaps (which are automatically scaled down versions of your image in powers of two)
If the latter two aren't giving satisfactory results, your only option is the first.
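If it helps, here is roughly how you could enforce those import choices from an editor script instead of clicking through the inspector for every sprite. AssetPostprocessor and TextureImporter are standard Unity editor APIs, but the folder path and the concrete values below are just assumptions matching the question; a sketch, not a fix:

    using UnityEditor;
    using UnityEngine;

    // Applies consistent import settings to every texture under a chosen
    // folder. "Assets/Sprites" and the exact values are assumptions.
    public class SpriteImportSettings : AssetPostprocessor
    {
        void OnPreprocessTexture()
        {
            if (!assetPath.StartsWith("Assets/Sprites")) return;

            var importer = (TextureImporter)assetImporter;
            importer.textureType = TextureImporterType.Sprite;
            importer.spritePixelsPerUnit = 256;        // matches the 256x256 source art
            importer.mipmapEnabled = true;             // option 3: pre-scaled versions
            importer.filterMode = FilterMode.Bilinear; // option 2: smooth but blurry
            importer.maxTextureSize = 256;
        }
    }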

Related

SKTileMapNode Pixel to Pixel Aligning

I am working on a tilemap game with Apple's newish SKTileMapNode. The pixels on my tiles do not match up with the pixels on the phone display. My scale mode is set to .resizeFill. My tiles are correctly sized at 64x64, and each tile's texture image is sized correctly.
I am using a camera that is a child of the gray circle in the attached image. I believe the camera will create a pixel-to-pixel view of the screen size being used and match the resolution, but I am not sure I can trust this. How can I get my pixels to align correctly to avoid this?
It turns out that SpriteKit's SKTileMapNode really likes assets to be optimized for all resolutions. This fixed my pixel-alignment problem entirely. While this may seem obvious, I originally added only @1x files in order to use an optimized texture atlas. It took more research to discover how to add different resolutions to a texture atlas.
Since it is different from normal atlases (appending ".atlas" to a folder of images), I will describe how to do so here.
Go to the assets.xcassets folder and click "New Sprite Atlas". In here, drag in all @2x and @3x images. Delete the [asset-name].atlas folder if you had one before, as it does not support different resolutions natively.
From here on, the atlas can be accessed just as the original [asset-name].atlas folder was accessed in code.

Unity 2D art is blurry

For some reason, I can't get these tilemaps to look in Unity the way they did when they were created...
The pixel art is imported to Unity from Tiled.
The pixel art in Tiled:
The same pixel art imported to Unity:
Does anyone have any idea how I can fix this?
Thanks in advance!
As requested, the sprite import settings:
Posting my comment as an answer after all.
As I suggested, one way to get rid of the blurring is to set the sprite's "Filter Mode" to "Point (no filter)" instead of "Bilinear" or "Trilinear".
Here you can see the difference between bilinear and point filtering.
If that doesn't help, try messing with the quality settings in the Sprite Import.
Increase the "Max Size" and maybe disable "Compression" or increase the quality of the compression.
Here you can see the differences between the compression qualities, ranging from "Low" through "High" to "None".
You can also try increasing the "Max Size" value for higher-quality sprites. It scales your sprite's dimensions so that they do not exceed the specified value. If your sprite sheet is already smaller than the "Max Size", increasing it will have no effect, though.
The next picture shows the differences between sizes 512, 256, 128 and 64 for a sprite with dimensions 423x467.
You can see that sizes above the sprite's dimensions have no effect, whereas smaller values scale the sprite down, reducing its visual quality.
Usually, fiddling around with those values should help make your sprites look sharp instead of blurred.
EDIT:
As @NikaKasradze pointed out, there are also default quality settings you can try. Go to Edit > Project Settings > Quality.
The matrix at the top gives you a selection of all current quality levels for the editor itself as well as for all build-target platforms. The green tick shows what is currently selected as the default quality. You can also set "Texture Quality", which defines the overall texture resolution in your project; you can choose between "Full", "Half", "Quarter" and "Eighth Res". You should choose "Full Res" for your current default settings.
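For what it's worth, that same dropdown can be driven from script. QualitySettings.masterTextureLimit is the documented property behind it; a one-line sketch:

    using UnityEngine;

    public class ForceFullResTextures : MonoBehaviour
    {
        void Awake()
        {
            // 0 = Full Res, 1 = Half, 2 = Quarter, 3 = Eighth.
            QualitySettings.masterTextureLimit = 0;
        }
    }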
Considering you already changed the filtering from Bilinear to Point (no filter), the issue may be that Unity compresses your texture by default. This is likely if it is a POT texture (Power of two resolution).
You can override this for individual platforms at the bottom of the texture's import settings. Select RGBA 32 bit if you need the alpha channel (transparency) or RGB 24 bit if you don't, to get your texture uncompressed.
And don't forget to hit that Apply button! ;)
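If you would rather do all of the above from an editor script (say, for many textures at once), here is a rough sketch using Unity's TextureImporter API. The asset path and the platform name are placeholders; adjust them to your project:

    using UnityEditor;
    using UnityEngine;

    public static class UncompressTexture
    {
        [MenuItem("Tools/Uncompress Tilemap Texture")]
        static void Apply()
        {
            // Hypothetical path; point this at your own sprite sheet.
            var path = "Assets/Sprites/tilemap.png";
            var importer = (TextureImporter)AssetImporter.GetAtPath(path);

            importer.filterMode = FilterMode.Point;
            importer.textureCompression = TextureImporterCompression.Uncompressed;
            importer.SetPlatformTextureSettings(new TextureImporterPlatformSettings
            {
                name = "Standalone",                  // platform to override
                overridden = true,
                maxTextureSize = 2048,
                format = TextureImporterFormat.RGBA32 // uncompressed, with alpha
            });

            importer.SaveAndReimport();               // the script equivalent of Apply
        }
    }

Running Tools > Uncompress Tilemap Texture then reimports the texture with those settings.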
If this doesn't help, could you post a picture of your import settings?
Also, what data type is your image?
EDIT:
Now that the import-settings screenshot has been added, it seems to me the issue lies in the Sprite Mode setting.
As far as I understand, the image used is a sprite sheet in the form of a tilemap. Therefore the Sprite Mode should be set to Multiple.
You can then click on the Sprite Editor button and cut the sheet into individual sprites using the Slice function (now separate, as of Unity 5.6).
You probably want to use the Grid - By Cell Size method, which is basically how this is handled inside Tiled.
You also want to adjust your Pixels Per Unit setting to match the resolution of your tiles (for pixel-perfect results).
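For reference, the usual way to get pixel-perfect rendering from an orthographic camera is to derive the camera size from the screen height and your Pixels Per Unit. A small sketch; the PPU value is an assumption you would replace with whatever your tiles use:

    using UnityEngine;

    public class PixelPerfectCamera : MonoBehaviour
    {
        public float pixelsPerUnit = 16f; // assumption: match your tile art's PPU

        void Start()
        {
            var cam = GetComponent<Camera>();
            cam.orthographic = true;
            // One texture pixel maps to one screen pixel when the camera's
            // half-height in world units, times PPU, equals half the screen height.
            cam.orthographicSize = Screen.height / (2f * pixelsPerUnit);
        }
    }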
I had some issues with blurry sprites and was able to fix them by changing the resolution of my sprites in image-editing software. The resolution was 576x352 and I changed it to 512x512, and that fixed it. I think it's best for Unity that images are square and have a power-of-two resolution as often as possible (for example, 512 = 2^9).
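If you want to check or compute power-of-two sizes programmatically, Unity's Mathf has helpers for exactly this (note the answer above chose to scale 576 down to 512 rather than up to 1024); a quick sketch:

    using UnityEngine;

    public class PotCheck : MonoBehaviour
    {
        void Start()
        {
            Debug.Log(Mathf.IsPowerOfTwo(576));   // false
            Debug.Log(Mathf.NextPowerOfTwo(576)); // 1024
            Debug.Log(Mathf.NextPowerOfTwo(352)); // 512
        }
    }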
Make sure you have disabled Mip Maps:
http://prntscr.com/eup3hd
After literally hours of tinkering, the only solution I could come to was to open Photoshop and export the tilemap at 4000% of its original size.
There is still some minor blurring if I zoom in but at least it isn't as bad as I demoed in my question.
I have tried everyone's answers here to no avail, so I am left with the conclusion that it must be a Tiled2Unity problem (despite it never having done this before).
I will update this post if I find a solution...

Unity - Render Texture from Camera's targetTexture produces seams

I am attempting to render a specific section of my scene using a separate camera and a render texture. The object in question is on a separate layer that the main camera is not rendering, but the secondary camera is. The secondary camera has its target texture set to a render texture that I have created. Everything is working as intended, except that the object, when rendered to a texture, has a bunch of seams that are not present when rendering directly to the screen.
What it looks like when rendered directly to the screen:
Correct
What it looks like when rendered to a texture, and then displayed on a quad in the scene:
Incorrect
Notice how the second image has a bunch of transparent "lines" in between the sprites where there shouldn't be any.
I am using a basic transparent shader to display the render texture on the quad (since the background isn't part of the render texture, just the black crowd part). I have tried a number of different shaders, and none of them seem to make a difference.
The render texture's settings are: Width: Screen.width, Height: Screen.height, Format: RenderTextureFormat.ARGBFloat.
Unity Version: 5.2.3f1 - iOS Platform
Note: The reason I am doing this is so that I can apply a "Blur" image effect to the texture, and make the crowd in the foreground appear to be out of focus. Any alternative suggestions for how to do this are also welcome.
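For context, here is a minimal sketch of the setup described above. The "Crowd" layer name and the component wiring are assumptions, not taken from the question:

    using UnityEngine;

    public class CrowdRenderer : MonoBehaviour
    {
        public Camera crowdCamera;    // secondary camera for the crowd layer
        public Renderer quadRenderer; // quad that displays the render texture

        void Start()
        {
            var rt = new RenderTexture(Screen.width, Screen.height, 24,
                                       RenderTextureFormat.ARGBFloat);
            // Render only the hypothetical "Crowd" layer with this camera.
            crowdCamera.cullingMask = 1 << LayerMask.NameToLayer("Crowd");
            crowdCamera.targetTexture = rt;
            quadRenderer.material.mainTexture = rt;
        }
    }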
I'm not quite sure -- but it almost sounds like you have line ghosting. You may want to give this a read and let me know if that's what you're dealing with or not:
The reason for this is due to how the texture image was authored, combined with the filtering that most 3d engines use when textures are displayed at different sizes on screen.
Your image may have coloured areas which are completely opaque, coloured areas which are partially transparent, and areas which are completely transparent. However, the areas where your alpha channel is completely transparent (0% opacity) actually still have a colour value too. PNGs (or at least the way Photoshop exports them) seem to default to white for the completely transparent pixels. With other formats or editors, this may be black. Both are equally undesirable when it comes to use in a 3d engine.
You may think, "why is the white colour a problem if it's completely transparent?" The problem occurs because when your texture appears on screen, it's usually either upscaled or downscaled, depending on whether the texture's pixels appear larger or smaller than actual size. For downscaling, a series of downscaled versions is created during import. These downscaled versions are used when the texture is displayed at smaller sizes or steeper angles relative to the view, and are intended to improve visual quality and make rendering faster. This process is called "mip-mapping". For upscaling, simple bilinear interpolation is normally used.
The scaled versions are usually created using simple bilinear interpolation, which means that the transparent pixels are mixed with the neighbouring visible pixels. With each smaller mipmap level, this mixing of invisible and visible pixel colours gets worse (with the result that your nasty white edges become more apparent at greater distances).
The solution is to ensure that these completely transparent pixels have a colour value which matches their neighbouring visible pixels, so that when the interpolation occurs, the colour 'bleed' from the invisible pixels is of the appropriate colour.
To solve this (in Photoshop) I always use the free "Solidify" tool from the Flaming Pear Free Plugins pack, like this:
Download and install the Flaming Pear "Free Plugins" pack (near the bottom of that list)
Open your PNG in photoshop.
Go to Select -> Load Selection and click OK.
Go to Select -> Save Selection and click OK. This will create a new alpha channel.
Now Deselect all (Ctrl-D or Cmd-D)
Select Filter -> Flaming Pear -> Solidify B
Your image will now appear to be entirely made of solid colour, with no transparent areas; however, your transparency information is now stored in an explicit alpha channel, which you can view and edit by selecting it in the channels palette.
Now re-save your image, and you should find your white fuzzies have disappeared!
Source: http://answers.unity3d.com/questions/10302/messy-alpha-problem-white-around-edges.html
Turns out that the shader I was using for my scene had "Blend SrcAlpha OneMinusSrcAlpha" for some reason, when it should have been "Blend One OneMinusSrcAlpha". This was causing objects with alpha less than 1 to make the objects beneath them semi-transparent as well, exposing the camera's clear-colour background.

Pixel-perfect shader in Unity ShaderLab

In Unity, when writing shaders,
is it possible for the shader itself to "know" what the screen resolution is, and indeed for the shader to control single physical pixels?
I'm thinking only of the case of writing shaders for 2D objects (such as for UI use, or at any event with an ortho camera).
(Of course, normally, to show a physical-pixel-perfect PNG on screen, you merely have a, say, 400-pixel PNG and arrange the scaling so that the shader happens to be drawing to precisely 400 physical pixels. What I'm wondering about is a shader that just draws, for example, a physical-pixel-perfect black line - it would have to "know" exactly where the physical pixels are.)
There is a ShaderLab built-in value called _ScreenParams.
_ScreenParams.x is the screen width in pixels.
_ScreenParams.y is the screen height in pixels.
Here's the documentation page: http://docs.unity3d.com/462/Documentation/Manual/SL-BuiltinValues.html
I don't think this is going to happen. Your rendering is tied to the currently selected video mode, which doesn't even have to match your physical screen resolution (if that is what you mean by pixel-perfect).
The closest you are going to get is to render at the recommended resolution for your display device and use a pixel shader to shade the entire screen. This way, one 'physical pixel' is going to be roughly equal to one actual rendered pixel. Other than that, it is impossible to associate physical pixels (that is, your display's) with rendered ones.
This is unless, of course, I somehow misunderstood your intentions.
is it possible for the shader itself to "know" what the screen resolution is
I don't think so.
and indeed for the shader to control single physical pixels?
Yes. Pixel shaders know what pixel they are drawing and can also sample other pixels.
First of all, please define 'Pixel perfect' and 'Physical pixel'.
If by physical pixel you mean your display's pixel (monitor, laptop display, or any other hardware you might use), then you are out of luck. Shaders don't operate on those; they operate on their own 'abstract pixels'.
You can think about it in this way:
Your graphics are rendered to a picture with some configurable resolution (say 800x600 pixels). You can still display this picture full-screen on a 1920x1080 display, no problem; it would look crappy, though. This is what happens with the actual display and video-card rendering. What determines the actual number of rendered pixels is your video mode (the picture's resolution in the above example), and physical pixels are your display's pixels. When rendering, you can only operate on the first kind.
This leads us to the conclusion that when you render the graphics at exactly your display's native resolution, you can safely say that you did indeed render in 'physical pixels'.
In Unity, you can pass the renderer some external data, which might include your current screen resolution (for example as a Vector2; see this). A short sketch of this is included after this answer.
However, you most likely don't need any of this, since pixel shaders already operate on pixels (rendered pixels, determined by your current video mode). That also means that if you use a resolution lower than your native one, you will most likely not be able to control a single physical pixel.
Hope it helped.
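Here is a minimal sketch of the "pass external data to the renderer" route mentioned above. Material.SetVector is a real Unity API; "_Resolution" is a hypothetical property name your shader would have to declare:

    using UnityEngine;

    public class ScreenSizeFeeder : MonoBehaviour
    {
        public Material material; // material whose shader declares _Resolution

        void Update()
        {
            // SetVector takes a Vector4; the last two components are unused here.
            material.SetVector("_Resolution",
                new Vector4(Screen.width, Screen.height, 0f, 0f));
        }
    }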

How to reduce filesize of gradient PNG?

I am trying to create a background image for a webpage, similar to the 404 page used on Tumblr...
http://testing404image.tumblr.com/
Here we can see a PNG which is 1623x1064 pixels, yet appears reasonably smooth gradient-wise.
The direct link for the image is
http://testing404image.tumblr.com/images/status_bg.png?2
When I try to create a similar PNG (different colors, but the same size) in Photoshop CS4 for Mac, the resulting file ends up at over 400k, whereas Tumblr's is 90k.
I've tried playing with all the Photoshop options, including reducing the number of colors to 55, but I cannot get the image below ~240k.
I've also tried various optimising tools such as ImageOptim (http://imageoptim.com/), but to no avail.
Are there any properties of this PNG which result in a such a low file size?
I tried using JPG, thinking it's better suited to gradient images, but even a 100%-quality JPG resulted in noticeable aliasing, which an identical content/size PNG didn't have.
Thanks for any advice
Hi there, I changed the colours with
Image > Adjustments > Hue/Saturation - in Photoshop CS4
and this is the result:
as you can see it's almost the same size (75k).
Try playing around under the
Image > Adjustments
to get the color you are looking for, and save as PNG with "None" for interlace.
Photoshop is not very good with PNG: I simply opened and saved the file with the humble XnView (maximum compression) and got 74K. You can also convert it to a paletted image and do some extra little tuning - PNGoptim gives me a final size of 64,548 bytes. I wouldn't expect anything much better than that; the image is just too big.
BTW, be aware that a gradient this big and this smooth cannot be represented by a digital image (with 8 bits per channel) without some banding. That image is really oversampled (you could resample it at 25% or less and display it scaled up, and the result would be basically the same).
The actual reason is that the source image you are looking at has a lower-quality gradient than the one you are making.
Just uncheck the Dither option (in the top toolbar in Photoshop) when filling with the gradient. The quality and smoothness of the gradient decrease, and as a result you get a much smaller PNG output.