SKTileMapNode Pixel to Pixel Aligning - sprite-kit

I am working on a tilemap game with Apple's newish SKTileMapNode. The pixels on my tiles do not match up with the pixels on the phone's display. My scale mode is set to .resizeFill. My tiles are correctly sized at 64x64, and each tile's texture image is sized correctly.
I am using a camera that is a child of the gray circle in the attached image. I believe that the camera will create a pixel-to-pixel view of the screen size being used and match the resolution, but I am not sure that I can trust this. How can I get my pixels to align correctly?

It turns out that SpriteKit's SKTileMapNode really wants assets optimized for all resolutions. Providing them fixed my pixel alignment problem entirely. While this may seem obvious, I originally added only @1x files in order to use an optimized texture atlas, and it took more research to discover how to add different resolutions to a texture atlas.
Since it is different from normal atlases (appending ".atlas" to a folder of images), I will describe how to do so here:
Go to the Assets.xcassets folder and click "New Sprite Atlas." In here, drag in all @2x and @3x images. Delete the [asset-name].atlas folder if you had one before, as it does not support different resolutions natively.
From here on, the atlas can be accessed in code just as the original [asset-name].atlas folder was.
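For example, a minimal sketch of accessing such an atlas, assuming a sprite atlas named "Tiles" containing a tile image named "grass" (both names are hypothetical):

```swift
import SpriteKit

// SKTextureAtlas loads a sprite atlas from Assets.xcassets by name,
// the same way an [asset-name].atlas folder was loaded before.
let atlas = SKTextureAtlas(named: "Tiles")

// textureNamed(_:) returns the @1x/@2x/@3x variant matching the current
// device scale, which is what keeps the tiles pixel-aligned.
let grassTexture = atlas.textureNamed("grass")
let tile = SKSpriteNode(texture: grassTexture)
```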

Related

Unity blurry and pixelated sprites in editor (no pixel art)

I am currently making a mobile match-3-like game in Unity. I have made all the graphics for the gems (the objects with which you make the matches) in Inkscape at 256x256 and exported them (PNG files) at 90 dpi (I also tried 360, but nothing changed). My problem is that when I run the game in the editor, the graphics seem to be pixelated and blurry. In my sprite settings I've set Pixels Per Unit to 256, checked Generate Mip Maps, I am using Bilinear Filter Mode, and the aniso level is 0. I have also set the max size to 256 and compression to high quality. (My Main Camera's size is 10, but I tried changing that and nothing changed as far as the quality of the sprites.) What can I do to "perfectly" display my sprites? Do I have to export them in some other way from Inkscape, or do I have to change some of Unity's settings?
Thank you.
NOTE: My sprites are not "pixel art"!
Edit (added photos of the purple gem as a file and how it is shown in the editor):
Because of scaling.
Your display isn't showing those images in a 256x256 region, which means they must be scaled in some manner to fit the region where they are displayed. Camera rendering is notoriously bad at scaling. As your images aren't vector graphics (and Unity doesn't support vector graphic formats anyway), scaling will always result in a loss of detail, particularly hard edges.
Your options are:
smaller images where you have complete control over how the image is scaled down
bilinear filtering (which is fundamentally blurry)
mipmaps (which are automatically scaled down versions of your image in powers of two)
If the latter two aren't giving satisfactory results, your only option is the first.
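To make the mipmap option concrete, a quick engine-agnostic sketch (in Swift) of the chain for a 256x256 texture; each level is the previous one halved, down to 1x1:

```swift
// Each mip level halves the previous one: 256, 128, 64, ..., 1.
// The renderer picks the level closest to the sprite's on-screen size.
var size = 256
var level = 0
while size >= 1 {
    print("mip \(level): \(size)x\(size)")
    size /= 2
    level += 1
}
```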

Canvas too big for the camera in Unity

I have created a 2D game with an orthogonal camera and using 16:9 display size.
I dragged my background image onto the hierarchy (it's about 2048x1152) and then set the camera size to 22.5, which made it fit the background perfectly and display just right.
However, when I add a Canvas for a UI, it is absolutely giant, about 100 times bigger. It only becomes 'normal' size with respect to everything else when I set the camera to its default size of 5. So when I add a small graphic, it too becomes giant.
I'm simply following a book I read and I'm not doing anything to deviate.
Am I doing something wrong? Below is what I mean. The background image is the little image in the bottom right and the outlined rectangle is the canvas with a small graphic added.
Thanks.
To force your Canvas UI in the Hierarchy to match the camera view in the Unity Editor's Scene window (i.e. not be ridiculously massive), or in other words to get the Canvas to fit the camera's size in the Scene, do the following:
Set the Canvas component's Render Mode to Screen Space - Camera.
Make sure you select or drag the relevant Camera from the Hierarchy to the Render Camera field in the Inspector.
You should use the Unity canvas for this along with the canvas scaler component. If I'm not mistaken it will scale all elements relative to the screen they are viewed on.
The canvas scaler allows you to match the scaling based on a preferred viewport size which is a life saver.
However, this may not fit your needs perfectly, as it would mean that the background element becomes fixed. So if you wanted to pan the element, you would need to move its x and y position within the canvas.
Hope that helps?

Unity - Render Texture from Camera's targetTexture produces seams

I am attempting to render a specific section of my scene using a separate camera and a render texture. That object is on a layer that the main camera does not render but the secondary camera does. The secondary camera's target texture is set to a render texture that I have created. Everything is working as intended except that the object, when rendered to a texture, has a bunch of seams that are not present when rendering directly to the screen.
What it looks like when rendered directly to the screen:
Correct
What it looks like when rendered to a texture, and then displayed on a quad in the scene:
Incorrect
Notice how the second image has a bunch of transparent "lines" in between the sprites where there shouldn't be any.
I am using a basic transparent shader to display the render texture on the quad (since the background isn't part of the render texture, just the black crowd part). I have tried a number of different shaders, and none of them seem to make a difference.
The render texture's settings are:
Width: Screen.width
Height: Screen.height
Format: RenderTextureFormat.ARGBFloat
Unity Version: 5.2.3f1 - iOS Platform
Note: The reason I am doing this is so that I can apply a "Blur" image effect to the texture, and make the crowd in the foreground appear to be out of focus. Any alternative suggestions for how to do this are also welcome.
I'm not quite sure -- but it almost sounds like you have line ghosting. You may want to give this a read and let me know if that's what you're dealing with or not:
The reason for this is due to how the texture image was authored, combined with the filtering that most 3d engines use when textures are displayed at different sizes on screen.
Your image may have coloured areas which are completely opaque, coloured areas which are partially transparent, and areas which are completely transparent. However, the areas where your alpha channel is completely transparent (0% opacity) actually still have a colour value too. PNGs (or at least the way Photoshop exports them) seem to default to white for the completely transparent pixels. With other formats or editors, this may be black. Both are equally undesirable when it comes to use in a 3d engine.
You may think, "why is the white colour a problem if it's completely transparent?". The problem occurs because when your texture appears on screen, it's usually either upscaled or downscaled, depending on whether the pixels in the texture's image appear larger or smaller than actual size. For downsizing, a series of downscaled versions is created during import; these are used when the texture is displayed at smaller sizes or steeper angles relative to the view, to improve visual quality and make rendering faster. This process is called "mip-mapping". For upscaling, simple bilinear interpolation is normally used.
The scaled versions are usually created using simple bilinear interpolation, which means that the transparent pixels are mixed with the neighbouring visible pixels. With mipmaps, the mixing of invisible and visible pixel colours compounds at each smaller level, with the result that your nasty white edges become more apparent at further distances.
The solution is to ensure that these completely transparent pixels have a colour value which matches their neighbouring visible pixels, so that when the interpolation occurs, the colour 'bleed' from the invisible pixels is of the appropriate colour.
To solve this (in Photoshop) I always use the free "Solidify" tool from the Flaming Pear Free Plugins pack, like this:
Download and install the Flaming Pear "Free Plugins" pack (near the bottom of that list)
Open your PNG in Photoshop.
Go to Select -> Load Selection and click OK.
Go to Select -> Save Selection and click OK. This will create a new alpha channel.
Now Deselect all (Ctrl-D or Cmd-D)
Select Filter -> Flaming Pear -> Solidify B
Your image will now appear to be entirely made of solid colour, with no transparent areas, however your transparency information is now stored in an explicit alpha channel, which you can view and edit by selecting it in the channels palette.
Now re-save your image, and you should find your white fuzzies have disappeared!
Source: http://answers.unity3d.com/questions/10302/messy-alpha-problem-white-around-edges.html
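If you would rather do the solidify step in code than in Photoshop, here is a minimal sketch of the same idea in Swift, assuming the image is already decoded into a raw RGBA8 pixel buffer (how you obtain and re-save the buffer, e.g. via CGImage, is left out):

```swift
// Give every fully transparent pixel the average colour of its visible
// neighbours, so bilinear filtering and mipmaps have no white to bleed in.
func solidify(pixels: inout [UInt8], width: Int, height: Int) {
    let original = pixels
    for y in 0..<height {
        for x in 0..<width {
            let i = (y * width + x) * 4
            guard original[i + 3] == 0 else { continue } // only fully transparent pixels
            var r = 0, g = 0, b = 0, count = 0
            // Average the colours of the visible 8-neighbours.
            for dy in -1...1 {
                for dx in -1...1 {
                    let nx = x + dx, ny = y + dy
                    guard nx >= 0, nx < width, ny >= 0, ny < height else { continue }
                    let j = (ny * width + nx) * 4
                    if original[j + 3] > 0 {
                        r += Int(original[j]); g += Int(original[j + 1]); b += Int(original[j + 2])
                        count += 1
                    }
                }
            }
            if count > 0 {
                pixels[i] = UInt8(r / count)
                pixels[i + 1] = UInt8(g / count)
                pixels[i + 2] = UInt8(b / count)
                // Alpha stays 0; only the hidden colour changes.
            }
        }
    }
}
```

A single pass only fills transparent pixels that touch a visible one; repeating the pass until nothing changes approximates what the plugin does across the whole image.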
Turns out that the shader I was using for my scene was using "Blend SrcAlpha OneMinusSrcAlpha" for some reason, when it should have been using "Blend One OneMinusSrcAlpha". This was causing objects with alpha less than 1 to make the objects under them semi-transparent as well, exposing the camera's clear-colour background.
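For reference, a plain sketch of the arithmetic behind those two blend modes (this is just the math the ShaderLab keywords stand for, not Unity API):

```swift
// src is the incoming fragment, dst the pixel already in the target;
// each function computes one colour channel of the blended result.

// "Blend SrcAlpha OneMinusSrcAlpha" expects straight (non-premultiplied) colour:
func blendStraight(src: (rgb: Double, a: Double), dst: Double) -> Double {
    return src.rgb * src.a + dst * (1 - src.a)
}

// "Blend One OneMinusSrcAlpha" expects colour already multiplied by alpha,
// so the source is added as-is and only the destination is attenuated:
func blendPremultiplied(src: (rgb: Double, a: Double), dst: Double) -> Double {
    return src.rgb + dst * (1 - src.a)
}
```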

Dealing with different screen sizes when using tiled map files (tmx)

I'm trying to learn SpriteKit and I'm a little confused by something. I know that I need to have @2x image files which are double the size of the standard ones, and that once I name them properly Xcode will deal with the rest. But when dealing with Tiled, do I need tmx files that are level1.tmx and level1@2x.tmx?
So here is what I did as a test today:
created two tilesheets: one was 32x32 and the other 64x64 (the 2x).
created two levels in Tiled. One used 64x64 tile sizes with a level size of 2048x1536 (the 2x one), and I used the 64x64 tiles on it. The other level was exactly the same, but I created it with a 32x32 tile size, which made the level size 1024x768, and I used the 32x32 tilesheet on it.
So now I have two tmx files and two spritesheets, for one level, on the iPad.
Am I doing this correctly? Once I start doing the iPhone, will I have 4 or maybe 6 sets for each level?
I assume Xcode won't work the same way with tmx files if I name them level1@2x.tmx, right? If not, and assuming everything I'm doing above is correct, how do I load the correct tmx file? Do I need to check for device type, then resolution, and load my map file based on that for each level?
I think maybe I'm not doing this right so I wanted to stop here and ask before I get any further.
TMX files are not loaded automatically in iOS, so I am guessing you are using either SKAToolKit or JSTileMap, the two most popular to my knowledge. Other Sprite Kit Alliance members and I put together SKAToolKit, so I think I can answer your questions from that point of view; it should be similar for JSTileMap too.
The short answer is build your map using 1x assets, but provide images for standard and retina in Xcode.
First you create your map using 1x assets. When the map is loaded, it uses points, not pixels. So, for example, if you make a map of 32x32-pixel tiles, it will be treated as a map of 32x32-point tiles. When the sprites are created, the correct image is pulled in based on the device. If an image named tree.png is 32x32, it will take up the space of a 32x32-point tile. If you also have an image called tree@2x.png that is 64x64, iOS will use that on retina devices. Because it is an @2x image, it will still take up 32x32 points but will be 64x64 pixels.
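To see the point/pixel distinction in code, a minimal sketch (the "tree" texture names are from the example above and are hypothetical):

```swift
import SpriteKit

// "tree" is assumed to exist as tree.png (32x32 px) and
// tree@2x.png (64x64 px) in the bundle.
let tree = SKSpriteNode(imageNamed: "tree")

// SpriteKit sizes nodes in points, so this prints (32.0, 32.0) on every
// device; on a retina screen the node is backed by the 64x64-pixel image.
print(tree.size)
```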
Hopefully that makes some sense. If not let me know.

Image size incorrect in Unity

I have a Unity 2D project with a fixed screen size of 800x450 pixels.
I have imported a background image that is also 800x450 pixels.
When placed on the stage, the image only takes up half of the screen.
The scale of the image is set to 1,1. The Z position is 0.
Why is the image displayed too small? How can I display the image at the correct resolution?
Does this mean that I have to design all my game assets at 2x the required size? Or that I somehow have to set the scale for all imported assets at 2? What is the recommended workflow?
EDIT
I have added a screenshot of the camera settings:
I would try making your camera orthographic and setting the size of the camera (not the transform) to half the height you want the view to be (225). An orthographic camera's size is half the vertical height of the view in world units, so with 1 unit per pixel, a 450-pixel-tall view needs a size of 225.
Also, if you are looking for a pixel-perfect game, here is a pretty good article from Unity about how to make that work; it explains some of the camera aspect ratio and scaling concerns:
http://blogs.unity3d.com/2015/06/19/pixel-perfect-2d/