Confusion: SKSpriteNode & SKTexture difference (iPhone)

I am confused about SKSpriteNode and SKTexture. I have seen in tutorials that SKSpriteNode can be used to add an image, like [SKSpriteNode spriteNodeWithImageNamed:@"someImage"], and that the same thing seems to happen with SKTexture via [SKTexture textureWithImageNamed:@"someImage"].
What is the difference between a texture and an image? If we can add an image using SKSpriteNode, then what is the reason to use SKTexture? And if we use SKTexture and texture atlases, why does the image still have to be added to an SKSpriteNode?
In short: what is the difference between the two?

SKSpriteNode is a node that displays (renders) an SKTexture on screen at a given position, with optional scaling and stretching.
SKTexture is a storage class that holds an image in a format suitable for rendering, plus additional information such as the frame rectangle if the texture references only a smaller portion of the image / texture atlas.
One reason for splitting the two is that you usually want multiple sprites to draw with the same SKTexture or from the same SKTextureAtlas. This avoids keeping a copy of the same image in memory for each individual sprite, which would quickly become prohibitive. For example, a 4 MB texture used by 100 sprites still uses 4 MB of memory, as opposed to 400 MB.
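For illustration, here is a minimal Swift sketch of that sharing (the texture name "enemy" and the positions are placeholders):

    import SpriteKit

    // Load the image into a texture once; "enemy" is a placeholder asset name.
    let sharedTexture = SKTexture(imageNamed: "enemy")

    // All 100 sprites reference the same texture data in memory;
    // only the node itself (position, scale, etc.) is duplicated.
    let sprites = (0..<100).map { i -> SKSpriteNode in
        let sprite = SKSpriteNode(texture: sharedTexture)
        sprite.position = CGPoint(x: CGFloat(i) * 10.0, y: 100.0)
        return sprite
    }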
Update to answer comment:
The term 'texture' dates back to the 70s.
A texture is an in-memory representation of an image formatted specifically for use in rendering. Common image formats (PNG, JPG, GIF, etc.) don't lend themselves well for rendering by a graphics chip. Textures are an "image format" that graphics hardware and renderers such as OpenGL understand and have standardized.
If you load a PNG or JPG into a texture, the format of the image changes: its color depth, alpha channel, orientation, memory layout, and compression method may all change. Additional data may be introduced, such as mip-map levels: scaled-down copies of the original texture used to draw farther-away polygons at a lower resolution, which decreases aliasing and speeds up rendering.
That's only scratching the surface though. What's important to keep in mind is that no rendering engine works with images directly, they're always converted into textures. This has mainly to do with efficiency of the rendering process.
Whenever you specify an image directly in an API such as Sprite Kit's spriteNodeWithImageNamed:, the renderer internally first checks whether there is an existing texture with the given image name and, if so, uses it. If no such image has been loaded yet, it loads the image, converts it to a texture, and stores it with the image name as the key for future reference (this is called texture caching).
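A short Swift sketch of the two entry points, assuming the caching behaviour described above ("player" is a placeholder image name):

    import SpriteKit

    // Convenience path: the node looks up (or creates and caches) the texture itself.
    let a = SKSpriteNode(imageNamed: "player")

    // Explicit path: create the texture first, then hand it to the node.
    // For the same image name this reuses the already-cached texture data.
    let playerTexture = SKTexture(imageNamed: "player")
    let b = SKSpriteNode(texture: playerTexture)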

Related

SKTileMapNode Pixel to Pixel Aligning

I am working on a tilemap game with Apple's newish SKTileMapNode. The pixels on my tiles do not match up with the pixels on the phone display. My scale mode is set to .resizeFill. My tiles are correctly sized at 64x64, and each tile's texture image is sized correctly.
I am using a camera that is a child of the gray circle in the attached image. I believe that the camera will create a pixel-to-pixel view of the screen size being used and match the resolution, but I am not sure that I can trust this. How can I get my pixels to align correctly to avoid this?
It turns out that SpriteKit's SKTileMapNode really likes assets to be optimized for all resolutions. This fixed my pixel-alignment problem entirely. While this may seem obvious, I had originally added only @1x files in order to use an optimized texture atlas. It took more research to discover how to add different resolutions to a texture atlas.
Since it is different from normal atlases (appending ".atlas" to a folder of images), I will describe how to do so here.
Go to the Assets.xcassets folder and click "New Sprite Atlas". Drag in all @2x and @3x images. Delete the [asset-name].atlas folder if you had one before, as it will not support different resolutions natively.
From here on, the atlas can be accessed just as the original [asset-name].atlas folder was accessed in code.
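For example, a hedged Swift sketch ("Tiles" and "grass" are placeholder names for the atlas and one of its images):

    import SpriteKit

    // "Tiles" is the name given to the sprite atlas inside Assets.xcassets.
    let atlas = SKTextureAtlas(named: "Tiles")

    // Look textures up by image name, exactly as with a .atlas folder.
    let grassTexture = atlas.textureNamed("grass")
    let tileNode = SKSpriteNode(texture: grassTexture)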

Why should I load same-size images into textures for animation?

In iOS Sprite Kit, why do the images I load into textures for an animation have to be the same size? If the images have various sizes, I cannot make the animation run correctly. What is the theory behind this? Thanks.
When you init an SKSpriteNode, its size is fixed unless you later change it in code. Changing the sprite's texture to another image with a different size can distort the image, because Sprite Kit will scale the new texture to the node's existing size. For example, if the node is created with a 100x100 texture and then gets a 50x100 texture, the image will be stretched sideways to fill 100x100.
When creating images for animation sequences always make sure you have the same size for each one. Use alpha to fill the spaces on the edges.
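A minimal Swift sketch of such an animation, assuming all frames share the same dimensions (the frame names "walk1"..."walk4" are placeholders):

    import SpriteKit

    // Every frame has identical pixel dimensions (pad with transparent
    // pixels if needed) so the node is never stretched between frames.
    let frames = (1...4).map { SKTexture(imageNamed: "walk\($0)") }

    let sprite = SKSpriteNode(texture: frames[0])
    let animation = SKAction.repeatForever(SKAction.animate(with: frames, timePerFrame: 0.1))
    sprite.run(animation)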

How to modify a bound texture in OpenGL ES 1.1

My platform is iPhone - OpenGL ES 1.1
I'm looking for a tutorial on modifying or drawing into a texture.
For example:
I have a background texture (just a blank blue-white gradient image) and an object texture.
I need to draw the object onto the background many times, so to optimize performance I want to draw it into the background texture.
Does anyone know the fastest way to do this?
Thanks a lot!
Do you want to draw it into the background texture and then keep that, or just overlay it? I'm not entirely sure what the question is asking.
To draw onto the background and then reuse that, you'll want to create another texture, or a pbuffer/fbo, and bind that. Draw a full-screen quad with your background image, then draw additional quads with the overlays as needed. The bound texture should then have the results, composited as necessary, and can be used as a texture or copied into a file. This is typically known as render-to-texture, and is commonly used to combine images or other dynamic image effects.
To optimize the performance here, you'll want to reuse the texture containing the final results. This will reduce the render cost from whatever it may have been (1 background + 4 faces) to a single background draw.
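As a rough illustration (a hedged Swift sketch, not a full FBO setup), the same reuse can also be achieved by drawing into the backbuffer once and capturing the result with glCopyTexImage2D, which is core OpenGL ES 1.1; the 512x512 size and the drawing steps are placeholders:

    import OpenGLES

    // Draw the full-screen background quad and the object quads into the
    // current framebuffer first (drawing calls omitted), then copy the
    // composited result into a texture. glCopyTexImage2D is core ES 1.1,
    // so this variant needs no FBO extension.
    var composited: GLuint = 0
    glGenTextures(1, &composited)
    glBindTexture(GLenum(GL_TEXTURE_2D), composited)
    glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_LINEAR)
    glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MAG_FILTER), GL_LINEAR)

    // Copy a 512x512 (power-of-two) region of the framebuffer into the texture.
    glCopyTexImage2D(GLenum(GL_TEXTURE_2D), 0, GLenum(GL_RGBA), 0, 0, 512, 512, 0)

    // From here on, bind `composited` and draw a single quad each frame
    // instead of re-drawing the background plus every overlay.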
Edit: This article seems to have a rather good breakdown of OpenGL ES RTT. Some good information in this one as well, though not ES-specific.
To overlay the decals, you simply need to draw them over the background. This is the same drawing method as in RTT, but without binding a texture as the render target. This will not persist, it exists only in the backbuffer, but will give the same effect.
To optimize this method, you'll want to batch drawing the decals as much as possible. Assuming they all have the same properties and source texture, this is pretty easy. Bind all the textures and set properties as needed, fill a chunk of memory with the corners, and just draw a lot of quads. You can also draw them individually, in immediate mode, but this is somewhat more expensive.

How can I get an OpenGl texture of a polygon shape from an image?

I have an image and a convex polygon defined by an array of x,y coordinates.
How would I go about getting a Texture2D representation of the part of the image encompassed by the polygon?
Basically I just need a texture from the image with the part outside the polygon made transparent.
If the resultant texture were also clipped to the width and height of the polygon I'd do backflips.
Any pointers/snippets would be appreciated. Thank you!
Interestingly, your question is tagged with both cocos2d and opengl, but I'll give an OpenGL-centric answer here. Rather than creating a new texture object to achieve the desired effect, I think you'd want to use the stencil buffer. The procedure would look like this:
When creating your FBO, attach a stencil buffer to it.
Clear the stencil buffer.
Turn off writes to the color and depth buffers; turn on writes to stencil.
Render the polygon and don't bother with texturing.
Re-enable writes to the color and depth buffers; turn on stencil testing.
Render a textured quad that corresponds to the bounding box of your polygon.
The iPhone 3GS and the iPhone simulator both support an 8-bit stencil buffer. For older iPhones, you might be able to do a similar trick with the framebuffer's alpha component rather than the stencil buffer...
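A hedged Swift sketch of those stencil steps (the actual polygon and quad drawing calls are left as placeholder comments, and the framebuffer is assumed to have a stencil attachment):

    import OpenGLES

    glClearStencil(0)
    glClear(GLbitfield(GL_STENCIL_BUFFER_BIT))

    // Step 1: write only to the stencil buffer while rasterizing the polygon.
    glColorMask(GLboolean(GL_FALSE), GLboolean(GL_FALSE),
                GLboolean(GL_FALSE), GLboolean(GL_FALSE))
    glDepthMask(GLboolean(GL_FALSE))
    glEnable(GLenum(GL_STENCIL_TEST))
    glStencilFunc(GLenum(GL_ALWAYS), 1, 0xFF)
    glStencilOp(GLenum(GL_KEEP), GLenum(GL_KEEP), GLenum(GL_REPLACE))
    // ... draw the untextured convex polygon here ...

    // Step 2: re-enable color/depth writes and draw only where the stencil is 1.
    glColorMask(GLboolean(GL_TRUE), GLboolean(GL_TRUE),
                GLboolean(GL_TRUE), GLboolean(GL_TRUE))
    glDepthMask(GLboolean(GL_TRUE))
    glStencilFunc(GLenum(GL_EQUAL), 1, 0xFF)
    glStencilOp(GLenum(GL_KEEP), GLenum(GL_KEEP), GLenum(GL_KEEP))
    // ... draw the textured quad covering the polygon's bounding box here ...

    glDisable(GLenum(GL_STENCIL_TEST))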

iPhone camera images as OpenGL ES textures

Is it possible to use an image captured with the iPhone's camera as a texture that is then manipulated in OpenGL ES (flag wave effect, etc.)? The main problem is that the iPhone screen is 320x480 (without the status bar), so the image won't have power-of-two dimensions. Is the main option to copy it into a 512x512 texture and adjust the vertices?
Yes, that's the way to do it.
Just use a larger texture. It's a waste of memory but unfortunately there is no way around this problem.
An alternative would be dividing the picture into squares with a width and height of 32 pixels (aka tiling), resulting in 15x10 tiles for a 480x320 image. Displaying it would however involve many texture switches while drawing, which might become a bottleneck. On the other hand, you would save a lot of memory using a tiled approach.
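A hedged Swift sketch of the padded-texture approach (the pixel buffer here is only a placeholder for the camera image's RGBA bytes):

    import OpenGLES

    let imageWidth: GLsizei = 480, imageHeight: GLsizei = 320
    let potSize: GLsizei = 512

    // Placeholder buffer standing in for the captured camera image (RGBA).
    let cameraPixels = [UInt8](repeating: 0, count: Int(imageWidth) * Int(imageHeight) * 4)

    var texture: GLuint = 0
    glGenTextures(1, &texture)
    glBindTexture(GLenum(GL_TEXTURE_2D), texture)
    glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_LINEAR)
    glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MAG_FILTER), GL_LINEAR)

    // Allocate the full 512x512 power-of-two texture without data...
    glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGBA, potSize, potSize, 0,
                 GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), nil)
    // ...then upload only the 480x320 image into its lower-left corner.
    cameraPixels.withUnsafeBytes { bytes in
        glTexSubImage2D(GLenum(GL_TEXTURE_2D), 0, 0, 0, imageWidth, imageHeight,
                        GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), bytes.baseAddress)
    }

    // Texture coordinates that crop the quad to just the image portion,
    // so the unused padding is never shown.
    let maxS = Float(imageWidth) / Float(potSize)   // 480/512 = 0.9375
    let maxT = Float(imageHeight) / Float(potSize)  // 320/512 = 0.625
    let texCoords: [Float] = [0, 0,  maxS, 0,  0, maxT,  maxS, maxT]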