I'm manipulating image masks at runtime.
Currently I'm using a separate FBO to render the masked image to a texture, then drawing the result to the screen.
I'm wondering whether another approach would be cheaper in terms of memory:
Draw the image with a shader that uses two texture slots, one for the image and another for the mask.
Could I save any memory that way?
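For reference, a minimal sketch of what the two-texture-unit approach could look like (OpenGL ES 2.0 assumed; the names u_image, u_mask, program, imageTexture, and maskTexture are illustrative, not taken from your setup):

```objc
#import <OpenGLES/ES2/gl.h>

// Fragment shader sketch: modulate the image's alpha by the mask's alpha channel.
static const char *kMaskFragmentShader =
    "precision mediump float;                           \n"
    "varying vec2 v_texCoord;                           \n"
    "uniform sampler2D u_image;                         \n"
    "uniform sampler2D u_mask;                          \n"
    "void main() {                                      \n"
    "    vec4 color = texture2D(u_image, v_texCoord);   \n"
    "    float mask = texture2D(u_mask, v_texCoord).a;  \n"
    "    gl_FragColor = vec4(color.rgb, color.a * mask);\n"
    "}                                                  \n";

// At draw time: bind the image to unit 0 and the mask to unit 1, then draw one quad.
// 'program', 'imageTexture', and 'maskTexture' are assumed to exist in your code.
void DrawMaskedImage(GLuint program, GLuint imageTexture, GLuint maskTexture)
{
    glUseProgram(program);

    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, imageTexture);
    glUniform1i(glGetUniformLocation(program, "u_image"), 0);

    glActiveTexture(GL_TEXTURE1);
    glBindTexture(GL_TEXTURE_2D, maskTexture);
    glUniform1i(glGetUniformLocation(program, "u_mask"), 1);

    // ... set up vertex attributes and draw the quad; no intermediate FBO texture needed.
}
```

With this approach you skip the intermediate render target entirely, at the cost of sampling two textures per fragment.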
I am confused about SKSpriteNode & SKTexture. I have seen in tutorials that SKSpriteNode can be used to add an image, like [SKSpriteNode spriteNodeWithImageNamed:@"someImage"]; and the same thing seems to happen with SKTexture via [SKTexture textureWithImageNamed:@"someImage"];
What is the difference between a texture and an image? If we can add an image using SKSpriteNode, then what is the reason to use SKTexture? And if we use SKTexture and texture atlases, why does the image get added to an SKSpriteNode at all?
In short, what is the difference between the two?
SKSpriteNode is a node that displays (renders) an SKTexture on screen at a given position, with optional scaling and stretching.
SKTexture is a storage class that contains an image file in a format that is suitable for rendering, plus additional information like the frame rectangle if the texture references only a smaller portion of the image / texture atlas.
One reason for splitting the two is that you usually want multiple sprites to draw with the same SKTexture or from the same SKTextureAtlas. This avoids keeping a copy of the same image in memory for each individual sprite, which would quickly become prohibitive. For example, a 4 MB texture used by 100 sprites still uses only 4 MB of memory, as opposed to 400 MB.
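As a small illustration (the image name "enemy" and the random positions are placeholders), many sprites can share one SKTexture:

```objc
#import <SpriteKit/SpriteKit.h>

// Inside your SKScene subclass, e.g. in -didMoveToView:
- (void)didMoveToView:(SKView *)view
{
    // The image is loaded and converted once; every sprite references the same texture.
    SKTexture *enemyTexture = [SKTexture textureWithImageNamed:@"enemy"]; // placeholder name

    for (int i = 0; i < 100; i++) {
        SKSpriteNode *sprite = [SKSpriteNode spriteNodeWithTexture:enemyTexture];
        sprite.position = CGPointMake(arc4random_uniform(320), arc4random_uniform(480));
        [self addChild:sprite];
    }
}
```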
Update to answer comment:
The term 'texture' dates back to the 70s.
A texture is an in-memory representation of an image, formatted specifically for use in rendering. Common image formats (PNG, JPG, GIF, etc.) don't lend themselves well to rendering by a graphics chip. Textures are an "image format" that graphics hardware and renderers such as OpenGL understand and have standardized.
If you load a PNG or JPG into a texture, the format of the image changes. Its color depth, alpha channel, orientation, memory layout, and compression method may change. Additional data may be introduced, such as mip-map levels: progressively downscaled copies of the original texture used to draw farther-away polygons at lower resolution, which reduces aliasing and speeds up rendering.
That's only scratching the surface though. What's important to keep in mind is that no rendering engine works with images directly, they're always converted into textures. This has mainly to do with efficiency of the rendering process.
Whenever you specify an image directly in an API such as Sprite Kit, for example with spriteNodeWithImageNamed:, internally the renderer first checks whether there's an existing texture with the given image name and, if so, uses that. If no such image has been loaded yet, it will load the image, convert it to a texture, and store it with the image name as the key for future reference (this is called texture caching).
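Purely as an illustration of that caching idea (this is not SpriteKit's actual implementation; textureCache and CachedTexture are made-up names), the lookup could be sketched like this:

```objc
#import <SpriteKit/SpriteKit.h>

// Illustrative only: return a cached texture for an image name, loading it on first use.
static NSMutableDictionary *textureCache;

SKTexture *CachedTexture(NSString *imageName)
{
    if (!textureCache) {
        textureCache = [NSMutableDictionary dictionary];
    }
    SKTexture *texture = textureCache[imageName];
    if (!texture) {
        // Load the image, convert it into a texture, and remember it under its name.
        texture = [SKTexture textureWithImageNamed:imageName];
        textureCache[imageName] = texture;
    }
    return texture;
}
```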
My platform is iPhone - OpenGL ES 1.1
I'm looking for a tutorial on modifying or drawing to a texture.
For example:
I have a background texture (just a blank blue-white gradient image)
and an object texture.
I need to draw the object onto the background many times, so to optimize performance I want to draw it into the background texture, like this:
Does anyone know the fastest way to do this?
Thanks a lot!
Do you want to draw it into the background texture and then keep that, or overlay it, or something else? I'm not entirely sure what the question is.
To draw onto the background and then reuse that, you'll want to create another texture, or a pbuffer/FBO, and bind that. Draw a full-screen quad with your background image, then draw additional quads with the overlays as needed. The bound texture will then hold the results, composited as necessary, and can be used as a texture or copied into a file. This is typically known as render-to-texture (RTT), and is commonly used to combine images or apply other dynamic image effects.
To optimize the performance here, you'll want to reuse the texture containing the final results. This will reduce the render cost from whatever it may have been (1 background + 4 faces) to a single background draw.
Edit: This article seems to have a rather good breakdown of OpenGL ES RTT. Some good information in this one as well, though not ES-specific.
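As a rough sketch of the render-to-texture setup on OpenGL ES 1.1 (using the OES framebuffer extension; the 512x512 size, DrawBackgroundQuad/DrawObjectQuad, and defaultFramebuffer are assumptions about your code):

```objc
#import <OpenGLES/ES1/gl.h>
#import <OpenGLES/ES1/glext.h>

// Sketch: build a 512x512 texture containing the composited background + objects.
// DrawBackgroundQuad() / DrawObjectQuad() stand in for your own drawing code, and
// 'defaultFramebuffer' is whatever your EAGL setup normally binds.
GLuint CreateCompositeTexture(GLuint defaultFramebuffer)
{
    const GLsizei texSize = 512;   // power-of-two size, assumed
    GLuint compositeTexture, fbo;

    glGenTextures(1, &compositeTexture);
    glBindTexture(GL_TEXTURE_2D, compositeTexture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texSize, texSize, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glGenFramebuffersOES(1, &fbo);
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, fbo);
    glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES,
                              GL_TEXTURE_2D, compositeTexture, 0);
    glViewport(0, 0, texSize, texSize);

    // Draw the background quad, then each object quad, exactly as you would on screen.
    // DrawBackgroundQuad();
    // DrawObjectQuad(...);

    // Switch back to the default framebuffer; from now on, drawing the composited
    // result is a single textured quad per frame instead of background + objects.
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, defaultFramebuffer);
    glDeleteFramebuffersOES(1, &fbo);

    return compositeTexture;
}
```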
To overlay the decals, you simply need to draw them over the background. This is the same drawing method as in RTT, but without binding a texture as the render target. This will not persist, it exists only in the backbuffer, but will give the same effect.
To optimize this method, you'll want to batch drawing the decals as much as possible. Assuming they all have the same properties and source texture, this is pretty easy. Bind all the textures and set properties as needed, fill a chunk of memory with the corners, and just draw a lot of quads. You can also draw them individually, in immediate mode, but this is somewhat more expensive.
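A rough sketch of that batching idea under OpenGL ES 1.1 (one shared texture, screen-space quads; the sizes and function names are illustrative):

```objc
#include <string.h>
#import <OpenGLES/ES1/gl.h>

// Batch several decal quads into one draw call. Two triangles per decal with
// interleaved position (x,y) and texcoord (u,v); all decals are assumed to
// share the same source texture, which is already bound and enabled.
typedef struct { GLfloat x, y, u, v; } Vertex;

enum { kMaxDecals = 64 };
static Vertex batch[kMaxDecals * 6];
static int vertexCount = 0;

void AddDecal(GLfloat x, GLfloat y, GLfloat w, GLfloat h)
{
    if (vertexCount + 6 > kMaxDecals * 6) return;   // batch full; flush first
    Vertex quad[6] = {
        { x,     y,     0, 0 }, { x + w, y,     1, 0 }, { x,     y + h, 0, 1 },
        { x + w, y,     1, 0 }, { x + w, y + h, 1, 1 }, { x,     y + h, 0, 1 },
    };
    memcpy(&batch[vertexCount], quad, sizeof(quad));
    vertexCount += 6;
}

void FlushDecals(void)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glVertexPointer(2, GL_FLOAT, sizeof(Vertex), &batch[0].x);
    glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), &batch[0].u);
    glDrawArrays(GL_TRIANGLES, 0, vertexCount);   // one draw call for all decals
    vertexCount = 0;
}
```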
I have an image and a convex polygon defined by an array of x,y coordinates.
How would I go about getting a Texture2D representation of the part of the image encompassed by the polygon?
Basically I just need a texture from the image with the part outside the polygon made transparent.
If the resultant texture were also clipped to the width and height of the polygon I'd do backflips.
Any pointers/snippets would be appreciated. Thank you!
Interestingly, your question is tagged with both cocos2d and opengl, but I'll give an OpenGL-centric answer here. Rather than creating a new texture object to achieve the desired effect, I think you'd want to use the stencil buffer. The procedure would look like this:
When creating your FBO, attach a stencil buffer to it.
Clear the stencil buffer.
Turn off writes to the color and depth buffers; turn on writes to the stencil buffer.
Render the polygon; don't bother with texturing.
Re-enable writes to the color and depth buffers; set the stencil test to pass only where the polygon was rendered.
Render a textured quad that corresponds to the bounding box of your polygon.
The iPhone 3GS and the iPhone simulator both support an 8-bit stencil buffer. For older iPhones, you might be able to do a similar trick with the framebuffer's alpha component rather than the stencil buffer...
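A sketch of those steps in OpenGL ES 1.1 terms (it assumes the current FBO has a stencil attachment, and DrawPolygon / DrawTexturedBoundingQuad stand in for your own drawing routines):

```objc
#import <OpenGLES/ES1/gl.h>

void DrawImageClippedToPolygon(void)
{
    glClearStencil(0);
    glClear(GL_STENCIL_BUFFER_BIT);

    // Pass 1: write 1 into the stencil buffer wherever the polygon covers,
    // without touching color or depth.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_FALSE);
    glEnable(GL_STENCIL_TEST);
    glStencilFunc(GL_ALWAYS, 1, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
    glDisable(GL_TEXTURE_2D);
    // DrawPolygon();   // the untextured convex polygon

    // Pass 2: re-enable color/depth writes and draw the textured bounding quad
    // only where the stencil value equals 1.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_TRUE);
    glStencilFunc(GL_EQUAL, 1, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    glEnable(GL_TEXTURE_2D);
    // DrawTexturedBoundingQuad();

    glDisable(GL_STENCIL_TEST);
}
```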
I want to make a program similar to GLPaint, using CGContext, that is very smooth and makes it easy to put images behind the painting canvas. I understand that GLPaint makes no allowance for putting an image behind the painting canvas rather than having just a black one.
You can very simply use an image behind the painting canvas.
Four basic steps:
Load your image into a texture (for example, 256x256).
Enable GL_TEXTURE_2D and set the current texture to the texture id you loaded.
Draw a rectangle with that texture enabled and set a texture coordinate pointer (an array of u,v points).
Loop over your screen touch events and overlay them with points as in GLPaint (without clearing your buffer), so the old points and the background image are kept. Render your buffer after drawing the points (brush).
Do you need more detail or sample code?
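For example, steps 2 and 3 might look roughly like this (OpenGL ES 1.1; 'backgroundTexture', the 320x480 canvas size, and a matching orthographic projection are assumptions):

```objc
#import <OpenGLES/ES1/gl.h>

// Sketch of steps 2-3: draw one full-canvas quad with the background texture.
// 'backgroundTexture' is the texture id from step 1.
void DrawBackgroundImage(GLuint backgroundTexture)
{
    static const GLfloat vertices[] = {
        0,   0,
        320, 0,
        0,   480,
        320, 480,
    };
    static const GLfloat texCoords[] = {
        0, 0,
        1, 0,
        0, 1,
        1, 1,
    };

    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, backgroundTexture);

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, vertices);
    glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    // Step 4: keep drawing brush points from the touch events as GLPaint does,
    // without clearing the color buffer, so the image and earlier strokes persist.
}
```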
Is it possible to use an image captured with the iPhone's camera as a texture that is then manipulated in OpenGL ES (flag wave effect, etc.)? The main problem is that the iPhone screen is 320x480 (with no status bar), so the image won't have power-of-two dimensions. Is the main option copying it into a 512x512 texture and adjusting the vertices?
Yes, that's the way to do it.
Just use a larger texture. It's a waste of memory but unfortunately there is no way around this problem.
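For example, with the 320x480 image placed in the corner of a 512x512 texture, the quad's texture coordinates only cover the used fraction (a sketch; the texture is assumed to be bound, and the vertex positions assume a 320x480 orthographic projection):

```objc
#import <OpenGLES/ES1/gl.h>

// The image occupies only part of the power-of-two texture, so clamp the
// texture coordinates to the used fraction: 320/512 = 0.625, 480/512 = 0.9375.
void DrawCameraImageQuad(void)
{
    static const GLfloat vertices[] = {
        0,   0,
        320, 0,
        0,   480,
        320, 480,
    };
    static const GLfloat texCoords[] = {
        0,      0,
        0.625f, 0,
        0,      0.9375f,
        0.625f, 0.9375f,
    };

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, vertices);
    glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}
```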
An alternative would be dividing the picture into squares with a width and height of 32 pixels (a.k.a. tiling), resulting in 10x15 tiles for a 320x480 image. Displaying it would, however, involve many texture switches while drawing, which might become a bottleneck. On the other hand, you would save a lot of memory with a tiled approach.