Drawing only part of a texture with OpenGL ES on iPhone

Continued from my previous question.
I have a 320*480 RGB565 framebuffer which I wish to draw using OpenGL ES 1.0 on the iPhone.
- (void)setupView
{
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_CROP_RECT_OES, (int[4]){0, 0, 480, 320});
glEnable(GL_TEXTURE_2D);
}
// Updates the OpenGL view when the timer fires
- (void)drawView
{
// Make sure that you are drawing to the current context
[EAGLContext setCurrentContext:context];
//Get the 320*480 buffer
const int8_t * frameBuf = [source getNextBuffer];
//Create enough storage for a 512x512 power of 2 texture
int8_t lBuf[2*512*512];
memcpy (lBuf, frameBuf, 320*480*2);
//Upload the texture
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 512, 512, 0, GL_RGB, GL_UNSIGNED_SHORT_5_6_5, lBuf);
//Draw it
glDrawTexiOES(0, 0, 1, 480, 320);
[context presentRenderbuffer:GL_RENDERBUFFER_OES];
}
If I produce the original texture at 512*512, the output is cropped incorrectly but otherwise looks fine. However, using the required output size of 320*480, everything is distorted and messed up.
I'm pretty sure it's the way I'm copying the framebuffer into the new 512*512 buffer. I have tried this routine
int8_t lBuf[512][512][2];
const char * frameDataP = frameData;
for (int ii = 0; ii < 480; ++ii) {
memcpy(lBuf[ii], frameDataP, 320);
frameDataP += 320;
}
This is better, but the width appears stretched and the height is messed up.
Any help appreciated.

Are you drawing this background in portrait or landscape mode? The glDrawTexiOES parameters are (x, y, z, width, height), and since every other part of your code references those numbers as 320x480, that might be contributing to the "cropped incorrectly" problem. Try swapping the 320 and 480 in the cropping rectangle and in the width/height of the glDrawTex call. (Might be partially my fault for not posting the parameters in the last thread. Sorry!)
You'll have to use that second FB copy routine, though. The reason the single memcpy won't work is that it grabs the 320x480x2 buffer (307,200 bytes) and dumps it into the 512x512x2 buffer (524,288 bytes) as one contiguous block; it won't know enough to copy one 640-byte row (320 pixels x 2 bytes), skip 384 bytes of padding, and resume.
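Note that the row-by-row routine in the question still copies only 320 bytes per row, while each RGB565 row is 320 pixels x 2 bytes = 640 bytes, and it must advance by that amount too. A corrected sketch (function and buffer names are illustrative, not from the question):

```c
#include <stdint.h>
#include <string.h>

/* Copy a 320x480 RGB565 frame (2 bytes per pixel) row by row into the
 * top-left corner of a 512x512 texture buffer. Each source row is
 * 320 * 2 = 640 bytes; the remaining 384 bytes of each destination row
 * are padding and left untouched. */
void copy_frame_to_pot(const uint8_t *src, uint8_t *dst)
{
    const int srcWidth  = 320;        /* pixels per source row */
    const int srcHeight = 480;        /* source rows */
    const int bpp       = 2;          /* RGB565 */
    const int dstStride = 512 * bpp;  /* bytes per destination row */

    for (int row = 0; row < srcHeight; ++row)
        memcpy(dst + row * dstStride,
               src + row * srcWidth * bpp,
               (size_t)(srcWidth * bpp));
}
```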

It looks like the buffer allocated isn't large enough. If you have 8 bits (1 byte) per color component, you are allocating one byte too few per pixel. That could explain why only part of the image displays correctly.
I would think the following lines:
//Create enough storage for a 512x512 power of 2 texture
int8_t lBuf[2*512*512];
memcpy (lBuf, frameBuf, 320*480*2);
would need to be changed to:
//Create enough storage for a 512x512 power of 2 texture
int8_t lBuf[3*512*512];
memcpy (lBuf, frameBuf, 320*480*3);

Related

OpenGL ES 1.1: How to change texture color without losing luminance?

I have particles that I want to be able to change the color of in code, so any color can be used. So I have only one texture that basically has luminance.
I've been using glColor4f(1.0f, 0.0f, 0.0f, 1.0f); to apply the color.
Every blendfunc I've tried that has come close to working ends up like the last picture below. I still want to preserve luminance, like in the middle picture. (This is like the Overlay or Soft Light filters in Photoshop, if the color layer was on top of the texture layer.)
Any ideas for how to do this without programmable shaders? Also, since these are particles, I don't want a black box behind it, I want it to add onto the scene.
Here is a solution that might be close to what you're looking for:
glColor4f(1.0f, 0.0f, 0.0f, 1.0f);
glActiveTexture( GL_TEXTURE0 );
glEnable( GL_TEXTURE_2D );
glBindTexture(GL_TEXTURE_2D, spriteTexture);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE );
glActiveTexture( GL_TEXTURE1 );
glEnable( GL_TEXTURE_2D );
glBindTexture(GL_TEXTURE_2D, spriteTexture);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_ADD );
What it does is multiply the original texture by the specified color and then add the pixel values of the original texture on top:
final_color.rgba = original_color.rgba * color.rgba + original_color.rgba;
This will result in a brighter image than what you've asked for but might be good enough with some tweaking.
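The per-channel arithmetic of that two-unit setup can be sketched in plain C (values in [0, 1], clamped as GL clamps the sum), which makes the brightening visible:

```c
/* Per-channel result of the two-unit setup: unit 0 modulates the texel
 * by the brush color, unit 1 adds the original texel back.
 * Values are in [0, 1]; GL clamps the sum, so we clamp too. */
float modulate_add(float texel, float color)
{
    float v = texel * color + texel;
    return v > 1.0f ? 1.0f : v;
}
```

With color = 1 a mid-gray channel already saturates (0.5 * 1 + 0.5 = 1.0), which is why the result comes out brighter than a straight tint.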
Should you want to preserve the alpha value of the texture, you'll need to use GL_COMBINE instead of GL_ADD (+ set GL_COMBINE_RGB and GL_COMBINE_ALPHA properly).
Here are some results using this technique on your texture.
NONSENSE! You don't have to use multi-texturing. Just premultiply your alpha.
If you premultiply alpha on the image after you load it in and before you create the GL texture for it then you only need one texture unit for the GL_ADD texture env mode.
If you're on iOS then Apple's libs can premultiply for you. See the example Texture2D class and look for the kCGImageAlphaPremultipliedLast flag.
If you're not using an image loader that supports premultiply then you have to do it manually after loading the image. Pseudo code:
uint8* LoadRGBAImage(const char* pImageFileName) {
Image* pImage = LoadImageData(pImageFileName);
if (pImage->eFormat != FORMAT_RGBA)
return NULL;
// allocate a buffer to store the pre-multiply result
// NOTE that in a real scenario you'll want to pad pDstData to a power-of-2
uint8* pDstData = (uint8*)malloc(pImage->rows * pImage->cols * 4);
uint8* pSrcData = pImage->pBitmapBytes;
uint32 bytesPerRow = pImage->cols * 4;
for (uint32 y = 0; y < pImage->rows; ++y) {
uint8* pSrc = pSrcData + y * bytesPerRow;
uint8* pDst = pDstData + y * bytesPerRow;
for (uint32 x = 0; x < pImage->cols; ++x) {
// modulate src rgb channels with alpha channel
// store result in dst rgb channels
uint8 srcAlpha = pSrc[3];
*pDst++ = Modulate(*pSrc++, srcAlpha);
*pDst++ = Modulate(*pSrc++, srcAlpha);
*pDst++ = Modulate(*pSrc++, srcAlpha);
// copy src alpha channel directly to dst alpha channel
*pDst++ = *pSrc++;
}
}
// don't forget to free() the pointer!
return pDstData;
}
uint8 Modulate(uint8 u, uint8 uControl) {
// fixed-point multiply the value u with uControl and return the result
return ((uint16)u * ((uint16)uControl + 1)) >> 8;
}
Personally, I'm using libpng and premultiplying manually.
Anyway, after you premultiply, just bind the byte data as an RGBA OpenGL texture. Using glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_ADD); with a single texture unit should be all you need after that. You should get exactly (or pretty damn close to) what you want. You might also have to use glBlendFunc(GL_SRC_ALPHA, GL_ONE); if you really want to make the thing look shiny, btw.
This is subtly different from the Ozirus method. He's never "reducing" the RGB values of the texture by premultiplying, so the RGB channels get added too much and look sort of washed out/overly bright.
I suppose the premultiply method is more akin to Overlay whereas the Ozirus method is Soft Light.
For more, see:
http://en.wikipedia.org/wiki/Alpha_compositing
Search for "premultiplied alpha"

2D graphics on iPhone

What I want to achieve:
Drawing PNG files with alpha exactly as they appear originally, without transforming any pixels. This is because the images are very detailed and I don't want to lose any information.
Animating those images by rotating them and moving. No scaling.
Actually, I don't want to use any 3rd-party libraries like cocos2d. I have been reading blogs about OpenGL ES and have checked Texture2D.m, so I have a basic idea about drawing primitives in 3D space. As far as I understand, if I need to draw and animate an image (sprite?), I can just make a rectangle and map a texture onto it. But the problem is that I want my PNG file to appear exactly as the original, not scaled or rotated.
What is the best technique to achieve the points mentioned above? Drawing a textured rectangle in an orthogonal viewport? How do I preserve the original size/color of the image?
Sorry if question is a bit messed up, I can clarify.
By your description, I don't understand why you'd use OpenGL ES. Using Quartz and layers would enable 1) drawing PNGs 2) rotating and moving them (even scaling if you wished). It would be easier than setting up an orthogonal projection in OpenGL + handling loading of images.
Now, if you really want to go OpenGL, yes, you should setup an orthogonal projection, with view size strictly equal to the screen size, and draw a rectangle of the exact size, with texture mapped with exact 0/1 coordinates. For the color aspect, you can use 8888 format, which is exact, no compression, no color reduction and with full alpha.
I know you said you don't want to use a third-party library, but Cocos2D implements 2D graphics with OpenGL, so you can reference CCSprite.m to see how they did it. That said, you may want to consider whether this library is appropriate for your application. In order to do your own rendering, all you have to do is extend CCSprite and put in your own code; as you can see in the comment below, the state is already set up for you and everything.
CCSprite.m:
-(void) draw
{
NSAssert(!usesBatchNode_, @"If CCSprite is being rendered by CCSpriteBatchNode, CCSprite#draw SHOULD NOT be called");
// Default GL states: GL_TEXTURE_2D, GL_VERTEX_ARRAY, GL_COLOR_ARRAY, GL_TEXTURE_COORD_ARRAY
// Needed states: GL_TEXTURE_2D, GL_VERTEX_ARRAY, GL_COLOR_ARRAY, GL_TEXTURE_COORD_ARRAY
// Unneeded states: -
BOOL newBlend = NO;
if( blendFunc_.src != CC_BLEND_SRC || blendFunc_.dst != CC_BLEND_DST ) {
newBlend = YES;
glBlendFunc( blendFunc_.src, blendFunc_.dst );
}
#define kQuadSize sizeof(quad_.bl)
glBindTexture(GL_TEXTURE_2D, [texture_ name]);
long offset = (long)&quad_;
// vertex
NSInteger diff = offsetof( ccV3F_C4B_T2F, vertices);
glVertexPointer(3, GL_FLOAT, kQuadSize, (void*) (offset + diff) );
// color
diff = offsetof( ccV3F_C4B_T2F, colors);
glColorPointer(4, GL_UNSIGNED_BYTE, kQuadSize, (void*)(offset + diff));
// tex coords
diff = offsetof( ccV3F_C4B_T2F, texCoords);
glTexCoordPointer(2, GL_FLOAT, kQuadSize, (void*)(offset + diff));
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
if( newBlend )
glBlendFunc(CC_BLEND_SRC, CC_BLEND_DST);
#if CC_SPRITE_DEBUG_DRAW
CGSize s = [self contentSize];
CGPoint vertices[4]={
ccp(0,0),ccp(s.width,0),
ccp(s.width,s.height),ccp(0,s.height),
};
ccDrawPoly(vertices, 4, YES);
#endif // CC_TEXTURENODE_DEBUG_DRAW
}

Using image created with CGBitmapContextCreate as an opengl texture

I'm generating an image using quartz2d and I want to use it as an opengl texture.
The tricky part is that I want to use as few bits per pixel as possible, so I'm creating cgContext as following:
int bitsPerComponent = 5;
int bytesPerPixel = 2;
int width = 1024;
int height = 1024;
void* imageData = malloc(width * height * bytesPerPixel);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(imageData, width, height, bitsPerComponent, width * bytesPerPixel, colorSpace, kCGImageAlphaNoneSkipFirst);
//draw things into context, release memory, etc.
As stated in the documentation, this is the only supported RGB pixel format for CGBitmapContextCreate that uses 16 bits per pixel.
So now I want to upload this imageData which looks like "1 bit skipped - 5 bits red - 5 bits green - 5 bits blue" into an opengl texture. So I should do something like this:
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_SHORT_5_5_5_1, imageData);
That won't work because in this call I've specified the pixel format as 5 red - 5 green - 5 blue - 1 alpha. That is wrong, but it appears there is no format that would match the Core Graphics output.
There are some other options, like GL_UNSIGNED_SHORT_1_5_5_5_REV, but those won't work on the iPhone.
I need some way to use this imageData as a texture, but I really don't want to swap bytes around manually using memset or such, because that seems terribly inefficient.
You do need to swap bits around to get it into a denser format like RGBA5551 or RGB565, since, as you note, CGBitmapContext does not support these formats for drawing (for simplicity and efficiency's sake).
memset isn't going to do the trick, but there are "fast" conversion routines in Accelerate.framework.
See vImageConvert_ARGB8888toRGB565(…) and vImageConvert_ARGB8888toARGB1555(…), available on iOS 5 and later.
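If Accelerate isn't an option, the repack for the specific case in the question is simple enough to do by hand: XRGB1555 (the kCGImageAlphaNoneSkipFirst 16-bit layout) becomes RGBA5551 (what GL_UNSIGNED_SHORT_5_5_5_1 expects) with one shift-and-OR per pixel. This scalar sketch is illustrative, not from the answer, and assumes the pixels are read as native 16-bit values:

```c
#include <stdint.h>
#include <stddef.h>

/* Repack XRGB1555 (1 unused bit, then 5 red, 5 green, 5 blue) into
 * RGBA5551 (5 red, 5 green, 5 blue, 1 alpha), forcing alpha to 1.
 * Operates in place on host-endian 16-bit pixels. */
void xrgb1555_to_rgba5551(uint16_t *pixels, size_t count)
{
    for (size_t i = 0; i < count; ++i)
        pixels[i] = (uint16_t)((pixels[i] << 1) | 1u);
}
```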
For iOS 7.0, OS X.9 and later:
vImage_CGImageFormat fmt = {
.bitsPerComponent = 5,
.bitsPerPixel = 16,
.colorSpace = NULL, // faster with CGImageGetColorSpace(cgImage) if known to be RGB
.bitmapInfo = kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder16Little // ARGB1555 little endian
};
vImage_Buffer buf;
vImageBuffer_InitWithCGImage( &buf, &fmt, NULL, cgImage, kvImageNoFlags );
...
free(buf.data);
Data is in buf.data, along with image height, width and rowBytes info. I don't recall what GL's requirements are for whether row padding is allowed. You can control that by preallocating the buf.data and buf.rowBytes fields and passing kvImageDoNotAllocate in the flags.
565_REV is kCGImageAlphaNone | kCGBitmapByteOrder16Little.
5551_REV is kCGImageAlphaNoneSkipLast | kCGBitmapByteOrder16Little.

iPhone paint app (glPaint based). Blending with white background

I'm developing a painting app. I tried doing it with CoreGraphics/Quartz 2D, but the curve-drawing algorithm is pretty slow, so we decided to switch to OpenGL ES.
I've never had any OpenGL experience, so I found glPaint example from apple and started play with it.
I've changed the erase method to make a white background.
Now I'm stuck with brushes and blending. In the example, Apple uses a "white on black" texture for the brush (first in the pic below), but it didn't work for me (I played with different blending modes). So I decided to use different brushes, but I haven't found the proper way.
I found few questions on the stackoverflow, but all of them were unanswered. Here is a picture (from another question, thanks to Kevin Beimers).
(source: straandlooper.com)
So the question is how to implement a stroke like the "desired" one in the picture, and how to blend two strokes closer to the real-life experience (blue over yellow = dark green).
Thanks.
Here is the current code for the brush (slightly modified from glPaint), from the initWithFrame method:
// Make sure the image exists
if(brushImage) {
// Allocate memory needed for the bitmap context
brushData = (GLubyte *) calloc(width * height * 4, sizeof(GLubyte));
// Use the bitmap creation function provided by the Core Graphics framework.
brushContext = CGBitmapContextCreate(brushData, width, width, 8, width * 4, CGImageGetColorSpace(brushImage), kCGImageAlphaPremultipliedLast);
// After you create the context, you can draw the image to the context.
CGContextDrawImage(brushContext, CGRectMake(0.0, 0.0, (CGFloat)width, (CGFloat)height), brushImage);
// You don't need the context at this point, so you need to release it to avoid memory leaks.
CGContextRelease(brushContext);
// Use OpenGL ES to generate a name for the texture.
glGenTextures(1, &brushTexture);
// Bind the texture name.
glBindTexture(GL_TEXTURE_2D, brushTexture);
// Set the texture parameters to use a minifying filter and a linear filter (weighted average)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
// Specify a 2D texture image, providing a pointer to the image data in memory
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, brushData);
// Release the image data; it's no longer needed
free(brushData);
// Make the current material colour track the current color
glEnable( GL_COLOR_MATERIAL );
// Enable use of the texture
glEnable(GL_TEXTURE_2D);
// Set a blending function to use
glBlendFunc( GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA );
// Enable blending
glEnable(GL_BLEND);
// Multiply the texture colour by the material colour.
glTexEnvf( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE );
}
//Set up OpenGL states
glMatrixMode(GL_PROJECTION);
CGRect frame = self.bounds;
glOrthof(0, frame.size.width, 0, frame.size.height, -1, 1);
glViewport(0, 0, frame.size.width, frame.size.height);
glMatrixMode(GL_MODELVIEW);
glDisable(GL_DITHER);
glEnable(GL_TEXTURE_2D);
glEnableClientState(GL_VERTEX_ARRAY);
glEnable(GL_BLEND);
// Alpha blend each "dab" of paint onto background
glBlendFunc( GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA );
//glBlendFunc(GL_SRC_COLOR, GL_ONE);
glEnable(GL_POINT_SPRITE_OES);
glTexEnvf(GL_POINT_SPRITE_OES, GL_COORD_REPLACE_OES, GL_TRUE);
self.brushScale = 3;
self.brushStep = 3;
self.brushOpacity = (1.0 / 1.5);
glPointSize(width / brushScale);
//Make sure to start with a cleared buffer
needsErase = YES;
[self erase];
Let’s start by defining the type of blending you’re looking for. It sounds like you want your buffer to start out white and have your color mixing obey a subtractive color model. The easiest way to do that is to define the result of mixing Cbrush over Cdst as:
C = Cbrush × Cdst
Notice that using this equation, the result of mixing yellow (1, 1, 0) and cyan (0, 1, 1) is green (0, 1, 0), which is what you’d expect.
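That mixing rule is just a per-channel multiply; a minimal sketch:

```c
/* Multiplicative ("subtractive") mixing of two colors, per channel,
 * with components in [0, 1]: C = Cbrush * Cdst. */
void mix_multiply(const float brush[3], const float dst[3], float out[3])
{
    for (int i = 0; i < 3; ++i)
        out[i] = brush[i] * dst[i];
}
```

Mixing yellow (1, 1, 0) with cyan (0, 1, 1) this way yields green (0, 1, 0), matching the expectation above.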
Having a brush that fades at the edges complicates things slightly. Let's say you now have a brush opacity value Abrush: where Abrush is 1, you want your brush color to blend at full strength, and where Abrush is 0, you want the original color to remain. Now what you're looking for is:
C = (Cbrush × Cdst) × Abrush + Cdst × (1 - Abrush)
Since blending in OpenGL ES computes C = Csrc × S + Cdst × D, we can get exactly what we want if we make the following substitutions:
Csrc = Cbrush × Abrush
Asrc = Abrush
S = Cdst
D = (1 - Abrush)
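As a sanity check, plugging those substitutions into the fixed-function blend equation does reduce to the desired result; a small sketch in C:

```c
/* With Csrc = Cbrush * Abrush, S = Cdst and D = 1 - Abrush, the blend
 * C = Csrc * S + Cdst * D reduces to the desired
 * C = (Cbrush * Cdst) * Abrush + Cdst * (1 - Abrush). */
float blend_dst_color(float cbrush, float abrush, float cdst)
{
    float csrc = cbrush * abrush;  /* produced by the combiner setup */
    float s = cdst;                /* GL_DST_COLOR */
    float d = 1.0f - abrush;       /* GL_ONE_MINUS_SRC_ALPHA */
    return csrc * s + cdst * d;
}
```

A fully opaque black brush paints black, and a fully transparent brush leaves the destination untouched, as expected.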
Now let’s look at what it takes to set this up in OpenGL ES. There are 4 steps here:
Change the background color to white.
Change the brush texture to an alpha texture.
By default, GLPaint creates its brush texture as an RGBA texture with the brush shape drawn in the RGB channels, which is somewhat unintuitive. For reasons you’ll see later, it’s useful to have the brush shape in the alpha channel instead. The best way to do this is by drawing the brush shape in grayscale with CG and creating the texture as GL_ALPHA instead:
CGColorSpaceRef brushColorSpace = CGColorSpaceCreateDeviceGray();
brushData = (GLubyte *) calloc(width * height, sizeof(GLubyte));
brushContext = CGBitmapContextCreate(brushData, width, width, 8, width, brushColorSpace, kCGImageAlphaNone);
CGColorSpaceRelease(brushColorSpace);
glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, width, height, 0, GL_ALPHA, GL_UNSIGNED_BYTE, brushData);
Set up Csrc, Asrc, S and D.
After switching to an alpha texture, assuming that your brush color is still being specified via glColor4f, you’ll find that the default OpenGL ES texture environment will give you this:
Csrc = Cbrush
Asrc = Abrush
In order to obtain the extra multiplication by Abrush for Csrc, you’ll need to set up a custom combiner function in the texture environment as follows (you can do this in the initialization function for PaintingView):
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_ALPHA);
Changing GL_TEXTURE_ENV_MODE to GL_COMBINE gives you Cbrush × 0 (to see why this is the case, read section 3.7.12 in the OpenGL ES 1.1 specification). Changing GL_OPERAND0_RGB to GL_SRC_ALPHA changes the second term in the multiplication to what we want.
To set up S and D, all you need to do is change the blending factors (this can be done where the blending factors were set up before):
glBlendFunc(GL_DST_COLOR, GL_ONE_MINUS_SRC_ALPHA);
Ensure that any modifications to Abrush outside of the brush texture are reflected across other channels.
The above modifications to the texture environment only account for the part of the brush opacity that comes from the brush texture. If you modify the brush opacity elsewhere (e.g. by scaling it, as in AppController), you must make the same modifications to the other three channels:
glColor4f(components[0] * kBrushOpacity, components[1] * kBrushOpacity, components[2] * kBrushOpacity, kBrushOpacity);
Note that the downsides to implementing your brushes with a subtractive color model are that colors can only get darker, and repeatedly drawing the same color over itself can eventually result in a color shift if it’s not one of the primary subtractive colors (cyan, magenta, or yellow). If, after implementing this, you find that the color shifts are unacceptable, try changing the brush texture to an alpha texture as in step 2 and changing the blend factors as follows:
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
This will give you simple painting of your brush color over white, but no actual mixing of colors (the brush colors will eventually overwrite the background).
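The per-channel arithmetic of that fallback is ordinary alpha-over; this sketch shows why repeated strokes converge on the brush color rather than mixing:

```c
/* Plain alpha-over blending: C = Cbrush * A + Cdst * (1 - A).
 * There is no multiplication by the destination, so strokes cover
 * rather than mix, converging on the brush color. */
float blend_alpha_over(float cbrush, float abrush, float cdst)
{
    return cbrush * abrush + cdst * (1.0f - abrush);
}
```

A half-opaque black stroke over white gives 0.5; a second pass over that gives 0.25, heading toward black instead of producing a mixed hue.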

32 pixel gap at top of opengl es texture

I am trying to render to a texture using openGL ES (for iPhone) and then display the texture on the screen. Everything works except that there is a 32 row gap at the top of the texture and the bottom 32 rows are cut off. It's like all of my drawing is being offset 32 pixels down, which results in the bottom 32 rows not being drawn as they are outside of the texture.
Here's a very simple example:
void RenderToTexture( int texture )
{
unsigned char buffer[4 * 320 * 480];
unsigned char colour[4];
colour[0] = 255;
colour[1] = 0;
colour[2] = 0;
colour[3] = 128;
for ( int i = 0; i < 4 * 320 * 480; i += 4 )
{
buffer[i] = colour[0];
buffer[i+1] = colour[1];
buffer[i+2] = colour[2];
buffer[i+3] = colour[3];
}
glBindTexture( GL_TEXTURE_2D, texture );
glTexSubImage2D( GL_TEXTURE_2D, 0, 0, 0, 320, 480, GL_RGBA, GL_UNSIGNED_BYTE, buffer );
}
And here's the result:
Just setting the colour using glColor4f() instead of calling RenderToTexture() results in a red screen as expected.
Those 32 pixels are the ones missing to reach 512: 512 - 480 = 32.
The reason is you can use only texture sizes that are powers of two with GL_TEXTURE_2D.
So you have to round up your width and height to 512. You can still display only the part of the texture you want by using texture coordinates or by setting a texture matrix.
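Those texture coordinates are just the ratio of the used size to the allocated power-of-two size; a small sketch:

```c
/* Maximum texture coordinate that selects only the used region of a
 * padded power-of-two texture: used size over allocated size. */
float tex_coord_max(int used, int pot)
{
    return (float)used / (float)pot;
}
```

A quad textured with s in [0, 320/512] and t in [0, 480/512] would then show exactly the 320x480 region and none of the padding.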
The iPhone 3GS supports non power-of-two textures under certain conditions. All the following must be true:
GL_TEXTURE_WRAP_S should be set to GL_CLAMP_TO_EDGE
GL_TEXTURE_WRAP_T should be set to GL_CLAMP_TO_EDGE
Mipmapping must be turned off; minify with GL_LINEAR rather than GL_LINEAR_MIPMAP_LINEAR