background with pattern texture - iphone

I tried it:
CCSprite *background = [CCSprite spriteWithSpriteFrame:frame];
background.textureRect = CGRectMake(0, 0, calcadaWidth, winSize.height);
background.position = ccp(calcadaWidth * 0.5, winSize.height * 0.5);
ccTexParams params = {GL_LINEAR, GL_LINEAR, GL_REPEAT, GL_REPEAT};
[background.texture setTexParameters:&params];
It works if the texture has a regular GL size such as 64x64 or 128x128. My texture is 126x70, so with this code I get some black space between the repeats.

Cocos2D uses OpenGL, and one of the limitations of OpenGL is that if you want a texture to repeat, it must be a power-of-two texture.
The black space you are getting is where OpenGL has padded your texture up to the next power of two.
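If you don't mind the pattern being resampled, one workaround is to redraw the image onto a power-of-two canvas at load time, so GL_REPEAT has nothing to pad. A minimal sketch, assuming the pattern lives in a hypothetical calcada.png (note that stretching 126x70 to 128x128 will distort the pattern slightly):
UIImage *src = [UIImage imageNamed:@"calcada.png"]; // hypothetical filename
UIGraphicsBeginImageContext(CGSizeMake(128, 128));  // next power of two
[src drawInRect:CGRectMake(0, 0, 128, 128)];        // resample to fill the canvas
UIImage *potImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CCTexture2D *potTex = [[[CCTexture2D alloc] initWithImage:potImage] autorelease];
A sprite made with [CCSprite spriteWithTexture:potTex] can then use textureRect and GL_REPEAT exactly as in the question.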

Related

Tiled background image

Trying to create a tiled background image for a top-down game. The SKScene is 8000x8000, and instead of creating a couple of very large sprites I am trying to tile it to improve performance.
CGSize coverageSize = CGSizeMake(8000, 8000);
CGRect textureSize = CGRectMake(0, 0, 100, 100);
CGImageRef backgroundCGImage = [UIImage imageNamed:@"bg"].CGImage; // this line returns several errors
UIGraphicsBeginImageContext(CGSizeMake(coverageSize.width, coverageSize.height));
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextDrawTiledImage(context, textureSize, backgroundCGImage);
UIImage *tiledBackground = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
SKTexture *backgroundTexture = [SKTexture textureWithCGImage:tiledBackground.CGImage];
SKSpriteNode *backgroundTiles = [SKSpriteNode spriteNodeWithTexture:backgroundTexture];
backgroundTiles.yScale = -1;
backgroundTiles.position = CGPointMake(0, 0);
[self addChild:backgroundTiles];
One way to do tiles is to create a node that is used only for drawing your tiles.
You draw the number of tiles that fits the screen into this special node.
Then use:
var texture = self.view!.textureFromNode(drawingTile, crop: CGRectMake(-column.width, -row.height, self.frame.width + column.width, self.frame.height + row.height));
to convert this node to a texture.
Now remove all nodes from the drawing node and add a new node to the drawing node that uses this texture.
We now have our background in one node.
When you need to scroll, shift the drawing node, then draw the one row and one column's worth of tiles that are missing. Do this only when the drawing node has shifted so far that you have a gap on the screen to fill in. Use the code above again to create a new drawing-node texture. Rinse, repeat. A rough Objective-C sketch of the idea follows.
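A sketch of that flatten-to-texture idea (the drawingNode name, the 100x100 tile size, and the "bg" tile image are illustrative assumptions; textureFromNode: requires iOS 7+):
SKNode *drawingNode = [SKNode node];
CGSize tile = CGSizeMake(100, 100); // assumed tile size
int cols = (int)ceil(self.frame.size.width / tile.width) + 1;
int rows = (int)ceil(self.frame.size.height / tile.height) + 1;
for (int r = 0; r < rows; r++) {
    for (int c = 0; c < cols; c++) {
        SKSpriteNode *t = [SKSpriteNode spriteNodeWithImageNamed:@"bg"];
        t.anchorPoint = CGPointZero;
        t.position = CGPointMake(c * tile.width, r * tile.height);
        [drawingNode addChild:t];
    }
}
[self addChild:drawingNode];
// Flatten the visible tiles into one texture, then replace the many tile
// sprites with a single sprite so the scene draws one node instead of hundreds.
SKTexture *flattened = [self.view textureFromNode:drawingNode];
[drawingNode removeAllChildren];
SKSpriteNode *flatSprite = [SKSpriteNode spriteNodeWithTexture:flattened];
flatSprite.anchorPoint = CGPointZero;
[drawingNode addChild:flatSprite];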

Box2d fixture and body out of sync on retina display

I'm trying to make a cocos2d/box2d game work on iPad, iPhone and iPhone retina.
My problem is that the fixture and body don't line up on the retina simulator; please click on the screenshots below for illustration (as a new Stack Overflow member, it won't allow me to post the screenshot here).
screenshot
(please disregard the different shapes, I want the 4 corners to line up)
I've done quite a bit of research on this over the last couple of days, and the closest I found was this:
link
But the solution offered there with PTM_RATIO and CC_CONTENT_SCALE_FACTOR() doesn't seem to work in my case. I think it has to do with the fact that I don't load an image from file into my sprite. Most solutions to this problem are based on loading -hd image files for the retina display, but I don't want to use files in my game at all. I basically want to draw the polygons myself at runtime.
My code looks as follows:
-(CCSprite*)addSprite
{
    CGSize contextsize = CGSizeMake(200, 200);
    UIGraphicsBeginImageContext(contextsize);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextFlush(context);
    CGContextSetAllowsAntialiasing(context, true);
    CGContextTranslateCTM(context, 0, contextsize.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGFloat components[] = {0.0, 0.0, 1.0, 1.0};
    CGColorRef color = CGColorCreate(colorspace, components);
    CGContextSetStrokeColorWithColor(context, color);
    UIBezierPath* aPath;
    aPath = [UIBezierPath bezierPathWithArcCenter:CGPointMake(100, 100)
                                           radius:100
                                       startAngle:0
                                         endAngle:1.57
                                        clockwise:YES];
    [aPath addArcWithCenter:CGPointMake(100, 100)
                     radius:50
                 startAngle:1.57
                   endAngle:0
                  clockwise:NO];
    [aPath stroke];
    CGContextStrokePath(context);
    CGColorSpaceRelease(colorspace);
    CGColorRelease(color);
    UIImage *graphImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    CCTexture2D *tex = [[[CCTexture2D alloc] initWithImage:graphImage] autorelease];
    CCSprite *sprite = [CCSprite spriteWithTexture:tex];
    return sprite;
}
-(void) addFixture:(CCSprite *)fixsprite
{
    b2Vec2 arcdots[] = {
        b2Vec2(50.0f / PTM_RATIO, 0.0f / PTM_RATIO),
        b2Vec2(100.0f / PTM_RATIO, 0.0f / PTM_RATIO),
        b2Vec2(0.0f / PTM_RATIO, 100.0f / PTM_RATIO),
        b2Vec2(0.0f / PTM_RATIO, 50.0f / PTM_RATIO)
    };
    b2PolygonShape p_shape;
    b2FixtureDef fixtureDef;
    b2BodyDef bodyDef;
    bodyDef.type = b2_kinematicBody;
    bodyDef.position.Set(100/PTM_RATIO, 100/PTM_RATIO);
    bodyDef.userData = fixsprite;
    b2Body *body = world->CreateBody(&bodyDef);
    p_shape.Set(arcdots, 4);
    fixtureDef.shape = &p_shape;
    fixtureDef.density = 1.0f;
    fixtureDef.friction = 0.3f;
    body->CreateFixture(&fixtureDef);
}
And I call these functions from the main routine as follows:
CCSprite *sprite2 = [self addSprite];
sprite2.position = ccp(0, 0);
[self addChild:sprite2 z:0];
[self addFixture:sprite2];
I have these lines uncommented in the delegate file:
if( ! [director enableRetinaDisplay:YES] )
    CCLOG(@"Retina Display Not supported");
Please let me know if further information is required. And please be gentle, I'm only starting to learn this. Thanks for your time.
Unless otherwise mentioned, all coordinates in cocos2d (and most of UIKit) are given in points, not pixels. On a Retina display device you still have a point resolution of 480x320 points (960x640 pixels).
From that it follows: when you calculate in actual pixels, multiply or divide by CC_CONTENT_SCALE_FACTOR(). If you deal with point coordinates, do nothing. Since you're rendering your own polys, I assume you know whether you use actual pixel coordinates or not. If you use OpenGL directly, then you'll be working with pixel coordinates.
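For example (illustrative variable names):
// points -> pixels; CC_CONTENT_SCALE_FACTOR() is 2.0 on retina, 1.0 otherwise
float pixelX = pointX * CC_CONTENT_SCALE_FACTOR();
// pixels -> points
float pointY = pixelY / CC_CONTENT_SCALE_FACTOR();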
I'm not sure if enabling Retina display mode does anything for you if you don't use cocos2d to render your content.
Lastly, a common misunderstanding is that the Box2D world is using point coordinates and must be transformed to pixels or vice versa. Neither is the case. The Box2D world is completely oblivious to a specific coordinate system. The use of PTM_RATIO is done only to ensure that Box2D coordinates are within reasonable ranges for the Box2D engine, since it works best with objects that are 1 meter in size/diameter, and most objects should range from 0.1 to 10 meters in diameter.
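Concretely (a PTM_RATIO of 32 is the usual cocos2d template default; the numbers are illustrative):
// A 64-point-wide object maps to a 2-meter-wide body, comfortably inside
// Box2D's preferred 0.1 to 10 meter range:
float widthInPoints = 64.0f;
float widthInMeters = widthInPoints / PTM_RATIO; // 64 / 32 = 2.0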

How can I crop an Image with mask and combine it with another image (background) on iPhone? (OpenGL ES 1.1 is preferred)

I need to combine three images in the way I show in the attached file:
1) One image is the background. It is 'solid' in the sense that it has no alpha channel.
2) Another is the sprite. The sprite lies on top of the background. The sprite may have its own alpha channel, and the background has to be visible in places where the sprite is transparent.
3) There are a number of masks: I apply a new mask to the sprite every frame. The mask isn't rectangular.
In other words, the visible pixel is:
the pixel of the background, if the corresponding mask color is white OR the sprite is transparent;
the pixel of the sprite otherwise (for example, where the corresponding mask pixel is black).
I'm working with cocos2d-iphone. Can I make such a combination with cocos2d-iphone or with OpenGL ES 1.1? If either answer is YES, working code would be appreciated. If both answers are NO, is there another technology on iOS that can do what I want (maybe Quartz2D or OpenGL ES 2.0)?
The mask format doesn't have to be black for the sprite and white for the background. I can produce the mask in whatever format is required, such as transparent for the background and white for the sprite, if needed.
P.S. There's another unanswered question of the same kind:
Possible to change the alpha value of certain pixels on iPhone?
Here is my answer for OpenGL. The procedure would be very different for Quartz.
The actual code is pretty simple, but getting it exactly right is the tricky part. I am using a GL context that is 1024x1024 with the origin in the bottom left. I'm not posting my drawing code because it uses immediate mode, which isn't available in OpenGL ES. If you want my drawing code, let me know and I'll update my answer.
Draw the mask with blending disabled.
Enable blending, set glBlendFunc(GL_DST_COLOR, GL_ZERO) and draw the bleed-through texture. My mask is white where it should bleed through. In your question it was black.
Now, to draw the background, set the blend function to glBlendFunc(GL_ONE_MINUS_DST_COLOR, GL_DST_COLOR) and draw the background texture.
In blend-equation terms: after the first pass the framebuffer holds the mask; the second pass computes dest = sprite x mask; the third pass computes dest = background x (1 - dest) + dest x dest.
EDIT: Here is the code I describe above. Please note that this will not work on iOS since there is no immediate mode, but you should be able to get it working in a Macintosh project. Once that is working, you can convert it to something iOS-compatible in the Macintosh project and then move that code over to your iOS project.
The renderMask() call is where the most interesting part is. renderTextures() draws the sample textures in the top row.
static GLuint color_texture;
static GLuint mask_texture;
static GLuint background_texture;
static float window_size[2];
void renderMask()
{
    float texture_x = 0, texture_y = 0;
    float x = 0, y = 0;
    {
        glBindTexture(GL_TEXTURE_2D, mask_texture);
        glDisable(GL_BLEND);
        glBegin(GL_QUADS);
        glTexCoord2f(texture_x, texture_y);
        glVertex2f(x, y);
        glTexCoord2f(texture_x + 1.0, texture_y);
        glVertex2f(x + 512.0, y);
        glTexCoord2f(texture_x + 1.0, texture_y + 1.0);
        glVertex2f(x + 512.0, y + 512.0);
        glTexCoord2f(texture_x, texture_y + 1.0);
        glVertex2f(x, y + 512.0);
        glEnd();
    }
    {
        glBindTexture(GL_TEXTURE_2D, color_texture);
        glEnable(GL_BLEND);
        glBlendFunc(GL_DST_COLOR, GL_ZERO);
        glBegin(GL_QUADS);
        glTexCoord2f(texture_x, texture_y);
        glVertex2f(x, y);
        glTexCoord2f(texture_x + 1.0, texture_y);
        glVertex2f(x + 512.0, y);
        glTexCoord2f(texture_x + 1.0, texture_y + 1.0);
        glVertex2f(x + 512.0, y + 512.0);
        glTexCoord2f(texture_x, texture_y + 1.0);
        glVertex2f(x, y + 512.0);
        glEnd();
    }
    {
        glBindTexture(GL_TEXTURE_2D, background_texture);
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE_MINUS_DST_COLOR, GL_DST_COLOR);
        glBegin(GL_QUADS);
        glTexCoord2f(texture_x, texture_y);
        glVertex2f(x, y);
        glTexCoord2f(texture_x + 1.0, texture_y);
        glVertex2f(x + 512.0, y);
        glTexCoord2f(texture_x + 1.0, texture_y + 1.0);
        glVertex2f(x + 512.0, y + 512.0);
        glTexCoord2f(texture_x, texture_y + 1.0);
        glVertex2f(x, y + 512.0);
        glEnd();
    }
}
// Draw small versions of the textures.
void renderTextures()
{
    float texture_x = 0, texture_y = 0;
    float x = 0, y = 532.0;
    float size = 128;
    {
        glBindTexture(GL_TEXTURE_2D, mask_texture);
        glDisable(GL_BLEND);
        glBegin(GL_QUADS);
        glTexCoord2f(texture_x, texture_y);
        glVertex2f(x, y);
        glTexCoord2f(texture_x + 1.0, texture_y);
        glVertex2f(x + size, y);
        glTexCoord2f(texture_x + 1.0, texture_y + 1.0);
        glVertex2f(x + size, y + size);
        glTexCoord2f(texture_x, texture_y + 1.0);
        glVertex2f(x, y + size);
        glEnd();
    }
    {
        glBindTexture(GL_TEXTURE_2D, color_texture);
        x = size + 16;
        glBegin(GL_QUADS);
        glTexCoord2f(texture_x, texture_y);
        glVertex2f(x, y);
        glTexCoord2f(texture_x + 1.0, texture_y);
        glVertex2f(x + size, y);
        glTexCoord2f(texture_x + 1.0, texture_y + 1.0);
        glVertex2f(x + size, y + size);
        glTexCoord2f(texture_x, texture_y + 1.0);
        glVertex2f(x, y + size);
        glEnd();
    }
    {
        glBindTexture(GL_TEXTURE_2D, background_texture);
        x = size*2 + 16*2;
        glBegin(GL_QUADS);
        glTexCoord2f(texture_x, texture_y);
        glVertex2f(x, y);
        glTexCoord2f(texture_x + 1.0, texture_y);
        glVertex2f(x + size, y);
        glTexCoord2f(texture_x + 1.0, texture_y + 1.0);
        glVertex2f(x + size, y + size);
        glTexCoord2f(texture_x, texture_y + 1.0);
        glVertex2f(x, y + size);
        glEnd();
    }
}
void init()
{
    GLdouble bounds[4];
    glGetDoublev(GL_VIEWPORT, bounds);
    window_size[0] = bounds[2];
    window_size[1] = bounds[3];
    glClearColor(0.0, 0.0, 0.0, 1.0);
    glShadeModel(GL_SMOOTH);
    // Load our textures...
    color_texture = [[NSImage imageNamed:@"colors"] texture];
    mask_texture = [[NSImage imageNamed:@"mask"] texture];
    background_texture = [[NSImage imageNamed:@"background"] texture];
    // Enable alpha blending. We'll learn more about this later
    glEnable(GL_BLEND);
    glEnable(GL_TEXTURE_2D);
}
void draw()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glColor3f(1.0, 1.0, 1.0);
    renderMask();
    renderTextures();
}
void reshape(int width, int height)
{
    glViewport(0, 0, width, height);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(0.0, width, 0.0, height);
    glMatrixMode(GL_MODELVIEW);
    window_size[0] = width;
    window_size[1] = height;
}
This shows my three textures drawn normally (mask, bleed-through, and background) and then combined below.

Distortion with 'pixel accurate' OpenGL rendering of sprites

To define what I'm trying to do: I want to be able to take an arbitrary 'sprite' image from a power-of-two-sized PNG, and display just the pixels of interest at a given x/y position on screen.
My results are the problem: major distortion - it looks awful! (Note these screenshots are from the iPhone simulator, but on a real retina device they appear the same: junky.) Here is a screenshot of the source PNG in Preview, which looks wonderful (any variation on rendering that I describe in this question looks almost exactly like the junky one).
Previously, I asked a question about displaying a non-power-of-two texture as a sprite using OpenGL ES 2.0 (although this applies to any OpenGL). I'm close, but I have some issues that I can't resolve. I think there are probably multiple bugs: I suspect I'm basically aliasing what I'm displaying by rendering large then squashing 2x, or vice versa, but I can't see it. Additionally, there are off-by-one errors and I cannot get a handle on them. I can't visually identify them occurring, but I know for sure they're there.
I'm working in 960 x 640 landscape (on an iPhone 4 retina display). So I expect 0->959 to move left to right, and 0->639 to move bottom to top. (And I think I'm seeing the opposite of this - but that's not what this question is about.)
To make things easy what I'm trying to achieve in this test case is a FULL SCREEN 960x640 display of a PNG file. Just one of them. I display a red background first so that it's obvious if I'm covering the screen or not.
Update: I realized the glViewport inside the setFramebuffer call was setting my width and height backwards. I noticed this because when I set my geometry to draw from 0,0 to 100,100 it drew a rectangle, not a square. When I swapped them, that call does draw a square. However, using that same call, my entire screen fills with a vertex range of 0,0 -> 480,320 (half 'resolution'); I don't understand that. And no matter where I push on from this, I'm still not getting a good-looking result.
Here's my vertex shader:
attribute vec4 a_position;
attribute vec2 a_texCoord;
varying vec2 v_texCoord;
// Gives 'landscape' full screen..
mat4 projectionMatrix = mat4( 2.0/640.0, 0.0,       0.0,  -1.0,
                              0.0,       2.0/960.0, 0.0,  -1.0,
                              0.0,       0.0,       -1.0,  0.0,
                              0.0,       0.0,       0.0,   1.0);
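// (Note: the mat4 constructor fills columns, not rows, so written this way
// the rows above actually become columns; the row-vector multiply
// gl_Position *= projectionMatrix in main() compensates for that.)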
// Gives a 1/4 of screen.. (not doing 2.0/.. was suggested in previous SO Q)
/*mat4 projectionMatrix = mat4( 1.0/640.0, 0.0,       0.0,  -1.0,
                               0.0,       1.0/960.0, 0.0,  -1.0,
                               0.0,       0.0,       -1.0,  0.0,
                               0.0,       0.0,       0.0,   1.0); */
// Apply the projection matrix to the position and pass the texCoord
void main()
{
    gl_Position = a_position;
    gl_Position *= projectionMatrix;
    v_texCoord = a_texCoord;
}
Here's my fragment shader:
precision mediump float;
varying vec2 v_texCoord;
uniform sampler2D s_texture;
void main()
{
    gl_FragColor = texture2D(s_texture, v_texCoord);
}
Here's my draw code:
#define MYWIDTH 960.0f
#define MYHEIGHT 640.0f
// I have to refer to 'X' as height although I'd assume I'd use 'Y' here..
// I think my X and Y throughout this whole block of code are screwed up.
// But I have experimented with flipping them all and verified that if they
// are changed from the way they're set now (swapping X and Y), things
// end up turned the wrong way. So this is a mess, but unlikely to be my problem.
#define BG_X_ORIGIN 0.0f
// ALSO NOTE HERE: I have to put my 'dest' at 640.0f.. --- see note [1] below
#define BG_X_DEST 640.0f
#define BG_Y_ORIGIN 0.0f
// --- see note [1] below
#define BG_Y_DEST 960.0f
// These are texturing coordinates, I texture starting at '0' px and then
// I calculate a percentage of the texture to use based on how many pixels I use
// divided by the actual size of the image (1024x1024)
#define BG_X_ZERO 0.0f
#define BG_Y_USEPERCENTAGE BG_X_DEST / 1023.0f
#define BG_Y_ZERO 0.0f
#define BG_X_USEPERCENTAGE BG_Y_DEST / 1023.0f
// glViewport(0, 0, MYWIDTH, MYHEIGHT);
// See note 2.. it sets glViewport basically, provided by Xcode project template
[(EAGLView *)self.view setFramebuffer];
// Big hack just to get things going - like I said before, these could be backwards
// w/respect to X and Y
static const GLfloat backgroundVertices[] = {
    BG_X_ORIGIN, BG_Y_ORIGIN,
    BG_X_DEST,   BG_Y_ORIGIN,
    BG_X_ORIGIN, BG_Y_DEST,
    BG_X_DEST,   BG_Y_DEST
};
static const GLfloat backgroundTexCoords[] = {
    BG_X_ZERO,          BG_Y_USEPERCENTAGE,
    BG_X_USEPERCENTAGE, BG_Y_USEPERCENTAGE,
    BG_X_ZERO,          BG_Y_ZERO,
    BG_X_USEPERCENTAGE, BG_Y_ZERO
};
// Turn on texturing
glEnable(GL_TEXTURE_2D);
// Clear to RED so that it's obvious when I'm not drawing my sprite on screen
glClearColor(1.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
// Texturing parameters - these make sense.. don't think they are the issue
glActiveTexture(GL_TEXTURE0);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_NEAREST);//GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_NEAREST);//GL_LINEAR);
// Update attribute values.
glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, backgroundVertices);
glEnableVertexAttribArray(ATTRIB_VERTEX);
glVertexAttribPointer(ATTRIB_TEXCOORD, 2, GL_FLOAT, 0, 0, backgroundTexCoords);
glEnableVertexAttribArray(ATTRIB_TEXCOORD);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, background->textureId);
// I don't understand what this uniform does in the texture2D call in the shader.
// (Sampler uniforms take the texture unit index and must be set with glUniform1i.)
glUniform1i(uniforms[UNIFORM_SAMPLERLOC], 0);
// Draw the geometry...
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// present the framebuffer see note [3]
[(EAGLView *)self.view presentFramebuffer];
Note [1]:
If I set BG_X_DEST to 639.0f I do not get full coverage of the 640 pixels; I get red showing through on the right-hand side. But this doesn't make sense to me: I'm aiming for pixel-perfect, and I have to draw my sprite geometry from 0 to 640, which is 641 pixels when I only have 640! (red line appearing with 639f instead of 640f)
And if I set BG_Y_DEST to 959.0f I do not get the red line showing through.
(red line top bug appearing with 958f instead of 960f or 959f)
This may be a good clue as to what bug(s) I have going on.
Note: [2] - included in the OpenGL ES 2 framework by Xcode
- (void)setFramebuffer
{
    if (context)
    {
        [EAGLContext setCurrentContext:context];
        if (!defaultFramebuffer)
            [self createFramebuffer];
        glBindFramebuffer(GL_FRAMEBUFFER, defaultFramebuffer);
        glViewport(0, 0, framebufferWidth, framebufferHeight);
    }
}
Note [3]: - included in the OpenGL ES 2 framework by Xcode
- (BOOL)presentFramebuffer
{
    BOOL success = FALSE;
    if (context)
    {
        [EAGLContext setCurrentContext:context];
        glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
        success = [context presentRenderbuffer:GL_RENDERBUFFER];
    }
    return success;
}
Note [4] - relevant image-loading code. (I have used PNGs with and without an alpha channel, and it doesn't seem to make any difference. I also tried changing my code to be ARGB instead of RGBA, and that's wrong: since A = 1.0 everywhere, I get a very RED image, which makes me think the RGBA is in fact valid and this code is right.) Update: I have switched this texture loading to a completely different setup using CG/ImageIO calls and it looks identical to this, so I assume it's not some kind of aliasing or color compression done by the image libraries (unless they both go down to the same fundamental calls, which is possible).
// Otherwise it isn't already loaded
glEnable(GL_TEXTURE_2D);
glActiveTexture(GL_TEXTURE0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); //GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); //GL_LINEAR);
// TODO Next 2 can prob go later on..
glGenTextures(1, &(newTexture->textureId)); // generate Texture
// Use this before 'drawing' the texture to the memory...
glBindTexture(GL_TEXTURE_2D, newTexture->textureId);
NSString *path = [[NSBundle mainBundle]
    pathForResource:[NSString stringWithUTF8String:newTexture->filename.c_str()] ofType:@"png"];
NSData *texData = [[NSData alloc] initWithContentsOfFile:path];
UIImage *image = [[UIImage alloc] initWithData:texData];
if (image == nil)
    NSLog(@"Do real error checking here");
newTexture->width = CGImageGetWidth(image.CGImage);
newTexture->height = CGImageGetHeight(image.CGImage);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
void *imageData = malloc(newTexture->height * newTexture->width * 4);
CGContextRef myContext = CGBitmapContextCreate
    (imageData, newTexture->width, newTexture->height, 8, 4 * newTexture->width, colorSpace,
     kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextClearRect(myContext, CGRectMake(0, 0, newTexture->width, newTexture->height));
CGContextDrawImage(myContext, CGRectMake(0, 0, newTexture->width, newTexture->height), image.CGImage);
// Texture is created!
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, newTexture->width, newTexture->height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, imageData);
CGContextRelease(myContext);
free(imageData);
[image release];
[texData release];
[(EAGLView *)self.view setContentScaleFactor:2.0f];
By default, iPhone views are scaled to reach their high-resolution modes, which was destroying my image quality.
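A sketch of where that call fits, assuming the standard Xcode OpenGL ES template (EAGLView and createFramebuffer are template code; everything here is illustrative):
// Set the scale factor before the framebuffer is created, so the
// renderbuffer is allocated at 960x640 pixels instead of 480x320.
EAGLView *glView = (EAGLView *)self.view;
glView.contentScaleFactor = [UIScreen mainScreen].scale; // 2.0 on retina
// The template's createFramebuffer then picks up the doubled
// framebufferWidth/framebufferHeight from the layer automatically.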
Thanks for all the help, folks!

OpenGL paint program based on Apple's 'glPaint' on a white background - how to blend?

Trying to write a simple paint program for iPhone, and I'm using Apple's glPaint sample as a guide. The only problem is, painting doesn't work on a white background, since white + colour = white. I've tried different blending functions, but haven't been able to hit on the right combination of settings and/or brushes to make this work. I've seen similar posts about this problem but no answers. Does anyone know how this might work?
Edit:
I don't even really need transparency effects; at this point, if I could draw solid lines with rounded ends I'd be happy.
I got white backgrounds working (using the default GLPaint code) by just changing the clear colour in the erase method, i.e.:
- (void) erase
{
    [EAGLContext setCurrentContext:context];
    // Clear the buffer
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
    //glClearColor(0.0, 0.0, 0.0, 0.0);
    glClearColor(1.0, 1.0, 1.0, 0.0); // Change to white
    glClear(GL_COLOR_BUFFER_BIT);
    // Display the buffer
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
    [context presentRenderbuffer:GL_RENDERBUFFER_OES];
}
The default blend function and brush image seem to just work.
Rather than adding the colour to the blend, could you subtract its opposite? This is roughly how paint and light work in real life, and should give the correct functionality.
Ex: If the user is painting in Red:255 Green:0 Blue:100 Opacity:0.5, you should do this to the pixel:
pixel.red -= (255-paint.red) * paint.opacity; //Subtract 0
pixel.green -= (255-paint.green) * paint.opacity; //Subtract 127.5
pixel.blue -= (255-paint.blue) * paint.opacity; //Subtract 77.5
EDIT: As you pointed out, this is not what is expected, as painting over full blue with full red will go to black, since they subtract each other.
A possible fix to this would be to combine the additive and subtractive approach.
For instance if you added 0.5*paint.colour and subtracted 0.5*paint.complementaryColour, adding full red to full blue would result in:
newPixel.red -> 0 + 127.5 - 0 = 127.5
newPixel.green -> 0 + 0 - 127.5 = 0 //Cap it off, or invent new math =D
newPixel.blue -> 255 + 0 - 127.5 = 127.5
As you can see, this results in a nice purple colour, which is the combination of blue and red. You can tweak the proportion of additive to subtractive logic to simulate how well the paint mixes.
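As a concrete sketch of that combined approach (plain C on normalized 0-1 channels; mixChannel and clamp01 are hypothetical helpers, not part of the original answer):
static float clamp01(float v) { return v < 0.0f ? 0.0f : (v > 1.0f ? 1.0f : v); }
// Adds half the paint colour and subtracts half of its complement,
// scaled by opacity; this reproduces the worked example above.
static float mixChannel(float pixel, float paint, float opacity)
{
    return clamp01(pixel + 0.5f * opacity * paint
                         - 0.5f * opacity * (1.0f - paint));
}
// Full red over full blue with opacity 1.0 yields (0.5, 0.0, 0.5): purple.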
Hope that helps! =)
Yeah, I had the same issue. The edges of the brush were darker than they should be. It turns out that Apple's API premultiplies the alpha into the RGB channels.
So I countered that by making a grayscale brush in Photoshop with just RGB values and no alpha channel. This should look the way you want your brush to be, with white representing full color pigmentation and black representing no color pigmentation.
I load that brush the way it's done in Apple's GLPaint sample code. I then copy the R channel (or the G or B channel, as they are all equal) into the alpha channel of the texture. After that I set the R-G-B values to maximum for all pixels of the brush texture.
So now your brush has an alpha channel describing exactly how your brush looks, and the RGB channels are all 1.
Finally I used the blending function:
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
And don't forget to set the color before you draw.
glColor4f(1.0f,0.0f,0.0f,1.0f); //red color
Check out the code below, see if it works for you:
-(GLuint) createBrushWithImage:(NSString*)brushName
{
    GLuint brushTexture;
    CGImageRef brushImage;
    CGContextRef brushContext;
    GLubyte *brushData, *brushData1;
    size_t width, height;
    // Initialize the brush image
    brushImage = [UIImage imageNamed:brushName].CGImage;
    // Get the width and height of the image
    width = CGImageGetWidth(brushImage);
    height = CGImageGetHeight(brushImage);
    // Make the brush texture and context
    if(brushImage) {
        // Allocate memory needed for the bitmap context
        brushData = (GLubyte *) calloc(width * height * 4, sizeof(GLubyte));
        // We are going to use brushData1 to make the final texture
        brushData1 = (GLubyte *) calloc(width * height * 4, sizeof(GLubyte));
        // Use the bitmap creation function provided by the Core Graphics framework.
        brushContext = CGBitmapContextCreate(brushData, width, height, 8, width * 4, CGImageGetColorSpace(brushImage), kCGImageAlphaPremultipliedLast);
        // After you create the context, you can draw the image into the context.
        CGContextDrawImage(brushContext, CGRectMake(0.0f, 0.0f, (CGFloat)width, (CGFloat)height), brushImage);
        // You don't need the context at this point, so release it to avoid memory leaks.
        CGContextRelease(brushContext);
        for(int i = 0; i < width * height; i++) {
            // Set the R-G-B channels to maximum
            brushData1[i*4] = brushData1[i*4+1] = brushData1[i*4+2] = 0xff;
            // Store the originally loaded brush image in the alpha channel
            brushData1[i*4+3] = brushData[i*4];
        }
        // Use OpenGL ES to generate a name for the texture.
        glGenTextures(1, &brushTexture);
        // Bind the texture name.
        glBindTexture(GL_TEXTURE_2D, brushTexture);
        // Set the texture parameters to use a minifying filter and a linear filter (weighted average)
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        // Specify a 2D texture image, providing a pointer to the image data in memory
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, brushData1);
        // Release the image data; it's no longer needed
        free(brushData1);
        free(brushData);
    }
    return brushTexture;
}
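Usage is then just (assuming GLPaint's stock Particle.png brush image; substitute your own):
GLuint brushTexture = [self createBrushWithImage:@"Particle.png"];
glBindTexture(GL_TEXTURE_2D, brushTexture);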
I ran into something similar. The following blending function call solved it for me without any complicated math.
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_BLEND);
Add this before your glDraw calls and you should be able to draw with any texture as a brush.
Actually this works even better:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
(That is the standard blend function for premultiplied alpha, which is how Core Graphics-loaded textures store their data, so it also avoids the dark-edge artifact described above.)
The GLPaint sample code uses glBlendFunc(GL_SRC_ALPHA, GL_ONE) in the function - (id)initWithCoder:(NSCoder*)coder, so while the background color is white, none of the other colors can be seen.
I want to solve it too.
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
This is the best answer. You also have to change the brush picture: it must have a transparent background and a white ellipse.