I am grabbing frames from the iPhone camera in the kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange format (YUV, bi-planar).
I intend to process the Y channel, so I grab it using:
CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer( sampleBuffer );
CVPixelBufferLockBaseAddress( pixelBuffer, 0 );
size_t bufferHeight = CVPixelBufferGetHeight(pixelBuffer); // these return size_t, not int
size_t bufferWidth = CVPixelBufferGetWidth(pixelBuffer);
uint8_t *y_channel = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
The problem is that the y_channel pixels appear rotated and mirrored. I draw them on an overlay layer to see what they look like:
- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)context
{
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
CGContextRef bitmapContext = CGBitmapContextCreate(rotated, // 'rotated' is my buffer holding the copied Y-plane bytes
imageSize->x,
imageSize->y,
8, // bitsPerComponent
1*imageSize->x, // bytesPerRow
colorSpace,
kCGImageAlphaNone);
CFRelease(colorSpace);
CGImageRef cgImage = CGBitmapContextCreateImage(bitmapContext);
CGContextDrawImage(context, CGRectMake(0, 0, imageSize->x/2, imageSize->y/2), cgImage);
CFRelease(cgImage);
CFRelease(bitmapContext);
}
I have considered looping through the pixels to create a fixed version of the image, but I am wondering if there is a way to get the y_channel in the correct orientation (i.e., not rotated by 90 degrees) straight from the camera.
I don't believe there is a way to alter the orientation of the Y plane coming from the camera, but that shouldn't matter that much in your processing, because you should be able to work with it just fine in its native orientation. If you know that it's rotated 90 degrees, simply tweak your processing to work with it at that rotation.
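For example, here is a minimal sketch of reading luminance values at the native orientation. This assumes the plane appears rotated 90 degrees clockwise relative to how you want to display it, and it uses the plane's own bytes-per-row, since rows may be padded beyond the width:
size_t bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
// Luminance of the pixel that would appear at display position (x, y)
// after rotating the native buffer 90 degrees clockwise; adjust the
// index mapping to match whatever rotation you actually observe:
uint8_t luma = y_channel[(bufferHeight - 1 - x) * bytesPerRow + y];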
Also, I believe the mirroring you see is due to your drawing into the Core Graphics coordinate space, where the origin is in the lower left. On the iPhone, CALayers that back UIViews get flipped so that the origin is in the upper left, which can cause images drawn using Quartz in these layers to be inverted.
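If that's what's happening, the usual fix is to flip the context before drawing. A quick sketch, reusing your layer and cgImage:
CGContextSaveGState(context);
CGContextTranslateCTM(context, 0, CGRectGetHeight(layer.bounds));
CGContextScaleCTM(context, 1.0, -1.0); // flip vertically so the origin behaves as top-left
CGContextDrawImage(context, CGRectMake(0, 0, imageSize->x/2, imageSize->y/2), cgImage);
CGContextRestoreGState(context);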
For display, I would recommend not doing Quartz drawing like you show here, but instead use the Y channel image as an OpenGL ES texture. This will be far more performant. Also, you can simply specify the correct texture coordinates to automatically deal with any image rotation you want in a hardware-accelerated manner.
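As a rough sketch of the idea (these arrays are illustrative, not from any sample): for a quad drawn as a triangle strip, a 90-degree rotation is just a reordering of the texture coordinates:
// Quad corners in the order bottom-left, bottom-right, top-left, top-right
static const GLfloat squareVertices[] = {
    -1.0f, -1.0f,
     1.0f, -1.0f,
    -1.0f,  1.0f,
     1.0f,  1.0f,
};
// Unrotated coordinates would be (0,0) (1,0) (0,1) (1,1);
// this ordering samples the texture rotated by 90 degrees:
static const GLfloat rotatedTextureCoordinates[] = {
    1.0f, 0.0f,
    1.0f, 1.0f,
    0.0f, 0.0f,
    0.0f, 1.0f,
};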
I describe how to do hardware-accelerated image processing on iOS devices using OpenGL ES 2.0 here, and I provide a sample application that does this processing using shaders and draws the result to the screen in a texture. I'm working with the BGRA colorspace in that example, but you should be able to pull out the Y channel and use it as a luminance texture in the same way.
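Uploading the Y plane as a luminance texture might look like this (a sketch; it assumes the plane's rows aren't padded, otherwise repack the rows first):
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, (GLsizei)bufferWidth, (GLsizei)bufferHeight, 0,
             GL_LUMINANCE, GL_UNSIGNED_BYTE, y_channel);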
Related
Hi, is there any way to change an image from thin to fat and vice versa in the iPhone SDK?
In this application I want to give the user the ability to change their image from regular size to fat size by moving a slider.
I think this can be done by manipulating the pixels of the image. I have tried the following code to get at the pixels, but it just removes the colors from the image.
UIImage *image = [UIImage imageNamed:@"foo.png"];
CGImageRef imageRef = image.CGImage;
NSData *data = (NSData *)CGDataProviderCopyData(CGImageGetDataProvider(imageRef));
char *pixels = (char *)[data bytes]; // note: [data bytes] is const; strictly, copy to a mutable buffer before modifying
// this is where you manipulate the individual pixels
// assumes a 4 byte pixel consisting of rgb and alpha
// for PNGs without transparency use i+=3 and remove int a
for(int i = 0; i < [data length]; i += 4)
{
int r = i;
int g = i+1;
int b = i+2;
int a = i+3;
pixels[r] = pixels[r]; // no-op placeholders: assign new values here,
pixels[g] = pixels[g]; // e.g. pixels[r] = 0 would remove red
pixels[b] = pixels[b];
pixels[a] = pixels[a];
}
// create a new image from the modified pixel data
size_t width = CGImageGetWidth(imageRef);
size_t height = CGImageGetHeight(imageRef);
size_t bitsPerComponent = CGImageGetBitsPerComponent(imageRef);
size_t bitsPerPixel = CGImageGetBitsPerPixel(imageRef);
size_t bytesPerRow = CGImageGetBytesPerRow(imageRef);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, pixels, [data length], NULL);
CGImageRef newImageRef = CGImageCreate (
width,
height,
bitsPerComponent,
bitsPerPixel,
bytesPerRow,
colorspace,
bitmapInfo,
provider,
NULL,
false,
kCGRenderingIntentDefault
);
// the modified image
UIImage *newImage = [UIImage imageWithCGImage:newImageRef];
imgView.image = newImage;
I have also tried stretching the image with this code:
UIImage *stretchImage = [image stretchableImageWithLeftCapWidth:10 topCapHeight:10];
Can anybody help me? I haven't found any framework or SDK that provides this kind of functionality, and I have googled for a long time.
As it happens, I'm working on something almost exactly like what you describe for our company right now. We're tackling faces, which is an easier problem to do realistically.
Here is a before image (of yours truly)
And here it is with a "fat face" transformation:
What I did is to import the image into an OpenGL texture, and then apply that texture to a mesh grid. I apply a set of changes to the mesh grid, squeezing certain points closer together and stretching others further apart. Getting a realistic fat face took a lot of fine-tuning, but the results are quite good, I think.
Once I have the mesh calculated, OpenGL does the "heavy lifting" of transforming the image, and very fast. After everything is set up, actually drawing the transformed image is a single call.
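To give a flavor of the approach, here is an illustrative sketch (not our production code; the grid size and the 0.3 bulge strength are made-up numbers) of building such a mesh with a smooth bulge around the center:
#define GRID 16 // cells per side (arbitrary)
GLfloat positions[(GRID + 1) * (GRID + 1) * 2];
GLfloat texCoords[(GRID + 1) * (GRID + 1) * 2];
for (int j = 0; j <= GRID; j++) {
    for (int i = 0; i <= GRID; i++) {
        float u = (float)i / GRID, v = (float)j / GRID;
        int n = (j * (GRID + 1) + i) * 2;
        // Texture coordinates stay on the regular grid...
        texCoords[n] = u; texCoords[n + 1] = v;
        // ...while vertex positions are pushed away from the center,
        // stretching the image there; a Gaussian falloff keeps it smooth.
        float dx = u - 0.5f, dy = v - 0.5f;
        float k = 1.0f + 0.3f * expf(-(dx * dx + dy * dy) * 8.0f);
        positions[n]     = (0.5f + dx * k) * 2.0f - 1.0f; // map to clip space
        positions[n + 1] = (0.5f + dy * k) * 2.0f - 1.0f;
    }
}
// Draw the cells as triangles sampling the photo texture; once the mesh is
// computed, redrawing the transformed image is a single draw call.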
Here is the same image, showing the grid lines:
Fat face with grid lines http://imageshack.com/a/img404/8049/afterwithlines.jpg
It took me a couple of weeks of full-time work to get the basic mesh warping working (for a client project that fell through) and then another week or so part time to get the fat face layout fine-tuned. Getting the whole thing into a salable product is still a work in progress.
The algorithm I've come up with is fast enough to apply the fat transformation (and various others) to video input from an iOS camera at the full 30 FPS.
In order to do this sort of image processing you need to understand trig, algebra, pointer math, and transformation matrices, and have a solid understanding of how to write highly optimized code.
I'm using Apple's new Core Image face detection code to find people's facial features in an image, and I use the coordinates of the eyes and mouth as the starting point for my transformations.
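Getting those starting coordinates looks roughly like this (a sketch; CIDetector and CIFaceFeature are the actual Core Image APIs, the variable names are mine):
CIImage *ciImage = [CIImage imageWithCGImage:photo.CGImage];
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                          context:nil
                                          options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh
                                                                              forKey:CIDetectorAccuracy]];
for (CIFaceFeature *face in [detector featuresInImage:ciImage]) {
    if (face.hasLeftEyePosition && face.hasRightEyePosition && face.hasMouthPosition) {
        CGPoint leftEye = face.leftEyePosition;   // anchor points that position
        CGPoint rightEye = face.rightEyePosition; // and scale the warp mesh
        CGPoint mouth = face.mouthPosition;
        // ... center the mesh deformation around these points
    }
}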
Your project is much more ambitious. You would have to do some serious image processing to find all the features, trace their outlines, and then figure out how to transform the image to get a convincing fat effect. You'd need high resolution source images that were posed, lit, and shot under carefully controlled conditions in order to get good results.
It might be too much for the computing power of current iOS devices, and the kind of image recognition you'd need to do to figure out the body parts and how to transform them would be mind-bendingly difficult. You might want to take a look at the open source OpenCV project as a starting point for the image recognition part of the problem.
I'm working on a paint app for iPhone. In my code I'm using an image view that contains an outline image, on top of which I put a CAEAGLLayer for filling colors into the outline image. Now I am taking a screenshot of the OpenGL ES (CAEAGLLayer) rendered content using this function:
- (UIImage*)snapshot:(UIView*)eaglview{
GLint backingWidth1, backingHeight1;
// Bind the color renderbuffer used to render the OpenGL ES view
// If your application only creates a single color renderbuffer which is already bound at this point,
// this call is redundant, but it is needed if you're dealing with multiple renderbuffers.
// Note, replace "_colorRenderbuffer" with the actual name of the renderbuffer object defined in your class.
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
// Get the size of the backing CAEAGLLayer
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth1);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight1);
NSInteger x = 0, y = 0, width = backingWidth1, height = backingHeight1;
NSInteger dataLength = width * height * 4;
GLubyte *data = (GLubyte*)malloc(dataLength * sizeof(GLubyte));
// Read pixel data from the framebuffer
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);
// Create a CGImage with the pixel data
// If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel
// otherwise, use kCGImageAlphaPremultipliedLast
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
ref, NULL, true, kCGRenderingIntentDefault);
// OpenGL ES measures data in PIXELS
// Create a graphics context with the target size measured in POINTS
NSInteger widthInPoints, heightInPoints;
if (NULL != UIGraphicsBeginImageContextWithOptions) {
// On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
// Set the scale parameter to your OpenGL ES view's contentScaleFactor
// so that you get a high-resolution snapshot when its value is greater than 1.0
CGFloat scale = eaglview.contentScaleFactor;
widthInPoints = width / scale;
heightInPoints = height / scale;
UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
}
else {
// On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
widthInPoints = width;
heightInPoints = height;
UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
}
CGContextRef cgcontext = UIGraphicsGetCurrentContext();
// UIKit coordinate system is upside down to GL/Quartz coordinate system
// Flip the CGImage by rendering it to the flipped bitmap context
// The size of the destination area is measured in POINTS
CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);
// Retrieve the UIImage from the current context
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Clean up
free(data);
CFRelease(ref);
CFRelease(colorspace);
CGImageRelease(iref);
return image;}
I combine this screenshot with the outline image using this function:
- (void)Combine:(UIImage *)Back{
UIImage *Front = backgroundImageView.image;
//UIGraphicsBeginImageContext(Back.size);
UIGraphicsBeginImageContext(CGSizeMake(640,960));
// Draw image1
[Back drawInRect:CGRectMake(0, 0, Back.size.width*2, Back.size.height*2)];
// Draw image2
[Front drawInRect:CGRectMake(0, 0, Front.size.width*2, Front.size.height*2)];
UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
UIImageWriteToSavedPhotosAlbum(resultingImage, nil, nil, nil);
UIGraphicsEndImageContext();
}
Then I save this image to the photo album using this function:
-(void)captureToPhotoAlbum {
[self Combine:[self snapshot:self]];
UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Success" message:@"Image saved to Photo Album" delegate:nil cancelButtonTitle:@"OK" otherButtonTitles:nil];
[alert show];
[alert release]; }
The above code works, but the image quality of the screenshot is poor. On the outlines of the brush there is a grayish fringe. I have uploaded a screenshot of my app, which is a combination of OpenGL ES content and a UIImage.
Is there any way to get a Retina-quality screenshot of the OpenGL ES (CAEAGLLayer) content?
Thank you in advance!
I don't believe that resolution is your issue here. If you aren't seeing the grayish outlines on your drawing when it appears on the screen, odds are that you're observing a compression artifact in the saving process. Your image is probably being saved as a lower-quality JPEG image, where artifacts will appear on sharp edges, like the ones in your drawing.
To work around this, Ben Weiss's answer here provides the following code for forcing your image to be saved to the photo library as a PNG:
UIImage* im = [UIImage imageWithCGImage:myCGRef]; // make image from CGRef
NSData* imdata = UIImagePNGRepresentation ( im ); // get PNG representation
UIImage* im2 = [UIImage imageWithData:imdata]; // wrap UIImage around PNG representation
UIImageWriteToSavedPhotosAlbum(im2, nil, nil, nil); // save to photo album
While this is probably the easiest way to address your problem here, you could also try employing multisample antialiasing, as Apple describes in the "Using Multisampling to Improve Image Quality" section of the OpenGL ES Programming Guide for iOS. Depending on how fill-rate limited you are, MSAA might lead to a little bit of slowdown in your application.
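The multisample setup from that guide looks roughly like this (a sketch using the GL_APPLE_framebuffer_multisample extension; the buffer names are placeholders and 4 samples is an assumption):
glGenFramebuffersOES(1, &sampleFramebuffer);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, sampleFramebuffer);
glGenRenderbuffersOES(1, &sampleColorRenderbuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, sampleColorRenderbuffer);
// 4x multisampled color storage backing the offscreen framebuffer
glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER_OES, 4, GL_RGBA8_OES, width, height);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES,
                             GL_RENDERBUFFER_OES, sampleColorRenderbuffer);
// ... draw the scene into sampleFramebuffer, then resolve the samples into
// the on-screen framebuffer before presenting:
glBindFramebufferOES(GL_READ_FRAMEBUFFER_APPLE, sampleFramebuffer);
glBindFramebufferOES(GL_DRAW_FRAMEBUFFER_APPLE, viewFramebuffer);
glResolveMultisampleFramebufferAPPLE();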
You're using kCGImageAlphaPremultipliedLast when you create the CG bitmap context. Although I can't see your OpenGL code, it seems unlikely to me that your OpenGL context is rendering premultiplied alpha. Unfortunately, IIRC, it's not possible to create a non-premultiplied CG bitmap context on iOS (it would be using kCGImageAlphaLast, but I think that'll just make the creation call fail), so you may need to premultiply the data by hand between getting it from OpenGL and making the CG context.
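A manual premultiplication pass over the glReadPixels data might look like this (a sketch reusing the data and dataLength variables from your snapshot method):
for (NSInteger i = 0; i < dataLength; i += 4) {
    GLubyte alpha = data[i + 3];
    data[i]     = (GLubyte)((data[i]     * alpha) / 255); // premultiply R
    data[i + 1] = (GLubyte)((data[i + 1] * alpha) / 255); // premultiply G
    data[i + 2] = (GLubyte)((data[i + 2] * alpha) / 255); // premultiply B
}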
On the other hand, is there a reason your OpenGL context has an alpha channel? Could you just make it opaque white then use kCGImageAlphaNoneSkipLast?
I followed Apple's GLPaint sample application and tried to modify it. In the sample code they use a simple particle.png to draw.
My question is: I want to use some other image of my choice for drawing. At first sight it seems very easy to replace "particle.png" with some "finger.png", but it did not work. When I replaced "particle.png" with "finger.png", I got something like this:
My "finger.png" image looks something like this:
Link : http://developer.apple.com/library/ios/#samplecode/GLPaint/Listings/Classes_PaintingView_m.html#//apple_ref/doc/uid/DTS40007328-Classes_PaintingView_m-DontLinkElementID_6
Partial Code:
- (id)initWithCoder:(NSCoder*)coder
{
CGSize myShadowOffset = CGSizeMake (0, 0);
NSMutableArray* recordedPaths;
CGImageRef brushImage;
CGContextRef brushContext;
GLubyte *brushData;
size_t width, height;
// Create a texture from an image
// First create a UIImage object from the data in an image file, and then extract the Core Graphics image
////--------------------Modification--------------------------------///////
brushImage = [UIImage imageNamed:@"finger.png"].CGImage;
////--------------------Modification--------------------------------///////
// Get the width and height of the image
width = CGImageGetWidth(brushImage);
height = CGImageGetHeight(brushImage);
NSLog(@"%f %f", (CGFloat)width, (CGFloat)height);
// Texture dimensions must be a power of 2. If you write an application that allows users to supply an image,
// you'll want to add code that checks the dimensions and takes appropriate action if they are not a power of 2.
// Make sure the image exists
if(brushImage)
{
// Allocate memory needed for the bitmap context
brushData = (GLubyte *) calloc(width * height * 4, sizeof(GLubyte));
// Use the bitmap creation function provided by the Core Graphics framework.
brushContext = CGBitmapContextCreate(brushData, width, height, 8, width * 4, CGImageGetColorSpace(brushImage), kCGImageAlphaPremultipliedLast);
// After you create the context, you can draw the image to the context.
CGContextDrawImage(brushContext, CGRectMake(0.0, 0.0, (CGFloat)width, (CGFloat)height), brushImage);
// You don't need the context at this point, so you need to release it to avoid memory leaks.
CGContextRelease(brushContext);
// Use OpenGL ES to generate a name for the texture.
glGenTextures(1, &brushTexture);
// Bind the texture name.
glBindTexture(GL_TEXTURE_2D, brushTexture);
// Set the texture parameters to use a minifying filter and a linear filter (weighted average)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
// Specify a 2D texture image, providing a pointer to the image data in memory
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, brushData);
// Release the image data; it's no longer needed
free(brushData);
}
I do not understand why I am getting a drawing like this. Can anyone point out what other changes I need to make so that this application works as before? I am not an expert at OpenGL, so any help or suggestion will be appreciated.
If I remember correctly, you have to make the image white on transparent in order for it to work. If you have blue with transparency around it, it will show the entire picture as opaque.
I took the standard Apple GLPaint app. I replaced particle.png with a finger.png that I made in Photoshop. It is 64x64 RGB 8 bits. The entire image is transparent except for a white smudge which I copied directly from your blue finger.png. Here is the output in the simulator:
It's a bit late, but I find that if you change #define kBrushScale in PaintingView.h, you get interesting effects. Try changing it to .25, .5, .75, 1.0, etc.
I'm writing a game for iPhone in OpenGL ES, and I'm experiencing a problem with alpha blending:
I'm using glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) to achieve alpha blending, and I'm trying to compose a scene with several "layers" so I can move them separately instead of having a static image. I created a preview in Photoshop and then tried to achieve the same result on the iPhone, but a black halo is shown when I blend a texture with semi-transparent regions.
I attached an image. On the left is the screenshot from the iPhone, and on the right is what it looks like when I make the composition in Photoshop. The image is composed of a gradient and a sand image with feathered edges.
Is this the expected behaviour? Is there any way I can avoid the dark borders?
Thanks.
EDIT: I'm uploading the portion of the png containing the sand. The complete png is 512x512 and has other images too.
I'm loading the image using the following code:
NSString *path = [NSString stringWithUTF8String:filePath];
NSData *texData = [[NSData alloc] initWithContentsOfFile:path];
UIImage *image = [[UIImage alloc] initWithData:texData];
if (image == nil) NSLog(@"ERROR LOADING TEXTURE IMAGE");
GLuint width = CGImageGetWidth(image.CGImage);
GLuint height = CGImageGetHeight(image.CGImage);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
void *imageData = malloc( height * width * 4 );
CGContextRef context = CGBitmapContextCreate( imageData, width, height, 8, 4 * width, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big );
CGColorSpaceRelease( colorSpace );
CGContextClearRect( context, CGRectMake( 0, 0, width, height ) );
CGContextTranslateCTM( context, 0, height - height );
CGContextDrawImage( context, CGRectMake( 0, 0, width, height ), image.CGImage );
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);
CGContextRelease(context);
free(imageData);
[image release];
[texData release];
I need to answer my own question:
I couldn't make it work using the ImageIO framework, so I added the libpng sources to my project and loaded the image with it. It works perfectly now, but I first had to solve the following problem:
The image loaded and displayed fine in the simulator but did not load at all on the real device. It turns out that when PNG files are deployed onto the iPhone, they are run through a compression utility, pngcrush, which converts the pixel ordering from RGBA to BGRA and premultiplies the color values by the alpha channel (for device-specific efficiency reasons, when programming with UIKit).
The utility also rewrites one of the file's headers, making the new PNG unusable by libpng. While this is fine for UIKit, libpng (and other non-Apple libraries) generally can't read the processed files.
The simple solutions are:
- rename your PNG files with a different extension, or
- for your iPhone device build, add the following user-defined setting:
IPHONE_OPTIMIZE_OPTIONS | -skip-PNGs
I did the second, and it now works perfectly on both the simulator and the device.
Your screenshot and Photoshop mockup suggest that the image's color channels are being premultiplied against the alpha channel.
I have no idea what your original source images look like, but to me it looks like it is blending correctly. With the blend mode you have, you're going to get muddy blends between the layers.
The Photoshop version looks like you've got proper transparency for each layer, but not blending. I suppose you could experiment with glAlphaFunc if you didn't want to explicitly set the pixel alphas exactly.
--- Code relating to comment below (removing alpha pre-multiplication) ---
int pixelcount = width * height;
unsigned char* off = pixeldata; // pixeldata: pointer to your RGBA8 image bytes
for (int pi=0; pi<pixelcount; ++pi)
{
unsigned char alpha = off[3];
if( alpha!=255 && alpha!=0 )
{
off[0] = ((int)off[0])*255/alpha;
off[1] = ((int)off[1])*255/alpha;
off[2] = ((int)off[2])*255/alpha;
}
off += 4;
}
I am aware this post is ancient; however, I had the identical problem, and after attempting some of the solutions and agonising for days, I discovered that you can solve the premultiplied RGBA PNG issue by using the following blending parameters:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
The GL_ONE parameter replaced the GL_SRC_ALPHA parameter in my case.
I can now use my RGBA PNGs without the gray alpha edges effect which made my glyph text look nasty.
Edit:
Okay, one more thing: for fading etc. (setting the alpha channel in code), you will need to premultiply manually when the blending is set as above, like so:
glColor4f(r*a, g*a, b*a, a);
I'm trying to write a simple paint program for iPhone, and I'm using Apple's GLPaint sample as a guide. The only problem is, painting doesn't work on a white background, since white + colour = white. I've tried different blending functions, but I haven't been able to hit on the right combination of settings and/or brushes to make this work. I've seen similar posts about this problem but no answers. Does anyone know how this might work?
Edit:
I don't even really need transparency effects; at this point, if I could draw solid lines with rounded ends I'd be happy.
I got white backgrounds working (using the default GLPaint code) by just changing the clear color in the erase method, i.e.:
- (void) erase
{
[EAGLContext setCurrentContext:context];
// Clear the buffer
glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
//glClearColor(0.0, 0.0, 0.0, 0.0);
glClearColor(1.0, 1.0, 1.0, 0.0); // Change to white
glClear(GL_COLOR_BUFFER_BIT);
// Display the buffer
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER_OES];
}
The default blend function and brush image seem to just work.
Rather than adding the colour to the blend, could you subtract its opposite? This is roughly how paint and light work in real life, and should give the correct functionality.
Ex: If the user is painting in Red:255 Green:0 Blue:100 Opacity:0.5, you should do this to the pixel:
pixel.red -= (255-paint.red) * paint.opacity; //Subtract 0
pixel.green -= (255-paint.green) * paint.opacity; //Subtract 127.5
pixel.blue -= (255-paint.blue) * paint.opacity; //Subtract 77.5
EDIT: As you pointed out, this is not what is expected, as painting over full blue with full red goes to black, since the two subtract each other.
A possible fix to this would be to combine the additive and subtractive approach.
For instance if you added 0.5*paint.colour and subtracted 0.5*paint.complementaryColour, adding full red to full blue would result in:
newPixel.red -> 0 + 127.5 - 0 = 127.5
newPixel.green -> 0 + 0 - 127.5 = 0 //Cap it off, or invent new math =D
newPixel.blue -> 255 + 0 - 127.5 = 127.5
As you can see, this results in a nice purple colour, which is the combination of blue and red. You can tweak the proportion of additive to subtractive logic to simulate how well the paint mixes.
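Sketched as plain C over 8-bit channels (assuming the 0.5/0.5 split above, with clamping standing in for the "new math"):
static inline unsigned char mixChannel(unsigned char dst, unsigned char paint, float opacity)
{
    // half additive (paint color), half subtractive (its complement)
    float v = dst + 0.5f * paint * opacity
                  - 0.5f * (255 - paint) * opacity;
    if (v < 0.0f)   v = 0.0f;   // cap it off
    if (v > 255.0f) v = 255.0f;
    return (unsigned char)v;
}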
Hope that helps! =)
Yeah, I had the same issue: the edges of the brush were darker than they should be. It turns out that Apple's API premultiplies the alpha into the RGB channels.
So I countered that by making a grayscale brush in Photoshop with just RGB values and no alpha channel. It should look the way you want your brush to be, with white representing full color pigmentation and black representing no color pigmentation.
I load that brush the way it's done in Apple's GLPaint sample code. I then copy the R channel (or the G or B channel, as they are all equal) into the alpha channel of the texture. After that, I set the R, G, and B values to maximum for all pixels of the brush texture.
So now your brush has an alpha channel with the data of how exactly your brush looks, and the RGB channels are all 1.
Finally I used the blending function:
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
And don't forget to set the color before you draw:
glColor4f(1.0f,0.0f,0.0f,1.0f); //red color
Check out the code below, see if it works for you:
-(GLuint) createBrushWithImage: (NSString*)brushName
{
GLuint brushTexture;
CGImageRef brushImage;
CGContextRef brushContext;
GLubyte *brushData,*brushData1;
size_t width, height;
//initialize brush image
brushImage = [UIImage imageNamed:brushName].CGImage;
// Get the width and height of the image
width = CGImageGetWidth(brushImage);
height = CGImageGetHeight(brushImage);
//make the brush texture and context
if(brushImage) {
// Allocate memory needed for the bitmap context
brushData = (GLubyte *) calloc(width * height *4, sizeof(GLubyte));
// We are going to use brushData1 to make the final texture
brushData1 = (GLubyte *) calloc(width * height *4, sizeof(GLubyte));
// Use the bitmap creation function provided by the Core Graphics framework.
brushContext = CGBitmapContextCreate(brushData, width, height, 8, width *4 , CGImageGetColorSpace(brushImage), kCGImageAlphaPremultipliedLast);
// After you create the context, you can draw the image to the context.
CGContextDrawImage(brushContext, CGRectMake(0.0f, 0.0f, (CGFloat)width, (CGFloat)height), brushImage);
// You don't need the context at this point, so you need to release it to avoid memory leaks.
CGContextRelease(brushContext);
for(int i=0; i< width*height;i++){
//set the R-G-B channel to maximum
brushData1[i*4] = brushData1[i*4+1] =brushData1[i*4+2] =0xff;
//store originally loaded brush image in alpha channel
brushData1[i*4+3] = brushData[i*4];
}
// Use OpenGL ES to generate a name for the texture.
glGenTextures(1, &brushTexture);
// Bind the texture name.
glBindTexture(GL_TEXTURE_2D, brushTexture);
// Set the texture parameters to use a minifying filter and a linear filter (weighted average)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
// Specify a 2D texture image, providing a pointer to the image data in memory
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, brushData1);
// Release the image data; it's no longer needed
free(brushData1);
free(brushData);
}
return brushTexture;
}
I ran into something similar. The following blending function call solved it for me without any complicated math.
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_BLEND);
Add this before your glDraw calls and you should be able to draw with any texture as a brush.
Actually this works even better:
glBlendFunc(GL_ONE,GL_ONE_MINUS_SRC_ALPHA);
The GLPaint sample code uses glBlendFunc(GL_SRC_ALPHA, GL_ONE) in the function - (id)initWithCoder:(NSCoder*)coder, so while the background color is white, none of the other colors can be seen. I wanted to solve this too.
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
This is the best answer. You also have to change the brush picture: it must have a transparent background with a white ellipse.