I followed Apple's GLPaint sample application and tried to modify it. In the sample code they use a simple particle.png as the brush.
My question is: I want to use some other image of my choice for drawing. At first sight it seems very easy to replace "particle.png" with some "finger.png", but it did not work. When I replaced "particle.png" with "finger.png", I got something like this:
My "finger.png" image looks something like this:
Link : http://developer.apple.com/library/ios/#samplecode/GLPaint/Listings/Classes_PaintingView_m.html#//apple_ref/doc/uid/DTS40007328-Classes_PaintingView_m-DontLinkElementID_6
Partial Code:
- (id)initWithCoder:(NSCoder*)coder
{
CGSize myShadowOffset = CGSizeMake (0, 0);
NSMutableArray* recordedPaths;
CGImageRef brushImage;
CGContextRef brushContext;
GLubyte *brushData;
size_t width, height;
// Create a texture from an image
// First create a UIImage object from the data in a image file, and then extract the Core Graphics image
////--------------------Modification--------------------------------///////
brushImage = [UIImage imageNamed:@"finger.png"].CGImage;
////--------------------Modification--------------------------------///////
// Get the width and height of the image
width = CGImageGetWidth(brushImage);
height = CGImageGetHeight(brushImage);
NSLog(@"%f %f", (CGFloat)width, (CGFloat)height);
// Texture dimensions must be a power of 2. If you write an application that allows users to supply an image,
// you'll want to add code that checks the dimensions and takes appropriate action if they are not a power of 2.
// Make sure the image exists
if(brushImage)
{
// Allocate memory needed for the bitmap context
brushData = (GLubyte *) calloc(width * height * 4, sizeof(GLubyte));
// Use the bitmap creation function provided by the Core Graphics framework.
brushContext = CGBitmapContextCreate(brushData, width, height, 8, width * 4, CGImageGetColorSpace(brushImage), kCGImageAlphaPremultipliedLast);
// After you create the context, you can draw the image to the context.
CGContextDrawImage(brushContext, CGRectMake(0.0, 0.0, (CGFloat)width, (CGFloat)height), brushImage);
// You don't need the context at this point, so you need to release it to avoid memory leaks.
CGContextRelease(brushContext);
// Use OpenGL ES to generate a name for the texture.
glGenTextures(1, &brushTexture);
// Bind the texture name.
glBindTexture(GL_TEXTURE_2D, brushTexture);
// Set the texture parameters to use a minifying filter and a linear filter (weighted average)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
// Specify a 2D texture image, providing a pointer to the image data in memory
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, brushData);
// Release the image data; it's no longer needed
free(brushData);
}
I do not understand why I am getting a drawing like this. Can anyone point out what other changes I need to make so that this application works as before? I am not an expert at OpenGL, so any help or suggestion will be appreciated.
If I remember correctly, you have to make the image white on transparent in order for it to work. If you have blue with transparency around it, it will show the entire picture as opaque.
I took the standard Apple GLPaint app. I replaced particle.png with a finger.png that I made in Photoshop. It is 64x64 RGB 8 bits. The entire image is transparent except for a white smudge which I copied directly from your blue finger.png. Here is the output in the simulator:
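The reason a white-on-transparent brush works is that GLPaint tints the brush with the current GL color state, so the texture's RGB should stay white and only its alpha should shape the stroke. A minimal sketch of the idea (assuming GLPaint's OpenGL ES 1.1 fixed-function setup):
// With a white-on-transparent brush, the stroke color comes from the
// current color state, not from the texture's RGB channels.
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE);   // GLPaint's default blend mode
glColor4f(0.0f, 0.0f, 1.0f, 1.0f);   // strokes now render in blue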
It's a bit late, but I find that if you change #define kBrushScale in PaintingView.h, you get interesting effects. Try changing it to .25, .5, .75, 1.0, etc.
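For reference, that constant sits near the top of PaintingView.h; a sketch of the change (the value here is just one of the suggestions above):
#define kBrushScale 0.5   // try .25, .75, 1.0 and compare stroke widths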
I'm working on a paint app for iPhone. In my code I'm using an image view that contains an outline image, on top of which I put a CAEAGLLayer for filling colors into the outline. Now I am taking a screenshot of the OpenGL ES (CAEAGLLayer) rendered content using this function:
- (UIImage*)snapshot:(UIView*)eaglview{
GLint backingWidth1, backingHeight1;
// Bind the color renderbuffer used to render the OpenGL ES view
// If your application only creates a single color renderbuffer which is already bound at this point,
// this call is redundant, but it is needed if you're dealing with multiple renderbuffers.
// Note, replace "_colorRenderbuffer" with the actual name of the renderbuffer object defined in your class.
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
// Get the size of the backing CAEAGLLayer
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth1);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight1);
NSInteger x = 0, y = 0, width = backingWidth1, height = backingHeight1;
NSInteger dataLength = width * height * 4;
GLubyte *data = (GLubyte*)malloc(dataLength * sizeof(GLubyte));
// Read pixel data from the framebuffer
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);
// Create a CGImage with the pixel data
// If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel
// otherwise, use kCGImageAlphaPremultipliedLast
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
ref, NULL, true, kCGRenderingIntentDefault);
// OpenGL ES measures data in PIXELS
// Create a graphics context with the target size measured in POINTS
NSInteger widthInPoints, heightInPoints;
if (NULL != UIGraphicsBeginImageContextWithOptions) {
// On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
// Set the scale parameter to your OpenGL ES view's contentScaleFactor
// so that you get a high-resolution snapshot when its value is greater than 1.0
CGFloat scale = eaglview.contentScaleFactor;
widthInPoints = width / scale;
heightInPoints = height / scale;
UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
}
else {
// On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
widthInPoints = width;
heightInPoints = height;
UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
}
CGContextRef cgcontext = UIGraphicsGetCurrentContext();
// UIKit coordinate system is upside down to GL/Quartz coordinate system
// Flip the CGImage by rendering it to the flipped bitmap context
// The size of the destination area is measured in POINTS
CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);
// Retrieve the UIImage from the current context
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Clean up
free(data);
CFRelease(ref);
CFRelease(colorspace);
CGImageRelease(iref);
return image;
}
I then combine this screenshot with the outline image using this function:
- (void)Combine:(UIImage *)Back{
UIImage *Front = backgroundImageView.image;
//UIGraphicsBeginImageContext(Back.size);
UIGraphicsBeginImageContext(CGSizeMake(640,960));
// Draw image1
[Back drawInRect:CGRectMake(0, 0, Back.size.width*2, Back.size.height*2)];
// Draw image2
[Front drawInRect:CGRectMake(0, 0, Front.size.width*2, Front.size.height*2)];
UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
UIImageWriteToSavedPhotosAlbum(resultingImage, nil, nil, nil);
UIGraphicsEndImageContext();
}
Finally, I save the combined image to the photo album using this function:
-(void)captureToPhotoAlbum {
[self Combine:[self snapshot:self]];
UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Success" message:@"Image saved to Photo Album" delegate:nil cancelButtonTitle:@"OK" otherButtonTitles:nil];
[alert show];
[alert release];
}
The above code works, but the quality of the screenshot is poor. Along the brush strokes there is a grayish outline. I have uploaded a screenshot of my app, which is a combination of the OpenGL ES content and the UIImage.
Is there any way to get a retina-quality screenshot of the OpenGL ES (CAEAGLLayer) content?
Thank you in advance!
I don't believe that resolution is your issue here. If you aren't seeing the grayish outlines on your drawing when it appears on the screen, odds are that you're observing a compression artifact in the saving process. Your image is probably being saved as a lower-quality JPEG image, where artifacts will appear on sharp edges, like the ones in your drawing.
To work around this, Ben Weiss's answer here provides the following code for forcing your image to be saved to the photo library as a PNG:
UIImage* im = [UIImage imageWithCGImage:myCGRef]; // make image from CGRef
NSData* imdata = UIImagePNGRepresentation ( im ); // get PNG representation
UIImage* im2 = [UIImage imageWithData:imdata]; // wrap UIImage around PNG representation
UIImageWriteToSavedPhotosAlbum(im2, nil, nil, nil); // save to photo album
While this is probably the easiest way to address your problem here, you could also try employing multisample antialiasing, as Apple describes in the "Using Multisampling to Improve Image Quality" section of the OpenGL ES Programming Guide for iOS. Depending on how fill-rate limited you are, MSAA might lead to a little bit of slowdown in your application.
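For reference, the multisampling path Apple describes relies on the GL_APPLE_framebuffer_multisample extension. A rough sketch of the setup and resolve steps (OpenGL ES 1.1 naming; viewFramebuffer, backingWidth, and backingHeight are assumed to come from a GLPaint-style view, and the sample count of 4 is just an example):
GLuint msaaFramebuffer, msaaRenderbuffer;
// Create a multisampled framebuffer/renderbuffer pair to draw into
glGenFramebuffersOES(1, &msaaFramebuffer);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, msaaFramebuffer);
glGenRenderbuffersOES(1, &msaaRenderbuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, msaaRenderbuffer);
glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER_OES, 4, GL_RGBA8_OES, backingWidth, backingHeight);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, msaaRenderbuffer);
// ...render the scene into msaaFramebuffer, then resolve the samples
// into the on-screen framebuffer before presenting:
glBindFramebufferOES(GL_READ_FRAMEBUFFER_APPLE, msaaFramebuffer);
glBindFramebufferOES(GL_DRAW_FRAMEBUFFER_APPLE, viewFramebuffer);
glResolveMultisampleFramebufferAPPLE();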
You're using kCGImageAlphaPremultipliedLast when you create the CG bitmap context. Although I can't see your OpenGL code, it seems unlikely to me that your OpenGL context is rendering premultiplied alpha. Unfortunately, IIRC, it's not possible to create a non-premultiplied CG bitmap context on iOS (it would be using kCGImageAlphaLast, but I think that'll just make the creation call fail), so you may need to premultiply the data by hand between getting it from OpenGL and making the CG context.
On the other hand, is there a reason your OpenGL context has an alpha channel? Could you just make it opaque white then use kCGImageAlphaNoneSkipLast?
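If the alpha channel does need to be kept, a manual premultiplication pass over the buffer returned by glReadPixels might look like this (a sketch only; data, width, and height are the variables from the snapshot method above):
// Premultiply each RGBA pixel in place before wrapping the buffer in a
// kCGImageAlphaPremultipliedLast CGImage; +127 rounds the integer division.
for (NSInteger i = 0; i < width * height; i++) {
    GLubyte *p = data + i * 4;
    GLubyte a = p[3];
    if (a != 255) {   // fully opaque pixels are already correct
        p[0] = (GLubyte)((p[0] * a + 127) / 255);
        p[1] = (GLubyte)((p[1] * a + 127) / 255);
        p[2] = (GLubyte)((p[2] * a + 127) / 255);
    }
}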
I'm writing a game for iPhone in OpenGL ES, and I'm experiencing a problem with alpha blending:
I'm using glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) to achieve alpha blending and trying to compose a scene with several "layers" so I can move them separately instead of having a static image. I created a preview in Photoshop and then tried to achieve the same result on the iPhone, but a black halo shows up when I blend a texture with semi-transparent regions.
I attached an image. On the left is the screenshot from the iPhone, and on the right is what it looks like when I do the composition in Photoshop. The image is composed of a gradient and a sand image with feathered edges.
Is this the expected behaviour? Is there any way I can avoid the dark borders?
Thanks.
EDIT: I'm uploading the portion of the PNG containing the sand. The complete PNG is 512x512 and contains other images too.
I'm loading the image using the following code:
NSString *path = [NSString stringWithUTF8String:filePath];
NSData *texData = [[NSData alloc] initWithContentsOfFile:path];
UIImage *image = [[UIImage alloc] initWithData:texData];
if (image == nil) NSLog(@"ERROR LOADING TEXTURE IMAGE");
GLuint width = CGImageGetWidth(image.CGImage);
GLuint height = CGImageGetHeight(image.CGImage);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
void *imageData = malloc( height * width * 4 );
CGContextRef context = CGBitmapContextCreate( imageData, width, height, 8, 4 * width, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big );
CGColorSpaceRelease( colorSpace );
CGContextClearRect( context, CGRectMake( 0, 0, width, height ) );
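// Note: (height - height) is 0, so the translate below is a no-op; a vertical
// flip would normally be CGContextTranslateCTM(context, 0, height) followed by
// CGContextScaleCTM(context, 1, -1).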
CGContextTranslateCTM( context, 0, height - height );
CGContextDrawImage( context, CGRectMake( 0, 0, width, height ), image.CGImage );
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);
CGContextRelease(context);
free(imageData);
[image release];
[texData release];
I need to answer my own question:
I couldn't make it work using the ImageIO framework, so I added the libpng sources to my project and loaded the image with them. It works perfectly now, but I had to solve the following problem first:
The image loaded and displayed fine in the simulator but was not loading at all on the real device. I found on the web that what's going on is that the pixel ordering in PNG files is converted from RGBA to BGRA, and the color values are premultiplied by the alpha channel as well, by a compression utility called pngcrush (for device-specific efficiency reasons, when programming with the UIKit interface).
The utility also renames a header of the file, making the new PNG unusable by libpng. These changes are made automatically when PNG files are deployed onto the iPhone. While this is fine for UIKit, libpng (and other non-Apple libraries) generally can't read the files afterwards.
The simple solutions are:
rename your PNG files with a different extension, or
for your iPhone device build, add the following user-defined setting:
IPHONE_OPTIMIZE_OPTIONS = -skip-PNGs
I did the second, and it now works perfectly on both the simulator and the device.
Your screenshot and photoshop mockup suggest that the image's color channels are being premultiplied against the alpha channel.
I have no idea what your original source images look like, but to me it looks like it is blending correctly. With the blend mode you have, you're going to get muddy blends between the layers.
The Photoshop version looks like you've got proper transparency for each layer, but not blending. I suppose you could experiment with glAlphaFunc if you didn't want to explicitly set the pixel alphas exactly.
--- Code relating to comment below (removing alpha pre-multiplication) ---
int pixelcount = width * height;
unsigned char* off = pixeldata;
for (int pi = 0; pi < pixelcount; ++pi)
{
    unsigned char alpha = off[3];
    // Skip fully opaque pixels (no change needed) and fully
    // transparent ones (which would divide by zero)
    if (alpha != 255 && alpha != 0)
    {
        // Divide the alpha back out of each premultiplied color channel
        off[0] = ((int)off[0]) * 255 / alpha;
        off[1] = ((int)off[1]) * 255 / alpha;
        off[2] = ((int)off[2]) * 255 / alpha;
    }
    off += 4; // advance to the next RGBA pixel
}
I am aware this post is ancient; however, I had the identical problem, and after attempting some of the solutions and agonising for days, I discovered that you can solve the premultiplied RGBA PNG issue by using the following blending parameters:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
The GL_ONE parameter replaced the GL_SRC_ALPHA parameter in my case.
I can now use my RGBA PNGs without the gray alpha edges effect which made my glyph text look nasty.
Edit:
Okay, one more thing: for fading etc. (setting the alpha channel in code), you will need to premultiply manually when the blending is set as above, like so:
glColor4f(r*a, g*a, b*a, a);
How can I let the user of my iPhone app clip a UIImage with a dynamically generated CGPath? Basically, I display a rectangle overlaid on a UIImageView, and the user can move the four corners of the rectangle to get a four-sided polygon. The rectangle is not filled, so you see four lines overlaid on the image.
The user should be able to clip out whatever is outside the 4 lines.
Any help or pointers is much appreciated.
If you already have the CGPath, you just have to use CGContextAddPath and CGContextClip, and after that you can draw your UIImage on that context.
If you just want to display the clipped image, that context could be the current context in the drawRect: method of your view, as in the sketch below.
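For the display-only case, something like this should work (clipPath and imageToClip are hypothetical properties standing in for your path and image):
- (void)drawRect:(CGRect)rect
{
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    // Clip the current context to the user's four-sided path...
    CGContextAddPath(ctx, self.clipPath);
    CGContextClip(ctx);
    // ...then draw; everything outside the path is clipped away
    [self.imageToClip drawInRect:self.bounds];
}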
If you actually want to have the clipped image data, the context would probably be a CGBitmapContext, something like this:
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
size_t bytesPerPixel = 1;
size_t bytesPerRow = bmpWidth * bytesPerPixel;
size_t bmpDataSize = ( bytesPerRow * bmpHeight);
unsigned char *bmpData = malloc(bmpDataSize);
memset(bmpData, 0, bmpDataSize);
CGContextRef bmpCtx = CGBitmapContextCreate(bmpData, bmpWidth, bmpHeight, 8, bytesPerRow, colorSpace, kCGImageAlphaNone | kCGBitmapByteOrderDefault);
(the code example is for a grey scale bitmap because I had that code ready, but it's not hard to figure out what has to be changed for an RGB bitmap.)
then to actually draw the clipped image to the bitmap context you would do something like this (I'm writing this code from memory, so there might be some mistakes):
// theContext could be
// UIGraphicsGetCurrentContext()
// or the bmpCtx
CGContextAddPath(theContext, yourCGPath);
CGContextClip(theContext);
// not sure you need the translate and scale...
CGContextTranslateCTM(theContext, 0, bmpHeight);
CGContextScaleCTM(theContext, 1, -1);
CGContextDrawImage(theContext, rect, yourUIImage.CGImage);
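To then pull the clipped result back out of the bitmap context as a UIImage, something like this should work (a sketch continuing from the bmpCtx and bmpData variables above):
CGImageRef clippedRef = CGBitmapContextCreateImage(bmpCtx);
UIImage *clippedImage = [UIImage imageWithCGImage:clippedRef];
// Clean up the Core Graphics objects and the backing buffer
CGImageRelease(clippedRef);
CGContextRelease(bmpCtx);
free(bmpData);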
Trying to write a simple paint program for iPhone, and I'm using Apple's glPaint sample as a guide. The only problem is, painting doesn't work on a white background, since white + colour = white. I've tried different blending functions, but haven't been able to hit on the right combination of settings and/or brushes to make this work. I've seen similar posts about this problem but no answers. Does anyone know how this might work?
Edit:
I don't even really need transparency effects; at this point, if I could draw solid lines with rounded ends, I'd be happy.
I got white backgrounds working (using the default GLPaint code) by just changing the clear color in the erase method, i.e.:
- (void) erase
{
[EAGLContext setCurrentContext:context];
// Clear the buffer
glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
//glClearColor(0.0, 0.0, 0.0, 0.0);
glClearColor(1.0, 1.0, 1.0, 0.0); // Change to white
glClear(GL_COLOR_BUFFER_BIT);
// Display the buffer
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER_OES];
}
The default blend function and brush image seem to just work.
Rather than adding the colour to the blend, could you subtract its opposite? This is roughly how paint and light work in real life, and should give the correct functionality.
Ex: If the user is painting in Red:255 Green:0 Blue:100 Opacity:0.5, you should do this to the pixel:
pixel.red -= (255-paint.red) * paint.opacity; //Subtract 0
pixel.green -= (255-paint.green) * paint.opacity; //Subtract 127.5
pixel.blue -= (255-paint.blue) * paint.opacity; //Subtract 77.5
EDIT: As you pointed out, it is not what is expected, as painting over full blue with full red will go to black, since they subtract each other.
A possible fix to this would be to combine the additive and subtractive approach.
For instance if you added 0.5*paint.colour and subtracted 0.5*paint.complementaryColour, adding full red to full blue would result in:
newPixel.red -> 0 + 127.5 - 0 = 127.5
newPixel.green -> 0 + 0 - 127.5 = 0 //Cap it off, or invent new math =D
newPixel.blue -> 255 + 0 - 127.5 = 127.5
As you can see, this results in a nice purple colour, which is the combination of blue and red. You can tweak the proportion of additive to subtractive logic to simulate how well the paint mixes.
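As a sketch, that combined approach for a single 8-bit channel might look like this in C (mixChannel is a hypothetical helper; the clamps stand in for the "cap it off" step):
// Half-additive, half-subtractive paint mixing for one channel.
// pixelC is the canvas value, paintC the paint value, opacity in [0,1].
static unsigned char mixChannel(unsigned char pixelC, unsigned char paintC, float opacity)
{
    float added      = 0.5f * paintC;           // additive contribution
    float subtracted = 0.5f * (255 - paintC);   // subtract the complement
    float result = pixelC + opacity * (added - subtracted);
    if (result < 0.0f)   result = 0.0f;         // cap it off
    if (result > 255.0f) result = 255.0f;
    return (unsigned char)result;
}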
Hope that helps! =)
Yeah, I had the same issue. The edges of the brush were darker than they should be. It turns out that Apple's API premultiplies the alpha into the RGB channels.
So I countered that by making a grayscale brush in Photoshop with just RGB values and no alpha channel. It should look the way you want your brush to be, with white representing full color pigmentation and black representing no pigmentation.
I load that brush the way it's done in Apple's GLPaint sample code. I then copy the R channel (or the G or B channel, since they are all equal) into the alpha channel of the texture. After that, I set the R, G, and B values to maximum for all pixels of the brush texture.
So now your brush has an alpha channel describing exactly how your brush looks, and the RGB channels are all 1.
Finally I used the blending function:
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
And don't forget to set the color before you draw:
glColor4f(1.0f,0.0f,0.0f,1.0f); //red color
Check out the code below, see if it works for you:
-(GLuint) createBrushWithImage: (NSString*)brushName
{
GLuint brushTexture;
CGImageRef brushImage;
CGContextRef brushContext;
GLubyte *brushData,*brushData1;
size_t width, height;
//initialize brush image
brushImage = [UIImage imageNamed:brushName].CGImage;
// Get the width and height of the image
width = CGImageGetWidth(brushImage);
height = CGImageGetHeight(brushImage);
//make the brush texture and context
if(brushImage) {
// Allocate memory needed for the bitmap context
brushData = (GLubyte *) calloc(width * height *4, sizeof(GLubyte));
// We are going to use brushData1 to make the final texture
brushData1 = (GLubyte *) calloc(width * height *4, sizeof(GLubyte));
// Use the bitmap creation function provided by the Core Graphics framework.
brushContext = CGBitmapContextCreate(brushData, width, height, 8, width *4 , CGImageGetColorSpace(brushImage), kCGImageAlphaPremultipliedLast);
// After you create the context, you can draw the image to the context.
CGContextDrawImage(brushContext, CGRectMake(0.0f, 0.0f, (CGFloat)width, (CGFloat)height), brushImage);
// You don't need the context at this point, so you need to release it to avoid memory leaks.
CGContextRelease(brushContext);
for(int i=0; i< width*height;i++){
//set the R-G-B channel to maximum
brushData1[i*4] = brushData1[i*4+1] =brushData1[i*4+2] =0xff;
//store originally loaded brush image in alpha channel
brushData1[i*4+3] = brushData[i*4];
}
// Use OpenGL ES to generate a name for the texture.
glGenTextures(1, &brushTexture);
// Bind the texture name.
glBindTexture(GL_TEXTURE_2D, brushTexture);
// Set the texture parameters to use a minifying filter and a linear filter (weighted average)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
// Specify a 2D texture image, providing a pointer to the image data in memory
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, brushData1);
// Release the image data; it's no longer needed
free(brushData1);
free(brushData);
}
return brushTexture;
}
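A hypothetical usage, assuming the method lives in the painting view and brushTexture is an instance variable:
brushTexture = [self createBrushWithImage:@"finger.png"]; // load the brush once
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glColor4f(1.0f, 0.0f, 0.0f, 1.0f); // strokes will come out red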
I ran into something similar. The following blending function call solved it for me without any complicated math.
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_BLEND);
Add this before your draw calls and you should be able to draw with any texture as a brush.
Actually this works even better:
glBlendFunc(GL_ONE,GL_ONE_MINUS_SRC_ALPHA);
The GLPaint sample code uses glBlendFunc(GL_SRC_ALPHA, GL_ONE) blending in the function - (id)initWithCoder:(NSCoder*)coder, so while the background color is white, all the other colors can't be seen.
I want to solve it too.
glBlendFunc(GL_ONE,GL_ONE_MINUS_SRC_ALPHA);
This is the best answer. Note that you also have to change the brush picture: it must be a white ellipse on a transparent background.
I have a simple 16x16 particle that goes from opaque to transparent. Unfortunately, it appears different in my iPhone port, and I can't see where the differences in the code are. Most of the code is essentially the same.
I've uploaded an image here to show the problem:
The particle on the left is the incorrectly rendered iPhone version and the right is how it appears on Mac and Windows. It's just a simple RGBA .png file.
I've tried numerous blend functions and glTexEnv settings, but I can't seem to make them the same.
Just to be thorough, my texture loading code on the iPhone looks like the following:
GLuint TextureLoader::LoadTexture(const char *path)
{
NSString *macPath = [NSString stringWithCString:path length:strlen(path)];
GLuint texture = 0;
CGImageRef textureImage = [UIImage imageNamed:macPath].CGImage;
if (textureImage == nil)
{
NSLog(@"Failed to load texture image");
return 0;
}
NSInteger texWidth = CGImageGetWidth(textureImage);
NSInteger texHeight = CGImageGetHeight(textureImage);
GLubyte *textureData = new GLubyte[texWidth * texHeight * 4];
memset(textureData, 0, texWidth * texHeight * 4);
CGContextRef textureContext = CGBitmapContextCreate(textureData, texWidth, texHeight, 8, texWidth * 4, CGImageGetColorSpace(textureImage), kCGImageAlphaPremultipliedLast);
CGContextDrawImage(textureContext, CGRectMake(0.0, 0.0, (float)texWidth, (float)texHeight), textureImage);
CGContextRelease(textureContext);
//Make a texture ID, bind it, create it
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texWidth, texHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, textureData);
delete[] textureData;
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
return texture;
}
The blend function I use is glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
I'll try any ideas people throw at me, because this has been a bit of a mystery to me.
Cheers.
This looks like the standard "textures are converted to premultiplied alpha" problem.
You can use
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
or you can write custom loading code to avoid the premultiplication.
Call me naive, but seeing that premultiplying an image maps (r, g, b, a) to (a*r, a*g, a*b, a), I figured I'd just divide the RGB values by a.
Of course, as soon as the alpha value is larger than the r, g, b components, the particle texture became black. Oh well. Unless I can find a different image loader than the one above, I'll just make all the RGB components 0xff (white). This is a good temporary solution for me, because I either need a white particle or I colorize it in the application. Later on I might just make raw RGBA files and read them in, because this is mainly for very small 16x16 and smaller particle textures.
I can't use premultiplied textures for the particle system, because overlapping multiple particle textures saturates the colours way too much.