How do I use a monochrome bitmap to display a color image on the iPhone?

I have a sequence of bits for a monochrome image (0 => black, 1 => white). I want to take this data and draw an image on the iPhone in color. If a pixel value == 0, then paint it color1, else color2.
I have gotten absolutely nowhere. Originally, I thought I could use glBitmap, but it is not supported on the iPhone.
Does anyone have an idea of how to do this?
Thanks in advance.

I highly doubt that this is "the" way of doing it, or even a good way of doing it, but since there are no other responses, here is a possible way:
UIGraphicsBeginImageContext(CGSizeMake(width, height));
CGContextRef context = UIGraphicsGetCurrentContext();
UIGraphicsPushContext(context); // redundant (Begin already makes it current), but harmless
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        int val = map[x + y * width];
        if (val == 0) {
            CGContextSetRGBFillColor(context, 0.0, 0.0, 0.0, 1.0); // color1 (black here)
        } else {
            CGContextSetRGBFillColor(context, 1.0, 1.0, 1.0, 1.0); // color2 (white here)
        }
        CGContextFillRect(context, CGRectMake(x, y, 1, 1)); // paint one pixel
    }
}
UIGraphicsPopContext();
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Also note that I have not actually tested this on the iPhone, only on the iPad.
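If the bitmap is large, a per-pixel CGContextFillRect will be slow. A faster sketch (my own untested variant; the layout of map and the black/white color choice are assumptions carried over from above) fills a raw RGBA buffer and wraps it in a bitmap context:
size_t bytesPerRow = width * 4;
uint8_t *pixels = malloc(height * bytesPerRow);
for (int i = 0; i < width * height; i++) {
    uint8_t v = map[i] ? 255 : 0;   // 0 => color1 (black), else color2 (white)
    pixels[i * 4 + 0] = v;          // R
    pixels[i * 4 + 1] = v;          // G
    pixels[i * 4 + 2] = v;          // B
    pixels[i * 4 + 3] = 255;        // A (opaque)
}
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef bmp = CGBitmapContextCreate(pixels, width, height, 8, bytesPerRow,
                                         colorSpace,
                                         kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGImageRef cgImage = CGBitmapContextCreateImage(bmp);
UIImage *image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
CGContextRelease(bmp);
CGColorSpaceRelease(colorSpace);
free(pixels);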

Related

iOS7 screen capture has white noise

I am building an iOS 7 app that captures the screen, but the captured images contain white noise where I have added subviews. On iOS 6 and iOS 5 there is no white noise. Please help.
This is my code:
CGRect rect = [[UIScreen mainScreen] bounds];
UIGraphicsBeginImageContextWithOptions(rect.size, NO, 0);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextFillRect(ctx, rect);
[self.view.layer renderInContext:ctx];
NSData *pngData = UIImagePNGRepresentation(UIGraphicsGetImageFromCurrentImageContext());
UIGraphicsEndImageContext(); // balance the Begin above
I use a PNG image with an alpha channel in a UIImageView and add it as a subview of the view.
(white noise image here)
iOS 7 has a different way of taking screenshots:
UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, self.window.screen.scale);
// drawViewHierarchyInRect: is new in iOS 7 and renders the live view hierarchy
[self drawViewHierarchyInRect:self.bounds afterScreenUpdates:NO]; // bounds, not frame: we draw in the view's own coordinates
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
You are using CGContextFillRect to fill the rectangle with what? You never set a fill color or pattern. Although iOS 7 has a new method, as described in the previous answer, the older approach should still work if coded properly.
My guess is that on earlier OS versions the fill color happened to default to something usable, while on iOS 7 you are getting whatever happens to be in memory. Try calling CGContextSetFillColorWithColor before CGContextFillRect.
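For example (a sketch using ctx and rect from the question's code; white is just a placeholder color):
CGContextSetFillColorWithColor(ctx, [UIColor whiteColor].CGColor);
CGContextFillRect(ctx, rect);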

Pixelated iPhone UIImageView

I've been having issues rendering images with the UIImageView class. The pixelation seems to occur mostly on the edges of the image I am trying to show.
I have tried changing the property 'Render with edge antialiasing' to no avail.
The image files contain images that are larger than what will appear on the screen.
UIImageView seems to be royally messing with the image's quality when displaying it. I tried to post images here, but Stack Overflow is denying me that privilege, so here's a link to what's going on.
http://i.imgur.com/QpUOTOF.png
The sun in this image is the problem I'm speaking of. Any ideas?
On-the-fly image resizing is quick and of low quality. For bundled images, it is worth the extra bundle space to include downsized versions. For downloaded images, you can achieve better results by resizing with Core Graphics into a new UIImage before you set the image property.
CGSize newSize = CGSizeMake(newWidth, newHeight);
UIGraphicsBeginImageContextWithOptions(newSize,  // context size
                                       NO,       // opaque?
                                       0);       // image scale; 0 means "device screen scale"
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
[bigImage drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
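You can then assign the result to your image view (imageView here is a placeholder name for whatever view you are populating):
imageView.image = newImage;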
Use the following method to get an image scaled to a specific width and height:
+ (UIImage *)resizeImage:(UIImage *)image withWidth:(int)width withHeight:(int)height
{
    CGSize newSize = CGSizeMake(width, height);
    float widthRatio = newSize.width / image.size.width;
    float heightRatio = newSize.height / image.size.height;
    // Scale by the smaller ratio so the whole image fits (aspect fit)
    if (widthRatio > heightRatio) {
        newSize = CGSizeMake(image.size.width * heightRatio, image.size.height * heightRatio);
    } else {
        newSize = CGSizeMake(image.size.width * widthRatio, image.size.height * widthRatio);
    }
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
This method returns a new image scaled to fit the size you specify, preserving the original aspect ratio.
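For example (ImageUtils is a hypothetical class name; the answer doesn't say where the method lives):
UIImage *scaled = [ImageUtils resizeImage:bigImage withWidth:280 withHeight:280];
self.imageView.image = scaled;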
How big is your image, and what is the size of the image view? Don't rely on UIImageView to scale it down for you; you probably need to resize it manually, which is also a bit more memory efficient.
I use categories like these:
>>>github link <<<
to do image resizing.
This also gives you some other nice functions, for rounded corners etc.
Also keep in mind that you need a transparent border at the edge of an image if you want to rotate it, to avoid aliasing.
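A minimal sketch of adding such a border, assuming a one-point transparent inset is enough (the helper name is mine, not from the categories linked above):
- (UIImage *)imageWithTransparentBorder:(UIImage *)image {
    CGSize padded = CGSizeMake(image.size.width + 2, image.size.height + 2);
    UIGraphicsBeginImageContextWithOptions(padded, NO, 0.0); // NO => transparent background
    [image drawInRect:CGRectMake(1, 1, image.size.width, image.size.height)];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}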

High Quality Round Corner Image in iPhone

In my app I want a high quality image. The image is loaded from the Facebook friend list. When the image is loaded at a small size (50 × 50), its quality is fine, but when I try to get it at a bigger size (280 × 280) the quality is diminished.
For the round corners I am doing this:
self.mImageView.layer.cornerRadius = 10.0;
self.mImageView.layer.borderColor = [UIColor blackColor].CGColor;
self.mImageView.layer.borderWidth = 1.0;
self.mImageView.layer.masksToBounds = YES;
To get the scaled image I am using the following code:
self.mImageView.image = [self imageWithImage:profileImage scaledToSize:CGSizeMake(280, 280)];
- (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
    UIGraphicsBeginImageContextWithOptions(newSize, YES, 0.0);
    CGContextRef context = UIGraphicsGetCurrentContext(); // no need to retain; the context is valid until UIGraphicsEndImageContext
    // Flip the coordinate system so CGContextDrawImage doesn't draw upside down
    CGContextTranslateCTM(context, 0.0, newSize.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextSetInterpolationQuality(context, kCGInterpolationLow);
    CGContextSetAllowsAntialiasing(context, TRUE);
    CGContextSetShouldAntialias(context, TRUE);
    CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, newSize.width, newSize.height), image.CGImage);
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
I have checked my code several times but cannot figure out how to make the image look right. How can the quality of the image be improved?
Thanks in advance.
…the quality is diminished.
The 'quality' of the image is still present. (Technically, you are introducing a small amount of error by resizing it, but that's not the real problem…)
So, you want to scale a 50x50px image to 280x280px? The information/detail does not exist in the source signal. Ideally, you would download a more appropriately sized image, for the size you want to display at.
If that's not an option, you could reduce pixelation by means of proper resampling and/or interpolation. This would simply smooth out the pixels your program magnifies by 5.6 -- the image would then look like a cross between pixelated and blurred (see CGContextSetAllowsAntialiasing, CGContextSetShouldAntialias, CGContextSetInterpolationQuality and related APIs to accomplish this using quartz).
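Applied to the code in the question, that mostly means raising the interpolation quality and letting drawInRect: handle the flip for you; a minimal sketch, assuming profileImage as above:
UIGraphicsBeginImageContextWithOptions(CGSizeMake(280, 280), YES, 0.0);
CGContextSetInterpolationQuality(UIGraphicsGetCurrentContext(), kCGInterpolationHigh);
[profileImage drawInRect:CGRectMake(0, 0, 280, 280)]; // drawInRect: flips for you, no CTM work needed
UIImage *smoothed = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();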

OpenGL ES (iPhone) alpha blending looks weird

I'm writing a game for iPhone in OpenGL ES, and I'm experiencing a problem with alpha blending:
I'm using glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) to achieve alpha blending and trying to compose a scene with several "layers" so I can move them separately instead of having a static image. I created a preview in Photoshop and then tried to achieve the same result on the iPhone, but a black halo is shown when I blend a texture with semi-transparent regions.
I attached an image. On the left is the screenshot from the iPhone, and on the right is what it looks like when I make the composition in Photoshop. The image is composed of a gradient and a sand image with feathered edges.
Is this the expected behaviour? Is there any way I can avoid the dark borders?
Thanks.
EDIT: I'm uploading the portion of the PNG containing the sand. The complete PNG is 512x512 and contains other images too.
I'm loading the image using the following code:
NSString *path = [NSString stringWithUTF8String:filePath];
NSData *texData = [[NSData alloc] initWithContentsOfFile:path];
UIImage *image = [[UIImage alloc] initWithData:texData];
if (image == nil) NSLog(@"ERROR LOADING TEXTURE IMAGE");
GLuint width = CGImageGetWidth(image.CGImage);
GLuint height = CGImageGetHeight(image.CGImage);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
void *imageData = malloc(height * width * 4);
// Note: kCGImageAlphaPremultipliedLast means the buffer holds premultiplied RGBA
CGContextRef context = CGBitmapContextCreate(imageData, width, height, 8, 4 * width, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextClearRect(context, CGRectMake(0, 0, width, height));
CGContextTranslateCTM(context, 0, height - height); // effectively a no-op translate
CGContextDrawImage(context, CGRectMake(0, 0, width, height), image.CGImage);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);
CGContextRelease(context);
free(imageData);
[image release];
[texData release];
I need to answer my own question:
I couldn't make it work using the ImageIO framework, so I added the libpng sources to my project and loaded the image with libpng. It works perfectly now, but first I had to solve the following problem:
The image loaded and displayed fine in the simulator but did not load at all on the real device. I found on the web that the pixel ordering in PNG image files is converted from RGBA to BGRA, and the color values are premultiplied by the alpha channel value as well, by the compression utility 'pngcrush' (for device-specific efficiency reasons, when programming with the UIKit interface).
The utility also renames a header of the file, making the new PNG unusable by libpng. These changes are applied automatically when PNG files are deployed onto the iPhone. While this is fine for UIKit, libpng (and other non-Apple libraries) generally can't read the files afterwards.
The simple solutions are:
- rename your PNG files with a different extension, or
- for your iPhone device build, add the user-defined build setting:
IPHONE_OPTIMIZE_OPTIONS = -skip-PNGs
I did the second, and it now works perfectly on both the simulator and the device.
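As an aside I have not tried myself: iOS also exposes the GL_APPLE_texture_format_BGRA8888 extension, so if you do end up with BGRA pixel data you can upload it directly rather than swizzling bytes yourself; a minimal sketch:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_BGRA_EXT, GL_UNSIGNED_BYTE, imageData); // internal format stays GL_RGBA; only the source format is BGRA
Note that such data is typically still premultiplied, which pairs with the glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA) suggestion further down.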
Your screenshot and photoshop mockup suggest that the image's color channels are being premultiplied against the alpha channel.
I have no idea what your original source images look like, but to me it looks like it is blending correctly. With the blend mode you have, you're going to get muddy blends between the layers.
The Photoshop version looks like you've got proper transparency for each layer, but not blending. I suppose you could experiment with glAlphaFunc if you didn't want to explicitly set the pixel alphas exactly.
--- Code relating to comment below (removing alpha pre-multiplication) ---
int pixelcount = width * height;
unsigned char *off = pixeldata;
for (int pi = 0; pi < pixelcount; ++pi)
{
    unsigned char alpha = off[3];
    if (alpha != 255 && alpha != 0)
    {
        // Un-premultiply: scale each color channel back up by 255/alpha
        off[0] = ((int)off[0]) * 255 / alpha;
        off[1] = ((int)off[1]) * 255 / alpha;
        off[2] = ((int)off[2]) * 255 / alpha;
    }
    off += 4; // advance to the next RGBA pixel
}
I am aware this post is ancient; however, I had the identical problem, and after attempting some of the solutions and agonising for days, I discovered that you can solve the premultiplied RGBA PNG issue by using the following blending parameters:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
The GL_ONE parameter replaced the GL_SRC_ALPHA parameter in my case.
I can now use my RGBA PNGs without the gray alpha edges effect which made my glyph text look nasty.
Edit:
Okay, one more thing: for fading etc. (setting the alpha channel in code), you will need to premultiply manually when blending is set up as above, like so:
glColor4f(r*a, g*a, b*a, a);

Texture from UIColor?

I am drawing a pie chart, each slice has a different color. I need to give the slices a textured look, not just the plain color. Any ideas how to do this? I don't want to use a image to use as a texture for all the possible colors. So I need to generate a texture or something like that. Any ideas. Thank You!
PS: this is an iPhone project. (I can't use Core Image.)
Use colorWithPatternImage with UIColor.
Edit: Sorry, I should have read the question properly.
You will need to use a UIGraphicsContext to create an image you can use in colorWithPatternImage. I would suggest using a grayscale image that you can load in, tint with a method similar to this, then use as a pattern in UIColor.
So you would have a method along the lines of this:
- (UIColor *)texturedPatternWithTint:(UIColor *)tint {
    UIImage *texture = [UIImage imageNamed:@"texture.png"];
    CGRect wholeImage = CGRectMake(0, 0, texture.size.width, texture.size.height);
    UIGraphicsBeginImageContextWithOptions(texture.size, NO, 0.0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextDrawImage(context, wholeImage, texture.CGImage); // CGContextDrawImage draws flipped, which is irrelevant for most repeating textures
    CGContextSetBlendMode(context, kCGBlendModeMultiply);
    CGContextSetFillColorWithColor(context, tint.CGColor); // safer than passing raw components to CGContextSetFillColor
    CGContextFillRect(context, wholeImage); // tint the whole texture, not self.bounds
    UIImage *tintedTexture = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return [UIColor colorWithPatternImage:tintedTexture];
}
(not tested)
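Typical usage when filling a pie slice might look like this (a sketch; context and the slice's current path are assumed to exist in your drawing code):
UIColor *sliceColor = [self texturedPatternWithTint:[UIColor redColor]];
CGContextSetFillColorWithColor(context, sliceColor.CGColor);
CGContextFillPath(context); // fills the current slice path with the tinted pattern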