I have the source code for a video decoder application written in C, which I'm now porting to the iPhone.
My problem is as follows:
I have RGBA pixel data for a frame in a buffer that I need to display on the screen. The buffer is of type unsigned char. (I cannot change it to any other data type, as the source code is huge and not written by me.)
Most of the links I found on the net explain how to "draw and display pixels" on the screen or how to "display pixels present in an array", but none of them explain how to display pixel data that is already sitting in a buffer.
I'm planning to use Quartz 2D. All I need to do is display the buffer contents on the screen, with no modifications. Although the problem sounds very simple, I couldn't find any API or document that does exactly this.
Kindly help!
Thanks in advance.
You can use a CGBitmapContext to create a CGImage from raw pixel data. I've quickly written a basic example:
- (CGImageRef)drawBufferWidth:(size_t)width height:(size_t)height pixels:(void *)pixels
{
    unsigned char (*buf)[width][4] = pixels;

    static CGColorSpaceRef csp = NULL;
    if (!csp) {
        csp = CGColorSpaceCreateDeviceRGB();
    }

    CGContextRef ctx = CGBitmapContextCreate(
        buf,
        width,
        height,
        8,          // bits per component
        width * 4,  // bytes per row (4 bytes per pixel)
        csp,
        kCGImageAlphaPremultipliedLast
    );

    CGImageRef img = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    return img;
}
You can call this method like this (I've used a view controller):
- (void)viewDidLoad
{
    [super viewDidLoad];

    const size_t width = 320;
    const size_t height = 460;
    unsigned char (*buf)[width][4] = malloc(sizeof(*buf) * height);

    // fill up `buf` here
    for (int x = 0; x < width; x++) {
        for (int y = 0; y < height; y++) {
            buf[y][x][0] = x * 255 / width;
            buf[y][x][1] = y * 255 / height;
            buf[y][x][2] = 0;
            buf[y][x][3] = 255;
        }
    }

    CGImageRef img = [self drawBufferWidth:width height:height pixels:buf];
    self.imageView.image = [UIImage imageWithCGImage:img];
    CGImageRelease(img);
    // note: free() `buf` once it is no longer needed
}
I first convert an image to raw pixels and then convert the pixels back to a UIImage. After the conversion the image's colors change and it also becomes partly transparent. I have tried a lot but am not able to find the problem. Here is my code:
-(UIImage*)markPixels:(NSMutableArray*)pixels OnImage:(UIImage*)image{
    CGImageRef inImage = image.CGImage;
    // Create an off-screen bitmap context to draw the image into. Format ARGB is 4 bytes per pixel: Alpha, Red, Green, Blue
    CGContextRef cgctx = [self createARGBBitmapContextFromImage:inImage];
    if (cgctx == NULL) { return nil; /* error */ }
    size_t w = CGImageGetWidth(inImage);
    size_t h = CGImageGetHeight(inImage);
    CGRect rect = {{0,0},{w,h}};

    // Draw the image to the bitmap context. Once we draw, the memory
    // allocated for the context for rendering will then contain the
    // raw image data in the specified color space.
    CGContextDrawImage(cgctx, rect, inImage);

    // Now we can get a pointer to the image data associated with the bitmap
    // context.
    int r = 3;
    int p = 2*r+1;
    unsigned char* data = CGBitmapContextGetData (cgctx);
    int i = 0;
    while (data[i]&&data[i+1]) {
        // NSLog(@"%d",pixels[i]);
        i++;
    }
    NSLog(@"%d %zd %zd",i,w,h);
    NSLog(@"%ld",sizeof(CGBitmapContextGetData (cgctx)));

    for(int i = 0; i< pixels.count-1 ; i++){
        NSValue*touch1 = [pixels objectAtIndex:i];
        NSValue*touch2 = [pixels objectAtIndex:i+1];
        NSArray *linePoints = [self returnLinePointsBetweenPointA:[touch1 CGPointValue] pointB:[touch2 CGPointValue]];
        for(NSValue *touch in linePoints){
            NSLog(@"point = %@",NSStringFromCGPoint([touch CGPointValue]));
            CGPoint location = [touch CGPointValue];
            for(int i = -r ; i<p ;i++)
                for(int j= -r; j<p;j++)
                {
                    if(i<=0 && j<=0 && i>image.size.height && j>image.size.width)
                        continue;
                    NSInteger index = (location.y+i) * w*4 + (location.x+j)* 4;
                    index = 0;
                    data[index +3] = 125;
                }
        }
    }

    // When finished, release the context
    CGContextRelease(cgctx);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGDataProviderRef dp = CGDataProviderCreateWithData(NULL, data, w*h*4, NULL);
    CGImageRef img = CGImageCreate(w, h, 8, 32, 4*w, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big, dp, NULL, NO, kCGRenderingIntentDefault);
    UIImage* ret_image = [UIImage imageWithCGImage:img];
    CGImageRelease(img);
    CGColorSpaceRelease(colorSpace);

    // Free image data memory for the context
    if (data) { free(data); }

    return ret_image;
}
The first one is the original image and the second is after applying this code.
You have to ask the CGImageRef whether it uses alpha or not, and what format the components per pixel are in - look at all the CGImageGet... functions. Most likely the image is not ARGB but BGRA.
I often create and render pure green images and then print out the first pixel to make sure I got it right (BGRA -> 0 255 0 255), etc. It really gets confusing with host byte order and alpha-first versus alpha-last (is that before or after host order is applied?).
EDIT: You told the CGDataProviderCreateWithData to use 'kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big', but I don't see you asking the original image how it is configured. My guess is that changing 'kCGBitmapByteOrder32Big' to 'kCGBitmapByteOrder32Little' will fix your problem, but the alpha may be wrong too.
Images can have different alpha and byte-order settings, so you really need to ask the original image how it is configured and then adapt to that (or remap the bytes in memory to whatever format you want).
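For example, a quick (untested) sketch of querying the source image before choosing flags for the new one - inImage is the CGImageRef from the question's code:

// Query the original CGImage for its layout before picking flags for the copy.
CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(inImage);
CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(inImage);
size_t bitsPerComponent = CGImageGetBitsPerComponent(inImage);
size_t bitsPerPixel = CGImageGetBitsPerPixel(inImage);
size_t bytesPerRow = CGImageGetBytesPerRow(inImage);

NSLog(@"alpha %u, bitmapInfo %u, %zu bits/component, %zu bits/pixel, %zu bytes/row",
      (unsigned)alphaInfo, (unsigned)bitmapInfo, bitsPerComponent, bitsPerPixel, bytesPerRow);

// When rebuilding the image from the raw buffer, reuse the values reported here
// instead of hard-coding kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big.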
I'm converting some image drawing code from Cairo to Quartz. I'm slowly making progress and learning Quartz along the way, but I've run into a problem with the image format.
In the Cairo version it works like this:
unsigned short *d = (unsigned short*)imageSurface->get_data();
int stride = imageSurface->get_stride() >> 1;
int height = imageHeight;
int width = imageWidth;

do {
    *d++ = *p++; // p = raw image data
    width --;
    if( width == 0 ) {
        height --;
        width = imageWidth;
        d += stride;
    }
} while( height );
This produces the image as expected on the Cairo::ImageSurface. I've converted it over to use Quartz and am making progress, but I'm not sure where I'm going wrong:
NSInteger pixelLen = (width * height) * 8;
unsigned char *d = (unsigned char*)malloc(pixelLen);
unsigned char *rawPixels = d;

int height = imageHeight;
int width = imageWidth;

do {
    *d++ = *p++; // p = raw image data
    width --;
    if( width == 0 ) {
        height --;
        width = imageWidth;
    }
} while( height );

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(rawPixels, imageWidth, imageHeight, 8, tileSize * sizeof(int), colorSpace, kCGBitmapByteOrderDefault);
CGImageRef image = CGBitmapContextCreateImage(context);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);

UIImage *resultUIImage = [UIImage imageWithCGImage:image];
CGImageRelease(image);
This is obviously heading in the right direction, as it produces something that looks a bit like the desired image, but it creates 4 copies of the image in a row, each with different pixels filled in. I'm assuming this is an interlaced image (I don't know a great deal about image formats) and that I somehow need to combine them to create a complete image, but I don't know how to do that with Quartz.
I think the stride has something to do with the problem, but from what I understand it is the byte distance from one row of pixels to the next, which would not be relevant in the context of Quartz?
It sounds like stride corresponds to rowBytes or bytesPerRow. This value is important because it is not necessarily equal to width * bytesPerPixel; rows might be padded to optimized offsets.
It's not completely obvious what the Cairo code is doing, and it doesn't look quite correct either. Either way, without the stride part, your loop makes no sense because it makes an exact copy of the bytes.
The loop in the Cairo code is copying a row of bytes, then jumping over the next row of data.
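For illustration, here is a minimal sketch of copying tightly packed source rows into a Quartz bitmap context while honoring its bytesPerRow (src, width, height, and context are assumed to already exist; src holds width * height RGBA pixels with no padding):

// Copy tightly packed source rows into a (possibly padded) destination buffer.
size_t srcBytesPerRow = width * 4;
size_t dstBytesPerRow = CGBitmapContextGetBytesPerRow(context);
unsigned char *dst = CGBitmapContextGetData(context);

for (size_t row = 0; row < height; row++) {
    memcpy(dst + row * dstBytesPerRow,   // destination row start (may be padded)
           src + row * srcBytesPerRow,   // source row start (tightly packed)
           srcBytesPerRow);              // copy one row's worth of pixels
}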
I am looking for a way to get a histogram of an image on the iPhone. The OpenCV library is way too big to be included in my app (OpenCV is about 70MB compiled), but I can use OpenGL. However, I have no idea how to do either of these.
I have found out how to get the pixels of the image, but cannot form a histogram from them. This seems like it should be simple, but I don't know how to store the uint8_t values into an array.
Here is the relevant question/answer for finding pixels:
Getting RGB pixel data from CGImage
The uint8_t* is just a pointer to a C array containing the bytes of the given pixel, i.e. {r, g, b, a} or whatever the byte layout of your image buffer is.
So, referencing the link you provided, and the definition of histogram:
//Say we're in the inner loop and we have a given pixel in rgba format
const uint8_t* pixel = &bytes[row * bpr + col * bytes_per_pixel];

//Now save into histogram_counts, a uint32_t[4][256] array with planes r,g,b,a
//or you could just do one plane for brightness
//If you want to handle data besides rgba, use bytes_per_pixel instead of 4
for (int i=0; i<4; i++) {
    //Increment count of pixels with this value
    histogram_counts[i][pixel[i]]++;
}
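Putting it together, here is a rough sketch of building the whole histogram from a UIImage (image here is assumed; the sketch draws it into an RGBA bitmap context as in the linked answer and then counts each component value):

// Draw the image into an RGBA bitmap context and count component values.
CGImageRef cgImage = image.CGImage;
size_t width  = CGImageGetWidth(cgImage);
size_t height = CGImageGetHeight(cgImage);
size_t bytesPerPixel = 4;
size_t bytesPerRow   = bytesPerPixel * width;

uint8_t *bytes = calloc(height * bytesPerRow, 1);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(bytes, width, height, 8, bytesPerRow,
                                         colorSpace, kCGImageAlphaPremultipliedLast);
CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);

// 4 planes (r, g, b, a), 256 bins each
uint32_t histogram[4][256] = {{0}};
for (size_t row = 0; row < height; row++) {
    for (size_t col = 0; col < width; col++) {
        const uint8_t *pixel = &bytes[row * bytesPerRow + col * bytesPerPixel];
        for (int i = 0; i < 4; i++) {
            histogram[i][pixel[i]]++;
        }
    }
}

CGContextRelease(ctx);
CGColorSpaceRelease(colorSpace);
free(bytes);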
You can get the RGB colors of your image via its CGImageRef. Take a look at the method below, which I used for this (the helper macros it uses are shown after the code):
- (UIImage *)processUsingPixels:(UIImage*)inputImage {
    // 1. Get the raw pixels of the image
    UInt32 * inputPixels;

    CGImageRef inputCGImage = [inputImage CGImage];
    NSUInteger inputWidth = CGImageGetWidth(inputCGImage);
    NSUInteger inputHeight = CGImageGetHeight(inputCGImage);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    NSUInteger bytesPerPixel = 4;
    NSUInteger bitsPerComponent = 8;
    NSUInteger inputBytesPerRow = bytesPerPixel * inputWidth;

    inputPixels = (UInt32 *)calloc(inputHeight * inputWidth, sizeof(UInt32));

    CGContextRef context = CGBitmapContextCreate(inputPixels, inputWidth, inputHeight,
                                                 bitsPerComponent, inputBytesPerRow, colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);

    // 2. Draw the image into the context so `inputPixels` holds its raw pixels
    CGContextDrawImage(context, CGRectMake(0, 0, inputWidth, inputHeight), inputCGImage);

    // 3. Convert the image to Black & White
    for (NSUInteger j = 0; j < inputHeight; j++) {
        for (NSUInteger i = 0; i < inputWidth; i++) {
            UInt32 * currentPixel = inputPixels + (j * inputWidth) + i;
            UInt32 color = *currentPixel;

            // Average of RGB = greyscale
            UInt32 averageColor = (R(color) + G(color) + B(color)) / 3.0;

            *currentPixel = RGBAMake(averageColor, averageColor, averageColor, A(color));
        }
    }

    // 4. Create a new UIImage
    CGImageRef newCGImage = CGBitmapContextCreateImage(context);
    UIImage * processedImage = [UIImage imageWithCGImage:newCGImage];

    // 5. Cleanup!
    CGImageRelease(newCGImage);
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(context);
    free(inputPixels);

    return processedImage;
}
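The R(), G(), B(), A() and RGBAMake() helpers are not part of Core Graphics; they are bit-mask macros the snippet assumes. For the kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big layout used above, definitions along these lines would work:

// Pixels are stored R,G,B,A in memory; read as a UInt32 on a little-endian
// device (like the iPhone), R ends up in the lowest byte.
#define Mask8(x) ( (x) & 0xFF )
#define R(x) ( Mask8(x) )
#define G(x) ( Mask8((x) >> 8 ) )
#define B(x) ( Mask8((x) >> 16) )
#define A(x) ( Mask8((x) >> 24) )
#define RGBAMake(r, g, b, a) \
    ( Mask8(r) | Mask8(g) << 8 | Mask8(b) << 16 | Mask8(a) << 24 )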
I'm having real trouble with this. I have some raw RGB data, values from 0 to 255, and want to display it as an image on the iPhone, but I can't find out how to do it. Can anyone help? I think I might need to use CGImageCreate but I just don't get it. I tried looking at the class reference and am feeling quite stuck.
All I want is a 10x10 greyscale image generated from some calculations, and if there is an easy way to create a PNG or something, that would be great.
A terribly primitive example, similar to Mats' suggestion, but this version uses an external pixel buffer (pixelData):
const size_t Width = 10;
const size_t Height = 10;
const size_t Area = Width * Height;
const size_t ComponentsPerPixel = 4; // rgba

uint8_t pixelData[Area * ComponentsPerPixel];

// fill the pixels with a lovely opaque blue gradient:
for (size_t i=0; i < Area; ++i) {
    const size_t offset = i * ComponentsPerPixel;
    pixelData[offset] = i;
    pixelData[offset+1] = i;
    pixelData[offset+2] = i + i; // enhance blue
    pixelData[offset+3] = UINT8_MAX; // opaque
}

// create the bitmap context:
const size_t BitsPerComponent = 8;
const size_t BytesPerRow = ((BitsPerComponent * Width) / 8) * ComponentsPerPixel;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef gtx = CGBitmapContextCreate(&pixelData[0], Width, Height, BitsPerComponent, BytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast);

// create the image:
CGImageRef toCGImage = CGBitmapContextCreateImage(gtx);
UIImage * uiimage = [[UIImage alloc] initWithCGImage:toCGImage];
NSData * png = UIImagePNGRepresentation(uiimage);

// remember to cleanup your resources! :)
CGImageRelease(toCGImage);
CGContextRelease(gtx);
CGColorSpaceRelease(colorSpace);
Use CGBitmapContextCreate() to create a memory-based bitmap for yourself. Then call CGBitmapContextGetData() to get a pointer to the pixel memory for your drawing code, and finally CGBitmapContextCreateImage() to create a CGImageRef.
I hope this is sufficient to get you started.
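For the 10x10 greyscale case from the question, a minimal (untested) sketch of that call sequence using a single-channel grey color space might look like this:

// 10x10, one grey byte per pixel, no alpha; Quartz allocates the pixel memory.
const size_t width = 10, height = 10;
CGColorSpaceRef grey = CGColorSpaceCreateDeviceGray();
CGContextRef ctx = CGBitmapContextCreate(NULL, width, height,
                                         8,   // bits per component
                                         0,   // let Quartz choose bytes per row
                                         grey, (CGBitmapInfo)kCGImageAlphaNone);

size_t bytesPerRow = CGBitmapContextGetBytesPerRow(ctx);
uint8_t *pixels = CGBitmapContextGetData(ctx);
for (size_t y = 0; y < height; y++) {
    for (size_t x = 0; x < width; x++) {
        // replace with your own calculated grey value
        pixels[y * bytesPerRow + x] = (uint8_t)((x + y) * 255 / (width + height - 2));
    }
}

CGImageRef img = CGBitmapContextCreateImage(ctx);
NSData *png = UIImagePNGRepresentation([UIImage imageWithCGImage:img]);

CGImageRelease(img);
CGContextRelease(ctx);
CGColorSpaceRelease(grey);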
On Mac OS you could do it with an NSBitmapImageRep. For iOS it seems to be a bit more complicated. I found this blog post though:
http://paulsolt.com/2010/09/ios-converting-uiimage-to-rgba8-bitmaps-and-back/
I'm trying to do something very simple:
1) Draw a UIImage into a CG bitmap context
2) Get a pointer to the data of the image
3) iterate over all pixels and just set all R G B components to 0 and alpha to 255. The result should appear pure black.
This is the original image I am using. 200 x 40 pixels, PNG-24 ARGB premultiplied alpha (All alpha values == 255):
This is the result (screenshot from the Simulator) when I do not modify the pixels. Looks good:
This is the result when I do the modifications! It looks as if the modification were incomplete. But the for-loops went over EVERY single pixel; the counter proves it: the console reports modifiedPixels = 8000, which is exactly 200 x 40 pixels. It always looks exactly the same.
Note: The PNG image I use has no alpha < 255. So no transparent pixels.
This is how I create the context. Nothing special...
int bitmapBytesPerRow = (width * 4);
int bitmapByteCount = (bitmapBytesPerRow * imageHeight);

colorSpace = CGColorSpaceCreateDeviceRGB();
bitmapData = malloc(bitmapByteCount);

bitmapContext = CGBitmapContextCreate(bitmapData,
                                      width,
                                      height,
                                      8, // bits per component
                                      bitmapBytesPerRow,
                                      colorSpace,
                                      kCGImageAlphaPremultipliedFirst);
Next, I draw the image into that bitmapContext, and obtain the data like this:
void *data = CGBitmapContextGetData(bitmapContext);
This is the code which iterates over the pixels to modify them:
size_t bytesPerRow = CGImageGetBytesPerRow(img);
NSInteger modifiedPixels = 0;

for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        long int offset = bytesPerRow * y + 4 * x;

        // ARGB
        unsigned char alpha = data[offset];
        unsigned char red = data[offset+1];
        unsigned char green = data[offset+2];
        unsigned char blue = data[offset+3];

        data[offset] = 255;
        data[offset+1] = 0;
        data[offset+2] = 0;
        data[offset+3] = 0;

        modifiedPixels++;
    }
}
When done, I obtain a new UIImage from the bitmap context and display it in a UIImageView, to see the result:
CGImageRef imageRef = CGBitmapContextCreateImage(bitmapContext);
UIImage *img = [[UIImage alloc] initWithCGImage:imageRef];
Question:
What am I doing wrong?
Is this happening because I modify the data while iterating over it? Must I duplicate it?
Use CGBitmapContextGetBytesPerRow(bitmapContext) to get bytesPerRow instead of getting it from the image (the image may use only 3 bytes per pixel if it has no alpha information).
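In other words, a sketch of the corrected loop, reusing the question's bitmapContext, width, and height:

// Use the context's row stride, which may include padding, not the source image's.
size_t bytesPerRow = CGBitmapContextGetBytesPerRow(bitmapContext);
unsigned char *data = CGBitmapContextGetData(bitmapContext);

for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        long int offset = bytesPerRow * y + 4 * x;
        data[offset]     = 255; // A
        data[offset + 1] = 0;   // R
        data[offset + 2] = 0;   // G
        data[offset + 3] = 0;   // B
    }
}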
You might be getting the wrong height or width... and by the way, 240 x 40 = 9600, not 8000, so you're definitely not iterating over each and every pixel.