How to convert .jpg image to .bmp format using Objective C? - iphone

Does anyone know how to convert a .jpg image to .bmp format on the iPhone using Objective-C?
And how do I process (or get the RGB color of) each pixel of an image captured by the iPhone camera?
Do I need to convert the image type first?

You won't easily be able to get a BMP representation on the iPhone. In Cocoa on the Mac, this is handled by the NSBitmapImageRep class and is pretty straightforward, as outlined below.
At a high level, you need to get the .jpg into an NSBitmapImageRep object and then let the frameworks handle the conversion for you:
a. Convert the JPG image to an NSBitmapImageRep
b. Use the built-in NSBitmapImageRep methods to save in the desired format.
NSBitmapImageRep *origImage = [self documentAsBitmapImageRep:[NSURL fileURLWithPath:pathToJpgImage]];
NSData *bmpData = [origImage representationUsingType:NSBMPFileType properties:nil];
- (NSBitmapImageRep *)documentAsBitmapImageRep:(NSURL *)urlOfJpg
{
    CIImage *anImage = [CIImage imageWithContentsOfURL:urlOfJpg];
    CGRect outputExtent = [anImage extent];

    // Create a new NSBitmapImageRep.
    NSBitmapImageRep *theBitMapToBeSaved = [[NSBitmapImageRep alloc]
        initWithBitmapDataPlanes:NULL
                      pixelsWide:outputExtent.size.width
                      pixelsHigh:outputExtent.size.height
                   bitsPerSample:8
                 samplesPerPixel:4
                        hasAlpha:YES
                        isPlanar:NO
                  colorSpaceName:NSDeviceRGBColorSpace
                     bytesPerRow:0
                    bitsPerPixel:0];

    // Create an NSGraphicsContext that draws into the NSBitmapImageRep.
    NSGraphicsContext *nsContext = [NSGraphicsContext graphicsContextWithBitmapImageRep:theBitMapToBeSaved];

    // Save the previous graphics context and state, and make our bitmap context current.
    [NSGraphicsContext saveGraphicsState];
    [NSGraphicsContext setCurrentContext:nsContext];

    CGPoint p = CGPointMake(0.0, 0.0);

    // Get a CIContext from the NSGraphicsContext, and use it to draw the CIImage into the NSBitmapImageRep.
    [[nsContext CIContext] drawImage:anImage atPoint:p fromRect:outputExtent];

    // Restore the previous graphics context and state.
    [NSGraphicsContext restoreGraphicsState];

    return [theBitMapToBeSaved autorelease];
}
On the iPhone, BMP is not directly supported by UIKit, so you would have to drop down into Quartz/Core Graphics and manage the transformation yourself.
Pixel-by-pixel processing is much more involved. Again, you should get intimately familiar with the Core Graphics capabilities on the device if this is a hard requirement for you.

Load the JPG image into a UIImage, which UIKit can handle natively.
Then you can grab the CGImageRef from the UIImage object.
Create a new bitmap CG image context with the same properties of the image you already have, and provide your own data buffer to hold the bytes of the bitmap context.
Draw the original image into the new bitmap context: the bytes in your provided buffer are now the image's pixels.
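Roughly, the first four steps might look like this in Objective-C (the variable names and the RGBA layout are my own choices; error handling is omitted):

```objc
UIImage *jpeg = ...; // your JPG, loaded however you like
CGImageRef cgImage = jpeg.CGImage;

size_t width  = CGImageGetWidth(cgImage);
size_t height = CGImageGetHeight(cgImage);
size_t bytesPerRow = width * 4;

// Our own buffer: after the draw, this holds the raw RGBA pixels.
uint8_t *pixelData = calloc(height, bytesPerRow);

CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(pixelData, width, height,
                                         8,            // bits per component
                                         bytesPerRow,
                                         rgb, kCGImageAlphaPremultipliedLast);
CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);

// pixelData now contains the bitmap; feed it to your BMP encoder.
CGContextRelease(ctx);
CGColorSpaceRelease(rgb);
free(pixelData);
```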
Now you need to encode the actual BMP file, which isn't functionality that exists in UIKit or Core Graphics (as far as I know). Fortunately, it's an intentionally trivial format: I've written quick-and-dirty BMP encoders in an hour or less. Here's the spec: http://www.fileformat.info/format/bmp/egff.htm (Version 3 should be fine, unless you need to support alpha, but coming from JPEG you probably don't.)
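To make the encoding step concrete, here is a minimal sketch of a Version 3 encoder in plain C: a 24-bit, uncompressed, bottom-up BMP. The function name and the top-down RGB input layout are my own choices, not from the spec.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Little-endian field writers: BMP headers are always little-endian. */
static void put16(uint8_t *p, uint16_t v) { p[0] = v & 0xFF; p[1] = v >> 8; }
static void put32(uint8_t *p, uint32_t v) {
    p[0] = v & 0xFF; p[1] = (v >> 8) & 0xFF; p[2] = (v >> 16) & 0xFF; p[3] = (v >> 24) & 0xFF;
}

/* rgb is top-down, 3 bytes per pixel (R, G, B). Returns 0 on success. */
int write_bmp24(const char *path, const uint8_t *rgb, int width, int height)
{
    int rowBytes = (width * 3 + 3) & ~3;            /* each row padded to 4 bytes */
    uint32_t pixelBytes = (uint32_t)rowBytes * (uint32_t)height;
    uint8_t header[54] = {0};                       /* 14-byte file + 40-byte info header */

    header[0] = 'B'; header[1] = 'M';
    put32(header + 2,  54 + pixelBytes);            /* total file size */
    put32(header + 10, 54);                         /* offset to pixel data */
    put32(header + 14, 40);                         /* BITMAPINFOHEADER size */
    put32(header + 18, (uint32_t)width);
    put32(header + 22, (uint32_t)height);           /* positive height = bottom-up */
    put16(header + 26, 1);                          /* color planes */
    put16(header + 28, 24);                         /* bits per pixel */
    put32(header + 34, pixelBytes);                 /* image size */

    FILE *f = fopen(path, "wb");
    if (!f) return -1;
    fwrite(header, 1, 54, f);

    /* BMP stores rows bottom-up and pixels as BGR. */
    uint8_t *row = calloc(1, (size_t)rowBytes);
    for (int y = height - 1; y >= 0; y--) {
        for (int x = 0; x < width; x++) {
            const uint8_t *src = rgb + (y * width + x) * 3;
            row[x * 3 + 0] = src[2];                /* B */
            row[x * 3 + 1] = src[1];                /* G */
            row[x * 3 + 2] = src[0];                /* R */
        }
        fwrite(row, 1, (size_t)rowBytes, f);
    }
    free(row);
    fclose(f);
    return 0;
}
```

The row padding to a multiple of 4 bytes, the bottom-up row order, and the BGR byte order are the three things quick encoders usually get wrong.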
Good luck.

How to merge several CGImage into a bigger CGImage?

I have created an app that applies a neural network (a convnet) to an input image, followed by post-processing. This convnet is basically a filter that takes an image (plus one parameter) as input and outputs an image of similar size. Since the convnet cannot process a large image in one pass due to memory issues, the image must be split into tiles, which are then stitched (or unified) back together after the model is applied. My problem is about image manipulation. It will be clearer once I present what is done in detail:
Input image is a UIImage
Split the input image into a list of UIImages called listInput
Create an empty listOutput
For each tile in listInput:
Convert the UIImage into a CGImage
Convert the CGImage into a CVPixelBuffer
Apply a CoreML model to the CVPixelBuffer which returns a CVPixelBuffer of the same size
Convert the CVPixelBuffer into a CIImage
Convert the CIImage into a CGImage
Convert the CGImage into a UIImage
Append the UIImage into listOutput
Unify all the tiles in listOutput into an output UIImage
Fuse input and output UIImage (post-processing):
Convert input UIImage into CGImage then to CIImage
Convert output UIImage into CGImage then to CIImage
Fuse the 2 CIImage using a CIFilter
Convert the resulting CIImage into a CGImage
Convert the CGImage into a UIImage
I can post the code corresponding to any of the part listed above if needed.
The general problem I have is all the conversions from UIImage to CGImage to CIImage and conversely. I'm trying to get rid of UIImage completely (except for loading the image). I want to manipulate CGImages from start to finish. That alone would simplify the code.
I've modified my code to manipulate lists of CGImages instead of lists of UIImages. The cropping part is in fact simpler with CGImage than with UIImage. But I cannot figure out the other way around: unifying CGImages into a bigger image. This is my specific problem. Below is the function I've created to unify the UIImages.
func unifyTiles(listTiles: [UIImage], listRect: [CGRect]) -> UIImage? {
    // `input` is the original full-size image, a property of the class.
    guard let input = input else {
        return nil
    }
    let outputSize = CGSize(width: Int(input.size.width), height: Int(input.size.height))
    UIGraphicsBeginImageContextWithOptions(outputSize, true, 1.0)
    defer { UIGraphicsEndImageContext() }   // end the context even on early return
    for i in 0..<listTiles.count {
        listTiles[i].draw(at: listRect[i].origin)
    }
    return UIGraphicsGetImageFromCurrentImageContext()
}
So my question is:
Is it possible to do the same just manipulating CGImage?
Is it even a good idea?
Some notes:
The post-processing must be separated from the previous part because the user wants to modify the post-processing parameters without reapplying the convnet. Applying the convnet is indeed very slow and can take up to a minute on large images, while the post-processing is near real-time.
In the post-processing part, it was suggested to me to convert directly between UIImage and CIImage without going through CGImage. For some reason that I don't know, this doesn't work, as far as I remember.
I'm aware that using Vision I could feed directly a CGImage into the network instead of a CVPixelBuffer, but I don't know if Vision can output a CGImage as well. This will be investigated soon hopefully.
Thanks for any information you could give me.
UIImage to CIImage
This step:
UIImage into CGImage then to CIImage
is overblown, as CIImage has an init(image:) initializer that goes directly from UIImage to CIImage.
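For example (note that both initializers exist, and the first is failable):

```swift
// UIImage -> CIImage directly, no CGImage in between:
let ciImage = CIImage(image: inputImage)

// and back again for display:
let uiImage = UIImage(ciImage: filteredImage)
```

Be aware that UIImage(ciImage:) produces a UIImage with no backing CGImage, which some APIs don't handle well; that may be the "doesn't work" the asker remembers.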
Cropping
You seem to think that cropping a CGImage is easier than cropping a UIImage, but it isn't. To crop a UIImage, just draw it into a smaller graphics context, offset so as to place the desired point at the top left of the crop.
Graphics Contexts
You can only draw in a graphics context, and that's going to be an image context whether you like it or not. There's no such thing as drawing into a "CGImage context" if that's what you're thinking. You can draw a CGImage directly into an image context, but the results are usually disastrous (as I shall explain in the next paragraph).
Final thoughts
In general I would like to set your mind at rest about UIImage. A UIImage is a good thing. It is (usually) a very lightweight wrapper around a CGImage. The CGImage is the bitmap data; the wrapper adds information like scale and orientation. Losing that information can cause your drawing to come out very badly; trying to draw a CGImage can cause the drawing to be flipped and incorrectly scaled. Don't do it! Use UIImage and be happy.

CVPixelBufferLockBaseAddress why? Capture still image using AVFoundation

I'm writing an iPhone app that creates still images from the camera using AVFoundation.
Reading the programming guide, I found some code that does almost everything I need, so I'm trying to "reverse engineer" it and understand it.
I'm having some difficulty understanding the part that converts a CMSampleBuffer into an image.
So here is what I understood, followed by the code.
The CMSampleBuffer represents a buffer in memory where the image is stored along with additional data. Later I call the function CMSampleBufferGetImageBuffer() to get a CVImageBuffer back with just the image data.
Now there is a function that I don't understand and can only guess at: CVPixelBufferLockBaseAddress(imageBuffer, 0). I can't tell whether it is a "thread lock" to avoid multiple operations on the buffer, or a lock on the buffer's address to avoid changes during the operation (and why should it change? Another frame? Isn't the data copied to another location?). The rest of the code is clear to me.
I tried searching on Google but still haven't found anything helpful.
Can someone shed some light on this?
- (UIImage *)getUIImageFromBuffer:(CMSampleBufferRef)sampleBuffer {
    // Get a CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);

    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);

    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // Create a bitmap graphics context with the sample buffer data
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow,
                                                 colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);

    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);

    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    // Free up the context and color space
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    // Create an image object from the Quartz image
    UIImage *image = [UIImage imageWithCGImage:quartzImage];

    // Release the Quartz image
    CGImageRelease(quartzImage);

    return image;
}
Thanks,
Andrea
The header file says that CVPixelBufferLockBaseAddress makes the memory "accessible". I'm not sure what that means exactly, but if you don't do it, CVPixelBufferGetBaseAddress fails so you'd better do it.
EDIT
The short answer is: just do it. As for why, consider that the image may not live in main memory; it may live in a texture on some GPU somewhere (Core Video works on the Mac too), or even be in a different format than what you expect, so the pixels you get are actually a copy. Without a Lock/Unlock or some kind of Begin/End pair, the implementation has no way to know when you've finished with the duplicate pixels, so they would effectively be leaked. CVPixelBufferLockBaseAddress simply gives Core Video scope information; I wouldn't get too hung up on it.
Yes, they could have simply returned the pixels from CVPixelBufferGetBaseAddress and eliminated CVPixelBufferLockBaseAddress altogether. I don't know why they didn't do that.
I'd like to add a few hints about this function based on some tests I've made. When you get the base address, you are probably getting the address of some shared memory resource. This becomes clear if you print the base address: while grabbing video frames, you can see the same addresses being repeated. In my app I take frames at specific intervals and pass the CVImageBufferRef to an NSOperation subclass that converts the buffer into an image and saves it on the phone. I do not lock the pixel buffer until the operation starts the conversion, and even when pushing higher framerates, the base address of the pixels and the CVImageBufferRef address are equal both before the creation of the NSOperation and inside it; I just retain the CVImageBufferRef. I was expecting to see mismatching references, and even though I didn't, I guess the best description is that CVPixelBufferLockBaseAddress locks the memory region where the buffer is located, making it inaccessible to other resources, so it keeps the same data until you unlock it.

Drawing imagedata from data set of integers on an iPad using OpenGL

I'm trying to draw an image using OpenGL in a project for iPad.
The image data:
A data blob of UInt8 that represents the grayscale value for each pixel in three dimensions (I'm going to draw slices from the 3D-body). I also have information on height and width for the image.
My current (unsuccessful) approach is to use it as a texture on a square, and I am looking at some example code I found on the net. That code, however, loads an image file from disk.
While setting up the view there is a call to CGContextDrawImage, and the last parameter is supposed to be a CGImageRef. Do you know how I can create one from my data, or is this a dead end?
Thankful for all input. I really haven't gotten the grip of OpenGL yet so please be gentle :-)
It's not a dead end.
You can create a CGImageRef from a blob of pixel memory by using CGBitmapContextCreate() to create a bitmap context and CGBitmapContextCreateImage() to create the image ref from that bitmap context.
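A minimal sketch for an 8-bit grayscale slice (here `pixels`, `width`, and `height` stand in for your own data):

```objc
// Build a CGImageRef from a width*height buffer of 8-bit grayscale values.
CGColorSpaceRef gray = CGColorSpaceCreateDeviceGray();
CGContextRef ctx = CGBitmapContextCreate(pixels, width, height,
                                         8,       // bits per component
                                         width,   // bytes per row: 1 byte per pixel
                                         gray, kCGImageAlphaNone);
CGImageRef image = CGBitmapContextCreateImage(ctx);

// ... pass `image` to CGContextDrawImage, or use it to fill your texture ...

CGImageRelease(image);
CGContextRelease(ctx);
CGColorSpaceRelease(gray);
```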

iPhone: How to use CGContextConcatCTM for saving a transformed image properly?

I am making an iPhone application that loads an image from the camera, and then the user can select a second image from the library, move/scale/rotate that second image, and then save the result. I use two UIImageViews in IB as placeholders, and then apply transformations while touching/pinching.
The problem comes when I have to save both images together. I use a rect of the size of the first image and pass it to UIGraphicsBeginImageContext. Then I tried to use CGContextConcatCTM but I can't understand how it works:
CGRect rect = CGRectMake(0, 0, img1.size.width, img1.size.height); // img1 from camera
UIGraphicsBeginImageContext(rect.size); // Start drawing
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextClearRect(ctx, rect); // Clear whole thing
[img1 drawAtPoint:CGPointZero]; // Draw background image at 0,0
CGContextConcatCTM(ctx, img2.transform); // Apply the transformations of the 2nd image
But what do I need to do next? What information is held in the img2.transform matrix? The documentation for CGContextConcatCTM doesn't help me much, unfortunately.
Right now I'm trying to solve it by calculating the points and the angle using trigonometry (with the help of this answer), but since the transformation is right there, there has to be an easier and more elegant way to do this, right?
Take a look at this excellent answer: you need to create a bitmap/image context, draw into it, and get the resultant image out. You can then save that. iOS UIImagePickerController result image orientation after upload
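Continuing the code from the question, the missing steps might look something like this (a sketch: `img2View` stands for the UIImageView the user moved/scaled/rotated, since a UIImage itself has no transform; the translate/concat/translate sequence applies the view's transform about its center, the same point it rotated about on screen):

```objc
CGRect rect = CGRectMake(0, 0, img1.size.width, img1.size.height);
UIGraphicsBeginImageContext(rect.size);
CGContextRef ctx = UIGraphicsGetCurrentContext();

// Draw the background image at 0,0.
[img1 drawAtPoint:CGPointZero];

CGContextSaveGState(ctx);
CGContextTranslateCTM(ctx, img2View.center.x, img2View.center.y);
CGContextConcatCTM(ctx, img2View.transform);
CGContextTranslateCTM(ctx, -img2View.bounds.size.width / 2.0,
                           -img2View.bounds.size.height / 2.0);
[img2View.image drawInRect:img2View.bounds];
CGContextRestoreGState(ctx);

UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
```

The img2.transform matrix holds the accumulated rotation/scale/translation applied to the view; concatenating it into the context's CTM makes subsequent drawing happen in that transformed coordinate space.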

How to use a CGLayer to draw multiple images offscreen

Ultimately I'm working on a box blur function for use on iPhone.
That function would take a UIImage and draw transparent copies, first to the sides, then take that image and draw transparent copies above and below, returning a nicely blurred image.
Reading the Drawing with Quartz 2D Programming Guide, it recommends using CGLayers for this kind of operation.
The example code in the guide is a little dense for me to understand, so I would like someone to show me a very simple example of taking a UIImage and converting it to a CGLayer that I would then draw copies of and return as a UIImage.
It would be OK if values were hard-coded (for simplicity). This is just for me to wrap my head around, not for production code.
UIImage *myImage = …;
CGLayerRef layer = CGLayerCreateWithContext(destinationContext, myImage.size, /*auxiliaryInfo*/ NULL);
if (layer) {
    CGContextRef layerContext = CGLayerGetContext(layer);
    CGContextDrawImage(layerContext, (CGRect){ CGPointZero, myImage.size }, myImage.CGImage);

    // Use CGContextDrawLayerAtPoint or CGContextDrawLayerInRect as many times as
    // necessary. Whichever function you choose, be sure to pass destinationContext
    // to it; you can't draw the layer into itself!
    CFRelease(layer);
}
That is technically my first ever iPhone code (I only program on the Mac), so beware. I have used CGLayer before, though, and as far as I know, Quartz is no different on the iPhone.
… and return as a UIImage.
I'm not sure how to do this part, having never worked with UIKit.
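One way to handle the UIKit side (a sketch, untested on my part): make the destination a UIKit image context, so the layer is created from and drawn into that same context, then read the composite back out as a UIImage:

```objc
UIGraphicsBeginImageContext(myImage.size);
CGContextRef destinationContext = UIGraphicsGetCurrentContext();

// Create the layer from the image context and fill it with the source image.
CGLayerRef layer = CGLayerCreateWithContext(destinationContext, myImage.size, NULL);
CGContextDrawImage(CGLayerGetContext(layer),
                   (CGRect){ CGPointZero, myImage.size }, myImage.CGImage);

// Draw transparent copies at small offsets; the offsets and alpha here are
// arbitrary placeholders for the blur pass.
CGContextSetAlpha(destinationContext, 0.5);
CGContextDrawLayerAtPoint(destinationContext, CGPointMake(-4, 0), layer);
CGContextDrawLayerAtPoint(destinationContext, CGPointMake(4, 0), layer);

UIImage *blurred = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CFRelease(layer);
```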