I have created an app that applies a neural network (a convnet) to an input image, followed by post-processing. This convnet is basically a filter that takes an image (plus one parameter) as input and outputs an image of similar size. Since the convnet cannot process a large image in one pass due to memory constraints, the image must be split into tiles which are then stitched (or unified) back together after the model is applied. My problem is about image manipulation. It will be clearer once I describe what is done in detail:
Input image is a UIImage
Split the input image into a list of UIImage called listInput
Create an empty listOutput
For each tile in listInput:
Convert the UIImage into a CGImage
Convert the CGImage into a CVPixelBuffer
Apply a CoreML model to the CVPixelBuffer which returns a CVPixelBuffer of the same size
Convert the CVPixelBuffer into a CIImage
Convert the CIImage into a CGImage
Convert the CGImage into a UIImage
Append the UIImage into listOutput
Unify all the tiles in listOutput into an output UIImage
Fuse input and output UIImage (post-processing):
Convert input UIImage into CGImage then to CIImage
Convert output UIImage into CGImage then to CIImage
Fuse the 2 CIImage using a CIFilter
Convert the resulting CIImage into a CGImage
Convert the CGImage into a UIImage
I can post the code corresponding to any of the parts listed above if needed.
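For illustration, the CVPixelBuffer -> CIImage -> CGImage -> UIImage part of the per-tile loop is just a chain of conversions along these lines (a simplified sketch; the CIContext is created once and reused for all tiles):

import CoreImage
import CoreVideo
import UIKit

let ciContext = CIContext()  // created once, reused for every tile

func uiImageTile(from pixelBuffer: CVPixelBuffer) -> UIImage? {
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)            // CVPixelBuffer -> CIImage
    guard let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent) else {
        return nil                                               // CIImage -> CGImage
    }
    return UIImage(cgImage: cgImage)                             // CGImage -> UIImage
}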
The general problem I have is all the conversions between UIImage, CGImage and CIImage and back. I'm trying to get rid of UIImage completely (except for loading the image); I want to manipulate CGImage from start to finish. That alone would already simplify the code.
I've modified my code to manipulate lists of CGImage instead of lists of UIImage. The cropping part is in fact simpler with CGImage than with UIImage. But I cannot figure out the other way around: unifying CGImages together into a bigger image. This is my specific problem. Below is the function I've created to unify the UIImages.
func unifyTiles(listTiles: [UIImage], listRect: [CGRect]) -> UIImage? {
    // `input` is a property holding the original full-size image,
    // used here only to know the size of the unified output.
    guard let input = input else {
        return nil
    }
    let outputSize = CGSize(width: input.size.width, height: input.size.height)
    UIGraphicsBeginImageContextWithOptions(outputSize, true, 1.0)
    // Make sure the context is ended even if grabbing the image fails.
    defer { UIGraphicsEndImageContext() }
    for i in 0..<listTiles.count {
        listTiles[i].draw(at: listRect[i].origin)
    }
    return UIGraphicsGetImageFromCurrentImageContext()
}
So my question is:
Is it possible to do the same just manipulating CGImage?
Is it even a good idea?
Some notes:
The post-processing must be separated from the previous part because the user wants to modify the post-processing parameters without reapplying the convnet. Applying the convnet is indeed very slow and can take up to a minute on large images, while the post-processing is near real-time.
In the post-processing part, it was suggested to me to convert UIImage <-> CIImage directly, without going through CGImage. For some reason that I don't know, this didn't work, as far as I remember.
I'm aware that using Vision I could feed a CGImage directly into the network instead of a CVPixelBuffer, but I don't know whether Vision can output a CGImage as well. Hopefully this will be investigated soon.
Thanks for any information you could give me.
UIImage to CIImage
This step:
UIImage into CGImage then to CIImage
is overblown, as CIImage has an init(image:) initializer that goes directly from UIImage to CIImage.
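For example (a minimal sketch; "photo" is a placeholder asset name):

let uiImage = UIImage(named: "photo")!
let ciImage = CIImage(image: uiImage)  // optional: nil if the UIImage has no usable backing data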
Cropping
You seem to think that cropping a CGImage is easier than cropping a UIImage, but it isn't. To crop a UIImage, just draw it into a smaller graphics context, offset so as to place the desired point at the top left of the crop.
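A minimal sketch of that approach (assuming cropRect is in points and lies within the image bounds):

func crop(_ image: UIImage, to cropRect: CGRect) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(cropRect.size, false, image.scale)
    defer { UIGraphicsEndImageContext() }
    // Offset the drawing so the desired region lands at the top left of the context.
    image.draw(at: CGPoint(x: -cropRect.origin.x, y: -cropRect.origin.y))
    return UIGraphicsGetImageFromCurrentImageContext()
}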
Graphics Contexts
You can only draw in a graphics context, and that's going to be an image context whether you like it or not. There's no such thing as drawing into a "CGImage context" if that's what you're thinking. You can draw a CGImage directly into an image context, but the results are usually disastrous (as I shall explain in the next paragraph).
Final thoughts
In general I would like to set your mind at rest about UIImage. A UIImage is a good thing. It is (usually) a very lightweight wrapper around a CGImage. The CGImage is the bitmap data; the wrapper adds information like scale and orientation. Losing that information can cause your drawing to come out very badly; trying to draw a CGImage can cause the drawing to be flipped and incorrectly scaled. Don't do it! Use UIImage and be happy.
Related
If we already have a bitmap graphics context and have converted this context to a CGImage, and we now want to add a single dot to the CGImage, can we alter the CGImage directly, instead of drawing a single dot into the graphics context and converting the whole context to a CGImage once again?
The idea is that a CGImage is also a structure, so if we can alter some data in that structure, it should somehow be possible?
CGImages are immutable. They cannot be changed after they are created.
If we already have a Bitmap Graphics Context, and we converted this context to a CGImage
CGBitmapContextCreateImage doesn't "convert" a context to an image -- it effectively takes a snapshot of the current state of the context.
You can draw more things in the original context. (The first CGImage will not be affected.) Then call CGBitmapContextCreateImage again, to get a new image with the new drawing in it.
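In Swift terms, that workflow looks roughly like this (a sketch using CGContext and makeImage(), the modern spellings of CGBitmapContextCreate and CGBitmapContextCreateImage):

import UIKit

let context = CGContext(data: nil, width: 100, height: 100,
                        bitsPerComponent: 8, bytesPerRow: 0,
                        space: CGColorSpaceCreateDeviceRGB(),
                        bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)!

context.setFillColor(UIColor.red.cgColor)
context.fill(CGRect(x: 0, y: 0, width: 50, height: 50))
let firstSnapshot = context.makeImage()    // unaffected by any later drawing

// Draw the "single dot" into the same context...
context.fill(CGRect(x: 10, y: 10, width: 1, height: 1))
let secondSnapshot = context.makeImage()   // a new CGImage that contains the dot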
I have divergent needs for the image returned from the iPhone camera. My app scales the image down for upload and display and, recently, I added the ability to save the image to the Photos app.
At first I was assigning the returned value to two separate variables, but it turned out that they were sharing the same object, so I was getting two scaled-down images instead of having one at full scale.
After figuring out that you can't do UIImage *copyImage = [myImage copy];, I made a copy using imageWithCGImage, per below. Unfortunately, this doesn't work because the copy (here croppedImage) ends up rotated 90º from the original.
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    // Resize, crop, and correct orientation issues
    self.originalImage = [info valueForKey:@"UIImagePickerControllerOriginalImage"];
    UIImageWriteToSavedPhotosAlbum(originalImage, nil, nil, nil);
    UIImage *smallImage = [UIImage imageWithCGImage:[originalImage CGImage]]; // UIImage doesn't conform to NSCopying

    // This method is from a category on UIImage based on this discussion:
    // http://discussions.apple.com/message.jspa?messageID=7276709
    // It doesn't rotate smallImage, though: while imageWithCGImage returns
    // a rotated CGImage, the UIImageOrientation remains at UIImageOrientationUp!
    UIImage *fixedImage = [smallImage scaleAndRotateImageFromImagePickerWithLongestSide:480];
    ...
}
Is there a way to copy the UIImagePickerControllerOriginalImage image without modifying it in the process?
This seems to work but you might face some memory problems depending on what you do with newImage:
CGImageRef newCgIm = CGImageCreateCopy(oldImage.CGImage);
UIImage *newImage = [UIImage imageWithCGImage:newCgIm scale:oldImage.scale orientation:oldImage.imageOrientation];
This should work:
UIImage *newImage = [UIImage imageWithCGImage:oldImage.CGImage];
Copy backing data and rotate it
This question asks a common question about UIImage in a slightly different way. Essentially, you have two related problems - deep copying and rotation. A UIImage is just a container and has an orientation property that is used for display. A UIImage can contain its backing data as a CGImage or CIImage, but most often as a CGImage. The CGImage is a struct of information that includes a pointer to the underlying data and if you read the docs, copying the struct does not copy the data. So...
Deep copying
As I'll get to in the next paragraph, deep copying the data will leave the image rotated, because the image is rotated in the underlying data.
UIImage *newImage = [UIImage imageWithData:UIImagePNGRepresentation(oldImage)];
This will copy the data but will require setting the orientation property before handing it to something like UIImageView for proper display.
Another way to deep copy would be to draw into the context and grab the result. Assume a zebra.
UIGraphicsBeginImageContext(zebra!.size)
zebra!.draw(in: CGRect(x: 0, y: 0, width: zebra!.size.width, height: zebra!.size.height))
let copy = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
Deep copy and Rotation
Rotating a CGImage has already been answered. It also happens that this rotated image is a new CGImage that can be used to create a UIImage.
// Assuming rotatedCGImage is the rotated CGImage obtained as described above:
UIImage *newImage = [UIImage imageWithCGImage:rotatedCGImage];
I think you need to create an image context (a CGContextRef), draw the UIImage's CGImage into it with CGContextDrawImage(...), then get the image back from the context with CGBitmapContextCreateImage(...).
With such a routine, I'm sure you can get a real copy of the image you want. Hope this helps.
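In Swift, such a routine might look roughly like this (an untested sketch; it also re-applies the original scale and orientation so the copy displays correctly):

import UIKit

func deepCopy(_ original: UIImage) -> UIImage? {
    guard let cgImage = original.cgImage,
          let context = CGContext(data: nil,
                                  width: cgImage.width, height: cgImage.height,
                                  bitsPerComponent: 8, bytesPerRow: 0,
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue) else {
        return nil
    }
    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: cgImage.width, height: cgImage.height))
    guard let copy = context.makeImage() else { return nil }
    return UIImage(cgImage: copy, scale: original.scale, orientation: original.imageOrientation)
}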
I'm trying to draw an image using OpenGL in a project for iPad.
The image data:
A data blob of UInt8 values that represents the grayscale value for each pixel in three dimensions (I'm going to draw slices from the 3D body). I also have the height and width of the image.
My current (unsuccessful) approach is to use it as a texture on a square, and I am looking at some example code I found on the net. That code, however, loads an image file from disk.
While setting up the view there is a call to CGContextDrawImage, and the last parameter is supposed to be a CGImageRef. Do you know how I can create one from my data, or is this a dead end?
Thankful for all input. I really haven't gotten a grip on OpenGL yet, so please be gentle :-)
It's not a dead end.
You can create a CGImageRef from a blob of pixel memory by using CGBitmapContextCreate() to create a bitmap context and CGBitmapContextCreateImage() to create the image ref from that bitmap context.
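A minimal sketch of that approach for a single grayscale slice (pixels, width, and height stand in for your slice data and dimensions):

import CoreGraphics

func makeGrayscaleImage(from pixels: [UInt8], width: Int, height: Int) -> CGImage? {
    var data = pixels
    return data.withUnsafeMutableBytes { raw -> CGImage? in
        guard let context = CGContext(data: raw.baseAddress,
                                      width: width, height: height,
                                      bitsPerComponent: 8,
                                      bytesPerRow: width,               // one byte per grayscale pixel
                                      space: CGColorSpaceCreateDeviceGray(),
                                      bitmapInfo: CGImageAlphaInfo.none.rawValue) else {
            return nil
        }
        return context.makeImage()
    }
}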
Anyone knows how to convert .jpg image to .bmp format in iphone using objective-C?
And how do I process (the RGB color of) each pixel of an image captured by the iPhone device?
Do I need to convert the image type?
You won't be able to easily get a BMP representation on the iPhone. In Cocoa on the Mac, it is managed by the NSBitmapImageRep class and is pretty straightforward, as outlined below.
At a high level, you need to get the .jpg into an NSBitmapImageRep object and then let the frameworks handle the conversion for you:
a. Convert the JPG image to an NSBitmapImageRep
b. Use built-in NSBitmapImageRep methods to save in the desired format.
NSBitmapImageRep *origImage = [self documentAsBitmapImageRep:[NSURL fileURLWithPath:pathToJpgImage]];
NSData *bmpData = [origImage representationUsingType:NSBMPFileType properties:nil];

- (NSBitmapImageRep*)documentAsBitmapImageRep:(NSURL*)urlOfJpg;
{
    CIImage *anImage = [CIImage imageWithContentsOfURL:urlOfJpg];
    CGRect outputExtent = [anImage extent];

    // Create a new NSBitmapImageRep.
    NSBitmapImageRep *theBitMapToBeSaved = [[NSBitmapImageRep alloc]
        initWithBitmapDataPlanes:NULL pixelsWide:outputExtent.size.width
        pixelsHigh:outputExtent.size.height bitsPerSample:8 samplesPerPixel:4
        hasAlpha:YES isPlanar:NO colorSpaceName:NSDeviceRGBColorSpace
        bytesPerRow:0 bitsPerPixel:0];

    // Create an NSGraphicsContext that draws into the NSBitmapImageRep.
    NSGraphicsContext *nsContext = [NSGraphicsContext graphicsContextWithBitmapImageRep:theBitMapToBeSaved];

    // Save the previous graphics context and state, and make our bitmap context current.
    [NSGraphicsContext saveGraphicsState];
    [NSGraphicsContext setCurrentContext:nsContext];

    CGPoint p = CGPointMake(0.0, 0.0);
    // Get a CIContext from the NSGraphicsContext, and use it to draw the CIImage into the NSBitmapImageRep.
    [[nsContext CIContext] drawImage:anImage atPoint:p fromRect:outputExtent];

    // Restore the previous graphics context and state.
    [NSGraphicsContext restoreGraphicsState];

    return [[theBitMapToBeSaved retain] autorelease];
}
On the iPhone, BMP is not directly supported by UIKit, so you would have to drop down into Quartz/Core Graphics and manage the transformation yourself.
Pixel by pixel processing is much more involved. Again, you should get intimately familiar with the core graphics capabilities on the device if this is a hard requirement for you.
Load the JPG image into a UIImage, which it can handle natively.
Then you can grab the CGImageRef from the UIImage object.
Create a new bitmap CG image context with the same properties of the image you already have, and provide your own data buffer to hold the bytes of the bitmap context.
Draw the original image into the new bitmap context: the bytes in your provided buffer are now the image's pixels (see the sketch after these steps).
Now you need to encode the actual BMP file, which isn't functionality that exists in UIKit or Core Graphics (as far as I know). Fortunately, it's an intentionally trivial format; I've written quick and dirty BMP encoders in an hour or less. Here's the spec: http://www.fileformat.info/format/bmp/egff.htm (Version 3 should be fine, unless you need to support alpha, but coming from JPEG you probably don't.)
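Here is a rough, untested sketch of the first four steps (it assumes an RGBA layout, which you would rearrange into BMP's expected byte order while encoding):

import UIKit

func rgbaBytes(of image: UIImage) -> [UInt8]? {
    guard let cgImage = image.cgImage else { return nil }
    let width = cgImage.width, height = cgImage.height
    var buffer = [UInt8](repeating: 0, count: width * height * 4)   // your own pixel buffer
    let drawn = buffer.withUnsafeMutableBytes { raw -> Bool in
        guard let context = CGContext(data: raw.baseAddress,
                                      width: width, height: height,
                                      bitsPerComponent: 8,
                                      bytesPerRow: width * 4,
                                      space: CGColorSpaceCreateDeviceRGB(),
                                      bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue) else {
            return false
        }
        // Drawing the image fills `buffer` with its pixel bytes.
        context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
        return true
    }
    return drawn ? buffer : nil
}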
Good luck.
Ultimately I'm working on a box blur function for use on iPhone.
That function would take a UIImage and draw transparent copies, first to the sides, then take that image and draw transparent copies above and below, returning a nicely blurred image.
Reading the Drawing with Quartz 2D Programming Guide, it recommends using CGLayers for this kind of operation.
The example code in the guide is a little dense for me to understand, so I would like someone to show me a very simple example of taking a UIImage and converting it to a CGLayer that I would then draw copies of and return as a UIImage.
It would be OK if values were hard-coded (for simplicity). This is just for me to wrap my head around, not for production code.
UIImage *myImage = …;

CGLayerRef layer = CGLayerCreateWithContext(destinationContext, myImage.size, /*auxiliaryInfo*/ NULL);
if (layer) {
    CGContextRef layerContext = CGLayerGetContext(layer);
    CGContextDrawImage(layerContext, (CGRect){ CGPointZero, myImage.size }, myImage.CGImage);

    // Use CGContextDrawLayerAtPoint or CGContextDrawLayerInRect as many times as necessary.
    // Whichever function you choose, be sure to pass destinationContext to it; you can't draw the layer into itself!

    CFRelease(layer);
}
That is technically my first ever iPhone code (I only program on the Mac), so beware. I have used CGLayer before, though, and as far as I know, Quartz is no different on the iPhone.
… and return as a UIImage.
I'm not sure how to do this part, having never worked with UIKit.
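For what it's worth, one possible way to finish that step (an untested sketch; myImage stands in for the input UIImage, and the actual blur drawing is elided):

import UIKit

func blurredCopy(of myImage: UIImage) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(myImage.size, false, myImage.scale)
    defer { UIGraphicsEndImageContext() }
    guard let destinationContext = UIGraphicsGetCurrentContext(),
          let layer = CGLayer(destinationContext, size: myImage.size, auxiliaryInfo: nil),
          let layerContext = layer.context,
          let cgImage = myImage.cgImage else { return nil }

    // Note: drawing a CGImage directly uses Core Graphics coordinates, so the content
    // may come out flipped; the point here is only how to get a UIImage back at the end.
    layerContext.draw(cgImage, in: CGRect(origin: .zero, size: myImage.size))
    // Draw the layer (offset, semi-transparent copies, etc.) into the destination context,
    // then grab the result from the UIKit image context as a UIImage.
    destinationContext.draw(layer, at: .zero)
    return UIGraphicsGetImageFromCurrentImageContext()
}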