I'm trying to use the CISourceOverCompositing filter but I'm hitting a wall.
This is the relevant code; mask is a UIImage and images is an array of UIImages:
ci_mask = [[CIImage alloc] initWithCGImage: mask.CGImage];
ctx = [CIContext contextWithOptions: nil];
compo = [CIFilter filterWithName: @"CISourceOverCompositing"];

for (int i = 0; i < images.count; i++) {
    UIImage *image = [images objectAtIndex: i];
    ci_base = [[CIImage alloc] initWithCGImage: image.CGImage];

    [compo setDefaults];
    [compo setValue: ci_mask forKey: @"inputImage"];
    [compo setValue: ci_base forKey: @"inputBackgroundImage"];
    result = compo.outputImage;

    CGImageRef frame = [ctx createCGImage: result fromRect: result.extent];
    CGImageDestinationAddImage(dst_ref, frame, frame_props);
    CGImageRelease(frame); // the destination retains the image, so release our reference
}
mask contains an alpha channel, which is correctly applied in the simulator but not on the device. The output only shows the mask as-is, seemingly without using the alpha channel to blend the images.
Nearly the same code using the Core Graphics API works fine (but then I can't apply other CIFilters).
I'll probably try CIBlendWithMask, but then I'll have to extract the mask and add complexity...
Look for different capitalization in your filenames and the files being specified. They don't have to be the same case to work in the simulator, but they are case sensitive on the device. This has thrown me off many times, and if you aren't looking for it, it is quite difficult to track down.
OK, I found the issue and it's a bit tricky. First, to answer Jeshua: both the mask and the base are generated, so the path isn't relevant here (but I'll keep that in mind, definitely good to know).
Now for the "solution". When generating the mask I used a combination of CG* calls on a background context (CGImageCreateWithMask, ...). The result of those calls is a CGImage that apparently has no alpha channel (CGImageGetAlphaInfo returns 0), and yet the Core Graphics APIs (on both device and simulator), as well as Core Image (but only in the simulator), still apply the alpha channel that is in fact present.
Creating a CGContext with kCGImageAlphaPremultipliedLast and using CGContextDrawImage with kCGBlendModeSourceOut (or whatever you need to "hollow out" your image) keeps the alpha channel intact and this works on both the simulator and the device.
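For reference, a minimal sketch of that kind of mask generation, assuming hypothetical fullImage and holeShape CGImageRefs and hard-coded dimensions:

size_t width = 512, height = 512;                       // assumed dimensions
CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
CGContextRef maskCtx = CGBitmapContextCreate(NULL, width, height, 8, 0, rgb,
                                             (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
CGRect bounds = CGRectMake(0, 0, width, height);

// Draw the shape to cut around first, then the image with kCGBlendModeSourceOut
// (or whichever blend mode gives the "hollowed out" result you need).
CGContextDrawImage(maskCtx, bounds, holeShape);
CGContextSetBlendMode(maskCtx, kCGBlendModeSourceOut);
CGContextDrawImage(maskCtx, bounds, fullImage);

CGImageRef maskWithAlpha = CGBitmapContextCreateImage(maskCtx);  // alpha channel preserved
CGContextRelease(maskCtx);
CGColorSpaceRelease(rgb);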
I'll file a radar as either the simulator or the device is wrong.
UPDATE This piece of code is actually not where the problem is; commenting out all the CoreGraphics lines and returning the first image in the array as the result does not prevent the crashes from happening, so I must look farther upstream.
I am running this on a 75ms NSTimer. It works perfectly with 480x360 images, and will run all day long without crashing.
But when I send it images that are 1024x768, it will crash after about 20 seconds, having given several low memory warnings.
In both cases Instruments shows absolutely normal memory usage: a flat allocations graph, less than one megabyte of live bytes, no leaks the whole time.
So, what's going on? Is Core Graphics somehow using too much memory without showing it?
Also worth mentioning: there aren't that many images in (NSMutableArray*)imgs -- usually three, sometimes two or four. It crashes regardless, though slightly later when there are only two.
- (UIImage *)imagefromImages:(NSMutableArray *)imgs andFilterName:(NSString *)filterName {
    UIImage *tmpResultant = [imgs objectAtIndex:0];
    CGSize s = [tmpResultant size];

    UIGraphicsBeginImageContext(s);
    [tmpResultant drawInRect:CGRectMake(0, 0, s.width, s.height) blendMode:kCGBlendModeNormal alpha:1.0];
    for (int i = 1; i < [imgs count]; i++) {
        [[imgs objectAtIndex:i] drawInRect:CGRectMake(0, 0, s.width, s.height) blendMode:kCGBlendModeMultiply alpha:1.0];
    }
    tmpResultant = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return tmpResultant;
}
Sounds to me like the problem is outside of the code you have shown. Images that are displayed on screen have a backing store outside of your app's memory that is width*height*bytes_per_pixel. You also get memory warnings and app termination if you have too many backing stores.
You might need to optimize there, to either create smaller optimized versions of these images for display or to allow for the backing stores to be released. Also turning on rasterization for certain non-changing layers can help here as well as setting layer contents directly to the CGImage as opposed to working with UIImages.
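For example, a small sketch of those two layer-level tweaks (imageView and cgImage stand in for your own view and backing CGImage; nothing here is specific to the code above):

#import <QuartzCore/QuartzCore.h>

// Rasterize a layer whose contents do not change between frames.
imageView.layer.shouldRasterize = YES;
imageView.layer.rasterizationScale = [UIScreen mainScreen].scale;

// Set the layer contents directly to a CGImage instead of going through a UIImage.
imageView.layer.contents = (__bridge id)cgImage;   // plain (id) cast under MRC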
You should make a sample project that demonstrates the issue with no other code around it, and see if you still run out of memory. I suspect you'll find that with just the code you have shown you will not be able to reproduce the issue, as it lies elsewhere.
I have a looping animation consisting of 120 frames, 512x512 resolution, saved as 32bit PNG files. I want to play this sequence back in a UIView inside my application. Can anyone give me some pointers regarding how I might do this, hopefully I can do this using the standard API (which I would prefer). I could use Cocos2D if needed or even OpenGL (but I am totally new to OpenGL at this point).
You can try this:
// Init an UIImageView
UIImageView *imageView = [[UIImageView alloc] initWithFrame:/*Some frame*/];
// Init an array with UIImage objects
NSArray *array = [NSArray arrayWithObjects: [UIImage imageNamed:@"image1.png"], [UIImage imageNamed:@"image2.png"], .., nil];
// Set the UIImage's animationImages property
imageView.animationImages = array;
// Set the time interval
imageView.animationDuration = /* Number of images x 1/30 gets you 30FPS */;
// Set repeat count
imageView.animationRepeatCount = 0; /* 0 means infinite */
// Start animating
[imageView startAnimating];
// Add as subview
[self.view addSubview:imageView];
This is the easiest approach, but I can't say anything about the performance, since I haven't tried it. I think it should be fine though with the images that you have.
Uncompressed, that's about 90MB of images, and that might be as much as you're looking at if they're unpacked into UIImage format. Due to the length of the animation and the size of the images, I highly recommend storing them in a compressed movie format. Take a look at the reference for the MediaPlayer framework; you can remove the playback controls, embed an MPMoviePlayerController within your own view hierarchy, and set playback to loop. Note that 640x480 is the upper supported limit for H.264 so you might need to scale down the video anyway.
Do take note of the issues with looping video mentioned in the question Smooth video looping in iOS.
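A rough sketch of that setup, assuming the frames have been encoded into a bundled animation.m4v (the file name, frame, and view hierarchy are placeholders):

#import <MediaPlayer/MediaPlayer.h>

NSURL *movieURL = [[NSBundle mainBundle] URLForResource:@"animation" withExtension:@"m4v"];
MPMoviePlayerController *player = [[MPMoviePlayerController alloc] initWithContentURL:movieURL];
player.controlStyle = MPMovieControlStyleNone;   // remove the playback controls
player.repeatMode = MPMovieRepeatModeOne;        // loop the clip
player.view.frame = self.view.bounds;
[self.view addSubview:player.view];              // embed within your own view hierarchy
[player play];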
I have divergent needs for the image returned from the iPhone camera. My app scales the image down for upload and display and, recently, I added the ability to save the image to the Photos app.
At first I was assigning the returned value to two separate variables, but it turned out that they were sharing the same object, so I was getting two scaled-down images instead of having one at full scale.
After figuring out that you can't do UIImage *copyImage = [myImage copy];, I made a copy using imageWithCGImage, per below. Unfortunately, this doesn't work because the copy (here smallImage) ends up rotated 90º from the original.
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    // Resize, crop, and correct orientation issues
    self.originalImage = [info valueForKey:@"UIImagePickerControllerOriginalImage"];
    UIImageWriteToSavedPhotosAlbum(originalImage, nil, nil, nil);
    UIImage *smallImage = [UIImage imageWithCGImage:[originalImage CGImage]]; // UIImage doesn't conform to NSCopying

    // This method is from a category on UIImage based on this discussion:
    // http://discussions.apple.com/message.jspa?messageID=7276709
    // It doesn't rotate smallImage, though: while imageWithCGImage returns
    // a rotated CGImage, the UIImageOrientation remains at UIImageOrientationUp!
    UIImage *fixedImage = [smallImage scaleAndRotateImageFromImagePickerWithLongestSide:480];
    ...
}
Is there a way to copy the UIImagePickerControllerOriginalImage image without modifying it in the process?
This seems to work but you might face some memory problems depending on what you do with newImage:
CGImageRef newCgIm = CGImageCreateCopy(oldImage.CGImage);
UIImage *newImage = [UIImage imageWithCGImage:newCgIm scale:oldImage.scale orientation:oldImage.imageOrientation];
This should work:
UIImage *newImage = [UIImage imageWithCGImage:oldImage.CGImage];
Copy backing data and rotate it
This question asks a common question about UIImage in a slightly different way. Essentially, you have two related problems - deep copying and rotation. A UIImage is just a container and has an orientation property that is used for display. A UIImage can contain its backing data as a CGImage or CIImage, but most often as a CGImage. The CGImage is a struct of information that includes a pointer to the underlying data and if you read the docs, copying the struct does not copy the data. So...
Deep copying
As I'll get to in the next paragraph, deep copying the data will leave the image rotated, because the image is rotated in the underlying data.
UIImage *newImage = [UIImage imageWithData:UIImagePNGRepresentation(oldImage)];
This will copy the data but will require setting the orientation property before handing it to something like UIImageView for proper display.
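One way to carry the orientation along after the PNG round-trip (a sketch, assuming oldImage is the source UIImage):

UIImage *copied = [UIImage imageWithData:UIImagePNGRepresentation(oldImage)];
UIImage *oriented = [UIImage imageWithCGImage:copied.CGImage
                                        scale:oldImage.scale
                                  orientation:oldImage.imageOrientation];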
Another way to deep copy would be to draw into the context and grab the result. Assume a zebra.
UIGraphicsBeginImageContext(zebra!.size)
zebra!.drawInRect(CGRectMake(0, 0, zebra!.size.width, zebra!.size.height))
let copy = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
Deep copy and Rotation
Rotating a CGImage has already been answered. It also happens that this rotated image is a new CGImage that can be used to create a UIImage.
UIImage *newImage = [UIImage imageWithCGImage:rotatedCGImage]; // rotatedCGImage produced by the rotation step above
I think you need to create a bitmap image context (CGContextRef), draw the UIImage's CGImage into it with CGContextDrawImage(...), and then get the image back out of the context with CGBitmapContextCreateImage(...).
With such a routine, I'm sure you can get a real copy of the image you want. Hope it helps.
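A sketch of that routine, assuming a hypothetical source UIImage; scale and orientation are carried over by hand, since the bitmap context knows nothing about them:

CGImageRef cgImage = source.CGImage;
size_t width  = CGImageGetWidth(cgImage);
size_t height = CGImageGetHeight(cgImage);

CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, 0, rgb,
                                         (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);

CGImageRef copyRef = CGBitmapContextCreateImage(ctx);   // a genuinely new backing image
UIImage *copy = [UIImage imageWithCGImage:copyRef
                                    scale:source.scale
                              orientation:source.imageOrientation];
CGImageRelease(copyRef);
CGContextRelease(ctx);
CGColorSpaceRelease(rgb);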
Does anyone know how to convert a .jpg image to .bmp format on the iPhone using Objective-C?
And how do I process (e.g., read the RGB color of) each pixel of an image captured by the iPhone camera?
Do I need to convert the image to another format first?
You won't be able to easily get a BMP representation on the iPhone. On the Mac, in Cocoa, this is managed by the NSBitmapImageRep class and is pretty straightforward, as outlined below.
At a high level, you need to get the .jpg into an NSBitmapImageRep object and then let the frameworks handle the conversion for you:
a. Convert the JPG image to an NSBitmapImageRep
b. Use built in NSBitmapImageRep methods to save in desired formats.
NSBitmapImageRep *origImage = [self documentAsBitmapImageRep:[NSURL fileURLWithPath:pathToJpgImage]];
NSData *bmpData = [origImage representationUsingType:NSBMPFileType properties:nil]; // returns the BMP bytes as NSData
- (NSBitmapImageRep*)documentAsBitmapImageRep:(NSURL*)urlOfJpg;
{
    CIImage *anImage = [CIImage imageWithContentsOfURL:urlOfJpg];
    CGRect outputExtent = [anImage extent];

    // Create a new NSBitmapImageRep.
    NSBitmapImageRep *theBitMapToBeSaved = [[NSBitmapImageRep alloc]
        initWithBitmapDataPlanes:NULL pixelsWide:outputExtent.size.width
        pixelsHigh:outputExtent.size.height bitsPerSample:8 samplesPerPixel:4
        hasAlpha:YES isPlanar:NO colorSpaceName:NSDeviceRGBColorSpace
        bytesPerRow:0 bitsPerPixel:0];

    // Create an NSGraphicsContext that draws into the NSBitmapImageRep.
    NSGraphicsContext *nsContext = [NSGraphicsContext graphicsContextWithBitmapImageRep:theBitMapToBeSaved];

    // Save the previous graphics context and state, and make our bitmap context current.
    [NSGraphicsContext saveGraphicsState];
    [NSGraphicsContext setCurrentContext: nsContext];

    CGPoint p = CGPointMake(0.0, 0.0);

    // Get a CIContext from the NSGraphicsContext, and use it to draw the CIImage into the NSBitmapImageRep.
    [[nsContext CIContext] drawImage:anImage atPoint:p fromRect:outputExtent];

    // Restore the previous graphics context and state.
    [NSGraphicsContext restoreGraphicsState];

    return [[theBitMapToBeSaved retain] autorelease];
}
On the iPhone, BMP is not directly supported by UIKit, so you would have to drop down into Quartz/Core Graphics and manage the transformation yourself.
Pixel by pixel processing is much more involved. Again, you should get intimately familiar with the core graphics capabilities on the device if this is a hard requirement for you.
1. Load the JPG image into a UIImage, which it can handle natively.
2. Grab the CGImageRef from the UIImage object.
3. Create a new bitmap CG image context with the same properties as the image you already have, and provide your own data buffer to hold the bytes of the bitmap context.
4. Draw the original image into the new bitmap context: the bytes in your provided buffer are now the image's pixels (see the sketch after this list).
5. Now you need to encode the actual BMP file, which isn't functionality that exists in UIKit or Core Graphics (as far as I know). Fortunately, it's an intentionally trivial format; I've written quick-and-dirty BMP encoders in an hour or less. Here's the spec: http://www.fileformat.info/format/bmp/egff.htm (Version 3 should be fine, unless you need to support alpha, but coming from JPEG you probably don't.)
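A sketch of steps 2 through 4, assuming sourceImage is the UIImage loaded from the JPG (the RGBA layout is a choice, not a requirement):

CGImageRef cg = sourceImage.CGImage;
size_t width  = CGImageGetWidth(cg);
size_t height = CGImageGetHeight(cg);
size_t bytesPerRow = width * 4;               // 8 bits per component, RGBA
void *pixels = calloc(height, bytesPerRow);   // your own buffer

CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(pixels, width, height, 8, bytesPerRow, rgb,
                                         (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cg);

// `pixels` now holds the image's RGBA bytes, ready for per-pixel inspection
// or for writing out behind a hand-rolled BMP header.
CGContextRelease(ctx);
CGColorSpaceRelease(rgb);
// ... encode the BMP from `pixels`, then free(pixels);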
Good luck.
Ultimately I'm working on a box blur function for use on iPhone.
That function would take a UIImage and draw transparent copies, first to the sides, then take that image and draw transparent copies above and below, returning a nicely blurred image.
Reading the Drawing with Quartz 2D Programming Guide, it recommends using CGLayers for this kind of operation.
The example code in the guide is a little dense for me to understand, so I would like someone to show me a very simple example of taking a UIImage and converting it to a CGLayer that I would then draw copies of and return as a UIImage.
It would be OK if values were hard-coded (for simplicity). This is just for me to wrap my head around, not for production code.
UIImage *myImage = …;

CGLayerRef layer = CGLayerCreateWithContext(destinationContext, myImage.size, /*auxiliaryInfo*/ NULL);
if (layer) {
    CGContextRef layerContext = CGLayerGetContext(layer);
    CGContextDrawImage(layerContext, (CGRect){ CGPointZero, myImage.size }, myImage.CGImage);

    // Use CGContextDrawLayerAtPoint or CGContextDrawLayerInRect as many times as necessary.
    // Whichever function you choose, be sure to pass destinationContext to it; you can't
    // draw the layer into itself!

    CFRelease(layer);
}
That is technically my first ever iPhone code (I only program on the Mac), so beware. I have used CGLayer before, though, and as far as I know, Quartz is no different on the iPhone.
… and return as a UIImage.
I'm not sure how to do this part, having never worked with UIKit.
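One possible way to wire the snippet above up to a UIImage result (an untested sketch; UIKit's image context supplies destinationContext and hands back the UIImage):

UIGraphicsBeginImageContext(myImage.size);
CGContextRef destinationContext = UIGraphicsGetCurrentContext();

CGLayerRef layer = CGLayerCreateWithContext(destinationContext, myImage.size, NULL);
CGContextRef layerContext = CGLayerGetContext(layer);
// Note: CGContextDrawImage uses Quartz coordinates, so the image may come out
// flipped in a UIKit context; flip the CTM first if that matters here.
CGContextDrawImage(layerContext, (CGRect){ CGPointZero, myImage.size }, myImage.CGImage);

// Draw the layer into the destination as many times as needed for the blur passes.
CGContextDrawLayerAtPoint(destinationContext, CGPointZero, layer);
CFRelease(layer);

UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();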