I'm processing images on iOS (using AVFoundation and OpenCV) and I simply want to display the contents of a CMSampleBufferRef (or IplImage) on screen.
Simply put: I just want to display the unconverted image (as with OpenCV's cvShowImage()) to check that I'm not dealing with a corrupted or otherwise deformed image.
Sadly not; the bitmap representations differ.
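That said, if all you need is a quick visual sanity check, a minimal sketch (not from the original answer; previewImageView is an assumed UIImageView outlet) is to wrap the sample buffer's pixel buffer in a CIImage and hand it to UIKit without any manual bitmap conversion:
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer]; // available since iOS 5
UIImage *preview = [UIImage imageWithCIImage:ciImage];
dispatch_async(dispatch_get_main_queue(), ^{
    self.previewImageView.image = preview; // display only; the pixel data is untouched
});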
Perhaps you want a category? I use something along the lines of this:
// NSImage+OpenCV.h
@interface NSImage (OpenCV)
+ (NSImage*)imageWithCVMat:(const cv::Mat&)cvMat;
- (id)initWithCVMat:(const cv::Mat&)cvMat;
- (cv::Ptr<cv::Mat>)cvMat;
@end
// NSImage+OpenCV.m
using namespace cv;
@implementation NSImage (OpenCV)
- (id)initWithCVMat:(const cv::Mat&)cvMat {
NSData *data = [NSData dataWithBytes:cvMat.data
length:cvMat.total()*cvMat.elemSize()];
CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceRGB();
CGImageRef imageRef = CGImageCreate(cvMat.cols,
cvMat.rows,
8,
8 * cvMat.elemSize(),
cvMat.step[0],
colourSpace,
kCGImageAlphaNone |
kCGBitmapByteOrderDefault,
provider,
NULL,
false,
kCGRenderingIntentDefault);
NSImage *image = [[NSImage alloc] initWithCGImage:imageRef size:CGSizeMake(cvMat.cols,cvMat.rows)];
CGColorSpaceRelease(colourSpace);
CGDataProviderRelease(provider);
CGImageRelease(imageRef);
return image;
}
+(NSImage*)imageWithCVMat:(const cv::Mat&)cvMat {
return [[NSImage alloc] initWithCVMat:cvMat];
}
- (cv::Ptr<cv::Mat>)cvMat {
CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)[self TIFFRepresentation], NULL);
CGImageRef imageRef = CGImageSourceCreateImageAtIndex(source, 0, NULL);
cv::Ptr<cv::Mat> cvMat = new cv::Mat(self.size.height, self.size.width, CV_8UC4);
CGContextRef contextRef = CGBitmapContextCreate(cvMat->data,
cvMat->cols,
cvMat->rows,
8,
cvMat->step[0],
CGImageGetColorSpace(imageRef),
kCGImageAlphaNoneSkipLast |
kCGBitmapByteOrderDefault);
CGContextDrawImage(contextRef,
CGRectMake(0, 0, cvMat->cols, cvMat->rows),
imageRef);
CGContextRelease(contextRef);
CGImageRelease(imageRef);
CFRelease(source); // also release the image source created above to avoid a leak
return cvMat;
}
@end
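For completeness, calling the category might look like this (a hedged sketch; it assumes an Objective-C++ (.mm) translation unit, and remember that OpenCV loads images as BGR while CGImage expects RGB):
cv::Mat mat = cv::imread("/tmp/test.png"); // hypothetical input file
cv::cvtColor(mat, mat, CV_BGR2RGB); // swap channel order for CGImage
NSImage *image = [NSImage imageWithCVMat:mat]; // category method from above
cv::Ptr<cv::Mat> roundTrip = [image cvMat]; // and back again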
I have an NSImage that I would like to save as a PNG, but with the alpha channel removed and using 5-bit colour. I am currently doing this to create my PNG:
NSData *imageData = [image TIFFRepresentation];
NSBitmapImageRep *imageRep = [NSBitmapImageRep imageRepWithData:imageData];
NSDictionary *imageProps = nil;
imageData = [imageRep representationUsingType:NSPNGFileType properties:imageProps];
[imageData writeToFile:fileNameWithExtension atomically:YES];
I've read through lots of similar questions on SO but am confused as to the best/correct approach to use. Do I create a new CGGraphics context and draw into that? Can I create a new imageRep with these parameters directly? Any help, with a code snippet, would be greatly appreciated.
Cheers
Dave
I did this in the end. It looks ugly and smells to me; any better suggestions would be greatly appreciated.
// Create a graphics context (5 bits per colour, no-alpha) to render the tile
static int const kNumberOfBitsPerColour = 5;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef tileGraphicsContext = CGBitmapContextCreate(NULL,
rect.size.width,
rect.size.height,
kNumberOfBitsPerColour,
2 * rect.size.width,
colorSpace,
kCGBitmapByteOrder16Little | kCGImageAlphaNoneSkipFirst);
// Draw the clipped part of the image into the tile graphics context
NSData *imageData = [clippedNSImage TIFFRepresentation];
CGImageRef imageRef = [[NSBitmapImageRep imageRepWithData:imageData] CGImage];
CGContextDrawImage(tileGraphicsContext, rect, imageRef);
// Create an NSImage from the tile graphics context
CGImageRef newImage = CGBitmapContextCreateImage(tileGraphicsContext);
NSImage *newNSImage = [[NSImage alloc] initWithCGImage:newImage size:rect.size];
// Clean up
CGImageRelease(newImage);
CGContextRelease(tileGraphicsContext);
CGColorSpaceRelease(colorSpace);
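From there, writing newNSImage out as a PNG can reuse the representationUsingType: approach from the question; a minimal sketch:
NSData *tiffData = [newNSImage TIFFRepresentation];
NSBitmapImageRep *imageRep = [NSBitmapImageRep imageRepWithData:tiffData];
NSData *pngData = [imageRep representationUsingType:NSPNGFileType properties:nil];
[pngData writeToFile:fileNameWithExtension atomically:YES]; // path variable as in the question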
I'm capturing from the iPhone camera using kCVPixelFormatType_420YpCbCr8BiPlanarFullRange for faster processing of the grayscale plane (plane 0).
I am storing a number of frames in memory for later video creation. Previously I was creating the video in grayscale, so I stored only the plane that contains the luminance (plane 0).
Now I have to store both planes and also create a color video from them. To store the frames I use something like this:
bytesInFrame = width * height * 2; // 2 bytes per pixel, is that correct?
frameData = (unsigned char*) malloc(bytesInFrame * numberOfFrames);
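For reference, a hedged sketch (variable names assumed, not from the original post) of sizing and copying one frame's planes; note that a full 4:2:0 biplanar frame takes width * height * 3 / 2 bytes rather than width * height * 2, because the interleaved CbCr plane is subsampled 2x2:
CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
size_t ySize = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0) * CVPixelBufferGetHeightOfPlane(pixelBuffer, 0);
size_t cbCrSize = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1) * CVPixelBufferGetHeightOfPlane(pixelBuffer, 1);
unsigned char *frame = malloc(ySize + cbCrSize); // one frame: Y plane then CbCr plane
memcpy(frame, CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0), ySize);
memcpy(frame + ySize, CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1), cbCrSize);
CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);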
The function I was using for creating the image from the grayscale buffer:
- (UIImage *) convertBitmapGrayScaleToUIImage:(unsigned char *) buffer
withWidth:(int) width
withHeight:(int) height {
size_t bufferLength = width * height * 1;
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, bufferLength, NULL);
size_t bitsPerComponent = 8;
size_t bitsPerPixel = 8;
size_t bytesPerRow = 1 * width;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceGray();
if(colorSpaceRef == NULL) {
DLog(@"Error allocating color space");
CGDataProviderRelease(provider);
return nil;
}
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
CGImageRef iref = CGImageCreate(width,
height,
bitsPerComponent,
bitsPerPixel,
bytesPerRow,
colorSpaceRef,
bitmapInfo,
provider, // data provider
NULL, // decode
YES, // should interpolate
renderingIntent);
uint32_t* pixels = (uint32_t*)malloc(bufferLength);
if(pixels == NULL) {
DLog(@"Error: Memory not allocated for bitmap");
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpaceRef);
CGImageRelease(iref);
return nil;
}
CGContextRef context = CGBitmapContextCreate(pixels,
width,
height,
bitsPerComponent,
bytesPerRow,
colorSpaceRef,
bitmapInfo);
if(context == NULL) {
DLog(@"Error context not created");
free(pixels);
pixels = NULL; // prevent the double free in the cleanup below
}
UIImage *image = nil;
if(context) {
CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, width, height), iref);
CGImageRef imageRef = CGBitmapContextCreateImage(context);
image = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
CGContextRelease(context);
}
CGColorSpaceRelease(colorSpaceRef);
CGImageRelease(iref);
CGDataProviderRelease(provider);
if(pixels) {
free(pixels);
}
return image;
}
I have seen that this question is similar to what I want to achieve:
kCVPixelFormatType_420YpCbCr8BiPlanarFullRange frame to UIImage conversion
But I think this would add an extra step to my conversion.
Is there any way of creating a color UIImage from the buffer directly?
I would appreciate some pointers.
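One possibility, sketched under the assumption that each stored frame holds the Y plane followed immediately by the CbCr plane (this is not from the original post), is to rebuild a CVPixelBuffer around the stored bytes and let Core Image do the YCbCr-to-RGB conversion:
unsigned char *frame = frameData; // one stored frame
void *planes[2] = { frame, frame + width * height };
size_t planeWidths[2] = { width, width / 2 };
size_t planeHeights[2] = { height, height / 2 };
size_t planeRowBytes[2] = { width, width }; // CbCr rows are width bytes: width/2 interleaved pairs
CVPixelBufferRef pixelBuffer = NULL;
CVPixelBufferCreateWithPlanarBytes(kCFAllocatorDefault, width, height,
                                   kCVPixelFormatType_420YpCbCr8BiPlanarFullRange,
                                   NULL, 0, 2, planes, planeWidths, planeHeights,
                                   planeRowBytes, NULL, NULL, NULL, &pixelBuffer);
CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
CIContext *ciContext = [CIContext contextWithOptions:nil];
CGImageRef cgImage = [ciContext createCGImage:ciImage fromRect:[ciImage extent]];
UIImage *colorImage = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
CVPixelBufferRelease(pixelBuffer);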
Hi, so I currently have a small image (about 100x160) stored as an NSData attribute in my Core Data model.
I display all entities in a table view. The UIImageView in a single cell is only 50x80; just dropping the image into this frame looks a bit pebbly.
What would be the best way to display this image in my table view cell? Resizing it on the fly in cellForRowAtIndexPath: will probably make my table view a bit laggy.
Or should I resize it on creation and save it in my Core Data entity (or perhaps on disk)?
Thank you! Please leave a comment if something is unclear.
For that you have to crop or resize the image. The following code crops or resizes the image to the required frame.
- (void)viewDidLoad
{
[super viewDidLoad];
// do something......
UIImage *img = [UIImage imageWithData:imageData]; // imageData is your image data (NSData), as you specified
// To crop Image
UIImage *croppedImage = [self imageByCropping:img toRect:CGRectMake(10, 10, 50, 80)];
// To resize image
UIImage *resizedImage = [self resizeImage:img width:50 height:80];
}
Crop Image:
- (UIImage*)imageByCropping:(UIImage *)imageToCrop toRect:(CGRect)rect
{
CGImageRef imageRef = CGImageCreateWithImageInRect([imageToCrop CGImage], rect);
UIImage *cropped = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef); // CGImageCreateWithImageInRect returns a +1 reference
return cropped;
}
Resize Image:
-(UIImage *)resizeImage:(UIImage *)image width:(int)width height:(int)height
{
CGImageRef imageRef = [image CGImage];
CGImageAlphaInfo alphaInfo = kCGImageAlphaNoneSkipLast; // force an alpha-free layout that CGBitmapContextCreate supports
CGContextRef bitmap = CGBitmapContextCreate(NULL, width, height, CGImageGetBitsPerComponent(imageRef), 4 * width, CGImageGetColorSpace(imageRef), alphaInfo);
CGContextDrawImage(bitmap, CGRectMake(0, 0, width, height), imageRef);
CGImageRef ref = CGBitmapContextCreateImage(bitmap);
UIImage *result = [UIImage imageWithCGImage:ref];
CGContextRelease(bitmap); // release the context and the intermediate CGImage to avoid leaks
CGImageRelease(ref);
return result;
}
You can go either way.
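On the lag concern: one common pattern (a sketch with assumed property and variable names) is to resize once on a background queue and cache the thumbnail, rather than scaling inside every cellForRowAtIndexPath: call:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    UIImage *full = [UIImage imageWithData:entity.imageData]; // hypothetical Core Data attribute
    UIImage *thumb = [self resizeImage:full width:50 height:80];
    dispatch_async(dispatch_get_main_queue(), ^{
        UITableViewCell *cell = [tableView cellForRowAtIndexPath:indexPath];
        cell.imageView.image = thumb; // cell may be nil if it scrolled off screen
    });
});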
If I have a UIImage or CGContextRef or the pure bitmap data (direct access to decompressed ARGB-8 pixels), what's my best option to blur an image with radius 10 pixels as fast as possible?
I've implemented a stack blur algorithm for iOS, which is close to a Gaussian blur but very fast:
https://github.com/tomsoft1/StackBluriOS
Check for instance here:
Blur an UIImage on change of slider
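If I remember the repo correctly it ships as a UIImage category, but the header and selector names below are assumptions on my part, so check the project's README:
#import "UIImage+StackBlur.h" // header name assumed
UIImage *blurred = [sourceImage stackBlur:10]; // 10 px radius; selector name assumed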
Either use a stack blur, a box blur or use the OpenGL texture blur (google the first two, and check the Apple dev samples for the latter).
https://github.com/rnystrom/RNBlurModalView
- (UIImage *)boxblurImageWithBlur:(CGFloat)blur bluringImage:(UIImage *)image
{
int boxSize = (int)(blur * 40);
boxSize = boxSize - (boxSize % 2) + 1;
CGImageRef img = image.CGImage;
vImage_Buffer inBuffer, outBuffer;
vImage_Error error;
void *pixelBuffer;
//create vImage_Buffer with data from CGImageRef
CGDataProviderRef inProvider = CGImageGetDataProvider(img);
CFDataRef inBitmapData = CGDataProviderCopyData(inProvider);
inBuffer.width = CGImageGetWidth(img);
inBuffer.height = CGImageGetHeight(img);
inBuffer.rowBytes = CGImageGetBytesPerRow(img);
inBuffer.data = (void*)CFDataGetBytePtr(inBitmapData);
//create vImage_Buffer for output
pixelBuffer = malloc(CGImageGetBytesPerRow(img) * CGImageGetHeight(img));
if(pixelBuffer == NULL)
NSLog(@"No pixelbuffer");
outBuffer.data = pixelBuffer;
outBuffer.width = CGImageGetWidth(img);
outBuffer.height = CGImageGetHeight(img);
outBuffer.rowBytes = CGImageGetBytesPerRow(img);
// Create a third buffer for intermediate processing
void *pixelBuffer2 = malloc(CGImageGetBytesPerRow(img) * CGImageGetHeight(img));
vImage_Buffer outBuffer2;
outBuffer2.data = pixelBuffer2;
outBuffer2.width = CGImageGetWidth(img);
outBuffer2.height = CGImageGetHeight(img);
outBuffer2.rowBytes = CGImageGetBytesPerRow(img);
//perform convolution
error = vImageBoxConvolve_ARGB8888(&inBuffer, &outBuffer2, NULL, 0, 0, boxSize, boxSize, NULL, kvImageEdgeExtend);
error = vImageBoxConvolve_ARGB8888(&outBuffer2, &inBuffer, NULL, 0, 0, boxSize, boxSize, NULL, kvImageEdgeExtend);
error = vImageBoxConvolve_ARGB8888(&inBuffer, &outBuffer, NULL, 0, 0, boxSize, boxSize, NULL, kvImageEdgeExtend);
if (error) {
NSLog(@"error from convolution %ld", error);
}
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(outBuffer.data,
outBuffer.width,
outBuffer.height,
8,
outBuffer.rowBytes,
colorSpace,
kCGImageAlphaNoneSkipLast);
CGImageRef imageRef = CGBitmapContextCreateImage (ctx);
UIImage *returnImage = [UIImage imageWithCGImage:imageRef];
//clean up
CGContextRelease(ctx);
CGColorSpaceRelease(colorSpace);
free(pixelBuffer);
free(pixelBuffer2); // the intermediate buffer must be freed as well
CFRelease(inBitmapData);
CGImageRelease(imageRef);
return returnImage;
}
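A hypothetical call site for the method above (the blur argument is expected roughly in the 0.0 to 1.0 range, mapping to an odd box size of up to about 40 pixels):
UIImage *blurred = [self boxblurImageWithBlur:0.25f bluringImage:self.imageView.image];
self.imageView.image = blurred;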
I know that there are already two relevant posts on this, but it's not clear...
So the case is this:
I have a UIImage made from a PNG file requested from a URL on the net.
Is it possible to mask out the white color so it becomes transparent?
Everything I have tried so far with CGImageCreateWithMaskingColors returns a white image...
Any help, guys, would be precious :)
OK, since I found a solution that works fine, I am answering my own question; I really believe this will be useful to many people with the same need.
First of all, I thought it would be nice to extend UIImage via a category:
UIImage+Utils.h
#import <UIKit/UIKit.h>
@interface UIImage (Utils)
- (UIImage*)imageByClearingWhitePixels;
@end
UIImage+Utils.m
#import "UIImage+Utils.h"
@implementation UIImage (Utils)
#pragma mark Private Methods
- (CGImageRef) CopyImageAndAddAlphaChannel:(CGImageRef) sourceImage {
CGImageRef retVal = NULL;
size_t width = CGImageGetWidth(sourceImage);
size_t height = CGImageGetHeight(sourceImage);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef offscreenContext = CGBitmapContextCreate(NULL, width, height, 8, 0, colorSpace, kCGImageAlphaPremultipliedFirst);
if (offscreenContext != NULL) {
CGContextDrawImage(offscreenContext, CGRectMake(0, 0, width, height), sourceImage);
retVal = CGBitmapContextCreateImage(offscreenContext);
CGContextRelease(offscreenContext);
}
CGColorSpaceRelease(colorSpace);
return retVal;
}
- (UIImage *) getMaskedArtworkFromPicture:(UIImage *)image withMask:(UIImage *)mask{
UIImage *maskedImage;
CGImageRef imageRef = [self CopyImageAndAddAlphaChannel:image.CGImage];
CGImageRef maskRef = mask.CGImage;
CGImageRef maskToApply = CGImageMaskCreate(CGImageGetWidth(maskRef),CGImageGetHeight(maskRef),CGImageGetBitsPerComponent(maskRef),CGImageGetBitsPerPixel(maskRef),CGImageGetBytesPerRow(maskRef),CGImageGetDataProvider(maskRef), NULL, NO);
CGImageRef masked = CGImageCreateWithMask(imageRef, maskToApply);
maskedImage = [UIImage imageWithCGImage:masked];
CGImageRelease(imageRef);
CGImageRelease(maskToApply);
CGImageRelease(masked);
return maskedImage;
}
#pragma mark Public Methods
- (UIImage*)imageByClearingWhitePixels{
//Copy image bitmaps
float originalWidth = self.size.width;
float originalHeight = self.size.height;
CGSize newSize;
newSize = CGSizeMake(originalWidth, originalHeight);
UIGraphicsBeginImageContext( newSize );
[self drawInRect:CGRectMake(0,0,newSize.width,newSize.height)];
UIImage* newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
//Clear white color by masking with self
newImage = [self getMaskedArtworkFromPicture:newImage withMask:newImage];
return newImage;
}
@end
Finally, I am able to use it like this (after importing UIImage+Utils.h, of course):
UIImage *myImage = [...]; //A local png file or a file from a url
UIImage *myWhitelessImage = [myImage imageByClearingWhitePixels]; // Hooray!
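As an aside, my understanding of why the direct route fails: CGImageCreateWithMaskingColors requires a source image without an alpha channel and returns NULL otherwise. Given an alpha-free CGImageRef (noAlphaImage below is assumed, e.g. redrawn into a kCGImageAlphaNoneSkipLast bitmap context), the direct route would look roughly like this:
const CGFloat maskingRange[6] = {222, 255, 222, 255, 222, 255}; // {min, max} per RGB channel, i.e. near-white
CGImageRef masked = CGImageCreateWithMaskingColors(noAlphaImage, maskingRange);
UIImage *result = masked ? [UIImage imageWithCGImage:masked] : nil;
if (masked) CGImageRelease(masked);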
It is not really possible; at least there is no easy way, and certainly no single API call. It is basically an image-editing problem usually done with Photoshop. Most images that have a transparent background were created that way.