I'm scaling down a UIImage using this code:
let image = UIImage(contentsOfFile: imageUrl.path!)!.CGImage
let width = Float(CGImageGetWidth(image)) / 1.01
let height = Float(CGImageGetHeight(image)) / 1.01
let bitsPerComponent = CGImageGetBitsPerComponent(image)
let bytesPerRow = CGImageGetBytesPerRow(image)
let colorSpace = CGImageGetColorSpace(image)
let bitmapInfo = CGImageGetBitmapInfo(image)
let context = CGBitmapContextCreate(nil, Int(width), Int(height), bitsPerComponent, bytesPerRow, colorSpace, bitmapInfo)
CGContextSetInterpolationQuality(context, kCGInterpolationHigh)
CGContextDrawImage(context, CGRect(origin: CGPointZero, size: CGSize(width: CGFloat(width), height: CGFloat(height))), image)
let scaledImage = UIImage(CGImage: CGBitmapContextCreateImage(context))
It's used in a photo editing extension. It works and scales the input image, but it does not take orientation into account. I keep the orientation when editing starts:
func startContentEditingWithInput(contentEditingInput: PHContentEditingInput?, placeholderImage: UIImage) {
let output = PHContentEditingOutput(contentEditingInput: self.input)
let url = self.input?.fullSizeImageURL
if let imageUrl = url {
imageOrientation = input!.fullSizeImageOrientation
}
}
The saved output is always landscape.
The easiest solution is to re-use the image orientation from the UIImage input when creating the UIImage output:
let uiImage = UIImage(contentsOfFile: imageUrl.path!)
let image = uiImage!.CGImage
// ...
let scaledImage = UIImage(CGImage: CGBitmapContextCreateImage(context),
                          scale: uiImage!.scale,
                          orientation: uiImage!.imageOrientation)
This worked for me:
let scaledImage = UIImage(CGImage: CGBitmapContextCreateImage(context))
var cimage = CIImage(image: scaledImage)!
if let orientation = self.imageOrientation {
    cimage = cimage.imageByApplyingOrientation(orientation)
}
let ciContext = CIContext(options: nil)
let cgImage = ciContext.createCGImage(cimage, fromRect: cimage.extent())
let resultImage = UIImage(CGImage: cgImage)
Mats' answer does work when I'm saving the image.
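For later readers, here is the same downscale-then-restore-orientation approach sketched in current Swift. This is my adaptation, not code from the thread; the factor parameter stands in for the question's 1/1.01 scale:
import UIKit

// A sketch: scale the backing CGImage, then restore the source's
// scale and orientation on the resulting UIImage.
func downscale(_ source: UIImage, by factor: CGFloat) -> UIImage? {
    guard let cgImage = source.cgImage else { return nil }
    let width = Int(CGFloat(cgImage.width) * factor)
    let height = Int(CGFloat(cgImage.height) * factor)
    guard let context = CGContext(data: nil,
                                  width: width,
                                  height: height,
                                  bitsPerComponent: cgImage.bitsPerComponent,
                                  bytesPerRow: 0, // let Core Graphics pick the stride
                                  space: cgImage.colorSpace ?? CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: cgImage.bitmapInfo.rawValue) else { return nil }
    context.interpolationQuality = .high
    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
    guard let scaled = context.makeImage() else { return nil }
    return UIImage(cgImage: scaled, scale: source.scale, orientation: source.imageOrientation)
}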
Related
I have a round avatar image with a transparent background. I want to create a new round image of the same size out of the initial image, with a gradient background behind it, so the avatar looks like it's standing against a sky instead of having a transparent background.
Since I will use this image as a tab bar item's image, I couldn't use a UIView and edit its background layer.
And to make it reusable, I wanted to create a UIImage extension.
Below is what I do:
extension UIImage {
func gradientImage() -> UIImage? {
let width = self.size.width
let height = self.size.height
UIGraphicsBeginImageContextWithOptions(size, false, 0)
let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue)
let colorSpace = CGColorSpaceCreateDeviceRGB()
guard let bitmapContext = CGContext(data: nil,
width: Int(width),
height: Int(height),
bitsPerComponent: 8,
bytesPerRow: 0,
space: colorSpace,
bitmapInfo: bitmapInfo.rawValue) else { return nil }
let locations: [CGFloat] = [0.0, 1.0]
let top = R.color.duckDimDarkGrey()?.cgColor
let bottom = R.color.duckPencilDark()?.cgColor
let colors = [top, bottom] as CFArray
guard let gradient = CGGradient(colorsSpace: colorSpace, colors: colors, locations: locations) else {
return nil
}
bitmapContext.drawLinearGradient(gradient, start: CGPoint.zero, end: CGPoint(x: 0, y: size.height), options: CGGradientDrawingOptions())
guard let cgImage = UIGraphicsGetImageFromCurrentImageContext()?.cgImage else { return nil }
UIGraphicsEndImageContext()
let img = UIImage(cgImage: cgImage)
return img
}
}
Here is how I use it:
let image1 = UIImage(named: "test.png")
self.tabBar.items?[3].image = image1?.gradientImage()
However, I am getting an empty image somehow.
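No fix appears in this thread, but note that the snippet above draws the gradient into a standalone CGContext while the returned image is read from the (never-drawn-into) UIGraphics context, and the avatar itself is never drawn at all. Below is a minimal sketch of one way to do both in a single context; it assumes iOS 10+ (UIGraphicsImageRenderer), and the method and color parameters are hypothetical names, not from the question:
import UIKit

extension UIImage {
    // A sketch: draw a vertical gradient, then composite the avatar on top.
    func withGradientBackground(top: UIColor, bottom: UIColor) -> UIImage {
        let renderer = UIGraphicsImageRenderer(size: size)
        return renderer.image { ctx in
            let locations: [CGFloat] = [0.0, 1.0]
            let colors = [top.cgColor, bottom.cgColor] as CFArray
            if let gradient = CGGradient(colorsSpace: CGColorSpaceCreateDeviceRGB(),
                                         colors: colors,
                                         locations: locations) {
                ctx.cgContext.drawLinearGradient(gradient,
                                                 start: .zero,
                                                 end: CGPoint(x: 0, y: size.height),
                                                 options: [])
            }
            // Composite the round, transparent-background avatar over the gradient.
            draw(in: CGRect(origin: .zero, size: size))
        }
    }
}
Also note that tab bar item images render as templates by default, so the result may need .withRenderingMode(.alwaysOriginal) to keep its colors.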
I have this extension (found in Objective-C and converted to Swift 3) to get the same UIImage but grayscaled:
public func getGrayScale() -> UIImage
{
    let width = self.size.width
    let height = self.size.height
    let imgRect = CGRect(x: 0, y: 0, width: width, height: height)
    let colorSpace = CGColorSpaceCreateDeviceGray()
    let context = CGContext(data: nil, width: Int(width), height: Int(height), bitsPerComponent: 8, bytesPerRow: 0, space: colorSpace, bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue).rawValue)
    context?.draw(self.cgImage!, in: imgRect)
    let imageRef = context!.makeImage()
    let newImg = UIImage(cgImage: imageRef!)
    return newImg
}
I can see the gray image, but its quality is pretty bad... The only thing I can see that's related to the quality is bitsPerComponent: 8 in the context constructor. However, looking at Apple's documentation on supported pixel formats, it shows that iOS only supports 8 bits per component... So why can't I improve the quality?
Try the code below:
Note: the code has been updated and the error fixed...
Code tested in Swift 3.
originalImage is the UIImageView holding the image that you are trying to convert.
Answer 1:
var context = CIContext(options: nil)
Update: CIContext is the Core Image component that handles rendering; all Core Image processing is done in a CIContext. This is somewhat similar to a Core Graphics or OpenGL context. For more info, see the Apple documentation.
func Noir() {
let currentFilter = CIFilter(name: "CIPhotoEffectNoir")
currentFilter!.setValue(CIImage(image: originalImage.image!), forKey: kCIInputImageKey)
let output = currentFilter!.outputImage
let cgimg = context.createCGImage(output!,from: output!.extent)
let processedImage = UIImage(cgImage: cgimg!)
originalImage.image = processedImage
}
Also, consider the following filters, which can produce a similar effect:
CIPhotoEffectMono
CIPhotoEffectTonal
(Output screenshots from Answer 1 and Answer 2 omitted.)
Answer 2 (improved answer): auto-adjusting the input image before applying the Core Image filter
var context = CIContext(options: nil)
func Noir() {
//Auto Adjustment to Input Image
var inputImage = CIImage(image: originalImage.image!)
let options:[String : AnyObject] = [CIDetectorImageOrientation:1 as AnyObject]
let filters = inputImage!.autoAdjustmentFilters(options: options)
for filter: CIFilter in filters {
filter.setValue(inputImage, forKey: kCIInputImageKey)
inputImage = filter.outputImage
}
let cgImage = context.createCGImage(inputImage!, from: inputImage!.extent)
self.originalImage.image = UIImage(cgImage: cgImage!)
//Apply the tonal filter
let currentFilter = CIFilter(name: "CIPhotoEffectTonal")
currentFilter!.setValue(CIImage(image: UIImage(cgImage: cgImage!)), forKey: kCIInputImageKey)
let output = currentFilter!.outputImage
let cgimg = context.createCGImage(output!, from: output!.extent)
let processedImage = UIImage(cgImage: cgimg!)
originalImage.image = processedImage
}
Note: if you want to see better results, you should test your code on a real device, not in the simulator...
A Swift 4.0 extension that returns an optional UIImage to avoid any potential crashes down the road.
import UIKit
extension UIImage {
var noir: UIImage? {
let context = CIContext(options: nil)
guard let currentFilter = CIFilter(name: "CIPhotoEffectNoir") else { return nil }
currentFilter.setValue(CIImage(image: self), forKey: kCIInputImageKey)
if let output = currentFilter.outputImage,
let cgImage = context.createCGImage(output, from: output.extent) {
return UIImage(cgImage: cgImage, scale: scale, orientation: imageOrientation)
}
return nil
}
}
To use this:
let image = UIImage(...)
let noirImage = image.noir // noirImage is an optional UIImage (UIImage?)
Joe's answer as a UIImage extension for Swift 4, working correctly for different scales:
extension UIImage {
var noir: UIImage {
let context = CIContext(options: nil)
let currentFilter = CIFilter(name: "CIPhotoEffectNoir")!
currentFilter.setValue(CIImage(image: self), forKey: kCIInputImageKey)
let output = currentFilter.outputImage!
let cgImage = context.createCGImage(output, from: output.extent)!
let processedImage = UIImage(cgImage: cgImage, scale: scale, orientation: imageOrientation)
return processedImage
}
}
I'd use Core Image, which should preserve the quality.
func convertImageToBW(image:UIImage) -> UIImage {
let filter = CIFilter(name: "CIPhotoEffectMono")
// convert UIImage to CIImage and set as input
let ciInput = CIImage(image: image)
filter?.setValue(ciInput, forKey: "inputImage")
// get output CIImage, render as CGImage first to retain proper UIImage scale
let ciOutput = filter?.outputImage
let ciContext = CIContext()
let cgImage = ciContext.createCGImage(ciOutput!, from: (ciOutput?.extent)!)
return UIImage(cgImage: cgImage!)
}
Depending on how you use this code, you may want to create the CIContext outside of it for performance reasons.
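For example, a minimal sketch of that suggestion with the CIContext hoisted into a shared property (the type and property names here are mine, not from the answer):
import UIKit

enum BWConverter {
    // Creating a CIContext is expensive, so create it once and reuse it.
    static let sharedContext = CIContext()

    static func convertImageToBW(image: UIImage) -> UIImage? {
        guard let filter = CIFilter(name: "CIPhotoEffectMono") else { return nil }
        filter.setValue(CIImage(image: image), forKey: kCIInputImageKey)
        guard let output = filter.outputImage,
              let cgImage = sharedContext.createCGImage(output, from: output.extent) else { return nil }
        return UIImage(cgImage: cgImage, scale: image.scale, orientation: image.imageOrientation)
    }
}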
Here's a category in Objective-C. Note that, critically, this version takes scale into consideration.
- (UIImage *)grayscaleImage{
return [self imageWithCIFilter:@"CIPhotoEffectMono"];
}
- (UIImage *)imageWithCIFilter:(NSString*)filterName{
CIImage *unfiltered = [CIImage imageWithCGImage:self.CGImage];
CIFilter *filter = [CIFilter filterWithName:filterName];
[filter setValue:unfiltered forKey:kCIInputImageKey];
CIImage *filtered = [filter outputImage];
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgimage = [context createCGImage:filtered fromRect:CGRectMake(0, 0, self.size.width*self.scale, self.size.height*self.scale)];
// Do not use initWithCIImage because that renders the filter each time the image is displayed. This causes slow scrolling in tableviews.
UIImage *image = [[UIImage alloc] initWithCGImage:cgimage scale:self.scale orientation:self.imageOrientation];
CGImageRelease(cgimage);
return image;
}
All the above solutions rely on CIImage, while a UIImage will often have a CGImage as its underlying image rather than a CIImage. So you have to convert the underlying image to a CIImage at the beginning and convert it back to a CGImage at the end (if you don't, constructing the UIImage with a CIImage will effectively do it for you).
Although that is probably OK for many use cases, the conversion between CGImage and CIImage is not free: it can be slow, and it can create a big memory spike while converting.
So I want to mention a completely different solution that doesn't require converting the image back and forth. It uses Accelerate, and it's perfectly described by Apple here.
Here's a playground example that demonstrates both methods.
import UIKit
import Accelerate
extension CIImage {
func toGrayscale() -> CIImage? {
guard let output = CIFilter(name: "CIPhotoEffectNoir", parameters: [kCIInputImageKey: self])?.outputImage else {
return nil
}
return output
}
}
extension CGImage {
func toGrayscale() -> CGImage {
guard let format = vImage_CGImageFormat(cgImage: self),
// The source image buffer
var sourceBuffer = try? vImage_Buffer(
cgImage: self,
format: format
),
// The 1-channel, 8-bit vImage buffer used as the operation destination.
var destinationBuffer = try? vImage_Buffer(
width: Int(sourceBuffer.width),
height: Int(sourceBuffer.height),
bitsPerPixel: 8
) else {
    return self
}
// Free the malloc'ed vImage buffers when this scope exits
// (without this, both buffers leak; Apple's sample does the same).
defer {
    sourceBuffer.free()
    destinationBuffer.free()
}
// Declare the three coefficients that model the eye's sensitivity
// to color.
let redCoefficient: Float = 0.2126
let greenCoefficient: Float = 0.7152
let blueCoefficient: Float = 0.0722
// Create a 1D matrix containing the three luma coefficients that
// specify the color-to-grayscale conversion.
let divisor: Int32 = 0x1000
let fDivisor = Float(divisor)
var coefficientsMatrix = [
Int16(redCoefficient * fDivisor),
Int16(greenCoefficient * fDivisor),
Int16(blueCoefficient * fDivisor)
]
// Use the matrix of coefficients to compute the scalar luminance by
// returning the dot product of each RGB pixel and the coefficients
// matrix.
let preBias: [Int16] = [0, 0, 0, 0]
let postBias: Int32 = 0
vImageMatrixMultiply_ARGB8888ToPlanar8(
&sourceBuffer,
&destinationBuffer,
&coefficientsMatrix,
divisor,
preBias,
postBias,
vImage_Flags(kvImageNoFlags)
)
// Create a 1-channel, 8-bit grayscale format that's used to
// generate a displayable image.
guard let monoFormat = vImage_CGImageFormat(
bitsPerComponent: 8,
bitsPerPixel: 8,
colorSpace: CGColorSpaceCreateDeviceGray(),
bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue),
renderingIntent: .defaultIntent
) else {
return self
}
// Create a Core Graphics image from the grayscale destination buffer.
guard let result = try? destinationBuffer.createCGImage(format: monoFormat) else {
return self
}
return result
}
}
To test, I used the full-size version of this image.
let start = Date()
var prev = start.timeIntervalSinceNow * -1
func info(_ id: String) {
print("\(id)\t: \(start.timeIntervalSinceNow * -1 - prev)")
prev = start.timeIntervalSinceNow * -1
}
info("started")
let original = UIImage(named: "Golden_Gate_Bridge_2021.jpg")!
info("loaded UIImage(named)")
let cgImage = original.cgImage!
info("original.cgImage")
let cgImageToGreyscale = cgImage.toGrayscale()
info("cgImage.toGrayscale()")
let uiImageFromCGImage = UIImage(cgImage: cgImageToGreyscale, scale: original.scale, orientation: original.imageOrientation)
info("UIImage(cgImage)")
let ciImage = CIImage(image: original)!
info("CIImage(image: original)!")
let ciImageToGreyscale = ciImage.toGrayscale()!
info("ciImage.toGrayscale()")
let uiImageFromCIImage = UIImage(ciImage: ciImageToGreyscale, scale: original.scale, orientation: original.imageOrientation)
info("UIImage(ciImage)")
The results (in seconds):
The CGImage method took about 1 second total:
original.cgImage : 0.5257829427719116
cgImage.toGrayscale() : 0.46222901344299316
UIImage(cgImage) : 0.1819549798965454
The CIImage method took about 7 seconds total:
CIImage(image: original)! : 0.6055610179901123
ciImage.toGrayscale() : 4.969912052154541
UIImage(ciImage) : 2.395193934440613
When saving the images to disk as JPEG, the one created with CGImage was also 3 times smaller than the one created with CIImage (5 MB vs. 17 MB). The quality was good in both images.
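The saving step isn't shown in the answer; presumably it was something along these lines (a sketch, with a hypothetical file name):
// Write the grayscale image to disk as JPEG to compare file sizes.
if let data = uiImageFromCGImage.jpegData(compressionQuality: 0.9) {
    let url = URL(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent("grayscale-cg.jpg")
    try? data.write(to: url)
}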
As per Joe's answer, we can easily convert the original image to B&W. But to get back to the original image, refer to this code:
var context = CIContext(options: nil)
var startingImage : UIImage = UIImage()
func Noir() {
startingImage = imgView.image!
var inputImage = CIImage(image: imgView.image!)!
let options:[String : AnyObject] = [CIDetectorImageOrientation:1 as AnyObject]
let filters = inputImage.autoAdjustmentFilters(options: options)
for filter: CIFilter in filters {
filter.setValue(inputImage, forKey: kCIInputImageKey)
inputImage = filter.outputImage!
}
let cgImage = context.createCGImage(inputImage, from: inputImage.extent)
self.imgView.image = UIImage(cgImage: cgImage!)
//Filter Logic
let currentFilter = CIFilter(name: "CIPhotoEffectNoir")
currentFilter!.setValue(CIImage(image: UIImage(cgImage: cgImage!)), forKey: kCIInputImageKey)
let output = currentFilter!.outputImage
let cgimg = context.createCGImage(output!, from: output!.extent)
let processedImage = UIImage(cgImage: cgimg!)
imgView.image = processedImage
}
func Original(){
imgView.image = startingImage
}
I'm trying to change the size and the resolution of an image programmatically, and afterwards I save this image.
The image size in the image view changes, but when I look at my file "file3.png" it always has the original resolution of 640x1142.
I googled around but can't find a solution. I tried to redraw the image, but maybe that's the wrong strategy.
Thanks
@IBAction func pickOneImageBtn(sender: AnyObject) {
//load image from path
pickedImage.image = loadImageFromPath(fileInDocumentsDirectory("Angebote.png"))
let newSize = NSSize(width: 10, height: 10)
if let image = pickedImage.image {
print("found image")
//cast to CGImage
var imageRect:CGRect = CGRectMake(0, 0, image.size.width, image.size.height)
let imageRef = image.CGImageForProposedRect(&imageRect, context: nil, hints: nil)
if let imageRefExists = imageRef {
print("Cast to CGImage worked \(imageRefExists)")
}
//redraw to NSImage with new size
let imageWithNewSize = NSImage(CGImage: imageRef!, size: newSize)
//save on disk
let imgData: NSData! = imageWithNewSize.TIFFRepresentation!
let bitmap: NSBitmapImageRep! = NSBitmapImageRep(data: imgData!)
if let pngCoverImage = bitmap!.representationUsingType(NSBitmapImageFileType.NSPNGFileType, properties: [:]) {
pngCoverImage.writeToFile("/...correctpath.../imageSourceForResize/file3.png", atomically: false)
print("saved new image")
}
//the size is smaller
pickedImage.image = imageWithNewSize
}
}
Change
let imgData: NSData! = pickedImage.image!.TIFFRepresentation!
to
let imgData: NSData! = imageWithNewSize.TIFFRepresentation!
I tried to change the size of an NSImage for a Mac application, and here is a working function to resize an image, written in Swift.
func resize(image: NSImage, w: Int, h: Int) -> NSImage
{
let destSize = NSMakeSize(CGFloat(w), CGFloat(h))
let newImage = NSImage(size: destSize)
newImage.lockFocus()
image.drawInRect(NSMakeRect(0, 0, destSize.width, destSize.height), fromRect: NSZeroRect, operation: NSCompositingOperation.CompositeCopy, fraction: 1.0)
newImage.unlockFocus()
newImage.size = destSize
return NSImage(data: newImage.TIFFRepresentation!)!
}
You need to pass three parameters to call this function, i.e. the NSImage, width, and height, and this function will return the resized image:
targetimage = resize(source, w: Int(targetwidth), h: Int(targetheight))
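Note that lockFocus-based drawing renders at the current screen's backing scale, so on a Retina display the file on disk can still contain more pixels than w × h. Below is a sketch of a pixel-exact alternative using NSBitmapImageRep; the helper name is mine, and it assumes an RGBA source:
import AppKit

// A sketch: resize to an exact pixel size, independent of the screen scale.
func resizePixelExact(image: NSImage, width: Int, height: Int) -> NSImage? {
    guard let rep = NSBitmapImageRep(bitmapDataPlanes: nil,
                                     pixelsWide: width,
                                     pixelsHigh: height,
                                     bitsPerSample: 8,
                                     samplesPerPixel: 4,
                                     hasAlpha: true,
                                     isPlanar: false,
                                     colorSpaceName: .deviceRGB,
                                     bytesPerRow: 0,
                                     bitsPerPixel: 0) else { return nil }
    rep.size = NSSize(width: width, height: height) // point size == pixel size
    NSGraphicsContext.saveGraphicsState()
    NSGraphicsContext.current = NSGraphicsContext(bitmapImageRep: rep)
    image.draw(in: NSRect(x: 0, y: 0, width: width, height: height),
               from: .zero, operation: .copy, fraction: 1.0)
    NSGraphicsContext.restoreGraphicsState()
    let result = NSImage(size: rep.size)
    result.addRepresentation(rep)
    return result
}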
I tried:
let scale = UIScreen.mainScreen().scale
UIGraphicsBeginImageContextWithOptions(metalLayer.bounds.size, false, scale)
// metalLayer.renderInContext(UIGraphicsGetCurrentContext()!)
// self.view.layer ...
metalLayer.presentationLayer()!.renderInContext(UIGraphicsGetCurrentContext()!)
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil)
But the result is an empty screenshot. Any help would be nice!
Please keep in mind that I want to take a snapshot of a CAMetalLayer.
To make a screenshot, you need to get the MTLTexture of the framebuffer.
1. If you use MTKView:
let texture = view.currentDrawable!.texture
2. If you don't use MTKView
Here's what I would do: I would have a property which holds the last drawable presented to the screen:
var lastDrawableDisplayed: CAMetalDrawable?
And then, when you present a drawable to the screen, update it:
let commandBuffer = commandQueue.commandBuffer()
commandBuffer.addCompletedHandler { buffer in
self.lastDrawableDisplayed = drawable
}
Now, whenever you need to take a screenshot, you can get the texture like this:
let texture = lastDrawableDisplayed!.texture
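One assumption worth making explicit (it isn't stated in this answer): to read a drawable's texture back on the CPU, or hand it to Core Image, the layer generally must not be framebuffer-only:
// Set once during setup; framebuffer-only textures cannot be read back.
metalLayer.framebufferOnly = false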
OK, now that you have an MTLTexture, you can convert it to a CGImage and then to a UIImage or NSImage.
Here's the code for an OS X playground (MetalKit.MTLTextureLoader is not available in iOS playgrounds), in which I convert an MTLTexture to a CGImage.
I made a small extension on MTLTexture for this.
import Metal
import MetalKit
import Cocoa
let device = MTLCreateSystemDefaultDevice()!
let textureLoader = MTKTextureLoader(device: device)
let path = "path/to/your/image.jpg"
let data = NSData(contentsOfFile: path)!
let texture = try! textureLoader.newTextureWithData(data, options: nil)
extension MTLTexture {
func bytes() -> UnsafeMutablePointer<Void> {
let width = self.width
let height = self.height
let rowBytes = self.width * 4
let p = malloc(width * height * 4)
self.getBytes(p, bytesPerRow: rowBytes, fromRegion: MTLRegionMake2D(0, 0, width, height), mipmapLevel: 0)
return p
}
func toImage() -> CGImage? {
let p = bytes()
let pColorSpace = CGColorSpaceCreateDeviceRGB()
let rawBitmapInfo = CGImageAlphaInfo.NoneSkipFirst.rawValue | CGBitmapInfo.ByteOrder32Little.rawValue
let bitmapInfo:CGBitmapInfo = CGBitmapInfo(rawValue: rawBitmapInfo)
let selftureSize = self.width * self.height * 4
let rowBytes = self.width * 4
let provider = CGDataProviderCreateWithData(nil, p, selftureSize, nil)
let cgImageRef = CGImageCreate(self.width, self.height, 8, 32, rowBytes, pColorSpace, bitmapInfo, provider, nil, true, CGColorRenderingIntent.RenderingIntentDefault)!
return cgImageRef
}
}
if let imageRef = texture.toImage() {
let image = NSImage(CGImage: imageRef, size: NSSize(width: texture.width, height: texture.height))
}
For Swift 4.0, just converting the code provided by haawa:
let lastDrawableDisplayed = metalView?.currentDrawable?.texture
if let imageRef = lastDrawableDisplayed?.toImage() {
let uiImage:UIImage = UIImage.init(cgImage: imageRef)
}
extension MTLTexture {
func bytes() -> UnsafeMutableRawPointer {
let width = self.width
let height = self.height
let rowBytes = self.width * 4
let p = malloc(width * height * 4)
self.getBytes(p!, bytesPerRow: rowBytes, from: MTLRegionMake2D(0, 0, width, height), mipmapLevel: 0)
return p!
}
func toImage() -> CGImage? {
let p = bytes()
let pColorSpace = CGColorSpaceCreateDeviceRGB()
let rawBitmapInfo = CGImageAlphaInfo.noneSkipFirst.rawValue | CGBitmapInfo.byteOrder32Little.rawValue
let bitmapInfo:CGBitmapInfo = CGBitmapInfo(rawValue: rawBitmapInfo)
let selftureSize = self.width * self.height * 4
let rowBytes = self.width * 4
let releaseMaskImagePixelData: CGDataProviderReleaseDataCallback = { (info: UnsafeMutableRawPointer?, data: UnsafeRawPointer, size: Int) -> () in
return
}
let provider = CGDataProvider(dataInfo: nil, data: p, size: selftureSize, releaseData: releaseMaskImagePixelData)
let cgImageRef = CGImage(width: self.width, height: self.height, bitsPerComponent: 8, bitsPerPixel: 32, bytesPerRow: rowBytes, space: pColorSpace, bitmapInfo: bitmapInfo, provider: provider!, decode: nil, shouldInterpolate: true, intent: CGColorRenderingIntent.defaultIntent)!
return cgImageRef
}
}
I didn't manage to get the accepted answer to work in Swift 4 / Metal 2 with Xcode 9.1 on an iPhone 6s. Therefore I used a slightly different approach, assuming lastDrawableDisplayed is saved as described in the accepted answer. Quick and dirty and without any exception handling:
let context = CIContext()
let texture = self.lastDrawableDisplayed!.texture
let cImg = CIImage(mtlTexture: texture, options: nil)!
let cgImg = context.createCGImage(cImg, from: cImg.extent)!
let uiImg = UIImage(cgImage: cgImg)
This is based on the documentation for the CIImage initializer that is used:
init(mtlTexture:options:) Initializes an image object with data supplied by a Metal texture.
and on CIImage Processing, which describes how to create a CGImage using a CIContext:
CIContext() Create[s] a CIContext object (with default options) [...] context.createCGImage Render[s] the output image to a Core Graphics image that you can display or save to a file.
Hope that helps for anyone using Swift 4.
Edit: Additionally, I have multiple overlapping CAMetalLayers in my project and want to combine them into one single UIImage. Therefore I need a reference to the last CAMetalDrawable object of each layer. Before a new layer is added (and therefore used as the provider of nextDrawable()), I simply append its lastDrawableDisplayed to a [CAMetalDrawable] array. When "exporting" the layers, I write all the UIImages into a bitmap-based graphics context one after another and get the final image with UIGraphicsGetImageFromCurrentImageContext(), as sketched below.
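A minimal sketch of that compositing step, assuming layerImages already holds one snapshot per layer, ordered bottom to top and all the same size (the names are mine):
import UIKit

func combine(layerImages: [UIImage]) -> UIImage? {
    guard let first = layerImages.first else { return nil }
    UIGraphicsBeginImageContextWithOptions(first.size, false, first.scale)
    defer { UIGraphicsEndImageContext() }
    // Draw each layer's snapshot over the previous ones.
    for image in layerImages {
        image.draw(in: CGRect(origin: .zero, size: first.size))
    }
    return UIGraphicsGetImageFromCurrentImageContext()
}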
Edit: If you are having trouble with orientation, try the following:
let uiImg = UIImage(cgImage: cgImg, scale: 1.0, orientation: UIImageOrientation.downMirrored)
MTLTexture's toImage() method needs to release the pixel data's memory in the release-data callback. Since the buffer was allocated with malloc, the callback should free it:
let releaseMaskImagePixelData: CGDataProviderReleaseDataCallback = { (info: UnsafeMutableRawPointer?, data: UnsafeRawPointer, size: Int) -> () in
    // The buffer passed to CGDataProvider was malloc'ed in bytes(), so free it here.
    free(UnsafeMutableRawPointer(mutating: data))
}
Swift 4.2: the MTLTexture extension is identical to the one in the Swift 4.0 answer above.
I'm generating a QR Code to put into a UIImage. I'm running the generation function asynchronously but for some reason the app crashes when I run it on my phone, but doesn't crash in the simulator. I'm not really sure what's going on... Any ideas?
Setup Image
let QR = UIImageView()
dispatch_async(dispatch_get_global_queue(Int(QOS_CLASS_USER_INITIATED.value), 0)) { // 1
var img = self.generateQRImage(self.arr[sender.tag],withSizeRate: self.screenWidth-40)
dispatch_async(dispatch_get_main_queue()) { // 2
QR.image = img
}
}
QR.frame = CGRectMake(0,0,screenWidth-40,screenWidth-40)
QR.center = CGPoint(x:screenWidth/2,y:screenHeight/2)
sView.addSubview(QR)
Generate QR
func generateQRImage(stringQR:NSString, withSizeRate rate:CGFloat) -> UIImage
{
var filter:CIFilter = CIFilter(name:"CIQRCodeGenerator")
filter.setDefaults()
var data:NSData = stringQR.dataUsingEncoding(NSUTF8StringEncoding)!
filter.setValue(data, forKey: "inputMessage")
var outputImg:CIImage = filter.outputImage
var context:CIContext = CIContext(options: nil)
var cgimg:CGImageRef = context.createCGImage(outputImg, fromRect: outputImg.extent())
var img:UIImage = UIImage(CGImage: cgimg, scale: 1.0, orientation: UIImageOrientation.Up)!
var width = img.size.width * rate
var height = img.size.height * rate
UIGraphicsBeginImageContext(CGSizeMake(width, height))
var cgContxt:CGContextRef = UIGraphicsGetCurrentContext()
CGContextSetInterpolationQuality(cgContxt, kCGInterpolationNone)
img.drawInRect(CGRectMake(0, 0, width, height))
img = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return img
}
The intent of withSizeRate is clearly to be a scaling factor to apply to the QR image (which is 27×27 here). But you are using the screen width as the multiplier. That results in an exceedingly large image once it's uncompressed and used in an image view (don't go by the size of the resulting JPEG/PNG file). For example, on an iPhone 6 the rate is 375 − 40 = 335, so the bitmap is roughly 27 × 335 ≈ 9,045 pixels on a side, and 9,045² × 4 bytes ≈ 300 MB; on an iPhone 6+ the same arithmetic gives nearly 400 MB for the theoretical internal, uncompressed representation. When I ran it through the iPhone 6 simulator, memory usage actually spiked to 2.4 GB.
I would suggest using a smaller scaling factor, or just creating an image that is precisely the size of the image view (though use zero for the scale with UIGraphicsBeginImageContextWithOptions).
For example, you could simply pass the CGSize of the image view to generateQRImage, and adjust the method like so:
func generateQRImage(stringQR: String, size: CGSize) -> UIImage {
let filter = CIFilter(name:"CIQRCodeGenerator")
filter.setDefaults()
let data = stringQR.dataUsingEncoding(NSUTF8StringEncoding)!
filter.setValue(data, forKey: "inputMessage")
let outputImage = filter.outputImage
let context = CIContext(options: nil)
let cgImage = context.createCGImage(outputImage, fromRect: outputImage.extent())
var image = UIImage(CGImage: cgImage, scale: 1.0, orientation: UIImageOrientation.Up)!
let width = size.width
let height = size.height
UIGraphicsBeginImageContextWithOptions(CGSizeMake(width, height), true, 0)
let cgContext = UIGraphicsGetCurrentContext()
CGContextSetInterpolationQuality(cgContext, kCGInterpolationNone)
image.drawInRect(CGRectMake(0, 0, width, height))
image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return image
}
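A hypothetical call site, reusing the names from the question (the frame should be set before generating, so that QR.bounds is non-zero):
let img = generateQRImage(self.arr[sender.tag], size: QR.bounds.size)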