Asynchronous function causing crash - Swift

I'm generating a QR code to put into a UIImage. I'm running the generation function asynchronously, but for some reason the app crashes when I run it on my phone, while it doesn't crash in the simulator. I'm not really sure what's going on... Any ideas?
Setup Image
let QR = UIImageView()
dispatch_async(dispatch_get_global_queue(Int(QOS_CLASS_USER_INITIATED.value), 0)) { // 1
    var img = self.generateQRImage(self.arr[sender.tag], withSizeRate: self.screenWidth - 40)
    dispatch_async(dispatch_get_main_queue()) { // 2
        QR.image = img
    }
}
QR.frame = CGRectMake(0, 0, screenWidth - 40, screenWidth - 40)
QR.center = CGPoint(x: screenWidth / 2, y: screenHeight / 2)
sView.addSubview(QR)
Generate QR
func generateQRImage(stringQR: NSString, withSizeRate rate: CGFloat) -> UIImage {
    var filter: CIFilter = CIFilter(name: "CIQRCodeGenerator")
    filter.setDefaults()
    var data: NSData = stringQR.dataUsingEncoding(NSUTF8StringEncoding)!
    filter.setValue(data, forKey: "inputMessage")
    var outputImg: CIImage = filter.outputImage
    var context: CIContext = CIContext(options: nil)
    var cgimg: CGImageRef = context.createCGImage(outputImg, fromRect: outputImg.extent())
    var img: UIImage = UIImage(CGImage: cgimg, scale: 1.0, orientation: UIImageOrientation.Up)!
    var width = img.size.width * rate
    var height = img.size.height * rate
    UIGraphicsBeginImageContext(CGSizeMake(width, height))
    var cgContxt: CGContextRef = UIGraphicsGetCurrentContext()
    CGContextSetInterpolationQuality(cgContxt, kCGInterpolationNone)
    img.drawInRect(CGRectMake(0, 0, width, height))
    img = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return img
}

The intent of withSizeRate is clearly to be a scaling factor applied to the QR image (which is 27x27). But you are passing the screen width as that multiplier, which results in an exceedingly large image once it is uncompressed for use in an image view (don't go by the size of the resulting JPEG/PNG file). The theoretical internal, uncompressed representation of this image is extremely large: roughly 300 MB on an iPhone 6 and nearly 400 MB on an iPhone 6+. When I ran it through the iPhone 6 simulator, memory usage actually spiked to 2.4 GB.
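To put rough numbers on it (assuming an iPhone 6, whose screen is 375 points wide): rate = 375 − 40 = 335, so the rendered bitmap is about 27 × 335 = 9,045 pixels on a side, and 9,045 × 9,045 pixels × 4 bytes per pixel ≈ 327 MB of uncompressed image data, before counting any intermediate copies.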
I would suggest using a smaller scaling factor, or just creating an image that is precisely the size of the image view (using zero for the scale parameter of UIGraphicsBeginImageContextWithOptions, so it matches the device's screen scale).
For example, you could simply pass the CGSize of the image view to generateQRImage, and adjust the method like so:
func generateQRImage(stringQR: String, size: CGSize) -> UIImage {
    let filter = CIFilter(name: "CIQRCodeGenerator")
    filter.setDefaults()
    let data = stringQR.dataUsingEncoding(NSUTF8StringEncoding)!
    filter.setValue(data, forKey: "inputMessage")
    let outputImage = filter.outputImage
    let context = CIContext(options: nil)
    let cgImage = context.createCGImage(outputImage, fromRect: outputImage.extent())
    var image = UIImage(CGImage: cgImage, scale: 1.0, orientation: UIImageOrientation.Up)!
    let width = size.width
    let height = size.height
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(width, height), true, 0)
    let cgContext = UIGraphicsGetCurrentContext()
    CGContextSetInterpolationQuality(cgContext, kCGInterpolationNone)
    image.drawInRect(CGRectMake(0, 0, width, height))
    image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return image
}
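Calling the revised method from the original setup code could look roughly like this (a minimal sketch in the same Swift 1.x-era syntax; note the frame is set before dispatching so QR.bounds.size is non-zero):

let QR = UIImageView()
QR.frame = CGRectMake(0, 0, screenWidth - 40, screenWidth - 40)
QR.center = CGPoint(x: screenWidth / 2, y: screenHeight / 2)
sView.addSubview(QR)
dispatch_async(dispatch_get_global_queue(Int(QOS_CLASS_USER_INITIATED.value), 0)) {
    let img = self.generateQRImage(self.arr[sender.tag], size: QR.bounds.size)
    dispatch_async(dispatch_get_main_queue()) {
        QR.image = img
    }
}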

Related

Convert a CGImage to MTLTexture without premultiplication

I have a UIImage which I've previously created from a png file:
let strokeUIImage = UIImage(data: pngData)
I want to convert strokeUIImage (which has opacity) to an MTLTexture for display in an MTKView, but the conversion seems to perform an unwanted premultiplication, which darkens all the semitransparent edges.
My blending settings are as follows:
pipelineDescriptor.colorAttachments[0].isBlendingEnabled = true
pipelineDescriptor.colorAttachments[0].rgbBlendOperation = .add
pipelineDescriptor.colorAttachments[0].alphaBlendOperation = .add
pipelineDescriptor.colorAttachments[0].sourceRGBBlendFactor = .one
pipelineDescriptor.colorAttachments[0].sourceAlphaBlendFactor = .one
pipelineDescriptor.colorAttachments[0].destinationRGBBlendFactor = .oneMinusSourceAlpha
pipelineDescriptor.colorAttachments[0].destinationAlphaBlendFactor = .oneMinusSourceAlpha
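As an aside: with a source RGB factor of .one, this blend equation assumes the sampled color is already premultiplied by its alpha; a texture that really held straight (un-premultiplied) alpha would conventionally be blended roughly like this instead:

pipelineDescriptor.colorAttachments[0].sourceRGBBlendFactor = .sourceAlpha
pipelineDescriptor.colorAttachments[0].sourceAlphaBlendFactor = .one
pipelineDescriptor.colorAttachments[0].destinationRGBBlendFactor = .oneMinusSourceAlpha
pipelineDescriptor.colorAttachments[0].destinationAlphaBlendFactor = .oneMinusSourceAlpha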
I've tried two methods of conversion:
let stampTexture = try! MTKTextureLoader(device: self.device!).newTexture(cgImage: strokeUIImage.cgImage!, options: nil)
and the more elaborate dataProvider-driven method:
let image = strokeUIImage.cgImage!
let imageWidth = image.width
let imageHeight = image.height
let bytesPerPixel:Int! = 4
let rowBytes = imageWidth * bytesPerPixel
let texDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba8Unorm_srgb,
                                                             width: imageWidth,
                                                             height: imageHeight,
                                                             mipmapped: false)
guard let stampTexture = device!.makeTexture(descriptor: texDescriptor) else { return }
let srcData: CFData! = image.dataProvider?.data
let pixelData = CFDataGetBytePtr(srcData)
let region = MTLRegionMake2D(0, 0, imageWidth, imageHeight)
stampTexture.replace(region: region, mipmapLevel: 0, withBytes: pixelData!, bytesPerRow: Int(rowBytes))
both of which yield the same unwanted premultiplied result.
I tried the latter because some posts suggested that the older Swift 3 approach of CGDataProviderCopyData() extracts raw pixel data from the image that is not premultiplied. Sadly, the equivalent:
let srcData: CFData! = image.dataProvider?.data
does not seem to do the trick. Am I missing something?
Any pointers would be appreciated.
After much experimenting, I've come to a solution that addresses the pre-multiplication issue inherent in Core Graphics images. Thanks to Warren's tip about using an Accelerate function (vImageUnpremultiplyData_ARGB8888 in particular), I thought: why not build a CGImage using vImage_CGImageFormat, which lets me play with the bitmapInfo setting that specifies how the alpha is interpreted? The result is not perfect, as demonstrated by the attached image.
Somehow, in the translation, the alpha values get punched up slightly (possibly the RGB as well, but not significantly). By the way, I should point out that the PNG pixel format is sRGB, and the MTKView I'm using is set to MTLPixelFormat.rgba16Float (an app requirement).
Below is the full metalDrawStrokeUIImage routine I implemented. Of particular note is the line:
bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.last.rawValue)
which essentially unassociates the alpha (I think) without calling vImageUnpremultiplyData_ARGB8888. The resulting image certainly looks like an un-premultiplied image...
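For reference, that explicit Accelerate call would look roughly like this (a sketch only, using the RGBA variant since the buffer below is RGBA, and assuming the sourceBuffer created in the full routine that follows):

// unpremultiply the RGBA buffer in place before uploading it to the texture
vImageUnpremultiplyData_RGBA8888(&sourceBuffer, &sourceBuffer, numericCast(kvImageNoFlags))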
Lastly, to get back a premultiplied texture on the MTKView side, I let the fragment shader handle the pre-multiplication:
fragment float4 premult_fragment(VertexOut interpolated [[stage_in]],
                                 texture2d<float> texture [[texture(0)]],
                                 sampler sampler2D [[sampler(0)]]) {
    float4 sampled = texture.sample(sampler2D, interpolated.texCoord);
    // this fragment shader premultiplies incoming rgb with the texture's alpha
    return float4(sampled.r * sampled.a,
                  sampled.g * sampled.a,
                  sampled.b * sampled.a,
                  sampled.a);
} // end of premult_fragment
The result is pretty close to the input source, but the image is maybe 5% more opaque than the incoming PNG. Again, the PNG pixel format is sRGB, and the MTKView I'm using for display is set to MTLPixelFormat.rgba16Float. So I'm sure something is getting mushed somewhere. If anyone has any pointers, I'd sure appreciate it.
Below is the rest of the relevant code:
func metalDrawStrokeUIImage(strokeUIImage: UIImage, strokeBbox: CGRect) {
    self.metalSetupRenderPipeline(compStyle: compMode.strokeCopy) // needed so stampTexture is not modified by fragmentFunction
    let bytesPerPixel = 4
    let bitsPerComponent = 8
    let width = Int(strokeUIImage.size.width)
    let height = Int(strokeUIImage.size.height)
    let rowBytes = width * bytesPerPixel

    let texDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba8Unorm_srgb,
                                                                 width: width,
                                                                 height: height,
                                                                 mipmapped: false)
    guard let stampTexture = device!.makeTexture(descriptor: texDescriptor) else { return }

    //let cgImage: CGImage = strokeUIImage.cgImage!
    //let sourceColorSpace = cgImage.colorSpace else {
    guard
        let cgImage = strokeUIImage.cgImage,
        let sourceColorSpace = cgImage.colorSpace else {
            print("Unable to initialize cgImage or colorSpace.")
            return
    }

    var format = vImage_CGImageFormat(
        bitsPerComponent: UInt32(cgImage.bitsPerComponent),
        bitsPerPixel: UInt32(cgImage.bitsPerPixel),
        colorSpace: Unmanaged.passRetained(sourceColorSpace),
        bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.last.rawValue),
        version: 0, decode: nil,
        renderingIntent: CGColorRenderingIntent.defaultIntent)

    var sourceBuffer = vImage_Buffer()
    defer {
        free(sourceBuffer.data)
    }

    var error = vImageBuffer_InitWithCGImage(&sourceBuffer, &format, nil, cgImage, numericCast(kvImageNoFlags))
    guard error == kvImageNoError else {
        print("[MetalBrushStrokeView]: can't vImageBuffer_InitWithCGImage")
        return
    }
    //vImagePremultiplyData_RGBA8888(&sourceBuffer, &sourceBuffer, numericCast(kvImageNoFlags))

    // create a CGImage from vImage_Buffer
    var destCGImage = vImageCreateCGImageFromBuffer(&sourceBuffer, &format, nil, nil, numericCast(kvImageNoFlags), &error)?.takeRetainedValue()
    guard error == kvImageNoError else {
        print("[MetalBrushStrokeView]: can't vImageCreateCGImageFromBuffer")
        return
    }

    let dstData: CFData = (destCGImage!.dataProvider!.data)!
    let pixelData = CFDataGetBytePtr(dstData)
    destCGImage = nil

    let region = MTLRegionMake2D(0, 0, Int(width), Int(height))
    stampTexture.replace(region: region, mipmapLevel: 0, withBytes: pixelData!, bytesPerRow: Int(rowBytes))

    let stampColor = UIColor.white
    let stampCorners = self.stampSetVerticesFromBbox(bbox: strokeBbox)
    self.stampAppendToVertexBuffer(stampLayer: stampLayerMode.stampLayerFG, stampCorners: stampCorners, stampColor: stampColor)
    self.metalRenderStampSingle(stampTexture: stampTexture)
    self.initializeStampArray() // clears out the stamp array so we always draw 1 stamp at a time
} // end of func metalDrawStrokeUIImage(strokeUIImage: UIImage, strokeBbox: CGRect)

How to use scale factor to scale image in Swift

I have a large image, 1920x1080 pixels. I'm trying to scale it in two different ways:
First: using CIFilter
func resize(image: UIImage, scale: Float, aspect: Float = 1) -> UIImage? {
    return autoreleasepool(invoking: {
        [weak self] () -> UIImage? in
        var filter: CIFilter! = CIFilter(name: "CILanczosScaleTransform")!
        filter.setValue(CIImage(image: image), forKey: kCIInputImageKey)
        filter.setValue(NSNumber(value: scale as Float), forKey: kCIInputScaleKey)
        filter.setValue(NSNumber(value: aspect as Float), forKey: kCIInputAspectRatioKey)
        var result: UIImage?
        var cgImage: CGImage? = nil
        if let outputImage = filter.outputImage {
            cgImage = self?.ctx?.createCGImage(outputImage, from: outputImage.extent)
        }
        if let cgImg = cgImage {
            result = self?.convertUIImage(fromCGImage: cgImg)
        }
        if #available(iOS 10.0, *) {
            self?.ctx?.clearCaches()
        }
        cgImage = nil
        filter.setValue(nil, forKey: kCIInputImageKey)
        filter.setValue(nil, forKey: kCIInputScaleKey)
        filter.setValue(nil, forKey: kCIInputAspectRatioKey)
        filter.setDefaults()
        filter = nil
        return result
    })
}
Second: using UIImage()
func scaleImage(scale: CGFloat) -> UIImage? {
    if let cgImage = self.cgImage {
        return UIImage(cgImage: cgImage, scale: scale, orientation: imageOrientation)
    }
    return nil
}
But I realized that the scale factor in the two methods produces conflicting results. For example, when I set scale to 2, the first method gives a new image of size 3840x2160, while the second gives 960x540.
I'm really confused. Can anyone explain why this happens?
In the future, when I use a new function that has a scale parameter, how do I know whether the scale will make my image smaller or larger?
Every UIImage has a scale property, and it can be any value: the point size you see is the pixel size divided by that scale. CILanczosScaleTransform multiplies the pixel dimensions by its scale factor, whereas UIImage(cgImage:scale:orientation:) keeps the same pixels and just divides the reported point size by the scale you pass, which is why your second method produced a smaller image. Try this:
func scaleImage(image: UIImage, scale: CGFloat) -> UIImage? {
    let size = CGSize(width: image.size.width * scale, height: image.size.height * scale)
    let drawRect = CGRect(origin: .zero, size: size)
    UIGraphicsBeginImageContextWithOptions(size, false, 0)
    image.draw(in: drawRect)
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage
}
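For illustration (a sketch; "photo" is a hypothetical asset name), with this helper a scale above 1 enlarges the image and a scale below 1 shrinks it, because it redraws the image at a new point size:

let original = UIImage(named: "photo")!                // e.g. the 1920x1080 image from the question
let halved = scaleImage(image: original, scale: 0.5)   // roughly 960x540 points
let doubled = scaleImage(image: original, scale: 2)    // roughly 3840x2160 points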

Convert UIImage to grayscale keeping image quality

I have this extension (originally found in Objective-C and converted to Swift 3) to get a grayscale version of the same UIImage:
public func getGrayScale() -> UIImage {
    let imgRect = CGRect(x: 0, y: 0, width: width, height: height)
    let colorSpace = CGColorSpaceCreateDeviceGray()
    let context = CGContext(data: nil, width: Int(width), height: Int(height), bitsPerComponent: 8, bytesPerRow: 0, space: colorSpace, bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue).rawValue)
    context?.draw(self.cgImage!, in: imgRect)
    let imageRef = context!.makeImage()
    let newImg = UIImage(cgImage: imageRef!)
    return newImg
}
I can see the gray image, but its quality is pretty bad... The only thing I can see that's related to the quality is bitsPerComponent: 8 in the context constructor. However, looking at Apple's documentation, here is what I get:
It shows that iOS only supports 8 bpc... so why can't I improve the quality?
Try the code below.
Note: the code has been updated and the error fixed.
The code was tested in Swift 3. originalImage is the image that you are trying to convert.
Answer 1:
var context = CIContext(options: nil)
Update: CIContext is the Core Image component that handles rendering; all Core Image processing is done in a CIContext. It is somewhat similar to a Core Graphics or OpenGL context. More info is available in the Apple documentation.
func Noir() {
    let currentFilter = CIFilter(name: "CIPhotoEffectNoir")
    currentFilter!.setValue(CIImage(image: originalImage.image!), forKey: kCIInputImageKey)
    let output = currentFilter!.outputImage
    let cgimg = context.createCGImage(output!, from: output!.extent)
    let processedImage = UIImage(cgImage: cgimg!)
    originalImage.image = processedImage
}
You might also consider the following filters, which can produce a similar effect:
CIPhotoEffectMono
CIPhotoEffectTonal
Output from Answer 1:
Output from Answer 2:
Improved answer:
Answer 2: auto-adjusting the input image before applying the Core Image filter
var context = CIContext(options: nil)

func Noir() {
    // Auto-adjust the input image
    var inputImage = CIImage(image: originalImage.image!)
    let options: [String : AnyObject] = [CIDetectorImageOrientation: 1 as AnyObject]
    let filters = inputImage!.autoAdjustmentFilters(options: options)
    for filter: CIFilter in filters {
        filter.setValue(inputImage, forKey: kCIInputImageKey)
        inputImage = filter.outputImage
    }
    let cgImage = context.createCGImage(inputImage!, from: inputImage!.extent)
    self.originalImage.image = UIImage(cgImage: cgImage!)
    // Apply the noir filter
    let currentFilter = CIFilter(name: "CIPhotoEffectTonal")
    currentFilter!.setValue(CIImage(image: UIImage(cgImage: cgImage!)), forKey: kCIInputImageKey)
    let output = currentFilter!.outputImage
    let cgimg = context.createCGImage(output!, from: output!.extent)
    let processedImage = UIImage(cgImage: cgimg!)
    originalImage.image = processedImage
}
Note: if you want to see the best result, test your code on a real device, not in the simulator.
A Swift 4.0 extension that returns an optional UIImage to avoid any potential crashes down the road.
import UIKit
extension UIImage {
    var noir: UIImage? {
        let context = CIContext(options: nil)
        guard let currentFilter = CIFilter(name: "CIPhotoEffectNoir") else { return nil }
        currentFilter.setValue(CIImage(image: self), forKey: kCIInputImageKey)
        if let output = currentFilter.outputImage,
           let cgImage = context.createCGImage(output, from: output.extent) {
            return UIImage(cgImage: cgImage, scale: scale, orientation: imageOrientation)
        }
        return nil
    }
}
To use this:
let image = UIImage(...)
let noirImage = image.noir // noirImage is an optional UIImage (UIImage?)
Joe's answer as a UIImage extension for Swift 4, working correctly for different scales:
extension UIImage {
    var noir: UIImage {
        let context = CIContext(options: nil)
        let currentFilter = CIFilter(name: "CIPhotoEffectNoir")!
        currentFilter.setValue(CIImage(image: self), forKey: kCIInputImageKey)
        let output = currentFilter.outputImage!
        let cgImage = context.createCGImage(output, from: output.extent)!
        let processedImage = UIImage(cgImage: cgImage, scale: scale, orientation: imageOrientation)
        return processedImage
    }
}
I'd use CoreImage, which may keep the quality.
func convertImageToBW(image: UIImage) -> UIImage {
    let filter = CIFilter(name: "CIPhotoEffectMono")
    // convert UIImage to CIImage and set as input
    let ciInput = CIImage(image: image)
    filter?.setValue(ciInput, forKey: "inputImage")
    // get output CIImage, render as CGImage first to retain proper UIImage scale
    let ciOutput = filter?.outputImage
    let ciContext = CIContext()
    let cgImage = ciContext.createCGImage(ciOutput!, from: (ciOutput?.extent)!)
    return UIImage(cgImage: cgImage!)
}
Depending on how you use this code, you may want to create the CIContext outside of it for performance reasons.
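For example (a minimal sketch; the property name is just illustrative), the context could be created once and reused across calls, since constructing a CIContext is relatively expensive:

private let sharedCIContext = CIContext()   // created once and reused for every conversion

// ... then inside convertImageToBW, render with the shared context instead of a new one:
let cgImage = sharedCIContext.createCGImage(ciOutput!, from: (ciOutput?.extent)!)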
Here's a category in Objective-C. Note that, critically, this version takes scale into consideration.
- (UIImage *)grayscaleImage {
    return [self imageWithCIFilter:@"CIPhotoEffectMono"];
}

- (UIImage *)imageWithCIFilter:(NSString *)filterName {
    CIImage *unfiltered = [CIImage imageWithCGImage:self.CGImage];
    CIFilter *filter = [CIFilter filterWithName:filterName];
    [filter setValue:unfiltered forKey:kCIInputImageKey];
    CIImage *filtered = [filter outputImage];
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef cgimage = [context createCGImage:filtered fromRect:CGRectMake(0, 0, self.size.width * self.scale, self.size.height * self.scale)];
    // Do not use initWithCIImage because that renders the filter each time the image is displayed. This causes slow scrolling in tableviews.
    UIImage *image = [[UIImage alloc] initWithCGImage:cgimage scale:self.scale orientation:self.imageOrientation];
    CGImageRelease(cgimage);
    return image;
}
All the solutions above rely on CIImage, while a UIImage will often have a CGImage as its underlying image, not a CIImage. That means you have to convert the underlying image to a CIImage at the start and convert it back to a CGImage at the end (if you don't, constructing a UIImage from a CIImage will effectively do it for you).
Although that is probably fine for many use cases, the conversion between CGImage and CIImage is not free: it can be slow, and it can cause a big memory spike while converting.
So I want to mention a completely different solution that doesn't require converting the image back and forth. It uses Accelerate, and it is described well by Apple here.
Here's a playground example that demonstrates both methods.
import UIKit
import Accelerate
extension CIImage {
    func toGrayscale() -> CIImage? {
        guard let output = CIFilter(name: "CIPhotoEffectNoir", parameters: [kCIInputImageKey: self])?.outputImage else {
            return nil
        }
        return output
    }
}

extension CGImage {
    func toGrayscale() -> CGImage {
        guard let format = vImage_CGImageFormat(cgImage: self),
              // The source image buffer
              var sourceBuffer = try? vImage_Buffer(
                  cgImage: self,
                  format: format
              ),
              // The 1-channel, 8-bit vImage buffer used as the operation destination.
              var destinationBuffer = try? vImage_Buffer(
                  width: Int(sourceBuffer.width),
                  height: Int(sourceBuffer.height),
                  bitsPerPixel: 8
              ) else {
            return self
        }

        // Declare the three coefficients that model the eye's sensitivity
        // to color.
        let redCoefficient: Float = 0.2126
        let greenCoefficient: Float = 0.7152
        let blueCoefficient: Float = 0.0722

        // Create a 1D matrix containing the three luma coefficients that
        // specify the color-to-grayscale conversion.
        let divisor: Int32 = 0x1000
        let fDivisor = Float(divisor)
        var coefficientsMatrix = [
            Int16(redCoefficient * fDivisor),
            Int16(greenCoefficient * fDivisor),
            Int16(blueCoefficient * fDivisor)
        ]

        // Use the matrix of coefficients to compute the scalar luminance by
        // returning the dot product of each RGB pixel and the coefficients
        // matrix.
        let preBias: [Int16] = [0, 0, 0, 0]
        let postBias: Int32 = 0
        vImageMatrixMultiply_ARGB8888ToPlanar8(
            &sourceBuffer,
            &destinationBuffer,
            &coefficientsMatrix,
            divisor,
            preBias,
            postBias,
            vImage_Flags(kvImageNoFlags)
        )

        // Create a 1-channel, 8-bit grayscale format that's used to
        // generate a displayable image.
        guard let monoFormat = vImage_CGImageFormat(
            bitsPerComponent: 8,
            bitsPerPixel: 8,
            colorSpace: CGColorSpaceCreateDeviceGray(),
            bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue),
            renderingIntent: .defaultIntent
        ) else {
            return self
        }

        // Create a Core Graphics image from the grayscale destination buffer.
        guard let result = try? destinationBuffer.createCGImage(format: monoFormat) else {
            return self
        }

        return result
    }
}
To test, I used the full-size version of this image.
let start = Date()
var prev = start.timeIntervalSinceNow * -1
func info(_ id: String) {
    print("\(id)\t: \(start.timeIntervalSinceNow * -1 - prev)")
    prev = start.timeIntervalSinceNow * -1
}
info("started")
let original = UIImage(named: "Golden_Gate_Bridge_2021.jpg")!
info("loaded UIImage(named)")
let cgImage = original.cgImage!
info("original.cgImage")
let cgImageToGreyscale = cgImage.toGrayscale()
info("cgImage.toGrayscale()")
let uiImageFromCGImage = UIImage(cgImage: cgImageToGreyscale, scale: original.scale, orientation: original.imageOrientation)
info("UIImage(cgImage)")
let ciImage = CIImage(image: original)!
info("CIImage(image: original)!")
let ciImageToGreyscale = ciImage.toGrayscale()!
info("ciImage.toGrayscale()")
let uiImageFromCIImage = UIImage(ciImage: ciImageToGreyscale, scale: original.scale, orientation: original.imageOrientation)
info("UIImage(ciImage)")
The results (in seconds):
The CGImage method took about 1 second total:
original.cgImage : 0.5257829427719116
cgImage.toGrayscale() : 0.46222901344299316
UIImage(cgImage) : 0.1819549798965454
The CIImage method took about 7 seconds total:
CIImage(image: original)! : 0.6055610179901123
ciImage.toGrayscale() : 4.969912052154541
UIImage(ciImage) : 2.395193934440613
When saving the images as JPEG to disk, the one created with CGImage was also 3 times smaller than the one created with CIImage (5 MB vs. 17 MB). The quality was good for both images.
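For reference, saving the two results as JPEG could look roughly like this (a sketch; the file names are placeholders):

let cgJPEG = uiImageFromCGImage.jpegData(compressionQuality: 0.9)
let ciJPEG = uiImageFromCIImage.jpegData(compressionQuality: 0.9)
try? cgJPEG?.write(to: FileManager.default.temporaryDirectory.appendingPathComponent("cg.jpg"))
try? ciJPEG?.write(to: FileManager.default.temporaryDirectory.appendingPathComponent("ci.jpg"))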
As per Joe's answer, we can easily convert the original to B&W. To get back to the original image, refer to this code:
var context = CIContext(options: nil)
var startingImage: UIImage = UIImage()

func Noir() {
    startingImage = imgView.image!
    var inputImage = CIImage(image: imgView.image!)!
    let options: [String : AnyObject] = [CIDetectorImageOrientation: 1 as AnyObject]
    let filters = inputImage.autoAdjustmentFilters(options: options)
    for filter: CIFilter in filters {
        filter.setValue(inputImage, forKey: kCIInputImageKey)
        inputImage = filter.outputImage!
    }
    let cgImage = context.createCGImage(inputImage, from: inputImage.extent)
    self.imgView.image = UIImage(cgImage: cgImage!)
    // Filter logic
    let currentFilter = CIFilter(name: "CIPhotoEffectNoir")
    currentFilter!.setValue(CIImage(image: UIImage(cgImage: cgImage!)), forKey: kCIInputImageKey)
    let output = currentFilter!.outputImage
    let cgimg = context.createCGImage(output!, from: output!.extent)
    let processedImage = UIImage(cgImage: cgimg!)
    imgView.image = processedImage
}

func Original() {
    imgView.image = startingImage
}

Change resolution and size of image with Cocoa/OS X/Swift (no mobile apps)

I'm trying to change the size and resolution of an image programmatically, and afterwards save the image.
The image size in the imageView changes, but when I look at my file "file3.png", it still has the original resolution of 640x1142.
I googled around but can't find a solution. I tried to redraw the image, but maybe that's the wrong strategy.
Thanks.
@IBAction func pickOneImageBtn(sender: AnyObject) {
    // load image from path
    pickedImage.image = loadImageFromPath(fileInDocumentsDirectory("Angebote.png"))
    let newSize = NSSize(width: 10, height: 10)
    if let image = pickedImage.image {
        print("found image")
        // cast to CGImage
        var imageRect: CGRect = CGRectMake(0, 0, image.size.width, image.size.height)
        let imageRef = image.CGImageForProposedRect(&imageRect, context: nil, hints: nil)
        if let imageRefExists = imageRef {
            print("Cast to CGImage worked \(imageRefExists)")
        }
        // redraw to NSImage with new size
        let imageWithNewSize = NSImage(CGImage: imageRef!, size: newSize)
        // save on disk
        let imgData: NSData! = imageWithNewSize.TIFFRepresentation!
        let bitmap: NSBitmapImageRep! = NSBitmapImageRep(data: imgData!)
        if let pngCoverImage = bitmap!.representationUsingType(NSBitmapImageFileType.NSPNGFileType, properties: [:]) {
            pngCoverImage.writeToFile("/...correctpath.../imageSourceForResize/file3.png", atomically: false)
            print("saved new image")
        }
        // the size is smaller
        pickedImage.image = imageWithNewSize
    }
}
Change
let imgData: NSData! = pickedImage.image!.TIFFRepresentation!
to
let imgData: NSData! = imageWithNewSize.TIFFRepresentation!
I tried to change the size of an NSImage for a Mac application, and here is a working function to resize an image, written in Swift.
func resize(image: NSImage, w: Int, h: Int) -> NSImage {
    let destSize = NSMakeSize(CGFloat(w), CGFloat(h))
    let newImage = NSImage(size: destSize)
    newImage.lockFocus()
    image.drawInRect(NSMakeRect(0, 0, destSize.width, destSize.height), fromRect: NSZeroRect, operation: NSCompositingOperation.CompositeCopy, fraction: 1.0)
    newImage.unlockFocus()
    newImage.size = destSize
    return NSImage(data: newImage.TIFFRepresentation!)!
}
You need to pass 3 parameters to call this function (the NSImage, the width, and the height), and it will return the resized image.
targetimage = resize(source, w: Int(targetwidth), h: Int(targetheight))
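Keep in mind that an NSImage's size is in points, and drawing under lockFocus() renders at the screen's backing scale, so the saved file's pixel dimensions may not match the numbers you pass. If you need exact control over the pixel dimensions of the saved PNG, one option (a sketch in modern Swift, not the original poster's code) is to draw into an NSBitmapImageRep with explicit pixelsWide/pixelsHigh:

import AppKit

func pngData(from image: NSImage, pixelsWide: Int, pixelsHigh: Int) -> Data? {
    // Create a bitmap with the exact pixel dimensions wanted in the output file.
    guard let rep = NSBitmapImageRep(bitmapDataPlanes: nil,
                                     pixelsWide: pixelsWide,
                                     pixelsHigh: pixelsHigh,
                                     bitsPerSample: 8,
                                     samplesPerPixel: 4,
                                     hasAlpha: true,
                                     isPlanar: false,
                                     colorSpaceName: .deviceRGB,
                                     bytesPerRow: 0,
                                     bitsPerPixel: 0) else { return nil }
    rep.size = NSSize(width: pixelsWide, height: pixelsHigh) // 1 point == 1 pixel in this rep

    NSGraphicsContext.saveGraphicsState()
    NSGraphicsContext.current = NSGraphicsContext(bitmapImageRep: rep)
    image.draw(in: NSRect(x: 0, y: 0, width: pixelsWide, height: pixelsHigh),
               from: .zero,
               operation: .copy,
               fraction: 1.0)
    NSGraphicsContext.restoreGraphicsState()

    return rep.representation(using: .png, properties: [:])
}

The returned Data can then be written to disk with write(to:).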

How to copy a UIImage?

I have a UIImageView with a UIImage. I want to assign a copy of this picture to two variables. Depending on what the user does, the image should be manipulated. The problem is that the image is the same in both variables, I guess because they are passed by reference.
let image1 = imageView.image!
let image2 = imageView.image!
How do I get two separate copies of this image?
What I want to achieve: crop just one of the images and keep the other like the original.
let imageLeft = googleImageView.image!
let imageRef = CGImageCreateCopy(googleImageView.image!.CGImage)
let imageRight = UIImage(CGImage: imageRef!, scale: googleImageView.image!.scale, orientation: googleImageView.image!.imageOrientation)
if translation.x < 0 {
    let scale = imageLeft.scale
    let newRect = CGRectMake(0, 0, (imageLeft.size.width + translation.x) * scale, imageLeft.size.height * scale)
    let imageRef = CGImageCreateWithImageInRect(imageLeft.CGImage, newRect)
    if let croppedImage = imageRef {
        googleImageView.image = UIImage(CGImage: croppedImage, scale: scale, orientation: imageLeft.imageOrientation)
    }
}
print("left image: \(imageLeft) right image \(imageRight)")
print("left image: \(imageLeft) right image \(imageRight)")
The code above prints to the console:
left image: <UIImage: 0x7fd020dca070>, {111, 167}
right image <UIImage: 0x7fd020dda430>, {111, 167}
left image: <UIImage: 0x7fd020df9ba0>, {110, 167}
right image <UIImage: 0x7fd020d45670>, {110, 167}
... and so on. So BOTH images get a new size, while only the left image should be cropped.
As @Aggressor suggested, you can copy your image from the imageView this way:
let newCgIm = CGImageCreateCopy(imageView.image?.CGImage)
let newImage = UIImage(CGImage: newCgIm, scale: imageView.image!.scale, orientation: imageView.image!.imageOrientation)
There is a copy() function on CGImage that comes with Swift 3 and is also compatible with Swift 4, so you can use it this way:
guard let cgImage = imageView.image?.cgImage?.copy() else {
    return
}
let newImage = UIImage(cgImage: cgImage,
                       scale: imageView.image!.scale,
                       orientation: imageView.image!.imageOrientation)
Solution for Swift 5 or later:
Add this extension to your code:
extension UIImage {
    func clone() -> UIImage? {
        guard let originalCgImage = self.cgImage, let newCgImage = originalCgImage.copy() else {
            return nil
        }
        return UIImage(cgImage: newCgImage, scale: self.scale, orientation: self.imageOrientation)
    }
}
Then clone the image to get a new image object rather than a reference:
let image1 = imageView.image!
let image2 = image1.clone()
Note that cloning the image increases memory consumption.
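For the cropping scenario from the question, a minimal sketch (Swift 5; the crop rectangle is just illustrative) would crop only the clone while the original stays untouched:

if let original = imageView.image,
   let clone = original.clone(),
   let cgImage = clone.cgImage {
    // crop the left half of the clone (the rect is in pixels, hence the scale factor)
    let cropRect = CGRect(x: 0, y: 0,
                          width: clone.size.width * clone.scale / 2,
                          height: clone.size.height * clone.scale)
    if let croppedCG = cgImage.cropping(to: cropRect) {
        let croppedImage = UIImage(cgImage: croppedCG, scale: clone.scale, orientation: clone.imageOrientation)
        // `original` still holds the full-size image; only `croppedImage` is cropped
        imageView.image = croppedImage
    }
}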