Changing JUST .scale in UIImage? - swift

Here, I'm creating a typical graphic (it's full-screen size, on all devices) on the fly...
func buildImage() -> UIImage
{
    let wrapperA: UIView = ... // say, a picture
    let wrapperB: UIView = ... // say, some text to go on top
    let mainSize = basicImage.bounds.size
    UIGraphicsBeginImageContextWithOptions(mainSize, false, 0.0)
    basicImage.drawHierarchy(in: basicImage.bounds, afterScreenUpdates: true)
    wrapperA.drawHierarchy(in: wrapperA.bounds, afterScreenUpdates: true)
    wrapperB.drawHierarchy(in: wrapperB.bounds, afterScreenUpdates: true)
    let result: UIImage? = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    // so we've just created a nice big image for some reason,
    // no problem so far
    print(result?.scale)
    // I want to change that image to have a scale of 1.
    // I don't know how to do that, so I actually just
    // make a new identical one, with scale of 1
    let resultFixed: UIImage = UIImage(cgImage: result!.cgImage!,
                                       scale: 1.0,
                                       orientation: result!.imageOrientation)
    print(resultFixed.scale)
    print("Let's use only '1-scale' images to upload to things like Instagram")
    // return result
    return resultFixed
    // be sure to ask on SO if there's a way to
    // just "change the scale" rather than make new.
}
I need the final image to have a .scale of 1, but .scale is a read-only property.
The only thing I know how to do is make a whole new image copy ... with the scale set to 1 as it's being created.
Is there a better way?
Handy tip -
This was motivated by the following: say you're saving a large image to the user's album, and also offering a UIActivityViewController so the user can post to (for example) Instagram. As a general rule, it seems best to set the scale to 1 before sending to Instagram; if the scale is, say, 3, you actually just get the top-left 1/3 of the image in your Instagram post. In terms of saving to the iOS photo album, it seems harmless (perhaps even better in some ways) to set the scale to 1. (I only say "better" because, if the image is ultimately, say, emailed to a friend on a PC, it can cause less confusion if the scale is 1.) Interestingly though, if you take a scale 2 or 3 image in the iOS Photos app and share it to Instagram, it does in fact appear properly on Instagram! (Perhaps Apple's Photos knows it is best to set the scale to 1 before sending an image somewhere like Instagram.)

As you say, the scale property of UIImage is read-only – therefore you cannot change it directly.
However, using UIImage's init(cgImage:scale:orientation) initialiser doesn't really copy the image – the underlying CGImage that it's wrapping (which contains the actual bitmap data) is still the same instance. It's only a new UIImage wrapper that is created.
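You can check this yourself; a quick sketch (assuming result is the optional UIImage from the question's code):
// The rewrapped image shares the same underlying CGImage instance,
// so no bitmap data is copied – only a new lightweight UIImage wrapper is made.
let rewrapped = UIImage(cgImage: result!.cgImage!,
                        scale: 1.0,
                        orientation: result!.imageOrientation)
print(rewrapped.cgImage === result!.cgImage) // true – identical CGImage instance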
That being said, you could cut out the intermediate UIImage wrapper in this case by getting the CGImage from the context directly through CGContext's makeImage() method. For example:
func buildImage() -> UIImage? {
    // ...
    let mainSize = basicImage.bounds.size
    UIGraphicsBeginImageContextWithOptions(mainSize, false, 0.0)
    defer {
        UIGraphicsEndImageContext()
    }
    // get the current context
    guard let context = UIGraphicsGetCurrentContext() else { return nil }

    // -- do drawing here --

    // get the CGImage from the context by calling makeImage() – then wrap in a UIImage
    // through using Optional's map(_:) (as makeImage() can return nil).
    // By default, the scale of the resulting UIImage is 1.
    return context.makeImage().map(UIImage.init(cgImage:))
}
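As a side note (a sketch, not part of the original answer): on iOS 10 and later you could also sidestep the scale problem entirely by rendering through UIGraphicsImageRenderer with an explicit format whose scale is 1 (basicImage, wrapperA and wrapperB here are the views assumed from the question):
let format = UIGraphicsImageRendererFormat()
format.scale = 1 // render at 1x regardless of the device's screen scale
let renderer = UIGraphicsImageRenderer(size: basicImage.bounds.size, format: format)
let image = renderer.image { _ in
    basicImage.drawHierarchy(in: basicImage.bounds, afterScreenUpdates: true)
    wrapperA.drawHierarchy(in: wrapperA.bounds, afterScreenUpdates: true)
    wrapperB.drawHierarchy(in: wrapperB.bounds, afterScreenUpdates: true)
}
print(image.scale) // 1.0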

By the way, you can change the scale of the resulting image by creating a new image:
let newScaleImage = UIImage(cgImage: oldScaleImage.cgImage!, scale: 1.0, orientation: oldScaleImage.imageOrientation)

Related

SwiftUI - Saving Image to Share Sheet causes image to save blurry/low res

I have a bit of code in my app that generates a QR code and scales it up (I used the code from this link from Hacking with Swift). Now, I'm using the share sheet to allow the user to save the QR code to their camera roll. It works, but the image is saved low-res, so it appears blurry in the camera roll (and I assume it will also be blurry if shared via other methods).
Here is the code of my share sheet function:
struct ShareSheet: UIViewControllerRepresentable {
    let activityItems: [Any]
    let applicationActivities: [UIActivity]?

    func makeUIViewController(context: UIViewControllerRepresentableContext<ShareSheet>) -> UIActivityViewController {
        return UIActivityViewController(activityItems: activityItems, applicationActivities: applicationActivities)
    }

    func updateUIViewController(_ uiViewController: UIActivityViewController, context: UIViewControllerRepresentableContext<ShareSheet>) {
    }
}
and here's the code in my view struct:
.sheet(isPresented: $showShareSheet) {
    ShareSheet(activityItems: [self.qrCodeImage])
}
Is there a trick to disable interpolation on the image when it's saved via the share sheet, like .interpolation(.none) on the image view itself?
Your problem is that the QR code image is actually tiny! Like really tiny:
Printing description of image:
<UIImage:0x60000202cc60 anonymous {23, 23}>
When you share this image, how it's displayed depends on the program or app that displays it, and is out of your app's control as far as I know.
However, there is a way that you could potentially make it "pretty" in other apps: increase the resolution to a larger amount so that when it's rendered, it'll appear to have "sharp" pixels.
How would this be accomplished? I think I have an example buried somewhere in old code, I'll dig into it and see if I can find you an example ;)
Edit
I found the code:
extension UIImage {
    func resized(toWidth width: CGFloat) -> UIImage? {
        let canvasSize = CGSize(width: round(width), height: CGFloat(ceil(width / size.width * size.height)))
        UIGraphicsBeginImageContextWithOptions(canvasSize, false, scale)
        defer { UIGraphicsEndImageContext() }
        let context = UIGraphicsGetCurrentContext()
        // Set the quality level to use when rescaling
        context?.interpolationQuality = .none
        draw(in: CGRect(origin: .zero, size: canvasSize))
        return UIGraphicsGetImageFromCurrentImageContext()
    }
}
The trick is to provide a way to scale the image, but the real magic is this line:
context?.interpolationQuality = .none
If you exclude this line, you'll get blurry images, which is what the OS does by default because you don't generally want to see the pixel edges in images.
You could use this extension like so:
.sheet(isPresented: $showShareSheet) {
    ShareSheet(activityItems: [self.qrCodeImage.resized(toWidth: 512) ?? UIImage()])
}
However, this may resize the image far more often than necessary. Optimally, you'd resize it in the same function that generates it.
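As a sketch of that idea (assuming the QR code comes from CIQRCodeGenerator, as in the Hacking with Swift article the question links to), you could scale the CIImage with an affine transform before creating the UIImage at all, so the bitmap is born at the target resolution with sharp pixels:
import CoreImage.CIFilterBuiltins
import UIKit

func qrCode(from string: String, width: CGFloat = 512) -> UIImage {
    let filter = CIFilter.qrCodeGenerator()
    filter.message = Data(string.utf8)
    guard let output = filter.outputImage else { return UIImage() }
    // Scaling the tiny CIImage with an affine transform keeps the modules sharp
    let scale = width / output.extent.width
    let scaled = output.transformed(by: CGAffineTransform(scaleX: scale, y: scale))
    let context = CIContext()
    guard let cgImage = context.createCGImage(scaled, from: scaled.extent) else { return UIImage() }
    return UIImage(cgImage: cgImage)
}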

Turning a UIBezierPath into a mask?

Not sure if I am asking this question correctly, but I have two components; a CIImage and a UIBezierPath. Ideally, I want to create a CGRect that encapsulates my UIBezierPath; everything inside of the path would be white, everything outside of the path would be black. This way, I can then render this CGRect to some sort of an image, which I could then use as a mask for other purposes.
I am struggling to figure out how to do this with a focus on performance. My tests, as noted below, leverage UIGraphicsImageRenderer, which is far too slow for my needs (I will be doing this on sample buffers from a camera). Therefore, I would like to stay within Core Image. This is my attempt:
// Path
let path = UIBezierPath()
// ... define the path's shape and close it

// My source image
let image = CIImage(cgImage: UIImage(named: "test.jpg")!.cgImage!)

// Renderer
let renderer = UIGraphicsImageRenderer(size: image.extent.size)

// Render path as mask
let img = renderer.image { ctx in
    ctx.cgContext.setFillColor(UIColor.black.cgColor)
    ctx.cgContext.fill(CGRect(x: 0, y: 0, width: image.extent.size.width, height: image.extent.size.height))
    ctx.cgContext.setFillColor(UIColor.white.cgColor)
    ctx.cgContext.addPath(path.cgPath)
    ctx.cgContext.drawPath(using: .fill)
}
// Put a filter on the image
let imageFiltered = image.applyingFilter("CIPhotoEffectNoir")

// Blend with mask
let maskFilter = CIFilter.blendWithMask()
maskFilter.inputImage = imageFiltered
maskFilter.backgroundImage = image
maskFilter.maskImage = CIImage(cgImage: img.cgImage!)

// Output
if let output = maskFilter.outputImage {
    // ... use CIContext() to render back to a CVPixelBuffer for preview on an MTKView
}
Overall, the goal is to have a defined portion of an image (which will not conform to a traditional shape like a square or circle) filtered with a CIFilter, then composited back over the original. If there is a better approach (such as taking the original image, filtering it, cropping it to the path, leaving everything outside the path transparent, and compositing), that would likely perform better.
To note, the above sample code results in a crash, as the UIGraphicsImageRenderer cannot render the mask fast enough.
Your approach looks good so far. I assume the slow part is the generation of the mask image with Core Graphics. Unfortunately, there is no direct way to do the same in Core Image (on the GPU). However, you can try the following:
(Assuming from your previous question that the path always has a certain shape,) you can generate a mask image containing the path once for a certain reference size of your choice. Make sure that the path doesn't "touch" the border.
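As a sketch of that one-time step (assuming the same path as in the question, already scaled to fit the reference size with some inset so it doesn't touch the edges; the resulting img feeds the snippet below):
// This runs once, not per frame, so using Core Graphics here is acceptable.
let referenceSize = CGSize(width: 512, height: 512)
let renderer = UIGraphicsImageRenderer(size: referenceSize)
let img = renderer.image { ctx in
    ctx.cgContext.setFillColor(UIColor.black.cgColor)
    ctx.cgContext.fill(CGRect(origin: .zero, size: referenceSize))
    ctx.cgContext.setFillColor(UIColor.white.cgColor)
    ctx.cgContext.addPath(path.cgPath)
    ctx.cgContext.fillPath()
}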
Then, when you want to use it as a mask, move and scale the shape image to the correct place using transformations and let its edges extend infinitely (to cover the whole underlying image; that's why the shape shouldn't touch the edges). Something like this:
let pathImage = CIImage(cgImage: img.cgImage!)
// scale path to the size of the area you want to mask
var mask = pathImage.transformed(by: CGAffineTransform(scaleX: scaleX, y: scaleY))
// move path to the place you want to cover
mask = mask.transformed(by: CGAffineTransform(translationX: offsetX, y: offsetY))
// let mask fill the rest of the area
mask = mask.clampedToExtent()
// use mask as maskImage...
You should be able to recycle the pathImage for every frame, thereby avoiding Core Graphics and CPU–GPU synchronization.

How to swap from MTKView to UIView display seamlessly

I have an MTKView whose contents I draw into a UIView. I want to swap the display from the MTKView to the UIView without perceptible changes. How can I achieve this?
Currently, I have
let strokeCIImage = CIImage(mtlTexture: metalTextureComposite...) // get MTLTexture
let imageCropCG = cicontext.createCGImage(strokeCIImage...) // convert to CGImage
let layerStroke = CALayer() // create layer
layerStroke.contents = imageCropCG // populate with CGImage
strokeUIView.layer.addSublayer(layerStroke) // add to view
strokeUIView.layerWillDraw(layerStroke) //heads up to strokeUIView
and a delegate method within layerWillDraw() that clears the MTKView.
strokeViewMetal.metalClearDisplay()
The result is that I'll see a frame drop every so often in which nothing is displayed.
In the hopes of cleanly separating the two tasks, I also tried the following:
let dispatchWorkItem = DispatchWorkItem {
    print("lyr add start")
    self.pageCanvasImage.layer.addSublayer(sublayer)
    print("lyr add end")
}

let dg = DispatchGroup()
DispatchQueue.main.async(group: dg, execute: dispatchWorkItem)

// print a message when all blocks in the group finish
dg.notify(queue: DispatchQueue.main) {
    print("dispatch mtl clear")
    self.strokeCanvasMetal.setNeedsDisplay() // clear MTKView
}
The idea being: add the new CALayer to the UIImageView, and THEN clear the MTKView.
Over many screen draws, I think this results in fewer frame drops during the view swap, but I'd like a foolproof solution with NO drops. Basically, what I'm after is to clear strokeViewMetal only once strokeUIView is ready to display. Any pointers would be appreciated.
Synchronicity issues between MTKView and UIView are resolved for 99% of my tests when I set MTKView's presentsWithTransaction property to true. According to Apple's documentation:
Setting this value to true changes this default behavior so that your MTKView displays its drawable content synchronously, using whichever Core Animation transaction is current at the time the drawable’s present() method is called.
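Concretely, that's a single property set during setup (strokeViewMetal being the question's MTKView):
strokeViewMetal.presentsWithTransaction = true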
Once that is done, the draw loop has to be modified from:
commandEncoder.endEncoding()
commandBuffer.present(drawable)
commandBuffer.commit()
to:
commandEncoder.endEncoding()
commandBuffer.commit()
commandBuffer.waitUntilScheduled() // synchronously wait until the drawable is ready
drawable.present() // call the drawable’s present() method directly
This is done to prevent Core Animation activities from ending before we're ready to present the MTKView's drawable.
With all of this set up, I can simply:
let strokeCIImage = CIImage(mtlTexture: metalTextureComposite...) // get MTLTexture
let imageCropCG = cicontext.createCGImage(strokeCIImage...) // convert to CGImage
let layerStroke = CALayer() // create layer
layerStroke.contents = imageCropCG // populate with CGImage
// the last two events will happen synchronously
strokeUIView.layer.addSublayer(layerStroke) // add to view
strokeViewMetal.metalClearDisplay() // empty out MTKView
With all of this said, I do see overlapping of the views every now and then, but at a much, much lower frequency.

NSView to PDF and PNG: Why is the outcome so different?

I am trying to save an NSView to a PNG.
I start with the NSView and then call dataWithPDF for the PDF, or cacheDisplay for the PNG. The code to do both looks like this:
guard view.lockFocusIfCanDraw() else {
    assert(false)
    return
}
let pdfData = view.dataWithPDF(inside: rect)
guard let imgData = view.bitmapImageRepForCachingDisplay(in: rect) else {
    assert(false)
    return
}
view.cacheDisplay(in: rect, to: imgData)
view.unlockFocus()

try pdfData.write(to: pdfName, options: .atomic)
let pngData = imgData.representation(using: .png, properties: [:])
try pngData!.write(to: pngName, options: .atomic)
So far, so good. However, the outcomes are quite different.
PDF (correct!)
And this is the PNG output. As one can see, the subviews aren't included. (The arrows are drawn as part of the view itself.)
Why is the outcome so different?
Many thanks in advance!
OK, I found the answer. Thanks to "View Debugging", I saw that the subviews use a layer (self.wantsLayer = true). And layers do not find their way into the PNG, but they do into the PDF. Not sure whether this is a bug or a feature. However, now I can fix the PNG output.
Why is the outcome so different?
Trying your code with a different view with subviews (I obviously don't have your view) works as expected, and the PNG is fine. So it has to be something to do with your views, but I can make no suggestion as to what. However...
As you've got valid PDF data, you can generate your PNG from that using something like:
let captured = NSImage(data:pdfData)
let rep = NSBitmapImageRep(data:(captured?.tiffRepresentation)!)
let pngData = rep?.representation(using: NSPNGFileType, properties:[:])
(that is Swift 3, hence NSPNGFileType rather than .png)
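(For anyone on current Swift, a quick sketch of the same idea with today's API names, reusing the pdfData and pngName values from the question:)
if let captured = NSImage(data: pdfData),
   let tiff = captured.tiffRepresentation,
   let rep = NSBitmapImageRep(data: tiff),
   let pngData = rep.representation(using: .png, properties: [:]) {
    try pngData.write(to: pngName, options: .atomic)
}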
This of course doesn't solve whatever problem you have; it avoids it :-) You should really figure out why your views are failing and treat this as a temporary band-aid (assuming it works for you...).
HTH

UIImageView Image Disappears With Certain UIImageOrientations

Working through a Core Image tutorial: http://www.raywenderlich.com/76285/beginning-core-image-swift
There's one part where you need to preserve the UIImageOrientation (of course!). However, I'm noticing something VERY quirky and I'm not sure what the cause is.
Here is the code block:
@IBAction func amountSliderValueChanged(sender: UISlider) {
    let sliderValue = sender.value
    filter.setValue(sliderValue, forKey: kCIInputIntensityKey)
    let outputImage = filter.outputImage
    let cgimg = context.createCGImage(outputImage, fromRect: outputImage.extent())
    let newImage = UIImage(CGImage: cgimg, scale: 1, orientation: UIImageOrientation.UpMirrored)
    // let newImage = UIImage(CGImage: cgimg)
    println("New image is \(newImage)")
    self.imageView.image = newImage
}
When I change the image to certain orientations, say Up, the image appears. However, when I change to Right or Left, it moves outside the UIImageView (and if I set the scale up really high, I can see parts of it coming back onto the screen).
I am not rotating the UIImageView, only the UIImage. Before, when I was doing this in Objective-C, I never had this issue; I would just set the UIImage orientation and it would always sit at 0,0 in the UIImageView.
Using Swift (or perhaps something else?) seems to result in different behaviour.
Can you think of any reason why rotating the image moves it away from the 0,0 of its UIImageView, and what I can do to fix that?
Thanks!