Image clipping with path (scaling issue) - Swift

The app I am using for testing purposes is able to take a picture and save it as a PNG file. The next time the app is launched, it checks whether that file is present; if it is, the image stored in the file is used as the background view of the app. Up to this point all is OK.
I decided to add a clipping mask to this app, and this is where things go wrong.
The clipping itself works, but for some mysterious reason the clipped image gets expanded. If someone could tell me what I am doing wrong, that would be very helpful.
Here is the relevant code (I can provide more information if ever needed):
if let videoConnection = stillImageOutput.connectionWithMediaType(AVMediaTypeVideo) {
    stillImageOutput.captureStillImageAsynchronouslyFromConnection(videoConnection) {
        (imageDataSampleBuffer, error) -> Void in
        if error == nil {
            var localImage = UIImage(fromSampleBuffer: imageDataSampleBuffer)
            var imageSize = CGSize(width: UIScreen.mainScreen().bounds.height * UIScreen.mainScreen().scale,
                                   height: UIScreen.mainScreen().bounds.width * UIScreen.mainScreen().scale)
            localImage = resizeImage(localImage!, toSize: imageSize)
            imageSize = CGSize(width: imageSize.height, height: imageSize.width)
            UIGraphicsBeginImageContext(imageSize)
            CGContextRotateCTM(UIGraphicsGetCurrentContext(), CGFloat(M_PI_2))
            localImage!.drawAtPoint(CGPoint(x: 0.0, y: -imageSize.width))
            localImage = UIGraphicsGetImageFromCurrentImageContext()
            UIGraphicsEndImageContext()
            // Clipping code:
            localImage = maskImage(localImage!,
                path: UIBezierPath(CGPath: CGPathCreateWithEllipseInRect(UIScreen.mainScreen().bounds, nil)))
            if let data = UIImagePNGRepresentation(localImage!) {
                data.writeToFile(self.bmpFilePath, atomically: true)
            }
        } else {
            print("Error on taking a picture:\n\(error)")
        }
    }
}
The maskImage function is this (taken from iOS UIImage clip to paths and translated to Swift):
func maskImage(originalImage: UIImage, path: UIBezierPath) -> UIImage {
    UIGraphicsBeginImageContextWithOptions(originalImage.size, false, 0)
    path.addClip()
    originalImage.drawAtPoint(CGPointZero)
    let maskedImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return maskedImage
}
When the lines:
localImage = maskImage(localImage!,
path: UIBezierPath(CGPath: CGPathCreateWithEllipseInRect(UIScreen.mainScreen().bounds, nil)))
are commented out, I see what I expect.
Taking the picture below gives it as the background when relaunching the app.
But when they are present (not commented out), I get the background hereafter when relaunching the app (of course taking the same picture at start):
If things worked as they should, the mouse would appear inside the elliptic clip at the same size as in the first picture (not magnified as it is now).
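One detail that may matter here (an assumption on my part, not a confirmed diagnosis): localImage has already been resized to pixel dimensions (screen bounds multiplied by the screen scale), while the clipping ellipse is built from UIScreen.mainScreen().bounds in points, so the path only covers a fraction of the image. A minimal sketch that builds the ellipse in the image's own coordinate space instead would look like this (maskImageToEllipse is a hypothetical helper):
// Sketch: clip with an ellipse expressed in the image's own coordinate space.
// Assumes the magnification comes from mixing screen points with a pixel-sized image.
func maskImageToEllipse(originalImage: UIImage) -> UIImage {
    UIGraphicsBeginImageContextWithOptions(originalImage.size, false, originalImage.scale)
    let ellipseRect = CGRect(origin: CGPointZero, size: originalImage.size)
    UIBezierPath(ovalInRect: ellipseRect).addClip()
    originalImage.drawAtPoint(CGPointZero)
    let maskedImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return maskedImage
}

// Usage, replacing the two clipping lines shown above:
localImage = maskImageToEllipse(localImage!)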

Related

Drawing (to PDF) a UIImage that has been written to a file does not give the same result as drawing the original UIImage

Some context first:
I simply draw a UIImage onto a PDFPage by subclassing PDFPage and overriding draw(with:to:):
override func draw(with box: PDFDisplayBox, to context: CGContext) {
    /* Draw image on PDF */
    UIGraphicsPushContext(context)
    // Change the PDF context to match the UIKit coordinate system.
    context.translateBy(x: 0, y: pageBounds.height)
    context.scaleBy(x: 1, y: -1)
    context.interpolationQuality = .high
    // The important line is here: drawing the image
    self.myImage.draw(in: CGRect(x: leftMargin, y: topMargin, width: fittedImageSize.width, height: fittedImageSize.height))
}
where self.myImage contains a UIImage. So far so good.
The problem: if I persist the image to save memory
If I init my CustomPDFPage with the original UIImage from memory, I get a PDF file with a reasonable size and everything works well.
However, if I persist the image using pngData() and then reload it with UIImage(contentsOfFile: url.path) for drawing, my PDF file is suddenly much heavier.
Writing the image to TMP:
let urlToWrite = tmpDir.appendingPathComponent(fileName)
do {
    if let tmpData = image.pngData() {
        DLog("TMPDATA SIZE = \(tmpData.count). Image dimensions = \(image.size) with scale = \(image.scale)")
    }
    try image.pngData()?.write(to: urlToWrite)
    self.tmpImgURL = urlToWrite
} catch {
    DLog("ERROR: could not write image to \(urlToWrite). Error is \(error)")
}
Reloading the image into memory:
var image = UIImage(contentsOfFile: self.tmpImgURL.path)
Using that image to draw the PDF increases the PDF size dramatically.
Inspecting the UIImage size, the scale, and the byte count of the image before writing it to file and after reading it back gives the exact same values.
So the reason behind this mess is that the user can choose to reduce the quality of the image.
In that case, the source UIImage was an image recreated from jpegData (which was used to apply compression).
In short, calling UIImage.pngData() after UIImage.jpegData(...) is not a good idea. Just write the jpegData directly when the image might have been compressed.
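A minimal sketch of that approach, reusing the variables from the snippets above (userChosenQuality is a hypothetical value holding the compression quality the user picked):
let urlToWrite = tmpDir.appendingPathComponent(fileName)
do {
    // Persist the JPEG bytes directly instead of re-encoding the decoded JPEG as PNG,
    // which balloons the data (and therefore the PDF that embeds it).
    // userChosenQuality is hypothetical: whatever quality the user selected.
    if let jpegData = image.jpegData(compressionQuality: userChosenQuality) {
        try jpegData.write(to: urlToWrite)
        self.tmpImgURL = urlToWrite
    }
} catch {
    DLog("ERROR: could not write image to \(urlToWrite). Error is \(error)")
}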

Trying to save merged images but nothing works correctly

I have looked at every example I can find in previous questions and nothing works. I am merging two images by copying a cropped image to the pasteboard and then pasting it into another image view. That works fine, but when I try to save it the image is always squished (it may be a scale issue, but it seems only the vertical size gets distorted).
This is the result of combining the images before trying to save it
This is after saving the image to the photos album
The image will not stay in the translated location and it's distorted.
Here is the code when I save the image
let fullSize:CGSize = photoView.image!.size
let newSize:CGSize = fullSize
let scale:CGFloat = newSize.height/fullSize.height
let offset:CGFloat = (newSize.width - fullSize.width*scale)/2
let offsetRect:CGRect = CGRect.init(x: offset, y: 0, width: newSize.width - offset*2, height: newSize.height)
UIGraphicsBeginImageContext(newSize);
self.photoView.image!.draw(in: offsetRect)
self.copiedView.image!.draw(in: CGRect.init(x: 0, y: 0, width: photoView.image!.size.width, height:photoView.image!.size.height))
let combImage:UIImage = UIGraphicsGetImageFromCurrentImageContext()!
UIGraphicsEndImageContext();
//photoView.image = combImage
UIImageWriteToSavedPhotosAlbum(combImage, self, #selector(image(_:didFinishSavingWithError:contextInfo:)), nil)
Any help is appreciated
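For comparison, here is a minimal sketch that does the combination with UIGraphicsImageRenderer, under the assumption that the squish comes from the scale-1 bitmap context created by UIGraphicsBeginImageContext (copiedRect is a hypothetical rect describing where the pasted image should land in the photo's coordinate space):
let fullSize = photoView.image!.size
// UIGraphicsImageRenderer uses the screen scale by default,
// unlike UIGraphicsBeginImageContext, which always uses a scale of 1.
let renderer = UIGraphicsImageRenderer(size: fullSize)
let combImage = renderer.image { _ in
    // Draw the base photo at its natural size.
    self.photoView.image!.draw(in: CGRect(origin: .zero, size: fullSize))
    // copiedRect is hypothetical: the pasted image's frame converted into the photo's coordinates.
    self.copiedView.image!.draw(in: copiedRect)
}
UIImageWriteToSavedPhotosAlbum(combImage, self, #selector(image(_:didFinishSavingWithError:contextInfo:)), nil)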

iMessage MSSticker view created from UIView incorrect sizing

Hey, I have been struggling with this for a couple of days now and can't seem to find any documentation, outside of the standard grid views, for MSStickerView sizes.
I am working on an app that creates MSStickerViews dynamically. It does this by converting a UIView into a UIImage, saving it to disk, then passing the URL to MSSticker before creating the MSStickerView; the frame of the MSStickerView is then set to the size of the original view.
The problem I have is that when I drag the MSStickerView into the messages window, it shrinks while being dragged, then when dropped in the messages window it changes to a larger size. I have no idea how to control the size while dragging or the final image size.
Here's my code to create an image from a view:
extension UIView {
    func imageFromView() -> UIImage? {
        UIGraphicsBeginImageContextWithOptions(self.bounds.size, self.isOpaque, 0.0)
        defer { UIGraphicsEndImageContext() }
        if let context = UIGraphicsGetCurrentContext() {
            self.layer.render(in: context)
            let image = UIGraphicsGetImageFromCurrentImageContext()
            return image
        }
        return nil
    }
}
And here's the code to save this to disk
extension UIImage {
    func savedPath(name: String) -> URL {
        let paths = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)
        let filePath = "\(paths[0])/\(name).png"  // interpolate the name parameter rather than the literal "name"
        let url = URL(fileURLWithPath: filePath)
        // Save image.
        if let data = self.pngData() {
            do {
                try data.write(to: url)
            } catch let error as NSError {
                print("Could not save sticker image: \(error)")
            }
        }
        return url
    }
}
Finally, here is the code that converts the saved file path into a sticker:
if let stickerImage = backgroundBox.imageFromView() {
    let url = stickerImage.savedPath(name: textBox.text ?? "StickerMCSticker")
    if let msSticker = try? MSSticker(contentsOfFileURL: url, localizedDescription: "") {
        var newFrame = self.backgroundBox.frame
        newFrame.size.width = newFrame.size.width
        newFrame.size.height = newFrame.size.height
        let stickerView = MSStickerView(frame: newFrame, sticker: msSticker)
        self.view.addSubview(stickerView)
        print("** sticker frame \(stickerView.frame)")
        self.sticker = stickerView
    }
}
I wondered first off if there was something I needed to do regarding retina sizes, but adding @2x to the file name just breaks the image, so I am stuck on this. The WWDC sessions seem to show stickers being created from file paths without changing size in the transition between drag and drop. Any help would be appreciated!
I fixed this issue eventually by taking the frame from the view I was copying and then calling sizeToFit():
init(sticker: MSSticker, size: CGSize) {
    let stickerFrame = CGRect(x: 0, y: 0, width: size.width, height: size.height)
    self.sticker = MSStickerView(frame: stickerFrame, sticker: sticker)
    self.sticker.sizeToFit()
    super.init(nibName: nil, bundle: nil)
}
as the StickerView was not setting the correct size. Essentially, the sticker size in my view did not match the size of the MSSticker, so the moment the drag was initiated the real sticker size took over (which was different from the frame size / Auto Layout I was applying in my view).
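On the retina point raised above, one more thing that might be worth checking (an assumption on my part, not something the fix above depends on): imageFromView() renders at the screen scale (the 0.0 argument), so the saved PNG is 2x or 3x the view's point size, which will not match the frame later given to the MSStickerView. A sketch of a hypothetical variant that renders at a fixed scale of 1 so the file's pixel size equals the view's point size:
extension UIView {
    // Hypothetical variant of imageFromView(): render at scale 1 so the PNG's
    // pixel dimensions equal the view's point size.
    func imageFromViewAtPointSize() -> UIImage? {
        UIGraphicsBeginImageContextWithOptions(self.bounds.size, self.isOpaque, 1.0)
        defer { UIGraphicsEndImageContext() }
        guard let context = UIGraphicsGetCurrentContext() else { return nil }
        self.layer.render(in: context)
        return UIGraphicsGetImageFromCurrentImageContext()
    }
}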

I need help integrating a specific UIImage resizing extension into my current draw CGRect function

I found this extension online; it allows images to adhere to aspect fit/fill even when drawn inside dynamically growing/shrinking image views. Currently, when the image is saved to the camera roll after my draw function, it reverts to "scale to fill" regardless of the content mode of the image view. I suspect this is because I draw the image to the size/bounds of the image view, but since the image view is dynamic, I don't see any way around this without using this extension:
// MARK: - Image Scaling.
extension UIImage {
    /// Scales an image to fill a bounds with a size governed by the passed size, keeping the aspect ratio.
    /// Switch max to min for aspect fit instead of fill.
    ///
    /// - parameter newSize: the size of the bounds the image must fill.
    ///
    /// - returns: a new scaled image.
    func scaleImageToSize(newSize: CGSize) -> UIImage {
        var scaledImageRect = CGRect.zero
        let aspectWidth = newSize.width / size.width
        let aspectHeight = newSize.height / size.height
        let aspectRatio = max(aspectWidth, aspectHeight)
        scaledImageRect.size.width = size.width * aspectRatio
        scaledImageRect.size.height = size.height * aspectRatio
        scaledImageRect.origin.x = (newSize.width - scaledImageRect.size.width) / 2.0
        scaledImageRect.origin.y = (newSize.height - scaledImageRect.size.height) / 2.0
        UIGraphicsBeginImageContext(newSize)
        draw(in: scaledImageRect)
        let scaledImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return scaledImage!
    }
}
This is the function I'm currently using to draw the image on screen so it can be saved to the camera roll (it combines two images: a frame and an image from the camera roll):
func drawImagesAndText() {
    let renderer = UIGraphicsImageRenderer(size: CGSize(width: imageView.bounds.size.width, height: imageView.bounds.size.height))
    img = renderer.image { ctx in
        // var newSize = currentImage.scaleImageToSize
        let bgImage = currentImage
        bgImage?.draw(in: CGRect(x: 0, y: 0, width: imageView.bounds.size.width, height: imageView.bounds.size.height))
        frames = UIImage(named: framesAr)
        frames?.draw(in: CGRect(x: 0, y: 0, width: imageView.bounds.size.width, height: imageView.bounds.size.height))
    }
}
All the tutorials I've found on how to use extensions don't cover how to pass in and out variables like this one requires. Any insight would be greatly appreciated.
I understand that you don't know how to use the extension, is that correct? Since it just adds a function to every UIImage, you can simply call it on your image like this: currentImage.scaleImageToSize(newSize: someSize) and pass the size you want the image to fit into.
Dorian Roy was telling me to use that call in place of using just "currentImage", and that's what worked!
(I commented on his initial answer saying I was having issues because I was trying to use the return value from the extension itself in place of "currentImage")
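For anyone who wants to see that suggestion in context, here is a rough sketch of drawImagesAndText with the extension applied to the background image (imageView, currentImage, img, frames and framesAr are the names from the question; the rest is an assumption about how the pieces fit together):
func drawImagesAndText() {
    let targetSize = imageView.bounds.size
    let renderer = UIGraphicsImageRenderer(size: targetSize)
    img = renderer.image { _ in
        // Scale the photo with the extension so it keeps its aspect ratio
        // instead of being stretched to the image view's bounds.
        let scaledBG = currentImage?.scaleImageToSize(newSize: targetSize)
        scaledBG?.draw(in: CGRect(origin: .zero, size: targetSize))
        frames = UIImage(named: framesAr)
        frames?.draw(in: CGRect(origin: .zero, size: targetSize))
    }
}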

Why is my screenshot not displaying current view?

When I take a screenshot to share the current view of my device (iPhone), it only captures the upper part of it, and when I scroll down to the bottom (of my table view at runtime), the screenshot is blank, as if it is not capturing the current view on the device - I hope I am explaining that alright.
Am I missing anything?
func captureScreen() -> UIImage {
    UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, false, 0)
    self.view.drawViewHierarchyInRect(view.bounds, afterScreenUpdates: true)
    let image: UIImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return image
}
Change your code to the following:
func captureScreen() -> UIImage {
    UIGraphicsBeginImageContextWithOptions(self.view.frame.size, false, 0)
    self.view.drawViewHierarchyInRect(view.bounds, afterScreenUpdates: true)
    let image: UIImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return image
}
Note the change from bounds to frame in your code. The last argument of UIGraphicsBeginImageContextWithOptions has to do with the scale: if you specify a value of 0.0, the scale factor is set to the scale factor of the device's main screen.
Learn more about it here
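For reference, a rough modern-Swift equivalent of the same capture (a sketch under the same assumptions as the answer; UIGraphicsImageRenderer also defaults to the main screen's scale):
func captureScreen() -> UIImage {
    // UIGraphicsImageRenderer uses the main screen's scale by default,
    // matching the 0.0 scale argument in the snippet above.
    let renderer = UIGraphicsImageRenderer(bounds: view.bounds)
    return renderer.image { _ in
        // afterScreenUpdates: true waits for pending layout before snapshotting.
        view.drawHierarchy(in: view.bounds, afterScreenUpdates: true)
    }
}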