How to combine two UIImages into a single image in Swift?

I've been trying to merge two images, with one on top and the other at the bottom. The code below doesn't seem to work. The x coordinate is correct, but the y doesn't seem right, and it crops the top image when I alter it. What am I doing wrong?
func combine(bottomImage: Data, topImage: Data) -> UIImage {
    let bottomImage = UIImage(data: topImage)
    let topImage = UIImage(data: bottomImage)
    let size = CGSize(width: bottomImage!.size.width, height: bottomImage!.size.height + topImage!.size.height)
    UIGraphicsBeginImageContext(size)
    let areaSizeb = CGRect(x: 0, y: 0, width: bottomImage!.size.width, height: bottomImage!.size.height)
    let areaSize = CGRect(x: 0, y: 0, width: topImage!.size.width, height: topImage!.size.height)
    bottomImage!.draw(in: areaSizeb)
    topImage!.draw(in: areaSize)
    let newImage = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return newImage
}

You are drawing both images into the same rect. You also should not use force-unwrapping; that crashes your app if anything goes wrong.
There are also various other small mistakes.
Change your function like this:
// Return an Optional so we can return nil if something goes wrong
func combine(bottomImage: Data, topImage: Data) -> UIImage? {
    // Use a guard statement to make sure
    // the data can be converted to images
    guard
        let bottomImage = UIImage(data: bottomImage),
        let topImage = UIImage(data: topImage) else {
        return nil
    }
    // Use a width wide enough for the widest image
    let width = max(bottomImage.size.width, topImage.size.width)
    // Make the height tall enough to stack the images on top of each other.
    let size = CGSize(width: width, height: bottomImage.size.height + topImage.size.height)
    UIGraphicsBeginImageContext(size)
    let bottomRect = CGRect(
        x: 0,
        y: 0,
        width: bottomImage.size.width,
        height: bottomImage.size.height)
    // Position the top image below the bottom image.
    let topRect = CGRect(
        x: 0,
        y: bottomImage.size.height,
        width: topImage.size.width,
        height: topImage.size.height)
    bottomImage.draw(in: bottomRect)
    topImage.draw(in: topRect)
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage
}
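Hypothetical usage of the fixed function; the image names and imageView are placeholders, not from the original question:

// Encode two bundled images to Data, combine them, and display the result.
if let bottomData = UIImage(named: "bottom")?.pngData(),
   let topData = UIImage(named: "top")?.pngData(),
   let combined = combine(bottomImage: bottomData, topImage: topData) {
    imageView.image = combined // imageView is assumed to exist
}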
(And you should really be using a UIGraphicsImageRenderer rather than calling UIGraphicsBeginImageContext()/UIGraphicsEndImageContext().)
Edit:
Note that if the two images have different widths, the code above will leave dead space to the right of the narrower image. You could also center the narrower image, or scale it up to the same width; the centering approach is shown in the sketch below. (If you do scale it up, I suggest scaling both dimensions to preserve the original aspect ratio. Otherwise it will look stretched and unnatural.)
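A hedged sketch combining both suggestions, using UIGraphicsImageRenderer and horizontal centering; this is my adaptation, not the original answer's code:

// Stack the images with UIGraphicsImageRenderer, centering the narrower one
// instead of leaving dead space on the right.
func combine(bottomImage: Data, topImage: Data) -> UIImage? {
    guard
        let bottomImage = UIImage(data: bottomImage),
        let topImage = UIImage(data: topImage) else {
        return nil
    }
    let width = max(bottomImage.size.width, topImage.size.width)
    let size = CGSize(width: width, height: bottomImage.size.height + topImage.size.height)
    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { _ in
        // Center each image horizontally within the combined width.
        bottomImage.draw(at: CGPoint(x: (width - bottomImage.size.width) / 2, y: 0))
        topImage.draw(at: CGPoint(x: (width - topImage.size.width) / 2,
                                  y: bottomImage.size.height))
    }
}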

Related

How to convert this UIGraphics image operation to Core Image?

I'm writing a SplitMirror filter like the one in the Apple Motion app or Snapchat lenses, currently using UIGraphics, to use with a real-time camera feed or video processing. For a single image request it works well, as shown in the image attached to the question, but not for multiple filtering requests. I think the code needs to move from UIGraphics to Core Image and a CIContext for better performance and lower memory usage, like CIFilters, but I don't know how to do that.
I tried several ways to convert it, but I'm stuck on merging the left and right halves. This can probably be done with the CICategoryCompositeOperation filters, but I have no idea which one fits this case, so I need some help with this issue.
Filter code using UIGraphics:
//
//  SplitMirror filter.swift
//  Image Editor
//
//  Created by Coder ACJHP on 25.02.2022.
//
//  SnapChat & Tiktok & Motion App like image filter
//  Inspired from 'https://support.apple.com/tr-tr/guide/motion/motn169f94ea/mac'
//  Splits an image in half vertically and reverses the left remaining half to create a reflection.
final func splitMirrorFilter(processingImage image: UIImage) -> UIImage {
    // Image size
    let imageSize = image.size
    // Left half
    let leftHalfRect = CGRect(
        origin: .zero,
        size: CGSize(
            width: imageSize.width/2,
            height: imageSize.height
        )
    )
    // Right half
    let rightHalfRect = CGRect(
        origin: CGPoint(
            x: imageSize.width - (imageSize.width/2).rounded(),
            y: 0
        ),
        size: CGSize(
            width: imageSize.width - (imageSize.width/2).rounded(),
            height: imageSize.height
        )
    )
    // Split image into two parts
    guard let cgRightHalf = image.cgImage?.cropping(to: rightHalfRect) else { return image }
    // Flip right side to be used as left side
    let flippedLeft = UIImage(cgImage: cgRightHalf, scale: image.scale, orientation: .upMirrored)
    let unFlippedRight = UIImage(cgImage: cgRightHalf, scale: image.scale, orientation: image.imageOrientation)
    UIGraphicsBeginImageContextWithOptions(imageSize, false, image.scale)
    flippedLeft.draw(at: leftHalfRect.origin)
    unFlippedRight.draw(at: rightHalfRect.origin)
    guard let splitMirroredImage = UIGraphicsGetImageFromCurrentImageContext() else { return image }
    UIGraphicsEndImageContext()
    return splitMirroredImage
}
Here is what I tried with Core Image:
// Splits an image in half vertically and reverses the left remaining half to create a reflection.
final func splitMirrorFilterCIImageVersion(processingImage image: UIImage) -> CIImage? {
    guard let ciImageCopy = CIImage(image: image) else { return image.ciImage }
    // Image size
    let imageSize = ciImageCopy.extent.size
    let imageRect = CGRect(origin: .zero, size: imageSize)
    // Left half
    let leftHalfRect = CGRect(
        origin: .zero,
        size: CGSize(
            width: imageSize.width/2,
            height: imageSize.height
        )
    )
    // Right half
    let rightHalfRect = CGRect(
        origin: CGPoint(
            x: imageSize.width - (imageSize.width/2).rounded(),
            y: 0
        ),
        size: CGSize(
            width: imageSize.width - (imageSize.width/2).rounded(),
            height: imageSize.height
        )
    )
    // Split image into two parts
    let cgRightHalf = ciImageCopy.cropped(to: rightHalfRect)
    context.draw(cgRightHalf.oriented(.upMirrored), in: leftHalfRect, from: imageRect)
    context.draw(cgRightHalf, in: rightHalfRect, from: imageRect)
    // I'm stuck here
    // Merge two images into one
    // Here I don't know which filter can be used to merge
    // CICategoryCompositeOperation filters may fit
}
I think you are on the right track.
You can create the left half from the right half by applying transformations to it using let leftHalf = rightHalf.transformed(by: transformation). The transformation should mirror it and translate it to the correct position, i.e., next to the right half.
You can then combine the two into one image using let result = leftHalf.composited(over: rightHalf) and render that result using a CIContext.
After getting the right idea from Frank Schlegel's answer, I rewrote the filter code and it now works well.
// Splits an image in half vertically and reverses the left remaining half to create a reflection.
final func splitMirrorFilterCIImageVersion(processingImage image: UIImage) -> CIImage? {
    guard let ciImageCopy = CIImage(image: image) else { return image.ciImage }
    // Image size
    let imageSize = ciImageCopy.extent.size
    // Right half
    let rightHalfRect = CGRect(
        origin: CGPoint(
            x: imageSize.width - (imageSize.width/2).rounded(),
            y: 0
        ),
        size: CGSize(
            width: imageSize.width - (imageSize.width/2).rounded(),
            height: imageSize.height
        )
    )
    // Split image into two parts
    let ciRightHalf = ciImageCopy.cropped(to: rightHalfRect)
    // Make transform to move right part to left
    let transform = CGAffineTransform(translationX: -rightHalfRect.size.width, y: -rightHalfRect.origin.y)
    // Create left part and apply transform then flip it
    let ciLeftHalf = ciRightHalf.transformed(by: transform).oriented(.upMirrored)
    // Merge two images into one
    return ciLeftHalf.composited(over: ciRightHalf)
}
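For completeness, a minimal sketch of the rendering step Frank Schlegel mentions: turning the returned CIImage back into a UIImage with a CIContext. The helper name is mine; the CIContext should be created once and reused:

import CoreImage
import UIKit

// Reuse a single CIContext; creating one per frame is expensive.
let ciContext = CIContext()

func renderToUIImage(_ ciImage: CIImage, scale: CGFloat, orientation: UIImage.Orientation) -> UIImage? {
    guard let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent) else { return nil }
    return UIImage(cgImage: cgImage, scale: scale, orientation: orientation)
}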

Cropping CGRect from AVCapturePhotoOutput (resizeAspectFill)

I have run into the following problem, and unfortunately other posts have not helped me reach a working solution.
I have a simple app that shows the camera preview (AVCaptureVideoPreviewLayer) with the video gravity set to resizeAspectFill (videoGravity = .resizeAspectFill).
From my understanding, this only stretches the image in width to make it fill the screen.
On my preview layer I have also applied a CGRect as a mask with fixed x, y, width, and height.
Now, once I take a photo, I'm trying to crop that exact rectangle out of the image. From my understanding, I'm supposed to use some kind of math to convert the CGRect to the same aspect ratio as the image I get from the AVCapturePhotoOutput method, but it never seems to crop correctly in the width.
private func cropImage(image: UIImage) {
    let rect = CGRect(x: 25, y: 150, width: 325, height: 230)
    let scale = CGAffineTransform(scaleX: 1/self.view.frame.width, y: 1/self.view.frame.height)
    let flip = CGAffineTransform(scaleX: 1, y: -1).translatedBy(x: 0, y: -1)
    let bounds = rect.applying(scale).applying(flip)
    let topLeft = bounds.topLeft.scaled(to: image.size)
    let topRight = bounds.topRight.scaled(to: image.size)
    let bottomLeft = bounds.bottomLeft.scaled(to: image.size)
    let bottomRight = bounds.bottomRight.scaled(to: image.size)
    var ciImage = CIImage(image: image.forceSameOrientation())!
    ciImage = ciImage.applyingFilter("CIPerspectiveCorrection", parameters: [
        "inputTopLeft": CIVector(cgPoint: bottomLeft),
        "inputTopRight": CIVector(cgPoint: bottomRight),
        "inputBottomLeft": CIVector(cgPoint: topLeft),
        "inputBottomRight": CIVector(cgPoint: topRight)
    ])
    let context = CIContext()
    let cgImage = context.createCGImage(ciImage, from: ciImage.extent)
    let output = UIImage(cgImage: cgImage!)
    let vc = PreviewViewController()
    vc.imageView.image = output
    self.present(vc, animated: true, completion: nil)
}
So again, it crops at the correct height; it's only the width that doesn't come out right.
Here is an example image of what I would want to capture:
https://imgur.com/a/8GryEgX
As you can see, the bounding box in the top left stops after the "Q" button.
Result:
https://imgur.com/FwKRWxK
As you can see in this image, it does crop correctly in the height. However, looking at the top left, it also includes half of the button to the left of the "Q" (the Tab button).
Any help towards the solution would be appreciated!
I managed to solve the issue with this code. The key is metadataOutputRectConverted(fromLayerRect:), which converts the layer rect into normalized (0 to 1) coordinates that already account for the preview layer's videoGravity, so the result only needs to be scaled by the captured image's pixel dimensions.
private func cropToPreviewLayer(from originalImage: UIImage, toSizeOf rect: CGRect) -> UIImage? {
    guard let cgImage = originalImage.cgImage else { return nil }
    // This previewLayer is the AVCaptureVideoPreviewLayer with resizeAspectFill and portrait videoOrientation set.
    let outputRect = previewLayer.metadataOutputRectConverted(fromLayerRect: rect)
    let width = CGFloat(cgImage.width)
    let height = CGFloat(cgImage.height)
    let cropRect = CGRect(
        x: outputRect.origin.x * width,
        y: outputRect.origin.y * height,
        width: outputRect.size.width * width,
        height: outputRect.size.height * height)
    if let croppedCGImage = cgImage.cropping(to: cropRect) {
        return UIImage(cgImage: croppedCGImage, scale: 1.0, orientation: originalImage.imageOrientation)
    }
    return nil
}
Usage of this code in my case:
let rect = CGRect(x: 25, y: 150, width: 325, height: 230)
let croppedImage = self.cropToPreviewLayer(from: image, toSizeOf: rect)
self.imageView.image = croppedImage

Change UIImage to array or matrix in swift

I've been trying to convert a UIImage to a matrix or a bit array in Swift, but I don't know how.
import UIKit
import PlaygroundSupport
let bgimg = UIImage(named: "IMG_8565.JPG") // The image used as a background
let bgimgview = UIImageView(image: bgimg) // Create the view holding the image
bgimgview.frame = CGRect(x: 0, y: 0, width: 500, height: 500) // The size of the background image
let frontimg = UIImage(named: "spongebob.png") // The image in the foreground
let frontimgview = UIImageView(image: frontimg) // Create the view holding the image
frontimgview.frame = CGRect(x: 100, y: 200, width: 150, height: 150) // The size and position of the front image
bgimgview.addSubview(frontimgview) // Add the front image on top of the background
The last line contains the image, and that is what I'm trying to change into a matrix or a bit array.
I know there are similar posts, but I can hardly understand them, so please help me.
Maybe you just want to convert your image to Data?
guard let image = UIImage(named: "IMG_8565.JPG") else { return }
guard let data = image.jpegData(compressionQuality: 1.0) else { return }
compressionQuality: 1.0 is the maximum quality for your image.
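If you actually need a matrix of pixel values rather than encoded Data, here is a minimal sketch: it draws the CGImage into an RGBA8 bitmap context and reads one channel into a [[UInt8]]. The function name and channel choice are mine, not from the original answer:

import UIKit

// Draw the image into an RGBA8 bitmap and read the red channel into a matrix.
func pixelMatrix(from image: UIImage) -> [[UInt8]]? {
    guard let cgImage = image.cgImage else { return nil }
    let width = cgImage.width
    let height = cgImage.height
    let bytesPerPixel = 4
    let bytesPerRow = bytesPerPixel * width
    // Let Core Graphics own the pixel buffer; we read it back after drawing.
    guard let context = CGContext(
        data: nil,
        width: width,
        height: height,
        bitsPerComponent: 8,
        bytesPerRow: bytesPerRow,
        space: CGColorSpaceCreateDeviceRGB(),
        bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue
    ) else { return nil }
    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
    guard let base = context.data?.assumingMemoryBound(to: UInt8.self) else { return nil }
    var matrix = [[UInt8]]()
    for row in 0..<height {
        var rowValues = [UInt8]()
        for col in 0..<width {
            let offset = row * bytesPerRow + col * bytesPerPixel
            rowValues.append(base[offset]) // red component; +1/+2/+3 for G/B/A
        }
        matrix.append(rowValues)
    }
    return matrix
}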

Image in NSTextAttachment too big and blurry

I'm trying to place an icon (in the form of an image) next to text in a UILabel. The icons are imported into the assets in all three sizes and are not blurry at all when I simply place them in a normal UIImageView.
However, within the NSTextAttachment they suddenly become extremely blurry and too big as well.
I've already tried several things on my own and also tried nearly every snippet I could find online; nothing helps. This is what I'm left with:
func updateWinnableCoins(coins: Int) {
    let attachImg = NSTextAttachment()
    attachImg.image = resizeImage(image: #imageLiteral(resourceName: "geld"), targetSize: CGSize(width: 17.0, height: 17.0))
    attachImg.setImageHeight(height: 17.0)
    let imageOffsetY: CGFloat = -3.0
    attachImg.bounds = CGRect(x: 0, y: imageOffsetY, width: attachImg.image!.size.width, height: attachImg.image!.size.height)
    let attchStr = NSAttributedString(attachment: attachImg)
    let completeText = NSMutableAttributedString(string: "")
    let tempText = NSMutableAttributedString(string: "You can win " + String(coins) + " ")
    completeText.append(tempText)
    completeText.append(attchStr)
    self.lblWinnableCoins.textAlignment = .left
    self.lblWinnableCoins.attributedText = completeText
}

func resizeImage(image: UIImage, targetSize: CGSize) -> (UIImage) {
    let newRect = CGRect(x: 0, y: 0, width: targetSize.width, height: targetSize.height).integral
    UIGraphicsBeginImageContextWithOptions(targetSize, false, 0)
    let context = UIGraphicsGetCurrentContext()
    // Set the quality level to use when rescaling
    context!.interpolationQuality = CGInterpolationQuality.default
    let flipVertical = CGAffineTransform(a: 1, b: 0, c: 0, d: -1, tx: 0, ty: targetSize.height)
    context!.concatenate(flipVertical)
    // Draw into the context; this scales the image
    context?.draw(image.cgImage!, in: CGRect(x: 0.0, y: 0.0, width: newRect.width, height: newRect.height))
    let newImageRef = context!.makeImage()! as CGImage
    let newImage = UIImage(cgImage: newImageRef)
    // Get the resized image from the context and a UIImage
    UIGraphicsEndImageContext()
    return newImage
}

extension NSTextAttachment {
    func setImageHeight(height: CGFloat) {
        guard let image = image else { return }
        let ratio = image.size.width / image.size.height
        bounds = CGRect(x: bounds.origin.x, y: bounds.origin.y, width: ratio * height, height: height)
    }
}
And this is how it looks:
The font size of the UILabel is 17, so I set the text attachment size to 17 as well. When I set it to 9, it fits, but it's still very blurry.
What can I do about that?
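A likely cause, offered as an observation rather than a confirmed answer: UIImage(cgImage:) in resizeImage discards the context's scale factor, so the result is a 1x image that UIKit then scales up, which is both too big and blurry. A sketch of the resize helper using UIGraphicsImageRenderer, which keeps the screen scale by default:

// Render at the screen scale so the image stays sharp at 17 points.
func resizeImage(image: UIImage, targetSize: CGSize) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: targetSize)
    return renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: targetSize))
    }
}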

How to combine two images

I have a single image, and I have a collection view with banner images. Now I need to combine these two images into a single image without affecting their quality and height, so that I can download the merged image. I searched but couldn't find a proper solution for Swift 3.
As per your question, you have to combine two images and show them in a single UIImageView.
Here is a simple example of stacking two images vertically and showing them in a UIImageView:
let topImage = UIImage(named: "image1.png")    // 355 x 200
let bottomImage = UIImage(named: "image2.png") // 355 x 60
let size = CGSize(width: (topImage?.size.width)!, height: (topImage?.size.height)! + (bottomImage?.size.height)!)
UIGraphicsBeginImageContextWithOptions(size, false, 0.0)
topImage?.draw(in: CGRect(x: 0, y: 0, width: size.width, height: (topImage?.size.height)!))
bottomImage?.draw(in: CGRect(x: 0, y: (topImage?.size.height)!, width: size.width, height: (bottomImage?.size.height)!))
let newImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
UIGraphicsEndImageContext()
// I've added a UIImageView; you can change it as per your requirement.
let mergeImageView = UIImageView(frame: CGRect(x: 0, y: 200, width: 355, height: 260))
// Here is your final combined image in a single image view.
mergeImageView.image = newImage
I hope this helps you get started.
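Since the question also mentions downloading the merged image, here is a minimal sketch for saving newImage from the snippet above to the photo library; it assumes the NSPhotoLibraryAddUsageDescription key is set in Info.plist:

// Saves the combined image to the user's photo library.
UIImageWriteToSavedPhotosAlbum(newImage, nil, nil, nil)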