All the Stack Overflow posts about this are very dated (at least five years old). I have images that are loaded remotely via AWS CloudFront and have aspect ratios similar to that of a phone screen. I need to get these images into squares without stretching them; I'd like to just crop them to the center square of the image. I've tried the code below, but specifying the content mode as .scaleAspectFill (which should just 'zoom' to the center square so it fits the UIImageView's frame) doesn't work:
let view = UIView()
let imageView = UIImageView()
imageView.load(url: url)
imageView.contentMode = .scaleAspectFill
imageView.frame = CGRect(x: 0, y: 0, width: 70, height: 70)
view.addSubview(imageView)
return view
The load function looks like this:
extension UIImageView {
    func load(url: URL) {
        DispatchQueue.global().async { [weak self] in
            if let data = try? Data(contentsOf: url) {
                if let image = UIImage(data: data) {
                    DispatchQueue.main.async {
                        self?.image = image
                    }
                }
            }
        }
    }
}
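For what it's worth, .scaleAspectFill only scales the image; it does not clip the overflow to the view's frame, so the zoomed image happily draws outside the 70x70 rect and the view looks un-cropped. Setting clipsToBounds = true is usually the missing piece. A minimal sketch of the idea (the function name is mine, and it takes a UIImage directly so it stands alone):

```swift
import UIKit

// Sketch: a 70x70 center-cropped thumbnail view (names are mine, not from the question).
func makeSquareThumbnail(image: UIImage?) -> UIView {
    let view = UIView()
    let imageView = UIImageView()
    imageView.image = image
    imageView.contentMode = .scaleAspectFill
    // The missing piece: without this, the aspect-filled image
    // draws outside the 70x70 frame instead of being cropped to it.
    imageView.clipsToBounds = true
    imageView.frame = CGRect(x: 0, y: 0, width: 70, height: 70)
    view.addSubview(imageView)
    return view
}
```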
I've been trying to merge two images where one is on top and the other at the bottom. The code below doesn't seem to work. The x coordinate is correct, but the y doesn't seem right, and it crops the top image when I alter it. What am I doing wrong?
func combine(bottomImage: Data, topImage: Data) -> UIImage {
    let bottomImage = UIImage(data: topImage)
    let topImage = UIImage(data: bottomImage)
    let size = CGSize(width: bottomImage!.size.width, height: bottomImage!.size.height + topImage!.size.height)
    UIGraphicsBeginImageContext(size)
    let areaSizeb = CGRect(x: 0, y: 0, width: bottomImage!.size.width, height: bottomImage!.size.height)
    let areaSize = CGRect(x: 0, y: 0, width: topImage!.size.width, height: topImage!.size.height)
    bottomImage!.draw(in: areaSizeb)
    topImage!.draw(in: areaSize)
    let newImage = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return newImage
}
You are drawing both images into the same rect. You also should not use force-unwrapping. That causes your app to crash if anything goes wrong.
There are also various other small mistakes.
Change your function like this:
// Return an Optional so we can return nil if something goes wrong
func combine(bottomImage: Data, topImage: Data) -> UIImage? {
    // Use a guard statement to make sure
    // the data can be converted to images
    guard
        let bottomImage = UIImage(data: bottomImage),
        let topImage = UIImage(data: topImage) else {
        return nil
    }
    // Use a width wide enough for the widest image
    let width = max(bottomImage.size.width, topImage.size.width)
    // Make the height tall enough to stack the images on top of each other.
    let size = CGSize(width: width, height: bottomImage.size.height + topImage.size.height)
    UIGraphicsBeginImageContext(size)
    let bottomRect = CGRect(
        x: 0,
        y: 0,
        width: bottomImage.size.width,
        height: bottomImage.size.height)
    // Position the top image directly below the bottom image.
    let topRect = CGRect(
        x: 0,
        y: bottomImage.size.height,
        width: topImage.size.width,
        height: topImage.size.height)
    bottomImage.draw(in: bottomRect)
    topImage.draw(in: topRect)
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage
}
(And you should really be using a UIGraphicsImageRenderer rather than calling UIGraphicsBeginImageContext()/UIGraphicsEndImageContext().)
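A sketch of the same corrected function written with UIGraphicsImageRenderer (iOS 10+); the renderer creates and tears down the context itself, and image(actions:) always produces an image:

```swift
import UIKit

func combine(bottomImage: Data, topImage: Data) -> UIImage? {
    guard
        let bottomImage = UIImage(data: bottomImage),
        let topImage = UIImage(data: topImage) else {
        return nil
    }
    let width = max(bottomImage.size.width, topImage.size.width)
    let size = CGSize(width: width, height: bottomImage.size.height + topImage.size.height)
    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { _ in
        // The renderer's context is current inside this closure,
        // so UIImage.draw(in:) targets it directly.
        bottomImage.draw(in: CGRect(origin: .zero, size: bottomImage.size))
        topImage.draw(in: CGRect(origin: CGPoint(x: 0, y: bottomImage.size.height),
                                 size: topImage.size))
    }
}
```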
Edit:
Note that if the two images have different widths, the above code will leave "dead space" to the right of the narrower image. You could also make the code center the narrower image, or scale it up to the same width. (If you do scale it up, I suggest scaling it in both dimensions to preserve the original aspect ratio. Otherwise it will look stretched and unnatural.)
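Centering the narrower image only changes the two x origins. A small sketch of just the rect math (the helper name stackedRects is mine):

```swift
import CoreGraphics

// Returns (bottomRect, topRect) for stacking two images vertically,
// centering each one horizontally within the combined width.
func stackedRects(bottom: CGSize, top: CGSize) -> (CGRect, CGRect) {
    let width = max(bottom.width, top.width)
    let bottomRect = CGRect(x: (width - bottom.width) / 2, y: 0,
                            width: bottom.width, height: bottom.height)
    // The second image starts where the first one ends.
    let topRect = CGRect(x: (width - top.width) / 2, y: bottom.height,
                         width: top.width, height: top.height)
    return (bottomRect, topRect)
}
```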
If you upload a picture to Snapchat (that isn't already full screen), it will zoom in and crop the photo so that it becomes full screen. I am able to do this in my ImageView using autoresizing masks, but I need to be able to save the image in this cropped state and I can't figure out how to do it.
This is how I am able to display the image (selected from the camera roll) in the image view the way I want it:
let imgView = UIImageView(image: image)
imgView.autoresizingMask = [.flexibleWidth, .flexibleHeight, .flexibleBottomMargin, .flexibleRightMargin, .flexibleLeftMargin, .flexibleTopMargin]
imgView.contentMode = .scaleAspectFill
imgView.clipsToBounds = true
imgView.frame = CGRect(x: 0, y: 0, width: UIScreen.main.bounds.width, height: UIScreen.main.bounds.height)
self.view.addSubview(imgView)
This takes a non-full-screen photo and displays it full screen with the proper zoom/crop. How can I now save the photo as a full-screen photo?
You can capture an image from a given view:
func image(with view: UIView) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.isOpaque, 0.0)
    defer { UIGraphicsEndImageContext() }
    if let context = UIGraphicsGetCurrentContext() {
        view.layer.render(in: context)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        return image
    }
    return nil
}
let img = image(with: YourImageView)
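On iOS 10+, the same capture can be written with UIGraphicsImageRenderer, which handles context creation and cleanup and returns a non-optional image (the function name snapshot(of:) is mine):

```swift
import UIKit

// Same capture with UIGraphicsImageRenderer; image(actions:) never returns nil.
func snapshot(of view: UIView) -> UIImage {
    let renderer = UIGraphicsImageRenderer(bounds: view.bounds)
    return renderer.image { context in
        // Renders the layer tree, i.e. exactly what is on screen.
        view.layer.render(in: context.cgContext)
    }
}
```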
I have a vertically long image, say 500pt x 1000pt, and I am trying to display the very bottom of it. So I want to crop off the top of the image. But contentMode = .scaleAspectFill crops the top and bottom of the image and shows the middle. There is an explanatory image below.
Is there any better way?
Note: I cannot use contentMode = .bottom, because the image is pretty large.
[Image: result of .scaleAspectFill]
You can crop the image with CGImage.cropping(to: CGRect).
Set the origin of the CGRect to the upper-left corner of where you want to begin cropping, and set the size to the size of the crop you want. Then initialize an image from that cgImage.
let foo = UIImage(named: "fooImage")
guard let croppedCGImage = foo?.cgImage?.cropping(to: CGRect(x: 200, y: 200, width: 375, height: 400)) else { return }
let croppedImage = UIImage(cgImage: croppedCGImage)
Apple Documentation
Playground Preview
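One caveat worth knowing: cropping(to:) operates in the CGImage's pixel coordinates, while UIImage.size is in points, so on a 2x or 3x image the rect needs to be multiplied by image.scale. A sketch (the helper name is mine):

```swift
import UIKit

// Crop using a rect specified in points, converting to the CGImage's
// pixel coordinates first (cropping(to:) works in pixels, UIImage.size in points).
func crop(_ image: UIImage, toPointRect rect: CGRect) -> UIImage? {
    let scale = image.scale
    let pixelRect = CGRect(x: rect.origin.x * scale,
                           y: rect.origin.y * scale,
                           width: rect.width * scale,
                           height: rect.height * scale)
    guard let cgImage = image.cgImage?.cropping(to: pixelRect) else { return nil }
    // Keep the original scale so the result measures correctly in points.
    return UIImage(cgImage: cgImage, scale: scale, orientation: image.imageOrientation)
}
```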
Edit: Adding an example using @IBDesignable and @IBInspectable
Video showing storyboard/nib usage
@IBDesignable
class UIImageViewCroppable: UIImageView {
    @IBInspectable public var isCropped: Bool = false {
        didSet { updateImage() }
    }
    @IBInspectable public var croppingRect: CGRect = .zero {
        didSet { updateImage() }
    }
    func updateImage() {
        guard isCropped else { return }
        guard let croppedCGImage = image?.cgImage?.cropping(to: croppingRect) else { return }
        image = UIImage(cgImage: croppedCGImage)
    }
}
I found an answer.
I wished to solve the problem inside the xib file, but I don't think there is a way to do that.
imageView.contentMode = .bottom
if let anImage = image.cgImage {
    imageView.image = UIImage(cgImage: anImage, scale: image.size.width / imageView.frame.size.width, orientation: .up)
}
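An alternative sketch that actually discards the top of the image with cropping(to:), keeping only the bottom portion (the helper name is mine; note that cropping(to:) works in pixel coordinates):

```swift
import UIKit

// Keep only the bottom `height` pixels of the image (sketch).
func bottomCrop(of image: UIImage, height: CGFloat) -> UIImage? {
    guard let cgImage = image.cgImage else { return nil }
    let pixelWidth = CGFloat(cgImage.width)
    let pixelHeight = CGFloat(cgImage.height)
    // Origin sits `height` pixels above the bottom edge.
    let rect = CGRect(x: 0, y: pixelHeight - height,
                      width: pixelWidth, height: height)
    guard let cropped = cgImage.cropping(to: rect) else { return nil }
    // Preserve scale and orientation so the result displays correctly.
    return UIImage(cgImage: cropped, scale: image.scale, orientation: image.imageOrientation)
}
```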
I'm making an image app that can put simple frames over images loaded into the imageView and save them as a new image, for practice, because I'm very new to Swift and trying to learn. These screenshots show the frame hidden because the problem is with the image loaded behind the frame.
My rep isn't high enough to post images, but the specific issue is: the image shown in the imageView loses its aspect ratio and becomes squished widthwise when saved to the camera roll.
I have my imageView's constraints set to maintain a specific aspect ratio on any sized device so it grows and shrinks accordingly. I also have its content mode set to aspect fill via IB.
The aspect fill works exactly how I'd expect until I save the image. When I hit save, the image inside the image view instantly squishes widthwise and loses its aspect ratio.
I import the image to the imageView with this:
func importPicture() {
    let picker = UIImagePickerController()
    picker.allowsEditing = true
    picker.delegate = self
    present(picker, animated: true)
}

func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : Any]) {
    guard let image = info[UIImagePickerControllerEditedImage] as? UIImage else { return }
    dismiss(animated: true)
    currentImage = image
    unchangedImage = image
    self.imageView.image = currentImage
}
Then I draw:
func drawImagesAndText() {
    let renderer = UIGraphicsImageRenderer(size: CGSize(width: imageView.bounds.size.width, height: imageView.bounds.size.height))
    img = renderer.image { ctx in
        let bgImage = currentImage
        bgImage?.draw(in: CGRect(x: 0, y: 0, width: imageView.bounds.size.width, height: imageView.bounds.size.height))
        let frame = UIImage(named: "5x4frame")
        frame?.draw(in: CGRect(x: 0, y: 0, width: imageView.bounds.size.width, height: imageView.bounds.size.height))
    }
    imageView.image = img
}
This was the only way I could think of to draw the image while also keeping the image view dynamic.
This is the save function I'm using; I have it tied to a save button's outlet from IB:
@IBAction func saveTapped(_ sender: Any) {
    UIImageWriteToSavedPhotosAlbum(img, self, #selector(image(_:didFinishSavingWithError:contextInfo:)), nil)
}
thanks for any and all advice
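For what it's worth, drawing currentImage into a rect the size of the view stretches it whenever the image's aspect ratio differs from the view's; the on-screen aspect fill is purely a display-time effect of the image view. One way to reproduce the fill geometry when drawing is to compute an aspect-fill rect first (a sketch; the helper name is mine):

```swift
import CoreGraphics

// Rect that fills `bounds` while preserving `imageSize`'s aspect ratio,
// centered: the same geometry .scaleAspectFill uses. Overflow beyond
// `bounds` is discarded by the image renderer.
func aspectFillRect(for imageSize: CGSize, in bounds: CGSize) -> CGRect {
    let scale = max(bounds.width / imageSize.width, bounds.height / imageSize.height)
    let size = CGSize(width: imageSize.width * scale, height: imageSize.height * scale)
    let origin = CGPoint(x: (bounds.width - size.width) / 2,
                         y: (bounds.height - size.height) / 2)
    return CGRect(origin: origin, size: size)
}
```

Inside the renderer closure you would then draw the background image into aspectFillRect(for: bgImage!.size, in: imageView.bounds.size) instead of the view-sized rect.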
I want to apply a CIFilter to a UI element. I tried to apply it to the view's layer via the .filters property, but the filter doesn't get applied.
Here's an approach: use UIGraphicsGetImageFromCurrentImageContext to generate a UIImage, apply the filter to that and overlay an image view containing the filtered image over your original component.
Here's a way to do that with a blur (taken from my blog):
Getting a blurred representation of a UIView is pretty simple: I need to begin an image context, use the view's layer's render(in:) method to render into the context, and then get a UIImage from the context:
UIGraphicsBeginImageContextWithOptions(CGSize(width: frame.width, height: frame.height), false, 1)
layer.render(in: UIGraphicsGetCurrentContext()!)
let image = UIGraphicsGetImageFromCurrentImageContext()!
UIGraphicsEndImageContext()
Once I have the image populated, it's a fairly standard workflow to apply a Gaussian blur to it:
guard let blur = CIFilter(name: "CIGaussianBlur") else
{
    return
}
blur.setValue(CIImage(image: image), forKey: kCIInputImageKey)
blur.setValue(blurRadius, forKey: kCIInputRadiusKey)
let ciContext = CIContext(options: nil)
guard let result = blur.outputImage else { return }
let boundingRect = CGRect(x: -blurRadius * 4,
                          y: -blurRadius * 4,
                          width: frame.width + (blurRadius * 8),
                          height: frame.height + (blurRadius * 8))
guard let cgImage = ciContext.createCGImage(result, from: boundingRect) else { return }
let filteredImage = UIImage(cgImage: cgImage)
A blurred image will be larger than its input image, so I need to be explicit about the size I require in createCGImage.
The next step is to add a UIImageView to my view and hide all the other views. I've subclassed UIImageView to BlurOverlay so that when it comes to removing it, I can be sure I'm not removing an existing UIImageView:
let blurOverlay = BlurOverlay()
blurOverlay.frame = boundingRect
blurOverlay.image = filteredImage
subviews.forEach { $0.isHidden = true }
addSubview(blurOverlay)
When it comes to de-blurring, I want to ensure the last subview is one of my BlurOverlay instances, remove it, and unhide the existing views:
func unBlur()
{
    if let blurOverlay = subviews.last as? BlurOverlay
    {
        blurOverlay.removeFromSuperview()
        subviews.forEach { $0.isHidden = false }
    }
}
Finally, to see if a UIView is currently blurred, I just need to see if its last subview is a BlurOverlay:
var isBlurred: Bool
{
    return subviews.last is BlurOverlay
}