UIImageView with aspectFill: crop only the top of the image - Swift

I have a vertically long image, for example 500pt x 1000pt, and I am trying to display only the very bottom of it.
So I want the top of the image to be cropped off.
But contentMode = .scaleAspectFill crops both the top and the bottom of the image and shows the middle (see the explanation image below).
Is there a better way?
Note: I can not just use contentMode = .bottom, because the image is much larger than the image view.

You can crop the image using CGImage.cropping(to: CGRect).
Set the origin of the CGRect to the upper-left corner of where you want to begin cropping, and set the size to the size of the crop you want. Then initialize a UIImage from that CGImage.
let foo = UIImage(named: "fooImage")
guard let croppedCGImage = foo?.cgImage?.cropping(to: CGRect(x: 200, y: 200, width: 375, height: 400)) else { return }
let croppedImage = UIImage(cgImage: croppedCGImage)
See Apple's documentation for CGImage.cropping(to:).
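For the specific case in the question (keep only the bottom of a tall image), a sketch along these lines might work; the image name and the 600pt kept height are assumptions for illustration, and the rect is computed in the CGImage's pixel coordinates:

if let tallImage = UIImage(named: "tallImage") {
    let scale = tallImage.scale
    let pixelWidth = tallImage.size.width * scale
    let pixelHeight = tallImage.size.height * scale
    let keptHeight = 600 * scale // keep the bottom 600 points
    // start the crop below the part we want to discard
    let cropRect = CGRect(x: 0, y: pixelHeight - keptHeight, width: pixelWidth, height: keptHeight)
    if let cgImage = tallImage.cgImage?.cropping(to: cropRect) {
        let bottomImage = UIImage(cgImage: cgImage, scale: scale, orientation: tallImage.imageOrientation)
        imageView.image = bottomImage // imageView is assumed to exist elsewhere
    }
}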
Edit: adding an example using @IBDesignable and @IBInspectable.
@IBDesignable
class UIImageViewCroppable: UIImageView {
    @IBInspectable public var isCropped: Bool = false {
        didSet { updateImage() }
    }
    @IBInspectable public var croppingRect: CGRect = .zero {
        didSet { updateImage() }
    }

    func updateImage() {
        guard isCropped else { return }
        guard let croppedCGImage = image?.cgImage?.cropping(to: croppingRect) else { return }
        image = UIImage(cgImage: croppedCGImage)
    }
}
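A minimal usage sketch (the frame and crop rect values here are purely illustrative; croppingRect is in the CGImage's pixel coordinates):

// The same properties can also be set in the Attributes inspector.
let croppableView = UIImageViewCroppable(frame: CGRect(x: 0, y: 0, width: 375, height: 400))
croppableView.image = UIImage(named: "fooImage")
croppableView.croppingRect = CGRect(x: 0, y: 600, width: 500, height: 400)
croppableView.isCropped = true // triggers updateImage() and applies the crop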

I found an answer.
I wanted to solve the problem inside the xib file, but I don't think there is a way to do that.
imageView.contentMode = .bottom
if let anImage = image.cgImage {
    imageView.image = UIImage(cgImage: anImage, scale: image.size.width / imageView.frame.size.width, orientation: .up)
}
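Packaged as a helper, the same idea might look like this (a sketch; bottomAlignedImage and someImage are illustrative names, and this version uses the CGImage's pixel width so it also accounts for @2x/@3x image scales):

// Re-create the UIImage with a scale chosen so that its point width matches
// the image view's width, then let contentMode = .bottom show the bottom.
func bottomAlignedImage(_ image: UIImage, fittingWidth viewWidth: CGFloat) -> UIImage? {
    guard let cgImage = image.cgImage, viewWidth > 0 else { return nil }
    // cgImage.width is in pixels; dividing by the view width (in points)
    // gives the scale that makes the new image exactly viewWidth points wide
    let newScale = CGFloat(cgImage.width) / viewWidth
    return UIImage(cgImage: cgImage, scale: newScale, orientation: image.imageOrientation)
}

// Usage
imageView.contentMode = .bottom
imageView.clipsToBounds = true
if let fitted = bottomAlignedImage(someImage, fittingWidth: imageView.bounds.width) {
    imageView.image = fitted
}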

Related

Cropping visible part of UIImage in UIImageView for saliency

I'm doing attention-based saliency and need to pass an image to the request. When the contentMode is .scaleAspectFill, the result of the request is not correct, because I pass the full image rather than only the part that is visible on screen.
I'm trying to crop the UIImage, but this method doesn't crop correctly:
let newImage = cropImage(imageToCrop: imageView.image, toRect: imageView.frame)
func cropImage(imageToCrop: UIImage?, toRect rect: CGRect) -> UIImage? {
    guard let imageRef = imageToCrop?.cgImage?.cropping(to: rect) else {
        return nil
    }
    let cropped: UIImage = UIImage(cgImage: imageRef)
    return cropped
}
How can I make the saliency request only for the visible part of the image (which changes when the contentMode changes)?
If I understand your goal correctly...
Suppose we have this 640 x 360 image:
and we display it in a 240 x 240 image view, using .scaleAspectFill...
It looks like this (the red outline is the image view frame):
and, with .clipsToBounds = true:
we want to generate this new 360 x 360 image (that is, we want to keep the original image resolution... we don't want to end up with a 240 x 240 image):
To crop the visible portion of the image, we need to calculate the scaled rect, including the offset:
func cropImage(imageToCrop: UIImage?, toRect rect: CGRect) -> UIImage? {
    guard let imageRef = imageToCrop?.cgImage?.cropping(to: rect) else {
        return nil
    }
    let cropped: UIImage = UIImage(cgImage: imageRef)
    return cropped
}
func myCrop(imgView: UIImageView) -> UIImage? {
    // get the image from the imageView
    guard let img = imgView.image else { return nil }
    // image view rect
    let vr: CGRect = imgView.bounds
    // image size -- we need to account for scale
    let imgSZ: CGSize = CGSize(width: img.size.width * img.scale, height: img.size.height * img.scale)
    let viewRatio: CGFloat = vr.width / vr.height
    let imgRatio: CGFloat = imgSZ.width / imgSZ.height
    var newRect: CGRect = .zero
    // calculate the rect that needs to be clipped from the full image
    if viewRatio > imgRatio {
        // the image view has a wider aspect ratio than the image,
        // so the top and bottom of the image will be clipped
        let f: CGFloat = imgSZ.width / vr.width
        let h: CGFloat = vr.height * f
        newRect.origin.y = (imgSZ.height - h) * 0.5
        newRect.size.width = imgSZ.width
        newRect.size.height = h
    } else {
        // the image view has a narrower aspect ratio than the image,
        // so the left and right of the image will be clipped
        let f: CGFloat = imgSZ.height / vr.height
        let w: CGFloat = vr.width * f
        newRect.origin.x = (imgSZ.width - w) * 0.5
        newRect.size.width = w
        newRect.size.height = imgSZ.height
    }
    return cropImage(imageToCrop: img, toRect: newRect)
}
and call it like this:
if let croppedImage = myCrop(imgView: theImageView) {
    // do something with the new image
}
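To tie this back to the saliency part of the question, one possible follow-up (a sketch, not part of the answer above) is to run the Vision request on the cropped image's cgImage; visibleSaliency is an illustrative name:

import UIKit
import Vision

func visibleSaliency(for imageView: UIImageView) {
    // crop to the visible portion first, then hand the CGImage to Vision
    guard let visibleImage = myCrop(imgView: imageView),
          let cgImage = visibleImage.cgImage else { return }

    let request = VNGenerateAttentionBasedSaliencyImageRequest()
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    do {
        try handler.perform([request])
        if let observation = request.results?.first as? VNSaliencyImageObservation {
            // salientObjects holds bounding boxes in normalized coordinates
            print(observation.salientObjects ?? [])
        }
    } catch {
        print("Saliency request failed: \(error)")
    }
}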

get the cgImage from a UIImage that was selected using UIImagePickerController

I am trying to build something for my own learning where I select an image from my photo library then divide that image up into sections. I had found info on how to split a single UIImage into sections, but in order to do that I need to have access to the cgImage property of the UIImage object. My problem is, the cgImage is always nil/null when selecting an image from the UIImagePickerController. Below is a stripped down version of my code, I'm hoping someone knows why the cgImage is always nil/null...
class ViewController: UIViewController, UINavigationControllerDelegate, UIImagePickerControllerDelegate {
    @IBOutlet weak var selectButton: UIButton!
    let picker = UIImagePickerController()
    var image: UIImage!
    var images: [UIImage]!

    override func viewDidLoad() {
        super.viewDidLoad()
        picker.delegate = self
        picker.sourceType = .photoLibrary
    }

    @objc func selectPressed(_ sender: UIButton) {
        self.present(picker, animated: true)
    }

    func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey : Any]) {
        guard let image = info[UIImagePickerController.InfoKey.originalImage] as? UIImage else {
            self.picker.dismiss(animated: true, completion: nil)
            return
        }
        self.image = image
        self.picker.dismiss(animated: true, completion: nil)
        self.makePuzzle()
    }

    func makePuzzle() {
        let images = self.image.split(times: 5)
    }
}
extension UIImage {
    func split(times: Int) -> [UIImage] {
        let size = self.size
        var xpos = 0, ypos = 0
        var images: [UIImage] = []
        let width = Int(size.width) / times
        let height = Int(size.height) / times
        for x in 0..<times {
            xpos = 0
            for y in 0..<times {
                let rect = CGRect(x: xpos, y: ypos, width: width, height: height)
                let ciRef = self.cgImage?.cropping(to: rect) // this is always nil
                let img = UIImage(cgImage: ciRef!) // crash because nil
                xpos += width
                images.append(img)
            }
            ypos += height
        }
        return images
    }
}
I can't seem to get the cgImage to be anything but nil/null and the app crashes every time. I know I can change the ! to ?? nil or something similar to avoid the crash, or add a guard or something, but that isn't really the problem, the problem is the cgImage is nil. I have looked around and the only thing I can find is how to get the cgImage with something like image.cgImage but that doesn't work. I think it has something to do with the image being selected from the UIImagePickerController, maybe that doesn't create the cgImage properly? Honestly not sure and could use some help. Thank you.
This is not an answer, just a beefed up comment with code.
Your assumption that the problem may be due to the UIImagePickerController could be correct.
Here is my SwiftUI test code. It shows your split(..) code (with some minor mods) working.
extension UIImage {
    func split(times: Int) -> [UIImage] {
        let size = self.size
        var xpos = 0, ypos = 0
        var images: [UIImage] = []
        let width = Int(size.width) / times
        let height = Int(size.height) / times
        if let cgimg = self.cgImage { // <-- here
            for _ in 0..<times {
                xpos = 0
                for _ in 0..<times {
                    let rect = CGRect(x: xpos, y: ypos, width: width, height: height)
                    if let ciRef = cgimg.cropping(to: rect) { // <-- here
                        let img = UIImage(cgImage: ciRef)
                        xpos += width
                        images.append(img)
                    }
                }
                ypos += height
            }
        }
        return images
    }
}
struct ContentView: View {
    @State var imgSet = [UIImage]()

    var body: some View {
        ScrollView {
            ForEach(imgSet, id: \.self) { img in
                Image(uiImage: img).resizable().frame(width: 100, height: 100)
            }
        }
        .onAppear {
            if let img = UIImage(systemName: "globe") { // for testing
                imgSet = img.split(times: 2)
            }
        }
    }
}
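If the picked image really does come back without a backing CGImage (for example because it is backed by a CIImage), one hedged workaround is to redraw it into a bitmap context before splitting; forceBitmapBacked below is an illustrative name, not part of the original question or answer:

import UIKit

extension UIImage {
    // Redraw the image so the result is guaranteed to have a non-nil cgImage.
    func forceBitmapBacked() -> UIImage {
        let renderer = UIGraphicsImageRenderer(size: size)
        return renderer.image { _ in
            draw(in: CGRect(origin: .zero, size: size))
        }
    }
}

// Hypothetical usage before splitting:
// let pieces = pickedImage.forceBitmapBacked().split(times: 5)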

.aspectFill on NSImageView

I'm porting my SpriteKit app from iOS to MacOS. I am designing my main menu in the main.storyboard, and I have an image as the background. When I resize the window, however, my image does not fill the whole screen.
I've tried:
.scaleAxesIndependently //???
.scaleNone //Centre
.scaleProportionallyDown //???
.scaleProportionallyUpOrDown //AspectFit
but none are the same as .aspectFill.
I am using Swift.
By subclassing NSImageView and overriding intrinsicContentSize you can resize the image while keeping its aspect ratio, like so:
class AspectFillImageView: NSImageView {
    override var intrinsicContentSize: CGSize {
        guard let img = self.image else { return .zero }
        let viewWidth = self.frame.size.width
        let ratio = viewWidth / img.size.width
        return CGSize(width: viewWidth, height: img.size.height * ratio)
    }
}
If you just want to fill the whole view ignoring the ratio, use this extension instead:
extension NSImage {
    func resize(to size: NSSize) -> NSImage {
        return NSImage(size: size, flipped: false, drawingHandler: {
            self.draw(in: $0)
            return true
        })
    }
}
Extension usage:
imageView.image = imageView.image?.resize(to: self.view.frame.size)
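For behavior closer to iOS's .scaleAspectFill, another commonly used approach (not from the answer above, just a sketch) is to let the view's backing layer do the scaling:

import AppKit

// `imageView` and `backgroundImage` are assumed to exist in your window;
// the function name is illustrative.
func applyAspectFill(to imageView: NSImageView, with backgroundImage: NSImage) {
    imageView.wantsLayer = true
    imageView.layer?.contentsGravity = .resizeAspectFill
    imageView.layer?.masksToBounds = true
    imageView.layer?.contents = backgroundImage
}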

Is it possible to change the size of the image you get from using the camera thru imagePicker?

When I try to change the property currently, I am getting an error that the size is a "get-only property." Does anyone know a way around this?
You can try the following (I haven't run it yet, but I'm pretty sure it works):
let imagePickerView: UIView = self.imagePicker.view
let cameraViewFrame = CGRect(x: 0,
                             y: self.overlay.topBarHeight,
                             width: self.view.bounds.size.width,
                             height: self.view.bounds.size.height - self.overlay.topBarHeight - self.overlay.bottomBarHeight)
imagePickerView.frame = cameraViewFrame
Good luck :)
What we can do instead is draw a new UIImage. The code below is a function that scales the image passed in; after the image has been scaled, its size changes.
extension UIImage {
    class func scaleImage(image: UIImage, scaleFloat: CGFloat) -> UIImage {
        let size = CGSize(width: image.size.width * scaleFloat, height: image.size.height * scaleFloat)
        UIGraphicsBeginImageContext(size)
        let context = UIGraphicsGetCurrentContext()
        var transform = CGAffineTransform.identity
        transform = transform.scaledBy(x: scaleFloat, y: scaleFloat)
        context?.concatenate(transform)
        image.draw(at: CGPoint(x: 0, y: 0))
        let newimg = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newimg ?? image
    }
}
Here is example code to scale the image to 150 x 150. Note that this only produces exactly 150 x 150 for square images; if the image is not square, the longer side is scaled to 150 and the aspect ratio is preserved.
extension UIImage {
    class func scaleImgTo150x150(image: UIImage) -> UIImage {
        let scale: CGFloat
        if image.size.width > image.size.height {
            scale = 150 / image.size.width
        } else {
            scale = 150 / image.size.height
        }
        return UIImage.scaleImage(image: image, scaleFloat: scale)
    }
}
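Usage might look like this (the image name is an assumption for illustration):

if let photo = UIImage(named: "cameraPhoto") {
    let thumbnail = UIImage.scaleImgTo150x150(image: photo)
    print(thumbnail.size) // at most 150 on the longer side
}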

Swift UILabel overlaps UIImage in custom UITableViewCell

I'm making a UITableView with a custom UITableViewCell (CardCell). It contains a UIImage on the left and, right next to it, a UILabel.
I'm downloading the UIImages asynchronously from a URL and scaling each UIImage to keep its aspect ratio: the height should be 50 pixels, but the width can change (depending on the original width and height). I wrote a method to scale the image, and it's working fine, but my UILabel overlaps the UIImage, like this:
I know that the image is completely there, because when I tap and hold the cell (without actually selecting it) I can see the image underneath the UILabel, like this:
These are the constraints on the storyboard for the UIImage:
These are the constraints for the UILabel on the storyboard:
This is the code I wrote for scaling the downloaded Image:
func scaleImage(sourceImage: UIImage) -> UIImage {
    let oldHeight = sourceImage.size.height
    let scaleFactor = 50 / oldHeight
    let newWidth = sourceImage.size.width * scaleFactor
    let newHeight = oldHeight * scaleFactor
    UIGraphicsBeginImageContext(CGSize(width: newWidth, height: newHeight))
    sourceImage.draw(in: CGRect(x: 0, y: 0, width: newWidth, height: newHeight))
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage ?? sourceImage
}
And this is the code for downloading the UIImage, rescaling it (calling the previous function) and setting the scaled UIImage in the cell:
func downloadImage(url: NSURL, cell: CardCell) {
    getDataFromUrl(url) { (data, response, error) in
        DispatchQueue.main.async {
            guard let data = data, error == nil else { return }
            let oldImage = UIImage(data: data)
            cell.logoImageView.image = self.scaleImage(sourceImage: oldImage!)
        }
    }
}