I'm making a UITableView with a custom UITableViewCell (CardCell). It contains a UIImageView on the left and, right next to it, a UILabel.
I'm downloading the UIImages asynchronously from a URL and scaling them to keep the aspect ratio of the image: the height should be 50 pixels, but the width can change (depending on the original width and height). I wrote a method to scale the image, and it's working fine, but my UILabel overlaps the UIImage, like this:
I know that the image is completely there, because when I tap and hold the cell (without actually selecting it) I can see the image underneath the UILabel, like this:
These are the constraints on the storyboard for the UIImageView:
These are the constraints for the UILabel on the storyboard:
This is the code I wrote for scaling the downloaded image:
func scaleImage(sourceImage: UIImage) -> UIImage {
    // Scale so the result is 50 points tall, preserving the aspect ratio
    let oldHeight = sourceImage.size.height
    let scaleFactor = 50 / oldHeight
    let newWidth = sourceImage.size.width * scaleFactor
    let newHeight = oldHeight * scaleFactor
    UIGraphicsBeginImageContext(CGSizeMake(newWidth, newHeight))
    sourceImage.drawInRect(CGRectMake(0, 0, newWidth, newHeight))
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage
}
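For reference, here's a minimal sketch of the same aspect-preserving scale in current Swift using UIGraphicsImageRenderer (iOS 10+), which manages the context and screen scale for you; the 50-point target height is taken from the question:

func scaleImage(_ sourceImage: UIImage, toHeight targetHeight: CGFloat = 50) -> UIImage {
    // Preserve aspect ratio: width scales by the same factor as the height
    let scaleFactor = targetHeight / sourceImage.size.height
    let newSize = CGSize(width: sourceImage.size.width * scaleFactor, height: targetHeight)
    let renderer = UIGraphicsImageRenderer(size: newSize)
    return renderer.image { _ in
        sourceImage.draw(in: CGRect(origin: .zero, size: newSize))
    }
}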
And this is the code for downloading the UIImage, rescaling it (calling the scaleImage method above) and setting the scaled UIImage in the cell:
func downloadImage(url: NSURL, cell: CardCell) {
    getDataFromUrl(url) { (data, response, error) in
        dispatch_async(dispatch_get_main_queue()) { () -> Void in
            guard let data = data where error == nil else { return }
            let oldImage = UIImage(data: data)
            cell.logoImageView.image = self.scaleImage(oldImage!)
        }
    }
}
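One caveat: the code above decodes and scales the image on the main queue. A hedged variant, assuming getDataFromUrl calls back on a background queue (which the original dispatch_async suggests), does the expensive work there and hops to the main queue only for the UI update:

func downloadImage(url: NSURL, cell: CardCell) {
    getDataFromUrl(url) { (data, response, error) in
        guard let data = data, oldImage = UIImage(data: data) where error == nil else { return }
        // Decode and scale on the background queue the completion runs on...
        let scaledImage = self.scaleImage(oldImage)
        dispatch_async(dispatch_get_main_queue()) {
            // ...and touch the UI only on the main queue
            cell.logoImageView.image = scaledImage
        }
    }
}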
I'm doing attention-based saliency and need to pass an image to the request. When the contentMode is .scaleAspectFill, the result of the request is not correct, because I use the full image (including the part not visible on screen).
I'm trying to crop the UIImage, but this method doesn't crop correctly:
let newImage = cropImage(imageToCrop: imageView.image, toRect: imageView.frame)

func cropImage(imageToCrop: UIImage?, toRect rect: CGRect) -> UIImage? {
    guard let imageRef = imageToCrop?.cgImage?.cropping(to: rect) else {
        return nil
    }
    let cropped: UIImage = UIImage(cgImage: imageRef)
    return cropped
}
How can I make the saliency request only for the visible part of the image (which changes when the contentMode changes)?
If I understand your goal correctly...
Suppose we have this 640 x 360 image:
and we display it in a 240 x 240 image view, using .scaleAspectFill...
It looks like this (the red outline is the image view frame):
and, with clipsToBounds = true:
we want to generate this new 360 x 360 image (that is, we want to keep the original image resolution... we don't want to end up with a 240 x 240 image):
To crop the visible portion of the image, we need to calculate the scaled rect, including the offset:
func cropImage(imageToCrop: UIImage?, toRect rect: CGRect) -> UIImage? {
    guard let imageRef = imageToCrop?.cgImage?.cropping(to: rect) else {
        return nil
    }
    let cropped: UIImage = UIImage(cgImage: imageRef)
    return cropped
}
func myCrop(imgView: UIImageView) -> UIImage? {
    // get the image from the imageView
    guard let img = imgView.image else { return nil }
    // image view rect
    let vr: CGRect = imgView.bounds
    // image size -- we need to account for scale
    let imgSZ: CGSize = CGSize(width: img.size.width * img.scale, height: img.size.height * img.scale)
    let viewRatio: CGFloat = vr.width / vr.height
    let imgRatio: CGFloat = imgSZ.width / imgSZ.height
    var newRect: CGRect = .zero
    // calculate the rect that needs to be clipped from the full image
    if viewRatio > imgRatio {
        // image view has a wider aspect ratio than the image,
        // so the image fills the width and top and bottom are clipped
        let f: CGFloat = imgSZ.width / vr.width
        let h: CGFloat = vr.height * f
        newRect.origin.y = (imgSZ.height - h) * 0.5
        newRect.size.width = imgSZ.width
        newRect.size.height = h
    } else {
        // image view has a narrower aspect ratio than the image,
        // so the image fills the height and left and right are clipped
        let f: CGFloat = imgSZ.height / vr.height
        let w: CGFloat = vr.width * f
        newRect.origin.x = (imgSZ.width - w) * 0.5
        newRect.size.width = w
        newRect.size.height = imgSZ.height
    }
    return cropImage(imageToCrop: img, toRect: newRect)
}
and call it like this:
if let croppedImage = myCrop(imgView: theImageView) {
    // do something with the new image
}
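With the visible portion extracted, the saliency request from the question can be run against the cropped image rather than the full one. A minimal sketch, assuming Vision's attention-based request (iOS 13+) and the myCrop function above:

import Vision

func visibleSaliency(in imageView: UIImageView) {
    // Crop to what the image view actually shows, then run saliency on that
    guard let croppedImage = myCrop(imgView: imageView),
          let cgImage = croppedImage.cgImage else { return }
    let request = VNGenerateAttentionBasedSaliencyImageRequest()
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    do {
        try handler.perform([request])
        if let observation = request.results?.first as? VNSaliencyImageObservation {
            // salientObjects holds normalized rects within the *cropped* image
            print(observation.salientObjects ?? [])
        }
    } catch {
        print("Saliency request failed: \(error)")
    }
}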
I have a vertically long image, say 500pt x 1000pt.
I am trying to display the very bottom of the image, so I want to crop off the top.
But contentMode = .scaleAspectFill crops the top and bottom of the image and shows the middle.
There is an explanation image below.
Is there any better way?
Note: I cannot use contentMode = .bottom, because the image is pretty large.
You can crop the image with CGImage.cropping(to:).
Set the origin of the CGRect to the upper-left corner of where you want to begin cropping, and set the size to the size of the crop you want. Then initialize a UIImage from that CGImage.

let foo = UIImage(named: "fooImage")
guard let croppedCGImage = foo?.cgImage?.cropping(to: CGRect(x: 200, y: 200, width: 375, height: 400)) else { return }
let croppedImage = UIImage(cgImage: croppedCGImage)
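One caveat worth noting: cropping(to:) operates on the underlying CGImage in pixels, while UIImage sizes are in points, so for a @2x or @3x image the rect should be scaled first. A sketch of that conversion:

func cropInPoints(_ image: UIImage, to rect: CGRect) -> UIImage? {
    // Convert the point-based rect into the CGImage's pixel coordinates
    let scaledRect = CGRect(x: rect.origin.x * image.scale,
                            y: rect.origin.y * image.scale,
                            width: rect.size.width * image.scale,
                            height: rect.size.height * image.scale)
    guard let cgImage = image.cgImage?.cropping(to: scaledRect) else { return nil }
    // Preserve the original scale and orientation in the result
    return UIImage(cgImage: cgImage, scale: image.scale, orientation: image.imageOrientation)
}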
Edit: Adding an example using @IBDesignable and @IBInspectable
@IBDesignable
class UIImageViewCroppable: UIImageView {
    @IBInspectable public var isCropped: Bool = false {
        didSet { updateImage() }
    }
    @IBInspectable public var croppingRect: CGRect = .zero {
        didSet { updateImage() }
    }

    func updateImage() {
        guard isCropped else { return }
        guard let croppedCGImage = image?.cgImage?.cropping(to: croppingRect) else { return }
        image = UIImage(cgImage: croppedCGImage)
    }
}
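The same properties can be set from code as well as in the Attributes inspector; a minimal usage sketch (the image name and rect are placeholders):

let croppableView = UIImageViewCroppable(frame: CGRect(x: 0, y: 0, width: 375, height: 400))
croppableView.image = UIImage(named: "fooImage")
// Setting either inspectable property re-runs updateImage() via didSet
croppableView.croppingRect = CGRect(x: 200, y: 200, width: 375, height: 400)
croppableView.isCropped = true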
I found an answer. I wished to solve the problem inside the xib file, but I don't think there is a way to do it there.
imageView.contentMode = .bottom
if let anImage = image.cgImage {
    imageView.image = UIImage(cgImage: anImage, scale: image.size.width / imageView.frame.size.width, orientation: .up)
}
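If you'd rather not lean on the scale trick, an equivalent approach (a sketch, assuming a tall image like the 500pt x 1000pt one in the question) is to crop the bottom strip directly with cropping(to:) and display the result with an aspect content mode:

func bottomCrop(of image: UIImage, forAspectOf viewSize: CGSize) -> UIImage? {
    guard let cgImage = image.cgImage else { return nil }
    let imgWidth = CGFloat(cgImage.width)
    let imgHeight = CGFloat(cgImage.height)
    // Height (in pixels) that is visible when the width fills the view;
    // assumes the image is taller than the view's aspect, as in the question
    let visibleHeight = imgWidth * viewSize.height / viewSize.width
    let rect = CGRect(x: 0,
                      y: imgHeight - visibleHeight, // anchor the crop at the bottom edge
                      width: imgWidth,
                      height: visibleHeight)
    guard let croppedCG = cgImage.cropping(to: rect) else { return nil }
    return UIImage(cgImage: croppedCG, scale: image.scale, orientation: image.imageOrientation)
}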
I am using the ImageScrollView CocoaPod v1.5 inside a UITableViewCell (Swift 4).
I am downloading images from Firestore using SDWebImage (I also tried Kingfisher, with no change in my issue). My images are 700x700 and are displayed in an ImageScrollView that, depending upon the device, is about 400x100. I have set the SDWebImage imageContentMode to .widthFill. I rotate the image to get it into the form I want for the table view; I use it in its regular orientation in other places.
The first time my cells are shown, the images are shown correctly. If I go back to the previous page and then show the same results in the table again, the visible cells have images that no longer fit correctly with regard to width; they are now too wide. If I scroll down, hiding those cells, all new cells are properly displayed, and when I scroll back up the incorrect cells now display correctly. This happens in the simulator and on an actual phone.
Here are the important parts of my UITableViewCell class:
class MyTableCell: UITableViewCell {
    var ski: Ski!
    var skiImageView: UIImageView = UIImageView()
    @IBOutlet weak var skiImageScrollView: ImageScrollView!

    func configureCell(ski: Ski) {
        self.ski = ski
        let imageUrl = ski.imageUrl!
        let url = URL(string: imageUrl)
        skiImageView.sd_setImage(with: url, placeholderImage: placeholderImage, options: [.retryFailed, .continueInBackground], completed: { (image, error, cacheType, url) in
            if error != nil {
                print("ConfigureCell Error : \(error!)")
                return
            }
            let rotatedImage = self.imageRotatedByDegrees(oldImage: image!, deg: 90.0)
            self.skiImageScrollView.display(image: rotatedImage)
            self.skiImageScrollView.imageContentMode = .widthFill
        })
    }
    func imageRotatedByDegrees(oldImage: UIImage, deg degrees: CGFloat) -> UIImage {
        // Calculate the size of the rotated view's containing box for our drawing space
        let rotatedViewBox: UIView = UIView(frame: CGRect(x: 0, y: 0, width: oldImage.size.width, height: oldImage.size.height))
        let t: CGAffineTransform = CGAffineTransform(rotationAngle: degrees * CGFloat.pi / 180)
        rotatedViewBox.transform = t
        let rotatedSize: CGSize = rotatedViewBox.frame.size
        // Create the bitmap context
        UIGraphicsBeginImageContext(rotatedSize)
        let bitmap: CGContext = UIGraphicsGetCurrentContext()!
        // Move the origin to the middle of the image so we rotate and scale around the center
        bitmap.translateBy(x: rotatedSize.width / 2, y: rotatedSize.height / 2)
        // Rotate the image context
        bitmap.rotate(by: (degrees * CGFloat.pi / 180))
        // Now, draw the rotated/scaled image into the context
        bitmap.scaleBy(x: 1.0, y: -1.0)
        bitmap.draw(oldImage.cgImage!, in: CGRect(x: -oldImage.size.width / 2, y: -oldImage.size.height / 2, width: oldImage.size.width, height: oldImage.size.height))
        let newImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return newImage
    }
}
Another strange thing: if I slap the back of my phone or drop it on my desk, the images jump down so only parts of the top of my image are visible in the ImageScrollView, but the callbacks from ImageScrollView for zooming are not activated, so I have no idea what is happening there either.
You might try setting:
skiImageScrollView.imageContentMode = .aspectFit
before calling
self.skiImageScrollView.display(image: rotatedImage)
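Putting it together inside the question's completion handler (a sketch using the question's own names; only the ordering changes):

let rotatedImage = self.imageRotatedByDegrees(oldImage: image!, deg: 90.0)
// Set the content mode first so ImageScrollView lays out with the right mode
self.skiImageScrollView.imageContentMode = .aspectFit
self.skiImageScrollView.display(image: rotatedImage)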
I've got a dynamically distorted UIImage appearing in a UIImageView (basically, I update the UIImageView's image with a function). Every time I try to crop the image as it grows, based on the distortion and touch response, I get a fatal error: trying to unwrap an optional that comes back nil.
Any ideas?
func pincushionDistortionWithAmount(amount: CGFloat, position: CGPoint) {
    // 1 - Remove all subviews
    for view in subviews {
        view.removeFromSuperview()
    }
    // 2
    let beginImage = CIImage(image: pinImageView.image!)
    // 3
    let filter: CIFilter = CIFilter(name: "CIBumpDistortion")!
    filter.setValue(beginImage, forKey: kCIInputImageKey)
    filter.setValue(CIVector(x: position.x, y: position.y), forKey: "inputCenter")
    filter.setValue(75, forKey: "inputRadius")
    filter.setValue(-1.0 * amount, forKey: "inputScale")
    // 4
    let newImage: UIImage = UIImage(CIImage: filter.outputImage!)
    let imageViewWidth = pinImageView.frame.size.width
    let imageViewHeight = pinImageView.frame.size.height
    let imageWidth = newImage.size.width
    let imageHeight = newImage.size.height
    if imageWidth > imageViewWidth {
        print("Image is wider than ImageView")
        // Code to crop image to fit in imageView
    }
    if imageHeight > imageViewHeight {
        print("Image is taller than ImageView")
        // Code to crop image to fit in imageView
    }
    pinImageView.image = newImage
    addSubview(pinImageView)
}
I'm generating a QR code to put into a UIImageView. I'm running the generation function asynchronously, but for some reason the app crashes when I run it on my phone, though it doesn't crash in the simulator. I'm not really sure what's going on... Any ideas?
Setup Image
let QR = UIImageView()

dispatch_async(dispatch_get_global_queue(Int(QOS_CLASS_USER_INITIATED.value), 0)) { // 1
    let img = self.generateQRImage(self.arr[sender.tag], withSizeRate: self.screenWidth - 40)
    dispatch_async(dispatch_get_main_queue()) { // 2
        QR.image = img
    }
}

QR.frame = CGRectMake(0, 0, screenWidth - 40, screenWidth - 40)
QR.center = CGPoint(x: screenWidth / 2, y: screenHeight / 2)
sView.addSubview(QR)
Generate QR
func generateQRImage(stringQR: NSString, withSizeRate rate: CGFloat) -> UIImage {
    let filter: CIFilter = CIFilter(name: "CIQRCodeGenerator")
    filter.setDefaults()
    let data: NSData = stringQR.dataUsingEncoding(NSUTF8StringEncoding)!
    filter.setValue(data, forKey: "inputMessage")
    let outputImg: CIImage = filter.outputImage
    let context: CIContext = CIContext(options: nil)
    let cgimg: CGImageRef = context.createCGImage(outputImg, fromRect: outputImg.extent())
    var img: UIImage = UIImage(CGImage: cgimg, scale: 1.0, orientation: UIImageOrientation.Up)!
    let width = img.size.width * rate
    let height = img.size.height * rate
    UIGraphicsBeginImageContext(CGSizeMake(width, height))
    let cgContxt: CGContextRef = UIGraphicsGetCurrentContext()
    CGContextSetInterpolationQuality(cgContxt, kCGInterpolationNone)
    img.drawInRect(CGRectMake(0, 0, width, height))
    img = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return img
}
The intent of withSizeRate is clearly to be a scaling factor to apply to the QR image (which is 27x27). But you are using the screen width as the multiplier, which results in an exceedingly large image (don't go by the size of the resulting JPEG/PNG file; it's the uncompressed representation used by the image view that matters). On an iPhone 6, 27 x (375 - 40) is roughly 9,000 pixels on a side, which at four bytes per pixel is about 300 MB uncompressed, and nearly 400 MB on an iPhone 6+. When I ran it through the iPhone 6 simulator, memory usage actually spiked to 2.4 GB:
I would suggest using a smaller scaling factor, or just creating an image that is precisely the size of the image view (though use zero for the scale with UIGraphicsBeginImageContextWithOptions).
For example, you could simply pass the CGSize of the image view to generateQRImage, and adjust the method like so:
func generateQRImage(stringQR: String, size: CGSize) -> UIImage {
    let filter = CIFilter(name: "CIQRCodeGenerator")
    filter.setDefaults()
    let data = stringQR.dataUsingEncoding(NSUTF8StringEncoding)!
    filter.setValue(data, forKey: "inputMessage")
    let outputImage = filter.outputImage
    let context = CIContext(options: nil)
    let cgImage = context.createCGImage(outputImage, fromRect: outputImage.extent())
    var image = UIImage(CGImage: cgImage, scale: 1.0, orientation: UIImageOrientation.Up)!
    let width = size.width
    let height = size.height
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(width, height), true, 0)
    let cgContext = UIGraphicsGetCurrentContext()
    CGContextSetInterpolationQuality(cgContext, kCGInterpolationNone)
    image.drawInRect(CGRectMake(0, 0, width, height))
    image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return image
}
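Calling it would then mirror the setup code from the question, passing the image view's size (a sketch; QR, arr, and sender are the question's names, and QR's frame must be set before its size is read):

let qrSize = QR.bounds.size
dispatch_async(dispatch_get_global_queue(Int(QOS_CLASS_USER_INITIATED.value), 0)) {
    // Generate off the main queue, sized exactly to the image view
    let image = self.generateQRImage(self.arr[sender.tag], size: qrSize)
    dispatch_async(dispatch_get_main_queue()) {
        QR.image = image
    }
}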