I want to scale up a UIImage so that the user can see its individual pixels very sharply. When I put it in a UIImageView and scale the transform matrix up, the UIImage appears antialiased and smoothed.
Is there a way to render into a bigger bitmap context by simply repeating every row and every column to get bigger pixels? How could I do that?
#import <QuartzCore/CALayer.h>
view.layer.magnificationFilter = kCAFilterNearest;
When drawing directly into a bitmap context, we can use:
CGContextSetInterpolationQuality(myBitmapContext, kCGInterpolationNone);
I found this in "CGContextDrawImage very slow on iPhone 4".
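The same layer filter can also be set in Swift; a small sketch (imageView stands in for whatever view displays the image):

imageView.layer.magnificationFilter = .nearest // CALayerContentsFilter.nearest on Swift 4.2+; use kCAFilterNearest on older SDKs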
Swift 5
let image = UIImage(named: "Foo")!
let scaledImageSize = image.size.applying(CGAffineTransform(scaleX: 2, y: 2))
UIGraphicsBeginImageContext(scaledImageSize)
let scaledContext = UIGraphicsGetCurrentContext()!
scaledContext.interpolationQuality = .none
image.draw(in: CGRect(origin: .zero, size: scaledImageSize))
let scaledImage = UIGraphicsGetImageFromCurrentImageContext()!
UIGraphicsEndImageContext()
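On iOS 10 and later, UIGraphicsImageRenderer handles the context setup and teardown for you. A minimal sketch of the same nearest-neighbor upscale (the helper name pixelScaled is mine, not from the answer above):

// Sketch: hypothetical helper built on UIGraphicsImageRenderer (iOS 10+).
func pixelScaled(_ image: UIImage, factor: CGFloat) -> UIImage {
    let newSize = image.size.applying(CGAffineTransform(scaleX: factor, y: factor))
    let renderer = UIGraphicsImageRenderer(size: newSize)
    return renderer.image { ctx in
        // Nearest-neighbor sampling keeps the enlarged pixels sharp instead of smoothed.
        ctx.cgContext.interpolationQuality = .none
        image.draw(in: CGRect(origin: .zero, size: newSize))
    }
}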
I was also trying this (on a sublayer) and I couldn't get it working; it was still blurry. This is what I had to do:
const CGFloat PIXEL_SCALE = 2;
layer.magnificationFilter = kCAFilterNearest; //Nearest neighbor texture filtering
layer.transform = CATransform3DMakeScale(PIXEL_SCALE, PIXEL_SCALE, 1); //Scale layer up
//Rasterize w/ sufficient resolution to show sharp pixels
layer.shouldRasterize = YES;
layer.rasterizationScale = PIXEL_SCALE;
For a UIImage created from a CIImage, you can use:
imageView.image = UIImage(CIImage: ciImage.imageByApplyingTransform(CGAffineTransformMakeScale(kScale, kScale)))
I found this extension online. It allows images to adhere to aspect fit/fill even when drawn inside dynamically growing/shrinking image views. Currently, when the image is saved to the camera roll after my draw function, it reverts to "scale to fill" regardless of the image view's content mode. I suspect this is because I'm drawing the image to the size/bounds of the image view, but since the image view is dynamic, I don't see any way around this without using this extension:
// MARK: - Image Scaling.
extension UIImage {
    /// Scales an image to fill a bounds of the given size while keeping its aspect ratio.
    /// (Uses max for aspect fill; switch max to min for aspect fit instead.)
    ///
    /// - parameter newSize: the size of the bounds the image must fill.
    ///
    /// - returns: a new scaled image.
    func scaleImageToSize(newSize: CGSize) -> UIImage {
        var scaledImageRect = CGRect.zero

        let aspectWidth = newSize.width / size.width
        let aspectHeight = newSize.height / size.height
        let aspectRatio = max(aspectWidth, aspectHeight)

        scaledImageRect.size.width = size.width * aspectRatio
        scaledImageRect.size.height = size.height * aspectRatio
        scaledImageRect.origin.x = (newSize.width - scaledImageRect.size.width) / 2.0
        scaledImageRect.origin.y = (newSize.height - scaledImageRect.size.height) / 2.0

        UIGraphicsBeginImageContext(newSize)
        draw(in: scaledImageRect)
        let scaledImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()

        return scaledImage!
    }
}
This is the function I'm currently using to draw the image on screen so I can save it to the camera roll (it combines two images: a frame and an image from the camera roll):
func drawImagesAndText() {
    let renderer = UIGraphicsImageRenderer(size: CGSize(width: imageView.bounds.size.width, height: imageView.bounds.size.height))

    img = renderer.image { ctx in
        // var newSize = currentImage.scaleImageToSize
        let bgImage = currentImage
        bgImage?.draw(in: CGRect(x: 0, y: 0, width: imageView.bounds.size.width, height: imageView.bounds.size.height))

        frames = UIImage(named: framesAr)
        frames?.draw(in: CGRect(x: 0, y: 0, width: imageView.bounds.size.width, height: imageView.bounds.size.height))
    }
}
All the tutorials I've found on how to use extensions don't cover how to pass in and out variables like this one requires. Any insight would be greatly appreciated.
I understand that you don't know how to use the extension, is that correct? Since it just adds a function to every UIImage, you can simply call it on your image like this: currentImage.scaleImageToSize(newSize: someSize) and pass the size you want the image to fit into.
Dorian Roy was telling me to use that call in place of using just "currentImage", and that's what worked!
(I commented on his initial answer saying I was having issues because I was trying to use the return value from the extension itself in place of "currentImage")
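For reference, this is roughly how the call fits into drawImagesAndText above; a sketch that assumes the currentImage, imageView, frames, and framesAr names from the question:

img = renderer.image { ctx in
    // Sketch: scale the photo through the extension first, then draw the result full-size.
    let scaledBg = currentImage?.scaleImageToSize(newSize: imageView.bounds.size)
    scaledBg?.draw(in: CGRect(origin: .zero, size: imageView.bounds.size))

    frames = UIImage(named: framesAr)
    frames?.draw(in: CGRect(origin: .zero, size: imageView.bounds.size))
}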
I'm attempting to round the corners of images that are stored within an array, but I'm not entirely sure if it's possible:
var holeImages = [UIImage(named:"1.png"),UIImage(named:"2.png"),UIImage(named:"3.png")]
self.holeImages1.layer.cornerRadius = 10.0f;
No, it's not possible directly. If you want to apply a method to every object in an array, you have to do it in a loop, since the Array type doesn't have that method itself:
for image in holeImages {
    image.performSomeMethod()
}
You can also write an Array extension to teach arrays of a given element type (UIImage, for example) this method:
extension Array where Element: UIImage {
    func performSomeMethod() {
        for element in self {
            element.performSomeMethod()
        }
    }
}
and then you can do
holeImages.performSomeMethod()
But let's return to your case. The UIImage type doesn't have a property called layer; moreover, rounding the corners of an image looks strange without context. Usually you round corners when you present an image on screen, and you usually use a UIImageView container for that. So you're probably better off rounding the corners of this container instead of the images:
let imageView = UIImageView()
imageView.layer.cornerRadius = 10
imageView.contentMode = .ScaleAspectFill
imageView.clipsToBounds = true
imageView.image = holeImages.first
You probably want to just apply a corner radius to the layer of the UIImageView in which you present the images, rather than rounding the images themselves, e.g.
imageView.layer.cornerRadius = 10
But if you really want to round the images themselves, rather than rounding the UIImageView in which you present them, you could also build a new array of rounded images from your holeImages array:
let roundedHoleImages = holeImages.map { return $0?.rounded(cornerRadius: 10) }
Where you could round the images with something like:
extension UIImage {
    /// Round the corners of an image
    ///
    /// - parameter cornerRadius: The `CGFloat` corner radius to apply to the image.
    ///
    /// - returns: The rounded image.
    func rounded(cornerRadius cornerRadius: CGFloat) -> UIImage? {
        let path = UIBezierPath(roundedRect: CGRect(x: 0, y: 0, width: size.width, height: size.height), cornerRadius: cornerRadius)
        UIGraphicsBeginImageContextWithOptions(size, false, scale)
        let context = UIGraphicsGetCurrentContext()
        CGContextAddPath(context, path.CGPath)
        CGContextClip(context)
        drawAtPoint(CGPointZero)
        let outputImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return outputImage
    }
}
You might use this rounding of the actual images if you were, for example, uploading the images to some web service and you wanted to upload images with rounded corners. But, if not, rounding the image views is not only easier, but avoids problems resulting from images of different scales (especially if those scales are different from the display scale) as well as minimizing the memory impact of creating a separate array of images.
Simple code for resizing an image, e.g. to fit in the navbar, from a PNG with a native resolution of 722x1028:
let imageView = UIImageView(frame: CGRectMake(0, 0, 0, 60))
imageView.image = UIImage(named: "girl")
imageView.contentMode = UIViewContentMode.ScaleAspectFit
self.navigationItem.titleView = imageView
And this is what I get on an iPad 2: a visibly degraded image.
I can manually render the desired image by changing the code:
let imageView = UIImageView(frame: CGRectMake(0, 0, 0, 60))
imageView.image = imageWithImage(UIImage(named: "girl")!, scaledToSize: CGSizeMake(42,60))
self.navigationItem.titleView = imageView
func imageWithImage(image: UIImage, scaledToSize newSize: CGSize) -> UIImage {
    UIGraphicsBeginImageContextWithOptions(newSize, false, 0.0)
    image.drawInRect(CGRectMake(0, 0, newSize.width, newSize.height))
    let newImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage
}
But this is a rather expensive solution.
Is there a native library or a simple Swift solution to automatically resize a PNG image with good quality?
If you have hardcoded size 42x60 points for all devices, the best solution would be to provide pre-rendered image in that size. This is friendly to the battery and allows tweaking the image in a high quality editor before adding to the app: Resizing from 1000 pixels to 60 pixels is a bit drastic and will lead to loss of detail.
If that's not possible (i.e. image is dynamically loaded from the internet etc.), I would start my research with CIImage and CIFilter(name: "CILanczosScaleTransform") which provides very good interpolation quality, possibly followed by CISharpenLuminance if the loss of detail is too high.
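A minimal sketch of what that Core Image pipeline could look like; the helper name lanczosScaled and the parameter values are illustrative, not from the answer:

import CoreImage
import UIKit

// Sketch: hypothetical helper using CILanczosScaleTransform for high-quality scaling.
func lanczosScaled(_ image: UIImage, scale: CGFloat) -> UIImage? {
    guard let input = CIImage(image: image),
          let filter = CIFilter(name: "CILanczosScaleTransform") else { return nil }
    filter.setValue(input, forKey: kCIInputImageKey)
    filter.setValue(scale, forKey: kCIInputScaleKey)   // e.g. 60.0 / 1028.0 for a 60 pt target height
    filter.setValue(1.0, forKey: kCIInputAspectRatioKey)
    guard let output = filter.outputImage else { return nil }
    let context = CIContext()                          // expensive to create; cache it if called repeatedly
    guard let cgImage = context.createCGImage(output, from: output.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}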
In my project, textures are procedurally generated by drawing methods produced with PaintCode.
I then create an SKTextureAtlas from a dictionary filled with the UIImages generated by these methods:
myAtlas = SKTextureAtlas(dictionary: myTextures)
Finally, textures are retrieved from the atlas using textureNamed:
var sprite1 = SKSpriteNode(texture:myAtlas.textureNamed("texture1"))
But the displayed nodes are double-sized on the iPhone 4S simulator, and triple-sized on the iPhone 6 Plus simulator.
It seems that at init time the atlas computes the images at the device resolution.
But the generated images already have the correct size and do not need to be changed. See the drawing method below.
Here is the description of the generated image:
<UIImage: 0x7f86cae56cd0>, {52, 52}
And the description of the corresponding texture in the atlas:
<SKTexture> 'image1' (156 x 156)
This is for the iPhone 6 Plus, which uses @3x images; that's why the size is x3.
And for the iPhone 4S, which uses @2x images, as expected:
<UIImage: 0x7d55dde0>, {52, 52}
<SKTexture> 'image1' (104 x 104)
Finally, the scale property of the generated UIImage is set to the right device resolution: 2.0 for @2x (iPhone 4S) and 3.0 for @3x (iPhone 6 Plus).
The Question
So what can I do to keep the atlas from resizing the pictures?
Drawing method
PaintCode generates drawing methods like the following:
public class func imageOfCell(#frame: CGRect) -> UIImage {
    UIGraphicsBeginImageContextWithOptions(frame.size, false, 0)
    StyleKit.drawCell(frame: frame)
    let imageOfCell = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return imageOfCell
}
Update 1
Comparing two approaches to generating an SKTextureAtlas:
// Some test image
let testImage:UIImage...
// Atlas creation
var myTextures = [String:UIImage]()
myTextures["texture1"] = testImage
myAtlas = SKTextureAtlas(dictionary: myTextures)
// Create two textures from the same image
let texture1 = myAtlas.textureNamed("texture1")
let texture2 = SKTexture(image:testImage)
// Wrong display : node is oversized
var sprite1 = SKSpriteNode(texture:texture1)
// Correct display
var sprite2 = SKSpriteNode(texture:texture2)
It seems that the problem lies in SKTextureAtlas(dictionary:), as the SKSpriteNode initialization does not use the scale property from the UIImage to correctly size the node.
Here are the descriptions from the console:
- texture1: '' (84 x 84)
- texture2: 'texture1' (84 x 84)
texture1 is missing some data! That could explain the lack of scale information needed to properly size the node, since:
node size = texture size divided by texture scale (e.g. 84 px / 2.0 = 42 points).
Update 2
The problem occurs when the scale property of the UIImage is different from one.
You can reproduce it with images generated by the following method:
func imageOfCell(frame: CGRect, color: SKColor) -> UIImage {
    UIGraphicsBeginImageContextWithOptions(frame.size, false, 0)
    let bezierPath = UIBezierPath(rect: frame)
    color.setFill()
    bezierPath.fill()
    let imageOfCell = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return imageOfCell
}
The problem comes from using SKTextureAtlas(dictionary:) to initialize the atlas.
An SKTexture created this way does not embed any data about the image's scale property. So when an SKSpriteNode is created with init(texture:), the missing scale information makes it use the texture's pixel size in place of the image's point size.
One way to correct this is to provide the node's size explicitly during SKSpriteNode creation with init(texture:size:):
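A quick sketch of that workaround, reusing the names from Update 1 (testImage.size is in points, so the node comes out at the intended size regardless of the texture's pixel dimensions):

let texture1 = myAtlas.textureNamed("texture1")
// Passing the point size explicitly overrides the texture's pixel size.
let sprite1 = SKSpriteNode(texture: texture1, size: testImage.size)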
From the documentation for the scale parameter of UIGraphicsBeginImageContextWithOptions:
"The scale factor to apply to the bitmap. If you specify a value of 0.0, the scale factor is set to the scale factor of the device's main screen."
Therefore, if you want the textures to be the same "size" across all devices, set this value to 1.0.
EDIT:
override func didMoveToView(view: SKView) {
    let image = imageOfCell(CGRectMake(0, 0, 10, 10), scale: 0)
    let dict: [String: UIImage] = ["t1": image]
    let texture = SKTextureAtlas(dictionary: dict)
    let sprite1 = SKSpriteNode(texture: texture.textureNamed("t1"))
    sprite1.position = CGPointMake(CGRectGetMidX(view.frame), CGRectGetMidY(view.frame))
    addChild(sprite1)
    println(sprite1.size)
    // prints (30.0, 30.0) if scale = 0 (on an @3x device)
    // prints (10.0, 10.0) if scale = 1
}

func imageOfCell(frame: CGRect, scale: CGFloat) -> UIImage {
    UIGraphicsBeginImageContextWithOptions(frame.size, false, scale)
    let bezierPath = UIBezierPath(rect: frame)
    UIColor.whiteColor().setFill()
    bezierPath.fill()
    let imageOfCell = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return imageOfCell
}
I am trying to crop an image in Swift. I'm implementing a flow where the user captures a photo and is then allowed to set the crop area, and I'm able to get the image from that crop area. But I want the cropped image to be resized to a particular width and height; that is, if its height or width is smaller, it should be scaled up.
The image should fill its frame at maximum width and height. Currently the code just adds transparency to the remaining area.
I have also added my code for cropping:
let tempLayer = CAShapeLayer()
tempLayer.frame = self.view.frame

let path = UIBezierPath()
var endPoint: CGPoint!

for (var i = 0; i < 4; i++) {
    let tag = 101 + i
    let pointView = viewCrop.viewWithTag(tag)
    switch (pointView!.tag) {
    case 101:
        endPoint = CGPointMake(pointView!.center.x - 20, pointView!.center.y - 20)
        path.moveToPoint(endPoint)
    default:
        path.addLineToPoint(CGPointMake(pointView!.center.x - 20, pointView!.center.y - 20))
    }
}
path.addLineToPoint(endPoint)
path.closePath()

tempLayer.path = path.CGPath
tempLayer.fillColor = UIColor.whiteColor().CGColor
tempLayer.backgroundColor = UIColor.clearColor().CGColor
imgReceiptView.layer.mask = tempLayer

UIGraphicsBeginImageContextWithOptions(viewCrop.bounds.size, imgReceiptView.opaque, 0.0)
imgReceiptView.layer.renderInContext(UIGraphicsGetCurrentContext()!)
let cropImg = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()

UIImageWriteToSavedPhotosAlbum(cropImg!, nil, nil, nil)
imgReceiptView.hidden = true

let tempImageView = UIImageView(frame: CGRectMake(20, self.view.center.y - 80, self.view.frame.width - 40, 160))
tempImageView.backgroundColor = UIColor.grayColor()
tempImageView.image = cropImg
tempImageView.tag = 1001
tempImageView.layer.masksToBounds = true
self.view.addSubview(tempImageView)
Any help will be appreciated. Thanks in advance.
Use this library to let the user crop the image however they like:
https://github.com/kishikawakatsumi/PEPhotoCropEditor
Hope this will help you!
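If you would rather not add a dependency, here is a rough sketch of forcing the cropped image to fill a target frame with aspect fill; resizeToFill is a hypothetical helper, not part of your code or the library:

// Sketch: hypothetical helper that scales an image to fill targetSize, centering and cropping overflow.
func resizeToFill(_ image: UIImage, targetSize: CGSize) -> UIImage? {
    // Aspect-fill scale: the image covers the whole target, with no transparent bands.
    let scale = max(targetSize.width / image.size.width,
                    targetSize.height / image.size.height)
    let scaledSize = CGSize(width: image.size.width * scale,
                            height: image.size.height * scale)
    let origin = CGPoint(x: (targetSize.width - scaledSize.width) / 2,
                         y: (targetSize.height - scaledSize.height) / 2)

    UIGraphicsBeginImageContextWithOptions(targetSize, false, image.scale)
    image.draw(in: CGRect(origin: origin, size: scaledSize))
    let result = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return result
}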