I'm attempting to round the corners of images that are stored within an array, but I'm not entirely sure if it's possible.
var holeImages = [UIImage(named:"1.png"),UIImage(named:"2.png"),UIImage(named:"3.png")]
self.holeImages1.layer.cornerRadius = 10.0f;
No, it's not possible directly. If you want to apply some method to every object in an array, you have to do it in a loop, since the Array type itself most likely doesn't have that method:
for image in holeImages {
    image.performSomeMethod()
}
You can also write an Array extension to teach an array of a given element type (UIImage, for example) this method:
extension Array where Element: UIImage {
    func performSomeMethod() {
        for element in self {
            element.performSomeMethod()
        }
    }
}
and then you can do
holeImages.performSomeMethod()
But let's return to your case. The UIImage type doesn't have a property called layer; moreover, the idea of rounding an image's corners looks strange without context. Usually you need rounded corners when you present an image on screen, and you usually use a UIImageView container for that. So you're probably better off rounding the corners of that container instead of the images:
let imageView = UIImageView()
imageView.layer.cornerRadius = 10
imageView.contentMode = .scaleAspectFill
imageView.clipsToBounds = true
imageView.image = holeImages.first
You probably want to just apply a corner radius to the layer of the UIImageView in which you present the images, rather than rounding the images themselves, e.g.
imageView.layer.cornerRadius = 10
But if you really want to round the images themselves, rather than rounding the UIImageView in which you present them, you could also build a new array of rounded images from your holeImages array:
let roundedHoleImages = holeImages.map { return $0?.rounded(cornerRadius: 10) }
Where you could round the images with something like:
extension UIImage {
    /// Round the corners of an image.
    ///
    /// - parameter cornerRadius: The `CGFloat` corner radius to apply to the image.
    ///
    /// - returns: The rounded image.
    func rounded(cornerRadius: CGFloat) -> UIImage? {
        let path = UIBezierPath(roundedRect: CGRect(x: 0, y: 0, width: size.width, height: size.height), cornerRadius: cornerRadius)
        UIGraphicsBeginImageContextWithOptions(size, false, scale)
        let context = UIGraphicsGetCurrentContext()!
        context.addPath(path.cgPath)
        context.clip()
        draw(at: .zero)
        let outputImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return outputImage
    }
}
You might round the actual images if, for example, you were uploading them to some web service and wanted the uploaded images to have rounded corners. Otherwise, rounding the image views is not only easier, it also avoids problems resulting from images of different scales (especially if those scales differ from the display scale) and minimizes the memory impact of creating a separate array of images.
I found this extension online; it allows images to adhere to aspect fit/fill even when drawn inside dynamically growing/shrinking image views. Currently, when the image is saved to the camera roll after my draw function, it reverts to "scale to fill" regardless of the image view's content mode. I suspect this is because I'm drawing the image to the size/bounds of the image view, but since the image view is dynamic, I don't see any way around this without using the extension:
// MARK: - Image Scaling.
extension UIImage {
    /// Scales an image to fit within a bounds with a size governed by the passed size. Also keeps the aspect ratio.
    /// Switch MIN to MAX for aspect fill instead of fit.
    ///
    /// - parameter newSize: the size of the bounds the image must fit within.
    ///
    /// - returns: a new scaled image.
    func scaleImageToSize(newSize: CGSize) -> UIImage {
        var scaledImageRect = CGRect.zero
        let aspectWidth = newSize.width / size.width
        let aspectHeight = newSize.height / size.height
        let aspectRatio = max(aspectWidth, aspectHeight)
        scaledImageRect.size.width = size.width * aspectRatio
        scaledImageRect.size.height = size.height * aspectRatio
        scaledImageRect.origin.x = (newSize.width - scaledImageRect.size.width) / 2.0
        scaledImageRect.origin.y = (newSize.height - scaledImageRect.size.height) / 2.0
        UIGraphicsBeginImageContext(newSize)
        draw(in: scaledImageRect)
        let scaledImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return scaledImage!
    }
}
This is the function I'm currently using to draw the image on screen so I can save it to the camera roll (it combines two images, a frame and an image from the camera roll):
func drawImagesAndText() {
    let renderer = UIGraphicsImageRenderer(size: CGSize(width: imageView.bounds.size.width, height: imageView.bounds.size.height))
    img = renderer.image { ctx in
        // var newSize = currentImage.scaleImageToSize
        let bgImage = currentImage
        bgImage?.draw(in: CGRect(x: 0, y: 0, width: imageView.bounds.size.width, height: imageView.bounds.size.height))
        frames = UIImage(named: framesAr)
        frames?.draw(in: CGRect(x: 0, y: 0, width: imageView.bounds.size.width, height: imageView.bounds.size.height))
    }
}
None of the tutorials I've found on extensions cover how to pass variables in and out the way this one requires. Any insight would be greatly appreciated.
I understand that you don't know how to use the extension, is that correct? Since it just adds a function to every UIImage, you can simply call it on your image like this: currentImage.scaleImageToSize(newSize: someSize) and pass the size you want the image to fit into.
Dorian Roy was telling me to use that call in place of using just "currentImage", and that's what worked!
(I commented on his initial answer saying I was having issues because I was trying to use the return value from the extension itself in place of "currentImage")
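For later readers, a minimal sketch of how that call can slot into the drawImagesAndText function above (the target size here is an assumption; use whatever bounds fit your layout):
img = renderer.image { ctx in
    let targetSize = imageView.bounds.size

    // Scale the photo with the extension first, then draw the scaled result
    // instead of drawing currentImage directly.
    let scaledBackground = currentImage?.scaleImageToSize(newSize: targetSize)
    scaledBackground?.draw(in: CGRect(origin: .zero, size: targetSize))

    frames = UIImage(named: framesAr)
    frames?.draw(in: CGRect(origin: .zero, size: targetSize))
}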
I'm trying to set up having the user tap a location in an image view, so that the X,Y of the tap becomes the center point (kCIInputCenterKey) of the image filter currently in use.
These are my global variables:
var x: CGFloat = 0
var y: CGFloat = 0
var imgChecker = 0
This is my touchesBegan function, which checks whether the user touched inside the image view; if not, it sets the filter center key to the center of the image view:
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    if let touch = touches.first {
        let position = touch.location(in: self.imageView)
        if touch.view == imageView {
            print("touchesBegan | This is an ImageView")
            x = position.x * 4
            y = position.y * 4
            imgChecker = 1
        } else {
            print("touchesBegan | This is not an ImageView")
            x = 0
            y = 0
            imgChecker = 0
        }
        print("x: \(x)")
        print("y: \(y)")
    }
}
As you can see, I have the checker there to make the filter center appear in the middle of the image if the tap wasn't inside the image view. I'm also printing the tapped coordinates to Xcode's console, and they appear without issue.
This is the part where I apply my filter:
currentFilter = CIFilter(name: "CIBumpDistortion")
currentFilter.setValue(200, forKey: kCIInputRadiusKey)
currentFilter.setValue(1, forKey: kCIInputScaleKey)
if imgChecker == 1 {
    self.currentFilter.setValue(CIVector(x: self.x, y: self.y), forKey: kCIInputCenterKey)
} else {
    self.currentFilter.setValue(CIVector(x: currentImage.size.width / 2, y: currentImage.size.height / 2), forKey: kCIInputCenterKey)
}
x = 0
y = 0
let beginImage = CIImage(image: currentImage)
currentFilter.setValue(beginImage, forKey: kCIInputImageKey)
let cgimg = context.createCGImage(currentFilter.outputImage!, from: currentFilter.outputImage!.extent)
currentImage = UIImage(cgImage: cgimg!)
self.imageView.image = currentImage
This is the CGRect I'm using; ignore the "frame" in there, it's just an image view in front of the first one that allows me to save a "frame" over the current filtered image:
func drawImagesAndText() {
    let renderer = UIGraphicsImageRenderer(size: CGSize(width: imageView.bounds.size.width, height: imageView.bounds.size.height))
    img = renderer.image { ctx in
        let bgImage = currentImage
        bgImage?.draw(in: CGRect(x: 0, y: 0, width: imageView.bounds.size.width, height: imageView.bounds.size.height))
        frames = UIImage(named: framesAr)
        frames?.draw(in: CGRect(x: 0, y: 0, width: imageView.bounds.size.width, height: imageView.bounds.size.height))
    }
}
When I set x,y by tapping inside the image view, the center of the filter keeps appearing in the lower left-hand side of the image view regardless of where I tapped. If I keep tapping around the image view, the center does seem to move around a bit, but it's nowhere near where I'm actually tapping.
Any insight would be greatly appreciated, thank you.
Keep two things in mind.
First (and I think you probably know this), the CI origin (0,0) is lower left, not top left.
Second (and I think this is the issue), UIKit coordinates (meaning UIImage and CGPoint coordinates) are not the same as CIVector coordinates. You need to take the UIKit touchesBegan coordinate and convert it into the CIImage.extent coordinate space.
EDIT:
All coordinates that follow are X then Y, and Width then Height.
After posting my comment I thought I'd give an example of what I mean by scaling. Let's say you have a UIImageView sized at 250x250, using a content mode of AspectFit, displaying an image whose size is 1000x500.
Now, let's say the touchesBegan point is CGPoint(200,100). (NOTE: If your UIImageView is part of a larger superview, it could be something more like 250,400 - I'm working with the point within the UIImageView.)
Scaling down the image (remember, AspectFit) means the image is actually centered vertically (appearing in landscape) within the UIImageView at CGRect(0, 62.5, 250, 125). So first off, good! The touch point not only began within the image view, it also began within the image. (You'll probably want to consider the not-so-edge case of touches beginning outside of the image.)
Dividing by 4 gives you the scaled down image view coordinates, and as you'd expect, multiplying up will give you the needed vector coordinates. So a touchesBegan CGPoint(200,100) turns into a CIVector(800,400).
I have some code written - not much in the way of comments, done in Swift 2 (I think) and very poorly written - that is part of a subclass (it probably should have been an extension) of UIImageView that computes all this. Using the UIImageView's bounds and its image's size is what you need. Keep in mind - images in AspectFit can also be scaled up!
One last note on CIImage - extent. Many times it's the UIImage's size. But many masks and generated outputs may have an infinite extent.
SECOND EDIT:
I made a stupid mistake in my scaling example. Remember, the CIImage origin is bottom left, not upper left. So in my example a CGPoint(200,100), scaled to CGPoint(800,400), would be CIVector(800,100).
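Putting the two edits together, here is a hedged sketch of the conversion (the helper name is mine, it assumes an aspect-fit image view, and unlike the simplified numbers above it also subtracts the letterbox offset mentioned earlier):
import UIKit
import CoreImage

func ciCenter(for touchPoint: CGPoint, in imageView: UIImageView, imageSize: CGSize) -> CIVector {
    let viewSize = imageView.bounds.size

    // Aspect-fit: the image is scaled by the smaller ratio and centered,
    // so there may be a letterbox offset on one axis (62.5 in the example above).
    let scale = min(viewSize.width / imageSize.width, viewSize.height / imageSize.height)
    let displayedSize = CGSize(width: imageSize.width * scale, height: imageSize.height * scale)
    let origin = CGPoint(x: (viewSize.width - displayedSize.width) / 2,
                         y: (viewSize.height - displayedSize.height) / 2)

    // Map the touch into image coordinates (divide out the display scale)...
    let xInImage = (touchPoint.x - origin.x) / scale
    let yInImage = (touchPoint.y - origin.y) / scale

    // ...then flip Y, because Core Image's origin is bottom-left.
    return CIVector(x: xInImage, y: imageSize.height - yInImage)
}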
THIRD EDIT:
Apologies for the multiple/running edits, but it seems important. (Besides, only the last one was due to my stupidity! Worth noting, but still.)
Now we're talking "near real time" updating using a Core Image filter. I'm planning to eventually have some blog posts on this, but the real source you want is Simon Gladman (he's moved on, look back to his posts in 2015-16), and his eBook Core Image for Swift (uses Swift 2 but most is automatically upgraded to Swift 3). Just giving credit where it is due.
If you want "near real time" usage of Core Image, you need to use the GPU. UIView, and all it's subclasses (meaning UIKit) uses the CPU. That's okay, using the GPU means using a Core Graphics, and specifically using a GLKView. It's the CG equivalent of a UIImage.
Here's my subclass of it:
open class GLKViewDFD: GLKView {

    var renderContext: CIContext
    var myClearColor: UIColor!
    var rgb: (Int?, Int?, Int?)!

    open var image: CIImage! {
        didSet {
            setNeedsDisplay()
        }
    }

    public var clearColor: UIColor! {
        didSet {
            myClearColor = clearColor
        }
    }

    public init() {
        let eaglContext = EAGLContext(api: .openGLES2)
        renderContext = CIContext(eaglContext: eaglContext!)
        super.init(frame: CGRect.zero)
        context = eaglContext!
    }

    override public init(frame: CGRect, context: EAGLContext) {
        renderContext = CIContext(eaglContext: context)
        super.init(frame: frame, context: context)
        enableSetNeedsDisplay = true
    }

    public required init?(coder aDecoder: NSCoder) {
        let eaglContext = EAGLContext(api: .openGLES2)
        renderContext = CIContext(eaglContext: eaglContext!)
        super.init(coder: aDecoder)
        context = eaglContext!
        enableSetNeedsDisplay = true
    }

    override open func draw(_ rect: CGRect) {
        if let image = image {
            let imageSize = image.extent.size
            var drawFrame = CGRect(x: 0, y: 0, width: CGFloat(drawableWidth), height: CGFloat(drawableHeight))
            let imageAR = imageSize.width / imageSize.height
            let viewAR = drawFrame.width / drawFrame.height
            // Aspect-fit the image within the drawable area.
            if imageAR > viewAR {
                drawFrame.origin.y += (drawFrame.height - drawFrame.width / imageAR) / 2.0
                drawFrame.size.height = drawFrame.width / imageAR
            } else {
                drawFrame.origin.x += (drawFrame.width - drawFrame.height * imageAR) / 2.0
                drawFrame.size.width = drawFrame.height * imageAR
            }
            // rgb() is a custom UIColor extension (not shown) returning 0-255 components.
            rgb = (0, 0, 0)
            rgb = myClearColor.rgb()
            glClearColor(Float(rgb.0!) / 256.0, Float(rgb.1!) / 256.0, Float(rgb.2!) / 256.0, 0.0)
            glClear(0x00004000) // GL_COLOR_BUFFER_BIT
            // set the blend mode to "source over" so that CI will use that
            glEnable(0x0BE2)       // GL_BLEND
            glBlendFunc(1, 0x0303) // GL_ONE, GL_ONE_MINUS_SRC_ALPHA
            renderContext.draw(image, in: drawFrame, from: image.extent)
        }
    }
}
A few notes.
I absolutely need to credit Objc.io for much of this. This is also a great resource for Swift and UIKit coding.
I wanted AspectFit content mode with the ability to change the "backgroundColor" of the GLKView, which is why I subclassed it and added the clearColor property.
Between the two resources I linked to, you should have what you need for good-performing, near real time use of Core Image using the GPU. One reason the scaling code I mentioned earlier (applied after getting a filter's output) was never updated? It didn't need to be.
Lots here to process, I know. But I've found this side of things (Core Image effects) to be the most fun (and pretty cool) side of iOS.
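To tie it together, a hedged usage sketch (GLKViewDFD, image, and clearColor come from the class above; the filter names are reused from the question and the wiring is only illustrative):
// One-time setup: add the GPU-backed view to the hierarchy.
let glkView = GLKViewDFD(frame: view.bounds, context: EAGLContext(api: .openGLES2)!)
glkView.clearColor = .black
view.addSubview(glkView)

// On each touch or slider change, update the filter and hand its output
// to the view; setting `image` triggers setNeedsDisplay() via didSet.
currentFilter.setValue(CIVector(x: x, y: y), forKey: kCIInputCenterKey)
glkView.image = currentFilter.outputImage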
I assign an image as the background so the user can draw on it, but the image never ends up the right size; it only takes up about 1/3 of the image view.
The only way I can see to bypass this is to make the canvas exactly the size of the assigned image, like in the PicsArt app.
Does anyone know how to do this?
mainimageview.backgroundColor = UIColor(patternImage: UIImage(named: "draw")!)
You can write your own method to resize the image at the time you assign the background color. Pass in your image and the size you want:
func imageWithImage(image: UIImage?, newSize: CGFloat) -> UIImage? {
    let size = CGSize(width: newSize, height: newSize)
    UIGraphicsBeginImageContextWithOptions(size, false, 0.0)
    image?.draw(in: CGRect(x: 0, y: 0, width: size.width, height: size.height))
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage
}
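For example (a sketch; "draw" and mainimageview come from the question, and sizing to the view's width is just an assumption):
// Resize the pattern image before using it as a background color,
// so it isn't tiled at its original size.
if let resized = imageWithImage(image: UIImage(named: "draw"), newSize: mainimageview.bounds.width) {
    mainimageview.backgroundColor = UIColor(patternImage: resized)
}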
In my project, textures are procedurally generated by methods provided by PaintCode.
I then create an SKTextureAtlas from a dictionary filled with the UIImages generated by these methods:
myAtlas = SKTextureAtlas(dictionary: myTextures)
Finally, textures are retrieved from the atlas using textureNamed:
var sprite1 = SKSpriteNode(texture:myAtlas.textureNamed("texture1"))
But the displayed nodes are double-sized on the iPhone 4S simulator, and triple-sized on the iPhone 6 Plus simulator.
It seems that at init time, the atlas scales the images to the device resolution.
But the generated images already have the correct size and do not need to be changed. See the drawing method below.
Here is the description of the generated image:
<UIImage: 0x7f86cae56cd0>, {52, 52}
And the description of the corresponding texture in atlas:
<SKTexture> 'image1' (156 x 156)
This is for the iPhone 6 Plus, using @3x images; that's why the size is ×3.
And for the iPhone 4S, using @2x images, as expected:
<UIImage: 0x7d55dde0>, {52, 52}
<SKTexture> 'image1' (156 x 156)
Finally, the scale property of the generated UIImage is set to the right device resolution: 2.0 for @2x (iPhone 4S) and 3.0 for @3x (iPhone 6 Plus).
The Question
So what can I do to keep the atlas from resizing the pictures?
Drawing method
PaintCode generates drawing methods like the following:
public class func imageOfCell(frame: CGRect) -> UIImage {
    UIGraphicsBeginImageContextWithOptions(frame.size, false, 0)
    StyleKit.drawCell(frame: frame)
    let imageOfCell = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return imageOfCell
}
Update 1
Comparing two approaches to creating the textures:
// Some test image
let testImage:UIImage...
// Atlas creation
var myTextures = [String:UIImage]()
myTextures["texture1"] = testImage
myAtlas = SKTextureAtlas(dictionary: myTextures)
// Create two textures from the same image
let texture1 = myAtlas.textureNamed("texture1")
let texture2 = SKTexture(image:testImage)
// Wrong display : node is oversized
var sprite1 = SKSpriteNode(texture:texture1)
// Correct display
var sprite2 = SKSpriteNode(texture:texture2)
It seems that the problem lies with SKTextureAtlas(dictionary:), as the SKSpriteNode initialization does not use the UIImage's scale property to correctly size the node.
Here are the descriptions from the console:
- texture1: '' (84 x 84)
- texture2: 'texture1' (84 x 84)
texture2 is missing some data! That could explain the lack of scale information needed to properly size the node, since:
node's size = texture's size divided by texture's scale.
For example, the 156 × 156 texture above at scale 3.0 should yield a 52 × 52 point node; without the scale information, the node ends up 156 points wide.
Update 2
The problem occurs when the scale property of the UIImage is different from 1.
So you can use the following method to generate such a picture:
func imageOfCell(frame: CGRect, color: SKColor) -> UIImage {
    UIGraphicsBeginImageContextWithOptions(frame.size, false, 0)
    let bezierPath = UIBezierPath(rect: frame)
    color.setFill()
    bezierPath.fill()
    let imageOfCell = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return imageOfCell
}
The problem comes from using SKTextureAtlas(dictionary:) to initialize the atlas.
An SKTexture created this way does not embed the image's scale property. So when an SKSpriteNode is created with init(texture:), the missing scale information causes the texture's pixel size to be used in place of the image's point size.
One way to correct this is to provide the node's size explicitly during SKSpriteNode creation, using init(texture:size:), as sketched below.
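A minimal sketch of that workaround, reusing the names from the comparison snippet above:
// Passing the original image's point size explicitly means the missing
// scale information in the atlas texture no longer matters.
let sprite1 = SKSpriteNode(texture: myAtlas.textureNamed("texture1"), size: testImage.size)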
From the documentation for the scale parameter for UIGraphicsBeginImageContextWithOptions,
The scale factor to apply to the bitmap. If you specify a value of 0.0, the scale factor is set to the scale factor of the device’s main screen.
Therefore, if you want the textures to be the same "size" across all devices, set this value to 1.0.
EDIT:
override func didMove(to view: SKView) {
    let image = imageOfCell(frame: CGRect(x: 0, y: 0, width: 10, height: 10), scale: 0)
    let dict: [String: UIImage] = ["t1": image]
    let texture = SKTextureAtlas(dictionary: dict)
    let sprite1 = SKSpriteNode(texture: texture.textureNamed("t1"))
    sprite1.position = CGPoint(x: view.frame.midX, y: view.frame.midY)
    addChild(sprite1)
    print(sprite1.size)
    // prints (30.0, 30.0) if scale = 0
    // prints (10.0, 10.0) if scale = 1
}

func imageOfCell(frame: CGRect, scale: CGFloat) -> UIImage {
    UIGraphicsBeginImageContextWithOptions(frame.size, false, scale)
    let bezierPath = UIBezierPath(rect: frame)
    UIColor.white.setFill()
    bezierPath.fill()
    let imageOfCell = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return imageOfCell
}
I want to scale up a UIImage in such a way that the user can see the individual pixels very sharply. When I put it into a UIImageView and scale up the transform matrix, the UIImage appears antialiased and smoothed.
Is there a way to render in a bigger bitmap context by simply repeating every row and every column to get bigger pixels? How could I do that?
#import <QuartzCore/CALayer.h>
view.layer.magnificationFilter = kCAFilterNearest
When drawing directly into a bitmap context, we can use:
CGContextSetInterpolationQuality(myBitmapContext, kCGInterpolationNone);
I found this in "CGContextDrawImage very slow on iPhone 4".
Swift 5
let image = UIImage(named: "Foo")!
let scaledImageSize = image.size.applying(CGAffineTransform(scaleX: 2, y: 2))
UIGraphicsBeginImageContext(scaledImageSize)
let scaledContext = UIGraphicsGetCurrentContext()!
scaledContext.interpolationQuality = .none
image.draw(in: CGRect(origin: .zero, size: scaledImageSize))
let scaledImage = UIGraphicsGetImageFromCurrentImageContext()!
UIGraphicsEndImageContext()
I was also trying this (on a sublayer) and I couldn't get it working; it was still blurry. This is what I had to do:
const CGFloat PIXEL_SCALE = 2;
layer.magnificationFilter = kCAFilterNearest; //Nearest neighbor texture filtering
layer.transform = CATransform3DMakeScale(PIXEL_SCALE, PIXEL_SCALE, 1); //Scale layer up
//Rasterize w/ sufficient resolution to show sharp pixels
layer.shouldRasterize = YES;
layer.rasterizationScale = PIXEL_SCALE;
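For completeness, the same idea in Swift (a sketch; `layer` stands for whatever layer you are scaling up):
let pixelScale: CGFloat = 2
layer.magnificationFilter = .nearest                                // nearest-neighbor texture filtering
layer.transform = CATransform3DMakeScale(pixelScale, pixelScale, 1) // scale the layer up
layer.shouldRasterize = true                                        // rasterize with sufficient resolution
layer.rasterizationScale = pixelScale                               // ...to show sharp pixels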
For a UIImage created from a CIImage, you may use:
imageView.image = UIImage(ciImage: ciImage.transformed(by: CGAffineTransform(scaleX: kScale, y: kScale)))