I'm having some trouble using x and y coordinates from touchesBegan as the center key in a CI filter - swift

I'm trying to set things up so that the user taps a location in an image view and the X,Y of the tap becomes the center point (kCIInputCenterKey) of the current image filter in use.
These are my global variables:
var x: CGFloat = 0
var y: CGFloat = 0
var imgChecker = 0
This is my touchesBegan function, which checks whether the user is touching inside the image view; if not, the filter center key is later set to the center of the image view:
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
if let touch = touches.first {
let position = touch.location(in: self.imageView)
if (touch.view == imageView){
print("touchesBegan | This is an ImageView")
x = position.x * 4
y = position.y * 4
imgChecker = 1
}else{
print("touchesBegan | This is not an ImageView")
x = 0
y = 0
imgChecker = 0
}
print("x: \(x)")
print("y: \(y)")
}
}
As you can see, I have the checker there to make the filter center appear in the middle of the image if the tap was not inside the image view. I'm also printing the tapped coordinates to Xcode's console and they appear without issue.
This is the part where I apply my filter:
currentFilter = CIFilter(name: "CIBumpDistortion")
currentFilter.setValue(200, forKey: kCIInputRadiusKey)
currentFilter.setValue(1, forKey: kCIInputScaleKey)
if imgChecker == 1 {
self.currentFilter.setValue(CIVector(x: self.x, y: self.y), forKey: kCIInputCenterKey)
}else{
self.currentFilter.setValue(CIVector(x: currentImage.size.width / 2, y: currentImage.size.height / 2), forKey: kCIInputCenterKey)
}
x = 0
y = 0
let beginImage = CIImage(image: currentImage)
currentFilter.setValue(beginImage, forKey: kCIInputImageKey)
let cgimg = context.createCGImage(currentFilter.outputImage!, from: currentFilter.outputImage!.extent)
currentImage = UIImage(cgImage: cgimg!)
self.imageView.image = currentImage
This is the rendering code I'm using; ignore the "frame" in there, it's just an image view in front of the first one that allows me to save a "frame" over the current filtered image:
func drawImagesAndText() {
let renderer = UIGraphicsImageRenderer(size: CGSize(width: imageView.bounds.size.width, height: imageView.bounds.size.height))
img = renderer.image { ctx in
let bgImage = currentImage
bgImage?.draw(in: CGRect(x: 0, y: 0, width: imageView.bounds.size.width, height: imageView.bounds.size.height))
frames = UIImage(named: framesAr)
frames?.draw(in: CGRect(x: 0, y: 0, width: imageView.bounds.size.width, height: imageView.bounds.size.height))
}
}
When I do set the x,y by tapping inside the image view, the center of the filter keeps appearing in the lower left-hand side of the image view regardless of where I tapped. If I keep tapping around the image view, the center does seem to move around a bit, but it's nowhere near where I'm actually tapping.
Any insight would be greatly appreciated, thank you.

Keep two things in mind.
First (and I think you probably know this), the CI origin (0,0) is lower left, not top left.
Second (and I think this is the issue) UIKit (meaning UIImage and potentially CGPoint coordinates) are not the same as CIVector coordinates. You need to take the UIKit touchesBegan coordinate and turn it into the CIImage.extent coordinate.
EDIT:
All coordinates that follow are X then Y, and Width then Height.
After posting my comment I thought I'd give an example of what I mean by scaling. Let's say you have a UIImageView sized at 250x250, using a content mode of AspectFit, displaying an image whose size is 1000x500.
Now, let's say the touchesBegan is CGPoint(200,100). (NOTE: If your UIImageView is part of a larger superview, it could be something more like 250,400 - I'm working on the point within the UIImageView.)
Scaling down the image size (remember, AspectFit) means the image is actually centered vertically (landscape appearing) within the UIImageView at CGRect(0, 62.5, 250, 125). So first off, good! The touch point not only began within the image view, it also began within the image. (You'll probably want to consider the not-so-edge case of touches beginning outside of the image.)
Dividing by 4 gives you the scaled down image view coordinates, and as you'd expect, multiplying up will give you the needed vector coordinates. So a touchesBegan CGPoint(200,100) turns into a CIVector(800,400).
I have some code written - not much in the way of comments, done in Swift 2 (I think) and very poorly written - that is part of a subclass (probably should have been an extension) of UIImageView that computes all this. Using the UIImageView's bounds and its image's size is what you need. Keep in mind - images in AspectFit can also be scaled up!
One last note on CIImage - extent. Many times it's a UIImage's size. But many masks and generated outputs may have an infinite extent.
SECOND EDIT:
I made a stupid mistake in my scaling example. Remember, the CIImage origin is bottom left, not upper left. So in my example a CGPoint(200,100), scaled to CGPoint(800,400), would be CIVector(800,100).
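Putting the scaling and the Y flip together, here's a minimal sketch of the conversion (assuming .scaleAspectFit and illustrative names - this is not the question's actual code; note it works in the image's point size, so multiply by image.scale if your CIImage extent is in pixels):
func ciCenter(for touchPoint: CGPoint, in imageView: UIImageView) -> CIVector? {
    guard let image = imageView.image else { return nil }
    let viewSize = imageView.bounds.size
    let imageSize = image.size
    // Aspect-fit scale and the rect the image actually occupies in the view
    let scale = min(viewSize.width / imageSize.width, viewSize.height / imageSize.height)
    let displayed = CGSize(width: imageSize.width * scale, height: imageSize.height * scale)
    let origin = CGPoint(x: (viewSize.width - displayed.width) / 2,
                         y: (viewSize.height - displayed.height) / 2)
    // Scale the touch point up into image coordinates...
    let x = (touchPoint.x - origin.x) / scale
    let y = (touchPoint.y - origin.y) / scale
    // ...then flip Y, because Core Image's origin is bottom left
    return CIVector(x: x, y: imageSize.height - y)
}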
THIRD EDIT:
Apologies for the multiple/running edits, but it seems important. (Besides, only the last was due to my stupidity! Worthwhile to note, but still.)
Now we're talking "near real time" updating using a Core Image filter. I'm planning to eventually have some blog posts on this, but the real source you want is Simon Gladman (he's moved on, look back to his posts in 2015-16), and his eBook Core Image for Swift (uses Swift 2 but most is automatically upgraded to Swift 3). Just giving credit where it is due.
If you want "near real time" usage of Core Image, you need to use the GPU. UIView, and all it's subclasses (meaning UIKit) uses the CPU. That's okay, using the GPU means using a Core Graphics, and specifically using a GLKView. It's the CG equivalent of a UIImage.
Here's my subclass of it:
open class GLKViewDFD: GLKView {
var renderContext: CIContext
var myClearColor:UIColor!
var rgb:(Int?,Int?,Int?)!
open var image: CIImage! {
didSet {
setNeedsDisplay()
}
}
public var clearColor: UIColor! {
didSet {
myClearColor = clearColor
}
}
public init() {
let eaglContext = EAGLContext(api: .openGLES2)
renderContext = CIContext(eaglContext: eaglContext!)
super.init(frame: CGRect.zero)
context = eaglContext!
}
override public init(frame: CGRect, context: EAGLContext) {
renderContext = CIContext(eaglContext: context)
super.init(frame: frame, context: context)
enableSetNeedsDisplay = true
}
public required init?(coder aDecoder: NSCoder) {
let eaglContext = EAGLContext(api: .openGLES2)
renderContext = CIContext(eaglContext: eaglContext!)
super.init(coder: aDecoder)
context = eaglContext!
enableSetNeedsDisplay = true
}
override open func draw(_ rect: CGRect) {
if let image = image {
let imageSize = image.extent.size
var drawFrame = CGRect(x: 0, y: 0, width: CGFloat(drawableWidth), height: CGFloat(drawableHeight))
let imageAR = imageSize.width / imageSize.height
let viewAR = drawFrame.width / drawFrame.height
if imageAR > viewAR {
drawFrame.origin.y += (drawFrame.height - drawFrame.width / imageAR) / 2.0
drawFrame.size.height = drawFrame.width / imageAR
} else {
drawFrame.origin.x += (drawFrame.width - drawFrame.height * imageAR) / 2.0
drawFrame.size.width = drawFrame.height * imageAR
}
rgb = (0,0,0)
rgb = myClearColor.rgb()
glClearColor(Float(rgb.0!)/256.0, Float(rgb.1!)/256.0, Float(rgb.2!)/256.0, 0.0);
glClear(0x00004000) // GL_COLOR_BUFFER_BIT
// set the blend mode to "source over" so that CI will use that
glEnable(0x0BE2) // GL_BLEND
glBlendFunc(1, 0x0303) // GL_ONE, GL_ONE_MINUS_SRC_ALPHA
renderContext.draw(image, in: drawFrame, from: image.extent)
}
}
}
A few notes.
I absolutely need to credit Objc.io for much of this. This is also a great resource for Swift and UIKit coding.
I wanted AspectFit content mode with the potential to change the "backgroundColor" of the GLKView, which is why I subclassed it and called it clearColor.
Between the two resources I linked to, you should have what you need for a well-performing, near real time use of Core Image using the GPU. One reason my aforementioned code to do scaling after getting the output of a filter was never updated? It didn't need it.
Lots here to process, I know. But I've found this side of things (Core Image effects) to be the most fun side (and pretty cool too) of iOS.

Related

Blur face in face detection in vision kit

I'm using Apple's tutorial on face detection with Vision in a live camera feed, not a still image.
https://developer.apple.com/documentation/vision/tracking_the_user_s_face_in_real_time
It detects the face and uses CAShapeLayer to draw lines between different parts of the face.
fileprivate func setupVisionDrawingLayers() {
let captureDeviceResolution = self.captureDeviceResolution
let captureDeviceBounds = CGRect(x: 0,
y: 0,
width: captureDeviceResolution.width,
height: captureDeviceResolution.height)
let captureDeviceBoundsCenterPoint = CGPoint(x: captureDeviceBounds.midX,
y: captureDeviceBounds.midY)
let normalizedCenterPoint = CGPoint(x: 0.5, y: 0.5)
guard let rootLayer = self.rootLayer else {
self.presentErrorAlert(message: "view was not property initialized")
return
}
let overlayLayer = CALayer()
overlayLayer.name = "DetectionOverlay"
overlayLayer.masksToBounds = true
overlayLayer.anchorPoint = normalizedCenterPoint
overlayLayer.bounds = captureDeviceBounds
overlayLayer.position = CGPoint(x: rootLayer.bounds.midX, y: rootLayer.bounds.midY)
let faceRectangleShapeLayer = CAShapeLayer()
faceRectangleShapeLayer.name = "RectangleOutlineLayer"
faceRectangleShapeLayer.bounds = captureDeviceBounds
faceRectangleShapeLayer.anchorPoint = normalizedCenterPoint
faceRectangleShapeLayer.position = captureDeviceBoundsCenterPoint
faceRectangleShapeLayer.fillColor = nil
faceRectangleShapeLayer.strokeColor = UIColor.green.withAlphaComponent(0.7).cgColor
faceRectangleShapeLayer.lineWidth = 5
faceRectangleShapeLayer.shadowOpacity = 0.7
faceRectangleShapeLayer.shadowRadius = 5
let faceLandmarksShapeLayer = CAShapeLayer()
faceLandmarksShapeLayer.name = "FaceLandmarksLayer"
faceLandmarksShapeLayer.bounds = captureDeviceBounds
faceLandmarksShapeLayer.anchorPoint = normalizedCenterPoint
faceLandmarksShapeLayer.position = captureDeviceBoundsCenterPoint
faceLandmarksShapeLayer.fillColor = nil
faceLandmarksShapeLayer.strokeColor = UIColor.yellow.withAlphaComponent(0.7).cgColor
faceLandmarksShapeLayer.lineWidth = 3
faceLandmarksShapeLayer.shadowOpacity = 0.7
faceLandmarksShapeLayer.shadowRadius = 5
overlayLayer.addSublayer(faceRectangleShapeLayer)
faceRectangleShapeLayer.addSublayer(faceLandmarksShapeLayer)
rootLayer.addSublayer(overlayLayer)
self.detectionOverlayLayer = overlayLayer
self.detectedFaceRectangleShapeLayer = faceRectangleShapeLayer
self.detectedFaceLandmarksShapeLayer = faceLandmarksShapeLayer
self.updateLayerGeometry()
}
How can I fill inside the lines (the different parts of the face) with a blurry view? I need to blur the face.
You could try placing a UIVisualEffectView on top of your video feed, and then adding a masking CAShapeLayer to that UIVisualEffectView. I don't know if that would work or not.
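As a rough sketch of that idea (illustrative names only - previewView and facePath are assumptions, not part of Apple's sample):
let blurView = UIVisualEffectView(effect: UIBlurEffect(style: .regular))
blurView.frame = previewView.bounds
previewView.addSubview(blurView)

// Mask the blur so it only covers the detected face region.
let maskLayer = CAShapeLayer()
maskLayer.path = facePath.cgPath   // path built from the Vision face observation
blurView.layer.mask = maskLayer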
The docs on UIVisualEffectView say:
When using the UIVisualEffectView class, avoid alpha values that are less than 1. Creating views that are partially transparent causes the system to combine the view and all the associated subviews during an offscreen render pass. UIVisualEffectView objects need to be combined as part of the content they are layered on top of in order to look correct. Setting the alpha to less than 1 on the visual effect view or any of its superviews causes many effects to look incorrect or not show up at all.
I don't know if using a mask layer on a visual effect view would cause the same rendering problems or not. You'd have to try it. (And be sure to try it on a range of different hardware, since the rendering performance varies quite a bit between different versions of Apple's chipsets.)
You could also try using a shape layer filled with visual hash or a "pixellated" pattern instead of blurring. That would be faster and probably render more reliably.
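Something along these lines, for example (again, facePath and a pre-rendered patternImage are assumptions):
let pixellatedLayer = CAShapeLayer()
pixellatedLayer.path = facePath.cgPath
pixellatedLayer.fillColor = UIColor(patternImage: patternImage).cgColor
overlayLayer.addSublayer(pixellatedLayer)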
Note that face detection tends to be a little jumpy. It might drop out for a few frames, or lag on quick pans or change of scene. If you're trying to hide people's faces in a live feed for privacy, it might not be reliable. It would only take a few un-blurred frames for somebody's identity to be revealed.

How to optimise swift code for drawing large datasets in realtime in graphs that can scroll and zoom

I am writing my own code in Swift to draw a graph that displays a large data set. (The reason for not using existing frameworks is that I would like both logarithmic and linear views, as well as user interaction with the graph.) I need help speeding up my code to make the app feel responsive, as it is currently too slow.
I pass my dataset into my magnitudeGraph class (a UIView), and calculate the co-ordinates for the data line (divLine) and save them as a UIBezierPath using the function below:
func calcDataCoOrdinates(frequencies: [Float], Mag: [Float]) -> UIBezierPath {
let divLine = UIBezierPath()
let graphWidth:Int = width - dBLabelSpace
let graphHeight:Int = height - freqLabelSpace
let yRange = Float(yMax-yMin)
//Limit Data to the Values being displayed
let indexOfGreaterThanXMin = frequencies.firstIndex(where: {$0 > xMin})
let indexLessThanXMax = frequencies.lastIndex(where: {$0 < xMax})
let frequencyRange = frequencies[indexOfGreaterThanXMin!..<indexLessThanXMax!]
let MagnitudeRange = Mag[indexOfGreaterThanXMin!..<indexLessThanXMax!]
let graphRatio = (yMax-yMin)/Float(graphHeight)
let logX = vForce.log10(frequencyRange)
let logRange = log10(Double(xMax))-log10(Double(xMin));
let graphDivideByLog = Float(graphWidth) / Float(logRange)
let xMinArray:[Float] = Array(repeating: Float(log10(xMin)), count: logX.count)
let x5 = vDSP.multiply(subtraction: (a: logX, b: xMinArray), graphDivideByLog)
let x6 = vDSP.add(Float(dBLabelSpace + 10), x5)
let yMaxArray = Array(repeating:Float(yMax), count: MagnitudeRange.count)
let y2 = vDSP.subtract(yMaxArray, MagnitudeRange)
let y3 = vDSP.divide(y2, Float(graphRatio))
var allPoints:[CGPoint]=[]
for i in 0..<x6.count{
let value = CGPoint.init(x: Double(x6[i]), y: Double(y3[i]))
allPoints.append(value)
}
divLine.removeAllPoints()
divLine.move(to: allPoints[0])
for i in 1 ..< allPoints.count {
divLine.addLine(to: allPoints[i])
}
return divLine
}
Instead of calling setNeedsDisplay directly from the UI element that triggers this, I call redrawGraph(). This calls the calcDataCoOrdinates function in a background thread, and then once all coordinates are calculated, I call setNeedsDisplay on the main thread.
func redrawGraph(){
width = Int(self.frame.width);
height = Int(self.frame.height);
magnitudeQueue.sync{
if mag1.isEmpty == false {
self.data1BezierPath = self.calcData1CoOrdinates(frequencies: freq1, Mag: mag1)
}
if mag2.isEmpty == false {
self.data2BezierPath = self.calcData1CoOrdinates(frequencies: freq2, Mag: mag2)
}
if magSum.isEmpty == false {
self.dataSumBezierPath = self.calcData1CoOrdinates(frequencies: freqSum, Mag: magSum)
}
DispatchQueue.main.async {
self.setNeedsDisplay()
}
}
}
Finally, here is the actual drawing function:
override func draw(_ rect: CGRect) {
self.layer.sublayers = nil
drawData(Color: graphData1Color, layer: in1Layer, lineThickness: Float(graphDataLineWidth), line:data1BezierPath)
drawData(Color: graphData2Color, layer: in2Layer, lineThickness: Float(graphDataLineWidth), line:data2BezierPath)
drawData(Color: sumColor, layer: in3Layer, lineThickness: Float(sumLineThickness), line: dataSumBezierPath)
}
func drawData(Color: UIColor, layer: CAShapeLayer, lineThickness:Float, line:UIBezierPath){
layer.removeFromSuperlayer()
layer.path = nil
Color.set()
line.lineWidth = CGFloat(lineThickness)
layer.strokeColor = Color.cgColor
layer.fillColor = UIColor.clear.cgColor
layer.path = line.cgPath
layer.lineWidth = CGFloat(lineThickness)
layer.contentsScale = UIScreen.main.scale
let mask = CAShapeLayer()
mask.contentsScale = UIScreen.main.scale
mask.path = bpath?.cgPath
layer.mask = mask
self.layer.addSublayer(layer)
}
This code works well to calculate the three data sets and view them on the graph. I have excluded the code to calculate the background layer (grid lines), but that is just to simplify the code.
The issue I get is that this is a lot to try and process for 'realtime' interactivity. I figure that we want around 30fps for this to feel responsive, which gives approximately 33ms of processing and rendering time in total.
Now if I have, say, 32k data points (currently measuring with these), each of the three passes of calcDataCoOrdinates can take between 22-36ms, totalling around 90ms. A tenth of a second is very laggy, so my question is how to speed this up. If I measure the actual drawing process, it is currently very low (1-2ms), which leads me to believe it is my calcDataCoOrdinates function that needs improving.
I have tried several approaches, including using a Douglas-Peucker algorithm to decrease the amount of data points, however this is at a large time cost (+100ms).
I have tried skipping data points that overlap pixels.
I have tried to use the Accelerate framework as much as possible; the top half of this function runs in approximately 10ms, but the bottom half - putting the coordinates into a CGPoint array and iterating to build a UIBezierPath - takes 20ms. I have tried pre-initializing the allPoints array with CGPoints, figuring this would save some time, but it is still taking too long.
Can this be computed more efficiently, or have I hit the limit of my app?
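For what it's worth, a minimal sketch of just that path-building step, assuming x6 and y3 are the [Float] arrays already computed above (whether this beats the explicit loop would need measuring):
let points = zip(x6, y3).map { CGPoint(x: CGFloat($0), y: CGFloat($1)) }
let cgPath = CGMutablePath()
cgPath.addLines(between: points)   // hands all segments to Core Graphics in one call
let divLine = UIBezierPath(cgPath: cgPath)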

Rotating UIControl with CAGradientLayer not updating correctly Swift

Rather than using a normal button, I subclassed a UIControl because I needed to add a gradient to it. I also have a way to add a shadow and an activity indicator (not visible in the image below) as a stateful button to stop users hammering the button if (for example) an API call is being made.
It was really tricky to get the UIControl to rotate, and to do this I added the shadow as a separate view to a container view holding the UIControl.
Now the issue is the control does not behave quite like a view on rotation - let me show you a screen grab for context:
This is mid-rotation but is just about visible to the eye - the image shows that the gradient is 75% of the length of the blue UIView in the image.
https://github.com/stevencurtis/statefulbutton
In order to perform this rotation I remove the shadow view and then change the frame of the gradient layer to the control's bounds, and this is the problem.
func viewRotated() {
CATransaction.setDisableActions(true)
shadowView!.removeFromSuperview()
shadowView!.frame = self.frame
shadowView!.layer.masksToBounds = false
shadowView!.layer.shadowOffset = CGSize(width: 0, height: 3)
shadowView!.layer.shadowRadius = 3
shadowView!.layer.shadowOpacity = 0.3
shadowView!.layer.shadowPath = UIBezierPath(roundedRect: self.bounds, byRoundingCorners: .allCorners, cornerRadii: CGSize(width: 20, height: 20)).cgPath
shadowView!.layer.shouldRasterize = true
shadowView!.layer.rasterizationScale = UIScreen.main.scale
self.gradientViewLayer.frame = self.bounds
self.selectedViewLayer.frame = self.bounds
CATransaction.commit()
self.insertSubview(shadowView!, at: 0)
}
So this rotation method is called through the parent view controller:
override func viewWillTransition(to size: CGSize, with coordinator: UIViewControllerTransitionCoordinator) {
super.viewWillTransition(to: size, with: coordinator)
coordinator.animate(alongsideTransition: { context in
context.viewController(forKey: UITransitionContextViewControllerKey.from)
//inform the loginButton that it is being rotated
self.loginButton.viewRotated()
}, completion: { context in
// can call here when completed the transition
})
}
I know this is the problem, and I guess it is not happening at quite the right time to act the same way as a UIView. Now the issue is that I have tried many things to get this to work, and my best solution (above) is not quite there.
It isn't helpful to suggest to use a UIButton, to use an image for the gradient (please don't suggest using a gradient image as a background for a UIButton, I've tried this) or a third party library. This is my work, it functions but does not work acceptably to me and I want to get it to work as well as a usual view (or at least know why not). I have tried the other solutions above as well, and have gone for my own UIControl. I know I can lock the view if there is an API call, or use other ways to stop the user pressing the button too many times. I'm trying to fix my solution, not invent ways of getting around this issue with CAGradientLayer.
The problem: I need to make a UIControlView with a CAGradientLayer as a background rotate in the same way as a UIView, and not exhibit the issue shown in the image above.
Full Example:
https://github.com/stevencurtis/statefulbutton
Here is working code:
https://gist.github.com/alldne/22d340b36613ae5870b3472fa1c64654
These are my recommendations for your code:
1. A proper place for setting the size and position of sublayers
The size of a view, namely your button, is determined after layout is done. What you should do is set the proper size of the sublayers after layout. So I recommend setting the size and position of the gradient sublayers in layoutSubviews.
override func layoutSubviews() {
super.layoutSubviews()
let center = CGPoint(x: self.bounds.width / 2, y: self.bounds.height / 2)
selectedViewLayer.bounds = self.bounds
selectedViewLayer.position = center
gradientViewLayer.bounds = self.bounds
gradientViewLayer.position = center
}
2. You don't need an extra view to draw the shadow
Remove shadowView and just set the layer properties:
layer.shadowOffset = CGSize(width: 0, height: 3)
layer.shadowRadius = 3
layer.shadowOpacity = 0.3
layer.shadowColor = UIColor.black.cgColor
clipsToBounds = false
If you have to use an extra view to draw the shadow, you can add the view once in init() and set its size and position in layoutSubviews, or you can programmatically set Auto Layout constraints to the superview. A rough sketch of the first option follows.
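This is just an illustration of that pattern, not code from the linked repo (the class and property names are assumptions):
import UIKit

class ShadowedControl: UIControl {
    private let shadowView = UIView()

    override init(frame: CGRect) {
        super.init(frame: frame)
        shadowView.layer.shadowColor = UIColor.black.cgColor
        shadowView.layer.shadowOffset = CGSize(width: 0, height: 3)
        shadowView.layer.shadowRadius = 3
        shadowView.layer.shadowOpacity = 0.3
        insertSubview(shadowView, at: 0)   // added once, never removed on rotation
    }

    required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func layoutSubviews() {
        super.layoutSubviews()
        // Keep the shadow in sync with the control's bounds (e.g. during rotation).
        shadowView.frame = bounds
        shadowView.layer.shadowPath = UIBezierPath(roundedRect: bounds,
                                                   byRoundingCorners: .allCorners,
                                                   cornerRadii: CGSize(width: 20, height: 20)).cgPath
    }
}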
3. Animation duration & timing function
After setting proper sizes, your animation of the gradient layers and the container view doesn’t sync well.
It seems that:
During the rotation transition, the coordinator (UIViewControllerTransitionCoordinator) has its own transition duration and easing function.
The duration and easing function are applied automatically to all the subviews (UIView).
However, those values are not applied to a CALayer without an associated UIView. Consequently, it uses the default timing function and duration of Core Animation.
To sync the animations, explicitly set the animation duration and the timing function like below:
class ViewController: UIViewController {
...
override func viewWillTransition(to size: CGSize, with coordinator: UIViewControllerTransitionCoordinator) {
super.viewWillTransition(to: size, with: coordinator)
CATransaction.setAnimationDuration(coordinator.transitionDuration)
CATransaction.setAnimationTimingFunction(coordinator.completionCurve.timingFunction)
}
...
}
// Swift 4
extension UIView.AnimationCurve {
var timingFunction: CAMediaTimingFunction {
let functionName: CAMediaTimingFunctionName
switch self {
case .easeIn:
functionName = kCAMediaTimingFunctionEaseIn as CAMediaTimingFunctionName
case .easeInOut:
functionName = kCAMediaTimingFunctionEaseInEaseOut as CAMediaTimingFunctionName
case .easeOut:
functionName = kCAMediaTimingFunctionEaseOut as CAMediaTimingFunctionName
case .linear:
functionName = kCAMediaTimingFunctionLinear as CAMediaTimingFunctionName
}
return CAMediaTimingFunction(name: functionName as String)
}
}

Image Cropping grabbing the wrong portion of UIImage during crop

I've been working on making a view controller that will crop an image down to a specific size with some draggable control points and the background image outside of the crop zone dimmed.
For some reason, whenever the image is cropped, it grabs the wrong region. I've looked at just about every other post on dealing with cropping.
Here is my setup for the Storyboard:
I've asked a few other people including a tutor and mentor from a course that I'm taking, but we all seem to be stumped.
I can select a frame by dragging the UL UR DL DR corners around the view controller like this:
But when I press the button and use the crop function I've written, I get something that is not the correct crop based on the framed selection.
I also get this error message during the cropping procedure:
2016-09-07 23:36:38.962 ImageCropView[33133:1056024]
<UIView: 0x7f9cfa42c730; frame = (0 0; 414 736); autoresize = W+H; layer = <CALayer: 0x7f9cfa408400>>'s window
is not equal to <ImageCropView.CroppedImageViewController: 0x7f9cfa43f9b0>'s view's window!
The offending part of the code must be somewhere in one of the functions below.
Here is the cropping function:
func cropImage(image: UIImage, toRect rect: CGRect) -> UIImage {
func rad(deg: CGFloat) -> CGFloat {
return deg / 180.0 * CGFloat(M_PI)
}
// determine the orientation of the image and apply a transformation to the crop rectangle to shift it to the correct position
var rectTransform: CGAffineTransform
switch image.imageOrientation {
case .Left:
rectTransform = CGAffineTransformTranslate(CGAffineTransformMakeRotation(rad(90)), 0, -image.size.height)
case .Right:
rectTransform = CGAffineTransformTranslate(CGAffineTransformMakeRotation(rad(-90)), -image.size.width, 0)
case .Down:
rectTransform = CGAffineTransformTranslate(CGAffineTransformMakeRotation(rad(-180)), -image.size.width, -image.size.height)
default:
rectTransform = CGAffineTransformIdentity
}
// adjust the transformation scale based on the image scale
rectTransform = CGAffineTransformScale(rectTransform, UIScreen.mainScreen().scale, UIScreen.mainScreen().scale)
// apply the transformation to the rect to create a new, shifted rect
let transformedCropSquare = CGRectApplyAffineTransform(rect, rectTransform)
// use the rect to crop the image
let imageRef = CGImageCreateWithImageInRect(image.CGImage, transformedCropSquare)
// create a new UIImage and set the scale and orientation appropriately
let result = UIImage(CGImage: imageRef!, scale: image.scale, orientation: image.imageOrientation)
return result
}
Here are the functions to set and translate the mask view:
func setTopMask(){
let path = CGPathCreateWithRect(cropViewMask.frame, nil)
topMaskLayer.path = path
topImageView.layer.mask = topMaskLayer
}
func translateMask(sender: UIPanGestureRecognizer) {
let translation = sender.translationInView(self.view)
sender.view!.center = CGPointMake(sender.view!.center.x + translation.x, sender.view!.center.y + translation.y)
// print(sender.translationInView(self.view))
sender.setTranslation(CGPointZero, inView: self.view)
// print("panned mask")
if sender.state == .Ended {
printFrames()
}
}
func setCropMaskFrame() {
let x = ulCorner.center.x
let y = ulCorner.center.y
let width = urCorner.center.x - ulCorner.center.x
let height = blCorner.center.y - ulCorner.center.y
cropViewMask.frame = CGRectMake(x, y, width, height)
setTopMask()
}
I know this was a long time ago... Just a thought: I ran into a similar problem, and what I found is that the frames for cropping are most probably correct. The problem lies in the actual size of the picture you're trying to crop. I solved the issue by aligning the size of the view holding the picture with the actual picture size (in points). Then the cropping area cropped what was selected. I know this is probably not a full solution, just sharing my experience - hope it helps turn on some lightbulbs :)
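For anyone hitting the same mismatch, here is a small sketch (not from the answer above, names are illustrative) of mapping a crop rect from the image view's coordinate space into the image's own point space when the view uses .scaleAspectFit:
func imageRect(forViewRect viewRect: CGRect, in imageView: UIImageView) -> CGRect? {
    guard let image = imageView.image else { return nil }
    let viewSize = imageView.bounds.size
    let imageSize = image.size
    // Aspect-fit scale and the padding around the displayed image
    let scale = min(viewSize.width / imageSize.width, viewSize.height / imageSize.height)
    let displayed = CGSize(width: imageSize.width * scale, height: imageSize.height * scale)
    let xOffset = (viewSize.width - displayed.width) / 2
    let yOffset = (viewSize.height - displayed.height) / 2
    return CGRect(x: (viewRect.origin.x - xOffset) / scale,
                  y: (viewRect.origin.y - yOffset) / scale,
                  width: viewRect.width / scale,
                  height: viewRect.height / scale)
}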

How to force SKTextureAtlas created from a dictionary to not modify textures size?

In my project, textures are procedurally generated by methods provided by PaintCode (paint-code).
I then create an SKTextureAtlas from a dictionary filled with UIImages generated by these methods:
myAtlas = SKTextureAtlas(dictionary: myTextures)
Finally, textures are retrieved from the atlas using textureNamed:
var sprite1 = SKSpriteNode(texture:myAtlas.textureNamed("texture1"))
But the displayed nodes are double-sized on the iPhone 4S simulator, and triple-sized on the iPhone 6 Plus simulator.
It seems that at init, the atlas computes images at the device resolution.
But the generated images already have the correct size and do not need to be changed. See the drawing method below.
Here is the description of the generated image:
<UIImage: 0x7f86cae56cd0>, {52, 52}
And the description of the corresponding texture in atlas:
<SKTexture> 'image1' (156 x 156)
This is for the iPhone 6 Plus, using @3x images, which is why the size is x3.
And for the iPhone 4S, using @2x images, as expected:
<UIImage: 0x7d55dde0>, {52, 52}
<SKTexture> 'image1' (156 x 156)
Finally, the scale property of the generated UIImage is set to the right device resolution: 2.0 for @2x (iPhone 4S) and 3.0 for @3x (iPhone 6 Plus).
The Question
So what can I do to prevent the atlas from resizing the pictures?
Drawing method
PaintCode generates drawing methods like the following:
public class func imageOfCell(#frame: CGRect) -> UIImage {
UIGraphicsBeginImageContextWithOptions(frame.size, false, 0)
StyleKit.drawCell(frame: frame)
let imageOfCell = UIGraphicsGetImageFromCurrentImageContext()!
UIGraphicsEndImageContext()
return imageOfCell
}
Update 1
Comparing two approaches to generate SKTextureAtlas
// Some test image
let testImage:UIImage...
// Atlas creation
var myTextures = [String:UIImage]()
myTextures["texture1"] = testImage
myAtlas = SKTextureAtlas(dictionary: myTextures)
// Create two textures from the same image
let texture1 = myAtlas.textureNamed("texture1")
let texture2 = SKTexture(image:testImage)
// Wrong display : node is oversized
var sprite1 = SKSpriteNode(texture:texture1)
// Correct display
var sprite2 = SKSpriteNode(texture:texture2)
It seems that the problem lies in SKTextureAtlas(dictionary:), as the SKSpriteNode initialization does not use the scale property of the UIImage to correctly size the node.
Here are descriptions on console:
- texture1: '' (84 x 84)
- texture2: 'texture1' (84 x 84)
texture2 is missing some data! That could explain the lack of scale information needed to properly size the node, since:
node's size = texture's size divided by texture's scale.
Update 2
The problem occurs when the scale property of the UIImage is different from 1.
So you can use the following method to generate the picture:
func imageOfCell(frame: CGRect, color:SKColor) -> UIImage {
UIGraphicsBeginImageContextWithOptions(frame.size, false, 0)
var bezierPath = UIBezierPath(rect: frame)
color.setFill()
bezierPath.fill()
let imageOfCell = UIGraphicsGetImageFromCurrentImageContext()!
UIGraphicsEndImageContext()
return imageOfCell
}
The problem comes from the use of SKTextureAtlas(dictionary:) to initialize the atlas.
An SKTexture created using this method does not embed data related to the image's scale property. So during the creation of an SKSpriteNode via init(texture:), the lack of scale information in the texture leads it to use the texture's pixel size in place of the image's point size.
One way to correct this is to provide the node's size during SKSpriteNode creation: init(texture:size:), for example:
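// Sketch of the workaround, reusing the names from the test above:
// size the sprite from the image's point size instead of the texture's pixel size.
let texture1 = myAtlas.textureNamed("texture1")
let sprite1 = SKSpriteNode(texture: texture1, size: testImage.size)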
From the documentation for the scale parameter of UIGraphicsBeginImageContextWithOptions:
The scale factor to apply to the bitmap. If you specify a value of
0.0, the scale factor is set to the scale factor of the device’s main screen.
Therefore, if you want the textures to be the same "size" across all devices, set this value to 1.0.
EDIT:
override func didMoveToView(view: SKView) {
let image = imageOfCell(CGRectMake(0, 0, 10, 10),scale:0)
let dict:[String:UIImage] = ["t1":image]
let texture = SKTextureAtlas(dictionary: dict)
let sprite1 = SKSpriteNode(texture: texture.textureNamed("t1"))
sprite1.position = CGPointMake (CGRectGetMidX(view.frame),CGRectGetMidY(view.frame))
addChild(sprite1)
println(sprite1.size)
// prints (30.0, 30.0) if scale = 0
// prints (10,0, 10,0) if scale = 1
}
func imageOfCell(frame: CGRect, scale:CGFloat) -> UIImage {
UIGraphicsBeginImageContextWithOptions(frame.size, false, scale)
var bezierPath = UIBezierPath(rect: frame)
UIColor.whiteColor().setFill()
bezierPath.fill()
let imageOfCell = UIGraphicsGetImageFromCurrentImageContext()!
UIGraphicsEndImageContext()
return imageOfCell
}