watchOS2 animateWithDuration starts slow and speeds up - swift

I'm trying to animate the width of a group in my watchOS 2 application by calling animateWithDuration of the WKInterfaceController class. The idea is to show the user a horizontal line whose width decreases from right to left over a period of time (something like a timer):
self.timer.setWidth(100)
self.animateWithDuration(NSTimeInterval(duration)) {
    self.timer.setWidth(0)
}
However, I'm seeing that as soon as the animation starts the speed is very slow, and then it increases. When the animation is about to stop (when the timer width is close to 0), the animation slows down again.
I want the speed to be the same over the duration of the animation.
Has anyone had this issue before? Any help is appreciated! Thanks

watchOS 2 doesn't provide a way to specify a timing function, so the animation is limited to using the EaseInEaseOut curve (which starts out slow, speeds up, then slows down at the end).
You could try using Core Graphics to render the line, or use a series of WKInterfaceImage frames to smoothly animate the line.
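For reference, the frame-sequence approach might look something like the following sketch. It assumes you swap the group for a WKInterfaceImage outlet (called timerImage here, matching the update below) and pre-render the line into a numbered image sequence such as timer0.png ... timer29.png bundled in the Watch app; none of those names come from the original project.

// Swift 2 / watchOS 2 sketch: play a pre-rendered image sequence at a constant
// rate instead of relying on animateWithDuration's easing.
// Assumes images named "timer0" ... "timer29" exist in the Watch app bundle.
timerImage.setImageNamed("timer")                      // base name of the sequence
timerImage.startAnimatingWithImagesInRange(
    NSRange(location: 0, length: 30),
    duration: duration,                                // plays linearly over `duration`
    repeatCount: 1)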

I made a simple example of the timer I wanted to animate with Core Graphics in the watchOS 2 app.
You can find the project here
UPDATE:
Here's the code that I made:
func configureTimerWithCounter(counter: Double) {
    let size = CGSizeMake(self.contentFrame.width, 6)
    UIGraphicsBeginImageContext(size)
    let context = UIGraphicsGetCurrentContext()
    UIGraphicsPushContext(context!)

    // Draw line
    let path = UIBezierPath()
    path.lineWidth = 100
    path.moveToPoint(CGPoint(x: 0, y: 0))
    let counterPosition = (self.contentFrame.width / 30) * CGFloat(counter)
    path.addLineToPoint(CGPoint(x: counterPosition, y: 0))
    UIColor.greenColor().setStroke()
    UIColor.whiteColor().setFill()
    path.stroke()
    path.fill()

    // Convert to UIImage
    let cgimage = CGBitmapContextCreateImage(context)
    let uiimage = UIImage(CGImage: cgimage!)

    // End the graphics context
    UIGraphicsPopContext()
    UIGraphicsEndImageContext()

    self.timerImage.setImage(uiimage)
}
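For completeness, here is a minimal sketch (my assumption, not part of the original project) of driving configureTimerWithCounter with an NSTimer so the line shrinks at a steady rate; the /30 in the drawing code suggests a 30-step counter.

// Assumed driver: tick once per second and redraw the line at the new width.
var counter: Double = 30
var countdownTimer: NSTimer?

func startCountdown() {
    countdownTimer = NSTimer.scheduledTimerWithTimeInterval(1.0, target: self,
        selector: #selector(tick), userInfo: nil, repeats: true)
}

func tick() {
    counter -= 1
    configureTimerWithCounter(counter)
    if counter <= 0 {
        countdownTimer?.invalidate()
    }
}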

Related

Blur face in face detection in vision kit

I'm using Apple's tutorial about face detection with Vision in a live camera feed, not a still image.
https://developer.apple.com/documentation/vision/tracking_the_user_s_face_in_real_time
It detects the face and uses CAShapeLayer to draw lines between different parts of the face.
fileprivate func setupVisionDrawingLayers() {
    let captureDeviceResolution = self.captureDeviceResolution

    let captureDeviceBounds = CGRect(x: 0,
                                     y: 0,
                                     width: captureDeviceResolution.width,
                                     height: captureDeviceResolution.height)

    let captureDeviceBoundsCenterPoint = CGPoint(x: captureDeviceBounds.midX,
                                                 y: captureDeviceBounds.midY)

    let normalizedCenterPoint = CGPoint(x: 0.5, y: 0.5)

    guard let rootLayer = self.rootLayer else {
        self.presentErrorAlert(message: "view was not property initialized")
        return
    }

    let overlayLayer = CALayer()
    overlayLayer.name = "DetectionOverlay"
    overlayLayer.masksToBounds = true
    overlayLayer.anchorPoint = normalizedCenterPoint
    overlayLayer.bounds = captureDeviceBounds
    overlayLayer.position = CGPoint(x: rootLayer.bounds.midX, y: rootLayer.bounds.midY)

    let faceRectangleShapeLayer = CAShapeLayer()
    faceRectangleShapeLayer.name = "RectangleOutlineLayer"
    faceRectangleShapeLayer.bounds = captureDeviceBounds
    faceRectangleShapeLayer.anchorPoint = normalizedCenterPoint
    faceRectangleShapeLayer.position = captureDeviceBoundsCenterPoint
    faceRectangleShapeLayer.fillColor = nil
    faceRectangleShapeLayer.strokeColor = UIColor.green.withAlphaComponent(0.7).cgColor
    faceRectangleShapeLayer.lineWidth = 5
    faceRectangleShapeLayer.shadowOpacity = 0.7
    faceRectangleShapeLayer.shadowRadius = 5

    let faceLandmarksShapeLayer = CAShapeLayer()
    faceLandmarksShapeLayer.name = "FaceLandmarksLayer"
    faceLandmarksShapeLayer.bounds = captureDeviceBounds
    faceLandmarksShapeLayer.anchorPoint = normalizedCenterPoint
    faceLandmarksShapeLayer.position = captureDeviceBoundsCenterPoint
    faceLandmarksShapeLayer.fillColor = nil
    faceLandmarksShapeLayer.strokeColor = UIColor.yellow.withAlphaComponent(0.7).cgColor
    faceLandmarksShapeLayer.lineWidth = 3
    faceLandmarksShapeLayer.shadowOpacity = 0.7
    faceLandmarksShapeLayer.shadowRadius = 5

    overlayLayer.addSublayer(faceRectangleShapeLayer)
    faceRectangleShapeLayer.addSublayer(faceLandmarksShapeLayer)
    rootLayer.addSublayer(overlayLayer)

    self.detectionOverlayLayer = overlayLayer
    self.detectedFaceRectangleShapeLayer = faceRectangleShapeLayer
    self.detectedFaceLandmarksShapeLayer = faceLandmarksShapeLayer

    self.updateLayerGeometry()
}
How can I fill the area inside the lines (the different parts of the face) with a blurred view? I need to blur the face.
You could try placing a UIVisualEffectView on top of your video feed, and then adding a masking CAShapeLayer to that UIVisualEffectView. I don't know if that would work or not.
The docs on UIVisualEffectView say:
When using the UIVisualEffectView class, avoid alpha values that are less than 1. Creating views that are partially transparent causes the system to combine the view and all the associated subviews during an offscreen render pass. UIVisualEffectView objects need to be combined as part of the content they are layered on top of in order to look correct. Setting the alpha to less than 1 on the visual effect view or any of its superviews causes many effects to look incorrect or not show up at all.
I don't know if using a mask layer on a visual effect view would cause the same rendering problems or not. You'd have to try it. (And be sure to try it on a range of different hardware, since the rendering performance varies quite a bit between different versions of Apple's chipsets.)
You could also try using a shape layer filled with visual hash or a "pixellated" pattern instead of blurring. That would be faster and probably render more reliably.
Note that face detection tends to be a little jumpy. It might drop out for a few frames, or lag on quick pans or change of scene. If you're trying to hide people's faces in a live feed for privacy, it might not be reliable. It would only take a few un-blurred frames for somebody's identity to be revealed.
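If you want to experiment with the mask idea, a minimal sketch might look like this. It is untested; videoPreviewView is a placeholder for whatever view hosts your camera preview, and facePath is a CGPath you would build from the detected face's bounding box after converting it from Vision's normalized coordinates to view coordinates.

// Blur the whole preview, then mask the blur so it only shows over the face.
let blurView = UIVisualEffectView(effect: UIBlurEffect(style: .regular))
blurView.frame = videoPreviewView.bounds
videoPreviewView.addSubview(blurView)

let maskLayer = CAShapeLayer()
maskLayer.frame = blurView.bounds
maskLayer.path = facePath            // e.g. CGPath(rect: faceRectInViewCoords, transform: nil)
blurView.layer.mask = maskLayer      // only the area under facePath stays blurred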

Merge CAShapeLayer into CVPixelBuffer

I'm capturing the output of a playing video using AVPlayerItemVideoOutput.copyPixelBuffer.
I'm able to convert the pixel buffer into a CIImage, then render it back into a pixel buffer again, and then an AVAssetWriter writes the buffer stream out to a new movie clip successfully.
The reason I'm converting to CIImage is I want to do some manipulation of each frame. (So far I don't understand how to manipulate pixel buffers directly).
In this case I want to overlay a "scribble" style drawing that the user does with their finger. While the video plays, they can draw over it. I'm capturing this drawing successfully into a CAShapeLayer.
The code below outputs just the overlay CAShapeLayer successfully. When I try to reincorporate the original frame by uncommenting the lines shown, the entire process bogs down drastically and drops from 60fps to an unstable 10fps or so on an iPhone 12. I get stable 60fps in all cases except when I uncomment that code.
What's the best way to incorporate the shape layer into this stream of pixel buffers in 60fps "real time"?
Note: some of this code is not finalized -- setting bounds correctly, etc. However this is not related to my question and I'm aware that has to be done. The rotation/translation are there to orient the shape layer -- this all works for now.
func addShapesToBuffer(buffer: CVPixelBuffer, shapeLayer: CAShapeLayer) -> CVPixelBuffer? {
    let coreImage = CoreImage.CIImage.init(cvImageBuffer: buffer)
    let newBuffer = getBuffer(from: coreImage)
    CVPixelBufferLockBaseAddress(newBuffer!, [])

    let rect = CGRect(origin: CGPoint.zero, size: CGSize(width: 800, height: 390))

    shapeLayer.shouldRasterize = true
    shapeLayer.rasterizationScale = UIScreen.main.scale
    shapeLayer.backgroundColor = UIColor.clear.cgColor

    let renderer = UIGraphicsImageRenderer(size: rect.size)
    let uiImageDrawing = renderer.image { context in
        // let videoImage = UIImage(ciImage: coreImage)
        // videoImage.draw(in: rect)
        let cgContext = context.cgContext
        cgContext.rotate(by: deg2rad(-90))
        cgContext.translateBy(x: -390, y: 0)
        return shapeLayer.render(in: cgContext)
    }

    let ciContext = CIContext()
    let newImage = CIImage(cgImage: uiImageDrawing.cgImage!)
    ciContext.render(newImage, to: newBuffer!)
    CVPixelBufferUnlockBaseAddress(newBuffer!, [])

    return newBuffer
}

SpriteKit: using SKView in UIView instead of initializing a Game project

Completely new to SpriteKit. Currently I have a UIView, and I want to add a sprite node to it (like a small UIImageView, but I want animation for it, so I'm using SpriteKit). Therefore I didn't initialize my project as a game project, as found in almost all of the tutorials for SpriteKit. I've found a note here: link, and what I have now is something like:
func initializeImage() {
    let imageView = SKView()
    imageView.frame = CGRect(x: self.frame.width / 2 - Constants.imageWidth / 2,
                             y: self.frame.height - Constants.imageHeight,
                             width: Constants.imageWidth,
                             height: Constants.imageHeight)
    // so place it somewhere in the bottom middle of the whole frame

    let sheet = SpriteSheet(texture: ...)
    let sprite = SKSpriteNode(texture: sheet.itemFor(column: 0, row: 0))
    sprite.position = imageView.center // basically the same position as the imageView.frame's x and y value

    let scene = SKScene(size: imageView.frame.size)
    scene.backgroundColor = SKColor.clear
    scene.addChild(sprite)
    imageView.presentScene(scene)
    self.addSubview(imageView)
}
The SpriteSheet is similar to this: sprite sheet; it's essentially cutting an image atlas and dividing it into smaller images. I tracked the process and this step does give the smaller image (the var 'sprite'). But when running I only get a black square (which should be the size defined by Constants). If I set scene.backgroundColor to white then it's white. How should I proceed from here, i.e. how do I make the sprite show up?
All of your code looks good except for this:
sprite.position = imageView.center // basically the same position as the imageView.frame's x and y value
That is basically not the position you think it is. The coordinate system in SpriteKit is a) relative to the (SK)scene, not to whatever view the SKView is contained in, and b) flipped vertically relative to the UIKit coordinate system. If you want a sprite centered in the scene, you probably want to set its position based on the scene's size:
sprite.position = CGPoint(x: scene.size.width / 2, y: scene.size.height / 2)
By the way, the external SpriteSheet code might not be needed (and you're more likely to benefit from Apple's optimizations) if you slice up your sprite sheet and put it in an Xcode asset catalog.
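Putting it together, a minimal sketch of the corrected setup, keeping the question's Constants and SpriteSheet helpers; allowsTransparency is my addition so the clear scene background actually shows the view behind it, and the texture argument stays elided as in the question.

func initializeImage() {
    let skView = SKView()
    skView.frame = CGRect(x: self.frame.width / 2 - Constants.imageWidth / 2,
                          y: self.frame.height - Constants.imageHeight,
                          width: Constants.imageWidth,
                          height: Constants.imageHeight)
    skView.allowsTransparency = true          // my addition: let the clear scene show through

    let scene = SKScene(size: skView.frame.size)
    scene.backgroundColor = .clear

    let sheet = SpriteSheet(texture: ...)     // texture elided, as in the question
    let sprite = SKSpriteNode(texture: sheet.itemFor(column: 0, row: 0))
    // Position in scene coordinates: the origin is the scene's bottom-left corner.
    sprite.position = CGPoint(x: scene.size.width / 2, y: scene.size.height / 2)
    scene.addChild(sprite)

    skView.presentScene(scene)
    self.addSubview(skView)
}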

I'm having some trouble using x and y coordinates from touchesBegan as the center key in a CI filter

I'm trying to set up having the user tap a location in an image view so that the X,Y of the tap becomes the center point (kCIInputCenterKey) of the current image filter in use.
These are my global variables:
var x: CGFloat = 0
var y: CGFloat = 0
var imgChecker = 0
This is my touchesBegan function, which checks whether the user is touching inside the image view; if not, it sets the filter center key to the center of the image view:
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    if let touch = touches.first {
        let position = touch.location(in: self.imageView)
        if touch.view == imageView {
            print("touchesBegan | This is an ImageView")
            x = position.x * 4
            y = position.y * 4
            imgChecker = 1
        } else {
            print("touchesBegan | This is not an ImageView")
            x = 0
            y = 0
            imgChecker = 0
        }
        print("x: \(x)")
        print("y: \(y)")
    }
}
As you can see, I have the checker there to make the filter center appear in the middle of the image if the tap was outside the image view. I'm also printing the tapped coordinates to Xcode's console, and they appear without issue.
This is the part where I apply my filter:
currentFilter = CIFilter(name: "CIBumpDistortion")
currentFilter.setValue(200, forKey: kCIInputRadiusKey)
currentFilter.setValue(1, forKey: kCIInputScaleKey)

if imgChecker == 1 {
    self.currentFilter.setValue(CIVector(x: self.x, y: self.y), forKey: kCIInputCenterKey)
} else {
    self.currentFilter.setValue(CIVector(x: currentImage.size.width / 2, y: currentImage.size.height / 2), forKey: kCIInputCenterKey)
}
x = 0
y = 0

let beginImage = CIImage(image: currentImage)
currentFilter.setValue(beginImage, forKey: kCIInputImageKey)

let cgimg = context.createCGImage(currentFilter.outputImage!, from: currentFilter.outputImage!.extent)
currentImage = UIImage(cgImage: cgimg!)
self.imageView.image = currentImage
This is the drawing code I'm using. Ignore the "frames" in there; it's just an image view in front of the first one that lets me save a "frame" over the current filtered image:
func drawImagesAndText() {
    let renderer = UIGraphicsImageRenderer(size: CGSize(width: imageView.bounds.size.width, height: imageView.bounds.size.height))

    img = renderer.image { ctx in
        let bgImage = currentImage
        bgImage?.draw(in: CGRect(x: 0, y: 0, width: imageView.bounds.size.width, height: imageView.bounds.size.height))

        frames = UIImage(named: framesAr)
        frames?.draw(in: CGRect(x: 0, y: 0, width: imageView.bounds.size.width, height: imageView.bounds.size.height))
    }
}
When I do set the x,y by tapping inside the image view, the center of the filter keeps appearing in the lower left-hand corner of the image view regardless of where I tapped. If I keep tapping around the image view, the center does seem to move around a bit, but it's nowhere near where I'm actually tapping.
Any insight would be greatly appreciated. Thank you.
Keep two things in mind.
First (and I think you probably know this), the CI origin (0,0) is lower left, not top left.
Second (and I think this is the issue) UIKit (meaning UIImage and potentially CGPoint coordinates) are not the same as CIVector coordinates. You need to take the UIKit touchesBegan coordinate and turn it into the CIImage.extent coordinate.
EDIT:
All coordinates that follow are X then Y, and Width then Height.
After posting my comment I thought I'd give an example of what I mean by scaling. Let's say you have a UIImageView sized at 250x250, using a content mode of AspectFit, displaying an image whose size is 1000x500.
Now, let's say the touchesBegan is CGPoint(200,100). (NOTE: If your UIImageView is part of a larger superview, it could be something more like 250,400 - I'm working on the point within the UIImageView.)
Scaling down the image size (remember, AspectFit) means the image is actually centered vertically (landscape appearing) within the UIImageView at CGRect(0, 62.5, 250, 125). So first off, good! The touch point not only began within the image view, it also began within the image. (You'll probably want to consider the not-so-edge case of touches beginning outside of the image.)
Dividing the image size by 4 gives you the scaled-down image view coordinates, and, as you'd expect, multiplying the touch point up by 4 gives you the needed vector coordinates. So a touchesBegan of CGPoint(200,100) turns into CIVector(800,400).
I have some code written - not much in the way of comments, done in Swift 2 (I think) and very poorly written - that is part of a subclass (probably should have been an extension) of UIImageView that computes all this. Using the UIImageView's bounds and its image's size is what you need. Keep in mind - images in AspectFit can also be scaled up!
One last note on CIImage's extent. Many times it's the UIImage's size, but many masks and generated outputs may have an infinite extent.
SECOND EDIT:
I made a stupid mistake in my scaling example. Remember, the CIImage origin is bottom left, not upper left. So in my example a CGPoint(200,100), scaled to CGPoint(800,400), would become CIVector(800,100).
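To make the arithmetic concrete, here is a minimal sketch of the full conversion (my code, not part of the original answer). It goes from a touch point in an aspect-fit UIImageView to a CIVector, including the letterbox offset and the Y flip; imageView and touchPoint are placeholder names.

// View touch point -> image pixel point -> CIVector, assuming .scaleAspectFit.
func ciCenter(for touchPoint: CGPoint, in imageView: UIImageView) -> CIVector? {
    guard let image = imageView.image else { return nil }

    // Rect the image actually occupies inside the view under aspect-fit.
    let viewSize = imageView.bounds.size
    let scale = min(viewSize.width / image.size.width,
                    viewSize.height / image.size.height)
    let displaySize = CGSize(width: image.size.width * scale,
                             height: image.size.height * scale)
    let displayOrigin = CGPoint(x: (viewSize.width - displaySize.width) / 2,
                                y: (viewSize.height - displaySize.height) / 2)

    // Remove the letterbox offset, then scale up to image pixels.
    let xInImage = (touchPoint.x - displayOrigin.x) / scale
    let yInImage = (touchPoint.y - displayOrigin.y) / scale

    // Flip Y: Core Image's origin is bottom-left, UIKit's is top-left.
    return CIVector(x: xInImage, y: image.size.height - yInImage)
}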
THIRD EDIT:
Apologies for the multiple/running edits, but it seems important. (Besides, only the last was due to my stupidity! Worth noting, but still.)
Now we're talking "near real time" updating using a Core Image filter. I'm planning to eventually have some blog posts on this, but the real source you want is Simon Gladman (he's moved on, look back to his posts in 2015-16), and his eBook Core Image for Swift (uses Swift 2 but most is automatically upgraded to Swift 3). Just giving credit where it is due.
If you want "near real time" usage of Core Image, you need to use the GPU. UIView, and all it's subclasses (meaning UIKit) uses the CPU. That's okay, using the GPU means using a Core Graphics, and specifically using a GLKView. It's the CG equivalent of a UIImage.
Here's my subclass of it:
open class GLKViewDFD: GLKView {

    var renderContext: CIContext
    var myClearColor: UIColor!
    var rgb: (Int?, Int?, Int?)!

    open var image: CIImage! {
        didSet {
            setNeedsDisplay()
        }
    }

    public var clearColor: UIColor! {
        didSet {
            myClearColor = clearColor
        }
    }

    public init() {
        let eaglContext = EAGLContext(api: .openGLES2)
        renderContext = CIContext(eaglContext: eaglContext!)
        super.init(frame: CGRect.zero)
        context = eaglContext!
    }

    override public init(frame: CGRect, context: EAGLContext) {
        renderContext = CIContext(eaglContext: context)
        super.init(frame: frame, context: context)
        enableSetNeedsDisplay = true
    }

    public required init?(coder aDecoder: NSCoder) {
        let eaglContext = EAGLContext(api: .openGLES2)
        renderContext = CIContext(eaglContext: eaglContext!)
        super.init(coder: aDecoder)
        context = eaglContext!
        enableSetNeedsDisplay = true
    }

    override open func draw(_ rect: CGRect) {
        if let image = image {
            let imageSize = image.extent.size
            var drawFrame = CGRect(x: 0, y: 0, width: CGFloat(drawableWidth), height: CGFloat(drawableHeight))
            let imageAR = imageSize.width / imageSize.height
            let viewAR = drawFrame.width / drawFrame.height
            // Aspect-fit the image inside the drawable area.
            if imageAR > viewAR {
                drawFrame.origin.y += (drawFrame.height - drawFrame.width / imageAR) / 2.0
                drawFrame.size.height = drawFrame.width / imageAR
            } else {
                drawFrame.origin.x += (drawFrame.width - drawFrame.height * imageAR) / 2.0
                drawFrame.size.width = drawFrame.height * imageAR
            }
            // rgb() is a helper extension on UIColor returning (Int?, Int?, Int?).
            rgb = (0, 0, 0)
            rgb = myClearColor.rgb()
            glClearColor(Float(rgb.0!) / 256.0, Float(rgb.1!) / 256.0, Float(rgb.2!) / 256.0, 0.0)
            glClear(0x00004000)            // GL_COLOR_BUFFER_BIT
            // set the blend mode to "source over" so that CI will use that
            glEnable(0x0BE2)               // GL_BLEND
            glBlendFunc(1, 0x0303)         // GL_ONE, GL_ONE_MINUS_SRC_ALPHA
            renderContext.draw(image, in: drawFrame, from: image.extent)
        }
    }
}
A few notes.
I absolutely need to credit Objc.io for much of this. This is also a great resource for Swift and UIKit coding.
I wanted AspectFit content mode with the potential to change the "backgroundColor" of the GLKView, which is why I subclassed it and added the clearColor property.
Between the two resources I linked to, you should have what you need for well-performing, near real-time use of Core Image on the GPU. One reason my aforementioned scaling code (applied after getting the output of a filter) was never updated? It didn't need to be.
Lots here to process, I know. But I've found this side of things (Core Image effects) to be the most fun side (and pretty cool too) of iOS.
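For what it's worth, here is a minimal usage sketch of the class above (my assumption of how it is meant to be driven, not code from the original project): assign each new filter output to image, and the didSet triggers a GPU redraw.

// Create the GPU-backed preview once, e.g. in viewDidLoad.
let previewView = GLKViewDFD(frame: view.bounds, context: EAGLContext(api: .openGLES2)!)
previewView.clearColor = .black
view.addSubview(previewView)

// Then, whenever the filter parameters change (e.g. on a tap or drag):
currentFilter.setValue(CIVector(x: x, y: y), forKey: kCIInputCenterKey)
previewView.image = currentFilter.outputImage   // didSet calls setNeedsDisplay()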

How to edit high resolution images with Core Graphics?

I'm trying to draw a path on a high-resolution image. That's nothing complicated for an iPhone, but if I add a shadow to my path everything lags. It only lags when I work on images of a certain resolution (2000 x 3000) or even less.
The Storyboard views are:
- Scroll View
- Image View
- Draw View
So I have the DrawingView on top of the ImageView when I need to draw.
So the ImageView and the DrawView (view.bounds.size) have the same resolution as the image, e.g. 2000 x 3000 (there's the problem).
I'm drawing on a view with a high resolution.
I'm not directly calling drawRect: but only calling setNeedsDisplay() inside touchesBegan() and touchesMoved() after doing some operations (path.moveToPoint, path.addCurveToPoint, array operations) and adding points to my array.
In drawRect: I essentially have:
override func drawRect(rect: CGRect) {
    print(self.bounds.size)
    UIColor.greenColor().setStroke()
    path.lineCapStyle = .Round
    path.lineJoinStyle = .Round
    path.lineWidth = 60.0
    context = UIGraphicsGetCurrentContext()!
    CGContextAddPath(context, path.CGPath)
    CGContextSetShadowWithColor(context, CGSizeZero, 14.0, UIColor.whiteColor().CGColor) // <-- with this shadow it lags a lot.
    path.stroke()
}
My path is a UIBezierPath().
Any ideas to improve the speed?
Update:
I followed what @brimstone said. I now have the ImageView at a lower resolution, but I have to apply my drawn path to the high-resolution image.
(I'm trying to hand crop an image with the path that the user draws)
In this code I already got my closed path:
let layer = CAShapeLayer()
layer.path = path.CGPath
self.imageToEditView.layer.mask = layer

UIGraphicsBeginImageContext(self.imageEdited.size)
self.imageToEditView.layer.renderInContext(UIGraphicsGetCurrentContext()!)
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()

let croppedCGImage = CGImageCreateWithImageInRect(image.CGImage!, CGPathGetBoundingBox(path.CGPath))
let croppedImage = UIImage(CGImage: croppedCGImage!)

self.imageToEditView.image = croppedImage
self.imageToEditView.layer.mask = nil
imageToEditView.bounds.size = low resolution
imageEdited.size = high resolution
I need to set the high resolution (I think) when I renderInContext. But how can I change the resolution of the imageView now?
Try downsizing it for the user to draw over (doesn't make a huge difference on small iPhone screens for user experience), then apply the edits to the high-res image.
To downsize images, either use UIImagePNGRepresentation, which may make your image sufficiently smaller, or (if you're still having memory issues), try using techniques in this tutorial and this answer to make it even smaller.
Then, you can take the content of what they've drawn and apply it to the high-res image.
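For example, one way to do that (a sketch, not tested, using the variable names from the question) is to scale the drawn path from the low-resolution view up to the full-resolution image before using it as the mask:

// Scale factors from the on-screen (low-res) view to the full-res image.
let scaleX = imageEdited.size.width / imageToEditView.bounds.width
let scaleY = imageEdited.size.height / imageToEditView.bounds.height

// Copy the user's path and scale it into image coordinates.
let scaledPath = UIBezierPath(CGPath: path.CGPath)
scaledPath.applyTransform(CGAffineTransformMakeScale(scaleX, scaleY))

// Build the mask from scaledPath and render at imageEdited.size (as in the
// snippet above) so the crop happens at full resolution.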
Alternatively, look at high-res optimisation techniques by Apple: https://developer.apple.com/library/ios/documentation/2DDrawing/Conceptual/DrawingPrintingiOS/SupportingHiResScreensInViews/SupportingHiResScreensInViews.html