Blur face in face detection in Vision Kit - Swift

I'm following Apple's tutorial on face detection with Vision in a live camera feed, not a still image.
https://developer.apple.com/documentation/vision/tracking_the_user_s_face_in_real_time
It detects the face and uses CAShapeLayer to draw lines between the different parts of the face.
fileprivate func setupVisionDrawingLayers() {
    let captureDeviceResolution = self.captureDeviceResolution

    let captureDeviceBounds = CGRect(x: 0,
                                     y: 0,
                                     width: captureDeviceResolution.width,
                                     height: captureDeviceResolution.height)

    let captureDeviceBoundsCenterPoint = CGPoint(x: captureDeviceBounds.midX,
                                                 y: captureDeviceBounds.midY)

    let normalizedCenterPoint = CGPoint(x: 0.5, y: 0.5)

    guard let rootLayer = self.rootLayer else {
        self.presentErrorAlert(message: "view was not properly initialized")
        return
    }

    let overlayLayer = CALayer()
    overlayLayer.name = "DetectionOverlay"
    overlayLayer.masksToBounds = true
    overlayLayer.anchorPoint = normalizedCenterPoint
    overlayLayer.bounds = captureDeviceBounds
    overlayLayer.position = CGPoint(x: rootLayer.bounds.midX, y: rootLayer.bounds.midY)

    let faceRectangleShapeLayer = CAShapeLayer()
    faceRectangleShapeLayer.name = "RectangleOutlineLayer"
    faceRectangleShapeLayer.bounds = captureDeviceBounds
    faceRectangleShapeLayer.anchorPoint = normalizedCenterPoint
    faceRectangleShapeLayer.position = captureDeviceBoundsCenterPoint
    faceRectangleShapeLayer.fillColor = nil
    faceRectangleShapeLayer.strokeColor = UIColor.green.withAlphaComponent(0.7).cgColor
    faceRectangleShapeLayer.lineWidth = 5
    faceRectangleShapeLayer.shadowOpacity = 0.7
    faceRectangleShapeLayer.shadowRadius = 5

    let faceLandmarksShapeLayer = CAShapeLayer()
    faceLandmarksShapeLayer.name = "FaceLandmarksLayer"
    faceLandmarksShapeLayer.bounds = captureDeviceBounds
    faceLandmarksShapeLayer.anchorPoint = normalizedCenterPoint
    faceLandmarksShapeLayer.position = captureDeviceBoundsCenterPoint
    faceLandmarksShapeLayer.fillColor = nil
    faceLandmarksShapeLayer.strokeColor = UIColor.yellow.withAlphaComponent(0.7).cgColor
    faceLandmarksShapeLayer.lineWidth = 3
    faceLandmarksShapeLayer.shadowOpacity = 0.7
    faceLandmarksShapeLayer.shadowRadius = 5

    overlayLayer.addSublayer(faceRectangleShapeLayer)
    faceRectangleShapeLayer.addSublayer(faceLandmarksShapeLayer)
    rootLayer.addSublayer(overlayLayer)

    self.detectionOverlayLayer = overlayLayer
    self.detectedFaceRectangleShapeLayer = faceRectangleShapeLayer
    self.detectedFaceLandmarksShapeLayer = faceLandmarksShapeLayer

    self.updateLayerGeometry()
}
How can I fill the area inside the lines (the different parts of the face) with a blurry view? I need to blur the face.

You could try placing a UIVisualEffectView on top of your video feed, and then adding a masking CAShapeLayer to that UIVisualEffectView. I don't know if that would work or not.
The docs on UIVisualEffectView say:
When using the UIVisualEffectView class, avoid alpha values that are less than 1. Creating views that are partially transparent causes the system to combine the view and all the associated subviews during an offscreen render pass. UIVisualEffectView objects need to be combined as part of the content they are layered on top of in order to look correct. Setting the alpha to less than 1 on the visual effect view or any of its superviews causes many effects to look incorrect or not show up at all.
I don't know if using a mask layer on a visual effect view would cause the same rendering problems or not. You'd have to try it. (And be sure to try it on a range of different hardware, since the rendering performance varies quite a bit between different versions of Apple's chipsets.)
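If you want to experiment with that, here is a rough sketch of what I mean, assuming previewView hosts your video preview layer and faceRect is the detected face rectangle already converted into previewView's coordinate space (both names are placeholders for your own objects):
// Sketch only: blur the whole preview, then mask the blur to the face region.
let blurView = UIVisualEffectView(effect: UIBlurEffect(style: .regular))
blurView.frame = previewView.bounds
previewView.addSubview(blurView)

let maskLayer = CAShapeLayer()
maskLayer.frame = blurView.bounds
maskLayer.path = UIBezierPath(ovalIn: faceRect).cgPath  // or a path built from the landmarks
blurView.layer.mask = maskLayer
You would then update maskLayer.path from each new face observation, the same way the sample updates its shape layers.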
You could also try using a shape layer filled with visual hash or a "pixellated" pattern instead of blurring. That would be faster and probably render more reliably.
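Something along these lines (again just a sketch; pixelPatternImage is a small tiling image you would supply yourself):
// Sketch only: cover the face with a tiling "pixellated" pattern instead of a blur.
let coverLayer = CAShapeLayer()
coverLayer.frame = previewView.bounds
coverLayer.path = UIBezierPath(ovalIn: faceRect).cgPath
coverLayer.fillColor = UIColor(patternImage: pixelPatternImage).cgColor
previewView.layer.addSublayer(coverLayer)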
Note that face detection tends to be a little jumpy. It might drop out for a few frames, or lag on quick pans or change of scene. If you're trying to hide people's faces in a live feed for privacy, it might not be reliable. It would only take a few un-blurred frames for somebody's identity to be revealed.

Related

SceneKit LIDAR iOS: Show unscanned regions of camera view in the background with a different color/texture

I'm building an app similar to Polycam, 3D Scanner App, Scaniverse, etc. I visualize a mesh for scanned regions and export it into different formats. I would like to show the user which regions are scanned and which are not, so I need to differentiate between them.
My idea is to build something like what Polycam does:
< Polycam blue background for unscanned regions >
I tried changing the background content property of the scene, but it causes the whole camera view to be replaced by the color.
arSceneView.scene.background.contents = UIColor.black
I'm using ARSCNView and setting up plane detection as follows:
private func setupPlaneDetection() {
    let configuration = ARWorldTrackingConfiguration()
    configuration.planeDetection = [.horizontal, .vertical]
    configuration.sceneReconstruction = .meshWithClassification
    configuration.frameSemantics = .smoothedSceneDepth

    arSceneView.session.run(configuration)
    arSceneView.session.delegate = self
    // arSceneView.scene.background.contents = UIColor.black
    arSceneView.delegate = self

    UIApplication.shared.isIdleTimerDisabled = true
    arSceneView.showsStatistics = true
}
Thanks in advance for any help you can provide!
I’ve done this before by adding a sphere to the scene with a two-sided material (slightly transparent) and with a radius large enough that the camera and the scanned surface will always be inside of it. Here’s an example of how to do that:
let backgroundSphereNode = SCNNode()
backgroundSphereNode.geometry = SCNSphere(radius: 500)

let material = SCNMaterial()
material.isDoubleSided = true
material.diffuse.contents = UIColor(white: 0, alpha: 0.9)
backgroundSphereNode.geometry?.materials = [material]

// Add the sphere to the scene so the camera is always inside it.
arSceneView.scene.rootNode.addChildNode(backgroundSphereNode)
Note that I’m using a black color - you can obviously change this to whatever you need, but keep the alpha channel slightly transparent. And tweak the radius of the sphere so it works for your scene.

Bounding box realignment from CoreML object detection

I am currently trying to render bounding boxes inside a UIView; however, I'm facing the issue that there is a misalignment on the X axis when rendering the box, as can be seen in the screenshot below.
When the object is on the left of the view, the box is shifted to the right, as seen in the image. When the object is on the right, the box is shifted to the left. The misalignment increases the closer the object gets to the edge of the screen.
Currently I use ARKit to capture the current frame as a pixel buffer.
// Capture the current frame and the current device orientation.
// (exifOrientation is a custom UIDevice extension in my project.)
guard let pixelBuffer = sceneView.session.currentFrame?.capturedImage,
      let orientation = CGImagePropertyOrientation(rawValue: UIDevice.current.exifOrientation) else { return }

let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: orientation, options: [:])
Additionally, my CoreML Vision request looks as follows:
findObjectRequest = VNCoreMLRequest(model: visionModel, completionHandler: visionRequestDidComplete)
findObjectRequest?.imageCropAndScaleOption = .scaleFit
I then try to rescale the normalised bounding box to image space like this:
public func scaleImageForCameraOutput(predictionRect finderrItem: FinderrItem, viewRect: CGRect) -> FinderrItem {
    let scale = CGAffineTransform.identity.scaledBy(x: viewRect.width, y: viewRect.height)
    let transform = CGAffineTransform(scaleX: 1, y: -1).translatedBy(x: 0, y: -1)
    let bgRect = finderrItem.box.applying(transform).applying(scale)
    finderrItem.box = bgRect
    return finderrItem
}
I also tried to follow the Apple developer documentation and use its API to rescale the bounding boxes as follows:
let newBox = VNImageRectForNormalizedRect(
    boundingBox,
    Int(self.sceneView.bounds.width),
    Int(self.sceneView.bounds.height))
However, this still has the same misalignment, plus the additional issue that the y-axis is now inverted.
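For reference, the kind of compensation I would expect to need on top of that call, given Vision's bottom-left origin, is something like this (just a sketch; viewHeight stands in for the height of the view I draw into):
// Sketch: flip the rect from Vision's bottom-left origin into UIKit's
// top-left coordinate space. `viewHeight` is the drawing view's height.
let flippedBox = CGRect(x: newBox.origin.x,
                        y: viewHeight - newBox.origin.y - newBox.height,
                        width: newBox.width,
                        height: newBox.height)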
Does anyone know why I'm having this problem? I've been stuck on it for quite a while now and can't seem to figure it out.

Using particle effects or animated sks files to create filled letters

I have one .png image of a single star. I'd like to use this image to create animated star-filled letters. This is what I'd like to do (but the stars would be sort of animated within this, which I imagine can be done with particle effects):
Could I do this by using potentially several sks files for each letter and then loading them into one larger scene? In addition, if I just wanted to fill the label node with a static texture of several stars, is there an alternate way of doing this?
Not ideal, but easy to achieve:
override func didMove(to view: SKView) {
    if let nodeToMask = SKEmitterNode(fileNamed: "firelfies") {
        backgroundColor = .black

        let cropNode = SKCropNode()
        cropNode.position = CGPoint(x: frame.midX, y: frame.midY)
        cropNode.zPosition = 1

        let mask = SKLabelNode(fontNamed: "ArialMT")
        mask.text = "MASK"
        mask.fontColor = .green
        mask.fontSize = 185
        cropNode.maskNode = mask

        nodeToMask.position = CGPoint(x: 0, y: 0)
        nodeToMask.name = "character"
        cropNode.addChild(nodeToMask)

        addChild(cropNode)
    }
}
I think the code is self-explanatory, but basically, you just use text as a mask of a crop node, and you mask an emitter. Here is the result:
The thing with this implementation is that sparkles don't go outside of the letter bounds.

I'm having some trouble using x and y coordinates from touchesBegan as the center key in a CI filter

I'm trying to set it up so that the user taps a location in an image view and the X,Y of the tap becomes the center point (kCIInputCenterKey) of the current image filter in use.
These are my global variables:
var x: CGFloat = 0
var y: CGFloat = 0
var imgChecker = 0
This is my touchesBegan function, which checks whether the user is touching inside the image view or not; if not, it sets the filter center key to the center of the image view:
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    if let touch = touches.first {
        let position = touch.location(in: self.imageView)
        if (touch.view == imageView) {
            print("touchesBegan | This is an ImageView")
            x = position.x * 4
            y = position.y * 4
            imgChecker = 1
        } else {
            print("touchesBegan | This is not an ImageView")
            x = 0
            y = 0
            imgChecker = 0
        }
        print("x: \(x)")
        print("y: \(y)")
    }
}
As you can see, I have the checker there to make the filter center appear in the middle of the image if the tap was not inside the image view. I'm also printing the tapped coordinates to Xcode's console, and they appear without issue.
This is the part where I apply my filter:
currentFilter = CIFilter(name: "CIBumpDistortion")
currentFilter.setValue(200, forKey: kCIInputRadiusKey)
currentFilter.setValue(1, forKey: kCIInputScaleKey)

if imgChecker == 1 {
    self.currentFilter.setValue(CIVector(x: self.x, y: self.y), forKey: kCIInputCenterKey)
} else {
    self.currentFilter.setValue(CIVector(x: currentImage.size.width / 2, y: currentImage.size.height / 2), forKey: kCIInputCenterKey)
}
x = 0
y = 0

let beginImage = CIImage(image: currentImage)
currentFilter.setValue(beginImage, forKey: kCIInputImageKey)

let cgimg = context.createCGImage(currentFilter.outputImage!, from: currentFilter.outputImage!.extent)
currentImage = UIImage(cgImage: cgimg!)
self.imageView.image = currentImage
This is the rendering code I'm using; ignore the "frame" in there, it's just an image view in front of the first one that allows me to save a "frame" over the current filtered image:
func drawImagesAndText() {
    let renderer = UIGraphicsImageRenderer(size: CGSize(width: imageView.bounds.size.width, height: imageView.bounds.size.height))

    img = renderer.image { ctx in
        let bgImage = currentImage
        bgImage?.draw(in: CGRect(x: 0, y: 0, width: imageView.bounds.size.width, height: imageView.bounds.size.height))

        frames = UIImage(named: framesAr)
        frames?.draw(in: CGRect(x: 0, y: 0, width: imageView.bounds.size.width, height: imageView.bounds.size.height))
    }
}
When I do set the x,y by tapping inside the image view, the center of the filter keeps appearing in the lower left-hand side of the image view regardless of where I tapped. If I keep tapping around the image view, the center does seem to move around a bit, but it's nowhere near where I'm actually tapping.
Any insight would be greatly appreciated, thank you.
Keep two things in mind.
First (and I think you probably know this), the CI origin (0,0) is lower left, not top left.
Second (and I think this is the issue), UIKit coordinates (meaning UIImage and potentially CGPoint coordinates) are not the same as CIVector coordinates. You need to take the UIKit touchesBegan coordinate and turn it into the CIImage.extent coordinate.
EDIT:
All coordinates that follow are X then Y, and Width then Height.
After posting my comment I thought I'd give an example of what I mean by scaling. Let's say you have a UIImageView sized at 250x250, using a content mode of AspectFit, displaying an image whose size is 1000x500.
Now, let's say the touchesBegan is CGPoint(200,100). (NOTE: If your UIImageView is part of a larger superview, it could be something more like 250,400 - I'm working on the point within the UIImageView.)
Scaling down the image size (remember, AspectFit) means the image is actually centered vertically (landscape appearing) within the UIImageView at CGRect(0, 62.5, 250, 125). So first off, good! The touch point not only began within the image view, it also began within the image. (You'll probably want to consider the not-so-edge case of touches beginning outside of the image.)
Dividing by 4 gives you the scaled down image view coordinates, and as you'd expect, multiplying up will give you the needed vector coordinates. So a touchesBegan CGPoint(200,100) turns into a CIVector(800,400).
I have some code written - not much in the way of comments, done in Swift 2 (I think) and very poorly written - that is part of a subclass (probably should have been an extension) of UIImageView that computes all this. Using the UIImageView's bounds and its image's size is what you need. Keep in mind - images in AspectFit can also be scaled up!
One last note on CIImage: extent. Many times it's a UIImage's size. But many masks and generated outputs may have an infinite extent.
SECOND EDIT:
I made a stupid mistake in my scaling example. Remember, the CIImage origin is bottom left, not upper left. So in my example a CGPoint(200,100), scaled to CGPoint(800,400), would actually be CIVector(800,100).
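To make the scale-and-flip idea concrete, here is a minimal sketch of the conversion (my own names, not from the code mentioned above). It also subtracts the letterbox offset by computing the aspect-fit rect from the image view's bounds and its image's size, so its numbers will differ slightly from the simplified running example:
// Sketch only: convert a touch point in an AspectFit UIImageView into
// CIImage coordinates (bottom-left origin). Names here are placeholders.
func ciCenter(for touchPoint: CGPoint, in imageView: UIImageView) -> CIVector? {
    guard let image = imageView.image else { return nil }

    // Aspect-fit scale and the rect the image actually occupies in the view.
    let scale = min(imageView.bounds.width / image.size.width,
                    imageView.bounds.height / image.size.height)
    let fittedSize = CGSize(width: image.size.width * scale,
                            height: image.size.height * scale)
    let fittedOrigin = CGPoint(x: (imageView.bounds.width - fittedSize.width) / 2,
                               y: (imageView.bounds.height - fittedSize.height) / 2)

    // Map the touch into image pixel coordinates.
    let xInImage = (touchPoint.x - fittedOrigin.x) / scale
    let yInImage = (touchPoint.y - fittedOrigin.y) / scale

    // Flip Y: UIKit's origin is top-left, Core Image's is bottom-left.
    return CIVector(x: xInImage, y: image.size.height - yInImage)
}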
THIRD EDIT:
Apologies for the multiple/running edits, but it seems important. (Besides, only the last one was due to my own mistake! Worth noting, but still.)
Now we're talking "near real time" updating using a Core Image filter. I'm planning to eventually have some blog posts on this, but the real source you want is Simon Gladman (he's moved on, look back to his posts in 2015-16), and his eBook Core Image for Swift (uses Swift 2 but most is automatically upgraded to Swift 3). Just giving credit where it is due.
If you want "near real time" usage of Core Image, you need to use the GPU. UIView, and all it's subclasses (meaning UIKit) uses the CPU. That's okay, using the GPU means using a Core Graphics, and specifically using a GLKView. It's the CG equivalent of a UIImage.
Here's my subclass of it:
open class GLKViewDFD: GLKView {

    var renderContext: CIContext
    var myClearColor: UIColor!
    var rgb: (Int?, Int?, Int?)!

    open var image: CIImage! {
        didSet {
            setNeedsDisplay()
        }
    }

    public var clearColor: UIColor! {
        didSet {
            myClearColor = clearColor
        }
    }

    public init() {
        let eaglContext = EAGLContext(api: .openGLES2)
        renderContext = CIContext(eaglContext: eaglContext!)
        super.init(frame: CGRect.zero)
        context = eaglContext!
    }

    override public init(frame: CGRect, context: EAGLContext) {
        renderContext = CIContext(eaglContext: context)
        super.init(frame: frame, context: context)
        enableSetNeedsDisplay = true
    }

    public required init?(coder aDecoder: NSCoder) {
        let eaglContext = EAGLContext(api: .openGLES2)
        renderContext = CIContext(eaglContext: eaglContext!)
        super.init(coder: aDecoder)
        context = eaglContext!
        enableSetNeedsDisplay = true
    }

    override open func draw(_ rect: CGRect) {
        if let image = image {
            let imageSize = image.extent.size
            var drawFrame = CGRect(x: 0, y: 0, width: CGFloat(drawableWidth), height: CGFloat(drawableHeight))

            // Letterbox the image into the drawable (AspectFit).
            let imageAR = imageSize.width / imageSize.height
            let viewAR = drawFrame.width / drawFrame.height
            if imageAR > viewAR {
                drawFrame.origin.y += (drawFrame.height - drawFrame.width / imageAR) / 2.0
                drawFrame.size.height = drawFrame.width / imageAR
            } else {
                drawFrame.origin.x += (drawFrame.width - drawFrame.height * imageAR) / 2.0
                drawFrame.size.width = drawFrame.height * imageAR
            }

            // Clear to the requested color. (rgb() is a custom UIColor extension.)
            rgb = (0, 0, 0)
            rgb = myClearColor.rgb()
            glClearColor(Float(rgb.0!) / 256.0, Float(rgb.1!) / 256.0, Float(rgb.2!) / 256.0, 0.0)
            glClear(0x00004000)        // GL_COLOR_BUFFER_BIT

            // Set the blend mode to "source over" so that CI will use that.
            glEnable(0x0BE2)           // GL_BLEND
            glBlendFunc(1, 0x0303)     // GL_ONE, GL_ONE_MINUS_SRC_ALPHA

            renderContext.draw(image, in: drawFrame, from: image.extent)
        }
    }
}
A few notes.
I absolutely need to credit Objc.io for much of this. This is also a great resource for Swift and UIKit coding.
I wanted AspectFit content mode with the ability to change the "backgroundColor" of the GLKView, which is why I subclassed it and added the clearColor property.
Between the two resources I linked to, you should have what you need for a well-performing, near real time use of Core Image on the GPU. One reason my aforementioned scaling code (applied after getting the output of a filter) was never updated? It didn't need to be.
Lots here to process, I know. But I've found this side of things (Core Image effects) to be the most fun side (and pretty cool too) of iOS.

How to scale to be at the same distance in all devices?

I'm having problems with my game on different devices. On the iPhone 6s Plus the circle in the center looks much bigger, and the small circle that sits on the line changes position. I would like the center circle to be the same size on every device, and the small circle to always sit half on the line.
import SpriteKit

struct Circle {
    var position: CGPoint
    var radius: CGFloat
}

class GameScene: SKScene {

    let node = SKNode()
    let sprite = SKShapeNode(circleOfRadius: 6)
    var rotation: CGFloat = CGFloat(M_PI)
    var circles: [Circle] = []
    var circuloFondo = SKSpriteNode()
    var orbita = SKSpriteNode()
    let padding2: CGFloat = 26.0
    let padding3: CGFloat = 33.5
    let padding5: CGFloat = 285.5
    var circulo = SKSpriteNode()

    override func didMoveToView(view: SKView) {
        scaleMode = .ResizeFill
        backgroundColor = UIColor(red: 0.3, green: 0.65, blue: 0.9, alpha: 1)

        orbita = SKSpriteNode(imageNamed: "orbita2")
        orbita.size = CGSize(width: view.frame.size.width - padding2, height: view.frame.size.width - padding2)
        orbita.color = UIColor.whiteColor()
        orbita.colorBlendFactor = 1
        orbita.alpha = 1
        orbita.position = view.center
        self.addChild(orbita)
        orbita.zPosition = 3

        circuloFondo = SKSpriteNode(imageNamed: "circuloFondo")
        circuloFondo.size = CGSize(width: view.frame.size.width - padding5, height: view.frame.size.width - padding5)
        circuloFondo.color = UIColor.whiteColor()
        circuloFondo.alpha = 1
        circuloFondo.position = view.center
        self.addChild(circuloFondo)
        circuloFondo.zPosition = 0

        let radius1: CGFloat = (view.frame.size.width - padding3) / 2 - 1
        let radius2: CGFloat = (view.frame.size.width - padding5) / 2 + 6.5

        circles.append(Circle(position: view.center, radius: radius1))
        circles.append(Circle(position: view.center, radius: radius2))

        addChild(node)
        node.addChild(sprite)

        if let circle = nextCircle() {
            node.position = circle.position
            sprite.fillColor = SKColor.whiteColor()
            sprite.zPosition = 4.0
            sprite.position = CGPoint(x: circle.radius, y: 0)
            rotate()
        }
You can get the width of the screen like this:
let screenSize: CGRect = UIScreen.mainScreen().bounds
let screenWidth = screenSize.width
and then set elements in your UI to be a proportion of the screenWidth. For instance:
let radius1:CGFloat = screenWidth/4
//this would always give you a radius that is one quarter of the screen width
I've used this method a few times with success, hope it works for you.
Your main problem is that your scene's scale mode is set to .ResizeFill. This will cause you many headaches, as you will have to do all the scaling yourself; hence the circles are different sizes on different devices. Having scale mode .ResizeFill will also affect things such as the physics engine or font sizes, which you will need to adjust for on every device.
I would recommend you use scene scale mode .AspectFill with the default scene size of 1024*768 (landscape) or 768*1024 (portrait). This is the same as the Xcode default game template.
This way everything will look exactly the same on all iPhones. On iPads there will be slightly more screen space at the top and bottom which you simply cover with your background. The main trick is that you position your stuff from the center.
Furthermore you can use the universal assets in the asset catalogue and everything will look great and not blurry.
The only thing that you might have to adjust for this way is that on iPads you might need to move some buttons up/down if you want them on the top/bottom edge.
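For example, the scene setup could look roughly like this (a sketch in current Swift spelling; with your Swift 2 syntax the scale mode would be .AspectFill):
// Sketch: present the scene with a fixed design size and aspect-fill scaling,
// so layout is identical across devices (example code, not from the question).
class GameViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        guard let skView = view as? SKView else { return }

        // Fixed 768x1024 design size (portrait); everything is positioned
        // relative to this, so it looks the same on every iPhone.
        let scene = GameScene(size: CGSize(width: 768, height: 1024))
        scene.scaleMode = .aspectFill
        scene.anchorPoint = CGPoint(x: 0.5, y: 0.5)  // position things from the center

        skView.presentScene(scene)
    }
}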
I strongly recommend you consider this, as I can talk from experience: using scale mode .ResizeFill is really bad. I have been through this pain with 2 games before I rewrote them because they were so inconsistent on all devices, causing me so many bugs in the process. Let's not talk about the time I wasted testing on all devices, adjusting values until it felt right.
Hope this helps.