CIFilter GaussianBlur seems to be broken on iOS 9.x (used with SKEffectNode)

I am trying to create a blur effect using the following snippet:
let glowEffectNode = SKEffectNode()
glowEffectNode.shouldRasterize = true
let glowSize = CGSize(width: barSize.width, height: barSize.height)
let glowEffectSprite = SKSpriteNode(color: barColorData.topColor, size: glowSize)
glowEffectNode.addChild(glowEffectSprite)
let glowFilter = CIFilter(name: "CIGaussianBlur")
glowFilter!.setDefaults()
glowFilter!.setValue(5, forKey: "inputRadius")
glowEffectNode.filter = glowFilter
Of course on iOS 8.x it works perfectly, but as of iOS 9.x (I tried both 9.0 and 9.1) the blur is not working properly. (On the simulator the node seems a bit transparent but definitely not blurred, and on a device it seems blurred but cropped, and also offset from its center position :/)
Is there a quick way to fix this using CIFilter?

I fiddled a bit more with this and found a solution...
First of all, it seems that using odd numbers for the blur radius causes the entire node to be rendered with an offset (???), so using 10, for example, fixed the offset issue.
Secondly, the blur appears cropped because the rendered sprite is the entire extent of the node, and a blur effect needs extra space around it. I therefore add a transparent sprite to provide that extra space, and the following code snippet now works:
let glowEffectNode = SKEffectNode()
glowEffectNode.shouldRasterize = true
let glowBackgroundSize = CGSize(width: barSize.width + 60, height: barSize.height + 60)
let glowSize = CGSize(width: barSize.width + 10, height: barSize.height + 10)
let glowEffectSprite = SKSpriteNode(color: barColorData.topColor, size: glowSize)
glowEffectNode.addChild(SKSpriteNode(color: SKColor.clearColor(), size: glowBackgroundSize))
glowEffectNode.addChild(glowEffectSprite)
let glowFilter = CIFilter(name: "CIGaussianBlur")
glowFilter!.setDefaults()
glowFilter!.setValue(10, forKey: "inputRadius")
glowEffectNode.filter = glowFilter
I should have mentioned that I am creating a texture from this node using view.textureFromNode(glowEffectNode) for efficiency purposes, but I tried using the node itself and the problem was still there, so the above should work regardless.
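For completeness, the texture step might look roughly like this. This is only a sketch in the same Swift 2 era API as the snippets above: view is assumed to be the presenting SKView, and on later SDKs the call is texture(from:).
// Render the rasterized effect node into a texture once,
// then draw a cheap sprite instead of keeping a live SKEffectNode in the tree.
let glowTexture = view.textureFromNode(glowEffectNode)
let glowSprite = SKSpriteNode(texture: glowTexture)
addChild(glowSprite)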

Related

Blur face in face detection in vision kit

I'm using Apple's tutorial on face detection with the Vision framework in a live camera feed, not a still image.
https://developer.apple.com/documentation/vision/tracking_the_user_s_face_in_real_time
It detects the face and uses CAShapeLayer to draw lines between the different parts of the face.
fileprivate func setupVisionDrawingLayers() {
    let captureDeviceResolution = self.captureDeviceResolution

    let captureDeviceBounds = CGRect(x: 0,
                                     y: 0,
                                     width: captureDeviceResolution.width,
                                     height: captureDeviceResolution.height)
    let captureDeviceBoundsCenterPoint = CGPoint(x: captureDeviceBounds.midX,
                                                 y: captureDeviceBounds.midY)
    let normalizedCenterPoint = CGPoint(x: 0.5, y: 0.5)

    guard let rootLayer = self.rootLayer else {
        self.presentErrorAlert(message: "view was not properly initialized")
        return
    }

    let overlayLayer = CALayer()
    overlayLayer.name = "DetectionOverlay"
    overlayLayer.masksToBounds = true
    overlayLayer.anchorPoint = normalizedCenterPoint
    overlayLayer.bounds = captureDeviceBounds
    overlayLayer.position = CGPoint(x: rootLayer.bounds.midX, y: rootLayer.bounds.midY)

    let faceRectangleShapeLayer = CAShapeLayer()
    faceRectangleShapeLayer.name = "RectangleOutlineLayer"
    faceRectangleShapeLayer.bounds = captureDeviceBounds
    faceRectangleShapeLayer.anchorPoint = normalizedCenterPoint
    faceRectangleShapeLayer.position = captureDeviceBoundsCenterPoint
    faceRectangleShapeLayer.fillColor = nil
    faceRectangleShapeLayer.strokeColor = UIColor.green.withAlphaComponent(0.7).cgColor
    faceRectangleShapeLayer.lineWidth = 5
    faceRectangleShapeLayer.shadowOpacity = 0.7
    faceRectangleShapeLayer.shadowRadius = 5

    let faceLandmarksShapeLayer = CAShapeLayer()
    faceLandmarksShapeLayer.name = "FaceLandmarksLayer"
    faceLandmarksShapeLayer.bounds = captureDeviceBounds
    faceLandmarksShapeLayer.anchorPoint = normalizedCenterPoint
    faceLandmarksShapeLayer.position = captureDeviceBoundsCenterPoint
    faceLandmarksShapeLayer.fillColor = nil
    faceLandmarksShapeLayer.strokeColor = UIColor.yellow.withAlphaComponent(0.7).cgColor
    faceLandmarksShapeLayer.lineWidth = 3
    faceLandmarksShapeLayer.shadowOpacity = 0.7
    faceLandmarksShapeLayer.shadowRadius = 5

    overlayLayer.addSublayer(faceRectangleShapeLayer)
    faceRectangleShapeLayer.addSublayer(faceLandmarksShapeLayer)
    rootLayer.addSublayer(overlayLayer)

    self.detectionOverlayLayer = overlayLayer
    self.detectedFaceRectangleShapeLayer = faceRectangleShapeLayer
    self.detectedFaceLandmarksShapeLayer = faceLandmarksShapeLayer

    self.updateLayerGeometry()
}
How can I fill the areas inside those lines (the different parts of the face) with a blurred view? I need to blur the face.
You could try placing a UIVisualEffectView on top of your video feed, and then adding a masking CAShapeLayer to that UIVisualEffectView. I don't know if that would work or not.
The docs on UIVisualEffectView say:
When using the UIVisualEffectView class, avoid alpha values that are less than 1. Creating views that are partially transparent causes the system to combine the view and all the associated subviews during an offscreen render pass. UIVisualEffectView objects need to be combined as part of the content they are layered on top of in order to look correct. Setting the alpha to less than 1 on the visual effect view or any of its superviews causes many effects to look incorrect or not show up at all.
I don't know if using a mask layer on a visual effect view would cause the same rendering problems or not. You'd have to try it. (And be sure to try it on a range of different hardware, since the rendering performance varies quite a bit between different versions of Apple's chipsets.)
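If you do try it, a minimal sketch might look like the following. Note that previewView (the view hosting the camera preview) and facePath (a CGPath built from the detected face rectangle or landmarks) are assumed names, not part of Apple's tutorial.
// Blur the whole feed, then mask the blur so it only shows inside the face path.
let blurView = UIVisualEffectView(effect: UIBlurEffect(style: .regular))
blurView.frame = previewView.bounds
previewView.addSubview(blurView)

let maskLayer = CAShapeLayer()
maskLayer.frame = blurView.bounds
maskLayer.path = facePath // only the area inside this path reveals the blur
blurView.layer.mask = maskLayer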
You could also try using a shape layer filled with visual hash or a "pixellated" pattern instead of blurring. That would be faster and probably render more reliably.
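As a sketch of that alternative, you could fill the same path with a pattern color; pixelNoise here is a hypothetical tiled asset, not something from the tutorial.
// Cover the face path with a repeating pattern instead of a live blur.
let patternLayer = CAShapeLayer()
patternLayer.frame = overlayLayer.bounds
patternLayer.path = facePath
patternLayer.fillColor = UIColor(patternImage: UIImage(named: "pixelNoise")!).cgColor
overlayLayer.addSublayer(patternLayer)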
Note that face detection tends to be a little jumpy. It might drop out for a few frames, or lag on quick pans or change of scene. If you're trying to hide people's faces in a live feed for privacy, it might not be reliable. It would only take a few un-blurred frames for somebody's identity to be revealed.

Why does the SceneKit material look different, even when the image is the same?

A material's contents can be loaded from many types; two of these are NSImage (or UIImage) and SKTexture.
I noticed that when loading the same image file (.png) with different loaders, the material is rendered differently.
I'm fairly sure some extra property is applied by the SpriteKit transformation, but I don't know what it is.
Why does the SceneKit material look different, even when the image is the same?
This is the rendered example:
About the code:
// 1. Plain color
let plane = SCNPlane(width: 1, height: 1)
plane.firstMaterial?.diffuse.contents = NSColor.green

// 2. NSImage
let plane = SCNPlane(width: 1, height: 1)
plane.firstMaterial?.diffuse.contents = NSImage(named: "texture")

// 3. SKTexture
let plane = SCNPlane(width: 1, height: 1)
plane.firstMaterial?.diffuse.contents = SKTexture(imageNamed: "texture")
The complete example is here: https://github.com/Maetschl/SceneKitExamples/tree/master/MaterialTests
I think this has something to do with color spaces/gamma correction. My guess is that textures loaded via the SKTexture(imageNamed:) initializer aren't properly gamma corrected. You would think this would be documented somewhere, or other people would have noticed, but I can't seem to find anything.
Here's some code to swap in for the last example in your linked sample project. I've force-unwrapped as much as possible for brevity:
// Create the texture using the SKTexture(cgImage:) init
// to prove it has the same output image as SKTexture(imageNamed:)
let originalDogNSImage = NSImage(named: "dog")!
var originalDogRect = CGRect(x: 0, y: 0, width: originalDogNSImage.size.width, height: originalDogNSImage.size.height)
let originalDogCGImage = originalDogNSImage.cgImage(forProposedRect: &originalDogRect, context: nil, hints: nil)!
let originalDogTexture = SKTexture(cgImage: originalDogCGImage)
// Create the ciImage of the original image to use as the input for the CIFilter
let imageData = originalDogNSImage.tiffRepresentation!
let ciImage = CIImage(data: imageData)
// Create the gamma adjustment Core Image filter
let gammaFilter = CIFilter(name: "CIGammaAdjust")!
gammaFilter.setValue(ciImage, forKey: kCIInputImageKey)
// 0.75 is the default. 2.2 makes the dog image mostly match the NSImage(named:) initializer
gammaFilter.setValue(2.2, forKey: "inputPower")
// Create a SKTexture using the output of the CIFilter
let gammaCorrectedDogCIImage = gammaFilter.outputImage!
let gammaCorrectedDogCGImage = CIContext().createCGImage(gammaCorrectedDogCIImage, from: gammaCorrectedDogCIImage.extent)!
let gammaCorrectedDogTexture = SKTexture(cgImage: gammaCorrectedDogCGImage)
// Looks bad, like in StackOverflow question image.
// let planeWithSKTextureDog = planeWith(diffuseContent: originalDogTexture)
// Looks correct
let planeWithSKTextureDog = planeWith(diffuseContent: gammaCorrectedDogTexture)
Using a CIGammaAdjust filter with an inputPower of 2.2 makes the SKTexture almost (if not exactly) match the NSImage(named:) init. I've included the original image being loaded through SKTexture(cgImage:) to rule out any changes caused by using that initializer versus the SKTexture(imageNamed:) you asked about.

SpriteKit - could not create physics body - image occasionally returns nil

I have looked at these questions, but their answers did not work for me: 1, 2. I am targeting iOS 10.0. Occasionally when running the code, a physicsBody is not created due to a problem with the texture/image; occasionally it does get created but is completely wrong, yet I can't find anything wrong with the image, no stray pixels. Sometimes it is created perfectly fine, about 1 in 3 times. Setting the alpha threshold, as one answer suggests, also doesn't change anything.
Code:
let pigeon: SKSpriteNode = SKSpriteNode(imageNamed: "pigeonHitBox")

init() {
    pigeon.size = CGSize(width: 100, height: 100)
    pigeon.position = CGPoint(x: 400, y: 631)
    pigeon.physicsBody = SKPhysicsBody(texture: pigeon.texture ?? SKTexture(imageNamed: "pigeonHitBox"),
                                       size: pigeon.size)
    pigeon.physicsBody!.isDynamic = true
}
Example of completely wrong outline
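For reference, the alpha-threshold variant mentioned in the question is the SKPhysicsBody(texture:alphaThreshold:size:) initializer. A sketch of what was reportedly tried (the 0.5 threshold is an arbitrary example, and per the question it did not fix the issue):
let hitBoxTexture = SKTexture(imageNamed: "pigeonHitBox")
// Only pixels whose alpha exceeds the threshold count as part of the body.
pigeon.physicsBody = SKPhysicsBody(texture: hitBoxTexture,
                                   alphaThreshold: 0.5,
                                   size: pigeon.size)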

SKLabelNode extremely blurry [screenshot included]

I'm trying to add a label to an ARKit project, but it's rendering extremely blurry. See image below:
Here's my code:
let shapeNode = SKShapeNode(rectOf: CGSize(width: 30, height: 30))
shapeNode.name = "bar"
shapeNode.fillColor = UIColor.white
let labelNode = SKLabelNode(text: "Hello world")
labelNode.horizontalAlignmentMode = .left
labelNode.verticalAlignmentMode = .top
labelNode.fontColor = UIColor.black
labelNode.fontSize = 3
When you create an SKScene for display, you have to give it a size. This is the resolution at which it will be rendered. The scene is then scaled to the SKView it appears in, according to how you set its scaleMode property. If the resolution of your SKScene is lower than the point size of the view it appears in, the output is stretched to fit using a standard scaling algorithm and will therefore be blurry.
Try increasing the size of your SKScene a little and see if that helps. Note that you will likely also have to adjust the size and position of your nodes, as these will appear to shrink as the scene gets larger.
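As a rough sketch of that advice (the numbers are arbitrary examples, not values taken from the question):
// Render the scene at a higher resolution and let SpriteKit scale it down.
let scene = SKScene(size: CGSize(width: 1000, height: 1000))
scene.scaleMode = .aspectFit
// Node metrics are in scene points, so scale them up along with the scene.
labelNode.fontSize = 100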

SpriteKit. Why does .strokeTexture not work for SKShapeNode?

I can't understand why the strokeTexture option isn't working on an SKShapeNode. Or, how can I draw a border with a texture? Thanks in advance!
let shape = SKShapeNode(circleOfRadius: 100)
shape.position = CGPoint(x: 200, y: 400)
shape.fillColor = SKColor.white
shape.fillTexture = SKTexture(imageNamed: "test")
shape.lineWidth = 50
shape.strokeColor = SKColor.white
shape.strokeTexture = SKTexture(imageNamed: "test(5)")
Output:
Test image:
Test(5) image:
It doesn't work in the Simulator! Try it on your device.
I get the feeling the stroke might be drawn with Metal, whereas the fill somehow is not, which is why the fill is visible in the Simulator.
The fill also doesn't rotate with the object on the device, but the stroke/outline and its texture do.
This would tend to indicate that an SKShapeNode without a fill might have reasonable performance.
Set the width and height of the texture used to render the stroke to a multiple of 8 pixels. In your case, Test(5) has dimensions of 100x100. If you change this texture to one with dimensions of, e.g., 96x96 pixels, the stroke will be rendered correctly and displayed. I don't know why there is no reference to this fact in the official documentation.
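If you don't want to re-export the asset, here is a sketch of resizing it at runtime with UIGraphicsImageRenderer (the 96x96 size comes from the answer above; any multiple of 8 should do):
let strokeImage = UIImage(named: "test(5)")!
let targetSize = CGSize(width: 96, height: 96) // multiple of 8, per the note above
let resized = UIGraphicsImageRenderer(size: targetSize).image { _ in
    strokeImage.draw(in: CGRect(origin: .zero, size: targetSize))
}
shape.strokeTexture = SKTexture(image: resized)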