Rendering an SCNScene with transparent background makes the scene semi-transparent - swift

My goal is to render an SCNScene off screen with a transparent background, as a PNG. Full reproducing project here.
It works, but when I enable jittering, the resulting render is semi-transparent. In this example I pasted the resulting PNG on top of an image with black squares, and you will notice that the black squares are in fact visible:
As you can see, the black boxes are visible through the 3D objects.
But, when I disable jittering, I get this:
As you can see, the black boxes are not visible.
I'm on Monterey 12.1 (21C52). I'm testing the images in Preview and in Figma.
I'm using standard SDK features only. Here's what I do:
scene.background.contents = NSColor.clear
let snapshotRenderer = SCNRenderer(device: MTLCreateSystemDefaultDevice())
snapshotRenderer.pointOfView = sceneView.pointOfView
snapshotRenderer.scene = scene
snapshotRenderer.scene!.background.contents = NSColor.clear
snapshotRenderer.autoenablesDefaultLighting = true
// setting this to false does not make the image semi-transparent
snapshotRenderer.isJitteringEnabled = true
let size = CGSize(width: 1000, height: 1000)
let image = snapshotRenderer.snapshot(atTime: .zero, with: size, antialiasingMode: .multisampling16X)
let imageRep = NSBitmapImageRep(data: image.tiffRepresentation!)
let pngData = imageRep?.representation(using: .png, properties: [:])
try! pngData!.write(to: destination)
The docs for jittering say:
Jittering is a process that SceneKit uses to improve the visual quality of a rendered scene. While the scene’s content is still, SceneKit moves the pointOfView location very slightly (by less than a pixel in projected screen space). It then composites images rendered after several such moves to create the final rendered scene, creating an antialiasing effect that smooths the edges of rendered geometry.
To me, that doesn't sound like something that is expected to produce semi-transparency?
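For what it's worth, the transparency can also be checked programmatically instead of eyeballing it in Preview or Figma. A small diagnostic sketch (not part of the original question; the sample coordinates are arbitrary), reading the alpha of a pixel inside the rendered geometry from the NSBitmapImageRep that is already created above:
// Diagnostic only: with jittering enabled this is expected to print a value
// noticeably below 1.0; with jittering disabled it should print 1.0.
if let imageRep = imageRep,
   let color = imageRep.colorAt(x: 500, y: 500) { // arbitrary point inside the geometry
    print("alpha at (500, 500):", color.alphaComponent)
}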

Related

Apple Vision API, Cropping from Original Image Based on Landmark Position

I have a Swift-based iPhone application (built from this tutorial: https://www.kodeco.com/1163620-face-detection-tutorial-using-the-vision-framework-for-ios). It takes the camera feed, processes each frame with the Vision API to find the landmarks on a face, and then draws an overlay of the landmarks on the video. All I am trying to do is take the position of a landmark and crop a rectangle around that position from the original image (after that I was going to run it through an ML model to determine some things). However, I have an issue translating the Vision API landmark position back to the original image location to do the cropping. Below are, hopefully, the relevant portions of the code that show how I attempted to do this and failed (I pulled code from a number of functions/classes just to focus on the problem).
Capture video frame
guard let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
let ciimage = CIImage(cvImageBuffer: imageBuffer)
Find the Face Landmarks
let detectFaceRequest = VNDetectFaceLandmarksRequest(completionHandler: detectedFace)
try? sequenceHandler.perform([detectFaceRequest], on: imageBuffer, orientation: .leftMirrored)
Get the left Eye Pupil Location
let point = result.landmarks?.leftPupil?.pointsInImage(imageSize: ciimage.extent.size).first
Draw the cropped image around the leftPupil
if let point = point {
    let cropped = ciimage.cropped(to: CGRect(x: point.x - 100, y: point.y - 100, width: 200, height: 200))
    let uicropped = UIImage(ciImage: cropped)
    uicropped.draw(at: CGPoint(x: 100, y: 100))
}
The issue is the cropped image is not positioned over the left pupil.
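One hedged guess at the mismatch (not from the original post): the request is performed with orientation .leftMirrored, so the landmark coordinates refer to the oriented image, while the crop is taken from the unrotated CIImage. A minimal sketch of aligning the two; result, ciimage and the 200 x 200 crop size are taken from the code above, everything else is an assumption:
// Orient the CIImage the same way the Vision request saw it, so the landmark
// coordinates and the pixels refer to the same space.
let oriented = ciimage.oriented(.leftMirrored)
if let pupil = result.landmarks?.leftPupil?.pointsInImage(imageSize: oriented.extent.size).first {
    // 200 x 200 rect centred on the pupil, clamped to the image bounds
    let rect = CGRect(x: pupil.x - 100, y: pupil.y - 100, width: 200, height: 200)
        .intersection(oriented.extent)
    let cropped = oriented.cropped(to: rect)
    // Note: CIImage uses a bottom-left origin while UIKit drawing is top-left,
    // so a vertical flip may still be needed before drawing the UIImage.
    let uicropped = UIImage(ciImage: cropped)
    uicropped.draw(at: CGPoint(x: 100, y: 100))
}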

SceneKit LIDAR iOS: Show unscanned regions of camera view in the background with a different color/texture

I'm building an app similar to Polycam, 3D Scanner App, Scaniverse, etc. I visualize a mesh for scanned regions and export it into different formats. I would like to show the user which regions are scanned and which are not. To do so, I need to differentiate between them.
My idea is to build something like Polycam does:
< Polycam blue background for unscanned regions >
I tried changing the background content property of the scene, but it causes the whole camera view to be replaced by the color.
arSceneView.scene.background.contents = UIColor.black
I'm using ARSCNView and setting up plane detection as follows:
private func setupPlaneDetection() {
    let configuration = ARWorldTrackingConfiguration()
    configuration.planeDetection = [.horizontal, .vertical]
    configuration.sceneReconstruction = .meshWithClassification
    configuration.frameSemantics = .smoothedSceneDepth

    arSceneView.session.run(configuration)
    arSceneView.session.delegate = self
    // arSceneView.scene.background.contents = UIColor.black
    arSceneView.delegate = self

    UIApplication.shared.isIdleTimerDisabled = true
    arSceneView.showsStatistics = true
}
Thanks in advance for any help you can provide!
I’ve done this before by adding a sphere to the scene with a two-sided material (slightly transparent) and with a radius large enough that the camera and the scanned surface will always be inside of it. Here’s an example of how to do that:
let backgroundSphereNode = SCNNode()
backgroundSphereNode.geometry = SCNSphere(radius: 500)

let material = SCNMaterial()
material.isDoubleSided = true
material.diffuse.contents = UIColor(white: 0, alpha: 0.9)

backgroundSphereNode.geometry?.materials = [material]
Note that I’m using a black color - you can obviously change this to whatever you need, but keep the alpha channel slightly transparent. And tweak the radius of the sphere so it works for your scene.
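Presumably the sphere then just needs to be attached to the scene; a minimal usage line, using the ARSCNView name from the question:
arSceneView.scene.rootNode.addChildNode(backgroundSphereNode)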

Turning a UIBezierPath into a mask?

Not sure if I am asking this question correctly, but I have two components: a CIImage and a UIBezierPath. Ideally, I want to create a CGRect that encapsulates my UIBezierPath; everything inside of the path would be white, everything outside of the path would be black. This way, I can then render this CGRect to some sort of image, which I could then use as a mask for other purposes.
I am struggling to figure out how to do this with a focus on performance. My tests, as noted below, use UIGraphicsImageRenderer, which is far too slow for my needs (I will be doing this on sample buffers from a camera). Therefore, I would like to stick within Core Image. This is my attempt:
// Path
let path = UIBezierPath()
// ... define the path's shape and close it
// My source image
let image = CIImage(cgImage: UIImage(named: "test.jpg")!.cgImage!)
// Renderer
let renderer = UIGraphicsImageRenderer(size: image.extent.size)
// Render path as mask
let img = renderer.image { ctx in
    ctx.cgContext.setFillColor(UIColor.black.cgColor)
    ctx.cgContext.fill(CGRect(x: 0, y: 0, width: image.extent.size.width, height: image.extent.size.height))
    ctx.cgContext.setFillColor(UIColor.white.cgColor)
    ctx.cgContext.addPath(path.cgPath)
    ctx.cgContext.drawPath(using: .fill)
}
// Put a filter on the image
let imageFiltered = image.applyingFilter("CIPhotoEffectNoir")
// Blend with mask
let maskFilter = CIFilter.blendWithMask()
maskFilter.inputImage = imageFiltered
maskFilter.backgroundImage = image
maskFilter.maskImage = CIImage(cgImage: img.cgImage!)
// Output
if let output = maskFilter.outputImage {
    // ... use CIContext() to render back to a CVPixelBuffer for preview on an MTKView.
}
Overall, the goal is to have a defined portion of an image (which will not conform to a traditional shape like a square or circle) filtered with a CIFilter, then composited back over the original. If there is a better approach (such as somehow taking the original image, filtering it, cropping it to the path so that everything outside of the path is left transparent, and compositing), that would likely perform better.
To note, the above sample code results in a crash as the UIGraphicsImageRenderer cannot render the mask fast enough.
Your approach looks good so far. I assume the slow part is the generation of the mask image with Core Graphics. Unfortunately, there is no direct way to do the same with Core Image (on the GPU). However, you can try the following:
Assuming (from your previous question) that the path always has a certain shape, you can generate a mask image containing the path once, for a reference size of your choice. Make sure that the path doesn't "touch" the border.
Then, when you want to use it as a mask, move and scale the shape image to the correct place using transformations and let its edges extend infinitely (to cover the whole underlying image; that's why the shape shouldn't touch the edges). Something like this:
let pathImage = CIImage(cgImage: img.cgImage!)
// scale path to the size of the area you want to mask
var mask = pathImage.transformed(by: CGAffineTransform(scaleX: scaleX, y: scaleY))
// move path to the place you want to cover
mask = mask.transformed(by: CGAffineTransform(translationX: offsetX, y: offsetY))
// let mask fill the rest of the area
mask = mask.clampedToExtent()
// use mask as maskImage...
You should be able to recycle the pathImage for every frame and thereby avoid Core Graphics and CPU-GPU synchronization.
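For completeness, a sketch of feeding the transformed mask back into the blend from the question (variable names are taken from the snippets above; ciContext and pixelBuffer are assumed to exist and be reused across frames):
import CoreImage
import CoreImage.CIFilterBuiltins

let maskFilter = CIFilter.blendWithMask()
maskFilter.inputImage = image.applyingFilter("CIPhotoEffectNoir")
maskFilter.backgroundImage = image
maskFilter.maskImage = mask // the scaled, translated and clamped pathImage from above
if let output = maskFilter.outputImage?.cropped(to: image.extent) {
    // Render with a long-lived CIContext; creating a new context per frame is expensive.
    ciContext.render(output, to: pixelBuffer)
}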

How to prevent distorted images?

I have the problem that the images I add are distorted. I have created a pixel-accurate background for the iPhone X (1125 x 2436), so I don't have to use .aspectFill or .aspectFit, because I want a screen without black borders.
I use the following code to create the images:
func animateDeck() {
    let chip = SKSpriteNode(imageNamed: "Chip")
    chip.position = CGPoint(x: 300, y: 400)
    chip.zPosition = 2
    chip.setScale(1)
    gameScene2.addChild(chip)
    print("test")
}
Is there a way to display the images in their correct size without using .aspectFit or .aspectFill?
now (left) and how it should be (right)
Thank you in advance!
Check out this project I just made to show you how to create a texture and apply it to a node. All you need should be in GameScene.swift.
Also, in your ViewController, make sure that your GameScene is initialised properly as shown in my project, or how you did it with this:
gameScene2 = GameScene(size: view.bounds.size)
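A minimal sketch of what that initialisation could look like in the view controller (a guess, not taken from the linked project; skView is an assumed name, gameScene2 is the property from the question):
override func viewDidLoad() {
    super.viewDidLoad()
    guard let skView = view as? SKView else { return }
    // Size the scene to the view so scene points map 1:1 to view points,
    // avoiding the black borders of .aspectFit and the cropping of .aspectFill.
    gameScene2 = GameScene(size: skView.bounds.size)
    gameScene2.scaleMode = .resizeFill
    skView.presentScene(gameScene2)
}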

SKCropNode Strange Behaviour

When using SKCropNode, I wanted the image I add to the crop node to adjust each individual pixel alpha value in accordance with the corresponding mask pixel alpha value.
After a lot of research, I came to the conclusion that the image pixel alpha values were not going to adjust to the mask. However, after just continuing with my project, I noticed that one specific crop node image's pixels were in fact fading to the mask pixel alpha values, which was great! But even after reproducing this, I don't know why it is doing it.
import SpriteKit

var textureArray: [SKTexture] = []
var display: SKSpriteNode!

class GameScene: SKScene {

    override func didMove(to view: SKView) {
        anchorPoint = CGPoint(x: 0.5, y: 0.5)
        backgroundColor = UIColor.green

        fetchTextures()

        display = SKSpriteNode()
        let image = SKSpriteNode(texture: textureArray[0])
        display.addChild(image)

        let randomCropNode = SKCropNode()
        display.addChild(randomCropNode)

        let cropNode = SKCropNode()
        cropNode.maskNode = display

        let fill = SKSpriteNode(color: UIColor.white, size: frame.size)
        cropNode.addChild(fill)
        cropNode.zPosition = 10
        addChild(cropNode)
    }

    func fetchTextures() {
        var x: Int = 0
        while x < 1 {
            let texture: SKTexture = SKTextureAtlas(named: "texture").textureNamed("\(x)")
            textureArray.append(texture)
            x += 1
        }
    }
}
The above code gives me my desired effect; however, if you remove the code below, the image pixel alpha values no longer adjust in accordance with the mask. The code below is not actually used in my project, but it's the only way I can make the pixel alpha values adjust.
let randomCropNode = SKCropNode()
display.addChild(randomCropNode)
Can anybody see what is causing this behaviour, or if there is a better way of getting my desired effect?
Mask:
Result:
If I remove:
let randomCropNode = SKCropNode()
display.addChild(randomCropNode)
Result:
A crop node will only turn pixels on and off: alpha < 0.5 is off, and alpha >= 0.5 is on.
However, to apply a fade, if your alpha mask is just black (with various alpha levels) and transparent, you apply the mask as a regular texture to your crop node, and you let alpha blending take care of the fade effect.
As for your issues with the code, are you sure your crop node is cropping, and not just rendering the texture? I do not know what the texture looks like to try and reproduce this.
The node supplied to the crop node must not be a child of another node; however, it may have children of its own.
When the crop node’s contents are rendered, the crop node first draws its mask into a private buffer. Then, it renders its children. When rendering its children, each pixel is verified against the corresponding pixel in the mask. If the pixel in the mask has an alpha value of less than 0.05, the image pixel is masked out. Any pixel not rendered by the mask node is automatically masked out.
https://developer.apple.com/library/ios/documentation/SpriteKit/Reference/SKCropNode_Ref/#//apple_ref/occ/instp/SKCropNode/maskNode
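A rough sketch of one reading of the fade-via-blending suggestion above (the texture names are placeholders, not from the question): draw the alpha-varying mask as an ordinary sprite on top of the content, so normal alpha blending produces the fade instead of the crop node's hard on/off masking.
// The content to be faded
let content = SKSpriteNode(texture: SKTexture(imageNamed: "content"))
addChild(content)

// The mask drawn as a regular sprite: its per-pixel alpha blends over the content,
// giving a gradual fade rather than the crop node's binary masking.
let fadeOverlay = SKSpriteNode(texture: SKTexture(imageNamed: "mask"))
fadeOverlay.blendMode = .alpha
fadeOverlay.zPosition = 1
addChild(fadeOverlay)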