SKCropNode Strange Behaviour - swift

When using SKCropNode, I wanted the image I add to the crop node to adjust each individual pixel's alpha value according to the corresponding mask pixel's alpha value.
After a lot of research I concluded that the image's pixel alpha values were not going to follow the mask. However, while continuing with my project, I noticed that one specific crop node image's pixels were in fact fading to the mask's pixel alpha values, which was great! But after reproducing this, I can't work out why it happens.
import SpriteKit

var textureArray: [SKTexture] = []
var display: SKSpriteNode!

class GameScene: SKScene {

    override func didMoveToView(view: SKView) {
        anchorPoint = CGPointMake(0.5, 0.5)
        backgroundColor = UIColor.greenColor()
        fetchTextures()

        display = SKSpriteNode()
        let image = SKSpriteNode(texture: textureArray[0])
        display.addChild(image)

        let randomCropNode = SKCropNode()
        display.addChild(randomCropNode)

        let cropNode = SKCropNode()
        cropNode.maskNode = display

        let fill = SKSpriteNode(color: UIColor.whiteColor(), size: frame.size)
        cropNode.addChild(fill)
        cropNode.zPosition = 10
        addChild(cropNode)
    }

    func fetchTextures() {
        var x: Int = 0
        while x < 1 {
            let texture: SKTexture = SKTextureAtlas(named: "texture").textureNamed("\(x)")
            textureArray.append(texture)
            x += 1
        }
    }
}
The above code gives me my desired effect. However, if you remove the two lines below, the image's pixel alpha values no longer adjust in accordance with the mask. The code below is not actually used in my project, but it's the only way I can make the pixel alpha values adjust.
let randomCropNode = SKCropNode()
display.addChild(randomCropNode)
Can anybody see what is causing this behaviour, or suggest a better way of getting my desired effect?
Mask:
Result:
If I remove:
let randomCropNode = SKCropNode()
display.addChild(randomCropNode)
Result:

A crop node only turns pixels on and off: mask alpha below 0.5 means off, 0.5 or above means on.
To apply a fade instead, if your alpha mask is just black (with varying alpha levels) and transparent, you add the mask as a regular texture (an ordinary child sprite) to your crop node and let alpha blending take care of the fade effect.
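A minimal sketch of that idea (the texture names here are placeholders, not from the question): the hard-edged mask still drives the crop, while the soft black-with-alpha image is added as an ordinary child drawn over the content so that normal alpha blending produces the fade:
let cropNode = SKCropNode()
// Hard on/off crop driven by the mask node
cropNode.maskNode = SKSpriteNode(texture: SKTexture(imageNamed: "hardMask"))

// The content being cropped
let content = SKSpriteNode(texture: SKTexture(imageNamed: "content"))
cropNode.addChild(content)

// The soft mask (black with varying alpha) is NOT the maskNode; it is an
// ordinary sprite drawn on top, so regular alpha blending creates the fade
let fade = SKSpriteNode(texture: SKTexture(imageNamed: "fadeMask"))
fade.zPosition = 1
cropNode.addChild(fade)

addChild(cropNode)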
As for your issues with the code, are you sure your crop node is actually cropping and not just rendering the texture? I don't know what the texture looks like, so I can't try to reproduce this.
The node supplied to the crop node must not be a child of another
node; however, it may have children of its own.
When the crop node’s contents are rendered, the crop node first draws
its mask into a private buffer. Then, it renders its children. When
rendering its children, each pixel is verified against the
corresponding pixel in the mask. If the pixel in the mask has an alpha
value of less than 0.05, the image pixel is masked out. Any pixel not
rendered by the mask node is automatically masked out.
https://developer.apple.com/library/ios/documentation/SpriteKit/Reference/SKCropNode_Ref/#//apple_ref/occ/instp/SKCropNode/maskNode

Related

Turning a UIBezierPath into a mask?

Not sure if I am asking this question correctly, but I have two components: a CIImage and a UIBezierPath. Ideally, I want to create a CGRect that encapsulates my UIBezierPath; everything inside of the path would be white, everything outside of the path would be black. This way, I could render that CGRect to some sort of image, which I could then use as a mask for other purposes.
I am struggling to figure out how to do this with a focus on performance. My tests, as shown below, use UIGraphicsImageRenderer, which is far too slow for my needs (I will be doing this on sample buffers from a camera). Therefore, I would like to stay within Core Image. This is my attempt:
// Path
let path = UIBezierPath()
// ... define the path's shape and close it
// My source image
let image = CIImage(cgImage: UIImage(named: "test.jpg")!.cgImage!)
// Renderer
let renderer = UIGraphicsImageRenderer(size: image.extent.size)
// Render path as mask
let img = renderer.image { ctx in
    ctx.cgContext.setFillColor(UIColor.black.cgColor)
    ctx.cgContext.fill(CGRect(x: 0, y: 0, width: image.extent.size.width, height: image.extent.size.height))
    ctx.cgContext.setFillColor(UIColor.white.cgColor)
    ctx.cgContext.addPath(path.cgPath)
    ctx.cgContext.drawPath(using: .fill)
}
// Put a filter on the image
let imageFiltered = image.applyingFilter("CIPhotoEffectNoir")
// Blend with mask
let maskFilter = CIFilter.blendWithMask()
maskFilter.inputImage = imageFiltered
maskFilter.backgroundImage = image
maskFilter.maskImage = CIImage(cgImage: img.cgImage!)
// Output
if let output = maskFilter.outputImage {
    // ... use CIContext() to render back to CVPixelBuffer for preview on MTKView.
}
Overall, the goal is to have a defined portion of an image (which will not conform to a traditional shape like a square or circle) filtered with a CIFilter and then composited back over the original. If there is a better approach (such as taking the original image, filtering it, cropping it to the path so everything outside the path is transparent, and compositing), that would likely perform better.
To note, the above sample code results in a crash because UIGraphicsImageRenderer cannot render the mask fast enough.
Your approach looks good so far. I assume the slow part is the generation of the mask image with Core Graphics. Unfortunately, there is no direct way to do the same with Core Image (on the GPU). However, you can try the following:
Assuming (from your previous question) that the path always has a certain shape, you can generate a mask image containing the path once, for a reference size of your choice. Make sure that the path doesn't "touch" the border.
Then, when you want to use it as a mask, move and scale the shape image into place using transformations and let its edges extend infinitely (to cover the whole underlying image; that's why the shape shouldn't touch the edges). Something like this:
let pathImage = CIImage(cgImage: img.cgImage!)
// scale path to the size of the area you want to mask
var mask = pathImage.transformed(by: CGAffineTransform(scaleX: scaleX, y: scaleY))
// move path to the place you want to cover
mask = mask.transformed(by: CGAffineTransform(translationX: offsetX, y: offsetY))
// let mask fill the rest of the area
mask = mask.clampedToExtent()
// use mask as maskImage...
You should be able to reuse the pathImage for every frame and thereby avoid Core Graphics and CPU-GPU synchronization.
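For the one-time generation of the reference mask, a minimal sketch (the referenceSize value is an arbitrary assumption, and path is the UIBezierPath from the question, assumed to fit inside the reference rect without touching its border):
let referenceSize = CGSize(width: 512, height: 512)
let renderer = UIGraphicsImageRenderer(size: referenceSize)
let maskUIImage = renderer.image { ctx in
    // Black background, white path fill
    UIColor.black.setFill()
    ctx.fill(CGRect(origin: .zero, size: referenceSize))
    UIColor.white.setFill()
    path.fill()
}
// This Core Graphics work happens only once; per frame you only transform pathImage as shown above.
let pathImage = CIImage(cgImage: maskUIImage.cgImage!)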

iOS Fill half of UIBezierPath with other color without CAGradientLayer

I have a question about drawing half of a UIBezierPath. How do I fill the left part (left of the thumb) with green and the right part (right of the thumb) with white, without using CAGradientLayer?
Code I used to create Bezier Path - https://gist.github.com/robertmryan/67484c74297cede3926a3aed2fceedb9
Screenshot of what I want to achieve:
One approach is to add a mask layer to your curved-path shape layer.
When the "thumb" position changes, change the width of the mask to reveal only the "left-side" of the shape layer.
Create a layer to use as the mask:
let maskLayer: CALayer = {
    let layer = CALayer()
    layer.backgroundColor = UIColor.black.cgColor
    return layer
}()
In viewDidLoad() set that layer as the mask for the curved-shape layer:
pathLayer.mask = maskLayer
Whenever the "thumb" position is set, update the width of the mask:
func updateMask(at point: CGPoint) -> Void {
    var f = view.bounds
    f.size.width = point.x
    CATransaction.begin()
    CATransaction.setDisableActions(true)
    maskLayer.frame = f
    CATransaction.commit()
}
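A hypothetical usage example (the pan-gesture handler and thumbView are assumptions, not part of the gist); call updateMask(at:) whenever the thumb moves:
@objc func handlePan(_ gesture: UIPanGestureRecognizer) {
    let point = gesture.location(in: view)
    thumbView.center.x = point.x   // move the (assumed) thumb view
    updateMask(at: point)          // reveal only the left side of the shape layer
}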
I posted a modified version of your gist at: https://gist.github.com/DonMag/a2154e70a3c67193a7b19bee41c8fe95
It really has only a few changes... look for comments beginning with // DonMag -
Here is the result (with an imageView behind it to show the transparency):
Edit
After comments, the goal is to have the "right side" of the track path be white instead of transparent.
Using the same approach, we can add a white shape layer on top of the original shape layer and mask it to show only the right-hand side, as sketched below.
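A rough sketch of that idea (whitePathLayer and pathLayer are assumed names; the complete version is in the gist linked below):
// A second copy of the curved path, drawn in white
let whitePathLayer = CAShapeLayer()
whitePathLayer.path = pathLayer.path
whitePathLayer.strokeColor = UIColor.white.cgColor
whitePathLayer.fillColor = UIColor.clear.cgColor
whitePathLayer.lineWidth = pathLayer.lineWidth

// Its own mask; when the thumb moves, set this layer's frame to cover
// only the area to the right of the thumb (mirroring updateMask(at:))
let rightMaskLayer = CALayer()
rightMaskLayer.backgroundColor = UIColor.black.cgColor
whitePathLayer.mask = rightMaskLayer
view.layer.addSublayer(whitePathLayer)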
Here is an updated gist - https://gist.github.com/DonMag/397dfbe4779e817531ef7a663365b2e7 - showing this result:

Repeating a texture over a plane in SceneKit

I have a 32x32 .png image that I want to repeat over an SCNPlane. The code I've got (see below) results in the image being stretched to fit the size of the plane rather than repeated.
CODE:
let planeGeo = SCNPlane(width: 15, height: 15)
let imageMaterial = SCNMaterial()
imageMaterial.diffuse.contents = UIImage(named: "art.scnassets/grid.png")
planeGeo.firstMaterial = imageMaterial
let plane = SCNNode(geometry: planeGeo)
plane.geometry?.firstMaterial?.diffuse.wrapS = SCNWrapMode.repeat
plane.geometry?.firstMaterial?.diffuse.wrapT = SCNWrapMode.repeat
I fixed it. It seems like the image was zoomed in. If I do imageMaterial.diffuse.contentsTransform = SCNMatrix4MakeScale(32, 32, 0), the image repeats.
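Putting the original code and that fix together, a sketch of the full working setup (the 32 x 32 scale value comes from the self-answer above and depends on how many tiles you want):
let planeGeo = SCNPlane(width: 15, height: 15)
let imageMaterial = SCNMaterial()
imageMaterial.diffuse.contents = UIImage(named: "art.scnassets/grid.png")
// Repeat the texture instead of stretching it across the plane
imageMaterial.diffuse.wrapS = SCNWrapMode.repeat
imageMaterial.diffuse.wrapT = SCNWrapMode.repeat
// Tile the texture 32 x 32 times across the plane
imageMaterial.diffuse.contentsTransform = SCNMatrix4MakeScale(32, 32, 0)
planeGeo.firstMaterial = imageMaterial
let plane = SCNNode(geometry: planeGeo)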
I faced an identical issue when implementing plane visualisation in ARKit. I wanted to visualise the detected plane as a checkerboard pattern. I fixed it by creating a custom SCNNode called a "PlaneNode" with a correctly configured SCNMaterial. The material uses wrapS, wrapT = .repeat and calculates the scale correctly based on the size of the plane itself.
Looks like this:
Have a look at the code below, the inline comments contain the explanation.
class PlaneNode: SCNNode {

    init(planeAnchor: ARPlaneAnchor) {
        super.init()

        // Create the 3D plane geometry with the dimensions reported
        // by ARKit in the ARPlaneAnchor instance
        let planeGeometry = SCNPlane(width: CGFloat(planeAnchor.extent.x), height: CGFloat(planeAnchor.extent.z))

        // Instead of just visualizing the grid as a gray plane, we will render
        // it in some Tron style colours.
        let material = SCNMaterial()
        material.diffuse.contents = PaintCode.imageOfViewARPlane

        // The scale gives the number of times the image is repeated.
        // ARKit gives the width and height in meters; in this case we want to repeat
        // the pattern every 2 cm = 0.02 m, so we divide the width/height to find the number of patterns.
        // We then round this so that we always have a clean repeat and not a truncated one.
        let scaleX = (Float(planeGeometry.width) / 0.02).rounded()
        let scaleY = (Float(planeGeometry.height) / 0.02).rounded()

        // Apply the scaling
        material.diffuse.contentsTransform = SCNMatrix4MakeScale(scaleX, scaleY, 0)

        // Set repeat mode in both directions, otherwise the pattern is stretched!
        material.diffuse.wrapS = .repeat
        material.diffuse.wrapT = .repeat

        // Apply the material
        planeGeometry.materials = [material]

        // Make a node for it
        self.geometry = planeGeometry

        // Move the plane to the position reported by ARKit
        position.x = planeAnchor.center.x
        position.y = 0
        position.z = planeAnchor.center.z

        // Planes in SceneKit are vertical by default so we need to rotate
        // 90 degrees to match planes in ARKit
        transform = SCNMatrix4MakeRotation(-Float.pi / 2.0, 1.0, 0.0, 0.0)
    }

    required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    func update(planeAnchor: ARPlaneAnchor) {
        guard let planeGeometry = geometry as? SCNPlane else {
            fatalError("update(planeAnchor:) called on a node that has no SCNPlane geometry")
        }

        // Update the size
        planeGeometry.width = CGFloat(planeAnchor.extent.x)
        planeGeometry.height = CGFloat(planeAnchor.extent.z)

        // ...and the material properties
        let scaleX = (Float(planeGeometry.width) / 0.02).rounded()
        let scaleY = (Float(planeGeometry.height) / 0.02).rounded()
        planeGeometry.firstMaterial?.diffuse.contentsTransform = SCNMatrix4MakeScale(scaleX, scaleY, 0)

        // Move the plane to the position reported by ARKit
        position.x = planeAnchor.center.x
        position.y = 0
        position.z = planeAnchor.center.z
    }
}
To do this in the SceneKit editor, select your plane (add one if needed) in the scene and then select the "Material Inspector" tab on the top right. Then, under "Properties", where it says "Diffuse", select your texture. Now expand the diffuse section by clicking the caret to the left of "Diffuse" and go down to where it says "Scale". Here you can increase the scaling so that the texture looks repeated rather than stretched. For this question, the OP would have to set the scaling to 32x32.
You can work this out in the SceneKit editor. Suppose you have an SCNPlane in your scene:
Create a scene file and drag in a plane whose size is 12 inches (0.3048 in meters), and select your image as the diffuse contents.
The image contains a 4 x 4 grid, as shown in the screenshot.
We want one box per inch, so for a 12-inch plane we need 12 x 12 boxes.
To calculate the scale, first convert 0.3048 meters to inches: meters / 0.0254 = 12.
But each grid image already contains 4 boxes, so we also divide: 12 / 4 = 3.
Now open the material inspector and change the scale value to 3, and you will see 12 boxes across the 12-inch plane, as worked through in the sketch below.
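The same calculation expressed in code (a sketch using the numbers from this answer):
let planeWidthInMeters: CGFloat = 0.3048           // a 12-inch plane
let widthInInches = planeWidthInMeters / 0.0254    // = 12
let boxesPerImage: CGFloat = 4                     // the texture already contains a 4 x 4 grid
let scale = widthInInches / boxesPerImage          // = 3 -> the "Scale" value in the material inspector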
Hope it is helpful

How to force SKTextureAtlas created from a dictionary to not modify textures size?

In my project, textures are procedurally generated by drawing methods provided by PaintCode (paint-code).
I then create an SKTextureAtlas from a dictionary filled with the UIImages generated by these methods:
myAtlas = SKTextureAtlas(dictionary: myTextures)
Finally, textures are retrieved from the atlas using textureNamed:
var sprite1 = SKSpriteNode(texture:myAtlas.textureNamed("texture1"))
But the displayed nodes are double-sized on the iPhone 4S simulator, and triple-sized on the iPhone 6 Plus simulator.
It seems that, at init, the atlas recomputes the images at the device resolution.
But the generated images already have the correct size and do not need to be changed. See the drawing method below.
Here is the description of the generated image:
<UIImage: 0x7f86cae56cd0>, {52, 52}
And the description of the corresponding texture in atlas:
<SKTexture> 'image1' (156 x 156)
This is for the iPhone 6 Plus, which uses @3x images; that's why the size is x3.
And for the iPhone 4S, which uses @2x images, as expected:
<UIImage: 0x7d55dde0>, {52, 52}
<SKTexture> 'image1' (156 x 156)
Finally, the scale property of the generated UIImage is set to the right device resolution: 2.0 for @2x (iPhone 4S) and 3.0 for @3x (iPhone 6 Plus).
The Question
So what can I do to prevent the atlas from resizing the pictures?
Drawing method
PaintCode generates drawing methods like the following:
public class func imageOfCell(#frame: CGRect) -> UIImage {
    UIGraphicsBeginImageContextWithOptions(frame.size, false, 0)
    StyleKit.drawCell(frame: frame)
    let imageOfCell = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return imageOfCell
}
Update 1
Comparing two approaches to generate SKTextureAtlas
// Some test image
let testImage:UIImage...
// Atlas creation
var myTextures = [String:UIImage]()
myTextures["texture1"] = testImage
myAtlas = SKTextureAtlas(dictionary: myTextures)
// Create two textures from the same image
let texture1 = myAtlas.textureNamed("texture1")
let texture2 = SKTexture(image:testImage)
// Wrong display : node is oversized
var sprite1 = SKSpriteNode(texture:texture1)
// Correct display
var sprite2 = SKSpriteNode(texture:texture2)
It seems that the problem lies in SKTextureAtlas created from a dictionary, as the SKSpriteNode initialization does not use the UIImage's scale property to correctly size the node.
Here are descriptions on console:
- texture1: '' (84 x 84)
- texture2: 'texture1' (84 x 84)
texture1 is missing some data! That could explain the lack of scale information to properly size the node, since:
node's size = texture's size divided by texture's scale.
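Worked through with the numbers from the question: the source image is 52 x 52 points, so the @3x texture is 156 x 156 pixels; with the scale preserved the node comes out at 156 / 3 = 52 points, but when the scale is lost (treated as 1) the node ends up 156 points wide, i.e. three times too large.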
Update 2
The problem occurs when the scale property of the UIImage is different from one.
So you can use the following method to generate the picture:
func imageOfCell(frame: CGRect, color: SKColor) -> UIImage {
    UIGraphicsBeginImageContextWithOptions(frame.size, false, 0)
    let bezierPath = UIBezierPath(rect: frame)
    color.setFill()
    bezierPath.fill()
    let imageOfCell = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return imageOfCell
}
The problem comes from using SKTextureAtlas(dictionary:) to initialize the atlas.
An SKTexture created this way does not embed the image's scale property, so when an SKSpriteNode is created with init(texture:), the missing scale information causes the texture's pixel size to be used in place of the image's point size.
One way to correct this is to provide the node's size explicitly during SKSpriteNode creation with init(texture:size:).
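A minimal sketch of that workaround, reusing the names from Update 1 (the point size comes from the source UIImage):
let texture1 = myAtlas.textureNamed("texture1")
// Pass the UIImage's point size explicitly so the node is not sized from the texture's pixel size
let sprite1 = SKSpriteNode(texture: texture1, size: testImage.size)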
From the documentation for the scale parameter for UIGraphicsBeginImageContextWithOptions,
The scale factor to apply to the bitmap. If you specify a value of
0.0, the scale factor is set to the scale factor of the device’s main screen.
Therefore, if you want the textures to be the same "size" across all devices, set this value to 1.0.
EDIT:
override func didMoveToView(view: SKView) {
    let image = imageOfCell(CGRectMake(0, 0, 10, 10), scale: 0)
    let dict: [String: UIImage] = ["t1": image]
    let texture = SKTextureAtlas(dictionary: dict)
    let sprite1 = SKSpriteNode(texture: texture.textureNamed("t1"))
    sprite1.position = CGPointMake(CGRectGetMidX(view.frame), CGRectGetMidY(view.frame))
    addChild(sprite1)
    println(sprite1.size)
    // prints (30.0, 30.0) if scale = 0
    // prints (10.0, 10.0) if scale = 1
}

func imageOfCell(frame: CGRect, scale: CGFloat) -> UIImage {
    UIGraphicsBeginImageContextWithOptions(frame.size, false, scale)
    let bezierPath = UIBezierPath(rect: frame)
    UIColor.whiteColor().setFill()
    bezierPath.fill()
    let imageOfCell = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return imageOfCell
}

SKCropNode not rendering greyscale mask values in SpriteKit

I have an SKCropNode that is working fine except for one thing: all of the greyscale values in the mask image that are not black or white are being treated as solid white.
This means the intermediate greyscale alpha values do not render with partial transparency; they are simply treated as fully opaque.
Here is my code:
let foregroundTexture = SKTexture(image: UIImage(contentsOfFile:NSBundle.mainBundle().resourcePath!.stringByAppendingPathComponent("P01_foreground.jpg"))!)
var foreground = SKSpriteNode(texture: foregroundTexture)
let maskTexture = SKTexture(image: UIImage(contentsOfFile:NSBundle.mainBundle().resourcePath!.stringByAppendingPathComponent("P01_mask.png"))!)
var mask = SKSpriteNode(texture: maskTexture)
var cropNode = SKCropNode()
cropNode.addChild(foreground)
cropNode.maskNode = mask
cropNode.position = CGPoint(x: device.x/2, y: device.y/2)
cropNode.zPosition = 2.0
self.addChild(cropNode)
Does anyone know why the greyscale areas are being set to white or how I can achieve the desired result? Thanks in advance!
Unfortunately it looks like you can't. SKCropNode currently only has one property, the maskNode itself. There are no additional settings available to do what you want.