Keep relative positions of SKSpriteNode from SKShapeNode from CGPath - swift

I have an array of array of coordinates (from a shapefile), which I'm trying to make into a series of SKSpriteNodes.
My problem is that I need to keep the relative positions of each of the shapes in the array. If I use SKShapeNodes, it works, as they are created directly from the path I trace, but their resource consumption is quite high and, in particular, I cannot use lighting effects on them.
If I use SKSpriteNodes with a texture created from the shape node, then I lose their relative positions.
I tried calculating the center of each shape, but their positions are still not accurate.
Here's how I draw them so far:
override func didMoveToView(view: SKView)
{
    self.backgroundColor = SKColor.blackColor()
    let shapeCoordinates:[[(CGFloat, CGFloat)]] = [[(900.66563867095, 401.330302084953), (880.569690215615, 400.455067051099), (879.599839322167, 408.266821560754), (878.358968429675, 418.182833936104), (899.37522863267, 418.54861454054), (900.66563867095, 401.330302084953)],
        [(879.599839322167, 408.266821560754), (869.991637153925, 408.122907880045), (870.320569111933, 400.161243286459), (868.569953361733, 400.11339198742), (864.517810669155, 399.54973007215), (858.682258706015, 397.619367903278), (855.665753299048, 395.808813873244), (853.479452218432, 392.811211835046), (847.923492419877, 394.273974470316), (834.320860167515, 397.859104108813), (826.495867917475, 399.921507079808), (829.86572598778, 404.531781837208), (835.898936154083, 409.178035013947), (840.887737516875, 411.839958392806), (847.191868005112, 414.441797809335), (854.251943938193, 416.198384209245), (860.095769038325, 417.277496957155), (866.21091316512, 417.954970608037), (873.27118845149, 418.182833936104), (878.358968429675, 418.182833936104), (879.599839322167, 408.266821560754)],
        [(931.018881691707, 402.151689416542), (910.610746904717, 401.600140235583), (910.380693886848, 411.576056681467), (930.79710181083, 411.750012342223), (931.018881691707, 402.151689416542)],
        [(880.569690215615, 400.455067051099), (870.320569111933, 400.161243286459), (869.991637153925, 408.122907880045), (879.599839322167, 408.266821560754), (880.569690215615, 400.455067051099)]]
    for shapeCoord in shapeCoordinates
    {
        let path = CGPathCreateMutable()
        var center:(CGFloat, CGFloat) = (0, 0)
        for i in 0...(shapeCoord.count - 1)
        {
            let x = shapeCoord[i].0
            let y = shapeCoord[i].1
            center.0 += x
            center.1 += y
            if i == 0
            {
                CGPathMoveToPoint(path, nil, x, y)
            }
            else
            {
                CGPathAddLineToPoint(path, nil, x, y)
            }
        }
        center.0 /= CGFloat(shapeCoord.count)
        center.1 /= CGFloat(shapeCoord.count)
        let shape = SKShapeNode(path: path)
        let texture = self.view?.textureFromNode(shape)
        let sprite = SKSpriteNode(texture: texture)
        sprite.position = CGPointMake(center.0, center.1)
        //self.addChild(shape)
        self.addChild(sprite)
    }
}
Is it feasible or should I switch to another technology / method?
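One adjustment worth trying (an assumption on my part, not something from the question): textureFromNode renders the node cropped to its bounds, so the sprite should sit at the center of the path's bounding box rather than at the vertex centroid; for irregular polygons the two points differ, which would shift each sprite. A minimal sketch in the question's own Swift 2 style:
let shape = SKShapeNode(path: path)
// Midpoint of the path's bounding box (this ignores the stroke's line width);
// the vertex centroid is generally NOT this point for irregular polygons.
let box = CGPathGetBoundingBox(path)
if let texture = self.view?.textureFromNode(shape) {
    let sprite = SKSpriteNode(texture: texture)
    sprite.position = CGPointMake(CGRectGetMidX(box), CGRectGetMidY(box))
    self.addChild(sprite)
}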

Related

SceneKit: How to arrange buttons in ascending order using for in loop?

The task is to add 10 buttons (0...9) with labels using a for-in loop.
I created the buttons based on the class ButtonPrototype. I assigned a label to each button via a counter inside the for-in loop.
It works, but the labels are in the wrong order:
I need this order instead:
How can I implement the correct order?
Code:
func createButtons() {
    for y in 0...1 {
        for x in 0...4 {
            counterForLoop += 1
            self.button = ButtonPrototype(pos: .init(CGFloat(x)/7, CGFloat(y)/7, 0), imageName: "\(counterForLoop)")
            parentNode.addChildNode(button)
            parentNode.position = SCNVector3(x: 100, y: 100, z: 100)
        }
    }
}
The following approach does the trick:
for y in 0...1 {
    for x in 0...4 {
        let textNode = SCNNode()
        let ascendingOrder: String = "\(((x+1)+(y*5)) % 10)"
        let geo = SCNText(string: ascendingOrder, extrusionDepth: 0.5)
        geo.flatness = 0.04
        geo.firstMaterial?.diffuse.contents = UIImage(named: ascendingOrder)
        textNode.geometry = geo
        textNode.position = SCNVector3(x*10, -y*10, 0)
        sceneView.scene?.rootNode.addChildNode(textNode)
        print(ascendingOrder)
    }
}
You have at least two problems with your code: your smallest button label is in the lower left when you want it in the lower right, and your labels go 0-9 when you want them to go from 1 to 10 (displaying 10 as "0").
To reverse the x ordering, change x to 10-x in your creation of a position, and change your imageName to "\((counterForLoop+1)%10)":
self.button = ButtonPrototype(
    pos: .init(
        CGFloat(10-x)/7,
        CGFloat(y)/7,
        0),
    imageName: "\((counterForLoop+1)%10)")
By the way, you should add a SceneKit tag to your question. That seems more important than either the label tag or the loops tag.

How to apply a 3D Model on detected face by Apple Vision "NO AR"

With the iPhone X TrueDepth camera it's possible to get the 3D coordinates of any object and use that information to position and scale it, but on older iPhones we don't have access to AR on the front-facing camera. What I've done so far is detect the face using the Apple Vision framework and draw some 2D paths around the face or landmarks.
I've made a SceneView and applied it as the front layer of my view with a clear background, with an AVCaptureVideoPreviewLayer beneath it. After detecting the face, my 3D object appears on the screen, but positioning and scaling it correctly according to the face's boundingBox requires unprojecting and other steps, and that's where I got stuck. I've also tried converting the 2D boundingBox to 3D using CATransform3D, but I failed. I'm wondering if what I want to achieve is even possible? I remember Snapchat was doing this before ARKit was available on the iPhone, if I'm not wrong!
override func viewDidLoad() {
    super.viewDidLoad()
    self.view.addSubview(self.sceneView)
    self.sceneView.frame = self.view.bounds
    self.sceneView.backgroundColor = .clear
    self.node = self.scene.rootNode.childNode(withName: "face", recursively: true)!
}
fileprivate func updateFaceView(for result: VNFaceObservation, twoDFace: Face2D) {
    let box = convert(rect: result.boundingBox)
    defer {
        DispatchQueue.main.async {
            self.faceView.setNeedsDisplay()
        }
    }
    faceView.boundingBox = box
    self.sceneView.scene?.rootNode.addChildNode(self.node)
    let unprojectedBox = SCNVector3(box.origin.x, box.origin.y, 0.8)
    let worldPoint = sceneView.unprojectPoint(unprojectedBox)
    self.node.position = worldPoint
    /* Here I have to unproject to convert the value from a 2D point
       to a 3D point; this is also where the issue is. */
}
The only way to achieve this is to use SceneKit with an orthographic camera and use SCNGeometrySource to match the landmarks from Vision to the vertices of the mesh.
First, you need a mesh with the same number of vertices as Vision provides (66-77, depending on which Vision revision you're on). You can create one using a tool like Blender.
The mesh in Blender
Then, in code, each time you process your landmarks, follow these steps:
1- Get the mesh vertices:
func getVertices() -> [SCNVector3] {
    var result = [SCNVector3]()
    let planeSources = shape!.geometry?.sources(for: SCNGeometrySource.Semantic.vertex)
    if let planeSource = planeSources?.first {
        let stride = planeSource.dataStride
        let offset = planeSource.dataOffset
        let componentsPerVector = planeSource.componentsPerVector
        let bytesPerVector = componentsPerVector * planeSource.bytesPerComponent
        let vectors = [SCNVector3](repeating: SCNVector3Zero, count: planeSource.vectorCount)
        let vertices = vectors.enumerated().map({ (index: Int, element: SCNVector3) -> SCNVector3 in
            // Read each vertex's float components out of the source's raw data
            var vectorData = [Float](repeating: 0, count: componentsPerVector)
            let byteRange = NSMakeRange(index * stride + offset, bytesPerVector)
            let data = planeSource.data
            (data as NSData).getBytes(&vectorData, range: byteRange)
            return SCNVector3(x: vectorData[0], y: vectorData[1], z: vectorData[2])
        })
        result = vertices
    }
    return result
}
2- Unproject each landmark captured by Vision and keep them in an SCNVector3 array:
let unprojectedLandmark = sceneView.unprojectPoint(SCNVector3(landmarks[i].x, landmarks[i].y, 0))
3- Modify the geometry using the new vertices:
func reshapeGeometry(_ vertices: [SCNVector3]) {
    let source = SCNGeometrySource(vertices: vertices)
    var newSources = [SCNGeometrySource]()
    newSources.append(source)
    // Keep every non-vertex source (normals, texture coordinates, ...) as-is
    for source in shape!.geometry!.sources {
        if (source.semantic != SCNGeometrySource.Semantic.vertex) {
            newSources.append(source)
        }
    }
    let geometry = SCNGeometry(sources: newSources, elements: shape!.geometry?.elements)
    let material = shape!.geometry?.firstMaterial
    shape!.geometry = geometry
    shape!.geometry?.firstMaterial = material
}
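Putting the pieces together, a per-frame flow could look like the sketch below (hypothetical: the handleLandmarks name and the landmarks array of screen-space points are my assumptions, not part of the original answer):
func handleLandmarks(_ landmarks: [CGPoint]) {
    var newVertices = [SCNVector3]()
    for point in landmarks {
        // z = 0 unprojects onto the near plane; with an orthographic camera
        // only the x/y mapping matters for reshaping the face mesh.
        newVertices.append(sceneView.unprojectPoint(SCNVector3(point.x, point.y, 0)))
    }
    reshapeGeometry(newVertices)
}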
This is the method I used, and it was able to do what I needed. Hope this helps!
I would suggest looking at Google's ARCore products, which support an Apple AR scene with the back- or front-facing camera, but add some functionality beyond Apple's on devices without a face-depth camera.
Apple's Vision framework is very similar to Google's vision framework: it returns 2D points representing the eyes/mouth/nose etc., plus a face-tilt component.
However, if you want a way to simply apply 2D textures to a responsive 3D face, or alternatively attach 3D models to points on the face, then take a look at Google's Augmented Faces framework. It has great sample code for iOS and Android.

How to Extract SceneKit Depth Buffer at runtime in AR scene?

How does one extract the SceneKit depth buffer? I'm making an AR-based app that runs Metal, and I'm really struggling to find any info on how to extract a 2D depth buffer so I can render out fancy 3D photos of my scenes. Any help is greatly appreciated.
Your question is unclear but I'll try to answer.
Depth pass from VR view
If you need to render a Depth pass from SceneKit's 3D environment, then you should use, for instance, an SCNGeometrySource.Semantic structure. There are vertex, normal, texcoord, color and tangent type properties. Let's see what a vertex type property is:
static let vertex: SCNGeometrySource.Semantic
This semantic identifies data containing the positions of each vertex in the geometry. For a custom shader program, you use this semantic to bind SceneKit’s vertex position data to an input attribute of the shader. Vertex position data is typically an array of three- or four-component vectors.
Here's a code excerpt from the iOS Depth Sample project.
UPDATED: Using this code you can get a position for every point in an SCNScene and assign a color to those points (this is what a zDepth channel really is):
import SceneKit

struct PointCloudVertex {
    var x: Float, y: Float, z: Float
    var r: Float, g: Float, b: Float
}

@objc class PointCloud: NSObject {
    var pointCloud: [SCNVector3] = []
    var colors: [UInt8] = []

    public func pointCloudNode() -> SCNNode {
        let points = self.pointCloud
        var vertices = Array(repeating: PointCloudVertex(x: 0, y: 0, z: 0, r: 0, g: 0, b: 0),
                             count: points.count)
        for i in 0...(points.count - 1) {
            let p = points[i]
            vertices[i].x = Float(p.x)
            vertices[i].y = Float(p.y)
            vertices[i].z = Float(p.z)
            // colors is assumed to hold 4 bytes (RGBA) per point
            vertices[i].r = Float(colors[i * 4]) / 255.0
            vertices[i].g = Float(colors[i * 4 + 1]) / 255.0
            vertices[i].b = Float(colors[i * 4 + 2]) / 255.0
        }
        let node = buildNode(points: vertices)
        return node
    }

    private func buildNode(points: [PointCloudVertex]) -> SCNNode {
        let vertexData = NSData(
            bytes: points,
            length: MemoryLayout<PointCloudVertex>.size * points.count
        )
        let positionSource = SCNGeometrySource(
            data: vertexData as Data,
            semantic: SCNGeometrySource.Semantic.vertex,
            vectorCount: points.count,
            usesFloatComponents: true,
            componentsPerVector: 3,
            bytesPerComponent: MemoryLayout<Float>.size,
            dataOffset: 0,
            dataStride: MemoryLayout<PointCloudVertex>.size
        )
        let colorSource = SCNGeometrySource(
            data: vertexData as Data,
            semantic: SCNGeometrySource.Semantic.color,
            vectorCount: points.count,
            usesFloatComponents: true,
            componentsPerVector: 3,
            bytesPerComponent: MemoryLayout<Float>.size,
            dataOffset: MemoryLayout<Float>.size * 3,
            dataStride: MemoryLayout<PointCloudVertex>.size
        )
        let element = SCNGeometryElement(
            data: nil,
            primitiveType: .point,
            primitiveCount: points.count,
            bytesPerIndex: MemoryLayout<Int>.size
        )
        element.pointSize = 1
        element.minimumPointScreenSpaceRadius = 1
        element.maximumPointScreenSpaceRadius = 5
        let pointsGeometry = SCNGeometry(sources: [positionSource, colorSource], elements: [element])
        return SCNNode(geometry: pointsGeometry)
    }
}
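A hypothetical usage sketch (the points and colors are made up; colors is assumed to hold 4 bytes per point, RGBA, to match the i * 4 indexing above, and scene is assumed to be your SCNScene):
let cloud = PointCloud()
cloud.pointCloud = [SCNVector3(0, 0, 0), SCNVector3(0.1, 0, 0), SCNVector3(0, 0.1, 0)]
cloud.colors = [255, 0, 0, 255,   // red point (RGBA)
                0, 255, 0, 255,   // green point
                0, 0, 255, 255]   // blue point
scene.rootNode.addChildNode(cloud.pointCloudNode())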
Depth pass from AR view
If you need to render a Depth pass from ARSCNView, it is possible only if you're using ARFaceTrackingConfiguration for the front-facing camera. If so, you can employ the capturedDepthData instance property, which brings you a depth map captured along with the video frame.
var capturedDepthData: AVDepthData? { get }
But this depth map is captured at only 15 fps and at a lower resolution than the corresponding RGB image captured at 60 fps.
Face-based AR uses the front-facing, depth-sensing camera on compatible devices. When running such a configuration, frames vended by the session contain a depth map captured by the depth camera in addition to the color pixel buffer (see capturedImage) captured by the color camera. This property’s value is always nil when running other AR configurations.
And real code could look like this:
extension ViewController: ARSCNViewDelegate {
    func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
        DispatchQueue.global().async {
            guard let frame = self.sceneView.session.currentFrame else {
                return
            }
            if let depthData = frame.capturedDepthData {
                // capturedDepthData is AVDepthData, not a pixel buffer itself;
                // its pixel buffer is exposed through depthDataMap.
                self.depthImage = depthData.depthDataMap
            }
        }
    }
}
Depth pass from Video view
Also, you can extract a true Depth pass using the two back-facing cameras and the AVFoundation framework.
Look at the Image Depth Map tutorial, which introduces the concept of disparity.
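To give a rough sense of the disparity concept before reading the tutorial (a sketch; converting(toDepthDataType:) and kCVPixelFormatType_DisparityFloat32 are existing AVFoundation/CoreVideo symbols):
import AVFoundation

// Depth is measured in meters; disparity is proportional to 1/meters.
// AVDepthData can convert between the two representations:
func disparity(from depthData: AVDepthData) -> AVDepthData {
    if depthData.depthDataType == kCVPixelFormatType_DisparityFloat32 {
        return depthData // already a disparity map
    }
    return depthData.converting(toDepthDataType: kCVPixelFormatType_DisparityFloat32)
}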

Swift CoreGraphics UIBezierPath will not fill interior correctly

I'm trying to draw countries and fill the interiors with a certain color. My data source is a TopoJSON file, which, in a nutshell, is made up of shapes that reference an array of arcs. I convert this into an array of paths, which I then iterate through to draw the country. As you can see in the screenshot below, I'm drawing the correct outline of the country (Afghanistan).
However, when I try to use path.fill(), I end up with the following. Note how the black lines are correct, but the colors spill outside and inside haphazardly.
Code
var mapRegion = MapRegion()
var path = mapRegion.createPath()
var origin: CGPoint = .zero
geometry.paths
    .enumerated()
    .forEach { (geoIndex, shape) in
        shape
            .enumerated()
            .forEach { (shapeIndex, coord) in
                guard let coordPoint = coord.double else { return }
                let values = coordinatesToGraphics(x: coordPoint.x, y: coordPoint.y)
                let point = CGPoint(x: values.x, y: values.y)
                if origin == .zero {
                    origin = point
                }
                // Shape is about to be closed
                if shapeIndex != 0 && path.contains(point) {
                    // Close, save path (2)
                    path.addLine(to: origin)
                    // (3) path.close()
                    mapRegion.savePath()
                    // Add to map, reset process
                    canvas.layer.addSublayer(mapRegion)
                    mapRegions.append(mapRegion)
                    mapRegion = MapRegion()
                    path = mapRegion.createPath()
                }
                else {
                    if shapeIndex == 0 {
                        path.move(to: point)
                    } else {
                        path.addLine(to: point)
                    }
                }
            }
    }
I've tried exhaustively messing with usesEvenOddFillRule (further reading), but nothing ever changes. I found that comment (1) above helped resolve an issue where borders were being drawn that shouldn't have been. The savePath() function at (2) runs the setStroke(), stroke(), setFill(), and fill() functions.
Update: path.close() draws a line that closes the path at the bottom-left corner of the shape, instead of the top-left corner where it first starts drawing. That function closes the "most recently added subpath", but how are subpaths defined?
I can't say for sure whether the problem is my logic or some CoreGraphics trick. I have a collection of paths that I need to stitch together and treat as one, and I believe I'm doing that. I've looked at the data points, and the end of one arc and the beginning of the next are identical. I printed the path I stitch together, and I basically move(to:) the same point the previous arc ended on, so there are no duplicates when I addLine(to:). Looking at the way the simulator colors the region, I first guessed that the individual arcs were being treated as shapes, but there are only 6 arcs in this example and several more inside-outside color switches.
I'd really appreciate any help here!
It turns out that using path.move(to:) creates a subpath within the UIBezierPath, which the fill algorithm seemingly treats as separate, multiple paths (source that led to the discovery). The solution was to remove the extra, unnecessary move(to:) calls. Below is the working code and the happy result. Thanks!
var mapRegion = MapRegion()
var path = mapRegion.createPath()
path.move(to: .zero)
var pointsDictionary: [String: Bool] = [:]
geometry.paths
    .enumerated()
    .forEach { (geoIndex, shape) in
        shape
            .enumerated()
            .forEach { (shapeIndex, coord) in
                guard let coordPoint = coord.double else { return }
                let values = coordinatesToGraphics(x: coordPoint.x, y: coordPoint.y)
                let point = CGPoint(x: values.x, y: values.y)
                // Move to start
                if path.currentPoint == .zero {
                    path.move(to: point)
                }
                if shapeIndex != 0 {
                    // Close shape
                    if pointsDictionary[point.debugDescription] ?? false {
                        // Close path, set colors, save
                        mapRegion.save(path)
                        regionDrawer.drawPath(of: mapRegion)
                        // Reset process
                        canvas.layer.addSublayer(mapRegion)
                        mapRegions.append(mapRegion)
                        mapRegion = MapRegion()
                        path = mapRegion.createPath()
                        pointsDictionary = [:]
                    }
                    // Add to shape
                    else {
                        path.addLine(to: point)
                    }
                }
            }
    }
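To make the subpath behavior concrete, here is a minimal sketch (not from the question): each move(to:) starts a new subpath, and close() and fill() operate per subpath:
let demo = UIBezierPath()
demo.move(to: CGPoint(x: 0, y: 0))       // subpath 1 starts here
demo.addLine(to: CGPoint(x: 100, y: 0))
demo.addLine(to: CGPoint(x: 100, y: 100))
demo.move(to: CGPoint(x: 100, y: 100))   // starts subpath 2, even at the same point
demo.addLine(to: CGPoint(x: 0, y: 100))
demo.close()   // closes only subpath 2, the "most recently added subpath"
demo.fill()    // fills each subpath as its own shape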

Following a path in Spritekit EXC_BAD_ACCESS

I have a simple Snake game where the head draws a UIBezierPath. That part works fine:
func addLineToSnakePath(snakeHead: SnakeBodyUnit) {
    //add a new CGPoint to array
    activeSnakePathPoints.append(CGPoint(x: snakeHead.partX, y: snakeHead.partY))
    let index = activeSnakePathPoints.count - 1
    if (index == 1) {
        path.moveToPoint(activeSnakePathPoints[index-1])
    }
    path.addLineToPoint(activeSnakePathPoints[index])
    shapeNode.path = path.CGPath
}
The path is generated from swipes as the head moves around the screen. Now I add a body unit to follow the UIBezierPath, and I get a bad access error.
func addBodyPart() {
    let followBody = SKAction.followPath(path.CGPath, asOffset: true, orientToPath: false, duration: 1.0)
    snakePart.runAction(followBody)
}
Crash at:
0 SKCFollowPath::cpp_willStartWithTargetAtTime(SKCNode*, double)
Thread 1 EXC_BAD_ACCESS
Looking at the code, I would strengthen it this way:
func addLineToSnakePath(snakeHead: CGPoint) {
    let count = activeSnakePathPoints.count
    if count == 0 { return } // The snake doesn't have a head..
    //add a new CGPoint to the array
    activeSnakePathPoints.append(CGPoint(x: snakeHead.x, y: snakeHead.y))
    guard count > 1 else {
        // There are only two elements (the head point and the new point), so:
        path = UIBezierPath.init()
        path.moveToPoint(activeSnakePathPoints[count-1])
        shapeNode.path = path.CGPath
        return
    }
    // There are some elements:
    path.addLineToPoint(activeSnakePathPoints[count-1])
    shapeNode.path = path.CGPath
}
Next, when you create the followPath action, note that you pass true for the asOffset parameter (I don't know whether that is intentional):
asOffset: If YES, the points in the path are relative offsets to the node's starting position. If NO, the points in the node are absolute coordinate values.
In other words, true means the path is relative to the sprite's starting position; false means absolute.
Finally, in the line:
snakePart.runAction(followBody)
I don't know what snakePart is, but you use shapeNode in your function.
Update:
Thinking about your crash, it seems snakePart is not a valid SKNode (as happens when you try to use a sprite or shape without initializing it).
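A defensive sketch (my assumption: snakePart is an optional SKNode property that may not be initialized yet, which the question doesn't confirm) that guards both the node and the path before running the action:
func addBodyPart() {
    // Hypothetical guards: bail out if the node or the path isn't ready yet.
    guard let part = snakePart where !activeSnakePathPoints.isEmpty else { return }
    // asOffset: true treats the path points as offsets from the node's starting
    // position; pass false if the path is in absolute scene coordinates.
    let followBody = SKAction.followPath(path.CGPath, asOffset: true,
                                         orientToPath: false, duration: 1.0)
    part.runAction(followBody)
}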