ARKit SCNMorpher isn't working for me. No errors, just no shape changes - Swift

I'm trying a simple example using SCNMorpher to blend between two poly spheres. They are identical in topology except for the positions of the points.
Each is stored in a .scn file, and I load the shapes like this:
sphereNode = SCNReferenceNode(named: "sphere")
sphereNode2 = SCNReferenceNode(named: "sphere2")
sphereNode?.morpher = SCNMorpher()
sphereNode!.morpher?.targets = [(sphereNode2?.childNodes.first!.geometry)!]
sphereNode!.name = "EFFECT"
I'm using the faceAnchor blend shapes to drive it:
if let effectNode = sceneView?.scene.rootNode.childNode(withName: "EFFECT", recursively: true) {
let v = faceAnchor?.blendShapes[ARFaceAnchor.BlendShapeLocation.jawOpen]
effectNode.morpher?.setWeight(v as! CGFloat, forTargetNamed: "sphere2")
}
I've also tried:
...
effectNode.morpher?.setWeight(v as! CGFloat, forTargetAt: 0)
...
The code runs. I can print values for v, they change as I open/close my jaw, and that value is passed to the morpher. I see the base sphere shape, but it never deforms toward the sphere2 shape. Am I supposed to do anything else to force it to redraw or calculate the deformation?

Hmm, looks like I was attaching the morpher to the parent node of the shape, not the actual sphere node. Funny how asking a question here sometimes creates that "aha" moment. Reading in my spheres like this fixed it:
sphereNode = SCNReferenceNode(named: "sphere").childNodes.first
sphereNode2 = SCNReferenceNode(named: "sphere2").childNodes.first
sphereNode?.morpher = SCNMorpher()
sphereNode!.morpher?.targets = [(sphereNode2?.geometry)!]
sphereNode!.name = "EFFECT"
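For reference, the whole working setup boils down to something like this (a sketch; SCNReferenceNode(named:) is the same loader used above, and depending on the loading policy you may need to call load() before childNodes is populated):
let baseNode = SCNReferenceNode(named: "sphere").childNodes.first!   // node that owns the geometry being morphed
let targetNode = SCNReferenceNode(named: "sphere2").childNodes.first! // only lends its geometry as a morph target
let morpher = SCNMorpher()
morpher.targets = [targetNode.geometry!] // target index 0
baseNode.morpher = morpher
baseNode.name = "EFFECT"
sceneView?.scene.rootNode.addChildNode(baseNode)
// per frame, driven by the face anchor
if let jawOpen = faceAnchor?.blendShapes[.jawOpen] as? CGFloat {
    baseNode.morpher?.setWeight(jawOpen, forTargetAt: 0)
}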

Related

iPad Pro Lidar - Export Geometry & Texture

I would like to be able to export a mesh and texture from the iPad Pro Lidar.
There are examples here of how to export a mesh, but I'd like to be able to export the environment texture too:
ARKit 3.5 – How to export OBJ from new iPad Pro with LiDAR?
ARMeshGeometry stores the vertices for the mesh; would one have to 'record' the textures as the environment is scanned and manually apply them?
This post seems to show a way to get texture coordinates, but I can't see a way to do that with ARMeshGeometry: Save ARFaceGeometry to OBJ file
Any pointer in the right direction, or things to look at, greatly appreciated!
Chris
You need to compute the texture coordinates for each vertex, apply them to the mesh and supply a texture as a material to the mesh.
let geom = meshAnchor.geometry
let vertices = geom.vertices
let size = arFrame.camera.imageResolution
let camera = arFrame.camera
let modelMatrix = meshAnchor.transform
// (geom.vertices is an ARGeometrySource, so in practice each vertex
// position has to be read out of its Metal buffer as a simd_float3 first)
let textureCoordinates = vertices.map { vertex -> vector_float2 in
let vertex4 = vector_float4(vertex.x, vertex.y, vertex.z, 1)
let world_vertex4 = simd_mul(modelMatrix, vertex4)
let world_vector3 = simd_float3(x: world_vertex4.x, y: world_vertex4.y, z: world_vertex4.z)
let pt = camera.projectPoint(world_vector3,
orientation: .portrait,
viewportSize: CGSize(
width: CGFloat(size.height),
height: CGFloat(size.width)))
let v = 1.0 - Float(pt.x) / Float(size.height)
let u = Float(pt.y) / Float(size.width)
return vector_float2(u, v)
}
// construct your vertices, normals and faces from the source geometry
// directly and supply the computed texture coords to create new geometry
// and then apply the texture.
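// One possible (unverified) way to build those sources straight from the
// ARMeshGeometry buffers, plus a texture-coordinate source wrapping the
// coordinates computed above (a sketch, assuming triangle primitives):
let verticesSource = SCNGeometrySource(buffer: geom.vertices.buffer,
vertexFormat: geom.vertices.format,
semantic: .vertex,
vertexCount: geom.vertices.count,
dataOffset: geom.vertices.offset,
dataStride: geom.vertices.stride)
let normalsSource = SCNGeometrySource(buffer: geom.normals.buffer,
vertexFormat: geom.normals.format,
semantic: .normal,
vertexCount: geom.normals.count,
dataOffset: geom.normals.offset,
dataStride: geom.normals.stride)
let textureCoordinatesSource = SCNGeometrySource(textureCoordinates:
textureCoordinates.map { CGPoint(x: CGFloat($0.x), y: CGFloat($0.y)) })
let faces = geom.faces
let facesData = Data(bytes: faces.buffer.contents(), count: faces.buffer.length)
let facesSource = SCNGeometryElement(data: facesData,
primitiveType: .triangles,
primitiveCount: faces.count,
bytesPerIndex: faces.bytesPerIndex)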
let scnGeometry = SCNGeometry(sources: [verticesSource, textureCoordinatesSource, normalsSource], elements: [facesSource])
let texture = UIImage(pixelBuffer: frame.capturedImage) // UIImage(pixelBuffer:) is a small helper (e.g. via CIImage), not a stock UIKit initializer
let imageMaterial = SCNMaterial()
imageMaterial.isDoubleSided = false
imageMaterial.diffuse.contents = texture
scnGeometry.materials = [imageMaterial]
let pcNode = SCNNode(geometry: scnGeometry)
If added to your scene, pcNode will contain the mesh with the texture applied.
Texture coordinates computation from here
Check out my answer over here
It's a description of this project: MetalWorldTextureScan which demonstrates how to scan your environment and create a textured mesh using ARKit and Metal.

How to apply a 3D Model on detected face by Apple Vision "NO AR"

With the iPhone X TrueDepth camera it's possible to get the 3D coordinates of any object and use that information to position and scale the object, but with older iPhones we don't have access to AR on the front-facing camera. What I've done so far is detect the face using the Apple Vision framework and draw some 2D paths around the face or landmarks.
I've made an SCNView and applied it as the front layer of my view with a clear background, and beneath it is an AVCaptureVideoPreviewLayer. After detecting the face, my 3D object appears on the screen, but positioning and scaling it correctly according to the face boundingBox requires unprojecting and other stuff, which is where I got stuck. I've also tried converting the 2D bounding box to 3D using CATransform3D, but I failed! I'm wondering if what I want to achieve is even possible? I remember Snapchat was doing this before ARKit was available on iPhone, if I'm not wrong!
override func viewDidLoad() {
super.viewDidLoad()
self.view.addSubview(self.sceneView)
self.sceneView.frame = self.view.bounds
self.sceneView.backgroundColor = .clear
self.node = self.scene.rootNode.childNode(withName: "face",
recursively: true)!
}
fileprivate func updateFaceView(for result:
VNFaceObservation, twoDFace: Face2D) {
let box = convert(rect: result.boundingBox)
defer {
DispatchQueue.main.async {
self.faceView.setNeedsDisplay()
}
}
faceView.boundingBox = box
self.sceneView.scene?.rootNode.addChildNode(self.node)
let unprojectedBox = SCNVector3(box.origin.x, box.origin.y,
0.8)
let worldPoint = sceneView.unprojectPoint(unprojectedBox)
self.node.position = worldPoint
/* Here I have to do the unprojection to convert the value from a
2D point to a 3D point; this is also where the issue is. */
}
The only way to achieve this is to use SceneKit with an orthographic camera and use SCNGeometrySource to match the landmarks from Vision to the vertices of the mesh.
First, you need a mesh with the same number of vertices as Vision provides (66 - 77 depending on which Vision revision you're on). You can create one using a tool like Blender.
(Image: the mesh in Blender)
Then, in code, each time you process your landmarks, follow these steps:
1- Get the mesh vertices:
func getVertices() -> [SCNVector3]{
var result = [SCNVector3]()
let planeSources = shape!.geometry?.sources(for: SCNGeometrySource.Semantic.vertex)
if let planeSource = planeSources?.first {
let stride = planeSource.dataStride
let offset = planeSource.dataOffset
let componentsPerVector = planeSource.componentsPerVector
let bytesPerVector = componentsPerVector * planeSource.bytesPerComponent
let vectors = [SCNVector3](repeating: SCNVector3Zero, count: planeSource.vectorCount)
// [SCNVector3](count: planeSource.vectorCount, repeatedValue: SCNVector3Zero)
let vertices = vectors.enumerated().map({
(index: Int, element: SCNVector3) -> SCNVector3 in
var vectorData = [Float](repeating: 0, count: componentsPerVector)
let byteRange = NSMakeRange(index * stride + offset, bytesPerVector)
let data = planeSource.data
(data as NSData).getBytes(&vectorData, range: byteRange)
return SCNVector3( x: vectorData[0], y: vectorData[1], z: vectorData[2])
})
result = vertices
}
return result
}
2- Unproject each landmark captured by Vision and keep them in an SCNVector3 array:
let unprojectedLandmark = sceneView.unprojectPoint(SCNVector3(landmarks[i].x, landmarks[i].y, 0))
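Spelled out as a loop, this might look like the following (a sketch; it assumes landmarks is already an array of CGPoints in the SCNView's coordinate space, converted from the VNFaceObservation beforehand):
var unprojectedLandmarks = [SCNVector3]()
for i in 0..<landmarks.count {
    // z = 0 unprojects onto the near clipping plane; use whatever fixed
    // depth matches where the face mesh sits in your scene
    let screenPoint = SCNVector3(landmarks[i].x, landmarks[i].y, 0)
    unprojectedLandmarks.append(sceneView.unprojectPoint(screenPoint))
}
// unprojectedLandmarks is then passed to reshapeGeometry(_:) in step 3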
3- Modify the geometry using the new vertices:
func reshapeGeometry( _ vertices: [SCNVector3] ){
let source = SCNGeometrySource(vertices: vertices)
var newSources = [SCNGeometrySource]()
newSources.append(source)
for source in shape!.geometry!.sources {
if (source.semantic != SCNGeometrySource.Semantic.vertex) {
newSources.append(source)
}
}
let geometry = SCNGeometry(sources: newSources, elements: shape!.geometry?.elements)
let material = shape!.geometry?.firstMaterial
shape!.geometry = geometry
shape!.geometry?.firstMaterial = material
}
I was able to do it this way, and that was my method.
Hope this helps!
I would suggest looking at Google's ARCore products, which support an AR scene on Apple devices with the back or front-facing camera, but add some functionality beyond Apple's when it comes to devices without a face-depth (TrueDepth) camera.
Apple's Vision framework is almost the same as Google's equivalent, which returns 2D points representing the eyes/mouth/nose etc. and a face-tilt component.
However, if you want a way to simply apply 2D textures to a responsive 3D face, or alternatively attach 3D models to points on the face, then take a look at Google's ARCore Augmented Faces framework. It has great sample code for both iOS and Android.

ARKit limit display distance of a node

I would like to create a node in the sceneView that is displayed at its normal position in the scene until the user gets too close to or too far from it; then it should be displayed in the same direction from the user, but at a restricted distance. The best I've found so far is SCNDistanceConstraint, which limits this distance, but the problem is that after the constraint has moved the node, the node stays in its new place. For example, I want the node to be displayed no closer than one meter from the camera. As I get closer to the node, it is pushed away, but when I move the camera back, the node should return to its original position; for now it stays where it was pushed. Is there some easy way to get such behavior?
I'm not entirely sure I've understood what you mean, but it seems you always want your SCNNode to be positioned 1 m away from the camera while keeping its other x, y values?
If this is the case then you can do something like this:
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
//1. Get The Current Node On Screen & The Camera Point Of View
guard let nodeToPosition = currentNode, let pointOfView = augmentedRealityView.pointOfView else { return }
//2. Set The Position Of The Node 1m Away From The Camera
nodeToPosition.simdPosition.z = pointOfView.presentation.worldPosition.z - 1
//3. Get The Current Distance Between The SCNNode & The Camera
let positionOfNode = SCNVector3ToGLKVector3(nodeToPosition.presentation.worldPosition)
let positionOfCamera = SCNVector3ToGLKVector3(pointOfView.presentation.worldPosition)
let distanceBetweenNodeAndCamera = GLKVector3Distance(positionOfNode, positionOfCamera)
print(distanceBetweenNodeAndCamera)
}
I have added part three so you could use the distance to do some additional calculations, etc.
Hope this points you in the right direction...
The answer above is not exactly what I need. I want the object to be displayed as if it were simply placed at its normal position, so I can get closer to and farther from it, but with a limit on how close/far I can get. When I'm beyond that limit, the object should start to move so it always stays within the given distance range from the camera. Anyway, I think I have found the right direction. Instead of assigning a position, I create a constraint that constantly updates my node's position: either the requested position, if it's within the given range from the user, or, if not, a position adjusted to fit in that range:
private func setupConstraint() {
guard let mainNodeDisplayDistanceRange = mainNodeDisplayDistanceRange else {
constraints = nil
position = requestedPosition
return
}
let constraint = SCNTransformConstraint.positionConstraint(inWorldSpace: true) { (node, currentPosition) -> SCNVector3 in
var cameraPositionHorizontally = (self.augmentedRealityView as! AugmentedRealityViewARKit).currentCameraPosition
cameraPositionHorizontally.y = self.requestedPosition.y
let cameraToObjectVector = self.requestedPosition - cameraPositionHorizontally
let horizontalDistanceFromCamera = Double((cameraToObjectVector).distanceHorizontal)
guard mainNodeDisplayDistanceRange ~= horizontalDistanceFromCamera else {
let normalizedDistance = horizontalDistanceFromCamera.keepInRange(mainNodeDisplayDistanceRange)
let normalizedPosition = cameraPositionHorizontally + cameraToObjectVector.normalizeHorizontally(toDistance: normalizedDistance)
return normalizedPosition
}
return self.requestedPosition
}
constraints = [constraint]
}
internal var requestedPosition: SCNVector3 = .zero {
didSet {
setupConstraint()
}
}
This starts to work fine, but I still need to find a way to animate this.
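One thing worth trying for the animation part (untested with this exact constraint, so treat it as a sketch) is SceneKit's built-in blending: every SCNConstraint has an influenceFactor, which lets the node ease toward the constrained position over several frames instead of snapping to it.
// after building the constraint exactly as above
constraint.influenceFactor = 0.1 // 0...1; lower values ease toward the constrained position more gradually
constraints = [constraint]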

Pathfinding no longer works after blocking and unblocking path

I have a map with obstacles that can be placed by the user, and sprites that move from the right of the map to the left.
As the user places obstacles, they're added to my game's graph (GKObstacleGraph), and each time I check whether a path from the start point to the end point is still available. If it's not, it means the user has blocked the path entirely, so I remove the placed obstacle.
That should open the path up again, however it doesn't. It's as if the obstacle is still there.
I have a debug function that loops through all the obstacles in my graph, showing me visually where they are. It runs each time I add an obstacle, and after I place an obstacle that blocks the path (which is then removed immediately), it shows that there is no obstacle there. The path is clear.
func createObstacle(coordinates: CCCordinates) {
let tX = -game.mapSize.widthHalf + (coordinates.x*game.tileSize.value) + (game.tileSize.valueHalf)
let tY = game.mapSize.heightHalf - (coordinates.y*game.tileSize.value) - (game.tileSize.valueHalf)
let obstacle = CCObstacle(color: color.red, size: game.obstacleSize.size)
obstacle.position = CGPoint(x: tX, y: tY)
let polygonObstacle = getPolygonObstacleForSpriteNode(obstacle)
game.graph.addObstacles([polygonObstacle])
let pathClear = checkPath()
print("Path Clear: \(pathClear)")
if pathClear {
let texture = SKTexture(imageNamed: "obstacle")
obstacle.texture = texture
obstacle.coordinates = coordinates
obstacle.name = "obstacle:\(obstacle.coordinates.convertToString())"
obstacle.zPosition = 5
for sprite in game.sprites {
updatePathForSprite(sprite)
}
panels.map.addChild(obstacle)
} else {
game.graph.removeObstacles([polygonObstacle])
print("removedObstacle")
}
drawAllObstacleOutlines()
}
This code works perfectly until pathClear returns false and the code in else runs. Even though the obstacle is removed from the graph (and the drawAllObstacleOutlines() function confirms that) pathClear always returns false afterwards.
Unless I use:
game.graph.removeAllObstacles()
If I replace the removeObstacles([polygonObstacle]) line with the line above, it lets me place obstacles again. (It works even if I add back all the obstacles that were removed, excluding the one that blocked the path.)
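Roughly, that workaround looks like this (a sketch; game.placedObstacles is an assumed collection holding every obstacle currently on the map):
game.graph.removeAllObstacles()
let survivingObstacles = game.placedObstacles.filter { $0 !== polygonObstacle }
game.graph.addObstacles(survivingObstacles)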
The function updatePathForSprite basically calls this one below:
func getPathForNodeToEndPoint(startPoint: CGPoint) -> [GKGraphNode] {
let startNode = GKGraphNode2D(point: float2(Float(startPoint.x), Float(startPoint.y)))
let endNode = GKGraphNode2D(point: float2(Float(game.finishPoint.x), Float(game.finishPoint.y)))
game.graph.connectNodeUsingObstacles(startNode, ignoringBufferRadiusOfObstacles: game.outerTileObstacles)
game.graph.connectNodeUsingObstacles(endNode, ignoringBufferRadiusOfObstacles: game.outerTileObstacles)
let path:[GKGraphNode2D] = game.graph.findPathFromNode(startNode, toNode: endNode) as! [GKGraphNode2D]
game.graph.removeNodes([startNode, endNode])
return path
}
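For reference, checkPath() is not shown above; a plausible sketch of it, built on the same helper (game.spawnPoint is an assumed property name):
func checkPath() -> Bool {
// the path counts as clear if the graph can still route from the spawn
// point to the finish point with the current obstacle set
let path = getPathForNodeToEndPoint(game.spawnPoint)
return !path.isEmpty
}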
Does anyone know what's going on here?
EDIT: I may have found something weird.
When I add the following lines above the drawAllObstacleOutlines() line:
print("Obstacles \(game.graph.obstacles.count)")
print("Nodes \(game.graph.nodes!.count)")
They increase as I add obstacles... In my case, Nodes is 404 and Obstacles is 101; however, when I place the obstacle that blocks the path, the output is: Obstacles 101, Nodes 0.
For some reason it's removing all the other nodes in the graph, even though I only removed one obstacle.

GameplayKit Pathfinding with obstacles and agents

I've been searching for days about this new framework and trying to make use of some of its functionality,
but there are some things that aren't fitting together for me, and the DemoBots source code isn't helping at some points.
I miss some kind of simple tutorial, but here are my main doubts:
let obstacles = scene["obstacle"]
polygonObstacles = SKNode.obstaclesFromNodePhysicsBodies(obstacles)
graph = GKObstacleGraph(obstacles: polygonObstacles, bufferRadius: 60.0)
func drawGraph() {
for node in graph.nodes as! [GKGraphNode2D] {
for destination in node.connectedNodes as! [GKGraphNode2D] {
var points = [CGPoint(node.position), CGPoint(destination.position)]
let shapeNode = SKShapeNode(points: &points, count: 2)
shapeNode.strokeColor = SKColor(white: 1.0, alpha: 0.5)
shapeNode.lineWidth = 5.0
shapeNode.zPosition = 3
scene.addChild(shapeNode)
}
}
}
So, when I try to draw this graph and see the connections, I get this: http://i.imgur.com/EZ3dx5v.jpg
I find it really weird: wherever I put my obstacles, even in low numbers, the left-corner portion of the screen always has more connections (the buffer radius has no influence on that).
I don't use GKComponents in my game, but I tried to run some GKAgent2Ds to hunt the player, like this:
func calculateBehaviorForAgents(){
let mainCharacterPosition = float2(scene.mainCharacter.position)
let mainCharacterGraphNode = GKGraphNode2D(point: mainCharacterPosition)
graph.connectNodeUsingObstacles(mainCharacterGraphNode)
for i in 0..<monsters.count {
let monster = monsters[i]
let agent = agents[i]
let behavior = GKBehavior()
let monsterPosition = float2(monster.position)
let monsterGraphNode = GKGraphNode2D(point: monsterPosition)
graph.connectNodeUsingObstacles(monsterGraphNode)
let pathNodes = graph.findPathFromNode(monsterGraphNode, toNode: mainCharacterGraphNode) as! [GKGraphNode2D]
let path = GKPath(graphNodes: pathNodes, radius: 0.0)
let followPathGoal = GKGoal(toFollowPath: path, maxPredictionTime: 1.0, forward: true)
behavior.setWeight(1.0, forGoal: followPathGoal)
let stayOnPathGoal = GKGoal(toStayOnPath: path, maxPredictionTime: 1.0)
behavior.setWeight(1.0, forGoal: stayOnPathGoal)
agent.behavior = behavior
graph.removeNodes([monsterGraphNode])
}
graph.removeNodes([mainCharacterGraphNode])
}
Now when I call the updateWithDeltaTime method, its delegate methods:
func agentWillUpdate(agent: GKAgent){}
func agentDidUpdate(agent: GKAgent){}
give me unexpected values for the agents; their positions don't make any sense, with giant numbers that lead outside of the battlefield.
But I saw that their velocity vectors were making sense, so I applied them to my monsters and updated each agent to its monster's position:
func updateWithDeltaTime(currentTime: CFTimeInterval) {
for i in 0..<monsters.count {
let agent = agents[i]
let monster = monsters[i]
monster.physicsBody?.velocity = CGVectorMake(CGFloat(agent.velocity.x), CGFloat(agent.velocity.y))
// note: GKAgent's updateWithDeltaTime expects the elapsed time since the
// last update, not an absolute timestamp
agent.updateWithDeltaTime(currentTime)
agent.position = float2(monster.position)
monster.gameSceneUpdate(currentTime)
}
}
Now I was getting some results, but they're far from what I want:
The monsters are not following the character to the edges or the top-right portion of the screen. I remove their nodes from the graph, but only after making a path for them to follow (the image doesn't show these points, but they exist).
Apparently that's because there was no path leading there; remember the image?
The question is: how do I make this agent system work?
Maybe I'm totally wrong in my usage of agents, goals and even the graphs! I read the documentation but I still can't make it right.
And one more thing: at first, the monsters were not avoiding obstacles, even with GKGoals like "avoidObstacles" passing the same polygonObstacles,
but when I change
graph.connectNodeUsingObstacles(mainCharacterGraphNode)
graph.connectNodeUsingObstacles(monsterGraphNode)
to
graph.connectNodeUsingObstacles(mainCharacterGraphNode, ignoringObstacles: polygonObstacles)
graph.connectNodeUsingObstacles(monsterGraphNode, ignoringObstacles: polygonObstacles)
it worked! o.O
I really need some help, thank you all :D!
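For reference, the DemoBots sample mentioned above does the agent/sprite syncing in the GKAgentDelegate callbacks rather than in the update loop. A rough sketch of that pattern (spriteForAgent is an assumed [GKAgent2D: SKNode] dictionary mapping each agent back to its monster, and it reuses the same float2/CGPoint conversion helpers as above):
func agentWillUpdate(agent: GKAgent) {
// seed the simulation with the sprite's real position before the agent steps
guard let agent2D = agent as? GKAgent2D, monster = spriteForAgent[agent2D] else { return }
agent2D.position = float2(monster.position)
}
func agentDidUpdate(agent: GKAgent) {
// then move the sprite to wherever the simulation ended up
guard let agent2D = agent as? GKAgent2D, monster = spriteForAgent[agent2D] else { return }
monster.position = CGPoint(agent2D.position)
}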