I'm working on a LiDAR scanning project, which has been quite difficult, and I need to manipulate the vertex data. I tried this code:
guard let meshAnchors = arView.session.currentFrame?.anchors.compactMap({ $0 as? ARMeshAnchor })
else { return }
meshAnchors.first?.geometry.vertices // I want the vertex positions
There are no vertex positions there, only buffer data. How can I get them? Do I need to convert the buffer data into an array? Please help.
Just went through this myself, so I figured I'd share my solution.
First, grab this extension from Apple's documentation to get a vertex at a specific index:
extension ARMeshGeometry {
    func vertex(at index: UInt32) -> SIMD3<Float> {
        assert(vertices.format == MTLVertexFormat.float3, "Expected three floats (twelve bytes) per vertex.")
        let vertexPointer = vertices.buffer.contents().advanced(by: vertices.offset + (vertices.stride * Int(index)))
        let vertex = vertexPointer.assumingMemoryBound(to: SIMD3<Float>.self).pointee
        return vertex
    }
}
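To answer the buffer-to-array part of the question directly: with that extension in scope, a small helper of my own (my addition, not from Apple's docs) can walk the buffer once and hand back a plain array:

extension ARMeshGeometry {
    // Copy every vertex out of the Metal buffer into a plain Swift array.
    // Usage: let positions = meshAnchors.first?.geometry.allVertices()
    func allVertices() -> [SIMD3<Float>] {
        (0..<vertices.count).map { vertex(at: UInt32($0)) }
    }
}

Note these positions are still in the anchor's local space; read on for getting them into world space.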
Then, to get the positions in ARKit world space, you can do something like this:
func getVertexWorldPositions(frame: ARFrame) {
    let anchors = frame.anchors.compactMap { $0 as? ARMeshAnchor }
    // Each mesh geometry lives in its own anchor
    for anchor in anchors {
        // The anchor's transform in world space
        let aTrans = SCNMatrix4(anchor.transform)
        let meshGeometry = anchor.geometry
        let vertices: ARGeometrySource = meshGeometry.vertices
        for vIndex in 0..<vertices.count {
            // This gives you a vertex in local (anchor) space
            let vertex = meshGeometry.vertex(at: UInt32(vIndex))
            // Create a new matrix with the vertex coordinates
            let vTrans = SCNMatrix4MakeTranslation(vertex[0], vertex[1], vertex[2])
            // Multiply it by the anchor's transform to get it into world space
            let wTrans = SCNMatrix4Mult(vTrans, aTrans)
            // Use the coordinates for something!
            let vPos = SCNVector3(wTrans.m41, wTrans.m42, wTrans.m43)
            print(vPos)
        }
    }
}
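If you'd rather stay in simd and skip the SCNMatrix4 round-trip, the same world-space math can be done directly with the anchor's simd_float4x4 transform. A variant sketch (my own, not part of the original solution):

func getVertexWorldPositionsSIMD(frame: ARFrame) {
    for anchor in frame.anchors.compactMap({ $0 as? ARMeshAnchor }) {
        let geometry = anchor.geometry
        for vIndex in 0..<geometry.vertices.count {
            // Promote the local-space vertex to homogeneous coordinates,
            // then multiply by the anchor's world transform.
            let local = geometry.vertex(at: UInt32(vIndex))
            let world = anchor.transform * SIMD4<Float>(local, 1)
            print(SIMD3<Float>(world.x, world.y, world.z))
        }
    }
}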
I am trying to create an AR experience.
I load a model with animations as an Entity; let's call it a Toy.
I create an AnchorEntity.
I attach the Toy to the AnchorEntity. Up to this point everything works great.
I want the Toy to walk in random directions, and it does the first time. Then it gets interesting; allow me to share my code.
The first method creates a new Transform for the Toy with modified x and z translation components to make the Toy move, and that is it.
func walk(completion: @escaping () -> Void) {
    guard let robot = robot else {
        return
    }
    let currentTransform = robot.transform
    guard let path = randomPath(from: currentTransform) else {
        return
    }
    let (newTranslation, travelTime) = path
    let newTransform = Transform(scale: currentTransform.scale,
                                 rotation: currentTransform.rotation,
                                 translation: newTranslation)
    robot.move(to: newTransform, relativeTo: nil, duration: travelTime)
    DispatchQueue.main.asyncAfter(deadline: .now() + travelTime + 1) {
        completion()
    }
}
We get that new Transform from the method below.
func randomPath(from currentTransform: Transform) -> (SIMD3<Float>, TimeInterval)? {
    // Get the robot's current translation
    let robotTranslation = currentTransform.translation
    // Generate random distances for the model to cross, relative to origin
    let randomXTranslation = Float.random(in: 0.1...0.4) * [-1.0, 1.0].randomElement()!
    let randomZTranslation = Float.random(in: 0.1...0.4) * [-1.0, 1.0].randomElement()!
    // Create a translation relative to the current transform
    let relativeXTranslation = robotTranslation.x + randomXTranslation
    let relativeZTranslation = robotTranslation.z + randomZTranslation
    // Find the path length
    var path = (randomXTranslation * randomXTranslation + randomZTranslation * randomZTranslation).squareRoot()
    // Keep the path positive
    if path < 0 { path = -path }
    // Calculate the walking time based on the distance and the default speed
    let timeOfWalking: Float = path / settings.robotSpeed
    // Based on the old translation, calculate the new one
    let newTranslation: SIMD3<Float> = [relativeXTranslation,
                                        Float(0),
                                        relativeZTranslation]
    return (newTranslation, TimeInterval(timeOfWalking))
}
The problem is that the value of Entity.transform.translation.y grows from 0 to some random value below 1, always starting from the second time walk() is called.
As you can see, every time the method is called, newTranslation sets the Y value to 0, and yet the Toy's Y translation keeps growing.
I am out of ideas; any help is appreciated. I can share the whole code if needed.
I managed to fix the issue by specifying the relativeTo parameter as the Toy's AnchorEntity:
toy.move(to: newTransform, relativeTo: anchorEntity, duration: travelTime)
With relativeTo: nil the target transform is interpreted in world space, while newTranslation was computed from the Toy's transform in its parent's space; passing the AnchorEntity makes both transforms use the same coordinate space.
I want to position an object in front of the camera without changing its parent. The object should be in the center of the screen, at a specified distance distanceFromCamera.
The object is stored as cursorEntity and is a child of sceneEntity.
A reference to the ARView is stored as arView, and the position of cursorEntity gets updated in the function updateCursorPosition.
First, add forward in an extension to float4x4 that gives the forward-facing directional vector of a transform matrix.
extension float4x4 {
    // Cameras look down their local -Z axis, so the forward vector
    // is the negated third column of the transform matrix.
    var forward: SIMD3<Float> {
        normalize(SIMD3<Float>(-columns.2.x, -columns.2.y, -columns.2.z))
    }
}
Then, implement the following 4 steps:
func updateCursorPosition() {
    let cameraTransform: Transform = arView.cameraTransform
    // 1. Calculate the local camera position, relative to the sceneEntity
    let localCameraPosition: SIMD3<Float> = sceneEntity.convert(position: cameraTransform.translation, from: nil)
    // 2. Get the forward-facing directional vector of the camera using the extension described above
    let cameraForwardVector: SIMD3<Float> = cameraTransform.matrix.forward
    // 3. Calculate the final local position of the cursor using distanceFromCamera
    let finalPosition: SIMD3<Float> = localCameraPosition + cameraForwardVector * distanceFromCamera
    // 4. Apply the translation
    cursorEntity.transform.translation = finalPosition
}
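To keep the cursor glued to the camera, call this on every frame. A minimal sketch, assuming the containing class (here a hypothetical MyViewController) has been set as the session delegate via arView.session.delegate = self:

extension MyViewController: ARSessionDelegate {
    // Called once per camera frame.
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        updateCursorPosition()
    }
}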
I'm trying to create a function that calculates the coordinate at 90° and 270° from each of the given coordinates
and saves them in an array, arrayLocationOffset.
var arrayLocations: [CLLocationCoordinate2D] = [
    CLLocationCoordinate2D(latitude: 45.15055543976834, longitude: 11.656891939801518),
    CLLocationCoordinate2D(latitude: 45.154446871287924, longitude: 11.66058789179949),
]
// simplified to only 2 coordinates
I use these for loops to perform the action:
func createCoordinatePoly() {
    let polypoint = arrayLocations.count
    for i in 0..<polypoint {
        let coord = arrayLocations[i]
        // calc LH
        let lhCoord = coord270(coord: coord)
        arrayLocationOffset.append(lhCoord)
    }
    for i in 0..<polypoint { // should reverse this loop and start from the last index
        let coord = arrayLocations[i]
        // calc RH
        let rhCoord = coord90(coord: coord)
        arrayLocationOffset.append(rhCoord)
    }
    debugPrint("added to poly array \(arrayLocationOffset.count)")
}
The arrayLocationOffset must follow a specific sequence, otherwise I can't properly draw the polygon in MapKit.
To get that order, the second for loop (where I calculate the RH side) should start from the last array index and work back to the first.
Is this possible?
Thanks
As the name implies, you can reverse an array just with reversed(). On an Array it returns a lazy ReversedCollection view, so there is no copying cost.
Your code is rather Objective-C-ish. A better way to enumerate the array is fast enumeration, because you don't actually need the index:
for coord in arrayLocations { ...
and
for coord in arrayLocations.reversed() { ...
However, there is a still swiftier way: mapping the arrays.
func createCoordinatePoly() {
    arrayLocationOffset.append(contentsOf: arrayLocations.map { coord270(coord: $0) })
    arrayLocationOffset.append(contentsOf: arrayLocations.reversed().map { coord90(coord: $0) })
    debugPrint("added to poly array \(arrayLocationOffset.count)")
}
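Once arrayLocationOffset is filled in this order, it can be handed straight to MapKit. A minimal sketch, assuming an MKMapView named mapView:

import MapKit

// Build the polygon overlay from the ordered offset coordinates.
let polygon = MKPolygon(coordinates: arrayLocationOffset, count: arrayLocationOffset.count)
mapView.addOverlay(polygon)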
Solved:

var reversIndex = polypoint - 1
for _ in 0..<polypoint {
    let coord = arrayLocations[reversIndex]
    // calc RH
    let rhCoord = coord90(coord: coord)
    arrayLocationOffset.append(rhCoord)
    reversIndex -= 1
}

I use a reverse index to start from the end.
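The coord90(coord:) and coord270(coord:) helpers aren't shown in the question. For completeness, a hypothetical sketch of the math they would need underneath: the standard destination-point formula for the coordinate at a given bearing and distance (the function name, parameters, and spherical-Earth assumption are all mine):

import CoreLocation

// Hypothetical helper: destination point at `bearingDegrees` and
// `distanceMeters` from `coord`, on a spherical Earth of radius 6,371 km.
func offsetCoordinate(from coord: CLLocationCoordinate2D,
                      bearingDegrees: Double,
                      distanceMeters: Double) -> CLLocationCoordinate2D {
    let earthRadius = 6_371_000.0
    let bearing = bearingDegrees * .pi / 180
    let lat1 = coord.latitude * .pi / 180
    let lon1 = coord.longitude * .pi / 180
    let angularDistance = distanceMeters / earthRadius
    let lat2 = asin(sin(lat1) * cos(angularDistance)
                  + cos(lat1) * sin(angularDistance) * cos(bearing))
    let lon2 = lon1 + atan2(sin(bearing) * sin(angularDistance) * cos(lat1),
                            cos(angularDistance) - sin(lat1) * sin(lat2))
    return CLLocationCoordinate2D(latitude: lat2 * 180 / .pi, longitude: lon2 * 180 / .pi)
}

coord90 and coord270 would then call something like this with bearings offset 90° and 270° from the path's heading.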
With the iPhone X TrueDepth camera it's possible to get the 3D coordinates of any object and use that information to position and scale it, but older iPhones don't have access to AR on the front-facing camera. What I've done so far is detect the face using Apple's Vision framework and draw some 2D paths around the face and its landmarks.
I've made an SCNView with a clear background and applied it as the front layer of my view, with an AVCaptureVideoPreviewLayer beneath it. After detecting the face, my 3D object appears on the screen, but positioning and scaling it correctly according to the face's boundingBox requires unprojecting and other steps that I got stuck on. I've also tried converting the 2D bounding box to 3D using CATransform3D, but I failed. I'm wondering if what I want to achieve is even possible. I remember Snapchat was doing this before ARKit was available on the iPhone, if I'm not mistaken!
override func viewDidLoad() {
    super.viewDidLoad()
    self.view.addSubview(self.sceneView)
    self.sceneView.frame = self.view.bounds
    self.sceneView.backgroundColor = .clear
    self.node = self.scene.rootNode.childNode(withName: "face", recursively: true)!
}
fileprivate func updateFaceView(for result: VNFaceObservation, twoDFace: Face2D) {
    let box = convert(rect: result.boundingBox)
    defer {
        DispatchQueue.main.async {
            self.faceView.setNeedsDisplay()
        }
    }
    faceView.boundingBox = box
    self.sceneView.scene?.rootNode.addChildNode(self.node)
    let unprojectedBox = SCNVector3(box.origin.x, box.origin.y, 0.8)
    let worldPoint = sceneView.unprojectPoint(unprojectedBox)
    self.node.position = worldPoint
    /* Here I need to unproject to convert the 2D point
       to a 3D point; this is where I'm stuck. */
}
The only way to achieve this is to use SceneKit with an orthographic camera and SCNGeometrySource to match the landmarks from Vision to the vertices of the mesh.
First, you need a mesh with the same number of vertices as Vision produces (66 to 77, depending on which Vision revision you're on). You can create one using a tool like Blender.
(Image: the mesh in Blender)
Then, in code, each time you process your landmarks, follow these steps:
1- Get the mesh vertices:
func getVertices() -> [SCNVector3] {
    var result = [SCNVector3]()
    let planeSources = shape!.geometry?.sources(for: SCNGeometrySource.Semantic.vertex)
    if let planeSource = planeSources?.first {
        let stride = planeSource.dataStride
        let offset = planeSource.dataOffset
        let componentsPerVector = planeSource.componentsPerVector
        let bytesPerVector = componentsPerVector * planeSource.bytesPerComponent
        let vectors = [SCNVector3](repeating: SCNVector3Zero, count: planeSource.vectorCount)
        let vertices = vectors.enumerated().map({ (index: Int, element: SCNVector3) -> SCNVector3 in
            var vectorData = [Float](repeating: 0, count: componentsPerVector)
            let byteRange = NSMakeRange(index * stride + offset, bytesPerVector)
            let data = planeSource.data
            (data as NSData).getBytes(&vectorData, range: byteRange)
            return SCNVector3(x: vectorData[0], y: vectorData[1], z: vectorData[2])
        })
        result = vertices
    }
    return result
}
2- Unproject each landmark captured by Vision and keep them in an SCNVector3 array:

let unprojectedLandmark = sceneView.unprojectPoint(SCNVector3(landmarks[i].x, landmarks[i].y, 0))
3- Modify the geometry using the new vertices:
func reshapeGeometry(_ vertices: [SCNVector3]) {
    let source = SCNGeometrySource(vertices: vertices)
    var newSources = [SCNGeometrySource]()
    newSources.append(source)
    for source in shape!.geometry!.sources {
        if source.semantic != SCNGeometrySource.Semantic.vertex {
            newSources.append(source)
        }
    }
    let geometry = SCNGeometry(sources: newSources, elements: shape!.geometry?.elements)
    let material = shape!.geometry?.firstMaterial
    shape!.geometry = geometry
    shape!.geometry?.firstMaterial = material
}
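Putting the three steps together, a minimal per-frame sketch (assuming landmarks is the [CGPoint] array of screen-space landmark points extracted from the VNFaceObservation, with sceneView and reshapeGeometry as above):

func processLandmarks(_ landmarks: [CGPoint]) {
    // Step 2: unproject every 2D landmark into the scene...
    var newVertices = [SCNVector3]()
    for point in landmarks {
        newVertices.append(sceneView.unprojectPoint(SCNVector3(Float(point.x), Float(point.y), 0)))
    }
    // Step 3: ...and rebuild the mesh geometry from them.
    reshapeGeometry(newVertices)
}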
That's the method I used to get it working.
Hope this helps!
I would suggest looking at Google's ARCore products, which support an Apple AR scene with the back or front-facing camera and add some functionality beyond Apple's when it comes to devices without a face-depth camera.
Apple's Vision framework is much like Google's equivalent, returning 2D points representing the eyes, mouth, nose, etc., plus a face-tilt component.
However, if you want a way to simply apply 2D textures to a responsive 3D face, or alternatively attach 3D models to points on the face, take a look at Google's ARCore Augmented Faces framework. It has great sample code for iOS and Android.
I've been having some fun using GameplayKit in a SceneKit (and ARKit, although that isn't important to this question) application.
Specifically, I have been using Entities, Components, and Agents for behavior, and it has all been working great, except for one thing.
I have managed to use a GKAgent inside a component to move around a scene and avoid other objects; that all seems to be working. However, I can only get the GKAgent's position working, not the rotation property.
import Foundation
import SceneKit
import GameplayKit

class MoveComponent: GKScnComponent, GKAgentDelegate {
    // This class is abbreviated for brevity

    func agentWillUpdate(_ agent: GKAgent) {
        guard let visualComponent = entity?.component(ofType: self.visualType) as? VisualComponent else {
            return
        }
        // Works with just "position", but turns possessed with both rotation and position
        agentSeeking.rotation = convertToFloat3x3(float4x4: visualComponent.parentMostNode.simdTransform)
        agentSeeking.position = visualComponent.parentMostNode.simdPosition
    }

    func agentDidUpdate(_ agent: GKAgent) {
        guard let visualComponent = entity?.component(ofType: self.visualType) as? VisualComponent else {
            return
        }
        // Works with just "position", but turns possessed with both rotation and position
        visualComponent.parentMostNode.simdPosition = agentSeeking.position
        visualComponent.parentMostNode.simdTransform = convertToFloat4x4(float3x3: agentSeeking.rotation)
    }
}
I have written some conversion code between 3x3 matrices and 4x4 matrices, thanks to this question:
Converting matrix_float3x3 rotation to SceneKit
import Foundation
import GameplayKit

func convertToFloat3x3(float4x4: simd_float4x4) -> simd_float3x3 {
    let column0 = convertToFloat3(float4: float4x4.columns.0)
    let column1 = convertToFloat3(float4: float4x4.columns.1)
    let column2 = convertToFloat3(float4: float4x4.columns.2)
    return simd_float3x3(column0, column1, column2)
}

func convertToFloat3(float4: simd_float4) -> simd_float3 {
    return simd_float3(float4.x, float4.y, float4.z)
}

func convertToFloat4x4(float3x3: simd_float3x3) -> simd_float4x4 {
    let column0 = convertToFloat4(float3: float3x3.columns.0)
    let column1 = convertToFloat4(float3: float3x3.columns.1)
    let column2 = convertToFloat4(float3: float3x3.columns.2)
    // The fourth column carries no translation, just the homogeneous 1
    let column3 = simd_float4(x: 0, y: 0, z: 0, w: 1)
    return simd_float4x4(column0, column1, column2, column3)
}

func convertToFloat4(float3: simd_float3) -> simd_float4 {
    return simd_float4(float3.x, float3.y, float3.z, 0)
}
I'm a little new to all this, and I'm not a linear algebra guru, so I'm not 100% certain my matrix conversion functions do exactly what they're supposed to.
When I just use the agent's position property, everything is fine, but when I add in the rotation/transform from the agent to the node, everything starts acting possessed.
Any thoughts, pointers, or help with what I'm doing wrong?
I figured it out: matrix multiplication is the key to getting this working. Anyone with experience in 3D game development could probably have told me that without any mental exertion :)
func agentWillUpdate(_ agent: GKAgent) {
    guard let visualComponent = entity?.component(ofType: self.visualType) as? VisualComponent else {
        return
    }
    agentSeeking.position = visualComponent.parentMostNode.simdPosition
    let rotation = visualComponent.parentMostNode.rotation
    let rotationMatrix = SCNMatrix4MakeRotation(rotation.w, rotation.x, rotation.y, rotation.z)
    let float4x4 = SCNMatrix4ToMat4(rotationMatrix)
    agentSeeking.rotation = convertToFloat3x3(float4x4: float4x4)
}

func agentDidUpdate(_ agent: GKAgent) {
    guard let visualComponent = entity?.component(ofType: self.visualType) as? VisualComponent else {
        return
    }
    let rotation3x3 = agentSeeking.rotation
    let rotation4x4 = convertToFloat4x4(float3x3: rotation3x3)
    let rotationMatrix = SCNMatrix4FromMat4(rotation4x4)
    let position = agentSeeking.position
    let translationMatrix = SCNMatrix4MakeTranslation(position.x, position.y, position.z)
    // Rotate first, then translate: with SceneKit's row-major convention,
    // SCNMatrix4Mult(rotation, translation) applies the rotation before the translation.
    let transformMatrix = SCNMatrix4Mult(rotationMatrix, translationMatrix)
    visualComponent.parentMostNode.transform = transformMatrix
}
I hope somebody finds this useful.
For converting an agent's 3x3 rotation matrix to a scene node's rotation, one convenient way is to go through simd_quatf:

node.simdWorldOrientation = simd_quatf(agent.rotation)

Similarly, simd_float3x3 lets you convert from a quaternion (e.g. simdWorldOrientation) back to a 3x3 rotation matrix:

agent.rotation = simd_float3x3(node.simdWorldOrientation)

These let you avoid the extra conversion functions.
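For example, the two delegate methods from the answer above collapse to this (a sketch, assuming agent is a GKAgent3D and node is the SCNNode it drives):

func agentWillUpdate(_ agent: GKAgent) {
    guard let agent3D = agent as? GKAgent3D else { return }
    // Feed the node's world-space pose to the agent before the simulation step.
    agent3D.rotation = simd_float3x3(node.simdWorldOrientation)
    agent3D.position = node.simdWorldPosition
}

func agentDidUpdate(_ agent: GKAgent) {
    guard let agent3D = agent as? GKAgent3D else { return }
    // Copy the simulated pose back to the node after the step.
    node.simdWorldOrientation = simd_quatf(agent3D.rotation)
    node.simdWorldPosition = agent3D.position
}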