I want to position an object in front of the camera without changing its parent. The object should appear in the center of the screen, at a specified distance distanceFromCamera.
The object is stored as cursorEntity and is a child of sceneEntity.
A reference to the ARView is stored as arView, and the position of cursorEntity is updated in the function updateCursorPosition.
First, add a forward property in an extension on float4x4 that returns the forward-facing direction vector of a transform matrix.
extension float4x4 {
    var forward: SIMD3<Float> {
        normalize(SIMD3<Float>(-columns.2.x, -columns.2.y, -columns.2.z))
    }
}
Then, implement the following 4 steps:
func updateCursorPosition() {
    let cameraTransform: Transform = arView.cameraTransform

    // 1. Calculate the local camera position, relative to the sceneEntity
    let localCameraPosition: SIMD3<Float> = sceneEntity.convert(position: cameraTransform.translation, from: nil)

    // 2. Get the forward-facing directional vector of the camera using the extension described above
    let cameraForwardVector: SIMD3<Float> = cameraTransform.matrix.forward

    // 3. Calculate the final local position of the cursor using distanceFromCamera
    let finalPosition: SIMD3<Float> = localCameraPosition + cameraForwardVector * distanceFromCamera

    // 4. Apply the translation
    cursorEntity.transform.translation = finalPosition
}
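If you need the cursor to track the camera continuously, one option is to call updateCursorPosition once per frame from RealityKit's scene update event. A minimal sketch, assuming you keep the returned Cancellable alive in a stored property (updateSubscription is my own name, not part of the original code):

import Combine
import RealityKit

var updateSubscription: Cancellable?

func subscribeToCursorUpdates() {
    // Fires once per rendered frame; cancel the subscription to stop updating.
    updateSubscription = arView.scene.subscribe(to: SceneEvents.Update.self) { [weak self] _ in
        self?.updateCursorPosition()
    }
}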
Related
I am trying to create an AR experience.
I load a model with animations as an Entity. Let's call it a Toy.
I create an AnchorEntity.
I attach the Toy to the AnchorEntity. Up to this point everything works great.
I want the Toy to walk in random directions, and it does the first time. Then it gets interesting; allow me to share my code:
The first method creates a new Transform for the Toy with modified x and z translation components to make the Toy move, and that is it.
func walk(completion: @escaping () -> Void) {
    guard let robot = robot else {
        return
    }

    let currentTransform = robot.transform

    guard let path = randomPath(from: currentTransform) else {
        return
    }

    let (newTranslation, travelTime) = path

    let newTransform = Transform(scale: currentTransform.scale,
                                 rotation: currentTransform.rotation,
                                 translation: newTranslation)

    robot.move(to: newTransform, relativeTo: nil, duration: travelTime)

    DispatchQueue.main.asyncAfter(deadline: .now() + travelTime + 1) {
        completion()
    }
}
We get that new translation (and the travel time) from the method below.
func randomPath(from currentTransform: Transform) -> (SIMD3<Float>, TimeInterval)? {
    // Get the robot's current translation
    let robotTranslation = currentTransform.translation

    // Generate random distances for the model to cross, relative to origin
    let randomXTranslation = Float.random(in: 0.1...0.4) * [-1.0, 1.0].randomElement()!
    let randomZTranslation = Float.random(in: 0.1...0.4) * [-1.0, 1.0].randomElement()!

    // Create a translation relative to the current transform
    let relativeXTranslation = robotTranslation.x + randomXTranslation
    let relativeZTranslation = robotTranslation.z + randomZTranslation

    // Length of the path to walk (always non-negative)
    let path = (randomXTranslation * randomXTranslation + randomZTranslation * randomZTranslation).squareRoot()

    // Calculate the time of walking based on the distance and default speed
    let timeOfWalking: Float = path / settings.robotSpeed

    // Based on the old translation, calculate the new one
    let newTranslation: SIMD3<Float> = [relativeXTranslation,
                                        Float(0),
                                        relativeZTranslation]

    return (newTranslation, TimeInterval(timeOfWalking))
}
The problem is that the value of Entity.transform.translation.y grows from 0 to some random value < 1, always after the second time the walk() method is called.
As you can see, every time the method is called, newTranslation sets the Y value to 0. And yet the Toy's translation keeps drifting upwards.
I am out of ideas; any help is appreciated. I can share the whole code if needed.
I have managed to fix the issue by specifying the relativeTo parameter as the Toy's AnchorEntity:
toy.move(to: newTransform, relativeTo: anchorEntity, duration: travelTime)
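My understanding of why this works (an assumption on my part): robot.transform is expressed relative to the Toy's parent, the AnchorEntity, while relativeTo: nil makes move(to:) interpret the target as a world-space transform, so the anchor's own offset gets mixed into the target on every call. A sketch of an alternative that keeps relativeTo: nil by building the target in world space instead (randomX and randomZ stand in for the random offsets generated inside randomPath):

// Build the target from the Toy's world-space transform instead of its
// parent-relative one, so no coordinate spaces are mixed.
let worldTransform = Transform(matrix: robot.transformMatrix(relativeTo: nil))
let newTransform = Transform(scale: worldTransform.scale,
                             rotation: worldTransform.rotation,
                             translation: worldTransform.translation + SIMD3<Float>(randomX, 0, randomZ))
robot.move(to: newTransform, relativeTo: nil, duration: travelTime)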
Recently I have been working on a LiDAR scanning project.
It is proving very difficult, and I need to manipulate the vertex data.
So I tried this code:
guard let meshAnchors = arView.session.currentFrame?.anchors.compactMap({ $0 as? ARMeshAnchor })
else { return }

meshAnchors.first?.geometry.vertices // I want to get the vertex positions
There are no vertex positions there, only buffer data.
How can I do that? Do I have to convert the buffer data to an array?
Please help.
Just went through this myself so I figured I'd drop ya my solution.
First grab this extension from Apple's Documentation to get a vertex at a specific index:
extension ARMeshGeometry {
    func vertex(at index: UInt32) -> SIMD3<Float> {
        assert(vertices.format == MTLVertexFormat.float3, "Expected three floats (twelve bytes) per vertex.")
        let vertexPointer = vertices.buffer.contents().advanced(by: vertices.offset + (vertices.stride * Int(index)))
        let vertex = vertexPointer.assumingMemoryBound(to: SIMD3<Float>.self).pointee
        return vertex
    }
}
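To answer the buffer-to-array part of the question directly, you can pull every vertex out of the Metal buffer into a Swift array with a small helper like this (my own addition, built on the extension above):

extension ARMeshGeometry {
    // Reads every vertex out of the underlying MTLBuffer into an array.
    func allVertices() -> [SIMD3<Float>] {
        (0..<vertices.count).map { vertex(at: UInt32($0)) }
    }
}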
Then, to get the positions in ARKit world space, you can do something like this:
func getVertexWorldPositions(frame: ARFrame) {
    let anchors = frame.anchors.filter { $0 is ARMeshAnchor } as! [ARMeshAnchor]

    // Each mesh geometry lives in its own anchor
    for anchor in anchors {
        // Anchor's transform in world space
        let aTrans = SCNMatrix4(anchor.transform)

        let meshGeometry = anchor.geometry
        let vertices: ARGeometrySource = meshGeometry.vertices

        for vIndex in 0..<vertices.count {
            // This will give you a vertex in local (anchor) space
            let vertex = meshGeometry.vertex(at: UInt32(vIndex))

            // Create a new matrix with the vertex coordinates
            let vTrans = SCNMatrix4MakeTranslation(vertex[0], vertex[1], vertex[2])

            // Multiply it by the anchor's transform to get it into world space
            let wTrans = SCNMatrix4Mult(vTrans, aTrans)

            // Use the coordinates for something!
            let vPos = SCNVector3(wTrans.m41, wTrans.m42, wTrans.m43)
            print(vPos)
        }
    }
}
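If you would rather stay in simd and skip the round-trip through SceneKit matrix types, the same local-to-world conversion can be written directly against the anchor's transform. A small sketch of my own, not part of the answer above:

import ARKit

// Converts one vertex of a mesh anchor from anchor-local space to ARKit world space.
func worldPosition(ofVertexAt index: UInt32, in anchor: ARMeshAnchor) -> SIMD3<Float> {
    let localVertex = anchor.geometry.vertex(at: index)
    let world4 = anchor.transform * SIMD4<Float>(localVertex.x, localVertex.y, localVertex.z, 1)
    return SIMD3<Float>(world4.x, world4.y, world4.z)
}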
I would like to be able to export a mesh and texture from the iPad Pro LiDAR.
There are examples here of how to export a mesh, but I'd like to be able to export the environment texture too:
ARKit 3.5 – How to export OBJ from new iPad Pro with LiDAR?
ARMeshGeometry stores the vertices for the mesh; would it be the case that one would have to 'record' the textures as one scans the environment, and manually apply them?
This post seems to show a way to get texture coordinates, but I can't see a way to do that with ARMeshGeometry: Save ARFaceGeometry to OBJ file
Any point in the right direction, or things to look at, greatly appreciated!
Chris
You need to compute the texture coordinates for each vertex, apply them to the mesh and supply a texture as a material to the mesh.
let geom = meshAnchor.geometry
let camera = arFrame.camera
let size = camera.imageResolution
let modelMatrix = meshAnchor.transform

// Project every vertex into the captured image to get its texture coordinate.
// (Uses the vertex(at:) extension on ARMeshGeometry shown in the earlier answer.)
let textureCoordinates = (0..<geom.vertices.count).map { index -> vector_float2 in
    let vertex = geom.vertex(at: UInt32(index))
    let vertex4 = vector_float4(vertex.x, vertex.y, vertex.z, 1)
    let world_vertex4 = simd_mul(modelMatrix, vertex4)
    let world_vector3 = simd_float3(x: world_vertex4.x, y: world_vertex4.y, z: world_vertex4.z)
    let pt = camera.projectPoint(world_vector3,
                                 orientation: .portrait,
                                 viewportSize: CGSize(width: CGFloat(size.height),
                                                      height: CGFloat(size.width)))
    let v = 1.0 - Float(pt.x) / Float(size.height)
    let u = Float(pt.y) / Float(size.width)
    return vector_float2(u, v)
}

// Construct your vertices, normals and faces sources from the source geometry
// directly, supply the computed texture coordinates to create new geometry,
// and then apply the texture.
let textureCoordinateSource = SCNGeometrySource(textureCoordinates: textureCoordinates.map {
    CGPoint(x: CGFloat($0.x), y: CGFloat($0.y))
})
let scnGeometry = SCNGeometry(sources: [verticesSource, textureCoordinateSource, normalsSource],
                              elements: [facesSource])

// UIKit has no UIImage(pixelBuffer:) initializer, so convert the captured frame via Core Image.
let ciImage = CIImage(cvPixelBuffer: arFrame.capturedImage)
let texture = CIContext().createCGImage(ciImage, from: ciImage.extent).map { UIImage(cgImage: $0) }

let imageMaterial = SCNMaterial()
imageMaterial.isDoubleSided = false
imageMaterial.diffuse.contents = texture

scnGeometry.materials = [imageMaterial]
let pcNode = SCNNode(geometry: scnGeometry)
pcNode, if added to your scene, will contain the mesh with the texture applied.
Texture coordinates computation from here
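For the verticesSource, normalsSource and facesSource placeholders above, one way I have built them from an ARMeshGeometry is with SceneKit's buffer-based initializers; a sketch under the assumption that the mesh element consists of triangles:

import ARKit
import SceneKit

extension SCNGeometrySource {
    // Wraps an ARGeometrySource's Metal buffer without copying it.
    convenience init(_ source: ARGeometrySource, semantic: Semantic) {
        self.init(buffer: source.buffer,
                  vertexFormat: source.format,
                  semantic: semantic,
                  vertexCount: source.count,
                  dataOffset: source.offset,
                  dataStride: source.stride)
    }
}

extension SCNGeometryElement {
    // Copies an ARGeometryElement's index buffer into a SceneKit element.
    convenience init(_ element: ARGeometryElement) {
        let byteCount = element.count * element.indexCountPerPrimitive * element.bytesPerIndex
        let data = Data(bytes: element.buffer.contents(), count: byteCount)
        self.init(data: data,
                  primitiveType: .triangles,
                  primitiveCount: element.count,
                  bytesPerIndex: element.bytesPerIndex)
    }
}

// Usage with the geometry from the block above:
let verticesSource = SCNGeometrySource(geom.vertices, semantic: .vertex)
let normalsSource = SCNGeometrySource(geom.normals, semantic: .normal)
let facesSource = SCNGeometryElement(geom.faces)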
Check out my answer over here
It's a description of this project: MetalWorldTextureScan which demonstrates how to scan your environment and create a textured mesh using ARKit and Metal.
With the iPhone X TrueDepth camera it is possible to get the 3D coordinates of any object and use that information to position and scale it, but on older iPhones we don't have access to AR on the front-facing camera. What I've done so far is detect the face using Apple's Vision framework and draw some 2D paths around the face and landmarks.
I've made a SceneView and applied it as the front layer of my view with a clear background; beneath it is an AVCaptureVideoPreviewLayer. After detecting the face, my 3D object appears on the screen, but positioning and scaling it correctly according to the face bounding box requires unprojecting and other steps that I got stuck on. I've also tried converting the 2D bounding box to 3D using CATransform3D, but I failed. I am wondering if what I want to achieve is even possible. If I'm not mistaken, Snapchat was doing this before ARKit was available on the iPhone.
override func viewDidLoad() {
    super.viewDidLoad()

    self.view.addSubview(self.sceneView)
    self.sceneView.frame = self.view.bounds
    self.sceneView.backgroundColor = .clear
    self.node = self.scene.rootNode.childNode(withName: "face",
                                              recursively: true)!
}
fileprivate func updateFaceView(for result: VNFaceObservation, twoDFace: Face2D) {
    let box = convert(rect: result.boundingBox)

    defer {
        DispatchQueue.main.async {
            self.faceView.setNeedsDisplay()
        }
    }

    faceView.boundingBox = box
    self.sceneView.scene?.rootNode.addChildNode(self.node)

    let unprojectedBox = SCNVector3(box.origin.x, box.origin.y, 0.8)
    let worldPoint = sceneView.unprojectPoint(unprojectedBox)
    self.node.position = worldPoint

    /* This is where I have to unproject, to convert the value from a
       2D point to a 3D point; this is also where the issue is. */
}
The only way to achieve this is to use SceneKit with an orthographic camera and use SCNGeometrySource to match the landmarks from Vision to the vertices of the mesh.
First, you need a mesh with the same number of vertices as Vision provides (66-77, depending on which Vision revision you're on). You can create one using a tool like Blender.
The mesh on Blender
Then, in code, each time you process your landmarks, follow these steps:
1- Get the mesh vertices:
func getVertices() -> [SCNVector3] {
    var result = [SCNVector3]()
    let planeSources = shape!.geometry?.sources(for: SCNGeometrySource.Semantic.vertex)

    if let planeSource = planeSources?.first {
        let stride = planeSource.dataStride
        let offset = planeSource.dataOffset
        let componentsPerVector = planeSource.componentsPerVector
        let bytesPerVector = componentsPerVector * planeSource.bytesPerComponent

        let vectors = [SCNVector3](repeating: SCNVector3Zero, count: planeSource.vectorCount)
        let vertices = vectors.enumerated().map({ (index: Int, element: SCNVector3) -> SCNVector3 in
            var vectorData = [Float](repeating: 0, count: componentsPerVector)
            let byteRange = NSMakeRange(index * stride + offset, bytesPerVector)
            let data = planeSource.data
            (data as NSData).getBytes(&vectorData, range: byteRange)
            return SCNVector3(x: vectorData[0], y: vectorData[1], z: vectorData[2])
        })
        result = vertices
    }

    return result
}
2- Unproject each landmark captured by Vision and keep them in an array of SCNVector3:

let unprojectedLandmark = sceneView.unprojectPoint(SCNVector3(landmarks[i].x, landmarks[i].y, 0))
3- Modify the geometry using the new vertices:
func reshapeGeometry(_ vertices: [SCNVector3]) {
    let source = SCNGeometrySource(vertices: vertices)

    var newSources = [SCNGeometrySource]()
    newSources.append(source)

    for source in shape!.geometry!.sources {
        if source.semantic != SCNGeometrySource.Semantic.vertex {
            newSources.append(source)
        }
    }

    let geometry = SCNGeometry(sources: newSources, elements: shape!.geometry?.elements)
    let material = shape!.geometry?.firstMaterial
    shape!.geometry = geometry
    shape!.geometry?.firstMaterial = material
}
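Putting the three steps together, the per-frame update ends up looking roughly like this; a sketch assuming landmarks is the array of Vision landmark points already converted to view coordinates, and shape is the face node:

func updateFaceMesh(with landmarks: [CGPoint]) {
    // Step 1 is useful as a sanity check that the mesh matches the landmark count.
    assert(getVertices().count == landmarks.count, "Mesh vertex count must match Vision landmark count")

    // Step 2: unproject every 2D landmark into the scene.
    var newVertices = [SCNVector3]()
    for point in landmarks {
        let unprojected = sceneView.unprojectPoint(SCNVector3(Float(point.x), Float(point.y), 0))
        newVertices.append(unprojected)
    }

    // Step 3: replace the mesh's vertex source with the unprojected landmarks.
    reshapeGeometry(newVertices)
}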
I was able to do that and that was my method.
Hope this helps!
I would suggest looking at Google's ARCore products, which support an Apple AR scene with the back- or front-facing camera, and add some functionality beyond Apple's when it comes to devices without a face-depth camera.
Apple's Vision framework is almost the same as Google's vision framework, which returns 2D points representing the eyes/mouth/nose etc., and a face-tilt component.
However, if you want a way to simply apply 2D textures to a responsive 3D face, or alternatively attach 3D models to points on the face, then take a look at Google's ARCore Augmented Faces framework. It has great sample code for iOS and Android.
I would like to create a node in the sceneView that is displayed at its normal position in the scene until the user gets too close to or too far from it. Then it should be displayed in the same direction from the user, but at a restricted distance. The best I have found so far is SCNDistanceConstraint, which limits this distance, but the problem is that after the constraint moves the node, the node stays in this new place. For example, I want to limit the node to be displayed no closer than one meter from the camera. As I get closer to the node it is pushed away, but when I move the camera back, the node should return to its original position; for now it stays where it was pushed. Is there some easy way to get such behavior?
I'm not entirely sure I have understood what you mean, but it seems you always want your SCNNode to be positioned 1 m away from the camera, while keeping its other x, y values?
If this is the case then you can do something like this:
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    //1. Get The Current Node On Screen & The Camera Point Of View
    guard let nodeToPosition = currentNode,
          let pointOfView = augmentedRealityView.pointOfView else { return }

    //2. Set The Position Of The Node 1m Away From The Camera
    nodeToPosition.simdPosition.z = pointOfView.presentation.worldPosition.z - 1

    //3. Get The Current Distance Between The SCNNode & The Camera
    let positionOfNode = SCNVector3ToGLKVector3(nodeToPosition.presentation.worldPosition)
    let positionOfCamera = SCNVector3ToGLKVector3(pointOfView.presentation.worldPosition)
    let distanceBetweenNodeAndCamera = GLKVector3Distance(positionOfNode, positionOfCamera)

    print(distanceBetweenNodeAndCamera)
}
I have added in part three, so you could use the distance to do some additional calculations etc.
Hope this points you in the right direction...
The answer above is not exactly what I need. I want the object to be displayed as if it were simply placed at its normal position, so I can move closer to and farther from it, but with a limit on how close or far I can get. When I'm beyond that limit, the object should start moving so that it always stays within the given distance range from the camera. Anyway, I think I have found the right direction. Instead of assigning a position, I'm creating a constraint that constantly updates my node's position to be either the requested position, if it is within the given range from the user, or, if not, an adjusted position that fits within that range:
private func setupConstraint() {
    guard let mainNodeDisplayDistanceRange = mainNodeDisplayDistanceRange else {
        constraints = nil
        position = requestedPosition
        return
    }

    let constraint = SCNTransformConstraint.positionConstraint(inWorldSpace: true) { (node, currentPosition) -> SCNVector3 in
        var cameraPositionHorizontally = (self.augmentedRealityView as! AugmentedRealityViewARKit).currentCameraPosition
        cameraPositionHorizontally.y = self.requestedPosition.y

        let cameraToObjectVector = self.requestedPosition - cameraPositionHorizontally
        let horizontalDistanceFromCamera = Double(cameraToObjectVector.distanceHorizontal)

        guard mainNodeDisplayDistanceRange ~= horizontalDistanceFromCamera else {
            let normalizedDistance = horizontalDistanceFromCamera.keepInRange(mainNodeDisplayDistanceRange)
            let normalizedPosition = cameraPositionHorizontally + cameraToObjectVector.normalizeHorizontally(toDistance: normalizedDistance)
            return normalizedPosition
        }

        return self.requestedPosition
    }

    constraints = [constraint]
}

internal var requestedPosition: SCNVector3 = .zero {
    didSet {
        setupConstraint()
    }
}
This starts to work fine, but I still need to find a way to animate this.
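One idea for the animation part, offered as a suggestion I have not verified: SCNConstraint exposes influenceFactor and isIncremental, and with isIncremental set to true a small influence factor makes the constraint blend toward its target a little each frame, which behaves like a simple ease instead of a hard snap:

// Applied to the constraint created in setupConstraint() above.
constraint.isIncremental = true
constraint.influenceFactor = 0.1   // smaller values = slower, smoother correction
constraints = [constraint]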