Extracting the ARMeshGeometry data to apply an AudioKit sound generator (ARKit)

How can I extract and work with the ARMeshGeometry generated by the new SceneReconstruction API on the iPad Pro? I am using Apple's Visualizing Scene Semantics sample app/code.
I am trying to attach an AudioKit AKOscillator() to the centre of each 'face' as a 3D sound source, as it is created in real time.
I can see from the LiDAR example code that this 'seems' to be the point at which a 'face' is created, but I am having trouble combining the extraction/inspection of the 'face' data with adding the AudioKit sound source.
Here is where I believe the face is determined (I am new to Swift, so I could be very wrong):
DispatchQueue.global().async {
    for anchor in meshAnchors {
        for index in 0..<anchor.geometry.faces.count {
            // Get the center of the face so that we can compare it to the given location.
            // (centerOf(faceWithIndex:) is an ARMeshGeometry extension from Apple's sample code.)
            let geometricCenterOfFace = anchor.geometry.centerOf(faceWithIndex: index)

            // Convert the face's center to world coordinates.
            var centerLocalTransform = matrix_identity_float4x4
            centerLocalTransform.columns.3 = SIMD4<Float>(geometricCenterOfFace.0, geometricCenterOfFace.1, geometricCenterOfFace.2, 1)
            // (position is a float4x4 extension from the same sample: the translation column.)
            let centerWorldPosition = (anchor.transform * centerLocalTransform).position
        }
    }
}
I would really benefit from seeing the raw array data, if that is achievable. Is this from ARGeometrySource? Can it be printed or viewed/extracted?
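From digging around Apple's sample, I think the raw vertex positions can be read out of ARGeometrySource's underlying MTLBuffer along these lines (an untested sketch; printVertices is just my own helper name, not part of Apple's sample):
import ARKit

// Untested sketch: read the raw vertex positions from ARGeometrySource's MTLBuffer.
func printVertices(of anchor: ARMeshAnchor) {
    let vertices: ARGeometrySource = anchor.geometry.vertices
    assert(vertices.format == .float3, "Expected three Floats (twelve bytes) per vertex.")
    let pointer = vertices.buffer.contents()
    for index in 0..<vertices.count {
        let byteOffset = vertices.offset + vertices.stride * index
        let vertex = pointer.advanced(by: byteOffset)
                            .assumingMemoryBound(to: (Float, Float, Float).self).pointee
        print("vertex \(index): \(vertex)")
    }
}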
I then want to add something like an oscillator/noise generator at the 3D world location of that 'face', spatialised using the array/location data, with something like:
var oscillator = AKOscillator()              // Create the sound generator
AudioKit.output = oscillator                 // Tell AudioKit what to output
AudioKit.start()                             // Start up AudioKit
oscillator.start()                           // Start the oscillator
oscillator.frequency = random(in: 220...880) // Set oscillator parameters
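For the spatialisation step, this is roughly what I have in mind, sketched here with plain AVFoundation's AVAudioEnvironmentNode rather than AudioKit (FaceToneEngine and place(at:) are placeholder names of mine, not from Apple's sample or AudioKit):
import AVFoundation
import simd

// Rough, untested sketch of positioning a mono source at a face's world-space centre.
final class FaceToneEngine {
    private let engine = AVAudioEngine()
    private let environment = AVAudioEnvironmentNode()
    private let player = AVAudioPlayerNode()

    init() {
        engine.attach(environment)
        engine.attach(player)
        // 3D positioning only applies to mono sources.
        let monoFormat = AVAudioFormat(standardFormatWithSampleRate: 44_100, channels: 1)
        engine.connect(player, to: environment, format: monoFormat)
        engine.connect(environment, to: engine.mainMixerNode, format: nil)
        try? engine.start()
    }

    // Place an already-scheduled tone at the given world position.
    func place(at worldPosition: SIMD3<Float>) {
        player.position = AVAudio3DPoint(x: worldPosition.x,
                                         y: worldPosition.y,
                                         z: worldPosition.z)
        player.play()
    }
}
The idea would be to call place(at:) with the centerWorldPosition computed in the face loop above, but I am unsure how to wire this up properly with AudioKit.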
I appreciate this is almost a two-fold question; however, any approach to the ARMeshGeometry data extraction/use, or to implementing a sound source at the centre of each 'face', or both, is welcome.
Further code for the LiDAR Visualizing Scene Semantics example is in the link above.
Thanks, your assistance is much appreciated,
R

Related

ARKit – How to track iPhone camera location during ARSession?

I'm very new to Xcode, so any and all help would be a godsend. I'm trying to write an app that records the positional and rotational data from an iPhone at a set interval and saves it to a file. Right now, I'm not sure where to look when it comes to getting that data.
CoreMotion doesn't seem to be enough, so I'm using ARKit. I have a sceneView where I can see the origin and the feature points, but again, I'm stuck on where, or even whether, the camera's position is tracked.
You can retrieve ARCamera's position and rotation via its transform matrix (simd_float4x4). This data is contained in every ARFrame of a running ARSession (for the selfie or rear camera).
let sceneView = ARSCNView(frame: .zero)
sceneView.delegate = self

// currentFrame is optional, so unwrap it safely in production code.
let frame: ARFrame = sceneView.session.currentFrame!

let cameraPosition: simd_float4 = frame.camera.transform.columns.3   // translation column
let cameraRotation: simd_float3 = frame.camera.eulerAngles           // pitch, yaw, roll
The best place for these lines is SceneKit's renderer(_:didUpdate:for:) instance method. Take into consideration that ARCamera transform values coming from IMU sensors are specially filtered.
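As a rough sketch (assuming sceneView is your ARSCNView outlet and your view controller, here called ViewController, is set as its delegate):
import ARKit
import SceneKit

// Untested sketch: log the camera transform whenever the delegate callback fires.
extension ViewController: ARSCNViewDelegate {

    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let frame = sceneView.session.currentFrame else { return }
        let position: simd_float4 = frame.camera.transform.columns.3
        let rotation: simd_float3 = frame.camera.eulerAngles
        // Append these values to your file/log at whatever interval you need.
        print("camera position: \(position), rotation: \(rotation)")
    }
}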

Perspective camera RealityKit

I created a RealityKit project which loads objects (USDZ files). Using the LiDAR is really great for occlusion and for being able to see the real-world mesh.
I would like to use something I found in Apple's documentation: PerspectiveCamera. If I understood correctly, this could be compared to a third-person camera.
I created a dedicated button in my arView which, when tapped, executes the following code:
let cameraEntity = PerspectiveCamera()
cameraEntity.camera.far = 10
cameraEntity.camera.fieldOfViewInDegrees = 60
cameraEntity.camera.near = 0.01
let cameraAnchor = AnchorEntity(world: .zero)
cameraAnchor.children.append(cameraEntity)
self.arView.scene.anchors.append(cameraAnchor)
When this code is called, the screen becomes black.... I do not understand how to place the camera so that I can see the scanned mesh.
Does anyone have an idea? Thanks in advance!
It depends where the USDZ you're looking at is located. I think the default will mean the camera is located at the origin, looking in the direction of [0, 0, -1].
You could change this using the Entity.look(at:from:upVector:relativeTo:) method, making sure that your from: parameter is far enough from the centre of your USDZ object.
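Something along these lines (untested; the distances are arbitrary and assume your USDZ model sits near the world origin):
// Untested sketch: a PerspectiveCamera pulled back and up, looking at the origin.
let cameraEntity = PerspectiveCamera()
cameraEntity.camera.fieldOfViewInDegrees = 60

let cameraAnchor = AnchorEntity(world: .zero)
cameraAnchor.addChild(cameraEntity)
arView.scene.addAnchor(cameraAnchor)

// Place the camera 2 m back and 1 m up so the model is in front of it.
cameraEntity.look(at: .zero,
                  from: SIMD3<Float>(0, 1, 2),
                  upVector: SIMD3<Float>(0, 1, 0),
                  relativeTo: nil)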

LiDAR and RealityKit – Capture a Real World Texture for a Scanned Model

Task
I would like to capture a real-world texture and apply it to a reconstructed mesh produced with the help of the LiDAR scanner. I suppose that Projection-View-Model matrices should be used for that. A texture must be made from a fixed point of view, for example, from the center of a room. However, it would be an ideal solution if we could apply environmentTexturing data, collected as a cube-map texture, to the scene.
Look at 3D Scanner App. It's a reference app that allows us to export a model with its texture.
I need to capture a texture in one pass. I do not need to update it in real time. I realize that changing the PoV leads to a wrong perception of the texture, in other words, to distortion of the texture. Also, I realize that there's dynamic tessellation in RealityKit and there's automatic texture mipmapping (the texture's resolution depends on the distance it was captured from).
import UIKit
import RealityKit
import ARKit
import Metal
import ModelIO

class ViewController: UIViewController, ARSessionDelegate {

    @IBOutlet var arView: ARView!

    override func viewDidLoad() {
        super.viewDidLoad()

        arView.session.delegate = self
        arView.debugOptions.insert(.showSceneUnderstanding)

        let config = ARWorldTrackingConfiguration()
        config.sceneReconstruction = .mesh
        config.environmentTexturing = .automatic
        arView.session.run(config)
    }
}
Question
How to capture and apply a real world texture to a reconstructed 3D mesh?
Scene Reconstruction
It's a pity, but I am still unable to capture a model's texture in real time using the LiDAR scanning process. Neither at WWDC20 nor at WWDC22 did Apple announce a native API for that (so texture capturing is currently only possible using third-party APIs - don't ask me which ones :-) ).
However, there's good news – a new methodology has emerged at last. It allows developers to create textured models from a series of shots.
Photogrammetry
The Object Capture API, announced at WWDC 2021, provides developers with the long-awaited photogrammetry tool. At the output we get a USDZ model with a UV-mapped hi-res texture. To use the Object Capture API you need macOS 12 and Xcode 13.
To create a USDZ model from a series of shots, submit all the taken images to RealityKit's PhotogrammetrySession.
Here's a code snippet that sheds some light on this process:
import RealityKit
import Combine

let pathToImages = URL(fileURLWithPath: "/path/to/my/images/")
let url = URL(fileURLWithPath: "model.usdz")

let request = PhotogrammetrySession.Request.modelFile(url: url, detail: .medium)

var configuration = PhotogrammetrySession.Configuration()
configuration.sampleOverlap = .normal
configuration.sampleOrdering = .unordered
configuration.featureSensitivity = .normal
configuration.isObjectMaskingEnabled = false

guard let session = try? PhotogrammetrySession(input: pathToImages,
                                               configuration: configuration)
else { return }

var subscriptions = Set<AnyCancellable>()

session.output.receive(on: DispatchQueue.global())
    .sink(receiveCompletion: { _ in
        // handle errors here
    }, receiveValue: { _ in
        // handle session output messages here
    })
    .store(in: &subscriptions)

try? session.process(requests: [request])
You can reconstruct USD and OBJ models with their corresponding UV-mapped textures.
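For example, one session can process several requests at different detail levels (a sketch; the file names below are just placeholders, and as far as I know the output format is inferred from the URL's file extension):
// Sketch: several requests handled by the same session.
let previewRequest = PhotogrammetrySession.Request.modelFile(
    url: URL(fileURLWithPath: "preview.usdz"),
    detail: .preview)

let fullRequest = PhotogrammetrySession.Request.modelFile(
    url: URL(fileURLWithPath: "model_full.usdz"),
    detail: .full)

try? session.process(requests: [previewRequest, fullRequest])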
How it can be done in Unity
I'd like to share some interesting info about the work of Unity's AR Foundation with a mesh coming from LiDAR. At the moment – November 01, 2020 – there's an absurd situation. It's associated with the fact that native ARKit developers cannot capture the texture of a scanned object using standard high-level RealityKit tools, but Unity's AR Foundation users (creating ARKit apps) can do this using the ARMeshManager script. I don't know whether this script was developed by the AR Foundation team or just by the developers of a small creative startup (and then subsequently bought), but the fact remains.
To use ARKit meshing with AR Foundation, you just need to add the ARMeshManager component to your scene. As you can see in the picture, there are such features as Texture Coordinates, Color and Mesh Density.
If anyone has more detailed information on how this must be configured or scripted in Unity, please post about it in this thread.
You can check out the answer over here.
It is a description of the MetalWorldTextureScan project, which demonstrates how to scan your environment and create a textured mesh using ARKit and Metal.

Do 3D models have size information?

I know 3D model files (.fbx, .obj, etc.) have accurate scale information. But do they also have embedded unit-of-measurement information (inches, centimeters)?
Is it possible to include unit information, where I draw a cube of 1×1×1 cm and then later generate another cube of 1×1×1 m? If so, how interoperable are they?
Can I generate these two different-sized cubes in one piece of software (say Unity or 3ds Max), export an FBX file, and then import it into another piece of software like PlayCanvas? Will it recognize the different sizes?
Objects imported into Unity3D usually come with a renderer component, most likely a MeshRenderer. You can then retrieve the size info with the code snippet below:
var renderer = GetComponent<Renderer>();
var bounds = renderer.bounds;          // axis-aligned bounding box in world space
var center = bounds.center;
var radius = bounds.extents.magnitude;

Displaying a Mesh based on pointcloud data

I am sampling data from the point cloud and trying to display the selected points using a mesh renderer.
I have the data but I can't visualize it. I am using the Augmented Reality application as a template.
I am doing the point saving and mesh population in a coroutine. There are no errors, but I can't see any resulting mesh.
I am wondering if there is a conflict with an existing mesh component from the point cloud example that I use for creating the cloud.
I pick a point on screen (touch) and use the index to find the coordinates and populate a Vector3[]. The mesh receives the vertices (5,000 points out of 500,000 in the point cloud).
This is where I set the mesh:
if (m_updateSubPointsMesh)
{
    int[] indices = new int[ctr];
    for (int i = 0; i < ctr; ++i)
    {
        indices[i] = i;
    }

    m_submesh.Clear();
    m_submesh.vertices = m_subpoints;
    int vertsInMesh = m_submesh.vertexCount;
    m_submesh.SetIndices(indices, MeshTopology.Points, 0);
}
m_subrenderer.material.SetColor("_SpecColor", Color.yellow);
I am using Unity Pro 5.3.3 and VS 2015 on Windows 10.
Comments and advice are very much appreciated even if they are not themselves a solution.
Jose
I sorted it out. The meshing was right; it turned out to be a bug in a transform (not Tango-defined). The mesh was being rendered at another point. I had to walk around to find it.
Thanks
You must convert the Tango mesh data to mesh data for Unity; it's not structured in the same way. I believe it's the triangles that are different. You also need to set the triangles and normals on the mesh.