iPad Pro LiDAR - Export Geometry & Texture - ARKit

I would like to be able to export a mesh and texture from the iPad Pro LiDAR.
There are examples here of how to export a mesh, but I'd like to be able to export the environment texture too:
ARKit 3.5 – How to export OBJ from new iPad Pro with LiDAR?
ARMeshGeometry stores the vertices for the mesh; would one have to 'record' the textures while scanning the environment and then manually apply them?
This post seems to show a way to get texture coordinates, but I can't see a way to do that with ARMeshGeometry: Save ARFaceGeometry to OBJ file
Any pointers in the right direction, or things to look at, are greatly appreciated!
Chris

You need to compute the texture coordinates for each vertex, apply them to the mesh and supply a texture as a material to the mesh.
let geom = meshAnchor.geometry
let vertices = geom.vertices          // ARGeometrySource backed by an MTLBuffer
let camera = arFrame.camera
let size = camera.imageResolution
let modelMatrix = meshAnchor.transform

// Project every vertex into the captured camera image to get its UV.
let textureCoordinates = (0..<vertices.count).map { index -> vector_float2 in
    // Read the vertex out of the source buffer.
    let pointer = vertices.buffer.contents()
        .advanced(by: vertices.offset + vertices.stride * index)
    let vertex = pointer.assumingMemoryBound(to: (Float, Float, Float).self).pointee
    // Transform it into world space and project it onto the image plane.
    let vertex4 = vector_float4(vertex.0, vertex.1, vertex.2, 1)
    let worldVertex4 = simd_mul(modelMatrix, vertex4)
    let worldVector3 = simd_float3(x: worldVertex4.x, y: worldVertex4.y, z: worldVertex4.z)
    let pt = camera.projectPoint(worldVector3,
                                 orientation: .portrait,
                                 viewportSize: CGSize(width: CGFloat(size.height),
                                                      height: CGFloat(size.width)))
    let v = 1.0 - Float(pt.x) / Float(size.height)
    let u = Float(pt.y) / Float(size.width)
    return vector_float2(u, v)
}
// construct your vertices, normals and faces from the source geometry
// directly and supply the computed texture coords to create new geometry
// and then apply the texture.
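// A minimal sketch (my assumption, not part of the original answer) of how
// those sources could be built from the ARMeshGeometry buffers; the names
// verticesSource, normalsSource, textureCoordinatesSource and facesSource
// below are just illustrative.
let verticesSource = SCNGeometrySource(buffer: vertices.buffer,
                                       vertexFormat: vertices.format,
                                       semantic: .vertex,
                                       vertexCount: vertices.count,
                                       dataOffset: vertices.offset,
                                       dataStride: vertices.stride)

let normals = geom.normals
let normalsSource = SCNGeometrySource(buffer: normals.buffer,
                                      vertexFormat: normals.format,
                                      semantic: .normal,
                                      vertexCount: normals.count,
                                      dataOffset: normals.offset,
                                      dataStride: normals.stride)

let textureCoordinatesSource = SCNGeometrySource(textureCoordinates: textureCoordinates.map {
    CGPoint(x: CGFloat($0.x), y: CGFloat($0.y))
})

let faces = geom.faces
let facesData = Data(bytes: faces.buffer.contents(),
                     count: faces.count * faces.indexCountPerPrimitive * faces.bytesPerIndex)
let facesSource = SCNGeometryElement(data: facesData,
                                     primitiveType: .triangles,
                                     primitiveCount: faces.count,
                                     bytesPerIndex: faces.bytesPerIndex)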
let scnGeometry = SCNGeometry(sources: [verticesSource, textureCoordinatesSource, normalsSource],
                              elements: [facesSource])

// One way to turn the captured camera frame into a UIImage for the material.
let texture = UIImage(ciImage: CIImage(cvPixelBuffer: arFrame.capturedImage))
let imageMaterial = SCNMaterial()
imageMaterial.isDoubleSided = false
imageMaterial.diffuse.contents = texture
scnGeometry.materials = [imageMaterial]
let pcNode = SCNNode(geometry: scnGeometry)
If you add pcNode to your scene, it will contain the mesh with the texture applied.
The texture coordinate computation is adapted from here.
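If the goal is to actually get the textured mesh out of the app (as in the linked OBJ question), one hedged option is to wrap the node in a scene and use SCNScene's write(to:options:delegate:progressHandler:); the .usdz destination below is an assumption for illustration, not part of the original answer.
let exportScene = SCNScene()
exportScene.rootNode.addChildNode(pcNode)
let exportURL = FileManager.default.temporaryDirectory.appendingPathComponent("texturedMesh.usdz")
let success = exportScene.write(to: exportURL, options: nil, delegate: nil, progressHandler: nil)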

Check out my answer over here.
It's a description of this project: MetalWorldTextureScan, which demonstrates how to scan your environment and create a textured mesh using ARKit and Metal.

Related

How to apply a texture to a specific channel on a 3d obj model in Swift?

I'm kind of stuck right now when it comes to applying a specific texture to my 3D obj model.
The easiest solution of all would be to do let test = SCNScene(named: "models.scnassets/modelFolder/ModelName.obj"), but this requires the mtl file to map the texture file directly inside of it, which isn't possible with my current workflow.
With my current understanding, this leaves me with the option of using a scattering function to apply textures to a specific semantic, something like this:
if let url = URL(string: obj) {
    let asset = MDLAsset(url: url)
    guard let object = asset.object(at: 0) as? MDLMesh else {
        print("Failed to get mesh from asset.")
        self.presentAlert(title: "Warning", message: "Could not fetch the model.", firstBtn: "Ok")
        return
    }

    // Create a material from the various textures with a scatteringFunction
    let scatteringFunction = MDLScatteringFunction()
    let material = MDLMaterial(name: "material", scatteringFunction: scatteringFunction)

    let property = MDLMaterialProperty(name: "texture", semantic: .baseColor, url: URL(string: self.textureURL))
    material.setProperty(property)

    // Apply the texture to every submesh of the asset
    object.submeshes?.forEach {
        if let submesh = $0 as? MDLSubmesh {
            submesh.material = material
        }
    }

    // Wrap the Model I/O object in a SceneKit node
    let node = SCNNode(mdlObject: object)
    let scene = SCNScene()
    scene.rootNode.addChildNode(node)

    // Set up the SceneView
    sceneView.scene = scene
    ...
}
The actual problem is the semantics. The 3D models are made in Unreal, and for many models there's a single PNG texture that packs three maps: ambient occlusion, roughness and metallic. Ambient occlusion needs to come from the red channel, roughness from the green channel and metallic from the blue channel.
How could I achieve this? MDLMaterialSemantic has all of these possible semantics, but metallic, ambient occlusion and roughness are all separate. I tried simply applying the texture to each of them, but obviously this did not work very well.
Considering that my .png texture has all three of those "packaged" into it under different channels, how can I work with this? I was thinking that maybe I could use a small script to add the mapping to the mtl file directly in the app on my end, but this seems sketchy lol..
What are my other options if there's no way of doing this? I've also been trying to use fbx files with AssimpKit, but I couldn't manage to load any textures, just the model in black...
I am open to any suggestion; if more info is needed, please let me know! Thank you very much!
Sorry, I don't have enough rep to comment, but this might be more of a comment than an answer!
Have you tried loading the texture png separately (as an NS/UI/CGImage), splitting it into three channels manually, and then applying those channels separately? (Splitting into three separate channels is not as simple as it could be... but you could use this grayscale conversion for guidance, and just do one channel at a time.)
Once your objects are in SceneKit, it's possibly slightly easier to modify these materials: once you have an SCNNode with an SCNGeometry and an SCNMaterial, you can access any of those materials and set their .contents property to almost anything (including an NS/UI/CGImage).
Edit:
Here's an extension you can try to extract the individual channels from a CGImage using Accelerate. You can get a CGImage from an NSImage/UIImage depending on whether you're on Mac or iOS (and you can load the file directly into one of those image formats).
I've just adapted the code from the link above; I'm not very experienced with the Accelerate framework, so use at your own risk! But hopefully this puts you on the right path.
import Accelerate

extension CGImage {

    enum Channel {
        case red, green, blue
    }

    func getChannel(channel: Channel) -> CGImage? {
        // code adapted from https://developer.apple.com/documentation/accelerate/converting_color_images_to_grayscale
        guard let format = vImage_CGImageFormat(cgImage: self) else { return nil }
        guard var sourceImageBuffer = try? vImage_Buffer(cgImage: self, format: format) else { return nil }
        guard var destinationBuffer = try? vImage_Buffer(width: Int(sourceImageBuffer.width),
                                                         height: Int(sourceImageBuffer.height),
                                                         bitsPerPixel: 8) else { return nil }
        defer {
            sourceImageBuffer.free()
            destinationBuffer.free()
        }

        // Keep only the requested channel: its coefficient is 1, the others are 0.
        let redCoefficient: Float = channel == .red ? 1 : 0
        let greenCoefficient: Float = channel == .green ? 1 : 0
        let blueCoefficient: Float = channel == .blue ? 1 : 0

        let divisor: Int32 = 0x1000
        let fDivisor = Float(divisor)

        var coefficientsMatrix = [
            Int16(redCoefficient * fDivisor),
            Int16(greenCoefficient * fDivisor),
            Int16(blueCoefficient * fDivisor)
        ]

        let preBias: [Int16] = [0, 0, 0, 0]
        let postBias: Int32 = 0

        vImageMatrixMultiply_ARGB8888ToPlanar8(&sourceImageBuffer,
                                               &destinationBuffer,
                                               &coefficientsMatrix,
                                               divisor,
                                               preBias,
                                               postBias,
                                               vImage_Flags(kvImageNoFlags))

        guard let monoFormat = vImage_CGImageFormat(
            bitsPerComponent: 8,
            bitsPerPixel: 8,
            colorSpace: CGColorSpaceCreateDeviceGray(),
            bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue),
            renderingIntent: .defaultIntent) else { return nil }

        guard let result = try? destinationBuffer.createCGImage(format: monoFormat) else { return nil }
        return result
    }
}
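For example, you could split the packed map and feed each channel to the matching SceneKit material slot. This usage is my own sketch, not part of the original answer; the asset name "packedTexture" is an assumption, and node is the SCNNode built from the MDLAsset in your question.
if let packed = UIImage(named: "packedTexture")?.cgImage {
    // AO is packed in red, roughness in green, metallic in blue.
    let material = node.geometry?.firstMaterial
    material?.ambientOcclusion.contents = packed.getChannel(channel: .red)
    material?.roughness.contents = packed.getChannel(channel: .green)
    material?.metalness.contents = packed.getChannel(channel: .blue)
}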

Within RealityKit, how can I make the world without friction?

I want to make the physics world without friction and damping.
I set the scene's gravity to (0, 0, 0), made a box and a ball, and applied a force on tap. I want the ball to keep moving forever, but it stops after a while.
How can I set the entities' friction to zero?
Apply a new physics material to your model entity.
For this, use the generate(friction:restitution:) type method:
static func generate(friction: Float, restitution: Float) -> PhysicsMaterialResource
/* The coefficient of friction is in the range [0, +infinity), */
/* and the coefficient of restitution is in the range [0, 1]. */
Here's the code:
arView.environment.background = .color(.darkGray)

let mesh = MeshResource.generateSphere(radius: 0.5)
let material = SimpleMaterial()
let model = ModelEntity(mesh: mesh,
                        materials: [material]) as (ModelEntity & HasPhysics)

let physicsResource: PhysicsMaterialResource = .generate(friction: 0,
                                                         restitution: 0)

model.components[PhysicsBodyComponent.self] = PhysicsBodyComponent(
    shapes: [.generateSphere(radius: 0.51)],
    mass: 20,                  // in kilograms
    material: physicsResource,
    mode: .dynamic)

model.generateCollisionShapes(recursive: true)

let anchor = AnchorEntity()
anchor.addChild(model)
arView.scene.anchors.append(anchor)
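For the "give force when tapping" part of the question, here's a minimal sketch (my assumption, not from the original answer) using a tap gesture and a linear impulse on the frictionless ball:
@objc func handleTap(_ recognizer: UITapGestureRecognizer) {
    let location = recognizer.location(in: arView)
    if let ball = arView.entity(at: location) as? ModelEntity {
        // Impulse is in newton-seconds; relativeTo: nil means world space.
        ball.applyLinearImpulse(SIMD3<Float>(2, 0, 0), relativeTo: nil)
    }
}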
P.S. Due to some imperfections in RealityKit's physics engine, I suppose there's no way to create eternal bouncing. Hopefully a future RealityKit update will fix these physics engine shortcomings.

Resulting MTLTexture lighter than CGImage

I have a kernel function that must convert the Y and CbCr textures created from a pixel buffer (ARFrame.capturedImage) into an RGB texture, as in the Apple guide https://developer.apple.com/documentation/arkit/displaying_an_ar_experience_with_metal
But I get an overexposed (too bright) texture.
kernel void renderTexture(texture2d<float, access::sample> capturedImageTextureY [[ texture(0) ]],
                          texture2d<float, access::sample> capturedImageTextureCbCr [[ texture(1) ]],
                          texture2d<float, access::read_write> outTexture [[ texture(2) ]],
                          uint2 size [[ threads_per_grid ]],
                          uint2 pid [[ thread_position_in_grid ]]) {

    constexpr sampler colorSampler(mip_filter::linear,
                                   mag_filter::linear,
                                   min_filter::linear);

    const float4x4 ycbcrToRGBTransform = float4x4(
        float4(+1.0000f, +1.0000f, +1.0000f, +0.0000f),
        float4(+0.0000f, -0.3441f, +1.7720f, +0.0000f),
        float4(+1.4020f, -0.7141f, +0.0000f, +0.0000f),
        float4(-0.7010f, +0.5291f, -0.8860f, +1.0000f)
    );

    float2 texCoord;
    texCoord.x = float(pid.x) / size.x;
    texCoord.y = float(pid.y) / size.y;

    // Sample Y and CbCr textures to get the YCbCr color at the given texture coordinate
    float4 ycbcr = float4(capturedImageTextureY.sample(colorSampler, texCoord).r,
                          capturedImageTextureCbCr.sample(colorSampler, texCoord).rg, 1.0);

    float4 color = ycbcrToRGBTransform * ycbcr;
    outTexture.write(color, pid);
}
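For context, the Y and CbCr textures the kernel samples are created from the pixel buffer roughly like this. This is a sketch following the Apple guide linked above; the CVMetalTextureCache named textureCache is an assumption.
func makeTexture(from pixelBuffer: CVPixelBuffer,
                 pixelFormat: MTLPixelFormat,
                 planeIndex: Int) -> MTLTexture? {
    let width = CVPixelBufferGetWidthOfPlane(pixelBuffer, planeIndex)
    let height = CVPixelBufferGetHeightOfPlane(pixelBuffer, planeIndex)
    var cvTexture: CVMetalTexture?
    CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache, pixelBuffer,
                                              nil, pixelFormat, width, height, planeIndex, &cvTexture)
    return cvTexture.flatMap { CVMetalTextureGetTexture($0) }
}

// The Y plane is single-channel, the CbCr plane is two-channel.
let capturedImageTextureY = makeTexture(from: pixelBuffer, pixelFormat: .r8Unorm, planeIndex: 0)
let capturedImageTextureCbCr = makeTexture(from: pixelBuffer, pixelFormat: .rg8Unorm, planeIndex: 1)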
I create a CGImage with this code:
var cgImage: CGImage?
VTCreateCGImageFromCVPixelBuffer(pixelBuffer, options: nil, imageOut: &cgImage)
The cgImage has normal lighting.
When I try to create a texture from the cgImage with MTKTextureLoader, I get an overexposed texture too.
How can I get an MTLTexture with normal lighting, like the cgImage?
cgImage (expected result): [screenshot omitted]
kernel func output: [screenshot omitted]
I create the texture with this code:
let descriptor = MTLTextureDescriptor()
descriptor.width = Int(Self.maxTextureSize.width)
descriptor.height = Int(Self.maxTextureSize.height)
descriptor.usage = [.shaderWrite, .shaderRead]
let texture = MTLCreateSystemDefaultDevice()?.makeTexture(descriptor: descriptor)
and write pixels with the kernel func.
I've already tried different pixelFormats for the MTLTextureDescriptor.
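For context, a typical dispatch of a kernel like the one above might look roughly like this; commandQueue and pipelineState (built from the "renderTexture" function) and the capturedImageTextureY / capturedImageTextureCbCr textures are assumed to exist.
guard let outTexture = texture,
      let commandBuffer = commandQueue.makeCommandBuffer(),
      let encoder = commandBuffer.makeComputeCommandEncoder() else { return }
encoder.setComputePipelineState(pipelineState)
encoder.setTexture(capturedImageTextureY, index: 0)
encoder.setTexture(capturedImageTextureCbCr, index: 1)
encoder.setTexture(outTexture, index: 2)
let threadsPerThreadgroup = MTLSize(width: 8, height: 8, depth: 1)
let threadsPerGrid = MTLSize(width: outTexture.width, height: outTexture.height, depth: 1)
encoder.dispatchThreads(threadsPerGrid, threadsPerThreadgroup: threadsPerThreadgroup)
encoder.endEncoding()
commandBuffer.commit()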
textureLoader:
let textureLoader = MTKTextureLoader(device: MTLCreateSystemDefaultDevice()!)
let texture = try! textureLoader.newTexture(cgImage: cgImage!, options: [.SRGB: false as NSNumber])
I've already tried different MTKTextureLoader.Options.
GitHub project demonstrating issue: PixelBufferToMTLTexture
The problem was solved, thanks to 0xBFE1A8, by adding gamma correction:
replacing
outTexture.write(color, pid);
with:
outTexture.write(float4(pow(color.rgb, float3(2, 2, 2)), color.a), pid);
"When I try to create a texture from the cgImage with MTKTextureLoader, I get an overexposed texture too."
It's because Metal applies gamma correction to your texture.
MTKTextureLoader has an SRGB key that specifies whether the texture data is stored as sRGB image data.
If the value is false, the image data is treated as linear pixel data. If the value is true, the image data is treated as sRGB pixel data. If this key is not specified and the image being loaded has been gamma-corrected, the image data uses the specified sRGB information.
let path = Bundle.main.path(forResource: "yourTexture", ofType: "png")!
let data = try! Data(contentsOf: URL(fileURLWithPath: path))
let texture = try! textureLoader.newTexture(data: data, options: [.SRGB: false as NSNumber])
You can also solve this problem by adding a gamma correcting equation to your shader.
Linear to sRGB and vice versa:
rgb = mix(rgb * 0.0774, pow(rgb * 0.9479 + 0.05213, 2.4), step(0.04045, rgb))   // sRGB -> linear
rgb = mix(rgb * 12.92, pow(rgb, 0.4167) * 1.055 - 0.055, step(0.00313, rgb))    // linear -> sRGB

ARKit SCNMorpher isn't working for me. No errors, just no shape changes

I'm trying a simple example using SCNMorpher to blend between two poly spheres. They are identical in topology except for the positions of the points.
Each is stored in a .scn file, and I get the shapes like this:
sphereNode = SCNReferenceNode(named: "sphere")
sphereNode2 = SCNReferenceNode(named: "sphere2")
sphereNode?.morpher = SCNMorpher()
sphereNode!.morpher?.targets = [(sphereNode2?.childNodes.first!.geometry)!]
sphereNode!.name = "EFFECT"
I'm using the faceAnchor blend shapes to drive it:
if let effectNode = sceneView?.scene.rootNode.childNode(withName: "EFFECT", recursively: true) {
    let v = faceAnchor?.blendShapes[ARFaceAnchor.BlendShapeLocation.jawOpen]
    effectNode.morpher?.setWeight(v as! CGFloat, forTargetNamed: "sphere2")
}
I've also tried:
...
effectNode.morpher?.setWeight(v as! CGFloat, forTargetAt: 0)
...
The code runs. I can print values for v; they change as I open and close my jaw, and that value is passed to the morpher. I see the base sphere shape, but it never deforms toward the sphere2 shape. Am I supposed to do anything else to force it to redraw or recalculate the deformation?
Hmm, looks like I was attaching the morpher to the parent of the shape, not the actual sphere. Funny how asking a question here sometimes creates that "Ah ha!" moment. Reading in my spheres like this fixed it:
sphereNode = SCNReferenceNode(named: "sphere").childNodes.first
sphereNode2 = SCNReferenceNode(named: "sphere2").childNodes.first
sphereNode?.morpher = SCNMorpher()
sphereNode!.morpher?.targets = [sphereNode2!.geometry!]
sphereNode!.name = "EFFECT"
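For reference, here's a sketch of the same corrected setup using the standard SCNReferenceNode(url:) initializer; the file locations and load() calls are assumptions about how the .scn files are bundled (SCNReferenceNode(named:) above looks like a convenience helper). Note the target geometry needs a name for setWeight(_:forTargetNamed:) to find it.
let sphereURL = Bundle.main.url(forResource: "sphere", withExtension: "scn")!
let sphere2URL = Bundle.main.url(forResource: "sphere2", withExtension: "scn")!
let refNode1 = SCNReferenceNode(url: sphereURL)!
let refNode2 = SCNReferenceNode(url: sphere2URL)!
refNode1.load()
refNode2.load()

// Attach the morpher to the node that actually owns the geometry.
let sphereNode = refNode1.childNodes.first!
let targetGeometry = refNode2.childNodes.first!.geometry!
targetGeometry.name = "sphere2"          // needed for setWeight(_:forTargetNamed:)
sphereNode.morpher = SCNMorpher()
sphereNode.morpher?.targets = [targetGeometry]
sphereNode.name = "EFFECT"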

How to apply a 3D Model on detected face by Apple Vision "NO AR"

With the iPhone X TrueDepth camera it's possible to get the 3D coordinates of any object and use that information to position and scale it, but on older iPhones we don't have access to AR on the front-facing camera. What I've done so far is detect the face using the Apple Vision framework and draw some 2D paths around the face and landmarks.
I've made a SceneView and applied it as the front layer of my view with a clear background, and beneath it is an AVCaptureVideoPreviewLayer. After detecting the face, my 3D object appears on the screen, but positioning and scaling it correctly according to the face boundingBox requires unprojecting and other things I got stuck on. I've also tried converting the 2D boundingBox to 3D using CATransform3D, but I failed! I'm wondering if what I want to achieve is even possible. I remember Snapchat was doing this before ARKit was available on the iPhone, if I'm not wrong!
override func viewDidLoad() {
    super.viewDidLoad()
    self.view.addSubview(self.sceneView)
    self.sceneView.frame = self.view.bounds
    self.sceneView.backgroundColor = .clear
    self.node = self.scene.rootNode.childNode(withName: "face",
                                              recursively: true)!
}

fileprivate func updateFaceView(for result: VNFaceObservation, twoDFace: Face2D) {
    let box = convert(rect: result.boundingBox)

    defer {
        DispatchQueue.main.async {
            self.faceView.setNeedsDisplay()
        }
    }

    faceView.boundingBox = box
    self.sceneView.scene?.rootNode.addChildNode(self.node)

    let unprojectedBox = SCNVector3(box.origin.x, box.origin.y, 0.8)
    let worldPoint = sceneView.unprojectPoint(unprojectedBox)
    self.node.position = worldPoint
    /* Here I have to do the unprojecting to convert the value
       from a 2D point to a 3D point; the issue is here too. */
}
The only way to achieve this is to use SceneKit with an orthographic camera and use SCNGeometrySource to match the landmarks from Vision to the vertices of the mesh.
First, you need a mesh with the same number of vertices as Vision provides (66–77, depending on which Vision revision you're on). You can create one using a tool like Blender.
The mesh in Blender: [screenshot omitted]
Then, in code, each time you process your landmarks, follow these steps:
1- Get the mesh vertices:
func getVertices() -> [SCNVector3] {
    var result = [SCNVector3]()
    let planeSources = shape!.geometry?.sources(for: SCNGeometrySource.Semantic.vertex)

    if let planeSource = planeSources?.first {
        let stride = planeSource.dataStride
        let offset = planeSource.dataOffset
        let componentsPerVector = planeSource.componentsPerVector
        let bytesPerVector = componentsPerVector * planeSource.bytesPerComponent
        let vectors = [SCNVector3](repeating: SCNVector3Zero, count: planeSource.vectorCount)

        let vertices = vectors.enumerated().map({ (index: Int, element: SCNVector3) -> SCNVector3 in
            var vectorData = [Float](repeating: 0, count: componentsPerVector)
            let byteRange = NSMakeRange(index * stride + offset, bytesPerVector)
            let data = planeSource.data
            (data as NSData).getBytes(&vectorData, range: byteRange)
            return SCNVector3(x: vectorData[0], y: vectorData[1], z: vectorData[2])
        })

        result = vertices
    }
    return result
}
2- Unproject each landmark captured by Vision and keep them in an SCNVector3 array:
let unprojectedLandmark = sceneView.unprojectPoint(SCNVector3(landmarks[i].x, landmarks[i].y, 0))
3- Modify the geometry using the new vertices:
func reshapeGeometry(_ vertices: [SCNVector3]) {
    let source = SCNGeometrySource(vertices: vertices)

    var newSources = [SCNGeometrySource]()
    newSources.append(source)

    for source in shape!.geometry!.sources {
        if source.semantic != SCNGeometrySource.Semantic.vertex {
            newSources.append(source)
        }
    }

    let geometry = SCNGeometry(sources: newSources, elements: shape!.geometry?.elements)
    let material = shape!.geometry?.firstMaterial
    shape!.geometry = geometry
    shape!.geometry?.firstMaterial = material
}
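Tying steps 2 and 3 together, here is a hedged sketch of what each Vision callback might do, assuming landmarks is an array of points already converted to view coordinates (exactly how you get there depends on your preview layer setup):
var newVertices = [SCNVector3]()
for landmark in landmarks {
    // Unproject each 2D landmark onto the mesh plane (step 2).
    newVertices.append(sceneView.unprojectPoint(SCNVector3(Float(landmark.x), Float(landmark.y), 0)))
}
// Push the new vertex positions into the face mesh (step 3).
reshapeGeometry(newVertices)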
I was able to do it, and that was my method. Hope this helps!
I would suggest looking at Google's ARCore products, which support an Apple AR scene with the back or front facing camera, but add some functionality beyond Apple's when it comes to devices without a face-depth camera.
Apple's Vision framework is almost the same as Google's equivalent vision framework, which returns 2D points representing the eyes/mouth/nose etc., plus a face tilt component.
However, if you want a way to simply apply 2D textures to a responsive 3D face, or alternatively attach 3D models to points on the face, then take a look at Google's ARCore Augmented Faces framework. It has great sample code for iOS and Android.