Noob here. When using the following code, it wraps the image texture around the entire model. Is it possible to apply the image texture to a portion of the model (as opposed to the entire model itself) in RealityKit/ARKit?
CODE:
var material = SimpleMaterial()
material.baseColor = try! .texture(TextureResource.load(named: "image.jpg"))
modelSample?.model?.materials = [material]
If you want to apply a texture to a portion of an AR model, rather than to the entire model, you have to UV-map this texture to the model in a 3D authoring tool (like 3dsMax, Maya, or Blender). UV-mapping is possible neither in RealityKit 1.0 nor in RealityKit 2.0.
Related
I have a fanfare.reality model in my arView, made in Reality Composer. I raycast with entity(at: location) and enable ModelDebugOptionsComponent(visualizationMode: .lightingDiffuse) on the objects that are hit, which makes them turn grey. However, I found that only the fanfare itself turns grey and the flag above the fanfare does not change at all.
I load fanfare.reality with loadAsync() and print the returned value as follows. The reason is that the flag, the star and the fanfare itself are divided into 3 ModelEntities. In RealityKit, a raycast only searches entities with a CollisionComponent, which can only be added to entities that have a ModelComponent.
Therefore, my question is: how can I turn the entire reality model (fanfare + flag + star) grey when I tap the model on screen (via raycast)?
Separate-parts-model approach
You can easily retrieve all 3 models. But you have to specify this whole long hierarchical path:
let scene = try! Experience.loadFanfare()
// Fanfare – .children[0].children[0]
let fanfare = scene.children[0] ..... children[0].children[0] as! ModelEntity
fanfare.model?.materials[0] = UnlitMaterial(color: .darkGray)
// Flag – .children[1].children[0]
let flag = scene.children[0] ..... children[1].children[0] as! ModelEntity
flag.model?.materials[0] = UnlitMaterial(color: .darkGray)
// Star – .children[2].children[0]
let star = scene.children[0] ..... children[2].children[0] as! ModelEntity
star.model?.materials[0] = UnlitMaterial(color: .darkGray)
I don't see much difference when retrieving model entities from .rcproject, .reality or .usdz files. According to the printed hierarchy, all three model entities are located at the same level; they are offspring of the same entity. The condition in the if statement can be set to its simplest form: if a ray hits a collision shape of the fanfare or (||) the flag or (||) the star, then all three models must be recolored, as in the sketch below.
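Here's a minimal sketch of that check, assuming the fanfare, flag and star entities were retrieved as shown above, that collision shapes were generated for them (for example with generateCollisionShapes(recursive: true)), and that arView and the tap gesture recognizer are already wired up. The names here are my assumptions, not code from the question:
@objc func handleTap(_ recognizer: UITapGestureRecognizer) {
    let location = recognizer.location(in: arView)
    // entity(at:) returns the nearest entity that has a CollisionComponent
    guard let hit = arView.entity(at: location) else { return }
    // if the ray hits the fanfare OR the flag OR the star, recolor all three
    if hit === fanfare || hit === flag || hit === star {
        let grey = UnlitMaterial(color: .darkGray)
        fanfare.model?.materials = [grey]
        flag.model?.materials = [grey]
        star.model?.materials = [grey]
    }
}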
Mono-model approach
The best solution for interacting with 3D models through raycasting is the mono-model approach. A mono-model is a solid 3D object that does not have separate parts – all parts are combined into a whole model. Textures for mono-models are always mapped in UV editors. The mono-model can be made in 3D authoring apps like Maya or Blender.
P.S.
All seasoned AR developers know that a "wow" AR experience isn't about code but rather about 3D content. There is no "miracle pill" for an easy solution if your 3D model consists of many parts. A competently made AR model is 75% of the success when working with code.
Task
I would like to capture a real-world texture and apply it to a reconstructed mesh produced with the help of a LiDAR scanner. I suppose that Projection-View-Model matrices should be used for that. The texture must be made from a fixed point of view, for example, from the center of a room. However, it would be an ideal solution if we could apply the environmentTexturing data, collected as a cube-map texture of the scene.
Look at 3D Scanner App. It's a reference app allowing us to export a model with its texture.
I need to capture a texture in a single pass. I do not need to update it in real time. I realize that changing the PoV leads to a wrong perception of the texture, in other words, to texture distortion. I also realize that RealityKit has dynamic tessellation and automatic texture mipmapping (the texture's resolution depends on the distance it was captured from).
import RealityKit
import ARKit
import Metal
import ModelIO

class ViewController: UIViewController, ARSessionDelegate {

    @IBOutlet var arView: ARView!

    override func viewDidLoad() {
        super.viewDidLoad()

        arView.session.delegate = self
        arView.debugOptions.insert(.showSceneUnderstanding)

        // run scene reconstruction with automatic environment texturing
        let config = ARWorldTrackingConfiguration()
        config.sceneReconstruction = .mesh
        config.environmentTexturing = .automatic
        arView.session.run(config)
    }
}
Question
How do I capture a real-world texture and apply it to a reconstructed 3D mesh?
Scene Reconstruction
It's a pity, but I am still unable to capture a model's texture in real time during the LiDAR scanning process. Neither at WWDC20 nor at WWDC22 did Apple announce a native API for that (so texture capturing is currently possible only with third-party APIs - don't ask me which ones :-) ).
However, there's good news: a new methodology has emerged at last. It allows developers to create textured models from a series of shots.
Photogrammetry
The Object Capture API, announced at WWDC 2021, provides developers with the long-awaited photogrammetry tool. At the output we get a USDZ model with a UV-mapped hi-res texture. To use the Object Capture API you need macOS 12 and Xcode 13.
To create a USDZ model from a series of shots, submit all taken images to RealityKit's PhotogrammetrySession.
Here's a code snippet that sheds some light on this process:
import RealityKit
import Combine

let pathToImages = URL(fileURLWithPath: "/path/to/my/images/")
let url = URL(fileURLWithPath: "model.usdz")

// request a UV-mapped USDZ model at medium detail
var request = PhotogrammetrySession.Request.modelFile(url: url, detail: .medium)

var configuration = PhotogrammetrySession.Configuration()
configuration.sampleOverlap = .normal
configuration.sampleOrdering = .unordered
configuration.featureSensitivity = .normal
configuration.isObjectMaskingEnabled = false

guard let session = try? PhotogrammetrySession(input: pathToImages,
                                               configuration: configuration)
else { return }

var subscriptions = Set<AnyCancellable>()

session.output.receive(on: DispatchQueue.global())
    .sink(receiveCompletion: { _ in
        // handle errors here
    }, receiveValue: { _ in
        // handle progress and results here
    })
    .store(in: &subscriptions)

try? session.process(requests: [request])
You can reconstruct USD and OBJ models with their corresponding UV-mapped textures.
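As a side note, process(requests:) accepts an array, so (reusing the session from the snippet above) you can ask for several detail levels, and therefore several output files, in a single pass. The file names below are just placeholders:
// assumption: `session` is the PhotogrammetrySession created above
let requests: [PhotogrammetrySession.Request] = [
    .modelFile(url: URL(fileURLWithPath: "model_reduced.usdz"), detail: .reduced),
    .modelFile(url: URL(fileURLWithPath: "model_full.usdz"), detail: .full)
]
try? session.process(requests: requests)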
How it can be done in Unity
I'd like to share some interesting info about how Unity's AR Foundation works with a mesh coming from LiDAR. At the moment – November 01, 2020 – there's an absurd situation. It's associated with the fact that native ARKit developers cannot capture the texture of a scanned object using standard high-level RealityKit tools, while Unity's AR Foundation users (creating ARKit apps) can do this using the ARMeshManager script. I don't know whether this script was developed by the AR Foundation team or just by the developers of a small creative startup (and then subsequently bought), but the fact remains.
To use ARKit meshing with AR Foundation, you just need to add the ARMeshManager component to your scene. As you can see in the picture, it exposes such features as Texture Coordinates, Color and Mesh Density.
If anyone has more detailed information on how this must be configured or scripted in Unity, please post about it in this thread.
You can check out the answer over here. It is a description of the MetalWorldTextureScan project, which demonstrates how to scan your environment and create a textured mesh using ARKit and Metal.
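To give an idea of the core step such a scanner performs (this is my own rough sketch, not code taken from MetalWorldTextureScan), you can derive a UV coordinate for every mesh vertex by projecting it into the captured camera image with ARKit's model and view-projection transforms:
import ARKit

// rough sketch: turn a vertex of an ARMeshAnchor into a UV coordinate
// inside the captured camera image (assumes landscapeRight orientation)
func textureCoordinate(for localVertex: SIMD3<Float>,
                       anchor: ARMeshAnchor,
                       frame: ARFrame) -> SIMD2<Float> {
    // model matrix: mesh-local space -> world space
    let world = anchor.transform * SIMD4<Float>(localVertex.x, localVertex.y, localVertex.z, 1)
    let worldVertex = SIMD3<Float>(world.x, world.y, world.z)
    // view + projection: world space -> pixel coordinates of the camera image
    let imageSize = frame.camera.imageResolution
    let pixel = frame.camera.projectPoint(worldVertex,
                                          orientation: .landscapeRight,
                                          viewportSize: imageSize)
    // normalize to [0, 1] so the camera image can be sampled as a texture
    return SIMD2<Float>(Float(pixel.x / imageSize.width),
                        Float(pixel.y / imageSize.height))
}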
I'm trying to render a 3D model in SceneKit but it looks incorrect.
For example this model (it's an SCN file with texture and you can reproduce it in your Xcode):
In Xcode Scene Editor it is rendered like this:
Transparency -> Mode -> Dual Layer
Double Sided = true
If I turn off the "Write depth" option, it will look like this:
But there are also some issues, because then I see only "the lowest layer" of the haircut.
I think this should be possible. How do I do it right?
The reason some strands of hair in your 3D model pop out when viewed from different angles is quite usual for SceneKit: your model has a semi-transparent material that SceneKit can't render properly due to some internal engine rendering techniques (time 49:35) applied to the depth buffer.
In order to deal with this problem there are two solutions:
Solution 1:
Your 3D model must have a completely opaque texture (without semi-transparent parts at all). In that case, use the .dualLayer transparency mode:
let scene = SCNScene(named: "art.scnassets/Hair.scn")!
let hair = scene.rootNode.childNode(withName: "MDL_OBJ", recursively: true)!
hair.geometry?.firstMaterial?.transparencyMode = SCNTransparencyMode.dualLayer
Solution 2:
Strands of hair mustn't be a mono-geometry; they must be a compound geometry (consisting of several geometry layers unified in one group):
hair.geometry?.firstMaterial?.colorBufferWriteMask = SCNColorMask.all
hair.geometry?.firstMaterial?.readsFromDepthBuffer = false
hair.geometry?.firstMaterial?.writesToDepthBuffer = false
hair.geometry?.firstMaterial?.blendMode = SCNBlendMode.alpha
I know 3d model (.fbx, .obj etc) files have accurate scale information. But do they also have embedded unit measurement information (inches, centimeters)?
Is it possible to include units information, where I draw a cube of 1×1×1 cm and then later generate another cube of 1×1×1 m? If so, how interoperable are they?
Can I generate these two different-sized cubes in one piece of software (say Unity or 3ds Max), export an FBX file, and then import it into another one like PlayCanvas? Will it recognize the different sizes?
Objects imported into Unity3D usually come with a renderer component, most likely a MeshRenderer. You can then retrieve the size info with the code snippet below:
var renderer = GetComponent<Renderer>();
var bound = renderer.bounds;
var center = bound.center;
var radius = bound.extents.magnitude;
How do you create a .dae file from a 3D model? I've created a 3D model from drone aerial mapping and now have a very large file I can import into Photoshop, but I can't figure out how to create a .dae file I can use in SceneKit.
The default game example for Xcode has a SceneKit scene that shows a rotating aircraft, and the asset is a .dae file, but I don't see any documentation on how to create one of those from a 3D model, or on how to correctly apply a texture to it.
To create a 3D model and export it as a Collada .dae file you can use any of the following 3D authoring tools: Autodesk Maya, Blender, Autodesk 3dsMax, The Foundry Modo, Maxon Cinema 4D, SideFX Houdini, etc. The simplest way is to use a non-commercial student version of Autodesk Maya 2022. It's free. You can download it from HERE.
There are countless examples on YouTube of how to model and UV-map in Maya. Look at this example of UV-mapping in Maya. So, when your 3D model (and its UV texture) is ready for use, you can export it in one of the formats supported by SceneKit:
animated Collada DAE
animated Pixar USDZ (for iOS 12 and higher)
animated Autodesk FBX
single-frame Sony Alembic
single-frame Wavefront OBJ
In Maya, the Export Type for your 3D geometry must be DAE_FBX export:
You can export a texture for your model (square UV mapping, 1K or 2K) as a JPEG or PNG file. It may look like this:
You must assign this square UV texture to the Diffuse slot of the Lighting Model (shader) properties in the Material Inspector.
And here's some Swift code if you want to do it programmatically:
let scene = SCNScene(named: "art.scnassets/mushroom.scn")!

let mushroom = scene.rootNode.childNode(withName: "mushroom",
                                        recursively: true)!

let mushroomMaterial = SCNMaterial()
mushroomMaterial.diffuse.contents = UIImage(named: "mushroom.png")

// assign the material to the model's geometry
mushroom.geometry?.firstMaterial = mushroomMaterial
P.S. Working with Pixar's USDZ file format:
If you need a .usdz for your 3D scene, you can convert .usda using the following command in Terminal:
usdzconvert file.usda
Here you can read about the usdzconvert command.
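And if your source is an OBJ with a separate texture file, the same tool (assuming the usdzconvert shipped with Apple's usdpython / USDZ Tools, and if I recall its options correctly) lets you wire the texture into the material's diffuse input on the command line; the file names below are placeholders:
usdzconvert mushroom.obj mushroom.usdz -diffuseColor mushroom.png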