I'm trying to render a 3D model in SceneKit but it looks incorrect.
For example this model (it's an SCN file with a texture, and you can reproduce it in Xcode):
In Xcode Scene Editor it is rendered like this:
Transparency -> Mode -> Dual Layer
Double Sided = true
If I turn off the "Write depth" option, it will look like this:
But there are still some issues, because I see only "the lowest layer" of the haircut.
I think this should be possible. How to do it right?
The reason some strands of hair in your 3D model pop out when viewed from different angles is quite common in SceneKit: your model has a semi-transparent material that SceneKit can't render properly due to the engine's internal rendering techniques (time 49:35) applied to the depth buffer.
There are two ways to deal with this problem:
Solution 1:
Your 3D model must have a completely opaque texture (without any semi-transparent parts). In that case use the .dualLayer transparency mode.
let scene = SCNScene(named: "art.scnassets/Hair.scn")!
let hair = scene.rootNode.childNode(withName: "MDL_OBJ", recursively: true)!
hair.geometry?.firstMaterial?.transparencyMode = SCNTransparencyMode.dualLayer
Solution 2:
The strands of hair mustn't be a single mono-geometry but a compound geometry (consisting of several geometry layers combined in one group).
// Render color normally, but neither read from nor write to the depth buffer,
// so overlapping semi-transparent layers don't clip each other
hair.geometry?.firstMaterial?.colorBufferWriteMask = SCNColorMask.all
hair.geometry?.firstMaterial?.readsFromDepthBuffer = false
hair.geometry?.firstMaterial?.writesToDepthBuffer = false
hair.geometry?.firstMaterial?.blendMode = SCNBlendMode.alpha
I have a fanfare.reality model in my arView from Reality Composer. I do a raycast via entity(at: location) and enable ModelDebugOptionsComponent(visualizationMode: .lightingDiffuse) on the objects that are hit, which makes their appearance turn grey. However, I found that only the fanfare itself turns grey, and the flag above the fanfare does not change at all.
I load fanfare.reality with loadAsync() and print the returned value as follows. The reason is that the flag, the star and the fanfare itself are divided into 3 ModelEntities. In RealityKit, a raycast only finds entities that have a CollisionComponent, and such a component can only be added to entities that have a ModelComponent.
Therefore, my question is: how can I turn the entire reality model grey (fanfare + flag + star) when I tap the model on screen (via raycast)?
Separate-parts-model approach
You can easily retrieve all 3 models, but you have to specify the whole long hierarchical path:
let scene = try! Experience.loadFanfare()
// Fanfare – .children[0].children[0]
let fanfare = scene.children[0] ..... children[0].children[0] as! ModelEntity
fanfare.model?.materials[0] = UnlitMaterial(color: .darkGray)
// Flag – .children[1].children[0]
let flag = scene.children[0] ..... children[1].children[0] as! ModelEntity
flag.model?.materials[0] = UnlitMaterial(color: .darkGray)
// Star – .children[2].children[0]
let star = scene.children[0] ..... children[2].children[0] as! ModelEntity
star.model?.materials[0] = UnlitMaterial(color: .darkGray)
I don't see much difference when retrieving model entities from .rcproject, .reality or .usdz files. According to the printed hierarchy, all three model entities are located at the same level: they are children of the same parent entity. The condition in the if statement can be set to its simplest form – if a ray hits the collision shape of the fanfare or (||) the flag or (||) the star, then all three models must be recolored (see the sketch below).
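A minimal sketch of such a tap handler might look like this (assuming arView is your ARView, fanfare, flag and star are the model entities retrieved above, and the method is wired to a UITapGestureRecognizer you add yourself):
@objc func handleTap(_ recognizer: UITapGestureRecognizer) {
    let location = recognizer.location(in: arView)
    // entity(at:) returns the first entity whose collision shape the ray hits
    guard let hit = arView.entity(at: location) else { return }
    // If any of the three parts is hit, recolor all of them
    if hit === fanfare || hit === flag || hit === star {
        let grey = UnlitMaterial(color: .darkGray)
        fanfare.model?.materials = [grey]
        flag.model?.materials = [grey]
        star.model?.materials = [grey]
    }
}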
Mono-model approach
The best solution for interacting with 3D models through raycasting is the mono-model approach. A mono-model is a solid 3D object that does not have separate parts – all parts are combined into a whole model. Textures for mono-models are always mapped in UV editors. The mono-model can be made in 3D authoring apps like Maya or Blender.
P.S.
All seasoned AR developers know that a "wow" AR experience isn't about code but rather about 3D content. There is no "miracle pill" for an easy solution if your 3D model consists of many parts. A competently made AR model is 75% of the success when it comes to the code.
Noob here. When using the following code, it wraps the image texture around the entire model. Is it possible to apply the image texture to a portion of the model (as opposed to the entire model itself) in RealityKit/ARKit?
CODE:
var material = SimpleMaterial()
material.baseColor = try! .texture(TextureResource.load(named: "image.jpg"))
modelSample?.model?.materials = [material]
If you want to apply a texture to a portion of an AR model, not to the entire model, you have to UV-map this texture to the model in a 3D authoring tool (like 3ds Max, Maya, or Blender). UV-mapping is possible neither in RealityKit 1.0 nor in RealityKit 2.0.
I'm writing a 3D space game using SceneKit and I'm pretty happy with it so far, but I'm starting to hit some limitations and would like some advice.
I have a sphere which represents a star that sits on a point light, and I'd like to add some effects to this to make it look more realistic. I think I should use the sphere's shaderModifiers to do this, but I'm not sure which modifiers I should be looking at, e.g. to achieve a lens flare effect. In fact, if anyone can give me a clear explanation of the differences between shader modifiers, SCNProgram and SCNTechnique, that would be great!
I'd like to draw a 1px circle to represent an orbit. I've tried using a cylinder that is really thin, but this results in some visual artefacts (the ring seems to have gaps at larger distances and breaks up). Any ideas how I can do this and maintain a nice smooth circle?
The shortest way to get a flare effect for SceneKit models is to use the Core Image framework's capabilities.
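For example, SceneKit lets you attach Core Image filters directly to a node via its filters property. A minimal sketch, assuming starNode is the sphere node representing your star (the radius and intensity values are just illustrative):
// Attach a bloom filter to the star node to fake a glow / flare
if let bloom = CIFilter(name: "CIBloom") {
    bloom.setValue(10.0, forKey: kCIInputRadiusKey)
    bloom.setValue(1.5, forKey: kCIInputIntensityKey)
    starNode.filters = [bloom]
}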
To make a visible orbit, use a regular PNG image with a premultiplied alpha channel (RGBxA). Look at this link to see what a PNG image with orbits looks like (save the file). And if you want the plane to automatically orient itself toward the camera, use a constraint.
And here's the code:
let orbitsOnPlane = SCNPlane(width: 10, height: 10)
let material = SCNMaterial()
material.diffuse.contents = UIImage(named: "overlayOrbits.png")
material.isDoubleSided = true
orbitsOnPlane.materials = [material]

let orbitNode = SCNNode()
orbitNode.geometry = orbitsOnPlane

let cameraNode = SCNNode()
cameraNode.camera = SCNCamera()

// Keep the orbit plane oriented toward the camera
let constraint = SCNLookAtConstraint(target: cameraNode)
orbitNode.constraints = [constraint]
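To actually see the plane, both nodes still need to be added to the scene graph; a minimal sketch, assuming scnView is your SCNView:
scnView.scene?.rootNode.addChildNode(orbitNode)
scnView.scene?.rootNode.addChildNode(cameraNode)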
I would like to create a prototype like this one: just using Xcode SceneKit Editor. I found an answer where the room is created programmatically with simple SCNPlane objects and playing around with the rendering order.
However, I would like to put together something more elaborated like downloading the 3d model of a room and make it accessible only through the portal. I'm trying to achieve the same effect directly in the Xcode's SceneKit editor converting this part:
//a. Create The Left Wall And Set Its Rendering Order To 200 Meaning It Will Be Rendered After Our Masks
let leftWallNode = SCNNode()
let leftWallMainGeometry = SCNPlane(width: 0.5, height: 1)
leftWallNode.geometry = leftWallMainGeometry
leftWallMainGeometry.firstMaterial?.diffuse.contents = UIColor.red
leftWallMainGeometry.firstMaterial?.isDoubleSided = true
leftWallNode.renderingOrder = 200
//b. Create The Left Wall Mask And Set Its Rendering Order To 10 Meaning It Will Be Rendered Before Our Walls
let leftWallMaskNode = SCNNode()
let leftWallMaskGeometry = SCNPlane(width: 0.5, height: 1)
leftWallMaskNode.geometry = leftWallMaskGeometry
leftWallMaskGeometry.firstMaterial?.diffuse.contents = UIColor.blue
leftWallMaskGeometry.firstMaterial?.isDoubleSided = true
leftWallMaskGeometry.firstMaterial?.transparency = 0.0000001
leftWallMaskNode.renderingOrder = 10
leftWallMaskNode.position = SCNVector3(0, 0, 0.001)
into two planes in the editor:
I took care of setting isDoubleSided and renderingOrder for both of them and I made the second one transparent (using alpha on the Diffuse Color).
Unfortunately, when displaying it in AR mode, it doesn't work. The .scn file is available here.
The virtual world in your example is hidden behind a wall. In order to get a portal like the one in the presented movie, you need a wall with an opening (where the entrance is), not a plane blocking your 3D objects. The alpha channel of the portal's entrance should look like the right part of the following image:
Also, look at my answers to the SO posts ARKit hide objects behind walls and ARKit – Rendering a 3D object under an invisible plane to see how to set up an invisible material.
The code might look like this:
// The portal plane writes depth but not color, so it invisibly occludes
// everything rendered after it
portalPlane.geometry?.materials.first?.colorBufferWriteMask = []
portalPlane.geometry?.materials.first?.readsFromDepthBuffer = true
portalPlane.geometry?.materials.first?.writesToDepthBuffer = true
portalPlane.renderingOrder = -1     // rendered before everything else
And, of course, you can use the equivalent properties in the Material Inspector:
For the portal plane the properties are the following: Writes Depth is true, Reads Depth is true, Write to Color is empty, and Rendering Order (in the Node Inspector) is -1.
For 3D objects inside the portal, Rendering Order (in the Node Inspector) is greater than 0 (see the line below).
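In code that corresponds to something like this (roomNode is a hypothetical node holding the 3D content placed behind the portal):
roomNode.renderingOrder = 100     // drawn after the portal plane (renderingOrder -1)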
You can observe the hiding effect right in the Xcode viewport.
Now the hidden wall masks the bigger part of the 3D content to reveal the real street, and you see your 3D environment only through the portal (the wrong result is on the left, the correct result is on the right of this picture).
And the next picture shows what the 3D wall (in my case it's an extruded plane) looks like:
But for the exit of the portal you just need a 3D object like a door (not a wall opening), and this exit should look like the left side of the presented pictures. The normals of the door must point inward, the normals of the wall must point outward, and the material of both objects must be single-sided.
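Single-sided rendering is SceneKit's default, but if your imported geometry has the double-sided flag switched on you can reset it in code (a sketch; doorNode and wallNode stand for whatever your door and wall nodes are called):
doorNode.geometry?.firstMaterial?.isDoubleSided = false
wallNode.geometry?.firstMaterial?.isDoubleSided = false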
Hope this helps.
I'd like to know whether there are only Point Light and Ambient Light in the Metal API, or whether I can also use Spot Light and Directional Light in my 3D environment.
struct Light {
    var color: (Float, Float, Float)
    var ambientIntensity: Float

    static func size() -> Int {
        return sizeof(Float) * 4
    }

    func raw() -> [Float] {
        let raw = [color.0, color.1, color.2, ambientIntensity]
        return raw
    }
}
How to implement Spot Light and Directional Light if they exist in Apple's Metal API?
The Metal API itself has no concept of anything as high-level as a “light”, in the same way that modern OpenGL doesn’t—the APIs for that kind of thing went the way of the dinosaur with the fixed-function pipeline. With modern low-level graphics APIs, you need to roll your own lighting; the excellent Metal By Example series has an article on doing that, though you may want to go through the earlier sections to get a clearer idea of what’s going on. Note that that article only deals with directional lights; spot lights are significantly trickier and you’ll need to do some research to find how those are usually done and then implement that approach in Metal.
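To give a flavour of what rolling your own lighting means, here is a rough Swift sketch of the data you might pass to a shader for a directional light and of the Lambert term the fragment shader would evaluate per pixel (DirectionalLight and lambertFactor are my own names, not part of any API):
import simd

// CPU-side description of a directional light, to be copied into a Metal buffer
struct DirectionalLight {
    var color: SIMD3<Float>      // light color
    var direction: SIMD3<Float>  // world-space direction the light travels in
    var intensity: Float
}

// Diffuse (Lambert) term: brightness depends only on the angle between
// the surface normal and the direction toward the light
func lambertFactor(normal: SIMD3<Float>, lightDirection: SIMD3<Float>) -> Float {
    let n = simd_normalize(normal)
    let l = simd_normalize(-lightDirection)   // from the surface toward the light
    return max(simd_dot(n, l), 0)
}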
As an alternative to using Metal directly, you might want to look into SceneKit, which is quite powerful and has built-in support for many types of light via the SCNLight class.
Marius gave me a hint that the MDLLight class is the abstract superclass for objects that describe light sources in a scene. When you load lights from an asset file using the MDLAsset class, or create lights when building an asset for export, you use one or more of the concrete subclasses MDLPhysicallyPlausibleLight, MDLAreaLight, MDLPhotometricLight, or MDLLightProbe.
...and MDLLightType is an enumeration with options for the shape and style of illumination provided by a light, used by the lightType property (a usage sketch follows the list):
case unknown = 0 – The type of the light is unknown or has not been initialized.
case ambient = 1 – The light source should illuminate a scene evenly regardless of position or direction.
case directional = 2 – The light source illuminates a scene from a uniform direction regardless of position.
case spot = 3 – The light source illuminates a scene from a specific position and direction.
case point = 4 – The light source illuminates a scene in all directions from a specific position.
case linear = 5 – The light source illuminates a scene in all directions from an area in the shape of a line.
case discArea = 6 – The light source illuminates a scene in all directions from an area in the shape of a disc.
case rectangularArea = 7 – The light source illuminates a scene in all directions from an area in the shape of a rectangle.
case superElliptical = 8 – The light source illuminates a scene in all directions from an area in the shape of a superellipse.
case photometric = 9 – The illumination from the light is determined by a photometric profile.
case probe = 10 – The illumination from the light is determined by texture images representing a sample of a scene at a specific point.
case environment = 11 – The illumination from the light is determined by texture images representing a sample of the surrounding environment for a scene.
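So a spot light, for instance, can be described through Model I/O and handed to a renderer. A minimal sketch (the SCNLight(mdlLight:) bridge is one option if you render with SceneKit; with plain Metal you would read these properties yourself and feed the numbers to your shaders):
import ModelIO
import SceneKit
import UIKit

// Describe a physically plausible spot light in Model I/O terms
let mdlSpot = MDLPhysicallyPlausibleLight()
mdlSpot.lightType = .spot
mdlSpot.color = UIColor(red: 1.0, green: 0.9, blue: 0.8, alpha: 1.0).cgColor
mdlSpot.lumens = 1200
mdlSpot.innerConeAngle = 20    // degrees
mdlSpot.outerConeAngle = 45

// If you render with SceneKit, the light can be bridged directly
let scnSpot = SCNLight(mdlLight: mdlSpot)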
These are excerpts from Apple's Model I/O API Reference.
MDLPhysicallyPlausibleLight
An MDLPhysicallyPlausibleLight object describes a light source for use in a shading model based on real-world physics.
MDLAreaLight
An MDLAreaLight object represents a source of light that illuminates a 3D scene not from a single point or direction, but from an area with a specific shape. The shape of an area light is a two-dimensional figure in the xy-plane of the light’s local coordinate space, and its illumination is directed in the negative z-axis direction (spreading out from that direction according to the inherited innerConeAngle and outerConeAngle properties).
MDLPhotometricLight
An MDLPhotometricLight object represents a light source whose shape, direction, and intensity of illumination are determined by a photometric profile. You create a photometric light from a file in the IES format, containing physical measurements of a light source. Many manufacturers of real-world light fixtures publish such files describing the lighting characteristics of their products. This photometry data measures the light web surrounding a light source – measurements of the light’s intensity in all directions around the source.
MDLLightProbe
An MDLLightProbe object describes a light source in terms of the variations in color and intensity of its illumination in all directions. A light probe represents this variation either as a cube map texture, where each texel represents the color and intensity of light in a particular direction from the cube’s center, or as a set of spherical harmonic coefficients. In addition to describing such light sources, the MDLLightProbe class provides methods for generating light probe textures based on the contents of a scene and for generating spherical harmonic coefficients from a texture.