Problems with the interaction of 3D objects. I've found some of RealityKit's beta functions, such as PhysicsBodyComponent, applyImpulse, addForce, applyAngularImpulse, etc.
I'm trying to add physics characteristics to a 'vase' object and apply an impulse to it on an event such as a tap.
It's really strange: after executing the commands, the physics characteristics are added correctly, yet the impulses and force are not applied to the object (see the debugging output below).
Output of the debug print:
Something 1 Optional(RealityKit.PhysicsBodyComponent(mode: RealityKit.PhysicsBodyMode.dynamic, massProperties: RealityKit.PhysicsMassProperties(mass: 0.2, inertia: SIMD3(0.1, 0.1, 0.1), centerOfMass: (position: SIMD3(0.0, 0.0, 0.0), orientation: simd_quatf(real: 1.0, imag: SIMD3(0.0, 0.0, 0.0)))), material: RealityKit.PhysicsMaterialResource, isTranslationLocked: (x: false, y: false, z: false), isRotationLocked: (x: false, y: false, z: false), isContinuousCollisionDetectionEnabled: false, teleport: false, userForce: SIMD3(0.0, 0.0, 0.0), userTorque: SIMD3(0.0, 0.0, 0.0), userLinearImpulse: SIMD3(0.0, 0.0, 0.0), userAngularImpulse: SIMD3(0.0, 0.0, 0.0)))
Something 5 Optional(RealityKit.PhysicsBodyComponent(mode: RealityKit.PhysicsBodyMode.dynamic, massProperties: RealityKit.PhysicsMassProperties(mass: 0.2, inertia: SIMD3(0.1, 0.1, 0.1), centerOfMass: (position: SIMD3(0.0, 0.0, 0.0), orientation: simd_quatf(real: 1.0, imag: SIMD3(0.0, 0.0, 0.0)))), material: RealityKit.PhysicsMaterialResource, isTranslationLocked: (x: false, y: false, z: false), isRotationLocked: (x: false, y: false, z: false), isContinuousCollisionDetectionEnabled: false, teleport: false, userForce: SIMD3(0.0, 0.0, 0.0), userTorque: SIMD3(0.0, 0.0, 0.0), userLinearImpulse: SIMD3(0.0, 0.0, 0.0), userAngularImpulse: SIMD3(0.0, 0.0, 0.0)))
As you can see, the functions don't add any impulse or force to the 'vase' object. Maybe I'm doing something wrong.
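Since the code that applies the impulse isn't shown, here is a minimal sketch of the kind of setup being described, assuming an ARView with a tap gesture (the handler and names are illustrative, not the original code):

import RealityKit
import UIKit

// Hypothetical tap handler on an ARView. Assumes the vase already has a
// CollisionComponent (e.g. via generateCollisionShapes), otherwise
// entity(at:) will not hit-test it.
@objc func handleTap(_ sender: UITapGestureRecognizer) {
    guard let arView = sender.view as? ARView,
          let vase = arView.entity(at: sender.location(in: arView)) as? ModelEntity else { return }

    // Give the vase a dynamic physics body (the debug output above shows mass 0.2;
    // .default is used here to keep the sketch simple).
    vase.physicsBody = PhysicsBodyComponent(massProperties: .default,
                                            material: nil,
                                            mode: .dynamic)

    // Push it upward and spin it slightly; these only take visible effect once
    // the entity is anchored in a scene whose physics simulation is running.
    vase.applyLinearImpulse(SIMD3<Float>(0, 0.5, 0), relativeTo: nil)
    vase.applyAngularImpulse(SIMD3<Float>(0, 0.1, 0), relativeTo: nil)
}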
I don't think you're supposed to create instances of ModelEntity directly. I think that is an internal component of the Entity class representing just the mesh; you augment interactions/animations with components in RealityKit, and those apply to the entire entity object. I think it's similar to how, if you create a Metal view, it needs to be backed by a Core Animation layer in order to receive user interactions.
If you look at the SwiftStrike sample application, it doesn't instantiate ModelEntity directly, which leads me to believe it's not best practice to create those outside of an Entity object.
I believe you would add your vase to the project through Reality Composer. You can apply materials and collisions to it there, and you can also add behaviors like responding to touch. Then you can access the vase through the Reality Composer file in Xcode and add components to the vase entity that will transform its position. All entities have a transform component, which is documented here.
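A rough sketch of that workflow (assuming the Experience.rcproject naming that Xcode generates by default; the scene, entity, and view names here are guesses, not taken from the question):

import RealityKit

// Load the Reality Composer scene and add it to the ARView.
let scene = try! Experience.loadBox()
arView.scene.anchors.append(scene)

// Look the vase up by the name given to it in Reality Composer,
// then animate its transform component.
if let vase = scene.findEntity(named: "vase") {
    var transform = vase.transform
    transform.translation.y += 0.1   // move up 10 cm
    vase.move(to: transform, relativeTo: vase.parent, duration: 1.0)
}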
RealityKit feels like it was designed to abstract as much code away from the programmer as possible, so a lot of the underlying architecture isn't really exposed or documented. It's also not organized the way that I would organize it, but I don't have a lot of familiarity with the Entity-Component design pattern.
If you don't like how RealityKit is laid out you also have the option to use SceneKit as a renderer. That does not abstract away as much of the functionality and allows you to use Core Animation commands directly on objects.
I'm learning how to use the great CollectionViewPagingLayout templates in my project.
This one: https://github.com/amirdew/CollectionViewPagingLayout
First I specify which template to use (currently "invertedCylinder"), and this part works well:
extension MovieCollectionViewCell: ScaleTransformView {
    var scaleOptions: ScaleTransformViewOptions {
        .layout(.invertedCylinder)
    }
}
The problem appears when I try to modify the template. There is an example extension written by the creator of the layout:
extension YourCell: ScaleTransformView {
    var scaleOptions = ScaleTransformViewOptions(
        minScale: 0.6,
        scaleRatio: 0.4,
        translationRatio: CGPoint(x: 0.66, y: 0.2),
        maxTranslationRatio: CGPoint(x: 2, y: 0)
    )
}
I tried to get rid of the stored-property error by modifying the code:
extension MovieCollectionViewCell: ScaleTransformView {
    var scaleOptionsDetailed: ScaleTransformViewOptions {
        minScale: 0.6,
        scaleRatio: 0.4,
        translationRatio: CGPoint(x: 0.66, y: 0.2),
        maxTranslationRatio: CGPoint(x: 2, y: 0)
    }
}
But this gives me more errors:
Redundant conformance of 'MovieCollectionViewCell' to protocol 'ScaleTransformView'
Cannot find 'minScale' in scope
Consecutive statements on a line must be separated by ';'
I understand this is a basic question, but it's already the second day I've been trying to solve the issue, and I would be very grateful for some guidance.
I don't know why that repo shows a stored property in an extension. That is illegal and will likely always be illegal.
You could convert your scaleOptions variable into a computed property. To do that, get rid of the equals sign. Then every time you reference that property, it will run the code in the body and generate a new value.
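A sketch of that fix, reusing the option values from the repo snippet above (keep just this one conformance declaration for the cell, so the "redundant conformance" error goes away):

import UIKit
import CollectionViewPagingLayout

extension MovieCollectionViewCell: ScaleTransformView {
    // Computed property: no "=", so no stored state is added in the extension.
    var scaleOptions: ScaleTransformViewOptions {
        ScaleTransformViewOptions(
            minScale: 0.6,
            scaleRatio: 0.4,
            translationRatio: CGPoint(x: 0.66, y: 0.2),
            maxTranslationRatio: CGPoint(x: 2, y: 0)
        )
    }
}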
It is also possible to fake stored properties in extensions using associated objects from the Objective-C runtime, but that is considered a hack and probably not a great idea.
In WebGL (OpenGL), I write the following:
// alpha blending
gl.blendFuncSeparate(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA, gl.ONE, gl.ONE);
gl.bindFramebuffer(gl.FRAMEBUFFER, buffers[0].framebuffer);
gl.useProgram(firstProgram);
webgl.enableAttribute(planeVBO, attLocation, attStride);
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, planeIBO);
gl.uniform2fv(resolution, [width, height]);
gl.drawElements(gl.TRIANGLES, plane.index.length, gl.UNSIGNED_SHORT, 0);
I'm not sure how to achieve the same thing in Metal. In WebGL, if you bind the framebuffer you created and draw into it with a shader, the new output is blended (added) onto what is already there. In Metal, is it more appropriate to pass the framebuffer to the shader as a texture and do the addition on the shader side, or does an equivalent mechanism exist?
Or, given a render pass set up like this:
renderToTextureRenderPassDescriptor = MTLRenderPassDescriptor()
renderToTextureRenderPassDescriptor.colorAttachments[0].texture = renderTargetTexture
renderToTextureRenderPassDescriptor.colorAttachments[0].loadAction = .clear
renderToTextureRenderPassDescriptor.colorAttachments[0].clearColor = MTLClearColor(red: 0.0, green: 0.0, blue: 0.0, alpha: 0.0)
renderToTextureRenderPassDescriptor.colorAttachments[0].storeAction = .store
Is it possible to keep accumulating (adding) into the attachment in the same way, as long as it is not cleared here?
Or, since the render target texture was created earlier in the pipeline with
renderTargetTexture = mtlDevice.makeTexture(descriptor: textureDescriptor)
and is therefore already initialized at that point, does anything need to change at all?
The code template itself is available here:
https://sgaworks.com/metalsample/MetalSample2.zip
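For reference, here is a sketch of how the equivalent of that blendFuncSeparate call can be expressed as fixed-function blend state on a Metal render pipeline (the pixel format and variable names are assumptions, not taken from the template above):

import Metal

// Blend state roughly matching
// blendFuncSeparate(SRC_ALPHA, ONE_MINUS_SRC_ALPHA, ONE, ONE).
let pipelineDescriptor = MTLRenderPipelineDescriptor()
pipelineDescriptor.colorAttachments[0].pixelFormat = .bgra8Unorm   // assumed format
pipelineDescriptor.colorAttachments[0].isBlendingEnabled = true
pipelineDescriptor.colorAttachments[0].rgbBlendOperation = .add
pipelineDescriptor.colorAttachments[0].alphaBlendOperation = .add
pipelineDescriptor.colorAttachments[0].sourceRGBBlendFactor = .sourceAlpha
pipelineDescriptor.colorAttachments[0].destinationRGBBlendFactor = .oneMinusSourceAlpha
pipelineDescriptor.colorAttachments[0].sourceAlphaBlendFactor = .one
pipelineDescriptor.colorAttachments[0].destinationAlphaBlendFactor = .one

// To accumulate into renderTargetTexture across passes instead of starting
// from a cleared attachment each frame, keep the previous contents:
// renderToTextureRenderPassDescriptor.colorAttachments[0].loadAction = .load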
I have multiple SKSpriteNodes that I keep in an array, and I also keep their SKActions in arrays. This code is just a trimmed-down example; there are many more.
I wrote it this way so that when the nodes have to be repositioned, resized, etc., I can go through the array of grouped SKActions.
moves.append(SKAction.follow(iconPath[0], asOffset: false, orientToPath: false, duration: 1))
moves.append(SKAction.follow(iconPath[1], asOffset: false, orientToPath: false, duration: 1))
moves.append(SKAction.follow(iconPath[2], asOffset: false, orientToPath: false, duration: 1))
...
sizing.append(SKAction.resize(byWidth: -iSize.width/2, height: -iSize.width/2, duration: 1))
sizing.append(SKAction.wait(forDuration: 0))
sizing.append(SKAction.resize(byWidth: +iSize.width/2, height: +iSize.width/2, duration: 1))
...
groups.append(SKAction.group([moves[0], sizing[0], blurs[0]]))
groups.append(SKAction.group([moves[1], sizing[1], blurs[1]]))
groups.append(SKAction.group([moves[2], sizing[2], blurs[2]]))
However, depending on a node's position, it doesn't get every SKAction, as seen in the sizing array, so I'm using an SKAction.wait set to zero as a placeholder. Is this kludgy? Is there another/proper way to do this?
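For comparison, a sketch of one alternative (names invented here, not from the project above): keep the per-node actions optional and build each group only from the actions that actually apply, instead of padding with zero-duration waits.

import SpriteKit

// Build a group from whichever actions a node actually needs.
func makeGroup(move: SKAction?, size: SKAction?, blur: SKAction?) -> SKAction {
    SKAction.group([move, size, blur].compactMap { $0 })
}

// e.g. groups.append(makeGroup(move: moves[0], size: nil, blur: blurs[0]))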
I’m looking for a way of chaining SceneKit animations.
I'm adding x number of nodes to a scene. I'd like to add the first node, have it fade in, and once it's visible move on to the next node, until all nodes are visible. So I need the animation on the first node to end before we start on the second.
Obviously I tried a for loop with SCNActions, but the animations are batched together, so all nodes appear at the same time.
I don’t know how many nodes will be added, so I can’t make a sequence.
What would be the best way to handle this?
Edit:
Okay, I've figured out that if I add a sequence (which includes an incrementing wait interval) to each node before adding it to the paused scene,
let sequence = SCNAction.sequence([.wait(duration: delay), .fadeIn(duration: 0.5)])
node.runAction(sequence)
and then un-pause the scene once all the nodes are added, it achieves the effect I'm looking for. But it seems hacky.
Is there a better way?
Pause/unpause - hmm, yeah it works, but just feels like that might cause problems down the road if you start doing more things.
I like the completionHandler route (per James P). You could set up multiple (and different) animations or movements with this method: move to (5,0,0), rotate, animate, and when it gets there, call it again and move to (10,0,0), etc.
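A sketch of that completion-handler chaining with a small recursive helper (the helper name and node list are invented for illustration):

import SceneKit

// Fade the nodes in one after another: each node's completion handler
// kicks off the next node's animation.
func fadeInSequentially(_ nodes: [SCNNode], duration: TimeInterval = 0.5) {
    guard let node = nodes.first else { return }
    node.runAction(.fadeIn(duration: duration)) {
        fadeInSequentially(Array(nodes.dropFirst()), duration: duration)
    }
}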
You could do it with a timer if they all work the same way, and that would give you some consistency if that's what you are looking for. If you go this route, make sure to schedule the timers on the main thread.
You can also create some pre-defined sequences, depending on your needs:
let heartBeat = SCNAction.sequence([
    SCNAction.move(to: SCNVector3(-0.5, 0.0, 0.80), duration: 0.4),
    SCNAction.unhide(),
    SCNAction.fadeIn(duration: 1.0),
    SCNAction.move(to: SCNVector3(-0.3, 0.0, 0.80), duration: 0.4),
    SCNAction.move(to: SCNVector3(-0.2, 0.2, 0.80), duration: 0.4),
    SCNAction.move(to: SCNVector3(-0.1, -0.2, 0.80), duration: 0.4),
    SCNAction.move(to: SCNVector3( 0.0, 0.0, 0.80), duration: 0.4),
    SCNAction.move(to: SCNVector3( 0.1, 0.5, 0.80), duration: 0.4),
    SCNAction.move(to: SCNVector3( 0.3, -0.5, 0.80), duration: 0.4),
    SCNAction.move(to: SCNVector3( 0.5, 0.0, 0.80), duration: 0.4),
    SCNAction.fadeOut(duration: 0.1),
    SCNAction.hide()
])
node.runAction(SCNAction.repeatForever(heartBeat))
I ended up using a Timer. I did think about this before, but couldn't get it to work, probably because I wasn't doing it on the main thread.
Here's the code I'm using so far in case anyone else is in the same boat.
func animateNodesInTo(scene: SCNScene, withDuration: TimeInterval) {
    DispatchQueue.main.async {
        let nodes = scene.rootNode.childNodes
        let action = SCNAction.fadeIn(duration: withDuration)
        // Every second, fade in the next node that is still invisible.
        Timer.scheduledTimer(withTimeInterval: 1.0, repeats: true) { timer in
            let invisibleNode = nodes.first(where: { $0.opacity == 0 })
            invisibleNode?.runAction(action)
            // Stop once every node has started fading in.
            if invisibleNode == nil {
                timer.invalidate()
            }
        }
    }
}
In my program I have a method called addObstacle, which creates a rectangular SKShapeNode with an SKPhysicsBody, and a leftward velocity.
func addObstacle(bottom: CGFloat, top: CGFloat, width: CGFloat){
    let obstacleRect = CGRectMake(self.size.width + 100, bottom, width, (top - bottom))
    let obstacle = SKShapeNode(rect: obstacleRect)
    obstacle.name = "obstacleNode"
    obstacle.fillColor = UIColor.grayColor()
    obstacle.physicsBody = SKPhysicsBody(edgeLoopFromPath: obstacle.path!)
    obstacle.physicsBody?.dynamic = false
    obstacle.physicsBody?.affectedByGravity = false
    obstacle.physicsBody?.contactTestBitMask = PhysicsCatagory.Ball
    obstacle.physicsBody?.categoryBitMask = PhysicsCatagory.Obstacle
    obstacle.physicsBody?.usesPreciseCollisionDetection = true
    self.addChild(obstacle)
    obstacle.runAction(SKAction.moveBy(obstacleVector, duration: obstacleSpeed))
}
In a separate method, called endGame, I want to fade out all the obstacles currently on the screen. The obstacle objects are created locally inside addObstacle, which makes accessing their properties later difficult. If there is only one on the screen, I can usually access it by its name. However, when I call childNodeWithName("obstacleNode")?.runAction(SKAction.fadeAlphaBy(-1.0, duration: 1.0)), only one of the obstacles fades away; the rest remain completely opaque. Is there a good way of doing this? Thanks in advance (:
You could probably go with:
self.enumerateChildNodesWithName("obstacleNode", usingBlock: { node, stop in
    // For example, fade out each obstacle as described in the question:
    node.runAction(SKAction.fadeAlphaBy(-1.0, duration: 1.0))
})
More about this method can be found here.
In this example I assumed that you've added obstacles to the scene. If not, then instead of scene, run this method on obstacle's parent node.
And one side note... SKShapeNode is not a performant solution in many cases, because it requires at least one draw pass per shape node to be rendered by the scene (it can't be drawn in batches like SKSpriteNode). If using an SKShapeNode is not a must in your app and you can replace them with SKSpriteNodes, I would warmly suggest doing that for performance reasons.
SpriteKit can render hundreds of nodes in a single draw pass if you use the same atlas and the same blending mode for all sprites. This is not the case with SKShapeNode. More about this here. Search SO for this topic; there are some useful posts about all of it.
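For example, here is a rough sketch of building the same obstacle as an SKSpriteNode instead (written in current Swift syntax rather than the older style used in the question, and keeping the question's PhysicsCatagory masks; it assumes the same SKScene context as addObstacle):

import SpriteKit

// Same obstacle as a sprite node; solid-color sprites, or sprites sharing an
// atlas and blend mode, can be batched into far fewer draw passes.
func addObstacleSprite(bottom: CGFloat, top: CGFloat, width: CGFloat) {
    let size = CGSize(width: width, height: top - bottom)
    let obstacle = SKSpriteNode(color: .gray, size: size)
    obstacle.name = "obstacleNode"
    // A sprite is positioned by its center, unlike a rect-based shape node.
    obstacle.position = CGPoint(x: self.size.width + 100 + width / 2,
                                y: bottom + size.height / 2)
    obstacle.physicsBody = SKPhysicsBody(rectangleOf: size)
    obstacle.physicsBody?.isDynamic = false
    obstacle.physicsBody?.contactTestBitMask = PhysicsCatagory.Ball
    obstacle.physicsBody?.categoryBitMask = PhysicsCatagory.Obstacle
    obstacle.physicsBody?.usesPreciseCollisionDetection = true
    addChild(obstacle)
}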