I'm trying to play animations from a .usdz file. I was given a .dae file and a .scn file of the same model. RealityKit only accepts .usdz files, so I used Xcode's exporter and exported them both to .usdz format. However, the animations do not transfer over. I also tried copying the scene graph of the .scn file and pasting it into the .usdz file, and when I press the play button at the bottom center of the viewer in Xcode, I can see the animation play.
However, this doesn't really work, because .usdz files can't be edited, so the change doesn't save, and hence the animation doesn't play in the ARView when I run from Xcode. Here's my code for playing the animations. I have tried looking at a bunch of posts on both Stack Overflow and the Apple Developer Forums.
bird = try! Entity.load(named: "plane")
bird.name = "bird"
resultAnchor.addChild(bird)

arView.scene.subscribe(to: SceneEvents.AnchoredStateChanged.self) { [self] (event) in
    if resultAnchor.isActive {
        for entity in resultAnchor.children {
            for animation in entity.availableAnimations {
                entity.playAnimation(animation.repeat())
            }
        }
    }
}.store(in: &birdAnimations) // Remember to store the cancellable!
I found the structure for this code in another post.
Also, I guess it's important to note that I found a .usdz file online that had an animation. Quick Look was able to play it when I right-clicked -> Quick Look on the file in Finder. But again, when I try playing the animation in Xcode, it doesn't play.
If you have any questions, or need clarification or screen recordings of what I am doing, just ask.
To play the animation, use the SceneEvents.DidAddEntity struct instead of SceneEvents.AnchoredStateChanged.
import UIKit
import RealityKit
import Combine

class ViewController: UIViewController {

    @IBOutlet var arView: ARView!
    var subscriptions: [AnyCancellable] = []

    override func viewDidLoad() {
        super.viewDidLoad()

        let model = try! Entity.load(named: "drummer.usdz")

        let anchor = AnchorEntity()
        anchor.addChild(model)
        arView.scene.anchors.append(anchor)

        arView.scene.subscribe(to: SceneEvents.DidAddEntity.self) { _ in
            if anchor.isActive {
                for entity in anchor.children {
                    for animation in entity.availableAnimations {
                        entity.playAnimation(animation.repeat())
                    }
                }
            }
        }.store(in: &subscriptions)
    }
}
My issue wasn't with my code; it was the way I was converting the .blend/.dae file to .usdz.
I first exported it as a .glb from Blender and from Maya (it worked for both), then used Apple's Reality Converter to export it as a .usdz. That file was able to play the animation properly.
Related
I have a very simple app which places down an .rcproject scene.
import ARKit
import RealityKit

class ViewController: UIViewController {

    private var marLent: Bool = false

    private lazy var arView: ARView = {
        let arview = ARView()
        arview.translatesAutoresizingMaskIntoConstraints = false
        arview.isUserInteractionEnabled = true
        return arview
    }()

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view.
        let scene = try! Experience.loadScene()
        arView.scene.anchors.append(scene)
        configureUI()
        setupARView()
    }

    private func configureUI() {
        view.addSubview(arView)
        arView.translatesAutoresizingMaskIntoConstraints = false
        NSLayoutConstraint.activate([
            arView.topAnchor.constraint(equalTo: view.topAnchor),
            arView.leadingAnchor.constraint(equalTo: view.leadingAnchor),
            arView.bottomAnchor.constraint(equalTo: view.bottomAnchor),
            arView.trailingAnchor.constraint(equalTo: view.trailingAnchor),
        ])
    }

    private func setupARView() {
        arView.automaticallyConfigureSession = false
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal]
        configuration.environmentTexturing = .automatic
        arView.session.run(configuration)
    }
}
How could I create a label for the placed-down entity that looks something like these examples? So basically, have text that points at the entity, where the text is the entity's name, for example.
There are four ways to create info dots with text plates for AR scenes. Here's an animated .gif.
the first way – using Autodesk Maya with the pre-installed USD plugin (this is the preferable way, because you can apply both animation and Python scripting techniques);
the second one – using Reality Composer (it's a fairly quick way, but you won't be able to exactly replicate the info dot animation found in Apple's .reality sample files);
the third one – programmatically in RealityKit (see the sketch at the end of this answer);
the fourth way – programmatically, using the Pythonic USD Schema.
Nonetheless, for brevity, let's see how we can do it in the Reality Composer app.
Reality Composer's behaviors
In your Reality Composer scene, drag and drop .png files with transparency (8-bit RGBA) to create an info dot and an info plate – each file will be turned into a plane with its corresponding image. After that, you can apply Reality Composer's behaviors to any separate part of your model.
Create the first custom behavior with a Scene Start trigger, then add LookAtCamera and Hide actions (when the scene starts, both the cylinder primitive and the info plate must be hidden).
Create the second behavior with a Tap trigger, then add LookAtCamera, Show, Wait and Hide actions (three actions must be merged together). If you tap an info dot, both hidden objects will be shown with a fade in/out animation.
Final step: save the scene as a .reality file.
Hopefully, you now have an idea of how it's done.
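For the third, programmatic option, here's a minimal RealityKit sketch of a name label you could hang above an entity (the helper name makeLabel(for:), the font size and the offsets are my own illustrative assumptions, not part of this answer):

import RealityKit
import UIKit

// A rough sketch of the "programmatically in RealityKit" route:
// generate a text mesh from an entity's name and float it above that entity.
func makeLabel(for entity: Entity) -> ModelEntity {
    let mesh = MeshResource.generateText(
        entity.name,
        extrusionDepth: 0.001,
        font: .systemFont(ofSize: 0.05),    // glyphs about 5 cm tall (RealityKit units are meters)
        containerFrame: .zero,
        alignment: .center,
        lineBreakMode: .byWordWrapping
    )
    let material = SimpleMaterial(color: .white, isMetallic: false)
    let label = ModelEntity(mesh: mesh, materials: [material])
    label.position = [0, 0.15, 0]           // 15 cm above the model's origin
    return label
}

// Usage: after loading and naming your model, attach the label as a child.
// model.addChild(makeLabel(for: model))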
I'm working on taking around 23 ".mov" files (short animations) that will piece together to make a children's book, with a choose-your-own-adventure style to it.
I feel like I have a decent grasp of what needs to be done in a macro sense: present the video in a scene, and once the video finishes, create x number of buttons that will launch the user into the next chapter/scene. Further down the road I will also add some simple animations and music/sound.
However, I am having some issues with playing the videos. I was successful in creating a scene with a screenshot as the background and a button to launch the next scene, but when it comes to adding the video I am both lost and having a hard time finding resources on the topic.
Apple Dev's solution is to use:
let sample = SKVideoNode(fileNamed: "sample.mov")
sample.position = CGPoint(x: frame.midX,
                          y: frame.midY)
addChild(sample)
sample.play()
I tried this, albeit probably improperly, in my GameScene.swift file.
import SpriteKit
import GameplayKit

class GameScene: SKScene {

    private var label: SKVideoNode?

    override func didMove(to view: SKView) {
        let sample = SKVideoNode(fileNamed: "V1.mov")
        sample.position = CGPoint(x: frame.midX,
                                  y: frame.midY)
        addChild(sample)
        sample.play()
    }
}
I was hoping this would be (and who knows, maybe it still is) my golden egg of a solution, because then the rest would really just be a copy-and-paste session across multiple scenes, but I was not able to get the video to load.
Do I need to further connect the SKVideoNode to the "V1.mov" file?
Furthermore, I think some of my confusion stems from whether I should implement the video programmatically or through the .sks / storyboard files; is one easier than the other?
I'm also aware that this question may have more to do with something very rudimentary that I've brushed over in my learning process, and if that is the case, I apologize for my overzealousness :p
Any advice you folks can give would be greatly appreciated :)
UPDATE:
Okay, so here is the code I have so far, which works. It is the skeleton framework for the book I am making; it's a choose-your-own-adventure, so each scene has multiple options that take you to other scenes:
import SpriteKit
import Foundation
import SceneKit

class v1Scene: SKScene {

    var trashButtonNode: SKSpriteNode!
    var bikeButtonNode: SKSpriteNode!
    var backButtonNode: SKSpriteNode!

    override func didMove(to view: SKView) {
        trashButtonNode = (self.childNode(withName: "trashButton") as? SKSpriteNode)
        bikeButtonNode = (self.childNode(withName: "bikeButton") as? SKSpriteNode)
        backButtonNode = (self.childNode(withName: "backButton") as? SKSpriteNode)
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        let touch = touches.first
        let v2Scene = v2Scene(fileNamed: "v2Scene")
        let v3Scene = v3Scene(fileNamed: "v3Scene")
        let MainMenu = MainMenu(fileNamed: "MainMenu")

        if let location = touch?.location(in: self) {
            let nodesArray = self.nodes(at: location)

            if nodesArray.first?.name == "trashButton" {
                let transition = SKTransition.fade(withDuration: 2)
                self.view?.presentScene(v2Scene!, transition: transition)
            } else if nodesArray.first?.name == "bikeButton" {
                let transition = SKTransition.fade(withDuration: 2)
                self.view?.presentScene(v3Scene!, transition: transition)
            } else if nodesArray.first?.name == "backButton" {
                let transition = SKTransition.fade(withDuration: 2)
                self.view?.presentScene(MainMenu!, transition: transition)
            }
        }
    }
}
What I am trying to achieve is that when "trashButton" is clicked, v2Scene is launched and a video automatically plays. Each video is a "page" in the book, so I need to assign each video to a scene.
The files are in a folder in my project and in the Assets.xcassets folder, although the .mov files do not seem to work in that folder.
Thanks,
Owen
Because the video files are in their own folder, you won't be able to use the SKVideoNode initializer that takes a file name. You must use the initializer that takes a URL as an argument.
To get the URL for a video file, call the Bundle class's url function, supplying the file name, the extension, and the name of the folder where your video files are.
The following code shows what you need to do to create a video node from a file in the app bundle, using the V1.mov file from your question as the example:
let bundle = Bundle.main

if let videoURL = bundle.url(forResource: "V1",
                             withExtension: "mov",
                             subdirectory: "FolderName") {
    sample = SKVideoNode(url: videoURL)
}
FolderName is the folder where the video files are.
Make sure the folder with the video files is in the app target's Copy Bundle Resources build phase. The folder must be in this build phase for Xcode to copy it to the app bundle when it builds the project. Take the following steps to view the build phases for a target:
Open the project editor by selecting the project file on the left side of the project window.
Select the target on the left side of the project editor.
Click the Build Phases button at the top of the project editor.
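Putting these pieces together, a minimal sketch of one of your video scenes might look like this (the class name VideoScene and the folder name "FolderName" are placeholders, not something from the original question):

import SpriteKit

class VideoScene: SKScene {

    private var videoNode: SKVideoNode?

    override func didMove(to view: SKView) {
        // Look the movie up inside the bundled folder instead of by bare file name.
        // "FolderName" is a placeholder for your actual folder reference.
        guard let videoURL = Bundle.main.url(forResource: "V1",
                                             withExtension: "mov",
                                             subdirectory: "FolderName") else {
            print("V1.mov was not found in the app bundle")
            return
        }

        let node = SKVideoNode(url: videoURL)
        node.position = CGPoint(x: frame.midX, y: frame.midY)
        node.size = size               // stretch the video to fill the scene
        addChild(node)
        node.play()

        videoNode = node               // keep a reference so you can pause or remove it later
    }
}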
I am using RealityKit face anchors. I downloaded a model from Sketchfab, but when I try to put the model on the face, it does not work and nothing is displayed.
import SwiftUI
import ARKit
import RealityKit

struct ARViewContainer: UIViewRepresentable {

    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)

        let configuration = ARFaceTrackingConfiguration()
        arView.session.run(configuration)

        let anchor = AnchorEntity(.face)
        let model = try! Entity.loadModel(named: "squid-game")
        anchor.addChild(model)
        arView.scene.addAnchor(anchor)

        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) { }
}
One of the most common problems AR developers deal with is model size. In RealityKit, ARKit, RoomPlan and SceneKit, the working units are meters. Quite often, models created in 3dsMax or Blender are imported into Xcode at centimeter scale, and are therefore 100 times bigger than they should be. You can't see your model because you may be inside it, and the inner surface of its shader is not rendered in RealityKit. So all you need to do is scale the model down:
anchor.scale /= 100
The second common problem is the location of the pivot point. In 99% of cases, the pivot should be inside the model. The model's pivot is like a "dart", and the .face anchor is like the "10 points" area it should hit. Unfortunately, RealityKit 2.0 does not have the ability to control the pivot; SceneKit does.
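As a workaround for the missing pivot control, one common trick is to wrap the model in an empty parent entity and offset the child inside it, so the parent effectively acts as an adjustable pivot. A minimal sketch (the scale and offset values are placeholders you would tune for your own model):

import RealityKit

// Call this from your makeUIView(context:) after creating the ARView.
func addFaceModel(to arView: ARView) {
    let anchor = AnchorEntity(.face)
    let pivot = Entity()                       // empty parent acts as the adjustable pivot

    if let model = try? Entity.loadModel(named: "squid-game") {
        model.scale /= 100                     // centimeters -> meters, if the asset is too big
        model.position = [0, -0.05, 0.02]      // shift the mesh so its visual center sits on the face
        pivot.addChild(model)
    }

    anchor.addChild(pivot)
    arView.scene.addAnchor(anchor)
}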
There are also hardware constraints. Run the following simple check:
if !ARFaceTrackingConfiguration.isSupported {
    print("Your device isn't supported")
} else {
    let config = ARFaceTrackingConfiguration()
    arView.session.run(config)
}
I also recommend you open your .usdz model in Reality Composer app to make sure it can be successfully loaded and is not 100% transparent.
Check your model.
Is there any error when you run the demo?
You can use a .reality file to test, and you can also download a sample from the Apple Developer site.
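For reference, a minimal sketch of loading a downloaded .reality sample into an ARView (the name "Experience" is a placeholder for whatever the file is called):

import RealityKit

func loadRealityTestScene(into arView: ARView) {
    do {
        // Entity.loadAnchor(named:) loads a .reality file bundled with the app.
        let anchor = try Entity.loadAnchor(named: "Experience")
        arView.scene.addAnchor(anchor)
    } catch {
        print("Could not load the .reality file:", error)
    }
}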
I have added content to a face anchor in Reality Composer. Later on, after loading the Experience that I created in Reality Composer, I create a face tracking session like this:
guard ARFaceTrackingConfiguration.isSupported else { return }
let configuration = ARFaceTrackingConfiguration()
configuration.maximumNumberOfTrackedFaces = ARFaceTrackingConfiguration.supportedNumberOfTrackedFaces
configuration.isLightEstimationEnabled = true
arView.session.delegate = self
arView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
It is not adding the content to all of the faces it detects, and I know it is detecting more than one face, because the other faces occlude the content that is stuck to the first face. Is this a limitation of RealityKit, or am I missing something in the Composer? It is actually pretty hard to miss something, since the setup is so basic and simple.
Thanks.
You can't get multi-face tracking to work in RealityKit if you use models with an embedded Face Anchor, i.e. models that come from Reality Composer's Face Tracking preset (you can use just one model with a .face anchor, not three). Or you MAY use such models, but then you need to delete these embedded AnchorEntity(.face) anchors. A better approach, though, is to simply load models in .usdz format.
Let's see what Apple documentation says about embedded anchors:
You can manually load and anchor Reality Composer scenes using code, like you do with other ARKit content. When you anchor a scene in code, RealityKit ignores the scene's anchoring information.
Reality Composer supports 5 anchor types: Horizontal, Vertical, Image, Face & Object. It displays a different set of guides for each anchor type to help you place your content. You can change the anchor type later if you choose the wrong option or change your mind about how to anchor your scene.
There are two options:
In a new Reality Composer project, deselect the Create with default content checkbox at the bottom left of the action sheet you see at startup.
In RealityKit code, delete the existing Face Anchor and assign a new one. The latter option is not great, because you need to recreate object positions from scratch:
boxAnchor.removeFromParent()
Nevertheless, I've achieved multi-face tracking using AnchorEntity() with the ARAnchor initializer inside the session(_:didUpdate:) instance method (much like SceneKit's renderer() instance method).
Here's my code:
import ARKit
import RealityKit

extension ViewController: ARSessionDelegate {

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        guard let faceAnchor = anchors.first as? ARFaceAnchor
        else { return }

        let anchor1 = AnchorEntity(anchor: faceAnchor)
        let anchor2 = AnchorEntity(anchor: faceAnchor)

        anchor1.addChild(model01)
        anchor2.addChild(model02)

        arView.scene.anchors.append(anchor1)
        arView.scene.anchors.append(anchor2)
    }
}

class ViewController: UIViewController {

    @IBOutlet var arView: ARView!

    let model01 = try! Entity.load(named: "angryFace")           // USDZ file
    let model02 = try! FacialExpression.loadSmilingFace()        // Reality Composer scene

    override func viewDidLoad() {
        super.viewDidLoad()
        arView.session.delegate = self

        guard ARFaceTrackingConfiguration.isSupported else {
            fatalError("Alas, Face Tracking isn't supported")
        }
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        let config = ARFaceTrackingConfiguration()
        config.maximumNumberOfTrackedFaces = 2
        arView.session.run(config)
    }
}
In SpriteKit (for those unfamiliar with it), there's a way to load and unload scenes, with a visual transition between them.
I'm trying to make a sound play between scenes, as they transition, without stuttering.
So far, all the ways I've tried either produce no sound or a stuttering sound, even when using a sound manager like this:
import AVFoundation
import SpriteKit

open class SoundManager {

    static let soundStart = SKAction.playSoundFileNamed("ActionBeep_AP1.7", waitForCompletion: true)
    static var coinSound = NSURL(fileURLWithPath: Bundle.main.path(forResource: "ActionBeep_AP1.7", ofType: "wav")!)
    static var audioPlayer = AVAudioPlayer()

    public static func playCoinSound() {
        guard let soundToPlay = try? AVAudioPlayer(contentsOf: coinSound as URL) else {
            fatalError("Failed to initialize the audio player with asset: \(coinSound)")
        }
        soundToPlay.prepareToPlay()
        self.audioPlayer = soundToPlay
        self.audioPlayer.play()
    }
}
Has anyone had any success making scene-transition sounds run smoothly? I realise there's a lot going on during a scene transition, and that's probably not helping the sound engine, but I think there must be a way to make a clean sound play during a scene transition.
And yes, I've tried .caf, .mp3 and .wav files, all with different compression settings and raw states. I think it's how I play the sounds that's the problem, not the file type.
As crashoverride777 said, you need to change when you load the sound manager. You could kick off a background task in your didMove(to:) function to minimize loading time:
DispatchQueue.global(qos: .userInitiated).async {
    // Load your sound stuff
}
Now, you have your sound file loaded for whenever you need it, lag-free.
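For example, here's a minimal sketch of that idea, reusing the ActionBeep_AP1.7.wav file from the question (the class name and the main-thread hand-off are my own assumptions):

import AVFoundation
import SpriteKit

class TransitionScene: SKScene {

    private var coinPlayer: AVAudioPlayer?

    override func didMove(to view: SKView) {
        // Decode and buffer the sound off the main thread so the first play() doesn't stutter.
        DispatchQueue.global(qos: .userInitiated).async { [weak self] in
            guard let url = Bundle.main.url(forResource: "ActionBeep_AP1.7",
                                            withExtension: "wav"),
                  let player = try? AVAudioPlayer(contentsOf: url) else { return }
            player.prepareToPlay()
            DispatchQueue.main.async {
                self?.coinPlayer = player      // hand the ready player back to the main thread
            }
        }
    }

    // Call this right before presenting the next scene.
    func playTransitionSound() {
        coinPlayer?.play()
    }
}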