Trying to use my RealityKit project as the foundation for an on-screen (VR) app instead of projecting onto the real world (AR) through the rear camera.
Does anyone know how to load a RealityKit scene asynchronously with the .nonAR camera option, so that it renders inside the app instead of using the rear-facing camera?
Do I create position information in the Swift code or the Reality Composer project?
Here's how you can asynchronously load a .usdz model with the help of RealityKit's .loadModelAsync() method and Combine's AnyCancellable type.
import UIKit
import RealityKit
import Combine

class VRViewController: UIViewController {

    @IBOutlet var arView: ARView!
    var anyCancellable: AnyCancellable? = nil
    let anchorEntity = AnchorEntity(world: [0, 0, -2])

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        arView.backgroundColor = .black
        arView.cameraMode = .nonAR

        anyCancellable = ModelEntity.loadModelAsync(named: "horse").sink(
            receiveCompletion: { _ in
                self.anyCancellable?.cancel()
            },
            receiveValue: { [self] (object: Entity) in
                if let model = object as? ModelEntity {
                    self.anchorEntity.addChild(model)
                    self.arView.scene.anchors.append(self.anchorEntity)
                } else {
                    print("Can't load a model due to some issues")
                }
            }
        )
    }
}
However, if you want to move around inside the 3D environment (letting the device drive the camera), don't switch to the .nonAR camera mode; just hide the camera feed behind a background color instead:
arView.environment.background = .color(.black)
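As for the question about position information: transforms defined in Reality Composer are baked into the loaded scene, but you can always override or supplement them in Swift (as the AnchorEntity(world: [0, 0, -2]) above does). In .nonAR mode you can also add a PerspectiveCamera entity to control the viewpoint programmatically. Here's a minimal sketch, assuming the same arView outlet as above; the position values are just placeholders:

import RealityKit

func setupVirtualCamera(in arView: ARView) {
    // In .nonAR camera mode a PerspectiveCamera entity drives the viewpoint
    let camera = PerspectiveCamera()
    camera.position = [0, 0.5, 1.0]                // placeholder position, in meters

    let cameraAnchor = AnchorEntity(world: .zero)  // anchor at the world origin
    cameraAnchor.addChild(camera)
    arView.scene.anchors.append(cameraAnchor)
}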
I'm trying to change the model component of a text entity created in Reality Composer from my code, but this as! cast of the GUI-created entity to a reference to an entity with a model component fails:
self.entityReference = scene.realityComposerEntity as! HasModel
textEntity.model!.mesh = MeshResource.generateText("New Text")
The text entity in RealityKit should have a model component, since it has a visual appearance in the ARView, but I don't know how to access it. Does anyone have any idea how?
Are there any other easy ways to dynamically display different text in the same spot using RealityKit/Reality Composer?
To access a Reality Composer entity's ModelComponent in RealityKit, try the following approach:
import UIKit
import RealityKit

class ViewController: UIViewController {

    @IBOutlet var arView: ARView!

    override func viewDidLoad() {
        super.viewDidLoad()

        arView.environment.background = .color(.darkGray)

        let textAnchor = try! SomeText.loadTextScene()    // SomeText is the Reality Composer scene enum

        let textEntity: Entity = textAnchor.realityComp!.children[0]
        textEntity.scale = [5, 5, 5]

        var textModelComp: ModelComponent = textEntity.children[0].components[ModelComponent.self]!

        var material = SimpleMaterial()
        material.baseColor = .color(.systemTeal)
        textModelComp.materials[0] = material

        textAnchor.realityComp!.children[0].children[0].components.set(textModelComp)
        arView.scene.anchors.append(textAnchor)
    }
}
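For the last question, about displaying different text dynamically: one straightforward option is to regenerate the text mesh in code whenever the string changes. A minimal sketch, assuming you already have a reference to the text's ModelEntity (the font and extrusion values here are just illustrative):

import UIKit
import RealityKit

/// Replaces the mesh of an existing text entity with newly generated text.
func updateText(_ string: String, on textEntity: ModelEntity) {
    textEntity.model?.mesh = MeshResource.generateText(string,
                                                       extrusionDepth: 0.01,
                                                       font: .systemFont(ofSize: 0.1),
                                                       containerFrame: .zero,
                                                       alignment: .center,
                                                       lineBreakMode: .byWordWrapping)
}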
I’ve been experimenting with ARKit in Swift Playgrounds. I’ve written the starter code, but when I run it nothing happens. Instead of evaluating the code, it displays the pop-up that shows any issues in the code.
I know the code I’m using works, because I’ve used the same code on an iPad running an older version of Swift Playgrounds and the code works perfectly. It seems to be a problem with either Swift Playgrounds 3 or Swift 5.
Here’s the interesting part. When I remove the line of code that runs the ARWorldTrackingConfiguration initializer, together with the code that makes the view controller the delegate of the session and scene, the code runs just fine. When I put it back, the same error appears again. I don’t know what’s going wrong.
I’m running Swift Playgrounds 3.0 on an iPad (6th generation). The playground uses ARKit, UIKit, SceneKit, and PlaygroundSupport.
Lastly, here’s some code.
// Code inside modules can be shared between pages and other source files.
import ARKit
import SceneKit
import UIKit

extension ARSCNView {

    public func setup() {
        antialiasingMode = .multisampling4X
        automaticallyUpdatesLighting = false
        preferredFramesPerSecond = 60
        contentScaleFactor = 1.0

        if let camera = pointOfView?.camera {
            camera.wantsHDR = true
            camera.wantsExposureAdaptation = true
            camera.exposureOffset = -1
            camera.minimumExposure = -1
            camera.maximumExposure = 3
        }
    }
}
public class vc : UIViewController, ARSessionDelegate, ARSCNViewDelegate {

    var arscn : ARSCNView!
    var scene : SCNScene!

    public override func loadView() {
        arscn = ARSCNView(frame: CGRect(x: 0, y: 0, width: 768, height: 1024))
        arscn.delegate = self
        arscn.setup()
        scene = SCNScene()
        arscn.scene = scene

        var config = ARWorldTrackingConfiguration()
        config.planeDetection = .horizontal
        arscn.session.delegate = self

        self.view = arscn
        arscn.session.run(config)
    }

    public func session(_ session: ARSession, didFailWithError error: Error) {
        // Present an error message to the user
    }
    public func sessionWasInterrupted(_ session: ARSession) {
        // Inform the user that the session has been interrupted, for example, by presenting an overlay
    }
    public func sessionInterruptionEnded(_ session: ARSession) {
        // Reset tracking and/or remove existing anchors if consistent tracking is required
    }
}
Lastly, please note that I’m presenting the live view on the main playground page and putting the class in the shared code.
I’ve figured out a way to make this work. All I had to do was assign the view controller to a variable and then present the variable. I’m not exactly sure why this works; I just know it does.
import ARKit
import SceneKit
import UIKit
import PlaygroundSupport

public class LiveVC: UIViewController, ARSessionDelegate, ARSCNViewDelegate {

    let scene = SCNScene()
    public var arscn = ARSCNView(frame: CGRect(x: 0, y: 0, width: 640, height: 360))

    override public func viewDidLoad() {
        super.viewDidLoad()
        arscn.delegate = self
        arscn.session.delegate = self
        arscn.scene = scene

        let config = ARWorldTrackingConfiguration()
        config.planeDetection = [.horizontal]
        arscn.session.run(config)

        view.addSubview(arscn)
    }
    public func session(_ session: ARSession, didFailWithError error: Error) {}
    public func sessionWasInterrupted(_ session: ARSession) {}
    public func sessionInterruptionEnded(_ session: ARSession) {}
}

var vc = LiveVC()
PlaygroundPage.current.liveView = vc
PlaygroundPage.current.needsIndefiniteExecution = true
Use UpperCamelCase for class names and add two lines of code at the bottom.
This code works in both a macOS Xcode playground and Swift Playgrounds on an iPad:
import ARKit
import PlaygroundSupport

class LiveVC: UIViewController, ARSessionDelegate, ARSCNViewDelegate {

    let scene = SCNScene()
    var arscn = ARSCNView(frame: CGRect(x: 0, y: 0, width: 640, height: 360))

    override func viewDidLoad() {
        super.viewDidLoad()
        arscn.delegate = self
        arscn.session.delegate = self
        arscn.scene = scene

        let config = ARWorldTrackingConfiguration()
        config.planeDetection = [.horizontal]
        arscn.session.run(config)
    }
    func session(_ session: ARSession, didFailWithError error: Error) {}
    func sessionWasInterrupted(_ session: ARSession) {}
    func sessionInterruptionEnded(_ session: ARSession) {}
}

PlaygroundPage.current.liveView = LiveVC().arscn
PlaygroundPage.current.needsIndefiniteExecution = true
P.S. A tip for Playgrounds on macOS (although it doesn't make much sense when using the ARKit module):
To turn on the Live View in Xcode 11.0 playgrounds and later, use the following shortcut:
Command+Option+Return
I want to create an ARKit app using Xcode. I want it to recognize a generic rectangle without pressing a button, and for that rectangle to then trigger a certain function.
How can I do that?
You do not need ARKit to recognise rectangles, only Vision.
To recognise generic rectangles, use VNDetectRectanglesRequest.
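A minimal sketch of such a request (the confidence threshold and observation count are assumed values; you feed the handler whatever camera frames you already have, e.g. an ARFrame's capturedImage):

import Vision
import CoreVideo

/// Runs rectangle detection on a single pixel buffer.
func detectRectangles(in pixelBuffer: CVPixelBuffer) {
    let request = VNDetectRectanglesRequest { request, error in
        guard let results = request.results as? [VNRectangleObservation] else { return }
        for rectangle in results {
            // Corner points are normalized (0...1) image coordinates
            print(rectangle.topLeft, rectangle.topRight, rectangle.bottomLeft, rectangle.bottomRight)
        }
    }
    request.minimumConfidence = 0.8    // assumed threshold
    request.maximumObservations = 1    // only the most prominent rectangle

    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try? handler.perform([request])
}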
As you rightly wrote, you need to use the Vision or Core ML framework in your project along with ARKit. You also have to provide a pre-trained machine learning model (an .mlmodel file) that classifies input data in order to recognize your generic rectangle.
To create a learning model, use one of the following resources: TensorFlow, Turi, Caffe, or Keras.
Using an .mlmodel with classification tags inside it, Vision requests return results as VNRecognizedObjectObservation objects, which identify objects found in the captured scene. So, if the corresponding tag is found during the recognition process in the ARSKView, an ARAnchor is created (and a SpriteKit/SceneKit object can be placed onto this ARAnchor).
Here's a code snippet showing how it works:
import UIKit
import ARKit
import Vision
import SpriteKit

.................................................................

// file – ARBridge.swift
class ARBridge {

    static let shared = ARBridge()
    var anchorsToIdentifiers = [ARAnchor : String]()
}

.................................................................

// file – Scene.swift
// This block runs on a background queue, typically from a touch handler
// where `currentFrame` is the ARSKView session's current ARFrame.
DispatchQueue.global(qos: .background).async {
    do {
        // Classify the captured camera image with the Inceptionv3 Core ML model
        let model = try VNCoreMLModel(for: Inceptionv3().model)
        let request = VNCoreMLRequest(model: model, completionHandler: { (request, error) in

            DispatchQueue.main.async {
                guard let results = request.results as? [VNClassificationObservation],
                      let result = results.first else {
                    print("No results.")
                    return
                }

                // Place an anchor 0.75 m in front of the camera and remember its classification tag
                var translation = matrix_identity_float4x4
                translation.columns.3.z = -0.75
                let transform = simd_mul(currentFrame.camera.transform, translation)
                let anchor = ARAnchor(transform: transform)
                ARBridge.shared.anchorsToIdentifiers[anchor] = result.identifier
                sceneView.session.add(anchor: anchor)
            }
        })
        let handler = VNImageRequestHandler(cvPixelBuffer: currentFrame.capturedImage, options: [:])
        try handler.perform([request])
    } catch {
        print(error)
    }
}

.................................................................

// file – ViewController.swift
// Provide an SKLabelNode showing the classification tag for each recognized anchor
func view(_ view: ARSKView, nodeFor anchor: ARAnchor) -> SKNode? {

    guard let identifier = ARBridge.shared.anchorsToIdentifiers[anchor] else {
        return nil
    }
    let labelNode = SKLabelNode(text: identifier)
    labelNode.horizontalAlignmentMode = .center
    labelNode.verticalAlignmentMode = .center
    labelNode.fontName = UIFont.boldSystemFont(ofSize: 24).fontName
    return labelNode
}
And you can download two of Apple's sample projects written by Vision engineers:
Recognizing Objects in Live Capture
Classifying Images with Vision and Core ML
Hope this helps.
I'm trying to get a video I have in my project to play in my macOS application using swift. Here is my code:
import Cocoa
import AVKit
import AVFoundation

class ViewController: NSViewController {

    var player : AVPlayer!
    var avPlayerLayer : AVPlayerLayer!

    @IBOutlet weak var videoPlayer: AVPlayerView!

    override func viewDidLoad() {
        super.viewDidLoad()

        guard let path = Bundle.main.path(forResource: "Background", ofType: "mp4") else {
            debugPrint("Not found")
            return
        }
        let playerURL = AVPlayer(url: URL(fileURLWithPath: path))
        let playerView = AVPlayerView()
        playerView.player = playerURL
        playerView.player?.play()
    }
}
I also have an AVPlayerView in my storyboard that I have connected to my view controller.
It builds and runs fine, but when I run it nothing shows up; I just get a black screen.
Any help is appreciated. Thanks!
The problem is this line:
let playerView = AVPlayerView()
Instead of configuring the player view that is in your interface, you make a whole new separate player view — and then you throw it away.
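In other words, configure the AVPlayerView you already have. A minimal sketch, reusing the videoPlayer outlet and the "Background.mp4" resource from the question:

override func viewDidLoad() {
    super.viewDidLoad()

    guard let path = Bundle.main.path(forResource: "Background", ofType: "mp4") else {
        debugPrint("Not found")
        return
    }
    // Configure the player view that is already in the storyboard
    videoPlayer.player = AVPlayer(url: URL(fileURLWithPath: path))
    videoPlayer.player?.play()
}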
I am trying to get my macOS Cocoa Application Xcode project to play some video when I feed it a specified URL.
To simplify the question, let's just use an empty Xcode project.
I want to achieve the same level of control as I am able to get using AVPlayerViewController() in my iOS Single View Application. Within that app I am currently using the solution offered here.
However, only the second example works in my macOS Cocoa Application when adapted like so:
import Cocoa
import AVFoundation
import AVKit

class ViewController: NSViewController {

    override func viewDidLoad() {
        super.viewDidLoad()

        let videoURL = URL(string: "https://clips.vorwaerts-gmbh.de/big_buck_bunny.mp4")
        let player = AVPlayer(url: videoURL!)
        let playerLayer = AVPlayerLayer(player: player)
        playerLayer.frame = self.view.bounds
        self.view.layer?.addSublayer(playerLayer)
        player.play()
    }

    override var representedObject: Any? {
        didSet {
            // Update the view, if already loaded.
        }
    }
}
This option, however, has two downsides.
First, it creates a repetitive error message in the console:
2017-06-27 18:22:06.833441+0200 testVideoOnmacOS[24456:1216773] initWithSessionInfo: XPC connection interrupted
2017-06-27 18:22:06.834830+0200 testVideoOnmacOS[24456:1216773] startConfigurationWithCompletionHandler: Failed to get remote object proxy: Error Domain=NSCocoaErrorDomain Code=4097 "connection to service named com.apple.rtcreportingd" UserInfo={NSDebugDescription=connection to service named com.apple.rtcreportingd}
Second, this AVPlayer solution does not offer the same control as the AVPlayerViewController() solution.
So I attempted to get the following AVPlayerViewController() solution to work, but it fails:
import Cocoa
import AVFoundation
import AVKit

class ViewController: NSViewController {

    override func viewDidLoad() {
        super.viewDidLoad()

        let videoURL = URL(string: "https://clips.vorwaerts-gmbh.de/big_buck_bunny.mp4")
        let player = AVPlayer(url: videoURL!)
        let playerViewController = AVPlayerViewController()
        playerViewController.player = player

        self.present(playerViewController, animated: true) {
            playerViewController.player!.play()
        }
    }

    override var representedObject: Any? {
        didSet {
            // Update the view, if already loaded.
        }
    }
}
The error I am getting is:
Use of unresolved identifier 'AVPlayerViewController' - Did you mean
'AVPlayerViewControlsStyle'?
I have tried to work around this in many different ways; it would be confusing to spell them all out. My main problem seems to be that the vast majority of online examples are iOS-based.
Thus, the main question:
How would I be able to successfully use AVPlayerViewController within my Xcode macOS Cocoa Application?
Yes, got it working, thanks to ninjaproger's comment! (AVPlayerViewController is only available on iOS and tvOS; on macOS the AVKit class to use is AVPlayerView.)
This seems to work for getting a video from the web to play / stream in a macOS Cocoa Application built with Xcode.
First, create an AVKit Player View (AVPlayerView) on your storyboard and link it as an @IBOutlet to your NSViewController.
Then, simply apply this code:
import Cocoa
import AVFoundation
import AVKit

class ViewController: NSViewController {

    @IBOutlet var videoView: AVPlayerView!

    override func viewDidLoad() {
        super.viewDidLoad()

        let videoURL = URL(string: "https://clips.vorwaerts-gmbh.de/big_buck_bunny.mp4")
        let player = AVPlayer(url: videoURL!)
        let playerViewController = videoView
        playerViewController?.player = player
        playerViewController?.player!.play()
    }

    override var representedObject: Any? {
        didSet {
            // Update the view, if already loaded.
        }
    }
}
Update (August 2021)
Ok, it's been a while since this answer went up and I am no longer invested in it. But it seems others still end up here, and at times the solution needs a bit more work to function properly.
With thanks to @SouthernYankee65:
What is sometimes needed is to go to the project file and, on the Signing & Capabilities tab, check Outgoing Connections (Client). Then in Info.plist add an App Transport Security Settings dictionary property, add an Allows Arbitrary Loads boolean property to it, and set it to YES.
The video now plays; however, some errors might still occur in the console.