So I wanted to try ARKit. I installed iOS 11 on my iPad Air, but it keeps crashing.
Here's the code in my view controller:
import UIKit
import ARKit

class ViewController: UIViewController {

    @IBOutlet weak var sceneView: ARSCNView!
    @IBOutlet weak var counterLabel: UILabel!

    override func viewDidLoad() {
        super.viewDidLoad()

        let scene = SCNScene()
        sceneView.scene = scene
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        let configuration = ARSessionConfiguration()
        sceneView.session.run(configuration)
    }
}
So I searched around a bit and came across this:
https://developer.apple.com/documentation/arkit/building_a_basic_ar_experience
which basically says that devices with a chip older than the A9 should use ARSessionConfiguration instead of ARWorldTrackingSessionConfiguration. However, I still keep getting crashes.
I tried the ARKit demo provided by Apple, same thing.
I also tried sceneView.antialiasingMode = .none but it didn't help either.
Here's the console log I get when it crashes
2017-06-26 21:44:16.539469+0200 ARKitGame[562:56168] [DYMTLInitPlatform] platform initialization successful
2017-06-26 21:44:18.630888+0200 ARKitGame[562:55915] Metal GPU Frame Capture Enabled
2017-06-26 21:44:18.633276+0200 ARKitGame[562:55915] Metal API Validation Enabled
2017-06-26 21:44:19.625366+0200 ARKitGame[562:56176] [MC] System group container for systemgroup.com.apple.configurationprofiles path is /private/var/containers/Shared/SystemGroup/systemgroup.com.apple.configurationprofiles
2017-06-26 21:44:19.628963+0200 ARKitGame[562:56176] [MC] Reading from public effective user settings.
2017-06-26 21:44:22.706910+0200 ARKitGame[562:56176] -[MTLTextureDescriptorInternal validateWithDevice:], line 778: error 'MTLTextureDescriptor has invalid pixelFormat (520).'
-[MTLTextureDescriptorInternal validateWithDevice:]:778: failed assertion `MTLTextureDescriptor has invalid pixelFormat (520).'
(lldb)
Apple changed the ARKit documentation with beta 2: it now unequivocally says that ARKit as a whole -- not just world tracking -- requires A9.
Perhaps that explains why even the basic session configuration seemed to never actually work on devices below A9...
I downloaded the latest beta and ran it on an iPad Air 1, and it still crashes. As Apple mentions here: https://developer.apple.com/documentation/arkit
Important: ARKit requires an iOS device with an A9 or later processor. To make your app available only on devices supporting ARKit, use the arkit key in the UIRequiredDeviceCapabilities section of your app's Info.plist. If augmented reality is a secondary feature of your app, use the isSupported property to determine whether the current device supports the session configuration you want to use.
So it seems it only works on A9 processors.
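If you want to fail gracefully instead of crashing, you can check support at runtime before running the session. Here's a minimal sketch, using the release API names (ARWorldTrackingConfiguration rather than the beta's ARWorldTrackingSessionConfiguration); on a pre-A9 device such as the iPad Air 1, isSupported should return false:

import UIKit
import ARKit

class ViewController: UIViewController {

    @IBOutlet weak var sceneView: ARSCNView!

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        // Bail out on devices that don't support ARKit world tracking (pre-A9 chips).
        guard ARWorldTrackingConfiguration.isSupported else {
            print("ARKit world tracking is not supported on this device")
            return
        }

        let configuration = ARWorldTrackingConfiguration()
        sceneView.session.run(configuration)
    }
}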
ARKit only works on A9 chips onward: http://www.iphonehacks.com/2017/06/list-iphone-ipad-compatible-arkit-ios-11.html
The iPad Air 1 has an A7 chip, so I would think that's the cause of the crash you're seeing.
Updated: I didn't notice your comment about older chips using ARSessionConfiguration instead of ARWorldTrackingSessionConfiguration. Maybe you could try changing this setting in the ARKit demo app.
I have a Reality Composer file with several scenes, all of which start empty and then have some models appear one by one every second. Even though the animation works perfectly in Quick Look and Reality Composer, there's a strange glitch when I try to integrate it into my app with Xcode. When the very first scene is launched, or when we go to another scene, they don't start empty. For a tiny split second, we see all the models of that scene being displayed, only to disappear immediately.
Then we see them appearing slowly, as they were supposed to. That tiny flash of models at the beginning of each scene is ruining everything. I tried using a .reality file and a .rcproject file, same problem. In fact, when we preview the animation of the Reality file inside Xcode, it shows the same glitch. I tried using different Hide and Show functions, no change. I tried different triggers such as notifications, scene start, on tap, no change.
I checked quite a few tutorials and still couldn't find anything wrong in what I'm doing. I almost feel like there is a glitch in the current integration of Reality Composer. I would really appreciate some ideas on the subject...
Try this to prevent that brief glimpse of the models...
import UIKit
import RealityKit

class ViewController: UIViewController {

    @IBOutlet var arView: ARView!
    var box: ModelEntity!

    override func viewDidLoad() {
        super.viewDidLoad()

        let scene = try! Experience.loadScene()

        // Grab the model and hide it before the scene is added,
        // so it can't flash on screen for the first frame.
        self.box = scene.cube!.children[0] as? ModelEntity
        self.box.isEnabled = false

        arView.scene.anchors.append(scene)

        // Reveal the model shortly after the scene has started.
        DispatchQueue.main.asyncAfter(deadline: .now() + 0.5) {
            self.box.isEnabled = true
        }
    }
}
In this scenario the glitch occurs only for the sphere; the box object works fine.
@AndyJazz Thanks. This solution works for me. As an alternative to the lines:
DispatchQueue.main.asyncAfter(deadline: .now() + 0.5) {
    self.box.isEnabled = true
}
I suggest creating a Behavior (while in Reality Composer) with:
Trigger: Scene Start
Action: Show
The appearance of the entity can then also be tweaked with a Motion Type, Ease Type and Style as well as chained to additional sequences.
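If you'd rather fire the Show action from code instead of on Scene Start, a Behavior with a Notification trigger can be posted from Swift. A minimal sketch, assuming the Reality Composer scene defines a notification trigger named showModels (both the Experience module and the showModels identifier are names generated by Reality Composer and are hypothetical here):

// Load the generated scene and add it to the view.
let scene = try! Experience.loadScene()
arView.scene.anchors.append(scene)

// Fire the Behavior whose trigger is the "showModels" notification;
// Reality Composer's generated code exposes one accessor per notification trigger.
scene.notifications.showModels.post()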
I have an iOS app with a deployment target of iOS 10+. I need to add some features that depend only on RealityKit and should appear only for users whose iOS version is 13+. The app compiles and runs successfully on a real device, but the problem is that when archiving for upload to the App Store, it generates a Swift file and says:
// "No such module RealityKit"
Sure, the reason is related to iOS versions below 13.0, but I can't edit that file (to add canImport for RealityKit) because it's read-only.
My question is: how do I get around this problem and make it archive successfully while still supporting lower versions?
Here is a demo that shows the problem when archiving: Demo.
Firstly:
Do not include Reality Composer's .rcproject files in your archive for distribution. .rcproject bundles contain code with iOS 13.0+ classes, structs and enums. Instead, supply your project with USDZ files.
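As an illustration of the USDZ route, here's a minimal sketch (the file name model.usdz and the anchor placement are hypothetical; Entity.loadModel(named:) is an iOS 13+ RealityKit API, so keep it behind the same availability checks as below):

import RealityKit

@available(iOS 13.0, *)
func addUSDZModel(to arView: ARView) {
    // Load a hypothetical "model.usdz" bundled with the app instead of an .rcproject.
    guard let model = try? Entity.loadModel(named: "model") else { return }

    // Place the model one metre in front of the world origin.
    let anchor = AnchorEntity(world: [0, 0, -1])
    anchor.addChild(model)
    arView.scene.anchors.append(anchor)
}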
Secondly:
To allow iOS 13+ users to use RealityKit features, while still allowing non-AR users to run the app starting from iOS 10.0, use the following code (note that this is a simulator version):
import UIKit

#if canImport(RealityKit) && targetEnvironment(simulator)
import RealityKit

@available(iOS 13.0, *)
class ViewController: UIViewController {

    var arView = ARView(frame: .zero)

    override func viewDidLoad() {
        super.viewDidLoad()
        arView.frame = self.view.frame
        self.view.addSubview(arView)

        let entity = ModelEntity(mesh: .generateBox(size: 0.1))
        let anchor = AnchorEntity(world: [0, 0, -2])
        anchor.addChild(entity)
        arView.scene.anchors.append(anchor)
    }
}
#else
class ViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()
    }
}
#endif
Deployment target is iOS 10.0:
Thirdly:
When publishing to the App Store (in case we have a deployment target lower than iOS 13.0), we must weakly link this framework in the build settings (that's because RealityKit is deeply integrated into iOS and Xcode).
So, go to Build Settings -> Linking -> Other Linker Flags.
Double-click it, press +, and paste the following command:
-weak_framework RealityKit -weak_framework Combine
P.S. In Xcode 13.3, there's a project setting that could also help:
OTHER_LDFLAGS = -weak_framework RealityFoundation
Fourthly:
So, go to Build Settings -> Framework Search Paths.
Then type there the following command:
$(SRCROOT)
It must be set to recursive.
Fifthly:
The archives window:
I have built an app that uses image tracking and swaps flat images. I am also using people occlusion (now) so people can get photos in front of those images. I really want this app to have a selfie mode, so people can take their own pictures in front of image swapped areas.
I'm reading the features on ARKit 3.5, but as far as I can tell, the only front-facing camera support is with ARFaceTrackingConfiguration, which doesn't support image tracking. ARImageTrackingConfiguration and ARWorldTrackingConfiguration only use the back camera.
Is there any way to make a selfie mode with people occlusion (and image tracking) using the front-facing camera?
The answer is NO, you can't use any ARConfiguration except ARFaceTrackingConfiguration for the front camera. However, you can simultaneously use ARFaceTrackingConfiguration on the front camera and ARWorldTrackingConfiguration on the rear camera. This allows users to interact with AR content in the rear camera feed, using their face as a kind of controller.
Look at this docs page to find out which camera (rear or front) each configuration corresponds to.
Here's a table of the eight ARKit 5.0 tracking configurations:
ARConfiguration                        What Camera?
ARWorldTrackingConfiguration           Rear
ARBodyTrackingConfiguration            Rear
AROrientationTrackingConfiguration     Rear
ARImageTrackingConfiguration           Rear
ARFaceTrackingConfiguration            FRONT
ARObjectScanningConfiguration          Rear
ARPositionalTrackingConfiguration      Rear
ARGeoTrackingConfiguration             Rear
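So, before offering a selfie mode, you can simply check at runtime whether the front-camera configuration is available on the device. A minimal sketch (the helper name is hypothetical):

import ARKit

// ARFaceTrackingConfiguration is the only configuration that uses the front camera.
func selfieConfigurationIfAvailable() -> ARConfiguration? {
    guard ARFaceTrackingConfiguration.isSupported else {
        return nil   // no compatible front camera on this device
    }
    return ARFaceTrackingConfiguration()
}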
Simultaneous World and Face configs
To run driven World Tracking that depends on driver Face Tracking, use the following code:
override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)

    guard ARFaceTrackingConfiguration.isSupported,
          ARFaceTrackingConfiguration.supportsWorldTracking
    else {
        fatalError("We can't do face tracking")
    }

    let config = ARFaceTrackingConfiguration()
    config.isWorldTrackingEnabled = true
    sceneView.session.run(config)
}
Or you can use Face Tracking as a secondary configuration:
override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)

    let config = ARWorldTrackingConfiguration()

    if ARFaceTrackingConfiguration.isSupported {
        config.userFaceTrackingEnabled = true
    }
    sceneView.session.run(config)
}
Note that both properties are available on iOS 13 and higher:
var userFaceTrackingEnabled: Bool { get set }
var isWorldTrackingEnabled: Bool { get set }
P.S.
At the moment, .userFaceTrackingEnabled = true still doesn't work. I tested it in Xcode 13.2.1 on an iPad Pro (4th gen) with iPadOS 15.3 installed.
I've noticed an issue where iOS does not seem to localize the reading (by AVSpeechSynthesizer) of emojis on iOS 10.0 or higher, but it does seem to do it properly on iOS 9.3 or lower.
If you tell an AVSpeechSynthesizer that's set to English to speak an emoji by sending it the string "😀", it will say "Grinning face with normal eyes."
When you change the voice language of the synth to anything other than English, such as French, for example, and send the same emoji, it should say "Visage souriant avec des yeux normaux," which it does on iOS 9.3 or lower, but on iOS 10.0 and higher, it simply reads the English text ("Grinning face with normal eyes") in a French accent.
I conjured up a "playground" below that shows how I came to this conclusion... although I hope I'm missing something or doing something wrong.
To reproduce this issue, create a new project in Xcode and attach a button to the speakNext() function.
Run it on a simulator running iOS 9.3 or lower, then do the same on iOS 10.0 or higher.
Can YOU explain zat?
import UIKit
import AVFoundation   // AVSpeechSynthesizer lives in AVFoundation

class ViewController: UIViewController {

    var counter = 0
    let langArray = ["en", "fr", "de", "ru", "zh-Hans"]
    let synth = AVSpeechSynthesizer()

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view.
    }

    @IBAction func speakNext(_ sender: Any) {
        print("testing \(langArray[counter])")

        let utterance = AVSpeechUtterance(string: "😀")
        utterance.voice = AVSpeechSynthesisVoice(language: langArray[counter])

        counter += 1
        if counter > 4 { counter = 0 }

        synth.speak(utterance)
    }
}
It looks like, for better or worse, emojis are now read according to the user's preferred language. If you run it on a device and switch to, for example, French, the emoji will be read out in French, even if the speech synthesis voice is English. It's worth noting that some languages do not seem to read out emoji at all. Surprisingly, this seems to be true for Japanese.
So can you change it?
Well, kind of, but I am not sure it's Apple-approved. You can set the "AppleLanguages" key in UserDefaults.standard. The first language in this array when UIApplicationMain is called will be the one used to read emoji. This means that if you change the value in your app, it will not take effect until the next time the app is started.
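For example, a minimal (and possibly not App Review-friendly) sketch that overrides the preferred language for the next launch:

import Foundation

// Make French the first preferred language; emoji will be read out in French
// only after the app is restarted, since AppleLanguages is read when UIApplicationMain runs.
UserDefaults.standard.set(["fr"], forKey: "AppleLanguages")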
It's not really clear whether this is a bug or intended behavior, but it's certainly jarring to hear. It may be worth filing a Radar or Feedback or whatever they're calling them now with Apple.
UPDATE: Issue appears to be fixed in iOS 13.2! Yay!
UPDATE: After the official release of iOS 13, the issue has been eclipsed/superseded by a worse issue (iOS 13 Text To Speech (TTS - AVSpeechSynthesisVoice) crashes for some users after update).
// original post:
After notifying Apple via the Feedback Assistant, it appears that this was a bug introduced somehow in iOS 10 that went unnoticed for three consecutive versions of iOS.
After testing with iOS 13 beta 5 (17A5547d), the issue no longer appears.
They claim that the issue has been explicitly fixed from this point forward.
I want to adjust the device's physical camera focus while in augmented reality. (I'm not talking about the SCNCamera object.)
In an Apple Dev forum post, I've read that autofocus would interfere with ARKit's object detection, which makes sense to me.
Now, I'm working on an app where the users will be close to the object they're looking at. The camera's default focus makes everything look very blurry when you are closer to an object than around 10 cm.
Can I adjust the camera's focus before initializing the scene, or preferably while in the scene?
20.01.2018
Apparently, there's still no solution to this problem. You can read more about it in this reddit post and this developer forum post, which cover private API workarounds and other (non-helpful) info.
25.01.2018
@AlexanderVasenin provided a useful update pointing to Apple's documentation. It shows that ARKit supports not just focusing but also autofocusing as of iOS 11.3.
See my usage sample below.
As stated by Alexander, iOS 11.3 brings autofocus to ARKit.
The corresponding documentation site shows how it is declared:
var isAutoFocusEnabled: Bool { get set }
You can access it this way:
let configuration = ARWorldTrackingConfiguration()
configuration.isAutoFocusEnabled = true // or false
However, as it is true by default, you should not even have to set it manually, unless you choose to opt out.
UPDATE: Starting from iOS 11.3, ARKit supports autofocus, and it's enabled by default (more info). Manual focus still isn't available.
Prior to iOS 11.3, ARKit supported neither manual focus adjustment nor autofocus.
Here is Apple's reply on the subject (Oct 2017):
ARKit does not run with autofocus enabled as it may adversely affect plane detection. There is an existing feature request to support autofocus and no need to file additional requests. Any other focus discrepancies should be filed as bug reports. Be sure to include the device model and OS version. (source)
There is another thread on the Apple forums where a developer claims he was able to adjust focus by calling the AVCaptureDevice.setFocusModeLocked(lensPosition:completionHandler:) method on the private AVCaptureDevice used by ARKit, and it appears it does not affect tracking. Though the method itself is public, ARKit's AVCaptureDevice is not, so using this hack in production would most likely result in App Store rejection.
if #available(iOS 16.0, *) {
    // This property is nil on devices that aren't equipped with an ultra-wide camera.
    if let device = ARWorldTrackingConfiguration.configurableCaptureDeviceForPrimaryCamera {
        do {
            try device.lockForConfiguration()
            // Configure your focus mode here.
            // You need to change ARWorldTrackingConfiguration().isAutoFocusEnabled at the same time.
            device.unlockForConfiguration()
        } catch {
        }
    }
} else {
    // Fallback on earlier versions
}
Use the configurableCaptureDeviceForPrimaryCamera property; it is only available on iOS 16 or later.
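Putting it together, here's a minimal sketch that locks the lens position on iOS 16+ (the 1.0 lens position is an arbitrary illustrative value between 0.0 and 1.0, and whether a locked focus plays well with tracking is something you should verify yourself):

import ARKit

func lockARCameraFocus() {
    if #available(iOS 16.0, *) {
        guard let device = ARWorldTrackingConfiguration.configurableCaptureDeviceForPrimaryCamera else { return }
        do {
            try device.lockForConfiguration()
            if device.isLockingFocusWithCustomLensPositionSupported {
                // 1.0 is an arbitrary example value (0.0 = nearest focus, 1.0 = farthest).
                device.setFocusModeLocked(lensPosition: 1.0, completionHandler: nil)
            }
            device.unlockForConfiguration()
        } catch {
            print("Could not lock the capture device: \(error)")
        }
    }
}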