I want to adjust the device's physical camera focus while in augmented reality. (I'm not talking about the SCNCamera object.)
In an Apple Dev forum post, I've read that autofocus would interfere with ARKit's object detection, which makes sense to me.
Now, I'm working on an app where the users will be close to the object they're looking at. The camera's default focus makes everything look very blurry when you get closer to an object than about 10 cm.
Can I adjust the camera's focus before initializing the scene, or preferably while in the scene?
Update 20.01.2018
Apparently, there's still no solution to this problem. You can read more about it in this Reddit post and this developer forum post, which cover private-API workarounds and other (ultimately unhelpful) information.
Update 25.01.2018
@AlexanderVasenin provided a useful update pointing to Apple's documentation. It shows that as of iOS 11.3, ARKit supports not only focusing but also autofocusing.
See my usage sample below.
As stated by Alexander, iOS 11.3 brings autofocus to ARKit.
The corresponding documentation site shows how it is declared:
var isAutoFocusEnabled: Bool { get set }
You can access it this way:
let configuration = ARWorldTrackingConfiguration()
configuration.isAutoFocusEnabled = true // or false
However, since it is true by default, you should not even have to set it manually unless you choose to opt out.
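For completeness, here is a minimal sketch of how the flag might be applied when running a session; the sceneView outlet and the view-controller structure are assumptions for illustration, not part of the original answer.

import ARKit

class ViewController: UIViewController {
    // Hypothetical ARSCNView outlet; rename to match your own setup.
    @IBOutlet var sceneView: ARSCNView!

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        let configuration = ARWorldTrackingConfiguration()
        // Autofocus is enabled by default on iOS 11.3+; set to false to opt out.
        configuration.isAutoFocusEnabled = true

        sceneView.session.run(configuration)
    }
}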
UPDATE: Starting from iOS 11.3, ARKit supports autofocusing, and it's enabled by default (more info). Manual focusing is still not available.
Prior to iOS 11.3, ARKit supported neither manual focus adjustment nor autofocusing.
Here is Apple's reply on the subject (Oct 2017):
ARKit does not run with autofocus enabled as it may adversely affect plane detection. There is an existing feature request to support autofocus and no need to file additional requests. Any other focus discrepancies should be filed as bug reports. Be sure to include the device model and OS version. (source)
There is another thread on the Apple forums where a developer claims he was able to adjust focus by calling the AVCaptureDevice.setFocusModeLocked(lensPosition:completionHandler:) method on the private AVCaptureDevice used by ARKit, and it appears this does not affect tracking. Though the method itself is public, ARKit's AVCaptureDevice is not, so using this hack in production would most likely result in App Store rejection.
if #available(iOS 16.0, *) {
    // This property is nil on devices that aren't equipped with an ultra-wide camera.
    if let device = ARWorldTrackingConfiguration.configurableCaptureDeviceForPrimaryCamera {
        do {
            try device.lockForConfiguration()
            // Configure your focus mode here.
            // Remember to also set isAutoFocusEnabled = false on the
            // ARWorldTrackingConfiguration you run.
            device.unlockForConfiguration()
        } catch {
            print("Failed to lock the capture device for configuration: \(error)")
        }
    }
} else {
    // Fallback on earlier versions
}
Use the configurableCaptureDeviceForPrimaryCamera property; it is only available on iOS 16 or later.
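Here is a hedged sketch of how the pieces might fit together on iOS 16+; the runCloseUpSession function name and the lensPosition value of 0.0 are assumptions for illustration, not documented recommendations.

import ARKit

@available(iOS 16.0, *)
func runCloseUpSession(on sceneView: ARSCNView) {
    let configuration = ARWorldTrackingConfiguration()
    // Turn off ARKit's autofocus so a manual setting isn't overridden.
    configuration.isAutoFocusEnabled = false
    sceneView.session.run(configuration)

    // Adjust the underlying capture device once the session is running.
    if let device = ARWorldTrackingConfiguration.configurableCaptureDeviceForPrimaryCamera {
        do {
            try device.lockForConfiguration()
            if device.isLockingFocusWithCustomLensPositionSupported {
                // 0.0 is the shortest focus distance, 1.0 the farthest; tune for your subject.
                device.setFocusModeLocked(lensPosition: 0.0, completionHandler: nil)
            }
            device.unlockForConfiguration()
        } catch {
            print("Could not lock the capture device: \(error)")
        }
    }
}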
After reading about it, I'm still somewhat confused.
This function is called whenever the user swipes a finger across a UI element:
func wasDragged() { /* trigger the haptic signal here */ }
I would like to produce a small haptic signal every time it is called (like a date picker wheel).
How do I set up the Taptic Engine the first time and trigger the haptic signals on each call?
Do I have to check the device model? I want this only on iPhone 7 and up.
Using latest Swift.
The documentation on haptic feedback is really descriptive.
But if you want a quick solution, here it is.
var hapticGenerator: UISelectionFeedbackGenerator?

func wasDragged() {
    hapticGenerator = UISelectionFeedbackGenerator()
    hapticGenerator?.selectionChanged()
    hapticGenerator = nil
}
Alternatively, depending on the logic of the screen, you can initialize the generator outside of the wasDragged function and, inside it, just call hapticGenerator.prepare() and selectionChanged(). In that case you should not assign nil to it after the dragging is complete, or it won't get triggered again. As per the documentation, you should release the generator when it is no longer needed, because the Taptic Engine otherwise stays prepared for another call and keeps consuming system resources.
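A minimal sketch of that alternative pattern, assuming the generator lives for the duration of one drag interaction; the DragHapticsController type and the beginDragging/endDragging call sites are assumptions for illustration.

import UIKit

final class DragHapticsController {
    private var hapticGenerator: UISelectionFeedbackGenerator?

    // Call when the drag interaction starts.
    func beginDragging() {
        hapticGenerator = UISelectionFeedbackGenerator()
        hapticGenerator?.prepare() // spin up the Taptic Engine to minimize latency
    }

    // Call on every incremental change, e.g. from wasDragged().
    func wasDragged() {
        hapticGenerator?.selectionChanged()
        hapticGenerator?.prepare() // keep the engine ready for the next tick
    }

    // Call when the drag interaction ends, so the engine can power down.
    func endDragging() {
        hapticGenerator = nil
    }
}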
Note that calling these methods does not play haptics directly. Instead, it informs the system of the event. The system then determines whether to play the haptics based on the device, the application's state, the amount of battery power remaining, and other factors.
For example, haptic feedback is currently played only:
On a device with a supported Taptic Engine
When the app is running in the foreground
When the System Haptics setting is enabled
Documentation:
https://developer.apple.com/documentation/uikit/uifeedbackgenerator
I've built an app which runs on the ARCore preview 1 package in Unity. I know Google has made major changes in preview 2.
My question is: what changes will I have to make to get my ARCore preview 1 app running on preview 2?
Take a look at the code in the Preview 2 sample app(s) and update your code accordingly. For example, here is the new code for properly instantiating an object into the AR scene:
if (Session.Raycast(touch.position.x, touch.position.y, raycastFilter, out hit))
{
    var andyObject = Instantiate(AndyAndroidPrefab, hit.Pose.position,
                                 hit.Pose.rotation);

    // Create an anchor to allow ARCore to track the hitpoint
    // as understanding of the physical world evolves.
    var anchor = hit.Trackable.CreateAnchor(hit.Pose);

    // Andy should look at the camera but still be flush with the plane.
    andyObject.transform.LookAt(FirstPersonCamera.transform);
    andyObject.transform.rotation = Quaternion.Euler(0.0f,
        andyObject.transform.rotation.eulerAngles.y,
        andyObject.transform.rotation.z);

    // Make Andy model a child of the anchor.
    andyObject.transform.parent = anchor.transform;
}
Common changes:
Preview 1 used the Tango Core service, which is replaced by the ARCore service in Preview 2.
Automatic screen rotation is now handled.
Some classes were altered, for reasons such as the following.
For users:
Introduces AR Stickers.
For developers:
A new C API for use with the Android NDK that complements our existing Java, Unity, and Unreal SDKs;
Functionality that lets AR apps pause and resume AR sessions, for example to let a user return to an AR app after taking a phone call;
Improved accuracy and runtime efficiency across our anchor, plane finding, and point cloud APIs.
I have updated my app from Preview 1 to Preview 2, and it's not a lot of work. There were minor API changes, like the ones for hit flags, Pose.position, etc. It would probably be pointless to post the whole change log here. I suggest you follow the steps below:
Replace the old SDK with the new one in the Unity project.
Then check for errors in your editor of choice (Visual Studio, VS Code, or MonoDevelop).
Just check the relevant APIs in the ARCore developer docs.
It's not such a cumbersome job; it took me some 5-10 minutes to upgrade, that's it.
Cheers!
On Apple's iOS 6.0 feature page, it used to say
Take advantage of the built-in camera’s advanced features. New APIs let you control focus, exposure, and region of interest. You can also access and display faces with face detection APIs, and leverage hardware-enabled video stabilization.
This text has since been removed, and I can't find new methods in the API for controlling exposure. In the AVCaptureDevice class under "Exposure Settings" there is no new property/method for iOS 6.0. Do you know where I can find the new exposure features in the API?
It's true that there is an -exposureMode property on AVCaptureDevice, but that's only for setting the mode (off/auto/continuous) and not the actual f-stop, shutter speed, or ISO. Camera apps that provide "exposure" control all seem to do it through post-processing.
However, it seems there are undocumented APIs in the framework to do this. Check out the full headers for AVCaptureDevice.h (via a class-dump) and note the following methods:
- (void)setManualExposureSupportEnabled:(BOOL)arg1;
- (BOOL)isManualExposureSupportEnabled;
- (void)setExposureGain:(float)arg1;
- (float)exposureGain;
- (void)setExposureDuration:(struct { long long x1; int x2; unsigned int x3; long long x4; })arg1;
- (struct { long long x1; int x2; unsigned int x3; long long x4; })exposureDuration;
- (void)setExposureMode:(int)arg1;
- (int)exposureMode;
- (BOOL)isExposureModeSupported:(int)arg1;
My guess is that gain is equivalent to f-stop (the aperture is fixed), and duration is shutter speed. I wonder if these are used for the iPhone 5's low-light boost mode.
You can also use otool to poke around and try to piece together the symbols. There's likely a new constant in exposureMode for enabling manual control, and exposureDuration seems like it has flags too. When calling these, make sure to use the new -isExposureModeSupported: and also call -respondsToSelector: to check compatibility.
As always, using private APIs is frowned upon by Apple and is cause for rejection from the App Store. There might be ways around this, such as hiding the calls using -performSelector: or objc_msgSend with ROT13 strings or something, as I'm pretty sure they only do static analysis on the app binary.
I've managed to 'trick' the camera into running a shorter exposure time, but I suspect it will only be of use to those doing similar (macro) image acquisition. I first set up the AVCaptureDevice to use AVCaptureExposureModeContinuousAutoExposure and set the flash to torch mode. I then unlock the configuration and set up a key-value observer to watch for adjustingExposure to finish. I then re-lock the device, flip to AVCaptureExposureModeLocked, and turn off the torch. This has the effect of brute-force setting a shorter shutter speed than the camera would select for the un-illuminated scene. By playing with the torch level I can set any relative shutter speed value I want (it would be best, of course, to leave the torch on, but in my application it produces glare on the subject). Again, this only really works when your object distance is very close (less than, say, 6 inches), but it's allowed me to eliminate hand-shake blurring in my close-up images. The downside is that the images are darker since I don't have a way of spoofing the camera gain, but that's not a problem in my particular application.
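Here is a hypothetical sketch of the torch-based trick described above, written against the public AVCaptureDevice API; the ExposureTricker type and method names are my own, and the exact sequencing is an assumption based on the description, not tested code from the answer.

import AVFoundation

final class ExposureTricker {
    private var observation: NSKeyValueObservation?

    /// Assumes `device` is the active AVCaptureDevice of a running capture session.
    func lockShortExposure(on device: AVCaptureDevice) throws {
        try device.lockForConfiguration()
        if device.isTorchModeSupported(.on) {
            device.torchMode = .on // illuminate the scene so auto-exposure picks a short duration
        }
        if device.isExposureModeSupported(.continuousAutoExposure) {
            device.exposureMode = .continuousAutoExposure
        }
        device.unlockForConfiguration()

        // Wait for the auto-exposure pass (driven by the torch-lit scene) to settle.
        observation = device.observe(\.isAdjustingExposure, options: [.new]) { [weak self] device, change in
            guard change.newValue == false else { return }
            self?.observation = nil
            do {
                try device.lockForConfiguration()
                if device.isExposureModeSupported(.locked) {
                    device.exposureMode = .locked // keep the short exposure chosen under torch light
                }
                if device.hasTorch {
                    device.torchMode = .off
                }
                device.unlockForConfiguration()
            } catch {
                print("Could not lock exposure: \(error)")
            }
        }
    }
}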
It looks like they've updated that linked text—there's no mention of new APIs for exposure:
Use powerful new features of the built-in camera. New APIs support real-time video stabilization, an improved LED flash, and face detection and display. You can get reports of dropped frames during capture and leverage new utilities to map UI touches to focus and exposure commands. And apps that support iPhone 5 can take advantage of low light boost mode.
There is an opt-in low-light boost mode for iPhone 5, detailed here by Jim Rhoades (and in this developer forum post, log-in required).
As a follow-up to Michael Grinich's excellent information, I found that there is an order dependency on some of the calls in the private API. To use "manual" exposure controls, you have to enable them before you set the mode, like so:
#define AVCaptureExposureModeManual 3

NSError* error = nil;
if ([captureDevice lockForConfiguration:&error]) {
    captureDevice.manualExposureSupportEnabled = YES;
    if ([captureDevice isExposureModeSupported:AVCaptureExposureModeManual]) {
        captureDevice.exposureMode = AVCaptureExposureModeManual;
        captureDevice.exposureGain = ...;
        captureDevice.exposureDuration = {...};
    }
    [captureDevice unlockForConfiguration];
}
All of this is demonstrated in iOS-ManualCamera.
Starting with iOS 8.0, this is now finally possible.
See setExposureModeCustomWithDuration etc. in the Apple documentation.
Here is an article discussing how to use the APIs.
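For reference, a minimal Swift sketch of the iOS 8 public API mentioned above (setExposureModeCustom(duration:iso:completionHandler:)); the function name and the clamping strategy are assumptions for illustration, and `device` is whatever configured AVCaptureDevice your capture session uses.

import AVFoundation

func setManualExposure(on device: AVCaptureDevice, seconds: Double, iso: Float) {
    do {
        try device.lockForConfiguration()

        // Clamp the requested values to the ranges the active format supports.
        let requested = CMTime(seconds: seconds, preferredTimescale: 1_000_000)
        let clampedDuration = max(device.activeFormat.minExposureDuration,
                                  min(requested, device.activeFormat.maxExposureDuration))
        let clampedISO = max(device.activeFormat.minISO,
                             min(iso, device.activeFormat.maxISO))

        device.setExposureModeCustom(duration: clampedDuration,
                                     iso: clampedISO,
                                     completionHandler: nil)
        device.unlockForConfiguration()
    } catch {
        print("Could not lock the device for configuration: \(error)")
    }
}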
Is there a way to determine whether VoiceOver is currently announcing, and when it stops? I've tried UIAccessibilityVoiceOverStatusChanged, but my understanding is that this only fires when you switch VoiceOver on or off. Any help would be greatly appreciated. Thanks.
We use otherAudioIsPlaying. The problem is that some apps running in the background, like some pedometer monitors, seem to turn on the audio session and never release it, so even though nothing is actually being spoken or played, otherAudioIsPlaying always returns 1 until you remove the other application from the background. So not only can you not play music, you also have no idea that another application in the background will break this test. Apple really needs to add an API to determine whether VoiceOver is currently speaking.
These are all the Accessibility booleans that I found in the documentation:
UIAccessibilityPostNotification
UIAccessibilityIsVoiceOverRunning
UIAccessibilityIsMonoAudioEnabled
UIAccessibilityIsClosedCaptioningEnabled
UIAccessibilityRegisterGestureConflictWithZoom
I don't think that there are any booleans to do what you are talking about.
You could use the audio session's "OtherAudioIsPlaying" property to check if another system process is using the audio hardware at the moment. It should be "true" if VoiceOver is speaking and "false" if not.
Actually this might not work properly if the user is playing music in the background. But most users running VoiceOver will usually not have any other audio enabled permanently, since it makes it harder to understand what VoiceOver is saying.
Here is an example of how to use it:
UInt32 otherAudioIsPlaying;
UInt32 propertySize = sizeof(otherAudioIsPlaying);
AudioSessionGetProperty(kAudioSessionProperty_OtherAudioIsPlaying, &propertySize, &otherAudioIsPlaying);

if (otherAudioIsPlaying) {
    // Another application is generating sound output (including VoiceOver),
    // but it might also be any other app (like the iPod app).
}
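For reference, a minimal Swift sketch of the same heuristic using the modern APIs (UIAccessibility and AVAudioSession); the function name is my own, and the same caveat about unrelated background audio applies.

import UIKit
import AVFoundation

// Heuristic only: true when VoiceOver is on and *some* other process is
// producing audio, which may or may not be VoiceOver itself.
func voiceOverIsProbablySpeaking() -> Bool {
    return UIAccessibility.isVoiceOverRunning
        && AVAudioSession.sharedInstance().isOtherAudioPlaying
}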
I'm about to begin building an iPhone game that will make use of Game Center Achievements and high scores, but I'd also like to have a version that works on iPhones that don't have Game Center (i.e. iOS version < 4.1). Can I have two versions of the same app in the app store, one for game center, one for without? Or should I design the app such that if the iPhone doesn't have Game Center, it won't make use of it, and if it does, it will make use of it?
I'm going to continue researching this, just thought I'd post this question and get some feedback in the meantime. Thanks so much!
Here's the definitive response I received from one of the Apple engineers...
"We'd recommend making one version of the app which dynamically detects whether Game Center is available and uses it (or not) based on that."
Maybe create the game without it, then add the Game Center capabilities but keep them disabled, and only enable them if the device is running the right iOS version.
I am doing the same thing. If you have GameCenter capabilities, you can use the features. If you don't, you can't.
I would not program the game without it and then add it later. In my case I disable multiplayer for non-GC users.
Also, you may want your game to work if the device has GC capabilities, but the user cannot, for whatever reason, connect to GC currently.
You can use the following function to detect if the device supports Game Center:
BOOL isGameCenterAvailable()
{
// Check for presence of GKLocalPlayer API.
Class gcClass = (NSClassFromString(#"GKLocalPlayer"));
// The device must be running running iOS 4.1 or later.
NSString *reqSysVer = #"4.1";
NSString *currSysVer = [[UIDevice currentDevice] systemVersion];
BOOL osVersionSupported = ([currSysVer compare:reqSysVer options:NSNumericSearch] != NSOrderedAscending);
return (gcClass && osVersionSupported);
}
However, I found that a lot of people have not updated to iOS 4.1 or are unaware of Game Center. The number of Game Center users in my game is quite small even though there are many downloads. I was actually considering moving over to OpenFeint, which is much easier to implement than GameKit and also supports older devices.