Localizing the Reading of Emoji on iOS 10.0 or Higher - Swift

I've noticed an issue where iOS does not seem to localize the reading of emoji (by AVSpeechSynthesizer) on iOS 10.0 or higher, although it does so properly on iOS 9.3 or lower.
If you tell an AVSpeechSynthesizer that's set to English to speak an emoji by sending it the string "😀", it will say "Grinning face with normal eyes."
When you change the voice language of the synth to anything other than English, such as French, and send the same emoji, it should say "Visage souriant avec des yeux normaux," which it does on iOS 9.3 or lower. On iOS 10.0 and higher, however, it simply reads the English text ("Grinning face with normal eyes") in a French accent.
I conjured up a "playground" below that shows how I came to this conclusion... although I hope I'm missing something or doing something wrong.
To reproduce this issue, create a new project in Xcode and attach a button to the speakNext() function.
Run it on a simulator running iOS 9.3 or lower, then do the same on iOS 10.0 or higher.
Can YOU explain zat?
import UIKit
import AVFoundation
class ViewController: UIViewController {
    var counter = 0
    let langArray = ["en", "fr", "de", "ru", "zh-Hans"]
    let synth = AVSpeechSynthesizer()
    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view.
    }
    @IBAction func speakNext(_ sender: Any) {
        print("testing \(langArray[counter])")
        let utterance = AVSpeechUtterance(string: "😀")
        utterance.voice = AVSpeechSynthesisVoice(language: langArray[counter])
        counter += 1
        if counter >= langArray.count { counter = 0 }
        synth.speak(utterance)
    }
}

It looks like, for better or worse, emoji are now read according to the user's preferred language. If you run it on a device whose preferred language is set to, for example, French, the emoji will be read out in French even if the synthesis voice is English. It's worth noting that some languages do not seem to read out emoji at all; surprisingly, this appears to be true for Japanese.
So can you change it?
Well, kind of, but I am not sure it's Apple-approved. You can set the "AppleLanguages" key in UserDefaults.standard. The first language in this array when UIApplicationMain is called will be the one used to read emoji. This means that if you change the value in your app, it will not take effect until the next time the app is started.
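A minimal sketch of that workaround (the "fr" value is just an example; the change only applies on the next launch):
// Hedged example: override the preferred-language list so emoji are read in French.
// This is read when UIApplicationMain runs, so it only takes effect on the next app start.
UserDefaults.standard.set(["fr"], forKey: "AppleLanguages")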
It's not really clear whether this is a bug or intended behavior, but it's certainly jarring to hear. It may be worth filing a Radar (or Feedback, or whatever they're calling them now) with Apple.

UPDATE: Issue appears to be fixed in iOS 13.2! Yay!
UPDATE: After the official release of iOS 13, the issue has been eclipsed/superseded by a worse one (iOS 13 Text To Speech (TTS - AVSpeechSynthesisVoice) crashes for some users after update).
// original post:
After I notified Apple via the Feedback Assistant, it appears that this was a bug introduced somewhere in iOS 10 that went unnoticed for three consecutive versions of iOS.
Testing with iOS 13 beta 5 (17A5547d), the issue no longer appears.
They claim that the issue has been explicitly fixed from this point forward.

Related

Translating an older Swift line of code about recording a mic signal to Swift 4.2

I have this in an older version of my app:
var recSession: AVAudioSession!
recSession = AVAudioSession.sharedInstance()
try recSession.setCategory(AVAudioSession.Category.playAndRecord)
How should I translate the last line into Swift 4.2?
setCategory is deprecated, but what is the alternative?
Try this:
try recSession.setCategory(.playAndRecord, mode: .default)
It seems Apple recommends setting the category and mode at the same time.
Note: Instead of setting your category and mode properties independently, it's recommended that you set them at the same time using the setCategory:mode:options:error: method.
AVAudioSession's mode defaults to AVAudioSession.Mode.default, so if your app does not change it, the code above should work.
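For context, here is a fuller sketch of the Swift 4.2 call site, wrapped in do/catch and followed by activating the session (the error handling is only illustrative):
import AVFoundation

let recSession = AVAudioSession.sharedInstance()
do {
    // Swift 4.2: set category and mode together, as Apple recommends.
    try recSession.setCategory(.playAndRecord, mode: .default, options: [])
    try recSession.setActive(true)
} catch {
    print("Failed to configure the audio session: \(error)")
}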

How to play music in SceneKit

I tried to play music in my SceneKit game, but it crashes for no apparent reason.
So I tried adding these three lines of code to the standard Apple game template (at the end of viewDidLoad):
let music = SCNAudioSource(fileNamed: "Music.mp3")
let action = SCNAction.playAudio(music!, waitForCompletion: false)
ship.runAction(action)
and, at runtime, Xcode shows me this message:
com.apple.scenekit.scnview-renderer (8): breakpoint 1.2
Where is my mistake?
I tried to compile and run my code on two different Macs: on my MacBook Pro Retina it runs fine with sound; on my 21.5-inch iMac it crashes.
So my code is correct; I will probably file a Radar with Apple.
Note that both Macs are running the macOS Mojave beta (same version), and the Xcode used is also the beta.
You have to put the audio file into a SceneKit asset catalog (with the .scnassets extension), like the art.scnassets catalog in the template. Then play the music like this:
if let source = SCNAudioSource(fileNamed: "art.scnassets/Music.mp3") {
    let action = SCNAction.playAudio(source, waitForCompletion: false)
    ship.runAction(action)
} else {
    print("cannot find file")
}
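If the file is found but playback still misbehaves on one machine, it can also help to configure and preload the source before building the action; a small sketch along those lines (the property values are only examples):
if let source = SCNAudioSource(fileNamed: "art.scnassets/Music.mp3") {
    source.isPositional = false   // background music rather than 3D-positioned audio
    source.volume = 1.0
    source.load()                 // decode the file up front instead of at playback time
    ship.runAction(SCNAction.playAudio(source, waitForCompletion: false))
} else {
    print("cannot find file")
}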
I think you are looking for something like this (note that SKAction is SpriteKit's action class, so it only applies if ship is an SKNode rather than a SceneKit node):
ship.runAction(SKAction.playSoundFileNamed("Music.mp3", waitForCompletion: false))

On macOS Mojave, in a Cocoa application, how to use AVSpeechSynthesizer?

AVSpeechSynthesizer is marked as available on macOS Mojave beta.
Previously it was only available on iOS, tvOS and watchOS. But if I prepare a small macOS test project in Xcode 10, it gives me the error "Use of unresolved identifier 'AVSpeechSynthesizer'". At the top, I have:
import Cocoa
import NaturalLanguage
import AVFoundation
My code is:
let string = "Mickey mouse went to town"
let recognizer = NLLanguageRecognizer()
recognizer.processString(string)
let language = recognizer.dominantLanguage!.rawValue
let speechSynthesizer = AVSpeechSynthesizer()
let utterance = AVSpeechUtterance(string: string)
utterance.voice = AVSpeechSynthesisVoice(language: language)
speechSynthesizer.speak(utterance)
It is exactly the same code as on iOS, but on iOS it works, on macOS it gives the error. Any help is much appreciated. Thanks
Currently AVFAudio.h on Mojave has this:
#if TARGET_OS_IPHONE
#import <AVFAudio/AVAudioSession.h>
#import <AVFAudio/AVSpeechSynthesis.h>
#endif
And AVSpeechSynthesizer and friends are declared in AVSpeechSynthesis.h. This is the technical reason you're seeing that error: those headers are only included when compiling for iOS.
But I've tried importing that header manually, and I don't think AVSpeechSynthesizer would work correctly on Mojave even if that #if weren't there: creating an AVSpeechUtterance on Mojave always returns nil.
So this is either unfinished work on Mojave, or the headers are incorrectly annotated and speech synthesis is not available via AVFoundation on Mojave. Note that NSSpeechSynthesizer is still there and has not been marked deprecated.
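In the meantime, a minimal sketch of the NSSpeechSynthesizer route on Mojave (it uses the default system voice; picking a voice per detected language is left out):
import AppKit

let synthesizer = NSSpeechSynthesizer()          // default system voice
synthesizer.startSpeaking("Mickey mouse went to town")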

Adjust camera focus in ARKit

I want to adjust the device's physical camera focus while in augmented reality. (I'm not talking about the SCNCamera object.)
In an Apple Dev forum post, I've read that autofocus would interfere with ARKit's object detection, which makes sense to me.
Now, I'm working on an app where the users will be close to the object they're looking at. With the camera's default focus, everything looks very blurry when you get closer to the object than about 10 cm.
Can I adjust the camera's focus before initializing the scene, or preferably while in the scene?
20.01.2018
Apparently, there's still no solution to this problem. You can read more in this reddit post and this developer forum post, which cover private API workarounds and other (unhelpful) info.
25.01.2018
@AlexanderVasenin provided a useful update pointing to Apple's documentation. It shows that ARKit will support not just focusing but also autofocusing as of iOS 11.3.
See my usage sample below.
As stated by Alexander, iOS 11.3 brings autofocus to ARKit.
The corresponding documentation site shows how it is declared:
var isAutoFocusEnabled: Bool { get set }
You can access it this way:
var configuration = ARWorldTrackingConfiguration()
configuration.isAutoFocusEnabled = true // or false
However, as it is true by default, you should not even have to set it manually, unless you choose to opt out.
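For completeness, the flag only takes effect when you run the session with that configuration; a small sketch, assuming the standard sceneView outlet from Apple's AR template:
sceneView.session.run(configuration)        // apply the configuration initially
// To opt out later, flip the flag and run the session again with it:
configuration.isAutoFocusEnabled = false
sceneView.session.run(configuration)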
UPDATE: Starting from iOS 11.3, ARKit supports autofocusing, and it's enabled by default (more info). Manual focus adjustment still isn't available.
Prior to iOS 11.3, ARKit supported neither manual focus adjustment nor autofocus.
Here is Apple's reply on the subject (Oct 2017):
ARKit does not run with autofocus enabled as it may adversely affect plane detection. There is an existing feature request to support autofocus and no need to file additional requests. Any other focus discrepancies should be filed as bug reports. Be sure to include the device model and OS version. (source)
There is another thread on Apple forums where a developer claims he was able to adjust focus by calling the AVCaptureDevice.setFocusModeLocked(lensPosition:completionHandler:) method on the private AVCaptureDevice used by ARKit, and it appears not to affect tracking. Though the method itself is public, ARKit's AVCaptureDevice is not, so using this hack in production would most likely result in App Store rejection.
if #available(iOS 16.0, *) {
    // This property is nil on devices that aren't equipped with an ultra-wide camera.
    if let device = ARWorldTrackingConfiguration.configurableCaptureDeviceForPrimaryCamera {
        do {
            try device.lockForConfiguration()
            // Configure your focus mode here, for example (illustrative only):
            // device.setFocusModeLocked(lensPosition: 1.0, completionHandler: nil)
            // You also need to change isAutoFocusEnabled on the ARWorldTrackingConfiguration you run.
            device.unlockForConfiguration()
        } catch {
            // Handle the configuration-lock error.
        }
    }
} else {
    // Fallback on earlier versions.
}
Use the configurableCaptureDeviceForPrimaryCamera property; it is only available on iOS 16 or later.
Documentation: ARKit › Configuration Objects › ARConfiguration › configurableCaptureDeviceForPrimaryCamera

Adapt app from iOS 7 to iOS 6

I wrote my iPhone application in Xcode 5.0 and it supports only iOS 7.
How can I make it available for iOS 6?
I'm also interested in how to prevent the application from running on iPad.
First question: make sure your deployment target is 6.0, don't use APIs that are iOS 7-only, or check at runtime by using
if ([someObject respondsToSelector:@selector(ios7onlymethod)]) {
    // do your iOS 7 only stuff
} else {
    // Fall back to iOS 6-supported ways
}
Or use
if ([[[UIDevice currentDevice] systemVersion] floatValue] >= 7.0f) {
    // do your iOS 7 only stuff
} else {
    // Fall back to iOS 6-supported ways
}
New frameworks you want to use should be marked as optional in Xcode; to do that select your target, click general, and scroll to the "Linked Frameworks and Libraries" section.
What's really cool is that classes in frameworks marked as optional are replaced with nil on versions of iOS that don't have them. So suppose you write some code like this, using a class from the Sprite Kit framework, new in iOS 7:
SKSpriteNode *spriteNode = [SKSpriteNode spriteNodeWithImageNamed:@"mySprite"];
On iOS 6, the dynamic linker, which "links" frameworks to apps (apps don't copy frameworks, they just get them from the system), resolves the weakly linked SKSpriteNode class to nil, so that line effectively becomes:
... = [nil spriteNodeWithImageNamed:@"mySprite"];
Sending messages to nil in Objective-C does absolutely nothing, so the above code doesn't crash. No problem. So instead of littering your code with if-statements checking for the existence of a class, you can just go with the flow and let the dynamic linker do the work.
Further reading:
iOS 7 UI Transition Guide: Supporting iOS 6
Supporting Multiple iOS Versions and Devices
Second question: there is no way to say that you want your app to run only on iPhones and iPod touches. You can require things that are specific to the iPhone and iPod touch (like a certain processor architecture or the M7 motion coprocessor), but Apple won't like it if you require the M7 chip just to exclude a certain device when you don't even need it. You should probably think about why you don't want your app to run on iPads.