iOS 16: Poor SpeechSynthesisUtterance voice quality on iPhone

Since the iOS 16 update, my vocabulary app (a PWA) has had problems speaking the text provided to the SpeechSynthesisUtterance object. It doesn't affect all languages; e.g. Russian sounds the same as it did before the update to iOS 16. With German or English, though, the quality is very low: muffled, and the voice sounds nasal. In Safari on macOS everything works as expected, but not on iOS 16.
const fullPhrase = toFullPhrase(props.phrase);
const utterance = new SpeechSynthesisUtterance();

onMounted(() => { // Vue lifecycle hook
  utterance.text = fullPhrase;
  utterance.lang = voice.value.lang;
  utterance.voice = voice.value;
  utterance.addEventListener(ON_SPEAK_END, toggleSpeakStatus);
});
I tried modifying the pitch and rate properties, but without success. Did they perhaps change the SpeechSynthesis / SpeechSynthesisUtterance API in Safari on iOS 16?

It looks like iOS 16 introduced a lot of new (sometimes very odd) voices for en-GB and en-US. In my case I was looking up a voice only by its lang and taking the first match, so I ended up with one of the strange ones.
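A minimal sketch of picking a voice more deliberately instead of taking the first match for a lang (the preferred name below is only an example, not something from the original post; inspect speechSynthesis.getVoices() on the device to see what iOS 16 actually offers):
// Sketch: prefer an explicitly named voice, then the platform default, then the first match.
function pickVoice(lang: string, preferredNames: string[] = []): SpeechSynthesisVoice | undefined {
  const candidates = speechSynthesis
    .getVoices()
    .filter(v => v.lang.toLowerCase().startsWith(lang.toLowerCase()));
  return (
    candidates.find(v => preferredNames.includes(v.name)) ??
    candidates.find(v => v.default) ??
    candidates[0]
  );
}

// Voices can load asynchronously, so (re)select once voiceschanged has fired.
speechSynthesis.addEventListener('voiceschanged', () => {
  const utterance = new SpeechSynthesisUtterance('Guten Tag');
  const voice = pickVoice('de', ['Anna']); // 'Anna' is just an example name
  if (voice) {
    utterance.voice = voice;
    utterance.lang = voice.lang;
  }
  speechSynthesis.speak(utterance);
});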

Related

Using multiple audio devices simultaneously on macOS

My aim is to write an audio app for low-latency, real-time audio analysis on macOS. This will involve connecting to one or more USB interfaces and taking specific channels from these devices.
I started with the Learning Core Audio book, writing this in C. As I went down this path it became clear that a lot of the old frameworks have been deprecated. It appears that the majority of what I would like to achieve can be written using AVAudioEngine and connected AVAudioUnits, dropping down to the Core Audio level only for lower-level things like configuring the hardware devices.
I am confused here as to how to access two devices simultaneously. I do not want to create an aggregate device, as I would like to treat the devices individually.
Using Core Audio I can list the audio device IDs for all devices and change the default system output device (and can do the same for the input device using similar methods). However, this only gives me one physical device, and it will always track the device selected in System Preferences.
static func setOutputDevice(newDeviceID: AudioDeviceID) {
    let propertySize = UInt32(MemoryLayout<UInt32>.size)
    var deviceID = newDeviceID
    var propertyAddress = AudioObjectPropertyAddress(
        mSelector: AudioObjectPropertySelector(kAudioHardwarePropertyDefaultOutputDevice),
        mScope: AudioObjectPropertyScope(kAudioObjectPropertyScopeGlobal),
        mElement: AudioObjectPropertyElement(kAudioObjectPropertyElementMaster))

    AudioObjectSetPropertyData(AudioObjectID(kAudioObjectSystemObject), &propertyAddress, 0, nil, propertySize, &deviceID)
}
I then found that kAudioUnitSubType_HALOutput is the way to go for specifying a fixed device, which is only accessible through this unit type. I can create a component of this type using:
var outputHAL = AudioComponentDescription(componentType: kAudioUnitType_Output,
                                          componentSubType: kAudioUnitSubType_HALOutput,
                                          componentManufacturer: kAudioUnitManufacturer_Apple,
                                          componentFlags: 0,
                                          componentFlagsMask: 0)
let component = AudioComponentFindNext(nil, &outputHAL)

guard component != nil else {
    print("Can't get input unit")
    exit(-1)
}
However, I am confused about how you create a description of this component and then find the next device that matches the description. Is there a property where I can select the audio device ID and link the AUHAL to it?
I also cannot figure out how to assign an AUHAL to an AVAudioEngine. I can create a node for the HAL but cannot attach it to the engine. Finally, is it possible to create multiple kAudioUnitSubType_HALOutput components and feed these into the mixer?
I have been trying to research this for the last week, but I am nowhere closer to an answer. I have read up on channel mapping and everything I need to know further down the line, but getting at the audio at this lower level seems pretty undocumented, especially when using Swift.

How to use kAudioUnitSubType_LowShelfFilter of kAudioUnitType_Effect, which controls bass, in Core Audio?

I'm back with one more question related to bass. I had already posted this question, How can we control the bass of music on iPhone, but it didn't get as much attention as I'd hoped. Since then I have done some more searching and read up on Core Audio. I found one sample project which I want to share with you; it is available to download as iPhoneMixerEqGraphTest. Have a look at it: what I saw in this code is that the developer uses the preset equalizer provided by the iPod. Let's look at a code snippet too:
// iPodEQ unit
CAComponentDescription eq_desc(kAudioUnitType_Effect, kAudioUnitSubType_AUiPodEQ, kAudioUnitManufacturer_Apple);
What kAudioUnitSubType_AUiPodEQ does is fetch the preset values from the iPod equalizer and return them to us as an array, which we can use in a picker view/table view to select a category like Bass, Rock, Dance, etc. That doesn't help me, as it only returns the names of the equalizer presets, and I want to implement bass only and control it with a UISlider.
To put bass on a slider I need values, so that I can set a minimum and a maximum and change the bass as the slider moves.
After all this I started reading the Audio Unit framework classes in Core Audio and found this, and after that I started searching for bass control and found this.
So now I need to use kAudioUnitSubType_LowShelfFilter, but I don't know how to use this subtype in my code so that I can control the bass as described in the documentation. Even Apple hasn't documented how to use it. The kAudioUnitSubType_AUiPodEQ subtype returned an array, but kAudioUnitSubType_LowShelfFilter does not return any array. With kAudioUnitSubType_AUiPodEQ we could pick an equalizer type from that array, but how do we use kAudioUnitSubType_LowShelfFilter? Can anybody help me with this in any way? It would be highly appreciated.
Thanks.
Update
Although it's declared in the iOS headers, the Low Shelf AU is not actually available on iOS.
The parameters of the Low Shelf are different from the iPod EQ.
Parameters are declared and documented in AudioUnit/AudioUnitParameters.h:
// Parameters for the AULowShelfFilter unit
enum {
    // Global, Hz, 10->200, 80
    kAULowShelfParam_CutoffFrequency = 0,

    // Global, dB, -40->40, 0
    kAULowShelfParam_Gain = 1
};
So after your low shelf AU is created, configure its parameters using AudioUnitSetParameter.
Some initial parameter values you can try would be 120 Hz (kAULowShelfParam_CutoffFrequency) and +6 dB (kAULowShelfParam_Gain) -- assuming your system reproduces bass well, your low frequency content should be twice as loud.
Can you tell me how I can use kAULowShelfParam_CutoffFrequency to change the frequency?
If everything is configured right, this should be all that is needed:
assert(lowShelfAU);

const float frequencyInHz = 120.0f;
OSStatus result = AudioUnitSetParameter(lowShelfAU,
                                        kAULowShelfParam_CutoffFrequency,
                                        kAudioUnitScope_Global,
                                        0,
                                        frequencyInHz,
                                        0);
if (noErr != result) {
    assert(0 && "error!");
    return ...;
}

Why does PhoneGap seem faster than Titanium?

I'm trying to measure the execution performance of a few cross-platform solutions, among them Titanium and PhoneGap.
So here's the Titanium version of my performance tester. It's very simple, but I'm just trying to get a feel for how fast my code gets executed:
var looplength;
var start1;
var start2;
var end1;
var end2;
var duration1;
var duration2;
var diff;
var diffpiter;
var power;
var info;

for (power = 0; power < 24; power++) {
  looplength = Math.pow(2, power);

  start1 = new Date().getTime();
  for (iterator = 0; iterator < looplength; iterator++) { a = iterator; b = iterator; }
  end1 = new Date().getTime();

  start2 = new Date().getTime();
  for (iterator = 0; iterator < looplength; iterator++) { a = iterator; }
  end2 = new Date().getTime();

  duration1 = end1 - start1;
  duration2 = end2 - start2;
  diff = duration1 - duration2;
  diffpiter = diff / looplength;

  info = { title: '2^' + power + ' ' + diffpiter };
  tableView.appendRow(Ti.UI.createTableViewRow(info), { animated: true });
}
The PhoneGap version is the same except for the last two lines, which are replaced with:
document.write('2^' + power + ' ' + diffpiter + '<br />');
Both are executed on an iPhone 4S. I've run the test numerous times, to eliminate errors.
How in the name of all that is holy can the Titanium version measure ~0.0009 milliseconds per iteration while the PhoneGap version measures ~0.0002 milliseconds per iteration?
Titanium is supposed to compile my JavaScript code, so I expected it to be faster. In this case, however, it's at least 4 times slower! I'm not an expert in performance testing, but the test I designed should be at least remotely accurate...
Thank you for any tips you can give me.
Titanium doesn't convert JavaScript code to Objective-C. Titanium simply uses a JavaScript-to-Objective-C bridge to communicate with the Objective-C iOS frameworks (most importantly user interface objects). A more appropriate comparison would be to build Titanium's user interface elements (button, label, window, view), manipulate them, and compare that with using HTML, CSS, and image buttons in PhoneGap.
PhoneGap also uses a bridge of its own, and if you know Java or Objective-C you can write plugins to use native user interface elements and other native features of iOS or Android.
http://zsprawl.com/iOS/2012/05/navigation-bar-with-nativecontrols-in-cordova/
This is plain JavaScript, and not all JavaScript is compiled to native code. Basically, when you use the Titanium API, those calls are mapped to Objective-C or Java code. But to stay flexible and dynamic, a JavaScript interpreter is compiled into the app, and that is what runs the JavaScript you have written.
This makes the app slower. But testing purely on raw loops like this is not very useful. If you want a full suite of tests, you need to exercise the Titanium API too and compare that to the PhoneGap equivalent.
What you'll notice is that, since PhoneGap does not compile to native code, it will feel different, and visually Titanium will behave faster.
Oh man, I don't want to start a flame war but I will put in my two cents. First, full disclosure: I'm a contributor to PhoneGap and I've never used Titanium. However I am answering from 15 years of development experience.
I've never found tools that convert code from one language to another to be particularly efficient. Yes, native code should run faster than JavaScript code but I'm willing to bet there are inefficiencies introduced during the translation phase.
Again, this is just from past experience with tools that compile one language into another; it is not a knock on Titanium, as that is a great framework.
In your Titanium code, your last line is creating UI objects: it makes a call into Objective-C to create a UITableViewRow and an animation object, and then appends the row to a UITableView, so you are doing three operations. I'd be pretty confident that this is what is taking the time. The preferred Ti way of doing this would be to create an array of title objects and then call setData on the table at the end, as sketched below.
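As a rough sketch of that approach (Ti is the Titanium runtime global; the row data here is a placeholder, not the real measurements):
// Sketch of the suggested batching: collect plain row data first, then set it in one call.
declare const Ti: any; // Titanium runtime global (assumed available)

const tableView = Ti.UI.createTableView();
const rows: Array<{ title: string }> = [];

for (let power = 0; power < 24; power++) {
  const diffpiter = 0; // placeholder: run the timing loops here, as in the original test
  rows.push({ title: '2^' + power + ' ' + diffpiter });
}

tableView.setData(rows); // one table update instead of 24 animated appendRow calls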
PhoneGap has already created the UIWebView at app load, and you are just updating the HTML in one DOM element, so I would expect the UI to be faster.

Are there any audio ads for iPhone audio apps?

My app is like a podcast for web articles: https://apps.apple.com/app/id1273954643
I plan to make a free version and am curious whether there are audio ads for iPhone apps.
Since most users of my app don't look at the screen, banner ads don't fit well.
I want to insert audio ads the way Spotify does.
I checked http://www.medialets.com/ and http://www.greystripe.com/, but their showcases are quite vague. I sent them emails, but no reply yet.
Any help will be greatly appreciated.
Thanks!
Hmmm... this seems like an awesome business opportunity that hasn't been properly executed yet.
I have also seen mentions of audio ads being served up into client iPhone apps by TargetSpot.
I really like your idea. After searching for a while I came across this helpful tutorial. Though it's kind of commercial, I hope it will help you:
http://advertising.about.com/od/smallbusinesscampaigns/a/podcastweb.htm
If you're willing to use an API, you could use something like this:
https://docs.api.audio/recipes/programmatic-audio-ads
# Check that you are using Python 3.8 or later
# pip install -U aflr
import aflr

aflr.api_key = "APIKEY"

audience = [
    {"number": "33", "location": "Buckingham"},
    {"number": "22", "location": "Sunshine"},
]

text = """
<<soundSegment::effect1>>
<<sectionName::hello>>
If you have any plans for today, cancel them!
<<soundSegment::intro>>
<<sectionName::hello2>>
This really is the final call for {{location}} Hyundai's massive clear out sale! Only until midnight tonight, so come on down!
<<soundSegment::main>>
<<sectionName::main>>
We're clearing out all remaining 2020 Hyundais at Ottawa's top volume Hyundai dealers. These are the last days for clear out pricing and amazing clear out incentives. Zero percent financing for up to 84 months, and up to 7700 in cash price adjustments on all 2020 Hyundais at Hyundai on {{location}}. Pick one of the {{number}} Santa Fays in stock, a family-sized SUV with all-wheel drive and back-up cameras from just $85 weekly, zero down!
It's the easiest time to get into a new Hyundai, but these deals won't be around for long, ONLY until midnight TONIGHT!
<<soundSegment::outro>>
<<sectionName::outro>>
Get into a new Hyundai today. At {{location}} Hyundai, better cars for passionate car drivers. <break time="1s"/>
"""

script = aflr.Script().create(scriptText=text, scriptName="helloworld", moduleName="hello", projectName="hello")
print(script)

# Create speech for each audience item
for item in audience:
    r = aflr.Speech().create(
        scriptId=script.get("scriptId"),
        voice="en-US-GuyNeural",
        speed=120,
        silence_padding=0,
        audience=[item],
    )
    print(r)

template = "hotwheels"
print(template)

# Master and download the final audio per audience item
for item in audience:
    r = aflr.Mastering().create(
        scriptId=script.get("scriptId"), soundTemplate=template, audience=[item]
    )
    print(r)

    url = aflr.Mastering().download(
        scriptId=script.get("scriptId"),
        parameters=item,
        destination=".",
    )
    print(f"✨ Mastered file for template {template} ✨")
    print(url)
That way you could serve the ads; this example is in Python. You could also do it in Swift (but there's no SDK for that at the moment, so you'd need to write it yourself).
Disclaimer: I work for www.api.audio

iPhone en_* sublanguage localization

I want to localize strings in my iPhone app for en_GB and other 'en' sub-languages, but Xcode and the iPhone refuse to let this happen. I have created localizations of "Localizable.strings" for en_GB and en_US (I tried both hyphens and underscores) for testing purposes, but they just aren't recognized. The only language code that works is simply "en" (displayed as "English" in Xcode).
I can't believe this isn't possible, so what am I doing wrong? I'm also hoping to get the typical 'cascading' behaviour where, if a string isn't found in the sub-language (e.g. "en_GB"), it is taken from "en" if possible. Help?
When you choose 'English' from the list of languages in the iPhone preferences, that actually means the 'en_US' language.
So until Apple updates its software with additional sub-languages like "English (British)" etc., we are left with going by the locale's region setting and loading strings manually from another strings table.
However, the language and regional locale are separated for a reason: a Spanish user in the UK may want dates/times formatted according to the local customs, but program strings in their native tongue. It would be incorrect to detect the regional locale (UK) and therefore display UK strings.
So basically there is no way to do this currently.
What you're doing should work according to the docs, but it appears that the iPhone OS implementation is at odds with the documentation. According to Radar 6158876, there's no support for en_GB as a language, only as a locale (date formats and the like).
I found the same problem.
BTW, if you look at the iPhone Settings -> General -> International menu, it makes the distinction between language and region quite clear:
Languages:
- English
Region Format:
- United States
- United Kingdom
The localization framework only appears to pay attention to the language, not the region.
I'm tempted to raise an enhancement request for this with Apple, as IMO it is reasonable that a user might want to use British English (for the text) whilst being in the United States (where, say, phone numbers should be in US format).
This can actually be done - check my solution here - iPhone App Localization - English problems?
Create a separate string resource, say UKLocalization.strings, and create localizations for each of your supported languages. For all localizations other than en this file is empty. For en, it contains only the strings that have unique en_GB spelling.
Next, you create a replacement for NSLocalizedString that first checks the UKLocalization table before falling back to the standard localization table.
e.g.:
static NSString* _locTable = nil;

void RTLocalizationInit()
{
    _locTable = nil;
    NSString* country = [[NSLocale currentLocale] objectForKey:NSLocaleCountryCode];
    if ([country isEqual:@"GB"])
    {
        _locTable = @"UKLocalization";
    }
}

NSString* RTLocalizedString(NSString* key, NSString* ignored)
{
    NSString* value = nil;
    value = [[NSBundle mainBundle] localizedStringForKey:key value:nil table:_locTable];
    if (value == key)
    {
        value = NSLocalizedString(key, @"");
    }
    return value;
}
I’m not sure in which version of iOS it was introduced, but iOS 7 definitely has a ‘British English’ language preference that will pick up resources from the en_GB.lproj directory. The various hacks floating around the web shouldn’t be necessary unless you’re after a more specialised* dialect.
*see what I did there ;)