Missing stereo channel in iOS 5

Just updated to iOS 5 and noticed that the app we are working on has lost its audio output in the right channel. Unsure whether the fault was in our own audio unit implementation, I tried running Apple's MixerHost sample, and its right channel is missing too.
I believe this has something to do with the recent changes to Core Audio in iOS 5 (maybe an additional unit or attribute is needed?). Hopefully someone can point out what that is. Thanks for the help.
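Not an answer, but a small diagnostic sketch that may help narrow it down (the mixerUnit variable is hypothetical, standing in for the multichannel mixer MixerHost creates): log the stream format negotiated on a given scope and bus. If mChannelsPerFrame comes back as 1 after the iOS 5 update, the problem is in the format negotiation rather than in the mixer's gain or pan settings.

```objc
// Diagnostic sketch only: log the stream format on a given scope/bus so the
// channel count can be checked. mixerUnit is a hypothetical AudioUnit handle.
#import <Foundation/Foundation.h>
#import <AudioUnit/AudioUnit.h>

static void LogStreamFormat(AudioUnit unit, AudioUnitScope scope, AudioUnitElement bus)
{
    AudioStreamBasicDescription asbd = {0};
    UInt32 size = sizeof(asbd);
    OSStatus err = AudioUnitGetProperty(unit, kAudioUnitProperty_StreamFormat,
                                        scope, bus, &asbd, &size);
    if (err == noErr) {
        NSLog(@"scope %u, bus %u: %u channel(s) at %.0f Hz, format flags 0x%x",
              (unsigned)scope, (unsigned)bus,
              (unsigned)asbd.mChannelsPerFrame, asbd.mSampleRate,
              (unsigned)asbd.mFormatFlags);
    } else {
        NSLog(@"AudioUnitGetProperty failed: %d", (int)err);
    }
}

// e.g. LogStreamFormat(mixerUnit, kAudioUnitScope_Input, 0);
//      LogStreamFormat(mixerUnit, kAudioUnitScope_Output, 0);
```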

Related

Catalina Beta 5: QuickTime Audio Recording Not Working on 2018 MacBook Pros

Starting a QuickTime audio recording with Catalina Dev Beta 5 on 2018 or later MacBook Pros outputs files with no sound (MacBook Pro Microphone selected). Example file here: https://www.dropbox.com/s/ib67k0vg8cm93fn/test_no_audio%20%281%29.aifc?dl=0
During the recording, Console shows this error:
"CMIO_Unit_Converter_Audio.cpp:590:RebuildAudioConverter AudioConverterSetProperty() failed (1886547824)"
We have an application that records the screen and audio at the same time using AVFoundation, and the resulting video files also have no audio. However, when inspecting the CMSampleBuffers, they seem fine: https://gist.github.com/paulius005/faef6d6250323b7d3386a9a70c08f70b
Is anyone else experiencing this issue or possibly have more visibility if it's something Apple is working on?
Anything else that I should be looking at to tackle the issue?
Yes, Apple is changing a lot of things in the audio subsystem on Catalina. I am aware that various audio applications are being rewritten for Catalina. Since beta 2, each new beta release has come with some deprecations, but also with some new implementations [in the new audio layer of macOS].
Current Beta 5 Audio Deprecations:
The OpenAL framework is deprecated and remains present for compatibility purposes. Transition to AVAudioEngine for spatial audio functionality.
AUGraph is deprecated in favor of AVAudioEngine.
Inter-App audio is deprecated. Use Audio Units for this functionality.
Carbon component-based Audio Units are deprecated and support will be removed in a future release.
Legacy Core Audio HAL audio hardware plug-ins are no longer supported. Use Audio Server plug-ins for audio drivers.
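For orientation only, here is a minimal sketch of the AVAudioEngine graph that the notes above point to as the replacement for AUGraph (and for OpenAL's spatial/reverb features). It is not tied to the asker's recording problem; the preset and mix values are just illustrative choices.

```objc
// Minimal AVAudioEngine sketch: player -> reverb -> main mixer -> output.
// Illustrative only; preset and wet/dry values are arbitrary.
#import <AVFoundation/AVFoundation.h>

AVAudioEngine *engine = [[AVAudioEngine alloc] init];
AVAudioPlayerNode *player = [[AVAudioPlayerNode alloc] init];
AVAudioUnitReverb *reverb = [[AVAudioUnitReverb alloc] init];
[reverb loadFactoryPreset:AVAudioUnitReverbPresetMediumHall];
reverb.wetDryMix = 40.0;

[engine attachNode:player];
[engine attachNode:reverb];

AVAudioFormat *format = [engine.mainMixerNode outputFormatForBus:0];
[engine connect:player to:reverb format:format];
[engine connect:reverb to:engine.mainMixerNode format:format];

NSError *error = nil;
if (![engine startAndReturnError:&error]) {
    NSLog(@"Engine failed to start: %@", error);
}
// Scheduling a file on the player node and calling -play would follow in a real app.
```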
About AVFoundation [which you are using]:
Deprecated in Beta 5:
The previously deprecated 32-bit QuickTime framework is no longer available in macOS 10.15.
The symbols for QTKit, which relied on the QuickTime framework, are still present but the classes are non-functional.
The above item: Apple shipped the symbols for QTKit in Catalina Beta 5, but they are nulled and non-functional. This means an application that uses those classes will run but will not produce any result. (I don't know whether these deprecations directly or indirectly affect your program, but they fall under the AVFoundation notes.)
I think they will be fully removed in upcoming betas, but for now they are nulled and non-functional; removing them outright would cause instant crashes in many audio/AV applications that try to load them. This looks like a step-by-step migration from beta to beta, presumably to give developers time to rewrite their audio apps for the new audio subsystem.
You can find more details in the release notes [along with links to documentation for some of the new classes and functions that replace the deprecated ones], but the documentation is not very rich yet.
https://developer.apple.com/documentation/macos_release_notes/macos_catalina_10_15_beta_5_release_notes
PS: About the opinions, point of view, and information written here: I am a senior macOS developer, but my area is kernel/networking/security rather than AV/audio/media. I have been closely following all the changes to the operating system across each Catalina beta since the first, and the changes I see Apple making to the audio subsystem are significant.
I cannot help you with the audio programming issue specifically, but you asked whether it could be something Apple is working on, and yes, it is.
I hope this gives you complementary information that helps you solve your application's issue.

AVAssetExportSession works on iPad, no audio on iPhone

I have the exact same code running in both the iPad and iPhone versions of my app. The code works fine on the iPad (the video is exported properly with audio), but the video exported on the iPhone doesn't have any sound. I even ran the iPhone version on the iPad and it worked fine, which suggests that nothing is wrong with the code itself.
Any insight on why the iPhone isn't exporting the video with audio would be much appreciated.
I have done some research, and somebody mentioned that memory issues could cause export problems. Memory and CPU usage are fairly high during the video processing/exporting, but never high enough to trigger a memory warning.
Thanks in advance.
You didn't mention whether you stepped through the code line by line on the iPhone, setting breakpoints and watching each variable to make sure its value is correct. That would be the first step.
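If it helps, here is a hedged sketch of the checks implied above (the sourceURL and outputURL variables are placeholders for the app's own URLs): confirm the source asset actually exposes an audio track on the device, pick a preset that carries audio, and log the session error if the export fails. A missing or unloaded audio track in the iPhone-side asset is a common reason an export silently drops sound.

```objc
// Sketch: verify the asset has an audio track before exporting, and log failures.
// sourceURL / outputURL are hypothetical placeholders.
#import <AVFoundation/AVFoundation.h>

AVURLAsset *asset = [AVURLAsset URLAssetWithURL:sourceURL options:nil];
NSArray *audioTracks = [asset tracksWithMediaType:AVMediaTypeAudio];
NSLog(@"audio tracks in source asset: %lu", (unsigned long)[audioTracks count]);

AVAssetExportSession *session =
    [[AVAssetExportSession alloc] initWithAsset:asset
                                     presetName:AVAssetExportPresetHighestQuality];
session.outputURL = outputURL;
session.outputFileType = AVFileTypeQuickTimeMovie;

[session exportAsynchronouslyWithCompletionHandler:^{
    if (session.status != AVAssetExportSessionStatusCompleted) {
        NSLog(@"Export failed: %@", session.error);
    }
}];
```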

MPMoviePlayerController setCurrentPlaybackRate not working

We have an HTTP stream running on an iPad with iOS 4.3.3.
We are using MPMoviePlayerController. I am trying to change the playback rate, in order to implement a custom fast-forward experience, by using:
[player setCurrentPlaybackRate:2.0];
But it isn't working. If I read the current playback rate immediately after the line above, it still reports 1.0. Any idea whether this simply doesn't work for a stream? The documentation doesn't say anything about it.
The currentPlaybackRate property does indeed not work with HTTP streaming. I would suggest filing a bug report with Apple on this issue. I still assume it will not work in the future, but perhaps another bug report will motivate the Apple folks to update their documentation accordingly.
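One thing worth ruling out first (an assumption on my part, not something the documentation states): setting the rate before playback has actually started can be silently ignored. The sketch below waits for the playback-state notification and then reads the rate back; on HTTP streams it appears to snap back to 1.0, which matches the behaviour described above.

```objc
// Diagnostic sketch: set the rate once playback has started, then read it back.
// `player` is the existing MPMoviePlayerController instance.
#import <MediaPlayer/MediaPlayer.h>

[[NSNotificationCenter defaultCenter]
    addObserverForName:MPMoviePlayerPlaybackStateDidChangeNotification
                object:player
                 queue:[NSOperationQueue mainQueue]
            usingBlock:^(NSNotification *note) {
                if (player.playbackState == MPMoviePlaybackStatePlaying) {
                    player.currentPlaybackRate = 2.0f;
                    NSLog(@"Rate after setting: %f", player.currentPlaybackRate);
                }
            }];
```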

Reverb with OpenAL on iOS

Is there any possible way to do reverb using OpenAL on iOS? Anyone have any code snippets to achieve this effect? I know it's not included in the OpenAL library for iOS, but I would think there's still a way to program it in.
Thanks.
Reverb is natively supported in OpenAL as of iOS 5.0. You can view a sample implementation in the ObjectAL project:
https://github.com/kstenerud/ObjectAL-for-iPhone
Just grab the most recent source from this repository, open "ObjectAL.xcodeproj", and run the ObjectALDemo target on any iOS 5.0 device (it should also work in the simulator).
The actual implementation lives in two places:
https://github.com/kstenerud/ObjectAL-for-iPhone/blob/master/ObjectAL/ObjectAL/OpenAL/ALListener.m
https://github.com/kstenerud/ObjectAL-for-iPhone/blob/master/ObjectAL/ObjectAL/OpenAL/ALSource.m
Look for the word 'reverb' in these files (and the corresponding header files) to find the name of the OpenAL properties and constants used to set and control the reverb effect.
Good luck!
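For reference, here is a hedged sketch of the ASA (Apple Spatial Audio) extension calls that ObjectAL wraps in the files above. The function names, signatures, and enum strings below are assumed to match what ObjectAL's wrapper uses; verify them against the linked sources before relying on them.

```objc
// Hedged sketch of the ASA reverb calls (names assumed to match ObjectAL's wrapper;
// check ALListener.m / ALSource.m above before relying on them).
#import <OpenAL/al.h>
#import <OpenAL/alc.h>

typedef ALenum (*alcASASetListenerProcPtr)(ALuint property, ALvoid *data, ALuint dataSize);
typedef ALenum (*alcASASetSourceProcPtr)(ALuint property, ALuint source, ALvoid *data, ALuint dataSize);

static void EnableReverbOnSource(ALuint sourceID)
{
    alcASASetListenerProcPtr asaSetListener =
        (alcASASetListenerProcPtr)alcGetProcAddress(NULL, "alcASASetListener");
    alcASASetSourceProcPtr asaSetSource =
        (alcASASetSourceProcPtr)alcGetProcAddress(NULL, "alcASASetSource");
    if (asaSetListener == NULL || asaSetSource == NULL) {
        return; // ASA extension unavailable (pre-iOS 5)
    }

    // Turn the global reverb on and give it a level and room type...
    ALuint reverbOn = 1;
    asaSetListener(alcGetEnumValue(NULL, "ALC_ASA_REVERB_ON"),
                   &reverbOn, sizeof(reverbOn));

    ALfloat globalLevel = 1.0f;  // 0.0 - 1.0
    asaSetListener(alcGetEnumValue(NULL, "ALC_ASA_REVERB_GLOBAL_LEVEL"),
                   &globalLevel, sizeof(globalLevel));

    ALint roomType = 0;          // room-type constants are listed in ObjectAL's headers
    asaSetListener(alcGetEnumValue(NULL, "ALC_ASA_REVERB_ROOM_TYPE"),
                   &roomType, sizeof(roomType));

    // ...then decide how much of this particular source feeds the reverb.
    ALfloat sendLevel = 0.5f;
    asaSetSource(alcGetEnumValue(NULL, "ALC_ASA_REVERB_SEND_LEVEL"),
                 sourceID, &sendLevel, sizeof(sendLevel));
}
```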
You could use pre-rendered audio if the situation allows it. If you want to do it in real time, look into DSP. There's no way to do this out of the box that I am aware of.
The additional desktop APIs such as EFX and EAX use hardware signal processing. Maybe in the future these handheld devices will implement the full OpenAL and OpenGL APIs, but for now we have the stripped-down versions, for practical reasons such as cost and battery life.
I'm sure there is a way, but it's not going to be easy.

iPhone SDK: Is it possible to process audio file from local library

Well, I will try my best not to make this an 'I just want the code' question...
I'm currently working on a project that requires some audio signal processing of local music files (e.g. from the iTunes library). The whole job includes:
Get the PCM data of an audio file (normally from iTunes library); <--AudioQueue (?)
Write the PCM data to a new file (it seems that Apple does not allow direct modification on music tracks); <--CoreAudio(?)
Do some processing and modification, like filters, manipulators, etc. <-- Will be developed in C++
Play the processed track. <--RemoteIO
The problem is, after going through some blogs and discussions:
http://lists.apple.com/archives/coreaudio-api/2009/Aug/msg00100.html, http://atastypixel.com/blog/using-remoteio-audio-unit/
http://osdir.com/ml/coreaudio-api/2009-08/msg00093.html
as well as the official sample code, I got the feeling that the Core Audio SDK only lets us apply audio processing to voice demos recorded from the mic.
My question is that:
Can I get raw data from iTunes library tracks instead of mic input?
If the answer to the first question is 'No', is there a way to 'fool' the SDK into thinking it is getting data from the mic rather than from iTunes? (I have done some similar 'hacking' in C# before XD)
If the whole approach just doesn't work, can anyone suggest some alternative ideas?
Any help will be appreciated. Thank you very much :-)
Thanks.
Just found something really cool yesterday.
From iPhone Media Library to PCM Samples in Dozens of Confounding, Potentially Lossy Steps
(http://www.subfurther.com/blog/?p=1103)
And also an MIT-licensed class library:
TSLibraryImport: Objective-C class + sample code for importing files from user's iPod Library in iOS4.
(http://bitbucket.org/artgillespie/tslibraryimport/changeset/a81838f8c78a)
Hope they help!
Cheers,
Manca
1) No. Apple does not allow direct access to the PCM data of songs. Otherwise you could create music-sharing apps, which is not in Apple's interest.
2) No. Hacking it and getting approved is practically impossible because of Apple's app review process.
3) The only alternative I can think of is doing the processing on a PC/Mac and then transferring the result to the iPhone. Or you could store the files in your own application's folder; you should be able to load and process those via Core Audio.
I know this thread is old, but... did this work for you, Manca? And did the app get approved?
EDIT: just discovered the AVAssetReader class, introduced in iOS 4.1, which should help.
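A hedged sketch of that AVAssetReader route (error handling trimmed, and the processing step is only indicated by a comment): read a track from the user's library via its MPMediaItemPropertyAssetURL and pull linear PCM sample buffers that a C++ DSP stage could then consume.

```objc
// Sketch: decode an iPod library track to 16-bit linear PCM with AVAssetReader.
// Error handling is trimmed; the DSP stage is only indicated by a comment.
#import <AVFoundation/AVFoundation.h>
#import <MediaPlayer/MediaPlayer.h>
#import <CoreMedia/CoreMedia.h>
#import <AudioToolbox/AudioToolbox.h>

static void ReadPCMFromMediaItem(MPMediaItem *item)
{
    NSURL *assetURL = [item valueForProperty:MPMediaItemPropertyAssetURL];
    AVURLAsset *asset = [AVURLAsset URLAssetWithURL:assetURL options:nil];

    NSDictionary *settings = [NSDictionary dictionaryWithObjectsAndKeys:
        [NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
        [NSNumber numberWithInt:16],                    AVLinearPCMBitDepthKey,
        [NSNumber numberWithBool:NO],                   AVLinearPCMIsFloatKey,
        [NSNumber numberWithBool:NO],                   AVLinearPCMIsBigEndianKey,
        [NSNumber numberWithBool:NO],                   AVLinearPCMIsNonInterleaved,
        nil];

    NSError *error = nil;
    AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:asset error:&error];
    AVAssetTrack *track = [[asset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
    AVAssetReaderTrackOutput *output =
        [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:track outputSettings:settings];
    [reader addOutput:output];
    [reader startReading];

    CMSampleBufferRef sampleBuffer = NULL;
    while ((sampleBuffer = [output copyNextSampleBuffer]) != NULL) {
        CMBlockBufferRef blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
        size_t length = CMBlockBufferGetDataLength(blockBuffer);
        NSLog(@"read %zu bytes of interleaved SInt16 PCM", length);
        // A filter/manipulator stage (e.g. the planned C++ code) would process the
        // samples here, or append them to a file with ExtAudioFile.
        CFRelease(sampleBuffer);
    }
}
```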