Is there a "simple" way to play linear PCM audio on the iPhone? - iphone

I'm making a media playback app that receives uncompressed linear PCM (über-raw) audio from a third-party decoder, but I'm going crazy when it comes to just playing back the simplest audio format I can imagine...
The final app will get the PCM data progressively as the source file streams from a server, so I looked at Audio Queues initially (since I can seemingly just give it bytes on the go), but that turned out to be a mindfudge - especially since Apple's own Audio Queue sample code seems to go off on a magical field trip of OpenGL and needless encapsulation...
I ended up using an AVAudioPlayer during (non-streaming) prototyping; I currently create a WAVE header and prepend it to the PCM data to get AVAudioPlayer to accept it (since it can't take raw PCM).
Obviously that's no use for streaming (the WAVE header states the total file length up front, so data can't be added on the go)...
..essentially: is there a way to just give iPhone OS some PCM bytes and have it play them?
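(For reference, the header workaround described above boils down to 44 bytes; a minimal sketch in C, assuming the canonical WAVE layout - note the total data length baked in at offsets 4 and 40, which is exactly why it can't stream:)

#include <stdint.h>
#include <string.h>

/* Writes the standard 44-byte canonical WAVE header into 'h'.
   All multi-byte fields are little-endian, which matches ARM here. */
void WriteWavHeader(uint8_t h[44], uint32_t dataLen, uint32_t sampleRate,
                    uint16_t channels, uint16_t bitsPerSample)
{
    uint32_t byteRate   = sampleRate * channels * bitsPerSample / 8;
    uint16_t blockAlign = channels * bitsPerSample / 8;
    uint32_t riffLen    = 36 + dataLen;   /* total length, fixed up front */
    uint32_t fmtLen     = 16;
    uint16_t pcm        = 1;              /* format tag: linear PCM */

    memcpy(h,      "RIFF", 4);      memcpy(h + 4,  &riffLen, 4);
    memcpy(h + 8,  "WAVEfmt ", 8);  memcpy(h + 16, &fmtLen, 4);
    memcpy(h + 20, &pcm, 2);        memcpy(h + 22, &channels, 2);
    memcpy(h + 24, &sampleRate, 4); memcpy(h + 28, &byteRate, 4);
    memcpy(h + 32, &blockAlign, 2); memcpy(h + 34, &bitsPerSample, 2);
    memcpy(h + 36, "data", 4);      memcpy(h + 40, &dataLen, 4);
}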

You should revisit the AudioQueue code. Once you strip away all the guff, you should have about two pages of code plus a callback in which you can supply the raw PCM. If you schedule the queue on your main run loop, the callback is also synchronous with your main loop, so you don't even have to worry about locking.
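To give a sense of the scale involved, here is a minimal sketch of what the stripped-down version can look like, assuming 16-bit mono PCM at 44.1 kHz; PullPCM() is a stand-in for however your decoder hands you bytes, not a real API:

#include <AudioToolbox/AudioToolbox.h>
#include <string.h>

#define NUM_BUFFERS 3
#define BUFFER_SIZE 8192   /* bytes per queue buffer */

/* Stand-in for your decoder: copies up to 'size' bytes of raw PCM
   into 'dest' and returns the number of bytes actually written. */
extern size_t PullPCM(void *dest, size_t size);

/* Called by the queue whenever a buffer has finished playing
   and can be refilled with fresh PCM. */
static void OutputCallback(void *userData, AudioQueueRef queue,
                           AudioQueueBufferRef buffer)
{
    size_t got = PullPCM(buffer->mAudioData, BUFFER_SIZE);
    if (got == 0) {                         /* no data yet: play silence */
        memset(buffer->mAudioData, 0, BUFFER_SIZE);
        got = BUFFER_SIZE;
    }
    buffer->mAudioDataByteSize = (UInt32)got;
    AudioQueueEnqueueBuffer(queue, buffer, 0, NULL);
}

void StartPCMPlayback(void)
{
    /* Describe the raw PCM: 44.1 kHz, 16-bit signed, mono, interleaved. */
    AudioStreamBasicDescription fmt = {0};
    fmt.mSampleRate       = 44100.0;
    fmt.mFormatID         = kAudioFormatLinearPCM;
    fmt.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger
                          | kLinearPCMFormatFlagIsPacked;
    fmt.mBitsPerChannel   = 16;
    fmt.mChannelsPerFrame = 1;
    fmt.mBytesPerFrame    = 2;
    fmt.mFramesPerPacket  = 1;
    fmt.mBytesPerPacket   = 2;

    AudioQueueRef queue;
    AudioQueueNewOutput(&fmt, OutputCallback, NULL,
                        CFRunLoopGetCurrent(), kCFRunLoopCommonModes,
                        0, &queue);

    /* Prime the queue by allocating and filling each buffer once. */
    for (int i = 0; i < NUM_BUFFERS; i++) {
        AudioQueueBufferRef buffer;
        AudioQueueAllocateBuffer(queue, BUFFER_SIZE, &buffer);
        OutputCallback(NULL, queue, buffer);
    }
    AudioQueueStart(queue, NULL);
}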

Related

How to insert into and overwrite an audio file in iOS

I'm developing an application that has an audio recorder. The user should be able to play the audio file and insert a recording into it, cut unwanted audio, and overwrite parts of the audio file.
Have you seen how to Insert , overwrite audio files -Audio Editing iphone? No one has answered that one...
At least suggest a way to implement this...
Thanks in advance...
What type of audio file are you talking about? You will almost certainly need to convert whatever you are using into PCM WAV data for this type of manipulation. Luckily, Core Audio, which others have pointed you towards, has some convenience methods for doing this.
Once you have the raw PCM data, you can insert by simply splicing other PCM data in at the desired point. You want to make sure you don't do something like write in the middle of a stereo frame, but beyond that, most simply formatted PCM data is pretty easy to manipulate. Think of it like a string - you can start with "Hello World" and change it to "Hello, Beautiful World" by simply inserting data in the middle.
Overwriting is the same principle.
Once you are done with the edits, you'll need to transform the PCM data back into whatever format you had saved in before.
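As a sketch of that insertion step - assuming 16-bit stereo PCM, so one frame is 4 bytes; InsertPCM() is just an illustrative helper, not a Core Audio call:

#include <stdlib.h>
#include <string.h>

#define BYTES_PER_FRAME 4   /* assumption: 16-bit samples, 2 channels */

/* Returns a newly malloc'd buffer containing 'src' with 'clip'
   inserted at byte offset 'at' (rounded down to a frame boundary
   so we never write in the middle of a stereo frame).
   Caller frees the result; *outLen receives the new length. */
unsigned char *InsertPCM(const unsigned char *src, size_t srcLen,
                         const unsigned char *clip, size_t clipLen,
                         size_t at, size_t *outLen)
{
    if (at > srcLen) at = srcLen;
    at -= at % BYTES_PER_FRAME;

    unsigned char *dst = malloc(srcLen + clipLen);
    if (!dst) return NULL;
    memcpy(dst, src, at);                              /* "Hello "     */
    memcpy(dst + at, clip, clipLen);                   /* "Beautiful " */
    memcpy(dst + at + clipLen, src + at, srcLen - at); /* "World"      */
    *outLen = srcLen + clipLen;
    return dst;
}

/* Overwriting is the same idea without growing the buffer:
   memcpy(dst + at, clip, clipLen) over the existing samples. */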
Have a look at Core Audio
Core Audio provides software interfaces for implementing audio features in applications you create for iOS and OS X. Under the hood, it handles all aspects of audio on each of these platforms. In iOS, Core Audio capabilities include recording, playback, sound effects, positioning, format conversion, and file stream parsing.

iOS Audio for a DirectSound programmer - what to use?

I'm a DirectSound programmer new to iOS. I want to implement the ability to play streaming multichannel audio, sometimes looping back to a specified point when the stream finishes playing (think of a song that has a little intro ditty that is played once, then the song loops indefinitely, skipping that intro).
With DirectSound and libvorbis, at least, I'd feed a chunk of the OGG data into the libvorbis decoder, it would spit out some PCM, and I'd fill the buffer and queue it up to play right after the current sound buffer finished, swapping between the two buffers.
I'm probably looking at using some kind of hardware-supported format on iOS, like AAC. What programming APIs should I be using that will allow me to do multichannel audio and loop points? Any input is appreciated, thanks!
The iOS AVAssetReader class can be used to read compressed audio file data into a PCM buffer. Either the Audio Queue API (simpler) or the RemoteIO Audio Unit (lower latency) can be used to play buffers of PCM data.
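If you'd rather stay at the C level, Extended Audio File Services performs the same decode-to-PCM step; a minimal sketch, assuming you want interleaved 16-bit stereo out and ignoring error handling:

#include <AudioToolbox/AudioToolbox.h>

/* Decodes a compressed audio file (AAC, MP3, ...) into interleaved
   16-bit PCM. Returns the number of frames read into 'pcm', which
   must have room for 'maxFrames' frames. */
UInt32 DecodeToPCM(CFURLRef url, SInt16 *pcm, UInt32 maxFrames)
{
    ExtAudioFileRef file;
    if (ExtAudioFileOpenURL(url, &file) != noErr) return 0;

    /* Ask Core Audio to hand us 44.1 kHz 16-bit stereo PCM,
       whatever format is actually stored in the file. */
    AudioStreamBasicDescription fmt = {0};
    fmt.mSampleRate       = 44100.0;
    fmt.mFormatID         = kAudioFormatLinearPCM;
    fmt.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger
                          | kLinearPCMFormatFlagIsPacked;
    fmt.mBitsPerChannel   = 16;
    fmt.mChannelsPerFrame = 2;
    fmt.mBytesPerFrame    = 4;
    fmt.mFramesPerPacket  = 1;
    fmt.mBytesPerPacket   = 4;
    ExtAudioFileSetProperty(file, kExtAudioFileProperty_ClientDataFormat,
                            sizeof(fmt), &fmt);

    AudioBufferList list;
    list.mNumberBuffers              = 1;
    list.mBuffers[0].mNumberChannels = 2;
    list.mBuffers[0].mDataByteSize   = maxFrames * fmt.mBytesPerFrame;
    list.mBuffers[0].mData           = pcm;

    UInt32 frames = maxFrames;
    ExtAudioFileRead(file, &frames, &list); /* frames = frames actually read */
    ExtAudioFileDispose(file);
    return frames;
}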

Play and render stream using audio queues

I'm currently playing a stream in my iOS app, but one feature we'd like to add is a visualization of the output waveform. I use an output audio queue to play the stream, but have found no way to read the output buffer. Can this be achieved using audio queues, or does it have to be done with a lower-level API?
To visualize, you presumably need PCM (uncompressed) data, so if you're pushing some compressed format into the queue like MP3 or AAC, then you never see the data you need. If you were working with PCM (maybe you're uncompressing it yourself with the Audio Converter APIs), then you could visualize before putting samples into the queue. But then the problem would be latency - you want to visualize samples when they play, not when they go into the queue.
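If you do get your hands on the PCM before enqueueing it, the per-buffer measurement itself is cheap; a sketch, assuming 16-bit samples (the latency caveat above still applies):

#include <math.h>
#include <stddef.h>
#include <stdint.h>

/* Computes the RMS level of a buffer of 16-bit PCM samples,
   normalized to 0.0 .. 1.0 - one data point for a level meter. */
float BufferRMS(const int16_t *samples, size_t count)
{
    if (count == 0) return 0.0f;
    double sum = 0.0;
    for (size_t i = 0; i < count; i++) {
        double s = samples[i] / 32768.0;   /* normalize to -1 .. 1 */
        sum += s * s;
    }
    return (float)sqrt(sum / count);
}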
For latency reasons alone, you probably want to be using audio units.
It turned out it can't be done with audio queues; I needed audio units to implement the streamer.

How can I record the audio output of the iPhone? (like the sounds of my app)

I want to record the sound of my iPhone app - so, for example, someone plays something on an iPhone instrument and afterwards you can hear it.
Is this possible without the microphone?
Do you mean an app you build yourself? If yes, you could just save the rendered waveform (maybe encoded/compressed to save space) for later playback. (See Extended Audio File Services; it can write the same AudioBufferList to a file that you would render to the RemoteIO Audio Unit when playing audio in your instrument app.)
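A minimal sketch of that route, assuming the PCM format passed in matches what you render and leaving out error checks; ExtAudioFileWriteAsync is used because it is safe to call from a render thread:

#include <AudioToolbox/AudioToolbox.h>

static ExtAudioFileRef gFile;

/* Call once before rendering starts: creates a CAF file that will
   receive the same PCM you hand to the RemoteIO unit. */
void StartCapture(CFURLRef url, const AudioStreamBasicDescription *pcmFmt)
{
    ExtAudioFileCreateWithURL(url, kAudioFileCAFType, pcmFmt, NULL,
                              kAudioFileFlags_EraseFile, &gFile);
    /* Prime async writing so later calls are safe on the render thread. */
    ExtAudioFileWriteAsync(gFile, 0, NULL);
}

/* Call from your render code with the AudioBufferList you just produced. */
void CaptureBuffer(UInt32 frames, const AudioBufferList *buffers)
{
    ExtAudioFileWriteAsync(gFile, frames, buffers);
}

void StopCapture(void)
{
    ExtAudioFileDispose(gFile);   /* flushes pending writes and closes */
}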
[Edit: removed comments on recording third-party app audio output ...]
With AVFoundation, which you are currently using, you're always working at the level of sound files. Your code never sees the actual audio signal, so you can't 'grab' the signal that your app generates when it is used. AVAudioPlayer does not provide any means of getting at the final signal either, and if you're using multiple instances of AVAudioPlayer to play multiple sounds at the same time, you also wouldn't be able to get at the mixed signal.
Alas, you probably need to use Core Audio, which is a much lower-level interface.
I'd like to suggest an alternative approach: instead of recording the audio output, why not record the sequence of actions, together with their timestamps, that leads to the audio being played? Write this sequence of events to a file and read it back to reproduce the 'performance' - it's a bit like your own MIDI sequencer :)
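A sketch of that event-log idea; the PerformanceEvent record is entirely hypothetical - its fields depend on what your instrument app considers an 'action':

#include <stdio.h>

/* One user action in the 'performance'. Both fields are hypothetical:
   use whatever identifies a sound and a moment in your app. */
typedef struct {
    double timestamp;   /* seconds since recording started */
    int    noteID;      /* which sound/note was triggered  */
} PerformanceEvent;

/* Append an event to the log as it happens... */
void RecordEvent(FILE *log, double now, int noteID)
{
    PerformanceEvent e = { now, noteID };
    fwrite(&e, sizeof(e), 1, log);
}

/* ...and on playback, read the events back and re-trigger each
   sound at e.timestamp, reproducing the performance. */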

Playback MP3 using RemoteIO and AudioUnits on iPhone... possible?

I want to play back an MP3 rather than an uncompressed file using RemoteIO / AudioUnit.
Using uncompressed files obviously takes far too much disk space (30 MB vs. 3 MB for the MP3).
Is this even possible? If so, can you provide a little code head start?
Thanks a million.
How low-level do you want to go? You could use the AudioHardware API:
err = AudioDeviceAddIOProc(deviceID, ioProc, self);
and in your ioProc fill the buffers yourself, listen for hardware changes, and deal with real-time threading and a lot of other low-level stuff.
I'm assuming you're already decoding the MP3 data with an AudioConverter and know how to do that.
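The ioProc you register has the following shape (note that AudioDeviceAddIOProc comes from the desktop Core Audio HAL); a minimal sketch that just fills the output with silence - replace the memset with your decoded PCM:

#include <CoreAudio/AudioHardware.h>
#include <string.h>

/* The HAL calls this on its own real-time thread; fill outOutputData. */
static OSStatus ioProc(AudioDeviceID inDevice,
                       const AudioTimeStamp *inNow,
                       const AudioBufferList *inInputData,
                       const AudioTimeStamp *inInputTime,
                       AudioBufferList *outOutputData,
                       const AudioTimeStamp *inOutputTime,
                       void *inClientData)
{
    for (UInt32 i = 0; i < outOutputData->mNumberBuffers; i++) {
        /* Silence for now - copy your decoded PCM here instead. */
        memset(outOutputData->mBuffers[i].mData, 0,
               outOutputData->mBuffers[i].mDataByteSize);
    }
    return noErr;
}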
Yes, it is possible, but it requires using multiple audio APIs and multiple threads.
Due to the real-time constraints of the Audio Unit buffer callback thread, you will have to do the conversion from the compressed file to raw PCM samples outside the Audio Unit callback. You could use Extended Audio File Services or AVAssetReader to do the conversion to uncompressed samples. However, you don't need to uncompress the entire file at once; a short uncompressed buffer of a fraction of a second will likely do, as long as you keep filling it far enough ahead of the Audio Unit callback's buffer consumption rate. This can be done in a separate timer-driven thread that monitors buffer consumption and decompresses just enough audio accordingly, perhaps into a ring buffer or circular FIFO (see the sketch below).
What you will end up with will be similar to a rewrite of the Audio Queue API layer, but with more customizability.
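Here is a sketch of such a circular FIFO, assuming a single producer (the decode thread) and a single consumer (the render callback); the power-of-two size makes the wraparound a cheap bit-mask, and production code would want real atomics rather than volatile:

#include <stdint.h>

#define RING_SIZE (1 << 16)   /* bytes; power of two for cheap wrap */

typedef struct {
    uint8_t           data[RING_SIZE];
    volatile uint32_t readPos;    /* advanced only by the consumer */
    volatile uint32_t writePos;   /* advanced only by the producer */
} RingBuffer;

/* Producer (timer-driven decode thread): push decompressed PCM.
   Returns bytes actually written (less than len if the ring is full). */
uint32_t RingWrite(RingBuffer *rb, const uint8_t *src, uint32_t len)
{
    uint32_t space = RING_SIZE - (rb->writePos - rb->readPos);
    if (len > space) len = space;
    for (uint32_t i = 0; i < len; i++)
        rb->data[(rb->writePos + i) & (RING_SIZE - 1)] = src[i];
    rb->writePos += len;
    return len;
}

/* Consumer (Audio Unit render callback): pull PCM without blocking.
   On underrun the caller should fill the remainder with silence. */
uint32_t RingRead(RingBuffer *rb, uint8_t *dst, uint32_t len)
{
    uint32_t avail = rb->writePos - rb->readPos;
    if (len > avail) len = avail;
    for (uint32_t i = 0; i < len; i++)
        dst[i] = rb->data[(rb->readPos + i) & (RING_SIZE - 1)];
    rb->readPos += len;
    return len;
}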