Correct approach to play multiple files simultaneously using Core Audio - iPhone

I've developed a model that plays up to 10 tracks, each with a number of clips, using AVFoundation.
Still, I'm not happy with the performance, and the sound does get corrupted.
I read the Core Audio documentation and tried out some samples.
Some play only one file, using a generator audio unit (AudioFilePlayer subtype).
The samples where two files play use a MultiChannelMixer and custom buffers to render the audio data.
Could I use a MultiChannelMixer and connect multiple generator nodes (AudioFilePlayer) to its input buses?
Or is the best way to render the data myself?
Thanks in advance!
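
For reference, here is a minimal sketch (not from the original post) of the graph layout the question proposes, using the standard AudioToolbox AUGraph calls: several AudioFilePlayer generator nodes feeding a MultiChannelMixer, which feeds the RemoteIO output. Error checking and the per-player file scheduling (kAudioUnitProperty_ScheduledFileIDs and friends) are omitted for brevity.

    #include <AudioToolbox/AudioToolbox.h>

    // Builds: playerCount AudioFilePlayer nodes -> MultiChannelMixer -> RemoteIO.
    static AUGraph BuildPlayerGraph(UInt32 playerCount, AUNode *outPlayerNodes)
    {
        AUGraph graph = NULL;
        NewAUGraph(&graph);

        AudioComponentDescription outputDesc = {
            .componentType = kAudioUnitType_Output,
            .componentSubType = kAudioUnitSubType_RemoteIO,
            .componentManufacturer = kAudioUnitManufacturer_Apple };
        AudioComponentDescription mixerDesc = {
            .componentType = kAudioUnitType_Mixer,
            .componentSubType = kAudioUnitSubType_MultiChannelMixer,
            .componentManufacturer = kAudioUnitManufacturer_Apple };
        AudioComponentDescription playerDesc = {
            .componentType = kAudioUnitType_Generator,
            .componentSubType = kAudioUnitSubType_AudioFilePlayer,
            .componentManufacturer = kAudioUnitManufacturer_Apple };

        AUNode outputNode, mixerNode;
        AUGraphAddNode(graph, &outputDesc, &outputNode);
        AUGraphAddNode(graph, &mixerDesc, &mixerNode);
        for (UInt32 i = 0; i < playerCount; i++)
            AUGraphAddNode(graph, &playerDesc, &outPlayerNodes[i]);

        AUGraphOpen(graph);

        // Tell the mixer how many input buses to expose, then wire one
        // file player into each bus and the mixer into the output unit.
        AudioUnit mixerUnit;
        AUGraphNodeInfo(graph, mixerNode, NULL, &mixerUnit);
        AudioUnitSetProperty(mixerUnit, kAudioUnitProperty_ElementCount,
                             kAudioUnitScope_Input, 0,
                             &playerCount, sizeof(playerCount));

        for (UInt32 i = 0; i < playerCount; i++)
            AUGraphConnectNodeInput(graph, outPlayerNodes[i], 0, mixerNode, i);
        AUGraphConnectNodeInput(graph, mixerNode, 0, outputNode, 0);

        AUGraphInitialize(graph);
        // Schedule a file region on each player unit (kAudioUnitProperty_ScheduledFileIDs,
        // kAudioUnitProperty_ScheduledFileRegion, ...) and then call AUGraphStart(graph).
        return graph;
    }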

Related

Play and render stream using audio queues

I'm currently playing a stream in my iOS app, but one feature we'd like to add is visualization of the output waveform. I use an output audio queue to play the stream, but have found no way to read the output buffer. Can this be achieved using audio queues, or should it be done with a lower-level API?
To visualize, you presumably need PCM (uncompressed) data, so if you're pushing some compressed format into the queue like MP3 or AAC, then you never see the data you need. If you were working with PCM (maybe you're uncompressing it yourself with the Audio Conversion APIs), then you could visualize before putting samples into the queue. But then the problem would be latency - you want to visualize samples when they play, not when they go into the queue.
For latency reasons alone, you probably want to be using audio units.
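As an illustration of the "visualize before enqueueing" idea, here is a hedged sketch assuming the queue is being fed 16-bit signed linear PCM; ComputeBufferRMS is an invented helper name, while AudioQueueBufferRef and AudioQueueEnqueueBuffer are the real AudioToolbox types and calls.

    #include <AudioToolbox/AudioToolbox.h>
    #include <math.h>

    // RMS level of one buffer of 16-bit signed PCM, computed just before the
    // buffer is handed to the queue.
    static float ComputeBufferRMS(AudioQueueBufferRef buffer)
    {
        const SInt16 *samples = (const SInt16 *)buffer->mAudioData;
        UInt32 count = buffer->mAudioDataByteSize / sizeof(SInt16);
        double sum = 0.0;
        for (UInt32 i = 0; i < count; i++) {
            double s = samples[i] / 32768.0;   // normalize to [-1, 1)
            sum += s * s;
        }
        return count ? (float)sqrt(sum / count) : 0.0f;
    }

    // ...after filling 'buffer' with PCM for the queue:
    // float level = ComputeBufferRMS(buffer);   // feed this to the visualizer
    // AudioQueueEnqueueBuffer(queue, buffer, 0, NULL);
    // Note the latency caveat above: the level reflects when the buffer is
    // enqueued, not when it is actually heard.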
It cannot actually be done with audio queues; to implement the streamer this way I would need audio units.

Capturing video while processing it through a shader on iPhone

I am trying to develop an iPhone app that processes/filters and records video.
I have two sample apps that have aspects of what I need and am trying to combine them.
1. AVCamDemo from the WWDC10 sample code package (Apple Developer ID required)
   This deals with capturing/recording video.
2. Brad Larson's ColorTracking sample app referenced here
   This deals with live processing of video using OpenGL ES.
I get stuck when trying to combine the two.
What I have been trying to do is use AVCaptureVideoDataOutput and the AVCaptureVideoDataOutputSampleBufferDelegate protocol to process/filter the video frames through OpenGL ES (as in 2), and at the same time somehow use AVCaptureMovieFileOutput to record the processed video (as in 1).
Is this approach possible? If so, how would I need to set up the connections within the AVCaptureSession?
Or do I need to use the AVCaptureVideoDataOutputSampleBufferDelegate protocol to process/filter the video and then recombine the individual frames back into a movie myself, without using AVCaptureMovieFileOutput to save the movie file?
Any suggestions for the best approach to accomplish this are much appreciated!
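
For reference, here is a hedged sketch (not taken from either sample app) of the second approach the question mentions: skip AVCaptureMovieFileOutput and append the filtered frames to a movie with AVAssetWriter. FilterPixelBuffer is a hypothetical stand-in for the OpenGL ES processing step, and writer, writerInput and adaptor are assumed to be properties created once as sketched in the comments.

    #import <AVFoundation/AVFoundation.h>

    // One-time setup (error handling omitted), e.g. when recording starts:
    // self.writer = [AVAssetWriter assetWriterWithURL:outputURL
    //                                        fileType:AVFileTypeQuickTimeMovie
    //                                           error:NULL];
    // NSDictionary *settings = @{ AVVideoCodecKey  : AVVideoCodecH264,
    //                             AVVideoWidthKey  : @640,
    //                             AVVideoHeightKey : @480 };
    // self.writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
    //                                                       outputSettings:settings];
    // self.writerInput.expectsMediaDataInRealTime = YES;
    // self.adaptor = [AVAssetWriterInputPixelBufferAdaptor
    //                   assetWriterInputPixelBufferAdaptorWithAssetWriterInput:self.writerInput
    //                                              sourcePixelBufferAttributes:nil];
    // [self.writer addInput:self.writerInput];

    - (void)captureOutput:(AVCaptureOutput *)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection
    {
        CVPixelBufferRef source = CMSampleBufferGetImageBuffer(sampleBuffer);
        CMTime time = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);

        // Hypothetical: run the frame through the OpenGL ES filter and get a
        // processed CVPixelBufferRef back.
        CVPixelBufferRef processed = FilterPixelBuffer(source);

        if (self.writer.status == AVAssetWriterStatusUnknown) {
            [self.writer startWriting];
            [self.writer startSessionAtSourceTime:time];
        }
        if (self.writerInput.isReadyForMoreMediaData) {
            [self.adaptor appendPixelBuffer:processed withPresentationTime:time];
        }
        // When capture stops: [self.writerInput markAsFinished];
        // [self.writer finishWriting];
    }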

Creating video file from images and audio( pre-recorded )

I have come across some sample code where a set of images is added to make a QTMovie.
I am targeting the OS X platform without any QuickTime frameworks.
I have a vague idea of creating a file with some extension, embedding the appropriate metadata, and finding a way to insert images and audio in the required format, so that once the file is created it can simply be played.
I am not sure which format/extension is best.
Pointers are much appreciated.
Without QuickTime (or an equivalent multimedia framework), what you describe is quite a lot of work. Ordinarily, you would use a video compression algorithm (such as H.264) to encode your images into video, and an audio compression algorithm (such as AAC) to encode your audio track. Then you would write these streams into a container file, such as an MPEG-4 file, which interleaves the streams for playback, contains metadata and indexes and so on. Then for playback, you parse the file, decode the video and audio data, and schedule them for playback, taking care to keep them in sync.
QuickTime does all this (and more) for you, and it would be an enormous undertaking to write it all yourself. Is there some reason why you are running on OS X but cannot use QuickTime?
Given the question is tagged with iPhone, why can't you just use QTKit?
If you had to do it from scratch, you could adopt a very simple solution whereby you store your image sequence as a set of JPEG files (but then you would require libjpeg; use raw RGB or PPM if you must) and the audio track as raw WAV data, and then have another file (a text file you define) that stores the timing information. You would simply stream out the audio and store the frame numbers of the images with their corresponding timecodes/sample offsets. That is a very simple solution that could be made to work without too much effort.
If you give us some more idea of what you are trying to achieve, we could offer some more specific suggestions.
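
To make the "timing file" idea concrete, here is a purely hypothetical sketch: one line per frame in the form "frameNumber sampleOffset", mapping each JPEG to the WAV sample at which it should be displayed. Both the format and the LoadTimingFile helper are invented here for illustration only.

    #include <stdio.h>

    /* Hypothetical manifest entry: which frame to show at which WAV sample offset. */
    typedef struct { long frame; long sampleOffset; } FrameTime;

    static int LoadTimingFile(const char *path, FrameTime *out, int maxEntries)
    {
        FILE *f = fopen(path, "r");
        if (!f) return -1;
        int count = 0;
        while (count < maxEntries &&
               fscanf(f, "%ld %ld", &out[count].frame, &out[count].sampleOffset) == 2)
            count++;
        fclose(f);
        return count;   /* number of frame/sample pairs read */
    }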
If you want to write a program to do this, you could use Xuggler in Java. It will allow you to save your final video in a format playable by almost any media player.
Start out by gaining an understanding of how video files (e.g. MP4, QuickTime) actually represent audio and video with this Overly Simplistic Guide to Internet Video.
Then, play around with the MediaTool tutorials. You can write programs that make raw images into video files (see this sample code). Finally, to write a program that makes audio and video that are in sync, see this tutorial; it generates a set of images, and makes some audio noise that is timed to change when a ball hits the edge of a box.
Hope that helps.
Art

Can the iPhone mix two sound files or build a custom equalizer?

Can the iPhone mix two sound files or build a custom equalizer?
I have studied this problem for weeks,
and it seems the iPhone SDK cannot be used to mix two or more sound files or to build a custom equalizer.
Does anyone have experience doing this?
Yes, you can. AVAudioPlayer can play multiple sounds, and you can control the volume of each. Or you can use Audio Units and have more control over the audio data.
aurioTouch is a good sample app for what you are thinking of.
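A hedged sketch of the AVAudioPlayer approach mentioned above: two players started at the same device time with independent volumes. The file names are placeholders, and playAtTime:/deviceCurrentTime require iOS 4.0 or later.

    #import <AVFoundation/AVFoundation.h>

    // Placeholder file names; both players run at once with independent volumes.
    NSURL *drumURL = [[NSBundle mainBundle] URLForResource:@"drums" withExtension:@"caf"];
    NSURL *bassURL = [[NSBundle mainBundle] URLForResource:@"bass"  withExtension:@"caf"];

    AVAudioPlayer *drums = [[AVAudioPlayer alloc] initWithContentsOfURL:drumURL error:NULL];
    AVAudioPlayer *bass  = [[AVAudioPlayer alloc] initWithContentsOfURL:bassURL error:NULL];
    [drums prepareToPlay];
    [bass prepareToPlay];

    drums.volume = 1.0f;   // per-player volume control
    bass.volume  = 0.6f;

    // Start both at the same device time so they stay in sync.
    NSTimeInterval startTime = drums.deviceCurrentTime + 0.1;
    [drums playAtTime:startTime];
    [bass  playAtTime:startTime];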
For simple playback of sound files you can use the AVAudioPlayer class introduced in the 2.2 SDK. It provides playback and volume controls for playing any audio file. As far as I am aware, there is no restriction on the number of sound files you can play on the iPhone. The only restriction is that you may only play one AAC- or MP3-compressed file at a time; the rest of the files must be either uncompressed or in the IMA4 format.
If your needs are more low-level (if you need to do DSP), you might want to look at Audio Queue Services or Audio Units - two Mac OS X audio processing APIs that are also available on the iPhone.