I am looking at Apple's 'aurioTouch' example for the iPhone and I would like to play an mp3 or wav instead of using the built-in mic. I am very new to the audio portion of iPhone programming, but I think I need to modify the SetupRemoteIO(...) function and replace the AudioComponent named 'comp' with a custom AudioComponent that plays a file. Basically I want the app to function exactly the same as the original, but with an audio file as the input instead of the mic.
You just need to convert your audio file to PCM data and then feed that data to the RemoteIO unit during the playback callback.
To read your audio file in, you will want to use ExtAudioFileOpenURL and ExtAudioFileRead. Also make sure you set the client data format with ExtAudioFileSetProperty (kExtAudioFileProperty_ClientDataFormat) so the reads are converted to your target PCM format (which should be packed, signed-integer PCM data).
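A minimal sketch of that read path, assuming an interleaved 16-bit stereo target format (the function name, the fixed 44.1 kHz rate, and reading the whole file in one call are just illustrative choices; error handling is omitted):

#include <AudioToolbox/ExtendedAudioFile.h>
#include <stdlib.h>

// Hypothetical loader: reads an entire file into interleaved 16-bit PCM.
static void LoadFileAsPCM(CFURLRef url, SInt16 **outSamples, UInt32 *outFrames)
{
    ExtAudioFileRef file = NULL;
    ExtAudioFileOpenURL(url, &file);

    // Ask ExtAudioFile to convert everything it reads to packed, signed 16-bit PCM.
    AudioStreamBasicDescription clientFormat = {0};
    clientFormat.mSampleRate       = 44100.0;              // assumed target rate
    clientFormat.mFormatID         = kAudioFormatLinearPCM;
    clientFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    clientFormat.mChannelsPerFrame = 2;
    clientFormat.mBitsPerChannel   = 16;
    clientFormat.mBytesPerFrame    = clientFormat.mChannelsPerFrame * sizeof(SInt16);
    clientFormat.mBytesPerPacket   = clientFormat.mBytesPerFrame;
    clientFormat.mFramesPerPacket  = 1;
    ExtAudioFileSetProperty(file, kExtAudioFileProperty_ClientDataFormat,
                            sizeof(clientFormat), &clientFormat);

    // Frame count is reported in the file's native sample rate; assume it matches the target here.
    SInt64 fileFrames = 0;
    UInt32 size = sizeof(fileFrames);
    ExtAudioFileGetProperty(file, kExtAudioFileProperty_FileLengthFrames, &size, &fileFrames);

    *outSamples = malloc((size_t)fileFrames * clientFormat.mBytesPerFrame);

    AudioBufferList bufList;
    bufList.mNumberBuffers = 1;
    bufList.mBuffers[0].mNumberChannels = clientFormat.mChannelsPerFrame;
    bufList.mBuffers[0].mDataByteSize   = (UInt32)(fileFrames * clientFormat.mBytesPerFrame);
    bufList.mBuffers[0].mData           = *outSamples;

    UInt32 frames = (UInt32)fileFrames;
    ExtAudioFileRead(file, &frames, &bufList);   // frames now holds the frames actually read
    *outFrames = frames;

    ExtAudioFileDispose(file);
}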
Playback simply involves responding to the RemoteIO callback (which should be identical to aurioTouch's example) and feeding it the PCM data you loaded up.
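Roughly, the render callback then copies out of that PCM buffer instead of pulling from the mic. A hedged sketch, assuming the same interleaved 16-bit stereo format as above and a hypothetical FilePlayer struct holding the loaded samples:

#include <AudioToolbox/AudioToolbox.h>

// Hypothetical playback state filled in by the loader above.
typedef struct {
    SInt16 *samples;      // interleaved stereo PCM
    UInt32  totalFrames;
    UInt32  playhead;     // current frame position
} FilePlayer;

static OSStatus PlaybackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData)
{
    FilePlayer *player = (FilePlayer *)inRefCon;
    SInt16 *out = (SInt16 *)ioData->mBuffers[0].mData;

    for (UInt32 frame = 0; frame < inNumberFrames; frame++) {
        if (player->playhead < player->totalFrames) {
            out[2 * frame]     = player->samples[2 * player->playhead];      // left
            out[2 * frame + 1] = player->samples[2 * player->playhead + 1];  // right
            player->playhead++;
        } else {
            out[2 * frame] = out[2 * frame + 1] = 0;   // past end of file: silence
        }
    }
    return noErr;
}

Note that aurioTouch configures its own stream format on the RemoteIO unit, so you would also need to set the unit's stream format (kAudioUnitProperty_StreamFormat) to match whatever PCM layout you load.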
The only other tricky part is that loading an entire mp3 file as PCM can take up a ton of memory. You might have to write a loading thread so that you can stay under your memory requirements by only loading your relevant chunk of the mp3.
When decoding the audio data of an mp3 file I fetched, rendering it with my OfflineContext, and exporting it back to a .wav file, the sound plays in slow motion and at a different pitch. Is it because the sample rates of my mp3 file and the OfflineContext are different? If so, how can I export the mp3 file at a different sample rate without changing the pitch?
Edit:
I run decodeAudioData with the OfflineAudioContext that I use for the rendering: offlineContext.decodeAudioData(this.arrayBuffer). The sample rate of offlineContext is 48000, while the sample rate of my audioContext (used for normal playback, which works well) is 41000.
When creating the WAV file, the same sample rate as the offline context should be set in the WAV file header. For example, a WAV file with a sample rate of 44100 in the header whose data chunk was rendered at 48000 will play back in "slow motion" in iTunes.
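For reference, the sample rate lives in the "fmt " chunk of the WAV header, and the derived byte-rate/block-align fields have to agree with it. A sketch of that chunk's layout (standard RIFF/WAVE, shown here in C):

#include <stdint.h>

// "fmt " chunk of a canonical PCM WAV header. sampleRate must equal the rate
// the audio was actually rendered at (48000 for the OfflineAudioContext above);
// byteRate and blockAlign are derived from it. A mismatch makes players run
// the audio too slow or too fast.
typedef struct {
    char     chunkId[4];      // "fmt "
    uint32_t chunkSize;       // 16 for PCM
    uint16_t audioFormat;     // 1 = linear PCM
    uint16_t numChannels;     // e.g. 2
    uint32_t sampleRate;      // e.g. 48000  <-- must match the rendered data
    uint32_t byteRate;        // sampleRate * numChannels * bitsPerSample / 8
    uint16_t blockAlign;      // numChannels * bitsPerSample / 8
    uint16_t bitsPerSample;   // e.g. 16
} WavFmtChunk;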
Can I somehow use a loaded audio tag to create an audio buffer?
I know that the web audio API has a decodeAudioData method that can be used to create an audio buffer, but it does not accept an audio tag.
How can I take an audio tag and use it to create an audio buffer that can be played by a source node?
Sadly, it doesn't look like the audio tag can be loaded into a buffer. So far, the only ways to create an AudioBuffer are to decode data fetched with XMLHttpRequest (via decodeAudioData) or to create an empty buffer with createBuffer.
I am working on media recording on iOS. I was able to record both audio and video using UIImagePickerController. For my requirements I want to record only audio. Is that possible with UIImagePickerController, or do I need to consider other methods?
thanks.
UIImagePickerController cannot be used to record audio. Instead you should take a look at the SpeakHere sample code from Apple.
Also check "Audio Queue - Recording to a compressed audio format" for how to record to a compressed audio format and conserve disk space.
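The gist of that approach, as a hedged sketch: describe the target format as a compressed one (AAC here; IMA4 is another common choice) and let the input queue convert from the mic for you. The buffer handling and AudioFileWritePackets calls follow the usual SpeakHere pattern and are only stubbed:

#include <AudioToolbox/AudioToolbox.h>

// SpeakHere-style input callback: write the filled buffer to the file, then re-enqueue it.
static void InputCallback(void *inUserData,
                          AudioQueueRef inQueue,
                          AudioQueueBufferRef inBuffer,
                          const AudioTimeStamp *inStartTime,
                          UInt32 inNumPackets,
                          const AudioStreamPacketDescription *inPacketDesc)
{
    // ... AudioFileWritePackets(...) then AudioQueueEnqueueBuffer(inQueue, inBuffer, 0, NULL) ...
}

static AudioQueueRef StartCompressedRecording(void *userData)
{
    // Describe the target format as AAC; the queue converts from the mic's PCM for us.
    AudioStreamBasicDescription format = {0};
    format.mSampleRate       = 44100.0;
    format.mFormatID         = kAudioFormatMPEG4AAC;
    format.mChannelsPerFrame = 1;

    // Let the system fill in the remaining fields for this format.
    UInt32 size = sizeof(format);
    AudioFormatGetProperty(kAudioFormatProperty_FormatInfo, 0, NULL, &size, &format);

    AudioQueueRef queue = NULL;
    AudioQueueNewInput(&format, InputCallback, userData,
                       NULL, kCFRunLoopCommonModes, 0, &queue);

    // Allocate and enqueue a few AudioQueueBuffers here, then:
    AudioQueueStart(queue, NULL);
    return queue;
}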
I am looking to record and save a music/song file with one or more audio tracks, say a maximum of two tracks, playing simultaneously while I record my vocals via the headset or the microphone. The finished product will be a single song file (mp3 or another format).
Also, the code should have the ability to filter out outside noise/interference and add basic effects.
Appreciate any and all Xcode help!
I have done the same thing using AVAudioSessionCategoryPlayAndRecord.
In my code I played the karaoke file in MPMoviePlayer while taking input from the mic at the same time.
The output is the audio from MPMoviePlayer, and it is also picked up as part of the input along with the input from the mic.
I save this input to a CAF file, and upon finishing, the product should be a single file.
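For what it's worth, the session setup for that looks roughly like this with the C Audio Session API that was current at the time (now superseded by AVAudioSession); this is only the category configuration, not the recording or mixing itself:

#include <AudioToolbox/AudioToolbox.h>

static void ConfigurePlayAndRecordSession(void)
{
    AudioSessionInitialize(NULL, NULL, NULL, NULL);

    // Play-and-record so the MPMoviePlayer output and the mic input can be active at once.
    UInt32 category = kAudioSessionCategory_PlayAndRecord;
    AudioSessionSetProperty(kAudioSessionProperty_AudioCategory,
                            sizeof(category), &category);

    // Optional: allow our session to mix with other audio instead of silencing it.
    UInt32 mix = 1;
    AudioSessionSetProperty(kAudioSessionProperty_OverrideCategoryMixWithOthers,
                            sizeof(mix), &mix);

    AudioSessionSetActive(true);
}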
What I'm doing:
I need to play audio and video files that are not supported by Apple on iPhone/iPad, for example mkv/mka files, which may contain several audio streams.
I'm using libffmpeg to find audio and video streams in media file.
Video is being decoded with avcodec_decode_video2 and audio with avcodec_decode_audio3.
The results for each function are as follows:
avcodec_decode_video2 - fills an AVFrame structure that encapsulates information about the decoded video frame from the packet; specifically, it has a data field which is a pointer to the picture/channel planes.
avcodec_decode_audio3 - writes samples into an int16_t * buffer, which I guess is the raw audio data.
So basically I've done all this and am successfully decoding the media content.
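Roughly, the decode loop described above looks like this with the legacy libavcodec API (opening the file and finding the stream indices are omitted; the buffer names are placeholders):

#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>

// Decode loop over an already-opened AVFormatContext with known stream indices.
static void DecodeLoop(AVFormatContext *fmt, AVCodecContext *videoCtx,
                       AVCodecContext *audioCtx, int videoStream, int audioStream)
{
    AVPacket packet;
    AVFrame *frame = avcodec_alloc_frame();
    static int16_t samples[AVCODEC_MAX_AUDIO_FRAME_SIZE / 2];

    while (av_read_frame(fmt, &packet) >= 0) {
        if (packet.stream_index == videoStream) {
            int gotPicture = 0;
            avcodec_decode_video2(videoCtx, frame, &gotPicture, &packet);
            if (gotPicture) {
                // frame->data[0..2] are the Y/U/V planes, frame->linesize[] their strides
            }
        } else if (packet.stream_index == audioStream) {
            int sampleBytes = sizeof(samples);
            avcodec_decode_audio3(audioCtx, samples, &sampleBytes, &packet);
            // samples now holds sampleBytes bytes of interleaved int16_t PCM
        }
        av_free_packet(&packet);
    }
    av_free(frame);
}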
What I have to do:
I have to play the decoded audio and video using Apple's services. The playback needs to support mixing of audio channels while playing video; for example, an mkv file may contain two audio streams and one video stream. So I would like to know which services would be the appropriate choice for me. My research suggests that the AudioQueue service might be useful for audio playback, and probably AVFoundation for video.
Please help me find the right technology for my case, i.e. video playback plus audio playback with possible audio channel mixing.
You are on the right path. If you are only playing audio (not recording at all), I would use Audio Queues; they will do the mixing for you. If you are recording, you should use Audio Units; take a look at the MixerHost example project from Apple. For video I recommend using OpenGL. Assuming the image buffer is in YUV420, you can render it with a simple two-pass shader setup; I believe there is an Apple example project showing how to do this. In any case, you could render any pixel format using OpenGL with a shader that converts the pixel format to RGBA. Hope this helps.
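For the audio side, a hedged sketch of the Audio Queue playback setup (interleaved 16-bit stereo PCM assumed, matching what avcodec_decode_audio3 produces; the callback that copies your decoded/mixed PCM into the queue buffers is application-specific and only stubbed here):

#include <AudioToolbox/AudioToolbox.h>

#define kNumBuffers 3

// Hypothetical output callback: fill inBuffer with decoded PCM, then re-enqueue it.
static void OutputCallback(void *inUserData, AudioQueueRef inQueue, AudioQueueBufferRef inBuffer)
{
    // ... copy up to inBuffer->mAudioDataBytesCapacity bytes of PCM into inBuffer->mAudioData,
    //     set inBuffer->mAudioDataByteSize, then:
    AudioQueueEnqueueBuffer(inQueue, inBuffer, 0, NULL);
}

static AudioQueueRef StartPCMPlayback(void *userData)
{
    // Interleaved 16-bit stereo PCM.
    AudioStreamBasicDescription format = {0};
    format.mSampleRate       = 44100.0;
    format.mFormatID         = kAudioFormatLinearPCM;
    format.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    format.mChannelsPerFrame = 2;
    format.mBitsPerChannel   = 16;
    format.mBytesPerFrame    = 4;
    format.mBytesPerPacket   = 4;
    format.mFramesPerPacket  = 1;

    AudioQueueRef queue = NULL;
    AudioQueueNewOutput(&format, OutputCallback, userData, NULL, kCFRunLoopCommonModes, 0, &queue);

    for (int i = 0; i < kNumBuffers; i++) {
        AudioQueueBufferRef buffer = NULL;
        AudioQueueAllocateBuffer(queue, 16 * 1024, &buffer);
        OutputCallback(userData, queue, buffer);   // prime each buffer before starting
    }
    AudioQueueStart(queue, NULL);
    return queue;
}

One queue per audio track also works; the system mixes the output of multiple simultaneously running queues for you.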