I am working on a music player for the iPhone, and I would like to crossfade between songs. I know this is possible because My DJ does it... I can't find any documentation and don't even know where to begin. Can anyone give me a starting point or example code?
Thanks.
One option is to decode both songs into raw (linear PCM) samples, then build your own mixer function or audio unit where the mix ratio of each sample comes from a ramp or a custom mix-curve function. Feed the output to the RemoteIO audio unit.
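A minimal sketch of the per-buffer mix, assuming both tracks are already decoded to interleaved 16-bit PCM at the same sample rate and channel count (the function name is made up):

    #include <stdint.h>
    #include <stddef.h>

    // Hypothetical mixer: blends one buffer of the outgoing track
    // with the same-sized buffer of the incoming track using a
    // linear ramp. Swap in an equal-power curve (sin/cos) if the
    // middle of the fade sounds too quiet.
    void crossfade_mix(const int16_t *outgoing, const int16_t *incoming,
                       int16_t *dest, size_t sampleCount)
    {
        for (size_t i = 0; i < sampleCount; i++) {
            float t = (float)i / (float)sampleCount;   // 0.0 -> 1.0
            float mixed = (1.0f - t) * outgoing[i] + t * incoming[i];

            // Clamp to the int16 range before writing back.
            if (mixed > 32767.0f)  mixed = 32767.0f;
            if (mixed < -32768.0f) mixed = -32768.0f;
            dest[i] = (int16_t)mixed;
        }
    }

The RemoteIO render callback would then pull its output samples from dest.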
Related
The aurioTouch sample takes its input from the mic, but I want to show frequency waves while playing sound files instead.
Is this possible with aurioTouch, and if so, how?
I am creating a sound-based application where I need to generate different oscillations for a sound wave. One part I need to achieve is a discontinuous or paused beep. I am trying to do this by varying a sine wave using Core Audio, but I'm not getting the desired output.
Basically I need to generate variable sound-oscillation patterns like the Dog Whistler app.
Can anyone point me in the right direction?
Have a look at my other answer about how to loop background music. You can create an audio file with a 0.5-second beep and 1 second of silence and loop that file indefinitely via AVAudioPlayer, pausing the beep sequence via [player pause];.
This will work if you have predefined beep intervals/oscillations, but not if you need 100% user customization. You can also look at sound-effect techniques to adjust the pitch and frequency.
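A minimal sketch of the looping part, assuming a pre-rendered file named beep_pattern.caf in the app bundle (the file name is hypothetical):

    #import <AVFoundation/AVFoundation.h>

    // beep_pattern.caf is a hypothetical pre-rendered
    // "0.5 s beep + 1 s silence" file in the bundle.
    NSURL *url = [[NSBundle mainBundle] URLForResource:@"beep_pattern"
                                         withExtension:@"caf"];
    NSError *error = nil;
    AVAudioPlayer *player =
        [[AVAudioPlayer alloc] initWithContentsOfURL:url error:&error];
    player.numberOfLoops = -1;   // -1 loops indefinitely
    [player prepareToPlay];
    [player play];

    // Later, to suspend the beep sequence:
    [player pause];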
What I'm doing:
I need to play audio and video files that are not supported by Apple on iPhone/iPad, for example mkv/mka files, which may contain several audio channels.
I'm using libffmpeg to find the audio and video streams in the media file.
Video is decoded with avcodec_decode_video2 and audio with avcodec_decode_audio3.
The return values for each function are as follows:
avcodec_decode_video2 - fills an AVFrame structure which encapsulates information about the video frame from the packet; specifically, it has a data field which is a pointer to the picture/channel planes.
avcodec_decode_audio3 - fills a buffer of int16_t samples, which I guess is the raw audio data.
So basically I've done all this and successfully decoding the media content.
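A rough sketch of that decode loop, for reference (the codec contexts and stream indices are placeholders for the setup code, which is omitted):

    #include <libavformat/avformat.h>

    void decode_loop(AVFormatContext *formatCtx,
                     AVCodecContext *videoCtx, AVCodecContext *audioCtx,
                     int videoStreamIndex, int audioStreamIndex)
    {
        AVPacket packet;
        AVFrame *frame = avcodec_alloc_frame();
        int16_t *audioBuf = av_malloc(AVCODEC_MAX_AUDIO_FRAME_SIZE);

        while (av_read_frame(formatCtx, &packet) >= 0) {
            if (packet.stream_index == videoStreamIndex) {
                int gotPicture = 0;
                avcodec_decode_video2(videoCtx, frame, &gotPicture, &packet);
                if (gotPicture) {
                    // frame->data[0..2] now point at the picture
                    // planes (Y/U/V for yuv420p) -- hand to renderer.
                }
            } else if (packet.stream_index == audioStreamIndex) {
                int bufSize = AVCODEC_MAX_AUDIO_FRAME_SIZE;
                int used = avcodec_decode_audio3(audioCtx, audioBuf,
                                                 &bufSize, &packet);
                if (used > 0 && bufSize > 0) {
                    // audioBuf holds bufSize bytes of interleaved
                    // int16 PCM -- enqueue them for playback.
                }
            }
            av_free_packet(&packet);
        }
    }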
What I have to do:
I have to play the audio and video accordingly using Apple's services. The playback I need to perform should support mixing of audio channels while playing video, i.e. say an mkv file contains two audio channels and a video channel. So I would like to know which service would be the appropriate choice for me. My research showed that the AudioQueue service might be useful for audio playback, and probably AVFoundation for video.
Please help me find the right technology for my case, i.e. video playback + audio playback with possible audio-channel mixing.
You are on the right path. If you are only playing audio (not recording at all) then I would use AudioQueues; it will do the mixing for you. If you are recording then you should use AudioUnits. Take a look at the MixerHost example project from Apple. For video I recommend using OpenGL. Assuming the image buffer is in YUV420, you can render it with a simple two-pass shader setup. I believe there is an Apple example project showing how to do this. In any case you could render any pixel format using OpenGL and a shader to convert the pixel format to RGBA. Hope this helps.
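To make the AudioQueue side concrete, here is a minimal sketch of creating an output queue for the PCM that avcodec_decode_audio3 produces (the 44.1 kHz stereo format, the buffer size, and the placeholder callback are all assumptions):

    #include <AudioToolbox/AudioToolbox.h>
    #include <string.h>

    // Placeholder callback: a real player would copy the next chunk
    // of decoded PCM into the buffer; this one enqueues silence.
    static void fillBufferCallback(void *userData, AudioQueueRef queue,
                                   AudioQueueBufferRef buffer)
    {
        memset(buffer->mAudioData, 0, buffer->mAudioDataBytesCapacity);
        buffer->mAudioDataByteSize = buffer->mAudioDataBytesCapacity;
        AudioQueueEnqueueBuffer(queue, buffer, 0, NULL);
    }

    void startPlayback(void)
    {
        // Describe the decoded PCM: interleaved signed 16-bit
        // samples; 44.1 kHz stereo is assumed here.
        AudioStreamBasicDescription fmt = {0};
        fmt.mSampleRate       = 44100.0;
        fmt.mFormatID         = kAudioFormatLinearPCM;
        fmt.mFormatFlags      = kAudioFormatFlagIsSignedInteger |
                                kAudioFormatFlagIsPacked;
        fmt.mChannelsPerFrame = 2;
        fmt.mBitsPerChannel   = 16;
        fmt.mBytesPerFrame    = fmt.mChannelsPerFrame * sizeof(SInt16);
        fmt.mFramesPerPacket  = 1;
        fmt.mBytesPerPacket   = fmt.mBytesPerFrame;

        AudioQueueRef queue;
        AudioQueueNewOutput(&fmt, fillBufferCallback, NULL,
                            NULL, NULL, 0, &queue);

        // Allocate and prime a few buffers, then start the queue.
        for (int i = 0; i < 3; i++) {
            AudioQueueBufferRef buffer;
            AudioQueueAllocateBuffer(queue, 16 * 1024, &buffer);
            fillBufferCallback(NULL, queue, buffer);
        }
        AudioQueueStart(queue, NULL);
    }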
The scenario is that I start recording in my iPhone app, maybe by using AVAudioRecorder, and when I get some input sound above a certain threshold I will do something. Is it possible to process the input audio while recording?
Is there any way to know the input loudness on the iPhone, i.e. the level of the loudness in numbers, or some other measure for it?
Look at AVFoundation's AVAudioRecorder: enable its meteringEnabled property, then poll updateMeters and averagePowerForChannel: (there are no callbacks, so use a timer) and compare the returned level against your threshold.
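A minimal sketch of that polling loop (self.recorder is assumed to be an already-configured AVAudioRecorder, and the -20 dB threshold is arbitrary):

    // Assumes self.recorder is an AVAudioRecorder that has already
    // been created with suitable record settings.
    - (void)startMonitoring {
        self.recorder.meteringEnabled = YES;
        [self.recorder record];
        [NSTimer scheduledTimerWithTimeInterval:0.1
                                         target:self
                                       selector:@selector(checkLevel:)
                                       userInfo:nil
                                        repeats:YES];
    }

    - (void)checkLevel:(NSTimer *)timer {
        [self.recorder updateMeters];
        // averagePowerForChannel: reports dBFS: 0 is full scale,
        // more negative means quieter.
        float power = [self.recorder averagePowerForChannel:0];
        if (power > -20.0f) {   // arbitrary threshold
            // input is above the threshold -- react here
        }
    }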
I'm recording audio using AVAudioRecorder and after recording I want to draw a waveform of the recorded audio. I've found a nice article about waveform drawing, but first I need the frequencies at a certain sample rate as floats, right?
Do I need to do an FFT on the audio, and how do I do that? Is AVAudioRecorder even the right API for this purpose, or do I need to use some lower-level API to record the audio?
Hope someone can help me.
AVAudioRecorder doesn't look like it's much use for this (although it may be possible). You need to look at recording with AudioQueue.
The 'waveform' of the audio isn't the frequencies. The waveform is the value of the samples that make up the audio (you can get these when recording with an AudioQueue). An FFT converts the audio samples from the time domain to the frequency domain - if you draw the output of the FFT you will have a spectrogram instead of a waveform.
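For illustration, an AudioQueue input callback that pulls out those sample values might look like this (assuming the queue was created for 16-bit linear PCM; storing the values for drawing is left out):

    #include <AudioToolbox/AudioToolbox.h>
    #include <math.h>
    #include <stdio.h>

    static void inputCallback(void *userData, AudioQueueRef queue,
                              AudioQueueBufferRef buffer,
                              const AudioTimeStamp *startTime,
                              UInt32 numPackets,
                              const AudioStreamPacketDescription *packetDesc)
    {
        const SInt16 *samples = buffer->mAudioData;
        UInt32 count = buffer->mAudioDataByteSize / sizeof(SInt16);

        // These normalized values are the waveform; a waveform view
        // would typically store min/max pairs per pixel column.
        float peak = 0.0f;
        for (UInt32 i = 0; i < count; i++) {
            float value = samples[i] / 32768.0f;   // -1.0 .. 1.0
            if (fabsf(value) > peak) peak = fabsf(value);
        }
        printf("buffer peak: %f\n", peak);

        // Hand the buffer back so recording continues.
        AudioQueueEnqueueBuffer(queue, buffer, 0, NULL);
    }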