FMOD on non-playing audio

Hey, is there any way to get the audio spectrum of a section of a song using FMOD if it is not playing?
Can I render a full song waveform using FMOD (+opengl/openframeworks/etc.) before the song starts playing?

Yes.
Yes, but you will have to do your own spectrum analysis on the time-domain wave data.
You can get the wave data from the FMOD::Sound using Sound::lock. To do this you would have to create the sound with the FMOD_CREATESAMPLE mode, which means the entire song will be decompressed into memory. You can render the waveform from this data and also run your own spectrum analysis on it. FMOD's built-in getSpectrum function only works on snapshots of the currently playing data in a Channel or ChannelGroup.
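A minimal sketch of that approach, assuming the FMOD Ex C++ API (error checking omitted; the filename is just a placeholder):

```cpp
#include "fmod.hpp"

int main()
{
    FMOD::System *system = nullptr;
    FMOD::System_Create(&system);
    system->init(32, FMOD_INIT_NORMAL, nullptr);

    // FMOD_CREATESAMPLE decompresses the whole song into memory up front.
    FMOD::Sound *sound = nullptr;
    system->createSound("song.mp3", FMOD_DEFAULT | FMOD_CREATESAMPLE,
                        nullptr, &sound);

    unsigned int lengthBytes = 0;
    sound->getLength(&lengthBytes, FMOD_TIMEUNIT_PCMBYTES);

    // Lock the whole buffer to get a pointer to the raw PCM samples.
    void *ptr1 = nullptr, *ptr2 = nullptr;
    unsigned int len1 = 0, len2 = 0;
    sound->lock(0, lengthBytes, &ptr1, &ptr2, &len1, &len2);

    // ptr1/len1 now describe the wave data: draw the waveform from it,
    // or feed windows of it to your own FFT for offline spectrum analysis.

    sound->unlock(ptr1, ptr2, len1, len2);
    sound->release();
    system->release();
    return 0;
}
```

Call Sound::getFormat first if you need the channel count and bit depth to interpret the locked buffer correctly.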

Related

How to record mic input and an audio track on iPhone at the same time

I am looking to record and save a music/song file with one or more audio tracks (say a maximum of two) playing simultaneously while I record my vocals via the headset or the microphone. The finished product will be a single song file (mp3 or other format).
Also, the code should be able to filter out outside noise/interference and add basic effects.
Appreciate any and all Xcode help!
I have done the same thing using AVAudioSessionCategoryPlayAndRecord.
In my code I play the karaoke file in MPMoviePlayer while taking input from the mic at the same time.
The output is the audio from MPMoviePlayer, which gets picked up as input again along with the input from the mic.
I save this combined input to a caf file, so on finishing the product is a single file.
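For reference, the session setup amounts to something like this (a sketch using the old C-level Audio Session API from AudioToolbox, which was current at the time; on modern iOS you would configure AVAudioSession instead):

```cpp
#include <AudioToolbox/AudioToolbox.h>

void setUpPlayAndRecordSession(void)
{
    // Initialise the audio session and pick the play-and-record category,
    // so playback (the karaoke track) and mic input are active together.
    AudioSessionInitialize(NULL, NULL, NULL, NULL);

    UInt32 category = kAudioSessionCategory_PlayAndRecord;
    AudioSessionSetProperty(kAudioSessionProperty_AudioCategory,
                            sizeof(category), &category);

    AudioSessionSetActive(true);
}
```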

Right choice to play audio and video content

What I'm doing :
I need to play audio and video files that are not supported by Apple on iPhone/iPad, for example mkv/mka files, which may contain several audio channels.
I'm using libffmpeg to find audio and video streams in media file.
Video is decoded with avcodec_decode_video2 and audio with avcodec_decode_audio3.
The results of each function are as follows:
avcodec_decode_video2 - fills an AVFrame structure that encapsulates the decoded video frame from the packet; specifically, it has a data field which is a pointer to the picture planes.
avcodec_decode_audio3 - produces samples of type int16_t *, which I guess is the raw audio data.
So basically I've done all this and am successfully decoding the media content.
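For context, my decode loop looks roughly like this (a sketch against the old ffmpeg API named above; the format/codec contexts and stream indices come from the usual setup code):

```cpp
extern "C" {
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
}

void decodeLoop(AVFormatContext *formatCtx,
                AVCodecContext *videoCodecCtx, AVCodecContext *audioCodecCtx,
                int videoStreamIndex, int audioStreamIndex)
{
    AVFrame *frame = avcodec_alloc_frame();
    int16_t *samples = (int16_t *)av_malloc(AVCODEC_MAX_AUDIO_FRAME_SIZE);

    AVPacket pkt;
    while (av_read_frame(formatCtx, &pkt) >= 0) {
        if (pkt.stream_index == videoStreamIndex) {
            int gotFrame = 0;
            avcodec_decode_video2(videoCodecCtx, frame, &gotFrame, &pkt);
            if (gotFrame) {
                // frame->data[0..2] are the Y/U/V planes,
                // frame->linesize[] their strides: hand these to the renderer.
            }
        } else if (pkt.stream_index == audioStreamIndex) {
            int outBytes = AVCODEC_MAX_AUDIO_FRAME_SIZE;
            avcodec_decode_audio3(audioCodecCtx, samples, &outBytes, &pkt);
            // 'samples' now holds outBytes bytes of interleaved int16 PCM:
            // hand these to the audio playback/mixing service.
        }
        av_free_packet(&pkt);
    }

    av_free(samples);
    av_free(frame);
}
```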
What I have to do :
I have to play the audio and video accordingly using Apple's services. The playback I need to perform should support mixing of audio channels while playing video, i.e. say an mkv file contains two audio channels and a video channel. So I would like to know which service is the appropriate choice for me. My research showed that the AudioQueue service might be useful for audio playback, and probably AVFoundation for video.
Please help me find the right technology for my case, i.e. video playback + audio playback with possible mixing of audio channels.
You are on the right path. If you are only playing audio (not recording at all) then I would use AudioQueues; they will do the mixing for you. If you are recording then you should use AudioUnits; take a look at the MixerHost example project from Apple. For video I recommend using OpenGL. Assuming the image buffer is in YUV420, you can render it with a simple two-pass shader setup. I believe there is an Apple example project showing how to do this. In any case you could render any pixel format using OpenGL and a shader to convert the pixel format to RGBA. Hope this helps.
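To illustrate the conversion step, here is a minimal fragment shader for YUV-to-RGB, written as a GLSL source string compiled from C++ (the texture names and the BT.601 coefficients are my assumptions; the Y, U and V planes are uploaded as three single-channel textures):

```cpp
// Fragment shader: samples the three YUV420p planes and converts to RGB (BT.601).
const char *kYuvToRgbFragmentShader = R"(
    uniform sampler2D yTexture;  // full-resolution luma plane
    uniform sampler2D uTexture;  // half-resolution chroma planes
    uniform sampler2D vTexture;
    varying vec2 texCoord;

    void main()
    {
        float y = texture2D(yTexture, texCoord).r;
        float u = texture2D(uTexture, texCoord).r - 0.5;
        float v = texture2D(vTexture, texCoord).r - 0.5;
        gl_FragColor = vec4(y + 1.402 * v,
                            y - 0.344 * u - 0.714 * v,
                            y + 1.772 * u,
                            1.0);
    }
)";
```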

How to programmatically crossfade between songs

I am working on a music player for the iPhone, and I would like to crossfade between songs. I know this is possible because My DJ does it... I can't find any documentation and don't even know where to begin. Can anyone give me a starting point or some example code?
Thanks.
One option is to convert both songs into raw (linear PCM) samples. Then build your own mixer function or audio unit where the mix ratio of each sample comes from a ramp or custom mix curve function. Feed the output to the RemoteIO audio unit.
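A sketch of such a mixer function (my own naming; a linear ramp is shown, with the equal-power alternative noted in a comment):

```cpp
#include <cmath>
#include <cstdint>
#include <cstddef>

// Crossfade two equal-length linear PCM buffers into 'out'.
// The mix ratio ramps from 0 (all outgoing song) to 1 (all incoming song).
// Assumes numSamples > 1.
void crossfade(const int16_t *outgoing, const int16_t *incoming,
               int16_t *out, size_t numSamples)
{
    for (size_t i = 0; i < numSamples; ++i) {
        float mix = (float)i / (float)(numSamples - 1);
        // For constant perceived loudness use an equal-power curve instead:
        //   gainOut = cosf(mix * (float)M_PI_2), gainIn = sinf(mix * (float)M_PI_2)
        float gainOut = 1.0f - mix;
        float gainIn  = mix;
        out[i] = (int16_t)(gainOut * outgoing[i] + gainIn * incoming[i]);
    }
}
```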

How to detect ambient sound level on the iPhone?

I need to create an audio loudness (decibel) detector. To clarify, I am not trying to find the volume at which the iPhone is playing, but rather the loudness of its surroundings in decibels. How can I do this?
It can be done using AVAudioRecorder: http://b2cloud.com.au/tutorial/obtaining-decibels-from-the-ios-microphone
Use one of the Audio Queue or Audio Unit APIs to record low-latency audio, and run the samples through a DSP filter to weight the spectrum for the particular type or color of loudness you want to measure. Then calibrate the mic on each model of iOS device you want to run your detector on against calibrated sound sources, perhaps in an anechoic chamber.
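Either way, the core measurement is just an RMS level over the recorded samples. A minimal sketch (this gives dBFS, i.e. relative to full scale; turning it into calibrated SPL still requires the per-device calibration described above):

```cpp
#include <cmath>
#include <cstdint>
#include <cstddef>

// RMS level of a 16-bit PCM buffer, in dB relative to full scale (dBFS).
float rmsLevelDb(const int16_t *samples, size_t count)
{
    double sumSquares = 0.0;
    for (size_t i = 0; i < count; ++i) {
        double s = samples[i] / 32768.0;   // normalise to [-1, 1)
        sumSquares += s * s;
    }
    double rms = std::sqrt(sumSquares / (double)count);
    return (float)(20.0 * std::log10(rms + 1e-12)); // epsilon avoids log10(0)
}
```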

Audio recording and wave form drawing on ios

I'm recording audio using AVAudioRecorder, and after recording I want to draw a waveform of the recorded audio. I've found a nice article about waveform drawing, but first I need the frequencies at a certain sample rate as floats, right?
Do I need to run an FFT on the audio, and how do I do that? Is AVAudioRecorder even the right API for this purpose, or do I need to use some lower-level API to record the audio?
Hope someone can help me.
AVAudioRecorder doesn't look like it's much use for this (although it may be possible). You need to look at recording with AudioQueue.
The 'waveform' of the audio isn't the frequencies. The waveform is the sequence of sample values that make up the audio (you can get these when recording with an AudioQueue). An FFT converts the audio samples from the time domain to the frequency domain - if you draw the output of the FFT you will have a spectrum plot instead of a waveform.
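To actually draw the waveform, a common trick is to reduce each block of samples to a (min, max) pair per pixel column. A sketch (assumes the recording has at least 'width' samples):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <utility>
#include <vector>

// Reduce PCM samples to one (min, max) pair per pixel column,
// normalised to [-1, 1], ready to draw as vertical lines.
std::vector<std::pair<float, float>> buildWaveform(const int16_t *samples,
                                                   size_t count, int width)
{
    std::vector<std::pair<float, float>> peaks(width);
    size_t samplesPerPixel = count / width;
    for (int x = 0; x < width; ++x) {
        int16_t lo = INT16_MAX, hi = INT16_MIN;
        for (size_t i = x * samplesPerPixel; i < (x + 1) * samplesPerPixel; ++i) {
            lo = std::min(lo, samples[i]);
            hi = std::max(hi, samples[i]);
        }
        peaks[x] = { lo / 32768.0f, hi / 32768.0f };
    }
    return peaks;
}
```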