How can I apply a reverb filter or any other sound effect to a .wav sound file? - iPhone

I need to apply a reverb filter to a sound file in my iPad app.
I found the key AVMetadataID3MetadataKeyReverb in the Apple documentation, but I can't work out how to use it.
It was added in iOS 4.0.

The AVMetadataID3MetadataKeyReverb constant represents the RVRB field of an ID3v2 tag, which is simply a piece of metadata that's part of an audio container file (like MP3).
The constant isn't related to applying an actual reverb effect to a piece of audio data; it's used to identify the different parts of ID3 tags when using AV Foundation to retrieve them from an audio file. Later, when a supporting audio player plays those files, it reads the tags and applies the corresponding effects (such as reverb) in real time during playback.
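As a rough illustration (this will not add reverb to your audio), here is a minimal sketch of how you might read that RVRB field with AV Foundation; the file URL is a placeholder, and a real app would load the metadata asynchronously and check for errors:

    #import <AVFoundation/AVFoundation.h>

    // Hypothetical helper: logs the value of the RVRB (reverb) ID3 frame, if present.
    static void LogReverbTag(NSURL *fileURL)
    {
        AVURLAsset *asset = [AVURLAsset URLAssetWithURL:fileURL options:nil];

        // Synchronous for brevity; prefer loadValuesAsynchronouslyForKeys: in real code.
        NSArray *id3Items = [asset metadataForFormat:AVMetadataFormatID3Metadata];

        for (AVMetadataItem *item in id3Items) {
            if ([[item key] isEqual:AVMetadataID3MetadataKeyReverb]) {
                NSLog(@"RVRB tag value: %@", [item value]);
            }
        }
    }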
If you want to modify this value, you'll have to use an external library, as Audio Toolbox only knows how to read ID3 tags, not write them. Check out TagLib.
If you want to apply effects to audio data, check out BASS: they have an iPhone library and many effects, including reverb. There may be alternatives, though.

Related

objective-c record audio session output

I am writing an app that generates music. I am using OpenAL to modify gain, modify pitch, mix audio, and play the resulting audio. I now need to record the audio as it is being played. I understand that OpenAL does not let you record the output audio. The other option I have found is to use Audio Units. However, because I need to mix/pitch/gain the audio and record it, it seems I would have to write all the audio processing myself so I can have access to the output buffer. Is this correct? Or is there a different iOS API I can use to do this? If not, is there a 3rd party solution that already lets me record the output (paid solutions are fine)?
You are correct.
Audio Units are the only iOS public API that allows an app to both process and then record audio.
Trying to record the OpenAL output may well be a violation of Apple's rules against using non-public APIs.
The alternative may be to rewrite the portions of OpenAL you need (open source may exist for some of them) on top of the RemoteIO Audio Unit.
The best way to go is likely to be Core Audio, since it will give you as much flexibility as you need. Take a look into the Extended Audio File Services reference pages.
Using an extended audio file, you should be able to set up a file format and audio stream buffer to send the final mixed output to, and then use the ExtAudioFileWrite() function to write the samples to the file.
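As a hedged sketch (the CAF container, 16-bit stereo format, and sample rate here are arbitrary choices, and all error handling is omitted), the flow might look roughly like this:

    #include <AudioToolbox/AudioToolbox.h>

    // Create a destination file matching the format of the mixed output,
    // then hand it each rendered buffer. Real code should check every OSStatus.
    static ExtAudioFileRef CreateMixFile(CFURLRef url, Float64 sampleRate)
    {
        AudioStreamBasicDescription fmt = {0};
        fmt.mSampleRate       = sampleRate;
        fmt.mFormatID         = kAudioFormatLinearPCM;
        fmt.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
        fmt.mChannelsPerFrame = 2;
        fmt.mBitsPerChannel   = 16;
        fmt.mBytesPerFrame    = fmt.mChannelsPerFrame * (fmt.mBitsPerChannel / 8);
        fmt.mBytesPerPacket   = fmt.mBytesPerFrame;
        fmt.mFramesPerPacket  = 1;

        ExtAudioFileRef file = NULL;
        ExtAudioFileCreateWithURL(url, kAudioFileCAFType, &fmt, NULL,
                                  kAudioFileFlags_EraseFile, &file);
        return file;
    }

    // Inside (or right after) your render/mix callback:
    //     ExtAudioFileWrite(file, framesRendered, mixedBufferList);
    // and when you are finished:
    //     ExtAudioFileDispose(file);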

create video from images [duplicate]

I have come across some sample code where a set of images is added to make a QTMovie.
I am targeting the OS X platform without any QuickTime frameworks.
I have a vague idea of creating a file with the right extension, embedding the appropriate metadata, and finding a way to insert images and audio in the required format, so that the resulting file can simply be played.
I am not sure which format/extension is best.
Pointers are much appreciated.
Without QuickTime (or an equivalent multimedia framework), what you describe is quite a lot of work. Ordinarily, you would use a video compression algorithm (such as H.264) to encode your images into video, and an audio compression algorithm (such as AAC) to encode your audio track. Then you would write these streams into a container file, such as an MPEG-4 file, which interleaves the streams for playback, contains metadata and indexes and so on. Then for playback, you parse the file, decode the video and audio data, and schedule them for playback, taking care to keep them in sync.
QuickTime does all this (and more) for you, and it would be an enormous undertaking to write it all yourself. Is there some reason why you are running on OS X but cannot use QuickTime?
Given the question is tagged with iPhone, why can't you just use QTKit?
If you had to do it from scratch, you could adopt a very simple solution whereby you store your image sequence as a set of JPEG files (but then you would require libjpeg; use raw RGB or PPM if you must), store the audio track as raw WAV data, and then have another file (a text file you define) that stores the timing information. You would then simply stream out the audio, with the frame numbers of the images stored alongside their corresponding timecode/sample offset. That is a very simple solution that could be made to work without too much effort.
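For example, a sketch of generating such a home-grown timing file might look like this (the file naming, 25 fps rate, and 44.1 kHz sample rate are all assumptions made up for illustration):

    #import <Foundation/Foundation.h>

    // Writes one line per video frame, pairing a JPEG file name with the
    // audio sample offset at which that frame should be shown.
    static void WriteTimingFile(NSString *path, NSUInteger frameCount)
    {
        double fps = 25.0;            // assumed frame rate
        double sampleRate = 44100.0;  // assumed WAV sample rate

        NSMutableString *timing = [NSMutableString string];
        for (NSUInteger frame = 0; frame < frameCount; frame++) {
            long long sampleOffset = (long long)((frame / fps) * sampleRate);
            [timing appendFormat:@"frame%04lu.jpg %lld\n",
                                 (unsigned long)frame, sampleOffset];
        }
        [timing writeToFile:path atomically:YES
                   encoding:NSUTF8StringEncoding error:NULL];
    }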
If you give us some more idea of what you are trying to achieve, we could offer some more specific suggestions.
If you want to write a program to do this, you could use Xuggler in Java to do it. It will allow you to save your final video in a format playable by almost any media player.
Start out by gaining an understanding of how video files (e.g. MP4, QuickTime) actually represent audio and video with this Overly Simplistic Guide to Internet Video.
Then, play around with the MediaTool tutorials. You can write programs that make raw images into video files (see this sample code). Finally, to write a program that makes audio and video that are in sync, see this tutorial; it generates a set of images, and makes some audio noise that is timed to change when a ball hits the edge of a box.
Hope that helps.
Art

Mixing sound files on an iPhone

I've got a couple of WAV files and possibly an MP3 that I'd like to mix down to a single WAV or MP3 file. I'm using C/C++/Obj-C (iPhone). I really have no experience with this sort of thing. If anyone could give me some pointers, I would be very grateful.
Basically, I want to do things similar to what Audacity can do, for example, but programmatically. Isn't there a sound library where you can easily open audio files and "paste" them into a new one at defined positions, where mixing is something you don't have to worry about?
Thanks.
Mixing two buffers of linear PCM is just a matter of adding the corresponding sample values together, while making sure you don't overflow. Normally you would use floating-point values in the buffers, so the overflow issue only really matters when you convert back for the file. You also have Core Audio available on the iPhone; it has all the means to open, read, and write sound files in different formats. I think there is also a higher-level API available on the iPhone that isn't on the Mac; look it up in the Apple docs.
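A minimal sketch of that idea, assuming both buffers are already decoded to float samples of equal length (simple clamping stands in for proper limiting):

    #include <stddef.h>

    // Sum two float PCM buffers sample by sample, clamping to [-1, 1] so the
    // result stays valid when it is later converted to an integer file format.
    static void mix_buffers(const float *a, const float *b, float *out, size_t count)
    {
        for (size_t i = 0; i < count; i++) {
            float sum = a[i] + b[i];
            if (sum > 1.0f)  sum = 1.0f;
            if (sum < -1.0f) sum = -1.0f;
            out[i] = sum;
        }
    }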
If you are specifically looking for the features of Audacity, it uses PortAudio under the hood (which appears to be MIT-licensed). Perhaps you can just use that?
Read Multimedia Support as a starting point; it contains a lot of info. Here is an extract:
There are three ways to mix audio on the iPhone:
Audio Unit framework
Multichannel Mixer unit - "lets you mix multiple audio streams to a single stream"
3D Mixer unit - "lets you mix multiple audio streams, specify stereo output panning, manipulate sample rate"
OpenAL - used in games development
Also check out the following sample, iPhoneMultichannelMixerTest (a rough setup sketch follows below):
Two input busses are created, each with its own input volume control. An overall mixer output volume control is also provided, and each bus may be enabled or disabled.
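For context, here is a minimal, hedged sketch of creating the Multichannel Mixer unit described above; bus connections, stream formats, and the surrounding AUGraph/RemoteIO plumbing are left out, and none of this is taken from the sample itself:

    #include <AudioUnit/AudioUnit.h>

    // Instantiate Apple's Multichannel Mixer audio unit and give it two input busses.
    static AudioUnit CreateMixerUnit(void)
    {
        AudioComponentDescription desc = {0};
        desc.componentType         = kAudioUnitType_Mixer;
        desc.componentSubType      = kAudioUnitSubType_MultiChannelMixer;
        desc.componentManufacturer = kAudioUnitManufacturer_Apple;

        AudioComponent comp = AudioComponentFindNext(NULL, &desc);
        AudioUnit mixer = NULL;
        AudioComponentInstanceNew(comp, &mixer);

        // Two input busses, as in the sample's description.
        UInt32 busCount = 2;
        AudioUnitSetProperty(mixer, kAudioUnitProperty_ElementCount,
                             kAudioUnitScope_Input, 0, &busCount, sizeof(busCount));

        // Per-bus input volume and the overall output volume.
        AudioUnitSetParameter(mixer, kMultiChannelMixerParam_Volume,
                              kAudioUnitScope_Input, 0, 1.0f, 0);
        AudioUnitSetParameter(mixer, kMultiChannelMixerParam_Volume,
                              kAudioUnitScope_Input, 1, 1.0f, 0);
        AudioUnitSetParameter(mixer, kMultiChannelMixerParam_Volume,
                              kAudioUnitScope_Output, 0, 1.0f, 0);

        AudioUnitInitialize(mixer);
        return mixer;
    }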

How can I record the audio output of the iPhone? (like the sounds of my app)

I want to record the sound of my iPhone app. For example, someone plays something on an iPhone instrument, and afterwards you can listen to it again.
Is this possible without using the microphone?
Do you mean an app you build yourself? If so, you could just save the rendered waveform (perhaps encoded/compressed to save space) for later playback. (See Extended Audio File Services: it can write the same AudioBufferList to a file that you would render to the RemoteIO Audio Unit when playing audio in your instrument app.)
[Edit: removed comments on recording third-party app audio output ...]
With AVFoundation, which you are currently using, you're always working at the level of sound files; your code never sees the actual audio signal. Thus, you can't 'grab' the audio signal that your app generates while it is being used. AVAudioPlayer also does not provide any means of getting at the final signal, and if you're using multiple instances of AVAudioPlayer to play several sounds at the same time, you wouldn't be able to get at the mixed signal either.
Alas, you probably need to use Core Audio, which is a much lower-level interface.
I'd like to suggest an alternative approach: instead of recording the audio output, why not record the sequence of actions, together with their timestamps, that led to the audio being played? Write this sequence of events to a file and read it back in to reproduce the 'performance' - it's a bit like your own MIDI sequencer :)
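A very small sketch of that idea (the "note" event and its payload are invented here purely for illustration):

    #import <Foundation/Foundation.h>
    #import <QuartzCore/QuartzCore.h>   // for CACurrentMediaTime()

    static NSMutableArray *events;
    static CFTimeInterval startTime;

    // Call when the performance begins.
    static void StartRecording(void)
    {
        events = [[NSMutableArray alloc] init];
        startTime = CACurrentMediaTime();
    }

    // Call whenever the user triggers a sound: store what happened and when.
    static void RecordNote(int note)
    {
        [events addObject:[NSDictionary dictionaryWithObjectsAndKeys:
            [NSNumber numberWithInt:note],                                @"note",
            [NSNumber numberWithDouble:CACurrentMediaTime() - startTime], @"time",
            nil]];
    }

    // Later, walk the array and replay each event at its stored offset,
    // e.g. with performSelector:withObject:afterDelay:.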
