I have a music file with a particular tone (music.mp4). I want to convert an existing sound file (speech.mp4) into the tone specified in music.mp4. It's like converting speech into a particular tone. I do not want to play both files simultaneously; I want to convert the source file with the help of the music file, so the output will be the converted file.
Is it possible? I searched the Audio Unit Hosting and Multimedia Programming guides, but did not find any clue.
Thanks in advance.
The answer is: it sounds (no pun intended) like it would be possible on iOS. You just need to find someone who knows how to program that specific functionality. I do not know why you would expect to find the answer in the Apple docs. I want to know if it's possible to program music that plays backwards. I want to know if I can program a sound that converts my words into something my dog understands. I can't imagine the documentation could possibly cover everything everyone ever wants to program an iPhone to do.
There are no public iOS APIs for frequency analysis of audio files; you would have to write your own DSP code for that. The AVFoundation and Accelerate frameworks have some audio file conversion and math functions that may help, but those cover only a small portion of the code needed.
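To give a sense of what the analysis side might look like, here is a minimal sketch using Accelerate's vDSP FFT routines. It assumes you have already decoded the file into a mono float PCM buffer (for example with ExtAudioFile); windowing and frame overlap are omitted for brevity:

    #include <Accelerate/Accelerate.h>

    // Computes per-bin (squared) magnitudes for one 1024-sample frame
    // of mono float PCM.
    void analyzeFrame(const float *samples)
    {
        const vDSP_Length log2n = 10;          // 1024-point FFT
        const vDSP_Length n = 1 << log2n;
        FFTSetup setup = vDSP_create_fftsetup(log2n, kFFTRadix2);

        float real[512], imag[512];
        DSPSplitComplex split = { real, imag };

        // Repack the real signal into the split-complex layout vDSP expects.
        vDSP_ctoz((const DSPComplex *)samples, 2, &split, 1, n / 2);

        // In-place forward FFT of the real signal.
        vDSP_fft_zrip(setup, &split, 1, log2n, kFFTDirection_Forward);

        // Squared magnitude of each frequency bin.
        float magnitudes[512];
        vDSP_zvmags(&split, 1, magnitudes, 1, n / 2);

        vDSP_destroy_fftsetup(setup);
    }

That only gets you a spectrum per frame; actually reshaping the speech to match the "tone" of another file is the hard DSP problem the answer above alludes to.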
Related
I have a series of sounds that a user will play, rearrange, and edit while using my app. When the user is finished, I want them to be able to save their work and record it to an MP3.
I don't want to play it through the speakers and record it with the mic, since that would result in low sound quality and interference. I cannot think of any way of doing this that doesn't require extra hardware and/or a computer.
How can I do this using just their device?
Well, I would say it can't be done with AVFoundation.
My suggestion is to use Audio Units and transform all your interactions into an audio graph. At some point you set a render notify on the RemoteIO unit, so every time it renders sound to the speakers you get a callback where you can write those frames/packets down into a file.
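A rough sketch of such a tap, assuming remoteIOUnit is your graph's output unit and gOutputFile is an ExtAudioFileRef you have already opened for writing (both names are placeholders):

    #include <AudioToolbox/AudioToolbox.h>

    static ExtAudioFileRef gOutputFile; // opened elsewhere, see below

    // Called around every render cycle of the RemoteIO unit; on the
    // post-render pass, ioData holds the frames just sent to the speaker.
    static OSStatus renderNotify(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData)
    {
        if (*ioActionFlags & kAudioUnitRenderAction_PostRender) {
            // The async variant is safe to call from the render thread.
            ExtAudioFileWriteAsync(gOutputFile, inNumberFrames, ioData);
        }
        return noErr;
    }

    // Registration, once the graph is set up:
    //   AudioUnitAddRenderNotify(remoteIOUnit, renderNotify, NULL);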
I would also suggest using AAC (M4A) over MP3. I am not very fond of MP3, and to be honest, as far as I know the SDK does not provide encoding to MP3, probably due to licensing issues. I could be wrong, though. Check the sample code below; it is probably the best sample code you will ever find on Audio Units on the web.
AudioGraph by Tom Zic
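For the AAC side, setting up the output file could look roughly like this sketch; outputURL and pcmFormat (the PCM format your graph renders in) are assumed to exist:

    // Describe the compressed destination format; ExtAudioFile fills in
    // the fields the encoder decides for itself.
    AudioStreamBasicDescription aacFormat = {0};
    aacFormat.mSampleRate       = 44100.0;
    aacFormat.mFormatID         = kAudioFormatMPEG4AAC;
    aacFormat.mChannelsPerFrame = 2;

    ExtAudioFileRef outFile;
    ExtAudioFileCreateWithURL(outputURL, kAudioFileM4AType, &aacFormat,
                              NULL, kAudioFileFlags_EraseFile, &outFile);

    // Tell ExtAudioFile what PCM format the graph delivers, so it
    // converts PCM -> AAC transparently on each write.
    ExtAudioFileSetProperty(outFile, kExtAudioFileProperty_ClientDataFormat,
                            sizeof(pcmFormat), &pcmFormat);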
I'm new to programming and want to find something that I can work on to help me learn more about it. I want to do this in C++ if possible. What I want to do is start developing a program that has a user interface and will convert an MP3 into an M4B (the format the iPhone uses for audiobooks). I have been looking for some source code examples but have had no luck. If anyone can give me some places to start, that would be great. Thanks
This is really trivial with Audio Converter Services, which are part of Core Audio.
I know it is not cool to just post a book reference, but I really highly recommend Learning Core Audio: A Hands-On Guide to Audio Programming for Mac and iOS if you want to do Core Audio work.
Chapter 6 is all about audio conversion from one format to another.
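Condensed to its bare bones, that kind of conversion can be driven through ExtAudioFile, which wraps Audio Converter Services under the hood. A sketch (error checking omitted; inURL/outURL are placeholders):

    #include <AudioToolbox/AudioToolbox.h>

    // Convert any Core Audio-readable file to AAC in an M4A/M4B container.
    void convertToAAC(CFURLRef inURL, CFURLRef outURL)
    {
        ExtAudioFileRef src, dst;
        ExtAudioFileOpenURL(inURL, &src);

        // Common interleaved 16-bit stereo PCM format both sides agree on.
        AudioStreamBasicDescription pcm = {0};
        pcm.mSampleRate       = 44100.0;
        pcm.mFormatID         = kAudioFormatLinearPCM;
        pcm.mFormatFlags      = kAudioFormatFlagIsSignedInteger |
                                kAudioFormatFlagIsPacked;
        pcm.mChannelsPerFrame = 2;
        pcm.mBitsPerChannel   = 16;
        pcm.mFramesPerPacket  = 1;
        pcm.mBytesPerFrame    = 4;
        pcm.mBytesPerPacket   = 4;

        AudioStreamBasicDescription aac = {0};
        aac.mSampleRate       = 44100.0;
        aac.mFormatID         = kAudioFormatMPEG4AAC;
        aac.mChannelsPerFrame = 2;

        ExtAudioFileCreateWithURL(outURL, kAudioFileM4AType, &aac, NULL,
                                  kAudioFileFlags_EraseFile, &dst);
        ExtAudioFileSetProperty(src, kExtAudioFileProperty_ClientDataFormat,
                                sizeof(pcm), &pcm);
        ExtAudioFileSetProperty(dst, kExtAudioFileProperty_ClientDataFormat,
                                sizeof(pcm), &pcm);

        // Pump PCM from source to destination; each end converts as needed.
        enum { kFrames = 4096 };
        char buffer[kFrames * 4];
        AudioBufferList list = { 1, {{ 2, sizeof(buffer), buffer }} };
        UInt32 frames = kFrames;
        while (frames > 0) {
            list.mBuffers[0].mDataByteSize = sizeof(buffer);
            frames = kFrames;
            ExtAudioFileRead(src, &frames, &list);
            if (frames > 0)
                ExtAudioFileWrite(dst, frames, &list);
        }

        ExtAudioFileDispose(src);
        ExtAudioFileDispose(dst);
    }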
An M4B is simply an MP4 file (AAC encoded) with the extension .m4b, i.e. an MP4 file with a renamed file extension.
Perhaps consider using an encoding library such as ffmpeg.
Is it possible to play .mid files directly via some API, or does one have to convert the MIDI file to e.g. WAV first? I have seen other similar questions, but the answers there are not working for me, so if anyone knows, please tell me. Thanks.
You could use MusicPlayer APIs for playback.
To alter the tempo, see MusicPlayerSetPlayRateScalar.
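A minimal sketch, assuming midiURL points at a standard MIDI file:

    #include <AudioToolbox/AudioToolbox.h>

    // Load a standard MIDI file and play it back at double speed.
    void playMidi(CFURLRef midiURL)
    {
        MusicSequence sequence;
        MusicPlayer   player;

        NewMusicSequence(&sequence);
        MusicSequenceFileLoad(sequence, midiURL,
                              kMusicSequenceFile_MIDIType, 0);

        NewMusicPlayer(&player);
        MusicPlayerSetSequence(player, sequence);
        MusicPlayerSetPlayRateScalar(player, 2.0); // 2x tempo
        MusicPlayerPreroll(player);
        MusicPlayerStart(player);
    }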
On iOS, is it possible to get the user's audio stream in a decompressed format? For example, can an MP3 be decoded to WAV-style PCM that can be used for audio analysis? I'm relatively new to the iOS platform, and I remember seeing that this wasn't possible in older iOS versions. I read that iOS 4 brought in some advanced APIs, but I'm not sure where I can find documentation/samples for these.
If you don't mind requiring iOS 4.1 and above, you could try using the AVAssetReader class and friends. In this similar question you have a full example of how to extract video frames. I would expect the same to work for audio, and the nice thing is that the reader deals with all the details of decompression. You can even do composition with AVComposition to merge several streams.
These classes are part of AVFoundation, which allows not only reading but also creating your own content.
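A hedged sketch of the audio case; fileURL is a placeholder, and requesting kAudioFormatLinearPCM in the output settings makes the reader hand back decompressed PCM with default parameters:

    #import <AVFoundation/AVFoundation.h>
    #import <AudioToolbox/AudioToolbox.h>

    // Decode the first audio track of an asset into raw PCM sample buffers.
    void readPCM(NSURL *fileURL)
    {
        AVURLAsset *asset = [AVURLAsset URLAssetWithURL:fileURL options:nil];
        AVAssetTrack *track =
            [[asset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];

        NSDictionary *settings =
            [NSDictionary dictionaryWithObject:
                 [NSNumber numberWithInt:kAudioFormatLinearPCM]
                                        forKey:AVFormatIDKey];

        NSError *error = nil;
        AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:asset
                                                              error:&error];
        AVAssetReaderTrackOutput *output =
            [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:track
                                                       outputSettings:settings];
        [reader addOutput:output];
        [reader startReading];

        CMSampleBufferRef sample;
        while ((sample = [output copyNextSampleBuffer]) != NULL) {
            // The PCM bytes live in the sample buffer's block buffer;
            // hand them to your analysis code here.
            CFRelease(sample);
        }
    }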
Apple has an OpenAL example at http://developer.apple.com/library/mac/#samplecode/OpenALExample/Introduction/Intro.html where Scene.m should interest you.
The Apple documentation has a picture where the Core Audio framework clearly shows that it gives you MP3 out. It also states that you can access audio units directly if you need lower-level control.
The same Core Audio document also gives some information about using MIDI, if that helps.
Edit:
You're in luck today.
In this example an audio file is loaded and fed into an AudioUnit graph. You could fairly easily write an AudioUnit of your own to put into this graph which analyzes the PCM stream as you see fit. You can even do the analysis in the render callback, although that's probably not a good idea, because render callbacks are encouraged to stay as simple and fast as possible.
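To give an idea of how small such a tap can be, here is a sketch that just measures the RMS level of each rendered buffer (float, single-buffer layout assumed; analysisTap would be registered with AudioUnitAddRenderNotify):

    #include <AudioToolbox/AudioToolbox.h>
    #include <Accelerate/Accelerate.h>

    static OSStatus analysisTap(void *inRefCon,
                                AudioUnitRenderActionFlags *ioActionFlags,
                                const AudioTimeStamp *inTimeStamp,
                                UInt32 inBusNumber,
                                UInt32 inNumberFrames,
                                AudioBufferList *ioData)
    {
        if (*ioActionFlags & kAudioUnitRenderAction_PostRender) {
            float *pcm = (float *)ioData->mBuffers[0].mData;
            float rms = 0.0f;
            vDSP_rmsqv(pcm, 1, &rms, inNumberFrames); // root-mean-square level
            // Keep it cheap: stash the value and do heavy work elsewhere.
        }
        return noErr;
    }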
Is it possible to play .mid files directly via some API, or does one have to convert the MIDI file to e.g. AAC first?
(2 years later…) You can use the MusicPlayer and MusicSequence APIs, available in iOS 5.
There is no Apple API for this. You could write your own, which I think would depend on what you are hoping it is going to sound like.
There is a lot of available source code for reading MIDI files, and there are a few open-source synths for the iPhone; or you could use OpenAL for triggering samples. It probably isn't going to sound like GarageBand, though.
If you want it to sound as good as possible, you will have to convert it first.