In my app, I need to generate video with audio. I am currently using the Air Play sample, but it only generates video, and I need to generate video with audio. Can anyone help me solve this problem?
Regards,
First of all, you need to increase your accept rate.
Secondly, this is quite a lot that you are asking for. I'm not familiar with Air Play, but my suggestion would be to start with the SpeakHere project. That will get you familiarised with Core Audio and give you a template for accessing the audio input.
As for the visual aspect, I highly recommend O'Reilly's 3D Programming book, which provides a lot of example code.
It will take a bit of time to assimilate both materials, but you'll end up with a better application.
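If it helps, the general AVFoundation route to writing a movie with both picture and sound is an AVAssetWriter with one input per track. Here is a minimal Swift sketch; the output path and the codec/format settings are illustrative assumptions, not values from the Air Play sample:

    import AVFoundation

    // Writer for the output movie (path is a placeholder).
    let writer = try AVAssetWriter(outputURL: URL(fileURLWithPath: "/tmp/out.mov"),
                                   fileType: .mov)

    // One input per track: H.264 video plus mono AAC audio (assumed settings).
    let videoInput = AVAssetWriterInput(mediaType: .video, outputSettings: [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: 640,
        AVVideoHeightKey: 480
    ])
    let audioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: [
        AVFormatIDKey: kAudioFormatMPEG4AAC,
        AVNumberOfChannelsKey: 1,
        AVSampleRateKey: 44_100.0
    ])
    videoInput.expectsMediaDataInRealTime = true
    audioInput.expectsMediaDataInRealTime = true
    writer.add(videoInput)
    writer.add(audioInput)

    writer.startWriting()
    writer.startSession(atSourceTime: .zero)
    // Append CMSampleBuffers from your capture callbacks to each input,
    // then call markAsFinished() on both and finishWriting(completionHandler:).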
I'm very new to audio programming, but I know this must be possible. (This is an iOS/iPhone related question).
How would I go about changing the tempo of a loaded audio file without changing the pitch, and then playing it back?
I think I need to delve into the Core Audio framework, but I'm not sure where to begin.
If anyone could let me know what classes I need to look at, or the general process involved, that would help me get started and I'd really appreciate it!
Cheers!
This question is highly related: it covers pitch shifting rather than time shifting, but I'd check out the comments and links.
Real-time Pitch Shifting on the iPhone
What you are looking for is a time-pitch modification library. Core Audio on iOS does not currently include one, but there appear to be some third-party libraries available (commercially). There are also time-pitch tutorials on the web, such as those at DSP Dimension, which require a large amount of DSP development to get working.
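Later iOS releases did gain a built-in option: AVFoundation's AVAudioUnitTimePitch changes tempo independently of pitch. A minimal sketch, assuming a playable audio file at a placeholder path:

    import AVFoundation

    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    let timePitch = AVAudioUnitTimePitch()
    timePitch.rate = 1.25          // 25% faster tempo; pitch stays unchanged

    engine.attach(player)
    engine.attach(timePitch)

    // Placeholder path: substitute a real file from your bundle.
    let file = try AVAudioFile(forReading: URL(fileURLWithPath: "/path/to/song.m4a"))
    engine.connect(player, to: timePitch, format: file.processingFormat)
    engine.connect(timePitch, to: engine.mainMixerNode, format: file.processingFormat)

    player.scheduleFile(file, at: nil)
    try engine.start()
    player.play()

At the time this answer was written, though, a third-party library or hand-rolled DSP (a phase vocoder, as the DSP Dimension tutorials describe) was the only route.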
I am trying to get started with advanced audio in the iPhone SDK. I really want to make professional-level audio components. I know the basics (e.g. how to use AVAudioPlayer), but I don't know what to do for the more complicated sorts of audio (e.g. oscillation and audio cones). Does anyone know where to go for this? (I tried researching online, and all that came up were fairly simplistic audio components.)
Core Audio. That's where you'll find an OpenAL implementation. You might also want to look at the Audio Processing Graph API (also part of Core Audio).
Hi, unfortunately I've not been able to figure out audio on the iPhone. The closest I've come are the AVAudioRecorder/AVAudioPlayer classes, and I know that they are no good for audio processing.
So I'm wondering if someone could explain to me how to "listen" to the iPhone's mic input in chunks of, say, 1024 samples, analyse the samples, and do stuff, and just keep going like that until my app terminates or tells it to stop. I'm not looking to save any data; all I want is to analyse the data in real time and act on it in real time.
I've attempted to understand Apple's "aurioTouch" example, but it's just way too complicated for me.
So can someone explain to me how I should go about this?
If you want to analyze audio input in real-time, it doesn't get a lot simpler than Apple's aurioTouch iOS sample app with source code (there is also a mirror site). You can google a bit more info on using the Audio Unit RemoteIO API for recording, but you'll still have to figure out the real-time analysis DSP portion.
The Audio Queue API is slightly simpler for getting input buffers of raw PCM audio data from the mic, but not by much, and it has higher latency.
Added later: There's also a version of aurioTouch converted to Swift here: https://github.com/ooper-shlab/aurioTouch2.0-Swift
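If the RemoteIO route feels heavy, a hedged alternative: on newer systems, AVAudioEngine can deliver input buffers to a tap with much less ceremony. Note that the 1024-sample buffer size is a request, not a guarantee, and the app needs microphone permission (NSMicrophoneUsageDescription):

    import AVFoundation

    let engine = AVAudioEngine()
    let input = engine.inputNode
    let format = input.outputFormat(forBus: 0)

    // Ask for ~1024-frame chunks; the system may deliver other sizes.
    input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
        guard let samples = buffer.floatChannelData?[0] else { return }
        let n = Int(buffer.frameLength)
        // Example analysis: RMS level of this chunk.
        var sum: Float = 0
        for i in 0..<n { sum += samples[i] * samples[i] }
        print("RMS:", (sum / Float(max(n, 1))).squareRoot())
    }

    try engine.start()   // runs until you call engine.stop()

For genuinely low-latency analysis, RemoteIO (as in aurioTouch) is still the tool; the tap callback arrives on a non-real-time thread.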
The AVAudioPlayer/AVAudioRecorder classes won't take you there if you want to do any real-time audio processing. The Audio Toolbox and Audio Unit frameworks are the way to go. Check here for Apple's audio programming guide to see which framework suits your needs. And believe me, this low-level stuff is not easy and is poorly documented. CocoaDev has some tutorials where you can find sample code. Also, there is an audio DSP library called DIRAC, which I recently discovered, that does tempo and pitch manipulation. I haven't looked into it much, but you might find it useful.
If all you want is samples with a minimum amount of processing by the OS, you probably want the Audio Queue API; see Audio Queue Services Programming Guide.
AVAudioRecorder is designed for recording to a file, and AudioUnit is more for "pluggable" audio processing (and on the Mac side of things, AU Lab is actually pretty cool).
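To make the shape of that API concrete, here is a rough Swift sketch of Audio Queue input; the format and buffer sizes are assumptions for illustration, and error handling is omitted:

    import AudioToolbox

    // 16-bit mono PCM at 44.1 kHz (an assumed format).
    var format = AudioStreamBasicDescription(
        mSampleRate: 44_100,
        mFormatID: kAudioFormatLinearPCM,
        mFormatFlags: kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked,
        mBytesPerPacket: 2, mFramesPerPacket: 1,
        mBytesPerFrame: 2, mChannelsPerFrame: 1,
        mBitsPerChannel: 16, mReserved: 0)

    // Called by the OS each time a buffer of raw PCM has been filled.
    let inputCallback: AudioQueueInputCallback = { _, queue, buffer, _, _, _ in
        // buffer.pointee.mAudioData holds mAudioDataByteSize bytes of PCM.
        AudioQueueEnqueueBuffer(queue, buffer, 0, nil)   // recycle the buffer
    }

    var queue: AudioQueueRef?
    AudioQueueNewInput(&format, inputCallback, nil, nil, nil, 0, &queue)

    if let queue = queue {
        for _ in 0..<3 {                 // prime a few buffers
            var buffer: AudioQueueBufferRef?
            AudioQueueAllocateBuffer(queue, 2048, &buffer)
            if let buffer = buffer { AudioQueueEnqueueBuffer(queue, buffer, 0, nil) }
        }
        AudioQueueStart(queue, nil)
    }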
My real objective is to be able to use one audio file, create X different pitches from it, and then play them in the app using some code to handle the timing.
TIA for any helpful insight
You can read the Core Audio documentation.
See this answer, which recommends using the AVFoundation framework.
Core Audio is supposed to be fairly low level. Great if you need more flexibility/control, but AVFoundation may be more appropriate for your app.
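As a concrete illustration of the AVFoundation route, here is a hedged sketch that plays one file at several pitches via AVAudioUnitTimePitch (pitch is in cents, so 100 = one semitone); the file path and intervals are placeholders:

    import AVFoundation

    let engine = AVAudioEngine()
    // Placeholder path: substitute the real file in your bundle.
    let file = try AVAudioFile(forReading: URL(fileURLWithPath: "/path/to/loop.caf"))

    var players: [AVAudioPlayerNode] = []
    for cents in [0.0, 300.0, 700.0] {             // unison, minor third, fifth
        let player = AVAudioPlayerNode()
        let shifter = AVAudioUnitTimePitch()
        shifter.pitch = Float(cents)               // cents of shift; rate stays 1.0

        engine.attach(player)
        engine.attach(shifter)
        engine.connect(player, to: shifter, format: file.processingFormat)
        engine.connect(shifter, to: engine.mainMixerNode, format: file.processingFormat)

        player.scheduleFile(file, at: nil)         // pass an AVAudioTime for timed starts
        players.append(player)
    }

    try engine.start()
    players.forEach { $0.play() }

Scheduling with explicit AVAudioTime values is how you would handle the timing the question mentions.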
I may be doing an iPhone-based application that does near-real-time sound processing (filtering, etc.). I was wondering about the best way to get started. Would I want to create an audio queue for recording and processing sound, as described here?
Edit:
I should be clear: I am not asking how to do signal processing in general. I know some of that, and my team's expert will handle the rest. I'm asking what the "low level" interfaces to sound data on the iPhone are.
Edit2:
My iPhone development has been pushed back a week or two, so I don't have access to the dev kit right now. Once I have access to the kit, I'll mark one answer or another correct.
Sound processing is a big subject. AudioQueue will get you the raw data. Apple provides two samples that will get you started using AudioQueue: SpeakHere and aurioTouch.
I used SpeakHere as a starting point for some audio processing I wanted to do. It's relatively easy to understand, and has all the pieces to do input and output.