I am trying to build an audio visualisation for a stream. The audio has to play in the background; currently I am playing it with an AVPlayer, but I cannot get metering from it. How can I get the metering and build the visualisation? Any suggestions?
Here is an example with waveforms: A cocoa audio player component which displays the waveform of the audio file
Here is an LED bar gauge, and another example of how it can be used: ATTabandHoldAudioRecord
Apple also has the SpeakHere example; the code includes a LevelView, but this Apple sample code is anything but simple to implement...
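Note that AVPlayer itself exposes no metering API; for a true network stream you would have to tap the samples yourself (MTAudioProcessingTap is one direction to research). For local files, AVAudioPlayer has built-in metering. A minimal sketch, assuming a `player` property you retain and a periodic timer (untested):

    #import <AVFoundation/AVFoundation.h>

    // Assumes self.player is a strong AVAudioPlayer property.
    - (void)startMeteredPlayback:(NSURL *)fileURL {
        NSError *error = nil;
        self.player = [[AVAudioPlayer alloc] initWithContentsOfURL:fileURL error:&error];
        self.player.meteringEnabled = YES;   // must be enabled before metering works
        [self.player play];
        [NSTimer scheduledTimerWithTimeInterval:0.05
                                         target:self
                                       selector:@selector(updateMeter:)
                                       userInfo:nil
                                        repeats:YES];
    }

    - (void)updateMeter:(NSTimer *)timer {
        [self.player updateMeters];                           // refresh the cached levels
        float power = [self.player averagePowerForChannel:0]; // dB, roughly -160..0
        // drive the visualisation with `power` here
    }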
Related
We have a VoIP application that records and plays audio, so we are using the PlayAndRecord (kAudioSessionCategory_PlayAndRecord) audio session category.
So far we have used it successfully on iPhone 4/4S/5 with both iOS 6 and iOS 7, where call audio and tones played clearly and were audible.
However, with iPhone 5s, we observed that both the call audio and tones sound robotic/garbled in speaker mode. When using earpiece/bluetooth/headset, sound is clear and audible.
iOS Version used with iPhone 5s: 7.0.4
We are using Audio Units for recording and playing call audio.
When setting audio properties like the session category, audio route, session mode, etc., we tried both the older (deprecated) AudioSessionSetProperty() API and the AVAudioSession API.
For playing tones we are using AVAudioPlayer. Playing tones during the VoIP call, and also when pressing the keypad controls within the app, produces robotic sound.
When instantiating the audio component using AudioComponentInstanceNew, we set componentSubType to kAudioUnitSubType_VoiceProcessingIO.
When replacing kAudioUnitSubType_VoiceProcessingIO with kAudioUnitSubType_RemoteIO, we noticed that the sound of call audio and tones was no longer robotic; it was quite clear, but the volume level was very low when using speaker mode.
In summary, keeping all the other audio APIs the same:
kAudioUnitSubType_VoiceProcessingIO: volume is high (desirable), but tones and call audio sound robotic in speaker mode.
kAudioUnitSubType_RemoteIO: tones and call audio sound clear, but the volume is too low to be audible in speaker mode.
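For reference, the only line that changes between these two configurations is the component subtype. A sketch of the instantiation described above (untested):

    AudioComponentDescription desc = {0};
    desc.componentType         = kAudioUnitType_Output;
    // Swapping this subtype for kAudioUnitSubType_RemoteIO is the only
    // change between the two cases described above.
    desc.componentSubType      = kAudioUnitSubType_VoiceProcessingIO;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;

    AudioComponent comp = AudioComponentFindNext(NULL, &desc);
    AudioUnit ioUnit = NULL;
    OSStatus status = AudioComponentInstanceNew(comp, &ioUnit);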
STEPS TO REPRODUCE
- Set the audio session category to PlayAndRecord (see the sketch after this list).
- Set the audio route to the speaker.
- Do the rest of the audio setup: instantiate the audio component, activate the audio session, and start the audio unit.
- Set the input and render callbacks.
- Try both options:
1. Play tones using AVAudioPlayer.
2. Play call audio.
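A sketch of the first two steps with the AVAudioSession API (iOS 6+), keeping the names from the steps above; untested:

    AVAudioSession *session = [AVAudioSession sharedInstance];
    NSError *error = nil;
    // Step 1: play-and-record category for VoIP.
    [session setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
    // Step 2: route output to the loudspeaker.
    [session overrideOutputAudioPort:AVAudioSessionPortOverrideSpeaker error:&error];
    [session setActive:YES error:&error];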
Any suggestions on how to get past this issue? We have raised it as an issue with Apple, but there has been no response from them yet.
I have shared the code here: github link
The only difference between kAudioUnitSubType_VoiceProcessingIO and kAudioUnitSubType_RemoteIO is that VoiceProcessingIO includes code to cancel acoustic echo, i.e. it filters the speaker output out of the signal so the microphone doesn't pick it up. It's been a long time since I've played with the audio framework, but I remember that any number of things could make the sound go wrong:
Are you doing any work in the audio callbacks that could be taking a long time?
The callbacks run on realtime threads; if your processing takes too long, you can miss data. It would be helpful to track the data over a fixed period of time to see whether you are capturing it all: use something like Wireshark to sniff the network, record the number of packets, and check whether the phone captured the same amount.
Are you modifying any of the audio?
Do you have a circular buffer that might be causing an issue?
I've had several issues doing this; one was caused by a third-party circular buffer that was described as low-latency and efficient... it wasn't. I answered my own question here and included my circular buffer implementation, which greatly improved my audio; the issue was that I was skipping data.
Give this a go and let me know:
iOS UI are causing a glitch in my audio stream
Please be aware that some of this code is specific to the ALaw audio format: 0xD5 is a byte of silence in ALaw. If you are using linear PCM or any other format, that byte will probably come out as noise of some kind.
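To illustrate both the realtime-callback and circular-buffer points, here is a sketch of a render callback that pulls from a circular buffer and pads with ALaw silence on underrun. `ringBufferRead` is a stand-in for whatever circular buffer implementation you use; untested:

    #include <AudioToolbox/AudioToolbox.h>
    #include <stdint.h>
    #include <string.h>

    // Placeholder for your circular buffer: copies up to maxBytes into dst
    // and returns the number of bytes actually read.
    extern UInt32 ringBufferRead(uint8_t *dst, UInt32 maxBytes);

    static OSStatus renderCallback(void *inRefCon,
                                   AudioUnitRenderActionFlags *ioActionFlags,
                                   const AudioTimeStamp *inTimeStamp,
                                   UInt32 inBusNumber,
                                   UInt32 inNumberFrames,
                                   AudioBufferList *ioData) {
        uint8_t *out = (uint8_t *)ioData->mBuffers[0].mData;
        UInt32 wanted = ioData->mBuffers[0].mDataByteSize;

        // This runs on a realtime thread: no locks, no allocation, no logging.
        UInt32 got = ringBufferRead(out, wanted);
        if (got < wanted) {
            // Underrun: pad with ALaw silence (0xD5) rather than leaving garbage.
            // For linear PCM you would pad with zeros instead.
            memset(out + got, 0xD5, wanted - got);
        }
        return noErr;
    }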
I am developing an app in which I need to record audio.
While recording the audio, I need to show a custom volume indicator, like a speedometer, that shows the volume level of the audio being recorded.
So please help me if anyone has an idea of how to do that.
Thanks.
You can check this audio meter; it is not analog, but: http://developer.apple.com/library/ios/#samplecode/SpeakHere/Introduction/Intro.html
If you want analog, you must combine the Apple example with this analog speedometer.
Roughly the same answer here: objective c audio meter... or: AVAudioRecorder meter delay on iPad
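A sketch of the basic idea: poll the recorder's meters from a timer and rotate the needle. `self.recorder` (an AVAudioRecorder) and `self.needleView` are placeholders for your own objects; untested:

    self.recorder.meteringEnabled = YES;   // enable before calling updateMeters
    [self.recorder record];

    // In a timer firing every ~50 ms:
    [self.recorder updateMeters];
    float db = [self.recorder averagePowerForChannel:0]; // roughly -160..0 dB
    float level = powf(10.0f, db / 20.0f);               // map dB to linear 0..1
    CGFloat angle = -M_PI_2 + level * M_PI;              // sweep -90°..+90°
    self.needleView.transform = CGAffineTransformMakeRotation(angle);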
What I'm doing:
I need to play audio and video files that are not supported by Apple on iPhone/iPad, for example mkv/mka files, which may contain several audio channels.
I'm using libffmpeg to find the audio and video streams in the media file.
Video is decoded with avcodec_decode_video2 and audio with avcodec_decode_audio3.
The output of each function is as follows:
avcodec_decode_video2 - produces an AVFrame structure, which encapsulates information about the video frame from the packet; specifically, it has a data field which is a pointer to the picture/channel planes.
avcodec_decode_audio3 - produces samples of type int16_t *, which I guess is the raw audio data.
So basically I've done all this and am successfully decoding the media content.
What I have to do:
I have to play the audio and video accordingly using Apple's services. The playback should support mixing of audio channels while playing video; i.e., say an mkv file contains two audio channels and a video channel. So I would like to know which service would be the appropriate choice for me. My research showed that the AudioQueue service might be useful for audio playback, and probably AVFoundation for video.
Please help me find the right technology for my case, i.e. video playback plus audio playback with possible audio channel mixing.
You are on the right path. If you are only playing audio (not recording at all), then I would use Audio Queues; they will do the mixing for you. If you are recording, then you should use Audio Units; take a look at the MixerHost example project from Apple. For video I recommend using OpenGL. Assuming the image buffer is in YUV420, you can render it with a simple two-pass shader setup. I believe there is an Apple example project showing how to do this. In any case, you could render any pixel format using OpenGL and a shader to convert the pixel format to RGBA. Hope this helps.
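As a starting point, here is a sketch of an output Audio Queue fed with decoded PCM. `fillBuffer` is a placeholder for copying your ffmpeg-decoded int16_t samples into the queue buffer; untested:

    #include <AudioToolbox/AudioToolbox.h>

    // Placeholder: copy decoded int16_t samples into buffer->mAudioData
    // and set buffer->mAudioDataByteSize accordingly.
    extern void fillBuffer(AudioQueueBufferRef buffer);

    static void outputCallback(void *userData, AudioQueueRef queue,
                               AudioQueueBufferRef buffer) {
        fillBuffer(buffer);
        AudioQueueEnqueueBuffer(queue, buffer, 0, NULL);
    }

    static AudioQueueRef createQueue(void) {
        // Describe the decoded audio: 44.1 kHz, 16-bit signed, stereo, interleaved.
        AudioStreamBasicDescription fmt = {0};
        fmt.mSampleRate       = 44100;
        fmt.mFormatID         = kAudioFormatLinearPCM;
        fmt.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
        fmt.mChannelsPerFrame = 2;
        fmt.mBitsPerChannel   = 16;
        fmt.mBytesPerFrame    = fmt.mChannelsPerFrame * sizeof(int16_t);
        fmt.mFramesPerPacket  = 1;
        fmt.mBytesPerPacket   = fmt.mBytesPerFrame;

        AudioQueueRef queue = NULL;
        AudioQueueNewOutput(&fmt, outputCallback, NULL, NULL, NULL, 0, &queue);
        return queue;
    }

You would then allocate a few buffers with AudioQueueAllocateBuffer, prime them through the callback, and call AudioQueueStart.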
I'm doing research for an iPhone video chat app project.
I've tried this to capture images:
use AVCaptureVideoPreviewLayer to display the camera view,
and put an MPMoviePlayerController behind it to play the network stream video.
They all work until I add an audio input to the AVCaptureSession.
The MPMoviePlayerController stops the AVCaptureSession if there is an audio input.
I'm thinking of using AVAudioSession for both playing and recording audio, and some other way to play video,
but the documentation for AVAudioPlayer says:
"Apple recommends that you use this class for audio playback unless you are playing audio captured from a network stream or require very low I/O latency."
I found the Multimedia Programming Guide, which says:
"To provide lowest latency audio, especially when doing simultaneous input and output (such as for a VoIP application), use the I/O unit or the Voice Processing I/O unit."
Is this the correct direction to implement video chat on the iPhone?
Hi all,
I am playing a video (.mp4 format) without sound (I mean it doesn't have sound; it is a mute video), and in the background I am playing an audio file (.mp3 format). When I run my code in the simulator it works fine, just as I want: when I tap on the video, the video itself is mute, but I am playing the audio behind it, so to the user it seems that the video has sound. But when I install my code on the device and play the video, it doesn't work like that; it plays the video but without sound. How can I play an audio and a video together in the above formats?
Actually, we are not just playing a single video or audio file; the video and the audio file are each chosen randomly from an array, so I think we can't combine them in advance. Any other idea for it?
Should we use another format for the audio or video to make this work?
Thanks for the help,
Balraj verma
The problem is that the default audio session does not allow audio mixing.
In the Reference Library (Working with Movie Players), they say you should use a mixable category configuration for your audio session, for example the Ambient category. In the application delegate:
    AVAudioSession *audioSession = [AVAudioSession sharedInstance];
    NSError *setCategoryError = nil;
    // Ambient is a mixable category, so movie playback and your own
    // audio can sound at the same time.
    [audioSession setCategory:AVAudioSessionCategoryAmbient error:&setCategoryError];
Apple's documentation states that some audio formats are not suited to be played back simultaneously because of hardware restrictions. See Playing Multiple Sounds Simultaneously for more details.
A solution for you would be to use PCM encoded audio which should allow simultaneous playback.
I would like to add that I managed to play two mp3 audio files at the same time on 3G and 3GS iPhones, which shouldn't be possible according to the documentation, but it worked for me.
You can instead use an instance of AVAudioPlayer on a new thread. See the following link for how this works:
http://www.mobileorchard.com/easy-audio-playback-with-avaudioplayer/
Then create an instance of MPMoviePlayerController to start playback of the videos.
Hope this will work.
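Putting the two suggestions together, a sketch (`audioURL` and `videoURL` are placeholders; this also assumes the mixable session category from the earlier answer; untested):

    NSError *error = nil;
    // Keep a strong reference to `audio` (e.g. a property), or it will be
    // deallocated and playback will stop.
    AVAudioPlayer *audio = [[AVAudioPlayer alloc] initWithContentsOfURL:audioURL error:&error];
    [audio prepareToPlay];
    [audio play];

    MPMoviePlayerController *movie = [[MPMoviePlayerController alloc] initWithContentURL:videoURL];
    movie.view.frame = self.view.bounds;
    [self.view addSubview:movie.view];
    [movie play];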
You should combine the audio and video beforehand using video editing software like iMovie or Windows Movie Maker. Compositing the audio and video together in your own code at runtime is not a good idea; it adds extra overhead, synchronization issues, etc.
As far as I know, you can't play an AVAudioPlayer and an MPMoviePlayerController at the same time.