I've got quite a strange problem here.
I am developing an IM software and need to play audio files recorded by another client on Android.
The same audio file can be played with AVAudioPlayer on a 3GS device (iOS 4.2.1) and on the 4.2 simulator.
But when I try to play it on an iPhone 4 (iOS 4.3.3), the play method always returns NO.
I also tried with two iPhone devices: audio files recorded by the iPhone client can be played on both the 3GS and the iPhone 4.
So I asked the Android developers about the recording parameters they used. They said that the AudioEncoder they used was DEFAULT. The related constants are as follows:
private AudioEncoder() {}
public static final int DEFAULT = 0;
/** AMR (Narrowband) audio codec */
public static final int AMR_NB = 1;
/** @hide AMR (Wideband) audio codec */
public static final int AMR_WB = 2;
/** @hide AAC audio codec */
public static final int AAC = 3;
/** @hide enhanced AAC audio codec */
public static final int AAC_PLUS = 4;
/** @hide enhanced AAC plus audio codec */
public static final int EAAC_PLUS = 5;
Does anybody know what the problem is?
I've solved the issue.
On iOS 4.3 and later, the AMR format is no longer supported. If you want to play or record AMR audio, you have to convert it with some other tool (I use the opencore-amr library) to transform the format.
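For example, decoding an AMR-NB file to PCM with opencore-amr looks roughly like this (a minimal sketch, not my exact code: the header path, buffer handling and error checks are assumptions, and the frame-size table covers the standard AMR-NB modes):

#include <stddef.h>
#include <opencore-amrnb/interf_dec.h>   // opencore-amr AMR-NB decoder interface (header path may differ in your setup)

// payload sizes in bytes (excluding the 1-byte TOC) for AMR-NB modes 0-8
static const int amrnb_frame_sizes[16] = {12, 13, 15, 17, 19, 20, 26, 31, 5, 0, 0, 0, 0, 0, 0, 0};

void DecodeAmrNbToPcm(const unsigned char *amr, size_t length, short *pcmOut)
{
    size_t pos = 6;                               // skip the "#!AMR\n" file header
    void *decoder = Decoder_Interface_init();
    while (pos < length) {
        int mode = (amr[pos] >> 3) & 0x0F;        // frame type from the TOC byte
        Decoder_Interface_Decode(decoder, &amr[pos], pcmOut, 0);
        pcmOut += 160;                            // 160 samples per 20 ms frame at 8 kHz
        pos += amrnb_frame_sizes[mode] + 1;
    }
    Decoder_Interface_exit(decoder);
}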
We are using ExoPlayer v2.x to play an HLS stream that has 4 bitrate tracks.
When we configure ExoPlayer for adaptive playback, it starts with a higher bitrate track but does NOT switch back to a lower bitrate track when we throttle the network speed using Charles. The player seems to stick with the already selected higher bitrate track and keeps on buffering instead of switching to a lower bitrate one.
We have configured ExoPlayer in the following way:
private DefaultBandwidthMeter BANDWIDTH_METER =
        new DefaultBandwidthMeter(mUiUpdateHandler, new BandwidthMeter.EventListener() {
            @Override
            public void onBandwidthSample(int elapsedMs, long bytes, long bitrate) {
                Log.v(TAG, "Elapsed Time in MS " + elapsedMs + " Bytes " + bytes + " Bitrate " + bitrate);
                bitrateEstimate = bitrate;
                bytesDownloaded = bytes;
            }
        });

TrackSelection.Factory adaptiveTrackSelectionFactory =
        new AdaptiveTrackSelection.Factory(BANDWIDTH_METER);
trackSelector = new DefaultTrackSelector(adaptiveTrackSelectionFactory);
player = ExoPlayerFactory.newSimpleInstance(getActivity(), trackSelector,
        new CustomLoadControl(new CustomLoadControl.EventListener() {
            @Override
            public void onBufferedDurationSample(long bufferedDurationUs) {
                long bufferedDurationMs = bufferedDurationUs / 1000;
            }
        }, mUiUpdateHandler), drmSessionManager, extensionRendererMode);
Can anyone please confirm that this is the correct way to configure the player? Also, has anyone observed this problem and found a fix for it?
Thanks in advance.
I'm using libogg and libspeex. I've succeeded in adding those libraries to my iPhone Xcode project and encoding my voice with Speex. The problem is that I cannot figure out how to pack those audio packets into an Ogg container. Does someone know what such a packet should look like, or have reference code I can use?
I know that in Java it's pretty simple (there's a dedicated function for it), but not on iOS. Please help.
UPD 10.09.2013: Please see the demo project, which basically takes PCM audio data from a WAV container, encodes it with the Speex codec and packs everything into an Ogg container. Maybe later I'll create a full-fledged library/framework for all these Speex routines on iOS.
UPD 16.02.2015: The demo project is republished on GitHub.
I also have been experimenting with Speex on iOS recently, with varied success, but here is something I found. Basically, if you want to pack some speex-encoded voice into an ogg file, you need to follow three steps (assuming libogg and libspeex are already compiled and added to the project).
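One assumption in the snippets below is that the ogg stream state has already been initialized once, with an arbitrary serial number, before any packets are submitted, e.g.:

#include <stdlib.h>
// call once before packing any packets; the serial number just has to be unique per logical stream
ogg_stream_init(&_oggStreamState, rand());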
1) Add the first ogg page with Speex header; libspeex provides built-in tools for that (the code below is from my project, not optimal, just for the sake of example):
// create speex header
SpeexHeader spxHeader;
SpeexMode spxMode = speex_wb_mode;
int spxRate = 16000;
int spxNumberOfChannels = 1;
speex_init_header(&spxHeader, spxRate, spxNumberOfChannels, &spxMode);
// set audio and ogg packing parameters
spxHeader.vbr = 0;
spxHeader.bitrate = 16;
spxHeader.frame_size = 320;
spxHeader.frames_per_packet = 1;
// wrap speex header in ogg packet
int oggPacketSize;
_oggPacket.packet = (unsigned char *)speex_header_to_packet(&spxHeader, &oggPacketSize);
_oggPacket.bytes = oggPacketSize;
_oggPacket.b_o_s = 1;
_oggPacket.e_o_s = 0;
_oggPacket.granulepos = 0;
_oggPacket.packetno = 0;
// submit the packet to the ogg streaming layer
ogg_stream_packetin(&_oggStreamState, &_oggPacket);
free(_oggPacket.packet);
// form an ogg page
ogg_stream_flush(&_oggStreamState, &_oggPage);
// write the page to file
[_oggFile appendBytes:&_oggStreamState.header length:_oggStreamState.header_fill];
[_oggFile appendBytes:_oggStreamState.body_data length:_oggStreamState.body_fill];
2) Add the second ogg page with Vorbis comment:
// form any comment you like (I use custom struct with all fields)
vorbisCommentStruct *vorbisComment = calloc(sizeof(vorbisCommentStruct), sizeof(char));
...
// wrap Vorbis comment in ogg packet
_oggPacket.packet = (unsigned char *)vorbisComment;
_oggPacket.bytes = vorbisCommentLength;
_oggPacket.b_o_s = 0;
_oggPacket.e_o_s = 0;
_oggPacket.granulepos = 0;
_oggPacket.packetno = _oggStreamState.packetno;
// the rest should be same as in previous step
...
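If you don't want a custom struct, a minimal sketch of such a comment packet is just the vendor string plus an empty user-comment list (this layout follows the Vorbis comment format; the code below is an illustration and assumes a little-endian host, which holds on iOS devices):

const char *vendor = "Encoded with Speex";                       // any vendor string will do
uint32_t vendorLength = (uint32_t)strlen(vendor);
uint32_t userCommentCount = 0;
long vorbisCommentLength = 4 + vendorLength + 4;
unsigned char *vorbisComment = calloc(vorbisCommentLength, 1);
memcpy(vorbisComment, &vendorLength, 4);                         // vendor string length (little endian)
memcpy(vorbisComment + 4, vendor, vendorLength);                 // vendor string
memcpy(vorbisComment + 4 + vendorLength, &userCommentCount, 4);  // number of user comments = 0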
3) Add subsequent ogg pages with your speex-encoded audio in a similar manner.
First of all, decide how many frames of audio data you want on every ogg page (0-255; I chose 79 quite arbitrarily):
_framesPerOggPage = 79;
Then for each frame:
// calculate current granule position of audio data within ogg file
int curGranulePos = _spxSamplesPerFrame * _oggTotalFramesCount;
// wrap audio data in ogg packet
oggPacket.packet = (unsigned char *)spxFrame;
oggPacket.bytes = spxFrameLength;
oggPacket.granulepos = curGranulePos;
oggPacket.packetno = _oggStreamState.packetno;
oggPacket.b_o_s = 0;
oggPacket.e_o_s = 0;
// submit packets to streaming layer until their number reaches _framesPerOggPage
...
// if we've reached this limit, we're ready to create another ogg page
ogg_stream_flush(&_oggStreamState, &_oggPage);
[_oggFile appendBytes:&_oggStreamState.header length:_oggStreamState.header_fill];
[_oggFile appendBytes:_oggStreamState.body_data length:_oggStreamState.body_fill];
// finally, if this is the last frame, flush all remaining packets,
// which have been created but not packed into a page, to the last page
// (don't forget to set oggPacket.e_o_s to 1 for this frame)
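For reference, the spxFrame / spxFrameLength used in the loop above are assumed to come from the Speex encoder; a minimal sketch of producing one encoded wideband frame (the buffer names here are hypothetical) might look like this:

// assumption: pcmFrame holds 320 samples of 16-bit PCM (one 20 ms wideband frame);
// create the encoder and bits structure once and reuse them for every frame
SpeexBits spxBits;
void *spxEncoder = speex_encoder_init(&speex_wb_mode);
speex_bits_init(&spxBits);
speex_bits_reset(&spxBits);
speex_encode_int(spxEncoder, pcmFrame, &spxBits);
int spxFrameLength = speex_bits_write(&spxBits, (char *)spxFrame, maxFrameBytes);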
That's it. Hope it will help. Any corrections or questions are welcome.
I decode AMR-NB to PCM and then enqueue the PCM buffer (I'm sure the PCM data is right), but no sound is heard. While feeding the buffer, the log outputs:
/AudioTrack(14857): obtainBuffer timed out (is the CPU pegged?)
My code is below, and my questions are:
Is there something wrong with the way I use OpenSL ES?
Is it true that OpenSL ES only works on a real device?
Sample code:
void AudioTest()
{
    StartAudioPlay();
    while(1)
    {
        //decode AMR to PCM
        /* Convert to little endian and write to wav */
        //write buffer to buffer queue
        AudioBufferWrite(littleendian, 320);
    }
}

void bqPlayerCallback(SLAndroidSimpleBufferQueueItf bq, void *context)
{
    //do nothing
}

void AudioBufferWrite(const void* buffer, int size)
{
    (*gBQBufferQueue)->Enqueue(gBQBufferQueue, buffer, size);
}

// create buffer queue audio player
void SlesCreateBQPlayer(/*AudioCallBackSL funCallback, void *soundMix,*/ int rate, int nChannel, int bitsPerSample)
{
    SLresult result;

    // configure audio source
    SLDataLocator_AndroidSimpleBufferQueue loc_bufq = {SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE, 2};
    SLDataFormat_PCM format_pcm = {SL_DATAFORMAT_PCM, 1, SL_SAMPLINGRATE_8,
        SL_PCMSAMPLEFORMAT_FIXED_16, SL_PCMSAMPLEFORMAT_FIXED_16,
        SL_SPEAKER_FRONT_CENTER, SL_BYTEORDER_LITTLEENDIAN};
    SLDataSource audioSrc = {&loc_bufq, &format_pcm};

    // configure audio sink
    SLDataLocator_OutputMix loc_outmix = {SL_DATALOCATOR_OUTPUTMIX, gOutputMixObject};
    SLDataSink audioSnk = {&loc_outmix, NULL};

    // create audio player
    const SLInterfaceID ids[3] = {SL_IID_BUFFERQUEUE, SL_IID_EFFECTSEND, SL_IID_VOLUME};
    const SLboolean req[3] = {SL_BOOLEAN_TRUE, SL_BOOLEAN_TRUE, SL_BOOLEAN_TRUE};
    result = (*gEngineEngine)->CreateAudioPlayer(gEngineEngine, &gBQObject, &audioSrc, &audioSnk,
        3, ids, req);

    // realize the player
    result = (*gBQObject)->Realize(gBQObject, SL_BOOLEAN_FALSE);

    // get the play interface
    result = (*gBQObject)->GetInterface(gBQObject, SL_IID_PLAY, &gBQPlay);

    // get the buffer queue interface
    result = (*gBQObject)->GetInterface(gBQObject, SL_IID_BUFFERQUEUE, &gBQBufferQueue);

    // register callback on the buffer queue
    result = (*gBQBufferQueue)->RegisterCallback(gBQBufferQueue, bqPlayerCallback, NULL/*soundMix*/);

    // get the effect send interface
    result = (*gBQObject)->GetInterface(gBQObject, SL_IID_EFFECTSEND, &gBQEffectSend);

    // set the player's state to playing
    result = (*gBQPlay)->SetPlayState(gBQPlay, SL_PLAYSTATE_PLAYING);
}
I'm not entirely sure, but I think you're correct in that the emulator's OpenSL ES support doesn't actually work. I've never gotten it to work in practice, while it works on any device I've tried.
In my application I have to support Android 2.2 as well, so I have a fallback to use JNI to access the Java AudioTrack APIs. I added a special case to my app to always use the AudioTrack interface when the emulator is detected.
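For what it's worth, one way to detect the emulator from native code is to check the ro.kernel.qemu system property; this is just one possible check, sketched here as an assumption rather than the exact code I use:

#include <sys/system_properties.h>

// returns nonzero when running under the emulator (ro.kernel.qemu is "1" there)
static int running_on_emulator(void)
{
    char value[PROP_VALUE_MAX] = "";
    __system_property_get("ro.kernel.qemu", value);
    return value[0] == '1';
}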
I would like to capture live video with two USB webcams (Philips SPC 900NC), but I found that they cannot work simultaneously on my laptop. Either of the two USB webcams can work alone, or together with the webcam built into my laptop.
When I use the Simulink block 'From Video Device', MATLAB gives the error message 'Multiple VIDEOINPUT objects cannot access the same device simultaneously.' When I check the video input devices with the command 'imaqhwinfo', only one of the USB Philips webcams is detected.
I would like to know:
What is the reason for this? Is it a hardware limitation (USB bus bandwidth), or do MATLAB video input objects just not support multiple identical devices?
What is the solution? Could anyone give me some suggestions?
You may be interested in this link:
http://opencv.willowgarage.com/wiki/faq#How_to_use_2_cameras_.28multiple_cameras.29_with_cvCam_library
Which contains:
First, initialize the cvcam library and get the number of cameras:
int ncams = cvcamGetCamerasCount( ); //returns the number of available cameras in the system
Show a dialog to choose which cameras to use:
int* out; int nselected = cvcamSelectCamera(&out);
Get the selected cams and enable them.
int cam1 = out[0];
int cam2 = out[1];
cvcamSetProperty(cam1, CVCAM_PROP_ENABLE, CVCAMTRUE);
cvcamSetProperty(cam1, CVCAM_PROP_RENDER, CVCAMTRUE); //We'll render stream from this source
cvNamedWindow("Cam1", 1);
cvcamWindow MyWin1 = (cvcamWindow)cvGetWindowHandle("Cam1");
cvcamSetProperty(cam1, CVCAM_PROP_WINDOW, &MyWin1); // Selects a window for video rendering
//Same code for camera 2
cvcamSetProperty(cam2, CVCAM_PROP_ENABLE, CVCAMTRUE);
cvcamSetProperty(cam2, CVCAM_PROP_RENDER, CVCAMTRUE);
cvNamedWindow("Cam2", 1);
cvcamWindow MyWin2 = (cvcamWindow)cvGetWindowHandle("Cam2");
cvcamSetProperty(cam2, CVCAM_PROP_WINDOW, &MyWin2);
//If you want to open the property dialog for setting the video format parameters, uncomment this line
//cvcamGetProperty(cam1, CVCAM_VIDEOFORMAT, NULL);
//cvcamGetProperty(cam2, CVCAM_VIDEOFORMAT, NULL);
Enable the stereo mode (2 cameras working at the same time):
cvcamSetProperty(cam1, CVCAM_STEREO_CALLBACK, stereocallback); //stereocallback is the function called to process every pair of frames
cvcamInit();
cvcamStart();
//Your app is working
while (1)
{
    int key = cvWaitKey(5);
    if (key == 27) break;
}
cvcamStop();
cvcamExit();
Define the stereocallback function outside of the function above.
void stereocallback(IplImage* image1, IplImage* image2) {
//Process 2 images here
}
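If you need a concrete starting point for the processing itself, something like this inside the callback works (illustrative only; rendering is already handled by CVCAM_PROP_RENDER, and cvAbsDiff assumes both cameras deliver frames of the same size and format):

// inside stereocallback: per-pixel difference between the two views
IplImage* diff = cvCreateImage(cvGetSize(image1), image1->depth, image1->nChannels);
cvAbsDiff(image1, image2, diff);
// use diff here
cvReleaseImage(&diff);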
@constant kAudioSessionProperty_AudioInputAvailable
A UInt32 with a value other than zero when audio input is available.
Use this property, rather than the device model, to determine if audio input is available.
A listener will notify you when audio input becomes available. For instance, when a headset is attached
to the second generation iPod Touch, audio input becomes available via the wired microphone.
So, if I wanted to get notified about kAudioSessionProperty_AudioInputAvailable, how would I do that?
You set up the listener like this:
AudioSessionAddPropertyListener(kAudioSessionProperty_AudioInputAvailable, myCallback, NULL);
You have to define a callback function which gets called whenever the value changes:
void myCallback(void* inClientData, AudioSessionPropertyID inID, UInt32 inDataSize, const void* inData)
{
    printf("value changed\n");
}
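Note that the audio session must have been initialized with AudioSessionInitialize before the listener is added. And if you want to read the current value rather than wait for a change, something along these lines should work (the callback's inData carries the same UInt32 when the property changes):

UInt32 inputAvailable = 0;
UInt32 size = sizeof(inputAvailable);
AudioSessionGetProperty(kAudioSessionProperty_AudioInputAvailable, &size, &inputAvailable);
printf("audio input available now: %s\n", inputAvailable ? "yes" : "no");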