Why is ExoPlayer 2.x not switching to lower bitrates on a slow network during adaptive playback? - exoplayer2.x

We are using ExoPlayer v2.x to play an HLS stream that has 4 bitrate tracks.
When we configure ExoPlayer for adaptive playback, it starts on a higher bitrate track but does NOT switch down to a lower bitrate track when we throttle the network speed using Charles. The player sticks with the already selected higher bitrate track and keeps buffering instead of switching to a lower bitrate one.
We have configured the player as follows:
private DefaultBandwidthMeter BANDWIDTH_METER =
        new DefaultBandwidthMeter(mUiUpdateHandler, new BandwidthMeter.EventListener() {
            @Override
            public void onBandwidthSample(int elapsedMs, long bytes, long bitrate) {
                Log.v(TAG, "Elapsed Time in MS " + elapsedMs + " Bytes " + bytes + " Bitrate " + bitrate);
                bitrateEstimate = bitrate;
                bytesDownloaded = bytes;
            }
        });

TrackSelection.Factory adaptiveTrackSelectionFactory =
        new AdaptiveTrackSelection.Factory(BANDWIDTH_METER);
trackSelector = new DefaultTrackSelector(adaptiveTrackSelectionFactory);
player = ExoPlayerFactory.newSimpleInstance(getActivity(), trackSelector,
        new CustomLoadControl(new CustomLoadControl.EventListener() {
            @Override
            public void onBufferedDurationSample(long bufferedDurationUs) {
                long bufferedDurationMs = bufferedDurationUs / 1000; // microseconds -> milliseconds
            }
        }, mUiUpdateHandler), drmSessionManager, extensionRendererMode);
Can anyone confirm whether this is the correct way to configure the player? Has anyone else observed this problem, and is there a known fix for it?
Thanks in advance.
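One thing worth double-checking (a sketch based on the stock ExoPlayer 2.x APIs, not taken from the question): the bandwidth estimate only updates if the same DefaultBandwidthMeter instance is also passed as the TransferListener to the DataSource.Factory that the HLS media source downloads through. If the media source is built with a separate data source factory, onBandwidthSample is never fired and AdaptiveTrackSelection has no reason to switch down. Roughly like this, where the user agent and manifest URI are placeholders and the exact HlsMediaSource constructor depends on the 2.x minor version:

// Reuse the same BANDWIDTH_METER so throttled segment downloads feed the adaptive selection.
DataSource.Factory dataSourceFactory = new DefaultDataSourceFactory(
        getActivity(),
        Util.getUserAgent(getActivity(), "yourApplicationName"), // placeholder app name
        BANDWIDTH_METER);
MediaSource hlsMediaSource = new HlsMediaSource(
        Uri.parse("https://example.com/master.m3u8"), // placeholder manifest URI
        dataSourceFactory,
        mUiUpdateHandler,
        null /* eventListener */);
player.prepare(hlsMediaSource);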

Related

Web audio playback contains clicks

I am trying to build a MIDI player using the Web Audio API. I used Tone.js to parse the MIDI file into JSON, and I am using MP3 files to play the notes. The relevant parts of the code are below:
//create audio samples
static async setupSample(audioContext, filepath) {
  const response = await fetch(filepath);
  const arrayBuffer = await response.arrayBuffer();
  const audioBuffer = await audioContext.decodeAudioData(arrayBuffer);
  return audioBuffer;
}

//play a single sample
static playSample(audioContext, audioBuffer, time) {
  const sampleSource = new AudioBufferSourceNode(audioContext, {
    buffer: audioBuffer,
    playbackRate: 1,
  });
  sampleSource.connect(audioContext.destination);
  sampleSource.start(time);
  return sampleSource;
}
Scheduling samples:
async start() {
  this.startTime = this.audioCtx.currentTime;
  this.play();
}

play() {
  let nextNote = this.notes[this.noteIndex];
  //schedule samples
  while ((nextNote.time + this.startTime) - this.audioCtx.currentTime <= 0.250) {
    let s = Audio.playSample(this.audioCtx, this.samples[nextNote.midi], this.startTime + nextNote.time);
    s.stop(this.startTime + nextNote.time + nextNote.duration);
    this.noteIndex++;
    if (this.noteIndex == this.notes.length) {
      break;
    }
    nextNote = this.notes[this.noteIndex];
  }
  if (this.noteIndex == this.notes.length) {
    return;
  }
  requestAnimationFrame(() => {
    this.play();
  });
}
I am testing the code with a MIDI file that contains a C major scale. I have tested the MIDI file with timidity and it is fine.
The code plays the MIDI file correctly except for one small problem: I hear clicking sounds during playback. The clicking increases with increasing tempo but does not completely go away even at a tempo as low as 50 bpm. Any ideas what could be going wrong?
Full code can be viewed at: https://test.meedee.in/
Nothing is "wrong". You are observing a phenomenon intrinsic to the physics of audio.
Chopping audio samples arbitrarily like this creates clicks at the transitions. Any instantaneous change in level is heard as a click. To get rid of the clicks, apply an envelope to the sample, blend adjacent notes, or apply a low-pass filter.
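In the Web Audio API the usual way to apply such an envelope is to route each AudioBufferSourceNode through a GainNode and ramp its gain with setValueAtTime / linearRampToValueAtTime around the note's start and stop times. The idea itself is independent of the API; as a rough language-agnostic sketch (a hypothetical Java helper, not part of the question's code), fading the edges of a raw 16-bit PCM buffer looks like this:

// Apply a short linear fade-in and fade-out so the signal never jumps
// instantaneously from or to silence, which is what produces the click.
static void applyFade(short[] pcm, int sampleRate, double fadeMs) {
    int fadeSamples = Math.min(pcm.length / 2, (int) (sampleRate * fadeMs / 1000.0));
    for (int i = 0; i < fadeSamples; i++) {
        double gain = (double) i / fadeSamples;   // ramps from 0.0 to 1.0
        pcm[i] = (short) (pcm[i] * gain);         // fade-in at the start
        int j = pcm.length - 1 - i;
        pcm[j] = (short) (pcm[j] * gain);         // fade-out at the end
    }
}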

AudioClip.Length is incorrect when loading from UnityWebRequestMultimedia GetAudioClip

When I get a Unity audio clip from Firebase by URL:
var request = UnityWebRequestMultimedia.GetAudioClip(url, AudioType.MPEG);
yield return request.SendWebRequest();
var clip = DownloadHandlerAudioClip.GetContent(request);
print("success..." + clip.length);
The audio clip's real length is 4.86, but after download clip.length is 8.1.
My uploaded audio clip was recorded with the Unity Microphone and converted to MP3 data with "SavWav".
The incorrect length of the downloaded audio clip is a Unity bug that has not been fixed.
I use SavWav.TrimSilence(clip, 0, (clip_) => {...}) to get the correct length.
Hope that helps.

Stream synthetized audio in real time in flutter

I'm trying to create an app that generates a continuous sine wave of varying frequency (controlled by the user), and I'm trying to play the data in real time as it is generated.
I'm using just_audio right now to play bytes generated with wave_generator, as follows (snippet from the issue):
class BufferAudioSource extends StreamAudioSource {
  final Uint8List _buffer;

  BufferAudioSource(this._buffer) : super(tag: "Bla");

  @override
  Future<StreamAudioResponse> request([int? start, int? end]) {
    start = start ?? 0;
    end = end ?? _buffer.length;
    return Future.value(
      StreamAudioResponse(
        sourceLength: _buffer.length,
        contentLength: end - start,
        offset: start,
        contentType: 'audio/wav',
        stream: Stream.value(List<int>.from(_buffer.skip(start).take(end - start))),
      ),
    );
  }
}
And I'm using the audio source like this:
StreamAudioSource _source = BufferAudioSource(_data!);
_player.setAudioSource(_source);
_player.play();
Is there a way I could feed the data to the player as soon as I generate it on the fly with a sine wave generator, so that if the user changes the frequency the playback reflects the change as soon as it happens?
I tried looking online and in the GitHub repository, but I couldn't find anything.

OpenSL ES can not play audio on Android emulator

I decode AMR-NB to PCM, then enqueue the PCM buffer (I'm sure the PCM data is correct), but no sound is heard. While feeding the buffer, the log outputs:
/AudioTrack(14857): obtainBuffer timed out (is the CPU pegged?)
My code is below, and my questions are:
Is there something wrong with the way I am using OpenSL ES?
Is it true that OpenSL ES only works on a real device?
Sample code:
void AudioTest()
{
    StartAudioPlay();
    while(1)
    {
        //decode AMR to PCM
        /* Convert to little endian and write to wav */
        //write buffer to buffer queue
        AudioBufferWrite(littleendian, 320);
    }
}

void bqPlayerCallback(SLAndroidSimpleBufferQueueItf bq, void *context)
{
    //do nothing
}

void AudioBufferWrite(const void* buffer, int size)
{
    (*gBQBufferQueue)->Enqueue(gBQBufferQueue, buffer, size);
}

// create buffer queue audio player
void SlesCreateBQPlayer(/*AudioCallBackSL funCallback, void *soundMix,*/ int rate, int nChannel, int bitsPerSample)
{
    SLresult result;

    // configure audio source
    SLDataLocator_AndroidSimpleBufferQueue loc_bufq = {SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE, 2};
    SLDataFormat_PCM format_pcm = {SL_DATAFORMAT_PCM, 1, SL_SAMPLINGRATE_8,
        SL_PCMSAMPLEFORMAT_FIXED_16, SL_PCMSAMPLEFORMAT_FIXED_16,
        SL_SPEAKER_FRONT_CENTER, SL_BYTEORDER_LITTLEENDIAN};
    SLDataSource audioSrc = {&loc_bufq, &format_pcm};

    // configure audio sink
    SLDataLocator_OutputMix loc_outmix = {SL_DATALOCATOR_OUTPUTMIX, gOutputMixObject};
    SLDataSink audioSnk = {&loc_outmix, NULL};

    // create audio player
    const SLInterfaceID ids[3] = {SL_IID_BUFFERQUEUE, SL_IID_EFFECTSEND, SL_IID_VOLUME};
    const SLboolean req[3] = {SL_BOOLEAN_TRUE, SL_BOOLEAN_TRUE, SL_BOOLEAN_TRUE};
    result = (*gEngineEngine)->CreateAudioPlayer(gEngineEngine, &gBQObject, &audioSrc, &audioSnk,
        3, ids, req);

    // realize the player
    result = (*gBQObject)->Realize(gBQObject, SL_BOOLEAN_FALSE);

    // get the play interface
    result = (*gBQObject)->GetInterface(gBQObject, SL_IID_PLAY, &gBQPlay);

    // get the buffer queue interface
    result = (*gBQObject)->GetInterface(gBQObject, SL_IID_BUFFERQUEUE, &gBQBufferQueue);

    // register callback on the buffer queue
    result = (*gBQBufferQueue)->RegisterCallback(gBQBufferQueue, bqPlayerCallback, NULL/*soundMix*/);

    // get the effect send interface
    result = (*gBQObject)->GetInterface(gBQObject, SL_IID_EFFECTSEND, &gBQEffectSend);

    // set the player's state to playing
    result = (*gBQPlay)->SetPlayState(gBQPlay, SL_PLAYSTATE_PLAYING);
}
I'm not entirely sure, but I think you're correct in that the emulator's OpenSL ES support doesn't actually work. I've never gotten it to work in practice, while it works on any device I've tried.
In my application I have to support Android 2.2 as well, so I have a fallback to use JNI to access the Java AudioTrack APIs. I added a special case to my app to always use the AudioTrack interface when the emulator is detected.
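For reference, the emulator check described above is usually just a heuristic on android.os.Build fields; a minimal sketch (the helper name is made up and the heuristic is not exhaustive):

// Rough emulator heuristic used to decide when to fall back to the AudioTrack path.
static boolean isProbablyEmulator() {
    return Build.FINGERPRINT.startsWith("generic")
            || Build.MODEL.contains("google_sdk")
            || Build.MODEL.contains("Emulator")
            || Build.PRODUCT.contains("sdk");
}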

How to capture continuous images in Android

I'm trying to develop an Android application that takes continuous images, just like the native camera in continuous shooting mode, for 10 to 20 seconds.
I followed the sample program from the site
http://marakana.com/forums/android/examples/39.html
Now I want to enhance this code to take continuous images (for 10 to 20 seconds).
First I tried to take 10 pictures using a for loop: I just put the takePicture() call inside the loop, but that's not working.
Do I need to use threads? If yes, which part should I put in a thread: the image capturing or the saving to the SD card?
If anybody has sample code for taking continuous images, please share.
Just put a counter in the jpegCallback that decrements and calls your takePicture() again until the desired number of pictures has been taken:
int pictureCounter = 10;

PictureCallback jpegCallback = new PictureCallback() {
    @Override
    public void onPictureTaken(byte[] data, Camera camera) {
        // save your picture
        if (--pictureCounter >= 0) {
            takePicture();
        } else {
            pictureCounter = 10; // reset the counter
        }
    }
};
I know it is very late to reply, but I just came across this question and thought it would be helpful for future visitors.
PictureCallback jpegCallback = new PictureCallback() {
    public void onPictureTaken(byte[] data, Camera camera) {
        // Save picture here
        preview.camera.stopPreview();
        // if condition
        preview.camera.startPreview();
        // end if condition
    }
};
};