Delay when using AudioQueueStart() - iPhone

I am using Audio Queue Services to record audio on the iPhone, but I am running into a latency issue when starting recording. Here is the code (approximately):
OSStatus status = AudioQueueNewInput(
    &recordState.dataFormat,   // audio format
    AudioInputCallback,        // input callback
    &recordState,              // user data passed to the callback
    CFRunLoopGetCurrent(),     // run loop for the callback
    kCFRunLoopCommonModes,     // run loop mode
    0,                         // flags (reserved, must be 0)
    &recordState.queue);       // the new queue
// Create buffers
for (int i = 0; i < NUM_BUFFERS; i++)
{
    if (status == 0)
        status = AudioQueueAllocateBuffer(recordState.queue, BUFFER_SIZE, &recordState.buffers[i]);
}
DebugLog(#"Starting recording\n");
OSStatus status = 0;
for(int i = 0; i < NUM_BUFFERS; i++)
{
if (status == 0)
status = AudioQueueEnqueueBuffer(recordState.queue, recordState.buffers[i], 0, NULL);
}
DebugLog(#"Queued buffers\n");
if (status == 0)
{
// start audio queue
status = AudioQueueStart(recordState.queue, NULL);
}
DebugLog(#"Started recording, status = %d\n", status);
The log output looks like this:
2009-06-30 19:18:59.631 app[24887:20b] Starting recording
2009-06-30 19:18:59.828 app[24887:20b] Queued buffers
2009-06-30 19:19:00.849 app[24887:20b] Started recording, status = 0
Note the roughly one-second delay between the "Queued buffers" message and the "Started recording" message. Any ideas how I can get rid of it, apart from starting recording as soon as I start my app?
By the way, the one-second delay is fairly consistent on both the Simulator and the device, and doesn't seem to be affected by the number or size of the buffers. I'm using good old mono 16-bit PCM.

Michael Tyson covers this on his blog.
However, if you're looking to start recording quickly, you would do better to use a Remote I/O audio unit or AVAudioEngine.
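For example, here is a minimal AVAudioEngine sketch (my own illustration, not from the original answer; it assumes microphone permission has already been granted). Because the engine keeps the audio hardware running, installing a tap and starting the engine typically begins delivering input buffers much faster than creating and starting a fresh audio queue:
#import <AVFoundation/AVFoundation.h>

AVAudioEngine *engine = [[AVAudioEngine alloc] init];
AVAudioInputNode *input = engine.inputNode;
AVAudioFormat *format = [input outputFormatForBus:0];

// Deliver microphone audio to the block as it arrives
[input installTapOnBus:0
            bufferSize:1024
                format:format
                 block:^(AVAudioPCMBuffer *buffer, AVAudioTime *when) {
    // Handle the recorded samples here
}];

NSError *error = nil;
if (![engine startAndReturnError:&error]) {
    NSLog(@"Engine failed to start: %@", error);
}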


I have two I2C devices. Each works well alone, but when I read from one and then write to the other, the second one does not acknowledge

I have two devices, an MPU6050 and a 24C256 EEPROM. I can write to and read from each of them alone, but when I try to read from the MPU6050 and then write data to the EEPROM in the same session, the EEPROM does not respond. I am using the mbed OS libraries. Is this a library, code, or hardware problem?
MPU6050 read sequence: (timing diagram omitted)
EEPROM write page sequence: (timing diagram omitted)
// CODE
#include "mbed.h"

I2C i2c(I2C_SDA, I2C_SCL); // pin names are board-specific

const char imuAddress = 0x68 << 1; // MPU6050 7-bit address, shifted as mbed expects
const char imuDataAddress = 0x3B;  // first accelerometer data register
const char eepAddress = 0xA0;      // 24C256 EEPROM write address
const char data[3] = {1, 2, 3};    // the original {1.1, 2.2, 3.3} would truncate anyway: char cannot hold fractions
char acc[3];

// Read acceleration data from the IMU
while (true) {
    i2c.start();
    if (i2c.write(imuAddress) != 1) { // write() returns 1 on ACK
        i2c.stop();
        continue;
    }
    if (i2c.write(imuDataAddress) != 1) {
        i2c.stop();
        continue;
    }
    i2c.start(); // repeated start to switch to reading
    if (i2c.write(imuAddress | 0x01) != 1) {
        i2c.stop();
        continue;
    }
    for (int i = 0; i < 2; i++) {
        acc[i] = i2c.read(1); // read and respond with ACK to continue
    }
    acc[2] = i2c.read(0); // read and respond with NACK to stop reading
    i2c.stop();
    break;
}

// Write data to the EEPROM
while (true) {
    i2c.start();
    if (i2c.write(eepAddress) != 1) { // here is the problem (EEPROM does not respond)
        i2c.stop();
        continue;
    }
    if (i2c.write(0x00) != 1) { // memory address, high byte
        i2c.stop();
        continue;
    }
    if (i2c.write(0x00) != 1) { // memory address, low byte
        i2c.stop();
        continue;
    }
    bool ack = true;
    for (int i = 0; i < 3; i++) {
        if (i2c.write(data[i]) != 1) {
            i2c.stop();
            ack = false;
            break;
        }
    }
    if (ack == true) {
        i2c.stop();
        break;
    }
}
I have a couple of initial thoughts, but do you have access to an oscilloscope? It is odd that each one works individually; that makes me think the issue may be in the transition between the two transfers. (Possibly try a delay between the two transactions? Also, consider removing the stops in the initial read, as they aren't necessary.)
I think the best way to figure this out is to post a scope trace of the messaging for each device running individually, and a trace of them running back to back.

Knowing when I can skip to any point in an audio file without buffering / delay in playback

I'm loading an MP3 on my webpage using audio = new Audio(). I'd like to know that, when setting audio.currentTime, the audio can skip to any point in the file (near the end or wherever) without any delay in playback. That is, I want to know when the MP3 has been downloaded in its entirety.
Can I use the Audio object/element for this, or must I use an AudioContext as shown here?
Every audio element exposes its buffered data as a TimeRanges object. TimeRanges tells you how many continuous parts (ranges) are already buffered, and it has getters which return the respective start and end of each range in seconds.
If your audio element is named audio, the following code snippet will log the buffered time ranges at a given point in time.
const numberOfRanges = audio.buffered.length;
for (let i = 0; i < numberOfRanges; i += 1) {
    console.log(
        audio.buffered.start(i),
        audio.buffered.end(i)
    );
}
If you want to detect the point in time at which all data is buffered, you could use a check similar to this one:
const isBufferedCompletely = (audio.buffered.length === 1
    && audio.buffered.start(0) === 0
    && audio.buffered.end(0) === audio.duration);
I used the Gist referenced in the comments below to construct an example. The following snippet periodically checks whether the file is fully buffered, and logs a message to the console once that is the case. I tested it on Chrome (v74) and Firefox (v66) on OS X. Please note that the file can't be played at the same time, because the script sets the currentTime of the audio element.
const audio = new Audio('http://www.obamadownloads.com/mp3s/charleston-eulogy-speech.mp3');
audio.preload = 'auto';

function detectBuffered(duration) {
    // Stick with the duration once it is known, because it might get updated
    // when reaching the end of the file.
    if (duration === undefined && !isNaN(audio.duration)) {
        duration = audio.duration;
    }

    const isBufferedCompletely = (audio.buffered.length === 1
        && audio.buffered.start(0) === 0
        && audio.buffered.end(0) === duration);

    if (isBufferedCompletely) {
        const seconds = Math.round(duration);
        console.log('The complete file is buffered.');
        console.log(`It is about ${ seconds } seconds long.`);
    } else {
        // Move the playhead of the audio element to get the browser to load
        // the complete file.
        if (audio.buffered.length > 0) {
            audio.currentTime = Math.max(0, audio.buffered.end(0) - 1);
        }
        setTimeout(detectBuffered, 100, duration);
    }
}

detectBuffered();

Stdout redirection not working in iOS without debugger

I am trying to redirect output so I can send it over the network. For some reason, if you run the code with the debugger attached, it works perfectly. Once you start the application the normal way, the code freezes on the read function and never returns. If anyone has any pointers, I would highly appreciate them.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0), ^(void) {
    static int pipePair[2];
    if (pipe(pipePair) != 0) {
        return;
    }
    // Replace stdout with the write end of the pipe
    dup2(pipePair[1], STDOUT_FILENO);
    while (true) {
        char *buffer = calloc(sizeof(char), 1024);
        ssize_t readCount = read(pipePair[0], buffer, 1023);
        if (readCount > 0) {
            buffer[readCount] = 0;
            NSString *log = [NSString stringWithCString:buffer encoding:NSUTF8StringEncoding];
            // send it over the network
        }
        free(buffer); // avoid leaking the buffer on every iteration
        if (readCount == -1) {
            return;
        }
    }
});
Apparently, writing to stdout was disallowed as of iOS 5.1: http://spouliot.wordpress.com/2012/03/13/ios-5-1-vs-stdout/
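One more thing worth ruling out (my own suggestion, not from the linked post): when no debugger is attached, stdout is not a terminal, so the C library switches it from line buffering to full buffering, and nothing reaches the pipe until the buffer fills, which would look exactly like read() freezing. Forcing line buffering after the dup2() call makes each line visible immediately:
// Make stdout line-buffered so writes reach the pipe right away
setvbuf(stdout, NULL, _IOLBF, 0);

// ...or flush manually after each write:
printf("hello\n");
fflush(stdout);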

AudioQueue does not output any sound

I'm having trouble getting sound output in my iPhone experiment, and I'm out of ideas.
Here is my callback that fills the audio queue buffer:
void AudioOutputCallback(void *user, AudioQueueRef refQueue, AudioQueueBufferRef inBuffer)
{
    NSLog(@"callback called");
    inBuffer->mAudioDataByteSize = 1024;
    gme_play((Music_Emu *)user, 1024, (short *)inBuffer->mAudioData);
    AudioQueueEnqueueBuffer(refQueue, inBuffer, 0, NULL);
}
I set up the audio queue using the following snippet:
// Create the stream description
AudioStreamBasicDescription streamDescription;
streamDescription.mSampleRate = 44100;
streamDescription.mFormatID = kAudioFormatLinearPCM;
streamDescription.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
streamDescription.mBytesPerPacket = 1024;
streamDescription.mFramesPerPacket = 1024 / 4;
streamDescription.mBytesPerFrame = 2 * sizeof(short);
streamDescription.mChannelsPerFrame = 2;
streamDescription.mBitsPerChannel = 16;

AudioQueueNewOutput(&streamDescription, AudioOutputCallback, theEmu, NULL, NULL, 0, &theAudioQueue);

OSStatus errorCode = AudioQueueAllocateBuffer(theAudioQueue, 1024, &someBuffer);
if (errorCode)
{
    NSLog(@"Cannot allocate buffer");
}

// Prime the first buffer and start the queue
AudioOutputCallback(theEmu, theAudioQueue, someBuffer);
AudioQueueSetParameter(theAudioQueue, kAudioQueueParam_Volume, 1.0);
AudioQueueStart(theAudioQueue, NULL);
The library I'm using outputs linear PCM, 16-bit, 44.1 kHz.
I usually use three buffers. You need at least two, because while one is being played, the other is being filled by your code. If you have only one, there's not enough time to fill the same buffer, re-enqueue it, and have playback be seamless, so the queue probably just stops because it ran out of buffers.
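For illustration, a minimal sketch of priming several buffers before starting the queue (the buffer count, and reusing the callback to fill and enqueue, are my assumptions built on the code in the question):
#define kNumBuffers 3
AudioQueueBufferRef buffers[kNumBuffers];

for (int i = 0; i < kNumBuffers; i++)
{
    if (AudioQueueAllocateBuffer(theAudioQueue, 1024, &buffers[i]) == noErr)
    {
        // Fill and enqueue each buffer once; the queue then recycles them
        // through the output callback as playback drains them.
        AudioOutputCallback(theEmu, theAudioQueue, buffers[i]);
    }
}

AudioQueueStart(theAudioQueue, NULL);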

Why does MPMovieLoadState have state 5?

Looking in MPMoviePlayerController.h, there is:
enum {
    MPMovieLoadStateUnknown       = 0,
    MPMovieLoadStatePlayable      = 1 << 0,
    MPMovieLoadStatePlaythroughOK = 1 << 1, // Playback will be automatically started in this state when shouldAutoplay is YES
    MPMovieLoadStateStalled       = 1 << 2, // Playback will be automatically paused in this state, if started
};
typedef NSInteger MPMovieLoadState;
But when I do:
NSLog(@"%d", player.loadState);
it prints out 5, or sometimes 3. How did that happen? As far as I can tell from the developer documentation, loadState should only have the values 0, 1, 2, and 4.
Thank you!
The loadState is a bitmask, so any number of bits can be set at once, such as:
MPMovieLoadStatePlayable | MPMovieLoadStatePlaythroughOK
That combination is 1 | 2 = 3, which explains one of your values; 5 is MPMovieLoadStatePlayable | MPMovieLoadStateStalled, i.e. 1 | 4.
Check for states like this:
MPMovieLoadState state = [playerController loadState];
if (state & MPMovieLoadStatePlaythroughOK) {
    NSLog(@"State is Playthrough OK");
}
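For completeness, a small sketch (playerController is taken from the snippet above) that decodes every bit in one pass:
MPMovieLoadState state = [playerController loadState];
if (state & MPMovieLoadStatePlayable)      NSLog(@"playable");
if (state & MPMovieLoadStatePlaythroughOK) NSLog(@"playthrough OK");
if (state & MPMovieLoadStateStalled)       NSLog(@"stalled");
// A logged value of 3 is playable + playthrough OK; 5 is playable + stalled.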