Audio Recording iPhone - values of AudioStreamBasicDescription

These are the values I pass in; it's the only combination of values I have gotten working.
dataFormat.mSampleRate = 44100;
dataFormat.mFormatID = kAudioFormatLinearPCM;
dataFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsBigEndian;
dataFormat.mBytesPerPacket = 4;
dataFormat.mFramesPerPacket = 1;
dataFormat.mBytesPerFrame = 4;
dataFormat.mChannelsPerFrame = 2;
dataFormat.mBitsPerChannel = 16;
status = AudioQueueNewInput( &dataFormat, AudioInputCallback, self, NULL, NULL, 0, &queue );
status = AudioFileCreateWithURL( fileUrl, kAudioFileCAFType, &dataFormat, kAudioFileFlags_EraseFile, &audioFile );
The recording works, but there is a lot of noise during the recording and on playback. Could it have anything to do with this code?

I can see two possible errors. First, as @invalidname pointed out, recording in stereo probably isn't going to work on a mono device such as the iPhone. Well, it might work, but if it does, you're just going to get back dual-mono stereo streams anyway, so why bother? You might as well configure your stream to work in mono and spare yourself the CPU overhead.
The second problem is probably the source of your sound distortion. Your stream description format flags should be:
kAudioFormatFlagIsSignedInteger |
kAudioFormatFlagsNativeEndian |
kAudioFormatFlagIsPacked
Also, don't forget to set the mReserved field to 0. While its value is probably being ignored, it doesn't hurt to explicitly set it to 0 just to make sure.
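Putting both fixes together, the stream description would look something like this (a minimal sketch for 16-bit mono PCM; note that dropping to one channel also halves the per-frame byte counts):
AudioStreamBasicDescription dataFormat = {0}; // zero-init also clears mReserved
dataFormat.mSampleRate       = 44100;
dataFormat.mFormatID         = kAudioFormatLinearPCM;
dataFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger |
                               kAudioFormatFlagsNativeEndian |
                               kAudioFormatFlagIsPacked;
dataFormat.mChannelsPerFrame = 1;   // mono, per the first point above
dataFormat.mBitsPerChannel   = 16;
dataFormat.mBytesPerFrame    = 2;   // 1 channel x 2 bytes per sample
dataFormat.mFramesPerPacket  = 1;   // always 1 for uncompressed linear PCM
dataFormat.mBytesPerPacket   = 2;   // mBytesPerFrame * mFramesPerPacket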
Edit: Another more general tip for debugging audio on the iPhone -- if you are getting distortion, clipping, or other weird effects, grab the data payload from your phone and look at the recording in a wave editor. Being able to zoom down and look at the individual samples will give you a lot of clues about what's going wrong.
To do this, you need to open up the "Organizer" window, click on your phone, and then expand the little arrow next to your application (in the same place where you would normally uninstall it). Now you will see a little downward pointing arrow, and if you click it, Xcode will copy the data payload from your app to somewhere on your hard drive. If you are dumping your recordings to disk, you'll find the files extracted here.

What's your input device? The mic on the provided earbuds or the phone's built-in mic or what? Or are you recording into the Simulator?
Aside from the noise, does everything else sound right: speed, pitch, etc.?
It probably isn't causing any problems, but you're specifying two-channel input, while your input device is probably mono.
One last thought: is this a first-generation iPhone? I think there's a weird issue with that model where 8 kHz input gets upconverted to 44.1 kHz.

Related

How to know the delay of frames between 2 videos, to sync an audio from video 1 to video 2?

I have many videos that I want to compare one-to-one to check whether they are the same and, if so, get the delay in frames between them. What I do now is open both video files with virtualdub and check manually, near the beginning of video 1, that a given frame is at a position, say 4325. Then I check video 2 to find the position of the same frame, say 5500. That makes a delay of +1175 frames. Then I check another given frame near the end of video 1, at position, say, 183038. I check video 2 too (imagine the position is 184213) and I calculate the difference, again +1175: eureka, same video!
The frames I choose to compare aren't exactly random: each must be one I can identify unambiguously (for example, a scene change, an explosion that appears from one frame to the next, a dark frame after a light one...), and I always try to pick the first comparison frame within the first 10000 positions and the second one near the end.
What I do next is shift the audio from video 1 to video 2, calculating the number of ms needed, but I don't need help with that. I'd love to automate the comparison so that I just have to select video 1 and video 2, nothing else; that way I could forget virtualdub forever and save a lot of time.
I'm tagging this post as powershell too because I'm writing a script where, at the moment, I have to enter the delay between frames (after comparing manually) myself. It would be perfect if I could add this at the beginning of the script.
Thanks!
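For what it's worth, once per-frame signatures (for example, average brightness) have been extracted from both videos with an external tool, finding the delay is a simple offset search. Here is a minimal C sketch; the signature arrays, the function name, and the assumption that video 2 only lags (never leads) video 1 are all illustrative:
#include <math.h>

/* Find the shift of video 2 relative to video 1 that best aligns the
 * per-frame signatures, scanning shifts from 0 up to maxShift frames. */
long best_shift(const double *sig1, long n1,
                const double *sig2, long n2, long maxShift)
{
    long bestShift = 0;
    double bestErr = INFINITY;
    for (long shift = 0; shift <= maxShift; shift++) {
        double err = 0.0;
        long count = 0;
        for (long i = 0; i < n1 && i + shift < n2; i++) {
            double d = sig1[i] - sig2[i + shift];
            err += d * d;
            count++;
        }
        if (count > 0 && err / count < bestErr) {
            bestErr = err / count;
            bestShift = shift;  /* e.g. +1175 in the example above */
        }
    }
    return bestShift;
}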

How to get acquired frames at full speed? - Image Event Listener does not seem to be executing after every event

My goal is to read out 1 pixel from the GIF camera in VIEW mode (live acquisition) and save it to a file every time the data is updated. The camera is ostensibly updating every 0.0001 seconds, because this is the minimum acquisition time Digital Micrograph lets me select in VIEW mode for this camera.
I can attach an Image Event Listener to the live image of the camera, with the message map (messagemap = "data_changed:MyFunctiontoExecute"), and MyFunctiontoExecute runs successfully, giving me a file with numerous pixel values.
However, if I let this event listener run for a second, I only obtain close to 100 pixel values, when I was expecting closer to 10,000 (if the live image is being updated every 0.0001 seconds).
Is this because the live image is not updated as quickly as I think?
The event listener certainly is executed at each event.
However, the live display of a high-speed camera will almost certainly not update for each acquired frame. It will perform either some sort of cumulative display or a sampled one. The exact answer will depend on the exact system you are on and how it is configured.
It should be noted that super-high frame rates can usually only be achieved with dedicated firmware and optimized systems. It's unlikely that a "general software approach" - in particular one using interpreted, non-compiled code - will be able to provide the necessary speed. This type of approach to the problem might be doomed from the start.
(Instead, one will likely have to create a buffer and then set up the system to acquire data directly into that buffer at the highest possible frame rate. This means coding the camera acquisition directly.)
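The details are entirely system-specific, but the general shape of that buffered approach is sketched below in C; acquire_frame_into() stands in for whatever low-level capture call the system's firmware/SDK actually provides and is purely hypothetical:
#include <stdint.h>

#define FRAME_PIXELS (2048 * 2048)
#define RING_FRAMES  256  /* pre-allocate enough slots to absorb bursts */

extern void acquire_frame_into(uint16_t *dst);  /* hypothetical capture call */

typedef struct {
    uint16_t *frames;    /* RING_FRAMES * FRAME_PIXELS samples, pre-allocated */
    volatile long head;  /* index of the next frame slot to be written */
} RingBuffer;

/* Acquisition loop: each new frame goes straight into the pre-allocated
 * ring; no per-frame allocation or display work happens on this path. */
void acquire_loop(RingBuffer *rb, long nFramesWanted)
{
    for (long i = 0; i < nFramesWanted; i++) {
        uint16_t *slot = rb->frames + (rb->head % RING_FRAMES) * (long)FRAME_PIXELS;
        acquire_frame_into(slot);
        rb->head++;  /* publish; a reader can now pull out the pixel of interest */
    }
}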

Wav loop points in Unity

I did read that Unity supports wav loop-point metadata (e.g. https://stackoverflow.com/a/53934779/525873). We have, however, not found any official docs/release notes that confirm this. Loop points (set with Wavosaur in my case) still appear to be ignored. We are on Unity 2018.2.17f1.
We know there are other options to make audio clips loop, but using wav loop points would be ideal. Has anyone been able to get wav loop points to work in Unity?
Many thanks!
I might be wrong, but I don't think looping other than 'the whole file' is natively supported. You can, however, achieve it by filling the audio buffer manually (using MonoBehaviour.OnAudioFilterRead).
Please keep in mind that this happens on the managed side, so it might be a little bit expensive, especially if you want to do resampling.
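The buffer-filling itself is mostly wrap-around index arithmetic. Here is a minimal C sketch of the core loop (inside OnAudioFilterRead you would write the same logic in C# against the float buffer Unity hands you); the names are illustrative, and the loop points are assumed to have been parsed out of the wav's metadata yourself:
/* Fill `out` with `frames` samples from a mono clip, looping between
 * loopStart and loopEnd (sample indices taken from the wav's loop metadata). */
void fill_looped(float *out, int frames, const float *clip,
                 long *pos, long loopStart, long loopEnd)
{
    for (int i = 0; i < frames; i++) {
        out[i] = clip[*pos];
        if (++*pos >= loopEnd)
            *pos = loopStart;  /* jump back to the loop start point */
    }
}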

How to duplicate one stereo channel to the other stereo channel using AudioKit

I am using a Focusrite Scarlett 2i2 into a Mac. The signal into the Scarlett is a guitar.
With code along these lines I can get audio into the app, but it is only the stereo left channel.
mic = AKMicrophone()
device = AKDevice(name: "Scarlett 2i4 USB", deviceID: 56)
mic.setDevice(device)
let booster = AKBooster(mic, gain: 1.0)
AudioKit.output = booster
AudioKit.start()
mic.start()
Is there a simple way to combine left and right channels from a mic input into a single mono signal (or left and right with the same signal)?
I tried a variation on this answer about flipping left and right channels: AudioKit - Stereo channel flipping from input to output?
But that didn't work. FWIW, it also didn't work for purely flipping the channels (AKPanner seems to be able to pan something from the center to hard left, but not from hard left to center or right.)
Two other things that might be related:
It seems that AKStereoInput is not available for the Mac platform. Is that correct?
What exactly is "deviceID"? I seem to be able to change that and get the same result.
Thank you.
Yes, there is something called AKStereoFieldLimiter that does just that:
https://audiokit.io/docs/Classes/AKStereoFieldLimiter.html

OpenAL problem - changing gain of source

I've recently been working on porting my game to be cross-platform, and decided to go with OpenAL for my cross-platform audio engine.
I have 16 "channels" (OpenAL sources) for playing up to 16 sounds concurrently. To play a sound, I switch which buffer is linked to a given source, and also set the gain, source position, and so on in order to play a sound in a given "channel" (source).
The problem is, I've noticed that my "gain" settings do not seem to have immediate effect. For instance, if a loud "lightning" sound plays in a given source at 0.5 gain, then when I have a button click sound play at 0.15 gain later, this click starts off FAR too loud. Then, each subsequent time it is played, the volume decreases until around the 3rd or 4th click it sounds like it's around the proper 0.15 gain.
The reverse happens too: sometimes the first button click is barely audible, and it seems to ramp up in volume until it reaches the 0.15 gain.
So in short, a "source" seems to be remembering the former gain settings, even though I am resetting those before playing a new sound in the same source! Is this a bug? Or something I don't understand about OpenAL? How can I get it to "instantly" change to the new gain/position settings?
Relevant code to play a sound:
[Channel is a value between 0 and 15, soundID is a valid index into the gBuffer array, stereoPosition is an integer between -255 and 255, and volume is between 0 and 255. This is from a function that's a wrapper for my game that used to use values between 0-255, so it converts the values to proper OpenAL values.]
// Stop any sound currently playing in this channel ("source")
alSourceStop( gSource[channel] );
// What sound effect ("buffer") is currently linked with this channel? (Even if not currently playing)
alGetSourcei( gSource[channel], AL_BUFFER, &curBuffer );
// attach a different buffer (sound effect) to the source (channel) only if it's different than the previously-attached one.
// (Avoid error code by changing it only if it's different)
if (curBuffer != gBuffer[soundID])
alSourcei( gSource[channel], AL_BUFFER, gBuffer[soundID] );
// Loop the sound?
alSourcei( gSource[channel], AL_LOOPING, (loopType == kLoopForever) );
// For OpenAL, we do this BEFORE starting the sound effect, to avoid sudden changes a split second after it starts!
volume = (volume / 2) + 1; // Convert from 0-255 to 1-128
{
float sourcePos[3] = {(float)stereoPosition / 50, 0.0, 2.0};
// Set source position
alSourcefv( gSource[channel], AL_POSITION, sourcePos ); // fv = float vector
// Set source volume
alSourcef( gSource[channel], AL_GAIN, (float)volume / 255 );
}
}
// Start playing the sound!
alSourcePlay( gSource[channel] );
I can post setup code too if desired, but nothing fancy there. Just calling
alSourcef( gSource[n], AL_REFERENCE_DISTANCE, 5.0f );
for each source.
We just faced the same problem when setting the ByteOffset for seeking to a position in a sample, and we have now found the solution to get it working:
Just always delete and recreate the source before setting any parameter on it.
So if you want to change the gain and/or other parameters:
OpenAL.AL.DeleteSource(oldSourceID);
newSourceId = OpenAL.AL.GenSource();
OpenAL.AL.Source(newSourceId, OpenTK.Audio.OpenAL.ALSourcef.Gain, yourVolume);
OpenAL.AL.Source(newSourceId, OpenTK.Audio.OpenAL.ALSourcef.xxx, yourOtherParameter);
Hope it works for you, and that you finally have a workaround after 10 years :-)
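For anyone on the plain C API rather than OpenTK, the same workaround looks something like this (a sketch using the question's own variables; remember that a freshly generated source loses all of its previous state, so the buffer, gain, position, and anything like AL_REFERENCE_DISTANCE must be set again):
// Tear down the stale source and generate a fresh one before configuring it.
alDeleteSources( 1, &gSource[channel] );
alGenSources( 1, &gSource[channel] );
alSourcei( gSource[channel], AL_BUFFER, gBuffer[soundID] );   // re-attach the buffer
alSourcef( gSource[channel], AL_GAIN, (float)volume / 255 );  // new gain applies immediately
alSourcefv( gSource[channel], AL_POSITION, sourcePos );       // and the new position
alSourcePlay( gSource[channel] );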