I capture a video and handle the resulting YUV frames.
The output looks like the following:
It appears normal on my phone's screen, but my peer receives it as shown in the image above.
Every item is repeated and shifted by some amount horizontally and vertically.
My captured video is 352x288, with YPixelCount = 101376 and UVPixelCount = YPixelCount/4.
Any clue to solve this, or a starting point for understanding how to handle YUV video frames on iOS?
NSNumber* recorderValue = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange];
[videoRecorderSession setSessionPreset:AVCaptureSessionPreset352x288];
And this is the captureOutput function
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    if (CMSampleBufferIsValid(sampleBuffer) && CMSampleBufferDataIsReady(sampleBuffer) && ([self isQueueStopped] == FALSE))
    {
        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        CVPixelBufferLockBaseAddress(imageBuffer, 0);

        UInt8 *baseAddress[3] = {NULL, NULL, NULL};

        uint8_t *yPlaneAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
        UInt32 yPixelCount = CVPixelBufferGetWidthOfPlane(imageBuffer, 0) * CVPixelBufferGetHeightOfPlane(imageBuffer, 0);

        uint8_t *uvPlaneAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 1);
        UInt32 uvPixelCount = CVPixelBufferGetWidthOfPlane(imageBuffer, 1) * CVPixelBufferGetHeightOfPlane(imageBuffer, 1);

        UInt32 p, q, r;
        p = q = r = 0;

        memcpy(uPointer, uvPlaneAddress, uvPixelCount);
        memcpy(vPointer, uvPlaneAddress + uvPixelCount, uvPixelCount);
        memcpy(yPointer, yPlaneAddress, yPixelCount);

        baseAddress[0] = (UInt8 *)yPointer;
        baseAddress[1] = (UInt8 *)uPointer;
        baseAddress[2] = (UInt8 *)vPointer;

        CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    }
}
Is there anything wrong with the above code ?
Your code doesn't look too bad. I can see two mistakes and one potential problem:
The uvPixelCount is incorrect. The YUV 420 format means that there is color information for each 2 by 2 pixel block. So the correct count is:
uvPixelCount = (width / 2) * (height / 2);
You write something about yPixelCount / 4, but I cannot see that in your code.
The UV information is interleaved, i.e. the second plane contains alternating U and V values. Put differently: there's a U value at every even byte offset and a V value at every odd byte offset. If you really need to separate the U and V information, a plain memcpy won't do.
There can be some extra bytes after each pixel row. You should use CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0) to get the number of bytes between two rows. As a consequence, a single memcpy won't do either; instead you need to copy each pixel row separately to get rid of the extra bytes between the rows. Both points are illustrated in the sketch below.
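To illustrate the interleaving and the bytes-per-row issues just described, here is a minimal sketch, not the asker's exact code, that copies the Y plane and de-interleaves the CbCr plane row by row. The destination buffers yDst, uDst and vDst are assumed to be preallocated by the caller (width*height bytes for yDst, (width/2)*(height/2) bytes each for uDst and vDst):

#include <CoreVideo/CoreVideo.h>
#include <string.h>

// Sketch for a kCVPixelFormatType_420YpCbCr8BiPlanar* pixel buffer.
static void CopyPlanarYUV(CVImageBufferRef imageBuffer,
                          uint8_t *yDst, uint8_t *uDst, uint8_t *vDst)
{
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    // Luma plane: copy row by row to drop any per-row padding.
    uint8_t *ySrc    = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
    size_t   yWidth  = CVPixelBufferGetWidthOfPlane(imageBuffer, 0);
    size_t   yHeight = CVPixelBufferGetHeightOfPlane(imageBuffer, 0);
    size_t   yStride = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0);
    for (size_t row = 0; row < yHeight; row++) {
        memcpy(yDst + row * yWidth, ySrc + row * yStride, yWidth);
    }

    // Chroma plane: each row holds interleaved Cb (U) and Cr (V) bytes.
    uint8_t *uvSrc    = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 1);
    size_t   uvWidth  = CVPixelBufferGetWidthOfPlane(imageBuffer, 1);   // width / 2
    size_t   uvHeight = CVPixelBufferGetHeightOfPlane(imageBuffer, 1);  // height / 2
    size_t   uvStride = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 1);
    for (size_t row = 0; row < uvHeight; row++) {
        uint8_t *src = uvSrc + row * uvStride;
        for (size_t col = 0; col < uvWidth; col++) {
            uDst[row * uvWidth + col] = src[2 * col];      // Cb on even offsets
            vDst[row * uvWidth + col] = src[2 * col + 1];  // Cr on odd offsets
        }
    }

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
}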
All these things only explain part of the resulting image. The remaining artifacts are probably due to differences between your code and what the receiving peer expects. You didn't write anything about that. Does the peer really need separated U and V values? Does it use 4:2:0 chroma subsampling as well? Does it use video range instead of full range as well?
If you provide more information, I can give you more hints.
I am basing my code off of PortAudio's paex_record_file.c example. One of the parameters in the callback is inputBuffer, and I wanted to use its data to calculate other numbers with the double/float type. I changed the file extension from .raw to .txt, but Notepad still cannot read it, leading me to believe its data is not actually encoded as a number. How is the data stored in inputBuffer, and how can I do arithmetic with it (add, multiply, divide, etc.)?
This is how I initialized inputParameters:
inputParameters.device = Pa_GetDefaultInputDevice(); /* default input device */
if (inputParameters.device == paNoDevice) {
    fprintf(stderr,"Error: No default input device.\n");
    goto error;
}
inputParameters.channelCount = 2; /* stereo input */
inputParameters.sampleFormat = paFloat32;
inputParameters.suggestedLatency = Pa_GetDeviceInfo( inputParameters.device )->defaultLowInputLatency;
inputParameters.hostApiSpecificStreamInfo = NULL;
This question is somewhat related to print floats from audio input callback function (unanswered).
The inputBuffer parameter to the callback is a void*. The actual type of the underlying buffer depends on the parameters and the flags that you pass to Pa_OpenStream.
If you specified paFloat32, then there will be a float* in there somewhere. However, there are two possibilities:
Interleaved: inputParameters.sampleFormat = paFloat32;
Non-Interleaved: inputParameters.sampleFormat = paFloat32|paNonInterleaved;
You specified the interleaved option. In this case, inputBuffer points to a single buffer of interleaved floats. So you can write:
float *samples = (float*)inputBuffer;
In a two channel stream samples will contain interleaved left and right samples, e.g.:
samples[0]; // first left sample
samples[1]; // first right sample
samples[2]; // second left sample
samples[3]; // second right sample
// etc.
For completeness: if it had been a non-interleaved stream, then inputBuffer would point to an array of pointers to single-channel buffers. To extract the buffer pointers you would write something like:
float *left = ((float **) inputBuffer)[0];
float *right = ((float **) inputBuffer)[1];
Note that in all cases framesPerBuffer counts frames not samples. A frame includes one sample from each channel. For example, in a stereo stream, a frame includes both the left and right channel samples.
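Putting that together, here is a minimal callback sketch, not part of the paex_record_file.c example, that does arithmetic on the interleaved stereo floats; the per-buffer RMS computation and the printing are purely for illustration:

#include <stdio.h>
#include <math.h>
#include "portaudio.h"

/* Sketch: compute the RMS level of each channel of the current buffer,
 * assuming the stereo, interleaved, paFloat32 stream set up above. */
static int recordCallback(const void *inputBuffer, void *outputBuffer,
                          unsigned long framesPerBuffer,
                          const PaStreamCallbackTimeInfo *timeInfo,
                          PaStreamCallbackFlags statusFlags,
                          void *userData)
{
    const float *samples = (const float *)inputBuffer;
    double sumLeft = 0.0, sumRight = 0.0;

    if (samples != NULL) {   /* the input pointer can be NULL on some hosts */
        for (unsigned long i = 0; i < framesPerBuffer; i++) {
            float left  = samples[2 * i];       /* interleaved: L, R, L, R, ... */
            float right = samples[2 * i + 1];
            sumLeft  += (double)left  * left;
            sumRight += (double)right * right;
        }
        /* printf in a real callback is not real-time safe; shown only for illustration */
        printf("RMS L=%f R=%f\n",
               sqrt(sumLeft  / framesPerBuffer),
               sqrt(sumRight / framesPerBuffer));
    }
    return paContinue;
}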
I've been working on a frequency detection application for iOS and I'm having an issue filling a user-defined AudioBufferList with audio samples from the microphone.
I'm getting a return code of -50 when I call AudioUnitRender in my InputCallback method. I believe this means one of my parameters is invalid. I'm guessing it's the AudioBufferList, but I haven't been able to figure out what is wrong with it. I think I've set it up so it matches the data format I've specified in my ASBD.
Below is the remote I/O setup and function calls that I believe could be incorrect:
ASBD:
size_t bytesPerSample = sizeof(AudioUnitSampleType);
AudioStreamBasicDescription localStreamFormat = {0};
localStreamFormat.mFormatID = kAudioFormatLinearPCM;
localStreamFormat.mFormatFlags = kAudioFormatFlagsAudioUnitCanonical;
localStreamFormat.mBytesPerPacket = bytesPerSample;
localStreamFormat.mBytesPerFrame = bytesPerSample;
localStreamFormat.mFramesPerPacket = 1;
localStreamFormat.mBitsPerChannel = 8 * bytesPerSample;
localStreamFormat.mChannelsPerFrame = 2;
localStreamFormat.mSampleRate = sampleRate;
InputCallback Declaration:
err = AudioUnitSetProperty(ioUnit, kAudioOutputUnitProperty_SetInputCallback,
kAudioUnitScope_Input,
kOutputBus, &callbackStruct, sizeof(callbackStruct));
AudioBufferList Declaration:
// Allocate AudioBuffers
bufferList = (AudioBufferList *)malloc(sizeof(AudioBuffer));
bufferList->mNumberBuffers = 1;
bufferList->mBuffers[0].mNumberChannels = 2;
bufferList->mBuffers[0].mDataByteSize = 1024;
bufferList->mBuffers[0].mData = calloc(256, sizeof(uint32_t));
InputCallback Function:
AudioUnit rioUnit = THIS->ioUnit;
OSStatus renderErr;
UInt32 bus1 = 1;
renderErr = AudioUnitRender(rioUnit, ioActionFlags, inTimeStamp, bus1, inNumberFrames, THIS->bufferList);
A few things to note:
Sample Rate = 22050 Hz
Since the canonical format of remote I/O data is 8.24-bit fixed point, I'm assuming the samples are 32 bits each (or 4 bytes). Since an unsigned int is 4 bytes, I'm using that to allocate my audio buffer.
I can get the same code to render audio correctly if I implement the audio data flow as PassThru rather than input only.
I've already looked at Michael Tyson's blog post on Remote I/O. Didn't see anything there different from what I'm doing.
Thanks again, you all are awesome!
Demetri
If you have 2 channels per frame, you cannot have bytesPerSample as the size of a frame. Since the terminology is a bit confusing, here is a quick review:
A sample is a single value at a given position in a waveform
A channel refers to the data associated with a particular audio stream, i.e., left/right channels for stereo, a single channel for mono, etc.
A frame contains the samples for all channels for a given position in a waveform
A packet contains one or more frames
So basically, you need to use bytesPerSample * mChannelsPerFrame for mBytesPerFrame, and use mBytesPerFrame * mFramesPerPacket for mBytesPerPacket.
Also I noticed that you are using 32-bits for your sample size. I'm not sure if you really want to do this -- usually, you want to record audio using 16-bit samples. The sound difference between 16 and 32 bit audio is almost impossible for most listeners to hear (the average CD is mastered at 44.1kHz, 16-bit PCM), and it will spare you 50% of the I/O and storage costs.
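If you do switch to 16-bit samples, the frame/packet arithmetic above would look roughly like the following sketch; this is 16-bit interleaved stereo at the question's 22050 Hz, not the 8.24 canonical format, and everything outside that arithmetic is just illustrative:

#include <AudioToolbox/AudioToolbox.h>

// Sketch of the mBytesPerFrame / mBytesPerPacket arithmetic for a 16-bit,
// interleaved, stereo linear PCM stream.
size_t bytesPerSample = sizeof(SInt16);                               // 2 bytes

AudioStreamBasicDescription asbd = {0};
asbd.mFormatID         = kAudioFormatLinearPCM;
asbd.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked | kAudioFormatFlagsNativeEndian;
asbd.mSampleRate       = 22050.0;
asbd.mChannelsPerFrame = 2;                                           // stereo
asbd.mFramesPerPacket  = 1;                                           // always 1 for uncompressed PCM
asbd.mBitsPerChannel   = 8 * bytesPerSample;                          // 16
asbd.mBytesPerFrame    = bytesPerSample * asbd.mChannelsPerFrame;     // 2 * 2 = 4
asbd.mBytesPerPacket   = asbd.mBytesPerFrame * asbd.mFramesPerPacket; // 4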
One difference is that Tyson's RemoteIO blog post uses 2 bytes per sample of linear PCM, so this might be a format incompatibility error.
The line bufferList = (AudioBufferList *)malloc(sizeof(AudioBuffer)); is also wrong. Since AudioBuffer is smaller than AudioBufferList, it does not allocate enough memory.
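A correctly sized allocation might look like the following sketch; the helper name, the interleaved single-buffer layout, and the idea of sizing it from a maximum frame count are assumptions, not code from the question:

#include <AudioToolbox/AudioToolbox.h>
#include <stdlib.h>
#include <stddef.h>

// Sketch: allocate a single-buffer AudioBufferList big enough for
// `maxFrames` frames of interleaved data at `bytesPerFrame` bytes per frame.
static AudioBufferList *CreateInterleavedBufferList(UInt32 maxFrames, UInt32 bytesPerFrame)
{
    UInt32 numBuffers = 1;   // one buffer holds both channels when interleaved
    AudioBufferList *list = (AudioBufferList *)malloc(
        offsetof(AudioBufferList, mBuffers) + numBuffers * sizeof(AudioBuffer));
    list->mNumberBuffers = numBuffers;
    list->mBuffers[0].mNumberChannels = 2;
    list->mBuffers[0].mDataByteSize   = maxFrames * bytesPerFrame;
    list->mBuffers[0].mData           = calloc(maxFrames, bytesPerFrame);
    return list;
}

// e.g. bufferList = CreateInterleavedBufferList(1024, asbd.mBytesPerFrame);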
My question is a little tricky, and I'm not exactly experienced (I might get some terms wrong), so here goes.
I'm declaring an instance of an object called "Singer". The instance is called "singer1". "singer1" produces an audio signal. Now, the following is the code where the specifics of the audio signal are determined:
OSStatus playbackCallback(void *inRefCon,
                          AudioUnitRenderActionFlags *ioActionFlags,
                          const AudioTimeStamp *inTimeStamp,
                          UInt32 inBusNumber,
                          UInt32 inNumberFrames,
                          AudioBufferList *ioData) {
    //Singer *me = (Singer *)inRefCon;
    static int phase = 0;
    for (UInt32 i = 0; i < ioData->mNumberBuffers; i++) {
        int samples = ioData->mBuffers[i].mDataByteSize / sizeof(SInt16);
        SInt16 values[samples];
        float waves;
        float volume = .5;
        for (int j = 0; j < samples; j++) {
            waves = 0;
            waves += sin(kWaveform * 600 * phase) * volume;
            waves += sin(kWaveform * 400 * phase) * volume;
            waves += sin(kWaveform * 200 * phase) * volume;
            waves += sin(kWaveform * 100 * phase) * volume;
            waves *= 32500 / 4; // <--------- make sure to divide by how many waves you're stacking
            values[j] = (SInt16)waves;
            values[j] += values[j]<<16;
            phase++;
        }
        memcpy(ioData->mBuffers[i].mData, values, samples * sizeof(SInt16));
    }
    return noErr;
}
99% of this is borrowed code, so I only have a basic understanding of how it works (I don't know about the OSStatus class or method or whatever this is). However, you see those 4 lines with 600, 400, 200 and 100 in them? Those determine the frequency. Now, what I want to do (for now) is insert my own variable in there in place of a constant, which I can change on a whim. This variable is called "fr1". "fr1" is declared in the header file, but if I try to compile I get an error about "fr1" being undeclared. Currently, my technique to fix this is the following: right beneath where I #import stuff, I add the line
fr1=0.0;//any number will work properly
This sort of works: the code compiles, and singer1.fr1 will actually change values if I tell it to. The problems are now these:
A) Even though this compiles and the tone specified will play (0.0 is no tone), I get the warnings "Data definition has no type or storage class" and "Type defaults to 'int' in declaration of 'fr1'". I bet this is because for some reason it's not seeing my previous declaration in the header file (as a float). However, again, if I leave this line out the code won't compile because "fr1 is undeclared".
B) Just because I change the value of fr1 doesn't mean that singer1 will update the value stored inside the "playbackCallback" function, or whatever is in charge of updating the output buffers. Perhaps this can be fixed by coding differently?
C) Even if this did work, there is still a noticeable gap when pausing/playing the audio, which I need to eliminate. This might mean a complete overhaul of the code so that I can "dynamically" insert new values without disrupting anything.
However, the reason I'm going through all this effort to post is that this method does exactly what I want (I can compute a value mathematically and it goes straight to the DAC, which means I can use it in the future to make triangle, square, etc. waves easily). I have uploaded Singer.h and .m to pastebin for your viewing pleasure; perhaps they will help. Sorry, I can't post two HTML tags, so here are the full links.
(http://pastebin.com/ewhKW2Tk)
(http://pastebin.com/CNAT4gFv)
So, TL;DR, all I really want to do is be able to define the current equation/value of the 4 waves and re-define them very often without a gap in the sound.
Thanks. (And sorry if the post was confusing or got off track, which I'm pretty sure it did.)
My understanding is that your callback function is called every time the buffer needs to be re-filled. So changing fr1..fr4 will alter the waveform, but only when the buffer updates. You shouldn't need to stop and re-start the sound to get a change, but you will notice an abrupt shift in the timbre if you change your fr values. In order to get a smooth transition in timbre, you'd have to implement something that smoothly changes the fr values over time. Tweaking the buffer size will give you some control over how responsive the sound is to your changing fr values.
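One simple way to smooth such a change, shown here only as a sketch and not as code from the question, is to move the value actually used for synthesis a small step toward its target on every sample:

// Sketch: per-sample parameter smoothing so the timbre glides instead of
// jumping when fr1 changes. The helper name and smoothing constant are
// illustrative assumptions.
static float smooth_toward(float current, float target)
{
    const float kSmoothing = 0.001f;   // smaller = slower, smoother glide
    return current + kSmoothing * (target - current);
}

// Inside the per-sample loop of the render callback you would then use, e.g.:
//   currentFr1 = smooth_toward(currentFr1, targetFr1);  // targetFr1 = latest fr1 value
//   waves += sin(kWaveform * currentFr1 * phase) * volume;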
Your issue with fr being undefined is due to your callback being a straight C function. Your fr variables are declared as Objective-C instance variables as part of your Singer object. They are not accessible by default.
Take a look at this project, and see how he implements access to his instance variables from within his callback. Basically he passes a reference to his instance to the callback function, and then accesses instance variables through that.
https://github.com/youpy/dowoscillator
notice:
Sinewave *sineObject = inRefCon;
float freq = sineObject.frequency * 2 * M_PI / samplingRate;
and:
AURenderCallbackStruct input;
input.inputProc = RenderCallback;
input.inputProcRefCon = self;
Also, you'll want to move your callback function outside of your #implementation block, because it's not actually part of your Singer object.
You can see this all in action here: https://github.com/coryalder/SineWaver
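Applied to your Singer class, the same pattern might look like the sketch below; it assumes Singer exposes fr1 as a float property (as its header suggests) and that audioUnit is whatever AudioUnit instance variable Singer holds:

#import <AudioUnit/AudioUnit.h>
#import "Singer.h"

// Sketch: read fr1 from the Singer instance passed in via inputProcRefCon.
OSStatus playbackCallback(void *inRefCon,
                          AudioUnitRenderActionFlags *ioActionFlags,
                          const AudioTimeStamp *inTimeStamp,
                          UInt32 inBusNumber,
                          UInt32 inNumberFrames,
                          AudioBufferList *ioData)
{
    Singer *me = (Singer *)inRefCon;   // the instance handed over below
    float fr1 = me.fr1;                // read the current value once per render cycle
    // ... fill the buffers exactly as before, but use fr1 instead of the
    // hard-coded 600/400/200/100 constants ...
    return noErr;
}

// When wiring up the unit inside Singer (e.g. in its setup method):
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = playbackCallback;
callbackStruct.inputProcRefCon = self;   // pass the Singer instance to the callback
AudioUnitSetProperty(audioUnit, kAudioUnitProperty_SetRenderCallback,
                     kAudioUnitScope_Input, 0,
                     &callbackStruct, sizeof(callbackStruct));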
I'm loading a grayscale PNG image and I want to access the underlying pixel data. However, after I get the pixel data via CGImageGetDataProvider, the length of the data returned is longer than expected.
CGDataProviderRef provider = CGDataProviderCreateWithFilename(cStr);
CGImageRef image = CGImageCreateWithPNGDataProvider(provider, NULL, FALSE, kCGRenderingIntentDefault);
mapWidth = CGImageGetWidth(image);
mapHeight = CGImageGetHeight(image);
lookupMap = CGDataProviderCopyData(CGImageGetDataProvider(image));
mapWidth comes out to 1804 and mapHeight comes out to 1005, the product of which is 1813020.
When I call
CFDataGetLength(lookupMap)
the response is 1833120.
Where are these extra 20100 bytes coming from?
Any help here is much appreciated. Am I missing something about the underlying format of the image?
Upon further examination of the CFDataRef I found that if I loop through the buffer, bytes 0 to 1803 of each row are correct from my image, and then the next 20 bytes are all zero. So my returned image is actually coming back as 1824 by 1005 instead of 1804 by 1005. Still no explanation as to why.
There's padding being added to the end of each of my rows.
I started using
CGImageGetBytesPerRow
and solved the mystery.
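For anyone hitting the same thing, here is a sketch of reading the pixels with the row stride taken into account; image, lookupMap, mapWidth, and mapHeight are the variables from the question, and the 8-bit grayscale layout is assumed from the question:

#include <CoreGraphics/CoreGraphics.h>
#include <stdlib.h>
#include <string.h>

// Sketch: index into the copied pixel data while honouring the row padding.
size_t bytesPerRow = CGImageGetBytesPerRow(image);   // 1824 here, not 1804
const UInt8 *pixels = CFDataGetBytePtr(lookupMap);

// Grayscale value of the pixel at (x, y):
//   UInt8 value = pixels[y * bytesPerRow + x];

// Or build a tightly packed copy without the per-row padding:
UInt8 *packed = (UInt8 *)malloc(mapWidth * mapHeight);
for (size_t y = 0; y < mapHeight; y++) {
    memcpy(packed + y * mapWidth, pixels + y * bytesPerRow, mapWidth);
}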
I have an Objective-C class (although I don't believe this is anything Obj-C specific) that I am using to write a video out to disk from a series of CGImages. (The code I am using at the top to get the pixel data comes right from Apple: http://developer.apple.com/mac/library/qa/qa2007/qa1509.html). I successfully create the codec and context - everything is going fine until it gets to avcodec_encode_video, when I get EXC_BAD_ACCESS. I think this should be a simple fix, but I just can't figure out where I am going wrong.
I took out some error checking for succinctness. 'c' is an AVCodecContext*, which is created successfully.
-(void)addFrame:(CGImageRef)img
{
    CFDataRef bitmapData = CGDataProviderCopyData(CGImageGetDataProvider(img));
    long dataLength = CFDataGetLength(bitmapData);
    uint8_t *picture_buff = (uint8_t *)malloc(dataLength);
    CFDataGetBytes(bitmapData, CFRangeMake(0, dataLength), picture_buff);

    AVFrame *picture = avcodec_alloc_frame();
    avpicture_fill((AVPicture *)picture, picture_buff, c->pix_fmt, c->width, c->height);

    int outbuf_size = avpicture_get_size(c->pix_fmt, c->width, c->height);
    uint8_t *outbuf = (uint8_t *)av_malloc(outbuf_size);

    out_size = avcodec_encode_video(c, outbuf, outbuf_size, picture); // ERROR occurs here
    printf("encoding frame %3d (size=%5d)\n", i, out_size);
    fwrite(outbuf, 1, out_size, f);

    CFRelease(bitmapData);
    free(picture_buff);
    free(outbuf);
    av_free(picture);
    i++;
}
I have stepped through it dozens of times. Here are some numbers...
dataLength = 408960
picture_buff = 0x5c85000
picture->data[0] = 0x5c85000 -- which I take to mean that avpicture_fill worked...
outbuf_size = 408960
and then I get EXC_BAD_ACCESS at avcodec_encode_video. Not sure if it's relevant, but most of this code comes from api-example.c. I am using Xcode, compiling for armv6/armv7 on Snow Leopard.
Thanks so much in advance for help!
I don't have enough information here to point to the exact error, but I think the problem is that the input picture contains less data than avcodec_encode_video() expects:
avpicture_fill() only sets some pointers and numeric values in the AVFrame structure. It does not copy anything, and does not check whether the buffer is large enough (and it cannot, since the buffer size is not passed to it). It does something like this (copied from ffmpeg source):
size = picture->linesize[0] * height;
picture->data[0] = ptr;
picture->data[1] = picture->data[0] + size;
picture->data[2] = picture->data[1] + size2;
picture->data[3] = picture->data[1] + size2 + size2;
Note that the width and height are passed from the variable "c" (the AVCodecContext, I assume), so they may be larger than the actual size of the input frame.
It is also possible that the width/height is good, but the pixel format of the input frame is different from what is passed to avpicture_fill(). (note that the pixel format also comes from the AVCodecContext, which may differ from the input). For example, if c->pix_fmt is RGBA and the input buffer is in YUV420 format (or, more likely for iPhone, a biplanar YCbCr), then the size of the input buffer is width*height*1.5, but avpicture_fill() expects the size of width*height*4.
So checking the input/output geometry and pixel formats should lead you to the cause of the error. If it does not help, I suggest that you should try to compile for i386 first. It is tricky to compile FFMPEG for the iPhone properly.
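To make that check concrete, a quick comparison along the following lines would show the mismatch; this is only a sketch, with c, img, and bitmapData being the question's variables inside addFrame::

// Sketch: compare what the encoder expects with what the CGImage provides.
int expected = avpicture_get_size(c->pix_fmt, c->width, c->height);
printf("codec expects %d bytes for %dx%d; CGImage provides %ld bytes (%zu x %zu, %zu bits per pixel)\n",
       expected, c->width, c->height,
       (long)CFDataGetLength(bitmapData),
       CGImageGetWidth(img), CGImageGetHeight(img),
       CGImageGetBitsPerPixel(img));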
Does the codec you are encoding with support the RGB color space? You may need to use libswscale to convert to I420 before encoding. What codec are you using? Can you post the code where you initialize your codec context?
The function RGBtoYUV420P may help you.
http://www.mail-archive.com/libav-user#mplayerhq.hu/msg03956.html
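If a conversion does turn out to be necessary, a libswscale sketch along these lines is an alternative to a hand-written RGBtoYUV420P; it uses the old-style FFmpeg API to match avcodec_encode_video, and rgbaBuf, the RGBA input format, and the assumption that picture is already backed by a YUV420P-sized buffer are all illustrative, not taken from the question:

#include <libswscale/swscale.h>

// Sketch: convert a tightly packed RGBA buffer into the YUV420P planes of
// `picture` (which must point at a buffer of
// avpicture_get_size(PIX_FMT_YUV420P, c->width, c->height) bytes).
struct SwsContext *sws = sws_getContext(c->width, c->height, PIX_FMT_RGBA,
                                        c->width, c->height, PIX_FMT_YUV420P,
                                        SWS_BILINEAR, NULL, NULL, NULL);
uint8_t *srcSlice[1]  = { rgbaBuf };        // packed RGBA pixels from the CGImage
int      srcStride[1] = { 4 * c->width };   // 4 bytes per RGBA pixel
sws_scale(sws, (const uint8_t * const *)srcSlice, srcStride,
          0, c->height, picture->data, picture->linesize);
sws_freeContext(sws);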