How to encode using FFmpeg on Android (using H263) - encoding

I am trying to follow the sample encoding code in the ffmpeg documentation, and I successfully built an application that encodes and generates an mp4 file, but I face the following problems:
1) I am using H263 for encoding, but I can only set the width and height of the AVCodecContext to 176x144; for other sizes (like 720x480 or 640x480) it fails.
2) I can't play the output mp4 file with the default Android player. Doesn't it support H263 mp4 files? P.S. I can play it with other players.
3) Is there any sample code for re-encoding the frames of another video into a new video (that is, decoding the video and encoding it back with different quality settings; I would also like to modify the frame content)?
Here is my code, thanks!
JNIEXPORT jint JNICALL Java_com_ffmpeg_encoder_FFEncoder_nativeEncoder(JNIEnv* env, jobject thiz, jstring filename){
    LOGI("nativeEncoder()");

    avcodec_register_all();
    avcodec_init();
    av_register_all();

    AVCodec *codec;
    AVCodecContext *codecCtx;
    int i;
    int out_size;
    int size;
    int x;
    int y;
    int output_buffer_size;
    FILE *file;
    AVFrame *picture;
    uint8_t *output_buffer;
    uint8_t *picture_buffer;

    /* Manual Variables */
    int l;
    int fps = 30;
    int videoLength = 5;

    /* find the H263 video encoder */
    codec = avcodec_find_encoder(CODEC_ID_H263);
    if (!codec) {
        LOGI("avcodec_find_encoder() run fail.");
    }

    codecCtx = avcodec_alloc_context();
    picture = avcodec_alloc_frame();

    /* put sample parameters */
    codecCtx->bit_rate = 400000;
    /* resolution must be a multiple of two */
    codecCtx->width = 176;
    codecCtx->height = 144;
    /* frames per second */
    codecCtx->time_base = (AVRational){1,fps};
    codecCtx->pix_fmt = PIX_FMT_YUV420P;
    codecCtx->codec_id = CODEC_ID_H263;
    codecCtx->codec_type = AVMEDIA_TYPE_VIDEO;

    /* open it */
    if (avcodec_open(codecCtx, codec) < 0) {
        LOGI("avcodec_open() run fail.");
    }

    const char* mfileName = (*env)->GetStringUTFChars(env, filename, 0);
    file = fopen(mfileName, "wb");
    if (!file) {
        LOGI("fopen() run fail.");
    }
    (*env)->ReleaseStringUTFChars(env, filename, mfileName);

    /* alloc image and output buffer */
    output_buffer_size = 100000;
    output_buffer = malloc(output_buffer_size);

    size = codecCtx->width * codecCtx->height;
    picture_buffer = malloc((size * 3) / 2); /* size for YUV 420 */

    picture->data[0] = picture_buffer;
    picture->data[1] = picture->data[0] + size;
    picture->data[2] = picture->data[1] + size / 4;
    picture->linesize[0] = codecCtx->width;
    picture->linesize[1] = codecCtx->width / 2;
    picture->linesize[2] = codecCtx->width / 2;

    for(l=0;l<videoLength;l++){
        //encode 1 second of video
        for(i=0;i<fps;i++) {
            //prepare a dummy image YCbCr
            //Y
            for(y=0;y<codecCtx->height;y++) {
                for(x=0;x<codecCtx->width;x++) {
                    picture->data[0][y * picture->linesize[0] + x] = x + y + i * 3;
                }
            }
            //Cb and Cr
            for(y=0;y<codecCtx->height/2;y++) {
                for(x=0;x<codecCtx->width/2;x++) {
                    picture->data[1][y * picture->linesize[1] + x] = 128 + y + i * 2;
                    picture->data[2][y * picture->linesize[2] + x] = 64 + x + i * 5;
                }
            }
            //encode the image
            out_size = avcodec_encode_video(codecCtx, output_buffer, output_buffer_size, picture);
            fwrite(output_buffer, 1, out_size, file);
        }

        //get the delayed frames
        for(; out_size; i++) {
            out_size = avcodec_encode_video(codecCtx, output_buffer, output_buffer_size, NULL);
            fwrite(output_buffer, 1, out_size, file);
        }
    }

    //add sequence end code to have a real mpeg file
    output_buffer[0] = 0x00;
    output_buffer[1] = 0x00;
    output_buffer[2] = 0x01;
    output_buffer[3] = 0xb7;
    fwrite(output_buffer, 1, 4, file);
    fclose(file);

    free(picture_buffer);
    free(output_buffer);
    avcodec_close(codecCtx);
    av_free(codecCtx);
    av_free(picture);

    LOGI("finish");

    return 0;
}

H263 accepts only certain resolutions:
128 x 96
176 x 144
352 x 288
704 x 576
1408 x 1152
It will fail with anything else.
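If you want to fail fast on an unsupported size before opening the encoder, a check along these lines works (a sketch; is_valid_h263_size is a hypothetical helper, not an FFmpeg function):

/* Hypothetical helper: baseline H.263 only allows the five standard
 * picture sizes (SQCIF, QCIF, CIF, 4CIF, 16CIF). */
static int is_valid_h263_size(int width, int height)
{
    static const int sizes[][2] = {
        {128, 96}, {176, 144}, {352, 288}, {704, 576}, {1408, 1152}
    };
    size_t i;
    for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++) {
        if (sizes[i][0] == width && sizes[i][1] == height)
            return 1;
    }
    return 0;
}

/* usage, e.g. before avcodec_open():
 *   if (!is_valid_h263_size(codecCtx->width, codecCtx->height)) {
 *       LOGI("H263 does not support %dx%d", codecCtx->width, codecCtx->height);
 *       return -1;
 *   }
 */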

The code supplied in the question (I used it myself at first) only writes the raw encoder output, so it produces at best a very rudimentary container format.
I found that this example, http://cekirdek.pardus.org.tr/~ismail/ffmpeg-docs/output-example_8c-source.html, worked much better, as it creates a real container for the video and audio streams. My video is now displayable on the Android device.
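For reference, the overall muxing flow in that example looks roughly like the sketch below. This is only a skeleton under the same old FFmpeg API the question uses; the output path is an arbitrary example, error handling is omitted, and details such as how oc->pb is opened (avio_open vs. url_fopen) and whether av_set_parameters is required differ between FFmpeg versions.

/* Rough skeleton of output-example.c-style muxing (old FFmpeg API).
 * Error handling omitted; exact calls vary by FFmpeg version. */
AVOutputFormat *fmt = av_guess_format(NULL, "/sdcard/out.mp4", NULL);
AVFormatContext *oc = avformat_alloc_context();
oc->oformat = fmt;

AVStream *st = av_new_stream(oc, 0);        /* video stream */
AVCodecContext *c = st->codec;
c->codec_id   = fmt->video_codec;           /* e.g. CODEC_ID_MPEG4 */
c->codec_type = AVMEDIA_TYPE_VIDEO;
c->bit_rate   = 400000;
c->width      = 176;
c->height     = 144;
c->time_base  = (AVRational){1, 30};
c->pix_fmt    = PIX_FMT_YUV420P;

avcodec_open(c, avcodec_find_encoder(c->codec_id));

/* some versions need av_set_parameters(oc, NULL) here,
 * and oc->pb is opened with avio_open() or url_fopen() */
av_write_header(oc);

/* per frame: encode into a buffer, wrap it in an AVPacket with
 * pkt.stream_index = st->index, then av_interleaved_write_frame(oc, &pkt) */

av_write_trailer(oc);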

Related

Why is the voltage read by the STM32 ADC inaccurate?

I am using the STM32F103C8T6. I use the ADC for multi-channel voltage acquisition, with a potentiometer to vary the voltage. I find that the ADC readings fluctuate. Why?
This is part of my ADC code:
uint32_t ADC_Get_Average(uint8_t ch, uint8_t times)
{
    ADC_ChannelConfTypeDef sConfig;
    uint32_t value_sum = 0;
    uint8_t i;

    switch(ch)
    {
        case 1: sConfig.Channel = ADC_CHANNEL_1; break;
        case 2: sConfig.Channel = ADC_CHANNEL_2; break;
        case 3: sConfig.Channel = ADC_CHANNEL_3; break;
    }
    sConfig.SamplingTime = ADC_SAMPLETIME_1CYCLE_5;
    sConfig.Rank = 1;
    HAL_ADC_ConfigChannel(&hadc1, &sConfig);

    for(i = 0; i < times; i++)
    {
        HAL_ADC_Start(&hadc1);
        HAL_ADC_PollForConversion(&hadc1, 5);
        value_sum += HAL_ADC_GetValue(&hadc1);
        HAL_ADC_Stop(&hadc1);
    }
    return value_sum / times;
}
void ADC_PROC(void)
{
    ADC_value1 = ADC_Get_Average(1, 5) / 4096.0 * 3.3;
    ADC_value2 = ADC_Get_Average(2, 5) / 4096.0 * 3.3;
    ADC_value3 = ADC_Get_Average(3, 5) / 4096.0 * 3.3;

    sprintf(adcbuff1, "V1:%.2fV", ADC_value1);
    oled_show_string(24, 0, adcbuff1, 2);
    sprintf(adcbuff2, "V2:%.2fV", ADC_value2);
    oled_show_string(24, 2, adcbuff2, 2);
    sprintf(adcbuff3, "V3:%.2fV", ADC_value3);
    oled_show_string(24, 4, adcbuff3, 2);
}
I have tried changing the ADC sampling time, but the result is the same: the readings still fluctuate severely.
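For what it's worth, two things that commonly reduce this kind of jitter on the STM32F1 are a much longer sampling time than ADC_SAMPLETIME_1CYCLE_5 and averaging more conversions. A minimal sketch, assuming the same hadc1 handle and the sConfig channel setup from ADC_Get_Average above (adequate decoupling on VDDA/VREF still matters):

/* Sketch: longest F1 sampling time plus a bigger average to smooth readings. */
sConfig.SamplingTime = ADC_SAMPLETIME_239CYCLES_5;  /* instead of 1.5 cycles */
sConfig.Rank = 1;
HAL_ADC_ConfigChannel(&hadc1, &sConfig);

uint32_t value_sum = 0;
for (uint8_t i = 0; i < 32; i++)        /* average 32 conversions instead of 5 */
{
    HAL_ADC_Start(&hadc1);
    HAL_ADC_PollForConversion(&hadc1, 5);
    value_sum += HAL_ADC_GetValue(&hadc1);
    HAL_ADC_Stop(&hadc1);
}
uint32_t average = value_sum / 32;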

Sand 3D Printer Slicing Issue

For my doctoral thesis I am building a 3D printer based loosely on one from the University of Twente:
http://pwdr.github.io/
So far, everything has gone relatively smoothly. The hardware part took longer than expected, but the electronics frighten me a little bit. I can successfully jog all the motors and, mechanically, everything does what it is supposed to do.
However, now that I am working on the software side, I am getting headaches.
The Pwdr people wrote code that uses Processing to take an .STL file and slice it into layers. Upon running the code, a Processing GUI opens where I can load a model. The model loads fine (I'm using the Utah Teapot) and shows that it will take 149 layers.
Upon hitting "convert" the program is supposed to take the .STL file and slice it into layers, then write a text file that I can upload to an SD card. The printer will then print directly from the SD card.
However, when I hit "convert" I get an "Array Index Out of Bounds" error. I'm not quite sure what this means... can anyone enlighten me?
The code can be found below, along with a picture of the error.
Thank you.
// Convert the graphical output of the sliced STL into a printable binary format.
// The bytes are read by the Arduino firmware
PrintWriter output, outputUpper;
int loc;
int LTR = 0;

int lowernozzles = 8;
int uppernozzles = 4;
int nozzles = lowernozzles+uppernozzles;

int printXcoordinate = 120+280;   // Left margin 120
int printYcoordinate = 30+190;    // Top margin 30
int printWidth = 120;             // Total image width 650
int printHeight = 120;            // Total image height 480
int layer_size = printWidth * printHeight/nozzles * 2;

void convertModel() {
  // Create config file for the printer, trailing comma for convenience
  output = createWriter("PWDR/PWDRCONF.TXT");
  output.print(printWidth+","+printHeight/nozzles+","+maxSlices+","+inkSaturation+",");
  output.flush();
  output.close();

  int index = 0;
  byte[] print_data = new byte[layer_size * 2];

  // Steps of 12 nozzles in Y direction
  for (int y = printYcoordinate; y < printYcoordinate+printHeight; y=y+nozzles ) {
    // Set a variable to know whether we're moving LTR or RTL
    LTR++;
    // Step in X direction
    for (int x = 0; x < printWidth; x++) {
      // Clear the temp strings
      String[] LowerStr = {""};
      String LowerStr2 = "";
      String[] UpperStr = {""};
      String UpperStr2 = "";

      // For every step in Y direction, sample the 12 nozzles
      for ( int i=0; i<nozzles; i++) {
        // Calculate the location in the pixel array, use total window width!
        // Use the LTR to determine the direction
        if (LTR % 2 == 1){
          loc = printXcoordinate + printWidth - x + (y+i) * width;
        } else {
          loc = printXcoordinate + x + (y+i) * width;
        }

        if (brightness(pixels[loc]) < 100) {
          // Write a zero when the pixel is white (or should be white, as the preview is inverted)
          if (i<uppernozzles) {
            UpperStr = append(UpperStr, "0");
          } else {
            LowerStr = append(LowerStr, "0");
          }
        } else {
          // Write a one when the pixel is black
          if (i<uppernozzles) {
            UpperStr = append(UpperStr, "1");
          } else {
            LowerStr = append(LowerStr, "1");
          }
        }
      }

      LowerStr2 = join(LowerStr, "");
      print_data[index] = byte(unbinary(LowerStr2));
      index++;
      UpperStr2 = join(UpperStr, "");
      print_data[index] = byte(unbinary(UpperStr2));
      index++;
    }
  }

  if (sliceNumber >= 1 && sliceNumber < 10){
    String DEST_FILE = "PWDR/PWDR000"+sliceNumber+".DAT";
    File dataFile = sketchFile(DEST_FILE);
    if (dataFile.exists()){
      dataFile.delete();
    }
    saveBytes(DEST_FILE, print_data); // Savebytes directly causes bug under Windows
  } else if (sliceNumber >= 10 && sliceNumber < 100){
    String DEST_FILE = "PWDR/PWDR00"+sliceNumber+".DAT";
    File dataFile = sketchFile(DEST_FILE);
    if (dataFile.exists()){
      dataFile.delete();
    }
    saveBytes(DEST_FILE, print_data); // Savebytes directly causes bug under Windows
  } else if (sliceNumber >= 100 && sliceNumber < 1000){
    String DEST_FILE = "PWDR/PWDR0"+sliceNumber+".DAT";
    File dataFile = sketchFile(DEST_FILE);
    if (dataFile.exists()){
      dataFile.delete();
    }
    saveBytes(DEST_FILE, print_data); // Savebytes directly causes bug under Windows
  } else if (sliceNumber >= 1000) {
    String DEST_FILE = "PWDR/PWDR"+sliceNumber+".DAT";
    File dataFile = sketchFile(DEST_FILE);
    if (dataFile.exists()){
      dataFile.delete();
    }
    saveBytes(DEST_FILE, print_data); // Savebytes directly causes bug under Windows
  }
  sliceNumber++;
  println(sliceNumber);
}
What's happening is that print_data is smaller than the index requires. (For example, if index reaches 122 but print_data only has 122 elements, the valid indices are only 0-121.)
The size of print_data is layer_size * 2, i.e. printWidth * printHeight/nozzles * 4, which is 4800.
The maximum value of index is printHeight/nozzles * 2 * printWidth, i.e. 20 * 120, which is 2400.
That seems fine, so I probably missed something, and the error suggests it is writing to element 4800, which is weird. I suggest adding a bunch of print statements to log the size of print_data and the value of index.

iPhone: Problems encoding 32KHz PCM to 96Kbit AAC using AudioConverterFillComplexBuffer

Has anyone had success converting 32KHz PCM to 96Kbit AAC on iPhone/iOS?
I cannot get this to work correctly on any hardware device. The code I wrote only works correctly in the simulator. When run on a current-generation iPad/iPod/iPhone, my code 'skips' large chunks of audio.
The resulting encoded stream contains a repeating pattern of ~640ms of 'good' audio followed by ~640ms of 'bad' audio.
Encoding both 16bit linear and 8.24 fixed-point PCM yielded the same results.
Here is the code to set up an AudioConverter to encode MPEG4-AAC at 96 kbit/s @ 32 kHz:
AudioStreamBasicDescription descPCMFormat;
descPCMFormat.mSampleRate = 32000;
descPCMFormat.mChannelsPerFrame = 1;
descPCMFormat.mBitsPerChannel = sizeof(AudioUnitSampleType) * 8;
descPCMFormat.mBytesPerPacket = sizeof(AudioUnitSampleType);
descPCMFormat.mFramesPerPacket = 1;
descPCMFormat.mBytesPerFrame = sizeof(AudioUnitSampleType);
descPCMFormat.mFormatID = kAudioFormatLinearPCM;
descPCMFormat.mFormatFlags = kAudioFormatFlagsAudioUnitCanonical;
AudioStreamBasicDescription descAACFormat;
descAACFormat.mSampleRate = 32000;
descAACFormat.mChannelsPerFrame = 1;
descAACFormat.mBitsPerChannel = 0;
descAACFormat.mBytesPerPacket = 0;
descAACFormat.mFramesPerPacket = 1024;
descAACFormat.mBytesPerFrame = 0;
descAACFormat.mFormatID = kAudioFormatMPEG4AAC;
descAACFormat.mFormatFlags = 0;
AudioConverterNew(& descPCMFormat, & descAACFormat, &m_hCodec);
UInt32 ulBitRate = 96000;
UInt32 ulSize = sizeof(ulBitRate);
AudioConverterSetProperty(m_hCodec, kAudioConverterEncodeBitRate, ulSize, & ulBitRate);
Simple conversion routine. This routine is called every 32ms with a block of 1024 PCM samples, and expects 384 bytes of encoded AAC:
OSStatus CMyObj::Convert(
    const AudioUnitSampleType * pSrc,
    const size_t ulSrc,
    uint8_t * pDst,
    size_t & ulDst)
{
    // error and sanity checking removed..
    // assume caller is converting 1024 samples to at most 384 bytes
    OSStatus osStatus;

    m_pSrcPtr = (uint8_t*)pSrc;
    m_ulSrcLen = ulSrc; // verified to be 1024*sizeof(AudioUnitSampleType);

    AudioBufferList destBuffers;
    destBuffers.mNumberBuffers = 1;
    destBuffers.mBuffers[0].mNumberChannels = 1;
    destBuffers.mBuffers[0].mDataByteSize = 384;
    destBuffers.mBuffers[0].mData = pDst;

    AudioStreamPacketDescription destDescription;
    destDescription.mStartOffset = 0;
    destDescription.mVariableFramesInPacket = 0;
    destDescription.mDataByteSize = 384;

    UInt32 ulDstPackets = 1;

    osStatus = AudioConverterFillComplexBuffer(
        m_hCodec,
        InputDataProc,
        this,
        &ulDstPackets,
        &destBuffers,
        &destDescription);

    ulDst = destBuffers.mBuffers[0].mDataByteSize;
    return osStatus;
}
The input data procedure simply provides the 1024 samples to the encoder:
static OSStatus CMyObj::InputDataProc(
    AudioConverterRef hCodec,
    UInt32 *pulSrcPackets,
    AudioBufferList *pSrcBuffers,
    AudioStreamPacketDescription **ppPacketDescription,
    void *pUserData)
{
    // error and sanity checking removed
    CMyObj *pThis = (CMyObj*)pUserData;
    const UInt32 ulMaxSrcPackets = pThis->m_ulSrcLen / sizeof(AudioUnitSampleType);
    const UInt32 ulRetSrcPackets = min(ulMaxSrcPackets, *pulSrcPackets);

    if( ulRetSrcPackets )
    {
        UInt32 ulRetSrcBytes = ulRetSrcPackets * sizeof(AudioUnitSampleType);
        *pulSrcPackets = ulRetSrcPackets;

        pSrcBuffers->mBuffers[0].mData = pThis->m_pSrcPtr;
        pSrcBuffers->mBuffers[0].mDataByteSize = ulRetSrcBytes;
        pSrcBuffers->mBuffers[0].mNumberChannels = 1;

        pThis->m_pSrcPtr += ulRetSrcBytes;
        pThis->m_ulSrcLen -= ulRetSrcBytes;
        return noErr;
    }

    *pulSrcPackets = 0;
    pSrcBuffers->mBuffers[0].mData = NULL;
    pSrcBuffers->mBuffers[0].mDataByteSize = 0;
    pSrcBuffers->mBuffers[0].mNumberChannels = 1;
    return 500; // local error code to signal end-of-packet
}
Everything works fine when run on the simulator.
When run on the device, however, InputDataProc is not called consistently. For up to 20 times in a row, calls to AudioConverterFillComplexBuffer provoke calls to InputDataProc, and everything looks fine. Then, for the next ~ 21 calls to AudioConverterFillComplexBuffer, InputDataProc will NOT be called. This pattern repeats forever:
-> Convert
-> AudioConverterFillComplexBuffer
-> InputDataProc
-> results in 384 bytes of 'good' AAC
-> Convert
-> AudioConverterFillComplexBuffer
-> InputDataProc
-> results in 384 bytes of 'good' AAC
.. repeats up to 18 more times
-> Convert
-> AudioConverterFillComplexBuffer
-> results in 384 bytes of 'bad' AAC
-> Convert
-> AudioConverterFillComplexBuffer
-> results in 384 bytes of 'bad' AAC
.. repeats up to 18 more times
Where is the converter getting the input data to create the 'bad' AAC, since it isn't calling InputDataProc?
Does anyone see anything glaringly wrong with this approach?
Are there any special settings that need to be made on the hardware codec (MagicCookies or something else)?
Does the HW AAC codec support 32000 sample rate?
I found that the default outputBitRate for 32 kHz input PCM is 48000 bits, while the default outputBitRate for 44.1 kHz input PCM is 64000 bits.
When using the default outputBitRate, 32 kHz input produces loud noise.
Even using the code from Apple's sample, 44.1 kHz input has a little noise.
After I fixed the outputBitRate at 64 kbit/s, both 32 kHz and 44.1 kHz work well.
UInt32 outputBitRate = 64000; // 64 kbit/s
UInt32 propSize = sizeof(outputBitRate);
if (AudioConverterSetProperty(m_converter, kAudioConverterEncodeBitRate, propSize, &outputBitRate) != noErr) {
    NSLog(@"upyun.com uplivesdk UPAACEncoder error 102");
}
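If you want to confirm what the converter actually accepted, you can read the bitrate back after setting it. A small sketch, assuming the same m_converter handle as above:

// Sketch: read back the bitrate the AAC converter is actually using.
UInt32 appliedBitRate = 0;
UInt32 size = sizeof(appliedBitRate);
if (AudioConverterGetProperty(m_converter,
                              kAudioConverterEncodeBitRate,
                              &size, &appliedBitRate) == noErr) {
    NSLog(@"AAC encoder bitrate in use: %u", (unsigned int)appliedBitRate);
}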

ffmpeg + libx264 on iPhone -> 'avcodec_encode_video' always returns 0. Please advise

av_register_all();

AVCodec *codec;
AVCodecContext *c = NULL;
int out_size, size, outbuf_size;
//FILE *f;
uint8_t *outbuf;

printf("Video encoding\n");

/* find the mpeg video encoder */
codec = avcodec_find_encoder(CODEC_ID_H264); // or avcodec_find_encoder_by_name("libx264")
NSLog(@"codec = %p", codec);
if (!codec) {
    fprintf(stderr, "codec not found\n");
    exit(1);
}

c = avcodec_alloc_context();

/* put sample parameters */
c->bit_rate = 400000;
c->bit_rate_tolerance = 10;
c->me_method = 2;
/* resolution must be a multiple of two */
c->width = 352;  //width;
c->height = 288; //height;
/* frames per second */
c->time_base = (AVRational){1,25};
c->gop_size = 10; /* emit one intra frame every ten frames */
//c->max_b_frames=1;
c->pix_fmt = PIX_FMT_YUV420P;
c->me_range = 16;
c->max_qdiff = 4;
c->qmin = 10;
c->qmax = 51;
c->qcompress = 0.6f;
'avcodec_encode_video' always returns 0.
I guess that is because of the 'non-strictly-monotonic PTS' warning. Has anyone run into the same situation?
For me it also always returns 0, but it encodes fine. I don't think there is an issue if it returns 0. In avcodec.h you can see this:
"On error a negative value is returned, on success zero or the number of bytes used from the output buffer."
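In other words, a return value of 0 usually just means libx264 buffered the frame (lookahead/B-frame delay) and has not produced output yet. A minimal sketch of handling the return value with this old API, assuming an AVFrame *picture with a valid pts and an open FILE *f, which the snippet above omits:

/* Sketch: 0 from avcodec_encode_video() means "no output for this frame yet". */
out_size = avcodec_encode_video(c, outbuf, outbuf_size, picture);
if (out_size < 0) {
    fprintf(stderr, "encoding error\n");
} else if (out_size > 0) {
    fwrite(outbuf, 1, out_size, f);   /* only write when bytes were produced */
}

/* after the last input frame, drain the delayed frames by passing NULL */
do {
    out_size = avcodec_encode_video(c, outbuf, outbuf_size, NULL);
    if (out_size > 0)
        fwrite(outbuf, 1, out_size, f);
} while (out_size > 0);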

Encoding images to video with ffmpeg

I am trying to encode a series of images into one video file. I am using code from api-example.c; it works, but it gives me weird green colors in the video. I know I need to convert my RGB images to YUV. I found a solution, but it doesn't work either; the colors are no longer green but still very strange. Here is the code:
// Register all formats and codecs
av_register_all();

AVCodec *codec;
AVCodecContext *c = NULL;
int i, out_size, size, outbuf_size;
FILE *f;
AVFrame *picture;
uint8_t *outbuf;

printf("Video encoding\n");

/* find the mpeg video encoder */
codec = avcodec_find_encoder(CODEC_ID_MPEG2VIDEO);
if (!codec) {
    fprintf(stderr, "codec not found\n");
    exit(1);
}

c = avcodec_alloc_context();
picture = avcodec_alloc_frame();

/* put sample parameters */
c->bit_rate = 400000;
/* resolution must be a multiple of two */
c->width = 352;
c->height = 288;
/* frames per second */
c->time_base = (AVRational){1,25};
c->gop_size = 10; /* emit one intra frame every ten frames */
c->max_b_frames = 1;
c->pix_fmt = PIX_FMT_YUV420P;

/* open it */
if (avcodec_open(c, codec) < 0) {
    fprintf(stderr, "could not open codec\n");
    exit(1);
}

f = fopen(filename, "wb");
if (!f) {
    fprintf(stderr, "could not open %s\n", filename);
    exit(1);
}

/* alloc image and output buffer */
outbuf_size = 100000;
outbuf = malloc(outbuf_size);
size = c->width * c->height;

#pragma mark -
AVFrame* outpic = avcodec_alloc_frame();
int nbytes = avpicture_get_size(PIX_FMT_YUV420P, c->width, c->height);

//create buffer for the output image
uint8_t* outbuffer = (uint8_t*)av_malloc(nbytes);

#pragma mark -
for(i=1;i<77;i++) {
    fflush(stdout);

    int numBytes = avpicture_get_size(PIX_FMT_YUV420P, c->width, c->height);
    uint8_t *buffer = (uint8_t *)av_malloc(numBytes*sizeof(uint8_t));

    UIImage *image = [UIImage imageNamed:[NSString stringWithFormat:@"10%d", i]];
    CGImageRef newCgImage = [image CGImage];

    CGDataProviderRef dataProvider = CGImageGetDataProvider(newCgImage);
    CFDataRef bitmapData = CGDataProviderCopyData(dataProvider);
    buffer = (uint8_t *)CFDataGetBytePtr(bitmapData);

    avpicture_fill((AVPicture*)picture, buffer, PIX_FMT_RGB8, c->width, c->height);
    avpicture_fill((AVPicture*)outpic, outbuffer, PIX_FMT_YUV420P, c->width, c->height);

    struct SwsContext* fooContext = sws_getContext(c->width, c->height,
                                                   PIX_FMT_RGB8,
                                                   c->width, c->height,
                                                   PIX_FMT_YUV420P,
                                                   SWS_FAST_BILINEAR, NULL, NULL, NULL);

    //perform the conversion
    sws_scale(fooContext, picture->data, picture->linesize, 0, c->height, outpic->data, outpic->linesize);
    // Here is where I try to convert to YUV

    /* encode the image */
    out_size = avcodec_encode_video(c, outbuf, outbuf_size, outpic);
    printf("encoding frame %3d (size=%5d)\n", i, out_size);
    fwrite(outbuf, 1, out_size, f);

    free(buffer);
    buffer = NULL;
}

/* get the delayed frames */
for(; out_size; i++) {
    fflush(stdout);
    out_size = avcodec_encode_video(c, outbuf, outbuf_size, NULL);
    printf("write frame %3d (size=%5d)\n", i, out_size);
    fwrite(outbuf, 1, outbuf_size, f);
}

/* add sequence end code to have a real mpeg file */
outbuf[0] = 0x00;
outbuf[1] = 0x00;
outbuf[2] = 0x01;
outbuf[3] = 0xb7;
fwrite(outbuf, 1, 4, f);
fclose(f);
free(outbuf);

avcodec_close(c);
av_free(c);
av_free(picture);
printf("\n");
Please give me advice on how to fix this problem.
You can look at the article http://unick-soft.ru/Articles.cgi?id=20. It is in Russian, but it includes code samples and a Visual Studio example.
Has anyone found a fix for this? I am seeing the green video problem on the decode side, that is, when I decode incoming PIX_FMT_YUV420 packets and then sws_scale them to PIX_FMT_RGBA.
Thanks!
EDIT:
The green images are probably due to an ARM optimization backfiring. I used this to fix the problem in my case:
http://ffmpeg-users.933282.n4.nabble.com/green-distorded-output-image-on-iPhone-td2231805.html
I guess the idea is to not specify any architecture (configure will give you a warning about the architecture being unknown, but you can continue to 'make' anyway). That way, the ARM optimizations are not used. There may be a slight performance hit (if any), but at least it works! :)
I think the problem is most likely that you are using PIX_FMT_RGB8 as your input pixel format. This does not mean 8 bits per channel like the commonly used 24-bit RGB or 32-bit ARGB. It means 8 bits per pixel, meaning that all three color channels are housed in a single byte. I am guessing that this is not the format of your image since it is quite uncommon, so you need to use PIX_FMT_RGB24 or PIX_FMT_RGB32 depending on whether or not your input image has an alpha channel. See this documentation page for info on the pixel formats.
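If the source images really are 24-bit RGB or 32-bit (A)RGB, the conversion setup would look roughly like this (a sketch only; whether you need PIX_FMT_RGB24, PIX_FMT_RGB32, or a BGR variant depends on how the CGImage actually lays out its bytes, and the input buffer must be sized for that format):

/* Sketch: treat the input as 24-bit RGB (3 bytes per pixel) instead of PIX_FMT_RGB8.
 * Assumes c, picture, outpic, buffer and outbuffer as in the question's code. */
avpicture_fill((AVPicture*)picture, buffer, PIX_FMT_RGB24, c->width, c->height);
avpicture_fill((AVPicture*)outpic, outbuffer, PIX_FMT_YUV420P, c->width, c->height);

struct SwsContext* ctx = sws_getContext(c->width, c->height, PIX_FMT_RGB24,
                                        c->width, c->height, PIX_FMT_YUV420P,
                                        SWS_FAST_BILINEAR, NULL, NULL, NULL);
sws_scale(ctx, picture->data, picture->linesize, 0, c->height,
          outpic->data, outpic->linesize);
sws_freeContext(ctx);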