Low performance of MPEG-1 encoding using libavcodec

I'm using the MPEG-1 codec to create proxy videos of my HD and SD content.
The MPEG-1 resolution and bitrate are:
For the HD video proxy: 704x576, 4.5 Mbit/s
For the SD video proxy: 352x288, 1.5 Mbit/s
My video params are set as shown below. How can I get more performance by changing any of the params?
Best regards,
AVCodecContext *c = &m_OutputCodecCtxVideo;
avcodec_get_context_defaults2( c, AVMEDIA_TYPE_VIDEO );
c->codec_id = CODEC_ID_MPEG1VIDEO;
c->codec_type = AVMEDIA_TYPE_VIDEO;
c->pix_fmt = PIX_FMT_YUV420P;
if( bHD ){
    c->width = 704;
    c->height = 576;
    c->sample_aspect_ratio.num = 16;
    c->sample_aspect_ratio.den = 11;
    c->bit_rate = 4500000;
}
else{
    c->width = 352;
    c->height = 288;
    c->sample_aspect_ratio.num = 12;
    c->sample_aspect_ratio.den = 11;
    c->bit_rate = 1500000;
}
c->time_base.num = 1;
c->time_base.den = 25;
c->gop_size = 15;
c->max_b_frames = 2;
// Needed to avoid using macroblocks in which some coeffs overflow.
// This does not happen with normal video, it just happens here as
// the motion of the chroma plane does not match the luma plane.
c->mb_decision = 2;
// some formats want stream headers to be separate
if( m_pFormatCtxDst->oformat->flags & AVFMT_GLOBALHEADER )
    c->flags |= CODEC_FLAG_GLOBAL_HEADER;
m_OutputCodecCtxVideo.thread_count = 0;
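One lever that often matters here is threading: on libavcodec versions of this era, `thread_count = 0` does not necessarily mean auto-detect, and the MPEG-1/2 encoder parallelizes via slice threading. A hedged sketch (the constants exist in avcodec.h, but exact behavior depends on your build):

/* Sketch, not from the original question: enable slice-threaded encoding. */
/* On very old builds, avcodec_thread_init(c, n) is the equivalent call.   */
c->thread_count = 4;               /* e.g. number of physical cores */
c->thread_type  = FF_THREAD_SLICE; /* MPEG-1/2 encoders thread per slice */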

Related

Save frame from TangoService_connectOnFrameAvailable

How can I save a frame via TangoService_connectOnFrameAvailable() and display it correctly on my computer? As this reference page mentions, the pixels are stored in the HAL_PIXEL_FORMAT_YV12 format. In my callback function for TangoService_connectOnFrameAvailable, I save the frame like this:
static void onColorFrameAvailable(void* context, TangoCameraId id, const TangoImageBuffer* buffer)
{
    ...
    std::ofstream fp;
    fp.open(imagefile, std::ios::out | std::ios::binary);
    int offset = 0;
    for (int i = 0; i < buffer->height * 2 + 1; i++) {
        fp.write((char*)(buffer->data + offset), buffer->width);
        offset += buffer->stride;
    }
    fp.close();
}
Then to get rid of the meta data in the first row and to display the image I run:
$ dd if="input.raw" of="new.raw" bs=1 skip=1280
$ vooya new.raw
I was careful to make sure in vooya that the channel order is yvu. The resulting output is:
What am I doing wrong in saving the image and displaying it?
UPDATE per Mark Mullin's response:
int offset = buffer->stride; // header offset
// copy Y channel
for (int i = 0; i < buffer->height; i++) {
    fp.write((char*)(buffer->data + offset), buffer->width);
    offset += buffer->stride;
}
// copy V channel
for (int i = 0; i < buffer->height / 2; i++) {
    fp.write((char*)(buffer->data + offset), buffer->width / 2);
    offset += buffer->stride / 2;
}
// copy U channel
for (int i = 0; i < buffer->height / 2; i++) {
    fp.write((char*)(buffer->data + offset), buffer->width / 2);
    offset += buffer->stride / 2;
}
This now shows the picture below, but there are still some artifacts; I wonder if that's from the Tango tablet camera or my processing of the raw data... any thoughts?
I can't say exactly what you're doing wrong, and Tango images often have artifacts in them. Yours are new to me, but I often see baby blue as a color where glare seems to be annoying the deeper systems, and as it begins to lose sync with the depth system under load you'll often see what looks like a shiny grid (it's the IR pattern, I think). In the end, any rational attempt to handle the image with OpenCV etc. failed, so I hand-wrote the decoder with some help from the SO thread here.
That said, given that the image buffer contains a pointer to the raw data from Tango, and various other variables like height and stride are filled in from the data received in the callback, this logic will create an RGBA map. Yes, I optimized the math in it, so it's a little ugly; its slower but functionally equivalent twin is listed second. My own experience says it's a horrible idea to try to do this decode right in the callback (I believe Tango is capable of losing sync with the flash for depth for purely spiteful reasons), so mine runs at the render stage.
Fast
uchar* pData = TangoData::cameraImageBuffer;
uchar* iData = TangoData::cameraImageBufferRGBA;
int size = (int)(TangoData::imageBufferStride * TangoData::imageBufferHeight);
float invByte = 0.0039215686274509803921568627451f; // (1 / 255)
int halfi, uvOffset, halfj, uvOffsetHalfj;
float y_scaled, v_scaled, u_scaled;
int uOffset = size / 4 + size;
int halfstride = TangoData::imageBufferStride / 2;
// Vmax and Umax are chroma range constants defined elsewhere (not shown here).
for (int i = 0; i < TangoData::imageBufferHeight; ++i)
{
    halfi = i / 2;
    uvOffset = halfi * halfstride;
    for (int j = 0; j < TangoData::imageBufferWidth; ++j)
    {
        halfj = j / 2;
        uvOffsetHalfj = uvOffset + halfj;
        y_scaled = pData[i * TangoData::imageBufferStride + j] * invByte;
        v_scaled = 2 * (pData[uvOffsetHalfj + size] * invByte - 0.5f) * Vmax;
        u_scaled = 2 * (pData[uvOffsetHalfj + uOffset] * invByte - 0.5f) * Umax;
        *iData++ = (uchar)((y_scaled + 1.13983f * v_scaled) * 255.0);
        *iData++ = (uchar)((y_scaled - 0.39465f * u_scaled - 0.58060f * v_scaled) * 255.0);
        *iData++ = (uchar)((y_scaled + 2.03211f * u_scaled) * 255.0);
        *iData++ = 255;
    }
}
Understandable
for (int i = 0; i < TangoData::imageBufferHeight; ++i)
{
    for (int j = 0; j < TangoData::imageBufferWidth; ++j)
    {
        uchar y = pData[i * TangoData::imageBufferStride + j];
        uchar v = pData[(i / 2) * (TangoData::imageBufferStride / 2) + (j / 2) + size];
        uchar u = pData[(i / 2) * (TangoData::imageBufferStride / 2) + (j / 2) + size + (size / 4)];
        YUV2RGB(y, u, v); // converts in place; y/u/v come back holding R/G/B
        *iData++ = y;
        *iData++ = u;
        *iData++ = v;
        *iData++ = 255;
    }
}
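The YUV2RGB helper used in the "understandable" version isn't shown; presumably it converts in place through its arguments. A minimal sketch, assuming the same BT.601-style coefficients as the optimized loop above (the Vmax/Umax scaling is omitted since those constants aren't shown either):

// Hypothetical YUV2RGB: converts in place, so y/u/v come back holding R/G/B.
static void YUV2RGB(uchar& y, uchar& u, uchar& v)
{
    float yf = y / 255.0f;
    float uf = 2.0f * (u / 255.0f - 0.5f);
    float vf = 2.0f * (v / 255.0f - 0.5f);
    float r = yf + 1.13983f * vf;
    float g = yf - 0.39465f * uf - 0.58060f * vf;
    float b = yf + 2.03211f * uf;
    // Clamp to [0,1] before scaling back to bytes.
    auto clamp01 = [](float x) { return x < 0.f ? 0.f : (x > 1.f ? 1.f : x); };
    y = (uchar)(clamp01(r) * 255.0f);
    u = (uchar)(clamp01(g) * 255.0f);
    v = (uchar)(clamp01(b) * 255.0f);
}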
I think there is a better way to do it if you can do it offline.
The best way to save the image should be something like this (don't forget to create the Pictures folder, or you won't save anything):
void onFrameAvailableRouter(void* context, TangoCameraId id, const TangoImageBuffer* buffer) {
    // Write the image to a .txt file.
    std::stringstream name_stream;
    name_stream.setf(std::ios_base::fixed, std::ios_base::floatfield);
    name_stream.precision(3);
    name_stream << "/storage/emulated/0/Pictures/"
                << cur_frame_timstamp_
                << ".txt";
    std::fstream f(name_stream.str().c_str(), std::ios::out | std::ios::binary);
    // size = 1280*720*1.5 to save YUV, or 1280*720 to save grayscale
    int size = stride_ * height_ * 1.5;
    f.write((const char*)buffer->data, size * sizeof(uint8_t));
    f.close();
}
Then, to convert the .txt file to PNG, you can do this:
# Assumed imports and constants (not shown in the original snippet):
import glob
import numpy as np
import cv2
from os import listdir, makedirs

width, height, channels = 1280, 720, 4  # matches the 1280*720*1.5 buffer size
count = 0

inputFolder = "input"
outputFolderRGB = "output/rgb"
outputFolderGray = "output/gray"
input_filename = "timestamp.txt"
output_filename = "rgb.png"
allFile = listdir(inputFolder)
numberOfFile = len(allFile)
if "input" in glob.glob("*"):
    if "output/rgb" in glob.glob("output/*"):
        print ""
    else:
        makedirs("output/rgb")
    if "output/gray" in glob.glob("output/*"):
        print ""
    else:
        makedirs("output/gray")
    # The output directories are ready
    for file in allFile:
        count += 1
        print "current file : ", count, "/", numberOfFile
        input_filename = file
        output_filename = input_filename[0:(len(input_filename) - 3)] + "png"
        # load file into buffer
        data = np.fromfile(inputFolder + "/" + input_filename, dtype=np.uint8)
        # To get an RGB image:
        # create yuv image
        yuv = np.ndarray((height + height / 2, width), dtype=np.uint8, buffer=data)
        # create a height x width x channels matrix with dtype uint8 for the rgb image
        img = np.zeros((height, width, channels), dtype=np.uint8)
        # convert yuv image to rgb image
        cv2.cvtColor(yuv, cv2.COLOR_YUV2BGRA_NV21, img, channels)
        cv2.imwrite(outputFolderRGB + "/" + output_filename, img)
        # If you saved the image in grayscale, use this part instead:
        # yuvReal = np.ndarray((height, width), dtype=np.uint8, buffer=data)
        # cv2.imwrite(outputFolderGray + "/" + output_filename, yuvReal)
else:
    print "not any input"
You just have to put your .txt files in a folder named input.
It's a Python script, but if you prefer a C++ version, it's very close.
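For reference, a minimal sketch of what that C++ version could look like, assuming the same 1280x720 buffer layout and the NV21 conversion code used in the Python script above (the function name and dimensions are assumptions):

#include <fstream>
#include <iterator>
#include <string>
#include <vector>
#include <opencv2/opencv.hpp>

// Convert one raw YUV dump (as written by onFrameAvailableRouter) to a PNG.
void convertDump(const std::string& inPath, const std::string& outPath)
{
    const int width = 1280, height = 720; // assumed, matching 1280*720*1.5
    std::ifstream f(inPath, std::ios::binary);
    std::vector<uchar> data((std::istreambuf_iterator<char>(f)),
                            std::istreambuf_iterator<char>());
    cv::Mat yuv(height + height / 2, width, CV_8UC1, data.data());
    cv::Mat bgra;
    cv::cvtColor(yuv, bgra, cv::COLOR_YUV2BGRA_NV21);
    cv::imwrite(outPath, bgra);
}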

Properly displaying cropped images in iOS v5.0.1 and below + ALAssetLibrary

I am trying to display photos from the camera roll within my application using ALAssetLibrary. All images display fine except for cropped images: ALAsset returns the unedited version of the image when using the fullResolutionImage method of ALAssetRepresentation, instead of the cropped version.
So I am trying to extract the cropping information for ALAsset objects from their metadata. I googled and found that the cropping information is contained in the AdjustmentXMP key of the metadata on my ALAssetRepresentation object. Using the above info I am able to display cropped images correctly within my app, but the approach works on v5.1 and above and fails on iOS v5.0.
Metadata dictionary for a cropped image on iOS 5.1:
{
    AdjustmentXMP = "<x:xmpmeta xmlns:x=\"adobe:ns:meta/\" x:xmptk=\"XMP Core 4.4.0\">\n <rdf:RDF xmlns:rdf=\"http://www.w3.org/1999/02/22-rdf-syntax-ns#\">\n <rdf:Description rdf:about=\"\"\n xmlns:aas=\"http://ns.apple.com/adjustment-settings/1.0/\">\n <aas:AffineA>1</aas:AffineA>\n <aas:AffineB>0</aas:AffineB>\n <aas:AffineC>0</aas:AffineC>\n <aas:AffineD>1</aas:AffineD>\n <aas:AffineX>-266</aas:AffineX>\n <aas:AffineY>-589</aas:AffineY>\n <aas:CropX>0</aas:CropX>\n <aas:CropY>0</aas:CropY>\n <aas:CropW>270</aas:CropW>\n <aas:CropH>162</aas:CropH>\n </rdf:Description>\n </rdf:RDF>\n</x:xmpmeta>\n";
    ColorModel = RGB;
    DPIHeight = 72;
    DPIWidth = 72;
    Depth = 8;
    Orientation = 1;
    PixelHeight = 1024;
    PixelWidth = 768;
    "{Exif}" = {
        ApertureValue = "2.526069";
        BrightnessValue = "0.1544926";
        ColorSpace = 1;
        ComponentsConfiguration = (0, 0, 0, 1);
        DateTimeDigitized = "2013:01:22 14:12:59";
        DateTimeOriginal = "2013:01:22 14:12:59";
        ExifVersion = (2, 2);
        ExposureMode = 0;
        ExposureProgram = 2;
        ExposureTime = "0.06666667";
        FNumber = "2.4";
        Flash = 16;
        FlashPixVersion = (1, 0);
        FocalLenIn35mmFilm = 33;
        FocalLength = "4.13";
        ISOSpeedRatings = (400);
        MeteringMode = 5;
        PixelXDimension = 768;
        PixelYDimension = 1024;
        SceneCaptureType = 0;
        SensingMethod = 2;
        ShutterSpeedValue = "3.906905";
        SubjectArea = (1631, 1223, 881, 881);
        WhiteBalance = 0;
    };
    "{GPS}" = {
        Altitude = "216.1379";
        AltitudeRef = 0;
        DateStamp = "2013:01:22";
        Latitude = "28.46366666666667";
        LatitudeRef = N;
        Longitude = "77.04916666666666";
        LongitudeRef = E;
        TimeStamp = "08:42:56.00";
    };
    "{TIFF}" = {
        DateTime = "2013:01:22 14:14:39";
        Make = Apple;
        Model = "iPhone 5";
        Orientation = 1;
        ResolutionUnit = 2;
        Software = "QuickTime 7.7.1";
        XResolution = 72;
        YResolution = 72;
        "_YCbCrPositioning" = 1;
    };
}
Metadata dictionary for a cropped image on iOS 5.0.1:
Metadata: {
    ColorModel = RGB;
    Depth = 8;
    PixelHeight = 2048;
    PixelWidth = 1078;
    "{JFIF}" = {
        DensityUnit = 0;
        JFIFVersion = (1, 1);
        XDensity = 1;
        YDensity = 1;
    };
}
As you can see above, the metadata dictionary on v5.1 contains the AdjustmentXMP key, which holds the cropping information, while the same dictionary on v5.0.1 doesn't have the AdjustmentXMP key at all. So the cropping fails on v5.0.1.
Any pointers as to how to display cropped images on devices with iOS v5.0.1 and below?
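For the part that does work, one direction worth sketching: the aas:Crop*/Affine* values in the XMP above describe the edit, so where the XMP exists you can crop the full-resolution image yourself with plain CoreGraphics once you've parsed those fields out of the AdjustmentXMP string. A rough sketch (the helper name and the scale factor are assumptions; the XMP values appear to be expressed against a downscaled rendition, so verify against PixelWidth/PixelHeight). On 5.0.1 the XMP is simply absent, so this only helps on 5.1+:

#include <CoreGraphics/CoreGraphics.h>

// Hypothetical: affineX/affineY/cropW/cropH come from parsing the aas:* fields.
// Caller owns the returned image, per the CGImageCreate* naming rule.
CGImageRef createCroppedImage(CGImageRef fullResolution,
                              double affineX, double affineY,
                              double cropW, double cropH, double scale)
{
    CGRect cropRect = CGRectMake(-affineX * scale, -affineY * scale,
                                 cropW * scale, cropH * scale);
    return CGImageCreateWithImageInRect(fullResolution, cropRect);
}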

Converting MATLAB Gaussian derivatives to OpenCV

I'm trying to convert an old exercise I made in MATLAB to OpenCV. The code is posted below. I haven't been able to find any functions in OpenCV that do what I want; that might be because they have different names than I expect.
Here are the outputs when taking the max response at each location as the label. Clearly something's wrong.
Here is the MATLAB code:
function responses = getBifResponsesEx(im, myEps, sigma, kernelSize)
    if (nargin == 3)
        if (sigma >= 1)
            kernelSize = 6*sigma + 1;
        else
            kernelSize = 7;
        end
    end
    responses = zeros(size(im,1), size(im,2), 7);
    %
    % Gaussian derivatives
    %
    kernVal = ceil(kernelSize/2) - 1;
    x = (-kernVal:kernVal);
    g = 1/(2*pi*sigma^2)*exp(-(x.^2./(2*(sigma^2))));
    g = g/sum(g);
    dg = -2*x/(2*sigma^2).*g*sigma;
    ddg = ((2*x/(2*sigma^2)).^2 - 1/(sigma^2)).*g*sigma;
    %
    % Gaussian convolution of the image
    %
    s00 = filter2(g, im);
    s00 = filter2(g', s00);
    s10 = filter2(g', im);
    s10 = filter2(dg, s10);
    s01 = filter2(g, im);
    s01 = filter2(dg', s01);
    s11 = filter2(dg, im);
    s11 = filter2(dg', s11);
    s20 = filter2(g', im);
    s20 = filter2(g', s20);
    s20 = filter2(ddg, s20);
    s02 = filter2(g, im);
    s02 = filter2(g, s02);
    s02 = filter2(ddg', s02);
    %
    % Symmetry types - MISSING CODE!!!!
    %
    lam = sigma^2*(s20+s02);
    gam = sigma^2*(sqrt((s20-s02).^2+4*s11.^2));
    responses(:,:,1) = myEps*s00;
    responses(:,:,2) = 2*sigma*sqrt(s10.^2+s01.^2);
    responses(:,:,3) = +lam;
    responses(:,:,4) = -lam;
    responses(:,:,5) = 2^-.5*(gam+lam);
    responses(:,:,6) = 2^-.5*(gam-lam);
    responses(:,:,7) = gam;
end
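For reference, the kernels being built above are the sampled Gaussian and its sigma-normalized derivatives; written out, the MATLAB expressions for dg and ddg come to:

$$g(x) = \frac{1}{2\pi\sigma^2}\, e^{-x^2/(2\sigma^2)},\qquad \mathrm{dg}(x) = \sigma\, g'(x) = -\frac{x}{\sigma^2}\,\sigma\, g(x),\qquad \mathrm{ddg}(x) = \sigma\, g''(x) = \left(\frac{x^2}{\sigma^4} - \frac{1}{\sigma^2}\right)\sigma\, g(x).$$

Any OpenCV port has to reproduce exactly these one-dimensional kernels (and their orientation) for the s20/s02 terms to match.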
And here is my converted version. From what I can see, it goes wrong with the s20 and s02 responses. Can anyone tell me what to do?
void extract_bif_features(const cv::Mat& src,
                          std::vector<cv::Mat>& dst, BIFParams params)
{
    float sigma = params.sigma;
    float n = 0;
    int kernelSize;
    if (sigma >= 1)
        kernelSize = 6*sigma + 1;
    else
        kernelSize = 7;
    cv::Mat gray, p00, p10, p01, p11, p20, p02;
    cv::cvtColor(src, gray, CV_BGR2GRAY);
    auto kernVal = (int)ceil(kernelSize/2.0) - 1;
    cv::Mat_<float> g(1, kernelSize);   float* gp = g.ptr<float>();
    cv::Mat_<float> dg(1, kernelSize);  float* dgp = dg.ptr<float>();
    cv::Mat_<float> ddg(1, kernelSize); float* ddgp = ddg.ptr<float>();
    cv::Mat_<float> X(1, kernelSize);   float* xp = X.ptr<float>();
    auto gsum = 0.0f;
    for (int x = -kernVal; x <= kernVal; ++x)
    {
        xp[x+kernVal] = x;
        gp[x+kernVal] = 1/(2*CV_PI*sigma*sigma)*exp(-(x*x/(2*(sigma*sigma))));
        gsum += gp[x+kernVal];
    }
    g = g/gsum;
    cv::multiply((-2*X / (2*sigma*sigma)), g*sigma, dg);
    cv::pow((2*X/(2*sigma*sigma)), 2, ddg);
    ddg -= 1/(sigma*sigma);
    cv::multiply(ddg, g*sigma, ddg);
    std::cout << ddg << std::endl;
    std::cout << dg << std::endl;
    cv::sepFilter2D(gray, p00, CV_32FC1, g, g, cv::Point(-1,-1), 0.0, cv::BORDER_REPLICATE);
    cv::sepFilter2D(gray, p01, CV_32FC1, dg, g, cv::Point(-1,-1), 0.0, cv::BORDER_REPLICATE);
    cv::sepFilter2D(gray, p10, CV_32FC1, g, dg, cv::Point(-1,-1), 0.0, cv::BORDER_REPLICATE);
    cv::sepFilter2D(gray, p11, CV_32FC1, dg, dg, cv::Point(-1,-1), 0.0, cv::BORDER_REPLICATE);
    // NOT SURE HERE
    cv::sepFilter2D(gray, p20, CV_32FC1, g, ddg, cv::Point(-1,-1), 0.0, cv::BORDER_REPLICATE);
    //cv::sepFilter2D(p20, p20, CV_32FC1, 1, g, cv::Point(-1,-1), 0.0, cv::BORDER_REPLICATE);
    //cv::sepFilter2D(gray, p02, CV_32FC1, g, ddg, cv::Point(-1,-1), 0.0, cv::BORDER_REPLICATE);
    //cv::sepFilter2D(p02, p02, CV_32FC1, g, 1, cv::Point(-1,-1), 0.0, cv::BORDER_REPLICATE);
    cv::filter2D(gray, p20, CV_32FC1, g, cv::Point(-1,-1), 0.0, cv::BORDER_REPLICATE);
    cv::filter2D(p20, p20, CV_32FC1, g, cv::Point(-1,-1), 0.0, cv::BORDER_REPLICATE);
    cv::filter2D(p20, p20, CV_32FC1, ddg, cv::Point(-1,-1), 0.0, cv::BORDER_REPLICATE);
    cv::filter2D(gray, p02, CV_32FC1, g, cv::Point(-1,-1), 0.0, cv::BORDER_REPLICATE);
    cv::filter2D(p02, p02, CV_32FC1, g.t(), cv::Point(-1,-1), 0.0, cv::BORDER_REPLICATE);
    cv::filter2D(p02, p02, CV_32FC1, ddg.t(), cv::Point(-1,-1), 0.0, cv::BORDER_REPLICATE);
    dst.resize(6);
    auto sigma_square = sigma*sigma;
    cv::Mat Lam = sigma_square * (p20+p02);
    cv::Mat Gam;
    cv::sqrt((((p20-p02)*(p20-p02)) + 4*p11*p11), Gam);
    Gam *= sigma_square;
    cv::Mat test = p10*p10;
    // slope
    cv::sqrt(p10*p10 + p01*p01, dst[0]);
    dst[0] = dst[0]*2*sigma; // slope
    // blob
    dst[1] = Lam;
    dst[2] = -1*Lam;
    // line
    dst[3] = sqrt(2.0f)*(Gam+Lam);
    dst[4] = sqrt(2.0f)*(Gam-Lam);
    // saddle
    dst[5] = Gam;
}
The answer, from what I've got so far:
cv::multiply((p20-p02), (p20-p02), Gam); is not the same as Gam = (p20-p02)*(p20-p02);
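That difference is easy to see in isolation: for cv::Mat, operator* is the matrix product, while .mul() / cv::multiply is element-wise. A quick check:

#include <iostream>
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat a = (cv::Mat_<float>(2, 2) << 1, 2, 3, 4);
    cv::Mat prod = a * a;    // matrix product:  [7, 10; 15, 22]
    cv::Mat elem = a.mul(a); // element-wise:    [1, 4; 9, 16]
    std::cout << prod << std::endl << elem << std::endl;
    return 0;
}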
Full code (classify according to the highest response, Griffin (2008)):
RIDAR_API void extract_bif_features(const cv::Mat& src,
                                    std::vector<cv::Mat>& dst, BIFParams params)
{
    float sigma = params.sigma;
    float eta = params.eta;
    int kernelSize;
    if (sigma >= 1)
        kernelSize = 4*sigma + 1;
    else
        kernelSize = 5;
    auto kernVal = (int)ceil(kernelSize/2.0) - 1;
    cv::Mat_<float> g(1, kernelSize); float* gp = g.ptr<float>();
    cv::Mat_<float> X(1, kernelSize); float* xp = X.ptr<float>();
    auto gsum = 0.0f;
    for (int x = -kernVal; x <= kernVal; ++x)
    {
        xp[x+kernVal] = x;
        gp[x+kernVal] = 1/(2*CV_PI*sigma*sigma)*exp(-(x*x/(2*(sigma*sigma))));
        gsum += gp[x+kernVal];
    }
    g = g/gsum;
    cv::Mat dg = -2*X.mul(g*sigma) / (2*sigma*sigma);
    cv::Mat ddg = ((2*X/(2*sigma*sigma)).mul((2*X/(2*sigma*sigma))) - 1/(sigma*sigma)).mul(g*sigma);
    cv::Mat gray, p00, p10, p01, p11, p20, p02;
    cv::cvtColor(src, gray, CV_BGR2GRAY);
    cv::filter2D(gray, p00, CV_32FC1, g);
    cv::filter2D(p00, p00, CV_32FC1, g.t());
    cv::filter2D(gray, p10, CV_32FC1, g.t());
    cv::filter2D(p10, p10, CV_32FC1, dg);
    cv::filter2D(gray, p01, CV_32FC1, g);
    cv::filter2D(p01, p01, CV_32FC1, dg.t());
    cv::filter2D(gray, p11, CV_32FC1, dg);
    cv::filter2D(p11, p11, CV_32FC1, dg.t());
    cv::filter2D(gray, p20, CV_32FC1, g.t());
    cv::filter2D(p20, p20, CV_32FC1, g.t());
    cv::filter2D(p20, p20, CV_32FC1, ddg);
    cv::filter2D(gray, p02, CV_32FC1, g);
    cv::filter2D(p02, p02, CV_32FC1, g);
    cv::filter2D(p02, p02, CV_32FC1, ddg.t());
#ifdef DISPLAY_WHILE_RUNNING
    double max, min;
    cv::imshow("p00", p00/255);
    cv::minMaxIdx(p01, &min, &max);
    cv::imshow("p01", (p01-min)/(max-min));
    cv::minMaxIdx(p10, &min, &max);
    cv::imshow("p10", (p10-min)/(max-min));
    cv::minMaxIdx(p11, &min, &max);
    cv::imshow("p11", (p11-min)/(max-min));
    cv::minMaxIdx(p02, &min, &max);
    cv::imshow("p02", (p02-min)/(max-min));
    cv::minMaxIdx(p20, &min, &max);
    cv::imshow("p20", (p20-min)/(max-min));
    cv::waitKey();
#endif
    dst.resize(7);
    auto sigma_square = sigma*sigma;
    auto p2d = p20-p02;
    // LAM
    dst[2] = sigma_square * (p20+p02);
    // GAM
    cv::sqrt((p2d).mul(p2d) + (4.0f * p11.mul(p11)), dst[6]);
    dst[6] = dst[6] * sigma_square;
    // FLAT
    dst[0] = eta*p00;
    // slope
    cv::sqrt(p10.mul(p10) + p01.mul(p01), dst[1]);
    dst[1] *= 2.0f*sigma;
    // blob: dst[2] (positive) computed above
    dst[3] = -dst[2];
    // line
    dst[4] = pow(2.0, -0.5)*(dst[6]+dst[2]);
    dst[5] = pow(2.0, -0.5)*(dst[6]-dst[2]);
    // saddle: dst[6] computed above
#ifdef DISPLAY_WHILE_RUNNING
    // min/max are still in scope from the first DISPLAY_WHILE_RUNNING block.
    cv::minMaxIdx(dst[0], &min, &max);
    cv::imshow("FLAT", (dst[0]-min)/(max-min));
    cv::minMaxIdx(dst[1], &min, &max);
    cv::imshow("SLOPE", (dst[1]-min)/(max-min));
    cv::minMaxIdx(dst[2], &min, &max);
    cv::imshow("BLOB+", (dst[2]-min)/(max-min));
    cv::minMaxIdx(dst[3], &min, &max);
    cv::imshow("BLOB-", (dst[3]-min)/(max-min));
    cv::minMaxIdx(dst[4], &min, &max);
    cv::imshow("LINE+", (dst[4]-min)/(max-min));
    cv::minMaxIdx(dst[5], &min, &max);
    cv::imshow("LINE-", (dst[5]-min)/(max-min));
    cv::minMaxIdx(dst[6], &min, &max);
    cv::imshow("SADDLE", (dst[6]-min)/(max-min));
    cv::waitKey();
#endif
}
Why don't you take the average of dg and ddg?
cv::filter2D(gray, p20, CV_32FC1, g.t());
cv::filter2D(p20, p20, CV_32FC1, g.t());
Why filter twice here?
// GAM
cv::sqrt((p2d).mul(p2d) + (4.0f * p11.mul(p11)), dst[6]);
dst[6] = dst[6] * sigma_square;
Where did you get this formula?

iPhone EXIF data

I am using the iPhone EXIF data from a photo that I am capturing.
Currently the EXIF data I am getting back is:
{
    ApertureValue = "2.970854";
    ColorSpace = 1;
    ComponentsConfiguration = (1,2,3,);
    ExifVersion = (2, 2, 1);
    ExposureMode = 0;
    ExposureProgram = 2;
    ExposureTime = "0.03333334";
    FNumber = "2.8";
    Flash = 16;
    FlashPixVersion = (1, 0);
    FocalLength = "3.85";
    ISOSpeedRatings = (500);
    MeteringMode = 3;
    PixelXDimension = 640;
    PixelYDimension = 480;
    SceneCaptureType = 0;
    SensingMethod = 2;
    Sharpness = 2;
    ShutterSpeedValue = "4.911055";
    SubjectArea = (319, 239, 230, 172);
    WhiteBalance = 0;
}
I want to be able to add some fields, such as geolocation, date, and time.
How can I specify new keys that I want returned?
Does anyone know how this is possible in iOS 4.1?
I recently struggled with the same thing. See my answer to this question, which shows how to do geotagging and date/time.
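For the "add new keys" part specifically: EXIF/GPS fields aren't something you ask the camera API to return; you merge them into the image's property dictionary and re-save it through ImageIO (available since iOS 4.0). A hedged sketch using the C-level ImageIO API (the helper name and the prebuilt gpsDict are assumptions, not from the original answer):

#include <ImageIO/ImageIO.h>
#include <CoreFoundation/CoreFoundation.h>

// Re-save an image with a {GPS} dictionary merged into its metadata.
void addGPS(CGImageSourceRef source, CFURLRef destURL, CFDictionaryRef gpsDict)
{
    CFDictionaryRef props = CGImageSourceCopyPropertiesAtIndex(source, 0, NULL);
    CFMutableDictionaryRef merged =
        CFDictionaryCreateMutableCopy(kCFAllocatorDefault, 0, props);
    CFDictionarySetValue(merged, kCGImagePropertyGPSDictionary, gpsDict);

    CGImageDestinationRef dest =
        CGImageDestinationCreateWithURL(destURL, CFSTR("public.jpeg"), 1, NULL);
    CGImageDestinationAddImageFromSource(dest, source, 0, merged);
    CGImageDestinationFinalize(dest);

    CFRelease(dest);
    CFRelease(merged);
    CFRelease(props);
}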

iPhone encoding problem with ffmpeg

I need to encode a video from images.
I use ffmpeg, and it compiles right.
My problem is that when I try to open the video with QuickTime on the iPhone, it gives me the message "this movie format is not supported".
I create an mp4 file with these parameters on the context:
context->time_base.num = 1;
context->time_base.den = 15;
context->codec_type = CODEC_TYPE_VIDEO;
context->codec_id = CODEC_ID_H264;
context->bit_rate = 1000000;
context->width = width;
context->height = height;
context->keyint_min = 10;
context->i_quant_factor = 0.71;
context->bit_rate_tolerance = 20000;
context->rc_max_rate = 100000;
context->rc_buffer_size = 8835000;
context->qcompress = 0.6;
context->qmin = 10;
context->qmax = 30;
context->max_qdiff = 4;
context->gop_size = 30;
context->time_base.num = 1;
context->time_base.den = 30;
context->sample_aspect_ratio = av_d2q(1, 255);
context->profile = 30;
context->pix_fmt = PIX_FMT_YUV420P;
context->flags |= CODEC_FLAG_LOOP_FILTER;
Where is my mistake?
Thanks
You should use AVFoundation instead; it lets you use hardware acceleration when encoding.
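If you do stick with ffmpeg, one parameter in the block above worth double-checking is `context->profile = 30`: 30 is not a valid H.264 profile id, and iPhones of that generation only decode Baseline profile. A hedged sketch of the likely intent (constants from avcodec.h; whether this alone fixes QuickTime playback is untested):

context->profile = FF_PROFILE_H264_BASELINE; /* 66; "30" is not a profile id */
context->level   = 30;                       /* H.264 level 3.0, if a level was meant */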