Properly displaying cropped images in iOS v5.0.1 and below + ALAssetLibrary - iPhone

I am trying to display photos from the camera roll within my application using ALAssetLibrary. All images display fine except for cropped images: the fullResolutionImage method of ALAssetRepresentation returns the unedited version of the image instead of the cropped version.
So I am trying to extract the cropping information for ALAsset objects from their metadata. I googled for it and found that the cropping information is contained in the AdjustmentXMP key of the metadata on my ALAssetRepresentation object. Using this info I am able to display cropped images correctly within my app, but the approach only works on iOS 5.1 and above and fails on iOS 5.0.
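For reference, this is roughly what that approach looks like in code; a minimal sketch only, assuming the crop values are scanned out of the XMP string by hand (CropRectFromAdjustmentXMP is a hypothetical helper, and the aas:Affine* offsets would also have to be folded into the rect it returns):

ALAssetRepresentation *rep = [asset defaultRepresentation];
NSString *xmp = [[rep metadata] objectForKey:@"AdjustmentXMP"];
CGImageRef fullImage = [rep fullResolutionImage];   // always the unedited pixels

if (xmp != nil) {
    // Hypothetical helper: scans aas:CropX/CropY/CropW/CropH (plus the aas:Affine*
    // translation) out of the XMP string and returns the crop rect in
    // full-resolution coordinates.
    CGRect cropRect = CropRectFromAdjustmentXMP(xmp);
    CGImageRef croppedRef = CGImageCreateWithImageInRect(fullImage, cropRect);
    UIImage *cropped = [UIImage imageWithCGImage:croppedRef
                                           scale:[rep scale]
                                     orientation:(UIImageOrientation)[rep orientation]];
    CGImageRelease(croppedRef);
    // display 'cropped'
} else {
    // On iOS 5.0.x the AdjustmentXMP key is simply missing -- see the metadata dumps below.
}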
Metadata dictionary for a cropped image on iOS 5.1:
{
    AdjustmentXMP = "<x:xmpmeta xmlns:x=\"adobe:ns:meta/\" x:xmptk=\"XMP Core 4.4.0\">\n <rdf:RDF xmlns:rdf=\"http://www.w3.org/1999/02/22-rdf-syntax-ns#\">\n <rdf:Description rdf:about=\"\"\n xmlns:aas=\"http://ns.apple.com/adjustment-settings/1.0/\">\n <aas:AffineA>1</aas:AffineA>\n <aas:AffineB>0</aas:AffineB>\n <aas:AffineC>0</aas:AffineC>\n <aas:AffineD>1</aas:AffineD>\n <aas:AffineX>-266</aas:AffineX>\n <aas:AffineY>-589</aas:AffineY>\n <aas:CropX>0</aas:CropX>\n <aas:CropY>0</aas:CropY>\n <aas:CropW>270</aas:CropW>\n <aas:CropH>162</aas:CropH>\n </rdf:Description>\n </rdf:RDF>\n</x:xmpmeta>\n";
    ColorModel = RGB;
    DPIHeight = 72;
    DPIWidth = 72;
    Depth = 8;
    Orientation = 1;
    PixelHeight = 1024;
    PixelWidth = 768;
    "{Exif}" = {
        ApertureValue = "2.526069";
        BrightnessValue = "0.1544926";
        ColorSpace = 1;
        ComponentsConfiguration = (
            0,
            0,
            0,
            1
        );
        DateTimeDigitized = "2013:01:22 14:12:59";
        DateTimeOriginal = "2013:01:22 14:12:59";
        ExifVersion = (
            2,
            2
        );
        ExposureMode = 0;
        ExposureProgram = 2;
        ExposureTime = "0.06666667";
        FNumber = "2.4";
        Flash = 16;
        FlashPixVersion = (
            1,
            0
        );
        FocalLenIn35mmFilm = 33;
        FocalLength = "4.13";
        ISOSpeedRatings = (
            400
        );
        MeteringMode = 5;
        PixelXDimension = 768;
        PixelYDimension = 1024;
        SceneCaptureType = 0;
        SensingMethod = 2;
        ShutterSpeedValue = "3.906905";
        SubjectArea = (
            1631,
            1223,
            881,
            881
        );
        WhiteBalance = 0;
    };
    "{GPS}" = {
        Altitude = "216.1379";
        AltitudeRef = 0;
        DateStamp = "2013:01:22";
        Latitude = "28.46366666666667";
        LatitudeRef = N;
        Longitude = "77.04916666666666";
        LongitudeRef = E;
        TimeStamp = "08:42:56.00";
    };
    "{TIFF}" = {
        DateTime = "2013:01:22 14:14:39";
        Make = Apple;
        Model = "iPhone 5";
        Orientation = 1;
        ResolutionUnit = 2;
        Software = "QuickTime 7.7.1";
        XResolution = 72;
        YResolution = 72;
        "_YCbCrPositioning" = 1;
    };
}
Metadata dictionary for a cropped image on iOS 5.0.1:
Metadata: {
    ColorModel = RGB;
    Depth = 8;
    PixelHeight = 2048;
    PixelWidth = 1078;
    "{JFIF}" = {
        DensityUnit = 0;
        JFIFVersion = (
            1,
            1
        );
        XDensity = 1;
        YDensity = 1;
    };
}
As you can see above, the metadata dictionary on iOS 5.1 contains the AdjustmentXMP key, which carries the cropping information, while the same dictionary on iOS 5.0.1 doesn't have that key at all. So the cropping fails on 5.0.1.
Any pointers as to how to display cropped images on devices with iOS 5.0.1 and below?

Related

iOS iPhone CGImage Memory problems

I'm trying to create a visual representation of some data I have.
The function I have works and creates the image perfectly (and VERY quickly), but under Instruments the Real Memory usage rockets and eventually crashes the app.
I have replaced my function with a return [UIImage imageNamed:@"blah"]; and the memory problems vanish completely.
I was wondering if someone could see why and where the memory is being taken up, and how I could possibly free it again?
I have run it in Instruments under the Allocations and Leaks tools but nothing shows up in there.
The function is...
- (UIImage*)imageOfMapWithDeadColor:(UIColor *)deadColor aliveColor:(UIColor *)aliveColor
{
    //translate colours into rgb components
    if ([deadColor isEqual:[UIColor whiteColor]]) {
        dr = dg = db = 255;
    } else if ([deadColor isEqual:[UIColor blackColor]]) {
        dr = dg = db = 0;
    } else {
        [deadColor getRed:&drf green:&dgf blue:&dbf alpha:&blah];
        dr = drf * 255;
        dg = dgf * 255;
        db = dbf * 255;
    }
    if ([aliveColor isEqual:[UIColor whiteColor]]) {
        ar = ag = ab = 255;
    } else if ([aliveColor isEqual:[UIColor blackColor]]) {
        ar = ag = ab = 0;
    } else {
        [aliveColor getRed:&arf green:&agf blue:&abf alpha:&blah];
        ar = arf * 255;
        ag = agf * 255;
        ab = abf * 255;
    }

    //create bytes of image from the cell map
    int yRef, cellRef;
    unsigned char *cell_ptr = cells;
    for (int y = 0; y < self.height; y++)
    {
        yRef = y * (self.width * 4);
        int x = 0;
        do
        {
            cellRef = yRef + 4 * x;
            if (*cell_ptr & 0x01) {
                //alive colour
                buffer[cellRef] = ar;
                buffer[cellRef + 1] = ag;
                buffer[cellRef + 2] = ab;
                buffer[cellRef + 3] = 255;
            } else {
                //dead colour
                buffer[cellRef] = dr;
                buffer[cellRef + 1] = dg;
                buffer[cellRef + 2] = db;
                buffer[cellRef + 3] = 255;
            }
            cell_ptr++;
        } while (++x < self.width);
    }

    //create image
    imageRef = CGImageCreate(self.width, self.height, 8, 32, 4 * self.width, CGColorSpaceCreateDeviceRGB(), kCGBitmapByteOrderDefault, provider, NULL, NO, kCGRenderingIntentDefault);
    UIImage *image = [UIImage imageWithCGImage:imageRef];

    //return image
    return image;
}
A few notes:
buffer is a GLubyte ivar, malloc'ed in init.
cells is the original unsigned char array that I'm taking the data from.
Like I said, the function works perfectly (creating the image etc.), I just get massive memory usage.
Oh... this function is called LOTS, like 30 times a second or more (which I know will make a difference).
Thanks for any help.
You use CGImageCreate and the docs say:
Return Value
A new Quartz bitmap image. You are responsible for releasing this object by calling CGImageRelease.
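In other words, the CGImageRef (and the color space created inline in that call) is never released, so every one of those 30 calls per second leaks a full-size bitmap. A minimal sketch of how the tail of the method could balance those creates (an assumption about the fix, not the asker's final code):

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
imageRef = CGImageCreate(self.width, self.height, 8, 32, 4 * self.width,
                         colorSpace, kCGBitmapByteOrderDefault,
                         provider, NULL, NO, kCGRenderingIntentDefault);
UIImage *image = [UIImage imageWithCGImage:imageRef];   // UIImage keeps its own reference
CGImageRelease(imageRef);                               // balance CGImageCreate
CGColorSpaceRelease(colorSpace);                        // balance CGColorSpaceCreateDeviceRGB
return image;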

Low performance of mpeg1 encoding using libavcodec

I'm using the MPEG-1 codec to create proxy videos of my HD and SD content.
The MPEG-1 resolution and bitrate are:
For HD video proxy: 704x576 4.5 Mbit
For SD video proxy: 352x288 1.5 Mbit
My video params are like this:
How can I get more performance? Can I change any params?
Best Regards,
AVCodecContext *c = &m_OutputCodecCtxVideo;
avcodec_get_context_defaults2( c, AVMEDIA_TYPE_VIDEO );
c->codec_id = CODEC_ID_MPEG1VIDEO;
c->codec_type = AVMEDIA_TYPE_VIDEO;
c->pix_fmt = PIX_FMT_YUV420P;
if( bHD ){
    c->width = 704;
    c->height = 576;
    c->sample_aspect_ratio.num = 16;
    c->sample_aspect_ratio.den = 11;
    c->bit_rate = 4500000;
}
else{
    c->width = 352;
    c->height = 288;
    c->sample_aspect_ratio.num = 12;
    c->sample_aspect_ratio.den = 11;
    c->bit_rate = 1500000;
}
c->time_base.num = 1;
c->time_base.den = 25;
c->gop_size = 15;
c->max_b_frames = 2;
// Needed to avoid using macroblocks in which some coeffs overflow.
// This does not happen with normal video, it just happens here as
// the motion of the chroma plane does not match the luma plane.
c->mb_decision = 2;
// some formats want stream headers to be separate
if( m_pFormatCtxDst->oformat->flags & AVFMT_GLOBALHEADER )
    c->flags |= CODEC_FLAG_GLOBAL_HEADER;
m_OutputCodecCtxVideo.thread_count = 0;

Mono for Android application

I am working on a gaming application in Mono for Android. I want sample code for a background image scrolling vertically from top to bottom. I have code but it is not working properly, so please can somebody help me.
mBGFarMoveY = mBGFarMoveY + 3;
int newFarY = mBackgroundImageFar.Height + (+ mBGFarMoveY);
if (newFarY <= 0)
{
    mBGFarMoveY = 0;
    canvas.DrawBitmap (mBackgroundImageFar, 0, mBGFarMoveY, null);
}
else
{
    canvas.DrawBitmap (mBackgroundImageFar, 0, mBGFarMoveY, null);
    canvas.DrawBitmap (mBackgroundImageFar, 0, newFarY, null);
}
Thanks & Regards,
Chakradhar.
What do you see and what do you expect? There are several issues with your code as far as I can see.
The position is not calculated based upon time, so scrolling will be jumpy.
The overlap code doesn't look too great and blits a lot out of bounds. I'm not sure what 'canvas' is, but if it is the one from android.graphics, you can specify the source and destination rectangles to blit rather than just the 'y' position.
So, something like this (untested, and I've not written code for this platform before, but you should get the idea):
y = (time_seconds * pixels_per_second);
y = y % image.Height; // wrap

src_rect.left = 0;
src_rect.right = image.Width - 1;
src_rect.top = y;
src_rect.bottom = image.Height - 1;

dst_rect.left = 0;
dst_rect.right = image.Width - 1;
dst_rect.top = 0;
dst_rect.bottom = image.Height - 1;

if (y == 0) {
    // image lines up exactly, a single blit covers the whole background
    canvas.DrawBitmap(image, src_rect, dst_rect, null);
}
else {
    // draw the lower part of the image at the top of the screen...
    dst_rect.bottom = src_rect.height() - 1;
    canvas.DrawBitmap(image, src_rect, dst_rect, null);
    // ...then wrap the upper part of the image below it to fill the rest
    src_rect.top = 0;
    src_rect.bottom = y - 1;
    dst_rect.top = dst_rect.bottom + 1;
    dst_rect.bottom = image.Height - 1;
    canvas.DrawBitmap(image, src_rect, dst_rect, null);
}

iPhone EXIF data

I am using the iPhone EXIF data from a photo that I am capturing.
Currently the EXIF data I am getting back is:
{
    ApertureValue = "2.970854";
    ColorSpace = 1;
    ComponentsConfiguration = (1,2,3,);
    ExifVersion = (2,2,1);
    ExposureMode = 0;
    ExposureProgram = 2;
    ExposureTime = "0.03333334";
    FNumber = "2.8";
    Flash = 16;
    FlashPixVersion = (1,0);
    FocalLength = "3.85";
    ISOSpeedRatings = (500);
    MeteringMode = 3;
    PixelXDimension = 640;
    PixelYDimension = 480;
    SceneCaptureType = 0;
    SensingMethod = 2;
    Sharpness = 2;
    ShutterSpeedValue = "4.911055";
    SubjectArea = (319,239,230,172);
    WhiteBalance = 0;
}
I want to be able to add some fields such as geolocation, date and time.
How can I specify new keys that I want returned?
Does anyone know how this is possible in iOS 4.1?
I recently struggled with the same thing. See my answer to this question which shows how to do geotagging and datetime.
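For reference, a minimal sketch of the ImageIO route that answer points at (the variable names, the CLLocation source and the use of kUTTypeJPEG are assumptions for illustration, not code from the linked answer): take the captured JPEG data, merge a {GPS} dictionary and an EXIF DateTimeOriginal into its properties, and write it back out with CGImageDestinationAddImageFromSource. Pre-ARC style casts are used here, matching the iOS 4.1 era.

#import <ImageIO/ImageIO.h>
#import <MobileCoreServices/MobileCoreServices.h>

// jpegData: the captured photo, location: a CLLocation, dateString: "yyyy:MM:dd HH:mm:ss"
CGImageSourceRef source = CGImageSourceCreateWithData((CFDataRef)jpegData, NULL);

// GPS values are stored as positive numbers plus N/S and E/W reference strings.
NSMutableDictionary *gps = [NSMutableDictionary dictionary];
[gps setObject:[NSNumber numberWithDouble:fabs(location.coordinate.latitude)]
        forKey:(NSString *)kCGImagePropertyGPSLatitude];
[gps setObject:(location.coordinate.latitude >= 0 ? @"N" : @"S")
        forKey:(NSString *)kCGImagePropertyGPSLatitudeRef];
[gps setObject:[NSNumber numberWithDouble:fabs(location.coordinate.longitude)]
        forKey:(NSString *)kCGImagePropertyGPSLongitude];
[gps setObject:(location.coordinate.longitude >= 0 ? @"E" : @"W")
        forKey:(NSString *)kCGImagePropertyGPSLongitudeRef];

NSDictionary *exif = [NSDictionary dictionaryWithObject:dateString
                                                 forKey:(NSString *)kCGImagePropertyExifDateTimeOriginal];

NSDictionary *properties = [NSDictionary dictionaryWithObjectsAndKeys:
                            gps, (NSString *)kCGImagePropertyGPSDictionary,
                            exif, (NSString *)kCGImagePropertyExifDictionary,
                            nil];

// Re-encode the JPEG with the merged metadata.
NSMutableData *taggedData = [NSMutableData data];
CGImageDestinationRef destination =
    CGImageDestinationCreateWithData((CFMutableDataRef)taggedData, kUTTypeJPEG, 1, NULL);
CGImageDestinationAddImageFromSource(destination, source, 0, (CFDictionaryRef)properties);
CGImageDestinationFinalize(destination);

CFRelease(destination);
CFRelease(source);
// taggedData now contains the JPEG with the extra {GPS} and {Exif} fields.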

iPhone encode problem with ffmpeg

I need to encode a video from images.
I use ffmpeg and it compiles right.
My problem is that when I try to open the video with QuickTime on the iPhone, it gives me the message "this movie format is not supported".
I create an mp4 file with these parameters on the context:
context->time_base.num = 1;
context->time_base.den = 15;
context->codec_type = CODEC_TYPE_VIDEO;
context->codec_id = CODEC_ID_H264;
context->bit_rate = 1000000;
context->width = width;
context->height = height;
context->keyint_min = 10;
context->i_quant_factor = 0.71;
context->bit_rate_tolerance = 20000;
context->rc_max_rate = 100000;
context->rc_buffer_size = 8835000;
context->qcompress = 0.6;
context->qmin = 10;
context->qmax = 30;
context->max_qdiff = 4;
context->gop_size = 30;
context->time_base.num = 1;
context->time_base.den = 30;
context->sample_aspect_ratio = av_d2q(1, 255);
context->profile = 30;
context->pix_fmt = PIX_FMT_YUV420P;
context->flags |= CODEC_FLAG_LOOP_FILTER;
Where is my mistake?
Thanks
You should use AVFoundation instead; it lets you use hardware acceleration when encoding.
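For example, a minimal AVAssetWriter sketch that writes H.264 frames into an .mp4 (the frame size, the 15 fps timing, outputURL, the images array and the pixelBufferForImage helper are all illustrative assumptions, not part of the answer):

#import <AVFoundation/AVFoundation.h>

NSError *error = nil;
AVAssetWriter *writer = [[AVAssetWriter alloc] initWithURL:outputURL
                                                  fileType:AVFileTypeMPEG4
                                                     error:&error];

NSDictionary *settings = [NSDictionary dictionaryWithObjectsAndKeys:
                          AVVideoCodecH264, AVVideoCodecKey,
                          [NSNumber numberWithInt:640], AVVideoWidthKey,
                          [NSNumber numberWithInt:480], AVVideoHeightKey,
                          nil];
AVAssetWriterInput *input = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                               outputSettings:settings];
AVAssetWriterInputPixelBufferAdaptor *adaptor =
    [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:input
                                                                      sourcePixelBufferAttributes:nil];
[writer addInput:input];
[writer startWriting];
[writer startSessionAtSourceTime:kCMTimeZero];

// Feed one pixel buffer per source image; pixelBufferForImage is a hypothetical helper
// that renders a UIImage into a CVPixelBufferRef.
for (NSUInteger i = 0; i < [images count]; i++) {
    CVPixelBufferRef buffer = pixelBufferForImage([images objectAtIndex:i]);
    while (!input.readyForMoreMediaData) { /* spin for brevity; better: requestMediaDataWhenReadyOnQueue: */ }
    [adaptor appendPixelBuffer:buffer withPresentationTime:CMTimeMake(i, 15)];   // 15 fps
    CVPixelBufferRelease(buffer);
}

[input markAsFinished];
[writer finishWriting];   // synchronous variant, appropriate for this SDK era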