I am reading the EXIF data from a photo that I am capturing on an iPhone.
Currently, the EXIF data I am getting back is:
{
ApertureValue = "2.970854";
ColorSpace = 1;
ComponentsConfiguration = (1,2,3,);
ExifVersion = (2,2,1);
ExposureMode = 0;
ExposureProgram = 2;
ExposureTime = "0.03333334";
FNumber = "2.8";
Flash = 16;
FlashPixVersion = (1,0);
FocalLength = "3.85";
ISOSpeedRatings = (500);
MeteringMode = 3;
PixelXDimension = 640;
PixelYDimension = 480;
SceneCaptureType = 0;
SensingMethod = 2;
Sharpness = 2;
ShutterSpeedValue = "4.911055";
SubjectArea = (319,239,230,172);
WhiteBalance = 0;
}
I want to be able to add some fields, such as geolocation and date and time.
How can I specify new keys that I want returned?
Does anyone know how this is possible in iOS 4.1?
I recently struggled with the same thing. See my answer to this question, which shows how to do geotagging and date/time.
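In short: the idea is to re-write the captured JPEG through the ImageIO C API, adding {GPS} and {Exif} sub-dictionaries to the metadata before saving. A minimal sketch of that route (not my exact code; the coordinates and date below are placeholders):

#include <ImageIO/ImageIO.h>
#include <CoreFoundation/CoreFoundation.h>

// jpegData holds the captured JPEG; returns a new CFData with GPS and
// date/time metadata written in.
CFDataRef addMetadata(CFDataRef jpegData)
{
    CGImageSourceRef source = CGImageSourceCreateWithData(jpegData, NULL);

    // {GPS}: degrees as positive doubles plus N/S and E/W reference strings.
    CFMutableDictionaryRef gps = CFDictionaryCreateMutable(kCFAllocatorDefault, 0,
        &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
    double lat = 28.4636, lon = 77.0491; // placeholder coordinates
    CFNumberRef latNum = CFNumberCreate(NULL, kCFNumberDoubleType, &lat);
    CFNumberRef lonNum = CFNumberCreate(NULL, kCFNumberDoubleType, &lon);
    CFDictionarySetValue(gps, kCGImagePropertyGPSLatitude, latNum);
    CFDictionarySetValue(gps, kCGImagePropertyGPSLatitudeRef, CFSTR("N"));
    CFDictionarySetValue(gps, kCGImagePropertyGPSLongitude, lonNum);
    CFDictionarySetValue(gps, kCGImagePropertyGPSLongitudeRef, CFSTR("E"));

    // {Exif}: capture time in the EXIF "YYYY:MM:DD HH:MM:SS" format.
    CFMutableDictionaryRef exif = CFDictionaryCreateMutable(kCFAllocatorDefault, 0,
        &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
    CFDictionarySetValue(exif, kCGImagePropertyExifDateTimeOriginal,
                         CFSTR("2010:10:01 12:00:00")); // placeholder date

    // Merge both sub-dictionaries into the top-level metadata and rewrite.
    CFMutableDictionaryRef meta = CFDictionaryCreateMutable(kCFAllocatorDefault, 0,
        &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
    CFDictionarySetValue(meta, kCGImagePropertyGPSDictionary, gps);
    CFDictionarySetValue(meta, kCGImagePropertyExifDictionary, exif);

    CFMutableDataRef outData = CFDataCreateMutable(kCFAllocatorDefault, 0);
    CGImageDestinationRef dest = CGImageDestinationCreateWithData(
        outData, CGImageSourceGetType(source), 1, NULL);
    CGImageDestinationAddImageFromSource(dest, source, 0, meta);
    CGImageDestinationFinalize(dest);
    return outData; // caller releases; releases of intermediates elided
}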
I am trying to merge two sets of points from two different views into a single point cloud and visualize it with the PCL cloud viewer.
mPtrPointCloud->points.clear();
mPtrPointCloud->points.resize(mFrameSize * 2);
auto it = mPtrPointCloud->points.begin();

received = PopReceived();
if (received != nullptr)
{
    // p_data_cloud = (float*)received->mTransformedPC.data;
    p_data_cloud = (float*)received->mCVPointCloud.data;
    index = 0;
    for (size_t i = 0; i < mFrameSize; ++i)
    {
        float X = p_data_cloud[index];
        if (!isValidMeasure(X)) // Checking if it's a valid point
        {
            it->x = it->y = it->z = it->rgb = 0;
        }
        else
        {
            it->x = X;
            it->y = p_data_cloud[index + 1];
            it->z = p_data_cloud[index + 2];
            it->rgb = convertColor(p_data_cloud[index + 3]); // Convert a 32-bit float into the pcl rgb format
        }
        index += 4;
        ++it;
    }
}

frame = PopFrame();
if (frame != nullptr)
{
    // p_data_cloud = frame->mSLPointCloud.getPtr<float>();
    p_data_cloud = (float*)frame->mCVPointCloud.data;
    index = 0;
    for (size_t i = 0; i < mFrameSize; ++i)
    {
        float X = p_data_cloud[index];
        if (!isValidMeasure(X)) // Checking if it's a valid point
        {
            it->x = it->y = it->z = it->rgb = 0;
        }
        else
        {
            it->x = X;
            it->y = p_data_cloud[index + 1];
            it->z = p_data_cloud[index + 2];
            it->rgb = convertColor(p_data_cloud[index + 3]); // Convert a 32-bit float into the pcl rgb format
        }
        index += 4;
        ++it;
    }
}

mPtrPCViewer->showCloud(mPtrPointCloud);
What I want is for the two sets of points to be "fused" into one frame. However, they are still shown separately, one after the other.
Could anyone explain how to really merge two sets of points into one cloud? Thanks.
(1) Create a new, empty point cloud that will hold the merged result at the end. Your points carry color, so use pcl::PointXYZRGB, and make it a shared pointer since CloudViewer's showCloud() expects one:
pcl::PointCloud<pcl::PointXYZRGB>::Ptr merged_cloud(new pcl::PointCloud<pcl::PointXYZRGB>);
(2) Transform both point clouds to the origin using their sensor poses (requires #include <pcl/common/transforms.h>):
pcl::PointCloud<pcl::PointXYZRGB> received_transformed;
Eigen::Affine3f received_transform =
    Eigen::Translation3f(received.sensor_origin_.head<3>()) * received.sensor_orientation_;
pcl::transformPointCloud(received, received_transformed, received_transform);
pcl::PointCloud<pcl::PointXYZRGB> frame_transformed;
Eigen::Affine3f frame_transform =
    Eigen::Translation3f(frame.sensor_origin_.head<3>()) * frame.sensor_orientation_;
pcl::transformPointCloud(frame, frame_transformed, frame_transform);
(3) Use the += operator to concatenate them:
*merged_cloud += received_transformed;
*merged_cloud += frame_transformed;
(4) Visualize the merged point cloud:
mPtrPCViewer->showCloud(merged_cloud);
That's it. See also these examples:
http://pointclouds.org/documentation/tutorials/concatenate_clouds.php
http://pointclouds.org/documentation/tutorials/matrix_transform.php
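For completeness, a minimal self-contained sketch of just the concatenation step, mirroring the first tutorial above (filling the clouds with real data is elided):

#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

int main()
{
    pcl::PointCloud<pcl::PointXYZRGB> cloud_a, cloud_b;
    cloud_a.resize(4);
    cloud_b.resize(4);
    // ... fill both clouds with points already in a common frame ...

    // operator+= appends cloud_b's points to cloud_a and updates
    // width/height, yielding a single unorganized cloud.
    cloud_a += cloud_b;
    return 0;
}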
I am trying to display photos from the camera roll within my application using ALAssetsLibrary. All images display fine except for cropped images: ALAsset returns the unedited version of the image from the fullResolutionImage method of ALAssetRepresentation instead of the cropped version.
So I am trying to extract the cropping information for ALAsset objects from their metadata. I googled and found that the cropping information is contained in the AdjustmentXMP key of the metadata on my ALAssetRepresentation object. Using the above info I am able to display cropped images correctly within my app, but the approach only works on iOS 5.1 and above and fails on iOS 5.0.
Meta data dictionary for a cropped image on iOS 5.1:
{
AdjustmentXMP = "<x:xmpmeta xmlns:x=\"adobe:ns:meta/\" x:xmptk=\"XMP Core 4.4.0\">\n <rdf:RDF xmlns:rdf=\"http://www.w3.org/1999/02/22-rdf-syntax-ns#\">\n <rdf:Description rdf:about=\"\"\n xmlns:aas=\"http://ns.apple.com/adjustment-settings/1.0/\">\n <aas:AffineA>1</aas:AffineA>\n <aas:AffineB>0</aas:AffineB>\n <aas:AffineC>0</aas:AffineC>\n <aas:AffineD>1</aas:AffineD>\n <aas:AffineX>-266</aas:AffineX>\n <aas:AffineY>-589</aas:AffineY>\n <aas:CropX>0</aas:CropX>\n <aas:CropY>0</aas:CropY>\n <aas:CropW>270</aas:CropW>\n <aas:CropH>162</aas:CropH>\n </rdf:Description>\n </rdf:RDF>\n</x:xmpmeta>\n";
ColorModel = RGB;
DPIHeight = 72;
DPIWidth = 72;
Depth = 8;
Orientation = 1;
PixelHeight = 1024;
PixelWidth = 768;
"{Exif}" = {
ApertureValue = "2.526069";
BrightnessValue = "0.1544926";
ColorSpace = 1;
ComponentsConfiguration = (
0,
0,
0,
1
);
DateTimeDigitized = "2013:01:22 14:12:59";
DateTimeOriginal = "2013:01:22 14:12:59";
ExifVersion = (
2,
2
);
ExposureMode = 0;
ExposureProgram = 2;
ExposureTime = "0.06666667";
FNumber = "2.4";
Flash = 16;
FlashPixVersion = (
1,
0
);
FocalLenIn35mmFilm = 33;
FocalLength = "4.13";
ISOSpeedRatings = (
400
);
MeteringMode = 5;
PixelXDimension = 768;
PixelYDimension = 1024;
SceneCaptureType = 0;
SensingMethod = 2;
ShutterSpeedValue = "3.906905";
SubjectArea = (
1631,
1223,
881,
881
);
WhiteBalance = 0;
};
"{GPS}" = {
Altitude = "216.1379";
AltitudeRef = 0;
DateStamp = "2013:01:22";
Latitude = "28.46366666666667";
LatitudeRef = N;
Longitude = "77.04916666666666";
LongitudeRef = E;
TimeStamp = "08:42:56.00";
};
"{TIFF}" = {
DateTime = "2013:01:22 14:14:39";
Make = Apple;
Model = "iPhone 5";
Orientation = 1;
ResolutionUnit = 2;
Software = "QuickTime 7.7.1";
XResolution = 72;
YResolution = 72;
"_YCbCrPositioning" = 1;
};
}
Meta data dictionary for a cropped image on iOS 5.0.1:
Metadata: {
ColorModel = RGB;
Depth = 8;
PixelHeight = 2048;
PixelWidth = 1078;
"{JFIF}" = {
DensityUnit = 0;
JFIFVersion = (
1,
1
);
XDensity = 1;
YDensity = 1;
};
}
As you can see, the metadata dictionary on v5.1 contains the AdjustmentXMP key, which holds the cropping information, while the same dictionary on v5.0.1 doesn't have the AdjustmentXMP key at all. So the cropping fails on v5.0.1.
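For reference, this is roughly how I apply the crop on iOS 5.1 (simplified; extractAasValue stands in for my XMP parsing, and I'm glossing over the exact coordinate mapping of the Affine*/Crop* values):

#include <CoreFoundation/CoreFoundation.h>
#include <CoreGraphics/CoreGraphics.h>

// Stand-in helper that pulls one numeric <aas:...> value out of the XMP.
double extractAasValue(CFStringRef xmp, const char *tag);

// metadata is the dictionary shown above; fullResImage comes from
// the fullResolutionImage method of ALAssetRepresentation.
CGImageRef croppedImageFor(CFDictionaryRef metadata, CGImageRef fullResImage)
{
    CFStringRef xmp = (CFStringRef)CFDictionaryGetValue(metadata,
                                                        CFSTR("AdjustmentXMP"));
    if (xmp == NULL)
        return fullResImage; // no edit recorded (this is the iOS 5.0 case)

    CGRect cropRect = CGRectMake(extractAasValue(xmp, "CropX"),
                                 extractAasValue(xmp, "CropY"),
                                 extractAasValue(xmp, "CropW"),
                                 extractAasValue(xmp, "CropH"));
    return CGImageCreateWithImageInRect(fullResImage, cropRect);
}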
Any pointers as to how to display cropped images on devices with iOS 5.0.1 and below?
I'm using the MPEG-1 codec to create proxy videos of my HD and SD content.
The MPEG-1 resolution and bitrate are:
For the HD video proxy: 704x576 at 4.5 Mbit/s
For the SD video proxy: 352x288 at 1.5 Mbit/s
My video parameters are set as shown below. How can I get more performance by changing any of the parameters?
Best Regards,
AVCodecContext *c = &m_OutputCodecCtxVideo;
avcodec_get_context_defaults2( c, AVMEDIA_TYPE_VIDEO );
c->codec_id = CODEC_ID_MPEG1VIDEO;
c->codec_type = AVMEDIA_TYPE_VIDEO;
c->pix_fmt = PIX_FMT_YUV420P;
if( bHD ){
    c->width = 704;
    c->height = 576;
    c->sample_aspect_ratio.num = 16;
    c->sample_aspect_ratio.den = 11;
    c->bit_rate = 4500000;
}
else{
    c->width = 352;
    c->height = 288;
    c->sample_aspect_ratio.num = 12;
    c->sample_aspect_ratio.den = 11;
    c->bit_rate = 1500000;
}
c->time_base.num = 1;
c->time_base.den = 25;
c->gop_size = 15;
c->max_b_frames = 2;
// Needed to avoid using macroblocks in which some coeffs overflow.
// This does not happen with normal video, it just happens here as
// the motion of the chroma plane does not match the luma plane.
c->mb_decision = 2;
// some formats want stream headers to be separate
if( m_pFormatCtxDst->oformat->flags & AVFMT_GLOBALHEADER )
    c->flags |= CODEC_FLAG_GLOBAL_HEADER;
m_OutputCodecCtxVideo.thread_count = 0;
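Not a definitive tuning guide, but two things stand out in the snippet above: mb_decision = 2 selects the slowest, rate-distortion macroblock decision, and thread_count = 0 may leave older libavcodec builds single-threaded. A hedged sketch of the usual speed knobs, assuming the same libavcodec era as the snippet:

// Speed-oriented variants of the settings above; verify on your build.
c->mb_decision = FF_MB_DECISION_SIMPLE; // fastest MB decision (2 = RD is slowest)
c->max_b_frames = 0;                    // B-frames add encode time at proxy bitrates
c->thread_count = 4;                    // request explicit threading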
I am working on a gaming application in Mono for Android. I want sample code for a background image scrolling vertically from top to bottom. I have some code, but it is not working properly, so could somebody please help me?
mBGFarMoveY = mBGFarMoveY + 3;
int newFarY = mBackgroundImageFar.Height + (+ mBGFarMoveY);
if (newFarY <= 0)
{
    mBGFarMoveY = 0;
    canvas.DrawBitmap(mBackgroundImageFar, 0, mBGFarMoveY, null);
}
else
{
    canvas.DrawBitmap(mBackgroundImageFar, 0, mBGFarMoveY, null);
    canvas.DrawBitmap(mBackgroundImageFar, 0, newFarY, null);
}
Thanks & Regards,
Chakradhar.
What do you see and what do you expect? There are several issues with your code as far as I can see:
- The position is not calculated based upon time, so scrolling will be jumpy.
- The overlap code doesn't look right and blits a lot out of bounds. I'm not sure what 'canvas' is, but if it is the one from android.graphics, you can specify the source and destination rectangles to blit rather than just the 'y' position.
So, something like the following (untested, and I've not written code for this platform before, but you should get the idea):
// Derive the scroll offset from elapsed time so the speed is
// frame-rate independent, then wrap it at the image height.
int y = (int)(time_seconds * pixels_per_second) % image.Height;

// Note: Android Rect bounds are exclusive on Right/Bottom.
src_rect.Left = 0;
src_rect.Right = image.Width;
src_rect.Top = y;
src_rect.Bottom = image.Height;

dst_rect.Left = 0;
dst_rect.Right = image.Width;
dst_rect.Top = 0;
dst_rect.Bottom = image.Height - y;

// Draw the lower part of the image at the top of the screen.
canvas.DrawBitmap(image, src_rect, dst_rect, null);

if (y > 0)
{
    // Draw the wrapped-around upper part of the image below it.
    src_rect.Top = 0;
    src_rect.Bottom = y;
    dst_rect.Top = image.Height - y;
    dst_rect.Bottom = image.Height;
    canvas.DrawBitmap(image, src_rect, dst_rect, null);
}
I need to encode a video from images.
I am using ffmpeg and it compiles right.
My problem is that when I try to open the video with QuickTime on the iPhone, it gives me the message "this movie format is not supported".
I create an mp4 file with these parameters on the context:
context->time_base.num = 1;
context->time_base.den = 15;
context->codec_type = CODEC_TYPE_VIDEO;
context->codec_id = CODEC_ID_H264;
context->bit_rate = 1000000;
context->width = width;
context->height = height;
context->keyint_min = 10;
context->i_quant_factor = 0.71;
context->bit_rate_tolerance = 20000;
context->rc_max_rate = 100000;
context->rc_buffer_size = 8835000;
context->qcompress = 0.6;
context->qmin = 10;
context->qmax = 30;
context->max_qdiff = 4;
context->gop_size = 30;
context->time_base.num = 1;
context->time_base.den = 30;
context->sample_aspect_ratio = av_d2q(1, 255);
context->profile = 30;
context->pix_fmt = PIX_FMT_YUV420P;
context->flags |= CODEC_FLAG_LOOP_FILTER;
Where is my mistake?
Thanks
You should use AVFoundation instead; it lets you use hardware acceleration when encoding.
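If you do stay with ffmpeg: older iPhones only play H.264 Baseline profile up to level 3.0, and context->profile = 30 in your snippet is not a valid profile constant (30 looks like a level, not a profile). A hedged sketch of the likely fix, assuming the same FFmpeg era as the snippet:

// Baseline profile at level 3.0 is what older iPhones accept.
context->profile = FF_PROFILE_H264_BASELINE; // 66; the snippet's 30 is not a profile
context->level = 30;                         // H.264 level 3.0
context->max_b_frames = 0;                   // baseline profile has no B-frames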