OpenCV on iOS not decoding video frames properly

I'm trying to use OpenCV on iOS to do some pixel analysis on video frames. I've tried several .MOV files (all MPEG-4 AVC), but none of them seem to decode properly.
Problems:
- All cvGetCaptureProperty calls return a value of 1
- cvGrabFrame(capture) always returns true (it doesn't seem to find the last frame)
Things that are actually working:
- Frame height and width are correctly determined
Any ideas? I have OpenCV 2.3.2 from http://aptogo.co.uk/2011/09/opencv-framework-for-ios/
NSURL *file = [[NSBundle mainBundle] URLForResource:@"sky" withExtension:@"MOV"];
CvCapture* capture = cvCaptureFromFile([[file path] UTF8String]);
if (!capture)
{
    NSLog(@"Error loading file");
    return;
}
cvQueryFrame(capture);

int width = (int)cvGetCaptureProperty(capture, CV_CAP_PROP_FRAME_WIDTH);
int height = (int)cvGetCaptureProperty(capture, CV_CAP_PROP_FRAME_HEIGHT);
NSLog(@"dimensions = %dx%d", width, height); // returns 1x1

double framesPerSecond = cvGetCaptureProperty(capture, CV_CAP_PROP_FPS);
NSLog(@"framesPerSecond = %f", framesPerSecond); // returns 1

int frameCount = (int)cvGetCaptureProperty(capture, CV_CAP_PROP_FRAME_COUNT);
NSLog(@"frameCount = %d", frameCount); // returns 1

int frameCounter = 0;
while (cvGrabFrame(capture))
{
    frameCounter++;
    //NSLog(@"got a frame! %d", frameCounter);
    if (frameCounter % 50 == 0)
    {
        IplImage* frame = cvRetrieveFrame(capture);
        NSLog(@"frame width: %d", frame->width);   // works correctly
        NSLog(@"frame height: %d", frame->height); // works correctly
    }
    if (frameCounter > 1000)
        break; // this is here because the loop never stops on its own
}
cvReleaseCapture(&capture);
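For what it's worth, iOS builds of OpenCV from that era generally shipped without a video I/O backend (no ffmpeg or QuickTime support compiled in), which would explain the properties all coming back as 1 and cvGrabFrame never signalling the end of the file. If that's the cause, AVFoundation can do the decoding instead. A minimal sketch, assuming the same sky.MOV resource, that hands each decoded frame to your pixel-analysis code:
#import <AVFoundation/AVFoundation.h>

// Sketch: decode frames with AVAssetReader instead of cvCaptureFromFile.
NSURL *file = [[NSBundle mainBundle] URLForResource:@"sky" withExtension:@"MOV"];
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:file options:nil];
AVAssetTrack *track = [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];

NSError *error = nil;
AVAssetReader *reader = [[AVAssetReader alloc] initWithAsset:asset error:&error];
NSDictionary *settings = [NSDictionary dictionaryWithObject:
                             [NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                          forKey:(id)kCVPixelBufferPixelFormatTypeKey];
AVAssetReaderTrackOutput *output = [[AVAssetReaderTrackOutput alloc]
                                       initWithTrack:track outputSettings:settings];
[reader addOutput:output];
[reader startReading];

CMSampleBufferRef sample;
while ((sample = [output copyNextSampleBuffer])) { // returns NULL at end of file
    CVImageBufferRef pixels = CMSampleBufferGetImageBuffer(sample);
    CVPixelBufferLockBaseAddress(pixels, 0);
    // ... wrap CVPixelBufferGetBaseAddress(pixels) in an IplImage here ...
    CVPixelBufferUnlockBaseAddress(pixels, 0);
    CFRelease(sample);
}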

Related

iOS ARKit AVDepthData Returns Enormous Numbers

Not sure what I'm doing wrong here, but AVDepthData, which is supposed to be in meters for non-disparity data, is returning HUGE numbers. I have the code...
- (bool)getCurrentFrameDepthBufferIntoBuffer:(ARSession *)session
                                      buffer:(BytePtr)buffer
                                       width:(int)width
                                      height:(int)height
                               bytesPerPixel:(int)bytesPerPixel
{
    // do we have a current frame?
    if (session.currentFrame != nil)
    {
        // do we have captured depth data?
        if (session.currentFrame.capturedDepthData != nil)
        {
            // get depth data parameters
            int ciImageWidth = (int)CVPixelBufferGetWidth(session.currentFrame.capturedDepthData.depthDataMap);
            int ciImageHeight = (int)CVPixelBufferGetHeight(session.currentFrame.capturedDepthData.depthDataMap);

            // how many bytes per pixel (note: shadows the method parameter)
            int bytesPerPixel;
            if (session.currentFrame.capturedDepthData.depthDataType == kCVPixelFormatType_DisparityFloat16 ||
                session.currentFrame.capturedDepthData.depthDataType == kCVPixelFormatType_DepthFloat16)
                bytesPerPixel = 2;
            else
                bytesPerPixel = 4;

            // copy to passed buffer
            CVPixelBufferLockBaseAddress(session.currentFrame.capturedDepthData.depthDataMap, kCVPixelBufferLock_ReadOnly);
            memcpy(buffer, session.currentFrame.capturedDepthData.depthDataMap, ciImageWidth*ciImageHeight*bytesPerPixel);

            float *floatBuffer = (float *)buffer;
            float maxDepth = 0.0f;
            float minDepth = 0.0f;
            for (int i = 0; i < ciImageWidth*ciImageHeight; i++)
            {
                if (floatBuffer[i] > maxDepth)
                    maxDepth = floatBuffer[i];
                if (floatBuffer[i] < minDepth)
                    minDepth = floatBuffer[i];
            }
            NSLog(@"In iOS, max depth is %f min depth is %f", maxDepth, minDepth);
            CVPixelBufferUnlockBaseAddress(session.currentFrame.capturedDepthData.depthDataMap, kCVPixelBufferLock_ReadOnly);
        }
    }
    return true;
}
But it's returning min and max values like...
2019-06-27 12:32:32.167868+0900 AvatarBuilder[13577:2650159] In iOS, max depth is 3531476501829561451725831270301696000.000000 min depth is -109677129931746407817494761329131520.000000
Which looks nothing like meters.
Ugh, it was my memcpy. I had to do...
float *bufferAddress = (float*)CVPixelBufferGetBaseAddress(session.currentFrame.capturedDepthData.depthDataMap);
memcpy(buffer, bufferAddress, ciImageWidth*ciImageHeight*bytesPerPixel);
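One caveat with that fix: it still assumes the rows of the depth map are tightly packed, and CVPixelBuffer rows can be padded. A safer sketch, reusing the variables from the method above inside the same lock/unlock pair (and assuming BytePtr is a plain byte pointer), copies row by row using CVPixelBufferGetBytesPerRow:
uint8_t *src = (uint8_t *)CVPixelBufferGetBaseAddress(session.currentFrame.capturedDepthData.depthDataMap);
size_t srcBytesPerRow = CVPixelBufferGetBytesPerRow(session.currentFrame.capturedDepthData.depthDataMap);
size_t dstBytesPerRow = ciImageWidth * bytesPerPixel;
for (int row = 0; row < ciImageHeight; row++) {
    // copy only the meaningful bytes of each row, skipping any row padding
    memcpy(buffer + row * dstBytesPerRow, src + row * srcBytesPerRow, dstBytesPerRow);
}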

iPhone - A problem with decoding H264 using ffmpeg

I am working with ffmpeg to decode an H.264 stream from a server.
I referenced DecoderWrapper from http://github.com/dropcam/dropcam_for_iphone.
I compiled it successfully, but I don't know how to use it.
Here is the function that has the problem.
- (id)initWithCodec:(enum VideoCodecType)codecType
         colorSpace:(enum VideoColorSpace)colorSpace
              width:(int)width
             height:(int)height
        privateData:(NSData *)privateData {
    if (self = [super init]) {
        codec = avcodec_find_decoder(CODEC_ID_H264);
        codecCtx = avcodec_alloc_context();

        // Note: for H.264 RTSP streams, the width and height are usually not specified (width and height are 0).
        // These fields will become filled in once the first frame is decoded and the SPS is processed.
        codecCtx->width = width;
        codecCtx->height = height;

        codecCtx->extradata = av_malloc([privateData length]);
        codecCtx->extradata_size = [privateData length];
        [privateData getBytes:codecCtx->extradata length:codecCtx->extradata_size];
        codecCtx->pix_fmt = PIX_FMT_YUV420P;
#ifdef SHOW_DEBUG_MV
        codecCtx->debug_mv = 0xFF;
#endif
        srcFrame = avcodec_alloc_frame();
        dstFrame = avcodec_alloc_frame();

        int res = avcodec_open(codecCtx, codec);
        if (res < 0)
        {
            NSLog(@"Failed to initialize decoder");
        }
    }
    return self;
}
What is the privateData parameter of this function? I don't know how to set it...
Right now avcodec_decode_video2 returns -1, even though the frame data is arriving successfully.
How can I solve this problem?
Thanks a lot.
Take a look at the examples that ship with ffmpeg, in PATH/TO/FFMPEG/doc/examples/decoding_encoding.c,
and this link:
http://cekirdek.pardus.org.tr/~ismail/ffmpeg-docs/api-example_8c-source.html
Be careful: this code is quite old, and some function names have since changed.
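On privateData: in this wrapper it ends up in codecCtx->extradata, which for H.264 typically means the SPS/PPS headers (the avcC data for MP4/RTSP-style setups); for a raw Annex-B stream it can usually stay empty. And since several of the calls above were renamed long ago, here is a rough sketch of the equivalent setup against the current libavcodec API (names as of modern ffmpeg; treat it as a starting point, not a drop-in):
// modern replacements: avcodec_alloc_context3 / avcodec_open2 /
// avcodec_send_packet + avcodec_receive_frame
const AVCodec *codec = avcodec_find_decoder(AV_CODEC_ID_H264);
AVCodecContext *ctx = avcodec_alloc_context3(codec);
// put your SPS/PPS (the old privateData) into ctx->extradata here if you have it
if (avcodec_open2(ctx, codec, NULL) < 0) {
    NSLog(@"Failed to initialize decoder");
}

AVPacket *pkt = av_packet_alloc();
AVFrame *frame = av_frame_alloc();
// fill pkt->data / pkt->size with one access unit from the server, then:
if (avcodec_send_packet(ctx, pkt) == 0) {
    while (avcodec_receive_frame(ctx, frame) == 0) {
        // frame->data[0..2] now hold the decoded YUV planes
    }
}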

How do you set the framerate when recording video on the iPhone?

I would like to write a camera application where you record video using the iPhone's camera, but I can't find a way to alter the framerate of the recorded video. For example, I'd like to record at 25 frames per second instead of the default 30.
Is it possible to set this framerate in any way, and if yes how?
You can use AVCaptureConnection's videoMaxFrameDuration and videoMinFrameDuration properties. See http://developer.apple.com/library/ios/#DOCUMENTATION/AVFoundation/Reference/AVCaptureConnection_Class/Reference/Reference.html#//apple_ref/doc/uid/TP40009522
Additionally, there is an SO question that addresses this (with a good code example):
I want to throttle video capture frame rate in AVCapture framework
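For reference, a minimal sketch of that connection-level route (movieFileOutput is a placeholder for whatever output you attached to the session; note these connection properties were later deprecated in favour of AVCaptureDevice's activeVideoMin/MaxFrameDuration, used in the answer below):
AVCaptureConnection *conn = [movieFileOutput connectionWithMediaType:AVMediaTypeVideo];
if (conn.supportsVideoMinFrameDuration)
    conn.videoMinFrameDuration = CMTimeMake(1, 25); // caps the rate at 25 fps
if (conn.supportsVideoMaxFrameDuration)
    conn.videoMaxFrameDuration = CMTimeMake(1, 25); // floors the rate at 25 fps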
As far as I could tell, you can't set the FPS for recording. Look at the WWDC 2010 video for AVFoundation. It seems to suggest that you can but, again, as far as I can tell, that only works for capturing frame data.
I'd love to be proven wrong, but I'm pretty sure that you can't. Sorry!
You will need AVCaptureDevice.h.
Here is working code:
- (void)attemptToConfigureFPS
{
    NSError *error;
    if (![self lockForConfiguration:&error]) {
        NSLog(@"Could not lock device %@ for configuration: %@", self, error);
        return;
    }

    AVCaptureDeviceFormat *format = self.activeFormat;
    double epsilon = 0.00000001;
    int desiredFrameRate = 30;

    for (AVFrameRateRange *range in format.videoSupportedFrameRateRanges) {
        NSLog(@"Pre Minimum frame rate: %f Max = %f", range.minFrameRate, range.maxFrameRate);
        if (range.minFrameRate <= (desiredFrameRate + epsilon) &&
            range.maxFrameRate >= (desiredFrameRate - epsilon)) {
            NSLog(@"Setting Frame Rate.");
            self.activeVideoMaxFrameDuration = (CMTime){
                .value = 1,
                .timescale = desiredFrameRate,
                .flags = kCMTimeFlags_Valid,
                .epoch = 0,
            };
            self.activeVideoMinFrameDuration = (CMTime){
                .value = 1,
                .timescale = desiredFrameRate,
                .flags = kCMTimeFlags_Valid,
                .epoch = 0,
            };
            // self.activeVideoMinFrameDuration = self.activeVideoMaxFrameDuration;
            // NSLog(@"Post Minimum frame rate: %f Max = %f", range.minFrameRate, range.maxFrameRate);
            break;
        }
    }

    [self unlockForConfiguration];

    // Audit the changes
    for (AVFrameRateRange *range in format.videoSupportedFrameRateRanges) {
        NSLog(@"Post Minimum frame rate: %f Max = %f", range.minFrameRate, range.maxFrameRate);
    }
}
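Note the method above reads as if it lives in a category on AVCaptureDevice (hence the [self lockForConfiguration:...]); the CMTime struct literals are also just CMTimeMake(1, desiredFrameRate) spelled out. Hypothetical usage:
// assuming -attemptToConfigureFPS is declared in an AVCaptureDevice category
AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
[camera attemptToConfigureFPS];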

AVCapture appendSampleBuffer

I am going insane with this one - I have looked everywhere and tried anything and everything I can think of.
I'm making an iPhone app that uses AVFoundation - specifically AVCapture - to capture video using the iPhone camera.
I need to have a custom image that is overlaid on the video feed included in the recording.
So far I have the AVCapture session set up, can display the feed, access the frame, save it as a UIImage and merge the overlay image onto it. Then I convert this new UIImage into a CVPixelBufferRef. And to double-check that the bufferRef is working, I converted it back to a UIImage and it still displays the image fine.
The trouble starts when I try to convert the CVPixelBufferRef into a CMSampleBufferRef to append to the AVCaptureSession's assetWriterInput. The CMSampleBufferRef always comes back NULL when I attempt to create it.
Here is the -(void)captureOutput function
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    UIImage *botImage = [self imageFromSampleBuffer:sampleBuffer];
    UIImage *wheel = [self imageFromView:wheelView];
    UIImage *finalImage = [self overlaidImage:botImage :wheel];
    //[previewImage setImage:finalImage]; <- works -- the image is being merged into one UIImage

    CVPixelBufferRef pixelBuffer = NULL;
    CGImageRef cgImage = CGImageCreateCopy(finalImage.CGImage);
    CFDataRef image = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));
    int status = CVPixelBufferCreateWithBytes(NULL,
                                              self.view.bounds.size.width,
                                              self.view.bounds.size.height,
                                              kCVPixelFormatType_32BGRA,
                                              (void *)CFDataGetBytePtr(image),
                                              CGImageGetBytesPerRow(cgImage),
                                              NULL,
                                              0,
                                              NULL,
                                              &pixelBuffer);
    if (status == 0) {
        OSStatus result = 0;

        CMVideoFormatDescriptionRef videoInfo = NULL;
        result = CMVideoFormatDescriptionCreateForImageBuffer(NULL, pixelBuffer, &videoInfo);
        NSParameterAssert(result == 0 && videoInfo != NULL);

        CMSampleBufferRef myBuffer = NULL;
        result = CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault,
                                                    pixelBuffer, true, NULL, NULL, videoInfo, NULL, &myBuffer);
        NSParameterAssert(result == 0 && myBuffer != NULL); // always null :S

        NSLog(@"Trying to append");
        if (!CMSampleBufferDataIsReady(myBuffer)) {
            NSLog(@"sampleBuffer data is not ready");
            return;
        }
        if (![assetWriterInput isReadyForMoreMediaData]) {
            NSLog(@"Not ready for data :(");
            return;
        }
        if (![assetWriterInput appendSampleBuffer:myBuffer]) {
            NSLog(@"Failed to append pixel buffer");
        }
    }
}
Another solution I keep hearing about is using an AVAssetWriterInputPixelBufferAdaptor, which eliminates the need to do the messy CMSampleBufferRef wrapping. However, I have scoured Stack Overflow and the Apple developer forums and docs and can't find a clear description or example of how to set this up or how to use it. If anyone has a working example of it, could you please show me or help me work out the above issue - I have been working on this non-stop for a week and am at wits' end.
Let me know if you need any other info
Thanks in advance,
Michael
You need an AVAssetWriterInputPixelBufferAdaptor; here is the code to create it:
// Create dictionary for pixel buffer adaptor
NSDictionary *bufferAttributes = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], (id)kCVPixelBufferPixelFormatTypeKey, nil];

// Create pixel buffer adaptor
m_pixelsBufferAdaptor = [[AVAssetWriterInputPixelBufferAdaptor alloc]
                            initWithAssetWriterInput:assetWriterInput
                         sourcePixelBufferAttributes:bufferAttributes];
And the code to use it:
// If ready to have more media data
if (m_pixelsBufferAdaptor.assetWriterInput.readyForMoreMediaData) {
    // Create a pixel buffer
    CVPixelBufferRef pixelsBuffer = NULL;
    CVPixelBufferPoolCreatePixelBuffer(NULL, m_pixelsBufferAdaptor.pixelBufferPool, &pixelsBuffer);

    // Lock pixel buffer address
    CVPixelBufferLockBaseAddress(pixelsBuffer, 0);

    // Write your pixel data into the buffer (in your case, fill it with your finalImage data)
    [self yourFunctionToPutDataInPixelBuffer:CVPixelBufferGetBaseAddress(pixelsBuffer)];

    // Unlock pixel buffer address
    CVPixelBufferUnlockBaseAddress(pixelsBuffer, 0);

    // Append pixel buffer (calculate currentFrameTime as you need; the simplest way is to
    // start the frame time at 0 and, each time you write a frame, increment it by the
    // duration of one frame, i.e. the inverse of your frame rate)
    [m_pixelsBufferAdaptor appendPixelBuffer:pixelsBuffer withPresentationTime:currentFrameTime];

    // Release pixel buffer
    CVPixelBufferRelease(pixelsBuffer);
}
And don't forget to release your pixelsBufferAdaptor.
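For the currentFrameTime mentioned above, one minimal scheme (assuming a fixed target frame rate) is to number the frames and let CMTimeMake do the division:
// frame N at a fixed rate is presented at N/fps seconds
static int64_t frameIndex = 0;
const int32_t fps = 30; // your target frame rate
CMTime currentFrameTime = CMTimeMake(frameIndex++, fps);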
I do it by using CMSampleBufferCreateForImageBuffer().
OSStatus ret = 0;
CMSampleBufferRef sample = NULL;
CMVideoFormatDescriptionRef videoInfo = NULL;
CMSampleTimingInfo timingInfo = kCMTimingInfoInvalid;
timingInfo.presentationTimeStamp = pts;
timingInfo.duration = duration;

ret = CMVideoFormatDescriptionCreateForImageBuffer(NULL, pixel, &videoInfo);
if (ret != 0) {
    NSLog(@"CMVideoFormatDescriptionCreateForImageBuffer failed! %d", (int)ret);
    goto done;
}

ret = CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault, pixel, true, NULL, NULL,
                                         videoInfo, &timingInfo, &sample);
if (ret != 0) {
    NSLog(@"CMSampleBufferCreateForImageBuffer failed! %d", (int)ret);
    goto done;
}
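Note this version passes a real CMSampleTimingInfo where the code in the question passed NULL for the sampleTiming parameter, which is the most likely reason the question's buffer came back NULL. A sketch of how the tail of this snippet might continue (pts and duration are whatever timestamps you track per frame, and assetWriterInput is the question's writer input):
done:
    if (sample) {
        [assetWriterInput appendSampleBuffer:sample];
        CFRelease(sample);
    }
    if (videoInfo) {
        CFRelease(videoInfo);
    }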

How to get actual pdf page size in iPad?

How can I get PDF page width and height in iPad? Any document or suggestions on how I can find this information?
I was using the answers here until I realised there's a one-liner for this:
CGRect pageRect = CGPDFPageGetBoxRect(pdf, kCGPDFMediaBox);
// Which you can convert to size as
CGSize size = pageRect.size;
Used in Apple's sample ZoomingPDFViewer app
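For context, the pdf variable in that one-liner is a CGPDFPageRef, so presumably something like:
CGPDFDocumentRef document = CGPDFDocumentCreateWithURL((CFURLRef)pdfUrl);
CGPDFPageRef pdf = CGPDFDocumentGetPage(document, 1); // pages are 1-indexed
CGRect pageRect = CGPDFPageGetBoxRect(pdf, kCGPDFMediaBox);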
Here's the code to do it; also to convert a point on the page from PDF coordinates to iOS coordinates. See also Get PDF hyperlinks on iOS with Quartz
#import <CoreGraphics/CoreGraphics.h>

. . . . . . . . . . .

NSString *pathToPdfDoc = [[NSBundle mainBundle] pathForResource:@"Test2" ofType:@"pdf"];
NSURL *pdfUrl = [NSURL fileURLWithPath:pathToPdfDoc];
CGPDFDocumentRef document = CGPDFDocumentCreateWithURL((CFURLRef)pdfUrl);
CGPDFPageRef page = CGPDFDocumentGetPage(document, 1); // assuming all the pages are the same size!

// code from https://stackoverflow.com/questions/4080373/get-pdf-hyperlinks-on-ios-with-quartz,
// suitably amended
CGPDFDictionaryRef pageDictionary = CGPDFPageGetDictionary(page);

//******* getting the page size
CGPDFArrayRef pageBoxArray;
if (!CGPDFDictionaryGetArray(pageDictionary, "MediaBox", &pageBoxArray)) {
    return; // we've got something wrong here!!!
}
int pageBoxArrayCount = (int)CGPDFArrayGetCount(pageBoxArray);
CGPDFReal pageCoords[4];
for (int k = 0; k < pageBoxArrayCount; ++k)
{
    CGPDFObjectRef pageRectObj;
    if (!CGPDFArrayGetObject(pageBoxArray, k, &pageRectObj))
    {
        return;
    }
    CGPDFReal pageCoord;
    if (!CGPDFObjectGetValue(pageRectObj, kCGPDFObjectTypeReal, &pageCoord)) {
        return;
    }
    pageCoords[k] = pageCoord;
}
NSLog(@"PDF coordinates -- bottom left x %f ", pageCoords[0]); // should be 0
NSLog(@"PDF coordinates -- bottom left y %f ", pageCoords[1]); // should be 0
NSLog(@"PDF coordinates -- top right x %f ", pageCoords[2]);
NSLog(@"PDF coordinates -- top right y %f ", pageCoords[3]);
NSLog(@"-- i.e. PDF page is %f wide and %f high", pageCoords[2], pageCoords[3]);

// **** now to convert a point on the page from PDF coordinates to iOS coordinates.
double PDFHeight, PDFWidth;
PDFWidth = pageCoords[2];
PDFHeight = pageCoords[3];

// the size of your iOS view or image into which you have rendered your PDF page
// in this example full screen iPad in portrait orientation
double iOSWidth = 768.0;
double iOSHeight = 1024.0;

// the PDF co-ordinate values you want to convert
double PDFxval = 89;  // or whatever
double PDFyval = 520; // or whatever

// the iOS coordinate values
int iOSxval, iOSyval;
iOSxval = (int)(PDFxval * (iOSWidth / PDFWidth));
iOSyval = (int)((PDFHeight - PDFyval) * (iOSHeight / PDFHeight));
NSLog(@"PDF: %f %f", PDFxval, PDFyval);
NSLog(@"iOS: %i %i", iOSxval, iOSyval);
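One small addition to the above: the document comes from a Create-rule Core Graphics call and is never released, so once you are done with the page you would normally balance it:
CGPDFDocumentRelease(document); // balances CGPDFDocumentCreateWithURL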
The following method returns the height of the first PDF page, scaled to fit a web view:
- (NSInteger)pdfPageSize:(NSData *)pdfData
{
    CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)pdfData);
    CGPDFDocumentRef document = CGPDFDocumentCreateWithProvider(provider);
    CGPDFPageRef page = CGPDFDocumentGetPage(document, 1);
    CGRect pageRect = CGPDFPageGetBoxRect(page, kCGPDFMediaBox);

    double pdfWidth = pageRect.size.width;
    double pdfHeight = pageRect.size.height;
    double iOSWidth = _webView.frame.size.width;
    double iOSHeight = _webView.frame.size.height;

    double scaleWidth = iOSWidth / pdfWidth;
    double scaleHeight = iOSHeight / pdfHeight;
    double scale = scaleWidth > scaleHeight ? scaleWidth : scaleHeight;

    NSInteger finalHeight = pdfHeight * scale;
    NSLog(@"MediaRect %f, %f, %f, %f", pageRect.origin.x, pageRect.origin.y, pageRect.size.width, pageRect.size.height);
    NSLog(@"Scale: %f", scale);
    NSLog(@"Final Height: %ld", (long)finalHeight);

    // balance the CF Create calls above (CF objects are not managed by ARC)
    CGPDFDocumentRelease(document);
    CGDataProviderRelease(provider);

    return finalHeight;
}
It's a lot smaller than Donal O'Danachair's answer and does basically the same thing. You could easily adapt it to your view size and return the width along with the height.
Swift 3.x (based on the answer provided by Ege Akpinar)
let pageRect = pdfPage.getBoxRect(CGPDFBox.mediaBox) // where pdfPage is a CGPDFPage
let pageSize = pageRect.size
A fuller example showing how to extract the dimensions of a page in a PDF file:
/**
 Returns the dimensions of a PDF page.
 - Parameter pdfURL: The URL of the PDF file.
 - Parameter pageNumber: The number of the page to obtain dimensions (Note: page numbers start from `1`, not `0`)
 */
func pageDimension(pdfURL: URL, pageNumber: Int) -> CGSize? {
    // convert URL to NSURL, which is toll-free bridged with CFURL
    let url = pdfURL as NSURL
    guard let pdf = CGPDFDocument(url) else { return nil }
    guard let pdfPage = pdf.page(at: pageNumber) else { return nil }
    let pageRect = pdfPage.getBoxRect(CGPDFBox.mediaBox)
    let pageSize = pageRect.size
    return pageSize
}

let url = Bundle.main.url(forResource: "file", withExtension: "pdf")!
let dimensions = pageDimension(pdfURL: url, pageNumber: 1)
print(dimensions ?? "Cannot get dimensions")
I took Donal O'Danachair's answer and made a few modifications so the rect size is also scaled to the PDF's size. This code snippet actually gets all the annotations off a PDF page and creates the CGRect from the PDF rect. Part of the code comes from the answer to a question Donal commented on in his answer.
CGPDFDictionaryRef pageDictionary = CGPDFPageGetDictionary(pageRef);

CGFloat boundsWidth = pdfView.bounds.size.width;
CGFloat boundsHeight = pdfView.bounds.size.height;

CGRect cropBoxRect = CGPDFPageGetBoxRect(pageRef, kCGPDFCropBox);
CGRect mediaBoxRect = CGPDFPageGetBoxRect(pageRef, kCGPDFMediaBox);
CGRect effectiveRect = CGRectIntersection(cropBoxRect, mediaBoxRect);

CGFloat effectiveWidth = effectiveRect.size.width;
CGFloat effectiveHeight = effectiveRect.size.height;

CGFloat widthScale = (boundsWidth / effectiveWidth);
CGFloat heightScale = (boundsHeight / effectiveHeight);
CGFloat pdfScale = (widthScale < heightScale) ? widthScale : heightScale;

CGFloat x_offset = ((boundsWidth - (effectiveWidth * pdfScale)) / 2.0f);
CGFloat y_offset = ((boundsHeight - (effectiveHeight * pdfScale)) / 2.0f);
y_offset = (boundsHeight - y_offset); // Co-ordinate system adjust
//CGFloat x_translate = (x_offset - effectiveRect.origin.x);
//CGFloat y_translate = (y_offset + effectiveRect.origin.y);

CGPDFArrayRef outputArray;
if (!CGPDFDictionaryGetArray(pageDictionary, "Annots", &outputArray)) {
    return;
}

int arrayCount = (int)CGPDFArrayGetCount(outputArray);
if (!arrayCount) {
    //continue;
}

self.annotationRectArray = [[NSMutableArray alloc] initWithCapacity:arrayCount];

for (int j = 0; j < arrayCount; ++j) {
    CGPDFObjectRef aDictObj;
    if (!CGPDFArrayGetObject(outputArray, j, &aDictObj)) {
        return;
    }

    CGPDFDictionaryRef annotDict;
    if (!CGPDFObjectGetValue(aDictObj, kCGPDFObjectTypeDictionary, &annotDict)) {
        return;
    }

    CGPDFDictionaryRef aDict;
    if (!CGPDFDictionaryGetDictionary(annotDict, "A", &aDict)) {
        return;
    }

    CGPDFStringRef uriStringRef;
    if (!CGPDFDictionaryGetString(aDict, "URI", &uriStringRef)) {
        return;
    }

    CGPDFArrayRef rectArray;
    if (!CGPDFDictionaryGetArray(annotDict, "Rect", &rectArray)) {
        return;
    }

    int rectArrayCount = (int)CGPDFArrayGetCount(rectArray);
    CGPDFReal coords[4];
    for (int k = 0; k < rectArrayCount; ++k) {
        CGPDFObjectRef rectObj;
        if (!CGPDFArrayGetObject(rectArray, k, &rectObj)) {
            return;
        }
        CGPDFReal coord;
        if (!CGPDFObjectGetValue(rectObj, kCGPDFObjectTypeReal, &coord)) {
            return;
        }
        coords[k] = coord;
    }

    char *uriString = (char *)CGPDFStringGetBytePtr(uriStringRef);

    //******* getting the page size
    CGPDFArrayRef pageBoxArray;
    if (!CGPDFDictionaryGetArray(pageDictionary, "MediaBox", &pageBoxArray)) {
        return; // we've got something wrong here!!!
    }
    int pageBoxArrayCount = (int)CGPDFArrayGetCount(pageBoxArray);
    CGPDFReal pageCoords[4];
    for (int k = 0; k < pageBoxArrayCount; ++k)
    {
        CGPDFObjectRef pageRectObj;
        if (!CGPDFArrayGetObject(pageBoxArray, k, &pageRectObj))
        {
            return;
        }
        CGPDFReal pageCoord;
        if (!CGPDFObjectGetValue(pageRectObj, kCGPDFObjectTypeReal, &pageCoord)) {
            return;
        }
        pageCoords[k] = pageCoord;
    }

#if DEBUG
    NSLog(@"PDF coordinates -- bottom left x %f ", pageCoords[0]); // should be 0
    NSLog(@"PDF coordinates -- bottom left y %f ", pageCoords[1]); // should be 0
    NSLog(@"PDF coordinates -- top right x %f ", pageCoords[2]);
    NSLog(@"PDF coordinates -- top right y %f ", pageCoords[3]);
    NSLog(@"-- i.e. PDF page is %f wide and %f high", pageCoords[2], pageCoords[3]);
#endif

    // **** now to convert a point on the page from PDF coordinates to iOS coordinates.
    double PDFHeight, PDFWidth;
    PDFWidth = pageCoords[2];
    PDFHeight = pageCoords[3];

    // the size of your iOS view or image into which you have rendered your PDF page
    // in this example full screen iPad in portrait orientation
    double iOSWidth = 768.0;
    double iOSHeight = 1024.0;

    // the PDF co-ordinate values you want to convert
    double PDFxval = coords[0]; // or whatever
    double PDFyval = coords[3]; // or whatever
    double PDFhval = (coords[3] - coords[1]);
    double PDFwVal = coords[2] - coords[0];

    // the iOS coordinate values
    CGFloat iOSxval, iOSyval, iOShval, iOSwval;
    iOSxval = PDFxval * (iOSWidth / PDFWidth);
    iOSyval = (PDFHeight - PDFyval) * (iOSHeight / PDFHeight);
    iOShval = PDFhval * (iOSHeight / PDFHeight); // here I scale the width and height
    iOSwval = PDFwVal * (iOSWidth / PDFWidth);

#if DEBUG
    NSLog(@"PDF: { {%f %f }, { %f %f } }", PDFxval, PDFyval, PDFwVal, PDFhval);
    NSLog(@"iOS: { {%f %f }, { %f %f } }", iOSxval, iOSyval, iOSwval, iOShval);
#endif

    NSString *uri = [NSString stringWithCString:uriString encoding:NSUTF8StringEncoding];
    CGRect rect = CGRectMake(iOSxval, iOSyval, iOSwval, iOShval); // create the rect and use it as you wish