FastPDFKit get document size - iPhone

I might be overlooking this, but how can one get the size of a PDF if you're using FastPDFKit?
I'm trying to create PNGs from a PDF, but without knowing the actual dimensions of the page it's rather hard to get it right.
EDIT:
I searched the documentation before, but all I found was this method:
- (void)getCropbox:(CGRect *)cropbox andRotation:(int *)rotation forPageNumber:(NSInteger)pageNumber withBuffer:(BOOL)withOrWithout
But I have no idea how to use it; it doesn't return anything.

Not sure how familiar you are with PDF, but each page specifies a number of "boxes", which are various rectangles of interest (crop, media, etc.).
Most apps using PDF elect to use the crop box which defines the region of the page that should be displayed. (See section 10.10.1 of the PDF 1.7 specification: http://www.adobe.com/devnet/pdf/pdf_reference_archive.html)
Also of note is that a page can specify a rotation angle of 0, 90, 180, or 270, which you need to apply to the crop box yourself (see /Rotate in Table 3.27 of the PDF 1.7 spec).
So using the above API you would call it like so:
[somePDFDoc getCropbox:&cropbox andRotation:&rotation forPageNumber:10 withBuffer:NO];
This would give you the cropbox rect and its rotation.
NOTE: I have never used FastPDFKit
NOTE2: If the value of cropbox is CGRectZero, you want to use one of the other rects. Most viewers use the media box instead.
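Putting that together, here is a minimal sketch (untested, per the notes above) of turning those two out-parameters into a page size for rendering; the variable declarations and the rotation handling are my own assumptions:
CGRect cropbox = CGRectZero;
int rotation = 0;
[somePDFDoc getCropbox:&cropbox andRotation:&rotation forPageNumber:10 withBuffer:NO];
// If cropbox came back as CGRectZero, fall back to another box (e.g. the media box).
CGSize pageSize = cropbox.size;
if (rotation == 90 || rotation == 270) {
    // A /Rotate of 90 or 270 swaps the visible width and height.
    pageSize = CGSizeMake(cropbox.size.height, cropbox.size.width);
}
// pageSize is in PDF points (1/72 inch); scale it to the pixel dimensions
// you want before creating the PNG bitmap context.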

Related

Using GhostScript to export PNGs at fixed size

We have many square EPS images, which we would like to export via script to PNG at very specific formats/sizes, namely
8192x8192, greyscale, no alpha, no anti-aliasing
2048x2048, greyscale, no alpha, anti-aliased.
We have had no luck scripting the "professional" tools Photoshop or Illustrator to do this (although we can do so through the UI, their weak scripting support does not give control over alpha or precise image export size, so we either always get alpha in the large images, or we sometimes get slightly inaccurate image sizes, which breaks subsequent algorithms).
Our first attempt at doing the high resolution version of this was:
gs -sDEVICE=pnggray -o cover.png -dDEVICEWIDTHPOINTS=8192 -dDEVICEHEIGHTPOINTS=8192 -dGraphicsAlphaBits=1 -dPDFFitPage=true cover.eps
However, this does not seem to resize the image to fill the box as expected.
Is there a way, given a square EPS, to get Ghostscript to do what we want?
Your problem with EPS files is that they do not request a media size. That's because EPS files are intended to be included in other PostScript programs, so they need to be resized by the application generating the PostScript.
To that end, EPS files include comments (which are ignored by PostScript interpreters) which define the BoundingBox of the EPS. An application which places EPS can quickly scan the EPS to find this information, then it sets the CTM appropriately in the final PostScript program it is creating and inserts the content of the EPS.
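For reference, these comments sit near the top of the EPS and look something like this (the numbers here are made up):
%%BoundingBox: 0 0 612 612
%%HiResBoundingBox: 0 0 612.00 612.00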
The FitPage switch in Ghostscript relies on having a known media size (and you should set -dFIXEDMEDIA when using this) and a requested media size, figuring out what scale factor to apply to the request in order to make it fit the actual size, and setting up the CTM to apply that scaling.
If you don't ever get a media size request (which you won't with an EPS) then no scaling will take place.
Now Ghostscript does have a different switch, EPSCrop, which picks up the comments from the EPS and uses them to set the media size (Ghostscript has mechanisms to permit processing of comments for this reason, amongst others). You could implement a similar mechanism to pick up the BoundingBox comments and scale the EPS so that it fits a desired target media size.
I could probably knock something up, but I'd have to mess around creating an example file to work from...
Do not accidentally specify PDFFitPage in the command line above. Specify EPSFitPage when dealing with EPS files. PDFFitPage will silently do nothing.
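Putting the pieces together, something along these lines should handle the 8192x8192 case (untested; it assumes the pnggray device's default 72 dpi, so 8192 points come out as 8192 pixels):
gs -sDEVICE=pnggray -o cover.png -dDEVICEWIDTHPOINTS=8192 -dDEVICEHEIGHTPOINTS=8192 -dFIXEDMEDIA -dEPSFitPage -dGraphicsAlphaBits=1 cover.eps
For the anti-aliased 2048x2048 variant, change the two size switches and raise -dGraphicsAlphaBits (and, if there is text, -dTextAlphaBits) to 4.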

Making the "Region of Interest" (ROI) transparent in MATLAB

I've already made a function to cut out the image, and the part I cut out has a black background. I'm trying to make the black part transparent, then I can generate an image sequence that I can create a video with. I've tried converting the image to a double and then replacing the 0 values with NaN:
J = imread('imgExample.jpg');
J2 = im2double(J);
J2(J2 == 0) = NaN;
imwrite(J2, 'newImg.jpg');
but when I convert it into a video, it doesn't seem to stay. Is there any way to get the black part of the image to be transparent?
From clarifications in comments, you are trying to create a video format that supports alpha transparency using MATLAB.
In general this seems impossible using MATLAB alone (at least in MATLAB 2013, which is the version I use). If you'd like to check if the newest MATLAB supports videos with alpha transparency, type doc videowriter and have a look at the available formats. If you see anything with transparency options there, take it from there. But the most I see on mine is 24bit RGB videos (i.e. three channels, no transparency).
So MATLAB does not have the ability to produce native .avi video with alpha transparency.
However, note that this is a very rare video format anyway, and even if you did manage to produce such a video, you would still have to find a suitable viewer which supports playing videos with transparency!
It's therefore important for you to tell us your particular use-case, because it may be you're actually trying to do something much simpler (which may or may not be solvable via MATLAB), i.e. a case of the XY Problem.
E.g. you may be trying to create a video with transparency for the web instead, like here https://developers.google.com/web/updates/2013/07/Alpha-transparency-in-Chrome-video
If this is the case, then I would recommend you attempt the method outlined there; you can create individual .png "frames" with transparency in MATLAB using the imwrite function. Have a look at its documentation, particularly the section about png images and the 'Alpha' property. But beyond that, you'd need an external tool to combine them into a .webm file, since MATLAB doesn't seem to have a tool like that (at least none that I can see at a glance; there might be a 3rd-party toolkit if you look on the web).
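For example, a minimal sketch of writing one such frame with an alpha channel (the file names are placeholders, and the exact-black test is an assumption; you may want a small tolerance rather than strict zero):
J = imread('imgExample.jpg');          % cut-out frame with a black background
alphaMask = double(any(J > 0, 3));     % transparent only where all channels are 0
imwrite(J, 'frame0001.png', 'Alpha', alphaMask);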
Hope this helps.

Is it possible to determine the (pixel-)width of text-strings before SVGs are created with scripts

I am about to create a bunch of SVG graphics with (probably) a Perl script. These SVG graphics will contain text blocks. Since I want to "connect" such text blocks (of varying widths) with lines, I'd like to know what width a text will be so that I can draw the connecting lines' length accordingly.
I have seen in SVG get text element width that it could be possible with JavaScript. But that's probably not what I am after, since I don't intend to host the SVG in a browser.
So, I thought that maybe there's a way to find out the desired width at the script's runtime. If someone can point me to a solution (also outside the realm of Perl, but on Windows), I'd be very grateful.
I did exactly that about a year ago using PDF::API2 and its advancewidth function: https://metacpan.org/module/PDF::API2::Content#width-txt-advancewidth-string-text_state-
Note that you need to correlate the DPI of the PDF and the SVG: they may be different (I actually did that by just dividing the values by 1.25; you can do better).
PDF::API2 gives you very accurate values that worked well for Inkscape in my case.
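A minimal sketch of that approach (untested; the font, size, and scale factor are placeholders you'd adjust to match your SVG):
#!/usr/bin/perl
use strict;
use warnings;
use PDF::API2;

my $pdf  = PDF::API2->new();
my $page = $pdf->page();
my $text = $page->text();
my $font = $pdf->corefont('Helvetica');   # use the same font and size as in the SVG
$text->font($font, 12);

my $width_pt = $text->advancewidth('some text block label');
# Scale points to your SVG's user units; the factor depends on the DPI your
# SVG consumer assumes (the answer above used 1.25 for Inkscape).
my $width_svg = $width_pt * 1.25;
print "width in SVG units: $width_svg\n";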

iPhone CoreText: Find the pixel coordinates of a substring

Here's a screenshot of the twitter app for reference: http://screencast.com/t/YmFmYmI4M
What I want to do is place a floating pop-over on top of a substring in an NSAttributedString that could span multiple lines. NSAttributedString is a requirement for the project.
In the screenshot supplied, you can see that links are background-highlighted, so it leads me to believe that they're using CoreText and NSAttributedStrings. I also found something called CTRunRef ( http://developer.apple.com/library/ios/#documentation/Carbon/Reference/CTRunRef/Reference/reference.html ) which looks promising, but I'm having trouble fitting it all together conceptually.
In short, if I have a paragraph laid out with Core Text and I tap on a word, how do I find the bounding box for that word?
Set some attribute in the attributed string that won't affect display but will cause it to be laid out as a separate glyph run, then use Core Text to lay out the string:
CTFramesetterRef framesetter = CTFramesetterCreateWithAttributedString(attrString);
CTFrameRef ctframe = CTFramesetterCreateFrame(framesetter, CFRangeMake(0, 0), path, NULL);
Now you will have to hunt through the frame to find the relevant chunk of text. Get the array of CTLineRefs with CTFrameGetLines().
Iterate through the array, testing if the touch was on that line, by checking it is within the rect returned by CTLineGetImageBounds(). If it is, now look through the glyph runs in the line.
Again, you can get an array of CTRunRefs with CTLineGetGlyphRuns(). Check whether the tap was within the glyph run with CTRunGetImageBounds(), and if it was you can find the range of indices in the original attributed string that the glyph run corresponds to with CTRunGetStringIndices().
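A rough sketch of that hunt (untested; this variant uses line origins and typographic bounds rather than image bounds, and it assumes ctframe was created as above and touchPoint has already been flipped into Core Text's bottom-left-origin coordinate space):
#import <CoreText/CoreText.h>

CFArrayRef lines = CTFrameGetLines(ctframe);
CFIndex lineCount = CFArrayGetCount(lines);
CGPoint origins[lineCount];
CTFrameGetLineOrigins(ctframe, CFRangeMake(0, 0), origins);

for (CFIndex i = 0; i < lineCount; i++) {
    CTLineRef line = (CTLineRef)CFArrayGetValueAtIndex(lines, i);
    CGFloat ascent, descent, leading;
    double lineWidth = CTLineGetTypographicBounds(line, &ascent, &descent, &leading);
    CGRect lineRect = CGRectMake(origins[i].x, origins[i].y - descent,
                                 lineWidth, ascent + descent);
    if (!CGRectContainsPoint(lineRect, touchPoint)) continue;

    CFArrayRef runs = CTLineGetGlyphRuns(line);
    for (CFIndex j = 0; j < CFArrayGetCount(runs); j++) {
        CTRunRef run = (CTRunRef)CFArrayGetValueAtIndex(runs, j);
        CFRange runRange = CTRunGetStringRange(run);
        // x offsets of the run's start and end relative to the line origin
        CGFloat xStart = CTLineGetOffsetForStringIndex(line, runRange.location, NULL);
        CGFloat xEnd   = CTLineGetOffsetForStringIndex(line, runRange.location + runRange.length, NULL);
        CGRect runRect = CGRectMake(origins[i].x + xStart, origins[i].y - descent,
                                    xEnd - xStart, ascent + descent);
        if (CGRectContainsPoint(runRect, touchPoint)) {
            // runRange indexes into the original attributed string; this is your word/link.
        }
    }
}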
You need to find the y coordinate from the CTLine and the x coordinate from the CTRun; the width and height you can get from the word and the font itself. I'll attach my project link; it's really simple code, but you can re-edit it to meet your needs. Hope it helps. Cheers, and if you improve the general logic please let me know.
textViewProject
The link given by George was very helpful and got me what I wanted. But a strange thing happened: it was working with the iOS 4.0 SDK, but with the iOS 5 SDK the link appeared at the wrong position in the view.
So I had to tweak the code a little bit. For the x coordinates of the touchable button, I had to use CTRunGetTypographicBounds instead of the CTRunGetImageBounds function.
So overall, in the tweaked code:
The y coordinate, width, and height were calculated using CTRunGetImageBounds.
The x coordinate was calculated using CTRunGetTypographicBounds.
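A sketch of that combination inside the run loop (untested; context would be a valid CGContextRef, e.g. the one available while drawing, and lineOrigin the current line's origin):
// x: accumulate the typographic widths of the runs that precede this one in the line
CGFloat xOffset = 0;
for (CFIndex k = 0; k < j; k++) {
    CTRunRef prev = (CTRunRef)CFArrayGetValueAtIndex(runs, k);
    xOffset += CTRunGetTypographicBounds(prev, CFRangeMake(0, 0), NULL, NULL, NULL);
}
// y, width, height: taken from the image bounds
CGRect imageBounds = CTRunGetImageBounds(run, context, CFRangeMake(0, 0));
CGRect hitRect = CGRectMake(lineOrigin.x + xOffset,
                            imageBounds.origin.y,
                            imageBounds.size.width,
                            imageBounds.size.height);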
I've been working on a small library that does exactly that. You can find it here: https://github.com/pothibo/CMFramework
However, this library is in its alpha stage; there's optimization needed and some features are lacking (line height is one of the urgent features I want to add).
If you decide to use it and find issues, don't hesitate to post them on GitHub!