I need to render rich text using Core Text in my view (simple formatting, multiple fonts in one line of text, etc.). I am wondering: can text rendered this way be selected by the user (with the standard copy / paste functionality)?
I implemented text selection in Core Text. It is really hard work... but it's doable.
Basically you have to save all CTLine rects and origins using CTFrameGetLineOrigins(1), CTLineGetTypographicBounds(2), CTLineGetStringRange(3) and CTLineGetOffsetForStringIndex(4).
The line rect can be calculated from the origin (1), ascent (2), descent (2) and offset (3)(4) as shown below.
// origin comes from CTFrameGetLineOrigins, ascent/descent from
// CTLineGetTypographicBounds, and offset is the value returned by
// CTLineGetOffsetForStringIndex for the end of the line's string range,
// i.e. effectively the line's width.
CGRect lineRect = CGRectMake(origin.x,
                             origin.y - descent,
                             offset,
                             ascent + descent);
After doing that, you can test which line contains the touched point by looping through the lines (always remember that Core Text uses a flipped Y coordinate system).
Once you know which line contains the touched point, you can find the character at (or nearest to) that point using CTLineGetStringIndexForPosition.
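A minimal sketch of that hit test, assuming this lives in a UIView subclass, that you kept the CTFrameRef around as ctFrame, that the frame's path starts at the context origin, and that point is the touch location in the view's coordinates (names are illustrative, not from the original code):

CFArrayRef lines = CTFrameGetLines(ctFrame);
CFIndex lineCount = CFArrayGetCount(lines);
CGPoint origins[lineCount];
CTFrameGetLineOrigins(ctFrame, CFRangeMake(0, 0), origins);

// Convert the touch into Core Text's flipped coordinate system.
CGPoint ctPoint = CGPointMake(point.x, self.bounds.size.height - point.y);

for (CFIndex i = 0; i < lineCount; i++) {
    CTLineRef line = (CTLineRef)CFArrayGetValueAtIndex(lines, i);
    CGFloat ascent, descent, leading;
    double width = CTLineGetTypographicBounds(line, &ascent, &descent, &leading);
    CGRect lineRect = CGRectMake(origins[i].x, origins[i].y - descent,
                                 width, ascent + descent);
    if (CGRectContainsPoint(lineRect, ctPoint)) {
        // touchedIndex is the character index to anchor the selection at.
        CFIndex touchedIndex = CTLineGetStringIndexForPosition(line,
            CGPointMake(ctPoint.x - origins[i].x, ctPoint.y - origins[i].y));
        break;
    }
}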
Here's one screenshot.
For that loupe, I used the code shown in this post.
Edit:
To draw the blue selection background, you have to fill the rect yourself using CGContextFillRect; unfortunately, there's no background-color attribute in NSAttributedString that Core Text will draw for you.
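For example (a sketch, assuming context is the CGContextRef you draw into and selectionRect was built from the line origin, ascent/descent and the offsets of the selection's start and end indices):

CGContextSetFillColorWithColor(context, [UIColor colorWithRed:0.6 green:0.75 blue:1.0 alpha:1.0].CGColor);
CGContextFillRect(context, selectionRect);
CTLineDraw(line, context);   // then draw the glyphs over the highlight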
I am drawing text in a PDF page using iTextSharp, and I have two requirements:
1) the text needs to be searchable by Adobe Reader and such
2) I need character-level control over where the text is drawn.
I can draw the text word-by-word using PdfContentByte.ShowText(), but I don't have control over where each character is drawn.
I can draw the text character-by-character using PdfContentByte.ShowText() but then it isn't searchable.
I'm now trying to create a PdfTextArray, which would seem to satisfy both of my requirements, but I'm having trouble calculating the correct offsets.
So my first question is: do you agree that PdfTextArray is what I need to do, in order to satisfy both of my original requirements?
If so, I have the PdfTextArray working correctly (in that it's outputting text) but I can't figure out how to accurately calculate the positioning offset that needs to get put between each pair of characters (right now I'm just using the fixed value -200 just to prove that the function works).
I believe the positioning offset is the distance from the right edge of the previous character to the left edge of the new character, expressed in "thousandths of a unit of text space". That leaves me two problems:
1) How wide is the previous character (in points), as drawn in the specified font & height? (I know where its left edge is, since I drew it there)
2) How do I convert from points to "units of text space"?
I'm not doing any fancy scaling or rotating, so my transformation matrices should all be identity matrices, which should simplify the calculations ...
Thanks,
Chris
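A note on the points-to-text-space conversion, taken from the PDF specification rather than anything iTextSharp-specific (so treat it as a hint, not a definitive answer): a number n in a TJ array, which is what PdfTextArray produces, moves the text position horizontally by n/1000 of the current font size, in the direction opposite to the writing direction (positive values move the next glyph left, negative values move it right). With identity matrices and the default 100% horizontal scaling, that works out to:

gapInPoints = -(n / 1000) * fontSizeInPoints
n = -(gapInPoints * 1000) / fontSizeInPoints

The width of the previous character in points comes from the font metrics; if I remember correctly, iTextSharp's BaseFont can report widths in points (GetWidthPoint), so the offset you need is the distance from the previous character's right edge to the next character's desired left edge, converted with the formula above.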
Here's a screenshot of the twitter app for reference: http://screencast.com/t/YmFmYmI4M
What I want to do is place a floating pop-over on top of a substring in an NSAttributedString that could span multiple lines. NSAttributedString is a requirement for the project.
In the screenshot supplied, you can see that links are background-highlighted, so it leads me to believe that they're using CoreText and NSAttributedStrings. I also found something called CTRunRef ( http://developer.apple.com/library/ios/#documentation/Carbon/Reference/CTRunRef/Reference/reference.html ) which looks promising, but I'm having trouble fitting it all together conceptually.
In short, if I have a paragraph laid out with Core Text and I tap on a word, how do I find the bounding box for that word?
Set some attribute in the attributed string that won't affect display but will cause it to be laid out as a separate glyph run, then use Core Text to lay out the string:
// attrString is your NSAttributedString; path is a CGPathRef describing the area the text is laid out in
CTFramesetterRef framesetter = CTFramesetterCreateWithAttributedString((CFAttributedStringRef)attrString);
CTFrameRef ctframe = CTFramesetterCreateFrame(framesetter, CFRangeMake(0, 0), path, NULL);
Now you will have to hunt through the frame to find the relevant chunk of text. Get the array of CTLineRefs with CTFrameGetLines().
Iterate through the array, testing whether the touch was on each line by checking whether it falls within the rect returned by CTLineGetImageBounds(). If it does, look through the glyph runs in that line.
Again, you can get an array of CTRunRefs with CTLineGetGlyphRuns(). Check whether the tap was within a glyph run with CTRunGetImageBounds(), and if it was, you can find the range of indices in the original attributed string that the glyph run corresponds to with CTRunGetStringIndices().
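A rough skeleton of that walk, under a few assumptions (names are illustrative: ctframe is from above, context is the CGContextRef the text is drawn into, and touchPoint has already been flipped into that context's coordinate space). The image-bounds calls position their rects at the context's current text position, so each line's origin is set as the text position first; CTRunGetStringRange() is used here as the one-call way to get the run's character range, whereas CTRunGetStringIndices() gives the per-glyph indices:

CFArrayRef lines = CTFrameGetLines(ctframe);
CFIndex lineCount = CFArrayGetCount(lines);
CGPoint lineOrigins[lineCount];
CTFrameGetLineOrigins(ctframe, CFRangeMake(0, 0), lineOrigins);

for (CFIndex i = 0; i < lineCount; i++) {
    CTLineRef line = (CTLineRef)CFArrayGetValueAtIndex(lines, i);
    CGContextSetTextPosition(context, lineOrigins[i].x, lineOrigins[i].y);

    if (!CGRectContainsPoint(CTLineGetImageBounds(line, context), touchPoint))
        continue;

    CFArrayRef runs = CTLineGetGlyphRuns(line);
    for (CFIndex j = 0; j < CFArrayGetCount(runs); j++) {
        CTRunRef run = (CTRunRef)CFArrayGetValueAtIndex(runs, j);
        CGRect runBounds = CTRunGetImageBounds(run, context, CFRangeMake(0, 0));
        if (CGRectContainsPoint(runBounds, touchPoint)) {
            // stringRange is the tapped run's character range in the attributed
            // string; runBounds is the box to anchor the pop-over to.
            CFRange stringRange = CTRunGetStringRange(run);
        }
    }
}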
You need to find the Y coordinate from the CTLine and the X coordinate from the CTRun; the width and height you can get from the word and the font itself. I'll attach my project link; it's really simple code, but you can edit it to meet your needs. Hope it helps. If you improve the general logic, please let me know.
textViewProject
The link given by George was very helpful and got me what I wanted, but a strange thing happened: it worked with iOS SDK 4.0, but with iOS SDK 5 the link appeared in the wrong position on the view.
So I had to tweak the code a little bit. For the x coordinates of the touchable button, I had to use CTRunGetTypographicBounds instead of the CTRunGetImageBounds function.
So, overall, in the tweaked code:
The y coordinate, width and height were calculated using CTRunGetImageBounds.
The x coordinate was calculated using CTRunGetTypographicBounds.
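For reference, a sketch of building the run's rect that way (assuming line, run and the matching lineOrigin from CTFrameGetLineOrigins are at hand; here the height is also taken from the typographic bounds to keep it self-contained, and the rect is in Core Text's flipped coordinates):

CGFloat ascent, descent, leading;
double width = CTRunGetTypographicBounds(run, CFRangeMake(0, 0), &ascent, &descent, &leading);
CFRange runRange = CTRunGetStringRange(run);

// x offset of the run's first character, measured from the line origin
CGFloat xOffset = CTLineGetOffsetForStringIndex(line, runRange.location, NULL);

CGRect runRect = CGRectMake(lineOrigin.x + xOffset,
                            lineOrigin.y - descent,
                            width,
                            ascent + descent);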
I've been working on a small library that does exactly that. You can find it here: https://github.com/pothibo/CMFramework
However, this library is in its alpha stage; there's optimization needed and some features are lacking (line height is one of the urgent features I want to add).
If you decide to use it and run into problems, don't hesitate to post issues on GitHub!
Does CGContextMoveToPoint work with CGContextShowText? I'm trying to draw to a PDF. Without any translation of the CTM, if I draw text, I see it in the bottom left of the page. Then I try to move to point (100, 100), and the text is still there. But if I translate the CTM to (100, 100), then I see the text at that point. Otherwise it seems like I have to translate my CTM, then make the reverse translation, then move it somewhere else to draw other text (for example, a title followed by the start of a paragraph). Thanks!
You need to use CGContextSetTextPosition() instead. I don't know why Quartz keeps different positions for text and graphics, but that's the way it is.
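A minimal sketch (assuming pdfContext is the CGContextRef of your PDF graphics context; CGContextMoveToPoint only moves the path's current point, while text is positioned through the text position; depending on how the PDF context was created, you may also need to adjust the text matrix so the glyphs aren't flipped):

const char *title = "My Title";
CGContextSelectFont(pdfContext, "Helvetica", 24.0, kCGEncodingMacRoman);
CGContextSetTextDrawingMode(pdfContext, kCGTextFill);
CGContextSetTextPosition(pdfContext, 100.0, 700.0);   // where the title starts
CGContextShowText(pdfContext, title, strlen(title));

const char *body = "First paragraph...";
CGContextSelectFont(pdfContext, "Helvetica", 12.0, kCGEncodingMacRoman);
CGContextSetTextPosition(pdfContext, 100.0, 680.0);   // new text position, no CTM change
CGContextShowText(pdfContext, body, strlen(body));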
I've got a few short paragraphs of text that I'd like to place wrapped around an image in my app's view. Typically, the image will be on the left side of the text and the text would flow as you normally expect. The image has varying height and width and size-wise, 4-30kb png files. The text has varying length and can be anywhere between 2 sentences to a few paragraphs. There's markup with text so different lines can be formatted accordingly.
I've been using UIWebView within my UIView to do this in the quickest manner but what I've noticed is that even though my text and images are local, there's a noticeable delay in when the UIWebView loads and shows the image and text. Basically, you see the view area blank and then a short moment later, you see the image load. Most of the time, the text is shown as loaded before the image.
I'm using UIWebView's 'loadHTMLString' to load the local html text and images.
What I want to achieve would look like:
| [ ] | Some text starts here and is long enough to fill until the end of the line. Then
| [image] | the text continues so it wraps around a few more lines to fit the entire
| [ ] | height of the image.
Eventually, we'll show the text below the image, just like you see in a newspaper.
The content would continue at variable length for the rest of the screen.
Is there a better way to display formatted text with images? If there's something better than UIWebView, I'd love to move to that.
I think your basic strategy should be (1) try to get the performance of UIWebView to an acceptable level and (2) if that fails, and it's really important, roll your own text layout code.
Some ideas:
Try to load content into the UIWebView as early as possible, before it is displayed on screen.
See if using inline styles (instead of a linked stylesheet) improves load time.
See if using inline images (by using <img src="data:...">) improves load time.
If none of that works and you want to roll your own, you're going to have to write your own text layout code. You could probably do this with NSString's UIKit additions: split the text into words, compute the size of each word (using sizeWithFont:), and lay out the text line by line. You should probably measure word by word but draw a whole line at a time; that should perform better. Or you might be able to build a new string with newlines inserted at the right places and draw the whole thing in one go.
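A very rough sketch of that measure-and-draw idea, using the NSString UIKit additions of the time (sizeWithFont: / drawAtPoint:withFont:); text, font and textRect are placeholders, and this would run inside drawRect: where a current graphics context exists:

CGFloat x = CGRectGetMinX(textRect);
CGFloat y = CGRectGetMinY(textRect);
CGSize spaceSize = [@" " sizeWithFont:font];

for (NSString *word in [text componentsSeparatedByString:@" "]) {
    CGSize wordSize = [word sizeWithFont:font];
    if (x + wordSize.width > CGRectGetMaxX(textRect)) {
        // Wrap to the next line. To flow around an image, use a narrower
        // line rect while y is still within the image's height.
        x = CGRectGetMinX(textRect);
        y += font.lineHeight;
    }
    [word drawAtPoint:CGPointMake(x, y) withFont:font];
    x += wordSize.width + spaceSize.width;
}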
Good luck!
Try the following. This is without Core Text or HTML; it uses the text container's exclusion paths (Text Kit, iOS 7+):
// Exclude a rect from the text container so the text wraps around the image.
UIBezierPath *imgRect = [UIBezierPath bezierPathWithRect:CGRectMake(0, 0, 100, 100)];
UIImageView *imageView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 100, 100)];
imageView.image = [UIImage imageNamed:@"defaultImage"];
[self.textView addSubview:imageView];
self.textView.textContainer.exclusionPaths = @[imgRect];