I am creating a syntax highlighter for the iPhone. In order to display text with multiple formats, I have subclassed UIView and overridden the drawRect: method so that each line is displayed with the proper syntax highlighting (the highlighting is determined earlier with regular expressions; the text is drawn with CGContextShowTextAtPoint() one line at a time). Everything works OK: I store each line of text as an NSString in an NSMutableArray, I handle the keyboard through a hidden UITextField and its delegate methods, the cursor is a blinking CALayer that can be moved around with touches, and I have a custom scroll view that handles scrolling. However, I have two problems that I can't seem to wrap my head around:
Word wrap: right now the text just keeps going off the right edge of the screen. To keep things fast I only redraw the portions of the view that have changed (usually just the line being edited, but sometimes the lines below as well, e.g. if you press return halfway through the document) with setNeedsDisplayInRect:. This makes word wrap complicated, because a single object in the array may then have to be drawn as more than one line on the screen.
UIViews have a maximum content size of 1024x1024 pixels, which works out to only about 64 lines of text. I need to be able to display more than that. I am thinking about using multiple CALayers stacked one after another, but I am having trouble drawing content into the layers (using drawLayer:inContext: and drawInContext:).
So my questions are:
Does anyone have any suggestions, even general ones, about how to accomplish either of these two things? Or,
has someone already written a custom text-drawing view that handles these things that I could use instead?
Thanks,
Kyle
EDIT: The scrolling problem is pretty much solved, but I am still having trouble with word wrap. My trouble is that everything is done by line: the view updates one line at a time, the text is stored as an array of lines, the highlighter highlights one line at a time, and so on. Having a single index in the array (one line of text) take up multiple lines on the screen raises some problems. For example, I had to implement my own movable cursor, and when you move the cursor it needs to be able to turn a display line (found by dividing touch.y by the line height) into a text line (an index in the array). Any ideas?
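(For reference, one way to keep the line-oriented model and still support wrap is to maintain a small display-line map that is rebuilt whenever a line changes. A minimal sketch, assuming a fixed-width font; WrapEntry, wrapMap, lines, lineHeight and glyphWidth are illustrative names, not taken from the question:

    // One entry per *display* line: which text line it came from, and the
    // character range of that text line shown on this display line.
    @interface WrapEntry : NSObject {
        NSUInteger textLine;   // index into the lines array
        NSRange    charRange;  // substring of that line shown on this display line
    }
    @property (nonatomic, assign) NSUInteger textLine;
    @property (nonatomic, assign) NSRange charRange;
    @end

    @implementation WrapEntry
    @synthesize textLine, charRange;
    @end

    // Rebuild the map after an edit. charsPerDisplayLine is
    // floor(viewWidth / glyphWidth) for a fixed-width font.
    - (void)rebuildWrapMapWithCharsPerLine:(NSUInteger)charsPerDisplayLine {
        [wrapMap removeAllObjects];                       // NSMutableArray of WrapEntry
        for (NSUInteger i = 0; i < [lines count]; i++) {
            NSString *line = [lines objectAtIndex:i];
            NSUInteger offset = 0;
            do {
                NSUInteger len = MIN(charsPerDisplayLine, [line length] - offset);
                WrapEntry *entry = [[[WrapEntry alloc] init] autorelease];
                entry.textLine = i;
                entry.charRange = NSMakeRange(offset, len);
                [wrapMap addObject:entry];
                offset += len;
            } while (offset < [line length]);
        }
    }

    // Touch -> text position: the display line comes from touch.y,
    // the text line and character offset come from the map.
    - (void)textPositionForTouch:(CGPoint)point
                        textLine:(NSUInteger *)outLine
                       charIndex:(NSUInteger *)outChar {
        if ([wrapMap count] == 0) return;
        NSUInteger displayLine = (NSUInteger)(point.y / lineHeight);
        if (displayLine >= [wrapMap count]) displayLine = [wrapMap count] - 1;
        WrapEntry *entry = [wrapMap objectAtIndex:displayLine];
        *outLine = entry.textLine;
        *outChar = entry.charRange.location + (NSUInteger)(point.x / glyphWidth);
    })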
You should first spend some quality time understanding how this problem was solved on Mac:
http://developer.apple.com/documentation/Cocoa/Conceptual/TextArchitecture/Tasks/AssembleSysByHand.html
http://developer.apple.com/documentation/Cocoa/Conceptual/TextLayout/TextLayout.html
In particular, you should become familiar with line fragment generation, which is the problem you're trying to solve for word-wrap, and you should understand glyph generation in order to do rich text well. You don't need to implement all of the flexibility of NSTypesetter or NSLayoutManager of course, but the text system on Mac is incredibly powerful, and you should learn from its example. Implementing something similar to NSAttributedString for iPhone may be valuable to you and improve your performance.
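Even a very small run-based structure goes a long way toward that; a stripped-down sketch of the idea (SyntaxRun is an illustrative name, not an existing class):

    // One colored run within a single line of text: the regex highlighter can
    // emit an array of these per line, and drawRect: walks them when drawing.
    // (A very stripped-down stand-in for NSAttributedString.)
    @interface SyntaxRun : NSObject {
        NSRange range;     // character range within the line
        UIColor *color;    // text color for that range
    }
    @property (nonatomic, assign) NSRange range;
    @property (nonatomic, retain) UIColor *color;
    @end

    @implementation SyntaxRun
    @synthesize range, color;
    - (void)dealloc { [color release]; [super dealloc]; }
    @end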
Unless you're moving things around a lot (for which this would best work in a UIScrollView), I don't think you should need to use CALayers here. That seems overkill for the problem, and may actually adversely impact the optimizations already provided by UIScrollView. If you're seeing performance problems, first make sure you're not doing redundant calculations within your drawRect:.
Check out TTStyledText in the Three20 library. Not sure how well it matches your goals, but it might serve you as an example. (The library itself is a bit bloated, but it is a wonderful source to look at.)
It may be best to draw your text within a CATiledLayer hosted within your UIView, in order to get around the 1024x1024 texture size limit (which appears to actually be 2048x2048 on all existing devices). For an example of text drawing within a CALayer, I'd refer you to the CPTextLayer class within the Core Plot framework. This layer (which inherits from the CPLayer class within that same framework) does cross-platform (Mac and iPhone) text rendering in a CALayer. You might be able to extend it to work as a CATiledLayer for longer text blocks.
One thing to be aware of is that platform-specific drawAtPoint: methods are used in this layer, instead of the CGContextShowTextAtPoint() function you are using. The reason for this is that CGContextShowTextAtPoint() only works on ASCII text, meaning that you can't do Unicode text rendering with it. There is an example of using CGContextShowTextAtPoint() to draw text, within a #define'd out portion at the bottom of the renderAsVectorInContext: method.
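For what it's worth, hosting the text in a CATiledLayer is mostly a matter of overriding layerClass and then drawing only the lines that fall inside each tile. A minimal sketch, assuming the line-per-array-element model from the question (lines, lineHeight and font are assumed ivars; the tile size is a guess):

    #import <QuartzCore/QuartzCore.h>

    // Backing the view with a CATiledLayer means only the visible tiles are
    // ever rendered, which sidesteps the 1024x1024 (or 2048x2048) size limit.
    + (Class)layerClass {
        return [CATiledLayer class];
    }

    - (void)configureTiles {
        CATiledLayer *tiled = (CATiledLayer *)self.layer;
        tiled.tileSize = CGSizeMake(self.bounds.size.width, 256.0f); // full-width, short tiles
    }

    // With a CATiledLayer backing, drawRect: is called once per tile, so only
    // draw the lines that intersect the tile's rect.
    - (void)drawRect:(CGRect)rect {
        NSUInteger firstLine = (NSUInteger)floorf(CGRectGetMinY(rect) / lineHeight);
        NSUInteger lastLine  = (NSUInteger)ceilf(CGRectGetMaxY(rect) / lineHeight);
        for (NSUInteger i = firstLine; i < lastLine && i < [lines count]; i++) {
            [[lines objectAtIndex:i] drawAtPoint:CGPointMake(0.0f, i * lineHeight)
                                        withFont:font];
        }
    }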
I have an infinite scrollview in which I add images as the user scrolls. Those images have varying heights and I've been trying to come up with the best way of finding a clear space inside the current bounds of the view that would allow me to add the image view.
Is there anything built-in that would make my search more efficient?
The problem is that I want the images to be sort of glued to one another, with no blank space between them. Searching through 320x480 pixels tends to be quite a CPU hog. Does anyone know an efficient method to do it?
Thanks!
It seems that you're scrolling this thing vertically (you mentioned varying image heights).
There's nothing built into UIScrollView that will do this for you. You'll have to track your UIImageView subviews manually. You could simply maintain the maximum y coordinate occupied by your images as you add them.
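A minimal sketch of that bookkeeping (scrollView and maxY are assumed ivars; scaling each image to the full scroll view width is just one possible policy):

    // Append an image to the bottom of the stack and grow the content size.
    // maxY tracks the bottom edge of the last image added.
    - (void)appendImage:(UIImage *)image {
        CGFloat width = scrollView.bounds.size.width;
        CGFloat height = image.size.height * (width / image.size.width); // scale to full width
        UIImageView *imageView = [[UIImageView alloc] initWithImage:image];
        imageView.frame = CGRectMake(0.0f, maxY, width, height);
        [scrollView addSubview:imageView];
        [imageView release];

        maxY += height;
        scrollView.contentSize = CGSizeMake(width, maxY);
    }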
You might consider using UITableView instead, and implementing a very customized tableView:heightForRowAtIndexPath: in your delegate. You would probably need to do something special with the actual cells as well, but it would seem to make your job a little easier.
Also, for what it's worth, you might find a way to avoid making your solution infinite. Be careful about your memory footprint! iOS will shut your app off if things get out of hand.
UPDATE
Ok, now I understand what you're going for. I had imagined that you were presenting photographs or something rectangular like that. If I were trying to cover a scroll view with UILeafs (wah wah) I would take a statistical approach. I would 'paint' leaves randomly along horizontal/vertical strips as the user scrolls. Perhaps that's what you're doing already? Whatever you're doing I think it looks good.
Now I guess that the reason you're asking is to prevent the little random white spots that show through - is that right? If I may suggest a different solution: try to color the background of your scroll view to something earthy that looks good if it shows through here and there.
Also, it occurred to me that you could use a larger template image -- something that already has a nice distribution of leaves -- with transparency all along the outside outline of the leaves but nowhere else. Then you could tile these, but with overlap, so that the alpha just shows through to the leaves below. You could have a number of these images so that it doesn't look obvious. This would take away all of the uncertainty and make your retiling very efficient.
Also, consider learning about Core Animation (CALayer in particular) and Core Graphics/Quartz 2D. Proper use of these frameworks will probably yield great improvements in rendering speed.
UPDATE 2:
If your images are all 150px wide, then split your scrollview into columns and add/remove based on those (as discussed in chat).
Good luck!
In HTML, with CSS, one can "float" an image so that it is surrounded by the text, which wraps around it.
How can I do it WITHOUT UIWebView?
I can't use UIWebView, because:
UIWebView has a slight lag after you load HTML; its content doesn't show immediately, unlike UILabel or UIImageView.
The image is loaded asynchronously, so I need a placeholder there with a UIActivityIndicatorView spinning.
Is it possible to do the float with just a UIImageView and a UILabel?
Thanks
The "easy" way is probably to use two labels and an image. If your text/images are static, then you can split the text at the right place beforehand. If not, you'll have to do some text size measurements; see UIStringDrawing.h. Figuring out where all the line breaks should be is non-trivial.
The "hard" way is to create your own subclass of UIView with text and image properties and have it do all the string/image layout and drawing. It's not actually that much harder than figuring out where the line breaks should be dynamically.
The "harder" way is to use Core Text (iOS 3.2+); the Columnar Layout example in the Core Text Programming Guide should work, but also note that CTFramesetterCreateFrame() can take an arbitrary CGPath, so you can just pass it a path of the available area. This might actually be slightly easier than using UIKit string-sizing and trying to find out where all the line breaks are (you just need to write a lot of Core Text boilerplate instead).
I'm writing an iPhone app that needs to render internationalized (i18n) text that includes diacriticals (tildes, accents, etc.). Apple provides the UIFont class, which can be used to get a given typeface/font-size combination's leading, ascent, descent, and so on.
The problem is that this information does not accurately reflect diacriticals. Specifically, diacriticals on capital letters often exceed the lineHeight (the UIFont property formerly known as leading).
The same problem exists throughout the frameworks; for example, NSString's sizeWithFont: has the same issue.
I need to know the true bounding box for text as I am using OpenGL which does not have text drawing support and therefore requires rendering text to a texture.
Currently, I'm using a hack to get around this issue. Is there a better way?
It's not possible with NSString, since it just returns a size. You can try Core Text, which seems to support returning bounding boxes, but that's a bit overkill.
It's a difficult problem when Unicode supports things like è̀̀̀ (see also: zalgo); things can render above the top of a line so you can't just draw the characters. Some text-drawing APIs make you specify the baseline and give you the bounding box so you can get both ascenders and descenders, but UIKit doesn't do this.
Then, you have crazy cursive fonts with the occasional huge ascender. It's unclear how to handle these either.
The lazy way is to render to a texture with margins at the top and bottom (0.5 lines? 1 line?) and not care too much about the extra overhead of some transparent pixels.
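A sketch of that lazy approach, assuming half a line of padding on each side (the factor is a guess to be tuned per font, not a measured value):

    // Render text into a UIImage with half a line of padding above and below,
    // so diacriticals and descenders that exceed the reported line height are
    // not clipped, then upload the result as an OpenGL texture.
    - (UIImage *)textureImageForText:(NSString *)text font:(UIFont *)font {
        CGSize textSize = [text sizeWithFont:font];
        CGFloat margin = font.lineHeight * 0.5f;                 // guessed padding
        CGSize imageSize = CGSizeMake(textSize.width, textSize.height + 2.0f * margin);

        UIGraphicsBeginImageContext(imageSize);
        [[UIColor whiteColor] set];
        [text drawAtPoint:CGPointMake(0.0f, margin) withFont:font];
        UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return image;
    }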
I haven't looked at CoreText much, but it doesn't look particularly promising.
Is there a way to draw on the iPhone screen (on a UIView in a UIWindow) outside of that view's drawRect: method? If so, how do I obtain the graphics context?
The graphics guide mentions the class NSGraphicsContext, but the relevant chapter seems like a blind copy/paste from the Mac OS X docs, and there's no such class in the iPhone SDK.
EDIT: I'm trying to modify the contents of the view in a touch event handler - highlight the touched visual element. In Windows, I'd use GetDC()/ReleaseDC() rather than the full cycle of InvalidateRect()/WM_PAINT. Trying to do the same here. Arranging the active (touchable) elements as subviews is a huge performance penalty, since there are ~hundred of them.
No. Drawing is drawRect:'s (or a CALayer's) job. Even if you could draw elsewhere, it would be a code smell (as it is on the Mac). Any other code should simply update your model state, then set yourself as needing display.
When you need display, moving the display code elsewhere isn't going to make it go any faster. When you don't need display (and so haven't been set as needing display), the display code won't run if it's in drawRect:.
I'm trying to modify the contents of the view in a touch event handler - highlight the touched visual element. In Windows, I'd use [Windows code]. … Arranging the active (touchable) elements as subviews is a huge performance penalty, since there are ~hundred of them.
It sounds like Core Animation might be more appropriate for this.
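For example, if each touchable element already has (or can be given) its own CALayer, highlighting it in the touch handler is just a property change that Core Animation composites for you, with no drawRect: pass at all. A minimal sketch (the yellow highlight and the layer-per-element setup are assumptions):

    // Highlight whichever element layer was touched; Core Animation
    // recomposites the changed layer on its own, no setNeedsDisplay needed.
    - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
        // hitTest: expects the point in the superlayer's coordinate space,
        // which for a view-backed layer matches the superview's coordinates.
        CGPoint point = [[touches anyObject] locationInView:self.superview];
        CALayer *hit = [self.layer hitTest:point];
        if (hit && hit != self.layer) {
            hit.backgroundColor = [[UIColor yellowColor] CGColor];
        }
    }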
I don't think you'll be able to draw outside of drawRect:, but to get the current graphics context all you do is CGContextRef c = UIGraphicsGetCurrentContext();. Hope that helps.
I want to read the string displayed on the screen with a finger touch, meaning that as my finger moves over the text displayed on screen, the text below the finger should get highlighted. Is there any way to do this using UITextView or any other class? Also, I want to play the sound associated with that word from a sound file in which the sentence is already recorded.
If anyone knows how, kindly reply.
Thanks in advance.
So that's two questions.
To know where your finger is pointing in the text you need to know where the UITextView has laid out the characters. This is not something UITextView is documented to support, so next up is drawing the text yourself. This Stack Overflow question on embedding a custom font has a few links and even some code that will do the trick. What you need to add is keeping track of character advances so you know exactly where each character is rendered (or you could even use other Core Graphics functions to get exact bounding boxes for each character).
Clearly, once you know what character has been tapped you also know the word it is part of. You would probably get the best quality if the sentence is recorded both in full and as single words, so that you can play back either when needed. This other Stack Overflow question on playing sounds loaded from the internet has a few links and snippets that relate to loading and playing back audio.
I am doing this in an iPhone text editor I am writing. For multiple reasons, I have subclassed a UIView to draw the text (instead of a UITextView) and I had to implement something like this to provide a movable cursor like a text view has. What I did was use a fixed-width font (Courier to be exact), then I took the x-coordinate of the touch and divided it by the width of the character and rounded the answer down. This gives you a character index.
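In code, the conversion is just two divisions plus a bit of clamping (charWidth, lineHeight and lines are assumed ivars cached for the fixed-width font):

    // Convert a touch location into (line, column) for a fixed-width font.
    - (void)characterPositionForTouch:(CGPoint)point
                                 line:(NSUInteger *)outLine
                               column:(NSUInteger *)outColumn {
        if ([lines count] == 0) return;
        *outLine   = (NSUInteger)(point.y / lineHeight);
        *outColumn = (NSUInteger)(point.x / charWidth);

        // Clamp so a touch below the text or past the end of a line
        // doesn't index past the array or the string.
        if (*outLine >= [lines count]) *outLine = [lines count] - 1;
        NSString *lineText = [lines objectAtIndex:*outLine];
        if (*outColumn > [lineText length]) *outColumn = [lineText length];
    }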
However, if it is possible, find a way to do this without drawing the text yourself (if the text is user entered or really anything other than hardcoded) because Apple provides a lot of functionality in its UITextView class that is a pain to replicate: editing, cursor, word wrap, scrolling, etc.
If you can't get touch events from a subclassed UITextView, it might be possible to put a transparent view/layer over the top of the text and get touch events from that; then you would only have to figure out a way to turn this on and off for editing.
Kyle
I came up with a handy trick to do this. In drawRect:, mimic the text that the UITextView will draw, but don't actually draw the text. Set the font to 17-point Helvetica, and the only semi-tricky part is handling word wrapping. You already have the text to be displayed, and you can get the size of each word in that default font by calling sizeWithFont:. Then save the rectangles for each word, and when the user touches the view, find which of the stored rectangles contains the touched point. To test and calibrate the geometry (line spacing, etc.), draw the text yourself in a different color. When you get it so that you can only see one of the font colors, you've got it perfect.
I can post my code for this if someone wants it.
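In the meantime, here is a rough sketch of that idea (the 17-point Helvetica, the 8-point insets, and the wordRects array are assumptions standing in for whatever UITextView actually uses, not the poster's code):

    // Lay the words out exactly as the text view would (same font, same
    // wrapping) but only record each word's rect; don't draw anything.
    // wordRects is an NSMutableArray of NSValue-wrapped CGRects.
    - (void)computeWordRectsForText:(NSString *)text width:(CGFloat)width {
        UIFont *font = [UIFont fontWithName:@"Helvetica" size:17.0f];
        CGFloat spaceWidth = [@" " sizeWithFont:font].width;
        CGPoint cursor = CGPointMake(8.0f, 8.0f);           // guessed text view insets

        [wordRects removeAllObjects];
        for (NSString *word in [text componentsSeparatedByString:@" "]) {
            CGSize size = [word sizeWithFont:font];
            if (cursor.x + size.width > width - 8.0f) {      // wrap to the next line
                cursor.x = 8.0f;
                cursor.y += font.lineHeight;
            }
            [wordRects addObject:
                [NSValue valueWithCGRect:CGRectMake(cursor.x, cursor.y,
                                                    size.width, size.height)]];
            cursor.x += size.width + spaceWidth;
        }
    }

    // Hit test: which word (if any) contains the touched point.
    - (NSInteger)wordIndexForTouch:(CGPoint)point {
        for (NSUInteger i = 0; i < [wordRects count]; i++) {
            if (CGRectContainsPoint([[wordRects objectAtIndex:i] CGRectValue], point))
                return (NSInteger)i;
        }
        return -1;
    }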