How to draw text in NPAPI plugin CGContextRef?

I want to draw text in the NPAPI plugin's CGContextRef, but I don't know how to make it work.
I get the CGContextRef as follows:
int16_t NPP_HandleEvent(NPP instance, void* event)
{
    int16_t iRet = 0;
    PluginObject *obj = (PluginObject *)instance->pdata;
    NPCocoaEvent *cocoaEvent = (NPCocoaEvent *)event;
    switch (cocoaEvent->type)
    {
        case NPCocoaEventDrawRect:
            obj->m_NPContext = CGContextRetain(cocoaEvent->data.draw.context);
            DrawSealOnContext(obj->m_NPContext, obj->m_pstSealAPInfo);
            iRet = 1;
            break;
        default:
            iRet = 0;
            break;
    }
    return iRet;
}
In the function DrawSealOnContext, I want to draw an ellipse and some text in the window. The function is as follows:
int DrawSealOnContext(CGContextRef contextRef, PSEAL_APPEARANCE_INFO pstSealAPInfo)
{
    // draw ellipse
    CGRect rect = {2, 25, 146, 100};
    CGContextSetLineWidth(contextRef, 4.0);
    CGContextSetStrokeColorWithColor(contextRef, [NSColor blueColor].CGColor);
    CGContextBeginPath(contextRef);
    CGContextAddEllipseInRect(contextRef, rect);
    CGContextDrawPath(contextRef, kCGPathStroke);
    // draw text
    CGContextSetFillColorWithColor(contextRef, [NSColor blueColor].CGColor);
    NSMutableParagraphStyle *paragraphStyle = [[[NSParagraphStyle defaultParagraphStyle] mutableCopy] autorelease];
    [paragraphStyle setAlignment:NSCenterTextAlignment];
    NSDictionary *attributes = [NSDictionary dictionaryWithObject:paragraphStyle
                                                            forKey:NSParagraphStyleAttributeName];
    NSString *mystr = @"hello\n and";
    NSRect strFrame = { { 0, 0 }, { 150, 150 } };
    [mystr drawInRect:strFrame withAttributes:attributes];
    return 0;
}
I can get the ellipse on the screen, but the text doesn't show up.
I also tried this:
// draw text
CGContextShowText(contextRef, "hello", 5);
It doesn't work either.
What's wrong with my program? I'd really appreciate your answers.

I have solved this problem with the help of this article:
http://www.cocoabuilder.com/archive/cocoa/240527-setting-cgcontextref-to-the-current-context.html
The problem is that the call
[mystr drawInRect:strFrame withAttributes:attributes];
draws text into the current NSGraphicsContext, but the current context was not the CGContextRef contextRef I received. So, before using the drawInRect: method, I need to make contextRef the current context.
For how to set a CGContextRef as the current context, please refer to the website I pasted above.
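For reference, a minimal sketch of that step (an assumption on my part, not code from the linked article; graphicsContextWithCGContext:flipped: requires OS X 10.10, older SDKs use graphicsContextWithGraphicsPort:flipped:, and whether the context is flipped depends on your drawing model):
// Make the plugin's CGContextRef the current NSGraphicsContext before
// using Cocoa string drawing (contextRef, mystr, strFrame and attributes
// are the variables from DrawSealOnContext above).
NSGraphicsContext *nsContext =
    [NSGraphicsContext graphicsContextWithCGContext:contextRef flipped:YES];
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:nsContext];
[mystr drawInRect:strFrame withAttributes:attributes];
[NSGraphicsContext restoreGraphicsState];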
Hope this answer can solve your problem!

Related

Is there a way to create rounded NSLabel in MacOS?

Currently, in my SpriteKit app, I'm adding a new SKLabelNode into another node which has the shape of an arc, like:
let labelNode = SKLabelNode(text: text)
labelNode.color = .black
labelNode.horizontalAlignmentMode = .center
labelNode.verticalAlignmentMode = .center
labelNode.position = CGPoint(x: node.frame.midX, y: node.frame.midY)
labelNode.fontColor = .red
addChild(labelNode)
but after adding it, it looks like this:
So is there a way to make those labels inside the arcs "rounded", i.e. shaped like the arcs in which they are located?
After a few hours of searching, I really could not find anything helpful, so I hope you can show me the path.
P.S. I'm asking for macOS, not for iOS.
Apple published Objective-C sample code in 2013:
Drawing Along a Path Using Core Text with Cocoa
CoreTextArcCocoa demonstrates how you can use Core Text to draw text along an arc in a Cocoa application. It also illustrates how you can use the Cocoa font panel to receive font settings that can be used by Core Text to select the font used for drawing.
You might be able to adapt that to your needs. Remember that you can call Objective-C code from Swift by including the header files in your bridging header.
However, if your label text doesn't change at runtime (or only has a few possible values), it's probably easier to just use an image editor to create images of curved text.
In case the link to the sample project breaks, here's the relevant part of the header file “APLCoreTextArcView.h”:
#import <Cocoa/Cocoa.h>
@interface APLCoreTextArcView : NSView
@property (nonatomic) NSFont *font;
@property (nonatomic) NSString *string;
@property (readonly, nonatomic) NSAttributedString *attributedString;
@property (nonatomic) CGFloat radius;
@property (nonatomic) BOOL showsGlyphBounds;
@property (nonatomic) BOOL showsLineMetrics;
@property (nonatomic) BOOL dimsSubstitutedGlyphs;
@end
And here's the relevant part of the implementation file “APLCoreTextArcView.m”:
#import "APLCoreTextArcView.h"
#import <AssertMacros.h>
#define ARCVIEW_DEFAULT_FONT_NAME @"Didot"
#define ARCVIEW_DEFAULT_FONT_SIZE 64.0
#define ARCVIEW_DEFAULT_RADIUS 150.0
typedef struct GlyphArcInfo {
CGFloat width;
CGFloat angle; // in radians
} GlyphArcInfo;
static void PrepareGlyphArcInfo(CTLineRef line, CFIndex glyphCount, GlyphArcInfo *glyphArcInfo)
{
NSArray *runArray = (__bridge NSArray *)CTLineGetGlyphRuns(line);
// Examine each run in the line, updating glyphOffset to track how far along the run is in terms of glyphCount.
CFIndex glyphOffset = 0;
for (id run in runArray) {
CFIndex runGlyphCount = CTRunGetGlyphCount((__bridge CTRunRef)run);
// Ask for the width of each glyph in turn.
CFIndex runGlyphIndex = 0;
for (; runGlyphIndex < runGlyphCount; runGlyphIndex++) {
glyphArcInfo[runGlyphIndex + glyphOffset].width = CTRunGetTypographicBounds((__bridge CTRunRef)run, CFRangeMake(runGlyphIndex, 1), NULL, NULL, NULL);
}
glyphOffset += runGlyphCount;
}
double lineLength = CTLineGetTypographicBounds(line, NULL, NULL, NULL);
CGFloat prevHalfWidth = glyphArcInfo[0].width / 2.0;
glyphArcInfo[0].angle = (prevHalfWidth / lineLength) * M_PI;
// Divide the arc into slices such that each one covers the distance from one glyph's center to the next.
CFIndex lineGlyphIndex = 1;
for (; lineGlyphIndex < glyphCount; lineGlyphIndex++) {
CGFloat halfWidth = glyphArcInfo[lineGlyphIndex].width / 2.0;
CGFloat prevCenterToCenter = prevHalfWidth + halfWidth;
glyphArcInfo[lineGlyphIndex].angle = (prevCenterToCenter / lineLength) * M_PI;
prevHalfWidth = halfWidth;
}
}
@implementation APLCoreTextArcView
- (id)initWithFrame:(NSRect)frame {
self = [super initWithFrame:frame];
if (self) {
_font = [NSFont fontWithName:ARCVIEW_DEFAULT_FONT_NAME size:ARCVIEW_DEFAULT_FONT_SIZE];
_string = @"Curvaceous Type";
_radius = ARCVIEW_DEFAULT_RADIUS;
_showsGlyphBounds = NO;
_showsLineMetrics = NO;
_dimsSubstitutedGlyphs = NO;
}
return self;
}
- (void)drawRect:(NSRect)rect {
// Don't draw if we don't have a font or string
if (self.font == NULL || self.string == NULL)
return;
// Initialize the text matrix to a known value
CGContextRef context = (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort];
CGContextSetTextMatrix(context, CGAffineTransformIdentity);
// Draw a white background
[[NSColor whiteColor] set];
NSRectFill(rect);
CTLineRef line = CTLineCreateWithAttributedString((__bridge CFAttributedStringRef)self.attributedString);
assert(line != NULL);
CFIndex glyphCount = CTLineGetGlyphCount(line);
if (glyphCount == 0) {
CFRelease(line);
return;
}
GlyphArcInfo * glyphArcInfo = (GlyphArcInfo*)calloc(glyphCount, sizeof(GlyphArcInfo));
PrepareGlyphArcInfo(line, glyphCount, glyphArcInfo);
// Move the origin from the lower left of the view nearer to its center.
CGContextSaveGState(context);
CGContextTranslateCTM(context, CGRectGetMidX(NSRectToCGRect(rect)), CGRectGetMidY(NSRectToCGRect(rect)) - self.radius / 2.0);
// Stroke the arc in red for verification.
CGContextBeginPath(context);
CGContextAddArc(context, 0.0, 0.0, self.radius, M_PI, 0.0, 1);
CGContextSetRGBStrokeColor(context, 1.0, 0.0, 0.0, 1.0);
CGContextStrokePath(context);
// Rotate the context 90 degrees counterclockwise.
CGContextRotateCTM(context, M_PI_2);
/*
Now for the actual drawing. The angle offset for each glyph relative to the previous glyph has already been calculated; with that information in hand, draw those glyphs overstruck and centered over one another, making sure to rotate the context after each glyph so the glyphs are spread along a semicircular path.
*/
CGPoint textPosition = CGPointMake(0.0, self.radius);
CGContextSetTextPosition(context, textPosition.x, textPosition.y);
CFArrayRef runArray = CTLineGetGlyphRuns(line);
CFIndex runCount = CFArrayGetCount(runArray);
CFIndex glyphOffset = 0;
CFIndex runIndex = 0;
for (; runIndex < runCount; runIndex++) {
CTRunRef run = (CTRunRef)CFArrayGetValueAtIndex(runArray, runIndex);
CFIndex runGlyphCount = CTRunGetGlyphCount(run);
Boolean drawSubstitutedGlyphsManually = false;
CTFontRef runFont = CFDictionaryGetValue(CTRunGetAttributes(run), kCTFontAttributeName);
/*
Determine if we need to draw substituted glyphs manually. Do so if the runFont is not the same as the overall font.
*/
if (self.dimsSubstitutedGlyphs && ![self.font isEqual:(__bridge NSFont *)runFont]) {
drawSubstitutedGlyphsManually = true;
}
CFIndex runGlyphIndex = 0;
for (; runGlyphIndex < runGlyphCount; runGlyphIndex++) {
CFRange glyphRange = CFRangeMake(runGlyphIndex, 1);
CGContextRotateCTM(context, -(glyphArcInfo[runGlyphIndex + glyphOffset].angle));
// Center this glyph by moving left by half its width.
CGFloat glyphWidth = glyphArcInfo[runGlyphIndex + glyphOffset].width;
CGFloat halfGlyphWidth = glyphWidth / 2.0;
CGPoint positionForThisGlyph = CGPointMake(textPosition.x - halfGlyphWidth, textPosition.y);
// Glyphs are positioned relative to the text position for the line, so offset text position leftwards by this glyph's width in preparation for the next glyph.
textPosition.x -= glyphWidth;
CGAffineTransform textMatrix = CTRunGetTextMatrix(run);
textMatrix.tx = positionForThisGlyph.x;
textMatrix.ty = positionForThisGlyph.y;
CGContextSetTextMatrix(context, textMatrix);
if (!drawSubstitutedGlyphsManually) {
CTRunDraw(run, context, glyphRange);
}
else {
/*
We need to draw the glyphs manually in this case because we are effectively applying a graphics operation by setting the context fill color. Normally we would use kCTForegroundColorAttributeName, but this does not apply as we don't know the ranges for the colors in advance, and we wanted demonstrate how to manually draw.
*/
CGFontRef cgFont = CTFontCopyGraphicsFont(runFont, NULL);
CGGlyph glyph;
CGPoint position;
CTRunGetGlyphs(run, glyphRange, &glyph);
CTRunGetPositions(run, glyphRange, &position);
CGContextSetFont(context, cgFont);
CGContextSetFontSize(context, CTFontGetSize(runFont));
CGContextSetRGBFillColor(context, 0.25, 0.25, 0.25, 0.5);
CGContextShowGlyphsAtPositions(context, &glyph, &position, 1);
CFRelease(cgFont);
}
// Draw the glyph bounds
if ((self.showsGlyphBounds) != 0) {
CGRect glyphBounds = CTRunGetImageBounds(run, context, glyphRange);
CGContextSetRGBStrokeColor(context, 0.0, 0.0, 1.0, 1.0);
CGContextStrokeRect(context, glyphBounds);
}
// Draw the bounding boxes defined by the line metrics
if ((self.showsLineMetrics) != 0) {
CGRect lineMetrics;
CGFloat ascent, descent;
CTRunGetTypographicBounds(run, glyphRange, &ascent, &descent, NULL);
// The glyph is centered around the y-axis
lineMetrics.origin.x = -halfGlyphWidth;
lineMetrics.origin.y = positionForThisGlyph.y - descent;
lineMetrics.size.width = glyphWidth;
lineMetrics.size.height = ascent + descent;
CGContextSetRGBStrokeColor(context, 0.0, 1.0, 0.0, 1.0);
CGContextStrokeRect(context, lineMetrics);
}
}
glyphOffset += runGlyphCount;
}
CGContextRestoreGState(context);
free(glyphArcInfo);
CFRelease(line);
}
- (NSAttributedString *)attributedString {
// Create an attributed string with the current font and string.
assert(self.font != nil);
assert(self.string != nil);
// Create our attributes.
NSDictionary *attributes = @{NSFontAttributeName: self.font, NSLigatureAttributeName: @0};
assert(attributes != nil);
// Create the attributed string.
NSAttributedString *attrString = [[NSAttributedString alloc] initWithString:self.string attributes:attributes];
return attrString;
}
@end
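A hypothetical usage sketch (not part of Apple's sample; the window outlet and the numbers are assumptions):
// Create the arc view, hand it the label text, and add it to a window.
APLCoreTextArcView *arcView =
    [[APLCoreTextArcView alloc] initWithFrame:NSMakeRect(0.0, 0.0, 400.0, 300.0)];
arcView.string = @"Rounded label text";
arcView.radius = 120.0;
[self.window.contentView addSubview:arcView];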

How to erase part of an image as the user touches it

My big picture goal is to have a grey field over an image, and then as the user rubs on that grey field, it reveals the image underneath. Basically like a lottery scratcher card. I've done a bunch of searching through the docs, as well as this site, but can't find the solution.
The following is just a proof of concept to test "erasing" an image based on where the user touches, but it isn't working. :(
I have a UIView that detects touches, then sends the coords of the move to the UIViewController that clips the image in a UIImageView by doing the following:
- (void) moveDetectedFrom:(CGPoint) from to:(CGPoint) to
{
UIImage* image = bkgdImageView.image;
CGSize s = image.size;
UIGraphicsBeginImageContext(s);
CGContextRef g = UIGraphicsGetCurrentContext();
CGContextMoveToPoint(g, from.x, from.y);
CGContextAddLineToPoint(g, to.x, to.y);
CGContextClosePath(g);
CGContextAddRect(g, CGRectMake(0, 0, s.width, s.height));
CGContextEOClip(g);
[image drawAtPoint:CGPointZero];
bkgdImageView.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[bkgdImageView setNeedsDisplay];
}
The problem is that the touches are sent to this method just fine, but nothing happens to the original image.
Am I doing the clip path incorrectly? Or?
Not really sure...so any help you may have would be greatly appreciated.
Thanks in advance,
Joel
I tried to do the same thing a while ago using just Core Graphics, and it can be done, but trust me, the effect is not as smooth and soft as the user expects it to be. So, since I knew how to work with OpenCV (the Open Computer Vision Library), and since it is written in C, I knew I could use it on the iPhone.
Doing what you want to do with OpenCV is extremely easy.
First you need a couple of functions to convert a UIImage to an IplImage, which is the type used in OpenCV to represent images of all kinds, and the other way around.
+ (IplImage *)CreateIplImageFromUIImage:(UIImage *)image {
CGImageRef imageRef = image.CGImage;
//This is the function you use to convert a UIImage -> IplImage
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
IplImage *iplimage = cvCreateImage(cvSize(image.size.width, image.size.height), IPL_DEPTH_8U, 4);
CGContextRef contextRef = CGBitmapContextCreate(iplimage->imageData, iplimage->width, iplimage->height,
iplimage->depth, iplimage->widthStep,
colorSpace, kCGImageAlphaPremultipliedLast|kCGBitmapByteOrderDefault);
CGContextDrawImage(contextRef, CGRectMake(0, 0, image.size.width, image.size.height), imageRef);
CGContextRelease(contextRef);
CGColorSpaceRelease(colorSpace);
return iplimage;}
+ (UIImage *)UIImageFromIplImage:(IplImage *)image {
//Convert a IplImage -> UIImage
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
NSData * data = [[NSData alloc] initWithBytes:image->imageData length:image->imageSize];
//NSData *data = [NSData dataWithBytes:image->imageData length:image->imageSize];
CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)data);
CGImageRef imageRef = CGImageCreate(image->width, image->height,
image->depth, image->depth * image->nChannels, image->widthStep,
colorSpace, kCGImageAlphaPremultipliedLast|kCGBitmapByteOrderDefault,
provider, NULL, false, kCGRenderingIntentDefault);
UIImage *ret = [[UIImage alloc] initWithCGImage:imageRef];
CGImageRelease(imageRef);
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpace);
[data release];
return ret;}
Now that you have both the basic functions you need you can do whatever you want with your IplImage:
this is what you want:
+(UIImage *)erasePointinUIImage:(IplImage *)image :(CGPoint)point :(int)r{
//r is the radius of the erasing
int a = point.x;
int b = point.y;
int position;
int minX,minY,maxX,maxY;
minX = (a-r>0)?a-r:0;
minY = (b-r>0)?b-r:0;
maxX = ((a+r) < (image->width))? a+r : (image->width);
maxY = ((b+r) < (image->height))? b+r : (image->height);
for (int i = minX; i < maxX ; i++)
{
for(int j=minY; j<maxY;j++)
{
position = ((j-b)*(j-b))+((i-a)*(i-a));
if (position <= r*r)
{
uchar* ptr =(uchar*)(image->imageData) + (j*image->widthStep + i*image->nChannels);
ptr[0] = ptr[1] = ptr[2] = ptr[3] = 0; // clear all four channels of this pixel
}
}
}
UIImage * res = [self UIImageFromIplImage:image];
return res;}
Sorry for the formatting.
If you want to know how to port OpenCV to the iPhone, Yoshimasa Niwa's article covers it.
If you want to check out an app currently working with OpenCV on the App Store, go get Flags&Faces.
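To tie this back to the question's touch handler, a hypothetical wiring (scratchImage is an assumed IplImage* ivar, the conversion and erase methods are assumed to live in the same class, and the radius of 20 is arbitrary):
// Keep one IplImage alive, erase at each touch point, and push the
// converted result back into the image view.
- (void)moveDetectedFrom:(CGPoint)from to:(CGPoint)to
{
    if (scratchImage == NULL) {
        scratchImage = [[self class] CreateIplImageFromUIImage:bkgdImageView.image];
    }
    bkgdImageView.image = [[self class] erasePointinUIImage:scratchImage :to :20];
}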
You usually want to draw into the current graphics context inside of a drawRect: method, not just any old method. Also, a clip region only affects what is drawn to the current graphics context. But instead of going into why this approach isn't working, I'd suggest doing it differently.
What I would do is have two views. One with the image, and one with the gray color that is made transparent. This allows the graphics hardware to cache the image, instead of trying to redraw the image every time you modify the gray fill.
The gray one would be a UIView subclass with a CGBitmapContext that you would draw into, making the pixels that the user touches clear (see the sketch below).
There are probably several ways to do this. I'm just suggesting one way above.
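To make that suggestion concrete, here is a rough sketch of such a gray overlay view (the ScratchView name, the gray color, the 20-point radius, a 1x scale, and ARC are all assumptions):
// Gray overlay backed by a bitmap context; touches punch clear holes in it
// so the image view underneath shows through.
@interface ScratchView : UIView
@end

@implementation ScratchView {
    CGContextRef _bitmap;
}

- (instancetype)initWithFrame:(CGRect)frame {
    if ((self = [super initWithFrame:frame])) {
        self.opaque = NO;
        CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
        _bitmap = CGBitmapContextCreate(NULL, (size_t)frame.size.width, (size_t)frame.size.height,
                                        8, 0, space, kCGImageAlphaPremultipliedLast);
        CGColorSpaceRelease(space);
        // Start fully covered in gray.
        CGContextSetRGBFillColor(_bitmap, 0.5, 0.5, 0.5, 1.0);
        CGContextFillRect(_bitmap, CGRectMake(0, 0, frame.size.width, frame.size.height));
    }
    return self;
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint p = [[touches anyObject] locationInView:self];
    // The bitmap's origin is bottom-left, UIKit's is top-left, so flip y.
    CGFloat flippedY = self.bounds.size.height - p.y;
    CGContextSetBlendMode(_bitmap, kCGBlendModeClear);
    CGContextFillEllipseInRect(_bitmap, CGRectMake(p.x - 20, flippedY - 20, 40, 40));
    CGContextSetBlendMode(_bitmap, kCGBlendModeNormal);
    [self setNeedsDisplay];
}

- (void)drawRect:(CGRect)rect {
    CGImageRef img = CGBitmapContextCreateImage(_bitmap);
    // Wrap in a UIImage so UIKit handles the vertical flip when drawing.
    [[UIImage imageWithCGImage:img] drawInRect:self.bounds];
    CGImageRelease(img);
}

- (void)dealloc {
    CGContextRelease(_bitmap);
}

@end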

Pixel-Position of Cursor in UITextView

Is there a way of getting the position (CGPoint) of the cursor (the blinking bar) in a UITextView, preferably relative to its content? I don't mean the location as an NSRange. I need something like:
- (CGPoint)cursorPosition;
It should be a non-private API way.
Requires iOS 5
CGPoint cursorPosition = [textview caretRectForPosition:textview.selectedTextRange.start].origin;
Remember to check that selectedTextRange is not nil before calling this method. You should also use selectedTextRange.empty to check that it is the cursor position and not the beginning of a text range. So:
if (textview.selectedTextRange.empty) {
// get cursor position and do stuff ...
}
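Putting the nil and empty checks together, a small Objective-C sketch:
// Guard against a nil selection, then take the caret rect's origin.
UITextRange *selection = textview.selectedTextRange;
if (selection != nil && selection.empty) {
    CGPoint cursorPosition = [textview caretRectForPosition:selection.start].origin;
    NSLog(@"cursor at %@", NSStringFromCGPoint(cursorPosition));
}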
SWIFT 4 version:
if let cursorPosition = textView.selectedTextRange?.start {
// cursorPosition is a UITextPosition object describing position in the text (text-wise description)
let caretPositionRectangle: CGRect = textView.caretRect(for: cursorPosition)
// now use either the whole rectangle, or its origin (caretPositionRectangle.origin)
}
textView.selectedTextRange?.start returns a text position of the cursor, and we then simply use textView.caretRect(for:) to get its pixel position in textView.
It's painful, but you can use the UIStringDrawing additions to NSString to do it. Here's the general algorithm I used:
CGPoint origin = textView.frame.origin;
NSString* head = [textView.text substringToIndex:textView.selectedRange.location];
CGSize initialSize = [head sizeWithFont:textView.font constrainedToSize:textView.contentSize];
NSUInteger startOfLine = [head length];
while (startOfLine > 0) {
/*
* 1. Adjust startOfLine to the beginning of the first word before startOfLine
* 2. Check if drawing the substring of head up to startOfLine causes a reduction in height compared to initialSize.
* 3. If so, then you've identified the start of the line containing the cursor, otherwise keep going.
*/
}
NSString* tail = [head substringFromIndex:startOfLine];
CGSize lineSize = [tail sizeWithFont:textView.font forWidth:textView.contentSize.width lineBreakMode:UILineBreakModeWordWrap];
CGPoint cursor = origin;
cursor.x += lineSize.width;
cursor.y += initialSize.height - lineSize.height;
return cursor;
}
I used [NSCharacterSet whitespaceAndNewlineCharacterSet] to find word boundaries.
This can also be done (presumably more efficiently) using CTFrameSetter in CoreText, but that is not available in iPhone OS 3.1.3, so if you're targeting the iPhone you will need to stick to UIStringDrawing.
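For step 1 of that loop, the backwards search might look roughly like this (a sketch, not the exact code from the answer):
// Pull startOfLine back to the whitespace boundary just before it.
NSCharacterSet *separators = [NSCharacterSet whitespaceAndNewlineCharacterSet];
NSRange boundary = [head rangeOfCharacterFromSet:separators
                                         options:NSBackwardsSearch
                                           range:NSMakeRange(0, startOfLine - 1)];
startOfLine = (boundary.location == NSNotFound) ? 0 : boundary.location;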
Yes — as in there's a method to get the cursor position. Just use
CGRect caretRect = [textView rectContainingCaretSelection];
return caretRect.origin;
No — as in this method is private. There's no public API for this.
I'm trying to mark selected text, i.e. I receive an NSRange and want to draw a yellow rectangle behind that text. Is there another way?
I can advise you some trick:
NSRange selectedRange = myTextView.selectedRange;
[myTextView select:self];
UIMenuController* sharedMenu = [UIMenuController sharedMenuController];
CGRect menuFrame = [sharedMenu menuFrame];
[sharedMenu setMenuVisible:NO];
myTextView.selectedRange = selectedRange;
Using this code, you can now get the position of the cut/copy/paste menu and place your yellow rectangle there.
I did not find a way to get the menu position without forcing it to appear via a simulated select operation.
Regards
Assayag
Take a screenshot of the UITextView, then search the pixel data for colors that match the color of the cursor.
-(CGPoint)positionOfCursorForTextView:(UITextView *)textView {
//get CGImage from textView
NSUInteger Width = (NSUInteger)textView.bounds.size.width;
NSUInteger Height = (NSUInteger)textView.bounds.size.height;
UIGraphicsBeginImageContext(textView.bounds.size);
[textView.layer renderInContext:UIGraphicsGetCurrentContext()];
//retain the CGImage so the CGImageRelease calls below are balanced
CGImageRef textImageRef = CGImageRetain(UIGraphicsGetImageFromCurrentImageContext().CGImage);
UIGraphicsEndImageContext();
//get raw pixel data
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
uint8_t * textBuffer = (uint8_t *)malloc(Width * Height * 4);
NSUInteger bytesPerRow = 4 * Width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(textBuffer, Width, Height,
bitsPerComponent, bytesPerRow, colorSpace,
kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, Width, Height), textImageRef);
CGContextRelease(context);
//search
for(int y = 0; y < Height; y++)
{
for(int x = 0; x < Width * 4; x += 4)
{
int red = textBuffer[y * 4 * (NSInteger)Width + x];
int green = textBuffer[y * 4 * (NSInteger)Width + x + 1];
int blue = textBuffer[y * 4 * (NSInteger)Width + x + 2];
int alpha = textBuffer[y * 4 * (NSInteger)Width + x + 3];
if(COLOR IS CLOSE TO COLOR OF CURSOR)
{
free(textBuffer);
CGImageRelease(textImageRef);
return CGPointMake(x/4, y);
}
}
}
free(textBuffer);
CGImageRelease(textImageRef);
return CGPointZero;
}

iPhone: Draw rotated text?

I want to draw some text in a view, rotated 90°. I'm pretty new to iPhone development, and poking around the web reveals a number of different solutions. I've tried a few and usually end up with my text getting clipped.
What's going on here? I am drawing in a fairly small space (a table view cell), but there has to be a "right" way to do this… right?
Edit: Here are a couple of examples. I'm trying to display the text "12345" along the black bar at the left.
First attempt, from RJShearman on the Apple Discussions
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSelectFont (context, "Helvetica-Bold", 16.0, kCGEncodingMacRoman);
CGContextSetTextDrawingMode (context, kCGTextFill);
CGContextSetRGBFillColor(context, 1.0, 0.0, 0.0, 1.0);
CGContextSetTextMatrix (context, CGAffineTransformRotate(CGAffineTransformScale(CGAffineTransformIdentity, 1.f, -1.f ), M_PI/2));
CGContextShowTextAtPoint (context, 21.0, 55.0, [_cell.number cStringUsingEncoding:NSUTF8StringEncoding], [_cell.number length]);
CGContextRestoreGState(context);
Second attempt, from zgombosi on iPhone Dev SDK. Identical results (the font was slightly smaller here, so there's less clipping).
CGContextRef context = UIGraphicsGetCurrentContext();
CGPoint point = CGPointMake(6.0, 50.0);
CGContextSaveGState(context);
CGContextTranslateCTM(context, point.x, point.y);
CGAffineTransform textTransform = CGAffineTransformMakeRotation(-1.57);
CGContextConcatCTM(context, textTransform);
CGContextTranslateCTM(context, -point.x, -point.y);
[[UIColor redColor] set];
[_cell.number drawAtPoint:point withFont:[UIFont fontWithName:@"Helvetica-Bold" size:14.0]];
CGContextRestoreGState(context);
Attempt two: the clipping is almost identical (screenshot: http://dev.deeptechinc.com/sidney/share/iphonerotation/attempt2.png).
It turns out that my table cell was always initialized 44px high regardless of the row height, so all of my drawing was getting clipped 44px from the top of the cell.
To draw larger cells it was necessary to set the content view's autoresizingMask with
cellContentView.autoresizingMask = UIViewAutoresizingFlexibleHeight;
or
cellContentView.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight;
…and drawRect is called with the correct size. In a way, this makes sense, because UITableViewCell's initWithStyle:reuseIdentifier: makes no mention of the size of the cell, and only the table view actually knows how big each row is going to be, based on its own size and its delegate's response to tableView:heightForRowAtIndexPath:.
I read the Quartz 2D Programming Guide until the drawing model and functions started to make sense, and the code to draw my rotated text became simple and obvious:
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSaveGState(context);
CGContextRotateCTM(context, -(M_PI/2));
[_cell.number drawAtPoint:CGPointMake(-57.0, 5.5) withFont:[UIFont fontWithName:@"Helvetica-Bold" size:16.0]];
CGContextRestoreGState(context);
Thanks for the tips, it looks like I'm all set.
Use:
label.transform = CGAffineTransformMakeRotation(-90.0f * M_PI / 180.0f);
where label is a UILabel object.
Here's a tip. I presume you're doing this drawing in drawRect. Why don't you draw a frame around drawRect to see how big the rect is and if that is why you get clipping.
An alternative is to put your text in a UILabel, and then rotate that 90 degrees when you make your cells in cellForRowAtIndexPath.
You know about the UITableViewDelegate method heightForRowAtIndexPath right?
Here's a simple tutorial on various graphics level methods. Presuming you know how big your text is you should be able to size your table view row size appropriately.
Also, I'd check to make sure that the bounds after any transform actually meet your expectations. (Either use a debugger or log statement to verify this).
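A rough sketch of that UILabel alternative (the frame numbers, tag, reuse identifier, and row height are placeholders):
// Rotate a UILabel when the cell is first created, and report a matching row height.
- (UITableViewCell *)tableView:(UITableView *)tableView
         cellForRowAtIndexPath:(NSIndexPath *)indexPath
{
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:@"RotatedCell"];
    if (cell == nil) {
        cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault
                                      reuseIdentifier:@"RotatedCell"];
        UILabel *label = [[UILabel alloc] initWithFrame:CGRectMake(5.0, 5.0, 90.0, 20.0)];
        label.tag = 100;
        // Rotate 90 degrees counterclockwise; the label rotates about its center.
        label.transform = CGAffineTransformMakeRotation(-M_PI_2);
        [cell.contentView addSubview:label];
    }
    ((UILabel *)[cell.contentView viewWithTag:100]).text = @"12345";
    return cell;
}

- (CGFloat)tableView:(UITableView *)tableView heightForRowAtIndexPath:(NSIndexPath *)indexPath
{
    return 100.0; // tall enough for the rotated label's width
}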
To add to what @Sidnicious said, and what I collected throughout Stack Overflow, I want to give a usage example: here is my code to draw a ruler on the left side of the screen, with the numbers rotated:
RulerView : UIView
// simple testing for iPhones (check for device descriptions to get all iPhones + iPads)
- (float)getPPI
{
switch ((int)[UIScreen mainScreen].bounds.size.height) {
case 568: // iPhone 5*
case 667: // iPhone 6
return 163.0;
break;
case 736: // iPhone 6+
return 154.0;
break;
default:
return -1.0;
break;
}
}
- (void)drawRect:(CGRect)rect
{
[[UIColor blackColor] setFill];
float ppi = [self getPPI];
if (ppi == -1.0) // unable to draw, maybe an ipad.
return;
float linesDist = ppi/25.4; // ppi/mm per inch (regular size iPad would be 132.0, iPhone6+ 154.0)
float linesWidthShort = 15.0;
float linesWidthMid = 20.0;
float linesWidthLong = 25.0;
for (float i = 0, c = 0; i <= self.bounds.size.height; i = i + linesDist, c = c +1.0)
{
bool isMid = (int)c % 5 == 0;
bool isLong = (int)c % 10 == 0;
float linesWidth = isLong ? linesWidthLong : isMid ? linesWidthMid : linesWidthShort;
UIRectFillUsingBlendMode( (CGRect){0, i, linesWidth, .5} , kCGBlendModeNormal);
/* FONT: Numbers without rotation (yes, is short)
if (isLong && i > 0 && (int)c % 10 == 0)
[[NSString stringWithFormat:@"%d", (int)(c/10)] drawAtPoint:(CGPoint){linesWidthLong +2, i -5} withAttributes:@{
NSFontAttributeName: [UIFont systemFontOfSize:9],
NSBaselineOffsetAttributeName: [NSNumber numberWithFloat:1.0]
}];
*/
// FONT: Numbers with rotation (yes, requires more effort)
if (isLong && i > 0 && (int)c % 10 == 0)
{
NSString *str = [NSString stringWithFormat:@"%d", (int)(c/10)];
NSDictionary *attrs = @{
NSFontAttributeName: [UIFont systemFontOfSize:9],
NSBaselineOffsetAttributeName: [NSNumber numberWithFloat:0.0]
};
CGSize textSize = [str sizeWithAttributes:attrs];
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSaveGState(context);
CGContextRotateCTM(context, +(M_PI/2));
[str drawAtPoint:(CGPoint){i - (textSize.width/2), -(linesWidthLong + textSize.height +2)} withAttributes:attrs];
CGContextRestoreGState(context);
}
}
}
After I discovered that I needed to add the following to the top of my file, I liked Matt's approach. Very simple.
#define degreesToRadian(x) (M_PI * (x) / 180.0)
mahboudz's suggestion will probably be your path of least resistance. You can rotate the UILabel 90deg with this: [label setTransform:CGAffineTransformMakeRotation(DegreesToRadians(-90.0f))]; You'll just have to calculate your cell height based upon the label width. -Matt – Matt Long Nov 10 at 0:09

How to display text using Quartz on the iPhone?

I've been trying to display text using a Quartz context, but no matter what I've tried I simply haven't had luck getting the text to display (I'm able to display all sorts of other Quartz objects, though). Does anybody know what I might be doing wrong?
example:
-(void)drawRect:(CGRect)rect
{
// Drawing code
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSelectFont(context, "Arial", 24, kCGEncodingFontSpecific);
CGContextSetTextPosition(context,80,80);
CGContextShowText(context, "hello", 6);
//not even this works
CGContextShowTextAtPoint(context, 1,1, "hello", 6);
}
OK, I got it. First off, change your encoding mode to kCGEncodingMacRoman. Secondly, insert this line underneath it:
CGContextSetTextMatrix(context, CGAffineTransformMake(1, 0, 0, -1, 0, 0));
This sets the conversion matrix for text so that it is drawn correctly. If you don't put that line in, your text will be upside down and back to front. No idea why this wasn't the default. Finally, make sure you've set the right fill colour. It's an easy mistake to make if you forget to change from the backdrop colour to the text colour and end up with white-on-white text.
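Putting those pieces together, the question's drawRect: would look roughly like this (the black fill color is just an example):
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSelectFont(context, "Arial", 24, kCGEncodingMacRoman);
    // Flip the text matrix so glyphs are not drawn upside down in UIKit's flipped context.
    CGContextSetTextMatrix(context, CGAffineTransformMake(1, 0, 0, -1, 0, 0));
    CGContextSetTextDrawingMode(context, kCGTextFill);
    CGContextSetRGBFillColor(context, 0.0, 0.0, 0.0, 1.0); // a visible text color
    CGContextShowTextAtPoint(context, 80.0, 80.0, "hello", 5);
}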
Here is a fragment of code that I'm using.
UIColor *mainTextColor = [UIColor whiteColor];
[mainTextColor set];
drawTextLjust(@"Sample Text", 8, 50, 185, 18, 16);
And:
static void drawTextLjust(NSString* text, CGFloat y, CGFloat left, CGFloat right,
int maxFontSize, int minFontSize) {
CGPoint point = CGPointMake(left, y);
UIFont *font = [UIFont systemFontOfSize:maxFontSize];
[text drawAtPoint:point forWidth:right - left withFont:font
minFontSize:minFontSize actualFontSize:NULL
lineBreakMode:UILineBreakModeTailTruncation
baselineAdjustment:UIBaselineAdjustmentAlignBaselines];
}