I made a "Circle" view with this drawRect
- (void)drawRect:(CGRect)rect
{
    // Get the current Quartz drawing context for this view
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    // 'color' is presumably a UIColor ivar or property on this view
    CGContextSetFillColorWithColor(ctx, color.CGColor);
    // Add an ellipse inscribed in the rect to the current path, then fill it
    CGContextAddEllipseInRect(ctx, rect);
    CGContextFillPath(ctx);
}
When I try to scale the view up using CGAffineTransformMakeScale(2.0, 2.0), the result is blurry and pixelated on the edges. However, the programming guide says that Quartz uses vector-based commands to draw views, and that they would continue to look good when using affine transforms:
The Quartz drawing system uses a vector-based drawing model. Compared to a raster-based drawing model, in which drawing commands operate on individual pixels, drawing commands in Quartz are specified using a fixed-scale drawing space, known as the user coordinate space. iPhone OS then maps the coordinates in this drawing space onto the actual pixels of the device. The advantage of this model is that graphics drawn using vector commands continue to look good when scaled up or down using an affine transform.
Or am I not using vector-based commands? If not, how would I do that to draw a circle?
Thanks.
Applying a transform to a view does not cause it to be redrawn. All that it does is scale the view's layer, which is a bitmap texture stored on the GPU. This will lead to blurry graphics.
When drawing a view on the iPhone, -drawRect: is called to supply the content for the view's layer. That content is then cached as a texture on the GPU.
What they are referring to in the guide is the application of a transform during -drawRect:, when the vector graphics are being drawn. If you use a transform there (through CGContextConcatCTM() or the like), the circle will be drawn smoothly at the larger scale. However, you will also need to resize your view to reflect this larger shape. I recommend using a scale property on your custom view subclass that you can set to a different scale factor and that will handle resizing the view and redrawing its contents sharply.
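A rough sketch of that idea (the scale property, the _baseSize ivar, and the redraw logic here are illustrative assumptions, not part of the original view):

// Hypothetical scale setter on the Circle view: resizing the frame and
// calling setNeedsDisplay makes -drawRect: re-render the vectors sharply.
- (void)setScale:(CGFloat)scale
{
    _scale = scale;
    CGRect frame = self.frame;
    frame.size = CGSizeMake(_baseSize.width * scale, _baseSize.height * scale);
    self.frame = frame;
    [self setNeedsDisplay]; // redraw the circle at the new size
}

- (void)drawRect:(CGRect)rect
{
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(ctx, color.CGColor);
    // self.bounds already reflects the new scale, so the ellipse is
    // re-rasterized from vectors instead of stretching a cached bitmap.
    CGContextAddEllipseInRect(ctx, self.bounds);
    CGContextFillPath(ctx);
}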
It depends on when you're applying the scaling transformation, I expect. If you draw first and then apply the scale transform, it will look pixelated (because it's been scaled after drawing). If you apply the scale before running the drawing routine, I'd expect it to come out sharp.
So yes, you're using vector-based commands to achieve this; I suspect it's an ordering issue. When are you making the transformation and drawing?
You could try calling setNeedsDisplay after the transform; I'm not sure whether this will work, but it's worth a shot.
I would like to make an app where you can paint like in the GLPaint sample code, but also zoom in to paint in more detail within your painting.
However, I get the feeling that OpenGL ES 1.0, which is used in the GLPaint app, is pretty difficult to learn and could be a bit of overkill for my needs.
If I change the main view's frame with the setFrame method to zoom via a gesture recognizer, the already painted lines get erased with every change of the frame's size.
So I tried to realize it with another idea: in the touchesMoved method I add UIImageViews with an image of the brush at "many" positions. It is slower than the GLPaint app and a bit of a memory-management mess, but I don't see another way to go.
Any suggestions? Should I learn OpenGL ES 1.0 or 2.0, or try to realize the last idea?
You can certainly achieve what you are doing, however it will require some effort.
Usually zooming is quite straightforward, as most OpenGL scenes do not rely on the accumulation buffer the way the GLPaint sample code does.
If you just zoom the view in GLPaint, your new painting will be drawn at some adjusted scale over your original drawing, which is almost certainly not what you want.
A work-around is instead of drawing directly to your presenting screen buffer, you would first render to a texture buffer, then render said texture buffer on a quad (or equivalent). That way the quad scene can be cleared and re-rendered every frame refresh (at any scale you choose) while your paint buffer retains its accumulation buffer.
This has been tested and works.
I am quite sure the image view method will be overkill after drawing for a few minutes... You can do all the zooming quite nicely with OpenGL, and I suggest you do that. The best practice would be to create a canvas as large as possible, so that when you zoom in you will not lose any resolution.
About zooming: Do not try to resize the GL frame (or any frame, for that matter), because even if you manage to do that successfully you will lose resolution. You should use standard matrices to translate and scale the scene, or just play around with glOrtho (set its values to the rect you are currently seeing). Once you have that part, there are sadly two more things to do that require a bit of math: first, you will have to compute the new touch positions in the OpenGL scene, since locationInView will not know about your zooming and translating; second, you probably need to scale the brush as well (make it smaller when the scene is bigger, so you can draw details).
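As a small sketch of that touch-conversion math (contentScale, contentOffset, and baseBrushSize are illustrative names, assuming you track the zoom and pan yourself):

// Convert a touch in the GL view into canvas coordinates by undoing the
// pan and zoom applied to the scene. All property names are assumptions.
- (CGPoint)canvasPointForTouch:(UITouch *)touch
{
    CGPoint p = [touch locationInView:self];
    p.x = (p.x - self.contentOffset.x) / self.contentScale;
    p.y = (p.y - self.contentOffset.y) / self.contentScale;
    return p;
}

// Keep the brush a consistent on-screen size: shrink it in canvas units
// as the user zooms in.
- (CGFloat)brushSizeInCanvasUnits
{
    return self.baseBrushSize / self.contentScale;
}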
About the canvas: I do suggest you draw to an FBO rather than your main render buffer and present the texture to your main render scene. Note that the FBO will have a texture attached whose dimensions are a power of two (create 2048x2048, or 4096x4096 for newer devices), but you will probably only use part of it to keep the same aspect ratio as the screen (glViewport should do the job), so you will have to compute the texture coordinates. Overall, the drawing mechanism doesn't change much.
So to sum this up, imagine you have a canvas (FBO) to which you apply the brush of certain size and position on touches events, then you use that canvas as a texture and draw it on your main GL view.
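A minimal sketch of that canvas FBO setup under OpenGL ES 1.1 (using the OES framebuffer extension available on these devices; the 2048x2048 size and the variable names are just assumptions):

#import <OpenGLES/ES1/gl.h>
#import <OpenGLES/ES1/glext.h>

// In your GL view's setup code, with the ES 1.1 context already current:
// create a 2048x2048 texture-backed FBO to use as the paint canvas.
GLuint canvasFBO, canvasTexture;
glGenFramebuffersOES(1, &canvasFBO);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, canvasFBO);

glGenTextures(1, &canvasTexture);
glBindTexture(GL_TEXTURE_2D, canvasTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 2048, 2048, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);

// Attach the texture as the FBO's color buffer; brush strokes accumulate here.
glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES,
                          GL_TEXTURE_2D, canvasTexture, 0);

if (glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES) != GL_FRAMEBUFFER_COMPLETE_OES) {
    NSLog(@"Canvas FBO is incomplete");
}

// To paint: bind canvasFBO, set glViewport/glOrtho for the canvas, and draw brush quads.
// To present: bind the main framebuffer, clear it, and draw a quad textured with
// canvasTexture, scaled and translated by the current zoom/pan.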
I have to draw the following shape in a rectangle. What is the best way to do it? The blue areas are the background color. The black is a border and the red is the interior color. I want to paint the black and red only.
Thanks
It totally depends on how you would use the shape; whether they will move, how many of them will be displayed, whether they will be scaled while being displayed, etc.
In general, OpenGL ES is considered to be the fastest way of drawing on iOS devices. However, if you have only a small number of those shapes (say, fewer than 10~100?) and the rest of the application does not involve a lot of fast animation, Quartz 2D is usually enough to achieve, say, a 30/60 Hz drawing rate.
How you use Quartz 2D still matters a lot. If you need to redraw the shapes frequently, you would need to draw the shape onto CALayers, and rather than redrawing the shapes, you should move and transform the layers.
Comparing drawing as a bitmap versus a vector shape, I believe both would work fine for this kind of shape (especially because you would not redraw the shape often, but only work with the layer on which the image is already drawn). But if your shapes are scaled frequently, you should consider vector drawing for the sake of image quality.
To summarize, learn (if you don't already know) how to draw into a graphics context first (see Drawing and Printing Guide for iOS). You should be able to draw a simple vector shape or a bitmap image by overriding drawRect or similar methods inside a UIView object. Then if you need to animate those shapes, learn how to create a CALayer and draw on the layer (see Core Animation Programming Guide). Finally, if you need to create many duplicates of the shape on the screen, learn how to use CGLayer to replicate an image (see Quartz 2D Programming Guide).
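For the shape in this question, a minimal -drawRect: sketch along those lines might look like the following (the triangle path is only a placeholder for the actual shape):

- (void)drawRect:(CGRect)rect
{
    CGContextRef ctx = UIGraphicsGetCurrentContext();

    // Build the shape's outline as a path (placeholder polygon here).
    CGContextBeginPath(ctx);
    CGContextMoveToPoint(ctx, CGRectGetMidX(rect), CGRectGetMinY(rect));
    CGContextAddLineToPoint(ctx, CGRectGetMaxX(rect), CGRectGetMaxY(rect));
    CGContextAddLineToPoint(ctx, CGRectGetMinX(rect), CGRectGetMaxY(rect));
    CGContextClosePath(ctx);

    // Red interior, black border; only the path is painted, so the blue
    // background of the rectangle is left untouched.
    CGContextSetFillColorWithColor(ctx, [UIColor redColor].CGColor);
    CGContextSetStrokeColorWithColor(ctx, [UIColor blackColor].CGColor);
    CGContextSetLineWidth(ctx, 2.0f);
    CGContextDrawPath(ctx, kCGPathFillStroke);
}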
I am wondering which is the best way, in terms of speed and efficiency, to draw a frame around an image on iPhone, especially when I have to draw lots of these images:
1) Drawing the image and then the frame around
or
2) Drawing a rect, filling it with a color and then drawing the image within that rect leaving some offset pixel to mimic the frame
Does Quartz draw everything that it is told to or is it smart enough to draw only what is really visible?
My feeling is that the first approach is better because there is actually less drawing done. Is it really so?
Thanks
P.
Quartz drawing will only take place within the bounds of the view, if you are doing custom drawing in -drawRect:.
That said, I think that you will see the best performance if you simply create UIImageViews for each image, then use the borderWidth, borderColor, and possibly cornerRadius properties on your view's layer to set a border. For example:
#import <QuartzCore/QuartzCore.h> // needed to access the CALayer properties directly

imageView.layer.cornerRadius = 10.0f;
imageView.layer.borderWidth = 3.0f;
imageView.layer.borderColor = [[UIColor blackColor] CGColor];
will place a 3-pixel-wide black border around your view and give it a 10-pixel radius at the corners.
If performance is a problem, you should try to minimize the number of operations you perform on the graphics context, especially the ones that have no visible components.
In your particular case, I think you need to test both options on an iPhone (1st gen, if possible) and benchmark them. Maybe it's easier to just fill the whole rectangle rather than calculate which pixels are part of the frame and which aren't?
It depends on the graphics chip.
I need to draw a path along the shape of an image in a way that it is always matching its position on the image independent of the image scale. Think of this like the hybrid view of Google Maps where streets names and roads are superimposed on top of the aerial pictures.
Furthermore, this path will be drawn by the user's finger movements and I need to be able to retrieve the path keypoints on the image pixel coordinates. The user zooms-in in order to more precisely set the paths location.
I manage to somehow make it work using this approach:
-Create a custom UIView called CanvasView that handles touch interaction and delivers scaling, rotation, and translation values to either the UIImageView or the PathsView (see below), depending on a flag: deliverToImageOrPaths.
-Create a UIImageView holding the base image. This is set as a child of CanvasView.
-Create a custom UIView called PathsView that keeps track of the 2D path geometry and draws itself with a custom drawRect. This is set as a child of the UIImageView.
So the hierarchy is: CanvasView -> UIImageView -> PathsView
In this way, when deliverToImageOrPaths is YES, finger gestures transform both the UIImageView and its child PathsView. When deliverToImageOrPaths is NO, the gestures affect only the PathsView, altering its geometry. So far so good.
QUESTION:
The problem I have is that when scaling the base UIImageView (via its .transform property) the PathsView is scaled with aliasing artifacts. drawRect is still being called on the PathsView but I guess it's performing the drawing using the original buffer size and then interpolating.
How can I solve this issue? Are there better ways to implement these features?
PS: I tried changing the PathsView layer class to CATiledLayer with levelsOfDetailBias 4 and levelsOfDetail 4. It solves the aliasing problem to some extent, but it's unacceptably slow to render.
If you're holding onto the path values as they are being drawn, you can capture the % scaling of the underlying image, then reposition the path values and redraw them in the PathsView's drawRect method.
It would involve a bit of math (mostly along the lines of map projections) but instead of scaling bitmaps and getting artifacts, you would be scaling point distances and redrawing using vectors, which would make it smooth at any resolution (especially if you're connecting dots with beziers instead of pixels or lines).
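A minimal sketch of that vector redraw, assuming the PathsView keeps its points in image coordinates and knows the current scale (normalizedPoints and imageScale are illustrative names):

- (void)drawRect:(CGRect)rect
{
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSetStrokeColorWithColor(ctx, [UIColor redColor].CGColor);
    CGContextSetLineWidth(ctx, 2.0f);

    CGContextBeginPath(ctx);
    for (NSUInteger i = 0; i < self.normalizedPoints.count; i++) {
        CGPoint p = [[self.normalizedPoints objectAtIndex:i] CGPointValue];
        // Rescale the stored point into the current (zoomed) coordinate space,
        // so the stroke is re-rendered from vectors instead of a stretched bitmap.
        CGPoint scaled = CGPointMake(p.x * self.imageScale, p.y * self.imageScale);
        if (i == 0) {
            CGContextMoveToPoint(ctx, scaled.x, scaled.y);
        } else {
            CGContextAddLineToPoint(ctx, scaled.x, scaled.y);
        }
    }
    CGContextStrokePath(ctx);
}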
If you're laying paint along the paths as the user draws them and throwing away the point data, then the only solution is to do bitmap anti-aliasing which is essentially what CATiledLayer is doing for you and as you've discovered, can be slow.
If you are targeting iPhone OS 3.0+ you could use a CAShapeLayer for your path. You assign a CGPathRef to the layer and it will handle all the drawing and scaling for you. You just set the layer's transform.
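A rough sketch of that CAShapeLayer approach (imageView, userPath, and zoomScale are illustrative names, not from the question):

#import <QuartzCore/QuartzCore.h>

// Let a CAShapeLayer own the path so it handles the drawing for you.
CAShapeLayer *pathLayer = [CAShapeLayer layer];
pathLayer.frame = imageView.bounds;
pathLayer.strokeColor = [UIColor redColor].CGColor;
pathLayer.fillColor = NULL;            // stroke only
pathLayer.lineWidth = 2.0f;
pathLayer.path = userPath;             // CGPathRef built from the finger points
[imageView.layer addSublayer:pathLayer];

// When zooming, give the path layer the same transform you apply to the image:
[pathLayer setAffineTransform:CGAffineTransformMakeScale(zoomScale, zoomScale)];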
I'm having a performance issue.
I've created a UIView and overridden its drawRect method. In that method I draw a (large) image and, over it, a white square covering the entire screen with a polygon inside it, using CGContextEOFillPath. The result is a white screen with a portion of the image (defined by the polygon) showing through.
After that, I created a function to animate the transition of that polygon to another one. Besides the polygon animation, the image should also be scaled and moved to fit what is displayed on the screen. I did that with an NSTimer. The animation of the polygon consists of calculating the distance between each pair of corresponding vertices and moving them to a position according to the elapsed time. It worked just fine in the simulator, but got really choppy on the device.
Reading about performance here at Stack Overflow, I found the alternative of using beginAnimations and commitAnimations. I'm changing everything to use that approach for the image. But what can I do about the polygon? The polygon is drawn with CGContextMoveToPoint and CGContextAddLineToPoint, so I believe it can't be animated with beginAnimations. Am I correct? Is there a better approach?
The desired result is something like this comic reader app: http://www.comixology.com/iphoneapp (click on guided tour. at the middle of the video they show the "automatic masking" feature)
My suggestion would be to use a CAShapeLayer overlaid on your main image view, with the CAShapeLayer being the size of the view you want to mask and having a polygon path for a hole in the center of it. CAShapeLayers let you animate from one CGPathRef to another smoothly, as long as the two paths have the same number of control points. You will need to use a CABasicAnimation here to do that animating, rather than a UIView begin / commitAnimations block, but it's not too difficult.
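A small sketch of that path animation with CABasicAnimation (maskLayer, fromPath, and toPath are illustrative names; both paths must have the same number of control points):

CABasicAnimation *pathAnimation = [CABasicAnimation animationWithKeyPath:@"path"];
pathAnimation.fromValue = (id)fromPath;   // CGPathRef for the starting polygon
pathAnimation.toValue = (id)toPath;       // CGPathRef for the ending polygon
pathAnimation.duration = 0.5;
pathAnimation.timingFunction =
    [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseInEaseOut];

maskLayer.path = toPath;                  // set the model value so the change sticks
[maskLayer addAnimation:pathAnimation forKey:@"path"];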
Joe Ricioppo has a nice example of animating CAShapeLayer paths in his post here.
With Core Animation you can animate "animatable" (sic) properties. Apple's documentation enumerates animatable properties in Mac OS X:
http://url.akosma.com/55
In the case of the iPhone, the UIView documentation explicitly says "animatable" when a given property is, hum, animatable. The most powerful of these are (IMHO) UIView's "transform" property, which takes CGAffineTransform structs as inputs, or CALayer's "transform" property (which takes CATransform3D structs). Both are animatable and give you tremendous power to create any kind of transition you want.
Now, in your case, indeed, you can't animate the polygon in an "easy" way. My bet would be to build CGAffineTransforms that fit your needs (scale, translation) and apply them to a fixed, non-animated view created using your Quartz code.
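A rough sketch of that, using a UIView animation block around a transform change (imageContainerView and the scale/translation values are just placeholders):

[UIView beginAnimations:@"zoomToPanel" context:NULL];
[UIView setAnimationDuration:0.4];
[UIView setAnimationCurve:UIViewAnimationCurveEaseInOut];

// Combine a scale and a translation to frame the panel of interest.
CGAffineTransform t = CGAffineTransformConcat(
    CGAffineTransformMakeScale(1.5f, 1.5f),
    CGAffineTransformMakeTranslation(-40.0f, -60.0f));
imageContainerView.transform = t;

[UIView commitAnimations];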
I hope I'm clear enough :)