As Apple's documentation says, UIGraphicsGetCurrentContext can only be used inside the drawRect: method.
If you want to use it anywhere else, you have to push a context first.
Now I want to use UIGraphicsGetCurrentContext to get a context in my own method, called render.
How can I get a context to push?
Can I get the context in drawRect: and save it in a non-local variable,
then push it in another method and use UIGraphicsGetCurrentContext to retrieve it?
If so, why would I need to push it and get it again, when I could just use the non-local variable directly?
You can call setNeedsDisplay on the view that you need redrawn from the timer, and have its drawRect: call your render method (instead of calling render from the timer directly). This way you avoid unusual manipulations of your CGContext, and you prevent rendering when the rectangle has been scrolled off the screen.
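A minimal sketch of that pattern (the render method name is taken from the question; the view property name and the 30 fps interval are assumptions for illustration):

```objc
// Somewhere in the controller: the timer only marks the view dirty.
[NSTimer scheduledTimerWithTimeInterval:1.0 / 30.0
                                 target:self
                               selector:@selector(tick:)
                               userInfo:nil
                                repeats:YES];

- (void)tick:(NSTimer *)timer {
    [self.canvasView setNeedsDisplay];  // request a redraw; no drawing here
}

// In the UIView subclass: drawRect: runs with a valid current context.
- (void)drawRect:(CGRect)rect {
    [self render];  // UIGraphicsGetCurrentContext is safe to call in here
}
```

The system coalesces the setNeedsDisplay calls and skips drawing entirely for views that are offscreen.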
Edit:
You use UIGraphicsPushContext and UIGraphicsPopContext when you want a specific context to become the one on which UIKit operates. My initial understanding of what they do was incorrect (I'm relatively new to iOS development myself). Some operations (e.g. setting a color or other drawing parameters) operate implicitly on the current context. If you set up a context for, say, drawing into a bitmap, and you then want to use an operation that modifies the state of the current context (i.e. one that changes context parameters but does not take a specific context reference as a parameter), you push the bitmap context to make it current, perform the operation that implicitly references it, and pop the context right back.
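A short sketch of that push/pop idiom, assuming a bitmap context named bitmapContext has already been created elsewhere:

```objc
UIGraphicsPushContext(bitmapContext);   // make the bitmap context current
[[UIColor redColor] setFill];           // implicitly targets the current context
UIRectFill(CGRectMake(0, 0, 10, 10));   // also draws into the current context
UIGraphicsPopContext();                 // restore whatever was current before
```

The UIKit calls in the middle take no context parameter, which is exactly why the push/pop pair is needed.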
Special thanks go to rob mayoff for explaining this to me.
To get a CGContext you can draw into from a render call outside of drawRect:, you can allocate your own bitmap and create a context from it. Then you can draw into that context at any time.
If you want to display that context after drawing into it, you can use it to create an image, and then draw that image to a view during the UIView's drawRect:. Alternatively, you can assign that image to a view's CALayer's contents, which should be flushed to the display sometime during the UI run loop's processing.
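A sketch of that approach (the 256×256 size and the myView name are placeholders; error handling is omitted):

```objc
// Create an offscreen bitmap context you can draw into at any time.
CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
CGContextRef bitmap = CGBitmapContextCreate(NULL, 256, 256, 8, 0, space,
                                            kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(space);

// Draw into it from your render method, whenever you like.
CGContextSetRGBFillColor(bitmap, 0, 0, 1, 1);
CGContextFillRect(bitmap, CGRectMake(20, 20, 100, 100));

// Later, snapshot it as an image and hand that to a layer for display.
CGImageRef image = CGBitmapContextCreateImage(bitmap);
myView.layer.contents = (__bridge id)image;
CGImageRelease(image);
```

CGBitmapContextCreateImage takes a copy-on-write snapshot, so you can keep drawing into the bitmap afterwards.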
You can use CGContextSaveGState and CGContextRestoreGState to push and pop the state of your current graphics context. There is a simple example at this link.
Related
I am pretty sure this is a straightforward problem I must be confused about (I am a total newbie on the iPhone): I am trying to continuously draw shapes into a bitmap using the Core Graphics APIs, and to always show an up-to-date version of that bitmap on the screen, inside a UIScrollView so the user can pan, zoom, etc.
Using a UIView with drawRect: is no good, because this is an ongoing thing: I have a separate thread where all the drawing commands are issued, and I need them to be applied and accumulated in that same framebuffer (much like a web-page rendering engine). What I keep seeing is that all the CGContext APIs related to images seem to create a brand-new image whenever you want one rendered to the screen. That sounds bad, because it forces the creation of another copy of the image.
I guess I am looking for a way to create an off-screen bitmap, render into it as much as I want using Core Graphics, and, whenever I want to, blit that image to the screen, while still retaining the ability to keep drawing into it and blit it to the screen again later.
Thanks in advance!
You could use a CGLayer. Create it once with CGLayerCreateWithContext() and later retrieve the layer's context with CGLayerGetContext() whenever you need to draw into it.
To draw the layer into another graphics context, call CGContextDrawLayerAtPoint() or CGContextDrawLayerInRect().
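A sketch of the CGLayer approach (the 320×480 size and the destContext name are placeholders):

```objc
// Create the layer once, relative to the destination context it will be drawn into.
CGLayerRef strokes = CGLayerCreateWithContext(destContext,
                                              CGSizeMake(320, 480), NULL);

// Whenever new drawing commands arrive, accumulate them in the layer's context.
CGContextRef layerCtx = CGLayerGetContext(strokes);
CGContextSetRGBStrokeColor(layerCtx, 1, 0, 0, 1);
CGContextMoveToPoint(layerCtx, 10, 10);
CGContextAddLineToPoint(layerCtx, 200, 200);
CGContextStrokePath(layerCtx);

// In drawRect:, composite the accumulated layer into the view's context.
CGContextDrawLayerAtPoint(UIGraphicsGetCurrentContext(), CGPointZero, strokes);
```

The layer persists between draws, so the drawing accumulates without creating a new image each frame.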
I'm using Apple's sample application GLPaint as a basis for an OpenGL ES painting application, but I can't figure out how to implement undo functionality within it.
I don't want to take images of every stroke and store them. Is there any way of using different frame buffer objects to implement undo? Do you have other suggestions for better ways of doing this?
Use vertex buffer objects (VBOs) to render your content. On every new stroke, copy the last VBO into a least-recently-used (LRU) list. If the LRU list is full, delete its least recently used VBO. To undo the last stroke, take the most recently used VBO from the list and render it.
VBO:
http://developer.apple.com/iphone/library/documentation/3DDrawing/Conceptual/OpenGLES_ProgrammingGuide/TechniquesforWorkingwithVertexData/TechniquesforWorkingwithVertexData.html
LRU:
http://en.wikipedia.org/wiki/Cache_algorithms#Least_Recently_Used
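A rough sketch of the bookkeeping, assuming each completed stroke already lives in its own VBO (the stack depth, function names, and re-render step are illustrative; this is not from GLPaint itself):

```objc
// Fixed-depth undo stack of VBO names; oldest entries are evicted first.
enum { kMaxUndo = 16 };
static GLuint undoStack[kMaxUndo];
static int undoCount = 0;

static void pushStrokeVBO(GLuint vbo) {
    if (undoCount == kMaxUndo) {
        // Stack full: evict the least recently used (oldest) VBO.
        glDeleteBuffers(1, &undoStack[0]);
        memmove(undoStack, undoStack + 1, (kMaxUndo - 1) * sizeof(GLuint));
        undoCount--;
    }
    undoStack[undoCount++] = vbo;
}

static void undoStroke(void) {
    if (undoCount == 0) return;
    GLuint last = undoStack[--undoCount];   // drop the newest stroke's VBO
    glDeleteBuffers(1, &last);
    // Caller then re-renders the canvas from the remaining VBOs.
}
```

Memory use is bounded by kMaxUndo times the average stroke size, which is far smaller than keeping full framebuffer snapshots.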
I would recommend using NSUndoManager to store a list of the actual drawing actions undertaken by the user (draw line from here to here using this paintbrush, etc.). If stored as a list of x, y coordinates for vector drawing, along with all other metadata required to recreate that part of the drawing, you won't be using anywhere near as much memory as storing images, vertex buffer objects, or framebuffer objects.
In fact, if you store these drawing steps in a Core Data database, you can almost get undo / redo for free. See my answer here for more.
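A minimal sketch of the NSUndoManager approach (Stroke is a hypothetical model class holding the coordinates and brush metadata; the method names are illustrative):

```objc
// Register each completed stroke as an undoable action, storing only its metadata.
- (void)addStroke:(Stroke *)stroke {
    [self.strokes addObject:stroke];
    [[self.undoManager prepareWithInvocationTarget:self] removeStroke:stroke];
    [self.undoManager setActionName:@"Draw Stroke"];
    [self setNeedsDisplay];
}

- (void)removeStroke:(Stroke *)stroke {
    [self.strokes removeObject:stroke];
    // Registering the inverse action here gives you redo for free.
    [[self.undoManager prepareWithInvocationTarget:self] addStroke:stroke];
    [self setNeedsDisplay];
}
```

Calling [self.undoManager undo] then removes the last stroke, and the view redraws from the remaining model objects.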
To implement undo in a graphics application, you can use Core Data;
here is a detailed blog post, and read this one as well.
Alternatively, you can use NSUndoManager, a class provided by iOS.
Or you can save the current state of the screen area with:
CGContextRef current = UIGraphicsGetCurrentContext();
You can keep an array as a stack of screen-image objects: push a value onto the stack on each change, and pop a value off the stack on each undo action.
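A sketch of that snapshot stack (the canvasView name is a placeholder, and it is assumed to be a UIImageView; note from the answer above that full-screen images are far heavier in memory than storing drawing commands):

```objc
// Push: snapshot the canvas after each change.
NSMutableArray *undoStack = [NSMutableArray array];

UIGraphicsBeginImageContextWithOptions(canvasView.bounds.size, NO, 0);
[canvasView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[undoStack addObject:snapshot];

// Pop: on undo, discard the latest snapshot and restore the previous one.
if (undoStack.count > 1) {
    [undoStack removeLastObject];
    canvasView.image = [undoStack lastObject];
}
```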
Is there a way to draw on the iPhone screen (on a UIView in a UIWindow) outside of that view's drawRect() method? If so, how do I obtain the graphics context?
The graphics guide mentions class NSGraphicsContext, but the relevant chapter seems like a blind copy/paste from Mac OS X docs, and there's no such class in iPhone SDK.
EDIT: I'm trying to modify the contents of the view in a touch event handler, to highlight the touched visual element. On Windows, I'd use GetDC()/ReleaseDC() rather than the full InvalidateRect()/WM_PAINT cycle, and I'm trying to do the same here. Arranging the active (touchable) elements as subviews incurs a huge performance penalty, since there are about a hundred of them.
No. Drawing is drawRect:'s (or a CALayer's) job. Even if you could draw elsewhere, it would be a code smell (as it is on the Mac). Any other code should simply update your model state, then set yourself as needing display.
When you need display, moving the display code elsewhere isn't going to make it go any faster. When you don't need display (and so haven't been set as needing display), the display code won't run if it's in drawRect:.
I'm trying to modify the contents of the view in a touch event handler - highlight the touched visual element. In Windows, I'd use [Windows code]. … Arranging the active (touchable) elements as subviews is a huge performance penalty, since there are ~hundred of them.
It sounds like Core Animation might be more appropriate for this.
I don't think you'll be able to draw outside drawRect:, but to get the current graphics context, all you do is CGContextRef c = UIGraphicsGetCurrentContext();. Hope that helps.
I'm using an NSTimer to trigger drawRect: in the app's main view. drawRect: draws a few images, blending each with kCGBlendModeScreen (which is an absolute must). However, drawRect: takes just a tad longer than desired, so the timer doesn't always fire at the desired rate (more like half as often).
I've optimized the graphics used as much as I feel is possible, so I'm wondering if it's possible to "outsource" the drawing by creating a new view, and calling that view's drawRect from within a thread created inside of the timer's callback method. (In other words, thread the call to a new view's drawRect, such as [someNewView setNeedsDisplay] ...)
If so, how might I approach something like that, in code?
...
I'd use Core Animation, but I remember reading that it didn't support alpha blend modes. If I'm wrong, I'd be open to seeing some example code that allows animation of all the images in separate transformations (e.g. individual rotations for each image), while still keeping them able to blend using kCGBlendModeScreen that I'm currently implementing.
Thanks for any tips!
The answer is "no." You should never, ever draw (or do anything with UIKit) from a secondary thread. If you're experiencing performance issues, perform all of your computations on another thread ahead of drawing, so that the drawing itself takes a minimal amount of time.
I'm making a complex drawing in Quartz based on passed-in information. The only part I haven't been able to figure out is how to clear the lines, rectangles, etc. that I've already drawn. Basically, I want to erase the whole drawing and just draw it again from the new data.
If you set your UIView's clearsContextBeforeDrawing property to YES, then the system should take care of filling its area with its backgroundColor before calling its drawRect: method.
If you want to clear something that's not tracked as part of the current state, it's probably less expensive to just release your old context and start a new one.