So I've got a big CALayer in an NSView that is larger than my window (using Cocoa on Mac OS X).
Every time I use renderInContext: the only thing that renders is what's viewable in the window, and nothing outside it.
How can I create a bitmap of something outside my visible Rect and export it as a PNG?
I've looked at a bunch of Core Graphics methods but can't find the answer anywhere. :(
This turned out to be really easy:
myLayer.masksToBounds = NO;
This removes the clipping mask the window puts on the CALayer, so the whole layer can be rendered and exported even though you can't see all of it on screen.
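For reference, rendering the full layer into an offscreen bitmap context and writing it out as a PNG might look roughly like this sketch (myLayer is the layer from above; the output path is a placeholder, and it assumes ImageIO plus the UTType constants from CoreServices are available):
// Create an offscreen RGBA bitmap context the size of the layer.
CGRect bounds = myLayer.bounds;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(NULL,
                                         (size_t)bounds.size.width,
                                         (size_t)bounds.size.height,
                                         8, 0, colorSpace,
                                         kCGImageAlphaPremultipliedLast);
[myLayer renderInContext:ctx];   // renders the whole layer, not just the part visible in the window
CGImageRef image = CGBitmapContextCreateImage(ctx);

// Write the image out as a PNG.
CFURLRef url = (CFURLRef)[NSURL fileURLWithPath:@"/tmp/layer.png"];
CGImageDestinationRef dest = CGImageDestinationCreateWithURL(url, kUTTypePNG, 1, NULL);
CGImageDestinationAddImage(dest, image, NULL);
CGImageDestinationFinalize(dest);

CFRelease(dest);
CGImageRelease(image);
CGContextRelease(ctx);
CGColorSpaceRelease(colorSpace);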
I'm just getting started with Mono programming using GTK, and have been pleasantly surprised. However, I have come across a hurdle I haven't been able to get over yet.
In the app I'm working on, I am able to load a JPEG image into a Pixmap and draw it to my GUI's Drawing Area. That works fine. However, I want to be able to take a second JPEG image, make it partially transparent, and draw it over the first. So far, I haven't been able to figure out a decent way to do this.
Is it somehow possible to change the alpha value of an entire Pixmap before I draw it? I'm not sure where to go from here.
If you're using GtkDrawingArea you should be using Cairo to do the drawing itself. As an alternative to using cairo_paint() there is a cairo_paint_with_alpha() which lets you specify the opacity you wish to paint with.
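As an illustration, in the C Cairo API (the Mono/GTK# bindings expose equivalent calls), compositing one image over another at partial opacity could look like this sketch; the two surfaces and the 0.5 opacity value are placeholders:
#include <cairo.h>

/* Sketch: paint one surface at full opacity, then a second one at 50%. */
static void draw_composited(cairo_t *cr, cairo_surface_t *base, cairo_surface_t *overlay)
{
    cairo_set_source_surface(cr, base, 0, 0);
    cairo_paint(cr);                    /* first image at full opacity */

    cairo_set_source_surface(cr, overlay, 0, 0);
    cairo_paint_with_alpha(cr, 0.5);    /* second image at 50% opacity */
}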
I am pretty sure this is a straightforward problem I must be confused about (I am a total newbie on the iPhone): I am trying to continuously draw shapes using the CG APIs into a bitmap object, and to always show an up-to-date version of that bitmap on the screen, inside a UIScrollView object so the user can pan, zoom, etc.
Using UIView with drawRect: is no good because this is an ongoing thing -- I have a separate thread where all the drawing commands are issued, and I need them to be applied and accumulated on that same framebuffer (kind of like a web page rendering engine). What I keep seeing is that all the CGContext APIs related to images seem to CREATE a brand new image whenever you want it rendered to the screen. That sounds bad because it forces the creation of another copy of the image.
I guess I am looking for a way to create an off-screen bitmap, render to it as much as I want using Core Graphics, and whenever I want to, blit that image to the screen, while still retaining the ability to keep drawing to it and blit it to the screen again later on.
Thanks in advance!
You could use a CGLayer. Create it once with CGLayerCreateWithContext() and later retrieve the layer's context with CGLayerGetContext() whenever you need to draw into it.
To draw the layer into another graphics context, call CGContextDrawLayerAtPoint() or CGContextDrawLayerInRect().
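A minimal sketch of that flow, assuming the CGLayer is created lazily inside drawRect: (where a destination context is available) and kept in a CGLayerRef instance variable named offscreenLayer, might look like this:
- (void)drawRect:(CGRect)rect {
    CGContextRef viewContext = UIGraphicsGetCurrentContext();

    // Create the layer once, matched to the view's context.
    if (offscreenLayer == NULL) {
        offscreenLayer = CGLayerCreateWithContext(viewContext, self.bounds.size, NULL);
    }

    // Draw into the layer's own context; its contents accumulate between calls.
    CGContextRef layerContext = CGLayerGetContext(offscreenLayer);
    CGContextSetFillColorWithColor(layerContext, [UIColor redColor].CGColor);
    CGContextFillRect(layerContext, CGRectMake(10.0f, 10.0f, 50.0f, 50.0f));

    // Blit the accumulated layer into the view.
    CGContextDrawLayerAtPoint(viewContext, CGPointZero, offscreenLayer);
}
Because the layer lives in an instance variable and is only created once, whatever you have drawn into it so far is preserved each time it is drawn to the screen.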
Is there a way to draw on the iPhone screen (on a UIView in a UIWindow) outside of that view's drawRect() method? If so, how do I obtain the graphics context?
The graphics guide mentions class NSGraphicsContext, but the relevant chapter seems like a blind copy/paste from Mac OS X docs, and there's no such class in iPhone SDK.
EDIT: I'm trying to modify the contents of the view in a touch event handler - highlight the touched visual element. In Windows, I'd use GetDC()/ReleaseDC() rather than the full cycle of InvalidateRect()/WM_PAINT. I'm trying to do the same here. Arranging the active (touchable) elements as subviews is a huge performance penalty, since there are around a hundred of them.
No. Drawing is drawRect:'s (or a CALayer's) job. Even if you could draw elsewhere, it would be a code smell (as it is on the Mac). Any other code should simply update your model state, then set yourself as needing display.
When you need display, moving the display code elsewhere isn't going to make it go any faster. When you don't need display (and so haven't been set as needing display), the display code won't run if it's in drawRect:.
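As a sketch of that pattern in a touch handler (highlightedElementRect and the hit-testing helper are hypothetical names, not API):
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint point = [[touches anyObject] locationInView:self];
    // Update model state only; no drawing here.
    highlightedElementRect = [self rectOfElementAtPoint:point];   // hypothetical hit test
    // Mark just the touched area as dirty; drawRect: will be called for it.
    [self setNeedsDisplayInRect:highlightedElementRect];
}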
I'm trying to modify the contents of the view in a touch event handler - highlight the touched visual element. In Windows, I'd use [Windows code]. … Arranging the active (touchable) elements as subviews is a huge performance penalty, since there are ~hundred of them.
It sounds like Core Animation might be more appropriate for this.
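One possible reading of that suggestion, as a sketch rather than the poster's code: back each touchable element with its own CALayer instead of a subview, so highlighting becomes a simple property change that Core Animation composites for you (elementLayers and touchedIndex are hypothetical names):
CALayer *elementLayer = [elementLayers objectAtIndex:touchedIndex];   // hypothetical lookup
elementLayer.backgroundColor = [[UIColor yellowColor] CGColor];       // highlight; implicitly animated by Core Animation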
I don't think you'll be able to draw outside drawRect:, but to get the current graphics context all you do is
CGContextRef c = UIGraphicsGetCurrentContext();
Hope that helps.
I'm trying to place a red tint on all the screens of my iPhone application. I've experimented on a bitmap and found I get the effect I want by compositing a dark red color onto the screen image using Multiply (kCGBlendModeMultiply).
So the question is how to efficiently do this in real time on the iPhone?
One dumb way might be to grab a bitmap of the current screen, composite into the bitmap and then write the composited bitmap back to the screen. This seems like it would almost certainly be too slow. In addition, I need some way of knowing when part of the screen has been redrawn so I can update the tinting.
I can almost get the effect I want by putting a red, translucent, fullscreen UIView above everything. That tints everything red without further intervention on my part, but the effect is much "muddier" than the results from the multiply composite.
So do any wizards out there know of some mechanism I can use to automatically composite the red over the app in similar fashion to what the translucent red UIView does?
I managed to somewhat make this work but with some side-effects:
I set up a UIView on top of all my app views (attached to the window) which is not userInteractionEnabled and which is opaque.
This UIView has a custom drawRect: method which first fills the complete area with red, and then, after taking a "screenshot" of my window's view hierarchy, renders that image with
CGContextSetBlendMode(c, kCGBlendModeMultiply);
into the UIView.
To constantly keep this UIView in sync with the current state of the app's UIViews, I constantly produce "screenshots" and render them as fast as possible.
I set up an NSTimer which does this snapshotting/rendering at a defined frequency, and which is added to the NSRunLoop for the "tracking" mode.
RESULT: some really laggy response from the UI, with several fancy side effects, but still usable if you don't set the snapshotting/rendering frequency too high.
See screenshot here...
The result looks okay, but usability really suffers. I had a look at the OpenGL examples before trying this approach, but OpenGL is a whole lot of different (mostly C) code that sits very close to the hardware and gives you a real headache.
So the described approach is what I will shoot for with my next app. I hope Apple accepts it even though it degrades the UX during night-vision mode. If they simply made CALayer filter-backed, my problem would be solved a whole lot better and would perform nicely.
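Roughly, the overlay's drawRect: described above could look like the following sketch (appWindow is a placeholder reference to the UIWindow being tinted; note the snapshot would presumably also include the overlay view itself, and drawInRect:blendMode:alpha: stands in for setting kCGBlendModeMultiply on the context):
- (void)drawRect:(CGRect)rect {
    // Take a "screenshot" of the window's view hierarchy.
    UIGraphicsBeginImageContext(appWindow.bounds.size);
    [appWindow.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Fill the overlay with red, then multiply the snapshot over it.
    CGContextRef c = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(c, [[UIColor redColor] CGColor]);
    CGContextFillRect(c, self.bounds);
    [snapshot drawInRect:self.bounds blendMode:kCGBlendModeMultiply alpha:1.0f];
}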
You could try this: subclass UIView and add code to its -drawRect: method to draw the overlay. Then make your UIView subclass pose as UIView everywhere in your app with
class_poseAs([CustomUIView class], [UIView class]);
I'm writing an app that offloads some heavy drawing into an EAGLView, and it does some lightweight stuff in UIKit on top. It seems that the GL render during an orientation change is cached somewhere, and I can't figure out how to get access to it. Specifically:
After an orientation change, calling glClear(GL_COLOR_BUFFER_BIT) isn't enough to clear the GL display (drawing is cached somewhere?) How can I clear this cache?
After an orientation change, glReadPixels() can no longer access pixels drawn before the orientation change. How can I get access to where this is stored?
I'm not an expert, but I was reading Optimizing OpenGL ES for iPhone OS and it stated:
Avoid Mixing OpenGL ES with Native Platform Rendering
You may like to check it out to see if it helps.
After an orientation change, calling glClear(GL_COLOR_BUFFER_BIT) isn't enough to clear the GL display (drawing is cached somewhere?) How can I clear this cache?
You're drawing to an offscreen image. That image only becomes available for Core Animation compositing when you call -presentRenderbuffer:.
After an orientation change, glReadPixels() can no longer access pixels drawn before the orientation change. How can I get access to where this is stored?
I assume you're using the RetainedBacking option. Without that option, you can never read the contents of a previous frame, even outside of rotation. When you call -presentRenderbuffer, the contents of the offscreen image are shipped off to CA for compositing, and a new image takes its place. The contents of the new image are undefined.
Assuming you are using something derived from the EAGLView sample code and that you are using RetainedBacking, when the rotation occurs, your framebuffer is resized by deallocating and reallocating. Any of the existing contents will be lost when this occurs.
You can either:
1) save the contents yourself across the transition by calling glReadPixels()
2) never reallocate the framebuffer, and instead rotate the UIView (or CALayer) using the transform property. Doing so can cause quite a performance hit during compositing, but you'll get the rotation you're looking for without having to resize your framebuffer.
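For reference, the two pieces assumed above might look roughly like this (a sketch, not the sample code verbatim; glView is a placeholder for the GL-backed view):
// Retained backing, the prerequisite for reading back previous frames,
// set on the view's CAEAGLLayer before the framebuffer is created.
CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer;
eaglLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithBool:YES], kEAGLDrawablePropertyRetainedBacking,
    kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat,
    nil];

// Option 2: rotate the GL-backed view itself instead of reallocating the
// framebuffer (e.g., 90 degrees for landscape).
glView.transform = CGAffineTransformMakeRotation(M_PI_2);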