How to not clear context before drawing - iPhone

I need to draw incrementally in a subclassed UIView, but the view is cleared every time I call [self setNeedsDisplay].
I'm doing this:
- (id)initWithCoder:(NSCoder *)aDecoder {
    if ((self = [super initWithCoder:aDecoder])) {
        self.clearsContextBeforeDrawing = FALSE;
    }
    return self;
}
Am I missing something? How do I stop UIView from clearing?

If you call setNeedsDisplay it will call drawRect with the full bounds.
If you are drawing incrementally (portions of the view), you need to call:
[self setNeedsDisplayInRect:rect];
Of course that means your drawRect needs to be able to handle just drawing a portion of the view.
More info here: Drawing incrementally in a UIView (iPhone)
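For illustration, here is a minimal sketch (not from the original answer) of a drawRect: that only repaints the dirty rect it is handed; the strokes array is an assumption:

// Hypothetical sketch: redraw only the invalidated portion of the view.
// 'self.strokes' (an NSArray of UIBezierPath objects) is an assumed property.
- (void)drawRect:(CGRect)rect {
    [[UIColor whiteColor] setFill];
    UIRectFill(rect);                                   // repaint just the dirty area
    [[UIColor blackColor] setStroke];
    for (UIBezierPath *path in self.strokes) {
        if (CGRectIntersectsRect(path.bounds, rect)) {  // skip strokes outside the dirty rect
            [path stroke];
        }
    }
}

You would then invalidate only the changed area, e.g. [self setNeedsDisplayInRect:CGRectInset(newStroke.bounds, -2, -2)].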
Also, you're setting it to FALSE, while the idiomatic constant in Objective-C is NO. One resolves to 0 and the other to (BOOL)0 at compile time; a bit odd, but it isn't causing your issue:
Is there a difference between YES/NO,TRUE/FALSE and true/false in objective-c?

Sorry.
drawRect: is not guaranteed to use the saved contents of any previous drawing of a regular UIView. The framebuffer is likely behind an opaque path to the GPU, and can be thought of as essentially write-only. So you always have to be able to recreate the entire view if you implement drawRect: for a regular UIView.
But you can draw into another context including previous graphics contents if you create your own bitmap drawing context, and draw into that context instead of a UIView. You can then convert that bitmap context into an image and display that image as the content of the CALayer of a UIView.
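As a rough illustration of that approach (my sketch, not the answerer's code; the view and sizes are assumptions), you keep a bitmap context alive, accumulate drawing into it, and publish it through the layer:

// Hypothetical sketch: accumulate drawing in an offscreen bitmap context and
// display it via the view's layer (requires #import <QuartzCore/QuartzCore.h>).
CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
CGContextRef bitmap = CGBitmapContextCreate(NULL, 320, 480, 8, 0, space,
                                            kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(space);

// ... each time new input arrives, draw into 'bitmap' without clearing it ...
CGContextSetRGBStrokeColor(bitmap, 0, 0, 0, 1);
CGContextMoveToPoint(bitmap, 10, 10);
CGContextAddLineToPoint(bitmap, 100, 100);
CGContextStrokePath(bitmap);

// Publish the accumulated drawing as the layer's contents.
CGImageRef image = CGBitmapContextCreateImage(bitmap);
myView.layer.contents = (id)image;    // under ARC: (__bridge id)image
CGImageRelease(image);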

Related

How could I trick [CALayer renderInContext:] to render only a section of the layer?

I'm well aware that there's no nice way to render a small part of a UIView to an image, besides rendering the whole view and cropping. (VERY expensive on something like an iPad 3, where the resulting image is huge). See here for a description of the renderInContext method (there's no alternatives).
The part of the view that I want to render can't be predetermined, meaning I can't set up the view hierarchy for the small section to be its own UIView (and therefore CALayer).
My Idea...
I had an idea, but I need some direction if I'm going to succeed. What if I create a category on UIView (or CALayer) that adds a method along the lines of:
[UIView renderSubframe:(CGRect)frame];
How? Well, I'm thinking that if a dummy view the size of the sub frame was created, then the view could be shifted temporarily onto this. Then the dummy view's layer could call renderInContext:, resulting in an efficient and fast way of rendering just that portion of the view.
So...
I'm really not that up to speed with CALayer/Quartz core stuff... Will this have any chance of working? If not, what's another way I could achieve the same thing - what's the most efficient way I could achieve the same result (or has anyone else already faced this problem and come up with a different solution)?
Here's the category I ended up writing. Not as hard as I first thought, thanks to a handy CGAffineTransform.
#import "UIView+RenderSubframe.h"
#import <QuartzCore/QuartzCore.h>
#implementation UIView (RenderSubframe)
- (UIImage *) renderWithBounds:(CGRect)frame {
CGSize imageSize = frame.size;
UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0.0);
CGContextRef c = UIGraphicsGetCurrentContext();
CGContextConcatCTM(c, CGAffineTransformMakeTranslation(-frame.origin.x, -frame.origin.y));
[self.layer renderInContext:c];
UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return screenshot;
}
#end
A couple of different ways you might do this:
specify a CGContextClipToRect for your context before calling renderInContext;
use:
setNeedsDisplayInRect:
Marks the region within the specified rectangle as needing to be updated.
- (void)setNeedsDisplayInRect:(CGRect)theRect
This would make the rendering happen only in the specified rect. I am not sure though whether this would work as per your requirement.
I think that the first option should work seamlessly.
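For reference, a minimal sketch of the first option (names are assumptions); the translation mirrors the category above, and the clip keeps Quartz from rasterizing anything outside the subframe:

// Hypothetical sketch: clip to the subframe, shift the origin, then render.
UIGraphicsBeginImageContextWithOptions(subframe.size, NO, 0.0);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextClipToRect(ctx, CGRectMake(0, 0, subframe.size.width, subframe.size.height));
CGContextTranslateCTM(ctx, -subframe.origin.x, -subframe.origin.y);
[view.layer renderInContext:ctx];
UIImage *cropped = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();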

Loading UIView transform & center from settings gives different position

I'm using pan, pinch, and rotate UIGestureRecognizers to allow the user to put certain UI elements exactly where they want them. The code from http://www.raywenderlich.com/6567/uigesturerecognizer-tutorial-in-ios-5-pinches-pans-and-more (and the similar code from http://mobile.tutsplus.com/tutorials/iphone/uigesturerecognizer/) gives me what I need for the user to place these UI elements as they desire.
When the user exits "UI layout" mode, I save the UIView's transform and center like so:
NSString *transformString = NSStringFromCGAffineTransform(self.transform);
[[NSUserDefaults standardUserDefaults] setObject:transformString forKey:@"UItransform"];
NSString *centerString = NSStringFromCGPoint(self.center);
[[NSUserDefaults standardUserDefaults] setObject:centerString forKey:@"UIcenter"];
When I reload the app, I read the UIView's transform and center like so:
NSString *centerString = [[NSUserDefaults standardUserDefaults] objectForKey:@"UIcenter"];
if( centerString != nil )
    self.center = CGPointFromString(centerString);

NSString *transformString = [[NSUserDefaults standardUserDefaults] objectForKey:@"UItransform"];
if( transformString != nil )
    self.transform = CGAffineTransformFromString(transformString);
And the UIView ends up rotated and scaled correctly, but in the wrong place. Further, upon entering "UI layout" mode again, I can't always grab the view with the various gestures (as though the view as displayed is not the view as understood by the gesture recognizer?)
I also have a reset button that sets the UIView's transform to the identity and its center to whatever it is when it loads from the NIB. But after loading the altered UIView center and transform, even the reset doesn't work. The UIView's position is wrong.
My first thought was that since those gesture code examples alter center, that rotations must be happening around different centers (assuming some unpredictable sequence of moves, rotations, and scales). As I don't want to save the entire sequence of edits (though that might be handy if I want to have some undo feature in the layout mode), I altered the UIPanGestureRecognizer handler to use the transform to move it. Once I got that working, I figured just saving the transform would get me the current location and orientation, regardless of in what order things happened. But no such luck. I still get a wacky position this way.
So I'm at a loss. If a UIView has been moved and rotated to a new position, how can I save that location and orientation in a way that I can load it later and get the UIView back to where it should be?
Apologies in advance if I didn't tag this right or didn't lay it out correctly or committed some other stackoverflow sin. It's the first time I've posted here.
EDIT
I'm trying the two suggestions so far. I think they're effectively the same thing (one suggests saving the frame and the other suggests saving the origin, which I think is the frame.origin).
So now the save/load from prefs code includes the following.
Save:
NSString *originString = NSStringFromCGPoint(self.frame.origin);
[[NSUserDefaults standardUserDefaults] setObject:originString forKey:@"UIorigin"];
Load (before loading the transform):
NSString *originString = [[NSUserDefaults standardUserDefaults] objectForKey:@"UIorigin"];
if( originString ) {
    CGPoint origin = CGPointFromString(originString);
    self.frame = CGRectMake(origin.x, origin.y, self.frame.size.width, self.frame.size.height);
}
I get the same (or similar - it's hard to tell) result. In fact, I added a button to just reload the prefs, and once the view is rotated, that "reload" button will move the UIView by some offset repeatedly (as though the frame or transform are relative to itself - which I'm sure is a clue, but I'm not sure what it's pointing to).
EDIT #2
This makes me wonder about depending on the view's frame. From Apple http://developer.apple.com/library/ios/#documentation/WindowsViews/Conceptual/ViewPG_iPhoneOS/WindowsandViews/WindowsandViews.html#//apple_ref/doc/uid/TP40009503-CH2-SW6 (emphasis mine):
The value in the center property is always valid, even if scaling or rotation factors have been added to the view’s transform. The same is not true for the value in the frame property, which is considered invalid if the view’s transform is not equal to the identity transform.
EDIT #3
Okay, so when I'm loading the prefs in, everything looks fine. The UI panel's bounds rect is {{0, 0}, {506, 254}}. At the end of my VC's viewDidLoad method, all still seems okay. But by the time things actually are displayed, bounds is something else. For example: {{0, 0}, {488.321, 435.981}} (which looks like how big it is within its superview once rotated and scaled). If I reset bounds to what it's supposed to be, it moves back into place.
It's easy enough to reset the bounds to what they're supposed to be programmatically, but I'm actually not sure when to do it! I would've thought to do it at the end of viewDidLoad, but bounds is still correct at that point.
EDIT #4
I tried capturing self.bounds in initWithCoder (as it's coming from a NIB), and then in layoutSubviews, resetting self.bounds to that captured CGRect. And that works.
But it seems horribly hacky and fraught with peril. This can't really be the right way to do this. (Can it?) skram's answer below seems so straightforward, but doesn't work for me when the app reloads.
You would save the frame property as well. You can use NSStringFromCGRect() and CGRectFromString().
When loading, set the frame then apply your transform. This is how I do it in one of my apps.
Hope this helps.
UPDATE: In my case, I have Draggable UIViews that rotation and resizing can be applied to. I use NSCoding to save and load my objects, example below.
//encoding
....
[coder encodeCGRect:self.frame forKey:@"rect"];
// you can save with NSStringFromCGRect(self.frame);
[coder encodeObject:NSStringFromCGAffineTransform(self.transform) forKey:@"savedTransform"];
//init-coder
CGRect frame = [coder decodeCGRectForKey:@"rect"];
// you can use frame = CGRectFromString(/*load string*/);
[self setFrame:frame];
self.transform = CGAffineTransformFromString([coder decodeObjectForKey:@"savedTransform"]);
What this does is save my frame and transform, and load them when needed. The same method can be applied with NSStringFromCGRect() and CGRectFromString().
UPDATE 2: In your case. You would do something like this..
[self setFrame:CGRectFromString([[NSUserDefaults standardUserDefaults] valueForKey:@"UIFrame"])];
self.transform = CGAffineTransformFromString([[NSUserDefaults standardUserDefaults] valueForKey:@"transform"]);
Assuming you're saving to NSUserDefaults with the UIFrame and transform keys.
I am having trouble reproducing your issue. I have used the following code, which does the following:
Adds a view
Moves it by changing the centre
Scales it with a transform
Rotates it with another transform, concatenated onto the first
Saves the transform and centre to strings
Adds another view and applies the centre and transform from the string
This results in two views in exactly the same place and position:
- (void)viewDidLoad
{
    [super viewDidLoad];

    UIView *view1 = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 100, 100)];
    view1.layer.borderWidth = 5.0;
    view1.layer.borderColor = [UIColor blackColor].CGColor;
    [self.view addSubview:view1];

    view1.center = CGPointMake(150, 150);
    view1.transform = CGAffineTransformMakeScale(1.3, 1.3);
    view1.transform = CGAffineTransformRotate(view1.transform, 0.5);

    NSString *savedCentre = NSStringFromCGPoint(view1.center);
    NSString *savedTransform = NSStringFromCGAffineTransform(view1.transform);

    UIView *view2 = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 100, 100)];
    view2.layer.borderWidth = 2.0;
    view2.layer.borderColor = [UIColor greenColor].CGColor;
    [self.view addSubview:view2];

    view2.center = CGPointFromString(savedCentre);
    view2.transform = CGAffineTransformFromString(savedTransform);
}
This ties up with what I would expect from the documentation, in that all transforms happen around the centre point and so that is never affected. The only way I can imagine that you're not able to restore items to their previous state is if somehow the superview was different, either with its own transform or a different frame, or a different view altogether. But I can't tell that from your question.
In summary, the original code in your question ought to be working, so there is something else going on! Hopefully this answer will help you narrow it down.
You should also save the UIView's location:
CGPoint position = CGPointMake(self.view.frame.origin.x, self.view.frame.origin.y);
NSString *positionString = NSStringFromCGPoint(position);
// Do the saving
I'm not sure of everything that's going on, but here are some ideas that may help.
1- skram's solution seems plausible, but it's the bounds you want to save, not the frame. (Note that, if there's been no rotation, the center and bounds define the frame. So, setting the two is the same as setting the frame.)
From the View Programming Guide for iOS you linked to:
Important: If a view’s transform property is not the identity transform, the value of that view’s frame property is undefined and must be ignored. When applying transforms to a view, you must use the view’s bounds and center properties to get the size and position of the view. The frame rectangles of any subviews are still valid because they are relative to the view’s bounds.
2- Another idea. When you reload the app, you could try the following:
First, set the view's transform to the identity transform.
Then, set the view's bounds and center to the saved values.
Finally, set the view's transform to the saved transform.
Depending on where your app is restarting, it may be starting back up with some of the old geometry. I really don't think this will change anything, but it's easy enough to try.
Update: After some testing, it really does seem like this wouldn't have any effect. Changing the transform does not seem to change the bounds or center (although it does change the frame.)
3- Lastly, you may save some trouble by rewriting the pinch gesture recognizer to operate on the bounds rather than the transform; see the sketch after the quote below. (Again, use bounds, not frame, because an earlier rotation could have rendered the frame invalid.) In this way, the transform is used only for rotations, which, I think, cannot be done any other way without redrawing.
From the same guide, Apple's recommendation is:
You typically modify the transform property of a view when you want to implement animations. For example, you could use this property to create an animation of your view rotating around its center point. You would not use this property to make permanent changes to your view, such as modifying its position or size of a view within its superview’s coordinate space. For that type of change, you should modify the frame rectangle of your view instead.
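To illustrate the third suggestion above, here is a minimal sketch of a pinch handler that scales the bounds instead of the transform (the handler name and setup are assumptions):

// Hypothetical sketch: resize via bounds so the transform stays reserved for rotation.
- (void)handlePinch:(UIPinchGestureRecognizer *)gesture {
    UIView *view = gesture.view;
    CGRect bounds = view.bounds;
    bounds.size.width  *= gesture.scale;
    bounds.size.height *= gesture.scale;
    view.bounds = bounds;     // center is untouched, so the view stays in place
    gesture.scale = 1.0;      // reset so each callback applies an incremental scale
}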
Thanks to all who contributed answers! The sum of them all led me to the following:
The trouble seems to have been that the bounds CGRect was being reset after loading the transform from preferences at startup, but not when updating the preferences while modifying in real time.
I think there are two solutions. One would be to first load the preferences from layoutSubviews instead of from viewDidLoad. Nothing seems to happen to bounds after layoutSubviews is called.
For other reasons in my app, however, it's more convenient to load the preferences from the view controller's viewDidLoad. So the solution I'm using is this:
// UserTransformableView.h
@interface UserTransformableView : UIView {
    CGRect defaultBounds;
}
@end

// UserTransformableView.m
- (id)initWithCoder:(NSCoder *)aDecoder {
    self = [super initWithCoder:aDecoder];
    if( self ) {
        defaultBounds = self.bounds;
    }
    return self;
}

- (void)layoutSubviews {
    [super layoutSubviews];
    self.bounds = defaultBounds;
}

Avoiding UIView to create a very large drawing canvas

I derived a class from UIView, only to realize that there are real limitations on its size due to memory. I have this UIView inside a UIScrollView.
Is there a way for me to put something inside a scroll view that is not a UIView-derived class but into which I can still draw, and which can be very very large?
I don't mind having to respond to expose-rectangle events, like one does when using conventional windowing systems.
Thanks.
The things inside a UIScrollView must be UIViews, which are size-restricted for memory reasons. UIView maintains a bitmapped backing store for performance reasons, so it has to allocate memory proportional to its size.
The usual way that you handle this is to generate several UIViews and swap them out as the user scrolls around. The other version of that is to use CATiledLayer. Neither of those gives you the "giant canvas" drawing model, though. It's up to you to break things up and draw them as needed, but this is the usual approach.
If you really want a giant canvas, my recommendation would be a CGPDFContext. There is rich existing support for these, particularly using UIWebView (remember, you can open data: URIs to avoid reading files from disk). And you can draw parts of them directly by applying affine transforms and then CGContextDrawPDFPage. CGBitmapContext is another approach, but it could require a lot more memory for a small amount of drawing.
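If you go the CGPDFContext route, drawing a region of one page back into a view's context might look roughly like this (a sketch; pdfURL, viewBounds, and regionOfInterest are placeholder assumptions):

// Hypothetical sketch: render part of a PDF page into the current context.
CGPDFDocumentRef document = CGPDFDocumentCreateWithURL((CFURLRef)pdfURL);
CGPDFPageRef page = CGPDFDocumentGetPage(document, 1);   // page numbers are 1-based

CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSaveGState(ctx);

// PDF pages use a bottom-left origin: flip, then shift to the region you want.
CGContextTranslateCTM(ctx, 0, viewBounds.size.height);
CGContextScaleCTM(ctx, 1.0, -1.0);
CGContextTranslateCTM(ctx, -regionOfInterest.origin.x, -regionOfInterest.origin.y);

CGContextDrawPDFPage(ctx, page);
CGContextRestoreGState(ctx);

CGPDFDocumentRelease(document);   // the page is owned by the document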
So you have a UIView inside a UIScrollView, but you want your UIView to have very large bounds (i.e., so it matches the size of your UIScrollView's contentSize). But you don't want to draw the entire UIView every time it needs displaying, nor can you fit its entire contents in memory at once.
Make your UIView use a CAScrollLayer backing, as follows:
// MyCustomUIView.m
+ (Class)layerClass
{
    return [CAScrollLayer class];
}
Add a method to update the scroll position when the user scrolls the UIScrollView containing your UIView:
// MyCustomUIView.m
- (void)setScrollOffset:(CGPoint)scrollOffset
{
    CAScrollLayer *scrollLayer = (CAScrollLayer *)self.layer;
    [scrollLayer scrollToPoint:scrollOffset];
}
Ensure that when you draw your UIView, you only draw the portions contained in the CGRect provided to you:
- (void)drawRect:(CGRect)rect
{
    // Only draw stuff that lies inside 'rect'
    // CGRectIntersection might be handy here!
}
Now, in your UIScrollViewDelegate, you'll need to notify your CAScrollLayer backed view when the parent UIScrollView updates:
// SomeUIScrollViewDelegate.m
- (void)scrollViewDidScroll:(UIScrollView *)scrollView
{
    // Offset myCustomView within the scroll view so that it is always visible
    myCustomView.frame = CGRectMake(scrollView.contentOffset.x,
                                    scrollView.contentOffset.y,
                                    scrollView.bounds.size.width,
                                    scrollView.bounds.size.height);

    // "Scroll" myCustomView so that the correct portion is rendered
    [myCustomView setScrollOffset:scrollView.contentOffset];

    // Tell it to update its display
    [myCustomView setNeedsDisplay];
}
You can also use CATiledLayer, which is easier because you do not have to track the scroll position; instead, your drawRect: method will be called for each tile as needed. However, this will cause your view's tiles to fade in slowly. It might be desirable if you intend to cache parts of your view and don't mind the slow updates.
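A minimal sketch of the CATiledLayer variant (my assumptions, not part of the original answer) parallels the CAScrollLayer code above, but no scroll tracking is needed:

// MyTiledView.m -- hypothetical sketch of a CATiledLayer-backed view.
#import <QuartzCore/QuartzCore.h>

+ (Class)layerClass
{
    return [CATiledLayer class];
}

- (void)drawRect:(CGRect)rect
{
    // 'rect' is one tile of the large canvas; draw only what intersects it.
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(ctx, [UIColor lightGrayColor].CGColor);
    CGContextFillRect(ctx, rect);
    // ... draw the portion of your content that falls inside 'rect' ...
}

Here the huge view simply sits in the scroll view at full contentSize, and tiles are requested as they scroll into view.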

How to keep the previous drawing when using Quartz on the iPhone?

I want to draw a simple line on the iPhone by touching and dragging across the screen. I managed to do that by subclassing UIView and changing the default drawRect: method. At the same time, in my view controller I detect the touch event and call [myView setNeedsDisplay] when necessary. The problem is that when I try to draw the second line the previous line disappears. Is there a way to keep the previous line on the screen?
Any input will be very much appreciated.
The usual method is to use CGBitmapContextCreate(). Create it in -init/-initWithFrame:/etc. and call CGContextRelease() in -dealloc. I'm not sure how you handle the 2x scale of the "retina display" with CGBitmapContextCreate().
(UIGraphicsBeginImageContext() is easier, but it might not be safe to do UIGraphicsBeginImageContext(); myContext = CFRetain(UIGraphicsGetCurrentContext()); UIGraphicsEndImageContext();.)
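For concreteness, here is a rough sketch of creating such a context in -initWithFrame:; handling the Retina scale by multiplying the pixel dimensions and scaling the CTM is my own assumption, not something from the answer (myContext is an assumed ivar):

// Hypothetical sketch: build the backing bitmap context, sized in pixels.
- (id)initWithFrame:(CGRect)frame
{
    if ((self = [super initWithFrame:frame])) {
        CGFloat scale = [UIScreen mainScreen].scale;        // 2.0 on Retina devices
        size_t pixelWidth  = (size_t)(frame.size.width  * scale);
        size_t pixelHeight = (size_t)(frame.size.height * scale);

        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        myContext = CGBitmapContextCreate(NULL, pixelWidth, pixelHeight, 8, 0,
                                          colorSpace, kCGImageAlphaPremultipliedLast);
        CGColorSpaceRelease(colorSpace);

        // Flip and scale so drawing can use point-based, top-left coordinates.
        CGContextTranslateCTM(myContext, 0, pixelHeight);
        CGContextScaleCTM(myContext, scale, -scale);
    }
    return self;
}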
Then do something like
#import <QuartzCore/CALayer.h>
-(void)displayLayer:(CALayer*)layer
{
UIGraphicsPushContext(mycontext);
... Update the bitmap context by drawing a line...
UIGraphicsPopContext();
CGImageRef cgImage = CGBitmapContextCreateImage(mycontext);
layer.contents = (id)cgImage;
CFRelease(cgImage);
}
I've used -displayLayer: (a CALayer delegate function; a UIView is its layer's delegate) instead of -drawRect: for efficiency: if you use -drawRect:, CALayer creates a second context (and thus a second copy of the bitmap).
Alternatively, you might have luck with CGLayer. I've never seen a reason to use it instead of a bitmap context; it might be more efficient in some cases.
Alternatively, you might get away with setting self.clearsContextBeforeDrawing = NO, but this is very likely to break. UIKit (or, more accurately, CoreAnimation) expects you to draw the whole view contained in the clip rect (that's the "rect" argument of -drawRect:; it's not guaranteed to be the bounds). If your view goes offscreen, CoreAnimation might decide that it wants the memory back. Or CoreAnimation might only draw the part of the view that's on-screen. Or CoreAnimation might do double-buffered drawing, causing your view to appear to flip between two states.
If you use drawRect: to draw, then you need to draw the whole invalidated area, so you need to store not only the data for the latest line but for everything drawn so far.
As an alternative, you might draw directly into a bitmap, or dynamically generate subviews for your lines (which only makes sense for very limited, mostly vector-based drawing).

Saving CoreGraphics drawing actions?

Is there some way I can do something like:
@implementation MyView

- (void)drawRect:(CGRect)rect
{
    // Get the thing I'm supposed to draw (CGImageRef, pattern, etc.) and draw it
    // i.e. not real code
    CGContextDrawWhatever(self.objectThatHoldsDrawing.drawing);
}
Have a look at CGPathRef, CGLayer (or maybe even just CALayer, but that's mostly equivalent to a drawRect: method), and rendering (e.g. buffering) to an image context and just rendering the image.
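As one concrete reading of that suggestion (a sketch under my own assumptions, not the answerer's code), you could hold the "drawing actions" as a UIBezierPath on the view and replay it in drawRect:, for example:

// Hypothetical sketch: keep the accumulated drawing as a path and replay it.
@interface MyView : UIView
@property (nonatomic, retain) UIBezierPath *savedPath;   // built up by touch handlers
@end

@implementation MyView

- (void)drawRect:(CGRect)rect
{
    [[UIColor blackColor] setStroke];
    [self.savedPath stroke];    // replays every saved segment on each redraw
}

@end

Touch handlers would append to savedPath (moveToPoint:/addLineToPoint:) and call setNeedsDisplay; the path itself is the "saved drawing" object.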