I was looking at some animations in my iPhone app and felt they were kind of ugly. Then I understood why: it just doesn't animate through subpixel states.
So, when I use the usual +beginAnimations/+commitAnimations, moving some stuff by just a few pixels looks "jumpy". How can I avoid that? Are there any flags to make it animate through float coords or whatever?
Just to give you an idea of what I have and what I'm looking for, please refer to the picture:
http://www.ptoing.net/subpixel_aa.gif
Thanks in advance,
Anton
That's funny, but I found that UIImageView animates its content with anti-aliasing, while the edges of the view itself are not anti-aliased (hard). It seems to be because UIView has to maintain the same bounds, while subpixel rendering might extend a bit beyond them.
So, I ended up just putting some transparent space around the picture inside the image, and it all went smoothly.
Just don't let UIView clip its contents for you :)
You can try deriving the item you're animating from a custom UIView that overrides its drawRect: method, turns anti-aliasing on, and then lets the superclass draw into it. Something along the lines of:
- (void)drawRect:(CGRect)area
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSaveGState(context);
    CGContextSetShouldAntialias(context, true);
    CGContextSetAllowsAntialiasing(context, true);
    [super drawRect:area];
    // We have to turn it back off since it's not saved in the graphics state.
    CGContextSetAllowsAntialiasing(context, false);
    CGContextRestoreGState(context);
}
On the other hand, it might be too late in the rendering pipeline by the time you get here, so you may have to end up rolling your own animation scheme that lets you have full control over pixel-positioning.
Per Jeeva's comment above, you can make UIView edges render with anti-aliasing by setting the following option in the Info.plist to YES.
Renders with edge antialiasing
Here is the link Jeeva pointed to:
http://www.techpaa.com/2012/06/avoiding-view-edge-antialiasing.html
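For reference, the raw key behind that human-readable name is (as far as I recall) UIViewEdgeAntialiasing; in the Info.plist XML source the entry would look roughly like this:
<key>UIViewEdgeAntialiasing</key>
<true/>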
I put some image views on scroll view.
And when I drag this scroll view, I didn't have any problems.
But after I applied a shadow effect to these image views, dragging the scroll view performs badly.
I used the shadowOpacity, shadowRadius and shadowOffset layer properties.
ex:
[[anImageView layer] setShadowOpacity:1.0];
If using the shadow effect really hurts performance this badly, I will draw the shadows into the images directly.
If there are any tips about this issue, please let me know.
I want to apply the shadow effect programmatically on iOS, because my drawing skills are the worst.
Thank you for reading.
See CALayer.shouldRasterize (iOS 3.2+, but so is shadowOffset/etc):
When the value of this property is YES, the layer is rendered as a bitmap in its local coordinate space and then composited to the destination with any other content. Shadow effects and any filters in the filters property are rasterized and included in the bitmap.
You probably also want to set rasterizationScale appropriately.
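In code that ends up being just a couple of lines; a minimal sketch, reusing the anImageView from the question:
// Rasterize the shadowed layer once so the shadow isn't re-rendered on every scroll frame.
anImageView.layer.shouldRasterize = YES;
// Match the screen scale so the rasterized bitmap isn't blurry on Retina (scale is iOS 4+).
anImageView.layer.rasterizationScale = [UIScreen mainScreen].scale;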
While using the rasterized layer indeed increases performance, you will get better (nicer) results using the shadowPath property, as @wayne-hartman suggests.
Check http://nachbaur.com/blog/fun-shadow-effects-using-custom-calayer-shadowpaths for how to use the CALayer shadow path.
Whenever you work with shadows, it's better to back them with a Bézier path. That lets you set the shadowPath, which will drastically improve performance. Rasterization will improve performance, but setting shadowPath improves it around 5x more than rasterization alone.
UIBezierPath *path = [UIBezierPath bezierPathWithRoundedRect:CGRectMake(0, 0, 100, 100) cornerRadius:10];
[self.layer setShadowColor:[UIColor blackColor].CGColor];
[self.layer setShadowOpacity:1.0f];
[self.layer setShadowRadius:10.0f];
[self.layer setShadowPath:[path CGPath]];
I've had exactly the same problem. Drawing the shadow is a fairly costly multi-pass operation, so I can kind of understand it; I think the shadow is redrawn continuously as you scroll. The only work-around I've found is to render the shadow manually into an image and display that image behind the images in the scroll view. This seems to work well.
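A rough sketch of that work-around, with sourceImage and the sizes as placeholders:
// Pre-render the image plus its shadow once into a UIImage, then display that
// in the scroll view instead of letting CALayer redraw the shadow every frame.
UIGraphicsBeginImageContextWithOptions(CGSizeMake(110.0f, 110.0f), NO, 0.0f);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSetShadowWithColor(ctx, CGSizeMake(3.0f, 3.0f), 5.0f, [UIColor blackColor].CGColor);
[sourceImage drawInRect:CGRectMake(5.0f, 5.0f, 100.0f, 100.0f)];
UIImage *shadowedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// shadowedImage then goes into the image view in place of the original image.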
I want to implement dialog borders that scale to the size I require the dialog to be. Perhaps there is a better more conventional name for this sort of thing. If there is, if someone would edit the title, that'd be great.
Anyhow, I'd like to do this so I can have dialogs of any size without the visual artifacts that come with scaling border art to small, large, or wacky disproportionate dimensions. I have a few ideas on how this is done, but am not sure which is better for the iPhone. I have a few questions.
1) Should I make a containing view object that basically overloads its drawRect method and draws the images where they should be, at their appropriate scale, when the method is called, or should I make a containing view object that simply contains 8 UIImageViews? I suspect the latter approach won't work if I need to actively scale the resulting dialog class, like in an animation.
1b) If overloading drawRect is the way to go, does someone have some sample code or a link to an example that demonstrates drawing an image directly from drawRect()?
2) Is it generally better to create
a) a 3 x 3 image where the segments are in their appropriate 1x1 grid of the image? If so, is it simple to draw from a portion of this image onto my target view in drawRect (if the former assumption is correct that I should use drawRect)?
b) The pieces separately in 8 different files?
UPDATE:
To clarify, the idea is to take any customized border art and be able to stretch the 2nd, 4th, 6th, and 8th cell (in a 3x3-cell grid) to form a border of any size with just those assets. Stretching just a plain image would distort the corners, so I'd like to stretch only those even-numbered cells as needed and tack on the corners so there is no distortion. I'd seen this done before, so I thought it might be a standard technique with a standard name other than what I called it.
Anyhow, I was advised that adding 8 UIImageViews to a container would not be as efficient as drawing the UIImages on the fly in drawRect, so I took that approach, using CGContextDrawImage() after applying the necessary transformations to the context to translate and flip the Y axis (sketched below). Because this function draws from the bottom-left corner of an image, but onto a top-left-origined UIView, the image comes out upside down without the Y-axis invert. I noticed the suggestion to use UIImage methods like drawAtPoint:, which work similarly but without the invert, since UIImage draws in the same orientation as UIViews. I will continue my implementation with the former and see how it goes, but one other question.
Would someone happen to know which of these approaches is more efficient, faster, etc.?
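For reference, roughly what I mean by the flipped-context drawing (borderImage and destRect are placeholder names):
- (void)drawRect:(CGRect)rect
{
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSaveGState(ctx);
    // Flip the Y axis so CGContextDrawImage isn't upside down in a top-left-origined UIView.
    CGContextTranslateCTM(ctx, 0.0f, self.bounds.size.height);
    CGContextScaleCTM(ctx, 1.0f, -1.0f);
    CGContextDrawImage(ctx, destRect, borderImage.CGImage);
    CGContextRestoreGState(ctx);
}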
I'm not sure I follow, but here's my best shot at an answer...
Using drawRect: or adding individual UIImageViews to a parent view is entirely up to you. UIImageView gives you a bit of encapsulated functionality for free, but otherwise they are the same as far as appearances go.
If you do want to go the drawRect route, you just need to use UIImage's drawAtPoint: method. Do the math for where you want it to be, and draw it. You can calculate your points based on the parent view's dimensions.
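A minimal sketch of that, with made-up corner image names, positioning the corner pieces from the view's own bounds:
- (void)drawRect:(CGRect)rect
{
    CGSize size = self.bounds.size;
    [topLeftImage drawAtPoint:CGPointMake(0.0f, 0.0f)];
    [topRightImage drawAtPoint:CGPointMake(size.width - topRightImage.size.width, 0.0f)];
    [bottomLeftImage drawAtPoint:CGPointMake(0.0f, size.height - bottomLeftImage.size.height)];
    [bottomRightImage drawAtPoint:CGPointMake(size.width - bottomRightImage.size.width, size.height - bottomRightImage.size.height)];
    // The edge pieces would be drawn with drawInRect: so they stretch between the corners.
}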
As far as scaling, it's impossible to resize these images without scaling them, so I'd plan ahead and make your originals as large or larger than you ever expect to display them.
Hope that helps a little?
Cheers
If you want a border on a dialog box, assuming the box is a UIView (or subclass), then set the layer's border properties and let the system draw the border for you.
#import <QuartzCore/QuartzCore.h>
// ...
view.layer.borderWidth = 2;
view.layer.borderColor = [UIColor whiteColor].CGColor;
view.layer.cornerRadius = 0; // 0=square corners, >0 for rounded
I need to mask a "texture" image with a rotated greyscale image.
I found out that I have to do it with CGImages or CGLayers (if there is a simpler way using UIImageViews only, please let me know about it).
My problem is simple:
The antialiasing of any rotation-transformed CG stuff is quite jaggy...
... but the antialiasing of a rotation-transformed UIImageView is kinda perfect. How can I produce those beautiful antialiased rotations?
I've uploaded a "proof" involving actual iPhone Simulator screenshots, to see what am I talkin' about: http://gotoandplay.freeblog.hu/files/Proof.png
I've tried to use CGImages, CGLayers, UIImageViews "captured" with renderInContext, I've tried to CGContextSetInterpolationQuality to high, and also tried to set CGContextSetAllowsAntialiasing - CGContextSetShouldAntialias, but every case returned the same jaggy result.
I'm planning to learn using OpenGL next year, but this development should released using CoreGraphics only. Please let me know how to get a perfectly rendered rotated image, I just can't accept it's impossible.
To add a 1px transparent border to your image, use this:
CGRect imageRect = CGRectMake(0, 0, image.size.width, image.size.height);
UIGraphicsBeginImageContext( imageRect.size );
[image drawInRect:CGRectMake(1,1,image.size.width-2,image.size.height-2)];
image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Have you tried adding a clear, 1-pixel border around your image? I've heard that recommended as a trick to avoid aliasing, by giving CoreGraphics some room to work with when blending the edges.
I am having a similar problem, looks like I'm going to move it over to OpenGL ES as well. I can't nail down an effective solution that doesn't hurt performance.
For the reference of future CoreGraphics explorers: putting a 1-pixel transparent border did make a noticeable improvement in my experiments, but it appears that, as Eonil mentioned, you end up with multiple stages of antialiasing/smoothing/interpolation working against each other. I.e., the CGLayer does some interpolation for its rotation, then the context it's being drawn into does some interpolation/antialiasing, and so on until it ends up looking pretty rough.
I actually ended up with better results by disabling interpolation and antialiasing on the destination context, though it was still obviously jaggy (less artifacts overall though). I was able to achieve the best overall appearance by enabling interpolation and antialiasing when constructing the CGLayer, and disabling it for the destination context when re-drawing it. This approach, obviously, is fraught with other problems.
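A rough sketch of that arrangement (sourceImage, center and angle are placeholder names):
// Draw the image into a CGLayer with interpolation and antialiasing enabled...
CGContextRef dest = UIGraphicsGetCurrentContext();
CGLayerRef layer = CGLayerCreateWithContext(dest, sourceImage.size, NULL);
CGContextRef layerCtx = CGLayerGetContext(layer);
CGContextSetInterpolationQuality(layerCtx, kCGInterpolationHigh);
CGContextSetShouldAntialias(layerCtx, true);
CGContextDrawImage(layerCtx, CGRectMake(0.0f, 0.0f, sourceImage.size.width, sourceImage.size.height), sourceImage.CGImage);
// ...then composite the rotated layer with interpolation/antialiasing disabled on the destination.
CGContextSaveGState(dest);
CGContextSetShouldAntialias(dest, false);
CGContextSetInterpolationQuality(dest, kCGInterpolationNone);
CGContextTranslateCTM(dest, center.x, center.y);
CGContextRotateCTM(dest, angle);
CGContextDrawLayerAtPoint(dest, CGPointMake(-sourceImage.size.width / 2.0f, -sourceImage.size.height / 2.0f), layer);
CGContextRestoreGState(dest);
CGLayerRelease(layer);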
I'm having a performance issue.
I've created a UIView and overridden its drawRect function. In that function I was drawing an image (a big one) and, over that, a white square covering the entire screen with a polygon inside it, using CGContextEOFillPath. The result is a white screen with a portion of the image (defined by the polygon) displayed.
After that, I created a function to animate the transition of that polygon into another one. Besides the polygon animation, the image should also be scaled and moved to fit what is displayed on the screen. I did that with an NSTimer. The animation of the polygon consists of calculating the distance between each pair of corresponding vertices and moving them according to the elapsed time. It worked just fine in the simulator, but really stuttered on the device.
Reading about performance here at Stack Overflow, I found the alternative of using beginAnimations and commitAnimations. I'm changing everything to use that approach for the image. But what can I do about the polygon? The polygon is drawn with CGContextMoveToPoint and CGContextAddLineToPoint, so I believe it can't be animated with beginAnimations. Am I correct? Is there a better approach?
The desired result is something like this comic reader app: http://www.comixology.com/iphoneapp (click on guided tour. at the middle of the video they show the "automatic masking" feature)
My suggestion would be to use a CAShapeLayer overlaid on your main image view, with the CAShapeLayer being the size of the view you want to mask and having a polygon path for a hole in the center of it. CAShapeLayers let you animate from one CGPathRef to another smoothly, as long as the two paths have the same number of control points. You will need to use a CABasicAnimation here to do that animating, rather than a UIView begin / commitAnimations block, but it's not too difficult.
Joe Ricioppo has a nice example of animating CAShapeLayer paths in his post here.
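A minimal sketch of that setup, assuming startPath and endPath are CGPathRefs with the same number of control points (the outer rectangle plus the polygon "hole", filled even-odd), and containerView is a placeholder:
CAShapeLayer *maskLayer = [CAShapeLayer layer];   // requires QuartzCore
maskLayer.frame = containerView.bounds;
maskLayer.fillColor = [UIColor whiteColor].CGColor;
maskLayer.fillRule = kCAFillRuleEvenOdd;
maskLayer.path = startPath;
[containerView.layer addSublayer:maskLayer];

CABasicAnimation *anim = [CABasicAnimation animationWithKeyPath:@"path"];
anim.fromValue = (id)startPath;   // use (__bridge id) under ARC
anim.toValue = (id)endPath;
anim.duration = 0.5;
maskLayer.path = endPath;   // set the model value so the final shape sticks
[maskLayer addAnimation:anim forKey:@"path"];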
With Core Animation you can animate "animatable" (sic) properties. Apple's documentation enumerates animatable properties in Mac OS X:
http://url.akosma.com/55
In the case of the iPhone, the UIView documentation explicitly says "animatable" when a given property is, hum, animatable. The most powerful of these are (IMHO) UIView's "transform" property, which takes CGAffineTransform structs as inputs, or CALayer's "transform" property (which takes CATransform3D structs). Both are animatable and give you tremendous power to create any kind of transition you want.
Now, in your case, indeed, you can't animate the polygon in an "easy" way. My bet would be in your case to try to map CGAffineTransforms that fit your needs (scale, translation) and apply that to a fixed view, non-animated, created using your Quartz code.
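For example, something along these lines (contentView is a placeholder name), using the same begin/commitAnimations block you already have:
[UIView beginAnimations:@"zoomToRegion" context:NULL];
[UIView setAnimationDuration:0.4];
// Scale and translation both ride on the animatable transform property.
CGAffineTransform t = CGAffineTransformMakeTranslation(80.0f, -120.0f);
t = CGAffineTransformScale(t, 2.0f, 2.0f);
contentView.transform = t;
[UIView commitAnimations];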
I hope I'm clear enough :)
I have some PNGs with transparent backgrounds that I would like to add shadows to programmatically. I've seen examples of adding shadows to square objects, but haven't seen any with complex shapes.
So the two steps I think I'd have to do would be:
Isolate the PNG shape
Draw a shape behind the PNG that is blurred, faded, and offset.
I don't have much experience with drawing within Cocoa, so any insight on where to begin would be much appreciated!
Screenshot:
(source: iworkinprogress.com)
Simplest way is to call CGContextSetShadow in your drawRect: before you draw the images.
- (void)drawRect:(CGRect)invalidRect
{
    CGContextRef c = UIGraphicsGetCurrentContext();
    CGContextSetShadow(c, CGSizeMake(5.0f, 5.0f), 5.0f);
    [myImage drawAtPoint:CGPointMake(50.0f, 50.0f)];
}
I found this category to be very useful: UIImage+Shadow.m
https://gist.github.com/kompozer/387210
I am not really a graphics person, but what about this: if you have a mask for these images, or if you can create one programmatically, then you can probably use a blur function to add a shadow-like effect.
Experiment in Photoshop/Acorn/Pixelmator?
Since you want the shadows to look like they all have the same light source... it seems like you might actually be better off with an OpenGL view that casts a light from above, with the images sitting slightly above a flat plane that they cast shadows onto. I'd look for 3D OpenGL frameworks that would let you add things pretty easily...