colorWithPatternImage and colorWithPatternImage.CGColor flip - iPhone

I have a pattern image (a red texture with a shadow on the bottom).
When I use this code
view.backgroundColor = [UIColor colorWithPatternImage:[UIImage imageNamed:@"myPattern.png"]];
everything is OK and the shadow is on the bottom. But when I use
view.layer.backgroundColor = [UIColor colorWithPatternImage:[UIImage imageNamed:@"subscribe-pattern.png"]].CGColor;
there is a problem: the image comes out flipped (the shadow is on top). Can you tell me how to fix this? I need the image unflipped when using the second method.

As bobnoble mentions in his comment, Core Graphics and iOS have flipped origins. iOS starts at the top left corner of the screen, and Core Graphics starts at the bottom left.
By accessing the layer property of a view you're dropping down to the Core Graphics level. If you must drop down to that level, you have to start working upside-down. An easy fix is to pre-flip your pattern. Just save it from your graphics editor upside-down. If you need to use the same image at both the UI and CG levels, you can always save two versions.
In code you can flip and translate the CG layer before drawing the upside-down image, then return the layer to its previous state for further drawing. It's kind of complex, but well covered by many Core Graphics tutorials.
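If you would rather not keep a second asset, a rough, untested sketch of pre-flipping the pattern image in code (assuming the iOS 4+ UIGraphicsBeginImageContextWithOptions API) might look like this:
// Render a vertically mirrored copy of the pattern so that, after the layer's
// bottom-left-origin coordinate system flips it again, the shadow lands back on the bottom.
UIImage *original = [UIImage imageNamed:@"myPattern.png"];
UIGraphicsBeginImageContextWithOptions(original.size, NO, original.scale);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(ctx, 0, original.size.height);
CGContextScaleCTM(ctx, 1.0, -1.0);
[original drawInRect:CGRectMake(0, 0, original.size.width, original.size.height)];
UIImage *flipped = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

view.layer.backgroundColor = [UIColor colorWithPatternImage:flipped].CGColor;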

Related

I have bad performance when using shadow effects

I put some image views on a scroll view.
When I drag this scroll view, I don't have any problems.
But after I applied a shadow effect to these image views, dragging the scroll view performs badly.
I used the shadowOpacity, shadowRadius and shadowOffset properties.
ex:
[[anImageView layer] setShadowOpacity:1.0];
If the shadow effect seriously hurts performance, I will draw the shadows into the images directly.
If there are any tips about this issue, please let me know.
I want to apply the shadow effect programmatically on iOS, because my drawing skills are poor.
Thank you for reading.
See CALayer.shouldRasterize (iOS 3.2+, but so is shadowOffset/etc):
When the value of this property is YES, the layer is rendered as a bitmap in its local coordinate space and then composited to the destination with any other content. Shadow effects and any filters in the filters property are rasterized and included in the bitmap.
You probably also want to set rasterizationScale appropriately.
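As a minimal sketch (the shadow values here are arbitrary), that might look like:
anImageView.layer.shadowOpacity = 1.0;
anImageView.layer.shadowRadius = 4.0;
anImageView.layer.shadowOffset = CGSizeMake(0, 2);
// Cache the layer (including its shadow) as a bitmap so it isn't re-rendered every frame while the scroll view moves.
anImageView.layer.shouldRasterize = YES;
// Match the screen scale so the cached bitmap isn't blurry on Retina displays.
anImageView.layer.rasterizationScale = [UIScreen mainScreen].scale;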
While rasterizing the layer does improve performance, you will get better (nicer) results using the shadowPath property, as @wayne-hartman suggests.
Check http://nachbaur.com/blog/fun-shadow-effects-using-custom-calayer-shadowpaths for how to use the CALayer shadow path.
Whenever you work with shadows it's better to use a bezier path for the shape. This lets you set the shadowPath, which will drastically improve performance. Rasterizing improves performance, but setting shadowPath improves it roughly 5x more than rasterizing alone.
UIBezierPath *path = [UIBezierPath bezierPathWithRoundedRect:CGRectMake(0, 0, 100, 100) cornerRadius:10];
[self.layer setShadowColor:[UIColor blackColor].CGColor];
[self.layer setShadowOpacity:1.0f];
[self.layer setShadowRadius:10.0f];
[self.layer setShadowPath:[path CGPath]];
I've had exactly the same problem. Drawing the shadow is a fairly costly multi-pass operation, so I can understand why, and I believe the shadow is redrawn continuously as you scroll. The only workaround I've found is to render the shadow manually into an image and display that image behind the images in the scroll view. This seems to work well.

What's the best design for an image-based iPhone app?

I would appreciate some advice on an iPhone game design. I want to display a background image and other images on top of it (buildings, characters, etc.). The background is going to be large (up to 10 times the size of the screen), so only a piece of the background file will be displayed at once. The idea is to replace this piece when the character gets close to the screen borders. I need this background transition to be a smooth animation. Also, I need a zoom in/out feature, preferably animated. Some images on the screen will be static (buildings) and some will require animation (a character walking).
What is the best design?
1) Use Core Graphics combined with "sprite" classes, displaying each sprite's UIImage with CGContextDrawImage.
2) Use UIKit: create a UIImageView to hold every image and add them as subviews in a single-view application.
3) Use an OpenGL ES project.
Option 1) turned out to be very slow. It seems like Core Graphics is not meant for displaying images in a game loop. But maybe there is a way to make it efficient? Maybe combine it with Core Animation somehow?
Option 2) is my current choice. I am hoping the view caches the image it holds and is thus more efficient than CG. But will the animation provided by UIImageView be satisfactory? I think the views shouldn't be added all at once, but rather created and added (removed?) dynamically as the background moves. Is that a good idea?
Option 3) would probably give the best control over the images, but it seems like quite a lot of overhead. I only need to display images, not vector graphics. Plus I'm new to Mac programming and I don't want to get stuck in some complex technology.
I appreciate any advice, thanks :)
I highly recommend Cocos2D, as I've done my own development with it (written up on my blog). It was really easy to do. I follow Ray Wenderlich's tutorials, and he provides great tools for doing everything you describe.
You asked "The backround is going to be large (up to 10 times the size of the screen) so only a piece of the background file will be displayed at once."
The tiled image system is very powerful and fast performance. If you use google maps you will see and example of a tiled image. Scroll off to a new are and blocks appear. In a local app you could take your image that is 10 times the size of the screen and cut in to tiles that are say 100px by 100px and each screen will only load the tiles that are displayed. When the user moves only the needed tiles are loaded. This saves memory and dramatically improves speed. It is the base reason why tables can fly, only the cells one screen are loaded, as is scrolls off the screen it's memory is reused for the next cell.
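To make the idea concrete, here is a rough, untested sketch of loading only the tiles that intersect the visible rect; the 100px tile size and the tile_<col>_<row>.png naming scheme are assumptions for illustration, not part of the question:
static const CGFloat kTileSize = 100.0; // assumed 100px square tiles

- (void)loadTilesForVisibleRect:(CGRect)visibleRect intoView:(UIView *)canvas
{
    NSInteger firstCol = floorf(CGRectGetMinX(visibleRect) / kTileSize);
    NSInteger lastCol  = floorf(CGRectGetMaxX(visibleRect) / kTileSize);
    NSInteger firstRow = floorf(CGRectGetMinY(visibleRect) / kTileSize);
    NSInteger lastRow  = floorf(CGRectGetMaxY(visibleRect) / kTileSize);

    for (NSInteger row = firstRow; row <= lastRow; row++) {
        for (NSInteger col = firstCol; col <= lastCol; col++) {
            // Hypothetical naming scheme for the pre-cut tile images.
            NSString *name = [NSString stringWithFormat:@"tile_%d_%d.png", (int)col, (int)row];
            UIImageView *tile = [[UIImageView alloc] initWithImage:[UIImage imageNamed:name]];
            tile.frame = CGRectMake(col * kTileSize, row * kTileSize, kTileSize, kTileSize);
            [canvas addSubview:tile];
        }
    }
    // Tiles that have scrolled out of visibleRect should be removed (or reused)
    // so memory stays bounded, just like table view cells.
}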
If option 2 is sufficiently performant for your needs I would stick with that - it's as easy a system as you'll get on the iPhone and fine for very simple graphics. A related option that might buy you a little bit of speed is using CALayers to implement the graphics. CALayers are almost as easy to use, but are a bit more lightweight than UIViews (in some ways you can think of UIViews as just wrappers for CALayers with additional overhead for managing things like touch events, etc.)
If you're interested I would read the Core Animation Programming Guide (I would provide a link but I think my reputation is too low, but Google should track it down for you). Core Animation is a big subject and can be pretty daunting but if you just use layers (i.e. not the animation parts of it) it's not so bad. Here's a quick example to give you a sense of what using layers looks like:
// NOTE: I haven't compiled this code so it may have typos/errors I haven't noticed
UIView *canvasView; // the view that will be the "canvas" for your game
... // initialize the canvas, etc.
CALayer *imageLayer = [CALayer layer];
UIImage *image = [UIImage imageNamed:@"MyImage.png"];
imageLayer.contents = (id)image.CGImage;
imageLayer.bounds = CGRectMake(0, 0, image.size.width, image.size.height);
imageLayer.position = CGPointMake(100, 100); // NOTE: unlike a UIView's frame origin, a CALayer's position is at its anchorPoint, which is the center by default
[canvasView.layer addSublayer:imageLayer];
So basically it looks a lot like working with views but with some added performance (and occasional headache).
P.S. - One thing to keep in mind is that if you change a layer property that is animatable (e.g. position, opacity, etc.), Core Animation will implicitly animate it (e.g. if you write imageLayer.position = somePoint; the layer animates to that position rather than having its position set immediately). There are easy ways to work around that, but that's a topic for another question/answer.
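For reference, one common way to suppress that implicit animation is to wrap the change in a CATransaction with actions disabled; a minimal sketch:
[CATransaction begin];
[CATransaction setDisableActions:YES]; // skip the implicit animation for changes inside this transaction
imageLayer.position = somePoint;
[CATransaction commit];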

implementing stretchable dialog borders in iphone sdk

I want to implement dialog borders that scale to whatever size I need the dialog to be. Perhaps there is a better, more conventional name for this sort of thing. If there is, it would be great if someone could edit the title.
Anyhow, I'd like to do this so I can have dialogs of any size without the visual artifacts that come from scaling border art to small, large, or wildly unproportional dimensions. I have a few ideas on how this is done, but am not sure which is better for the iPhone. I have a few questions.
1) Should I make a containing view object that overloads its drawRect method and draws the images at their appropriate scale and position when the method is called, or should I make a containing view object that simply contains 8 UIImageViews? I suspect the latter approach won't work if I need to actively scale the resulting dialog class, as in an animation.
1b) If overloading drawRect is the way to go, does someone have some sample code or a link to an example that demonstrates drawing an image directly from drawRect()?
2) Is it generally better to create:
a) a single 3x3 image where each segment sits in its own 1x1 cell of the grid? If so, is it simple to draw a portion of this image onto my target view in drawRect (assuming drawRect is the way to go)?
b) the pieces separately in 8 different files?
UPDATE:
To clarify, the idea is to take any custom border art and be able to stretch the 2nd, 4th, 6th, and 8th cells (in a 3x3-cell grid) to form a border of any size using just those assets. Stretching a plain image as a whole would distort the corners, so I'd like to stretch only those even-numbered cells as needed and tack the corners on so there is no distortion. I'd seen this done before, so I thought it might be a standard technique with a standard name other than what I called it.
Anyhow, I was advised that adding 8 UIImageViews to a container would not be as efficient as drawing the UIImages on the fly in drawRect, so I took that approach using CGContextDrawImage() after applying the necessary transformations to translate and invert the context's Y axis. Because this function draws from the bottom-left corner of an image onto a UIView whose origin is at the top left, the image comes out upside down without the Y-axis inversion. I noticed that the suggested UIImage methods like drawAtPoint: work as well, and similarly except for the inversion, since UIImage draws in the same orientation as UIViews. I will continue my implementation with the former and see how it goes, but one other question:
Would someone happen to know which of these approaches is more efficient, faster, etc.?
I'm not sure I follow, but here's my best shot at an answer...
Using drawRect: or adding individual UIImageViews to a parent view is entirely up to you. UIImageView gives you a bit of encapsulated functionality for free, but otherwise they are the same as far as appearances go.
If you do want to go the drawRect route, you just need to use UIImage's drawAtPoint: method. Do the math for where you want it to be, and draw it. You can calculate your points based on the parent view's dimensions.
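As a rough, untested sketch of what that can look like (the asset names here are made up for illustration):
- (void)drawRect:(CGRect)rect
{
    // Corners keep their natural size; edges stretch to fill the gaps between them.
    UIImage *corner = [UIImage imageNamed:@"dialogCorner.png"];  // hypothetical asset
    UIImage *edge   = [UIImage imageNamed:@"dialogEdgeTop.png"]; // hypothetical asset

    [corner drawAtPoint:CGPointZero];

    CGRect topEdge = CGRectMake(corner.size.width,
                                0,
                                self.bounds.size.width - 2 * corner.size.width,
                                edge.size.height);
    [edge drawInRect:topEdge];
    // ...and so on for the remaining corners and edges, with the rects
    // calculated from self.bounds.
}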
As far as scaling, it's impossible to resize these images without scaling them, so I'd plan ahead and make your originals as large or larger than you ever expect to display them.
Hope that helps a little?
Cheers
If you want a border on a dialog box, assuming the box is a UIView (or subclass), then set the layer's border properties and let the system draw the border for you.
#import <QuartzCore/QuartzCore.h>
// ...
view.layer.borderWidth = 2;
view.layer.borderColor = [UIColor whiteColor].CGColor;
view.layer.cornerRadius = 0; // 0=square corners, >0 for rounded

iPhone UIImage overlap render bug

I've come across a strange render bug on iPhone OS 3.0...
I have two images. One is a non-transparent PNG that is predominantly black with a white gradient fading upward.
The second is a transparent PNG with translucent clouds.
When I overlay the two using UIImageViews, the intersection of the clouds and the white gradient triggers a render bug: a rather odd-looking graphical glitch removes all opacity from the image on top (in this case the clouds) and causes the glitched portion of the image to render on top of all layers in the current view (including ones it is technically underneath).
It only occurs at the intersection of the two images, so typically only a very small block exhibits the error while the rest of the images render normally.
Has anyone seen this and does anyone have a fix? I want to check before I move on to Core Animation which will hopefully address the problem (since I imagine that CA or even OpenGL is more apt to handle overlapping alpha channels).
Screenshot found here:
http://www.jasconi.us/glitch.jpg
You can see the intersection of the two images at the lower right.
From your description, this seems to be a bug in Apple's code. I would report it to Apple and wait for a fix.
In the meantime, you can try to implement the same functionality in Core Animation or OpenGL in the hope that the bug is in the higher-level UIImageView, but since the UIImageView itself uses Core Animation, it's possible that this bug is simply unavoidable until it's fixed.
I assume you're displaying them using UIImageView? If so, have you set opaque to NO on the transparent view?

How do I use CALayer with the iPhone?

Currently, I have a UIView subclass that "stamps" a single 2px by 2px CGLayerRef across the screen, up to 160 x 240 times.
I currently animate this by moving the UIView (actually, a UIImageView) "up" the screen 2 pixels and then drawing the next "row".
Would using multiple CALayer layers speed up performance of rendering this animation?
Are there tutorials, sample applications or code snippets for use of CALayer with the iPhone SDK?
The reason I ask is that most of the code snippets I find that demonstrate simple examples of CALayer employ method calls that do not work with the iPhone SDK. I appreciate any advice or pointers.
Okay, well, if you want something with good examples of CA code that draws things like that and works on the phone, I recommend the GeekGameBoard code that Jens Alfke published (it is an improved version of some Apple demo code).
Based on what you are describing, I think you are doing something way more complicated than it needs to be. My impression is that you basically want a static view that you are animating by shifting its position so that it is partially off screen. If you just need to draw some static content in your drawRect, going through layers is not going to be faster than just calling CGContextFillRect() with your color. After that you could just use implicit animations and the animator proxy on UIView to move the view. I suspect you could even get rid of the custom drawRect: implementation with a patterned UIColor, but I honestly have not benchmarked the difference between the two.
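For example, moving the view with the UIView animation API could be as simple as the following minimal sketch (patternView is a placeholder name):
[UIView beginAnimations:nil context:NULL];
[UIView setAnimationDuration:0.25];
// Shift the view up two pixels; Core Animation interpolates the movement.
patternView.center = CGPointMake(patternView.center.x, patternView.center.y - 2.0);
[UIView commitAnimations];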
What CALayer methods are you seeing that don't work on iPhone? Aside from animation features tied to Core Image, I have not noticed much that is missing. The big thing you are likely to notice is that all views are layer backed (so you do not need to do anything special to use layers; you can just grab a UIView's layer through its layer accessor method), and the coordinate system has a top-left origin.
In any event, generally having more things is slower than having fewer things. If you are just repeating the same pattern over and over again, you are likely to find that the best performance comes from implementing a custom UIView/CALayer/UIColor that knows how to draw what you want, rather than placing visually identical layers or views next to each other.
Having said that, generally layers are lighter weight than views, so if you have a lot of separate elements that you need to keep logically separated you will find that moving to layers can be a win over using views.
You might want to look at -[UIColor initWithPatternImage:] depending on exactly what you are trying to do. If you are using this two pixel pattern as a background color you could just make a UIColor that draws it and set the background.
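A minimal sketch of that approach (the image name is hypothetical):
// Let UIKit tile the 2x2 pattern for you instead of stamping it in drawRect:.
UIImage *pattern = [UIImage imageNamed:@"TwoPixelPattern.png"];
myView.backgroundColor = [UIColor colorWithPatternImage:pattern];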
What CALayer methods are you seeing that don't work on iPhone?
As one example, I tried implementing the grid demo here, without much luck. It looks like CAConstraintLayoutManager and CAConstraint are not available in QuartzCore.h.
In another attempt, I tried a very simple, small 20x20 CALayer object as a sublayer of my UIView's layer property, but that didn't show up.
Right now, I have a custom UIView in which I override the drawRect method. In drawRect I grab a context and render two types of CGLayerRefs:
At "off" cells I draw the background color across the entire 320x480 canvas.
At "on" cells, I either draw a single CGLayerRef across a grid of 320x480 pixels (initialization) or across a 320x2 row (animation).
During animation, I make the UIImageView clip to 320x478 pixels and draw a single row. This "pushes" my bitmap up the screen two pixels at a time.
Basically, I'd like to test whether or not using CALayer will accomplish two things:
Make my rendering faster, if CALayer has less overhead than what I'm doing now
Make my animation smoother, by letting me transition a layer up the screen smoothly
Unfortunately, I can't seem to get a basic CALayer working at the moment, and haven't found a good chunk of sample code to look at and play with.