iPhone UIImage overlap render bug

I've come across a strange render bug on iPhone OS 3.0...
I have two images. One is a non-transparent PNG that is predominantly black with a white gradient fading upward.
The second is a transparent PNG with translucent clouds.
When I overlay the two using UIImageViews, the intersection of the clouds and the white gradient triggers a render bug: a rather odd-looking graphical glitch that removes all opacity from the image on top (in this case the clouds) and causes the glitched portion of the image to render on top of all layers in the current view (including ones it is technically underneath).
It only occurs at the intersection of the two portions of the images. So typically only a very small block is experiencing the error while the rest of the images render normally.
Has anyone seen this, and does anyone have a fix? I want to check before I move on to Core Animation, which will hopefully address the problem (since I imagine that CA or even OpenGL is more apt to handle overlapping alpha channels).
Screenshot found here:
http://www.jasconi.us/glitch.jpg
You can see the intersect of the two images at the lower right.

From your description, this seems to be a bug in Apple's code. I would report it to Apple and wait for a fix.
In the meantime, you can try to implement the same functionality in Core Animation or OpenGL in the hope that the bug lives in the higher-level UIImageView; but since UIImageView itself uses Core Animation, it's possible that this bug is simply unavoidable until it's fixed.

I assume you're displaying them using UIImageView? If so, have you set opaque to NO on the transparent view?
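For reference, a minimal sketch of the setup I mean (the image names and self.view are placeholders for yours):
UIImageView *gradientView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"gradient.png"]];
UIImageView *cloudsView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"clouds.png"]];
// A translucent image view must not be flagged opaque, and needs a clear
// background, or the compositor may skip blending its alpha channel.
cloudsView.opaque = NO;
cloudsView.backgroundColor = [UIColor clearColor];
[self.view addSubview:gradientView];
[self.view addSubview:cloudsView]; // clouds layered on top
[gradientView release];
[cloudsView release];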

Related

Unity sprites don't render properly

I recently came across a problem I can't solve: my sprites aren't drawn properly. I have tried a lot of different things and couldn't find any solution.
Here is how the image should look in Unity:
And here is how it actually looks:
If someone could tell me how to fix this, I would be very grateful.
Presumably the top image is a screenshot of your image manipulation program, many of which use the chequerboard pattern to mean transparency. As such, the image you have exported is a gradient going from almost solid white at the bottom to transparent at the top. This is why the image appears as such in Unity.
Also, if you're wondering why the image appears as though it has bands of different colours, this is due to a problem called colour banding. It can be fixed with a technique called dithering (which adds some noise to the image), but how you do so will depend on which image manipulation program you are using.

White edge when orienting dark colored view?

I am just playing with an iPhone test app that orients the view when the simulator is rotated left or right. Everything worked fine when my view had a white background, however when I altered the view background to a darker color I noticed a white edge (1 pixel) around the view as it rotates. Has anyone come across this before or know how I might get rid of it. Indeed it might be an issue with the simulator that won't show on the actual device, but I just thought it worth asking for future reference.
I have seen something similar before.
Go through the nib (.xib) files and make sure that all of the views have an appropriate dark background colour set, most notably the nib file which contains the "window". Click on them and set the background colour.
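Or as a quick sanity check in code (a sketch; the variable names and colour are placeholders, substitute whatever dark colour your design uses):
// In your app delegate, after the window and root view are set up:
window.backgroundColor = [UIColor blackColor];
viewController.view.backgroundColor = [UIColor blackColor];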
I've never noticed it but then again I've never tried to rotate a very dark view. There might be a small border to keep the view from visually bleeding into the status bars or other views.
Rotation and scaling in the discrete domain (as opposed to the analog domain) is quite difficult, because after a rotation or scaling the transformed location may not fall exactly on a pixel. You can suppress the effect by smoothing (anti-aliasing) the edges, but I believe that is not implemented here, hence the bug.

Tinting iPhone application screen red

I'm trying to place a red tint on all the screens of my iPhone application. I've experimented on a bitmap and found I get the effect I want by compositing a dark red color onto the screen image using Multiply (kCGBlendModeMultiply).
So the question is how to efficiently do this in real time on the iPhone?
One dumb way might be to grab a bitmap of the current screen, composite into the bitmap and then write the composited bitmap back to the screen. This seems like it would almost certainly be too slow. In addition, I need some way of knowing when part of the screen has been redrawn so I can update the tinting.
I can almost get the effect I want by putting a red, translucent, fullscreen UIView above everything. That tints everything red without further intervention on my part, but the effect is much "muddier" than the results from the multiply composite.
So do any wizards out there know of some mechanism I can use to automatically composite the red over the app in similar fashion to what the translucent red UIView does?
I managed to more or less make this work, but with some side effects:
I set up a UIView on top of all my app's views (attached to the window) which is opaque and has userInteractionEnabled set to NO.
This UIView has a custom drawRect: method which first fills the complete area with red and then, after taking a "screenshot" of my window's view hierarchy, renders that image into the UIView with
CGContextSetBlendMode(c, kCGBlendModeMultiply);
To keep this UIView up to date with the current state of the app's UIViews, I constantly produce these "screenshots" and render them as fast as possible.
I set up an NSTimer which does this snapshotting/rendering at a defined frequency and which is also added to the NSRunLoop for the "tracking" mode.
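A stripped-down sketch of that drawRect: (untested; -windowSnapshot is a hypothetical helper of mine, not API):
#import <QuartzCore/QuartzCore.h> // for -renderInContext:

- (void)drawRect:(CGRect)rect {
    CGContextRef c = UIGraphicsGetCurrentContext();
    // 1. Fill the whole overlay with the tint colour.
    CGContextSetFillColorWithColor(c, [UIColor redColor].CGColor);
    CGContextFillRect(c, self.bounds);
    // 2. Multiply a snapshot of the app's views on top of the tint.
    UIImage *snapshot = [self windowSnapshot];
    CGContextSetBlendMode(c, kCGBlendModeMultiply);
    [snapshot drawInRect:self.bounds];
}

- (UIImage *)windowSnapshot {
    // Hide the overlay while snapshotting so the tint doesn't compound.
    self.hidden = YES;
    UIGraphicsBeginImageContext(self.window.bounds.size);
    [self.window.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *shot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    self.hidden = NO;
    return shot;
}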
RESULT: some really laggy response from the UI, with several odd visual effects, but still usable if you do not set the frequency of snapshotting/rendering too high.
See screenshot here...
The result looks okay, but the usability really suffers a lot. I had a look at the OpenGL examples before trying this approach, but OpenGL is a whole lot of very different (mostly C) code which sits very close to the hardware and can give you a real headache.
So, the described approach is what I will shoot for with my next app. I hope Apple accepts it even though it degrades the user experience during night-vision mode. If they would simply make CALayer filter-backed, my problem would be solved a whole lot better and would perform nicely.
You could try this: subclass UIView. Add code to the -drawRect: method to draw the overlay. Make your UIView subclass pose as UIView everywhere in your app with
class_poseAs([CustomUIView class], [UIView class]);

Changing color of part of an image

I have a PNG image file that is partly opaque and partly transparent. I display it in a UIImageView as a mask of sorts over another UIImageView layered behind it (as a sibling subview of a common superview). It gives me perfect borders around something painted using a finger on the lower UIImageView in my stack of UIImageViews. Perhaps there are better ways to do this, but I am new-ish, and this is the best way I came up with thus far.

Nonetheless, my app is in the App Store and now I want to enhance it to provide more images to use as the mask of sorts over the finger painting. But I don't want to bloat my bundle size by adding more static mask images as I did for the initial implementation, and I don't want to spend lots of time in Photoshop making 100 masks. I'd rather programmatically change the color of the mask without affecting the clear portion in the middle, which is not a simple rectangle or circle but a complex shape.

So my question is this: how can I change the colored portion of my loaded image without affecting the clear portion in the middle? Is there a reasonably easy way to do this? Essentially I want to do what is described in this post (How would I tint an image programmatically on the iPhone?) without affecting the clear portion of my image. Thanks for any insights.
Have a look at the Tinted Image sample project. Try out the different modes until you get the effect you want.
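If you can't find the sample, here is a rough sketch of one common approach (a hypothetical UIImage category, not Apple API): multiply the tint over the image, then restore the original alpha channel so the clear portion stays clear.
@implementation UIImage (Tinting) // hypothetical category

- (UIImage *)tintedImageWithColor:(UIColor *)tint {
    CGRect bounds = CGRectMake(0.0f, 0.0f, self.size.width, self.size.height);
    UIGraphicsBeginImageContext(self.size);
    // Draw the original image.
    [self drawInRect:bounds];
    // Multiply the tint over the opaque pixels.
    [tint setFill];
    UIRectFillUsingBlendMode(bounds, kCGBlendModeMultiply);
    // Punch the original alpha back in so transparent pixels stay transparent.
    [self drawInRect:bounds blendMode:kCGBlendModeDestinationIn alpha:1.0f];
    UIImage *tinted = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return tinted;
}

@end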

How do I use CALayer with the iPhone?

Currently, I have a UIView subclass that "stamps" a single 2px by 2px CGLayerRef across the screen, up to 160 x 240 times.
I currently animate this by moving the UIView (actually, a UIImageView) "up" the screen 2 pixels and then drawing the next "row".
Would using multiple CALayer layers speed up performance of rendering this animation?
Are there tutorials, sample applications or code snippets for use of CALayer with the iPhone SDK?
The reason I ask is that most of the code snippets I find that demonstrate simple examples of CALayer employ method calls that do not work with the iPhone SDK. I appreciate any advice or pointers.
Okay, well, if you want some good examples of CALayer code that draws things like that and works on the phone, I recommend the GeekGameBoard code that Jens Alfke published (it is an improved version of some Apple demo code).
Based on what you are describing, I think you are doing something way more complicated than it needs to be. My impression is that you want basically a static view that you are animating by shifting its position so that it is partially off screen. If you just need to draw some static content in your drawRect:, going through layers is not going to be faster than just calling CGContextFillRect() with your color. After that you could just use implicit animations and the animator proxy on UIView to move the view. I suspect you could even get rid of the custom drawRect: implementation with a patterned UIColor, but I honestly have not benchmarked the difference between the two.
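On the iPhone the "animator proxy" equivalent is the UIView animation block API; a minimal untested sketch (scrollingView stands in for your view):
[UIView beginAnimations:nil context:NULL];
[UIView setAnimationDuration:0.1];
// Shift the whole view up two pixels; UIKit animates the change implicitly.
scrollingView.center = CGPointMake(scrollingView.center.x,
                                   scrollingView.center.y - 2.0f);
[UIView commitAnimations];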
What CALayer methods are you seeing that don't work on iPhone? Aside from animation features tied to Core Image, I have not noticed much that is missing. The big things you are likely to notice are that all views are layer backed (so you do not need to do anything special to use layers; you can just grab a UIView's layer through the layer accessor method), and that the coordinate system has a top-left origin.
In any event, generally having more things is slower than having fewer things. If you are just repeating the same pattern over and over again you are likely to find the best performance is implementing a custom UIView/CALayer/UIColor that knows how to draw what you want, rather than placing visually identical layers or views next to each other.
Having said that, generally layers are lighter weight than views, so if you have a lot of separate elements that you need to keep logically separated you will find that moving to layers can be a win over using views.
You might want to look at -[UIColor initWithPatternImage:] depending on exactly what you are trying to do. If you are using this two pixel pattern as a background color you could just make a UIColor that draws it and set the background.
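Something like this, assuming a tile image named pattern.png in your bundle:
// The pattern is tiled automatically when used as a background colour.
UIImage *tile = [UIImage imageNamed:@"pattern.png"];
view.backgroundColor = [[[UIColor alloc] initWithPatternImage:tile] autorelease];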
What CALayer methods are you seeing that don't work on iPhone?
As one example, I tried implementing the grid demo here, without much luck. It looks like CAConstraintLayoutManager and CAConstraint are not available in QuartzCore.h.
In another attempt, I tried a very simple, small 20x20 CALayer object as a sublayer of my UIView's layer property, but that didn't show up.
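For reference, the minimal setup I would expect to work looks something like this (untested sketch):
#import <QuartzCore/QuartzCore.h>

// A sublayer renders as nothing unless it has a non-zero frame and some
// visible content, e.g. a backgroundColor.
CALayer *box = [CALayer layer];
box.frame = CGRectMake(10.0f, 10.0f, 20.0f, 20.0f);
box.backgroundColor = [[UIColor redColor] CGColor];
[self.view.layer addSublayer:box];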
Right now, I have a custom UIView of which I override the drawRect method. In drawRect I grab a context and render two types of CGLayerRefs:
At "off" cells I draw the background color across the entire 320x480 canvas.
At "on" cells, I either draw a single CGLayerRef across a grid of 320x480 pixels (initialization) or across a 320x2 row (animation).
During animation, I make the UIImageView clip to 320x478 pixels and draw a single new row, which "pushes" my bitmap up the screen two pixels at a time.
Basically, I'd like to test whether or not using CALayer will accomplish two things:
Make my rendering faster, if CALayer has less overhead than what I'm doing now
Make my animation smoother, by letting me transition a layer up the screen smoothly
Unfortunately, I can't seem to get a basic CALayer working at the moment, and haven't found a good chunk of sample code to look at and play with.