MKOverlayPathView gets "fuzzy" at high zoom factors - iPhone

I'm developing an application that uses an MKOverlayPathView to highlight certain positions on an MKMapView. Basically the idea is that the overlay path view gets passed a list of CLLocationCoordinate2D structs, then connects them linearly using CGPathAddLineToPoint() in its custom drawRect: implementation.
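Roughly, the drawing code looks like this (a simplified sketch, not my exact code; coordinates and pointCount are stand-ins for my real data, and I'm showing it as the usual drawMapRect:zoomScale:inContext: override):

- (void)drawMapRect:(MKMapRect)mapRect
          zoomScale:(MKZoomScale)zoomScale
          inContext:(CGContextRef)context
{
    CGMutablePathRef path = CGPathCreateMutable();
    // coordinates / pointCount: stand-ins for the overlay's stored data
    for (NSUInteger i = 0; i < pointCount; i++) {
        // Project each CLLocationCoordinate2D into the overlay's drawing space.
        CGPoint point = [self pointForMapPoint:MKMapPointForCoordinate(coordinates[i])];
        if (i == 0)
            CGPathMoveToPoint(path, NULL, point.x, point.y);
        else
            CGPathAddLineToPoint(path, NULL, point.x, point.y);
    }
    CGContextAddPath(context, path);
    CGContextSetStrokeColorWithColor(context, [UIColor blueColor].CGColor);
    // Divide by zoomScale so the stroke keeps a constant on-screen width.
    CGContextSetLineWidth(context, 3.0 / zoomScale);
    CGContextStrokePath(context);
    CGPathRelease(path);
}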
However, both in the Simulator and on a device (an iPhone 3GS running 4.2.1), I notice some strange behavior: the path renders perfectly well up to a certain zoom level, then begins to "fuzz" (for lack of a better term) in certain segments. These problem areas are always sharply delineated, and the rest of the path stays just fine.
I've tried calling setNeedsLayout and invalidatePath on the path view (from the MKMapView's mapView:regionDidChangeAnimated: delegate method), but both of those just cause the fuzzed area to disappear rather than be redrawn properly. Is there a fix for this?
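For completeness, the invalidation attempt looked roughly like this (overlayPathView is a stand-in name for my overlay view property):

- (void)mapView:(MKMapView *)mapView regionDidChangeAnimated:(BOOL)animated
{
    // Both of these make the fuzzed segment vanish instead of redrawing it.
    [self.overlayPathView invalidatePath];
    [self.overlayPathView setNeedsLayout];
}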

Related

Osmdroid, Custom Overlay Drawing

For an Android app I have created a custom overlay to display various items of game data on the map.
In general the overlay works fine and smoothly!
However, there is one configuration that does not work as expected: some objects within the overlay are bound not to geocoordinates but to the current user location. Osmdroid redraws the screen selectively, and when the screen is not centered on the user location, the location-bound items are not updated correctly: new items are drawn only within some clipped rectangle, and old items outside that rectangle are not erased.
So far I have failed to find a mechanism for communicating an overlay's required redraw to the underlying Osmdroid system, i.e. for invalidating the surroundings of the current user location. Any hint, clue, or pointer?
By studying the sample code I realized that it really is considered the overlay's responsibility to issue the appropriate invalidate calls to the map component to maintain its own visual integrity.
I am still struggling with the coordinates for the right invalidate(left, top, right, bottom) calls, because my updates happen on location changes, and it is unclear to me whether the screen pixels of the map to be invalidated should be measured relative to the old location or the new one. This is really a timing question.
However, taking the CPU hit and simply issuing postInvalidate() looks as intended, and it is unclear how much performance is really lost.

How does CATiledLayer know when to provide a new tile?

Because of various reasons, I am considering making my own implementation of CATiledLayer. I have done some investigation, but I can't seem to figure out how CATiledLayer knows which tile to provide.
For example, when you scroll the layer, setPosition: or setBounds: are never called. It looks like the background thread just calls drawLayer:inContext: of the delegate out of the blue without any triggers.
I have found out that CATiledLayer calls setContents: with an instance of "CAImageProvider", and all calls to drawLayer:inContext: originate from that class. So that class is probably the key to determining which tile to draw, but I cannot find any documentation on it.
So... does anybody know how this is working, and how I might be able to override it?
As for the disadvantages of CATiledLayer:
it always uses the screen resolution (or 2x, 4x, etc.); you cannot set it to the native resolution of your source images
you cannot specify any scaling factor other than 2
you have to specify the levelsOfDetail and levelsOfDetailBias, for which I see no implementation reason at all. If you have content that is infinitely scalable, like fractals, then this is very limiting.
most importantly: if you restrict it to zooming in only one direction (I do that by forcing the scale factor of one direction to 1 in setTransform:), it behaves erratically
In drawLayer:inContext:, you can get the bounding box using CGContextGetClipBoundingBox. CGContextGetCTM should give you information about the current resolution.
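For instance, a minimal sketch of a tiled-drawing delegate that recovers the tile rect and scale this way (the delegate wiring to the layer is assumed):

- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)context
{
    // The clip bounding box is exactly the tile being requested,
    // expressed in the layer's own coordinate space.
    CGRect tileRect = CGContextGetClipBoundingBox(context);

    // The CTM encodes the current level of detail; on iOS the y scale
    // is typically negative because of the flipped coordinate system.
    CGAffineTransform ctm = CGContextGetCTM(context);
    CGFloat scaleX = ctm.a;
    CGFloat scaleY = fabs(ctm.d);

    NSLog(@"tile %@ at scale %.2f x %.2f",
          NSStringFromCGRect(tileRect), scaleX, scaleY);

    // ...render only the content that intersects tileRect...
}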

iPhone: no way to draw on screen outside drawRect?

Is there a way to draw on the iPhone screen (on a UIView in a UIWindow) outside of that view's drawRect: method? If so, how do I obtain the graphics context?
The graphics guide mentions the class NSGraphicsContext, but the relevant chapter seems like a blind copy/paste from the Mac OS X docs, and there's no such class in the iPhone SDK.
EDIT: I'm trying to modify the contents of the view in a touch event handler - highlighting the touched visual element. In Windows, I'd use GetDC()/ReleaseDC() rather than the full InvalidateRect()/WM_PAINT cycle, and I'm trying to do the same here. Arranging the active (touchable) elements as subviews carries a huge performance penalty, since there are ~a hundred of them.
No. Drawing is drawRect:'s (or a CALayer's) job. Even if you could draw elsewhere, it would be a code smell (as it is on the Mac). Any other code should simply update your model state, then set yourself as needing display.
When you need display, moving the display code elsewhere isn't going to make it go any faster. When you don't need display (and so haven't been set as needing display), the display code won't run if it's in drawRect:.
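A minimal sketch of that pattern (highlightedIndex, elementIndexAtPoint:, and rectForElementAtIndex: are hypothetical names for your own model state and helpers, not SDK API):

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGPoint point = [[touches anyObject] locationInView:self];
    // Update model state only; no drawing here.
    self.highlightedIndex = [self elementIndexAtPoint:point]; // hypothetical helper
    // Mark just the touched element's rect as dirty.
    [self setNeedsDisplayInRect:[self rectForElementAtIndex:self.highlightedIndex]];
}

- (void)drawRect:(CGRect)rect
{
    // All drawing happens here, driven by the model state set above.
    if (self.highlightedIndex == NSNotFound)
        return;
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(context, [UIColor yellowColor].CGColor);
    CGContextFillRect(context, [self rectForElementAtIndex:self.highlightedIndex]);
}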
I'm trying to modify the contents of the view in a touch event handler - highlight the touched visual element. In Windows, I'd use [Windows code]. … Arranging the active (touchable) elements as subviews is a huge performance penalty, since there are ~hundred of them.
It sounds like Core Animation might be more appropriate for this.
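For example, a sketch assuming the ~hundred elements are lightweight CALayers added as sublayers of the view's layer: hit-test the layer tree in the touch handler and mutate the touched layer directly, and Core Animation repaints it without a drawRect: pass.

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGPoint point = [[touches anyObject] locationInView:self];
    // -hitTest: expects the point in the receiver's superlayer's space.
    CGPoint converted = [self.layer convertPoint:point
                                         toLayer:self.layer.superlayer];
    CALayer *hit = [self.layer hitTest:converted];
    if (hit && hit != self.layer) {
        // Changing a layer property redraws only that layer.
        hit.backgroundColor = [UIColor yellowColor].CGColor;
    }
}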
I don't think you'll be able to draw outside drawRect:... but to get the current graphics context, all you do is CGContextRef c = UIGraphicsGetCurrentContext(); Hope that helps.

Advanced way of using the UIView convertRect method to detect CGRectIntersectsRect multiple times

I recently asked a question regarding collision detection within subviews, and got a perfect answer. I've come to the last point in implementing the collision in my application, but I've run into a new issue.
Using convertRect was fine for getting the CGRect from the subview. I needed it to be a little more complex, as it wasn't exactly rectangles that needed to be detected.
In Xcode I created an abstract class called TileViewController. Among other properties, it has an IBOutlet UIView *detectionView;
I now have multiple classes that inherit from TileViewController, and in each class there are multiple views nested inside the detectionView, which I created using Interface Builder.
The idea is that an object can be a certain shape or size; I've programmatically placed these 'tiled' detection points at the bottom center of each object. A user can select an item and interact with it - in this case, move it around.
Now the method itself works to an extent, but I don't think it's handling the nested views properly, as the detection is off.
A simplified version of this method works - using CGRectIntersectsRect on the detectionView itself - so I'm wondering whether I'm looping through and checking the views correctly.
I wasn't sure whether it was comparing rects in the same view, though I suspect it was. I did modify the code slightly at one point: rather than comparing the values in self.view, I converted the viewController.detectView's UIViews into interactiveView.detectView's coordinate space, but the outcome was the same.
It's rigged so the subviews change colour, but they change colour even when they are not touching, and when they do touch, the wrong UIViews change colour.
I worked out my issue when using convertRect.
I thought I'd read the documentation again (believe me, I've been reading it), but I had previously missed a key piece of information. To use convertRect:toView:, the view passed in needs to be the target of the conversion operation, as mentioned in the docs, but I was using the view itself as the target instead of the parent view. The call that works is:
interactRect = [detectInteractView convertRect:[detectInteractView frame] toView:parentView];
Using the same view as both the receiver and the target was the mistake. I know there aren't many details in this post, but ultimately you can't use the same UIView as the target view - or at least if you can, I couldn't get it to work here!
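For anyone hitting the same issue, the working pattern is roughly this (a sketch; detectInteractView and detectionView are names from my setup, tileViewController and commonAncestor are stand-ins): convert both rects into one common ancestor's coordinate space before intersecting.

UIView *commonAncestor = self.view; // must contain both view hierarchies

// A view's frame is expressed in its superview's coordinates,
// so convert from the superview, not from the view itself.
CGRect interactRect = [detectInteractView.superview convertRect:detectInteractView.frame
                                                          toView:commonAncestor];

for (UIView *tile in tileViewController.detectionView.subviews) {
    CGRect tileRect = [tile.superview convertRect:tile.frame
                                           toView:commonAncestor];
    tile.backgroundColor = CGRectIntersectsRect(interactRect, tileRect)
                         ? [UIColor greenColor]  // collision: highlight
                         : [UIColor clearColor];
}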

MKMapView tile-based overlay

I want to draw a tile-based overlay on top of a MKMapView, but there's no obvious way to do this.
This appears to be tricky since you can zoom to any level with MKMapView (unlike Google Maps).
Has anyone attempted to do this?
In case this question is still getting views, readers should check out the HazardMap and TileMap sample code from WWDC 2010.
I'm working on a game where I need to overlay objects on the map and have them scroll and zoom with the map.
Using annotation views I've been able to solve the first problem and partially solve the second problem. Annotations automatically move with the map. For scaling them, I use the mapView:regionDidChangeAnimated: delegate method to resize my annotations after a zoom event. The problem is that the annotations don't rescale until after the zoom gesture is complete.
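The rescaling itself is simple enough (a sketch; the base size of 40 points and the reference scale of 2000 are arbitrary choices of mine, and viewForAnnotation: returns nil for annotations that are off screen):

- (void)mapView:(MKMapView *)mapView regionDidChangeAnimated:(BOOL)animated
{
    // Points per degree of longitude, used as a crude zoom measure.
    double pointsPerDegree = mapView.bounds.size.width
                           / mapView.region.span.longitudeDelta;
    for (id<MKAnnotation> annotation in mapView.annotations) {
        MKAnnotationView *view = [mapView viewForAnnotation:annotation];
        if (view == nil)
            continue;
        // Scale each annotation relative to an arbitrary reference zoom.
        CGFloat side = 40.0 * pointsPerDegree / 2000.0;
        view.bounds = CGRectMake(0, 0, side, side);
    }
}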
I can think of two approaches other than filing a bug with Apple requesting that they provide an API for map overlays:
Put a (mostly invisible) view over the top of the MKMapView that intercepts zoom and scroll events, handles them, and passes them on to the map view (a simplified sketch follows this list).
Customize the open-source RouteMe library with tiles from Open Street Map or CloudMade (the former is slow, the latter costs money). But it's fully open source so you should be able to do overlays to your heart's content. You could also run your own tile server that does the tile overlays on the server.
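A simplified sketch of the first approach (class and property names are my own; instead of intercepting gestures, this variant lets touches fall through to the map and redraws whenever the region changes):

@interface CoverView : UIView
@property (nonatomic, assign) MKMapView *mapView;
@property (nonatomic, assign) CLLocationCoordinate2D markerCoordinate;
@end

@implementation CoverView

- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event
{
    return nil; // pass every touch through to the MKMapView underneath
}

- (void)drawRect:(CGRect)rect
{
    // The view must be non-opaque with a clear backgroundColor so the
    // map shows through. Project the geo-coordinate into this view's
    // space and draw at that point.
    CGPoint p = [self.mapView convertCoordinate:self.markerCoordinate
                                  toPointToView:self];
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(context, [UIColor redColor].CGColor);
    CGContextFillEllipseInRect(context, CGRectMake(p.x - 5.0, p.y - 5.0, 10.0, 10.0));
}

@end

// Then, in the map view delegate:
// - (void)mapView:(MKMapView *)mapView regionDidChangeAnimated:(BOOL)animated {
//     [self.coverView setNeedsDisplay];
// }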
Something I discovered later:
http://www.gisnotes.com/wordpress/2009/10/iphone-devnote-14-drawing-a-point-line-polygon-on-top-of-mkmapview/
Not quite a tile-based solution, but interesting nonetheless.