HUGE image to show on the iPhone

Ok, here's my problem: I have a HUGE JPG file, 18000 x 18000 pixels and 41 MB in size.
If you really need to know, it's a map of a section of the country with services.
My project is really simple: I just need to be able to display and zoom this granddaddy-sized image, all the way from aspect fit to 100%, on the iPhone. I'm not sure whether this can be done or how long it will take. I'd appreciate any insights.
I have tried using UIImageView, but I read that images really shouldn't exceed 1024 x 1024, which is way below what I have. If you have an idea of how to go about this, please share!

You should split the image into tiles at a range of magnifications. Calculate and build these offline, and ship them as individual files in the app bundle. Given the current zoom of your display, pick the closest pre-built level, select which tiles are needed to cover the screen, and lay them out in a grid. As the user zooms, switch to the appropriate tile level.
The benefit of this is that you never have to load the HUGE file into memory, only as much as is needed.
This is how Google maps does it.
Can't give you any code, sorry!
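To make the idea concrete, here is a rough Swift sketch of picking a pre-built level and the tiles that cover the screen. The tile naming scheme, the 256-pixel tile size, and the halving-per-level scheme are assumptions for illustration, not anything the answer prescribes:

```swift
import UIKit

// Hypothetical layout: tiles pre-cut offline and shipped in the app bundle as
// "map_<level>_<col>_<row>.png", where level 0 is the full-resolution image
// and each higher level halves the resolution.
struct TileSource {
    let tileSize: CGFloat = 256   // pixel size of each pre-cut tile

    // Pick the pre-built level whose resolution is closest to the current zoom.
    func level(forZoomScale zoom: CGFloat) -> Int {
        // zoom 1.0 == 100%; each level up halves the pixels we need to draw.
        let downsample = Double(1.0 / max(zoom, 0.0001))
        return max(0, Int(floor(log2(downsample))))
    }

    // Which tiles cover a visible rect given in full-image pixel coordinates?
    func tiles(covering visibleRect: CGRect, level: Int) -> [UIImage] {
        let scale = CGFloat(1 << level)   // how much this level is downsampled
        let span = tileSize * scale       // full-image pixels covered by one tile
        var images: [UIImage] = []
        let firstCol = Int(visibleRect.minX / span), lastCol = Int(visibleRect.maxX / span)
        let firstRow = Int(visibleRect.minY / span), lastRow = Int(visibleRect.maxY / span)
        for row in firstRow...lastRow {
            for col in firstCol...lastCol {
                // Tiles past the edge of the map simply don't exist and are skipped.
                if let image = UIImage(named: "map_\(level)_\(col)_\(row)") {
                    images.append(image)
                }
            }
        }
        return images
    }
}
```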

You should follow an approach similar to what Google Maps and other map sites do: slice the whole map into sections, so users don't have to load the whole map when it isn't necessary (plus it makes loading much faster).
There are a couple of solutions that might work for you, like OpenLayers, or even creating a custom Google Map with your images, as seen here and here.

Here is an example from Apple for processing large images, called PhotoScroller. The images have already been tiled. If you need an example of tiling an image in Cocoa, check out cimgf.com.
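For the offline tiling step itself, here is a minimal sketch of the general idea (this is not the cimgf.com code; the 256-pixel tile size and the file naming are assumptions):

```swift
import UIKit

// Rough sketch of the offline preprocessing step: cut one large CGImage into
// 256 x 256 PNG tiles named "tile_<col>_<row>.png" in a chosen directory.
func writeTiles(from source: CGImage, to directory: URL, tileSize: Int = 256) throws {
    let cols = Int(ceil(Double(source.width)  / Double(tileSize)))
    let rows = Int(ceil(Double(source.height) / Double(tileSize)))
    for row in 0..<rows {
        for col in 0..<cols {
            // Clamp the last row/column so we don't crop past the image edge.
            let rect = CGRect(x: col * tileSize,
                              y: row * tileSize,
                              width:  min(tileSize, source.width  - col * tileSize),
                              height: min(tileSize, source.height - row * tileSize))
            guard let tile = source.cropping(to: rect) else { continue }
            let data = UIImage(cgImage: tile).pngData()
            try data?.write(to: directory.appendingPathComponent("tile_\(col)_\(row).png"))
        }
    }
}
```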

Related

GPUImage: to use or not to use smoothlyScaleOutput

I am using GPUImage to apply filters in my app (I use it to apply one-value filters to still photos). I'm trying to decide whether I need to use smoothlyScaleOutput or not. The downside is that it takes a long time to load large photos. I've read Brad Larson say:
The smoothlyScaleOutput: option for a photo tells the framework to use trilinear filtering when downsampling the photo. That is, for large photos that you're shrinking down, it will produce a much smoother output. If you don't need to shrink a photo, you can turn that off for better performance and a slightly sharper picture.
Just curious, but for those out there who've used GPUImage in the same fashion as I am: is it worth it for me to enable this option? It's hard for me to tell the difference, but I've only sampled a handful of photos using some of the basic filters (brightness, contrast, sepia, etc.).
I was also confused about what constitutes shrinking a photo. Would that mean using UIImage's drawInRect: to draw the photo at a smaller resolution? Or taking a large photo and transforming its size to fit in a UIImageView? Or both?
Any help would be appreciated, thanks.
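For reference, the option in question is the flag passed when the still photo is loaded into GPUImage. Below is a minimal sketch of that pipeline, assuming the Objective-C GPUImage framework and its current method names; the sepia filter is an arbitrary choice:

```swift
import UIKit
import GPUImage

// Minimal still-photo pipeline. smoothlyScaleOutput asks GPUImage to build
// mipmaps and use trilinear filtering when the photo is later rendered at a
// smaller size.
func filteredImage(from photo: UIImage, smoothScale: Bool) -> UIImage? {
    guard let source = GPUImagePicture(image: photo, smoothlyScaleOutput: smoothScale) else {
        return nil
    }
    let filter = GPUImageSepiaFilter()
    source.addTarget(filter)
    filter.useNextFrameForImageCapture()   // keep the framebuffer so it can be read back
    source.processImage()
    return filter.imageFromCurrentFramebuffer()
}
```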

Any way to overcome the 5 custom icon URLs per request?

From the Google Static Maps API documentation:
Static Maps service allows up to five unique custom icons per request. Note that each of these unique icons may be used multiple times within the static map.
I have more than 5 custom icons per request, maybe up to 40.
Is there a way to overcome this? Is it possible to use sprites in static maps to overcome this?
Here's how I got around this:
1. Collect up all your map data. You probably already know how, and depending on your source it's going to be different anyway. The required bits are: center point, zoom, map type, and output image size. I am going to assume sensor (whether the application has access to GPS) is false. You will also need all of your marker information, which includes the icon you are going to use and the markers' geo coordinates.
2. POST all of this to the CF page that is going to make the magic happen.
3. Map your first 5 points as normal. Get the result as a .png.
4. Map your next 5 points, but add "style=feature:all|visibility:off" to the query string, and get the result as a .png. This gives you a PNG with a transparent background that has all of your marker icons on it. It will be the same size as your initial map, and the markers will be placed correctly within that rectangle.
5. Watermark that image on top of your initial map. NOTE: this step is probably going to vary the most depending on your language of choice and what image-manipulation features it offers.
6. Repeat steps 4 and 5 until you have all of your markers.
7. Write out your image, with all of the markers now on it.
8. Serve up a link to that file instead of using the normal Google link.
I have a more detailed explanation here, with some example code in ColdFusion.
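As a rough Swift illustration of steps 3 to 5 (the original code is ColdFusion and isn't shown here; the query parameter names follow the Static Maps documentation, while YOUR_KEY and the marker strings are placeholders):

```swift
import UIKit

// Build one Static Maps request for up to five custom-icon markers.
// When hideBaseMap is true, the style parameter turns the base map off, so the
// response is just the markers (per the steps above, on a transparent background).
func staticMapURL(center: String, zoom: Int, size: String,
                  markers: [String], hideBaseMap: Bool) -> URL? {
    var components = URLComponents(string: "https://maps.googleapis.com/maps/api/staticmap")!
    var items = [
        URLQueryItem(name: "center", value: center),
        URLQueryItem(name: "zoom", value: String(zoom)),
        URLQueryItem(name: "size", value: size),
        URLQueryItem(name: "key", value: "YOUR_KEY")   // placeholder
    ]
    items += markers.map { URLQueryItem(name: "markers", value: $0) }   // max five per request
    if hideBaseMap {
        items.append(URLQueryItem(name: "style", value: "feature:all|visibility:off"))
    }
    components.queryItems = items
    return components.url
}

// Step 5: "watermark" the marker-only images on top of the initial map.
func composite(base: UIImage, overlays: [UIImage]) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: base.size)
    return renderer.image { _ in
        base.draw(at: .zero)
        for overlay in overlays {
            overlay.draw(at: .zero)   // same pixel size as the base, markers already positioned
        }
    }
}
```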

Fake long exposure on iOS

I have to add long-exposure photo capabilities to an app. Since I know that this is not really possible, I have to fake it. It should work like "Slow Shutter" or "Magic Shutter".
Sadly, I have no clue how to achieve this. I know how to take images with the camera (through AVFoundation), but I'm stuck on merging them to fake long shutter times.
Possibly I need to manipulate and combine all the images with Core Graphics, but I'm not sure about this (or even how). Maybe there's a better solution.
I would appreciate any help I can get here,
thank you people!
You might try the plus lighter blend mode.
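A minimal sketch of that suggestion with Core Graphics, assuming the frames have already been captured; the 1/N alpha is an optional tweak that turns the additive blend into an average so highlights don't clip immediately:

```swift
import UIKit

// Merge a burst of frames into one "long exposure" image.
// .plusLighter adds pixel values; drawing each frame at alpha 1/count turns
// the sum into an average, so bright areas don't blow out immediately.
func mergeFrames(_ frames: [UIImage]) -> UIImage? {
    guard let first = frames.first else { return nil }
    let renderer = UIGraphicsImageRenderer(size: first.size)
    return renderer.image { context in
        UIColor.black.setFill()                                // neutral base for the additive blend
        context.fill(CGRect(origin: .zero, size: first.size))
        let alpha = 1.0 / CGFloat(frames.count)
        for frame in frames {
            frame.draw(at: .zero, blendMode: .plusLighter, alpha: alpha)
        }
    }
}
```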
Well, I suppose it would be possible to average together the results of several shots. I've mucked around a bit with the Core Graphics stuff to resize images (averaging together adjacent pixels), but only with lower-res images. The algorithm I used is here -- maybe it'll give you some ideas.
There may, of course, be a better way, and some tricks for working efficiently with high-res images. Can't help you there.
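Here is a sketch of the straight-averaging idea (not the algorithm linked above): drawing frame k over the accumulated result at alpha 1/k keeps a running mean without ever holding more than two images at once.

```swift
import UIKit

// Running average: after drawing frame k at alpha 1/k over the mean of frames
// 1..k-1, the buffer holds the mean of frames 1..k. (Some 8-bit rounding error
// accumulates, which is usually fine for faking a long exposure.)
func averageFrames(_ frames: [UIImage]) -> UIImage? {
    guard var accumulated = frames.first else { return nil }
    let renderer = UIGraphicsImageRenderer(size: accumulated.size)
    for (index, frame) in frames.enumerated().dropFirst() {
        accumulated = renderer.image { _ in
            accumulated.draw(at: .zero)                          // mean of frames so far
            frame.draw(at: .zero, blendMode: .normal,
                       alpha: 1.0 / CGFloat(index + 1))          // fold in frame k = index + 1
        }
    }
    return accumulated
}
```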
Convert the images to pixel bitmaps. Align and stack the bitmaps. Then try applying various 3D convolution filters to the 3D pixel array.

Using custom map with MKMapKit

I am creating an iPhone app for OS 4.0, and I am attempting to integrate a custom map with a standard MKMapView. I have been provided a map in .eps format (a vector image), and I want to somehow overlay this on an MKMapView and restrict the scrolling boundaries so users cannot scroll outside the bounds of the custom map. What's the best way to go about this?
I have read some stuff about hosting map tiles on a server, but this seems overly complex for my application. This would just be a map for an attraction roughly the size of a public zoo, so I would think it would be feasible to just convert the .eps to a .png and overlay it, but this might not give the best performance.
I understand that I could conceivably use a UIScrollView to do the job, but the problem is that I have dynamically generated MKPinAnnotationViews placed on the map, whose positions must be based on latitude and longitude, so I can't think of an elegant or reasonable way to do it with a scroll view. Any ideas?
Thanks!
-Matt
Apple has a great bit of example code that will show you what you need to do. Check out the TileMap sample - it is available as part of the (free) WWDC 2010 samples download.
It shows you how to use the gdal2tiles utility to convert an input map into a tree of overlay tiles.
Another good bit of Apple sample code to check out is HazardMap, which is part of the regular SDK samples.
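On current SDKs the same approach takes only a few lines with MKTileOverlay (the TileMap sample predates that class and rolls its own overlay). Here is a sketch, assuming tiles already generated by gdal2tiles and hosted or bundled with a standard {z}/{x}/{y} layout; the URL and the zoo coordinates are placeholders:

```swift
import MapKit
import UIKit

// Sketch of a custom tile overlay on top of MKMapView, with the visible
// region restricted to the attraction.
class ZooMapViewController: UIViewController, MKMapViewDelegate {
    private let mapView = MKMapView()

    override func viewDidLoad() {
        super.viewDidLoad()
        mapView.frame = view.bounds
        mapView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        mapView.delegate = self
        view.addSubview(mapView)

        // Tiles produced offline (e.g. by gdal2tiles) with a {z}/{x}/{y} layout;
        // the URL here is a placeholder.
        let overlay = MKTileOverlay(urlTemplate: "https://example.com/zoo-tiles/{z}/{x}/{y}.png")
        overlay.canReplaceMapContent = false          // draw on top of Apple's base map
        mapView.addOverlay(overlay, level: .aboveLabels)

        // Keep users from scrolling away from the attraction.
        // setCameraBoundary is iOS 13+; on older SDKs, clamp the region back
        // in mapView(_:regionDidChangeAnimated:) instead.
        let center = CLLocationCoordinate2D(latitude: 51.5353, longitude: -0.1534) // placeholder
        let region = MKCoordinateRegion(center: center,
                                        latitudinalMeters: 1500,
                                        longitudinalMeters: 1500)
        mapView.setCameraBoundary(MKMapView.CameraBoundary(coordinateRegion: region), animated: false)
    }

    func mapView(_ mapView: MKMapView, rendererFor overlay: MKOverlay) -> MKOverlayRenderer {
        if let tileOverlay = overlay as? MKTileOverlay {
            return MKTileOverlayRenderer(tileOverlay: tileOverlay)
        }
        return MKOverlayRenderer(overlay: overlay)
    }
}
```

Because the pins stay on a real MKMapView, the dynamically generated MKPinAnnotationViews keep working from latitude and longitude, which avoids the UIScrollView problem described in the question.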

How to show a big image on the iPhone (without overflowing the memory)?

I have an application that lets users view images. The user decides which images to use, so the size can range from 10x10 to 10000000x10000000 (I am exaggerating). All is well up to a certain size, when the image becomes bigger than the iPhone's memory. Quite understandably.
But how do I fix it? Is there a way to load only a portion of the image? (I'm using a CATiledLayer, so I could load/release tile by tile.)
Thanks in advance!
Unless you have an uncompressed image format, it would be very hard to load the image in patches. You would have to provide the patches the user will load ahead of time, determine what portion of the image to show, and load the correct patches. There is an example of this, "ScrollViewSuite", that demonstrates the technique, but it does require a preprocessing step.
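A minimal sketch of the CATiledLayer side of this, assuming the image has already been pre-cut into 256 x 256 PNG tiles named tile_<col>_<row>.png; level-of-detail handling is omitted for brevity:

```swift
import UIKit

// Sketch of a CATiledLayer-backed view that decodes only the tiles covering
// the rect being drawn, so the full image is never held in memory at once.
class TiledImageView: UIView {
    let tileSize: CGFloat = 256

    override class var layerClass: AnyClass { CATiledLayer.self }

    override init(frame: CGRect) {
        super.init(frame: frame)
        let tiledLayer = layer as! CATiledLayer
        tiledLayer.tileSize = CGSize(width: tileSize, height: tileSize)
        tiledLayer.levelsOfDetail = 4
    }

    required init?(coder: NSCoder) { fatalError("not used in this sketch") }

    override func draw(_ rect: CGRect) {
        // CATiledLayer calls this once per visible tile (on background threads),
        // so only a screenful of image data is decoded at any moment.
        let firstCol = Int(rect.minX / tileSize), lastCol = Int(rect.maxX / tileSize)
        let firstRow = Int(rect.minY / tileSize), lastRow = Int(rect.maxY / tileSize)
        for row in firstRow...lastRow {
            for col in firstCol...lastCol {
                // Tiles past the image edge simply don't exist and are skipped.
                guard let tile = UIImage(named: "tile_\(col)_\(row)") else { continue }
                tile.draw(in: CGRect(x: CGFloat(col) * tileSize,
                                     y: CGFloat(row) * tileSize,
                                     width: tileSize,
                                     height: tileSize))
            }
        }
    }
}
```

Set this view as the zoomable content of a UIScrollView and the tiles are requested on demand as the user scrolls and zooms.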