SKMaps on mobile is slow - OpenStreetMap

We use the SKMaps (Skobbler) SDK on Android and iOS.
All we do is show one pin on the map; every other setting is left at its default.
We can show the map on both Android and iOS,
but the loading time is too long: it takes about 5-15 seconds.
I've tried different Wi-Fi networks and different devices (iPhone 6s Plus and HTC M9), but it's still slow.
Can anyone help me?

The SDK doesn't seem to be slow at rendering,
so check the download of the requested vector tiles.
Some ways to improve it:
reduce the requested area - increase the zoom level so that as soon as some vector data becomes available something will get rendered (i.e. set the zoom level to 18 or 17)
reduce the size of the requested vector tiles - switch to using LightMaps (initMapSettings.setMapDetailLevel(SKMapsInitSettings.SK_MAP_DETAIL_LIGHT)); the light maps contain fewer elements, so they are smaller and render faster
don't use pannable maps but static maps when displaying the location of a certain POI - for this exact use case, showing a mini map associated with a POI, most products use a static map (a fixed PNG/JPG) because it renders instantly; this is especially relevant when the POIs are in different parts of the world (a minimal sketch of the static-map idea follows below)
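The static-map point is not specific to the Skobbler SDK; a minimal Swift sketch of the idea, using Apple's MKMapSnapshotter to render a fixed image for a single POI instead of embedding a live, pannable map (the helper name and the marker drawing are illustrative assumptions, not part of the SKMaps API):

```swift
import MapKit
import UIKit

// Hypothetical helper: render a small static map image for one POI.
func makeStaticPOIMap(for coordinate: CLLocationCoordinate2D,
                      size: CGSize = CGSize(width: 300, height: 200),
                      completion: @escaping (UIImage?) -> Void) {
    let options = MKMapSnapshotter.Options()
    options.region = MKCoordinateRegion(center: coordinate,
                                        latitudinalMeters: 500,
                                        longitudinalMeters: 500)
    options.size = size

    MKMapSnapshotter(options: options).start { snapshot, _ in
        guard let snapshot = snapshot else { completion(nil); return }

        // Draw a simple marker on top of the rendered map image.
        let renderer = UIGraphicsImageRenderer(size: size)
        let image = renderer.image { _ in
            snapshot.image.draw(at: .zero)
            let point = snapshot.point(for: coordinate)
            UIColor.red.setFill()
            UIBezierPath(ovalIn: CGRect(x: point.x - 5, y: point.y - 5,
                                        width: 10, height: 10)).fill()
        }
        completion(image)
    }
}
```

The resulting UIImage can be shown in a plain UIImageView, so the POI card appears instantly and no vector tiles need to be downloaded at display time.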

Related

MKMapView, 15k annotations

I have a list of 15k pins I need to display on an MKMapView embedded in my app.
I think this is probably too many to load at once, and I want to check if there is a standard way to do this. The XML file with the information about the pins is stored on a webserver.
I think I have a few options, but I'm still not sure where the bottleneck would be (network, displaying many pins at once, loading the pins on the map the first time, etc.):
Parse the whole XML file and add all the annotations. Force the user to zoom in so you can't see too many pins together.
Parse the whole XML file and add all the annotations. Use a library for grouping the pins.
Load only the top 50 pins in the area the user is currently in. Every time the position is updated, call a script on the webserver that serves only 50 positions based on the map's latitude-longitude and zoom.
Cache everything in Core Data and do the same as the previous point.
Are there any performance considerations I should keep in mind? Any other solutions? Will these perform well enough?
Thanks!
The bottleneck will be displaying that many pins at one time on the map. You shouldn't display more than around 500 at one time. Zoomed in might be OK, but zoomed out will affect performance and map visibility.
Here's a library that will do clustering for you:
http://applidium.com/en/news/too_many_pins_on_your_map/
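For reference, newer iOS versions (11+) also ship clustering built into MapKit, so you do not necessarily need a third-party library anymore. A minimal sketch of that built-in route, assuming marker annotation views that share a clusteringIdentifier:

```swift
import MapKit

// Sketch: let MKMapView group nearby pins automatically (iOS 11+).
// Annotation views with the same clusteringIdentifier are merged into a
// single cluster marker whenever they would overlap on screen.
final class PinMapDelegate: NSObject, MKMapViewDelegate {
    func mapView(_ mapView: MKMapView,
                 viewFor annotation: MKAnnotation) -> MKAnnotationView? {
        // Returning nil for cluster annotations keeps the default cluster marker.
        guard !(annotation is MKClusterAnnotation) else { return nil }
        let view = mapView.dequeueReusableAnnotationView(withIdentifier: "pin")
            as? MKMarkerAnnotationView
            ?? MKMarkerAnnotationView(annotation: annotation, reuseIdentifier: "pin")
        view.annotation = annotation
        view.clusteringIdentifier = "poi"   // same identifier => pins may cluster
        return view
    }
}
```

Even with clustering, loading only the annotations for the visible region (option 3 in the question) keeps memory and draw time down.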

Best method for including images on ipad apps, concerning different models

I am getting ready to design a new iPad app, which will be locked in landscape mode. It's going to be relatively image heavy, as it will be a sales utility for hotels and resorts. There will be quite a few "hero shots" of hotels, meaning full or half screen images meant to highlight the property.
With the jump in resolution between the iPad 2 and the iPad 3, I'd like it to look presentable on both versions. What is the standard way of designing for this? Do I include the highest-resolution shots possible for the 3 and let them scale down in software when rendered on older models? Do I include two versions of every image? Do I include just the highest resolution the 2 can render and hope for the best on the 3?

Load own map based on location on iPhone 4

I want to load my own map based on the current location. Let's say I have all the tile images organized by zoom level,
e.g. zoom=16 (all images for level 16), zoom=14 (all images for level 14).
But how do I load these tiles based on location? I mean, how do I get notified so that I can load the right images?
I believe this might be exactly what you need: Route Me, an iOS map library for performing the standard MKMapView types of things with your own maps.
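Route Me handles the tiling for you, but if you only need to know when to swap in your own pre-rendered images, a hedged sketch of the underlying idea: subscribe to location updates with CLLocationManager and convert the coordinate into a slippy-map tile index for the current zoom level. The tile math below is the usual OSM convention; the "zoom/x/y.png" naming is an assumption about how your images are organized.

```swift
import CoreLocation
import Foundation

// Sketch: get notified of location changes and map them to tile indices.
final class TileLoader: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()
    var zoom = 16

    func start() {
        manager.delegate = self
        manager.requestWhenInUseAuthorization()
        manager.startUpdatingLocation()   // the delegate callback below fires on movement
    }

    func locationManager(_ manager: CLLocationManager,
                         didUpdateLocations locations: [CLLocation]) {
        guard let coord = locations.last?.coordinate else { return }
        // Standard OSM tile numbering for the current zoom level.
        let n = pow(2.0, Double(zoom))
        let x = Int((coord.longitude + 180.0) / 360.0 * n)
        let latRad = coord.latitude * .pi / 180.0
        let y = Int((1.0 - log(tan(latRad) + 1.0 / cos(latRad)) / .pi) / 2.0 * n)
        // Hypothetical naming scheme for the pre-rendered images: "<zoom>/<x>/<y>.png"
        print("load tile \(zoom)/\(x)/\(y).png")
    }
}
```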

How to accommodate for the iPhone 4 screen resolution?

According to Apple, the iPhone 4 has a new and better screen resolution:
3.5-inch (diagonal) widescreen Multi-Touch display
960-by-640-pixel resolution at 326 ppi
This little detail affects our apps in a heavy way. Most of the demo apps on the net have one thing in common: they position views in the belief that the screen has a fixed size of 320 x 480 pixels. So what most (if not all) developers do is design everything so that a touchable area is (for example) 50 x 50 pixels - just enough to tap it. Things are positioned relative to the upper left to reach a specific position on screen - let's say the center, or somewhere at the bottom.
When we develop high-resolution apps, they probably won't work on older devices. And if they do, they will suffer a lot from images four times the size, which have to be scaled down in memory.
According to Supporting High-Resolution Screens In Views, from the Apple docs:
On devices with high-resolution screens, the imageNamed:, imageWithContentsOfFile:, and initWithContentsOfFile: methods automatically look for a version of the requested image with the @2x modifier in its name. If it finds one, it loads that image instead. If you do not provide a high-resolution version of a given image, the image object still loads a standard-resolution image (if one exists) and scales it during drawing.
When it loads an image, a UIImage object automatically sets the size and scale properties to appropriate values based on the suffix of the image file. For standard-resolution images, it sets the scale property to 1.0 and sets the size of the image to the image's pixel dimensions. For images with the @2x suffix in the filename, it sets the scale property to 2.0 and halves the width and height values to compensate for the scale factor. These halved values correlate correctly to the point-based dimensions you need to use in the logical coordinate space to render the image.
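A short sketch of what that behaviour looks like in code, assuming a bundle that contains a hypothetical hero.png plus a hero@2x.png variant:

```swift
import UIKit

// On a Retina device, UIImage(named:) picks hero@2x.png automatically.
if let image = UIImage(named: "hero") {   // "hero" / "hero@2x" are hypothetical assets
    print(image.scale)   // 2.0 when the @2x file was loaded, 1.0 otherwise
    print(image.size)    // reported in points, i.e. pixel dimensions / scale
    // A 200 x 200 pixel @2x file therefore reports a size of 100 x 100 points,
    // so layout code written in points keeps working unchanged.
}
```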
This is purely speculation, but if the resolution really is 960 x 640, that's exactly twice the resolution of the current version. It would be trivially simple for the iPhone to check the app's build target, detect a legacy version of the app, and simply scale it by 2. You'd never notice the difference.
Engadget's reporting of the keynote included the following transcript from Steve Jobs:
...It makes it so your apps run automatically on this, but it renders your text and controls in the higher resolution. Your apps look even better, but if you do a little bit of work, then they will look stunning. So we suggest that you do that.
So I infer from that, if you use existing APIs your app will get scaled up. If you take advantage of new iOS4 APIs, you can get all groovy with the new pixels.
It sounds like the display will be OK, but I'm concerned about the logic in my game. Will touchesBegan positions return points in the new resolution? The screen bounds will be different; these types of things could potentially be problems for me.
Scaling to double resolution for display purposes is straightforward, but will this scaling apply to all APIs that input/output a screen coordinate? If not, things are going to break, aren't they?
Fair enough if it's been handled extensively throughout the framework. I would imagine there are a lot of potential APIs this affects.
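UIKit reports touch locations and view bounds in points rather than pixels, so coordinate-based logic keeps working unchanged. A small sketch of reading a touch and converting to pixels only when you genuinely need them (the view class here is just an illustration):

```swift
import UIKit

class GameView: UIView {
    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first else { return }
        let pointLocation = touch.location(in: self)   // in points, same range as before
        let scale = UIScreen.main.scale                // 2.0 on the iPhone 4's Retina display
        let pixelLocation = CGPoint(x: pointLocation.x * scale,
                                    y: pointLocation.y * scale)
        print(pointLocation, pixelLocation)
    }
}
```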
For people who are coming to this thread looking for a solution for a mobile web interface, check out this post on the WebKit blog: http://webkit.org/blog/55/high-dpi-web-sites/
It seems that WebKit solved this problem four years ago.
Yes, it is true.
According to WWDC, it appears that Apple has built in some form of automatic conversion so that the resolution of applications will not be completely off. Think up-converting DVDs for HDTVs.
My guess would be that Apple knows what standards most developers have been using and will already be using those for an immediate conversion. Of course, if you program an application to take advantage of the new resolution, it will look much nicer than whatever the result of Apple's auto-conversion is.
All of your labels and system buttons will be at 326 ppi, but your images will still be pixel-doubled until you add the hi-res resources. I am currently updating my apps. If you build and run in the iPhone 4 simulator, it is presented at 50%; go to Window > Scale > 100% to see the real difference! Labels are smooth, but my images look shocking!

Motion detection using iPhone

I saw at least 6 apps in the App Store that take photos when they detect motion (i.e. a kind of spy stuff). Does anybody know the general way to do such a thing using the iPhone SDK?
I guess their apps take photos every X seconds and compare the current image with the previous one to determine if there is any difference (read: "motion"). Any better ideas?
Thank you!
You could probably also use the microphone to detect noise. That's actually how many security system motion detectors work - but they listen in on ultrasonic sound waves. The success of this greatly depends on the iPhone's mic sensitivity and what sort of API access you have to the signal. If the mic's not sensitive enough, listening for regular human-hearing-range noise might be good enough for your needs (although this isn't "true" motion-detection).
As for images - look into using some sort of string-edit-distance algorithm, but for images. Something that takes a picture every X amount of time, and compares it to the previous image taken. If the images are too different (edit distance too big), then the alarm sounds. This will account for slow changes in daylight, and will probably work better than taking a single reference image at the beginning of the surveillance period and then comparing all other images to that.
If you combine these two methods (image and sound), it may get you what you need.
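For the sound half, a hedged sketch using AVAudioRecorder's level metering to flag loud noises; the threshold and the polling approach are assumptions you would tune, and on a real device you also need microphone permission and an AVAudioSession configured for recording:

```swift
import AVFoundation

// Sketch: record continuously and poll the metering level for spikes.
final class NoiseDetector {
    private var recorder: AVAudioRecorder?

    func start() throws {
        let url = URL(fileURLWithPath: NSTemporaryDirectory())
            .appendingPathComponent("meter.caf")
        let settings: [String: Any] = [AVFormatIDKey: kAudioFormatAppleIMA4,
                                       AVSampleRateKey: 44100.0,
                                       AVNumberOfChannelsKey: 1]
        recorder = try AVAudioRecorder(url: url, settings: settings)
        recorder?.isMeteringEnabled = true
        recorder?.record()
    }

    // Call this periodically, e.g. from a Timer.
    func isLoud() -> Bool {
        guard let recorder = recorder else { return false }
        recorder.updateMeters()
        let level = recorder.averagePower(forChannel: 0)  // dBFS: 0 is max, ~-160 is silence
        return level > -20.0                              // arbitrary threshold, tune for your mic
    }
}
```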
You could have the phone detect light changes, i.e. using the ambient light sensor at the top front of the phone. I just don't know how you would access that part of the phone.
I think you've about got it figured out: the phone probably keeps images where the delta between image B and image A is over some predefined threshold.
You'd have to find an image library written in Objective-C in order to do the analysis.
I have this kind of application: I wrote a library for Delphi 10 years ago, but the analysis is the same.
The point is to divide the whole frame into a matrix, e.g. 25x25, and compute an average color for each cell. After that, compare the R, G, B, H, S, V values of the average colors from one picture to the next, and if the difference is more than a set threshold, you have motion.
In my application I use a fragment shader to show the motion in real time. Any questions, feel free to ask ;)
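A CPU-side sketch of that grid-averaging comparison (not the fragment-shader version); the grid size, per-cell threshold, and number of changed cells required are arbitrary assumptions:

```swift
import UIKit

// Sketch: shrink each frame down to a small grid so that every remaining pixel
// is (roughly) the average color of one cell, then compare two consecutive frames.
func motionDetected(previous: UIImage, current: UIImage,
                    grid: Int = 25, cellThreshold: Int = 40, cellsNeeded: Int = 5) -> Bool {
    // Downscale an image to grid x grid RGBA pixels and return the raw bytes.
    func cells(of image: UIImage) -> [UInt8]? {
        guard let cg = image.cgImage,
              let ctx = CGContext(data: nil, width: grid, height: grid,
                                  bitsPerComponent: 8, bytesPerRow: grid * 4,
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        else { return nil }
        ctx.interpolationQuality = .medium   // downscaling blends each cell into one pixel
        ctx.draw(cg, in: CGRect(x: 0, y: 0, width: grid, height: grid))
        guard let data = ctx.data else { return nil }
        let count = grid * grid * 4
        let bytes = data.bindMemory(to: UInt8.self, capacity: count)
        return Array(UnsafeBufferPointer(start: bytes, count: count))
    }

    guard let a = cells(of: previous), let b = cells(of: current) else { return false }
    var changedCells = 0
    for cell in 0..<(grid * grid) {
        let i = cell * 4
        // Sum of absolute R, G and B differences for this cell.
        let diff = abs(Int(a[i]) - Int(b[i])) +
                   abs(Int(a[i + 1]) - Int(b[i + 1])) +
                   abs(Int(a[i + 2]) - Int(b[i + 2]))
        if diff > cellThreshold { changedCells += 1 }
    }
    return changedCells >= cellsNeeded
}
```

This compares only RGB averages per cell; adding the H, S, V comparison mentioned above would make it less sensitive to plain brightness changes such as clouds passing.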