TileOverlay on Windows Phone Bing Maps? - bing-maps

For my Windows Phone Mango app, I want to overlay a heatmap on Bing Maps, and a tile overlay seems like the best way to do it. I've been having trouble finding any good documentation or code samples to work from. It seems like most people point the tile source at a web service. I'd rather render the heatmap on the phone itself - is that possible?

One of the main reasons to use tile layers to represent data on a map is that the computation and rendering involved in creating the layer is performed in advance, generally as a one-off or infrequent task. Then, at runtime, the only work the client needs to do is retrieve the pre-rendered tile images from the server and display them straight on the map, which is a simple, low-resource activity.
Rendering tiles can be a resource-intensive task, both in terms of processing and memory usage - for example, I can only render about 3 tiles per second on a quad-core desktop machine with 8 GB of RAM. Even if it's technically possible to create the tiles dynamically on a handheld device, the performance is almost certainly going to be unacceptable for any user. You've also got the question of how you're going to store the data from which the layer is created. Since you're talking about plotting a heatmap, I'm guessing you have a reasonably large dataset of points - did you envisage these being stored locally on the device, or retrieved over the network? (Either creates its own problems.)
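To put that 3-tiles-per-second figure in perspective (a purely illustrative calculation, assuming you pre-render zoom levels 1 through 7 of your layer): a full tile pyramid down to level 7 contains 4 + 16 + 64 + ... + 16,384 = 21,844 tiles, which works out to roughly two hours of rendering on that desktop machine - before you even consider attempting it on a phone.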
Basically, while it may be theoretically possible to create tile layers dynamically on the client, doing so would negate almost all the benefits of using tile layers in the first place, which is why you probably won't find any code samples explaining how to do it. Perhaps you could explain why you'd rather create the heatmap on the phone?
It's pretty easy to create a server-side tile renderer using .NET or PHP that renders and serves tile images to a Bing Maps client, or you can use an existing map rendering library such as mapnik.org or geoserver.org.
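If you do go the server route, the serving side can be very simple once the tiles are rendered. Here is a minimal sketch of a tile endpoint - it uses Node/Express purely for illustration (the answer above suggests .NET or PHP; the idea is identical), and it assumes the tiles have already been rendered to disk under tiles/{z}/{x}/{y}.png:

```js
// Hypothetical pre-rendered tile server: the tile layer on the phone simply
// requests /tiles/{z}/{x}/{y}.png and displays whatever image comes back.
const express = require('express');
const path = require('path');

const app = express();

app.get('/tiles/:z/:x/:y.png', (req, res) => {
  const { z, x, y } = req.params;
  // Serve the pre-rendered tile from disk; return 404 if that tile doesn't exist.
  res.sendFile(path.join(__dirname, 'tiles', z, x, `${y}.png`), err => {
    if (err) res.status(404).end();
  });
});

app.listen(8080);
```

On the phone side you then point your tile layer's URI template at that endpoint, so the client never does any rendering of its own.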

Related

How to get the best performance on leafletjs maps? Should I use 256px or 64px tiles, and what is the difference?

I am using leafletjs on a custom map (a game world map). It's a 10240px image and I have it properly set up, but I am noticing a bit of laggy behavior when zooming and navigating. My tiles are properly cut by an automated Photoshop script, and the tile file sizes are also optimized.
Currently my tile size is the leafletjs default (256px), but I am wondering whether using 64px tiles would make the performance better or worse. When should 64px tiles be used, and when should 256px or 512px tiles be used?
I couldn't find a definitive answer on this topic, and I hope someone can clear it up a bit for me. Blessings!
Welcome to SO!
That is a performance optimisation topic that has no one-size-fits-all answer, unfortunately. It is similar to the question of bundling assets (CSS, JS) or even inlining them in the HTML page.
The basic idea is to decrease the time it takes to display meaningful information to the visitor (sometimes referred to as "first paint"), although in the case of interactive mapping, as you realised, this also applies while navigating.
Smaller (but more numerous) files help avoid loading unnecessary data. For mapping, smaller tiles mean less wasted data (the parts of tiles outside the viewport) and, in the case of Leaflet, faster display of whole tiles (since Leaflet does not display partial tiles, but waits for a tile to be fully loaded before displaying it).
Bigger (and fewer) files help decrease the number of network requests (browsers limit simultaneous requests per domain - historically as low as 2, typically around 6 nowadays) and the associated overhead (both in terms of data and time).
When it comes to map tiles specifically, a rough and simple rule could be: the more detailed and heavy your tiles, the smaller they should be. But as always, experimenting with different settings is still necessary.
As for the simultaneous request limit, the "easy" trick, if you can implement it server-side, is to use multiple sub-domains (which can all still point to the exact same server and files).
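Both settings are just options on the tile layer, so they are cheap to experiment with. A minimal Leaflet sketch (the URL template, sub-domain letters and zoom limit below are placeholders for your own tile server):

```js
// Hypothetical tile server URL; {s} is replaced by one of the listed subdomains,
// spreading requests across a.tiles.example.com, b.tiles.example.com, etc.
const layer = L.tileLayer('https://{s}.tiles.example.com/{z}/{x}/{y}.png', {
  tileSize: 256,               // try 64 or 512 here and compare how the map feels
  subdomains: ['a', 'b', 'c'], // works around the per-domain request limit
  maxZoom: 7
});
layer.addTo(map); // assumes an existing L.Map instance called `map`
```

Note that if you change the tileSize option you also need to re-cut your tiles to match, since the option only tells Leaflet how big each tile image is.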

Leaflet: How to avoid high file count for custom layers?

For my non-commercial, low-traffic web site, I successfully use Leaflet with standard raster tile layers from well-known sources.
I'd like to add additional layers containing very localized high-resolution maps. I've succeeded in making a usable raster tile-set from such a map, hosting the tiles on my own server, and adding that as an additional layer. But this creates a huge file count. My cheap shared-hosting account promises unlimited storage but limits file (actually, inode) counts. If I add one more such tile-set, I risk getting thrown off my server.
Clearly I can look for a hosting account with higher limits, and I'm exploring Cloud alternatives, too. (Comments welcome!)
Any other ideas? Are there free or very low-cost alternatives for non-commercial ventures to use for low-traffic tile storage?
Or: as I look at the localized, high-resolution maps, I see I could fairly easily trace them to create vector artwork without much loss of data (and some gains in clarity). I use Adobe Illustrator. Is there a reasonably painless way to get from an .ai file (or some similar vector format) to a Leaflet layer, with a substantially lower file count than the raster alternative?
Apologies if I've misused terminology (please correct me) or if I've cluelessly missed some incredibly obvious way of solving this problem.
TIA
This sounds like a good use case for the Leaflet.TileLayer.MBTiles plugin:
A LeafletJS plugin to load tilesets in .mbtiles format.
The idea is to write your tiles into a single .mbtiles file (actually an SQLite database), so that you just need to host that single file on your server instead of thousands of individual tiles.
The drawback is that visitors now need to download the entire file before the map can actually display your tiles. But then navigation is extremely smooth, since tiles no longer need to be fetched from the network, but are all locally available to the browser.
As for generating the .mbtiles file, there are many implementations that can do the job.
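Usage on the Leaflet side is minimal - something along these lines (a sketch only; the file name is a placeholder, and you should check the plugin's README for the exact factory name and the options it supports):

```js
// Load a whole tileset from one .mbtiles file instead of thousands of PNGs.
const layer = L.tileLayer.mbTiles('tiles/my-localized-map.mbtiles', {
  minZoom: 0,
  maxZoom: 7
});
layer.addTo(map); // assumes an existing L.Map instance called `map`
```

For producing the .mbtiles file itself, tools such as mbutil can package an existing z/x/y tile directory into a single file, and TileMill can export .mbtiles directly.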

Modeling a Physical Place inside iPhone Application

I need to find a way to model a physical place inside an iPhone application. For example, I want to be able to take images of a restaurant and then use some tools or a programming API to model this restaurant as a 3D place, and let the user navigate and explore the place and its rooms.
I have thought about HTML5 inside a web view, but I don't think WebGL is supported by the iPhone web view (the Safari engine).
Can you please recommend a method, API, Commercial Library or anything to help me achieve this task?
First, you need to be able to display 3D models on the iPhone. One of the most popular 3D engines is Unity3D:
http://unity3d.com/
It is extremely easy to start playing with Unity3D. You even have a free license with limited features:
http://unity3d.com/unity/licenses
Then you need to reconstruct a 3D model from pictures. This is not a trivial problem, so it is better if you know some computer vision. You can try playing with OpenCV:
http://opencv.willowgarage.com/wiki/
Best regards.
Actually, Nuke from The Foundry is a decent start toward the future of creating computer models from images.
Basically it takes a high-contrast point and tracks it through successive frames. Given hundreds or thousands of tracked points, the next step is to calculate the perspective change between points.
Say two points are a known pixel distance apart at time zero, and a certain time later they are a different distance apart. That change could be due to a bad tracking point. But assuming the two points are tracking perfectly, the change could be caused by the camera moving laterally or rotating; in real space a point further away from you shifts differently in perspective than a closer point. This perspective change is a mathematical certainty.
Initially the tracking is typically used to re-film a piece of footage to stabilize it. But the data the software produces while analysing the footage can be saved; it is often called a point cloud. Many points that track very closely together usually do so because they are parts of the same surface, so a model can be built from them.
But my friend, we don't yet have the speed and software to do that perfectly. Otherwise all the CG artists out there would have nothing left to model in Maya except fantasy monsters and spaceships that don't exist yet...

Indoor map creator

I have designed and developed a couple of navigation apps using the Google API and the osmdroid API for Android-powered devices. Now I am looking to create an indoor navigation system using the osmdroid API. But in order to do so, I need to create tiles, similar to regular map tiles, from a simple PNG file, with a naming convention similar to OpenStreetMap's.
Please suggest how I can do this.
Cheers,
Susheel
You could design your indoor map using JOSM and save it to a .osm file. Don't upload the data to OpenStreetMap unless it is appropriate to do so (OpenStreetMap has some basic indoor features, e.g. a highway=footway running through a shopping mall, but a lot of very detailed indoor mapping is generally inappropriate for OSM). But...
With a .osm file you could then use one of the OpenStreetMap rendering tools to create a raster map, and chop it into tiles. For quick satisfaction I'd recommend Maperitive, although I'm not sure how easy the last tile-chopping step is; I've never done this with Maperitive. Mapnik has a nice generate_tiles.py script, which will give you the tile-set you want, but it's a bit tricky to set up in the first place.
Actually, that last step is the main thing you're asking about. You can chop up any image into tiles. It may or may not be important to you that the tiles are geo-positioned in some meaningful way. For an old project I did a quick fudge solution using the Google Tile Cutter script, which is actually a wrapper around the GDAL tools.
Have a look at the GDAL library, and in particular gdal2tiles. It is designed to create map tiles from raster images, and serves exactly your purpose.
You can decide on a projection and what the bounds of your source image(s) are. The library allows you to reproject your image to the correct coordinate space.
It can then generate tiles at various zoom levels, either with or without reprojection.
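As a rough idea of what that looks like in practice (an illustrative command only - the file names are placeholders, and flags can differ slightly between GDAL versions), a plain, non-georeferenced PNG can be tiled with the raster profile:

```
# Cut floorplan.png into a z/x/y tile pyramid for zoom levels 0-5.
gdal2tiles.py --profile=raster --zoom=0-5 floorplan.png tiles/
```

The resulting tiles/ directory follows the familiar {z}/{x}/{y}.png layout. One caveat: by default gdal2tiles numbers tiles using the TMS scheme, where the y axis is flipped relative to the OSM/slippy-map convention that osmdroid expects; recent GDAL versions offer an --xyz option, or you can flip the y index yourself.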
You can now preview indoor rendering by dragging and dropping OSM GeoJSON data onto the https://app.openindoor.io web page.

Static maps with routing on iOS

Is there a way to have static maps on the iPhone, with either MapKit or a third-party framework? By this I mean a fixed area of, say, 5 square miles, which can be zoomed/panned etc., but which doesn't require an internet connection to load the map.
Additionally, is it possible to get route directions, and draw them on the map?
You can of course always roll your own solution with CATiledLayer if the area you want to display is that small, but it's probably better and easier to have a look at frameworks like the MapBox iOS SDK (http://mapbox.com/blog/introducing-mapbox-ios-sdk/), which provides offline support on iOS.
The MapKit framework doesn't offer offline maps currently.
It is possible to define an area on the maps, and lock the user into that area, but an internet connection is still required.
Maybe a more direct way to do what you want is to download a static image of the zone you are interested in and cache it, using that image of the map area to zoom and pan around in. Of course this would require an initial internet connection, but that is really not such an obstacle; after all, the user must have had a connection to download your application in the first place.
You could also ship this image directly in your application's bundle, but you haven't really told us enough to conclude whether that option is feasible.
As for routing, it's not currently supported either. You could, however, retrieve a list of waypoints from point A to B directly from the Google Maps remote API - note that you cannot do this with the MapKit framework.
With these waypoints (which contain coordinates) and the current zoom level, you can plot the points and draw lines between them to implement your own routing. This gets a little ugly, or perhaps better to say "laggy", when the user zooms in and out, because you only know how to redraw your route once the user finishes zooming (lifts their fingers from the screen). But, like most things in programming, there is a solution to this, which I feel is out of scope for this question.
I hope this helps.