Using PouchDB as an offline raster map cache - leaflet

I have been exploring using PouchDB as an offline cache for raster map tiles. Specifically, for Leaflet. I have just finished some preliminary tests which I thought I would share.

I have created a CodePen (I prefer it to JSFiddle these days) as a playground showing how to use PouchDB to cache offline raster map tiles:
http://codepen.io/DrYSG/pen/hpqoD
The algorithm it uses is as follows:
• Test for the presence of XHR2, IndexedDB, and Chrome (which at the time could not store binary Blobs in IndexedDB, only Base64), and show this status info.
• Fetch a JSON manifest of PNG tiles from Google Drive (I have 171 PNG tiles, each 256x256 in size). The manifest lists their names and sizes.
• Store the JSON manifest in the DB.
• MVVM and UI controls are from KendoUI (this time I did not use their superb grid control, since I wanted to explore CSS3 grid styling).
• The XHR2 helper is from: https://github.com/p-m-p/xhr2-lib/blob/master/test/index.html
• I am using the nightly build of PouchDB.
• All PNG files are stored on Google Drive (NASA Blue Marble imagery). I created the tile pyramid with Safe FME 2013 Desktop: http://www.safe.com/fme/fme-technology/fme-desktop/overview
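The manifest step above can be sketched like this (a minimal sketch, not the pen's actual code: the manifest shape, the doc `_id`, and the injected `db` PouchDB instance are all assumptions):

```javascript
// Minimal sketch of fetching and storing the tile manifest.
// Assumed manifest shape: { tiles: [{ name, size }, ...] }.

// Pure helper: wrap a parsed manifest into a PouchDB document.
function manifestDoc(manifest) {
  return { _id: 'manifest', tiles: manifest.tiles };
}

// db is expected to expose a PouchDB-like put(doc) returning a promise.
async function storeManifest(db, manifestUrl) {
  const res = await fetch(manifestUrl);   // XHR2 or fetch both work here
  const manifest = await res.json();
  await db.put(manifestDoc(manifest));    // persist for offline use
  return manifest.tiles.length;           // e.g. 171 in my test set
}
```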
Before pressing the "Download Tiles" button, check that the manifest has been stored in the DB and that 171 tiles are listed. If you have already run the test, your PouchDB will already contain tiles and you will get errors; in that case, press "Delete DB" and reload the page.
When you press "Download Tiles", the following steps occur:
• The manifest is fetched from the DB.
• An XHR2 fetch loop grabs the PNG blobs from Google Drive.
• As the loop runs, it saves the blobs into PouchDB. Note: the fetch and store operations overlap (think co-routines), since they run asynchronously.
• When the fetch loop is done, it reports the elapsed time. Note: this is not pure fetch time, since the PouchDB putAttachment() calls are running in parallel.
• When all the tiles are saved, it reports full statistics and displays a tile fetched from PouchDB, including the blob rate: the total fetch-and-store time per PNG tile.
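The download steps above can be sketched as follows (a sketch under assumptions: `db` is a PouchDB instance, `fetchBlob` fetches one PNG as a Blob, and the doc/attachment naming is made up for illustration):

```javascript
// Sketch of the overlapped fetch-and-store loop. Rather than awaiting
// each save, we collect the putAttachment promises, so fetching the next
// tile overlaps with storing the previous one (the "co-routine" effect).

// Pure helper: PouchDB attachment coordinates for one tile (the
// 'tile_' + name docId scheme is an assumption).
function tileKey(name) {
  return { docId: 'tile_' + name, attachmentId: name, type: 'image/png' };
}

async function downloadTiles(db, tiles, fetchBlob) {
  const saves = [];
  const t0 = Date.now();
  for (const tile of tiles) {
    const blob = await fetchBlob(tile.name);   // XHR2 fetch of one PNG
    const k = tileKey(tile.name);
    // putAttachment runs async; we don't await it here, so the next
    // fetch starts while this save is still in flight.
    saves.push(db.putAttachment(k.docId, k.attachmentId, blob, k.type));
  }
  const fetchMs = Date.now() - t0;             // not pure fetch time!
  await Promise.all(saves);                    // wait for all the stores
  return { count: tiles.length, fetchMs, totalMs: Date.now() - t0 };
}
```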
Right now Chrome is running fine. Firefox is very slow; I found this out a few months ago when I tested the native IndexedDB API directly, so I don't think this is a PouchDB issue. It is probably due to Firefox implementing IndexedDB on top of SQLite, a relational engine backing a NoSQL API.
IE10 is not working. This is sad, since my prior tests with IE10 showed it has a fantastically fast IndexedDB implementation: Storing Image Data for offline web application (client-side storage database)
A must-read article on how the browsers store IndexedDB data:
http://www.aaron-powell.com/web/indexeddb-storage
Note: Firefox uses SQLite for its NoSQL IndexedDB, which might be the reason for the slow performance (blobs are stored separately).
Note: Microsoft IE uses the Extensible Storage Engine: http://en.wikipedia.org/wiki/Extensible_Storage_Engine
Note: Chrome uses LevelDB: http://code.google.com/p/leveldb/
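To actually serve the cached tiles back to Leaflet, one approach is a TileLayer whose createTile reads from PouchDB and falls back to the network on a miss. This is a sketch, not the pen's code; the "z_x_y.png" naming scheme and the 'tile_' + name doc layout are assumptions:

```javascript
// Sketch: a Leaflet tile layer backed by a PouchDB tile cache.

// Pure helper: map Leaflet tile coords to the cached file name.
function tileName(coords) {
  return coords.z + '_' + coords.x + '_' + coords.y + '.png';
}

// Factory so this file stays loadable outside the browser; call it with
// the global Leaflet object (L) and an open PouchDB instance.
function makeOfflineTileLayer(L, db, fallbackUrlTemplate) {
  const OfflineLayer = L.TileLayer.extend({
    createTile: function (coords, done) {
      const img = document.createElement('img');
      const name = tileName(coords);
      db.getAttachment('tile_' + name, name)
        .then(function (blob) {
          img.src = URL.createObjectURL(blob);  // cache hit: use the blob
          done(null, img);
        })
        .catch(function () {
          // Cache miss: fall back to the normal network tile URL.
          img.src = L.Util.template(fallbackUrlTemplate, coords);
          done(null, img);
        });
      return img;
    }
  });
  return new OfflineLayer(fallbackUrlTemplate);
}
```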

Related

How to update the whole tiles incrementally?

It took me nearly 7 hours to download the full tile set. The Production package page says that Business PLUS users can use the automated update service to update the tiles incrementally, but I cannot find the tool or system. How can I do that?
With the higher Production package, you can download a smaller incremental set of map tiles, containing only the places that have changed since the last update (typically a week).
This allows much faster updates: your locally stored 60 GB planet file is brought up to date by a script using the small downloaded incremental tileset.
Please contact us (OpenMapTiles.com) to get access to this system.

Save image as base64 in mongoDB

I am looking for the best way to upload an image from a mobile phone to my server. I am currently using HTML5 to open the camera and take the picture, then I convert the file into a base64 string and send it to the server, which saves it in MongoDB.
I am expecting around 1,000 to 1,500 image-upload requests per day, so I have the following questions:
Is this a good way to do it?
Should I compress the base64, and if so, how?
Should I use a dedicated server to handle this task?
My backend is Node/Express and the front end is ReactJS.
Thanks
It all depends on your situation. Reading and writing images via a CDN (e.g. as streams) is usually faster than reading and writing binary representations of images (i.e. base64) from a database, although your read speed from a CDN will obviously be affected by which service you use. Today, companies like Amazon offer storage at a very low price, so unless you are building a hobby app for something like a student project, you can usually afford it.
Storing a binary representation of an image actually ends up a little bigger in size than the image itself. You don't compress the base64; you compress the image before converting it.
However, if you can't afford a storage account and you know your users won't upload that many images, it is usually enough to store binary representations of the images in a database. MongoDB Atlas, for example, offers 512 MB for free on their database clusters.
Dividing tasks of your app, such as database requests and CDN services, away from your main application is usually a good choice if possible. This way you spread the CPU, memory, etc. across your hardware, which leads to faster reads and writes for the user.
There are a lot of different modules for doing this in Node. Jimp is a pretty nice one, with loads of built-in functions like resizing images and converting them to binary, either as a Buffer or as base64.
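On the "should I compress the base64" point: base64 itself adds a fixed ~33% overhead (4 output characters per 3 input bytes), which is why you shrink or compress the image before encoding rather than trying to compress the string afterwards. A quick check of the arithmetic:

```javascript
// Base64 encodes every 3 input bytes as 4 ASCII characters, so the
// encoded form is always about a third larger than the raw image.

function base64Length(rawBytes) {
  return 4 * Math.ceil(rawBytes / 3);   // padded base64 length
}

const raw = 300 * 1024;                 // a hypothetical 300 KB photo
const encoded = base64Length(raw);
console.log((encoded / raw).toFixed(2));                 // ~1.33x overhead

// Sanity check against Node's own encoder:
const buf = Buffer.alloc(raw);
console.log(buf.toString('base64').length === encoded);  // true
```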

Efficient video game server data storage?

I'm currently developing a multiplayer iOS video game where I want the players' data to be stored in the cloud. In the game, the player creates an area by placing items, planting trees, etc., which other players can then visit and walk around in real time.
From the game map side what needs to be stored is:
• A 50x50 tile grid: the type of each tile (grass etc.) and its height
• All of the placed items and any associated item-specific data (a tree, for example, would store when it was planted)
On the player side I need to store their:
Character customisation
Inventory
The information would need to be read fairly infrequently (every time a client connects to the area) but would need to be written back fairly regularly (every time an item is placed, etc.) so that any future connecting client reads the correct data.
My question really is where to start with storing this data efficiently on the server. I considered using several XML files to store different portions of the map, and then storing each of the other parts (inventory, character, etc.) in their own XML files. But is this the most efficient approach from the server's perspective? I want to minimise the amount of data it has to write back when the client updates :)
TL;DR: Use a database and the Unity plugin for said database.
Managing an XML file on the server is just going to cause a bunch of file locking/synchronization problems that you shouldn't be dealing with manually (unless that is the focus of your project/research).
There are plenty of database backends that you can use, and that work well with Unity. A few I can think of off-hand:
• Parse.com --- they have a Unity plugin
• Write your own HTTP-based web server (in PHP/MySQL, JavaScript, Ruby, whatever) and then use a REST interface library (a bunch on the Unity Asset Store) to talk to it and store your data.
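Whichever backend you pick, the write payload stays small if you send the area as one compact JSON document. A sketch (all field names and the endpoint are made up for illustration):

```javascript
// Sketch of one "area" document as a REST backend might store it.
// A 50x50 grid packs naturally into flat arrays of 2500 entries.

function buildAreaDoc(areaId, grid, items) {
  return {
    areaId: areaId,
    // Flat row-major arrays keep the JSON compact and easy to diff.
    tileTypes: grid.map(function (t) { return t.type; }),
    tileHeights: grid.map(function (t) { return t.height; }),
    items: items   // e.g. [{ kind: 'tree', x: 3, y: 7, plantedAt: ... }]
  };
}

// Saving on each change then becomes one small HTTP call
// (the '/areas/' endpoint is hypothetical):
function saveArea(doc) {
  return fetch('/areas/' + doc.areaId, {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(doc)
  });
}
```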

Is it a good idea to store pictures / images in SQLite DB from the App?

Some discussions on Stack Overflow say that storing pictures in the DB is a bad idea (because, over time, the number of images gets large and may lead to app crashes). So, either:
a. the image can be stored on the iPhone itself and only its location stored in the DB
Potential issue: the image might get removed (outside of the app) and the app might not be able to load it next time
b. the image can be shrunk to a small size (say, 100x100 pixels) and stored in the DB
Potential issue: will size still be a problem if the image is shrunk to just 100x100 pixels?
c. doing both (a) and (b): small versions of the images are stored in the DB, then retrieved and displayed in the app, whereas if the user chooses to see the original version of an image (whose probability is low), it is fetched from the local directory and shown.
Your suggestions, please? In my opinion, (c) seems a good option to go for, given the potential issue with (a).
It really depends on where the images are coming from. Are they being downloaded from the Internet (or imported from the user's library / camera) after the user installs the app, or are they bundled with the app from the App Store?
If the images are being downloaded / imported, the best solution is to store images in the filesystem following the recommendations in http://developer.apple.com/library/ios/#qa/qa1719/_index.html
Basically, if the images cannot be replaced or recreated, store them in the <Application_Home>/Documents directory and do not set the Do Not Backup attribute. These items will be backed up to iCloud, and this data does persist for at least some amount of time even if the app is deleted from the device. (Remember that your users do not have unlimited iCloud space. Be responsible.)
-However-
If the images are bundled with the app, the best solution is to import them directly into your Xcode project and reference them from there. This way you know they are always available even if the user deletes and reinstalls the app.
I would definitely stay away from storing image data in the database whenever possible. There are simply better, more efficient ways in almost any scenario.

how can I hold initial data when introducing an iPhone app?

I am developing an iPhone app which retrieves information via NSURLRequest and displays it through a UIWebView.
I want to ship initial data (such as HTML pages and images) as a cache so that users of my app can access the data without network costs the first time.
Then, if the data on my web server is updated, I would download it and update the cache.
For performance, I think it is better to store the data in the file system than in Core Data.
Yet I don't think it's possible to release a new app with data already written to disk.
So I am about to store the initial data (the initial cache) in Core Data, and when users launch my app for the first time, copy the data to disk (such as the /Library folder).
Is it, do you think, a good approach?
Or... hmm, can I access Core Data using NSURLRequest?
One more question:
I might access the file system using NSURL, the same way as data on the web (right?). My app would compare the version of the cache with the version of the data on my web server and, if it is old, retrieve the new data; after that, my app would access only the file system. All the data is actually HTML pages (including scripts) and images, and I want to cache them.
Could you suggest a better design?
Thank you.
Is it, do you think, a good approach? Or... hmm, can I access Core Data using NSURLRequest?
No.
One more question: I might access the file system using NSURL, the same way as data on the web (right?). My app would compare the version of the cache with the version of the data on my web server and, if it is old, retrieve the new data; after that, my app would access only the file system. All the data is actually HTML pages (including scripts) and images, and I want to cache them.
Yes.
But you could also be more clever. And by "more clever" I mean "Matt Gallagher." Take a look at his very interesting approach in Substituting local data for remote UIWebView requests.
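The version-check flow the question describes is simple enough to sketch. This is shown in JavaScript purely for illustration (the question itself targets iOS, where the network calls would be NSURLRequests); the function names and injected callbacks are assumptions:

```javascript
// Sketch of the "compare cache version with server version" update flow.

// Decide whether the local cache needs refreshing.
function cacheIsStale(localVersion, remoteVersion) {
  return localVersion === null || localVersion < remoteVersion;
}

// fetchRemoteVersion and downloadBundle are injected so the flow itself
// stays testable; cache is a { version, files } record on local storage.
async function refreshCache(cache, fetchRemoteVersion, downloadBundle) {
  const remote = await fetchRemoteVersion();
  if (!cacheIsStale(cache.version, remote)) {
    return false;                    // serve everything from local files
  }
  cache.files = await downloadBundle(remote);  // HTML, scripts, images
  cache.version = remote;
  return true;
}
```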