How to cache Firestore documents in Flutter?

I want to store documents that I download from Firestore. Each document has a unique ID, so I was thinking of a way to store the document in JSON format or in a database table (so I can access its fields); if an ID is not present locally, simply download the document from Firestore and save it. I also want these documents to be deleted if they are not used/called for a long time. I found this cache manager but was unable to understand how to use it.

You can download the content using any Flutter download manager, keep the path of the downloaded content in SQLite, and load the data from local storage rather than from the URL if the data is already saved.
The following pseudo-code may help you:
if (isDownloaded(id)) {
  // show from local
} else {
  // show from remote
}
You can manually check whether a resource is being used by keeping a timestamp for it and updating the timestamp only when the resource is shown. You can then run a service that looks for unused resources and deletes them from storage and from the database.
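As a sketch of that timestamp bookkeeping (in Python with sqlite3 for illustration only; in a Flutter app the same logic would be written in Dart, e.g. with the sqflite plugin, and the table/column names here are assumptions):

```python
import sqlite3
import time

# In-memory DB for illustration; a real app would use a file on device storage.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE cache (
    doc_id     TEXT PRIMARY KEY,
    local_path TEXT NOT NULL,
    last_used  REAL NOT NULL)""")

def save(doc_id, local_path):
    """Record a downloaded document and mark it as just used."""
    db.execute("INSERT OR REPLACE INTO cache VALUES (?, ?, ?)",
               (doc_id, local_path, time.time()))

def lookup(doc_id):
    """Return the local path if cached (touching the timestamp), else None."""
    row = db.execute("SELECT local_path FROM cache WHERE doc_id = ?",
                     (doc_id,)).fetchone()
    if row is not None:
        db.execute("UPDATE cache SET last_used = ? WHERE doc_id = ?",
                   (time.time(), doc_id))
        return row[0]
    return None  # caller downloads from Firestore, then calls save()

def evict_older_than(max_age_seconds):
    """Delete cache entries that have not been used within max_age_seconds."""
    cutoff = time.time() - max_age_seconds
    db.execute("DELETE FROM cache WHERE last_used < ?", (cutoff,))
```

`evict_older_than` could run periodically (for example on app start); the file at `local_path` would be deleted alongside the row.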


Is there any way to avoid the delay in getting data when a high-speed internet connection is available? (There is no delay when there is no internet connection.)

I have 100 documents
Document id
0001
0002
0003
....
....
0100
If we load 5 documents with IDs 001, 002, 004, 005, 006, Firestore charges for 5 document reads. If we then load (run the read query again) documents with IDs 004, 005, 006, 007, 008, 001, 002, Firestore charges for 7 document reads.
On the first read we already loaded the documents with IDs 001, 002, 004, 005, 006, so on the second read (or on refresh) we are loading documents we already have plus some new ones.
We need to avoid reading the same documents from the server multiple times, read them from the cache instead, and so avoid the extra Firestore document-read charges. How can this be done?
Firestore has a cache loading option, but it will only load from the cache and never from the server. What we need is to load existing data from the cache and the remaining data from the server. What Firestore currently does is load from the server and, if that fails, read from the cache; that is fine, but I need it in the reverse order.
What happens now is: with no internet, all data loads quickly without showing a progress indicator; with internet, it takes a few seconds and shows a loader. When we do this without Firebase, the app shows a loader only once: it first shows the data from SQLite, and whenever the API response arrives it updates the UI, so users never see a loader. With Firestore, the user has to wait for a progress bar to finish.
At first glance it seems you could use Firebase Firestore caching for this use case. This can be done easily, for example in JS:
documentRef.get({ source: 'server' }) // or 'cache'
This will indeed reduce costs; however, it may always read from your local cache and never reach the server for new changes to your documents. This might be what you want, but it seems practical only if your documents are immutable and never change: you will be able to read new documents, but if existing ones change you might not see the changes. Please read more about this here.
A better suggestion is to change your app logic. Rather than reading the documents this way:
001, 002, 004, 005, 006
004, 005, 006, 007, 008, 001, 002
it's better to read them in a paginated way like this:
001, 002, 003, 004, 005, 006
007, 008, 009, 010, 011, 012
You can achieve that easily by using the concept of pagination in Firestore:
var first = db.collection("cities")
    .orderBy("population")
    .limit(25);

return first.get().then(function (documentSnapshots) {
  // Get the last visible document
  var lastVisible = documentSnapshots.docs[documentSnapshots.docs.length - 1];
  console.log("last", lastVisible);

  // Construct a new query starting at this document,
  // to get the next 25 cities.
  var next = db.collection("cities")
      .orderBy("population")
      .startAfter(lastVisible)
      .limit(25);
});
Check the Firestore documentation for more details.

How to access the latest uploaded object in a Google Cloud Storage bucket using Python in a TensorFlow model

I am working on a TensorFlow model where I want to make use of the latest uploaded object, in order to get output from that object. Is there a way to access the latest object uploaded to a Google Cloud Storage bucket using Python?
Below is what I use for grabbing the most recently updated object.
Instantiate your client
from google.cloud import storage
# first establish your client
storage_client = storage.Client()
Define bucket_name and any additional paths via prefix
# get your blobs
bucket_name = 'your-glorious-bucket-name'
prefix = 'special-directory/within/your/bucket' # optional
Iterate the blobs returned by the client
Storing these as tuple records is quick and efficient.
blobs = [(blob, blob.updated) for blob in storage_client.list_blobs(
    bucket_name,
    prefix=prefix,
)]
Sort the list on the second tuple value
# sort and grab the latest value, based on the updated key
latest = sorted(blobs, key=lambda tup: tup[1])[-1][0]
string_data = latest.download_as_string()
Metadata key docs and Google Cloud Storage Python client docs.
One-liner
# assumes storage_client as above
# latest is a string formatted response of the blob's data
latest = sorted([(blob, blob.updated) for blob in storage_client.list_blobs(bucket_name, prefix=prefix)], key=lambda tup: tup[1])[-1][0].download_as_string()
There is no direct way to get the latest uploaded object from Google Cloud Storage. However, there is a workaround using the object's metadata.
Every object uploaded to Google Cloud Storage has metadata. For more information, visit the Cloud Storage > Object Metadata documentation. One of the metadata fields is "Last updated": a timestamp of the last time the object was updated, which can change in only three situations:
A) The object was uploaded for the first time.
B) The object was uploaded and replaced because it already existed.
C) The object's metadata changed.
If you are not updating the metadata of the object, then you can use this workaround:
Set a variable to a very old datetime (e.g. 1900-01-01 00:00:00.000000); no object can have an updated timestamp that old.
Set a variable to store the latest blob's name and initialize it to None.
List all the blobs in the bucket (Google Cloud Storage documentation).
For each blob, load the updated metadata and convert it to a datetime object.
If the blob's updated timestamp is newer than the one you have stored, store it and save the current blob's name.
This process continues until you have checked all the blobs, and at the end the latest one is saved in the variables.
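The loop in those steps can be kept as a small pure function, separate from the google-cloud-storage client calls, so it is easy to test. A minimal sketch; the function name and the timezone-aware sentinel are assumptions:

```python
from datetime import datetime, timezone

def latest_blob_name(blobs):
    """blobs: iterable of (name, updated) pairs, where updated is a datetime.
    Returns the name of the most recently updated blob, or None if empty."""
    # Sentinel older than any real object; tz-aware because blob.updated is.
    latest_time = datetime(1900, 1, 1, tzinfo=timezone.utc)
    latest_name = None
    for name, updated in blobs:
        if updated > latest_time:
            latest_time = updated
            latest_name = name
    return latest_name
```

With the real client you would feed it `(b.name, b.updated) for b in storage_client.list_blobs(bucket_name)`; note that `blob.updated` is timezone-aware, hence the `tzinfo=timezone.utc` on the sentinel.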
I have done a little bit of coding myself, and this is my GitHub code example that worked for me. Take the logic and modify it based on your needs. I would also suggest testing it locally before using it in your code.
BUT, in case you update the blob's metadata manually, there is another workaround:
If you update any of the blob's metadata (see the Viewing and Editing Object Metadata documentation), the "Last updated" timestamp of that blob is also updated, so running the method above will NOT give you the last uploaded object but the last modified one, which is different. You can instead add a custom metadata entry to your object every time you upload it, containing the timestamp of the upload. No matter what happens to the standard metadata later, the custom metadata will always keep the time the object was uploaded. Then use the same method as above, but instead of reading blob.updated, read blob.metadata and apply the same logic to that date.
Additional notes:
To use custom metadata over the raw API you need the x-goog-meta- prefix, as stated in the "Editing object metadata" section of the Viewing and Editing Object Metadata documentation.
So [CUSTOM_METADATA_KEY] should be something like x-goog-meta-uploaded, and [CUSTOM_METADATA_VALUE] should be the current timestamp at upload time.
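In the Python client you do not set the x-goog-meta- header yourself; you assign a plain dict to `blob.metadata` and the client adds the prefix on the wire. A minimal sketch, assuming a custom key named `uploaded`:

```python
from datetime import datetime, timezone

def upload_timestamp_metadata(now=None):
    """Build a custom-metadata dict recording the upload time as ISO 8601."""
    now = now or datetime.now(timezone.utc)
    return {"uploaded": now.isoformat()}

# Attaching it at upload time (requires a real bucket; shown for illustration):
# blob = bucket.blob("model-input.bin")
# blob.metadata = upload_timestamp_metadata()
# blob.upload_from_filename("model-input.bin")
# Later, read blob.metadata["uploaded"] instead of blob.updated.
```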

Using Ionic storage or a database (NoSQL, Sqlite, etc)

I am going to create an app (iOS and Android) that will save data to the user's device (text, images, files, etc.); the data will stay on the device until the user decides to send it to the server. I could do this either with a SQLite database or with Ionic Storage, but I don't know what the best practice would be.
For simplicity I will only present two types of items that will be stored: notes and records.
notes structure
notes = {
  description: this.description,
  otherText: this.otherText,
  fileOrImage1: this.imageOrFileURL1,
  fileOrImage2: this.imageOrFileURL2,
  // ... an unlimited number of fileOrImageURLs here
};
records structure
records = {
  name: this.name,
  description: this.description,
  // NOTE: these files and images are different from the ones above; they will be in separate components
  fileOrImage1: this.imageOrFileURL1,
  fileOrImage2: this.imageOrFileURL2,
  // ... an unlimited number of fileOrImageURLs here
};
The user first stores the data on the device; it only gets uploaded when the user sends it to the server. Once it is uploaded, it gets deleted.
There can be many notes and records, say 25 of each. Should I use Ionic Storage or something like a SQLite database? If I use Ionic Storage, I will need to create a unique ID for each note and record and save it.
I am willing to change my approach if anybody has a better way; I'm still in the planning stage.
I used a SQLite database for an app I built with Ionic; the reason for my choice was that I could then easily query the data, as with any database.
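To illustrate why querying is easy with SQLite: the notes above map naturally onto a parent table plus a child table for the unlimited file/image URLs, and the unique IDs come for free from the primary key. Sketched in Python's sqlite3 for brevity (the schema names are assumptions; in an Ionic app the same SQL would run through a plugin such as cordova-sqlite-storage):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE notes (
    id          INTEGER PRIMARY KEY AUTOINCREMENT,
    description TEXT,
    other_text  TEXT
);
CREATE TABLE note_files (      -- unlimited fileOrImage URLs per note
    note_id INTEGER REFERENCES notes(id),
    url     TEXT NOT NULL
);
""")

def add_note(description, other_text, file_urls):
    """Insert a note and its attachments; the unique ID is generated by SQLite."""
    cur = db.execute("INSERT INTO notes (description, other_text) VALUES (?, ?)",
                     (description, other_text))
    note_id = cur.lastrowid
    db.executemany("INSERT INTO note_files VALUES (?, ?)",
                   [(note_id, u) for u in file_urls])
    return note_id

def note_files(note_id):
    """Return the file/image URLs attached to a note, in insertion order."""
    return [row[0] for row in
            db.execute("SELECT url FROM note_files WHERE note_id = ? "
                       "ORDER BY rowid", (note_id,))]
```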

Import "normal" MongoDB collections into DerbyJS 0.6

The same situation as in this question, but with current DerbyJS (version 0.6):
Using imported docs from MongoDB in DerbyJS
I have a MongoDB collection with data that was not saved through my
Derby app. I want to query against that and pull it into my Derby app.
Is this still possible?
The accepted answer there points to a dead link. The newest working link would be this: https://github.com/derbyjs/racer/blob/0.3/lib/descriptor/query/README.md
That refers to the 0.3 branch of Racer (the current master version is 0.6).
What I tried
Searching the internets
The naïve way:
var query = model.query('projects-legacy', { public: true });
model.fetch(query, function () {
  query.ref('_page.projects');
});
(doesn't work)
A utility was written for this purpose: https://github.com/share/igor
You may need to modify it to run against only a single collection instead of the whole database, but it essentially goes through every document in the database, adds the necessary livedb metadata, and creates a default operation for each document as well.
In livedb every collection has a corresponding operations collection; for example, profiles will have a profiles_ops collection which holds all the operations for profiles.
You will have to convert the collection to use it with Racer/livedb because of the metadata on the document itself.
An alternative, if you don't want to convert, is to use traditional AJAX/REST to get the data from your Mongo database and then just put it in your local model. This will not be real-time or synced to the server, but it will allow you to drive your templates from data that you don't want to convert for some reason.

Fetching data from webservice and caching on iphone

I want to access a web service to fetch a lot of data (e.g. product lists, details, search results) and display it.
Are there any best practices for this type of operations?
Performance-wise, is there any better way than retrieving, parsing, and displaying text data on each request, perhaps loading images in the background? Are there any sensible caching policies that can be applied?
If I were doing something like this from the ground-up, here's what I'd do:
Have the web site post all the data in XML, except for maybe the pictures: just have an XML field specify a URL for each picture. So, for example, say I was doing a product list.
Use NSXMLParser to both fetch and parse the XML data.
Use a separate NSData dataWithContentsOfURL: call to fetch the contents of each image, with the URL from the XML data.
Write the XML data (and the NSData image) to a database table with Core Data. Add an indexed timestamp field to the table.
You can now use the timestamp field to keep the newest x records in the database, and purge the older ones if/when you need to.
Use the contents of the database table to populate a UITableView - or however else you want to present.
Add some sort of "next", "prev", or "update" control in the UITableView to get more data from the web, if you need to display more data than is cached or you want to refresh the data in the cache.
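The keep-the-newest-x purge from the steps above fits in one SQL statement over the indexed timestamp field. A sketch in Python's sqlite3 as a stand-in for the Core Data fetch-and-delete (table and column names are assumptions):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE products (
    id         INTEGER PRIMARY KEY,
    xml_data   TEXT,
    fetched_at REAL)""")
# The indexed timestamp field from the steps above.
db.execute("CREATE INDEX idx_fetched ON products(fetched_at)")

def purge_all_but_newest(keep):
    """Delete every cached record except the `keep` most recently fetched."""
    db.execute("""DELETE FROM products WHERE id NOT IN (
                      SELECT id FROM products
                      ORDER BY fetched_at DESC LIMIT ?)""", (keep,))
```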