How would you achieve local data persistence in Flutter when remote versions of the same data are returned as nested JSON objects?

When the server stores data in a MongoDB database and is accessed through GraphQL, it would be cool if local/cached versions of the same data could be stored similarly - in some sort of local NoSQL data store.
However, from my research it looks like there aren't many data persistence options available in Flutter, and the most mature one is SQFLite. If I use SQFLite, though, I have to wrangle two formats of the same data: the nested-object NoSQL/GraphQL format and the "separate objects joined through relations" format of SQL.
Has anyone dealt with this before? Even if you're not using MongoDB/GraphQL in your remote backend, your API likely still returns nested objects which can't be stored as-is in your local SQL DB and can't be used interchangeably with their locally persisted versions.
So how would you deal with this issue and achieve clean syncing of local and remote data without it turning into a mess?
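One pattern that sidesteps the format wrangling is to treat the local store as a document cache: persist each nested object verbatim as JSON text in a single column, and promote only the fields you need to query into real columns. Here is a minimal sketch of that idea, written with Python's stdlib sqlite3 purely for illustration (with SQFLite you'd do the same thing via dart:convert); the table and field names are invented:

```python
import json
import sqlite3

# Hypothetical nested document, as it might come back from a GraphQL API.
doc = {
    "id": "42",
    "title": "Hello",
    "author": {"name": "Ada", "email": "ada@example.com"},
    "tags": ["flutter", "sync"],
}

conn = sqlite3.connect("cache.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS posts (
           id    TEXT PRIMARY KEY,
           title TEXT,  -- promoted to a real column so it can be queried/indexed
           body  TEXT   -- the full nested object, stored verbatim as JSON text
       )"""
)

# Upsert keyed on the remote id; the JSON blob stays the source of truth,
# so nothing has to be decomposed into join tables.
conn.execute(
    "INSERT OR REPLACE INTO posts (id, title, body) VALUES (?, ?, ?)",
    (doc["id"], doc["title"], json.dumps(doc)),
)
conn.commit()

# Reading it back restores exactly the nested structure the API returned,
# so local and remote copies stay interchangeable.
row = conn.execute("SELECT body FROM posts WHERE id = ?", ("42",)).fetchone()
assert json.loads(row[0]) == doc
```

The tradeoff is that ad-hoc queries are limited to the promoted columns (or to SQLite's json1 functions); in exchange, the cached object round-trips unchanged, which keeps syncing with the remote copy simple.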

Related

Synchronising subset of mongodb on local machine

I would like to synchronise a subset of a MongoDB database onto my local machine, which would have a lightweight database like NeDB. What methods are available for seeing the difference between the data in the two databases, so that I can retrieve only the data that has changed?

MongoDB to DynamoDB

I have a database currently in Mongo running on an EC2 instance and would like to migrate the data to DynamoDB. Is this possible, and what is the most cost-effective way to achieve it?
When you ask for a "cost-effective way" to migrate data, I assume you are looking for existing technologies that can ease your life. If so, you could do the following:
Export your MongoDB data to a flat text file, say in CSV format, using mongoexport.
Upload that file somewhere in S3.
Import this data from S3 into DynamoDB using AWS Data Pipeline.
Of course, you should design & finalize your DynamoDB table schema before doing all this.
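For concreteness, the first two steps might look like the following Python sketch (the database, collection, field, and bucket names are placeholders, and it assumes the mongoexport CLI and AWS credentials are already set up):

```python
import subprocess

import boto3  # assumes AWS credentials are configured in the environment

# Step 1: export a collection with mongoexport (CSV requires an explicit field list).
subprocess.run(
    [
        "mongoexport",
        "--db=mydb",                # placeholder database name
        "--collection=users",       # placeholder collection name
        "--type=csv",
        "--fields=_id,name,email",  # placeholder field list
        "--out=users.csv",
    ],
    check=True,
)

# Step 2: upload the export to S3, where Data Pipeline can pick it up.
boto3.client("s3").upload_file("users.csv", "my-migration-bucket", "exports/users.csv")

# Step 3 (the S3 -> DynamoDB import) is configured as an AWS Data Pipeline job
# in the console, against the DynamoDB table schema you designed up front.
```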
Whenever you are changing databases, you have to be very careful about the way you migrate data. Certain data formats maintain type consistency, while others do not.
Then there are data formats that simply cannot handle your schema. For example, CSV is great at handling data when it is one row per entry, but how do you render an embedded array in CSV? It really isn't possible. JSON handles this well, but JSON has its own problems.
The easiest example of this is JSON and DateTime. JSON has no specification for storing DateTime values: they can end up as ISO 8601 strings, UNIX epoch timestamps, or really anything a developer can dream up. What about longs, doubles, and ints? JSON doesn't discriminate; it has a single number type, so large 64-bit integers can silently lose precision if they are deserialized as doubles.
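As a concrete illustration of the DateTime ambiguity, here is a short Python sketch (the field name is arbitrary):

```python
import json
from datetime import datetime, timezone

ts = datetime(2023, 5, 1, 12, 30, tzinfo=timezone.utc)

# JSON has no DateTime type, so serializing one fails outright...
try:
    json.dumps({"created": ts})
except TypeError as err:
    print(err)  # Object of type datetime is not JSON serializable

# ...which forces every pipeline to pick (and document) its own convention:
print(json.dumps({"created": ts.isoformat()}))  # {"created": "2023-05-01T12:30:00+00:00"}
print(json.dumps({"created": ts.timestamp()}))  # {"created": 1682944200.0}

# Whichever convention is chosen, the consumer must apply it in reverse,
# or dates silently come back as plain strings or numbers.
```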
This makes it very important that you choose the appropriate translation medium. That generally means you have to roll your own solution: load up the drivers for both databases, read an entry from one, translate it, and write it to the other. This is the best way to be absolutely sure that errors are handled properly for your environment, that types are kept consistent, and that the code properly translates the schema from source to destination (if necessary).
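A minimal sketch of that roll-your-own loop, assuming Python with pymongo and boto3 (the connection string, table names, and type rules are placeholders to adapt to your schema):

```python
from decimal import Decimal

import boto3
from pymongo import MongoClient

source = MongoClient("mongodb://localhost:27017")["mydb"]["users"]  # placeholder source
target = boto3.resource("dynamodb").Table("users")                  # placeholder target

def translate(doc):
    """Translate one top-level MongoDB document into a DynamoDB-safe item.

    Nested documents would need a recursive version of the same rules.
    """
    item = dict(doc)
    item["_id"] = str(item["_id"])           # ObjectId has no DynamoDB equivalent
    for key, value in item.items():
        if isinstance(value, float):
            item[key] = Decimal(str(value))  # DynamoDB rejects Python floats
        elif hasattr(value, "isoformat"):
            item[key] = value.isoformat()    # one explicit DateTime convention
    return item

# Read with one driver, translate, write with the other.
with target.batch_writer() as batch:
    for doc in source.find():
        batch.put_item(Item=translate(doc))
```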
What does this all mean for you? It means a lot of legwork. It is possible somebody has already rolled something broad enough for your case, but I have found in the past that it is best to do it yourself.
I know this post is old, but Amazon has since made this possible with AWS DMS; check this document:
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MongoDB.html
Some relevant parts:
Using an Amazon DynamoDB Database as a Target for AWS Database Migration Service

You can use AWS DMS to migrate data to an Amazon DynamoDB table. Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. AWS DMS supports using a relational database or MongoDB as a source.

Inserting .NET object into MongoDB

We have a large application with hundreds of classes/enums, and we want to use MongoDB to store some of these.
The situation is that there is a current system whereby we binary-serialize the .NET object into a field in a SQL database, then deserialize it on demand. What we want is to put the object into Mongo in a way that will allow us to query the object's properties directly (i.e. without having to load the object into memory, deserialize it, etc.). This is so we can start to get some analytics from the historic data without having to drastically change the code base.
My question is: is this easily possible? Are there built-in serializers in the C# driver to do this?
I'm also open to answers that propose a better way to do this if what I'm trying to do is inherently wrong.
Update: to be clear, what I'm trying to do is take an object that has been loaded using NHibernate, and insert it into Mongo as a Queryable object. Ultimately, I'll want to load it back into memory at some point too.
MongoDB is basically a store of JSON documents, so if you can serialize your objects to JSON you should be OK storing them in MongoDB, and there are lots of JSON serializers for .NET, so it should be easy to find one.
Once everything is stored as JSON in MongoDB, you will be able to query it without any tools beyond the ones used to query the database directly.
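To make "query the object's properties directly" concrete, here is a small sketch using Python's pymongo rather than the C# driver (the dotted-path query model is the same across drivers; the collection and fields are invented):

```python
from pymongo import MongoClient

orders = MongoClient("mongodb://localhost:27017")["app"]["orders"]  # placeholder names

# Once an object is stored as a document, its nested properties are queryable
# in place: no loading and deserializing of whole objects required.
orders.insert_one({
    "orderId": 1,
    "customer": {"name": "Ada", "country": "UK"},
    "lines": [{"sku": "A1", "qty": 2}, {"sku": "B2", "qty": 1}],
})

uk_orders = orders.find({"customer.country": "UK"})  # query a nested property
big_lines = orders.find({"lines.qty": {"$gt": 1}})   # reaches into arrays, too
```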
You can use Simple.Data.MongoDB, a lightweight, dynamic data-access .NET component for MongoDB.

Is there a way to persist HSQLDB data?

We have all of our unit tests written so that they create and populate tables in HSQL. I want the developers who use this to be able to write queries against this HSQL DB, because (1) by writing queries they can better understand the data model, and the ones less familiar with SQL can play with the data before writing the runtime statements, and (2) they don't have access to the test DB, for security reasons. Is there a way to persist the results of the test data so that it can be examined and analyzed with an SQL client?
Right now I am jury-rigging it by switching the data source to a different DB (like DB2/MySQL, then connecting to that DB on my machine so I can play with persistent data), but it would be easier if HSQLDB supported persisting this than to have to explain how to do this to every new developer.
Just to be clear, I need an SQL client to interact with persistent data, so debugging and inspecting memory won't cut it. This has more to do with initial development than with debugging/maintenance/testing.
If you use an HSQLDB Server instance for your tests, the data will survive the test run.
If the server uses a jdbc:hsqldb:mem:aname (all-in-memory) URL for its database, the data will be available while the server is running. Alternatively, the server can use a jdbc:hsqldb:file:filepath URL, in which case the data is persisted to files.
The latest HSQLDB docs explain the different options. Most of the observations also apply to older (1.8.x) versions. However, the latest version 2.0.1 supports starting a server and creating databases dynamically upon the first connection, which can simplify testing a lot.
http://hsqldb.org/doc/2.0/guide/deployment-chapt.html#N13C3D

Synchronizing Core Data data with External Database

I have started working on an iPhone application where I need to synchronize data with an external MySQL database. The current database schema uses GUID/UUID fields as primary keys to maintain relationships between tables. I already have this working between a database app and the MySQL database, so this isn't a question about synchronization per se.
I've started going down the path of using Core Data, but I'm realizing that it maintains relationships between entities using its own schema within the SQLite database.
Am I going down the wrong path using Core Data? If not, how does one synchronize data between a Core Data store and an external database and still maintain the data relationships?
All you need to do is write the logic to translate entities from one DB schema to the other. You can fetch objects from the server and convert them to Core Data objects, and fetch objects from Core Data and convert them to MySQL entities when saving to the server. Nothing too difficult involved, really.
I agree with Griffo; simply translate the rows or entities you retrieve from the MySQL database into managed objects (and vice versa).
If I understand what you are looking to do correctly, I would definitely recommend using Core Data. Translating the data between MySQL and Core Data isn't that hard, and if you use an NSFetchedResultsController to display your data in a UITableView, you practically don't have to write any code.
And you can always preserve the original GUIDs as, for example, optional externalIDs on the imported entities. This way you will be able to troubleshoot your data imports more easily and correlate the data between the two types of data stores.
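For what it's worth, here is a language-agnostic sketch of that GUID-keyed translation in Python (the real implementation would be written against Core Data in Objective-C or Swift; every name below is illustrative):

```python
# Stand-in for the Core Data store, keyed by the preserved server GUID.
local_store = {}

def upsert_from_server(rows):
    """Translate MySQL-shaped rows into local objects, matched by GUID."""
    for row in rows:  # each row is a dict fetched from the MySQL side
        guid = row["guid"]
        obj = local_store.get(guid)
        if obj is None:
            # Keep the server GUID as an external id, which also makes
            # data imports easier to troubleshoot later.
            obj = {"external_id": guid}
            local_store[guid] = obj
        obj["title"] = row["title"]  # translate field by field
        obj["updated_at"] = row["updated_at"]

def rows_for_server():
    """The reverse translation: local objects back into MySQL-shaped rows."""
    return [
        {"guid": o["external_id"], "title": o["title"], "updated_at": o["updated_at"]}
        for o in local_store.values()
    ]
```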