What is the most lightweight way to get a last-modified datetime from an Aqueduct application?

I'm writing a very small REST API using Dart/Aqueduct hosted on Heroku utilizing PostgreSQL.
When communicating with this API, I need to fetch all the data and store it locally in an application. On reboot, the application will ask the API whether any data has changed in any of the databases, which will only be modified through this API.
My question is: how do I check whether data has been modified? Storing it in the Aqueduct channel is not viable, as Heroku boots up servers as needed (and several at a time), which would reset this modified datetime each time.
PostgreSQL cannot supply this modified datetime (https://dba.stackexchange.com/questions/58214/getting-last-modification-date-of-a-postgresql-database-table) - so what do I do? Is the only way to do this to have a separate table storing this information? Can this be done in a more lightweight way, so I wouldn't have to query the database every time e.g. https://my-api.com/lastModified is called? Should I serve a static file, which would be written to on each data modification?
Maybe there exists a smart, lightweight solution!
Cheers :)

Yes, use a separate table to store the date - the ORM is already doing this to track migrations. I'm not sure what lightweight means in this context, but you won't run into any problems with this approach, whereas a file or in-memory storage won't work at all.
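For illustration, here's a minimal sketch of the database side of that approach, assuming PostgreSQL 9.5+ and a hypothetical items table: a statement-level trigger touches a last_modified table on every write, so the /lastModified endpoint becomes a single cheap query. The client code is shown with node-postgres just to keep the example self-contained; the same SQL works from Aqueduct's persistent store.

```typescript
import { Client } from "pg";

// "items" is a made-up table name; repeat the CREATE TRIGGER for each
// table you want tracked.
const setup = `
CREATE TABLE IF NOT EXISTS last_modified (
  table_name  text PRIMARY KEY,
  modified_at timestamptz NOT NULL DEFAULT now()
);

CREATE OR REPLACE FUNCTION touch_last_modified() RETURNS trigger AS $$
BEGIN
  INSERT INTO last_modified (table_name, modified_at)
  VALUES (TG_TABLE_NAME, now())
  ON CONFLICT (table_name) DO UPDATE SET modified_at = now();
  RETURN NULL;  -- return value is ignored for AFTER ... FOR EACH STATEMENT
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER items_touch
AFTER INSERT OR UPDATE OR DELETE ON items
FOR EACH STATEMENT EXECUTE PROCEDURE touch_last_modified();
`;

async function main() {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();
  await client.query(setup);

  // What /lastModified would run: one indexed read, no scan of the data tables.
  const { rows } = await client.query(
    "SELECT max(modified_at) AS last_modified FROM last_modified"
  );
  console.log(rows[0].last_modified);
  await client.end();
}

main().catch(console.error);
```

Because the trigger lives in the database itself, the timestamp stays correct no matter how many dynos Heroku boots.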

Related

MobileApp / Data Sync / Existing database

I'm working through the issues of using offline data sync with a mobile back end, and have worked through numerous articles and documentation regarding its use.
The scenario I'm looking at is a bit different from the normal demo projects. We have an Azure SQL DB that we have been using for some time now, and a use case has come up to use this data in a mobile back end.
I have yet to see an article on best practices for using an existing SQL DB (EF code-first) with offline data sync tables, which need to inherit from EntityData - a base class that adds additional fields for the sync process.
My original thought was to use the EF code-first definitions of the data models with the Mobile App; however, the EntityData requirement doesn't work, as I'm not going to add those fields to a production system.
My question is: what is the best practice for taking data from a production system and getting it to sync with mobile back ends? I'm thinking an intermediary DB is required, but that just means there are three databases/tables to sync - which doesn't feel right at this point in time.
Does anybody know of an article that starts with an Azure DB and covers the process/decisions made to allow data sync to offline devices?
This scenario is discussed in detail in my book - http://aka.ms/zumobook - chapter 3.

Which kind of Google Cloud Platform mobile backend client is appropriate?

THE PROBLEM
I'm writing a mobile app which will allow a user to log in, save some preferences that must be stored in a database, and display congressional bills to the user.
I've only written simple RESTful services with PHP and MySQL in the past. I'd like to take advantage of newer technologies, and am a little lost on general direction.
The bill data (formatted as JSON) can be gathered by running the scrapers found here. Using Docker, I managed to set a working directory and download the files on my local machine.
I've designed a MySQL database for holding the relevant bill and user data.
I started to mess around in Google Cloud Platform, and read the doc that describes the different models. I'm thinking of a few different ideas, but I'm not familiar with GCP or what I can actually accomplish.
QUESTIONS
1) What are App Engine, Compute Engine, and Container Engine each for? I get the gist that Container Engine holds different instances of stuff you load up with Docker, and that Compute Engine sets up a VM, but I don't really understand the relationships. How should I think of them?
2) When I run those scrapers from the shell, where are the files being stored, and how can I check on them? On my computer, I set a working directory, but how do directories work in GCP? Is it just a directory in the currently selected VM, or is this what Buckets are for?
IDEAS
1) Since my bill data already comes as JSON, should I skip the entire process of building a database for the bills and insert them into Firebase somehow? Is this even possible? If so, am I stuck using Firebase's NoSQL, or can I still set up a relational database?
2) I could schedule the scrapers to run periodically, detect new files, and run a script to parse the JSON and insert new bill data into a database (PostgreSQL? MySQL?). Then I would write an API.
3) Download the JSON files to a bucket, and write an API that reads from them. Not sure how the performance would compare to using a DB.
I'm open to other suggestions as well.
For your use case (a stateless web application), App Engine is probably your best choice. The Google documentation has several comparisons of your computing options.
You can use App Engine with PHP and cloud-hosted MySQL if you want, which could be a good way to get your toes wet without going in over your head.
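If you go with idea 2 from the question (periodically parse the scraped JSON and load it into a database), the loader can stay very small. Here is a rough sketch in TypeScript with node-postgres; the table layout, column names, and directory are all made up for illustration, since the real scraper output has its own shape:

```typescript
import { readdirSync, readFileSync } from "fs";
import { join } from "path";
import { Client } from "pg";

// All names here (table, columns, directory) are placeholders.
async function loadBills(dir: string) {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();

  // Keep the raw scraper output as jsonb; add real columns later as needed.
  await client.query(`
    CREATE TABLE IF NOT EXISTS bills (
      bill_id text PRIMARY KEY,
      doc     jsonb NOT NULL
    )`);

  for (const file of readdirSync(dir).filter((f) => f.endsWith(".json"))) {
    const doc = JSON.parse(readFileSync(join(dir, file), "utf8"));
    // Upsert so re-running after each scrape only picks up changes.
    await client.query(
      `INSERT INTO bills (bill_id, doc) VALUES ($1, $2)
       ON CONFLICT (bill_id) DO UPDATE SET doc = EXCLUDED.doc`,
      [doc.bill_id, doc]
    );
  }
  await client.end();
}

loadBills("./scraped").catch(console.error);
```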

Stream Core Data to a remote REST endpoint

There are several apps I use on my Mac that store their data in Core Data. I can see the data I want in CoreDataPro. I want that data - specifically, I want to send changes in it to some remote endpoints (such as Zapier, or some other REST service).
I was thinking of piggybacking on something like RestKit, such that I provide a configuration file saying where the app's store is and which endpoints the data needs sending to. I only need to scrape the data and send it to REST, not do a two-way sync.
I'd prefer a utility that I could configure rather than having to code a Mac application.
I noted http://nshipster.com/core-data-libraries-and-utilities/ - RestKit still seemed the most capable, but in https://github.com/RestKit/RestKit/issues/1748 I was advised that Core Data stores should only be opened by a single application at a time, and that RestKit is really designed for baking into the source app (rather than for database scraping and sending).
What approach would you take?
I also noted:
http://www.raywenderlich.com/15916/how-to-synchronize-core-data-with-a-web-service-part-1
Thanks, Martin.
First, Core Data is an object store in memory. What is written to disk from Core Data can be in one of several formats. One of those formats happens to be SQLite. If the application is writing to SQLite then it is possible to sample that same file and push it somewhere else.
However, each application will have its own data structure so you would need to be flexible in the structure you are handling.
RestKit has no value in this situation, as you are just translating objects into JSON and pushing them to a server. The built-in frameworks do that just fine.
There is no utility to do this at this time. You would need to write it yourself or hire someone to write it.
If I were going to do something like this, I would write it using Core Data itself: interrogate the model from the application that wrote the data in the first place, then translate the database into JSON and push it. You won't be able to tell what is new vs. old, so the server will need to sort that out.
Another option, since you can't diff anyway, is to just push the SQLite file to the server and let the server parse through it.
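To make that concrete, here is a rough sketch of the scrape-and-push approach, assuming the app's store really is SQLite; the store path and endpoint URL are placeholders. The only Core Data-specific knowledge used is that its SQLite stores name entity tables with a Z prefix:

```typescript
import Database from "better-sqlite3";

// Both of these are assumptions - point them at the real store and endpoint.
const STORE = "/path/to/SomeApp/store.sqlite";
const ENDPOINT = "https://hooks.example.com/coredata";

async function push() {
  const db = new Database(STORE, { readonly: true });

  // Core Data names entity tables Z<ENTITYNAME>; this also picks up the
  // internal Z_ bookkeeping tables, which you may want to filter out.
  const tables = db
    .prepare(
      "SELECT name FROM sqlite_master WHERE type = 'table' AND name LIKE 'Z%'"
    )
    .all() as { name: string }[];

  for (const { name } of tables) {
    const rows = db.prepare(`SELECT * FROM "${name}"`).all();
    // No diffing here - the server has to sort out new vs. old, as noted above.
    await fetch(ENDPOINT, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ table: name, rows }),
    });
  }
}

push().catch(console.error);
```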
Other answers might include:
use a middleware platform, e.g. rssbus.com, to send the events (only its SQLite connections are free)
as my target system (http://easy-insight.com) actually has a transmitter that sends new records it sees from MySQL and PostgreSQL, I could migrate from SQLite to PostgreSQL (https://dba.stackexchange.com/questions/2510/tools-to-migrate-from-sqlite-to-postgresql) or use an ETL such as http://www.easyfrom.net (I did ask the vendor for SQLite support a long time ago, but SQLite is just not a priority for them).
I'm wondering whether a good answer (where good excludes Objective-C and includes languages that I do know, such as - to a limited extent - Ruby) is to use MacRuby and its Core Data libraries.
Core Data can seemingly be exposed as an Active Record: https://www.google.com/search?q=macruby+coredata, notably http://www.spacevatican.org/2012/1/26/seeding-coredata-databases-with-ruby/
However, MacRuby seems to have faded - https://github.com/MacRuby/MacRuby/issues/231 - it won't even compile on Mavericks.

How to have complete offline functionality in a web app with PostgreSQL database?

I would like to give a web app with a PostgreSQL database 100% offline functionality. Ideally, the database would be completely replicated in the browser per user and synchronized when online, so that the same code could be used to talk to both the offline and online databases. I know this is possible with PouchDB and CouchDB, but I have not found a solution that works with PostgreSQL. Is this at all possible?
Short answer: I don't know of anything like this that currently exists.
However, in theory, this could be made to work...(long answer:)
Write a PostgreSQL backend for levelup (one exists for MySQL: https://github.com/kesla/mysqldown)
Wire up pouch-server to read/write from your PostgreSQL db using pouchdb's existing leveldb adapter (which in turn will have to be configured to use your postgres backend). Congrats, you can now sync data using PouchDB!
Whether an approach like this is practical in reality for your application is a different question you'll have to answer.
You may be wondering, for example, "will I be able to sync an existing complex schema with multiple tables to the client with this approach?" The answer is probably not - the mysqldown implementation of leveldown uses a single MySQL table with three fields: id, key, and value (source), and I imagine any general-purpose PostgreSQL adapter would be similar (nothing says you can't do a special-purpose adapter just for your app though!).
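To give a sense of how small that storage layer is, the reads and writes such an adapter would wrap reduce to roughly the following (the table and column names mirror mysqldown's id/key/value layout but are otherwise illustrative):

```typescript
import { Client } from "pg";

// A levelup-style backend ultimately reduces to get/put/del over one table.
async function demo() {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();

  await client.query(`
    CREATE TABLE IF NOT EXISTS kv (
      id    serial PRIMARY KEY,
      key   text UNIQUE NOT NULL,
      value text
    )`);

  // put
  await client.query(
    `INSERT INTO kv (key, value) VALUES ($1, $2)
     ON CONFLICT (key) DO UPDATE SET value = EXCLUDED.value`,
    ["doc:1", JSON.stringify({ title: "hello" })]
  );

  // get
  const { rows } = await client.query(
    "SELECT value FROM kv WHERE key = $1",
    ["doc:1"]
  );
  console.log(rows[0]?.value);

  await client.end();
}

demo().catch(console.error);
```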
On the other hand, if you were to implement a couchdb-compatible API (or a subset- you may not need attachments, for example) over your existing database schema, there's nothing stopping you from using PouchDB on the client to talk directly to that as if it were an actual CouchDB - just pop in the URL and call replicate()! Implementing the replication protocol might be a fair bit of work, since you'd need to track revisions and so on somewhere - but again, technically not impossible!
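The client side of that is just ordinary PouchDB usage, something like the sketch below; the remote URL is a placeholder for whatever server you stand up that speaks (a subset of) the CouchDB replication protocol:

```typescript
import PouchDB from "pouchdb";

const local = new PouchDB("bills");
// Hypothetical URL: a real CouchDB, or your own facade over Postgres.
const remote = new PouchDB("https://example.com/db/bills");

// One-shot pull from the server into the browser-side database.
PouchDB.replicate(remote, local)
  .on("complete", (info) => console.log("replicated", info.docs_written, "docs"))
  .on("error", (err) => console.error(err));

// Or keep the two continuously in sync, retrying across network drops.
local.sync(remote, { live: true, retry: true });
```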
There are also implementations of levelup's backend storage that are designed for browsers. See level.js, which could be another way to sync between a server-side Postgres levelup backend and the browser.
TL;DR: There's tons of work being done around JavaScript databases right now. Is syncing with Postgres impossible? Probably not. Would it be a lot of work? Definitely. Worth it? Who knows, but it would be cool.
Without installing PostgreSQL on the client? No. Obviously you can cache data for offline use, but an entire RDBMS plus procedural languages in JavaScript - no.

Xcode database usage confusion

I am new to iPhone app development and have a question about storing data. I've spent quite some time learning about Core Data but am still confused about the concept of a persistent store.
What I understand is that Core Data is just a way of managing the data you download from an external database. But given that Core Data is backed by SQLite, does that mean a SQLite DB exists in memory while the app is running? If so, does that mean that when I use Core Data it will be more effective to download a huge data set at the start? But what about apps such as Twitter or Facebook that require constant updates of data - is a straight NSURLConnection sufficient in these cases? If Core Data is used, will the extra overhead (i.e. data objects) be a burden for such frequent update requests?
I would also like to find out some common ways of setting up an online database for an iPhone app. Is it usually a MySQL server with a homemade Python wrapper that translates the data into JSON? Would a standard server provider supply the whole package? Or is there open-source code?
Many many thanks!
I'm going to go through and try to address each of your questions; let me know if I missed one!
Firstly, Core Data can be used to store information generated in your app as well; there is nothing keeping you from using it one way or another.
The way I understand it, the file (or whatever other storage mechanism Core Data uses) exists regardless of whether or not your app is running. Making a user wait for a large database to be downloaded and loaded into a local store before they can interact with your application is not the best way to do it, in my opinion; people react negatively to an unresponsive UI. When a user runs your app for the first time, it's possible you may need to get a larger set of data, but if any of it is generic and can be preloaded, that is ideal; the rest should be downloaded as the user attempts to access it.
Facebook and Twitter applications work just as you understand: a connection is established and the information is pulled from the appropriate site; the only thing they store is profile information, as far as I know. I would hesitate to use Core Data to store people's information, as eventually, yes, there would be a significant amount of overhead caused by having to store people's news feeds or messages going back months on end.
As for setting up an online database, that is something I'm unfamiliar with, so hopefully someone else can provide some insight on that; if I find something I think may be of use, I will post back here for you. This part may actually merit its own separate question.
Let me know if you need me to elaborate on anything!