How to set up 2-way syncing between an Airtable base and multiple external sources - PostgreSQL

Is this even possible? I would like to update my company's external Postgres database, and potentially their information in Salesforce as well, whenever a new record is added in Airtable.
I'd also like to set up a sync in the other direction, so that updates in the Postgres database (and again, possibly Salesforce) are reflected in Airtable.

Related

PostgreSQL - Periodically copying data from one database to another

I'm trying to set up an architecture with 2 databases, say preview and live, that have the exact same schemas. The use case is that edits can be made to the preview database and then pushed to the live database after they are vetted and approved. The production application would read from the live database.
What would be the most appropriate way to push all data from the preview database to the live database without bringing the live database down? Ideally the copy from preview to live would be an atomic transaction.
I've worked with this type of setup in MSSQL, but I'm fairly new to Postgres, so I'm open to hearing other ways to architect this (with schemas, perhaps?).
EDIT: The main reason to use separate databases is that I may need more than 1 target database (not just a single "live" database). I also may need to switch target databases on the fly without altering the source database schema.
I think what you're looking for is a "hot standby". This would be a separate instance of PostgreSQL, possibly on the same server but usually not, which is a near-real-time replica of the primary server.
In broad strokes, this is done by shipping the binary transaction logs from the primary server to the backup server, and then "replaying" them there. The exact mechanism for transmitting the logs may vary depending on your requirements.
Fortunately, the docs on this are excellent:
https://www.postgresql.org/docs/9.3/static/warm-standby.html
https://www.postgresql.org/docs/9.0/static/hot-standby.html

Copying CloudKit records from one record zone to another?

I am working on an app that will store a relatively small set of records in the CloudKit public database. Each user of the app will read those records but not update them. Along with this app, I am creating a companion app that I and a small group of administrators will use to update the records in that public database.
I have the basic operations of the administrator application working using basic CloudKit operations. I am now trying to implement a feature that requires multiple record types to be changed together. If an app user were to get one of those updated records and not the others, their system would be in an inconsistent state so I'd like to avoid that situation.
The ideal solution to this would be for the administrator application to submit the changes to all related records with a single CKModifyRecordsOperation, and to set the atomic property of that operation to true. Unfortunately, the CloudKit public database's default record zone does not support atomic operations and custom record zones are not supported in the public database. Therefore, there does not seem to be any way to atomically apply changes to the public CloudKit database.
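For reference, a minimal sketch of the atomic save being described, with hypothetical record types and zone name (in Swift the property is isAtomic; it only takes effect in a custom zone, which is exactly why the public database's default zone is the problem):

    import CloudKit

    // Hypothetical custom zone; atomic saves only work in custom zones,
    // which the public database does not allow.
    let zoneID = CKRecordZone.ID(zoneName: "AdminZone",
                                 ownerName: CKCurrentUserDefaultName)
    let header = CKRecord(recordType: "TopicHeader",
                          recordID: CKRecord.ID(recordName: "topic-1-header", zoneID: zoneID))
    let body = CKRecord(recordType: "TopicBody",
                        recordID: CKRecord.ID(recordName: "topic-1-body", zoneID: zoneID))

    let op = CKModifyRecordsOperation(recordsToSave: [header, body],
                                      recordIDsToDelete: nil)
    op.isAtomic = true   // commit all records together, or none of them
    op.modifyRecordsCompletionBlock = { saved, _, error in
        // In an atomic zone a partial failure rolls everything back,
        // so the related records can never be observed half-updated.
        if let error = error {
            print("atomic save failed: \(error)")
        } else {
            print("saved \(saved?.count ?? 0) records atomically")
        }
    }
    CKContainer.default().privateCloudDatabase.add(op)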
As a workaround, I was considering the use of a private database that the administrator app would use. That database could have a custom record zone that supports the atomic operations I need. However, that doesn't help me get the completed records copied over to the public database.
I am wondering if there is any supported way to tell CloudKit to copy a set of records from one record zone to another. I suspect there is no way to do this, but I'm hoping that I'm missing something.
Assuming there is no directly-supported way to do this in CloudKit, does anyone have suggested design changes that would allow me to make data available to my app users via the public CloudKit database in some way that supports atomic updates to multiple records?
Thanks

MS sync framework 2.1 - Delete on client without affecting server

I am using the Sync Framework 2.1 to sync multiple SQL Express databases to a central database server over a WCF service. It works fine. Now I have a requirement that certain records in a table in the client database (source), which is set to "upload only", need to be deleted as part of archiving, but this should not delete the corresponding records on the server (destination). Please note that the records would already have been synced. How can I do this with the Sync Framework - i.e., restrict the sync to new records only, so that deletes are not propagated? Is this possible?
Thanks.
That should be possible. You can intercept the changes from the source in the ChangesSelected event and remove the rows you don't want to sync, to prevent them from flowing through to the destination.
Have a look at this post: Manipulating Change Dataset in SyncFx

Syncing iOS Core Data with remote server which sends XML

My app parses XML from a remote server and stores the objects in Core Data (SQLite storage), so that the user can browse the material when offline by reading from local storage.
The user may make changes to objects while browsing offline, which get stored locally in the Core Data SQLite store. Another user makes some changes to an object on the remote server, and they are stored there. Now, when I detect an internet connection, my app should sync my local storage with the remote server: the remote server gets updated with the changes I made while offline, and my local Core Data store gets updated with whatever changes the other user made on the server.
For example, say there is a forum and it is stored in my local storage so that I can read and reply while I am traveling. When the internet is accessible later, my app should automatically push all my replies stored in Core Data to the remote server and also bring the other posts on the remote server into my local storage.
The remote server is sending XML, which I'm parsing and storing in Core Data. My problem is: how do I sync it?
How does the two-way communication happen when there is a change?
How do I sync only the data that has changed, rather than importing the whole remote database (and vice versa)?
I know one way to do it:
Add one more field to both your local and server databases: a timestamp.
When the user changes data in the local database, set the timestamp to the current time.
Do the same on the server: when someone edits data on the server, set the timestamp to the current time.
When the user connects to the internet, compare the local timestamp to the server timestamp:
Case 1: both are the same - nothing to do.
Case 2: local time > server time - use SQL to get all the rows having a timestamp greater than the server timestamp, and upload them to the server.
Case 3: local time < server time - get all the rows having a timestamp greater than the local timestamp, and add them to the local database.
I am not sure if there is a better way, but this surely works...
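To make the three cases concrete, here is a minimal sketch in Swift, assuming hypothetical push/pull helpers that would be backed by SQLite queries and the XML API:

    import Foundation

    // Decide the sync direction from the two bookkeeping timestamps.
    // pushRowsNewer/pullRowsNewer are hypothetical helpers that select
    // rows whose timestamp column is greater than the given cutoff.
    func sync(localStamp: Date,
              serverStamp: Date,
              pushRowsNewer: (Date) -> Void,
              pullRowsNewer: (Date) -> Void) {
        if localStamp == serverStamp {
            return                        // case 1: nothing to do
        } else if localStamp > serverStamp {
            pushRowsNewer(serverStamp)    // case 2: upload local edits
        } else {
            pullRowsNewer(localStamp)     // case 3: download server edits
        }
    }

Note that a single global pair of timestamps cannot represent edits on both sides between syncs; keeping per-row timestamps (as the SQL queries above imply) is safer.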
One solution could be:
the iPhone syncs its changes to the server
the server merges the new and the old data
the iPhone gets the new changes (from the merge) back from the server
So let the server be the master: it should know how to merge the data, and the clients should only download the data incrementally after changes.
Transactions. You want to follow the ACID rules for transactions. Essentially, before writing an update, you have to make sure the server data has not been changed since you last refreshed it locally.
So the easiest way is to have the user get the most recent update from the server, then write the change, using timestamps to ensure that no other update happens during that process. Better yet, use locking to ensure that nothing else happens in between.
If you Google transactions or ACID there will be a lot of info out there. It's a big area in RDBMS environments, where many users can corrupt the data and locks must be held between writes and updates.
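As a rough sketch of that check-before-write idea (all names here are hypothetical; the real lookups would go through your XML API and the Core Data store):

    import Foundation

    struct LocalEdit {
        let recordID: String
        let lastSyncedAt: Date   // server timestamp observed at the last refresh
    }

    // Only upload if the server copy is unchanged since our last refresh;
    // otherwise pull and merge first, so we never clobber a newer edit.
    func push(_ edit: LocalEdit,
              serverTimestamp: (String) -> Date,
              upload: (LocalEdit) -> Void,
              refreshAndMerge: (String) -> Void) {
        if serverTimestamp(edit.recordID) > edit.lastSyncedAt {
            refreshAndMerge(edit.recordID)   // someone else changed it first
        } else {
            upload(edit)                     // safe: server unchanged since refresh
        }
    }

This is optimistic concurrency rather than a true server-side transaction; for a hard guarantee, the server would need to perform the timestamp check and the write atomically.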

Overwrite database or update (iPhone)?

I have a content-based, read-only iPhone app. Users can select favorite topics, which I need to track. Some topics I'd like to make available between app updates through the App Store, so I'll need to track whether users have downloaded those particular topics until the next App Store update is available. This approach will require two tables for user tracking. All other tables contain mainly static content, aside from any newly downloaded entries.
Before I began tracking user content, I'd always deploy the database on app updates - an overwrite, simple. But now I need to track certain user configurations. Rather than trying to keep track of which app version a user has and running through a list of SQL scripts in the correct order so the user ends up at the right database version, I'm thinking of using two databases: one containing the static content and the other the user data. The static content database is always overwritten, which keeps things simple. The database is currently 250 KB and will grow very slowly.
I have plans to use SDK 3.0 push notification and peer-to-peer as well, which will store any user config data in the user database.
Any one see problems with this approach?
This sounds alright to me. If you're using SQLite, you may want to look into the ATTACH DATABASE command, which lets you keep two databases open on the same connection.
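In case it helps, a minimal sketch of that setup using the SQLite C API from Swift (file and table names are made up):

    import SQLite3

    var db: OpaquePointer?
    sqlite3_open("content.sqlite", &db)   // the static, always-overwritten DB

    // Attach the user DB to the same connection under the alias "userdata".
    sqlite3_exec(db, "ATTACH DATABASE 'user.sqlite' AS userdata", nil, nil, nil)

    // Queries can now join across both files in a single statement.
    let sql = """
    SELECT t.title
    FROM topics t
    JOIN userdata.favorites f ON f.topic_id = t.id
    """
    sqlite3_exec(db, sql, nil, nil, nil)
    sqlite3_close(db)

Overwriting content.sqlite on an app update then leaves user.sqlite untouched.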