How to use PouchDB/CouchDB in a Python/PostgreSQL application?

I am building a web (PWA) application which needs to run offline and should sync data whenever the network is available. The current technology stack is PostgreSQL and Django.
For offline apps, PouchDB and CouchDB seem to be the first choice. But how exactly should a PostgreSQL database be used with PouchDB and CouchDB? What should the architecture be, and which library should I use to sync PostgreSQL data to the user's device?
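One common pattern is to let PouchDB on the device replicate with a CouchDB instance, and have the Django backend consume CouchDB's _changes feed to keep PostgreSQL in sync. The sketch below is only an illustration of that direction of flow, assuming a CouchDB database named `myapp`, a hypothetical Django model `Note`, and the `requests` library; it is not a complete sync solution.

```python
import requests
from myapp.models import Note  # hypothetical Django model mirrored in PostgreSQL

COUCH_URL = "http://localhost:5984/myapp"  # CouchDB database PouchDB replicates with


def pull_changes_into_postgres(since="0"):
    """Poll CouchDB's _changes feed and upsert changed docs into PostgreSQL."""
    resp = requests.get(
        f"{COUCH_URL}/_changes",
        params={"include_docs": "true", "since": since},
        timeout=30,
    )
    resp.raise_for_status()
    body = resp.json()
    for change in body["results"]:
        doc = change.get("doc")
        if doc is None or doc.get("_deleted"):
            continue
        # Map the CouchDB document onto the relational model.
        Note.objects.update_or_create(
            couch_id=doc["_id"],
            defaults={"title": doc.get("title", ""), "body": doc.get("body", "")},
        )
    # Persist body["last_seq"] somewhere so the next poll resumes from it.
    return body["last_seq"]
```

Run something like this periodically (for example from a management command or a scheduled task); PouchDB on the client only ever talks to CouchDB, while PostgreSQL remains the source of truth for the rest of the Django app.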

Related

Sync database on server with database on local machine

I have a database on my server which I would like to have on my laptop so I can use it for development. Is there a way to do this? I was using the default SQLite database with Django, but now I'm using PostgreSQL, which isn't part of the repo.
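One way to do this is to dump the server database with pg_dump and restore it locally with pg_restore. A minimal sketch, wrapped in Python so it can live alongside the Django project; the host, user, and database names are placeholders for your own setup.

```python
import subprocess

# Placeholder connection details -- replace with your server's host, user, and DB name.
REMOTE_HOST = "db.example.com"
REMOTE_USER = "myuser"
DB_NAME = "myproject"

# Dump the remote PostgreSQL database to a custom-format archive.
subprocess.run(
    ["pg_dump", "-h", REMOTE_HOST, "-U", REMOTE_USER, "-Fc",
     "-f", f"{DB_NAME}.dump", DB_NAME],
    check=True,
)

# Restore into a local database (create it first, e.g. with `createdb myproject_dev`).
subprocess.run(
    ["pg_restore", "--clean", "--no-owner", "-d", f"{DB_NAME}_dev", f"{DB_NAME}.dump"],
    check=True,
)
```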

Backup of dashDB on Bluemix - what options are available?

I tried taking a backup of dashDB from Bluemix cloud using Data Studio and am getting the error 'Remote target is unreachable.'
Since this is an admin activity, I assume it should be done on the server. As this is a cloud server, I am trying to understand how this can be done.
Are there any tools which support SSH to the server, and how do I take a backup of the DB? Any documentation in this regard?
Thanks
Which plan are you using?
Have you read the details about backups in the FAQ?
This section might help:
Is my data backed up? Encrypted backups on the full Db2 managed service database are done daily. For the Db2 Warehouse on Cloud Flex Performance plan, the last 7 daily backups are retained. For all other Db2 Warehouse on Cloud plans, the last 2 daily backups are retained. For Db2 on Cloud, the last 14 daily backups are retained. In the Db2 Warehouse on Cloud Flex Performance plan, you can restore your database from any of your retained backups at any time that you choose. In the case of all of the other Db2 Warehouse on Cloud plans, the retained backups are used exclusively by IBM for only system recovery purposes in the event of a disaster or system loss. A request to restore your database from a backup is not supported. You can export your data using Db2 tools such as IBM Data Studio or by using the db2 export command.
For Db2 on Cloud, backups can be stored off site in a different data center or region upon request to IBM Support. These backups are also used exclusively by IBM to recover from only disaster or system loss events. A request to restore your database from a backup is not supported.
Are there any tools which support SSH to the server, and how do I take a backup of the DB? Any documentation in this regard?
SSH is not supported as this is a managed service. This is documented in the FAQ:
How do I access the Db2 managed service database?
You can access your Db2 managed service database through the web console from a browser, or you can connect to the database with a client connection such as JDBC, ODBC, CLI, CLP, or CLPPlus. A direct login to the server with Telnet or a Secure Shell (ssh) is not supported.
If you want to take your own backups at the interval of your choice, exporting your data is your best option. That can be done from the web console or from a database client.
Alternatively, if what you're after is access to historical data, you can use time travel query.
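Since only client connections are allowed, one way to script your own export from Python is via the ibm_db driver. A minimal sketch; the connection string and table name below are placeholders you would copy from your instance's service credentials and schema.

```python
import csv
import ibm_db  # IBM's Db2 driver for Python

# Placeholder connection string -- copy the real values from your service credentials.
conn_str = (
    "DATABASE=BLUDB;HOSTNAME=host.example.com;PORT=50000;"
    "PROTOCOL=TCPIP;UID=myuser;PWD=mypassword;"
)
conn = ibm_db.connect(conn_str, "", "")

# Export one table to CSV as a client-side "backup" (placeholder table name).
stmt = ibm_db.exec_immediate(conn, "SELECT * FROM MYSCHEMA.ORDERS")
row = ibm_db.fetch_assoc(stmt)
with open("orders_export.csv", "w", newline="") as f:
    writer = None
    while row:
        if writer is None:
            writer = csv.DictWriter(f, fieldnames=list(row.keys()))
            writer.writeheader()
        writer.writerow(row)
        row = ibm_db.fetch_assoc(stmt)

ibm_db.close(conn)
```

Repeat per table (or drive it from the system catalog) and schedule it at whatever interval you need.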

Convert MongoDB to Postgres in Rails

I have 2 projects. One is a Rails project containing APIs for a mobile application, whose database is MongoDB; the other is a Spree project, whose database is Postgres. How can I synchronize the two databases? I want to make entries in both databases at the same time: if the Rails API project writes an entry to the Postgres database, it should also make the same entry in the MongoDB database.

Heroku horizontal scaling

I have a Python app running on Heroku using a PostgreSQL database. If I create a database follower, will that follower be used to balance the read load automatically? I know this gives me a failover copy of sorts, but will it relieve my database load?
No -- you'll need to configure your Python software to send SQL queries to both the follower AND the master database in order to actually 'relieve' your database load.
If you're using Django, you'll want to read this: https://docs.djangoproject.com/en/1.8/topics/db/multi-db/
If you're using SQLAlchemy, you'll want to read this: read slave, read-write master setup
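For the Django case, the usual approach is a database router that sends reads to the follower and writes to the primary. A minimal sketch, assuming dj-database-url is installed; the follower's config var name and router module path are placeholders for your app's actual values.

```python
# settings.py
import dj_database_url

DATABASES = {
    # Primary (master) database from Heroku's DATABASE_URL.
    "default": dj_database_url.config(env="DATABASE_URL"),
    # Follower; Heroku exposes it under its own HEROKU_POSTGRESQL_<COLOR>_URL
    # config var -- the color below is a placeholder.
    "follower": dj_database_url.config(env="HEROKU_POSTGRESQL_SILVER_URL"),
}

DATABASE_ROUTERS = ["myapp.routers.ReadReplicaRouter"]  # placeholder module path


# myapp/routers.py
class ReadReplicaRouter:
    """Send reads to the follower and writes to the primary."""

    def db_for_read(self, model, **hints):
        return "follower"

    def db_for_write(self, model, **hints):
        return "default"

    def allow_relation(self, obj1, obj2, **hints):
        # Objects from either database may be related; they hold the same data.
        return True
```

Keep in mind the follower lags slightly behind the primary, so read-your-own-writes flows may still need to go to "default".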

Meteor.js persistent + memory Mongo

Is it possible to connect Meteor manually to 2 or more databases, in order to have a normal Mongo that saves to disk and an in-memory one like Redis?
I'm asking because Mongo already has full support in Meteor, and using it would be a lot easier than Redis or another database.
Right now, a Meteor server can only connect to one (and exactly one) Mongo database.
Redis support is on the roadmap, as is SQL support. Once Meteor supports multiple databases, you will have more options for how to set up your databases as well as dividing up your data between them. The only way to do what you are saying right now is to have your Meteor client connect to two different Meteor servers, and have one of them clear/dump the database regularly.
Source: discussions at Meteor's offices.