I would like to use Apollo GraphQL to connect to multiple Postgres databases which share the same structure (tables, columns). The idea is to have a single endpoint that can access any of these databases (using namespaces), but I couldn't find anything in the documentation about how to do this.
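To make the goal concrete, here is roughly the routing I have in mind, sketched in Python with Ariadne and psycopg2 rather than Apollo (Apollo would need the equivalent in its JavaScript resolvers; every name and DSN below is made up):

```python
# One connection pool per namespace; every database shares the same
# table structure. All names and DSNs here are illustrative.
from ariadne import QueryType, gql, make_executable_schema
from ariadne.asgi import GraphQL
from psycopg2.pool import SimpleConnectionPool

POOLS = {
    "eu": SimpleConnectionPool(1, 5, dsn="dbname=app_eu host=db-eu"),
    "us": SimpleConnectionPool(1, 5, dsn="dbname=app_us host=db-us"),
}

type_defs = gql("""
    type User { id: ID! name: String! }
    type Query { users(namespace: String!): [User!]! }
""")

query = QueryType()

@query.field("users")
def resolve_users(_, info, namespace):
    # the namespace argument picks the database; the query is identical
    # everywhere because the structures are the same
    pool = POOLS[namespace]
    conn = pool.getconn()
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT id, name FROM users")
            return [{"id": r[0], "name": r[1]} for r in cur.fetchall()]
    finally:
        pool.putconn(conn)

# single endpoint: serve this ASGI app with e.g. uvicorn
app = GraphQL(make_executable_schema(type_defs, query))
```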
I need a bit of your wisdom, guys: how can Apollo connect to multiple Postgres databases using different namespaces? Is there another tool better suited to this kind of requirement?
Related
I have created a Flask application using SQLAlchemy as the ORM. I also use flask-login to manage my different users. I want to connect to a PostgreSQL database with different Flask users that correspond to different Postgres users with different privileges.
They should be able to be logged in at the same time and make different queries with their individual rights. I tried all kinds of SQLAlchemy pooling configurations but could not achieve what I want: the open connections always ended up arbitrarily associated with the different users. What is the best way to achieve this kind of "session-persistent" database connection behaviour?
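To make the requirement concrete, the behaviour I'm after looks roughly like this (just a sketch, assuming each Flask user record carries matching Postgres credentials; none of this is my actual code):

```python
# Sketch: one engine (and therefore one connection pool) per Postgres role,
# so open connections are never shared between users with different rights.
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

_engines = {}  # engine cache keyed by Postgres role name

def engine_for(pg_user, pg_password):
    # lazily create a small dedicated pool for each database role
    if pg_user not in _engines:
        url = f"postgresql+psycopg2://{pg_user}:{pg_password}@localhost/appdb"
        _engines[pg_user] = create_engine(url, pool_size=2, max_overflow=3)
    return _engines[pg_user]

def session_for(flask_user):
    # called per request; flask-login's current_user would supply flask_user,
    # which here is assumed to carry its matching Postgres credentials
    engine = engine_for(flask_user.pg_user, flask_user.pg_password)
    return sessionmaker(bind=engine)()
```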
Thanks in advance for any help!
One of our statisticians is stuck trying to read data from MongoDB using SAS.
In my experience, connecting Mongo to other languages always requires a native driver, but in this case I've found that it is only possible using ODBC.
I've tried to find a better way to connect these two pieces of software, but the only idea that came to mind is to expose Mongo via a web service.
Does anyone have a better solution for connecting SAS to MongoDB?
After some attempts, we found that using a web service is the most convenient way to handle MongoDB access in our case.
Some statisticians needed to load data onto laptops from outside the corporate network, so we decided to extend our web service to expose some more information and access it in SAS as described here: https://blogs.sas.com/content/sasdummy/2016/12/02/json-libname-engine-sas/
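For reference, the service we ended up with looks roughly like this (a minimal sketch using Flask and PyMongo; the database, collection, and route names are illustrative):

```python
# Minimal sketch: expose a Mongo collection as JSON so SAS can read it
# (with PROC HTTP plus the JSON libname engine, as in the linked post).
from flask import Flask, jsonify
from pymongo import MongoClient

app = Flask(__name__)
client = MongoClient("mongodb://localhost:27017")
collection = client["analytics"]["measurements"]  # illustrative names

@app.route("/measurements")
def measurements():
    # drop Mongo's _id so the payload is plain JSON for SAS
    docs = list(collection.find({}, {"_id": 0}).limit(1000))
    return jsonify(docs)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```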
Thanks all for the clarification regarding ODBC; it was a real surprise to me that it is still the preferred way to load data in enterprise environments.
I have an application where security is a primary concern and data theft is the main risk. I am using Postgres 9.4 on AWS RDS.
I have several users who need read permission on the db. I know that these users could essentially write a script to scrape all the data from the db, but is there a way to prevent them from using the pg_dump utility?
I am not sure what code examples I can provide for this.
Is there an alternate strategy to use here, i.e. to share db data with developers without allowing them to take dumps of it?
I have a Google Bigtable instance that needs to be populated with data that lives in a Postgres database. My product team gave me a URL that allows me to replicate the database. So, in simple words, I need to duplicate the Postgres database into the Google instance, and the means my product team gave me is this URL. How can I do this? Is there any tutorial that can help me?
If you are already running PostgreSQL and would like to have a mirror of it on Google Cloud Platform, the best and simplest approach may be to run your own PostgreSQL instance on a Google Compute Engine virtual machine, which can be done via several approaches, e.g.:
tutorial for launching PostgreSQL, or
click-to-deploy solution for PostgreSQL by Bitnami
Then, you would want to continuously mirror data from your local instance to the PostgreSQL instance running in Google Cloud to be able to query it. Another SO answer suggests that there are two major approaches to this:
Master/Master replication (Bucardo)
Master/Slave replication (Slony)
Based on your use case, where you want to keep your local PostgreSQL instance as the canonical one and replicate to Google Cloud only for querying, you want Master/Slave replication, with the instance running in Google Cloud as the read-only replica, so you probably want to use the Slony approach.
For a more in-depth look at PostgreSQL solutions for high availability, load balancing, and replication, see the comparison in the manual.
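For the initial, one-off duplication (as opposed to the continuous replication above), piping pg_dump into pg_restore is usually the simplest starting point. A sketch, with placeholder connection strings:

```python
# One-off copy: dump the source database and restore it into the instance
# running in Google Cloud. Both connection strings are placeholders.
import subprocess

SRC = "postgresql://user:secret@source-host:5432/mydb"
DST = "postgresql://user:secret@gce-host:5432/mydb"

dump = subprocess.Popen(
    ["pg_dump", "--format=custom", "--dbname", SRC],
    stdout=subprocess.PIPE,
)
subprocess.check_call(
    ["pg_restore", "--no-owner", "--dbname", DST],
    stdin=dump.stdout,
)
dump.stdout.close()
if dump.wait() != 0:
    raise RuntimeError("pg_dump failed")
```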
We are in the process of building a cluster for our hosted services at work; the final product will be used to host multiple separate services. We are in the middle of deciding how we want to set up our databases. We are running a PostgreSQL database server which all services in the cluster will use. The debate right now is whether to give each service its own schema in a single database or to give each service its own database.
We just aren't sure which is the better solution for us. None of our services share a common structure, and no data needs to be shared between them. What we are most concerned about is ease of use.
Here's what we care most about; we are really hoping for an objective rather than opinion-based answer.
Backups
Disaster recovery - all services vs individual
Security between services
Performance
For some additional information, the cluster is hosted within AWS with our database being an RDS instance.
This is what the official PostgreSQL docs say:
Databases are physically separated and access control is managed at the connection level. If one PostgreSQL server instance is to house projects or users that should be separate and for the most part unaware of each other, it is therefore recommendable to put them into separate databases. If the projects or users are interrelated and should be able to use each other's resources they should be put in the same database, but possibly into separate schemas. Schemas are a purely logical structure and who can access what is managed by the privilege system.
Source: http://www.postgresql.org/docs/8.0/static/managing-databases.html
Disaster recovery - all services vs individual
You can dump and restore one database at a time. You can dump and restore one schema at a time. You can also dump schemas that match a pattern.
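For example (database and schema names are hypothetical; these are plain pg_dump invocations, shown here via Python's subprocess):

```python
import subprocess

# one database at a time (one-database-per-service layout)
subprocess.check_call(
    ["pg_dump", "--format=custom", "--file=service_a.dump", "service_a"])

# one schema at a time from a shared database (schema-per-service layout)
subprocess.check_call(
    ["pg_dump", "--schema=service_a", "--file=svc_a.dump", "shared"])

# schemas matching a pattern
subprocess.check_call(
    ["pg_dump", "--schema=service_*", "--file=all_services.dump", "shared"])
```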
Security between services
I presume you mean isolation between databases versus isolation between schemas. The isolation between databases is stronger and more "natural" for developers concerned with "ease of use". For example, if you use one database per service, every developer can just use the public schema for all development. This might seem "easier" than adding schemas to the search path, or "easier" than using schema.object when programming.
It depends in part on how you manage privileges for the roles you use for development, and on how you manage privileges in each database or schema. You can change default privileges.
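For instance, a schema-per-service setup could manage privileges along these lines (role and schema names are illustrative); the ALTER DEFAULT PRIVILEGES at the end means future tables become readable without per-table GRANTs:

```python
# Sketch of per-schema privilege management; all names are illustrative.
import psycopg2

ddl = """
CREATE SCHEMA IF NOT EXISTS service_a AUTHORIZATION svc_a_owner;
GRANT USAGE ON SCHEMA service_a TO svc_a_readers;
-- tables svc_a_owner creates later in this schema are readable automatically
ALTER DEFAULT PRIVILEGES FOR ROLE svc_a_owner IN SCHEMA service_a
    GRANT SELECT ON TABLES TO svc_a_readers;
"""

with psycopg2.connect("dbname=shared") as conn:  # commits on success
    with conn.cursor() as cur:
        cur.execute(ddl)
```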
Performance
I don't see a measurable difference. YMMV.