How do you specify the DB_URI Postgres connection string for an instance running in Google Cloud SQL? - postgresql

Here's my scenario. I have set up a Postgres instance running in Google Cloud SQL. It's up and running, and if I whitelist my local IP, I can connect directly with no issue.
I then deployed a Docker container (PostgREST), which is a web server that connects to the Postgres DB.
When I configured this on Google Cloud Run, it did have a drop-down option where I could specify DB connectivity and it says that, behind the scenes, it configures Cloud SQL Proxy for this connection.
The container allows environment variables to be passed in to specify which server, etc.
One required parameter is the DB_URI for the Postgres instance. When running locally, it looks like this:
postgres://authenticator:mysecretpassword@localhost:5432/testdb
When I tried to configure this on the cloud version, I tried using the IP 127.0.0.1 (the Google Cloud SQL Proxy documentation says this is how you connect via the proxy). This didn't work.
I then tried using the public IP assigned to the Postgres DB... this didn't work either.
Does anyone know how to specify the correct connection string using this DB_URI format?

I am just going to add this as an answer rather than a comment, since it's easier to read and may help other users. Please don't feel compelled to change the accepted answer.
By following the documentation provided by the OP, the final pattern for the URI became:
# Breaking lines for improved readability
POSTGRES_URI=postgresql:///dbname
?host=/cloudsql/myprojectid:region:myinstanceid
&user=username
&password=password
&sslmode=disable
* Don't forget to prefix the Unix socket path with /cloudsql/.
Any other parameters can be appended normally, as in the sslmode example.
Also, be aware that two important things are mentioned in the Cloud SQL documentation:
Note: The PostgreSQL standard requires a .s.PGSQL.5432 suffix in the socket path. Some libraries apply this suffix automatically, but others require you to specify the socket path as follows: /cloudsql/INSTANCE_CONNECTION_NAME/.s.PGSQL.5432.
In my case, the program I am using already adds the .s.PGSQL.5432 suffix, so I didn't need to add it to my URI.
Warning: Linux based operating systems have a maximum socket path length of 107 characters. If the total length of the path exceeds this length, you will not be able to connect with a socket from Cloud Run (fully managed).
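To make the socket-based connection concrete, here is a minimal Python sketch using psycopg2 (an assumption on my part; the OP's container is PostgREST, which only needs the URI pattern above). The project/region/instance and credentials are placeholders:
import os
import psycopg2

# Placeholder instance connection name in the myprojectid:region:myinstanceid format
instance = os.environ.get("INSTANCE_CONNECTION_NAME", "myprojectid:region:myinstanceid")

conn = psycopg2.connect(
    dbname="dbname",
    user="username",
    password="password",
    host=f"/cloudsql/{instance}",  # socket directory; libpq appends /.s.PGSQL.5432 itself
    sslmode="disable",
)
cur = conn.cursor()
cur.execute("SELECT 1")
print(cur.fetchone())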

Cloud Run does not support connecting to Cloud SQL using IP addresses, so 127.0.0.1 will not work. Cloud Run uses Unix sockets, and you must use a connection string.
The Cloud SQL Proxy connection string looks like this:
myprojectid:region:myinstanceid
You can get the instanceid from the Cloud SQL Instance Details page in the console.
You will also need to grant your Cloud Run service account access to Cloud SQL. It needs at least the Cloud SQL Client role.
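As a tiny illustration (plain string handling, not Cloud Run-specific code), this is how the instance connection name maps to the Unix socket path that Cloud Run mounts; most Postgres clients append the .s.PGSQL.5432 suffix themselves. The values are placeholders from the answer above:
# Placeholder instance connection name
connection_name = "myprojectid:region:myinstanceid"
socket_dir = f"/cloudsql/{connection_name}"    # what you pass as host= or ?host=
socket_file = f"{socket_dir}/.s.PGSQL.5432"    # the actual socket file on disk
print(socket_dir)
print(socket_file)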

It seems that some Postgres client libraries don't support putting the user and password parameters in the URL query string. I had problems with pg for Node.js and would get the error "no PostgreSQL user name specified in startup packet".
An alternative way of writing the connection string is as follows:
Breaking lines for readability:
postgres://db_user:db_password@
%2Fcloudsql%2Fproj_name%3Aus-central1%3Asql_instance_name
/db_name?sslmode=disable
It's like a normal TCP connection string, but you put the path to the Unix socket as the host, encoding the / and : characters. If the first character of the hostname is /, then the hostname will be treated as a filesystem path.
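For reference, a short Python sketch of the encoding step described above (placeholder project/instance names; it assumes only the standard urllib module):
from urllib.parse import quote

# Socket directory for a hypothetical instance, then percent-encode "/" and ":"
socket_dir = "/cloudsql/proj_name:us-central1:sql_instance_name"
encoded_host = quote(socket_dir, safe="")  # -> %2Fcloudsql%2Fproj_name%3Aus-central1%3A...

db_uri = f"postgres://db_user:db_password@{encoded_host}/db_name?sslmode=disable"
print(db_uri)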

Related

Why does the Snowflake PUT command throw error 253003?

Using snowflake-connector-python 2.4.3, we're able to connect and execute DML & DDL successfully. But the put command throws this error:
snowflake.connector.errors.OperationalError: 253003:
253003: <urllib3.connection.HTTPSConnection object at 0x7f2dbd5784a8>:
Failed to establish a new connection: [Errno -2] Name or service not known, file=/tmp/tmpwh9m7__x, real file=/tmp/tmpwh9m7__x
We can manually put files to the specified stage using the SnowSQL client. Does the PUT command create its own connection separate from the connection & cursor we've already created in Python? Does PUT require a specific outbound port opened through our firewall? I assumed everything was going through port 443 (HTTPS).
I'm also facing the same error on Linux with a specific user. I'm trying to figure out why the Snowflake PUT isn't able to use the tmp location, since the user has full access to that tmp directory.
By the way, if I use any other user, I'm able to complete this task. So please check that the user has the right permissions.
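For anyone reproducing this, here is a minimal sketch of issuing PUT through the same Python connector session (connection parameters, stage and file paths are placeholders). Note that PUT first writes a compressed copy of the file to the local temp directory (the file=/tmp/... part of the error) and then uploads it over HTTPS (port 443) to the stage's storage endpoint, which is typically a different hostname than the Snowflake account URL:
import snowflake.connector

# Placeholder credentials and identifiers
conn = snowflake.connector.connect(
    account="my_account",
    user="my_user",
    password="my_password",
    warehouse="my_wh",
    database="my_db",
    schema="my_schema",
)
cur = conn.cursor()
# PUT compresses into the local temp dir, then uploads to the stage over HTTPS
cur.execute("PUT file:///path/to/data.csv @my_stage AUTO_COMPRESS=TRUE")
print(cur.fetchall())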

How to connect build agent to PostgreSQL database

My integration tests for my ASP.NET Core application require a connection to a PostgreSQL database. In my deployment pipeline I only want to deploy if my integration tests pass.
How do I supply a working connection string inside the Microsoft build agent?
I looked under service connections and couldn't see anything related to a database.
If you are using a Microsoft-hosted agent, your database needs to be accessible from the internet.
Otherwise, you need to run the pipeline on a self-hosted agent that can access your database.
I assume the default connection string is in appsettings.json. You could store the actual database connection string in a secret variable, then update the appsettings.json file with that variable's value through a task (e.g. Set Json Property) or programmatically (e.g. a PowerShell script) before running the web app and starting the tests during the build.
If you can use any PostgreSQL database, you can use a service container with a Docker image that provides PostgreSQL (e.g. postgres).
For a classic pipeline, you could call the docker command to run the image.
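If you go with the appsettings.json approach mentioned above, here is a hedged sketch of the update step in Python (the file path, JSON key, and the environment variable carrying the secret are assumptions; a PowerShell script would work equally well):
import json
import os

path = "appsettings.json"
with open(path) as f:
    settings = json.load(f)

# Assume the pipeline secret is exposed to this step as DB_CONNECTION_STRING
settings.setdefault("ConnectionStrings", {})["DefaultConnection"] = os.environ["DB_CONNECTION_STRING"]

with open(path, "w") as f:
    json.dump(settings, f, indent=2)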
I would recommend using runsettings, which you can override in the task; that way you keep your connection string out of source control. Please check this link. In terms of a service connection, you don't need one; all you need is a proper connection string.
Since I don't know the details of how you connect to your DB, I can't give you more info. If you provide an example of how you already connect to the database, I can try to give a better answer.

Transfer MongoDB dump on external hard drive to Google Cloud Platform

As part of my thesis project, I have been given a MongoDB dump of size 240GB which is on my external hard drive. I'll have to use this data to run my Python scripts for a short duration. However, since my dataset is huge and I cannot mongoimport it on my local MongoDB server (I don't have enough internal storage), my professor gave me a $100 Google Cloud Platform coupon so I can use Google Cloud computing resources.
So far I have researched that I can do it this way:
Create a Compute Engine VM in GCP and install MongoDB on the remote instance. Transfer the MongoDB dump to the remote instance and run the scripts to get the output.
This method works, but I'm looking for a way to create a remote database server in GCP so that I can run my scripts locally, along the lines of one of the following:
Creating a remote MongoDB server on GCP so that I can establish a remote Mongo connection and run my scripts locally.
Transferring the MongoDB dump to Google's Datastore so I can use the Datastore API to connect remotely and run my scripts locally.
I have considered MongoDB Atlas, but because of the size of the data I would be billed heavily, and I cannot use my GCP coupon there.
Any help or suggestions on how either of the two methods can be implemented are appreciated.
There are two parts to your question.
First, you can create a Compute Engine VM with MongoDB installed and load your backup onto it. Then open the right firewall rules to allow the connection from your local environment to the Compute Engine VM. The connection is made with a simple login/password.
You can use a static IP on your VM. That way, if the VM reboots you will keep the same IP (and your local connection setup stays valid).
Second, BE CAREFUL with Datastore. It's a good product, a serverless, document-oriented NoSQL database, but it is absolutely not a MongoDB equivalent. You can't perform aggregations and you are limited in search capabilities; it's designed for specific use cases (I don't know yours, but don't assume it is a MongoDB equivalent!).
Anyway, if you use Datastore, you will have to use a service account or install the Google Cloud SDK on your local environment to be authenticated and able to call the Datastore API. There is no login/password in this case.
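If you go the Compute Engine route (the first part above), the local connection could look like this with pymongo (a sketch only; the static IP, port, database name and credentials are placeholders, and it assumes you enabled MongoDB authentication and opened port 27017 in the firewall rule):
from pymongo import MongoClient

# Placeholder static external IP and credentials
client = MongoClient("mongodb://app_user:app_password@STATIC_VM_IP:27017/thesisdb?authSource=admin")
db = client["thesisdb"]
print(db.list_collection_names())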

Set ACE target launcher's internal server port to Bluemix's random port number

I am currently trying to deploy and run an ACE target on an IBM Bluemix Cloud Foundry Java/Liberty buildpack, but without much success.
Symptoms:
During the deploy/re-stage procedure, the ACE launcher's internal server is started with a preset port number (default or set manually via config), while the Bluemix container is dynamically assigned a random port number. Port binding between the two times out and the launch procedure fails.
Option:
The Bluemix random port is accessible via the $PORT environment variable.
Question:
What would be the best/simplest approach to assign the freshly generated Bluemix's random port number to ACE Launcher's internal server?
You can start the ACE launcher like this:
java -jar org.apache.ace.agent.launcher.felix.jar -v -s http://server:${PORT}
Where:
-v -- verbose, mainly so you can better diagnose what is going on
-s URL -- provides the launcher with the URL (which includes the port) of the server
It depends on how ACE takes arguments. The documentation for the Java Buildpack explains how to provide custom JVM arguments that may be able to provide ACE with what it needs (perhaps -s http://localhost:$PORT as suggested by others).

Cannot connect to MongoDB replica set

I'm using the DataNucleus MongoDB Maven plugin and AccessPlatform to connect my Java app to MongoDB using JPA.
I've followed the instructions on http://docs.mongodb.org/manual/tutorial/deploy-replica-set/
on an Ubuntu VM, and added db1.mongo, db2.mongo and db3.mongo to the hosts file on both the guest VM and the host (Mac OS X).
I have a simple Java app connecting to the servers (as described in http://www.datanucleus.org/products/accessplatform_3_0/mongodb/support.html).
When I connect the app to the primary (connection URL: mongodb:db1.mongo:27017/ops?replicaSet=rs0) everything works just fine, but when I add the other two MongoDB servers to the connection URL, so it becomes mongodb:db1.mongo:27017/ops?replicaSet=rs0,db2.mongo:27018,db3.mongo:27019, I get the exception:
com.mongodb.MongoException: can't find a master
at com.mongodb.DBTCPConnector.checkMaster(DBTCPConnector.java:503)
at com.mongodb.DBTCPConnector.innerCall(DBTCPConnector.java:236)
at com.mongodb.DBTCPConnector.call(DBTCPConnector.java:216)
...
I've searched for this error, but the results I have found concern the use of localhost/127.0.0.1. I tried to mitigate that by running MongoDB on a separate VM (and thus a non-local IP), as well as adding the names to the hosts file.
The primary goal with trying MongoDB is to achieve availability, so replication and being able to fail over are extremely important. Transactions and consistency between nodes in case of failure are not a problem, and neither are we concerned about losing an update or two once in a while, so MongoDB looks like a good alternative using JPA (I'm utterly fed up with MySQL :-)).
Thanks in advance for your help!
I used multiple MongoDB servers when I originally wrote that support, and it worked back then. I haven't got time now, but you can look at the DataNucleus code that parses your datastore connection URL and converts it into MongoDB Java API calls. It should split the servers apart and then call "new Mongo(serverAddrs);". If it's passing them in correctly (check with a debugger?), then the problem is possibly Mongo-specific, as opposed to something DataNucleus does for you.
Also make sure you're using v3.1.2 (or later) of datanucleus-mongodb
I think you've misformatted your MongoDB URI. Instead of this:
mongodb:db1.mongo:27017/ops?replicaSet=rs0,db2.mongo:27018,db3.mongo:27019
Do this:
mongodb:db1.mongo:27017,db2.mongo:27018,db3.mongo:27019/ops?replicaSet=rs0
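To illustrate the same host-ordering rule outside DataNucleus, here is a hedged pymongo sketch using the hostnames from the question (the driver choice is an assumption; the point is simply that every host goes before the /database?replicaSet=... part):
from pymongo import MongoClient

client = MongoClient("mongodb://db1.mongo:27017,db2.mongo:27018,db3.mongo:27019/ops?replicaSet=rs0")
db = client["ops"]
print(db.command("ping"))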