Does anyone have experience with PostgREST and Cloud SQL?
I have my SQL instance ready with open access (0.0.0.0/0) and I can access it with a local PostgREST using the Cloud SQL Proxy app.
Now I want to run PostgREST from an instance in the same project, but
I can't find a URI format for PostgREST that supports the Cloud SQL format, as
Google Cloud SQL only exposes Unix sockets like /cloudsql/INSTANCE_CONNECTION_NAME
Config 1
db-uri = "postgres://postgres:password#/unix(/cloudsql/INSTANCE_CONNECTION_NAME)/mydatabase"
db-schema = "api"
jwt-secret = "OOcJ7VoSY1mXqod4MKtb9WCCwt9erJkRQ2tzYmLb4Xe="
db-anon-role = "web_anon"
server-port=3000
Returns {"details":"could not translate host name \"unix(\" to address: Unknown host\n","code":"","message":"Database connection error"}
Config 2
db-uri = "postgres://postgres:password#/mydatabase?unix_socket=/cloudsql/INSTANCE_CONNECTION_NAME"
db-schema = "api"
jwt-secret = "OOcJ7VoSY1mXqod4MKtb9WCCwt9erJkRQ2tzYmLb4Xe="
db-anon-role = "web_anon"
server-port=3000
The parser rejects the unix_socket query parameter:
{"details":"invalid URI query parameter: \"unix_socket\"\n","code":"","message":"Database connection error"}
Config 3
db-uri = "postgres://postgres:password#/mydatabase"
db-schema = "api"
jwt-secret = "OOcJ7VoSY1mXqod4MKtb9WCCwt9erJkRQ2tzYmLb4Xe="
db-anon-role = "web_anon"
server-port=3000
server-unix-socket= "/cloudsql/INSTANCE_CONNECTION_NAME"
server-unix-socket appears to only take a socket lock file path. Feeding it /cloudsql/INSTANCE_CONNECTION_NAME makes PostgREST try to delete the file, as in `postgrest.exe: /cloudsql/INSTANCE_CONNECTION_NAME: DeleteFile "/cloudsql/INSTANCE_CONNECTION_NAME": invalid argument (The filename, directory name, or volume label syntax is incorrect.)`
Documentation
Cloud SQL Doc
https://cloud.google.com/sql/docs/mysql/connect-run
PostgREST
http://postgrest.org/en/v6.0/configuration.html
https://github.com/PostgREST/postgrest/issues/1186
https://github.com/PostgREST/postgrest/issues/169
Environment
PostgreSQL version: 11
PostgREST version: 6.0.2
Operating system: Win10 and Alpine
First you have to add the Cloud SQL connection to the Cloud Run instance:
https://cloud.google.com/sql/docs/postgres/connect-run#configuring
After that, the DB connection will be available in the service on a Unix domain socket at path /cloudsql/<cloud_sql_instance_connection_name> and you can set the PGRST_DB_URI environment variable to reflect that.
Here's the correct format:
postgres://<pg_user>:<pg_pass>@/<db_name>?host=/cloudsql/<cloud_sql_instance_connection_name>
e.g.
postgres://postgres:postgres@/postgres?host=/cloudsql/project-id:zone-id-1:sql-instance
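For illustration, here is the question's Config 1 rewritten with that URI format (credentials and connection name are placeholders):
db-uri = "postgres://postgres:password@/mydatabase?host=/cloudsql/INSTANCE_CONNECTION_NAME"
db-schema = "api"
db-anon-role = "web_anon"
server-port = 3000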
According to Connecting with Cloud SQL, the example is:
# postgres+pg8000://<db_user>:<db_pass>@/<db_name>?unix_sock=/cloudsql/<cloud_sql_instance_connection_name>/.s.PGSQL.5432
Then you can try (just as @marian.vladoi mentioned):
db-uri = "postgres://postgres:password@/mydatabase?unix_socket=/cloudsql/INSTANCE_CONNECTION_NAME/.s.PGSQL.5432"
Keep in mind that the instance connection name should follow the format:
ProjectID:Region:InstanceID
For example: myproject:myregion:myinstance
Anyway, you can find more options here for connecting from external applications and from within Google Cloud.
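Not PostgREST-specific, but a quick way to confirm the socket path and connection-name format from Python is that libpq-based clients such as psycopg2 accept the socket directory as the host. A minimal sketch, with placeholder names throughout:
import psycopg2

# when host points at a directory, libpq connects over the Unix socket inside it
conn = psycopg2.connect(
    host="/cloudsql/myproject:myregion:myinstance",  # placeholder connection name
    dbname="mydatabase",
    user="postgres",
    password="password",
)
print(conn.server_version)
conn.close()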
I tried many variations but couldn't get it to work out of the box, so I'll post this workaround.
FWIW I was able to use an alternate socket location with postgrest locally, but then when trying to use the cloudsql location it doesn't seem to interpret it right - perhaps the colons in the socket path are throwing it off?
In any case, as @Steve_Chávez mentions, this approach does work: db-uri = "postgres://user:password@/dbname", which defaults to the default PostgreSQL socket location (/run/postgresql/.s.PGSQL.5432). So in the Docker entrypoint we can symlink this location to the actual socket injected by Cloud Run.
First, add the following to the Dockerfile (above USER 1000):
RUN mkdir -p /run/postgresql/ && chown postgrest:postgrest /run/postgresql/
Then add an executable file at /etc/entrypoint.bash containing:
#!/bin/bash
set -eEuxo pipefail

CLOUDSQL_INSTANCE_NAME=${CLOUDSQL_INSTANCE_NAME:-PROJECT_REGION_INSTANCE_NAME}
POSTGRES_SOCKET_LOCATION=/run/postgresql
# point the default socket path at the socket Cloud Run injects under /cloudsql
ln -s /cloudsql/${CLOUDSQL_INSTANCE_NAME}/.s.PGSQL.5432 ${POSTGRES_SOCKET_LOCATION}/.s.PGSQL.5432
postgrest /etc/postgrest.conf
Change the Dockerfile entrypoint to CMD /etc/entrypoint.bash. Then add CLOUDSQL_INSTANCE_NAME as an env var in Cloud Run. The PGRST_DB_URI env var looks like postgres://authenticator:password@/postgres
An alternative approach if you don't like this, would be to connect via serverless vpc connector.
I struggled with this too.
I ended up using a key/value one-liner for the DB-URI env variable:
host=/cloudsql/project-id:zone:instance-id user=user port=5432 dbname=dbname password=password
However, I have PostgREST running on Cloud Run, which lets you attach the instance connection name via
INSTANCE_CONNECTION_NAME=/cloudsql/project-id:zone:instance-id
Maybe you can host it there and end up running it serverless; I'm not sure where you are running it currently.
https://cloud.google.com/sql/docs/mysql/connect-run
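If you do go the Cloud Run route, the deploy step might look roughly like this (service name, image, and connection name are placeholders):
gcloud run deploy postgrest \
  --image=gcr.io/project-id/postgrest \
  --add-cloudsql-instances=project-id:zone:instance-id \
  --set-env-vars="PGRST_DB_URI=host=/cloudsql/project-id:zone:instance-id user=user port=5432 dbname=dbname password=password"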
Related
I am fairly new to Eclipse Ditto and have just started using it for my project.
I am trying to connect Cloud hosted mongodb instance to ditto.
Following the documentation, I know that I need to add some variables and pass them to docker-compose. The problem is that I do not know what the values of these variables should be, as there are no examples.
Are all these variables necessary or will just the URI work?
This is my current .env file config
MONGO_DB_URI=mongodb+srv://username:pass@IP
MONGO_DB_READ_PREFERENCE=primary
MONGO_DB_WRITE_CONCERN=majority
The command I am using to start ditto is
docker-compose --env-file .env up
I have removed mongodb service from docker-compose.yml
Nice to hear that you started using Ditto in your project.
You need to set the following env variables to connect to your Cloud hosted MongoDB.
MONGO_DB_URI: Connection string to MongoDB
For more details see: https://docs.mongodb.com/manual/reference/connection-string/
If you have a ReplicaSet your MongoDB URI should look like this: mongodb://[username:password@]mongodb0.example.com:27017,mongodb1.example.com:27017,mongodb2.example.com:27017/?replicaSet=myRepl
I assume you also need to enable SSL to connect to your MongoDB.
To do so set this env var.
MONGO_DB_SSL_ENABLED: true
If you want to use a specific Ditto version you can set the following env var:
DITTO_VERSION=2.1.0-M3 (for example)
If you use .env as file name you can start Ditto with:
docker-compose up
The other options for pool size, read preference and write concern aren't necessary as there are default values in place.
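Putting it together, a .env along the lines of the question's might look like this (hosts, credentials, and the replica set name are placeholders):
MONGO_DB_URI=mongodb://username:password@mongodb0.example.com:27017,mongodb1.example.com:27017/ditto?replicaSet=myRepl
MONGO_DB_SSL_ENABLED=true
DITTO_VERSION=2.1.0-M3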
I have two systems, system A and system B, and both are DB2 servers. I want to be able to access system B's database from system A. Both have a database called TESTDB. I am trying to run the following command to create a server.
CREATE WRAPPER "drdawrapper"
LIBRARY 'libdb2drda.so'
OPTIONS (DB2_FENCED 'Y'
);
db2 "CREATE SERVER "PRD_SERVER_SSL_FLEX" TYPE DB2/UDB VERSION '11' WRAPPER "drdawrapper" AUTHORIZATION "xyz" PASSWORD "xyz" OPTIONS (DB2_CONCAT_NULL_NULL 'Y',DB2_VARCHAR_BLANKPADDED_COMPARISON 'Y',DBNAME 'TESTDB',HOST '169.62.253.230',NO_EMPTY_STRING 'N',PORT '50001',SECURITY 'SSL',STRING_UNITS 'S');"
But I keep getting:
DB21034E The command was processed as an SQL statement because it was not a
valid Command Line Processor command. During SQL processing it returned:
SQL1101N Remote database "TESTDB" on node "<unknown>" could not be accessed
with the specified authorization id and password. SQLSTATE=08004
Node directory:
db2 list node directory
Node Directory
Number of entries in the directory = 1
Node 1 entry:
Node name = TESTNODE
Comment =
Directory entry type = LOCAL
Protocol = TCPIP
Hostname = 123.21.23.12
Service name = 50001
The credentials are correct. I am not sure what node is it looking for. Any pointers?
Your question is more about configuration than programming.
As you appear to be encrypting the federated connection it can be wise to first verify that the encrypted connection works at the command-line, separately from federation. This irons out a lot of the detail and is easier to troubleshoot. After you get that working, you can then begin on encrypting the federated connection.
Please follow the detailed instructions here (choose the correct Db2-version):
You have to know in advance which kind of SSL/TLS trust verification you want: either a single cert (the client trusts the server; simplest and easiest) or multiple certs (both sides trust the other; more setup, arguably more secure), because this determines the configuration.
Ensure both of your Db2 instances and databases are properly configured for SSL.
Catalog the remote node locally with security SSL (db2 catalog tcpip node ... remote ... server ... security ssl).
Catalog the remote database locally on the new node name (db2 catalog database ... at node ...), followed by db2 terminate.
Verify a command-line connect to the remote database using the federated credentials, using the configured db2dsdriver.cfg if using the SSLSERVERCERTIFICATE method, or using the keystore/stash configuration (db2 connect to remotedb user ... using ...). Use the same userid/password that you will use later in the create server command.
Once that command-line connect works (a rough sketch follows below), you can proceed with the encrypted federation link, via db2 create wrapper... and db2 create server....
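For illustration only, using the host, port, and credentials from the question (the local database alias REMDB is hypothetical):
db2 catalog tcpip node TESTNODE remote 169.62.253.230 server 50001 security ssl
db2 catalog database TESTDB as REMDB at node TESTNODE
db2 terminate
db2 connect to REMDB user xyz using xyz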
There's no need to use quotes around the wrapper name; just let it fold. The quotes are redundant noise, although using them is not a mistake.
Inside the script, for the create server command options, instead of AUTHORIZATION "xyz" PASSWORD "xyz" use AUTHORIZATION \"xyz\" PASSWORD \"xyz\" (i.e. escape the quotes).
For one-sided trust, use SSL_SERVERCERTIFICATE in the create server options clause and ensure the value is accurate (fully qualified path to the remote-db2instance-certificate-file), and that the file/directory permissions are valid.
For mutual trusts, use both SSL_KEYSTORE and SSL_KEYSTASH keywords with correct values, in the create server options clause (having previously ensured your keystores are properly populated, as verified by a command-line connect above).
You may also want to consider create user mapping depending on the requirements.
Finally you can create your nicknames, and test out the federated link by querying those nicknames.
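For illustration, the question's create server statement with the quote escaping and a one-sided trust applied might look like this (the certificate path is a placeholder):
db2 "CREATE SERVER PRD_SERVER_SSL_FLEX TYPE DB2/UDB VERSION '11' WRAPPER drdawrapper AUTHORIZATION \"xyz\" PASSWORD \"xyz\" OPTIONS (DB2_CONCAT_NULL_NULL 'Y', DB2_VARCHAR_BLANKPADDED_COMPARISON 'Y', DBNAME 'TESTDB', HOST '169.62.253.230', NO_EMPTY_STRING 'N', PORT '50001', SECURITY 'SSL', SSL_SERVERCERTIFICATE '/path/to/remote-server-cert.arm', STRING_UNITS 'S')"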
I have a Google Cloud SQL instance with a public IP, only accessible to whitelisted IP and through an SSL connection.
I'd like to know how I can connect to this database from Google Colab with Python.
If I try to connect like any external application, the connection is refused since the IP of the "client" is not whitelisted (and I can't whitelist it since I don't know it and it's highly probable it's volatile).
Is there a shortcut, like with Google App Engine, to connect to the database using its instance name and a Google client?
Thanks
A little late to answer, but I think I have a solution, and it involves using the Cloud SQL Proxy. Overall, you first need to use the gcloud SDK (included with Colab) to authenticate, then install the proxy, then spin it up. I did this in two blocks:
# gcloud login and check the DB
!gcloud auth login
!gcloud config set project [YOUR PROJECT ID]
!gcloud sql instances describe [YOUR CLOUDSQL INSTANCE ID]
This last line will output a dump of info and we want connectionName in particular. The next block then downloads the proxy and tells it to proxy for that CloudSQL instance:
# download and initialize the psql proxy
!wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
!chmod +x cloud_sql_proxy
# "connectionName" is from the previous block
!nohup ./cloud_sql_proxy -instances="[connectionName]"=tcp:5432 &
!sleep 30s
Later on, you can check the proxy's logs (and I've found it helpful to do so) with
!cat nohup.out
And finally, you can construct a connection with the address 127.0.0.1:5432 (or whatever port you set above). I did so with psycopg2 like this:
import psycopg2

conn = psycopg2.connect(
    host='127.0.0.1', port='5432', database=[YOUR DB NAME],
    user=[USERNAME], password=[PASSWORD])
It seems to work, though it's definitely a bit slower than a direct connection.
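Once connected, a quick sanity check might look like this (a minimal sketch; any query will do):
cur = conn.cursor()
cur.execute("SELECT version();")
print(cur.fetchone())
cur.close()
conn.close()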
Go to the Cloud SQL Instances page in the Google Cloud Console.
Go to the Cloud SQL Instances page
Click the instance name to open its Instance details page.
Select the Connections tab.
Select the Public IP checkbox.
Click Add network.
In the Network field, enter the IP address or address range you want to allow connections from (for Colab, add two networks: 34.0.0.0/8 and 35.0.0.0/8).
Use CIDR notation.
Optionally, enter a name for this entry.
Click Done.
Click Save to update the instance.
I used the pymysql module to connect to the database and it worked!
In Colab:
import pymysql
conn = pymysql.connect(host="<public IP shown in the SQL instance Overview page>", user="root", passwd="", db="<your database name>")
The simplest way for SQL Server, as long as you allow the two Colab networks already mentioned (34.0.0.0/8 and 35.0.0.0/8):
!pip install pymssql
import pymssql
conn = pymssql.connect(server='public IP',user='user',password='pass',database='dbname')
cursor = conn.cursor()
cursor.execute('SELECT TOP 10 * FROM table;')
row = cursor.fetchone()
while row:
print(str(row))
row = cursor.fetchone()
I don't have a lot of experience with sockets, especially Google Cloud ones. Cloud SQL uses a DSN of the format: mysql:unix_socket=/cloudsql/INSTANCE_CONNECTION_NAME;dbname=DATABASE
How does this get translated into making a real connection? To me it seems like it is missing a domain name.
https://cloud.google.com/appengine/docs/standard/php/cloud-sql/using-cloud-sql-mysql
env_variables:
# Replace USER, PASSWORD, DATABASE, and CONNECTION_NAME with the
# values obtained when configuring your Cloud SQL instance.
MYSQL_DSN: mysql:unix_socket=/cloudsql/INSTANCE_CONNECTION_NAME;dbname=DATABASE
MYSQL_USER: USER
MYSQL_PASSWORD: PASSWORD
Unix sockets on Linux are just entries in the filesystem, placed in a directory that has to be writable (chmod 777).
In this case you need to create a /cloudsql directory and chmod it to 777.
Better documentation found for this by visiting https://cloud.google.com/appengine/docs/flexible/php/using-cloud-sql and clicking UNIX Sockets as the doc option.
Also, you need to download the Cloud SQL Proxy app; these are all just settings for it.
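Outside of App Engine (e.g. on a plain VM or locally), a hedged sketch of what that looks like with the proxy creating the socket (the connection name is a placeholder):
sudo mkdir -p /cloudsql && sudo chmod 777 /cloudsql
./cloud_sql_proxy -dir=/cloudsql -instances=PROJECT:REGION:INSTANCE
# the proxy then creates /cloudsql/PROJECT:REGION:INSTANCE, which is what the unix_socket DSN points at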
I am building a Django site on Google Compute Engine, and I want to put my database in Cloud SQL. Is it possible?
What is the most common way to do this? Installing MySQL on virtual machine or use a Cloud SQL instance?
Thank you.
You can use either Google Cloud SQL or manage your own SQL database, depending on your needs.
To use Cloud SQL, you'd want to follow the instructions here: https://developers.google.com/cloud-sql/docs/external
If you want to manage your own SQL database, you can install MySQL or some other database on an instance. Depending on your needs, you can start with a g1-small with a fairly large disk attached and then later use a larger instance type to run your database.
If you're running your own database, you'll need to make sure to take regular backups and copy them off the database machine, to someplace like Google Cloud Storage. If you're using Cloud SQL, you can use the console or the API to schedule database backups.
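For example, with the gcloud CLI you can turn on scheduled backups or take one on demand (the instance name is a placeholder; flags may vary by SDK version):
gcloud sql instances patch my-instance --backup-start-time=23:00
gcloud sql backups create --instance=my-instance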
This answer is following up from "Well, the problem is that to use Cloud SQL, I must connect using JDBC. I'm using Python. How I can do?"
I am not from the Python world, but I recently connected my Java app on a GCE instance to a Cloud SQL DB (via the cloud-sql-proxy approach, as described here: https://cloud.google.com/sql/docs/compute-engine-access) and didn't see any reason why it shouldn't work for Python too.
Here is what I just tried and easily connected my test Python app to a Cloud-Sql DB, via the cloud-sql-proxy:
Step 1: Download and run the proxy on a local port, like below (this establishes a channel between the local port 3306 and the Cloud SQL database instance identified by the connection name "PROJ_NAME:REGION:SQL_NAME"):
sudo wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64
sudo mv cloud_sql_proxy.linux.amd64 cloud_sql_proxy
sudo chmod +x cloud_sql_proxy
sudo ./cloud_sql_proxy -instances=PROJ_NAME:REGION:SQL_NAME=tcp:3306 &
Step 2: Make sure that python-mysqldb is installed
sudo apt-get install python-mysqldb
Step 3: Run the following test program to connect to the Cloud SQL db, via the local port 3306 set up by the proxy:
import MySQLdb

conn = MySQLdb.connect(host="127.0.0.1", port=3306, user="root", passwd="my_root_password", db="my_db")
x = conn.cursor()
try:
    x.execute("""INSERT INTO Test(test_id) VALUES ('111')""")
    conn.commit()
except:
    conn.rollback()
conn.close()
Hope it helps.