I am building a Django site on Google Compute Engine, and I want to put my database in Cloud SQL. Is it possible?
What is the most common way to do this: installing MySQL on the virtual machine, or using a Cloud SQL instance?
Thank you.
You can use either Google Cloud SQL or manage your own SQL database, depending on your needs.
To use Cloud SQL, you'd want to follow the instructions here: https://developers.google.com/cloud-sql/docs/external
If you want to manage your own SQL database, you can install MySQL or some other database on an instance. Depending on your needs, you can start with a g1-small with a fairly large disk attached and then later use a larger instance type to run your database.
If you're running your own database, you'll need to make sure to take regular backups and copy them off the database machine, to someplace like Google Cloud Storage. If you're using Cloud SQL, you can use the console or the API to schedule database backups.
This answer follows up on the comment "Well, the problem is that to use Cloud SQL, I must connect using JDBC. I'm using Python. How can I do that?"
I am not from the Python world, but I recently connected my Java app on a GCE instance to a Cloud SQL DB (via the cloud-sql-proxy approach, as described here: https://cloud.google.com/sql/docs/compute-engine-access) and didn't see any reason why it shouldn't work for Python too.
Here is what I just tried; it easily connected my test Python app to a Cloud SQL DB via the cloud-sql-proxy:
Step 1: Download and run the proxy on a local port, like below (this establishes a channel between local port 3306 and the Cloud SQL database instance identified by the connection name "PROJ_NAME:REGION:SQL_NAME"):
sudo wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64
sudo mv cloud_sql_proxy.linux.amd64 cloud_sql_proxy
sudo chmod +x cloud_sql_proxy
sudo ./cloud_sql_proxy -instances=PROJ_NAME:REGION:SQL_NAME=tcp:3306 &
Step 2: Make sure that python-mysqldb is installed
sudo apt-get install python-mysqldb
Step 3: Run the following test program to connect to the Cloud SQL DB via local port 3306, set up by the proxy:
import MySQLdb

conn = MySQLdb.connect(host="127.0.0.1", port=3306, user="root", passwd="my_root_password", db="my_db")
x = conn.cursor()
try:
    x.execute("""INSERT INTO Test(test_id) VALUES ('111')""")
    conn.commit()
except:
    conn.rollback()
conn.close()
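Since the original question is about Django, here is a minimal settings.py sketch for pointing Django at the same proxy, assuming the placeholder credentials from the test program above (my_db, root, my_root_password):

# settings.py sketch: Django talking to Cloud SQL through the proxy on 127.0.0.1:3306.
# The values below are the placeholders from the example above, not real credentials.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'my_db',
        'USER': 'root',
        'PASSWORD': 'my_root_password',
        'HOST': '127.0.0.1',
        'PORT': '3306',
    }
}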
Hope it helps.
Related
I'm currently in the process of switching my cloud server from Heroku to DigitalOcean. However, is there a way to migrate the database from the Heroku server to the DigitalOcean one? I use PostgreSQL for my database.
I hope you already got a solution, but in case you didn't, I'll provide a simple guide on how I did it. I am going to assume that you have already created a Postgres database on DigitalOcean. You also need to navigate to your project directory and log in to Heroku using the Heroku CLI, and you need to have PostgreSQL installed or a psql client (installing PostgreSQL will do, as it comes with psql).
Step 1: Create a backup and download the backup from heroku postgres
heroku pg:backups:capture --app <app_name>
heroku pg:backups:download --app <app_name>
The first command will create a backup of your database and the second will download it to your current directory as a .dump file. If you would like to read more, here is an article.
Step 2: Connect to your remote (DigitalOcean) database using psql
Before you can do this, you need to add the machine you are connecting from to the database's list of trusted sources. If you don't, you'll get a Connection Timed Out error, because the database's firewall doesn't allow connections from your local machine or resource (for security reasons).
Step 3: Import the Database
pg_restore -d "postgresql://<database_username>:<database_password>@<host>:<port>/<database>?sslmode=require" --jobs 4 -c "/path/to/dump_file.dump"
This will import your database from your dump file. Just substitute the placeholders with the connection parameters you get from your dashboard. If you would like to read more, here is another article for this step.
One more thing to be clear about: sometimes you will see some harmless error messages when running this command, but it will push through anyway. To learn more about pg_restore, read this article.
And that's it, your database has been migrated. How can you confirm it worked? As for me, I used pgAdmin to connect to the remote database and I saw the tables and data as expected.
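If you'd rather confirm it from Python than pgAdmin, a small psycopg2 sketch works too (psycopg2 is just my assumption here; the placeholders are the same connection parameters used in the pg_restore command):

import psycopg2

# Same placeholder parameters as in the pg_restore URL above.
conn = psycopg2.connect(host="<host>", port="<port>", dbname="<database>",
                        user="<database_username>", password="<database_password>",
                        sslmode="require")
cur = conn.cursor()
# List the public tables to confirm the restore actually landed.
cur.execute("SELECT table_name FROM information_schema.tables WHERE table_schema = 'public'")
for (table_name,) in cur.fetchall():
    print(table_name)
conn.close()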
Hope this helps anyone with the same problem :)
I have a Google Cloud SQL instance with a public IP, only accessible from whitelisted IPs and through an SSL connection.
I'd like to know how I can connect to this database from Google Colab with Python.
If I try to connect like any external application, the connection is refused, since the IP of the "client" is not whitelisted (and I can't whitelist it since I don't know it, and it's highly probable that it's volatile).
Is there a shortcut, as with Google App Engine, to connect to the database using its instance name and a Google client?
Thanks
A little late to answer, but I think I have a solution, and it involves using the Cloud SQL Proxy. Overall, you first need to use the gcloud SDK (included with Colab) to authenticate, then install the proxy, then spin it up. I did this in two blocks:
# gcloud login and check the DB
!gcloud auth login
!gcloud config set project [YOUR PROJECT ID]
!gcloud sql instances describe [YOUR CLOUDSQL INSTANCE ID]
This last line will output a dump of info, and we want connectionName in particular. The next block then downloads the proxy and tells it to proxy for that Cloud SQL instance:
# download and initialize the psql proxy
!wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
!chmod +x cloud_sql_proxy
# "connectionName" is from the previous block
!nohup ./cloud_sql_proxy -instances="[connectionName]"=tcp:5432 &
!sleep 30s
Later on, you can check the proxy's logs (I've found it helpful) with
!cat nohup.out
And finally, you can construct a connection with the address 127.0.0.1:5432 (or whatever port you set above). I did so with psycopg2 like this:
import psycopg2

conn = psycopg2.connect(
    host='127.0.0.1', port='5432', database=[YOUR DB NAME],
    user=[USERNAME], password=[PASSWORD])
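As a quick sanity check once the proxy is up, you could run a trivial query over that connection:

cur = conn.cursor()
cur.execute('SELECT version()')   # trivial round trip through the proxy
print(cur.fetchone())
cur.close()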
It seems to work, though it's definitely a bit slower than a direct connection.
Go to the Cloud SQL Instances page in the Google Cloud Console.
Click the instance name to open its Instance details page.
Select the Connections tab.
Select the Public IP checkbox.
Click Add network.
In the Network field, enter the IP address or address range you want to allow connections from (for Colab, add two networks: 34.0.0.0/8 and 35.0.0.0/8).
Use CIDR notation.
Optionally, enter a name for this entry.
Click Done.
Click Save to update the instance.
I used the pymysql module to connect to the database, and it worked.
In Colab:
pymysql.connect(host="<public IP shown in the instance's Overview page>", user="root", passwd="", db="<your database name>")
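A slightly fuller sketch of that pymysql connection, with the same placeholders (public IP from the instance's Overview page, and your database name):

import pymysql

# Placeholders: public IP from the instance's Overview page, and your database name.
conn = pymysql.connect(host="<public IP>", user="root", passwd="", db="<your database name>")
with conn.cursor() as cur:
    cur.execute("SELECT VERSION()")
    print(cur.fetchone())
conn.close()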
The simplest way for SQL Server, as long as you allow the two Colab networks already mentioned (34.0.0.0/8 and 35.0.0.0/8):
!pip install pymssql

import pymssql

conn = pymssql.connect(server='public IP', user='user', password='pass', database='dbname')
cursor = conn.cursor()
cursor.execute('SELECT TOP 10 * FROM table;')
row = cursor.fetchone()
while row:
    print(str(row))
    row = cursor.fetchone()
I have a Compose for PostgreSQL service on IBM Bluemix which isn't allowing me to run PostGIS functions in my Cloud Foundry Rails app. I have run "CREATE EXTENSION PostGIS;" and I have also added the adapter to database.yml. Compose for PostgreSQL says PostGIS comes installed by default.
I am using Ruby on Rails with the rgeo gem and the error is
ERR NoMethodError: undefined method `st_point' for #
Can you please let me know if there is anything I need to do to get PostGIS working?
Please raise a support request asking for the postgis plugin to be enabled on your compose instance.
Answered my own question. The problem was with the rgeo gem and the adapter. I needed the postgis:// adapter for working with the gem.
Bluemix does not allow you to change the adapter in their connections; it will always be postgresql. To get around this, I set a CUSTOM_DATABASE_URL environment variable with the connection string postgis://<username>:<password>@host:port/<db_name>. Using the cf client, this would look like
cf set-env <app-name> CUSTOM_DATABASE_URL postgis://<username>:<password>@host:port/<db_name>
Then, in the command for my container in the manifest.yml, I prepended setting DATABASE_URL to CUSTOM_DATABASE_URL, specifically
DATABASE_URL=$CUSTOM_DATABASE_URL &&.....
It's a workaround for now, until Bluemix allows us to change the adapter in the connections.
My understanding is that Heroku Postgres runs on top of AWS. Is it possible to configure which datacenter your database is running in? I'm also wondering if the database files are stored on an encrypted filesystem.
Yes, Heroku runs on AWS, but you are not able to specify which datacenter your database runs in. For encryption, look at http://www.postgresql.org/docs/current/static/pgcrypto.html.
Heroku runs out of Amazon US-East. Once you've added a Postgres DB to your app, heroku config will give you the database connection URL, which you can tracert to see where it is hosted.
SQL distributes a pre-initialized catalog cluster, but for PostgreSQL we need to initialize the cluster using initdb and a network service account. This fails in a few cases and causes a bit of misery!
Can we initialize the cluster ourselves and distribute a pre-initialized cluster?
Thanks
The "cluster" (or data directory) depends on the operating system and the architecture. So a data directory that was initialized with initdb on a 32bit Linux will not work on a 64bit Windows.
But you don't need to do that. A service account is only necessary if you want to run PostgreSQL as a service.
You can easily use the ZIP distribution to install and start Postgres without the need for a full-fledged installation or a service account.
The steps to do so are:
Unzip the binaries
Run initdb pointing it to the directory where the database cluster should be created.
Run pg_ctl to start the server.
Note that steps 2 and 3 must be run by the same user, otherwise the server will not have the privileges to write to the data directory.
These steps can easily be put into a batch file or shell script.
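For illustration, here is a minimal Python sketch of steps 2 and 3 using subprocess; the paths are hypothetical (wherever you unzipped the binaries and wherever the cluster should live), and the same idea translates directly to a batch file or shell script:

import subprocess

PG_BIN = r"C:\pgsql\bin"    # hypothetical: where the ZIP distribution was unzipped
DATA_DIR = r"C:\pgdata"     # hypothetical: where the database cluster should be created

# Step 2: create the cluster (run as the same user that will start the server).
subprocess.run([PG_BIN + r"\initdb", "-D", DATA_DIR, "-U", "postgres", "-E", "UTF8"], check=True)

# Step 3: start the server against that data directory, logging to a file.
subprocess.run([PG_BIN + r"\pg_ctl", "-D", DATA_DIR, "-l", "postgres.log", "start"], check=True)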
Hard to understand your question, but I think you are talking about the Windows installer for PostgreSQL, right? What version, what installer, and what about error messages, logging, etc.?
The installer can be found here.
SQL = database language; SQL Server = Microsoft database product.