How to open Google Cloud SQL instance to see database - google-cloud-storage

I have exported my Google Cloud SQL instance to Google Cloud Storage. I exported the file in compressed format (.gz) to a Cloud Storage bucket, then downloaded it to my system and extracted it using 7-Zip. How can I open it in MySQL Workbench to see the database and its values? Its file type is shown as the instance name.

The data exported from Cloud SQL is similar to what you get from mysqldump: essentially a series of SQL statements that, when run on another server, replay all the commands needed to go from a clean state to the exported state.
I'm not very familiar with MySQL Workbench, but from what I've read it lets you manage a MySQL database and browse its tables and data. So you will need to import your exported data into another MySQL server, for example a local one running on your computer, as sketched below.
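A minimal sketch of that import, assuming a local MySQL server and that the extracted dump was renamed to exported-instance.sql (the file and database names below are placeholders):

# create an empty database to hold the import (name is hypothetical)
mysql -u root -p -e "CREATE DATABASE restored_db"
# replay the exported SQL statements into it
mysql -u root -p restored_db < exported-instance.sql

Once the import finishes, MySQL Workbench pointed at the local server can browse the restored database like any other.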
Note that you could also connect directly from MySQL Workbench to your Cloud SQL instance by requesting an IP for your instance and authorizing the network that you'll connect from.

You can connect directly to your Cloud SQL instance. All you need to do is whitelist your IP address and connect through MySQL Workbench as if it's a normal database instance.
You can whitelist your IP as follows:
1. Navigate to https://console.cloud.google.com/sql and select your project.
2. Go to the Connections tab and click Add Network in the Public IP section.
3. Use the connection details on the Overview tab to connect.
Then you can browse your database through Workbench as if it was a local instance.
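To sanity-check the connection before setting it up in Workbench, a plain mysql client call works as well (the host, user, and port below are placeholders; substitute the values from the Overview tab):

# connect to the instance's public IP with the mysql client
mysql --host=203.0.113.10 --port=3306 --user=root --password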

Related

What could I be missing with Prisma client, Cloud Run, and Cloud SQL - my Prisma client can't socket-connect to my Cloud SQL instance DB?

Background
I have a NestJS project with Prisma ORM, and I am continually receiving the error:
PrismaClientInitializationError: Can't reach database server at `localhost`:`5432`
This is happening during the Cloud Build Deploy step.
Since this is a containerized application (attempting to) run in a Cloud Run instance, I'm supposed to use a socket connection. Here's the documentation from Prisma on connecting to a Postgres DB through a socket connection: https://www.prisma.io/docs/concepts/database-connectors/postgresql#connecting-via-sockets
Connecting via sockets
To connect to your PostgreSQL database via sockets, you must add a host field as a query parameter to the connection URL (instead of setting it as the host part of the URI). The value of this parameter must then point to the directory that contains the socket, e.g.: postgresql://USER:PASSWORD@localhost/database?host=/var/run/postgresql/
Note that localhost is required, the value itself is ignored and can be anything.
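Putting that together with Cloud Run's /cloudsql socket directory, the URL shape I'm aiming for looks roughly like this (every value below is a placeholder, not my real configuration):

export DATABASE_URL="postgresql://USER:PASSWORD@localhost/DB_NAME?host=/cloudsql/PROJECT_ID:REGION:INSTANCE_NAME"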
I've done this to the letter, as described in the Cloud SQL documentation, except that I percent-encoded the path to the directory containing the socket. I've tried it both with and without the trailing slash.
So my host var looks like this, mapped from the percent-encoded values:
/cloudsql/<MY CLOUD SQL CONNECTION NAME>/<DB>
I've read over the Cloud Run documentation, and in my mind I should expect a different error if the Cloud Run instance itself can't connect to the Cloud SQL instance. I've followed "Make sure you have the appropriate permissions and connection" from the documentation a few times now.
Is there anything obvious that I'm missing? Am I wrong to expect a different error when the Cloud Run instance simply isn't connecting to the Cloud SQL instance?
Things I've tried & things I know
I CAN connect directly to the Cloud SQL instance locally through psql
I CAN run a local server with the Cloud SQL instance public IP and establish a client connection & interact with the database
I CAN successfully create an image and run a container from that image locally
My big concern
I'm not clear on the order in which things should connect to the Cloud SQL instance. To me, the Cloud Run to Cloud SQL connection MUST be established before the application running inside the Cloud Run instance can establish its own connection through the socket to the Cloud SQL instance. Am I thinking through that correctly?

Copy data from Postgres DB (GCP Project A) to another Postgres DB (GCP Project B)

I would be happy to get your help/feedback regarding this data load.
Goal:
Load source data from a Postgres database located in GCP project A into another Postgres database located in GCP project B.
Challenge:
Get a connection to the Postgres DB in GCP project A (I have an IAM account with sufficient rights to run a COPY TO / COPY FROM command) and copy the table either to a CSV file or to a dump that can then be inserted into the other Postgres DB in GCP project B.
How do I connect to the database with this IAM email account (e.g., if I create a key, where should I store the JSON key file, and would that approach even be feasible)?
Another way I've researched was psycopg2, so that I could use cursor.copy_expert (which needs no superuser rights or Postgres user credentials) to copy the data, but I didn't succeed in connecting to the database with psycopg2 due to challenges with the Cloud SQL Auth Proxy (see the sketch after this question).
Another idea was to use pg_dump or gcloud sql export csv.
I would be curious whether some of you have faced a similar challenge, how you solved it, and what the best way/practice might be.
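For reference, the proxy-plus-copy route mentioned above could look like this sketch, assuming the v1 cloud_sql_proxy binary, password authentication, and placeholder names throughout:

# expose the source instance on a local port via the Cloud SQL Auth Proxy
cloud_sql_proxy -instances=project-a:europe-west1:source-instance=tcp:5433 &
# psql's client-side \copy writes the CSV locally and needs no superuser rights
psql "host=127.0.0.1 port=5433 dbname=sourcedb user=copyuser" \
  -c "\copy my_table TO 'my_table.csv' WITH (FORMAT csv, HEADER)"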
You can try the Database Migration Service. You can set up a continuous migration configuration and use Cloud SQL for PostgreSQL.
Hello, after a lot of searching I've come to these solutions:
If you need a continuous copy, use the Database Migration Service; check this documentation.
If you need a one-shot copy, you can either:
restore your instance into the other project (see the bottom of this documentation), or
create a bucket, back up your instance to it, and then import it from the other project, as sketched below.
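A minimal sketch of the bucket route with gcloud, assuming the instances' service accounts have been granted access on the bucket and that all names below are placeholders:

# export from the source instance in project A into a shared bucket
gcloud sql export sql source-instance gs://transfer-bucket/dump.sql.gz \
    --database=sourcedb --project=project-a
# import the dump into the target instance in project B
gcloud sql import sql target-instance gs://transfer-bucket/dump.sql.gz \
    --database=targetdb --project=project-b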

Is there any way to create a directory in the data directory location of an Amazon RDS PostgreSQL instance?

I can connect to an AWS RDS PostgreSQL instance from another PostgreSQL client, but I am not able to see the data directory and configuration files. Is there any way to edit/view the data directory and configuration files?
If you want to work with the file system, use an EC2 instance with Postgres installed and configured as you wish. On RDS, neither postgresql.conf nor pg_hba.conf can be edited directly on the file system.
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.PostgreSQL.CommonDBATasks.html#Appendix.PostgreSQL.CommonDBATasks.Parameters
Instead, use the Amazon-provided interface (DB parameter groups) to change supported parameters, or use the SET command where possible.
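For example, changing a supported parameter through a custom DB parameter group could look like this with the AWS CLI (the group and parameter names below are placeholders):

# update a dynamic parameter in a custom parameter group attached to the instance
aws rds modify-db-parameter-group \
    --db-parameter-group-name my-postgres-params \
    --parameters "ParameterName=log_min_duration_statement,ParameterValue=500,ApplyMethod=immediate"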

Can db2 import or load be used to populate DashDB?

I'm looking to bulk load millions of rows into a DashDB database. After connecting using the DB2 CLI, I enter a command like:
db2 import from rowsToImport.csv of del insert into MY_TABLE
with results:
SQL0551N "DASHXXX" does not have the required authorization or privilege to
perform operation "BIND" on object "NULLID.SQLUAJ19". SQLSTATE=42501
Is this an inherent limitation with DashDB, or is something configured incorrectly on my client? I get a similar message when trying db2 load:
SQL2019N An error occurred while utilities were being bound to the database.
P.S. I'm aware of the REST API for DashDB for loading data; I'm asking specifically how/if bulk loads can be done with the DB2 command line as an alternative option.
As per the dashDB documentation, you can use the Command Line Processor Plus (CLPPlus). It is included in the dashDB driver package and provides a command-line user interface that you can use to connect to the dashDB database, BLUDB. You can use CLPPlus to define, edit, and run statements, scripts, and commands. Please also take a look at Connecting CLPPlus to the dashDB database to see how to connect and use the CLI.
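A CLPPlus connection might look like this sketch (the hostname, port, and user are placeholders; CLPPlus prompts for the password):

# connect CLPPlus to the BLUDB database on a dashDB host
clpplus dashuser@dashdb-host.example.com:50000/BLUDB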
Please note that in CLPPlus the IMPORT, EXPORT, and LOAD commands have a restriction that processed files must be on the server: see here. So you would have to copy the input load file onto the remote server first with SCP. However, the SSH/SCP protocol is blocked (not accessible) for a normal dashDB user.
Only geospatial data can be loaded from your local machine to dashDB, using the IDA LOADGEOSPATIALDATA command in CLPPlus.
The file to be loaded into dashDB using that command can be in the local file system, accessible to the CLPPlus user.
Alternative ways to do that are:
dashDB REST API (as you already mentioned). See Load delimited data using the REST API and cURL.
load the csv directly from the dashDB dashboard on Bluemix. See Loading data from the desktop into IBM dashDB.
load the csv using IBM Data Studio. See dashDB large file load using IBM Data Studio.
According to this technote, the package NULLID.SQLUAJ19 belongs to one of the early DB2 10.1 fix packs, so I suspect your client version is 10.1. When you execute the IMPORT command, it needs to bind some packages of that older version, since dashDB is DB2 10.5, obviously.
You may want to try installing the latest DB2 client fix pack, as the necessary packages may be already bound in the database.
To verify, you could run:
select pkgname from syscat.packages where pkgschema = 'NULLID' and pkgname like 'SQLUA%'
You should see "SQLUAK20", which seems to be the corresponding package in DB2 10.5.
If that doesn't work, your other option might be to move to a dedicated dashDB instance, as you won't have sufficient privileges to bind missing packages in the entry-level shared dashDB service.

JBOSS AS 7 Database internal

I'm connected to the internal database java:jboss/datasources/ExampleDS.
I want to browse the data (tables, rows, etc.) via SQL in the JBoss administration console, but there is no such option in the Administration menu. Can someone help me with how to browse the data, or is there some other tool for this?
The database has nothing to do with the application server. First you have to find out which database server the datasource ExampleDS is using, then connect to that server. For example: if ExampleDS uses MySQL on 192.168.1.1:3306, you have to use a MySQL client (e.g. MySQL Workbench) to connect to that server. There you can see the actual tables.
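To find out which database server ExampleDS points at, the JBoss AS 7 CLI can read the datasource definition (this sketch assumes a default standalone install and a running server):

# dump the ExampleDS definition, including its connection-url and driver
$JBOSS_HOME/bin/jboss-cli.sh --connect \
    --command="/subsystem=datasources/data-source=ExampleDS:read-resource"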