How to import data files from S3 to PostgreSQL RDS

I am very new to AWS and PostgreSQL.
I have created a PostgreSQL DB (using RDS on AWS).
I have uploaded several documents to multiple S3 buckets.
I have an EC2 instance (Amazon Linux, 64-bit) running.
I tried to use Data Pipeline, but no template seems to be available for Postgres, and I can't figure out how to connect to my RDS instance and import/export data from Postgres.
Since no Data Pipeline template is available, I assumed I could use the EC2 instance to grab the files from my S3 buckets and import them into Postgres. If that is possible, I have no idea how. Please advise.

S3 -> RDS direct load is now possible for Aurora PostgreSQL and RDS PostgreSQL >= 11.1 via the aws_s3 extension:
Amazon Aurora with PostgreSQL Compatibility Supports Data Import from Amazon S3
Amazon RDS for PostgreSQL Now Supports Data Import from Amazon S3
The parameters are similar to those of the PostgreSQL COPY command:
psql=> SELECT aws_s3.table_import_from_s3(
           'table_name', '', '(format csv)',
           'BUCKET_NAME', 'path/to/object', 'us-east-2'
       );
Be warned that this feature does not work for older versions.
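Before table_import_from_s3 will work, the extension has to be created in the database and the RDS instance needs an IAM role that is allowed to read the bucket. A minimal setup sketch (the instance identifier, role ARN, and region below are placeholders, not values from the question):

psql=> CREATE EXTENSION aws_s3 CASCADE;  -- also installs the aws_commons extension

# attach a role whose policy grants s3:GetObject on the bucket (names are placeholders)
aws rds add-role-to-db-instance \
    --db-instance-identifier my-rds-instance \
    --feature-name s3Import \
    --role-arn arn:aws:iam::123456789012:role/rds-s3-import-role \
    --region us-east-2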

I wish AWS would extend the COPY command in RDS PostgreSQL the way they did in Redshift, but for now they haven't, so we have to do it ourselves:
Install awscli on your EC2 box (it may already be installed by default)
Configure awscli with your credentials
Use the aws s3 sync or aws s3 cp commands to download the files from S3 to a local directory
Use psql's \COPY command to load the files into your RDS instance (the backslash is required so that COPY reads from the client-side directory)
Example:
aws s3 cp s3://bucket/file.csv /mydirectory/file.csv
psql -h your_rds.amazonaws.com -U username -d dbname -c "\COPY table FROM '/mydirectory/file.csv' CSV HEADER"
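If there are many files, the same two steps can be scripted. A small sketch using aws s3 sync (the bucket, prefix, directory, and table names are hypothetical):

# pull down every CSV under the prefix, then load each one client-side
aws s3 sync s3://my-bucket/exports/ /data/exports/
for f in /data/exports/*.csv; do
    psql -h your_rds.amazonaws.com -U username -d dbname \
         -c "\COPY my_table FROM '$f' CSV HEADER"
done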

The prior answers have been superseded by more recent developments at AWS.
There is now excellent support for S3-to-RDS loading via the Data Pipeline service (which can be used for many other data conversion tasks too; this is just one example).
This AWS article covers S3-to-RDS-MySQL, and it should be very similar for RDS PostgreSQL:
http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-template-copys3tords.html

If you can launch the psql client on the EC2 instance and connect to RDS, you should be able to use the following command:
\copy customer_orders from 'myfile.csv' with DELIMITER ','

Related

How to restore exported RDS snapshot from S3 to RDS cluster

I have an AWS RDS Aurora PostgreSQL cluster (compatible with PostgreSQL 13.4).
I successfully followed this tutorial to export my Aurora PostgreSQL cluster snapshot to S3, and it seems that all the data was exported.
Now I'm trying to restore the exported snapshot from S3 to a PostgreSQL RDS cluster, and I couldn't find any explanation of how to do it.
Any idea how to do it? Maybe I first need to restore the exported data from S3 into a snapshot and then connect it to RDS, or is there some other way?
The RDS Snapshot to S3 export feature is not intended for additional backups of your data. It is intended to convert your data to Parquet for use in analytics tools like Redshift or Athena. Some data type conversion happens during this export process.
There is currently no method available to import these Parquet files back into RDS. You would have to write some code yourself to read the Parquet files and insert the data back into a running RDS instance if you needed that.
If you are just wanting a secondary backup of your RDS instance in addition to the RDS snapshots, you could either look into cross-region or cross-account copies of your RDS snapshots, or look into using the AWS Backup service.
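For the secondary-backup use case, a cross-region snapshot copy is a one-liner with the awscli. A sketch (the snapshot identifiers and regions are placeholders; run it in the destination region, and add --kms-key-id if the snapshot is encrypted):

# copy an Aurora cluster snapshot from us-east-1 into us-west-2
aws rds copy-db-cluster-snapshot \
    --source-db-cluster-snapshot-identifier arn:aws:rds:us-east-1:123456789012:cluster-snapshot:my-snapshot \
    --target-db-cluster-snapshot-identifier my-snapshot-copy \
    --region us-west-2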

Binary backup of AWS RDS PostgreSQL

I am looking for a way to do a regular binary backup of my AWS RDS PostgreSQL database, so that I can use the copy locally and mount it in my Docker postgis container. As far as I understand, I cannot connect to the RDS host and run pg_basebackup.
So what is the best way to do this task?
The ideal workflow would be: an automatic daily binary backup is stored in AWS S3; then, locally, a shell script downloads the files to a folder that is mounted into the postgis container.
Any ideas whether that is possible?
I have one additional requirement: I would like to exclude specific tables from the backup.
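Since RDS does not allow pg_basebackup, the closest workable substitute is a custom-format pg_dump, which also supports excluding tables. A sketch of the workflow described above, with all host, bucket, and table names as placeholders (note this is a logical dump, not a true binary backup):

# daily, from a machine that can reach RDS (e.g. cron on an EC2 box)
pg_dump -h your_rds.amazonaws.com -U username -d dbname \
        -Fc --exclude-table=big_log_table \
        -f /tmp/dbname.dump
aws s3 cp /tmp/dbname.dump s3://my-backup-bucket/daily/dbname.dump

# locally: fetch the archive and restore it into the postgis container
aws s3 cp s3://my-backup-bucket/daily/dbname.dump /backups/dbname.dump
pg_restore -h localhost -U postgres -d dbname --clean /backups/dbname.dump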

Loading one table from RDS / Postgres into Redshift

We have a Redshift cluster that needs one table from one of our RDS / Postgres databases. I'm not quite sure of the best way to export that data and bring it in, or what the exact steps should be.
Piecing together various blogs and articles, the consensus appears to be: use pg_dump to copy the table to a CSV file, copy that file to an S3 bucket, and from there use the Redshift COPY command to bring it into a new table. That's my high-level understanding, but I am not sure what the command-line switches should be, or the actual details. Is anyone doing this currently, and if so, is what I have above the 'recommended' way to do a one-off import into Redshift?
It appears that you want to:
Export from Amazon RDS PostgreSQL
Import into Amazon Redshift
From Exporting data from an RDS for PostgreSQL DB instance to Amazon S3 - Amazon Relational Database Service:
You can query data from an RDS for PostgreSQL DB instance and export it directly into files stored in an Amazon S3 bucket. To do this, you use the aws_s3 PostgreSQL extension that Amazon RDS provides.
This will save a CSV file into Amazon S3.
You can then use the Amazon Redshift COPY command to load this CSV file into an existing Redshift table.
You will need some way to orchestrate these operations, which would involve running a command against the RDS database, waiting for it to finish, then running a command in the Redshift database. This could be done via a Python script that connects to each database (eg via psycopg2) in turn and runs the command.
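A minimal sketch of those two steps driven from a shell script (a Python/psycopg2 version would follow the same pattern). The bucket, region, table, and role names are placeholders; it assumes the aws_s3 extension and an s3Export role are already set up on the RDS instance, and that the Redshift cluster has an attached IAM role that can read the bucket:

# 1. export the table from RDS PostgreSQL to S3 via the aws_s3 extension
psql -h your_rds.amazonaws.com -U username -d dbname -c "
    SELECT * FROM aws_s3.query_export_to_s3(
        'SELECT * FROM my_table',
        aws_commons.create_s3_uri('my-bucket', 'exports/my_table.csv', 'us-east-1'),
        options := 'format csv'
    );"

# 2. load the CSV into an existing Redshift table
psql -h my-cluster.abc123.us-east-1.redshift.amazonaws.com -p 5439 -U username -d dbname -c "
    COPY my_table
    FROM 's3://my-bucket/exports/my_table.csv'
    IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
    CSV;"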

How to export result set as CSV from Aurora Postgres DB to AWS-S3?

As part of my Flask and Celery application, I'm trying to move data from an AWS Aurora Postgres DB to Redshift.
I'll be running this application in Kubernetes.
My approach is to query the Aurora Postgres database, write the result set to a CSV file saved on an attached volume, upload it to S3, and then import the file into Redshift.
However, I came across an article which describes uploading the result set directly to S3 as a CSV file, with no intermediate volume:
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Integrating.SaveIntoS3.html
It mentions the SELECT INTO OUTFILE S3 command, but only for MySQL; it says nothing about Postgres.
Is it even possible to use that command on an Aurora Postgres DB to export to S3?
If you can connect to the database with psql, you can use the \copy command to export the output of any SELECT statement to a CSV:
https://codeburst.io/two-handy-examples-of-the-psql-copy-meta-command-2feaefd5dd90
https://dba.stackexchange.com/questions/7651/postgres-client-copy-copy-command-doesnt-have-access-to-a-temporary-table
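For example (the host, table, and bucket names are illustrative), export locally with \copy and then push the file up with the awscli:

psql -h your_aurora.amazonaws.com -U username -d dbname \
     -c "\copy (SELECT * FROM my_table) TO '/tmp/result.csv' WITH CSV HEADER"
aws s3 cp /tmp/result.csv s3://my-bucket/exports/result.csv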
SELECT INTO OUTFILE S3 is an Aurora MySQL feature only; Aurora comes in separate MySQL-compatible and PostgreSQL-compatible editions, and the command does not exist on the Postgres side. The Aurora PostgreSQL equivalent is the aws_s3.query_export_to_s3 function provided by the aws_s3 extension, which exports a query result directly to S3.

Is there any approach to migrate a PostgreSQL database from Azure to AWS RDS PostgreSQL?

I was able to migrate from an on-prem database to AWS using the DMS service, but I couldn't migrate a database from Azure to AWS.
Is there a better approach?
A pretty low-level but effective way would be to:
export your Azure data as CSV (psql's \copy can write any table or query to CSV; pg_dump by itself does not produce CSV),
copy it into an AWS S3 bucket (pretty easy with the awscli Python package), and
load from S3 into RDS Postgres.
It would be great if the files could be moved directly from Azure Blob Storage to S3, but I don't think Azure Postgres supports dumping directly to blob storage anyway.
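A sketch of that path using psql and the awscli (the server names, bucket, and table are placeholders; the final load could also use aws_s3.table_import_from_s3 as shown in the first answer above):

# 1. export a table from Azure Database for PostgreSQL as CSV
psql -h myserver.postgres.database.azure.com -U username -d dbname \
     -c "\copy my_table TO '/tmp/my_table.csv' WITH CSV HEADER"

# 2. stage the file in S3
aws s3 cp /tmp/my_table.csv s3://my-migration-bucket/my_table.csv

# 3. download next to the RDS client and load it with a client-side \copy
aws s3 cp s3://my-migration-bucket/my_table.csv /tmp/my_table.csv
psql -h your_rds.amazonaws.com -U username -d dbname \
     -c "\copy my_table FROM '/tmp/my_table.csv' WITH CSV HEADER"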