Migration from AWS RDS PostgreSQL to Elasticsearch using AWS DMS

To improve read performance, we are syncing RDS PostgreSQL data to AWS Elasticsearch. We used AWS DMS for this and it works fine. But joining tables is a challenge: we need a way to denormalize the data and push it into Elasticsearch using AWS DMS. For example, a customer record and its orders in PostgreSQL should become a single record in Elasticsearch.
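To make the target shape concrete, here is a minimal sketch of the joined record. The table and column names are assumptions, and DMS itself would not perform this join; the point is only the document shape we want per order.

import json
import psycopg2  # assumed driver for reading the joined rows from RDS

# Hypothetical connection details and table/column names.
conn = psycopg2.connect(host="my-rds-host", dbname="mydb", user="app", password="...")
with conn, conn.cursor() as cur:
    cur.execute("""
        SELECT o.id, o.order_date, o.total,
               c.id, c.name, c.email
        FROM orders o
        JOIN customers c ON c.id = o.customer_id
    """)
    for order_id, order_date, total, cust_id, name, email in cur:
        # One denormalized document per order: the shape we want in Elasticsearch.
        doc = {
            "order_id": order_id,
            "order_date": str(order_date),
            "total": float(total),
            "customer": {"id": cust_id, "name": name, "email": email},
        }
        print(json.dumps(doc))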

Related

AWS data migration from MongoDB to DynamoDB

My database is currently hosted in MongoDB Atlas. Can I use AWS DMS to migrate directly from Atlas to DynamoDB, or do I have to first create an EC2 instance, run a MongoDB server there, and dump my data to it? I can't find any videos or blog posts about this.

What is the most recommended way to transfer data from a PostgreSQL DB to another PostgreSQL DB in AWS

We have a production PostgreSQL DB that is available only in the Glue catalog. What are the best practices for ETLing some tables in this database and loading the data into another PostgreSQL instance in the same AWS account?
This production database is our transactional DB and we don't need most of its tables. We already have some Glue ETL jobs creating tables in S3 (so they're accessible via Athena), but the goal here is to load into another PostgreSQL instance.
Thanks
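One possible approach, sketched below on the assumption that a Glue connection to the target PostgreSQL instance already exists; the catalog database, table, and connection names are placeholders, not taken from the question.

from pyspark.context import SparkContext
from awsglue.context import GlueContext

# Minimal Glue job sketch: read a cataloged table, write it to a Postgres target via JDBC.
sc = SparkContext()
glue_context = GlueContext(sc)

# Read straight from the Glue Data Catalog (placeholder database/table names).
frame = glue_context.create_dynamic_frame.from_catalog(
    database="prod_transactional_db",
    table_name="customers",
)

# Write into the other PostgreSQL instance through a pre-created Glue connection.
glue_context.write_dynamic_frame.from_jdbc_conf(
    frame=frame,
    catalog_connection="target-postgres-connection",
    connection_options={"dbtable": "customers", "database": "analytics"},
)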

PostgreSQL data migration from one database to another database in AWS

I have an old Postgres database which is not cloud based, and I want to migrate its data to a new database in AWS.
Can this be done via dblink, or what are the other best practices for doing this?
You can migrate DBs to AWS via AWS Database Migration Service. It's a fully managed service that helps you move your data from on premises to AWS. You can read more about it here: https://aws.amazon.com/dms/?nc=sn&loc=1.
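As a rough illustration of what that looks like when scripted with boto3, assuming the replication instance and both endpoints already exist (all ARNs and identifiers below are placeholders):

import json
import boto3

dms = boto3.client("dms")

# Create a full-load-plus-CDC task between two pre-created DMS endpoints.
task = dms.create_replication_task(
    ReplicationTaskIdentifier="onprem-pg-to-aws-pg",                          # hypothetical name
    SourceEndpointArn="arn:aws:dms:eu-west-1:111111111111:endpoint:SOURCE",   # placeholder
    TargetEndpointArn="arn:aws:dms:eu-west-1:111111111111:endpoint:TARGET",   # placeholder
    ReplicationInstanceArn="arn:aws:dms:eu-west-1:111111111111:rep:INSTANCE", # placeholder
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-public-schema",
            "object-locator": {"schema-name": "public", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)

# Kick off the migration.
dms.start_replication_task(
    ReplicationTaskArn=task["ReplicationTask"]["ReplicationTaskArn"],
    StartReplicationTaskType="start-replication",
)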

loading one table from RDS / postgres into Redshift

We have a Redshift cluster that needs one table from one of our RDS / Postgres databases. I'm not quite sure of the best way to export that data and bring it in, or what the exact steps should be.
Piecing together various blogs and articles, the consensus appears to be to use pg_dump to copy the table to a CSV file, copy that file to an S3 bucket, and from there use the Redshift COPY command to bring it into a new table. That's my high-level understanding, but I'm not sure what the command-line switches or the actual details should be. Is anyone doing this currently, and if so, is what I have above the 'recommended' way to do a one-off import into Redshift?
It appears that you want to:
- Export from Amazon RDS PostgreSQL
- Import into Amazon Redshift
From Exporting data from an RDS for PostgreSQL DB instance to Amazon S3 - Amazon Relational Database Service:
You can query data from an RDS for PostgreSQL DB instance and export it directly into files stored in an Amazon S3 bucket. To do this, you use the aws_s3 PostgreSQL extension that Amazon RDS provides.
This will save a CSV file into Amazon S3.
You can then use the Amazon Redshift COPY command to load this CSV file into an existing Redshift table.
You will need some way to orchestrate these operations: run a command against the RDS database, wait for it to finish, then run a command in the Redshift database. This could be done via a Python script that connects to each database in turn (e.g. via psycopg2) and runs the appropriate command.
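A minimal sketch of that orchestration with psycopg2, assuming the aws_s3 extension is already installed on the RDS instance and the Redshift cluster has an IAM role allowed to read the bucket (all hosts, names, and ARNs below are placeholders):

import psycopg2

# Step 1: export the table from RDS PostgreSQL to S3 via the aws_s3 extension.
rds = psycopg2.connect(host="my-rds-host", dbname="mydb", user="app", password="...")
with rds, rds.cursor() as cur:
    cur.execute("""
        SELECT * FROM aws_s3.query_export_to_s3(
            'SELECT * FROM my_table',
            aws_commons.create_s3_uri('my-bucket', 'exports/my_table.csv', 'us-east-1'),
            options := 'format csv'
        )
    """)

# Step 2: load the CSV from S3 into an existing Redshift table with COPY.
redshift = psycopg2.connect(host="my-cluster.example.redshift.amazonaws.com",
                            port=5439, dbname="dev", user="app", password="...")
with redshift, redshift.cursor() as cur:
    cur.execute("""
        COPY my_table
        FROM 's3://my-bucket/exports/my_table.csv'
        IAM_ROLE 'arn:aws:iam::111111111111:role/RedshiftCopyRole'
        FORMAT AS CSV
    """)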

Amazon Aurora PostgreSQL SELECT INTO OUTFILE S3

We are trying to export data from an Amazon Aurora PostgreSQL database to an S3 bucket. The code being used is like this:
SELECT * FROM analytics.my_test INTO OUTFILE S3
's3-us-east-2://myurl/sampledata'
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
MANIFEST ON
OVERWRITE ON;
All permissions have been set up, but we get the error:
SQL Error [42601]: ERROR: syntax error at or near "INTO" Position: 55
Does this only work with a MySQL database?
It is a fairly new feature on Aurora PostgreSQL, but it is possible to export a query result to a file on S3: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/postgresql-s3-export.html#postgresql-s3-export-file
The syntax is not the same as for MySQL, though. For Postgres it is:
SELECT * from aws_s3.query_export_to_s3('select * from sample_table',
aws_commons.create_s3_uri('sample-bucket', 'sample-filepath', 'us-west-2')
);
I believe saving SQL SELECT output data in S3 ONLY works for Amazon Aurora MySQL. I don't see any reference in the official documentation that mentions the same for Amazon Aurora PostgreSQL.
Here are the snippets from the official documentation that I referred to:
Integrating Amazon Aurora MySQL with Other AWS Services
Amazon Aurora MySQL integrates with other AWS services so that you can extend your Aurora MySQL DB cluster to use additional capabilities in the AWS Cloud. Your Aurora MySQL DB cluster can use AWS services to do the following:
- Synchronously or asynchronously invoke an AWS Lambda function using the native functions lambda_sync or lambda_async. For more information, see Invoking a Lambda Function with an Aurora MySQL Native Function.
- Load data from text or XML files stored in an Amazon Simple Storage Service (Amazon S3) bucket into your DB cluster using the LOAD DATA FROM S3 or LOAD XML FROM S3 command. For more information, see Loading Data into an Amazon Aurora MySQL DB Cluster from Text Files in an Amazon S3 Bucket.
- Save data to text files stored in an Amazon S3 bucket from your DB cluster using the SELECT INTO OUTFILE S3 command. For more information, see Saving Data from an Amazon Aurora MySQL DB Cluster into Text Files in an Amazon S3 Bucket.
- Automatically add or remove Aurora Replicas with Application Auto Scaling. For more information, see Using Amazon Aurora Auto Scaling with Aurora Replicas.
Integrating Amazon Aurora PostgreSQL with Other AWS Services
Amazon Aurora integrates with other AWS services so that you can extend your Aurora PostgreSQL DB cluster to use additional capabilities in the AWS Cloud. Your Aurora PostgreSQL DB cluster can use AWS services to do the following:
- Quickly collect, view, and assess performance for your Aurora PostgreSQL DB instances with Amazon RDS Performance Insights. Performance Insights expands on existing Amazon RDS monitoring features to illustrate your database's performance and help you analyze any issues that affect it. With the Performance Insights dashboard, you can visualize the database load and filter the load by waits, SQL statements, hosts, or users. For more information about Performance Insights, see Using Amazon RDS Performance Insights.
- Automatically add or remove Aurora Replicas with Aurora Auto Scaling. For more information, see Using Amazon Aurora Auto Scaling with Aurora Replicas.
- Configure your Aurora PostgreSQL DB cluster to publish log data to Amazon CloudWatch Logs. CloudWatch Logs provide highly durable storage for your log records. With CloudWatch Logs, you can perform real-time analysis of the log data, and use CloudWatch to create alarms and view metrics. For more information, see Publishing Aurora PostgreSQL Logs to Amazon CloudWatch Logs.
There is no mention of saving data to S3 for PostgreSQL.