AWS DMS how to deal with invalid timestamp - amazon-redshift

I am moving data from a MySQL Aurora DB that contains timestamps such as "0000-00-00 00:00:00".
The target is a Redshift cluster. The migration is failing because Redshift does not accept the invalid timestamps.
I cannot modify the source data, and I cannot drop that column (most of the timestamps are valid).
I have tried a replace-prefix transformation to replace "0000-00-00" with "1970-01-01". It doesn't work because DMS tries to load the data first and only applies the transformation afterwards.
How can I import the data?
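
One workaround worth sketching, assuming the DMS Redshift target endpoint settings cover your case (the endpoint ARN below is a placeholder): instead of rewriting the values in flight, let Redshift's COPY accept out-of-range dates via the AcceptAnyDate/DateFormat endpoint settings, e.g. with boto3:

    import boto3

    dms = boto3.client("dms", region_name="us-east-1")

    # Placeholder ARN. AcceptAnyDate lets COPY load rows whose dates Redshift would
    # otherwise reject (they typically land as NULL instead of failing the load).
    dms.modify_endpoint(
        EndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE",
        RedshiftSettings={
            "AcceptAnyDate": True,
            "DateFormat": "auto",
        },
    )

Whether the zero dates end up as NULL or another value should be verified on a small test table first.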

Related

CDC Migration from AWS RDS to AWS Redshift

How do I migrate my whole database, which is currently in AWS RDS Postgres, to AWS Redshift, and how can I keep both databases in sync? If any column is updated in RDS, it must also get updated in Redshift.
I know we can achieve it with AWS Glue, but the above scenario is mandatory in my case. The migration task is easy to do, but the CDC migration is a bit challenging.
I am also aware of the bookmark key, but my situation is a bit different: I do not have any sequential column in the tables, but every table has an updated_at field. This column is the only field I can use to check whether a record has already been processed, so that duplicate processing does not occur, and so that any newly inserted data also gets replicated to Redshift.
So, would anyone help me do this, even using a pyspark script?
Thanks.
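
Since you mention pyspark, here is a minimal sketch of the updated_at watermark idea (all table names, endpoints and credentials below are placeholders, and it assumes the Redshift and Postgres JDBC drivers are on the Spark classpath): remember the largest updated_at already present in Redshift, and only pull newer rows from RDS on each run.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("rds-to-redshift-cdc").getOrCreate()

    redshift_opts = {
        "url": "jdbc:redshift://<cluster-endpoint>:5439/dev",  # placeholder cluster
        "user": "awsuser",
        "password": "********",
    }

    # 1. Ask Redshift what has already been loaded (assumes the target table exists).
    watermark = (
        spark.read.format("jdbc")
        .options(dbtable="(SELECT MAX(updated_at) AS w FROM public.orders) t", **redshift_opts)
        .load()
        .collect()[0]["w"]
    ) or "1970-01-01 00:00:00"

    # 2. Read only the rows that changed since then from RDS Postgres.
    changed = (
        spark.read.format("jdbc")
        .option("url", "jdbc:postgresql://<rds-endpoint>:5432/sourcedb")  # placeholder
        .option("dbtable", f"(SELECT * FROM orders WHERE updated_at > '{watermark}') t")
        .option("user", "postgres")
        .option("password", "********")
        .load()
    )

    # 3. Append the changed rows to Redshift. True upserts (updated rows replacing old
    #    ones) would normally go through a staging table plus DELETE ... USING, omitted here.
    changed.write.format("jdbc").options(dbtable="public.orders", **redshift_opts).mode("append").save()

Because updated_at is not a sequential key, rows committed later with a timestamp equal to the stored watermark can be missed; reading with ">=" and de-duplicating on a primary key in the staging step is the usual guard.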

How to incrementally load from Aurora MySQL to Redshift with Glue?

I am setting up a data warehouse.
The source is Aurora MySQL and I will use Redshift as the DW.
I succeeded in doing a full data load from Aurora MySQL to Redshift.
But I want to load the previous day's data incrementally.
There is a date field in the table.
If today is 2022-06-04, only the rows from 2022-06-03 must be loaded incrementally.
I tried an UPSERT in Glue and the result was good.
But data is written to the source in real time,
and the KPIs in BI show data up to the day before.
So I need incremental loading of the previous day's data, NOT an UPSERT or a load of the latest data.
In the Python script of the Glue job, I declared a preaction and a postaction through SQL with a WHERE clause.
But the WHERE clause is not working:
all the data is loaded.
I don't know why the WHERE clause is not working,
and I want to know how to incrementally load from Aurora MySQL to Redshift with Glue.
Please help me.
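
One likely reason the WHERE clause seems ignored: preactions and postactions are just SQL that Glue runs against Redshift before and after the COPY, so they never filter what is read from Aurora. The filter has to be applied on the source read itself. A minimal sketch (the table, column, connection and bucket names below are assumptions):

    from datetime import date, timedelta

    from awsglue.context import GlueContext
    from awsglue.dynamicframe import DynamicFrame
    from pyspark.context import SparkContext

    glue_context = GlueContext(SparkContext())
    spark = glue_context.spark_session

    yesterday = (date.today() - timedelta(days=1)).isoformat()

    # Push the WHERE clause down to Aurora MySQL by reading a subquery, not the whole table.
    source_df = (
        spark.read.format("jdbc")
        .option("url", "jdbc:mysql://<aurora-endpoint>:3306/sourcedb")  # placeholder endpoint
        .option("dbtable", f"(SELECT * FROM sales WHERE sale_date = '{yesterday}') AS t")
        .option("user", "admin")           # placeholder credentials
        .option("password", "********")
        .load()
    )

    # Write only the filtered rows; the preaction clears that day in Redshift first,
    # so re-running the job for the same day stays idempotent.
    glue_context.write_dynamic_frame.from_jdbc_conf(
        frame=DynamicFrame.fromDF(source_df, glue_context, "incremental"),
        catalog_connection="redshift-connection",   # name of a Glue connection (assumed)
        connection_options={
            "dbtable": "public.sales",
            "database": "dev",
            "preactions": f"DELETE FROM public.sales WHERE sale_date = '{yesterday}';",
        },
        redshift_tmp_dir="s3://<temp-bucket>/glue-tmp/",  # placeholder bucket
    )

Scheduling the job once a day (e.g. with a Glue trigger) then loads exactly the previous day's rows.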

Load ORC format to Aurora Postgres DB

We have files in ORC format stored in S3 and we want to load them into an AWS Aurora PostgreSQL DB.
What we found on the internet was:
Postgres supports CSV, TXT and other formats, but not ORC.
INSERT OVERWRITE DIRECTORY '<Hdfs-Directory-Path>' ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' STORED AS TEXTFILE SELECT * FROM default.foo;
Can anyone please help us find a solution?
To date, PostgreSQL on Aurora supports ingestion of data from S3 through the COPY command only from TXT and CSV files.
Since your files are in ORC format, you could convert them to either CSV or TXT and then ingest the data. You could do this very easily with Athena, by simply creating a table for your original data and running a SELECT * FROM table query. As explained on the Working with Query Results, Output Files, and Query History page, this will automatically generate a CSV file containing the results.
This would not be optimal, as you'd pay not only for the transformation but also for the storage twice (the original ORC and the converted CSV), but it would allow you to convert the data pretty easily.
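
If it helps, that Athena step can be driven from a few lines of boto3 (the database, table and bucket names below are placeholders):

    import boto3

    athena = boto3.client("athena", region_name="us-east-1")

    # Run SELECT * over the table that points at the ORC files; Athena writes the
    # result set to the OutputLocation as a CSV named after the QueryExecutionId.
    response = athena.start_query_execution(
        QueryString="SELECT * FROM orc_source_table",
        QueryExecutionContext={"Database": "default"},
        ResultConfiguration={"OutputLocation": "s3://my-athena-results/orc-to-csv/"},
    )

    print(response["QueryExecutionId"])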
A better way would instead be to use a service like AWS Glue, which supports S3 as a source and has an Aurora connector. Using this method would give you an actual ETL, and even if right now you just need the E(xtract) and the L(oad), it would still leave the door open for any kind of transform you might need in the future.
The AWS blog post titled How to extract, transform, and load data for analytic processing using AWS Glue (Part 2) shows the opposite flow (Aurora -> S3 via Glue), but it should still give you an idea of the process.
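
As a rough illustration of that Glue route (the S3 path, Glue connection name and target table are assumptions), a job script can read the ORC files directly and write them to Aurora through a JDBC connection, with no intermediate CSV:

    from awsglue.context import GlueContext
    from pyspark.context import SparkContext

    glue_context = GlueContext(SparkContext())

    # Read the ORC files straight from S3 into a DynamicFrame.
    orc_frame = glue_context.create_dynamic_frame.from_options(
        connection_type="s3",
        connection_options={"paths": ["s3://my-bucket/orc-data/"]},
        format="orc",
    )

    # Load into Aurora PostgreSQL via a Glue connection defined for the cluster.
    glue_context.write_dynamic_frame.from_jdbc_conf(
        frame=orc_frame,
        catalog_connection="aurora-postgres-connection",
        connection_options={"dbtable": "public.foo", "database": "sourcedb"},
    )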

Data Migration from one DB to another

I have to create an app that transfers data from Snowflake to Postgres every day. Some tables in Postgres are truncated before the migration and all the data from the corresponding Snowflake table is copied. For the other tables, only the data after the last timestamp in Postgres is copied from Snowflake.
This job has to run sometime at night, not during the day when customers are using the service.
What is the best way to do this?
Do you have constraints limiting your choices in:
ETL or bulk data tooling
development languages?
According to this site, you can create a foreign data wrapper on PostgreSQL for Snowflake.
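
If you end up writing it yourself rather than using an FDW or an ETL tool, a minimal nightly-job sketch could look like this (every table name, column name and credential below is a placeholder, and row-by-row inserts are only reasonable for modest volumes):

    import psycopg2
    import snowflake.connector

    sf = snowflake.connector.connect(
        account="xy12345", user="loader", password="********",
        warehouse="LOAD_WH", database="ANALYTICS", schema="PUBLIC",
    )
    pg = psycopg2.connect(host="<postgres-host>", dbname="app", user="app", password="********")

    def copy_rows(select_sql, target_table, params=None):
        # Stream rows out of Snowflake and insert them into Postgres.
        scur = sf.cursor()
        try:
            scur.execute(select_sql, params)
            with pg.cursor() as cur:
                for row in scur:
                    placeholders = ", ".join(["%s"] * len(row))
                    cur.execute(f"INSERT INTO {target_table} VALUES ({placeholders})", row)
            pg.commit()
        finally:
            scur.close()

    def full_reload(table):
        # Truncate-and-reload mode (the TRUNCATE commits together with the inserts).
        with pg.cursor() as cur:
            cur.execute(f"TRUNCATE TABLE {table}")
        copy_rows(f"SELECT * FROM {table}", table)

    def incremental(table, ts_col):
        # Copy only rows newer than the latest timestamp already present in Postgres.
        with pg.cursor() as cur:
            cur.execute(f"SELECT COALESCE(MAX({ts_col}), '1970-01-01') FROM {table}")
            watermark = cur.fetchone()[0]
        copy_rows(f"SELECT * FROM {table} WHERE {ts_col} > %s", table, (watermark,))

    full_reload("dim_customers")
    incremental("fact_orders", "updated_at")

Running it from a nightly scheduler (cron, EventBridge, Airflow, etc.) covers the requirement of not touching the service during the day.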

Get incremental changes for RDS Postgres DB

I have a process that extracts data from an origin DB, makes changes and inserts the result into a target DB.
Right now I'm using an AWS Lambda that runs every few minutes, and I added a timestamp column that indicates when the data was last changed so that I can filter based on it.
This process is inefficient, since I need to "remember" to manually add a timestamp column to each new table. Is there a better way? Can I use a query (Postgres) or an API call (boto3) to get only the data that was changed?