How to get user creation timestamp in Amazon Redshift

Team,
How do I get the user creation timestamp in Redshift and RDS PostgreSQL, the way I can in Oracle?
Thank you
kesavan

You can query the STL_USERLOG system table where action = 'create'.
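For example, a minimal sketch (column names per the STL_USERLOG documentation; adjust as needed):
select username, action, recordtime
from stl_userlog
where action = 'create'
order by recordtime;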

Related

Large Object Replication using AWS DMS?

I wanted to check whether Large Object replication is supported by AWS DMS when the source and destination DBs are both PostgreSQL.
I just used pglogical to replicate a DB that contains Large Objects (referenced via OID columns), and the target DB does not have the LOs.
When I query a table on the destination which uses an OID column:
select id, lo_get(json) from table_1 where id=998877;
ERROR: large object 6698726 does not exist
The json column is of oid datatype.
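As an aside, whether a given large object actually exists on a particular instance can be checked against pg_largeobject_metadata (a sketch, using the OID from the error above):
select exists (
    select 1 from pg_largeobject_metadata where oid = 6698726
) as lo_exists;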
If AWS DMS takes care of it, I will start using it.
Thanks

After numerous (error-free) inserts to an Aurora (PostgreSQL) RDS serverless cluster with SQLAlchemy I can't see the table. What happened to my data?

After changes to some Terraform code, I can no longer access the data I've added into an Aurora (PostgreSQL) database. The data gets added into the database as expected without errors in the logs but I can't find the data after connecting to the database with AWS RDS Query Editor.
I have added thousands of rows with Python code that uses the SQLAlchemy/PostgreSQL engine object to insert a batch of rows from a mappings dictionary, like so:
if (count % batch_size) == 0:
    self.engine.execute(Building.__table__.insert(), mappings)
    self.session.commit()
The logs from this data ingest show no errors, and the commits all appear to have completed successfully. So the data was inserted somewhere; I just can't work out where, as it's not showing up in the AWS Console RDS Query Editor. I run the SQL below to find the table, with zero rows returned:
SELECT * FROM information_schema.tables WHERE table_name = 'buildings'
This has worked as expected before (i.e. I could see the data in the Aurora database via the Query Editor) so I'm trying to work out which of the recently modified Terraform settings have caused the issue.
Where else can I look to find where the data was inserted, assuming that it was actually inserted somewhere? If I can work that out it may help reveal the culprit.
I suspect misleading capitalization. Like "Buildings". Search again with:
SELECT * FROM information_schema.tables WHERE table_name ~* 'building';
Or:
SELECT * FROM pg_catalog.pg_tables WHERE tablename ~* 'building';
Or maybe your target wasn't a table? You can "write" to simple views. Check with:
SELECT * FROM pg_catalog.pg_class WHERE relname ~* 'building';
None of this is specific to RDS. It's the same in plain Postgres.
If the last query returns nothing, you are in the wrong database. (You are aware that there can be multiple databases in one DB cluster?) Or you have a serious problem.
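A quick way to confirm which database the session is actually connected to (a sketch; again, nothing RDS-specific):
SELECT current_database();
SELECT datname FROM pg_catalog.pg_database WHERE datistemplate = false;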
See:
How to check if a table exists in a given schema
Are PostgreSQL column names case-sensitive?
Once I logged more information about the connection, I discovered that the database name being used was incorrect, so I had been querying the Aurora instance with the wrong database name. After switching to the correct database name, the SELECT statements in the AWS RDS Query Editor worked as expected.

How to store result set or jobid of sql query on cloud?

I am pushing data from a SQL query on cloud to Db2 on Cloud. When I query data and store the result in my S3 bucket, the jobid of that particular query result is saved.
But when I push data into Db2, no jobid is saved, even though the data has been inserted (I verified this by checking Db2 again).
How do I know from SQL Query whether my query succeeded or not? I want to confirm this while running the SQL query on cloud.
My query:
select
    a.col, a.col2, explode(a.col3) company_list
from
    cos://us-east/akshay-test-bucket1/Receive-2020-03-11/CP-2020-03-11/ STORED AS PARQUET a
into
    crn:v1:bluemix:public:dashdb-for-transactions:eu-gb:a/2c36cd86085f442b915f0fba63138e0c:61f353e4-6640-4599-b1dd-48ee52ee008d::/schema_name.table_name
Here I am storing data into Db2, and SQL Query says "A preview is only possible for results stored in Cloud Object Storage" (see my query above).
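One way to confirm success programmatically is to look up the job status by jobid. A hedged sketch with the ibmcloudsql Python client (method names as I recall them from the client docs; the API key, instance CRN, COS URL and SQL text below are placeholders):
import ibmcloudsql

# Placeholders: supply your own API key, SQL Query instance CRN and COS bucket URL
sql_client = ibmcloudsql.SQLQuery(api_key, instance_crn, target_cos_url)
sql_client.logon()

job_id = sql_client.submit_sql(my_sql_text)   # returns the jobid right away
status = sql_client.wait_for_job(job_id)      # blocks until 'completed' or 'failed'
print(job_id, status)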

Unload data from snowflake into Postgres?

I want to unload data from a Snowflake table into a Postgres database. The Snowflake documentation does not show an option to unload into a relational database.
Is there currently a way to unload data from Snowflake to Postgres?
Any help is appreciated.
Snowflake only has connectivity to cloud storage. It can't connect to any other database directly.
If the table is small to medium size, you can use the Snowflake Web GUI:
Query the data: SELECT * FROM my_table;
Press the download button in the Results pane (next to the Copy button) and export as TSV or CSV
Import the file into Postgres (I don't know the details of this step; see the sketch below)
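A minimal sketch of that last step, assuming a CSV export and a psql session on the target (the table name and columns are placeholders):
CREATE TABLE my_table (id bigint, name text, created_at timestamp);
\copy my_table FROM 'my_table.csv' WITH (FORMAT csv, HEADER true)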

Transfer data from redshift to postgresql

I tried searching for this but couldn't find anything.
What is the best way to copy data from Redshift to a PostgreSQL database?
Using a Talend job, any other tool, code, etc. Anyhow, I want to transfer data from Redshift to a PostgreSQL database.
Also, you can suggest any third-party database tool if it has similar functionality.
Also, as far as I know, we could do this using AWS Data Migration Service, but I'm not sure whether our source and destination databases meet its criteria.
Can anyone please suggest something better?
The way I do it is with a Postgres Foreign Data Wrapper and dblink.
This way, the Redshift table is available directly within Postgres.
Follow the instructions here to set it up https://aws.amazon.com/blogs/big-data/join-amazon-redshift-and-amazon-rds-postgresql-with-dblink/
The important part of that link is this code:
CREATE EXTENSION postgres_fdw;
CREATE EXTENSION dblink;
CREATE SERVER foreign_server
FOREIGN DATA WRAPPER postgres_fdw
OPTIONS (host '<amazon_redshift_ip>', port '<port>', dbname '<database_name>', sslmode 'require');
CREATE USER MAPPING FOR <rds_postgresql_username>
SERVER foreign_server
OPTIONS (user '<amazon_redshift_username>', password '<password>');
For my use case I then set up a postgres materialised view with indexes based upon that.
create materialized view if not exists your_new_view as
SELECT some,
columns,
etc
FROM dblink('foreign_server'::text, '
<the redshift sql>
'::text) t1(some bigint, columns bigint, etc character varying(50));
create unique index if not exists index1
on your_new_view (some);
create index if not exists index2
on your_new_view (columns);
Then on a regular basis I run (on postgres)
REFRESH MATERIALIZED VIEW your_new_view;
or
REFRESH MATERIALIZED VIEW CONCURRENTLY your_new_view;
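If you want the database itself to handle that schedule, one option is the pg_cron extension (an assumption: it has to be installed and enabled on the instance; any external scheduler that runs psql works just as well):
CREATE EXTENSION IF NOT EXISTS pg_cron;
-- refresh every night at 02:00
SELECT cron.schedule('0 2 * * *', $$REFRESH MATERIALIZED VIEW CONCURRENTLY your_new_view$$);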
In the past, I managed to transfer data from one PostgreSQL database to another by doing a pg_dump and piping the output as an SQL command to the second instance.
Amazon Redshift is based on PostgreSQL, so this method should work, too.
You can control whether pg_dump should include the DDL to create tables, or whether it should just load the data (--data-only).
See: PostgreSQL: Documentation: 8.0: pg_dump
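A hedged sketch of that pipe (hostnames, ports, and database/table names are placeholders; I have not verified that current pg_dump versions accept a Redshift endpoint):
pg_dump --host my-redshift-cluster.example.com --port 5439 --username admin \
        --data-only --table my_schema.my_table my_database \
  | psql --host my-postgres.example.com --port 5432 --username postgres my_database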