Convert DB2 COMPRESS YES ADAPTIVE to PostgreSQL - postgresql

What is the PostgreSQL equivalent of the DB2 statement below?
ALTER TABLE User.emp COMPRESS YES ADAPTIVE

There is no equivalent. The only compression that takes place is the compression of large column values by the TOAST machinery, and that happens by default.
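The closest knob PostgreSQL does expose is the per-column storage mode that governs TOAST behaviour. A minimal sketch, assuming a hypothetical table user_emp with a text column notes:

-- 'x' (EXTENDED, the default for text/varchar) allows both compression and out-of-line storage.
SELECT attname, attstorage FROM pg_attribute WHERE attrelid = 'user_emp'::regclass AND attnum > 0;
-- MAIN prefers compression but keeps the value in the main table whenever possible.
ALTER TABLE user_emp ALTER COLUMN notes SET STORAGE MAIN;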

Related

Data pseudonymization in PostgreSQL

I am trying to get PII data from a PostgreSQL table, but I can't display the raw data.
How can I pseudonymize the data while fetching (selecting) it from the PostgreSQL database?
You can always create (pseudo)anonymized views on top of the tables and select from those. As for anonymization techniques, that depends on the data; regex replaces and md5 are very easy to use in Postgres.
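For example, a pseudonymized view could be built from md5() and regexp_replace(); the table customers and its columns email and phone are hypothetical:

-- Hypothetical source table: customers(id, email, phone).
CREATE VIEW customers_pseudo AS
SELECT id,
       md5(email) AS email_hash,                                      -- one-way hash of the PII value
       regexp_replace(phone, '\d(?=\d{2})', 'X', 'g') AS phone_masked -- mask every digit except the last two
  FROM customers;

-- Applications then select from the view instead of the base table.
SELECT * FROM customers_pseudo;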

Clob(length 1048576) DB2 to PostgreSQL best datatype?

We had a table with a Clob(length 1048576) column that stored search text used to support searching. When I transferred it from DB2 to Postgres in our migration, I found that it wasn't working as well. I am going to try text or varchar, but long text entries were taking so much longer to insert into the table that my local WildFly window would time out when trying to run.
What is the equivalent datatype in Postgres that accepts text and should replace a Clob of length 1048576 in DB2? It might be that I was using the right datatypes but didn't have the right corresponding size.
Use text. That is the only reasonable data type for long character strings.
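As a sketch, if the column was migrated as a bounded varchar, switching it to text is a one-liner (table and column names here are placeholders):

-- Convert the CLOB-derived column to text; text has no length limit and no extra performance cost.
ALTER TABLE search_index ALTER COLUMN search_text TYPE text;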

Which option is best for selecting column compression encoding [ COPY VS ANALYZE COMPRESSION ]

Scenario : I have to change existing table's column encoding
a) If I execute ANALYZE COMPRESSION table_name, this approach suggests ZSTD compression for all columns, including the SORT KEY column.
b) I created a new table from the existing table's DDL and used the COPY command to get column compression encodings (COPY selects compression encodings when loading data into an empty table). The COPY command suggested LZO for all columns, including the SORT KEY column.
Question :
Which approach is correct, or more optimal?
Compressing the SORT KEY column is said to be bad, so will ZSTD on the SORT KEY column improve performance?
ANALYZE COMPRESSION only looks at the effectiveness of the compression based on storage and does not consider other factors.
In many cases the first column of the SORT KEY compresses well and is typically filtered on (predicate in the where clause). If for some reason you never filtered on the column (maybe a merge join) it would be okay to compress the SORT KEY.
The reason we recommend leaving the first column of the SORT KEY uncompressed is that when you filter with a range-restricted scan on a column that is much more highly compressed than the other columns you are scanning, it can result in a slight decrease in performance.
https://forums.aws.amazon.com/thread.jspa?threadID=252583
https://discourse.snowplowanalytics.com/t/make-big-data-small-again-with-redshift-zstd-compression/1280
The threads above may help a bit.
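To illustrate the recommendation (Amazon Redshift syntax; table and column names are hypothetical), the usual pattern is to leave the leading SORT KEY column RAW and compress the remaining columns, for example with ZSTD:

CREATE TABLE events (
    event_date date          ENCODE RAW,  -- leading SORT KEY column left uncompressed
    user_id    bigint        ENCODE ZSTD,
    payload    varchar(1024) ENCODE ZSTD
)
SORTKEY (event_date);

-- ANALYZE COMPRESSION can then be re-run to compare suggested encodings for the non-sort-key columns.
ANALYZE COMPRESSION events;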

How to reduce toast_tuple_threshold in PostgreSQL?

OS: RHEL 7.2
PostgreSQL Version 9.6
I want TOAST to compress data. The average record length in my tables is around 500 bytes. The columns show their storage as extended, yet no compression is happening. Hence I want to lower toast_tuple_threshold to 500 bytes. Which file holds this value? And do we need to modify any other parameter?
I tried
ALTER TABLE tablename SET (TOAST_TUPLE_TARGET = 128);
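For reference, the setting attempted above is recorded as a table storage parameter and can be checked in the catalog (the table name is a placeholder). Note that toast_tuple_target is only accepted as a storage parameter from PostgreSQL 11 on; on 9.6 the threshold (TOAST_TUPLE_THRESHOLD) is a compile-time constant rather than a value in any configuration file.

-- Shows the storage parameters recorded for the table, e.g. {toast_tuple_target=128}.
SELECT relname, reloptions FROM pg_class WHERE relname = 'tablename';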

PostgreSQL 9.6 COPY created a file bigger than the table

I'm trying to export an Oracle table into a local PostgreSQL dump via the copy command:
\copy (select * from remote_oracle_table) to /postgresql/table.dump with binary;
The Oracle table's size is 25 GB. However, the copy command created a 50 GB file. How is that possible?
I'm able to select from the remote Oracle table because I have the oracle_fdw extension.
A few factors are likely at work here, including:
Small numbers in integer and numeric fields use more space in binary format than text format;
Oracle probably stores the table with some degree of compression, which the binary dump won't have.
You'll likely find that if you compress the resulting dump it'll be a lot smaller.
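As a sketch using the paths from the question, dumping in the default text format and compressing the result from within psql would look roughly like this (gzip is just one option):

-- Text format (the default) often stores small integer/numeric values more compactly than binary.
\copy (select * from remote_oracle_table) to /postgresql/table.dump
-- Compress the finished dump via psql's shell escape.
\! gzip /postgresql/table.dump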