Why does Postgres ltree take more storage space than varchar?

I have been trying to understand how ltree is stored internally in postgres.
create table ltreetable(path ltree)
create table varchartable(path varchar(50))
I insert around 17 million records into both tables using the same paths (strings joined by dots).
The ltree table takes 2.4 GB and the varchar table takes 1.9 GB. Why is that?
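One way to compare per-value storage directly is pg_column_size(); a minimal sketch (assumes the ltree extension is installed, and 'Top.Science.Astronomy' is just an example path):
SELECT pg_column_size('Top.Science.Astronomy'::ltree)       AS ltree_bytes,
       pg_column_size('Top.Science.Astronomy'::varchar(50)) AS varchar_bytes;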

Related

Difference between oid and relfilenode

I am reading The Internals of PostgreSQL, Chapter 1, and I am unable to understand the difference between the object identifier (OID) and relfilenode.
Tables and indexes as database objects are internally managed by individual OIDs, while those data files are managed by the variable, relfilenode. The relfilenode values of tables and indexes basically but not always match the respective OIDs.
I get that both of these are attributes of the system catalog 'pg_class' and that the OID can be thought of as the primary key of the table, so what is the purpose of relfilenode and how is it different from the OID?
relfilenode is the prefix for the name of the files that make up the table. Initially it is identical to the immutable object ID (oid), but SQL statements that rewrite the table will modify it (for example VACUUM (FULL), CLUSTER, TRUNCATE or the variants of ALTER TABLE that rewrite the table).
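A quick way to watch the two diverge (a minimal sketch using a throwaway table named demo):
CREATE TABLE demo(id int);
SELECT oid, relfilenode FROM pg_class WHERE relname = 'demo';  -- typically identical right after creation
VACUUM FULL demo;                                              -- rewrites the table into new files
SELECT oid, relfilenode FROM pg_class WHERE relname = 'demo';  -- relfilenode has changed, oid has not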

How to describe tables in Redshift and ALTER them

I have created a Redshift cluster and created a database inside it.
My schema is new_schema.
I have created two tables inside it: table1 and table2.
My questions:
I want to list the datatypes of table1.
I need to change the datatype of the description column in table1 from VARCHAR to TEXT.
I have tried to list the datatypes of table1 with the query below, but nothing is listed:
SELECT * FROM PG_TABLE_DEF WHERE schemaname = 'new_schema';
A few possibilities as to why you are not seeing the expected results. The most likely is that new_schema isn't in your search_path. PG_TABLE_DEF only returns info for tables in your search_path - see: https://docs.aws.amazon.com/redshift/latest/dg/r_PG_TABLE_DEF.html
Another possibility is that the tables have no data rows (no blocks assigned) and this can lead to incomplete info from some system tables.
Another possibility is that the tables were not committed by the creating session and are being checked by a different session. Since you say that you are creating a new database, this comes to mind.
Are the tables visible in svv_table_info?
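If search_path is the issue, a quick check might look like this (a sketch; adjust the schema and table names to yours):
SET search_path TO new_schema, public;
SELECT "column", type FROM pg_table_def WHERE tablename = 'table1';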
Also the premise of changing varchar to text is a bit off. From https://docs.aws.amazon.com/redshift/latest/dg/r_Character_types.html#r_Character_types-text-and-bpchar-types
You can create an Amazon Redshift table with a TEXT column, but it is converted to a VARCHAR(256) column that accepts variable-length values with a maximum of 256 characters.
So it seems like the objective you are trying to achieve is a bit off.

Is there a method to do an ALTER COLUMN in Postgres 12 on a huge table without waiting a lifetime?

I am trying to convert a field from bigint to smallint:
ALTER TABLE huge ALTER COLUMN result_code TYPE SMALLINT;
It takes 28 hours; is there a smarter method?
The table has sequences, keys, and foreign keys.
The table has to be rewritten, and you have to wait.
If you have several columns whose data type you want to change, you can use several ALTER COLUMN clauses in a single ALTER TABLE statement and save time that way.
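For example, a single rewrite can change several columns at once (a sketch; other_code is a made-up second column):
ALTER TABLE huge
    ALTER COLUMN result_code TYPE smallint,
    ALTER COLUMN other_code TYPE smallint;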
An alternative idea would be to use logical replication: set up an empty copy of the database (pg_dump -s), where your large table is defined with smallint columns. Replicate your database to that database, and switch over as soon as replication has caught up.
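Roughly, the logical replication route could look like this (a sketch with made-up connection details; the source needs wal_level = logical):
-- on the source database
CREATE PUBLICATION all_pub FOR ALL TABLES;
-- on the empty copy (schema restored with pg_dump -s, with huge defined using smallint)
CREATE SUBSCRIPTION all_sub
    CONNECTION 'host=source-host dbname=mydb user=replicator password=secret'
    PUBLICATION all_pub;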

How can we execute Oracle sequence in Postgres?

During the migration from Oracle to Postgres, I need to execute some insert statements against an Oracle table from Postgres (the table's primary key field uses a sequence for uniqueness).
At the time of the migration I am converting a procedure that is used to insert a row into that table, but I can't move the table directly from Oracle to Postgres because of dependencies on it.
That's why I need to execute an Oracle sequence from Postgres.
The simplest solution is probably to create a view in Oracle that doesn't contain the column that is to be filled from the sequence.
Then define a trigger on the table that fills the column from the sequence when it is NULL, and create a foreign table on the view.
When you INSERT into the foreign table, the column will get filled by the trigger.
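A rough sketch of the whole setup, assuming oracle_fdw on the Postgres side and made-up object names (orders, orders_seq, oracle_srv):
-- Oracle side: a trigger on the table fills the id from the sequence when it is NULL
CREATE OR REPLACE TRIGGER orders_fill_id
BEFORE INSERT ON orders
FOR EACH ROW
WHEN (NEW.id IS NULL)
BEGIN
  :NEW.id := orders_seq.NEXTVAL;
END;
/
-- Oracle side: a view that omits the sequence-filled column
CREATE VIEW orders_noid AS SELECT customer_id, amount FROM orders;

-- Postgres side: a foreign table on that view (foreign server oracle_srv assumed to exist)
CREATE FOREIGN TABLE orders_remote (
  customer_id integer,
  amount      numeric
) SERVER oracle_srv OPTIONS (schema 'APP', table 'ORDERS_NOID');

-- Inserting through the foreign table lets the Oracle trigger assign the id
INSERT INTO orders_remote (customer_id, amount) VALUES (42, 99.95);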

Efficient way to move a large number of rows from one table to another new table using Postgres

I am using a PostgreSQL database for a live project, in which I have one table with 8 columns.
This table contains millions of rows, so to make searching the table faster I want to move old entries from it into another, new table.
To do so, I know one approach:
first select some rows
create a new table
store these rows in that table
then delete them from the main table.
But it takes too much time and it is not efficient.
So I want to know: what is the best possible approach to do this in a PostgreSQL database?
PostgreSQL version: 9.4.2
Approximate number of rows: 8,000,000
Rows I want to move: 2,000,000
You can use a CTE (common table expression) to move rows in a single SQL statement (more in the documentation):
with delta as (
    delete from one_table where ...
    returning *
)
insert into another_table
select * from delta;
But think carefully whether you actually need it. Like a_horse_with_no_name said in the comment, tuning your queries might be enough.
This is sample code for copying data between two tables of the same structure.
Here I used different DBs: one is my production DB and the other is my testing DB.
INSERT INTO "Table2"
select * from dblink('dbname=DB1 dbname=DB2 user=postgres password=root',
'select "col1","Col2" from "Table1"')
as t1(a character varying,b character varying);