PostgreSQL - Determine column storage type

I've been reading a lot about PostgreSQL's TOAST, and there's one thing I seem to be missing. They mention in the documentation that, "there are four different strategies for storing TOAST-able columns on disk," those being: PLAIN, EXTENDED, EXTERNAL, and MAIN. They also have a very clear way to define which strategy to use for your column, which can be found here. Essentially, it would be something like this:
ALTER TABLE table_name ALTER COLUMN column_name SET STORAGE EXTERNAL
The one thing I don't see is how to easily retrieve that setting. My question is, is there a simple way (either through commands or pgAdmin) to retrieve the storage strategy being used by a column?

This is stored in pg_attribute.attstorage, e.g.:
select att.attname,
       case att.attstorage
           when 'p' then 'plain'
           when 'm' then 'main'
           when 'e' then 'external'
           when 'x' then 'extended'
       end as attstorage
from pg_attribute att
join pg_class tbl on tbl.oid = att.attrelid
join pg_namespace ns on tbl.relnamespace = ns.oid
where tbl.relname = 'table_name'
  and ns.nspname = 'public'
  and not att.attisdropped;
Note that attstorage is only meaningful for variable-length (varlena) columns, i.e. where attlen is -1; fixed-length types always use plain storage.
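If you only need a single column, a shorter variant of the same lookup works as well (a sketch; 'public.table_name' and 'column_name' are placeholders):
select attname, attstorage, attlen
from pg_attribute
where attrelid = 'public.table_name'::regclass  -- the regclass cast resolves the (schema-qualified) table name to its oid
  and attname = 'column_name'
  and not attisdropped;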

While I like @a_horse_with_no_name's method, after I posted this question I expanded my search to general table information and found that if you use psql, you can use the command described here; the result is a table listing all of the columns, their types, modifiers, storage types, stats targets, and descriptions.
So, using psql this info can be found with:
\d+ table_name
I just figured I'd post this in case anyone wanted another solution.

Related

Find a table in a schema without knowing in advance

Is it possible to easily see what tables exist in what schemas, at a glance?
So far I have had to connect to a database, view the schemas, then change the search path to one of the schemas and then list the tables. I had to do this for multiple schemas until I found the table I was looking for.
What if there is a scenario where you inherit a poorly documented database and you want to find a specific table in hundreds of schemas?
Ideally I imagine some output like so:
SCHEMA TABLE
--------------------
schema1 table1
schema2 table2
schema2 table1
--------------------
Or even the more standard <SCHEMA_NAME>.<TABLE_NAME>:
schema1.table1
schema2.table2
schema2.table1
The latter output would be even better, since you could simply check the table using copy-paste:
my-database=# \d schema2.table1
Ideally I'm hoping I missed a built-in command to find this. I don't really want to create and memorize a lengthy SQL command to get this (somewhat basic) information.
You can make use of pg_tables:
SELECT schemaname, tablename,
quote_ident(schemaname) || '.' || quote_ident(tablename)
FROM pg_tables
WHERE tablename = 'test';
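If you are working in psql, there is also a built-in shortcut: \dt accepts a pattern with a schema wildcard, so the following lists every schema containing a table of that name (assuming the table is called test, as above):
\dt *.test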

Feedback on whether index was created on materialized views in postgresql

I created a unique index for a materialized view as:
create unique index if not exists matview_key on
matview (some_group_id, some_description);
I can't tell if it has been created.
How do I see the index?
Thank you!
Two ways to verify index creation:
--In psql
\d matview
--Using SQL
select *
from pg_indexes
where indexname = 'matview_key'
  and tablename = 'matview';
More information on pg_indexes.
As has been commented, if the command finishes successfully and you don't get an error message, the index was created. Possible caveat: while the transaction is not committed, nobody else can see it (except that the unique name is now reserved), and it might still get rolled back. Check in a separate transaction to be sure.
To be absolutely sure:
SELECT pg_get_indexdef(oid)
FROM pg_catalog.pg_class
WHERE relname = 'matview_key'
AND relkind = 'i'
-- AND relnamespace = 'public'::regnamespace -- optional, to make sure of the schema, too
This way you see whether an index of the given name exists, and also its exact definition to rule out a different index with the same name. Pure SQL, works from any client. (There is nothing special about an index on materialized views.)
Also filter for the schema to be absolutely sure. That would be the "default" schema (a.k.a. "current" schema) in your case, since you did not specify one at creation. See:
How does the search_path influence identifier resolution and the "current schema"
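A quick way to check which schema is currently the "default" one (plain SQL, nothing specific to materialized views):
SHOW search_path;
SELECT current_schema();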
Related:
Create index if it does not exist
How to check if a table exists in a given schema
In psql:
\di public.matview_key
This lists only indexes. Again, the schema qualification is optional, to narrow the search.
Progress Reporting
If creating an index takes a long time, you can look up progress in pg_stat_progress_create_index since Postgres 12:
SELECT * FROM pg_stat_progress_create_index
-- WHERE relid = 'public.matview'::regclass -- optionally narrow down
An alternative to looking into pg_indexes is pg_matviews (for a materialized view only):
select *
from pg_matviews
where matviewname = 'my_matview_name';

TSQL Case on the From clause

In my LiveAppDB we have a load of views that reference a larger LiveProduction DB. What I'd like to do is switch the code to look at the TestProduction DB, depending on which application DB the view is running on:
-- VIEW CAN RUN ON LiveApplicationDB or TestApplicationDB
SELECT COL1, COL2
FROM (CASE WHEN DB_NAME() = 'LiveApplicationDB' THEN LIVEPRODUCTION.DB.DBTABLE ELSE TESTPRODUCTION.DB.DBTABLE END) AS tabl1 -- CASE TO DETERMINE WHICH PRODUCTION DB TO USE
INNER JOIN dbo.ThisDBTable BRA
ON tabl1.product COLLATE Latin1_General_BIN = BRA.product
WHERE (tabl1.COL1 IS NOT NULL)
In a bid to help clarify this: if the view is running on LiveAppDB, it should use LiveProductionDB; if it is running on TestAppDB, it should use TestProductionDB.
Obviously you can't use variables in a view.
Any help much appreciated.
Create a synonym in LiveApplicationDB pointing to LIVEPRODUCTION.DB.DBTABLE and a synonym with exactly the same name in TestApplicationDB pointing to TESTPRODUCTION.DB.DBTABLE. Use the synonym in your query. You probably should put something like this in your deployment script:
IF DB_NAME() = 'LiveApplicationDB'
CREATE SYNONYM dbo.DBTABLE FOR LIVEPRODUCTION.DB.DBTABLE;
IF DB_NAME() = 'TestApplicationDB'
CREATE SYNONYM dbo.DBTABLE FOR TESTPRODUCTION.DB.DBTABLE;
FYI: Most compare tools I've tried ignore the destination of a synonym and thus this will not be considered a difference.
EDIT: I'd recommend never using an object in another database directly. For example: when dbA needs some objects in dbB, I'd create a schema in dbA called dbB and place synonyms in this schema referencing the objects in dbB from dbA. In most cases I'd also create a schema called dbA in dbB and place views and sprocs there, to be used only by dbA. dbA is only allowed to use objects placed in the dbA schema in dbB (see the sketch after this list). To further explain the reasoning behind this approach:
All dependencies on dbB from dbA are clearly stated in both databases.
Moving dbB to another server is a breeze: simply create a linked server and modify all synonyms. No sproc needs to be modified or tested.
Data model changes in dbB don't necessarily mean you need to make changes in dbA; just make sure the interfaces of the objects in schema dbA in dbB remain the same.
All code that matters is exactly the same between test and production, which benefits deployments and code compares.
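A minimal sketch of that layout, assuming hypothetical names (databases dbA and dbB, a table dbo.SomeTable in dbB, and columns Col1 and Col2; none of these come from the question):
-- In dbB: a schema named after the consumer (dbA) exposes only what dbA may use
USE dbB;
GO
CREATE SCHEMA dbA;
GO
CREATE VIEW dbA.SomeTable AS
    SELECT Col1, Col2 FROM dbo.SomeTable;  -- the interface object offered to dbA
GO
-- In dbA: a schema named after the source (dbB) holds synonyms for everything dbA uses from dbB
USE dbA;
GO
CREATE SCHEMA dbB;
GO
CREATE SYNONYM dbB.SomeTable FOR dbB.dbA.SomeTable;  -- three-part name: database.schema.object
GO
Code inside dbA then only ever references dbB.SomeTable, so nothing but the synonyms has to change if dbB moves to another server.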
If the servers are linked, then you can use a four-part naming convention to point to your test instance - just qualify the name like so:
(CASE WHEN DB_NAME() = 'LiveApplicationDB' THEN LiveServer.LIVEPRODUCTION.DB.DBTABLE ELSE TestServer.TESTPRODUCTION.DB.DBTABLE END)
But you'll have to get your DBA to agree to link the servers, or do it yourself, if you have the knowledge/power.
Donna
I'd approach this slightly differently to how you're doing it. You're always using the table dbo.ThisDBTable. Use this as the base table, then join to the others with the DB_NAME() check in the ON clause:
-- VIEW CAN RUN ON LiveApplicationDB or TestApplicationDB
SELECT BRA.COL1
,COALESCE(tab1.COL2, tab2.COL2) COL2
FROM dbo.ThisDBTable BRA
LEFT JOIN LIVEPRODUCTION.DB.DBTABLE tab1
ON BRA.product = tab1.product COLLATE Latin1_General_BIN
AND DB_NAME() = 'LiveApplicationDB'
LEFT JOIN TESTPRODUCTION.DB.DBTABLE tab2
ON BRA.product = tab2.product COLLATE Latin1_General_BIN
AND DB_NAME() = 'TestApplicationDB'
WHERE COALESCE(tab1.COL1, tab2.COL1) IS NOT NULL

Postgresql dblink

Trying to be lazy when looking at an example:
SELECT realestate.address, realestate.parcel, s.sale_year, s.sale_amount
FROM realestate INNER JOIN
dblink('dbname=somedb port=5432 host=someserver
user=someuser password=somepwd',
'SELECT parcel_id, sale_year,
sale_amount FROM parcel_sales')
AS s(parcel_id char(10),sale_year int, sale_amount int)
Is there a way of getting the AS section filled in from the table?
I'm copying data from tables of the same name and structure on different servers.
If I can get the structure to copy from the existing table, it will save me a lot of time.
Thanks
Bruce
The answer is: No. See the doc:
Since dblink can be used with any query, it is declared to return record, rather than specifying any particular set of columns. This means that you must specify the expected set of columns in the calling query — otherwise PostgreSQL would not know what to expect.
http://www.postgresql.org/docs/9.1/static/contrib-dblink-function.html
Edit: by the way, for a table or a view, you can get the field names and types with a first query:
select column_name, data_type
from information_schema.columns
where table_name = 'your_table_or_view';
You could then use it to fill the fields declaration.
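For example, something along these lines could assemble the column list for the AS clause (only a sketch: it reuses the parcel_sales table from the question, has to be run on the database where that table lives, and data_type does not carry type modifiers such as the length in char(10)):
select string_agg(column_name || ' ' || data_type, ', ' order by ordinal_position)
from information_schema.columns
where table_name = 'parcel_sales';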
Alexis

How do I drop all tables in psql (PostgreSQL interactive terminal) that start with a common word?

How do I drop all tables whose name start with, say, doors_? Can I do some sort of regex using the drop table command?
I prefer not writing a custom script but all solutions are welcomed. Thanks!
This script will generate the DDL commands to drop them all:
SELECT 'DROP TABLE ' || t.oid::regclass || ';'
FROM pg_class t
-- JOIN pg_namespace n ON n.oid = t.relnamespace -- to select by schema
WHERE t.relkind = 'r'
AND t.relname ~~ E'doors\\_%' -- enter search term for table here; the doubled backslash escapes the underscore so LIKE matches it literally
-- AND n.nspname ~~ '%myschema%' -- optionally select by schema(s), too
ORDER BY 1;
The cast t.oid::regclass makes the syntax work for mixed case identifiers, reserved words or special characters in table names, too. It also prevents SQL injection and prepends the schema name where necessary. More about object identifier types in the manual.
About the schema search path.
You could automate the dropping, too, but it's unwise to do so without first checking exactly what you are about to delete.
You could append CASCADE to every statement to DROP depending objects (views and referencing foreign keys). But, again, that's unwise unless you know very well what you are doing. Foreign key constraints are no big loss, but this will also drop all dependent views entirely. Without CASCADE you get error messages informing you which objects prevent you from dropping the table. And you can then deal with it.
I normally use one query to generate the DDL commands for me based on some of the metadata tables and then run those commands manually. For example:
SELECT 'DROP TABLE ' || tablename || ';' FROM pg_tables
WHERE tablename LIKE 'prefix%' AND schemaname = 'public';
This will return a bunch of DROP TABLE xxx; statements, which I simply copy & paste into the console. While you could add some code to execute them automatically, I prefer to run them on my own.
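If you do want to automate it, a minimal sketch using an anonymous DO block could look like this (assuming you have already verified the list of matching tables; 'prefix%' is the same placeholder as above):
DO $$
DECLARE
    tbl text;
BEGIN
    FOR tbl IN
        SELECT tablename
        FROM pg_tables
        WHERE tablename LIKE 'prefix%'
          AND schemaname = 'public'
    LOOP
        EXECUTE format('DROP TABLE %I.%I', 'public', tbl);  -- %I quotes identifiers safely
    END LOOP;
END
$$;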