How to add ONE column to ALL tables in postgresql schema - postgresql

The question is pretty simple, but I can't seem to find a concrete answer anywhere.
I need to update all tables inside my postgresql schema to include a timestamp column with default NOW(). I'm wondering how I can do this via a query instead of having to go to each individual table. There are several hundred tables in the schema and they all just need to have the one column added with the default value.
Any help would be greatly appreciated!

The easy way with psql: run a query to generate the commands, then save and run the results.
-- Turn off headers and footers (tuples-only mode):
\t
-- Use SQL to build SQL:
SELECT 'ALTER TABLE public.' || table_name || ' add fecha timestamp not null default now();'
FROM information_schema.tables
WHERE table_type = 'BASE TABLE' AND table_schema='public';
-- If the output looks good, write it to a file and run it:
\g out.tmp
\i out.tmp
-- Or, if you don't want the temporary file, use \gexec to run it directly:
\gexec
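If you'd rather do it in a single server-side step without psql meta-commands, a DO block along the same lines should also work. This is just a sketch: the column name fecha and schema public are carried over from the example above, and ADD COLUMN IF NOT EXISTS needs PostgreSQL 9.6 or later.
do
$$
declare
  t_rec record;
begin
  for t_rec in select table_schema, table_name
               from information_schema.tables
               where table_type = 'BASE TABLE'
                 and table_schema = 'public'
  loop
    -- %I quotes each identifier, so unusual table names are handled safely
    execute format('ALTER TABLE %I.%I ADD COLUMN IF NOT EXISTS fecha timestamp not null default now()',
                   t_rec.table_schema, t_rec.table_name);
  end loop;
end;
$$;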

Related

Postgres run statement against multiple schemas

I have a multi-tenant application where tenants are set up on different schemas within the same database. The reason is that they share some data that lives in one of the schemas.
Up until now I've been using a bash script containing a list of the schemas; it has to be updated whenever a new schema is added, and I use it to apply table schema changes across the accounts, for instance adding a new column to a table.
Is there a way in Postgres, psql, etc. to run, for instance,
ALTER TABLE some_table ADD COLUMN some_column TEXT NOT NULL DEFAULT '';
without having to do string replacement in another script, such as bash?
In other words, is there an easy way to get the schemas and write a loop in psql that iterates through them and runs the statement for each one, for instance by setting search_path?
The number of tenants is growing, and new tenants can be added by admin users who aren't devs, so I'm constantly updating my shell scripts, and this will only get worse. Is there a standard way of handling this kind of problem?
You can do that with a little PL/pgSQL block:
do
$$
declare
  s_rec record;
begin
  for s_rec in select schema_name
               from information_schema.schemata
               where schema_name not in ('pg_catalog', 'information_schema')
  loop
    execute format('ALTER TABLE if exists %I.some_table ADD COLUMN some_column TEXT NOT NULL DEFAULT ''''', s_rec.schema_name);
  end loop;
end;
$$;
The IF EXISTS makes sure the statement doesn't fail if the table doesn't exist in a given schema.
If you over-simplified your question and in fact want to run a complete script once for each schema, generating a per-schema wrapper that includes the actual script is probably easier:
select concat(format('set search_path = %I;', schema_name),
chr(10),
'\i complete_migration_script.sql')
from information_schema.schemata
where schema_name not in ('pg_catalog', 'information_schema')
You can spool the output of that statement into a file and then run that file using psql (of course, you need to replace complete_migration_script.sql with the actual name of your script).
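For example, the spooling itself can be done entirely inside psql with \t and \o; the file name run_for_all_schemas.sql below is just a placeholder.
\t
\o run_for_all_schemas.sql
select concat(format('set search_path = %I;', schema_name),
              chr(10),
              '\i complete_migration_script.sql')
from information_schema.schemata
where schema_name not in ('pg_catalog', 'information_schema');
\o
\i run_for_all_schemas.sql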

Change all of the table owners within a schema

I'm currently using the following Postgres query, then copying the output and running it to change the owner of all tables in a specified schema. What's the best way to avoid having to run this by hand every time, such as a stored procedure?
select 'ALTER TABLE ' || table_name || ' OWNER TO new_owner;'
from information_schema.tables
where table_schema = 'specified_schema';
A stored procedure must be run manually as well, so you gain nothing.
I would create a cron job and put it in cron/{hourly,daily}, provided that this really is the best way to solve the problem.
You do not give any information to judge that.
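If you do decide to script it so the cron job only has to call psql, a DO block in the style of the answers above should work. This is just a sketch, with specified_schema and new_owner as placeholders:
do
$$
declare
  t_rec record;
begin
  for t_rec in select table_schema, table_name
               from information_schema.tables
               where table_schema = 'specified_schema'
  loop
    execute format('ALTER TABLE %I.%I OWNER TO new_owner',
                   t_rec.table_schema, t_rec.table_name);
  end loop;
end;
$$;
A cron job could then run it with something like psql -d your_db -f change_owners.sql.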

How can a list of a table's field names be queried from PostgreSQL?

How can a plain-text list of the field names of a table be retrieved from a PostgreSQL database?
Just query INFORMATION_SCHEMA.COLUMNS, like this:
SELECT
column_name,
data_type,
character_maximum_length,
ordinal_position
FROM information_schema.columns
WHERE table_name = 'mytable'
Better still, INFORMATION_SCHEMA is supported by almost all popular SQL databases, so this should work nearly anywhere.
If you really just want a plain-text dump, you can execute this query using command-line psql and save the result as CSV or similar.
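For example, from the shell, something like this should give a plain list of column names (mydb and mytable are placeholders):
psql -d mydb -t -A -c "select column_name from information_schema.columns where table_name = 'mytable' order by ordinal_position"
Here -t suppresses headers and -A turns off column alignment, so the output is one bare column name per line.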

Exporting sequences in PostgreSQL

I want to export ONLY the sequences created in a PostgreSQL database.
Is there any option to do that?
Thank you!
You could write a query to generate a script that will create your existing sequence objects by querying this information schema view.
select *
from information_schema.sequences;
Something like this:
SELECT 'CREATE SEQUENCE ' || sequence_name || ' START ' || start_value || ';'
from information_schema.sequences;
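If you also want the increment and the bounds, the same view exposes them, so the generated statement can be a bit more complete. A sketch; note that start_value here is the sequence's defined start, not its current value:
SELECT 'CREATE SEQUENCE ' || sequence_schema || '.' || sequence_name ||
       ' INCREMENT ' || increment ||
       ' MINVALUE ' || minimum_value ||
       ' MAXVALUE ' || maximum_value ||
       ' START ' || start_value || ';'
FROM information_schema.sequences;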
I know it's an old question, but today I had a similar requirement, so I solved it the same way by generating a series of "CREATE SEQUENCE" statements that can be used to re-create the sequences on another DB after a bad import (missing sequences).
Here is the SQL I used:
SELECT 'CREATE SEQUENCE ' || c.relname ||
       ' START ' || (select setval(c.relname::text, nextval(c.relname::text) - 1))
       AS "CREATE SEQUENCE SQLs"
FROM pg_class c
WHERE c.relkind = 'S';
Maybe that can be helpful for someone.
Using DBeaver, you can:
open a schema
select its sequences
Ctrl-F to search for the sequences you're interested in
Ctrl-A to select all of them
Right-click and select Generate SQL -> DDL
You will be given SQL statements to create all of the sequences selected.

How do I find all code, triggers from an oracle database that relate to specific tables?

I have a problem where I need to remove all code and triggers from a database that relate to certain tables in order for a Solaris package to install. Long, complicated story, but I need to start with a clean slate.
I've managed to remove all the existing tables/synonyms, but how do I locate the related code/triggers from sqlplus?
Unfortunately, it's not feasible to drop the database and recreate it.
Well, it turns out all the table names are prefixed with my module name DAP.
So, to find all the table names and public synonyms with sqlplus:
select table_name from all_tables where table_name like 'DAP%';
select synonym_name from all_synonyms where table_name like 'DAP%';
To get a list of triggers and sequences:
select trigger_name from all_triggers where table_name like 'DAP%';
select sequence_name from all_sequences where sequence_name like 'DAP%';
To get a list of all the constraints:
select table_name, constraint_name from all_constraints where table_name like 'DAP%';
To get the DAP-related code:
select text from dba_source where name like 'DAP%';
I can now write a script that drops everything.
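For example, the DROP statements themselves can be generated the same way. A sketch against USER_OBJECTS, assuming everything to remove lives in your own schema and is prefixed with DAP:
select 'DROP ' || object_type || ' ' || object_name || ';'
from user_objects
where object_name like 'DAP%'
  and object_type in ('TABLE', 'SEQUENCE', 'TRIGGER', 'PROCEDURE', 'FUNCTION', 'PACKAGE');
Spool that output to a file, review it, and run it from sqlplus.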
You should be able to query the data dictionary view ALL_TRIGGERS to find the triggers. It has a table_name column. You can probably find the other related objects with different dictionary views (it's been a while since I've messed with Oracle).
http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/statviews_2107.htm
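For instance, ALL_DEPENDENCIES can point you at the PL/SQL that references those tables (a sketch, again assuming the DAP prefix):
select owner, name, type
from all_dependencies
where referenced_type = 'TABLE'
  and referenced_name like 'DAP%';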