Exporting sequences in PostgreSQL

I want to export ONLY the sequences created in a database in PostgreSQL.
Is there any option to do that?
Thank you!

You could write a query that generates a script to recreate your existing sequence objects, based on this information schema view:
select *
from information_schema.sequences;
Something like this:
SELECT 'CREATE SEQUENCE ' || sequence_name || ' START ' || start_value || ';'
from information_schema.sequences;
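If you also want to preserve the increment, bounds, and cycle behaviour, the same view exposes those as well; here is a rough sketch along the same lines (all column names taken from information_schema.sequences):
SELECT 'CREATE SEQUENCE ' || quote_ident(sequence_schema) || '.' || quote_ident(sequence_name)
       || ' INCREMENT ' || increment
       || ' MINVALUE ' || minimum_value
       || ' MAXVALUE ' || maximum_value
       || ' START ' || start_value
       || CASE WHEN cycle_option = 'YES' THEN ' CYCLE' ELSE '' END
       || ';'
FROM information_schema.sequences;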

I know it's an old question, but today I had a similar requirement, so I solved it the same way: by generating a series of "CREATE SEQUENCE" statements that can be used to re-create the sequences on another DB after a bad import (missing sequences).
Here is the SQL I used:
SELECT
    'CREATE SEQUENCE ' || c.relname ||
    ' START ' || (SELECT setval(c.relname::text, nextval(c.relname::text) - 1)) || ';'
    AS "CREATE SEQUENCE SQLs"
FROM
    pg_class c
WHERE
    c.relkind = 'S'; -- 'S' = sequences; the setval/nextval pair reads the current value and puts the counter back
Maybe that will be helpful to someone.
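Note that nextval(c.relname::text) can fail for sequences that are outside the search_path or that have mixed-case names. Casting the OID to regclass instead sidesteps both problems; a variation of the same idea, offered only as a sketch:
SELECT 'CREATE SEQUENCE ' || c.oid::regclass ||
       ' START ' || (SELECT setval(c.oid::regclass, nextval(c.oid::regclass) - 1)) || ';'
       AS "CREATE SEQUENCE SQLs"
FROM pg_class c
WHERE c.relkind = 'S';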

Using DBeaver, you can:
open a schema
select its sequences
press Ctrl-F to search for the sequences you're interested in
press Ctrl-A to select all of them
right-click and select Generate SQL -> DDL
You will be given SQL statements to create all of the selected sequences.

Related

How to add ONE column to ALL tables in postgresql schema

The question is pretty simple, but I can't seem to find a concrete answer anywhere.
I need to update all tables inside my postgresql schema to include a timestamp column with default NOW(). I'm wondering how I can do this via a query instead of having to go to each individual table. There are several hundred tables in the schema and they all just need to have the one column added with the default value.
Any help would be greatly appreciated!
The easy way with psql: run a query to generate the commands, then save and run the results.
-- Turn off headers and row-count footers (tuples-only mode):
\t
-- Use SQL to build SQL:
SELECT 'ALTER TABLE public.' || table_name || ' add fecha timestamp not null default now();'
FROM information_schema.tables
WHERE table_type = 'BASE TABLE' AND table_schema='public';
-- If the output looks good, write it to a file and run it:
\g out.tmp
\i out.tmp
-- or, if you don't want the temporary file, use \gexec to run it:
\gexec
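If you would rather stay inside a single statement instead of generating a script, a DO block with dynamic SQL is another option. A minimal sketch, assuming the public schema and the fecha column from the query above:
DO $$
DECLARE
    t record;
BEGIN
    FOR t IN
        SELECT table_name
        FROM information_schema.tables
        WHERE table_type = 'BASE TABLE' AND table_schema = 'public'
    LOOP
        -- %I quotes the table name safely
        EXECUTE format('ALTER TABLE public.%I ADD fecha timestamp NOT NULL DEFAULT now()',
                       t.table_name);
    END LOOP;
END
$$;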

How to copy the sequence numbers in postgresql

For a Postgres-based database I need to mirror the table definitions and sequence numbers from one schema to another. For copying the schema definitions, I've been able to use pg_dump with schema definitions only; however, the documentation seems to indicate that sequence numbers are only exported when data export is selected.
Is there an easy way to export the corresponding sequence numbers as part of the schema export, or an easy way to transfer these values, or is the only alternative to interface with the database from a scripting language?
Looking at the dump, pg_dump first writes the creation of the sequence and then corrects its current value with
SELECT pg_catalog.setval('tuutti_id_seq', 4, true);
So if you do a schema-only dump, you can construct that statement from the information schema, for example with this SQL query:
SELECT 'SELECT pg_catalog.setval(''' || sequence_name || ''', ' || start_value || ', true);'
FROM information_schema.sequences;
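Keep in mind that start_value in information_schema.sequences is the declared START value, not necessarily the live counter. On PostgreSQL 10 and later, the pg_sequences view exposes the current value as last_value, so a sketch like the following may be closer to what pg_dump actually emits:
SELECT 'SELECT pg_catalog.setval(' || quote_literal(schemaname || '.' || sequencename)
       || ', ' || last_value || ', true);'
FROM pg_sequences
WHERE last_value IS NOT NULL;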

Change all of the table owners within a schema

I'm currently using the following Postgres query, then copying the output and running it to change all of the tables in a specified schema. What's the best way to avoid having to run this manually every time, for example a stored procedure?
select 'ALTER TABLE ' || table_name || ' OWNER TO new_owner;'
from information_schema.tables
where table_schema = 'specified_schema';
A stored procedure would have to be run as well, so you gain nothing.
I would create a cron job and put it in cron/{hourly,daily}, provided that this is really the best way to solve the problem.
You do not give enough information to judge that.
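If the goal is simply to skip the copy-and-paste step, psql's \gexec sends each generated statement straight back to the server. A sketch against the same query (new_owner and specified_schema are the placeholders from the question):
SELECT format('ALTER TABLE %I.%I OWNER TO new_owner', table_schema, table_name)
FROM information_schema.tables
WHERE table_schema = 'specified_schema' \gexec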

How do I grant select for a user on all tables?

I have a user in my DB2 database to whom I want to grant SELECT rights on all tables and views in a given schema. Any thoughts on how to do that as one SQL statement?
In order to grant SELECT to a given user, you have to "generate" the statement for each table and view of a given schema. You can do it via the CLP with a query like this:
db2 -x "select 'grant select on table ' || rtrim(tabschema) || '.' || rtrim(tabname) || ' to user JOHN_DOE' from syscat.tables where tabschema like 'FOO%' and (type = 'T' or type = 'V')" | db2 +p -tv
This command line will generate the grants for user JOHN_DOE for all tables (T) and views (V) of any schema starting with FOO.
If you have many tables, the output will be very big and may fill the internal buffer. In that case, reissue the command generating the grants for a smaller set of tables.
If you are not sure about what you are going to execute, issue the previous command without the final part (| db2 +p -tv); this will write the commands to standard output. That final part, however, is the most important one, because it is what executes the generated output.
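For reference, the raw catalog query behind that one-liner looks like this when reformatted; you can run it in any SQL client against SYSCAT.TABLES and save the output to a script before executing it (JOHN_DOE and the FOO% schema filter are just the examples from above):
SELECT 'GRANT SELECT ON TABLE ' || RTRIM(tabschema) || '.' || RTRIM(tabname) || ' TO USER JOHN_DOE;'
FROM syscat.tables
WHERE tabschema LIKE 'FOO%'
  AND type IN ('T', 'V');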
For more details, please check the InfoCenter or my blog http://angocadb2.blogspot.com/2011/12/ejecutar-la-salida-de-un-query-en-clp.html (In Spanish)

How do I drop all tables in psql (PostgreSQL interactive terminal) that starts with a common word?

How do I drop all tables whose names start with, say, doors_? Can I do some sort of regex using the drop table command?
I'd prefer not to write a custom script, but all solutions are welcome. Thanks!
This script will generate the DDL commands to drop them all:
SELECT 'DROP TABLE ' || t.oid::regclass || ';'
FROM pg_class t
-- JOIN pg_namespace n ON n.oid = t.relnamespace -- to select by schema
WHERE t.relkind = 'r'
AND t.relname ~~ E'doors\_%' -- enter search term for table here
-- AND n.nspname ~~ '%myschema%' -- optionally select by schema(s), too
ORDER BY 1;
The cast t.oid::regclass makes the syntax work for mixed case identifiers, reserved words or special characters in table names, too. It also prevents SQL injection and prepends the schema name where necessary. More about object identifier types in the manual.
About the schema search path.
You could automate the dropping, too, but it's unwise not to check what you actually delete before you do.
You could append CASCADE to every statement to DROP depending objects (views and referencing foreign keys). But, again, that's unwise unless you know very well what you are doing. Foreign key constraints are no big loss, but this will also drop all dependent views entirely. Without CASCADE you get error messages informing you which objects prevent you from dropping the table. And you can then deal with it.
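For the automation mentioned above, once you have reviewed the generated statements, psql's \gexec can execute them directly (same filter as in the query above; the trailing semicolons are dropped because \gexec runs each result row as its own statement):
SELECT 'DROP TABLE ' || t.oid::regclass
FROM pg_class t
WHERE t.relkind = 'r'
  AND t.relname ~~ E'doors\_%'
ORDER BY 1 \gexec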
I normally use one query to generate the DDL commands for me based on some of the metadata tables and then run those commands manually. For example:
SELECT 'DROP TABLE ' || tablename || ';' FROM pg_tables
WHERE tablename LIKE 'prefix%' AND schemaname = 'public';
This will return a bunch of DROP TABLE xxx; statements, which I simply copy and paste into the console. While you could add some code to execute them automatically, I prefer to run them on my own.