Copy permissions from another table - postgresql

Is it possible to copy the user permissions from one table in a PostgreSQL database to another table? Is it just a matter of updating the pg_class.relacl column value for the target table to the value for the source table, as in:
UPDATE pg_class
SET relacl=(SELECT relacl FROM pg_class WHERE relname='source_table')
WHERE relname='target_table';
This seems to work, but am I missing anything else that may need to be done or other 'gotchas' with this method?
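For reference, comparing the two relacl values before and after is just a catalog query:
SELECT relname, relacl
FROM pg_class
WHERE relname IN ('source_table', 'target_table');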

If you can use the command line instead of SQL, then a safer approach would be to use pg_dump:
pg_dump dbname -t oldtablename -s \
| egrep '^(GRANT|REVOKE)' \
| sed 's/oldtablename/newtablename/' \
| psql dbname
I'm assuming a Unix server. On Windows I'd use pg_dump -s to a file, edit it manually, and then import it into the database.
You may also need to copy permissions on sequences owned by this table - pg_dump will handle those too.
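For example, the same pipeline for a sequence owned by the table - a sketch assuming the default oldtablename_id_seq naming, so adjust to your actual sequence names:
pg_dump dbname -t oldtablename_id_seq -s \
| egrep '^(GRANT|REVOKE)' \
| sed 's/oldtablename_id_seq/newtablename_id_seq/' \
| psql dbname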

The pg_dump approach is nice and simple; however, it doesn't work with tables in other schemas, as the output doesn't qualify the table with its schema name. Instead it generates:
SET search_path = foo, pg_catalog;
...
GRANT SELECT ON foo_table to foo_user;
and will fail trying to grant privileges on a nonexistent public.foo_table relation.
Also, if you have relations with the same name in different schemas, you need to ensure that you only rename the table in the specified schema. I began hacking a bash script based on the above to take care of this, but it started to become a bit unwieldy, so I switched to Perl.
Usage: transfer-acl old-qualified-relation=new-qualified-relation
e.g. transfer-acl foo.foo_table=foo.bar_table will apply the grants on foo.foo_table to foo.bar_table. I didn't implement any REVOKE rewriting because I wasn't able to get a dump to emit any.
#! /usr/bin/perl
use strict;
use warnings;

# map old fully-qualified name => new fully-qualified name from the arguments
my %rename = map { (split '=') } @ARGV;

# 'customer' is the database name; adjust to suit
open my $dump, '-|', qw(pg_dump customer -s), map { ('-t', $_) } keys %rename
    or die "Cannot open pipe from pg_dump: $!\n";

my $schema = 'public';
while (<$dump>) {
    if (/^SET search_path = (\w+)/) {
        $schema = $1;
    }
    elsif (/^(GRANT .*? ON TABLE )(\w+)( TO (?:[^;]+);)$/) {
        my $fq_table = "$schema." . $2;    # fully-qualified schema.table
        print "$1$rename{$fq_table}$3\n" if exists $rename{$fq_table};
    }
}
Pipe the results of this to psql and you're set.
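For example (the database name, customer, is hard-coded in the pg_dump call above):
./transfer-acl foo.foo_table=foo.bar_table | psql customer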

Related

Using pg_dump to export multiple schemas from a list (like a par file)

To export multiple schemas using pg_dump, I'm aware I need to do something like this:
As an example:
pg_dump -n user1 -n user2 -f backup.sql
But what if I have, say, 10 schemas? Instead of using "-n" n times, is there a better way to define the schema list, e.g. in a text file, and somehow feed it to the pg_dump command line?
Thanks
Using patterns or regular expressions, as described in the official docs:
-n pattern
--schema=pattern
Dump only schemas matching pattern; this selects both the schema itself, and all its contained objects. Multiple schemas can be selected by writing multiple -n switches. The pattern parameter is interpreted as a pattern according to the same rules used by psql's \d commands (see Patterns below), so multiple schemas can also be selected by writing wildcard characters in the pattern. When using wildcards, be careful to quote the pattern if needed to prevent the shell from expanding the wildcards; see Examples below.
To dump all schemas whose names start with east or west and end in gsm, excluding any schemas whose names
contain the word test:
$ pg_dump -n 'east*gsm' -n 'west*gsm' -N '*test*' mydb > db.sql
The same, using regular expression notation to consolidate the switches:
$ pg_dump -n '(east|west)*gsm' -N '*test*' mydb > db.sql
Or with a shell script and a text file containing the names of the schemas, one per line:
#!/bin/bash
cat schemas.txt | while read schema || [[ -n $schema ]];
do
cmd="pg_dump -n '${schema}' postgres > ${schema}.sql"
printf '%s\n' "$cmd"
eval "$cmd"
done
where postgres is your database.
And additionally a version for all the selected schemas in one dump file
#!/bin/bash
cat schemas.txt | (while read schema || [[ -n $schema ]];
do
params+="-n '${schema}' "
done
cmd="pg_dump $params postgres > some_schemas.sql"
printf '%s\n' "$cmd"
eval "$cmd")

Backup complete postgis database with geometry transformation

I have a PostGIS enabled PostgreSQL database that is no longer needed in production. I would like to back it up, but the geometry columns of PostGIS should be transformed to a simple, long-term stable text format like WKT.
I'm aware of the ST_AsText function.
SELECT road_id, ST_AsText(road_geom) AS geom, road_name FROM roads;
But how do I apply this to backup the complete database with several tables with geometries and many without?
That's the backup strategy I finally applied:
1. Normal dump of the PostGIS-Postgres database with pg_dump. Plain-text format to improve readability.
2. CSV backup of all tables
For this I created a copy of my initial database ("la_as_text") and transformed all the geometry columns to text with the following script (thanks @Jim Jones). It's very specific to my case, but I decided to post it like this anyway, just in case somebody runs into the same issues as I did with views that depend on geometry columns and gist indexes: the views won't work if you change the data type to text, and text columns are also not valid for gist indexing.
#!/bin/bash
# has to be run by a user who has owner permissions for the database
rm delete_views.txt delete_views_mod.txt alter_script.txt alter_script_mod.txt
# delete all views - not needed in the long-term backup, and they cause problems when altering geometry columns
echo "select 'drop view ' ||table_name|| ';' from information_schema.views where table_schema not in ('pg_catalog', 'information_schema') and table_name "'!'"~ '^pg_';" | psql la_as_text >> delete_views.txt
cat delete_views.txt | tail -n +3 | head -n -2 >> delete_views_mod.txt
cat delete_views_mod.txt | psql la_as_text
# delete one particular gist index that depends on a geometry column -- text can't be indexed with gist
echo "drop index idx_combined_points_the_geom;" | psql la_as_text
# change data type geometry (EWKB) to human readable text (WKT)
echo "select 'alter table '||table_schema||'.'||table_name||' alter column '||column_name||' type text USING ST_AsText('||column_name||');' from information_schema.columns where table_schema = 'public' and udt_name = 'geometry';" | psql la_as_text >> alter_script.txt
cat alter_script.txt | tail -n +3 | head -n -2 >> alter_script_mod.txt
cat alter_script_mod.txt | psql la_as_text
The resulting database lacks a lot of its initial functionality due to the missing geometry data type, but it's human readable.
Instead of a normal dump I exported all the individual tables as ';'-separated text files with the following script:
#!/bin/bash
SCHEMA="public"
DB="la_as_text"
psql -Atc "select tablename from pg_tables where schemaname='$SCHEMA'" $DB |\
while read TBL; do
psql -c "copy $SCHEMA.$TBL to stdout with csv delimiter ';'" $DB > $TBL.csv
done
I'm pretty confident that this backup can be reconstructed in the future -- if necessary.
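If it ever has to be reconstructed, reloading a single table from its CSV should just be a client-side \copy - an untested sketch, using roads as an example table name and assuming the schema itself has already been restored from the plain-text dump:
psql -d la_as_text -c "\copy public.roads from 'roads.csv' with (format csv, delimiter ';')"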

Postgres COPY command to tail a file?

I want to use the COPY command to copy data into Postgres while other processes are simultaneously writing to the CSV file.
Is something like this possible? Take the stdout from tail and pipe it into the stdin of Postgres:
COPY targetTable ( column1, column2 )
FROM `tail -f 'path/to/data.csv'`
WITH CSV
Assuming PostgreSQL 9.3 or later, there's the possibility of copying from a program's output with:
COPY FROM PROGRAM 'command'
From the doc:
PROGRAM
A command to execute. In COPY FROM, the input is read from standard
output of the command, and in COPY TO, the output is written to the
standard input of the command.
This may be what you need, except that since tail -f is a never-ending command by design, it's not obvious how the COPY would ever finish. Presumably you'd need to replace tail -f with a more elaborate script that has some exit condition.
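For example, one crude exit condition is a time limit - an untested sketch, assuming GNU coreutils' timeout is available to the server and targettable is just a placeholder:
-- read the file (and anything appended) for 60 seconds, then let COPY finish;
-- '|| true' keeps the exit status zero so COPY doesn't treat the timeout as an error
COPY targettable (column1, column2)
FROM PROGRAM 'timeout 60 tail -n +1 -f /path/to/data.csv || true'
WITH (FORMAT csv);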
You can also do COPY FROM STDIN.
Example:
tail -f datafile.csv | psql -tc "COPY table from STDIN" database

PostgreSQL: How to pass parameters from command line?

I have a somewhat detailed query in a script that uses ? placeholders. I wanted to test this same query directly from the psql command line (outside the script). I want to avoid going in and replacing all the ? with actual values, instead I'd like to pass the arguments after the query.
Example:
SELECT *
FROM foobar
WHERE foo = ?
AND bar = ?
OR baz = ? ;
Looking for something like:
%> {select * from foobar where foo=? and bar=? or baz=? , 'foo','bar','baz' };
You can use the -v option, e.g.:
$ psql -v v1=12 -v v2="'Hello World'" -v v3="'2010-11-12'"
and then refer to the variables in SQL as :v1, :v2, etc.:
select * from table_1 where id = :v1;
Note how string/date values are passed using two levels of quotes ("'...'"). But this way of interpolation is prone to SQL injection, because it's you who's responsible for the quoting. E.g. need to include a single quote? -v v2="'don''t do this'".
A better/safer way is to let PostgreSQL handle it:
$ psql -c 'create table t (a int, b varchar, c date)'
$ echo "insert into t (a, b, c) values (:'v1', :'v2', :'v3')" \
| psql -v v1=1 -v v2="don't do this" -v v3=2022-01-01
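Applied to the original query, a sketch of the same quoting-safe approach (dbname and the values are just examples):
echo "select * from foobar where foo = :'v1' and bar = :'v2' or baz = :'v3';" \
| psql -v v1=foo -v v2="doesn't need manual escaping" -v v3=baz dbname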
It turns out that in PostgreSQL you can PREPARE statements, just like you can in a scripting language. Unfortunately, you still can't use ?, but you can use $n notation.
Using the above example:
PREPARE foo(text,text,text) AS
SELECT *
FROM foobar
WHERE foo = $1
AND bar = $2
OR baz = $3 ;
EXECUTE foo('foo','bar','baz');
DEALLOCATE foo;
In psql there is a mechanism via the
\set name val
command, which is supposed to be tied to the -v name=val command-line option. Quoting is painful; in most cases it is easier to put the whole query body inside a shell here-document.
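A minimal sketch of that here-document variant (the variable names are made up, and it has the same injection caveat as any manual quoting):
#!/bin/bash
# splice shell variables into the query via a here-document
FOO=foo; BAR=bar; BAZ=baz
psql dbname <<SQL
SELECT *
FROM foobar
WHERE foo = '$FOO'
  AND bar = '$BAR'
   OR baz = '$BAZ';
SQL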
Edit
Oops, I should have said -v instead of -P (which is for formatting options); the previous reply got it right.
You can also pass in the parameters at the psql command line, or from a batch file. The first statements gather the necessary details for connecting to your database.
The final prompt asks for the constraint values, which will be used in the WHERE column IN() clause. Remember to single-quote strings, and separate values with commas:
@echo off
echo "Test for Passing Params to PGSQL"
SET server=localhost
SET /P server="Server [%server%]: "
SET database=amedatamodel
SET /P database="Database [%database%]: "
SET port=5432
SET /P port="Port [%port%]: "
SET username=postgres
SET /P username="Username [%username%]: "
SET /P constraints="Enter multiple constraint values for IN clause [%constraints%]: "
ECHO you typed %constraints%
PAUSE
REM pause
"C:\Program Files\PostgreSQL\9.0\bin\psql.exe" -h %server% -U %username% -d %database% -p %port% -e -v v1=%constraints% -f test.sql
Now in your SQL code file, add the v1 token within your WHERE clause, or anywhere else in the SQL. Note that the tokens can also be used in an open SQL statement, not just in a file. Save this as test.sql:
SELECT * FROM myTable
WHERE NOT someColumn IN (:v1);
In Windows, save the whole file as a DOS BATch file (.bat), save the test.sql in the same directory, and launch the batch file.
Thanks to Dave Page, of EnterpriseDB, for the original prompted script.
I would like to offer another answer inspired by @malcook's comment (using bash).
This option may work for you if you need to use shell variables within your query when using the -c flag. Specifically, I wanted to get the count of a table, whose name was a shell variable (which you can't pass directly when using -c).
Assume you have your shell variable:
TABLE_NAME='users'
Then you can get the results of that by using
psql -q -A -t -d databasename -c <<< echo "select count(*) from $TABLE_NAME;"
(the -q -A -t is just to print out the resulting number without additional formatting)
I will note that the echo in the here-string (the <<< operator) may not be necessary; I originally thought the quotes by themselves would be fine. Maybe someone can clarify the reason for this.
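For what it's worth, the here-string and echo can probably be dropped entirely; passing the interpolated string straight to -c appears to do the same thing:
psql -q -A -t -d databasename -c "select count(*) from $TABLE_NAME;"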
It would appear that what you ask can't be done directly from the command line. You'll either have to use a user-defined function in plpgsql or call the query from a scripting language (and the latter approach makes it a bit easier to avoid SQL injection).
I've ended up using a better version of @vol7ron's answer:
DO $$
BEGIN
    IF NOT EXISTS (SELECT 1 FROM pg_prepared_statements WHERE name = 'foo') THEN
        PREPARE foo(text, text, text) AS
            SELECT *
            FROM foobar
            WHERE foo = $1
              AND bar = $2
               OR baz = $3;
    END IF;
END $$;
EXECUTE foo('foo','bar','baz');
This way you can always execute it in this order (the query is prepared only if it has not been prepared yet), repeat the execution, and get the result of the last query.

How to export table as CSV with headings on Postgresql?

I'm trying to export a PostgreSQL table with headings to a CSV file via the command line. I can get it to export to a CSV file, but without headings.
My code looks as follows:
COPY products_273 to '/tmp/products_199.csv' delimiters',';
COPY products_273 TO '/tmp/products_199.csv' WITH (FORMAT CSV, HEADER);
as described in the manual.
From psql command line:
\COPY my_table TO 'filename' CSV HEADER
no semi-colon at the end.
Instead of just a table name, you can also write a query to get only selected column data:
COPY (select id,name from tablename) TO 'filepath/aa.csv' DELIMITER ',' CSV HEADER;
The server-side COPY above needs admin privileges to write the file; without them, use psql's \COPY instead:
\COPY (select id,name from tablename) TO 'filepath/aa.csv' DELIMITER ',' CSV HEADER;
When I don't have permission to write a file out from Postgres I find that I can run the query from the command line.
psql -U user -d db_name -c "Copy (Select * From foo_table LIMIT 10) To STDOUT With CSV HEADER DELIMITER ',';" > foo_data.csv
This works:
psql dbname -F , --no-align -c "SELECT * FROM TABLE"
The simplest way (using psql) seems to be to use the --csv flag:
psql --csv -c "SELECT * FROM products_273" > '/tmp/products_199.csv'
For version 9.5, which I use, it would be like this:
COPY products_273 TO '/tmp/products_199.csv' WITH (FORMAT CSV, HEADER);
This solution worked for me using \copy.
psql -h <host> -U <user> -d <dbname> -c "\copy <table_name> FROM '<path to csvfile/file.csv>' with (format csv,header true, delimiter ',');"
Here's how I got it working in PowerShell, using psql to connect to a Heroku PG database:
I had to first change the client encoding to utf8 like this: \encoding UTF8
Then I dumped the data to a CSV file like this:
\copy (SELECT * FROM my_table) TO C://wamp64/www/spider/chebi2/dump.csv CSV DELIMITER '~'
I used ~ as the delimiter because I don't like CSV files; I usually use TSV files, but it won't let me use '\t' as the delimiter, so I used ~ because it's a rarely used character.
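If a real tab delimiter is wanted, the server-side COPY accepts an escape string constant - an untested sketch (it needs permission to write files on the server, and my_table is just a placeholder):
COPY (SELECT * FROM my_table) TO '/tmp/dump.tsv' WITH (FORMAT csv, DELIMITER E'\t', HEADER);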
The COPY command isn't what is restricted. What is restricted is directing the TO output anywhere except STDOUT. However, there is no restriction on specifying the output file via the \o command.
\o /tmp/products_199.csv
COPY products_273 TO STDOUT WITH (FORMAT CSV, HEADER);
COPY (<any SQL query you want to export>) TO '<file absolute path with name>' DELIMITER ',' CSV HEADER;
Using this you can also export data.
I am posting this answer because none of the other answers given here actually worked for me. I could not use COPY from within Postgres, because I did not have the correct permissions. So I chose "Export grid rows" and saved the output as UTF-8.
The psql version given by @Brian also did not work for me, for a different reason.
ERROR: character with byte sequence 0x81 in encoding "WIN1252" has no equivalent in encoding "UTF8"
The solution I ended up using was to write a short JDBC script (Java) which read the CSV file and issued insert statements directly into my Postgres table. This worked, but the command prompt also would have worked had it not been altering the encoding.
Try this:
COPY products_273 TO '/tmp/products_199.csv' DELIMITER ',' CSV HEADER;
In pgAdmin, highlight your query statement just like when you use F5 to execute and press F9 - this will open the file browser so you can pick where you save your CSV.
If you are using Azure Data Studio, the instruction are here: Azure Data Studio: Save As CSV.
I know this isn't a universal solution, but most of the time you just want to grab the file by hand.