I'm using a windows batch file to connect to postgres using psql. I'm issuing commands like this....
SET PGPASSWORD=postgres
psql -U postgres -d postgres -c "DROP USER IF EXISTS foo;"
This works fine for running one short SQL command against the database. But I'm having trouble with two related issues:
How to continue a single long SQL command over multiple lines, and
How to run multiple commands.
Example 1.....
psql -U postgres -d postgres -c "CREATE DATABASE foo
WITH OWNER = bar
ENCODING = 'UTF8'
TABLESPACE = mytabspace;"
Example 2.....
psql -U postgres -d postgres -c "
ALTER TABLE one ALTER COLUMN X TYPE INTEGER;
ALTER TABLE two ALTER COLUMN Y TYPE INTEGER;"
Neither of these will work as shown. I've done some googling and found some suggestions for doing this on Linux, and have experimented with various carets, backslashes and underscores, but I just don't seem to be able to split the commands across lines.
I'm aware of the -f option to run a file, but I'm trying to avoid that.
Any suggestions?
The line continuation character in batch is the ^. See this Q&A
So end the line with space+caret ^ and make sure the following line begins with a space.
You will also have to escape the opening quote of any double-quoted area that spans several lines with a caret (^") for this to work.
Since the line is then unquoted as far as the batch parser is concerned, you will also have to escape any special characters like <, >, | and & with a caret.
psql -U postgres -d postgres -c ^"CREATE DATABASE foo ^
WITH OWNER = bar ^
ENCODING = 'UTF8' ^
TABLESPACE = mytabspace;"
psql -U postgres -d postgres -c ^" ^
ALTER TABLE one ALTER COLUMN X TYPE INTEGER; ^
ALTER TABLE two ALTER COLUMN Y TYPE INTEGER;"
Among the hundreds of tables in the postgres DB, I want to dump only the structure of the tables whose names start with a specific prefix.
Example table names:
abc
abc_hello
abc_hi
def
def_hello
def_hi
I want to dump only the tables starting with abc*, so I ran:
pg_dump abc.txt -Fp -t abc* -s DBName
However, it was not recognized, seemingly because the number of tables was too large. It answered:
pg_dump: error: too many command-line arguments (first is "DBName")
but this command works fine
pg_dump abc_hello.txt -Fp -t abc_hello -s DBName
How can I pick them out?
Your main mistake is that you didn't consider that * is also a special character for the shell. If you run
pg_dump -t abc* mydatabase
the shell will replace abc* with the names of all files in the current directory that start with abc, so that your command will actually be something like
pg_dump -t abcfile1 abcfile2 abcfile3 mydatabase
which is syntactically not correct and will make pg_dump complain.
You have to protect the asterisk from being expanded by the shell with single quotes:
pg_dump -t 'abc*' mydatabase
The other error in your command line is that you forgot the -f before abc.txt; it should be -f abc.txt.
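Putting both corrections together, the dump command should look something like this (a sketch keeping the question's original options: plain format, schema only, output to abc.txt):
pg_dump -f abc.txt -Fp -s -t 'abc*' DBName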
I have a file containing a value which should go into a field of a PostgreSQL table.
By searching a little, I found many answers, e.g. How can I update column values with the content of a file without interpreting it? or https://stackoverflow.com/a/14123513/6630397, with this kind of snippet, but it has to be run in a psql terminal:
\set content `cat /home/username/file.txt`
UPDATE table SET field = :'content' WHERE id=1;
It works, but is it possible to programmatically execute it in one shot, directly from a bash prompt, without manually entering the psql command line, e.g. something like:
$ psql -d postgres://postgres@localhost/mydatabase -c \
"UPDATE table SET field = :'the_file_content' WHERE id=1;"
?
There is also the -v argument, which seems promising, but I'm not successful when using it:
$ psql -d postgres://postgres@localhost/mydatabase \
-v content=`cat ${HOME}/file.txt` \
-c "UPDATE table SET field = :'content' WHERE id=1;"
I get thousands of psql: warning: extra command-line argument messages, where psql actually seems to "execute" each comma-separated string of the file as a pg command, which of course it shouldn't; the file content, which consists of a single line, must be treated as a whole.
Doc PostgreSQL 14:
https://www.postgresql.org/docs/current/app-psql.html
How about reading the file content into a variable first and then using it?
content=$(<integer_infile); psql -p 5434 -c "update table set field = $content where id = 1;"
content=$(<text_infile); psql -p 5434 -c "update table set field = '$content' where id = 1;"
This at least works for me if the file contains an integer or text including spaces on a single line.
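Applied to the setup in the question, a one-shot call from the bash prompt might look something like this (a sketch; it assumes the file is a single line with no single quotes or other characters that would need escaping, and reuses the question's placeholder names table, field and id):
content=$(<"$HOME/file.txt")
psql -d postgres://postgres@localhost/mydatabase -c "UPDATE table SET field = '$content' WHERE id=1;"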
I'm trying to perform a db query through a docker inline command within a shell script.
myscript.sh:
docker run -it --rm -c "psql -U ${DB_USER} -d ${DB_NAME} -h ${DB_HOST}\
-c 'select col1, col2 , col3 from table1\
where table1.col2 = \"matching_text\" order by col1;'"
But I get an odd error:
ERROR: column "matching_text" does not exist
LINE 1: ...ndow where table1.col2 = "matching_t...
For some reason when I run this, psql thinks the matching_text in my query is referring to a column name. How would I get around this?
Note: Our database is implemented as a psql docker container.
The Postgres manual explains you need to use single quotes:
A string constant in SQL is an arbitrary sequence of characters bounded by single quotes ('), for example 'This is a string'. To include a single-quote character within a string constant, write two adjacent single quotes, e.g., 'Dianne''s horse'. Note that this is not the same as a double-quote character (").
See section 4.1.2.1 of the postgres manual.
Double quotes are for table or column identifiers:
There is a second kind of identifier: the delimited identifier or quoted identifier. It is formed by enclosing an arbitrary sequence of characters in double-quotes ("). A delimited identifier is always an identifier, never a key word. So "select" could be used to refer to a column or table named "select", whereas an unquoted select would be taken as a key word and would therefore provoke a parse error when used where a table or column name is expected. The example can be written with quoted identifiers like this:
UPDATE "my_table" SET "a" = 5;
See section 4.1.1 of the same manual.
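Applied to the query in the question, the difference looks something like this (a sketch reusing the question's table and column names):
-- 'matching_text' is a string constant; "matching_text" would be parsed as a column name
SELECT col1, col2, col3 FROM table1 WHERE table1.col2 = 'matching_text' ORDER BY col1;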
A combination of the post here and another post solved this issue:
Use single quotes for the string in the query
Use double quotes around the argument of -c in the psql command (answer thread)
docker run -it --rm -c "psql -U ${DB_USER} -d ${DB_NAME} -h ${DB_HOST}\
-c \"select col1, col2 , col3 from table1\
where table1.col2 = 'matching_text' order by col1;\""
I'm trying to copy a table from one database to another database (NOT schema). The code I used in terminal is as below:
pg_dump -U postgres -t OldSchema.TableToCopy OldDatabase | psql -U postgres -d NewDatabase
When I press Enter it requests the postgres password; I enter my password, and then it requests the psql password. I enter it and press Enter. I receive lots of:
invalid command \N
ERROR: relation "TableToCopy" does not exist
Both tables have UTF8 encoding. Am I doing something wrong?
OS: Windows XP
Error output:
psql:TblToCopy.sql:39236: invalid command \N
psql:TblToCopy.sql:39237: invalid command \N
psql:TblToCopy.sql:39238: invalid command \N
.
.
.
After hundreds of the above errors, the terminal echoes:
psql:TblToCopy.sql:39245: ERROR: syntax error at or near "509"
LINE 1: 509 some gibberish words and letters here
And finally:
psql:TblToCopy.sql:39245: ERROR: relation "TableToCopy" does not exist
EDIT
I read this response to the same problem, \N error with psql; it says to use INSERT instead of COPY, but the file pg_dump created uses COPY. How do I tell pg_dump to use INSERT instead of COPY?
I converted the file to UTF-8 with iconv. Now that error has gone, but I have a new one: when I use psql to import the data into the database, the table gets created but without data. It says:
SET
SET
SET
SET
SET
SET
SET
SET
CREATE TABLE
ALTER TABLE
psql:tblNew.sql:39610: ERROR: value too long for type character(3)
CONTEXT: COPY words, line 1, column first_two_letters: "سر"
ALTER TABLE
ALTER TABLE
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE TRIGGER
I've tried to create a database with encoding UTF8 and a table, and to insert the two UTF-8 encoded characters the COPY command is trying to insert, and it works when using INSERT.
CREATE DATABASE test
WITH OWNER = postgres
ENCODING = 'UTF8'
TABLESPACE = pg_default
LC_COLLATE = 'English_United States.1252'
LC_CTYPE = 'English_United States.1252'
CONNECTION LIMIT = -1;
CREATE TABLE x
(
first_two_letters character(3)
)
WITH (
OIDS=FALSE
);
ALTER TABLE x
OWNER TO postgres;
INSERT INTO x(
first_two_letters)
VALUES ('سر');
According to http://rishida.net/tools/conversion/, the Unicode code points for the failing COPY are:
U+0633 U+0631
which are two characters, which means you should be able to store them in a column defined as character(3), which stores strings up to 3 characters (not bytes) in length.
and if we try to INSERT, it succeeds:
INSERT INTO x(
first_two_letters)
VALUES (U&'\0633\0631');
From the pg_dump documentation, you can use INSERT instead of COPY with the --inserts option:
--inserts
Dump data as INSERT commands (rather than COPY). This will make restoration very slow; it is mainly useful for making dumps that can
be loaded into non-PostgreSQL databases. However, since this option
generates a separate command for each row, an error in reloading a row
causes only that row to be lost rather than the entire table contents.
Note that the restore might fail altogether if you have rearranged
column order. The --column-inserts option is safe against column order
changes, though even slower.
Try to use this instead for Step 1:
pg_dump -U postgres -t OldSchema."TableToCopy" --inserts OldDatabase > Table.sql
I've also tried to COPY from a table to a file and use COPY to import it back, and for me it works.
Are you sure your client and server database encodings are UTF8?
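If in doubt, the current settings can be checked from a psql session, for example:
SHOW server_encoding;
SHOW client_encoding;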
Firstly, export the table named "x" from schema "public" on database "test" to a plain text SQL file:
pg_dump -U postgres -t public."x" test > x.sql
which creates the x.sql file that contains:
--
-- PostgreSQL database dump
--
SET statement_timeout = 0;
SET lock_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SET check_function_bodies = false;
SET client_min_messages = warning;
SET search_path = public, pg_catalog;
SET default_tablespace = '';
SET default_with_oids = false;
--
-- Name: x; Type: TABLE; Schema: public; Owner: postgres; Tablespace:
--
CREATE TABLE x (
first_two_letters character(3)
);
ALTER TABLE public.x OWNER TO postgres;
--
-- Data for Name: x; Type: TABLE DATA; Schema: public; Owner: postgres
--
COPY x (first_two_letters) FROM stdin;
سر
\.
--
-- PostgreSQL database dump complete
--
Secondly, import with:
psql -U postgres -d test -f x.sql
The table name should be quoted, as follows:
pg_dump -U postgres -t OldSchema."TableToCopy" OldDatabase | psql -U postgres -d NewDatabase
And I suggest you do the job in two steps:
Step 1
pg_dump -U postgres -t OldSchema."TableToCopy" OldDatabase > Table.sql
If step 1 goes OK, then do step 2.
Step 2
psql -U postgres -d NewDatabase -f Table.sql
I am trying to write a script to import a database schema from a remote machine that only accepts ssh connections to a local one.
I managed to do everything except keep the same encoding as the remote database.
I found out that the solution was using pg_dump with -C (create); that way I would be able to create the database with the same encoding. But I faced a problem... there is a tablespace in the remote database and I don't want to import it.
I know that recent versions of pg_dump already have the --no-tablespaces argument... but unlucky me, I'm not allowed to upgrade the Postgres version.
Could someone tell me a way to remove all the tablespace occurrences in an SQL dump, e.g. with sed or something?
Thanks a lot!
I used to switch tablespaces between installations by piping pg_dump through sed where I altered the TABLESPACE clause.
You can also just remove it and additionally remove CREATE TABLESPACE ... from the dump file with any editor and you are good to load it to another DB cluster.
I have long since moved on to newer versions where I can use the --no-tablespaces option. Depending on your setup, a shell command could look something like this on Linux - off the top of my head, only tested cursorily:
pg_dump -h 123.456.7.89 -p 5432 mydb \
| sed \
-e' /^CREATE TABLESPACE / d' \
-e 's/ *TABLESPACE .*;/;/' \
-e "s/SET default_tablespace = .*;/SET default_tablespace = '';/" \
| psql -p5432 mylocaldb
-e' /^CREATE TABLESPACE / d' ... delete lines beginning with "CREATE TABLESPACE ".
-e 's/ *TABLESPACE .*;/;/' ... trim the tablespace clause (always at the end of the line in pg_dump output) from CREATE TABLE or CREATE INDEX statements.
-e "s/SET default_tablespace = .*;/SET default_tablespace = '';" .. do away with any other default tablespace than the empty string - which signifies the default tablespace of the current db. Note the use of double quote ", so I can easily enter single quotes '.
If you know the name of the tablespace involved you can narrow this down. There is a theoretical possibility that a data line could start like one of the search terms. I have never encountered problems myself, though.
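For example, a narrowed version of the substitution might look something like this (a sketch; my_tblspc stands in for the actual tablespace name):
sed -e '/^CREATE TABLESPACE my_tblspc /d' -e 's/ *TABLESPACE my_tblspc;/;/'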
Check out a page like this for more info on sed.