Executing psql queries from a file vs passing it through bash - postgresql

I am trying to pass queries to psql from a Python script:
PGPASSWORD=pass psql -U postgres -d postgres -h localhost -c "insert into table1 values(1,2); select * from table2;"
Suppose the second query (select * from table2) fails; then the first query is also not applied (I am not sure whether it is never applied or whether its effect is rolled back).
But if I put both queries in a file named file.sql and run
PGPASSWORD=pass psql -U postgres -d postgres -h localhost -f file.sql
then even if the second query fails, the first one is applied. Does the first method execute all the queries as one transaction, rolling the results back if one of them fails?

Yes, that is exactly what happens.
The argument to -c is sent to the server as a single request, so it runs as a single transaction.
The documentation says:
Each SQL command string passed to -c is sent to the server as a single request. Because of this, the server executes it as a single transaction even if the string contains multiple SQL commands, unless there are explicit BEGIN/COMMIT commands included in the string to divide it into multiple transactions.
You can use the -c option more than once if you don't want that.
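For example, a minimal sketch reusing the command from the question (assuming psql 9.6 or later, which accepts repeated -c): each -c string is sent as a separate request, so each runs in its own transaction and a failure in the select no longer undoes the insert.
# each -c below is sent, and committed, separately
PGPASSWORD=pass psql -U postgres -d postgres -h localhost \
    -c "insert into table1 values(1,2);" \
    -c "select * from table2;"
Alternatively, keep a single -c and put explicit BEGIN/COMMIT around each statement inside the string, as the quoted documentation describes.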

Related

PostgreSQL / psql meta-command silently fails and doesn't insert rows

I've created a SQL file that I run through the psql command; it roughly looks as follows:
truncate table my_table;
\set content `cat /workdir/test.json` insert into my_table values ('test_row', :'content');
The first line is somewhat irrelevant to the problem, except for the fact that it does print out "TRUNCATE TABLE", so psql is reading and running the SQL file correctly, at least initially. However, the row is never inserted and the table is always empty, yet no error message appears.
The JSON file contains a valid value (even if I pare it down to a bare {}). I've also tried passing the SQL command directly (just to cover my bases; with a single backslash it behaves the same, and with three it gives an invalid command error):
psql [...] -c "\\set content `cat /workdir/test.json` insert into my_table values ('test_row', :'content')"
Again, no output message and no new rows created. However, not using the \set meta-command does work, e.g.:
psql [...] -c "insert into my_table values ('test_row', '{}')"
It seems there's something it doesn't like about the \set meta-command, but without any error output I'm not sure what I'm doing wrong.
Both the script and the database are running on the same VM. That is, the script can reach the host via 'localhost', and the filesystem/filepaths should be the same, should that matter.
A psql meta-command (something that starts with a backslash) is terminated by the end of the line; you cannot have an SQL statement on the same line.
Write the \set in one line and the INSERT in another.
If you want to use the -c option of psql, use several -c options:
psql -c "\\set ..." -c "INSERT ..."

psql script execution not returning error

I am trying to execute multiple SQL scripts using psql. I created one master script that includes all the scripts to be executed, as below. I want all the scripts to succeed or fail together.
master.sql
BEGIN;
\i one.sql
\i two.sql
\i three.sql
COMMIT;
I am trying to check psql's exit code to determine whether it succeeded. When I check $? it always returns 0. I found that when I add ON_ERROR_STOP=1 to the command, it returns a proper error code. The problem with this approach is that if the error happens in three.sql, it does not roll back the changes made by one.sql and two.sql.
psql -U postgres -h localhost -d test -v ON_ERROR_STOP=1 -f master.sql
What would be the correct approach to find out whether the script executed successfully?
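A minimal sketch of one way to get both behaviours, assuming the explicit BEGIN and COMMIT are removed from master.sql: --single-transaction makes psql wrap the whole run in a single transaction, and ON_ERROR_STOP=1 stops at the first error, so psql exits non-zero and the still-open transaction is rolled back.
# master.sql now contains only the three \i lines, no BEGIN/COMMIT
psql -U postgres -h localhost -d test -v ON_ERROR_STOP=1 --single-transaction -f master.sql
echo $?   # 0 on success, 3 if an error occurred in the script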

psql, can't copy db content to another - cannot run inside a transaction block

I'd like to copy the content of my local database to my remote one (inside a docker container).
For some reason, it is more complicated than I expected:
When I try to copy the data to the remote one, I get this "ERROR: CREATE DATABASE cannot run inside a transaction block".
Ok... so I went into my docker container and added \set AUTOCOMMIT there. But I still get this error.
This is the command I did:
// backup
pg_dump -C -h localhost -U postgres woof | xz >backup.xz
and then in my remote computer:
xz -dc backup.xz | docker exec -i -u postgres waf-postgres psql --set ON_ERROR_STOP=on --single-transaction
But each time I get this "CREATE DATABASE cannot run inside a transaction block" error, no matter what I try, even if I set autocommit to "on".
Here is my problem: I don't know what a transaction block is, and I don't understand why copying one db to another needs to be such a pain: my remote db is empty. So why all the fuss, and why can't psql just do what I want?
My aim is just to copy my local db to the remote one.
What happens here is: you add a CREATE DATABASE statement to the dump with the -C option and then run psql with --single-transaction, so the contents of the script are wrapped in BEGIN; ... COMMIT;, inside which you can't use CREATE DATABASE.
So either remove -C and run psql against an existing database, or remove --single-transaction from the psql invocation. Make the decision based on what you really need...
from man pg_dump:
-C
--create
Begin the output with a command to create the database itself and reconnect to the created database. (With a script of this form, it doesn't matter which database in the destination installation you connect to before running the script.) If --clean is also specified, the script drops and recreates the target database before reconnecting to it.
from man psql:
--single-transaction
This option can only be used in combination with one or more -c and/or -f options. It causes psql to issue a BEGIN command before the first such option and a COMMIT command after the last one, thereby wrapping all the commands into a single transaction. This ensures that either all the commands complete successfully, or no changes are applied.
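A sketch of the two alternatives described above, reusing the asker's commands and assuming, for the second variant, that the woof database already exists on the remote side:
# Variant 1: keep -C in the dump, drop --single-transaction on restore
xz -dc backup.xz | docker exec -i -u postgres waf-postgres psql --set ON_ERROR_STOP=on
# Variant 2: dump without -C, restore into the existing database in one transaction
pg_dump -h localhost -U postgres woof | xz > backup.xz
xz -dc backup.xz | docker exec -i -u postgres waf-postgres psql -d woof --set ON_ERROR_STOP=on --single-transaction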

How to insert a result of a parallelized SELECT query into a table, in Postgresql?

According to https://www.postgresql.org/docs/current/static/when-can-parallel-query-be-used.html:
"Even when it is in general possible for parallel query plans to be generated, the planner will not generate them for a given query if any of the following are true:
The query writes any data or locks any database rows. If a query contains a data-modifying operation either at the top level or within a CTE, no parallel plans for that query will be generated. This is a limitation of the current implementation which could be lifted in a future release."
Indeed, when I try to insert the result of a parallel SELECT query into a table (either by SELECT ... INTO or by WITH ... SELECT ... INTO), the query is not executed as a parallel query.
My question is: is there any way to trick PostgreSQL so that a SELECT query is executed as a parallel query and its result is then inserted into a table?
There is a trick with the psql -o parameter, e.g.:
Step 1:
call psql -h localhost -d dbname -U username -c "select * from vw_FBigTable_extract" -o FBigTable_extract.csv -A -t -F ","
Step 2:
call psql -h localhost -d dbname -U username -c "COPY t_FBigTable_extract FROM 'FBigTable_extract.csv' WITH (FORMAT CSV, DELIMITER ',', HEADER FALSE, ENCODING 'windows-1252')"
Sometimes this works faster than the non-parallel approach.
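Note that the COPY ... FROM in step 2 reads the file on the database server. If the CSV is written on the client machine instead, psql's \copy meta-command streams it from the client side; a sketch with the same table and file names:
psql -h localhost -d dbname -U username -c "\copy t_FBigTable_extract FROM 'FBigTable_extract.csv' WITH (FORMAT CSV, DELIMITER ',', HEADER FALSE)"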

How to use slash commands outside the database?

I am trying to run a query outside of the database, that is, to get the result without logging in to the database. I found the -c option; using it we can execute a query from outside the database:
test:~$ psql -U sat -c "select * from test.details";
It gives the output. I want to use that query in a crontab entry, so I tried to store the output in a file:
test:~$ psql -U sat -c "select * from test.details \g sat";
Produced an error:
ERROR: syntax error at or near "\"
LINE 1: select * from test.details \g sat
How to do that?
This is not a slash, but a backslash.
Backslash is an escape character in PostgreSQL string literals, so you have to double it to get a single backslash into the actual data.
If you want to store the result of a query in a file from the command line, you have to use the -o command-line option, so your query becomes:
psql -o filename -U sathishkumar -c "select * from hospital_management.patient_details";
There is no such thing as a "query outside of the data base" or "without login to data base".
You are trying to mix meta-commands of the psql client with SQL commands, which is strictly impossible. The backslash meta-commands are interpreted by the psql client; SQL queries are interpreted by the database server.
Most meta-commands in psql are actually translated into (a series of) SQL queries to the database server. You can make psql print the commands it sends to the database engine if you start it up with the command option -E in interactive mode. Try:
psql -E mydb
And then execute any backslash command and observe the output. For the rest of your question #aleroot has already given good advice.
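For the crontab use case from the question, a minimal sketch combining -o with -c (the schedule and output path are placeholders, and authentication is assumed to be handled non-interactively, e.g. via a .pgpass file):
# run every night at 01:00 and write the result set to a file
0 1 * * * psql -U sat -o /home/sat/details.out -c "select * from test.details"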