How can I see the result of the last psql query? - postgresql

I executed a long-running Postgres query in the psql terminal. It displayed the result, but I pressed q too soon, exiting out of the result and back to the prompt.
Is there any way for me to see the result again without rerunning the query?

You've lost that output. The data the query touched is likely still cached ("hot") at the moment, though, since it was just requested (by you), so rerunning it should be faster than the first run.
In the future, you can use psql's \o [filename] meta-command to save query output to a local file, so you don't run into this issue again.
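For example, a minimal sketch (the path and query are placeholders):
-- redirect subsequent query output to a file instead of the screen
\o /tmp/last_query.txt
SELECT count(*) FROM some_big_table;
-- switch output back to the terminal
\o
You can then review the saved results at any time, e.g. with \! cat /tmp/last_query.txt or in any editor.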

Related

Automating Database Connection

For a homework assignment, I have a few steps I have to go through every single time I want to connect to the database, and it's become a really annoying, time-wasting ritual.
I've already automated part of it. However, my latest attempt at automating the last few commands hasn't been successful.
Initially, I set up a shortcut to a PuTTY terminal:
Create new Shortcut
Select "C:\Program Files\PuTTY" as the entry point (Start in)
Enter "C:\Program Files\PuTTY\putty.exe" <MY_USERNAME>#arcade.iro.umontreal.ca -pw <MY_PASSWORD> as the Target
Then after double-clicking this shortcut, I entered these two lines (to create and then execute a bash script):
echo "psql -h postgres && \c ift2935 && set search_path to inscriptions_devoir;" > sql.sh
. sql.sh
Eventually, my goal would be to simply write . sql.sh after opening my shortcut to be all set up and ready to go (and actually, maybe even that could be automated somehow with the shortcut?). However, as it is, my shell script only runs the psql -h postgres command, which successfully launches psql.
My question is:
How do I get the two other commands (\c ift2935 and set search_path to inscriptions_devoir;) to run automatically inside psql?
EDIT:
Forgot to mention: after the first command of my script executes, I can then type \q to leave psql, and then the terminal outputs this:
-bash: c: command not found
Which, I think, indicates that the shell pauses the script to actually run psql and, on exit, resumes it, moving on to the second command, which fails because \c means nothing as a shell command.
While connected to the database, run:
ift2935=> ALTER ROLE <MY_USERNAME> SET search_path TO inscriptions_devoir;
Here, <MY_USERNAME> is your database user. Unless PGUSER is set, this is usually the same as your operating system user, but you can always find it with SELECT current_user;.
Then the setting will automatically be active the next time you connect.
In your shell script, change the call to
psql -h postgres -d ift2935
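With that setting in place, your sql.sh (a minimal sketch, assuming the ALTER ROLE above has already been run once) shrinks to a single command:
#!/bin/sh
# search_path is applied automatically by the role setting on connect
psql -h postgres -d ift2935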
Alternatively, and slightly better in my opinion, is the following, more complicated procedure:
Edit the file .bash_profile in your home directory and add
export PGHOST=postgres
export PGDATABASE=ift2935
Then disconnect and reconnect (this file is executed when you start a login shell).
Instead of running . sql.sh, simply type psql, which is less cumbersome.
Off topic: It is widely held that industriousness is the motor of progress. Nothing could be farther from the truth. Laziness is the mother of invention, specifically laziness paired with curiosity. If you plan to go into the computer engineering business, I promise you a bright future.
I think you should try using the pgpass file.
https://www.postgresql.org/docs/current/libpq-pgpass.html
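A minimal sketch of what that could look like, assuming the database connection itself prompts for a password (the values are placeholders matching the shortcut above):
# ~/.pgpass, one entry per line: hostname:port:database:username:password
postgres:5432:ift2935:<MY_USERNAME>:<MY_PASSWORD>
libpq ignores the file unless it is readable only by you, so run chmod 600 ~/.pgpass after creating it.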

COPY command not returning row count

I have two DB instances both running PG 9.4
When I issue the COPY command in one, it returns the number of rows affected; however, in the second DB, which is set up the same way, it does not.
I see nothing different in the config that might affect this. The imports do not error and complete successfully on both instances.
The documentation states it should return a row count as long as it is not copying to stdout.
This line in the documentation looks pertinent, but I'm not sure it applies to my situation.
Do not confuse COPY with the psql instruction \copy. \copy invokes COPY FROM STDIN or COPY TO STDOUT, and then fetches/stores the data in a file accessible to the psql client. Thus, file accessibility and access rights depend on the client rather than the server when \copy is used.
The command I'm issuing is:
COPY [tablename] from '/var/lib/pgsql/datafile.csv'
At the moment I'm reduced to looking at PuTTY session variables, but I'm not sure this is the way to go.
Does anyone have any ideas as to why this may be happening?
When psql is running in quiet mode, it doesn't display these command tags.
Quiet mode is activated with the -q command-line option or with \set QUIET on
Example:
test=# copy test to '/tmp/foo';
COPY 8
test=# \set QUIET on
test=# copy test to '/tmp/foo';
test=#
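To check whether that is what's happening on the second instance (quiet mode may have been switched on by a -q flag in an alias or by a \set QUIET on line in ~/.psqlrc), you can inspect and reset the variable from inside psql, roughly like this:
-- show the current value of the QUIET variable (prints ":QUIET" if unset)
\echo :QUIET
-- turn quiet mode back off so command tags such as "COPY 8" reappear
\set QUIET off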

Suppress "current transaction is aborted…" messages in PostgreSQL

I have a very big SQL dump I'm working on. The overall structure looks like this:
BEGIN;
SET CONSTRAINTS ALL DEFERRED;
INSERT …;
-- … ~300K insert operations
INSERT …;
COMMIT;
The problem is that if there is an error in any single statement, that error is shown, and then the message current transaction is aborted, commands ignored until end of transaction block is generated for EACH statement that follows it.
Why does it behave like this? Is there a way to suppress these follow-on messages? It's enough to show the real error message and skip the rest of the transaction. I don't want to see ~300K meaningless error messages.
Do I need to structure my dump differently? Or is there a flag/option I can use?
Presumably you're using psql to send the queries to the server.
You may set the ON_ERROR_STOP built-in variable to on.
From https://www.postgresql.org/docs/current/static/app-psql.html:
ON_ERROR_STOP
By default, command processing continues after an error. When this
variable is set to on, processing will instead stop
immediately. In interactive mode, psql will return to the command
prompt; otherwise, psql will exit, returning error code 3 to
distinguish this case from fatal error conditions, which are
reported using error code 1.
It may be set from outside psql with psql -v ON_ERROR_STOP=on -f script.sql, or from inside the script or interactively with the meta-command \set ON_ERROR_STOP on.
pg_dump does not have an option to add this automatically to a dump (as far as I know, it doesn't emit any psql meta-command anyway, only pure SQL commands).
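In practice that means adding the variable when you feed the dump to psql. A minimal sketch, assuming the dump is in dump.sql and the target database is mydb: psql stops at the first failing statement, and since the script's own BEGIN is still open at that point, the transaction is rolled back when psql exits.
psql -v ON_ERROR_STOP=on -f dump.sql mydb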

Ending Postgres Query without leaving the command line - Get back to command line automatically

Whenever I run a Postgres query, it appears that you have to completely quit out of the command line afterwards.
I have seen it done where you can press Ctrl-C and be taken back to the psql command line, i.e. databasename=>. Additionally, if I am in the middle of viewing results and I press Ctrl-C, how can I have Postgres send me back to databasename=>?
Bonus:
Is there a way to script it so that if I type usedb databasename followed by psql, Postgres will know which database I am referring to and automatically connect me to it, so I don't have to type \connect databasename?
Once a Postgres query has run and returned its table of results in the psql command-line environment, it should drop you back at the prompt of the same database you ran the previous command from.
If you want to connect directly to a database from your terminal :
psql -d nameofdatabase
If you want to connect using a script, you can access Postgres by URL:
postgres://username:password@localhost/nameofdatabase
where localhost can be replaced by the IP or hostname of the database server if it's not on the same machine.
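For the bonus part, one possible approach (the usedb function below is my own illustration, not a built-in command) is a small shell function that exports PGDATABASE, which psql uses when no database name is given:
# add to ~/.bashrc or ~/.bash_profile
usedb() {
    export PGDATABASE="$1"
}
# usage:
#   usedb databasename
#   psql            # now connects to databasename automatically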
Instead of pressing Ctrl-C, press Ctrl-D. In Unix, Ctrl-D is the End-of-File (EOF) character. That is what will make psql quit, just as if you fed it a script on stdin and it got to the end. It works in many other REPLs too, like irb, rails console, python, R, bash, etc.
The reason Ctrl-C doesn't exit is so that you can use it to abort an individual command, e.g. a long-running query.
EDIT: Also, if you are viewing results and they are paged (they appear on a new screen and you can scroll up and down), you can get back to the psql prompt by typing q. That's because by default the pager used is just less. You can say man less to read more about it. Or experiment with it on any text file: less /etc/services.
Personally I find paging in psql annoying, so I turn it off by creating a file named ~/.psqlrc with this line:
\pset pager
(Also, sorry if you know this already: ~ is just an abbreviation for "my home directory". So ~/.psqlrc is the same as /home/whatever/.psqlrc.)
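For reference, a minimal ~/.psqlrc along those lines might look like this (note that \pset pager with no argument toggles the setting, while \pset pager off disables it unconditionally; psql skips this file when started with -X):
-- ~/.psqlrc: executed by psql at startup
\pset pager off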
Bonus: If you want to connect to a specific database, you can say psql -d foo or even just psql foo.
Just enter q to leave the paged result view and get back to the prompt, or \q to quit psql entirely.

How can I stop a Postgres script when it encounters an error?

Is there a way to specify that, when executing an SQL script, it should stop when it encounters the first error in the script? It usually continues, regardless of previous errors.
I think the solution of adding the following to .psqlrc is far from perfect:
\set ON_ERROR_STOP on
There is a much simpler and more convenient way: use psql with the parameter
psql -v ON_ERROR_STOP=1
It's better to also use the -X parameter, which turns off .psqlrc file usage.
Works perfectly for me
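Putting the two together, the call could look roughly like this (script.sql and mydb are placeholders):
psql -X -v ON_ERROR_STOP=1 -f script.sql mydb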
P.S. I found the solution in a great post by Peter Eisentraut. Thank you, Peter!
http://petereisentraut.blogspot.com/2010/03/running-sql-scripts-with-psql.html
I assume you are using psql; this might be handy to add to your ~/.psqlrc file:
\set ON_ERROR_STOP on
This will make it abort at the first error. If you don't have it, even with a transaction psql will keep executing your script but fail on everything until the end of the script.
And you probably want to use a transaction, as Paul said, which can also be done with psql --single-transaction ... if you don't want to alter the script.
So a complete example, with ON_ERROR_STOP in your .psqlrc:
psql --single-transaction --file /your/script.sql
It's not exactly what you want, but if you start your script with begin transaction; and end it with end transaction;, it will effectively skip everything after the first error, and then roll back everything it did before the error.
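A minimal sketch of that wrapping (the INSERTs stand in for whatever your script already contains):
BEGIN;
INSERT INTO t VALUES (1);
INSERT INTO t VALUES (2);
-- if any statement above failed, this COMMIT (a.k.a. END TRANSACTION) rolls back instead
COMMIT;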
I always like to reference the manual directly.
From the PostgreSQL Manual:
Exit Status
psql returns 0 to the shell if it finished normally, 1 if a fatal
error of its own occurs (e.g. out of memory, file not found), 2 if the
connection to the server went bad and the session was not interactive,
and 3 if an error occurred in a script and the variable
ON_ERROR_STOP was set.
By default, if the SQL code you are running on the PostgreSQL server raises an error, psql won't quit on that error. It will report it and continue. If, as mentioned above, you set ON_ERROR_STOP to on, then when psql catches an error in the SQL code it will exit and return 3 to the shell.
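That exit code is straightforward to act on from a wrapper script; a sketch, with placeholder script and database names:
psql -v ON_ERROR_STOP=on -f script.sql mydb
status=$?
if [ "$status" -eq 3 ]; then
    echo "SQL error: the script stopped partway through" >&2
elif [ "$status" -ne 0 ]; then
    echo "psql failed with exit code $status" >&2
fi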