Set array using psql interface - postgresql

I have a function that adds users to the database from an array, e.g.
psql -c "SELECT addUsersFromList(array['userA','userB','userC'])"
This works fine; however, I would prefer to be able to run this as a script, like so:
psql -f add_users.sql -v userlist=array['userA','userB','userC']
And add_users.sql:
SELECT addUsersFromList(:userlist);
When I execute the above psql command, I get an error:
psql:Scripts/add_users.sql:39: ERROR: column "userA" does not exist
This seems to be an issue with how I use the -v flag. I had a look at the Postgres documentation on -v, \set, and the Variables section on that same page, but could not find a way to assign an array.

Try to write the array as a string literal:
psql -f add_users.sql -v "userlist='{userA,userB,userC}'"
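With that quoting the single quotes are part of the variable's value, so after interpolation the line in add_users.sql effectively becomes (assuming, as in the question, that addUsersFromList takes a text[] parameter):
SELECT addUsersFromList('{userA,userB,userC}');
The untyped literal '{userA,userB,userC}' is then resolved to an array by the function's parameter type.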

Related

Update a PostgreSQL field from the content of a file

I have a file containing a value which should go into a field of a PostgreSQL table.
By searching a little, I found many answers, e.g. How can I update column values with the content of a file without interpreting it? or https://stackoverflow.com/a/14123513/6630397, with this kind of snippet, but it has to be run in a psql terminal:
\set content `cat /home/username/file.txt`
UPDATE table SET field = :'content' WHERE id=1;
It works, but is it possible to programmatically execute it in one shot, directly from a bash prompt, without manually entering the psql command line, e.g. something like:
$ psql -d postgres://postgres@localhost/mydatabase -c \
"UPDATE table SET field = :'the_file_content' WHERE id=1;"
?
There is also the -v argument, which seems promising, but I haven't been successful using it:
$ psql -d postgres://postgres@localhost/mydatabase \
-v content=`cat ${HOME}/file.txt` \
-c "UPDATE table SET field = :'content' WHERE id=1;"
I get thousands of psql: warning: extra command-line argument warnings; psql seems to "execute" each comma-separated string of the file as a pg command, which it shouldn't, of course: the file content, which consists of a single line, must be treated as a whole.
Doc PostgreSQL 14:
https://www.postgresql.org/docs/current/app-psql.html
How about reading the file content into a variable first and then using it?
content=$(<integer_infile); psql -p 5434 -c "update table set field = $content where id = 1;"
content=$(<text_infile); psql -p 5434 -c "update table set field = '$content' where id = 1;"
This at least works for me if the file contains an integer or text including spaces on a single line.
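If you would rather let psql quote the value for you, here is a sketch of an alternative (mydatabase, mytable and field stand in for the question's names): feed the statement to psql on standard input, where variable interpolation is performed, and reference the variable as :'content' so it is inserted as a properly quoted string literal.
psql -d mydatabase -v content="$(cat "$HOME/file.txt")" <<'SQL'
UPDATE mytable SET field = :'content' WHERE id = 1;
SQL
Quoting the command substitution ("$(cat ...)") avoids the word splitting that produced the extra command-line argument warnings.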

How to format result when query is executed from bash instead of PSQL shell?

I am familiar with the \x auto mode of psql and it works great when I run queries from inside the psql shell.
But I'm executing the query from a bash shell, and psql is running inside a Docker container.
How can I combine \x auto with a SELECT query in that case?
What I have already tried:
$ docker exec -it my_database psql -U iamuser -c "\x auto; SELECT * FROM mytable;"
Expanded display is used automatically.
\x: extra argument "select" ignored
\x: extra argument "*" ignored
\x: extra argument "from" ignored
\x: extra argument "mytable;" ignored
I also tried doing this, but no query results are shown.
$ docker exec -it my_database psql -U iamuser -c "\x auto \n SELECT * FROM mytable;"
Expanded display is used automatically.
Is it possible to achieve this? If yes, how?
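Untested sketch, assuming psql 9.6 or later: repeated -c options are allowed in a single invocation and run in the same session, so a display setting made by a backslash command in one -c stays in effect for the next:
docker exec -it my_database psql -U iamuser -c '\x auto' -c 'SELECT * FROM mytable;'
Passing -P expanded=auto instead of the first -c should achieve the same thing without a meta-command.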

postgres script silently pass without any result

I am trying to execute psql queries from the bash command line, passing the password in the following format:
set PGPASSWORD=rtttttul psql -U ostgres -h localhost -d postgres -c "select * from logs" -o output.txt
Somehow my queries are not giving any results. I have tried passing different queries or incorrect credentials, but the script still executes without any error.
If I don't pass the password and log in at the command prompt instead, everything works fine.
I want to know what basic thing I am missing above.
The command below worked:
PGPASSWORD=rtttttul psql -U ostgres -h localhost -d postgres -c "select * from logs" -o output.txt
Removing set at the start of the command fixed it. In bash, set is a shell builtin that sets shell options and positional parameters, so the original line never actually ran psql; writing PGPASSWORD=rtttttul directly in front of psql sets the variable in psql's environment for that one command.
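If several commands in the same shell session need the password, an alternative (sketched with the question's values; a ~/.pgpass file is the usual way to avoid putting the password on the command line at all) is to export the variable once:
export PGPASSWORD=rtttttul
psql -U ostgres -h localhost -d postgres -c "select * from logs" -o output.txt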

How can I specify the schema to run an sql file against in the Postgresql command line

I run scripts against my database like this...
psql -d myDataBase -a -f myInsertFile.sql
The only problem is I want to be able to specify in this command what schema to run the script against. I could call set search_path='my_schema_01' but the files are supposed to be portable. How can I do this?
You can create one file that contains the set schema ... statement and then include the actual file you want to run:
Create a file run_insert.sql:
set schema 'my_schema_01';
\i myInsertFile.sql
Then call this using:
psql -d myDataBase -a -f run_insert.sql
A more universal way is to set search_path (this should work in PostgreSQL 7.x and above):
SET search_path TO myschema;
Note that set schema myschema is an alias for the above command that is not available in 8.x.
See also: http://www.postgresql.org/docs/9.3/static/ddl-schemas.html
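If you want to keep the file itself untouched, the SET can also be issued from the command line; this is a sketch that assumes psql 9.6 or later, where -c and -f may be repeated and are executed in the same session:
psql -d myDataBase -a -c "SET search_path TO my_schema_01" -f myInsertFile.sql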
Main Example
The example below will run myfile.sql on database mydatabase using schema myschema.
psql "dbname=mydatabase options=--search_path=myschema" -a -f myfile.sql
The way this works is that the first argument to the psql command is the dbname argument. The docs mention that a connection string can be provided:
If this parameter contains an = sign or starts with a valid URI prefix
(postgresql:// or postgres://), it is treated as a conninfo string
The dbname keyword specifies the database to connect to and the options keyword lets you specify command-line options to send to the server at connection startup. Those options are detailed in the server configuration chapter. The option we are using to select the schema is search_path.
Another Example
The example below will connect to host myhost on database mydatabase using schema myschema. The = special character must be URL-escaped with the escape sequence %3D.
psql postgres://myuser@myhost?options=--search_path%3Dmyschema
The PGOPTIONS environment variable may be used to achieve this in a flexible way.
In a Unix shell:
PGOPTIONS="--search_path=my_schema_01" psql -d myDataBase -a -f myInsertFile.sql
If there are several invocations in the script or sub-shells that need the same options, it's simpler to set PGOPTIONS only once and export it.
PGOPTIONS="--search_path=my_schema_01"
export PGOPTIONS
psql -d somebase
psql -d someotherbase
...
or invoke the top-level shell script with PGOPTIONS set from the outside
PGOPTIONS="--search_path=my_schema_01" ./my-upgrade-script.sh
In a Windows CMD environment, set PGOPTIONS=value should work the same way.
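For example (untested sketch), in the same CMD console that will run psql:
set PGOPTIONS=--search_path=my_schema_01
psql -d myDataBase -a -f myInsertFile.sql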
I'm using something like this and it works very well :-)
(echo "set schema 'acme';" ; \
cat ~/git/soluvas-framework/schedule/src/main/resources/org/soluvas/schedule/tables_postgres.sql) \
| psql -Upostgres -hlocalhost quikdo_app_dev
Note: Linux/Mac/Bash only, though probably there's a way to do that in Windows/PowerShell too.
This works for me:
psql postgresql://myuser:password@myhost/my_db -f myInsertFile.sql
In my case, I wanted to add the schema to a file dynamically, so that whatever schema name the user provides on the CLI is the schema the SQL file runs with.
To do this, I replaced some text in the SQL file. First I added {{schema}} to the file like this:
CREATE OR REPLACE FUNCTION {{schema}}.usp_dailygaintablereportdata(
then replaced {{schema}} dynamically with the user-provided schema name with the help of the sed command:
sed -i "s/{{schema}}/$pgSchemaName/" $filename
result=$(psql -U $user -h $host -p $port -d $dbName -f "$filename" 2>&1)
sed -i "s/$pgSchemaName/{{schema}}/" $filename
First the replacement is done, then the target file is run, and then the replacement is reverted.
I was facing similar problems trying to do some data import into an intermediate schema (which we later move to the final one). As we rely on things like extensions (for example PostGIS), the "run_insert" SQL file did not fully solve the problem.
After a while, we found that, at least with Postgres 9.3, the solution is far easier... just create your SQL script always specifying the schema when referring to the table:
CREATE TABLE "my_schema"."my_table" (...);
COPY "my_schema"."my_table" (...) FROM stdin;
This way, using psql -f xxxxx works perfectly, and you don't need to change search_path or use intermediate files (and you won't hit extension schema problems).

How to use slash commands outside the database?

I am trying to run a query outside of the database. That is, I want to get the result without logging in to the database. I found the option (-c). Using that option, we can execute the query from outside the database:
test:~$ psql -U sat -c "select * from test.details";
It gives the output. I want to use that query in a crontab entry, so I have tried to store the output in a file:
test:~$ psql -U sat -c "select * from test.details \g sat";
Produced an error:
ERROR: syntax error at or near "\"
LINE 1: select * from test.details \g sat
How to do that?
This is not a slash, but a backslash.
Backslash is an escape character in PostgreSQL string literals, therefore you have to double it to get a single backslash into the actual data.
If you want to store the result of a query into a file from the command line, you have to use the -o command-line option, so your command becomes:
psql -o filename -U sat -c "select * from test.details";
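Since the goal is a crontab entry, a hypothetical example (schedule and output path invented for illustration) that runs the query every night and stores the result in a file:
0 2 * * * psql -U sat -o /home/sat/details.txt -c "select * from test.details"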
There is no such thing as a "query outside of the database" or "without logging in to the database".
You are trying to mix meta-commands of the psql client with SQL commands, which is strictly impossible. The backslash meta commands are interpreted by the psql client, SQL queries are interpreted by the database server.
Most meta-commands in psql are actually translated into (a series of) SQL queries to the database server. You can make psql print the commands it sends to the database engine if you start it up with the command option -E in interactive mode. Try:
psql -E mydb
And then execute any backslash command and observe the output. For the rest of your question @aleroot has already given good advice.