In MS SQL Server, I create my scripts to use customizable variables:
DECLARE @somevariable int
SELECT @somevariable = -1
INSERT INTO foo VALUES ( @somevariable )
I'll then change the value of @somevariable at runtime, depending on the value that I want in the particular situation. Since it's at the top of the script it's easy to see and remember.
How do I do the same with the PostgreSQL client psql?
Postgres variables are created through the \set command, for example ...
\set myvariable value
... and can then be substituted, for example, as ...
SELECT * FROM :myvariable.table1;
... or ...
SELECT * FROM table1 WHERE :myvariable IS NULL;
edit: As of psql 9.1, variables can be expanded in quotes as in:
\set myvariable value
SELECT * FROM table1 WHERE column1 = :'myvariable';
In older versions of the psql client:
... If you want to use the variable as the value in a conditional string query, such as ...
SELECT * FROM table1 WHERE column1 = ':myvariable';
... then you need to include the quotes in the variable itself as the above will not work. Instead define your variable as such ...
\set myvariable 'value'
However, if, like me, you ran into a situation in which you wanted to make a string from an existing variable, I found the trick to be this ...
\set quoted_myvariable '\'' :myvariable '\''
Now you have both a quoted and unquoted variable of the same string! And you can do something like this ....
INSERT INTO :myvariable.table1 SELECT * FROM table2 WHERE column1 = :quoted_myvariable;
One final word on PSQL variables:
They don't expand if you enclose them in single quotes in the SQL statement.
Thus this doesn't work:
SELECT * FROM foo WHERE bar = ':myvariable'
To expand to a string literal in a SQL statement, you have to include the quotes in the variable set. However, the variable value already has to be enclosed in quotes, which means that you need a second set of quotes, and the inner set has to be escaped. Thus you need:
\set myvariable '\'somestring\''
SELECT * FROM foo WHERE bar = :myvariable
EDIT: starting with PostgreSQL 9.1, you may write instead:
\set myvariable somestring
SELECT * FROM foo WHERE bar = :'myvariable'
You can try to use a WITH clause.
WITH vars AS (SELECT 42 AS answer, 3.14 AS appr_pi)
SELECT t.*, vars.answer, t.radius*vars.appr_pi
FROM some_table AS t, vars;
Specifically for psql, you can also pass variables from the command line with -v. Here's a usage example:
$ psql -v filepath=/path/to/my/directory/mydatafile.data regress
regress=> SELECT :'filepath';
?column?
---------------------------------------
/path/to/my/directory/mydatafile.data
(1 row)
Note that the colon is unquoted, then the variable name itself is quoted. Odd syntax, I know. This only works in psql; it won't work in (say) PgAdmin-III.
This substitution happens during input processing in psql, so you can't (say) define a function that uses :'filepath' and expect the value of :'filepath' to change from session to session. It'll be substituted once, when the function is defined, and then will be a constant after that. It's useful for scripting but not runtime use.
FWIW, the real problem was that I had included a semicolon at the end of my \set command:
\set owner_password 'thepassword';
The semicolon was interpreted as an actual character in the variable:
\echo :owner_password
thepassword;
So when I tried to use it:
CREATE ROLE myrole LOGIN UNENCRYPTED PASSWORD :owner_password NOINHERIT CREATEDB CREATEROLE VALID UNTIL 'infinity';
...I got this:
CREATE ROLE myrole LOGIN UNENCRYPTED PASSWORD thepassword; NOINHERIT CREATEDB CREATEROLE VALID UNTIL 'infinity';
That not only failed to set the quotes around the literal, but split the command into 2 parts (the second of which was invalid as it started with "NOINHERIT").
The moral of this story: PostgreSQL "variables" are really macros used in text expansion, not true values. I'm sure that comes in handy, but it's tricky at first.
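For completeness, here is a corrected sketch of the same commands, with the trailing semicolon dropped and psql adding the quotes via :'owner_password' (psql 9.1+; the UNENCRYPTED keyword is kept only because the original example used it, and it was removed in PostgreSQL 10):
\set owner_password thepassword
CREATE ROLE myrole LOGIN UNENCRYPTED PASSWORD :'owner_password' NOINHERIT CREATEDB CREATEROLE VALID UNTIL 'infinity';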
PostgreSQL (since version 9.0) allows anonymous code blocks in any of the supported server-side scripting languages:
DO '
DECLARE somevariable int = -1;
BEGIN
INSERT INTO foo VALUES ( somevariable );
END
' ;
http://www.postgresql.org/docs/current/static/sql-do.html
Since everything is inside a string, any external string variables substituted into it need to be escaped and quoted twice. Using dollar quoting instead will not give full protection against SQL injection.
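For reference, a sketch of the same block written with dollar quoting, which avoids the outer single quotes (foo is the table from the question):
DO $$
DECLARE
    somevariable int := -1;
BEGIN
    INSERT INTO foo VALUES ( somevariable );
END
$$;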
You need to use one of the procedural languages such as PL/pgSQL, not the SQL language.
In PL/pgSQL you can use variables right in SQL statements.
For single quotes you can use the quote_literal() function.
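Here is a minimal sketch of quote_literal() in PL/pgSQL; the value and the generated statement are made up for illustration:
DO $$
DECLARE
    somevalue text := 'O''Reilly';
    sql       text;
BEGIN
    -- quote_literal() wraps the value in single quotes and escapes any embedded ones
    sql := 'SELECT ' || quote_literal(somevalue);
    RAISE NOTICE 'generated statement: %', sql;   -- generated statement: SELECT 'O''Reilly'
END
$$;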
I solved it with a temp table.
CREATE TEMP TABLE temp_session_variables (
"sessionSalt" TEXT
);
INSERT INTO temp_session_variables ("sessionSalt") VALUES (current_timestamp || RANDOM()::TEXT);
This way, I had a "variable" I could use over multiple queries, that is unique for the session. I needed it to generate unique "usernames" while still not having collisions if importing users with the same user name.
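For example, a later query in the same session can read the value back and build on it (a sketch; the prefix is made up):
SELECT 'someuser' || "sessionSalt" AS unique_username
FROM temp_session_variables;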
Another approach is to (ab)use the PostgreSQL GUC mechanism to create variables. See this prior answer for details and examples.
You declare the GUC in postgresql.conf, then change its value at runtime with SET commands and get its value with current_setting(...).
I don't recommend this for general use, but it could be useful in narrow cases like the one mentioned in the linked question, where the poster wanted a way to provide the application-level username to triggers and functions.
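A minimal sketch of the idea, assuming a custom setting named myapp.username (on current versions any dotted name can be SET without declaring it in postgresql.conf first):
SET myapp.username = 'alice';
SELECT current_setting('myapp.username');  -- returns 'alice'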
I've found this question and the answers extremely useful, but also confusing. I had lots of trouble getting quoted variables to work, so here is the way I got it working:
\set deployment_user username -- username
\set deployment_pass '\'string_password\''
ALTER USER :deployment_user WITH PASSWORD :deployment_pass;
This way you can define the variable in one statement. When you use it, single quotes will be embedded into the variable.
NOTE! When I put a comment after the quoted variable it got sucked in as part of the variable when I tried some of the methods in other answers. That was really screwing me up for a while. With this method comments appear to be treated as you'd expect.
I really miss that feature. The only way to achieve something similar is to use functions.
I have used it in two ways:
Perl functions that use the $_SHARED variable
storing your variables in a table
Perl version:
CREATE FUNCTION var(name text, val text) RETURNS void AS $$
$_SHARED{$_[0]} = $_[1];
$$ LANGUAGE plperl;
CREATE FUNCTION var(name text) RETURNS text AS $$
return $_SHARED{$_[0]};
$$ LANGUAGE plperl;
Table version:
CREATE TABLE var (
sess bigint NOT NULL,
key varchar NOT NULL,
val varchar,
CONSTRAINT var_pkey PRIMARY KEY (sess, key)
);
CREATE FUNCTION var(key varchar, val anyelement) RETURNS void AS $$
    DELETE FROM var WHERE sess = pg_backend_pid() AND key = $1;
    INSERT INTO var (sess, key, val) VALUES (pg_backend_pid(), $1, $2::varchar);
$$ LANGUAGE sql;
CREATE FUNCTION var(varname varchar) RETURNS varchar AS $$
    SELECT val FROM var WHERE sess = pg_backend_pid() AND key = $1;
$$ LANGUAGE sql;
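Example usage of the table version (a sketch):
SELECT var('current_user_name', 'alice'::text);  -- set
SELECT var('current_user_name');                 -- get, returns 'alice'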
Notes:
plperlu is faster than plperl
pg_backend_pid() is not the best session identifier; consider using the pid combined with backend_start from pg_stat_activity (see the query sketch after these notes)
the table version is also problematic because you have to clean it up occasionally (without deleting the variables of currently active sessions)
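A sketch of the more robust session identifier mentioned in the notes:
SELECT pid, backend_start
FROM pg_stat_activity
WHERE pid = pg_backend_pid();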
Variables in psql suck. If you want to set an integer variable, you have to enter the value, press return, and put the terminating semicolon on the next line. Observe:
Let's say I want to declare an integer variable my_var and insert it into a table test:
Example table test:
thedatabase=# \d test;
Table "public.test"
Column | Type | Modifiers
--------+---------+---------------------------------------------------
id | integer | not null default nextval('test_id_seq'::regclass)
Indexes:
"test_pkey" PRIMARY KEY, btree (id)
Clearly, nothing in this table yet:
thedatabase=# select * from test;
id
----
(0 rows)
We declare a variable. Notice how the semicolon is on the next line!
thedatabase=# \set my_var 999
thedatabase=# ;
Now we can insert. We have to use the slightly odd :'my_var' syntax:
thedatabase=# insert into test(id) values (:'my_var');
INSERT 0 1
It worked!
thedatabase=# select * from test;
id
-----
999
(1 row)
Explanation:
So... what happens if we put the semicolon on the same line as the \set instead? Have a look:
We declare my_var with the semicolon on the same line.
thedatabase=# \set my_var 999;
Let's select my_var.
thedatabase=# select :'my_var';
?column?
----------
999;
(1 row)
WTF is that? It's not an integer, it's the string '999;'!
thedatabase=# select 999;
?column?
----------
999
(1 row)
I've posted a new solution for this on another thread.
It uses a table to store variables, and it can be updated at any time. A static IMMUTABLE getter function is dynamically created (by another function), triggered by updates to your table. You get nice table storage plus the blazing-fast speed of an immutable getter.
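A rough sketch of that idea follows; the names here are mine, not the linked answer's, and the regeneration step could be wired into a trigger on the table:
CREATE TABLE app_settings (key text PRIMARY KEY, value text);
INSERT INTO app_settings VALUES ('answer', '42');
-- Regenerate an IMMUTABLE getter whose body has the current value baked in
DO $$
BEGIN
    EXECUTE format(
        'CREATE OR REPLACE FUNCTION setting_answer() RETURNS text AS %L LANGUAGE sql IMMUTABLE',
        format('SELECT %L::text', (SELECT value FROM app_settings WHERE key = 'answer'))
    );
END
$$;
SELECT setting_answer();  -- 42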
My SQL script contains the following:
\set test 'some value'
DO $$DECLARE
v_test text:= :'test';
BEGIN
RAISE NOTICE 'test var is %',v_test;
END$$;
I get a syntax error when trying to evaluate the value of test:
ERROR: syntax error at or near ":"
Ideally I'd like to have an anonymous plpgsql block living in a file which will then get called from a shell script using a set of environment variables.
The explanation is, according to the manual:
Variable interpolation will not be performed within quoted SQL literals and identifiers.
The body of the DO statement is a (dollar-quoted) string. So no interpolation inside the string.
Since it must be a literal string, you can also not concatenate strings on the fly. The manual:
This must be specified as a string literal, just as in CREATE FUNCTION.
But you can concatenate the string and then execute it.
\set [ name [ value [ ... ] ] ]
Sets the psql variable name to value, or if more than one value is
given, to the concatenation of all of them.
Bold emphasis mine. You just have to get the quoting right:
test=# \set test 'some value'
test=# \set code 'DECLARE v_test text := ' :'test' '; BEGIN RAISE NOTICE ''test var is: %'', v_test; END'
test=# DO :'code';
NOTICE: test var is: some value
DO
test=#
But I would rather create a (temporary) function and pass the value as parameter (where psql interpolation works). Details in this related answer on dba.SE:
CREATE SEQUENCE using expressions with psql variables for parameters
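For illustration, a minimal sketch of that alternative (the function name is made up); the psql interpolation works here because :'test' appears outside any quoted literal:
\set test 'some value'
CREATE FUNCTION pg_temp.show_test(v_test text) RETURNS void AS
$func$
BEGIN
    RAISE NOTICE 'test var is %', v_test;
END
$func$ LANGUAGE plpgsql;
SELECT pg_temp.show_test(:'test');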
I'm trying to clean out excessive trailing zeros. I used the following query...
UPDATE _table_ SET _column_=trim(trailing '00' FROM '_column_');
...and I received the following error:
ERROR: column "_column_" is of type numeric but expression is of type text
I've played around with the quotes, since that's usually what it boils down to for text versus numeric, but without any luck.
The CREATE TABLE syntax:
CREATE TABLE _table_ (
id bigint NOT NULL,
x bigint,
y bigint,
_column_ numeric
);
You can cast the arguments from and the result back to numeric:
UPDATE _table_ SET _column_=trim(trailing '00' FROM _column_::text)::numeric;
Also note that you don't quote column names with single quotes as you did.
Postgres version 13 now comes with the trim_scale() function:
UPDATE _table_ SET _column_ = trim_scale(_column_);
trim takes string parameters, so _column_ has to be cast to a string (varchar for example). Then, the result of trim has to be cast back to numeric.
UPDATE _table_ SET _column_=trim(trailing '00' FROM _column_::varchar)::numeric;
Another (arguably more consistent) way to clean out the trailing zeroes from a NUMERIC field would be to use something like the following:
UPDATE _table_ SET _column_ = CAST(to_char(_column_, 'FM999999999990.999999') AS NUMERIC);
Note that you would have to modify the FM pattern to match the maximum expected precision and scale of your _column_ field. For more details on the FM pattern modifier and the to_char(...) function, see the PostgreSQL docs on data type formatting functions.
Edit: Also, see the following post on the gnumed-devel mailing list for a longer and more thorough explanation on this approach.
Be careful with all the answers here. Although this looks like a simple problem, it's not.
If you have pg 13 or higher, you should use trim_scale (there is an answer about that already). If not, here is my "Polyfill":
DO $x$
BEGIN
    IF NOT EXISTS (SELECT 1 FROM pg_proc WHERE proname = 'trim_scale') THEN
        CREATE FUNCTION trim_scale(numeric) RETURNS numeric AS $$
            SELECT CASE WHEN trim($1::text, '0')::numeric = $1
                        THEN trim($1::text, '0')::numeric
                        ELSE $1 END
        $$ LANGUAGE SQL;
    END IF;
END;
$x$;
And here is a query for testing the answers:
WITH test as (SELECT unnest(string_to_array('1|2.0|0030.00|4.123456000|300000','|'))::numeric _column_)
SELECT _column_ original,
trim(trailing '00' FROM _column_::text)::numeric accepted_answer,
CAST(to_char(_column_, 'FM999999999990.999') AS NUMERIC) another_fancy_one,
CASE WHEN trim(_column_::text, '0')::numeric = _column_ THEN trim(_column_::text, '0')::numeric ELSE _column_ END my FROM test;
Well... it looks like I'm just trying to show the flaws of the earlier answers here, but I can't come up with other test cases. Maybe you can write more, if you can think of any.
I like short syntax instead of fancy SQL keywords, so I always go with :: over CAST, and with function calls taking comma-separated arguments over constructs like trim(trailing '00' FROM _column_). But that's personal taste only; you should check your company or team standards (and fight to change them XD)
I have a problem with an array parameter going into a plpgsql function. My code works in PostgreSQL 8.3, but fails when called on a 9.2.1 server.
I have written a dummy function that shows the problem. This is based on some of the first plpgsql code I ever wrote, and I know it is ugly. So if it is not possible to escape my quotes in a way that works for both server versions, I am open to suggestions on rewriting this code so that it does. Actually, I am open to suggestions no matter what; I am not too good at plpgsql.
So here's the dummy code that demonstrates the problem:
CREATE OR REPLACE FUNCTION get_test(collection text[])
RETURNS text AS
$BODY$
DECLARE
counter int8; directive text; condition text; querytype text;
BEGIN
counter = array_lower(collection, 1);
WHILE (counter <= array_upper(collection, 1)) LOOP
SELECT INTO directive "?column?"[counter] FROM (SELECT collection) AS foo ;
counter = counter + 1;
SELECT INTO condition "?column?"[counter] FROM (SELECT collection) AS foo ;
counter = counter + 1;
SELECT into querytype "?column?"[counter] FROM (SELECT collection) AS foo ;
counter = counter + 1;
END LOOP;
RETURN 'dummy';
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
ALTER FUNCTION get_test(text[]) OWNER TO postgres;
The collection parameter is built in sets of three (a simple string, a SQL condition, and another string). Here is an example call that works in 8.3 and fails in 9.2:
select * from get_test('{"dynamic","(tr.PROJECT_NAME = \'SampleProject\')","1"}')
This gives back 'dummy' as expected on 8.3, but fails on 9.2 with:
ERROR: syntax error at or near "SampleProject"
The caret points to the S of SampleProject, right after the single quote, so I know I am not escaping the single quotes in the embedded SQL correctly.
I have tried this without the backslashes in front of the single quotes, with two backslashes, and finally with two single quotes, all to no avail. The original code is called by a Java client, but I have been testing this under pgadmin3 (version 1.16), connecting to two servers of the versions in question, trying different things.
Any ideas as to how I can make this call work?
The most likely reason is a different setting for standard_conforming_strings.
The SQL-standard way (and the one recommended in PostgreSQL) to escape single quotes inside a single-quoted string literal is to double them. Try this call:
SELECT *
FROM get_test('{dynamic,(tr.PROJECT_NAME = ''SampleProject''),1}'::text[])
OR:
SELECT *
FROM get_test('{dynamic,"(tr.PROJECT_NAME = ''SampleProject'')",1}'::text[])
Or you can resort to dollar quoting to avoid multiple layers of escaped quotes:
SELECT *
FROM get_test($${dynamic,"(tr.PROJECT_NAME = 'SampleProject')",1}$$::text[])
Or even an ARRAY constructor:
SELECT *
FROM get_test(ARRAY['dynamic','(tr.PROJECT_NAME = ''SampleProject'')','1'])
I'm trying to get a psql script running that uses variables, as in the example below, without declaring functions and having to call them.
DECLARE
result TEXT;
BEGIN
SELECT INTO result name
FROM test;
RAISE NOTICE result;
END;
The table test only has one row and one column. Is this possible without having to wrap the script inside a function? That would make it easier to call the script from, say, the command line.
Thanks guys.
You can use DO to create and execute an anonymous function:
DO executes an anonymous code block, or in other words a transient anonymous function in a procedural language.
Something like this:
do $$
declare result text;
begin
select name into result from test;
raise notice '%', result;
end;
$$;
I also fixed your raise notice.
If you just want to dump the single value from the table to the standard output in a minimal format (i.e. easy to parse), then perhaps --tuples-only will help:
-t
--tuples-only
Turn off printing of column names and result row count footers, etc. This is equivalent to the \t command.
So you could say things like this from the shell:
result=$(echo 'select name from test;' | psql -t ...)
I once read this entry on the mailing list: http://archives.postgresql.org/pgsql-hackers/2005-06/msg01481.php
SELECT *
FROM foo_func(
c => current_timestamp::timestamp with time zone,
a => 2,
b => 5
);
Now I need this kind of solution where I can pass an associative array argument to a function.
Do I need to make a dummy table and then use that table as the argument type, or is there a straightforward fix for this, or has this hack been implemented?
Or can I emulate the same thing using PL/Python?
Here are the steps for an answer with hstore and PG-8.4 for debian.
1) if not installed already, install the contrib package
# apt-get install postgresql-contrib-8.4
2) install hstore in the relevant database
$ psql -U postgres -d dbname
# \i /usr/share/postgresql/8.4/contrib/hstore.sql
2bis) If the plpgsql language is not installed, install it (still inside psql as postgres user)
# CREATE LANGUAGE plpgsql;
3) create the function taking hstore as input. Here's an example in plpgsql that just enumerates the keys and values:
CREATE OR REPLACE function enum_hstore(in_h hstore) returns void
as $$
declare
kv record;
begin
for kv in select * from (select (each(in_h)).*) as f(k,v) loop
raise notice 'key=%,value=%',kv.k,kv.v;
end loop;
end
$$ language plpgsql;
4) call the function. Since the keys and values are of type text, it may be necessary to cast non-literal entries to text, such as the current_timestamp call in the question. Example:
select enum_hstore(
hstore('c',current_timestamp::text) ||
'a=>2,b=>5'::hstore
);
The result to expect from the above function:
NOTICE: key=a,value=2
NOTICE: key=b,value=5
NOTICE: key=c,value=2012-04-08 16:12:59.410056+02
This is implemented in version 9.0:
4.3.2. Using named notation
In named notation, each argument's name is specified using := to
separate it from the argument expression. For example:
SELECT concat_lower_or_upper(a := 'Hello', b := 'World');
concat_lower_or_upper
-----------------------
hello world
(1 row)