PostgreSQL isset function

Is there any way to check whether a variable has already been set in my psql session?
Example:
\set table_name countries
\i queries.sql
queries.sql:
SELECT * FROM :table_name;
I want queries.sql to be callable independently, using some default table name I would specify.
Is this possible, or do I really need to create another SQL file through which I call the queries (\i)?
My use case is usage of my SQL queries both in pgTAP unit tests (with some sample table names) and independently.

You could check the current value with:
SELECT :'table_name';
You can set it on the call to psql with something like --set=table_name=countries on the psql command line.
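If your psql is version 10 or newer (an assumption; older versions lack \if), you can also test inside queries.sql itself whether the variable is defined, using the :{?name} syntax, and fall back to a default, roughly like this:
\if :{?table_name}
    \echo 'using table' :table_name
\else
    \set table_name countries
\endif
SELECT * FROM :table_name;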

Related

How to detect whether a PostgreSQL function is utilizing an index on the tables or not

I have created a PL/pgSQL table-returning function that executes a SELECT statement and uses the input parameter in the WHERE clause of the query.
I frame the statement dynamically and execute it like this: EXECUTE sqlStmt USING empID;
sqlStmt is a variable of data type text that has the SELECT query which joins 3 tables.
When I execute that query in pgAdmin and analyze it, I can see that index scans on the tables are used as expected. However, when I run EXPLAIN ANALYZE SELECT * FROM fn_getDetails(12), the output just says "Function Scan".
How do I know whether the table indexes are used? Other SO answers suggesting the auto_explain module did not provide details of the statements inside the function body, and I am unable to use PREPARE inside my function body.
Executing the SELECT statement directly takes almost the same time as calling the function, just a couple of milliseconds, but how can I tell whether the index was used?
auto_explain will certainly provide the requested information.
Set the following parameters:
shared_preload_libraries = 'auto_explain' # requires a restart
auto_explain.log_min_duration = 0 # log all statements
auto_explain.log_nested_statements = on # log statements in functions too
The last parameter is required for tracking SQL statements inside functions.
To activate the module this way, you need to restart the database server.
Of course, testing whether the index is used in a query on a small table won't give you a reliable result. You need about as much test data as you expect to have in reality.
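If it is just a one-off test and you have superuser access, the module can also be loaded for the current session only (a sketch; the plans of the inner statements end up in the server log, not in the client output):
LOAD 'auto_explain';
SET auto_explain.log_min_duration = 0;
SET auto_explain.log_nested_statements = on;
SELECT * FROM fn_getDetails(12);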

What does this select statement actually do?

I'm reviewing a log of executed PostgreSQL statements and stumbled upon one statement I can't fully understand. Can somebody explain what PostgreSQL actually does when such a query is executed? What is siq_query?
select *
from siq_query('', '21:1', '', '("my search string")', False, True, 'http://siqfindex:8080/storediq/findex')
I'm running PostgreSQL 9.2
siq_query(...) is a server-side function taking 7 input parameters (or more). It's not part of any standard Postgres distribution I know (certainly not mainline Postgres 9.2), so it has to be user-defined or part of some extension you installed. It does whatever is defined in the function, which can include basically anything your Postgres user is allowed to do. Unless it's a SECURITY DEFINER function; then it can do whatever the owner of the function is allowed to do.
The way it is called (SELECT * FROM) only makes sense if it returns multiple rows and/or columns, most likely a set of rows, making it a "set-returning function", which can be used almost like a table in SQL queries.
Since the function name is not schema-qualified, it has to reside in a visible schema. See:
How does the search_path influence identifier resolution and the "current schema"
Long story short, you need to see the function definition to know what it does exactly. You can use psql (\df+ siq_query), pgAdmin (browse and select it to see its definition in the SQL pane) or any other client tool to look it up. Or query the system catalog pg_proc directly:
SELECT * FROM pg_proc WHERE proname = 'siq_query';
Pay special attention to the column prosrc, which holds the function body for some languages like plpgsql.
There might be multiple variants with that name, since Postgres allows function overloading.
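If you want to see all overloads at once, a catalog query along these lines (regprocedure is a core Postgres type) lists each signature together with its source:
SELECT oid::regprocedure AS signature, prosrc
FROM   pg_proc
WHERE  proname = 'siq_query';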

Alias for a complete SQL statement? (in PostgreSQL psql)

In PostgreSQL 9.5 psql (for use within an interactive session), I would like to create an alias for a complete SQL statement, analogous to a shell alias. The objective is just to get the output printed on the screen.
If I could enable formatted server output (in Oracle terms) from within a stored procedure, it would look like this:
CREATE or replace FUNCTION print_my_table()
RETURNS void
AS $$
-- somehow enable output here
SELECT * from my_table;
$$ LANGUAGE SQL;
This would be invoked as print_my_table(); (as opposed to SELECT x FROM ...)
I know I can use 'RAISE NOTICE' to print from within a stored procedure, but to do that I would need to reimplement pretty-printing of a table.
Perhaps there is a completely different mechanism to do this?
(my_table stands for a complex SQL statement that collects server data accounting information, or a my_table() stored procedure returning a table)
EDIT
The solution provided by @Abelisto (using psql variables) enables the creation of aliases for arbitrary statements, beyond merely printing the result to the screen.
There are so-called internal variables in the psql utility which are replaced by their content (except inside string constants):
postgres=# \set foo 'select 1;'
postgres=# :foo
 ?column?
----------
        1
(1 row)
They can also be set with the command line option -v:
psql -v foo='select 1;' -v bar='select 2;'
Or create a text file like
\set foo 'select 1;'
\set bar 'select 2;'
\set stringinside 'select $$abc$$;'
and load it using the \i command.
Finally, you can create the file ~/.psqlrc (its purpose is similar to that of ~/.bashrc); its contents will be executed automatically each time psql starts.
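Applied to the question, a ~/.psqlrc entry could be a one-line sketch like this (my_table stands in for the real accounting query, as in the question):
\set print_my_table 'SELECT * FROM my_table;'
Typing :print_my_table at the psql prompt then runs the query and pretty-prints the result.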

Is it possible to use dynamic SQL or host variables in DB2 control center?

I need to test some prepared statements that run slowly.
The control center uses JDBC.
In DB2 there's the CREATE VARIABLE statement, but I guess it creates a variable on the server, not a prepared statement parameter.
I need something like these:
select * from sysibm.sysdummy1 where 1=?;
SQL0313N The number of host variables in the EXECUTE or OPEN statement is not equal to the number of values required.
select * from sysibm.sysdummy1 where 1=:b1;
SQL0312N The host variable "b1" is used in a dynamic SQL statement, a view definition, or a trigger definition.
You can create a bash/batch script that calls the DB2 command line processor (CLP):
db2 connect to mydb
export b1=value
db2 "select * from sysibm.sysdummy1 where 1=$b1"
The shell will substitute the content of the variable into the statement before db2 executes it.
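For example, a small wrapper script (the file name and parameter here are made up for illustration) could take the value as its first argument:
#!/bin/sh
# run_query.sh -- hypothetical wrapper; the value is passed as the first argument
db2 connect to mydb
db2 "select * from sysibm.sysdummy1 where 1=$1"
db2 connect reset
Calling ./run_query.sh 1 then substitutes 1 into the statement before it is sent to DB2.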

Command to read a file and execute script with psql

I am using PostgreSQL 9.0.3. I have an Excel spreadsheet with lots of data to load into a couple of tables on Windows.
I have written a script to take the data from the input file and insert it into some 15 tables. This can't be done with COPY or import. I named the input file DATALD.
I found the psql option -d to point at the database and -f to run the SQL script. But I need to know how to feed the input file along with the script so that the data gets inserted into the tables.
For example this is what I have done:
begin
    for emp in (select distinct w_name from DATALD where w_name <> 'w_name') loop
        -- insert in a loop
        INSERT INTO tblemployer (id_employer, employer_name, date_created, created_by)
        VALUES (employer_id, emp.w_name, now(), 'SYSTEM1');
    end loop;
end;
Can someone please help?
For an SQL script you must ...
either have the data inlined in your script (in the same file),
or use COPY to import the data into Postgres.
I suppose you use a temporary staging table, since the format doesn't seem to fit the target tables. Code example:
How to bulk insert only new rows in PostgreSQL
There are other options like pg_read_file(). But:
Use of these functions is restricted to superusers.
They are intended for special purposes.
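A minimal sketch of the staging-table approach, assuming the spreadsheet has been exported to CSV first (the file name and staging column are hypothetical, and id_employer is assumed to be filled by a default or sequence):
-- staging table roughly matching the exported CSV
CREATE TEMP TABLE staging (w_name text);
-- client-side copy, so no superuser privileges are needed
\copy staging FROM 'DATALD.csv' CSV HEADER
-- fill the target table from the staged data
INSERT INTO tblemployer (employer_name, date_created, created_by)
SELECT DISTINCT w_name, now(), 'SYSTEM1'
FROM   staging;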