How to resolve syntax error with dynamic PostgreSQL statement - postgresql

Suppose I have a table called my_table that has 3 columns
  id  | name | state
------+------+-------
 1020 | 'A ' | 'VA'
 1021 | 'B'  | 'VA'
 1022 | 'C'  | 'NC'
I am having an issue with a simple dynamic statement I am trying to run. I don't see anything wrong with this, do you?
EXECUTE 'SELECT * FROM my_schema.my_table WHERE id = '|| 1021;
I am just running this standalone. It should execute. Instead, I get ERROR: syntax error at or near "'SELECT....

EXECUTE in plain SQL is used to run prepared statements.
Straight quote from the docs:
EXECUTE is used to execute a previously prepared statement. Since prepared statements only exist for the duration of a session, the prepared statement must have been created by a PREPARE statement executed earlier in the current session.
You're probably mixing it up with the PL/pgSQL EXECUTE, which is totally different.
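For illustration, here is a minimal PL/pgSQL sketch of the kind of dynamic statement the question seems to be after; the DO block, format() and USING are an assumed approach, not what the asker ran:

DO $$
DECLARE
    rec my_schema.my_table%ROWTYPE;
BEGIN
    -- PL/pgSQL EXECUTE runs a dynamic SQL string;
    -- format() with %I quotes the identifiers and USING passes the value safely
    EXECUTE format('SELECT * FROM %I.%I WHERE id = $1', 'my_schema', 'my_table')
        INTO rec
        USING 1021;
    RAISE NOTICE 'name = %, state = %', rec.name, rec.state;
END $$;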

Related

Last Accessed Date and Last modified Date in postgresql [duplicate]

On a development server I'd like to remove unused databases. To do that, I need to know whether a database is still used by someone or not.
Is there a way to get last access or modification date of given database, schema or table?
You can do it by checking the last modification time of the table's file.
In PostgreSQL, every table corresponds to one or more OS files, like this:
select relfilenode from pg_class where relname = 'test';
The relfilenode is the file name of the table "test". You can then find that file in the database's directory.
In my test environment:
cd /data/pgdata/base/18976
ls -l -t | head
The last command lists the files ordered by last modification time.
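If you'd rather stay inside SQL, a rough equivalent (assuming a superuser, since pg_stat_file requires one by default, and a table small enough to live in a single segment file) would be:

-- mtime of the table's main data file; the path is relative to the data directory
SELECT (pg_stat_file(pg_relation_filepath('test'))).modification;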
There is no built-in way to do this - and all the approaches that check the file mtime described in other answers here are wrong. The only reliable option is to add triggers to every table that record a change to a single change-history table, which is horribly inefficient and can't be done retroactively.
If you only care about "database used" vs "database not used" you can potentially collect this information from the CSV-format database log files. Detecting "modified" vs "not modified" is a lot harder; consider SELECT writes_to_some_table(...).
If you don't need to detect old activity, you can use pg_stat_database, which records activity since the last stats reset. e.g.:
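The expanded-display record below is what psql (with \x on) shows for a query along these lines:

SELECT * FROM pg_stat_database;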
-[ RECORD 6 ]--+------------------------------
datid          | 51160
datname        | regress
numbackends    | 0
xact_commit    | 54224
xact_rollback  | 157
blks_read      | 2591
blks_hit       | 1592931
tup_returned   | 26658392
tup_fetched    | 327541
tup_inserted   | 1664
tup_updated    | 1371
tup_deleted    | 246
conflicts      | 0
temp_files     | 0
temp_bytes     | 0
deadlocks      | 0
blk_read_time  | 0
blk_write_time | 0
stats_reset    | 2013-12-13 18:51:26.650521+08
so I can see that there has been activity on this DB since the last stats reset. However, I don't know anything about what happened before the stats reset, so if I had a DB showing zero activity since a stats reset half an hour ago, I'd know nothing useful.
PostgreSQL 9.5 lets us track the timestamp of the last commit.

1. Check whether track_commit_timestamp is on or off using the following query:

show track_commit_timestamp;

2. If it returns "on", go to step 3; otherwise modify postgresql.conf:

cd /etc/postgresql/9.5/main/
vi postgresql.conf

Change

track_commit_timestamp = off

to

track_commit_timestamp = on

then restart PostgreSQL (or the system) and repeat step 1.

3. Use the following queries to track the last commit:

SELECT pg_xact_commit_timestamp(xmin), * FROM YOUR_TABLE_NAME;
SELECT pg_xact_commit_timestamp(xmin), * FROM YOUR_TABLE_NAME where COLUMN_NAME=VALUE;
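To boil that down to a single "last modified" value per table, a small sketch (the table name is a placeholder, and rows committed before track_commit_timestamp was enabled simply return NULL):

-- most recent commit timestamp among the table's visible rows
SELECT max(pg_xact_commit_timestamp(xmin)) AS last_modified
FROM your_table_name;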
My way to get the modification date of my tables:
Python Function
CREATE OR REPLACE FUNCTION py_get_file_modification_timestamp(afilename text)
RETURNS timestamp without time zone AS
$BODY$
import os
import datetime
return datetime.datetime.fromtimestamp(os.path.getmtime(afilename))
$BODY$
LANGUAGE plpythonu VOLATILE
COST 100;
SQL Query
SELECT
    schemaname,
    tablename,
    py_get_file_modification_timestamp('*postgresql_data_dir*/*tablespace_folder*/'||relfilenode)
FROM
    pg_class
    INNER JOIN pg_catalog.pg_tables ON (tablename = relname)
WHERE
    schemaname = 'public'
I'm not sure if things like vacuum can mess with this approach, but in my tests it's a pretty accurate way to find tables that are no longer used, at least for INSERT/UPDATE operations.
I guess you should activate some log options. You can get information about logging in PostgreSQL here.

How to return values from dynamically generated "insert" command?

I have a stored procedure that performs inserts and updates on my tables. I created it to centralize all the checking functions that run before inserting or updating records. Now the need has arisen to return the value of the table's ID field so that my application can locate the record and run other stored procedures.
Stored procedure
SET TERM ^ ;

CREATE OR ALTER procedure sp_insupd (
    iaction varchar(3),
    iusuario varchar(20),
    iip varchar(15),
    imodulo varchar(30),
    ifieldsvalues varchar(2000),
    iwhere varchar(1000),
    idesclogs varchar(200))
returns (
    oid integer)
as
declare variable vdesc varchar(10000);
begin
    if (iaction = 'ins') then
    begin
        vdesc = idesclogs;
        /*** the error is on the line below ***/
        execute statement 'insert into '||:imodulo||' '||:ifieldsvalues||' returning ID into '||:oid||';';
    end else
    if (iaction = 'upd') then
    begin
        execute statement 'select '||:idesclogs||' from '||:imodulo||' where '||:iwhere into :vdesc;
        execute statement 'execute procedure SP_CREATE_AUDIT('''||:imodulo||''');';
        execute statement 'update '||:imodulo||' set '||:ifieldsvalues||' where '||:iwhere||';';
    end
    insert into LOGS(USUARIO, IP, MODULO, TIPO, DESCRICAO) values (
        :iusuario, :iip, :imodulo, (case :iaction when 'ins' then 1 when 'upd' then 2 end), :vdesc);
end^

SET TERM ; ^
The error in the line marked above is a syntax error at run time. The procedure compiles normally, that is, the error does not happen at compile time, since the line in question is run through "execute statement". When there was no need to return the value of the ID field, the procedure worked fine with the line like this:
...
execute statement 'insert into '||:imodulo||' '||:ifieldsvalues||';';
...
What would be the correct way for the value of the ID field to be stored in the OID variable?
What is the REAL VALUE of ifieldsvalues?
You cannot have BOTH
'insert into '||:imodulo||' '||:ifieldsvalues
'update '||:imodulo||' set '||:ifieldsvalues
because the ways of specifying column names and column values in INSERT and UPDATE statements are fundamentally different! You would end up with either a broken UPDATE statement or a broken INSERT statement.
The error in the above line is occurring due to syntax error
This is not enough. Show the real error text, all of it. It includes the actual command you generated, and it seems you generated it in a really wrong way.
all the scan functions before inserting or updating records
Move those functions out of the SQL server and into your application server. Then you would not have to do inserts/updates in that "string splicing" way, which is VERY fragile and "SQL injection"-friendly. You have stepped onto the road to hell here.
the error does not happen in the compilation
Exactly. And that is only for starters. You are removing all the safety checks that should have helped you in application development.
http://searchsoftwarequality.techtarget.com/definition/3-tier-application
https://en.wikipedia.org/wiki/Multitier_architecture#Three-tier_architecture
http://bobby-tables.com
On modern Firebird versions, the EXECUTE STATEMENT command can have the same INTO clause as the PSQL SELECT command.
https://www.firebirdsql.org/file/documentation/reference_manuals/fblangref25-en/html/fblangref25-psql-coding.html#fblangref25-psql-execstmt
Use http://translate.ru to read http://www.firebirdsql.su/doku.php?id=execute_statement
Or just look at the SQL examples there. Notice, however, that those examples all use a dynamic SELECT command, not INSERT, so I am not sure it would work that way.
This works in Firebird 2.5 (but not in Firebird 2.1) PSQL blocks.
execute statement 'insert into Z(payload) values(2) returning id' into :i;
To run it from the IBExpert/FlameRobin/isql interactive shell, add the obvious boilerplate:
execute block returns (i integer) as
begin
    execute statement 'insert into Z(payload) values(2) returning id' into :i;
    suspend;
end
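Applied to the procedure in the question, the failing line could presumably be rewritten along these lines (a sketch assuming Firebird 2.5+; it keeps the asker's string concatenation, with all the injection caveats raised above):

execute statement 'insert into '||:imodulo||' '||:ifieldsvalues||' returning ID'
    into :oid;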

how to create a range of values and then use them to insert data into postgresql database

Background Information:
I need to auto generate a bunch of records in a table. The only piece of information I have is a start range and an end range.
Let's say my table looks like this:
id
widgetnumber
The logic needs to be contained within a .sql file.
I'm running postgresql
Code
This is what I have so far... as a test... and it seems to be working:
DO $$
DECLARE widgetnum text;
BEGIN
SELECT 5 INTO widgetnum;
INSERT INTO widgets VALUES(DEFAULT, widgetnum);
END $$;
And then to run it, I do this from a command line on my database server:
testbox:/tmp# psql -U myuser -d widgets -f addwidgets.sql
DO
Questions
How would I modify this code to loop through a range of widget numbers and insert them all?
For example, I would be provided with a start range and an end range (100 to 150, let's say).
Can you point me to a good online resource to learn the syntax I should be using?
Thanks.
How would I modify this code to loop through a range of widget numbers and insert them all?
You can use generate_series() for that.
insert into widgets (widgetnumber)
select i
from generate_series(100, 150) as t(i);
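If the start and end of the range are values you have to plug into the .sql file, a sketch combining the asker's DO block with generate_series might look like this (the variable names and the 100/150 bounds are placeholders):

DO $$
DECLARE
    start_num integer := 100;  -- provided start of the range
    end_num   integer := 150;  -- provided end of the range
BEGIN
    INSERT INTO widgets (widgetnumber)
    SELECT i
    FROM generate_series(start_num, end_num) AS t(i);
END $$;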
Can you point me to a good online resource to learn the syntax i should be using?
https://www.postgresql.org/docs/current/static/index.html
dvdrental=# \d test
Table "public.test"
 Column |          Type          | Collation | Nullable | Default
--------+------------------------+-----------+----------+---------
 id     | integer                |           |          |
 name   | character varying(250) |           |          |
dvdrental=# begin;
BEGIN
dvdrental=# insert into test(id,name) select generate_series(1,100000),'Kishore';
INSERT 0 100000

SELECT pgr_nodeNetwork query fails

I am working on Windows and have enabled the postgis and pgrouting extensions on my database. I have PostgreSQL 9.4 installed, and I am using the data from the Boundless workshop (http://workshops.boundlessgeo.com/tutorial-routing/).
SELECT pgr_nodeNetwork('edges',0.001,'geom','gid','noded')
When I run this query, it runs for about 1 minute and then fails with the error below. How can I solve this issue? My pgr_createTopology query ran successfully.
NOTICE: PROCESSING:
NOTICE: pgr_nodeNetwork('edges',0.001,'geom','gid','noded')
NOTICE: Performing checks, pelase wait .....
NOTICE: Processing, pelase wait .....
ERROR: line_locate_point: 1st arg isnt a line
CONTEXT: SQL statement "create temp table inter_loc on commit drop as ( select * from (
(select l1id, l2id, st_linelocatepoint(line,source) as locus from intergeom)
union
(select l1id, l2id, st_linelocatepoint(line,target) as locus from intergeom)) as foo
where locus<>0 and locus<>1)"
PL/pgSQL function pgr_nodenetwork(text,double precision,text,text,text) line 184 at EXECUTE statement
********** Error **********
ERROR: line_locate_point: 1st arg isnt a line
SQL state: XX000
Context: SQL statement "create temp table inter_loc on commit drop as ( select * from (
(select l1id, l2id, st_linelocatepoint(line,source) as locus from intergeom)
union
(select l1id, l2id, st_linelocatepoint(line,target) as locus from intergeom)) as foo
where locus<>0 and locus<>1)"
PL/pgSQL function pgr_nodenetwork(text,double precision,text,text,text) line 184 at EXECUTE statement
I ran into this issue in my project and I was stuck on it for hours trying to figure out what was causing it AND how to fix it. I will describe my situation and how I fixed it so hopefully, it helps someone else in the future.
I am using ogr2ogr to import a Shapefile into my database and I was using the -nlt PROMOTE_TO_MULTI as one of my arguments during my import; this caused my geometries to be imported as MultiLineStrings.
From the behavior I've observed and what others have mentioned (and more people), the pgr_nodeNetwork() function does not play nicely with MultiLineStrings.
Since MultiLineStrings won't work for routing, I took the SQL from dkastl's answer and ran it on my data to see if I actually needed MultiLineStrings or if I could just work with LineStrings.
SELECT
COUNT(
CASE WHEN ST_NumGeometries(geom) > 1 THEN 1 END
) AS multi,
COUNT(geom) AS total
FROM network_nodes;
After running that, I found that I had zero need for MultiLineStrings so I reimported my Shapefile with ogr2ogr using -nlt LINESTRING instead and then was able to run pgr_nodeNetwork() without problems.
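An ogr2ogr invocation for that reimport might look roughly like the one below; the connection string, shapefile and layer name are placeholders, and the relevant part is -nlt LINESTRING instead of -nlt PROMOTE_TO_MULTI:

ogr2ogr -f "PostgreSQL" PG:"dbname=mydb user=myuser" roads.shp -nln edges -nlt LINESTRING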

dynamic query postgres

I am new to Postgres and am running the following dynamic query:
EXECUTE 'Select * from products';
I get the following response:
ERROR: syntax error at or near "'Select * from products'"
LINE 1: EXECUTE 'Select * from products';
I know this is probably something basic I'm missing.
There is the PL/pgSQL EXECUTE statement, which does what you are trying to do: execute an SQL query string. You tagged the question dynamic, so this may be what you are looking for.
It only works inside PL/pgSQL functions or DO statements (anonymous code blocks). The distinction between PL/pgSQL EXECUTE and SQL EXECUTE is made clear in the fine manual:
Note: The PL/pgSQL EXECUTE statement is not related to the EXECUTE SQL
statement supported by the PostgreSQL server. The server's EXECUTE
statement cannot be used directly within PL/pgSQL functions (and is
not needed).
If you want to return values from a dynamic SELECT query as your example indicates, you need to create a function. DO statements always return void. More about returning values from a function in the very fine manual.
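A minimal sketch of such a function (the function name and the assumption that you want whole products rows back are mine, not the asker's):

CREATE OR REPLACE FUNCTION get_products()
RETURNS SETOF products AS
$$
BEGIN
    -- RETURN QUERY EXECUTE runs the dynamic string and returns its rows
    RETURN QUERY EXECUTE 'SELECT * FROM products';
END
$$ LANGUAGE plpgsql;

SELECT * FROM get_products();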
From the fine manual:
Synopsis
EXECUTE name [ ( parameter [, ...] ) ]
Description
EXECUTE is used to execute a previously prepared statement.
So EXECUTE doesn't execute a string of SQL; it executes a prepared statement identified by a name, and you need to prepare the statement separately using PREPARE:
=> prepare stmt as select * from products;
=> execute stmt;
-- "select * from products" output goes here...