Currently running a simple Sinatra app under Passenger, using pgbouncer for connection pooling to a database on the same server as the app. Intermittently I am getting a PG error that a prepared statement "a\d" (i.e. "a" followed by a digit, such as "a2") does not exist.
A PG::Error occurred in #:
ERROR: prepared statement "a2" does not exist
The Ruby code that is executed before the error:
def self.get_ownership_record(id, key)
  self.where("user_id=? AND key=?", id, key).first
end
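For context, with prepared_statements enabled ActiveRecord prepares each query once per connection and then reuses it by a short generated name (a1, a2, ...). What goes over the wire is roughly equivalent to the SQL below (a sketch only; the table name ownership_records, the statement name and the bound values are assumptions), which is exactly what breaks under transaction pooling when the EXECUTE lands on a different backend than the one that saw the PREPARE:
PREPARE a2 AS
  SELECT * FROM ownership_records WHERE user_id = $1 AND key = $2 LIMIT 1;
-- later calls reuse the statement by name; if pgbouncer has moved this client
-- to another server connection, "a2" does not exist there
EXECUTE a2(42, 'some-key');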
pgbouncer config
; #########################################################
; ############# SECTION HEADER [DATABASES] ################
; #########################################################
[databases]
fakedatabase=fake
[pgbouncer]
; ----- Generic Settings --------------------------
; -------------------------------------------------
logfile=/opt/local/var/log/pgbouncer/pgbouncer.log
pidfile=/opt/local/var/run/pgbouncer/pgbouncer.pid
listen_addr=*
listen_port=5444
; unix_socket_dir=/tmp
user=_webuser
auth_file=/Users/Shared/data/global/pg_auth
auth_type=trust
pool_mode=transaction
; max_client_conn=100
; default_pool_size=20
; reserve_pool_size=0
; reserve_pool_timeout=5
; server_round_robin=0
; ----- Log Settings ------------------------------
; -------------------------------------------------
; syslog=0
; syslog_ident=pgbouncer
; syslog_facility=daemon
; log_connections=1
; log_disconnections=1
; log_pooler_errors=1
; ----- Console Access Control --------------------
; -------------------------------------------------
admin_users=admin,nagios
; -------------------------------------------------
; server_reset_query=DISCARD ALL;
server_check_delay=0
server_check_query=SELECT 1;
; server_lifetime=3600
; server_idle_timeout=600
; server_connect_timeout=600
; server_login_retry=15
Is my only solution to turn off prepared statements?
database.yml
production:
  adapter: postgresql
  database: fakedatabase
  username: admin
  host: localhost
  port: 5444
  reconnect: true
  prepared_statements: false
EDIT
I have updated pgbouncer.ini to use session pooling:
pool_mode=session
and uncommented
server_reset_query=DISCARD ALL;
and I am still, seemingly at random, getting errors involving prepared statements, but this time:
An ActiveRecord::StatementInvalid occurred in #:
PG::Error: ERROR: bind message supplies 2 parameters, but prepared statement "a1" requires 0
I have turned on statement level logging in my postgresql logs and will report back with more details if possible.
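For anyone wanting to do the same: statement-level logging is governed by the log_statement parameter in postgresql.conf (a reference sketch; 'all' is the most verbose value):
log_statement = 'all'    # or 'ddl' / 'mod' for less noise
followed by a configuration reload (pg_ctl reload, or SELECT pg_reload_conf(); from psql) rather than a full restart.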
Following Richard Huxton's advice, and after some trial and error, my final setup looks like this:
database.yml
I had to set prepared_statements to true:
production:
  adapter: postgresql
  database: fakedatabase
  username: admin
  host: localhost
  port: 5444
  reconnect: true
  prepared_statements: true
pgbouncer.ini
I had to uncomment server_reset_query=DISCARD ALL; and set pool_mode=session.
; #########################################################
; ############# SECTION HEADER [DATABASES] ################
; #########################################################
[databases]
fakedatabase=fake
[pgbouncer]
; ----- Generic Settings --------------------------
; -------------------------------------------------
logfile=/opt/local/var/log/pgbouncer/pgbouncer.log
pidfile=/opt/local/var/run/pgbouncer/pgbouncer.pid
listen_addr=*
listen_port=5444
; unix_socket_dir=/tmp
user=_webuser
auth_file=/Users/Shared/data/global/pg_auth
auth_type=trust
pool_mode=session
; max_client_conn=100
; default_pool_size=20
; reserve_pool_size=0
; reserve_pool_timeout=5
; server_round_robin=0
; ----- Log Settings ------------------------------
; -------------------------------------------------
; syslog=0
; syslog_ident=pgbouncer
; syslog_facility=daemon
; log_connections=1
; log_disconnections=1
; log_pooler_errors=1
; ----- Console Access Control --------------------
; -------------------------------------------------
admin_users=admin,nagios
; -------------------------------------------------
server_reset_query=DISCARD ALL;
server_check_delay=0
server_check_query=SELECT 1;
; server_lifetime=3600
; server_idle_timeout=600
; server_connect_timeout=600
; server_login_retry=15
Basically: allow prepared statements in session pool mode with the default server reset query.
Perhaps reading the pgbouncer FAQ would help? Unless you have a good reason not to, session pooling should be sensible.
In my case, connecting to the Postgres database and running "DEALLOCATE ALL" resolved the problem.
If you use Heroku, that looks like this:
> heroku pg:psql -a app_name
app_name::DATABASE=> DEALLOCATE ALL
You can use transaction pooling, provided that you PREPARE and EXECUTE the prepared query inside the same transaction (to avoid pgBouncer running the server_reset_query in between).
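A minimal sketch of that pattern (the statement and table names below are placeholders): because everything runs inside one transaction, pgbouncer keeps the client on the same server connection for the PREPARE, the EXECUTE and the DEALLOCATE.
BEGIN;
PREPARE owner_lookup AS
  SELECT * FROM ownership_records WHERE user_id = $1 AND key = $2 LIMIT 1;
EXECUTE owner_lookup(42, 'some-key');
DEALLOCATE owner_lookup;  -- optional explicit cleanup
COMMIT;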
Related
I'm trying to create a procedure that contains non-ASCII characters in Firebird 2.5.9, but I always get the following error:
Statement failed, SQLSTATE = 42000
unsuccessful metadata update
-STORE RDB$PROCEDURES failed
-Malformed string
Here is what I am trying to do:
isql -user admin -password masterkey
create database "chartest.fdb" default character set win1252
SET NAMES WIN1252;
SET TERM ^ ;
CREATE PROCEDURE PROC_VAL_TO_TEXT (AVAL INTEGER )
RETURNS (RESULT VARCHAR(20) ) AS
begin
  if (aval = 0) then
    result = 'natürlich';
  else
    result = 'niemals';
  suspend;
end^
SET TERM ; ^
In Firebird 2.0 this is working fine. Why doesn't this work in 2.5, and what can I do to avoid the error?
The "SET NAMES" command must be used before the "CREATE DATABASE" or "CONNECT" command to have any effect. Also, the character set given in it must exactly match the real encoding of the script.
I am referring to https://github.com/zubkov-andrei/pg_profile for generating an AWR-like report.
The steps I have followed are as below:
1) Enabled the below parameters inside postgresql.conf (located in D:\Program Files\PostgreSQL\9.6\data):
track_activities = on
track_counts = on
track_io_timing = on
track_functions = on
shared_preload_libraries = 'pg_stat_statements'
pg_stat_statements.max = 1000
pg_stat_statements.track = 'top'
pg_stat_statements.save = off
pg_profile.topn = 20
pg_profile.retention = 7
2) Manually copied all the files beginning with pg_profile to D:\Program Files\PostgreSQL\9.6\share\extension
3) From the pgAdmin4 console, executed the below commands successfully:
CREATE EXTENSION dblink;
CREATE EXTENSION pg_stat_statements;
CREATE EXTENSION pg_profile;
4) To see which node is already present, I executed SELECT * from node_show();
which resulted in:
node_name: local
connstr: dbname=postgres port=5432
enabled: true
5) To create a snapshot, I executed SELECT * from snapshot('local');
but I am getting the below error:
ERROR: could not establish connection
DETAIL: fe_sendauth: no password supplied
CONTEXT: SQL statement "SELECT dblink_connect('node_connection',node_connstr)"
PL/pgSQL function snapshot(integer) line 38 at PERFORM
PL/pgSQL function snapshot(name) line 9 at RETURN
SQL state: 08001
Once I am able to generate multiple snapshots, I guess I should be able to generate a report.
Just use SELECT * from snapshot()
Look at the code of the function: it calls the other one with the node as a parameter.
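If you do need the named-node form, the dblink connection behind it has to be able to authenticate on its own; fe_sendauth means libpq found no password to send for that connection. A quick way to test the node's connection string independently of pg_profile (a sketch using the dblink extension created in step 3; the user and password here are placeholders):
SELECT dblink_connect('check_conn', 'dbname=postgres port=5432 user=postgres password=secret');
SELECT dblink_disconnect('check_conn');
Alternatively, a pgpass.conf entry for the account the PostgreSQL server runs as, or a local trust line in pg_hba.conf, lets the connection succeed without embedding a password in the connection string.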
I'm having an issue with resolving macro variables within a macro. I think the issue is the language, and how SAS is sending my statements to the macro processor vs. the compiler.
Here's the gist of my code:
....some import statements...
%MACRO FCERR(date=);
%LET REMHOST=MainFrame PORT;
SIGNON REMHOST USER=&SYSUSERID. PASSWORD= _PROMPT_;
%SYSLPUT date=&yymm. ;
RSUBMIT;
FILENAME FIN "MY.FILE.QUALIFIERS" DISP = shr;
......some datasteps......
LIBNAME METRO "My.File.Qualifiers" DISP=shr;
/********************************************************************
******* *********
** ** **
******* ** * **
** ** * **
******* *********
*
/*******************************************************************/
%IF %SYSFUNC(EXIST(work.EQ_&date._FIN)) %THEN %DO;
PROC UPLOAD Data = work.EQ_&date._FIN
OUT = work.EQ_&date._FIN;
..........a bunch of data steps..................
PROC SQL NOPRINT ;
select count(*) as EQB format=10.0 INTO :EQBEF from EQ_1701_FIN ;
select count(*) as EQA format=10.0 INTO :EQAFTER from trunc_fin_eq ;
QUIT ;
%PUT &EQBEF;
%PUT &EQAFTER;
%IF %SYSFUNC(STRIP(&EQBEF.)) ~= %SYSFUNC(STRIP(&EQAFTER.)) %THEN %DO;
options emailhost= MYEMAILHOST.ORG ;
filename mail email ' '
to= (&recip.)
subject = "EQ Error QA/QC";
DATA _NULL_;
file mail ;
put "There were potential errors processing the Equifax Error file.";
put "The input dataset contains %SYSFUNC(STRIP(&EQBEF.)) observations.";
put "The output dataset contains %SYSFUNC(STRIP(&EQAFTER.)) observations.";
put "Please check the SAS log for additional details.";
RUN;
%END;
%END;
%ENDRSUBMIT;
%SIGNOFF;
%MEND;
%FCERR(date=&yymm.);
I keep getting an error that is stopping my macro from processing. This is it:
> SYMBOLGEN: Macro variable EQBEF resolves to 24707
> 24707
> MLOGIC(FCERR): %PUT &EQAFTER
> WARNING: Apparent symbolic reference EQAFTER not resolved.
> &EQAFTER
> SYMBOLGEN: Macro variable EQBEF resolves to 24707
> WARNING: Apparent symbolic reference EQAFTER not resolved.
> WARNING: Apparent symbolic reference EQAFTER not resolved.
> ERROR: A character operand was found in the %EVAL function or %IF condition where a numeric
> operand is required. The condition was: %SYSFUNC(STRIP(&EQBEF.)) ~=
> %SYSFUNC(STRIP(&EQAFTER.))
> ERROR: The macro FCERR will stop executing.
Question: Is SAS trying to process my second (i.e. the inner) %IF %THEN statement before it compiles and executes the data steps above %IF %SYSFUNC(STRIP(&EQBEF.)) ~= %SYSFUNC(STRIP(&EQAFTER.)) %THEN %DO;? I can see from the log that SAS is emitting the error before it creates the datasets from my data steps, and I believe &EQBEF resolves because it is created using PROC UPLOAD.
If so, how can I prevent SAS from executing the second %IF %THEN until the data steps are processed, since my second SELECT statement in PROC SQL depends on those data steps executing?
Also, I'm having trouble getting my date variable to resolve in PROC SQL.
For example, this:
PROC SQL NOPRINT ;
select count(*) as EQB format=10.0 INTO :EQBEF from EQ_1701_FIN ;
select count(*) as EQA format=10.0 INTO :EQAFTER from trunc_fin_eq ;
QUIT ;
is ideally:
PROC SQL NOPRINT ;
select count(*) as EQB format=10.0 INTO :EQBEF from EQ_&DATE._FIN ;
select count(*) as EQA format=10.0 INTO :EQAFTER from trunc_fin_eq ;
QUIT ;
but &DATE. won't resolve in that PROC SQL statement, even though it resolves perfectly fine in all of my LIBNAME statements, etc. Is there some reason why &date. won't resolve within PROC SQL? Do I need to have every variable used in my macro referenced in the parameter list?
Your macro is running in your local SAS session, but because of the RSUBMIT; and ENDRSUBMIT; statements, the SQL code that generates the macro variable runs in the remote SAS session, while the macro statements reference the local macro variable.
For example, try this simple program, which creates a local and a remote macro variable and tries to show the values:
signon rr sascmd='!sascmd';
%let mvar=Local ;
%syslput mvar=Remote ;
%put LOCAL &=mvar;
rsubmit rr;
%put REMOTE &=mvar ;
endrsubmit;
signoff rr;
If you run it in open code, the %PUT statements will show that MVAR is equal to Local and Remote, respectively.
But if you wrap it inside a macro and run it:
%macro xx;
signon rr sascmd='!sascmd';
%let mvar=Local ;
%syslput mvar=Remote ;
%put LOCAL &=mvar;
rsubmit rr;
%put REMOTE &=mvar ;
endrsubmit;
signoff rr;
%mend xx;
options mprint;
%xx;
You will see that both %PUT statements run in the local server and display the value of the local macro variable.
Check the log for the second select
select count(*) as EQA format=10.0 INTO :EQAFTER from trunc_fin_eq ;
If that data set doesn't exist, the macro variable won't be created.
You can set it to 0 to initialize it:
%let EQAFTER=0;
I have a REXX job that needs to read from both Teradata (using BTEQ) and DB2. At present, I can get it to read from either Teradata or DB2, but not both. When I try to read from both, the Teradata one (which runs first) works fine, but the DB2 read gives an error of RC(1) upon attempting to open the cursor.
Code to read from Teradata (by and large copied from http://www.teradataforum.com/teradata/20040928_131203.htm):
ADDRESS TSO "DELETE BLAH.TEMP"
"ALLOC FI(SYSPRINT) DA(BLAH.TEMP) NEW CATALOG SP(10 10) TR RELEASE",
"UNIT(SYSDA) RECFM(F B A) LRECL(133) BLKSIZE(0) REUSE"
"ATTRIB FBATTR LRECL(220)"
"ALLOC F(SYSIN) UNIT(VIO) TRACKS SPACE(10,10) USING(FBATTR)"
/* Set up BTEQ script */
QUEUE ".RUN FILE=LOGON"
QUEUE "SELECT COLUMN1 FROM TABLE1;"
/* Run BTEQ script */
"EXECIO * DISKW SYSIN (FINIS"
"CALL 'SYS3.TDP.APPLOAD(BTQMAIN)'"; bteq_rc = rc
"FREE FI(SYSPRINT SYSIN)"
/* Read and parse BTEQ output */
"EXECIO * DISKR SYSPRINT (STEM BTEQOUT. FINIS"
DO I = 1 to BTEQOUT.0
...
END
Code to read from DB2:
ADDRESS TSO "SUBCOM DSNREXX"
IF RC THEN rcDB2 = RXSUBCOM('ADD','DSNREXX','DSNREXX')
ADDRESS DSNREXX "CONNECT " subsys
sqlQuery = "SELECT COLUMN2 FROM TABLE2;"
ADDRESS DSNREXX "EXECSQL DECLARE C001 CURSOR FOR S001"
IF SQLCODE <> 0 THEN DO
SAY 'DECLARE C001 SQLCODE = ' SQLCODE
EXIT 12
END
ADDRESS DSNREXX "EXECSQL PREPARE S001 FROM :sqlQuery"
IF SQLCODE <> 0 THEN DO
SAY 'PREPARE S001 SQLCODE = ' SQLCODE SQLERROR
EXIT 12
END
ADDRESS DSNREXX "EXECSQL OPEN C001"
IF SQLCODE <> 0 THEN DO
SAY 'OPEN C001 SQLCODE = ' SQLCODE
EXIT 12
END
ADDRESS DSNREXX "EXECSQL FETCH C001 INTO :col2"
IF SQLCODE <> 0 THEN DO
SAY 'FETCH C001 SQLCODE = ' SQLCODE
EXIT 12
END
I suspect that this has something to do with my use of SYSPRINT and SYSIN. Anyone know how I can get this to work?
Thanks.
Edit
The question as stated was actually wrong. Apologies for not correcting this earlier.
What I had really done was have this:
ADDRESS TSO "SUBCOM DSNREXX"
IF RC THEN rcDB2 = RXSUBCOM('ADD','DSNREXX','DSNREXX')
ADDRESS DSNREXX "CONNECT " subsys
...followed by a small read from DB2, then the code to read from Teradata, followed by more code to read from DB2. When this was changed so that the Teradata read came first, before doing anything at all with DB2, it worked.
I don't think this has anything to do with SYSPRINT or SYSIN.
I think you are getting TSO RC = 1, not SQLCODE = 1 (because there is no SQLCODE of 1). An RC of 1 is a warning; -1 is an error. You can look this up in the DB2 Application Programming and SQL Guide.
Turn on TRACE R and run it.
There are variables (shown below) that display info about the error/warning.
22 *-* ADDRESS DSNREXX "EXECSQL OPEN C1"
>>> "EXECSQL OPEN C1"
+++ RC(1) +++
23 *-* IF SQLCODE <> 0
28 *-* SAY 'SQLSTATE='sqlstate', SQLERRMC='sqlerrmc', SQLERRP='sqlerrp
SQLSTATE=00000, SQLERRMC=, SQLERRP=DSN
29 - SAY 'SQLERRD='sqlerrd.1', 'sqlerrd.2', 'sqlerrd.3', 'sqlerrd.4',',
sqlerrd.5', 'sqlerrd.6
SQLERRD=0, 0, 0, -1, 0, 0
32 - SAY 'SQLWARN='sqlwarn.0', 'sqlwarn.1', 'sqlwarn.2', 'sqlwarn.3',',
sqlwarn.4', 'sqlwarn.5', 'sqlwarn.6', 'sqlwarn.7',',
sqlwarn.8', 'sqlwarn.9', 'sqlwarn.10
SQLWARN= , N, , , , 2, , , , ,
For example, it could be that when both are used together, there is not enough memory.
I noticed that when I call a PL/pgSQL or PL/Perl function from a Perl script using DBI, it does not return a value if a RAISE NOTICE or elog(NOTICE) is used in the function. To illustrate:
A simple table:
CREATE TABLE "public"."table1" (
"fld" INTEGER
) WITHOUT OIDS;
A simple function:
CREATE OR REPLACE FUNCTION "public"."function1" () RETURNS integer AS
$body$
DECLARE
  myvar INTEGER;
BEGIN
  SELECT INTO myvar fld FROM table1 LIMIT 1;
  RETURN myvar;
END;
$body$
LANGUAGE 'plpgsql';
A piece of Perl script:
use DBI;
...
my $ref = $dbh->selectcol_arrayref('SELECT function1()');
print $$ref[0];
As it is, it prints the value from the table.
But I get no result if I add RAISE NOTICE as follows:
SELECT INTO myvar fld FROM table1 LIMIT 1;
RAISE NOTICE 'Testing';
RETURN myvar;
Am I missing something, or is this behavior by design?
Check the client_min_messages setting in your database server's postgresql.conf file. From the PostgreSQL 8.3 docs:
client_min_messages (string)
Controls which message levels are sent to the client. Valid values are DEBUG5, DEBUG4, DEBUG3, DEBUG2, DEBUG1, LOG, NOTICE, WARNING, ERROR, FATAL, and PANIC. Each level includes all the levels that follow it. The later the level, the fewer messages are sent. The default is NOTICE. Note that LOG has a different rank here than in log_min_messages.
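If the notice level does turn out to differ between your environments, it can also be inspected and changed per session instead of in postgresql.conf; a quick sketch from any client:
SHOW client_min_messages;           -- what this connection currently uses
SET client_min_messages = warning;  -- suppress NOTICE for this session only
SELECT function1();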
I can't reproduce this, using Debian's Perl 5.10, DBI 1.605 and DBD::Pg 2.8.7 against PostgreSQL 8.3.7. I get the notice printed out as expected.
steve#steve#[local] =# create or replace function public.function1() returns integer language 'plpgsql' as $$ declare myvar integer; begin select into myvar fld from table1 limit 1; raise notice 'Testing'; return myvar; end; $$;
CREATE FUNCTION
steve#steve#[local] =#
[1]+ Stopped psql --cluster 8.3/steve
steve#arise:~$ DBI_TRACE=1 perl -MData::Dumper -MDBI -e '$dbh = DBI->connect(qw|dbi:Pg:dbname=steve;port=5433;host=/tmp steve steve|, {RaiseError=>1,PrintError=>0}); print Data::Dumper->new([$dbh->selectcol_arrayref("SELECT function1()")], [qw|result|])->Dump'
DBI 1.605-ithread default trace level set to 0x0/1 (pid 5739) at DBI.pm line 273 via -e line 0
Note: perl is running without the recommended perl -w option
-> DBI->connect(dbi:Pg:dbname=steve;port=5433;host=/tmp, steve, ****, HASH(0x1c9ddf0))
-> DBI->install_driver(Pg) for linux perl=5.010000 pid=5739 ruid=1000 euid=1000
install_driver: DBD::Pg version 2.8.7 loaded from /usr/lib/perl5/DBD/Pg.pm
<- install_driver= DBI::dr=HASH(0x1e06a68)
!! warn: 0 CLEARED by call to connect method
<- connect('dbname=steve;port=5433;host=/tmp', 'steve', ...)= DBI::db=HASH(0x1fd8e08) at DBI.pm line 638
<- STORE('RaiseError', 1)= 1 at DBI.pm line 690
<- STORE('PrintError', 0)= 1 at DBI.pm line 690
<- STORE('AutoCommit', 1)= 1 at DBI.pm line 690
<- STORE('Username', 'steve')= 1 at DBI.pm line 693
<> FETCH('Username')= 'steve' ('Username' from cache) at DBI.pm line 693
<- connected('dbi:Pg:dbname=steve;port=5433;host=/tmp', 'steve', ...)= undef at DBI.pm line 699
<- connect= DBI::db=HASH(0x1fd8e08)
<- STORE('dbi_connect_closure', CODE(0x1da2280))= 1 at DBI.pm line 708
NOTICE: Testing
<- selectcol_arrayref('SELECT function1()')= ( [ '2' ] ) [1 items] at -e line 1
$result = [
'2'
];
I suggest isolating your problem to a small script (like the one above) and running it with DBI_TRACE set fairly high and seeing what differences appear. Maybe also look at the release notes for DBD::Pg and see whether they mention it having been confused by notices in the past. With DBI_TRACE=10 I see this:
PQexec
Begin pg_warn (message: NOTICE: Testing
DBIc_WARN: 1 PrintWarn: 1)
NOTICE: Testing
End pg_warn
Begin _sqlstate
So you should be looking for something like that in your own output.