PostgreSQL Query Cache - closing connection unexpectedly

<?php
$c = pg_connect( $connectionString, PGSQL_CONNECT_FORCE_NEW );
$queries = array(
    "SELECT COUNT(*) AS data FROM articles",
    "SELECT * FROM posts WHERE visible = TRUE",
    "SELECT * FROM countries WHERE visible = FALSE",
    "SELECT * FROM types WHERE visible = TRUE"
);
foreach ( $queries as $query ) {
    $res = @pg_query( $c, $query );
    if ( empty( $res ) ) {
        echo "[ERR] " . pg_last_error( $c ) . "\n";
    } else {
        echo "[OK]\n";
    }
}
The code snippet above produces this the first time it runs:
[OK]
[OK]
[OK]
[OK]
but the second time it produces this:
[OK]
[OK]
[ERR] server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
[ERR] server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
This means that some cached queries cause a problem. We tried changing the order of the queries, but it didn't help. Only simple queries like SELECT 1+8, which are probably not cached, always run fine.
A similar problem can be reproduced using psql and probably any other language driver (not only PHP).
All the trouble started when we installed PostgreSQL Query Cache.
Should the query cache be configured somehow so that it doesn't behave this way?
Our config files are here:
http://pastebin.com/g2dBjba0 – pcqd_hba.conf
http://pastebin.com/X9Y3zrjx – pcqd.conf

Chances are you have a bug in PostgreSQL Query Cache that is causing segfaults in the backend. Your best solution is to uninstall this add-on.
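If you want to confirm the add-on is at fault before removing it, you can try bypassing it. PostgreSQL Query Cache runs as a proxy daemon in front of the real server (your pcqd.conf suggests as much), so connecting straight to the backend should avoid it. The sketch below reuses the $queries array from the snippet above; the host, port, and credentials are assumptions you'd replace with the backend values from your config:
<?php
// Hypothetical direct connection that bypasses the query-cache proxy;
// adjust host/port/dbname/user to match your real PostgreSQL backend.
$direct = pg_connect(
    'host=127.0.0.1 port=5432 dbname=mydb user=myuser',
    PGSQL_CONNECT_FORCE_NEW
);
foreach ( $queries as $query ) {
    $res = @pg_query( $direct, $query );
    echo empty( $res ) ? "[ERR] " . pg_last_error( $direct ) . "\n" : "[OK]\n";
}
If the same queries succeed on every run over the direct connection, the proxy is the culprit.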

Related

Solved: DBI cached statements gone and CGI::Session is stuck

I'm using Apache 2.2 (worker)/mod_perl 2.0.4/Apache::DBI/CGI::Session and the Firebird RDBMS.
I also wrote CGI::Session::Driver::firebird.pm to work with Firebird.
DB connections are pooled by Apache::DBI, and the connection handle is given to CGI::Session as {Handle=>$dbh}.
The number of DB connections equals the number of worker processes.
I posted "Programming with Apache::DBI and firebird. Get Stucked httpd on exception" three months ago.
I found the reason for that issue, and want to know how to fix it.
$dbh = DBI->connect(
    "dbi:Firebird:db=$DBSERVER:/home/cdbs/xxnet.fdb;ib_charset=UTF8;ib_dialect=3",
    $DBUSER, $DBPASS,
    {
        AutoCommit  => 1,
        LongReadLen => 8192,
        RaiseError  => 1
    }
);
my $session = new CGI::Session('dbi:firebird', $sessid, {Handle => $dbh});
my $ses_p1 = $session->param('p1');
eval {
    $dbh->begin_work();
    my $sql = "SELECT * FROM SAMPLETABLE";
    my $st = $dbh->prepare($sql);
    $st->execute();
    while (my $R = $st->fetchrow_hashref()) {
        ...
    }
    $st->finish();
};
warn $@ if $@;
if ($@) {
    $dbh->rollback();
} else {
    $dbh->commit();
}
$session->flush();
When an SQL error occurs, the eval block catches the exception and the transaction is rolled back.
After that, CGI::Session can no longer retrieve the session object, because the prepare_cached statement fails in CGI::Session::Driver::DBI.pm.
CGI::Session::Driver::DBI.pm uses prepare_cached($sql, undef, 3). '3' is the safest way of using a cached statement, but it never detects the broken statement in this situation.
How can I fix this?
Raise a request to change CGI::Session::Driver::DBI.pm to use prepare() instead?
Write store(), retrieve(), and traverse() functions in firebird.pm that use prepare()?
Other prepare_cached() calls may also fail after an exception is caught...
1) I added a die statement on CGI::Session->errstr()
I got the error "new(): failed: load(): couldn't retrieve data: retrieve(): $sth->execute failed with error message"
2) I flushed the session object after session->load()
If $session is valid, changes are stored to the DB.
3) I replaced begin_work() with {AutoCommit}=0
The results are the same. I can use $dbh normally after catching the exception and rolling back, BUT new CGI::Session returns an error.
------------------------------------------ added 2017/07/26 18:47 JST
Please give me your suggestion.
Thank you.
There are various things you could try before requesting changes to CGI::Session::Driver::DBI.pm...
First, change the way new CGI::Session is called, in order to diagnose whether the problem happens when the session is created or loaded:
my $session = CGI::Session->new('dbi:firebird',$sessid,{Handle=>$dbh}) or die CGI::Session->errstr();
The param and delete methods store changes to the session inside the $session handle, not in the DB. flush stores the changes made inside the session handle to the DB. Use $session->flush() only after a session->param set/update or a session delete:
$session->param('p1','someParamValue');
$session->flush() or die 'Unable to update session storage!';
# OR
$session->delete();
$session->flush() or die 'Unable to update session storage!';
The flush method does not destroy the $session handle (you can still call $session->param('p1') after the flush). In some cases mod_perl caches $session, causing problems for the next attempt to load that same session. In those cases it needs to be destroyed when it's no longer needed:
undef($session);
The last thing I can suggest is to avoid the begin_work method, control the transaction behavior with AutoCommit instead (because the DBD::Firebird documentation says that's the way transactions should be controlled), and commit inside the eval block:
eval {
    # Setting AutoCommit to 0 enables transaction behavior
    $dbh->{AutoCommit} = 0;
    my $sql = "SELECT * FROM SAMPLETABLE";
    my $st = $dbh->prepare($sql);
    $st->execute();
    while (my $R = $st->fetchrow_hashref()) {
        ...
    }
    $st->finish();
    $dbh->commit();
};
if ($@) {
    warn "Transaction aborted! $@";
    $dbh->rollback();
}
# Remember to set AutoCommit to 1 after the eval
$dbh->{AutoCommit} = 1;
You said you wrote your own session driver for Firebird... You should look at how CGI/Session/Driver/sqlite.pm or CGI/Session/Driver/mysql.pm are written; maybe you need to write some fetching method you are missing...
Hope this helps!!

Sybase warning messages from Perl DBI

I am connecting to Sybase 12 from a Perl script and calling stored procs, and I get the following warnings:
DBD::Sybase::db prepare failed: Server message number=2401 severity=11 state=2 line=0 server=SERVER_NAME text=Character
set conversion is not available between client character set 'utf8' and server character set 'iso_1'.
Server message number=2411 severity=10 state=1 line=0 server=SERVER_NAME text=No conversions will be done.
at line 210.
Now, I understand these are only warnings, and my process works perfectly fine, but I am calling my stored proc in a loop throughout the day, and hence it creates a lot of warning messages in my log files, which causes the entire process to run a bit slower than expected. Can someone tell me how I can suppress these, please?
You can use a callback to handle the messages you want ignored. See the DBD::Sybase docs. The code below is derived from the docs; you specify the message numbers you would like to ignore.
my %blocked_msgs = map { $_ => 1 } ( 2401, 2411 );

sub err_handler {
    my ($err, $sev, $state, $line, $server, $proc, $msg, $sql, $err_type) = @_;
    if ( exists $blocked_msgs{$err} ) {   # it's a blocked message
        return 0;                         # this is not an error
    }
    return 1;
}
This is how you might use it:
$dbh = DBI->connect('dbi:Sybase:server=troll', 'sa', '');
$dbh->{syb_err_handler} = \&err_handler;
$dbh->do("exec someproc");
$dbh->disconnect;

MongoCursorTimeoutException in jenssegers/laravel-mongodb

I have a query which looks up data in a huge collection (over 48M documents), yet even if I add timeout=-1 to it, it still throws a MongoCursorTimeoutException.
return \DB::connection('mongodb')->collection('stats')->timeout(-1)
    ->where('ip', '=', $alias)
    ->where('created_at', '>=', new \DateTime($date))
    ->where('created_at', '<=', new \DateTime($date . ' 23:59:59'))
    ->count();
I am using this library: https://github.com/jenssegers/laravel-mongodb
Any ideas?
There is an issue, PHP-1249 - MongoCursor::count() should use cursor's socket timeout, filed against the PHP MongoDB driver v1.5.7 and fixed in v1.5.8 in October 2014.
The reply from support:
Looking into the code a bit, it appears that both the socket timeout and maxTimeMS are not passed along to the count command.
If you need an immediate work-around, you should be able to get by with MongoDB::command() for now (which can support both timeouts).
The workaround posted by one of the users is:
$countCommand = $mongo->command(
    array(
        'count' => 'collection',
        'query' => $query
    ),
    array( 'socketTimeoutMS' => -1 )
);
if ($countCommand['ok']) {
    $count = $countCommand['n'];
} else {
    // ... error ...
}
It seems that laravel-mongodb doesn't use MongoDB::command(). You either have to write your query explicitly, without the help of the where methods, as shown above, or upgrade to v1.5.8.
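If upgrading isn't an option, a possible bridge is to grab the raw connection object from laravel-mongodb and issue the count command yourself. This is a minimal sketch, assuming the library exposes the underlying legacy MongoDB object via getMongoDB() and that created_at is stored as a MongoDate; the collection and field names are taken from the question:
$mongodb = \DB::connection('mongodb')->getMongoDB();

$result = $mongodb->command(
    array(
        'count' => 'stats',
        'query' => array(
            'ip' => $alias,
            'created_at' => array(
                '$gte' => new \MongoDate(strtotime($date)),
                '$lte' => new \MongoDate(strtotime($date . ' 23:59:59')),
            ),
        ),
    ),
    array('socketTimeoutMS' => -1) // pass the timeout that count() otherwise drops
);

$count = $result['ok'] ? $result['n'] : 0;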

Instagram Real-time API duplicate requests

I have an issue where, when I create a real-time subscription, I get duplicate notifications from different Instagram IP addresses. I have it set up so that when I get a notification, I send a request for the latest updates using the min_tag_id setting, and I store that in my db to use for the next request. I don't always get duplicates, but when I do, everything about the notification is the same (time, object, changed_aspect). I can only tell they are different from my debugging output, which lists two almost identical requests, the only differing info being a different IP address and a REQUEST_TIME_FLOAT that differs by about 1/10th of a second. They even have the same HTTP_X_HUB_SIGNATURE value.
My general algorithm is:
function process_subscription_update($data) {
    # get old min_id
    $min_tag_id = mysqli_fetch_object(mysqli_query($dbconnecti, sprintf("SELECT instagram_min_id+0 as instaid FROM xxxx WHERE xxxx=%d", $_GET['xxxx'])));
    $min_id = $min_tag_id->instaid;

    # make api call
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, 'https://api.instagram.com/v1/tags/'.$_GET['tag'].'/media/recent?client_id=xxxx&min_tag_id='.$min_id.($min_id==0?'&count=1':''));
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    $result = curl_exec($ch);
    curl_close($ch);
    $i = json_decode($result);
    if ($min_id == $i->pagination->min_tag_id) { exit; }

    # write new min_id to db
    record_min_id($i->pagination->min_tag_id);
    $data2 = $i->data;
    foreach ($data2 as $d) {
        process_instagram($d);
    }

    // debugging output: ****************
    $file = file_get_contents($_SERVER['DOCUMENT_ROOT'].'instagram/updates.txt');
    $foo = "\n";
    foreach ($_SERVER as $key_name => $key_value) {
        $foo .= $key_name . " = " . $key_value . "\n";
    }
    $fulldata = $file . "\n\n\n" . $result . "\n min_id = " . $min_id . $foo;
    $fulldata .= "\nTIME:" . $data[0]->time;
    $fulldata .= "\nOBJECT:" . $data[0]->object;
    $fulldata .= "\nCHANGED_ASPECT:" . $data[0]->changed_aspect;
    file_put_contents($_SERVER['DOCUMENT_ROOT'].'instagram/updates.txt', $fulldata);
    // end debugging output *************
}
I'd like to avoid checking if the instagram message id already exists in my db within the process_instagram function, and with the duplicates only coming 1/10th of a second apart, I don't know if that would work anyway.
Anybody else experience this and/or have a solution?
I solved this. I don't think there is anything I can do about receiving the duplicate notifications. So, when writing the Instagram entry to my db, I have a field for the Instagram id with a unique constraint on it. After doing the mysqli INSERT, I check whether errno equals 1062, and if it does, I exit.
mysqli_query($dbconnecti, "INSERT INTO xxx (foo, etc, instagram_id ...");
if ($dbconnecti->errno == 1062) { exit; }   // 1062 = duplicate-key error
...
// more script runs here if we don't have a duplicate.
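For reference, the unique constraint that makes the errno check work can be added once, up front. A minimal sketch using the same placeholder table and column names as above:
// One-time setup (placeholder names): enforce uniqueness of the Instagram id
// so a duplicate notification's INSERT fails with errno 1062 instead of
// inserting a second row.
mysqli_query($dbconnecti, "ALTER TABLE xxx ADD UNIQUE KEY uniq_instagram_id (instagram_id)");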

Zend_Db_Profiler not logging db connection time?

Following the sample code at http://framework.zend.com/manual/en/zend.db.profiler.html, I have set up db profiling in my Zend Framework app.
application.ini:
db.profiler.enabled = true
View Helper:
$totalTime = $profiler->getTotalElapsedSecs();
$queryCount = $profiler->getTotalNumQueries();
$longestTime = 0;
$longestQuery = null;
foreach ($profiler->getQueryProfiles() as $query) {
if ($query->getElapsedSecs() > $longestTime) {
$longestTime = $query->getElapsedSecs();
$longestQuery = $query->getQuery();
}
}
echo 'Executed ' . $queryCount . ' queries in ' . $totalTime . ' seconds' . "\n";
echo 'Average query length: ' . $totalTime / $queryCount . ' seconds' . "\n";
echo 'Queries per second: ' . $queryCount / $totalTime . "\n";
echo 'Longest query length: ' . $longestTime . "\n";
echo "Longest query: \n" . $longestQuery . "\n";
It works fine for select/insert/update/delete queries.
But I cannot find any way to get the profiler to show the time taken to initiate the actual db connection, despite the documentation implying that it does log this.
I suspect that Zend_Db simply does not log the connection to the db with the profiler.
Does anyone know what is going on here?
I am using the Oracle database adapter, and ZF 1.10.1
UPDATE:
I understand it is possible to filter the profiler output, such that it will only show certain query types, e.g. select/insert/update. There also appears to be an option to filter just the connection records:
$profiler->setFilterQueryType(Zend_Db_Profiler::CONNECT);
However, my problem is that the profiler is not logging the connections to begin with, so this filter does nothing.
I know this for a fact, because if I print the profiler object, it contains data for many different queries - but no data for the connection queries:
print_r($profiler);
// output
Zend_Db_Profiler Object
(
    [_queryProfiles:protected] => Array
        (
            [0] => Zend_Db_Profiler_Query Object
                (
                    [_query:protected] => select * from table1
                    [_queryType:protected] => 32
                    [_startedMicrotime:protected] => 1268104035.3465
                    [_endedMicrotime:protected] => 1268104035.3855
                    [_boundParams:protected] => Array
                        (
                        )
                )

            [1] => Zend_Db_Profiler_Query Object
                (
                    [_query:protected] => select * from table2
                    [_queryType:protected] => 32
                    [_startedMicrotime:protected] => 1268104035.3882
                    [_endedMicrotime:protected] => 1268104035.419
                    [_boundParams:protected] => Array
                        (
                        )
                )
        )

    [_enabled:protected] => 1
    [_filterElapsedSecs:protected] =>
    [_filterTypes:protected] =>
)
Am I doing something wrong - or has logging of connections just not been added to Zend Framework yet?
The profiler 'bundles' connection and other operations in with the general queries.
There are three ways you might examine the connection specifically:
Set a filter on the profiler during setup:
$profiler->setFilterQueryType(Zend_Db_Profiler::CONNECT);
Then the resultant profiles will only include 'connect' operations.
Specify a query type when you retrieve the Query Profiles:
$profiles = $profiler->getQueryProfiles(Zend_Db_Profiler::CONNECT);
Examine the query objects directly during the iteration:
foreach ($profiler->getQueryProfiles() as $query) {
    if ($query->getQueryType() == Zend_Db_Profiler::CONNECT &&
        $query->getElapsedSecs() > $longestConnectionTime) {
        $longestConnectionTime = $query->getElapsedSecs();
    }
}
You won't find great detail in there; it's logged just as a 'connect' operation along with the time taken.
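Putting option 2 together with the accessors already used in the question, a minimal sketch for printing connection timings might look like this. Note it assumes the profiler actually recorded a connect event; getQueryProfiles() returns false when nothing matches, which would also be consistent with what you're seeing:
$connects = $profiler->getQueryProfiles(Zend_Db_Profiler::CONNECT);
if ($connects) {
    foreach ($connects as $profile) {
        // Each profile is a Zend_Db_Profiler_Query, same as in the loop above
        echo 'Connected in ' . $profile->getElapsedSecs() . " seconds\n";
    }
} else {
    echo "No connect operations were profiled\n";
}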