mysqli affected_rows and buffering - mysqli

I've read about a dozen posts on the behavior of affected_rows, and I'm here now with an affected_rows that returns 1 sometimes and 0 other times. I understand that it reports on changes: an UPDATE that changes nothing will return 0. But changes are being made - I can see them in the database. There are no errors, and all calls are checked.
I've tried calling store_result() before reading affected_rows. No help.
Somewhere I read that buffering affects its behavior. My question is:
can you 'flush' after an update? How? Here is some abbreviated code:
$conn = db_connect();
$sth = $conn->prepare($mysql_update);
$sth->bind_param("siii", $name, $age, $wt, $ht);
$sth->execute();
$sth->store_result();
$update_count = $sth->affected_rows;
PHP 5.6, MySQL 10.1.10-MariaDB (the XAMPP suite)
Additional: I don't close the statement before I read affected_rows.

You have confused num_rows, which is affected by unbuffered queries, with affected_rows, which is not (as there is nothing to buffer for an UPDATE at all). The latter is always available immediately after execute(), so if you are getting 0, then no rows were actually changed.
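For what it's worth, here is a minimal sketch showing that affected_rows is readable right after execute() with no store_result() at all (the table and column names are made up for illustration):
<?php
// Hypothetical schema for illustration: users(name, age, wt, ht).
$conn = new mysqli('localhost', 'user', 'pass', 'test');

$sth = $conn->prepare('UPDATE users SET age = ?, wt = ?, ht = ? WHERE name = ?');
$sth->bind_param('iiis', $age, $wt, $ht, $name);

$age = 40; $wt = 180; $ht = 70; $name = 'alice';
$sth->execute();

// No store_result() needed: an UPDATE produces no result set to buffer.
// 0 here means no row matched, or every matched row already held these values.
echo $sth->affected_rows, "\n";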


What's the most reliable method for cross-platform alarm signal handling or execution timeouts in Perl?

I've added advisory locking to Sqitch, using Postgres advisory locks and MySQL GET_LOCK(). This feature prevents more than one instance of Sqitch from deploying to a database at one time. This works great, but I wanted to add a lock timeout, too, so that one never finds a CI/CD process hung for hours or days because something went amiss.
MySQL's GET_LOCK() supports a timeout argument, but Postgres advisory locks do not. Since I thought it likely that other database engines would also not have timeouts, I thought it best to implement the timeout in Perl. Following the DBI manual, I used Sys::SigAction to set and handle the timeout:
# Try waiting for the lock.
require App::Sqitch::SigAction;
return $self->_locked(1) unless App::Sqitch::SigAction::timeout_call($wait, sub {
    $self->wait_lock
});
I also added tests to confirm it works with both MySQL and Postgres. So far so good.
Alas, Sys::SigAction does not work on Windows. I took a stab at testing it there, but since Windows Perl is not compiled with d_sigaction, which Sys::SigAction requires, I didn't get far. I tried the standard Perl alarm/$SIG{ALRM} pattern, but it failed to deliver the signal while waiting on the Postgres lock.
Which has led me here and to my question: what is the best cross-platform pattern for timing out some execution in Perl? Ideally it has a straightforward interface, works on *nix and Windows, and effectively handles breaking out of a database query.
I ended up ditching Sys::SigAction following discussion here and elsewhere, and instead switched to:
1. Letting the database handle the timeout, as MySQL's get_lock() does
2. Adding a simple interface for polling with exponential backoff and a timeout that engines can use to poll for a lock instead of waiting (similar to Retry::Backoff)
3. Switching the Postgres implementation to the async query support in DBD::Pg to send off the lock request, then using the backoff/timeout interface to check whether it has returned, canceling the query if it times out
I was especially pleased to realize I could do #3, as I originally used the timeout/backoff interface to poll with pg_try_advisory_lock(key), which just feels heavy. Better to asynchronously call pg_advisory_lock(key) and poll for its response. It looks like this:
sub wait_lock {
    my $self = shift;
    # Asynchronously request a lock with an indefinite wait.
    my $dbh = $self->dbh;
    $dbh->do(
        'SELECT pg_advisory_lock(75474063)',
        { pg_async => DBD::Pg::PG_ASYNC() },
    );
    # Use _timeout to periodically check for the result.
    return 1 if $self->_timeout(sub { $dbh->pg_ready && $dbh->pg_result });
    # Timed out, cancel the query and return false.
    $dbh->pg_cancel;
    return 0;
}
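The _timeout helper isn't shown here; as a rough sketch of the backoff/timeout interface from point 2 (an approximation only, not the actual Sqitch code), it could look something like this:
use Time::HiRes qw(sleep);   # fractional-second sleeps

# Approximate sketch, not Sqitch's actual implementation: call $code
# repeatedly until it returns true or lock_timeout seconds have elapsed,
# sleeping a little longer between attempts.
sub _timeout {
    my ($self, $code) = @_;
    my $sleep   = 0.1;
    my $elapsed = 0;
    while ($elapsed < $self->lock_timeout) {
        return 1 if $code->();
        sleep $sleep;
        $elapsed += $sleep;
        $sleep   *= 2;       # exponential backoff
    }
    # One final check at the deadline before giving up.
    return $code->() ? 1 : 0;
}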
Of course the MySQL implementation is simpler, since get_lock() does all the work:
sub wait_lock {
    my $self = shift;
    $self->dbh->selectcol_arrayref(
        q{SELECT get_lock('sqitch working', ?)},
        undef, $self->lock_timeout
    )->[0]
}

Update TYPO3 7.6 to 8.7, can't get the frontend to work in a local test environment with XAMPP

I am working on updating a TYPO3 7.6 site to 8.7. I am doing this on my local machine with XAMPP on Windows, with PHP 7.2.
I got the backend working. It needed some manual work in the DB, like changing the CType in tt_content for my own content elements as well as filling the colPos.
However, when I call a page on the frontend, all I get is a timeout:
Fatal error: Maximum execution time of 60 seconds exceeded in
C:\xampp\htdocs\typo3_src-8.7.19\vendor\doctrine\dbal\lib\Doctrine\DBAL\Driver\Mysqli\MysqliStatement.php on line 92
(This does not change if I set max_execution_time to 300.)
Edit: I added an echo just before line 92 in the above file; this is the function:
public function __construct(\mysqli $conn, $prepareString)
{
    $this->_conn = $conn;
    echo $prepareString."<br />";
    $this->_stmt = $conn->prepare($prepareString);
    if (false === $this->_stmt) {
        throw new MysqliException($this->_conn->error, $this->_conn->sqlstate, $this->_conn->errno);
    }
    $paramCount = $this->_stmt->param_count;
    if (0 < $paramCount) {
        $this->types = str_repeat('s', $paramCount);
        $this->_bindedValues = array_fill(1, $paramCount, null);
    }
}
What I get is the following statement printed 1000s of times, always exactly the same:
SELECT `tx_fed_page_controller_action_sub`, `t3ver_oid`, `pid`, `uid` FROM `pages` WHERE (uid = 0) AND ((`pages`.`deleted` = 0) AND (`pages`.`hidden` = 0) AND (`pages`.`starttime` <= 1540305000) AND ((`pages`.`endtime` = 0) OR (`pages`.`endtime` > 1540305000)))
Note: I don't have any entry in pages with uid=0, so I am really not sure what this query is good for. Does there need to be a page with uid=0?
I enabled slow-query logging in MySQL, but nothing gets logged by it. I don't get any additional PHP error, nor do I get a log entry in TYPO3.
So right now I am a bit stuck and don't know how to proceed.
I enabled general query logging in MySQL, and when I call a page on the frontend I see this SQL query executed over and over again:
SELECT `tx_fed_page_controller_action_sub`, `t3ver_oid`, `pid`, `uid` FROM `pages` WHERE (uid = 0) AND ((`pages`.`deleted` = 0) AND (`pages`.`hidden` = 0) AND (`pages`.`starttime` <= 1540302600) AND ((`pages`.`endtime` = 0) OR (`pages`.`endtime` > 1540302600)))
Executing this query manually gives back an empty result (I don't have any entry in pages with uid=0). I don't know if that means anything.
What options do I have? How can I find what's missing / where the error is?
First: give your PHP more time to run.
In the php.ini configuration, increase the max execution time to 240 seconds.
Be aware that 240 seconds are the recommendation for TYPO3 even in production mode. If you start the install tool you can run a system check and get information about configuration settings that might need optimization.
Second: avoid development mode and use production mode.
Execution is faster, but you lose the option to debug.
Debugging always costs more time and more memory to prepare all that information; maybe 240 seconds are not enough, and you may even need more memory.
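For reference, the relevant php.ini directives would look something like this (the memory value is only an example, not a TYPO3 requirement):
; php.ini - give the upgrade scripts more room while debugging
max_execution_time = 240
; memory_limit = 256M   ; example value only; raise it if debugging needs more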
The field tx_fed_page_controller_action_sub comes from an extension; it is not part of the core. Most likely you have flux and fluidpages installed in your system.
Try to deactivate those extensions and proceed without them; reintegrate them later if you still need them. A timeout often means that there is some kind of recursion going on. From my experience with flux, it is possible for a content element to have itself set as its own flux parent, creating an infinite rendering loop that causes a fatal error once max_execution_time is exceeded.
So, in your case I'd try to find the record that is causing this (it seems to be a page record) and/or the code that initiates the query. You do not need to debug in Doctrine itself :)
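If you want to check for that kind of self-reference directly in the database, a query along these lines can help; note that the parent column name (tx_flux_parent) is an assumption and depends on the installed flux version, so verify it against your tt_content schema first:
-- Hypothetical check: content elements recorded as their own flux parent
-- (tx_flux_parent is an assumed column name; adjust it to your schema).
SELECT uid, pid, CType, tx_flux_parent
FROM tt_content
WHERE tx_flux_parent = uid
  AND deleted = 0;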

Moving from a file-based tracing session to a real-time session

I need to log trace events during boot so I configure an AutoLogger with all the required providers. But when my service/process starts I want to switch to real-time mode so that the file doesn't explode.
I'm using TraceEvent and I can't figure out how to do this move correctly and atomically.
The first thing I tried:
const int timeToWait = 5000;
using (var tes = new TraceEventSession("TEMPSESSIONNAME", @"c:\temp\TEMPSESSIONNAME.etl") { StopOnDispose = false })
{
    tes.EnableProvider(ProviderExtensions.ProviderName<MicrosoftWindowsKernelProcess>());
    Thread.Sleep(timeToWait);
}
using (var tes = new TraceEventSession("TEMPSESSIONNAME", TraceEventSessionOptions.Attach))
{
    Thread.Sleep(timeToWait);
    tes.SetFileName(null);
    Thread.Sleep(timeToWait);
    Console.WriteLine("Done");
}
Here I wanted to make sure that I can transfer the session to real-time mode. But instead, the file I got contained events from a 15s period instead of just 10s.
The same happens if I use new TraceEventSession("TEMPSESSIONNAME", @"c:\temp\TEMPSESSIONNAME.etl", TraceEventSessionOptions.Create) instead.
It seems that the following will cause the file to stop being written to:
using (var tes = new TraceEventSession("TEMPSESSIONNAME"))
{
    tes.EnableProvider(ProviderExtensions.ProviderName<MicrosoftWindowsKernelProcess>());
    Thread.Sleep(timeToWait);
}
But here I must re-enable all the providers, and according to the documentation, "if the session already existed it is closed and reopened (thus orphans are cleaned up on next use)". I don't understand the last part about orphans. Obviously some events might occur in the time between closing, opening, and subscribing to the events. Does this mean I will lose these events, or will I get them later?
I also found the following in the documentation of the library:
In real time mode, events are buffered and there is at least a second or so delay (typically 3 sec) between the firing of the event and the reception by the session (to allow events to be delivered in efficient clumps of many events)
Does this make the above code alright (well, unless the improbable happens and for some reason my thread is delayed for more than a second between creating the real-time session and starting processing the events)?
I could close the session and create a new different one but then I think I'd miss some events. Or I could open a new session and then close the file-based one but then I might get duplicate events.
I couldn't find online any examples of moving from a file-based trace to a real-time trace.
I managed to contact the author of TraceEvent and this is the answer I got:
Re the 'auto-closing and restarting' feature, it is really a question about the OS (TraceEvent simply calls the underlying OS API). Just FYI, the deal about orphans is that it is EASY for your process to exit but leave a session going. This MAY be what you want, but often it is not, and so to make the common case 'just work' if you do Create (which is the default), it will close a session if it already existed (since you asked for a new one).
Experimentation of course is the touchstone of 'truth', but frankly expecting unusual combinations to just work is generally NOT true.
My recommendation is to keep it simple. You need to open a new session and close the original one. Yes, you will end up with duplicates, but you CAN filter them out (after all, they have IDENTICAL timestamps).
The other possibility is to use SetFileName in its intended way (from one file to another). This certainly solves your problem of file size growth, and is often a good way to deal with other scenarios (after all, you can start up your processing and start deleting files even as new files are being generated).
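As a rough sketch of that first recommendation (an illustration only: the new session name and the timestamp-based filtering are my assumptions, not the author's code), the switch could look like this:
using System;
using Microsoft.Diagnostics.Tracing;
using Microsoft.Diagnostics.Tracing.Session;

// 1. Start the real-time session first (no file name => real-time mode),
//    so there is no gap while the old session is still writing the file.
var rtSession = new TraceEventSession("MYREALTIMESESSION");
rtSession.EnableProvider("Microsoft-Windows-Kernel-Process");   // same provider as above

// 2. Stop the original file-based session; events arriving in between
//    now exist in both the file and the real-time stream.
using (var fileSession = new TraceEventSession("TEMPSESSIONNAME", TraceEventSessionOptions.Attach))
{
    fileSession.Stop();
}

// 3. Find the timestamp of the last event that made it into the ETL file...
DateTime lastFileEvent = DateTime.MinValue;
using (var source = new ETWTraceEventSource(@"c:\temp\TEMPSESSIONNAME.etl"))
{
    source.Dynamic.All += evt => lastFileEvent = evt.TimeStamp;
    source.Process();
}

// 4. ...and drop real-time events at or before that timestamp as duplicates.
rtSession.Source.Dynamic.All += evt =>
{
    if (evt.TimeStamp <= lastFileEvent)
        return;                        // already captured in the file
    Console.WriteLine(evt.EventName);  // handle the event
};
rtSession.Source.Process();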

Perl CGI gets parameters from a different request to the current URL

This is a weird one. :)
I have a script running under Apache 1.3, with the Apache::PerlRun option of mod_perl. It uses the standard CGI.pm module. It's a regularly accessed script on a busy server, accessed over https.
The URL is typically something like...
/script.pl?action=edit&id=47049
Which is then brought into Perl the usual way...
my $action = $cgi->param("action");
my $id = $cgi->param("id");
This has been working successfully for a couple of years. However we started getting support requests this week from our customers who were accessing this script and getting blank pages. We already had a line like the following that put the current URL into a form we use for customers to report an issue about a page...
$cgi->url(-query => 1);
And when we view the source of the page, the result of that command is the same URL, but with an entirely different query string.
/script.pl?action=login&user=foo&password=bar
A query string that we recognise as being from a totally different script elsewhere on our system.
However crazy it sounds, it seems that when users are accessing a URL with a query string, the query string that the script is seeing is one from a previous request on another script. Of course the script can't handle that action and outputs nothing.
We have some automated test scripts running to see how often this happens, and it's not every time. To throw some extra confusion into the mix, after an Apache restart, the problem seems to initially disappear completely only to come back later. So whatever is causing it is somehow relieved by a restart, but we can't see how Apache can possibly take the request from one user and mix it up with another.
This, it appears, is an interesting combination of Apache 1.3, mod_perl 1.31, CGI.pm and Apache::GTopLimit.
A bug was logged against CGI.pm in May last year: RT #57184
Which also references CGI.pm params not being cleared?
CGI.pm registers a cleanup handler in order to clean up all of its cache (line 360):
$r->register_cleanup(\&CGI::_reset_globals);
Apache::GTopLimit (like Apache::SizeLimit mentioned in the bug report) also has a handler like this:
$r->post_connection(\&exit_if_too_big) if $r->is_main;
In pre-1.31 mod_perl, post_connection and register_cleanup appear to push onto a stack, while in 1.31 it appears that the GTopLimit handler clobbers the CGI.pm entry. So if your GTopLimit function fires because the Apache process has got too large, then CGI.pm won't be cleaned up, leaving it open to returning the same parameters on the next request that uses it.
The solution seems to be to change line 360 of CGI.pm to:
$r->push_handlers( 'PerlCleanupHandler', \&CGI::_reset_globals);
Which explicitly pushes the handler onto the list.
Our restart of Apache temporarily resolved the problem because it reduced the size of all the processes and gave GTopLimit no reason to fire.
And we assume the problem has appeared over the past few weeks because new development has increased the size of the Apache processes beyond what they were before.
All tests so far point to this being the issue, so fingers crossed it is!

How to avoid error maximum open_cursor exceeded when using Class::DBI

(Update to answer Jonathan Leffler's question below):
We're running Perl 5.8.7 and Oracle 11.1.0.7.0.
Due to company policy, developers have no real control over software upgrades. Getting a proposal to upper management takes months to be followed up (if it is approved at all) - I guess that's not an unusual situation at other companies either.
I inherited the program from someone who left the company, and found the warning about "issuing rollback() ..." in the application log file. The actual problem, "maximum open_cursor exceeded", was found after I ran DBI_TRACE=2=/tmp/trace.log program_name.pl.
Looking at the values of $dbh->{ActiveKids}, $dbh->{Kids}, and $dbh->{CachedKids}, I assume the maximum number of open cursors is 50, as the error happens after it reaches 50.
Our legacy production codes are using these modules:
DBI - 1.48
Ima::DBI - 0.33
Class::DBI - 0.96
Class::DBI::Oracle - 0.51
DBD::Oracle - 1.16
For some odd policy reason, upgrading the modules to newer versions is not possible :(
The application relies on CDBI to handle relationships across a large number of tables. A simplified snippet of the code is below:
JOB:
foreach my $job (@jobs) {
    my @records = $job->record;
    RECORD:
    foreach my $record (@records) {
        my @datas = $record->data;
        DATA:
        foreach my $data (@datas) {
            ....
        }
    }
}
where each of @jobs, $record, and $data is an object tied to a table, and the innermost loop calls several other triggers.
Somewhere after several loops I get an Oracle error, maximum open_cursor exceeded, followed by this error from CDBI: issuing rollback() for database handle being DESTROY'd without explicit disconnect.
I can work around it by undef-ing the DBI CachedKids in the outermost loop, with:
# somewhere during initialization
$self->{_this_dbh} = __PACKAGE__->db_Main();
....
JOB:
foreach my $job (@jobs) {
    RECORD: ....
    DATA: ....
    $self->{_this_dbh}->{CachedKids} = undef;
}
Is that the proper way to do it?
Or does CDBI support a way to clear statement handles, the same way as DBI's $sth->finish()?
Thanks.
At some point, you will have to explain why you cannot upgrade to more nearly current versions of the software. You didn't mention which version of Perl you are using, or which version of Oracle; somehow, I suspect that it is neither 5.10.1 nor 11gR2.
Current versions:
Class::DBI 3.0.17
Class::DBI::Oracle 0.51
DBI 1.609 (version 1.48 is from 2005)
DBD::Oracle 1.23 (version 1.16 is from 2004)
Ima::DBI 0.35
What changed recently? Why are you suddenly finding problems in a piece of software that was, presumably, very stable? Is this new code?
With plain DBI, when you undef a statement handle (by having it go out of scope, for example), then the resources associated with it are released - more or less noisily. However, there is enough infrastructure between Class::DBI and DBI that it is hard to tell how this might map.
Have you worked out what the limit on open cursors actually is?
Have you worked out whether you've opened enough cursors to actually exceed that limit?
Have you tried running with DBI_TRACE set in the environment? A value such as 3 will tell you a fair amount about what is going on - maybe too much. It would show whether cursors are being released properly or not.
Have you tried reducing the number of tables manipulated in a single session?
Have you considered disconnecting and reconnecting between manipulating tables?
Is there a way to get to the statement handle corresponding to the Class::DBI abstractions, so that you can in fact execute $sth->finish()?
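If you want to experiment with that last idea, here is a rough sketch (the base-class name is hypothetical, and whether finish() releases enough to avoid the error is exactly what you would be testing):
use strict;
use warnings;

# Class::DBI sits on top of Ima::DBI, which caches statement handles on the
# underlying DBI handle; db_Main() returns that handle, as in your snippet.
my $dbh = My::CDBI::BaseClass->db_Main();    # hypothetical CDBI base class

# Instead of discarding the whole cache with CachedKids = undef, call finish
# on each cached statement handle and see whether that keeps the open-cursor
# count under control.
if ( my $cache = $dbh->{CachedKids} ) {
    for my $sth ( values %$cache ) {
        $sth->finish if $sth->{Active};
    }
}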