Perl CGI gets parameters from a different request to the current URL

This is a weird one. :)
I have a script running under Apache 1.3, with the Apache::PerlRun option of mod_perl. It uses the standard CGI.pm module. It's a regularly used script on a busy server, accessed over https.
The URL is typically something like...
/script.pl?action=edit&id=47049
Which is then brought into Perl the usual way...
my $action = $cgi->param("action");
my $id = $cgi->param("id");
This has been working successfully for a couple of years. However, this week we started getting support requests from customers who were accessing this script and getting blank pages. We already had a line like the following, which puts the current URL into the form we use for customers to report a problem with a page...
$cgi->url(-query => 1);
And when we view the source of the page, the result of that command is the same URL, but with an entirely different query string.
/script.pl?action=login&user=foo&password=bar
A query string that we recognise as being from a totally different script elsewhere on our system.
However crazy it sounds, it seems that when users are accessing a URL with a query string, the query string that the script is seeing is one from a previous request on another script. Of course the script can't handle that action and outputs nothing.
We have some automated test scripts running to see how often this happens, and it's not every time. To throw some extra confusion into the mix, after an Apache restart, the problem seems to initially disappear completely only to come back later. So whatever is causing it is somehow relieved by a restart, but we can't see how Apache can possibly take the request from one user and mix it up with another.

This, it appears, is an interesting combination of Apache 1.3, mod_perl 1.31, CGI.pm and Apache::GTopLimit.
A bug was logged against CGI.pm in May last year: RT #57184
It also references the related question "CGI.pm params not being cleared".
CGI.pm registers a cleanup handler in order to clean up all of its cached globals (line 360):
$r->register_cleanup(\&CGI::_reset_globals);
Apache::GTopLimit (like Apache::SizeLimit mentioned in the bug report) also has a handler like this:
$r->post_connection(\&exit_if_too_big) if $r->is_main;
In versions of mod_perl before 1.31, post_connection and register_cleanup appear to push onto a stack of handlers, while in 1.31 it appears that the GTopLimit registration clobbers the CGI.pm entry. So if your GTopLimit function fires because the Apache process has grown too large, CGI.pm won't be cleaned up, leaving it free to return the same parameters the next time you use it.
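To make the clobbering concrete, here's a toy, self-contained Perl simulation of the difference (our own illustration, not actual mod_perl code):

#!/usr/bin/perl
use strict;
use warnings;

# Two toy registration schemes: "clobber" keeps only the last handler
# (what mod_perl 1.31 appears to do), "push" keeps them all.
my @handlers;
sub register_clobber { @handlers = ($_[0]) }
sub register_push    { push @handlers, $_[0] }

# Pretend these are CGI.pm's globals holding the previous request's params.
my %cgi_globals = ( action => 'login', user => 'foo' );

register_clobber( sub { %cgi_globals = () } );        # CGI.pm's reset handler
register_clobber( sub { print "size check ran\n" } ); # GTopLimit's handler

$_->() for @handlers;  # end-of-request cleanup: only the size check survives
print "leftover params: ", join( ',', sort keys %cgi_globals ), "\n";
# Prints "leftover params: action,user" - exactly the stale data the next
# request would see. Swap register_clobber for register_push and the
# globals are cleared as expected.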
The solution seems to be to change line 360 of CGI.pm to:
$r->push_handlers( 'PerlCleanupHandler', \&CGI::_reset_globals);
This explicitly pushes the handler onto the list.
Our restart of Apache temporarily resolved the problem because it reduced the size of all the processes and gave GTopLimit no reason to fire.
We assume it has appeared over the past few weeks because recent development work has increased the size of the Apache processes, giving GTopLimit a reason to fire.
All tests so far point to this being the issue, so fingers crossed it is!

Related

Moving from file-based tracing session to real time session

I need to log trace events during boot, so I configure an AutoLogger with all the required providers. But when my service/process starts, I want to switch to real-time mode so that the file doesn't grow out of control.
I'm using TraceEvent and I can't figure out how to do this move correctly and atomically.
The first thing I tried:
const int timeToWait = 5000;
using (var tes = new TraceEventSession("TEMPSESSIONNAME", @"c:\temp\TEMPSESSIONNAME.etl") { StopOnDispose = false })
{
    tes.EnableProvider(ProviderExtensions.ProviderName<MicrosoftWindowsKernelProcess>());
    Thread.Sleep(timeToWait);
}
using (var tes = new TraceEventSession("TEMPSESSIONNAME", TraceEventSessionOptions.Attach))
{
    Thread.Sleep(timeToWait);
    tes.SetFileName(null);
    Thread.Sleep(timeToWait);
    Console.WriteLine("Done");
}
Here I wanted to make sure that I could transfer the session to real-time mode. But instead, the file I got contained events from a 15s period instead of just 10s.
The same happens if I use new TraceEventSession("TEMPSESSIONNAME", @"c:\temp\TEMPSESSIONNAME.etl", TraceEventSessionOptions.Create) instead.
It seems that the following will cause the file to stop being written to:
using (var tes = new TraceEventSession("TEMPSESSIONNAME"))
{
    tes.EnableProvider(ProviderExtensions.ProviderName<MicrosoftWindowsKernelProcess>());
    Thread.Sleep(timeToWait);
}
But here I must re-enable all the providers, and according to the documentation, "if the session already existed it is closed and reopened (thus orphans are cleaned up on next use)". I don't understand the last part, about orphans. Obviously some events might occur in the time between closing, opening and subscribing to the events. Does this mean I will lose these events, or will I get them later?
I also found the following in the documentation of the library:
In real time mode, events are buffered and there is at least a second or so delay (typically 3 sec) between the firing of the event and the reception by the session (to allow events to be delivered in efficient clumps of many events)
Does this make the above code all right (well, unless the improbable happens and for some reason my thread is delayed for more than a second between creating the real-time session and starting to process the events)?
I could close the session and create a new, different one, but then I think I'd miss some events. Or I could open a new session and then close the file-based one, but then I might get duplicate events.
I couldn't find online any examples of moving from a file-based trace to a real-time trace.
I managed to contact the author of TraceEvent and this is the answer I got:
Re the behaviour of the 'auto-closing and restarting' feature: these are really questions about the OS (TraceEvent simply calls the underlying OS API). Just FYI, the deal about orphans is that it is EASY for your process to exit but leave a session going. This MAY be what you want, but often it is not, and so to make the common case 'just work' if you do Create (which is the default), it will close a session if it already existed (since you asked for a new one).
Experimentation of course is the touchstone of 'truth', but frankly I would expect that unusual combinations generally do NOT just work.
My recommendation is to keep it simple. You need to open a new session and close the original one. Yes, you will end up with duplicates, but you CAN filter them out (after all, they have IDENTICAL timestamps).
The other possibility is to use SetFileName in its intended way (moving from one file to another). This certainly solves your problem of file-size growth, and it is often a good way to deal with other scenarios (after all, you can start up your processing and start deleting files even as new files are being generated).
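For what it's worth, here is a minimal sketch of the author's first recommendation (start a new real-time session, then stop the file-based one and filter the overlap). The session names, the provider string and the lastFileEvent cutoff are placeholders, and this only illustrates the approach, not tested production code:

using System;
using Microsoft.Diagnostics.Tracing;
using Microsoft.Diagnostics.Tracing.Session;

class SwitchToRealTime
{
    static void Main()
    {
        // 1. Start an independent real-time session with the same providers.
        using (var rt = new TraceEventSession("REALTIME_SESSION"))
        {
            rt.EnableProvider("Microsoft-Windows-Kernel-Process");

            // 2. Now stop the original file-based (AutoLogger) session.
            using (var old = new TraceEventSession("TEMPSESSIONNAME",
                                                   TraceEventSessionOptions.Attach))
            {
                old.Stop();
            }

            // 3. Process real-time events, skipping anything at or before the
            //    last timestamp already present in the .etl file (the overlap
            //    produces duplicates with identical timestamps).
            DateTime lastFileEvent = DateTime.MinValue; // read this from the .etl
            rt.Source.Dynamic.All += e =>
            {
                if (e.TimeStamp > lastFileEvent)
                    Console.WriteLine("{0:o} {1}", e.TimeStamp, e.EventName);
            };
            rt.Source.Process(); // blocks until the session is stopped
        }
    }
}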

How can I get exchange rates after Google retired iGoogle?

I used this link to fetch the pound to euro exchange rate on a daily (nightly) basis:
http://www.google.com/ig/calculator?hl=en&q=1pound=?euro
This returned an array which I then stripped and used the data I needed.
Since the first of November they have retired iGoogle, resulting in that URL forwarding to: https://support.google.com/websearch/answer/2664197
Does anyone know an alternative URL that won't require me to rewrite the whole function? I'm sure Google didn't stop providing this service entirely.
I started getting cron-job errors today on this very issue, so I fell back to a prior URL I was using before I switched to the faster/more reliable iGoogle.
URL to hit programmatically (USD to EUR):
http://www.webservicex.net/CurrencyConvertor.asmx/ConversionRate?FromCurrency=USD&ToCurrency=EUR
Details about it:
http://www.webservicex.net/ws/WSDetails.aspx?CATID=2&WSID=10
It works for now, but it's prone to be slow at times, and it used to respond with an "Out of space" error at random. Just be sure to code in a way that handles that, and maybe run the cron four times a day instead of once; I run ours every hour.
Example code to get the rate out of the response (there is probably a more elegant way):
$ci = curl_init($accessurl);                 // $accessurl is the conversion URL above
curl_setopt($ci, CURLOPT_HTTPGET, 1);        // plain GET request
curl_setopt($ci, CURLOPT_RETURNTRANSFER, 1); // return the body instead of printing it
$rawreturn = curl_exec($ci);
curl_close($ci);
// The response is XML containing <double>...</double>; pull out the number.
$rate = trim(preg_replace("/.*<double[^>]*>([^<]*)<\/double[^>]*>.*/i","$1",$rawreturn));
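Since the service does fail intermittently, a retry wrapper is worth the few extra lines. This is just a sketch of one way to do it; the fetch_rate function name and the retry/back-off numbers are our own choices, not part of any API:

<?php
// Fetch a conversion rate from webservicex, retrying on bad responses.
function fetch_rate($from, $to, $retries = 3) {
    $url = "http://www.webservicex.net/CurrencyConvertor.asmx/ConversionRate"
         . "?FromCurrency=$from&ToCurrency=$to";
    for ($i = 0; $i < $retries; $i++) {
        $ci = curl_init($url);
        curl_setopt($ci, CURLOPT_RETURNTRANSFER, 1);
        $raw = curl_exec($ci);
        curl_close($ci);
        // Only accept a response that actually contains a numeric <double>.
        if ($raw !== false &&
            preg_match("/<double[^>]*>([0-9.]+)<\/double>/i", $raw, $m)) {
            return (float)$m[1];
        }
        sleep(5); // brief back-off before retrying
    }
    return null; // caller decides what to do (e.g. keep yesterday's rate)
}

$rate = fetch_rate("USD", "EUR");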

EF6/Code First: Super slow during the 1st query, but only in Debug

I'm using EF6 RC1 with the Code First strategy, without pre-generated views, and the problem is this:
If I compile and run the .exe application, it takes about 15 seconds to run the first query (that's okay, since I'm still working on the pre-generated views). But if I use Visual Studio 2013 Preview to debug the exact same application, it takes almost 2 minutes BEFORE running the first query:
Dim Context = New MyEntities()
Dim Query = From I in Context.Itens '' <--- The debug takes 2 minutes in here
Dim Item = Query.FirstOrDefault()
Is there a way to remove this extra time? Am I doing something wrong here?
P.S.: The context itself is not complicated; it's just large, with 200+ tables.
Edit: I found out that the problem is that at debug time EF appears to be generating the views, ignoring the pre-generated ones.
Using the source code from EF I discovered that the property:
IQueryProvider IQueryable.Provider
{
    get
    {
        return _provider ?? (_provider = new DbQueryProvider(
            GetInternalQueryWithCheck("IQueryable.Provider").InternalContext,
            GetInternalQueryWithCheck("IQueryable.Provider").ObjectQueryProvider));
    }
}
is where the time is being consumed. But this is strange, since it only takes that long in debug. Am I missing something here?
Edit: Found more info related to the question:
Using Process Monitor (by Sysinternals) I found that it's the 'devenv.exe' process that is consuming tons of time. To be more specific, it's consuming time in 'Thread Exit'; it repeats the Thread Exit stack 36 times. I don't know if this info is very useful, but I saved a '.csv' with the stack. (Edit: removed the '.csv' body; I can post it again in the comments if someone really thinks it's going to be useful, but it was confusing and too big.)
Edit: Installed VS2013 Ultimate and Entity Framework 6 RTM. Installed the Entity Framework Power Tools Beta 4 and used it to generate the views. Nothing changed... If I run the .exe it takes 20 seconds; if I 'Start' debugging it takes 120 seconds.
Edit: Created a small project to simulate the error: http://sdrv.ms/16pH9Vm
Just run the project inside the environment and directly through the .exe, click the button and compare the loading time.
This is a known performance issue in Lazy (which EF is using) when the debugger is attached. We are currently working on a fix (the current approach we are looking at is removing the use of Lazy). We hope to ship this fix in a patch release soon. You can track progress of this issue on our CodePlex site - http://entityframework.codeplex.com/workitem/1778.
More details on the coming 6.0.2 patch release that will include a fix are here - http://blogs.msdn.com/b/adonet/archive/2013/10/31/ef6-performance-issues.aspx
I don't know if you have found the solution, but in my case I had a similar issue which wasted close to a week of my time trying different suggestions. Finally, I found a solution by setting optimizeCompilations="true" in my web.config, and performance improved dramatically, from 15-30 seconds to about 2 seconds.
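For reference, optimizeCompilations lives on the <compilation> element under <system.web>; a minimal sketch of where the setting goes (the rest of the config is omitted):

<configuration>
  <system.web>
    <!-- Reuse previously compiled assemblies instead of recompiling
         the whole site after top-level changes. -->
    <compilation debug="true" optimizeCompilations="true" />
  </system.web>
</configuration>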

WWW::Mechanize in Perl, Script Gets Killed

I have written a Perl script which uses WWW::Mechanize to connect to a site, log in and then visit a few pages inside the site. It all works well; however, when I try to visit a large number of pages, the script gets killed. I am sure this has nothing to do with the HTTP server's configuration or its connection limits, because the script is running against my own site.
Here's a high level overview of my script:
$url="http://example.com";
$mech=WWW::Mechanize->new();
$mech->cookie_jar(HTTP::Cookies->new());
$mech->get($url);
Then I log in to the site using the form fields.
Now, once I am logged in, I connect to URLs within the site as follows:
$i is the iteration counter in a for loop
$internal_url="http://example.com/index.php?page=$i";
$mech->get($internal_url);
Then I perform some operations on the returned page (via $mech->content, using HTML::TreeBuilder::XPath), and go around the for loop again, connecting to a different internal URL, since the value of $i is incremented in every iteration.
As I said, it all works well. However, after about 180 pages, the script gets killed.
What could be the reason? I have tried multiple times.
I even added a $mech->delete; right before the end of the for loop to prevent any memory leak. However, the only issue is that the login session maintained by $mech would be destroyed as a result.
I have tried multiple times and this script always gets killed after visiting the same number of pages.
Thanks.
Try this code:
$mech = WWW::Mechanize->new();
$mech->stack_depth(0);

OR

$mech = WWW::Mechanize->new( stack_depth => 0 );
According to the docs: "Get or set the page stack depth. Use this if you're doing a lot of page scraping and running out of memory. A value of 0 means 'no history at all.' By default, the max stack depth is humongously large, effectively keeping all history."
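To put that in the context of the original loop, here's a sketch (the URLs and the page count are placeholders, and the login step is elided):

#!/usr/bin/perl
use strict;
use warnings;
use WWW::Mechanize;

my $mech = WWW::Mechanize->new( stack_depth => 0 );  # keep no page history
$mech->get('http://example.com');
# ... log in via the form fields here ...

for my $i ( 1 .. 500 ) {
    $mech->get("http://example.com/index.php?page=$i");
    my $html = $mech->content;  # hand off to HTML::TreeBuilder::XPath, etc.
    # ... process $html ...
}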

The SVNPoller is not triggered (warning in twistd.log)

I am not sure what is going on, but I am getting a weird issue with Buildbot.
The SVNPoller is configured as it should be (I checked various example config files), and when I run buildbot checkconfig it says that everything is fine... but it won't work at all.
If I trigger a build via the scheduler class it works fine: I can retrieve the source updates and build without problems (tried with a 1h timeframe).
The problem, though, is that the poller is not working, so even though I build each hour, the changes column stays empty. (I do get the changes for the various versions, so if I click on a build's details I can see the sourcestamp carrying the right, most recent revision every time I modify the codebase.) So if a build fails, I have no way of knowing who made the last change.
Another peculiar thing is that in the twistd.log i see this line:
Warning: no ChangeSources specified in c['change_source']
And I am not sure why it wouldn't work, since checkconfig does not raise any error.
The result of this, of course, is that the only thing built is the hourly one, leaving me without the poller, and without knowing who put code into each build.
This is the code for the poller:
c['change source'] = SVNPoller(
    svnurl="svn+ssh://user@svnserver.domain.com/svn/project/trunk",
    pollinterval=60*5,
    histmax=10,
    project='myproj',
    svnbin='/usr/bin/svn')
So far it looks good, so I am not really sure what is wrong here... why isn't the SVNPoller triggering any build?
Does anyone have suggestions about why this is happening? Is there any other way to get changes from an SVN server? I am a total newbie at Buildbot, and I am not really getting much out of the manual, which reads more like a textbook than a manual that shows you how to do stuff :)
Thanks!!!!!
Ok, silly me :) The problem is the missing underscore in change_source... once I added it, the problem was solved:
c['change_source'] = SVNPoller(svnurl=source_svn_url,
                               pollinterval=60,
                               histmax=10,
                               project='The_project',
                               svnbin='/usr/bin/svn')
This will poll the SVN repository at source_svn_url (just put in your svn:// path), check every minute to see if anyone has made changes, and keep 10 changes in the record list (any change after the 10th will not show up, so use histmax carefully if you do a lot of commits).
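And if you also want commits to trigger builds rather than relying only on an hourly scheduler, the poller's changes can be fed to a scheduler. A sketch of the wiring (Buildbot 0.8.x-era imports; the scheduler and builder names are placeholders):

from buildbot.changes.svnpoller import SVNPoller
from buildbot.schedulers.basic import SingleBranchScheduler
from buildbot.changes.filter import ChangeFilter

c['change_source'] = SVNPoller(svnurl=source_svn_url,
                               pollinterval=60,
                               histmax=10,
                               project='The_project',
                               svnbin='/usr/bin/svn')

# Start a build at most 30s after a change is detected, instead of hourly.
c['schedulers'].append(SingleBranchScheduler(
    name='on-commit',
    change_filter=ChangeFilter(project='The_project'),
    treeStableTimer=30,
    builderNames=['mybuilder']))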
Hope this helps anyone who uses Buildbot!