Devel::Cover not collecting any data after startup with mod_perl2

I want to check Selenium's coverage of my web app, which runs on mod_perl2 on CentOS 6.5.
So I installed Devel::Cover, put use Devel::Cover; in my httpd.conf's <Perl> section, and restarted Apache. It immediately writes some coverage data from my custom ErrorLogging.pm module, but then if I hit any of the app's pages via a browser, nothing further happens.
I also tried changing this in httpd.conf:
StartServers 1
MinSpareServers 1
MaxSpareServers 1
...just to make sure it'd be collecting all data from the same process. However, after restarting Apache and trying again, the result was the same.
UPDATE: I also tried launching httpd with -D ONE_PROCESS as mentioned in this thread, but the result was more or less the same, except that I had to Ctrl+C the service when done testing, because it takes over the terminal, and at that point it segfaulted. But the coverage database in the end was virtually identical.
The docs don't mention anything different that I can see. How can I get Devel::Cover to record coverage data for code execution that happens in response to actual browser requests via mod_perl2?
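In case it helps, here is roughly what I have, plus one variation I am experimenting with (the -db path is arbitrary; the MaxRequestsPerChild 1 line is an unconfirmed idea based on Devel::Cover writing its database when a process exits, so forcing each child to exit after one request should force a flush):
<Perl>
  # Collect coverage into an explicit database directory.
  use Devel::Cover (-db => '/tmp/cover_db', -silent => 1);
</Perl>

StartServers        1
MinSpareServers     1
MaxSpareServers     1
# Experiment: make each child exit (and flush coverage) after one request.
MaxRequestsPerChild 1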

Related

Running Flask at startup as a Service in Windows won't work in background

Before explaining what my problem is, please know that I have looked for solutions on similar topics, but none of them seems to work or even to correspond to my problem.
What I am trying to do:
I have this python code on multiple files that I run with flask with the following command:
python -m flask run --host=0.0.0.0
So far, everything works, but I would like this code to run automatically every time the computer boots. In the future this will be used on mini PCs without any graphical interface or human intervention.
Since I need to do some configuration checks before running the web server, I've created a PowerShell script that ends by launching Flask (using the previous command).
So far, everything works too. Now we're coming to the problem:
I'd like this script to run when I boot the machine. One specificity: everything needs to work with Administrator privileges, on the local system, without any interaction.
I've tried scheduled tasks, but Flask won't run even though the rest of the script works (creating folders and other things).
OK, it's not a big deal, I have other ways to do it, so I've created a Windows service in C# to run the script at startup on the local system.
The script works, I've checked the privileges too, everything's fine, but when it reaches the flask command that is supposed to start the server, nothing happens.
It's the same thing if I run Flask using "pythonw", which is supposed to run Python as a background process.
What the problem seems to be:
Well, as long as I run Flask with either a command prompt or a PowerShell terminal open, everything works great. But if, one way or another, I run the script as a background process, it won't work.
Normally it takes around 30 seconds for Flask to start up. Here, if I have the script create a folder right after the flask command (as a test), the folder is created almost instantly, which means the Flask process is killed immediately.
The problem doesn't seem to come from the service itself, but really from Windows killing the process, and I don't know why.
I'm running out of ideas, so if you guys have anything I could try, it would really help me.
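For what it's worth, the tail of my script is essentially just the flask command; one variation I am considering (untested, paths are stand-ins) is launching it through Start-Process with -Wait and redirected output, so the child stays attached to the script and has somewhere to write without a console:
# Launch Flask attached to this script; -Wait keeps the script (and
# the service's process tree) alive, and the redirects give Flask
# output streams even though there is no console. Paths are stand-ins.
$env:FLASK_APP = "C:\myapp\app.py"
Start-Process -FilePath "python.exe" `
    -ArgumentList "-m flask run --host=0.0.0.0" `
    -WorkingDirectory "C:\myapp" `
    -NoNewWindow -Wait `
    -RedirectStandardOutput "C:\myapp\flask.out.log" `
    -RedirectStandardError "C:\myapp\flask.err.log"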

Why do Selenium tests behave differently on different machines?

I couldn't find much information on Google regarding this topic. Below, I have provided three results from the same Selenium tests. Why am I getting different results when running the tests from different places?
INFO:
So our architecture: Bitbucket, Bamboo Stage 1 (Build, Deploy to QA), Bamboo Stage 2 (start Amazon EC2 instance "Test", run tests from Test against recently deployed QA)
Using Chrome Webdriver.
For all three of the variations I am using the same QA URL that our application is deployed on.
I am running all tests with Parallelizable set per fixture
The EC2 instance is running Windows Server 2012 R2 with the Chrome browser installed
I have made sure that the test solution has been properly deployed to the EC2 "test" instance. It is indeed the exact same solution and builds correctly.
First, Local:
Second, from EC2, via an SSM script that invokes the tests:
Note that the PowerShell script calls nunit3-console.exe just as it would be used from the command line in my third example.
Lastly, RDP in on EC2 and run tests from the command line:
This has me perplexed... Any reasons why Selenium behaves differently on different machines?
This really should be a comment, but I can't comment yet so...
I don't know enough about the application you are testing to say for sure, but this seems like something I've seen testing the application I'm working on.
I have seen two issues. First, Selenium is checking for the element before it's created. Sometimes it works and sometimes it fails, it just depends on how quickly the page loads when the test runs. There's no rhyme or reason to it. Second, the app I'm testing is pretty dumb. When you touch a field, enter data and move on to the next, it, effectively, posts all editable fields back to the database and refreshes all the fields. So, Selenium enters the value, moves to the next field and pops either a stale element error or can't find element error depending on when in the post/refresh cycle it attempts to interact with the element.
The solution I have found is moderately ugly. I tried wait-until, but because it's the same element name, the element is already visible and is grabbed immediately, which returns a stale element. As a result, the only thing I have found is that by using explicit waits between calls, I can get it to run correctly and consistently. Below is an example of what I have to do with the app I'm testing. (I am aware that I can condense the code; I am working within the style manual for my company.)
// Explicit pause to let the app's post/refresh cycle settle before
// touching the field.
Thread.Sleep(2000);
// Note: the attribute axis in XPath is @id, not #id.
By nBaseLocator = By.XPath("//*[@id='attr_seq_1240']");
IWebElement baseRate = driver.FindElement(nBaseLocator);
baseRate.SendKeys(Keys.Home + xBaseRate + Keys.Tab);
If this doesn't help, please tell us more about the app and how it's functioning so we can help you find a solution.
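If you do want to retry the wait-based route, one pattern that sometimes works with this kind of post/refresh cycle is to keep a reference to the old element, wait for it to go stale, and then re-find it. A sketch (the locator is the one above, the 10-second timeout is arbitrary, and it needs OpenQA.Selenium.Support.UI for WebDriverWait):
// Wait for the post/refresh cycle by watching the old element
// reference go stale, then re-find the fresh element.
By nBaseLocator = By.XPath("//*[@id='attr_seq_1240']");
IWebElement oldElement = driver.FindElement(nBaseLocator);

var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(10));
wait.Until(d =>
{
    try { var tag = oldElement.TagName; return false; }      // still attached
    catch (StaleElementReferenceException) { return true; }  // replaced
});

IWebElement baseRate = driver.FindElement(nBaseLocator);
baseRate.SendKeys(Keys.Home + xBaseRate + Keys.Tab);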
@Florent B. Thank you!
EDIT: This ended up not working...
The tests still run differently when called remotely with a PowerShell script, but they run correctly locally, both on the EC2 instance and on my machine.
The headless command switch allowed me to replicate my failed tests locally.
Next, I found out that a headless Chrome browser is used automatically when the tests run via script on an EC2 instance, so the tests were indeed running and the errors were valid.
Finally, I figured out that the screen size was indeed the culprit: it was stuck at 600x400.
So after many tries, the only usable screen-size option for Windows, C#, and ChromeDriver 2.32 is to set your WebDriver options when you initialize your driver:
// Run headless Chrome with an explicit window size; without it,
// headless defaults to a small viewport and element lookups fail.
ChromeOptions chromeOpt = new ChromeOptions();
chromeOpt.AddArgument("--headless");
chromeOpt.AddArgument("--window-size=1920,1080");
chromeOpt.AddArgument("--disable-gpu");
webDriver = new ChromeDriver(chromeOpt);
END EDIT:
Just to update:
Screen size is large enough.
Still attempting to solve the issue. Has anyone else run into this?
AWS SSM Command -> PowerShell -> run Selenium tests with Start-Process -> any test that requires an element fails with ElementNotFound or ElementNotVisible exceptions.
Using POM for the tests. The FindsBy attributes in C# are not finding elements.
Running the tests locally on the EC2 instance works fine from cmd, PowerShell, and PowerShell ISE.
The tests do not work correctly when executed via the AWS SSM Command. I cannot find any resources to fix this problem.

Profiling foswiki with NYTProf results in incomplete profile data

I have a Foswiki installation which is really slow (~60 seconds for an uncached page). I've tried to profile the installation with NYTProf, according to http://foswiki.org/Support/NYTProfDebugging, with the following command:
> sudo -u www-data NYTPROF="file=/tmp/nytprof.out:addpid=1:endatexit=1" perl -wTd:NYTProf view -topic Some.Topic -username MyUsername
The script fails with exit code 141 when I run it with the profiler. If I run it without the profiler (removing d:NYTProf from the command line) it exits successfully and produces output.
After the profiling I've gotten a bunch of profile files in my /tmp directory:
nytprof.out.[841-1860]
But when I try to merge these files, I get an error for the first file:
> nytprofmerge nytprof.out.*
Profile data incomplete, inflate error -5 ((null)) at end of input file, perhaps the process didn't exit cleanly or the file has been truncated (refer to TROUBLESHOOTING in the documentation)
I can merge the files without the first one, but the results are useless: they show only 87 calls to Foswiki::Sandbox::CORE:open and that's it.
Is there any chance to get a valid profiling result? Or is there another tool I can use in this case?
I'm not sure why you can't get NYTProf to work; we've used it to figure out some performance issues in Foswiki 2.0.2, which have been partially addressed in Foswiki 2.0.3. There are a couple of issues going on, but one major cause is our conversion to Unicode internally, combined with some Perl regex issues in Perl versions before 5.20: https://rt.perl.org/Public/Bug/Display.html?id=66852
Foswiki 2.0.3 made the following performance updates:
Changed some heavily called internal functions from regular expressions to index().
Changed EditRowPlugin to generate less HTML that requires processing by regular expressions in the rendering module.
Made some other improvements to reduce excessive re-reading of topics.
If 2.0.3 doesn't help significantly, check whether the problem pages have large tables in them. If so, you might try disabling EditRowPlugin and using EditTablePlugin instead, as sketched below.
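For example, in LocalSite.cfg (a sketch of the usual plugin toggles; normally you would flip these from the configure web interface):
# Disable EditRowPlugin and enable EditTablePlugin instead.
$Foswiki::cfg{Plugins}{EditRowPlugin}{Enabled}   = 0;
$Foswiki::cfg{Plugins}{EditTablePlugin}{Enabled} = 1;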
Other than that, you might try our official support channel #foswiki on IRC, http://irclogs.foswiki.org/
The script fails with an exit code 141 when I run it with profiler.
That suggests the process received a SIGPIPE signal. The sigexit option may help.
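Something like this, for example (an untested sketch; the exact signal list is an assumption, see the sigexit entry under the NYTPROF options in the documentation):
> sudo -u www-data NYTPROF="file=/tmp/nytprof.out:addpid=1:sigexit=int,pipe" perl -wTd:NYTProf view -topic Some.Topic -username MyUsername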
If I run it without profiler ... it exits successful and producing output.
You're using sudo so permissions might be an issue, but that's just a guess. You'll need to dig deeper to confirm if a SIGPIPE is being received and why.
I'm not familiar with foswiki. Perhaps someone in that community could be more helpful.

Spawn external process from a CGI script

I've searched and found several questions very similar to mine, but nothing in those answers has worked for me yet.
I have a perl CGI script that accepts a file upload. It looks at the file and determines how it should be processed and then calls a second non-CGI script to do the actual processing. At least, that's how it should work.
This is running on Windows with Apache 2.0.59 and ActiveState Perl 5.8.8. The file uploading part works fine but I can't seem to get the upload.cgi script to run the second script that does the actual processing. The second script doesn't communicate in any way with the user that sent the file (other than it sends an email when it's done). I want the CGI script to run the second script (in a separate process) and then 'go away'.
So far I've tried exec, system (passing a 1 as the first parameter), system (without using 1 as first parameter and calling 'start'), and Win32::Process. Using system with 1 as the first parameter gave me errors in the Apache log:
'1' is not recognized as an internal or external command,\r, referer: http://my.server.com/cgi-bin/upload.cgi
Nothing else has given me any errors but they just don't seem to work. The second script logs a message to the Windows event log as one of the first things it does. No log entry is being created.
It works fine on my local machine under Omni webserver but not on the actual server machine running Apache. Is there an Apache config that could be affecting this? The upload.cgi script resides in the d:\wwwroot\test\cgi-bin dir but the other script is elsewhere on the same machine (d:\wwwroot\scripts).
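For illustration, a detached-process sketch of what I'm attempting with Win32::Process (the perl path, worker script name, and $uploaded_file value are stand-ins):
use strict;
use warnings;
use Win32;
use Win32::Process;

# Launch the processing script detached from the CGI so the CGI can
# finish its response immediately.
my $uploaded_file = 'd:\\uploads\\example.dat';   # stand-in
my $perl          = 'C:\\Perl\\bin\\perl.exe';    # stand-in
my $proc;
Win32::Process::Create(
    $proc,
    $perl,
    qq{perl "d:\\wwwroot\\scripts\\process_upload.pl" "$uploaded_file"},
    0,                                        # don't inherit handles
    DETACHED_PROCESS | NORMAL_PRIORITY_CLASS,
    '.',
) or die Win32::FormatMessage(Win32::GetLastError());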
There may be a security-related problem, but it should be apparent in the logs.
This won't exactly answer your question, but it may give you other implementation ideas where you will not face potential security and performance problems.
I don't quite like mixing my web server environment with system() calls. Instead, I create an application server (with POE, usually) which accepts the relevant parameters from the web server, processes the job, and notifies the web server upon completion. (Well, the notification part may not be straightforward, but that's another topic.)
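A lighter-weight variant of the same hand-off idea (not POE; the spool directory and job payload below are assumptions): the CGI only enqueues a job file and returns, while a separate daemon or scheduled task watches the directory, does the processing, and sends the completion email.
use strict;
use warnings;
use File::Temp qw(tempfile);

# In upload.cgi: enqueue the job and return to the browser at once.
my ($fh, $jobfile) = tempfile(
    'job-XXXXXX',
    DIR    => 'd:/wwwroot/spool',          # assumed spool directory
    SUFFIX => '.job',
);
print $fh "d:/uploads/example.dat\n";      # stand-in: whatever the worker needs
close $fh;
# A separate worker process polls d:/wwwroot/spool for *.job files,
# processes each one, and deletes it when done.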

PHP Slow to process soap request via browser but fine on the command line

I am trying to connect to an external SOAP service using PHP and have written a small php test script that just connects to the service and performs a simple request to check everything is working.
This all works correctly, but when I run it via a browser request it is very slow, taking somewhere in the region of 40 seconds to establish the initial connection. When I run the same request using the exact same script on the command line, it goes through straight away.
Does anyone have any ideas as to why this might be?
Cheers
PHP caches the WSDL in /tmp. If you run from the command line first, the cache file will be owned by whatever user you ran the script as, and Apache won't be able to read the cache. The WSDL will then have to be downloaded and parsed on every request, which is slow.
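A quick way to test that theory is to disable the WSDL cache in the script (a sketch; the service URL is a stand-in). If the command-line run becomes as slow as the browser run, the cache is the difference:
<?php
// Disable the WSDL cache for this request, both via ini and via the
// SoapClient option, so every run re-fetches and re-parses the WSDL.
ini_set('soap.wsdl_cache_enabled', '0');
$client = new SoapClient('https://example.com/service?wsdl', array(
    'cache_wsdl' => WSDL_CACHE_NONE,
));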
Check the permissions of /tmp/wsdl*.
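For example (assuming the default soap.wsdl_cache_dir of /tmp):
# List the cached WSDL files and their owners; if they belong to your
# CLI user rather than the Apache user, remove them and let Apache
# re-create its own copies.
ls -l /tmp/wsdl*
sudo rm /tmp/wsdl*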
Maybe the external SOAP service is trying to check your IP, and your server has ICMP allowed while your local network does not.
Anyway, this question might be answered more clearly by the administrator of the external SOAP service :)
Is there a difference between the php.ini files that are being used?
On a standard ubuntu server installation:
diff /etc/php5/apache2/php.ini /etc/php5/cli/php.ini
Edit: Another difference might be in the include paths. I had this trouble myself on a local test server: it didn't actually use the SOAP class that was supposed to be included (it didn't include anything, because the search paths weren't valid), but fell back to the built-in SoapClient class.