Why do Selenium tests behave differently on different machines? - powershell

I couldn't find much information on Google regarding this topic. Below, I have provided three results from the same Selenium tests. Why am I getting different results when running the tests from different places?
INFO:
Our architecture: Bitbucket, Bamboo Stage 1 (build, deploy to QA), Bamboo Stage 2 (start the Amazon EC2 instance "Test", run the tests from Test against the recently deployed QA).
Using the Chrome WebDriver.
All three variations below use the same QA URL that our application is deployed on.
All tests run as Parallelizable per fixture (see the snippet after this list).
The EC2 instance is running Windows Server 2012 R2 with the Chrome browser installed.
I have made sure that the test solution has been properly deployed to the EC2 "Test" instance. It is indeed the exact same solution and it builds correctly.
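For context, per-fixture parallelization in NUnit is declared like this (a sketch; the fixture and test names are hypothetical):
// NUnit: this fixture may run in parallel with other fixtures.
[TestFixture]
[Parallelizable(ParallelScope.Self)]
public class LoginPageTests
{
    [Test]
    public void UserCanLogIn() { /* ... */ }
}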
First, local:
Second, from EC2 via an SSM script that invokes the tests:
Note that the PowerShell script calls nunit3-console.exe just as it is used in my third example from the command line.
Lastly, RDP in on the EC2 instance and run the tests from the command line:
This has me perplexed... Any reason why Selenium runs differently on different machines?

This really should be a comment, but I can't comment yet so...
I don't know enough about the application you are testing to say for sure, but this seems like something I've seen testing the application I'm working on.
I have seen two issues. First, Selenium checks for an element before it's created. Sometimes it works and sometimes it fails, depending on how quickly the page loads when the test runs; there's no rhyme or reason to it. Second, the app I'm testing is pretty dumb: when you touch a field, enter data and move on to the next one, it effectively posts all editable fields back to the database and refreshes all the fields. So Selenium enters the value, moves to the next field, and pops either a stale-element error or a can't-find-element error depending on where in the post/refresh cycle it attempts to interact with the element.
The solution I have found is moderately ugly. I tried a wait-until, but because it's the same element name, the element is already visible and gets grabbed immediately, which returns a stale element. As a result, the only thing I have found that runs correctly and consistently is putting explicit waits between calls. Below is an example of what I have to do with the app I'm testing. (I am aware that I can condense the code; I am working within my company's style manual.)
// Hard wait to let the app's post/refresh cycle settle before touching the field.
Thread.Sleep(2000);
By nBaseLocator = By.XPath("//*[@id='attr_seq_1240']");
IWebElement baseRate = driver.FindElement(nBaseLocator);
baseRate.SendKeys(Keys.Home + xBaseRate + Keys.Tab);
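If the hard sleeps become unmanageable, one variation that can get around the stale-grab problem is waiting for the old reference to go stale before re-finding the element. A minimal sketch, assuming the Selenium.Support package (where WebDriverWait and ExpectedConditions live in this Selenium era) and the same driver, nBaseLocator and xBaseRate as above:
using OpenQA.Selenium;
using OpenQA.Selenium.Support.UI;

WebDriverWait wait = new WebDriverWait(driver, TimeSpan.FromSeconds(10));
IWebElement baseRate = driver.FindElement(nBaseLocator);
baseRate.SendKeys(Keys.Home + xBaseRate + Keys.Tab);
// Wait for the post/refresh cycle to invalidate the old reference...
wait.Until(ExpectedConditions.StalenessOf(baseRate));
// ...then re-find the element before the next interaction.
IWebElement refreshed = wait.Until(ExpectedConditions.ElementToBeClickable(nBaseLocator));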
If this doesn't help, please tell us more about the app and how it's functioning so we can help you find a solution.

@Florent B. Thank you!
EDIT: This ended up not working...
The tests still run differently when called remotely with a PowerShell script, but they run correctly locally on both the EC2 instance and my machine.
The headless command switch allowed me to replicate my failed tests locally.
Next, I found out that a headless Chrome browser is automatically used when the tests run via script on an EC2 instance, so the tests were indeed running and the errors were valid.
Finally, I figured out that the screen size was indeed the culprit: it was stuck at a small default (600x400?).
So after many tries, the only usable screen-size option for Windows, C# and ChromeDriver 2.32 is to set your WebDriver options when you initiate your driver:
ChromeOptions chromeOpt = new ChromeOptions();
chromeOpt.AddArgument("--headless");              // run Chrome without a visible window
chromeOpt.AddArgument("--window-size=1920,1080"); // force a full-size viewport; the headless default is small
chromeOpt.AddArgument("--disable-gpu");           // recommended for headless Chrome on Windows at the time
webDriver = new ChromeDriver(chromeOpt);
FINISH EDIT:
Just to update:
Screen size is large enough.
Still attempting to solve the issue. Has anyone else run into this?
AWS SSM command -> PowerShell -> run Selenium tests with Start-Process -> any test that requires an element fails with ElementNotFound or ElementNotVisible exceptions.
Using the Page Object Model for the tests; the FindsBy attribute in C# is not finding elements.
Tests run locally on the EC2 instance work fine from cmd, PowerShell and PowerShell ISE.
The tests do not work correctly only when executed via the AWS SSM command, and I cannot find any resources to fix the problem.
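One low-tech way to compare the SSM-launched session with a local run would be to log what the browser actually looks like at the start of a failing test. A sketch, using the webDriver instance from the snippet above; the log path is hypothetical:
// Needs OpenQA.Selenium and System.IO.
// Record window size and user agent of the session launched via SSM.
var size = webDriver.Manage().Window.Size;
var ua = ((IJavaScriptExecutor)webDriver).ExecuteScript("return navigator.userAgent;");
File.AppendAllText(@"C:\temp\selenium-env.log",
    size.Width + "x" + size.Height + " " + ua + Environment.NewLine);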

Related

The HTTP request to the remote WebDriver server timed out after 60 seconds. Happens only when running through the task scheduler

I'm troubleshooting a Selenium script that runs through the Task Scheduler on a Windows Server. It's running in PowerShell using version 3.0.1 of the Selenium module (found here: https://www.powershellgallery.com/packages/Selenium/3.0.1) with the Edge browser (the Chromium-based one).
The "The HTTP request to the remote WebDriver server for URL [localhost] timed out after 60 seconds." error has been quite persistent and only appears when run through the Task Scheduler. The script runs fine when running manually through ISE.
Also of note, there's another script that is more or less the same as the one having the issue, albeit using a slightly different URL (same site). This second script runs through the Task Scheduler without issue. They perform the same sequence of actions, which is why I'm not entirely sure why it would fail for one script but not the other.
I haven't found a suitable solution while looking at other posters facing the same issue. Any help is much appreciated!
I had the same issue. I tried all kinds of code changes, but the only step that worked for me was to change the security options in Task Scheduler:
Task (right-click) > Properties > General > switch from "Run whether user is logged on or not" to "Run only when user is logged on".
Presumably this is because running "whether user is logged on or not" puts the task in a non-interactive session with no desktop, which the browser session can't cope with. I guess this is a temporary solution; I'll keep looking and update this if I find a better one.

Running Flask at startup as a Service in Windows won't work in background

Before explaining what my problem is, please know that I have looked for solutions on similar topics, but none of them seems to work or even to correspond to my problem.
What I am trying to do:
I have this python code on multiple files that I run with flask with the following command:
python -m flask run --host=0.0.0.0
So far, everything works, but I would like this code to run automatically every time the computer boots. In the future this will be used on mini PCs without any graphical interface or human intervention.
Since I need to do some configuration checks before running the web server, I've created a PowerShell script that ends with Flask running (using the previous command).
So far, everything works too. Now we're coming to the problem:
I'd like this script to run when I boot the machine. One constraint: everything needs to work with Administrator privileges, on the local system, without any interaction.
I've tried scheduled tasks, but Flask won't run even though the rest of the script works (creating folders and other things).
OK, it's not a big deal, I have other ways to do it, so I've created a Windows service in C# to run the script at startup on the local system.
The script works, I've checked the privileges too, everything's fine, but when it reaches the flask command that is supposed to start the server, nothing works.
It's the same thing if I run Flask using "pythonw", which is supposed to run Python as a background process.
What the problem seems to be:
Well, as long as I run Flask with either a command prompt or a PowerShell terminal open, everything works great. But if, one way or another, I run the script as a background process, it won't work.
Normally it takes around 30 seconds for Flask to start up. Here, if I create a folder right after the Flask startup step (as a test), I can see the folder is created almost instantly, which means the Flask process is being killed immediately.
The problem doesn't seem to come from the service itself but really from Windows killing the process, and I don't know why.
I'm running out of ideas, so if you guys have anything I could try, it would really help me.
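In case it helps anyone suggest a fix: since there is no console in the service session, Flask's own error output is being thrown away. One way to capture why the process dies is to have the C# service start Flask with stderr redirected to a file. A minimal sketch; the paths and the FLASK_APP value are placeholder assumptions:
using System.Diagnostics;
using System.IO;

var psi = new ProcessStartInfo
{
    FileName = "python",
    Arguments = "-m flask run --host=0.0.0.0",
    WorkingDirectory = @"C:\app",   // hypothetical application folder
    UseShellExecute = false,        // required in order to redirect streams
    RedirectStandardError = true,
};
psi.EnvironmentVariables["FLASK_APP"] = "app.py"; // hypothetical entry point
var flask = Process.Start(psi);
// ReadToEnd blocks until the process exits; since it dies almost immediately,
// whatever it printed to stderr is the clue.
File.WriteAllText(@"C:\app\flask-err.log", flask.StandardError.ReadToEnd());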

Debugging Azure IoT Edge modules using Visual Studio Code

I can't get local debug of IoT Edge modules working on VS Code, but part of the problem could be that I don't understand what I'm doing in the steps.
I'm following the Microsoft guide here. Can anyone explain: when I run the command "Azure IoT Edge: Start IoT Edge Hub Simulator for Single Module" in VS Code, why do I need to pass an "input name"? Why does the simulator need to know this? I've got multiple input commands on my edge module, and the fact that I need to pass it makes me question what the simulator actually does. I want to be able to debug multiple inputs.
Also, in the same documentation, I can't see how it defines which module I want to run in the simulator. Am I missing something, or is the process just confusing?
When you Start the IoT Edge Hub Simulator for a Single Module, you spawn two Docker containers. One is the edgeHub and the other is a testing utility. The testing utility acts as a server that you can send HTTP requests to; the requests specify the input name and the data. You can use this to send messages to the various inputs on your module. Looking at just that, I understand why it is confusing to supply the input name to the simulator. But when you inspect the edgeHub container, you'll see the following environment values being passed:
"routes__output=FROM /messages/modules/target/outputs/* INTO BrokeredEndpoint(\"/modules/input/inputs/print\")",
"routes__r1=FROM /messages/modules/input/outputs/input2 INTO BrokeredEndpoint(\"/modules/target/inputs/input2\")",
"routes__r2=FROM /messages/modules/input/outputs/foo INTO BrokeredEndpoint(\"/modules/target/inputs/foo\")",
"routes__r3=FROM /messages/modules/input/outputs/input1 INTO BrokeredEndpoint(\"/modules/target/inputs/input1\")"
Just like on a real device, you need routes to talk to your module. The edgeHub container registers these routes with the values you supplied when starting the simulator. That input can be a comma-separated list, so if you are using more inputs, feel free to supply them when you start the simulator. Under the covers, that command runs:
iotedgehubdev start -i "input1,input2,foo"
Note: when I was testing this with the latest VS Code Extension, the first time I ran it, the textbox contained: "input1,input2".
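Once the simulator is up, you send messages to those inputs through the testing utility's HTTP endpoint. Per the Microsoft guide, the request looks like this (assuming the default testing-utility port of 53000; adjust if your setup reports a different one):
curl --header "Content-Type: application/json" --request POST --data '{"inputName": "input1", "data": "hello world"}' http://localhost:53000/api/v1/messages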

How to track installer script in a pipeline not executing?

I'm new to the whole Azure DevOps world and just got transferred to a new team that does just that.
One of my assignments is to fix an issue with a pipeline where one of the steps runs a shell script that installs an application. Currently, the step seems to run without any issue shown in the log, but when we connect to the container's pod, the app is not there.
If we run the script directly inside the pod, the application installs correctly. I'm not sure how to track this down. One of the things I've tried was checking the event log to see if there's any error while the installation is executed: Get-EventLog -LogName "Windows PowerShell" -Newest 20; so far no luck there. Again, kind of new at this, so I'm not sure what other tools are out there to track down why the script doesn't install during the pipeline execution.
To troubleshoot your pipeline run, you can configure your pipeline logs to be more verbose.
1. To configure verbose logs for a single run, you can start a new build by choosing Run pipeline and selecting Enable system diagnostics, then Run.
2. To configure verbose logs for all runs, you can add a variable named system.debug and set its value to true, as sketched below.
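In a YAML pipeline, that variable looks like this (a sketch; it can also be set as a queue-time variable in the classic UI):
variables:
  system.debug: true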
You can also try logging into your agent server and checking the event log. See this blog for how to view the event log on Windows.
The issue was related to how the task was awaited. Piping the process through Wait-Process, so the step blocks until the installer actually exits, solved it for us:
RUN powershell C:\dev\myprocess.ps1 -PassThru | Wait-Process;

Devel::Cover not collecting any data after startup with mod_perl2

I want to check Selenium's coverage of my web app, which runs on mod_perl2 on CentOS 6.5.
So I installed Devel::Cover, put use Devel::Cover; in my httpd.conf's <Perl> section, and restarted Apache. It immediately writes some coverage data from my custom ErrorLogging.pm module, but then if I hit any of the app's pages via a browser, nothing further happens.
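For reference, the relevant httpd.conf fragment is just:
<Perl>
  use Devel::Cover;
</Perl>
(Devel::Cover also accepts import options such as -dir to point the coverage database at an explicit directory, e.g. use Devel::Cover (-dir => '/tmp/cover_db');. Whether the Apache worker can write to the default database location is speculation on my part, but it seems worth ruling out.)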
I also tried changing this in httpd.conf:
StartServers 1
MinSpareServers 1
MaxSpareServers 1
...just to make sure it'd be collecting all data from the same process. However, after restarting Apache and trying again, the result was the same.
UPDATE: I also tried launching httpd with -D ONE_PROCESS as mentioned in this thread, but the result was more or less the same, except that I had to Ctrl+C the service when done testing, because it takes over the terminal, and at that point it segfaulted. But the coverage database in the end was virtually identical.
The docs don't mention anything different that I can see. How can I get Devel::Cover to record coverage data for code execution that happens in response to actual browser requests via mod_perl2?