Using Selenium WebDriver, Pester, and PowerShell to get and verify the network response after clicking a button - powershell

I managed to use PowerShell 7.1 with the Selenium WebDriver module to control the Chrome browser. I now need to access the network response of a POST request that is invoked after a button is clicked via Selenium.
I found some good info in a few related answers; however, it is for Python and Java, so I need to convert some code to PowerShell. I hope you can help me out.
The code snippet below, from one of the above-mentioned sources, is where I am having difficulties:
...
options = webdriver.ChromeOptions()
options.add_argument("--remote-debugging-port=8000")
driver = webdriver.Chrome(ChromeDriverManager().install(), chrome_options=options)
dev_tools = pychrome.Browser(url="http://localhost:8000")
tab = dev_tools.list_tab()[0]
tab.start()
...
Specifically, this part dev_tools = pychrome.Browser(url="http://localhost:8000")
Below is another code snippet I got from one of the above-mentioned sources:
ChromeDriver driver = new ChromeDriver();
DevTools devTool = driver.getDevTools();
devTool.createSession();
devTool.send(Network.enable(Optional.empty(), Optional.empty(), Optional.empty()));
devTool.addListener(Network.responseReceived(), <lambda-function>);
So it is clear that Selenium 4 has DevTools support, but I cannot find the corresponding C# documentation so that I can use it from PowerShell.
Finally, after accessing the response, I need to verify it with Pester in PowerShell.
I appreciate your help.
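For what it's worth, here is a rough sketch of how this might look in PowerShell with the Selenium 4 .NET bindings, which expose network events through $driver.Manage().Network (an INetwork object). The button locator, URL filter, and some property names below are assumptions; check them against the .NET API docs for your Selenium version before relying on this.

```powershell
# Sketch only: assumes the Selenium 4 .NET assemblies are already loaded and
# that your binding version exposes INetwork via $driver.Manage().Network.
$driver = [OpenQA.Selenium.Chrome.ChromeDriver]::new()

$responses = [System.Collections.Generic.List[object]]::new()
$network   = $driver.Manage().Network

# Collect every network response the browser receives; the event args carry
# the URL and status code (property names may differ by binding version).
Register-ObjectEvent -InputObject $network -EventName NetworkResponseReceived `
    -MessageData $responses -Action {
        $Event.MessageData.Add($Event.SourceEventArgs)
    } | Out-Null
$network.StartMonitoring().Wait()

$driver.Navigate().GoToUrl('http://mytestserver/login/')          # hypothetical URL
$driver.FindElement([OpenQA.Selenium.By]::Id('submit')).Click()   # hypothetical button id
Start-Sleep -Seconds 2  # crude wait for the POST round trip to finish

$network.StopMonitoring().Wait()
$postResponse = $responses | Where-Object { $_.ResponseUrl -like '*login*' } |
    Select-Object -First 1

# Pester verification (Pester 5 scoping may require moving the setup above
# into a BeforeAll block).
Describe 'login POST' {
    It 'returns HTTP 200' {
        $postResponse.ResponseStatusCode | Should -Be 200
    }
}
```

If your binding version does not expose INetwork, the lower-level $driver.GetDevToolsSession() API mirrors the Java snippet above, but it is version-specific and more verbose.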

Related

Selenium WebDriver Handling Windows Server Authentication using powershell

I have a Selenium WebDriver-based test using PowerShell and Chrome.
The Windows server returns an authentication pop-up window.
This is my current script:
$CDriver = New-Object OpenQA.Selenium.Chrome.ChromeDriver
$CDriver.Navigate().GoToURL('http://mytestserver/login/')
$CDriver.Navigate().GoToURL('http://admin:adm!n@mytestserver/login/')
I understand that I will not be able to use page elements, as this isn't a message from the website itself.
But I'm trying to understand whether I can use the Alert mechanism in Selenium WebDriver
instead of passing the username and password in the actual URL.
This is what I have tried so far, with no success:
$alert = $CDriver.SwitchTo().Alert()
$alert.SendKeys("admin");
$alert.SendKeys("adm!n");
$alert.Accept();
Also, is there any official documentation for Selenium WebDriver and PowerShell?
Many websites redirect to an authentication site before accepting the credentials.
This renders $CDriver.Navigate().GoToURL('http://admin:adm!n@mytestserver/login/') useless.
An option which worked for me was as follows
(of course this is in Java, but I hope you can adapt it):
// url = the original URL of your application
driver.get(url);
authURL = driver.getCurrentUrl();
driver.get("http://admin:adm!n@" + authURL.replaceFirst("https://", ""));
Hope that works or please let me know in case of any questions.
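For the PowerShell script in the question, the same trick might look like this (untested sketch; same hypothetical credentials as above):

```powershell
# Navigate to the original URL first, then re-issue whatever URL the site
# redirected to with the credentials embedded in it.
$CDriver.Navigate().GoToUrl($url)
$authUrl = $CDriver.Url
$CDriver.Navigate().GoToUrl('http://admin:adm!n@' + ($authUrl -replace '^https://', ''))
```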

Cannot install dymo web services

I have been trying to install the DYMO web services for the last 2 hours on my Windows 10 machine. I have tried everything: a regular installation, and a custom installation selecting the DYMO web service. I cannot locate DYMO.DLS.Printing.Host.exe on my computer, but I do have DYMO.WebApi.Win.Host.exe. The app says that the DYMO Connect service is running on port 41951.
I even have the DYMO tray icon, but when I click Diagnose I do not see the expected confirmation.
When I go to https://127.0.0.1:41951/DYMO/DLS/Printing/Check to check my print service, I get:
No HTTP resource was found that matches the request URI 'https://127.0.0.1:41951/DYMO/DLS/Printing/Check'. No route data was found for this request.
A bit late, but I just had the same issue. Below are the endpoints I have found in the latest DYMO JS framework. You could use https://127.0.0.1:41951/DYMO/DLS/Printing/StatusConnected to test connectivity.
WS_CMD_STATUS = "StatusConnected",
WS_CMD_GET_PRINTERS = "GetPrinters",
WS_CMD_OPEN_LABEL = "OpenLabelFile",
WS_CMD_PRINT_LABEL = "PrintLabel",
WS_CMD_PRINT_LABEL2 = "PrintLabel2",
WS_CMD_RENDER_LABEL = "RenderLabel",
WS_CMD_LOAD_IMAGE = "LoadImageAsPngBase64",
WS_CMD_GET_JOB_STATUS = "GetJobStatus";
Hope that helps you or someone else. The documentation is not very good.
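As a quick sanity check from PowerShell 7, for example, you could probe that endpoint directly (-SkipCertificateCheck is needed because the service uses a self-signed certificate on 127.0.0.1):

```powershell
# Returns $true when the DYMO web service is up and reachable.
Invoke-RestMethod -Uri 'https://127.0.0.1:41951/DYMO/DLS/Printing/StatusConnected' -SkipCertificateCheck
```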

How to dump more than <body> on chrome / chromium headless?

Chrome's documentation states:
The --dump-dom flag prints document.body.innerHTML to stdout:
As per the title, how can more of the DOM object (ideally all) be dumped with Chromium headless? I can manually save the entire DOM via the developer tools, but I want a programmatic solution.
Update 2019-04-23: Google has been very active on the headless front, and many updates have happened.
The answer below was valid as of v62; the current version is v73, and it keeps updating:
https://www.chromestatus.com/features/schedule
I highly recommend checking out Puppeteer for any future development with headless Chrome. It is maintained by Google, and it installs the required Chrome version together with the npm package, so you just use the Puppeteer API per the docs and don't worry about Chrome versions or about setting up the connection between headless Chrome and the DevTools API, which enables 99% of the magic.
Repo: https://github.com/GoogleChrome/puppeteer
Docs: https://pptr.dev/
Update 2017-10-29: Chrome now has the --dump-html flag, which returns the full HTML, not only the body.
v62 has it, and v62 is already on the stable channel.
The issue which fixed this: https://bugs.chromium.org/p/chromium/issues/detail?id=752747
Current Chrome status (version per channel): https://www.chromestatus.com/features/schedule
Leaving the old answer below for reference.
You can do it with chrome-remote-interface. I have tried it, wasted a couple of hours trying to launch Chrome and get the full HTML (including the title), and I would say it is just not ready yet. It works sometimes, but I tried to run it in a production environment and got errors from time to time: all kinds of random errors like connection reset and no chrome found to kill. These errors came up intermittently and were hard to debug.
I personally use --dump-dom to get the HTML when I need the body, and when I need the title I just use curl for now. Of course, Chrome can give you the title of SPA applications, which cannot be done with curl alone if the title is set from JS. I will switch to headless Chrome once there is a stable solution.
I would love to have a --dump-html flag on Chrome to just get all the HTML. If a Google engineer is reading this, please add such a flag to Chrome.
I've created an issue on the Chromium issue tracker; please click the favorite "star" so it gets noticed by Google developers:
https://bugs.chromium.org/p/chromium/issues/detail?id=752747
Here is a long list of all kinds of flags for Chrome (I am not sure whether it covers all of them):
https://peter.sh/experiments/chromium-command-line-switches/ (nothing there to dump the title tag).
This code is from Google's blog post; you can try your luck with it (the original elided the launchChrome helper, so a typical chrome-launcher-based definition is filled in below):
const CDP = require('chrome-remote-interface');
const chromeLauncher = require('chrome-launcher');

// launchChrome was elided in the original snippet; this is a typical
// definition using the chrome-launcher npm package.
function launchChrome() {
  return chromeLauncher.launch({chromeFlags: ['--headless', '--disable-gpu']});
}

(async function() {
  const chrome = await launchChrome();
  const protocol = await CDP({port: chrome.port});

  // Extract the DevTools protocol domains we need and enable them.
  // See API docs: https://chromedevtools.github.io/devtools-protocol/
  const {Page, Runtime} = protocol;
  await Promise.all([Page.enable(), Runtime.enable()]);

  Page.navigate({url: 'https://www.chromestatus.com/'});

  // Wait for window.onload before doing stuff.
  Page.loadEventFired(async () => {
    const js = "document.querySelector('title').textContent";
    // Evaluate the JS expression in the page.
    const result = await Runtime.evaluate({expression: js});
    console.log('Title of page: ' + result.result.value);
    protocol.close();
    chrome.kill(); // Kill Chrome.
  });
})();
Source:
https://developers.google.com/web/updates/2017/04/headless-chrome
You are missing --headless to get stdout.
chromium --incognito \
--proxy-auto-detect \
--temp-profile \
--headless \
--dump-dom https://127.0.0.1:8080/index.html
Pipe the output into html2text to convert the HTML into plain text.
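The same capture works from PowerShell if that is your shell (sketch; assumes the chromium binary is on PATH and the URL is yours):

```powershell
# Capture the serialized DOM from headless Chromium and save it to a file.
$dom = & chromium --incognito --temp-profile --headless --dump-dom 'https://127.0.0.1:8080/index.html'
$dom | Set-Content dom.html
```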

Jasmine-Protractor Rest Call

I am trying to make a REST call to an OAuth server to get a token to serve as input to my Jasmine test. I was assuming that I would be able to do this with XMLHttpRequest.
But when I run the test I get an error saying:
"Message: Failed: XMLHttpRequest is not defined"
ReferenceError: XMLHttpRequest is not defined
The part of the code that errors out is var xhr = new XMLHttpRequest();
Any idea what's going wrong here, or is there a better way to do this? I see a lot of articles that talk about XMLHttpRequest in the context of mock objects, but I am not trying to mock this, because I need a token to be returned by the server.
The code I am using is here -->
var data = "someData";
var xhr = new XMLHttpRequest();
xhr.withCredentials = true;
xhr.open("POST", url);
// Note: headers must be set after open() and before send().
xhr.setRequestHeader('Authorization', 'someAuth');
xhr.setRequestHeader('Content-Type', '*/*');
xhr.send(data);
Any help would be appreciated.
I figured this could happen for two reasons:
You have not installed the xmlhttprequest package, which can be done with the node package manager using the following command:
npm install xmlhttprequest
(Remember that in Node you also need to import it, e.g. var XMLHttpRequest = require('xmlhttprequest').XMLHttpRequest;, since XMLHttpRequest is a browser global, not a Node one.)
If you have already installed xmlhttprequest, the issue could be that the editor is not picking up the environment variables. This happened to me when I ran into this issue: I was using WebStorm on a Mac, which would not use the environment variables from my system. To overcome this, I launched the editor from the Terminal, and then XMLHttpRequest was recognized in my code.

Android / Eclipse / Emulator / WAMP: Error 503 and timeouts

I am trying to request JSON data from a PHP script in my Android app.
The whole thing works well when I connect my mobile phone directly and access the "real" Internet webserver. However, it doesn't work at all using the emulator and localhost (a WAMP installation). Running the script directly on the local webserver also gives the expected results; only when I call it from the emulator on the same machine do the troubles begin.
Here are the combinations I have tried so far (the scripts are located in the subdirectory "zz" of the root):
private static String url_all_venues = "http://10.0.2.2/zz/pullvenuesjson.php";
or
private static String url_all_venues = "http://10.0.2.2:80/zz/pullvenuesjson.php";
leads to the following exception: E/JSON Parser(657): Error parsing data org.json.JSONException: Value
When I'm trying the following:
private static String url_all_venues = "http://10.0.2.2:8080/zz/pullvenuesjson.php";
the code runs 'forever' and I get a timeout error after about 5 minutes.
Any idea how to fix this so I can also test my app against my local webserver from the emulator? Most probably the problem is in my local Apache configuration, but then again, I'm not sure…
EDIT:
Here is some of the code that seems to trigger the error:
DefaultHttpClient httpClient = new DefaultHttpClient();
String paramString = URLEncodedUtils.format(params, "utf-8");
url += "?" + paramString;
HttpGet httpGet = new HttpGet(url);
HttpResponse httpResponse = httpClient.execute(httpGet);
HttpEntity httpEntity = httpResponse.getEntity();
is = httpEntity.getContent();
"url" is the one mentioned above. As mentioned in my comment below, it looks as if the problem comes from the local WAMP webserver not responding appropriately, since the code works fine when I'm directly accessing the "real" server via the Internet (i.o.w, do nothing else than changing the URL to the Internet address of the php script). The odd thing is that the script also works fine when I run it locally, but NOT through the emulator.
I finally figured it out! I logged the transferred data using Log.d(...) and found out that the username/password combination was not valid because I didn't check for the localhost condition... The script and app retrieve the data just fine now, both on localhost and on the web. Thanks for your inquiries, which helped me get to the root of the error!
[Answered myself in order to avoid the Q showing up as unanswered]