Using PowerShell to "Discover" devices with a Web interface

I am trying to develop a PowerShell tool to gather "Discovery" information about devices on our network. We already have a commercial discovery tool, but in quite a lot of cases it does not give us very much information.
The idea is to probe a subnet for devices (typically appliances) that have Web-based management interfaces. Our theory is that in many cases the home page content will allow us to detect what sort of device it is (manufacturer name, device model name, etc.). Obviously, such info will need to be extracted by parsing the body of the page. So, the script I have written first uses Test-NetConnection to do a port 80 test and a port 443 test. If the device is listening on port 80 or 443, the script then uses Invoke-WebRequest to grab the contents of the page.
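For illustration, the core of what I have so far looks roughly like this (a simplified sketch: the subnet loop is omitted and $ip is a placeholder for the address under test):

# Sketch: test ports 80/443 on one address, then fetch the home page
$ip = '192.168.1.10'    # placeholder; normally supplied by a loop over the subnet
foreach ($port in 80, 443) {
    $probe = Test-NetConnection -ComputerName $ip -Port $port -WarningAction SilentlyContinue
    if ($probe.TcpTestSucceeded) {
        $scheme = if ($port -eq 443) { 'https' } else { 'http' }
        try {
            $page = Invoke-WebRequest -Uri "${scheme}://${ip}/" -UseBasicParsing -TimeoutSec 10
            # Parse $page.Content here for manufacturer/model strings
        }
        catch {
            Write-Warning "${scheme}://${ip}/ answered on the port but the request failed: $_"
        }
    }
}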
I have used some code from here Ignore SSL warning with powershell downloadstring to disable certificate warnings, as a lot of these devices will have self-signed, untrusted certificates. That all works OK. The problem I am having is that some of the devices I am testing will display a page in a browser, but PowerShell's Invoke-WebRequest raises an error. After some investigation, this is because the device's web server returns a non-200 status code. An example of this is setting up Apache on a Linux box and enabling https with a self-signed certificate. Accessing the page using MS Edge displays the "Testing 123" page with a "Not secure" warning in the address bar. However, accessing the same page via Invoke-WebRequest throws an exception. In this particular case it is because Apache returns a 403 Forbidden error; this is by design for Apache straight out of the box, with the "Require all denied" setting in the httpd.conf file. Of course, the exception can be caught (which I have done), but the web page content is then not available in the normal output within PowerShell, even though it is displayed in a browser.
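One avenue I am experimenting with is reading the body off the error response attached to the exception. A minimal sketch, with $uri standing in for the device URL (in Windows PowerShell 5.1 the thrown error is a System.Net.WebException whose Response usually still carries the body, assuming the device actually sends one with its error page):

try {
    $page = Invoke-WebRequest -Uri $uri -UseBasicParsing
    $content = $page.Content
}
catch [System.Net.WebException] {
    # The 401/403 response object rides along on the exception; read its body manually
    $errorResponse = $_.Exception.Response
    if ($errorResponse) {
        $reader = New-Object System.IO.StreamReader($errorResponse.GetResponseStream())
        $content = $reader.ReadToEnd()
        $reader.Close()
    }
}
# PowerShell 7+ also offers -SkipHttpErrorCheck (returns the error page as a
# normal response) and -SkipCertificateCheck (replaces the certificate workaround).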
My next thought was that the web server is behaving differently because it knows the PowerShell script is not one of the common browsers. So, I tried using the -UserAgent parameter of Invoke-WebRequest to fool the server into behaving the same way as it does with a browser and returning the content. However, this does not achieve what I am looking for.
The 403 return is just one example. It seems that appliances with a home page that requires credentials (most/all? of them) return a 401 error, and again the page content is not available within PowerShell.
Does anyone have any pointers as to how I can make this work?

Related

Fiddler doesn't show traffic from Cypress

Background: I'm trying to send a request through cy.request and I get a different response from what I receive when I send a presumably similar request through Postman. From the debug information that Cypress writes to the console, I couldn't spot the difference. Therefore I wanted to look at Fiddler and see if I could spot the difference when looking at the raw requests side by side.
However, when I opened Fiddler I realized that I don't see any traffic from Cypress, including the navigation to the home page using cy.visit().
Any ideas why I can't see the traffic in Fiddler, and if there's some way to capture it?
Fiddler is a proxy: an application has to explicitly use it, otherwise its traffic will not show up in Fiddler.
There are three common reasons why traffic is not visible in Fiddler:
The Windows application explicitly ignores the Windows/IE proxy settings. Usually such apps have their own proxy configuration; configure it manually to use Fiddler. A common example of such an application is Firefox.
If you have activated "Act as system proxy at startup", Fiddler changes the proxy settings while running. Any application that is already running when Fiddler starts may have cached the old proxy configuration and therefore does not use Fiddler. Therefore, start Fiddler before any program you want to capture.
The "Act as system proxy at startup" setting is AFAIK user-specific, so any apps running under a different user or service account are not affected. You have to configure them manually to use Fiddler.
Cypress does not actually make an XHR request from the browser. Cypress makes the HTTP request from the Cypress Test Runner (in Node). So, you won't see the request inside your Developer Tools or Fiddler.
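If you do want the Node-side traffic to show up, one option is to point Cypress at the Fiddler proxy through the standard proxy environment variables before launching it. A sketch (from a PowerShell prompt), assuming Fiddler is listening on its default port 8888:

# Route Cypress's Node-side HTTP traffic through the Fiddler proxy
$env:HTTP_PROXY = 'http://127.0.0.1:8888'
$env:HTTPS_PROXY = 'http://127.0.0.1:8888'
npx cypress open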

Fiddler not capturing traffic from certain host

I want to capture traffic from a host using HTTP, but I do not see a response coming back. If I close Fiddler, my application runs as normal.
I see '-' in the Result column, where there should have been an HTTP response code. If I manually execute the request using Composer, I get a 200 response. Fiddler is able to capture traffic from all other web applications without issue.
I have installed the Fiddler certificate. Troubleshooting Mode returns 200. The host does not use HTTPS, but I have enabled Capture HTTPS CONNECTs anyway.
I am using Fiddler v5.0.20182
Some applications perform certificate pinning. Web applications can also perform certificate pinning, e.g. via HTTP Public Key Pinning (HPKP). If you have ever used the web application in your browser without Fiddler, the web app's public key has been downloaded and cached in the web browser.
Afterwards the Fiddler root certificate is no longer accepted for that site/app, even if it has been installed correctly. You should be able to identify such problematic connections in Fiddler if you only see a CONNECT request but no subsequent requests to the same domain.
To delete the HPKP data in your web browser, use a fresh profile or clear the complete browser cache. Afterwards, only use the browser with the Fiddler proxy and SSL decryption activated. As far as I know, Fiddler removes HPKP data from responses, so the web application should also work with Fiddler in between.
I think you should be able to uncheck the HTTPS decryption options in Fiddler's settings. Or you might be able to skip decryption by adding the host to the box labelled "Skip decryption for the following hosts".

Bypassing `blocked: mixed-content` restrictions in browsers

I have an internal WEB application I use, with a local printer attached.
To control the local printer (it's a ticketing printer), I use a small local program that manages it. In order for my WEB application to "use" the printer, I have it POST AJAX requests to the small local program.
My WEB application is served over HTTPS, while the local program exposes a simple API over plain HTTP (non-secure).
The problem is, I am facing blocked: mixed-content restrictions when accessing the application through HTTPS (in development I wasn't seeing this, of course).
I have several fixes (I don't like any of them):
Make the local program expose its simple HTTP API through HTTPS.
It's doable, but I will face problems with self-signed certificates (I will have to install them on the target machine), or I will have to use DNS tricks to expose it under a "name".
Stop browsers from blocking mixed content.
Doable, but I will have to configure each browser accessing my application, plus it will make them less secure.
So my question is: is there another way of circumventing/bypassing the blocked: mixed-content restriction? Ideally supported on new Firefox and Chrome versions.
You shouldn't, but you can upgrade all non-secure requests by allowing it in your header:
<meta http-equiv="Content-Security-Policy" content="upgrade-insecure-requests">

WS Federation (single sign on) module - redirect issue when using SSL offloading

We have a site that we are trying to configure as a client in an SSO scenario, using WS Federation and SAML.
Our site sits behind a load balancer that is doing SSL offloading - the connection to the balancer is over https, but it is decrypted and forwarded (internally) to the actual site over http on port 81.
Somewhere, the WS Federation module is attempting to redirect us, but it is building up the URL based on the port and protocol of the incoming (internal) request to the website:
We request:
https://www.contoso.com/application
and are getting redirected to:
http://www.contoso.com:81/Application
Which doesn't work as the load balancer (correctly) won't respond on this port.
And it seems to be related to the casing of the virtual directory. Browsing to
https://www.contoso.com/Application
seems to work without issue.
(Note for completeness, attempting to browse to http://www.contoso.com/Application with no port will correctly redirect us to the SSL secured URL).
I am trying to find out:
a) Where this redirect is happening in the pipeline and
b) How to configure it to use the correct external address.
If anybody is able to point me in the right direction, I would very much appreciate it.
EDIT 14:19: It seems to be either the WsFederationAuthenticationModule or the SessionAuthenticationModule. These do a case-sensitive comparison of the incoming URL to what they expect, and redirect otherwise:
https://brockallen.com/2013/02/08/beware-wif-session-authentication-module-sam-redirects-and-webapi-services-in-the-same-application/
So that seems to be what is happening; it's now a matter of getting the site to behave nicely and redirect to the correct external URL.
The following seems to be related and ultimately points to the culprit in the default CookieHandler:
Windows Identity Foundation and Port Forwarding
Looking at that code decompiled in VS, it compares HttpContext.Current.Request.Url against the targetUrl and redirects to the expected 'cased' version otherwise (in our case, including the errant port number).
It would seem that explicitly setting the path attribute of the cookie fixes this issue. Either an empty string or the virtual directory name seems to work:
<federationConfiguration>
  <cookieHandler requireSsl="true" name="ContosoAuth" path="/Application/"/>
  <wsFederation passiveRedirectEnabled="true" issuer="https://adfsSite" realm="https://www.contoso.com/Application/" reply="https://www.contoso.com/Application/Home" requireHttps="true"/>
</federationConfiguration>

Azure Websites: socket reading port 80 returns 404 error

I have a PHP script that runs perfectly when requested by the browser (example):
http://www.kwiksher.com/k3Serial.php?key="XXXXX"
in this case, I get the information of a user with the key XXXXX, which is the expected behavior.
However, inside my Photoshop plugin, I must call it via a socket, which forces a port into the connection:
http://www.kwiksher.com:80/k3Serial.php?key="XXXXX"
Doing that, I get the content of Azure's default 404 page (it is not even my customized 404 page).
If I use the same call (with the port added to the domain) in a browser, it works fine as well.
Any idea on how to fix it? I tried flushing the DNS on my machine as well, without success.
Thanks a lot,
Alex
It's likely that the socket library isn't speaking proper HTTP and therefore isn't sending a Host header, so the web tier on Azure can't figure out which Website it should serve the content from.
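You can reproduce this by hand-building the request over a raw TCP socket. A rough sketch (PowerShell used for illustration): drop the Host line and Azure's shared front end has nothing to route on:

# Hand-built HTTP/1.1 request over a raw socket; the Host header is what
# lets Azure's shared front end pick the correct website
$client = New-Object System.Net.Sockets.TcpClient('www.kwiksher.com', 80)
$stream = $client.GetStream()
$writer = New-Object System.IO.StreamWriter($stream)
$writer.NewLine = "`r`n"
$writer.WriteLine('GET /k3Serial.php?key=XXXXX HTTP/1.1')
$writer.WriteLine('Host: www.kwiksher.com')   # omit this and the 404 comes back
$writer.WriteLine('Connection: close')
$writer.WriteLine()
$writer.Flush()
$reader = New-Object System.IO.StreamReader($stream)
$reader.ReadToEnd()   # raw status line, headers, and body
$client.Close()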
As you are using this from a plug-in, perhaps try using the default hostname issued by Azure instead of a custom domain.