PowerShell Invoke-WebRequest returns "remote name could not be resolved" but not a proxy error - powershell

I am trying to download a page using wget (an alias for Invoke-WebRequest) in PowerShell. The page in question is www.privatahyresvärdar.nu.
In Internet Explorer I can navigate to www.privatahyresvärdar.nu, but I cannot fetch it with wget from PowerShell, nor can I ping the site. Neither command can resolve the hostname.
I have followed several pieces of advice on SO and other sites that point to proxies as a cause of wget failing, but I am not using any proxy.
Please help me identify the source of the error!

After a moment of clarity it struck me that it might be an encoding issue! After converting the hostname to Punycode with an online Punycode encoder (first hit on Google), it worked like a charm!
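For anyone who hits the same thing, the conversion can also be done directly in PowerShell via the .NET IdnMapping class instead of an online encoder. This is just a minimal sketch; the variable names are illustrative and the hostname is the one from the question:

# Convert the internationalized hostname to its ASCII (Punycode) form
$idn = New-Object System.Globalization.IdnMapping
$asciiHost = $idn.GetAscii("www.privatahyresvärdar.nu")   # returns the xn-- form of the name
# Request the page using the ASCII hostname, which DNS can resolve
Invoke-WebRequest -Uri ("http://" + $asciiHost)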

Related

Unable to install LMD on CentOS 7.9.2009 (core)

Can someone please help me with this? I'm attempting to follow the below guide on installing LMD (Linux Malware Detect) on CentOS.
https://www.tecmint.com/install-linux-malware-detect-lmd-in-rhel-centos-and-fedora/
The issue that I am having is that whenever I attempt to use "wget" on the specified link to LMD, it always pulls an HTML file instead of a .gz file.
Troubleshooting: I've attempted HTTPS instead of HTTP, but that results in an "unable to establish SSL connection" error message (see below). I've already looked around the internet for other guides on installing LMD on Cent and every one of them advised to "wget" the .gz at the below link. I'm hoping that someone can help me to work through this.
http://www.rfxn.com/downloads/maldetect-current.tar.gz
SSL error below
If you need further information from me, please let me know. Thank you.
Best,
B
wget --spider output: [screenshots attached]
This is interesting: you requested the asset from http://www.rfxn.com but were finally redirected to https://block.charter-prod.hosted.cujo.io, which appears to be a page with text like
Let's stop for a moment
This website has been blocked as it may contain inappropriate content
I am unable to fathom why exactly this happened, but it is probably something to do with your network, as I ran wget --spider and it detected the file as 1.5M [application/x-gzip].
You also replaced http with https in the command. Try wget exactly as it is given in the guide:
wget http://www.rfxn.com/downloads/maldetect-current.tar.gz
Here is what I get with --spider option:
# wget --spider http://www.rfxn.com/downloads/maldetect-current.tar.gz
Spider mode enabled. Check if remote file exists.
--2022-07-06 22:04:57-- http://www.rfxn.com/downloads/maldetect-current.tar.gz
Resolving www.rfxn.com... 172.67.144.156, 104.21.28.71
Connecting to www.rfxn.com|172.67.144.156|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1549126 (1.5M) [application/x-gzip]
Remote file exists.
It was my ISP. They had router-based software that was preventing network commands from my Linux box from getting past the gateway.

Using WGET to retrieve information from PLC - Error 400 Bad Request

I'm attempting to use the wget program to retrieve and save a list of data from my Siemens S7-1200 PLC. Using a batch file I had written, I was able to drill down the folder path to my wget.exe file. Upon running the wget executable I get the error message seen in the attached screenshot, labeled "Command Prompt Screenshot".
The command prompt shows me that I've "connected" and I know the username and password are correct because I can log into the PLC using my web browser. It's for those reasons I'm stumped on what the problem is.
Has anyone seen this before or can anyone point me in the right direction?
Thanks for the response, Ken. I was actually able to get it working with the assistance of Siemens technical support. Apparently my computer didn't like the way I was trying to pass it the username and password login credentials. Through the Siemens TIA Portal software I was able to remove the login restrictions, allowing all users access to read data off the PLC, and it works now. I've attached a copy of the exact batch file I used. Also, to make sure I'm adding as much detail as possible, I have the batch file and the wget.exe file saved to a folder on my C:\ drive. Functional wget batch file: [screenshot attached]
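The batch file itself is only visible in the attached screenshot, but in general such a file boils down to a single wget call. A rough sketch follows; the drive letter, PLC address, URL path, and output filename are assumed placeholders, not the poster's actual values:

@echo off
rem Hypothetical sketch only - the real URL and paths are in the attached screenshot
cd /d C:\wget
rem With the PLC web-server login restrictions removed in TIA Portal, no credentials are needed
wget.exe -O datalog.csv "http://192.168.0.1/DataLogs/datalog.csv"
rem If authentication were still required, wget's --http-user and --http-password options could be added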

Proxy setting in gsutil tool

I use the gsutil tool to download archives from Google Storage.
I use the following CMD command:
python c:\gsutil\gsutil cp gs://pubsite_prod_rev_XXXXXXXXXXXXX/YYYYY/*.zip C:\Tmp\gs
Everything works fine, but if I try to run that command from behind a corporate proxy, I receive an error:
Caught socket error, retrying: [Errno 10051] A socket operation was attempted to an unreachable network
I have tried several times to set the proxy settings in the .boto file, but all to no avail.
Has anyone faced such a problem?
Thanks!
Please see the section "I'm connecting through a proxy server, what do I need to do?" at https://developers.google.com/storage/docs/faq#troubleshooting
Basically, you need to configure the proxy settings in your .boto file, and you need to ensure that your proxy allows traffic to accounts.google.com as well as to *.storage.googleapis.com.
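For reference, the relevant .boto entries look roughly like this; the hostname, port, and credentials below are placeholders rather than values from the question:

[Boto]
proxy = proxy.example.com
proxy_port = 8080
# Only needed if the proxy requires authentication
proxy_user = myuser
proxy_pass = mypassword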
A change was just merged on GitHub yesterday that fixes some of the proxy support. Please try it out; specifically, overwrite your current copy of this file with the version at:
https://github.com/GoogleCloudPlatform/gsutil/blob/master/gslib/util.py
I believe I am having the same problem with the proxy settings being ignored under Linux (Ubuntu 12.04.4 LTS) and gsutil 4.2 (downloaded today).
I've been watching tcpdump on the host to confirm that gsutil is attempting to route directly to Google IPs instead of to my proxy server.
It seems that on the first execution of a simple command like "gsutil -d ls" it will use the proxy settings specified in .boto for the first POST and then switch back to attempting to route directly to Google instead of through my proxy server.
Then if I CTRL-C and re-run the exact same command, the proxy setting is no longer used at all. This difference in behaviour baffles me. If I wait long enough, I think it will work for the initial request again, so this suggests some form of caching taking place. I'm not 100% sure of this behaviour yet because I haven't been able to predict when it occurs.
I also noticed that it always first tries to connect to 169.254.169.254 on port 80 regardless of proxy settings. A grep shows that it's hardcoded into oauth2_client.py, test_utils.py, layer1.py, and utils.py (under different subdirectories of the gsutil root).
I've tried setting the http_proxy environment variable but it appears that there is code that unsets this.
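For completeness, the environment variables were set the usual way (placeholder proxy host and port); as noted, gsutil still appeared to bypass or unset them:

export http_proxy=http://proxy.example.com:3128
export https_proxy=http://proxy.example.com:3128
gsutil -d ls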

problems with Crypt::SSLeay and using HTTPS request?

I'm trying to connect to a website via HTTPS by sending a WWW::Mechanize GET request, and whenever I try to run my script I get this error:
This application has failed to start because libeay32_.dll was not found. Re-installing the application may fix this problem
And inside the command prompt I get:
Error GETing http...: can't load 'C:/strawberry/perl/vendor/lib/auto/Crypt/SSLeay/SSLeay.dll for module Crypt::SSLeay: load_file: The specified module could not be found (Crypt::SSLeay or IO::Socket::SSL not installed) at ...
I don't understand the problem because I'm very new to programming with Perl. Crypt::SSLeay is installed, the .dll is in the proper location, and IO::Socket::SSL is also installed, or at least whenever I try to install it via cpan I get the libeay error again. The libeay32_.dll is located in C:\strawberry\c\bin. I don't have full access rights to the computer because I am doing this from work. If someone could explain the reason for the problem it would be appreciated.
I'll turn my comments into an answer so this question can be marked as answered:
Add "C:\strawberry\c\bin" to the PATH environment variable
Close and reopen the Explorer and/or command-line windows, since running processes aren't notified when the environment changes and thus keep the old environment active (in a command-line window you could also apply the update manually with set PATH=...new path...).
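For example, to apply the change just for the current command-prompt session (the path assumes a default Strawberry Perl layout and the script name is only an example):

rem Prepend the directory containing libeay32_.dll for this cmd session only
set PATH=C:\strawberry\c\bin;%PATH%
rem Then re-run the Perl script
perl myscript.pl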
Have you read the README.SSL file that comes with LWP? (WWW::Mechanize uses LWP to make the actual HTTP requests).

How to configure MAMP to serve perl CGI scripts (NOT localhost!)

I'm using MAMP-pro to serve my domain to the outside world.
I'm not a very experienced sysadmin, though I've slogged my way through a few basic things. I know what Apache is, and I can read most of, but not write from scratch, the related .conf files.
I've got a Perl script which I've tested from the command line, and it works (outputs as desired).
When I try to access said script from the browser, I get 404.
I've tried placing the script at:
/Users/me/Sites/mydomain.com/htdocs/mycgi.pl
/Users/me/Sites/mydomain.com/cgi-bin/mycgi.pl
/Users/me/Sites/mydomain.com/htdocs/cgi-bin/mycgi.pl
and accessing it as:
http://www.mydomain.com/mycgi.pl
http://www.mydomain.com/cgi-bin/mycgi.pl
and all the various combinations, all to no avail (404).
The script and its container directory have permissions 755.
So, what other steps am I missing? Are there any good set-up guides? I tried the MAMP-Pro manual, but it is filled with such information as "the cancel button cancels the current operation" and not really anything useful. Google turned up several hits that all seem to talk about how to make this work on localhost, but I'm trying to serve this to the outside world.
Any hints?
Thanks!
The official online documentation has a section on virtual hosts. When creating a host for www.mydomain.com you can choose the DocumentRoot, which is called "Disk location" within MAMP PRO. If you still get a 404 error, take a look at the error_log for a more specific reason (i.e., where Apache is trying to find the file in question).
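Beyond setting the DocumentRoot, Apache also has to be told to treat the script as CGI rather than serve it as a plain file. A minimal sketch of the kind of directives that would go into the virtual host is below; the paths mirror the ones in the question, the exact template MAMP PRO generates may differ, and it assumes mod_cgi is loaded and the script has a correct shebang line:

ScriptAlias /cgi-bin/ "/Users/me/Sites/mydomain.com/cgi-bin/"
<Directory "/Users/me/Sites/mydomain.com/cgi-bin">
    Options +ExecCGI
    AddHandler cgi-script .pl
    # Apache 2.2 syntax; on Apache 2.4 use "Require all granted" instead
    Order allow,deny
    Allow from all
</Directory>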