Unable to install LMD on CentOS 7.9.2009 (core) - centos

Can someone please help me with this? I'm attempting to follow the guide below on installing LMD (Linux Malware Detect) on CentOS.
https://www.tecmint.com/install-linux-malware-detect-lmd-in-rhel-centos-and-fedora/
The issue that I am having is that whenever I attempt to use "wget" on the specified link to LMD, it always pulls an HTML file instead of a .gz file.
Troubleshooting: I've attempted HTTPS instead of HTTP, but that results in an "unable to establish SSL connection" error (see below). I've also looked around the internet for other guides on installing LMD on CentOS, and every one of them advises using wget on the .gz at the link below. I'm hoping someone can help me work through this.
http://www.rfxn.com/downloads/maldetect-current.tar.gz
SSL error below
If you need further information from me, please let me know. Thank you.
Best,
B
[Screenshot: wget --spider output showing the SSL connection error]
This is interesting: you requested the asset from http://www.rfxn.com but were ultimately redirected to https://block.charter-prod.hosted.cujo.io, which appears to be a page with text like:
Let's stop for a moment
This website has been blocked as it may contain inappropriate content
I cannot say exactly why this happened, but it probably has something to do with your network, because when I ran wget --spider it detected the file as (1.5M) [application/x-gzip].
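If you want to inspect the redirect chain yourself, a quick sketch (assuming curl is installed) is to print the headers of every hop:
curl -sIL http://www.rfxn.com/downloads/maldetect-current.tar.gz
Here -I sends a HEAD request, -L follows redirects, and -s hides the progress output, so you can see exactly where the request ends up and what Content-Type comes back.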

You replaced http with https in the command. Try wget exactly as it is given in the guide:
wget http://www.rfxn.com/downloads/maldetect-current.tar.gz
Here is what I get with --spider option:
# wget --spider http://www.rfxn.com/downloads/maldetect-current.tar.gz
Spider mode enabled. Check if remote file exists.
--2022-07-06 22:04:57-- http://www.rfxn.com/downloads/maldetect-current.tar.gz
Resolving www.rfxn.com... 172.67.144.156, 104.21.28.71
Connecting to www.rfxn.com|172.67.144.156|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1549126 (1.5M) [application/x-gzip]
Remote file exists.
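Once the tarball downloads correctly, the remaining steps from the guide are roughly as follows (a sketch; the extracted directory name depends on the current LMD version):
tar -xzf maldetect-current.tar.gz
cd maldetect-*
sudo ./install.sh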

It was my ISP. They had router-based filtering software that prevented requests like this from my Linux machine from getting past the gateway.

Related

AEM login screen is not appearing on a Linux machine (AEM 6.3)

I have set up AEM 6.3 on a remote Linux machine, but when I try to access AEM from the browser, it says "Connection has timed out".
I am not getting any error in the error.log file. Also, the stdout.log file says "Startup completed".
I also checked that the port (4502) is not blocked on the server.
When I run "curl http://localhost:4502/" on the server, I do not get any error, which makes me assume that the connection is established.
Do I need to do any other configuration in order to access it from the browser? I am using http://ip:4502/ in the browser.
Almost certainly a firewall issue, check and check again :)
Look in the AEM access log (same folder as the other logs you looked in): can you see any requests coming in from your browser? There is no other configuration required on AEM other than starting it up; assuming nothing network/firewall related is blocking it, you should be able to reach it.
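To rule out a host firewall, a quick check on the server itself (a sketch, assuming a firewalld-based distro such as RHEL/CentOS; adapt for ufw or plain iptables):
sudo firewall-cmd --list-ports
sudo firewall-cmd --permanent --add-port=4502/tcp
sudo firewall-cmd --reload
The first command shows whether 4502/tcp is already open; the next two open it and reload the rules.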

Powershell Invoke-WebRequest returns "remote name could not be resolved" but not proxy error

I am trying to download a page using wget (an alias for Invoke-WebRequest) in PowerShell. The page in question is www.privatahyresvärdar.nu.
Using Internet Explorer I can navigate to www.privatahyresvärdar.nu, but I cannot run wget from PowerShell, nor can I ping the site. Neither command can resolve the hostname.
I have followed several pieces of advice on SO and other sites that point to proxies as a source of wget failures, but I am not using any proxy.
Please help me identify the error source!
After a moment of clarity it struck me that it might be an encoding error! After replacing the hostname with its punycode form using an online punycode encoder (first hit on Google), it worked like a charm!
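If you would rather do the conversion locally, a minimal sketch using Python's built-in idna codec (assuming python3 is available) is:
python3 -c "print('www.privatahyresvärdar.nu'.encode('idna').decode())"
The resulting xn-- form of the hostname can then be passed straight to Invoke-WebRequest or ping.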

Grafana fails to start server

I'm trying to install Grafana on a server, and installation goes through properly. However, when I try to start the service (using sudo service grafana start) it fails with the cryptic message:
2016/02/11 18:45:38 [web.go:93 StartServer()] [E] Fail to start server: open : no such file or directory
I have been unable to find an answer to this.
I assume that I'm simply missing an apt-get package or something really simple, but there's no more information than this.
Anyone have an idea?
Thanks for your time.
EDIT:
While I was unable to solve the actual problem, I realized that although I had configured the server to run over HTTPS, SSL is actually handled by my host's proxy, so the server should run internally over HTTP. After changing this, the server started properly. It's not a solution to this specific problem, but it may point others with the same problem in the right direction:
the problem had to do with running over HTTPS.
Good luck!
When configuring Grafana to use HTTPS you need to specify the cert and key paths; it looks likely that Grafana could not find one of them.
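For reference, a minimal sketch of the relevant section of grafana.ini (the paths below are placeholders; point them at your actual certificate and key):
[server]
protocol = https
http_port = 3000
cert_file = /etc/grafana/ssl/grafana.crt
cert_key = /etc/grafana/ssl/grafana.key
An empty or missing path here would be consistent with the "open : no such file or directory" error above.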

Proxy setting in gsutil tool

I use the gsutil tool to download archives from Google Storage.
I use the following CMD command:
python c:\gsutil\gsutil cp gs://pubsite_prod_rev_XXXXXXXXXXXXX/YYYYY/*.zip C:\Tmp\gs
Everything works fine, but if I try to run that command from behind a corporate proxy, I receive an error:
Caught socket error, retrying: [Errno 10051] A socket operation was attempted to an unreachable network
I tried several times to set the proxy settings in the .boto file, but all to no avail.
Has anyone faced such a problem?
Thanks!
Please see the section "I'm connecting through a proxy server, what do I need to do?" at https://developers.google.com/storage/docs/faq#troubleshooting
Basically, you need to configure the proxy settings in your .boto file, and you need to ensure that your proxy allows traffic to accounts.google.com as well as to *.storage.googleapis.com.
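For reference, a minimal sketch of those settings in the [Boto] section of the .boto file (hostname, port, and credentials below are placeholders):
[Boto]
proxy = proxy.example.com
proxy_port = 8080
proxy_user = myuser
proxy_pass = mypassword
The user/pass lines are only needed if the proxy requires authentication.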
A change that fixes some of the proxy support was merged on GitHub yesterday. Please try it out; specifically, overwrite your current copy of this file with the version here:
https://github.com/GoogleCloudPlatform/gsutil/blob/master/gslib/util.py
I believe I am having the same problem, with the proxy settings being ignored under Linux (Ubuntu 12.04.4 LTS) and gsutil 4.2 (downloaded today).
I've been watching tcpdump on the host to confirm that gsutil is attempting to route directly to Google IPs instead of to my proxy server.
It seems that on the first execution of a simple command like "gsutil -d ls" it will use the proxy settings specified in .boto for the first POST and then switch back to attempting to route directly to Google instead of through my proxy server.
Then if I Ctrl-C and re-run the exact same command, the proxy setting is no longer used at all. This difference in behaviour baffles me. If I wait long enough, I think it will work for the initial request again, which suggests some form of caching is taking place. I'm not 100% sure of this behaviour yet because I haven't been able to predict when it occurs.
I also noticed that it always first tries to connect to 169.254.169.254 on port 80 regardless of proxy settings. A grep shows that this is hardcoded into oauth2_client.py, test_utils.py, layer1.py, and utils.py (under different subdirectories of the gsutil root).
I've tried setting the http_proxy environment variable but it appears that there is code that unsets this.
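A minimal sketch of the tcpdump check described above (proxy.example.com and 8080 are placeholders for your proxy):
sudo tcpdump -n -i any host proxy.example.com and port 8080
If nothing shows up here while gsutil is running, the traffic is bypassing the proxy.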

Slow wget speeds when connecting to https pages

I'm using wget to connect to a secure site like this:
wget -nc -i inputFile
where inputFile consists of URLs like this:
https://clientWebsite.com/TheirPageName.asp?orderValue=1.00&merchantID=36&programmeID=92&ref=foo&Ofaz=0
This page returns a small gif file. For some reason, this is taking around 2.5 minutes. When I paste the same URL into a browser, I get back a response within seconds.
Does anyone have any idea what could be causing this?
The version of wget, by the way, is "GNU Wget 1.9+cvs-stable (Red Hat modified)"
I know this is a year old but this exact problem plagued us for days.
Turns out it was our DNS server, but I got around it by disabling IPv6 on my box.
You can test it out prior to making the system change by adding "--inet4-only" to the end of the command (w/o quotes).
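For example, using the command from the question, a quick test before changing anything system-wide would be:
wget --inet4-only -nc -i inputFile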
Try forging your user agent (a combined example using these options follows below):
-U "Mozilla/5.0 (X11; U; Linux i686 (x86_64); en-GB; rv:1.9.0.1) Gecko/2008070206 Firefox/3.0.1"
Disable certificate checking (slow):
--no-check-certificate
Debug what's happening by enabling verbosity:
-v
Eliminate the need for DNS lookups:
Hardcode their IP address in your hosts file
/etc/hosts
123.122.121.120 foo.bar.com
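Putting the options above together (a sketch, using one of the URLs from inputFile as an illustration):
wget -v --no-check-certificate -U "Mozilla/5.0 (X11; U; Linux i686 (x86_64); en-GB; rv:1.9.0.1) Gecko/2008070206 Firefox/3.0.1" "https://clientWebsite.com/TheirPageName.asp?orderValue=1.00&merchantID=36&programmeID=92&ref=foo&Ofaz=0"
Note the URL is quoted so the shell does not interpret the & characters.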
Have you tried profiling the requests using strace/dtrace/truss (depending on your platform)?
There are a wide variety of issues that could be causing this. What version of OpenSSL is being used by wget? There could be an issue there. What OS is this running on (full information would be useful here)?
There could be some form of download slowdown being enforced due to the agent ID being passed by wget implemented on the site to reduce the effects of spiders.
Is wget performing full certificate validation? Have you tried using --no-check-certificate?
Is the certificate on the client site valid? You may want to specify --no-check-certificate if it is a self-signed certificate.
HTTPS (SSL/TLS) Options for wget
One effective solution is to delete the https:// prefix (wget then falls back to plain HTTP).
This accelerated my download by around 100 times.
For instance, suppose you want to download via:
wget https://data.keithito.com/data/speech/LJSpeech-1.1.tar.bz2
You can use the following command instead to speed it up:
wget data.keithito.com/data/speech/LJSpeech-1.1.tar.bz2