How to set up Beanstalk + Nginx to redirect HTTP to HTTPS?

My domain is pointing to a Beanstalk app (DNS ALIAS).
I have already set up SSL certificates properly on my Beanstalk instance.
So now:
http://www.mysite.com -> Beanstalk app with http
https://www.mysite.com -> Beanstalk app with https
I would like to redirect all http requests to https. So http://www.mysite.com -> https://www.mysite.com
I already tried to create an AWS container configuration to implement something like "server { listen 80; return 301 https://www.mysite.com/$request_uri; }", but it is not working.
I have already spent several hours on Google trying to find guidance on how to do this. I found some clues, such as the 301 redirect and rewrite rules, but I have not been able to apply any of the solutions to my Beanstalk EC2 instance.
Perhaps I need a more detailed explanation of how to do that.
Could someone help me, please?
PS: one thing that I am struggling to understand is that the Load Balancer says Load Balancer port 80 points to instance port 80, and Load Balancer port 443 (HTTPS) also points to instance port 80, but with a cipher/SSL certificate.
However, when I examine the nginx configuration files on my EC2 instance, I only find a "server { listen 8080" block, not "listen 80".
Thank you all.

I found this solution online. Add .ebextensions/00_nginx_https_rw.config:
files:
  "/tmp/45_nginx_https_rw.sh":
    owner: root
    group: root
    mode: "000644"
    content: |
      #!/bin/bash
      CONFIGURED=`grep -c "return 301 https" /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf`
      if [ $CONFIGURED = 0 ]
      then
        sed -i '/listen 8080;/a \ if ($http_x_forwarded_proto = "http") { return 301 https://$host$request_uri; } \n' /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf
        logger -t nginx_rw "https rewrite rules added"
        exit 0
      else
        logger -t nginx_rw "https rewrite rules already set"
        exit 0
      fi

container_commands:
  00_appdeploy_rewrite_hook:
    command: cp -v /tmp/45_nginx_https_rw.sh /opt/elasticbeanstalk/hooks/appdeploy/enact
  01_configdeploy_rewrite_hook:
    command: cp -v /tmp/45_nginx_https_rw.sh /opt/elasticbeanstalk/hooks/configdeploy/enact
  02_rewrite_hook_perms:
    command: chmod 755 /opt/elasticbeanstalk/hooks/appdeploy/enact/45_nginx_https_rw.sh /opt/elasticbeanstalk/hooks/configdeploy/enact/45_nginx_https_rw.sh
  03_rewrite_hook_ownership:
    command: chown root:users /opt/elasticbeanstalk/hooks/appdeploy/enact/45_nginx_https_rw.sh /opt/elasticbeanstalk/hooks/configdeploy/enact/45_nginx_https_rw.sh
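For reference, after this hook runs, the relevant part of /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf should look roughly like this (a sketch; only the redirect lines are added, and X-Forwarded-Proto is the header the load balancer sets to indicate the original scheme):

server {
    listen 8080;

    if ($http_x_forwarded_proto = "http") {
        return 301 https://$host$request_uri;
    }

    # ... the rest of the Beanstalk-generated proxy configuration ...
}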

Based on the code above, this is the code that I used to redirect HTTP requests to HTTPS for a standalone (i.e. not behind a load balancer) Docker image:
files:
  "/tmp/000_nginx_https_redirect.sh":
    owner: root
    group: root
    mode: "000644"
    content: |
      #!/bin/bash
      sed -i 's/80;/80;\n    return 301 https:\/\/$http_host$request_uri;\n/' /etc/nginx/sites-available/elasticbeanstalk-nginx-docker-proxy.conf

container_commands:
  00_appdeploy_rewrite_hook:
    command: cp -v /tmp/000_nginx_https_redirect.sh /opt/elasticbeanstalk/hooks/appdeploy/enact
  01_configdeploy_rewrite_hook:
    command: cp -v /tmp/000_nginx_https_redirect.sh /opt/elasticbeanstalk/hooks/configdeploy/enact
  02_rewrite_hook_perms:
    command: chmod 755 /opt/elasticbeanstalk/hooks/appdeploy/enact/000_nginx_https_redirect.sh /opt/elasticbeanstalk/hooks/configdeploy/enact/000_nginx_https_redirect.sh
  03_rewrite_hook_ownership:
    command: chown root:users /opt/elasticbeanstalk/hooks/appdeploy/enact/000_nginx_https_redirect.sh /opt/elasticbeanstalk/hooks/configdeploy/enact/000_nginx_https_redirect.sh

For those who don't use a Load Balancer, the if block from user3888643's answer wouldn't work, because without a proxy in front no X-Forwarded-Proto header is set. So I removed it completely (not sure if this solution has any problems), and it works for me. I changed:
sed -i '/listen 8080;/a \ if ($http_x_forwarded_proto = "http") { return 301 https://$host$request_uri; }\n' /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf
to:
sed -i '/listen 8080;/a \ return 301 https://$host$request_uri;\n' /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf

I wasn't sure whether user3888643's answer was still correct, since AWS updated the way some of their setup scripts run on Elastic Beanstalk earlier this year, but I just checked with AWS support: this is still the advised solution. Add a file to .ebextensions, e.g. .ebextensions/00_nginx_https_rw.config, with the following contents:
files:
  "/tmp/45_nginx_https_rw.sh":
    owner: root
    group: root
    mode: "000644"
    content: |
      #!/usr/bin/env bash
      CONFIGURED=`grep -c "return 301 https" /opt/elasticbeanstalk/support/conf/webapp.conf`
      if [ $CONFIGURED = 0 ]
      then
        sed -i '/ location \/ {/a \ if ($http_x_forwarded_proto = "http") { \n return 301 https://$host$request_uri;\n }' /opt/elasticbeanstalk/support/conf/webapp.conf
        logger -t nginx_rw "https rewrite rules added"
        exit 0
      else
        logger -t nginx_rw "https rewrite rules already set"
        exit 0
      fi

container_commands:
  00_appdeploy_rewrite_hook:
    command: cp -v /tmp/45_nginx_https_rw.sh /opt/elasticbeanstalk/hooks/appdeploy/enact
  01_configdeploy_rewrite_hook:
    command: cp -v /tmp/45_nginx_https_rw.sh /opt/elasticbeanstalk/hooks/configdeploy/enact
  02_rewrite_hook_perms:
    command: chmod 755 /opt/elasticbeanstalk/hooks/appdeploy/enact/45_nginx_https_rw.sh /opt/elasticbeanstalk/hooks/configdeploy/enact/45_nginx_https_rw.sh
  03_rewrite_hook_ownership:
    command: chown root:users /opt/elasticbeanstalk/hooks/appdeploy/enact/45_nginx_https_rw.sh /opt/elasticbeanstalk/hooks/configdeploy/enact/45_nginx_https_rw.sh
  04_reload_nginx:
    command: /etc/init.d/nginx reload
One thing to look out for: I found I couldn't deploy this because of an interaction with a previous (incorrect) version of the file in .ebextensions. The deployment would fail with an error, even though the file was no longer in the repo being deployed:
[Instance: i-0c767ece] Command failed on instance.
Return code: 6
Output: nginx: [warn] duplicate MIME type "text/html" in /etc/nginx/nginx.conf:38
nginx: [emerg] unknown directive "...." in /etc/nginx/conf.d/000_config.conf:4
nginx: configuration file /etc/nginx/nginx.conf test failed.
container_command 04_reload_nginx in .ebextensions/ssl_redirect.config failed.
For more detail, check /var/log/eb-activity.log using console or EB CLI.
It turned out each instance still had a copy of the previously deployed file in /etc/nginx/conf.d/, so I had to go into each instance and delete my old config files there; once I did that, the deployment went through fine.
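On each affected instance, the cleanup amounts to something like the following (a sketch; 000_config.conf stands for whatever file your earlier .ebextensions config created):

sudo rm /etc/nginx/conf.d/000_config.conf
sudo nginx -t && sudo service nginx reload   # verify the config before reloading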

Brute forcing HTTP digest with Hydra

I am having some trouble brute forcing an HTTP digest form with Hydra. I am using the following command, but when proxied through Burp Suite I can see that Hydra is using basic auth, not digest.
How do I get Hydra to use the proper auth type?
Command:
hydra -l admin -P /usr/share/wordlists/rockyou.txt 127.0.0.1 -vV http-get /digest
Request as seen in proxy:
GET /digest HTTP/1.1
Host: 127.0.0.1
Connection: close
Authorization: Basic YWRtaW46aWxvdmV5b3U=
User-Agent: Mozilla/4.0 (Hydra)
I have studied this case: if the digest method is implemented at the web-server level (nginx or Apache), Hydra might work. But if the authentication is implemented in the application framework, such as Flask, Express.js, or Django, it will not work at all.
Instead, you can create a bash script for password spraying:
#!/bin/bash
# Usage: ./app.sh <users file> <passwords file> <url>
# Input redirection (instead of piping from cat) keeps the loops in the
# current shell, so "exit 0" actually stops the whole script on a hit.
while read -r USER; do
  while read -r PASSWORD; do
    # --digest makes curl perform the digest challenge/response handshake
    if curl -s "$3" -c /tmp/cookie --digest -u "$USER:$PASSWORD" | grep -qi "unauth"
    then
      continue
    else
      echo "[+] Found $USER:$PASSWORD"
      exit 0
    fi
  done < "$2"
done < "$1"
Save this file as app.sh
$ chmod +x app.sh
$ ./app.sh /path/to/users.txt /path/to/passwords.txt http://example.com/path
Since no Hydra version was specified, I assume the latest one: 9.2.
@tbhaxor is correct:
Against a server like Apache or nginx, Hydra works. Flask using digest authentication as recommended in its standard documentation does not work (details later). You could add the web server you used so somebody can verify this.
Hydra does not provide explicit parameters to distinguish between basic and digest authentication.
Technically, it first sends a request that attempts to authenticate itself via basic authentication, and then evaluates the corresponding response.
The specification of digest authentication states that the web application has to send a WWW-Authenticate: Digest ... header in the response if the requested document is protected using that scheme.
So Hydra can now distinguish between the two forms of authentication.
If it receives this response (cf. code), it sends a second attempt using digest authentication.
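For illustration, the exchange looks roughly like this (a sketch; the realm and nonce values are made up):

GET /digest HTTP/1.1
Host: 127.0.0.1
Authorization: Basic YWRtaW46aWxvdmV5b3U=

HTTP/1.1 401 Unauthorized
WWW-Authenticate: Digest realm="this is not for you", nonce="dcd98b7102dd2f0e", qop="auth"

GET /digest HTTP/1.1
Host: 127.0.0.1
Authorization: Digest username="admin", realm="this is not for you", nonce="dcd98b7102dd2f0e", uri="/digest", response="6629fae49393a053..."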
The reason you only see basic auth requests and no digest requests is the default setting of what Hydra calls "tasks". This defaults to 16, which means Hydra initially creates 16 threads.
Thus, if you go to the 17th request in your proxy, you will find a request using digest auth. You can also see the difference if you set the number of tasks to 1 with the parameter -t 1.
The following are three Docker setups where you can test the differences between basic auth (nginx), digest auth (nginx), and digest auth (Flask), using admin/password credentials, based on your example.
basic auth:
cat Dockerfile.http_basic_auth
FROM nginx:1.21.3
LABEL maintainer="secf00tprint"
RUN apt-get update && apt-get install -y apache2-utils
RUN touch /usr/share/nginx/html/.htpasswd
RUN htpasswd -db /usr/share/nginx/html/.htpasswd admin password
RUN sed -i '/^ location \/ {/a \ auth_basic "Administrator\x27s Area";\n\ auth_basic_user_file /usr/share/nginx/html/.htpasswd;' /etc/nginx/conf.d/default.conf
:
sudo docker build -f Dockerfile.http_basic_auth -t http-server-basic-auth .
sudo docker run -ti -p 127.0.0.1:8888:80 http-server-basic-auth
:
hydra -l admin -P /usr/share/wordlists/rockyou.txt 127.0.0.1 -s 8888 http-get /
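Before pointing Hydra at it, a quick sanity check that the container is up on port 8888 and the credentials work:

curl -i -u admin:password http://127.0.0.1:8888/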
digest auth (nginx):
cat Dockerfile.http_digest
FROM ubuntu:20.10
LABEL maintainer="secf00tprint"
RUN apt-get update && \
# For digest module
DEBIAN_FRONTEND=noninteractive apt-get install -y curl unzip \
# For nginx
build-essential libpcre3 libpcre3-dev zlib1g zlib1g-dev libssl-dev libgd-dev libxml2 libxml2-dev uuid-dev make apache2-utils expect
RUN curl -O https://nginx.org/download/nginx-1.21.3.tar.gz
RUN curl -OL https://github.com/atomx/nginx-http-auth-digest/archive/refs/tags/v1.0.0.zip
RUN tar -xvzf nginx-1.21.3.tar.gz
RUN unzip v1.0.0.zip
RUN cd nginx-1.21.3 && \
./configure --prefix=/usr/share/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --modules-path=/etc/nginx/modules --add-module=../nginx-http-auth-digest-1.0.0/ && \
make && make install
COPY generate.exp /usr/share/nginx/html/
RUN chmod u+x /usr/share/nginx/html/generate.exp && \
cd /usr/share/nginx/html/ && \
expect -d generate.exp
RUN sed -i '/^ location \/ {/a \ auth_digest "this is not for you";' /etc/nginx/nginx.conf
RUN sed -i '/^ location \/ {/i \ auth_digest_user_file /usr/share/nginx/html/passwd.digest;' /etc/nginx/nginx.conf
CMD nginx && tail -f /var/log/nginx/access.log -f /var/log/nginx/error.log
:
cat generate.exp
#!/usr/bin/expect
set timeout 70
spawn "/usr/bin/htdigest" "-c" "passwd.digest" "this is not for you" "admin"
expect "New password: " {send "password\r"}
expect "Re-type new password: " {send "password\r"}
wait
:
sudo docker build -f Dockerfile.http_digest -t http_digest .
sudo docker run -ti -p 127.0.0.1:8888:80 http_digest
:
hydra -l admin -P /usr/share/wordlists/rockyou.txt 127.0.0.1 -s 8888 http-get /
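Again, you can verify the digest handshake manually before running Hydra; curl's --digest flag performs the challenge/response exchange:

curl -i --digest -u admin:password http://127.0.0.1:8888/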
digest auth (Flask):
cat Dockerfile.http_digest_flask
FROM ubuntu:20.10
LABEL maintainer="secf00tprint"
RUN apt-get update -y && \
apt-get install -y python3-pip python3-dev
# We copy just the requirements.txt first to leverage Docker cache
COPY ./requirements.txt /app/requirements.txt
WORKDIR /app
RUN pip3 install -r requirements.txt
COPY ./app.py /app/
CMD ["flask", "run", "--host=0.0.0.0"]
:
cat requirements.txt
Flask==2.0.2
Flask-HTTPAuth==4.5.0
:
cat app.py
from flask import Flask
from flask_httpauth import HTTPDigestAuth

app = Flask(__name__)
app.secret_key = 'super secret key'
auth = HTTPDigestAuth()

users = {
    "admin": "password",
    "john": "hello",
    "susan": "bye"
}

@auth.get_password
def get_pw(username):
    if username in users:
        return users.get(username)
    return None

@app.route("/")
@auth.login_required
def hello_world():
    return "<p>Flask Digest Demo</p>"
:
sudo docker build -f Dockerfile.http_digest_flask -t digest_flask .
sudo docker run -ti -p 127.0.0.1:5000:5000 digest_flask
:
hydra -l admin -P /usr/share/wordlists/rockyou.txt 127.0.0.1 -s 5000 http-get /
If you want more information, I wrote about this in more detail here.

gss_accept_sec_context() error:ASN.1 structure is missing a required field

I'm trying to implement Kerberos authentication on Ubuntu.
run_kerberos_server.sh
#!/usr/bin/env bash
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
docker stop krb5-server && docker rm krb5-server && true
docker run -d --network=altexy --name krb5-server \
-e KRB5_REALM=EXAMPLE.COM -e KRB5_KDC=localhost -e KRB5_PASS=12345 \
-v /etc/localtime:/etc/localtime:ro \
-v /etc/timezone:/etc/timezone:ro \
--network-alias example.com \
-p 88:88 -p 464:464 -p 749:749 gcavalcante8808/krb5-server
echo "=== Init krb5-server docker container ==="
docker exec krb5-server /bin/sh -c "
# Create users bob as normal user
# and add principal for the service
cat << EOF | kadmin.local
add_principal -randkey \"HTTP/service.example.com@EXAMPLE.COM\"
ktadd -k /etc/krb5-service.keytab -norandkey \"HTTP/service.example.com@EXAMPLE.COM\"
ktadd -k /etc/admin.keytab -norandkey \"admin/admin@EXAMPLE.COM\"
listprincs
quit
EOF
"
echo "=== Copy keytabs ==="
docker cp krb5-server:/etc/krb5-service.keytab "${DIR}"/krb5-service.keytab
docker cp krb5-server:/etc/admin.keytab "${DIR}"/admin.keytab
Get Kerberos ticket:
alex@alex-secfense:~/projects/proxy-auth/etc/kerberos$ kinit admin/admin@EXAMPLE.COM
Password for admin/admin@EXAMPLE.COM:
alex@alex-secfense:~/projects/proxy-auth/etc/kerberos$ klist
Ticket cache: FILE:/tmp/krb5cc_1000
Default principal: admin/admin@EXAMPLE.COM
Valid starting       Expires              Service principal
16.12.2020 12:05:38  17.12.2020 00:05:38  krbtgt/EXAMPLE.COM@EXAMPLE.COM
	renew until 17.12.2020 12:05:35
Then I start nginx, also in a Docker container; the image is derived from openresty/openresty:xenial.
My /etc/hosts file has a 127.0.0.1 service.example.com line.
My Firefox is configured with network.negotiate-auth.trusted-uris = service.example.com.
I open the service.example.com:<mapped_port> page in Firefox; nginx responds with 401 and Firefox sends an Authorization: Negotiate ... header.
My server side code (error and result handling is stripped):
MYAPI int authenticate(const char* token, size_t length)
{
    gss_buffer_desc service = GSS_C_EMPTY_BUFFER;
    gss_name_t my_gss_name = GSS_C_NO_NAME;
    gss_cred_id_t my_gss_creds = GSS_C_NO_CREDENTIAL;
    OM_uint32 minor_status;
    OM_uint32 major_status;
    gss_ctx_id_t gss_context = GSS_C_NO_CONTEXT;
    gss_name_t client_name = GSS_C_NO_NAME;
    gss_buffer_desc output_token = GSS_C_EMPTY_BUFFER;
    gss_buffer_desc input_token = GSS_C_EMPTY_BUFFER;

    input_token.length = length;
    input_token.value = (void*)token;

    major_status = gss_accept_sec_context(&minor_status, &gss_context, my_gss_creds, &input_token,
                                          GSS_C_NO_CHANNEL_BINDINGS, &client_name, NULL, &output_token,
                                          NULL, NULL, NULL);
    return 0;
}
Eventually, I get the error gss_accept_sec_context() error: ASN.1 structure is missing a required field.
The same code works great with a Windows Kerberos setup.
Any idea what this means or how to debug the issue?
I defined the KRB5_TRACE=/<log_file_name> environment variable and see lines like the ones below:
[7] 1607798057.341744: Sending request (937 bytes) to EXAMPLE.COM
[6] 1608109670.292389: Sending request (937 bytes) to EXAMPLE.COM
[6] 1608109670.660887: Sending request (937 bytes) to EXAMPLE.COM
Could it be a DNS issue?
UPDATE: I failed to mention that I specify the keytab file to use on the server side before calling gss_accept_sec_context (again, error handling is stripped out):
OM_uint32 major_status = gsskrb5_register_acceptor_identity(keytab_filename);
Your code breaks the fundamental concept of context completion. It violates RFC 7546 and is not trustworthy, and you completely ignore the major/minor status codes. Your tokens are getting modified somehow in flight, because the ASN.1 encoding is broken.
Dump the token before and after transmission and compare.
Start with the gss-server and gss-client sample programs first.
Read their code and implement yours alike. Do not deviate from the imperative of the context loop completion.
Show the ticket cache after Firefox has obtained a service ticket.
As soon as you have the tokens, inspect them with https://lapo.it/asn1js/.
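For reference, the acceptor loop that RFC 7546 describes looks roughly like this (a minimal sketch; receive_token_from_peer, send_token_to_peer, and report_gss_error are hypothetical helpers standing in for your transport and logging):

gss_ctx_id_t ctx = GSS_C_NO_CONTEXT;
gss_name_t client_name = GSS_C_NO_NAME;
OM_uint32 major, minor;

do {
    gss_buffer_desc input_token = GSS_C_EMPTY_BUFFER;
    gss_buffer_desc output_token = GSS_C_EMPTY_BUFFER;

    receive_token_from_peer(&input_token);          /* hypothetical transport helper */
    major = gss_accept_sec_context(&minor, &ctx, GSS_C_NO_CREDENTIAL,
                                   &input_token, GSS_C_NO_CHANNEL_BINDINGS,
                                   &client_name, NULL, &output_token,
                                   NULL, NULL, NULL);
    if (output_token.length > 0) {
        send_token_to_peer(&output_token);          /* hypothetical transport helper */
        gss_release_buffer(&minor, &output_token);
    }
    if (GSS_ERROR(major)) {
        report_gss_error(major, minor);             /* always surface both status codes */
        break;
    }
} while (major & GSS_S_CONTINUE_NEEDED);            /* loop until the context is complete */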

OWASP/ZAP dangling when trying to scan

I am trying out OWASP ZAP to see if it is something we can use for our project, but I cannot make it work, I don't know what I am doing wrong, and the documentation really does not help. What I am trying to do is run a scan on my API, which runs in a Docker container locally on my Windows machine, so I run the command:
docker run -v $(pwd):/zap/wrk/:rw -t owasp/zap2docker-stable zap-baseline.py -t http://172.21.0.2:8080/swagger.json -g gen.conf -r testreport.html
(The IP 172.21.0.2 is the IP address of my API container; I even tried localhost and 127.0.0.1.)
But it just hangs at the following log message:
_XSERVTransmkdir: ERROR: euid != 0,directory /tmp/.X11-unix will not be created.
Feb 14, 2019 1:43:31 PM java.util.prefs.FileSystemPreferences$1 run
INFO: Created user preferences directory.
Nothing happens and my ZAP Docker container is in an unhealthy state; after some time it just crashes and ends up with a bunch of NullPointerExceptions. Does the ZAP Docker image only work on Linux? Is there something specific I need to do when running it on a Windows machine? I don't get why this is not working even though I am specifically following the guideline at https://github.com/zaproxy/zaproxy/wiki/Docker
Edit 1
My latest attempt, targeting my host IP address directly and the port my API is exposed on, gives me the following error:
_XSERVTransmkdir: ERROR: euid != 0,directory /tmp/.X11-unix will not be created.
Feb 14, 2019 2:12:07 PM java.util.prefs.FileSystemPreferences$1 run
INFO: Created user preferences directory.
Total of 3 URLs
ERROR Permission denied
2019-02-14 14:12:57,116 I/O error(13): Permission denied
Traceback (most recent call last):
File "/zap/zap-baseline.py", line 347, in main
with open(base_dir + generate, 'w') as f:
IOError: [Errno 13] Permission denied: '/zap/wrk/gen.conf'
Found Java version 1.8.0_151
Available memory: 3928 MB
Setting jvm heap size: -Xmx982m
213 [main] INFO org.zaproxy.zap.DaemonBootstrap
When you run Docker with docker run -v $(pwd):/zap/wrk/:rw ..., you are mapping the /zap/wrk/ directory in the Docker image to the current working directory (cwd) of the machine on which you are running Docker.
I think the problem is that your current user doesn't have write access to the cwd.
Try the command below; hopefully it resolves the issue:
$ docker run --user $(id -u):$(id -g) -v $(pwd):/zap/wrk/:rw --rm -t owasp/zap2docker-stable zap-baseline.py -t https://your_url -g gen.conf -r testreport.html
The key error here is:
IOError: [Errno 13] Permission denied: '/zap/wrk/gen.conf'
This means that the script cannot write to the gen.conf file that you have mounted on /zap/wrk
Do you have write access to the cwd when it's not mounted?
The reason for that is: if you use the -r parameter, ZAP will attempt to generate the report file report.html at /zap/wrk/. In order to make this work, we have to mount a directory to /zap/wrk.
But when you do so, it is important that the ZAP container is able to perform write operations on the mounted directory.
So, below is a working solution using GitLab CI YAML. I started with the approach of using image: owasp/zap2docker-stable, but then had to fall back to vanilla docker commands to execute it:
test_site:
  stage: test
  image: docker:latest
  script:
    # The folder zap-reports created locally will be mounted into the owasp/zap2docker container;
    # on execution it will generate the reports in this folder. The current user is passed so reports can be generated.
    - mkdir zap-reports
    - cd zap-reports
    - docker pull owasp/zap2docker-stable:latest || echo
    - docker run --name zap-container --rm -v $(pwd):/zap/wrk -u $(id -u ${USER}):$(id -g ${USER}) owasp/zap2docker-stable zap-baseline.py -t "https://example.com" -r report.html
  artifacts:
    when: always
    paths:
      - zap-reports
  allow_failure: true
So the tricks in the above code are:
Mount the local directory zap-reports to /zap/wrk, as in $(pwd):/zap/wrk
Pass the current user and group on the host machine to the Docker container so the process runs as the same user/group. This allows writing the reports to the directory mounted from the local host. This is done by -u $(id -u ${USER}):$(id -g ${USER})
Below is the working code with image: owasp/zap2docker-stable:

test_site:
  variables:
    GIT_STRATEGY: none
  stage: test
  image:
    name: owasp/zap2docker-stable:latest
  before_script:
    - mkdir -p /zap/wrk
  script:
    - zap-baseline.py -t "https://example.com" -g gen.conf -I -r testreport.html
    - cp /zap/wrk/testreport.html testreport.html
  artifacts:
    when: always
    paths:
      - zap.out
      - testreport.html

How to set up cron using curl command?

After an Apache rebuild, my cron jobs stopped working.
I used the following command:
wget -O - -q -t 1 http://example.com/cgi-bin/loki/autobonus.pl
Now my DC support suggests I change from wget to curl. What would be the correct command in this case?
-O - is equivalent to curl's default behavior, so that's easy.
-q is curl's -s (or --silent)
--retry N will substitute for wget's -t N
All in all:
curl -s --retry 1 http://example.com/cgi-bin/loki/autobonus.pl
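If the job runs from a crontab, the updated entry would look something like this (a sketch; the every-15-minutes schedule is made up, so keep whatever schedule your original wget entry used):

*/15 * * * * /usr/bin/curl -s --retry 1 http://example.com/cgi-bin/loki/autobonus.pl > /dev/null 2>&1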
Try changing the command to use the full path of wget:
/usr/bin/wget -O - -q -t 1 http://example.com/cgi-bin/loki/autobonus.pl
You can find the full path with:
which wget
Also, check whether you can reach the destination domain with ping or other methods:
ping example.com
Update:
Based on the comments, this seems to be caused by the following line in /etc/hosts:
127.0.0.1 example.com #change example.com to the real domain
It seems that you have restricted options, in the sense that on the server where the cron should run the domain is pinned to 127.0.0.1, but the virtual host configuration does not work with that.
What you can do is let wget connect by IP but send the Host header so that virtual host matching works:
wget -O - -q -t 1 --header 'Host: example.com' http://xx.xx.35.162/cgi-bin/loki/autobonus.pl
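Since the suggestion was to switch to curl anyway, the equivalent there is roughly (same idea: connect by IP, force the Host header):

curl -s --retry 1 -H 'Host: example.com' http://xx.xx.35.162/cgi-bin/loki/autobonus.pl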
Update:
You probably don't need to run this through the web server at all, so why not just run:
perl /path/to/your/script/autobonus.pl

Unable to run "gearman" command line tool with gearman 1.1.6

I am trying to run the example from http://gearman.org/getting_started on Ubuntu in a VirtualBox environment.
At first I installed an old version, 0.16, using apt-get install gearman-job-server and apt-get install gearman-tools, and everything worked well. The server ran in the background, I was able to create 2 workers, and I verified that I could call them by creating a client.
Then I decided to download and compile the latest version, 1.1.6. Now I am trying to do the same thing with the new version, and I am getting errors.
I run the server as admin:
sudo gearmand
The statement
gearadmin --getpid
seems to work; it returns the process ID of the server. Thus, the server is running, and this answer is not relevant.
Now, I am adding a worker:
gearman -w -f wc -- wc -l
It seems to run.
Nevertheless,
gearadmin --workers
results in something that probably represents an empty list:
33 127.0.0.1 - :
.
(In version 0.16, I was able to see 2 lines, the second showing the registered function name.)
Attempting to run the client
gearman -f wc < /etc/passwd
results in
gearman: gearman_client_run_tasks : flush(GEARMAN_COULD_NOT_CONNECT) localhost:0 -> libgearman/connection.cc:671
This might be the very same problem described here (the port is not specified), but I have no idea how to specify it through the command-line tool.
Any idea?
OK, it looks like the answer here was the key to success. Probably the "getting started" section has not been updated for a while. Indeed, one must specify the port explicitly for both gearmand and gearman.
Server:
sudo gearmand -p 5000
Worker:
gearman -p 5000 -w -f wc -- wc -l
Client:
gearman -p 5000 -f wc < /etc/passwd