I am using minikube in no-driver mode (sudo minikube start --vm-driver none) and I can't free port 80.
With
sudo netstat -nlplute
I get:
tcp 0 0 192.168.0.14:2380 0.0.0.0:* LISTEN 0 58500 7200/etcd
tcp6 0 0 :::80 :::* LISTEN 0 62030 8681/docker-proxy
tcp6 0 0 :::8080 :::* LISTEN 0 57318 8656/docker-proxy
I tried to stop minikube, but that doesn't seem to work when using driver=none.
How should I free port 80?
EDIT: Full netstat output
➜ ~ sudo netstat -nlpute
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State User Inode PID/Program name
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 102 35399 1019/systemd-resolv
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN 0 6629864 11358/cupsd
tcp 0 0 127.0.0.1:5432 0.0.0.0:* LISTEN 128 45843 1317/postgres
tcp 0 0 127.0.0.1:6942 0.0.0.0:* LISTEN 1000 14547489 16086/java
tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 0 58474 1053/kubelet
tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 0 71361 10409/kube-proxy
tcp 0 0 127.0.0.1:45801 0.0.0.0:* LISTEN 0 57445 1053/kubelet
tcp 0 0 192.168.0.14:2379 0.0.0.0:* LISTEN 0 56922 7920/etcd
tcp 0 0 127.0.0.1:2379 0.0.0.0:* LISTEN 0 56921 7920/etcd
tcp 0 0 192.168.0.14:2380 0.0.0.0:* LISTEN 0 56917 7920/etcd
tcp 0 0 127.0.0.1:2381 0.0.0.0:* LISTEN 0 56084 7920/etcd
tcp 0 0 127.0.0.1:63342 0.0.0.0:* LISTEN 1000 14549242 16086/java
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 0 15699 1/init
tcp 0 0 127.0.0.1:10257 0.0.0.0:* LISTEN 0 60857 7889/kube-controlle
tcp 0 0 127.0.0.1:10259 0.0.0.0:* LISTEN 0 56932 7879/kube-scheduler
tcp 0 0 127.0.0.1:5939 0.0.0.0:* LISTEN 0 48507 2205/teamviewerd
tcp6 0 0 ::1:631 :::* LISTEN 0 6629863 11358/cupsd
tcp6 0 0 :::8443 :::* LISTEN 0 55158 7853/kube-apiserver
tcp6 0 0 :::44444 :::* LISTEN 1000 16217187 7252/___go_build_gi
tcp6 0 0 :::32028 :::* LISTEN 0 74556 10409/kube-proxy
tcp6 0 0 :::10250 :::* LISTEN 0 58479 1053/kubelet
tcp6 0 0 :::30795 :::* LISTEN 0 74558 10409/kube-proxy
tcp6 0 0 :::10251 :::* LISTEN 0 56926 7879/kube-scheduler
tcp6 0 0 :::10252 :::* LISTEN 0 60851 7889/kube-controlle
tcp6 0 0 :::30285 :::* LISTEN 0 74559 10409/kube-proxy
tcp6 0 0 :::31406 :::* LISTEN 0 74557 10409/kube-proxy
tcp6 0 0 :::111 :::* LISTEN 0 15702 1/init
tcp6 0 0 :::80 :::* LISTEN 0 16269016 16536/docker-proxy
tcp6 0 0 :::8080 :::* LISTEN 0 16263128 16524/docker-proxy
tcp6 0 0 :::10256 :::* LISTEN 0 75123 10409/kube-proxy
udp 0 0 0.0.0.0:45455 0.0.0.0:* 115 40296 1082/avahi-daemon:
udp 0 0 224.0.0.251:5353 0.0.0.0:* 1000 16274723 23811/chrome --type
udp 0 0 224.0.0.251:5353 0.0.0.0:* 1000 16270144 23728/chrome
udp 0 0 224.0.0.251:5353 0.0.0.0:* 1000 16270142 23728/chrome
udp 0 0 0.0.0.0:5353 0.0.0.0:* 115 40294 1082/avahi-daemon:
udp 0 0 127.0.0.53:53 0.0.0.0:* 102 35398 1019/systemd-resolv
udp 0 0 192.168.0.14:68 0.0.0.0:* 0 12307745 1072/NetworkManager
udp 0 0 0.0.0.0:111 0.0.0.0:* 0 18653 1/init
udp 0 0 0.0.0.0:631 0.0.0.0:* 0 6628156 11360/cups-browsed
udp6 0 0 :::5353 :::* 115 40295 1082/avahi-daemon:
udp6 0 0 :::111 :::* 0 15705 1/init
udp6 0 0 :::50342 :::* 115 40297 1082/avahi-daemon:
I've reproduced your environment (--vm-driver=none). At first I thought it might be related to Minikube's built-in configuration; however, a clean Minikube does not use port 80 in its default configuration.
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:58:59Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:12:17Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
$ minikube version
minikube version: v1.6.2
$ sudo netstat -nlplute
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State User Inode PID/Program name
tcp 0 0 127.0.0.1:10257 0.0.0.0:* LISTEN 0 49556 9345/kube-controlle
tcp 0 0 127.0.0.1:10259 0.0.0.0:* LISTEN 0 50223 9550/kube-scheduler
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 101 15218 752/systemd-resolve
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 0 21550 1541/sshd
tcp 0 0 127.0.0.1:44197 0.0.0.0:* LISTEN 0 51016 10029/kubelet
tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 0 51043 10029/kubelet
tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 0 52581 10524/kube-proxy
tcp 0 0 127.0.0.1:2379 0.0.0.0:* LISTEN 0 49728 9626/etcd
tcp 0 0 10.156.0.11:2379 0.0.0.0:* LISTEN 0 49727 9626/etcd
tcp 0 0 10.156.0.11:2380 0.0.0.0:* LISTEN 0 49723 9626/etcd
tcp 0 0 127.0.0.1:2381 0.0.0.0:* LISTEN 0 49739 9626/etcd
tcp6 0 0 :::10256 :::* LISTEN 0 52577 10524/kube-proxy
tcp6 0 0 :::22 :::* LISTEN 0 21552 1541/sshd
tcp6 0 0 :::8443 :::* LISTEN 0 49120 9419/kube-apiserver
tcp6 0 0 :::10250 :::* LISTEN 0 51050 10029/kubelet
tcp6 0 0 :::10251 :::* LISTEN 0 50217 9550/kube-scheduler
tcp6 0 0 :::10252 :::* LISTEN 0 49550 9345/kube-controlle
udp 0 0 127.0.0.53:53 0.0.0.0:* 101 15217 752/systemd-resolve
udp 0 0 10.156.0.11:68 0.0.0.0:* 100 15574 713/systemd-network
udp 0 0 127.0.0.1:323 0.0.0.0:* 0 23984 2059/chronyd
udp6 0 0 ::1:323 :::* 0 23985 2059/chronyd
For a good description of what docker-proxy is used for, you can check this article:
When a container starts with its port forwarded to the Docker host on which it runs, in addition to the new process that runs inside the container, you may have noticed an additional process on the Docker host called docker-proxy
This docker-proxy might be something similar to a Docker zombie process, where the container was removed but the allocated port wasn't released. Unfortunately, this seems to be a recurring Docker issue, occurring across versions and operating systems since 2016. As I mentioned, I don't think there is currently a fix for this, but you can find workarounds below.
Workaround 1 - re-link the docker-proxy binary and restart Docker:
cd /usr/libexec/docker/
ln -s docker-proxy-current docker-proxy
service docker restart
Workaround 2 - restart the Docker service:
$ sudo service docker stop
$ sudo service docker start
Workaround 3 - remove Docker's internal network state while the daemon is stopped:
$ sudo service docker stop
# remove all internal docker network state: sudo rm -rf /var/lib/docker/network/files/
$ sudo service docker start
Workaround 4 - restart Docker via systemd:
$ sudo systemctl stop docker
$ sudo systemctl start docker
There are a few GitHub threads mentioning this issue. For more information, please check this and this thread.
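Before trying the workarounds above, it is worth checking whether the port is actually held by a live container rather than a leftover proxy. A minimal sketch (the publish filter is available in reasonably recent Docker versions; <container-id> is a placeholder):
$ sudo docker ps --filter "publish=80"
# if a container shows up, stopping it releases the port and its docker-proxy
$ sudo docker stop <container-id>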
After checking that my port 8080 was also used by docker-proxy, I ran
$ docker ps
and noticed that both port 80 and port 8080 were used by the Traefik controller:
$ kubectl get services
traefik-ingress-service ClusterIP 10.96.199.177 <none> 80/TCP,8080/TCP 25d
When I checked the Traefik service, I found:
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - protocol: TCP
      port: 80
      name: web
    - protocol: TCP
      port: 8080
      name: admin
So, I think this is why I get a docker-proxy. If I need it to use another port, I can change it here. My bad :(
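If I do change the manifest, one way to switch the web port without editing the YAML by hand is kubectl patch. This is only a sketch: it assumes the port-80 entry is the first item in spec.ports, as in the manifest above, and 8081 is a hypothetical replacement port.
$ kubectl patch service traefik-ingress-service --type='json' \
    -p='[{"op": "replace", "path": "/spec/ports/0/port", "value": 8081}]'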
I'm having a hard time trying to connect to a PostgreSQL database on an Ubuntu 18.04 server.
Here is my postgresql.conf file:
port=5432
listen_addresses='*'
pg_hba.conf:
host all all 0.0.0.0/0 md5
The firewall is currently disabled.
Here is the output when I ran the following command (I saw in another thread to do this):
sudo netstat -ltpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 608/systemd-resolve
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 842/sshd
tcp 0 0 127.0.0.1:5432 0.0.0.0:* LISTEN 2922/postgres
tcp 0 0 0.0.0.0:25 0.0.0.0:* LISTEN 1055/master
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 867/nginx: master p
tcp6 0 0 :::22 :::* LISTEN 842/sshd
tcp6 0 0 :::25 :::* LISTEN 1055/master
tcp6 0 0 :::80 :::* LISTEN
I have restarted PostgreSQL after each change using the command:
sudo service postgresql restart
I have tried to access the DB using the Python library psycopg2 on macOS, and I get this error:
could not connect to server: Connection refused
Is the server running on host "<ip_address>" and accepting
TCP/IP connections on port 5432?
What am I missing?
From the netstat output (PostgreSQL is still listening only on 127.0.0.1:5432) it is obvious that you didn't restart PostgreSQL after changing listen_addresses.
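A minimal sketch of applying and verifying the change, assuming the stock Ubuntu packaging where the service is called postgresql:
sudo service postgresql restart
# PostgreSQL should now bind to 0.0.0.0:5432 instead of 127.0.0.1:5432
sudo netstat -ltpn | grep 5432
# the active setting can also be checked from inside the server
sudo -u postgres psql -c "SHOW listen_addresses;"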
When no SSL configs are applied:
pg_hba.conf
host database user 0.0.0.0/0 scram-sha-256
postgresql.conf
listen_addresses = '*'
port = 5432
ssl = on
ssl_cert_file = '/etc/ssl/certs/ssl-cert-snakeoil.pem'
ssl_key_file = '/wtc/ssl/private/ssl-cert-snakeoil.key'
I get the following from netstat -nltp:
smadmin@studymatepro:~$ sudo netstat -nltp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State program name
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 970/systemd-resolve
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1405/sshd
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN 1079/cupsd
tcp 0 0 0.0.0.0:5432 0.0.0.0:* LISTEN 3780/postgres
tcp6 0 0 :::22 :::* LISTEN 1405/sshd
tcp6 0 0 ::1:631 :::* LISTEN 1079/cupsd
tcp6 0 0 :::5432 :::* LISTEN 3780/postgres
smadmin@studymatepro:~$
You can see the remote TCP/IP listener on port 5432, and I can get an SSL connection (server-side authentication only).
Now, when I configure SSL and add client.crt, client.key & root.crt to the client machine:
pg_hba.conf
hostssl database user 0.0.0.0/0 scram-sha-256 clientcert=1
postgresql.conf
listen_addresses = '*'
port = 5432
ssl = on
ssl_cert_file = '/etc/ssl/certs/server.crt'   # my self-signed cert
ssl_key_file = '/etc/ssl/private/server.key'
ssl_ca_file = '/etc/ssl/certs/rootCert.crt'
ssl_ciphers = 'HIGH:MEDIUM:+3DES:!aNULL' # allowed SSL ciphers
ssl_prefer_server_ciphers = on
ssl_ecdh_curve = 'prime256v1'
password_encryption = scram-sha-256
and run netstat -nltp, I get:
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 970/systemd-resolve
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1405/sshd
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN 1079/cupsd
tcp6 0 0 :::22 :::* LISTEN 1405/sshd
tcp6 0 0 ::1:631 :::* LISTEN 1079/cupsd
The remote TCP/IP listener on port 5432 is gone! That's why I'm getting "connection refused": the remote port 5432 is no longer active.
The question is why this happened... Am I doing something wrong?
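One way to narrow this down is to check whether the server actually restarted cleanly with the new certificate settings; if it failed to start, nothing will be listening on 5432 at all. A sketch, assuming the default Ubuntu/Debian packaging and log location:
sudo systemctl status postgresql
# startup errors (e.g. an unreadable or wrongly-permissioned key file) end up in the cluster log
sudo tail -n 50 /var/log/postgresql/postgresql-*-main.log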
I have PostgreSQL running on my Google Cloud instance and I added a firewall rule for "tcp 5432" in the Google Cloud firewall, but I am still unable to connect; even telnet is not working.
officetaskpy@instance-1:/etc/postgresql/9.5/main$ netstat -ntpl
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:5910 0.0.0.0:* LISTEN 9020/Xvnc
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:5432 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:44801 0.0.0.0:* LISTEN 16023/phantomjs
tcp 0 0 0.0.0.0:53619 0.0.0.0:* LISTEN 812/phantomjs
tcp6 0 0 :::22 :::* LISTEN -
tcp6 0 0 :::5432 :::* LISTEN -
Result of the netstat command.
Above is my firewall rule. Is there anything I am missing here?
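Since netstat shows PostgreSQL bound to 0.0.0.0:5432, one thing worth double-checking is the firewall rule itself from the Cloud SDK side. A sketch; the rule name allow-postgres is hypothetical, so substitute the name of your rule:
gcloud compute firewall-rules list
gcloud compute firewall-rules describe allow-postgres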
I am running a Logstash shipper; rsyslog sends logs to Logstash on port 5140, and I can confirm the packets are arriving with:
tcpdump -vvv -A -i any port 5140
I have logstash configured like so:
input {
  udp {
    type => "syslog"
    port => 5140
  }
}
filter { }
output {
  stdout {
    codec => rubydebug
  }
  redis {
    host => "172.30.114.151"
    key => "logstash"
    port => "6379"
    data_type => "list"
  }
}
I have also tried the following for the input:
input {
  syslog {
    port => 5140
  }
}
With that input, netstat shows a TCP listener but not UDP.
I have disabled IPv6 for Logstash with the following flag:
_JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
When I run:
netstat -tulpan
I get:
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1191/sshd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 2135/master
tcp 0 0 0.0.0.0:514 0.0.0.0:* LISTEN 7593/rsyslogd
tcp 0 0 172.26.33.182:22 172.30.230.152:47975 ESTABLISHED 2260/sshd:
tcp 0 0 172.26.33.182:22 172.30.230.151:42811 ESTABLISHED 6781/sshd:
tcp6 0 0 :::22 :::* LISTEN 1191/sshd
tcp6 0 0 :::4440 :::* LISTEN 1296/java
tcp6 0 0 ::1:25 :::* LISTEN 2135/master
tcp6 0 0 :::514 :::* LISTEN 7593/rsyslogd
udp 0 0 0.0.0.0:5140 0.0.0.0:* 8499/java
udp 0 0 0.0.0.0:37934 0.0.0.0:* 653/avahi-daemon: r
udp 0 0 0.0.0.0:5353 0.0.0.0:* 653/avahi-daemon: r
Process 8499 is Logstash. I have tried running as root as well as on other ports. I cannot seem to get Logstash to "listen" on UDP.
I have also confirmed that the port is open and working with:
telnet <ipaddress> 5140
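Note that telnet only exercises TCP; for the UDP input a test datagram can be sent with netcat instead (a sketch, assuming nc is installed; <ipaddress> is the same placeholder as above):
echo "test message" | nc -u -w1 <ipaddress> 5140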
SELinux is disabled:
sestatus
SELinux status: disabled
I need some help with this. I have searched and searched, and I have looked into every other solution I have come across, with no luck. This may seem like a duplicate; however, the other solutions are not working for me. This is a CentOS installation. I have also tried ports 514 and 10514 to no avail.
You have to allow the port in the firewall, as CentOS comes with a default firewall (firewalld) that doesn't allow traffic to reach the Logstash input.
Allow traffic on a specific port with the following command:
firewall-cmd --zone=public --add-port=2888/tcp
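For this particular setup the Logstash input listens on UDP 5140, so the rule would presumably look more like the following (a sketch assuming firewalld and the default public zone; --permanent plus a reload keeps it across restarts):
firewall-cmd --zone=public --permanent --add-port=5140/udp
firewall-cmd --reload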
Or disable the firewall / stop the service with the following commands:
systemctl disable firewalld
systemctl stop firewalld
**Disabling the firewall can be a security concern, but for experimental purposes you can give it a try.