nmap shows port is closed even though I opened it - CentOS

My VPS is running CentOS 7.2. I opened a port with firewall-cmd --zone=public --add-port=8006/tcp --permanent and have already run firewall-cmd --reload, but when I check the port with nmap (nmap -p 8006 ip-addressxxx), it still shows as closed. Here is some information that may help:
[root@localhost ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2017-04-07 02:06:50 EDT; 3 days ago
Docs: man:firewalld(1)
Main PID: 663 (firewalld)
CGroup: /system.slice/firewalld.service
└─663 /usr/bin/python -Es /usr/sbin/firewalld --nofork --nopid
Apr 07 02:06:50 localhost.localdomain systemd[1]: Starting firewalld - dynamic firewall daemon...
Apr 07 02:06:50 localhost.localdomain systemd[1]: Started firewalld - dynamic firewall daemon.
Apr 10 02:03:42 localhost.localdomain firewalld[663]: ERROR: ALREADY_ENABLED: 80:tcp
Apr 10 02:03:49 localhost.localdomain firewalld[663]: ERROR: ALREADY_ENABLED: 8006:tcp
...
[root@localhost ~]# firewall-cmd --list-all
public (active)
target: default
icmp-block-inversion: no
interfaces: ens3
sources:
services: dhcpv6-client ssh
ports: 8009/tcp 80/tcp 8080/tcp 8006/tcp
protocols:
masquerade: no
forward-ports:
sourceports:
icmp-blocks:
rich rules:
...
[root@localhost ~]# firewall-cmd --list-ports
8009/tcp 80/tcp 8080/tcp 8006/tcp
...
[root@localhost ~]# netstat -plunt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 992/sshd
tcp6 0 0 :::8009 :::* LISTEN 1027/java
tcp6 0 0 :::3306 :::* LISTEN 1383/mysqld
tcp6 0 0 :::80 :::* LISTEN 1027/java
tcp6 0 0 :::22 :::* LISTEN 992/sshd
tcp6 0 0 127.0.0.1:8006 :::* LISTEN 1027/java

Revisited my answer
The process you have listening on port 8006 is only listening on the loopback interface (127.0.0.1); it should be listening on all interfaces (0.0.0.0, or ::: for an IPv6 socket). Compare with the sshd process in your list: it binds to 0.0.0.0:22 and works fine.
Use something like netcat to test. This will open port 8006 on the 0.0.0.0 interface, which is open to the world because of your firewall rules.
On your VPS, try:
nc -l 8006
then scan with nmap again and you will see the port is open, provided your firewall rules are in place.
You want to see something like this in the process list
tcp6 0 0 :::8006 :::* LISTEN 1027/java
and not
tcp6 0 0 127.0.0.1:8006 :::* LISTEN 1027/java
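If the Java process behind 8006 is something like Tomcat (just a guess based on the 8009/8080 ports; adjust for whatever actually serves 8006), the bind address is typically controlled in its configuration. A minimal sketch for an HTTP connector in conf/server.xml, assuming that is what should listen on 8006:
<Connector port="8006" protocol="HTTP/1.1" address="0.0.0.0" connectionTimeout="20000" />
After changing the bind address, restart the service and re-check with netstat -plunt that the local address for 8006 is no longer 127.0.0.1.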

Related

Unable to connect to PostgreSQL db on Ubuntu 18.04 Server

I'm having a hard time trying to connect to a PostgreSQL database on an Ubuntu 18.04 server.
Here is my postgresql.conf file:
port=5432
listen_addresses='*'
pg_hba.conf:
host all all 0.0.0.0/0 md5
The firewall is currently disabled.
Here is the output when I ran the following command (saw in another thread to do this):
sudo netstat -ltpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 608/systemd-resolve
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 842/sshd
tcp 0 0 127.0.0.1:5432 0.0.0.0:* LISTEN 2922/postgres
tcp 0 0 0.0.0.0:25 0.0.0.0:* LISTEN 1055/master
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 867/nginx: master p
tcp6 0 0 :::22 :::* LISTEN 842/sshd
tcp6 0 0 :::25 :::* LISTEN 1055/master
tcp6 0 0 :::80 :::* LISTEN
I have restarted PostgreSQL after each change using the command:
sudo service postgresql restart
I have tried to access the db using the Python library psycopg2 on macOS, and I am getting this error:
could not connect to server: Connection refused
Is the server running on host "<ip_address>" and accepting
TCP/IP connections on port 5432?
What am I missing?
From the netstat output it is obvious that you didn't restart PostgreSQL after changing listen_addresses.
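A quick way to verify, assuming a standard systemd-based install (the unit name can vary with how PostgreSQL was packaged):
sudo systemctl restart postgresql
sudo -u postgres psql -c "SHOW listen_addresses;"
sudo netstat -ltpn | grep 5432
After a successful restart with listen_addresses='*', the local address should show 0.0.0.0:5432 instead of 127.0.0.1:5432, and the remote psycopg2 connection should no longer be refused (provided pg_hba.conf allows it and no firewall is in the way).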

How to enable listening on port 10255 in my kubelet service

I am learning to work with Kubernetes and trying to configure monitoring of my Kubernetes cluster. For this I use Metricbeat and ELK.
After deploying and configuring metricbeat, I get an error:
error making http request: Get http://172.16.0.205:10255/stats/summary: dial tcp 172.16.0.205:10255: connect: connection refused
I found that my Kubelet is not listening on port 10255:
[root@kube2 /]# netstat -ap | grep -i "listen" | grep "kubelet"
tcp 0 0 localhost:40450 0.0.0.0:* LISTEN 8560/kubelet
tcp 0 0 localhost:10248 0.0.0.0:* LISTEN 8560/kubelet
tcp6 0 0 [::]:10250 [::]:* LISTEN 8560/kubelet
How can I enable this port? I found information that I need to use the parameter --read-only-port=10255, but I do not quite understand how to apply it to my kubelet. For example:
[root@kube2 /]# kubelet --config --read-only-port=10255
F1010 13:32:48.592306 15851 server.go:196] failed to load Kubelet config file --read-only-port=10255, error failed to read kubelet config file "/--read-only-port=10255", error: open /--read-only-port=10255: no such file or directory
It doesn't work. Which file does it need?
Can anyone help me with a solution to this problem?
I resolved this issue. I added the flag in /var/lib/kubelet/kubelet-flags on every one of my Kubernetes nodes:
KUBELET_KUBEADM_ARGS="--cgroup-driver=systemd --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.1 --read-only-port=10255"
and restarted the kubelet service.
Now port 10255 is open:
[root@kube2 7.1]# netstat -ap | grep -i "listen" | grep "kubelet"
tcp 0 0 localhost:44799 0.0.0.0:* LISTEN 6281/kubelet
tcp 0 0 localhost:10248 0.0.0.0:* LISTEN 6281/kubelet
tcp6 0 0 [::]:10250 [::]:* LISTEN 6281/kubelet
tcp6 0 0 [::]:10255 [::]:* LISTEN 6281/kubelet
And now I see some Kubernetes logs in my Kibana.
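As a side note, the same setting can often be made in the kubelet configuration file rather than as a flag. A minimal sketch, assuming a kubeadm-managed node with the default config path /var/lib/kubelet/config.yaml (a KubeletConfiguration document), is to add:
readOnlyPort: 10255
and then restart the kubelet:
systemctl daemon-reload
systemctl restart kubelet
Keep in mind that the read-only port is unauthenticated, so exposing it is a security trade-off.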

Kubernetes master starts only on tcp6, how to join a node?

I have a local Kubernetes master listening on tcp6:6443 but not on tcp, so how do I run kubeadm join against the right port?
tcp6 0 0 :::10250 :::* LISTEN -
tcp6 0 0 :::6443 :::* LISTEN -
tcp6 0 0 :::10251 :::* LISTEN -
Starting Nmap 7.01 ( https://nmap.org ) at 2019-09-25 15:40 CEST
Nmap scan report for 10.0.2.15
Host is up (0.000081s latency).
PORT STATE SERVICE
6443/tcp closed unknown
You should run the below command (on master host):
$ kubeadm init --apiserver-advertise-address=<private-ip of master host>
--apiserver-advertise-address parameter - if the node should host a new control plane instance, this is the IP address the API server will advertise it is listening on. If not set, the default network interface will be used.
Now try to run the join command that was generated in the output of kubeadm init (its general form is shown below). It should work fine.
Also check whether a firewall is running on your master node; it may be blocking incoming traffic, in which case stop it (or allow the required ports):
systemctl stop firewalld
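For reference, the join command printed at the end of kubeadm init has roughly this form (the token, hash, and address here are placeholders, not values from this cluster):
kubeadm join <private-ip-of-master>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
Run it on the worker node; 6443 is the API server port the node connects to.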

Network connectivity/DNS issues on a GKE 1.10 kubernetes cluster

I'm running into DNS issues on a GKE 1.10 kubernetes cluster. Occasionally pods start without any network connectivity. Restarting the pod tends to fix the issue.
Here's the result of the same few commands inside a container without network connectivity, and in one with it.
BROKEN:
kc exec -it -n iotest app1-b67598997-p9lqk -c userapp sh
/app $ nslookup www.google.com
nslookup: can't resolve '(null)': Name does not resolve
/app $ cat /etc/resolv.conf
nameserver 10.63.240.10
search iotest.svc.cluster.local svc.cluster.local cluster.local c.myproj.internal google.internal
options ndots:5
/app $ curl -I 10.63.240.10
curl: (7) Failed to connect to 10.63.240.10 port 80: Connection refused
/app $ netstat -antp
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:8001 0.0.0.0:* LISTEN 1/python
tcp 0 0 ::1:50051 :::* LISTEN 1/python
tcp 0 0 ::ffff:127.0.0.1:50051 :::* LISTEN 1/python
WORKING:
kc exec -it -n iotest app1-7d985bfd7b-h5dbr -c userapp sh
/app $ nslookup www.google.com
nslookup: can't resolve '(null)': Name does not resolve
Name: www.google.com
Address 1: 74.125.206.147 wk-in-f147.1e100.net
Address 2: 74.125.206.105 wk-in-f105.1e100.net
Address 3: 74.125.206.99 wk-in-f99.1e100.net
Address 4: 74.125.206.104 wk-in-f104.1e100.net
Address 5: 74.125.206.106 wk-in-f106.1e100.net
Address 6: 74.125.206.103 wk-in-f103.1e100.net
Address 7: 2a00:1450:400c:c04::68 wk-in-x68.1e100.net
/app $ cat /etc/resolv.conf
nameserver 10.63.240.10
search iotest.svc.cluster.local svc.cluster.local cluster.local c.myproj.internal google.internal
options ndots:5
/app $ curl -I 10.63.240.10
HTTP/1.1 404 Not Found
date: Sun, 29 Jul 2018 15:13:47 GMT
server: envoy
content-length: 0
/app $ netstat -antp
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:15000 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:15001 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:8001 0.0.0.0:* LISTEN 1/python
tcp 0 0 10.60.2.6:56508 10.60.48.22:9091 ESTABLISHED -
tcp 0 0 127.0.0.1:57768 127.0.0.1:50051 ESTABLISHED -
tcp 0 0 10.60.2.6:43334 10.63.255.44:15011 ESTABLISHED -
tcp 0 0 10.60.2.6:15001 10.60.45.26:57160 ESTABLISHED -
tcp 0 0 10.60.2.6:48946 10.60.45.28:9091 ESTABLISHED -
tcp 0 0 127.0.0.1:49804 127.0.0.1:50051 ESTABLISHED -
tcp 0 0 ::1:50051 :::* LISTEN 1/python
tcp 0 0 ::ffff:127.0.0.1:50051 :::* LISTEN 1/python
tcp 0 0 ::ffff:127.0.0.1:50051 ::ffff:127.0.0.1:49804 ESTABLISHED 1/python
tcp 0 0 ::ffff:127.0.0.1:50051 ::ffff:127.0.0.1:57768 ESTABLISHED 1/python
These pods are identical, just one was restarted.
Does anyone have advice about how to analyse and fix this issue?
Some steps to try:
1) ifconfig eth0 (or whatever the primary interface is). Is the interface up? Are the tx and rx packet counts increasing?
2) If the interface is up, you can try tcpdump while running the nslookup command that you posted, and see if the DNS request packets are getting sent out (see the sketch after this list).
3) See which node the pod is scheduled on when network connectivity gets broken. Maybe it is on the same node every time? If yes, are other pods on that node running into a similar problem?
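A minimal capture sketch for step 2, assuming the pod's primary interface is eth0 and tcpdump is available in the container (or on the node, against the pod's veth interface):
tcpdump -ni eth0 port 53
# in a second shell inside the same pod:
nslookup www.google.com
If no packets toward the cluster DNS IP (10.63.240.10 in the resolv.conf above) show up, the requests are being dropped before they leave the pod.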
I also faced the same problem, and I simply worked around it for now by switching to the 1.9.x GKE version (after spending many hours trying to debug why my app wasn't working).
Hope this helps!

Logstash not listening on UDP port 5140

I am running a Logstash shipper. rsyslog sends logs to Logstash on port 5140, and I can confirm the packets are arriving with:
tcpdump -vvv -A -i any port 5140
I have logstash configured like so:
input {
  udp {
    type => "syslog"
    port => 5140
  }
}
filter { }
output {
  stdout {
    codec => rubydebug
  }
  redis {
    host => "172.30.114.151"
    key => "logstash"
    port => "6379"
    data_type => "list"
  }
}
I have also tried the following for the input:
input {
  syslog {
    port => 5140
  }
}
which netstat then shows as a TCP listener but not UDP.
I have disabled IPv6 for Logstash with the following flag:
_JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
When I run:
netstat -tulpan
I get:
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1191/sshd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 2135/master
tcp 0 0 0.0.0.0:514 0.0.0.0:* LISTEN 7593/rsyslogd
tcp 0 0 172.26.33.182:22 172.30.230.152:47975 ESTABLISHED 2260/sshd:
tcp 0 0 172.26.33.182:22 172.30.230.151:42811 ESTABLISHED 6781/sshd:
tcp6 0 0 :::22 :::* LISTEN 1191/sshd
tcp6 0 0 :::4440 :::* LISTEN 1296/java
tcp6 0 0 ::1:25 :::* LISTEN 2135/master
tcp6 0 0 :::514 :::* LISTEN 7593/rsyslogd
udp 0 0 0.0.0.0:5140 0.0.0.0:* 8499/java
udp 0 0 0.0.0.0:37934 0.0.0.0:* 653/avahi-daemon: r
udp 0 0 0.0.0.0:5353 0.0.0.0:* 653/avahi-daemon: r
Process 8499 is Logstash. I have tried running as root, as well as on other ports. I cannot seem to get Logstash to "listen" on UDP.
I have also confirmed that the port is open and working with:
telnet <ipaddress> 5140
SELinux is disabled:
sestatus
SELinux status: disabled
I need some help with this. I have searched and searched, and I have looked into every other solution I have come across with no luck. This may seem like a duplicate; however, the other solutions are not working for me. This is a CentOS installation. I have also tried ports 514 and 10514 to no avail.
You have to allow the port in the firewall, as CentOS ships with a default firewall (firewalld) that doesn't allow traffic to reach the Logstash input.
Allow traffic on a specific port with the following command:
firewall-cmd --zone=public --add-port=2888/tcp
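For the setup in this question (a UDP input on 5140), the equivalent command would be, assuming the default public zone:
firewall-cmd --zone=public --add-port=5140/udp --permanent
firewall-cmd --reload
The --permanent flag keeps the rule across reloads and reboots; without it the rule only applies to the running configuration.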
Alternatively, disable the firewall or stop the service with the following commands:
systemctl disable firewalld
systemctl stop firewalld
Disabling the firewall can be a security concern, but for experimental purposes you can give it a try.
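Once the port is open, a quick way to confirm that the UDP input actually receives data (assuming nc/ncat is installed on the sending host) is:
echo "test message" | nc -u -w1 <ipaddress> 5140
An event should then appear on Logstash's stdout (the rubydebug output). Note that telnet only exercises TCP, so it cannot be used to verify a UDP listener.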