Currently I am trying to set up port forwarding on CentOS with firewall-cmd.
My box has two interfaces: eth0 and eth1.
eth0 represents the internal network and is in zone=public (the default),
eth1 represents the external network and is in zone=external.
eth1 is currently connected to another network which contains a router to the internet.
My external zone looks like this:
external (active)
target: default
icmp-block-inversion: no
interfaces: eth1
sources: 192.168.178.0/24
services: dhcpv6-client http https ssh
ports: 1194/udp
protocols:
masquerade: yes
forward-ports: port=1194:proto=udp:toport=:toaddr=192.168.179.4
sourceports:
icmp-blocks:
rich rules:
I also added a forward rule for port 22:
firewall-cmd --zone=external --add-forward-port=port=22:proto=tcp:toport=22:toaddr=192.168.179.8
However, neither rule works: not the one for 1194/udp and not the one for port 22.
I did verify that port forwarding from our router to this machine works, because if I set up HAProxy to point to the other SSH machine:
frontend sshd
bind 192.168.178.254:22
mode tcp
default_backend ssh
timeout client 1h
backend ssh
mode tcp
server static 192.168.179.8:22 check
and remove the port=22 forward rule, I can connect to it.
SELinux is running in permissive mode.
The public zone looks like this:
public (active)
target: default
icmp-block-inversion: no
interfaces: eth0
sources: 192.168.179.0/24
services: dhcpv6-client http https ssh
ports: 7583/tcp
protocols:
masquerade: no
forward-ports:
sourceports:
icmp-blocks:
rich rules:
Is there anything I'm missing?
I also tried to make it work with iptables directly, but that didn't work at all.
sysctl net.ipv4.ip_forward returns net.ipv4.ip_forward = 1
The Linux box is not the default router for either network; both networks have other routers in place.
It looks like masquerading needs to be turned on for both zones:
firewall-cmd --zone=external --permanent --add-masquerade
firewall-cmd --zone=public --permanent --add-masquerade
firewall-cmd --reload
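You can verify that masquerading is now active in each zone (a quick check using firewalld's query option):
firewall-cmd --zone=external --query-masquerade
firewall-cmd --zone=public --query-masquerade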
The zone target can also be set to reject, which blocks all incoming traffic except for the services and ports explicitly defined in the zone configuration.
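For example, switching a zone over to that target could look like this (a minimal sketch; pick the zone that matches your layout, and note that depending on the firewalld version the reject target is spelled REJECT or %%REJECT%%):
firewall-cmd --permanent --zone=external --set-target=REJECT
firewall-cmd --reload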
Related
I have a local cluster up and running with Kubernetes.
With COVID, I now work in two places: at home and at the office.
When I start my local cluster at home, it works only at home.
When I change location, I get:
Unable to connect to the server: dial tcp 192.168.0.78:8443: connect: no route to host
I tried to update the context with:
minikube update-context
But it doesn't work.
The only solution I've found is to drop Minikube and deploy it again.
Any idea how to fix it without dropping minikube?
On your host PC, get the IP address of the minikube VM by executing:
$ minikube ip
Sample result - xxx.yyy.zzz.qqq
Then create a firewalld rich rule to allow all traffic from this VM to your host:
$ firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="xxx.yyy.zzz.qqq" accept'
If you create and delete your minikube VM frequently, you can also allow the whole subnet:
$ firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="xxx.yyy.zzz.qqq/24" accept'
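Because both rules are added with --permanent, they only become active after reloading the firewall:
$ firewall-cmd --reload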
Take a look: minikube-cannot-connect.
I opened a Kubernetes NodePort on a machine and blocked all traffic to this port with the following rule:
sudo ufw deny 30001
But I can still access that port via the browser. Is this normal? I can't find any information about it in the docs.
Finally found the issue: kube-proxy is writing iptables rules (https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/#is-kube-proxy-writing-iptables-rules) which are matched before the ufw rules that were added manually. This can be confirmed by checking the rule order in the output of iptables -S -v.
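To see this on the node itself (a quick sketch; 30001 is the NodePort from above):
sudo iptables-save | grep 30001        # kube-proxy's rules for the NodePort
sudo iptables -L INPUT -nv --line-numbers | grep -i ufw   # where the ufw chains hook in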
I am running a Tomcat-based application inside a container and a Postgres database container on my Ubuntu host using Docker Compose. They are in the same user-defined Docker bridge network. My firewall is enabled and has no deny rule for port 5432. When the firewall is disabled, my Tomcat application can connect to the database container using either its IP or its service name, but when the firewall is enabled it cannot connect. I have set DOCKER_OPTS="--iptables=false" in docker.conf and restarted Docker. Why does it not connect when the firewall is enabled?
1) These are my active ufw rules:
To Action From
-- ------ ----
2377/tcp ALLOW Anywhere
7946/tcp ALLOW Anywhere
7946/udp ALLOW Anywhere
4789/udp ALLOW Anywhere
22 ALLOW Anywhere
8443 ALLOW 10.20.220.185
8443 ALLOW 10.20.220.78
8081 ALLOW 10.5.0.7
5432 ALLOW Anywhere
8081 ALLOW 10.5.0.5
2377/tcp (v6) ALLOW Anywhere (v6)
7946/tcp (v6) ALLOW Anywhere (v6)
7946/udp (v6) ALLOW Anywhere (v6)
4789/udp (v6) ALLOW Anywhere (v6)
22 (v6) ALLOW Anywhere (v6)
5432 (v6) ALLOW Anywhere (v6)
=========================================================================
2) This is my application configuration to connect to the database using the service name:
driverClass=org.postgresql.Driver
jdbcUrl=jdbc:postgresql://PostgresDatabase:5432/dockerdb
user=dockeruser
Setting --iptables=false means that the Docker daemon cannot configure iptables rules on the host. Those rules, however, are essential when you have ufw enabled.
I am fairly sure this issue will disappear once you delete DOCKER_OPTS="--iptables=false" from the configuration and restart the Docker daemon.
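After removing the option, you can check that Docker has recreated its chains (a quick sketch; the exact rules depend on your Docker version and networks):
sudo systemctl restart docker
sudo iptables -t nat -S | grep -i docker
sudo iptables -S FORWARD | grep -i docker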
During startup, the Docker daemon configures some extra iptables rules so that traffic can flow between containers and between containers and the outside world; otherwise the firewall/ufw would DROP those packets because of its DEFAULT_FORWARD_POLICY.
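On Ubuntu that policy is set in /etc/default/ufw, where it defaults to DROP:
DEFAULT_FORWARD_POLICY="DROP"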
Below is a rough outline of how Docker creates its iptables rules:
Enable NAT for docker0 with the iptables tool.
iptables -I POSTROUTING -t nat -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
Enable communication between containers on docker0.
iptables -I FORWARD -i docker0 -o docker0 -j ACCEPT
Enable communication between containers and the outside world.
iptables -I FORWARD -i docker0 ! -o docker0 -j ACCEPT
Accept any packets belonging to outside connections that are already established.
iptables -I FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
In summary, you have iptables disabled for Docker and the firewall enabled without any compensating rules. It is like locking the door and throwing away the key, but still wanting to go outside. So I strongly suggest not changing any Docker network settings until you fully understand the Docker network architecture and how those components work together.
This is another question asked in a different way; maybe its answers will help you more.
I'm a newbie with MongoDB. As far as I have seen, we always pass constant IP values like 127.0.0.1 or 172.17.0.5 as the bind IP in the mongod.conf file.
This is the bind IP configuration in my mongod.conf:
net:
  port: 27017
  bindIp: 127.0.0.1, 172.17.0.5 # Listen to local interface only, comment to listen on all interfaces.
I have defined an environment variable in the /etc/environment file:
DHOST= 172.17.0.5
When I try to use the configuration below in mongod.conf, I cannot connect to the mongo shell:
net:
  port: 27017
  bindIp: 127.0.0.1, *$DHOST* # Listen to local interface only, comment to listen on all interfaces.
Please help me add an environment variable as the bind IP in the MongoDB configuration.
You should pretty much always bind to IPv4 0.0.0.0 or IPv6 ::0 (that is, “all addresses”) for things that run inside Docker containers. The docker run -p option has an optional field that can limit what IP address on the host a published port will bind to; the container can’t be reached directly from off-host without configuration like this and so trying to bind to specific interfaces within the container isn’t especially helpful.
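As an illustration (hypothetical container name, image tag, and port mapping, not taken from the question), you can let mongod listen on all addresses inside the container and restrict exposure with the published port instead:
$ docker run -d --name mongo \
    -p 127.0.0.1:27017:27017 \
    mongo:6 --bind_ip_all
Here --bind_ip_all makes mongod listen on every interface inside the container, while the 127.0.0.1: prefix on -p keeps the published port reachable only from the host itself.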
I am trying to open up some ports on my compute VM.
For example, I have this in my firewall rules:
$ gcloud compute firewall-rules list
NAME NETWORK SRC_RANGES RULES SRC_TAGS TARGET_TAGS
default-allow-http default 0.0.0.0/0 tcp:80 http-server
default-allow-https default 0.0.0.0/0 tcp:443 https-server
default-allow-icmp default 0.0.0.0/0 icmp
default-allow-internal default 10.128.0.0/9 tcp:0-65535,udp:0-65535,icmp
default-allow-rdp default 0.0.0.0/0 tcp:3389
default-allow-ssh default 0.0.0.0/0 tcp:22
test-24284 default 0.0.0.0/0 tcp:24284 test-tcp-open-24284
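(For reference, a rule like test-24284 above would typically be created with something along these lines; the name, port, and tag simply mirror the listing:)
$ gcloud compute firewall-rules create test-24284 \
    --allow=tcp:24284 \
    --source-ranges=0.0.0.0/0 \
    --target-tags=test-tcp-open-24284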
I have created a CentOS 7 instance to which I have attached the tags:
$ gcloud compute instances describe test-network-opened
...
...
items:
- http-server
- https-server
- test-tcp-open-24284
...
...
Now when I check from my dev box whether the port is open or not, using nmap against the public IP shown in the console for the VM, I get:
$ nmap -p 24284 35.193.xxx.xxx
Nmap scan report for 169.110.xxx.xx.bc.googleusercontent.com (35.193.xxx.xxx)
Host is up (0.25s latency).
PORT STATE SERVICE
24284/tcp closed unknown
Nmap done: 1 IP address (1 host up) scanned in 1.15 seconds
It's hitting the external NAT IP for my VM, which would be 169.110.xxx.xx.
I checked the iptables rules, but that didn't show anything:
[root@test-network-opened ~]# iptables -S | grep 24284
[root@test-network-opened ~]#
So I enabled firewalld and tried opening the port with it
[root@test-network-opened ~]# firewall-cmd --zone=public --add-port=24284/tcp --permanent
success
[root@test-network-opened ~]# firewall-cmd --reload
success
[root@test-network-opened ~]# iptables -S | grep 24284
[root@test-network-opened ~]#
I am not sure where I am going wrong with this. I referred to these relevant questions on SO:
How to open a specific port such as 9090 in Google Compute Engine
Can't open port 8080 on Google Compute Engine running Debian
How to open a port on google compute engine
https://cloud.google.com/compute/docs/vpc/using-firewalls
https://cloud.google.com/sdk/gcloud/reference/compute/instances/describe
The port was opened by the firewall, but since no application was listening on it yet, nmap reported it as closed, which means the probe reached the server and was not firewalled.
If it had been firewalled, nmap would have shown the port as filtered.
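A quick way to confirm this (a sketch, assuming nc/ncat is installed on the VM; listener flags vary between netcat variants):
# on the VM: start a throwaway listener on the forwarded port
nc -l 24284
# from the dev box: rescan; the port should now be reported as open
nmap -p 24284 35.193.xxx.xxx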
I didn't have any application running on the port, so I didn't realize this was a possibility. Careless of me.
Thanks for pointing this out.