I installed KVM and set up several guests on a server using vmbuilder. Here is the configuration:
server host1 (xxx.xxx.xxx.xxx) -> guest vm1 (192.168.122.203)
-> guest vm2 (192.168.122.204)
Where xxx.xxx.xxx.xxx is the fixed IP address of host1.
I would like to connect to vm1 using the following command:
ssh username@host1 -p 2222
I tried to do it by adding the following rule in iptables:
sudo iptables --table nat --append PREROUTING --protocol tcp --destination xxx.xxx.xxx.xxx --destination-port 2222 --jump DNAT --to-destination 192.168.122.203:22
But I get a timeout when running:
ssh username@host1 -p 2222
Here are my iptables rules:
sudo iptables -nL -v --line-numbers -t nat
Chain PREROUTING (policy ACCEPT 32446 packets, 3695K bytes)
num pkts bytes target prot opt in out source destination
1 7 420 DNAT tcp -- * * 0.0.0.0/0 xxx.xxx.xxx.xxx tcp dpt:2222 to:192.168.122.203:22
Chain INPUT (policy ACCEPT 8961 packets, 968K bytes)
num pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 350 packets, 23485 bytes)
num pkts bytes target prot opt in out source destination
Chain POSTROUTING (policy ACCEPT 357 packets, 23905 bytes)
num pkts bytes target prot opt in out source destination
1 151 9060 MASQUERADE tcp -- * * 192.168.122.0/24 !192.168.122.0/24 masq ports: 1024-65535
2 99 7524 MASQUERADE udp -- * * 192.168.122.0/24 !192.168.122.0/24 masq ports: 1024-65535
3 3 252 MASQUERADE all -- * * 192.168.122.0/24 !192.168.122.0/24
sudo iptables -nL -v --line-numbers
Chain INPUT (policy ACCEPT 14 packets, 1147 bytes)
num pkts bytes target prot opt in out source destination
1 454 30229 ACCEPT udp -- virbr0 * 0.0.0.0/0 0.0.0.0/0 udp dpt:53
2 0 0 ACCEPT tcp -- virbr0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:53
3 0 0 ACCEPT udp -- virbr0 * 0.0.0.0/0 0.0.0.0/0 udp dpt:67
4 0 0 ACCEPT tcp -- virbr0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:67
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
num pkts bytes target prot opt in out source destination
1 589K 2304M ACCEPT all -- * virbr0 0.0.0.0/0 192.168.122.0/24 state RELATED,ESTABLISHED
2 403K 24M ACCEPT all -- virbr0 * 192.168.122.0/24 0.0.0.0/0
3 0 0 ACCEPT all -- virbr0 virbr0 0.0.0.0/0 0.0.0.0/0
4 1 60 REJECT all -- * virbr0 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable
5 0 0 REJECT all -- virbr0 * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable
Chain OUTPUT (policy ACCEPT 4 packets, 480 bytes)
num pkts bytes target prot opt in out source destination
Any advice would be appreciated.
OK, I found the answer:
I added these two rules to the nat table:
$ sudo iptables -t nat -A PREROUTING -p tcp --dport 2222 -j DNAT --to-destination 192.168.122.203:22
$ sudo iptables -t nat -A POSTROUTING -p tcp --dport 22 -d 192.168.122.203 -j SNAT --to 192.168.122.1
Then I deleted rules 4 and 5 of the FORWARD chain in the filter table:
$ sudo iptables -nL -v --line-numbers -t filter
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
(...)
4 7 420 REJECT all -- * virbr0 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable
5 0 0 REJECT all -- virbr0 * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable
$ sudo iptables -D FORWARD 5 -t filter
$ sudo iptables -D FORWARD 4 -t filter
And now I connect to vm1 by doing:
$ ssh user1@host -p 2222
user1@vm1:~$
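Note: libvirt may recreate those REJECT rules when the default network or libvirtd restarts. As an alternative (my assumption, not part of the original fix), you could insert an explicit ACCEPT for the forwarded SSH traffic ahead of them instead of deleting them, for example:
# allow the DNATed SSH traffic toward the guest before libvirt's REJECT rules;
# return traffic is already covered by the existing ACCEPT rule for virbr0
$ sudo iptables -I FORWARD 1 -d 192.168.122.203 -p tcp --dport 22 -j ACCEPT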
Related
I know that the rule in the KUBE-MARK-MASQ chain is a mark rule:
iptables -t nat -nvL KUBE-MARK-MASQ
Chain KUBE-MARK-MASQ (123 references)
pkts bytes target prot opt in out source destination
16 960 MARK all -- * * 0.0.0.0/0 0.0.0.0/0 MARK or 0x4000
It marks a packet so that it can later be SNATed in the KUBE-POSTROUTING chain, where the source IP can be changed to the node's IP. But what confuses me is why there are so many different KUBE-MARK-MASQ rules in the k8s chains. For example, in the KUBE-SERVICES chain there are lots of KUBE-MARK-MASQ rules. What are they marking? The pod, or something else?
Let's see an example:
KUBE-MARK-MASQ tcp -- * * !10.244.0.0/16 10.96.0.10 /* kube-system/kube-dns:metrics cluster IP */ tcp dpt:9153
It's a kube-dns cluster IP rule. My pods' CIDR is 10.244.0.0/16. Why does the rule's source IP have a ! ? If a pod on the node wants to send a packet outbound, the rule shouldn't have the !, so that the packet can be SNATed in KUBE-POSTROUTING and changed to the node's IP. Is my understanding wrong?
And there are also other KUBE-MARK-MASQ rules in the KUBE-SEP-XXX chains:
KUBE-MARK-MASQ all -- * * 10.244.2.162 0.0.0.0/0 /* default/echo-load-balance: */
DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* default/echo-load-balance: */ tcp to:10.244.2.162:8080
The pod's IP is 10.244.2.162, and the rule's source IP matches the pod's IP. What is it used for?
And in the KUBE-FW-XXX chain:
KUBE-MARK-MASQ all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/echo-load-balance: loadbalancer IP */
KUBE-SVC-P24HJGZOUZD6OJJ7 all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/echo-load-balance: loadbalancer IP */
KUBE-MARK-DROP all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/echo-load-balance: loadbalancer IP */
Why is the source IP here 0.0.0.0/0? What is it used for?
To see all the rules, the output of iptables-save is useful.
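For instance, one way to narrow it down to a single service (kube-dns here; adjust the grep pattern for your own service):
# dump the nat table in iptables-save format and keep only the kube-dns related rules
sudo iptables-save -t nat | grep -E 'kube-dns|KUBE-MARK-MASQ|KUBE-POSTROUTING'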
The iptables processing flow chart might also help in understanding this (diagram from Wikipedia, not reproduced here).
When you isolate the rules for a single service, the nat rules look like this:
*nat
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-SEP-232DQYSHL5HNRYWJ -s 10.244.0.7/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-232DQYSHL5HNRYWJ -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.244.0.7:53
-A KUBE-SEP-LPGSDLJ3FDW46N4W -s 10.244.0.5/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-LPGSDLJ3FDW46N4W -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.244.0.5:53
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SVC-TCOU7JCQXEZGVUNU ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns -> 10.244.0.5:53" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-LPGSDLJ3FDW46N4W
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns -> 10.244.0.7:53" -j KUBE-SEP-232DQYSHL5HNRYWJ
-A KUBE-POSTROUTING -m mark ! --mark 0x4000/0x4000 -j RETURN
-A KUBE-POSTROUTING -j MARK --set-xmark 0x4000/0x0
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -j MASQUERADE --random-fully
In this case it balances DNS queries between two instances, with a 50% chance of hitting the first DNS instance:
--mode random --probability 0.5
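For comparison, here is a hypothetical sketch (the chain and endpoint names are made up) of what the same statistic rules look like with three endpoints: the first rule matches a third of the packets, the second matches half of what is left, and the last one catches the rest, so each endpoint receives roughly one third of the traffic.
-A KUBE-SVC-EXAMPLE -m statistic --mode random --probability 0.33333333333 -j KUBE-SEP-AAAAAAAAAAAAAAAA
-A KUBE-SVC-EXAMPLE -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-BBBBBBBBBBBBBBBB
-A KUBE-SVC-EXAMPLE -j KUBE-SEP-CCCCCCCCCCCCCCCC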
And yes, it does look a bit complicated. But that's what you get when you're building a universal solution for all cases.
The address 0.0.0.0/0 means it matches any IP; the only thing I'm not quite sure about is the rules below:
KUBE-MARK-MASQ all -- * * 10.244.2.162 0.0.0.0/0 /* default/echo-load-balance: */
DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* default/echo-load-balance: */ tcp to:10.244.2.162:8080
Why do we need to masquerade the packet when the source IP address is the pod's own IP?
A fresh Kubernetes (1.10.0) cluster was stood up using a kubeadm (1.10.0) install on RHEL7 bare-metal VMs:
Linux 3.10.0-693.11.6.el7.x86_64 #1 SMP Thu Dec 28 14:23:39 EST 2017 x86_64 x86_64 x86_64 GNU/Linux
kubeadm.x86_64 1.10.0-0 installed
kubectl.x86_64 1.10.0-0 installed
kubelet.x86_64 1.10.0-0 installed
kubernetes-cni.x86_64 0.6.0-0 installed
and Docker 1.12:
docker-engine.x86_64 1.12.6-1.el7.centos installed
docker-engine-selinux.noarch 1.12.6-1.el7.centos installed
With the Flannel v0.9.1 pod network installed:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
The kubeadm init command I ran is
kubeadm init --pod-network-cidr=10.244.0.0/16 --kubernetes-version stable-1.10
which completes successfully, and kubeadm join on the worker node is also successful. I can deploy a busybox pod on the master and nslookups are successful, but as soon as I deploy anything to the worker node I get failed API calls from the worker node to the master:
E0331 03:28:44.368253 1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *core.Service: Get https://172.30.0.85:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.30.0.85:6443: getsockopt: connection refused
E0331 03:28:44.368987 1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *core.Endpoints: Get https://172.30.0.85:6443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 172.30.0.85:6443: getsockopt: connection refused
E0331 03:28:44.735886 1 event.go:209] Unable to write event: 'Post https://172.30.0.85:6443/api/v1/namespaces/default/events: dial tcp 172.30.0.85:6443: getsockopt: connection refused' (may retry after sleeping)
E0331 03:28:51.980131 1 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:86: Failed to list *core.Endpoints: endpoints is forbidden: User "system:serviceaccount:kube-system:kube-proxy" cannot list endpoints at the cluster scope
I0331 03:28:52.048995 1 controller_utils.go:1026] Caches are synced for service config controller
I0331 03:28:53.049005 1 controller_utils.go:1026] Caches are synced for endpoints config controller
and nslookup times out
kubectl exec -it busybox -- nslookup kubernetes
Server: 10.96.0.10
Address 1: 10.96.0.10
nslookup: can't resolve 'kubernetes'
command terminated with exit code 1
I have looked at many similar posts on Stack Overflow and GitHub, and they all seem to be resolved by setting iptables -A FORWARD -j ACCEPT, but not this time (a sketch of that fix is shown just below). I have also included the iptables rules from the worker node after it.
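For reference, a rough sketch of that commonly suggested fix, which did not help in my case:
# allow all forwarded traffic (commonly suggested; did not help here)
sudo iptables -A FORWARD -j ACCEPT
# or, equivalently, flip the chain's default policy
sudo iptables -P FORWARD ACCEPT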
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
KUBE-SERVICES all -- anywhere anywhere /* kubernetes service portals */
DOCKER all -- anywhere anywhere ADDRTYPE match dst-type LOCAL
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
KUBE-SERVICES all -- anywhere anywhere /* kubernetes service portals */
DOCKER all -- anywhere !loopback/8 ADDRTYPE match dst-type LOCAL
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
KUBE-POSTROUTING all -- anywhere anywhere /* kubernetes postrouting rules */
MASQUERADE all -- 172.17.0.0/16 anywhere
RETURN all -- 10.244.0.0/16 10.244.0.0/16
MASQUERADE all -- 10.244.0.0/16 !base-address.mcast.net/4
RETURN all -- !10.244.0.0/16 box2.ara.ac.nz/24
MASQUERADE all -- !10.244.0.0/16 10.244.0.0/16
Chain DOCKER (2 references)
target prot opt source destination
RETURN all -- anywhere anywhere
Chain KUBE-MARK-DROP (0 references)
target prot opt source destination
MARK all -- anywhere anywhere MARK or 0x8000
Chain KUBE-MARK-MASQ (6 references)
target prot opt source destination
MARK all -- anywhere anywhere MARK or 0x4000
Chain KUBE-NODEPORTS (1 references)
target prot opt source destination
Chain KUBE-POSTROUTING (1 references)
target prot opt source destination
MASQUERADE all -- anywhere anywhere /* kubernetes service traffic requiring SNAT */ mark match 0x4000/0x4000
Chain KUBE-SEP-HZC4RESJCS322LXV (1 references)
target prot opt source destination
KUBE-MARK-MASQ all -- 10.244.0.18 anywhere /* kube-system/kube-dns:dns-tcp */
DNAT tcp -- anywhere anywhere /* kube-system/kube-dns:dns-tcp */ tcp to:10.244.0.18:53
Chain KUBE-SEP-JNNVSHBUREKVBFWD (1 references)
target prot opt source destination
KUBE-MARK-MASQ all -- 10.244.0.18 anywhere /* kube-system/kube-dns:dns */
DNAT udp -- anywhere anywhere /* kube-system/kube-dns:dns */ udp to:10.244.0.18:53
Chain KUBE-SEP-U3UDAUPXUG5BP2NG (2 references)
target prot opt source destination
KUBE-MARK-MASQ all -- box1.ara.ac.nz anywhere /* default/kubernetes:https */
DNAT tcp -- anywhere anywhere /* default/kubernetes:https */ recent: SET name: KUBE-SEP-U3UDAUPXUG5BP2NG side: source mask: 255.255.255.255 tcp to:172.30.0.85:6443
Chain KUBE-SERVICES (2 references)
target prot opt source destination
KUBE-MARK-MASQ tcp -- !10.244.0.0/16 10.96.0.1 /* default/kubernetes:https cluster IP */ tcp dpt:https
KUBE-SVC-NPX46M4PTMTKRN6Y tcp -- anywhere 10.96.0.1 /* default/kubernetes:https cluster IP */ tcp dpt:https
KUBE-MARK-MASQ udp -- !10.244.0.0/16 10.96.0.10 /* kube-system/kube-dns:dns cluster IP */ udp dpt:domain
KUBE-SVC-TCOU7JCQXEZGVUNU udp -- anywhere 10.96.0.10 /* kube-system/kube-dns:dns cluster IP */ udp dpt:domain
KUBE-MARK-MASQ tcp -- !10.244.0.0/16 10.96.0.10 /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:domain
KUBE-SVC-ERIFXISQEP7F7OF4 tcp -- anywhere 10.96.0.10 /* kube-system/kube-dns:dns-tcp cluster IP */ tcp dpt:domain
KUBE-NODEPORTS all -- anywhere anywhere /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL
Chain KUBE-SVC-ERIFXISQEP7F7OF4 (1 references)
target prot opt source destination
KUBE-SEP-HZC4RESJCS322LXV all -- anywhere anywhere /* kube-system/kube-dns:dns-tcp */
Chain KUBE-SVC-NPX46M4PTMTKRN6Y (1 references)
target prot opt source destination
KUBE-SEP-U3UDAUPXUG5BP2NG all -- anywhere anywhere /* default/kubernetes:https */ recent: CHECK seconds: 10800 reap name: KUBE-SEP-U3UDAUPXUG5BP2NG side: source mask: 255.255.255.255
KUBE-SEP-U3UDAUPXUG5BP2NG all -- anywhere anywhere /* default/kubernetes:https */
Chain KUBE-SVC-TCOU7JCQXEZGVUNU (1 references)
target prot opt source destination
KUBE-SEP-JNNVSHBUREKVBFWD all -- anywhere anywhere /* kube-system/kube-dns:dns */
Chain WEAVE (0 references)
target prot opt source destination
Chain cali-OUTPUT (0 references)
target prot opt source destination
Chain cali-POSTROUTING (0 references)
target prot opt source destination
Chain cali-PREROUTING (0 references)
target prot opt source destination
Chain cali-fip-dnat (0 references)
target prot opt source destination
Chain cali-fip-snat (0 references)
target prot opt source destination
Chain cali-nat-outgoing (0 references)
target prot opt source destination
Also, I can see packets getting dropped on the flannel interface:
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 10.244.1.0 netmask 255.255.255.255 broadcast 0.0.0.0
inet6 fe80::a096:47ff:fe58:e438 prefixlen 64 scopeid 0x20<link>
ether a2:96:47:58:e4:38 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 198 bytes 14747 (14.4 KiB)
TX errors 0 dropped 27 overruns 0 carrier 0 collisions 0
I have installed the same versions of Kubernetes/Docker and Flannel on other VMs and it works, so I am not sure why I am getting these failed API calls to the master from the worker nodes in this install. I have done several fresh installs and tried the Weave and Calico pod networks as well, with the same result.
Right, so I got this going by changing from flannel to weave container networking, plus a kubeadm reset and a restart of the VMs.
Not sure what the problem was between flannel and my VMs, but I'm happy to have got it going.
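For completeness, a rough sketch of the steps that worked (the Weave manifest URL is the one from the Weave docs of that era and may have changed since):
# on every node: wipe the failed flannel-based cluster state
sudo kubeadm reset
# on the master: re-initialise the control plane (Weave Net does not need --pod-network-cidr)
sudo kubeadm init --kubernetes-version stable-1.10
# install Weave Net as the pod network
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"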
I just installed MongoDB on CentOS 6 and am trying to connect to the mongo shell with the command "mongo", but I get this error message:
2015-09-26T07:07:35.309+0000 W NETWORK Failed to connect to 127.0.0.1:27017 after 5000 milliseconds, giving up.
2015-09-26T07:07:35.316+0000 E QUERY Error: couldn't connect to server 127.0.0.1:27017 (127.0.0.1), connection attempt failed
at connect (src/mongo/shell/mongo.js:179:14)
at (connect):1:6 at src/mongo/shell/mongo.js:179
However, once I stop my firewall (service iptables stop), I can access the mongo shell.
Here are my iptables rules:
Chain INPUT (policy DROP)
target prot opt source destination
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:28017
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:8080
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:3306
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:21
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:443
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:80
ACCEPT tcp -- 192.168.1.0/24 0.0.0.0/0 tcp dpt:22
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:22
ACCEPT tcp -- 127.0.0.1 0.0.0.0/0 tcp dpt:27017 state NEW,ESTABLISHED
LOGGING all -- 0.0.0.0/0 0.0.0.0/0
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:51396
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
ACCEPT tcp -- 0.0.0.0/0 127.0.0.1 tcp spt:27017 state ESTABLISHED
Chain LOGGING (1 references)
target prot opt source destination
LOG all -- 0.0.0.0/0 0.0.0.0/0 limit: avg 2/min burst 5 LOG flags 0 level 4 prefix `IPTables-Dropped: '
DROP all -- 0.0.0.0/0 0.0.0.0/0
I have searched and tried different solutions: removing the lock file, repairing, resetting iptables; nothing helps.
These are the iptables logs of the dropped packets:
Sep 26 06:59:38 xxx kernel: IPTables-Dropped: IN=lo OUT= MAC=00:00:00:00:00:00:00:00:00:00:00:00:08:00 SRC=127.0.0.1 DST=127.0.0.1 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=TCP SPT=27017 DPT=51396 WINDOW=32768 RES=0x00 ACK SYN URGP=0
Sep 26 07:04:47 xxx kernel: IPTables-Dropped: IN=lo OUT= MAC=00:00:00:00:00:00:00:00:00:00:00:00:08:00 SRC=127.0.0.1 DST=127.0.0.1 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=TCP SPT=27017 DPT=59830 WINDOW=32768 RES=0x00 ACK SYN URGP=0
I can't figure out why it's still blocking 27017.
Open port 27017 in the firewall.
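A rough sketch of what that could look like on CentOS 6 (the loopback rule is my own addition, since the log above shows the dropped packets arriving on lo):
# accept all loopback traffic (the dropped packets in the log arrive on lo)
iptables -I INPUT -i lo -j ACCEPT
# accept connections to mongod's port (tighten the source address as needed)
iptables -I INPUT -p tcp --dport 27017 -j ACCEPT
# persist across reboots on CentOS 6
service iptables save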
I have a VM (Ubuntu 12.04.4 LTS) with mongodb (2.0.4) that I want to restrict with iptables to only accepting SSH (in/out) and nothing else.
This is what my script to set up the rules looks like:
#!/bin/sh
# flush existing rules and set the default policy of every chain to DROP
iptables -F
iptables -X
iptables -P FORWARD DROP
iptables -P INPUT DROP
iptables -P OUTPUT DROP
# input
iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A INPUT -s 127.0.0.1 -j ACCEPT # accept all ports for local conns
# output
iptables -A OUTPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp -m tcp --dport 22 -j ACCEPT # ssh
But with these rules activated, I can't connect to mongodb locally.
ubuntu ~ $ mongo
MongoDB shell version: 2.0.4
connecting to: test
Fri Mar 28 09:40:40 Error: couldn't connect to server 127.0.0.1 shell/mongo.js:84
exception: connect failed
Without them, it works fine. Is there any special firewall case one needs to consider when deploying mongodb?
I tried installing mysql, and it works perfectly for local connections.
SSH works as expected (I can connect from outside and inside).
The iptables rules look like this once set:
ubuntu ~ $ sudo iptables -nvL
Chain INPUT (policy DROP 8 packets, 1015 bytes)
pkts bytes target prot opt in out source destination
449 108K ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
0 0 ACCEPT all -- * * 127.0.0.1 0.0.0.0/0
0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22
0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80
32 2048 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:443
Chain FORWARD (policy DROP 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy DROP 27 packets, 6712 bytes)
pkts bytes target prot opt in out source destination
379 175K ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22
Outbound traffic must be accepted for the loopback (127.0.0.1) as well.
Adding this made it work:
iptables -A OUTPUT -o lo -j ACCEPT
You might want to try substituting the line
iptables -A INPUT -s 127.0.0.1 -j ACCEPT
With
iptables -A INPUT -i lo -j ACCEPT
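Putting both suggestions together, the loopback-related part of the script could look like this (a sketch; the rest of the script stays as in the question):
# allow all traffic on the loopback interface in both directions,
# so local clients can reach mongod on 127.0.0.1 and get replies back
iptables -A INPUT -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT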
Can we filter all the packets coming to Host1:Port_A from *:Port_B and forward them to, say, Host1:Port_C?
I want to forward all the packets arriving on port 22 of my machine from port 9875 of any host to port 5432 of my machine. What should the corresponding iptables rules be?
Try this one:
iptables -t nat -A PREROUTING -p tcp --source-port 9875 --destination-port 22 -j DNAT --to-destination 192.168.1.1:5432
Don't forget to change the address in --to-destination :)
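Since the source and target host are the same machine here, a REDIRECT variant might also do the job (a sketch, using the question's port 9875):
# rewrite only the destination port for packets arriving on port 22 from source port 9875
iptables -t nat -A PREROUTING -p tcp --sport 9875 --dport 22 -j REDIRECT --to-ports 5432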