Which ports should be open for Facebook authentication - facebook

I tried to improve my server security by setting up some iptables firewall rules. The result is that Facebook login with Omniauth stopped working. In my logs I can see that Facebook is sending packets to my server on ports 37035 and 41198, among others. Why? Nothing is running on those ports.
Can someone tell me which ports I should open so that Facebook login with Omniauth works again on my site?
The rules I applied are:
# Delete all user-defined chains
iptables -X
# Set default rules
iptables -P INPUT DROP
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
# Allow ssh in
iptables -A INPUT -i eth0 -p tcp --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -o eth0 -p tcp --sport 22 -m state --state ESTABLISHED -j ACCEPT
# Allow incoming HTTP
iptables -A INPUT -i eth0 -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -o eth0 -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT
# Allow outgoing SSH
iptables -A OUTPUT -o eth0 -p tcp --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A INPUT -i eth0 -p tcp --sport 22 -m state --state ESTABLISHED -j ACCEPT
# Allow ping from outside
iptables -A INPUT -p icmp --icmp-type echo-request -j ACCEPT
iptables -A OUTPUT -p icmp --icmp-type echo-reply -j ACCEPT
# Allow pinging other servers
iptables -A OUTPUT -p icmp --icmp-type echo-request -j ACCEPT
iptables -A INPUT -p icmp --icmp-type echo-reply -j ACCEPT
# Allow loopback access
iptables -A INPUT -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT
# Allow sendmail and postfix
iptables -A INPUT -i eth0 -p tcp --dport 25 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -o eth0 -p tcp --sport 25 -m state --state ESTABLISHED -j ACCEPT
# Allow dns lookups
iptables -A OUTPUT -p udp -o eth0 --dport 53 -j ACCEPT
iptables -A INPUT -p udp -i eth0 --sport 53 -j ACCEPT
# Prevent DoS attacks - upgrade to hashlimit if needed
iptables -A INPUT -p tcp --dport 80 -m limit --limit 25/minute --limit-burst 100 -j ACCEPT
# Log dropped packets
iptables -N LOGGING
iptables -A INPUT -j LOGGING
iptables -A LOGGING -m limit --limit 2/min -j LOG --log-prefix "IPTables Packet Dropped: " --log-level 7
iptables -A LOGGING -j DROP
Here is an example log entry from my syslog (my IP is filtered out):
IPTables Packet Dropped: IN=eth0 OUT= MAC=40:40:ea:31:ac:8d:64:00:f1:cd:1f:7f:08:00 SRC=69.171.224.54 DST=my_ip LEN=56 TOS=0x00 PREC=0x00 TTL=86 ID=0 DF PROTO=TCP SPT=443 DPT=44605 WINDOW=14480 RES=0x00 ACK SYN URGP=0

Finally got an answer to this question: https://superuser.com/questions/479503/why-are-ports-30000-to-60000-needed-when-browsing-the-net
Ports from 32768 to 61000 (the default Linux ephemeral port range) are needed for the return traffic of outgoing HTTP(S) connections, which is exactly what the dropped packets from Facebook are.
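Rather than opening that whole port range statically, the usual approach is a stateful rule that accepts replies to connections the server itself opened (the OAuth calls to Facebook come back on those ephemeral ports). A minimal sketch, assuming the same eth0 interface as in the rules above:
# Accept return traffic for connections this server opened (e.g. outgoing HTTPS to Facebook)
iptables -A INPUT -i eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT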

Related

Kubernetes pods have no outbound internet

I currently have a cluster with 2 nodes:
node1 - 192.168.1.20 (master, worker)
node2 - 192.168.1.30 (worker)
When running a pod on either of the nodes there is no outbound internet connection.
The problem does not seem to be related to DNS, since even pinging a public IP address of google.com does not work:
pod1-node1$ traceroute to google.com (216.58.215.46), 30 hops max, 46 byte packets
1 192-168-1-20.kubernetes.default.svc.home-cloud.local (192.168.1.20) 0.008 ms 0.009 ms 0.009 ms
2 *
The nodes themselves have internet access and everything works there:
node1$ ping 216.58.215.46 -> OK
node2$ ping 216.58.215.46 -> OK
iptables rules on the master node:
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N DOCKER
-N DOCKER-INGRESS
-N KUBE-FIREWALL
-N KUBE-KUBELET-CANARY
-N KUBE-LOAD-BALANCER
-N KUBE-MARK-DROP
-N KUBE-MARK-MASQ
-N KUBE-NODE-PORT
-N KUBE-POSTROUTING
-N KUBE-SERVICES
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER-INGRESS
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -m addrtype --dst-type LOCAL -j DOCKER-INGRESS
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -o docker_gwbridge -m addrtype --src-type LOCAL -j MASQUERADE
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -s 172.18.0.0/16 ! -o docker_gwbridge -j MASQUERADE
-A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 9000 -j MASQUERADE
-A POSTROUTING -s 172.17.0.3/32 -d 172.17.0.3/32 -p udp -m udp --dport 34197 -j MASQUERADE
-A POSTROUTING -s 172.17.0.3/32 -d 172.17.0.3/32 -p tcp -m tcp --dport 27015 -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
-A DOCKER -i docker_gwbridge -j RETURN
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 9000 -j DNAT --to-destination 172.17.0.2:9000
-A DOCKER ! -i docker0 -p udp -m udp --dport 34197 -j DNAT --to-destination 172.17.0.3:34197
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 27015 -j DNAT --to-destination 172.17.0.3:27015
-A DOCKER-INGRESS -p tcp -m tcp --dport 3306 -j DNAT --to-destination 172.18.0.2:3306
-A DOCKER-INGRESS -p tcp -m tcp --dport 9980 -j DNAT --to-destination 172.18.0.2:9980
-A DOCKER-INGRESS -j RETURN
-A KUBE-FIREWALL -j KUBE-MARK-DROP
-A KUBE-LOAD-BALANCER -j KUBE-MARK-MASQ
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-NODE-PORT -p tcp -m comment --comment "Kubernetes nodeport TCP port for masquerade purpose" -m set --match-set KUBE-NODE-PORT-TCP dst -j KUBE-MARK-MASQ
-A KUBE-POSTROUTING -m comment --comment "Kubernetes endpoints dst ip:port, source ip for solving hairpin purpose" -m set --match-set KUBE-LOOP-BACK dst,dst,src -j MASQUERADE
-A KUBE-POSTROUTING -m mark ! --mark 0x4000/0x4000 -j RETURN
-A KUBE-POSTROUTING -j MARK --set-xmark 0x4000/0x0
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -j MASQUERADE --random-fully
-A KUBE-SERVICES -m comment --comment "Kubernetes service lb portal" -m set --match-set KUBE-LOAD-BALANCER dst,dst -j KUBE-LOAD-BALANCER
-A KUBE-SERVICES ! -s 10.233.64.0/18 -m comment --comment "Kubernetes service cluster ip + port for masquerade purpose" -m set --match-set KUBE-CLUSTER-IP dst,dst -j KUBE-MARK-MASQ
-A KUBE-SERVICES -m addrtype --dst-type LOCAL -j KUBE-NODE-PORT
-A KUBE-SERVICES -m set --match-set KUBE-CLUSTER-IP dst,dst -j ACCEPT
-A KUBE-SERVICES -m set --match-set KUBE-LOAD-BALANCER dst,dst -j ACCEPT
From within a pod I get a strange-looking ip route result. My overlay (pod) network is 10.233.64.0/18, configured with Calico.
pod1-node1$ ip route
default via 169.254.1.1 dev eth0
169.254.1.1 dev eth0 scope link
I also have MetalLB installed, which installs a kube-proxy, but I am not sure how exactly it works or whether it could be related to the problem.
Any advice is greatly appreciated.
Cheers, Alex
EDIT: The cluster is bare metal, installed with kubespray; I haven't installed any CNI manually.
It turned out to be a bug in the CNI, Calico.
I manually pinned the Calico version in my kubespray configuration to 3.22.1 and the problem was solved.
I then encountered other connectivity issues between my nodes and switched to Flannel.
Since then everything has worked fine.
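For reference, a minimal sketch of what that pin might look like with kubespray; the calico_version variable and the inventory paths are assumptions about a typical kubespray layout, not taken from the original post:
# Hypothetical example: pin Calico in the inventory group vars, then re-run the kubespray playbook
echo 'calico_version: "v3.22.1"' >> inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
ansible-playbook -i inventory/mycluster/hosts.yaml cluster.yml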
My advice to others, after spending a week on research myself:
Container networking is HARD.
Take it slow and do not cut corners! If you are missing knowledge about a topic, stop and learn at least the basics of that topic before continuing.
Diagnose your problem - most commonly it is DNS or packet routing:
Get the IP of google.com on another machine and ping that IP from within a pod. If it works, you have a DNS problem; if it doesn't, it is probably CNI-related (packet routing). See the example commands at the end of this answer.
Learn how systemd-resolved works and how it is configured:
This will let you check which DNS server is actually used on your node and within your pod.
Learn how Kubernetes DNS works and why it is needed:
It allows pod-to-service communication to work, among other things. It also needs upstream DNS addresses so that it can resolve names outside your cluster.
Learn how iptables and masquerading work:
This lets you understand how network packets are routed between your node and your pods via the bridge interface.
Learn what a bridge interface is:
The docker0 bridge interface allows all your containers (and pods by extension) to talk to the outside world.
Read this very informative article on how Calico interacts with your iptables rules to create NAT masquerading rules on the fly:
https://medium.com/@bikramgupta/pod-network-troubleshooting-while-using-calico-on-kubernetes-ee78b731d4d8
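A minimal sketch of that DNS-versus-routing check from inside the cluster (the busybox image, the pod names, and the IP reused from the traceroute above are just examples):
# On a machine with working DNS, get an IP for google.com
nslookup google.com
# Routing check: ping that IP directly from a throwaway pod
kubectl run nettest --rm -it --restart=Never --image=busybox -- ping -c 3 216.58.215.46
# DNS check: try resolving the name from inside a pod
kubectl run dnstest --rm -it --restart=Never --image=busybox -- nslookup google.com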

No internet for connected clients with freshly installed PiVPN (WireGuard)

I have a freshly installed Pi 4, and after running curl -L https://install.pivpn.io | bash to install PiVPN (WireGuard), the clients connect but have no internet access.
From my Pi I can ping the clients. Using tcpdump I can also see that the clients are connected, but they still have no internet access.
My iptables rules are set like this:
#iptables -S
-P INPUT DROP
-P FORWARD DROP
-P OUTPUT ACCEPT
-A INPUT -i eth0 -p udp -m udp --dport 51820 -m comment --comment wireguard-input-rule -j ACCEPT
-A INPUT -i eth0 -p tcp -m tcp --dport 22 -m comment --comment ssh -j ACCEPT
-A INPUT -i eth0 -p tcp -m tcp --dport 443 -m comment --comment ssh -j ACCEPT
-A FORWARD -d 10.6.0.0/24 -i eth0 -o wg0 -m conntrack --ctstate RELATED,ESTABLISHED -m comment --comment wireguard-forward-rule -j ACCEPT
-A FORWARD -s 10.6.0.0/24 -i wg0 -o eth0 -m comment --comment wireguard-forward-rule -j ACCEPT
In my /etc/sysctl.conf, net.ipv4.ip_forward=1 is set,
and the rest of the PiVPN settings are just the defaults.
In the WireGuard settings I just use Google DNS: 8.8.8.8.
I am running out of ideas.
Can anyone please help?
It sounds like the only thing you're missing is IP forwarding actually being in effect:
You can enable it on the fly with: sysctl -w net.ipv4.ip_forward=1
And you can make it permanent by editing /etc/sysctl.conf and setting
net.ipv4.ip_forward = 1
(a value in /etc/sysctl.conf only takes effect after sysctl -p or a reboot, so check the live value first).
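A quick way to verify and apply this (plain sysctl usage, nothing PiVPN-specific):
# Check the live value; 0 means forwarding is off even if /etc/sysctl.conf says otherwise
sysctl net.ipv4.ip_forward
# Enable it immediately
sysctl -w net.ipv4.ip_forward=1
# Re-apply the settings from /etc/sysctl.conf
sysctl -p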

Allow a port for some IPs with iptables

On CentOS 7 I use the following commands to drop a port and allow it for only one IP:
iptables -A INPUT -p tcp --dport 2001 -s 1.1.1.1 -j ACCEPT
iptables -A INPUT -p tcp --dport 2001 -j DROP
service iptables save
and everything works fine.
But when I want to allow a second IP with this command, it doesn't work for that IP:
iptables -A INPUT -p tcp --dport 2001 -s 2.2.2.2 -j ACCEPT
service iptables save
** Solved **
I used the -I flag instead of -A. -A appends the new ACCEPT rule after the existing DROP rule, so it is never reached; -I inserts it at the first position of the INPUT chain, before the DROP:
iptables -I INPUT -p tcp --dport 2001 -s 2.2.2.2 -j ACCEPT
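To see why the order matters, you can list the chain with rule positions; a short sketch (the 3.3.3.3 address and position 2 are just examples):
# Show INPUT rules with their positions; packets are matched top to bottom
iptables -L INPUT -n --line-numbers
# Insert at an explicit position (here: 2) instead of the top, if you prefer
iptables -I INPUT 2 -p tcp --dport 2001 -s 3.3.3.3 -j ACCEPT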

How to delete iptables rules added by kube-proxy?

I want to manually delete iptables rules for debugging.
I have several rules created by kube-proxy based on service nettools:
# kubectl get endpoints nettools
NAME ENDPOINTS AGE
nettools 172.16.27.138:7493 1h
And its iptables rules:
# iptables-save|grep nettools
-A KUBE-SEP-6DFMUWHMXOYMFWKG -s 172.16.27.138/32 -m comment --comment "default/nettools:web" -j KUBE-MARK-MASQ
-A KUBE-SEP-6DFMUWHMXOYMFWKG -p tcp -m comment --comment "default/nettools:web" -m tcp -j DNAT --to-destination 172.16.27.138:7493
-A KUBE-SERVICES -d 10.0.1.2/32 -p tcp -m comment --comment "default/nettools:web cluster IP" -m tcp --dport 7493 -j KUBE-SVC-INDS3KD6I5PFKUWF
-A KUBE-SVC-INDS3KD6I5PFKUWF -m comment --comment "default/nettools:web" -j KUBE-SEP-6DFMUWHMXOYMFWKG
However, I cannot delete those rules:
# iptables -D KUBE-SVC-INDS3KD6I5PFKUWF -m comment --comment "default/nettools:web" -j KUBE-SEP-6DFMUWHMXOYMFWKG
iptables v1.4.21: Couldn't load target `KUBE-SEP-6DFMUWHMXOYMFWKG':No such file or directory
# iptables -D KUBE-SERVICES -d 10.0.1.2/32 -p tcp -m comment --comment "default/nettools:web cluster IP" -m tcp --dport 7493 -j KUBE-SVC-INDS3KD6I5PFKUWF
iptables v1.4.21: Couldn't load target `KUBE-SVC-INDS3KD6I5PFKUWF':No such file or directory
There are multiple tables in play when dealing with iptables; the filter table is the default if nothing is specified. The rules you are trying to delete are part of the nat table.
Just add -t nat to your commands to delete those rules.
Example:
# iptables -t nat -D KUBE-SVC-INDS3KD6I5PFKUWF -m comment --comment "default/nettools:web" -j KUBE-SEP-6DFMUWHMXOYMFWKG
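To confirm which table a rule lives in before deleting it, you can list just the nat table (same grep as in the question):
# Show the nat-table rules that mention the service
iptables -t nat -S | grep nettools
# Keep in mind that kube-proxy periodically resyncs these chains, so deleted rules may reappear.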

IP tables on VPS

I am trying to set up iptables on a GoDaddy VPS using the following:
iptables -P INPUT ACCEPT
iptables -F
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -i eth0 -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT ACCEPT
iptables -L -v
## open port ssh tcp port 22 ##
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
## open tcp port 25 (smtp) for all ##
iptables -A INPUT -p tcp --dport 25 -j ACCEPT
## open dns server ports for all ##
iptables -A INPUT -p udp --dport 53 -j ACCEPT
iptables -A INPUT -p tcp --dport 53 -j ACCEPT
## open http/https (Apache) server port to all ##
iptables -A INPUT -p tcp --dport 80 -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j ACCEPT
## open tcp port 110 (pop3) for all ##
iptables -A INPUT -p tcp --dport 110 -j ACCEPT
## open tcp port 143 (imap) for all ##
iptables -A INPUT -p tcp --dport 143 -j ACCEPT
Every time I start the iptables service, none of the websites on this server work and I can't access Plesk.
If I run service iptables stop, it all works again. Is there a simple syntax error here?
Are you running /etc/init.d/iptables save to save your rules? If you are on CentOS 6, you will need to run yum install policycoreutils before /etc/init.d/iptables save or you will get an error. Once you have saved the rules, you need to restart iptables.
Also, you need to add a rule for port 8443 to be able to reach the Plesk panel.
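For example, a rule along these lines would open the Plesk panel port (8443 is Plesk's usual HTTPS panel port; adjust if your installation differs):
## open tcp port 8443 (Plesk panel) for all ##
iptables -A INPUT -p tcp --dport 8443 -j ACCEPT
/etc/init.d/iptables save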