What is blocking the IP address of my iPhone as soon as I access my WordPress instance?

I have an Arch Linux Linode that runs WordPress using the LinuxServer.io SWAG container, and it works. I installed UFW and Tailscale; all SSH traffic goes over the tailnet, and ports 80 and 443 are open:
Status: active

To                            Action      From
--                            ------      ----
Anywhere on tailscale0        ALLOW       Anywhere
80                            ALLOW       Anywhere
443                           ALLOW       Anywhere
Anywhere (v6) on tailscale0   ALLOW       Anywhere (v6)
Everything works well until I access the WordPress instance from my iPhone (Firefox for iOS); at that point, the IP I use to access the system from the iPhone gets blocked for some time. Let me demonstrate. First, I scan the ports:
nmap -p 443,80 anacreon.domain.nl
Starting Nmap 7.93 ( https://nmap.org ) at 2023-01-05 21:50 CET
Nmap scan report for anacreon.domain.nl (139.144.66.219)
Host is up (0.034s latency).
Other addresses for anacreon.domain.nl (not scanned): 2a01:7e01::f03c:93ff:fea2:10ab
PORT    STATE SERVICE
80/tcp  open  http
443/tcp open  https
Nmap done: 1 IP address (1 host up) scanned in 0.15 seconds
Now let me connect with my iPhone (to another domain pointing to the same box, routed to WP) and see what we get within 2 seconds or so:
nmap -p 443,80 anacreon.domain.nl
Starting Nmap 7.93 ( https://nmap.org ) at 2023-01-05 21:51 CET
Nmap scan report for anacreon.domain.nl (139.144.66.219)
Host is up (0.034s latency).
Other addresses for anacreon.domain.nl (not scanned): 2a01:7e01::f03c:93ff:fea2:10ab
PORT    STATE  SERVICE
80/tcp  closed http
443/tcp closed https
Nmap done: 1 IP address (1 host up) scanned in 0.11 seconds
Now, if I switch on my Mullvad VPN on my iPhone, I can reach the WP instance and click about one link on it before it's blocked again. If I switch on my Mullvad VPN on my laptop, I can access the system. Let me demonstrate; I execute these commands within 3 seconds or so:
[freek@freex ~]$ nmap -p 443,80 anacreon.domain.nl
Starting Nmap 7.93 ( https://nmap.org ) at 2023-01-05 21:54 CET
Nmap scan report for anacreon.domain.nl (139.144.66.219)
Host is up (0.033s latency).
Other addresses for anacreon.domain.nl (not scanned): 2a01:7e01::f03c:93ff:fea2:10ab
PORT    STATE  SERVICE
80/tcp  closed http
443/tcp closed https
Nmap done: 1 IP address (1 host up) scanned in 0.12 seconds
[freek@freex ~]$ wg-quick up mullvad-se3
wg-quick must be run as root. Please enter the password for freek to continue:
[#] ip link add mullvad-se3 type wireguard
[#] wg setconf mullvad-se3 /dev/fd/63
[#] ip -4 address add 10.66.88.174/32 dev mullvad-se3
[#] ip -6 address add fc00:bbbb:bbbb:bb01::3:58ad/128 dev mullvad-se3
[#] ip link set mtu 1420 up dev mullvad-se3
[#] resolvconf -a mullvad-se3 -m 0 -x
[#] wg set mullvad-se3 fwmark 51820
[#] ip -6 route add ::/0 dev mullvad-se3 table 51820
[#] ip -6 rule add not fwmark 51820 table 51820
[#] ip -6 rule add table main suppress_prefixlength 0
[#] nft -f /dev/fd/63
[#] ip -4 route add 0.0.0.0/0 dev mullvad-se3 table 51820
[#] ip -4 rule add not fwmark 51820 table 51820
[#] ip -4 rule add table main suppress_prefixlength 0
[#] sysctl -q net.ipv4.conf.all.src_valid_mark=1
[#] nft -f /dev/fd/63
[freek@freex ~]$ nmap -p 443,80 anacreon.domain.nl
Starting Nmap 7.93 ( https://nmap.org ) at 2023-01-05 21:54 CET
Nmap scan report for anacreon.domain.nl (139.144.66.219)
Host is up (0.050s latency).
Other addresses for anacreon.domain.nl (not scanned): 2a01:7e01::f03c:93ff:fea2:10ab
PORT    STATE SERVICE
80/tcp  open  http
443/tcp open  https
Nmap done: 1 IP address (1 host up) scanned in 0.25 seconds
Indeed, now I can access and use the site from my laptop normally. And after about 10-20 minutes, the ports open for my private IP address again.
I'm really baffled. I didn't install any firewall other than UFW, and it shouldn't even be blocking anything. I did not install fail2ban or any such service.
What could this be? Why does my iPhone trigger it (even with normal use)?
Any suggestions on how to further investigate?
Oh, and when I disable UFW it still happens. Here I keep port scanning while I access the WP instance:
nmap -p 443,80 anacreon.domain.nl
Starting Nmap 7.93 ( https://nmap.org ) at 2023-01-05 22:03 CET
Nmap scan report for anacreon.domain.nl (139.144.66.219)
Host is up (0.036s latency).
Other addresses for anacreon.domain.nl (not scanned): 2a01:7e01::f03c:93ff:fea2:10ab
PORT    STATE SERVICE
80/tcp  open  http
443/tcp open  https
Nmap done: 1 IP address (1 host up) scanned in 0.16 seconds
[freek@freex ~]$ nmap -p 443,80 anacreon.domain.nl
Starting Nmap 7.93 ( https://nmap.org ) at 2023-01-05 22:03 CET
Nmap scan report for anacreon.domain.nl (139.144.66.219)
Host is up (0.16s latency).
Other addresses for anacreon.domain.nl (not scanned): 2a01:7e01::f03c:93ff:fea2:10ab
PORT    STATE    SERVICE
80/tcp  closed   http
443/tcp filtered https
Nmap done: 1 IP address (1 host up) scanned in 4.50 seconds
[freek@freex ~]$ nmap -p 443,80 anacreon.domain.nl
Starting Nmap 7.93 ( https://nmap.org ) at 2023-01-05 22:03 CET
Note: Host seems down. If it is really up, but blocking our ping probes, try -Pn
Nmap done: 1 IP address (0 hosts up) scanned in 3.04 seconds
[freek@freex ~]$ nmap -p 443,80 anacreon.domain.nl
Starting Nmap 7.93 ( https://nmap.org ) at 2023-01-05 22:03 CET
Nmap scan report for anacreon.domain.nl (139.144.66.219)
Host is up (0.27s latency).
Other addresses for anacreon.domain.nl (not scanned): 2a01:7e01::f03c:93ff:fea2:10ab
PORT    STATE  SERVICE
80/tcp  closed http
443/tcp closed https
Nmap done: 1 IP address (1 host up) scanned in 2.27 seconds
Notice that for a time it indicates the port is filtered...
Perhaps I should also mention that I have nginx basic auth enabled in front of the WP instance.
As I said, I'm confused. I would really like to learn how to determine what goes wrong here.
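For anyone debugging something like this: UFW is only a frontend, and other services (including ones running in containers with NET_ADMIN) can insert their own netfilter rules. A sketch of how the live firewall state could be inspected while an address is blocked; the container name swag is an assumption from this setup:

# On the host, look for DROP/REJECT rules that UFW did not create
sudo iptables -L -n -v | grep -iE 'drop|reject'
sudo nft list ruleset | less    # if the host uses nftables
# A container with NET_ADMIN can carry its own rules; check inside it too
docker exec swag iptables -L -n -v 2>/dev/null | grep -iE 'drop|reject'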

OK, I found it: the SWAG container has fail2ban activated... It seems that it does not play nice with the nginx basic auth: https://docs.linuxserver.io/images/docker-swag#using-fail2ban
I guess this is the flip side of complex infra as code; 15 years ago there was never a service running that I didn't know about ;)
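In case it helps others, a minimal sketch of checking and lifting a ban with fail2ban-client inside the container (the container name swag and the jail name nginx-http-auth are assumptions; list the jails first to see what your setup actually runs):

docker exec swag fail2ban-client status                                     # list active jails
docker exec swag fail2ban-client status nginx-http-auth                     # inspect one jail (name assumed)
docker exec swag fail2ban-client set nginx-http-auth unbanip 203.0.113.10   # unban an address (example IP)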

Related

How to delete a subnet from a bigger subnet with route delete (PowerShell, Windows)?

This is the route print output:
IPv4 Route Table
===========================================================================
Active Routes:
Network Destination        Netmask          Gateway        Interface  Metric
          0.0.0.0          0.0.0.0      192.168.1.1     192.168.1.50      55
          0.0.0.0        128.0.0.0   192.168.10.252    192.168.10.96     257
        128.0.0.0        128.0.0.0   192.168.10.252    192.168.10.96     257
          1.1.1.1    255.255.255.0      192.168.1.1     192.168.1.50      56
      185.1.1.1.1  255.255.255.255      192.168.1.1     192.168.1.50     311
     192.168.10.0    255.255.255.0          On-link     192.168.10.96    257
    192.168.10.96  255.255.255.255          On-link     192.168.10.96    257
   192.168.10.255  255.255.255.255          On-link     192.168.10.96    257
===========================================================================
Persistent Routes:
  Network Address          Netmask  Gateway Address  Metric
          1.1.1.1    255.255.255.0      192.168.1.1       1
===========================================================================
As you can see, 1.1.1.1/24 has been added to the routing table.
I want to delete a smaller subnet (like a /32 or a /25) from it, but I get an error:
route delete -p 1.1.1.10/32 192.168.1.1
route delete -p 1.1.1.0/25 192.168.1.1
Error:
The route deletion failed: Element not found.
I know I can remove the whole /24 subnet and then use a Python script to generate the desired subnets, but my question is whether it's possible to remove a smaller subnet from the routing table.
Windows 10, PowerShell version is 5.1 (18200).
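For what it's worth, route.exe does not understand CIDR suffixes at all; it expects the mask keyword, and a deletion only matches an entry that actually exists in the table, hence "Element not found" for prefixes narrower than the installed /24. A sketch of the documented forms (match the destination/mask/gateway exactly as route print shows them):

# route.exe syntax: destination MASK netmask gateway (no CIDR suffixes)
route delete 1.1.1.1 mask 255.255.255.0 192.168.1.1
# PowerShell (NetTCPIP module) equivalent, prefix form:
Remove-NetRoute -DestinationPrefix "1.1.1.0/24" -Confirm:$false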

Ubuntu 20.04 in Vagrant: open port to private network

I am running 2 VMs with Vagrant on a private network like this:
host1: 192.168.1.1/24
host2: 192.168.1.2/24
On host1, the app listens on port 6443, but it cannot be accessed from host2:
# host1
root@host1:~# ss -lntp | grep 6443
LISTEN 0 4096 *:6443 *:* users:(("kube-apiserver",pid=10537,fd=7))
# host2
root@host2:~# nc -zv -w 3 192.168.1.2 6443
nc: connect to 192.168.1.2 port 6443 (tcp) failed: Connection refused
(Actually, the app is kube-apiserver, and host2 fails to join as a worker node with kubeadm.)
What am I missing?
Both are Ubuntu Focal (box_version '20220215.1.0') and ufw is inactive.
After changing the IPs of the hosts, it works:
host1: 192.168.1.1/24 -> 192.168.1.2/24
host2: 192.168.1.2/24 -> 192.168.1.3/24
I guess it was caused by using the reserved IP as the gateway: the first IP of the subnet, 192.168.1.1.
I'll add references about that here later; I have to set up the k8s cluster for now.
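A quick way to check that from inside a guest (a sketch; the interface name enp0s8 is an assumption, a common name for a VirtualBox private_network NIC):

# See whether .1 is already claimed on the private network
ip -4 addr show enp0s8          # interface name assumed
ping -c 1 192.168.1.1           # typically answered by the host-side interface
ip neigh | grep 192.168.1.1     # its MAC then shows up in the ARP cache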

k8s - pod can ping external ip, but cannot wget?

Running a clean install of microk8s 1.19 on Fedora 32, I am able to ping an external IP address, but when I try to wget, I get "no route to host" (this is the output of commands run from a busybox pod):
/ # wget x.x.x.x
Connecting to x.x.x.x (x.x.x.x:80)
wget: can't connect to remote host (x.x.x.x): No route to host
/ # ping x.x.x.x
PING x.x.x.x (x.x.x.x): 56 data bytes
64 bytes from x.x.x.x: seq=0 ttl=127 time=1.209 ms
64 bytes from x.x.x.x: seq=1 ttl=127 time=0.765 ms
I finally found https://github.com/ubuntu/microk8s/issues/408
I had to enable masquerading in the firewall zone associated with the bridge interface or, in my case, my Ethernet connection.
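A sketch of what that looks like with firewalld (Fedora's default); the interface and zone names are assumptions, so substitute whatever the first command reports:

# Find the zone the relevant interface belongs to (interface name assumed)
sudo firewall-cmd --get-zone-of-interface=eth0
# Enable masquerading in that zone (zone name assumed; use the one reported above)
sudo firewall-cmd --zone=FedoraWorkstation --add-masquerade --permanent
sudo firewall-cmd --reload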

How to add IP route(s) so Kubernetes cluster addresses go through the appropriate adapter

I have installed a Kubernetes cluster (one master and one worker node) on separate stand-alone CentOS 8 servers as per the instructions at the link below.
https://www.tecmint.com/install-a-kubernetes-cluster-on-centos-8/
The Weave Net CNI plugin was installed as per the above link. Now I can see the new network adapter below on our K8s master and worker-node servers.
weave: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1376
        inet 10.32.0.1  netmask 255.240.0.0  broadcast 10.47.255.255
        inet6 fe80::a07d:21ff:fef1:4656  prefixlen 64  scopeid 0x20<link>
        ether a2:7d:21:f1:46:56  txqueuelen 1000  (Ethernet)
        RX packets 141  bytes 13322 (13.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 48  bytes 4896 (4.7 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
But the problem is that the host server cannot ping or access any of our remote site/location IPs (ping response given below), whereas local IPs are pingable and accessible.
ping -c 4 120.121.5.48
PING 120.121.5.48 (120.121.5.48) 56(84) bytes of data.
From 10.32.0.1 icmp_seq=1 Destination Host Unreachable
From 10.32.0.1 icmp_seq=2 Destination Host Unreachable
From 10.32.0.1 icmp_seq=3 Destination Host Unreachable
From 10.32.0.1 icmp_seq=4 Destination Host Unreachable
--- 120.121.5.48 ping statistics ---
4 packets transmitted, 0 received, +4 errors, 100% packet loss, time 2999ms
pipe 4
Also, when I try to connect to our remote LDAP server from the host server through telnet, it shows the error message below.
# telnet 120.121.5.48 389
Trying 120.121.5.48...
telnet: connect to address 120.121.5.48: No route to host
Our K8s master and worker-node servers have 23 network adapters each, with statically configured IPs. Does any additional configuration need to be applied so that the K8s CNI is reachable via the default routing?
The ip route show and route -n output is as follows.
# ip route show
default via 45.46.47.1 dev ens1f0 proto static metric 100
10.32.0.0/12 dev weave proto kernel scope link src 10.32.0.1
45.46.47.0/24 dev ens1f0 proto kernel scope link src 45.46.47.48 metric 100
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 linkdown
# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         45.46.47.1      0.0.0.0         UG    100    0        0 ens1f0
10.32.0.0       0.0.0.0         255.255.255.0   U     10     0        0 ens1f0
10.32.0.0       0.0.0.0         255.240.0.0     U     0      0        0 weave
45.46.47.0      0.0.0.0         255.255.255.0   U     100    0        0 ens1f0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0
I tried to change the weave route to go via the default gateway with the command below. It executed successfully, but the problem remains the same:
ip route add 10.32.0.0/24 via 45.46.47.1 dev ens1f0 metric 100
If I run ifconfig weave down, everything works fine. But to use the Kubernetes cluster I need the Weave Net network adapter. So please help me add IP route(s) so that my Kubernetes cluster addresses go through the appropriate adapter, and I can access both our local and remote servers.
I have changed the CNI plugin from Weave Net to Flannel, and now it is working as expected.
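A sketch of the swap on a kubeadm cluster; both manifest URLs are the commonly published ones, and Flannel expects the 10.244.0.0/16 pod CIDR by default, so verify against current docs before running:

# Remove Weave Net (historical manifest URL; verify before use)
kubectl delete -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
# Install Flannel (expects pod CIDR 10.244.0.0/16 unless reconfigured)
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml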

connection timed out; no servers could be reached when connecting to the CoreDNS server

When I use the dig command to test the CoreDNS server, it shows "connection timed out; no servers could be reached":
[root@ops001 ~]# /opt/k8s/bin/kubectl exec -ti soa-user-service-5c8b744d6d-7p9hr -n dabai-fat /bin/sh
/ # dig -t A kubernetes.default.svc.cluster.local. @10.254.0.2
; <<>> DiG 9.12.4-P2 <<>> -t A kubernetes.default.svc.cluster.local. @10.254.0.2
;; global options: +cmd
;; connection timed out; no servers could be reached
When I ping the server, it succeeds:
[root@ops001 ~]# /opt/k8s/bin/kubectl exec -ti soa-user-service-5c8b744d6d-7p9hr -n dabai-fat /bin/sh
/ # ping 10.254.0.2
PING 10.254.0.2 (10.254.0.2): 56 data bytes
64 bytes from 10.254.0.2: seq=0 ttl=64 time=0.100 ms
64 bytes from 10.254.0.2: seq=1 ttl=64 time=0.071 ms
64 bytes from 10.254.0.2: seq=2 ttl=64 time=0.094 ms
64 bytes from 10.254.0.2: seq=3 ttl=64 time=0.087 ms
Why can dig not connect to the DNS server although the network is OK? This is my CoreDNS service:
When connecting from the azshara-k8s03 node to the CoreDNS server:
/ # telnet 10.254.0.2 53
Connection closed by foreign host
When connecting from the azshara-k8s02 or azshara-k8s01 node to the CoreDNS server:
/ # telnet 10.254.0.2 53
telnet: can't connect to remote host (10.254.0.2): Connection refused
I'm just confused about why port 53 is not open, when scanning from the host with the same command shows port 53 as open:
I finally found that kube-proxy had not started on some servers, so the forwarding rules were never refreshed. Using this command to start kube-proxy fixed the problem:
systemctl start kube-proxy
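A sketch of how to verify this per node, assuming kube-proxy runs as a systemd service in iptables mode (as it does here); KUBE-SERVICES is the standard NAT chain kube-proxy maintains:

# Is kube-proxy running on this node?
systemctl status kube-proxy
# In iptables mode, Service VIPs such as 10.254.0.2 are wired up in the
# KUBE-SERVICES NAT chain; if it is missing or empty, they go nowhere
iptables -t nat -L KUBE-SERVICES -n | head
# Start it (and enable it across reboots) if it is down
systemctl start kube-proxy
systemctl enable kube-proxy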