I opened a Kubernetes NodePort on a machine and blocked all traffic to this port with the following rule:
sudo ufw deny 30001
But I can still access that port via the browser. Is this expected? I can't find any information about it in the docs.
Finally found the issue: kube-proxy is writing iptables rules (https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/#is-kube-proxy-writing-iptables-rules) which are matched before the ufw rules one adds manually. This can be confirmed by checking the rule order in the output of iptables -S -v.
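One way to see this on the node (a sketch, using NodePort 30001 from the question) is to dump the rules in evaluation order and look for kube-proxy's KUBE-* chains ahead of ufw's chains, plus the NodePort DNAT in the nat table, which is applied before the filter rules that ufw manages:
# filter table, in evaluation order; the KUBE-* chains come before ufw-user-input
sudo iptables -S -v | grep -E 'KUBE|ufw|30001'
# the NodePort is DNAT-ed in the nat table (KUBE-NODEPORTS) before filtering happens
sudo iptables -t nat -S | grep -E 'KUBE-NODEPORTS|30001'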
I had my deployment working with the Istio ingress gateway before, and I am not aware of any changes made on the Istio or k8s side.
When I try to deploy, I see an error on the ReplicaSet side, which is why it cannot create a new pod:
Error creating: Internal error occurred: failed calling webhook
"namespace.sidecar-injector.istio.io": Post
"https://istiod.istio-system.svc:443/inject?timeout=10s": dial tcp
10.104.136.116:443: connect: no route to host
When I try to go inside the api-server and ping 10.104.136.116 (the istiod service IP), it just hangs.
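(As an aside, a ClusterIP usually does not answer ICMP even when the service is healthy, so a TCP-level probe is more telling than ping. A hedged sketch, run from a node or a debug pod, using the service IP from the error above:)
# any TLS/HTTP response at all means the path to istiod works;
# a timeout or "no route to host" means it does not
curl -vk --max-time 5 https://10.104.136.116:443/inject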
What I have tried so far:
Deleted all coredns pods
Deleted all istiod pods
Deleted all weave pods
Reinstalled Istio via istioctl x uninstall --purge
Turned off the firewall on all of the VMs:
sudo iptables -P INPUT ACCEPT
sudo iptables -P FORWARD ACCEPT
sudo iptables -P OUTPUT ACCEPT
sudo iptables -F
Restarted all of the nodes
Manual Istio sidecar injection
Setup
k8s version: 1.21.2
istio: 1.10.3
HA setup
CNI: weave
CRI: containerd
In my case this was related to the firewall. More info can be found here.
The gist of it is that on GKE, at least, you need to open another port, 15017, in addition to 10250 and 443. This is to allow communication from your master node(s) to your VPC.
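For example (a sketch; the network, node tag and master CIDR below are placeholders you would read from your own cluster/VPC):
# allow the GKE control plane to reach the sidecar injection webhook on the nodes
gcloud compute firewall-rules create allow-master-to-istio-webhook \
  --network <CLUSTER_NETWORK> \
  --source-ranges <MASTER_IPV4_CIDR> \
  --target-tags <NODE_TAG> \
  --allow tcp:15017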
I don't have a definitive answer as to why this is happening, but kube-apiserver cannot access istiod via the service IP, whereas it can connect when I use the istiod pod IP.
Since I don't have control over the VMs and the lower networking layers, and I'm not sure whether something was changed there (because it was working before), I made this work by changing my CNI from Weave to Flannel.
In my case it was due to the firewall. Following this Istio debug guide, I identified that the kubectl get --raw /api/v1/namespaces/istio-system/services/https:istiod:https-webhook/proxy/inject -v4 command was timing out, while all other cluster-internal calls were fine.
The best way to diagnose this is to temporarily open the AWS Security Groups involved to 0.0.0.0/0 for port 15017 and then try again.
If the error no longer shows up, you know this is the part that needs fixing.
I am using EKS with Amazon VPC CNI v1.12.2-eksbuild.1
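A sketch of that temporary security-group change with the AWS CLI (the group ID is a placeholder; remember to revoke the rule again once you have confirmed the webhook works):
# temporarily allow the istiod webhook port from anywhere on the node security group
aws ec2 authorize-security-group-ingress \
  --group-id <NODE_SG_ID> --protocol tcp --port 15017 --cidr 0.0.0.0/0
# ...re-run the failing deployment or the kubectl --raw probe above, then remove the rule:
aws ec2 revoke-security-group-ingress \
  --group-id <NODE_SG_ID> --protocol tcp --port 15017 --cidr 0.0.0.0/0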
I am new to K8s.
I set up a cluster with 1 master and 1 worker in Azure, using Azure VMs.
I was able to set up etcd, the API server, the scheduler, etc. on the master, and kubelet and kube-proxy on the worker, and I can fetch the nodes using kubectl get nodes on the master.
The nodes were in the NotReady state, so I was trying to set up networking using Weave Net.
But the pods are not being created: one pod came up, but the other one is throwing an error. Upon investigation it looks like the kubernetes service has an
endpoint which is not reachable from the worker node. How can I fix this?
As suggested in the comment section, the root cause of the issue was related to the firewall configuration. After adjusting the firewall rules, the OP confirmed that it's working.
Thank you confusedgenius, PjoterS, yes it was a firewall issue.
In general, no route to host indicates that:
The host is unavailable
Network issues
In this scenario, the firewall probably blocked traffic to port 443 on 10.96.0.1 (the kubernetes service address).
Depending on your OS, you can use preinstalled firewall management tools like firewalld:
### Temporarily disable firewalld
$ sudo systemctl stop firewalld
### Add a port to the firewall
$ sudo firewall-cmd --add-port=<port>/<protocol> --permanent
$ sudo firewall-cmd --reload
$ sudo firewall-cmd --list-all
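For example, to open the API server port in a scenario like this one (6443 is the kubeadm default; adjust to your setup):
$ sudo firewall-cmd --add-port=6443/tcp --permanent
$ sudo firewall-cmd --reload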
UFW on Ubuntu
$ sudo ufw disable
Firewall stopped and disabled on system startup
Or just flush iptables like in this article:
# stop kubelet and the container runtime first, so the rules are not re-created while flushing
systemctl stop kubelet
systemctl stop docker
# flush the filter and nat tables
iptables --flush
iptables -t nat --flush
# start them again; kube-proxy and the CNI will recreate their rules
systemctl start kubelet
systemctl start docker
When I modified the iptables rules on the host, the k8s pods went down; it seems communication within the cluster was blocked, and the pod status turned into ContainerCreating.
I just want to do a simple IP whitelist like the one below:
iptables -A INPUT -s 10.xxx.4.0/24 -p all -j ACCEPT
iptables -A INPUT -j REJECT
When I then deleted the REJECT rule from iptables, the pods went back to Running.
I just want to know how to do a simple IP whitelist on a host without affecting the running k8s pods.
Kubernetes uses iptables to control the network connections between pods (and between nodes), handling many of the networking and port-forwarding rules. With iptables -A INPUT -j REJECT you are effectively preventing it from doing that.
Taken from the Understanding Kubernetes Networking Model article:
In Kubernetes, iptables rules are configured by the kube-proxy controller that watches the Kubernetes API server for changes. When a change to a Service or Pod updates the virtual IP address of the Service or the IP address of a Pod, iptables rules are updated to correctly route traffic directed at a Service to a backing Pod. The iptables rules watch for traffic destined for a Service’s virtual IP and, on a match, a random Pod IP address is selected from the set of available Pods and the iptables rule changes the packet’s destination IP address from the Service’s virtual IP to the IP of the selected Pod. As Pods come up or down, the iptables ruleset is updated to reflect the changing state of the cluster. Put another way, iptables has done load-balancing on the machine to take traffic directed to a service’s IP to an actual pod’s IP.
To secure the cluster, it is better to put all custom rules on the gateway (ADC) or into cloud security groups. Security inside the cluster can be handled via Network Policies, Ingress, RBAC and others.
Kubernetes also has a good article about Securing a Cluster, and there is a GitHub guide with best practices for Kubernetes security.
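For the whitelist use case from the question, a minimal sketch using a NetworkPolicy instead of hand-written host iptables rules (the namespace, the app=web label and the CIDR are placeholders; your CNI must support NetworkPolicy, which most mainstream CNIs do):
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: whitelist-allowed-range
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: web              # only pods with this label are protected
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 10.xxx.4.0/24   # replace with the real range from the question
EOF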
Is there a corresponding command with kubectl to:
ssh -L8888:rds.aws.com:5432 example.com
kubectl has port-forward; you can also specify --address, but that strictly requires an IP address.
The older answer is valid.
Still, a workaround would be to use something like
https://hub.docker.com/r/marcnuri/port-forward
kubectl run --env REMOTE_HOST=your.service.com --env REMOTE_PORT=8080 --env LOCAL_PORT=8080 --port 8080 --image marcnuri/port-forward test-port-forward
Run it on the cluster and then port forward to it.
kubectl port-forward test-port-forward 8080:8080
Short answer, No.
In OpenSSH, local port forwarding is configured using the -L option:
ssh -L 80:intra.example.com:80 gw.example.com
This example opens a connection to the gw.example.com jump server, and forwards any connection to port 80 on the local machine to port 80 on intra.example.com.
By default, anyone (even on different machines) can connect to the specified port on the SSH client machine. However, this can be restricted to programs on the same host by supplying a bind address:
ssh -L 127.0.0.1:80:intra.example.com:80 gw.example.com
You can read the docs here.
The port-forward in Kubernetes only forwards to targets inside the cluster: you can forward traffic that hits the specified local port to a Deployment, a Service or a Pod:
kubectl port-forward TYPE/NAME [options] [LOCAL_PORT:]REMOTE_PORT [...[LOCAL_PORT_N:]REMOTE_PORT_N]
The --address flag specifies the local addresses to listen on: 0.0.0.0 means all interfaces, localhost can be given as a name, and you can also set a specific IP for it to listen on.
Documentation is available here; you can also read Use Port Forwarding to Access Applications in a Cluster.
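For example, a sketch that listens on all interfaces locally and forwards to a Service (the Service name and ports are made up for illustration):
# local port 8888 on every interface -> port 5432 of service/my-db-proxy in the cluster
kubectl port-forward --address 0.0.0.0 service/my-db-proxy 8888:5432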
One workaround you can use, if you have an SSH server somewhere on the Internet, is to SSH to that server from your pod with a reverse port forward:
# Suppose a web console is being served at
# http://my-service-8f6717ab-e.default:8888/
# inside your cluster:
kubectl exec -it my-job-f523b248-7htj6 -- ssh -R8888:my-service-8f6717ab-e.default:8888 user@34.23.1.2
Then you can connect to the service inside Kubernetes from outside of it. If the SSH server is not local to you, you can SSH to it from your local machine with a normal port forward:
me@my-macbook-pro:$ ssh -L8888:localhost:8888 user@34.23.1.2
Then point your browser to http://localhost:8888/
I am trying to open up some ports on my compute VM.
For example, I have this in firewall-rules
$ gcloud compute firewall-rules list
NAME                    NETWORK  SRC_RANGES    RULES                         SRC_TAGS  TARGET_TAGS
default-allow-http      default  0.0.0.0/0     tcp:80                                  http-server
default-allow-https     default  0.0.0.0/0     tcp:443                                 https-server
default-allow-icmp      default  0.0.0.0/0     icmp
default-allow-internal  default  10.128.0.0/9  tcp:0-65535,udp:0-65535,icmp
default-allow-rdp       default  0.0.0.0/0     tcp:3389
default-allow-ssh       default  0.0.0.0/0     tcp:22
test-24284              default  0.0.0.0/0     tcp:24284                               test-tcp-open-24284
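For reference, a rule like test-24284 above would have been created with something along these lines (a sketch reconstructed from the listing):
$ gcloud compute firewall-rules create test-24284 \
    --network default --allow tcp:24284 \
    --source-ranges 0.0.0.0/0 --target-tags test-tcp-open-24284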
I have created a CentOS 7 instance to which I have attached the tags:
$ gcloud compute instances describe test-network-opened
...
...
items:
- http-server
- https-server
- test-tcp-open-24284
...
...
Now when I try to check from my dev box whether the port is open, using nmap on the public IP shown in the console for the VM:
$ nmap -p 24284 35.193.xxx.xxx
Nmap scan report for 169.110.xxx.xx.bc.googleusercontent.com (35.193.xxx.xxx)
Host is up (0.25s latency).
PORT STATE SERVICE
24284/tcp closed unknown
Nmap done: 1 IP address (1 host up) scanned in 1.15 seconds
Now it's hitting the external NAT IP for my VM which would be 169.110.xxx.xx
I tried checking the iptables rules, but that didn't show anything:
[root@test-network-opened ~]# iptables -S | grep 24284
[root@test-network-opened ~]#
So I enabled firewalld and tried opening the port with it
[root@test-network-opened ~]# firewall-cmd --zone=public --add-port=24284/tcp --permanent
success
[root@test-network-opened ~]# firewall-cmd --reload
success
[root@test-network-opened ~]# iptables -S | grep 24284
[root@test-network-opened ~]#
I am not sure what I am doing wrong here. I referred to these relevant questions on SO:
How to open a specific port such as 9090 in Google Compute Engine
Can't open port 8080 on Google Compute Engine running Debian
How to open a port on google compute engine
https://cloud.google.com/compute/docs/vpc/using-firewalls
https://cloud.google.com/sdk/gcloud/reference/compute/instances/describe
The ports were opened by the firewall, but since I didn't have an application already using the port, nmap was reporting the port as closed, which means it was able to reach the server and was not being firewalled.
If it had been firewalled, it would have shown filtered.
I didn't have any application running on it, so I didn't know this was a possibility. Careless of me.
Thanks for pointing this out.
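A quick way to see the difference (a sketch: start a throwaway listener on the VM, then re-run nmap from the dev box; the port is the one from the question):
[root@test-network-opened ~]# nc -l 24284 &    # nmap-ncat syntax as shipped with CentOS 7
$ nmap -p 24284 35.193.xxx.xxx                 # should now report the port as open instead of closed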