We are running a Kubernetes cluster in AWS. We set up the cluster with kubeadm without the cloud provider option (i.e., as if on bare metal).
The nginx-ingress controller is exposed as a NodePort service on port 32000. We have configured an AWS ALB to pass external requests to the K8s worker nodes on port 32000.
We have noticed that worker nodes keep turning up unhealthy. On investigating further, the NodePort connection seems to be inconsistent: as you can see below, connecting to the same IP on port 32000 works most of the time, but often just hangs at "Trying ...". I am not able to see any error message related to this. Any help is highly appreciated.
[root@ip-10-35-2-205 ~]# telnet 10.35.3.76 32000
Trying 10.35.3.76...
Connected to 10.35.3.76.
Escape character is '^]'.
^CConnection closed by foreign host.
[root@ip-10-35-2-205 ~]# telnet 10.35.3.76 32000
Trying 10.35.3.76...
^C
[root@ip-10-35-2-205 ~]# telnet 10.35.3.76 32000
Trying 10.35.3.76...
Connected to 10.35.3.76.
Escape character is '^]'.
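For reference, the NodePort exposure described above corresponds to a Service roughly like the following sketch (the names and selector are illustrative; only the nodePort value of 32000 comes from the setup described above):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 32000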
Related
I am just getting started with Kubernetes. I'm using microk8s and I have been stuck on this for five days.
I currently have Airflow installed on microk8s using Helm, running on an AWS EC2 instance. I forwarded port 8080, but the connection is being refused.
Helm chart
https://github.com/airflow-helm/charts/tree/main/charts/airflow
Pod status
Webserver log
Describe
Port
I allowed port 8080 in the security group of the AWS EC2 instance.
Port-forward 8080
I ran kubectl port-forward svc/airflow-web 8080:8080
and checked the port with
netstat -ntlp
and ran
kubectl get cs
...but when I connected to 127.0.0.1:8080, the connection was refused:
Here is the log for PostgreSQL. There is an error; is it related to this?
service:
  type: NodePort
  externalPort: 8080
I solved the problem by changing the webserver's service type to NodePort.
In the case of ClusterIP, I don't know why I couldn't connect.
In the instance security group, I allowed the whole NodePort range, 30000-32767.
I am very happy.
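As a rough sketch of how that change can be applied with the chart linked above (the values key path is an assumption; verify it against the chart's values.yaml), the override and the follow-up check might look like this:

# values-override.yaml (key path is an assumption for the airflow-helm chart)
web:
  service:
    type: NodePort
    externalPort: 8080

# apply the override and find the assigned NodePort (release/repo names are illustrative)
helm upgrade airflow airflow-stable/airflow -f values-override.yaml
kubectl get svc airflow-web
# then open http://<node-public-ip>:<assigned-nodeport>, with that port allowed in the EC2 security group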
My cluster is running on-prem. When I try to ping the external IP that MetalLB assigned to a service of type LoadBalancer, I get a reply from one of the VMs hosting the pods: Destination Host Unreachable. Is this because the pods are on an internal Kubernetes network (I am using Calico) and cannot be pinged? A detailed explanation of the scenario would help me understand it better. All the services are performing as expected; I am just curious to know the exact reason behind this, since I am new at this. Any help will be much appreciated. Thank you.
The LoadBalancer IP (the external SVC IP) will never be pingable.
When you define a Service of type LoadBalancer, you are saying: I would like this SVC to listen on, for example, TCP port 8080.
And that is the only thing your external SVC IP will respond to.
A ping consists of ICMP packets, which do not match the destination of TCP port 8080.
You can do an nc -v <ExternalIP> 8080 to test it.
OR
use a tool like mtr and pass --tcp --port 8080 to do your tests
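As a minimal sketch of what that looks like in practice (the service name and app selector are illustrative), the Service below exposes only TCP 8080, so only TCP probes against the external IP get an answer:

apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080

# TCP connects; ICMP gets no reply from the service IP
nc -v <ExternalIP> 8080
ping <ExternalIP>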
Actually, during installation of MetalLB, we need to assign an IP range from which MetalLB can hand out addresses. Those IPs must be within the range of your DHCP network; for example, in VirtualBox, if you use a host-only adapter, the network IPs are assigned from the VirtualBox host-only adapter's DHCP server.
The components of metal LB are:
The metallb-system/controller deployment. This is the cluster-wide controller that handles IP address assignments.
The metallb-system/speaker daemonset. This is the component that speaks the protocol(s) of your choice to make the services reachable.
When you change the service type to LoadBalancer, MetalLB will assign an IP address from its IP pools, which is basically a mapping of the Kubernetes internal service IP to the MetalLB-assigned IP.
You can see this with
kubectl get svc -n <namespace>
For more details, please check this document.
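For illustration, a legacy ConfigMap-style address pool looks roughly like this (the 192.168.56.x range is just an example host-only subnet; newer MetalLB releases configure pools via IPAddressPool custom resources instead):

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.56.240-192.168.56.250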
I am taking my first steps with Kubernetes and I have some difficulties.
I have a pod with its service defined as NodePort on port 30010.
I have a load balancer configured in front of this Kubernetes cluster, where port 8443 directs traffic to port 30010.
When I try to access this pod from outside the cluster on port 8443, the pod is not getting any connections, but I can see the incoming connections via tcptrack on the host on port 30010, which means the load balancer is doing its job.
When I do curl -k https://127.0.0.1:30010 on the host, I get a response from the pods.
What am I missing?
How can I debug it?
Thanks.
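For reference, one way to start narrowing this down, along the lines of the curl test above (node IP and service name are placeholders), is to repeat the request against the node's routable address instead of loopback, and to confirm the NodePort's endpoints:

# from the load balancer host, hit the worker's routable address rather than 127.0.0.1
curl -kv https://<node-ip>:30010
# confirm the service's NodePort and that it has healthy endpoints
kubectl get svc <service-name> -o wide
kubectl get endpoints <service-name>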
I have a K8s cluster installed on several RHEL 7.2 VMs.
It seems that the installation from the yum repository comes without addons.
Currently I am facing the following problem with almost any service I try to deploy: Jenkins, kube-ui, influxdb-grafana.
Endpoint IPs are not in the range that is defined for Flannel, and obviously the services are not available.
Any ideas on how to debug/resolve the problem?
System details:
# lsb_release -i -r
Distributor ID: RedHatEnterpriseServer
Release: 7.2
Packages installed:
kubernetes.x86_64 1.2.0-0.9.alpha1.gitb57e8bd.el7
etcd.x86_64 2.2.5-1.el7
flannel.x86_64 0.5.3-9.el7
docker.x86_64 1.9.1-25.el7.centos
ETCD network configuration
# etcdctl get /atomic.io/network/config
{"Network":"10.0.0.0/16"}
Service gets proper IP but wrong Endpoints
# kubectl describe svc jenkinsmaster
Name: jenkinsmaster
Namespace: default
Labels: kubernetes.io/cluster-service=true,kubernetes.io/name=JenkinsMaster
Selector: name=jenkinsmaster
Type: NodePort
IP: 10.254.113.89
Port: http 8080/TCP
NodePort: http 30996/TCP
Endpoints: 172.17.0.2:8080
Port: slave 50000/TCP
NodePort: slave 31412/TCP
Endpoints: 172.17.0.2:50000
Session Affinity: None
No events.
Thank you.
I think the flannel network subnet and the Kubernetes internal network subnet are conflicting here.
With the amount of information I see now, all I can say is that there is a conflict. To verify that flannel is working, just start a container on two different machines connected with flannel and see if they can talk and what IP addresses they get (see the sketch below). If they are assigned IPs in the 10.0.0.0/16 range and they can talk, then flannel is doing its job and something is wrong with the Kubernetes integration.
If they are getting IP addresses from some other range, flannel is not working correctly.
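A minimal sketch of that check, assuming Docker containers on two flannel-connected nodes (the image and container name are illustrative):

# on node A: start a container and note its IP (expect something in 10.0.0.0/16)
docker run -d --name flannel-test busybox sleep 3600
docker inspect -f '{{ .NetworkSettings.IPAddress }}' flannel-test

# on node B: ping the IP printed above
docker run --rm busybox ping -c 3 <IP-from-node-A>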
Kubernetes 1.2... Docker 1.9... Those are ancient versions now, so you don't have CNI or kubeadm. I can barely remember how to set up a Kubernetes cluster with flannel from that time.
Anyway, you need to know that the Endpoint IP is the same as the target Pod IP, i.e. the IP of the Docker container. Your Docker container IPs are not in the same range as your flannel network, and 172.17.0.x is the default Docker bridge range. So I think you need to change the Docker start parameters to something like --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}; you can use 10.0.0.0/16 as FLANNEL_SUBNET if you want a basic setup (a sketch follows below).
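A minimal sketch of that, assuming flanneld writes its per-node values to /run/flannel/subnet.env (the path and the example values are assumptions for this setup):

# values written by flanneld on each node (example contents)
cat /run/flannel/subnet.env
# FLANNEL_SUBNET=10.0.62.1/24
# FLANNEL_MTU=1472

# start the Docker daemon (Docker 1.9 syntax) with the flannel-provided bridge and MTU
source /run/flannel/subnet.env
docker daemon --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}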
We currently have the following Kubernetes setup (v1.13.1, set up with kubeadm), with connectivity between all of the nodes:
Master node (bare metal)
5 worker nodes (bare metal)
2 worker nodes (cloud)
There is no proxy in between to access the cluster; currently we are accessing services via hostname:NodePort.
We are experiencing an issue with accessing services via NodePort on the 2 cloud worker nodes. What is happening is that the service is accessible via IPv6, but not via IPv4:
IPv6:
telnet localhost6 30005
Trying ::1...
Connected to localhost6.
Escape character is '^]'.
IPv4:
telnet localhost4 30005
Trying 127.0.0.1...
The thing is that both work on the bare metal nodes. If I use netstat -napl | grep 30005, I can see that kube-proxy is listening on this port (tcp6). I presumed this meant it does not listen on plain IPv4 tcp, but apparently that is not the case (I see the same picture on the bare metal worker nodes):
tcp6 7 0 :::30005 :::* LISTEN 24658/kube-proxy
I have also read that services use IPv6, but judging by the bare metal worker nodes, it seems there should not be a problem using IPv4 there as well.
Any idea what would cause that issue and how to solve it?
Thank you and best regards,
Bostjan
In case someone stumbles upon the same issue: the problem was unopened firewall ports for the flannel network overlay:
8285 UDP - flannel UDP backend
8472 UDP - flannel vxlan backend
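For example, with firewalld on the nodes this would be roughly the following (adjust for whatever firewall or cloud security group actually sits in front of the cloud workers):

# open the flannel backend ports between nodes
firewall-cmd --permanent --add-port=8285/udp   # flannel UDP backend
firewall-cmd --permanent --add-port=8472/udp   # flannel VXLAN backend
firewall-cmd --reload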