Deploying a stateless Go app with Redis on Kubernetes

I have deployed a stateless Go web app with Redis on Kubernetes. The Redis pod is running fine, but the main issue is with the application pod: I am getting the error dial tcp: i/o timeout in its log. Thank you!!

Please take a look at aks-vm-timeout.
Make sure that the default network security group isn't modified and that both port 22 and 9000 are open for connection to the API server. Check whether the tunnelfront pod is running in the kube-system namespace using the kubectl get pods --namespace kube-system command.
If it isn't, force-delete the pod and it will restart.
Also make sure the Redis port is open.
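A minimal sketch of those checks (the pod and service names below are placeholders, substitute your own):
# Look for the tunnelfront pod in kube-system
kubectl get pods --namespace kube-system | grep tunnelfront
# If it is stuck, force-delete it; AKS will recreate it (pod name is a placeholder)
kubectl delete pod tunnelfront-6f7d5c8b9-abcde --namespace kube-system --grace-period=0 --force
# Check that Redis answers on its port from inside the cluster (service name is a placeholder)
kubectl run redis-check --rm -it --restart=Never --image=redis -- redis-cli -h my-redis-service -p 6379 ping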
More info about troubleshooting: dial-backend-troubleshooting.
EDIT:
Answering your question about tunnelfront:
tunnelfront is an AKS system component that's installed on every cluster; it helps facilitate secure communication between your hosted Kubernetes control plane and your nodes. It's needed for certain operations like kubectl exec, and will be redeployed to your cluster on version upgrades.
Speaking about the VM:
I would SSH into it and start watching the disk I/O latency using bpf/bcc tools and the docker/kubelet logs.
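A rough sketch of what I'd run on the node, assuming the bcc tools are installed (the tools path varies by distro):
# Block-device I/O latency histograms, refreshed every 5 seconds
sudo /usr/share/bcc/tools/biolatency 5
# Tail kubelet and docker logs for timeouts in the same window
sudo journalctl -u kubelet --since "10 min ago" -f
sudo journalctl -u docker --since "10 min ago" -f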

Related

Kubernetes service with port-forward does not load balance

I'm goofing around with K8s for my master's thesis at the moment. For this I'm spinning up a K8s cluster with the help of KinD. I have also developed a small Flask REST API which echoes an ENV var.
Now I'm starting 3 services which hold a number of pods of the Flask app, and they call each other. For better understanding, I have a hello svc, a world svc and a world2 svc.
So far so good.
I have successfully deployed them and now I want to port-forward the hello svc.
kubectl --namespace test port-forward svc/hello 30000
This works fine, but as soon as I start my JMeter application to test the load-balancing features, something odd happens.
As you can see in the Grafana dashboard, the other services are happily load balancing the traffic, but the svc which is port-forwarded is sending all of its traffic to one hello pod.
This is my deployment:
deployment.yml
Am I missing something? Or did I deploy my application wrong?
Thanks in advance!
Port-forward allows the use of services for convenience purposes only. Behind the scenes it connects to a single pod directly, and the connection will be dropped should this pod die. There is no load balancing in port-forward: one pod selected by the service is chosen and all traffic is forwarded there for the entire lifetime of the port-forward command. I would suggest using a NodePort type service if you need to test load balancing via JMeter from outside the Kubernetes cluster.
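For example, a minimal sketch assuming the hello deployment serves on port 5000 (adjust names and ports to yours):
# Expose the hello deployment through a NodePort service instead of port-forward
kubectl --namespace test expose deployment hello --name=hello-nodeport --type=NodePort --port=5000
# Find the node port that was assigned (30000-32767 by default)
kubectl --namespace test get svc hello-nodeport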
For all those who are interested, I found a workaround which is also closer to production.
First of all I installed MetalLB: https://mauilion.dev/posts/kind-metallb/
With this load balancer I declared an IP range which is the same as the one of my nodes.
The service which I am exposing also received type: LoadBalancer, and with this Grafana is now showing an equal distribution of requests.
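Roughly what that looked like, as a sketch (this uses the pre-0.13 MetalLB ConfigMap format; the address range is an assumption and must match your KinD docker network):
kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.18.255.200-172.18.255.250
EOF
# Switch the exposed service to type LoadBalancer
kubectl --namespace test patch svc hello -p '{"spec": {"type": "LoadBalancer"}}'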

How do we debug networking issues within istio pods?

I am working on setting up istio in my kubernetes cluster.
I downloaded istio-1.4.2, installed the demo profile and did manual sidecar injection.
But when I check sidecar pod logs, I am getting the below error.
2019-12-26T08:54:17.694727Z error k8s.io/client-go#v11.0.1-0.20190409021438-1a26190bd76a+incompatible/tools/cache/reflector.go:98: Failed to list *v1beta1.MutatingWebhookConfiguration: Get https://10.96.0.1:443/apis/admissionregistration.k8s.io/v1beta1/mutatingwebhookconfigurations?fieldSelector=metadata.name%3Distio-sidecar-injector&limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
It seems to be a networking issue, but could you please let me know what it is trying to do exactly?
Is there a way to get more logs than just 'connection refused'?
How do we verify networking issues between istio pods? It seems I cannot run 'wget', 'curl', 'tcpdump', 'netstat' etc. within the istio sidecar pod to debug further.
All the pods in kube-system namespace are working fine.
Check which port your API server is serving https traffic on (controlled by the flag --secure-port int, default: 6443). It may be 6443 instead of 443.
Check what the value of server is in your kubeconfig and whether you are able to connect to your Kubernetes cluster via kubectl using that kubeconfig.
Another thing to check is whether you have a network policy attached to the namespace which blocks egress traffic.
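A few quick commands along those lines (the namespace is an assumption):
# Confirm kubectl itself can reach the API server, and on which address/port
kubectl cluster-info
grep server ~/.kube/config
# List NetworkPolicies that could be blocking egress from the app's namespace
kubectl get networkpolicy --namespace default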
And you could use an ephemeral container to debug issues with the sidecar:
https://kubernetes.io/docs/concepts/workloads/pods/ephemeral-containers/
https://github.com/aylei/kubectl-debug
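For example, something like this (the pod name and debug image are assumptions, and it requires a kubectl/cluster version with ephemeral containers enabled):
# Attach an ephemeral debug container that has curl/tcpdump/netstat available
kubectl debug -it my-app-pod --image=nicolaka/netshoot --target=istio-proxy -- sh
# From inside it, test reachability of the API server address from the error, e.g.:
# curl -vk https://10.96.0.1:443/version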

Why go-micro Kubernetes plugin requires to register the pod to registry?

I have a question regarding how to use go-micro with Kubernetes. AFAIK, Kubernetes already has kube-dns for service discovery and kube-proxy with Service abstraction to expose the pods.
Is it possible to use go-micro, but skip having the go-micro Kubernetes plugin register itself with the Kubernetes API server?
Because I am not sure why it is necessary in the first place. The fact is that kubelet will do that for us automatically (via the livenessProbe and readinessProbe checks it can determine whether a pod is healthy or not), by only including healthy pods in the service's endpoints.
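For reference, this is the kind of check I mean (the service name here is hypothetical):
# Only pods whose readinessProbe passes show up as ready endpoints,
# which is what kube-dns/kube-proxy route traffic to
kubectl get endpoints cdp-booking-context-svc --namespace data-cdp -o yaml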
I am asking the question because we're also using istio-proxy. We get micro-service errors whenever a pod is starting, because istio-proxy is not yet ready to route the traffic (even the traffic to the kube API, since it intercepts the egress traffic from our main container, which uses the go-micro Kubernetes plugin).
2018/10/17 04:37:55 Can't create server! reason: Patch
https://10.32.64.1:443/api/v1/namespaces/data-cdp/pods/cdp-booking-context-svc-stable-864645684b-xd2tb:
dial tcp 10.32.64.1:443: connect: connection refused
It then causes the main container (the go-micro kube plugin app) to crash-loop multiple times, until the istio-proxy is ready. This is not a big issue, but it troubles my mind about the motivation behind the registration thing.

Is kubectl port-forward encrypted?

I couldn't find any information on whether the connection between a cluster's pod and localhost is encrypted when running the "kubectl port-forward" command.
It seems like it uses the "socat" utility, which supports encryption, but I'm not sure if Kubernetes actually uses it.
As far as I know, when you port-forward the port of choice to your machine, kubectl connects to one of the masters of your cluster, so yes, normally communication is encrypted. How your master communicates with the pod, though, depends on how you set up internal comms.
kubectl port-forward uses socat to make an encrypted TLS tunnel with port forwarding capabilities.
The tunnel goes from you to the kube api-server to the pod so it may actually be 2 tunnels with the kube api-server acting as a pseudo router.
An example of where I've found it useful: I was doing a quick PoC of a Jenkins pipeline hosted on Azure Kubernetes Service, and earlier in my Kubernetes studies I didn't know how to set up an Ingress. I could reach the server via port 80 unencrypted, but I knew my traffic could be snooped on, so I just used kubectl port-forward to log in temporarily and securely while debugging my PoC. It's also really helpful with a RabbitMQ cluster hosted on Kubernetes: you can go into the management web page with kubectl port-forward and make sure that it's clustering the way you wanted it to.
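For example (the service names and namespaces here are made up):
# Forward the RabbitMQ management UI and a Jenkins UI to localhost over the
# TLS connection kubectl already has to the API server
kubectl port-forward svc/rabbitmq 15672:15672 --namespace messaging
kubectl port-forward svc/jenkins 8080:8080 --namespace ci
# then open http://localhost:15672 or http://localhost:8080 locally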

Does the kube-apiserver expect the presence of kube-proxy?

I've been running my kubernetes masters separately from my kubernetes nodes. So I have kube-apiserver, kube-scheduler and kube-controllermanager running on a server without kubelet, kube-proxy or flannel.
So far this has worked perfectly. However, today I attempted to set up the Web UI and access it through an API server. I got the following error when accessing http://kube-master-0:8080/ui:
Error: 'dial tcp 172.16.72.12:9090: getsockopt: connection timed out'
Trying to reach: 'http://172.16.72.12:9090/'
This suggests to me that the API server is trying to connect to the pod IP; since we don't have flannel or kube-proxy running on this host, the 172.16.72.12 IP will not be routed.
Am I expected to run kube-proxy and flannel on my API servers? Is there another way to let the API server proxy the UI?
It's not required, but it will certainly make your life easier.
The reason this isn't working is that kube-proxy isn't directing traffic to the service. Try kube-node:8080/ui (assuming you have exposed it with a NodePort configuration).
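A sketch of what I mean, assuming the dashboard deployment is called kubernetes-dashboard and listens on 9090 (as the pod IP in your error suggests):
# Expose the dashboard via a NodePort so it is reachable at any node's IP
kubectl --namespace kube-system expose deployment kubernetes-dashboard --name=dashboard-nodeport --type=NodePort --port=9090
# Note the assigned node port, then browse http://<node-ip>:<node-port>/
kubectl --namespace kube-system get svc dashboard-nodeport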
In theory, the kube-apiserver does not expect the presence of kube-proxy.
This means the kube-apiserver will run correctly, receive requests and handle them (mostly reads from and writes to etcd).
But if you want the whole cluster to work, you will need other components running, for example:
if you want pods or deployments to be scheduled, kube-scheduler should be running
if you want pods and containers to be running on nodes, kubelet has to be running
if you want replications to be guarded, controller-manager should be running
As for kube-proxy and flannel, they are critical parts for making sure networking works. Load balancing, Services, cross-host pod communication etc. all depend on them.
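A quick sketch of how to see which of these components are up, from the apiserver's point of view:
# Health of scheduler, controller-manager and etcd as seen by the apiserver
kubectl get componentstatuses
# Whether kubelets have registered their nodes and are Ready
kubectl get nodes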