Kind Kubernetes cluster doesn't have container logs - kubernetes

I have installed a Kubernetes cluster using kind, as it was easier to set up and run in my local VM. I also installed Docker separately. I then created a Docker image for a Spring Boot application I built that prints messages to stdout, and loaded it into the kind cluster's local registry. Using this newly created local image, I created a deployment in the Kubernetes cluster with the kubectl apply -f config.yaml command. Using a similar method I've also deployed fluentd, hoping to collect logs from /var/log/containers, which would be mounted into the fluentd container.
I noticed the /var/log/containers/ symlink doesn't exist. However, there is /var/lib/docker/containers/, and it has folders for some containers that were created in the past. None of the new container IDs seem to exist in /var/lib/docker/containers/ either.
I can see logs in the console when I run kubectl logs pod-name, even though I'm unable to find the logs in local storage.
Following the answer given by a Stack Overflow member in another thread, I was able to get some information, but not all of it.
I have confirmed Docker is configured with the json-file logging driver by running the following command:
docker info | grep -i logging
When I run the following command (found in the thread mentioned above) I can get the container ID:
kubectl get pod pod-name -ojsonpath='{.status.containerStatuses[0].containerID}'
However, I cannot use it with docker inspect, as the local Docker daemon is not aware of such a container, which I assume is because it is managed by the kind control plane.
I'd appreciate it if the experts on the forum could help identify where the logs are written, and how to recreate the /var/log/containers symbolic link so I can access the container logs.

It's absolutely normal that your locally installed Docker doesn't show the containers running in pods created by kind. Let me explain why.
First, we need to figure out why kind actually needs Docker. It doesn't need it to run the containers inside your pods. It needs Docker to create a container which acts as a Kubernetes node - and inside that container you have the pods, which contain the containers you are looking for.
kind is a tool for running local Kubernetes clusters using Docker container “nodes”.
So basically the layers are: your VM -> a container hosted on your VM's Docker acting as a Kubernetes node -> inside this container there are pods -> inside those pods are containers.
In the kind quick start section you can find more detailed information about the image used by kind:
This will bootstrap a Kubernetes cluster using a pre-built node image. Prebuilt images are hosted at kindest/node, but to find images suitable for a given release currently you should check the release notes for your given kind version (check with kind version) where you'll find a complete listing of images created for a kind release.
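For example, to pin the node image explicitly (the tag below is only an illustration - pick one listed in the release notes for your kind version):
$ kind version
$ kind create cluster --image kindest/node:v1.21.1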
Back to your question, let's find missing containers!
On my local VM, I set up kind and installed the kubectl tool. Then, I created an example nginx deployment. By running kubectl get pods I can confirm the pods are working.
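For reference, the example deployment can be created imperatively; the name and replica count are just what I used here:
$ kubectl create deployment nginx-deployment --image=nginx
$ kubectl scale deployment nginx-deployment --replicas=3
$ kubectl get pods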
Let's find the container which is acting as the node by running docker ps -a:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1d2892110866 kindest/node:v1.21.1 "/usr/local/bin/entr…" 50 minutes ago Up 49 minutes 127.0.0.1:43207->6443/tcp kind-control-plane
Okay, now we can exec into it and find the containers. Note that the kindest/node image does not use Docker as the container runtime; it uses containerd, which we can inspect with crictl.
Let's exec into the node: docker exec -it 1d2892110866 sh:
# ls
bin boot dev etc home kind lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var
#
Now we are in the node - time to check if the containers are here:
# crictl ps -a
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
135c7ad17d096 295c7be079025 47 minutes ago Running nginx 0 4e5092cab08f6
ac3b725061e12 295c7be079025 47 minutes ago Running nginx 0 6ecda41b665da
a416c226aea6b 295c7be079025 47 minutes ago Running nginx 0 17aa5c42f3512
455c69da57446 296a6d5035e2d 57 minutes ago Running coredns 0 4ff408658e04a
d511d62e5294d e422121c9c5f9 57 minutes ago Running local-path-provisioner 0 86b8fcba9a3bf
116b22b4f1dcc 296a6d5035e2d 57 minutes ago Running coredns 0 9da6d9932c9e4
2ebb6d302014c 6de166512aa22 57 minutes ago Running kindnet-cni 0 6ef310d8e199a
2a5e0a2fbf2cc 0e124fb3c695b 57 minutes ago Running kube-proxy 0 54342daebcad8
1b141f55ce4b2 0369cf4303ffd 57 minutes ago Running etcd 0 32a405fa89f61
28c779bb79092 96a295389d472 57 minutes ago Running kube-controller-manager 0 2b1b556aeac42
852feaa08fcc3 94ffe308aeff9 57 minutes ago Running kube-apiserver 0 487e06bb5863a
36771dbacc50f 1248d2d503d37 58 minutes ago Running kube-scheduler 0 85ec6e38087b7
Here they are. You can also notice that there are other containers which are acting as the Kubernetes components.
For further debugging of containers I would suggest reading the documentation about debugging Kubernetes nodes with crictl.
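In particular, since the original question is about logs, you can read a container's log and metadata directly from inside the node with crictl (the container ID is taken from the listing above):
# crictl logs 135c7ad17d096
# crictl inspect 135c7ad17d096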
Please also note that on your local VM there is a file, ~/.kube/config, which has the information kubectl needs to communicate between your VM and the Kubernetes cluster (in the case of kind - the Docker container running locally).
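You can check which context and API server endpoint kubectl is currently using with:
$ kubectl config current-context
$ kubectl cluster-info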
Hope it helps. Feel free to ask any questions.
EDIT - ADDED INFO ON HOW TO SET UP MOUNT POINTS
Answering the question from the comments about mounting a directory from the node to the local VM: we need to set up "Extra Mounts". Let's create the definition needed for kind:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  # add a mount from /path/to/my/files on the host to /files on the node
  extraMounts:
  - hostPath: /tmp/logs/
    containerPath: /var/log/pods
    # optional: if set, the mount is read-only.
    # default false
    readOnly: false
    # optional: if set, the mount needs SELinux relabeling.
    # default false
    selinuxRelabel: false
    # optional: set propagation mode (None, HostToContainer or Bidirectional)
    # see https://kubernetes.io/docs/concepts/storage/volumes/#mount-propagation
    # default None
    propagation: Bidirectional
Note that I'm using /var/log/pods instead of /var/log/containers/ - this is because on a cluster created by kind, the containers directory only has symlinks to the logs in the pods directory.
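You can see this yourself from inside the node container; the angle-bracket placeholders stand in for the real pod and container names and IDs:
# ls -l /var/log/containers
<pod-name>_<namespace>_<container-name>-<container-id>.log -> /var/log/pods/<namespace>_<pod-name>_<pod-uid>/<container-name>/0.log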
Save this YAML, for example as cluster-with-extra-mount.yaml, then create a cluster using it (create the /tmp/logs directory before running this command!):
kind create cluster --config=/tmp/cluster-with-extra-mount.yaml
Then all container logs will be in /tmp/logs on your VM.
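A quick way to check the result on the VM (the path components are placeholders; fluentd can also be pointed at this directory instead of /var/log/containers):
$ ls /tmp/logs
$ tail -f /tmp/logs/<namespace>_<pod-name>_<pod-uid>/<container-name>/0.log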

Related

Failing to run Mattermost locally on a Kubernetes cluster using Minikube

Summary in one sentence
I want to deploy Mattermost locally on a Kubernetes cluster using Minikube
Steps to reproduce
I used this tutorial and the Github documentation:
https://mattermost.com/blog/how-to-get-started-with-mattermost-on-kubernetes-in-just-a-few-minutes/
https://github.com/mattermost/mattermost-operator/tree/v1.15.0
To start minikube: minikube start --kubernetes-version=v1.21.5
To start ingress: minikube addons enable ingress
I cloned the Github repo with tag v1.15.0 (second link)
In the Github documentation (second link) they state that you need to install Custom Resources by running: kubectl apply -f ./config/crd/bases
Afterwards I installed MinIO and MySQL operators by running: make mysql-minio-operators
Started the Mattermost-operator locally by running: go run .
In the end I deployed Mattermost (I followed steps 2, 7 and 9 from the first link)
Observed behavior
Unfortunately I keep getting the following error in the mattermost-operator:
INFO[1419] [opr.controllers.Mattermost] Reconciling Mattermost Request.Name=mm-demo Request.Namespace=mattermost
INFO[1419] [opr.controllers.Mattermost] Updating resource Reconcile=fileStore Request.Name=mm-demo Request.Namespace=mattermost kind="&TypeMeta{Kind:,APIVersion:,}" name=mm-demo-minio namespace=mattermost patch="{\"status\":{\"availableReplicas\":0}}"
INFO[1419] [opr.controllers.Mattermost.health-check] mattermost pod not ready: pod mm-demo-ccbd46b9c-9nq8k is in state 'Pending' Request.Name=mm-demo Request.Namespace=mattermost
INFO[1419] [opr.controllers.Mattermost.health-check] mattermost pod not ready: pod mm-demo-ccbd46b9c-tp567 is in state 'Pending' Request.Name=mm-demo Request.Namespace=mattermost
ERRO[1419] [opr.controllers.Mattermost] Error checking Mattermost health Request.Name=mm-demo Request.Namespace=mattermost error="found 0 updated replicas, but wanted 2"
By using k9s I can see that mm-demo won't start.
Another variation of deployment
I also tried another variation by following all the steps from the first link (without the licence secret step). At this point the mattermost-operator is visible using k9s and isn't getting any errors. But unfortunately the mm-demo pod keeps crashing (the logs are empty, so I'm seeing no errors or anything).
Does anybody have an idea?
As @Ashish faced the same issue, he fixed it by increasing the resources.
Minikube will be able to run all the pods when started with minikube start --kubernetes-version=v1.21.5 --memory 4000 --cpus 4
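If you want to confirm that resources were the bottleneck before recreating the cluster, the scheduler usually records it in the pending pod's events (the pod name below is just the one from the logs above):
$ kubectl -n mattermost describe pod mm-demo-ccbd46b9c-9nq8k
# look for events such as:
#   Warning  FailedScheduling  ...  0/1 nodes are available: 1 Insufficient memory.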

Unable to connect to the server: net/http: TLS handshake timeout

On Minikube for Windows I created a deployment on the Kubernetes cluster, then I tried to scale it by changing replicas from 1 to 2. After that kubectl hangs and my disk usage is at 100%.
I only have one container in my deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: first-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: app
  template:
    metadata:
      labels:
        run: app
    spec:
      containers:
      - name: demo
        image: ner_app
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 5000
All I did was run this after the pods were successfully deployed and running:
kubectl scale --replicas=2 deployment first-deployment
In another terminal I was watching the pods using
kubectl get pods --watch
But everything is unresponsive and I'm not sure how to recover from this.
When I run kubectl get pods again it gives the following message
PS D:\docker\ner> kubectl get pods
Unable to connect to the server: net/http: TLS handshake timeout
Is there a way to recover, or cancel whatever process is running?
Also, my VMs are on Hyper-V for Windows 10 Pro (Minikube and Docker Desktop); both have the default RAM allocated - 2048MB.
The container in my pod is a machine learning process and the model it loads could be large, on the order of 200MB to 300MB.
You may have some proxy problems. Try the following commands:
$ unset http_proxy
$ unset https_proxy
and repeat your kubectl call.
For me, the problem was that Docker ran out of memory. (EDIT: Possibly, anyway; I wrote this post a while ago, and am now not so sure that is the root cause, but did not write down my rationale, so I don't know.)
Anyway, to fix:
Fully close your k8s emulator. (docker desktop, minikube, etc.)
Shutdown WSL2. (wsl --shutdown) [EDIT: This step is apparently not necessary -- at least not always, since this time I skipped it, and the problem still resolved.]
Restart your k8s emulator.
Rerun the commands you wanted.
Sometimes it also works to simply:
Right click the Docker Desktop tray-icon, press "Restart Docker", and wait a few minutes for things to restart. (sometimes this fails, with Docker Desktop saying "Docker failed to start", so I'd generally recommend the more thorough process above)
Just happened to me on a new Windows 10 install with Ubuntu distro in WSL2. I solved the problem by running:
$ sudo ifconfig eth0 mtu 1350
(BTW, I was on a VPN connection when trying the 'kubectl get pods' command)
You can set resource limits on your deployments so that pods do not use all the available resources on the node.
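As a rough sketch, using the container from the deployment in the question (the request and limit values are only placeholders to show the shape):
spec:
  containers:
  - name: demo
    image: ner_app
    resources:
      requests:
        memory: "256Mi"
        cpu: "250m"
      limits:
        memory: "512Mi"
        cpu: "500m"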
In my case I have a private EKS cluster and port 443 (HTTPS) was not enabled in the security groups.
My issue was solved after enabling port 443 (HTTPS) in the security groups.
Kindly refer to the AWS documentation for more details: "You must ensure that your Amazon EKS control plane security group contains rules to allow ingress traffic on port 443 from your connected network"
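If you manage the security group via the AWS CLI, a rule like this (the group ID and CIDR are placeholders for your own values) is roughly what is needed:
$ aws ec2 authorize-security-group-ingress \
    --group-id <control-plane-security-group-id> \
    --protocol tcp \
    --port 443 \
    --cidr <your-connected-network-cidr>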
I solved this problem by executing the following command:
minikube delete
and then starting it again:
minikube start --vm-driver="virtualbox"
Note that if you use this approach your pods will be deleted, and when you run kubectl get pods
you will see this result:
No resources found in default namespace.
You could try $ unset all_proxy to reset the socket proxy.
Also, if you're connected to a VPN, try disconnecting - it seems that can interfere with connecting to a cluster.
I think the other answers don't really mention or refer to the VPN and proxy documentation for minikube: https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/
The NO_PROXY variable here is important: without setting it, minikube may not be able to access resources within the VM. minikube uses several IP ranges, which should not go through the proxy:
192.168.99.0/24: Used by the minikube VM. Configurable for some hypervisors via --host-only-cidr
192.168.39.0/24: Used by the minikube kvm2 driver.
192.168.49.0/24: Used by the minikube docker driver’s first cluster.
10.96.0.0/12: Used by service cluster IPs. Configurable via --service-cluster-ip-range
So adding those IP ranges to your NO_PROXY environment variable should fix the issue.
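For example (bash syntax; extend the list with whichever ranges apply to your driver):
$ export NO_PROXY=localhost,127.0.0.1,192.168.99.0/24,192.168.39.0/24,192.168.49.0/24,10.96.0.0/12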
Simply closing cmd, opening it again, then running
minikube start
and then executing the commands again solved this issue for me.
P.S.: minikube start took less than a minute
Adding the IP address to the no_proxy list worked for me.
Obtain the IP address from ip addr output.
export no_proxy=localhost,127.0.0.1,<IP_ADDRESS>
Restarting minikube will work.
But if you don't want to delete it, you can just switch to another cluster and then switch back.
I just click another Kubernetes cluster (e.g. docker-desktop) and then click back to the cluster I want to run (e.g. minikube).
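The same switch can be done from the command line with kubectl contexts (the context names below assume the defaults created by Docker Desktop and minikube):
$ kubectl config get-contexts
$ kubectl config use-context docker-desktop
$ kubectl config use-context minikube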
If you're on Linux or Mac, go to VirtualBox, and on the toolbar choose 'Global Tools'. If you see two machines using the same IP address, you should remove one of them.
As this answer comes up first when searching for the net/http: TLS handshake timeout error:
For those having this issue with AWS EKS (and likely any Kubernetes cluster), NO_PROXY solves the problem by adding the related IP/host to the environment variable, as suggested in the comments on the first answer.
For AWS EKS (when seeing this intermittently after a vpc-cni addon upgrade), replace with the specific region or a single URL for your use case:
NO_PROXY=$NO_PROXY;eks.amazonaws.com
At least for Windows 10 and 11
PS C:\> oc rollback dc/my-app
Unable to connect to the server: net/http: TLS handshake timeout
For OpenShift 4.x the problem is that for some reason you are logged out:
PS C:\> oc status
error: You must be logged in to the server (Unauthorized)
Logging in, e.g.
$ oc login -u developer
resolves the problem.
Open PowerShell as an administrator and run the command "wsl --shutdown". You will see the same notification in your open Ubuntu terminal.
Open Docker Desktop.
Open a new terminal.
Run the command "minikube status" in the Ubuntu terminal.
Run the Minikube container. You can do this in Docker Desktop.
Run the command "minikube start".
That's it! You don't need to restart your computer after this, and Minikube should work fine.

Waiting for pods: apiserver get stuck

I am trying to implement an auditing policy.
My YAML:
~/.minikube/addons$ cat audit-policy.yaml
# Log all requests at the Metadata level.
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
- level: Metadata
Pods got stuck
minikube start --extra-config=apiserver.Authorization.Mode=RBAC --extra-config=apiserver.Audit.LogOptions.Path=/var/logs/audit.log --extra-config=apiserver.Audit.PolicyFile=/etc/kubernetes/addons/audit-policy.yaml
😄 minikube v0.35.0 on linux (amd64)
💡 Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
🔄 Restarting existing virtualbox VM for "minikube" ...
⌛ Waiting for SSH access ...
📶 "minikube" IP address is 192.168.99.101
🐳 Configuring Docker as the container runtime ...
✨ Preparing Kubernetes environment ...
▪ apiserver.Authorization.Mode=RBAC
▪ apiserver.Audit.LogOptions.Path=/var/logs/audit.log
▪ apiserver.Audit.PolicyFile=/etc/kubernetes/addons/audit-policy.yaml
🚜 Pulling images required by Kubernetes v1.13.4 ...
🔄 Relaunching Kubernetes v1.13.4 using kubeadm ...
⌛ Waiting for pods: apiserver
Why?
I can do this
minikube start
Then I go for minikube ssh
$ sudo bash
$ cd /var/logs
bash: cd: /var/logs: No such file or directory
ls
cache empty lib lock log run spool tmp
How do I apply the extra-config?
I don't have good news. Although you made some mistakes with /var/logs, it does not matter in this case, as there seems to be no way to implement an auditing policy in Minikube (I mean there are a few ways at least, but they all seem to fail).
You can try a couple of the approaches presented in the GitHub issues and other links I will provide, but I have tried probably all of them and they do not work with the current Minikube version. You might try to make this work with earlier versions, as it seems that at some point it was possible with the approach you provided in your question, but as of now, in the updated version, it is not. Anyway, I have spent some time trying the approaches from the links and a couple of my own ideas, but with no success; maybe you will be able to find the missing piece.
You can find more information in these documents:
Audit Logfile Not Created
Service Accounts and Auditing in Kubernetes
fails with -extra-config=apiserver.authorization-mode=RBAC and audit logging: timed out waiting for kube-proxy
How do I enable an audit log on minikube?
Enable Advanced Auditing Webhook Backend Configuration

Kubernetes event logs

As part of debugging I need to track down events like pod creation and removal. In my Kubernetes setup I am using logging level 5.
The kube API server, scheduler, controller manager and etcd are running on the master node, and the minion nodes are running the kubelet and Docker.
I am using journalctl to get the K8s logs on the master node as well as on the worker nodes. On a worker node I can see logs from Docker and the kubelet. These logs contain the events I would expect as I create and destroy pods.
However, on the master node I don't see any relevant logs which would indicate pod creation or removal request handling.
What other logs or methods can I use to get such logs from the Kubernetes master components (API server, controller manager, scheduler, etcd)?
I have checked the logs from the API server, controller manager, scheduler and etcd pods; they don't seem to have such information.
thanks
System component logs:
There are two types of system components:
those that run in a container
and those that do not run in a container.
For example:
The Kubernetes scheduler and kube-proxy run in a container
The kubelet and container runtime, for example Docker, do not run in containers.
On machines with systemd, the kubelet and container runtime write to journald. If systemd is not present, they write to .log files in the /var/log directory. System components inside containers always write to the /var/log directory, bypassing the default logging mechanism. They use the klog logging library.
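For example, on a systemd-based machine the non-containerized components can be followed with journalctl (unit names may vary slightly between distributions):
$ journalctl -u kubelet -f
$ journalctl -u docker -f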
Master component logs:
Get them from the containers running on the master nodes.
$ docker ps | grep apiserver
d6af65a248f1 af20925d51a3 "kube-apiserver --ad…" 2 weeks ago Up 2 weeks k8s_kube-apiserver_kube-apiserver-minikube_kube-system_177a3eb80503eddadcdf8ec0423d04b9_0
5f0e6b33a29f k8s.gcr.io/pause-amd64:3.1 "/pause" 2 weeks ago Up 2 weeks k8s_POD_kube-apiserver-minikube_kube-system_177a3eb80503eddadcdf8ec0423d04b9_0
$ docker logs -f d6a
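Alternatively, since the control-plane components usually run as static pods in the kube-system namespace, you can get the same logs through kubectl (the pod name is typically the component name suffixed with the node name):
$ kubectl -n kube-system get pods
$ kubectl -n kube-system logs -f kube-apiserver-<node-name>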
But all of these approaches to logging are just for testing; you should stream all the logs (app logs, container logs, cluster-level logs, everything) to a central logging system such as ELK or EFK.

kubernetes pods spawn across all servers but kubectl only shows 1 running and 1 pending

I have a new setup of Kubernetes and I created a replication controller with 2 replicas. However, what I see when I do kubectl get pods is that one is running and another is "pending". Yet when I go to my 7 test nodes and run docker ps, I see that all of them are running.
What I think is happening is that I had to change the default insecure port from 8080 to 7080 (the docker app actually runs on 8080), however I don't know how to tell if I am right, or where else to look.
Along the same vein, is there any way to set up the config for kubectl so that I can specify the port? Doing kubectl --server="" is a bit annoying (yes, I know I can alias this).
If you changed the API port, did you also update the nodes to point them at the new port?
For the kubectl --server=... question, you can use kubectl config set-cluster to set cluster info in your ~/.kube/config file to avoid having to use --server all the time. See the following docs for details:
http://kubernetes.io/v1.0/docs/user-guide/kubectl/kubectl_config.html
http://kubernetes.io/v1.0/docs/user-guide/kubectl/kubectl_config_set-cluster.html
http://kubernetes.io/v1.0/docs/user-guide/kubectl/kubectl_config_set-context.html
http://kubernetes.io/v1.0/docs/user-guide/kubectl/kubectl_config_use-context.html
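A minimal sketch of what that could look like, assuming your API server listens on port 7080 as described; the cluster/context names and master IP are placeholders:
$ kubectl config set-cluster local --server=http://<master-ip>:7080
$ kubectl config set-context local --cluster=local
$ kubectl config use-context local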