Why are our Jenkins Kubernetes Pods/Slaves showing as Offline - kubernetes

Jenkins ver. 2.77
K8s Version: v1.6.6
We have installed the Jenkins Kubernetes Plugin and configured it to work with our K8s Cluster.
We are able to successfully connect to the cluster when we test our connection via
“Manage Jenkins” -> “Configure System” -> Cloud, Kubernetes.
Our Template config can be seen here:
Kubernetes Pod Template Config
We then create a simple job to test the plugin and see if the slaves would be created and then run a few simple bash commands.
The bash commands we are testing are:
sleep 10
echo "I am a slave"
echo "This is a K8s plugin generated slave"
When we configured our Plugin we assigned the label "autoscale". In addition we set up our job to work with the label autoscale.
Within the configuration of the Job under Label Expression we also see the following
"Label autoscale is serviced by no nodes and 1 cloud"
We then start the job in Jenkins "Build Now"
We then see the pods created in our K8s cluster
jenkins-pod-slave-d4j3n 1/1 Running 0 21h
jenkins-pod-slave-tb2td 1/1 Running 0 21h
However, note that under Build History we can see the following message:
#1 (pending—All nodes of label ‘autoscale’ are offline)
Investigating the logs of the pods outputs nothing
kubectl logs jenkins-pod-slave-d4j3n
kubectl logs jenkins-pod-slave-tb2td
Investigating the Jenkins logs, we can see the following message appear:
Oct 08, 2017 6:18:16 PM org.csanchez.jenkins.plugins.kubernetes.KubernetesCloud addProvisionedSlave
INFO: Template instance cap of 2 reached for template Jenkins-Pod-Slave, not provisioning: 2 running in namespace {3} with label {4}
Our concern is that the namespace and label value are not being picked up correctly, and could be the source of the problem.

Your issue may be the command and arguments.
The Command field should be blank, and the Arguments field should be set to:
${computer.jnlpmac} ${computer.name}
This will allow the JNLP slave to connect to the Jenkins master correctly.
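To make the substitution concrete, here is a sketch (with made-up placeholder values, not taken from the question's setup) of the launch command the JNLP agent container effectively ends up running once those two arguments are passed through:

```shell
# Illustration only: placeholder values stand in for what Jenkins substitutes.
JNLP_SECRET="0123abcd"                 # substituted for ${computer.jnlpmac}
AGENT_NAME="jenkins-pod-slave-d4j3n"   # substituted for ${computer.name}

# The JNLP slave image's entrypoint runs something along these lines
# (JENKINS_URL is a placeholder for the master's address):
LAUNCH_CMD="java -jar slave.jar -jnlpUrl http://JENKINS_URL/computer/${AGENT_NAME}/slave-agent.jnlp ${JNLP_SECRET} ${AGENT_NAME}"
echo "${LAUNCH_CMD}"
```

With a non-blank Command field, the image's entrypoint is overridden and the agent never dials back, which matches the pods running but staying offline.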

Related

How to fix error with GitLab runner inside Kubernetes cluster - try setting KUBERNETES_MASTER environment variable

I have setup two VMs that I am using throughout my journey of educating myself in CI/CD, GitLab, Kubernetes, Cloud Computing in general and so on. Both VMs have Ubuntu 22.04 Server as a host.
VM1 - MicroK8s Kubernetes cluster
Most of the setup is "default". Since I'm not really that knowledgeable, I have only configured two pods and their respective services - one with PostGIS and the other with GeoServer. My intent is to add a third pod, which is the deployment of an app that I have in VM2 and that will communicate with the GeoServer in order to provide a simple map web service (Leaflet + Django). All pods are exposed both within the cluster via internal IPs as well as externally (externalIp).
I have also installed two GitLab-related components here:
GitLab Runner with Kubernetes as executor
GitLab Kubernetes Agent
In VM2 both are visible as connected.
VM2 - GitLab
Here is where GitLab (default installation, latest version) runs. In the configuration (/etc/gitlab/gitlab.rb) I have enabled the agent server.
Initially I had the runner in VM1 configured to use Docker as executor. I had no issues with that. However, I then thought it would be nice to try running the runner inside the cluster so that everything is encapsulated (using the internal cluster IPs without further configuration and without exposing the VM's operating system).
Both the runner and agent are showing as connected but running a pseudo-CI/CD pipeline (the one provided by GitLab, where you have build, test and deploy stages with each consisting of a simple echo and waiting for a few seconds) returns the following error:
Running with gitlab-runner 15.8.2 (4d1ca121)
on testcluster-k8s-runner Hko2pDKZ, system ID: s_072d6d140cfe
Preparing the "kubernetes" executor
Using Kubernetes namespace: gitlab-runner
ERROR: Preparation failed: getting Kubernetes config: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
Will be retried in 3s ...
Using Kubernetes namespace: gitlab-runner
ERROR: Preparation failed: getting Kubernetes config: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
Will be retried in 3s ...
Using Kubernetes namespace: gitlab-runner
ERROR: Preparation failed: getting Kubernetes config: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
Will be retried in 3s ...
ERROR: Job failed (system failure): getting Kubernetes config: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
I am unable to find any information regarding KUBERNETES_MASTER except in issue tickets (GitLab) and questions (SO and other Q&A platforms). I have no idea what it is or where to set it. My guess would be that it belongs in the runner's configuration on VM1, or at least in the environment of the gitlab-runner user (the user that contains the runner's userspace with its respective /home/gitlab-runner directory).
The only possible solution I have found so far is to copy the .kube directory from the user which uses kubectl (in my case microk8s kubectl, since I use MicroK8s) to the home directory of the GitLab runner. I didn't see anything special in this directory (no hidden files) except for a cache subdirectory, hence my decision to simply create it at /home/gitlab-runner/.kube, which didn't change a thing.
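For reference, a minimal sketch of that kubeconfig idea, assuming MicroK8s on VM1 and a gitlab-runner system user (the function name is illustrative; paths come from the question). `microk8s config` prints a ready-to-use kubeconfig for the local cluster, which is what the executor complains is missing:

```shell
# A sketch, not a verified fix: export the MicroK8s kubeconfig into the
# gitlab-runner user's home so the Kubernetes executor can find a config.
install_runner_kubeconfig() {
  home_dir="${1:-/home/gitlab-runner}"
  mkdir -p "${home_dir}/.kube"
  # 'microk8s config' prints a kubeconfig for the local MicroK8s cluster
  microk8s config > "${home_dir}/.kube/config"
  chown -R gitlab-runner:gitlab-runner "${home_dir}/.kube"
}

# Only attempt this on the VM where MicroK8s is actually installed:
if command -v microk8s >/dev/null 2>&1; then
  install_runner_kubeconfig || true
fi
```

Note that if the runner process itself runs inside the cluster, the usual route is a ServiceAccount with appropriate RBAC rather than a kubeconfig file on disk.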

Failing to run Mattermost locally on a Kubernetes cluster using Minikube

Summary in one sentence
I want to deploy Mattermost locally on a Kubernetes cluster using Minikube
Steps to reproduce
I used this tutorial and the Github documentation:
https://mattermost.com/blog/how-to-get-started-with-mattermost-on-kubernetes-in-just-a-few-minutes/
https://github.com/mattermost/mattermost-operator/tree/v1.15.0
To start minikube: minikube start --kubernetes-version=v1.21.5
To start ingress: minikube addons enable ingress
I cloned the Github repo with tag v1.15.0 (second link)
In the Github documentation (second link) they state that you need to install Custom Resources by running: kubectl apply -f ./config/crd/bases
Afterwards I installed MinIO and MySQL operators by running: make mysql-minio-operators
Started the Mattermost-operator locally by running: go run .
In the end I deployed Mattermost (I followed steps 2, 7 and 9 from the first link)
Observed behavior
Unfortunately I keep getting the following error in the mattermost-operator:
INFO[1419] [opr.controllers.Mattermost] Reconciling Mattermost Request.Name=mm-demo Request.Namespace=mattermost
INFO[1419] [opr.controllers.Mattermost] Updating resource Reconcile=fileStore Request.Name=mm-demo Request.Namespace=mattermost kind="&TypeMeta{Kind:,APIVersion:,}" name=mm-demo-minio namespace=mattermost patch="{\"status\":{\"availableReplicas\":0}}"
INFO[1419] [opr.controllers.Mattermost.health-check] mattermost pod not ready: pod mm-demo-ccbd46b9c-9nq8k is in state 'Pending' Request.Name=mm-demo Request.Namespace=mattermost
INFO[1419] [opr.controllers.Mattermost.health-check] mattermost pod not ready: pod mm-demo-ccbd46b9c-tp567 is in state 'Pending' Request.Name=mm-demo Request.Namespace=mattermost
ERRO[1419] [opr.controllers.Mattermost] Error checking Mattermost health Request.Name=mm-demo Request.Namespace=mattermost error="found 0 updated replicas, but wanted 2"
By using k9s I can see that mm-demo won't start. See below for photo.
Another variation of deployment
I also tried another variation by following all the steps from the first link (without the licences secret step). At this point the mattermost-operator is visible using k9s and isn't getting any errors. But unfortunately the mm-demo pod keeps crashing (empty logs, so I'm not seeing any errors or anything).
Anybody an idea?
As @Ashish faced the same issue, he fixed it by upgrading the resources.
Minikube will be able to run all the pods when started with: minikube start --kubernetes-version=v1.21.5 --memory 4000 --cpus 4
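For anyone wanting to verify they are hitting the same resource ceiling, a hedged sketch of the checks (the namespace comes from the question's logs; the exact event text can vary by Kubernetes version):

```shell
# Sketch: confirm that Pending pods are stuck on scheduling resources.
check_pending_pods() {
  kubectl -n mattermost get pods
  # The Events section typically shows 'FailedScheduling ... Insufficient cpu'
  # or 'Insufficient memory' when the cluster is under-provisioned.
  kubectl -n mattermost describe pods | grep -A 5 "Events:"
}

# Only run where a cluster is actually reachable:
if command -v kubectl >/dev/null 2>&1; then
  check_pending_pods || true
fi
```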

Kind Kubernetes cluster doesn't have container logs

I have installed a Kubernetes cluster using kind k8s, as it was easier to set up and run in my local VM. I also installed Docker separately. I then created a Docker image for a Spring Boot application I built for printing messages to stdout. It was then added to the kind k8s local registry. Using this newly created local image, I created a deployment in the Kubernetes cluster using the kubectl apply -f config.yaml CLI command. Using a similar method I've also deployed Fluentd, hoping to collect logs from /var/log/containers, which would be mounted into the Fluentd container.
I noticed the /var/log/containers/ symlink doesn't exist. However there is /var/lib/docker/containers/, and it has folders for some containers that were created in the past. None of the new container IDs seem to exist in /var/lib/docker/containers/ either.
I can see logs in the console when I run kubectl logs pod-name even though I'm unable to find the logs in the local storage.
Following the answer given by a Stack Overflow member in another thread, I was able to get some information, but not all.
I have confirmed Docker is configured with json logging driver by running the following command.
docker info | grep -i logging
When I run the following command (found in the thread given above) I can get the image ID.
kubectl get pod pod-name -ojsonpath='{.status.containerStatuses[0].containerID}'
However I cannot use it to inspect the container using docker inspect, as Docker is not aware of any such container, which I assume is due to the fact that it is managed by the kind control plane.
I'd appreciate it if the experts in the forum could assist in identifying where the logs are written and in recreating the /var/log/containers symbolic link to access the container logs.
It's absolutely normal that your locally installed Docker doesn't have the containers running in the pods created by kind Kubernetes. Let me explain why.
First, we need to figure out why kind Kubernetes actually needs Docker. It needs it not for running containers inside pods; it needs Docker to create a container which will be the Kubernetes node - and on this container you will have pods, which will have the containers you are looking for.
kind is a tool for running local Kubernetes clusters using Docker container “nodes”.
So basically the layers are: your VM -> a container hosted on your VM's Docker which is acting as a Kubernetes node -> on this container there are pods -> in those pods are containers.
In the kind quickstart section you can find more detailed information about the image used by kind:
This will bootstrap a Kubernetes cluster using a pre-built node image. Prebuilt images are hosted at kindest/node, but to find images suitable for a given release currently you should check the release notes for your given kind version (check with kind version) where you'll find a complete listing of images created for a kind release.
Back to your question, let's find missing containers!
On my local VM, I set up kind Kubernetes and installed the kubectl tool. Then I created an example nginx-deployment. By running kubectl get pods I can confirm the pods are working.
Let's find container which is acting as node by running docker ps -a:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1d2892110866 kindest/node:v1.21.1 "/usr/local/bin/entr…" 50 minutes ago Up 49 minutes 127.0.0.1:43207->6443/tcp kind-control-plane
Okay, now we can exec into it and find the containers. Note that the kindest/node image does not use Docker as its container runtime; it uses containerd, whose containers you can list with crictl.
Let's exec into the node: docker exec -it 1d2892110866 sh
# ls
bin boot dev etc home kind lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var
#
Now we are in the node - time to check if the containers are here:
# crictl ps -a
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
135c7ad17d096 295c7be079025 47 minutes ago Running nginx 0 4e5092cab08f6
ac3b725061e12 295c7be079025 47 minutes ago Running nginx 0 6ecda41b665da
a416c226aea6b 295c7be079025 47 minutes ago Running nginx 0 17aa5c42f3512
455c69da57446 296a6d5035e2d 57 minutes ago Running coredns 0 4ff408658e04a
d511d62e5294d e422121c9c5f9 57 minutes ago Running local-path-provisioner 0 86b8fcba9a3bf
116b22b4f1dcc 296a6d5035e2d 57 minutes ago Running coredns 0 9da6d9932c9e4
2ebb6d302014c 6de166512aa22 57 minutes ago Running kindnet-cni 0 6ef310d8e199a
2a5e0a2fbf2cc 0e124fb3c695b 57 minutes ago Running kube-proxy 0 54342daebcad8
1b141f55ce4b2 0369cf4303ffd 57 minutes ago Running etcd 0 32a405fa89f61
28c779bb79092 96a295389d472 57 minutes ago Running kube-controller-manager 0 2b1b556aeac42
852feaa08fcc3 94ffe308aeff9 57 minutes ago Running kube-apiserver 0 487e06bb5863a
36771dbacc50f 1248d2d503d37 58 minutes ago Running kube-scheduler 0 85ec6e38087b7
Here they are. You can also notice that there are other containers which are acting as Kubernetes components.
For further debugging containers I would suggest reading documentation about debugging Kubernetes nodes with crictl.
Please also note that on your local VM there is a file ~/.kube/config which has the information needed for kubectl to communicate between your VM and the Kubernetes cluster (in the case of kind Kubernetes - a Docker container running locally).
Hope it helps. Feel free to ask any questions.
EDIT - ADDED INFO ON HOW TO SET UP MOUNT POINTS
Answering the question from the comment about mounting a directory from the node to the local VM: we need to set up "Extra Mounts". Let's create the definition needed for kind Kubernetes:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  # add a mount from /path/to/my/files on the host to /files on the node
  extraMounts:
  - hostPath: /tmp/logs/
    containerPath: /var/log/pods
    # optional: if set, the mount is read-only.
    # default false
    readOnly: false
    # optional: if set, the mount needs SELinux relabeling.
    # default false
    selinuxRelabel: false
    # optional: set propagation mode (None, HostToContainer or Bidirectional)
    # see https://kubernetes.io/docs/concepts/storage/volumes/#mount-propagation
    # default None
    propagation: Bidirectional
Note that I'm using /var/log/pods instead of /var/log/containers/ - this is because on a cluster created by kind Kubernetes, the containers directory holds only symlinks to the logs in the pods directory.
Save this yaml, for example as cluster-with-extra-mount.yaml, then create a cluster using it (create the directory /tmp/logs before running this command!):
kind create cluster --config=/tmp/cluster-with-extra-mount.yaml
Then all containers logs will be in /tmp/logs on your VM.
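To sanity-check the mount once the cluster is up, something like the following sketch should work (the directory layout follows the standard /var/log/pods naming):

```shell
# Sketch: after 'kind create cluster', pod logs should appear on the VM.
verify_log_mount() {
  # one directory per pod, named <namespace>_<pod-name>_<pod-uid>
  ls /tmp/logs
  # the actual log files live one level deeper, one per container
  find /tmp/logs -name '*.log' | head
}

# Only meaningful once /tmp/logs exists and the cluster is running:
if [ -d /tmp/logs ]; then
  verify_log_mount || true
fi
```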

How and which components should be manually troubleshooted in Kubernetes using a shell script

I am trying to write a bash script to troubleshoot a Kubernetes cluster.
I have a Kubernetes cluster with one master node and a few minions.
I am trying to write a script to troubleshoot the cluster which finally outputs a detailed report with errors (if any exist) and successes.
Q.) What I really need to know is: what steps should I follow and what components should I be testing/checking during the troubleshoot? I need a list of procedures and steps (maybe in bullet form) on how to troubleshoot a Kubernetes cluster manually.
PS: I don't want to use Kubernetes' built-in testing mechanism; I need it to be manually tested/troubleshooted.
Can anyone here give me a good descriptive mechanism/steps?
You need to check the services below on the master to confirm that Kubernetes is functioning fine:
docker should be running
kubelet should be running (if you run control-plane components in containers)
etcd
kubernetes scheduler
kubernetes controller manager
kubernetes components health (kubectl get cs)
all services in kube-system should be running (kubectl get pods -n kube-system)
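The checklist above can be sketched as a bash script roughly like this (unit and component names are the common defaults; adjust for how your control plane is actually deployed):

```shell
#!/usr/bin/env bash
# Sketch of a manual troubleshooting script based on the checklist above.

check_unit() {
  # Report whether a systemd service is active.
  if systemctl is-active --quiet "$1"; then
    echo "OK    $1 is running"
  else
    echo "ERROR $1 is not running"
  fi
}

run_checks() {
  check_unit docker
  check_unit kubelet
  echo "--- component statuses (etcd, scheduler, controller-manager) ---"
  kubectl get componentstatuses
  echo "--- kube-system pods that are not Running ---"
  kubectl get pods -n kube-system --no-headers | awk '$3 != "Running"'
}

# Only attempt the checks when kubectl is available on this machine:
if command -v kubectl >/dev/null 2>&1; then
  run_checks || true
fi
```

From there you can redirect each check's output into a report file and count the ERROR lines to produce the final summary.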

Kubernetes event logs

As a part of debugging I need to track down events like pod creation and removal. In my Kubernetes setup I am using logging level 5.
The kube API server, scheduler, controller and etcd are running on the master node, and the minion nodes are running kubelet and Docker.
I am using journalctl to get K8s logs on the master node as well as on the worker nodes. On a worker node I can see logs from Docker and the kubelet. These logs contain events, as I would expect, as I create and destroy pods.
However on the master node I don't see any relevant logs which may indicate a pod creation or removal request being handled.
What other logs or methods can I use to get such logs from the Kubernetes master components (API server, controller, scheduler, etcd)?
I have checked the logs from the API server, controller, scheduler and etcd pods; they don't seem to have such information.
Thanks
System component logs:
There are two types of system components:
those that run in a container
and those that do not run in a container.
For example:
The Kubernetes scheduler and kube-proxy run in a container
The kubelet and container runtime, for example Docker, do not run in containers.
On machines with systemd, the kubelet and container runtime write to journald. If systemd is not present, they write to .log files in the /var/log directory. System components inside containers always write to the /var/log directory, bypassing the default logging mechanism. They use the klog logging library.
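Putting that split into commands, a sketch of where to read each kind of log (unit names are the typical defaults on a systemd machine):

```shell
# Sketch: where to read logs depending on how the component runs.
show_component_logs() {
  # kubelet and the container runtime log to journald on systemd machines:
  journalctl -u kubelet --no-pager -n 50
  journalctl -u docker --no-pager -n 50
  # containerized components write under /var/log (e.g. /var/log/pods):
  if [ -d /var/log/pods ]; then
    ls /var/log/pods
  fi
}

# Only attempt this on a machine that actually uses journald:
if command -v journalctl >/dev/null 2>&1; then
  show_component_logs || true
fi
```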
Master component logs:
Get them from the containers running on the master nodes.
$ docker ps | grep apiserver
d6af65a248f1 af20925d51a3 "kube-apiserver --ad…" 2 weeks ago Up 2 weeks k8s_kube-apiserver_kube-apiserver-minikube_kube-system_177a3eb80503eddadcdf8ec0423d04b9_0
5f0e6b33a29f k8s.gcr.io/pause-amd64:3.1 "/pause" 2 weeks ago Up 2 weeks k8s_POD_kube-apiserver-minikube_kube-system_177a3eb80503eddadcdf8ec0423d04b9_0
$ docker logs -f d6a
But this whole approach to logging is just for testing; in practice you should stream all the logs (app logs, container logs, cluster-level logs, everything) to a central logging system such as ELK or EFK.