After connecting my GitLab repo to my self-hosted Kubernetes cluster via Operations > Kubernetes, I want to install Helm Tiller via the GUI, but I get:
Something went wrong while installing Helm Tiller
Kubernetes error: configmaps "values-content-configuration-helm" already exists
There are no pods running on the cluster and kubectl version returns:
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:17:28Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:08:19Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Update:
The output of kubectl get cm --all-namespaces:
NAMESPACE             NAME                                DATA   AGE
gitlab-managed-apps   values-content-configuration-helm   3      7d
...
Deleting this namespace solves the issue!
Find the values-content-configuration-helm ConfigMap in the gitlab-managed-apps namespace with kubectl get cm --all-namespaces:
NAMESPACE             NAME                                DATA   AGE
gitlab-managed-apps   values-content-configuration-helm   3      7d
...
Deleting this namespace solves the issue:
kubectl delete namespace gitlab-managed-apps
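If you would rather not delete the whole namespace, removing just the conflicting ConfigMap named in the error may be enough (a smaller-hammer variant of the same fix, untested here):
kubectl -n gitlab-managed-apps delete configmap values-content-configuration-helm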
Thanks to Lev Kuznetsov.
Related
I want to know which nodes the pods in a K8s cluster are running on. I am working with a 3-node K8s cluster that has 2 specific pods among other pods.
How can I see which pod is on which node using kubectl?
When I use kubectl get pods, I get the following:
NAME                    READY   STATUS    RESTARTS   AGE
pod1-7485f58945-zq8tg   1/1     Running   2          2d
pod2-64c4564b5c-8rh5x   1/1     Running   0          2d1h
Following is the version of K8s (kubectl version) that I am using:
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.0", GitCommit:"af46c47ce925f4c4ad5cc8d1fca46c7b77d13b38", GitTreeState:"clean", BuildDate:"2020-12-08T17:59:43Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.13", GitCommit:"53c7b65d4531a749cd3a7004c5212d23daa044a9", GitTreeState:"clean", BuildDate:"2021-07-15T20:53:19Z", GoVersion:"go1.15.14", Compiler:"gc", Platform:"linux/amd64"}
Try kubectl get pods -o wide.
You can get more details in this very detailed Kubernetes cheatsheet.
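With -o wide the listing gains IP and NODE columns. A sketch of the output, with hypothetical node names and pod IPs:
NAME                    READY   STATUS    RESTARTS   AGE    IP           NODE    NOMINATED NODE   READINESS GATES
pod1-7485f58945-zq8tg   1/1     Running   2          2d     10.244.1.5   node1   <none>           <none>
pod2-64c4564b5c-8rh5x   1/1     Running   0          2d1h   10.244.2.7   node2   <none>           <none>
If you only want the pod-to-node mapping, jsonpath works too: kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.nodeName}{"\n"}{end}'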
I am following this example.
When I run the following command, I get this error:
➜ kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.10
Error from server (NotFound): the server could not find the requested resource
I just installed fresh minikube and kubectl; the details are below:
Development/tools/k8s
➜ kubectl get nodes
NAME       STATUS   AGE
minikube   Ready    2m
Development/tools/k8s
➜ kubectl get pods
No resources found.
Development/tools/k8s
➜ kubectl get rc --all-namespaces
No resources found.
Development/tools/k8s
➜ kubectl cluster-info
Kubernetes master is running at https://192.168.99.101:8443
KubeDNS is running at https://192.168.99.101:8443/api/v1/proxy/namespaces/kube-system/services/kube-dns
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
➜ kubectl version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.3", GitCommit:"029c3a408176b55c30846f0faedf56aae5992e9b", GitTreeState:"clean", BuildDate:"2017-02-15T06:40:50Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3", GitCommit:"06ad960bfd03b39c8310aaf92d1e7c12ce618213", GitTreeState:"clean", BuildDate:"2020-02-11T18:07:13Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}
➜ minikube version
minikube version: v1.7.3
commit: 436667c819c324e35d7e839f8116b968a2d0a3ff
What am I doing wrong?
Your kubectl is version 1.5, which is very old. It creates Deployments through the extensions/v1beta1 API, which was removed from the server in Kubernetes 1.16, so your v1.17 server reports NotFound. Upgrade kubectl to match the server.
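A minimal way to do that (a sketch for linux/amd64, assuming the historical release bucket is still reachable):
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/kubectl
kubectl version   # client and server should now agree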
When I run this command
kubectl get deployments
on my Linux Ubuntu 18 machine, I get different output than expected (according to the documentation).
Expected (per the documentation): columns NAME, DESIRED, CURRENT, UP-TO-DATE, AVAILABLE, AGE
Actual: columns NAME, READY, UP-TO-DATE, AVAILABLE, AGE (READY shows 4/5)
Of course, I am not talking about the values; I am talking about the names of the column headers.
[EDIT]
My k8s version:
The documentation is just showing an old output format. The newer output you're getting contains all the same information; the READY column is a combination of the old DESIRED and CURRENT columns.
It shows 4/5 in your output to indicate 4 pods ready out of 5 desired.
Hope this helps.
The output depends on the client version. Let's check it against the same server:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"6e937839ac04a38cac63e6a7a306c5d035fe7b0a", GitTreeState:"clean", BuildDate:"2017-09-28T22:57:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:36:19Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
$ kubectl get deployments kube-dns -n kube-system
NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-dns   2         2         2            2           10d
Switching the kubectl version changes the output:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:44:30Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:36:19Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
$ kubectl get deployments kube-dns -n kube-system
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
kube-dns   2/2     2            2           10d
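If you want the underlying numbers regardless of which client formats them, you can read the Deployment status fields directly; a sketch using apps/v1 field names:
kubectl get deployment kube-dns -n kube-system -o jsonpath='desired={.spec.replicas} ready={.status.readyReplicas}{"\n"}'
The READY column in the new format is simply readyReplicas out of spec.replicas printed as one value.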
My issue is that I cannot connect our machines (master and slaves).
My join command is:
kubeadm join xxx.xxx.xxx.xxx:6443 --token a72x22.ofmqdjyzi7ot4l70 --discovery-token-ca-cert-hash sha256:3cfd9ddb1e655ef2172c12d914e2bb001434cc4c8a756919a7a6a9f0603e3131
I have executed:
echo 1 >/proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 >/proc/sys/net/ipv4/ip_forward
swapoff -a
and I got the error
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
error execution phase kubelet-start: configmaps "kubelet-config-1.15" is forbidden: User "system:bootstrap:a61x22" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
Master kubectl version:
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:40:16Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:36:19Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Slaves kubectl version:
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:40:16Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Maybe my issue is related to the host or port?
How can I solve this?
Check whether the ConfigMap "kubelet-config-1.15" exists with the command below.
kubectl -n kube-system get configmap kubelet-config-1.15
Your master is probably at version 1.14 while your new node installed kubelet 1.15.
In that case the ConfigMap kubelet-config-1.15 doesn't exist; there is a kubelet-config-1.14 instead.
Upgrade your master node to v1.15, or install Kubernetes v1.14 on your worker node.
You can see which version your nodes are running with
kubectl get nodes
[root@master /]# k get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   32d   v1.14.0
node6    Ready    <none>   32d   v1.14.2
nodo2    Ready    <none>   32d   v1.14.2
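For the "install v1.14 on the worker" route, a sketch for Debian/Ubuntu using the Kubernetes apt repository (the exact package revision 1.14.3-00 is an assumption; match whatever kubectl get nodes reports for your cluster):
sudo apt-get update
sudo apt-get install -y kubelet=1.14.3-00 kubeadm=1.14.3-00 kubectl=1.14.3-00
sudo apt-mark hold kubelet kubeadm kubectl   # prevent an accidental upgrade back to 1.15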
I am getting this error from kubectl describe nodes nodename. A Google search turned up nothing useful; what does this mean?
Failed to update Node Allocatable Limits "": failed to set supported cgroup subsystems for cgroup : Failed to set config for supported subsystems : failed to write 3783778304 to memory.limit_in_bytes: write /rootfs/sys/fs/cgroup/memory/memory.limit_in_bytes: invalid argument
Do I need to change some kernel settings using sysctl ?
[iahmad@lxplus000 ~]$ kubectl --version
Kubernetes v1.5.2
[iahmad@lxplus000 ~]$ kubectl version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"269f928217957e7126dc87e6adfa82242bfe5b1e", GitTreeState:"clean", BuildDate:"2017-07-03T15:31:10Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.6", GitCommit:"4bc5e7f9a6c25dc4c03d4d656f2cefd21540e28c", GitTreeState:"clean", BuildDate:"2017-09-18T08:52:26Z", GoVersion:"go1.9", Compiler:"gc", Platform:"linux/amd64"}
It depends on the kubectl/Kubernetes version: this was seen (and fixed) in kubernetes issue 42701.
Kubernetes 1.6 should have the patch.
If this is not the same bug, the error message was also seen in issue 29166, where the reporter wrote:
"I just forgot to turn disk.uuid back on after recreating my new VMs!"
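For reference, that is the VMware disk UUID flag; assuming a vSphere/VMware VM, it is enabled in the VM's .vmx configuration (or via the vSphere UI):
disk.EnableUUID = "TRUE"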