I upgraded a self-hosted lab environment from Kubernetes 1.20.1 to 1.21.14.
I ran the command:
sudo kubeadm upgrade plan v1.21.14
Then I was asked to provide the current Kubernetes cluster configuration, either from a ConfigMap or from a file.
I'm trying to figure out:
Is it possible to recover the Kubernetes cluster config YAML file if I no longer have the file I used to initialize the cluster?
It also turned out that the configuration doesn't exist in the kubeadm-config ConfigMap.
The above command output was:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[upgrade/config] In order to upgrade, a ConfigMap called "kubeadm-config" in the kube-system namespace must exist.
[upgrade/config] Without this information, 'kubeadm upgrade' won't know how to configure your upgraded cluster.
[upgrade/config] Next steps:
- OPTION 1: Run 'kubeadm config upload from-flags' and specify the same CLI arguments you passed to 'kubeadm init' when you created your control-plane.
- OPTION 2: Run 'kubeadm config upload from-file' and specify the same config file you passed to 'kubeadm init' when you created your control-plane.
- OPTION 3: Pass a config file to 'kubeadm upgrade' using the --config flag.
[upgrade/config] FATAL: the ConfigMap "kubeadm-config" in the kube-system namespace used for getting configuration information was not found
To see the stack trace of this error execute with --v=5 or higher
I tried:
kubeadm config view
The output:
Command "view" is deprecated, This command is deprecated and will be removed in a future release, please use 'kubectl get cm -o yaml -n kube-system kubeadm-config' to get the kubeadm config directly.
configmaps "kubeadm-config" not found
I ran:
kubectl -n kube-system get cm kubeadm-config -o yaml
The output was:
Error from server (NotFound): configmaps "kubeadm-config" not found
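One way to get a usable config file back, if it was never uploaded to the cluster, is to rebuild a ClusterConfiguration and pass it via --config (OPTION 3 above). This is only a sketch, assuming the cluster was initialized with mostly default settings; any non-default options used at init time would have to be added back by hand, and kubeadm.yaml is just a placeholder name:
kubeadm config print init-defaults > kubeadm.yaml
# edit kubeadm.yaml: set kubernetesVersion, controlPlaneEndpoint, networking.podSubnet, etc.
# to match how the cluster was actually initialized
sudo kubeadm upgrade plan v1.21.14 --config kubeadm.yaml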
Related
When I run any kubectl command I get the following WARNING:
W0517 14:33:54.147340 46871 gcp.go:120] WARNING: the gcp auth plugin is deprecated in v1.22+, unavailable in v1.25+; use gcloud instead.
To learn more, consult https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
I have followed the instructions in the link several times, but the WARNING keeps appearing, which makes the kubectl output hard to read.
OS:
cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=22.04
DISTRIB_CODENAME=jammy
DISTRIB_DESCRIPTION="Ubuntu 22.04 LTS"
kubectl version:
Client Version: v1.24.0
Kustomize Version: v4.5.4
gke-gcloud-auth-plugin:
Kubernetes v1.23.0-alpha+66064c62c6c23110c7a93faca5fba668018df732
gcloud version:
Google Cloud SDK 385.0.0
alpha 2022.05.06
beta 2022.05.06
bq 2.0.74
bundled-python3-unix 3.9.12
core 2022.05.06
gsutil 5.10
I "login" with:
gcloud init
and then:
gcloud container clusters get-credentials cluster_name --region my-region
finally:
myuser#mymachine:/$ k get pods -n madeupns
W0517 14:50:10.570103 50345 gcp.go:120] WARNING: the gcp auth plugin is deprecated in v1.22+, unavailable in v1.25+; use gcloud instead.
To learn more, consult https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
No resources found in madeupns namespace.
How can I remove the WARNING or fix the problem?
Removing my .kube/config and re-running get-credentials didn't work.
I fixed this problem by adding the correct export to .bashrc:
export USE_GKE_GCLOUD_AUTH_PLUGIN=True
After sourcing .bashrc with . ~/.bashrc and reloading cluster config with:
gcloud container clusters get-credentials clustername
the warning disappeared:
user#laptop:/$ k get svc -A
NAMESPACE     NAME                   TYPE        CLUSTER-IP     EXTERNAL-IP
kube-system   default-http-backend   NodePort    10.10.13.157   <none>
kube-system   kube-dns               ClusterIP   10.10.0.10     <none>
kube-system   kube-dns-upstream      ClusterIP   10.10.13.92    <none>
kube-system   metrics-server         ClusterIP   10.10.2.191    <none>
I got a similar issue while connecting to a fresh Kubernetes cluster running version v1.22.10-gke.600:
gcloud container clusters get-credentials my-cluster --zone europe-west6-b --project project
and got the error below; it seems that for the newer version this has now become an error rather than a warning:
Fetching cluster endpoint and auth data.
CRITICAL: ACTION REQUIRED: gke-gcloud-auth-plugin, which is needed for continued use of kubectl, was not found or is not executable. Install gke-gcloud-auth-plugin for use with kubectl by following https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
The fix that worked for me:
gcloud components install gke-gcloud-auth-plugin
You need to do the following things to avoid this warning message now and to avoid errors in the future.
Add the correct export to .bashrc. (I am using .zshrc instead of .bashrc, so I added the export to .zshrc.)
export USE_GKE_GCLOUD_AUTH_PLUGIN=True
Reload .bashrc
source ~/.bashrc
Update gcloud to the latest version.
gcloud components update
Run the following command, replacing CLUSTER_NAME with the name of your cluster. This will force the kubeconfig for this cluster to be updated to the client-go credential plugin configuration.
gcloud container clusters get-credentials CLUSTER_NAME
Check the kubeconfig file by entering the following command. You should now see the change (gke-gcloud-auth-plugin) in the users section of the kubeconfig file in your home directory:
cat ~/.kube/config
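For reference, after switching to the plugin, the users entry in ~/.kube/config should contain an exec section that points at gke-gcloud-auth-plugin, roughly like the sketch below (the user name is a placeholder and will differ per project/region/cluster):
users:
- name: gke_my-project_my-region_my-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: gke-gcloud-auth-plugin
      installHint: Install gke-gcloud-auth-plugin for use with kubectl by following
        https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
      provideClusterInfo: true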
The reason behind this is:
Starting with v1.26, kubectl will no longer have a built-in authentication mechanism for GKE, so GKE users will need to download and use a separate authentication plugin to generate GKE-specific tokens for authenticating to GKE. To get more details please read here.
I have a Kubernetes cluster 1.17, and I want to add some extraArgs and extraVolumes (like in https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/control-plane-flags/) in the apiserver. Usually, I update the manifest file /etc/kubernetes/manifests/kube-apiserver.yaml to apply my new config, then I update the kubeadm-config ConfigMap to keep this new configuration for the next Kubernetes upgrade (because static pod manifests are re-generated from this ConfigMap when upgrading).
Is it possible to update only the kubeadm-config ConfigMap and then apply the configuration with a command like kubeadm init phase control-plane apiserver? What are the risks?
That's the way to go to update the static pod definitions of the control-plane components, but instead of the init command I guess you meant upgrade.
The kubeadm upgrade command consults the current cluster configuration from the ConfigMap (kubectl -n kube-system get cm kubeadm-config -o yaml) each time before applying changes.
As for the risks, you can try to anticipate them by studying the output of the kubeadm upgrade diff command, e.g.
kubeadm upgrade diff v1.20.4. More details are in this documentation. You could also use the --dry-run flag from this doc; it won't change any state, it will only display the actions that would be performed.
In addition, you could also read about --experimental-patches in these docs.
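For illustration, the extraArgs/extraVolumes shape from the control-plane-flags page linked in the question looks roughly like this inside a ClusterConfiguration (kubeadm.k8s.io/v1beta2 is the config API version used around 1.17; the audit-log values are just example settings, not something from the question):
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.17.0
apiServer:
  extraArgs:
    audit-log-path: /var/log/kubernetes/audit.log
  extraVolumes:
  - name: audit-logs
    hostPath: /var/log/kubernetes
    mountPath: /var/log/kubernetes
    pathType: DirectoryOrCreate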
If you mean changing the apiserver config in a live cluster, you can edit /etc/kubernetes/manifests/kube-apiserver.yaml to apply it.
But you must be careful, because the old static pod will be killed before the new pod is ready.
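If the workflow from the question is what you are after (update the stored config, then regenerate only the apiserver manifest from it), a rough sketch would be the following, assuming a local kubeadm.yaml kept in sync with the kubeadm-config ConfigMap; as noted above, the kubelet restarts the apiserver as soon as the manifest changes, so expect a brief API outage:
# regenerate only the kube-apiserver static pod manifest from the config file
sudo kubeadm init phase control-plane apiserver --config kubeadm.yaml
# the kubelet picks up the regenerated /etc/kubernetes/manifests/kube-apiserver.yaml automatically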
When I use the kubectl run command, instead of creating a deployment it creates pod/selenium-node-chrome, and as a result I am unable to scale selenium-node-chrome using the --replicas flag.
PS C:\Users\Test> kubectl run selenium-node-chrome --image selenium/node-chrome:latest --env="HUB_PORT_4444_TCP_ADDR=selenium-hub" --env="HUB_PORT_4444_TCP_PORT=4444"
pod/selenium-node-chrome created
PS C:\Users\Test> kubectl scale deployment selenium-node-chrome --replicas=5
Error from server (NotFound): deployments.extensions "selenium-node-chrome" not found
The video tutorial I followed successfully created the deployment "selenium-node-chrome" after running the same command. I am new to Kubernetes and would appreciate any help. Thanks.
You should use a generator
kubectl run selenium-node-chrome \
--image selenium/node-chrome:latest \
--env="HUB_PORT_4444_TCP_ADDR=selenium-hub" \
--env="HUB_PORT_4444_TCP_PORT=4444" \
--generator=deployment/apps.v1beta1
https://v1-17.docs.kubernetes.io/docs/reference/kubectl/conventions/#generators
All generators were deprecated in Kubernetes version 1.18. From the docs here:
Note: All kubectl generators are deprecated. See the Kubernetes v1.17
documentation for a list of generators and how they were used.
You can use kubectl create deployment my-dep --image=busybox to create a deployment.
To generate a YAML file instead, use kubectl create deployment my-dep --image=busybox --dry-run=client -o yaml > deployment.yaml, then edit the YAML file to add env variables or any other details, and apply it via kubectl apply -f deployment.yaml.
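Applied to the selenium example from the question, a minimal sketch (using kubectl set env to add the variables after creation) could look like this:
kubectl create deployment selenium-node-chrome --image=selenium/node-chrome:latest
kubectl set env deployment/selenium-node-chrome HUB_PORT_4444_TCP_ADDR=selenium-hub HUB_PORT_4444_TCP_PORT=4444
kubectl scale deployment selenium-node-chrome --replicas=5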
There is a k8s cluster with a single master node; I need to back it up and be able to restore it.
I googled this topic and found a solution -
https://elastisys.com/2018/12/10/backup-kubernetes-how-and-why/
Everything looked easy, so I followed the instructions and got a copy of the certificates and a snapshot of the etcd database.
But in the end, I was not able to find kubeadm-config.yaml on my master server.
Where to find this file?
During kubeadm init, kubeadm uploads the ClusterConfiguration object to your cluster in a ConfigMap called kubeadm-config in the kube-system namespace. You can get it from the ConfigMap and take a backup
kubectl get cm kubeadm-config -n kube-system -o yaml > kubeadm-config.yaml
https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-config/
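If you want a file that can be fed straight back to kubeadm (e.g. kubeadm init --config) on restore, you can also extract only the ClusterConfiguration document from the ConfigMap. A sketch, assuming the default data key used by kubeadm:
kubectl -n kube-system get cm kubeadm-config -o jsonpath='{.data.ClusterConfiguration}' > kubeadm-config.yaml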
My CoreDNS Corefile got corrupted somehow, and now I need to regenerate it or reset it to its default installed value. How do I do that? I've tried copying and pasting a locally saved version of the file via kubectl edit cm coredns -n kube-system, but I get validation errors:
error: configmaps "coredns" is invalid
A copy of your changes has been stored to "/tmp/kubectl-edit-suzaq.yaml"
error: Edit cancelled, no valid changes were saved.
Directly editing the ConfigMap like this tends to produce that error.
What can you do?
Before you run anything, please take a backup:
kubectl -n kube-system get configmap coredns -o yaml > coredns.yaml
Option 1: force-apply it.
kubectl apply --force -f /tmp/kubectl-edit-suzaq.yaml
In most cases this will apply the latest settings successfully. If it fails, go through the error, update the file /tmp/kubectl-edit-suzaq.yaml and force-apply it again.
Option 2: delete and apply again.
kubectl -n kube-system get configmap coredns -o yaml > coredns.yaml
# take a backup if you are not 100% sure the change will work
cp coredns.yaml coredns.yaml.orig
# update the change in coredns.yaml
# delete coredns
kubectl -n kube-system delete configmap coredns
# apply new change
kubectl apply -f coredns.yaml
Be careful: the steps above will cause an outage. If you work in a production environment, you should consider backing up all Kubernetes settings before making this change.
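If the goal is to reset the Corefile to its default installed value rather than repair a broken copy, another option on a kubeadm-managed cluster is to let kubeadm regenerate the CoreDNS addon, which recreates the coredns ConfigMap with the stock Corefile for the installed version. A sketch, assuming a kubeadm.yaml that matches the kubeadm-config ConfigMap:
# recreates the CoreDNS Deployment, Service and ConfigMap from kubeadm defaults
sudo kubeadm init phase addon coredns --config kubeadm.yaml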