I have a single-master Kubernetes (k8s) node that I need to back up and restore.
I googled this topic and found a solution -
https://elastisys.com/2018/12/10/backup-kubernetes-how-and-why/
Everything looked easy, so I followed the instructions and got a copy of the certificates and a snapshot of the etcd database.
But now I am not able to find kubeadm-config.yaml on my master server.
Where can I find this file?
During kubeadm init, kubeadm uploads the ClusterConfiguration object to your cluster in a ConfigMap called kubeadm-config in the kube-system namespace. You can get it from the ConfigMap and take a backup
kubectl get cm kubeadm-config -n kube-system -o yaml > kubeadm-config.yaml
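If you only need the ClusterConfiguration itself (for example, to feed back to kubeadm init --config later), you can extract just that field. A minimal sketch, assuming the ConfigMap has the usual data.ClusterConfiguration key:
kubectl get cm kubeadm-config -n kube-system -o jsonpath='{.data.ClusterConfiguration}' > kubeadm-config.yaml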
https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-config/
Related
As stated in the title, is it possible to find out a K8s cluster name from the API? I looked around the API and could not find it.
kubectl config current-context does the trick (it outputs a little bit more, like project name, region, etc., but it should give you the answer you need).
Unfortunately a cluster doesn't know its own name, or anything else that would uniquely identify it (K8s issue #44954). I wanted to know for helm issue #2055.
Update:
A common workaround is to create a ConfigMap containing the cluster name and read that when required (#2055 comment 1244537799).
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-info
  namespace: kube-system
data:
  cluster-name: foo
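Reading it back is then a one-liner; a sketch, assuming the ConfigMap above was created as shown:
kubectl -n kube-system get configmap cluster-info -o jsonpath="{.data['cluster-name']}"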
There is no way to get the name via the K8s API. But here is a one-liner in case the name you have in your .kube/config file is enough for you (if you downloaded it from your cloud provider, the names should match):
kubectl config view --minify -o jsonpath='{.clusters[].name}'
Note 1: The --minify flag is key here, so it outputs the name of your current context only. There are other similar answers posted here, but without --minify you will be listing the other contexts in your config, which might confuse you.
Note 2: The name in your .kube/config might not reflect the name in your cloud provider. If the file was autogenerated by the cloud provider, the names should match; if you configured it manually, you could have typed any name just for your local config.
Note 3: Do not rely on kubectl config current-context; this returns just the name of the context, not the name of the cluster.
I don't believe there is a k8s cluster name. This command can provide some useful information:
kubectl cluster-info
The question is not really well described. However, if this question is related to Google Container Engine, then as coreypobrien mentioned, the name of the cluster is stored in the custom metadata of the nodes. From inside a node, run the following command and the output will be the name of the cluster:
curl http://metadata/computeMetadata/v1/instance/attributes/cluster-name -H "Metadata-Flavor: Google"
If you specify your use case, I might be able to extend my answer to cover it.
The Kubernetes API doesn't know much about the GKE cluster name, but you can easily get the cluster name from the Google metadata server like this:
kubectl run curl --rm --restart=Never -it --image=appropriate/curl -- -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/attributes/cluster-name
It is the same as getting the current config, but the command below gives clearer output:
kubectl config view
This command checks all possible clusters; as you know, your .kube/config (KUBECONFIG) may contain multiple contexts:
kubectl config view -o jsonpath='{"Cluster name\tServer\n"}{range .clusters[*]}{.name}{"\t"}{.cluster.server}{"\n"}{end}'
And you will get output like
Cluster name Server
kubernetes https://localhost:6443
At least for Kubespray clusters, the following works for me:
kubectl config current-context | cut -d '#' -f2
For clusters that were installed using kubeadm, the configuration stored in the kubeadm-config configmap has the cluster name used when installing the cluster.
$ kubectl -n kube-system get configmap kubeadm-config -o yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubeadm-config
  namespace: kube-system
data:
  ClusterConfiguration: |
    clusterName: NAME_OF_CLUSTER
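If you only want the name itself, you can pull it out of the embedded ClusterConfiguration directly; a sketch, assuming the usual data.ClusterConfiguration key:
kubectl -n kube-system get configmap kubeadm-config -o jsonpath='{.data.ClusterConfiguration}' | grep clusterName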
For clusters that are using CoreDNS for their DNS, the "cluster name" from kubeadm is also used as the domain suffix.
$ kubectl -n kube-system get configmap coredns -o yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        kubernetes NAME_OF_CLUSTER.local in-addr.arpa ip6.arpa {
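To grab just the relevant Corefile line instead of dumping the whole ConfigMap, something like this should work (a sketch, assuming the standard data.Corefile key):
kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep kubernetes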
Well, this returns precisely one thing: the cluster name.
K8s:
kubectl config view -o jsonpath='{.clusters[].name}{"\n"}'
OpenShift:
oc config view -o jsonpath='{.clusters[].name}{"\n"}'
$ kubectl config get-clusters --> gets you the list of existing clusters
Using the Python k8s client. But this won't work with incluster_kubeconfig.
from kubernetes import config
cluster_context = config.kube_config.list_kube_config_contexts()
print (cluster_context)
([{'context': {'cluster': 'k01.test.use1.aws.platform.gov', 'user': 'k01-test'}, 'name': 'k01.test.use1.aws.platform.gov'}], {'context': {'cluster': 'k01.test.use1.aws.platform.gov', 'user': 'k01-test'}, 'name': 'k01.test.use1.aws.platform.gov'})
cluster_name = cluster_context[1]['context']['cluster']
print (cluster_name)
k01.test.use1.aws.platform.gov
Using kubectl command:
$ kubectl config get-clusters
NAME
kubernetes
kubectl config get-clusters
kubectl config get-contexts
There is a great tool called kubectx https://github.com/ahmetb/kubectx.
kubectx - lists all previously added clusters and highlights the currently used one. This is only one word to type instead of kubectl config current-context.
kubectx <cluster> - switches to a chosen cluster.
Moreover, this tool also comes with kubens, which does exactly the same for namespaces:
kubens - lists all namespaces and shows the current one,
kubens <namespace> - switches to a chosen namespace.
I have a Kubernetes 1.17 cluster, and I want to add some extraArgs and extraVolumes (like in https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/control-plane-flags/) to the apiserver. Usually, I update the manifest file /etc/kubernetes/manifests/kube-apiserver.yaml to apply my new config, then I update the kubeadm-config ConfigMap to keep this new configuration for the next Kubernetes upgrade (because static pod manifests are re-generated from this ConfigMap when upgrading).
Is it possible to update only the kubeadm-config ConfigMap and then apply the configuration with a command like kubeadm init phase control-plane apiserver? What are the risks?
That's the way to go to upgrade the static pod definitions of the control plane components, but instead of the init command I guess you meant upgrade.
The kubeadm upgrade command consults the current cluster configuration from the ConfigMap (kubectl -n kube-system get cm kubeadm-config -o yaml) each time before applying changes.
Talking about risks, you can try to envision them by studying the output of the kubeadm upgrade diff command, e.g.
kubeadm upgrade diff v1.20.4. More details are in this documentation. You could also try the --dry-run flag from this doc; it won't change any state, it will only display the actions that would be performed.
In addition, you could also read about --experimental-patches in this doc.
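For completeness, here is a rough sketch of the manual route the question describes (update the ConfigMap, then regenerate only the apiserver manifest). The temporary file path is illustrative, and you should verify the flags against your kubectl/kubeadm versions before relying on this:
kubectl -n kube-system get cm kubeadm-config -o jsonpath='{.data.ClusterConfiguration}' > /tmp/cluster-config.yaml
# edit /tmp/cluster-config.yaml (add apiServer.extraArgs / extraVolumes), then push the edited file back into the ConfigMap
kubectl -n kube-system create cm kubeadm-config --from-file=ClusterConfiguration=/tmp/cluster-config.yaml --dry-run=client -o yaml | kubectl apply -f -
# regenerate only the kube-apiserver static pod manifest from that configuration
kubeadm init phase control-plane apiserver --config /tmp/cluster-config.yaml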
If you mean changing the apiserver config in a live cluster, you can edit the static pod manifest /etc/kubernetes/manifests/kube-apiserver.yaml to apply it.
But you must be careful, because the old static pod will be killed before the new pod is ready.
I have followed the instructions from this blog post to set up a k3s cluster on a couple of Raspberry Pi 4s:
I'm now trying to get my hands dirty with Traefik as the frontend, but I'm having issues with the way it has been deployed as a 'HelmChart', I think.
From the k3s docs
It is also possible to deploy Helm charts. k3s supports a CRD
controller for installing charts. A YAML file specification can look
as following (example taken from
/var/lib/rancher/k3s/server/manifests/traefik.yaml):
So I have been starting k3s with the --no-deploy traefik option in order to add it manually with my own settings. I therefore apply a YAML like this:
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: traefik
  namespace: kube-system
spec:
  chart: https://%{KUBERNETES_API}%/static/charts/traefik-1.64.0.tgz
  set:
    rbac.enabled: "true"
    ssl.enabled: "true"
    kubernetes.ingressEndpoint.useDefaultPublishedService: "true"
  dashboard:
    enabled: true
    domain: "traefik.k3s1.local"
But when trying to iterate over settings to get it working the way I want, I'm having trouble tearing it down. If I try kubectl delete -f on this YAML, it just hangs indefinitely. And I can't seem to find a clean way to delete all the resources manually either.
I've been resorting to just reinstalling my entire cluster over and over, because I can't seem to clean up properly.
Is there a way to delete all the resources created by a chart like this without the helm cli (which I don't even have)?
Are you sure that kubectl delete -f is hanging?
I had the same issue as you and it seemed like kubectl delete -f was hanging, but it was really just taking a long time.
As far as I can tell, when you issue the kubectl delete -f, a pod in the kube-system namespace with a name of helm-delete-* should spin up and try to delete the resources deployed via Helm. You can get the full name of that pod by running kubectl -n kube-system get pods and finding the one named helm-delete-<name of yaml>-<id>. Then use the pod name to look at the logs using kubectl -n kube-system logs helm-delete-<name of yaml>-<id>.
An example of what I did was:
kubectl delete -f jenkins.yaml # seems to hang
kubectl -n kube-system get pods # look at pods in kube-system namespace
kubectl -n kube-system logs helm-delete-jenkins-wkjct # look at the delete logs
I see two options here:
Use the --now flag to delete your yaml file with minimal delay.
Use --grace-period=0 --force flags to force delete the resource.
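For example, applied to the traefik.yaml from the question (a sketch; use the force variant with care, since it skips graceful termination):
kubectl delete -f traefik.yaml --now
kubectl delete -f traefik.yaml --grace-period=0 --force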
There are other options, but you'll need the Helm CLI for them.
Please let me know if that helped.
I keep reading documentation that gives parameters for kube-proxy but does not explain where these parameters are supposed to be used. I create my cluster using az aks create with the azure-cli program, then I get credentials and use kubectl. So far everything I've done has involved YAML for services, deployments, and such, but I can't figure out where all this kube-proxy stuff fits in.
I've googled for days. I've opened question issues on github with AKS. I've asked on the kubernetes slack channel, but nobody has responded.
kube-proxy on all your Kubernetes nodes runs as a Kubernetes DaemonSet, and its configuration is stored in a Kubernetes ConfigMap. To make any changes or add/remove options, you will have to edit the kube-proxy DaemonSet or ConfigMap in the kube-system namespace.
$ kubectl -n kube-system edit daemonset kube-proxy
or
$ kubectl -n kube-system edit configmap kube-proxy
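To just inspect the current settings before editing anything, you can dump the ConfigMap first (assuming your AKS cluster uses the standard kube-proxy ConfigMap name in kube-system):
kubectl -n kube-system get configmap kube-proxy -o yaml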
For a reference on the kube-proxy command line options you can refer to here.