kube-controller-manager is not logging details - kubernetes

I have an issue setting up persistent volumes for GitLab on my bare-metal Kubernetes cluster:
Operation for "provision-gitlab/repo-data-gitlab-gitaly-0[3f758288-290c-4d9c-a084-5506f58a22d7]" failed. No retries permitted until 2020-11-28 11:55:56.533202624 +0000 UTC m=+305.008238514 (durationBeforeRetry 4s). Error: "failed to create volume: failed to create volume: see kube-controller-manager.log for details"
The problem is that this file doesn't exist anywhere, and I cannot get any more details about the problem, even after adapting the configuration:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-controller-manager
    tier: control-plane
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-controller-manager
    - --allocate-node-cidrs=true
    - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --bind-address=127.0.0.1
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --cluster-cidr=192.168.0.0/16
    - --cluster-name=kubernetes
    - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
    - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
    - --controllers=*,bootstrapsigner,tokencleaner
    - --kubeconfig=/etc/kubernetes/controller-manager.conf
    - --leader-elect=true
    - --node-cidr-mask-size=24
    - --port=0
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --root-ca-file=/etc/kubernetes/pki/ca.crt
    - --service-account-private-key-file=/etc/kubernetes/pki/sa.key
    - --service-cluster-ip-range=10.96.0.0/12
    - --use-service-account-credentials=true
    - --log-dir=/var/log/
    - --log-file=kube-controller-manager.log
    - --logtostderr=false
    image: k8s.gcr.io/kube-controller-manager:v1.19.4
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10257
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: kube-controller-manager
    resources:
      requests:
        cpu: 200m
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10257
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /var/log/kube-controller-manager.log
      name: logfile
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/ca-certificates
      name: etc-ca-certificates
      readOnly: true
    - mountPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
      name: flexvolume-dir
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /etc/kubernetes/controller-manager.conf
      name: kubeconfig
      readOnly: true
    - mountPath: /usr/local/share/ca-certificates
      name: usr-local-share-ca-certificates
      readOnly: true
    - mountPath: /usr/share/ca-certificates
      name: usr-share-ca-certificates
      readOnly: true
  hostNetwork: true
  priorityClassName: system-node-critical
  volumes:
  - hostPath:
      path: /var/log/kube-controller-manager.log
    name: logfile
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/ca-certificates
      type: DirectoryOrCreate
    name: etc-ca-certificates
  - hostPath:
      path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
      type: DirectoryOrCreate
    name: flexvolume-dir
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /etc/kubernetes/controller-manager.conf
      type: FileOrCreate
    name: kubeconfig
  - hostPath:
      path: /usr/local/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-local-share-ca-certificates
  - hostPath:
      path: /usr/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-share-ca-certificates
status: {}
I tried creating the file by hand and changing its permissions, but the pod is still not logging to it.

Control plane components use the klog library for logging, which is, for the moment, rather poorly documented.
As it turns out, --log-dir and --log-file are mutually exclusive.
## it should be either --log-dir
    - --log-dir=/var/log/kube
...
    volumeMounts:
    - mountPath: /var/log/kube
      name: log
...
  volumes:
  - hostPath:
      path: /var/log/kube
      type: DirectoryOrCreate
    name: log

## or --log-file
    - --log-file=/var/log/kube-controller-manager.log
...
    volumeMounts:
    - mountPath: /var/log/kube-controller-manager.log
      name: log
...
  volumes:
  - hostPath:
      path: /var/log/kube-controller-manager.log
      type: FileOrCreate
    name: log
With --log-dir the component will write each log level into a separate file inside the given directory, so you'll get a set of files with names like kube-controller-manager.INFO.log.
With --log-file you'll have a single file, as expected.
Don't forget to specify type: FileOrCreate in your volume definition, otherwise a directory will be created by default.
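If logs still don't show up, a quick sanity check (a sketch assuming the --log-dir variant above; adjust paths for --log-file) is to look at the files on the host and confirm the running static pod actually picked up the new flags:
# on the control-plane node
ls -l /var/log/kube/        # expect files like kube-controller-manager.INFO.log
tail -f /var/log/kube/kube-controller-manager.INFO.log

# confirm the flags on the running pod
kubectl -n kube-system get pod -l component=kube-controller-manager \
  -o jsonpath='{.items[0].spec.containers[0].command}'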

Related

How to parse kubernetes pattern log with Filebeat

I've got a Kubernetes cluster with the ECK operator deployed. I also deployed Filebeat to the cluster. Here's the manifest:
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: filebeat
  namespace: logging-dev
spec:
  type: filebeat
  version: 8.2.0
  elasticsearchRef:
    name: elastic-logging-dev
  kibanaRef:
    name: kibana
  config:
    filebeat:
      autodiscover:
        providers:
        - type: kubernetes
          node: ${NODE_NAME}
          hints:
            enabled: true
            default_config:
              type: container
              paths:
              - /var/log/containers/*${data.kubernetes.container.id}.log
    processors:
    - add_cloud_metadata: { }
    - add_host_metadata: { }
  daemonSet:
    podTemplate:
      metadata:
        annotations:
          co.elastic.logs/enabled: "false"
      spec:
        serviceAccountName: filebeat
        automountServiceAccountToken: true
        terminationGracePeriodSeconds: 30
        dnsPolicy: ClusterFirstWithHostNet
        hostNetwork: true # Allows to provide richer host metadata
        tolerations:
        - key: node-role.kubernetes.io/control-plane
          effect: NoSchedule
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
        containers:
        - name: filebeat
          securityContext:
            runAsUser: 0
          volumeMounts:
          - name: varlogcontainers
            mountPath: /var/log/containers
          - name: varlogpods
            mountPath: /var/log/pods
          - name: varlibdockercontainers
            mountPath: /var/lib/docker/containers
          env:
          - name: NODE_NAME
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName
          resources:
            requests:
              memory: 500Mi
              cpu: 100m
            limits:
              memory: 500Mi
              cpu: 200m
        volumes:
        - name: varlogcontainers
          hostPath:
            path: /var/log/containers
        - name: varlogpods
          hostPath:
            path: /var/log/pods
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
It's working very well, but I also want to parse Kubernetes control-plane logs, e.g.:
E0819 18:57:51.309161 1 watcher.go:327] failed to prepare current and previous objects: conversion webhook for minio.min.io/v2, Kind=Tenant failed: Post "https://operator.minio-operator.svc:4222/webhook/v1/crd-conversion?timeout=30s": dial tcp 10.233.8.119:4222: connect: connection refused
How can I do that?
In Fluentd it's quite simple:
<filter kubernetes.var.log.containers.kube-apiserver-*_kube-system_*.log>
  @type parser
  key_name log
  format /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)$/
  time_format %m%d %H:%M:%S.%N
  types pid:integer
  reserve_data true
  remove_key_name_field false
</filter>
But I cannot find any example, tutorial or whatever how to do this with Filebeat.
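I couldn't find an official Filebeat recipe for this either. One hedged sketch is to do the parsing on the Elasticsearch side with an ingest pipeline that reuses essentially the same klog regex; the endpoint, credentials, pipeline name (klog-pipeline) and klog_* field names below are placeholders, not something from the question:
# create an ingest pipeline that splits klog lines; adjust the URL/credentials
# for however you reach your ECK-managed Elasticsearch
curl -k -u "elastic:$ELASTIC_PASSWORD" -X PUT \
  "https://elastic-logging-dev-es-http.logging-dev:9200/_ingest/pipeline/klog-pipeline" \
  -H 'Content-Type: application/json' -d '{
    "description": "Parse klog-formatted control plane logs",
    "processors": [
      {
        "grok": {
          "field": "message",
          "patterns": [
            "^(?<klog_severity>[IWEF])(?<klog_date>\\d{4}) (?<klog_time>[^ ]+)\\s+(?<klog_pid>\\d+)\\s+(?<klog_source>[^ \\]]+)\\] (?<klog_message>.*)$"
          ],
          "ignore_failure": true
        }
      }
    ]
  }'
Then route the control-plane container logs through it, for example by annotating those pods with co.elastic.logs/pipeline: klog-pipeline so Filebeat's hints-based autodiscover picks it up, or by setting pipeline in hints.default_config.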

kubectl get componentstatus shows unhealthy

I've finished setting up my HA k8s cluster using kubeadm.
Everything seems to be working fine, but after checking with the command kubectl get componentstatus I get:
NAME STATUS MESSAGE
scheduler Unhealthy Get http://127.0.0.1:10251/healthz: dial tcp 12
controller-manager Unhealthy Get http://127.0.0.1:10252/healthz: dial tcp 12
etcd-0 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
I see that the manifests for the scheduler and the controller manager use different ports for their health checks:
kube-scheduler.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-scheduler
    tier: control-plane
  name: kube-scheduler
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-scheduler
    - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
    - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
    - --bind-address=127.0.0.1
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=true
    - --port=0
    image: k8s.gcr.io/kube-scheduler:v1.18.6
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10259
        scheme: HTTPS
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-scheduler
    resources:
      requests:
        cpu: 100m
    volumeMounts:
    - mountPath: /etc/kubernetes/scheduler.conf
      name: kubeconfig
      readOnly: true
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /etc/kubernetes/scheduler.conf
      type: FileOrCreate
    name: kubeconfig
status: {}
kube-controller-manager.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-controller-manager
    tier: control-plane
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-controller-manager
    - --allocate-node-cidrs=true
    - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --bind-address=127.0.0.1
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --cluster-cidr=10.244.0.0/16
    - --cluster-name=kubernetes
    - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
    - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
    - --controllers=*,bootstrapsigner,tokencleaner
    - --kubeconfig=/etc/kubernetes/controller-manager.conf
    - --leader-elect=true
    - --node-cidr-mask-size=24
    - --port=0
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --root-ca-file=/etc/kubernetes/pki/ca.crt
    - --service-account-private-key-file=/etc/kubernetes/pki/sa.key
    - --service-cluster-ip-range=10.96.0.0/12
    - --use-service-account-credentials=true
    image: k8s.gcr.io/kube-controller-manager:v1.18.6
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10257
        scheme: HTTPS
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-controller-manager
    resources:
      requests:
        cpu: 200m
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/pki
      name: etc-pki
      readOnly: true
    - mountPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
      name: flexvolume-dir
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /etc/kubernetes/controller-manager.conf
      name: kubeconfig
      readOnly: true
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/pki
      type: DirectoryOrCreate
    name: etc-pki
  - hostPath:
      path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
      type: DirectoryOrCreate
    name: flexvolume-dir
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /etc/kubernetes/controller-manager.conf
      type: FileOrCreate
    name: kubeconfig
status: {}
So these use ports 10259 and 10257, respectively.
Any idea why kubectl is trying to perform the health checks against ports 10251 and 10252?
version:
kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-15T16:58:53Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-15T16:51:04Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
PS: I am able to make deployments and expose services, no problem there.
This is a known issue which, unfortunately, is not going to be fixed, as the feature is planned for deprecation. Also, see this source:
I wouldn't expect a change for this issue. Upstream Kubernetes wants to deprecate component status and does not plan on enhancing it. If you need to check for cluster health, using other monitoring sources is recommended.
kubernetes/kubernetes#93171 - 'fix component status server address', which is getting a recommendation to close due to the deprecation discussion.
kubernetes/enhancements#553 - Deprecate ComponentStatus
kubernetes/kubeadm#2222 - kubeadm default init; they are looking to 'start printing a warning in kubectl get componentstatus that this API object is no longer supported and there are plans to remove it.'
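If you still want a quick health signal without componentstatus, a minimal sketch (assuming the default kubeadm secure ports and --bind-address=127.0.0.1 shown in the manifests above) is to query the components' own health endpoints from a control-plane node, or the API server's aggregated checks:
# on a control-plane node; -k because the serving certificates are self-signed
curl -k https://127.0.0.1:10257/healthz   # kube-controller-manager
curl -k https://127.0.0.1:10259/healthz   # kube-scheduler

# API server health with per-check detail
kubectl get --raw='/livez?verbose'
kubectl get --raw='/readyz?verbose'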

Kubernetes kube-controller-manager. How can I apply a flag?

In the documentation, I found that the following flag should be applied on kube-controller-manager to solve my problem:
--horizontal-pod-autoscaler-use-rest-clients=1m0s
But how can I apply this flag to kube-controller-manager? I don't understand, since it is not a YAML-based setting, and the only things I have on my local machine are the kubectl and oc CLI tools.
The kube-controller-manager runs in your K8s control plane, so you will have to add that flag on the servers where your control plane runs. Typically this is an odd number of servers, 3 or 5, since that is the recommended quorum size. (The example below assumes a kubeadm-style setup.)
The kube-controller-manager config typically lives under /etc/kubernetes/manifests on your masters. The file is usually named kube-controller-manager.yaml, and its content can be changed to something like this:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-controller-manager
    tier: control-plane
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-controller-manager
    - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --bind-address=127.0.0.1
    - --client-ca-file=/var/lib/minikube/certs/ca.crt
    - --cluster-signing-cert-file=/var/lib/minikube/certs/ca.crt
    - --cluster-signing-key-file=/var/lib/minikube/certs/ca.key
    - --controllers=*,bootstrapsigner,tokencleaner
    - --kubeconfig=/etc/kubernetes/controller-manager.conf
    - --leader-elect=true
    - --requestheader-client-ca-file=/var/lib/minikube/certs/front-proxy-ca.crt
    - --root-ca-file=/var/lib/minikube/certs/ca.crt
    - --service-account-private-key-file=/var/lib/minikube/certs/sa.key
    - --use-service-account-credentials=true
    - --horizontal-pod-autoscaler-use-rest-clients=1m0s   # <== add this line
    image: k8s.gcr.io/kube-controller-manager:v1.16.2
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10252
        scheme: HTTP
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-controller-manager
    resources:
      requests:
        cpu: 200m
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /var/lib/minikube/certs
      name: k8s-certs
      readOnly: true
    - mountPath: /etc/kubernetes/controller-manager.conf
      name: kubeconfig
      readOnly: true
    - mountPath: /usr/share/ca-certificates
      name: usr-share-ca-certificates
      readOnly: true
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /var/lib/minikube/certs
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /etc/kubernetes/controller-manager.conf
      type: FileOrCreate
    name: kubeconfig
  - hostPath:
      path: /usr/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-share-ca-certificates
status: {}
Then you need to restart your kube-controller-manager.
How to do this varies depending on what is running on your masters. If it is Docker, you can do sudo systemctl restart docker, or restart containerd with systemctl restart containerd if you are using it instead of Docker.
Or, if you just want to restart the kube-controller-manager itself, you can do docker restart kube-controller-manager, or crictl stop kube-controller-manager; crictl start kube-controller-manager.
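One hedged way to confirm the new flag actually took effect after the restart (the exact pod name depends on your node name, so a label selector is used here):
kubectl -n kube-system get pods -l component=kube-controller-manager \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{.spec.containers[0].command}{"\n"}{end}'

# or directly on the master
ps aux | grep [k]ube-controller-manager | tr ' ' '\n' | grep horizontal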

How to disable --basic-auth-file on kops? Kubernetes

I am trying to find a way to disable --basic-auth-file on my cluster.
Can someone help me?
Based on your comments, you are using kops to deploy the cluster. In kops' case, you need to add the following lines to disable the --basic-auth-file flag.
kops edit cluster --name <clustername> --state <state_path>
spec:
  kubeAPIServer:
    disableBasicAuth: true
spec and kubeAPIServer are probably already present in your cluster config.
To apply the change, you need to run
kops update cluster --name <clustername> --state <state_path> <--yes>
and do a rolling update
kops rolling-update cluster --name <clustername> --state <state_path> <--yes>
If you run the commands without --yes, they basically show you what they are going to do; with --yes they apply the changes and roll the cluster.
Sadly, kops is a bit lacking in documentation on which options you can use in the cluster config YAML; the best I could find is their API definition:
https://github.com/kubernetes/kops/blob/master/pkg/apis/kops/componentconfig.go#L246
You can disable it directly from the /etc/kubernetes/manifests/kube-apiserver.yaml file. For example
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --basic-auth-file=xxxxx   # <===== remove this line
    - --authorization-mode=Node,RBAC
    - --advertise-address=xxxxxx
    - --allow-privileged=true
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --cloud-provider=aws
    - --disable-admission-plugins=PersistentVolumeLabel
    - --enable-admission-plugins=NodeRestriction,DefaultStorageClass,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379
    - --insecure-port=0
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=6443
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-cluster-ip-range=10.96.0.0/12
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    image: k8s.gcr.io/kube-apiserver-amd64:v1.11.2
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 172.31.1.118
        path: /healthz
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-apiserver
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /usr/share/ca-certificates
      name: usr-share-ca-certificates
      readOnly: true
    - mountPath: /usr/local/share/ca-certificates
      name: usr-local-share-ca-certificates
      readOnly: true
    - mountPath: /etc/ca-certificates
      name: etc-ca-certificates
      readOnly: true
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /usr/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-share-ca-certificates
  - hostPath:
      path: /usr/local/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-local-share-ca-certificates
  - hostPath:
      path: /etc/ca-certificates
      type: DirectoryOrCreate
    name: etc-ca-certificates
status: {}
Then restart your kube-apiserver containers on your master(s)
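Since kube-apiserver runs as a static pod, the kubelet should recreate it on its own once the manifest file changes. If you want to force a restart, a sketch (assuming the default /etc/kubernetes/manifests path referenced above) is to move the manifest out of the watched directory and back:
sudo mv /etc/kubernetes/manifests/kube-apiserver.yaml /tmp/
sleep 20   # give the kubelet time to tear down the old pod
sudo mv /tmp/kube-apiserver.yaml /etc/kubernetes/manifests/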

How to set up HA for Kubernetes on CentOS, already running 1 master and 2 workers using kubeadm

I have successfully set up a normal cluster. Now I am trying the HA setup following this doc: https://kubernetes.io/docs/admin/high-availability/
After copying the etcd.yaml file into /etc/kubernetes/manifests, I see 3 etcd containers inside my cluster:
default etcd-server-kuber-poc-app1 1/1 Running 1 2d
default etcd-server-kuber-poc-app2 1/1 Running 72 20h
kube-system etcd-kuber-poc-app1 1/1 Running 4 13d
But when I check the logs of any of the etcd pods, I see errors like:
2017-11-15 08:53:25.398815 E | discovery: error #0: x509: failed to load system roots and no roots provided
2017-11-15 08:53:25.398907 I | discovery: cluster status check: error connecting to https://discovery.etcd.io, retrying in 18h12m16s
It seems like certs are missing for them, but I am not sure which certs to create or where to place them.
YAML content:
apiVersion: v1
kind: Pod
metadata:
  name: etcd-server
spec:
  hostNetwork: true
  containers:
  - image: gcr.io/google_containers/etcd:3.0.17
    name: etcd-container
    command:
    - /usr/local/bin/etcd
    - --name
    - NODE-1
    - --initial-advertise-peer-urls
    - http://10.127.38.18:2380
    - --listen-peer-urls
    - http://10.127.38.18:2380
    - --advertise-client-urls
    - http://10.127.38.18:4001
    - --listen-client-urls
    - http://127.0.0.1:4001
    - --data-dir
    - /var/etcd/data
    - --discovery
    - https://discovery.etcd.io/9458bcd46077d558fd26ced5cb9f2a6a
    ports:
    - containerPort: 2380
      hostPort: 2380
      name: serverport
    - containerPort: 4001
      hostPort: 4001
      name: clientport
    volumeMounts:
    - mountPath: /var/etcd
      name: varetcd
    - mountPath: /etc/ssl
      name: etcssl
      readOnly: true
    - mountPath: /usr/share/ssl
      name: usrsharessl
      readOnly: true
    - mountPath: /var/ssl
      name: varssl
      readOnly: true
    - mountPath: /usr/ssl
      name: usrssl
      readOnly: true
    - mountPath: /usr/lib/ssl
      name: usrlibssl
      readOnly: true
    - mountPath: /usr/local/openssl
      name: usrlocalopenssl
      readOnly: true
    - mountPath: /etc/openssl
      name: etcopenssl
      readOnly: true
    - mountPath: /etc/pki/tls
      name: etcpkitls
      readOnly: true
  volumes:
  - hostPath:
      path: /var/etcd/data
    name: varetcd
  - hostPath:
      path: /etc/ssl
    name: etcssl
  - hostPath:
      path: /usr/share/ssl
    name: usrsharessl
  - hostPath:
      path: /var/ssl
    name: varssl
  - hostPath:
      path: /usr/ssl
    name: usrssl
  - hostPath:
      path: /usr/lib/ssl
    name: usrlibssl
  - hostPath:
      path: /usr/local/openssl
    name: usrlocalopenssl
  - hostPath:
      path: /etc/openssl
    name: etcopenssl
  - hostPath:
      path: /etc/pki/tls
    name: etcpkitls
So, two issues:
1) How do I create the certs?
2) Where do I keep them?
I don't think you can make a kubeadm cluster HA. Your option is to recreate the cluster with the kubespray tool (https://github.com/kubespray/kubespray-cli), which will create the certificates for all the nodes.
For step-by-step instructions, follow Kubernetes The Hard Way: https://github.com/kelseyhightower/kubernetes-the-hard-way