kube-apiserver not coming up after adding the --audit-policy-file param - Kubernetes

Reference
I want to add the --audit-policy-file param; the file is present at /etc/kubernetes/audit-policy.yaml
and contains a basic Metadata-level logging rule.
But once I restart the service, the apiserver does not come up. If I leave the value empty it works fine, and the logs in /var/log/containers say the file read failed:
{"log":"\n","stream":"stderr","time":"2018-12-24T12:23:36.82013719Z"}
{"log":"error: loading audit policy file: failed to read file path \"/etc/kubernetes/audit-policy.yaml\": open /etc/kubernetes/audit-policy.yaml: no such file or directory\n","stream":"stderr","time":"2018-12-24T12:23:36.820146912Z"}
[root@kube2-master containers]# kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:39:04Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:31:33Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
[root@kube2-master containers]# cat /etc/kubernetes/audit-policy.yaml
rules:
- level: Metadata
[root@kube2-master containers]# cat /etc/kubernetes/manifests/kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --authorization-mode=Node,RBAC
    - --advertise-address=192.168.213.23
    - --allow-privileged=true
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379
    - --insecure-port=0
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=6443
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-cluster-ip-range=10.96.0.0/12
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    - --audit-policy-file=/etc/kubernetes/audit-policy.yaml
    image: k8s.gcr.io/kube-apiserver:v1.13.1
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 192.168.213.23
        path: /healthz
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-apiserver
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/pki
      name: etc-pki
      readOnly: true
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/pki
      type: DirectoryOrCreate
    name: etc-pki
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
status: {}

You're running the kube-apiserver as a pod, so it's looking for that audit file on the filesystem inside the container, whereas you're putting it on the filesystem of the host. You need to mount that path through to your kube-apiserver pod. Assuming you're using kubeadm, add the following to your ClusterConfiguration:
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
apiServer:
  extraVolumes:
  - name: audit-policy
    hostPath: /etc/kubernetes/audit-policy.yaml
    mountPath: /etc/kubernetes/audit-policy.yaml
    readOnly: true
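If you would rather not re-run kubeadm, the equivalent change can be made directly in /etc/kubernetes/manifests/kube-apiserver.yaml by adding a hostPath volume and a matching mount for the policy file. A minimal sketch (the volume name audit-policy is just an illustrative choice):
    # under the existing container's volumeMounts:
    volumeMounts:
    - mountPath: /etc/kubernetes/audit-policy.yaml
      name: audit-policy
      readOnly: true
  # and under spec.volumes:
  volumes:
  - hostPath:
      path: /etc/kubernetes/audit-policy.yaml
      type: File
    name: audit-policy
The kubelet recreates the static pod as soon as the manifest file changes. Note also that audit policy files generally need apiVersion: audit.k8s.io/v1 (v1beta1 on older releases) and kind: Policy at the top, or the apiserver will refuse to decode them.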

Related

kube-controller-manager is not logging details

I have an issue setting up persistent volumes for GitLab on my bare-metal Kubernetes cluster:
Operation for "provision-gitlab/repo-data-gitlab-gitaly-0[3f758288-290c-4d9c-a084-5506f58a22d7]" failed. No retries permitted until 2020-11-28 11:55:56.533202624 +0000 UTC m=+305.008238514 (durationBeforeRetry 4s). Error: "failed to create volume: failed to create volume: see kube-controller-manager.log for details"
The problem is that this file doesn't exist anywhere, and I cannot get any more details about the error, even after adapting the configuration:
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
component: kube-controller-manager
tier: control-plane
name: kube-controller-manager
namespace: kube-system
spec:
containers:
- command:
- kube-controller-manager
- --allocate-node-cidrs=true
- --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
- --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
- --bind-address=127.0.0.1
- --client-ca-file=/etc/kubernetes/pki/ca.crt
- --cluster-cidr=192.168.0.0/16
- --cluster-name=kubernetes
- --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
- --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
- --controllers=*,bootstrapsigner,tokencleaner
- --kubeconfig=/etc/kubernetes/controller-manager.conf
- --leader-elect=true
- --node-cidr-mask-size=24
- --port=0
- --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
- --root-ca-file=/etc/kubernetes/pki/ca.crt
- --service-account-private-key-file=/etc/kubernetes/pki/sa.key
- --service-cluster-ip-range=10.96.0.0/12
- --use-service-account-credentials=true
- --log-dir=/var/log/
- --log-file=kube-controller-manager.log
- --logtostderr=false
image: k8s.gcr.io/kube-controller-manager:v1.19.4
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 8
httpGet:
host: 127.0.0.1
path: /healthz
port: 10257
scheme: HTTPS
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 15
name: kube-controller-manager
resources:
requests:
cpu: 200m
startupProbe:
failureThreshold: 24
httpGet:
host: 127.0.0.1
path: /healthz
port: 10257
scheme: HTTPS
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 15
volumeMounts:
- mountPath: /var/log/kube-controller-manager.log
name: logfile
- mountPath: /etc/ssl/certs
name: ca-certs
readOnly: true
- mountPath: /etc/ca-certificates
name: etc-ca-certificates
readOnly: true
- mountPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
name: flexvolume-dir
- mountPath: /etc/kubernetes/pki
name: k8s-certs
readOnly: true
- mountPath: /etc/kubernetes/controller-manager.conf
name: kubeconfig
readOnly: true
- mountPath: /usr/local/share/ca-certificates
name: usr-local-share-ca-certificates
readOnly: true
- mountPath: /usr/share/ca-certificates
name: usr-share-ca-certificates
readOnly: true
hostNetwork: true
priorityClassName: system-node-critical
volumes:
- hostPath:
path: /var/log/kube-controller-manager.log
name: logfile
- hostPath:
path: /etc/ssl/certs
type: DirectoryOrCreate
name: ca-certs
- hostPath:
path: /etc/ca-certificates
type: DirectoryOrCreate
name: etc-ca-certificates
- hostPath:
path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
type: DirectoryOrCreate
name: flexvolume-dir
- hostPath:
path: /etc/kubernetes/pki
type: DirectoryOrCreate
name: k8s-certs
- hostPath:
path: /etc/kubernetes/controller-manager.conf
type: FileOrCreate
name: kubeconfig
- hostPath:
path: /usr/local/share/ca-certificates
type: DirectoryOrCreate
name: usr-local-share-ca-certificates
- hostPath:
path: /usr/share/ca-certificates
type: DirectoryOrCreate
name: usr-share-ca-certificates
status: {}
I tried creating the file by hand and changing its permissions, but the pod still does not log to it.
Control plane components use the klog library for logging, which is, for the moment, rather poorly documented.
Actually, --log-dir and --log-file are mutually exclusive.
## it should be either --log-dir
--log-dir=/var/log/kube
...
volumeMounts:
- mountPath: /var/log/kube
  name: log
...
volumes:
- hostPath:
    path: /var/log/kube
    type: DirectoryOrCreate
  name: log

## or --log-file
--log-file=/var/log/kube-controller-manager.log
...
volumeMounts:
- mountPath: /var/log/kube-controller-manager.log
  name: log
...
volumes:
- hostPath:
    path: /var/log/kube-controller-manager.log
    type: FileOrCreate
  name: log
With --log-dir, a component will write each log level into a separate file inside the given directory.
So you'll have a set of files with names like kube-controller-manager.INFO.log.
With --log-file you'll have a single file, as expected.
Don't forget to specify FileOrCreate in your volume definition, otherwise a directory will be created by default.
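If the goal is only to find out why provisioning fails, you can also skip file logging entirely: with the default stderr logging (i.e. without --logtostderr=false) the messages end up in the container log, which you can read directly. A quick sketch, where the pod name suffix is your master node's hostname:
kubectl -n kube-system logs kube-controller-manager-<master-node-name>
Raising the verbosity in the manifest with --v=4 usually surfaces the underlying provisioning error as well.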

kubectl get componentstatus shows unhealthy

I've finished setting up my HA k8s cluster using kubeadm.
Everything seems to be working fine, but when I check with the command kubectl get componentstatus I get:
NAME STATUS MESSAGE
scheduler Unhealthy Get http://127.0.0.1:10251/healthz: dial tcp 12
controller-manager Unhealthy Get http://127.0.0.1:10252/healthz: dial tcp 12
etcd-0 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
I see that the manifests for the scheduler and the controller manager have different ports set up for the health check:
kube-scheduler.yaml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
component: kube-scheduler
tier: control-plane
name: kube-scheduler
namespace: kube-system
spec:
containers:
- command:
- kube-scheduler
- --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
- --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
- --bind-address=127.0.0.1
- --kubeconfig=/etc/kubernetes/scheduler.conf
- --leader-elect=true
- --port=0
image: k8s.gcr.io/kube-scheduler:v1.18.6
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 8
httpGet:
host: 127.0.0.1
path: /healthz
port: 10259
scheme: HTTPS
initialDelaySeconds: 15
timeoutSeconds: 15
name: kube-scheduler
resources:
requests:
cpu: 100m
volumeMounts:
- mountPath: /etc/kubernetes/scheduler.conf
name: kubeconfig
readOnly: true
hostNetwork: true
priorityClassName: system-cluster-critical
volumes:
- hostPath:
path: /etc/kubernetes/scheduler.conf
type: FileOrCreate
name: kubeconfig
status: {}
kube-controller-manager.yaml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
component: kube-controller-manager
tier: control-plane
name: kube-controller-manager
namespace: kube-system
spec:
containers:
- command:
- kube-controller-manager
- --allocate-node-cidrs=true
- --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
- --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
- --bind-address=127.0.0.1
- --client-ca-file=/etc/kubernetes/pki/ca.crt
- --cluster-cidr=10.244.0.0/16
- --cluster-name=kubernetes
- --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
- --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
- --controllers=*,bootstrapsigner,tokencleaner
- --kubeconfig=/etc/kubernetes/controller-manager.conf
- --leader-elect=true
- --node-cidr-mask-size=24
- --port=0
- --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
- --root-ca-file=/etc/kubernetes/pki/ca.crt
- --service-account-private-key-file=/etc/kubernetes/pki/sa.key
- --service-cluster-ip-range=10.96.0.0/12
- --use-service-account-credentials=true
image: k8s.gcr.io/kube-controller-manager:v1.18.6
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 8
httpGet:
host: 127.0.0.1
path: /healthz
port: 10257
scheme: HTTPS
initialDelaySeconds: 15
timeoutSeconds: 15
name: kube-controller-manager
resources:
requests:
cpu: 200m
volumeMounts:
- mountPath: /etc/ssl/certs
name: ca-certs
readOnly: true
- mountPath: /etc/pki
name: etc-pki
readOnly: true
- mountPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
name: flexvolume-dir
- mountPath: /etc/kubernetes/pki
name: k8s-certs
readOnly: true
- mountPath: /etc/kubernetes/controller-manager.conf
name: kubeconfig
readOnly: true
hostNetwork: true
priorityClassName: system-cluster-critical
volumes:
- hostPath:
path: /etc/ssl/certs
type: DirectoryOrCreate
name: ca-certs
- hostPath:
path: /etc/pki
type: DirectoryOrCreate
name: etc-pki
- hostPath:
path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
type: DirectoryOrCreate
name: flexvolume-dir
- hostPath:
path: /etc/kubernetes/pki
type: DirectoryOrCreate
name: k8s-certs
- hostPath:
path: /etc/kubernetes/controller-manager.conf
type: FileOrCreate
name: kubeconfig
status: {}
So these use ports 10259 and 10257, respectively.
Any idea why kubectl is trying to perform the health check using ports 10251 and 10252?
version:
kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-15T16:58:53Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-15T16:51:04Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
PS: I am able to make deployments and expose services, no problem there.
This is a known issue which, unfortunately, is not going to be fixed, as the feature is planned for deprecation. Also, see this source:
I wouldn't expect a change for this issue. Upstream Kubernetes wants to deprecate component status and does not plan on enhancing it. If you need to check cluster health, using other monitoring sources is recommended.
kubernetes/kubernetes#93171 - 'fix component status server address', which is getting a recommendation to close due to the deprecation discussion.
kubernetes/enhancements#553 - Deprecate ComponentStatus.
kubernetes/kubeadm#2222 - kubeadm default init, where they are looking to 'start printing a warning in kubectl get componentstatus that this API object is no longer supported and there are plans to remove it.'
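As an alternative to componentstatuses you can probe the API server's aggregated health endpoints, or hit the components' secure ports directly on the control-plane node; a sketch (readyz/livez assume a v1.16+ cluster):
kubectl get --raw='/readyz?verbose'
kubectl get --raw='/livez?verbose'
# or on the master itself:
curl -k https://127.0.0.1:10259/healthz   # kube-scheduler
curl -k https://127.0.0.1:10257/healthz   # kube-controller-manager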

Getting 'error: json: unknown field "metadata"' when following a Kubernetes tutorial

I followed this tutorial: https://kubernetes.io/docs/tutorials/configuration/configure-redis-using-configmap/
I got the error below when I tried to create the pods:
kubectl apply -k .
error: json: unknown field "metadata"
My kubectl version is as below:
kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:40:16Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:36:19Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Below are some files that I created following the tutorial.
kustomization.yaml
configMapGenerator:
- name: example-redis-config
files:
- redis-config
apiVersion: v1
kind: Pod
metadata:
name: redis
spec:
containers:
- name: redis
image: redis:5.0.4
command:
- redis-server
- "/redis-master/redis.conf"
env:
- name: MASTER
value: "true"
ports:
- containerPort: 6379
resources:
limits:
cpu: "0.1"
volumeMounts:
- mountPath: /redis-master-data
name: data
- mountPath: /redis-master
name: config
volumes:
- name: data
emptyDir: {}
- name: config
configMap:
name: example-redis-config
items:
- key: redis-config
path: redis.conf
resources:
- redis-pod.yaml
redis-config
maxmemory 2mb
maxmemory-policy allkeys-lru
redis-pod.yaml
apiVersion: v1
kind: Pod
metadata:
name: redis
spec:
containers:
- name: redis
image: redis:5.0.4
command:
- redis-server
- "/redis-master/redis.conf"
env:
- name: MASTER
value: "true"
ports:
- containerPort: 6379
resources:
limits:
cpu: "0.1"
volumeMounts:
- mountPath: /redis-master-data
name: data
- mountPath: /redis-master
name: config
volumes:
- name: data
emptyDir: {}
- name: config
configMap:
name: example-redis-config
items:
- key: redis-config
path: redis.conf
Any help?
I think you misinterpreted the kustomization.yaml instructions (which are confusing). You don't add the contents of pods/config/redis-pod.yaml to kustomization.yaml. You just download that file and add the resources snippet.
The resulting kustomization.yaml should look like:
configMapGenerator:
- name: example-redis-config
  files:
  - redis-config
resources:
- redis-pod.yaml
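With redis-config and redis-pod.yaml sitting next to that kustomization.yaml, you can preview what kustomize will generate before creating anything; a quick sanity check:
kubectl kustomize .   # render the generated ConfigMap and the Pod
kubectl apply -k .    # create them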

How to disable --basic-auth-file on kops? Kubernetes

I am trying to find a way to disable --basic-auth-file on my cluster.
Can someone help me?
Based on your comments, you are using kops to deploy the cluster. In kops' case, you need to add the following lines to disable the --basic-auth-file flag:
kops edit cluster --name <clustername> --state <state_path>
spec:
  kubeAPIServer:
    disableBasicAuth: true
spec and kubeAPIServer are probably already present in your cluster config.
To apply the change, you need to run
kops update cluster --name <clustername> --state <state_path> <--yes>
and then do a rolling update
kops rolling-update cluster --name <clustername> --state <state_path> <--yes>
If you run the commands without --yes, they will just show you what they are going to do; with --yes they will apply the changes and roll the cluster.
Sadly, the kops documentation is a bit lacking on which options you can use in the cluster config YAML; the best I could find is their API definition:
https://github.com/kubernetes/kops/blob/master/pkg/apis/kops/componentconfig.go#L246
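To double-check the rendered flags before and after the edit, you can dump the cluster spec; a sketch using the same placeholders as above:
kops get cluster --name <clustername> --state <state_path> -o yaml | grep -A3 kubeAPIServer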
You can disable it directly in the /etc/kubernetes/manifests/kube-apiserver.yaml file. For example:
apiVersion: v1
kind: Pod
metadata:
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ""
creationTimestamp: null
labels:
component: kube-apiserver
tier: control-plane
name: kube-apiserver
namespace: kube-system
spec:
containers:
- command:
- kube-apiserver
- --basic-auth-file=xxxxx . <===== Remove this line
- --authorization-mode=Node,RBAC
- --advertise-address=xxxxxx
- --allow-privileged=true
- --client-ca-file=/etc/kubernetes/pki/ca.crt
- --cloud-provider=aws
- --disable-admission-plugins=PersistentVolumeLabel
- --enable-admission-plugins=NodeRestriction,DefaultStorageClass,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota
- --enable-bootstrap-token-auth=true
- --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
- --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
- --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
- --etcd-servers=https://127.0.0.1:2379
- --insecure-port=0
- --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
- --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
- --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
- --requestheader-allowed-names=front-proxy-client
- --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
- --requestheader-extra-headers-prefix=X-Remote-Extra-
- --requestheader-group-headers=X-Remote-Group
- --requestheader-username-headers=X-Remote-User
- --secure-port=6443
- --service-account-key-file=/etc/kubernetes/pki/sa.pub
- --service-cluster-ip-range=10.96.0.0/12
- --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
- --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
image: k8s.gcr.io/kube-apiserver-amd64:v1.11.2
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 8
httpGet:
host: 172.31.1.118
path: /healthz
port: 6443
scheme: HTTPS
initialDelaySeconds: 15
timeoutSeconds: 15
name: kube-apiserver
resources:
requests:
cpu: 250m
volumeMounts:
- mountPath: /etc/kubernetes/pki
name: k8s-certs
readOnly: true
- mountPath: /etc/ssl/certs
name: ca-certs
readOnly: true
- mountPath: /usr/share/ca-certificates
name: usr-share-ca-certificates
readOnly: true
- mountPath: /usr/local/share/ca-certificates
name: usr-local-share-ca-certificates
readOnly: true
- mountPath: /etc/ca-certificates
name: etc-ca-certificates
readOnly: true
hostNetwork: true
priorityClassName: system-cluster-critical
volumes:
- hostPath:
path: /etc/kubernetes/pki
type: DirectoryOrCreate
name: k8s-certs
- hostPath:
path: /etc/ssl/certs
type: DirectoryOrCreate
name: ca-certs
- hostPath:
path: /usr/share/ca-certificates
type: DirectoryOrCreate
name: usr-share-ca-certificates
- hostPath:
path: /usr/local/share/ca-certificates
type: DirectoryOrCreate
name: usr-local-share-ca-certificates
- hostPath:
path: /etc/ca-certificates
type: DirectoryOrCreate
name: etc-ca-certificates
status: {}
Then restart the kube-apiserver containers on your master(s).
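Once the kubelet has recreated the static pod, you can confirm the flag is gone; a quick check on the master (no output means basic auth is disabled):
ps -ef | grep '[k]ube-apiserver' | grep basic-auth-file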

How to set up HA for Kubernetes on CentOS, already running 1 master and 2 workers using kubeadm

I have successfully set up a normal cluster. Now I am trying an HA setup following this doc: https://kubernetes.io/docs/admin/high-availability/
After copying the etcd.yaml file into /etc/kubernetes/manifests I see 3 etcd containers inside my cluster:
default etcd-server-kuber-poc-app1 1/1 Running 1 2d
default etcd-server-kuber-poc-app2 1/1 Running 72 20h
kube-system etcd-kuber-poc-app1 1/1 Running 4 13d
But when I check the logs of any of the etcd pods I see errors like:
2017-11-15 08:53:25.398815 E | discovery: error #0: x509: failed to load system roots and no roots provided
2017-11-15 08:53:25.398907 I | discovery: cluster status check: error connecting to https://discovery.etcd.io, retrying in 18h12m16s
It seems like certs are missing for them, but I am not sure which certs to create or where to place them.
YAML content:
apiVersion: v1
kind: Pod
metadata:
name: etcd-server
spec:
hostNetwork: true
containers:
- image: gcr.io/google_containers/etcd:3.0.17
name: etcd-container
command:
- /usr/local/bin/etcd
- --name
- NODE-1
- --initial-advertise-peer-urls
- http://10.127.38.18:2380
- --listen-peer-urls
- http://10.127.38.18:2380
- --advertise-client-urls
- http://10.127.38.18:4001
- --listen-client-urls
- http://127.0.0.1:4001
- --data-dir
- /var/etcd/data
- --discovery
- https://discovery.etcd.io/9458bcd46077d558fd26ced5cb9f2a6a
ports:
- containerPort: 2380
hostPort: 2380
name: serverport
- containerPort: 4001
hostPort: 4001
name: clientport
volumeMounts:
- mountPath: /var/etcd
name: varetcd
- mountPath: /etc/ssl
name: etcssl
readOnly: true
- mountPath: /usr/share/ssl
name: usrsharessl
readOnly: true
- mountPath: /var/ssl
name: varssl
readOnly: true
- mountPath: /usr/ssl
name: usrssl
readOnly: true
- mountPath: /usr/lib/ssl
name: usrlibssl
readOnly: true
- mountPath: /usr/local/openssl
name: usrlocalopenssl
readOnly: true
- mountPath: /etc/openssl
name: etcopenssl
readOnly: true
- mountPath: /etc/pki/tls
name: etcpkitls
readOnly: true
volumes:
- hostPath:
path: /var/etcd/data
name: varetcd
- hostPath:
path: /etc/ssl
name: etcssl
- hostPath:
path: /usr/share/ssl
name: usrsharessl
- hostPath:
path: /var/ssl
name: varssl
- hostPath:
path: /usr/ssl
name: usrssl
- hostPath:
path: /usr/lib/ssl
name: usrlibssl
- hostPath:
path: /usr/local/openssl
name: usrlocalopenssl
- hostPath:
path: /etc/openssl
name: etcopenssl
- hostPath:
path: /etc/pki/tls
name: etcpkitls
So, two issues:
1) How do I create the certs?
2) Where do I keep them?
I don't think we can make a kubeadm cluster HA. Your option is to recreate the cluster with the kubespray tool (https://github.com/kubespray/kubespray-cli); it will create the certificates on all the nodes.
For step-by-step instructions, follow Kubernetes The Hard Way: https://github.com/kelseyhightower/kubernetes-the-hard-way
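For reference, if you keep the static-pod etcd and want to close the certificate gap yourself, etcd expects its TLS material to be passed in explicitly. A minimal sketch of the relevant flags; the paths are placeholders for certs you would generate yourself (e.g. with cfssl or openssl) and mount into the pod:
- --cert-file=/etc/etcd/pki/server.crt
- --key-file=/etc/etcd/pki/server.key
- --trusted-ca-file=/etc/etcd/pki/ca.crt
- --client-cert-auth=true
- --peer-cert-file=/etc/etcd/pki/peer.crt
- --peer-key-file=/etc/etcd/pki/peer.key
- --peer-trusted-ca-file=/etc/etcd/pki/ca.crt
- --peer-client-cert-auth=true
With client and peer cert auth enabled, you would also switch the advertise/listen URLs from http to https.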