I'm trying to troubleshoot a failing pod but I cannot gather enough info to do so. Hoping someone can assist.
[server-001 ~]$ kubectl get pods sandboxed-nginx-98bb68c4d-26ljd
NAME READY STATUS RESTARTS AGE
sandboxed-nginx-98bb68c4d-26ljd 0/1 ContainerCreating 0 18m
[server-001 ~]$ kubectl logs sandboxed-nginx-98bb68c4d-26ljd
Error from server (BadRequest): container "nginx-kata" in pod "sandboxed-nginx-98bb68c4d-26ljd" is waiting to start: ContainerCreating
[server-001 ~]$ kubectl describe pods sandboxed-nginx-98bb68c4d-26ljd
Name: sandboxed-nginx-98bb68c4d-26ljd
Namespace: default
Priority: 0
Node: worker-001/100.100.230.34
Start Time: Fri, 08 Jul 2022 09:41:08 +0000
Labels: name=sandboxed-nginx
pod-template-hash=98bb68c4d
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/sandboxed-nginx-98bb68c4d
Containers:
nginx-kata:
Container ID:
Image: dummy-registry.com/test/nginx:1.17.7
Image ID:
Port: 80/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-887n4 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-887n4:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 25m default-scheduler Successfully assigned default/sandboxed-nginx-98bb68c4d-26ljd to worker-001
Warning FailedCreatePodSandBox 5m19s kubelet Failed to create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded
[worker-001 ~]$ sudo crictl images
IMAGE TAG IMAGE ID SIZE
dummy-registry.com/test/externalip-webhook v1.0.0-1 e2e778d82e6c3 147MB
dummy-registry.com/test/flannel v0.14.1 52e470e10ebf9 209MB
dummy-registry.com/test/kube-proxy v1.22.8 93ab9e5f0c4d6 869MB
dummy-registry.com/test/nginx 1.17.7 db634ca7e0456 310MB
dummy-registry.com/test/pause 3.5 dabdc5fea3665 711kB
dummy-registry.com/test/linux 7-slim 41388a53234b5 140MB
[worker-001 ~]$ sudo crictl ps
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
b1c6d1bf2f09a db634ca7e045638213d3f68661164aa5c7d5b469631bbb79a8a65040666492d5 34 minutes ago Running nginx 0 3598c2c4d3e88
caaa14b395eb8 e2e778d82e6c3a8cc82cdf3083e55b084869cd5de2a762877640aff1e88659dd 48 minutes ago Running webhook 0 8a9697e2af6a1
4f97ac292753c 52e470e10ebf93ea5d2aa32f5ca2ecfa3a3b2ff8d2015069618429f3bb9cda7a 48 minutes ago Running kube-flannel 2 a4e4d0c14cafc
aacb3ed840065 93ab9e5f0c4d64c135c2e4593cd772733b025f53a9adb06e91fe49f500b634ab 48 minutes ago Running kube-proxy 2 9e0bc036c2d00
[worker-001 ~]$ sudo crictl pods
POD ID CREATED STATE NAME NAMESPACE ATTEMPT RUNTIME
3598c2c4d3e88 34 minutes ago Ready nginx-9xtss default 0 (default)
8a9697e2af6a1 48 minutes ago Ready externalip-validation-webhook-7988bff847-ntv6d externalip-validation-system 0 (default)
9e0bc036c2d00 48 minutes ago Ready kube-proxy-9c7cb kube-system 0 (default)
a4e4d0c14cafc 48 minutes ago Ready kube-flannel-ds-msz7w kube-system 0 (default)
[worker-001 ~]$ cat /etc/crio/crio.conf
[crio]
[crio.image]
pause_image = "dummy-registry.com/test/pause:3.5"
registries = ["docker.io", "dummy-registry.com/test"]
[crio.network]
plugin_dirs = ["/opt/cni/bin"]
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
conmon = "/usr/libexec/crio/conmon"
manage_network_ns_lifecycle = true
manage_ns_lifecycle = true
selinux = false
[crio.runtime.runtimes]
[crio.runtime.runtimes.kata]
runtime_path = "/usr/bin/containerd-shim-kata-v2"
runtime_type = "vm"
runtime_root = "/run/vc"
[crio.runtime.runtimes.runc]
runtime_path = "/usr/bin/runc"
runtime_type = "oci"
[worker-001 ~]$ egrep -v '^#|^;|^$' /usr/share/defaults/kata-containers/configuration-qemu.toml
[hypervisor.qemu]
initrd = "/usr/share/kata-containers/kata-containers-initrd.img"
path = "/usr/libexec/qemu-kvm"
kernel = "/usr/share/kata-containers/vmlinuz.container"
machine_type = "q35"
enable_annotations = []
valid_hypervisor_paths = ["/usr/libexec/qemu-kvm"]
kernel_params = ""
firmware = ""
firmware_volume = ""
machine_accelerators=""
cpu_features="pmu=off"
default_vcpus = 1
default_maxvcpus = 0
default_bridges = 1
default_memory = 2048
disable_block_device_use = false
shared_fs = "virtio-9p"
virtio_fs_daemon = "/usr/libexec/kata-qemu/virtiofsd"
valid_virtio_fs_daemon_paths = ["/usr/libexec/kata-qemu/virtiofsd"]
virtio_fs_cache_size = 0
virtio_fs_extra_args = ["--thread-pool-size=1", "-o", "announce_submounts"]
virtio_fs_cache = "auto"
block_device_driver = "virtio-scsi"
enable_iothreads = false
enable_vhost_user_store = false
vhost_user_store_path = "/usr/libexec/qemu-kvm"
valid_vhost_user_store_paths = ["/var/run/kata-containers/vhost-user"]
valid_file_mem_backends = [""]
pflashes = []
valid_entropy_sources = ["/dev/urandom","/dev/random",""]
[factory]
[agent.kata]
kernel_modules=[]
[runtime]
internetworking_model="tcfilter"
disable_guest_seccomp=true
disable_selinux=false
sandbox_cgroup_only=true
static_sandbox_resource_mgmt=false
sandbox_bind_mounts=[]
vfio_mode="guest-kernel"
disable_guest_empty_dir=false
experimental=[]
[image]
[server-001 ~]$ cat nginx.yaml
---
kind: RuntimeClass
apiVersion: node.k8s.io/v1
metadata:
name: kata-containers
handler: kata
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: sandboxed-nginx
spec:
replicas: 1
selector:
matchLabels:
name: sandboxed-nginx
template:
metadata:
labels:
name: sandboxed-nginx
spec:
runtimeClassName: kata-containers
containers:
- name: nginx-kata
image: dummy-registry.com/test/nginx:1.17.7
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: sandboxed-nginx
spec:
type: NodePort
ports:
- protocol: TCP
port: 80
targetPort: 80
selector:
name: sandboxed-nginx
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: nginx
labels:
name: nginx
spec:
selector:
matchLabels:
name: nginx
template:
metadata:
labels:
name: nginx
spec:
tolerations:
# this toleration is to have the daemonset runnable on master nodes
- key: node-role.kubernetes.io/master
effect: NoSchedule
containers:
- name: nginx
image: dummy-registry.com/test/nginx:1.17.7
ports:
- containerPort: 80
[server-001 ~]$ kubectl apply -f nginx.yaml
runtimeclass.node.k8s.io/kata-containers unchanged
deployment.apps/sandboxed-nginx created
service/sandboxed-nginx created
daemonset.apps/nginx created
Since you're using Kata Containers with the CRI-O runtime, your pod needs a RuntimeClass, which it is missing.
You need to create a RuntimeClass object that points to the installed runtime; see the docs here for how to do that. Also, make sure that the CRI-O setup on worker-001 is correctly configured with Kubernetes; here is the documentation for that.
Afterwards, add the RuntimeClass to your pod spec so that the container can actually run. The pod stays stuck in ContainerCreating because CRI-O based containers cannot be run unless the RuntimeClass is specified. Here is some documentation on understanding Container Runtimes.
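If the RuntimeClass is in place and the sandbox still fails with DeadlineExceeded, the node-side runtime logs usually say why the Kata VM could not be started (missing virtualization support, a wrong hypervisor path, and so on). A rough sketch of how to gather that, assuming systemd-managed CRI-O and a standard Kata install on worker-001:
[worker-001 ~]$ sudo journalctl -u crio --since "1 hour ago" | grep -iE 'kata|sandbox|error'
[worker-001 ~]$ sudo kata-runtime check     # 'kata-runtime kata-check' on Kata 1.x; verifies the host can run Kata (KVM, kernel modules)
[worker-001 ~]$ sudo kata-runtime env | head -40     # 'kata-runtime kata-env' on 1.x; dumps the configuration the runtime actually loaded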
Related
I have created a StatefulSet using the following manifest.
The issue is that the StatefulSet is created correctly,
but its replicas never get to run.
I hope you can help me, thanks.
Name: phponcio
Namespace: default
Priority: 0
Node: juanaraque-worknode/10.0.0.2
Start Time: Wed, 01 Jun 2022 13:49:08 +0200
Labels: run=phponcio
Annotations: cni.projectcalico.org/containerID: 768caa8f4e9b0683033e279aa259d6673d81fe38b62a2ea06c7d256805d522f0
cni.projectcalico.org/podIP: 192.168.240.69/32
cni.projectcalico.org/podIPs: 192.168.240.69/32
Status: Running
IP: 192.168.240.69
IPs:
IP: 192.168.240.69
Containers:
phponcio:
Container ID: docker://75f90344ada396be965f0ae5c33bb7ea8c110d9ea1e014b75f36263b9cd72504
Image: 84d3f7ba5876/php-apache-mysql
Image ID: docker-pullable://84d3f7ba5876/php-apache-mysql@sha256:bf0cd01ae4b77cca146dcd54d8a447ba8b6c7f5a9e11d6dab6d19f429fb111d1
Port: <none>
Host Port: <none>
State: Running
Started: Wed, 01 Jun 2022 14:31:53 +0200
Last State: Terminated
Reason: Error
Exit Code: 255
Started: Wed, 01 Jun 2022 13:54:26 +0200
Finished: Wed, 01 Jun 2022 14:30:44 +0200
Ready: True
Restart Count: 2
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bld5b (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-bld5b:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
Name: phponcio
Namespace: default
CreationTimestamp: Thu, 02 Jun 2022 10:35:08 +0200
Selector: app=phponcio
Labels: <none>
Annotations: <none>
Replicas: 3 desired | 1 total
Update Strategy: RollingUpdate
Partition: 0
Pods Status: 0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
Labels: app=phponcio
Containers:
phponcio:
Image: 84d3f7ba5876/php-apache-mysql
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts:
/usr/share from slow (rw)
Volumes: <none>
Volume Claims:
Name: slow
StorageClass: slow
Labels: <none>
Annotations: <none>
Capacity: 1Gi
Access Modes: [ReadWriteOnce]
Events: <none>
apiVersion: v1
kind: Service
metadata:
name: phponcio
labels:
app: phponcio
spec:
ports:
- port: 80
name: web
clusterIP: None
selector:
app: phponcio
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: phponcio
spec:
selector:
matchLabels:
app: phponcio # must match .spec.template.metadata.labels
serviceName: phponcio
replicas: 3 # defaults to 1
template:
metadata:
labels:
app: phponcio # must match .spec.selector.matchLabels
spec:
terminationGracePeriodSeconds: 10
containers:
- name: phponcio
image: 84d3f7ba5876/php-apache-mysql
ports:
- containerPort: 80
name: phponcio
volumeMounts:
- name: slow
mountPath: /usr/share
volumeClaimTemplates:
- metadata:
name: slow
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: slow
resources:
requests:
storage: 1Gi
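One quick check worth doing here, sketched as an assumption since no StorageClass or PVC output is shown: verify that a StorageClass named slow actually exists and that the PVC created from the volume claim template gets bound, because an unbound claim keeps the first replica, phponcio-0, from ever starting.
$ kubectl get storageclass
$ kubectl get pvc
$ kubectl describe pod phponcio-0     # the events usually show "pod has unbound immediate PersistentVolumeClaims" if the claim is the problem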
I have Vault deployed from the official Helm chart and it's running in HA mode, with auto-unseal, TLS enabled, Raft as the backend, and the cluster is 1.17 in EKS. I have all of the Raft followers joined to the vault-0 pod as the leader. I have followed this tutorial to the letter and I always end up with a TLS bad certificate error. The exact error is: http: TLS handshake error from 123.45.6.789:52936: remote error: tls: bad certificate.
I did find one issue while following the tutorial exactly: the part where they pipe the Kubernetes CA to base64. For me that output was multi-line and the deploy failed, so I piped the output through tr -d '\n'. But this is where I get the error. I've tried the part about launching a container and testing it with curl, and it fails; tailing the agent injector logs, I get that bad cert error.
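For reference, a single-line base64 of the CA can also be produced directly, which avoids the multi-line output described above (this assumes GNU coreutils base64; the flag differs on BSD/macOS):
$ base64 -w 0 vault-injector.ca
This is equivalent to cat vault-injector.ca | base64 | tr -d '\n'.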
Here is my values.yaml if it helps.
global:
tlsDisable: false
injector:
metrics:
enabled: true
certs:
secretName: vault-tls
caBundle: "(output of cat vault-injector.ca | base64 | tr -d '\n')"
certName: vault.crt
keyName: vault.key
server:
extraEnvironmentVars:
VAULT_CACERT: "/vault/userconfig/vault-tls/vault.ca"
extraSecretEnvironmentVars:
- envName: AWS_ACCESS_KEY_ID
secretName: eks-creds
secretKey: AWS_ACCESS_KEY_ID
- envName: AWS_SECRET_ACCESS_KEY
secretName: eks-creds
secretKey: AWS_SECRET_ACCESS_KEY
- envName: VAULT_UNSEAL_KMS_KEY_ID
secretName: vault-kms-id
secretKey: VAULT_UNSEAL_KMS_KEY_ID
extraVolumes:
- type: secret
name: vault-tls
- type: secret
name: eks-creds
- type: secret
name: vault-kms-id
resources:
requests:
memory: 256Mi
cpu: 250m
limits:
memory: 512Mi
cpu: 500m
auditStorage:
enabled: true
storageClass: gp2
standalone:
enabled: false
ha:
enabled: true
raft:
enabled: true
config: |
ui = true
api_addr = "[::]:8200"
cluster_addr = "[::]:8201"
listener "tcp" {
tls_disable = 0
tls_cert_file = "/vault/userconfig/vault-tls/vault.crt"
tls_key_file = "/vault/userconfig/vault-tls/vault.key"
tls_client_ca_file = "/vault/userconfig/vault-tls/vault.ca"
tls_min_version = "tls12"
address = "[::]:8200"
cluster_address = "[::]:8201"
}
storage "raft" {
path = "/vault/data"
}
disable_mlock = true
service_registration "kubernetes" {}
seal "awskms" {
region = "us-east-1"
kms_key_id = "VAULT_UNSEAL_KMS_KEY_ID"
}
ui:
enabled: true
I've exec'd into the agent-injector and poked around. I can see the certs under /etc/webhook/certs/ are there and they look correct.
Here is my vault-agent-injector pod
kubectl describe pod vault-agent-injector-6bbf84484c-q8flv
Name: vault-agent-injector-6bbf84484c-q8flv
Namespace: default
Priority: 0
Node: ip-172-16-3-151.ec2.internal/172.16.3.151
Start Time: Sat, 19 Dec 2020 16:27:14 -0800
Labels: app.kubernetes.io/instance=vault
app.kubernetes.io/name=vault-agent-injector
component=webhook
pod-template-hash=6bbf84484c
Annotations: kubernetes.io/psp: eks.privileged
Status: Running
IP: 172.16.3.154
IPs:
IP: 172.16.3.154
Controlled By: ReplicaSet/vault-agent-injector-6bbf84484c
Containers:
sidecar-injector:
Container ID: docker://2201b12c9bd72b6b85d855de6917548c9410e2b982fb5651a0acd8472c3554fa
Image: hashicorp/vault-k8s:0.6.0
Image ID: docker-pullable://hashicorp/vault-k8s@sha256:5697b85bc69aa07b593fb2a8a0cd38daefb5c3e4a4b98c139acffc9cfe5041c7
Port: <none>
Host Port: <none>
Args:
agent-inject
2>&1
State: Running
Started: Sat, 19 Dec 2020 16:27:15 -0800
Ready: True
Restart Count: 0
Liveness: http-get https://:8080/health/ready delay=1s timeout=5s period=2s #success=1 #failure=2
Readiness: http-get https://:8080/health/ready delay=2s timeout=5s period=2s #success=1 #failure=2
Environment:
AGENT_INJECT_LISTEN: :8080
AGENT_INJECT_LOG_LEVEL: info
AGENT_INJECT_VAULT_ADDR: https://vault.default.svc:8200
AGENT_INJECT_VAULT_AUTH_PATH: auth/kubernetes
AGENT_INJECT_VAULT_IMAGE: vault:1.5.4
AGENT_INJECT_TLS_CERT_FILE: /etc/webhook/certs/vault.crt
AGENT_INJECT_TLS_KEY_FILE: /etc/webhook/certs/vault.key
AGENT_INJECT_LOG_FORMAT: standard
AGENT_INJECT_REVOKE_ON_SHUTDOWN: false
AGENT_INJECT_TELEMETRY_PATH: /metrics
Mounts:
/etc/webhook/certs from webhook-certs (ro)
/var/run/secrets/kubernetes.io/serviceaccount from vault-agent-injector-token-k8ltm (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
webhook-certs:
Type: Secret (a volume populated by a Secret)
SecretName: vault-tls
Optional: false
vault-agent-injector-token-k8ltm:
Type: Secret (a volume populated by a Secret)
SecretName: vault-agent-injector-token-k8ltm
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 40m default-scheduler Successfully assigned default/vault-agent-injector-6bbf84484c-q8flv to ip-172-16-3-151.ec2.internal
Normal Pulled 40m kubelet, ip-172-16-3-151.ec2.internal Container image "hashicorp/vault-k8s:0.6.0" already present on machine
Normal Created 40m kubelet, ip-172-16-3-151.ec2.internal Created container sidecar-injector
Normal Started 40m kubelet, ip-172-16-3-151.ec2.internal Started container sidecar-injector
My vault deployment
kubectl describe deployment vault
Name: vault-agent-injector
Namespace: default
CreationTimestamp: Sat, 19 Dec 2020 16:27:14 -0800
Labels: app.kubernetes.io/instance=vault
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=vault-agent-injector
component=webhook
Annotations: deployment.kubernetes.io/revision: 1
Selector: app.kubernetes.io/instance=vault,app.kubernetes.io/name=vault-agent-injector,component=webhook
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app.kubernetes.io/instance=vault
app.kubernetes.io/name=vault-agent-injector
component=webhook
Service Account: vault-agent-injector
Containers:
sidecar-injector:
Image: hashicorp/vault-k8s:0.6.0
Port: <none>
Host Port: <none>
Args:
agent-inject
2>&1
Liveness: http-get https://:8080/health/ready delay=1s timeout=5s period=2s #success=1 #failure=2
Readiness: http-get https://:8080/health/ready delay=2s timeout=5s period=2s #success=1 #failure=2
Environment:
AGENT_INJECT_LISTEN: :8080
AGENT_INJECT_LOG_LEVEL: info
AGENT_INJECT_VAULT_ADDR: https://vault.default.svc:8200
AGENT_INJECT_VAULT_AUTH_PATH: auth/kubernetes
AGENT_INJECT_VAULT_IMAGE: vault:1.5.4
AGENT_INJECT_TLS_CERT_FILE: /etc/webhook/certs/vault.crt
AGENT_INJECT_TLS_KEY_FILE: /etc/webhook/certs/vault.key
AGENT_INJECT_LOG_FORMAT: standard
AGENT_INJECT_REVOKE_ON_SHUTDOWN: false
AGENT_INJECT_TELEMETRY_PATH: /metrics
Mounts:
/etc/webhook/certs from webhook-certs (ro)
Volumes:
webhook-certs:
Type: Secret (a volume populated by a Secret)
SecretName: vault-tls
Optional: false
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: vault-agent-injector-6bbf84484c (1/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 46m deployment-controller Scaled up replica set vault-agent-injector-6bbf84484c to 1
What else can I check and verify or troubleshoot in order to figure out why the agent injector is causing this error?
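A few things that may be worth verifying, sketched under the assumption that the vault.crt and vault-injector.ca files from the tutorial are still on hand (the webhook configuration name below is a placeholder, not taken from the output above):
$ openssl verify -CAfile vault-injector.ca vault.crt
$ openssl x509 -in vault.crt -noout -text | grep -A1 'Subject Alternative Name'     # typically needs to include vault-agent-injector-svc.default.svc
$ kubectl get mutatingwebhookconfigurations | grep -i vault
$ kubectl get mutatingwebhookconfiguration <name-from-above> -o jsonpath='{.webhooks[0].clientConfig.caBundle}' | base64 -d | openssl x509 -noout -subject -issuer
A "remote error: tls: bad certificate" in the injector logs generally means the API server rejected the injector's serving certificate, so the caBundle in the webhook configuration and the CA that actually signed vault.crt are the first things to compare.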
I'm currently having a problem where the readiness probe fails when deploying the Vault Helm chart. Vault is working, but whenever I describe the pods I get the error below. How do I get the probe to use HTTPS instead of HTTP? If anyone knows how to solve this, that would be great, as I'm slowly losing my mind.
Kubectl Describe pod
Name: vault-0
Namespace: default
Priority: 0
Node: ip-192-168-221-250.eu-west-2.compute.internal/192.168.221.250
Start Time: Mon, 24 Aug 2020 16:41:59 +0100
Labels: app.kubernetes.io/instance=vault
app.kubernetes.io/name=vault
component=server
controller-revision-hash=vault-768cd675b9
helm.sh/chart=vault-0.6.0
statefulset.kubernetes.io/pod-name=vault-0
Annotations: kubernetes.io/psp: eks.privileged
Status: Running
IP: 192.168.221.251
IPs:
IP: 192.168.221.251
Controlled By: StatefulSet/vault
Containers:
vault:
Container ID: docker://445d7cdc34cd01ef1d3a46f2d235cb20a94e48279db3fcdd84014d607af2fe1c
Image: vault:1.4.2
Image ID: docker-pullable://vault@sha256:12587718b79dc5aff542c410d0bcb97e7fa08a6b4a8d142c74464a9df0c76d4f
Ports: 8200/TCP, 8201/TCP, 8202/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP
Command:
/bin/sh
-ec
Args:
sed -E "s/HOST_IP/${HOST_IP?}/g" /vault/config/extraconfig-from-values.hcl > /tmp/storageconfig.hcl;
sed -Ei "s/POD_IP/${POD_IP?}/g" /tmp/storageconfig.hcl;
/usr/local/bin/docker-entrypoint.sh vault server -config=/tmp/storageconfig.hcl
State: Running
Started: Mon, 24 Aug 2020 16:42:00 +0100
Ready: False
Restart Count: 0
Readiness: exec [/bin/sh -ec vault status -tls-skip-verify] delay=5s timeout=5s period=3s #success=1 #failure=2
Environment:
HOST_IP: (v1:status.hostIP)
POD_IP: (v1:status.podIP)
VAULT_K8S_POD_NAME: vault-0 (v1:metadata.name)
VAULT_K8S_NAMESPACE: default (v1:metadata.namespace)
VAULT_ADDR: http://127.0.0.1:8200
VAULT_API_ADDR: http://$(POD_IP):8200
SKIP_CHOWN: true
SKIP_SETCAP: true
HOSTNAME: vault-0 (v1:metadata.name)
VAULT_CLUSTER_ADDR: https://$(HOSTNAME).vault-internal:8201
HOME: /home/vault
VAULT_CACERT: /vault/userconfig/vault-server-tls/vault.ca
Mounts:
/home/vault from home (rw)
/var/run/secrets/kubernetes.io/serviceaccount from vault-token-cv9vx (ro)
/vault/config from config (rw)
/vault/userconfig/vault-server-tls from userconfig-vault-server-tls (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: vault-config
Optional: false
userconfig-vault-server-tls:
Type: Secret (a volume populated by a Secret)
SecretName: vault-server-tls
Optional: false
home:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
vault-token-cv9vx:
Type: Secret (a volume populated by a Secret)
SecretName: vault-token-cv9vx
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 7s default-scheduler Successfully assigned default/vault-0 to ip-192-168-221-250.eu-west-2.compute.internal
Normal Pulled 6s kubelet, ip-192-168-221-250.eu-west-2.compute.internal Container image "vault:1.4.2" already present on machine
Normal Created 6s kubelet, ip-192-168-221-250.eu-west-2.compute.internal Created container vault
Normal Started 6s kubelet, ip-192-168-221-250.eu-west-2.compute.internal Started container vault
Warning Unhealthy 0s kubelet, ip-192-168-221-250.eu-west-2.compute.internal Readiness probe failed: Error checking seal status: Error making API request.
URL: GET http://127.0.0.1:8200/v1/sys/seal-status
Code: 400. Raw Message:
Client sent an HTTP request to an HTTPS server.
Vault Config File
# global:
# tlsDisable: false
injector:
enabled: false
server:
extraEnvironmentVars:
VAULT_CACERT: /vault/userconfig/vault-server-tls/vault.ca
extraVolumes:
- type: secret
name: vault-server-tls # Matches the ${SECRET_NAME} from above
affinity: ""
readinessProbe:
enabled: true
path: /v1/sys/health
# # livelinessProbe:
# # enabled: true
# # path: /v1/sys/health?standbyok=true
# # initialDelaySeconds: 60
ha:
enabled: true
config: |
ui = true
api_addr = "https://127.0.0.1:8200" # Unsure if this is correct
storage "dynamodb" {
ha_enabled = "true"
region = "eu-west-2"
table = "global-vault-data"
access_key = "KEY"
secret_key = "SECRET"
}
# listener "tcp" {
# address = "0.0.0.0:8200"
# tls_disable = "true"
# }
listener "tcp" {
address = "0.0.0.0:8200"
cluster_address = "0.0.0.0:8201"
tls_cert_file = "/vault/userconfig/vault-server-tls/vault.crt"
tls_key_file = "/vault/userconfig/vault-server-tls/vault.key"
tls_client_ca_file = "/vault/userconfig/vault-server-tls/vault.ca"
}
seal "awskms" {
region = "eu-west-2"
access_key = "KEY"
secret_key = "SECRET"
kms_key_id = "ID"
}
ui:
enabled: true
serviceType: LoadBalancer
In your environment variable definitions you have:
VAULT_ADDR: http://127.0.0.1:8200
And non-TLS is disabled in your Vault config (TLS is enabled):
listener "tcp" {
address = "0.0.0.0:8200"
cluster_address = "0.0.0.0:8201"
tls_cert_file = "/vault/userconfig/vault-server-tls/vault.crt"
tls_key_file = "/vault/userconfig/vault-server-tls/vault.key"
tls_client_ca_file = "/vault/userconfig/vault-server-tls/vault.ca"
}
And your readiness probe is executing this in the pod:
vault status -tls-skip-verify
So the probe is trying to connect to http://127.0.0.1:8200. You can try changing the environment variable to use HTTPS: VAULT_ADDR=https://127.0.0.1:8200
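For example (a sketch against the values shown in the question, not a verified chart configuration), the override could go through server.extraEnvironmentVars, which these values already use for VAULT_CACERT:
server:
  extraEnvironmentVars:
    VAULT_ADDR: https://127.0.0.1:8200
    VAULT_CACERT: /vault/userconfig/vault-server-tls/vault.ca
Whether this cleanly overrides the VAULT_ADDR the chart injects on its own depends on the chart version; as noted further down, the tlsDisable value is the supported way to flip the scheme.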
You may have another (different) issue with your configs and env variable not matching:
K8s manifest:
VAULT_API_ADDR: http://$(POD_IP):8200
Vault configs:
api_addr = "https://127.0.0.1:8200"
✌️
If you are on a Mac, add the Vault URL to your .zshrc or .bash_profile file.
In the terminal, open either .zshrc or .bash_profile by doing this:
$ open .zshrc
Copy and paste this into it: export VAULT_ADDR='http://127.0.0.1:8200'
Save the file, then apply it from the terminal:
$ source .zshrc
You can also set tlsDisable to false in the global settings, like this:
global:
tlsDisable: false
As the documentation for the helm chart says here:
The http/https scheme is controlled by the tlsDisable value.
I am deploying Traefik v2.1.6 using this YAML:
apiVersion: v1
kind: Service
metadata:
name: traefik
annotations:
prometheus.io/scrape: 'true'
prometheus.io/port: '8080'
spec:
ports:
- name: web
port: 80
- name: websecure
port: 443
- name: metrics
port: 8080
selector:
app: traefik
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: traefik-ingress-controller
labels:
app: traefik
spec:
selector:
matchLabels:
app: traefik
template:
metadata:
name: traefik
labels:
app: traefik
spec:
serviceAccountName: traefik-ingress-controller
terminationGracePeriodSeconds: 1
containers:
- image: traefik:2.1.6
name: traefik-ingress-lb
ports:
- name: web
containerPort: 80
hostPort: 80 # hostPort mode: expose the port on the cluster node
- name: websecure
containerPort: 443
hostPort: 443 # hostPort mode: expose the port on the cluster node
- name: metrics
containerPort: 8080
resources:
limits:
cpu: 2000m
memory: 1024Mi
requests:
cpu: 1000m
memory: 1024Mi
securityContext:
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
envFrom:
- secretRef:
name: traefik-alidns-secret
args:
- --configfile=/config/traefik.yaml
- --logLevel=INFO
- --metrics=true
- --metrics.prometheus=true
- --entryPoints.metrics.address=:8080
- --metrics.prometheus.entryPoint=metrics
- --metrics.prometheus.addServicesLabels=true
- --metrics.prometheus.addEntryPointsLabels=true
- --metrics.prometheus.buckets=0.100000, 0.300000, 1.200000, 5.000000
# HTTPS certificate configuration
- --entryPoints.web.address=:80
- --entryPoints.websecure.address=:443
# email configuration
- --certificatesResolvers.default.acme.email=jiangtingqiang@gmail.com
# where to store the ACME certificates
- --certificatesResolvers.default.acme.storage=/config/acme.json
- --certificatesResolvers.default.acme.httpChallenge.entryPoint=web
# the CA server below is for testing; if the HTTPS certificate is issued successfully, remove the parameter below
- --certificatesResolvers.default.acme.dnsChallenge.provider=alidns
- --certificatesResolvers.default.acme.dnsChallenge=true
- --certificatesresolvers.default.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory
volumeMounts:
- mountPath: "/config"
name: "config"
volumes:
- name: config
configMap:
name: traefik-config
tolerations: # tolerate all taints, so tainted nodes do not block the pod
- operator: "Exists"
nodeSelector: # node selector: only run on nodes with the specific label
app-type: "online-app"
The service starts successfully:
$ k get daemonset -n kube-system
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
traefik-ingress-controller 1 1 1 1 1 app-type=online-app 61d
But when I access Traefik using this command, it shows 404 not found:
[root@fat001 ~]# curl -k --header 'Host:traefik.example.com' https://172.19.104.230
404 page not found
172.19.104.230 is the edge node of the Kubernetes cluster (v1.15.2) that runs Traefik. What should I do to access Traefik successfully? This is the pod describe output:
$ k describe pod traefik-ingress-controller-t4rmx -n kube-system
Name: traefik-ingress-controller-t4rmx
Namespace: kube-system
Priority: 0
Node: azshara-k8s02/172.19.104.230
Start Time: Tue, 31 Mar 2020 00:14:38 +0800
Labels: app=traefik
controller-revision-hash=547587d6d5
pod-template-generation=44
Annotations: <none>
Status: Running
IP: 172.30.208.18
IPs: <none>
Controlled By: DaemonSet/traefik-ingress-controller
Containers:
traefik-ingress-lb:
Container ID: docker://88b74826c5e380e00a53d2d4741ab6b74d8628412275f062dda861ad26681971
Image: traefik:2.1.6
Image ID: docker-pullable://traefik@sha256:13c5e62a0757bd8bf57c8c36575f7686f06186994ad6d2bda773ed8f140415c2
Ports: 80/TCP, 443/TCP, 8080/TCP
Host Ports: 80/TCP, 443/TCP, 0/TCP
Args:
--configfile=/config/traefik.yaml
--logLevel=INFO
--metrics=true
--metrics.prometheus=true
--entryPoints.metrics.address=:8080
--metrics.prometheus.entryPoint=metrics
--metrics.prometheus.addServicesLabels=true
--metrics.prometheus.addEntryPointsLabels=true
--metrics.prometheus.buckets=0.100000, 0.300000, 1.200000, 5.000000
--entryPoints.web.address=:80
--entryPoints.websecure.address=:443
--certificatesResolvers.default.acme.email=jiangtingqiang@gmail.com
--certificatesResolvers.default.acme.storage=/config/acme.json
--certificatesResolvers.default.acme.httpChallenge.entryPoint=web
--certificatesResolvers.default.acme.dnsChallenge.provider=alidns
--certificatesResolvers.default.acme.dnsChallenge=true
--certificatesresolvers.default.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory
State: Running
Started: Tue, 31 Mar 2020 00:14:39 +0800
Ready: True
Restart Count: 0
Limits:
cpu: 2
memory: 1Gi
Requests:
cpu: 1
memory: 1Gi
Environment Variables from:
traefik-alidns-secret Secret Optional: false
Environment: <none>
Mounts:
/config from config (rw)
/var/run/secrets/kubernetes.io/serviceaccount from traefik-ingress-controller-token-92vsc (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: traefik-config
Optional: false
traefik-ingress-controller-token-92vsc:
Type: Secret (a volume populated by a Secret)
SecretName: traefik-ingress-controller-token-92vsc
Optional: false
QoS Class: Burstable
Node-Selectors: app-type=online-app
Tolerations:
node.kubernetes.io/disk-pressure:NoSchedule
node.kubernetes.io/memory-pressure:NoSchedule
node.kubernetes.io/not-ready:NoExecute
node.kubernetes.io/pid-pressure:NoSchedule
node.kubernetes.io/unreachable:NoExecute
node.kubernetes.io/unschedulable:NoSchedule
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 102m default-scheduler Successfully assigned kube-system/traefik-ingress-controller-t4rmx to azshara-k8s02
Normal Pulled 102m kubelet, azshara-k8s02 Container image "traefik:2.1.6" already present on machine
Normal Created 102m kubelet, azshara-k8s02 Created container traefik-ingress-lb
Normal Started 102m kubelet, azshara-k8s02 Started container traefik-ingress-lb
And this is my Traefik route config:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: traefik-dashboard-route
namespace: kube-system
spec:
entryPoints:
- websecure
tls:
certResolver: default
routes:
- match: Host(`traefik.example.com`) && PathPrefix(`/default`)
kind: Rule
services:
- name: traefik
port: 8080
curl from a Kubernetes container works fine, like this:
/ # curl -L traefik.kube-system.svc.cluster.local:8080
<!DOCTYPE html><html><head><title>Traefik</title><meta charset=utf-8><meta name=description content="Traefik UI"><meta name=format-detection content="telephone=no"><meta name=msapplication-tap-highlight content=no><meta name=viewport content="user-scalable=no,initial-scale=1,maximum-scale=1,minimum-scale=1,width=device-width"><link rel=icon type=image/png href=statics/app-logo-128x128.png><link rel=icon type=image/png sizes=16x16 href=statics/icons/favicon-16x16.png><link rel=icon type=image/png sizes=32x32 href=statics/icons/favicon-32x32.png><link rel=icon type=image/png sizes=96x96 href=statics/icons/favicon-96x96.png><link rel=icon type=image/ico href=statics/icons/favicon.ico><link href=css/019be8e4.d05f1162.css rel=prefetch><link href=css/099399dd.9310dd1b.css rel=prefetch><link href=css/0af0fca4.e3d6530d.css rel=prefetch><link href=css/162d302c.9310dd1b.css rel=prefetch><link href=css/29ead7f5.9310dd1b.css rel=prefetch><link href=css/31ad66a3.9310dd1b.css rel=prefetch><link href=css/524389aa.619bfb84.css rel=prefetch><link href=css/61674343.9310dd1b.css rel=prefetch><link href=css/63c47f2b.294d1efb.css rel=prefetch><link href=css/691c1182.ed0ee510.css rel=prefetch><link href=css/7ba452e3.37efe53c.css rel=prefetch><link href=css/87fca1b4.8c8c2eec.css rel=prefetch><link href=js/019be8e4.d8726e8b.js rel=prefetch><link href=js/099399dd.a047d401.js rel=prefetch><link href=js/0af0fca4.271bd48d.js rel=prefetch><link href=js/162d302c.ce1f9159.js rel=prefetch><link href=js/29ead7f5.cd022784.js rel=prefetch><link href=js/2d21e8fd.f3d2bb6c.js rel=prefetch><link href=js/31ad66a3.12ab3f06.js rel=prefetch><link href=js/524389aa.21dfc9ee.js rel=prefetch><link href=js/61674343.adb358dd.js rel=prefetch><link href=js/63c47f2b.caf9b4a2.js rel=prefetch><link href=js/691c1182.5d4aa4c9.js rel=prefetch><link href=js/7ba452e3.71a69a60.js rel=prefetch><link href=js/87fca1b4.ac9c2dc6.js rel=prefetch><link href=css/app.e4fba3f1.css rel=preload as=style><link href=js/app.841031a8.js rel=preload as=script><link href=js/vendor.49a1849c.js rel=preload as=script><link href=css/app.e4fba3f1.css rel=stylesheet><link rel=manifest href=manifest.json><meta name=theme-color content=#027be3><meta name=apple-mobile-web-app-capable content=yes><meta name=apple-mobile-web-app-status-bar-style content=default><meta name=apple-mobile-web-app-title content=Traefik><link rel=apple-touch-icon href=statics/icons/apple-icon-120x120.png><link rel=apple-touch-icon sizes=180x180 href=statics/icons/apple-icon-180x180.png><link rel=apple-touch-icon sizes=152x152 href=statics/icons/apple-icon-152x152.png><link rel=apple-touch-icon sizes=167x167 href=statics/icons/apple-icon-167x167.png><link rel=mask-icon href=statics/icons/safari-pinned-tab.svg color=#027be3><meta name=msapplication-TileImage content=statics/icons/ms-icon-144x144.png><meta name=msapplication-TileColor content=#000000></head><body><div id=q-app></div><script type=text/javascript src=js/app.841031a8.js></script><script type=text/javascript src=js/vendor.49a1849c.js></script></body></html>/ #
curl from the host fails:
[root@fat001 ~]# curl -k --header 'Host:traefik.example.com' https://172.19.104.230
404 page not found
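One thing that stands out from the IngressRoute above: the rule only matches Host(`traefik.example.com`) && PathPrefix(`/default`), so a request for / on the websecure entrypoint has no matching router and Traefik answers with its built-in 404. A quick check, assuming nothing else is routed on websecure, is to hit the path the route actually declares with the same host header:
[root@fat001 ~]# curl -k --header 'Host:traefik.example.com' https://172.19.104.230/default/
If this returns something other than the plain 404, the router is matching and the remaining work is on the service side (for example, stripping the /default prefix before the request reaches the dashboard).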
env: vagrant + virtualbox
kubernetes: 1.14
docker 18.06.3~ce~3-0~debian
os: debian stretch
I have priority classes:
root@k8s-master:/# kubectl get priorityclass
NAME VALUE GLOBAL-DEFAULT AGE
cluster-health-priority 1000000000 false 33m < -- created by me
default-priority 100 true 33m < -- created by me
system-cluster-critical 2000000000 false 33m < -- system
system-node-critical 2000001000 false 33m < -- system
default-priority - has been set as globalDefault
root@k8s-master:/# kubectl get priorityclass default-priority -o yaml
apiVersion: scheduling.k8s.io/v1
description: Used for all Pods without priorityClassName
globalDefault: true <------------------
kind: PriorityClass
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"scheduling.k8s.io/v1","description":"Used for all Pods without priorityClassName","globalDefault":true,"kind":"PriorityClass","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile"},"name":"default-priority"},"value":100}
creationTimestamp: "2019-07-15T16:48:23Z"
generation: 1
labels:
addonmanager.kubernetes.io/mode: Reconcile
name: default-priority
resourceVersion: "304"
selfLink: /apis/scheduling.k8s.io/v1/priorityclasses/default-priority
uid: 5bea6f73-a720-11e9-8343-0800278dc04d
value: 100
I have some pods which were created after the priority classes were created.
This
kube-state-metrics-874ccb958-b5spd 1/1 Running 0 9m18s 10.20.59.67 k8s-master <none> <none>
And this
tmp-shell-one-59fb949cb5-b8khc 1/1 Running 1 47s 10.20.59.73 k8s-master <none> <none>
The kube-state-metrics pod is using the priorityClass cluster-health-priority:
root@k8s-master:/etc/kubernetes/addons# kubectl -n kube-system get pod kube-state-metrics-874ccb958-b5spd -o yaml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: "2019-07-15T16:48:24Z"
generateName: kube-state-metrics-874ccb958-
labels:
k8s-app: kube-state-metrics
pod-template-hash: 874ccb958
name: kube-state-metrics-874ccb958-b5spd
namespace: kube-system
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: ReplicaSet
name: kube-state-metrics-874ccb958
uid: 5c64bf85-a720-11e9-8343-0800278dc04d
resourceVersion: "548"
selfLink: /api/v1/namespaces/kube-system/pods/kube-state-metrics-874ccb958-b5spd
uid: 5c88143e-a720-11e9-8343-0800278dc04d
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kube-role
operator: In
values:
- master
containers:
- image: gcr.io/google_containers/kube-state-metrics:v1.6.0
imagePullPolicy: Always
name: kube-state-metrics
ports:
- containerPort: 8080
name: http-metrics
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-state-metrics-token-jvz5b
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
nodeName: k8s-master
nodeSelector:
namespaces/default: "true"
priorityClassName: cluster-health-priority <------------------------
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: kube-state-metrics
serviceAccountName: kube-state-metrics
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoSchedule
key: dedicated
operator: Equal
value: master
- key: CriticalAddonsOnly
operator: Exists
volumes:
- name: kube-state-metrics-token-jvz5b
secret:
defaultMode: 420
secretName: kube-state-metrics-token-jvz5b
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2019-07-15T16:48:24Z"
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: "2019-07-15T16:48:58Z"
status: "True"
type: Ready
- lastProbeTime: null
lastTransitionTime: "2019-07-15T16:48:58Z"
status: "True"
type: ContainersReady
- lastProbeTime: null
lastTransitionTime: "2019-07-15T16:48:24Z"
status: "True"
type: PodScheduled
containerStatuses:
- containerID: docker://a736dce98492b7d746079728b683a2c62f6adb1068075ccc521c5e57ba1e02d1
image: gcr.io/google_containers/kube-state-metrics:v1.6.0
imageID: docker-pullable://gcr.io/google_containers/kube-state-metrics@sha256:c98991f50115fe6188d7b4213690628f0149cf160ac47daf9f21366d7cc62740
lastState: {}
name: kube-state-metrics
ready: true
restartCount: 0
state:
running:
startedAt: "2019-07-15T16:48:46Z"
hostIP: 10.0.2.15
phase: Running
podIP: 10.20.59.67
qosClass: BestEffort
startTime: "2019-07-15T16:48:24Z"
The tmp-shell pod has nothing about priority classes in it at all:
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: "2019-07-15T16:56:49Z"
generateName: tmp-shell-one-59fb949cb5-
labels:
pod-template-hash: 59fb949cb5
run: tmp-shell-one
name: tmp-shell-one-59fb949cb5-b8khc
namespace: monitoring
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: ReplicaSet
name: tmp-shell-one-59fb949cb5
uid: 89c3caa3-a721-11e9-8343-0800278dc04d
resourceVersion: "1350"
selfLink: /api/v1/namespaces/monitoring/pods/tmp-shell-one-59fb949cb5-b8khc
uid: 89c71bad-a721-11e9-8343-0800278dc04d
spec:
containers:
- args:
- /bin/bash
image: nicolaka/netshoot
imagePullPolicy: Always
name: tmp-shell-one
resources: {}
stdin: true
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
tty: true
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-g9lnc
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
nodeName: k8s-master
nodeSelector:
namespaces/default: "true"
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 30
volumes:
- name: default-token-g9lnc
secret:
defaultMode: 420
secretName: default-token-g9lnc
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2019-07-15T16:56:49Z"
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: "2019-07-15T16:57:20Z"
status: "True"
type: Ready
- lastProbeTime: null
lastTransitionTime: "2019-07-15T16:57:20Z"
status: "True"
type: ContainersReady
- lastProbeTime: null
lastTransitionTime: "2019-07-15T16:56:49Z"
status: "True"
type: PodScheduled
containerStatuses:
- containerID: docker://545d4d029b440ebb694386abb09e0377164c87d1170ac79704f39d3167748bf5
image: nicolaka/netshoot:latest
imageID: docker-pullable://nicolaka/netshoot@sha256:b3e662a8730ee51c6b877b6043c5b2fa61862e15d535e9f90cf667267407753f
lastState:
terminated:
containerID: docker://dfdfd0d991151e94411029f2d5a1a81d67b5b55d43dcda017aec28320bafc7d3
exitCode: 130
finishedAt: "2019-07-15T16:57:17Z"
reason: Error
startedAt: "2019-07-15T16:57:03Z"
name: tmp-shell-one
ready: true
restartCount: 1
state:
running:
startedAt: "2019-07-15T16:57:19Z"
hostIP: 10.0.2.15
phase: Running
podIP: 10.20.59.73
qosClass: BestEffort
startTime: "2019-07-15T16:56:49Z"
According to the docs:
The globalDefault field indicates that the value of this PriorityClass
should be used for Pods without a priorityClassName
and
Pod priority is specified by setting the priorityClassName field of
podSpec. The integer value of priority is then resolved and populated
to the priority field of podSpec
So, the questions are:
Why is the tmp-shell pod not using the priorityClass default-priority, even though it was created after the priority class with globalDefault set to true?
Why does the kube-state-metrics pod not have a priority field in its podSpec populated with the value from the priority class cluster-health-priority? (Look at the .yaml above.)
What am I doing wrong?
The only way I can reproduce this is by disabling the Priority admission controller, by adding the argument --disable-admission-plugins=Priority to the kube-apiserver definition under /etc/kubernetes/manifests/kube-apiserver.yaml on the host running the API server.
According to the documentation, in v1.14 this is enabled by default. Please make sure that it is enabled in your cluster as well.
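As a quick way to confirm this on a kubeadm-style setup (a sketch, assuming the static-pod manifest path mentioned above), the admission-plugin flags can be inspected directly on the master:
root@k8s-master:/# grep -E 'enable-admission-plugins|disable-admission-plugins' /etc/kubernetes/manifests/kube-apiserver.yaml
Priority must not appear in --disable-admission-plugins; since it is enabled by default, it does not need to be listed explicitly in --enable-admission-plugins.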