I have the following config:
cat kubeadm-conf.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
apiServerExtraArgs:
  enable-admission-plugins: NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook
networking:
  podSubnet: 192.168.0.0/16
but when I run
ps -aux | grep admission
root 20697 7.4 2.8 446916 336660 ? Ssl 03:49 0:21 kube-apiserver --authorization-mode=Node,RBAC --advertise-address=10.0.2.15 --allow-privileged=true --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction
I only see the NodeRestriction
Please, let me know if anyone can help me make sure that the admission-webhook is indeed running on my cluster.
I assume that MutatingAdmissionWebhook and ValidatingAdmissionWebhook have not been properly propagated to the API server, judging by the output you provided.
I suggest proceeding with the following steps to achieve your goal:
Check and edit the /etc/kubernetes/manifests/kube-apiserver.yaml manifest file, adding the required admission control plugins to the enable-admission-plugins API server flag:
--enable-admission-plugins=NodeRestriction,DefaultStorageClass,MutatingAdmissionWebhook,ValidatingAdmissionWebhook
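After the edit, the relevant fragment of the static Pod manifest should look roughly like this (a sketch only; the surrounding flags are taken from the ps output above and will differ per cluster):
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=10.0.2.15
    - --allow-privileged=true
    - --enable-admission-plugins=NodeRestriction,DefaultStorageClass,MutatingAdmissionWebhook,ValidatingAdmissionWebhook
    ...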
Delete the current kube-apiserver Pod and wait until Kubernetes respawns a new one that reflects the changes:
kubectl delete pod <kube-apiserver-Pod> -n kube-system
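Once the Pod is back, you can confirm the flag was picked up (a sketch; it relies on the component=kube-apiserver label that kubeadm puts on the static Pod):
kubectl -n kube-system get pod -l component=kube-apiserver -o yaml | grep enable-admission-plugins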
Hope this helps; I've successfully verified these steps in my environment.
You can find more information about Kubernetes Admission Controllers in the official documentation.
Thanks for the reply; that works too. Posting the kubeadm answer in case anyone needs it. The following is the correct kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
networking:
  podSubnet: 192.168.0.0/16
apiServer:
  extraArgs:
    enable-admission-plugins: "NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook"
I got an alert while configuring the monitoring module using prometheus/kube-prometheus-stack 25.1.0.
Alert
[FIRING:1] KubeProxyDown - critical
Alert: Target disappeared from Prometheus target discovery. - critical
Description: KubeProxy has disappeared from Prometheus target discovery.
Details:
• alertname: KubeProxyDown
• prometheus: monitoring/prometheus-kube-prometheus-prometheus
• severity: critical
I think it is a new default rule in kube-prometheus-stack 25.x.x. It does not exist in prometheus/kube-prometheus-stack 21.x.x.
The same issue happened on both EKS and minikube.
KubeProxyDown Rule
alert: KubeProxyDown
expr: absent(up{job="kube-proxy"} == 1)
for: 15m
labels:
  severity: critical
annotations:
  description: KubeProxy has disappeared from Prometheus target discovery.
  runbook_url: https://runbooks.prometheus-operator.dev/runbooks/kubernetes/kubeproxydown
  summary: Target disappeared from Prometheus target discovery.
How can I resolve this issue?
I would be thankful if anyone could help.
This is what worked for me in AWS EKS cluster v1.21:
$ kubectl edit cm/kube-proxy-config -n kube-system
---
metricsBindAddress: 127.0.0.1:10249 ### <--- change to 0.0.0.0:10249
$ kubectl delete pod -l k8s-app=kube-proxy -n kube-system
Note that the name of the ConfigMap here is kube-proxy-config, not kube-proxy.
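If you are unsure which ConfigMap your cluster uses, list them first before editing (a quick sketch):
kubectl get configmap -n kube-system | grep kube-proxy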
There was a change in metrics-bind-address in kube-proxy; see the issues posted here, here, and here. I can suggest the following: change the kube-proxy ConfigMap to a different value:
$ kubectl edit cm/kube-proxy -n kube-system
## Change from
metricsBindAddress: 127.0.0.1:10249 ### <--- Too secure
## Change to
metricsBindAddress: 0.0.0.0:10249
$ kubectl delete pod -l k8s-app=kube-proxy -n kube-system
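Once the kube-proxy Pods have restarted, the metrics endpoint should answer on the node address rather than only on loopback, and the Prometheus target should reappear shortly after (a sketch; <node-ip> is a placeholder):
curl -s http://<node-ip>:10249/metrics | head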
Both previous answers are correct, but if you are upgrading to EKS 1.22, you can simply upgrade the kube-proxy add-on to v1.22.11-eksbuild.2 (the current one), and the ConfigMap will be updated automatically from
metricsBindAddress: 127.0.0.1:10249
to
metricsBindAddress: 0.0.0.0:10249
without the need to change it manually.
See the AWS kube-proxy add-on documentation at
https://docs.aws.amazon.com/eks/latest/userguide/managing-kube-proxy.html
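If you manage the add-on through the EKS API, the upgrade itself can be done with the AWS CLI (a sketch; <cluster-name> is a placeholder, and --resolve-conflicts OVERWRITE lets the managed add-on replace locally edited settings):
aws eks update-addon --cluster-name <cluster-name> --addon-name kube-proxy --addon-version v1.22.11-eksbuild.2 --resolve-conflicts OVERWRITE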
I'm trying to set which CRI-O socket kubeadm should use.
To achieve this I should use the flag --cri-socket /var/run/crio/crio.sock.
The current command is in the form kubeadm init phase <phase_name>, and I must add the --cri-socket flag to it.
I edited the command this way: kubeadm init --cri-socket /var/run/crio/crio.sock phase <phase_name>.
Unfortunately I am getting the error Error: unknown flag: --cri-socket.
=> It seems that the argument phase <phase_name> and the flag --cri-socket /var/run/crio/crio.sock are not compatible.
How do I fix that?
Thanks!
################## Update 1 ######################
File: /etc/kubernetes/kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.10.3.15
  bindPort: 6443
certificateKey: 9063a1ccc9c5e926e02f245c06b8xxxxxxxxxxx
nodeRegistration:
  name: p3kubemaster1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
  criSocket: /var/run/crio/crio.sock
I see two things that may help:
Check whether /var/lib/kubelet/kubeadm-flags.env is properly configured.
In addition to the flags used when starting the kubelet, the file also
contains dynamic parameters such as the cgroup driver and whether to
use a different CRI runtime socket (--cri-socket).
More details can be found here.
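For reference, on a node configured for CRI-O that file usually contains something along these lines (an illustrative sketch only; the exact flags depend on your kubeadm/kubelet version):
KUBELET_KUBEADM_ARGS="--container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --cgroup-driver=systemd"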
Check your init config file (the --config flag of kubeadm init takes the path to the configuration file) and try to add something like this:
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
nodeRegistration:
  criSocket: "unix:///var/run/crio/crio.sock"
Please let me know if that helped.
I need to provide a specific node name to my master node in Kubernetes. I am using kubeadm to set up my cluster, and I know there is a --node-name master option which you can pass to kubeadm init, and it works fine.
Now, the issue is that I am using a config file to initialise the cluster, and I have tried various ways to provide that node name, but the cluster is not picking it up. My kubeadm init config file is:
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: 10.0.1.149
  controlPlaneEndpoint: 10.0.1.149
etcd:
  endpoints:
  - http://10.0.1.149:2379
  caFile: /etc/kubernetes/pki/etcd/ca.pem
  certFile: /etc/kubernetes/pki/etcd/client.pem
  keyFile: /etc/kubernetes/pki/etcd/client-key.pem
networking:
  podSubnet: 192.168.13.0/24
kubernetesVersion: 1.10.3
apiServerCertSANs:
- 10.0.1.149
apiServerExtraArgs:
  endpoint-reconciler-type: lease
nodeRegistration:
  name: master
Now I run kubeadm init --config=config.yaml and it times out with the following error:
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node ip-x-x-x-x.ec2.internal as master by adding a label and a taint
timed out waiting for the condition
PS: This issue also occurs when you don't provide --hostname-override to the kubelet along with --node-name to kubeadm init; I am providing both. Also, I don't face any issues when I skip the config.yaml file and pass the --node-name option to kubeadm init on the command line.
I want to know how can we provide --node-name option in config.yaml file. Any pointers are appreciated.
I was able to resolve this issue using the following config file. Just posting an update in case anyone encounters the same issue:
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: 10.0.1.149
  controlPlaneEndpoint: 10.0.1.149
etcd:
  endpoints:
  - http://10.0.1.149:2379
  caFile: /etc/kubernetes/pki/etcd/ca.pem
  certFile: /etc/kubernetes/pki/etcd/client.pem
  keyFile: /etc/kubernetes/pki/etcd/client-key.pem
networking:
  podSubnet: 192.168.13.0/24
kubernetesVersion: 1.10.3
apiServerCertSANs:
- 10.0.1.149
apiServerExtraArgs:
  endpoint-reconciler-type: lease
nodeName: master
This is the way you can specify --node-name in config.yaml.
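After running kubeadm init --config=config.yaml with this file, the control-plane node should register under the chosen name, which you can confirm with (a quick sketch):
kubectl get nodes -o wide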
With reference to https://github.com/kubernetes/kubeadm/issues/1239: how do I configure and start the latest kubeadm successfully?
kubeadm_new.config was generated by config migration:
kubeadm config migrate --old-config kubeadm_default.config --new-config kubeadm_new.config
Content of kubeadm_new.config:
apiEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
apiVersion: kubeadm.k8s.io/v1alpha3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: khteh-t580
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1alpha3
auditPolicy:
  logDir: /var/log/kubernetes/audit
  logMaxAge: 2
  path: ""
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: ""
etcd:
  local:
    dataDir: /var/lib/etcd
    image: ""
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.12.2
networking:
  dnsDomain: cluster.local
  podSubnet: ""
  serviceSubnet: 10.96.0.0/12
unifiedControlPlaneImage: ""
I changed "kubernetesVersion: v1.12.2" in kubeadm_new.config and it seems to progress further and now stuck at the following error:
failed to run Kubelet: Running with swap on is not supported, please disable swap! or set --fail-swap-on flag to false.
How do I set fail-swap-on to FALSE to get it going?
Kubeadm comes with a command that prints the default configuration, so you can check each of the assigned default values with:
kubeadm config print-default
In your case, if you want to disable the swap check in the kubelet, you have to add the following lines to your current kubeadm config:
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false
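The extra document goes into the same kubeadm_new.config file after a --- separator and is picked up by the same init call; you will likely also need to skip the preflight swap check (a sketch; the Swap preflight error name is my assumption):
kubeadm init --config kubeadm_new.config --ignore-preflight-errors=Swap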
You haven't mentioned why you want to run with swap enabled.
I wouldn't consider it as a first option: not because memory swap is a bad practice (it is a useful and basic kernel mechanism), but because the kubelet is not designed to work properly with swap enabled.
K8S is very clear about this topic, as you can see in the kubeadm installation docs:
Swap disabled. You MUST disable swap in order for the kubelet to work
properly.
I would recommend reading about Evicting end-user Pods and the relevant features that K8S provides to prioritize the memory of Pods:
1) The 3 QoS classes: make sure that your high-priority workloads are running with the Guaranteed (or at least Burstable) class (see the sketch after this list).
2) Pod Priority and Preemption.
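For illustration, a minimal sketch of a Guaranteed-class Pod (hypothetical name and placeholder image; the class is granted because limits equal requests):
apiVersion: v1
kind: Pod
metadata:
  name: guaranteed-example   # hypothetical name
spec:
  containers:
  - name: app
    image: nginx             # placeholder image
    resources:
      requests:
        cpu: "250m"
        memory: "256Mi"
      limits:
        cpu: "250m"          # limits == requests => Guaranteed QoS
        memory: "256Mi"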
New cluster (1.8.10) spun up with kops.
In K8S 1.8 there is a new feature, Pod Priority and Preemption.
More information: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#how-to-use-priority-and-preemption
kube-apiserver is logging errors
I0321 16:27:50.922589 7 wrap.go:42] GET /apis/admissionregistration.k8s.io/v1alpha1/initializerconfigurations: (140.067µs) 404 [[kube-apiserver/v1.8.10 (linux/amd64) kubernetes/044cd26] 127.0.0.1:47500]
I0321 16:27:51.257756 7 wrap.go:42] GET /apis/scheduling.k8s.io/v1alpha1/priorityclasses?resourceVersion=0: (168.391µs) 404 [[kube-apiserver/v1.8.10 (linux/amd64) kubernetes/044cd26] 127.0.0.1:47500]
E0321 16:27:51.258176 7 reflector.go:205] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:73: Failed to list *scheduling.PriorityClass: the server could not find the requested resource (get priorityclasses.scheduling.k8s.io)
I don't quite understand why. Nothing should be accessing it, as it's not even enabled yet (it's alpha).
No pod is using priorityClassName.
Running explain:
kubectl explain priorityclass
error: API version: scheduling.k8s.io/v1alpha1 is not supported by the server. Use one of: [apiregistration.k8s.io/v1beta1 extensions/v1beta1 apps/v1beta1 apps/v1beta2 authentication.k8s.io/v1 authentication.k8s.io/v1beta1 authorization.k8s.io/v1 authorization.k8s.io/v1beta1 autoscaling/v1 autoscaling/v2beta1 batch/v1 batch/v1beta1 certificates.k8s.io/v1beta1 networking.k8s.io/v1 policy/v1beta1 rbac.authorization.k8s.io/v1 rbac.authorization.k8s.io/v1beta1 storage.k8s.io/v1 storage.k8s.io/v1beta1 apiextensions.k8s.io/v1beta1 v1]
Is this normal or kops specific?
I think it is related to that Kops option in its config (kops get --name $NAME -oyaml):
kubeAPIServer:
  runtimeConfig:
    admissionregistration.k8s.io/v1alpha1: "true"
Anyway, all components work through the API server, so it is not surprising that, depending on configuration, it sometimes tries to call a disabled feature; at the very least it has to check which APIs are supported.
So I don't think you need to worry about it; that is just a configuration-related message. Alternatively, enable that feature and the warning messages will go away.
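If you do decide to enable it instead, a minimal sketch of the kops cluster spec change (assumptions on my part: kops edit cluster followed by a rolling update, with the runtime-config key matching the API group from the logs):
kubeAPIServer:
  runtimeConfig:
    scheduling.k8s.io/v1alpha1: "true"
  featureGates:
    PodPriority: "true"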