I'm trying to set which CRI-O socket kubeadm should use.
To achieve this I should use the flag --cri-socket /var/run/crio/crio.sock.
The current command is in the form kubeadm init phase <phase_name>, and I must add the --cri-socket flag to it.
I edited the command this way: kubeadm init --cri-socket /var/run/crio/crio.sock phase <phase_name>.
Unfortunately I am getting the error Error: unknown flag: --cri-socket.
=> It seems that the argument phase <phase_name> and the flag --cri-socket /var/run/crio/crio.sock are not compatible.
How do I fix that?
Thanks
Update 1
File: /etc/kubernetes/kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.10.3.15
  bindPort: 6443
certificateKey: 9063a1ccc9c5e926e02f245c06b8xxxxxxxxxxx
nodeRegistration:
  name: p3kubemaster1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
  criSocket: /var/run/crio/crio.sock
I see two things that may help:
1. Check whether /var/lib/kubelet/kubeadm-flags.env is properly configured.
In addition to the flags used when starting the kubelet, the file also contains dynamic parameters such as the cgroup driver and whether to use a different CRI runtime socket (--cri-socket).
More details can be found in the kubeadm documentation.
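For reference, on a node using CRI-O that file typically ends up containing something like the line below (the exact flags vary by kubeadm and kubelet version; this is only an illustrative sketch, not your actual file):
# illustrative example only - this file is normally generated by kubeadm init / kubeadm join
KUBELET_KUBEADM_ARGS="--container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --cgroup-driver=systemd"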
2. Check your init config file (the --config flag of kubeadm init takes the path to the configuration file) and try to add something like this:
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
nodeRegistration:
  criSocket: "unix:///var/run/crio/crio.sock"
Please let me know if that helped.
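Regarding the original error: the set of accepted flags depends on the exact phase subcommand, and not every kubeadm init phase accepts --cri-socket, so passing the socket through the config file is usually the more reliable route. A sketch, assuming the phase you run supports --config (most phases do):
kubeadm init phase <phase_name> --config /etc/kubernetes/kubeadm-config.yaml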
Related
I'm trying to figure out how to configure the Kubernetes scheduler using a custom config, but I'm having a bit of trouble understanding exactly how the scheduler is accessed.
The scheduler runs as a pod under the kube-system namespace called kube-scheduler-it-k8s-master. The documentation says that you can configure the scheduler by creating a config file and calling kube-scheduler --config <filename>. However, I am not able to access the scheduler container directly, as running kubectl exec -it kube-scheduler-it-k8s-master -- /bin/bash returns:
OCI runtime exec failed: exec failed: container_linux.go:370: starting container process caused: exec: "/bin/bash": stat /bin/bash: no such file or directory: unknown
command terminated with exit code 126
I tried modifying /etc/kubernetes/manifests/kube-scheduler.yaml to mount my custom config file within the pod and explicitly call kube-scheduler with the --config option set, but it seems that my changes get reverted and the scheduler runs using the default settings.
I feel like I'm misunderstanding something fundamentally about the kubernetes scheduler. Am I supposed to pass in the custom scheduler config from within the scheduler pod itself? Or is this supposed to be done remotely somehow?
Thanks!
Since your actual ("X") problem is how to modify the scheduler configuration, you can try the following approaches.
Using kubeadm
If you are using kubeadm to bootstrap the cluster, you can use the --config flag while running kubeadm init to pass a custom configuration object of type ClusterConfiguration, which lets you pass extra arguments to the control plane components.
Example config for scheduler:
$ cat sched.conf
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.16.0
scheduler:
  extraArgs:
    address: 0.0.0.0
    config: /home/johndoe/schedconfig.yaml
    kubeconfig: /home/johndoe/kubeconfig.yaml
$ kubeadm init --config sched.conf
You could also try kubeadm upgrade apply --config sched.conf <k8s version> to apply the updated config on a live cluster.
Reference: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/control-plane-flags/
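The sched.conf above only points the scheduler at /home/johndoe/schedconfig.yaml; what goes into that file is a KubeSchedulerConfiguration object. A minimal sketch, assuming the v1alpha1 component-config API available around v1.16 (the group/version differs in newer releases, so check your version):
apiVersion: kubescheduler.config.k8s.io/v1alpha1   # assumed for ~v1.16; verify against your release
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: /etc/kubernetes/scheduler.conf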
Updating static pod manifest
You could also edit /etc/kubernetes/manifests/kube-scheduler.yaml and modify the flags to pass the config. Make sure you mount the file into the pod by updating the volumes and volumeMounts sections.
spec:
  containers:
  - command:
    - kube-scheduler
    - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
    - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
    - --bind-address=127.0.0.1
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=true
    - --config=/etc/kubernetes/mycustomconfig.conf
    volumeMounts:
    - mountPath: /etc/kubernetes/scheduler.conf
      name: kubeconfig
      readOnly: true
    - mountPath: /etc/kubernetes/mycustomconfig.conf
      name: customconfig
      readOnly: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/scheduler.conf
      type: FileOrCreate
    name: kubeconfig
  - hostPath:
      path: /etc/kubernetes/mycustomconfig.conf
      type: FileOrCreate
    name: customconfig
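Because kube-scheduler is a static pod, the kubelet restarts it automatically when the manifest on disk changes. A quick sanity check that the new flag is actually in effect (pod name taken from the question above):
kubectl -n kube-system get pod kube-scheduler-it-k8s-master -o yaml | grep -- --config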
I'm following this tutorial to create a Raspberry Pi Kubernetes cluster. This is what my config looks like:
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
controllerManagerExtraArgs:
  pod-eviction-timeout: 10s
  node-monitor-grace-period: 10s
The problem is, when I run sudo kubeadm init --config kubeadm_conf.yaml I get the following error:
your configuration file uses an old API spec: "kubeadm.k8s.io/v1alpha1". Please use kubeadm v1.11 instead and run 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I've tried looking here for help, but nothing's worked. Help is appreciated.
If I use v1beta1:
W0505 13:10:25.319213 15824 strict.go:47] unknown configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1beta1", Kind:"MasterConfiguration"} for scheme definitions in "k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/scheme/scheme.go:31" and "k8s.io/kubernetes/cmd/kubeadm/app/componentconfigs/scheme.go:28"
[config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta1, Kind=MasterConfiguration
no InitConfiguration or ClusterConfiguration kind was found in the YAML file
Verify versions:
kubeadm version
kubeadm config view
Generate the default settings to see what your init command would use (these should then be modified as needed):
kubeadm config print-default
Did you try the solution provided by the output?
kubeadm config migrate --old-config old.yaml --new-config new.yaml
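Note that an old v1alpha1 file has to be migrated step by step through intermediate kubeadm versions (see the note further down). Once you are on v1beta1, the equivalent of your MasterConfiguration would look roughly like the sketch below; the field layout is assumed from the v1beta1 ClusterConfiguration API, so double-check against the actual output of kubeadm config migrate:
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
controllerManager:
  extraArgs:
    pod-eviction-timeout: 10s
    node-monitor-grace-period: 10s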
You can find a tutorial about kubeadm init --config in the kubeadm documentation.
In addition, if you are using an older version, please take a look at the documentation:
It is recommended that you migrate your old v1alpha3 configuration to v1beta1 using the kubeadm config migrate command, because v1alpha3 will be removed in Kubernetes 1.15.
For more details on each field in the v1beta1 configuration you can navigate to our API reference pages
Migration from old kubeadm config versions:
kubeadm v1.11 should be used to migrate v1alpha1 to v1alpha2; kubeadm v1.12 should be used to translate v1alpha2 to v1alpha3.
For the second issue, no InitConfiguration or ClusterConfiguration kind was found in the YAML file, there is also an answer in the docs:
When executing kubeadm init with the --config option, the following configuration types could be used: InitConfiguration, ClusterConfiguration, KubeProxyConfiguration, KubeletConfiguration, but only one between InitConfiguration and ClusterConfiguration is mandatory.
Which Kubernetes version are you using?
Try one of the following API versions:
apiVersion: kubeadm.k8s.io/v1alpha2
OR
apiVersion: kubeadm.k8s.io/v1alpha3
I have the following config:
cat kubeadm-conf.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
apiServerExtraArgs:
  enable-admission-plugins: NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook
networking:
  podSubnet: 192.168.0.0/16
but, when I do
ps -aux | grep admission
root 20697 7.4 2.8 446916 336660 ? Ssl 03:49 0:21 kube-apiserver --authorization-mode=Node,RBAC --advertise-address=10.0.2.15 --allow-privileged=true --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction
I only see the NodeRestriction
Please, let me know if anyone can help me make sure that the admission-webhook is indeed running on my cluster.
I assume that MutatingAdmissionWebhook and ValidatingAdmissionWebhook have not been properly propagated to the API server, as per the output you provided.
I suggest proceeding with the following steps to achieve your goal:
Check and edit the /etc/kubernetes/manifests/kube-apiserver.yaml manifest file, adding the required admission control plugins to the enable-admission-plugins Kubernetes API server flag:
--enable-admission-plugins=NodeRestriction,DefaultStorageClass,MutatingAdmissionWebhook,ValidatingAdmissionWebhook
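For orientation, that flag lives in the container command list of the manifest; a trimmed excerpt could look like this (all other flags omitted):
spec:
  containers:
  - command:
    - kube-apiserver
    - --enable-admission-plugins=NodeRestriction,DefaultStorageClass,MutatingAdmissionWebhook,ValidatingAdmissionWebhook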
Delete the current kube-apiserver Pod and wait until Kubernetes respawns a new one with the changes applied:
kubectl delete pod <kube-apiserver-Pod> -n kube-system
Hope this helps; I've successfully verified these steps in my environment.
More information about Kubernetes Admission Controllers you can find in the official documentation.
Thanks for the reply. That works as well; I'm posting the kubeadm answer in case anyone needs it. The following is the correct kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
networking:
  podSubnet: 192.168.0.0/16
apiServer:
  extraArgs:
    enable-admission-plugins: "NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook"
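To apply and verify (a sketch; the grep is just a quick way to confirm the API server picked up the flag):
sudo kubeadm init --config kubeadm-conf.yaml
ps aux | grep kube-apiserver | grep -o 'enable-admission-plugins=[^ ]*'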
I need to provide a specific node name for my master node in Kubernetes. I am using kubeadm to set up my cluster, and I know there is an option --node-name master which you can pass to kubeadm init, and that works fine.
Now, the issue is that I am using a config file to initialise the cluster, and I have tried various ways to provide the node name to the cluster, but it is not picking up the name. My kubeadm init config file is:
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: 10.0.1.149
  controlPlaneEndpoint: 10.0.1.149
etcd:
  endpoints:
  - http://10.0.1.149:2379
  caFile: /etc/kubernetes/pki/etcd/ca.pem
  certFile: /etc/kubernetes/pki/etcd/client.pem
  keyFile: /etc/kubernetes/pki/etcd/client-key.pem
networking:
  podSubnet: 192.168.13.0/24
kubernetesVersion: 1.10.3
apiServerCertSANs:
- 10.0.1.149
apiServerExtraArgs:
  endpoint-reconciler-type: lease
nodeRegistration:
  name: master
Now I run kubeadm init --config=config.yaml and it times out with the following error:
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node ip-x-x-x-x.ec2.internal as master by adding a label and a taint
timed out waiting for the condition
PS: This issue also occurs when you don't provide --hostname-override to the kubelet along with --node-name to kubeadm init; I am providing both. Also, I don't face any issues when I skip the config.yaml file and pass the --node-name option to kubeadm init on the command line.
I want to know how I can provide the --node-name option in the config.yaml file. Any pointers are appreciated.
I was able to resolve this issue using the following config file. Just updating in case anyone encounters the same issue:
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: 10.0.1.149
  controlPlaneEndpoint: 10.0.1.149
etcd:
  endpoints:
  - http://10.0.1.149:2379
  caFile: /etc/kubernetes/pki/etcd/ca.pem
  certFile: /etc/kubernetes/pki/etcd/client.pem
  keyFile: /etc/kubernetes/pki/etcd/client-key.pem
networking:
  podSubnet: 192.168.13.0/24
kubernetesVersion: 1.10.3
apiServerCertSANs:
- 10.0.1.149
apiServerExtraArgs:
  endpoint-reconciler-type: lease
nodeName: master
This is how you can specify --node-name in config.yaml.
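A quick way to confirm the name took effect once init completes (the output below is only illustrative):
kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   2m    v1.10.3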
With reference to https://github.com/kubernetes/kubeadm/issues/1239: how do I configure and start the latest kubeadm successfully?
kubeadm_new.config is generated by config migration:
kubeadm config migrate --old-config kubeadm_default.config --new-config kubeadm_new.config. Content of kubeadm_new.config:
apiEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
apiVersion: kubeadm.k8s.io/v1alpha3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: khteh-t580
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1alpha3
auditPolicy:
  logDir: /var/log/kubernetes/audit
  logMaxAge: 2
  path: ""
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: ""
etcd:
  local:
    dataDir: /var/lib/etcd
    image: ""
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.12.2
networking:
  dnsDomain: cluster.local
  podSubnet: ""
  serviceSubnet: 10.96.0.0/12
unifiedControlPlaneImage: ""
I changed kubernetesVersion to v1.12.2 in kubeadm_new.config, which seems to progress further, but it is now stuck at the following error:
failed to run Kubelet: Running with swap on is not supported, please disable swap! or set --fail-swap-on flag to false.
How do I set fail-swap-on to false to get it going?
Kubeadm comes with a command that prints the default configuration, so you can check each of the assigned default values with:
kubeadm config print-default
In your case, if you want to disable the swap check in the kubelet, you have to add the following lines to your current kubeadm config:
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false
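A sketch of how this fits together, assuming you append the KubeletConfiguration document above (including the --- separator) to the end of kubeadm_new.config:
cat kubeadm_new.config   # should now end with the KubeletConfiguration document
sudo kubeadm init --config kubeadm_new.config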
You haven't mentioned why you chose to keep swap enabled (i.e. to disable the kubelet's swap check).
I wouldn't consider it a first option - not because memory swap is bad practice (it is a useful and basic kernel mechanism) but because the kubelet does not seem to be designed to work properly with swap enabled.
K8S is very clear about this topic, as you can see in the kubeadm installation requirements:
Swap disabled. You MUST disable swap in order for the kubelet to work
properly.
I would recommend reading about evicting end-user pods and the relevant features that K8S provides to prioritize the memory of pods:
1) The three QoS classes - make sure that your high-priority workloads run with the Guaranteed (or at least Burstable) class; see the sketch after this list.
2) Pod Priority and Preemption.
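As a sketch of the Guaranteed class mentioned in point 1, a pod is placed in it when every container's resource requests equal its limits (the pod name, image and values below are made up for illustration):
apiVersion: v1
kind: Pod
metadata:
  name: critical-app        # hypothetical name
spec:
  containers:
  - name: app
    image: nginx            # illustrative image
    resources:
      requests:
        cpu: "500m"
        memory: "256Mi"
      limits:
        cpu: "500m"         # requests == limits => Guaranteed QoS class
        memory: "256Mi"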