How to access kube-scheduler on a kubernetes cluster? - kubernetes

I'm trying to figure out how to configure the Kubernetes scheduler using a custom config, but I'm having a bit of trouble understanding exactly how the scheduler is accessible.
The scheduler runs as a pod under the kube-system namespace called kube-scheduler-it-k8s-master. The documentation says that you can configure the scheduler by creating a config file and calling kube-scheduler --config <filename>. However, I am not able to access the scheduler container directly, as running kubectl exec -it kube-scheduler-it-k8s-master -- /bin/bash returns:
OCI runtime exec failed: exec failed: container_linux.go:370: starting container process caused: exec: "/bin/bash": stat /bin/bash: no such file or directory: unknown
command terminated with exit code 126
I tried modifying /etc/kubernetes/manifests/kube-scheduler to mount my custom config file within the pod and explicitly call kube-scheduler with the --config option set, but it seems that my changes get reverted and the scheduler runs using the default settings.
I feel like I'm misunderstanding something fundamentally about the kubernetes scheduler. Am I supposed to pass in the custom scheduler config from within the scheduler pod itself? Or is this supposed to be done remotely somehow?
Thanks!

Since your underlying (X) problem is "how to modify the scheduler configuration", you can try the following approaches.
Using kubeadm
If you are using kubeadm to bootstrap the cluster, you can use the --config flag with kubeadm init to supply a ClusterConfiguration object that passes extra arguments to the control plane components.
Example config for scheduler:
$ cat sched.conf
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.16.0
scheduler:
  extraArgs:
    address: 0.0.0.0
    config: /home/johndoe/schedconfig.yaml
    kubeconfig: /home/johndoe/kubeconfig.yaml
$ kubeadm init --config sched.conf
You could also try kubeadm upgrade apply --config sched.conf <k8s version> to apply the updated config on a live cluster.
Reference: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/control-plane-flags/
Updating static pod manifest
You could also edit /etc/kubernetes/manifests/kube-scheduler.yaml and modify the command flags to pass your config file. Make sure you mount the file into the pod by updating the volumes and volumeMounts sections.
spec:
  containers:
  - command:
    - kube-scheduler
    - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
    - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
    - --bind-address=127.0.0.1
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=true
    - --config=/etc/kubernetes/mycustomconfig.conf
    volumeMounts:
    - mountPath: /etc/kubernetes/scheduler.conf
      name: kubeconfig
      readOnly: true
    - mountPath: /etc/kubernetes/mycustomconfig.conf
      name: customconfig
      readOnly: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/scheduler.conf
      type: FileOrCreate
    name: kubeconfig
  - hostPath:
      path: /etc/kubernetes/mycustomconfig.conf
      type: FileOrCreate
    name: customconfig
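For completeness, here is a rough sketch of what the scheduler config file referenced by --config (schedconfig.yaml / mycustomconfig.conf above) might contain. The exact apiVersion depends on your Kubernetes version (v1alpha1, v1beta1, etc.), so treat this as an assumption to adapt:
apiVersion: kubescheduler.config.k8s.io/v1beta1
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: /etc/kubernetes/scheduler.conf
leaderElection:
  leaderElect: true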

Related

Kubernetes in GCP: How a pod can access its parent node to perform some operation e.g. iptables update in node

The scenario is like this:
I have a pod running on a node in a K8s cluster in GCP. The cluster is created using kops and the pod is created using kne_cli.
I know only the name of the pod, e.g. "test-pod".
My requirement is to configure something on the node where this pod is running, e.g. I want to update the "iptables -t nat" table on the node.
How can I access the node and configure it from within a pod?
Any suggestion will be helpful.
You can use a Job, a Deployment, or a plain Pod; it is not clear how your Pod is being managed. If you just want to run that task once, a Job is a good fit for you.
Option one is the SSH way:
You can run a Pod that lists the nodes (or a specific node, as needed) and runs an SSH command to connect to that node.
That way you will be able to access the node from the Pod and run commands on top of the node.
You can check this document for reference: https://alexei-led.github.io/post/k8s_node_shell/
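Note that the linked post actually obtains a node shell by running a privileged pod that uses nsenter to enter the host's namespaces, rather than SSH. A rough sketch of that approach (the node name is a placeholder you would fill in):
apiVersion: v1
kind: Pod
metadata:
  name: node-shell
spec:
  nodeName: <target-node>        # placeholder: the node you want to reach
  hostPID: true
  hostNetwork: true
  restartPolicy: Never
  containers:
  - name: shell
    image: alpine:latest
    # nsenter into PID 1's namespaces, which effectively gives a shell on the node
    command: ["nsenter", "-t", "1", "-m", "-u", "-i", "-n", "-p", "sh"]
    securityContext:
      privileged: true
    stdin: true
    tty: true
You can then attach with kubectl attach -it node-shell and run your iptables commands there.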
Option two:
You can mount a shell script containing the iptables command onto the node and invoke that script from the Pod, which will run the command whenever you want.
Example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: command
data:
  command.sh: |
    #!/bin/bash
    echo "running sh script on node..!"
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: command
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: cron-namespace-admin
          containers:
          - name: command
            image: IMAGE:v1
            imagePullPolicy: IfNotPresent
            volumeMounts:
            - name: commandfile
              mountPath: /test/command.sh
              subPath: command.sh
            - name: script-dir
              mountPath: /test
          restartPolicy: OnFailure
          volumes:
          - name: commandfile
            configMap:
              name: command
              defaultMode: 0777
          - name: script-dir
            hostPath:
              path: /var/log/data
              type: DirectoryOrCreate
Use privileged mode
securityContext:
  privileged: true
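To be explicit about where this snippet goes: it sits under the container entry of the CronJob above, roughly like this:
containers:
- name: command
  image: IMAGE:v1
  imagePullPolicy: IfNotPresent
  securityContext:
    privileged: true
  # volumeMounts etc. as in the manifest above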
Privileged - determines if any container in a pod can enable privileged mode. By default a container is not allowed to access any devices on the host, but a "privileged" container is given access to all devices on the host. This allows the container nearly all the same access as processes running on the host. This is useful for containers that want to use linux capabilities like manipulating the network stack and accessing devices.
Read more: https://kubernetes.io/docs/concepts/security/pod-security-policy/#privileged
You might be better off using GKE and configuring the ip-masq-agent as described here: https://cloud.google.com/kubernetes-engine/docs/how-to/ip-masquerade-agent
In case you stick with kops on GCE, I would suggest following the guide for ip-masq-agent here instead of the GKE docs: https://kubernetes.io/docs/tasks/administer-cluster/ip-masq-agent/
In case you really need to run custom iptables rules on the host, then your best option is to create a DaemonSet with pods that are privileged and have hostNetwork: true. That should allow you to modify iptables rules directly on the host from the pod.
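A minimal sketch of what such a DaemonSet could look like (the name, image and the placeholder iptables command are illustrative assumptions; replace them with the rules you actually need):
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-iptables
spec:
  selector:
    matchLabels:
      app: node-iptables
  template:
    metadata:
      labels:
        app: node-iptables
    spec:
      hostNetwork: true            # share the host network namespace so iptables targets the node
      containers:
      - name: iptables
        image: alpine:latest
        securityContext:
          privileged: true         # required to modify the host's netfilter tables
        command:
        - /bin/sh
        - -c
        - |
          apk add --no-cache iptables
          iptables -t nat -L       # placeholder: replace with the rules you need
          while true; do sleep 3600; done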

Making use of ansible's dynamic kubernetes inventory in a playbook?

I'm trying to execute a few simple commands on a kubernetes pod in Azure. I've successfully done so with the localhost + pod-as-module-parameter syntax:
---
- hosts: localhost
  connection: kubectl
  collections:
    - kubernetes.core
  gather_facts: False
  tasks:
    - name: Get pod
      k8s_info:
        kind: Pod
        namespace: my-namespace
      register: pod_list
    - name: Run command
      k8s_exec:
        pod: "{{ pod_list.resources[0].metadata.name }}"
        namespace: my-namespace
        command: "/bin/bash -c 'echo Hello world'"
However, I want to avoid the repetition of specifying pod and namespace for every kubernetes.core module call, as well as parsing the namespace explicitly in every playbook.
So I got the kubernetes dynamic inventory plugin to work, and can see the desired pod in a group label_app_some-predictable-name, as confirmed by output of ansible-inventory.
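For reference, a minimal sketch of the dynamic inventory file I'm using (file name, kubeconfig path and namespace are illustrative):
# inventory.k8s.yml
plugin: kubernetes.core.k8s
connections:
  - kubeconfig: ~/.kube/config
    namespaces:
      - my-namespace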
What I don't get is whether, at this point, I should be able to run the regular command module (I couldn't get that to work at all), or whether I need to keep using k8s_exec, which still requires pod and namespace to be specified explicitly (albeit now I can refer to the guest facts populated by the inventory plugin), on top of now requiring delegate_to: localhost:
---
- name: Execute command
  hosts: label_app_some-predictable-name
  connection: kubectl
  gather_facts: false
  collections:
    - kubernetes.core
  tasks:
    - name: Execute command via kubectl
      delegate_to: localhost
      k8s_exec:
        command: "/bin/sh -c 'echo Hello world'"
        pod: "{{ ansible_kubectl_pod }}"
        namespace: "{{ ansible_kubectl_namespace }}"
What am I missing? Is there a playbook example that makes use of the kubernetes dynamic inventory?

Can't find files inside volume mount directory

I have a mysql container I'm deploying through k8s, into which I am mounting a directory that contains a script; once the pod is up and running, the plan is to execute that script.
apiVersion: apps/v1
kind: Deployment
spec:
  replicas: 1
  template:
    spec:
      volumes:
      - name: mysql-stuff
        hostPath:
          path: /home/myapp/scripts
          type: Directory
      containers:
      - name: mysql-db
        image: mysql:latest
        volumeMounts:
        - name: mysql-stuff
          mountPath: /scripts/
Once I have it up and running, I run kubectl exec -it mysql-db -- bin/sh and ls scripts, but it returns nothing: the script that should be inside is not there, and I can't work out why. For the sake of getting this working I have added no security context and am running the container as root. Any help would be greatly appreciated.
Since you are running your pod in a minikube cluster, and minikube itself runs in a VM, the hostPath here refers to a path inside the minikube VM, not on your actual host.
However, you can map your actual host path to the corresponding minikube path, and then it will become accessible:
minikube mount /home/myapp/scripts:/home/myapp/scripts
See more here: https://minikube.sigs.k8s.io/docs/handbook/mount/
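For example (assuming the deployment above; the pod name is a placeholder):
# keep this running in a separate terminal; the mount only lasts while the command runs
minikube mount /home/myapp/scripts:/home/myapp/scripts
# then re-create the pod and verify the files are visible
kubectl exec -it <mysql-pod-name> -- ls /scripts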

Accessing Nexus repository manager password in a kubernetes pod

I have installed Sonatype nexus repository manager in my Kubernetes Cluster using the helm chart.
I am using the Kyma installation.
Nexus repository manager got installed properly and I can access the application.
But it seems the login password file is on a PV (volume claim) mounted at /nexus-data in the pod.
Now whenever I am trying to access the pod with kubectl exec command:
kubectl exec -i -t $POD_NAME -n dev -- /bin/sh
I am getting the following error:
OCI runtime exec failed: exec failed: container_linux.go:367: starting container process caused: exec: "/bin/sh": stat /bin/sh: no such file or directory: unknown
I understand that this issue occurs because the image does not offer shell functionality.
Is there any other way I can access the password file present in the PVC?
You can try the kubectl cp command, but it probably won't work, as there is no shell inside the container (and kubectl cp also relies on a tar binary being present in the container).
You can't really access the PV used by a PVC directly in Kubernetes, but there is a simple workaround: just create another pod (one with a shell) with this PVC mounted and access it from there. To avoid errors like Volume is already used by pod(s) / node(s), I suggest scheduling this pod on the same node as the nexus pod.
Check on which node your nexus pod is located: NODE=$(kubectl get pod <your-nexus-pod-name> -o jsonpath='{.spec.nodeName}')
Set a nexus label for that node: kubectl label node $NODE nexus=here (avoid using "yes" or "true" instead of "here"; Kubernetes will read it as a boolean, not as a string)
Get the name of the nexus PVC mounted in the pod by running kubectl describe pod <your-nexus-pod-name>
Create a simple pod definition referring to the nexus PVC from the previous step:
apiVersion: v1
kind: Pod
metadata:
  name: access-nexus-data
spec:
  containers:
  - name: access-nexus-data-container
    image: busybox:latest
    command: ["sleep", "999999"]
    volumeMounts:
    - name: nexus-data
      mountPath: /nexus-data
      readOnly: true
  volumes:
  - name: nexus-data
    persistentVolumeClaim:
      claimName: <your-pvc-name>
  nodeSelector:
    nexus: here
Access the pod using kubectl exec -it access-nexus-data -- sh and read the data. You can also use the earlier mentioned kubectl cp command.
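For example, assuming the default Nexus 3 layout, where the initial admin password is stored in a file called admin.password:
kubectl exec access-nexus-data -- cat /nexus-data/admin.password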
If you are using some cloud-provided Kubernetes solution, you can also try to mount the PV used by the PVC to a VM hosted in the cloud.
Source: similar Stackoverflow topic

Whitelisting sysctl parameters for helm chart

I have a helm chart that deploys an app but also needs to reconfigure some sysctl parameters in order to run properly. When I install the helm chart and run kubectl describe pod/pod_name on the pod that was deployed, I get forbidden sysctl: "kernel.sem" not whitelisted. I have added a PodSecurityPolicy like the one below, but with no luck.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: policy
spec:
  allowedUnsafeSysctls:
  - kernel.sem
  - kernel.shmmax
  - kernel.shmall
  - fs.mqueue.msg_max
  seLinux:
    rule: 'RunAsAny'
  runAsUser:
    rule: 'RunAsAny'
  supplementalGroups:
    rule: 'RunAsAny'
  fsGroup:
    rule: 'RunAsAny'
---UPDATE---
I also tried to set the kubelet parameters via a config file in order to allow unsafe sysctls, but I get an error: no kind "KubeletConfiguration" is registered for version "kubelet.config.k8s.io/v1beta1".
Here's the configuration file:
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
allowedUnsafeSysctls:
- "kernel.sem"
- "kernel.shmmax"
- "kernel.shmall"
- "fs.mqueue.msg_max"
The kernel.sem sysctl is considered an unsafe sysctl and is therefore disabled by default (only safe sysctls are enabled by default). You can allow one or more unsafe sysctls on a node-by-node basis; to do so, you need to add the --allowed-unsafe-sysctls flag to the kubelet.
Look at "Enabling Unsafe Sysctls"
I've created a simple example to illustrate how it works.
First, I added the --allowed-unsafe-sysctls flag to the kubelet.
In my case I use kubeadm, so I needed to add this flag to the /etc/systemd/system/kubelet.service.d/10-kubeadm.conf file:
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --allowed-unsafe-sysctls=kernel.sem"
...
NOTE: You have to add this flag on every node where you want to run Pods with kernel.sem enabled.
Then I reloaded systemd manager configuration and restarted kubelet using below command:
# systemctl daemon-reload && systemctl restart kubelet
Next I created a simple Pod using this manifest file:
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: web
  name: web
spec:
  securityContext:
    sysctls:
    - name: kernel.sem
      value: "250 32000 100 128"
  containers:
  - image: nginx
    name: web
Finally we can check if it works correctly:
# sysctl -a | grep "kernel.sem"
kernel.sem = 32000 1024000000 500 32000 // on the worker node
# kubectl get pod
NAME   READY   STATUS    RESTARTS   AGE
web    1/1     Running   0          110s
# kubectl exec -it web -- bash
root@web:/# cat /proc/sys/kernel/sem
250 32000 100 128 // inside the Pod
Your PodSecurityPolicy doesn't work as expected because, as you can see in the documentation:
Warning: If you allow unsafe sysctls via the allowedUnsafeSysctls field in a PodSecurityPolicy, any pod using such a sysctl will fail to start if the sysctl is not allowed via the --allowed-unsafe-sysctls kubelet flag as well on that node.
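So in your case the PodSecurityPolicy alone is not enough; every node that should run these Pods also needs the kubelet flag listing the same sysctls, e.g.:
--allowed-unsafe-sysctls=kernel.sem,kernel.shmmax,kernel.shmall,fs.mqueue.msg_max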