How does PriorityClass work - Kubernetes

I am trying to use priorityClass.
I create two pods: the first has cluster-node-critical priority and the second system-node-critical priority (matching the manifest below).
Both pods need to run on a node labeled nodeName: k8s-minion1, but that node has only 2 CPUs while each pod requests 1.5 CPU.
I therefore expect the second pod to run and the first to be in Pending status. Instead, the first pod always runs, no matter which priorityClassName I assign to the second pod.
I even tried labeling the node after applying my manifest, but that does not change anything.
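For context, attaching such a label is a single kubectl command; the key/value pair below simply mirrors the nodeSelector used in the manifest that follows:
kubectl label nodes k8s-minion1 nodeName=k8s-minion1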
Here is my manifest:
apiVersion: v1
kind: Pod
metadata:
  name: firstpod
spec:
  containers:
  - name: container
    image: nginx
    resources:
      requests:
        cpu: 1.5
  nodeSelector:
    nodeName: k8s-minion1
  priorityClassName: cluster-node-critical
---
apiVersion: v1
kind: Pod
metadata:
  name: secondpod
spec:
  containers:
  - name: container
    image: nginx
    resources:
      requests:
        cpu: 1.5
  priorityClassName: system-node-critical
  nodeSelector:
    nodeName: k8s-minion1
It is worth noting that I get the error "unknown object : priorityclass" when I run kubectl get priorityclass, and when I export my running pod to YAML with kubectl get pod secondpod -o yaml, I can't find any priorityClassName: field.
Here are my version details:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:55:54Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0+coreos.0", GitCommit:"6bb2e725fc2876cd94b3900fc57a1c98ca87a08b", GitTreeState:"clean", BuildDate:"2018-04-02T16:49:31Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Any ideas why this is not working?
Thanks in advance,
Abdelghani

PriorityClasses first appeared in Kubernetes 1.8 as an alpha feature.
The feature graduated to beta in 1.11.
You are using 1.10, which means the feature is still in alpha.
Alpha features are not enabled by default, so you would need to enable it explicitly.
Unfortunately, Kubernetes 1.10 is no longer supported, so I'd suggest upgrading at least to 1.14, where the PriorityClass feature became stable.
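If you must stay on 1.10 for a while, the alpha feature has to be switched on explicitly. A minimal sketch, assuming you control the component flags directly (the flag names below come from the 1.10-era docs; verify them against your distribution):
# Enable pod priority (alpha) on the control plane components:
kube-apiserver ... --feature-gates=PodPriority=true --runtime-config=scheduling.k8s.io/v1alpha1=true
kube-scheduler ... --feature-gates=PodPriority=true
After upgrading to 1.14+, PriorityClass is a stable API and you can define your own classes rather than relying on the built-in system ones, for example:
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000
globalDefault: false
description: "Pods that should preempt lower-priority workloads."
The pod then references it with priorityClassName: high-priority.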

Related

Including extra flags in the apiserver manifest file in kubernetes v1.21.0 does not seem to have any effect

I am trying to add the two flags below to apiserver in the /etc/kubernetes/manifests/kube-apiserver.yaml file:
spec:
  containers:
  - command:
    - kube-apiserver
    - --enable-admission-plugins=NodeRestriction,PodNodeSelector
    - --admission-control-config-file=/vagrant/admission-control.yaml
[...]
I am not mounting a volume or mount point for the /vagrant/admission-control.yaml file. It is fully accessible from the master node, since it is shared with the VM created by Vagrant:
vagrant@master-1:~$ cat /vagrant/admission-control.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: PodNodeSelector
  path: /vagrant/podnodeselector.yaml
vagrant@master-1:~$
Kubernetes version:
vagrant@master-1:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:31:21Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:25:06Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}
Link to the /etc/kubernetes/manifests/kube-apiserver.yaml file being used by the running cluster Here
vagrant#master-1:~$ kubectl delete pods kube-apiserver-master-1 -n kube-system
pod "kube-apiserver-master-1" deleted
Unfortunately "kubectl describe pods kube-apiserver-master-1 -n kube-system" only informs that the pod has been recreated. Flags do not appear as desired. No errors reported.
Any suggestion will be helpful,
Thank you.
NOTES:
I also tried to patch the apiserver's ConfigMap. The patch is applied, but it does not take effect in the newly running pod.
I also tried to pass the two flags in a file via kubeadm init --config, but there is little documentation on how to put these two flags, and all the other apiserver flags I need, into a configuration file in order to reinstall the master node.
UPDATE:
I hope this is useful for everyone facing the same issue...
After 2 days of searching the internet, and lots and lots of tests, I only managed to make it work with the procedure below:
sudo tee ${KUBEADM_INIT_CONFIG_FILE} <<EOF
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: "${INTERNAL_IP}"
  bindPort: 6443
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: ${KUBERNETES_VERSION}
controlPlaneEndpoint: "${LOADBALANCER_ADDRESS}:6443"
networking:
  podSubnet: "10.244.0.0/16"
apiServer:
  extraArgs:
    advertise-address: ${INTERNAL_IP}
    enable-admission-plugins: NodeRestriction,PodNodeSelector
    admission-control-config-file: ${ADMISSION_CONTROL_CONFIG_FILE}
  extraVolumes:
  - name: admission-file
    hostPath: ${ADMISSION_CONTROL_CONFIG_FILE}
    mountPath: ${ADMISSION_CONTROL_CONFIG_FILE}
    readOnly: true
  - name: podnodeselector-file
    hostPath: ${PODNODESELECTOR_CONFIG_FILE}
    mountPath: ${PODNODESELECTOR_CONFIG_FILE}
    readOnly: true
EOF
sudo kubeadm init phase control-plane apiserver --config=${KUBEADM_INIT_CONFIG_FILE}
You need to create a hostPath volume mount like below:
volumeMounts:
- mountPath: /vagrant
  name: admission
  readOnly: true
...
volumes:
- hostPath:
    path: /vagrant
    type: DirectoryOrCreate
  name: admission
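One way to confirm that the recreated static pod actually picked up both the flags and the mount (the grep pattern is only illustrative):
kubectl -n kube-system get pod kube-apiserver-master-1 -o yaml | grep -E 'admission|vagrant'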

Unknown field "setHostnameAsFQDN" despite using latest kubectl client

I have a deployment yaml file that looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-kubernetes
  template:
    metadata:
      labels:
        app: hello-kubernetes
    spec:
      setHostnameAsFQDN: true
      hostname: hello
      subdomain: world
      containers:
      - name: hello-kubernetes
        image: redis
However, I am getting this error:
$ kubectl apply -f dep.yaml
error: error validating "dep.yaml": error validating data: ValidationError(Deployment.spec.template.spec): unknown field "setHostnameAsFQDN" in io.k8s.api.core.v1.PodSpec; if you choose to ignore these errors, turn validation off with --validate=false
My kubectl version:
$ kubectl version --client
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.0", GitCommit:"af46c47ce925f4c4ad5cc8d1fca46c7b77d13b38", GitTreeState:"clean", BuildDate:"2020-12-08T17:59:43Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"darwin/amd64"}
After specifying --validate=false, hostname and hostname -f still return different values.
I believe I misunderstood something. The docs say that setHostnameAsFQDN will be available from Kubernetes v1.20.
You showed your kubectl version; your Kubernetes server version also needs to be v1.20, so make sure you are running Kubernetes v1.20.
Use kubectl version to see both the client and server versions, where the client version refers to kubectl and the server version refers to the Kubernetes cluster itself.
As per the k8s v1.20 release notes: previously introduced in 1.19 behind a feature gate, setHostnameAsFQDN is now enabled by default. More details on this behavior are available in the documentation for DNS for Services and Pods.
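A quick way to check both versions at once (the server line below is illustrative output, not from the original question):
$ kubectl version --short
Client Version: v1.20.0
Server Version: v1.19.7
If the server reports anything below v1.20, the API server rejects setHostnameAsFQDN unless the SetHostnameAsFQDN feature gate is enabled (the field exists behind a gate from v1.19).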

Unable to create a pod with extended resources advertised on a node

I have a single-node cluster where I have advertised the extended resource named "sctrls" on the node softserv1141 by following the docs at kubernetes-extended-resource. There I ran the command:
kubectl get nodes -o yaml
and the output contained the following part, which means the resource creation was successful:
status:
  addresses:
  - address: 172.16.250.120
    type: InternalIP
  - address: softserv1141
    type: Hostname
  allocatable:
    cpu: "3"
    ephemeral-storage: "7721503937"
    hugepages-1Gi: "0"
    hugepages-2Mi: "0"
    memory: 16163880Ki
    pods: "110"
    sctrls: "64"
  capacity:
    cpu: "3"
    ephemeral-storage: 8182Mi
    hugepages-1Gi: "0"
    hugepages-2Mi: "0"
    memory: 16266280Ki
    pods: "110"
    sctrls: "64"
I tried assigning the extended resource to a pod and creating it by following the docs at kubernetes-assign-extended-resource-pod.
The pod file is as follows:
$ cat nginx-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: extended-resource-demo
spec:
  containers:
  - name: extended-resource-demo-ctr
    image: nginx
    resources:
      requests:
        sctrls: 3
I got the following error during pod creation:
$ kubectl create -f nginx-pod.yaml
The Pod "extended-resource-demo" is invalid:
* spec.containers[0].resources.limits[sctrls]: Invalid value: "sctrls": must be a standard resource type or fully qualified
* spec.containers[0].resources.limits[sctrls]: Invalid value: "sctrls": must be a standard resource for containers
* spec.containers[0].resources.requests[sctrls]: Invalid value: "sctrls": must be a standard resource type or fully qualified
* spec.containers[0].resources.requests[sctrls]: Invalid value: "sctrls": must be a standard resource for containers
I don't know why I am getting this error and haven't found any good solution online. But I suspect it might be the kubectl version, as the docs list the feature state as Kubernetes v1.18 [stable], whereas my version is:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.8", GitCommit:"ec6eb119b81be488b030e849b9e64fda4caaf33c", GitTreeState:"clean", BuildDate:"2020-03-12T21:00:06Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.8", GitCommit:"ec6eb119b81be488b030e849b9e64fda4caaf33c", GitTreeState:"clean", BuildDate:"2020-03-12T20:52:22Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
I need to confirm whether that is the problem, or whether there is a solution I have not yet tried.
Looks like this paragraph from the docs has the answer: "Extended resources are fully qualified with any domain outside of *.kubernetes.io/. Valid extended resource names have the form example.com/foo where example.com is replaced with your organization's domain and foo is a descriptive resource name.".
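Applied to this question, a sketch of the fix, assuming example.com stands in for your organization's domain (the ~1 in the patch path is JSON-Patch escaping for /): re-advertise the resource under a fully qualified name, then request it under that same name. With kubectl proxy running:
curl --header "Content-Type: application/json-patch+json" \
  --request PATCH \
  --data '[{"op": "add", "path": "/status/capacity/example.com~1sctrls", "value": "64"}]' \
  http://localhost:8001/api/v1/nodes/softserv1141/status
and in the pod spec:
    resources:
      requests:
        example.com/sctrls: 3
      limits:
        example.com/sctrls: 3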

Where does the job status in GKE come from? And why is it different than 'kubectl get job'

The GKE UI shows a different status for my job than I get back from kubectl. Note that the GKE UI status is the correct one AFAICT, and kubectl is wrong. However, I want to programmatically get back the correct status using read_namespaced_job in the Python API, but that status matches kubectl, which seems to be the wrong status.
Where does this status in the GKE UI come from?
In GKE UI:
apiVersion: batch/v1
kind: Job
metadata:
  creationTimestamp: "2020-06-04T08:00:06Z"
  labels:
    controller-uid: ee750648-1189-4ed5-9803-054d407aa0b2
    job-name: tf-nightly-transformer-translate-func-v2-32-1591257600
  name: tf-nightly-transformer-translate-func-v2-32-1591257600
  namespace: automated
  ownerReferences:
  - apiVersion: batch/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: CronJob
    name: tf-nightly-transformer-translate-func-v2-32
    uid: 5b619895-4c08-45e9-8981-fbd95980ff4e
  resourceVersion: "16109561"
  selfLink: /apis/batch/v1/namespaces/automated/jobs/tf-nightly-transformer-translate-func-v2-32-1591257600
  uid: ee750648-1189-4ed5-9803-054d407aa0b2
...
status:
  completionTime: "2020-06-04T08:41:41Z"
  conditions:
  - lastProbeTime: "2020-06-04T08:41:41Z"
    lastTransitionTime: "2020-06-04T08:41:41Z"
    status: "True"
    type: Complete
  startTime: "2020-06-04T08:00:06Z"
  succeeded: 1
From kubectl:
zcain@zcain:~$ kubectl get job tf-nightly-transformer-translate-func-v2-32-1591257600 --namespace=automated -o yaml
apiVersion: batch/v1
kind: Job
metadata:
  creationTimestamp: "2020-06-04T08:00:27Z"
  labels:
    controller-uid: b5d4fb20-df8d-45d8-a8b5-e3b0c40999be
    job-name: tf-nightly-transformer-translate-func-v2-32-1591257600
  name: tf-nightly-transformer-translate-func-v2-32-1591257600
  namespace: automated
  ownerReferences:
  - apiVersion: batch/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: CronJob
    name: tf-nightly-transformer-translate-func-v2-32
    uid: 51a40f4a-5595-49a1-b63f-db75b0849206
  resourceVersion: "32712722"
  selfLink: /apis/batch/v1/namespaces/automated/jobs/tf-nightly-transformer-translate-func-v2-32-1591257600
  uid: b5d4fb20-df8d-45d8-a8b5-e3b0c40999be
...
status:
  conditions:
  - lastProbeTime: "2020-06-04T12:04:58Z"
    lastTransitionTime: "2020-06-04T12:04:58Z"
    message: Job was active longer than specified deadline
    reason: DeadlineExceeded
    status: "True"
    type: Failed
  startTime: "2020-06-04T11:04:58Z"
Environment:
Kubernetes version (kubectl version):
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.9", GitCommit:"2e808b7cb054ee242b68e62455323aa783991f03", GitTreeState:"clean", BuildDate:"2020-01-18T23:33:14Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15+", GitVersion:"v1.15.9-gke.26", GitCommit:"525ce678faa2b28483fa9569757a61f92b7b0988", GitTreeState:"clean", BuildDate:"2020-03-06T18:47:39Z", GoVersion:"go1.12.12b4", Compiler:"gc", Platform:"linux/amd64"}
OS:
$ cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux rodete"
Python version (python --version):
Python 3.7.7
Python client version (pip list | grep kubernetes):
kubernetes 10.0.1
For anyone else who finds a similar issue:
The problem is with the kubeconfig file (/usr/local/google/home/zcain/.kube/config for me)
There is a line in here like this:
current-context: gke_xl-ml-test_europe-west4-a_xl-ml-test-europe-west4
If the current-context points to a different cluster or zone than the one where your job ran, then when you run kubectl get job or use the Python API, the job status you get back will be wrong. I feel like it should just error out, but instead I got the behavior above, where I get back an incorrect status.
You can run something like gcloud container clusters get-credentials xl-ml-test-europe-west4 --zone europe-west4-a to set your kubeconfig to the correct current-context
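The corresponding kubectl commands for inspecting and switching contexts (the context name shown is the one from the kubeconfig line above):
kubectl config current-context    # show the active context
kubectl config get-contexts       # list every context in the kubeconfig
kubectl config use-context gke_xl-ml-test_europe-west4-a_xl-ml-test-europe-west4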

Kubernetes RunAsUser is forbidden

When I try to create a pod with a non-root fsGroup (here 2000):
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 2000
  volumes:
  - name: sec-ctx-vol
    emptyDir: {}
  containers:
  - name: sec-ctx-demo
    image: gcr.io/google-samples/node-hello:1.0
    volumeMounts:
    - name: sec-ctx-vol
      mountPath: /data/demo
    securityContext:
      allowPrivilegeEscalation: true
I hit this error:
Error from server (Forbidden): error when creating "test.yml": pods "security-context-demo" is forbidden: pod.Spec.SecurityContext.RunAsUser is forbidden
Version
root@ubuntuguest:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:22:21Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:10:24Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Can anyone help me with how to set up a ClusterRoleBinding in the cluster?
If the issue is indeed because of RBAC permissions, then you can try creating a ClusterRoleBinding with a cluster role as explained here.
Instead of the last step in that post (using the authentication token to log in to the dashboard), you'll have to use that token and the config in your kubectl client when creating the pod.
For more info on the use of contexts, clusters, and users, visit here.
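A minimal sketch of such a binding, assuming a service account named demo-user (both names and the choice of cluster-admin are illustrative; prefer the least-privileged role that works for you):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: demo-admin-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: demo-user
  namespace: kube-system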
You need to disable the SecurityContextDeny admission plugin while setting up the kube-apiserver.
On the master node, check which plugins are enabled:
ps -ef | grep kube-apiserver
and look at the --enable-admission-plugins flag, e.g.:
--enable-admission-plugins=LimitRanger,NamespaceExists,NamespaceLifecycle,ResourceQuota,ServiceAccount,DefaultStorageClass,MutatingAdmissionWebhook,DenyEscalatingExec
Ref: SecurityContextDeny
cd /etc/kubernetes
cp apiserver.conf apiserver.conf.bak
vim apiserver.conf
Find the SecurityContextDeny keyword and delete it, then save and quit (:wq) and restart the service:
systemctl restart kube-apiserver
That fixed it.
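On kubeadm-based clusters there is no apiserver.conf; the equivalent step (an assumption about that layout, not part of the original answer) is to remove SecurityContextDeny from the flag in the static pod manifest, which the kubelet reloads automatically on save:
# /etc/kubernetes/manifests/kube-apiserver.yaml
# Before: - --enable-admission-plugins=NodeRestriction,SecurityContextDeny
# After (SecurityContextDeny removed):
- --enable-admission-plugins=NodeRestriction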