Kubernetes RunAsUser is forbidden - kubernetes

When I try to create a pod with a non-root fsGroup (here 2000):
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 2000
  volumes:
  - name: sec-ctx-vol
    emptyDir: {}
  containers:
  - name: sec-ctx-demo
    image: gcr.io/google-samples/node-hello:1.0
    volumeMounts:
    - name: sec-ctx-vol
      mountPath: /data/demo
    securityContext:
      allowPrivilegeEscalation: true
I'm hitting this error:
Error from server (Forbidden): error when creating "test.yml": pods "security-context-demo" is forbidden: pod.Spec.SecurityContext.RunAsUser is forbidden
Version
root@ubuntuguest:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:22:21Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:10:24Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Can anyone help me with how to set up a ClusterRoleBinding in the cluster?

If the issue is indeed because of RBAC permissions, then you can try creating a ClusterRoleBinding with a cluster role, as explained here.
Instead of the last step in that post (using the authentication token to log in to the dashboard), you'll have to use that token and the config in your kubectl client when creating the pod.
For more info on the use of contexts, clusters, and users, visit here.
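For illustration, a minimal sketch of that approach; the service account name kube-user and the default namespace are placeholders, and cluster-admin is broader than most setups need:
kubectl create serviceaccount kube-user -n default
kubectl create clusterrolebinding kube-user-binding \
  --clusterrole=cluster-admin \
  --serviceaccount=default:kube-user
You would then use that service account's token in your kubectl config before creating the pod.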

You need to disable the SecurityContextDeny admission plugin when setting up the kube-apiserver.
On the master node:
ps -ef | grep kube-apiserver
and check the enabled plugins:
--enable-admission-plugins=LimitRanger,NamespaceExists,NamespaceLifecycle,ResourceQuota,ServiceAccount,DefaultStorageClass,MutatingAdmissionWebhook,DenyEscalatingExec
Ref: SecurityContextDeny

cd /etc/kubernetes
cp apiserver.conf apiserver.conf.bak
vim apiserver.conf
Find the SecurityContextDeny keyword and delete it.
:wq
systemctl restart kube-apiserver
That fixed it.
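After the restart, re-creating the original pod should confirm the fix:
kubectl create -f test.yml
kubectl get pod security-context-demo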

Related

no matches for kind "Profile" in version "kubeflow.org/v1beta1"

I installed Kubeflow and tried manual profile creation following here, but got this output:
error: unable to recognize "profile.yaml": no matches for kind "Profile" in version "kubeflow.org/v1beta1"
How can I solve it?
Your valuable help is needed.
My resource is profile.yaml:
apiVersion: kubeflow.org/v1beta1
kind: Profile
metadata:
  name: tmp_namespace
spec:
  owner:
    kind: User
    name: example_01@gmail.com
  resourceQuotaSpec:
    hard:
      cpu: "2"
      memory: 2Gi
      requests.nvidia.com/gpu: "1"
      persistentvolumeclaims: "1"
      requests.storage: "5Gi"
User information in dex:
- email: exam_01@gmail.com
  hash: $2a$12$lRDeywzDl4ds0oRR.erqt.b5fmNpvJb0jdZXE0rMNYdmbfseTzxNW
  userID: "example"
  username: example
Of course I restarted dex:
$ kubectl rollout restart deployment dex -n auth
$ kubectl version --client && kubeadm version
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.13", GitCommit:"a43c0904d0de10f92aa3956c74489c45e6453d6e", GitTreeState:"clean", BuildDate:"2022-08-17T18:28:56Z", GoVersion:"go1.16.15", Compiler:"gc", Platform:"linux/amd64"}
kubeadm version: &version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.13", GitCommit:"a43c0904d0de10f92aa3956c74489c45e6453d6e", GitTreeState:"clean", BuildDate:"2022-08-17T18:27:51Z", GoVersion:"go1.16.15", Compiler:"gc", Platform:"linux/amd64"}
I've found a way.
If you see the message no matches for kind "Profile" in version "kubeflow.org/v1beta1", you may not have done the two necessary installs.
Go to kubeflow/manifests and follow the commands to install Profiles + KFAM and User Namespace, as in the sketch below.
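For reference, a sketch of those two installs from a checkout of kubeflow/manifests (paths may differ between releases, so check the repo README for your version):
# Profiles + KFAM
kustomize build apps/profiles/upstream/overlays/kubeflow | kubectl apply -f -
# User Namespace
kustomize build common/user-namespace/base | kubectl apply -f -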

Istio: Unable to mount secrets to the pod

I'm a noob with Istio and K8s so sorry if this question sounds a little dumb.
I'm trying to provide my own certs to the Gateway deployment, for which I created secrets as below:
$ kubectl create -n istio-system secret tls certs --key example.comkey.pem --cert example.com.pem
$ kubectl create -n istio-system secret generic ca-certs --from-file=rootCA.pem
I edited my deployment:
sidecar.istio.io/userVolumeMount: '[{"name":"certs", "mountPath":"/etc/certs", "readonly":true},{"name":"ca-certs", "mountPath":"/etc/ca-certs", "readonly":true}]'
sidecar.istio.io/userVolume: '[{"name":"certs", "secret":{"secretName":"certs"}},{"name":"ca-certs", "secret":{"secretName":"ca-certs"}}]'
I followed the steps provided here and here, but I still do not see the files mounted.
Am I missing something?
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.2", GitCommit:"092fbfbf53427de67cac1e9fa54aaa09a28371d7", GitTreeState:"clean", BuildDate:"2021-06-16T12:59:11Z", GoVersion:"go1.16.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.2", GitCommit:"092fbfbf53427de67cac1e9fa54aaa09a28371d7", GitTreeState:"clean", BuildDate:"2021-06-16T12:53:14Z", GoVersion:"go1.16.5", Compiler:"gc", Platform:"linux/amd64"}
I was able to resolve this issue. I'm not sure if this is the right way to do it though.
I had missed adding volumeMounts and volumes. Once I made the change below, I could see my files mounted.
volumeMounts:
- name: certs
  mountPath: /etc/certs
  readOnly: true
- name: ca-certs
  mountPath: /etc/ca-certs
  readOnly: true
volumes:
- name: certs
  secret:
    secretName: certs
    optional: true
- name: ca-certs
  secret:
    secretName: ca-certs
    optional: true
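To verify the mounts land in the sidecar, something like this should work (the pod name is a placeholder):
kubectl exec <your-gateway-pod> -c istio-proxy -- ls /etc/certs /etc/ca-certs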

Including extra flags in the apiserver manifest file in kubernetes v1.21.0 does not seem to have any effect

I am trying to add the two flags below to the apiserver in the /etc/kubernetes/manifests/kube-apiserver.yaml file:
spec:
  containers:
  - command:
    - kube-apiserver
    - --enable-admission-plugins=NodeRestriction,PodNodeSelector
    - --admission-control-config-file=/vagrant/admission-control.yaml
[...]
I am not mounting a volume or mount point for the /vagrant/admission-control.yaml file. It is fully accessible from the master node, since it is shared with the VM created by Vagrant:
vagrant@master-1:~$ cat /vagrant/admission-control.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: PodNodeSelector
  path: /vagrant/podnodeselector.yaml
vagrant@master-1:~$
Kubernetes version:
vagrant@master-1:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:31:21Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:25:06Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}
Link to the /etc/kubernetes/manifests/kube-apiserver.yaml file being used by the running cluster: here.
vagrant#master-1:~$ kubectl delete pods kube-apiserver-master-1 -n kube-system
pod "kube-apiserver-master-1" deleted
Unfortunately, "kubectl describe pods kube-apiserver-master-1 -n kube-system" only shows that the pod has been recreated; the flags do not appear as desired, and no errors are reported.
Any suggestion would be helpful,
Thank you.
NOTES:
I also tried to patch the apiserver's ConfigMap. The patch is applied, but it does not take effect in the newly created pod.
I also tried to pass the two flags in a file via kubeadm init --config, but there is little documentation on how to put these two flags, and all the other apiserver flags I need, into a configuration file in order to reinstall the master node.
UPDATE:
I hope this is useful for everyone facing the same issue...
After 2 days of searching the internet, and lots and lots of tests, I only managed to make it work with the procedure below:
sudo tee ${KUBEADM_INIT_CONFIG_FILE} <<EOF
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: "${INTERNAL_IP}"
  bindPort: 6443
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: ${KUBERNETES_VERSION}
controlPlaneEndpoint: "${LOADBALANCER_ADDRESS}:6443"
networking:
  podSubnet: "10.244.0.0/16"
apiServer:
  extraArgs:
    advertise-address: ${INTERNAL_IP}
    enable-admission-plugins: NodeRestriction,PodNodeSelector
    admission-control-config-file: ${ADMISSION_CONTROL_CONFIG_FILE}
  extraVolumes:
  - name: admission-file
    hostPath: ${ADMISSION_CONTROL_CONFIG_FILE}
    mountPath: ${ADMISSION_CONTROL_CONFIG_FILE}
    readOnly: true
  - name: podnodeselector-file
    hostPath: ${PODNODESELECTOR_CONFIG_FILE}
    mountPath: ${PODNODESELECTOR_CONFIG_FILE}
    readOnly: true
EOF
sudo kubeadm init phase control-plane apiserver --config=${KUBEADM_INIT_CONFIG_FILE}
You need to create a hostPath volume mount like the one below:
volumeMounts:
- mountPath: /vagrant
  name: admission
  readOnly: true
...
volumes:
- hostPath:
    path: /vagrant
    type: DirectoryOrCreate
  name: admission
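Once the apiserver pod is recreated, a quick check that the flags took effect:
kubectl -n kube-system get pod kube-apiserver-master-1 -o yaml | grep admission
ps -ef | grep kube-apiserver | grep admission-control-config-file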

kube-apiserver not coming up after adding --admission-control-config-file flag

root@ubuntu151:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.8", GitCommit:"9f2892aab98fe339f3bd70e3c470144299398ace", GitTreeState:"clean", BuildDate:"2020-08-13T16:12:48Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.10", GitCommit:"62876fc6d93e891aa7fbe19771e6a6c03773b0f7", GitTreeState:"clean", BuildDate:"2020-10-15T01:43:56Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
My admission webhooks require authentication, so I'm restarting the apiserver, specifying the location of the admission control configuration file via the --admission-control-config-file flag, as follows:
root@ubuntu151:~# vi /etc/kubernetes/manifests/kube-apiserver.yaml
...
spec:
  containers:
  - command:
    - --admission-control-config-file=/var/lib/kubernetes/kube-AdmissionConfiguration.yaml
...
root@ubuntu151:~# vi /var/lib/kubernetes/kube-AdmissionConfiguration.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: ValidatingAdmissionWebhook
  configuration:
    apiVersion: apiserver.config.k8s.io/v1
    kind: WebhookAdmissionConfiguration
    kubeConfigFile: /var/lib/kubernetes/kube-config.yaml
- name: MutatingAdmissionWebhook
  configuration:
    apiVersion: apiserver.config.k8s.io/v1
    kind: WebhookAdmissionConfiguration
    kubeConfigFile: /var/lib/kubernetes/kube-config.yaml
The kubeConfigFile /var/lib/kubernetes/kube-config.yaml is the file I copied from ~/.kube/config.
Now my kube-apiserver is not coming up.
Please help!
Thanks in advance!
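One thing worth checking, mirroring the hostPath fix from the previous question: a static pod can only read /var/lib/kubernetes/kube-AdmissionConfiguration.yaml if that path is mounted into its container. A sketch of such a mount in kube-apiserver.yaml, assuming it is not already present:
volumeMounts:
- mountPath: /var/lib/kubernetes
  name: admission-config
  readOnly: true
...
volumes:
- hostPath:
    path: /var/lib/kubernetes
    type: DirectoryOrCreate
  name: admission-config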

How does priorityClass work

I'm trying to use priorityClass.
I create two pods: the first has system-node-critical priority and the second cluster-node-critical priority.
Both pods need to run on a node labeled with nodeName: k8s-minion1, but that node has only 2 CPUs while each pod requests 1.5 CPU.
I therefore expect the second pod to run and the first to be in Pending status. Instead, the first pod always runs, no matter the priority class I assign to the second pod.
I even tried to label the node after applying my manifest, but that does not change anything.
Here is my manifest:
apiVersion: v1
kind: Pod
metadata:
  name: firstpod
spec:
  containers:
  - name: container
    image: nginx
    resources:
      requests:
        cpu: 1.5
  nodeSelector:
    nodeName: k8s-minion1
  priorityClassName: cluster-node-critical
---
apiVersion: v1
kind: Pod
metadata:
  name: secondpod
spec:
  containers:
  - name: container
    image: nginx
    resources:
      requests:
        cpu: 1.5
  priorityClassName: system-node-critical
  nodeSelector:
    nodeName: k8s-minion1
It is worth noting that I get the error "unknown object: priorityclass" when I do kubectl get priorityclass, and when I export my running pod to YAML with kubectl get pod secondpod -o yaml, I can't find any priorityClassName: field.
Here is my version info:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:55:54Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0+coreos.0", GitCommit:"6bb2e725fc2876cd94b3900fc57a1c98ca87a08b", GitTreeState:"clean", BuildDate:"2018-04-02T16:49:31Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Any ideas why this is not working?
Thanks in advance,
Abdelghani
PriorityClasses first appeared in Kubernetes 1.8 as an alpha feature.
The feature graduated to beta in 1.11.
You are using 1.10, which means the feature is alpha in your version.
Alpha features are not enabled by default, so you would need to enable it explicitly.
Unfortunately, Kubernetes 1.10 is no longer supported, so I'd suggest upgrading to at least 1.14, where the priorityClass feature became stable.
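For reference, on a supported version (scheduling.k8s.io/v1 is stable since 1.14), a custom PriorityClass looks like this; high-priority is just an example name:
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000
globalDefault: false
description: "For pods that should preempt lower-priority pods."
Reference it from a pod spec with priorityClassName: high-priority. Note that the built-in classes are named system-cluster-critical and system-node-critical; the cluster-node-critical used in firstpod is not one of them.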