Hello, I have a single-node cluster where I have advertised an extended resource named "sctrls" on the node softserv1141 by following the docs at kubernetes-extended-resource. I ran the command:
kubectl get nodes -o yaml
and the output contained the following part, which suggests the resource was advertised successfully.
status:
  addresses:
  - address: 172.16.250.120
    type: InternalIP
  - address: softserv1141
    type: Hostname
  allocatable:
    cpu: "3"
    ephemeral-storage: "7721503937"
    hugepages-1Gi: "0"
    hugepages-2Mi: "0"
    memory: 16163880Ki
    pods: "110"
    sctrls: "64"
  capacity:
    cpu: "3"
    ephemeral-storage: 8182Mi
    hugepages-1Gi: "0"
    hugepages-2Mi: "0"
    memory: 16266280Ki
    pods: "110"
    sctrls: "64"
I then tried assigning the extended resource to a pod and creating it by following the docs at kubernetes-assign-extended-resource-pod.
The pod file is as follows:
$ cat nginx-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: extended-resource-demo
spec:
  containers:
  - name: extended-resource-demo-ctr
    image: nginx
    resources:
      requests:
        sctrls: 3
I got the following error during pod creation:
$ kubectl create -f nginx-pod.yaml
The Pod "extended-resource-demo" is invalid:
* spec.containers[0].resources.limits[sctrls]: Invalid value: "sctrls": must be a standard resource type or fully qualified
* spec.containers[0].resources.limits[sctrls]: Invalid value: "sctrls": must be a standard resource for containers
* spec.containers[0].resources.requests[sctrls]: Invalid value: "sctrls": must be a standard resource type or fully qualified
* spec.containers[0].resources.requests[sctrls]: Invalid value: "sctrls": must be a standard resource for containers
I don't know why I am getting this error and haven't found a good solution online. I suspect it might be the Kubernetes version, as the docs list the feature state as "Kubernetes v1.18 [stable]", whereas my version is:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.8", GitCommit:"ec6eb119b81be488b030e849b9e64fda4caaf33c", GitTreeState:"clean", BuildDate:"2020-03-12T21:00:06Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.8", GitCommit:"ec6eb119b81be488b030e849b9e64fda4caaf33c", GitTreeState:"clean", BuildDate:"2020-03-12T20:52:22Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
I need to confirm whether that is the problem, or whether there is another solution.
Looks like this paragraph from the docs has the answer: "Extended resources are fully qualified with any domain outside of *.kubernetes.io/. Valid extended resource names have the form example.com/foo where example.com is replaced with your organization's domain and foo is a descriptive resource name.".
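So, taking that quote literally, the node has to be patched with a fully qualified name such as example.com/sctrls (the domain here is just the placeholder from the docs), and the pod then requests the resource under the same name, roughly like this:
apiVersion: v1
kind: Pod
metadata:
  name: extended-resource-demo
spec:
  containers:
  - name: extended-resource-demo-ctr
    image: nginx
    resources:
      requests:
        example.com/sctrls: 3
      limits:
        example.com/sctrls: 3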
Related
I am trying to use priorityClass.
I create two pods: the first has system-node-critical priority and the second cluster-node-critical priority.
Both pods need to run on a node labeled with nodeName: k8s-minion1, but that node has only 2 CPUs while both pods request 1.5 CPU.
I therefore expect the second pod to run and the first to stay in Pending status. Instead, the first pod always runs no matter which priority class I assign to the second pod.
I even tried labeling the node after applying my manifest, but that does not change anything.
Here is my manifest:
apiVersion: v1
kind: Pod
metadata:
  name: firstpod
spec:
  containers:
  - name: container
    image: nginx
    resources:
      requests:
        cpu: 1.5
  nodeSelector:
    nodeName: k8s-minion1
  priorityClassName: cluster-node-critical
---
apiVersion: v1
kind: Pod
metadata:
  name: secondpod
spec:
  containers:
  - name: container
    image: nginx
    resources:
      requests:
        cpu: 1.5
  priorityClassName: system-node-critical
  nodeSelector:
    nodeName: k8s-minion1
It is worth noting that I get an error "unknown object: priorityclass" when I run kubectl get priorityclass, and when I export my running pod to YAML with kubectl get pod secondpod -o yaml, I can't find any priorityClassName: field.
Here are my version details:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:55:54Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0+coreos.0", GitCommit:"6bb2e725fc2876cd94b3900fc57a1c98ca87a08b", GitTreeState:"clean", BuildDate:"2018-04-02T16:49:31Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Any ideas why this is not working?
Thanks in advance,
Abdelghani
PriorityClasses first appeared in Kubernetes 1.8 as an alpha feature.
The feature graduated to beta in 1.11.
You are using 1.10, which means the feature is still alpha on your cluster.
Alpha features are not enabled by default, so you would need to enable it explicitly (the PodPriority feature gate).
Unfortunately, Kubernetes 1.10 is no longer supported, so I'd suggest upgrading to at least 1.14, where the priorityClass feature became stable.
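For reference, once you are on a version where the feature is stable (1.14+), a custom PriorityClass and a pod that uses it look roughly like this (the class name and value are illustrative):
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority          # illustrative name
value: 1000000                 # higher value = higher scheduling priority
globalDefault: false
description: "Illustrative class for pods that should preempt lower-priority pods."
---
apiVersion: v1
kind: Pod
metadata:
  name: important-pod
spec:
  priorityClassName: high-priority
  containers:
  - name: container
    image: nginx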
I use minikube on Windows 10 and am trying to create a PersistentVolume with the minikube dashboard. Below are my PV YAML file contents:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: blog-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: blog-pv-claim
spec:
  storageClassName: manual
  volumeName: blog-pv
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
But the minikube dashboard throws the following error:
## Deploying file has failed
the server could not find the requested resource
However, I can create the PV with kubectl by executing the following command:
kubectl apply -f pod-pvc-test.yaml
For your information, the kubectl.exe version is:
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:20:10Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.2", GitCommit:"59603c6e503c87169aea6106f57b9f242f64df89", GitTreeState:"clean", BuildDate:"2020-01-18T23:22:30Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}
How can I create the PersistentVolume with the minikube dashboard as well as with the kubectl command?
== Updated Part ==
> kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                   STORAGECLASS   REASON   AGE
blog-pv   1Gi        RWO            Recycle          Bound    default/blog-pv-claim   manual                  5m1s
First, apply the resources one by one, so the problem can be isolated to either the PV (PersistentVolume) or the PVC (PersistentVolumeClaim).
Second, adjust the hostPath to something else: /mnt/data is normally a mounted or NFS folder, and that may be the issue, so point it at some other real path for testing (see the snippet after this answer).
After you have applied them, please show the output of
kubectl get pv,pvc
That should make the root cause clear.
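For the second point above, the adjustment is a one-line change in the PV spec; the path below is only an example, any plain directory that exists on the node will do:
  hostPath:
    path: "/tmp/blog-data"    # example path; replaces /mnt/data just for testing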
I've managed to reproduce the issue you've been describing on my minikube with the v2.0.0-beta8 dashboard.
$ minikube version
minikube version: v1.9.1
$ kubectl version
Client Version: GitVersion:"v1.17.4"
Server Version: GitVersion:"v1.18.0"
Please note that the official guide refers to v2.0.0-beta8, which is broken :).
Recently there were some fixes for the broken functionality (they have been merged into the master branch).
Please update the dashboard to at least v2.0.0-rc6:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc6/aio/deploy/recommended.yaml
With that version I was able to successfully create the PV and PVC (via the dashboard) from the YAML you provided.
Hope that helps!
I'm seeing the following error when creating a pod. I compared it with the documentation on the Kubernetes website and the code is the same as the one I have written below, but I still end up with the error:
error validating data: the server could not find the requested resource; if you choose to ignore these errors, turn validation off with --validate=false
apiVersion: v1
kind: pod
metadata:
  name: helloworld-deployment
  labels:
    app: helloworld
spec:
  containers:
  - name: helloworld
    image: anishanil/kubernetes:node
    ports:
      containerPort: 3000
     resources:
       limits:
         memory: "100Mi"
         cpu: "100m"
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.1", GitCommit:"b0b7a323cc5a4a2019b2e9520c21c7830b7f708e", GitTreeState:"clean", BuildDate:"2017-04-03T20:44:38Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.6+IKS", GitCommit:"44b769243cf9b3fe09c1105a4a8749e8ff5f4ba8", GitTreeState:"clean", BuildDate:"2019-08-21T12:48:49Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Any help is greatly appreciated
Thank you
I compared it with the documentation on the Kubernetes website and the code is the same as the one I have written below...
Could you link the fragment of documentation against which you compared your code? As other people have already suggested in their answers and comments, your YAML is not valid. Are you sure you're not using an outdated tutorial or docs?
Let's debug it together step by step:
When I use exactly the same code you posted in your question, the error message I get is quite different from the one you posted:
error: error parsing pod.yml: error converting YAML to JSON: yaml:
line 12: did not find expected key
OK, so let's go to the mentioned line 12 and check where the problem might be:
11     ports:
12       containerPort: 3000
13      resources:
14        limits:
15          memory: "100Mi"
16          cpu: "100m"
Line 12 itself actually looks totally fine, so the problem must be elsewhere. Let's debug it further using an online YAML validator. It also reports that this YAML is syntactically incorrect, but it points out a different line:
(): did not find expected key while parsing a block mapping
at line 9 column 5
If you look carefully at the quoted fragment above, you may notice that the indentation of line 13 looks strange. When you remove the one unnecessary space right before resources (it should be at the same level as ports), the YAML validator will tell you that your YAML syntax is correct. However, even a syntactically valid YAML document is not necessarily a valid input for Kubernetes, which requires a specific structure following certain rules.
Let's try it again... Now kubectl apply -f pod.yml returns a quite different error:
Error from server (BadRequest): error when creating "pod.yml": pod in
version "v1" cannot be handled as a Pod: no kind "pod" is registered
for version "v1" in scheme
"k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:29"
A quick search will give you an answer to that as well: the proper value of the kind: key is Pod, not pod.
Once that is fixed, let's run kubectl apply -f pod.yml again. Now it returns a different error:
error: error validating "pod.yml": error validating data:
ValidationError(Pod.spec.containers[0].ports): invalid type for
io.k8s.api.core.v1.Container.ports: got "map", expected "array";
which is pretty self-explanatory: a "map" was used where an "array" was expected, and the error message points out precisely where, namely:
Pod.spec.containers[0].ports.
Let's correct this fragment:
11     ports:
12       containerPort: 3000
In YAML, the - character marks an element of an array (a list item), so it should look like this:
11     ports:
12     - containerPort: 3000
If we run kubectl apply -f pod.yml again, we finally get the expected message:
pod/helloworld-deployment created
The final, correct version of the Pod definition looks as follows:
apiVersion: v1
kind: Pod
metadata:
  name: helloworld-deployment
  labels:
    app: helloworld
spec:
  containers:
  - name: helloworld
    image: anishanil/kubernetes:node
    ports:
    - containerPort: 3000
    resources:
      limits:
        memory: "100Mi"
        cpu: "100m"
Your YAML has errors. You can use a YAML validation tool to check it, or use the version below instead:
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: helloworld
  name: helloworld-deployment
spec:
  containers:
  - image: "anishanil/kubernetes:node"
    name: helloworld
    ports:
    - containerPort: 3000
    resources:
      limits:
        cpu: 100m
        memory: 100Mi
resources should be at the same indentation level as image, name, and ports in the YAML definition. Or you can use the YAML below:
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: helloworld
  name: helloworld-deployment
spec:
  containers:
  - image: "anishanil/kubernetes:node"
    name: helloworld
    ports:
    - containerPort: 3000
    resources:
      limits:
        cpu: 100m
        memory: 100Mi
For anyone who stumbles on this because of a similar issue: I found a solution that worked for me in the answer linked below. I disregarded it at first because there was no way it should have solved the issue... but it did.
The solution is basically to check the box "Check for latest version" under the advanced drop-down in the Kubectl configuration window, or to add the following line under the Kubernetes task inputs:
checkLatest: true
Link to answer:
ADO: error validating data: the server could not find the requested
Which in turn links to this:
Release Agent job kubectl apply returns 'error validating data'
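For reference, the same setting in a YAML pipeline would sit roughly like this (the task name and the other inputs are assumptions based on the built-in Kubernetes task; adjust them to your release definition):
- task: Kubernetes@1
  inputs:
    command: apply
    arguments: -f deployment.yaml    # illustrative manifest name
    checkLatest: true                # download the latest kubectl instead of the bundled one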
When I try to create a pod with a non-root fsGroup (here 2000):
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 2000
  volumes:
  - name: sec-ctx-vol
    emptyDir: {}
  containers:
  - name: sec-ctx-demo
    image: gcr.io/google-samples/node-hello:1.0
    volumeMounts:
    - name: sec-ctx-vol
      mountPath: /data/demo
    securityContext:
      allowPrivilegeEscalation: true
I am hitting this error:
Error from server (Forbidden): error when creating "test.yml": pods "security-context-demo" is forbidden: pod.Spec.SecurityContext.RunAsUser is forbidden
Version
root#ubuntuguest:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:22:21Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:10:24Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Can anyone help me with how to set up a ClusterRoleBinding in the cluster?
If the issue is indeed caused by RBAC permissions, you can try creating a ClusterRoleBinding with a cluster role as explained here.
Instead of the last step in that post (using the authentication token to log in to the dashboard), you'll have to use that token and the config in your kubectl client when creating the pod.
For more info on the use of contexts, clusters, and users, see here.
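A minimal sketch of such a ClusterRoleBinding (the service account name is illustrative, and the roleRef is only an example; bind the narrowest role that actually grants what you need):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: pod-creator-binding
subjects:
- kind: ServiceAccount
  name: demo-sa              # illustrative service account
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin        # illustrative; prefer a narrower role in practice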
You need to disable the SecurityContextDeny admission plugin when setting up the kube-apiserver.
On the master node, run:
ps -ef | grep kube-apiserver
and check the enabled plugins:
--enable-admission-plugins=LimitRanger,NamespaceExists,NamespaceLifecycle,ResourceQuota,ServiceAccount,DefaultStorageClass,MutatingAdmissionWebhook,DenyEscalatingExec
Ref: SecurityContextDeny
cd /etc/kubernetes
cp apiserver.conf apiserver.conf.bak
vim apiserver.conf
Find the SecurityContextDeny keyword and delete it.
:wq
systemctl restart kube-apiserver
That fixed it.
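On kubeadm-based clusters there is usually no apiserver.conf; the flag lives in the kube-apiserver static Pod manifest instead. A sketch of the relevant fragment, assuming the standard kubeadm layout (keep your existing flags and simply leave SecurityContextDeny out of the plugin list):
# /etc/kubernetes/manifests/kube-apiserver.yaml (kubeadm layout; path is an assumption)
spec:
  containers:
  - command:
    - kube-apiserver
    - --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,DefaultStorageClass,MutatingAdmissionWebhook
    # ...other flags unchanged; SecurityContextDeny is not in the list
The kubelet restarts the API server automatically when a static Pod manifest changes.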
I am running minikube/Kubernetes and am having difficulty accessing a volume from a volumeMount in a deployment.
I can confirm that when the microservice starts up, it is not able to access the /config directory (i.e. the "mountPath" in the "volumeMounts"). I have verified that the hostPath path is valid.
I have experimented with a number of techniques and have also validated that the deployment file is correct. I have also tried single quotes, double quotes, and no quotes around the path specifications, but this does not resolve the issue.
Note that I am using a "hostPath" for simple testing purposes; however, this is the scenario I nevertheless need to address.
My minikube configuration is illustrated below:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"08e099554f3c31f6e6f07b448ab3ed78d0520507", GitTreeState:"clean", BuildDate:"2017-01-12T07:30:54Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"08e099554f3c31f6e6f07b448ab3ed78d0520507", GitTreeState:"clean", BuildDate:"1970-01-01T00:00:00Z", GoVersion:"go1.7.1", Compiler:"gc", Platform:"linux/amd64"}
I am running minikube on MacOS/Sierra version 10.12.3 (16D32).
My deployment file (deployment.yaml):
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: atmp1000-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: atmp1000
    spec:
      containers:
      - name: atmp1000
        image: atmp1000
        ports:
        - containerPort: 7010
        volumeMounts:
        - name: atmp1000-volume
          mountPath: '/config'
      volumes:
      - name: atmp1000-volume
        hostPath:
          path: '/Users/<username>/<some-path>/config'
Any help is appreciated.
In the interest of completeness, below is the solution that I found... I got the hostPath and mounts working on minikube (on Mac); it took a few steps and required several "minikube delete" commands to get the most current version and reset the environment. Below are some additional notes on how to get this working:
I had to use the xhyve driver to make it all work properly -- it probably works with other drivers, but I did not try them.
I found that minikube mounts host paths under "/Users", which means the volumes/hostPath/path should start with "/Users".
I found a variety of ways to make this work, including using claims, but the files in the original question now reflect a correct and simple configuration.
Mounting host directories is not supported by minikube yet. Please check https://github.com/kubernetes/minikube/issues/2
Internally, minikube uses a virtual machine to host Kubernetes. If you specify a hostPath in a Pod spec, Kubernetes will mount the specified directory from inside the VM, not the directory on your actual host.
If you really need to access something on your host machine, you have to use NFS or another network-based volume type.
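For illustration, with a plain hostPath like the one below, the directory is looked up inside the minikube VM's filesystem, not on the Mac (the path is illustrative):
      volumes:
      - name: atmp1000-volume
        hostPath:
          path: /data/config    # resolved inside the minikube VM, not on the macOS host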