I'm seeing the following error when running a pod. I checked the documentation on the Kubernetes website and my code matches the example there, but I still end up with the error below.
error validating data: the server could not find the requested resource; if you choose to ignore these errors, turn validation off with --validate=false
apiVersion: v1
kind: pod
metadata:
  name: helloworld-deployment
  labels:
    app: helloworld
spec:
  containers:
  - name: helloworld
    image: anishanil/kubernetes:node
    ports:
      containerPort: 3000
     resources:
      limits:
        memory: "100Mi"
        cpu: "100m"
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.1", GitCommit:"b0b7a323cc5a4a2019b2e9520c21c7830b7f708e", GitTreeState:"clean", BuildDate:"2017-04-03T20:44:38Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.6+IKS", GitCommit:"44b769243cf9b3fe09c1105a4a8749e8ff5f4ba8", GitTreeState:"clean", BuildDate:"2019-08-21T12:48:49Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Any help is greatly appreciated
Thank you
I checked the documentation on the Kubernetes website and my code matches the example there...
Could you link to the fragment of documentation you are comparing your code with? As other people have already suggested in their answers and comments, your YAML is not valid. Are you sure you're not using an outdated tutorial or outdated docs?
Let's debug it together step by step:
When I use exactly the same code you posted in your question, the error message I get is quite different from the one you posted:
error: error parsing pod.yml: error converting YAML to JSON: yaml:
line 12: did not find expected key
OK, so let's go to the mentioned line 12 and check where the problem could be:
11     ports:
12       containerPort: 3000
13      resources:
14       limits:
15         memory: "100Mi"
16         cpu: "100m"
Line 12 itself actually looks totally fine, so the problem must be elsewhere. Let's debug it further using this online YAML validator. It also reports that this YAML is syntactically incorrect; however, it points at a different line:
(): did not find expected key while parsing a block mapping
at line 9 column 5
If you look carefully at the quoted fragment of code above, you may notice that the indentation level in line 13 looks quite strange. When you remove the one unnecessary space right before resources (it should be at the same level as ports), the YAML validator will tell you that your YAML syntax is correct. However, even though it may now be valid YAML, that does not mean it is valid input for Kubernetes, which requires a specific structure following certain rules.
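If you prefer to check the syntax locally instead of using an online validator, one simple option (assuming Python with PyYAML installed; this checks only YAML syntax, not the Kubernetes schema) is:
python3 -c "import yaml, sys; yaml.safe_load(open(sys.argv[1])); print('syntax OK')" pod.yml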
Let's try it again... Now kubectl apply -f pod.yml returns quite a different error:
Error from server (BadRequest): error when creating "pod.yml": pod in
version "v1" cannot be handled as a Pod: no kind "pod" is registered
for version "v1" in scheme
"k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:29"
A quick search will give you an answer to that as well. The proper value of the kind: key is Pod, not pod.
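If you are ever unsure about the exact spelling or capitalization of a kind, newer versions of kubectl can list all of them for you (output trimmed; the exact columns vary between kubectl versions):
kubectl api-resources | grep -w pods
pods   po   v1   true   Pod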
Once we've fixed that, let's run kubectl apply -f pod.yml again. Now it gives us back a different error:
error: error validating "pod.yml": error validating data:
ValidationError(Pod.spec.containers[0].ports): invalid type for
io.k8s.api.core.v1.Container.ports: got "map", expected "array";
which is pretty self-explanatory and means that you are not supposed to use a "map" in a place where an "array" was expected; the error message also precisely points out where, namely Pod.spec.containers[0].ports.
Let's correct this fragment:
11     ports:
12       containerPort: 3000
In YAML, the - character marks an element of an array (a list item), so it should look like this:
11     ports:
12     - containerPort: 3000
If we run kubectl apply -f pod.yml again, we finally get the expected message:
pod/helloworld-deployment created
The final, correct version of the Pod definition looks as follows:
apiVersion: v1
kind: Pod
metadata:
  name: helloworld-deployment
  labels:
    app: helloworld
spec:
  containers:
  - name: helloworld
    image: anishanil/kubernetes:node
    ports:
    - containerPort: 3000
    resources:
      limits:
        memory: "100Mi"
        cpu: "100m"
Your YAML has errors. You can use a YAML validation tool to check it, or use the version below instead:
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: helloworld
  name: helloworld-deployment
spec:
  containers:
    -
      image: "anishanil/kubernetes:node"
      name: helloworld
      ports:
        - containerPort: 3000
      resources:
        limits:
          cpu: 100m
          memory: 100Mi
resources should be at the same indentation level as image, name, and ports in the YAML definition. Or you can use the YAML below.
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: helloworld
  name: helloworld-deployment
spec:
  containers:
  - image: "anishanil/kubernetes:node"
    name: helloworld
    ports:
    - containerPort: 3000
    resources:
      limits:
        cpu: 100m
        memory: 100Mi
For anyone who stumbles on this because of a similar issue: I found a solution that worked for me in the answer linked below. I disregarded it at first because there was no way it should have solved the issue... but it did.
The solution is basically to check the "Check for latest version" box under the Advanced drop-down in the Kubectl task configuration, or to add the following line to the Kubernetes task inputs:
checkLatest: true
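If your pipeline is defined in YAML rather than in the classic editor, the equivalent is roughly the following (a sketch only; the service connection name and manifest path are placeholders, and the exact inputs depend on the task version you use):
- task: Kubernetes@1
  displayName: kubectl apply
  inputs:
    connectionType: Kubernetes Service Connection
    kubernetesServiceEndpoint: my-k8s-connection   # placeholder
    command: apply
    arguments: -f deployment.yaml
    checkLatest: true   # the "Check for latest version" box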
Link to answer:
ADO: error validating data: the server could not find the requested
Which in turn links to this:
Release Agent job kubectl apply returns 'error validating data'
error: unable to recognize "mongo-statefulset.yaml": no matches for kind "StatefulSet" in version "apps/v1beta1"
The following command causes the above response in Google Cloud Shell:
kubectl apply -f mongo-statefulset.yaml
I am working on deploying the MongoDB StatefulSet with its sidecar, following the instructions in this demo (https://codelabs.developers.google.com/codelabs/cloud-mongodb-statefulset/index.html?index=..%2F..index#5) to a T, but I received the error above. Does anyone have an explanation for the error, or know a way to deploy a MongoDB StatefulSet on GKE?
mongo-statefulset.yaml
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: "mongo"
  replicas: 3
  template:
    metadata:
      labels:
        role: mongo
        environment: test
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mongo
        image: mongo
        command:
        - mongod
        - "--replSet"
        - rs0
        - "--smallfiles"
        - "--noprealloc"
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-persistent-storage
          mountPath: /data/db
      - name: mongo-sidecar
        image: cvallance/mongo-k8s-sidecar
        env:
        - name: MONGO_SIDECAR_POD_LABELS
          value: "role=mongo,environment=test"
  volumeClaimTemplates:
  - metadata:
      name: mongo-persistent-storage
      annotations:
        volume.beta.kubernetes.io/storage-class: "fast"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 100Gi
Changing apiVersion in the YAML file to v1 returns a similar error:
error: unable to recognize "mongo-statefulset.yaml": no matches for kind "StatefulSet" in version "v1"
Explanation:
The apiVersion of a particular Kubernetes resource is subject to change over time. The fact that StatefulSet once used apiVersion: apps/v1beta1 doesn't mean it will use it forever.
As dany L already suggested in his answer, apps/v1beta1 has most probably already been removed in the Kubernetes version you're using. The mentioned changes in supported API versions were introduced quite a long time ago (the date of publication of the article is Thursday, July 18, 2019), so chances are that you're using a version newer than 1.15.
I'd also like to put it in the body of the answer to make it clearly visible without the need to search through the docs. So what actually happened when 1.16 was released was (among other major changes):
StatefulSet in the apps/v1beta1 and apps/v1beta2 API versions is no longer served.
Migrate to use the apps/v1 API version, available since v1.9. Existing persisted data can be retrieved/updated via the new version.
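You can also confirm which API groups and versions your cluster actually serves. For example (output shown from a 1.16+ cluster, where the beta versions of the apps group are gone):
kubectl api-versions | grep ^apps
apps/v1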
Solution:
Workaround: create your cluster with version 1.15 and it should work.
As to the above workaround: I agree that it is a workaround and of course it will work, but it is not a recommended approach. New Kubernetes versions are developed and released for a reason. Apart from changes in APIs, many other important things are introduced: bugs are fixed, existing functionality is improved, and new features are added. So there is no point in relying on something that will soon be removed anyway.
Instead, you should use the currently supported/required apiVersion for the particular Kubernetes resource, in your case StatefulSet.
The easiest way to check which apiVersion supports StatefulSet in your Kubernetes installation is by running:
kubectl explain statefulset
which will tell you (among many other interesting things) which apiVersion it is supposed to use:
KIND: StatefulSet
VERSION: apps/v1
...
To summarize your particular case:
If you're using Kubernetes newer than 1.15, edit your mongo-statefulset.yaml and replace apiVersion: apps/v1beta1 with the currently supported apiVersion: apps/v1.
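Note that apps/v1 is also stricter than apps/v1beta1: spec.selector is required and must match the Pod template labels. A sketch of the top of the corrected manifest, reusing the labels already present in your file:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: "mongo"
  replicas: 3
  selector:
    matchLabels:
      role: mongo
      environment: test
  template:
    metadata:
      labels:
        role: mongo
        environment: test
    # ... the rest of the spec stays the same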
Possible explanation.
Workaround: create your cluster with version 1.15 and it should work.
I am a beginner and am just starting to learn Kubernetes.
I'm trying to create a Pod from a file named myfirstpodwithlabels.yaml, with the following specification in my YAML file, but when I try to create the Pod I get this error.
error: error validating "myfirstpodwithlabels.yaml": error validating data: [ValidationError(Pod.spec): unknown field "contianers" in io.k8s.api.core.v1.PodSpec, ValidationError(Pod.spec): missing required field "containers" in io.k8s.api.core.v1.PodSpec]; if you choose to ignore these errors, turn validation off with --validate=false
My YAML file specification
kind: Pod
apiVersion: v1
metadata:
  name: myfirstpodwithlabels
  labels:
    type: backend
    env: production
spec:
  contianers:
  - image: aamirpinger/helloworld:latest
    name: container1
    ports:
    - containerPort: 80
There is a typo in the .spec section of your YAML.
You have written:
"contianers"
(as seen in the error message) when it really should be:
"containers"
Also, for future reference: if there is an issue with your resource definition YAML, it helps if you actually post the YAML on Stack Overflow; otherwise helping is not an easy task.
After fixing the problem from this topic, Can't use Google Cloud Kubernetes substitutions (the YAML files are all there, so I won't copy-paste them again), I ran into a new problem. I'm creating a new topic because the previous one already has a correct answer.
Step #2: Running: kubectl apply -f deployment.yaml
Step #2: Warning:
kubectl apply should be used on resource created by either kubectl
create --save-config or kubectl apply
Step #2: The Deployment
"myproject" is invalid: spec.selector: Invalid value:
v1.LabelSelector{MatchLabels:map[string]string{"app":"myproject",
"run":"myproject"},
MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is
immutable
I've checked similar issues but haven't been able to find anything related.
Also, is it possible that this error is related to upgrading from App Engine -> Docker -> Kubernetes? I created a valid configuration at each step. Maybe there are some things that were created earlier and are immutable now? What should I do in this case?
One more note that may matter: it says "kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply" (you can see it above), but executing
kubectl create deployment myproject --image=gcr.io/myproject/myproject
gives me this
Error from server (AlreadyExists): deployments.apps "myproject" already exists
which is actually expected but, at the same time, contradicts the warning above (at least from my perspective).
Any idea?
Output of kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.7", GitCommit:"8fca2ec50a6133511b771a11559e24191b1aa2b4", GitTreeState:"clean", BuildDate:"2019-09-18T14:47:22Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.11-gke.14", GitCommit:"56d89863d1033f9668ddd6e1c1aea81cd846ef88", GitTreeState:"clean", BuildDate:"2019-11-07T19:12:22Z", GoVersion:"go1.12.11b4", Compiler:"gc", Platform:"linux/amd64"}
Current YAML file:
steps:
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args: [
    '-c',
    'docker pull gcr.io/$PROJECT_ID/myproject:latest || exit 0'
  ]
- name: 'gcr.io/cloud-builders/docker'
  args: [
    'build',
    '-t',
    'gcr.io/$PROJECT_ID/myproject:$BRANCH_NAME-$COMMIT_SHA',
    '-t',
    'gcr.io/$PROJECT_ID/myproject:latest',
    '.'
  ]
- name: 'gcr.io/cloud-builders/kubectl'
  args: [ 'apply', '-f', 'deployment.yaml' ]
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=<region>'
  - 'CLOUDSDK_CONTAINER_CLUSTER=myproject'
- name: 'gcr.io/cloud-builders/kubectl'
  args: [
    'set',
    'image',
    'deployment',
    'myproject',
    'myproject=gcr.io/$PROJECT_ID/myproject:$BRANCH_NAME-$COMMIT_SHA'
  ]
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=<region>'
  - 'CLOUDSDK_CONTAINER_CLUSTER=myproject'
  - 'DB_PORT=5432'
  - 'DB_SCHEMA=public'
  - 'TYPEORM_CONNECTION=postgres'
  - 'FE=myproject'
  - 'V=1'
  - 'CLEAR_DB=true'
  - 'BUCKET_NAME=myproject'
  - 'BUCKET_TYPE=google'
  - 'KMS_KEY_NAME=storagekey'
timeout: 1600s
images:
- 'gcr.io/$PROJECT_ID/myproject:$BRANCH_NAME-$COMMIT_SHA'
- 'gcr.io/$PROJECT_ID/myproject:latest'
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myproject
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myproject
  template:
    metadata:
      labels:
        app: myproject
    spec:
      containers:
      - name: myproject
        image: gcr.io/myproject/github.com/weekendman/{{repo name here}}:latest
        ports:
        - containerPort: 80
From apps/v1 on, a Deployment’s label selector is immutable after it gets created.
Excerpt from the Kubernetes documentation:
Note: In API version apps/v1, a Deployment’s label selector is
immutable after it gets created.
So, you can delete this deployment first, then apply it.
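For example (assuming the Deployment is called myproject, as in your deployment.yaml, and lives in the default namespace):
kubectl delete deployment myproject
kubectl apply -f deployment.yaml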
You get MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable because the selector is different from the one in your previous deployment.
Try looking at the existing deployment with kubectl get deployment -o yaml. I suspect the existing YAML has a different matchLabels stanza.
Specifically, your file has:
matchLabels:
  app: myproject
My guess is that the output of kubectl get deployment -o yaml will have something different, like:
matchLabels:
  app: old-project-name
or
matchLabels:
  app: myproject
  version: alpha
The new deployment cannot change the matchLabels stanza because, well, because it is immutable. That stanza in the new deployment must match the old. If you want to change it, you need to delete the old deployment with kubectl delete deployment myproject.
Note: if you do that in production your app will be unavailable for a while. (A much longer discussion about how to do this in production is not useful here.)
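A quick way to inspect only the selector of the Deployment that already exists in the cluster (again assuming the name myproject) is:
kubectl get deployment myproject -o jsonpath='{.spec.selector}'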
I am struggling with a simple one-replica deployment of the official Event Store image on a Kubernetes cluster. I am using a persistent volume for the data storage.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-eventstore
spec:
  strategy:
    type: Recreate
  replicas: 1
  template:
    metadata:
      labels:
        app: my-eventstore
    spec:
      imagePullSecrets:
      - name: runner-gitlab-account
      containers:
      - name: eventstore
        image: eventstore/eventstore
        env:
        - name: EVENTSTORE_DB
          value: "/usr/data/eventstore/data"
        - name: EVENTSTORE_LOG
          value: "/usr/data/eventstore/log"
        ports:
        - containerPort: 2113
        - containerPort: 2114
        - containerPort: 1111
        - containerPort: 1112
        volumeMounts:
        - name: eventstore-storage
          mountPath: /usr/data/eventstore
      volumes:
      - name: eventstore-storage
        persistentVolumeClaim:
          claimName: eventstore-pv-claim
And this is the yaml for my persistent volume claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: eventstore-pv-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
The deployment works fine. It's when I tested for durability that I started to encounter a problem. I delete a pod to force the actual state to diverge from the desired state and see how Kubernetes reacts.
It immediately launched a new pod to replace the deleted one, and the admin UI was still showing the same data. But after deleting a pod for the second time, the new pod did not come up. I got an error message that said "record too large", which indicates corrupted data according to this discussion: https://groups.google.com/forum/#!topic/event-store/gUKLaxZj4gw
I tried again a couple of times, with the same result every time: after deleting the pod a second time, the data is corrupted. This has me worried that an actual failure would cause a similar result.
However, when deploying new versions of the image, or when scaling the deployment to zero and back to one, no data corruption occurs; after several tries everything is fine. Which is odd, since that also completely replaces the pods (I checked the pod IDs and they changed).
This has me wondering if deleting a pod using kubectl delete is somehow more forceful in the way the pod is terminated. Do any of you have similar experience, or insights on if/how delete is different? Thanks in advance for your input.
Regards,
Oskar
I was referred to this pull request on GitHub, which stated that the process was not being killed properly: https://github.com/EventStore/eventstore-docker/pull/52
After building a new image with the Dockerfile from the pull request, I put this image in the deployment. I am now killing pods left and right and there are no data corruption issues anymore.
Hope this helps someone facing the same issue.
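For reference, the procedure was roughly this (the registry and tag below are placeholders; use whatever registry your cluster can pull from):
# build and push an image from the patched Dockerfile
docker build -t registry.example.com/eventstore:graceful-shutdown .
docker push registry.example.com/eventstore:graceful-shutdown
and then point the deployment at it:
containers:
- name: eventstore
  image: registry.example.com/eventstore:graceful-shutdown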
I want to add a new Pod property in the YAML file when creating a Pod in Kubernetes.
Looking at how existing properties are implemented, I made all the required changes in the Kubernetes source code, but I still get the validation error below:
error: error validating "podbox.yml": error validating data: found invalid field newproperty for v1.Pod
Example Pod YAML file:
apiVersion: v1
kind: Pod
metadata:
  name: podbox
  namespace: default
spec:
  containers:
  - image: busybox
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
    name: podbox
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "1"
  restartPolicy: Always
  newproperty: false
newproperty is not getting parsed while creating the Pod.
Are there any specific changes required?
You don't want to add new fields to kind: Pod, because then your Kubernetes code will be on a fork and your config will be non-portable.
If you are planning a contribution to submit to the Kubernetes code base, you should first join the appropriate SIG (sig-node or sig-apps for Pod changes) and get support for your proposed change. Someone there can point you to example PRs that you can follow to add a field.
If you just need to put some extra information in a Pod that you or your own programs can parse, then use an annotation.
If you want to create a new type in your Kubernetes cluster, use a Custom Resource.
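For example, a minimal sketch of the annotation approach (the annotation key here is made up for illustration; use a prefix you own):
apiVersion: v1
kind: Pod
metadata:
  name: podbox
  namespace: default
  annotations:
    mycompany.example/newproperty: "false"   # arbitrary key/value that your own tooling can read
spec:
  containers:
  - name: podbox
    image: busybox
    command: ["sleep", "3600"]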
Just remove the line
newproperty: false
from your YAML and you should be fine.
As far as I know, you should be declaring them inside data:
apiVersion: v1
kind: Pod
metadata:
  name: podbox
  namespace: default
data:
  newproperty: false
If you want an environment variable to be passed to the container, use this structure:
....
containers:
- name: name
  image: some_image
  env:
  - name: SOME_VAR
    value: "Hello from the kubernetes"
....