Kubernetes ConfigMap error validating data: unknown

This is my Kubernetes config to create the ConfigMap:
kind: ConfigMap
apiVersion: v1
metadata:
  name: fileprocessing-acracbsscan-configmap
data:
  SCHEDULE_RUNNING_TIME: '20'
The kubectl version:
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.0", GitCommit:"d3ada0119e776222f11ec7945e6d860061339aad", GitTreeState:"clean", BuildDate:"2017-06-29T23:15:59Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
But I get the error: error validating data: unknown; if you choose to ignore these errors, turn validation off with --validate=false
I don't understand this "unknown" error; how can I resolve it?

First of all, you did not publish the Deployment. So to be clear, the Deployment.yaml should reference this ConfigMap:
envFrom:
- configMapRef:
    name: fileprocessing-acracbsscan-configmap
Also, I'm not sure, but try putting the data key and value in quotes:
"SCHEDULE_RUNNING_TIME": "20"
Use kubectl describe po <pod_name> -n <namespace> to get a clearer view of why the pod is failing.
Also try kubectl get events -n <namespace> to get all events in the current namespace; maybe that will clarify the reason as well.
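For completeness, a minimal Deployment sketch that consumes the ConfigMap this way; the Deployment name, labels, and image are placeholders, only the envFrom reference comes from the answer above:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fileprocessing            # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fileprocessing
  template:
    metadata:
      labels:
        app: fileprocessing
    spec:
      containers:
        - name: fileprocessing    # placeholder container name
          image: my-registry/fileprocessing:latest   # placeholder image
          envFrom:
            - configMapRef:
                name: fileprocessing-acracbsscan-configmap
```

Every key in the ConfigMap's data (here SCHEDULE_RUNNING_TIME) then becomes an environment variable in the container.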

Related

Unable to deploy emissary-ingress in local kubernetes cluster. Fails with `error validating data: ValidationError(CustomResourceDefinition.spec)`

I'm trying to install emissary-ingress using the instructions here.
It started failing with error no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta". I searched and found an answer on Stack Overflow which said to update apiextensions.k8s.io/v1beta1 to apiextensions.k8s.io/v1 which I did.
It also asked to use the admissionregistration.k8s.io/v1 which my kubectl already uses.
When I run the kubectl apply -f filename.yml command, the above error was gone, but a new error appeared: error: error validating data: ValidationError(CustomResourceDefinition.spec): unknown field "validation" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionSpec;
What should I do next?
My kubectl version - Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.4", GitCommit:"3cce4a82b44f032d0cd1a1790e6d2f5a55d20aae", GitTreeState:"clean", BuildDate:"2021-08-11T18:16:05Z", GoVersion:"go1.16.7", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.2", GitCommit:"8b5a19147530eaac9476b0ab82980b4088bbc1b2", GitTreeState:"clean", BuildDate:"2021-09-15T21:32:41Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}
minikube version - minikube version: v1.23.2
commit: 0a0ad764652082477c00d51d2475284b5d39ceed
EDIT:
The custom resource definition yml file: here
The rbac yml file: here
The validation field was deprecated and then removed in apiextensions.k8s.io/v1.
According to the official Kubernetes documentation, you should use a per-version schema field as a substitute for validation.
Here is a SAMPLE using schema instead of validation:
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  scope: Namespaced        # required in v1; added here for a complete sample
  names:                   # required in v1; added here for a complete sample
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
    - name: v1
      served: true
      storage: true
      schema:              # <-- schema replaces the removed validation field
        # openAPIV3Schema is the schema for validating custom objects.
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                cronSpec:
                  type: string
                  pattern: '^(\d+|\*)(/\d+)?(\s+(\d+|\*)(/\d+)?){4}$'
                image:
                  type: string
                replicas:
                  type: integer
                  minimum: 1
                  maximum: 10

kubeadm doesn't accept controlplane certificatekey in config file

used version:
kubeadm version: &version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-15T16:56:34Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
To join a new master node to the control plane using a registry other than the public one for downloads, I need to use kubeadm with a "--config file" parameter on the command line. I loaded the k8s container images into my registry and tried to use kubeadm accordingly. Unfortunately, in this case kubeadm doesn't accept the "certificateKey" from the config file.
kubeadm join my-k8s-api.de:443 --config kubeadm-join-config.yaml
the config file looks like this:
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: JoinConfiguration
discovery:
  bootstrapToken:
    apiServerEndpoint: "my-k8s-api.de:443"
    caCertHashes:
      - "sha256:9a5687aed5397958ebbca1c421ec56356dc4a5394f6846a64b071d56b3b41a7a"
    token: "4bh3s7.adon04r87zyh7gwj"
nodeRegistration:
  kubeletExtraArgs:
    # pause container image
    pod-infra-container-image: my-registry.de:5000/pause:3.1
controlPlane:
  certificateKey: "eb3abd79fb011ced254f2c834079d0fa2af62718f6b4750a1e5309c36ed40383"
actually I get back:
W1204 12:47:12.944020 54671 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1beta1", Kind:"JoinConfiguration"}: error unmarshaling JSON: while decoding JSON: json: unknown field "certificateKey"
when I use kubeadm join with "--control-plane --certificate-key XXXXXXXXX" on the command line I can successfully join the master node to the control plane, but that needs the node to have internet access.
Any guesses? Did I make a typo?
You are having this error because you are using apiVersion: kubeadm.k8s.io/v1beta1, which does not have that field. I found that out while going through the v1beta1 docs; you can have a look yourself for more details: v1beta1
So what you need to do is switch your apiVersion to:
apiVersion: kubeadm.k8s.io/v1beta2
which does have that field. For details please check v1beta2
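Applied to the config above, only the apiVersion line changes; the rest of the JoinConfiguration stays exactly as posted (sketch, elided sections marked with a comment):

```yaml
---
apiVersion: kubeadm.k8s.io/v1beta2   # v1beta2 accepts controlPlane.certificateKey
kind: JoinConfiguration
# ... discovery and nodeRegistration sections unchanged from the original ...
controlPlane:
  certificateKey: "eb3abd79fb011ced254f2c834079d0fa2af62718f6b4750a1e5309c36ed40383"
```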

Secret in version "v1" cannot be handled as a Secret: v1.Secret.Data: ReadMapCB: expect { or n, but found ", error found in #10 byte of

I'm trying to set up a private Docker image registry for a Kubernetes cluster. I'm following this link
$ cat ~/.docker/config.json | base64
ewoJImF1dGhzIjogewoJCSJsb2NhbGhvc3Q6NTAwMDAiOiB7CgkJCSJhdXRoIjogImJYbDFjMlZ5
T21oMGNHRnpjM2RrIgoJCX0KCX0KfQ==
I have file image-registry-secrets.yaml with below contents -
apiVersion: v1
kind: Secret
metadata:
  name: registrypullsecret
data:
  .dockerconfigjson:ewoJImF1dGhzIjogewoJCSJsb2NhbGhvc3Q6NTAwMDAiOiB7CgkJCSJhdXRoIjogImJYbDFjMlZ5T21oMGNHRnpjM2RrIgoJCX0KCX0KfQ==
type: kubernetes.io/dockerconfigjson
And when I run the below command
$kubectl create -f image-registry-secrets.yaml --validate=false && kubectl get secrets
Error from server (BadRequest): error when creating "image-registry-secrets.yml": Secret in version "v1" cannot be handled as a Secret: v1.Secret.Data: ReadMapCB: expect { or n, but found ", error found in #10 byte of ...|","data":".dockercon|..., bigger context ...|{"apiVersion":"v1","data":".dockerconfigjson:ewoJImF1dGhzIjogewoJCSJsb2NhbGhv|...
What is the issue in kubectl create -f image-registry-secrets.yaml --validate=false and how can I resolve this error.
Kubernetes version is -
$kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.0", GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"clean", BuildDate:"2018-12-03T21:04:45Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.4", GitCommit:"c27b913fddd1a6c480c229191a087698aa92f0b1", GitTreeState:"clean", BuildDate:"2019-02-28T13:30:26Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
You need to include a space after .dockerconfigjson, and before the base64 string - after that it should work.
When you paste the base64 password, it splits the password across a few lines and adds spaces between the lines. It is difficult to explain, and there is no need to add a space after .dockerconfigjson, as the yaml provided in the tutorial is correct. The problem occurs after pasting the base64-encoded json.
Open the secret in Vim and run:
:set listchars+=space:␣ and then :set list
This will show all the spaces as ␣; check that there are none between the password lines. This worked in my case.
Update:
The Vim command does not always show the spaces, so just navigate to the beginning of each line of your secret key and press backspace so they connect.
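The stray line breaks usually come from base64 itself: GNU base64 wraps its output at 76 columns, and pasting that wrapped output into the YAML is what splits the value. A sketch of encoding without wrapping (assumes GNU coreutils; the JSON below is a placeholder, use your real ~/.docker/config.json):

```shell
# -w 0 disables GNU base64's default 76-column wrapping, so the encoded
# value comes out as one unbroken line that can be pasted safely.
encoded=$(printf '%s' '{"auths":{"localhost:50000":{"auth":"placeholder"}}}' | base64 -w 0)
printf '%s\n' "$encoded"
```

Alternatively, kubectl create secret docker-registry registrypullsecret --docker-server=... --docker-username=... --docker-password=... builds the whole Secret for you and avoids manual base64 entirely (server/username/password values elided here).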

Kubernetes ud615 newbie secure-monolith.yaml `error validating data`?

I'm a Kubernetes newbie trying to follow along with the Udacity tutorial class linked on the Kubernetes website.
I execute
kubectl create -f pods/secure-monolith.yaml
That is referencing this official yaml file: https://github.com/udacity/ud615/blob/master/kubernetes/pods/secure-monolith.yaml
I get this error:
error: error validating "pods/secure-monolith.yaml": error validating data: unknown object type schema.GroupVersionKind{Group:"", Version:"v1", Kind:"Pod"}; if you choose to ignore these errors, turn validation off with --validate=false
FYI, the official lesson link is here: https://classroom.udacity.com/courses/ud615/lessons/7824962412/concepts/81991020770923
My first guess is that the provided yaml is out of date and incompatible with the current Kubernetes. Is this right? How can I fix/update?
I ran into the exact same problem but with a much simpler example.
Here's my yaml:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - image: nginx
      ports:
        - containerPort: 80
The command kubectl create -f pod-nginx.yaml returns:
error: error validating "pod-nginx.yaml": error validating data: unknown object type schema.GroupVersionKind{Group:"", Version:"v1", Kind:"Pod"}; if you choose to ignore these errors, turn validation off with --validate=false
As the error says, I am able to override it but I am still at a loss as to the cause of the original issue.
Local versions:
Ubuntu 16.04
minikube version: v0.22.2
kubectl version: 1.8
Thanks in advance!
After correcting the kubectl version (so it matches the server version), the issue is fixed; see:
$ kubectl create -f config.yml
configmap "test-cfg" created
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"7", ...
Server Version: version.Info{Major:"1", Minor:"7", ...
This is the case before modification:
$ kubectl create -f config.yml
error: error validating "config.yml": error validating data: unknown object type schema.GroupVersionKind{Group:"", Version:"v1", Kind:"ConfigMap"}; if you choose to ignore these errors, turn validation off with --validate=false
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"8",...
Server Version: version.Info{Major:"1", Minor:"7",...
In general, we should use the same version for kubectl and the Kubernetes server.
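A quick way to spot this skew is to compare the Minor fields in the kubectl version output. A sketch (the sample input below mirrors the mismatched output above; against a real cluster you would feed it `kubectl version` instead):

```shell
# Extract the client and server Minor versions from kubectl version
# output and warn when they differ.
version_output='Client Version: version.Info{Major:"1", Minor:"8", ...}
Server Version: version.Info{Major:"1", Minor:"7", ...}'

client=$(printf '%s\n' "$version_output" | sed -n 's/.*Client.*Minor:"\([0-9]*\)".*/\1/p')
server=$(printf '%s\n' "$version_output" | sed -n 's/.*Server.*Minor:"\([0-9]*\)".*/\1/p')

if [ "$client" != "$server" ]; then
  echo "version skew: client minor $client vs server minor $server"
fi
```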

How to avoid default secret being attached to ServiceAccount?

I'm trying to create a service account with either no secrets or just secret I specify and the kubelet always seems to be attaching the default secret no matter what.
Service Account definition
apiVersion: v1
automountServiceAccountToken: false
kind: ServiceAccount
metadata:
  name: test
secrets:
  - name: default-token-4pbsm
Submit
$ kubectl create -f service-account.yaml
serviceaccount "test" created
Get
$ kubectl get -o=yaml serviceaccount test
apiVersion: v1
automountServiceAccountToken: false
kind: ServiceAccount
metadata:
  creationTimestamp: 2017-05-30T12:25:30Z
  name: test
  namespace: default
  resourceVersion: "31414"
  selfLink: /api/v1/namespaces/default/serviceaccounts/test
  uid: 122b0643-4533-11e7-81c6-42010a8a005b
secrets:
  - name: default-token-4pbsm
  - name: test-token-5g3wb
As you can see above the test-token-5g3wb was automatically created & attached to the service account without me specifying it.
As far as I understand the automountServiceAccountToken only affects mounting of those secrets to a pod which was launched via that service account. (?)
Is there any way I can avoid that default secret being ever created and attached?
Versions
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T20:41:24Z", GoVersion:"go1.8.1", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:33:17Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
Your understanding of automountServiceAccountToken is right: it only affects pods that will be launched with that service account.
The automatic token addition is done by the token controller. Even if you edit the service account to delete the token, it will be added again.
You must pass a service account private key file to the token controller in the controller-manager by using the --service-account-private-key-file option. The private key will be used to sign generated service account tokens. Similarly, you must pass the corresponding public key to the kube-apiserver using the --service-account-key-file option. The public key will be used to verify the tokens during authentication.
The above is taken from the k8s docs. So in theory you could avoid this by not passing those flags, but I am not sure how to do that, and I do not recommend it.
This doc might also be helpful.