I created the following ns.yaml file:
apiVersion: v1
kind: Namespace
metadata:
  Name: testns
I am getting the following error:
error: error validating "ns.yaml": error validating data: ValidationError(Namespace.metadata): unknown field "Name" in io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta; if you choose to ignore these errors, turn validation off with --validate=false
The root cause is clear in the error message: unknown field "Name" in io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta
This means you need to use name instead of Name; Kubernetes field names are case-sensitive.
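With that fixed, ns.yaml becomes:
apiVersion: v1
kind: Namespace
metadata:
  name: testns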
For more info about the YAML format of the Namespace object's metadata, run the following command:
kubectl explain namespace.metadata
It prints the documentation for each field under metadata.
I am a beginner with Kubernetes. I have enabled it from Docker Desktop and now I want to install the Kubernetes Dashboard.
I followed this link:
https://github.com/kubernetes/dashboard#getting-started
And I executed my first command in PowerShell as an administrator:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml
I get the following error:
error: error validating
"https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml":
error validating data:
ValidationError(Deployment.spec.template.spec.securityContext):
unknown field "seccompProfile" in
io.k8s.api.core.v1.PodSecurityContext; if you choose to ignore these
errors, turn validation off with --validate=false
So I tried the same command with --validate=false.
That ran with no errors, and then I executed:
kubectl proxy
I got an access token using:
kubectl describe secret -n kube-system
and I tried to access the link provided in the guide:
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
I got the following Swagger response:
The error indicates that your cluster version is not compatible with seccompProfile.type: RuntimeDefault. In this case, don't apply the dashboard spec (https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml) directly; download it and comment out the following lines in the spec:
...
spec:
  # securityContext:
  #   seccompProfile:
  #     type: RuntimeDefault
...
Then apply the updated spec: kubectl apply -f recommended.yaml
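Putting it together, a minimal sequence, assuming curl is available:
curl -o recommended.yaml https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml
# edit recommended.yaml to comment out the seccompProfile block shown above, then:
kubectl apply -f recommended.yaml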
I am trying to install a second dapr helm chart in namespace "test" while it is already installed in namespace "dev" in the same cluster.
helm upgrade -i --namespace $NAMESPACE \
dapr-uat dapr/dapr
A release is already installed with the following name:
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
dapr dev 1 2021-10-06 21:16:27.244997 +0100 +01 deployed dapr-1.4.2 1.4.2
I get the following error:
Error: rendered manifests contain a resource that already exists. Unable to continue with install: ClusterRole "dapr-operator-admin" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "dapr-uat": current value is "dapr"; annotation validation error: key "meta.helm.sh/release-namespace" must equal "test": current value is "dev"
I tried specifying a different version for the installation, but with no success:
helm upgrade -i --namespace $NAMESPACE \
dapr-uat dapr/dapr \
--version 1.4.0
I am starting to think the current chart does not allow multiple instances (development and testing) on the same cluster.
Has anyone faced the same issue?
Thank you.
The existing dapr chart creates cluster-wide resources whose names do not take the namespace into account. So, when trying to install a second configuration, a name conflict occurs with the pre-existing cluster-wide resources:
Error: UPGRADE FAILED: rendered manifests contain a resource that already exists. Unable to continue with update: ClusterRole "dapr-operator-admin" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "dapr-uat": current value is "dapr-dev"; annotation validation error: key "meta.helm.sh/release-namespace" must equal "uat": current value is "dev"
I had to edit the chart:
git clone https://github.com/dapr/dapr.git
I edited the RBAC resources in the subchart dapr_rbac so that resource names now include the namespace, in dapr_rbac/templates/ClusterRoleBinding.yaml.
Previous file:
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: dapr-operator
...
The edit changes the metadata name on all resources:
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: dapr-operator-{{ .Release.Namespace }}
...
The same logic has been applied to the MutatingWebhookConfiguration in the subchart dapr_sidecar_injector, in the file dapr_sidecar_injector/templates/dapr_sidecar_injector_webhook_config.yaml.
For the full edits, please see the forked repo at:
https://github.com/redaER7/dapr/tree/DEV/charts/dapr
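With those edits in place, the second release can be installed from the local, patched chart instead of the published one. A sketch, assuming the clone sits in the current directory and the target namespace is test:
helm upgrade -i --namespace test \
  dapr-uat ./dapr/charts/dapr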
I created a K8s cluster using Terraform, and I also created the CRD for the Crunchydata Postgres Operator.
I obtained the CRD for Postgres cluster creation from this link.
The Terraform script looks like the one below (trimmed):
resource "kubectl_manifest" "pgocluster" {
yaml_body = <<YAML
apiVersion: crunchydata.com/v1
kind: Pgcluster
metadata:
annotations:
current-primary: ${var.pgo_cluster_name}
labels:
crunchy-pgha-scope: ${var.pgo_cluster_name}
deployment-name: ${var.pgo_cluster_name}
name: ${var.pgo_cluster_name}
pg-cluster: ${var.pgo_cluster_name}
pgo-version: 4.6.2
pgouser: admin
name: ${var.pgo_cluster_name}
namespace: ${var.cluster_namespace}
YAML
}
But when I execute 'terraform apply', it errors with:
Error: pgo/UserGrp failed to create kubernetes rest client for update of resource: resource [crunchydata.com/v1/Pgcluster] isn't valid for cluster, check the APIVersion and Kind fields are valid
However, according to the official link mentioned above, the following should work:
apiVersion: crunchydata.com/v1
kind: Pgcluster
I am not sure whether it's an issue with Terraform or the link was not updated correctly.
Kindly let me know what should be changed or done to fix this, as I am stuck on this issue.
Finally, I figured out the issue: pgo_cluster_name was not given in lowercase.
I was only able to see the following, more descriptive error when I applied the target individually, i.e. terraform apply --target=<target_name>:
Error: pgo/UserGrp failed to run apply: error when creating "/tmp/773985147kubectl_manifest.yaml": Pgcluster.crunchydata.com "UserGrp" is invalid: metadata.name: Invalid value: "UserGrp": a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
I had set pgo_cluster_name=UsrGrp instead of pgo_cluster_name=usrgrp.
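To guard against this in the Terraform script itself, one option is to lowercase the value wherever it is interpolated into a Kubernetes name, using Terraform's built-in lower() function. A sketch of the relevant lines only:
  # lower() keeps the generated name RFC 1123 compliant even if the variable is mixed-case
  name: ${lower(var.pgo_cluster_name)}
  namespace: ${var.cluster_namespace}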
I have an AWS EKS cluster, and below is my command to replace the existing configuration.
kubectl create configmap flink-config --from-file=./config -o yaml --dry-run | kubectl replace -
But when I run this command, it prints the following deprecation warning:
W1009 17:00:14.998329 323115 helpers.go:553] --dry-run is deprecated and can be replaced with --dry-run=client.
Will it do the same thing if I replace --dry-run with --dry-run=client?
About --dry-run=client we learn:
--dry-run=client flag to preview the object that would be sent to
your cluster, without really submitting it.
And in the Kubernetes API reference we read:
Must be "none", "server", or "client". If client strategy, only print
the object that would be sent, without sending it. If server strategy,
submit server-side request without persisting the resource.
Performing local tests, I realized that when I try to create an already-existing config object using --dry-run=server, the following error occurs: the API server tells us that a ConfigMap named flink-config already exists.
kubectl create configmap flink-config --from-file=./config -o yaml --dry-run=server
Error from server (AlreadyExists): configmaps "flink-config" already exists
However, if I use --dry-run=client, the object is not validated by the API server, only by the client, so the YAML is simply printed:
kubectl create configmap flink-config --from-file=./config -o yaml --dry-run=client
apiVersion: v1
data:
  config: |
    FOO: foo
    MYVAR: hello
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: flink-config
So basically, yes, --dry-run=client has the same effect as the deprecated --dry-run. The equivalent of --dry-run=server used to be --server-dry-run, which was deprecated in v1.18.
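Applied to the command from the question, the updated form would be (note that kubectl replace reads the generated manifest from stdin via -f -):
kubectl create configmap flink-config --from-file=./config -o yaml --dry-run=client | kubectl replace -f -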
I am a beginner and have just started learning Kubernetes.
I'm trying to create a Pod from a file named myfirstpodwithlabels.yaml with the specification below in my YAML file. But when I try to create the Pod, I get this error:
error: error validating "myfirstpodwithlabels.yaml": error validating data: [ValidationError(Pod.spec): unknown field "contianers" in io.k8s.api.core.v1.PodSpec, ValidationError(Pod.spec): missing required field "containers" in io.k8s.api.core.v1.PodSpec]; if you choose to ignore these errors, turn validation off with --validate=false
My YAML file specification:
kind: Pod
apiVersion: v1
metadata:
  name: myfirstpodwithlabels
  labels:
    type: backend
    env: production
spec:
  contianers:
    - image: aamirpinger/helloworld:latest
      name: container1
      ports:
        - containerPort: 80
There is a typo in the .spec section of your YAML.
You have written "contianers", as seen in the error message, when it really should be "containers".
Also, for future reference: if there is an issue with your resource definition YAML, it helps if you actually post the YAML on Stack Overflow; otherwise helping is not an easy task.
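For reference, the corrected spec section would be:
spec:
  containers:
    - image: aamirpinger/helloworld:latest
      name: container1
      ports:
        - containerPort: 80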