I am a beginner and have just started learning Kubernetes.
I'm trying to create a Pod from a file named myfirstpodwithlabels.yaml and have written the following specification in my YAML file. But when I try to create the Pod I get this error.
error: error validating "myfirstpodwithlabels.yaml": error validating data: [ValidationError(Pod.spec): unknown field "contianers" in io.k8s.api.core.v1.PodSpec, ValidationError(Pod.spec): missing required field "containers" in io.k8s.api.core.v1.PodSpec]; if you choose to ignore these errors, turn validation off with --validate=false
My YAML file specification
kind: Pod
apiVersion: v1
metadata:
  name: myfirstpodwithlabels
  labels:
    type: backend
    env: production
spec:
  contianers:
    - image: aamirpinger/helloworld:latest
      name: container1
      ports:
        - containerPort: 80
There is a typo in the .spec section of your YAML.
You have written:
"contianers"
as seen in the error message, when it really should be:
"containers"
Also, for future reference: if there is an issue in your resource definition YAML, it helps if you actually post the YAML on Stack Overflow; otherwise helping is not an easy task.
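For reference, this is the manifest from the question with only that field name corrected; everything else is unchanged:
apiVersion: v1
kind: Pod
metadata:
  name: myfirstpodwithlabels
  labels:
    type: backend
    env: production
spec:
  containers:
    - image: aamirpinger/helloworld:latest
      name: container1
      ports:
        - containerPort: 80
With that change, kubectl apply -f myfirstpodwithlabels.yaml should pass validation.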
apiVersion: v1
kind: ConfigMap
metadata:
  name: kibana
  namespace: the-project
  labels:
    app: kibana
    env: dev
data:
  # kibana.yml is mounted into the Kibana container
  # see https://github.com/elastic/kibana/blob/master/config/kibana.yml
  # Kubernetes Ingress is used to route kib.the-project.d4ldev.txn2.com
  kibana.yml: |- server.name: kib.the-project.d4ldev.txn2.com server.host: "0" elasticsearch.url: http://elasticsearch:9200
This is my config.yml file. When I try to create this project, I get this error:
error: error parsing configmap.yml: error converting YAML to JSON: yaml: line 13: did not find expected comment or line break
I can't get rid of the error even after removing the space at line 13, column 17.
The YAML content can be put directly on multiple lines, formatted like real YAML. Take a look at the following example:
data:
  # kibana.yml is mounted into the Kibana container
  # see https://github.com/elastic/kibana/blob/master/config/kibana.yml
  # Kubernetes Ingress is used to route kib.the-project.d4ldev.txn2.com
  kibana.yml: |-
    server:
      name: kib.the-project.d4ldev.txn2.com
      host: "0"
    elasticsearch.url: http://elasticsearch:9200
This works when put in a ConfigMap, and it should also work when provided to a Helm chart (depending on how the Helm templates are written).
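Once the ConfigMap is created, you can double-check that the block scalar kept the line breaks (name and namespace taken from the question):
kubectl describe configmap kibana -n the-project
The output prints the rendered kibana.yml value, so you can see it really is a small multi-line document rather than one long line.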
Per this spec on GitHub and these Helm instructions, I'm trying to upgrade our Helm installation of Datadog using the following syntax:
helm upgrade datadog-monitoring --set datadog.confd."kube_scheduler\.yaml".instances[0].prometheus_url="http://localhost:10251/metrics",datadog.confd."kube_scheduler\.yaml".init_config= stable/datadog
However, I'm getting the error below regardless of any attempt at altering the syntax of the prometheus_url value (putting the URL in quotes, escaping the quotes, etc.):
Error: UPGRADE FAILED: failed to create resource: ConfigMap in version "v1" cannot be handled as a ConfigMap: v1.ConfigMap.Data: ReadString: expects " or n, but found {, error found in #10 byte of ...|er.yaml":{"instances|..., bigger context ...|{"apiVersion":"v1","data":{"kube_scheduler.yaml":{"instances":[{"prometheus_url":"\"http://localhost|...
If I add the --dry-run --debug flags I get the following yaml output:
REVISION: 7
RELEASED: Mon Mar 2 14:28:52 2020
CHART: datadog-1.39.7
USER-SUPPLIED VALUES:
datadog:
  confd:
    kube_scheduler.yaml:
      init_config: ""
      instances:
      - prometheus_url: http://localhost:10251/metrics
The YAML output appears to match the integration as specified on this GitHub page.
Hey!
Sorry in advance if my answer isn't correct; I'm a complete newbie in Kubernetes and Helm and can't be sure this will help, but maybe it does.
So, the problem, as I understand it, is in the resulting ConfigMap configuration. From my experience, I ran into the same thing with the following config:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
  labels:
    group: mock
data:
  APP_NAME: my-mock
  APP_PORT: 8080
  APP_PATH: /api
And I could solve it only by surrounding all the values with quotes (ConfigMap data values must be strings, so an unquoted 8080 is parsed as a YAML integer and rejected):
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
  labels:
    group: mock
data:
  APP_NAME: "my-mock"
  APP_PORT: "8080"
  APP_PATH: "/api"
Currently I'm using Kubernetes version 1.11+. Previously I always used the following step in my Cloud Build scripts:
- name: 'gcr.io/cloud-builders/kubectl'
  id: 'deploy'
  args:
  - 'apply'
  - '-f'
  - 'k8s'
  - '--recursive'
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=${_REGION}'
  - 'CLOUDSDK_CONTAINER_CLUSTER=${_CLUSTER_NAME}'
And the command worked as expected; at that time I was using k8s version 1.10+. However, recently I got the following errors:
spec.clusterIP: Invalid value: "": field is immutable
metadata.resourceVersion: Invalid value: "": must be specified for an update
So I'm wondering: is this expected behavior for Service resources?
Here's my YAML config for my service:
apiVersion: v1
kind: Service
metadata:
  name: {name}
  namespace: {namespace}
  annotations:
    beta.cloud.google.com/backend-config: '{"default": "{backend-config-name}"}'
spec:
  ports:
  - port: {port-num}
    targetPort: {port-num}
  selector:
    app: {label}
    environment: {env}
  type: NodePort
This is due to https://github.com/kubernetes/kubernetes/issues/71042.
https://github.com/kubernetes/kubernetes/pull/66602 should be cherry-picked into 1.11.
I sometimes run into this error when manually running kubectl apply -f somefile.yaml.
I think it happens when someone has changed the specification through the Kubernetes Dashboard instead of by applying new changes through kubectl apply.
To fix it, I run kubectl edit services/servicename, which opens the YAML specification in my default editor. Then I remove the fields metadata.resourceVersion and spec.clusterIP, save, and run kubectl apply -f somefile.yaml again.
You need to set spec.clusterIP in your Service YAML file, with the value replaced by the cluster IP address of the existing Service, as shown below:
spec:
  clusterIP:
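The current value can be read from the live object, for example (servicename is a placeholder):
kubectl get service servicename -o jsonpath='{.spec.clusterIP}'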
Your issue is discussed in the following GitHub issue, where there is also a workaround to help you bypass it.
I am new to Kubernetes and Docker. I am trying to chain 2 containers in a pod such that the second container should not start until the first one is running. I searched and found a solution here. It says to add a "depends" field in the YAML file for the container that depends on another container. The following is a sample of my YAML file:
apiVersion: v1beta4
kind: Pod
metadata:
  name: test
  labels:
    apps: test
spec:
  containers:
  - name: container1
    image: <image-name>
    ports:
    - containerPort: 8080
      hostPort: 8080
  - name: container2
    image: <image-name>
    depends: ["container1"]
Kubernetes gives me the following error after I apply the above YAML file:
Error from server (BadRequest): error when creating "new.yaml": Pod in version "v1beta4" cannot be handled as a Pod: no kind "Pod" is registered for version "v1beta4"
Is the apiVersion the problem here? I even tried v1, apps/v1, and extensions/v1, but got the following errors (respectively):
error: error validating "new.yaml": error validating data: ValidationError(Pod.spec.containers[1]): unknown field "depends" in io.k8s.api.core.v1.Container; if you choose to ignore these errors, turn validation off with --validate=false
error: unable to recognize "new.yaml": no matches for apps/, Kind=Pod
error: unable to recognize "new.yaml": no matches for extensions/, Kind=Pod
What am I doing wrong here?
As I understand it, there is no field called depends in the Pod specification.
You can verify and validate this with the following command:
kubectl explain pod.spec --recursive
I have attached a link to help understand the structure of k8s resources:
kubectl-explain
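For the specific field from the question, you can narrow the check down; an empty result means no such field exists (grep is only used as a filter here):
kubectl explain pod.spec.containers --recursive | grep -i depends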
There is no property "depends" in the Container API object.
You can split your containers into two different Pods and let the Kubernetes CLI wait for the first container to become available:
kubectl create -f container1.yaml --wait # run command until the pod is available.
kubectl create -f container2.yaml --wait
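Building on that split-into-two-Pods idea: instead of relying on the CLI to wait, the second Pod can carry an init container that blocks until the first Pod answers. Init containers run to completion before the app containers start. A rough sketch (container1-service and the URL are hypothetical placeholders for whatever endpoint the first Pod exposes):
apiVersion: v1
kind: Pod
metadata:
  name: container2-pod
spec:
  initContainers:
  # blocks until the Service in front of container1 responds
  - name: wait-for-container1
    image: busybox
    command: ['sh', '-c', 'until wget -q -O- http://container1-service:8080/; do sleep 2; done']
  containers:
  - name: container2
    image: <image-name>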
I am having an issue configuring GCR with ImagePullSecrets in my deployment.yaml file. It cannot pull the image due to a permission error:
Failed to pull image "us.gcr.io/optimal-jigsaw-185903/syncope-deb": rpc error: code = Unknown desc = Error response from daemon: denied: Permission denied for "latest" from request "/v2/optimal-jigsaw-185903/syncope-deb/manifests/latest".
I am sure that I am doing something wrong, but I followed this tutorial (and others like it) with still no luck:
https://ryaneschinger.com/blog/using-google-container-registry-gcr-with-minikube/
The pod logs are equally useless:
"syncope-deb" in pod "syncope-deployment-64479cdcf5-cng57" is waiting to start: trying and failing to pull image
My deployment looks like:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  # Unique key of the Deployment instance
  name: syncope-deployment
  namespace: default
spec:
  # 3 Pods should exist at all times.
  replicas: 1
  # Keep record of 2 revisions for rollback
  revisionHistoryLimit: 2
  template:
    metadata:
      labels:
        # Apply this label to pods and default
        # the Deployment label selector to this value
        app: syncope-deb
    spec:
      imagePullSecrets:
      - name: mykey
      containers:
      - name: syncope-deb
        # Run this image
        image: us.gcr.io/optimal-jigsaw-185903/syncope-deb
        ports:
        - containerPort: 9080
And I have a secret in my default namespace called "mykey" that looks like this (secure data edited out):
{"https://gcr.io":{"username":"_json_key","password":"{\n \"type\": \"service_account\",\n \"project_id\": \"optimal-jigsaw-185903\",\n \"private_key_id\": \"EDITED_TO_PROTECT_THE_INNOCENT\",\n \"private_key\": \"-----BEGIN PRIVATE KEY-----\\EDITED_TO_PROTECT_THE_INNOCENT\\n-----END PRIVATE KEY-----\\n\",\n \"client_email\": \"bobs-service#optimal-jigsaw-185903.iam.gserviceaccount.com\",\n \"client_id\": \"109145305665697734423\",\n \"auth_uri\": \"https://accounts.google.com/o/oauth2/auth\",\n \"token_uri\": \"https://accounts.google.com/o/oauth2/token\",\n \"auth_provider_x509_cert_url\": \"https://www.googleapis.com/oauth2/v1/certs\",\n \"client_x509_cert_url\": \"https://www.googleapis.com/robot/v1/metadata/x509/bobs-service%40optimal-jigsaw-185903.iam.gserviceaccount.com\"\n}","email":"redfalconinc#gmail.com","auth":"EDITED_TO_PROTECT_THE_INNOCENT"}}
I even loaded that user up with the following permissions:
Editor
Cloud Container Builder
Cloud Container Builder Editor
Service Account Actor
Service Account Admin
Storage Admin
Storage Object Admin
Storage Object Creator
Storage Object Viewer
Any help would be appreciated, as I am spending a lot of time on what seems to be a very simple problem.
The issue is most likely caused by you using a secret of type dockerconfigjson while having valid dockercfg in it. The kubectl command changed at some point, which causes this.
Can you check whether it is marked as dockercfg or dockerconfigjson, and then check whether it is valid dockerconfigjson?
The JSON you have provided is dockercfg (not the new format).
See https://github.com/kubernetes/kubernetes/issues/12626#issue-100691532 for info about the formats.
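You can check which type an existing secret has with kubectl get secret mykey -o jsonpath='{.type}'. If you want to recreate it in the newer format, one common route is to let kubectl build it from the service-account key file; recent kubectl versions give it the kubernetes.io/dockerconfigjson type (keyfile.json and the email are placeholders, and the server should match the registry host of your image):
kubectl create secret docker-registry mykey \
  --docker-server=https://us.gcr.io \
  --docker-username=_json_key \
  --docker-password="$(cat keyfile.json)" \
  --docker-email=you@example.com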