apiVersion: v1
kind: ConfigMap
metadata:
  name: kibana
  namespace: the-project
  labels:
    app: kibana
    env: dev
data:
  # kibana.yml is mounted into the Kibana container
  # see https://github.com/elastic/kibana/blob/master/config/kibana.yml
  # Kubernetes Ingress is used to route kib.the-project.d4ldev.txn2.com
  kibana.yml: |- server.name: kib.the-project.d4ldev.txn2.com server.host: "0" elasticsearch.url: http://elasticsearch:9200
This is my configmap.yml file. When I try to create this ConfigMap, I get this error:
error: error parsing configmap.yml: error converting YAML to JSON: yaml: line 13: did not find expected comment or line break
I can't get rid of the error, even after removing the space at line 13, column 17.
The YAML content can be put directly on multiple lines, formatted like real YAML. Take a look at the following example:
data:
  # kibana.yml is mounted into the Kibana container
  # see https://github.com/elastic/kibana/blob/master/config/kibana.yml
  # Kubernetes Ingress is used to route kib.the-project.d4ldev.txn2.com
  kibana.yml: |-
    server:
      name: kib.the-project.d4ldev.txn2.com
      host: "0"
    elasticsearch.url: http://elasticsearch:9200
This works when put in a ConfigMap, and it should also work when provided to a Helm chart (depending on how the Helm templates are written).
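Putting it together, the full ConfigMap from the question would then look like this (simply the question's metadata combined with the multi-line block above):

apiVersion: v1
kind: ConfigMap
metadata:
  name: kibana
  namespace: the-project
  labels:
    app: kibana
    env: dev
data:
  kibana.yml: |-
    server:
      name: kib.the-project.d4ldev.txn2.com
      host: "0"
    elasticsearch.url: http://elasticsearch:9200

On reasonably recent kubectl versions you can check that the file parses without creating anything:

kubectl apply --dry-run=client -f configmap.yml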
When I try to restart a deployment with the following command:
kubectl rollout restart -n ind-iv -f mattermost-installation.yml
it returns an error:
unable to decode "mattermost-installation.yml": no kind "Mattermost" is registered for version "installation.mattermost.com/v1beta1" in scheme "k8s.io/kubectl/pkg/scheme/scheme.go:28"
The yml file looks like this:
apiVersion: installation.mattermost.com/v1beta1
kind: Mattermost
metadata:
  name: mattermost-iv # Choose the desired name
spec:
  size: 1000users # Adjust to your requirements
  useServiceLoadBalancer: true # Set to true to use AWS or Azure load balancers instead of an NGINX controller.
  ingressName: ************* # Hostname used for Ingress, e.g. example.mattermost-example.com. Required when using an Ingress controller. Ignored if useServiceLoadBalancer is true.
  ingressAnnotations:
    kubernetes.io/ingress.class: nginx
  version: 6.0.0
  licenseSecret: "" # Name of a Kubernetes secret that contains Mattermost license. Required only for enterprise installation.
  database:
    external:
      secret: db-credentials # Name of a Kubernetes secret that contains connection string to external database.
  fileStore:
    external:
      url: ********** # External File Storage URL.
      bucket: ********** # File Storage bucket name to use.
      secret: file-store-credentials
  mattermostEnv:
    - name: MM_FILESETTINGS_AMAZONS3SSE
      value: "false"
    - name: MM_FILESETTINGS_AMAZONS3SSL
      value: "false"
Does anybody have an idea?
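A note that may or may not apply here (an assumption on my part, not a verified fix): kubectl rollout restart only understands the built-in workload kinds (Deployment, StatefulSet, DaemonSet), so pointing it at a custom resource such as Mattermost fails to decode, which is what the "no kind Mattermost is registered" message suggests. A sketch of restarting the Deployment the Mattermost operator manages instead, assuming it carries the same name as the custom resource:

kubectl rollout restart deployment/mattermost-iv -n ind-iv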
I am using a frontendconfig.yaml file to enable HTTP-to-HTTPS redirection, but it is giving me a chart validation failed error. The content of my YAML file is listed below. I am facing this issue with GKE Ingress. My GKE master version is "1.17.14-gke.1600".
apiVersion: networking.k8s.io/v1beta1
kind: FrontendConfig
metadata:
  name: "abcd"
spec:
  redirectToHttps:
    enabled: true
    responseCodeName: "301"
I am using annotations in my values.yaml file like this:
ingress:
  enabled: true
  annotations:
    networking.k8s.io/v1beta1.FrontendConfig: "abcd"
As of now, the HTTP-to-HTTPS redirect is in beta and only available for GKE 1.18.10-gke.600 or greater, as per the documentation.
Since you stated that you are using GKE 1.17.14-gke.1600, this won't be available for your cluster.
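For reference, on a supported GKE version the configuration would look roughly like this. This is a sketch based on the GKE documentation rather than something from the question: the FrontendConfig CRD lives in the networking.gke.io API group (not networking.k8s.io), the redirect response code is given by name rather than as "301", and the Ingress references the FrontendConfig through the networking.gke.io/v1beta1.FrontendConfig annotation:

apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: "abcd"
spec:
  redirectToHttps:
    enabled: true
    responseCodeName: MOVED_PERMANENTLY_DEFAULT

and in values.yaml:

ingress:
  enabled: true
  annotations:
    networking.gke.io/v1beta1.FrontendConfig: "abcd"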
I am a beginner and have just started learning Kubernetes.
I'm trying to create a Pod from a file named myfirstpodwithlabels.yaml containing the following specification. But when I try to create the Pod, I get this error.
error: error validating "myfirstpodwithlabels.yaml": error validating data: [ValidationError(Pod.spec): unknown field "contianers" in io.k8s.api.core.v1.PodSpec, ValidationError(Pod.spec): missing required field "containers" in io.k8s.api.core.v1.PodSpec]; if you choose to ignore these errors, turn validation off with --validate=false
My YAML file specification:
kind: Pod
apiVersion: v1
metadata:
  name: myfirstpodwithlabels
  labels:
    type: backend
    env: production
spec:
  contianers:
    - image: aamirpinger/helloworld:latest
      name: container1
      ports:
        - containerPort: 80
There is a typo in the .spec section of your YAML.
You have written
"contianers"
as seen in the error message, when it really should be
"containers"
Also, for future reference: if there is an issue in your resource definition YAML, it helps if you actually post the YAML on Stack Overflow; otherwise helping is not an easy task.
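For reference, the corrected spec section, with only the field name changed:

spec:
  containers:
    - image: aamirpinger/helloworld:latest
      name: container1
      ports:
        - containerPort: 80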
Per this spec on GitHub and these Helm instructions, I'm trying to upgrade our Helm installation of Datadog using the following syntax:
helm upgrade datadog-monitoring --set datadog.confd."kube_scheduler\.yaml".instances[0].prometheus_url="http://localhost:10251/metrics",datadog.confd."kube_scheduler\.yaml".init_config= stable/datadog
However, I'm getting the error below regardless of any attempt at altering the syntax of the prometheus_url value (putting the URL in quotes, escaping the quotes, etc.):
Error: UPGRADE FAILED: failed to create resource: ConfigMap in version "v1" cannot be handled as a ConfigMap: v1.ConfigMap.Data: ReadString: expects " or n, but found {, error found in #10 byte of ...|er.yaml":{"instances|..., bigger context ...|{"apiVersion":"v1","data":{"kube_scheduler.yaml":{"instances":[{"prometheus_url":"\"http://localhost|...
If I add the --dry-run --debug flags, I get the following YAML output:
REVISION: 7
RELEASED: Mon Mar 2 14:28:52 2020
CHART: datadog-1.39.7
USER-SUPPLIED VALUES:
datadog:
  confd:
    kube_scheduler.yaml:
      init_config: ""
      instances:
        - prometheus_url: http://localhost:10251/metrics
The YAML output appears to match the integration as specified on this GitHub page.
Hey!
Sorry in advance if my answer isn't correct; I'm a complete newbie in Kubernetes and Helm and I can't be sure that it will help, but maybe it does.
So, as far as I can understand, the problem is in the resulting ConfigMap configuration. From my experience, I faced the same issue with the following config:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
  labels:
    group: mock
data:
  APP_NAME: my-mock
  APP_PORT: 8080
  APP_PATH: /api
And I could solve it only by surrounding all the values with quotes:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
  labels:
    group: mock
data:
  APP_NAME: "my-mock"
  APP_PORT: "8080"
  APP_PATH: "/api"
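Applied to the Helm upgrade above, the same idea means each datadog.confd entry has to reach the ConfigMap as a plain string rather than a nested map. One way to do that (a sketch, assuming the chart renders the confd values verbatim into the ConfigMap; the file name datadog-values.yaml is just an example) is to put the check configuration into a values file as a block scalar and pass it with -f instead of building the structure with --set:

datadog:
  confd:
    kube_scheduler.yaml: |-
      init_config:
      instances:
        - prometheus_url: http://localhost:10251/metrics

helm upgrade datadog-monitoring -f datadog-values.yaml stable/datadog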
I'm trying to set node affinity using nodeSelector as discussed here: https://kubernetes.io/docs/user-guide/node-selection/
However, no matter whether I use a Pod, a Replication Controller, or a Deployment, I can't get kubectl create to work properly. This is the error I get, and it happens similarly with all of them:
Error from server (BadRequest): error when creating "test-pod.yaml": Pod in version "v1" cannot be handled as a Pod: [pos 222]: json: expect char '"' but got char 't'
Substitute "Deployment" or "ReplicationController" for "Pod" and it's the same error everywhere. Here is my yaml file for the test pod:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
    - name: nginx
      image: nginx
      imagePullPolicy: IfNotPresent
  nodeSelector:
    ingress: yes
If I remove the nodeSelector part of the file, the pod builds successfully; the same is true for Deployments and Replication Controllers. I made sure that the proper label was added to the node.
Any help would be appreciated!
In YAML, the token yes evaluates to the boolean true (http://yaml.org/type/bool.html).
Internally, kubectl converts YAML to JSON as a preprocessing step. Your node selector is converted to "nodeSelector":{"ingress":true}, which fails when kubectl tries to decode it into a string-to-string map.
You can quote the value to force it to be treated as a string:
ingress: "yes"