Helm Installation Failed: configMap cannot be handled as a ConfigMap: json: cannot unmarshal bool into Go struct field ConfigMap.data of type string - kubernetes-helm

Trying to do a helm install, yet I get an error message like this:
install.go:178: [debug] Original chart version: ""
install.go:195: [debug] CHART PATH: /Users/lkkto/Dev/pipelines/manifests/helm/helm-pipeline/charts/base/charts/installs/charts/multi-user/charts/api-service
client.go:128: [debug] creating 4 resource(s)
Error: INSTALLATION FAILED: ConfigMap in version "v1" cannot be handled as a ConfigMap: json: cannot unmarshal bool into Go struct field ConfigMap.data of type string
helm.go:84: [debug] ConfigMap in version "v1" cannot be handled as a ConfigMap: json: cannot unmarshal bool into Go struct field ConfigMap.data of type string
INSTALLATION FAILED
main.newInstallCmd.func2
helm.sh/helm/v3/cmd/helm/install.go:127
github.com/spf13/cobra.(*Command).execute
github.com/spf13/cobra@v1.3.0/command.go:856
github.com/spf13/cobra.(*Command).ExecuteC
github.com/spf13/cobra@v1.3.0/command.go:974
github.com/spf13/cobra.(*Command).Execute
github.com/spf13/cobra@v1.3.0/command.go:902
main.main
helm.sh/helm/v3/cmd/helm/helm.go:83
runtime.main
runtime/proc.go:250
runtime.goexit
runtime/asm_amd64.s:1571
Seems like it has to do with my generated configmap:
apiVersion: v1
data:
  DEFAULTPIPELINERUNNERSERVICEACCOUNT: default-editor
  MULTIUSER: true
  VISUALIZATIONSERVICE_NAME: ml-pipeline-visualizationserver
  VISUALIZATIONSERVICE_PORT: 8888
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/name: kubeflow-pipelines
    app.kubernetes.io/component: ml-pipeline
    application-crd-id: kubeflow-pipelines
  name: pipeline-api-server-config-f4t72426kt
  namespace: kubeflow
Anything wrong with it?

From the docs, ConfigMap.data is a string-to-string map. In your example you're setting MULTIUSER to a boolean (and VISUALIZATIONSERVICE_PORT to an integer), which the API server cannot unmarshal into strings.
Update your ConfigMap to:
apiVersion: v1
data:
  DEFAULTPIPELINERUNNERSERVICEACCOUNT: default-editor
  MULTIUSER: 'true'
  VISUALIZATIONSERVICE_NAME: ml-pipeline-visualizationserver
  VISUALIZATIONSERVICE_PORT: '8888'
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/name: kubeflow-pipelines
    app.kubernetes.io/component: ml-pipeline
    application-crd-id: kubeflow-pipelines
  name: pipeline-api-server-config-f4t72426kt
  namespace: kubeflow
Notice the quotes in 'true' for MULTIUSER and '8888' for VISUALIZATIONSERVICE_PORT. They explicitly make the values strings.
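If you want to catch this before a full install, you can render the chart and dry-run the result against the API server; a minimal sketch, where <chart-dir> stands for the chart path from the debug output above:

# Render the chart locally, then ask the API server to validate the manifests
# without persisting anything. An unquoted bool or int in ConfigMap.data surfaces
# as the same "cannot unmarshal ... into Go struct field ConfigMap.data of type string" error.
helm template <chart-dir> | kubectl apply --dry-run=server -f -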

To add to this: I encountered the same issue when setting up an Ingress.
My initial config was like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: prometheus-kube-prometheus-prometheus
  namespace: monitoring
  labels:
    app: kube-prometheus-stack-prometheus
    heritage: Helm
    release: prometheus
    self-monitor: true
The issue was with self-monitor: true, which cannot be unmarshalled into a string since it's a bool (label values, like ConfigMap data, must be strings). It should instead be enclosed in quotes, self-monitor: "true", so we end up with this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: prometheus-kube-prometheus-prometheus
  namespace: monitoring
  labels:
    app: kube-prometheus-stack-prometheus
    heritage: Helm
    release: prometheus
    self-monitor: "true"

Related

Deploying Linkerd with FluxCD fails to reconcile the linkerd-control-plane

I have a repository set up on GitHub that is linked to my Kubernetes cluster with FluxCD.
I have written a Kustomization that "should" install Linkerd on my cluster, but here is where things have taken a bad turn: I followed the documentation for installing Linkerd with Helm and successfully managed to install linkerd-crds, but when it comes to linkerd-control-plane the reconciliation gets stuck on InProgress.
My Linkerd kustomization consists of the following files:
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: HelmRepository
metadata:
  name: linkerd
  namespace: linkerd
spec:
  interval: 2m
  url: https://helm.linkerd.io/stable
---
kind: Secret
apiVersion: v1
metadata:
  name: linkerd-certs
  namespace: linkerd
data:
  ca.crt: ****
  issuer.crt: ****
  issuer.key: ****
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: linkerd-crds
  namespace: linkerd
spec:
  timeout: 3m
  interval: 10m
  releaseName: linkerd
  targetNamespace: linkerd
  storageNamespace: linkerd
  chart:
    spec:
      chart: linkerd-crds
      version: 1.4.0
      sourceRef:
        kind: HelmRepository
        name: linkerd
        namespace: linkerd
      interval: 40m
  values:
    installNamespace: false
  install:
    crds: CreateReplace
  upgrade:
    crds: CreateReplace
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: linkerd-control-plane
  namespace: linkerd
spec:
  timeout: 3m
  interval: 10m
  releaseName: linkerd
  targetNamespace: linkerd
  storageNamespace: linkerd
  chart:
    spec:
      chart: linkerd-control-plane
      version: 1.9.5
      sourceRef:
        kind: HelmRepository
        name: linkerd
        namespace: linkerd
      interval: 40m
      valuesFiles:
        - values.yaml
        - values-ha.yaml
  valuesFrom:
    - kind: Secret
      name: linkerd-certs
      valuesKey: ca.crt
      targetPath: identityTrustAnchorsPEM
    - kind: Secret
      name: linkerd-certs
      valuesKey: issuer.crt
      targetPath: identity.issuer.tls.crtPEM
    - kind: Secret
      name: linkerd-certs
      valuesKey: issuer.key
      targetPath: identity.issuer.tls.keyPEM
  install:
    crds: CreateReplace
  upgrade:
    crds: CreateReplace
---
kind: Namespace
apiVersion: v1
metadata:
  name: linkerd
  annotations:
    linkerd.io/inject: disabled
  labels:
    linkerd.io/is-control-plane: "true"
    config.linkerd.io/admission-webhooks: disabled
    linkerd.io/control-plane-ns: linkerd
I also found that the helm-controller deployment logs the following error every time it tries to reconcile:
Helm install failed: YAML parse error on
linkerd-control-plane/templates/identity.yaml: error converting YAML
to JSON: yaml: control characters are not allowed
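The thread does not include a resolution, but since the parse error complains about control characters, one sanity check (my suggestion, not from the post) is to decode exactly what Flux will inject from the Secret and make any stray bytes visible:

# Decode the trust anchor as stored in the linkerd-certs Secret; a healthy value
# should be a plain PEM block starting with -----BEGIN CERTIFICATE-----, and
# cat -v will expose any control characters or double base64 encoding.
kubectl -n linkerd get secret linkerd-certs -o jsonpath='{.data.ca\.crt}' | base64 -d | cat -v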

How to configure a datasource for Grafana using Bitnami Helm charts?

I am having trouble configuring the Grafana datasource with Helm charts and Kubernetes. This is my release.yaml:
kind: Namespace
apiVersion: v1
metadata:
  name: monitoring
  annotations:
    name: monitoring
  labels:
    name: monitoring
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-datasources
  namespace: monitoring
data:
  grafana.ini: |-
    apiVersion: 1
    datasources:
      - name: Prometheus
        type: prometheus
        access: proxy
        url: http://prometheus-kube-prometheus-prometheus.monitoring.svc.cluster.local:9090
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: prometheus
  namespace: monitoring
spec:
  interval: 5m
  chart:
    spec:
      chart: kube-prometheus
      version: "8.0.7"
      sourceRef:
        kind: HelmRepository
        name: bitnami
        namespace: flux-system
      interval: 1m
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: grafana
  namespace: monitoring
spec:
  interval: 5m
  chart:
    spec:
      chart: grafana
      version: "7.9.8"
      sourceRef:
        kind: HelmRepository
        name: bitnami
        namespace: flux-system
      interval: 1m
  values:
    config:
      useGrafanaIniFile: true
      grafanaIniConfigMap: grafana-datasources
---
I see that Grafana does have a grafana.ini file in the /opt/bitnami/grafana/conf/ directory, but when checking the datasources in the Grafana UI there are none. Does anyone know why this is the case? I do not understand what I am doing wrong.
When deploying with Flux you might benefit from using the configMapGenerator instead of creating the ConfigMap manually.
I'm going to assume you have a folder structure like this:
monitoring/
├── kustomization.yaml
├── kustomizeconfig.yaml
├── namespace.yaml
├── release.yaml
└── values.yaml
kustomization.yaml
The namespace attribute tells Flux that it should install all the resources referenced in the file in the monitoring namespace. This way you only have to reference a namespace if you're pulling in objects from other namespaces (e.g. sources from the flux-system namespace).
If you call your values file something else (e.g. grafana-config.yaml), you should change the entry in the files list to something like values.yaml=grafana-config.yaml. Helm will complain if it does not find a values.yaml entry.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: monitoring
resources:
  - release.yaml
  - namespace.yaml
configMapGenerator:
  - name: grafana-values
    files:
      - values.yaml
      # - values.yaml=grafana-config.yaml
configurations:
  - kustomizeconfig.yaml
kustomizeconfig.yaml
This nameReference will patch the name of the valuesFrom entry with kind ConfigMap in release.yaml, so it matches the generated ConfigMap (the configMapGenerator appends a content hash to its name).
nameReference:
  - kind: ConfigMap
    version: v1
    fieldSpecs:
      - path: spec/valuesFrom/name
        kind: HelmRelease
namespace.yaml
kind: Namespace
apiVersion: v1
metadata:
  name: monitoring
  annotations:
    name: monitoring
  labels:
    name: monitoring
release.yaml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: prometheus
spec:
  interval: 5m
  chart:
    spec:
      chart: kube-prometheus
      version: "8.0.7"
      sourceRef:
        kind: HelmRepository
        name: bitnami
        namespace: flux-system
      interval: 1m
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: grafana
spec:
  interval: 5m
  chart:
    spec:
      chart: grafana
      version: "7.9.8"
      sourceRef:
        kind: HelmRepository
        name: bitnami
        namespace: flux-system
      interval: 1m
  valuesFrom:
    - kind: ConfigMap
      name: grafana-values
values.yaml
grafana.ini will be transformed into the Grafana config file (see the grafana.ini object in the chart values).
datasources will be imported as data sources in Grafana (see the datasources object).
You can reference all the values that can go into the values.yaml file in the chart's example values.yaml file.
grafana.ini:
  paths:
    data: /var/lib/grafana/
    logs: /var/log/grafana
    plugins: /var/lib/grafana/plugins
    provisioning: /etc/grafana/provisioning
  metrics:
    enable: true
    disable_total_stats: true
  date_formats:
    use_browser_locale: true
  analytics:
    reporting_enabled: false
datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:
      - name: Prometheus
        type: prometheus
        access: proxy
        url: http://prometheus-server.monitoring
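After this, kustomize appends a content hash to the generated ConfigMap name and the nameReference configuration rewrites the HelmRelease to match. A quick local check, assuming the folder layout above:

# Build the overlay the same way Flux's kustomize-controller does and confirm the
# HelmRelease now references the hashed ConfigMap name (e.g. grafana-values-abc123xyz)
kubectl kustomize monitoring/ | grep -A 2 'valuesFrom:'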

How can I provide multiple secrets in one YAML file?

How can I define multiple secrets in one file?
It seems that providing multiple secrets like this doesn't work:
apiVersion: v1
kind: Secret
metadata:
  name: ca-secret
  labels:
    app.kubernetes.io/managed-by: Helm
type: kubernetes.io/tls
data:
  tls.crt: LS0tLDR
  tls.key: LS0tLDR
apiVersion: v1
kind: Secret
metadata:
  name: envoy-secret
  labels:
    app.kubernetes.io/managed-by: Helm
type: kubernetes.io/tls
data:
  tls.crt: LS0tLS1
  tls.key: LS0tLS1
I am not able to use multiple files because I need to generate a single template using helm.
You can have separate manifests in one yaml file by separating them with ---. This will work:
apiVersion: v1
kind: Secret
metadata:
  name: ca-secret
  labels:
    app.kubernetes.io/managed-by: Helm
type: kubernetes.io/tls
data:
  tls.crt: LS0tLDR
  tls.key: LS0tLDR
---
apiVersion: v1
kind: Secret
metadata:
  name: envoy-secret
  labels:
    app.kubernetes.io/managed-by: Helm
type: kubernetes.io/tls
data:
  tls.crt: LS0tLS1
  tls.key: LS0tLS1
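Since the manifest is generated from a Helm template anyway, the same pattern can also be produced by a loop instead of hand-writing each Secret; a minimal sketch, where tlsSecrets is a hypothetical values structure and not something from the question:

{{- /* templates/tls-secrets.yaml: one Secret per entry, separated by --- */}}
{{- range .Values.tlsSecrets }}
---
apiVersion: v1
kind: Secret
metadata:
  name: {{ .name }}
  labels:
    app.kubernetes.io/managed-by: Helm
type: kubernetes.io/tls
data:
  tls.crt: {{ .crt }}
  tls.key: {{ .key }}
{{- end }}

with values such as:

tlsSecrets:
  - name: ca-secret
    crt: LS0tLDR
    key: LS0tLDR
  - name: envoy-secret
    crt: LS0tLS1
    key: LS0tLS1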

Adding an SSH GitHub repository to ArgoCD using declarative DSL gives "authentication required"

I have an ArgoCD installation and want to add a GitHub repository to it using SSH access with an SSH key pair, via the declarative DSL.
What I have is:
apiVersion: v1
data:
  sshPrivateKey: <my private ssh key base64 encoded>
  url: <url base64 encoded>
kind: Secret
metadata:
  annotations:
    meta.helm.sh/release-name: argocd-config
    meta.helm.sh/release-namespace: argocd
  creationTimestamp: "2021-06-30T12:39:35Z"
  labels:
    app.kubernetes.io/managed-by: Helm
    argocd.argoproj.io/secret-type: repo-creds
  name: repo-creds
  namespace: argocd
  resourceVersion: "364936"
  selfLink: /api/v1/namespaces/argocd/secrets/repo-creds
  uid: 8ca64883-302b-4a41-aaf6-5277c34dfbfc
type: Opaque
---
apiVersion: v1
data:
  url: <url base64 encoded>
kind: Secret
metadata:
  annotations:
    meta.helm.sh/release-name: argocd-config
    meta.helm.sh/release-namespace: argocd
  creationTimestamp: "2021-06-30T12:39:35Z"
  labels:
    app.kubernetes.io/managed-by: Helm
    argocd.argoproj.io/secret-type: repository
  name: argocd-repo
  namespace: argocd
  resourceVersion: "364935"
  selfLink: /api/v1/namespaces/argocd/secrets/argocd-repo
  uid: 09de56e0-3b0a-4032-8fb5-81b3a6e1899e
type: Opaque
I can manually connect to that GitHub private repo using that SSH key pair, but using the DSL, the repo doesn't appear in the ArgoCD GUI.
In the log of the argocd-repo-server I am getting the error:
time="2021-06-30T14:48:25Z" level=error msg="finished unary call with code Unknown" error="authentication required" grpc.code=Unknown grpc.method=GenerateManifest grpc.request.deadline="2021-06-30T14:49:25Z" grpc.service=repository.RepoServerService grpc.start_time="2021-06-30T14:48:25Z" grpc.time_ms=206.505 span.kind=server system=grpc
I deploy the secrets with Helm.
So can anyone point me in the right direction? What am I doing wrong?
I basically followed the declarative documentation under: https://argoproj.github.io/argo-cd/operator-manual/declarative-setup/
Thanks in advance.
Best regards,
rforberger
I am not sure about Helm, since I am working with plain YAML files for now before moving to Helm. You could take a look at this GitHub issue on configuring an SSH key for Helm.
I had this issue when I was working with manifests. The repo config should be in the argocd-cm ConfigMap. The fix was this:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
  labels:
    app.kubernetes.io/name: argocd-cm
    app.kubernetes.io/part-of: argocd
data:
  repositories: |
    - name: my-test-repo
      url: ssh://git@repo-url/path/to/repo.git
      type: git
      insecure: true                # to skip verification
      insecureIgnoreHostKey: true   # to ignore the host key for ssh
      sshPrivateKeySecret:
        name: private-repo-creds
        key: sshPrivateKey
---
apiVersion: v1
kind: Secret
metadata:
  name: private-repo-creds
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repo-creds
data:
  sshPrivateKey: <my private ssh key base64 encoded>
And I am not sure whether the documentation is correct, because the document in stable is a bit different, even though both your link and the stable doc link are for the same version.
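Not from the thread, but two quick ways to verify whether ArgoCD actually picked up the declarative repo configuration (the second assumes the argocd CLI is installed and logged in):

# Inspect what ended up in the argocd-cm ConfigMap after the Helm release was applied
kubectl -n argocd get configmap argocd-cm -o yaml

# List the repositories the ArgoCD API server knows about, including their connection state
argocd repo list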

Helm v3 unknown object type "nil" in Secret.data.couchbase_password

I am getting this error while using Helm 3 only; in Helm 2 it works as expected. Here is the Secret object's manifest:
secret.yaml
apiVersion: v1
data:
  couchbase_password: {{ .Values.secrets.cbPass | quote }}
kind: Secret
metadata:
  name: {{ include "persistence.name" . }}-cb-pass
type: Opaque
---
apiVersion: v1
data:
  couchbase.crt: {{ .Values.secrets.encodedCouchbaseCrt | quote }}
kind: Secret
metadata:
  name: {{ include "persistence.name" . }}-cb-crt
type: Opaque
And here are some contents of the values.yaml file:
configmap:
  # support for oxtrust API
  gluuOxtrustApiEnabled: false
  gluuOxtrustApiTestMode: false
  gluuCasaEnabled: true
secrets:
  cbPass: UEBzc3cwcmQK # UEBzc3cwcmQK
  encodedCouchbaseCrt: LS0tLS1CRUdJTiBDR
When I do helm template test . I get
---
# Source: gluu-server-helm/charts/persistence/templates/secrets.yaml
apiVersion: v1
data:
couchbase.crt: "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUROVENDQWgyZ0F3SUJBZ0lKQU93NzNOV2x5cTE3TUEwR0NTcUdTSWIzRFFFQkN3VUFNQll4RkRBU0JnTlYKQkFNTUMyTmlMbWRzZFhVdWIzSm5NQjRYRFRFNU1USXlOakE0TXpBd04xb1hEVEk1TVRJeU16QTRNekF3TjFvdwpGakVVTUJJR0ExVUVBd3dMWTJJdVoyeDFkUzV2Y21jd2dnRWlNQTBHQ1NxR1NJYjNEUUVCQVFVQUE0SUJEd0F3CmdnRUtBb0lCQVFDZjhySjhNcHJYMFFEQTdaamVXWkNjQTExd0FnMFpzSERYV2gwRU5BWE9JYjdObkM5c0diWEYKeG1PVnpNL3pGcWhqNWU4Zi9hZnBPQUlSV1RhMzhTeGFiQ3VPR1pUU2pTZ3dtclQ3bmVPK0pSNDA3REdzYzlrSgp5d1lNc083S3FtcFJTMWpsckZTWXpMNGQ4VW5xa3k3OHFMMEw3R3F2Y0hSTTZKYkM4QXpBdDUwWGJ5eEhwaDFsClNVWDBCSWgzbXl5NHpDcjF1anhHN0x6QVVHaDEyZXVSVGpWc3YrdWN4emdIZjVONXNIcFloaWV4NjJ1UE1MeDUKYjVsOVJtMmVadmM2R0ZpU2NpVEYwUFZFSXhRbkVobmd3R1MyNWNOTGdGRzEzMDV0WkFFNWdtem9lK0V6YmJNZQpXczdyUFZDWmF4dmo4ekRZS1A3ZkxsMitWSUcxcXl6M0FnTUJBQUdqZ1lVd2dZSXdIUVlEVlIwT0JCWUVGTGFFCm9rK1lhV1FHczFJM3ZKOGJiV203dGcxb01FWUdBMVVkSXdRL01EMkFGTGFFb2srWWFXUUdzMUkzdko4YmJXbTcKdGcxb29ScWtHREFXTVJRd0VnWURWUVFEREF0allpNW5iSFYxTG05eVo0SUpBT3c3M05XbHlxMTdNQXdHQTFVZApFd1FGTUFNQkFmOHdDd1lEVlIwUEJBUURBZ0VHTUEwR0NTcUdTSWIzRFFFQkN3VUFBNElCQVFBWlJnQ0I5cHFKClVZamZxWCsvUStRODNJQUJOSkJvMlMyYjRvT3NITGVyOGx6ZjlvZXdyR2dHUlRIeHNnRHE1dXcvS0c2TVJPSWEKR08zY0JwYWdENC9kVHBnRWpZemU0eXg0RzlTb253dmNESVNvV0dPN2Q5OG41SnJBaFZOYmFUT1FTSGRUTkxBTgp4UFVvcFh3RTZzOUp3bUxQUUdpQ2txcSs3NWp5OUFLRWRJTThTb0xNQXU3eHBPaDY0SVluRmhJOHAvZW5vNVpyCkxNbUFVbTltWVVaK2x0eDB6N0xDTXF1N3Z6RU55SzZ4anZiY3VxN0Y3aGsydDFmdVVYMUFpb1ZpN1dRdnQ3emwKODE3b2V6UG04NDJjTWZubkFqSzFkNnd1Z2RpNzlQSnJ1UDc4WmJXUThIWjZuSUtBRmlZRGxQTTNEakxnR0xZZgpZRll0LzJvVzJFQzEKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo="
kind: Secret
metadata:
  name: persistence-cb-crt
type: Opaque
---
# Source: gluu-server-helm/charts/persistence/templates/secrets.yaml
apiVersion: v1
data:
  couchbase_password: "UEBzc3cwcmQK"
kind: Secret
metadata:
  name: persistence-cb-pass
type: Opaque
When I use the default data directly, without referencing the values file, it still doesn't work.
Helm version
version.BuildInfo{Version:"v3.0.2", GitCommit:"19e47ee3283ae98139d98460de796c1be1e3975f", GitTreeState:"clean", GoVersion:"go1.13.5"}
kubectl version
Client Version: v1.16.3
Server Version: v1.17.0
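The thread does not show a resolution. The error means that couchbase_password ended up as a YAML null in the release that actually gets installed, so one thing worth checking (my suggestion, not from the post) is whether the values the chart receives at install time really contain secrets.cbPass. Helm's built-in required function surfaces a missing value with a clear message at render time; a sketch of the same secret.yaml with that guard added:

# secret.yaml (sketch): abort rendering with a clear message if the value is missing,
# instead of producing a nil data value that the API server rejects
apiVersion: v1
data:
  couchbase_password: {{ required "secrets.cbPass must be set" .Values.secrets.cbPass | quote }}
kind: Secret
metadata:
  name: {{ include "persistence.name" . }}-cb-pass
type: Opaque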