deployment.apps/wordpress unchanged
Error from server (BadRequest): error when creating "kustomization.yaml": Secret in version "v1" cannot be handled as a Secret: json: cannot unmarshal string into Go struct field Secret.data of type map[string][]uint8
Below is my YAML file:
apiVersion: v1
kind: Secret
metadata:
  name: mysql-pass
type: Opaque
data:
  password=password
Is anyone having the same issue?
I replaced the apiVersion as seen below, yet I am still getting the same error:
apiVersion: apps/v1
kind: Secret
metadata:
  name: mysql-pass
type: Opaque
data:
  password=password
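For reference, the message "cannot unmarshal string into Go struct field Secret.data" means the API server expected data to be a map of keys to base64-encoded values, and password=password (equals sign, plain text) gives it a single string instead. The apiVersion for a Secret also stays v1; apps/v1 only holds workload kinds like Deployment. A minimal sketch of a valid Secret, where cGFzc3dvcmQ= is just "password" base64-encoded and the key/value pair is illustrative:
apiVersion: v1
kind: Secret
metadata:
  name: mysql-pass
type: Opaque
data:
  # key: value with a colon, not key=value; the value must be base64
  password: cGFzc3dvcmQ=
If you prefer plain text, stringData: accepts unencoded values and Kubernetes encodes them for you.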
I need help with a HelmRelease config: I need to concatenate a secret value with a string.
I have a feeling that I don't understand something about secret management.
What is the right approach to combining a secret with a string? I would most like to do it with the printf function, but I can't reference the secret as a variable.
Please give me some advice.
Data:
jdbcOverwrite.jdbcUrl: sonarqube.rds.ednpoint.aws.de (base64-encoded, read from the secret)
string: jdbc:postgresql://
The new value should look like: jdbcUrl: jdbc:postgresql://sonarqube.rds.ednpoint.aws.de
SonarQube needs the full URL.
helm.yaml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: sonarqube
spec:
  serviceAccountName: ${SERVICE_ACCOUNT_NAME}
  releaseName: sonarqube
  interval: 25m
  timeout: 20m
  chart:
    spec:
      #repository: https://SonarSource.github.io/helm-chart-sonarqube
      chart: sonarqube
      version: 6.0.0
      sourceRef:
        kind: HelmRepository
        name: sonarqube-repo
        namespace: sonarqube
  valuesFrom:
    - kind: Secret
      name: sonarqube-${ENVIRONMENT}-connection
      valuesKey: username
      targetPath: jdbcOverwrite.jdbcUsername
    - kind: Secret
      name: sonarqube-${ENVIRONMENT}-connection
      valuesKey: password
      targetPath: jdbcOverwrite.jdbcPassword
    - kind: Secret
      name: sonarqube-${ENVIRONMENT}-connection
      valuesKey: endpoint
      targetPath: jdbcOverwrite.jdbcUrl
Ideally, I would prepend a string to this jdbcOverwrite.jdbcUrl value. Should I use Kustomize?
I would appreciate any guidance.
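One possible direction, sketched rather than confirmed: valuesFrom cannot concatenate, but since the manifest above already contains ${ENVIRONMENT}-style placeholders, the Flux Kustomization applying it presumably does post-build variable substitution (spec.postBuild.substituteFrom). If the endpoint were also exposed as such a variable, the full URL could be assembled inline under spec.values instead of using valuesFrom for jdbcUrl:
spec:
  values:
    jdbcOverwrite:
      # DB_ENDPOINT is a hypothetical key, assumed to be substituted at
      # apply time from the connection Secret via postBuild.substituteFrom
      jdbcUrl: jdbc:postgresql://${DB_ENDPOINT}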
Trying to do helm install, yet I get an error message like this:
install.go:178: [debug] Original chart version: ""
install.go:195: [debug] CHART PATH: /Users/lkkto/Dev/pipelines/manifests/helm/helm-pipeline/charts/base/charts/installs/charts/multi-user/charts/api-service
client.go:128: [debug] creating 4 resource(s)
Error: INSTALLATION FAILED: ConfigMap in version "v1" cannot be handled as a ConfigMap: json: cannot unmarshal bool into Go struct field ConfigMap.data of type string
helm.go:84: [debug] ConfigMap in version "v1" cannot be handled as a ConfigMap: json: cannot unmarshal bool into Go struct field ConfigMap.data of type string
INSTALLATION FAILED
main.newInstallCmd.func2
helm.sh/helm/v3/cmd/helm/install.go:127
github.com/spf13/cobra.(*Command).execute
github.com/spf13/cobra@v1.3.0/command.go:856
github.com/spf13/cobra.(*Command).ExecuteC
github.com/spf13/cobra@v1.3.0/command.go:974
github.com/spf13/cobra.(*Command).Execute
github.com/spf13/cobra@v1.3.0/command.go:902
main.main
helm.sh/helm/v3/cmd/helm/helm.go:83
runtime.main
runtime/proc.go:250
runtime.goexit
runtime/asm_amd64.s:1571
Seems like it has to do with my generated configmap:
apiVersion: v1
data:
  DEFAULTPIPELINERUNNERSERVICEACCOUNT: default-editor
  MULTIUSER: true
  VISUALIZATIONSERVICE_NAME: ml-pipeline-visualizationserver
  VISUALIZATIONSERVICE_PORT: 8888
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/name: kubeflow-pipelines
    app.kubernetes.io/component: ml-pipeline
    application-crd-id: kubeflow-pipelines
  name: pipeline-api-server-config-f4t72426kt
  namespace: kubeflow
Anything wrong with it?
From the docs, ConfigMap.data is a string:string map. In your example you're setting MULTIUSER to a boolean (and VISUALIZATIONSERVICE_PORT to a number, which triggers the same unmarshal error).
Update your ConfigMap to:
apiVersion: v1
data:
  DEFAULTPIPELINERUNNERSERVICEACCOUNT: default-editor
  MULTIUSER: 'true'
  VISUALIZATIONSERVICE_NAME: ml-pipeline-visualizationserver
  VISUALIZATIONSERVICE_PORT: '8888'
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/name: kubeflow-pipelines
    app.kubernetes.io/component: ml-pipeline
    application-crd-id: kubeflow-pipelines
  name: pipeline-api-server-config-f4t72426kt
  namespace: kubeflow
Notice the 'true' for MULTIUSER and '8888' for VISUALIZATIONSERVICE_PORT. The quotes explicitly make the values strings.
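To catch this class of error before an install, you can render the chart and let kubectl validate the output without touching the cluster; a minimal sketch, with placeholder release and chart names:
# render templates locally, then client-side validate the rendered manifests
helm template my-release ./my-chart | kubectl apply --dry-run=client -f -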
Just to buttress this more: I encountered this issue when setting up an ingress controller.
My initial config was like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: prometheus-kube-prometheus-prometheus
  namespace: monitoring
  labels:
    app: kube-prometheus-stack-prometheus
    heritage: Helm
    release: prometheus
    self-monitor: true
The issue was with self-monitor: true, which cannot be unmarshalled into a string since it's a bool. It should instead be enclosed in quotes, like self-monitor: "true", so we will have this afterwards:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: prometheus-kube-prometheus-prometheus
  namespace: monitoring
  labels:
    app: kube-prometheus-stack-prometheus
    heritage: Helm
    release: prometheus
    self-monitor: "true"
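As a quick way to confirm these type requirements, kubectl explain prints the declared type of any field; for labels it reports map[string]string, which is why every label value must be a quoted string:
kubectl explain ingress.metadata.labels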
With Ansible: is it possible to patch resources with JSON or YAML snippets? I basically want to be able to accomplish the same thing as kubectl patch <resource> <name> --type='merge' -p='{"spec":{"test":"hello"}}', to append to or modify resource specs.
https://docs.ansible.com/ansible/latest/modules/k8s_module.html
Is it possible to do this with the k8s Ansible module? The docs say that if a resource already exists and state: present is set, it will be patched; however, it isn't patching as far as I can tell.
Thanks
Yes, you can provide just a patch, and if the resource already exists it should send a strategic-merge-patch (or just a merge-patch if it's a custom resource). Here's an example playbook that creates and modifies a ConfigMap:
---
- hosts: localhost
  connection: local
  gather_facts: no
  vars:
    cm: "{{ lookup('k8s',
             api_version='v1',
             kind='ConfigMap',
             namespace='default',
             resource_name='test') }}"
  tasks:
    - name: Create the ConfigMap
      k8s:
        definition:
          apiVersion: v1
          kind: ConfigMap
          metadata:
            name: test
            namespace: default
          data:
            hello: world
    - name: We will see the ConfigMap defined above
      debug:
        var: cm
    - name: Add a field to the ConfigMap (this will be a PATCH request)
      k8s:
        definition:
          apiVersion: v1
          kind: ConfigMap
          metadata:
            name: test
            namespace: default
          data:
            added: field
    - name: The same ConfigMap as before, but with an extra field in data
      debug:
        var: cm
    - name: Change a field in the ConfigMap (this will be a PATCH request)
      k8s:
        definition:
          apiVersion: v1
          kind: ConfigMap
          metadata:
            name: test
            namespace: default
          data:
            hello: everyone
    - name: The added field is unchanged, but the hello field has a new value
      debug:
        var: cm
    - name: Delete the added field in the ConfigMap (this will be a PATCH request)
      k8s:
        definition:
          apiVersion: v1
          kind: ConfigMap
          metadata:
            name: test
            namespace: default
          data:
            added: null
    - name: The hello field is unchanged, but the added field is now gone
      debug:
        var: cm
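For completeness, a sketch of how this would be run, assuming the playbook is saved as configmap-patch.yml (an arbitrary name); the classic k8s module also relies on the openshift Python client being available on the control node:
pip install openshift
ansible-playbook configmap-patch.yml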
I need to store some passwords and usernames in secrets.yaml, but after the deployment I get the error below, so I am unable to build the Secret and access it in the pods.
Attaching deployment.yaml and secrets.yaml below.
---Secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
data:
  CassandraSettings__CassandraPassword: [[ .Environment ]]-abcd-cassandra-password
---Deployment.yaml
env:
  - name: Password
    valueFrom:
      secretKeyRef:
        name: mysecret
        key: CassandraSettings__CassandraPassword
After deploying from TeamCity, I get this error:
Error from server (BadRequest): error when creating "STDIN": Secret in
version "v1" cannot be handled as a Secret: v1.Secret.ObjectMeta:
v1.ObjectMeta.TypeMeta: Kind: Data: decode base64: illegal base64 data
at input byte 3, error found in #10 byte of
...|-password"},"kind":"|..., bigger context
...|_CassandraPassword":"dev-bling-cassandra-password"},"kind":"Secret","metadata":{"annotations":{"kube|...
error parsing STDIN: error converting YAML to JSON: yaml: line 33: did
not find expected '-' indicator
It looks like type is missing; can you try it as below?
---Secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  CassandraSettings__CassandraPassword: [[ .Environment ]]-abcd-cassandra-password
---Deployment.yaml
env:
  - name: Password
    valueFrom:
      secretKeyRef:
        name: mysecret
        key: CassandraSettings__CassandraPassword
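One caveat worth flagging: the reported error is "illegal base64 data", and values under data: must be base64-encoded, so a templated plain-text value like the one above may still be rejected even with type set. A sketch of the alternative, using the stringData field, which accepts plain text and encodes it server-side:
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
stringData:
  # plain text is fine here; Kubernetes base64-encodes it into data
  CassandraSettings__CassandraPassword: [[ .Environment ]]-abcd-cassandra-password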
I am getting this error while using Helm 3 only; in Helm 2 it works as expected. Here is the Secret object's manifest:
secret.yaml
apiVersion: v1
data:
  couchbase_password: {{ .Values.secrets.cbPass | quote }}
kind: Secret
metadata:
  name: {{ include "persistence.name" . }}-cb-pass
type: Opaque
---
apiVersion: v1
data:
  couchbase.crt: {{ .Values.secrets.encodedCouchbaseCrt | quote }}
kind: Secret
metadata:
  name: {{ include "persistence.name" . }}-cb-crt
type: Opaque
And here are some contents of the values.yaml file:
configmap:
  # support for oxtrust API
  gluuOxtrustApiEnabled: false
  gluuOxtrustApiTestMode: false
  gluuCasaEnabled: true
secrets:
  cbPass: UEBzc3cwcmQK # UEBzc3cwcmQK
  encodedCouchbaseCrt: LS0tLS1CRUdJTiBDR
When I run helm template test ., I get:
---
# Source: gluu-server-helm/charts/persistence/templates/secrets.yaml
apiVersion: v1
data:
  couchbase.crt: "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUROVENDQWgyZ0F3SUJBZ0lKQU93NzNOV2x5cTE3TUEwR0NTcUdTSWIzRFFFQkN3VUFNQll4RkRBU0JnTlYKQkFNTUMyTmlMbWRzZFhVdWIzSm5NQjRYRFRFNU1USXlOakE0TXpBd04xb1hEVEk1TVRJeU16QTRNekF3TjFvdwpGakVVTUJJR0ExVUVBd3dMWTJJdVoyeDFkUzV2Y21jd2dnRWlNQTBHQ1NxR1NJYjNEUUVCQVFVQUE0SUJEd0F3CmdnRUtBb0lCQVFDZjhySjhNcHJYMFFEQTdaamVXWkNjQTExd0FnMFpzSERYV2gwRU5BWE9JYjdObkM5c0diWEYKeG1PVnpNL3pGcWhqNWU4Zi9hZnBPQUlSV1RhMzhTeGFiQ3VPR1pUU2pTZ3dtclQ3bmVPK0pSNDA3REdzYzlrSgp5d1lNc083S3FtcFJTMWpsckZTWXpMNGQ4VW5xa3k3OHFMMEw3R3F2Y0hSTTZKYkM4QXpBdDUwWGJ5eEhwaDFsClNVWDBCSWgzbXl5NHpDcjF1anhHN0x6QVVHaDEyZXVSVGpWc3YrdWN4emdIZjVONXNIcFloaWV4NjJ1UE1MeDUKYjVsOVJtMmVadmM2R0ZpU2NpVEYwUFZFSXhRbkVobmd3R1MyNWNOTGdGRzEzMDV0WkFFNWdtem9lK0V6YmJNZQpXczdyUFZDWmF4dmo4ekRZS1A3ZkxsMitWSUcxcXl6M0FnTUJBQUdqZ1lVd2dZSXdIUVlEVlIwT0JCWUVGTGFFCm9rK1lhV1FHczFJM3ZKOGJiV203dGcxb01FWUdBMVVkSXdRL01EMkFGTGFFb2srWWFXUUdzMUkzdko4YmJXbTcKdGcxb29ScWtHREFXTVJRd0VnWURWUVFEREF0allpNW5iSFYxTG05eVo0SUpBT3c3M05XbHlxMTdNQXdHQTFVZApFd1FGTUFNQkFmOHdDd1lEVlIwUEJBUURBZ0VHTUEwR0NTcUdTSWIzRFFFQkN3VUFBNElCQVFBWlJnQ0I5cHFKClVZamZxWCsvUStRODNJQUJOSkJvMlMyYjRvT3NITGVyOGx6ZjlvZXdyR2dHUlRIeHNnRHE1dXcvS0c2TVJPSWEKR08zY0JwYWdENC9kVHBnRWpZemU0eXg0RzlTb253dmNESVNvV0dPN2Q5OG41SnJBaFZOYmFUT1FTSGRUTkxBTgp4UFVvcFh3RTZzOUp3bUxQUUdpQ2txcSs3NWp5OUFLRWRJTThTb0xNQXU3eHBPaDY0SVluRmhJOHAvZW5vNVpyCkxNbUFVbTltWVVaK2x0eDB6N0xDTXF1N3Z6RU55SzZ4anZiY3VxN0Y3aGsydDFmdVVYMUFpb1ZpN1dRdnQ3emwKODE3b2V6UG04NDJjTWZubkFqSzFkNnd1Z2RpNzlQSnJ1UDc4WmJXUThIWjZuSUtBRmlZRGxQTTNEakxnR0xZZgpZRll0LzJvVzJFQzEKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo="
kind: Secret
metadata:
  name: persistence-cb-crt
type: Opaque
---
# Source: gluu-server-helm/charts/persistence/templates/secrets.yaml
apiVersion: v1
data:
  couchbase_password: "UEBzc3cwcmQK"
kind: Secret
metadata:
  name: persistence-cb-pass
type: Opaque
When I use the default data directly, without referencing the values file, it still doesn't work.
Helm version
version.BuildInfo{Version:"v3.0.2", GitCommit:"19e47ee3283ae98139d98460de796c1be1e3975f", GitTreeState:"clean", GoVersion:"go1.13.5"}
kubectl version
Client Version: v1.16.3
Server Version: v1.17.0
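No resolution appears above, so only a hedged pointer: Helm 3, unlike Helm 2, validates rendered manifests against the cluster's OpenAPI schema by default (newer Helm 3 releases expose --disable-openapi-validation to switch this off), which is a common reason a chart installs under Helm 2 but fails under Helm 3. To pinpoint the offending document, the rendered output can be checked directly (on kubectl 1.18+, --dry-run=server gets the API server's full validation):
helm template test . | kubectl apply --dry-run=client -f -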