imagePullSecrets mysteriously get `registry-` prefix - kubernetes

I am trying to deploy the Superset Helm chart with a customized image; the chart has no option to specify imagePullSecrets. I am using k8s on DigitalOcean. I linked the repository and tested it with a basic deployment, and it "just works": that is to say, the pods get the correct value for imagePullSecrets and pulling just works.
However, when I try to install the Helm chart, the imagePullSecret that gets used mysteriously has a registry- prefix (the name already has a -registry suffix, so it becomes registry-xxx-registry when it should just be xxx-registry). The values on the default service account are correct.
To illustrate, default service accounts for both namespaces:
$ kubectl get sa default -n test -o yaml
apiVersion: v1
imagePullSecrets:
- name: xxx-registry
kind: ServiceAccount
metadata:
  creationTimestamp: "2022-04-14T14:26:41Z"
  name: default
  namespace: test
  resourceVersion: "13125"
  uid: xxx-xxx
secrets:
- name: default-token-9ggrm
$ kubectl get sa default -n superset -o yaml
apiVersion: v1
imagePullSecrets:
- name: xxx-registry
kind: ServiceAccount
metadata:
  creationTimestamp: "2022-04-14T14:19:47Z"
  name: default
  namespace: superset
  resourceVersion: "12079"
  uid: xxx-xxx
secrets:
- name: default-token-wkdhv
LGTM, but after trying to install the helm chart (which fails because of registry auth), I can see that the wrong secret is set on the pods:
$ kubectl get -n superset pods -o json | jq '.items[] | {name: .spec.containers[0].name, sa: .spec.serviceAccount, secret: .spec.imagePullSecrets}'
{
  "name": "superset",
  "sa": "default",
  "secret": [
    {
      "name": "registry-xxx-registry"
    }
  ]
}
{
  "name": "xxx-superset-postgresql",
  "sa": "default",
  "secret": [
    {
      "name": "xxx-registry"
    }
  ]
}
{
  "name": "redis",
  "sa": "xxx-superset-redis",
  "secret": null
}
{
  "name": "superset",
  "sa": "default",
  "secret": [
    {
      "name": "registry-xxx-registry"
    }
  ]
}
{
  "name": "superset-init-db",
  "sa": "default",
  "secret": [
    {
      "name": "registry-xxx-registry"
    }
  ]
}
In the test namespace the secret name is correct. What is extra interesting is that the postgres pod DOES have the correct secret name, and that comes from a Helm dependency. So it seems like something in the Superset Helm chart is causing this, yet no imagePullSecrets value is set anywhere in its templates, and, as you can see above, the pods use the default service account.
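One way to narrow this down (a diagnostic sketch, not part of the original post; the repo alias, chart URL and release name are assumptions) is to render the chart locally and check whether the prefixed name appears in the templates at all, and, if it does not, to look for something cluster-side that mutates pod specs at admission time:
$ helm repo add superset https://apache.github.io/superset
$ helm template my-superset superset/superset -n superset | grep -n imagePullSecrets
$ kubectl get mutatingwebhookconfigurations
If the rendered manifests never contain a registry-... name, the rewrite is happening inside the cluster rather than in the chart.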
I have already tried destroying and recreating the whole cluster, but the problem recurs.
I have tried version 0.5.10 (latest) of the Helm chart as well as version 0.3.5; both result in the same issue.
https://github.com/apache/superset/tree/dafc841e223c0f01092a2e116888a3304142e1b8/helm/superset
https://github.com/apache/superset/tree/1.3/helm/superset

Related

Pass values from Helmrelease to Terraform

I have a helm release file test-helm-release.yaml as given below.
apiVersion: helm.toolkit.gitops.io/v2beta1
kind: HelmRelease
metadata:
  name: "test"
  namespace: "test-system"
spec:
  chart:
    spec:
      chart: "test-environment"
      version: "0.1.10"
  values:
    key1: "value1"
    key1: "value1"
    key1: "value1"
    key1: "value1"
    gitRepository:
      url: https://github.com/test-eng/test.git
    helmRepositories:
      - name: testplatform
        url: https://test-platform/charts
While creating the Helm release, I can pass the values from the above HelmRelease to the new release using the command below:
chart=$(yq '.spec.chart.spec.chart' test-helm-release.yaml)
version=$(yq '.spec.chart.spec.version' test-helm-release.yaml)
yq '.spec.values' test-helm-release.yaml | helm upgrade --install --values - --version "$version" --namespace test-system --create-namespace test-platform "helm-repo/$chart"
The above code works perfectly and I am able to pass the values to the Helm release using the yq command. How can I achieve the same yq behaviour with the Terraform helm_release resource and the GitHub repository data source given below?
data "github_repository_file" "test-platform" {
repository = "test-platform"
branch = "test"
file = "internal/default/test-helm-release.yaml"
}
resource "helm_release" "test-platform" {
name = "test-platform"
repository = "https://test-platform/charts"
chart = "test-environment"
namespace = "test-system"
create_namespace = true
timeout = 800
lifecycle {
ignore_changes = all
}
}
Note: I cannot use "set" because I want to fetch the values from test-helm-release.yaml dynamically. Any idea how I could fetch .spec.values alone using the templatefile function, or in a different way?

How to add dependency between helm chart and kubernetes resource in terraform CDK

I'm installing a helm chart which creates a CRD and then I want to instantiate the CRD defined in the helm chart. What's the correct way to declare a dependency between them so that terraform doesn't try to create the CRD until after the helm chart has finished installing?
new helm.Release(this, "doppler-kubernetes-operator-helm-chart", {
  chart: "doppler-kubernetes-operator",
  name: "doppler",
  repository: "https://helm.doppler.com",
  version: "1.2.0"
})

const dopplerOperatingSystemNamespace = "doppler-operator-system";

// create a secret referenced by the CRD
const dopplerApiServerProjectServiceTokenSecret = new kubernetes.Secret(this, "doppler-api-server-project-service-token", {
  metadata: {
    name: "doppler-api-server-project-service-token",
    namespace: dopplerOperatingSystemNamespace
  },
  data: {
    "serviceToken": "<some secret>"
  }
})

// Create the CRD <------------- how do I get this to depend on the helm chart?
new kubernetes.Manifest(this, "doppler-kubernetes-operator", {
  manifest: {
    apiVersion: "secrets.doppler.com/v1alpha1",
    kind: "DopplerSecret",
    metadata: {
      name: "doppler-secret-default",
      namespace: dopplerOperatingSystemNamespace,
    },
    spec: {
      tokenSecret: {
        name: dopplerApiServerProjectServiceTokenSecret.metadata.name
      },
      managedSecret: {
        name: "doppler-api-server-managed-secret",
        namespace: "default"
      }
    }
  }
})
In this case I would like to only attempt creating doppler-kubernetes-operator after the helm chart has finished installing.
Turns out I'm an idiot. I was looking for dependsOn (which I use with the AWS classes) and IntelliJ wasn't autocompleting it for the kubernetes Manifest, but I guess my cursor was in the wrong position...

helm chart template lookup function does not work

When trying to use the Helm lookup function, I do not get any result at all.
The Secret that I am trying to read looks like this:
apiVersion: v1
data:
  adminPassword: VG9wU2VjcmV0UGFzc3dvcmQxIQ==
  adminUser: YWRtaW4=
kind: Secret
metadata:
  annotations:
    sealedsecrets.bitnami.com/cluster-wide: "true"
  name: activemq-artemis-broker-secret
  namespace: common
type: Opaque
The Helm chart template that should load the adminUser and adminPassword data looks like this:
apiVersion: broker.amq.io/v1beta1
kind: ActiveMQArtemis
metadata:
  name: {{ .Values.labels.app }}
  namespace: common
spec:
  {{ $secret := lookup "v1" "Secret" .Release.Namespace "activemq-artemis-broker-secret" }}
  adminUser: {{ $secret.data.adminUser }}
  adminPassword: {{ $secret.data.adminPassword }}
When deploying this using ArgoCD I get the following error:
failed exit status 1: Error: template: broker/templates/deployment.yaml:7:23:
executing "broker/templates/deployment.yaml" at <$secret.data.adminUser>:
nil pointer evaluating interface {}.adminUser Use --debug flag to render out invalid YAML
Both the secret and the deployment are in the same namespace (common).
If I try to get the secret with kubectl, it works, as shown below:
kubectl get secret activemq-artemis-broker-secret -n common -o json
{
    "apiVersion": "v1",
    "data": {
        "adminPassword": "VG9wU2VjcmV0UGFzc3dvcmQxIQ==",
        "adminUser": "YWRtaW4="
    },
    "kind": "Secret",
    "metadata": {
        "annotations": {
            "sealedsecrets.bitnami.com/cluster-wide": "true"
        },
        "creationTimestamp": "2022-10-10T14:40:49Z",
        "name": "activemq-artemis-broker-secret",
        "namespace": "common",
        "ownerReferences": [
            {
                "apiVersion": "bitnami.com/v1alpha1",
                "controller": true,
                "kind": "SealedSecret",
                "name": "activemq-artemis-broker-secret",
                "uid": "edff38fb-a966-47a6-a706-cb197ac1797d"
            }
        ],
        "resourceVersion": "127303988",
        "uid": "0679fc5c-7465-4fe1-9197-b483073e93c2"
    },
    "type": "Opaque"
}
What is wrong here? I use Helm version 3.8.1 and Go version 1.75.
This error is the result of two parts working together:
First, Helm's lookup only works against a running cluster, not when running helm template (without --validate). If run in that manner it returns nil. (It is usually used as lookup ... | default dict, to avoid a nasty error message.)
Second, you're deploying with ArgoCD, which actually runs helm template internally when deploying a Helm chart. See the open issue: https://github.com/argoproj/argo-cd/issues/5202 . The issue mentions a plugin that can be used to change this behaviour. However, doing so requires some reconfiguration of ArgoCD itself, which is not trivial and not without side effects.
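To illustrate that guard pattern in the template above (a minimal sketch, not a tested fix; it only changes the spec block shown earlier), the lookup result can be defaulted to an empty dict and the fields rendered only when the Secret actually exists:
spec:
  {{- $secret := lookup "v1" "Secret" .Release.Namespace "activemq-artemis-broker-secret" | default dict }}
  {{- if $secret.data }}
  adminUser: {{ $secret.data.adminUser }}
  adminPassword: {{ $secret.data.adminPassword }}
  {{- end }}
With helm template (and therefore with plain ArgoCD rendering) this produces an ActiveMQArtemis without the credential fields instead of failing; the values only appear when rendering against the live cluster.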

How to patch configmap in json file with kustomize?

How to patch "db.password" in the following cm with kustomize?
comfigmap:
apiVersion: v1
data:
  dbp.conf: |-
    {
      "db_properties": {
        "db.driver": "com.mysql.jdbc.Driver",
        "db.password": "123456",
        "db.user": "root"
      }
    }
kind: ConfigMap
metadata:
  labels: {}
  name: dbcm
You can create a new file with the updated values and use kubectl replace together with kubectl create:
kubectl create configmap NAME --from-file file.name -o yaml --dry-run | kubectl replace -f -
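Applied to the ConfigMap from the question (a sketch; it assumes the updated JSON has been saved locally as dbp.conf first):
kubectl create configmap dbcm --from-file dbp.conf -o yaml --dry-run=client | kubectl replace -f -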
Alternatively, create a placeholder in your file and replace it with the real data while applying kustomize. Your code will look like this:
#!/bin/bash
# substitute the real value for the placeholder
sed -i "s/PLACE_HOLDER/123456/g" db_config.yaml
# render the kustomization and apply the result
kustomize build . > kustomizeconfig.yaml
kubectl apply -f kustomizeconfig.yaml -n foo
And the db_config file will be:
apiVersion: v1
data:
  dbp.conf: |-
    {
      "db_properties": {
        "db.driver": "com.mysql.jdbc.Driver",
        "db.password": "PLACE_HOLDER",
        "db.user": "root"
      }
    }
kind: ConfigMap
metadata:
  labels: {}
  name: dbcm
NB: This should run in the pipeline, where the config file is freshly cloned from the repo, so the real file in the repository won't be updated.
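If you want to stay inside kustomize itself, a sketch along these lines may work (file names taken from the question, the replacement password is illustrative; untested). Since kustomize cannot edit inside the JSON string, a JSON6902 patch replaces the whole dbp.conf key:
# kustomization.yaml
resources:
  - configmap.yaml   # the ConfigMap from the question
patches:
  - target:
      version: v1
      kind: ConfigMap
      name: dbcm
    patch: |-
      - op: replace
        path: /data/dbp.conf
        value: |-
          {
            "db_properties": {
              "db.driver": "com.mysql.jdbc.Driver",
              "db.password": "NEW_PASSWORD",
              "db.user": "root"
            }
          }
Running kubectl apply -k . (or kustomize build .) then emits the ConfigMap with the replaced dbp.conf content.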

kubernetes coreos rbd storageclass

I want to use a k8s StorageClass under CoreOS, but it fails.
- CoreOS version is stable (1122.2)
- Hyperkube version is v1.4.3_coreos.0
The k8s cluster was deployed with the coreos-kubernetes scripts, and I modified rkt_opts for rbd as recommended in kubelet-wrapper.md. The Ceph version is Jewel; I have mounted an rbd image on CoreOS and it works well.
Now I am trying to use a PVC in pods, following the official Kubernetes example: https://github.com/kubernetes/kubernetes/tree/master/examples/experimental/persistent-volume-provisioning
The config files:
**ceph-secret-admin.yaml**
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret-admin
  namespace: kube-system
data:
  key: QVFDTEl2NVg5c0U2R1JBQVRYVVVRdUZncDRCV294WUJtME1hcFE9PQ==
**ceph-secret-user.yaml**
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret-user
data:
  key: QVFDTEl2NVg5c0U2R1JBQVRYVVVRdUZncDRCV294WUJtME1hcFE9PQ==
**rbd-storage-class.yaml**
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: kubepool
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: 'true'
provisioner: kubernetes.io/rbd
parameters:
  monitors: 10.199.134.2:6789,10.199.134.3:6789,10.199.134.4:6789
  adminId: rbd
  adminSecretName: ceph-secret-admin
  adminSecretNamespace: kube-system
  pool: rbd
  userId: rbd
  userSecretName: ceph-secret-user
**claim1.json**
{
  "kind": "PersistentVolumeClaim",
  "apiVersion": "v1",
  "metadata": {
    "name": "claim1",
    "annotations": {
      "volume.beta.kubernetes.io/storage-class": "kubepool"
    }
  },
  "spec": {
    "accessModes": [
      "ReadWriteOnce"
    ],
    "resources": {
      "requests": {
        "storage": "3Gi"
      }
    }
  }
}
The secrets are created OK and the StorageClass creation seems OK, although it can't be described (no description has been implemented for "StorageClass"). When I create the PVC, its status stays Pending; describing it shows:
Name:          claim1
Namespace:     default
Status:        Pending
Volume:
Labels:        <none>
Capacity:
Access Modes:
Events:
  FirstSeen  LastSeen  Count  From                            SubobjectPath  Type     Reason              Message
  ---------  --------  -----  ----                            -------------  ----     ------              -------
  16m        14s       66     {persistentvolume-controller }                 Warning  ProvisioningFailed  no volume plugin matched
Could someone help me?