How to read a .jks file into a Kubernetes secret?

I have created a secret.yaml file as follows:
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
data:
  truststore.jks: {{ (.Files.Glob "../trust.jks").AsSecrets | b64enc }}
I am calling this as part of a template .yaml file in Helm:
apiVersion: v1
kind: DeploymentConfig
spec:
  ...
  template:
    spec:
      ...
      containers:
        - name: "my-container"
          ...
          volumeMounts:
            - name: secrets
              mountPath: /mnt/secrets
              readOnly: true
      volumes:
        - name: secrets
          secret:
            secretName: "my-secret"
When I run the helm install command the pod gets created successfully and the volume is mounted, but if I check truststore.jks in /mnt/secrets using cat, this is the output:
cat /mnt/secrets/truststore.jks
{}
I ran the dry-run command to check the generated .yaml file; the Secret is populated as below:
# Source: ag-saas/templates/tsSecret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
data:
  truststore.jks: e30=
How do I get the file into my secret?

There are a couple of things going on here:
.Files.Glob is intended to retrieve multiple files, e.g. .Files.Glob "credentials/*.jks". For a single file, .Files.Get will retrieve its contents directly.
You can only access files inside the chart directory; referencing .Files.Get "../trust.jks" won't work.
.Files.Glob.AsSecrets renders a list of files into the entire contents of the data: block; you just need the file content directly.
So your Secret should look like
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
data:
  truststore.jks: {{ .Files.Get "trust.jks" | b64enc }}
where in the last line I've used .Files.Get, I haven't tried to refer to a "../..." path outside the chart, and I don't render it through ...AsSecrets.
You will also need to move or copy (not symlink) the keystore file into the chart directory for this to work.
(In the current form, .Files.Glob won't match anything outside the chart directory, so you get an empty list of files. Then rendering that to .AsSecrets gets you an empty JSON object. You're using that string {} as the secret value, which gets correctly base64-encoded, but that's why {} comes out at the end.)
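For example, a minimal sketch (assuming the chart directory is ./ag-saas, as suggested by the dry-run output above):
# copy the keystore into the chart directory, then re-render and check the Secret
cp ../trust.jks ag-saas/trust.jks
helm template ag-saas | grep 'truststore.jks:'   # should now show real base64 data instead of e30=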

Related

How do I set helm values (not files) in ArgoCD Application spec

I looked all over the ArgoCD docs for this but somehow I cannot seem to find an answer. I have an application spec like so:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp
  namespace: argocd
spec:
  destination:
    namespace: default
    server: https://kubernetes.default.svc
  project: default
  source:
    helm:
      valueFiles:
        - my-values.yaml
    path: .
    repoURL: ssh://git#blah.git
    targetRevision: HEAD
However, I also need to specify a particular helm value (like you'd do with --set on the helm command line). I see in the ArgoCD web UI that it has a spot for Values, but I have tried every combination of entries I can think of (somekey=somevalue, somekey:somevalue, somekey,somevalue). I also tried editing the manifest directly, but I still get similar errors when doing so.
The error is long nonsense that ends with error unmarshaling JSON: while decoding JSON: json: cannot unmarshal string into Go value of type map[string]interface {}
What is the correct syntax to set a single value, either through the web UI or the manifest file?
You would use parameters via spec.source.helm.parameters, something like:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: my-project
  source:
    repoURL: https://charts.my-company.com
    targetRevision: "1234"
    chart: my-chart
    helm:
      parameters:
        - name: my.helm.key
          value: some-val
  destination:
    name: k8s-dev
    namespace: my-ns
Sample from Argo Docs - https://argo-cd.readthedocs.io/en/stable/user-guide/helm/#build-environment
To override just a few arbitrary parameters in the values, you can indeed use parameters: as the equivalent of Helm's --set option, or fileParameters: instead of --set-file:
...
helm:
  # Extra parameters to set (same as setting through values.yaml, but these take precedence)
  parameters:
    - name: "nginx-ingress.controller.service.annotations.external-dns\\.alpha\\.kubernetes\\.io/hostname"
      value: mydomain.example.com
    - name: "ingress.annotations.kubernetes\\.io/tls-acme"
      value: "true"
      forceString: true # ensures that value is treated as a string
  # Use the contents of files as parameters (uses Helm's --set-file)
  fileParameters:
    - name: config
      path: files/config.json
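Roughly speaking, these map onto plain Helm's flags; a sketch of the equivalent CLI invocation, assuming the same chart and values layout:
# parameters: ~ --set / --set-string, fileParameters: ~ --set-file (dots in keys escaped with \.)
helm template . -f my-values.yaml \
  --set "nginx-ingress.controller.service.annotations.external-dns\.alpha\.kubernetes\.io/hostname=mydomain.example.com" \
  --set-string "ingress.annotations.kubernetes\.io/tls-acme=true" \
  --set-file config=files/config.json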
But to answer your original question: for the "Values" option in the GUI you pass a literal YAML block in the manifest, like:
helm:
  # Helm values files for overriding values in the helm chart
  # The path is relative to the spec.source.path directory defined above
  valueFiles:
    - values-prod.yaml
  # Values file as block file
  values: |
    ingress:
      enabled: true
      path: /
      hosts:
        - mydomain.example.com
      annotations:
        kubernetes.io/ingress.class: nginx
        kubernetes.io/tls-acme: "true"
      labels: {}
Check ArgoCD sample application for more details.
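If you'd rather not edit the manifest by hand, the argocd CLI can set the same fields; a sketch, assuming the application is named myapp:
# set a single Helm parameter (ends up in spec.source.helm.parameters)
argocd app set myapp -p some.key=some-value
# point the app at an additional Helm values file inside the repo
argocd app set myapp --values values-prod.yaml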

Unable to add ConfigMap data as environment variables into a pod: it says invalid variable names cannot be added as environment variables

I am trying to add config data as environment variables, but Kubernetes warns about invalid variable names. The configmap data contains JSON and property files.
spec:
  containers:
    - name: env-var-configmap
      image: nginx:1.7.9
      envFrom:
        - configMapRef:
            name: example-configmap
After deploying, I do not see them added in the process environment. Instead, I see a warning message like the one below:
Config map example-configmap contains keys that are not valid environment variable names. Only config map keys with valid names will be added as environment variables.
But I see that it works if I add the data directly as a key-value pair:
env:
  # Define the environment variable
  - name: SPECIAL_LEVEL_KEY
    valueFrom:
      configMapKeyRef:
        # The ConfigMap containing the value you want to assign to SPECIAL_LEVEL_KEY
        name: special-config
        # Specify the key associated with the value
        key: special.how
I have thousands of key-value pairs in the ConfigMap data and I cannot add them all as separate entries.
Is there any short syntax to add all values from a configmap as environment variables?
While @P-Ekambaram already helped you out, I was getting the same error message; it turned out that my issue was that I had named the ConfigMap ms-provisioning-broadsoft-adapter and was trying to use ms-provisioning-broadsoft-adapter as the key. As soon as I changed the key to ms_provisioning_broadsoft_adapter, i.e. underscores instead of hyphens, it happily let me add it to the application.
Hope this might help someone else who also runs into the error "invalid variable name cannot be added as environment variable".
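If you have thousands of keys and want to spot the offending ones quickly, something like this can list them (a sketch, assuming jq is installed and that your cluster enforces the classic C-identifier rule of letters, digits and underscores for env var names):
# list ConfigMap keys that would be skipped as environment variable names
kubectl get configmap example-configmap -o json \
  | jq -r '.data | keys[] | select(test("^[A-Za-z_][A-Za-z0-9_]*$") | not)'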
A sample reference is given below.
Create the ConfigMap as shown below:
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default
data:
  SPECIAL_LEVEL: very
  SPECIAL_TYPE: charm
Load the ConfigMap data as environment variables in the pod:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "env" ]
      envFrom:
        - configMapRef:
            name: special-config
  restartPolicy: Never
Output:
master $ kubectl logs dapi-test-pod | grep SPECIAL
SPECIAL_LEVEL=very
SPECIAL_TYPE=charm
You should rename your variables.
In my case they were like this:
VV-CUSTOMER-CODE
VV-CUSTOMER-URL
I just renamed them to:
VV_CUSTOMER_CODE
VV_CUSTOMER_URL
and it works fine. OpenShift/Kubernetes works with underscores (_) but not with hyphens (-).
I hope this helps.

How to set kubernetes secret key name when using --from-file other than filename?

Is there a way to set a kubernetes secret key name when using --from-file other than the filename?
I have a bunch of different configuration files that I use as secrets.json within my containers. However, to organize my files, none of them are named secrets.json on my host. For example secrets.dev.json or secrets.test.json. My apps only know to read in secrets.json.
When I create a secret with kubectl create secret generic my-app-secrets --from-file=secrets.dev.json, this results in the key name being secrets.dev.json and not secrets.json.
I'm mounting my secret contents as a file (this is a carry-over from migrating from Docker Swarm):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-deployment
spec:
  template:
    spec:
      volumes:
        - name: my-secret
          secret:
            secretName: my-app-secrets
      containers:
        - name: my-app
          volumeMounts:
            - name: my-secret
              mountPath: "/run/secrets/secrets.json"
              subPath: "secrets.json"
Because secrets.json doesn't exist as a key (the key was taken from the filename, secrets.dev.json), the mount ends up being turned into a directory instead, and I get this mount path: /run/secrets/secrets.json/secrets.dev.json.
I'd like to be able to set the key name to secrets.json instead of using the filename of secrets.dev.json.
You can specify the key name: --from-file=[key=]source
kubectl create secret generic my-app-secrets --from-file=secrets.json=secrets.dev.json
Here, secrets.json is the key name and secrets.dev.json is the source file.
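Alternatively, if you want to keep the original key name in the Secret, you can remap it at mount time with items: in the volume definition; a sketch based on the Deployment above:
volumes:
  - name: my-secret
    secret:
      secretName: my-app-secrets
      items:
        - key: secrets.dev.json   # key as stored in the Secret
          path: secrets.json      # filename the container sees under the mount
With this, the container still ends up reading /run/secrets/secrets.json without renaming the key in the Secret itself.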

Why does Helm not accept Kubernetes secrets for deployment?

I have created a secret inside the Kubernetes cluster for pulling images from a private repository and added it to my Helm values.yml.
After the deployment starts (helm install chart /chart) I see that the Helm deployment keeps crashing with a timeout.
kubectl describe pod shows me the errors ImagePullBackOff and "wrong credentials".
At the same time, if I deploy the same app with kubectl apply -f deployment.yml, the secret works as expected: the image is downloaded without any issues and the deployment is successful.
The question is: how do I get this secret to work with Helm charts?
Try creating the secret using this command:
kubectl create secret docker-registry mysecret --docker-server=<docker-repo> --docker-username=<docker-username> --docker-password=<docker-password> --docker-email=<email>
(Provide your respective inputs in the above command)
From the Helm documentation:
First, assume that the credentials are defined in the values.yaml file like so:
imageCredentials:
  registry: quay.io
  username: someone
  password: sillyness
We then define our helper template as follows:
{{- define "imagePullSecret" }}
{{- printf "{\"auths\": {\"%s\": {\"auth\": \"%s\"}}}" .Values.imageCredentials.registry (printf "%s:%s" .Values.imageCredentials.username .Values.imageCredentials.password | b64enc) | b64enc }}
{{- end }}
Finally, we use the helper template in a larger template to create the Secret manifest:
apiVersion: v1
kind: Secret
metadata:
  name: myregistrykey
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: {{ template "imagePullSecret" . }}
In the Deployment:
containers:
  - name: private-reg-container
    image: <your-private-image>
imagePullSecrets:
  - name: myregistrykey
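To avoid committing credentials to values.yaml, you can also pass them at install time; a sketch assuming the values layout above (my-release and ./chart are placeholders for your release name and chart path):
helm install my-release ./chart \
  --set imageCredentials.registry=quay.io \
  --set imageCredentials.username=someone \
  --set imageCredentials.password=sillyness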

How to set default value for Kubernetes ConfigMap (or does Helm overwrite ConfigMap values?)

Use case:
I want to be able to re-run a job from where the first job left off. I am using Helm to deploy into Kubernetes.
I have the idea of saving the state of the first job in a ConfigMap. The ConfigMap yaml defining the ConfigMap is packaged up with the job and both are deployed at the same time with Helm.
apiVersion: v1
kind: ConfigMap
metadata:
  name: NameOfMyConfigMap
data:
  someKey: someValue
  MY_STATE: state  # <---- see below as to whether this should be included or not
The job is run with an ENV variable set from the ConfigMap:
env:
  - name: MY_STATE
    valueFrom:
      configMapKeyRef:
        name: NameOfMyConfigMap
        key: MY_STATE
The job runs a script that checks whether $MY_STATE is set. If it is not set, the job is being run for the first time; otherwise the job shuts down the already running first job, saves the first job's state into the MY_STATE ConfigMap variable, and launches the job again using the saved state.
If I don't declare the MY_STATE key in the initial ConfigMap definition then the first run of the job will fail, as the ENV definition above cannot find the ConfigMap variable.
If I do declare the value (MY_STATE: "") in the ConfigMap definition, then the first deployment will work. However, if I re-deploy the job with helm upgrade then does the value I enter in the definition not overwrite an existing value in the existing ConfigMap?
What is the best method of storing state in between runs of the same job?
Have you tried using volumes? In this case it should not be overwritten when using helm upgrade.
Could an example like this work? (From
https://groups.google.com/forum/#!msg/kubernetes-users/v2806ezEdPk/1geJCO8-AQAJ)
apiVersion: batch/v1
kind: Job
metadata:
  name: keystore-configmap-job
spec:
  template:
    metadata:
      name: keystore-configmap
    spec:
      containers:
        - name: keystore
          image: ubuntu
          volumeMounts:
            - name: keystore-configmap-volume
              mountPath: /config-base64
          command: [ "sh", "-c", "cat /config-base64/keystore.jks | base64 --decode | sha256sum" ]
      restartPolicy: Never
      volumes:
        - name: keystore-configmap-volume
          configMap:
            name: keystore-configmap
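If you do want to keep the ConfigMap approach from the question, one option is Helm's lookup function, so that an upgrade only writes the empty default when the ConfigMap doesn't already exist. This is only a sketch: lookup requires Helm 3 (and returns an empty map during helm template / --dry-run), and dig requires Helm 3.3+.
apiVersion: v1
kind: ConfigMap
metadata:
  name: NameOfMyConfigMap
data:
  someKey: someValue
  # reuse the live MY_STATE value if the ConfigMap already exists, otherwise default to ""
  {{- $existing := lookup "v1" "ConfigMap" .Release.Namespace "NameOfMyConfigMap" }}
  MY_STATE: {{ dig "data" "MY_STATE" "" $existing | quote }}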