Can I create secrets in OpenShift 4.3 from files?

I'm new to OpenShift and to Kubernetes too (coming from the Docker Swarm world), and I'm looking to create secrets in OpenShift using a definition file, where the secrets are generated from a file. To give an example of what I'm trying to do: let's say I have a file "apache.conf" and I want to add that file to containers as a secret mounted as a volume. In Swarm I can just write the following in the stack file:
my-service:
  secrets:
    - source: my-secret
      target: /home/myuser/
      mode: 0700
secrets:
  my-secret:
    file: /from/host/apache.conf
In OpenShift I'm looking to have something similar, like:
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
files:
  - "/from/host/apache.conf"
type: Opaque
The only way I've found to do something similar is by using Kustomize, and according to this post, using Kustomize with OpenShift is cumbersome. Is there a better way to create secrets from a file?

No, you can't.
The reason is that the Secret object is stored in the etcd database and is not bound to any host, so the object has no notion of a host path.
You can, however, create the secret from a file using the CLI, and then the content will be saved in the Secret object:
oc create secret generic my-secret --from-file=fil1=pullsecret_private.json
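If you still want a definition file that you can version or apply later, one common trick (a sketch; recent oc releases accept --dry-run=client, older ones use the boolean --dry-run flag) is to let the CLI generate the manifest with the file content already embedded:
oc create secret generic my-secret --from-file=apache.conf=/from/host/apache.conf --dry-run=client -o yaml > my-secret.yaml
oc apply -f my-secret.yaml
The generated my-secret.yaml carries the base64-encoded content of apache.conf under data, so the object no longer depends on any host path.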

Related

Best practice for adding app configuration files into kubernetes pods

I have the following setup:
An Azure Kubernetes cluster with some nodes, where my application (consisting of multiple pods) is running.
I'm looking for a good way to make a project-specific configuration file (a few hundred lines) available for two of the deployed containers and their replicas.
The configuration file is different between my projects but the containers are not.
I'm looking for something like a read-only file mount in the containers, but haven't found a good way. I played around with persistent volume claims, but there seems to be no way to place files automatically apart from copying them in (with the URI and secret management that entails).
The best thing would be a possibility where kubectl uses a YAML file to access a specific folder on my developer machine and pushes my configuration file into the cluster.
ConfigMaps are not a proper way to do it (because the data has to be inside the YAML, and my file is big and changes often).
For volumes, there seems to be no automatic way to place files inside them at creation time.
Can anybody guide me to a good solution that matches my situation?
You can use a ConfigMap for this; the ConfigMap will contain your config file. You can create a ConfigMap with the content of your config file via the following:
kubectl create configmap my-config --from-file=my-config.ini=/path/to/your/config.ini
and then bind it as a volume in your pod:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: mypod
    ...
    volumeMounts:
    - name: config
      mountPath: "/config"
      readOnly: true
  volumes:
  - name: config
    configMap:
      name: my-config # the name of your ConfigMap
Afterwards, your config is available in your pod under /config/my-config.ini.
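Since the file is big and changes between projects, a convenient way to refresh it (a sketch using the standard kubectl dry-run idiom) is to regenerate the ConfigMap from your local copy and apply it in one go:
kubectl create configmap my-config --from-file=my-config.ini=/path/to/your/config.ini --dry-run=client -o yaml | kubectl apply -f -
ConfigMap volumes are refreshed by the kubelet after a short delay, but a process that reads the file only at startup may still need a restart.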

How can I upload a binary file, like a cert file, as a ConfigMap

I am trying to upload a cert file such as a .p12 as a ConfigMap, but it's failing every time. After the upload I do not see the file, just an entry.
The command that I used:
oc create configmap mmx-cert --from-file=xyz.p12
Failed.
Also used:
oc create configmap mmx-cert --from-file=game-special-key=example-files/xyz.p12
Also failed.
You cannot; ConfigMaps cannot contain binary data on their own. You will need to encode it yourself and decode it on the other side, usually with base64. Or just use a Secret instead, which can handle binary data.
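A minimal sketch of that encode/decode workaround (file names and paths here are only examples):
# encode the binary cert into text the ConfigMap can hold
base64 xyz.p12 > xyz.p12.b64
oc create configmap mmx-cert --from-file=xyz.p12.b64
# inside the container, decode it back to binary before use
base64 -d /config/xyz.p12.b64 > /tmp/xyz.p12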
Not sure about the oc command, but if you are talking about kubectl, make sure you feed it the proper parameter. Note that --from-env-file expects a file of key=value pairs, so it won't work for a binary .p12; --from-file is the right flag, and it accepts either a single file or a whole folder. Please go through the help as well:
$ kubectl create configmap --help
...
# Create a new configmap named my-config based on folder bar
kubectl create configmap my-config --from-file=path/to/bar

# Create a new configmap named my-config with specified keys instead of file basenames on disk
kubectl create configmap my-config --from-file=key1=/path/to/bar/file1.txt --from-file=key2=/path/to/bar/file2.txt
This is how I did it.
1) Adding the cert to stage CAE:
With the cert in the same directory you are running the oc command from:
oc create secret generic mmx-cert --from-file='cert.p12'
2) Add the secret volume:
The next step is to create a volume for the secret. As a test, I was able to use the oc command to create the volume on the Apache nodes. Since the Apache nodes have a deployment config, it was straightforward. I took that test setup and manually added it to the app pods. The pieces I added to the deployment YAML were:
- mountPath: /opt/webserver/cert/nonprod/
  name: mmxcert-volume
- name: mmxcert-volume
  secret:
    defaultMode: 420
    secretName: mmx-cert
3) Verify the cert
md5sum cert.p12
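To verify, compare that checksum against the mounted copy inside the pod (the pod name here is hypothetical):
oc exec <app-pod> -- md5sum /opt/webserver/cert/nonprod/cert.p12
The two sums should match if the secret was created and mounted correctly.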
IDK if the accepted answer changed over time, but you actually can put binary files in ConfigMaps, base64-encoded, under binaryData, and you don't have to do anything else after that, e.g.:
apiVersion: v1
kind: ConfigMap
binaryData:
  elasticsearch.keystore: P9dsFxZlbGFzdGljc2VhcmNoLmtleXN0b3JlAAAABAAAAAC/AAAAQM5hkNkN7WjdNRwa/vKIte4mnBrWKZxzuqTdvNdneTWzZyQU+TIquP+ZlV1zCOGm2Jbdg+wMNcWqTQY4LvoSKHEAAAAMo5XONlIK6969bNQFAAAAZ6Cn+jnDe29K0W4a0unPodSljz+W+tRxgD59+oFnt17vN9hSutTbk1lzCJNiwnhK5mliHmS5Ie/9dhWfnI+vhkzXvFAYauvFxS7aJ9L3uKw3opFUtSrPY76fAXPcEYMGp8TcTceMZK7AKJPoAAAAAAAAAAAGacI8
data:
  elasticsearch.yml: |
    ...
source: https://kubernetes.io/docs/concepts/configuration/configmap/
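For what it's worth, a sufficiently recent client should do the encoding for you: ConfigMap binaryData exists since Kubernetes 1.10, and kubectl/oc create configmap --from-file places non-UTF-8 files under binaryData automatically (worth double-checking on your cluster version):
oc create configmap mmx-cert --from-file=xyz.p12
oc get configmap mmx-cert -o yaml
# the cert should appear under binaryData, base64-encoded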

Using sensitive environment variables in Kubernetes configMaps

I know you can use ConfigMap properties as environment variables in the pod spec, but can you use environment variables declared in the pod spec inside the ConfigMap?
For example:
I have a secret password which I wish to access in my configmap application.properties. The secret looks like so:
apiVersion: v1
data:
  pw: THV3OE9vcXVpYTll==
kind: Secret
metadata:
  name: foo
  namespace: foo-bar
type: Opaque
So inside the pod spec I reference the secret as an env var, and the ConfigMap will be mounted as a volume from within the spec:
env:
- name: PASSWORD
  valueFrom:
    secretKeyRef:
      name: foo
      key: pw
...
and inside my configMap I can then reference the secret value like so:
apiVersion: v1
kind: ConfigMap
metadata:
  name: application.properties
  namespace: foo-bar
data:
  application.properties: |
    secret.password=$(PASSWORD)
Anything I've found online is just about consuming configMap values as env vars and doesn't mention consuming env vars in configMap values.
Currently this is not a Kubernetes feature.
There is a closed issue requesting this feature, and it's kind of a controversial topic, because the discussion kept going for many months after it was closed:
Reference Secrets from ConfigMap #79224
Referencing the closing comment:
Best practice is to not use secret values in envvars, only as mounted files. if you want to keep all config values in a single object, you can place all the values in a secret object and reference them that way.
Referencing secrets via configmaps is a non-goal... it confuses whether things mounting or injecting the config map are mounting confidential values.
I suggest you read the entire thread to understand the reasons, and maybe find another approach for your environment to get these variables.
"OK, but this is Real Life, I need to make this work"
Then I recommend you this workaround:
Import Data to Config Map from Kubernetes Secret
It performs the substitution with a shell in the entrypoint of the container.
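A minimal sketch of that approach, assuming PASSWORD is injected from the Secret as shown above and the ConfigMap is mounted read-only at /config (the start command is hypothetical):
command: ["/bin/sh", "-c"]
args:
- |
  # render the final file, replacing the $(PASSWORD) placeholder at startup
  sed "s|\$(PASSWORD)|${PASSWORD}|g" /config/application.properties > /tmp/application.properties
  exec /app/start.sh --config /tmp/application.properties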

Is there a way in a Helm chart to take secrets stored as text strings and write them out to a file while installing the chart?

I am working on my first project using AWS EKS. My company has their own proprietary workflow for deploying apps to EKS which includes the requirement to use Helm.
I need to take Terraform and k8s YAML code provided by a vendor to stand up their app, and convert it to fit my company's proprietary standards. One of those standards requires that secrets not be stored with the code. I have to store secrets as text strings in a specific secrets.yaml file, which is kept in a secure location and brought into the chart on the fly while it is being installed.
That's the background, now here's the question...
The vendor-provided app is designed to ingest credentials in the form of a text file. This text file is expected to live with the code, which I can't do.
Is there a way to embed a script or something within my Helm chart which can take these credentials, which I have stored in the secrets file as a text string, and output them to a temporary text file while the chart is being installed, so that the file the app needs exists?
I assume the secrets.yaml is added as a Kubernetes Secret in your environment?
What you can do is add the secret as a file in your app.
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-example
spec:
  containers:
  - name: provider-app
    image: provider-app
    volumeMounts:
    - name: foo
      mountPath: "/path/to/provider/expected/location"
      readOnly: true
  volumes:
  - name: foo
    secret:
      secretName: mysecret
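If the secrets.yaml is not already a Kubernetes Secret, the chart can render one itself. A minimal Helm sketch, assuming the credential text is supplied under a hypothetical credentials key in the externally stored values file:
# templates/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
stringData:
  credentials.txt: {{ .Values.credentials | quote }}
Installing with the secure values file brought in on the fly:
helm install my-release . -f /secure/location/secrets.yaml
Because stringData accepts plain text, no base64 encoding is needed in the template, and the pod spec above then mounts the rendered file at the path the vendor app expects.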

Standard way of keeping Dockerhub credentials in Kubernetes YAML resource

I am currently implementing a CI/CD pipeline using Docker, Kubernetes, and Jenkins for my microservices deployment, and I am testing the pipeline using a public repository that I created on Dockerhub.com. When I tried the deployment using a Kubernetes Helm chart, I was able to add all my credentials in the values.yaml file (the default file for configuration when creating a Helm chart).
Confusion
Now I removed my Helm chart, and I am only using deployment and service plain YAML files. So how can I add my Dockerhub credentials here?
Do I need to use an environment variable? Or do I need to create a separate YAML file for the credentials and reference it in the deployment.yaml file?
If I am using the imagePullSecrets way, how can I create a separate YAML file for the credentials?
From the Kubernetes point of view (see Pull an Image from a Private Registry), you can create a secret and add the necessary information to your YAML (Pod/Deployment).
Steps:
1. Create a Secret by providing credentials on the command line:
kubectl create secret docker-registry regcred --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>
2. Create a Pod that uses your Secret (example pod):
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: <your-private-image>
  imagePullSecrets:
  - name: regcred
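If you prefer a separate YAML file for the credentials, you can generate one from step 1 by appending --dry-run=client -o yaml > regcred.yaml, or write it by hand. A sketch, where the data value is the base64 of your ~/.docker/config.json:
apiVersion: v1
kind: Secret
metadata:
  name: regcred
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64-encoded ~/.docker/config.json>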
You can pass the Dockerhub creds as environment variables in Jenkins only, and imagePullSecrets are to be made as per the Kubernetes docs; as they are one-time things, you can directly add them to the required clusters.