This is from the documentation on External Database Environment Variables. It says:
Using an external service in your application is similar to using an internal service. Your application will be assigned environment variables for the service and the additional environment variables with the credentials described in the previous step. For example, a MySQL container receives the following environment variables:
EXTERNAL_MYSQL_SERVICE_SERVICE_HOST=<ip_address>
EXTERNAL_MYSQL_SERVICE_SERVICE_PORT=<port_number>
MYSQL_USERNAME=<mysql_username>
MYSQL_PASSWORD=<mysql_password>
MYSQL_DATABASE_NAME=<mysql_database>
This part is not clear: "Your application will be assigned environment variables for the service."
How should the application be configured so that the environment variables for the service are assigned? I understand that the ones defined in the DeploymentConfig will flow into the application, e.g. in Node.js as process.env.MYSQL_USERNAME, etc. What I am not clear about is how EXTERNAL_MYSQL_SERVICE_SERVICE_HOST or EXTERNAL_MYSQL_SERVICE_SERVICE_PORT will flow in.
From Step 1 of the link that you posted, if you create a Service object
oc expose deploymentconfig/<name>
This will automatically generate environment variables (https://docs.openshift.com/container-platform/3.11/dev_guide/environment_variables.html#automatically-added-environment-variables) for all pods in your namespace. (The environment variables may not be immediately available if the Service was added after your pods were already created; delete the pods to have them added on restart.)
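To make the consuming side concrete, here is a minimal Node.js (TypeScript) sketch. It assumes the Service is named external-mysql-service, so that Kubernetes derives the EXTERNAL_MYSQL_SERVICE_SERVICE_* names; both the automatically generated service variables and the ones you defined yourself in the DeploymentConfig arrive through process.env in exactly the same way:

// Automatically generated by Kubernetes from the Service object
// (Service name assumed to be external-mysql-service):
const host = process.env.EXTERNAL_MYSQL_SERVICE_SERVICE_HOST;
const port = process.env.EXTERNAL_MYSQL_SERVICE_SERVICE_PORT;

// Defined explicitly in the DeploymentConfig:
const user = process.env.MYSQL_USERNAME;
const password = process.env.MYSQL_PASSWORD;
const database = process.env.MYSQL_DATABASE_NAME;

if (!host || !port) {
  // Usually means the pod was created before the Service existed.
  throw new Error("Service environment variables are missing; recreate the pod.");
}

console.log(`Connecting to mysql://${user}@${host}:${port}/${database}`);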
I need to create and maintain some global variables accessible to applications running in all namespaces, because some tools/apps are standard in my dev cluster.
For example:
APM ENDPOINT
APM User/pass
RabbitMQ endpoint
MongoDB endpoint
The point is: when I change or migrate any global variable, I want to change it once for all running applications in the cluster (just needing a pod restart). But if I create a "global" ConfigMap and read it with envFrom, I have to change/update the ConfigMap in every namespace.
Does someone have an idea how to do this? I thought about using HashiCorp Vault with a specific role for global environments, but I would need to adapt all applications to use Vault; maybe there is a better idea.
Thanks
There is no built-in solution in Kubernetes for this except creating a ConfigMap and using envFrom to define all of the ConfigMap's data as Pod environment variables, which will indeed require updating it separately for each namespace. So using HashiCorp Vault is a better solution here; one more option can be customizing env handling with a Kubernetes addon.
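Whichever delivery mechanism you pick (envFrom on a per-namespace ConfigMap, Vault, or an addon), the application side looks the same. A minimal TypeScript sketch, where the variable names are assumptions taken from the examples in the question and would be the keys of your "global" ConfigMap:

// Names assumed for illustration; they would be the keys of the
// ConfigMap loaded via envFrom.
const required = [
  "APM_ENDPOINT",
  "APM_USER",
  "APM_PASS",
  "RABBITMQ_ENDPOINT",
  "MONGODB_ENDPOINT",
];

const missing = required.filter((name) => !process.env[name]);
if (missing.length > 0) {
  // Failing fast at startup makes a stale or absent ConfigMap visible
  // immediately after the pod restart mentioned in the question.
  throw new Error(`Missing global env variables: ${missing.join(", ")}`);
}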
My container code needs to know in which environment it is running on GKE; more specifically, which cluster and project. In standard Kubernetes this could be retrieved from the current-context value (gke_<project>_<cluster>).
Kubernetes has a downward API that can push pod info to containers - see https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/ - but unfortunately nothing from "higher" entities.
Any thoughts on how this can be achieved?
Obviously I do not want to explicitly push any info at deployment time (e.g. as an env in the ConfigMap). I would rather deploy using a generic/common yaml and have the code retrieve the info at runtime from an env or file and branch accordingly.
You can query the GKE metadata server from within your code. In your case, you'd want to query the /computeMetadata/v1/instance/attributes/cluster-name and /computeMetadata/v1/project/project-id endpoints to get the cluster and project. The client libraries for each supported language all have simple wrappers for accessing the metadata API as well.
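For example, in Node.js (TypeScript) with the built-in fetch of Node 18+; the only non-obvious detail is the mandatory Metadata-Flavor: Google header:

// The metadata server is only reachable from inside GCE/GKE.
const BASE = "http://metadata.google.internal/computeMetadata/v1";

async function metadata(path: string): Promise<string> {
  const res = await fetch(`${BASE}/${path}`, {
    headers: { "Metadata-Flavor": "Google" }, // required, or the server rejects the request
  });
  if (!res.ok) throw new Error(`Metadata query failed: ${res.status}`);
  return res.text();
}

async function main() {
  const cluster = await metadata("instance/attributes/cluster-name");
  const project = await metadata("project/project-id");
  console.log(`Running in cluster ${cluster} of project ${project}`);
}

main().catch(console.error);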
We run a Vault cluster (deployed by Helm) and some microservices, all on k8s.
Our MongoDB Atlas connection string is configured as an ENV on the microservices' deployment.
We want to keep using ENVs without changing the code to read the Vault config file, so we tried the examples from here:
https://www.vaultproject.io/docs/platform/k8s/injector/examples
The injection into the ENV works, but when Vault rotates the credentials we need to recreate the pod so that the new values get injected into the ENV again.
I would like to know how we can use Vault's dynamic-secrets functionality with ENVs on k8s. Any suggestions are welcome.
Thanks
If you are using an environment variable to inject a secret, you will need to recreate the pod whenever the secret changes (as you've found), because the environment variable is only set at startup of the pod; it is not possible to change an environment variable of a running application. If you want your application to support credentials that change while it is running, I'm afraid you will need to add support for that to the application itself (and switch from using an env var to reading the details from the file whenever they are required).
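A minimal sketch of the file-reading variant in TypeScript, assuming the Vault agent injector renders the credentials as JSON to /vault/secrets/db-creds (both the path and the format are whatever your vault.hashicorp.com/agent-inject-template annotation produces, so treat these as placeholders):

import { readFileSync } from "node:fs";

// Hypothetical path and format; adjust to your agent-inject-template output.
const SECRET_PATH = "/vault/secrets/db-creds";

interface DbCredentials {
  username: string;
  password: string;
}

function currentCredentials(): DbCredentials {
  // Re-read the file on every use (or on an auth failure) so rotated
  // credentials are picked up without recreating the pod, which is not
  // possible with an env var fixed at process startup.
  return JSON.parse(readFileSync(SECRET_PATH, "utf8"));
}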
I have a requirement where my client applications have almost the same properties, and even the URL is the same since they run behind a load balancer; the only difference between them is a particular set of environment properties.
Is it possible to register them uniquely based on that property?
I would say there are a few approaches.
One would be loading Environment Variables from a Kubernetes Secret.
Second, using Helm (https://helm.sh/):
Helm helps you manage Kubernetes applications — Helm Charts help you define, install, and upgrade even the most complex Kubernetes application.
Charts are easy to create, version, share, and publish — so start using Helm and stop the copy-and-paste.
Explanation:
If you went with the Secret option, you would probably create two separate Secrets with the env variables that you need and load them based on the app name; if the apps are set up in different namespaces, copy the Secret into each one, as those resources do not work across namespaces.
If you went with Helm, you would have to write your chart and put the env variables into values.yaml, or mix both approaches and have the chart load a Secret from inside Kubernetes.
This will work on Kubernetes; I do not know (based on your tags) whether it's the same on OpenShift.
Please provide some samples of what you have already done and I'll provide more details.
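In the meantime, note that on the application side both options end the same way: the distinguishing property arrives as an env variable that the client uses to register itself. A minimal TypeScript sketch, with INSTANCE_ROLE as a hypothetical name for whichever property differs between your clients:

// INSTANCE_ROLE is a placeholder for the env property that differs between
// the otherwise identical clients behind the load balancer; it would be
// supplied by the Secret or the Helm values of each deployment.
const role = process.env.INSTANCE_ROLE;
if (!role) {
  throw new Error("INSTANCE_ROLE must be set to register this client uniquely");
}
const registrationName = `my-client-${role}`;
console.log(`Registering as ${registrationName}`);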
How can I tell whether or not I am running inside a Kubernetes cluster? With Docker I can check whether /.dockerinit exists. Is there an equivalent?
You can check for the KUBERNETES_SERVICE_HOST environment variable.
This variable is always set in the environment of any container that Kubernetes executes.
Refer to https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#environment-variables
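In code the check is a one-liner, e.g. in Node.js (TypeScript):

// KUBERNETES_SERVICE_HOST (and KUBERNETES_SERVICE_PORT) are injected into
// every container that Kubernetes runs, pointing at the API server Service.
const inKubernetes = Boolean(process.env.KUBERNETES_SERVICE_HOST);
console.log(inKubernetes ? "Running inside a Kubernetes cluster" : "Not in Kubernetes");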
You can pass environment variables to your containers in the pod spec. You can even expose some pod information to the containers via environment variables using the downward API.
With the default configuration, Kubernetes will mount the serviceaccount secrets into pods. Simply check for the existence of this folder: /var/run/secrets/kubernetes.io.
No need to set environment variables. In Ruby I would do the following:
# Docker creates /.dockerenv in every container
if File.exist?('/.dockerenv')
  puts "I'm running in a docker container"
end

# Kubernetes mounts the serviceaccount secrets under this path by default
if File.exist?('/var/run/secrets/kubernetes.io')
  puts "I'm also running in a Kubernetes pod"
end
One option is to check the /etc/hosts file: by default it contains a comment saying the file is maintained by Kubernetes.
Anyway, the best way is to define your own env variable in the deployment; use a templating tool like Helm to generate the deployment and define the variable in a general, shared template.