Common config in Kubernetes ConfigMap

Kubernetes already provides a way to manage configuration with ConfigMap.
However, I have a question/problem here.
If I have multiple applications with different needs deployed in Kubernetes, all of these deployments might need to share and access some common config variables. Is it possible for ConfigMaps to share common config variables?

There are two ways to do that.
Kustomize - customization of Kubernetes YAML configurations (developed under kubernetes-sigs and now integrated into the kubectl command line). However, it is currently not as mature as a Helm chart.
https://github.com/kubernetes-sigs/kustomize
Helm chart - the Kubernetes package manager. Its values.yaml can define the values for the same configuration files (in your case, ConfigMaps) with variables.
https://helm.sh/
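For example, with Kustomize you can generate a shared ConfigMap once in a common base and have each application's overlay pull that base in. A minimal sketch, assuming a made-up layout with a base/ directory and one overlay per app (the names base/, apps/apples/ and common-config are just placeholders):

# base/kustomization.yaml - generates the shared ConfigMap once
configMapGenerator:
  - name: common-config
    literals:
      - LOG_LEVEL=info
      - APM_ENDPOINT=http://apm.example.com:8200

# apps/apples/kustomization.yaml - each app's overlay reuses the same base
resources:
  - ../../base
  - deployment.yaml

With Helm, the equivalent would be keeping the shared values in values.yaml (or a shared values file passed with -f) and templating the ConfigMap from them.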

Related

Kubernetes global variables (for all namespaces)

I need to create and maintain some global variables accessible to applications running in all namespaces, because some tools/apps are standard in my dev cluster.
For example:
APM ENDPOINT
APM User/pass
RabbitMQ endpoint
MongoDB endpoint
Whenever I change or migrate a global variable, I want to change it once for all running applications in the cluster (only a pod restart should be needed). But if I create a "global" ConfigMap and read it with envFrom, I need to change/update that ConfigMap in every namespace.
Does anyone have an idea how to do this? I thought about using HashiCorp Vault with a specific role for global environments, but then I would need to adapt all applications to use Vault; maybe there is a better idea.
Thanks
There is no built-in solution in Kubernetes for this beyond creating a ConfigMap and using envFrom to expose all of the ConfigMap's data as Pod environment variables, which does indeed require updating it separately for each namespace. So using HashiCorp Vault is a better solution here; one more option can be to customize env handling with Kubernetes add-ons like this.
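For reference, the ConfigMap-plus-envFrom approach mentioned above looks roughly like this; the names global-config and team-a are placeholders, and the same ConfigMap would have to be created in every namespace that needs it:

apiVersion: v1
kind: ConfigMap
metadata:
  name: global-config
  namespace: team-a          # has to be duplicated per namespace
data:
  APM_ENDPOINT: "https://apm.example.com:8200"
  RABBITMQ_ENDPOINT: "amqp://rabbitmq.example.com:5672"
---
apiVersion: v1
kind: Pod
metadata:
  name: demo
  namespace: team-a
spec:
  containers:
    - name: app
      image: nginx
      envFrom:
        - configMapRef:
            name: global-config   # every key in the ConfigMap becomes an environment variable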

Dynamically refresh pods on secrets update on kubernetes while using helm chart

I am creating deployment and service manifest files using Helm charts, and also secrets via Helm, but separately, not together with the deployments and services.
Secrets are loaded as environment variables at the pod level.
We are looking to refresh or restart Pods when we update Secrets with new content.
Kubernetes itself does not support this feature at the moment; there is a feature request in the works (https://github.com/kubernetes/kubernetes/issues/22368).
You can use a custom solution to achieve the same thing, and one of the popular ones is Reloader.
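As an illustration, Reloader is opted into with a single annotation on the Deployment; a Helm-only alternative that is often used instead is to put a checksum of the Secret template into the pod template's annotations, so that changed Secret content changes the pod template and triggers a rolling restart. A rough sketch combining both ideas (the chart path templates/secret.yaml and the names my-app / my-app-secret are assumptions about your chart):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  annotations:
    reloader.stakater.com/auto: "true"   # Reloader restarts the pods when referenced Secrets/ConfigMaps change
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        # Helm-only alternative: new Secret content changes this hash, which changes
        # the pod template and makes Kubernetes perform a rolling restart
        checksum/secret: {{ include (print $.Template.BasePath "/secret.yaml") . | sha256sum }}
    spec:
      containers:
        - name: app
          image: my-app:latest           # placeholder image
          envFrom:
            - secretRef:
                name: my-app-secret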

Kubernetes infrastructure as code best practice

Can anyone point me to a common strategy for setting up a Kubernetes cluster according to the principles of infrastructure as code and automated deployment, for different developer teams with their own Git repos and an as-yet-undefined CI/CD platform?
Let's say I am going to use Terraform to deploy a Kubernetes cluster on a hypothetical cloud service named QKS with a commonly used service, for example Apache Airflow, for which a public helm chart is available. There are two custom services (from two independent developer groups) to deploy named "apples" and "bananas".
I am struggling with the separation of responsibilities between the different code bases. Which steps in this process can best still be done manually? A lot is being written about this technology, but I cannot find any articles on this issue in particular.
This is my own proposal.
Have three git repositories:
my-infrastructure: includes the Terraform files, the Airflow Helm deployment, and the deployment of two namespaces, including access roles for these namespaces. CI/CD tracks changes and deploys them on QKS.
apples: code base and corresponding Helm template. CI/CD can deploy to the apples namespace only.
bananas: code base and corresponding Helm template. CI/CD can deploy to the bananas namespace only.
Notes:
subdivision of the cluster into namespaces is obvious;
all secrets and authorization tokens for the namespaces can be created via Terraform using the Terraform Kubernetes provider.
https://www.terraform.io/docs/providers/kubernetes/r/secret.html
There is an interesting Kubernetes project for this called cluster-api that lets you create, configure, and manage Kubernetes clusters in a declarative fashion, similar to how we manage other resources within Kubernetes itself. It defines new resources of different kinds, such as Cluster and Machine.
e.g. You could define a cluster like this:
apiVersion: cluster.x-k8s.io/v1alpha2
kind: Cluster
metadata:
  name: capi-quickstart
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
    kind: AWSCluster
    name: capi-quickstart
Of course, you would need a starting/bootstrap Kubernetes cluster on which to deploy this resource. This project is still in the prototype stage, so use caution.
Check out the cluster-api repository on Github: https://github.com/kubernetes-sigs/cluster-api

Kubernetes: Is it possible to have the exact same deployment descriptor for all environments including local?

I am new to Kubernetes and Minikube. Both look like amazing tools, but I wonder if there is any way to have a single .yml file to deploy my services/deployments in all environments, including the local dev env...
The first limitation I see is related to service discovery: I would like to have my services behind a load balancer in the cloud, but in the development environment I can't, since Minikube doesn't support it, so I have to fall back to NodePort.
Can you provide me with some info about that matter?
There are other common differences between environments: names; credentials for any database or other permissioned resources; allocation of RAM/CPU; replica counts. There are also limitations that minikube has as a runtime, compared to production Kubernetes.
So, though one can use the same single YAML file in different environments, typically that's not what one wants.
What one usually wants is for the general architectural shape of the solution to be the same across environments, with the differences extracted into minimal configuration and then rendered via templates into environment-specific files used at deployment time.
The tool most commonly used to support this kind of approach is helm:
https://helm.sh/
Helm is basically a glorified templating wrapper around kubectl (though it has an in-cluster component). With helm, you can use the same base set of resource files, extract environment differences into config files, and then use helm to deploy as appropriate to each environment.
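As a rough illustration of that workflow (the chart layout and the value names below are invented, not from the question): keep one template and move the per-environment differences into values files, then pick the file at install time.

# templates/service.yaml - one template shared by every environment
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}
spec:
  type: {{ .Values.service.type }}   # LoadBalancer in the cloud, NodePort on minikube
  ports:
    - port: 80
  selector:
    app: {{ .Release.Name }}

# values-prod.yaml
service:
  type: LoadBalancer

# values-local.yaml
service:
  type: NodePort

Deploying then becomes something like helm install my-app ./chart -f values-local.yaml locally and -f values-prod.yaml in the cloud.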
If I understood your question properly, you would like to spin up your infrastructure using one command and one file.
It is possible; however, it depends on your services. If some pods require another one to be running before they can start, this can get tricky. Technically, though, you can put all your manifest files into one bundle and then create all the deployments, services, etc. with kubectl apply -f bundle.yml.
To create this bundle, you need to separate each manifest (deployment, service, configmap, etc.) with triple dashes (---).
Example:
apiVersion: v1
kind: Namespace
metadata:
  name: namespace-1
---
apiVersion: v1
kind: Namespace
metadata:
  name: namespace-2

Passing variables to args field in a yaml file, kubernetes

I am writing a YAML file for Kubernetes and I am wondering how to pass variables to the args field.
I need to do something like this :
args: ['--arg1=http://12.12.12.12:8080','--arg2=11.11.11.11']
But I don't want to hard-code those values for --arg1 and --arg2; instead, it should be something like:
args: ['--arg1='$HOST1,'--arg2='$HOST2]
How should I do this?
You have two options that are quite different and really depend on your use-case, but both are worth knowing:
1) Helm would allow you to create templates of Kubernetes definitions that can use variables.
Variables are supplied when you install the Helm chart, and before the resulting manifests are deployed to Kubernetes.
You can change the variables later on, but what that does is regenerate the YAML and re-deploy a "static" version of the result (template + variables = YAML that's sent to Kubernetes).
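For the args case in this question, option 1 would look roughly like this in a chart (host1 and host2 under .Values are made-up value names):

# values.yaml
host1: "http://12.12.12.12:8080"
host2: "11.11.11.11"

# templates/deployment.yaml - container section
args:
  - "--arg1={{ .Values.host1 }}"
  - "--arg2={{ .Values.host2 }}"

They can be overridden at install time, e.g. helm install my-app ./chart --set host1=http://10.0.0.1:8080.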
2) ConfigMaps allow you to separate configuration from the pod manifest and share this configuration across several pods/deployments.
You can then reference the ConfigMap from your pod/deployment manifests.
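With option 2 you can go a step further: Kubernetes expands $(VAR_NAME) references in command and args against the container's environment variables, so the values can come straight from a ConfigMap without templating. A minimal sketch (the ConfigMap name hosts-config, its keys, and the image are placeholders):

apiVersion: v1
kind: ConfigMap
metadata:
  name: hosts-config
data:
  HOST1: "http://12.12.12.12:8080"
  HOST2: "11.11.11.11"
---
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
    - name: app
      image: my-app:latest               # placeholder image
      env:
        - name: HOST1
          valueFrom:
            configMapKeyRef:
              name: hosts-config
              key: HOST1
        - name: HOST2
          valueFrom:
            configMapKeyRef:
              name: hosts-config
              key: HOST2
      args:
        - "--arg1=$(HOST1)"              # $(...) is expanded by Kubernetes, not by a shell
        - "--arg2=$(HOST2)"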
Hope this helps!