Kubernetes Spring Cloud: using multiple ConfigMaps

As per the documentation at https://github.com/spring-cloud-incubator/spring-cloud-kubernetes/#configmap-propertysource, it is possible to make a ConfigMap available during application bootstrapping by adding spring.cloud.kubernetes.config.name to bootstrap.yaml/properties.
Is it possible to consume multiple ConfigMaps in this manner?
I believe it is possible to do this in the pod specification through the use of envFrom - https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/. But it would be great to do this with our current setup.

As you can see in ConfigMapPropertySource.java, only one ConfigMap will be used by this property source.
However, using envFrom, all entries in a ConfigMap can be provided to the container as environment variables, and since Spring Boot can also read environment variables, that may help you.

The spring.cloud.kubernetes.config.sources property may also be an option here; it lets you specify multiple ConfigMaps.
See https://cloud.spring.io/spring-cloud-static/spring-cloud-kubernetes/1.0.0.M2/multi/multi__configmap_propertysource.html
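For illustration, a bootstrap.yaml along those lines might look roughly like this (the ConfigMap names and the namespace are placeholders, not anything from the question):

spring:
  application:
    name: my-app
  cloud:
    kubernetes:
      config:
        sources:
          # first ConfigMap, looked up in the pod's own namespace
          - name: common-config
          # second ConfigMap, looked up in an explicitly given namespace
          - namespace: shared
            name: shared-config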

Related

Kubernetes global variables (for all namespaces)

I need to create and maintain some global variables accessible to applications running in all namespaces, because some tools/apps are standard in my dev cluster.
For example:
APM ENDPOINT
APM User/pass
RabbitMQ endpoint
MongoDB endpoint
Whenever I change or migrate a global variable, I want to change it once for all running applications in the cluster (only a pod restart should be needed). If I create a "global" ConfigMap and read it with envFrom, I have to change/update the ConfigMap in every namespace.
Does anyone have an idea how to do this? I thought about using HashiCorp Vault with a specific role for global environments, but then I would need to adapt all applications to use Vault; maybe there is a better idea.
Thanks
There is no built-in solution in Kubernetes for this, other than creating a ConfigMap and using envFrom to expose all of the ConfigMap's data as pod environment variables, which will indeed require updating it separately in each namespace. So using HashiCorp Vault is a better solution here; one more option can be customizing the environment with Kubernetes add-ons like this.
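For reference, a minimal pod spec that loads every entry of such a "global" ConfigMap as environment variables could look like this (all names are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
    - name: app
      image: my-app:latest
      envFrom:
        # every key/value pair in the ConfigMap becomes an environment variable
        - configMapRef:
            name: global-config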

What are other ways to provide configuration information to pods besides a ConfigMap?

I have a deployment in which I want to populate pod with config files without using ConfigMap.
You could also store your config files on a PersistentVolume and read those files at container startup. For more details on that topic please take a look at the K8S reference docs: https://kubernetes.io/docs/concepts/storage/persistent-volumes/
Please note: I would not consider this good practice. I used this approach in the early beginning of a project where a legacy app was migrated to Kubernetes: The application consisted of tons of config files that were read by the application at startup.
Later on I switched to creating ConfigMaps from my configuration files, as the latter approach allows you to store the K8S object (yaml file) in Git, and I found managing/editing a ConfigMap way easier/faster, especially in a multi-node K8S environment:
kubectl create configmap app-config --from-file=./app-config1.properties --from-file=./app-config2.properties
If you go for the "config files in persistent volume" approach you need to take different aspects into account, e.g. how to get your configuration files onto that volume, potentially not on a single node but on multiple nodes, and how to keep them in sync.
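As a rough sketch of the "config files on a persistent volume" approach, assuming a PersistentVolumeClaim named app-config-pvc already exists and already holds the files:

apiVersion: v1
kind: Pod
metadata:
  name: legacy-app
spec:
  containers:
    - name: app
      image: legacy-app:latest
      volumeMounts:
        # the application reads its config files from this directory at startup
        - name: config-volume
          mountPath: /etc/app/config
  volumes:
    - name: config-volume
      persistentVolumeClaim:
        claimName: app-config-pvc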
You can also use environment variables and read the values from the environment.
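A minimal sketch of that option, with plain values set directly in the pod spec and no ConfigMap involved (names and values are made up):

apiVersion: v1
kind: Pod
metadata:
  name: app-with-env
spec:
  containers:
    - name: app
      image: my-app:latest
      env:
        # values are hard-coded in the manifest rather than coming from a ConfigMap
        - name: DB_HOST
          value: "db.example.com"
        - name: FEATURE_FLAG
          value: "true"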

Change the Spring Boot Admin registry unique ID

I have a requirement where my client applications have almost the same properties and even the same URL, since they run behind a load balancer; the only difference between them is a particular set of environment properties.
Is it possible to register them uniquely based on those properties?
I would say there are a few approaches.
One would be loading environment variables from a Kubernetes Secret.
The second is using Helm (https://helm.sh/).
Helm helps you manage Kubernetes applications — Helm Charts help you define, install, and upgrade even the most complex Kubernetes application.
Charts are easy to create, version, share, and publish — so start using Helm and stop the copy-and-paste.
Explanation:
If you use the Secret option, you would probably create two separate Secrets with the env variables you need and load them based on the app name; if the apps are set up in different namespaces, copy the Secret into each of them, as those resources do not work across namespaces (sketched below).
If you use Helm, you will have to write your chart and put the env variables into values.yaml, or mix the approaches and load a Secret from inside Kubernetes.
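A rough sketch of the Secret option, assuming each instance gets its own Secret holding the environment properties that differ (all names and keys here are hypothetical):

apiVersion: v1
kind: Secret
metadata:
  name: client-a-env
type: Opaque
stringData:
  # the properties that differ between the instances behind the load balancer
  INSTANCE_NAME: client-a
  INSTANCE_ZONE: zone-1

and in the pod template of client A's Deployment:

      containers:
        - name: app
          image: my-client:latest
          envFrom:
            # every key in the Secret becomes an environment variable
            - secretRef:
                name: client-a-env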
This will work on Kubernetes, I do not know (based on your tags) if it's the same on OpenShift.
Please provide some samples of what you have already done and I'll provide more details.

Flink on K8S: how do I provide Flink configuration to the cluster?

I am following the Flink Kubernetes Setup to create a cluster, but it is unclear how I can provide Flink configuration to the cluster, e.g. I want to specify jobmanager.heap.size=2048m.
According to the docs, all configuration has to be passed via a yaml configuration file.
It seems that jobmanager.heap.size is a common option that can be configured.
That being said, the approach on kubernetes is a little different when it comes to providing this configuration file.
The next piece of the puzzle is figuring out what the current start command is for the container you are trying to launch. I assumed you are using the official Flink Docker image, which is good because the Dockerfile is open source (link to the repo at the bottom). It uses a fairly complicated script to launch the Flink container, but if you dig through that script you will see that it reads the configuration YAML from /opt/flink/conf/flink-conf.yaml. Instead of trying to change this, it is probably easier to mount a YAML file with your configuration values at exactly that path in the pod.
Here's the github repo that has these Dockerfiles for reference.
Next question is what should the yaml file look like?
From their docs:
All configuration is done in conf/flink-conf.yaml, which is expected
to be a flat collection of YAML key value pairs with format key: value.
So, I'd imagine you'd create flink-conf.yaml with the following contents:
jobmanager.heap.size: 2048m
And then mount it in your kubernetes pod at /opt/flink/conf/flink-conf.yaml and it should work.
From a kubernetes perspective, it might make the most sense to make a configmap of that yaml file, and mount the config map in your pod as a file. See reference docs
Specifically, you are most interested in creating a configmap from a file and Adding a config map as a volume
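Putting that together, a sketch of the jobmanager pod could look like the following, assuming a ConfigMap named flink-config was created from your flink-conf.yaml (e.g. with kubectl create configmap flink-config --from-file=flink-conf.yaml); the pod, container, and image names are just examples:

apiVersion: v1
kind: Pod
metadata:
  name: flink-jobmanager
spec:
  containers:
    - name: jobmanager
      image: flink:latest
      args: ["jobmanager"]
      volumeMounts:
        # overlays only flink-conf.yaml, leaving the rest of /opt/flink/conf intact
        - name: flink-config
          mountPath: /opt/flink/conf/flink-conf.yaml
          subPath: flink-conf.yaml
  volumes:
    - name: flink-config
      configMap:
        name: flink-config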
Lastly, I'll call this out, but I won't recommend it because the Flink maintainers currently mark it as an incubating feature: they have started providing a Helm chart for Flink, and I can see that they pass flink-conf.yaml as a ConfigMap in the Helm chart templates (ignore values surrounded with {{ }} - that is Helm template syntax). Here is where they mount their ConfigMap into a pod.

How to connect applications with services created through Kubernetes operators?

I am looking for best practices to connect applications with services. I have a database operator which creates a service, and there is an application pod which needs to connect to it. Is the following approach going to work?
The operator injects the access details into the pods as Secret and ConfigMap.
The operator identifies application pods through label selectors (e.g., connects-to: mysql).
The application pod receives service access details through environment variables.
The operator can document the environment variables and the label selectors.
If the above flow is going to work, how can I inject values into pods?
I can see a few mechanisms. Which one would be better?
PodPreset (alpha since 2017)
Initializer
MutatingAdmissionWebhook
This is the expected interaction between controllers and actors (PodPreset could be substituted with one of the other choices).
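For illustration, once the operator has created the Secret and ConfigMap, the application pod could consume the access details roughly like this (all object names and keys are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: my-app
  labels:
    # label the operator uses to identify application pods
    connects-to: mysql
spec:
  containers:
    - name: app
      image: my-app:latest
      env:
        # non-sensitive access details from the operator-created ConfigMap
        - name: DB_HOST
          valueFrom:
            configMapKeyRef:
              name: mysql-access
              key: host
        # credentials from the operator-created Secret
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-credentials
              key: password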
Your question is not completely clear.
If I understand your question correctly, you can use Helm with multiple values files.
helm install ./repo/ --name repo --values ./values.file
You can also use the --set flag to add values that are not in the values file for some reason.
Looking into the question, it seems too general.
For example you can find more information about "CONTAINERS & KUBERNETES - Best practices for building Kubernetes Operators and stateful apps" here.
As per the documentation, the most important points are:
Operators exercise one of the most valuable capabilities of Kubernetes—its extensibility—through the use of CRDs and custom controllers. Operators extend the Kubernetes API to support the modernization of different categories of workloads and provide improved lifecycle management, scheduling.
Please refer to "Application management made easier with Kubernetes Operators on GCP Marketplace". As you can see, GCP has published a set of Kubernetes Operators to encapsulate best practices and end-to-end solutions for specific applications.
On Stack Overflow you can find a discussion about CRDs.
As an example, please see the Postgres Operator, with a comparison between two methods of setting the Postgres Operator configuration here.
In this case, "The CRD-based configuration is more powerful than the one based on ConfigMaps and should be used unless there is a compatibility requirement to use an already existing configuration."
Here you can also find information on "When to use a ConfigMap or a custom resource?"
Hope this helps.