FIWARE Orion: configure by environment

We are trying to deploy Orion into Kubernetes.
We're looking for a way to configure it using environment variables or a configuration file.
According to the documentation, we can't quite figure out how to do this, since it seems configuration can only be set using command line options.
Any ideas?

As you said, and as far as I know, the Orion container does not support env vars, which makes things only a bit harder.
You need to create a K8s ConfigMap with all of Orion's configuration vars, e.g.
kubectl create configmap orion-config --from-literal='MONGO_DATASTORE=mongo-db'
for instance, the MongoDB host you are going to use.
Then you need to fill in the env of the Orion container in the corresponding K8s Deployment from that ConfigMap, e.g.
"envFrom": [
{
"configMapRef": {
"name": "orion-config"
}
}
]
And in the container command args you need to reference the ConfigMap keys through the $(VAR) syntax defined by K8s, for example:
"args": [
"-dbhost",
"$(MONGO_DATASTORE)" ]
I hope this helps

Recently, Orion has implemented env var support. You can have a look at this section of the documentation about it.
Currently this is implemented in master (so if you use Orion from Docker Hub with the :latest tag you would get it) and it will be available in the next Orion release (2.5.0).
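Once you are on a version with env var support, the ConfigMap value can be passed straight to the container, without $(VAR) substitution in args. A rough sketch, assuming an env var name like ORION_MONGO_HOST (check the CLI-option-to-env-var mapping table in the Orion docs for the exact name):
env:
- name: ORION_MONGO_HOST          # assumed name for the -dbhost option; verify against the docs
  valueFrom:
    configMapKeyRef:
      name: orion-config
      key: MONGO_DATASTORE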

Related

Kubernetes global variables (for all namespaces)

I need to create and maintain some global variables accessible to applications running in all namespaces, because some tools/apps are standard in my dev cluster.
For example:
APM ENDPOINT
APM User/pass
RabbitMQ endpoint
MongoDB endpoint
Whenever I change/migrate a global variable, I want to change it once for all running applications in the cluster (only a pod restart should be needed), but if I create a "global" ConfigMap and read it with envFrom, I need to change/update the ConfigMap in all namespaces.
Does someone have an idea how to do this? I thought about using HashiCorp Vault with a specific role for global environments, but then I would need to adapt all applications to use Vault, so maybe there is a better idea.
Thanks
There is no built-in solution in Kubernetes for this, other than creating a ConfigMap and using envFrom to define all of the ConfigMap's data as Pod environment variables, which will indeed require updating it separately in each namespace. So using HashiCorp Vault is a better solution here; one more option can be to customize env with Kubernetes addons like this.
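If you do stay with the ConfigMap/envFrom approach, one way to avoid hand-editing every namespace is a small script that re-applies the "global" ConfigMap everywhere. A sketch, assuming a ConfigMap named global-config and placeholder endpoint values taken from the question:
# Re-create/update the same ConfigMap in every namespace (kubectl >= 1.18 for --dry-run=client)
for ns in $(kubectl get namespaces -o jsonpath='{.items[*].metadata.name}'); do
  kubectl create configmap global-config -n "$ns" \
    --from-literal=APM_ENDPOINT=https://apm.example.com \
    --from-literal=MONGODB_ENDPOINT=mongodb://mongo.example.com:27017 \
    --dry-run=client -o yaml | kubectl apply -f -
done
Pods that consume it via envFrom still need a restart to pick up changes, as you noted.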

How to use Vault dynamic secrets and inject them as environment variables into a Kubernetes deployment?

We run a Vault cluster (deployed by Helm) and some microservices, all on k8s.
Our MongoDB Atlas connection string is configured as an ENV on the microservices' deployment.
We want to continue using ENV without changing the code to read the Vault config file. So, we tried the examples from here:
https://www.vaultproject.io/docs/platform/k8s/injector/examples
The injection into ENV works, but when Vault rotates the credentials we need to recreate the pod so that the new values are injected into the ENV again.
I would like to know how we can use the dynamic secrets functionality of Vault with ENV on k8s. Any suggestions are welcome.
Thanks
If you are using an environment variable to inject a secret, you will need to recreate the pod whenever the secret changes (as you've found), because the environment variable is only set at pod startup; it is not possible to change an environment variable of a running application. If you want your application to support credentials that change while it is running, I'm afraid you will need to add support for that to your application, and switch from using an env var to reading the details from a file when required.
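To illustrate the file-based approach, here is a rough sketch of the Vault Agent injector annotations that render a dynamic secret into a file the application can re-read; the role name, the database/creds/my-role path, and the connection-string template are assumptions for illustration:
spec:
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/role: "my-app"                                          # assumed Vault Kubernetes auth role
        vault.hashicorp.com/agent-inject-secret-db-creds: "database/creds/my-role"  # assumed dynamic secret path
        # The agent re-renders this file when the credentials are rotated
        vault.hashicorp.com/agent-inject-template-db-creds: |
          {{- with secret "database/creds/my-role" -}}
          mongodb+srv://{{ .Data.username }}:{{ .Data.password }}@cluster0.example.mongodb.net
          {{- end -}}
The rendered file lands under /vault/secrets/ (here /vault/secrets/db-creds), and the application would read it on demand instead of relying on an env var.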

Flink on K8S: how do I provide Flink configuration to the cluster?

I am following the Flink Kubernetes Setup to create a cluster, but it is unclear how I can provide Flink configuration to the cluster. For example, I want to specify jobmanager.heap.size=2048m.
According to the docs, all configuration has to be passed via a yaml configuration file.
It seems that jobmanager.heap.size is a common option that can be configured.
That being said, the approach on Kubernetes is a little different when it comes to providing this configuration file.
The next piece of the puzzle is figuring out what the current start command is for the container you are trying to launch. I assumed you were using the official Flink Docker image, which is good because the Dockerfile is open source (link to the repo at the bottom). It uses a fairly involved script to launch the Flink container, but if you dig through that script you will see that it reads the configuration YAML from /opt/flink/conf/flink-conf.yaml. Instead of trying to change this, it'll probably be easier to just mount a YAML file with your configuration values at that exact path in the pod.
Here's the github repo that has these Dockerfiles for reference.
Next question is what should the yaml file look like?
From their docs:
All configuration is done in conf/flink-conf.yaml, which is expected
to be a flat collection of YAML key value pairs with format key: value.
So, I'd imagine you'd create flink-conf.yaml with the following contents:
jobmanager.heap.size: 2048m
And then mount it in your kubernetes pod at /opt/flink/conf/flink-conf.yaml and it should work.
From a Kubernetes perspective, it might make the most sense to make a ConfigMap of that YAML file and mount the ConfigMap in your pod as a file. See the reference docs.
Specifically, you are most interested in creating a ConfigMap from a file and adding a ConfigMap as a volume.
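A rough sketch of that, assuming a ConfigMap named flink-config; mounting with subPath overrides only flink-conf.yaml and leaves the rest of /opt/flink/conf from the image untouched:
kubectl create configmap flink-config --from-file=flink-conf.yaml

# In the jobmanager/taskmanager pod spec:
containers:
- name: jobmanager
  image: flink:latest              # assumed image tag
  volumeMounts:
  - name: flink-config
    mountPath: /opt/flink/conf/flink-conf.yaml
    subPath: flink-conf.yaml       # override just this one file
volumes:
- name: flink-config
  configMap:
    name: flink-config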
Lastly, I'll call this out, though I won't recommend it because the Flink maintainers currently mark it as an incubating feature: they have started providing a Helm chart for Flink, and I can see that they pass flink-conf.yaml as a ConfigMap in the Helm chart templates (ignore values surrounded with {{ }}; that is Helm template syntax). Here is where they mount their ConfigMap into a pod.

Automatically use secret when pulling from private registry

Is it possible to globally (or at least per namespace) configure Kubernetes to always use an image pull secret when connecting to a private registry?
There are two use cases:
when a user specifies a container in our private registry in a deployment
when a user points a Helm chart at our private repo (and so we have no control over the image pull secret tag).
I know it is possible to do this on a service account basis, but without writing a controller to add this to every new service account created, it would get a bit messy.
Is there a way to set this globally, so that if kube tries to pull from registry X it uses secret Y?
Thanks
As far as I know, usually the default serviceAccount is responsible for pulling the images.
To easily add imagePullSecrets to a serviceAccount you can use the patch command:
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "mySecret"}]}'
It's possible to use kubectl patch in a script that inserts imagePullSecrets on serviceAccounts across all namespaces.
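For instance, something along these lines (a sketch; it assumes the pull secret is called mySecret and already exists in each namespace):
# Add the pull secret to the default serviceAccount in every namespace
for ns in $(kubectl get namespaces -o jsonpath='{.items[*].metadata.name}'); do
  kubectl patch serviceaccount default -n "$ns" \
    -p '{"imagePullSecrets": [{"name": "mySecret"}]}'
done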
If it's too complicated to manage multiple namespaces, you can have a look at kubernetes-replicator, which syncs resources between namespaces.
Solution 2:
This section of the doc explains how you can set the private registry on a node basis:
Here are the recommended steps to configuring your nodes to use a
private registry. In this example, run these on your desktop/laptop:
Run docker login [server] for each set of credentials you want to use. This updates $HOME/.docker/config.json.
View $HOME/.docker/config.json in an editor to ensure it contains just the credentials you want to use.
Get a list of your nodes, for example:
If you want the names:
nodes=$(kubectl get nodes -o jsonpath='{range.items[*].metadata}{.name} {end}')
If you want to get the IPs:
nodes=$(kubectl get nodes -o jsonpath='{range .items[*].status.addresses[?(@.type=="ExternalIP")]}{.address} {end}')
Copy your local .docker/config.json to one of the search paths listed above, for example:
for n in $nodes; do scp ~/.docker/config.json root@$n:/var/lib/kubelet/config.json; done
Solution 3:
A (very dirty!) way I discovered to not need to set up an imagePullSecret on a deployment / serviceAccount basis is to:
1. Set imagePullPolicy: IfNotPresent
2. Pull the image on each node:
2.1. manually, using docker pull myrepo/image:tag.
2.2. using a script or a tool like docker-puller to automate that process.
Well, I think I don't need to explain how ugly that is.
PS: If it helps, I found an issue on kubernetes/kops about the feature of creating a global configuration for private registries.
Two simple questions: where are you running your k8s cluster, and where is your registry located?
Here are a few approaches to your issue:
https://kubernetes.io/docs/concepts/containers/images/#using-a-private-registry

Kubernetes Spring Cloud Multiple Config Maps being used

As per the documentation at https://github.com/spring-cloud-incubator/spring-cloud-kubernetes/#configmap-propertysource, it is possible to make a ConfigMap available during application bootstrapping by adding spring.cloud.kubernetes.config.name to bootstrap.yaml/properties.
Is it possible to consume multiple ConfigMaps in this manner?
I believe it is possible to do this in the pod specification through the use of envFrom - https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/. But it would be great to do this with the current setup that we have.
As you can see in ConfigMapPropertySource.java, only one ConfigMap will be used by this property source.
However, using envFrom, all entries in a ConfigMap can be provided as environment variables to the container, and Spring Boot can also read environment variables, so maybe this will help you.
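For instance, if the ConfigMap keys follow Spring Boot's relaxed-binding naming for environment variables, envFrom alone gets them into the application. A sketch with assumed names and values:
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-env
data:
  SPRING_DATA_MONGODB_URI: mongodb://mongo:27017/app   # binds to spring.data.mongodb.uri
  LOGGING_LEVEL_ROOT: INFO                             # binds to logging.level.root
---
# in the Deployment's container spec:
envFrom:
- configMapRef:
    name: app-env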
Maybe the spring.cloud.kubernetes.config.sources property is also an option here; it lets you specify multiple ConfigMaps.
See https://cloud.spring.io/spring-cloud-static/spring-cloud-kubernetes/1.0.0.M2/multi/multi__configmap_propertysource.html
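For example, a bootstrap.yaml along these lines should pull properties from more than one ConfigMap (a sketch; the ConfigMap names and namespace are assumptions, and the exact sources syntax can differ between spring-cloud-kubernetes versions, so check the docs for your release):
spring:
  application:
    name: my-app
  cloud:
    kubernetes:
      config:
        sources:
          # each entry becomes its own ConfigMap property source
          - name: my-app-config
          - name: shared-config
            namespace: platform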