Is it possible for Deis Workflow to read values from a ConfigMap? - kubernetes

I have installed Deis Workflow v2.11 in a GKE cluster, and some of our applications share common values, such as a proxy URL and credentials. I can use these values by putting them into environment variables, or even into a .env file.
However, for every new application I need to create a .env file with the shared values and then call
deis config:push
If one of those shared values changes, I need to adjust the configuration of every app and restart them. I would like to modify the value in a ConfigMap once and have Deis restart the applications after the change.
Does anyone know if it is possible to read values from a Kubernetes ConfigMap and put them into Deis environment variables? If so, how do I do it?

I believe what you're looking for is a way to set environment variables globally across all applications. That is currently not implemented. However, please feel free to hack up a PR and we'd likely accept it!
https://github.com/deis/controller/issues/383
https://github.com/deis/controller/issues/1219

Currently there is no support for ConfigMaps in Deis Workflow v2.18.0. We would appreciate a PR to Hephy Workflow (the open source fork of Deis Workflow): https://github.com/teamhephy/controller
There is no functionality right now to read a ConfigMap in the init scripts of the containers.

You could update the configMap, but each of the applications would need to run kubectl replace -f path/accessible/for/everyone/configmap.yaml to get the variables updated.
So, I would say yes, at the Kubernetes level you can do it. Just figure out the best way for your apps to update the ConfigMap. I don't have details of your use case, so I can't suggest specific approaches.
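For illustration, at the plain Kubernetes level the pattern could look roughly like the sketch below (the ConfigMap name, keys, and image are placeholders, not anything Deis-specific). Note that environment variables sourced from a ConfigMap are only read when a container starts, so the pods still need to be restarted after the ConfigMap changes:
apiVersion: v1
kind: ConfigMap
metadata:
  name: shared-config
data:
  PROXY_URL: "http://proxy.internal:3128"
  PROXY_USER: "svc-proxy"
---
apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  containers:
  - name: app
    image: example/app:latest
    envFrom:
    - configMapRef:
        name: shared-config   # every key in the ConfigMap becomes an environment variable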

Related

A very simple Kubernetes scheduler question about a custom scheduler

Just a quick question: do you HAVE to remove or move the default kube-scheduler.yaml from the folder? Can't I just make a new yaml (with the custom scheduler) and run that in the pod?
Kubernetes isn't file-based and doesn't care about file locations. You use the files only to apply configuration onto the cluster via kubectl, kubeadm, or similar CLI tools and their libraries. The yaml is only the content you manually put into it.
You need to know/decide what your folder structure and your execution/configuration flow are.
Also, you can simply use a temporary file; the naming doesn't matter either, and it's alright to replace the content of a yaml file. Preferably, though, keep some kind of history record, such as a manual note, a comment, or source control like git, so you know what was changed and why.
So yes, you can change the scheduler yaml, or you can create a new file and reorganize it however you like, but you will need to adjust your flow to that: change paths, etc.
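As a small sketch of the second option (the scheduler name below is just an example): once the custom scheduler is running in the cluster, under whichever yaml file you chose, a pod opts into it through spec.schedulerName rather than through any file location:
apiVersion: v1
kind: Pod
metadata:
  name: scheduled-by-custom
spec:
  schedulerName: my-custom-scheduler   # must match the name the custom scheduler registers with
  containers:
  - name: app
    image: nginx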

How to set automatic rollbacks in CodeDeploy with CloudFormation?

I'm creating a Deployment Group in CodeDeploy with a CloudFormation template.
The Deployment Group is successfully created and the application is deployed perfectly fine.
The CF resource that I defined (Type: AWS::CodeDeploy::DeploymentGroup) has the "Deployment" property set. The thing is that I would like to configure automatic rollbacks for this deployment, but as per CF documentation for "AutoRollbackConfiguration" property: "Information about the automatic rollback configuration that is associated with the deployment group. If you specify this property, don't specify the Deployment property."
So my understanding is that if I specify "Deployment", I cannot set "AutoRollbackConfiguration"... Then how are you supposed to configure any rollback for the deployment? I don't see any other resource property that relates to rollbacks.
Should I create a second DeploymentGroup resource and bind it to the same instances that the original Deployment Group has? I'm not sure this is possible or makes sense but I ran out of options.
Thanks,
Nicolas
First, I'd like to describe why you cannot specify both the deployment and the rollback configuration:
Whenever you specify a deployment directly for the group, you already state which revision you would like to deploy. This conflicts with CloudFormation's idea of managing resources without drift in their actual configuration.
I would recommend the following:
Use CloudFormation to deploy the 'underlying' infrastructure (the deployment group, application, roles, instances, etc.)
Create a CodePipeline within this infrastructure template, which then includes a CodeDeploy deployment action (https://docs.aws.amazon.com/codepipeline/latest/userguide/action-reference-CodeDeploy.html, https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-codepipeline-pipeline-stages-actions-actiontypeid.html)
The pipeline can be triggered whenever there is a new version in your revision location
This approach clearly separates the underlying infrastructure, which does not change dynamically, from the actual application deployment, which is done through a proper pipeline.
Additionally, this way you can specify how you would like to deploy (blue/green, canary) and how/when rollbacks should be handled. The status of your deployment can also be seen inside CodePipeline.
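A rough sketch of what the deployment group can look like once the Deployment property is dropped and the pipeline drives revisions instead (resource names, ARNs, and tag values below are placeholders):
MyDeploymentGroup:
  Type: AWS::CodeDeploy::DeploymentGroup
  Properties:
    ApplicationName: !Ref MyCodeDeployApplication
    ServiceRoleArn: !GetAtt CodeDeployServiceRole.Arn
    DeploymentGroupName: my-app-deployment-group
    AutoRollbackConfiguration:        # allowed here because no Deployment property is set
      Enabled: true
      Events:
        - DEPLOYMENT_FAILURE
    Ec2TagFilters:
      - Key: Name
        Value: my-app-instance
        Type: KEY_AND_VALUE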
I didn't mention it but what you are suggesting about CodePipeline is exactly what I did.
In fact, I have one CloudFormation template that creates all the infrastructure and includes the DeploymentGroup. With this, the application is deployed for the first time to my EC2 instances.
Then I have another CF template for CI/CD purposes with a CodeDeploy stage/action that references the previous DeploymentGroup. Whenever I push some code to my repository, the pipeline is triggered, the code is built, and the new version is successfully deployed to the instances.
However, I don't see how/where in any of the CF templates to handle/configure the rollback for the DeploymentGroup as you were saying. I think I get the idea of your explanation about the conflict CF might have in case of drift, but my impression is that in case of errors during CF stack creation, the CF rollback should just remove the DeploymentGroup being created. In other words, for me there is no CodeDeploy deployment rollback involved in that scenario, just removal of the resource (DeploymentGroup) CF was trying to create.
One thing that really strikes me is that you can enable/disable automatic rollbacks for the DeploymentGroup through the AWS Console: just edit the DeploymentGroup, go to Advanced Configuration, and there is a checkbox. I tried it, triggered the pipeline again, and it worked perfectly. I made a faulty change to make the deployment fail on purpose, and CodeDeploy automatically reverted to the previous version of my application, exactly the expected behavior. It doesn't make much sense that this simple boolean/flag option is not available through CF.
Hope this makes sense and helps clarify my current situation. Any extra help would be highly appreciated.
Thanks again

Populating a config file when a pod starts

I am trying to run a legacy application inside Kubernetes. The application consists of one or more controllers, and one or more workers. The workers and controllers can be scaled independently. The controllers take a configuration file as a command line option, and the configuration looks similar to the following:
instanceId=hostname_of_machine
Memory=XXX
....
I need to be able to populate the instanceId field with the name of the machine, and this needs to be stable over time. What are the general guidelines for implementing something like this? The application doesn't support environment variables, so my first thought was to record the StatefulSet's stable network ID in an environment variable and rewrite the configuration file with an init container. Is there a cleaner way to approach this? I haven't found a whole lot of solutions when I searched the net.
Nope, that's the way to do it (use an initContainer to update the config file).
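A minimal sketch of that pattern, assuming a StatefulSet and made-up image and volume names: an initContainer renders the config file onto a shared emptyDir using the pod's stable hostname (e.g. controller-0), and the main container reads it from there:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: controller
spec:
  serviceName: controller
  replicas: 2
  selector:
    matchLabels:
      app: controller
  template:
    metadata:
      labels:
        app: controller
    spec:
      initContainers:
      - name: render-config
        image: busybox
        command:
        - sh
        - -c
        # hostname resolves to the stable StatefulSet pod name, e.g. controller-0
        - printf 'instanceId=%s\nMemory=2048\n' "$(hostname)" > /config/app.conf
        volumeMounts:
        - name: config
          mountPath: /config
      containers:
      - name: controller
        image: legacy/controller:latest
        args: ["--config", "/config/app.conf"]
        volumeMounts:
        - name: config
          mountPath: /config
      volumes:
      - name: config
        emptyDir: {}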

Handling OpenShift secrets in a safe way after extraction into environment variables

So I have configured an OpenShift 3.9 build configuration such that environment variables are populated from an OpenShift secret at build time. I am using these environment variables to set up passwords for PostgreSQL roles in the image's ENTRYPOINT script.
Apparently these environment variables are baked into the image, not just the build image, but also the resulting database image. (I can see their values when issuing set inside the running container.) On one hand this seems necessary, because the ENTRYPOINT script needs access to them and executes only at image run time (not build time). On the other hand this is somewhat disconcerting, because as far as I know anyone who obtained the image could now extract those passwords. Unsetting the environment variables after use would not change that.
So is there a better way (or even best practice) for handling such situations in a more secure way?
UPDATE: At this stage I see two possible ways forward (better choice first):
Configure the DeploymentConfig such that it mounts the secret as a volume (rather than having the BuildConfig populate environment variables from it).
Store PostgreSQL password hashes (rather than verbatim passwords) in the secret.
As was suggested in a comment, what made sense was to shift the provisioning of environment variables from the secret from the BuildConfig to the DeploymentConfig. For reference:
oc explain bc.spec.strategy.dockerStrategy.env.valueFrom.secretKeyRef
oc explain dc.spec.template.spec.containers.env.valueFrom.secretKeyRef
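For illustration, the DeploymentConfig end of that could look roughly like this (the secret name and key are made up); both the environment-variable wiring and the volume variant from the first option above are shown:
# inside dc.spec.template.spec
containers:
- name: postgresql
  image: registry.example.com/postgresql:latest
  env:
  - name: POSTGRESQL_PASSWORD
    valueFrom:
      secretKeyRef:
        name: postgres-credentials   # hypothetical secret
        key: password
  volumeMounts:
  - name: credentials               # volume variant: the secret appears as files instead
    mountPath: /run/secrets/postgres
    readOnly: true
volumes:
- name: credentials
  secret:
    secretName: postgres-credentials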

How to implement the "One Binary" principle with Docker

The One Binary principle, explained at http://programmer.97things.oreilly.com/wiki/index.php/One_Binary, states that one should...
"Build a single binary that you can identify and promote through all the stages in the release pipeline. Hold environment-specific details in the environment. This could mean, for example, keeping them in the component container, in a known file, or in the path."
I see many DevOps engineers arguably violate this principle by creating one Docker image per environment (i.e., my-app-qa, my-app-prod and so on). I know that Docker favours immutable infrastructure, which implies not changing an image after deployment, and therefore not uploading or downloading configuration post-deployment. Is there a trade-off between immutable infrastructure and the one binary principle, or can they complement each other? When it comes to separating configuration from code, what is the best practice in a Docker world? Which one of the following approaches should one take...
1) Creating a base binary image and then having a configuration Dockerfile that augments this image by adding environment-specific configuration (i.e. my-app -> my-app-prod)
2) Deploying a binary-only docker image to the container and passing in the configuration through environment variables and so on at deploy time.
3) Uploading the configuration after deploying the Docker file to a container
4) Downloading configuration from a configuration management server from the running docker image inside the container.
5) Keeping the configuration in the host environment and making it available to the running Docker instance through a bind mount.
Is there another better approach not mentioned above?
How can one enforce the one binary principle using immutable infrastructure? Can it be done, or is there a trade-off? What is the best practice?
I have about 2 years of experience deploying Docker containers now, so I'm going to talk about what I've done and/or know to work.
So, let me first begin by saying that containers should definitely be immutable (I even mark mine as read-only).
Main approaches:
use configuration files by setting a static entrypoint and overriding the configuration file location via the container startup command - that's less flexible, since one would have to commit the change and redeploy in order to enable it; not fit for passwords, secure tokens, etc.
use configuration files by overriding their location with an environment variable - again, this depends on having the configuration files prepped in advance; not fit for passwords, secure tokens, etc.
use environment variables - a config change then only needs a change in the deployment code, not a new application build (in most cases), which shortens the time to get the change live; deploying such a change can be pretty easy. Here's an example - if deploying a containerised application to Marathon, changing an environment variable could potentially just start a new container from the last used container image (potentially even on the same host), which means that this could be done in mere seconds (see the sketch after this list); not fit for passwords, secure tokens, etc., and especially so in Docker
store the configuration in a k/v store like Consul, make the application aware of that, and let it even be dynamically reconfigurable. A great approach for launching features simultaneously - possibly even across multiple services; if implemented with a solution such as HashiCorp Vault, which provides secure storage for sensitive information, you could even have ephemeral secrets (an example would be the PostgreSQL secret backend for Vault - https://www.vaultproject.io/docs/secrets/postgresql/index.html)
have an application or script create the configuration files before starting the main application - store the configuration in a k/v store like Consul and use something like consul-template to populate the app config; a bit more secure, since you're not carrying everything through the whole pipeline as code
have an application or script populate the environment variables before starting the main application - an example of that would be envconsul; not fit for sensitive information - someone with access to the Docker API (either through the TCP or UNIX socket) would be able to read those
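Here is the sketch referred to above for the plain environment-variable approach, as a docker-compose file (the image name and values are placeholders): the image is identical in every environment, and only the environment block differs per stage:
version: "3"
services:
  my-app:
    image: registry.example.com/my-app:1.4.2   # the same image promoted through all stages
    read_only: true                            # keep the container immutable
    environment:
      DATABASE_URL: "postgres://db.prod.internal:5432/app"
      FEATURE_X_ENABLED: "false"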
I've even had a situation in which we were populating variables into AWS instance user_data and injecting them into the container on startup (with a script that modifies the container's JSON config on startup)
The main things that I'd take into consideration:
what are the variables that I'm exposing, and when and where am I getting their values from (it could be the CD software, or something else) - for example, you could publish the AWS RDS endpoint and credentials to the instance's user_data, potentially even to EC2 tags with some IAM instance profile magic
how many variables do we have to manage and how often do we change some of them - if we have a handful, we could probably just go with environment variables, or use environment variables for the most commonly changed ones and a file for those that change less often
how fast do we want to see them changed - if it's a file, it typically takes more time to deploy it to production; if we're using environment variables, we can usually deploy those changes much faster
how do we protect some of them - where do we inject them and how - for example Ansible Vault, HashiCorp Vault, keeping them in a separate repo, etc
how do we deploy - that could be a JSON config file sent to a deployment framework endpoint, Ansible, etc.
what's the environment that we're having - is it realistic to have something like Consul as a config data store (Consul has 2 different kinds of agents - client and server)
I tend to prefer the most complex case of having them stored in a central place (k/v store, database) and having them changed dynamically, because I've encountered the following cases:
slow deployment pipelines - which makes it really slow to change a config file and have it deployed
having too many environment variables - this could really grow out of hand
having to turn on a feature flag across the whole fleet (consisting of tens of services) at once
an environment in which there is a real push to increase security by better handling of sensitive config data
I've probably missed something, but I guess this should be enough of a trigger to think about what would be best for your environment.
How I've done it in the past is to incorporate tokenization into the packaging process after a build is executed. These tokens can be managed in an orchestration layer that sits on top to manage your platform tools. For a given token, there is a matching regex or XPath expression. That token is linked to one or many config files, depending on the relationship that is chosen. Then, when the build is deployed to a container, a platform service (i.e. configuration management) replaces these tokens with the correct values for its environment. Those values would most likely be pulled from a vault.
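As a rough illustration of that flow, with invented file names and token syntax: the packaged artifact ships config files containing placeholders such as db.password=@@DB_PASSWORD@@, and the config-management service holds a per-environment token map that it uses to replace them at deploy time, with sensitive values resolved from a vault:
# tokens.prod.yaml - hypothetical per-environment token map held by the platform's
# config-management service; each key matches a @@TOKEN@@ placeholder in the
# packaged config files (e.g. db.password=@@DB_PASSWORD@@ in app.conf)
DB_HOST: db.prod.internal
DB_PORT: "5432"
DB_PASSWORD: vault:secret/prod/app#db_password   # reference resolved from a vault at deploy time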