Extending deployments with default configuration - kubernetes

I have a config map that defines some variables, like the environment, that are then passed into a lot of deployment configurations like this:
- name: ENV
  valueFrom:
    configMapKeyRef:
      name: my-config-map
      key: ENV
Secrets and some volumes, like SSL certs, are also common across the configs. Is there some Kubernetes type that would let me create a base service deployment that a normal deployment extends? Or some other way to deal with this? I'm also using kustomize, so there might be an option there.

You can use a PodPreset object to inject information such as secrets, volume mounts, and environment variables into pods at creation time.
Before you start using PodPreset you need to take a few steps:
First, enable the API type settings.k8s.io/v1alpha1/podpreset, which can be done by including settings.k8s.io/v1alpha1=true in the --runtime-config option for the API server.
Then enable the admission controller PodPreset. You can do this by including PodPreset in the --enable-admission-plugins option value specified for the API server.
After that, create the PodPreset objects in the namespace you will work in and apply them with kubectl apply -f preset.yaml.
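A minimal sketch of such a preset.yaml for the setup in the question (the label selector and the ssl-certs Secret name are assumptions, not something from the original config):
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: base-service-preset
spec:
  # only pods whose labels match this selector receive the injected config
  selector:
    matchLabels:
      app-type: base-service     # hypothetical label
  env:
    - name: ENV
      valueFrom:
        configMapKeyRef:
          name: my-config-map
          key: ENV
  volumeMounts:
    - name: ssl-certs
      mountPath: /etc/ssl/private
  volumes:
    - name: ssl-certs
      secret:
        secretName: ssl-certs    # hypothetical Secret holding the shared certificates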
Please refer to the official documentation to see how it works.

Related

How do I interpolate Kubernetes variables into JSON in the ConfigMap YAML file?

I have this ConfigMap where I am constructing an app-config.json file that I pass into Angular. This file is how I get environment variables into Angular, as they must be served to the browser.
Below is how I thought passing variables into the JSON would work in the ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: frontend-settings
data:
  app-config.json: |-
    {
      "keycloakUrl": "http://${minikube ip}:${keycloak_port}/auth",
      "realm": "eshc",
      "clientId": "eshc-frontend",
      "backendApi": "http://localhost:${backend_port}"
    }
The problem is that these are not evaluated. I want to pass in Kube service aliases, and the output of the minikube ip command, as in the example above. Could someone point me in the right direction as to how I might do this?
Thanks in advance!
Kubernetes doesn't provide this facility in the API.
You can do this at deploy time with helm or kubectl's kustomization features.
Depending on your use case, this can also be done at runtime in a container entry point before the app starts up or in a Kubernetes specific init container. Avoid the init container unless you are working with shared file systems, or with the Kubernetes API to apply these changes.
From your example it looks like everything should be available at deploy time, except maybe the minikube IP. For that you should be able to use the magic DNS name host.minikube.internal.
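As a rough sketch of the entry-point approach (the image name, the paths, and the assumption that the image ships envsubst and nginx are all hypothetical), the container can render the mounted template before the server starts:
containers:
  - name: frontend
    image: my-frontend:latest          # hypothetical image with envsubst and nginx
    env:
      - name: keycloak_port
        value: "8081"
      - name: backend_port
        value: "8080"
      # ${minikube ip} in the template would need to become a plain variable name,
      # e.g. ${minikube_ip}, for envsubst to be able to substitute it
    command: ["sh", "-c"]
    args:
      - |
        # render the template mounted from the ConfigMap, then start the server
        envsubst < /config-template/app-config.json > /usr/share/nginx/html/assets/app-config.json
        exec nginx -g 'daemon off;'
    volumeMounts:
      - name: frontend-settings
        mountPath: /config-template
volumes:
  - name: frontend-settings
    configMap:
      name: frontend-settings
With helm or kustomize you would instead substitute the values into the ConfigMap itself at deploy time, which is the simpler route when everything is known up front.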

Look up secrets from gcloud secrets manager directly as secretGenerator with kustomize

I am setting up my Kubernetes cluster using kubectl -k (kustomize). Like any other such arrangement, I depend on some secrets during deployment. The route I want to go is to use the secretGenerator feature of kustomize to fetch my secrets from files or environment variables.
However, managing said files or environment variables in a secure and portable manner has shown itself to be a challenge. Especially since I have 3 separate namespaces for test, stage and production, each requiring a different set of secrets.
So I thought surely there must be a way for me to manage the secrets in my cloud provider's official way (google cloud platform - secret manager).
So what would a secretGenerator that accesses secrets stored in the secret manager look like?
My naive guess would be something like this:
secretGenerator:
  - name: juicy-environment-config
    google-secret-resource-id: projects/133713371337/secrets/juicy-test-secret/versions/1
    type: some-google-specific-type
Is this at all possible?
What would the example look like?
Where is this documented?
If this is not possible, what are my alternatives?
I'm not aware of a plugin for that. The plugin system in Kustomize is somewhat new (added about 6 months ago) so there aren't a ton in the wild so far, and Secrets Manager is only a few weeks old. You can find docs at https://github.com/kubernetes-sigs/kustomize/tree/master/docs/plugins for writing one though. That links to a few Go plugins for secrets management so you can probably take one of those and rework it to the GCP API.
There is a Go plugin for this (I helped write it), but plugins weren't supported until more recent versions of Kustomize, so you'll need to install Kustomize directly and run it like kustomize build <path> | kubectl apply -f - rather than kubectl -k. This is a good idea anyway IMO, since there are a lot of other useful features in newer versions of Kustomize compared to the version that's built into kubectl.
As seen in the examples, after you've installed the plugin (or you can run it within Docker, see readme) you can define files like the following and commit them to version control:
my-secret.yaml
apiVersion: crd.forgecloud.com/v1
kind: EncryptedSecret
metadata:
  name: my-secrets
  namespace: default
source: GCP
gcpProjectID: my-gcp-project-id
keys:
  - creds.json
  - ca.crt
In your kustomization.yaml you would add
generators:
- my-secret.yaml
and when you run kustomize build it'll automatically retrieve your secret values from Google Secret Manager and output Kubernetes Secret objects.
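For reference, the generated output is just an ordinary Kubernetes Secret, roughly of this shape (the data values below are placeholders):
apiVersion: v1
kind: Secret
metadata:
  name: my-secrets
  namespace: default
type: Opaque
data:
  creds.json: <base64 of the value fetched from Google Secret Manager>
  ca.crt: <base64 of the value fetched from Google Secret Manager>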

Automatically use secret when pulling from private registry

Is it possible to globally (or at least per namespace), configure kubernetes to always use an image pull secret when connecting to a private repo?
There are two use cases:
when a user specifies a container in our private registry in a deployment
when a user points a Helm chart at our private repo (and so we have no control over the image pull secret tag).
I know it is possible to do this on a service-account basis, but without writing a controller to add this to every new service account created, it would get a bit messy.
Is there a way to set this globally so that if kube tries to pull from registry X it uses secret Y?
Thanks
As far as I know, usually the default serviceAccount is responsible for pulling the images.
To easily add imagePullSecrets to a serviceAccount you can use the patch command:
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "mySecret"}]}'
It's possible to use kubectl patch in a script that inserts imagePullSecrets on serviceAccounts across all namespaces.
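After the patch, the default ServiceAccount in each namespace ends up looking roughly like this (the namespace name is just an example):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: team-a          # repeat for every namespace that should use the registry
imagePullSecrets:
  - name: mySecret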
If it's too complicated to manage multiple namespaces, you can have a look at kubernetes-replicator, which syncs resources between namespaces.
Solution 2:
This section of the doc explains how you can set the private registry on a node basis:
Here are the recommended steps to configuring your nodes to use a
private registry. In this example, run these on your desktop/laptop:
Run docker login [server] for each set of credentials you want to use. This updates $HOME/.docker/config.json.
View $HOME/.docker/config.json in an editor to ensure it contains just the credentials you want to use.
Get a list of your nodes, for example:
If you want the names:
nodes=$(kubectl get nodes -o jsonpath='{range.items[*].metadata}{.name} {end}')
If you want to get the IPs:
nodes=$(kubectl get nodes -o jsonpath='{range .items[*].status.addresses[?(@.type=="ExternalIP")]}{.address} {end}')
Copy your local .docker/config.json to one of the search paths listed above, for example:
for n in $nodes; do scp ~/.docker/config.json root@$n:/var/lib/kubelet/config.json; done
Solution 3:
A (very dirty!) way I discovered to avoid needing an imagePullSecret on a deployment / serviceAccount basis is to:
1. Set imagePullPolicy: IfNotPresent (see the sketch after this list).
2. Pull the image on each node:
2.1. manually, using docker pull myrepo/image:tag;
2.2. or using a script or a tool like docker-puller to automate that process.
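A sketch of step 1 in a container spec (the image name is just the example from above):
containers:
  - name: app
    image: myrepo/image:tag
    # with IfNotPresent the kubelet uses the pre-pulled image and only contacts
    # the registry if the image is missing on the node
    imagePullPolicy: IfNotPresent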
Well, I think I don't need to explain how ugly that is.
PS: If it helps, I found an issue on kubernetes/kops about the feature of creating a global configuration for private registry.
Two simple questions: where are you running your k8s cluster, and where is your registry located?
Here are a few approaches to your issue:
https://kubernetes.io/docs/concepts/containers/images/#using-a-private-registry

Kubernetes secrets encryption

I have pods deployed to a Kubernetes cluster (hosted with Google Cloud Kubernetes). Those pods use some secrets, which are plain-text files. I added the secrets to the YAML file and deployed the deployment. The application is working fine.
Now, let's say that someone compromises my code and somehow gets access to all my files on the container. In that case, the attacker can find the secrets directory and print all the secrets written there. It's all plain text.
Question:
Why is it more secure to use Kubernetes Secrets instead of just plain text?
There are different levels of security and, as @Vishal Biyani says in the comments, it sounds like you're looking for a level of security you'd get from a project like Sealed Secrets.
As you say, out of the box Secrets don't give you encryption at the container level. But they do give you controls on access through kubectl and the Kubernetes APIs. For example, you could use role-based access control so that specific users can see that a secret exists without seeing (through the k8s APIs) what its value is.
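As a rough sketch (all names are hypothetical), a namespaced Role like the one below lets a user inspect pods (which reference the secret by name) and configmaps, but not read Secret values through the API, because no rule grants access to secrets:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-reader
  namespace: my-app
rules:
  # no rule covers "secrets", so subjects bound to this Role cannot read them
  - apiGroups: [""]
    resources: ["pods", "configmaps"]
    verbs: ["get", "list", "watch"]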
You can also create the secrets with a command instead of having them in the YAML file, for example:
kubectl create secret generic cloudsql-user-credentials --from-literal=username=[your user] --from-literal=password=[your pass]
You can then read it back with:
kubectl get secret cloudsql-user-credentials -o yaml
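The output looks roughly like this; note that the values are only base64-encoded, not encrypted:
apiVersion: v1
kind: Secret
metadata:
  name: cloudsql-user-credentials
type: Opaque
data:
  username: bXl1c2Vy          # base64 of a hypothetical "myuser"
  password: bXlwYXNz          # base64 of a hypothetical "mypass"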
I also use the secret at two levels; the first is the Kubernetes one:
env:
  - name: SECRETS_USER
    valueFrom:
      secretKeyRef:
        name: cloudsql-user-credentials
        key: username
SECRETS_USER is an env var whose value I then use with Jasypt:
spring:
  datasource:
    password: ENC(${SECRETS_USER})
On application startup you pass the parameter -Djasypt.encryptor.password=encryptKeyCode, and you can encrypt a value with the Jasypt CLI, e.g.:
java -cp ~/.m2/repository/org/jasypt/jasypt/1.9.2/jasypt-1.9.2.jar org.jasypt.intf.cli.JasyptPBEStringEncryptionCLI input="encryptKeyCode" password=[pass user] algorithm=PBEWithMD5AndDES

Kubernetes - different settings per environment

We have an app that runs on GKE Kubernetes and which expects an auth URL (to which the user will be redirected via the browser) to be passed as an environment variable.
We are using different namespaces per environment.
So our current pod config looks something like this:
env:
  - name: ENV
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
  - name: AUTH_URL
    value: https://auth.$(ENV).example.org
And it all works amazingly: we can have as many dynamic environments as we want, we just do kubectl apply -f config.yaml and it works flawlessly, without changing a single config file and without any third-party scripts.
Now for production we kind of want to use a different domain, so the general pattern https://auth.$(ENV).example.org does not work anymore.
What options do we have?
Since configs are in git repo, create a separate branch for prod environment
Have a default ConfigMap and a specific one for prod environment, and run it via some script (if exists prod-config.yaml then use that, else use config.yaml) - but with this approach we cannot use kubectl directly anymore
Move this config to application level, and have separate config file for prod env - but this kind of goes against 12factor app?
Other...?
This seems like an ideal opportunity to use helm!
It's really easy to get started: simply install Tiller into your cluster.
Helm gives you the ability to create "charts" (which are like packages) which can be installed into your cluster. You can template these really easily. As an example, you might have your config.yaml look like this:
env:
  - name: AUTH_URL
    value: {{ .Values.auth.url }}
Then, within the helm chart you have a values.yaml which contains defaults for the url, for example:
auth:
  url: https://auth.namespace.example.org
You can use the --values option with helm to specify per-environment values.yaml files, or even use the --set flag on helm to override them when using helm install.
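For example, a hypothetical prod-values.yaml could override just the default URL:
auth:
  url: https://auth.example.org
and you would then install the same chart for production with the --values prod-values.yaml flag on helm install, keeping the templates identical across environments.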
Take a look at the documentation here for information about how values and templating work in helm. It seems perfect for your use case.
jaxxstorms' answer is helpful; I just want to add what that means for the options you proposed:
Since configs are in git repo, create a separate branch for prod environment.
I would not recommend separate branches in GIT since the purpose of branches is to allow for concurrent editing of the same data, but what you have is different data (different configurations for the cluster).
Have a default ConfigMap and a specific one for prod environment, and run it via some script (if exists prod-config.yaml then use that, else use config.yaml) - but with this approach we cannot use kubectl directly anymore
Using Helm will solve this more elegantly. Instead of a script you use helm to generate the different files for different environments. And you can use kubectl (using the final files, which I would also check into GIT btw.).
Move this config to application level, and have separate config file for prod env - but this kind of goes against 12factor app?
This is a matter of opinion, but I would recommend in general splitting up the deployments by application and technology. For example, when I deploy a cluster that runs 3 different applications A, B and C, and each application requires Nginx, CockroachDB and Go app servers, then I'll have 9 configuration files, which allows me to separately deploy or update each of the technologies in the app context. This is important for allowing separate deployment actions in a CI server such as Jenkins and follows the general separation of concerns.
Other...?
See jaxxstorms' answer about Helm.