Can I consume a ConfigMap value in arbitrary places in YAML? - kubernetes

In Azure Kubernetes Service, my goal is to configure both staging and production k8s clusters with a common YAML file, with critical values and environment variables parameterized from a ConfigMap.
I can set container environment variables easily using valueFrom, but I would like to use the ConfigMap values in other areas of the YAML file, for example:
staging-config-map.yaml:
kind: ConfigMap
apiVersion: v1
metadata:
  name: base-config
data:
  ENVIRONMENT_NAME: staging
...
prod-config-map.yaml:
kind: ConfigMap
apiVersion: v1
metadata:
  name: base-config
data:
  ENVIRONMENT_NAME: prod
...
common-cluster-config.yaml:
apiVersion: v1
kind: Service
metadata:
  name: my-amazing-microservice
  annotations:
    service.beta.kubernetes.io/azure-dns-label-name: "my-amazing-microservice-$ENVIRONMENT_NAME"
spec:
  type: LoadBalancer
  ports:
    - targetPort: 5000
      name: port5000
      port: 5000
      protocol: TCP
  selector:
    app: my-amazing-microservice
---
...
Note the reference to $ENVIRONMENT_NAME which is where I want to insert something from the ConfigMap.
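For reference, the valueFrom pattern that already works for me looks roughly like this (only the env block of a container spec is shown):

env:
  - name: ENVIRONMENT_NAME
    valueFrom:
      configMapKeyRef:
        name: base-config
        key: ENVIRONMENT_NAME

What I am after is that same kind of substitution outside the container env section.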
Can I do this, so I don't have to maintain duplicated manifests for staging and prod?

No, you can't with vanilla Kubernetes manifests. ConfigMaps are just resources that get mounted into the container when it starts, either as environment variables or as a file. You can't access a ConfigMap at deployment time.
I suggest looking into Helm, which can do this with templating.
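As a minimal sketch of the Helm approach (the chart layout and the environmentName value name are illustrative, not from the original question), values.yaml would hold the per-environment value:

environmentName: staging

and templates/service.yaml would substitute it when the chart is rendered:

apiVersion: v1
kind: Service
metadata:
  name: my-amazing-microservice
  annotations:
    # Helm renders this template before anything reaches the cluster
    service.beta.kubernetes.io/azure-dns-label-name: "my-amazing-microservice-{{ .Values.environmentName }}"
spec:
  type: LoadBalancer
  ports:
    - targetPort: 5000
      name: port5000
      port: 5000
      protocol: TCP
  selector:
    app: my-amazing-microservice

Each environment then becomes a matter of helm install my-amazing-microservice ./chart --set environmentName=staging (or prod), with a single shared template.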

Related

Use Kustomize to create additional namespaces with a suffix

I'm currently battling an issue with kustomize and not having much luck.
I have my config set up and am using kustomize (v4.5.7) to have separate base, variant, and environment configuration. I'm trying to use the setup to deploy a copy of my dev environment onto the same cluster using different namespaces and a suffix.
The idea is that everything would be deployed using a suffix for the name (I got this working, but it only handles the names and not the namespaces) and dropped into separate namespaces with a suffix.
I'm currently defining all the namespaces with the following config:
apiVersion: v1
kind: Namespace
metadata:
  name: mynamespace
Now I want to be able to deploy copies of the namespace named mynamespace-mysuffix.
I've managed to implement a suffix for the names of the objects, alongside a PrefixSuffixTransformer to update the namespaces in the created objects to mynamespace-mysuffix.
This unfortunately doesn't update the Namespace resources themselves and leaves them intact. In short, it tries to deploy the objects into namespaces that do not exist.
This is the working PrefixSuffixTransformer amending the namespace set in the various objects:
apiVersion: builtin
kind: PrefixSuffixTransformer
metadata:
  name: customSuffixer
suffix: "-mysuffix"
fieldSpecs:
  - path: metadata/name
  - path: metadata/namespace
I'm trying to target the Namespace objects, unsuccessfully, with the following additional PrefixSuffixTransformer:
apiVersion: builtin
kind: PrefixSuffixTransformer
metadata:
  name: nsSuffixer
suffix: "-mysuffix"
fieldSpecs:
  - kind: Namespace
    path: metadata/name
I was hoping this last part would work, but no success. Does anyone have any suggestions for how I can get the additional namespaces created with a suffix?
If I understand your question correctly, the solution is just to add a namespace: declaration to the kustomization.yaml file in your variants.
For example, if I have a base directory that contains:
namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: example
spec: {}
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: example
spec:
  ports:
    - name: http
      port: 80
      targetPort: http
kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: example
resources:
  - namespace.yaml
  - service.yaml
And I create a variant in overlays/example, with this kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
nameSuffix: -mysuffix
Then running kustomize build overlays/example results in:
apiVersion: v1
kind: Namespace
metadata:
  name: example
spec: {}
---
apiVersion: v1
kind: Service
metadata:
  name: example-mysuffix
  namespace: example
spec:
  ports:
    - name: http
      port: 80
      targetPort: http
As you have described in your question, the Namespace resource wasn't renamed by the nameSuffix configuration. But if I simply add a namespace: declaration to the kustomization.yaml, like this:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: example-mysuffix
resources:
  - ../../base
nameSuffix: -mysuffix
Then I get the desired output:
apiVersion: v1
kind: Namespace
metadata:
  name: example-mysuffix
spec: {}
---
apiVersion: v1
kind: Service
metadata:
  name: example-mysuffix
  namespace: example-mysuffix
spec:
  ports:
    - name: http
      port: 80
      targetPort: http
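The overlay can then be applied in the usual way, for example:

kustomize build overlays/example | kubectl apply -f -

or, using the kustomize support built into kubectl:

kubectl apply -k overlays/example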

Confused on connecting a CronJob service to a service mesh when no ports are exposed

I am fairly new to Kubernetes and currently working on a project where I am trying to add a CronJob service to our Kubernetes service mesh. My current Deployment and CronJob do not expose any ports, so I was curious whether there is a way to avoid adding any ports and still add it to the service mesh.
apiVersion: v1
kind: Service
metadata:
  name: cronjob-service
spec:
  selector:
    app: cronjob-example
  ports:
    - protocol: TCP
      port: ????
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: cronjob-example
.....
This is an example of my YAML template. Also, do I need to add a service account to the YAML file as well?

Define unique IDs for wso2.carbon: in deployment.yaml || Kubernetes deployment

Requirement:
I need to assign a different ID to each pod via the id parameter in the wso2.carbon section of deployment.yaml,
i.e. wso2-am-analytics_1 and wso2-am-analytics_2.
Setup:
This is a Kubernetes deployment, deployment.yaml:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: wso2apim-mnt-worker-conf
  namespace: wso2
data:
  deployment.yaml: |
    wso2.carbon:
      type: wso2-apim-analytics
      id: wso2-am-analytics
      name: WSO2 API Manager Analytics Server
      ports:
        # port offset
        offset: 0
    .
    .
    .
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wso2apim-mnt-worker
  namespace: wso2
spec:
  replicas: 2
  strategy:
    type: Recreate
  selector:
    matchLabels:
      deployment: wso2apim-mnt-worker
.
.
.
You don't necessarily have to use wso2-am-analytics_1 and wso2-am-analytics_2 as the IDs; each just has to be a unique value, so you can use something like the pod IP. If you are strict about the ID value, you can create the ConfigMap from a config file and have some logic to populate the config file's ID field appropriately; with Helm this would be pretty easy (a sketch follows below).
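For illustration only, a minimal Helm sketch of that idea (the template file, the workerCount value, and the naming are hypothetical, not from the original setup): a loop that stamps out one ConfigMap per worker, each with its own ID, which each workload would then mount:

# templates/configmap.yaml (hypothetical) - renders workerCount ConfigMaps, one per index
{{- range $i := until (int .Values.workerCount) }}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: wso2apim-mnt-worker-conf-{{ add $i 1 }}
  namespace: wso2
data:
  deployment.yaml: |
    wso2.carbon:
      type: wso2-apim-analytics
      id: wso2-am-analytics_{{ add $i 1 }}
      name: WSO2 API Manager Analytics Server
{{- end }}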
If you are OK with using some other unique value, you can do the following. WSO2 configs can read values from environment variables, so you can do something like this:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: wso2apim-mnt-worker-conf
  namespace: wso2
data:
  deployment.yaml: |
    wso2.carbon:
      type: wso2-apim-analytics
      id: ${NODE_ID}
      name: WSO2 API Manager Analytics Server
      ports:
        # port offset
        offset: 0

# Then in the Deployment, pass the environment variable:
env:
  - name: NODE_ID
    valueFrom:
      fieldRef:
        fieldPath: status.podIP

Kube / create deployment with config map

I'm new to Kubernetes, and I'm trying to create a deployment with a ConfigMap file. I have the following:
app-mydeploy.yaml
--------
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-mydeploy
  labels:
    app: app-mydeploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: mydeploy
  template:
    metadata:
      labels:
        app: mydeploy
    spec:
      containers:
        - name: mydeploy-1
          image: mydeploy:tag-latest
          envFrom:
            - configMapRef:
                name: map-mydeploy
map-mydeploy
-----
apiVersion: v1
kind: ConfigMap
metadata:
  name: map-mydeploy
  namespace: default
data:
  my_var: 10.240.12.1
I created the deployment and the ConfigMap with the following commands:
kubectl create -f app-mydeploy.yaml
kubectl create configmap map-mydeploy --from-file=map-mydeploy
When I run kubectl describe deployments, I get, among other things:
Environment Variables from:
map-mydeploy ConfigMap Optional: false
Running kubectl describe configmaps map-mydeploy also gives me the right results.
The issue is that my container is in CrashLoopBackOff. When I look at the logs, it says: time="2019-02-05T14:47:53Z" level=fatal msg="Required environment variable my_var is not set."
This log is from my container, saying that my_var is not defined in the environment variables.
What am I doing wrong?
I think you are missing your key in the command:
kubectl create configmap map-mydeploy --from-file=map-mydeploy
Try this instead: kubectl create configmap map-mydeploy --from-file=my_var=map-mydeploy
Also, if you are just using one value, I highly recommend creating your ConfigMap from a literal, kubectl create configmap map-mydeploy --from-literal=my_var=10.240.12.1, and then referencing the ConfigMap in your deployment as you are currently doing.
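To see why the original command fails: --from-file without an explicit key uses the file name as the key and the whole file contents as the value, so kubectl create configmap map-mydeploy --from-file=map-mydeploy produces roughly this (abridged):

apiVersion: v1
kind: ConfigMap
metadata:
  name: map-mydeploy
data:
  map-mydeploy: |
    apiVersion: v1
    kind: ConfigMap
    ...

With envFrom, the only key available is map-mydeploy, not my_var, which is why the container reports my_var as unset. The --from-literal form instead yields data with my_var: 10.240.12.1 directly.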

Reusable Pod Templates

Is it possible in Kubernetes to create a pod template and reuse it later when specifying a pod within a deployment? For example:
Say I have a pod template...
apiVersion: v1
kind: PodTemplate
metadata:
  name: my-pod-template
template:
  metadata:
    labels:
      app: "my-app"
  spec:
    containers:
      - name: my-app
        image: jwaldrip/my-app:latest
Could I then use it in a deployment as so?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-deployment
spec:
  template:
    metadata:
      name: my-pod-template
This would be super helpful when deploying something like Jobs, where I want to own the creation of a job with the given template.
There is not.
Specifically in the case of Pods, there are PodPresets:
https://kubernetes.io/docs/tasks/inject-data-application/podpreset/
But those don't apply to other objects.
One way to enforce the shape or attributes of arbitrary objects is to establish tooling that correctly creates those objects, then create credentials for that tooling, and use RBAC to only allow those credentials to create those objects.
https://kubernetes.io/docs/admin/authorization/rbac/
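As a rough sketch of that first approach (all names, and the choice of Jobs as the target object, are illustrative), the RBAC side might grant a dedicated tooling service account, and only it, the ability to create Jobs:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: job-creator
  namespace: my-namespace
rules:
  # Only allow creating Jobs; the tooling enforces the template shape
  - apiGroups: ["batch"]
    resources: ["jobs"]
    verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: job-creator-binding
  namespace: my-namespace
subjects:
  - kind: ServiceAccount
    name: job-tooling
    namespace: my-namespace
roleRef:
  kind: Role
  name: job-creator
  apiGroup: rbac.authorization.k8s.io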
Another way would be to create an Admission Controller to watch the attempted creation of the desired objects, and verify/reject those that don't meet the criteria:
https://kubernetes.io/docs/admin/admission-controllers/