We are starting a project from scratch that will be managed on Google Cloud. I'd like to use Google Kubernetes Engine. Our application will have multiple environments (Dev, Staging, Production), and each environment is set up as a separate project on Google Cloud.
What is unclear to me is how to parameterize our service/manifest files. For instance, in our deployment file below, I'd like to pull anything in {} from a list of variables per environment. In a previous post someone mentioned using Helm, but I cannot find much documentation supporting the use of Helm this way.
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: {max-surge}
      maxUnavailable: 0
  selector:
    matchLabels:
      run: webapp
  template:
    metadata:
      labels:
        run: webapp
    spec:
      containers:
      - name: webapp
        image: {gcr-image-url}
        imagePullPolicy: Always
        ports:
        - containerPort: 3000
        env:
        - name: DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: app-secrets
              key: DATABASE_URL
        - name: SECRET_KEY_BASE
          valueFrom:
            secretKeyRef:
              name: app-secrets
              key: SECRET_KEY_BASE
What tools are available to manage my GKE environments? We'll use Terraform for our infrastructure management, but, again, is there a larger wrapper I can use to set parameters per environment?
Helm would work for this, as would Kustomize. In the case of Helm, you'll have separate values files (e.g. dev-values.yaml) with e.g.:
maxSurge: 2
gcrImageUrl: project-23456/test
And then reference them in the YAML via:
{{ .Values.maxSurge }}
Then, when installing, you would use:
helm upgrade --install my-app . --values=dev-values.yaml
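For context, here is a minimal sketch of how the templated manifest could look inside the chart's templates/ directory, mirroring the deployment from the question (the value names maxSurge and gcrImageUrl are just the ones used above, not anything Helm requires):
# templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: {{ .Values.maxSurge }}
      maxUnavailable: 0
  selector:
    matchLabels:
      run: webapp
  template:
    metadata:
      labels:
        run: webapp
    spec:
      containers:
      - name: webapp
        image: {{ .Values.gcrImageUrl }}
        ports:
        - containerPort: 3000
Each environment (and thus each Google Cloud project) then gets its own values file, e.g. dev-values.yaml, staging-values.yaml, prod-values.yaml, passed with --values at install time.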
https://get-ytt.io could be a solution.
In particular, if you look at this GitHub discussion you will notice that you can configure your environment and then pass in values in the form of flags or environment variables.
In the case of your example, given the following config.yml:
#@ load("@ytt:data", "data")
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: webapp
  annotations:
    environment: #@ data.values.env
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: #@ data.values.max_surge
      maxUnavailable: 0
  selector:
    matchLabels:
      run: webapp
  template:
    metadata:
      labels:
        run: webapp
    spec:
      containers:
      - name: webapp
        image: #@ data.values.gcr_image_url
        imagePullPolicy: Always
        ports:
        - containerPort: 3000
        env:
        - name: DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: app-secrets
              key: DATABASE_URL
        - name: SECRET_KEY_BASE
          valueFrom:
            secretKeyRef:
              name: app-secrets
              key: SECRET_KEY_BASE
and values.yml:
#@data/values
---
env: staging
max_surge: 1
gcr_image_url: some/other-image:latest
Assuming that everything is in the same directory, you can template config.yml like:
ytt -f .
or customize the values on the fly from env vars and command line arguments:
export CUSTOM_env=production
ytt -f . \
--data-value max_surge=10 \
--data-value gcr_image_url=some/image:1.0 \
--data-values-env CUSTOM
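ytt only renders the final YAML; to actually deploy the result you can pipe its output straight into kubectl (assuming kubectl is already pointed at the right GKE cluster and project):
# render with the per-environment values and apply in one go
ytt -f . --data-values-env CUSTOM | kubectl apply -f -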
I am unable to deploy this file using the kubectl apply -f command.
[Deployment YAML posted as an image]
I have provided the YAML file required for your deployment below. It is important that all lines are indented correctly. Hyphens (-) indicate a list item, so it is not necessary to use them on every line.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: abc-deployment
  namespace: abc
spec:
  replicas: 3
  selector:
    matchLabels:
      app: abc-deployment
  template:
    metadata:
      labels:
        app: abc-deployment
    spec:
      containers:
      - name: abc-deployment
        image: anyimage
        ports:
        - containerPort: 80
        env:
        - name: APP_VERSION
          value: v1
        - name: ENVIRONMENT
          value: "123"
        - name: DATA
          valueFrom:
            configMapKeyRef:
              name: abc-configmap
              key: data
        imagePullPolicy: IfNotPresent
      restartPolicy: Always
      imagePullSecrets:
      - name: abc-secret
As a side note, the way envFrom was used is incorrect. To pull a single key from a ConfigMap into an environment variable, use valueFrom with configMapKeyRef under the container's env section, as shown with the DATA variable in the example above.
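For completeness, if you did want envFrom (to import every key of a ConfigMap as an environment variable at once), it sits at the same level as env rather than inside it. A minimal sketch, reusing the abc-configmap name from the manifest above:
containers:
- name: abc-deployment
  image: anyimage
  envFrom:
  - configMapRef:
      name: abc-configmap   # each key in this ConfigMap becomes an env variable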
If you are using Visual Studio Code, there is an official Kubernetes extension from Microsoft that provides Intellisense (suggestions) and alerts you to errors.
Hope this helps.
I am new to DevOps. I wrote a deployment.yaml file for a Kubernetes cluster I just created on AWS. Creating the deployment keeps bringing up errors that I can't decode. This is just a test deployment in preparation for the migration of my company's web apps to Kubernetes.
I tried editing the content of the deployment to look like conventional examples I've found. I can't even get this simple example to work. You may find the deployment.yaml content below.
apiVersion: v1
kind: Service
metadata:
  name: ghost
  labels:
    app: ghost
spec:
  ports:
  - port: 80
  selector:
    app: ghost
    tier: frontend
  type: LoadBalancer
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: ghost
  labels:
    app: ghost
spec:
  selector:
    matchLabels:
      app: ghost
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: ghost
        tier: frontend
    spec:
      containers:
      - image: ghost:4-alpine
        name: ghost
        env:
        - name: database_client
          valueFrom:
            secretKeyRef:
              name: eks-keys
              key: client
        - name: database_connection_host
          valueFrom:
            secretKeyRef:
              name: eks-keys
              key: host
        - name: database_connection_user
          valueFrom:
            secretKeyRef:tha
        - name: database_connection_password
          valueFrom:
            secretKeyRef:
              name: eks-keys
              key: ghostdcp
        - name: database_connection_database
          valueFrom:
            secretKeyRef:
              name: eks-keys
              key: ghostdcd
        ports:
        - containerPort: 2368
          name: ghost
        volumeMounts:
        - name: ghost-persistent-storage
          mountPath: /var/lib/ghost
      volumes:
      - name: ghost-persistent-storage
        persistentVolumeClaim:
          claimName: efs-ghost
I ran this command from the folder containing the file:
kubectl create -f deployment-ghost.yaml --validate=false
service/ghost created
Error from server (BadRequest): error when creating "deployment-ghost.yaml": Deployment in version "v1" cannot be handled as a Deployment: v1.Deployment.Spec: v1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: v1.Container.Env: []v1.EnvVar: v1.EnvVar.ValueFrom: readObjectStart: expect { or n, but found ", error found in #10 byte of ...|lueFrom":"secretKeyR|..., bigger context ...|},{"name":"database_connection_user","valueFrom":"secretKeyRef:tha"},{"name":"database_connection_pa|...
I couldn't find any information on this from my search, and I can't get the deployment created. Can anyone who understands this error help me out?
{"name":"database_connection_user","valueFrom":"secretKeyRef:tha"},
Your spec has an error:
...
- name: database_connection_user  # <-- The error message points to this env variable
  valueFrom:
    secretKeyRef:
      name: <secret name, eg. eks-keys>
      key: <key in the secret>
...
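Also double-check that the Secret and the keys you reference actually exist in the cluster. A sketch of creating it imperatively (the literal values in angle brackets are placeholders for your own):
kubectl create secret generic eks-keys \
  --from-literal=client=<db client> \
  --from-literal=host=<db host> \
  --from-literal=ghostdcp=<db password> \
  --from-literal=ghostdcd=<db name>
# plus a key for the db user, matching whatever key name you put in the
# database_connection_user env variable above
You can then verify the keys with kubectl describe secret eks-keys.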
I have a config YAML file for a Kubernetes deployment that looks like this:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: <some_app>
  name: <some_app>
  namespace: dataengineering
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: <some_app>
    spec:
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      containers:
      - image: 127579856528.dkr.ecr.us-west-2.amazonaws.com/dataengineering/<some_app>:latest
        imagePullPolicy: Always
        name: <some_app>
        env:
        - name: ES_HOST
          value: "vpc-some-name-dev-wrfkk5v7kidaro67ozjrv4wdeq.us-west-2.es.amazonaws.com"
        - name: ES_PORT
          value: "443"
        - name: DATALOADER_QUEUE
          valueFrom:
            configMapKeyRef:
              name: <some_name>
              key: DATALOADER_QUEUE
        - name: AWS_DEFAULT_REGION
          value: "us-west-2"
        - name: AWS_ACCESS_KEY_ID
          valueFrom:
            secretKeyRef:
              name: <some_name>
              key: AWS_ACCESS_KEY_ID
        - name: AWS_SECRET_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: <some_name>
              key: AWS_SECRET_ACCESS_KEY
...
Currently, this file is in dev/deployment.yaml, but I also want a prod/deployment.yaml. Instead of copying this whole file over, is there a better way to DRY it up so it can be used for both the dev and prod clusters? The parts that differ are some of the environment variables (I use a different DATALOADER_QUEUE for prod and dev) and the AWS keys. What can be done?
I looked into some options like a ConfigMap. How does one do this? What's a mounted volume? I'm reading https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#add-configmap-data-to-a-volume but I'm not sure what it is. What is a volume? How do I access the data stored in this "volume"?
Can the image be switched from prod to dev? I know that seems odd...
Something like this would help with the env vars:
envFrom:
- configMapRef:
    name: myapp-config
- secretRef:
    name: myapp-secrets
You can then use different namespaces for dev vs. prod so the references don't have to vary. For handling labels, look at Kustomize overlays and setting labels at the overlay level.
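A rough sketch of what that Kustomize layout could look like, reusing the myapp-config name from the snippet above (the directory names and the queue value are illustrative); each overlay sets its own namespace, labels, and generated ConfigMap on top of a shared base containing the deployment:
base/
  kustomization.yaml      # lists deployment.yaml as a resource
  deployment.yaml         # the manifest above, using envFrom instead of per-key refs
overlays/
  dev/
    kustomization.yaml
  prod/
    kustomization.yaml

# overlays/dev/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: dev
resources:
- ../../base
commonLabels:
  environment: dev
configMapGenerator:
- name: myapp-config
  literals:
  - DATALOADER_QUEUE=dev-queue
Then kubectl apply -k overlays/dev (or kustomize build overlays/dev | kubectl apply -f -) renders and applies the dev variant, and prod gets its own overlay with prod values.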
I'm on Kubernetes 1.3.5, and we are using Deployments with rolling updates to update the pods in our cluster. However, on a rolling update, a newly added environment variable never gets added to the pods. Is that by design? What are the ways to get around it?
Following are the sample deployment YAML files. Basically, the deployment was deployed with the first version, then we updated the YAML with the newly added env variable NEW_KEY and ran through the rolling update. But the new env variable does not show up in the pods.
First version YAML:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: APP_NAME-deployment
  labels:
    name: APP_NAME
    environment: DEV
spec:
  revisionHistoryLimit: 2
  strategy:
    type: RollingUpdate
  replicas: 2
  template:
    metadata:
      labels:
        name: APP_NAME
        environment: DEV
    spec:
      containers:
      - name: APP_NAME
        image: repo.app_name:latest
        env:
        - name: NODE_ENV
          value: 'development'
        - name: APP_KEY
          value: '123'
Updated YAML:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: APP_NAME-deployment
  labels:
    name: APP_NAME
    environment: DEV
spec:
  revisionHistoryLimit: 2
  strategy:
    type: RollingUpdate
  replicas: 2
  template:
    metadata:
      labels:
        name: APP_NAME
        environment: DEV
    spec:
      containers:
      - name: APP_NAME
        image: repo.app_name:latest
        env:
        - name: NODE_ENV
          value: 'development'
        - name: APP_KEY
          value: '123'
        - name: NEW_KEY
          value: 'new'
You can source the env variable from either a ConfigMap or a Secret. For a ConfigMap you would do:
env:
- name: SPECIAL_LEVEL_KEY
  valueFrom:
    configMapKeyRef:
      name: node-env
      key: node.dev
Or with a secretKeyRef:
env:
- name: SPECIAL_LEVEL_KEY
  valueFrom:
    secretKeyRef:
      name: node-env
      key: node.dev
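Either way, the referenced ConfigMap or Secret has to exist in the cluster before the pods start. A quick sketch of creating them imperatively (the value 'development' is just an example):
kubectl create configmap node-env --from-literal=node.dev=development
kubectl create secret generic node-env --from-literal=node.dev=development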
Below is how I am using Kubernetes on Google Cloud.
I have one Node.js application, let's say Book-portal.
The Node app uses environment variables for configuration.
Step 1: I created a Dockerfile and pushed
gcr.io/<project-id>/book-portal:v1
Step 2: Deployed with the following command:
kubectl run book-portal --image=gcr.io/<project-id>/book-portal:v1 --port=5555 --env ENV_VAR_KEY1=value1 --env ENV_VAR_KEY2=value2 --env ENV_VAR_KEY3=value3
Step 3:
kubectl expose deployment book-portal --type="LoadBalancer"
Step 4: Get the public IP with
kubectl get services book-portal
Now assume I added new features and new configuration in the next release.
So, to roll out the new version v2:
Step 1: I created a Dockerfile and pushed
gcr.io/<project-id>/book-portal:v2
Step 2: Edit the deployment
kubectl edit deployment book-portal
---------------yaml---------------
...
spec:
  replicas: 1
  selector:
    matchLabels:
      run: book-portal
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: book-portal
    spec:
      containers:
      - env:
        - name: ENV_VAR_KEY1
          value: value1
        - name: ENV_VAR_KEY2
          value: value2
        - name: ENV_VAR_KEY3
          value: value3
        image: gcr.io/<project-id>/book-portal:v1
        imagePullPolicy: IfNotPresent
        name: book-portal
...
----------------------------------
I am successfully able to change
image: gcr.io/<project-id>/book-portal:v1
to
image: gcr.io/<project-id>/book-portal:v2
but I cannot add/change environment variables:
- env:
  - name: ENV_VAR_KEY1
    value: value1
  - name: ENV_VAR_KEY2
    value: value2
  - name: ENV_VAR_KEY3
    value: value3
  - name: ENV_VAR_KEY4
    value: value4
Can anyone guide me on the best practices for passing configuration to a Node app on Kubernetes? How should I handle environment variable changes during rolling updates?
I think your best bet is to use ConfigMaps in Kubernetes and then change your pod template to get the env variable values from the ConfigMap; see Consuming ConfigMap in pods.
Edit: I apologize, I put the wrong link here. I have updated it, but for the TL;DR you can do the following.
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default
data:
  special.how: very
  special.type: charm
And then the pod usage can look like this:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
  - name: test-container
    image: gcr.io/google_containers/busybox
    command: [ "/bin/sh", "-c", "env" ]
    env:
    - name: SPECIAL_LEVEL_KEY
      valueFrom:
        configMapKeyRef:
          name: special-config
          key: special.how
    - name: SPECIAL_TYPE_KEY
      valueFrom:
        configMapKeyRef:
          name: special-config
          key: special.type
  restartPolicy: Never
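One caveat worth adding: environment variables sourced from a ConfigMap are only read when the container starts, so editing the ConfigMap alone does not change env vars in already-running pods; you still need to trigger a new rollout, for example by bumping a pod-template annotation (the annotation name below is arbitrary). A sketch:
# create the ConfigMap used above
kubectl create configmap special-config \
  --from-literal=special.how=very \
  --from-literal=special.type=charm

# after changing the ConfigMap, force a new rollout of the deployment, e.g.:
kubectl patch deployment book-portal -p \
  '{"spec":{"template":{"metadata":{"annotations":{"configmap-rev":"2"}}}}}'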