K8s Create Deployment with EnvFrom

I am trying to fire up an influxdb instance on my cluster.
I am following a few different guides and am trying to get it to expose a secret as environment variables using the envFrom operator. Unfortunately I always get Environment: <none> after doing my deployment, and echoing the environment variables I expect yields blank values as well.
I am running this command to deploy (the script below is in influxdb.yaml): kubectl create deployment influxdb --image=influxdb
Here is my deployment script:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  generation: 1
  labels:
    app: influxdb
    project: pihole
  name: influxdb
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: influxdb
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: influxdb
    spec:
      containers:
      - name: influxdb
        envFrom:
        - secretRef:
            name: influxdb-creds
        image: docker.io/influxdb:1.7.6
        imagePullPolicy: IfNotPresent
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/lib/influxdb
          name: var-lib-influxdb
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: var-lib-influxdb
        persistentVolumeClaim:
          claimName: influxdb
status: {}
The output of kubectl describe secret influxdb-creds is this:
Name: influxdb-creds
Namespace: default
Labels: <none>
Annotations: <none>
Type: Opaque
Data
====
INFLUXDB_USERNAME: 4 bytes
INFLUXDB_DATABASE: 6 bytes
INFLUXDB_HOST: 8 bytes
INFLUXDB_PASSWORD: 11 bytes

To test your deployment, please first create the secret and then create the deployment:
1. Secrets:
kubectl create secret generic influxdb-creds --from-literal=INFLUXDB_USERNAME='test_user' --from-literal=INFLUXDB_DATABASE='test_password'
2. Deployment:
kubectl apply -f <path_to_your_yaml_file>
In order to verify, please run
kubectl describe secret influxdb-creds
kubectl exec <your_new_deployed_pod> -- env
kubectl describe pod <your_new_deployed_pod>
Take a look at:
Environment Variables from:
influxdb-creds Secret Optional: false
Hope this helps. Please share your findings.

The answer to this is that I was creating the deployment incorrectly. I was using the command kubectl create deployment influxdb --image=influxdb, which was creating a blank deployment. Instead I should have been creating it with kubectl create -f influxdb.yaml, where influxdb.yaml was the file containing the deployment definition in the original question.
I was making the false assumption that the create deployment command read the YAML file of the same name, but it does not.
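For anyone hitting the same thing, here is a minimal sketch of the corrected workflow (the pod name placeholder is illustrative):
# Create the deployment from the manifest file (kubectl apply -f also works and is repeatable)
kubectl create -f influxdb.yaml
# Verify the secret-backed environment variables are now present
kubectl describe deployment influxdb | grep -A 1 "Environment Variables from"
kubectl exec <influxdb-pod-name> -- env | grep INFLUXDB_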

Related

Horizontal scaling of a deployment on GCP Kubernetes

I have a cluster where I have deployed multiple applications, and I want to horizontally scale one of the deployments.
Following is my YAML for the deployment; how can I achieve that?
Note: I have tried changing the replicas to more than 1, applying the new config, and restarting the deployment, but I want to know whether I need to add any policies, specs, etc. to achieve proper horizontal scaling.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: preview
  namespace: default
  resourceVersion: {}
  uid: {}
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: preview
  strategy:
    type: Recreate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: preview
    spec:
      containers:
      - image: gcr.io/{project name}/{image name}
        imagePullPolicy: Always
        name: preview
        resources:
          requests:
            cpu: 10m
            memory: 450Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /app/data
          name: data
        - mountPath: /app/conf
          name: config
          readOnly: true
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: preview
      - name: config
        secret:
          defaultMode: 420
          secretName: preview-secrets
You can use the HPA (Horizontal Pod Autoscaler). Here is what a typical YAML configuration looks like.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: hpa_name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: deployment_name_to_autoscale
  minReplicas: 1
  maxReplicas: 3
  targetCPUUtilizationPercentage: 80
You can monitor the scaling using kubectl get hpa
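Equivalently, you can create the same autoscaler imperatively; a quick sketch using the deployment name preview from the question:
# Target 80% CPU, scaling between 1 and 3 replicas
kubectl autoscale deployment preview --cpu-percent=80 --min=1 --max=3
# Watch current/target utilization and replica count
kubectl get hpa preview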
In GKE, you can achieve this with the Horizontal Pod Autoscaler (HPA). The autoscaling event can be configured to be triggered by system metrics (e.g. CPU or memory) or custom metrics (e.g. Pub/Sub queued message count). You can also set the minimum and maximum number of pods to scale to.
Here is a link from GCP for a sample HPA yaml file. The same can also be configured from the Cloud Console:
Menu > GKE > Workloads > click on your deployment > 3 dots (more actions) > Actions > Autoscale > set metrics > Save
Here is another example of using HPA with an external metric (e.g. Cloud Pub/Sub):
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: your-hpa-name
spec:
  minReplicas: 1
  maxReplicas: 4
  metrics:
  - external:
      metric:
        name: pubsub.googleapis.com|subscription|num_undelivered_messages
        selector:
          matchLabels:
            resource.labels.subscription_id: your-pubsub-subscription-name
      target:
        type: AverageValue
        averageValue: 200
    type: External
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: your-deployment-name

Get the Kubernetes uid of the Deployment that created the pod, from within the pod

I want to be able to know the Kubernetes uid of the Deployment that created the pod, from within the pod.
The reason for this is so that the Pod can spawn another Deployment and set the OwnerReference of that Deployment to the original Deployment (so it gets Garbage Collected when the original Deployment is deleted).
Taking inspiration from here, I've tried*:
Using field refs as env vars:
containers:
- name: test-operator
  env:
  - name: DEPLOYMENT_UID
    valueFrom:
      fieldRef: {fieldPath: metadata.uid}
Using downwardAPI and exposing through files on a volume:
containers:
  volumeMounts:
  - mountPath: /etc/deployment-info
    name: deployment-info
volumes:
- name: deployment-info
  downwardAPI:
    items:
    - path: "uid"
      fieldRef: {fieldPath: metadata.uid}
*Both of these are under spec.template.spec of a resource of kind: Deployment.
However for both of these the uid is that of the Pod, not the Deployment. Is what I'm trying to do possible?
The behavior is correct; the Downward API exposes Pod fields, not Deployment/ReplicaSet fields.
So I guess the solution is to set the name of the Deployment manually as a label in spec.template.metadata.labels, then use the Downward API to inject that label as an env variable.
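A minimal sketch of that workaround, assuming a label key deployment-name that you keep in sync with the Deployment's name yourself:
spec:
  template:
    metadata:
      labels:
        app: test-operator
        deployment-name: test-operator   # set manually to match the Deployment name
    spec:
      containers:
      - name: test-operator
        image: busybox
        env:
        - name: DEPLOYMENT_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.labels['deployment-name']
This exposes the Deployment's name (not its UID) to the container; the process can then look the Deployment up via the API to read its metadata.uid for the ownerReference.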
I think it's impossible to get the UID of the Deployment itself, but you can set any runAsUser range while creating the Deployment.
Try this command to get the UIDs of the existing pods:
kubectl get pod -o jsonpath='{range .items[*]}{.metadata.name}{" runAsUser: "}{.spec.containers[*].securityContext.runAsUser}{" fsGroup: "}{.spec.securityContext.fsGroup}{" seLinuxOptions: "}{.spec.securityContext.seLinuxOptions.level}{"\n"}{end}'
It's not exactly what you wanted to get, but it can be a hint for you.
To set the UID while creating the Deployment, see the example below:
apiVersion: apps/v1
kind: Deployment
metadata:
name: toolbox2
labels:
app: toolbox2
spec:
replicas: 3
selector:
matchLabels:
app: toolbox2
template:
metadata:
labels:
app: toolbox2
spec:
securityContext:
supplementalGroups: [1000620001]
seLinuxOptions:
level: s0:c25,c10
containers:
- name: net-toolbox
image: quay.io/wcaban/net-toolbox
ports:
- containerPort: 2000
securityContext:
runAsUser: 1000620001

k8s: configMap does not work in deployment

We ran into an issue recently with using environment variables inside a container.
OS: windows 10 pro
k8s cluster: minikube
k8s version: 1.18.3
1. The way that doesn't work, though it's the preferred way for us
Here is the deployment.yaml using 'envFrom':
apiVersion: apps/v1
kind: Deployment
metadata:
  name: db
  labels:
    app.kubernetes.io/name: db
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: db
  template:
    metadata:
      labels:
        app.kubernetes.io/name: db
    spec:
      serviceAccountName: default
      securityContext:
        {}
      containers:
        - name: db
          image: "postgres:9.4"
          ports:
            - name: http
              containerPort: 5432
              protocol: TCP
          envFrom:
            - configMapRef:
                name: db-configmap
here is the db.properties:
POSTGRES_HOST_AUTH_METHOD=trust
step 1:
kubectl create configmap db-configmap --from-file=./db.properties
step 2:
kubectl apply -f ./deployment.yaml
step 3:
kubectl get pod
Run the above command, get the following result:
db-8d7f7bcb9-7l788 0/1 CrashLoopBackOff 1 9s
That indicates the environment variable POSTGRES_HOST_AUTH_METHOD is not injected.
2. The way that works (we can't work with this approach)
Here is the deployment.yaml using 'env':
apiVersion: apps/v1
kind: Deployment
metadata:
  name: db
  labels:
    app.kubernetes.io/name: db
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: db
  template:
    metadata:
      labels:
        app.kubernetes.io/name: db
    spec:
      serviceAccountName: default
      securityContext:
        {}
      containers:
        - name: db
          image: "postgres:9.4"
          ports:
            - name: http
              containerPort: 5432
              protocol: TCP
          env:
            - name: POSTGRES_HOST_AUTH_METHOD
              value: trust
step 1:
kubectl apply -f ./deployment.yaml
step 2:
kubectl get pod
Run the above command, get the following result:
db-fc58f998d-nxgnn 1/1 Running 0 32s
The above indicates the environment variable is injected, so the db starts.
What did I do wrong in the first case?
Thank you in advance for the help.
Update:
Here is the configmap:
kubectl describe configmap db-configmap
Name: db-configmap
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
db.properties:
----
POSTGRES_HOST_AUTH_METHOD=trust
For creating the ConfigMap for use case 1, please use the command below:
kubectl create configmap db-configmap --from-env-file db.properties
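To double-check that the keys landed as individual entries (a quick verification sketch, not from the original answer):
kubectl get configmap db-configmap -o yaml
# data should contain:
#   POSTGRES_HOST_AUTH_METHOD: trust
# rather than a single db.properties key holding the whole file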
Are you missing the key? (see "key:" below, no quotes) And I think you need to provide the name of the env variable; people usually reuse the key name, but you don't have to. I've repeated the same value ("POSTGRES_HOST_AUTH_METHOD") below as both the environment variable NAME and the key name of the ConfigMap.
#start env .. where we add environment variables
env:
  # Define the environment variable
  - name: POSTGRES_HOST_AUTH_METHOD
    #value: "UseHardCodedValueToDebugSometimes"
    valueFrom:
      configMapKeyRef:
        # The ConfigMap containing the value you want to assign to the environment variable (above "name:")
        name: db-configmap
        # Specify the key associated with the value
        key: POSTGRES_HOST_AUTH_METHOD
My example (trying to use your values) comes from this generic example:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#define-container-environment-variables-using-configmap-data
pods/pod-single-configmap-env-variable.yaml
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "env" ]
      env:
        # Define the environment variable
        - name: SPECIAL_LEVEL_KEY
          valueFrom:
            configMapKeyRef:
              # The ConfigMap containing the value you want to assign to SPECIAL_LEVEL_KEY
              name: special-config
              # Specify the key associated with the value
              key: special.how
  restartPolicy: Never
PS: You can use describe to take a look at your ConfigMap after you think you have set it up correctly:
kubectl describe configmap db-configmap --namespace=IfNotDefaultNameSpaceHere
See, when you do it the way you described:
deployment# exb db-7785cdd5d8-6cstw
root@db-7785cdd5d8-6cstw:/# env | grep -i TRUST
db.properties=POSTGRES_HOST_AUTH_METHOD=trust
The env var that gets set is not exactly POSTGRES_HOST_AUTH_METHOD; it actually takes the filename as the env var name.
Create the ConfigMap via kubectl create cm db-configmap --from-env-file db.properties instead, and it will actually put the env var POSTGRES_HOST_AUTH_METHOD in the pod.
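A sketch of the full fix-and-verify sequence under that approach (the pod name is illustrative; kubectl rollout restart needs kubectl 1.15+, which the 1.18.3 setup here satisfies):
# recreate the ConfigMap with env-file semantics
kubectl delete configmap db-configmap
kubectl create cm db-configmap --from-env-file db.properties
# env vars are only read at container start, so restart the pods
kubectl rollout restart deployment db
# then check inside the new pod
kubectl exec <db-pod-name> -- env | grep POSTGRES_HOST_AUTH_METHOD
# POSTGRES_HOST_AUTH_METHOD=trust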

Mounting kubernetes volume in multiple containers within a pod with gitlab

I am setting up a CI/CD environment for the first time, consisting of a single-node Kubernetes cluster (minikube).
On this node I created a PV
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
data-volume 1Gi RWO Retain Bound gitlab-managed-apps/data-volume-claim manual 20m
and PVC
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
data-volume-claim Bound data-volume 1Gi RWO manual 19m
Now I would like to create a pod with multiple containers accessing this volume.
Where and how do you advise setting this up using GitLab pipelines (gitlab-ci etc.)? Multiple repos may be the best fit for the project.
Here is a fully working example of a Deployment manifest file, with two containers (based on different nginx Docker images) defined in the Pod spec, using the same PV, from which they serve custom static HTML content on ports 80 and 81 respectively:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: null
  generation: 1
  labels:
    run: nginx
  name: nginx
  selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/nginx
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      run: nginx
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: nginx
    spec:
      volumes:
      - name: my-pv-storage
        persistentVolumeClaim:
          claimName: my-pv-claim-nginx
      containers:
      - image: nginx
        imagePullPolicy: IfNotPresent
        name: nginx
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: my-pv-storage
          subPath: html_custom
      - image: custom-nginx
        imagePullPolicy: IfNotPresent
        name: custom-nginx
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: my-pv-storage
          subPath: html
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status: {}
Yes, you can probably do that: run multiple containers in one pod sharing the one PVC.
In CI/CD, if you have multiple repos, a commit in one repo will build a new Docker image, push it to the registry, and deploy it to the k8s cluster.
If you plan to use the latest tag for image tagging, then the multi-container pod is easy to manage, since a commit only ever comes in to one repository at a time.
But if you plan to use the SHA hash for image tagging, how will you manage a single deployment file that holds the config for two containers?

Google Cloud Build can't find kubernetes deployment when updating image

I'm trying to set up a CI pipeline using Cloud Build. My build file builds and pushes the Docker images, and then uses the kubectl builder to update the images in a kubernetes deployment. However I'm getting the following error:
Error from server (NotFound): deployments.extensions "my-app" not found
Running: kubectl set image deployment my-app my-app-api=gcr.io/test-project-195004/my-app-api:ef53550e2ahy784a14iouyh79712c9f
I've verified via the UI that the deployment is active and has that name. I thought it could be a permissions issue, but as far as I know the Cloud Build service account has the Kubernetes Engine Admin role, and it successfully pulls the cluster auth data in the previous step.
EDIT: As requested, here is my build script:
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app-api:$COMMIT_SHA', '-f', 'deploy/api/Dockerfile', '--no-cache', '.']
- name: 'gcr.io/cloud-builders/kubectl'
  args:
  - set
  - image
  - deployment
  - my-app
  - my-app=gcr.io/$PROJECT_ID/my-app-api:$COMMIT_SHA
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=${_ZONE}'
  - 'CLOUDSDK_CONTAINER_CLUSTER=${_CLUSTER}'
images: ['gcr.io/$PROJECT_ID/my-app-api:$COMMIT_SHA']
timeout: 5000s
And the deployment.yaml --
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: 2018-11-04T17:34:44Z
  generation: 1
  labels:
    app: my-app
  name: my-app
  namespace: my-app
  resourceVersion: "4370"
  selfLink: /apis/extensions/v1beta1/namespaces/my-app/deployments/my-app
  uid: 65kj54g3-e057-11e8-81bc-42010aa20094
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: my-app
    spec:
      containers:
      - image: gcr.io/test-project/my-app-api:f16erogierjf1abd436e733398a08e1b76ce6b712
        imagePullPolicy: IfNotPresent
        name: my-appi-api
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 3
  conditions:
  - lastTransitionTime: 2018-11-04T17:37:59Z
    lastUpdateTime: 2018-11-04T17:37:59Z
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 1
  readyReplicas: 3
  replicas: 3
  updatedReplicas: 3
You need to define the namespace for your deployment when using kubectl if you're not using the default namespace.
Because your deployment uses the namespace namespace: my-app, you will need to add it to the command using the --namespace flag.
Here's how to do it inside cloudbuild.yaml:
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app-api:$COMMIT_SHA', '-f', 'deploy/api/Dockerfile', '--no-cache', '.']
- name: 'gcr.io/cloud-builders/kubectl'
  args:
  - set
  - image
  - deployment
  - my-app
  - my-app=gcr.io/$PROJECT_ID/my-app-api:$COMMIT_SHA
  - --namespace
  - my-app
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=${_ZONE}'
  - 'CLOUDSDK_CONTAINER_CLUSTER=${_CLUSTER}'
images: ['gcr.io/$PROJECT_ID/my-app-api:$COMMIT_SHA']
timeout: 5000s
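For reference, the raw command that build step ends up running is equivalent to something like this (a sketch using the same placeholders as the build file):
kubectl set image deployment my-app my-app=gcr.io/$PROJECT_ID/my-app-api:$COMMIT_SHA --namespace my-app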