Cluster information:
Kubernetes version: v1.12.8-gke.10 on GCP
Question:
I’m doing an application migration right now. What I do is grab the configurations of all related resources and then deploy them to a new cluster. After getting the information from the shell command kubectl get <resource> -o yaml, I noticed there is a lot of information that my deployment YAMLs don’t have.
I deleted .spec.clusterIP, .metadata.uid, .metadata.selfLink, .metadata.resourceVersion, .metadata.creationTimestamp, .metadata.generation, .status, .spec.template.spec.securityContext, .spec.template.spec.dnsPolicy, .spec.template.spec.terminationGracePeriodSeconds, .spec.template.spec.restartPolicy fields.
I’m not sure whether there are other fields I need to delete that would influence the new deployment.
Is there a way to find all non-portable fields that I can delete?
And another question: do all related resources matter? For now I just grab the list of resources from kubectl api-resources and then get the info for them one by one. Should I ignore some resources, like ReplicaSets, when migrating the whole application?
For example, the output configuration of an nginx Deployment looks like this:
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2019-07-16T21:55:39Z"
  generation: 1
  labels:
    app: nginx
  name: nginx-deployment
  namespace: default
  resourceVersion: "1482081"
  selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/nginx-deployment
  uid: 732377ee-a814-11e9-bbe9-42010a8a001a
spec:
  progressDeadlineSeconds: 600
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: nginx
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.7.9
        imagePullPolicy: IfNotPresent
        name: nginx
        ports:
        - containerPort: 80
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 2
  conditions:
  - lastTransitionTime: "2019-07-16T21:55:41Z"
    lastUpdateTime: "2019-07-16T21:55:41Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2019-07-16T21:55:39Z"
    lastUpdateTime: "2019-07-16T21:55:41Z"
    message: ReplicaSet "nginx-deployment-5c689d88bb" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 2
  replicas: 2
  updatedReplicas: 2
Right off the bat: there is no way to automatically detect which fields are cluster-specific; kubectl get <resource> -o yaml outputs the current RESTful state of the resource. However, you can use some Linux bash to manipulate the output of a cluster dump and strip the fields you don't want. Take a look at this blog post on Medium.
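For example, here is a minimal sketch of such a clean-up for the Deployment above, assuming the mikefarah yq (v4) CLI is installed; the list of deleted fields is the one from the question and may need extending for other resource kinds:
kubectl get deployment nginx-deployment -o yaml \
  | yq eval 'del(.metadata.uid) | del(.metadata.selfLink) | del(.metadata.resourceVersion)
             | del(.metadata.creationTimestamp) | del(.metadata.generation) | del(.status)' - \
  > nginx-deployment-portable.yaml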
As to the "do all resources matter" question, the answer is no. If you have a Deployment, you don't need the ReplicaSet or Pod resources, since the Deployment will create and manage those once it is deployed. You just need the top-level controller resource (the same goes for DaemonSets and StatefulSets).
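As a hedged illustration of that, instead of walking every entry from kubectl api-resources you could export only the top-level kinds your application actually owns (adjust the kind list and namespace to your app):
kubectl get deployments,statefulsets,daemonsets,services,configmaps,secrets,ingresses \
  -n default -o yaml > app-export.yaml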
On another note, the fields in the spec section can mostly all be kept; the values you are removing are likely defaults you never set explicitly, and there is no real benefit in removing them.
Related
I am setting up a CI/CD environment for the first time consisting of a single node kubernetes (minikube).
On this node I created a PV
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
data-volume 1Gi RWO Retain Bound gitlab-managed-apps/data-volume-claim manual 20m
and PVC
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
data-volume-claim Bound data-volume 1Gi RWO manual 19m
Now I would like to create a pod with multiple containers accessing to this volume.
Where and how would you advise setting this up using GitLab pipelines (gitlab-ci), etc.? Multiple repos may be the best fit for the project.
Here is a fully working example of a Deployment manifest, with two containers (based on different nginx Docker images) defined in the Pod's spec and using the same PV, from which they serve custom static HTML content on ports 80 and 81 respectively:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: null
  generation: 1
  labels:
    run: nginx
  name: nginx
  selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/nginx
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      run: nginx
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: nginx
    spec:
      volumes:
      - name: my-pv-storage
        persistentVolumeClaim:
          claimName: my-pv-claim-nginx
      containers:
      - image: nginx
        imagePullPolicy: IfNotPresent
        name: nginx
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: my-pv-storage
          subPath: html_custom
      - image: custom-nginx
        imagePullPolicy: IfNotPresent
        name: custom-nginx
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: my-pv-storage
          subPath: html
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status: {}
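If the referenced claim does not exist yet, here is a minimal sketch of creating it, assuming it should bind to a manually provisioned 1Gi RWO PV like the one in the question (swap in the existing data-volume-claim name if you want to reuse that claim instead):
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pv-claim-nginx
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF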
Yes, you can probably do that: run multiple containers in one Pod sharing the one PVC.
For CI/CD, if you have multiple repos, a commit in one repo will build a new Docker image, push it to the registry, and deploy it to the k8s cluster.
If you plan to use the latest tag for your images, then the multi-container Pod will be easy to manage: a commit in only one repository simply triggers a redeploy.
But if you plan to use SHA-hash tags for your CI/CD images, how will you manage the deployment file that has two container configs?
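One way to handle that (a sketch only; the registry path is hypothetical and the deployment/container names are taken from the example above) is to have each repo's pipeline update only its own container in the shared Deployment instead of re-rendering the whole manifest:
# pipeline of the first repo updates only the nginx container
kubectl set image deployment/nginx nginx=registry.example.com/nginx:$CI_COMMIT_SHA
# pipeline of the second repo updates only the custom-nginx container
kubectl set image deployment/nginx custom-nginx=registry.example.com/custom-nginx:$CI_COMMIT_SHA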
I am trying to fire up an influxdb instance on my cluster.
I am following a few different guides and am trying to get it to expose a secret as environment variables using envFrom. Unfortunately I always get Environment: <none> after doing my deployment. Echoing the environment variables I expect yields blank values as well.
I am running this command to deploy (the script below is in influxdb.yaml): kubectl create deployment influxdb --image=influxdb
Here is my deployment script:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  generation: 1
  labels:
    app: influxdb
    project: pihole
  name: influxdb
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: influxdb
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: influxdb
    spec:
      containers:
      - name: influxdb
        envFrom:
        - secretRef:
            name: influxdb-creds
        image: docker.io/influxdb:1.7.6
        imagePullPolicy: IfNotPresent
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/lib/influxdb
          name: var-lib-influxdb
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: var-lib-influxdb
        persistentVolumeClaim:
          claimName: influxdb
status: {}
The output of kubectl describe secret influxdb-creds is this:
Name: influxdb-creds
Namespace: default
Labels: <none>
Annotations: <none>
Type: Opaque
Data
====
INFLUXDB_USERNAME: 4 bytes
INFLUXDB_DATABASE: 6 bytes
INFLUXDB_HOST: 8 bytes
INFLUXDB_PASSWORD: 11 bytes
To test your deployment, please first create the secret and then create the deployment:
1. Secrets:
kubectl create secret generic influxdb-creds --from-literal=INFLUXDB_USERNAME='test_user' --from-literal=INFLUXDB_DATABASE='test_password'
2. Deployment:
kubectl apply -f <path_to_your_yaml_file>
In order to verify, please run
kubectl describe secret influxdb-creds
kubectl exec <your_new_deployed_pod> -- env
kubectl describe pod <your_new_deployed_pod>
In the kubectl describe pod output, look for:
Environment Variables from:
  influxdb-creds  Secret  Optional: false
Hope this helps.
Please share your findings.
The answer to this is that I was creating the deployment incorrectly. I was using the command kubectl create deployment influxdb --image=influxdb, which creates a blank deployment straight from the image, when instead I should have been creating it with kubectl create -f influxdb.yaml, where influxdb.yaml is the file containing the deployment definition from the original question.
I was making the false assumption that the create deployment command reads the YAML file of the same name, but it does not.
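In other words, side by side:
# what I ran: creates a brand-new, bare deployment from the image and ignores influxdb.yaml entirely
kubectl create deployment influxdb --image=influxdb
# what I should have run: creates the deployment exactly as defined in the file
kubectl create -f influxdb.yaml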
I am new to Kubernetes. I am trying to follow this tutorial that instructs me on how to use minikube to setup a local service. I was able to get things running with the $ kubectl run hello-minikube --image=k8s.gcr.io/echoserver:1.10 --port=8080 service from the tutorial. Huzzah!
Now I want to run a server with a locally tagged-and-built Docker image. According to this post all I need to do is tell my computer to use the minikube docker daemon, build my image, and set the imagePullPolicy to never.
How and where do I set the imagePullPolicy with minikube? I've googled around and while there are plenty of results, my "babe in the woods" status with K8s leads to information overload. (i.e. the simpler your answer, the better)
You have to edit your Deployment (kubectl run creates a deployment). The spec would look something like this:
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      run: hello-minikube
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: hello-minikube
    spec:
      containers:
      - image: k8s.gcr.io/echoserver:1.10   <-- change to your locally built image
        imagePullPolicy: IfNotPresent       <-- change to Never
        name: hello-minikube
        ports:
        - containerPort: 8080
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
Edit with:
$ kubectl edit deployment hello-minikube
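For completeness, here is a sketch of the full local-image workflow the linked post describes (the image name and tag are hypothetical):
eval $(minikube docker-env)            # point this shell's docker CLI at minikube's daemon
docker build -t my-local-image:dev .   # the image now exists inside minikube, no registry needed
kubectl set image deployment/hello-minikube hello-minikube=my-local-image:dev
kubectl patch deployment hello-minikube --type=json \
  -p='[{"op":"replace","path":"/spec/template/spec/containers/0/imagePullPolicy","value":"Never"}]'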
I want to be able to roll back the Deployment for my pods. I'm updating my pod using kubectl set image in a CI environment. When I set maxUnavailable on the Deployment/web file to 1, I get downtime, but when I set maxUnavailable to 0, the pods do not get replaced and the container/app is not restarted.
Also, I have a single node in the Kubernetes cluster, and here's its info:
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
CPU Requests CPU Limits Memory Requests Memory Limits
------------ ---------- --------------- -------------
881m (93%) 396m (42%) 909712Ki (33%) 1524112Ki (56%)
Events: <none>
Here's the complete YAML file. I do have a readiness probe set.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "10"
    kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe
      convert
    kompose.version: 1.14.0 (fa706f2)
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"extensions/v1beta1","kind":"Deployment","metadata":{"annotations":{"kompose.cmd":"C:\\ProgramData\\chocolatey\\lib\\kubernetes-kompose\\tools\\kompose.exe convert","kompose.version":"1.14.0 (fa706f2)"},"creationTimestamp":null,"labels":{"io.kompose.service":"dev-web"},"name":"dev-web","namespace":"default"},"spec":{"replicas":1,"strategy":{},"template":{"metadata":{"labels":{"io.kompose.service":"dev-web"}},"spec":{"containers":[{"env":[{"name":"JWT_KEY","value":"ABCD"},{"name":"PORT","value":"2000"},{"name":"GOOGLE_APPLICATION_CREDENTIALS","value":"serviceaccount/quick-pay.json"},{"name":"mongoCon","value":"mongodb://quickpayadmin:quickpay1234#ds121343.mlab.com:21343/quick-pay-db"},{"name":"PGHost","value":"173.255.206.177"},{"name":"PGUser","value":"postgres"},{"name":"PGDatabase","value":"quickpay"},{"name":"PGPassword","value":"z33shan"},{"name":"PGPort","value":"5432"}],"image":"gcr.io/quick-pay-208307/quickpay-dev-node:latest","imagePullPolicy":"Always","name":"dev-web-container","ports":[{"containerPort":2000}],"readinessProbe":{"failureThreshold":3,"httpGet":{"path":"/","port":2000,"scheme":"HTTP"},"initialDelaySeconds":5,"periodSeconds":5,"successThreshold":1,"timeoutSeconds":1},"resources":{"requests":{"cpu":"20m"}}}]}}}}
  creationTimestamp: 2018-12-24T12:13:48Z
  generation: 12
  labels:
    io.kompose.service: dev-web
  name: dev-web
  namespace: default
  resourceVersion: "9631122"
  selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/web
  uid: 5e66f7b3-0775-11e9-9653-42010a80019d
spec:
  progressDeadlineSeconds: 600
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      io.kompose.service: web
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: web
    spec:
      containers:
      - env:
        - name: PORT
          value: "2000"
        image: gcr.io/myimagepath/web-node
        imagePullPolicy: Always
        name: web-container
        ports:
        - containerPort: 2000
          protocol: TCP
        readinessProbe:
          failureThreshold: 10
          httpGet:
            path: /
            port: 2000
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 10
        resources:
          requests:
            cpu: 10m
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 2
  conditions:
  - lastTransitionTime: 2019-01-03T05:49:46Z
    lastUpdateTime: 2019-01-03T05:49:46Z
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: 2018-12-24T12:13:48Z
    lastUpdateTime: 2019-01-03T06:04:24Z
    message: ReplicaSet "dev-web-7bd498fc74" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 12
  readyReplicas: 2
  replicas: 2
  updatedReplicas: 2
I've tried with 1 replica and it still does not work.
In the first scenario, Kubernetes deletes one pod (maxUnavailable: 1) and starts a pod with the new image, then waits ~110 seconds (based on your readiness probe) to check whether the new pod is able to serve requests. The new pod isn't able to serve requests, but it is in the Running state, so Kubernetes deletes the second old pod and starts it with the new image, and again the second pod waits for the readiness probe to complete. This is why there is a window in which neither container is ready to serve requests, and hence the downtime.
In the second scenario, where you have maxUnavailable: 0, Kubernetes first brings up a pod with the new image; it isn't able to serve requests within ~110 seconds (based on your readiness probe), so it times out and the new pod with the new image is deleted. The same happens with the second pod. Hence neither of your pods gets updated.
So the reason is that you are not giving your application enough time to come up and start serving requests. Increase the value of failureThreshold in your readiness probe while keeping maxUnavailable: 0, and it will work.
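As a concrete, hedged example, this patch raises failureThreshold on the existing readiness probe and pins maxUnavailable to 0; the value 30 (about 5 minutes of retries at periodSeconds: 10) is only an illustration, and the deployment name is taken from the manifest's metadata.name above:
kubectl patch deployment dev-web --type=json -p='[
  {"op":"replace","path":"/spec/template/spec/containers/0/readinessProbe/failureThreshold","value":30},
  {"op":"replace","path":"/spec/strategy/rollingUpdate/maxUnavailable","value":0}
]'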
I have a Kubernetes deployment that looks something like this (replaced names and other things with '....'):
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "3"
kubernetes.io/change-cause: kubectl replace deployment ....
-f - --record
creationTimestamp: 2016-08-20T03:46:28Z
generation: 8
labels:
app: ....
name: ....
namespace: default
resourceVersion: "369219"
selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/....
uid: aceb2a9e-6688-11e6-b5fc-42010af000c1
spec:
replicas: 2
selector:
matchLabels:
app: ....
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: ....
spec:
containers:
- image: gcr.io/..../....:0.2.1
imagePullPolicy: IfNotPresent
name: ....
ports:
- containerPort: 8080
protocol: TCP
resources:
requests:
cpu: "0"
terminationMessagePath: /dev/termination-log
dnsPolicy: ClusterFirst
restartPolicy: Always
securityContext: {}
terminationGracePeriodSeconds: 30
status:
availableReplicas: 2
observedGeneration: 8
replicas: 2
updatedReplicas: 2
The problem I'm observing is that Kubernetes places both replicas (in the deployment I've asked for two) on the same node. If that node goes down, I lose both containers and the service goes offline.
What I want Kubernetes to do is to ensure that it doesn't double up containers on the same node where the containers are the same type - this only consumes resources and doesn't provide any redundancy. I've looked through the documentation on deployments, replica sets, nodes etc. but I couldn't find any options that would let me tell Kubernetes to do this.
Is there a way to tell Kubernetes how much redundancy across nodes I want for a container?
EDIT: I'm not sure labels will work; labels constrain which node a pod will run on, so that it has access to local resources (SSDs) etc. All I want to do is ensure no downtime if a node goes offline.
There is now a proper way of doing this: topology spread constraints.
You can use kubernetes.io/hostname as the topology key if you just want to spread the pods out across all nodes. Meaning, if you have two replicas of a pod and two nodes, each node should get one replica, since their hostnames aren't the same.
Example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
  labels:
    app: my-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: my-service
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
I think you're looking for the affinity/anti-affinity selectors.
Affinity is for co-locating pods: for example, I want my website to try to schedule on the same host as my cache. Anti-affinity is the opposite: don't schedule pods on a host, according to a set of rules.
So for what you're doing, I would take a closer look at these two links:
https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#never-co-located-in-the-same-node
https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/#tolerating-node-failure
If you create a Service for that Deployment before creating the said Deployment, Kubernetes will spread your pods across nodes. This behavior comes from the scheduler; it is provided on a best-effort basis, provided that you have enough resources available on both nodes.
From the Kubernetes documentation (Managing Resources):
it’s best to specify the service first, since that will ensure the scheduler can spread the pods associated with the service as they are created by the controller(s), such as Deployment.
Also related: Configuration best practices - Service.
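A minimal sketch of that ordering (file names are hypothetical):
kubectl apply -f my-service.yaml      # create the Service first
kubectl apply -f my-deployment.yaml   # then the Deployment, so the scheduler can spread its pods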
I agree with Antoine Cotten's suggestion to use a Service with your Deployment. The Deployment keeps your application up by creating a new pod if, for some reason, one pod dies on a certain node, while the Service keeps traffic flowing to the healthy pods. However, if you just want to distribute a Deployment among all nodes, then you can use pod anti-affinity in your pod manifest file. I put an example on my GitLab page, which you can also find on the Kubernetes blog. For your convenience, I'm providing the example here as well.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - nginx
            topologyKey: kubernetes.io/hostname
      containers:
      - name: nginx
        image: gcr.io/google_containers/nginx-slim:0.8
        ports:
        - containerPort: 80
In this example, the Deployment has a label whose key is app and whose value is nginx. In the pod spec, podAntiAffinity restricts the scheduler from placing two pods with the same label (app: nginx) on one node. You can also use podAffinity if you would like to place multiple Deployments on one node.
If a node goes down, any pods running on it would be restarted automatically on another node.
If you start specifying exactly where you want them to run, then you actually lose Kubernetes' ability to reschedule them on a different node.
The usual practice therefore is to simply let Kubernetes do its thing.
If, however, you do have valid requirements to run a pod on a specific node, for example because it needs a certain local volume type, have a read of:
http://kubernetes.io/docs/user-guide/node-selection/
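A minimal sketch of that node-selection approach (the node name, label, and deployment name are hypothetical):
kubectl label nodes my-node-1 disktype=ssd
kubectl patch deployment my-app -p '{"spec":{"template":{"spec":{"nodeSelector":{"disktype":"ssd"}}}}}'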
Maybe a DaemonSet will work better. I'm using DaemonSets with nodeSelector to run pods on specific nodes and avoid duplication.
http://kubernetes.io/docs/admin/daemons/