Specify "depends on" in Kubernetes deployment

I have two Kubernetes deployments, say backend and frontend. The frontend deployment depends on the backend deployment: only after the backend deployment's pods are ready should the frontend pods be created. How can I specify this in the deployment YAML?

The solution you are looking for is an init container. A Pod can have one or more init containers, and they run one after another before the main Pod containers are started. Please be aware that each init container must run to completion.
So you can use an init container to check the availability of your back-end application. Here is an example:
apiVersion: v1
kind: Pod
metadata:
  name: front-end
  labels:
    app: front-end
spec:
  containers:
  - name: front-end
    image: node:boron
  initContainers:
  - name: init-backend
    image: busybox
    command: ['sh', '-c', 'until <put check condition for your back-end>; do echo waiting for back-end; sleep 2; done;']
For more information, you can go through the documentation.
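For instance, here is a minimal sketch of a concrete check, assuming the back-end is reachable through a Service named backend on port 8080 with a /healthz endpoint (all three are assumptions; adjust them to your setup). The init container simply polls that endpoint until it responds:
  initContainers:
  - name: init-backend
    image: busybox
    # "backend", port 8080 and /healthz are assumed values, not taken from your cluster
    command: ['sh', '-c', 'until wget -q -O- http://backend:8080/healthz; do echo waiting for back-end; sleep 2; done;']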

Related

How to check other containers' file systems within a k8s pod?

I have a Pod which contains two containers: nginx and busybox. Now I want to access the files within the nginx container from the busybox container, say by doing 'ls /etc/nginx' in the busybox container to list the files there. Is there any configuration in a k8s pod that allows me to achieve this?
Below is my current pod yaml file.
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginxbusybox
  name: nginxbusybox
spec:
  shareProcessNamespace: true
  containers:
  - image: nginx
    name: nginx
  - image: busybox
    name: busybox
    command:
    - sleep
    - '86400'
P.S. This is for debugging a container which doesn't have the common Linux tools inside.
OK, I've found a way to achieve this: by using 'kubectl debug' and checking /proc/1/root/, I could see the filesystem of the nginx container.
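For anyone landing here, a rough sketch of that approach (the pod and container names are the ones from the manifest above):
# Start an ephemeral debug container that targets the nginx container's process namespace
kubectl debug -it nginxbusybox --image=busybox --target=nginx -- sh
# Inside the debug shell, PID 1 is the nginx process, so its root filesystem is reachable via /proc
ls /proc/1/root/etc/nginx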

What is the current equivalent of `kubectl run --generator=run/v1`

I'm working through Kubernetes in Action (copyright 2018), and at least one of the examples is out-of-date with respect to current versions of kubectl.
Currently I'm stuck in section 2.3 on just trying to demo a simple web-server docker container ("kubia"):
kubectl run kubia --image=Dave/kubia --port=8080 --generator=run/v1
The --generator option has been removed from current versions of kubectl. What command(s) achieve the same end in the current version of kubectl?
Note: I'm literally just 2 chapters into learning about Kubernetes, so I don't really know what a deployment or anything else is (so the official Kubernetes documentation doesn't help). I just need the simplest way to verify that I can, in fact, run this container in my minikube "cluster".
In short, you can use the following imperative commands to create pods and deployments; they are similar to the commands mentioned in that book.
To create a pod named kubia with the image Dave/kubia:
kubectl run kubia --image=Dave/kubia --port=8080
To create a deployment named kubia with the image Dave/kubia:
kubectl create deployment kubia --image=Dave/kubia --port=8080
You can just instantiate the pod directly, since --generator has been deprecated:
apiVersion: v1
kind: Pod
metadata:
  name: kubia
spec:
  containers:
  - name: kubia
    image: Dave/kubia
    ports:
    - containerPort: 8080
Alternatively, you can use a deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubia-deployment
  labels:
    app: kubia
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: Dave/kubia
        ports:
        - containerPort: 8080
Save either one to a file such as something.yaml and run:
kubectl create -f something.yaml
And to clean up
kubectl delete -f something.yaml
✌️
If someone reading the same book (Kubernetes in Action, copyright 2018) has the same issue in the future: just run a pod instead of the replication controller, and expose the pod instead of the rc in the following chapter.
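For that expose step, a hedged sketch of the pod-based equivalent (the service name kubia-http is just an example; use whatever name the book suggests at that point):
# Expose the pod directly instead of the replication controller
kubectl expose pod kubia --type=LoadBalancer --port=8080 --name=kubia-http
# On minikube, get a reachable URL for the service
minikube service kubia-http --url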

K8S cronjob scheduling on existing pod?

I have my application running in K8s pods. My application writes logs to a particular path, for which I already have a volume mounted on the pod. My requirement is to schedule a cronjob which will trigger once a week, read the logs from that pod's volume, generate a report based on my script (which basically filters the logs based on some keywords), and send the report via mail.
Unfortunately, I am not sure how to proceed with this, as I couldn't find any doc or blog which talks about integrating a cronjob with an existing pod or volume.
apiVersion: v1
kind: Pod
metadata:
  name: webserver
spec:
  volumes:
  - name: shared-logs
    emptyDir: {}
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/nginx
  - name: sidecar-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /var/log/nginx/access.log /var/log/nginx/error.log; sleep 30; done"]
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/nginx
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: "discovery-cronjob"
  labels:
    app.kubernetes.io/name: discovery
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: log-report
            image: busybox
            command: ['/bin/sh']
            args: ['-c', 'cat /var/log/nginx/access.log > nginx.log']
            volumeMounts:
            - mountPath: /log
              name: shared-logs
          restartPolicy: Never
I see two things here that you need to know:
Unfortunately, it is not possible to schedule a cronjob on an existing pod. Pods are ephemeral and a job needs to run to completion; it would be impossible to tell whether the job completed or not. This is by design.
Also, in order to see files from one pod in another, you must use a PVC. The logs created by your app have to be persisted if your job wants to access them. Here you can find some examples of how to create ReadWriteMany PersistentVolumeClaims on your Kubernetes cluster:
Kubernetes allows us to provision our PersistentVolumes dynamically using PersistentVolumeClaims. Pods treat these claims as volumes. The access mode of the PVC determines how many nodes can establish a connection to it. We can refer to the resource provider's docs for their supported access modes.
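As a rough sketch (the claim name, the size, and the assumption that your storage backend supports ReadWriteMany are mine, not taken from your cluster), the claim could look like this:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-logs-pvc
spec:
  accessModes:
  - ReadWriteMany          # needs a provisioner that supports RWX, e.g. NFS or CephFS
  resources:
    requests:
      storage: 1Gi
Both the webserver pod and the CronJob's job template would then declare the volume from that claim instead of an emptyDir:
  volumes:
  - name: shared-logs
    persistentVolumeClaim:
      claimName: shared-logs-pvc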

Share filesystem across containers in a pod

Is there a way to share the filesystem of two containers in a multi-container pod, without using shared volumes?
I have the following pod manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: pod
  name: pod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pod
  template:
    metadata:
      labels:
        app: pod
    spec:
      containers:
      - image: nginx:latest
        name: nginx
      - image: jenkins
        name: jenkins
I want to access the /var/jenkins_home path, which is available in the jenkins container, from the nginx container.
This is just for experimental purposes, I am trying to learn ways to share filesystem/things in general across containers in a pod.
You can't share files between containers without some sort of shared volume.
Part of the goal of a containerized system is that the container filesystems are isolated from each other. There are a huge number of practical problems with sharing files specifically (what if the containers are on different nodes? what if you have three replicas each of Jenkins and Nginx? what if they're all trying to write the same files?) and in general it's better to just avoid sharing files altogether if that's a possibility.
In the specific example you've shown, the lifecycle of a Jenkins CI system and an Nginx server will just be fundamentally different; whenever Jenkins builds something you don't want to restart it to also restart the Web server, and you could very easily want to scale up the Web tier without adding Jenkins workers. A better approach here would be to have Jenkins generate custom Docker images, push them to a registry, and then use the Kubernetes API to create a separate Nginx Deployment.
In most cases (especially because of the scaling considerations) you should avoid multi-container pods altogether.
(A more specific example of a case where this setup does make sense is if you're storing credentials somewhere like a Hashicorp Vault server. You would need an init container to connect to Vault, retrieve the credentials, and deposit them in an emptyDir volume, and then the main container can start up having gotten those credentials. As far as the main server container is concerned it's the only important part of this pod, and logically the pod is nothing more than the server container with some auxiliary stuff.)
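A minimal sketch of that init-container pattern (the pod, image, and file names here are purely illustrative, not a real Vault integration):
apiVersion: v1
kind: Pod
metadata:
  name: server-with-creds
spec:
  volumes:
  - name: creds
    emptyDir: {}
  initContainers:
  - name: fetch-creds
    image: busybox                  # stand-in for whatever actually fetches the credentials
    command: ['sh', '-c', 'echo fake-token > /creds/token']
    volumeMounts:
    - name: creds
      mountPath: /creds
  containers:
  - name: server
    image: nginx
    volumeMounts:
    - name: creds
      mountPath: /creds
      readOnly: true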
The sample below shows how to share a volume between containers:
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  restartPolicy: Never
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: nginx-container
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: debian-container
    image: debian
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data
    command: ["/bin/sh"]
    args: ["-c", "echo Hello from the debian container > /pod-data/index.html"]
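To verify the sharing works, assuming the manifest is saved as two-containers.yaml, you can read the file back through the nginx container:
kubectl apply -f two-containers.yaml
kubectl exec two-containers -c nginx-container -- cat /usr/share/nginx/html/index.html
# should print: Hello from the debian container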

Rescheduling pods from one node to another

So, I am writing a custom auto-rescheduler for my clusters and I am using the Python client library to do so. As the rescheduler is still a proposal and nothing has been done for it, the only known way is to delete the pod from the overused node and let the replication controller and scheduler take care of the rest (make a new pod and assign it to an appropriate node). What I want to know is: can I use the client library to move pods from one node to another without deleting the pod? Basically, I want to create a pod on an appropriate node first and then delete the pod on the overused node. Is that possible?
Using node labels, you can start containers on matching nodes. For this, you first need to set the node label, then update the deployment file and apply it.
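For example, labeling a node could look like this (node-1 is a placeholder for one of your node names):
kubectl label nodes node-1 svrtype=web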
Here is a sample YAML file I used for a blue-green deployment; I hope this helps.
Web server running on nodes labeled web:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webserver-blue
spec:
  replicas: 2
  template:
    metadata:
      labels:
        type: webserver
        color: blue
    spec:
      containers:
      - image: nginx:1.12.0
        name: webserver-container
        ports:
        - containerPort: 80
          name: http-server
      nodeSelector:
        svrtype: web
Set another node label as newweb, and update the deployment config with a different name and node label:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webserver-green
spec:
  replicas: 2
  template:
    metadata:
      labels:
        type: webserver
        color: green
    spec:
      containers:
      - image: nginx:1.13.0
        name: webserver-container
        ports:
        - containerPort: 80
          name: http-server
      nodeSelector:
        svrtype: newweb
After testing, you can remove the old one. The issue here is that you can direct traffic to only one deployment at a time.
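One common way to handle that switch is a Service whose selector matches only one color at a time; here is a sketch, with the service name and port being assumptions:
apiVersion: v1
kind: Service
metadata:
  name: webserver
spec:
  selector:
    type: webserver
    color: blue        # change this to green to shift traffic to the new deployment
  ports:
  - port: 80
    targetPort: 80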