What is the equivalent of depends_on in Kubernetes?

I have a Docker Compose file with the following entries:
version: '2.1'
services:
  mysql:
    container_name: mysql
    image: mysql:latest
    volumes:
      - ./mysqldata:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: 'password'
    ports:
      - '3306:3306'
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3306"]
      interval: 30s
      timeout: 10s
      retries: 5
  test1:
    container_name: test1
    image: test1:latest
    ports:
      - '4884:4884'
      - '8443'
    depends_on:
      mysql:
        condition: service_healthy
    links:
      - mysql
The test1 container depends on mysql, which needs to be up and running first.
In Docker this can be controlled using the healthcheck and depends_on attributes.
The health check equivalent in Kubernetes is a readinessProbe, which I have already created, but how do we control the container startup order within the pod?
Any direction on this is greatly appreciated.
My Kubernetes file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: deployment
    spec:
      containers:
        - name: mysqldb
          image: "dockerregistry:mysqldatabase"
          imagePullPolicy: Always
          ports:
            - containerPort: 3306
          readinessProbe:
            tcpSocket:
              port: 3306
            initialDelaySeconds: 15
            periodSeconds: 10
        - name: test1
          image: "dockerregistry:test1"
          imagePullPolicy: Always
          ports:
            - containerPort: 3000

That's the beauty of Docker Compose and Docker Swarm... Their simplicity.
We came across this same Kubernetes shortcoming when deploying the ELK stack.
We solved it by using a side-car (initContainer), which is just another container in the same pod that is run first; when it completes, Kubernetes automatically starts the [main] container. We made it a simple shell script that loops until Elasticsearch is up and running, then it exits and Kibana's container starts.
Below is an example of a side-car that waits until Grafana is ready.
Add this 'initContainer' block just above your other containers in the Pod:
spec:
  initContainers:
    - name: wait-for-grafana
      image: darthcabs/tiny-tools:1
      args:
        - /bin/bash
        - -c
        - >
          set -x;
          while [[ "$(curl -s -o /dev/null -w ''%{http_code}'' http://grafana:3000/login)" != "200" ]]; do
            echo '.'
            sleep 15;
          done
  containers:
  .
  .
  (your other containers)
  .
  .

This was purposefully left out. The reason is that applications should be responsible for their own connect/re-connect logic when connecting to service(s) such as a database. This is outside the scope of Kubernetes.

While I don't know a direct answer to your question other than this link (k8s-AppController), I don't think it's wise to use the same deployment for the DB and the app. You would be tightly coupling your DB with your app and losing the great Kubernetes option to scale either one of them as needed. Furthermore, if your DB pod dies, you lose your data as well.
Personally, what I would do is have a separate StatefulSet with a Persistent Volume for the database and a Deployment for the app, and use a Service to handle communication between them.
Yes, I have to run a few different commands and may need at least two separate deployment files, but this way I am decoupling them and can scale them as needed. And my data is persistent as well!
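For illustration, here is a minimal sketch of that layout: a Service in front of a MySQL StatefulSet with a volumeClaimTemplate. The names, image tag, storage size and inline password are assumptions for the example, not taken from the question; in practice the password belongs in a Secret.

apiVersion: v1
kind: Service
metadata:
  name: mysql            # the app connects to mysql:3306 through this Service
spec:
  selector:
    app: mysql
  ports:
    - port: 3306
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:5.7
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: "password"        # example only; use a Secret instead
          ports:
            - containerPort: 3306
          readinessProbe:
            tcpSocket:
              port: 3306
            initialDelaySeconds: 15
            periodSeconds: 10
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
  volumeClaimTemplates:                # one PersistentVolumeClaim per replica
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi

The app then lives in its own Deployment and simply points its database host at mysql (the Service name).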

As mentioned, you should run the database and the application containers in separate pods and connect them with a service.
Unfortunately, neither Kubernetes nor Helm provides functionality similar to what you've described. We had many issues with that and tried a few approaches until we decided to develop a small utility that solved this problem for us.
Here's the link to the tool we've developed: https://github.com/Opsfleet/depends-on
You can make pods wait until other pods become ready according to their readinessProbe configuration. It's very close to Docker's depends_on functionality.

In Kubernetes terminology, your whole docker-compose set is a Pod.
So, there is no direct depends_on equivalent. Kubernetes checks all containers in a pod; they all have to be alive for the pod to be marked as healthy, and they are always run together.
In your case, you would prepare a Deployment configuration like this:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: app-and-db
    spec:
      containers:
        - name: app
          image: nginx
          ports:
            - containerPort: 80
        - name: db
          image: mysql
          ports:
            - containerPort: 3306
After the pod starts, your database will be available on the localhost interface for your application, because of the networking model:
Containers within a pod share an IP address and port space, and can find each other via localhost. They can also communicate with each other using standard inter-process communication like SystemV semaphores or POSIX shared memory.
But, as @leninhasda mentioned, it is not a good idea to run a database and an application in the same pod, and without a Persistent Volume. Here is a good tutorial on how to run a stateful application in Kubernetes.
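As a rough sketch of the Persistent Volume part (the claim name and size are made up for the example), the database container would mount a PersistentVolumeClaim instead of writing into the container filesystem:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi

and, in the pod spec of the Deployment above:

      containers:
        - name: db
          image: mysql
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql   # MySQL's data directory
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: mysql-data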

https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
What about liveness and readiness probes? They support commands, HTTP requests and more:
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
    - name: liveness
      image: k8s.gcr.io/busybox
      args:
        - /bin/sh
        - -c
        - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
      livenessProbe:
        exec:
          command:
            - cat
            - /tmp/healthy
        initialDelaySeconds: 5
        periodSeconds: 5
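For the HTTP flavour, here is a minimal readinessProbe sketch using httpGet; the /healthz path and port 8080 are assumptions, so point it at whatever health endpoint your container actually serves:

      readinessProbe:
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 5
        periodSeconds: 10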

Related

Configure a LivenessProbe which simply runs true; preparation to pass the CKA exams

I was passing one of the sample tests for CKA and one question says this:
"Configure a LivenessProbe which simply runs true"
This comes up while creating simple nginx pod(s) in the general question; they then ask for that as one of the items. What does that mean and how do I do it?
...Configure a LivenessProbe which simply runs true...while creating simple nginx pod...
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - image: nginx:alpine
      name: nginx
      ports:
        - containerPort: 80
      livenessProbe:
        exec:
          command: ["true"]
true is a command that returns exit code zero. In this case it means the probe simply returns no error. Alternatively, you can probe nginx with: command: ["ash","-c","nc -z localhost 80"].

Ensure pod deletion when one of its containers terminates [duplicate]

I have a Kubernetes JOB that does database migrations on a CloudSQL database.
One way to access the CloudSQL database from GKE is to use the CloudSQL Proxy container and then connect via localhost. Great, that's working so far. But because I'm doing this inside a K8s Job, the Job is not marked as successfully finished because the proxy keeps running.
$ kubectl get po
NAME                   READY   STATUS      RESTARTS   AGE
db-migrations-c1a547   1/2     Completed   0          1m
Even though the output says 'Completed', one of the (initially two) containers is still running: the proxy.
How can I make the proxy exit on completing the migrations inside container 1?
The best way I have found is to share the process namespace between containers and use the SYS_PTRACE securityContext capability to allow you to kill the sidecar.
apiVersion: batch/v1
kind: Job
metadata:
  name: my-db-job
spec:
  template:
    spec:
      restartPolicy: OnFailure
      shareProcessNamespace: true
      containers:
        - name: my-db-job-migrations
          command: ["/bin/sh", "-c"]
          args:
            - |
              <your migration commands>;
              sql_proxy_pid=$(pgrep cloud_sql_proxy) && kill -INT $sql_proxy_pid;
          securityContext:
            capabilities:
              add:
                - SYS_PTRACE
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.17
          command:
            - "/cloud_sql_proxy"
          args:
            - "-instances=$(DB_CONNECTION_NAME)=tcp:5432"
One possible solution would be a separate cloudsql-proxy deployment with a matching service. You would then only need your migration container inside the job that connects to your proxy service.
This comes with some downsides:
- higher network latency, no pod-local MySQL communication
- a possible security issue if you expose the SQL port to your whole Kubernetes cluster
If you want to open cloudsql-proxy to the whole cluster, you have to replace tcp:3306 with tcp:0.0.0.0:3306 in the -instances parameter on the cloudsql-proxy.
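A rough sketch of that separate proxy Deployment plus Service; the instance connection name is a placeholder, and the labels and names are chosen for the example:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloudsql-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cloudsql-proxy
  template:
    metadata:
      labels:
        app: cloudsql-proxy
    spec:
      containers:
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.17
          command: ["/cloud_sql_proxy"]
          args:
            # bind on 0.0.0.0 so other pods in the cluster can reach the proxy
            - "-instances=<project>:<region>:<instance>=tcp:0.0.0.0:3306"
---
apiVersion: v1
kind: Service
metadata:
  name: cloudsql-proxy
spec:
  selector:
    app: cloudsql-proxy
  ports:
    - port: 3306
      targetPort: 3306

The migration Job then connects to cloudsql-proxy:3306 instead of localhost.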
There are 3 ways of doing this.
1- Use private IP to connect your K8s job to Cloud SQL, as described by @newoxo in one of the answers. To do that, your cluster needs to be a VPC-native cluster. Mine wasn't and I was not willing to move all my stuff to a new cluster, so I wasn't able to do this.
2- Put the Cloud SQL Proxy container in a separate deployment with a service, as described by @Christian Kohler. This looks like a good approach, but it is not recommended by Google Cloud Support.
I was about to head in this direction (solution #2) but I decided to try something else.
And here is the solution that worked for me:
3- You can communicate between different containers in the same Pod/Job using the file system. The idea is to tell the Cloud SQL Proxy container when the main job is done, and then kill the cloud sql proxy. Here is how to do it:
In the yaml file (my-job.yaml)
apiVersion: v1
kind: Pod
metadata:
  name: my-job-pod
  labels:
    app: my-job-app
spec:
  restartPolicy: OnFailure
  containers:
    - name: my-job-app-container
      image: my-job-image:0.1
      command: ["/bin/bash", "-c"]
      args:
        - |
          trap "touch /lifecycle/main-terminated" EXIT
          { your job commands here }
      volumeMounts:
        - name: lifecycle
          mountPath: /lifecycle
    - name: cloudsql-proxy-container
      image: gcr.io/cloudsql-docker/gce-proxy:1.11
      command: ["/bin/sh", "-c"]
      args:
        - |
          /cloud_sql_proxy -instances={ your instance name }=tcp:3306 -credential_file=/secrets/cloudsql/credentials.json &
          PID=$!
          while true
          do
            if [[ -f "/lifecycle/main-terminated" ]]
            then
              kill $PID
              exit 0
            fi
            sleep 1
          done
      securityContext:
        runAsUser: 2 # non-root user
        allowPrivilegeEscalation: false
      volumeMounts:
        - name: cloudsql-instance-credentials
          mountPath: /secrets/cloudsql
          readOnly: true
        - name: lifecycle
          mountPath: /lifecycle
  volumes:
    - name: cloudsql-instance-credentials
      secret:
        secretName: cloudsql-instance-credentials
    - name: lifecycle
      emptyDir:
Basically, when your main job is done, it will create a file in /lifecycle that will be identified by the watcher added to the cloud-sql-proxy container, which will kill the proxy and terminate the container.
I hope it helps! Let me know if you have any questions.
Based on: https://stackoverflow.com/a/52156131/7747292
It doesn't look like Kubernetes can do this alone; you would need to manually kill the proxy once the migration exits. A similar question was asked here: Sidecar containers in Kubernetes Jobs?
Google Cloud SQL has recently launched private IP address connectivity for Cloud SQL. If the Cloud SQL instance and the Kubernetes cluster are in the same region, you can connect to Cloud SQL without using the Cloud SQL Proxy.
https://cloud.google.com/sql/docs/mysql/connect-kubernetes-engine#private-ip
A possible solution would be to set concurrencyPolicy: Replace in the CronJob spec ... this will replace the currently running pod with a new instance whenever the job needs to run again. But you have to make sure that the subsequent cron runs are separated enough.
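A minimal sketch of what that could look like as a CronJob (the schedule, names and image are made up for illustration):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: db-migrations
spec:
  schedule: "0 */6 * * *"      # leave enough time between runs
  concurrencyPolicy: Replace   # a still-running job gets replaced by the new run
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: migrations
              image: my-migrations:latest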
Unfortunately the other answers weren't working for me, because CloudSQLProxy runs in a distroless environment where there is no shell.
I managed to get around this by bundling a CloudSQLProxy binary with my deployment and running a bash script to start up CloudSQLProxy followed by my app.
Dockerfile:
FROM golang:1.19.4
RUN apt update
COPY . /etc/mycode/
WORKDIR /etc/mycode
RUN chmod u+x ./scripts/run_migrations.sh
RUN chmod u+x ./bin/cloud_sql_proxy.linux-amd64
RUN go install
ENTRYPOINT ["./scripts/run_migrations.sh"]
Shell Script (run_migrations.sh):
#!/bin/sh
# This script is run from the parent directory
dbConnectionString=$1
cloudSQLProxyPort=$2
echo "Starting Cloud SQL Proxy"
./bin/cloud_sql_proxy.linux-amd64 -instances=${dbConnectionString}=tcp:5432 -enable_iam_login -structured_logs &
CHILD_PID=$!
echo "CloudSQLProxy PID: $CHILD_PID"
echo "Migrating DB..."
go run ./db/migrations/main.go
MAIN_EXIT_CODE=$?
kill $CHILD_PID;
echo "Migrations complete.";
exit $MAIN_EXIT_CODE
K8s (via Pulumi):
import * as pulumi from '@pulumi/pulumi'
import * as k8s from '@pulumi/kubernetes'

const jobDBMigrations = new k8s.batch.v1.Job("job-db-migrations", {
  metadata: {
    namespace: namespaceName,
    labels: appLabels,
  },
  spec: {
    backoffLimit: 4,
    template: {
      spec: {
        containers: [
          {
            image: pulumi.interpolate`gcr.io/${gcpProject}/${migrationsId}:${migrationsVersion}`,
            name: "server-db-migration",
            args: [
              dbConnectionString,
            ],
          },
        ],
        restartPolicy: "Never",
        serviceAccount: k8sSAMigration.metadata.name,
      },
    },
  },
},
{
  provider: clusterProvider,
});

ArangoDB init container fails on minikube

I'm working on a NodeJS service which uses ArangoDB as datastore, and deployed on minikube. I use an initContainer directive in the kubernetes deployment manifest to ensure that the database is ready to receive connections before the application attempts to connect. The relevant portion of the kubernetes YAML is shown below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: carservice
template:
  spec:
    initContainers:
      - name: init-carservice
        image: arangodb/arangodb:3.5.1
        command: ['sh', '-c', 'until arangosh --server.endpoint="https://${CARSERVICE_CARSERVICEDB_SERVICE_HOST}:${CARSERVICE_CARSERVICEDB_SERVICE_PORT}" --server.password=""; do echo waiting for database to be up; sleep 2; done;']
    containers:
      - name: carservice
        image: carservice
        imagePullPolicy: IfNotPresent
The challenge has been that sometimes the initContainer is able to wait for the database connection to be established successfully. Most of the other times, it randomly fails with the error:
ERROR caught exception: invalid endpoint spec: https://
Out of desperation, I changed the scheme to http, and it fails with a corresponding error:
ERROR caught exception: invalid endpoint spec: http://
My understanding of these errors is that the database is not able to recognize https and http in these instances, which is strange. The few times the initContainer bit worked successfully, I used https in the related command in the kubernetes spec.
I must add that the actual database (https://${CARSERVICE_CARSERVICEDB_SERVICE_HOST}:${CARSERVICE_CARSERVICEDB_SERVICE_PORT}) has been successfully deployed to minikube using kube-arangodb, and can be accessed through the web UI, so that bit is sorted.
What I'd like to know:
Is this the recommended way to wait for ArangoDB to connect using the initContainer directive, or do I have to use an entirely different approach?
What could be causing the error I'm getting? Am I missing something fundamental here?
Would be glad for any help.
The issue was that, for those times the init container failed to connect to ArangoDB, the env variables were not correctly set. Therefore, I added another init container before it (since init containers are executed in sequence) that waits for the corresponding Kubernetes Service resource of the ArangoDB deployment to come up. That way, by the time the second init container runs, the env variables are available.
The corresponding portion of kubernetes deployment YAML is shown as:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: carservice
template:
  spec:
    initContainers:
      - name: init-db-service
        image: busybox:1.28
        command: ['sh', '-c', 'until nslookup carservice-carservicedb; do echo waiting for kubernetes service resource for db; sleep 2; done;']
      - name: init-carservice
        image: arangodb/arangodb:3.5.1
        command: ['sh', '-c', 'until arangosh --server.endpoint="https://${CARSERVICE_CARSERVICEDB_SERVICE_HOST}:${CARSERVICE_CARSERVICEDB_SERVICE_PORT}" --server.password=""; do echo waiting for database to be up; sleep 2; done;']
    containers:
      - name: carservice
        image: carservice
        imagePullPolicy: IfNotPresent

How to create a mongo database per service with Docker

I am working towards having multiple services (NodeJS, Spring-boot) that each have their own MongoDB Database-server-per-service (eventually targeting GCP & K8s) so that I can keep the data separate. I will be using Docker compose to launch both the service and database together. However, when I run multiple services, naturally I get port collision. Here is a typical docker-compose file:
version: '3'
# Define the services/containers to be run
services:
  myapp: # name of your service
    build: ./ # specify the directory of the Dockerfile
    ports:
      - "3000:3000" # specify ports forwarding
    links:
      - database # link this service to the database service
    volumes:
      - .:/usr/src/app
    depends_on:
      - database
  database: # name of the service
    image: mongo # specify image to build container from
    volumes:
      - ./data:/data/db
    ports:
      - "27017:27017"
I am looking for an example of how to do this. My thinking is that each compose file will have its own ports and each service will map to those ports internally?
You can write a YAML manifest for a Deployment and declare all your containers in one pod (a Pod is a group of containers). Your Deployment may look like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: application-deployment
  labels:
    app: application
spec:
  selector:
    matchLabels:
      app: application
  template:
    metadata:
      labels:
        app: application
    spec:
      containers:
        - name: application
          image: application:version
          ports:
            - containerPort: 3000
        - name: database
          image: database:version
          ports:
            - containerPort: 27017
It is just a Deployment inside your cluster. You need to expose it outside the cluster; I recommend using an Ingress for that (a sketch follows below).
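A minimal sketch of exposing the application container with a Service and an Ingress; the host name and port mapping are assumptions for the example:

apiVersion: v1
kind: Service
metadata:
  name: application
spec:
  selector:
    app: application
  ports:
    - port: 80
      targetPort: 3000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: application
spec:
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: application
                port:
                  number: 80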
Here you will have the database inside the pod. Also you can create 2 deployments for database and your app in the same namespace.
Also, you need to build the Docker images manually or use a CI tool for that. You can manage environments (prod, pre-prod, dev, test) with namespaces; one namespace per environment gives you full isolation. To manage all of this, I recommend using tools like Helm or kops.
There are a lot of differences between Kubernetes and Docker Compose, but the main difference is design. In Kubernetes you have more entities for each level of the application, and you can manage them individually. In Docker Compose you configure everything as one service in one place, and it is usually hard to manage some specific things.

How to roll kubernetes updates in intervals

We have a case where we need to make sure that pods in k8s have the latest version possible. What is the best way to accomplish this?
First idea was to kill the pod after some point, knowing that the new ones will come up pulling the latest image. Here is what we found so far. Still don't know how to do it.
Another idea is having rolling-update executed in intervals, like every 5 hours. Is there a way to do this?
As mentioned by @svenwltr, using activeDeadlineSeconds is an easy option, but it comes with the risk of losing all pods at once. To mitigate that risk I'd use a Deployment to manage the pods and their rollout, and configure a small second container along with the actual application. The small helper could be configured like this (following the official docs):
apiVersion: v1
kind: Pod
metadata:
  name: app-liveness
spec:
  containers:
    - name: liveness
      args:
        - /bin/sh
        - -c
        - touch /tmp/healthy; sleep $(( RANDOM % (3600) + 1800 )); rm -rf /tmp/healthy; sleep 600
      image: gcr.io/google_containers/busybox
      livenessProbe:
        exec:
          command:
            - cat
            - /tmp/healthy
        initialDelaySeconds: 5
        periodSeconds: 5
    - name: yourapplication
      imagePullPolicy: Always
      image: nginx:alpine
With this configuration every pod would break randomly within the configured timeframe (here between 30 and 90mins) and that would trigger the start of a new pod. The imagePullPolicy: Always would then make sure that the image is updated during that cycle.
This of course assumes that your application versions are always available under the same name/tag.
Another alternative is to use a deployment and let the controller handle roll outs. To be more specific: If you update the image field in the deployment yaml, it automatically updates every pod. IMO that's the cleanest way, but it has some requirements:
You cannot use the latest tag. The assumption is that a container only needs an update when the image tag changes.
If an update happens, you have to update the image tag manually somehow. This might be done by a custom controller which checks for new tags and updates the deployment accordingly, or it could be triggered by a Continuous Delivery system (see the example below).
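For example, a Continuous Delivery pipeline could bump the tag with a command along these lines (the deployment, container and registry names here are hypothetical):

kubectl set image deployment/my-app yourapplication=registry.example.com/my-app:v1.2.3

Changing the image field triggers a normal rolling update of the Deployment.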
To use your linked feature you just have to specify activeDeadlineSeconds in your pods.
Not tested example:
apiVersion: v1
kind: Pod
metadata:
  name: "nginx"
spec:
  activeDeadlineSeconds: 3600
  containers:
    - name: nginx
      image: nginx:alpine
      imagePullPolicy: Always
The downside of this is that you cannot control when the deadline kicks in. This means it might happen that all your pods get killed at the same time and the whole service goes offline (that depends on your application).
I tried using Pagid's solution, but unfortunately my observation and subsequent research indicate that his assertion that a failing container will restart the whole pod is incorrect. It turns out that only the failing container is restarted, which obviously does not help much when the point is to restart the other containers in the pod at random intervals.
The good news is that I have a solution that seems to work, which is based on his answer. Basically, instead of writing to /tmp/healthy, you write to a shared volume which each of the containers within the pod has mounted. You also need to add the liveness probe to each of those containers. Here's an example based on the one I am using:
volumes:
  - name: healthcheck
    emptyDir:
      medium: Memory
containers:
  - image: alpine:latest
    volumeMounts:
      - mountPath: /healthcheck
        name: healthcheck
    name: alpine
    livenessProbe:
      exec:
        command:
          - cat
          - /healthcheck/healthy
      initialDelaySeconds: 5
      periodSeconds: 5
  - name: liveness
    args:
      - /bin/sh
      - -c
      - touch /healthcheck/healthy; sleep $(( RANDOM % (3600) + 1800 )); rm -rf /healthcheck/healthy; sleep 600
    image: gcr.io/google_containers/busybox
    volumeMounts:
      - mountPath: /healthcheck
        name: healthcheck
    livenessProbe:
      exec:
        command:
          - cat
          - /healthcheck/healthy
      initialDelaySeconds: 5
      periodSeconds: 5