How to access a mysql pod from another pod (busybox)? - kubernetes

I have been given the task of connecting a mysql pod to another working pod (preferably busybox), but I was not able to do it. Is there a way to do this? I have looked in many places, but the explanations were a bit complicated for me, as I am new to Kubernetes.

MySQL YAML config for Kubernetes:
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
          # Use secret in real usage
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
You can use the service name to connect to MySQL from the busybox container.
kubectl run -i --tty busybox --image=busybox --restart=Never -- sh
The above command starts a busybox container.
Run kubectl get pods to check the status of both pods.
From the busybox container you can run the following command to connect to MySQL:
mysql -h <MySQL service name> -u <Username> -p<password>
Ref docs:
MySQL: https://kubernetes.io/blog/2015/10/some-things-you-didnt-know-about-kubectl_28/
Busybox: https://kubernetes.io/blog/2015/10/some-things-you-didnt-know-about-kubectl_28/
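If the mysql client is not available in your busybox image (the stock busybox image does not include one), a common alternative is to start a throwaway pod from the mysql image itself and connect through the service name. A minimal sketch, assuming the manifests above (Service name mysql, root password password):
# Run a temporary client pod and connect via the Service name "mysql"
kubectl run -it --rm --image=mysql:5.6 --restart=Never mysql-client -- mysql -h mysql -u root -ppassword
# Once connected, a quick check such as SHOW DATABASES; confirms connectivity.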

Related

Health check of Postgres database in separate pod from springboot application pod to ensure database connection - K8s

As a training exercise in my company, I have to deploy my simple microservices app to a local minikube Kubernetes cluster.
My app is as follows:
Post microservice
User microservice
Both communicate with each other through RestTemplate.
The services use postgres databases in separate containers built on the official postgres image - each service has its own database running in a Docker container.
I have both services running in containers alongside the 2 databases.
In docker-compose.yml I could simply add 'depends_on' to make sure that the application containers start only after the database containers are ready and running.
How can I achieve the same thing in Kubernetes? I must run 8 pods (2 for the microservices and 2 for the databases, each with 2 instances), all in the same namespace. I must make sure that the database pods run first, and only after those are ready do the microservice pods start.
Everything should work with the command
kubectl apply -f .
So, in short, the application pod should start after the database pod.
I have tried using an initContainer, but it doesn't work.
My post-manifest.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: post
  namespace: kubernetes-microservices-task2
spec:
  replicas: 2
  selector:
    matchLabels:
      app: post
  template:
    metadata:
      labels:
        app: post
    spec:
      containers:
        - name: post
          image: norbertpedyk/post-image:varsion8
          envFrom:
            - configMapRef:
                name: post-config
            - secretRef:
                name: secret
          ports:
            - containerPort: 8081
      initContainers:
        - name: check-db
          image: busybox
          command: [ "sh", "-c", "until nslookup post-db; do echo waiting for post-db; sleep 2; done" ]
---
apiVersion: v1
kind: Service
metadata:
  name: post
  namespace: kubernetes-microservices-task2
spec:
  selector:
    app: post
  ports:
    - name: post
      port: 8081
      targetPort: 8081
My post-db-manifest.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: post-db
  namespace: kubernetes-microservices-task2
spec:
  replicas: 2
  selector:
    matchLabels:
      app: post-db
  template:
    metadata:
      labels:
        app: post-db
    spec:
      containers:
        - name: post-db
          image: postgres
          env:
            - name: POSTGRES_DB
              value: "posts"
            - name: POSTGRES_USER
              value: "pedyk"
            - name: POSTGRES_PASSWORD
              value: "password"
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: postgres-post-data
              mountPath: /data/postgres
      volumes:
        - name: postgres-post-data
          emptyDir: { }
---
apiVersion: v1
kind: Service
metadata:
  name: post-db
  namespace: kubernetes-microservices-task2
spec:
  selector:
    app: post-db
  ports:
    - name: http
      port: 5433
      targetPort: 5432
What I got is: (screenshot omitted)
This is the example for the Post service only; the same applies to the User service.
I must make sure that the database pods run first, and only after those are ready do the microservice pods start.
You shouldn't rely on a specific startup order for your application to function. You really want your application to wait until the database is ready; your initContainer idea is heading in the right direction, but you're not checking whether the database is ready, only whether its hostname resolves.
An easy way to check if postgres is ready is to use psql to run a simple query (SELECT 1) against it until it succeeds. We could do something like the following. In this example, I assume that we have a secret named postgres-credentials that looks like:
apiVersion: v1
kind: Secret
metadata:
  labels:
    app: wait-for-db
    component: postgres
  name: postgres-credentials-b8mfmtc4g6
  namespace: postgres
type: Opaque
stringData:
  POSTGRES_PASSWORD: secret
  POSTGRES_USER: postgres
  POSTGRES_DB: postgres
That secret is used to configure the postgres image; we can use the same secret to set up environment variables for psql:
initContainers:
  - name: wait-for-db
    image: docker.io/postgres:14
    env:
      - name: PGHOST
        value: postgres
      - name: PGUSER
        valueFrom:
          secretKeyRef:
            key: POSTGRES_USER
            name: postgres-credentials-b8mfmtc4g6
      - name: PGPASSWORD
        valueFrom:
          secretKeyRef:
            key: POSTGRES_PASSWORD
            name: postgres-credentials-b8mfmtc4g6
      - name: PGDATABASE
        valueFrom:
          secretKeyRef:
            key: POSTGRES_DB
            name: postgres-credentials-b8mfmtc4g6
    command:
      - /bin/sh
      - -c
      - |
        while ! psql -c 'select 1' > /dev/null 2>&1; do
          echo "waiting for database"
          sleep 1
        done
        echo "database is ready!"
This initContainer runs psql -c 'select 1' until it succeeds, at which point the application pod can come up.
You can find a complete example deployment here (that uses a PgAdmin pod as an example application).
It's worth noting that this is really only half the picture: in an ideal world, you wouldn't need the initContainer because your application would already have the necessary logic to (a) wait for database availability at startup and (b) handle database interruptions at runtime. In other words, you want to be able to restart your database (or upgrade it, or whatever) while your application is running without having to restart your application.
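As a complementary sketch, a readinessProbe on the database Deployment lets Kubernetes itself report when postgres actually accepts connections, so its Service only routes to ready pods. Assuming the post-db container from the question, it might look like:
containers:
  - name: post-db
    image: postgres
    ports:
      - containerPort: 5432
    # pg_isready ships with the postgres image; the user and database
    # values assume the env vars from the question's manifest.
    readinessProbe:
      exec:
        command: ["pg_isready", "-U", "pedyk", "-d", "posts"]
      initialDelaySeconds: 5
      periodSeconds: 5
This pairs well with the psql-based initContainer above: the probe gates traffic to the database pod, while the initContainer gates startup of the application pod.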

How can you connect to a database inside a k8s cluster that's behind a headless service?

Given a database that is part of a StatefulSet and behind a headless service, how can I use a local client (outside of the cluster) to access the database? Is it possible to create a separate service that targets a specific pod by its stable id?
There are multiple ways you can connect to this database service.
You can use:
Port-forward: https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/
Service as LoadBalancer: https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
Service as NodePort: https://kubernetes.io/docs/concepts/services-networking/service/#nodeport
Example MySQL database running on K8s: https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
          # Use secret in real usage
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
The easiest way to try is port-forwarding:
kubectl port-forward -n <NAMESPACE Name> <POD name> 3306:3306
Using the above command you create a proxy from your local machine to the K8s cluster and can test against localhost:3306.
This is not a method for production use cases; it should only be used for debugging.
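For example, once the port-forward is running you can point a local client at the forwarded port (a sketch; pod name and namespace are placeholders, the root credentials are those from the example manifest above):
# Terminal 1: forward local port 3306 to the MySQL pod
kubectl port-forward -n <NAMESPACE Name> <POD name> 3306:3306
# Terminal 2: connect with a local client through the tunnel
mysql -h 127.0.0.1 -P 3306 -u root -p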
NodePort: exposes the port on the worker node IPs, so if a worker node is replaced during autoscaling, the IP may change over time.
I would recommend creating a new Service with the respective labels and type LoadBalancer.
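To address the "target a specific pod by its stable id" part of the question: the StatefulSet controller adds a statefulset.kubernetes.io/pod-name label to each pod, so a separate Service can select a single pod. A sketch, assuming a StatefulSet named mysql whose first pod is mysql-0:
apiVersion: v1
kind: Service
metadata:
  name: mysql-0-external
spec:
  type: LoadBalancer   # or NodePort, depending on your environment
  selector:
    # stable pod identity label set by the StatefulSet controller
    statefulset.kubernetes.io/pod-name: mysql-0
  ports:
    - port: 3306
      targetPort: 3306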

InitContainer takes about 5 minutes to get a positive response from mysql

Friends,
I am learning here, trying to run a pod with an init container that checks whether the DNS name of my mysql pod's service resolves. Both pods are deployed with Helm (version v3.4.1) charts I created, in minikube (version v1.15.0).
The problem is that the init container tries for about five minutes until it finally resolves the DNS name. It always works after 4 to 5 minutes but never before that, no matter how long the mysql pod has been running and the mysql service has existed. Does anyone know why this is happening?
One interesting thing is that if I pass the clusterIP instead of the DNS name, it resolves immediately, and it also resolves immediately if I pass the full domain name, like this: mysql.default.svc.cluster.local.
Here is the code of my init container:
initContainers:
  - name: {{ .Values.initContainers.name }}
    image: {{ .Values.initContainers.image }}
    command: ['sh', '-c', 'until nslookup mysql; do echo waiting for mysql; sleep 2; done;']
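(For reference, the variant with the full domain name, which resolves immediately, looks like this, assuming the default namespace:)
initContainers:
  - name: {{ .Values.initContainers.name }}
    image: {{ .Values.initContainers.image }}
    # fully qualified service name instead of the short name "mysql"
    command: ['sh', '-c', 'until nslookup mysql.default.svc.cluster.local; do echo waiting for mysql; sleep 2; done;']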
Here is the service of the mysql deployment:
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  ports:
    - protocol: TCP
      port: 3306
      targetPort: 3306
  selector:
    app: mysql
  type: ClusterIP
And the mysql deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:8.0
          envFrom:
            - configMapRef:
                name: mysql
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-storage
              mountPath: /var/lib/mysql
              subPath: mysql
      volumes:
        - name: mysql-storage
          persistentVolumeClaim:
            claimName: pvc-mysql

Kubernetes: Pod is not created after applying deployment

I have a problem with Kubernetes on my local machine. I want to create a pod with a database, so I prepared a deployment file with a service.
apiVersion: v1
kind: Service
metadata:
  name: bid-service-db
  labels:
    app: bid-service-db
    tier: database
spec:
  ports:
    - name: "5432"
      port: 5432
      targetPort: 5432
  selector:
    app: bid-service-db
    tier: database
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bid-service-db
  labels:
    app: bid-service-db
    tier: database
spec:
  selector:
    matchLabels:
      app: bid-service-db
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: bid-service-db
        tier: database
    spec:
      containers:
        - env:
            - name: POSTGRES_DB
              value: mydb
            - name: POSTGRES_PASSWORD
              value: password
            - name: POSTGRES_USER
              value: postgres
          image: postgres:13
          imagePullPolicy: Never
          name: bid-service-db
          ports:
            - containerPort: 5432
              name: bid-service-db
          resources: {}
      restartPolicy: Always
      serviceAccountName: ""
      volumes:
        - name: postgres-persistance-storage
          persistentVolumeClaim:
            claimName: bid-service-db-volume
status: {}
I am applying this file with k apply -f bid-db-deployment.yaml. k get all shows that only the service was created; the pod did not start. What can I do in this case? How can I troubleshoot it?
If you didn't get any errors from the 'apply', you can find the failure reason with:
kubectl describe deployment/DEPLOYMENT_NAME
Also, you can take only the deployment part and put it in a separate YAML file and see if you get errors.
Since it worked for you after restarting the cluster, a good idea next time would be to check the logs of the kube-apiserver and kube-controller-manager pods using the command:
kubectl logs -n kube-system <kube-api/controller_pod_name>
To list your deployments in all namespaces you can use the command:
kubectl get deployments -A
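When a Deployment creates no pods at all, the namespace events and the Deployment's ReplicaSet usually show why; a sketch (the label selector assumes the manifest from the question):
# Recent events, oldest first
kubectl get events --sort-by=.metadata.creationTimestamp
# The ReplicaSet created by the Deployment reports pod-creation failures
kubectl describe replicaset -l app=bid-service-db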

Cannot connect to a mongodb service in a Kubernetes cluster

I have a Kubernetes cluster on Google Cloud with a database service running in front of a mongodb deployment. I also have a series of microservices that are attempting to connect to that datastore.
However, they can't seem to find the host.
apiVersion: v1
kind: Service
metadata:
  labels:
    name: mongo
  name: mongo
spec:
  ports:
    - port: 27017
      targetPort: 27017
  selector:
    name: mongo
Here's my mongo deployment...
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mongo-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: mongo
    spec:
      containers:
        - image: mongo:latest
          name: mongo
          ports:
            - name: mongo
              containerPort: 27017
              hostPort: 27017
          volumeMounts:
            - name: mongo-persistent-storage
              mountPath: /data/db
      volumes:
        - name: mongo-persistent-storage
          gcePersistentDisk:
            pdName: mongo-disk
            fsType: ext4
And an example of one of my services...
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: bandzest-artists
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: bandzest-artists
    spec:
      containers:
        - name: artists-container
          image: gcr.io/<omitted>/artists:41040e8
          ports:
            - containerPort: 7000
          imagePullPolicy: Always
          env:
            - name: DB_HOST
              value: mongo
            - name: AWS_BUCKET_NAME
              value: <omitted>
            - name: AWS_ACCESS_KEY_ID
              value: <omitted>
            - name: AWS_SECRET_KEY
              value: <omitted>
First, check that the service was created:
kubectl describe svc mongo
You should see that it was created and is routing to your pod's IP. If you're wondering what your pod's IP is, you can check it via
kubectl get po | grep mongo
Which should return something like: mongo-deployment-<guid>-<guid>, then do
kubectl describe po mongo-deployment-<guid>-<guid>
You should make sure the pod started correctly and says Running, not something like ImagePullBackOff. It looks like you're mounting a volume from a gcePersistentDisk. If you see your pod just hanging in the ContainerCreating state, it's very likely you're not mounting the disk correctly. Make sure you create the disk before you try to mount it as a volume.
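For example, the disk referenced as pdName: mongo-disk can be created ahead of time with gcloud (size and zone are placeholders; the zone must match your cluster's nodes):
gcloud compute disks create mongo-disk --size=10GB --zone=<cluster-zone>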
If it looks like your service is routing correctly, then you can check the logs of your pod to make sure it started mongo correctly:
kubectl logs mongo-deployment-<guid>-<guid>
If it looks like the pod and logs are correct, you can exec into the pod and make sure mongo is actually starting and working:
kubectl exec -it mongo-deployment-<guid>-<guid> sh
This should get you into the container (pod); from there you can try something like the following to see if your DB is running.
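For example (a sketch; older mongo images ship the mongo shell, newer ones ship mongosh):
# Inside the container, confirm the server responds to a ping
mongo --eval "db.adminCommand('ping')"
# On newer images the shell is called mongosh instead
mongosh --eval "db.adminCommand('ping')"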