InitContainer takes about 5 minutes to get a positive response from mysql - kubernetes

Friends,
I am learning here and trying to run a pod with an init container that checks whether the DNS name of my mysql pod's service resolves. Both pods are deployed with Helm (version v3.4.1) charts I created, on minikube (version v1.15.0).
The problem is that the init container tries for about five minutes until it finally resolves the DNS name. It always works after 4 to 5 minutes but never before that, no matter how long the mysql pod has been running and the mysql service has existed. Does anyone know why this is happening?
One interesting thing is that if I pass the clusterIP instead of the DNS name, it resolves immediately, and it also resolves immediately if I pass the fully qualified domain name, like this: mysql.default.svc.cluster.local.
Here is the code of my init container:
initContainers:
  - name: {{ .Values.initContainers.name }}
    image: {{ .Values.initContainers.image }}
    command: ['sh', '-c', 'until nslookup mysql; do echo waiting for mysql; sleep 2; done;']
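For comparison, here is a minimal sketch of the same check using the fully qualified name that resolves immediately (assuming the default namespace):
initContainers:
  - name: {{ .Values.initContainers.name }}
    image: {{ .Values.initContainers.image }}
    command: ['sh', '-c', 'until nslookup mysql.default.svc.cluster.local; do echo waiting for mysql; sleep 2; done;']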
Here is the service of the mysql deployment:
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  ports:
    - protocol: TCP
      port: 3306
      targetPort: 3306
  selector:
    app: mysql
  type: ClusterIP
And the mysql deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:8.0
          envFrom:
            - configMapRef:
                name: mysql
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-storage
              mountPath: /var/lib/mysql
              subPath: mysql
      volumes:
        - name: mysql-storage
          persistentVolumeClaim:
            claimName: pvc-mysql

Related

How to access a mysql pod from another pod (busybox)?

I am given a task of connecting a mysql pod with any other working pod (preferably busybox), but I was not able to do that. Is there a way to do this? I referred to many places, but the explanations were a bit complicated as I am new to Kubernetes.
MySQL YAML config for Kubernetes
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
    - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            # Use secret in real usage
            - name: MYSQL_ROOT_PASSWORD
              value: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
You can use the service name to connect to MySQL from the busybox container.
kubectl run -i --tty busybox --image=busybox --restart=Never -- sh
The above command will start one busybox container.
Run kubectl get pods to check the status of both pods.
In the busybox container you will be able to run the command to connect to MySQL:
mysql -h <MySQL service name> -u <Username> -p<password>
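For example, assuming the service above (named mysql) and the root password from the deployment (password), a quick test could look like this; plain busybox does not ship a mysql client, so this sketch uses the mysql:5.6 image as the client instead:
kubectl run -it --rm mysql-client --image=mysql:5.6 --restart=Never -- mysql -h mysql -ppassword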
Ref doc: https://kubernetes.io/blog/2015/10/some-things-you-didnt-know-about-kubectl_28/

Not able to connect to SQL container using service sqlservice. Not able to figure out problem. Everything looks fine

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql
          ports:
            - containerPort: 3306
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: secretssql
                  key: pass
          volumeMounts:
            - name: mysqlvolume
              mountPath: "/var/lib/mysql"
      volumes:
        - name: mysqlvolume
          persistentVolumeClaim:
            claimName: sqlpvc
---
apiVersion: v1
kind: Secret
metadata:
  name: secretssql
data:
  # You can include additional key value pairs as you do with Opaque Secrets
  pass: YWRtaW4=
---
apiVersion: v1
kind: Service
metadata:
  name: sqlservice
spec:
  selector:
    app: mysql
  ports:
    - port: 80
I want to connect to the SQL container using the service sqlservice. DNS is reachable, but when I try to ping the service, I get 100% packet loss.
Your service is using port 80:
ports:
  - port: 80
while your pod is listening on port 3306:
ports:
  - containerPort: 3306
Try adjusting your service to use port 3306:
ports:
  - port: 3306
    targetPort: 3306
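Putting it together, a corrected version of the Service from the question might look like this (name and selector kept as in the original):
apiVersion: v1
kind: Service
metadata:
  name: sqlservice
spec:
  selector:
    app: mysql
  ports:
    - port: 3306
      targetPort: 3306
Note that pinging a ClusterIP is expected to fail regardless, since it is a virtual IP that only forwards the configured TCP/UDP ports, so test with a mysql client against port 3306 rather than with ping.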

Kibana on Kubernetes - how to point to ES container running on a different pod

Learning Kubernetes by setting up two pods, each running an Elasticsearch and a Kibana container respectively.
My configuration file is able to set up both pods as well as create two services to access these applications in the host machine's web browser.
The issue is that I don't know how to make the Kibana container communicate with the ES application/pod.
Earlier, while learning Docker, I crafted a docker-compose app configuration, and now I am basically trying to do the same using Kubernetes (docker-compose config pasted below).
I came across a blog that suggested using a Deployment instead of a Pod. Again, I am not sure how one would make Kibana talk to ES.
Kubernetes configuration YAML:
apiVersion: v1
kind: Pod
metadata:
  name: pod-elasticsearch
  labels:
    app: myapp
spec:
  hostname: "es01-docker-local"
  containers:
    - name: myelasticsearch-container
      image: myelasticsearch
      imagePullPolicy: Never
      volumeMounts:
        - name: my-volume
          mountPath: /home/newuser
  volumes:
    - name: my-volume
      emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: myelasticsearch-service
spec:
  type: NodePort
  ports:
    - targetPort: 9200
      port: 9200
      nodePort: 30015
  selector:
    app: myapp
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-kibana
  labels:
    app: myapp
spec:
  containers:
    - name: mykibana-container
      image: mykibana
      imagePullPolicy: Never
      volumeMounts:
        - name: my-volume
          mountPath: /home/newuser
  volumes:
    - name: my-volume
      emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: mykibana-service
spec:
  type: NodePort
  ports:
    - targetPort: 5601
      port: 5601
      nodePort: 30016
  selector:
    app: myapp
For reference, below is the docker-compose config that I am trying to replicate on Kubernetes:
version: "2.2"
services:
  elasticsearch:
    image: myelasticsearch
    container_name: myelasticsearch-container
    restart: always
    hostname: 'es01.docker.local'
    ports:
      - '9200:9200'
      - '9300:9300'
    volumes:
      - myVolume:/home/newuser/
    environment:
      - discovery.type=single-node
  kibana:
    depends_on:
      - elasticsearch
    image: mykibana
    container_name: mykibana-container
    restart: always
    ports:
      - '5601:5601'
    volumes:
      - myVolume:/home/newuser/
    environment:
      ELASTICSEARCH_URL: http://es01:9200
      ELASTICSEARCH_HOSTS: http://es01:9200
volumes:
  myVolume:
networks:
  myNetwork:
ES Pod description:
% kubectl describe pod/pod-elasticsearch
Name: pod-elasticsearch
Namespace: default
Priority: 0
Node: docker-desktop/192.168.65.3
Start Time: Sun, 10 Jan 2021 23:06:18 -0800
Labels: app=myapp
Annotations: <none>
Status: Running
IP: 10.x.0.yy
IPs:
IP: 10.x.0.yy
In Kubernetes, Pods/Deployments/DaemonSets... in the same cluster can communicate with each other with no problem because the cluster has a flat network architecture. One way for these resources to call each other directly is by the name of the Kubernetes service of each resource.
For example, any resource in the cluster can call your Kibana app directly by the service name you gave it: mykibana-service.name-of-namespace.
So for the Kibana pod to communicate with Elasticsearch, it can use http://name-of-service-of-elasticsearch.name-of-namespace:9200. The namespace is default if you don't specify one when you create your service, so: http://name-of-service-of-elasticsearch.default:9200 or http://name-of-service-of-elasticsearch:9200.
The concern you raised about which type of resource to create (Pod, Deployment, DaemonSet or StatefulSet) does not matter for these resources to communicate with each other.
If you are having problems converting docker-compose to manifest files, you can start with Kompose: run kompose convert in the directory where your docker-compose file is located.
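For instance, assuming the compose file is called docker-compose.yml and you run the command from the directory that contains it:
kompose convert -f docker-compose.yml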
Here is a sample:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: elasticsearch
  name: elasticsearch
  namespace: default
spec:
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
        - image: myelasticsearch:yourtag # fix this
          name: elasticsearch
          ports:
            - containerPort: 9200
            - containerPort: 9300
          volumeMounts:
            - mountPath: /home/newuser/
              name: my-volume
      volumes:
        - name: my-volume
          emptyDir: {} # I wouldn't use emptyDir
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: elasticsearch
  name: elasticsearch
  namespace: default
spec:
  ports:
    - port: 9200
      name: "9200"
      targetPort: 9200
    - port: 9300
      name: "9300"
      targetPort: 9300
  selector:
    app: elasticsearch
  type: ClusterIP # you don't need to expose your service publicly
#####################################
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: kibana
  name: kibana
  namespace: default
spec:
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
        - env:
            - name: ELASTICSEARCH_URL
              value: http://elasticsearch:9200/ # elasticsearch is the same as the Service resource name
            - name: ELASTICSEARCH_HOSTS
              value: http://elasticsearch:9200
          image: mykibana:yourtagname # fix this
          name: kibana
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: kibana
  name: kibana
  namespace: default
spec:
  ports:
    - port: 5601
      protocol: TCP
      targetPort: 5601
  selector:
    app: kibana
  type: NodePort
You can choose what is adequate for your app: for example, for Elasticsearch you can use a StatefulSet or a Deployment, and you can use a Deployment for Kibana. You can also change the type of volume.
Also, the myNetwork that you created in docker-compose can be translated into a NetworkPolicy, with which you can isolate your resources (for example, an isolated mynetwork namespace), because these resources are not isolated by default if they are created in the same cluster.
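A minimal sketch of such a NetworkPolicy, reusing the app: elasticsearch and app: kibana labels from the sample above, might look like this (it only lets Kibana pods reach Elasticsearch on port 9200):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-kibana-to-elasticsearch
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: elasticsearch
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: kibana
      ports:
        - protocol: TCP
          port: 9200
Keep in mind that NetworkPolicies only take effect if the cluster's network plugin enforces them.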
Hope I helped
If you want to deploy Elasticsearch and Kibana in Kubernetes the usual way, then you have to take care of some core Elasticsearch cluster configuration like:
cluster.initial_master_nodes (added in 7.0)
network.host
network.publish_host
You would also have to carefully set up network.host so that it remains the same even after accidental pod restarts.
While deploying Kibana you need to provide the Elasticsearch service and also manually configure the SSL certificates if Elasticsearch has SSL enabled.
So to install the Elastic Stack on Kubernetes, you should probably prefer Elastic Cloud on Kubernetes (ECK). The documentation provided by Elastic is easy to understand.
Elastic Cloud on Kubernetes (ECK) uses Kubernetes Operators to make installation easier and it automatically takes care of core cluster configuration.
ECK installation will create a default user called "elastic" and you can retrieve its password from secrets. It also creates self-signed certificates which can be found in secrets.
For deploying Kibana you can just provide "elasticsearchRef" in your YAML file and it will automatically configure the Elasticsearch endpoints. You can use the default "elastic" user to log in to Kibana.
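As a rough sketch (assuming the ECK operator is already installed; the name quickstart and the version are only illustrative), the Elasticsearch and Kibana resources wired together with elasticsearchRef could look like:
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.10.0
  nodeSets:
    - name: default
      count: 1
---
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 7.10.0
  count: 1
  elasticsearchRef:
    name: quickstart
The password for the default "elastic" user can then be read from the quickstart-es-elastic-user secret.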

unable to mount a specific directory from couchdb pod kubernetes

Hi, I am trying to mount a directory from the pod where couchdb is running. The directory is /opt/couchdb/data, and for mounting in Kubernetes I am using this config for the deployment.
apiVersion: v1
kind: Service
metadata:
  name: couchdb0-peer0org1
spec:
  ports:
    - port: 5984
      targetPort: 5984
  type: NodePort
  selector:
    app: couchdb0-peer0org1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: couchdb0-peer0org1
spec:
  selector:
    matchLabels:
      app: couchdb0-peer0org1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: couchdb0-peer0org1
    spec:
      containers:
        - image: hyperledger/fabric-couchdb
          imagePullPolicy: IfNotPresent
          name: couchdb0
          env:
            - name: COUCHDB_USER
              value: admin
            - name: COUCHDB_PASSWORD
              value: admin
          ports:
            - containerPort: 5984
              name: couchdb0
          volumeMounts:
            - name: datacouchdbpeer0org1
              mountPath: /opt/couchdb/data
              subPath: couchdb0
      volumes:
        - name: datacouchdbpeer0org1
          persistentVolumeClaim:
            claimName: worker1-incoming-volumeclaim
So by applying these deployments, I always get this result for the pods:
couchdb0-peer0org1-b89b984cf-7gjfq 0/1 CrashLoopBackOff 1 9s
couchdb0-peer0org2-86f558f6bb-jzrwf 0/1 CrashLoopBackOff 1 9s
But here is the strange thing: if I change the mounted directory from /opt/couchdb/data to /var/lib/couchdb, then it works fine. The issue is that I have to store the data for the couchdb database in a stateful manner.
Edit your /etc/exports with the following content:
"path/exported/directory *(rw,sync,no_subtree_check,no_root_squash)"
and then restart the NFS server:
sudo /etc/init.d/nfs-kernel-server restart
When no_root_squash is used, remote root users are able to change any file on the shared file system. This is a quick solution, but it has some security concerns.
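If the claim from the question (worker1-incoming-volumeclaim) is meant to bind to that NFS export, a rough sketch of a matching PersistentVolume could look like this (server address, path and size are placeholders):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: worker1-incoming-volume
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: <nfs-server-ip>
    path: /path/exported/directory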

Multiple K8S containers connecting to Google Cloud SQL through proxy

I would like to connect my Kubernetes cluster to Google Cloud SQL.
I have at least 10 different deployed pods which presently connect to MySQL [docker image deployed to k8s] using a JDBC url + username/password.
Is it possible to use a single instance of the Google Cloud SQL Proxy and connect all the pods through this proxy to the Cloud SQL database? Ideally I would like to replace the mysql running in the container with the proxy.
I would prefer not having to run the proxy inside each deployment. The only samples I found seem to indicate the proxy needs to be declared in each deployment.
I found a solution.
Deploy the proxy with the YAML below, and expose the deployment as a service. Most importantly, make the proxy listen on 0.0.0.0 instead of the default 127.0.0.1. All the secrets are set up as per the Google Cloud SQL documentation.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  template:
    metadata:
      name: mysql
      labels:
        name: mysql
    spec:
      containers:
        - image: b.gcr.io/cloudsql-docker/gce-proxy:1.05
          name: cloudsql-proxy
          command: ["/cloud_sql_proxy", "--dir=/cloudsql",
                    "-instances=MYSQL:ZONE:DATABASE_INSTANCE=tcp:0.0.0.0:3306",
                    "-credential_file=/secrets/cloudsql/credentials.json"]
          volumeMounts:
            - name: cloudsql-oauth-credentials
              mountPath: /secrets/cloudsql
              readOnly: true
            - name: ssl-certs
              mountPath: /etc/ssl/certs
          ports:
            - containerPort: 3306
              name: mysql
      volumes:
        - name: cloudsql-oauth-credentials
          secret:
            secretName: cloudsql-oauth-credentials
        - name: ssl-certs
          hostPath:
            path: /etc/ssl/certs
The solution is slightly more expensive than having the proxy in the same deployment as the client software, since there is an extra TCP connection.
However there are many benefits:
Much simpler and doesn't require modifying existing K8S deployment files
Allows switching the implementation to a MySQL Docker container or using the Google Cloud SQL proxy without any modifications to the client configuration.
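For example, assuming the proxy Deployment above is exposed through a Service named mysql (matching the Deployment name), a client pod's JDBC URL would simply point at that Service; the database name below is a placeholder:
jdbc:mysql://mysql:3306/<database-name>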
You can create a deployment and a service to expose the cloudsql proxy to other pods like so:
apiVersion: v1
kind: Service
metadata:
  name: cloudsqlproxy
spec:
  ports:
    - port: 3306
      targetPort: database-port
  selector:
    app: cloudsqlproxy
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: cloudsqlproxy
spec:
  template:
    metadata:
      labels:
        app: cloudsqlproxy
    spec:
      volumes:
        - name: service-account-token
          secret:
            secretName: service-account-token
      containers:
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.11
          imagePullPolicy: Always
          command:
            - /cloud_sql_proxy
            - -instances=<project>:<cloudsqlinstance>=tcp:0.0.0.0:3306
            - -credential_file=/secrets/cloudsql/credentials.json
          ports:
            - name: database-port
              containerPort: 3306
          volumeMounts:
            - name: service-account-token
              mountPath: /secrets/cloudsql
              readOnly: true
So within any of your pods, your database's MYSQL_HOST:MYSQL_PORT will be cloudsqlproxy:3306.
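For instance, a client container could be pointed at the proxy with plain environment variables (the variable names follow the ones mentioned above):
env:
  - name: MYSQL_HOST
    value: cloudsqlproxy
  - name: MYSQL_PORT
    value: "3306"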
For multiple databases through the same proxy, you'd have the same deployment structure for the proxy, except that you will now expose 2 ports from the pod, like so:
apiVersion: extensions/v1beta1
...
spec:
  template:
    ...
    spec:
      volumes:
        ...
      containers:
        - name: cloudsql-proxy
          ...
          ports:
            - name: database-port1
              containerPort: 3306
            - name: database-port2
              containerPort: 3307
          ...
Then you'd create 2 services for discovery on those ports, like so:
apiVersion: v1
kind: Service
metadata:
  name: cloudsqlproxy-db1
spec:
  ports:
    - port: 3306
      targetPort: database-port1
  selector:
    app: cloudsqlproxy
---
apiVersion: v1
kind: Service
metadata:
  name: cloudsqlproxy-db2
spec:
  ports:
    - port: 3306
      targetPort: database-port2
  selector:
    app: cloudsqlproxy
So, with both services set to port 3306, you can connect to each database on that port:
mysql --host=cloudsqlproxy-db1 --port=3306 ...
mysql --host=cloudsqlproxy-db2 --port=3306 ...
Reference: https://github.com/GoogleCloudPlatform/cloudsql-proxy/blob/master/Kubernetes.md
With Google "Private IP" the cloud proxy is now irrelevant!