kubectl - Error response from daemon: error while creating mount source path - kubernetes

I'm trying to install the SAP HANA Express docker image on a Kubernetes node in Google Cloud Platform, as per the guide https://developers.sap.com/tutorials/hxe-k8s-advanced-analytics.html#7f5c99da-d511-479b-8745-caebfe996164. However, during step 7, "Deploy your containers and connect to them", I'm not getting the expected result.
I'm executing the command kubectl create -f hxe.yaml and here is the YAML file I'm using:
kind: ConfigMap
apiVersion: v1
metadata:
  creationTimestamp: 2018-01-18T19:14:38Z
  name: hxe-pass
data:
  password.json: |+
    {"master_password" : "HXEHana1"}
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: persistent-vol-hxe
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 150Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/hxe_pv"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: hxe-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hxe
  labels:
    name: hxe
spec:
  selector:
    matchLabels:
      run: hxe
      app: hxe
      role: master
      tier: backend
  replicas: 1
  template:
    metadata:
      labels:
        run: hxe
        app: hxe
        role: master
        tier: backend
    spec:
      initContainers:
        - name: install
          image: busybox
          command: [ 'sh', '-c', 'chown 12000:79 /hana/mounts' ]
          volumeMounts:
            - name: hxe-data
              mountPath: /hana/mounts
      volumes:
        - name: hxe-data
          persistentVolumeClaim:
            claimName: hxe-pvc
        - name: hxe-config
          configMap:
            name: hxe-pass
      imagePullSecrets:
        - name: docker-secret
      containers:
        - name: hxe-container
          image: "store/saplabs/hanaexpress:2.00.045.00.20200121.1"
          ports:
            - containerPort: 39013
              name: port1
            - containerPort: 39015
              name: port2
            - containerPort: 39017
              name: port3
            - containerPort: 8090
              name: port4
            - containerPort: 39041
              name: port5
            - containerPort: 59013
              name: port6
          args: [ "--agree-to-sap-license", "--dont-check-system", "--passwords-url", "file:///hana/hxeconfig/password.json" ]
          volumeMounts:
            - name: hxe-data
              mountPath: /hana/mounts
            - name: hxe-config
              mountPath: /hana/hxeconfig
        - name: sqlpad-container
          image: "sqlpad/sqlpad"
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: hxe-connect
  labels:
    app: hxe
spec:
  type: LoadBalancer
  ports:
    - port: 39013
      targetPort: 39013
      name: port1
    - port: 39015
      targetPort: 39015
      name: port2
    - port: 39017
      targetPort: 39017
      name: port3
    - port: 39041
      targetPort: 39041
      name: port5
  selector:
    app: hxe
---
apiVersion: v1
kind: Service
metadata:
  name: sqlpad
  labels:
    app: hxe
spec:
  type: LoadBalancer
  ports:
    - port: 3000
      targetPort: 3000
      protocol: TCP
      name: sqlpad
  selector:
    app: hxe
I'm also using the latest version of the HANA Express Edition docker image, store/saplabs/hanaexpress:2.00.045.00.20200121.1, which you can see available here: https://hub.docker.com/_/sap-hana-express-edition/plans/f2dc436a-d851-4c22-a2ba-9de07db7a9ac?tab=instructions
The error I'm getting is the one in the title (screenshot omitted): Error response from daemon: error while creating mount source path.
Any thoughts on what could be wrong?
Best regards and happy new year to everybody.

Thanks to Mahboob's suggestion I can now start the pods (partially) and the issue is no longer popping up at the "busybox" init container stage. The problem was that I was using a Container-Optimized OS image for the node pool while the required one is Ubuntu. If you are facing a similar issue, double-check the image flavor you choose when creating the node pool.
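For reference, a node pool with the Ubuntu image flavor can be created on GKE with something like the following (cluster and pool names are placeholders):

gcloud container node-pools create ubuntu-pool \
  --cluster=my-cluster \
  --image-type=UBUNTU \
  --num-nodes=1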
However, I now have a different issue: the pods are starting (both the HANA container and the one for sqlpad), but the sqlpad container crashes at some point after starting and the pod gets stuck in the CrashLoopBackOff state. As the (omitted) screenshot showed, the pod sits in CrashLoopBackOff with only 1/2 containers started, and then suddenly both are running.
I'm not hitting the right spot to solve this problem since I'm a newcomer to the Kubernetes and Docker world. Hope some of you can shed some light on this.
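For anyone debugging the same thing, a generic way to inspect a container stuck in CrashLoopBackOff (the pod name is a placeholder):

kubectl get pods
kubectl describe pod <hxe-pod-name>                          # check the Events section
kubectl logs <hxe-pod-name> -c sqlpad-container --previous   # logs from the crashed attempt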
Best regards.

Related

kubernetes persistent volume overrides the previous volume

We are trying to run a blockchain node with AWS EKS.
We need to maintain the previous blockchain snapshot while updating the node version.
So we used a Kubernetes persistent volume, but the previous volume is just overwritten whenever we kill the previous pod and restart a new one.
We set up the docker image's arguments to store the data at "/data/headless", like this:
--store-path=/data/headless
We also set the volumeMounts to "/disk":
volumeMounts:
  - name: block-data
    mountPath: /disk
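Note that a persistent volume only preserves files written under its mountPath. With the settings above, the node writes its store to /data/headless while the volume is mounted at /disk, so the chain data lands on the pod's ephemeral filesystem and is lost with the pod. A sketch of the aligned mount, reusing the volume name above:

volumeMounts:
  - name: block-data
    mountPath: /data/headless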
and the result shows (screenshot omitted).
We think this volume path is a temporary volume, and we are wondering how to maintain these volumes even when we update the pods.
[We have seen piles of AWS documents, but still don't know where to get started :( ]
How can I update the previous volume and connect it to a new pod without losing any data from the previous one?
Here's our source code.
Volume Claim Yaml File
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nine-claim-3
spec:
  storageClassName: gp2
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
StatefulSet & Service Yaml File
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nine-cluster
  namespace: test-nine-3
  labels:
    app: nine
spec:
  serviceName: nine-cluster
  replicas: 2
  selector:
    matchLabels:
      app: nine
  template:
    metadata:
      labels:
        app: nine
    spec:
      containers:
        - name: nine-container
          image: planetariumhq/ninechronicles-headless:v100260
          ports:
            - name: rpc
              containerPort: 31238
            - name: iceserver
              containerPort: 3478
            - name: peer
              containerPort: 31234
            - name: graphql
              containerPort: 80
          args: [
            "--port=31234",
            "--no-miner",
            "--store-type=rocksdb",
            "--store-path=/data/headless",
            "--graphql-server",
            "--graphql-host=0.0.0.0",
            "--graphql-port=80",
            "--rpc-server",
            "--rpc-remote-server",
            "--rpc-listen-host=0.0.0.0",
            "--rpc-listen-port=31238",
            "--no-cors",
            "--chain-tip-stale-behavior-type=reboot"
          ]
          volumeMounts:
            - name: block-data
              mountPath: /data/headless
      volumes:
        - name: block-data
          persistentVolumeClaim:
            claimName: nine-claim-3
---
apiVersion: v1
kind: Service
metadata:
  name: nine-nlb-services
  namespace: test-nine-3
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  ports:
    - name: graphql
      port: 80
      targetPort: 80
      nodePort: 31003
      protocol: TCP
    - name: rpc
      port: 31238
      targetPort: 31238
      nodePort: 31004
      protocol: TCP
  type: LoadBalancer
  selector:
    app: nine

mongodb microservice k8 persistent volume claim not persisting data

I have several microservices, each one with its own mongodb deployment. I would like to start with getting my auth service working with a persistent volume. I have watched courses where postgresql is used and read a lot in the kubernetes docs but am having trouble getting this to work for mongodb.
When I run skaffold dev the PVC is created with no errors. kubectl shows the PVC is in Bound status, and running describe on the PVC shows my mongo deployment as the user.
However, when I visit my client service in the browser I can sign up, log out, and sign in again with no problem. But if I restart skaffold, so that it deletes and recreates the containers, my data is gone and I have to sign up again.
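A quick way to check the claim and where the data actually lands, using the names from the manifests below (kubectl exec accepts a deployment reference and picks one of its pods):

kubectl get pvc auth-mongo-pvc                        # should be Bound
kubectl exec deploy/auth-mongo-depl -- ls /data/db    # mongo's data files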
Here are my files
auth-mongo-depl.yaml
# auth-mongo service base deployment configuration
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-mongo-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth-mongo
  template:
    metadata:
      labels:
        app: auth-mongo
    spec:
      volumes:
        - name: auth-mongo-data
          persistentVolumeClaim:
            claimName: auth-mongo-pvc
      containers:
        - name: auth-mongo
          image: mongo
          ports:
            - containerPort: 27017
              name: 'auth-mongo-port'
          volumeMounts:
            - name: auth-mongo-data
              mountPath: '/data/db'
---
# ClusterIp Service
apiVersion: v1
kind: Service
metadata:
  name: auth-mongo-ip-srv
spec:
  selector:
    app: auth-mongo
  type: ClusterIP
  ports:
    - name: auth-mongo-db
      protocol: TCP
      port: 27017
      targetPort: 27017
---
# Persistent Volume Claim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: auth-mongo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
auth-depl.yaml
# auth service base deployment configuration
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
        - name: auth
          image: isimmons33/ticketing-auth
          env:
            - name: MONGO_URI
              value: 'mongodb://auth-mongo-ip-srv:27017/auth'
            - name: JWT_KEY
              valueFrom:
                secretKeyRef:
                  name: jwt-secret
                  key: JWT_KEY
---
# ClusterIp Service
apiVersion: v1
kind: Service
metadata:
  name: auth-ip-srv
spec:
  selector:
    app: auth
  type: ClusterIP
  ports:
    - name: auth
      protocol: TCP
      port: 3000
      targetPort: 3000
api/users portion of my ingress-srv.yaml
- path: /api/users/?(.*)
  pathType: Prefix
  backend:
    service:
      name: auth-ip-srv
      port:
        number: 3000
My client fires off a POST request to /api/users/auth, with which I can successfully sign up or sign in as long as I don't restart skaffold.
I even used kubectl to get a shell into my mongo deployment and queried to see the new user account there, as it should be. But of course it is gone after restarting skaffold.
I am on Windows 10 but am running everything through WSL2 (Ubuntu)
Thanks for any help
It is highly recommended to use StatefulSets for running databases in Kubernetes. With a Deployment, if your pod crashes for some reason and a new one is created, it's not guaranteed that the new pod will be attached to the same PV, hence you lose the data.
Have a look on this https://kubernetes.io/blog/2017/01/running-mongodb-on-kubernetes-with-statefulsets
The solution, as pointed out by raghu_manne, was to use StatefulSets. But because the link posted is extremely old, here is the full solution that worked for me.
Also, here is a YouTube video I just found that explains StatefulSet and volumeClaimTemplates quite well.
How to run MongoDB with StatefulSets in Kubernetes
auth-mongo-depl.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: auth-mongo-depl
spec:
  replicas: 1
  serviceName: auth-mongo
  selector:
    matchLabels:
      app: auth-mongo
  template:
    metadata:
      labels:
        app: auth-mongo
    spec:
      containers:
        - name: auth-mongo
          image: mongo
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: auth-mongo-data
              mountPath: /data/db
  volumeClaimTemplates:
    - metadata:
        name: auth-mongo-data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 500Mi
---
# ClusterIp Service
apiVersion: v1
kind: Service
metadata:
  name: auth-mongo-ip-srv
spec:
  selector:
    app: auth-mongo
  type: ClusterIP
  ports:
    - name: auth-mongo-db
      protocol: TCP
      port: 27017
      targetPort: 27017
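With volumeClaimTemplates the PVC is created per pod and is not deleted when the pod or skaffold restarts. It is named after the template plus the pod, so it should show up as something like:

kubectl get pvc
# NAME                                STATUS
# auth-mongo-data-auth-mongo-depl-0   Bound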

kubernetes how do I expose pods to things outside of cluster machine?

I read the following kubernetes docs, which resulted in the following YAMLs to run postgresql & pgadmin in a cluster:
--- pgadmin-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pgadmin-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pgadmin-pod
  template:
    metadata:
      labels:
        app: pgadmin-pod
    spec:
      containers:
        - name: pgadmin-container
          image: dpage/pgadmin4
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 80
          env:
            - name: PGADMIN_DEFAULT_EMAIL
              value: email@example.com
            - name: PGADMIN_DEFAULT_PASSWORD
              value: password
--- pgadmin-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: pgadmin-service
spec:
  type: NodePort
  ports:
    - port: 30000
      targetPort: 80
  selector:
    app: pgadmin-pod
--- postgres-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres-pod
  template:
    metadata:
      labels:
        app: postgres-pod
    spec:
      containers:
        - name: postgres-container
          image: postgres:9.6-alpine
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_DB
              value: database
            - name: POSTGRES_PASSWORD
              value: password
            - name: POSTGRES_USER
              value: username
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgrepvc
      volumes:
        - name: postgrepvc
          persistentVolumeClaim:
            claimName: postgres-pv-claim
--- postgres-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  type: NodePort
  ports:
    - port: 30001
      targetPort: 5432
  selector:
    app: postgres-pod
--- postgres-storage.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv-volume
  labels:
    type: local
    app: postgres
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/mnt/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-pv-claim
  labels:
    app: postgres
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
I then run the following command kubectl create -f ./ which results in the following:
(screenshot of the kubernetes pods / svc's omitted)
Then I try to access pgAdmin on 10.43.225.170:30000 from outside of the cluster, but I get "10.43.225.170 took too long to respond." no matter what I try.
So how do I expose pgAdmin & postgres to the outside world, and is there a way to give them static IPs so I don't have to update the IPs in connection strings each time I redeploy on kubernetes? Or do I have to use a StatefulSet for this?
Problems here:
1. You are trying to reach the node's internal IP 10.43.225.170 instead of the external one.
2. The NodePort service is configured incorrectly, and in addition you are calling the wrong port.
You haven't specified what platform you use. I'm using GKE, so in my case it's easier because external IPs are automatically assigned during cluster node creation. But I had to manually create an ingress firewall rule to allow access from outside to the nodes on the required ports (30000, 30001).
In any case, to be able to use NodePort you should have an external IP address assigned to one of the nodes in the cluster, and a firewall rule that allows ingress traffic to that port.
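On GKE such a rule can be created with something like the following (the rule name is a placeholder):

gcloud compute firewall-rules create allow-nodeports --allow tcp:30000-30001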
Going next: you are trying to call <NodeIP>:spec.ports[*].port.
As per the Type NodePort documentation:
the Service is visible as <NodeIP>:spec.ports[*].nodePort
You need to explicitly specify the nodePort.
I have changed your deployment a bit, and I can access pgAdmin after deploying it and opening the corresponding ports in the firewall.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pgadmin-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pgadmin-pod
  template:
    metadata:
      labels:
        app: pgadmin-pod
    spec:
      containers:
        - name: pgadmin-container
          image: dpage/pgadmin4
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 80
          env:
            - name: PGADMIN_DEFAULT_EMAIL
              value: email@example.com
            - name: PGADMIN_DEFAULT_PASSWORD
              value: password
---
apiVersion: v1
kind: Service
metadata:
  name: pgadmin-service
spec:
  type: NodePort
  ports:
    - nodePort: 30000
      targetPort: 80
      port: 80
  selector:
    app: pgadmin-pod
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres-pod
  template:
    metadata:
      labels:
        app: postgres-pod
    spec:
      containers:
        - name: postgres-container
          image: postgres:9.6-alpine
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_DB
              value: database
            - name: POSTGRES_PASSWORD
              value: password
            - name: POSTGRES_USER
              value: username
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgrepvc
      volumes:
        - name: postgrepvc
          persistentVolumeClaim:
            claimName: postgres-pv-claim
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  type: NodePort
  ports:
    - nodePort: 30001
      targetPort: 5432
      port: 5432
  selector:
    app: postgres-pod
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv-volume
  labels:
    type: local
    app: postgres
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/mnt/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-pv-claim
  labels:
    app: postgres
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
Check:
kubectl apply -f pg_my.yaml
deployment.apps/pgadmin-deployment created
service/pgadmin-service created
service/postgres-service created
persistentvolume/postgres-pv-volume created
persistentvolumeclaim/postgres-pv-claim created
# In my case I take the node external IP from any node in the `kubectl get nodes -o wide` output:
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP
gke-cluster-1-default-pool-*******-***** Ready <none> 20d v1.18.16-gke.502 10.186.0.7 *.*.*.*
curl *.*.*.*:30000
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>Redirecting...</title>
<h1>Redirecting...</h1>
<p>You should be redirected automatically to target URL: /login?next=%2F.

Kubernetes pods refusing connections to each other

I'm trying to implement an ElasticStack in Kubernetes via Minikube. I've barely started, as I'm writing basically everything from scratch to get a better understanding of K8s, and because the YAMLs provided by Elastic don't offer any explanation as to what is done and why, I'm doing my own thing.
The problem I've run into is that my Kibana pod cannot communicate with my ElasticSearch pod, although I've set up the necessary services and ports on my pods.
Where it gets weird is that
kubectl port-forward services/elastic-http 9200
works flawlessly and lets me get information from my ElasticSearch pod. However, when I enter a pod via
kubectl exec -it <pod-name> -- /bin/bash
and try to use curl to get the same information my browser just showed me, the connection is refused and my pods won't talk to one another.
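A generic way to check whether the service actually selects the pods and resolves in-cluster (names as in the manifests below; pod name is a placeholder):

kubectl get endpoints elastic-http       # should list the pod IP and ports
kubectl exec -it <some-pod> -- curl -v http://elastic-http.default.svc:9200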
My configs look as follows.
Kibana.yml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: my-kb
  namespace: default
spec:
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      name: kibana
      labels:
        app: kibana
    spec:
      containers:
        - name: kibana
          image: docker.elastic.co/kibana/kibana:7.3.0
          ports:
            - containerPort: 5601
              name: kibana-web
          volumeMounts:
            - name: kb-conf
              mountPath: /usr/share/kibana/config/kibana.yml
              subPath: kibana.yml
      volumes:
        - name: kb-conf
          configMap:
            name: kibana-config
            items:
              - key: kibana.yml
                path: kibana.yml
---
kind: Service
apiVersion: v1
metadata:
  name: kibana-http
  namespace: default
spec:
  selector:
    app: kibana
  ports:
    - protocol: TCP
      port: 5601
      name: kibana-web
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kibana-config
  namespace: default
data:
  kibana.yml: |
    elasticsearch.hosts: ["http://elastic-http.default.svc:9200"]
ElasticSearch.yml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: elastic-pv
  namespace: default
spec:
  capacity:
    storage: 15Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: elastic-pv-claim
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 15Gi
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: elastic-deploy
  namespace: default
spec:
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      name: elasticsearch
      labels:
        app: elasticsearch
    spec:
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:7.3.0
          ports:
            - containerPort: 9200
              name: elastic-http
              protocol: TCP
            - containerPort: 9300
              name: node-sniffer
              protocol: TCP
          #readinessProbe:
          #  httpGet:
          #    port: 9200
          #  periodSeconds: 5
          volumeMounts:
            - name: elastic-conf
              mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
              subPath: elasticsearch.yml
            - name: elastic-data
              mountPath: /var/data
          securityContext:
            privileged: true
      initContainers:
        - name: sysctl-adj
          image: busybox
          command: ['sysctl', '-w', 'vm.max_map_count=262144']
          securityContext:
            privileged: true
      volumes:
        - name: elastic-data
          persistentVolumeClaim:
            claimName: elastic-pv-claim
        - name: elastic-conf
          configMap:
            name: elastic-config
            items:
              - key: elasticsearch.yml
                path: elasticsearch.yml
---
kind: Service
apiVersion: v1
metadata:
  name: elastic-http
  namespace: default
spec:
  selector:
    app: elasticsearch
  ports:
    - port: 9200
      targetPort: elastic-http
      name: elastic-http
    - port: 9300
      targetPort: node-sniffer
      name: node-finder
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: elastic-config
  namespace: default
data:
  elasticsearch.yml: |
    xpack.security.enabled: false
    node.master: true
    path.data: /var/data
    http.port: 9200
I think you have a ClusterIP service type, and if you want to see it in a browser one of the options is to use the NodePort service type.
You can see more details here
I'm not sure about this part in the service:
targetPort: elastic-http
targetPort: node-sniffer
Could you try to remove them and try again?
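That is, a sketch of the same service with numeric target ports, everything else unchanged:

kind: Service
apiVersion: v1
metadata:
  name: elastic-http
  namespace: default
spec:
  selector:
    app: elasticsearch
  ports:
    - port: 9200
      targetPort: 9200
      name: elastic-http
    - port: 9300
      targetPort: 9300
      name: node-finder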

unable to deploy debezium on minikube

I am new to Kubernetes, and I am trying to integrate kafka with debezium and mysql.
I successfully deployed kafka and mysql on minikube, but once I deploy the debezium yml on minikube it hangs and doesn't respond at all. I then restart minikube, and after all the pods are running, minikube hangs again.
below is my code:
zookeeper service
apiVersion: v1
kind: Service
metadata:
  name: zoo1
  labels:
    app: zookeeper-1
spec:
  ports:
    - name: client
      port: 2181
      protocol: TCP
    - name: follower
      port: 2888
      protocol: TCP
    - name: leader
      port: 3888
      protocol: TCP
  selector:
    app: zookeeper-1
zookeeper deployment:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: zookeeper-deployment-1
spec:
  template:
    metadata:
      labels:
        app: zookeeper-1
    spec:
      containers:
        - name: zoo1
          image: debezium/zookeeper
          ports:
            - containerPort: 2181
          env:
            - name: ZOOKEEPER_ID
              value: "1"
            - name: ZOOKEEPER_SERVER_1
              value: zoo1
kafka service:
apiVersion: v1
kind: Service
metadata:
  name: kafka-service
  labels:
    name: kafka
spec:
  ports:
    - port: 9092
      name: kafka-port
      protocol: TCP
  selector:
    app: kafka
    id: "1"
  type: NodePort
kafka deployment:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: kafka-broker1
spec:
  template:
    metadata:
      labels:
        selector: kafka
        app: kafka
        id: "1"
    spec:
      containers:
        - name: kafka
          image: debezium/kafka
          ports:
            - containerPort: 9092
          env:
            - name: KAFKA_ADVERTISED_PORT
              value: "9092"
            - name: KAFKA_ADVERTISED_HOST_NAME
              value: 192.168.39.47
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: zoo1:2181
            - name: KAFKA_BROKER_ID
              value: "1"
            - name: KAFKA_CREATE_TOPICS
              value: hello-topic:3:3
MySQL persistent volume:
#application/mysql/mysql-pv.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
mysql deployment:
#application/mysql/mysql-deployment.yaml
# this command is for the mysql client: kubectl run -it --rm --image=debezium/example-mysql --restart=Never mysql-client -- mysql -h mysql -pdebezium
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
    - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: extensions/v1beta1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: debezium/example-mysql
          name: mysql
          env:
            # Use secret in real usage
            - name: MYSQL_ROOT_PASSWORD
              value: debezium
            - name: MYSQL_USER
              value: mysqluser
            - name: MYSQL_PASSWORD
              value: mysqlpw
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
Debezium deployment:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: debezium-connect-source
spec:
  selector:
    matchLabels:
      app: debezium-connect-source
  replicas: 1
  template:
    metadata:
      labels:
        app: debezium-connect-source
    spec:
      terminationGracePeriodSeconds: 30
      containers:
        - name: debezium-connect-source
          image: debezium/connect
          env:
            - name: BOOTSTRAP_SERVERS
              value: kafka-service:9092
            - name: GROUP_ID
              value: "1"
            - name: CONFIG_STORAGE_TOPIC
              value: debezium-connect-source_config
            - name: OFFSET_STORAGE_TOPIC
              value: debezium-connect-source_offset
          ports:
            - containerPort: 8083
              name: dm-c-source
When I deploy debezium, the problem starts and minikube responds like this:
$ kubectl get pods
Unable to connect to the server: net/http: TLS handshake timeout
OS: CentOS
minikube version: v0.30.0
I believe this is happening because of a resource crunch on the VM started by minikube.
By default, when you start it with minikube start, it takes only 2 CPUs and 2GB RAM from your system, and looking at your deployments (kafka + mysql + debezium) that might not be enough.
You can increase the CPU and memory allocated to the VM by passing the --cpus and --memory parameters (memory value in MB) to minikube start.
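For example (values are illustrative):

minikube start --cpus 4 --memory 8192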
For more info, run minikube start -h.
I strongly suggest that if you want to set up heavy deployments, you use machines with more resources.
Hope this helps.
You should also set memory limits for the Java-based pods. Older versions of Java see the whole guest memory as their own and will happily consume it completely, and there are at least three JVMs started here.
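A sketch of such a limit on the connect container (the values are illustrative, not tuned):

containers:
  - name: debezium-connect-source
    image: debezium/connect
    resources:
      requests:
        memory: "512Mi"
      limits:
        memory: "1Gi"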