Unable to connect to Cockroach pod in Kubernetes

I am developing a simple web app with a web service and a persistence layer. The persistence layer is CockroachDB. I am trying to deploy my app with a single command:
kubectl apply -f my-app.yaml
The app deploys successfully. However, when the backend has to store something in the database, the following error appears:
dial tcp: lookup web-service-cockroach on 192.168.65.1:53: no such host
When I start my app I provide the following connection string for CockroachDB, and the connection succeeds, but when I try to store something in the database the above error appears:
postgresql://root@web-service-db:26257/defaultdb?sslmode=disable
For some reason the web pod cannot talk to the db pod. My whole configuration is:
# Service for web application
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: web-service
  type: NodePort
  ports:
    - protocol: TCP
      port: 8080
      targetPort: http
      nodePort: 30103
  externalIPs:
    - 192.168.1.9 # <- my local ip
---
# Deployment of web app
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-service
spec:
  selector:
    matchLabels:
      app: web-service
  replicas: 1
  template:
    metadata:
      labels:
        app: web-service
    spec:
      hostNetwork: true
      containers:
        - name: web-service
          image: my-local-img:latest
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 8080
              hostPort: 8080
          env:
            - name: DB_CONNECT_STRING
              value: "postgresql://root@web-service-db:26257/defaultdb?sslmode=disable"
---
### Kubernetes official doc PersistentVolume
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cockroach-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp/my-local-volume"
---
### Kubernetes official doc PersistentVolumeClaim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cockroach-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
---
# Cockroach used by web-service
apiVersion: v1
kind: Service
metadata:
  name: web-service-cockroach
  labels:
    app: web-service-cockroach
spec:
  selector:
    app: web-service-cockroach
  type: NodePort
  ports:
    - protocol: TCP
      port: 26257
      targetPort: 26257
      nodePort: 30104
---
# Cockroach stateful set used to deploy locally
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web-service-cockroach
spec:
  serviceName: web-service-cockroach
  replicas: 1
  selector:
    matchLabels:
      app: web-service-cockroach
  template:
    metadata:
      labels:
        app: web-service-cockroach
    spec:
      volumes:
        - name: cockroach-pv-storage
          persistentVolumeClaim:
            claimName: cockroach-pv-claim
      containers:
        - name: web-service-cockroach
          image: cockroachdb/cockroach:latest
          command:
            - /cockroach/cockroach.sh
            - start
            - --insecure
          volumeMounts:
            - mountPath: "/tmp/my-local-volume"
              name: cockroach-pv-storage
          ports:
            - containerPort: 26257
After deployment everything looks good.
kubectl get service
NAME                    TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)           AGE
kubernetes              ClusterIP   10.96.0.1      <none>        443/TCP           50m
web-service             NodePort    10.111.85.64   192.168.1.9   8080:30103/TCP    6m17s
web-service-cockroach   NodePort    10.96.42.121   <none>        26257:30104/TCP   6m8s
kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
web-service-6cc74b5f54-jlvd6   1/1     Running   0          24m
web-service-cockroach-0        1/1     Running   0          24m
Thanks in advance!

Looks like you have a problem with DNS.
dial tcp: lookup web-service-cockroach on 192.168.65.1:53: no such host
The address 192.168.65.1 does not look like a kube-dns service IP.
This would be explained if you were using the host network, and indeed you are.
When using hostNetwork: true, the DNS server the pod uses by default is the host's DNS server, which is never kube-dns.
To solve it, set:
spec:
  dnsPolicy: ClusterFirstWithHostNet
This makes the pod use the cluster DNS server.
Have a look at the Kubernetes documentation on the Pod's DNS Policy for more information.
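For example, applied to the web-service Deployment above, the pod template spec would then look roughly like this (a minimal sketch; only the dnsPolicy line is new):
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet  # use the cluster DNS (kube-dns/CoreDNS) even with hostNetwork
      containers:
        - name: web-service
          image: my-local-img:latest
          imagePullPolicy: IfNotPresent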

Related

How can I expose my PostgreSQL pods with a LoadBalancer service?

I set up 1 master node and 2 worker nodes on a bare metal server. I deployed my PostgreSQL with 3 replicas. This is my deployment file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  selector:
    matchLabels:
      app: postgres
  replicas: 3
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:latest
          imagePullPolicy: "IfNotPresent"
          envFrom:
            - configMapRef:
                name: postgres-config
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgredb
      volumes:
        - name: postgredb
          persistentVolumeClaim:
            claimName: postgres-pv-claim
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv-volume
  labels:
    type: local
    app: postgres
spec:
  storageClassName: standard
  capacity:
    storage: 15Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/mnt/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-pv-claim
  labels:
    app: postgres
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 15Gi
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  labels:
    app: postgres
data:
  POSTGRES_DB: postgresdb
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: root
---
I also followed the MetalLB installation guide https://metallb.universe.tf/installation/ and set up a layer 2 load balancer, which is running fine; I can even expose an nginx pod with this kind of service.
As you can see here.
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)        AGE
kubernetes   ClusterIP      10.96.0.1       <none>         443/TCP        4h28m
nginx        LoadBalancer   10.107.29.158   153.10.19.35   80:30703/TCP   162m
These are my running pods
NAME                        READY   STATUS    RESTARTS   AGE
nginx-76d6c9b8c-lrljz       1/1     Running   0          3h29m
postgres-7dff8d6d74-8mlnt   1/1     Running   0          136m
postgres-7dff8d6d74-9zxsk   1/1     Running   0          136m
postgres-7dff8d6d74-xzkkx   1/1     Running   0          136m
What Issue Do I face?
When I try to expose the PostgreSQL pods with a LoadBalancer service, I am not able to connect; the server is not reachable.
I tried to expose them as follows:
kubectl expose deploy postgres --port 30432 --type LoadBalancer
I also tried to create a YAML file for this service, still without success.
kind: Service
apiVersion: v1
metadata:
  name: postgres-svc
  labels:
    app: postgres
spec:
  type: LoadBalancer
  ports:
    - port: 5432
      targetPort: 30432
  type: LoadBalancer
  selector:
    metallb-service: postgres
What Do I expect?
I want to expose my pods to the external network with this LoadBalancer service so that all new data is updated in all 3 replicas. Can you please help me fix my service.yaml file?
I will be very thankful.
You don't specify a port in your postgres container.
With kubectl expose you should specify the target port:
kubectl expose deploy postgres --port 30432 --target-port 5432 --type LoadBalancer
In your YAML you have to switch ports:
kind: Service
apiVersion: v1
metadata:
  name: postgres-svc
  labels:
    app: postgres
spec:
  type: LoadBalancer
  ports:
    - port: 30432
      targetPort: 5432
  selector:
    app: postgres
The selector was also wrong; it has to match the labels on the pod.
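To confirm the Service now actually selects the postgres pods, you can check its endpoints; an empty list means the selector still doesn't match the pod labels:
kubectl get endpoints postgres-svc
Once MetalLB assigns an external IP, you should be able to reach PostgreSQL on port 30432 of that IP, for example with the credentials from the postgres-config ConfigMap (the external IP below is just a placeholder):
psql "postgresql://postgres:root@<EXTERNAL-IP>:30432/postgresdb"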

MongoDB microservice k8s persistent volume claim not persisting data

I have several microservices, each one with its own mongodb deployment. I would like to start with getting my auth service working with a persistent volume. I have watched courses where postgresql is used and read a lot in the kubernetes docs but am having trouble getting this to work for mongodb.
When I run skaffold dev the PVC is created with no errors. kubectl shows the PVC is in Bound status, and running describe on the PVC shows my mongo deployment as the user.
However, when I visit my client service in the browser, I can sign up, log out, and sign in again with no problem; but if I restart skaffold, which deletes and recreates the containers, my data is gone and I have to sign up again.
Here are my files
auth-mongo-depl.yaml
# auth-mongo service base deployment configuration
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-mongo-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth-mongo
  template:
    metadata:
      labels:
        app: auth-mongo
    spec:
      volumes:
        - name: auth-mongo-data
          persistentVolumeClaim:
            claimName: auth-mongo-pvc
      containers:
        - name: auth-mongo
          image: mongo
          ports:
            - containerPort: 27017
              name: 'auth-mongo-port'
          volumeMounts:
            - name: auth-mongo-data
              mountPath: '/data/db'
---
# ClusterIp Service
apiVersion: v1
kind: Service
metadata:
  name: auth-mongo-ip-srv
spec:
  selector:
    app: auth-mongo
  type: ClusterIP
  ports:
    - name: auth-mongo-db
      protocol: TCP
      port: 27017
      targetPort: 27017
---
# Persistent Volume Claim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: auth-mongo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
auth-depl.yaml
# auth service base deployment configuration
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
        - name: auth
          image: isimmons33/ticketing-auth
          env:
            - name: MONGO_URI
              value: 'mongodb://auth-mongo-ip-srv:27017/auth'
            - name: JWT_KEY
              valueFrom:
                secretKeyRef:
                  name: jwt-secret
                  key: JWT_KEY
---
# ClusterIp Service
apiVersion: v1
kind: Service
metadata:
  name: auth-ip-srv
spec:
  selector:
    app: auth
  type: ClusterIP
  ports:
    - name: auth
      protocol: TCP
      port: 3000
      targetPort: 3000
api/users portion of my ingress-srv.yaml
- path: /api/users/?(.*)
  pathType: Prefix
  backend:
    service:
      name: auth-ip-srv
      port:
        number: 3000
My client fires off a post request to /api/users/auth with which I can successfully signup or signin as long as I don't restart skaffold.
I even used kubectl to get a shell into my mongo deployment and queried to see the new user account there as it should be. But of course it is gone after restarting skaffold.
I am on Windows 10 but am running everything through WSL2 (Ubuntu)
Thanks for any help
It is highly recommended to use StatefulSets for running databases in Kubernetes. With a Deployment, if your pod crashes for some reason and a new one is created, it is not guaranteed that the new pod will be attached to the same PV, hence you lose the data.
Have a look at this: https://kubernetes.io/blog/2017/01/running-mongodb-on-kubernetes-with-statefulsets
The solution, as pointed out by raghu_manne, was to use StatefulSets. But because the link posted is extremely old, here is the full solution that worked for me.
Also, here is a YouTube video I just found that explains StatefulSet and volumeClaimTemplates quite well:
How to run MongoDB with StatefulSets in Kubernetes
auth-mongo-depl.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: auth-mongo-depl
spec:
  replicas: 1
  serviceName: auth-mongo
  selector:
    matchLabels:
      app: auth-mongo
  template:
    metadata:
      labels:
        app: auth-mongo
    spec:
      containers:
        - name: auth-mongo
          image: mongo
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: auth-mongo-data
              mountPath: /data/db
  volumeClaimTemplates:
    - metadata:
        name: auth-mongo-data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 500Mi
---
# ClusterIp Service
apiVersion: v1
kind: Service
metadata:
  name: auth-mongo-ip-srv
spec:
  selector:
    app: auth-mongo
  type: ClusterIP
  ports:
    - name: auth-mongo-db
      protocol: TCP
      port: 27017
      targetPort: 27017

Kubernetes: access from outside

I have a Flask app running on a remote Kubernetes cluster, and when I access it from inside the cluster it works. However, when I try to access it from outside, nothing happens.
I'm using kind to create the cluster. Locally I can access the Flask app via the node's IP address.
I don't know how to access the service from the outside. Do I need to do something else to be able to access the app?
apiVersion: v1
kind: Service
metadata:
  name: iweblens-svc
  labels:
    app: flaskapp
spec:
  type: NodePort
  ports:
    - port: 5000
      targetPort: 5000
      protocol: TCP
  selector:
    app: flaskapp
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
kubeadmConfigPatches:
- |
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  evictionHard:
    nodefs.available: "0%"
kubeadmConfigPatchesJSON6902:
- group: kubeadm.k8s.io
  version: v1beta2
  kind: ClusterConfiguration
  patch: |
    - op: add
      path: /apiServer/certSANs/-
      value: my-hostname
nodes:
- role: control-plane
- role: worker
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flaskapp
  labels:
    app: flaskapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flaskapp
  template:
    metadata:
      labels:
        app: flaskapp
    spec:
      containers:
        - name: flaskapp
          image: myimage
          imagePullPolicy: Never
          ports:
            - containerPort: 5000
          resources:
            limits:
              cpu: "0.5"
            requests:
              cpu: "0.5"
Create a NodePort or LoadBalancer service (the latter works only on supported cloud providers) to expose the deployment outside the cluster.
Here is a guide on how to use a NodePort service.
To be able to access an app via a NodePort service, the node IP needs to be reachable (i.e. it should be on the same network) from the system you are accessing it from.
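If the node IP is not reachable from your machine (with kind the nodes are Docker containers, so on Windows/macOS it usually isn't), one common workaround is to map the NodePort to the host in the kind cluster config and recreate the cluster. This is a sketch under the assumption that you also pin the service to a fixed nodePort such as 30500; the port number is arbitrary:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  # forward the NodePort to the host so http://localhost:30500 reaches the service
  - containerPort: 30500
    hostPort: 30500
    protocol: TCP
- role: worker
The iweblens-svc Service would then need nodePort: 30500 added under its port entry.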

How to set dynamic IP to property file?

I have deployed 2 pods which need to talk to another pod (let's say Pod A).
Pod A requires the IP addresses of the services of the deployed pods, so I need to set those IP addresses in the config property file needed for Pod A.
Since the IP addresses are dynamic (i.e. if a pod crashes they change), I need to set them dynamically.
Currently I deploy the 2 pods, run
kubectl get ep
and then set those IP addresses in the config property file, build the Docker image, push it, and use that image for the deployment.
This is my deployment and svc file, in which the image djtijare/a2ipricing refers to the config file:
apiVersion: v1
kind: Service
metadata:
  name: spring-boot-demo-pricing
spec:
  ports:
    - name: spring-boot-pricing
      port: 8084
      targetPort: 8084
  selector:
    app: spring-boot-demo-pricing
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: spring-boot-demo-pricing
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: spring-boot-demo-pricing
    spec:
      containers:
        - name: spring-boot-demo-pricing
          image: djtijare/a2ipricing:v1
          imagePullPolicy: IfNotPresent
          # envFrom:
          #   - configMapRef:
          #       name: spring-boot-demo-config-map
          resources:
            requests:
              cpu: 100m
              memory: 1Gi
          ports:
            - containerPort: 8084
      nodeSelector:
        disktype: ssd
So how do I set the IPs of those 2 pods dynamically in the config file, then build and push the Docker image?
I think you should consider using headless Services.
Sometimes you don't need or want load-balancing and a single service IP. In this case, you can create what are termed "headless" Services, by explicitly specifying "None" for the cluster IP (.spec.clusterIP).
You can use a headless Service to interface with other service discovery mechanisms, without being tied to Kubernetes' implementation. For example, you could implement a custom Operator built upon this API.
For such Services, a cluster IP is not allocated, kube-proxy does not handle these services, and there is no load balancing or proxying done by the platform for them. How DNS is automatically configured depends on whether the service has selectors defined.
For your example, if you set the Service's spec.clusterIP to None, you can run nslookup -type=A spring-boot-demo-pricing, which will show you the IPs of the pods attached to this service.
/ # nslookup -type=A spring-boot-demo-pricing
Server: 10.11.240.10
Address: 10.11.240.10:53
Name: spring-boot-demo-pricing.default.svc.cluster.local
Address: 10.8.2.20
Name: spring-boot-demo-pricing.default.svc.cluster.local
Address: 10.8.1.12
Name: spring-boot-demo-pricing.default.svc.cluster.local
Address: 10.8.1.13
And here is the YAML I've used:
apiVersion: v1
kind: Service
metadata:
  name: spring-boot-demo-pricing
  labels:
    app: spring-boot-demo-pricing
spec:
  ports:
    - name: spring-boot-pricing
      port: 8084
      targetPort: 8084
  clusterIP: None
  selector:
    app: spring-boot-demo-pricing
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spring-boot-demo-pricing
  labels:
    app: spring-boot-demo-pricing
spec:
  replicas: 3
  selector:
    matchLabels:
      app: spring-boot-demo-pricing
  template:
    metadata:
      labels:
        app: spring-boot-demo-pricing
    spec:
      containers:
        - name: spring-boot-demo-pricing
          image: djtijare/a2ipricing:v1
          imagePullPolicy: IfNotPresent
          # envFrom:
          #   - configMapRef:
          #       name: spring-boot-demo-config-map
          resources:
            requests:
              cpu: 100m
              memory: 1Gi
          ports:
            - containerPort: 8084
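To double-check from outside a pod that the headless Service is tracking the individual pod IPs, you can also list its endpoints (assuming the default namespace):
kubectl get endpoints spring-boot-demo-pricing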

Kubernetes nodeport not working

I've created a YAML file with three images in one pod (they need to communicate with each other over 127.0.0.1). It seems that it's all working. I've defined a NodePort in the YAML file.
There is one deployment defined, applications, which contains three images:
contacts-db (A MySQL database)
front-end (An Angular website)
net-core (An API)
I've defined three services, one for each container, each with type NodePort to access it.
So I retrieved the services to get the port numbers:
NAME          CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
contacts-db   10.103.67.74     <nodes>       3306:30241/TCP   1d
front-end     10.107.226.176   <nodes>       80:32195/TCP     1d
net-core      10.108.146.87    <nodes>       5000:30245/TCP   1d
And when I navigate in my browser to http://<node-ip>:32195 it just keeps loading; it's not connecting. This is the complete YAML file:
---
apiVersion: v1
kind: Namespace
metadata:
  name: three-tier
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: applications
  labels:
    name: applications
  namespace: three-tier
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: applications
    spec:
      containers:
        - name: contacts-db
          image: mysql/mysql-server #TBD
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: quintor
            - name: MYSQL_DATABASE
              value: quintor #TBD
          ports:
            - name: mysql
              containerPort: 3306
        - name: front-end
          image: xanvier/angularfrontend #TBD
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
          ports:
            - containerPort: 80
        - name: net-core
          image: xanvier/contactsapi #TBD
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
          ports:
            - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: contacts-db
  labels:
    name: contacts-db
  namespace: three-tier
spec:
  type: NodePort
  ports:
    # the port that this service should serve on
    - port: 3306
      targetPort: 3306
  selector:
    name: contacts-db
---
apiVersion: v1
kind: Service
metadata:
  name: front-end
  labels:
    name: front-end
  namespace: three-tier
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80 #nodePort: 30001
  selector:
    name: front-end
---
apiVersion: v1
kind: Service
metadata:
  name: net-core
  labels:
    name: net-core
  namespace: three-tier
spec:
  type: NodePort
  ports:
    - port: 5000
      targetPort: 5000 #nodePort: 30001
  selector:
    name: net-core
---
The selector of a Service matches against the labels of your pods. In your case the defined selectors point to the container names, which matches nothing when selecting pods.
You'd have to redefine your services to use the pod's label as the selector, or split your containers into different Deployments / Pods.
To see whether a selector defined for a service would work, you can check it with:
kubectl get pods -l key=value
If the result is empty, your services will run into the void too.
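For example, here is a sketch of the front-end Service changed to select the label that the Deployment's pod template actually sets (name: applications); the same change applies to the other two Services, since all three containers live in that one pod:
apiVersion: v1
kind: Service
metadata:
  name: front-end
  namespace: three-tier
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
  selector:
    name: applications   # matches the pod template label, not the container name
With that in place, kubectl get pods -n three-tier -l name=applications should return the applications pod.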