Pods on different Nodes can't talk to each other - Kubernetes - MongoDB

Context:
I am building an application and I am now working on the infrastructure.
The application is built with Java and the persistence layer is MongoDB.
Problem:
If the application is running on the same Node as the persistence layer, everything works fine, but when they are on different Nodes the application cannot communicate with MongoDB.
Here is a screenshot of the Kubernetes Dashboard:
As you can see, two of the application (gateway) pods are running on the same node as Mongo, but the other two are not, and those two cannot find MongoDB.
Here is the mongo-db.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-data
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi
  hostPath:
    path: /home/vitor/seguranca/mongo
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc
spec:
  storageClassName: ""
  accessModes:
    - ReadWriteOnce
  volumeName: mongo-data
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: mongo
  name: mongo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: mongo
      name: mongo-service
    spec:
      volumes:
        - name: "deployment-storage"
          persistentVolumeClaim:
            claimName: "pvc"
      containers:
        - image: mongo
          name: mongo
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: "deployment-storage"
              mountPath: "/data/db"
status: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: mongo
  name: mongo-service
spec:
  ports:
    - port: 27017
      targetPort: 27017
  selector:
    app: mongo
  clusterIP: None
and here is the application.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: gateway
  name: gateway
spec:
  replicas: 4
  selector:
    matchLabels:
      app: gateway
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: gateway
    spec:
      containers:
        - image: vitornilson1998/native-micro
          name: native-micro
          env:
            - name: MONGO_CONNECTION_STRING
              value: mongodb://mongo-service:27017 # this is the connection string the application uses to reach MongoDB
            - name: MONGO_DB
              value: gateway
          resources: {}
          ports:
            - containerPort: 8080
status: {}
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: gateway-service
  name: gateway-service
spec:
  ports:
    - name: 8080-8080
      port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    app: gateway
  type: NodePort
status:
  loadBalancer: {}
I can't see what is stopping the application from reaching MongoDB.
What should I do?

I was using Calico as the CNI.
I removed Calico and let kube-proxy take care of everything.
Now everything is working fine.
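For anyone hitting the same symptom, a quick way to narrow down where it breaks (a minimal sketch, assuming the pod and Service names from the manifests above; the busybox image is just an example) is to check which node each pod landed on and then test DNS and raw TCP reachability from inside the cluster:

kubectl get pods -o wide                  # shows which node each gateway/mongo pod is on
kubectl run net-test --rm -it --restart=Never --image=busybox:1.36 -- sh
# inside the test pod:
nslookup mongo-service                    # should resolve to the mongo pod IP (headless Service)
nc -w 2 mongo-service 27017               # should open a TCP connection if the CNI routes traffic between nodes

If DNS resolves but the TCP connection only times out from pods on other nodes, the problem is usually the CNI / inter-node routing rather than the manifests.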

Related

Kubernetes: how do I expose pods to things outside of the cluster machine?

I read the Kubernetes docs and ended up with the following YAMLs to run PostgreSQL & pgAdmin in a cluster:
--- pgadmin-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pgadmin-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pgadmin-pod
  template:
    metadata:
      labels:
        app: pgadmin-pod
    spec:
      containers:
        - name: pgadmin-container
          image: dpage/pgadmin4
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 80
          env:
            - name: PGADMIN_DEFAULT_EMAIL
              value: email@example.com
            - name: PGADMIN_DEFAULT_PASSWORD
              value: password
--- pgadmin-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: pgadmin-service
spec:
  type: NodePort
  ports:
    - port: 30000
      targetPort: 80
  selector:
    app: pgadmin-pod
--- postgres-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres-pod
  template:
    metadata:
      labels:
        app: postgres-pod
    spec:
      containers:
        - name: postgres-container
          image: postgres:9.6-alpine
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_DB
              value: database
            - name: POSTGRES_PASSWORD
              value: password
            - name: POSTGRES_USER
              value: username
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgrepvc
      volumes:
        - name: postgrepvc
          persistentVolumeClaim:
            claimName: postgres-pv-claim
--- postgres-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  type: NodePort
  ports:
    - port: 30001
      targetPort: 5432
  selector:
    app: postgres-pod
--- postgres-storage.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv-volume
  labels:
    type: local
    app: postgres
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/mnt/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-pv-claim
  labels:
    app: postgres
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
I then run the command kubectl create -f ./, which results in the following:
(screenshot of the resulting Kubernetes pods / services)
Then I try to access pgAdmin at 10.43.225.170:30000 from outside of the cluster, but I get "10.43.225.170 took too long to respond." no matter what I try.
So how do I expose pgAdmin and Postgres to the outside world, and is there a way to give them static IPs so I don't have to update the IPs in connection strings each time I re-deploy on Kubernetes? Or do I have to use a StatefulSet for this?
There are a few problems here:
You are trying to reach the node's internal IP (10.43.225.170) instead of the external one.
The NodePort Service is configured incorrectly, and in addition you are calling the wrong port.
You haven't specified what platform you use. I'm using GKE, so in my case it's easier because external IPs are automatically assigned during cluster node creation, but I had to manually create an ingress firewall rule to allow outside access to the nodes on the required ports (30000, 30001).
In any case, to be able to use a NodePort you need an external IP address assigned to at least one node in the cluster and a firewall rule that allows ingress traffic to that port.
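For reference, on GKE that firewall rule can be created with something like this (the rule name and network are examples; 0.0.0.0/0 is wide open, so narrow the source range for anything beyond a quick test):

gcloud compute firewall-rules create allow-nodeports \
    --network default \
    --allow tcp:30000-30001 \
    --source-ranges 0.0.0.0/0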
Next: you are trying to call <NodeIP>:spec.ports[*].port.
As per the NodePort Service documentation:
the Service is visible as <NodeIP>:spec.ports[*].nodePort
So you need to explicitly specify nodePort.
I have changed your deployment a bit; I can access pgAdmin after deploying it and opening the corresponding ports in the firewall.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pgadmin-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pgadmin-pod
  template:
    metadata:
      labels:
        app: pgadmin-pod
    spec:
      containers:
        - name: pgadmin-container
          image: dpage/pgadmin4
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 80
          env:
            - name: PGADMIN_DEFAULT_EMAIL
              value: email@example.com
            - name: PGADMIN_DEFAULT_PASSWORD
              value: password
---
apiVersion: v1
kind: Service
metadata:
  name: pgadmin-service
spec:
  type: NodePort
  ports:
    - nodePort: 30000
      targetPort: 80
      port: 80
  selector:
    app: pgadmin-pod
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres-pod
  template:
    metadata:
      labels:
        app: postgres-pod
    spec:
      containers:
        - name: postgres-container
          image: postgres:9.6-alpine
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_DB
              value: database
            - name: POSTGRES_PASSWORD
              value: password
            - name: POSTGRES_USER
              value: username
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgrepvc
      volumes:
        - name: postgrepvc
          persistentVolumeClaim:
            claimName: postgres-pv-claim
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  type: NodePort
  ports:
    - nodePort: 30001
      targetPort: 5432
      port: 5432
  selector:
    app: postgres-pod
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv-volume
  labels:
    type: local
    app: postgres
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/mnt/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-pv-claim
  labels:
    app: postgres
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
Check:
kubectl apply -f pg_my.yaml
deployment.apps/pgadmin-deployment created
service/pgadmin-service created
service/postgres-service created
persistentvolume/postgres-pv-volume created
persistentvolumeclaim/postgres-pv-claim created
# In my case I take the node external IP from any node in the `kubectl get nodes -o wide` output:
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP
gke-cluster-1-default-pool-*******-***** Ready <none> 20d v1.18.16-gke.502 10.186.0.7 *.*.*.*
curl *.*.*.*:30000
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>Redirecting...</title>
<h1>Redirecting...</h1>
<p>You should be redirected automatically to target URL: /login?next=%2F.
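Postgres can be checked the same way once port 30001 is open in the firewall, for example with psql installed locally (the external IP is a placeholder; the credentials are the ones from the manifest above):

psql -h <node-external-ip> -p 30001 -U username -d database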

Kubernetes stateful MongoDB: what is the right connection string, or how do I connect to MongoDB in a running StatefulSet instance that also has a Service attached?

Here is my YAML config (the PV, StatefulSet, and Service all get created fine, no issues there). I have tried a bunch of solutions for the Kubernetes Mongo connection string, but none of them worked.
Kubernetes version (minikube):
1.20.1
Storage for config:
NFS (working fine, tested)
OS:
Linux Mint 20
YAML CONFIG:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: auth-pv
spec:
  capacity:
    storage: 250Mi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: manual
  nfs:
    path: /nfs/auth
    server: 192.168.10.104
---
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    name: mongo
spec:
  ports:
    - port: 27017
      targetPort: 27017
  selector:
    role: mongo
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  selector:
    matchLabels:
      app: mongo
  serviceName: mongo
  replicas: 1
  template:
    metadata:
      labels:
        app: mongo
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mongo
          image: mongo
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-persistent-storage
              mountPath: /data/db
  volumeClaimTemplates:
    - metadata:
        name: mongo-persistent-storage
      spec:
        storageClassName: manual
        accessModes: ["ReadWriteMany"]
        resources:
          requests:
            storage: 250Mi
I have found a few issues in your configuration file. In your Service manifest you use
selector:
  role: mongo
But in your StatefulSet pod template you are using
labels:
  app: mongo
One more thing: you should use clusterIP: None to make it a headless Service, which is recommended for a StatefulSet if you want to access the DB using a DNS name.
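You can verify both fixes quickly with a throwaway pod (a minimal sketch; busybox is just an example image). Once the selector matches and clusterIP: None is set, the Service name should resolve to the pod IP:

kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup mongo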
To all viewers, the working YAML is below.
The right connection string is: mongodb://mongo:27017/<dbname>
apiVersion: v1
kind: PersistentVolume
metadata:
  name: auth-pv
spec:
  capacity:
    storage: 250Mi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: manual
  nfs:
    path: /nfs/auth
    server: 192.168.10.104
---
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    name: mongo
spec:
  ports:
    - port: 27017
      targetPort: 27017
  selector:
    app: mongo
  clusterIP: None
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  selector:
    matchLabels:
      app: mongo
  serviceName: mongo
  replicas: 1
  template:
    metadata:
      labels:
        app: mongo
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mongo
          image: mongo
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongo-persistent-storage
              mountPath: /data/db
  volumeClaimTemplates:
    - metadata:
        name: mongo-persistent-storage
      spec:
        storageClassName: manual
        accessModes: ["ReadWriteMany"]
        resources:
          requests:
            storage: 250Mi
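A note for anyone scaling this out: with a StatefulSet behind a headless Service, each pod also gets a stable per-pod DNS name, so (assuming the default namespace) both of these forms should work:

mongodb://mongo:27017/<dbname>                                     # via the headless Service
mongodb://mongo-0.mongo.default.svc.cluster.local:27017/<dbname>   # via the pod's stable DNS name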

How to connect to a Samba server from a container running in Kubernetes?

I created a Kubernetes cluster on Amazon. Then I ran my pod (container) and a volume in this cluster. Now I want to run a Samba server on that volume and connect my pod to the Samba server. Is there any tutorial on how I can solve this problem? By the way, I am working on Windows 10. Here is my deployment code with the volume:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment
  labels:
    app: application
spec:
  replicas: 2
  selector:
    matchLabels:
      project: k8s
  template:
    metadata:
      labels:
        project: k8s
    spec:
      containers:
        - name: k8s-web
          image: mine/flask:latest
          volumeMounts:
            - mountPath: /test-ebs
              name: my-volume
          ports:
            - containerPort: 8080
      volumes:
        - name: my-volume
          persistentVolumeClaim:
            claimName: pv0004
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0004
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    fsType: ext4
    volumeID: [my-Id-volume]
You can check out the Samba container Docker image at: https://github.com/dperson/samba
---
kind: Service
apiVersion: v1
metadata:
  name: smb-server
  labels:
    app: smb-server
spec:
  type: LoadBalancer
  selector:
    app: smb-server
  ports:
    - port: 445
      name: smb-server
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: smb-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: smb-server
  template:
    metadata:
      name: smb-server
      labels:
        app: smb-server
    spec:
      containers:
        - name: smb-server
          image: dperson/samba
          env:
            - name: PERMISSIONS
              value: "0777"
          args: ["-u", "username;test", "-s", "share;/smbshare/;yes;no;no;all;none", "-p"]
          volumeMounts:
            - mountPath: /smbshare
              name: data-volume
          ports:
            - containerPort: 445
      volumes:
        - name: data-volume
          hostPath:
            path: /smbshare
            type: DirectoryOrCreate
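Once that is deployed, you can sanity-check the share from another pod in the cluster; a minimal sketch (the alpine image and package are just an example, and the credentials are the ones set in the args above):

kubectl run smb-test --rm -it --restart=Never --image=alpine:3.19 -- sh
# inside the test pod:
apk add --no-cache samba-client
smbclient -L //smb-server -U 'username%test'    # lists the "share" export if the server is reachable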

Why am I getting "1 pod has unbound immediate PersistentVolumeClaims" when working with 2 deployments?

I am trying to do a fairly simple red/green setup using Minikube where I want one pod running a red container, one pod running a green container, and a service to hit each. To do this, my k8s file is like...
apiVersion: v1
kind: PersistentVolume
metadata:
  name: main-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/Users/jackiegleason/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: main-volume-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: express-app
  name: express-service-red
spec:
  type: LoadBalancer
  ports:
    - port: 3000
  selector:
    app: express-app-red
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: express-app
  name: express-service-green
spec:
  type: LoadBalancer
  ports:
    - port: 3000
  selector:
    app: express-app-green
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: express-app-deployment-red
  labels:
    app: express-app-red
    tier: app
spec:
  replicas: 1
  selector:
    matchLabels:
      tier: app
  template:
    metadata:
      labels:
        app: express-app-red
        tier: app
    spec:
      volumes:
        - name: express-app-storage
          persistentVolumeClaim:
            claimName: main-volume-claim
      containers:
        - name: express-app-container
          image: partyk1d24/hello-express:latest
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: "/var/external"
              name: express-app-storage
          ports:
            - containerPort: 3000
              protocol: TCP
              name: express-endpnt
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: express-app-deployment-green
  labels:
    app: express-app-green
    tier: app
spec:
  replicas: 1
  selector:
    matchLabels:
      tier: app
  template:
    metadata:
      labels:
        app: express-app-green
        tier: app
    spec:
      volumes:
        - name: express-app-storage
          persistentVolumeClaim:
            claimName: main-volume-claim
      containers:
        - name: express-app-container
          image: partyk1d24/hello-express:latest
          imagePullPolicy: IfNotPresent
          env:
            - name: DEPLOY_TYPE
              value: "Green"
          volumeMounts:
            - mountPath: "/var/external"
              name: express-app-storage
          ports:
            - containerPort: 3000
              protocol: TCP
              name: express-endpnt
But when I run I get...
0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
It worked fine without the second deployment, so what am I missing?
Thank you!
You cannot use the same PV with accessMode: ReadWriteOnce multiple times.
To do this you would need to use a volume that supports the ReadWriteMany access mode.
Check the k8s documentation for the list of volume plugins that support this feature.
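As a rough sketch of what that could look like with NFS (just one example of a ReadWriteMany-capable plugin; the server address and export path are placeholders):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: main-volume
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: <nfs-server-ip>
    path: /exports/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: main-volume-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi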
Additionally, as David already mentioned, it's much better to log to STDOUT.
You can also look at solutions like Fluent Bit/Fluentd or the ELK stack.

MongoDB in Kubernetes within GCP

I'm trying to deploy MongoDB on my k8s cluster, as MongoDB is my DB of choice. To do that I have config files (very similar to what I did with Postgres a few weeks ago).
Here is Mongo's Deployment k8s object:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: panel-admin-mongo-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: panel-admin-mongo
  template:
    metadata:
      labels:
        component: panel-admin-mongo
    spec:
      volumes:
        - name: panel-admin-mongo-storage
          persistentVolumeClaim:
            claimName: database-persistent-volume-claim
      containers:
        - name: panel-admin-mongo
          image: mongo
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: panel-admin-mongo-storage
              mountPath: /data/db
In order to get into the pod I made a service:
apiVersion: v1
kind: Service
metadata:
  name: panel-admin-mongo-cluster-ip-service
spec:
  type: ClusterIP
  selector:
    component: panel-admin-mongo
  ports:
    - port: 27017
      targetPort: 27017
And of course I need a PVC as well:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: database-persistent-volume-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
In order to get to the DB from my server I used this server Deployment object:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: panel-admin-api-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: panel-admin-api
  template:
    metadata:
      labels:
        component: panel-admin-api
    spec:
      containers:
        - name: panel-admin-api
          image: my-image
          ports:
            - containerPort: 3001
          env:
            - name: MONGO_URL
              value: panel-admin-mongo-cluster-ip-service # This is important
      imagePullSecrets:
        - name: gcr-json-key
But for some reason, when I boot up all the containers with the kubectl apply command, my server says:
MongoDB :: connection error: MongoParseError: Invalid connection string
Can I deploy it like that (as was possible with Postgres)? Or what am I missing here?
Use mongodb:// in front of your panel-admin-mongo-cluster-ip-service
So it should look like this:
mongodb://panel-admin-mongo-cluster-ip-service
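In other words, the env entry in the server Deployment would end up looking something like this (a sketch based on the manifest above; the port is optional here since 27017 is MongoDB's default):

env:
  - name: MONGO_URL
    value: mongodb://panel-admin-mongo-cluster-ip-service:27017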