Connect Spring Boot and MongoDB on different Kubernetes pods

I am trying to create two different Deployments in Kubernetes, one for a Spring Boot project and another for MongoDB, and I want the Spring Boot project to connect to Mongo. Here is my properties file:
server:
  port: 8081
spring:
  data:
    mongodb:
      host: mongo-service
      port: 27017
      database: inventory
And here is the .yml file I am using for Kubernetes:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inventory
  labels:
    app: inventory
spec:
  selector:
    matchLabels:
      app: inventory
  template:
    metadata:
      labels:
        app: inventory
    spec:
      containers:
        - image: carlospalma03/inventory_java-api:version7
          name: inventory-api
          ports:
            - containerPort: 8081
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo
  labels:
    app: mongo
spec:
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
        - image: mongo
          name: mongo-db
          ports:
            - containerPort: 27017
---
apiVersion: v1
kind: Service
metadata:
  name: mongo-service
  labels:
    run: mongo-service
spec:
  ports:
    - port: 27017
      protocol: TCP
  selector:
    app: mongo-service
I get the following exception on the Spring Boot side:
Exception in monitor thread while connecting to server mongo-db:27017
Does anybody know the proper name I should use for the mongo-db service so that the Spring Boot project can communicate with it?
I am trying to use the name of the Kubernetes Service I created to enable communication, but something tells me there's a trick to how Spring Boot resolves the names of other pods.

Alright, a couple of things here. First of all, I had to create two Services, not just one; the Service associated with the Spring Boot Deployment is what lets it talk to other pods in the Kubernetes cluster. (Note also that mongo-service's selector now matches the mongo pods' app: mongo label; before, it selected app: mongo-service, which matched nothing.)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inventory
  labels:
    app: inventory
spec:
  selector:
    matchLabels:
      app: inventory
  template:
    metadata:
      labels:
        app: inventory
    spec:
      containers:
        - image: carlospalma03/inventory_java-api:version9
          name: inventory-api
          ports:
            - containerPort: 8081
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo
  labels:
    app: mongo
spec:
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
        - image: mongo
          name: mongo-db
          ports:
            - containerPort: 27017
---
apiVersion: v1
kind: Service
metadata:
  name: inventory-service
  labels:
    run: inventory-service
spec:
  ports:
    - port: 8081
      targetPort: 8081
      protocol: TCP
  selector:
    app: inventory
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  name: mongo-service
  labels:
    run: mongo-service
spec:
  ports:
    - port: 27017
      targetPort: 27017
      protocol: TCP
  selector:
    app: mongo
Secondly, I had to use the spring.data.mongodb.uri property inside the Spring Boot project, like this:
server:
  port: 8081
spring:
  data:
    mongodb:
      uri: mongodb://mongo-service:27017/inventory
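One note on the hostname, for the record: the short name mongo-service resolves only because both Deployments run in the same namespace. From another namespace you would need the fully qualified service DNS name (a sketch, assuming everything above lives in the default namespace):

server:
  port: 8081
spring:
  data:
    mongodb:
      uri: mongodb://mongo-service.default.svc.cluster.local:27017/inventory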

Related

How to connect MongoDB and Golang in Kubernetes

The database and the server are not connecting.
I am attempting to deploy in a Kubernetes environment.
Here are the Deployment and Service manifests for MongoDB and the Golang HTTP server:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  labels:
    app: backend
spec:
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - image: royroyee/backend:0.8
          name: backend
          ports:
            - containerPort: 9001
---
apiVersion: v1
kind: Service
metadata:
  name: backend-service
  labels:
    run: backend-service
spec:
  ports:
    - port: 9001
      targetPort: 9001
      protocol: TCP
  selector:
    app: backend
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo
  labels:
    app: mongo
spec:
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
        - image: mongo
          name: mongo-db
          ports:
            - containerPort: 27017
---
apiVersion: v1
kind: Service
metadata:
  name: mongo-service
  labels:
    run: mongo-service
spec:
  ports:
    - port: 27017
      targetPort: 27017
      protocol: TCP
  selector:
    app: mongo
And my Golang code ...
The MongoDB session:
func getSession() *mgo.Session {
    s, err := mgo.Dial("mongodb://mongo-service:27017/mongo-db")
Please let me know.
I also tried connection strings like these:
// mongodb://mongo-service:27017/backend
// mongodb://mongo-service:27017/mongo-db
// mongodb://mongo-service:27017
To connect MongoDB with Golang in a Kubernetes environment, you need to follow these steps:
1. Deploy MongoDB as a StatefulSet or a Deployment in your Kubernetes cluster.
2. Create a Service for MongoDB so that the deployed pods can be reached from your Golang application.
3. In your Golang application, use the official MongoDB Go driver to establish a connection to the MongoDB service, specifying the service name and port.
4. Verify the connection by running a simple test that inserts and retrieves data from the MongoDB database.
5. Finally, package the Golang application as a Docker image and deploy it as a Deployment in the same Kubernetes cluster.
Here is sample Go code to connect to MongoDB:
package main

import (
    "context"
    "fmt"
    "log"

    "go.mongodb.org/mongo-driver/mongo"
    "go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
    // Set client options
    clientOptions := options.Client().ApplyURI("mongodb://mongodb-service:27017")
    // Connect to MongoDB
    client, err := mongo.Connect(context.TODO(), clientOptions)
    if err != nil {
        log.Fatal(err)
    }
    // Close the connection when main returns
    defer client.Disconnect(context.TODO())
    // Check the connection
    err = client.Ping(context.TODO(), nil)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("Connected to MongoDB!")
}
Here's a sample YAML file for deploying MongoDB as a StatefulSet and a Go application as a Deployment:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb
spec:
  serviceName: mongodb-service
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongodb
          image: mongo:4.4
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: mongodb-data
              mountPath: /data/db
  volumeClaimTemplates:
    - metadata:
        name: mongodb-data
        annotations:
          volume.beta.kubernetes.io/storage-class: standard
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  selector:
    app: mongodb
  ports:
    - name: mongodb
      port: 27017
      targetPort: 27017
  clusterIP: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: go-app
  template:
    metadata:
      labels:
        app: go-app
    spec:
      containers:
        - name: go-app
          image: <your-go-app-image>
          ports:
            - containerPort: 8080
Note: You will need to replace your-go-app-image with the actual Docker image of your Go application.
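One more detail worth calling out: mongodb-service above is headless (clusterIP: None), which is what gives the StatefulSet pods stable per-pod DNS names in addition to the service name used in the Go sample. Assuming the default namespace, the single replica would also be reachable individually at:

# Stable per-pod DNS name provided by the headless Service (assumes the default namespace)
mongodb-0.mongodb-service.default.svc.cluster.local:27017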

Connect Postgres database in Docker to app in Kubernetes

I'm new to Kubernetes and I'm trying to understand how to connect a Postgres database that lives outside Kubernetes (specifically in Docker, with IP address 172.17.0.2 and port 5432) to my webapp in Kubernetes.
I tried to connect to the database through the env variable PS_DATABASE_URL in the Deployment section.
But it cannot find the mentioned URL for Postgres. How should this be done correctly?
webapp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment
  labels:
    app: webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: dmitriy83/flask_kuber
          ports:
            - containerPort: 5000
          env:
            - name: PS_DATABASE_URL
              value: postgresql://postgres:password@172.17.0.2:5432/db
---
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  type: NodePort
  selector:
    app: webapp
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 5000
      nodePort: 30100
I figured it out. it depends from cloud provider. For this example i use amazon cloud and to connect database on amazon (this is external service). So we must define it in yaml file like an external service.
postgres_external.yaml
kind: Service
apiVersion: v1
metadata:
  name: postgres
spec:
  type: ExternalName
  externalName: db.cdmhjidhpqyu.us-east-2.rds.amazonaws.com
To connect to the external service, you need to reference it from the Deployment.
webapp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment
  labels:
    app: webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: dmitriy83/flask_kuber
          ports:
            - containerPort: 5000
          env:
            - name: PS_DATABASE_URL
              value: postgresql://<username>:<password>@postgres:5432/db
---
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  type: NodePort
  selector:
    app: webapp
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 5000
      nodePort: 30100
Please note that in webapp.yaml the env value postgresql://<username>:<password>@postgres:5432/db contains the hostname postgres: this is the name of the external service we defined in postgres_external.yaml.
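For completeness: an ExternalName Service only works when the target is a DNS name, as with the RDS endpoint above. If the database is reachable only by a raw IP, as in the original question's 172.17.0.2, the usual alternative is a selectorless Service with a manually managed Endpoints object (a sketch, assuming the Docker host IP is routable from the cluster nodes):

apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  ports:
    - port: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: postgres   # must match the Service name
subsets:
  - addresses:
      - ip: 172.17.0.2
    ports:
      - port: 5432

Either way, the webapp keeps using postgres as the hostname in PS_DATABASE_URL.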

Restart pod when another service is recreated

I have a Flask pod that connects to a MongoDB service through the environment variable SERVICE_HOST (DNS discovery didn't work for some reason). When I change something in the MongoDB service and re-apply it, the Flask pod can no longer connect, since the service host changes, and I have to recreate the pod manually every time. Is there a way to automate this, something like the docker-compose depends_on directive?
flask yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: proxy23-api-deployment
  labels:
    app: proxy23-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: proxy23-api
  template:
    metadata:
      labels:
        app: proxy23-api
    spec:
      containers:
        - name: proxy23-api
          image: my_image
          ports:
            - containerPort: 5000
          env:
            - name: DB_URI
              value: mongodb://$(PROXY23_DB_SERVICE_SERVICE_HOST):27017
            - name: DB_NAME
              value: db
            - name: PORT
              value: "5000"
      imagePullSecrets:
        - name: registry-credentials
---
apiVersion: v1
kind: Service
metadata:
  name: proxy23-api-service
spec:
  selector:
    app: proxy23-api
  type: NodePort
  ports:
    - port: 9002
      targetPort: 5000
      nodePort: 30002
mongodb yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: proxy23-db-deployment
  labels:
    app: proxy23-db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: proxy23-db
  template:
    metadata:
      labels:
        app: proxy23-db
    spec:
      containers:
        - name: proxy23-db
          image: mongo:bionic
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: proxy23-storage
              mountPath: /data/db
      volumes:
        - name: proxy23-storage
          persistentVolumeClaim:
            claimName: proxy23-db-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: proxy23-db-service
spec:
  selector:
    app: proxy23-db
  type: NodePort
  ports:
    - port: 27017
      targetPort: 27017
      nodePort: 30003
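The usual way to avoid the manual restarts is to reference the Service by its DNS name rather than through the injected SERVICE_HOST variable, since the name stays stable even when the Service object is recreated (a sketch, assuming cluster DNS is functional, which the question notes was not the case here):

env:
  - name: DB_URI
    value: mongodb://proxy23-db-service:27017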

What host does Kubernetes assign to my deployment?

I have two Kubernetes Deployments: composite-app (1 pod) and product-app (2 pods), both listening on port 8080. The first one sometimes needs to call the second.
However, the first Deployment can't find the second one. When it tries to call it using the product-app host, it fails:
Exception: I/O error on GET request for "http://product-app:8080/product/123": product-app;
nested exception is UnknownHostException
Am I using the right host? So far I've tried (to no avail):
product
product-app.default.pod.cluster.local
product-app
Here's my YAML:
apiVersion: v1
kind: Service
metadata:
  name: composite-service
spec:
  type: NodePort
  selector:
    app: composite-app
  ports:
    - targetPort: 8080
      port: 8080
      nodePort: 30091
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: composite-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: composite-app
  template:
    metadata:
      labels:
        app: composite-app
    spec:
      containers:
        - name: composite-container
          image: 192.168.49.2:2376/composite-ms:latest
          imagePullPolicy: Never
          ports:
            - containerPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: product-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: product-app
  template:
    metadata:
      labels:
        app: product-app
    spec:
      containers:
        - name: product-container
          image: 192.168.49.2:2376/product-ms:latest
          imagePullPolicy: Never
          ports:
            - containerPort: 8080
You need to define a Service object for the product-deploy Deployment as well, so that the other pod can connect to it. The Service can be of type ClusterIP if it does not need to be exposed to the outside world.
apiVersion: v1
kind: Service
metadata:
  name: product-service
spec:
  type: ClusterIP
  selector:
    app: product-app
  ports:
    - targetPort: 8080
      port: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: product-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: product-app
  template:
    metadata:
      labels:
        app: product-app
    spec:
      containers:
        - name: product-container
          image: 192.168.49.2:2376/product-ms:latest
          imagePullPolicy: Never
          ports:
            - containerPort: 8080
You could also connect to the other pod directly by its IP without a Service, but that is not recommended, since pod IPs change across pod updates.
With the Service in place, you can connect to the product-app pods from the composite-app using the hostname product-service, e.g. http://product-service:8080/product/123.

Kubernetes on Spinnaker - Interservice communication

I have a sample application running on a Kubernetes cluster: two microservices, one a MongoDB container and the other a Java Spring Boot container.
The Spring Boot container interacts with the MongoDB container through a Service and stores data in it.
The specs are provided below.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: empappdepl
  labels:
    name: empapp
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: empapp
    spec:
      containers:
        - resources:
            limits:
              cpu: 0.5
          image: 11.168.xx.xx:5000/employee:latest
          imagePullPolicy: IfNotPresent
          name: wsemp
          ports:
            - containerPort: 8080
              name: wsemp
          command: ["java","-Dspring.data.mongodb.uri=mongodb://mongoservice/microservices", "-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
      imagePullSecrets:
        - name: myregistrykey
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: empwhatever
  name: empservice
spec:
  ports:
    - port: 8080
      nodePort: 30062
  type: NodePort
  selector:
    name: empapp
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mongodbdepl
  labels:
    name: mongodb
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: mongodb
    spec:
      containers:
        - resources:
            limits:
              cpu: 1
          image: mongo
          imagePullPolicy: IfNotPresent
          name: mongodb
          ports:
            - containerPort: 27017
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: mongowhatever
  name: mongoservice
spec:
  ports:
    - port: 27017
      targetPort: 27017
      protocol: TCP
  type: NodePort
  selector:
    name: mongodb
I would like to know how this communication can be accomplished in Spinnaker, since it creates its own labels and selectors.
Thanks,
This is how it needs to be done.
Each load balancer created for the application is the Service. So for the MongoDB application, after a load balancer is created with the NodePort settings, get the name of the Service, e.g. mongodb-dev. The server group for MongoDB also needs to be created.
Then, when creating the employee server group, you need to specify the command arguments one by one, each on a separate line for that container, as described here:
https://github.com/spinnaker/spinnaker/issues/2021#issuecomment-334885467
"java","-Dspring.data.mongodb.uri=mongodb://name-of-mongodb-service/microservices", "-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"
Now when the employee and MongoDB pods start, they pick up the mapping and communicate properly.
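For reference, in raw manifest form the employee container spec from the answer would look something like this (a sketch; mongodb-dev stands in for whatever name Spinnaker gave the MongoDB load balancer/Service):

containers:
  - name: wsemp
    image: 11.168.xx.xx:5000/employee:latest
    # The Mongo URI points at the Spinnaker-created Service name
    command: ["java",
              "-Dspring.data.mongodb.uri=mongodb://mongodb-dev/microservices",
              "-Djava.security.egd=file:/dev/./urandom",
              "-jar", "/app.jar"]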