Connect a Postgres database in Docker to an app in Kubernetes

I'm new to Kubernetes and I'm trying to understand how to connect a Postgres database that lives outside of Kubernetes (specifically in Docker, at IP address 172.17.0.2, port 5432) to my webapp in Kubernetes.
I try to pass the connection through the env variable PS_DATABASE_URL in the Deployment section.
But the app cannot resolve the Postgres URL. How should this be done correctly?
webapp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment
  labels:
    app: webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: dmitriy83/flask_kuber
          ports:
            - containerPort: 5000
          env:
            - name: PS_DATABASE_URL
              value: postgresql://postgres:password@172.17.0.2:5432/db
---
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  type: NodePort
  selector:
    app: webapp
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 5000
      nodePort: 30100

I figured it out: it depends on the cloud provider. In this example I use AWS, and the database runs on Amazon RDS (that is, it is an external service), so we must define it in the YAML file as an external service.
postgres_external.yaml
kind: Service
apiVersion: v1
metadata:
  name: postgres
spec:
  type: ExternalName
  externalName: db.cdmhjidhpqyu.us-east-2.rds.amazonaws.com
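
Note that an ExternalName Service like the one above only works with a DNS name such as the RDS endpoint; it cannot point at a bare IP. For the original Docker case (Postgres at 172.17.0.2), one possible alternative is a selector-less Service paired with a manually managed Endpoints object. A sketch, assuming that IP is routable from the cluster nodes:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres            # pods can then reach the DB as postgres:5432
spec:
  ports:
    - port: 5432
      targetPort: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: postgres            # must match the Service name exactly
subsets:
  - addresses:
      - ip: 172.17.0.2      # the Docker container's address from the question
    ports:
      - port: 5432
```

Because the Service has no selector, Kubernetes does not manage the Endpoints automatically; you maintain the IP yourself.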
To connect to the external service, you need to reference it in the deployment.
webapp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment
  labels:
    app: webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: dmitriy83/flask_kuber
          ports:
            - containerPort: 5000
          env:
            - name: PS_DATABASE_URL
              value: postgresql://<username>:<password>@postgres:5432/db
---
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  type: NodePort
  selector:
    app: webapp
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 5000
      nodePort: 30100
Please note that in webapp.yaml, the env value postgresql://<username>:<password>@postgres:5432/db contains the host postgres: this is the name of the external service we defined in postgres_external.yaml.
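
As an aside, a literal value: line puts the credentials in the manifest itself. A Secret-backed variant of the same environment variable might look like this (a sketch; the Secret and key names are illustrative):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: postgres-credentials
type: Opaque
stringData:
  database-url: postgresql://<username>:<password>@postgres:5432/db
---
# Then, in the Deployment's container spec, reference the Secret
# instead of a literal value:
#   env:
#     - name: PS_DATABASE_URL
#       valueFrom:
#         secretKeyRef:
#           name: postgres-credentials
#           key: database-url
```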

Related

Connect spring boot and mongodb on different kubernetes pods

I am trying to create two different deployments using Kubernetes, one for a Spring Boot project and another one for MongoDB. I want the Spring Boot project to connect to Mongo. Here is my properties file:
server:
  port: 8081
spring:
  data:
    mongodb:
      host: mongo-service
      port: 27017
      database: inventory
And here is the .yml file I am using for Kubernetes:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inventory
  labels:
    app: inventory
spec:
  selector:
    matchLabels:
      app: inventory
  template:
    metadata:
      labels:
        app: inventory
    spec:
      containers:
        - image: carlospalma03/inventory_java-api:version7
          name: inventory-api
          ports:
            - containerPort: 8081
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo
  labels:
    app: mongo
spec:
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
        - image: mongo
          name: mongo-db
          ports:
            - containerPort: 27017
---
apiVersion: v1
kind: Service
metadata:
  name: mongo-service
  labels:
    run: mongo-service
spec:
  ports:
    - port: 27017
      protocol: TCP
  selector:
    app: mongo-service
I get the following exception on the spring boot side:
Exception in monitor thread while connecting to server mongo-db:27017
Does anybody know what's the proper name I should use for the mongo-db service so that the spring boot project can communicate with it?
I am trying to use the name of the kubernetes service I created to enable communication but something tells me that there's a trick to how spring boot names the other pods.
Alright, a couple of things here. First of all, I had to create two services, not just one. The service associated with the Spring Boot deployment talks with other pods in the Kubernetes cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inventory
  labels:
    app: inventory
spec:
  selector:
    matchLabels:
      app: inventory
  template:
    metadata:
      labels:
        app: inventory
    spec:
      containers:
        - image: carlospalma03/inventory_java-api:version9
          name: inventory-api
          ports:
            - containerPort: 8081
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo
  labels:
    app: mongo
spec:
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
        - image: mongo
          name: mongo-db
          ports:
            - containerPort: 27017
---
apiVersion: v1
kind: Service
metadata:
  name: inventory-service
  labels:
    run: inventory-service
spec:
  ports:
    - port: 8081
      targetPort: 8081
      protocol: TCP
  selector:
    app: inventory
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  name: mongo-service
  labels:
    run: mongo-service
spec:
  ports:
    - port: 27017
      targetPort: 27017
      protocol: TCP
  selector:
    app: mongo
Secondly, I had to use the spring.data.mongodb.uri property inside the Spring Boot project, like this:
server:
  port: 8081
spring:
  data:
    mongodb:
      uri: mongodb://mongo-service:27017/inventory
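
Not part of the original answer, but a readiness probe on the mongo container would keep mongo-service from routing traffic before mongod accepts connections, which avoids connection exceptions on the Spring Boot side during startup. A sketch of the extra fields for the mongo-db container:

```yaml
# Hypothetical addition to the mongo-db container spec in the mongo Deployment:
readinessProbe:
  tcpSocket:
    port: 27017          # pod is only marked Ready once mongod accepts TCP connections
  initialDelaySeconds: 5
  periodSeconds: 10
```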

Unable to access the app deployed on minikube cluster by using the url

This is the IP I get for accessing the application, but I am unable to reach the app at this URL.
This is what I get when accessing the URL.
This is the service file I am using for the application.
apiVersion: v1
kind: Service
metadata:
  name: villa-service
spec:
  selector:
    app: villa
  ports:
    - port: 80
      targetPort: 80
      nodePort: 31000
  type: NodePort
This is the deployment file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: villa.deployment
  labels:
    app: villa
spec:
  replicas: 2
  selector:
    matchLabels:
      app: villa
  template:
    metadata:
      labels:
        app: villa
    spec:
      containers:
        - name: villa
          image: farhan23432/angular
          ports:
            - containerPort: 80
These are the inbound rules of the security group of the instance on which I am running my minikube cluster.
These are the versions of minikube, Docker, and kubectl that I am using.
This is the status of minikube.

What host does Kubernetes assign to my deployment?

I have two Kubernetes deployments: composite-app (1 pod) and product-app (2 pods), both listening on port 8080. The first one needs to call the second one sometimes.
However, the first deployment can't find the second one. When it tries to call it using the product-app host, it fails:
Exception: I/O error on GET request for "http://product-app:8080/product/123": product-app;
nested exception is UnknownHostException
Am I using the right host? So far I've tried (to no avail):
product
product-app.default.pod.cluster.local
product-app
Here's my YAML:
apiVersion: v1
kind: Service
metadata:
  name: composite-service
spec:
  type: NodePort
  selector:
    app: composite-app
  ports:
    - targetPort: 8080
      port: 8080
      nodePort: 30091
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: composite-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: composite-app
  template:
    metadata:
      labels:
        app: composite-app
    spec:
      containers:
        - name: composite-container
          image: 192.168.49.2:2376/composite-ms:latest
          imagePullPolicy: Never
          ports:
            - containerPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: product-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: product-app
  template:
    metadata:
      labels:
        app: product-app
    spec:
      containers:
        - name: product-container
          image: 192.168.49.2:2376/product-ms:latest
          imagePullPolicy: Never
          ports:
            - containerPort: 8080
You need to define a Service object for the product-deploy Deployment as well, so that the other pod can connect to it. The Service can be of type ClusterIP if it does not need to be exposed to the outside world.
apiVersion: v1
kind: Service
metadata:
  name: product-service
spec:
  type: ClusterIP
  selector:
    app: product-app
  ports:
    - targetPort: 8080
      port: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: product-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: product-app
  template:
    metadata:
      labels:
        app: product-app
    spec:
      containers:
        - name: product-container
          image: 192.168.49.2:2376/product-ms:latest
          imagePullPolicy: Never
          ports:
            - containerPort: 8080
You could connect to the other pod directly by its IP, without a Service, but that is not recommended, since the pod IP changes whenever the pod is recreated or updated.
You can then connect to the product-app pod from the composite-app using product-service.
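
If you prefer not to hard-code the host inside the application, the service address can also be injected through the environment. A sketch using the names from the manifests above (PRODUCT_SERVICE_URL is an illustrative variable name):

```yaml
# Hypothetical env entry for the composite-container spec:
env:
  - name: PRODUCT_SERVICE_URL
    # fully qualified form; the short name product-service also resolves
    # from pods in the same (default) namespace
    value: http://product-service.default.svc.cluster.local:8080
```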

Tunnel for service target port empty in Kubernetes, and can't access pod from local browser

apiVersion: apps/v1
kind: Deployment
metadata:
  name: identityold-deployment
spec:
  selector:
    matchLabels:
      app: identityold
  replicas: 1
  template:
    metadata:
      labels:
        app: identityold
    spec:
      containers:
        - name: identityold
          image: <image name from docker hub>
          ports:
            - containerPort: 8081
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: identityold
  name: identityold-svc
  namespace: default
spec:
  type: NodePort # use LoadBalancer as type here
  ports:
    - port: 80
      targetPort: 8081
      nodePort: 30036
  selector:
    app: identityold
The above is my deployment YAML file, and I can't access the service from the browser.
Exposing a service in a minikube cluster is a little different from a normal Kubernetes cluster.
Please follow this guide from the Kubernetes documentation and use the minikube service command to expose it properly.

Kubernetes API External Point Fail on Connection

I have created this Kubernetes file to deploy two APIs in a cluster on GCloud. I have tried two kinds of type on the Service.
First I set the service type to NodePort and couldn't connect to it; after that I tried LoadBalancer, but even with the external IP and the Endpoints I am not able to access either API.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: xxxxxxxxxxxxx
  labels:
    app: xxxxxxxx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: xxxxxxxxx
  template:
    metadata:
      labels:
        app: xxxxxxxx
    spec:
      containers:
        - name: xxxxx
          image: xxxxxxxxx
          ports:
            - containerPort: 3000
---
kind: Service
apiVersion: v1
metadata:
  name: xxxxxxxxx
spec:
  selector:
    app: xxxxxxxx
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 3000
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: yyyyyy
  labels:
    app: yyyyyy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: yyyyyy
  template:
    metadata:
      labels:
        app: yyyyyy
    spec:
      containers:
        - name: yyyyyy
          image: yyyyyy
          ports:
            - containerPort: 3000
---
kind: Service
apiVersion: v1
metadata:
  name: yyyyyy
spec:
  selector:
    app: yyyyyy
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
  type: LoadBalancer
Could anyone help me with this issue?
Regards.
There are a lot of examples in the GKE documentation of deploying a Service (type: LoadBalancer) that redirects traffic to a Deployment.
Please follow these tutorials:
https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app
https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook
https://cloud.google.com/kubernetes-engine/docs/tutorials/persistent-disk
Also, your question doesn't list any errors or events. Please take a look at the kubectl describe output for the Service. If the load balancer isn't coming up, there might be an error such as having run out of IP addresses in your GCP project.