I'm setting up Kubernetes locally with a prismagraphql container and a Postgres container.
The Postgres container is set up correctly, and by using a NodePort I can access it from pgAdmin locally.
The Prisma container, however, is not reachable. I followed these steps (Guide), but I can't reach it at localhost:4466 even after the port-forwarding step.
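The forwarding I tried was along these lines (using the names from the manifests below):

kubectl port-forward deployment/prisma-deployment 4466:4466
# or, equivalently, through the service
kubectl port-forward service/prisma-service 4466:4466

Either way, localhost:4466 does not answer.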
This is the postgres.yaml:
# postgres config
apiVersion: v1
kind: ConfigMap
metadata:
name: postgres-config
labels:
app: postgres
data:
POSTGRES_DB: database
POSTGRES_USER: username
POSTGRES_PASSWORD: password
---
# postgres deployment
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: postgres-deployment
labels:
app: postgres
spec:
serviceName: "postgres"
replicas: 1
selector:
matchLabels:
app: postgres
template:
metadata:
labels:
app: postgres
spec:
containers:
- name: postgres
image: postgres:12
envFrom:
- configMapRef:
name: postgres-configuration
ports:
- containerPort: 5432
name: postgresdb
---
# postgres service
apiVersion: v1
kind: Service
metadata:
name: postgres-service
labels:
app: postgres
spec:
type: ClusterIP
selector:
app: postgres
ports:
- port: 5432
name: postgres
This is the prisma.yaml:
# prisma config
apiVersion: v1
kind: ConfigMap
metadata:
name: prisma-config
labels:
app: prisma
data:
PRISMA_CONFIG: |
port: 4466
databases:
default:
connector: postgres
host: database
port: 5432
user: username
password: password
migrations: true
---
# prisma deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: prisma-deployment
labels:
app: prisma
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: prisma
template:
metadata:
labels:
app: prisma
spec:
containers:
- name: prisma-4466
image: prismagraphql/prisma:1.34
ports:
- containerPort: 4466
env:
- name: PRISMA_CONFIG
valueFrom:
configMapKeyRef:
name: prisma-config
key: PRISMA_CONFIG
---
# prisma service
apiVersion: v1
kind: Service
metadata:
name: prisma-service
spec:
type: NodePort
selector:
app: prisma
ports:
- protocol: TCP
port: 4466
targetPort: 4466
nodePort: 30100
This is what I obtained with kubectl describe svc prisma-service:
Related
So I have a basic minikube cluster configuration with only two pods, one for the Postgres DB and one for my Spring app. However, I can't get my app to connect to my DB. I know that in Docker such an issue could be solved with networking, but after a lot of research I can't seem to find the problem or the solution to my issue.
Currently, given my configuration, I get a Connection refused error from Postgres whenever my Spring app tries to start:
Caused by: org.postgresql.util.PSQLException: Connection to postgres-service:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
So my spring-app is a basic REST API with some open endpoints where I query for some data. The app works completely fine on its own, and here is my application.properties:
spring.datasource.driverClassName=org.postgresql.Driver
spring.datasource.url=jdbc:postgresql://${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}
spring.datasource.username=${POSTGRES_USER}
spring.datasource.password=${POSTGRES_PASSWORD}
spring.jpa.database-platform=org.hibernate.dialect.PostgreSQLDialect
spring.jpa.hibernate.ddl-auto=update
The way I create my Postgres component is by creating a ConfigMap, a Secret and finally a Deployment with its Service inside. They look like so:
postgres-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: postgres-config
data:
postgres-url: postgres-service
postgres-port: "5432"
postgres-db: "test"
postgres-secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: postgres-secret
type: Opaque
data:
postgres_user: cm9vdA== #already encoded in base64
postgres_password: cm9vdA== #already encoded in base64
postgres.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: postgres-deployment
labels:
app: postgres
spec:
replicas: 1
selector:
matchLabels:
app: postgres
template:
metadata:
labels:
app: postgres
spec:
containers:
- name: postgresdb
image: postgres
ports:
- containerPort: 5432
env:
- name: POSTGRES_USER
valueFrom:
secretKeyRef:
name: postgres-secret
key: postgres_user
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: postgres-secret
key: postgres_password
- name: POSTGRES_DB
valueFrom:
configMapKeyRef:
name: postgres-config
key: postgres-db
---
apiVersion: v1
kind: Service
metadata:
name: postgres-service
spec:
selector:
app.kubernetes.io/name: postgres
ports:
- protocol: TCP
port: 5432
targetPort: 5432
and finally here's my Deployment with its Service for my Spring app
spring-app.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: spring-app-deployment
labels:
app: spring-app
spec:
replicas: 1
selector:
matchLabels:
app: spring-app
template:
metadata:
labels:
app: spring-app
spec:
containers:
- name: spring-app
image: app #image is pulled from my docker hub
ports:
- containerPort: 8080
env:
- name: POSTGRES_USER
valueFrom:
secretKeyRef:
name: postgres-secret
key: postgres_user
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: postgres-secret
key: postgres_password
- name: POSTGRES_HOST
valueFrom:
configMapKeyRef:
name: postgres-config
key: postgres-url
- name: POSTGRES_PORT
valueFrom:
configMapKeyRef:
name: postgres-config
key: postgres-port
- name: POSTGRES_DB
valueFrom:
configMapKeyRef:
name: postgres-config
key: postgres-db
---
apiVersion: v1
kind: Service
metadata:
name: spring-app-service
spec:
type: NodePort
selector:
app.kubernetes.io/name: spring-app
ports:
- protocol: TCP
port: 8080
targetPort: 8080
nodePort: 30001
A connection refused error means that the host you are connecting to does not have the port you mentioned open.
This leads me to think that either the Postgres pod isn't running correctly, or the service is not pointing to those pods correctly.
Checking the YAMLs, I can see that the service's pod selector isn't configured correctly:
The service selects pods with the label app.kubernetes.io/name: postgres
The deployment creates pods with the label app: postgres
The correct service manifest should look like:
apiVersion: v1
kind: Service
metadata:
name: postgres-service
spec:
selector:
app: postgres
ports:
- protocol: TCP
port: 5432
targetPort: 5432
You can double-check that by describing the service with kubectl describe service postgres-service.
The output should list the Postgres pod IPs under Endpoints.
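You can also list the endpoints directly; when the selector matches running pods, the ENDPOINTS column shows their IP:port pairs instead of <none>:

kubectl get endpoints postgres-service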
I have a Golang microservice application with the following Kubernetes manifest configuration:
apiVersion: v1 # Service for accessing store application (this) from Ingress...
kind: Service
metadata:
name: store-internal-service
namespace: store-namespace
spec:
type: ClusterIP
selector:
app: store-internal-service
ports:
- name: http
port: 8000
targetPort: 8000
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: store-application-service
namespace: store-namespace
labels:
app: store-application-service
spec:
selector:
matchLabels:
app: store-internal-service
template:
metadata:
labels:
app: store-internal-service
spec:
containers:
- name: store-application
image: <image>
envFrom:
- secretRef:
name: project-secret-store
ports:
- containerPort: 8000
protocol: TCP
imagePullPolicy: Always
env:
- name: APPLICATION_PORT
value: "8000"
- name: APPLICATION_HOST
value: "localhost"
terminationGracePeriodSeconds: 60
---
apiVersion: v1
kind: Secret
metadata:
name: project-secret-store
namespace: store-namespace
type: Opaque
stringData:
# Prometheus Server Credentials...
PROMETHEUS_HOST: "prometheus-internal-service"
PROMETHEUS_PORT: "9090"
# POSTGRESQL CONFIGURATION.
DATABASE_HOST: "postgres-internal-service"
DATABASE_PORT: "5432"
DATABASE_USER: "postgres_user"
DATABASE_PASSWORD: "postgres_password"
DATABASE_NAME: "store_db"
Also, for test purposes, I've declared the following variables in order to receive the values from the Secret in my application:
var (
POSTGRES_USER = os.Getenv("DATABASE_USER")
POSTGRES_PASSWORD = os.Getenv("DATABASE_PASSWORD")
POSTGRES_DATABASE = os.Getenv("DATABASE_NAME")
POSTGRES_HOST = os.Getenv("DATABASE_HOST")
POSTGRES_PORT = os.Getenv("DATABASE_PORT")
)
The problem is that when I run my application and later check its logs using kubectl logs <my-application-pod-name> --namespace=store-namespace, it turns out that all of these Go variables are empty, despite the fact that they have all been declared in the Secret.
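A quick way to double-check what actually reaches the container is to dump the environment inside the running pod (the grep pattern here just narrows it to the database variables):

kubectl exec <my-application-pod-name> --namespace=store-namespace -- env | grep DATABASE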
There are probably other issues that could cause this problem, but if there are errors in the configuration to point out, please share your thoughts about it :)
I have a Flask pod that connects to a MongoDB service through the environment variable SERVICE_HOST (DNS discovery didn't work for some reason). When I change something in the MongoDB service and re-apply it, the Flask pod can no longer connect to the service because the service host changes, so I have to recreate the pod manually every time. Is there a way to automate this, sort of like the docker-compose depends_on directive?
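(For reference, the DNS-based form that didn't work for me looked roughly like this, using the service name from the manifest below, instead of the injected SERVICE_HOST variable:)

- name: DB_URI
  value: mongodb://proxy23-db-service:27017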
flask yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: proxy23-api-deployment
labels:
app: proxy23-api
spec:
replicas: 2
selector:
matchLabels:
app: proxy23-api
template:
metadata:
labels:
app: proxy23-api
spec:
containers:
- name: proxy23-api
image: my_image
ports:
- containerPort: 5000
env:
- name: DB_URI
value: mongodb://$(PROXY23_DB_SERVICE_SERVICE_HOST):27017
- name: DB_NAME
value: db
- name: PORT
value: "5000"
imagePullSecrets:
- name: registry-credentials
---
apiVersion: v1
kind: Service
metadata:
name: proxy23-api-service
spec:
selector:
app: proxy23-api
type: NodePort
ports:
- port: 9002
targetPort: 5000
nodePort: 30002
mongodb yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: proxy23-db-deployment
labels:
app: proxy23-db
spec:
replicas: 1
selector:
matchLabels:
app: proxy23-db
template:
metadata:
labels:
app: proxy23-db
spec:
containers:
- name: proxy23-db
image: mongo:bionic
ports:
- containerPort: 27017
volumeMounts:
- name: proxy23-storage
mountPath: /data/db
volumes:
- name: proxy23-storage
persistentVolumeClaim:
claimName: proxy23-db-pvc
---
apiVersion: v1
kind: Service
metadata:
name: proxy23-db-service
spec:
selector:
app: proxy23-db
type: NodePort
ports:
- port: 27017
targetPort: 27017
nodePort: 30003
sonar-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: sonarqube
spec:
replicas: 1
selector:
matchLabels:
app: sonarqube
template:
metadata:
labels:
app: sonarqube
spec:
containers:
- image: 10.150.0.131/devops/sonarqube:1.0
args:
- -Dsonar.web.context=/sonar
name: sonarqube
env:
- name: SONARQUBE_JDBC_PASSWORD
valueFrom:
secretKeyRef:
name: postgres-pwd
key: password
- name: SONARQUBE_JDBC_URL
value: jdbc:postgresql://sonar-postgres:5432/sonar
ports:
- containerPort: 9000
name: sonarqube
sonar-service.yaml
apiVersion: v1
kind: Service
metadata:
labels:
name: sonarqube
name: sonarqube
spec:
type: NodePort
ports:
- port: 80
targetPort: 9000
name: sonarport
selector:
name: sonarqube
sonar-postgres-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: sonar-postgres
spec:
replicas: 1
selector:
matchLabels:
app: sonar-postgres
template:
metadata:
labels:
app: sonar-postgres
spec:
containers:
- image: 10.150.0.131/devops/postgres:12.1
name: sonar-postgres
env:
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: postgres-pwd
key: password
- name: POSTGRES_USER
value: sonar
ports:
- containerPort: 5432
name: postgresport
volumeMounts:
# This name must match the volumes.name below.
- name: data-disk
mountPath: /var/lib/postgresql/data
volumes:
- name: data-disk
persistentVolumeClaim:
claimName: claim-postgres
sonar-postgresql-service.yaml
apiVersion: v1
kind: Service
metadata:
labels:
name: sonar-postgres
name: sonar-postgres
spec:
ports:
- port: 5432
selector:
name: sonar-postgres
Kubernetes version: 1.18.0
Docker version: 19.03
I am having a connection problem between the SonarQube pod and the PostgreSQL pod.
I use the Flannel network plugin.
Can you help with the error?
No log output appears from the PostgreSQL pod.
Try with:
apiVersion: v1
kind: Service
metadata:
labels:
name: sonar-postgres
name: sonar-postgres
spec:
ports:
- port: 5432
selector:
app: sonar-postgres
because it looks like your selector is wrong. The same issue exists in sonar-service.yaml: change name to app in the selector and it should work.
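For completeness, the corrected sonar-service.yaml would look roughly like this (only the selector changes; everything else stays as in your manifest):

apiVersion: v1
kind: Service
metadata:
  labels:
    name: sonarqube
  name: sonarqube
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 9000
      name: sonarport
  selector:
    app: sonarqube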
If you installed PostgreSQL on a managed cloud SQL service, you also need to allow the client IP through its firewall. To check whether this is the issue, try adding the range 0.0.0.0/0, which allows everything; adding only the correct SonarQube IP is the better long-term solution, though.
I'm trying to set up Postgres in my Kubernetes-enabled Docker Desktop. I pulled the images 'postgres:latest' and 'dpage/pgadmin4:latest' from Docker Hub. Below are my YAML files. Everything starts perfectly, and I can open the pgAdmin page from Chrome, but the credentials I have given in the YAML file are not logging me in.
What is causing this issue?
postgres DB:
apiVersion: v1
kind: ConfigMap
metadata:
name: postgres-config
labels:
app: postgres
data:
POSTGRES_DB: app
POSTGRES_USER: postgres
POSTGRES_PASSWORD: password
---
kind: PersistentVolume
apiVersion: v1
metadata:
name: postgres-pv-volume
labels:
type: local
app: postgres
spec:
storageClassName: manual
capacity:
storage: 5Gi
accessModes:
- ReadWriteMany
hostPath:
path: "/mnt/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: postgres-pv-claim
labels:
app: postgres
spec:
storageClassName: manual
accessModes:
- ReadWriteMany
resources:
requests:
storage: 5Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: postgres
labels:
app: postgres
spec:
replicas: 1
selector:
matchLabels:
app: postgres
template:
metadata:
labels:
app: postgres
spec:
containers:
- name: postgres
image: postgres:latest
imagePullPolicy: Never
ports:
- containerPort: 5432
envFrom:
- configMapRef:
name: postgres-config
volumeMounts:
- mountPath: /var/lib/postgresql/data
name: app
volumes:
- name: app
persistentVolumeClaim:
claimName: postgres-pv-claim
---
apiVersion: v1
kind: Service
metadata:
name: postgres
labels:
app: postgres
spec:
type: NodePort
ports:
- port: 5432
selector:
app: postgres
pgAdmin:
apiVersion: v1
kind: ConfigMap
metadata:
name: pgadmin-config
labels:
app: pgadmin4
data:
PGADMIN_DEFAULT_EMAIL: ap#a.com
PGADMIN_DEFAULT_PASSWORD: posdt
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: pgadmin4
labels:
app: pgadmin4
spec:
replicas: 1
selector:
matchLabels:
app: pgadmin4
template:
metadata:
labels:
app: pgadmin4
spec:
containers:
- name: pgadmin4
image: dpage/pgadmin4:latest
imagePullPolicy: Never
ports:
- containerPort: 80
envFrom:
- configMapRef:
name: pgadmin-config
---
apiVersion: v1
kind: Service
metadata:
name: pgadmin4
labels:
app: pgadmin4
spec:
type: NodePort
ports:
- port: 80
selector:
app: pgadmin4
I just deployed this and was able to connect to the database. See below:
postgres-7cb54957dd-qkfd4 1/1 Running 0 13s
# kubectl exec -it postgres-7cb54957dd-qkfd4 sh
# psql -h 127.0.0.1 -U postgres -q -d app -c 'SELECT 1'
?column?
----------
1
(1 row)
I used the below YAML with slight modifications (imagePullPolicy: Always and an emptyDir volume instead of the PersistentVolumeClaim):
apiVersion: v1
kind: ConfigMap
metadata:
name: postgres-config
labels:
app: postgres
data:
POSTGRES_DB: app
POSTGRES_USER: postgres
POSTGRES_PASSWORD: password
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: postgres
labels:
app: postgres
spec:
replicas: 1
selector:
matchLabels:
app: postgres
template:
metadata:
labels:
app: postgres
spec:
containers:
- name: postgres
image: postgres:latest
imagePullPolicy: Always
ports:
- containerPort: 5432
envFrom:
- configMapRef:
name: postgres-config
volumeMounts:
- mountPath: /var/lib/postgresql/data
name: app
volumes:
- name: app
emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
name: postgres
labels:
app: postgres
spec:
type: NodePort
ports:
- port: 5432
selector:
app: postgres
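Inside the cluster (for example from the pgAdmin pod), the database is then reachable at host postgres on port 5432, i.e. the service name. If you instead want to connect from outside the cluster, look up the NodePort that Kubernetes assigned to the service, since none is pinned in the manifest:

kubectl get svc postgres

The PORT(S) column shows something like 5432:3xxxx/TCP, and the 3xxxx part is the port to use on localhost.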