odoo in k8s: Odoo pod running then crashing - postgresql

I am trying to deploy Odoo in k8s. I have used the YAML files below for Odoo, Postgres, and the services.
The Odoo pod keeps crashing. The logs show:
could not translate host name "db" to address: Temporary failure in name resolution
apiVersion: apps/v1
kind: Deployment
metadata:
  name: odoo3
spec:
  replicas: 1
  selector:
    matchLabels:
      app: odoo3
  template:
    metadata:
      labels:
        app: odoo3
    spec:
      containers:
        - name: odoo3
          image: odoo
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
          env:
            - name: POSTGRES_DB
              value: "postgres"
            - name: POSTGRES_PASSWORD
              value: "postgres"
            - name: POSTGRES_USER
              value: "postgres"
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: odoo3
  labels:
    app: odoo3
spec:
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: odoo3

You need to specify the environment variable HOST:
env:
  - name: POSTGRES_DB
    value: "postgres"
  - name: POSTGRES_PASSWORD
    value: "postgres"
  - name: POSTGRES_USER
    value: "postgres"
  - name: HOST
    value: "your-postgres-service-name"
Here your-postgres-service-name should point to the Service in front of your Postgres database container or server.
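An alternative sketch: the log complains about the host name "db" because the stock Odoo image defaults to a database host called db when HOST is not set, so creating a Service with that exact name also makes the lookup succeed. This assumes (hypothetically) that your Postgres pods carry the label app: postgres; adjust the selector to whatever labels your Postgres Deployment actually uses.

```yaml
# Sketch: a Service named "db" so the Odoo image's default database
# host name resolves inside the cluster. The selector (app: postgres)
# is an assumption -- match it to your Postgres pod labels.
apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  selector:
    app: postgres
  ports:
    - port: 5432
      targetPort: 5432
```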

Related

Keycloak with postgres on minikube: cannot connect to database

I have one pod with postgres and one pod with keycloak on minikube.
The keycloak pod, deployed via a Helm chart from codecentric (chart version 17.0.1, application version 16.1.1), is failing to initialize.
I have inspected the logs, and it is failing to connect to the database:
FATAL [org.keycloak.services] (ServerService Thread Pool -- 64) Error during startup: java.lang.RuntimeException: Failed to connect to database
...
Caused by: org.postgresql.util.PSQLException: Connection to 127.0.0.1:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
values.yml used to deploy the helm chart
postgresql:
  enabled: false
extraEnv: |
  - name: DB_VENDOR
    value: postgres
  - name: DB_ADDR
    value: "127.0.0.1"
  - name: DB_PORT
    value: "5432"
  - name: DB_DATABASE
    value: keycloak
  - name: DB_USER
    value: keycloak
  - name: DB_PASSWORD
    value: keycloak
  - name: KEYCLOAK_USER
    value: "admin"
  - name: KEYCLOAK_PASSWORD
    value: "admin"
  - name: PROXY_ADDRESS_FORWARDING
    value: "true"
  - name: JDBC_PARAMS
    value: "connectTimeout=30000"
Files used to deploy the postgresql pod:
postgres-configmap.yml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config-old
  labels:
    app: postgres-old
data:
  POSTGRES_DB: keycloak
  POSTGRES_USER: keycloak
  POSTGRES_PASSWORD: keycloak
postgres-deployment.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-old
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres-old
  template:
    metadata:
      labels:
        app: postgres-old
    spec:
      containers:
        - name: postgres-old
          image: docker.io/library/postgres:14.4
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          envFrom:
            - configMapRef:
                name: postgres-config-old
          volumeMounts:
            - mountPath: /var/lib/postgresql/14/data
              name: postgredb-old
      volumes:
        - name: postgredb-old
          persistentVolumeClaim:
            claimName: postgres-pv-claim-old
postgres-service.yml:
apiVersion: v1
kind: Service
metadata:
  name: postgres-old
  labels:
    app: postgres-old
spec:
  type: NodePort
  ports:
    - port: 5432
  selector:
    app: postgres-old
postgres-storage.yml:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv-volume-old
  labels:
    type: local
    app: postgres-old
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/mnt/data"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-pv-claim-old
  labels:
    app: postgres-old
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
I've also tried to expose an external port with:
minikube service postgres-old --url
and to use that port in values.yml instead of 5432, but with no luck.
I am running minikube on WSL2.

Can't authenticate MongoDb on Kubernetes cluster

When I try to connect to MongoDB running on a Kubernetes cluster with mongo -u admin -p password -authenticationDatabase admin, I get this error:
{"mechanism":"SCRAM-SHA-1","speculative":false,"principalName":"admin","authenticationDatabase":"admin","remote":"192.168.65.3:47486","extraInfo":{},"error":"UserNotFound: Could not find user \"admin\" for db \"admin\""}}
Below is the yaml file I'm using to create the MongoDb service.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb-statefulset
spec:
  serviceName: "mongodb-service"
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
        - name: mongodb
          image: mongo
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
          ports:
            - containerPort: 27017
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              value: admin
            - name: MONGO_INITDB_ROOT_PASSWORD
              value: password
          volumeMounts:
            - mountPath: /data/db
              name: data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  type: LoadBalancer
  selector:
    app: mongodb
  ports:
    - port: 27017
      targetPort: 27017
I've tried everything and it still doesn't work. I appreciate any help.
Try something like the following, using mongod as the command:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: mongo
  name: mongodb-service
spec:
  type: NodePort
  ports:
    - name: "http"
      port: 27017
      protocol: TCP
      targetPort: 27017
  selector:
    service: mongo
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo
spec:
  replicas: 1
  # apps/v1 requires an explicit selector matching the pod template labels
  selector:
    matchLabels:
      service: mongo
  template:
    metadata:
      labels:
        service: mongo
      name: mongodb-service
    spec:
      containers:
        - args:
            - mongod
          image: mongo:latest
          name: mongo
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              value: admin
            - name: MONGO_INITDB_ROOT_PASSWORD
              value: admin

Can't reach Postgres through kubernetes service

I'm trying to deploy Postgres, but I am unable to reach it through its Service using the domain name testapp-postgres.default.svc.cluster.local.
However, I get the following response:
django.db.utils.OperationalError: could not connect to server: Connection refused
    Is the server running on host "testapp-postgres.default.svc.cluster.local" (10.43.31.182) and accepting
    TCP/IP connections on port 5432?
The address matches the service:
default  testapp-postgres  ClusterIP  10.43.31.182  5432/TCP  20m
I am using the following deployment and service
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: testapp-postgres
  labels:
    app: testapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: testapp
  template:
    metadata:
      labels:
        app: testapp
    spec:
      containers:
        - name: testapp-postgres
          image: library/postgres:12-alpine
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_DB
              valueFrom:
                secretKeyRef:
                  key: DB_NAME
                  name: testapp-secret
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: DB_PASSWORD
                  name: testapp-secret
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  key: DB_USER
                  name: testapp-secret
          volumeMounts:
            - mountPath: /var/lib/postgresql/data/
              name: testapp-postgres-data
      volumes:
        - name: testapp-postgres-data
          persistentVolumeClaim:
            claimName: testapp-postgres-data
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: testapp
  name: testapp-postgres
spec:
  selector:
    app: testapp
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432
      name: postgres

SonarQube + Postgresql Connection refused error in Kubernetes Cluster

sonar-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sonarqube
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sonarqube
  template:
    metadata:
      labels:
        app: sonarqube
    spec:
      containers:
        - image: 10.150.0.131/devops/sonarqube:1.0
          args:
            - -Dsonar.web.context=/sonar
          name: sonarqube
          env:
            - name: SONARQUBE_JDBC_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-pwd
                  key: password
            - name: SONARQUBE_JDBC_URL
              value: jdbc:postgresql://sonar-postgres:5432/sonar
          ports:
            - containerPort: 9000
              name: sonarqube
sonar-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    name: sonarqube
  name: sonarqube
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 9000
      name: sonarport
  selector:
    name: sonarqube
sonar-postgres-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sonar-postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sonar-postgres
  template:
    metadata:
      labels:
        app: sonar-postgres
    spec:
      containers:
        - image: 10.150.0.131/devops/postgres:12.1
          name: sonar-postgres
          env:
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-pwd
                  key: password
            - name: POSTGRES_USER
              value: sonar
          ports:
            - containerPort: 5432
              name: postgresport
          volumeMounts:
            # This name must match the volumes.name below.
            - name: data-disk
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: data-disk
          persistentVolumeClaim:
            claimName: claim-postgres
sonar-postgresql-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    name: sonar-postgres
  name: sonar-postgres
spec:
  ports:
    - port: 5432
  selector:
    name: sonar-postgres
Kubernetes version: 1.18.0
Docker version: 19.03
I am having a connection problem between the SonarQube pod and the PostgreSQL pod. I use the flannel network plugin, and the PostgreSQL pod produces no log output. Can you help with the error?
Try with:
apiVersion: v1
kind: Service
metadata:
  labels:
    name: sonar-postgres
  name: sonar-postgres
spec:
  ports:
    - port: 5432
  selector:
    app: sonar-postgres
because it looks like your selector is wrong: it targets the label name: sonar-postgres, while the pods are labeled app: sonar-postgres. sonar-service.yaml has the same issue; change name to app there as well and it should work.
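Applying the same selector fix to sonar-service.yaml might look like the sketch below, based on the manifests above, where the SonarQube pods are labeled app: sonarqube:

```yaml
# Sketch of the corrected sonar-service.yaml: the selector must match
# the pod template label (app: sonarqube), not the metadata name.
apiVersion: v1
kind: Service
metadata:
  labels:
    name: sonarqube
  name: sonarqube
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 9000
      name: sonarport
  selector:
    app: sonarqube
```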
If you installed PostgreSQL on a cloud SQL service, you need to allow the client IP through the firewall. To confirm this is the issue, try allowing 0.0.0.0/0, which opens access from everywhere; however, allowing only the correct SonarQube IP is the better long-term solution.

kubernetes creating statefulset fail

I am trying to create a StatefulSet with the definition below, but I get this error:
error: unable to recognize "wordpress-database.yaml": no matches for kind "StatefulSet" in version "apps/v1beta2"
What's wrong? The YAML file is:
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: wordpress-database
spec:
  selector:
    matchLabels:
      app: blog
  serviceName: "blog"
  replicas: 1
  template:
    metadata:
      labels:
        app: blog
    spec:
      containers:
        - name: database
          image: mysql:5.7
          ports:
            - containerPort: 3306
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: rootPassword
            - name: MYSQL_DATABASE
              value: database
            - name: MYSQL_USER
              value: user
            - name: MYSQL_PASSWORD
              value: password
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
        - name: blog
          image: wordpress:latest
          ports:
            - containerPort: 80
          env:
            - name: WORDPRESS_DB_HOST
              value: 127.0.0.1:3306
            - name: WORDPRESS_DB_NAME
              value: database
            - name: WORDPRESS_DB_USER
              value: user
            - name: WORDPRESS_DB_PASSWORD
              value: password
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        resources:
          requests:
            storage: 1Gi
The apiVersion of StatefulSet should be:
apiVersion: apps/v1
per the official documentation.
Good luck.
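With that change, the head of the manifest would read as below; the rest of the spec from the question stays unchanged:

```yaml
# StatefulSet has been served from apps/v1 since Kubernetes 1.9;
# the apps/v1beta2 group was removed in 1.16, hence the error.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: wordpress-database
```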