GKE pods not connecting to Cloudsql - kubernetes

My app can't seem to connect to the proxy, and thus to my Cloud SQL database.
Below is my setup:
my-simple-app.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    name: web
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      name: web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  template:
    metadata:
      labels:
        name: web
    spec:
      nodeSelector:
        cloud.google.com/gke-nodepool: default-pool
      containers:
      - image: joelaw/nameko-hello:0.2
        name: web
        env:
        - name: DB_HOST
          value: 127.0.0.1
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: cloudsql-db-credentials
              key: password
        - name: DB_USER
          valueFrom:
            secretKeyRef:
              name: cloudsql-db-credentials
              key: username
        ports:
        - containerPort: 3000
          name: http-server
      - image: gcr.io/cloudsql-docker/gce-proxy:1.09
        name: cloudsql-proxy
        command: ["/cloud_sql_proxy", "--dir=/cloudsql",
                  "-instances=spheric-veric-task:asia-southeast1:authdb:5432",
                  "-credential_file=/secrets/cloudsql/credentials.json"]
        volumeMounts:
        - name: cloudsql-instance-credentials
          mountPath: /secrets/cloudsql
          readOnly: true
        - name: ssl-certs
          mountPath: /etc/ssl/certs
        - name: cloudsql
          mountPath: /cloudsql
      volumes:
      - name: cloudsql-instance-credentials
        secret:
          secretName: cloudsql-instance-credentials
      - name: ssl-certs
        hostPath:
          path: /etc/ssl/certs
      - name: cloudsql
        emptyDir: {}
I believe I had set up the secrets correctly.
Below is some data collected from the instance.
The pods run happily:
web-69c7777c68-s2jt6 2/2 Running 0 9m
web-69c7777c68-zbwtv 2/2 Running 0 9m
When I run kubectl logs web-69c7777c68-zbwtv -c cloudsql-proxy, it records this:
2019/04/04 03:25:35 using credential file for authentication; email=auth-db-user@spheric-verve-228610.iam.gserviceaccount.com
2019/04/04 03:25:35 Listening on /cloudsql/spheric-veric-task:asia-southeast1:authdb:5432/.s.PGSQL.5432 for spheric-veric-task:asia-southeast1:authdb:5432
2019/04/04 03:25:35 Ready for new connections
Since the app is not configured to connect to the db, what I did was to shell into the pod with:
kubectl exec -it web-69c7777c68-mrdpn -- /bin/bash
# Followed by installing the PostgreSQL client:
apt-get install postgresql
# Trying to connect to cloudsql:
psql -h 127.0.0.1 -p 5432 -U
When I run psql in the container:
psql: could not connect to server: Connection refused
Is the server running on host "127.0.0.1" and accepting
TCP/IP connections on port 5432?
Can anyone of you kindly advise what should I do to connect to the DB?

You are specifying the instance connection string incorrectly, so the proxy is listening on a Unix socket in the /cloudsql/ directory instead of on a TCP port.
To tell the proxy to listen on a TCP port, use the following:
-instances=<INSTANCE_CONNECTION_NAME>=tcp:5432
Otherwise, the following format creates a unix socket (defaulting to the /cloudsql directory):
-instances=<INSTANCE_CONNECTION_NAME>
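Applied to the deployment above, the proxy container's command would become something like this (a sketch; it assumes the actual instance connection name is spheric-veric-task:asia-southeast1:authdb, i.e. without the trailing :5432 that was mistakenly appended to it):

```yaml
- image: gcr.io/cloudsql-docker/gce-proxy:1.09
  name: cloudsql-proxy
  command: ["/cloud_sql_proxy",
            "-instances=spheric-veric-task:asia-southeast1:authdb=tcp:5432",
            "-credential_file=/secrets/cloudsql/credentials.json"]
```

Alternatively, the Unix socket the proxy is already creating can be used as-is, since psql accepts a socket directory as the host, e.g. psql -h "/cloudsql/spheric-veric-task:asia-southeast1:authdb:5432" -U <user> (with <user> standing in for your database user).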

Related

Cannot see PostgreSQL in Kubernetes Through a Browser

I am testing a PostgreSQL configuration in kubernetes.
Windows 11
HyperV
Minikube
Everything works (or seems to work) fine
I can connect to the database via
kubectl exec -it pod/postgres-0 -- bash
bash-5.1$ psql --username=$POSTGRES_USER -W --host=localhost --port=5432 --dbname=pg_test
Password:
psql (13.6)
Type "help" for help.
pg_test=#
I can also view the database through DBeaver.
But when I try to connect from any browser,
localhost:5432
I get errors such as:
Firefox cannot connect,
ERR_CONNECTION_REFUSED
I have no proxy
when I try
kubectl port-forward service/postgres-service 5432:5432
Forwarding from 127.0.0.1:5432 -> 5432
Forwarding from [::1]:5432 -> 5432
Handling connection for 5432
Handling connection for 5432
... this line repeats indefinitely for each connection attempt
Handling connection for 5432
Handling connection for 5432
...
Here is my YAML config file
...
apiVersion: v1
data:
  db: pg_test
  user: admin
kind: ConfigMap
metadata:
  name: postgres-config
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: "postgres"
  replicas: 2
  selector:
    matchLabels:
      env: prod
      domain: infrastructure
  template:
    metadata:
      labels:
        env: prod
        domain: infrastructure
    spec:
      terminationGracePeriodSeconds: 20
      securityContext:
        runAsUser: 70
        fsGroup: 70
      containers:
      - name: kubia-postgres
        image: postgres:13-alpine
        env:
        - name: POSTGRES_PASSWORD
          value: admin
          # valueFrom:
          #   secretKeyRef:
          #     name: postgres-secret
          #     key: password
        - name: POSTGRES_USER
          value: admin
          # valueFrom:
          #   configMapKeyRef:
          #     name: postgres-config
          #     key: user
        - name: POSTGRES_DB
          valueFrom:
            configMapKeyRef:
              name: postgres-config
              key: db
        ports:
        - containerPort: 5432
          protocol: TCP
        volumeMounts:
        - name: postgres-test-volume
          mountPath: /var/lib/postgresql
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
      volumes:
      - name: postgres-test-volume
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
  labels:
    env: prod
    domain: infrastructure
spec:
  ports:
  - port: 5432
    targetPort: 5432
    protocol: TCP
    name: pgsql
  clusterIP: None
  selector:
    env: prod
    domain: infrastructure
What am I doing wrong?
If you want to access your Postgres instance using a web browser, you need to deploy and configure something like pgAdmin.
You haven't opened the service to the internet; you were only tunneling the port to your localhost. To expose it, you will need one of these Kubernetes service types:
Port forwarding
NodePort: maps a port on your host to the service.
ClusterIP: gives your service an internal IP to be referred to in-cluster.
LoadBalancer: assigns an IP or a cloud provider's load balancer to the service, effectively making it available to external traffic.
Since you are using Minikube, you should try a LoadBalancer or a ClusterIP.
By the way, you are creating a service without a type and you are not giving it an IP.
The important parts for a service to work in development are the selector labels, the port, and the type.
Exposing an IP || Docs
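For illustration, a NodePort variant of the Service above might look like this (a sketch; the name postgres-nodeport and the nodePort value 30432 are arbitrary choices, while the selector reuses the labels already in the StatefulSet):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres-nodeport
spec:
  type: NodePort
  selector:
    env: prod
    domain: infrastructure
  ports:
  - port: 5432
    targetPort: 5432
    nodePort: 30432
```

On Minikube, minikube service postgres-nodeport --url prints a reachable address. Keep in mind that a browser still cannot speak the Postgres wire protocol, so point psql or DBeaver at that address rather than Firefox.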

Connection Refused when trying to create a new user in MongoDB on Kubernetes

I am using a custom MongoDB image with a read-only file system and trying to deploy it to Kubernetes locally on my Mac using Kind. Below is my statefulset.yaml.
I cannot run a custom script called mongo-init.sh, which creates my db users. Kubernetes only allows one point of entry, and so I am using that to run my db startup command: mongod -f /docker-entrypoint-initdb.d/mongod.conf. It doesn't allow me to run another command or script after that, even if I append a & at the end. I tried using "/bin/bash", "-c", "command1", "command2", but that doesn't execute command2 if my first command is the mongod initialization command.
Lastly, if I skip the mongod initialization command, the database is not up and running and I cannot connect to it, so I get a connection refused. When I kubectl exec onto the container, I can get into the database using the mongo shell, but I cannot view anything or really execute any commands because I get an unauthorized error.
Totally lost on this one and could use some help.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  annotations:
  creationTimestamp: null
  labels:
    app: mydb
  name: mongodb
  namespace: mongodb
spec:
  serviceName: "dbservice"
  replicas: 1
  selector:
    matchLabels:
      app: mydb
  template:
    metadata:
      annotations:
      creationTimestamp: null
      labels:
        app: mydb
    spec:
      # terminationGracePeriodSeconds: 10
      imagePullSecrets:
      - name: regcred
      containers:
      - env:
        - name: MONGODB_APPLICATION_USER_PWD
          valueFrom:
            configMapKeyRef:
              key: MONGODB_APPLICATION_USER_PWD
              name: myconfigmap
        - name: MONGO_INITDB_DATABASE
          valueFrom:
            configMapKeyRef:
              key: MONGO_INITDB_DATABASE
              name: myconfigmap
        - name: MONGO_INITDB_ROOT_PASSWORD
          valueFrom:
            configMapKeyRef:
              key: MONGO_INITDB_ROOT_PASSWORD
              name: myconfigmap
        - name: MONGO_INITDB_ROOT_USERNAME
          valueFrom:
            configMapKeyRef:
              key: MONGO_INITDB_ROOT_USERNAME
              name: myconfigmap
        image: 'localhost:5000/mongo-db:custom'
        command: ["/docker-entrypoint-initdb.d/mongo-init.sh"]
        name: mongodb
        ports:
        - containerPort: 27017
        resources: {}
        volumeMounts:
        - mountPath: /data/db
          name: mongodata
        - mountPath: /etc/mongod.conf
          subPath: mongod.conf
          name: mongodb-conf
          readOnly: true
        - mountPath: /docker-entrypoint-initdb.d/mongo-init.sh
          subPath: mongo-init.sh
          name: mongodb-conf
      initContainers:
      - name: init-mydb
        image: busybox:1.28
        command: ["chown"]
        args: ["-R", "998:998", "/data"]
        volumeMounts:
        - name: mongodata
          mountPath: /data/db
      volumes:
      - name: mongodata
        persistentVolumeClaim:
          claimName: mongo-volume-claim
      - name: mongodb-conf
        configMap:
          name: myconfigmap
          defaultMode: 0777
The error message I see when I do kubectl logs <podname>:
MongoDB shell version v4.4.1
connecting to: mongodb://localhost:27017/admin?compressors=disabled&gssapiServiceName=mongodb
Error: couldn't connect to server localhost:27017, connection attempt failed: SocketException: Error connecting to localhost:27017 (127.0.0.1:27017) :: caused by :: Connection refused :
connect@src/mongo/shell/mongo.js:374:17
@(connect):2:6
exception: connect failed
exiting with code 1
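One detail worth noting from the attempts above: bash -c takes the entire script as its single next argument; any further list items become positional parameters ($0, $1, ...) rather than extra commands. So chaining has to happen inside one string, sketched here with placeholder commands:

```shell
# Only the first argument after -c is executed as a script;
# chain commands inside that one string:
bash -c 'echo first && echo second'

# In a container spec this shape would be, e.g. (hypothetical, assuming
# mongo-init.sh can run against an already-started server):
#   command: ["/bin/bash", "-c",
#             "mongod -f /docker-entrypoint-initdb.d/mongod.conf & /docker-entrypoint-initdb.d/mongo-init.sh; wait"]
```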

How to run Keycloak as a second container after the first container (a Postgres database) starts up, in a multi-container pod on Kubernetes?

In a multi-container pod:
step-1: deploy the first container, a Postgres database, and create a schema
step-2: wait until the Postgres container comes up
step-3: then start the second container, Keycloak
I have written the deployment file below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: idms
  namespace: default
  labels:
    app: idms
spec:
  replicas: 1
  selector:
    matchLabels:
      app: idms
  template:
    metadata:
      labels:
        app: idms
    spec:
      containers:
      - name: postgres
        image: registry.prod.srv.da.nsn-rdnet.net/edge/postgres:12.3-alpine
        imagePullPolicy: "IfNotPresent"
        ports:
        - containerPort: 5432
        lifecycle:
          postStart:
            exec:
              command: ["/bin/bash", "-c", "sleep 5 && PGPASSWORD=$POSTGRES_PASSWORD psql $POSTGRES_DB -U $POSTGRES_USER -c 'CREATE SCHEMA IF NOT EXISTS keycloak;'"]
        envFrom:
        - configMapRef:
            name: postgres-config
      - name: keycloak
        image: quay.io/keycloak/keycloak:10.0.1
        env:
        - name: KEYCLOAK_USER
          value: "admin"
        - name: KEYCLOAK_PASSWORD
          value: "admin"
        - name: REALM
          value: "ntc"
        - name: PROXY_ADDRESS_FORWARDING
          value: "true"
        - name: DB_ADDR
          value: "localhost"
        - name: DB_PORT
          value: "5432"
        - name: DB_DATABASE
          value: "postgresdb"
        - name: DB_USER
          value: "xxxxxxxxx"
        - name: DB_PASSWORD
          value: "xxxxxxxxx"
        - name: DB_SCHEMA
          value: "keycloak"
        - name: KEYCLOAK_IMPORT
          value: "/opt/jboss/keycloak/startup/elements/realm.json"
        volumeMounts:
        - mountPath: /var/lib/postgresql/data
          name: postgredb
        - mountPath: /opt/jboss/keycloak/startup/elements
          name: elements
        ports:
        - name: http
          containerPort: 8080
        - name: https
          containerPort: 8443
        readinessProbe:
          httpGet:
            path: /auth/realms/master
            port: 8080
      volumes:
      - name: elements
        configMap:
          name: keycloak-elements
      - name: postgredb
        persistentVolumeClaim:
          claimName: postgres-pv-claim
but Keycloak starts with the embedded H2 database instead of Postgres. If I use an init container to nslookup Postgres in the deployment file, like below:
initContainers:
- name: init-postgres
  image: busybox
  command: ['sh', '-c', 'until nslookup postgres; do echo waiting for postgres; sleep 2; done;']
the pod gets stuck at "PodInitializing".
You forgot to add
- name: DB_VENDOR
  value: POSTGRES
to the deployment YAML file; without it, Keycloak defaults to the embedded H2 database.
YAML ref file: https://github.com/harsh4870/Keycloack-postgres-kubernetes-deployment/blob/main/keycload-deployment.yaml

k8s docker container mounts the host, but fails to output log files

The k8s docker container mounts the host, but fails to output log files to the host. Can you tell me the reason?
The Kubernetes YAML looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: db
  namespace: test
spec:
  replicas: 1
  template:
    spec:
      containers:
      - name: db
        image: postgres:11.0-alpine
        command:
        - "docker-entrypoint.sh"
        - "postgres"
        - "-c"
        - "logging_collector=on"
        - "-c"
        - "log_directory=/var/lib/postgresql/log"
        ports:
        - containerPort: 5432
          protocol: TCP
        volumeMounts:
        - name: log-fs
          mountPath: /var/lib/postgresql/log
      volumes:
      - name: log-fs
        hostPath:
          path: /var/log

How to run pgAdmin in OpenShift?

I'm trying to run a pgAdmin container (the one I'm using comes from here) in an OpenShift cluster where I don't have admin privileges and the admin does not want to allow containers to run as root for security reasons.
The error I'm currently receiving looks like this:
Error with Standard Image
I created a Dockerfile that creates that directory ahead of time based on the image linked above and I get this error:
Error with Edited Image
Is there any way to run pgAdmin within OpenShift? I want to be able to let DB admins log into the instance of pgAdmin and configure the DB from there, without having to use the OpenShift CLI and port forwarding. When I use that method the port-forwarding connection drops very frequently.
Edit1:
Is there a way that I should edit the Dockerfile and entrypoint.sh file found on pgAdmin's github?
Edit2:
It looks like this is a bug with pgAdmin... :/
https://www.postgresql.org/message-id/15470-c84b4e5cc424169d%40postgresql.org
To work around these errors, you need to add a writable volume to the container and set pgadmin's configuration to use that directory.
Permission Denied: '/var/lib/pgadmin'
Permission Denied: '/var/log/pgadmin'
The OpenShift/Kubernetes YAML example below demonstrates this by supplying a custom /pgadmin4/config_local.py as documented here. This allows you to run the image as a container with regular privileges.
Note that the configuration files' base directory (/var/lib/pgadmin/data) still needs to be underneath the mount point (/var/lib/pgadmin/), as pgAdmin's initialization code tries to create/change ownership of that directory, which is not allowed on mount point directories inside the container.
apiVersion: v1
kind: List
items:
- apiVersion: v1
  kind: Secret
  metadata:
    labels:
      app: pgadmin-app
    name: pgadmin
  type: Opaque
  stringData:
    username: admin
    password: DEFAULT_PASSWORD
- apiVersion: v1
  kind: ServiceAccount
  metadata:
    annotations:
      serviceaccounts.openshift.io/oauth-redirectreference.pgadmin: '{"kind":"OAuthRedirectReference","apiVersion":"v1","reference":{"kind":"Route","name":"pgadmin"}}'
    labels:
      app: pgadmin-app
    name: pgadmin
- apiVersion: v1
  kind: ConfigMap
  metadata:
    labels:
      app: pgadmin-app
    name: pgadmin
  data:
    config_local.py: |-
      import os
      _BASEDIR = '/var/lib/pgadmin/data'
      LOG_FILE = os.path.join(_BASEDIR, 'logfile')
      SQLITE_PATH = os.path.join(_BASEDIR, 'sqlite.db')
      STORAGE_DIR = os.path.join(_BASEDIR, 'storage')
      SESSION_DB_PATH = os.path.join(_BASEDIR, 'sessions')
    servers.json: |-
      {
        "Servers": {
          "1": {
            "Name": "postgresql",
            "Group": "Servers",
            "Host": "postgresql",
            "Port": 5432,
            "MaintenanceDB": "postgres",
            "Username": "dbuser",
            "SSLMode": "prefer",
            "SSLCompression": 0,
            "Timeout": 0,
            "UseSSHTunnel": 0,
            "TunnelPort": "22",
            "TunnelAuthentication": 0
          }
        }
      }
- apiVersion: apps.openshift.io/v1
  kind: DeploymentConfig
  metadata:
    name: pgadmin
    labels:
      app: pgadmin-app
  spec:
    replicas: 1
    selector:
      app: pgadmin-app
      deploymentconfig: pgadmin
    template:
      metadata:
        labels:
          app: pgadmin-app
          deploymentconfig: pgadmin
        name: pgadmin
      spec:
        serviceAccountName: pgadmin
        containers:
        - env:
          - name: PGADMIN_DEFAULT_EMAIL
            valueFrom:
              secretKeyRef:
                key: username
                name: pgadmin
          - name: PGADMIN_DEFAULT_PASSWORD
            valueFrom:
              secretKeyRef:
                key: password
                name: pgadmin
          - name: PGADMIN_LISTEN_PORT
            value: "5050"
          - name: PGADMIN_LISTEN_ADDRESS
            value: 0.0.0.0
          image: docker.io/dpage/pgadmin4:4
          livenessProbe:
            failureThreshold: 3
            initialDelaySeconds: 30
            httpGet:
              path: /misc/ping
              port: 5050
              scheme: HTTP
            periodSeconds: 60
            successThreshold: 1
            timeoutSeconds: 1
          name: pgadmin
          ports:
          - containerPort: 5050
            protocol: TCP
          readinessProbe:
            failureThreshold: 10
            initialDelaySeconds: 3
            httpGet:
              path: /misc/ping
              port: 5050
              scheme: HTTP
            periodSeconds: 5
            successThreshold: 1
            timeoutSeconds: 1
          volumeMounts:
          - mountPath: /pgadmin4/config_local.py
            name: pgadmin-config
            subPath: config_local.py
          - mountPath: /pgadmin4/servers.json
            name: pgadmin-config
            subPath: servers.json
          - mountPath: /var/lib/pgadmin
            name: pgadmin-data
        - image: docker.io/openshift/oauth-proxy:latest
          name: pgadmin-oauth-proxy
          ports:
          - containerPort: 5051
            protocol: TCP
          args:
          - --http-address=:5051
          - --https-address=
          - --openshift-service-account=pgadmin
          - --upstream=http://localhost:5050
          - --cookie-secret=bdna987REWQ1234
        volumes:
        - name: pgadmin-config
          configMap:
            name: pgadmin
            defaultMode: 0664
        - name: pgadmin-data
          emptyDir: {}
- apiVersion: v1
  kind: Service
  metadata:
    name: pgadmin-oauth-proxy
    labels:
      app: pgadmin-app
  spec:
    ports:
    - name: 80-tcp
      protocol: TCP
      port: 80
      targetPort: 5051
    selector:
      app: pgadmin-app
      deploymentconfig: pgadmin
- apiVersion: route.openshift.io/v1
  kind: Route
  metadata:
    labels:
      app: pgadmin-app
    name: pgadmin
  spec:
    port:
      targetPort: 80-tcp
    tls:
      insecureEdgeTerminationPolicy: Redirect
      termination: edge
    to:
      kind: Service
      name: pgadmin-oauth-proxy
OpenShift by default doesn't allow containers to run with root privileges; you can add the anyuid Security Context Constraint (SCC) to the default service account of the project where you are deploying the container.
Adding the SCC for the project:
$ oc adm policy add-scc-to-user anyuid system:serviceaccount:<your-project>:default
scc "anyuid" added to: ["system:serviceaccount:data-base-administration:default"]
$ oc get scc
NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES
anyuid false [] MustRunAs RunAsAny RunAsAny RunAsAny 10 false [configMap downwardAPI emptyDir persistentVolumeClaim projected secret]
PGAdmin deployed:
$ oc describe pod pgadmin4-4-fjv4h
Name: pgadmin4-4-fjv4h
Namespace: data-base-administration
Priority: 0
PriorityClassName: <none>
Node: host/IP
Start Time: Mon, 18 Feb 2019 23:22:30 -0400
Labels: app=pgadmin4
deployment=pgadmin4-4
deploymentconfig=pgadmin4
Annotations: openshift.io/deployment-config.latest-version=4
openshift.io/deployment-config.name=pgadmin4
openshift.io/deployment.name=pgadmin4-4
openshift.io/generated-by=OpenShiftWebConsole
openshift.io/scc=anyuid
Status: Running
IP: IP
Controlled By: ReplicationController/pgadmin4-4
Containers:
pgadmin4:
Container ID: docker://ID
Image: dpage/pgadmin4@sha256:SHA
Image ID: docker-pullable://docker.io/dpage/pgadmin4@sha256:SHA
Ports: 80/TCP, 443/TCP
Host Ports: 0/TCP, 0/TCP
State: Running
Started: Mon, 18 Feb 2019 23:22:37 -0400
Ready: True
Restart Count: 0
Environment:
PGADMIN_DEFAULT_EMAIL: secret
PGADMIN_DEFAULT_PASSWORD: secret
Mounts:
/var/lib/pgadmin from pgadmin4-1 (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-74b75 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
pgadmin4-1:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
default-token-74b75:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-74b75
Optional: false
QoS Class: BestEffort
Node-Selectors: node-role.kubernetes.io/compute=true
Tolerations: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 51m default-scheduler Successfully assigned data-base-administration/pgadmin4-4-fjv4h to host
Normal Pulling 51m kubelet, host pulling image "dpage/pgadmin4@sha256:SHA"
Normal Pulled 51m kubelet, host Successfully pulled image "dpage/pgadmin4@sha256:SHA"
Normal Created 51m kubelet, host Created container
Normal Started 51m kubelet, host Started container
I have already replied to a similar issue for a local installation: OSError: [Errno 13] Permission denied: '/var/lib/pgadmin'
For the docker image, you can map /pgadmin4/config_local.py using environment variables; check the Mapped Files and Directories section at https://hub.docker.com/r/dpage/pgadmin4/
This might work if you create a pgadmin user via the Dockerfile and give it permission to write to /var/log/pgadmin.
You can create a user in the Dockerfile using the RUN command; something like this:
# Create pgadmin user
ENV HOME=/pgadmin
RUN mkdir -p ${HOME} && \
    mkdir -p ${HOME}/pgadmin && \
    useradd -u 1001 -r -g 0 -d ${HOME} -s /bin/bash \
            -c "Default Application User" pgadmin
# Set user home and permissions with group 0 and writeable.
RUN chmod -R 700 ${HOME} && chown -R 1001:0 ${HOME}
# Create the log folder and set permissions (a directory needs the
# execute bit to be traversable, so 0700 rather than 0600).
RUN mkdir /var/log/pgadmin && \
    chmod 0700 /var/log/pgadmin && \
    chown 1001:0 /var/log/pgadmin
# Run as 1001 (pgadmin)
USER 1001
Adjust your pgadmin install so it runs as 1001, and I think you should be set.