K8s TLS Secret for Postgres | GKE & Google Cloud SQL Postgres - postgresql

I'm having trouble establishing an SSL connection between a web service and a remotely hosted Postgres database. Using the same cert and key files as the web service, I can connect to the database with tools such as pgAdmin and DataGrip. These files were downloaded from the Postgres instance in the Google Cloud Console.
Issue:
When the Spring Boot service starts up, the following error occurs:
org.postgresql.util.PSQLException: Could not read SSL key file /tls/tls.key
When I look at the Postgres server logs, the error is recorded as
LOG: could not accept SSL connection: UNEXPECTED_RECORD
Setup:
Spring Boot service running on Minikube (local) and GKE connecting to a Google Cloud SQL Postgres instance.
Actions Taken:
I downloaded the client cert & key and created a K8s TLS Secret from them. I also made sure the files can be read from the volume mount by running the following command in the k8s deployment config:
command: ["bin/sh", "-c", "cat /tls/tls.key"]
Here is the datasource URL, which is fed in via an environment variable (DATASOURCE).
"jdbc:postgresql://[Database-Address]:5432/[database]?ssl=true&sslmode=require&sslcert=/tls/tls.crt&sslkey=/tls/tls.key"
Here is the k8s deployment YAML; any idea where I'm going wrong?
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: {{ template "service.name" . }}
  labels:
    release: {{ template "release.name" . }}
    chart: {{ template "chart.name" . }}
    chart-version: {{ template "chart.version" . }}
    release: {{ template "service.fullname" . }}
spec:
  replicas: {{ $.Values.image.replicaCount }}
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: {{ template "service.name" . }}
        release: {{ template "release.name" . }}
        env: {{ $.Values.environment }}
    spec:
      imagePullSecrets:
        - name: {{ $.Values.image.pullSecretsName }}
      containers:
        - name: {{ template "service.name" . }}
          image: {{ $.Values.image.repo }}:{{ $.Values.image.tag }}
          # command: ["bin/sh", "-c", "cat /tls/tls.key"]
          imagePullPolicy: {{ $.Values.image.pullPolicy }}
          volumeMounts:
            - name: tls-cert
              mountPath: "/tls"
              readOnly: true
          ports:
            - containerPort: 80
          env:
            - name: DATASOURCE_URL
              valueFrom:
                secretKeyRef:
                  name: service
                  key: DATASOURCE_URL
            - name: DATASOURCE_USER
              valueFrom:
                secretKeyRef:
                  name: service
                  key: DATASOURCE_USER
            - name: DATASOURCE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: service
                  key: DATASOURCE_PASSWORD
      volumes:
        - name: tls-cert
          projected:
            sources:
              - secret:
                  name: postgres-tls
                  items:
                    - key: tls.crt
                      path: tls.crt
                    - key: tls.key
                      path: tls.key

So I figured it out: I was asking the wrong question!
Google Cloud SQL has a proxy component for the Postgres database, so the problem I was trying to solve (connecting the traditional way) is resolved by using the proxy instead. Instead of dealing with whitelisting IPs, SSL certs, and such, you just spin up the proxy, point it at a GCP credential file, then update your database URI to connect via localhost.
To set up the proxy, you can find directions here. There is a good example of a k8s deployment file here.
One gotcha I did come across was the GCP service account: make sure to add both the Cloud SQL Client AND Cloud SQL Editor roles. I only added Cloud SQL Client to start with and kept getting a 403 error.
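For context, a minimal sketch of the proxy sidecar, assuming the GCP credential file lives in a Secret named cloudsql-credentials and using placeholder values for the instance connection name (neither is from the original post):

      - name: cloudsql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.11
        command: ["/cloud_sql_proxy",
                  "-instances=<project>:<region>:<instance>=tcp:5432",
                  "-credential_file=/secrets/cloudsql/credentials.json"]
        volumeMounts:
          - name: cloudsql-credentials
            mountPath: /secrets/cloudsql
            readOnly: true

The JDBC URL then becomes jdbc:postgresql://127.0.0.1:5432/[database] with no SSL parameters, since the proxy handles encryption and authentication.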

Related

ssh permission denied when connecting from Kubernetes pod to remote host

I'm trying to ssh from a pod into a remote server while specifying an identity file. This fails with the following error:
admin@123.123.123.123: Permission denied (publickey).
I've made sure I can connect from my local host with the same set of public and private keys. It only fails when I try to connect from a bash shell inside the pod's container.
My Job definition is as follows:
---
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-volumes-population-{{ .Release.Revision }}
spec:
  template:
    spec:
      containers:
        - name: populate-volumes
          image: {{ .Values.gitlab.image_repository.repository }}/{{ .Values.phpfpm.image.name }}:{{ .Values.phpfpm.image.version }}
          imagePullPolicy: IfNotPresent
          ports:
            - name: ssh
              containerPort: 22
          args:
            - /bin/bash
            - -c
            - |
              echo "Testing ssh connection..."
              ssh -i /etc/ssh/hetzner_box admin@123.123.123.123
          volumeMounts:
            - name: hetzner-box-identity
              mountPath: /etc/ssh/hetzner_box.pub
              subPath: .pub
            - name: hetzner-box-identity
              mountPath: /etc/ssh/hetzner_box
              subPath: .key
      volumes:
        - name: hetzner-box-identity
          secret:
            secretName: {{ .Release.Name }}-hetzner-box-identity
            defaultMode: 256
            items:
              - key: .pub
                path: .pub
              - key: .key
                path: .key
Edit 1:
After further investigation I noticed that the key pair is passphrase-less. I managed to log in using a different key pair that is protected by a passphrase. My goal is automation, so it is unacceptable to have passphrase-protected keys. Is there a reason the ssh daemon refuses to authenticate a passphrase-less key?
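Not from the original thread, but a small troubleshooting sketch that usually narrows this down: run the client in verbose mode from inside the pod and confirm the mounted key is actually being offered (paths follow the Job definition above):

ls -l /etc/ssh/hetzner_box        # mounted with defaultMode 256 (0400), so it should be owner-readable only
ssh -vvv -i /etc/ssh/hetzner_box admin@123.123.123.123

The -vvv output shows which identities the client offers and which the server rejects, which distinguishes a client-side key-loading problem from a server-side authorized_keys problem.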

Deploying multiple containers in Kubernetes through a deployed app

I am using minikube for deployment on my local machine.
I am deploying an app with the help of helm charts. My deployment script looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "app.fullname" . }}
spec:
  replicas: {{ .Values.replicaCount }}
  spec:
    volumes:
      - name: dockersock
        hostPath:
          path: "/var/run/docker.sock"
    containers:
      - name: {{ .Chart.Name }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        volumeMounts:
          - name: dockersock
            mountPath: "/var/run/docker.sock"
        command: ["/bin/sh", "-c"]
        args:
          - python3 runMyApp.py;
        resources:
          limits:
            nvidia.com/gpu: {{ .Values.numGpus }}
When my script runMyApp.py is executed, it launches 4 other containers, and my understanding was that Kubernetes would deploy them on minikube as well.
But when the other 4 containers were launched, they were deployed on the host machine, just as if I had run some docker run command on my host machine.
To verify that I was not mistaken, I tried to access them from another application running inside the minikube cluster, but I couldn't. Then I tried accessing them from the local environment and I was able to do that.
Is there any flaw in my understanding? If this behaviour is expected, what can I do to deploy the other applications in k8s as well?
In order to create new pods from within your Python script, you should use the Kubernetes Python client library: https://github.com/kubernetes-client/python .
When you run docker run inside your script, Kubernetes isn't aware of those containers; because you mounted the host's /var/run/docker.sock, they are just orphaned containers running directly on your node's host machine.
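As a rough illustration of that suggestion (the pod name, image, and namespace below are placeholders, not taken from the question), creating a pod from inside the cluster with the official client looks something like this:

from kubernetes import client, config

# Inside a pod, use the mounted service-account token; the ServiceAccount
# needs RBAC permission to create pods in the target namespace.
config.load_incluster_config()

v1 = client.CoreV1Api()
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="worker-1"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[client.V1Container(
            name="worker",
            image="busybox",
            command=["sh", "-c", "echo doing some work"],
        )],
    ),
)
v1.create_namespaced_pod(namespace="default", body=pod)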

Unable to connect NodeJs application with Postgres running in Minikube

I am getting started with helm to carry out deployments to Kubernetes, and I am stuck connecting the Node.js application to the Postgres DB. I am using helm to carry out the deployment to K8s.
Below is my YAML file for the application:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "service-chart.fullname" . }}-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: {{ template "service-chart.fullname" . }}-converstionrate
  template:
    metadata:
      labels:
        app: {{ template "service-chart.fullname" . }}-converstionrate
    spec:
      containers:
        - name: {{ template "service-chart.fullname" . }}-converstionrate
          image: <application_image>
          env:
            - name: DB_URL
              value: postgres://{{ template "postgres.fullname" . }}.default.svc.cluster.local:5432/{{ .Values.DbName }}
---
kind: Service
apiVersion: v1
metadata:
  name: {{ template "service-chart.fullname" . }}-service
spec:
  type: NodePort
  selector:
    app: {{ template "service-chart.fullname" . }}-converstionrate
  ports:
    - port: 8080
      targetPort: 3000
Below is my requirements file where I am using the postgres dependency:
dependencies:
  - name: postgresql
    version: "8.1.2"
    repository: "https://kubernetes-charts.storage.googleapis.com/"
Below is the application code where I try to connect to the DB:
if (config.use_env_variable) {
  // sequelize = new Sequelize(process.env[config.use_env_variable], config);
  sequelize = new Sequelize(
    process.env.POSTGRES_HOST,
    process.env.POSTGRES_USER,
    process.env.POSTGRES_PASSWORD,
    process.env.POSTGRES_DIALECT
  );
} else {
  sequelize = new Sequelize(
    config.database,
    config.username,
    config.password,
    config
  );
}
What I am not able to understand is how to connect to the DB, since with the above I am not able to.
Can anyone please help me out here?
I am a newbie with helm, hence not able to figure it out. I have looked into a lot of blogs but somehow it is not clear how it needs to be done. Since the DB is running in one pod and the Node app in another, how do I wire them up together? How do I set the DB environment variables in the YAML so they can be consumed?
FYI --- I am using minikube to deploy as of now.
The application code is available at https://github.com/Vishesh30/Node-express-Postgress-helm
Thanks,
Vishesh.
As mentioned by @David Maze, you need to fix the variable name from DB_URL to POSTGRES_HOST, but there are some other things I could see.
I've tried to reproduce your scenario and the following works for me:
You need to fix the service DNS name in your YAML file:
postgres://{{ template "postgres.fullname" . }}.default.svc.cluster.local:5432/{{ .Values.DbName }}
to
postgres://{{ template "postgresql.fullname" . }}-postgresql.default.svc.cluster.local:5432/{{ .Values.DbName }}
After that you need to pass the database host, username and password to your application. You can do this by overriding the postgresql default variables (because it's a subchart of your application), as described in the Helm documentation, and injecting them into your container using environment variables.
There are a lot of variables, see here.
Add these values to your values.yaml in order to override the postgresql defaults:
postgresql:
  postgresqlUsername: dbuser
  postgresqlPassword: secret # just for example
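The same overrides can also be supplied at install time instead of (or in addition to) values.yaml; the release and chart names here are placeholders, and Helm 3 syntax is assumed:

helm install myapp ./service-chart \
  --set postgresql.postgresqlUsername=dbuser \
  --set postgresql.postgresqlPassword=secret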
You can create a secret file to store the password:
service-chart/templates/secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: dbconnection
type: Opaque
stringData:
  POSTGRES_PASSWORD: {{ .Values.postgresql.postgresqlPassword }}
If you don't want to keep the password in a file, you can try some automation process to rewrite the file before the deployment. You can read more about secrets here.
Now add the variables in your deployment.yaml; POSTGRES_USERNAME will be read from the values.yaml file, and POSTGRES_PASSWORD from the secret:
env:
  - name: POSTGRES_HOST
    value: postgres://{{ template "postgresql.fullname" . }}-postgresql.default.svc.cluster.local:5432/{{ .Values.DbName }}
  - name: POSTGRES_USERNAME
    value: {{ .Values.postgresql.postgresqlUsername }}
  - name: POSTGRES_PASSWORD
    valueFrom:
      secretKeyRef:
        name: dbconnection
        key: POSTGRES_PASSWORD
After the deployment you can check the container's environment variables.
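For example (the deployment name here is a placeholder):

kubectl exec deploy/service-chart-deployment -- env | grep POSTGRES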
I really hope it helps you!

How do I customize PostgreSQL configurations using helm chart?

I'm trying to deploy an application that uses PostgreSQL as a database to my minikube. I'm using helm as a package manager, and have added the PostgreSQL dependency to my requirements.yaml. Now the question is, how do I set the postgres user, db and password for that deployment? Here's my templates/applicaion.yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ template "sgm.fullname" . }}-service
spec:
  type: NodePort
  selector:
    app: {{ template "sgm.fullname" . }}
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "sgm.fullname" . }}-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: {{ template "sgm.fullname" . }}
  template:
    metadata:
      labels:
        app: {{ template "sgm.fullname" . }}
    spec:
      containers:
        - name: sgm
          image: mainserver/sgm
          env:
            - name: POSTGRES_HOST
              value: {{ template "postgres.fullname" . }}.default.svc.cluster.local
I've tried adding a configmap as stated in the postgres helm chart GitHub README, but it seems like I'm doing something wrong.
This is lightly discussed in the Helm documentation: your chart's values.yaml file contains configuration blocks for the charts it includes. The GitHub page for the Helm stable/postgresql chart lists out all of the options.
Either in your chart's values.yaml file, or in a separate YAML file you pass to the helm install -f option, you can set parameters like
postgresql:
  postgresqlDatabase: stackoverflow
  postgresqlPassword: enterImageDescriptionHere
Note that the chart doesn't create a non-admin user (unlike its sibling MySQL chart). If you're okay with the "normal" database user having admin-level privileges (like creating and deleting databases) then you can set postgresqlUser here too.
In your own chart you can reference these values like any other
- name: PGUSER
  value: {{ .Values.postgresql.postgresqlUser }}
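The password can be pulled in the same way. If I remember the chart's conventions correctly (worth verifying against the chart version you pin), stable/postgresql stores the generated password in a Secret named <release>-postgresql under the key postgresql-password, so a hedged sketch would be:

- name: PGPASSWORD
  valueFrom:
    secretKeyRef:
      name: {{ .Release.Name }}-postgresql   # secret created by the subchart; name may differ per chart version
      key: postgresql-password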

Configuring different pod configuration for different environments (Kubernetes + Google Cloud or Minikube)

I have a (containerized) web service talking to an external CloudSQL service in Google Cloud. I've used the sidecar pattern, in which a Google Cloud SQL Proxy container sits next to the web service and authenticates + proxies to the external CloudSQL service. This works fine. Let's call this Deployment "deployment-api" with containers "api" + "pg-proxy".
The problem occurs when I want to deploy the application on my local minikube cluster, which needs a different configuration because the service talks to a local postgres server on my computer. If I deploy "deployment-api" as is to minikube, it tries to run the "pg-proxy" container, which barfs, and the entire pod goes into a crash loop. Is there a way for me to selectively NOT deploy the "pg-proxy" container without having two definitions for the Pod, e.g. using selectors/labels? I do not want to move the pg-proxy container into its own Deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-api
  namespace: ${MY_ENV}
  labels:
    app: api
    env: ${MY_ENV}
spec:
  replicas: ${REPLICAS}
  selector:
    matchLabels:
      app: api
      env: ${MY_ENV}
  template:
    metadata:
      labels:
        app: api
        env: ${MY_ENV}
    spec:
      containers:
        - name: pg-proxy
          ports:
            - containerPort: 5432
          image: gcr.io/cloudsql-docker/gce-proxy:1.11
          command: ["/cloud_sql_proxy",
                    "-instances=<redacted>:${MY_ENV}-app=tcp:5432",
                    "-credential_file=/secrets/cloudsql/${MY_ENV}-sql-credentials.json"]
          securityContext:
            runAsUser: 2 # non-root user
            allowPrivilegeEscalation: false
          volumeMounts:
            - name: ${MY_ENV}-cloudsql-instance-credentials
              mountPath: /secrets/cloudsql
              readOnly: true
        - name: api
          image: ${DOCKER_IMAGE_PREFIX}api:${TAG}
          imagePullPolicy: ${PULL_POLICY}
          ports:
            - containerPort: 50051
      volumes:
        - name: ${MY_ENV}-cloudsql-instance-credentials
          secret:
            secretName: ${MY_ENV}-cloudsql-instance-credentials
In raw Kubernetes terms? No.
But I strongly encourage you to use Helm to deploy your application(s). With Helm you can easily adapt manifests based on variables provided for each environment (or defaults). For example, with the variable postgresql.proxy.enabled: true in the defaults and
{{- if .Values.postgresql.proxy.enabled }}
- name: pg-proxy
...
{{- end }}
in the Helm template, you can disable this block completely in the dev environment by setting the value to false.
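For completeness, a per-environment values-file sketch (the file and release names are chosen here for illustration):

# values-minikube.yaml
postgresql:
  proxy:
    enabled: false

and then install with helm install api ./chart -f values-minikube.yaml (Helm 3 syntax assumed), keeping proxy.enabled: true in the defaults used for GKE.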