I'm trying to ssh from a pod into a remote server while specifying an identity file. This fails with the following error:
admin@123.123.123.123: Permission denied (publickey).
I've made sure I can connect from my local host with the same set of public and private keys. It only fails when I try to connect from a bash shell inside the pod's container.
My Job definition is as follows:
---
apiVersion: batch/v1
kind: Job
metadata:
name: {{ .Release.Name }}-volumes-population-{{ .Release.Revision }}
spec:
template:
spec:
containers:
- name: populate-volumes
image: {{ .Values.gitlab.image_repository.repository }}/{{ .Values.phpfpm.image.name }}:{{ .Values.phpfpm.image.version }}
imagePullPolicy: IfNotPresent
ports:
- name: ssh
containerPort: 22
args:
- /bin/bash
- -c
- |
echo "Testing ssh connection..."
ssh -i /etc/ssh/hetzner_box admin@123.123.123.123
volumeMounts:
- name: hetzner-box-identity
mountPath: /etc/ssh/hetzner_box.pub
subPath: .pub
- name: hetzner-box-identity
mountPath: /etc/ssh/hetzner_box
subPath: .key
volumes:
- name: hetzner-box-identity
secret:
secretName: {{ .Release.Name }}-hetzner-box-identity
defaultMode: 256
items:
- key: .pub
path: .pub
- key: .key
path: .key
Edit 1:
After further investigation I've noticed that the key pair is passphrase-less. I've managed to log in using a different key pair that is protected by a passphrase. My goal is automation, so passphrase-protected keys are not acceptable. Is there a reason the SSH daemon refuses to authenticate a passphrase-less key?
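For debugging, it may help to check how the mounted key actually looks to the container user and to run ssh in verbose mode. This is only a sketch, assuming the key is mounted at /etc/ssh/hetzner_box as in the Job above; ssh refuses to use identity files whose permissions are too open, and a root-owned 0400 file cannot be read by a non-root container user:
id
ls -l /etc/ssh/hetzner_box /etc/ssh/hetzner_box.pub
ssh -vvv -o IdentitiesOnly=yes -i /etc/ssh/hetzner_box admin@123.123.123.123
The -vvv output shows exactly which identities are offered and why the server rejects them.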
I started using K3s, so I'm an absolute noob. Now I'm wondering how I can create the .yaml files for pods on my own, or use a Docker image. (I couldn't find detailed info about that.)
I want OpenVPN, or any other suggested VPN server, running so I can access my home devices from anywhere. It would save a lot of headache and time if someone could be so nice as to help me a little.
Before, I had an OpenVPN server running when I only had one Raspberry Pi. But it looks like everything from the install to the config has changed with my k3s Kubernetes cluster.
How I made my k3s cluster with Rancher: https://youtu.be/X9fSMGkjtug
I tried for 3 hours to figure it out and found no real step-by-step guide for beginners...
I already have a Cloudflare DDNS script running to update my domain with the correct IP.
Thank you very much!
Here is an example of an OpenVPN client Deployment YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
name: openvpn-client
spec:
selector:
matchLabels:
app: openvpn-client
vpn: vpn-id
replicas: 1
template:
metadata:
labels:
app: openvpn-client
vpn: vpn-id
spec:
volumes:
- name: vpn-config
secret:
secretName: vpn-config
items:
- key: client.ovpn
path: client.ovpn
- name: vpn-auth
secret:
secretName: vpn-auth
items:
- key: auth.txt
path: auth.txt
- name: route-script
configMap:
name: route-script
items:
- key: route-override.sh
path: route-override.sh
- name: tmp
emptyDir: {}
initContainers:
- name: vpn-route-init
image: busybox
command: ['/bin/sh', '-c', 'cp /vpn/route-override.sh /tmp/route/route-override.sh; chown root:root /tmp/route/route-override.sh; chmod o+x /tmp/route/route-override.sh;']
volumeMounts:
- name: tmp
mountPath: /tmp/route
- name: route-script
mountPath: /vpn/route-override.sh
subPath: route-override.sh
containers:
- name: vpn
image: dperson/openvpn-client
command: ["/bin/sh","-c"]
args: ["openvpn --config 'vpn/client.ovpn' --auth-user-pass 'vpn/auth.txt' --script-security 3 --route-up /tmp/route/route-override.sh;"]
stdin: true
tty: true
securityContext:
privileged: true
capabilities:
add:
- NET_ADMIN
env:
- name: TZ
value: "Turkey"
volumeMounts:
- name: vpn-config
mountPath: /vpn/client.ovpn
subPath: client.ovpn
- name: vpn-auth
mountPath: /vpn/auth.txt
subPath: auth.txt
- name: tmp
mountPath: /tmp/route
- name: app1
image: python:3.6-stretch
command:
- sleep
- "100000"
tty: true
dnsConfig:
nameservers:
- 8.8.8.8
- 8.8.4.4
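The Deployment above expects a vpn-config Secret, a vpn-auth Secret and a route-script ConfigMap to exist already. A minimal sketch of creating them from local files (assuming you have client.ovpn, auth.txt and route-override.sh at hand; the names must match the ones referenced in the Deployment):
kubectl create secret generic vpn-config --from-file=client.ovpn=./client.ovpn
kubectl create secret generic vpn-auth --from-file=auth.txt=./auth.txt
kubectl create configmap route-script --from-file=route-override.sh=./route-override.sh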
You can also read more about this deployment here: https://bugraoz93.medium.com/openvpn-client-in-a-pod-kubernetes-d3345c66b014
You can also use a Helm chart for the same, which makes it easy to set things up on Kubernetes via pre-made YAML templates: https://itnext.io/use-helm-to-deploy-openvpn-in-kubernetes-to-access-pods-and-services-217dec344f13
Docker OpenVPN client: https://github.com/dperson/openvpn-client
I am using a custom MongoDB image with a read-only file system and trying to deploy it to Kubernetes locally on my Mac using Kind. Below is my statefulset.yaml. I cannot run a custom script called mongo-init.sh, which creates my db users. Kubernetes only allows one point of entry, so I am using it to run my db startup command: mongod -f /docker-entrypoint-initdb.d/mongod.conf. It doesn't let me run another command or script after that, even if I append a & at the end. I tried using "/bin/bash", "-c", "command1", "command2", but that doesn't execute command2 when the first command is the mongod initialization command. If I skip the mongod initialization command instead, the database is not up and running and I cannot connect to it, so I get a connection refused. When I kubectl exec into the container, I can get into the database using the mongo shell with mongo, but I cannot view anything or really execute any commands because I get an unauthorized error. Totally lost on this one and could use some help.
apiVersion: apps/v1
kind: StatefulSet
metadata:
annotations:
creationTimestamp: null
labels:
app: mydb
name: mongodb
namespace: mongodb
spec:
serviceName: "dbservice"
replicas: 1
selector:
matchLabels:
app: mydb
template:
metadata:
annotations:
creationTimestamp: null
labels:
app: mydb
spec:
# terminationGracePeriodSeconds: 10
imagePullSecrets:
- name: regcred
containers:
- env:
- name: MONGODB_APPLICATION_USER_PWD
valueFrom:
configMapKeyRef:
key: MONGODB_APPLICATION_USER_PWD
name: myconfigmap
- name: MONGO_INITDB_DATABASE
valueFrom:
configMapKeyRef:
key: MONGO_INITDB_DATABASE
name: myconfigmap
- name: MONGO_INITDB_ROOT_PASSWORD
valueFrom:
configMapKeyRef:
key: MONGO_INITDB_ROOT_PASSWORD
name: myconfigmap
- name: MONGO_INITDB_ROOT_USERNAME
valueFrom:
configMapKeyRef:
key: MONGO_INITDB_ROOT_USERNAME
name: myconfigmap
image: 'localhost:5000/mongo-db:custom'
command: ["/docker-entrypoint-initdb.d/mongo-init.sh"]
name: mongodb
ports:
- containerPort: 27017
resources: {}
volumeMounts:
- mountPath: /data/db
name: mongodata
- mountPath: /etc/mongod.conf
subPath: mongod.conf
name: mongodb-conf
readOnly: true
- mountPath: /docker-entrypoint-initdb.d/mongo-init.sh
subPath: mongo-init.sh
name: mongodb-conf
initContainers:
- name: init-mydb
image: busybox:1.28
command: ["chown"]
args: ["-R", "998:998", "/data"]
volumeMounts:
- name: mongodata
mountPath: /data/db
volumes:
- name: mongodata
persistentVolumeClaim:
claimName: mongo-volume-claim
- name: mongodb-conf
configMap:
name: myconfigmap
defaultMode: 0777
The error message I see when I do kubectl logs <podname>:
MongoDB shell version v4.4.1
connecting to: mongodb://localhost:27017/admin?compressors=disabled&gssapiServiceName=mongodb
Error: couldn't connect to server localhost:27017, connection attempt failed: SocketException: Error connecting to localhost:27017 (127.0.0.1:27017) :: caused by :: Connection refused :
connect#src/mongo/shell/mongo.js:374:17
#(connect):2:6
exception: connect failed
exiting with code 1
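Not a complete answer, but a sketch of the shell pattern for chaining both steps under the single entry point: with sh -c, everything after -c has to be one string (additional list items become positional parameters, which is why command2 never ran). Assuming mongo-init.sh only needs mongod to be reachable, something along these lines could work:
command: ["/bin/sh", "-c"]
args:
  - |
    # start mongod in the background using the existing config
    mongod -f /docker-entrypoint-initdb.d/mongod.conf &
    # wait until the daemon answers before running the user-creation script
    until mongo --eval "db.adminCommand('ping')" >/dev/null 2>&1; do sleep 2; done
    /docker-entrypoint-initdb.d/mongo-init.sh
    # keep the container alive by waiting on the backgrounded mongod
    wait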
I have an application in a container which reads certain data from a ConfigMap, which goes like this:
apiVersion: v1
kind: ConfigMap
metadata:
name: app-config
data:
application.yaml: |
server:
port: 8080
host: 0.0.0.0
##
## UCP configuration.
## If skipped, it will default to looking inside of the connections.xml file.
database:
ApplicationDB:
username: username
password: hello123
Now I created a Secret for the password and mounted it as an env variable when starting the container.
apiVersion: v1
kind: Secret
metadata:
name: appdbpassword
type: Opaque
stringData:
password: hello123
My pod looks like:
apiVersion: v1
kind: Pod
metadata:
name: {{ .Values.pod.name }}
spec:
containers:
- name: {{ .Values.container.name }}
image: {{ .Values.image }}
command: [ "/bin/sh", "-c", "--" ]
args: [ "while true; do sleep 30; done;"]
env:
- name: password
valueFrom:
secretKeyRef:
name: appdbpassword
key: password
volumeMounts:
- name: config-volume
mountPath: /app/app-config/application.yaml
subPath: application.yaml
volumes:
- name: config-volume
configMap:
name: app-config
I tried using this env variable inside the configMap:
apiVersion: v1
kind: ConfigMap
metadata:
name: app-config
data:
application.yaml: |
server:
port: 8080
host: 0.0.0.0
##
## UCP configuration.
## If skipped, it will default to looking inside of the connections.xml file.
database:
ApplicationDB:
username: username
password: ${password}
But my application is unable to read this password. Am I missing something here?
EDIT:
I cannot change application.yaml to any other form, as my server looks for application.yaml in the source path. Is there any way to use that environment variable in the Helm values.yaml file and then use it in the ConfigMap?
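On the Helm part of the edit: one possible sketch (assuming a dbPassword entry is added to values.yaml, which is not part of the original chart) is to let Helm substitute the value into the ConfigMap template when the chart is rendered:
# values.yaml (hypothetical key)
dbPassword: hello123
# templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  application.yaml: |
    server:
      port: 8080
      host: 0.0.0.0
    database:
      ApplicationDB:
        username: username
        password: {{ .Values.dbPassword }}
This happens at install/upgrade time (e.g. with --set dbPassword=...), not at container runtime, and the password ends up in plain text in the rendered ConfigMap, which is exactly what the answers below advise against.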
Your ${password} variable will not be replaced by its value, because application.yaml is a static file; it only gets substituted if the file is processed by something that actually expands variables.
Consider a scenario where, instead of application.yaml, you pass this file:
application.sh: |
echo "${password}"
Now go inside /app/app-config and you will see the application.sh file. Run sh application.sh and you will see the value of the environment variable.
I hope this clears it up.
You cannot use a Secret in a ConfigMap, as ConfigMaps are intended for non-sensitive data (see here).
Also, you should not pass Secrets via environment variables, as that creates a potential risk (read more here about why env variables shouldn't be used).
Applications usually dump environment variables in error reports or even write them to the app logs at startup, which could lead to exposing Secrets.
The best way is to mount the Secret as a file.
Here's a simple example of how to mount it as a file:
spec:
template:
spec:
containers:
- image: "my-image:latest"
name: my-app
...
volumeMounts:
- mountPath: "/var/my-app"
name: ssh-key
readOnly: true
volumes:
- name: ssh-key
secret:
secretName: ssh-key
The Kubernetes documentation explains well how to use and mount Secrets.
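Once mounted like this, each key of the Secret appears as a file under the mountPath, and the application reads it like any ordinary file. A quick check (the pod name and the key name id_rsa are placeholders):
kubectl exec -it <pod-name> -- ls /var/my-app
kubectl exec -it <pod-name> -- cat /var/my-app/id_rsa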
I tried to configure mongo with authentication on a kubernetes cluster. I deployed the following yaml:
kind: StatefulSet
metadata:
name: mongo
spec:
serviceName: "mongo"
replicas: 1
template:
metadata:
labels:
app: mongo
spec:
containers:
- name: mongodb
image: mongo:4.0.0
env:
- name: MONGO_INITDB_ROOT_USERNAME
value: "admin"
- name: MONGO_INITDB_ROOT_PASSWORD
# Get password from secret
value: "abc123changeme"
command:
- mongod
- --auth
- --replSet
- rs0
- --bind_ip
- 0.0.0.0
ports:
- containerPort: 27017
name: web
volumeMounts:
- name: mongo-ps
mountPath: /data/db
volumes:
- name: mongo-ps
persistentVolumeClaim:
claimName: mongodb-pvc
When I tried to authenticate with username "admin" and password "abc123changeme" I received "Authentication failed.".
How can I configure the mongo admin username and password (I want to get the password from a secret)?
Thanks
The reason the environment variables don't work is that the MONGO_INITDB_* environment variables are used by the docker-entrypoint.sh script within the image (https://github.com/docker-library/mongo/tree/master/4.0); however, when you define a command: in your Kubernetes file, you override that entrypoint (see the notes at https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/).
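As an aside (a sketch, not part of the original answer): if the goal is only to pass extra flags to mongod, setting args instead of command keeps the image's docker-entrypoint.sh entrypoint, so the MONGO_INITDB_ROOT_* variables are still honoured (they only take effect on an empty /data/db). The replica-set flags from the question could be appended in the same way:
containers:
- name: mongodb
  image: mongo:4.0.0
  # args replaces the image CMD but leaves the ENTRYPOINT in place
  args: ["mongod", "--auth"]
  env:
  - name: MONGO_INITDB_ROOT_USERNAME
    valueFrom:
      secretKeyRef:
        name: mongo-init-credentials
        key: init.userid
  - name: MONGO_INITDB_ROOT_PASSWORD
    valueFrom:
      secretKeyRef:
        name: mongo-init-credentials
        key: init.password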
See the YML below, which is adapted from a few of the examples I found online. Note the learning points for me:
cvallance/mongo-k8s-sidecar looks for ANY mongo instance matching the pod labels REGARDLESS of namespace, so it will try to hook up with any old instance in the cluster. This caused me a few hours of head-scratching, as I'd removed the environment= labels from the example because we use namespaces to segregate our environments. Silly and obvious in retrospect, but extremely confusing in the beginning (the mongo logs were throwing all sorts of authentication and service-down errors because of the cross-talk).
I was new to ClusterRoleBindings and it took me a while to realise they are cluster-level, which I know seems obvious (despite needing to supply a namespace to get kubectl to accept it). Mine were getting overwritten between namespaces, so make sure you create unique names per environment to avoid a deployment in one namespace messing up another, as the ClusterRoleBinding gets overwritten if the names aren't unique within the cluster.
MONGODB_DATABASE needs to be set to 'admin' for authentication to work.
I was following this example to configure authentication, which depended on a sleep 5 in the hope that the daemon was up and running before attempting to create the adminUser. I found this wasn't long enough, so I upped it initially, since failure to create the adminUser obviously led to connection-refused issues. I later replaced the sleep with a while loop that pings mongo, which is more foolproof.
If you run mongod in a container (e.g. lxc, cgroups, Docker, etc.) that does not have access to all of the RAM available in a system, you must set --wiredTigerCacheSizeGB to a value less than the amount of RAM available in the container. The exact amount depends on the other processes running in the container.
You need at least 3 nodes in a Mongo cluster!
The YML below should spin up and configure a mongo replicaset in kubernetes with persistent storage and authentication enabled.
If you connect into the pod...
kubectl exec -ti mongo-db-0 --namespace somenamespace /bin/bash
mongo shell is installed in the image so you should be able to connect to the replicaset with...
mongo mongodb://mongoadmin:adminpassword@mongo-db/admin?replicaSet=rs0
And see that you get either rs0:PRIMARY> or rs0:SECONDARY>, indicating the pods are in a mongo replica set. Use rs.conf() from the PRIMARY to verify that.
#Create a Secret to hold the MONGO_INITDB_ROOT_USERNAME/PASSWORD
#so we can enable authentication
apiVersion: v1
data:
#echo -n "mongoadmin" | base64
init.userid: bW9uZ29hZG1pbg==
#echo -n "adminpassword" | base64
init.password: YWRtaW5wYXNzd29yZA==
kind: Secret
metadata:
name: mongo-init-credentials
namespace: somenamespace
type: Opaque
---
# Create a secret to hold a keyfile used to authenticate between replicaset members
# this seems to need to be base64 encoded twice (might not be the case if this
# was an actual file reference as per the examples, but we're using a simple key
# here
apiVersion: v1
data:
#echo -n "CHANGEMECHANGEMECHANGEME" | base64 | base64
mongodb-keyfile: UTBoQlRrZEZUVVZEU0VGT1IwVk5SVU5JUVU1SFJVMUYK
kind: Secret
metadata:
name: mongo-key
namespace: somenamespace
type: Opaque
---
# Create a service account for Mongo and give it Pod List role
# note this is a ClusterROleBinding - the Mongo Pod will be able
# to list all pods present in the cluster regardless of namespace
# (and this is exactly what it does...see below)
apiVersion: v1
kind: ServiceAccount
metadata:
name: mongo-serviceaccount
namespace: somenamespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: mongo-somenamespace-serviceaccount-view
namespace: somenamespace
subjects:
- kind: ServiceAccount
name: mongo-serviceaccount
namespace: somenamespace
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: pod-viewer
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: pod-viewer
namespace: somenamespace
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["list"]
---
#Create a Storage Class for Google Container Engine
#Note fstype: xfs isn't supported by GCE yet and the
#Pod startup will hang if you try to specify it.
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
namespace: somenamespace
name: mongodb-ssd-storage
provisioner: kubernetes.io/gce-pd
parameters:
type: pd-ssd
allowVolumeExpansion: true
---
#Headless Service for StatefulSets
apiVersion: v1
kind: Service
metadata:
namespace: somenamespace
name: mongo-db
labels:
name: mongo-db
spec:
ports:
- port: 27017
targetPort: 27017
clusterIP: None
selector:
app: mongo
---
# Now the fun part
#
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
namespace: somenamespace
name: mongo-db
spec:
serviceName: mongo-db
replicas: 3
template:
metadata:
labels:
# Labels MUST match MONGO_SIDECAR_POD_LABELS
# and MUST differentiate between other mongo
# instances in the CLUSTER not just the namespace
# as the sidecar will search the entire cluster
# for something to configure
app: mongo
environment: somenamespace
spec:
#Run the Pod using the service account
serviceAccountName: mongo-serviceaccount
terminationGracePeriodSeconds: 10
#Prevent a Mongo Replica running on the same node as another (avoid single point of failure)
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- mongo
topologyKey: "kubernetes.io/hostname"
containers:
- name: mongo
image: mongo:4.0.12
command:
#Authentication adapted from https://gist.github.com/thilinapiy/0c5abc2c0c28efe1bbe2165b0d8dc115
#in order to pass the new admin user id and password in
- /bin/sh
- -c
- >
if [ -f /data/db/admin-user.lock ]; then
echo "KUBERNETES LOG $HOSTNAME- Starting Mongo Daemon with runtime settings (clusterAuthMode)"
#ensure wiredTigerCacheSize is set within the size of the containers memory limit
mongod --wiredTigerCacheSizeGB 0.5 --replSet rs0 --bind_ip 0.0.0.0 --smallfiles --noprealloc --clusterAuthMode keyFile --keyFile /etc/secrets-volume/mongodb-keyfile --setParameter authenticationMechanisms=SCRAM-SHA-1;
else
echo "KUBERNETES LOG $HOSTNAME- Starting Mongo Daemon with setup setting (authMode)"
mongod --auth;
fi;
lifecycle:
postStart:
exec:
command:
- /bin/sh
- -c
- >
if [ ! -f /data/db/admin-user.lock ]; then
echo "KUBERNETES LOG $HOSTNAME- no Admin-user.lock file found yet"
#replaced simple sleep, with ping and test.
while (! mongo --eval "db.adminCommand('ping')"); do sleep 10; echo "KUBERNETES LOG $HOSTNAME - waiting another 10 seconds for mongo to start" >> /data/db/configlog.txt; done;
touch /data/db/admin-user.lock
if [ "$HOSTNAME" = "mongo-db-0" ]; then
echo "KUBERNETES LOG $HOSTNAME- creating admin user ${MONGODB_USERNAME}"
mongo --eval "db = db.getSiblingDB('admin'); db.createUser({ user: '${MONGODB_USERNAME}', pwd: '${MONGODB_PASSWORD}', roles: [{ role: 'root', db: 'admin' }]});" >> /data/db/config.log
fi;
echo "KUBERNETES LOG $HOSTNAME-shutting mongod down for final restart"
mongod --shutdown;
fi;
env:
- name: MONGODB_USERNAME
valueFrom:
secretKeyRef:
name: mongo-init-credentials
key: init.userid
- name: MONGODB_PASSWORD
valueFrom:
secretKeyRef:
name: mongo-init-credentials
key: init.password
ports:
- containerPort: 27017
livenessProbe:
exec:
command:
- mongo
- --eval
- "db.adminCommand('ping')"
initialDelaySeconds: 5
periodSeconds: 60
timeoutSeconds: 10
readinessProbe:
exec:
command:
- mongo
- --eval
- "db.adminCommand('ping')"
initialDelaySeconds: 5
periodSeconds: 60
timeoutSeconds: 10
resources:
requests:
memory: "350Mi"
cpu: 0.05
limits:
memory: "1Gi"
cpu: 0.1
volumeMounts:
- name: mongo-key
mountPath: "/etc/secrets-volume"
readOnly: true
- name: mongo-persistent-storage
mountPath: /data/db
- name: mongo-sidecar
image: cvallance/mongo-k8s-sidecar
env:
# Sidecar searches for any POD in the CLUSTER with these labels
# not just the namespace..so we need to ensure the POD is labelled
# to differentiate it from other PODS in different namespaces
- name: MONGO_SIDECAR_POD_LABELS
value: "app=mongo,environment=somenamespace"
- name: MONGODB_USERNAME
valueFrom:
secretKeyRef:
name: mongo-init-credentials
key: init.userid
- name: MONGODB_PASSWORD
valueFrom:
secretKeyRef:
name: mongo-init-credentials
key: init.password
#don't be fooled by this..it's not your DB that
#needs specifying, it's the admin DB as that
#is what you authenticate against with mongo.
- name: MONGODB_DATABASE
value: admin
volumes:
- name: mongo-key
secret:
defaultMode: 0400
secretName: mongo-key
volumeClaimTemplates:
- metadata:
name: mongo-persistent-storage
annotations:
volume.beta.kubernetes.io/storage-class: "mongodb-ssd-storage"
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 1Gi
Supposing you created a secret:
apiVersion: v1
kind: Secret
metadata:
name: mysecret
type: Opaque
data:
username: YWRtaW4=
password: MWYyZDFlMmU2N2Rm
Here a snippet to get a value from a secret in a kubernetes yaml file:
env:
- name: MONGO_INITDB_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mysecret
key: password
I found that this issue is related to a bug in docker-entrypoint.sh and occurs when numactl is detected on the node.
Try this simplified code (which moves numactl out of the way):
apiVersion: apps/v1
kind: Deployment
metadata:
name: mongo-deployment
labels:
app: mongo
spec:
replicas: 1
selector:
matchLabels:
app: mongo
template:
metadata:
labels:
app: mongo
spec:
containers:
- name: mongo
image: mongo:4.0.0
command:
- /bin/bash
- -c
# mv is not needed for later versions e.g. 3.4.19 and 4.1.7
- mv /usr/bin/numactl /usr/bin/numactl1 && source docker-entrypoint.sh mongod
env:
- name: MONGO_INITDB_ROOT_USERNAME
value: "xxxxx"
- name: MONGO_INITDB_ROOT_PASSWORD
value: "xxxxx"
ports:
- containerPort: 27017
I raised an issue at:
https://github.com/docker-library/mongo/issues/330
Hopefully it will be fixed at some point so no need for the hack :o)
Adding this resolved the issue for me:
- name: ME_CONFIG_MONGODB_ENABLE_ADMIN
value: "true"
Seems like the default is set to "false".
If you are using Kubernetes you can check the reason for failure by using the command:
kubectl logs <pod name>
This is what worked for me:
kind: StatefulSet
metadata:
name: mongo
spec:
serviceName: "mongo"
replicas: 1
template:
metadata:
labels:
app: mongo
spec:
containers:
- name: my-mongodb-pod
image: mongo:4.4.3
env:
- name: MONGO_INITDB_ROOT_USERNAME
value: "someMongoUser"
- name: MONGO_INITDB_ROOT_PASSWORD
value: "somePassword"
- name: MONGO_REPLICA_SET
value: "myReplicaSet"
- name: MONGO_PORT
value: "27017"
# Note: disabling unauthenticated access in mongodb is kind of complicated[4]
# Note, the `_getEnv` function is internal and undocumented[3].
#
# 1. https://gist.github.com/thilinapiy/0c5abc2c0c28efe1bbe2165b0d8dc115
# 2. https://stackoverflow.com/a/54726708/2768067
# 3. https://stackoverflow.com/a/67037065/2768067
# 4. https://www.mongodb.com/features/mongodb-authentication
command:
- /bin/sh
- -c
- >
set -x # print each command as it runs
set -e # fail if any command fails
env;
ps auxwww;
printf "\n\t mongod:: start in the background \n\n";
mongod \
--port="${MONGO_PORT}" \
--bind_ip_all \
--replSet="${MONGO_REPLICA_SET}" \
--quiet > /tmp/mongo.log.json 2>&1 &
sleep 9;
ps auxwww;
printf "\n\t mongod: set master \n\n";
mongo --port "${MONGO_PORT}" --eval '
rs.initiate({});
sleep(3000);';
printf "\n\t mongod: add user \n\n";
mongo --port "${MONGO_PORT}" --eval '
db.getSiblingDB("admin").createUser({
user: _getEnv("MONGO_INITDB_ROOT_USERNAME"),
pwd: _getEnv("MONGO_INITDB_ROOT_PASSWORD"),
roles: [{ role: "userAdminAnyDatabase", db: "admin" }]
});';
printf "\n\t mongod: shutdown \n\n";
mongod --shutdown;
sleep 3;
ps auxwww;
printf "\n\t mongod: restart with authentication \n\n";
mongod \
--auth \
--port="${MONGO_PORT}" \
--bind_ip_all \
--replSet="${MONGO_REPLICA_SET}" \
--verbose=v
I'm having trouble establishing an SSL connection between a web service and a remotely hosted Postgres database. With the same cert and key files being used by the web service, I can connect to the database with tools such as pgAdmin and DataGrip. These files were downloaded from the Postgres instance in the Google Cloud Console.
Issue:
At the time of Spring Boot service start up, the following error occurs:
org.postgresql.util.PSQLException: Could not read SSL key file /tls/tls.key
When I look at the Postgres server logs, the error is recorded as:
LOG: could not accept SSL connection: UNEXPECTED_RECORD
Setup:
Spring Boot service running on Minikube (local) and GKE connecting to a Google Cloud SQL Postgres instance.
Actions Taken:
I have downloaded the client cert & key and created a K8s TLS Secret from them. I have also made sure the files can be read from the volume mount by running the following command in the k8s deployment config:
command: ["bin/sh", "-c", "cat /tls/tls.key"]
Here is the datasource URL, which is fed in via an environment variable (DATASOURCE):
"jdbc:postgresql://[Database-Address]:5432/[database]?ssl=true&sslmode=require&sslcert=/tls/tls.crt&sslkey=/tls/tls.key"
Here is the k8s deployment YAML. Any idea where I'm going wrong?
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: {{ template "service.name" . }}
labels:
release: {{ template "release.name" . }}
chart: {{ template "chart.name" . }}
chart-version: {{ template "chart.version" . }}
release: {{ template "service.fullname" . }}
spec:
replicas: {{ $.Values.image.replicaCount }}
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
template:
metadata:
labels:
app: {{ template "service.name" . }}
release: {{ template "release.name" . }}
env: {{ $.Values.environment }}
spec:
imagePullSecrets:
- name: {{ $.Values.image.pullSecretsName }}
containers:
- name: {{ template "service.name" . }}
image: {{ $.Values.image.repo }}:{{ $.Values.image.tag }}
# command: ["bin/sh", "-c", "cat /tls/tls.key"]
imagePullPolicy: {{ $.Values.image.pullPolicy }}
volumeMounts:
- name: tls-cert
mountPath: "/tls"
readOnly: true
ports:
- containerPort: 80
env:
- name: DATASOURCE_URL
valueFrom:
secretKeyRef:
name: service
key: DATASOURCE_URL
- name: DATASOURCE_USER
valueFrom:
secretKeyRef:
name: service
key: DATASOURCE_USER
- name: DATASOURCE_PASSWORD
valueFrom:
secretKeyRef:
name: service
key: DATASOURCE_PASSWORD
volumes:
- name: tls-cert
projected:
sources:
- secret:
name: postgres-tls
items:
- key: tls.crt
path: tls.crt
- key: tls.key
path: tls.key
So I figured it out: I was asking the wrong question!
Google Cloud SQL has a proxy component for the Postgres database, so connecting the traditional way (the problem I was trying to solve) is avoided entirely by using the proxy. Instead of dealing with whitelisting IPs, SSL certs and the like, you just spin up the proxy, point it at a GCP credential file, and update your database URI to access the database via localhost.
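For illustration only, a sketch of that sidecar pattern (the instance connection name, image tag and secret name are placeholders, not taken from the linked example):
containers:
  - name: my-service
    image: my-image:latest
    env:
      - name: DATASOURCE_URL
        # the app now talks to the proxy on localhost; no SSL parameters needed
        value: "jdbc:postgresql://127.0.0.1:5432/mydatabase"
  - name: cloudsql-proxy
    image: gcr.io/cloudsql-docker/gce-proxy:1.17
    command: ["/cloud_sql_proxy",
              "-instances=<PROJECT>:<REGION>:<INSTANCE>=tcp:5432",
              "-credential_file=/secrets/cloudsql/credentials.json"]
    volumeMounts:
      - name: cloudsql-instance-credentials
        mountPath: /secrets/cloudsql
        readOnly: true
volumes:
  - name: cloudsql-instance-credentials
    secret:
      secretName: cloudsql-instance-credentials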
To set up the proxy, you can find directions here. There is a good example of a k8s deployment file here.
One thing I did come across was the GCP service account: make sure to add both the Cloud SQL Client AND Cloud SQL Editor roles. I only added Cloud SQL Client to start with and kept getting a 403 error.
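If the roles are missing, granting them looks roughly like this (PROJECT_ID and the service-account email are placeholders):
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:my-sa@PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/cloudsql.client"
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:my-sa@PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/cloudsql.editor"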