For a node app, what's required in the deployment config to connect via cloud proxy? - google-cloud-sql

Trying to connect to a 2nd gen cloud sql database from a GCP Container.
I created the Cloud SQL proxy, but I'm a bit confused about what my app needs in order to connect through it. The app is already set up to connect to 127.0.0.1:3306, with all the needed MySQL connection information, and that works fine outside of GCP. When deployed in a GCP container, the app logs connection errors against 127.0.0.1:3306:
Error: connect ECONNREFUSED 127.0.0.1:3306 at Object.exports._errnoException
Any additional sample files available for a simple node app to better understand the needed application config?
The sample below seems to address what WordPress needs, but what do I need for a simple Node app?
https://github.com/GoogleCloudPlatform/container-engine-samples/blob/master/cloudsql/cloudsql_deployment.yaml
Related Link:
https://cloud.google.com/sql/docs/mysql/connect-container-engine
Provide 127.0.0.1:3306 as the host address your application uses to access the database.
I have this hard coded in my app.
Because the proxy runs in a second container in the same pod, it appears to your application as localhost, so you use 127.0.0.1:3306 to connect to it.
Right, I have this hard coded in my app
Provide the cloudsql-db-credentials secret to enable the application to log in to the database.
Ok, if I have to add this, what do I need to provide?
For example, assuming the application expected DB_USER and DB_PASSWORD:
- name: DB_USER
  valueFrom:
    secretKeyRef:
      name: cloudsql-db-credentials
      key: username
So what variable name would I be using here? Is this asking for the MySQL user name?
If your proxy user requires a password, you would also add:
- name: DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: cloudsql-db-credentials
      key: password
What variable is needed here? Is this asking for the MySQL password for the user above?
In the WordPress sample from the link above, I'm trying to figure out what variables are needed for a simple Node app; a sketch for that follows the sample.
containers:
  - image: wordpress:4.4.2-apache
    name: web
    env:
      - name: WORDPRESS_DB_HOST
        # Connect to the SQL proxy over the local network on a fixed port.
        # Change the [PORT] to the port number used by your database
        # (e.g. 3306).
        value: 127.0.0.1:[PORT]
      # These secrets are required to start the pod.
      # [START cloudsql_secrets]
      - name: WORDPRESS_DB_PASSWORD
        valueFrom:
          secretKeyRef:
            name: cloudsql-db-credentials
            key: password
      - name: WORDPRESS_DB_USER
        valueFrom:
          secretKeyRef:
            name: cloudsql-db-credentials
            key: username
      # [END cloudsql_secrets]
    ports:
      - containerPort: 80
        name: wordpress
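For a plain Node app, the env section has the same shape as the WordPress sample above; only the variable names change to whatever your code reads (DB_USER and DB_PASSWORD in the docs example). On the application side, a minimal sketch might look like this, assuming the mysql2 package and the env names DB_USER / DB_PASSWORD (DB_NAME is a hypothetical extra for the schema name); the proxy sidecar listens on 127.0.0.1:3306 inside the pod:
// db.ts -- a minimal sketch, not an official sample.
import mysql from "mysql2/promise";

export async function getConnection() {
  return mysql.createConnection({
    host: "127.0.0.1",                  // Cloud SQL proxy sidecar in the same pod
    port: 3306,
    user: process.env.DB_USER,          // from secret cloudsql-db-credentials, key: username
    password: process.env.DB_PASSWORD,  // from secret cloudsql-db-credentials, key: password
    database: process.env.DB_NAME,      // hypothetical; use whatever your app expects
  });
}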
Thanks!

Related

Access secret environment variables from kubernetes in nextjs app

I'm trying to deploy a Next.js app to GCP with Kubernetes, and I'm setting up auth with NextAuth and the Keycloak provider. The issue I'm running into is that KEYCLOAK_CLIENT_SECRET and KEYCLOAK_CLIENTID can't be found: all of the other environment variables listed here show up under process.env.XX, but the two with secretKeyRefs come in undefined.
I have read in multiple places that they need to be available at build time for Next.js, but I'm not sure how to set this up. We have several Node.js apps with this same deployment file and they are able to obtain the secrets.
This is what my deployment.yaml env section looks like.
Anyone have any experience setting this up?
I can't add the secrets to any local env files for security purposes.
env:
  - name: NODE_ENV
    value: production
  - name: KEYCLOAK_URL
    value: "https://ourkeycloak/auth/"
  - name: NEXTAUTH_URL
    value: "https://nextauthpath/api/auth"
  - name: NEXTAUTH_SECRET
    value: "11234"
  - name: KEYCLOAK_REALM
    value: "ourrealm"
  - name: KEYCLOAK_CLIENT_SECRET
    valueFrom:
      secretKeyRef:
        name: vault_client
        key: CLIENT_SECRET
  - name: KEYCLOAK_CLIENTID
    valueFrom:
      secretKeyRef:
        name: vault_client
        key: CLIENT_ID
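Not an authoritative fix, but one thing worth verifying: values injected via secretKeyRef exist only inside the running pod, so they are not visible at docker build / next build time or in client-side bundles; they can only be read by server-side code at runtime. A quick sketch to confirm they reach the pod (hypothetical debug route, pages router assumed; remove after testing):
// pages/api/env-check.ts -- hypothetical debug endpoint, a sketch only.
import type { NextApiRequest, NextApiResponse } from "next";

export default function handler(_req: NextApiRequest, res: NextApiResponse) {
  // Report presence only; never echo the secret value itself.
  res.status(200).json({
    hasKeycloakClientSecret: Boolean(process.env.KEYCLOAK_CLIENT_SECRET),
    keycloakClientId: process.env.KEYCLOAK_CLIENTID ?? null,
  });
}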

How to create keycloak with operator and external database

I followed this, but it is not working.
I created a custom secret:
apiVersion: v1
kind: Secret
metadata:
  name: keycloak-db-secret
data:
  POSTGRES_DATABASE: ...
  POSTGRES_EXTERNAL_ADDRESS: ...
  POSTGRES_EXTERNAL_PORT: ...
  POSTGRES_HOST: ...
  POSTGRES_USERNAME: ...
  POSTGRES_PASSWORD: ...
and keycloak with external db:
apiVersion: keycloak.org/v1alpha1
kind: Keycloak
metadata:
  labels:
    app: keycloak
  name: keycloak
spec:
  externalDatabase:
    enabled: true
  instances: 1
but when I check the log, Keycloak cannot connect to the DB. It is still using the default value keycloak-postgresql.keycloak, not the value defined in my custom secret. Why is it not using my value from the secret?
UPDATE
When I check the Keycloak pod that was created by the operator, I can see:
env:
  - name: DB_VENDOR
    value: POSTGRES
  - name: DB_SCHEMA
    value: public
  - name: DB_ADDR
    value: keycloak-postgresql.keycloak
  - name: DB_PORT
    value: '5432'
  - name: DB_DATABASE
    value: keycloak
  - name: DB_USER
    valueFrom:
      secretKeyRef:
        name: keycloak-db-secret
        key: POSTGRES_USERNAME
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: keycloak-db-secret
        key: POSTGRES_PASSWORD
So now I know why I cannot connect to the DB: it uses a different DB_ADDR. How can I use the address my-app.postgres (a DB in another namespace)?
I don't know why POSTGRES_HOST in the secret is not working and the pod is still using the default service name.
To connect to a service in another namespace you can use:
<servicename>.<namespace>.svc.cluster.local
Suppose your Postgres deployment and service are running in the test namespace; then it would be:
postgres.test.svc.cluster.local
This is what I am using: https://github.com/harsh4870/Keycloack-postgres-kubernetes-deployment/blob/main/keycload-deployment.yaml
I have also attached the Postgres file you can use; however, in my case I have set up Keycloak and Postgres in the same namespace, so it works like a charm.
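As a quick way to sanity-check the cross-namespace name before wiring it into POSTGRES_EXTERNAL_ADDRESS, a small Node sketch run from inside a pod can confirm the service FQDN resolves (the hostname below is an example; substitute your own service and namespace):
// check-db-dns.ts -- a debugging sketch, not part of the operator setup.
import { lookup } from "node:dns/promises";

// <servicename>.<namespace>.svc.cluster.local
const host = process.argv[2] ?? "postgres.test.svc.cluster.local";

lookup(host)
  .then(({ address }) => console.log(`${host} resolves to ${address}`))
  .catch((err: Error) => console.error(`${host} did not resolve: ${err.message}`));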
I'm using Azure PostgreSQL for that, and it works correctly. In the pod configuration it also uses keycloak-postgresql.keycloak as DB_ADDR, but this points to an internal service created by the operator based on keycloak-db-secret.
keycloak-postgresql.keycloak is another service created by the Keycloak Operator, which is used to connect to PostgreSQL's service.
You can check its endpoint.
$ kubectl get endpoints keycloak-postgresql -n keycloak
NAME ENDPOINTS AGE
keycloak-postgresql {postgresql's service ip}:5432 4m31s
However, the reason it fails is the selector of this service:
selector:
  app: keycloak
  component: database
So if your DB Pod has a different label, the selector will not match it.
I reported this issue to the community. If they reply, I will try to fix this bug by submitting a patch.
I was having this same issue, and after looking at @JiyeYu's answer I searched the project's issue backlog and found some related issues that are still open (at the moment of this reply).
Particularly this one: https://issues.redhat.com/browse/KEYCLOAK-18602
After reading this and its comments, I did the following:
Don't use IPs in POSTGRES_EXTERNAL_ADDRESS. If your Postgres is hosted within K8s via a StatefulSet, use the full <servicename>.<namespace>.svc.cluster.local (like @Harsh Manvar's answer).
Remove the POSTGRES_HOST setting from the secret (don't just set it to the default; delete it). Apparently it is not only being ignored, but also somehow breaking the Keycloak pod initialization process.
After I applied these changes the issue was solved for me.
I also had a similar problem; it turned out that since I was using SSLMODE: "verify-full", Keycloak expected the correct hostname of my external DB.
Since Keycloak somehow translates the real external DB address into keycloak-postgresql.keycloak internally, it expected something like keycloak-postgresql.my-keycloak-namespace.
The log went something like this:
SEVERE [org.postgresql.ssl.PGjdbcHostnameVerifier] (ServerService Thread Pool -- 57) Server name validation failed: certificate for host keycloak-postgresql.my-keycloak-namespace dNSName entries subjectAltName, but none of them match. Assuming server name validation failed
After I added the host keycloak-postgresql.my-keycloak-namespace to the DB certificate, it worked as advertised.

POSTGRES_PASSWORD ignored and can access DB without or with any password

As the title says, I'm setting a POSTGRES_PASSWORD and after spinning up the cluster with Skaffold (--port-forward on so I can access the DB with pgAdmin), I can access the database
with or without the correct password. POSTGRES_DB and POSTGRES_USER work as expected.
I am seeing in the documentation on Docker Hub for Postgres:
Note 1: The PostgreSQL image sets up trust authentication locally so you may notice a password is not required when connecting from localhost (inside the same container). However, a password will be required if connecting from a different host/container.
I think the --port-forward could possibly be the culprit since it is registering as localhost.
Any way to prevent this behavior?
I guess the concern is someone having access to my laptop and easily being able to connect to the DB.
This is my postgres.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: postgres
  template:
    metadata:
      labels:
        component: postgres
    spec:
      containers:
        - name: postgres
          image: testproject/postgres
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_DB
              value: dev
            - name: POSTGRES_USER
              value: dev
            - name: POSTGRES_PASSWORD
              value: qwerty
          volumeMounts:
            - name: postgres-storage
              mountPath: /var/lib/postgresql/data
              subPath: postgres
      volumes:
        - name: postgres-storage
          persistentVolumeClaim:
            claimName: postgres-storage
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-cluster-ip-service
spec:
  type: ClusterIP
  selector:
    component: postgres
  ports:
    - port: 5432
      targetPort: 5432
And the skaffold.yaml:
apiVersion: skaffold/v1beta15
kind: Config
build:
  local:
    push: false
  artifacts:
    - image: testproject/postgres
      docker:
        dockerfile: ./db/Dockerfile.dev
      sync:
        manual:
          - src: "***/*.sql"
            dest: .
    - image: testproject/server
      docker:
        dockerfile: ./server/Dockerfile.dev
      sync:
        manual:
          - src: "***/*.py"
            dest: .
deploy:
  kubectl:
    manifests:
      - k8s/ingress.yaml
      - k8s/postgres.yaml
      - k8s/server.yaml
The Dockerfile.dev too:
FROM postgres:11-alpine
EXPOSE 5432
COPY ./db/*.sql /docker-entrypoint-initdb.d/
OK, I reread the Postgres Docker docs and came across this:
POSTGRES_INITDB_ARGS
This optional environment variable can be used to send arguments to postgres initdb. The value is a space separated string of arguments as postgres initdb would expect them. This is useful for adding functionality like data page checksums: -e POSTGRES_INITDB_ARGS="--data-checksums".
That brought me to the initdb docs:
--auth=authmethod
This option specifies the authentication method for local users used in pg_hba.conf (host and local lines). Do not use trust unless you trust all local users on your system. trust is the default for ease of installation.
That brought me to the Authentication Methods docs:
19.3.2. Password Authentication
The password-based authentication methods are md5 and password. These methods operate similarly except for the way that the password is sent across the connection, namely MD5-hashed and clear-text respectively.
If you are at all concerned about password "sniffing" attacks then md5 is preferred. Plain password should always be avoided if possible. However, md5 cannot be used with the db_user_namespace feature. If the connection is protected by SSL encryption then password can be used safely (though SSL certificate authentication might be a better choice if one is depending on using SSL).
PostgreSQL database passwords are separate from operating system user passwords. The password for each database user is stored in the pg_authid system catalog. Passwords can be managed with the SQL commands CREATE USER and ALTER ROLE, e.g., CREATE USER foo WITH PASSWORD 'secret'. If no password has been set up for a user, the stored password is null and password authentication will always fail for that user.
Long story short, I just did this and now it only accepts the actual password:
env:
  ...
  - name: POSTGRES_INITDB_ARGS
    value: "-A md5"

Google Cloud, Kubernetes and Cloud SQL proxy: default Compute Engine service account issue

I have Google Cloud projects A, B, C and D. They all use a similar setup for the Kubernetes cluster and deployment. Projects A, B and C were built months ago, and they all use the Google Cloud SQL proxy to connect to the Google Cloud SQL service. When I recently started setting up Kubernetes for project D, I got the following error, visible in the Stackdriver logging:
the default Compute Engine service account is not configured with sufficient permissions to access the Cloud SQL API from this VM. Please create a new VM with Cloud SQL access (scope) enabled under "Identity and API access". Alternatively, create a new "service account key" and specify it using the -credential_file parameter
I have compared the Kubernetes clusters of A, B, C and D, but they appear to be the same.
Here is the deployment I am using:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: my-site
  labels:
    system: projectA
spec:
  selector:
    matchLabels:
      system: projectA
  template:
    metadata:
      labels:
        system: projectA
    spec:
      containers:
        - name: web
          image: gcr.io/customerA/projectA:alpha1
          ports:
            - containerPort: 80
          env:
            - name: DB_HOST
              value: 127.0.0.1:3306
            # These secrets are required to start the pod.
            # [START cloudsql_secrets]
            - name: DB_USER
              valueFrom:
                secretKeyRef:
                  name: cloudsql-db-credentials
                  key: username
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: cloudsql-db-credentials
                  key: password
            # [END cloudsql_secrets]
        # Change <INSTANCE_CONNECTION_NAME> here to include your GCP
        # project, the region of your Cloud SQL instance and the name
        # of your Cloud SQL instance. The format is
        # $PROJECT:$REGION:$INSTANCE
        # [START proxy_container]
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.11
          command:
            - sh
            - -c
            - /cloud_sql_proxy -instances=my-gcloud-project:europe-west1:databaseName=tcp:3306
            - -credential_file=/secrets/cloudsql/credentials.json
          # [START cloudsql_security_context]
          securityContext:
            runAsUser: 2  # non-root user
            allowPrivilegeEscalation: false
          # [END cloudsql_security_context]
          volumeMounts:
            - name: cloudsql-instance-credentials
              mountPath: /secrets/cloudsql
              readOnly: true
        # [END proxy_container]
      # [START volumes]
      volumes:
        - name: cloudsql-instance-credentials
          secret:
            secretName: cloudsql-instance-credentials
      # [END volumes]
So it would appear that the default service account doesn't have enough permissions? Google Cloud doesn't allow enabling the Cloud SQL API when creating the cluster via Google Cloud console.
From what I have googled about this issue, some say the problem was with the gcr.io/cloudsql-docker/gce-proxy image, but I have tried newer versions and the same error still occurs.
I found a solution to this problem: setting the service-account argument when creating the cluster. Note that I haven't tested what the minimum required permissions are for the new service account.
Here are the steps:
Create a new service account; it doesn't require an API key. The name I used was "super-service".
Assign the roles Cloud SQL Admin, Compute Admin, Kubernetes Engine Admin and Editor to the new service account.
Use gcloud to create the cluster with the new service account, like this:
gcloud container clusters create my-cluster \
  --zone=europe-west1-c \
  --labels=system=projectA \
  --num-nodes=3 \
  --enable-master-authorized-networks \
  --enable-network-policy \
  --enable-ip-alias \
  --service-account=super-service@project-D.iam.gserviceaccount.com \
  --master-authorized-networks <list-of-my-ips>
Then at least the cluster and the deployment were deployed without errors.

How to build Connection URL for Google Cloud SQL PostgreSQL Instance

I want to connect my app to a managed PostgreSQL instance on Google Cloud SQL. The app would be deployed via GKE. Normally, I'd connect via a connection string:
E.g.: postgres://<user>:<password>@<my-postgres-host>:5432
But the documentation states that:
Create a Secret to provide the PostgreSQL username and password to the database.
Update your pod configuration file with the following items:
Provide the Cloud SQL instance's private IP address as the host address your application will use to access your database.
Provide the Secret you previously created to enable the application to log into the database.
Bring up your Deployment using the Kubernetes manifest file.
I can do step 1 and 3 but cannot follow step 2. Should the connection URL just be: postgres://<PRIVATE_ID>:5432 and I add ENV variables POSTGRES_USER and POSTGRES_PASSWORD through a secret?
Are there any examples I can look up?
Outcome: I'd like to derive the connection URL for PostgreSQL hosted on Google Cloud SQL.
Thank you in advance!
You can find an example postgres_deployment.yaml file to deploy with Kubernetes. The example uses the proxy, but the database configuration section does not change for private IP; for private IP, simply omit the [proxy_container] section.
This is the section from the documentation that you are looking for: the database environment variables and secrets section.
# The following environment variables will contain the database host,
# user and password to connect to the PostgreSQL instance.
env:
  - name: POSTGRES_DB_HOST
    value: 127.0.0.1:5432
  # [START cloudsql_secrets]
  - name: POSTGRES_DB_USER
    valueFrom:
      secretKeyRef:
        name: cloudsql-db-credentials
        key: username
  - name: POSTGRES_DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: cloudsql-db-credentials
        key: password
  # [END cloudsql_secrets]
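As a sketch of the application side (not from the documentation), the connection URL is just the host (the private IP, or 127.0.0.1:5432 when using the proxy) plus the credentials from the Secret. With the pg package and the env names from the sample above (POSTGRES_DB_NAME is a hypothetical extra for the database name), it could look like this:
// db.ts -- a sketch; names other than the three sample variables are assumptions.
import { Client } from "pg";

const [host, port = "5432"] = (process.env.POSTGRES_DB_HOST ?? "127.0.0.1:5432").split(":");

// Equivalent URL form: postgres://<POSTGRES_DB_USER>:<POSTGRES_DB_PASSWORD>@<host>:<port>/<database>
const client = new Client({
  host,
  port: Number(port),
  user: process.env.POSTGRES_DB_USER,
  password: process.env.POSTGRES_DB_PASSWORD,
  database: process.env.POSTGRES_DB_NAME ?? "postgres",
});

client.connect()
  .then(() => console.log("connected"))
  .catch((err: Error) => console.error("connection failed:", err.message));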