I want to connect my app to a managed PostgreSQL instance on Google Cloud SQL. The app would be deployed via GKE. Normally, I'd connect via a connection string, e.g.:
postgres://<user>:<password>@<my-postgres-host>:5432
But the documentation states that:
1. Create a Secret to provide the PostgreSQL username and password to the database.
2. Update your pod configuration file with the following items:
   - Provide the Cloud SQL instance's private IP address as the host address your application will use to access your database.
   - Provide the Secret you previously created to enable the application to log in to the database.
3. Bring up your Deployment using the Kubernetes manifest file.
I can do steps 1 and 3 but cannot follow step 2. Should the connection URL just be postgres://<PRIVATE_IP>:5432, with the POSTGRES_USER and POSTGRES_PASSWORD environment variables added through a Secret?
Are there any examples I can look up?
Outcome: I'd like to derive the connection URL for PostgreSQL hosted on Google Cloud SQL.
Thank you in advance!
You can find an example postgres_deployment.yaml file to deploy with Kubernetes. The example uses the Cloud SQL proxy, but the database configuration section does not change for private IP; for private IP, simply leave out the [proxy_container] section.
This is the part of the documentation that you are looking for: the database environment variables and secrets section.
# The following environment variables will contain the database host,
# user and password to connect to the PostgreSQL instance.
env:
- name: POSTGRES_DB_HOST
  value: 127.0.0.1:5432
# [START cloudsql_secrets]
- name: POSTGRES_DB_USER
  valueFrom:
    secretKeyRef:
      name: cloudsql-db-credentials
      key: username
- name: POSTGRES_DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: cloudsql-db-credentials
      key: password
# [END cloudsql_secrets]
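For the private-IP case, a minimal sketch of that same section together with the Secret behind it (the 10.0.0.5 address and the credential values are placeholders; use your instance's private IP and your own database user):

apiVersion: v1
kind: Secret
metadata:
  name: cloudsql-db-credentials
type: Opaque
stringData:
  username: myuser        # placeholder database user
  password: mypassword    # placeholder database password

and, in the container spec of your Deployment:

env:
- name: POSTGRES_DB_HOST
  value: 10.0.0.5:5432    # the Cloud SQL instance's private IP
- name: POSTGRES_DB_USER
  valueFrom:
    secretKeyRef:
      name: cloudsql-db-credentials
      key: username
- name: POSTGRES_DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: cloudsql-db-credentials
      key: password

Your application then assembles the connection URL itself from these variables, e.g. postgres://$POSTGRES_DB_USER:$POSTGRES_DB_PASSWORD@$POSTGRES_DB_HOST/<dbname>.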
Related
I'm trying to use the Bitnami Keycloak Helm chart, which has an internal dependency on Bitnami PostgreSQL that I cannot use. I have to use our existing RDS as an external DB, which seems possible, but the instructions on this page are completely empty. Unfortunately, I can only use the Bitnami chart for Keycloak, FYI. Can anyone point me in the right direction or show what and where to change in the stock chart to make it happen, please? Not getting much luck with Google at the moment.
Thanks in advance!
You need to use a sidecar container which will handle authorization and proxy the DB calls from Keycloak to your managed database:
[keycloak] --localhost:XXXX-> [sidecar container] -> [AWS RDS]
You'll find documentation for this in the Bitnami chart GitHub repo: https://github.com/bitnami/charts/tree/master/bitnami/keycloak#use-sidecars-and-init-containers
On the stock chart, you can set these properties:
postgresql:
  enabled: false

externalDatabase:
  host: ${DB_URL}
  port: ${DB_PORT}
  user: ${DB_USERNAME}
  database: ${DB_NAME}
  password: ${DB_PASSWORD}
If you need high availability, i.e. you will be running Keycloak with multiple replicas, add the following as well:
cache:
  enabled: true

extraEnvVars:
  - name: KC_CACHE
    value: ispn
  - name: KC_CACHE_STACK
    value: kubernetes
  - name: JGROUPS_DISCOVERY_PROTOCOL
    value: "JDBC_PING"
Sources:
https://github.com/bitnami/charts/tree/master/bitnami/keycloak/#keycloak-cache-parameters
https://github.com/bitnami/charts/tree/master/bitnami/keycloak/#database-parameters
https://www.keycloak.org/server/caching#_relevant_options
When I deploy Postgres like the following, everything is OK:
...
env:
- name: POSTGRES_USER
  value: a
...
But when I deploy it with a ConfigMap or Secret, for example:

env:
- name: POSTGRES_USER
  valueFrom:
    secretKeyRef:
      name: postgres
      key: username
When I execute psql -U a, it returns:

psql: error: could not connect to server: FATAL: role "a" does not exist

I checked the POSTGRES_USER environment variable and it is set to a.
According to the documentation for the PostgreSQL image:

The only variable required is POSTGRES_PASSWORD, the rest are optional.

You're defining a custom role, which is okay, but you're not passing POSTGRES_PASSWORD, which is mandatory. It's possible the role is not created at all if you don't pass the password.
Another important comment from the documentation:
Warning: the Docker specific variables will only have an effect if you start the container with a data directory that is empty; any pre-existing database will be left untouched on container startup.
If this is not a playground/stateless environment and you're loading existing data, the startup script won't look at the environment variables and you should use credentials that were previously configured.
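A minimal sketch of the corrected env block, assuming you add a password key to the same postgres Secret (the key name is illustrative):

env:
- name: POSTGRES_USER
  valueFrom:
    secretKeyRef:
      name: postgres
      key: username
- name: POSTGRES_PASSWORD
  valueFrom:
    secretKeyRef:
      name: postgres
      key: password   # add this key to the existing Secret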
I followed this but it is not working.
I created a custom secret:
apiVersion: v1
kind: Secret
metadata:
  name: keycloak-db-secret
data:
  POSTGRES_DATABASE: ...
  POSTGRES_EXTERNAL_ADDRESS: ...
  POSTGRES_EXTERNAL_PORT: ...
  POSTGRES_HOST: ...
  POSTGRES_USERNAME: ...
  POSTGRES_PASSWORD: ...
and Keycloak with an external DB:
apiVersion: keycloak.org/v1alpha1
kind: Keycloak
metadata:
  labels:
    app: keycloak
  name: keycloak
spec:
  externalDatabase:
    enabled: true
  instances: 1
But when I check the log, Keycloak cannot connect to the DB. It is still using the default value keycloak-postgresql.keycloak, not the value defined in my custom secret. Why is it not using the value from my secret?
UPDATE
When I check the Keycloak pod that was created by the operator, I can see:
env:
- name: DB_VENDOR
  value: POSTGRES
- name: DB_SCHEMA
  value: public
- name: DB_ADDR
  value: keycloak-postgresql.keycloak
- name: DB_PORT
  value: '5432'
- name: DB_DATABASE
  value: keycloak
- name: DB_USER
  valueFrom:
    secretKeyRef:
      name: keycloak-db-secret
      key: POSTGRES_USERNAME
- name: DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: keycloak-db-secret
      key: POSTGRES_PASSWORD
So now I know why I cannot connect to the DB: it uses a different DB_ADDR. How can I use the address my-app.postgres (a DB in another namespace)?
I don't know why POSTGRES_HOST in the secret is not working and the pod is still using the default service name.
To connect to a service in another namespace you can use:

<servicename>.<namespace>.svc.cluster.local

Suppose your Postgres Deployment and Service are running in the test namespace; the address would be:

postgres.test.svc.cluster.local

This is what I am using: https://github.com/harsh4870/Keycloack-postgres-kubernetes-deployment/blob/main/keycload-deployment.yaml
I have also attached the Postgres file you can use; however, in my case I have set up both Keycloak and Postgres in the same namespace, so it works like a charm.
I'm using Azure PostgreSQL for that, and it works correctly. In the pod configuration, it also uses keycloak-postgresql.keycloak as DB_ADDR, but this is pointing to my internal service created by the operator based on keycloak-db-secret.
keycloak-postgresql.keycloak is another Service created by the Keycloak Operator, which is used to connect to PostgreSQL's Service.
You can check its endpoints:
$ kubectl get endpoints keycloak-postgresql -n keycloak
NAME ENDPOINTS AGE
keycloak-postgresql {postgresql's service ip}:5432 4m31s
However, the reason it fails is the selector of this Service:

selector:
  app: keycloak
  component: database

So if your DB Pod has a different label, the selector will not work.
I reported this issue to the community. If they reply to me, I will try to fix this bug by submitting a patch.
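If you deploy the database Pods yourself, a sketch of the labels that would satisfy that selector (assuming it is exactly the one shown above), e.g. in the pod template of your Postgres StatefulSet or Deployment:

template:
  metadata:
    labels:
      app: keycloak
      component: database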
I was having this same issue, and after looking at @JiyeYu's answer, I searched the project's issue backlog and found some related issues that are still open (at the moment of this reply).
Particularly this one: https://issues.redhat.com/browse/KEYCLOAK-18602
After reading this and its comments, I did the following:
- Don't use IPs in POSTGRES_EXTERNAL_ADDRESS. If your Postgres is hosted within K8s via a StatefulSet, use the full <servicename>.<namespace>.svc.cluster.local (like @Harsh Manvar's answer).
- Remove the POSTGRES_HOST setting from the secret (don't just set it to the default, delete it). Apparently, it is not only being ignored, but also somehow breaking the Keycloak pod initialization process.
After I applied these changes the issue was solved for me.
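Putting those two changes together, the secret from the question would look roughly like this sketch (the values are placeholders; stringData is used here so they don't have to be base64-encoded, unlike data):

apiVersion: v1
kind: Secret
metadata:
  name: keycloak-db-secret
stringData:
  POSTGRES_DATABASE: keycloak                                    # placeholder
  # Service DNS name, not an IP: <servicename>.<namespace>.svc.cluster.local
  POSTGRES_EXTERNAL_ADDRESS: my-app.postgres.svc.cluster.local
  POSTGRES_EXTERNAL_PORT: "5432"
  POSTGRES_USERNAME: keycloak                                    # placeholder
  POSTGRES_PASSWORD: changeme                                    # placeholder
  # POSTGRES_HOST deliberately omitted, as described above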
I also had a similar problem; it turned out that since I was using SSLMODE: "verify-full", Keycloak expected the correct hostname of my external DB.
Since Keycloak somehow internally translates the real external DB address into "keycloak-postgresql.keycloak", it expected something like "keycloak-postgresql.my-keycloak-namespace".
The log went something like this:
SEVERE [org.postgresql.ssl.PGjdbcHostnameVerifier] (ServerService Thread Pool -- 57) Server name validation failed: certificate for host keycloak-postgresql.my-keycloak-namespace dNSName entries subjectAltName, but none of them match. Assuming server name validation failed
After I added the host keycloak-postgresql.my-keycloak-namespace to the DB certificate, it worked as advertised.
I am setting up a Splunk universal forwarder as a sidecar with my application through a Deployment spec. The Splunk universal forwarder is set up as a separate Docker image into which I copy custom inputs.conf and outputs.conf via Docker COPY (shown below).
When I deploy my application, the sidecar starts. In the current state, the indexer configuration is in outputs.conf and is taking effect.
The issue comes here: I want to change the indexer server host and port dynamically based on the environment.
Here is the Dockerfile content of my Splunk universal forwarder image:
FROM splunk/universalforwarder:latest
COPY configs/*.conf /opt/splunkforwarder/etc/system/local/
I built the Docker image with the name splunk-universal-forwarder:demo.
The configs folder has both files, inputs.conf and outputs.conf.
The content of outputs.conf is:
[tcpout]
defaultGroup = default-lb-group
[tcpout:default-lb-group]
server = ${SPLUNK_BASE_HOST}
[tcpout-server://host1:9997]
I want to pass the SPLUNK_BASE_HOST environment variable through the sidecar deployment like below.
- name: universalforwarder
  image: splunk-universal-forwarder:demo
  imagePullPolicy: Always
  env:
  - name: SPLUNK_START_ARGS
    value: "--accept-license --answer-yes"
  - name: SPLUNK_BASE_HOST
    value: 123.456.789.000:9997
  - name: SPLUNK_USER
    valueFrom:
      secretKeyRef:
        name: credentials
        key: splunk.username
  - name: SPLUNK_PASSWORD
    valueFrom:
      secretKeyRef:
        name: credentials
        key: splunk.password
  volumeMounts:
  - name: container-logs
    mountPath: /var/log/splunk-fwd-myapp
I have a separate deployment.yaml per environment (dev, stage, uat, qa, prod), and I should be able to pass a different indexer host and port via SPLUNK_BASE_HOST for each of these environments. If I hardcode the indexer host and port in outputs.conf, it will take the same value across all environments, and I don't want that to happen.
The problem is that ${SPLUNK_BASE_HOST} in outputs.conf does not pick up the value supplied in the deployment YAML file.
You need to create an init script that reads the host name from the environment variable and substitutes it into outputs.conf using sed, and then launches the Splunk forwarder.
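One way to sketch that, without baking a separate script into the image, is to override the sidecar's command so it runs the sed substitution before handing off to the image's normal entrypoint. The entrypoint path /sbin/entrypoint.sh start-service is an assumption based on the stock splunk/universalforwarder image and may differ between versions, so verify it (e.g. with docker inspect) first; you may also need to adjust file ownership in the Dockerfile so the container user can write to outputs.conf:

- name: universalforwarder
  image: splunk-universal-forwarder:demo
  command: ["/bin/sh", "-c"]
  args:
    # Replace the literal ${SPLUNK_BASE_HOST} placeholder in outputs.conf
    # with the value injected via the Deployment env, then start Splunk.
    - |
      sed -i "s|\${SPLUNK_BASE_HOST}|${SPLUNK_BASE_HOST}|g" \
        /opt/splunkforwarder/etc/system/local/outputs.conf
      exec /sbin/entrypoint.sh start-service   # assumed entrypoint, verify for your image
  env:
  - name: SPLUNK_BASE_HOST
    value: 123.456.789.000:9997   # differs per environment's deployment.yaml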
I'm trying to connect to a 2nd gen Cloud SQL database from a GCP container.
I created the Cloud SQL proxy, but am a bit confused about what my app needs to connect via the proxy. My app already connects to 127.0.0.1:3306 from within the application, with all the needed MySQL connection information, and this works fine outside of GCP. My app is currently logging connection errors against 127.0.0.1:3306 when deployed in a GCP container.
Error: connect ECONNREFUSED 127.0.0.1:3306 at Object.exports._errnoException
Are there any additional sample files available for a simple Node app, to better understand the needed application config?
The sample below seems to address what WordPress needs, but what do I need for a simple Node app?
https://github.com/GoogleCloudPlatform/container-engine-samples/blob/master/cloudsql/cloudsql_deployment.yaml
Related Link:
https://cloud.google.com/sql/docs/mysql/connect-container-engine
Provide 127.0.0.1:3306 as the host address your application uses to access the database.
I have this hard coded in my app.
Because the proxy runs in a second container in the same pod, it appears to your application as localhost, so you use 127.0.0.1:3306 to connect to it.
Right, I have this hard coded in my app
Provide the cloudsql-db-credentials secret to enable the application to log in to the database.
Ok, if I have to add this, what do I add?
For example, assuming the application expected DB_USER and DB_PASSWORD:
- name: DB_USER
  valueFrom:
    secretKeyRef:
      name: cloudsql-db-credentials
      key: username
If your proxy user requires a password, you would also add:
So what variable name would I be using here? Is this asking for the MySQL user name?
- name: DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: cloudsql-db-credentials
      key: password
What variable is needed here? Is this asking for the MySQL password for the user above?
Using the WordPress sample from the link above, I'm trying to figure out what variables are needed for a simple Node app.
containers:
- image: wordpress:4.4.2-apache
  name: web
  env:
  - name: WORDPRESS_DB_HOST
    # Connect to the SQL proxy over the local network on a fixed port.
    # Change the [PORT] to the port number used by your database
    # (e.g. 3306).
    value: 127.0.0.1:[PORT]
  # These secrets are required to start the pod.
  # [START cloudsql_secrets]
  - name: WORDPRESS_DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: cloudsql-db-credentials
        key: password
  - name: WORDPRESS_DB_USER
    valueFrom:
      secretKeyRef:
        name: cloudsql-db-credentials
        key: username
  # [END cloudsql_secrets]
  ports:
  - containerPort: 80
    name: wordpress
Thanks!
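For reference, a minimal sketch of the equivalent env section for a simple Node app, assuming the app reads DB_HOST, DB_USER, and DB_PASSWORD (those names are illustrative; use whatever variable names your app actually reads). The username key holds the MySQL user name and the password key that user's MySQL password:

env:
- name: DB_HOST
  # The Cloud SQL proxy sidecar listens on localhost inside the pod.
  value: 127.0.0.1:3306
- name: DB_USER
  valueFrom:
    secretKeyRef:
      name: cloudsql-db-credentials
      key: username       # the MySQL user name
- name: DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: cloudsql-db-credentials
      key: password       # that user's MySQL password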