Logging in with IAM database authentication fails in Google Cloud SQL - PostgreSQL

I tried to log in using IAM database authentication with a service account, but it failed with:
Cloud SQL IAM service account authentication failed for user
I am completely stuck.
Procedure
1. Create a PostgreSQL instance test-auth and set the flag:
databaseFlags:
  - name: cloudsql.iam_authentication
    value: on
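For reference, the same flag can also be set from the CLI; a minimal sketch, assuming the instance name above:
gcloud sql instances patch test-auth \
  --database-flags=cloudsql.iam_authentication=on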
2. Create a database:
dbname: db_hoge
3. Create the service account
sa-cloudsql@myproject.iam.gserviceaccount.com
and add it as a database user. I also added my own email (the owner of this GCP project).
Both appear to have been added correctly:
$ gcloud sql users list --instance test-auth
NAME HOST TYPE PASSWORD_POLICY
postgres CLOUD_IAM_USER
maiil@mydomain.co.jp CLOUD_IAM_USER
sa-cloudsql@myproject.iam CLOUD_IAM_SERVICE_ACCOUNT
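For reference, a hedged sketch of the commands that add these two users (Cloud SQL stores the service account name without the .gserviceaccount.com suffix, as the listing above shows):
gcloud sql users create maiil@mydomain.co.jp \
  --instance=test-auth --type=cloud_iam_user
gcloud sql users create sa-cloudsql@myproject.iam.gserviceaccount.com \
  --instance=test-auth --type=cloud_iam_service_account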
4. Run the Cloud SQL Auth proxy (v1.31.0):
$ ./cloud_sql_proxy -version
Cloud SQL Auth proxy: 1.31.0+linux.amd64
$ ./cloud_sql_proxy -enable_iam_login -instances=myproject:asia-northeast1:test-auth=tcp:127.0.0.1:5432
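A hedged variant for comparison: the v1 proxy can also be started with an explicit service-account key (the path ./sa-key.json here is hypothetical), since -enable_iam_login logs in as whatever credentials the proxy itself is running with:
./cloud_sql_proxy -enable_iam_login \
  -credential_file=./sa-key.json \
  -instances=myproject:asia-northeast1:test-auth=tcp:127.0.0.1:5432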
5. Connect to the database.
With the service account -> failed:
psql "sslmode=disable dbname=db_hoge host=127.0.0.1 user=sa-cloudsql@myproject.iam"
psql: error: FATAL: Cloud SQL IAM service account authentication failed for user "sa-cloudsql@myproject.iam"
With my own user -> success:
psql "sslmode=disable dbname=db_hoge host=127.0.0.1 user=maiil@mydomain.co.jp"
psql (12.11 (Ubuntu 12.11-0ubuntu0.20.04.1), server 14.2)
WARNING: psql major version 12, server major version 14.
Some psql features might not work.
Type "help" for help.
db_hoge=>
IAM roles
maiil@mydomain.co.jp => owner
sa-cloudsql@myproject.iam => roles/cloudsql.client, roles/cloudsql.instanceUser
I would sincerely appreciate any help. Thank you in advance.
Update
Following the advice, I deployed an app on GKE to test the connection.
But now I get the error "FATAL: empty password returned by client".
When using IAM database authentication, no password should be needed for that user.
I don't know whether this comes from the app, GKE, IAM, or Cloud SQL.
Deployment and app
I run the Cloud SQL proxy as a sidecar, and the Kubernetes service account for this pod is bound (via Workload Identity) to a Google service account that has the Cloud SQL Instance User and Cloud SQL Client roles.
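For completeness, a hedged sketch of how that binding is typically set up; the KSA name is the one from the manifest below, the namespace is assumed to be default:
gcloud iam service-accounts add-iam-policy-binding \
  sa-cloudsql@my-project.iam.gserviceaccount.com \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:my-project.svc.id.goog[default/ksa-workload-identity-binded]"
kubectl annotate serviceaccount ksa-workload-identity-binded \
  iam.gke.io/gcp-service-account=sa-cloudsql@my-project.iam.gserviceaccount.com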
app.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  selector:
    matchLabels:
      run: web
  template:
    metadata:
      labels:
        run: web
    spec:
      containers:
      - image: cloudproxy-test:1.0.0
        imagePullPolicy: Always
        name: web
        ports:
        - containerPort: 8080
          protocol: TCP
      - command:
        - /cloud_sql_proxy
        - -enable_iam_login
        - -instances=my-project:asia-northeast1:test-auth=tcp:5432
        image: gcr.io/cloudsql-docker/gce-proxy:1.25.0
        name: cloudsql-proxy
      serviceAccountName: ksa-workload-identity-binded
---
I wrote the app in Go. For PostgreSQL it uses the dialers/postgres module.
package main

import (
	"context"
	"database/sql"
	"fmt"
	"log"
	"net/http"

	"github.com/gin-gonic/gin"

	_ "github.com/GoogleCloudPlatform/cloudsql-proxy/proxy/dialers/postgres"
)

func main() {
	engine := gin.Default()
	engine.GET("/", func(c *gin.Context) {
		ctx := context.Background()
		if err := showRecords(ctx, "my-project:asia-northeast1:test-auth", "db_hoge", "sa-cloudsql@my-project.iam"); err != nil {
			log.Fatal(err)
		}
		c.JSON(http.StatusOK, gin.H{
			"How": "itwork",
		})
	})
	engine.Run(":8080")
}

func showRecords(ctx context.Context, dbAddress, dbName, dbUser string) error {
	dsn := fmt.Sprintf("host=%s user=%s dbname=%s sslmode=disable", dbAddress, dbUser, dbName)
	fmt.Println(dsn)
	db, err := sql.Open("cloudsqlpostgres", dsn)
	if err != nil {
		log.Fatal(err)
		return err
	}
	defer db.Close()
	return nil
}
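One note for reproducing this: database/sql connects lazily, so sql.Open alone never dials the instance. A minimal sketch (same imports as above) that forces the connection attempt and therefore surfaces the auth error immediately:

func checkConn(ctx context.Context, dsn string) error {
	// Open only validates the DSN; the real dial happens on first use.
	db, err := sql.Open("cloudsqlpostgres", dsn)
	if err != nil {
		return err
	}
	defer db.Close()
	// Ping forces a connection, so IAM/password failures show up here.
	return db.PingContext(ctx)
}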
When I used the default postgres user with a password and removed the -enable_iam_login option from the sidecar command, I could connect to the DB.
So the application itself does not seem to be the problem.

Related

How can I connect GitLab to an external database using a kubernetes secret on values.yaml?

I am trying to connect GitLab to an external database (AWS RDS) using a Kubernetes secret, deploying on AWS EKS, but I am not sure whether it connects, and how would I know that it does?
values.yaml code:
psql:
  connectTimeout:
  keepalives:
  keepalivesIdle:
  keepalivesInterval:
  keepalivesCount:
  tcpUserTimeout:
  password:
    useSecret: true
    secret: gitlab-secret
    key: key
  host: <RDS endpoint>
  port: <RDS port>
  username: postgres
  database: <main name of db>
  # applicationName:
  # preparedStatements: false
kubernetes secret:
kubectl create secret generic gitlab-secret --from-literal=key="<password>" -n devops-gitlab
The psql server details are already known
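One way to test the connection path independently of the chart is a throwaway psql pod; a minimal sketch, reusing the endpoint, port, and database placeholders from the values above (psql will prompt for the password interactively):
kubectl run psql-check --rm -it --restart=Never --image=postgres:13 -n devops-gitlab -- \
  psql -h <RDS endpoint> -p <RDS port> -U postgres -d <main name of db>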

Running the Postgres CLI client from a Kubernetes jumpbox

I have setup a Postgres pod on my Kubernetes cluster, and I am trying to troubleshoot it a bit.
I would like to use the official Postgres image and deploy it to my Kubernetes cluster using kubectl. Given that my Postgres server connection details are:
host: mypostgres
port: 5432
username: postgres
password: 12345
And given that I think the command will be something like:
kubectl run -i --tty --rm debug --image=postgres --restart=Never -- sh
What do I need to do so that I can deploy this image to my cluster, connect to my Postgres server and start running SQL command against it (for troubleshooting purposes)?
If you're primarily interested in troubleshooting, then you're probably looking for the kubectl port-forward command, which will expose a container port on your local host. First, you'll need to deploy the Postgres pod; you haven't shown what your pod manifest looks like, so I'm going to assume a Deployment like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: postgres
  name: postgres
  namespace: sandbox
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - env:
        - name: POSTGRES_PASSWORD
          value: secret
        - name: POSTGRES_USER
          value: example
        - name: POSTGRES_DB
          value: example
        image: docker.io/postgres:13
        name: postgres
        ports:
        - containerPort: 5432
          name: postgres
          protocol: TCP
        volumeMounts:
        - mountPath: /var/lib/postgresql
          name: postgres-data
      volumes:
      - emptyDir: {}
        name: postgres-data
Once this is running, you can access postgres with the port-forward
command like this:
kubectl -n sandbox port-forward deploy/postgres 5432:5432
This should result in:
Forwarding from 127.0.0.1:5432 -> 5432
Forwarding from [::1]:5432 -> 5432
And now we can connect to Postgres using psql and run queries
against it:
$ psql -h localhost -U example example
psql (13.4)
Type "help" for help.
example=#
kubectl port-forward is only useful as a troubleshooting mechanism. If
you were trying to access your postgres pod from another pod, you
would create a Service and then use the service name as the hostname
for your client connections.
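For example, a hedged sketch of a minimal Service matching the labels of the Deployment above:
apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: sandbox
spec:
  selector:
    app: postgres
  ports:
  - port: 5432
    targetPort: 5432
Clients in the cluster could then connect to postgres.sandbox.svc.cluster.local:5432.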
Update
If your goal is to deploy a client container so that you can log
into it and run psql, the easiest solution is just to kubectl exec
into the postgres container itself. Assuming you were using the
Deployment shown earlier in this question, you could run:
kubectl exec -it deploy/postgres -- bash
This would get you a shell prompt inside the postgres container. You
can run psql and not have to worry about authentication:
$ kubectl exec -it deploy/postgres -- bash
$ psql -U example example
psql (13.4 (Debian 13.4-1.pgdg100+1))
Type "help" for help.
example=#
If you want to start up a separate container, you can use the kubectl debug command:
kubectl debug deploy/postgres
This gets you a root prompt in a debug pod. If you know the ip address
of the postgres pod, you can connect to it using psql. To get
the address of the pod, run this on your local host:
$ kubectl get pod/postgres-6df4c549f-p2892 -o jsonpath='{.status.podIP}'
10.130.0.11
And then inside the debug container:
root@postgres-debug:/# psql -h 10.130.0.11 -U example example
In this case you would have to provide an appropriate password,
because you are accessing postgres from "another machine", rather than
running directly inside the postgres pod.
Note that in the above answer I've used the shortcut
deploy/<deployment_name>, which avoids having to know the name of the
pod created by the Deployment. You can replace that with
pod/<podname> in all cases.

How to create keycloak with operator and external database

I followed this, but it is not working.
I created a custom secret:
apiVersion: v1
kind: Secret
metadata:
  name: keycloak-db-secret
data:
  POSTGRES_DATABASE: ...
  POSTGRES_EXTERNAL_ADDRESS: ...
  POSTGRES_EXTERNAL_PORT: ...
  POSTGRES_HOST: ...
  POSTGRES_USERNAME: ...
  POSTGRES_PASSWORD: ...
and Keycloak with an external db:
apiVersion: keycloak.org/v1alpha1
kind: Keycloak
metadata:
  labels:
    app: keycloak
  name: keycloak
spec:
  externalDatabase:
    enabled: true
  instances: 1
But when I check the log, Keycloak cannot connect to the db. It is still using the default value keycloak-postgresql.keycloak, not the value defined in my custom secret. Why is it not using the value from my secret?
UPDATE
When I check the Keycloak pod created by the operator, I can see:
env:
- name: DB_VENDOR
  value: POSTGRES
- name: DB_SCHEMA
  value: public
- name: DB_ADDR
  value: keycloak-postgresql.keycloak
- name: DB_PORT
  value: '5432'
- name: DB_DATABASE
  value: keycloak
- name: DB_USER
  valueFrom:
    secretKeyRef:
      name: keycloak-db-secret
      key: POSTGRES_USERNAME
- name: DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: keycloak-db-secret
      key: POSTGRES_PASSWORD
So now I know why I cannot connect to the db: it uses a different DB_ADDR. How can I use the address my-app.postgres (a db in another namespace)?
I don't know why POSTGRES_HOST in the secret is not working and the pod is still using the default service name.
To connect to a service in another namespace you can use:
<servicename>.<namespace>.svc.cluster.local
For example, if your Postgres deployment and service are running in the test namespace, it becomes:
postgres.test.svc.cluster.local
This is what I am using: https://github.com/harsh4870/Keycloack-postgres-kubernetes-deployment/blob/main/keycload-deployment.yaml
I have also attached the Postgres file you can use; however, in my case I set up both Keycloak and Postgres in the same namespace, so it works like a charm.
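Putting that together, a hedged sketch of the secret with the cross-namespace address (database, username, and password values here are illustrative; stringData is used so the values stay readable):
apiVersion: v1
kind: Secret
metadata:
  name: keycloak-db-secret
  namespace: keycloak
stringData:
  POSTGRES_DATABASE: keycloak
  POSTGRES_EXTERNAL_ADDRESS: postgres.test.svc.cluster.local
  POSTGRES_EXTERNAL_PORT: "5432"
  POSTGRES_USERNAME: keycloak
  POSTGRES_PASSWORD: changeme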
I'm using Azure PostgreSQL for that, and it works correctly. In the pod configuration it also uses keycloak-postgresql.keycloak as DB_ADDR, but this points to my internal service created by the operator based on keycloak-db-secret.
keycloak-postgresql.keycloak is another service created by the Keycloak Operator, which is used to connect to PostgreSQL's service.
You can check its endpoint.
$ kubectl get endpoints keycloak-postgresql -n keycloak
NAME ENDPOINTS AGE
keycloak-postgresql {postgresql's service ip}:5432 4m31s
However, the reason it fails is the selector of this service:
selector:
  app: keycloak
  component: database
So if your DB Pod has different labels, the selector will not match it.
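As an illustration (the pod name is hypothetical), the expected labels could be added with:
kubectl label pod <your-db-pod> -n keycloak app=keycloak component=database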
I reported this issue to the community. If they reply, I will try to fix this bug by submitting a patch.
I was having this same issue. After looking at @JiyeYu's answer, I searched the project's issue backlog and found some related issues that are still open (at the moment of this reply).
Particularly this one: https://issues.redhat.com/browse/KEYCLOAK-18602
After reading this, and its comments, I did the following:
Don't use IPs in POSTGRES_EXTERNAL_ADDRESS. If your Postgres is hosted within K8s via a StatefulSet, use the full <servicename>.<namespace>.svc.cluster.local (as in @Harsh Manvar's answer).
Remove the POSTGRES_HOST setting from the secret (don't just set it to the default, delete it). Apparently it is not only being ignored, but also somehow breaking the Keycloak pod initialization process.
After I applied these changes the issue was solved for me.
I also had a similar problem. It turned out that since I was using SSLMODE: "verify-full", Keycloak expected the correct hostname of my external db.
Since Keycloak internally translates the real external db address into "keycloak-postgresql.keycloak", it expected something like "keycloak-postgresql.my-keycloak-namespace".
The log went something like this:
SEVERE [org.postgresql.ssl.PGjdbcHostnameVerifier] (ServerService Thread Pool -- 57) Server name validation failed: certificate for host keycloak-postgresql.my-keycloak-namespace dNSName entries subjectAltName, but none of them match. Assuming server name validation failed
After I added the host keycloak-postgresql.my-keycloak-namespace to the db certificate, it worked as advertised.
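To check which names a certificate actually carries, something like this works (the path to the server certificate file is assumed):
openssl x509 -in server.crt -noout -text | grep -A1 'Subject Alternative Name'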

Connect from GKE to Cloud SQL through private IP

I am trying to connect from a pod in GKE to Google Cloud SQL.
Last weekend I made it work, but after I deleted and recreated the pod it stopped working, and I am not sure why.
Description
I have a Node.js application that is dockerized. It uses the Sequelize library and connects to a Postgres database.
Sequelize reads the variables from the environment, and in Kubernetes I pass them through a secret:
apiVersion: v1
kind: Secret
metadata:
  name: myapi-secret
  namespace: development
type: Opaque
data:
  MYAPI_DATABASE_CLIENT: XXX
  MYAPI_DATABASE_PORT: XXX
  MYAPI_DATABASE_HOST: XXX
  MYAPI_DATABASE_NAME: XXX
  MYAPI_DATABASE_USERNAME: XXX
  MYAPI_DATABASE_PASSWORD: XXX
And my pod definition
apiVersion: v1
kind: Pod
metadata:
  name: myapi
  namespace: development
  labels:
    env: dev
    app: myapi
spec:
  containers:
  - name: myapi
    image: gcr.io/companydev/myapi
    envFrom:
    - secretRef:
        name: myapi-secret
    ports:
    - containerPort: 3001
      name: myapi
When I deploy the pod I get a connection error to the database
Error: listen EACCES: permission denied tcp://podprivateip:3000
    at Server.setupListenHandle [as _listen2] (net.js:1300:21)
    at listenInCluster (net.js:1365:12)
    at Server.listen (net.js:1462:5)
    at Function.listen (/usr/src/app/node_modules/express/lib/application.js:618:24)
    at Object.<anonymous> (/usr/src/app/src/app.js:46:5)
    at Module._compile (internal/modules/cjs/loader.js:1076:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:1097:10)
    at Module.load (internal/modules/cjs/loader.js:941:32)
    at Function.Module._load (internal/modules/cjs/loader.js:782:14)
    at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:72:12)
Emitted 'error' event on Server instance at:
    at emitErrorNT (net.js:1344:8)
    at processTicksAndRejections (internal/process/task_queues.js:80:21) {
  code: 'EACCES',
  errno: -13,
  syscall: 'listen',
  address: 'tcp://podprivateip:3000',
  port: -1
}
I can't figure out what I am missing.
Thanks to @kurtisvg I realized that I was not passing the host and port through env variables to Express. However, I still have a connection error:
UnhandledPromiseRejectionWarning: SequelizeConnectionError: connect ETIMEDOUT postgresinternalip:5432
It is strange because Postgres (Cloud SQL) and the cluster (GKE) are in the same GCP network, but it is as if the pod can't see the database.
If I run docker-compose locally, this connection works.
You're connecting over private IP, but the port you've specified appears to be 3000. Typically Cloud SQL listens on the default port for the database engine:
MySQL - 3306
Postgres - 5432
SQL Server - 1433
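So for this setup the secret should carry 5432 as the Postgres port; a hedged sketch using stringData to avoid hand-encoding base64 (other keys left out, host placeholder assumed):
apiVersion: v1
kind: Secret
metadata:
  name: myapi-secret
  namespace: development
type: Opaque
stringData:
  MYAPI_DATABASE_HOST: <cloud-sql-private-ip>
  MYAPI_DATABASE_PORT: "5432"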

Connect external databases in airflow deployed in Kubernetes

I am trying to connect my external PostgreSQL database in airflow deployed in Kubernetes using this chart: https://github.com/apache/airflow/tree/master/chart
I am modifying the values.yml file:
# Airflow database config
data:
  # If secret names are provided, use those secrets
  metadataSecretName: ~
  resultBackendSecretName: ~
  # Otherwise pass connection values in
  metadataConnection:
    user: <USERNAME>
    pass: <PASSWORD>
    host: <POSTGRESERVER>
    port: 5432
    db: airflow
    sslmode: require
  resultBackendConnection:
    user: <USERNAME>
    pass: <PASSWORD>
    host: <POSTGRESERVER>
    port: 5432
    db: airflow
    sslmode: require
But it is not working; all my pods are stuck in status Waiting: PodInitializing.
My PostgreSQL is hosted in Azure Database for PostgreSQL, and I am afraid the pod IPs are not allowed by the firewall rules in Azure Database for PostgreSQL.
How can I know what IPs my Kubernetes pods are using to connect to the database?
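For reference, pod IPs can be listed with something like the following (namespace name assumed), though traffic leaving the cluster may appear to the database as a node or egress IP rather than the pod IP:
kubectl get pods -n <airflow-namespace> -o wide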