Connect from GKE to Cloud SQL through private IP - postgresql

I am trying to connect from a pod in GKE to Google Cloud SQL.
Last weekend I made it work, but after I deleted and recreated the pod it stopped working, and I am not sure why.
Description
I have a dockerized Node.js application. It uses the Sequelize library and connects to a Postgres database.
Sequelize reads the connection variables from the environment, and in Kubernetes I pass them through a secret:
apiVersion: v1
kind: Secret
metadata:
  name: myapi-secret
  namespace: development
type: Opaque
data:                        # values under data must be base64-encoded
  MYAPI_DATABASE_CLIENT: XXX
  MYAPI_DATABASE_PORT: XXX
  MYAPI_DATABASE_HOST: XXX
  MYAPI_DATABASE_NAME: XXX
  MYAPI_DATABASE_USERNAME: XXX
  MYAPI_DATABASE_PASSWORD: XXX
And my pod definition:
apiVersion: v1
kind: Pod
metadata:
  name: myapi
  namespace: development
  labels:
    env: dev
    app: myapi
spec:
  containers:
    - name: myapi
      image: gcr.io/companydev/myapi
      envFrom:
        - secretRef:
            name: myapi-secret
      ports:
        - containerPort: 3001
          name: myapi
When I deploy the pod I get a connection error to the database:
Error: listen EACCES: permission denied tcp://podprivateip:3000
at Server.setupListenHandle [as _listen2] (net.js:1300:21)
at listenInCluster (net.js:1365:12)
at Server.listen (net.js:1462:5)
at Function.listen (/usr/src/app/node_modules/express/lib/application.js:618:24)
at Object.<anonymous> (/usr/src/app/src/app.js:46:5)
at Module._compile (internal/modules/cjs/loader.js:1076:30)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:1097:10)
at Module.load (internal/modules/cjs/loader.js:941:32)
at Function.Module._load (internal/modules/cjs/loader.js:782:14)
at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:72:12)
Emitted 'error' event on Server instance at:
at emitErrorNT (net.js:1344:8)
at processTicksAndRejections (internal/process/task_queues.js:80:21) {
code: 'EACCES',
errno: -13,
syscall: 'listen',
address: 'tcp://podprivateip:3000',
port: -1
}
I can't figure out what I'm missing.
Thanks to @kurtisvg I realized that I was not passing the host and port through env variables to Express. However, I still have a connection error:
UnhandledPromiseRejectionWarning: SequelizeConnectionError: connect ETIMEDOUT postgresinternalip:5432
It is strange because Postgres (Cloud SQL) and the cluster (GKE) are in the same GCP network, but it is as if the pod can't see the database.
When I run the app locally with docker-compose, this connection works.

You're connecting over private IP, but the port you've specified appears to be 3000. Typically Cloud SQL listens on the default port for the database engine:
MySQL - 3306
Postgres - 5432
SQL Server - 1433
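For illustration only, a minimal sketch of the secret with the database port pointed at the Postgres default; the host value is a placeholder, and stringData is used so the values don't have to be base64-encoded by hand:
apiVersion: v1
kind: Secret
metadata:
  name: myapi-secret
  namespace: development
type: Opaque
stringData:
  MYAPI_DATABASE_HOST: "10.0.0.5"   # placeholder: the Cloud SQL private IP
  MYAPI_DATABASE_PORT: "5432"       # Postgres default, not the app's own listen port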

Related

K0s cluster is not getting created with active firewall on Redhat 8.4

I am using k0sctl to create a cluster, but it does not get set up if the firewall is on.
Operating System - RedHat 8.4
k0s version - v1.24.2+k0s.0
k0sctl version - v0.13.0
k0sctl file:
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s-cluster
spec:
  hosts:
    - ssh:
        address: 10.210.24.11
        user: root
        port: 22
        keyPath: /root/.ssh/id_rsa
      role: controller
    - ssh:
        address: 10.210.24.12
        user: root
        port: 22
        keyPath: /root/.ssh/id_rsa
      role: worker
  k0s:
    version: 1.24.2+k0s.0
    dynamicConfig: false
The error I am getting is: failed to connect from worker to kubernetes api at https://X.X.X.X:6443 - check networking
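A minimal sketch of what opening the required ports could look like, assuming firewalld (the RHEL 8 default); the exact port list should be confirmed against the k0s networking documentation:
# on the controller, open the ports the worker needs to reach
firewall-cmd --permanent --add-port=6443/tcp    # kubernetes API
firewall-cmd --permanent --add-port=8132/tcp    # konnectivity
firewall-cmd --permanent --add-port=9443/tcp    # controller join API
firewall-cmd --reload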

How to create keycloak with operator and external database

I followed this, but it is not working.
I created a custom secret:
apiVersion: v1
kind: Secret
metadata:
  name: keycloak-db-secret
data:
  POSTGRES_DATABASE: ...
  POSTGRES_EXTERNAL_ADDRESS: ...
  POSTGRES_EXTERNAL_PORT: ...
  POSTGRES_HOST: ...
  POSTGRES_USERNAME: ...
  POSTGRES_PASSWORD: ...
and keycloak with external db:
apiVersion: keycloak.org/v1alpha1
kind: Keycloak
metadata:
  labels:
    app: keycloak
  name: keycloak
spec:
  externalDatabase:
    enabled: true
  instances: 1
But when I check the log, Keycloak cannot connect to the db. It is still using the default value keycloak-postgresql.keycloak, not the value defined in my custom secret. Why is it not using the value from my secret?
UPDATE
When I check the keycloak pod created by the operator, I can see:
env:
  - name: DB_VENDOR
    value: POSTGRES
  - name: DB_SCHEMA
    value: public
  - name: DB_ADDR
    value: keycloak-postgresql.keycloak
  - name: DB_PORT
    value: '5432'
  - name: DB_DATABASE
    value: keycloak
  - name: DB_USER
    valueFrom:
      secretKeyRef:
        name: keycloak-db-secret
        key: POSTGRES_USERNAME
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: keycloak-db-secret
        key: POSTGRES_PASSWORD
So now I know why I cannot connect to the db: it uses a different DB_ADDR. How can I use the address my-app.postgres (a db in another namespace)?
I don't know why POSTGRES_HOST in the secret is not working and the pod keeps using the default service name.
To connect to a service in another namespace you can use:
<servicename>.<namespace>.svc.cluster.local
Suppose your Postgres deployment and service are running in the test namespace; then it would be:
postgres.test.svc.cluster.local
This is what I am using: https://github.com/harsh4870/Keycloack-postgres-kubernetes-deployment/blob/main/keycload-deployment.yaml
I have also attached the Postgres file you can use; however, in my case I set up both Keycloak and Postgres in the same namespace, so it works like a charm.
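A quick way to confirm the cross-namespace name actually resolves, as a sketch reusing the busybox image from elsewhere in this thread (the service and namespace names are the example ones above):
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 \
  -- nslookup postgres.test.svc.cluster.local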
I'm using Azure PostgreSQL for that, and it works correctly. In the pod configuration it also uses keycloak-postgresql.keycloak as DB_ADDR, but this points to the internal service created by the operator based on keycloak-db-secret.
keycloak-postgresql.keycloak is another service created by the Keycloak Operator, which is used to connect to PostgreSQL's service.
You can check its endpoint.
$ kubectl get endpoints keycloak-postgresql -n keycloak
NAME                  ENDPOINTS                        AGE
keycloak-postgresql   {postgresql's service ip}:5432   4m31s
However, the reason it fails is the selector of this service:
selector:
  app: keycloak
  component: database
So if your DB pod has a different label, the selector will not match.
I reported this issue to the community. If they reply, I will try to fix this bug by submitting a patch.
I was having this same issue, and after looking at @JiyeYu's answer I searched the project's issue backlog and found some related issues that are still open (at the moment of this reply).
Particularly this one: https://issues.redhat.com/browse/KEYCLOAK-18602
After reading it and its comments, I did the following:
Don't use IPs in POSTGRES_EXTERNAL_ADDRESS. If your Postgres is hosted within K8s via a StatefulSet, use the full <servicename>.<namespace>.svc.cluster.local (as in @Harsh Manvar's answer).
Remove the POSTGRES_HOST setting from the secret (don't just set it to the default; delete it). Apparently it is not only ignored, but also somehow breaks the keycloak pod's initialization.
After I applied these changes the issue was solved for me.
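Putting both points together, a sketch of the adjusted keycloak-db-secret; the service name and namespace are the asker's example values (my-app in the postgres namespace), and stringData is used instead of pre-encoded data for readability:
apiVersion: v1
kind: Secret
metadata:
  name: keycloak-db-secret
stringData:
  POSTGRES_DATABASE: ...
  POSTGRES_EXTERNAL_ADDRESS: my-app.postgres.svc.cluster.local  # full service DNS name, not an IP
  POSTGRES_EXTERNAL_PORT: "5432"
  POSTGRES_USERNAME: ...
  POSTGRES_PASSWORD: ...
  # POSTGRES_HOST removed entirely, per KEYCLOAK-18602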
I also had a similar problem; it turned out that since I was using SSLMODE: "verify-full", Keycloak expected the correct hostname of my external db.
Since Keycloak somehow translates the real external db address into keycloak-postgresql.keycloak internally, it expected something like keycloak-postgresql.my-keycloak-namespace.
The log went something like this:
SEVERE [org.postgresql.ssl.PGjdbcHostnameVerifier] (ServerService Thread Pool -- 57) Server name validation failed: certificate for host keycloak-postgresql.my-keycloak-namespace dNSName entries subjectAltName, but none of them match. Assuming server name validation failed
After I added the host keycloak-postgresql.my-keycloak-namespace to the db certificate, it worked as advertised.
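One way to check which hostnames a Postgres server certificate actually covers (a sketch; the host is a placeholder, and -starttls postgres needs OpenSSL 1.1.1 or newer):
openssl s_client -starttls postgres -connect my-db-host:5432 </dev/null 2>/dev/null \
  | openssl x509 -noout -ext subjectAltName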

ArangoDB init container fails on minikube

I'm working on a NodeJS service which uses ArangoDB as datastore, and deployed on minikube. I use an initContainer directive in the kubernetes deployment manifest to ensure that the database is ready to receive connections before the application attempts to connect. The relevant portion of the kubernetes YAML is shown below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: carservice
spec:
  template:
    spec:
      initContainers:
        - name: init-carservice
          image: arangodb/arangodb:3.5.1
          command: ['sh', '-c', 'until arangosh --server.endpoint="https://${CARSERVICE_CARSERVICEDB_SERVICE_HOST}:${CARSERVICE_CARSERVICEDB_SERVICE_PORT}" --server.password=""; do echo waiting for database to be up; sleep 2; done;']
      containers:
        - name: carservice
          image: carservice
          imagePullPolicy: IfNotPresent
The challenge has been that sometimes the initContainer is able to wait for the database connection to be established successfully. Most of the other times, it randomly fails with the error:
ERROR caught exception: invalid endpoint spec: https://
Out of desperation, I changed the scheme to http, and it fails with a corresponding error:
ERROR caught exception: invalid endpoint spec: http://
My understanding of these errors is that the database is not able to recognize https and http in these instances, which is strange. The few times the initContainer bit worked successfully, I used https in the related command in the kubernetes spec.
I must add that the actual database (https://${CARSERVICE_CARSERVICEDB_SERVICE_HOST}:${CARSERVICE_CARSERVICEDB_SERVICE_PORT}) has been successfully deployed to minikube using kube-arangodb, and can be accessed through the web UI, so that bit is sorted.
What I'd like to know:
Is this the recommended way to wait for ArangoDB to connect using the initContainer directive, or do I have to use an entirely different approach?
What could be causing the error I'm getting? Am I missing something fundamental here?
Would be glad for any help.
The issue was that, in the runs where the init container failed to connect to ArangoDB, the env variables were not yet set. Therefore, I added another init container before it (since init containers are executed in sequence) that waits for the corresponding Kubernetes Service resource of the ArangoDB deployment to come up. That way, by the time the second init container runs, the env variables are available.
The corresponding portion of the kubernetes deployment YAML is:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: carservice
spec:
  template:
    spec:
      initContainers:
        - name: init-db-service
          image: busybox:1.28
          command: ['sh', '-c', 'until nslookup carservice-carservicedb; do echo waiting for kubernetes service resource for db; sleep 2; done;']
        - name: init-carservice
          image: arangodb/arangodb:3.5.1
          command: ['sh', '-c', 'until arangosh --server.endpoint="https://${CARSERVICE_CARSERVICEDB_SERVICE_HOST}:${CARSERVICE_CARSERVICEDB_SERVICE_PORT}" --server.password=""; do echo waiting for database to be up; sleep 2; done;']
      containers:
        - name: carservice
          image: carservice
          imagePullPolicy: IfNotPresent

How to use my own hub image when deploying a jupyterhub on google kubernetes engine?

I'm trying to deploy JupyterHub on Google Kubernetes engine.
I managed to deploy it by following the Zero to JupyterHub with Kubernetes tutorial.
My next step is to deploy JupyterHub using my own hub image but I keep getting an error message (from the proxy apparently).
So I created a repository on Docker Hub registry and tried to modify my helm config file so it will pull the image (I used the helm Configuration Reference).
I updated the deploy with the following command:
helm upgrade --install $RELEASE jupyterhub/jupyterhub --namespace $NAMESPACE --version=0.8.2 --values config.yaml
As a result, I get a "Service Unavailable" message (The pods are all running).
The proxy pod log:
09:14:24.370 - info: [ConfigProxy] Adding route / -> http://10.47.249.21:8081
09:14:24.380 - info: [ConfigProxy] Proxying http://0.0.0.0:8000 to http://10.47.249.21:8081
09:14:24.381 - info: [ConfigProxy] Proxy API at http://0.0.0.0:8001/api/routes
09:16:01.434 - error: [ConfigProxy] 503 GET /hub/admin connect ECONNREFUSED 10.47.249.21:8081
09:16:01.438 - error: [ConfigProxy] Failed to get custom error page Error: connect ECONNREFUSED 10.47.249.21:8081
    at Object.exports._errnoException (util.js:1020:11)
    at exports._exceptionWithHostPort (util.js:1043:20)
    at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1086:14)
Hub image Dockerfile:
FROM jupyterhub/jupyterhub:0.9.6
USER root
COPY MZ_logo.jpg /usr/local/share/jupyter/hub/static/images/MZ-logo.jpg
USER ${NB_USER}
Helm config.yaml file:
proxy:
  secretToken: "<TOKEN>"
auth:
  admin:
    users:
      - admin1
      - admin2
  whitelist:
    users:
      - user1
      - user2
hub:
  imagePullPolicy: 'Always'
  imagePullSecret:
    enabled: true
    username: <DOCKER_HUB_USERNAME>
    password: <DOCKER_HUB_PASSWORD>
  image:
    name: <DOCKER_HUB_USERNAME>/<DOCKER_HUB_REPO>
    tag: latest
  extraConfig: |
    c.JupyterHub.logo_file = '/usr/local/share/jupyter/hub/static/images/MZ-logo.jpg'
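Since the proxy log shows ECONNREFUSED against the hub pod on port 8081, a first debugging step is to read the hub pod's own log; a sketch, assuming the chart's default deployment name of hub and the same $NAMESPACE as in the helm command above:
kubectl get pods -n $NAMESPACE
kubectl logs deploy/hub -n $NAMESPACE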

Connect to Google Cloud SQL from Container Engine with Java App

I'm having a tough time connecting to a Cloud SQL instance from a Java app running in a Google Container Engine instance.
I whitelisted the external instance IP in the Access Control of Cloud SQL. Connecting from my local machine works well; however, I haven't managed to establish a connection from my app yet.
I'm configuring the container as (cloud-deployment.yaml):
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: APPNAME
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: APPNAME
    spec:
      imagePullSecrets:
        - name: APPNAME.com
      containers:
        - image: index.docker.io/SOMEUSER/APPNAME:latest
          name: web
          env:
            - name: MYQL_ENV_DB_HOST
              value: 111.111.111.111 # the cloud sql instance ip
            - name: MYQL_ENV_MYSQL_PASSWORD
              value: THEPASSWORD
            - name: MYQL_ENV_MYSQL_USER
              value: THEUSER
          ports:
            - containerPort: 9000
              name: APPNAME
using the connection url jdbc:mysql://111.111.111.111:3306/databaseName, resulting in:
Error while executing: Access denied for user 'root'@'ip address of the instance' (using password: YES)
I can confirm that the Container Engine external IP is set on the SQL instance.
I don't want to use the Cloud Proxy Image for now as I'm still in development stage.
Any help is greatly appreciated.
You must use the cloud SQL proxy as described here: https://github.com/GoogleCloudPlatform/cloudsql-proxy/blob/master/README.md
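A minimal sketch of what the proxy sidecar from that README could look like in the deployment above (the instance connection name is a placeholder, and the credentials setup described in the README is omitted here); the app then connects to 127.0.0.1:3306 instead of the instance IP:
containers:
  - image: index.docker.io/SOMEUSER/APPNAME:latest
    name: web
    env:
      - name: MYQL_ENV_DB_HOST
        value: 127.0.0.1            # talk to the proxy sidecar, not the instance IP
  - name: cloudsql-proxy
    image: gcr.io/cloudsql-docker/gce-proxy:1.16
    command: ["/cloud_sql_proxy", "-instances=PROJECT:REGION:INSTANCE=tcp:3306"]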