I am trying to connect Airflow, deployed in Kubernetes using this chart: https://github.com/apache/airflow/tree/master/chart, to my external PostgreSQL database.
I am modifying the values.yaml file:
# Airflow database config
data:
  # If secret names are provided, use those secrets
  metadataSecretName: ~
  resultBackendSecretName: ~
  # Otherwise pass connection values in
  metadataConnection:
    user: <USERNAME>
    pass: <PASSWORD>
    host: <POSTGRESERVER>
    port: 5432
    db: airflow
    sslmode: require
  resultBackendConnection:
    user: <USERNAME>
    pass: <PASSWORD>
    host: <POSTGRESERVER>
    port: 5432
    db: airflow
    sslmode: require
But it is not working: all my pods are stuck in status Waiting: PodInitializing.
My PostgreSQL is hosted on Azure Database for PostgreSQL, and I am afraid the pod IPs are not allowed by the firewall rules of Azure Database for PostgreSQL.
How can I know what IP my Kubernetes pods are using to connect to the database?
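One quick way to check (a minimal sketch; curlimages/curl and ifconfig.me are just example choices of image and echo service) is to start a throwaway pod and ask an external service which address it sees:
kubectl run ip-check --rm -it --restart=Never --image=curlimages/curl --command -- curl -s https://ifconfig.me
For traffic leaving the cluster, the address an external server sees is typically the cluster's egress public IP (on AKS, usually the load balancer's outbound IP), not the individual pod IPs, so that is the address to allow in the Azure Database for PostgreSQL firewall rules. It is also worth running kubectl describe pod <pod> to see which init container is stuck, and kubectl logs <pod> -c <init-container-name> to confirm it is failing on the database connection.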
Related
I am trying to connect GitLab to an external AWS RDS database, using a Kubernetes secret, for a deployment on AWS EKS, but I am not sure whether it actually connects, and how would I know that it does?
values.yaml code:
psql:
  connectTimeout:
  keepalives:
  keepalivesIdle:
  keepalivesInterval:
  keepalivesCount:
  tcpUserTimeout:
  password:
    useSecret: true
    secret: gitlab-secret
    key: key
  host: <RDS endpoint>
  port: <RDS port>
  username: postgres
  database: <main name of db>
  # applicationName:
  # preparedStatements: false
Kubernetes secret:
kubectl create secret generic gitlab-secret --from-literal=key="<password>" -n devops-gitlab
The PostgreSQL server details are already known.
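To check the connection independently of the GitLab chart, one option (a sketch; the namespace, secret name/key and RDS placeholders are the ones from above, and postgres:14 is an arbitrary client image choice) is to run psql from a throwaway pod using the password stored in the secret:
PGPASS=$(kubectl -n devops-gitlab get secret gitlab-secret -o jsonpath='{.data.key}' | base64 -d)
kubectl -n devops-gitlab run pg-check --rm -it --restart=Never --image=postgres:14 \
  --env="PGPASSWORD=$PGPASS" -- \
  psql -h <RDS endpoint> -p <RDS port> -U postgres -d <main name of db> -c 'SELECT 1;'
If this returns a row, the credentials and network path (security groups, subnets) are fine and any remaining problem is in the chart values; if it hangs or errors, fix connectivity or credentials first.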
I need to connect to a Postgres server instance that is running as a SystemD service from within a docker-compose file.
docker-compose containers ---> postgres as systemd
This is about setting up Airflow with an external Postgres DB that is on localhost.
I've taken the docker-compose example with:
curl -LfO 'https://airflow.apache.org/docs/apache-airflow/2.2.3/docker-compose.yaml'
However, in there they define a Postgres container that Airflow connects to by resolving the postgres hostname within the Docker network.
But I already have Postgres running on the machine via SystemD, I can check its status with:
# make sure the service is up and running
systemctl list-units --type=service | grep 'postgres.*12'
# check the process
ps aux | grep 'postgres.*12.*config_file'
# check the service details
systemctl status postgresql@12-main.service
AFAIU, inside the docker-compose YAML file I need to use the host.docker.internal feature so that Docker lets the containers find their way out of the Docker network and reach localhost on the host, where the systemd services (e.g. Postgres) live.
I've set up the Airflow YAML file for docker-compose with:
---
version: '3'
x-airflow-common:
  &airflow-common
  image: ${AIRFLOW_IMAGE_NAME:-apache/airflow:2.2.3}
  environment:
    &airflow-common-env
    AIRFLOW__CORE__EXECUTOR: LocalExecutor
    AIRFLOW__CORE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:airflow@host.docker.internal/airflow
  ...
  extra_hosts:
    - "host.docker.internal:host-gateway"
There's a lot of stuff going on there, but the point is that the SQLAlchemy connection string is using host.docker.internal as host.
Now when I invoke the command docker-compose -f airflow-local.yaml up airflow-init, I see in the output logs that Airflow is complaining that it cannot find the Postgres server:
airflow-init_1 | psycopg2.OperationalError: connection to server at "host.docker.internal" (172.17.0.1), port 5432 failed: Connection refused
airflow-init_1 | Is the server running on that host and accepting TCP/IP connections?
It might be an issue with DNS resolution between the Docker network and the host network; I am not sure how to troubleshoot this.
How do I make a Docker container reach systemd services that serve on localhost?
Turns out I just need to use network_mode: host in the YAML code for the Docker container definition (a container is a service in docker-compose terminology).
This way the Docker virtual network is somehow bound to the laptop networking layer ("localhost" or "127.0.0.1"). This setup is not encouraged by the Docker people, but sometimes things are messy when dealing with legacy systems so you have to work around what has been done in the past.
Then you can use localhost to reach the Postgres DB running as a SystemD service.
The only caveat is that you cannot use port mappings when using network_mode: host, otherwise docker-compose complains with the error message:
"host" network_mode is incompatible with port_bindings
So you have to remove the YAML part similar to:
ports:
  - 9999:8080
and sort out the ports (TCP sockets) in a different way.
In my specific scenario (Airflow stuff), I've done the following:
For the host/networking that makes the Airflow webserver (docker container/service) reach the Postgres DB (SystemD service/daemon) on localhost:
# see the use of "localhost"
AIRFLOW__CORE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:airflow@localhost/airflow
For the TCP port, in the docker-compose YAML service definition for the Airflow webserver I specified the port:
command: webserver -p 9999
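Putting it together, the relevant parts of the service definition ended up looking roughly like this (a trimmed sketch, not the full file; only the pieces discussed above are shown):
x-airflow-common:
  &airflow-common
  image: ${AIRFLOW_IMAGE_NAME:-apache/airflow:2.2.3}
  network_mode: host
  environment:
    &airflow-common-env
    AIRFLOW__CORE__EXECUTOR: LocalExecutor
    AIRFLOW__CORE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:airflow@localhost/airflow

services:
  airflow-webserver:
    <<: *airflow-common
    command: webserver -p 9999
Note there is no ports: section and no extra_hosts: anymore, since the containers share the host network directly.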
I have set up a Postgres pod on my Kubernetes cluster, and I am trying to troubleshoot it a bit.
I would like to use the official Postgres image and deploy it to my Kubernetes cluster using kubectl. Given that my Postgres server connection details are:
host: mypostgres
port: 5432
username: postgres
password: 12345
And given that I think the command will be something like:
kubectl run -i --tty --rm debug --image=postgres --restart=Never -- sh
What do I need to do so that I can deploy this image to my cluster, connect to my Postgres server and start running SQL command against it (for troubleshooting purposes)?
If you're primarily interested in troubleshooting, then you're probably looking for the kubectl port-forward command, which will expose a container port on your local host. First, you'll need to deploy the Postgres pod; you haven't shown what your pod manifest looks like, so I'm going to assume a Deployment like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: postgres
  name: postgres
  namespace: sandbox
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - env:
            - name: POSTGRES_PASSWORD
              value: secret
            - name: POSTGRES_USER
              value: example
            - name: POSTGRES_DB
              value: example
          image: docker.io/postgres:13
          name: postgres
          ports:
            - containerPort: 5432
              name: postgres
              protocol: TCP
          volumeMounts:
            - mountPath: /var/lib/postgresql
              name: postgres-data
      volumes:
        - emptyDir: {}
          name: postgres-data
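Assuming the manifest above is saved as postgres-deployment.yaml (the filename is my choice) and the sandbox namespace is the one you want, you would create everything with:
kubectl create namespace sandbox
kubectl apply -f postgres-deployment.yaml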
Once this is running, you can access postgres with the port-forward
command like this:
kubectl -n sandbox port-forward deploy/postgres 5432:5432
This should result in:
Forwarding from 127.0.0.1:5432 -> 5432
Forwarding from [::1]:5432 -> 5432
And now we can connect to Postgres using psql and run queries
against it:
$ psql -h localhost -U example example
psql (13.4)
Type "help" for help.
example=#
kubectl port-forward is only useful as a troubleshooting mechanism. If
you were trying to access your postgres pod from another pod, you
would create a Service and then use the service name as the hostname
for your client connections.
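For completeness, such a Service might look like the following sketch, matching the app: postgres label used in the Deployment above:
apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: sandbox
spec:
  selector:
    app: postgres
  ports:
    - port: 5432
      targetPort: 5432
Other pods in the same namespace could then connect to postgres:5432 (or postgres.sandbox.svc.cluster.local:5432 from other namespaces).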
Update
If your goal is to deploy a client container so that you can log
into it and run psql, the easiest solution is just to exec
into the postgres container itself. Assuming you were using the
Deployment shown earlier in this question, you could run:
kubectl exec -it deploy/postgres -- bash
This would get you a shell prompt inside the postgres container. You
can run psql and not have to worry about authentication:
$ kubectl exec -it deploy/postgres -- bash
$ psql -U example example
psql (13.4 (Debian 13.4-1.pgdg100+1))
Type "help" for help.
example=#
If you want to start up a separate container, you can use the kubectl debug command:
kubectl debug deploy/postgres
This gets you a root prompt in a debug pod. If you know the IP address
of the postgres pod, you can connect to it using psql. To get
the address of the pod, run this on your local host:
$ kubectl get pod/postgres-6df4c549f-p2892 -o jsonpath='{.status.podIP}'
10.130.0.11
And then inside the debug container:
root@postgres-debug:/# psql -h 10.130.0.11 -U example example
In this case you would have to provide an appropriate password,
because you are accessing postgres from "another machine", rather than
running directly inside the postgres pod.
Note that in the above answer I've used the shortcut
deploy/<deployment_name>, which avoids having to know the name of the
pod created by the Deployment. You can replace that with
pod/<podname> in all cases.
First of all, we have a MySQL DB set up on Heroku that already has data in it. I'm trying to add the Prisma layer on top of our DB.
My docker-compose.yml:
version: "3"
services:
prisma:
image: prismagraphql/prisma:1.34
restart: always
ports:
- "4466:4466"
environment:
PRISMA_CONFIG: |
port: 4466
# uncomment the next line and provide the env var PRISMA_MANAGEMENT_API_SECRET=my-secret to activate cluster security
# managementApiSecret: my-secret
databases:
default:
connector: mysql
host:
database:
user: user
password:
rawAccess: true
port: '3306'
migrations: false
prisma.yml:
# The HTTP endpoint for Prisma API
endpoint: http://localhost:4466
# Points to the file that contains your datamodel
datamodel: datamodel.prisma
# Specifies language and location for the generated Prisma Client
generate:
  - generator: javascript-client
    output: ./generated/prisma-client/
I go through prisma init and it connects to the DB and sets up the datamodel as it should, as far as I can tell.
After docker-compose up -d I run prisma deploy and it hits me with
Could not connect to server at http://localhost:4466. Please check if
your server is running.
I run systemctl status docker and Docker is running as it should.
I then run docker-compose logs and I get back this:
Exception in thread "main" java.sql.SQLSyntaxErrorException:
(conn=10044859) Access denied for user 'user'@'%' to database 'prisma'
So, from reading around: does Prisma need the DB user to have the privileges to create a new schema in the DB for its management purposes?
I have no clue where to go from here. If I'm doing something wrong here, the help would be much appreciated!
Btw: SSL is not active on the DB so there is no need for it to be true.
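Judging from that error, the user configured in PRISMA_CONFIG has no rights on the prisma database that Prisma 1 uses for its management data, so it does need privileges beyond the application schema. If you have an admin account on the MySQL server, a grant along these lines would be the first thing to try (a sketch; <host> and the admin user are placeholders, and on a managed Heroku database you may not be allowed to run this at all):
mysql -h <host> -P 3306 -u root -p -e "GRANT ALL PRIVILEGES ON prisma.* TO 'user'@'%'; FLUSH PRIVILEGES;"
Depending on your service/stage setup, Prisma may also need rights on the schema it creates for your service data.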
I've set up Hasura on a DigitalOcean droplet using the instructions here - https://docs.hasura.io/1.0/graphql/manual/guides/deployment/digital-ocean-one-click.html -
How can I connect to the Postgres database? Preferably using something like DBeaver - with host, database, user, password.
I guess the Postgres is running inside a Docker container, but how do you expose it to the outside world?
The docker-compose.yaml used on the Digital Ocean Marketplace does not expose the Postgres database on the host machine.
You can find the file at /etc/hasura/docker-compose.yaml. If your database management tool supports running as a Docker container, I recommend adding its relevant configuration to the docker-compose.yaml and exposing that application to the outside, like how graphql-engine is exposed via Caddy (config in /etc/hasura/Caddyfile).
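For example, something like Adminer could be added as another service in that file (a sketch; the adminer image and service name are my choice, and you would either route it through Caddy like graphql-engine or keep it bound to 127.0.0.1 as below rather than exposing it publicly):
adminer:
  image: adminer:4
  restart: always
  ports:
    - "127.0.0.1:8081:8080"
In the Adminer UI you would then use postgres (the compose service name) as the host.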
But if you'd like to connect to postgres from within the machine, add a port mapping to the docker-compose file:
postgres:
  image: postgres:10.5
  restart: always
  volumes:
    - db_data:/var/lib/postgresql/data
  ports:
    - "127.0.0.1:5432:5432"
Now, Postgres will be available at postgres://postgres:@127.0.0.1:5432/postgres
Do set a password if you're exposing it on the host machine.
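A minimal way to do that (a sketch against the compose snippet above; pick your own value) is to set POSTGRES_PASSWORD on the postgres service and use it in the connection string:
postgres:
  image: postgres:10.5
  restart: always
  environment:
    POSTGRES_PASSWORD: mysecretpassword
  volumes:
    - db_data:/var/lib/postgresql/data
  ports:
    - "127.0.0.1:5432:5432"
so the URL becomes postgres://postgres:mysecretpassword@127.0.0.1:5432/postgres (remember to update the database URL that graphql-engine uses in the same file). Note that the official postgres image only applies POSTGRES_PASSWORD when the data directory is first initialised; for an already-initialised database you would instead run ALTER USER postgres WITH PASSWORD '...' inside the database.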