I have installed Airflow with Postgres using Docker Compose, and I am able to connect to Postgres from Airflow by defining a connection in the Airflow web UI. Now I want to do something different: I installed Postgres locally on my PC (not in Docker), and I would like the Airflow instance running in Docker to access the PC's Postgres. How can I achieve that?
I've created the following test DAG:
from datetime import datetime

import psycopg2
from airflow import DAG
from airflow.operators.python import PythonOperator


def execute_query_with_psycopg(my_query, **kwargs):
    print(my_query)  # 'select 1'
    conn_args = dict(
        host='localhost',
        user='postgres',
        password='qaz',
        dbname='postgres',
        port=5432)
    conn = psycopg2.connect(**conn_args)
    cur = conn.cursor()
    cur.execute(my_query)
    for row in cur:
        print(row)
    cur.close()
    conn.close()


with DAG(dag_id="test2",
         start_date=datetime(2021, 1, 1),
         schedule_interval="@once",
         catchup=False) as dag:

    task1 = PythonOperator(
        task_id="test2_task",
        python_callable=execute_query_with_psycopg,
        op_kwargs={"my_query": 'select 1'})

    task1
Nevertheless, I am getting the following error:
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
could not connect to server: Cannot assign requested address
Is the server running on host "localhost" (::1) and accepting
TCP/IP connections on port 5432?
[2022-05-31 18:30:06,005] {taskinstance.py:1531} INFO - Marking task as FAILED. dag_id=test2, task_id=test2_task, execution_date=20220531T175542, start_date=20220531T183005, end_date=20220531T183006
[2022-05-31 18:30:06,098] {local_task_job.py:151} INFO - Task exited with return code 1
Docker does not use the same network as your computer, meaning that when you run a PostgreSQL server locally and a Docker container locally, they are not necessarily connected.
When you started the containers using Docker Compose, the situation was different, because Docker Compose creates a network bridge between the containers that it starts.
Regarding how to connect your local PostgreSQL to the Airflow Docker container, you can consult the following question:
Allow docker container to connect to a local/host postgres database
If you want your Airflow container to use your local PC's network (the entire network stack), you can start the Airflow container with the --network=host parameter, but use it with caution :)
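For illustration, a minimal sketch of that approach (the image name here is hypothetical; substitute your own Airflow image and command):
docker run --network=host my-airflow-image  # hypothetical image name
With host networking, the container shares the host's network stack, so localhost inside the container is the host itself. Note that this works as described on Linux; Docker Desktop on Mac and Windows treats host networking differently.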
For anyone having the same issue: this is how I resolved it, by specifying the host as follows:
host='host.docker.internal'
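In the context of the DAG above, that is the only change needed (a sketch, reusing the credentials from the question):
conn_args = dict(
    host='host.docker.internal',  # special DNS name that resolves to the Docker host
    user='postgres',
    password='qaz',
    dbname='postgres',
    port=5432)
conn = psycopg2.connect(**conn_args)
Note that on Linux this name is not defined by default; with Docker 20.10+ you can map it by adding extra_hosts: ["host.docker.internal:host-gateway"] to the service in docker-compose.yml, or --add-host=host.docker.internal:host-gateway with docker run.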
Related
I started a process on the host that listens on a Unix socket:
/cloud_sql_proxy -enable_iam_login -dir=/var/run/cloudsql -instances=project:region:server
I confirmed I can make a connection with psql: psql "sslmode=disable host=/var/run/cloudsql/project:region:server user=myuser@project.iam dbname=mydb"
I need to connect to Postgres over this socket with npgsql but inside a container.
I'm using this connection string:
string DBConnectionString = @"
    User ID=myuser@project.iam;
    Host=/var/run/cloudsql/project:region:server;
    Database=mydb;
    Port=5432
";
using (var connection = new NpgsqlConnection(DBConnectionString))
    connection.Query("SELECT * FROM mytable ORDER BY zzz");
Running this application locally on the host with this connection string works as expected: it can connect to the Unix socket and query the DB without issue. From inside a container it is not working.
I start the container, trying to bind-mount the directory containing the Unix socket:
docker run \
--mount type=bind,source="/var/run/cloudsql",target="/var/run/cloudsql" \
"myimage"
But I'm getting this error:
[00:13:46 FTL] Npgsql.NpgsqlException (0x80004005): Exception while connecting
---> System.Net.Internals.SocketExceptionFactory+ExtendedSocketException (111): Connection refused /var/run/cloudsql/project:region:server/.s.PGSQL.5432
at System.Net.Sockets.Socket.DoConnect(EndPoint endPointSnapshot, SocketAddress socketAddress)
When I look at the process using the socket (the Cloud SQL proxy), I see no output during the attempt. When I connect to the socket from the host with the app or psql, I see log lines for the login attempts. So maybe the socket isn't mounted correctly?
Update: also works fine with k8s
Just to add to this: it also works in k8s. In my manifest I just create a shared volume, with one container running the app and another running the Cloud SQL proxy; I mount the volume in both and it just works. So I'm wondering if this is some local Docker permissions issue or something?
I'm having trouble connecting to the database on Google Cloud.
In my Dockerfile, I'm calling a Python script that connects to the database, as follows:
Dockerfile:
....
ADD script.py /home/script.py
CMD ["/home/script.py"]
ENTRYPOINT ["python"]
Python script
import sqlalchemy as db

# x.x.x.x is the public IP of the database instance
engine = db.create_engine('postgresql+psycopg2://user:db_pass@x.x.x.x/db_name')
connection = engine.connect()
metadata = db.MetaData()
experiments = db.Table('experiments', metadata, autoload=True, autoload_with=engine)
But I keep getting this error:
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not connect to server: Operation timed out
Is the server running on host "x.x.x.x" and accepting
TCP/IP connections on port 5432?
Can someone help me? Thank you!!
I have a Postgres Docker cluster running in a Docker Swarm environment with an overlay network. Everything looks fine until I try accessing the created container from a remote host with this command: psql -h -p -U .
I'm setting up the Postgres initialization parameters using an environment variable file that looks like this:
# the name of the cluster
PATRONI_SCOPE=pg-test-cluster
# create an admin user on pg init
PATRONI_admin_OPTIONS=createdb, createrole
PATRONI_admin_PASSWORD=admin
# host and port of etcd service
PATRONI_ETCD_HOST=etcd:2379
# location of password file
PATRONI_POSTGRESQL_PGPASS=home/postgres/.pgpass
# address patroni will use to connect to local server
PATRONI_POSTGRESQL_LISTEN=0.0.0.0:5432
# replication user and password
PATRONI_REPLICATION_PASSWORD=abcd
PATRONI_REPLICATION_USERNAME=replicator
# address patroni used to receive incoming api calls
PATRONI_RESTAPI_LISTEN=0.0.0.0:8008
# api basic auth
PATRONI_RESTAPI_PASSWORD=admin
PATRONI_RESTAPI_USERNAME=admin
# patroni needs a superuser to administer postgres
PATRONI_SUPERUSER_PASSWORD=postgres
PATRONI_SUPERUSER_USERNAME=postgres
The error I get is:
psql: could not connect to server: Connection refused
Is the server running on host "10.66.112.29" and accepting
TCP/IP connections on port 5000?
I'm exposing port 5000 through the Patroni REST API for the master DB.
Any ideas where I'm missing the point? Thanks!
I have a PostgreSQL DB sitting on my local machine (Windows) and I would like to import it into my Hortonworks Sandbox using Apache Sqoop. While something like this sounds great, the complicating factor is that my Sandbox is sitting in a Docker container, so statements such as sqoop list-tables --connect jdbc:postgresql://127.0.0.1/ambari --username ambari -P seem to run into authentication errors. I believe the issue comes from trying to connect to the local host from inside the docker container.
I looked at this post on connecting to a MySQL DB from within a container and this one to try to use PostgreSQL instead, but have so far been unsuccessful. I have tried connecting to '127.0.0.1' and '172.17.0.1' (the host's IP) in order to connect to my local host from within Docker. I have also adjusted PostgreSQL's configuration file to listen for connections on all IP addresses. However, I still get the following error messages when I run sqoop list-tables --connect jdbc:postgresql://<ip>:5432/<db_name> --username postgres -P (where <ip> is either 127.0.0.1 or 172.17.0.1, and <db_name> is the name of my database)
For connecting with 127.0.0.1:
ERROR sqoop.Sqoop: Got exception running Sqoop: java.lang.RuntimeException: org.postgresql.util.PSQLException: FATAL: Ident authentication failed for user "postgres"
For connecting with 172.17.0.1:
Connection refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
Any suggestions would be very helpful!
If this is just for local testing and not production-level code, you can trust all connections to your database by updating the pg_hba.conf file:
Locate your pg_hba.conf file inside your postgres data directory
Vim the file and update it with the following lines:
# TYPE   DATABASE   USER   ADDRESS      METHOD
local    all        all                 trust
host     all        all    0.0.0.0/0    trust
host     all        all    ::1/128      trust
Restart your postgres service
If you do this, your first use case (using 127.0.0.1) should work.
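Once pg_hba.conf trusts the connection and PostgreSQL has been restarted, a quick smoke test from inside the container might look like this (a sketch: 172.17.0.1 is the default docker0 bridge gateway on Linux and may differ in your setup; the dbname is a placeholder):
import psycopg2

# 172.17.0.1 is the default Docker bridge gateway, one address a container
# can reach the host on; with METHOD trust, no password is required
conn = psycopg2.connect(host='172.17.0.1', port=5432,
                        user='postgres', dbname='postgres')
with conn.cursor() as cur:
    cur.execute('SELECT version()')
    print(cur.fetchone())
conn.close()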
I have a Flask application that uses PostgreSQL as its backend DB. I have installed PostgreSQL as a dockerized service (using docker-compose).
The PostgreSQL container is now running successfully on my machine. I have entered all the relevant details to connect to the database (user, password, dbname, server address, server port), but I can't connect.
When I try to connect to the database, an exception is thrown in Flask (from psycopg2):
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
OperationalError: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
When I check if postgresql is running, I get the following message:
$ service postgresql status
● postgresql.service
Loaded: not-found (Reason: No such file or directory)
Active: inactive (dead)
After reading a little more about Docker and PostgreSQL, I have come to understand why I can't connect: since PostgreSQL is running in its own container, the server address and port details I'm using in my connection parameters are wrong.
Here is the output from docker ps:
CONTAINER ID   IMAGE                                     COMMAND                  CREATED       STATUS       PORTS                    NAMES
b7c8cbac0a3b   mbsolutions/tryton-server-gnuhealth:3.8   "/docker-entrypoint.s"   13 days ago   Up 12 days   0.0.0.0:8000->8000/tcp   gnuhealth_tryton_1
0a4d9fd42544   mbsolutions/postgres-gnuhealth:3.0        "/docker-entrypoint.s"   13 days ago   Up 12 days   5432/tcp                 gnuhealth_db_1
My question is this:
What parameters (specifically, the DB server address and port) should I use in my database connection settings so that I can connect to the dockerized PostgreSQL service from my Flask application?
Note: I have seen this other question, which seems similar, but the solution given does not directly answer my question.
======================== Correct Answer Start ========================
You must map the postgres container port (5432) to be able to access it on your host.
Add this to the docker-compose.yml, under the database service:
ports:
- "5432:5432"
======================== Correct Answer End ========================
Old answer, kept for posterity:
You may want to use the host network.
Add this to the docker-compose.yml:
network_mode: "host"
The container will then not be network-isolated from your host.