How to connect to an AWS Postgres DB over SSH using Python - postgresql

I have a Postgres DB on AWS, and currently we connect to it using the Postico client by providing the following information:
DB Host
DB Port
DB Username
DB Password
DB Name
SSH Host (it is domain)
SSH Port
SSH Private Key
While doing this I am on my organisation's VPN. Now I have to make the same connection from Python code, and I believe that if I can connect with Postico, I should be able to connect through code as well. I used the code below but am unable to connect to the DB and fetch records. Can anyone give me an idea or sample code?
import pandas as pd
from sqlalchemy import create_engine
from sshtunnel import SSHTunnelForwarder

class Postgresql_connect:
    def __init__(self, pgres_host, pgres_port, db, ssh, ssh_user, ssh_host, ssh_pkey):
        # SSH tunnel variables
        self.pgres_host = pgres_host
        self.pgres_port = pgres_port
        if ssh:
            self.server = SSHTunnelForwarder(
                (ssh_host, 22),
                ssh_username=ssh_user,
                ssh_pkey=ssh_pkey,
                remote_bind_address=(pgres_host, pgres_port),
            )
            self.server.start()  # start the SSH tunnel
            self.local_port = self.server.local_bind_port
            print(f'Server connected via SSH || Local Port: {self.local_port}...')

    def query(self, db, query, psql_user, psql_pass):
        # Connect to 127.0.0.1 (the local end of the tunnel), not to the remote
        # DB host, and separate the credentials from the host with '@', not '#'
        engine = create_engine(f'postgresql://{psql_user}:{psql_pass}@127.0.0.1:{self.local_port}/{db}')
        print(f'Database [{db}] session created...')
        print(f'host [{self.pgres_host}]')
        self.query_df = pd.read_sql(query, engine)
        print('<> Query Successful <>')
        engine.dispose()
        return self.query_df
pgres = Postgresql_connect(pgres_host=p_host, pgres_port=p_port, db=db, ssh=ssh,
                           ssh_user=ssh_user, ssh_host=ssh_host, ssh_pkey=ssh_pkey)
print(ssh_pkey)
query_df = pgres.query(db=db, query='Select * from table', psql_user=user, psql_pass=password)
print(query_df)

Connect just as you would locally after creating an SSH tunnel:
https://www.howtogeek.com/168145/how-to-use-ssh-tunneling/
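For reference, here is a minimal sketch of that pattern using sshtunnel and psycopg2; every host name, path and credential below is a placeholder, not something from the question:

import psycopg2
from sshtunnel import SSHTunnelForwarder

with SSHTunnelForwarder(
    ('bastion.example.com', 22),              # SSH host (the domain) and SSH port
    ssh_username='ec2-user',
    ssh_pkey='/path/to/private_key.pem',
    remote_bind_address=('mydb.rds.amazonaws.com', 5432),  # DB host/port as seen from the SSH host
) as tunnel:
    # The tunnel listens on 127.0.0.1:<local_bind_port>; connect there,
    # not to the DB host itself.
    conn = psycopg2.connect(
        host='127.0.0.1',
        port=tunnel.local_bind_port,
        dbname='mydb',
        user='db_user',
        password='db_password',
    )
    with conn.cursor() as cur:
        cur.execute('SELECT 1')
        print(cur.fetchone())
    conn.close()

The with block closes the tunnel automatically, which avoids leaking the forwarder if the query raises.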

Related

Problem connecting to Postgres with Karate running in Docker

I followed the instructions at https://github.com/karatelabs/karate/wiki/Docker to run the karate-chrome Docker image and it worked fine.
But when I try to connect Karate to my local Postgres server, I get the following error:
org.postgresql.util.PSQLException: Connection to localhost:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
The Postgres server is running with the correct host and port (localhost:5432), and I'm using the following configuration for the JDBC API:
Background:
* def config = { username: 'postgres', password: 'pass', url: 'jdbc:postgresql://localhost:5432/database_name_here', driverClassName: 'org.postgresql.Driver' }
* def DbUtils = Java.type('Testapi.DbUtils')
* def db = new DbUtils(config)
Does anyone have any suggestions to solve this problem? Thank you in advance.
Note: When I use an online MySQL server, everything runs fine (with its respective configuration).
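One plausible cause, assuming Karate runs inside the karate-chrome container (the question doesn't show the Docker setup): inside a container, localhost refers to the container itself, not to the machine where Postgres is listening. On Docker Desktop the host is usually reachable as host.docker.internal, so the config would become something like:

* def config = { username: 'postgres', password: 'pass', url: 'jdbc:postgresql://host.docker.internal:5432/database_name_here', driverClassName: 'org.postgresql.Driver' }

Postgres also has to accept non-loopback connections (listen_addresses and pg_hba.conf) for this to work.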

PostgreSQL failover is not working when using multiple hosts in the connection string

I am using the Python code below to test failover:
import psycopg2

conn = psycopg2.connect(database="ccs_testdb", host="host1,host2", user="postgre_user",
                        password="secret", port="5432", target_session_attrs="read-write")
cur = conn.cursor()
cur.execute("select pg_is_in_recovery(), inet_server_addr()")
row = cur.fetchone()
print("recovery =", row[0])
print("server =", row[1])
If host1 goes down, the connection is not established with host2 automatically. Has anyone tried this?
I want to connect to the master instance from my application, and if the master goes down, fall back to the standby instance, which would be host2 in the program above.
With target_session_attrs="any" it worked; it connects to the next host in the list, which is the standby.
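That behaviour follows from what target_session_attrs asks libpq for: read-write rejects any server that is in recovery (i.e. a read-only standby), so once the primary is down there is no acceptable host left and the whole attempt fails; any takes the first host that accepts a connection, standby included. A minimal sketch of the difference, reusing the placeholder hosts and credentials from the question:

import psycopg2

# target_session_attrs="any": the first reachable host wins, even if it is
# a read-only standby. Note that libpq only *connects* to the standby; it
# does not promote it, so writes will still fail until a real failover happens.
conn = psycopg2.connect(
    "host=host1,host2 port=5432 dbname=ccs_testdb "
    "user=postgre_user password=secret "
    "target_session_attrs=any"
)

Multi-host connection strings and target_session_attrs require libpq 10 or newer on the client side.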

How to disable AutoCommit in Postgresql ODBC

I'm trying to connect Oracle to PostgreSQL via a DB link. Is there any option to disable AutoCommit in the PostgreSQL ODBC driver?
odbc.ini looks like:
[PG]
Description = PG
Driver = /usr/lib64/psqlodbc.so
ServerName = xxxxx
Username = authenticator
Password = xxxx
Port = 5432
Database = master
[Default]
Driver = /usr/lib64/libodbcpsqlS.so
I tried autocommit=false and autocommit=off, but it did not work.
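For what it's worth (an observation, not something from the question): in ODBC, autocommit is a connection attribute (SQL_ATTR_AUTOCOMMIT) that the client application sets, rather than an odbc.ini key, which would explain why those entries are ignored. With an Oracle DB link the gateway itself is the ODBC client, so the setting would have to come from that side; but as a sketch of the general mechanism from a client that exposes it, e.g. pyodbc against the PG DSN above:

import pyodbc

# autocommit=False sets the ODBC SQL_ATTR_AUTOCOMMIT attribute at connect
# time; there is no odbc.ini equivalent as far as I know.
conn = pyodbc.connect('DSN=PG;UID=authenticator;PWD=xxxx', autocommit=False)
cur = conn.cursor()
cur.execute("INSERT INTO some_table VALUES (1)")  # hypothetical table
conn.commit()  # nothing is visible to other sessions until this commit
conn.close()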

How to connect a database (Postgres) to Airflow Composer on Google Cloud Platform?

I have an Airflow setup on my local machine. The DAGs are written in a way that they need to access a database (Postgres). I am trying to set up a similar thing on Google Cloud Platform, but I am not able to connect the database to Airflow in Composer: I keep getting the error "no host postgres". Any suggestions for setting up Airflow on GCP or connecting a database to Airflow Composer?
Here is a link to my complete Airflow folder (this setup works fine on my local machine with Docker):
https://github.com/digvijay13873/airflow-docker.git
I am using GCP Composer. The Postgres database is in a SQL instance. My table-creation DAG is here:
https://github.com/digvijay13873/airflow-docker/blob/main/dags/tablecreation.py
What changes should I make to my existing DAG to connect it with Postgres in the SQL instance? I tried giving the public IP address of Postgres in the host parameter.
Answering your main question: connecting to a SQL instance from GCP in a Cloud Composer environment can be done in two ways:
Using the public IP
Using the Cloud SQL proxy (recommended): secure access without the need for authorized networks or SSL configuration
Connecting using the public IP:
Postgres: connect directly via TCP (non-SSL):
os.environ['AIRFLOW_CONN_PUBLIC_POSTGRES_TCP'] = (
    "gcpcloudsql://{user}:{password}@{public_ip}:{public_port}/{database}?"
    "database_type=postgres&"
    "project_id={project_id}&"
    "location={location}&"
    "instance={instance}&"
    "use_proxy=False&"
    "use_ssl=False".format(**postgres_kwargs)
)
For more information, refer to the examples on GitHub.
Connecting using the Cloud SQL proxy: you can connect using the Auth proxy from GKE as per this documentation.
After setting up the SQL proxy, you can connect Composer to your SQL instance through it.
Exemplar Code:
# Imports needed by this exemplar (Airflow 1.x-era module paths); the GCSQL_*,
# GCP_* constants and default_args are assumed to be defined elsewhere,
# e.g. from environment variables.
import os
from os.path import expanduser
from urllib.parse import quote_plus
from datetime import timedelta

from airflow import DAG
from airflow.operators.python_operator import PythonOperator
from airflow.contrib.operators.gcp_sql_operator import CloudSqlQueryOperator
from google.cloud import storage

SQL = [
    'CREATE TABLE IF NOT EXISTS TABLE_TEST (I INTEGER)',
    'CREATE TABLE IF NOT EXISTS TABLE_TEST (I INTEGER)',  # repeated; harmless thanks to IF NOT EXISTS
    'INSERT INTO TABLE_TEST VALUES (0)',
    'CREATE TABLE IF NOT EXISTS TABLE_TEST2 (I INTEGER)',
    'DROP TABLE TABLE_TEST',
    'DROP TABLE TABLE_TEST2',
]

HOME_DIR = expanduser("~")

def get_absolute_path(path):
    if path.startswith("/"):
        return path
    else:
        return os.path.join(HOME_DIR, path)

postgres_kwargs = dict(
    user=quote_plus(GCSQL_POSTGRES_USER),
    password=quote_plus(GCSQL_POSTGRES_PASSWORD),
    public_port=GCSQL_POSTGRES_PUBLIC_PORT,
    public_ip=quote_plus(GCSQL_POSTGRES_PUBLIC_IP),
    project_id=quote_plus(GCP_PROJECT_ID),
    location=quote_plus(GCP_REGION),
    instance=quote_plus(GCSQL_POSTGRES_INSTANCE_NAME_QUERY),
    database=quote_plus(GCSQL_POSTGRES_DATABASE_NAME),
)

os.environ['AIRFLOW_CONN_PROXY_POSTGRES_TCP'] = (
    "gcpcloudsql://{user}:{password}@{public_ip}:{public_port}/{database}?"
    "database_type=postgres&"
    "project_id={project_id}&"
    "location={location}&"
    "instance={instance}&"
    "use_proxy=True&"
    "sql_proxy_use_tcp=True".format(**postgres_kwargs)
)

connection_names = [
    "proxy_postgres_tcp",
]

dag = DAG(
    'con_SQL',
    default_args=default_args,
    description='A DAG that connects to the SQL server.',
    schedule_interval=timedelta(days=1),
)

def print_client(ds, **kwargs):
    client = storage.Client()
    print(client)

print_task = PythonOperator(
    task_id='print_the_client',
    provide_context=True,
    python_callable=print_client,
    dag=dag,
)

for connection_name in connection_names:
    task = CloudSqlQueryOperator(
        gcp_cloudsql_conn_id=connection_name,
        task_id="example_gcp_sql_task_" + connection_name,
        sql=SQL,
        dag=dag,
    )
    print_task >> task

"[NoHostAvailableException: All host(s) tried for query failed" exception occurs in connecting with cassandra cluster

var cluster: Cluster = null
var session: Session = null
cluster = Cluster.builder().addContactPoints("192.168.1.3", "192.168.1.2").build()
val metadata = cluster.getMetadata()
printf("Connected to cluster: %s\n", metadata.getClusterName())
metadata.getAllHosts() map {
  case host =>
    printf("Datacenter: %s; Host: %s; Rack: %s\n",
      host.getDatacenter(), host.getAddress(), host.getRack())
}
I am not able to connect to the Cassandra cluster using this code. It gives me the error:
[NoHostAvailableException: All host(s) tried for query failed (tried: /192.168.1.3 ([/192.168.1.3] Cannot connect), /192.168.1.2 ([/192.168.1.2] Cannot connect))]
What is my mistake in the above code?
Your code looks OK at first blush. The error suggests that Cassandra is not actually running on port 9042 (the default) on IPs 192.168.1.3 and 192.168.1.2.
If Cassandra is running on those IPs but on another port, you will need to use:
val port = 19042 // put the correct port here
cluster = Cluster.builder().addContactPoints("192.168.1.3", "192.168.1.2").withPort(port).build()
Remote access to Cassandra is via its Thrift port (although note that the JMX port can be used to perform some limited operations).
The Thrift port is defined in cassandra.yaml by the rpc_port parameter, which defaults to 9160. Your Cassandra node should be bound to the IP address of your server's network card; it shouldn't be 127.0.0.1 or localhost, which is the loopback interface's IP, as binding to this will prevent direct remote access. You configure the bound address with the rpc_address parameter in cassandra.yaml. Setting this to 0.0.0.0 says "listen on all network interfaces", which may or may not be suitable for you.
To make a connection you can use:
The cassandra-cli in the Cassandra distribution's bin directory, which provides simple get/set/list operations and depends on Java
The cqlsh shell, which provides CQL access to Cassandra and depends on Python
A higher-level interface such as Apollo
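If you'd rather check connectivity from a script, here is a minimal sketch with the DataStax Python driver; the IPs and port are the ones from the question, so adjust them to your cluster:

from cassandra.cluster import Cluster

# An error like "All host(s) tried for query failed" usually means nothing
# is listening on this host/port combination, so set the port explicitly.
cluster = Cluster(['192.168.1.3', '192.168.1.2'], port=9042)
session = cluster.connect()
print(cluster.metadata.cluster_name)
for host in cluster.metadata.all_hosts():
    print(host.datacenter, host.address, host.rack)
cluster.shutdown()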
You can use port 9042 and try to connect with the IP of localhost or another machine as follows:
public String serverIP = "127.0.0.1"; // change IP to yours
//public String serverIP = "52.10.160.197"; // for prod
public String keyspace = "database name"; // for prod
//public String keyspace = "dbname_test"; // for staging
Cluster cluster = Cluster.builder().addContactPoint(serverIP).build();
Session session = cluster.connect(keyspace);