I'm trying to set up pgbouncer to require a TLS/SSL connection from the applications connecting to it, but it fails with the error "FATAL TLS setup failed: failed to load CA".
This is my pgbouncer.ini:
[databases]
* = host=${postgres_host} port=5432
[pgbouncer]
# Do not change these settings:
listen_addr = 0.0.0.0
auth_file = /etc/pgbouncer/userlist.txt
auth_type = trust
client_tls_sslmode = require
client_tls_key_file = /etc/pgbouncer/server.key
client_tls_cert_file = /etc/pgbouncer/server.crt
server_tls_sslmode = verify-ca
server_tls_ca_file = /etc/root.crt.pem
# These are defaults and can be configured
# please leave them as defaults if you are
# uncertain.
listen_port = 5432
unix_socket_dir =
user = postgres
pool_mode = transaction
max_client_conn = 100
ignore_startup_parameters = extra_float_digits
admin_users = postgres
# Please add any additional settings below this line
But when I run it, it throws this error, which doesn't seem right, since a root CA file should not be needed:
FATAL TLS setup failed: failed to load CA: No such file or directory
P.S. It threw the error even before I added server_tls_sslmode = verify-ca.
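The "No such file or directory" part of the error suggests pgbouncer could not open one of the configured TLS files, most likely the server_tls_ca_file path /etc/root.crt.pem. A quick sanity check is to confirm that every TLS path in the config exists and is readable by the user pgbouncer runs as; a minimal Python sketch, with the paths copied from the config above:

import os

# Paths copied from the pgbouncer.ini above
tls_files = [
    "/etc/pgbouncer/server.key",  # client_tls_key_file
    "/etc/pgbouncer/server.crt",  # client_tls_cert_file
    "/etc/root.crt.pem",          # server_tls_ca_file; the error points here
]

for path in tls_files:
    if not os.path.exists(path):
        print(f"missing: {path}")
    elif not os.access(path, os.R_OK):
        print(f"not readable: {path}")
    else:
        print(f"ok: {path}")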
My Postgres password contains a # character, something like dba#123.
In airflow.cfg I have specified my DB connection as
#sql_alchemy_conn = postgresql+psycopg2://user:dba#123#postgresserver.com:5432/airflow
It throws this error:
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not translate host name "123#postgresserver.com" to address: Name or service not known
I tried to specify the password as a parameter to the connection string:
sql_alchemy_conn = postgresql+psycopg2://user:dba#123#postgresserver.com:5432/airflow?password=dba#123
but it is not working.
Can anyone help?
Maybe this can help: you can set the DB properties as environment variables and then read them via a function; this way you will not get an error.
import os

def db_props():
    # Read connection details from environment variables
    db_config = {
        'host': os.environ["_HOST"],
        'port': os.environ["_PORT"],
        'db': os.environ["_DATABASE"],
        'username': os.environ["_USERNAME"],
        'password': os.environ["_PASSWORD"]
    }
    return db_config
And later in your code you can do this while making a connection:
db_config = db_props()
server = db_config['host']
port = db_config['port']
database = db_config['db']
username = db_config['username']
password = db_config['password']
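Separately, the original error happens because # starts the fragment part of a URL, so everything after it gets cut off (which is why the host was parsed as "123#postgresserver.com"). If the password does have to go into the URL, percent-encoding it should work; a minimal sketch reusing the db_props() helper above (the URL layout mirrors the one in the question):

from urllib.parse import quote_plus

db_config = db_props()
# '#' must be percent-encoded as '%23', or the URL parser treats it as a fragment
encoded_password = quote_plus(db_config['password'])  # dba#123 -> dba%23123
sql_alchemy_conn = (
    f"postgresql+psycopg2://{db_config['username']}:{encoded_password}"
    f"@{db_config['host']}:{db_config['port']}/{db_config['db']}"
)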
I am trying to create a database in a Postgres RDS instance in AWS with the postgresql Terraform provider. The Terraform script I have created is as follows:
resource "aws_db_instance" "test_rds" {
allocated_storage = "" # gigabytes
backup_retention_period = 7 # in days
engine = ""
engine_version = ""
identifier = ""
instance_class = ""
multi_az = ""
name = ""
username = ""
password = ""
port = ""
publicly_accessible = "false"
storage_encrypted = "false"
storage_type = ""
vpc_security_group_ids = ["${aws_security_group.test_sg.id}"]
db_subnet_group_name = "${aws_db_subnet_group.rds_subnet_group.name}"
}
The postgresql provider is as follows:
# Create databases in RDS
provider "postgresql" {
  alias    = "alias"
  host     = "${aws_db_instance.test_rds.address}"
  port     = 5432
  username =
  password =
  database =
  sslmode  = "disable"
}
# Create user in RDS
resource "postgresql_role" "test_role" {
  name        =
  replication = true
  login       = true
  password    =
}
# Create database in RDS
resource "postgresql_database" "test_db" {
  name              = "testdb"
  owner             = "${postgresql_role.test_role.name}"
  lc_collate        = "C"
  allow_connections = true
  provider          = "postgresql.alias"
}
Anyway, I keep getting:
Error: Error initializing PostgreSQL client: error detecting capabilities: error PostgreSQL version: pq: SSL is not enabled on the server
Note: the empty fields are filled in my actual code and the RDS instance is created successfully; the problem arises when trying to create the database in the RDS instance with the postgresql provider.
We ran into this issue as well, and the problem was that the password was not defined. It seems that you get the "SSL is not enabled" error whenever the provider has problems connecting. We also had the same problem when the DB host name was missing. You will need to make sure you define all of the fields needed to connect in Terraform (probably database and username too).
Ensuring there was a password set for the postgres user and disabling sslmode did it for me:
sslmode = "disable"
I have an R Shiny server which also hosts a PostgreSQL database. However, I have trouble connecting R to Postgres.
Here is my R script:
library("dplyr")
library("RPostgreSQL")
con <- dbConnect(PostgreSQL(), dbname = "___", host="localhost", port="___", user="___", password="___")
With Rscript "skript.R" I get this error:
Error in postgresqlNewConnection(drv, ...) :
RS-DBI driver: (could not connect _____#localhost:___ on dbname "versuch1": FATAL: Ident authentication failed for user "_____"
)
Calls: dbConnect -> dbConnect -> postgresqlNewConnection -> .Call
Execution halted
What am I doing wrong? Any hints?
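"Ident authentication failed" normally means the matching line in pg_hba.conf uses the ident method rather than password authentication. Switching that line to md5 and reloading Postgres is the usual fix; a sketch of the relevant line (the file path varies by distribution):

# pg_hba.conf (e.g. /var/lib/pgsql/data/pg_hba.conf; path varies)
# TYPE  DATABASE  USER  ADDRESS       METHOD
host    all       all   127.0.0.1/32  md5    # was: ident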
I decided to play with Lapis (https://github.com/leafo/lapis), but the application crashes when I try to query the database (PostgreSQL), with this output:
2017/07/01 16:04:26 [error] 31284#0: *8 lua entry thread aborted: runtime error: attempt to yield across C-call boundary
stack traceback:
coroutine 0:
[C]: in function 'require'
/usr/local/share/lua/5.1/lapis/init.lua:15: in function 'serve'
content_by_lua(nginx.conf.compiled:22):2: in function , client: 127.0.0.1, server: , request: "GET / HTTP/1.1", host: "localhost:8080"
The code that causes the error:
local db = require("lapis.db")
local res = db.query("SELECT * FROM users");
config.lua:
config({ "development", "production" }, {
postgres = {
host = "0.0.0.0",
port = "5432",
user = "wars_base",
password = "12345",
database = "wars_base"
}
})
The database is running, the table is created, and there is one record in the table.
What could be the problem?
Solution: https://github.com/leafo/lapis/issues/556
You need to specify the right server IP in the host parameter.
The IP you have specified, 0.0.0.0, is not a valid destination address; it is normally used as a listen address, with the meaning of "every address".
During development you can usually use 127.0.0.1.
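Applied to the config.lua above, only the host line changes:

config({ "development", "production" }, {
  postgres = {
    host = "127.0.0.1",
    port = "5432",
    user = "wars_base",
    password = "12345",
    database = "wars_base"
  }
})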
I am installing a Ceph cluster with one monitor node and one OSD.
I am following the document: http://docs.ceph.com/docs/v0.86/start/quick-ceph-deploy/
During step 5, "Add the initial monitor(s) and gather the keys" (new in ceph-deploy v1.1.3),
I am getting the following exception:
[ceph-mon1][ERROR ] admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
[ceph-mon1][WARNIN] monitor: mon.ceph-mon1, might not be running yet
[ceph-mon1][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-mon1.asok mon_status
[ceph-mon1][ERROR ] admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
[ceph-mon1][WARNIN] monitor ceph-mon1 does not exist in monmap
Just for reference, my ceph.conf is as follows:
[global]
fsid = 351948ba-9716-4a04-802d-28b5510bfeb0
mon_initial_members = ceph-mon1,ceph-admin,ceph-osd1
mon_host = xxx.yyy.zzz.78,xxx.yyy.zzz.147,xxx.yyy.zzz.135
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
osd_pool_default_size = 2
public_addr = xxx.yyy.zzz.0
I tried to go through all the related questions on the ceph-users mailing list, but I found no precise solution for this problem.
Can anyone help me on this?
Thanks in advance.
I faced the same errors and was able to resolve the issue by adding my other Ceph node's hostname and IP address, and by adding a "public_network =" setting.
The settings I tweaked in ceph.conf are:
mon_initial_members =
mon_host =
public_network =
cat /etc/ceph/ceph.conf
[global]
fsid = 33cb5c76-a685-469e-8cdd-fee7c98c3f4d
mon_initial_members = ceph1,ceph2
mon_host = 192.168.61.39,192.168.61.40
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
public_network = 192.168.61.0/24
And then running the command:
$ ceph-deploy --overwrite-conf mon create <ceph-node>
I had a similar issue.
My problem was that the hostname alias in /etc/hosts was different between my deployment server and my target server.
Always make sure the hostname on the server is the same as in ceph.conf, and that the correct IP/hostname pairs are the same in /etc/hosts on your deployment box.
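For example, with the ceph.conf from the earlier answer, /etc/hosts should carry the same name-to-address pairs on the deployment box and every node (addresses taken from that example):

# /etc/hosts -- keep identical on the deployment box and each ceph node
192.168.61.39  ceph1
192.168.61.40  ceph2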