How to configure multiple hosts and database cluster failover - PostgreSQL

In Java we had the option of a multiple-host configuration with a failover mechanism:
jdbc:postgresql://node1:port1,node2:port2,node3:port3/accounting?targetServerType=primary
Do we have such support in Go? What should the connection string look like? I've seen that lib/pq is no longer maintained.
I didn't find any information about whether jackc/pgx supports multiple hosts, or what the connection string should look like. If someone could provide an example, that would be appreciated.
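For what it's worth, pgx (via its pgconn layer) accepts a libpq-style multi-host connection string: comma-separated host:port pairs plus a target_session_attrs parameter, which plays roughly the role of JDBC's targetServerType (target_session_attrs=read-write selects a host that accepts writes, i.e. the primary). A minimal sketch using only the standard library to build such a DSN - the user, database, and node names are placeholders:

```go
package main

import (
	"fmt"
	"strings"
)

// buildMultiHostDSN assembles a libpq/pgx-style multi-host DSN.
// pgx tries the listed hosts in order and, with
// target_session_attrs=read-write, settles on the first one that
// accepts writes - similar to JDBC's targetServerType=primary.
func buildMultiHostDSN(user, db string, nodes []string) string {
	return fmt.Sprintf("postgres://%s@%s/%s?target_session_attrs=read-write",
		user, strings.Join(nodes, ","), db)
}

func main() {
	dsn := buildMultiHostDSN("app", "accounting",
		[]string{"node1:5432", "node2:5433", "node3:5434"})
	fmt.Println(dsn)
	// → postgres://app@node1:5432,node2:5433,node3:5434/accounting?target_session_attrs=read-write
}
```

The resulting string can then be handed to pgx as-is, e.g. pool, err := pgxpool.New(context.Background(), dsn) with pgx v5.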

Related

Quarkus - connect to multiple hosts with reactive driver

I need to connect to multiple Postgres hosts with hibernate-reactive.
As an example, with the classic JDBC driver, we can define this property to connect to our HA Postgres instance:
quarkus.datasource.jdbc.url=jdbc:postgresql://my.host-1.com,my.host-2.com,my.host-3.com:5432/myDB?targetServerType=master&ssl=true&sslmode=verify-ca&sslcert=my-cert&sslkey=my-key&sslpassword=&sslrootcert=my-cert.crt
But here I saw that the Vert.x PgClient does not support multi-host connections directly in the connection URI.
I created an issue in vertx-sql-client here, and a developer told me it should already be possible by using PgConnectOptions and a PgPool.
I did not see anything related to this in the Quarkus hibernate-reactive documentation.
Can anyone help me with this? It seems we have to manage connections through the URI.

Will a Kafka JDBC source connector resolve /etc/hosts?

When providing the connection.url for a source connector, I pass an IP address as part of the JDBC connection string. However, this IP is not static, as the source system will be migrated soon.
Two questions arise:
Does the connector support DNS names, so we can move away from IPs in the connection string?
Does the JDBC connector resolve the IP if we put it in /etc/hosts mapped to another hostname and change the host in the connection string accordingly? This suggestion came up during some discussions, but I have some doubts about that solution.
Java passes DNS resolution to the OS, so the framework doesn't control it; both regular DNS names and /etc/hosts entries will work.
The /etc/hosts file is also static, however, so you might want to look at other service-discovery options such as HashiCorp Consul.

What does "Authentication partially enabled" mean on MongoDB?

I ran a scan on Shodan for my server IP and noticed it listed my MongoDB with "Authentication partially enabled".
Now, I can't find what it actually means. I am sure I set up authentication the right way, but the word "partially" concerns me.
It means you have a MongoDB database with authentication enabled.
I guess Shodan uses this fancy wording to highlight that the database is still listening on the externally facing interface, i.e. you can connect to the database with the command
mongo <your IP>
from anywhere.
There are commands that don't require authentication, e.g.
db.isMaster()
db.runCommand({buildInfo: 1})
db.auth()
....
It leaves room for exploitation of vulnerabilities, brute-force attacks, etc.
The server responds to the connection request, which exposes the fact that you are running Mongo. The version of your server, SSL libraries, compilation options, and other information advertised by the server can be used to search for known or 0-day vulnerabilities.
You can see what info is exposed on Shodan: https://www.shodan.io/search?query=mongodb+server+information. Compare it with the amount of information available for hosts without "Authentication partially enabled".
The most popular way to harden a MongoDB setup is to make it accessible from a local network/VPC/VPN only. If the nature of your business requires bare Mongo to be accessible from the internet, hide it behind a firewall that allows connections only from known IPs. In both cases you will be completely invisible to Shodan and similar services.

How do I connect my server to Atlas?

Recently I decided to move my database from my server machine to the MongoDB Atlas service.
Atlas provides an IP whitelist feature, which I use to connect remotely to the database cluster.
Should I connect my server application to Atlas using this feature?
What happens if my server IP changes? Is it secure?
For general information on how to connect to an Atlas deployment, please see Connect to a Cluster.
For connecting using a driver, please see Connect via Driver. There is an extensive list of examples using all of the officially supported drivers.
As mentioned in the Prerequisites section, you need to use SSL/TLS and the IP whitelist to connect to your Atlas instance. The whitelist would need to be updated should your application server's IP change.
The whitelist provides an additional security layer on top of your username/password, since it essentially rejects any connection not originating from a known IP address. It is strongly recommended to use the whitelist; the effort required to maintain it is small compared to the security advantages it provides.

Database Link in DB2

How can I interconnect two different DB2 databases hosted on two different IPs?
I mean, is there anything in DB2 equivalent to Oracle's DBLink?
I am working in a DB2 test environment and want to copy a few rows for testing from the production DB2 environment. Is there an easy way to do that?
There is something like that in DB2, called "three-part names". I wrote a small overview article for my blog that has an example and all the relevant links to the documentation.
The steps involve creating a DRDA wrapper (DRDA is the communication protocol for DB2), then providing the connection details for the remote database server. After that you can query the remote tables without any additional setup and address them by server/schema/table - hence the "three-part name". Note that you might need to use CATALOG TCPIP NODE first to make the remote server known by its IP address, something like: catalog tcpip node yourserver remote 192.0.32.67 server db2inst1
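As a rough sketch, the steps above could look like the following (server name, node name, credentials, and schema/table names are placeholders, and the exact CREATE SERVER options depend on your DB2 version - check the federation documentation for your release):

```sql
-- CLP: make the remote server known by its IP (reusing the example above)
CATALOG TCPIP NODE yourserver REMOTE 192.0.32.67 SERVER db2inst1;

-- Create the wrapper for the DRDA protocol
CREATE WRAPPER DRDA;

-- Describe how to reach the remote database
CREATE SERVER prodserver TYPE DB2/UDB VERSION '11.5' WRAPPER DRDA
  AUTHORIZATION "remoteuser" PASSWORD "secret"
  OPTIONS (DBNAME 'PRODDB');

-- Map the local user to the remote credentials
CREATE USER MAPPING FOR myuser SERVER prodserver
  OPTIONS (REMOTE_AUTHID 'remoteuser', REMOTE_PASSWORD 'secret');

-- Copy a few rows from production into the test table
-- using the three-part name server/schema/table
INSERT INTO test_schema.customers
  SELECT * FROM prodserver.prod_schema.customers
  FETCH FIRST 100 ROWS ONLY;
```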