Quarkus - connect to multiple hosts with reactive driver - postgresql

I need to connect to multiple Postgres hosts with hibernate-reactive.
As an example, with the classic JDBC driver, we can define this property to connect to our HA Postgres instance:
quarkus.datasource.jdbc.url=jdbc:postgresql://my.host-1.com,my.host-2.com,my.host-3.com:5432/myDB?targetServerType=master&ssl=true&sslmode=verify-ca&sslcert=my-cert&sslkey=my-key&sslpassword=&sslrootcert=my-cert.crt
But here I saw that the Vert.x PgClient does not support multi-host connections directly in the connection URI.
I created an issue in vertx-sql-client here, and a developer told me that it should already be possible by using PgConnectOptions and a PgPool.
I did not see anything related in the Quarkus hibernate-reactive documentation.
Can anyone help me with this? It seems we can only configure the connection through a single URI.
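For reference, here is a minimal sketch of what that suggestion could look like in plain Vert.x, assuming the multi-server PgPool.pool(vertx, List<PgConnectOptions>, PoolOptions) overload that newer vertx-sql-client versions provide, with hypothetical hosts and credentials; as far as I understand, this distributes connections across the listed servers rather than reproducing the targetServerType=master failover semantics of the JDBC URL:

import io.vertx.core.Vertx;
import io.vertx.pgclient.PgConnectOptions;
import io.vertx.pgclient.PgPool;
import io.vertx.sqlclient.PoolOptions;
import java.util.Arrays;
import java.util.List;

public class MultiHostPool {

    public static PgPool create(Vertx vertx) {
        // One PgConnectOptions per HA member; the pool picks servers from this list.
        List<PgConnectOptions> servers = Arrays.asList(
                baseOptions().setHost("my.host-1.com").setPort(5432),
                baseOptions().setHost("my.host-2.com").setPort(5432),
                baseOptions().setHost("my.host-3.com").setPort(5432));
        return PgPool.pool(vertx, servers, new PoolOptions().setMaxSize(5));
    }

    private static PgConnectOptions baseOptions() {
        // Hypothetical database and credentials; SSL certificate options would go here too.
        return new PgConnectOptions()
                .setDatabase("myDB")
                .setUser("myUser")
                .setPassword("myPassword")
                .setSsl(true);
    }
}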

Related

How to get a multiple-hosts configuration with database cluster failover

In Java we had an option to use a multiple-hosts configuration with a failover mechanism:
jdbc:postgresql://node1:port1,node2:port2,node3:port3/accounting?targetServerType=primary
Do we have such support in Go? What should the connection string look like? I've seen that lib/pq is not maintained, and I didn't find any information about whether jackc/pgx supports multiple hosts or what the connection string should look like. Please provide an example if you can.
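For what it's worth, jackc/pgx parses libpq-style connection strings through its pgconn package, which, as far as I know, accepts multiple hosts and target_session_attrs much like the JDBC example above; an untested sketch with hypothetical node names:
postgres://node1:port1,node2:port2,node3:port3/accounting?target_session_attrs=read-write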

Connect Cloud Run to Cloud SQL Server Instance in C#

If I understand correctly, the "Cloud SQL Connections" tab in Cloud Run should instantiate the Cloud SQL Proxy.
What is the SQL Server connection string that I should use to make this work?
Setup (all in the same GCP project):
1. Create a Cloud SQL instance of SQL Server.
2. Upload your Docker image to Google Container Registry, written in .NET Core with code to connect to the SQL Server created in step 1.
3. Create a service instance in Google Cloud Run.
4. Specify Cloud SQL Connections, select your SQL Server instance from the list, and deploy.
I've not tried this using Cloud Run and SQL Server but ...
The proxy should make a connection available to your .NET client on 127.0.0.1:1433 (link).
Assuming you're using a database client similar to the Google example, your connection string will be:
"ConnectionString": "User Id=[[USER]];Password=[[PASS]];Server=127.0.0.1;Database=[[DB]];"
If I understand correctly, the default port is 1433.
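If you ever need to pin the port explicitly, ADO.NET connection strings take it after a comma in the Server field; a variant of the string above with the same placeholder values:
"ConnectionString": "User Id=[[USER]];Password=[[PASS]];Server=127.0.0.1,1433;Database=[[DB]];"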
NB Per other commenters, your question would be improved with more details. When you write that you're completing steps, please include the links. When you reference your configuration, please include snippets. Folks answering your question benefit from having to assume as little information as possible.
I do not think it is supported yet; there is no documentation for connecting to Cloud SQL for SQL Server.
According to the official documentation:
Once correctly configured, you can connect your service to your Cloud SQL instance's unix domain socket using the format: /cloudsql/INSTANCE_CONNECTION_NAME.
Note: Cloud Run (fully managed) does not support connecting to the Cloud SQL instance using TCP. Your code should not try to access the instance using an IP address such as 127.0.0.1 or 172.17.0.1.
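To illustrate the quoted format with hypothetical names: the instance connection name has the form PROJECT_ID:REGION:INSTANCE_ID, so the socket path would look like:
/cloudsql/my-project:us-central1:my-sqlserver-instance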
Also:
Note: The Cloud SQL Proxy does not support Unix sockets on Windows.
I tried to do it using the Cloud SQL Proxy with TCP and got:
System.Net.Internals.SocketExceptionFactory+ExtendedSocketException (111): Connection refused 127.0.0.1:1433

Does the fact that I'm running a VM alter the whitelisting status of my regular IP address?

Our DevOps team has whitelisted my home IP address so that I can connect to our Postgres database on Azure, and I am able to connect to it because of this.
Today I set up a VM in order to run Docker. I am running a container for RStudio, which is an app that, among many other things, allows me to connect to our database using ODBC.
After configuring the odbcinst.ini and odbc.ini files, I believe they are set up correctly because when I try to connect I get the following error:
Error: nanodbc/nanodbc.cpp:983: 00000: FATAL: SSL connection is required. Please specify SSL options and retry.
This error suggests my connection settings are fine; it's just that Azure will not allow the connection without SSL.
Searching that error message took me to this SO post with the following accepted answer:
By default, Azure Database for PostgreSQL enforces SSL connections between your server and your client applications to protect against MITM (man in the middle) attacks. This is done to make the connection to your server as secure as possible.
Although not recommended, you have the option to disable requiring SSL for connecting to your server if your client application does not support SSL connectivity. Please check How to Configure SSL Connectivity for your Postgres server in Azure for more details. You can disable requiring SSL connections from either the portal or using CLI. Note that Azure does not recommend disabling requiring SSL connections when connecting to your server.
My question is: if I am already able to connect to our database outside of my VM, thanks to my home IP being whitelisted and just using a Postgres driver with the DBeaver SQL client, is there anything I can do to connect from within my VM?
I can get my VM's IP address, but I am not sure whether sending that to our developers to whitelist would work.
Is there a prescribed course of action here?
I added this parameter to my .odbc.ini file and was able to connect:
sslmode=require
From the Azure Postgres documentation, this parameter may take on different permutations depending on the context:
"for example "ssl=true" or "sslmode=require" or "sslmode=required" and other variations"

Google Cloud Data Fusion: 1. does not connect to Oracle; 2. when the pipeline is running I get a 'default' network port error

I installed the Oracle JDBC thin driver to connect to an on-prem Oracle DB, but when I test the connection I get a network adapter error.
I tried changing the host but still get the same error.
When running the pipeline from GCS to BigQuery I get a network port error. Can we change the VPC the pipeline is running on?
Regarding the Oracle DB connection error: is the DB available on the public network for connection? Currently the wrangler service in Cloud Data Fusion cannot talk to an on-prem DB over a private connection, and we are actively working towards it.
However, if the DB is available on the public network, then it seems like an issue with the Oracle DB configuration. Can you please take a look at this answer and see if it helps: Oracle SQL Developer: Failure - Test failed: The Network Adapter could not establish the connection?
Also, are you able to connect to the Oracle DB through some other query tool, such as SqlWorkbench?
Breaking down your question:
1. Connecting to on-prem databases
It is possible nowadays to connect to on-premise databases. Make sure you have created an interconnect between the on-prem network and the network used by the Data Fusion instance, and that you have applied the right firewall rules (the logs suggest you are hitting firewall issues; see the sketch just below). I suggest trying to connect directly to the database first to confirm that the network setup works.
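For the firewall part, if the database were reachable from a GCP VPC, a rule opening the Oracle listener port could be created along these lines; the rule name, network, source range, and port 1521 are all assumptions to adapt, and in an interconnect setup the equivalent rule may need to live on the on-prem side instead:
gcloud compute firewall-rules create allow-oracle-1521 \
    --network=my-vpc \
    --allow=tcp:1521 \
    --source-ranges=10.0.0.0/8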
2. Change network configurations on the Data Fusion job.
You can specify parameters for your job. There are options to change the network and subnetwork that the job will be executed on under Configure > Compute config > Customize. If you use a shared VPC you can also specify the host project.

How do I connect my server to Atlas?

Recently I decided to move my database from my server machine to the MongoDB Atlas service.
Atlas provides an IP whitelist feature, which I use to connect remotely to the database cluster.
Should I plug my server application into Atlas using this feature?
What happens if my server IP changes? Is it secure?
For general information on how to connect to an Atlas deployment, please see Connect to a Cluster.
For connecting using a driver, please see Connect via Driver. There is an extensive list of examples using all of the officially-supported drivers.
As mentioned in the Prerequisites section, you need to use SSL/TLS and the IP whitelist to connect to your Atlas instance. This whitelist would need to be updated should your application server's IP change.
The whitelist provides an additional security layer on top of your username/password, since it will essentially reject any connection not originating from a known IP address. It is strongly recommended to utilize this whitelist, and arguably the effort required to maintain it is small compared to the security advantages it provides.
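As a concrete illustration of the driver route, here is a minimal sketch using the official MongoDB Java driver; the SRV URI, user, and cluster name are hypothetical placeholders (TLS is implied by the mongodb+srv scheme):

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;

public class AtlasConnect {
    public static void main(String[] args) {
        // SRV connection string copied from the Atlas UI (placeholder values).
        String uri = "mongodb+srv://myUser:myPassword@mycluster.mongodb.net/test?retryWrites=true&w=majority";
        // MongoClient is Closeable, so try-with-resources shuts the pool down cleanly.
        try (MongoClient client = MongoClients.create(uri)) {
            // A simple round trip; this is rejected if the calling IP is not whitelisted.
            client.listDatabaseNames().forEach(System.out::println);
        }
    }
}

If the server's IP changes, a connection like this starts failing at the whitelist layer, which is exactly why keeping the whitelist up to date matters.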