I am trying to connect to a database over SSL (specifically TLSv1.3).
When tested with psql, it connects over TLSv1.3.
With the JDBC driver configured as below, the connection is not secured by SSL at all for some reason
(confirmed by executing select * from pg_stat_ssl;).
val props = new Properties()
props.setProperty("user", "postgres")
props.setProperty("password", "example")
props.setProperty("ssl", "true")
props.setProperty("sslmode", "allow")
props.setProperty("sslfactory", "org.postgresql.ssl.NonValidatingFactory")
DriverManager.getConnection("jdbc:postgresql://localhost/", props)
What would be the cause of this problem?
PostgreSQL JDBC driver versions: 42.2.5/42.2.22
If you set sslmode to allow, the client prefers non-encrypted connections: it first attempts a plain connection and only falls back to SSL if the server requires it. Change the setting to require or prefer.
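For example, a minimal sketch of the corrected properties (same hypothetical credentials as in the question; note that NonValidatingFactory still skips certificate validation, so it is only suitable for testing):

import java.sql.DriverManager
import java.util.Properties

val props = new Properties()
props.setProperty("user", "postgres")
props.setProperty("password", "example")
props.setProperty("ssl", "true")
// "require" refuses to proceed without encryption; "verify-ca" or "verify-full"
// would additionally validate the server certificate.
props.setProperty("sslmode", "require")
props.setProperty("sslfactory", "org.postgresql.ssl.NonValidatingFactory")
val conn = DriverManager.getConnection("jdbc:postgresql://localhost/", props)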
I am trying to query an Oracle database from within an Azure Synapse notebook, preferably using pyodbc, though a PySpark solution would also be acceptable. The complexity here comes, I believe, from the low configurability of the Spark pool; the code itself, I believe, is generally correct.
host = 'my_endpoint.com:[port here as plain numbers, e.g. 1111]/orcl'
database = 'my_db_name'
username = 'my_username'
password = 'my_password'
conn = pyodbc.connect('DRIVER={ODBC Driver 17 for SQL Server};'
                      'SERVER=' + host + ';'
                      'DATABASE=' + database + ';'
                      'UID=' + username + ';'
                      'PWD=' + password + ';')
Approaches I have tried:
pyodbc - I can use the default driver available ({ODBC Driver 17 for SQL Server}), and I get login timeouts. I have tried both the normal URL of the server and the IP, with every combination of port: no port, comma port, colon port, and with and without the service name "orcl" appended. The code sample is above, but I believe the issue lies with the drivers.
PySpark .read - With no JDBC driver specified, I get a "No suitable driver" error. I am able to add the OJDBC .jar to the workspace or to my file directory, but I was not able to figure out how to tell Spark that it should be used.
cx_Oracle - This is not permitted in my workspace.
If the solution requires setting environment variables or using spark-submit, please provide a link that explains how best to do that in Synapse. I would be happy with either a JDBC or an ODBC solution.
By adding the .jar (ojdbc8-19.15.0.0.1.jar) to the Synapse workspace packages and then adding that package to the Apache Spark pool packages, I was able to execute the following code:
host = 'my_host_url'
port = 1521
service_name = 'my_service_name'
jdbcUrl = f'jdbc:oracle:thin:@{host}:{port}:{service_name}'  # host:port:name is the SID form; use @//host:port/name for a service name
sql = 'SELECT * FROM my_table'
user = 'my_username'
password = 'my_password'
jdbcDriver = 'oracle.jdbc.driver.OracleDriver'
jdbcDF = spark.read.format('jdbc') \
    .option('url', jdbcUrl) \
    .option('query', sql) \
    .option('user', user) \
    .option('password', password) \
    .option('driver', jdbcDriver) \
    .load()
display(jdbcDF)
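For anyone working in a Scala notebook instead, a minimal equivalent sketch (same hypothetical host, service name, and credentials as above; the jar setup is unchanged):

val host = "my_host_url"            // hypothetical values, as above
val port = 1521
val serviceName = "my_service_name"
val jdbcUrl = s"jdbc:oracle:thin:@$host:$port:$serviceName"

val jdbcDF = spark.read.format("jdbc")
  .option("url", jdbcUrl)
  .option("query", "SELECT * FROM my_table")
  .option("user", "my_username")
  .option("password", "my_password")
  .option("driver", "oracle.jdbc.driver.OracleDriver")
  .load()

display(jdbcDF)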
In Akka Projection's documentation on offsets in a relational database with JDBC, there is no information about how and where to include the configuration for establishing a connection to the database, such as the username, password, or URL.
In the documentation on offsets in a relational database with Slick, the following configuration is provided for the database connection, but it is unclear whether it can be used for JDBC as well:
# add here your Slick db settings
db {
  # url = "jdbc:h2:mem:test1"
  # driver = org.h2.Driver
  # connectionPool = disabled
  # keepAliveConnection = true
}
How and where should I specify the JDBC connection parameters?
https://doc.akka.io/docs/akka-projection/current/jdbc.html#defining-a-jdbcsession
The following line in the Scala code snippet at that link is where you can specify the connection parameters:
val c = DriverManager.getConnection("jdbc:h2:mem:test;DB_CLOSE_DELAY=-1")
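For context, the JdbcSession defined in the documented example looks roughly like the sketch below (H2 in-memory database assumed, as in the docs; swap in your own driver class, URL, and, if needed, the user/password overload of DriverManager.getConnection):

import java.sql.{Connection, DriverManager}
import akka.japi.function
import akka.projection.jdbc.JdbcSession

class PlainJdbcSession extends JdbcSession {
  // One connection per session; Akka Projection manages the session lifecycle.
  lazy val conn: Connection = {
    Class.forName("org.h2.Driver")
    // For username/password: DriverManager.getConnection(url, user, password)
    val c = DriverManager.getConnection("jdbc:h2:mem:test;DB_CLOSE_DELAY=-1")
    c.setAutoCommit(false)
    c
  }
  override def withConnection[Result](func: function.Function[Connection, Result]): Result =
    func(conn)
  override def commit(): Unit = conn.commit()
  override def rollback(): Unit = conn.rollback()
  override def close(): Unit = conn.close()
}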
I am trying to connect to an Azure SQL managed instance from Databricks, using Scala. I copied the code from the Microsoft website.
My actual Scala code (I have changed the credentials and IP here, but I have made sure they are correct, as I copied them from the connection strings in the SQL managed instance options):
Class.forName("com.microsoft.sqlserver.jdbc.SQLServerDriver")
val jdbcHostname = "dev-migdb.nf53e3653n43.database.windows.net"
val jdbcPort = 1433
val jdbcDatabase = "MYDB"
// Create the JDBC URL without passing in the user and password parameters.
val jdbcUrl = s"jdbc:sqlserver://${jdbcHostname}:${jdbcPort};database=${jdbcDatabase};loginTimeout=90"
// Create a Properties() object to hold the parameters.
import java.util.Properties
val connectionProperties = new Properties()
connectionProperties.put("user", "db-devmigmgd")
connectionProperties.put("password", "pwd##321232123")
val driverClass = "com.microsoft.sqlserver.jdbc.SQLServerDriver"
connectionProperties.setProperty("Driver", driverClass)
val employees_table = spark.read.jdbc(jdbcUrl, "employees", connectionProperties)
Error:
com.microsoft.sqlserver.jdbc.SQLServerException: The TCP/IP connection to the host dev-migdb.nf53e3653n43.database.windows.net, port 1433 has failed. Error: "Connection timed out: no further information. Verify the connection properties. Make sure that an instance of SQL Server is running on the host and accepting TCP/IP connections at the port. Make sure that TCP connections to the port are not blocked by a firewall.".
at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDriverError(SQLServerException.java:227)
at com.microsoft.sqlserver.jdbc.SQLServerException.ConvertConnectExceptionToSQLServerException(SQLServerException.java:284)
at com.microsoft.sqlserver.jdbc.SocketFinder.findSocket(IOBuffer.java:2435)
at com.microsoft.sqlserver.jdbc.TDSChannel.open(IOBuffer.java:635)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.connectHelper(SQLServerConnection.java:2010)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.login(SQLServerConnection.java:1687)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.connectInternal(SQLServerConnection.java:1528)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.connect(SQLServerConnection.java:866)
at com.microsoft.sqlserver.jdbc.SQLServerDriver.connect(SQLServerDriver.java:569)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$createConnectionFactory$1.apply(JdbcUtils.scala:63)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$createConnectionFactory$1.apply(JdbcUtils.scala:54)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:56)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation$.getSchema(JDBCRelation.scala:210)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:35)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:336)
at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:286)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:274)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:198)
at org.apache.spark.sql.DataFrameReader.jdbc(DataFrameReader.scala:301)
at linee84eb162c20345fc84ad591cfefe930f29.$read$$iw$$iw$$iw$$iw$$iw$$iw.<init>(command-999597493877319:49)
at linee84eb162c20345fc84ad591cfefe930f29.$read$$iw$$iw$$iw$$iw$$iw.<init>(command-999597493877319:104)
On the other hand:
I am able to connect to the same managed instance from a VM on the same Azure subscription (using SSMS).
My custom application, written in .NET and hosted on that VM, is also able to connect to the same instance.
Also, I am unable to connect to the same instance from Scala code executed via spark-shell on the above VM, but the errors I get there are different. Please find them below.
com.microsoft.sqlserver.jdbc.SQLServerException: Login failed for user 'db-devmigmgd#dev-migdb'.
at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:197)
at com.microsoft.sqlserver.jdbc.TDSTokenHandler.onEOF(tdsparser.java:246)
at com.microsoft.sqlserver.jdbc.TDSParser.parse(tdsparser.java:83)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.sendLogon(SQLServerConnection.java:2529)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.logon(SQLServerConnection.java:1905)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.access$000(SQLServerConnection.java:41)
at com.microsoft.sqlserver.jdbc.SQLServerConnection$LogonCommand.doExecute(SQLServerConnection.java:1893)
at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:4874)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:1400)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.connectHelper(SQLServerConnection.java:1045)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.login(SQLServerConnection.java:817)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.connect(SQLServerConnection.java:700)
at com.microsoft.sqlserver.jdbc.SQLServerDriver.connect(SQLServerDriver.java:842)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$createConnectionFactory$1.apply(JdbcUtils.scala:63)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$createConnectionFactory$1.apply(JdbcUtils.scala:54)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:56)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation$.getSchema(JDBCRelation.scala:210)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:35)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:318)
at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:223)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:167)
at org.apache.spark.sql.DataFrameReader.jdbc(DataFrameReader.scala:238)
... 76 elided
Thanks @simon_dmorias,
As per your comments and the discussions with the Microsoft team, resources being on different VNets was the real problem.
So, as a best practice, always create resources in the same VNet.
When writing a Spark DataFrame to an Oracle database (Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit), the Spark job fails with the exception java.sql.SQLRecoverableException: IO Error: The Network Adapter could not establish the connection. The Scala code is:
dataFrame.write.mode(SaveMode.Append).jdbc("jdbc:oracle:thin:@" + ipPort + ":" + sid, table, props)
I have already tried setting the properties below on the JDBC connection, but it hasn't worked:
props.put("driver", "oracle.jdbc.OracleDriver")
props.setProperty("testOnBorrow","true")
props.setProperty("testOnReturn","false")
props.setProperty("testWhileIdle","false")
props.setProperty("validationQuery","SELECT 1 FROM DUAL")
props.setProperty("autoReconnect", "true")
Based on earlier search results, it seems the connection is opened initially but is killed by a firewall after some idle time. The connection URL is verified and working, since select queries run fine. I need help getting this resolved.
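For reference, a minimal sketch of the write with the URL assembled explicitly (hypothetical host, port, SID, table, and credentials; dataFrame is the DataFrame from the question). Note that properties such as testOnBorrow are, as far as I can tell, connection-pool settings rather than Spark JDBC options, so they likely have no effect here:

import java.util.Properties
import org.apache.spark.sql.SaveMode

// Hypothetical connection details, standing in for the question's ipPort and sid.
val host = "db.example.com"
val port = 1521
val sid = "ORCL"
val table = "MY_TABLE"

val props = new Properties()
props.put("driver", "oracle.jdbc.OracleDriver")
props.put("user", "my_user")          // hypothetical credentials
props.put("password", "my_password")

// SID form:          jdbc:oracle:thin:@host:port:SID
// Service-name form: jdbc:oracle:thin:@//host:port/service_name
val url = s"jdbc:oracle:thin:@$host:$port:$sid"

dataFrame.write.mode(SaveMode.Append).jdbc(url, table, props)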
I am trying to connect to MongoDB using MongoClientURI(URL). My URL is mongodb://userName:Password#host:PortNumber/DBName?connectTimeoutMS=10000
When my MongoDB is down and I make a POST request, it takes the default 30 seconds to fail.
Can anyone help me solve this problem?
Thanks in advance.
You can set timeouts by using the Mongo Java client's MongoClientOptions. For example:
MongoClientOptions clientOptions = MongoClientOptions.builder()
    .connectTimeout(...)
    .socketTimeout(...)
    .serverSelectionTimeout(...)
    .build();
MongoClient mongoClient = new MongoClient(new ServerAddress(host, port), clientOptions);
Examining mongoClient.getMongoClientOptions() after the above line of code clearly shows that the created client is faithful to the supplied config values. By contrast, if you do not set these values via MongoClientOptions then mongoClient.getMongoClientOptions() shows that the default values have been chosen.
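For example, in Scala syntax and assuming the mongoClient created above, the following prints the effective timeouts actually applied to the client:

// Each getter reflects the value supplied via MongoClientOptions, or the default otherwise.
println(mongoClient.getMongoClientOptions.getConnectTimeout)
println(mongoClient.getMongoClientOptions.getSocketTimeout)
println(mongoClient.getMongoClientOptions.getServerSelectionTimeout)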
Based on your updated comments, I think the situation you are trying to cater for is this:
Creating a connection against a server instance which does not exist or is unavailable should fail sooner than the default of 30s.
If so, then the configuration parameter you want to use is serverSelectionTimeout. The following invocation ...
MongoClientOptions clientOptions = MongoClientOptions.builder()
    .serverSelectionTimeout(2000)
    .build();
MongoClient mongoClient = new MongoClient(new ServerAddress(host, port), clientOptions);
... will cause this exception to be thrown:
com.mongodb.MongoTimeoutException: Timed out after 2000 ms while waiting to connect.
Note: serverSelectionTimeout is available in the version of the MongoDB Java driver which you are using (3.2.2 according to the comment you posted on your question).
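Since the question builds its client from a connection string, the same limit can likely be applied there too; a sketch (hypothetical URI, shown in Scala against the same 3.x Java driver API):

import com.mongodb.{MongoClient, MongoClientURI}

// Hypothetical host and credentials. serverSelectionTimeoutMS caps how long the driver
// waits to find a usable server before failing; connectTimeoutMS caps the TCP connect itself.
val uri = new MongoClientURI(
  "mongodb://userName:Password@host:27017/DBName?connectTimeoutMS=10000&serverSelectionTimeoutMS=2000")
val mongoClient = new MongoClient(uri)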