I am trying to connect to Hive in a remote CDH cluster.
Dependency used:
<dependency>
<groupId>org.apache.hive</groupId>
<artifactId>hive-jdbc</artifactId>
<version>1.1.0</version>
<classifier>standalone</classifier>
</dependency>
Code:
val url: String = "jdbc:hive2://ip-11-11-5-228.eu-central-1.compute.internal:10000/test;" +
"principal=hive/my#test-TELEKOM.COM;"
val driver = "org.apache.hive.jdbc.HiveDriver"
val fullTableName = "test.student_data"
val keytab_path = "/etc/my.keytab"
val conf:org.apache.hadoop.conf.Configuration = new org.apache.hadoop.conf.Configuration()
System.setProperty("java.security.krb5.conf", "/etc/krb5.conf")
System.setProperty("java.security.krb5.realm", "my-test.COM")
System.setProperty("HADOOP_CONF_DIR", "/etc/hadoop/conf/")
System.setProperty("java.security.krb5.kdc", "ip-11-11-5-228.eu-central-1.compute.internal")
conf.set("hadoop.security.authentication", "kerberos")
conf.set("hadoop.security.authorization", "true")
UserGroupInformation.setConfiguration(conf)
UserGroupInformation.loginUserFromKeytab("hive/my#test-TELEKOM.COM",
keytab_path)
Class.forName("org.apache.hive.jdbc.HiveDriver")
DriverManager.getConnection(url)
Error on running:
javax.security.auth.login.LoginException: Unable to obtain password from user
I have placed the keytab file I received on the local machine, but I am still getting the error.
Can you validate that the user trying to use the keytab file can kinit with it? (Does the keytab file have the correct permissions?)
Log in as the user the Scala code will run as and perform:
kinit -kt /etc/my.keytab hive/my@test-TELEKOM.COM
It's likely that the user running the Scala code should own the keytab. This is a really common thing to forget.
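If the kinit above works, a minimal Scala sketch along these lines can confirm the same thing from the JVM. It reuses the principal, keytab path, and JDBC URL from the question; the readability check and the doAs wrapper are illustrative additions, not something the Hive driver requires:

import java.io.File
import java.security.PrivilegedExceptionAction
import java.sql.{Connection, DriverManager}
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.security.UserGroupInformation

object HiveKerberosCheck {
  def main(args: Array[String]): Unit = {
    val principal  = "hive/my@test-TELEKOM.COM"  // principal from the question
    val keytabPath = "/etc/my.keytab"            // keytab path from the question

    // Fail fast if the JVM user cannot read the keytab; an unreadable keytab is a
    // typical cause of "Unable to obtain password from user".
    val keytab = new File(keytabPath)
    require(keytab.exists() && keytab.canRead(), s"Keytab is not readable: $keytabPath")

    val conf = new Configuration()
    conf.set("hadoop.security.authentication", "kerberos")
    UserGroupInformation.setConfiguration(conf)

    // Log in from the keytab and open the JDBC connection as that Kerberos user.
    val ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(principal, keytabPath)
    Class.forName("org.apache.hive.jdbc.HiveDriver")
    val conn = ugi.doAs(new PrivilegedExceptionAction[Connection] {
      override def run(): Connection =
        DriverManager.getConnection(
          "jdbc:hive2://ip-11-11-5-228.eu-central-1.compute.internal:10000/test;" +
            s"principal=$principal")
    })
    println(s"Connection open: ${!conn.isClosed}")
    conn.close()
  }
}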
Related
I have recently tried my hand at Postgres, installed locally (PostgreSQL 13.0).
I created a Maven project and used Spring Data JPA, which works just fine. However, when I tried a Gradle project, I was not able to connect to the DB and keep getting the following error.
org.postgresql.util.PSQLException: The authentication type 10 is not supported. Check that you have configured the pg_hba.conf file to include the client's IP address or subnet, and that it is using an authentication scheme supported by the driver.
    at org.postgresql.core.v3.ConnectionFactoryImpl.doAuthentication(ConnectionFactoryImpl.java:614) ~[postgresql-42.1.4.jar:42.1.4]
    at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:222) ~[postgresql-42.1.4.jar:42.1.4]
    at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49) ~[postgresql-42.1.4.jar:42.1.4]
    at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:194) ~[postgresql-42.1.4.jar:42.1.4]
    at org.postgresql.Driver.makeConnection(Driver.java:450) ~[postgresql-42.1.4.jar:42.1.4]
    at org.postgresql.Driver.connect(Driver.java:252) ~[postgresql-42.1.4.jar:42.1.4]
    at java.sql.DriverManager.getConnection(Unknown Source) [na:1.8.0_261]
    at java.sql.DriverManager.getConnection(Unknown Source) [na:1.8.0_261]
    at org.postgresql.ds.common.BaseDataSource.getConnection(BaseDataSource.java:94) [postgresql-42.1.4.jar:42.1.4]
    at org.postgresql.ds.common.BaseDataSource.getConnection(BaseDataSource.java:79) [postgresql-42.1.4.jar:42.1.4]
I tried using JdbcTemplate as well - doesn't work.
Modified the pg_hba.conf file referring to this post - doesn't work.
Used the deprecated lib - doesn't work either.
Please suggest a solution for this problem.
My code and Config:
@Configuration
public class DataSourceConfig {

    @Bean
    public DriverManagerDataSource getDataSource() {
        DriverManagerDataSource dataSourceBuilder = new DriverManagerDataSource();
        dataSourceBuilder.setDriverClassName("org.postgresql.Driver");
        dataSourceBuilder.setUrl("jdbc:postgresql://localhost:5432/postgres");
        dataSourceBuilder.setUsername("postgres");
        dataSourceBuilder.setPassword("root");
        return dataSourceBuilder;
    }
}
@Component
public class CustomerOrderJDBCTemplate implements CustomerOrderDao {

    private DataSource dataSource;
    private JdbcTemplate jdbcTemplateObject;

    @Autowired
    ApplicationContext context;

    public void setDataSource() {
        // Getting bean by class
        DriverManagerDataSource dataSource = context.getBean(DriverManagerDataSource.class);
        this.dataSource = dataSource;
        this.jdbcTemplateObject = new JdbcTemplate(this.dataSource);
    }

    @Override
    public Customer create(Customer customer) {
        setDataSource();
        String sql = "insert into CustomerOrder (customerType, customerPayment) values (?, ?)";
        //jdbcTemplateObject.update(sql, customerOrder.getCustomerOrderType(), customerOrder.getCustomerOrderPayment());
        KeyHolder holder = new GeneratedKeyHolder();
        jdbcTemplateObject.update(new PreparedStatementCreator() {
            @Override
            public PreparedStatement createPreparedStatement(Connection connection) throws SQLException {
                PreparedStatement ps = connection.prepareStatement(sql, Statement.RETURN_GENERATED_KEYS);
                ps.setString(1, customer.getType());
                ps.setString(2, customer.getPayment());
                return ps;
            }
        }, holder);
        long customerId = holder.getKey().longValue();
        customer.setCustomerID(customerId);
        return customer;
    }
}
dependencies {
    implementation('org.springframework.boot:spring-boot-starter-web')
    compile("org.springframework.boot:spring-boot-devtools")
    compile(group: 'org.postgresql', name: 'postgresql', version: '42.1.4')
    compile("org.springdoc:springdoc-openapi-ui:1.4.1")
    compile("org.springframework:spring-jdbc:5.2.5.RELEASE")
}
password_encryption is set like this:
postgres=# show password_encryption;
password_encryption
---------------------
scram-sha-256
(1 row)
I solved a similar issue by applying the steps below in PostgreSQL version 13:
Change password_encryption to md5 in postgresql.conf
Windows: C:\Program Files\PostgreSQL\13\data\postgresql.conf
GNU/Linux: /etc/postgresql/13/main/postgresql.conf
Change scram-sha-256 to md5 in pg_hba.conf
Windows: C:\Program Files\PostgreSQL\13\data\pg_hba.conf
GNU/Linux: /etc/postgresql/13/main/pg_hba.conf
host all all 0.0.0.0/0 md5
Change the password (this re-stores the password in md5 format).
Example: ALTER ROLE postgres WITH PASSWORD 'root';
Make sure you set listen_addresses = '*' in postgresql.conf if you are working in a non-production environment.
Get your pg_hba.conf file in the directory
C:\Program Files\PostgreSQL\13\data\pg_hba.conf
and simply change scram-sha-256 under the METHOD column to trust.
It worked for me!
According to the wiki, the supported JDBC driver for SCRAM-SHA-256 encryption is 42.2.0 or above.
In my case, the driver was 41.1.1. Change it to 42.2.0 or above. That fixed it for me.
(Maven, pom.xml):
<dependency>
<groupId>org.postgresql</groupId>
<artifactId>postgresql</artifactId>
<version>42.2.0</version>
</dependency>
By setting password_encryption to scram-sha-256 (which is the default value in v13) you also get scram-sha-256 authentication, even if you have md5 in pg_hba.conf.
Now you are using an old JDBC driver version on the client side that does not support that authentication method, even though PostgreSQL introduced it in v10, three years ago.
You should upgrade your JDBC driver. An alternative would be to set password_encryption back to md5, but then you'll have to reset all passwords and live with lower security.
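For the Gradle build shown in the question, that means bumping the org.postgresql:postgresql coordinate to a 42.2.x release; a minimal sketch (42.2.18 is just one example of a SCRAM-capable version):

dependencies {
    // Any 42.2.0+ driver understands SCRAM-SHA-256 authentication; 42.2.18 is an example.
    implementation(group: 'org.postgresql', name: 'postgresql', version: '42.2.18')
}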
<?xml version="1.0" encoding="UTF-8"?>
4.0.0
<groupId>org.example</groupId>
<artifactId>postgresJDBC</artifactId>
<version>1.0-SNAPSHOT</version>
<properties>
<java.version>11</java.version>
<maven.compiler.target>${java.version}</maven.compiler.target>
<maven.compiler.source>${java.version}</maven.compiler.source>
</properties>
<dependencies>
<dependency>
<groupId>org.postgresql</groupId>
<artifactId>postgresql</artifactId>
<version>42.2.18</version>
</dependency>
</dependencies>
You have to check your Maven dependency: if you are using PostgreSQL 9.1+, then your dependency should look like the above.
To learn more about the Maven dependency, refer to this link: How do you add PostgreSQL Driver as a dependency in Maven?
Change METHOD to "trust" in pg_hba.conf
In case you are struggling to get this working in Docker:
Firstly: run the container with -e POSTGRES_HOST_AUTH_METHOD=md5 (doc)
docker run -e POSTGRES_HOST_AUTH_METHOD=md5 -e POSTGRES_PASSWORD=doesntmatter -p 5432:5432 --name CONTAINERNAME -d postgres
Secondly: allow md5 encryption as discussed in other answers:
docker exec -ti -u postgres CONTAINERNAME bash -c "echo 'password_encryption=md5' >> /var/lib/postgresql/data/postgresql.conf"
Thirdly: restart the container
docker restart CONTAINERNAME
Fourthly: you need to recreate the postgres password in md5 format
docker exec -ti -u postgres CONTAINERNAME psql
alter role postgres with password 'THE-NEW-PASSWORD';
* please be aware scram-sha-256 is much better than md5 (doc)
Use these:
wget https://jdbc.postgresql.org/download/postgresql-42.2.24.jar
Copy it to your Hive library:
sudo mv postgresql-42.2.24.jar /opt/hive/lib/postgresql-42.2.24.jar
For me, updating the Postgres library fixed this.
It works fine with version 12.6 ... just downgrade PostgreSQL.
You might need to check the version of Postgres you are running. You might also need to update the Spring version if it is set through the Spring parent.
In my case, the current Postgres was at v13. I modified the Spring parent version (it was on 1.4; I changed it to 2.14), then updated the Maven dependency and re-ran the application. This fixed the issue.
Suggestions:
A current JDBC driver will help (e.g. postgresql-42.3.6.jar).
Copy it to the /jars folder under your Spark install directory (I'm assuming a single machine in this example).
Python: install "findspark" to make pyspark importable as a regular library.
Here is an example I hope will help someone:
import findspark
findspark.init()
from pyspark.sql import SparkSession
sparkClassPath = "C:/spark/spark-3.0.3-bin-hadoop2.7/jars"
spark = SparkSession \
.builder \
.config("spark.driver.extraClassPath", sparkClassPath) \
.getOrCreate()
df = spark.read \
.format("jdbc") \
.option("url", "jdbc:postgresql://{YourHostName}:5432/{YourDBName}") \
.option("driver", "org.postgresql.Driver") \
.option("dbtable", "{YourTableName}") \
.option("user", "{YourUserName") \
.option("password", "{YourSketchyPassword") \
.load()
Install pgadmin if you have not already done so.
Try it via Docker
You need to download the postgresql .jar and then move it into the .../jre/lib/ext/ folder. It worked for me.
Even after changing pg_hba.conf to md5 everywhere, it didn't work.
What worked was doing this:
show password_encryption;
If it shows up as scram-sha-256, do this:
set password_encryption = 'md5';
Restart the server; this solved my issue.
Use the latest Maven dependency for Postgres in pom.xml.
Changing the method to trust for IPv4 local connections worked for me.
Solution:
Get your pg_hba.conf file in the directory C:\Program Files\PostgreSQL\13\data\pg_hba.conf
and simply change scram-sha-256 under the METHOD column to trust.
I guess the solution to this problem is using version 9.6.
It works just fine after changing the version.
Open pg_hba.conf
Set IPv4 local connections to trust
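For reference, the relevant entry usually looks like the line below (an illustrative line; adjust it to match the existing IPv4 local entry in your own pg_hba.conf):

# IPv4 local connections:
host    all             all             127.0.0.1/32            trust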
I am trying to query an Oracle database from within an Azure Synapse notebook, preferably using pyodbc, but a PySpark solution would also be acceptable. The complexity here comes, I believe, from the low configurability of the Spark pool; the code itself is, I think, generally correct.
host = 'my_endpoint.com:[port here as plain numbers, e.g. 1111]/orcl'
database = 'my_db_name'
username = 'my_username'
password = 'my_password'
conn = pyodbc.connect( 'DRIVER={ODBC Driver 17 for SQL Server};'
'SERVER=' + host + ';'
'DATABASE=' + database + ';'
'UID=' + username + ';'
'PWD=' + password + ';')
Approaches I have tried:
Pyodbc - I can use the default driver available ({ODBC Driver 17 for SQL Server}) and I get login timeouts. I have tried both the normal URL of the server and the IP, and with all combinations of port, no port, comma port, colon port, and without the service name "orcl" appended. Code sample is above, but I believe the issue lies with the drivers.
Pyspark .read - With no JDBC driver specified, I get a "No suitable driver" error. I am able to add the OJDBC .jar to the workspace or to my file directory, but I was not able to figure out how to tell spark that it should be used.
cx_oracle - This is not permitted in my workspace.
If the solution requires setting environment variables or using spark-submit, please provide a link that explains how best to do that in Synapse. I would be happy with either a JDBC or an ODBC solution.
By adding the .jar here (ojdbc8-19.15.0.0.1.jar) to the Synapse workspace packages and then adding that package to the Apache spark pool packages, I was able to execute the following code:
host = 'my_host_url'
port = 1521
service_name = 'my_service_name'
jdbcUrl = f'jdbc:oracle:thin:@{host}:{port}:{service_name}'
sql = 'SELECT * FROM my_table'
user = 'my_username'
password = 'my_password'
jdbcDriver = 'oracle.jdbc.driver.OracleDriver'
jdbcDF = spark.read.format('jdbc') \
.option('url', jdbcUrl) \
.option('query', sql) \
.option('user', user) \
.option('password', password) \
.option('driver', jdbcDriver) \
.load()
display(jdbcDF)
I am trying to connect to a database over SSL (namely TLSv1.3).
When tested with psql, it connects over TLSv1.3.
With the JDBC driver as below, the connection is not secured by SSL at all for some reason
(I confirmed this by executing select * from pg_stat_ssl;).
val props = new Properties()
props.setProperty("user", "postgres")
props.setProperty("password", "example")
props.setProperty("ssl", "true")
props.setProperty("sslmode", "allow")
props.setProperty("sslfactory", "org.postgresql.ssl.NonValidatingFactory")
DriverManager.getConnection("jdbc:postgresql://localhost/", props)
What would be the cause of this problem?
PostgreSQL JDBC driver versions: 42.2.5/42.2.22
If you set sslmode to allow, the client will prefer non-encrypted connections. Change the setting to require or prefer.
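As a minimal sketch, here are the properties from the question with sslmode switched to require (host, credentials, and the non-validating factory are carried over from the question purely for illustration):

import java.sql.DriverManager
import java.util.Properties

val props = new Properties()
props.setProperty("user", "postgres")
props.setProperty("password", "example")
props.setProperty("ssl", "true")
// "require" makes the driver insist on TLS; "allow" prefers a plaintext connection.
props.setProperty("sslmode", "require")
// Only for testing: this factory skips certificate validation.
props.setProperty("sslfactory", "org.postgresql.ssl.NonValidatingFactory")

val conn = DriverManager.getConnection("jdbc:postgresql://localhost/", props)

Note that require by itself (and especially with NonValidatingFactory) still does not verify the server certificate; verify-ca or verify-full with a proper sslrootcert is the production-grade option.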
I am trying to connect to an Azure SQL managed instance from Databricks. I am using Scala to connect to it. The code I have copied from the Microsoft web site.
My actual Scala code (I have changed the credentials and IP, but I have made sure they are correct, as I copied them from the connection strings in the SQL Server managed instance options):
Class.forName("com.microsoft.sqlserver.jdbc.SQLServerDriver")
val jdbcHostname = "dev-migdb.nf53e3653n43.database.windows.net"
val jdbcPort = 1433
val jdbcDatabase = "MYDB"
// Create the JDBC URL without passing in the user and password parameters.
val jdbcUrl = s"jdbc:sqlserver://${jdbcHostname}:${jdbcPort};database=${jdbcDatabase};loginTimeout=90"
// Create a Properties() object to hold the parameters.
import java.util.Properties
val connectionProperties = new Properties()
connectionProperties.put("user", "db-devmigmgd")
connectionProperties.put("password", "pwd##321232123")
val driverClass = "com.microsoft.sqlserver.jdbc.SQLServerDriver"
connectionProperties.setProperty("Driver", driverClass)
val employees_table = spark.read.jdbc(jdbcUrl, "employees", connectionProperties)
Error :
com.microsoft.sqlserver.jdbc.SQLServerException: The TCP/IP connection to the host dev-migdb.nf53e3653n43.database.windows.net, port 1433 has failed. Error: "Connection timed out: no further information. Verify the connection properties. Make sure that an instance of SQL Server is running on the host and accepting TCP/IP connections at the port. Make sure that TCP connections to the port are not blocked by a firewall.".
at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDriverError(SQLServerException.java:227)
at com.microsoft.sqlserver.jdbc.SQLServerException.ConvertConnectExceptionToSQLServerException(SQLServerException.java:284)
at com.microsoft.sqlserver.jdbc.SocketFinder.findSocket(IOBuffer.java:2435)
at com.microsoft.sqlserver.jdbc.TDSChannel.open(IOBuffer.java:635)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.connectHelper(SQLServerConnection.java:2010)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.login(SQLServerConnection.java:1687)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.connectInternal(SQLServerConnection.java:1528)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.connect(SQLServerConnection.java:866)
at com.microsoft.sqlserver.jdbc.SQLServerDriver.connect(SQLServerDriver.java:569)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$createConnectionFactory$1.apply(JdbcUtils.scala:63)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$createConnectionFactory$1.apply(JdbcUtils.scala:54)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:56)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation$.getSchema(JDBCRelation.scala:210)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:35)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:336)
at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:286)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:274)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:198)
at org.apache.spark.sql.DataFrameReader.jdbc(DataFrameReader.scala:301)
at linee84eb162c20345fc84ad591cfefe930f29.$read$$iw$$iw$$iw$$iw$$iw$$iw.<init>(command-999597493877319:49)
at linee84eb162c20345fc84ad591cfefe930f29.$read$$iw$$iw$$iw$$iw$$iw.<init>(command-999597493877319:104)
On the other hand:
I am able to connect to the same managed instance from a VM on the same Azure subscription (using SSMS).
My custom application, written in .NET and hosted on that VM, is also able to connect to the same instance.
However, I am unable to connect to the same instance from Scala code executed using spark-shell on the above VM, BUT the errors I am getting are different. Please find the errors below.
com.microsoft.sqlserver.jdbc.SQLServerException: Login failed for user 'db-devmigmgd@dev-migdb'.
at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:197)
at com.microsoft.sqlserver.jdbc.TDSTokenHandler.onEOF(tdsparser.java:246)
at com.microsoft.sqlserver.jdbc.TDSParser.parse(tdsparser.java:83)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.sendLogon(SQLServerConnection.java:2529)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.logon(SQLServerConnection.java:1905)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.access$000(SQLServerConnection.java:41)
at com.microsoft.sqlserver.jdbc.SQLServerConnection$LogonCommand.doExecute(SQLServerConnection.java:1893)
at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:4874)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:1400)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.connectHelper(SQLServerConnection.java:1045)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.login(SQLServerConnection.java:817)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.connect(SQLServerConnection.java:700)
at com.microsoft.sqlserver.jdbc.SQLServerDriver.connect(SQLServerDriver.java:842)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$createConnectionFactory$1.apply(JdbcUtils.scala:63)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$createConnectionFactory$1.apply(JdbcUtils.scala:54)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:56)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation$.getSchema(JDBCRelation.scala:210)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:35)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:318)
at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:223)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:167)
at org.apache.spark.sql.DataFrameReader.jdbc(DataFrameReader.scala:238)
... 76 elided
Thanks @simon_dmorias.
As per your comments and the discussions with the Microsoft team, resources being on different VNets was the real problem.
So as a best practice, always create resources in the same VNet.