Import uploaded library to Databricks - Scala

I uploaded the spark-ts time series library to Databricks using the Maven coordinate option in Create Library. I was able to create the library and attach it to my cluster. But when I try to import the spark-ts library in Databricks with org.apache.spark.spark-ts, it throws an error: notebook:1: error: object ts is not a member of package org.apache.spark. Please let me know how to handle this issue.
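The import path in the error does not match where the library's classes actually live. Below is a minimal sketch, on the assumption that the Maven coordinate points at the sryza/spark-timeseries (spark-ts) artifact, whose classes are packaged under com.cloudera.sparkts rather than org.apache.spark:
// minimal sketch; assumes the attached library is the sryza/spark-timeseries artifact
import com.cloudera.sparkts._                // core time series classes
import com.cloudera.sparkts.models.ARIMA    // example model class; adjust to what you need
If the import still fails, inspect the installed JAR (or the library's docs) to confirm its actual top-level package.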

Related

object neo4j is not a member of package org

I am trying to read data from Neo4j using a Spark job. The import itself is showing this error. I tried to import org.neo4j.spark._ in IntelliJ IDEA, but it shows the error "Cannot resolve symbol spark". When trying it in the spark-shell, it throws:
:23: error: object neo4j is not a member of package org
import org.neo4j.spark._
Spark version: 3.1.1
dependencies
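Before the import can resolve in the spark-shell, the connector JAR has to be on the classpath. A minimal sketch, assuming the Neo4j Connector for Apache Spark coordinate below (the version string is an assumption and should be matched to Spark 3.1 / Scala 2.12):
// minimal sketch; the coordinate and connection details are assumptions
// spark-shell --packages org.neo4j:neo4j-connector-apache-spark_2.12:4.1.5_for_spark_3
import org.neo4j.spark._

val df = spark.read
  .format("org.neo4j.spark.DataSource")
  .option("url", "bolt://localhost:7687")   // placeholder connection URL
  .option("labels", "Person")               // placeholder node label
  .load()
In IntelliJ the equivalent fix is adding the same artifact as a Maven dependency, so that "Cannot resolve symbol spark" disappears.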

Error while running Scala code - Databricks 7.3 LTS and above

I am running Databricks 7.3 LTS and getting errors while trying to use the Scala bulk copy.
The error is:
object sqldb is not a member of package com.microsoft
I have installed the correct SQL connector drivers but am not sure how to fix this error.
The installed drivers are:
com.microsoft.azure:spark-mssql-connector_2.12:1.1.0
I have also installed the JAR dependency as below:
spark_mssql_connector_2_12_1_1_0.jar
I couldn't find any Scala code example for the above configuration on the internet.
My Scala code sample is as below:
%scala
import com.microsoft.azure.sqldb.spark.config.Config
As soon as I run this command I get the error:
object sqldb is not a member of package com.microsoft.azure
Any help, please.
In the new connector you need to use the com.microsoft.sqlserver.jdbc.spark.SQLServerBulkJdbcOptions class to specify bulk copy options.
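A minimal sketch of what a write could look like with the new connector (format string com.microsoft.sqlserver.jdbc.spark); the connection details and bulk-copy option values below are placeholders and should be checked against the connector's documentation:
// minimal sketch; server, database, credentials and option values are placeholders
df.write
  .format("com.microsoft.sqlserver.jdbc.spark")
  .mode("append")
  .option("url", "jdbc:sqlserver://<server>.database.windows.net:1433;databaseName=<db>")
  .option("dbtable", "dbo.MyTable")
  .option("user", "<user>")
  .option("password", "<password>")
  .option("tableLock", "true")     // bulk-copy style options; verify names against the docs
  .option("batchsize", "10000")
  .save()
Note that the old com.microsoft.azure.sqldb.spark.* imports belong to the deprecated azure-sqldb-spark connector, which is why they fail with spark-mssql-connector installed.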

Cannot import Cosmos DB in Databricks

I set up a new cluster on Databricks using Databricks Runtime 10.1 (includes Apache Spark 3.2.0, Scala 2.12). I also installed azure_cosmos_spark_3_2_2_12_4_6_2.jar in Libraries.
I created a new notebook with Scala:
import com.microsoft.azure.cosmosdb.spark.schema._
import com.microsoft.azure.cosmosdb.spark.CosmosDBSpark
import com.microsoft.azure.cosmosdb.spark.config.Config
But I still get the error: object cosmosdb is not a member of package com.microsoft.azure
Does anyone know which step I am missing?
Thanks
Looks like the imports you are doing are for the older Spark Connector (https://github.com/Azure/azure-cosmosdb-spark).
For the Spark 3.2 Connector, you might want to follow the quickstart guides: https://learn.microsoft.com/azure/cosmos-db/sql/create-sql-api-spark
The official repository is: https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/cosmos/azure-cosmos-spark_3-2_2-12
Complete Scala sample: https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/cosmos/azure-cosmos-spark_3_2-12/Samples/Scala-Sample.scala
Here is the configuration reference: https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/cosmos/azure-cosmos-spark_3_2-12/docs/configuration-reference.md
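A minimal sketch of a read with the Spark 3 connector installed above; the account endpoint, key, database and container names are placeholders:
// minimal sketch; the Spark 3 Cosmos DB connector is configured through options
// instead of the old com.microsoft.azure.cosmosdb.spark imports
val cosmosCfg = Map(
  "spark.cosmos.accountEndpoint" -> "https://<account>.documents.azure.com:443/",
  "spark.cosmos.accountKey"      -> "<account-key>",
  "spark.cosmos.database"        -> "<database>",
  "spark.cosmos.container"       -> "<container>"
)

val df = spark.read
  .format("cosmos.oltp")
  .options(cosmosCfg)
  .load()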
You may be missing the pip install step:
pip install azure-cosmos

Reference uploaded JAR library

I've built a set of support functions into a helper.jar library and imported it to a Databricks cluster. The jar is installed on the cluster, but I'm not able to reference the functions in the library.
The jar import has been tested, the cluster restarted, and the jar can be referenced in IntelliJ, where it was developed as an Azure Spark/HDInsight project.
// next line generates error: value helper is not a member of org.apache.spark.sql.SparkSession
import helper
// next line generates error: not found: value fn_conversion
display(df.withColumn("RevenueConstantUSD", fn_conversion($"Revenue")))
I'd expect the helper functions to be visible after library deployment, or possibly after adding the import command.
Edit: added information about the IntelliJ project type
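The import has to use the fully qualified package and object the functions were compiled into, not the jar file name. A minimal sketch, assuming a hypothetical package com.example.helper with an object Helper inside helper.jar:
// minimal sketch; package and object names are hypothetical and must match
// the `package` declaration the helper.jar sources were compiled with

// inside the library (IntelliJ project):
// package com.example.helper
// object Helper {
//   def fn_conversion(c: org.apache.spark.sql.Column): org.apache.spark.sql.Column = ...
// }

// in the Databricks notebook, after the jar is attached and the cluster restarted:
import com.example.helper.Helper._
display(df.withColumn("RevenueConstantUSD", fn_conversion($"Revenue")))
A bare import helper cannot work, because helper is the jar's name, not a Scala package.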

Export Scala code, without a main function, to a jar

I am working with Spark Streaming and have written a custom streaming adapter. I want to export this adapter as a jar and use it in my Scala streaming jobs. When I reference the jar inside my streaming code, I get this error:
import org.custom.streaming
[ERROR] object custom is not a member of package org
Note that the adapter doesn't have a main method, so I can't use the generic approaches available online for exporting the project as a runnable JAR.
I also tried exporting it as a shaded JAR, but in that case I get:
error: error while loading <root>, error in opening zip file
[EDIT]
I am using Maven for packaging.
Have you considered using the package command of your Maven build file?
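A minimal sketch of that workflow; the artifact name, package and paths are placeholders:
// minimal sketch; artifact name and package are placeholders
// 1) mvn package builds a plain library jar (no main class required),
//    e.g. target/custom-streaming-adapter-1.0.jar
// 2) put the jar on the job's classpath, e.g.
//    spark-submit --jars target/custom-streaming-adapter-1.0.jar ...
//    (or attach it as a cluster library in Databricks)
// 3) the import then resolves inside the streaming job:
import org.custom.streaming._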