I am using Zeppelin inside an AWS EMR cluster. I then created a DocumentDB instance and tried to query it through Zeppelin, but when I run the code it fails because the MongoDB Java driver dependency is missing:
<console>:25: error: object mongodb is not a member of package org
import org.mongodb.scala._
^
<console>:26: error: object bson is not a member of package org
import org.bson._
^
Is there any way to add a Maven dependency so that I can use MongoDB from Zeppelin?
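One common approach (a sketch; the artifact version below is illustrative and the driver coordinates are an assumption you should check against your DocumentDB compatibility) is Zeppelin's dynamic dependency loading: run a %dep paragraph before the first Spark paragraph in the notebook. Alternatively, add the same coordinate under the Spark interpreter's Dependencies section in the interpreter settings.

```scala
%dep
// Must run before the Spark interpreter starts; restart the interpreter
// from the settings page first if it is already running.
// Maven coordinates of the MongoDB Scala driver (version illustrative):
z.load("org.mongodb.scala:mongo-scala-driver_2.11:2.4.2")
```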
I am using Hadoop 2.7.2, HBase 1.4.9, Spark 2.2.0, Scala 2.11.8 and Java 1.8 on a Hadoop cluster composed of one master and two slaves.
When I run spark-shell after starting the cluster, it works fine.
I am trying to connect to HBase from Scala by following this tutorial: [https://www.youtube.com/watch?v=gGwB0kCcdu0][1].
But when I try, as he does, to start spark-shell with those jars passed as an argument, I get this error:
spark-shell --jars
"hbase-annotations-1.4.9.jar,hbase-common-1.4.9.jar,hbase-protocol-1.4.9.jar,htrace-core-3.1.0-incubating.jar,zookeeper-3.4.6.jar,hbase-client-1.4.9.jar,hbase-hadoop2-compat-1.4.9.jar,metrics-json-3.1.2.jar,hbase-server-1.4.9.jar"
<console>:14: error: not found: value spark
import spark.implicits._
^
<console>:14: error: not found: value spark
import spark.sql
^
After that, even if I log out and run spark-shell again, I still have the same issue.
Can anyone tell me what the cause is and how to fix it?
In your import statement, spark should be an object of type SparkSession. That object should have been created for you previously, or you need to create it yourself (see the Spark docs). I didn't watch your tutorial video.
The point is that it doesn't have to be called spark. It could, for instance, be called sparkSession, and then you could write import sparkSession.implicits._
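The mechanism behind this is plain Scala, not anything Spark-specific: you can import the members of any stable value (a val), not just of a package. A minimal sketch with no Spark involved (all names here are illustrative):

```scala
// A stand-in for SparkSession: any class exposing a nested `implicits` object.
class Session {
  object implicits {
    implicit val defaultName: String = "world"
  }
}

val sparkSession = new Session  // analogous to the `spark` val spark-shell creates
import sparkSession.implicits._ // importing from a value, not from a package

def greet(implicit name: String): String = s"hello, $name"
println(greet)
```

If the val (here sparkSession) has not been created, the compiler reports "not found: value sparkSession", which is exactly the shape of the error above.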
I uploaded the spark-ts time-series library to Databricks using the Maven coordinate option in Create Library. I was able to create the library and attach it to my cluster successfully. But when I tried to import the spark-ts library in a Databricks notebook with org.apache.spark.spark-ts, it threw an error:
notebook:1: error: object ts is not a member of package org.apache.spark
Please let me know how to handle this issue.
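For what it's worth (assuming this is the sryza/spark-timeseries project published as com.cloudera.sparkts:sparkts): its classes live under the com.cloudera.sparkts package rather than org.apache.spark, so the import would look like:

```scala
// Assumes the com.cloudera.sparkts:sparkts artifact is attached to the cluster.
import com.cloudera.sparkts._
import com.cloudera.sparkts.models.ARIMA
```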
I am using Scala on Eclipse Luna and trying to connect to Cassandra. My code shows the error apache object is not a member of package org on the following line:
import org.apache.spark.SparkConf
I have already added the Scala and Spark libraries to the project. Does anyone know how I can make my program import the Spark libraries?
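If the project is built with sbt (an assumption; the versions below are illustrative and must match your cluster), declaring the dependencies in build.sbt lets both sbt and an Eclipse project generated from it resolve the Spark classes:

```scala
// build.sbt -- versions illustrative; match your Spark and Scala versions
scalaVersion := "2.11.8"

libraryDependencies ++= Seq(
  "org.apache.spark"   %% "spark-core"                % "2.2.0",
  "org.apache.spark"   %% "spark-sql"                 % "2.2.0",
  // Cassandra connector for Spark (version illustrative):
  "com.datastax.spark" %% "spark-cassandra-connector" % "2.0.5"
)
```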
I was trying to run a Scala file using the command
scala myclass.scala
However, it complains about one of the imported libraries. I included the jar using the -classpath option like this:
scala -class ncscala-time.jar myclass.scala
The error I got is:
myclass.scala:5: error: object github is not a member of package com
import com.github.nscala_time.time.Imports._
Any idea why?
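A likely cause (a guess from the command shown above): the flag typed there is -class, but the scala launcher's option for the classpath is -classpath (or its alias -cp). A corrected invocation would look like:

```shell
# Use -classpath (alias -cp), not -class; jar name taken from the question as-is.
scala -classpath ncscala-time.jar myclass.scala
```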