HBase Kerberos connection from Spark Scala - scala

I'm trying to connect to a kerberized (secure) HBase cluster from a Spark Scala shell; below is my code, and I'd appreciate any help with the errors. I'm passing hdfs-site.xml, hbase-site.xml, core-site.xml, and my keytab to the spark-shell using --files.
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.Connection
import org.apache.hadoop.hbase.client.ConnectionFactory
import org.apache.hadoop.hbase.TableName
import org.apache.hadoop.security.UserGroupInformation
val conf: Configuration = HBaseConfiguration.create()
conf.set("hbase.zookeeper.quorum", "xxxxx1#abc.com,xxxxx2#abc.com,xxxxx3#abc.com")
conf.set("zookeeper.znode.parent", "/hbase-secure")
conf.setInt("hbase.client.scanner.caching", 10000)
conf.set("hbase.rpc.controllerfactory.class","org.apache.hadoop.hbase.ipc.RpcControllerFactory")
conf.set("hbase.rpc.controllerfactory.class","org.apache.hadoop.hbase.ipc.RpcControllerFactory")
conf.set("hadoop.security.authentication", "kerberos")
conf.set("hbase.security.authentication", "kerberos")
val userGroupInformation = UserGroupInformation.loginUserFromKeytabAndReturnUGI("XXX@abc.COM", "/u/xxxxx/XXXX.keytab")
UserGroupInformation.setLoginUser(userGroupInformation)
val connection: Connection = ConnectionFactory.createConnection(conf)
print(connection)
val admin = connection.getAdmin
val listtables = admin.listNamespaceDescriptors()
I see a lot of warnings in the process, as below:
warning: Class org.apache.hadoop.hbase.classification.InterfaceAudience not found - continuing with a stub.
Error:
WARN AbstractRpcClient: Couldn't setup connection for XXXX@abc.COM to null
RpcRetryingCaller{globalStartTime=1541788150382, pause=100, retries=35}, org.apache.hadoop.hbase.MasterNotRunningException: com.google.protobuf.ServiceException: java.io.IOException: Couldn't setup connection for XXXX@abc.COM to null
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:158)
at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4427)
at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4416)
at org.apache.hadoop.hbase.client.HBaseAdmin.listNamespaceDescriptors(HBaseAdmin.java:3123)
... 49 elided
Caused by: org.apache.hadoop.hbase.MasterNotRunningException: com.google.protobuf.ServiceException: java.io.IOException: Couldn't setup connection for XXXX@abc.COM to null
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1560)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1580)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1731)
at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:134)
... 52 more
Caused by: com.google.protobuf.ServiceException: java.io.IOException: Couldn't setup connection for XXXX@abc.COM to null
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:228)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:292)
at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:62896)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.isMasterRunning(ConnectionManager.java:1591)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1529)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1551)
... 56 more
Caused by: java.io.IOException: Couldn't setup connection for XXXX@abc.COM to null
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$1.run(RpcClientImpl.java:665)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.handleSaslConnectionFailure(RpcClientImpl.java:637)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:745)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:889)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1201)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:218)
... 61 more
Caused by: java.io.IOException: Failed to specify server's Kerberos principal name
at org.apache.hadoop.hbase.security.HBaseSaslRpcClient.<init>(HBaseSaslRpcClient.java:117)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:609)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$600(RpcClientImpl.java:156)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:737)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:734)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:734)
... 65 more

Your ZooKeeper configuration is missing the ZooKeeper port. Add:
conf.set("hbase.zookeeper.property.clientPort", "2181")
I would also like to suggest a few other things.
After this you might still have problems with the Spark configuration. Copy hdfs-site.xml, hbase-site.xml, core-site.xml, and yarn-site.xml (if you are using YARN) to the Spark conf folder.
Add the resources to the HBase configuration object:
conf.addResource(new org.apache.hadoop.fs.Path("/path/to/hbase-site.xml"))
Set the Java system properties:
// Point to the krb5.conf file and enable Kerberos debugging.
System.setProperty("java.security.krb5.conf", "/etc/krb5.conf");
System.setProperty("sun.security.krb5.debug", "true");
Hope this helps.
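Putting these suggestions together with the original snippet, a minimal end-to-end sketch might look like the following (hostnames, realm, and paths are placeholders; loading hbase-site.xml matters here, because the "Failed to specify server's Kerberos principal name" error typically means the client never saw the hbase.master.kerberos.principal and hbase.regionserver.kerberos.principal values that file normally provides):
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.{Connection, ConnectionFactory}
import org.apache.hadoop.security.UserGroupInformation

// Point to krb5.conf and enable Kerberos debug output before logging in.
System.setProperty("java.security.krb5.conf", "/etc/krb5.conf")
System.setProperty("sun.security.krb5.debug", "true")

val conf: Configuration = HBaseConfiguration.create()
// hbase-site.xml carries the server-side Kerberos principals the SASL handshake needs.
conf.addResource(new Path("/path/to/hbase-site.xml"))
conf.set("hbase.zookeeper.quorum", "host1.example.com,host2.example.com,host3.example.com")
conf.set("hbase.zookeeper.property.clientPort", "2181")
conf.set("zookeeper.znode.parent", "/hbase-secure")
conf.set("hadoop.security.authentication", "kerberos")
conf.set("hbase.security.authentication", "kerberos")

// Log in from the keytab and make this UGI the current user.
val ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI("user@EXAMPLE.COM", "/path/to/user.keytab")
UserGroupInformation.setLoginUser(ugi)

val connection: Connection = ConnectionFactory.createConnection(conf)
try {
  connection.getAdmin.listNamespaceDescriptors().foreach(ns => println(ns.getName))
} finally {
  connection.close()
}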

Related

Error when spark-submitting a job to a YARN cluster from a remote host

I am trying to spark-submit my jar with a Spark application to a remote YARN cluster.
I downloaded files from cluster:
hdfs-site.xml
yarn-site.xml
core-site.xml
I set the environment variable HADOOP_CONF_DIR to the directory with these files.
Then I do spark-submit:
set HADOOP_CONF_DIR=C:\projects\config\0
spark-submit ^
--deploy-mode cluster ^
--principal test@tdomain ^
--keytab "test.keytab" ^
--queue garliq ^
--properties-file "SparkSubmit.conf" ^
--class ru.rosbank.App ^
scala-spark-maven-1.0-SNAPSHOT-jar-with-dependencies.jar
But I get this error:
INFO ConfiguredRMFailoverProxyProvider: Failing over to rm1 Exception
in thread "main" java.io.IOException: DestHost:destPort
node1.tdomain:8032 , LocalHost:localPort
RS-AAA11111111/11.23.111.164:0. Failed on local exception:
java.io.IOException: Couldn't set up IO streams:
java.lang.IllegalArgumentException: Server has invalid Kerberos
principal: rm/node1.tdomain@DOMAIN, expecting: rm/11.22.33.155@TDOMAIN
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:806)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1515)
at org.apache.hadoop.ipc.Client.call(Client.java:1457)
at org.apache.hadoop.ipc.Client.call(Client.java:1367)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy7.getClusterMetrics(Unknown Source)
at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.getClusterMetrics(ApplicationClientProtocolPBClientImpl.java:271)
at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
at com.sun.proxy.$Proxy8.getClusterMetrics(Unknown Source)
at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getYarnClusterMetrics(YarnClientImpl.java:605)
at org.apache.spark.deploy.yarn.Client.$anonfun$submitApplication$1(Client.scala:179)
at org.apache.spark.internal.Logging.logInfo(Logging.scala:57)
at org.apache.spark.internal.Logging.logInfo$(Logging.scala:56)
at org.apache.spark.deploy.yarn.Client.logInfo(Client.scala:65)
at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:179)
at org.apache.spark.deploy.yarn.Client.run(Client.scala:1227)
at org.apache.spark.deploy.yarn.YarnClusterApplication.start(Client.scala:1634)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:951)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1030)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1039)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) Caused by: java.io.IOException: Couldn't set up IO streams:
java.lang.IllegalArgumentException: Server has invalid Kerberos
principal: rm/node1.tdomain@DOMAIN, expecting: rm/11.22.33.155@TDOMAIN
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:866)
at org.apache.hadoop.ipc.Client$Connection.access$3700(Client.java:411)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1572)
at org.apache.hadoop.ipc.Client.call(Client.java:1403)
... 29 more Caused by: java.lang.IllegalArgumentException: Server has invalid Kerberos principal: rm/node1.tdomain@DOMAIN,
expecting: rm/11.22.33.155@TDOMAIN
at org.apache.hadoop.security.SaslRpcClient.getServerPrincipal(SaslRpcClient.java:337)
at org.apache.hadoop.security.SaslRpcClient.createSaslClient(SaslRpcClient.java:234)
at org.apache.hadoop.security.SaslRpcClient.selectSaslClient(SaslRpcClient.java:160)
at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:390)
at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:617)
at org.apache.hadoop.ipc.Client$Connection.access$2300(Client.java:411)
at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:804)
at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:800)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:800)
... 32 more
Problem is here:
Server has invalid Kerberos principal: rm/node1.tdomain@DOMAIN, expecting: rm/11.22.33.155@TDOMAIN
As you can see, the domain on the test cluster has the value DOMAIN, but it should be TDOMAIN.
Where can I find the settings for the server principal rm/node1.tdomain@DOMAIN? Is it somewhere on the cluster, or do I have to do additional setup on my local host before launching spark-submit?
You could look through this deployment-steps doc from Cloudera; you can ignore the Spark Streaming bit.
You need to pass the keytab file in the --files option so that it can be copied onto the remote Spark machine, which would then use it to authenticate with the Kerberos server using your principal/service account, if it is reachable.
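For example, the spark-submit call above might become the following (a sketch of that suggestion only; note that in cluster deploy mode --keytab already ships the keytab to the cluster, and some Spark versions may refuse to distribute the same file through both --keytab and --files):
spark-submit ^
  --deploy-mode cluster ^
  --principal test@tdomain ^
  --keytab "test.keytab" ^
  --files "test.keytab" ^
  --queue garliq ^
  --properties-file "SparkSubmit.conf" ^
  --class ru.rosbank.App ^
  scala-spark-maven-1.0-SNAPSHOT-jar-with-dependencies.jar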

How to run a Glue job locally?

I have set up the project as described here. But the code:
import com.amazonaws.services.glue.{AWSGlueClientBuilder, GlueContext}
import org.apache.spark.SparkContext
import org.slf4j.LoggerFactory
object MyGlueJob {
private val logger = LoggerFactory.getLogger(getClass)
def main(sysArgs: Array[String]) {
val spark: SparkContext = SparkContext.getOrCreate()
val glueContext: GlueContext = new GlueContext(spark)
val awsGlueClient = AWSGlueClientBuilder.defaultClient
}
}
fails with this error:
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
19/11/21 15:40:32 INFO SparkContext: Running Spark version 2.4.3
19/11/21 15:40:33 ERROR SparkContext: Error initializing SparkContext.
org.apache.spark.SparkException: A master URL must be set in your configuration
at org.apache.spark.SparkContext.<init>(SparkContext.scala:368)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:117)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2544)
at MyGlueJob$.main(MyGlueJob.scala:13)
at MyGlueJob.main(MyGlueJob.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.intellij.rt.execution.CommandLineWrapper.main(CommandLineWrapper.java:66)
19/11/21 15:40:33 ERROR Utils: Uncaught exception in thread main
java.lang.NullPointerException
at org.apache.spark.SparkContext.org$apache$spark$SparkContext$$postApplicationEnd(SparkContext.scala:2416)
at org.apache.spark.SparkContext$$anonfun$stop$1.apply$mcV$sp(SparkContext.scala:1931)
at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1340)
at org.apache.spark.SparkContext.stop(SparkContext.scala:1930)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:585)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:117)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2544)
at MyGlueJob$.main(MyGlueJob.scala:13)
at MyGlueJob.main(MyGlueJob.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.intellij.rt.execution.CommandLineWrapper.main(CommandLineWrapper.java:66)
19/11/21 15:40:33 INFO SparkContext: Successfully stopped SparkContext
Exception in thread "main" java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.intellij.rt.execution.CommandLineWrapper.main(CommandLineWrapper.java:66)
Caused by: org.apache.spark.SparkException: A master URL must be set in your configuration
at org.apache.spark.SparkContext.<init>(SparkContext.scala:368)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:117)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2544)
at MyGlueJob$.main(MyGlueJob.scala:13)
at MyGlueJob.main(MyGlueJob.scala)
... 5 more
It is obvious that the master URL should be set, but how do I do this from the command line or via system variables (i.e. without touching the code)?
Also, I have read that the --master argument can fix the problem, but adding it to the args does nothing (it was set in the IntelliJ IDEA run configuration).
The key question is: can I run the Glue job locally and still be able to run it in AWS without touching the code? Is that possible?
You can create a Spark session explicitly and set any parameters you want, but I cannot say whether this will eventually work in Glue. The following is a local session that I use to test Spark jobs locally, even though I do eventually run them in Glue. I test only pure Spark code.
import org.apache.hadoop.security.UserGroupInformation
import org.apache.spark.sql.SparkSession

lazy val spark: SparkSession = {
  UserGroupInformation.setLoginUser(UserGroupInformation.createRemoteUser("hduser"))
  SparkSession
    .builder()
    .master("local")
    .appName("spark unit test")
    .getOrCreate()
}
The key question is: can I run the Glue job locally and still be able to run it in AWS without touching the code? Is that possible?
It's possible to run any code with a dev endpoint and Zeppelin. See the AWS docs.
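As for setting the master without touching the code: the --master flag is parsed by spark-submit, not by your application's main method, which is why putting it in the IDE's program arguments has no effect. One workaround (assuming the default SparkConf, which loads any JVM system property prefixed with "spark.") is to set it as a VM option in the IntelliJ run configuration:
-Dspark.master=local[*]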

Spark HBase connector not working in parallel mode?

I am trying to use the Hortonworks HBase connector for Spark 2.0 to work with HBase (https://github.com/hortonworks-spark/shc/tree/v1.1.0-2.0).
Here is the provided example from the above link:
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.execution.datasources.hbase.HBaseTableCatalog
val spark = SparkSession
  .builder()
  .appName(getClass.toString)
  .getOrCreate()
def withCatalog(cat: String, spark: SparkSession): DataFrame = {
  spark
    .read
    .options(Map(HBaseTableCatalog.tableCatalog -> cat))
    .format("org.apache.spark.sql.execution.datasources.hbase")
    .load()
}
val df = withCatalog(cat, spark)
df.printSchema()
df.show(20, false)
Schema:
val cat =
s"""{
|"table":{"namespace":"test", "name":"test_src_data", "tableCoder":"PrimitiveType"},
|"rowkey":"tfkod_description",
|"columns":{
|"col0":{"cf":"rowkey", "col":"tfkod_description", "type":"string"},
|"src_stream_desc":{"cf":"src_data", "col":"src_desc", "type":"string"}
|}
|}""".stripMargin
After I do spark2-submit, the job runs and prints only the schema. Afterwards all the executors exit and the job hangs forever.
Last message in the log:
Existing executor 41 has been removed (new total is 1)
But I can successfully work with HBase in a sequential way, i.e. Put or BulkPut, just not via the RDD or DataFrame APIs (with any of the HBase connectors) in Spark.
Is there anything wrong in the HBase/Spark config that prevents the Spark executors from working in parallel? Or is something missing on the worker nodes?
Error Message from Worker:
19/05/13 11:36:44 ERROR ipc.AbstractRpcClient: SASL authentication failed. The most likely cause is missing or invalid credentials. Consider 'kinit'.
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
at org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:642)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$600(RpcClientImpl.java:166)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:769)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:766)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:766)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:920)
at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:889)
at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1222)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:32651)
at org.apache.hadoop.hbase.client.ClientSmallScanner$SmallScannerCallable.call(ClientSmallScanner.java:201)
at org.apache.hadoop.hbase.client.ClientSmallScanner$SmallScannerCallable.call(ClientSmallScanner.java:180)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:346)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:320)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:64)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:122)
at sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
at sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:224)
at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192)
... 25 more

Trying to load JanusGraph with local backends Cassandra and Elasticsearch

I am using Spark to load JanusGraph with a Cassandra backend and Elasticsearch, both running locally.
import org.apache.commons.configuration.BaseConfiguration
import org.janusgraph.core.JanusGraphFactory
val conf = new BaseConfiguration()
conf.setProperty("storage.backend", "cassandrathrift")
conf.setProperty("storage.hostname", "127.0.0.1")
conf.setProperty("index.search.backend","elasticsearch")
conf.setProperty("index.search.hostname", "127.0.0.1")
val gr = JanusGraphFactory.open(conf)
However, I am getting this error:
Exception in thread "main" java.lang.IllegalArgumentException: Could not find implementation class: org.janusgraph.diskstorage.cassandra.thrift.CassandraThriftStoreManager
at org.janusgraph.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:61)
at org.janusgraph.diskstorage.Backend.getImplementationClass(Backend.java:477)
at org.janusgraph.diskstorage.Backend.getStorageManager(Backend.java:409)
at org.janusgraph.graphdb.configuration.GraphDatabaseConfiguration.<init>(GraphDatabaseConfiguration.java:1376)
at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:164)
at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:133)
at org.janusgraph.core.JanusGraphFactory.open(JanusGraphFactory.java:113)
at Test2$$anonfun$main$1.apply$mcVI$sp(make_graph3.scala:177)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:160)
at Test2$.main(make_graph3.scala:162)
at Test2.main(make_graph3.scala)
Caused by: java.lang.ClassNotFoundException: org.janusgraph.diskstorage.cassandra.thrift.CassandraThriftStoreManager
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:338)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at org.janusgraph.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:56)
... 10 more
I tried running jps in the terminal, and this is the output:
32689 NailgunRunner
44609 GremlinServer
32162
44917 Jps
44326 CassandraDaemon
44744 Launcher
44509 Elasticsearch
So Cassandra and ES are both running. What could be the issue?
Thanks in advance.

PySpark command on Jupyter: connecting to Spark on a remote server

I have configured Spark 2.1 on my remote Linux server (IBM RHEL Z systems). I am trying to create a SparkContext and am getting the below error:
from pyspark.context import SparkContext, SparkConf
master_url="spark://<IP>:7077"
conf = SparkConf()
conf.setMaster(master_url)
conf.setAppName("App1")
sc = SparkContext.getOrCreate(conf)
I am getting the below error. When I run the same code on the remote server in the pyspark shell, it works without error.
The currently active SparkContext was created at:
(No active SparkContext.)
at org.apache.spark.SparkContext.assertNotStopped(SparkContext.scala:100)
at org.apache.spark.SparkContext.getSchedulingMode(SparkContext.scala:1768)
at org.apache.spark.SparkContext.postEnvironmentUpdate(SparkContext.scala:2411)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:563)
at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:236)
at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
at py4j.GatewayConnection.run(GatewayConnection.java:214)
at java.lang.Thread.run(Thread.java:748)
It sounds like you haven't set Jupyter to be the PySpark driver. Before controlling PySpark from Jupyter, you must first set PYSPARK_DRIVER_PYTHON=jupyter and PYSPARK_DRIVER_PYTHON_OPTS='notebook'. If I am not mistaken, if you look at the code in libexec/bin/pyspark (on OS X) you will find instructions for setting up the Jupyter notebook.
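For example, in the shell where you launch pyspark (a sketch; the <IP> placeholder is from the question):
export PYSPARK_DRIVER_PYTHON=jupyter
export PYSPARK_DRIVER_PYTHON_OPTS='notebook'
pyspark --master spark://<IP>:7077
With those variables set, launching pyspark should start a Jupyter notebook wired to the remote master instead of the plain shell.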