specs/scalatest interaction issue in Play app - scala

I am having a problem I really can't explain... I have isolated it in the project at https://github.com/betehess/play-scalatest.
When I run test, sbt gets stuck for a while and then throws this exception:
> test
[error] Uncaught exception when running tests: java.net.ConnectException: Connection timed out
Exception in thread "Thread-1" java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(SocketInputStream.java:196)
at java.net.SocketInputStream.read(SocketInputStream.java:122)
at java.net.SocketInputStream.read(SocketInputStream.java:210)
at java.io.ObjectInputStream$PeekInputStream.peek(ObjectInputStream.java:2293)
at java.io.ObjectInputStream$BlockDataInputStream.readBlockHeader(ObjectInputStream.java:2473)
at java.io.ObjectInputStream$BlockDataInputStream.refill(ObjectInputStream.java:2543)
at java.io.ObjectInputStream$BlockDataInputStream.skipBlockData(ObjectInputStream.java:2445)
at java.io.ObjectInputStream.skipCustomData(ObjectInputStream.java:1941)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1620)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1517)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1771)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
at java.io.ObjectInputStream.defaultReadObject(ObjectInputStream.java:500)
at java.lang.Throwable.readObject(Throwable.java:914)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1017)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1893)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
at sbt.React.react(ForkTests.scala:117)
at sbt.ForkTests$$anonfun$mainTestTask$1$Acceptor$2$.run(ForkTests.scala:76)
at java.lang.Thread.run(Thread.java:745)
Looks like sbt gets stuck in a blocking call with the forked environment at https://github.com/sbt/sbt/blob/0.13.5/main/actions/src/main/scala/sbt/ForkTests.scala#L117.
Some remarks:
I run Ubuntu 13.10 and Java HotSpot(TM) 64-Bit "1.7.0_65"
none of my colleagues can reproduce the problem on their machines...
the problem happens only when scalatest is on the classpath, even though it is not used here
the problem goes away if I don't use the PlayScala plugin and add specs2 explicitly as a dependency
the problem goes away if I move the scalatest dependency into the main build.sbt

I finally found out what was happening.
It turns out that, under the right settings, sbt will fork a JVM to execute the tests, and then needs to communicate with it. How this is done is up to the test framework. In the case of scalatest, the communication between the two processes goes through a socket server: scalatest just tells sbt which server address and port to use. And this is happening here:
val array = Array(InetAddress.getLocalHost.getHostAddress, skeleton.port.toString)
Now, go read what the javadoc says for InetAddress#getLocalHost:
Returns the address of the local host. This is achieved by retrieving
the name of the host from the system, then resolving that name into an
InetAddress.
I am on Linux. My local host name (which is never localhost) ends up being dopey. Now, for some reason (I was messing with my network at home), my /etc/hosts was assigning a bogus address to dopey. So instead of something like 127.0.0.1, scalatest would try to open a socket on this fictional server. And because of where this happens, you don't see anything helpful in the stacktrace.
My guess is that the intention was always to use 127.0.0.1...
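You can see the difference for yourself; here is a minimal sketch (the dopey hostname and the bogus /etc/hosts entry are of course specific to my machine):
import java.net.InetAddress

object HostCheck extends App {
  // what scalatest hands to sbt: the machine's hostname resolved
  // via /etc/hosts or DNS -- this is what broke for me
  println(InetAddress.getLocalHost.getHostAddress)
  // always the loopback address (127.0.0.1 or ::1), which is
  // probably what was intended
  println(InetAddress.getLoopbackAddress.getHostAddress)
}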

Related

Problems with Kafka Source initialization in Siddhi

I can't create a stream from a Kafka topic using Siddhi, even if I create the stream with Design View.
I copied all required jars to the lib and bundle folders, and even started Kafka with Zookeeper locally (not sure why I need it locally, but never mind).
When I start tooling.sh I get the following error:
[2020-02-26 22:15:43,041] WARNING {org.wso2.carbon.launcher.extensions.OSGiLibBundleDeployerUtils lambda$getBundlesInfo$1} - Error when loading the OSGi bundle information from /home/Hed/StreamProcessor/siddhi-tooling-5.1.2/lib/kafka-clients-2.3.0.jar
java.io.IOException: Required bundle manifest headers do not exist
at org.wso2.carbon.launcher.extensions.OSGiLibBundleDeployerUtils.getBundleInfo(OSGiLibBundleDeployerUtils.java:183)
at org.wso2.carbon.launcher.extensions.OSGiLibBundleDeployerUtils.lambda$getBundlesInfo$1(OSGiLibBundleDeployerUtils.java:135)
at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
at java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
at java.util.stream.StreamSpliterators$WrappingSpliterator.forEachRemaining(StreamSpliterators.java:313)
at java.util.stream.StreamSpliterators$DistinctSpliterator.forEachRemaining(StreamSpliterators.java:1291)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
at java.util.stream.ReduceOps$ReduceTask.doLeaf(ReduceOps.java:747)
at java.util.stream.ReduceOps$ReduceTask.doLeaf(ReduceOps.java:721)
at java.util.stream.AbstractTask.compute(AbstractTask.java:327)
at java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:731)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
For this script:
#App:name("HelloKafka")
#App:description('Consume events from a Kafka Topic and publish to a different Kafka Topic')
#source(type='kafka',
topic.list='kafka_topic',
partition.no.list='0',
threading.option='single.thread',
group.id="group",
bootstrap.servers='localhost:9092',
#map(type='json'))
define stream SweetProductionStream (name string, amount double);
I see this error on the Run command:
io.siddhi.core.exception.SiddhiAppCreationException: Error on 'HelloKafka' # Line: 10. Position: 26, near '#source(type='kafka',
topic.list='kafka_topic',
partition.no.list='0',
threading.option='single.thread',
group.id="group",
bootstrap.servers='localhost:9092',
#map(type='json'))'. org/apache/kafka/clients/producer/Producer
at io.siddhi.core.util.ExceptionUtil.populateQueryContext(ExceptionUtil.java:43)
at io.siddhi.core.util.parser.helper.DefinitionParserHelper.addEventSource(DefinitionParserHelper.java:388)
at io.siddhi.core.util.SiddhiAppRuntimeBuilder.defineStream(SiddhiAppRuntimeBuilder.java:117)
at io.siddhi.core.util.parser.SiddhiAppParser.defineStreamDefinitions(SiddhiAppParser.java:374)
at io.siddhi.core.util.parser.SiddhiAppParser.parse(SiddhiAppParser.java:230)
at io.siddhi.core.SiddhiManager.createSiddhiAppRuntime(SiddhiManager.java:85)
at io.siddhi.core.SiddhiManager.createSiddhiAppRuntime(SiddhiManager.java:95)
at io.siddhi.distribution.editor.core.internal.DebugRuntime.createRuntime(DebugRuntime.java:201)
at io.siddhi.distribution.editor.core.internal.DebugRuntime.(DebugRuntime.java:56)
at io.siddhi.distribution.editor.core.internal.DebugProcessorService.start(DebugProcessorService.java:38)
at io.siddhi.distribution.editor.core.internal.EditorMicroservice.start(EditorMicroservice.java:761)
at io.siddhi.distribution.editor.core.internal.EditorMicroservice.startWithVariables(EditorMicroservice.java:781)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.wso2.msf4j.internal.router.HttpMethodInfo.invokeResource(HttpMethodInfo.java:187)
at org.wso2.msf4j.internal.router.HttpMethodInfo.invoke(HttpMethodInfo.java:143)
at org.wso2.msf4j.internal.MSF4JHttpConnectorListener.dispatchMethod(MSF4JHttpConnectorListener.java:218)
at org.wso2.msf4j.internal.MSF4JHttpConnectorListener.lambda$onMessage$58(MSF4JHttpConnectorListener.java:129)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NoClassDefFoundError: org/apache/kafka/clients/producer/Producer
at java.lang.Class.getDeclaredConstructors0(Native Method)
at java.lang.Class.privateGetDeclaredConstructors(Class.java:2671)
at java.lang.Class.getConstructor0(Class.java:3075)
at java.lang.Class.newInstance(Class.java:412)
at io.siddhi.core.util.SiddhiClassLoader.loadClass(SiddhiClassLoader.java:32)
at io.siddhi.core.util.SiddhiClassLoader.loadExtensionImplementation(SiddhiClassLoader.java:48)
at io.siddhi.core.util.parser.helper.DefinitionParserHelper.addEventSource(DefinitionParserHelper.java:346)
... 21 more
Caused by: java.lang.ClassNotFoundException: org.apache.kafka.clients.producer.Producer cannot be found by siddhi-io-kafka_5.0.7
at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:448)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:361)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:353)
at org.eclipse.osgi.internal.loader.ModuleClassLoader.loadClass(ModuleClassLoader.java:161)
at java.lang.ClassLoader.loadClass(ClassLoader.java:352)
... 28 more
Can somebody tell me what I am doing wrong? :(
Please make sure you have added the OSGi-converted jars to "C:\Program Files\WSO2\Enterprise Integrator\7.0.2\streaming-integrator\lib".
The OSGi-converted jar list:
kafka_2.12_2.3.0_1.0.0
kafka_clients_2.3.0_1.0.0
metrics_core_2.2.0_1.0.0
scala_library_2.12.8_1.0.0
zkclient_0.11_1.0.0
zookeeper_3.4.14_1.0.0
Then, copy the original jars to "C:\Program Files\WSO2\Enterprise Integrator\7.0.2\streaming-integrator\samples\sample-clients\lib".
The list of original jars:
kafka_2.12-2.3.0
kafka-clients-2.3.0
metrics-core-2.2.0
scala-library-2.12.8
zkclient-0.11
zookeeper-3.4.14
In order to generate the OSGi-converted jars, copy all original jars to a folder called "source" and create an empty folder called "destination". Then run the following command in the terminal:
MINGW32 /c/Program Files/WSO2/Enterprise Integrator/7.0.2/streaming-integrator/bin
$ ./jartobundle.sh C:/DevTools/source C:/DevTools/destination
Finally, distribute the OSGi-converted and original jars according to the directories above.
PS1: in my case I am using kafka_2.12-2.4.1, but the basenames of the jars do not change.
PS2: adapt the directories to your installation path.
For more details, check the WSO2 documentation: Kafka transport
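If you are unsure whether a given jar really is the OSGi-converted one, you can check its manifest for the bundle headers that the "Required bundle manifest headers do not exist" error complains about; a minimal sketch (the jar name is just an example):
import java.util.jar.JarFile

object BundleCheck extends App {
  // an OSGi bundle must carry these headers in META-INF/MANIFEST.MF;
  // plain jars (like the unconverted kafka-clients) will print null
  val jar = new JarFile("kafka_clients_2.3.0_1.0.0.jar") // example path
  val attrs = jar.getManifest.getMainAttributes
  println("Bundle-SymbolicName: " + attrs.getValue("Bundle-SymbolicName"))
  println("Bundle-Version: " + attrs.getValue("Bundle-Version"))
  jar.close()
}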

Jboss EAP 7.1 ServiceModuleLoader returning null

I'm using JBoss EAP 7.1, and when I try to dumpAllModuleInformation from ServiceModuleLoader I get a NullPointerException; however, I do see results for LocalModuleLoader. Attaching the stack trace below.
Basically I'm trying to see all loaded resources for my war file, and I'm really not sure what the reason for the NullPointerException is. All the other operations (dumpModuleInformation, getDependencies, getModuleDescription, getModulesPathInfo, refreshResourceLoaders, relink and unLoadModule) throw IllegalArgumentException: Module specification is null. Only queryLoadedModuleNames returns my war filenames. The application is running fine without any issues. JConsole throws the same exception. I need to find a way to see all loaded jars for my war file. The standalone server hosts multiple war files, so I'm planning to write a JMX program to get the loaded dependencies for all of my wars/ears. Can you guys help me with this?
javax.management.RuntimeMBeanException: java.lang.NullPointerException
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrow(DefaultMBeanServerInterceptor.java:839)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.rethrowMaybeMBeanException(DefaultMBeanServerInterceptor.java:852)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:821)
at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
at org.jboss.as.jmx.PluggableMBeanServerImpl$TcclMBeanServer.invoke(PluggableMBeanServerImpl.java:1503)
at org.jboss.as.jmx.PluggableMBeanServerImpl.invoke(PluggableMBeanServerImpl.java:724)
at org.jboss.as.jmx.BlockingNotificationMBeanServer.invoke(BlockingNotificationMBeanServer.java:168)
at org.jboss.remotingjmx.protocol.v2.ServerProxy$InvokeHandler.handle(ServerProxy.java:950)
at org.jboss.remotingjmx.protocol.v2.ServerCommon$MessageReciever$1$1.run(ServerCommon.java:153)
at org.jboss.as.jmx.ServerInterceptorFactory$Interceptor$1.run(ServerInterceptorFactory.java:75)
at org.jboss.as.jmx.ServerInterceptorFactory$Interceptor$1.run(ServerInterceptorFactory.java:70)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.jboss.as.controller.AccessAuditContext.doAs(AccessAuditContext.java:92)
at org.jboss.as.jmx.ServerInterceptorFactory$Interceptor.handleEvent(ServerInterceptorFactory.java:70)
at org.jboss.remotingjmx.protocol.v2.ServerCommon$MessageReciever$1.run(ServerCommon.java:149)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
at org.jboss.modules.ModuleLoader$MXBeanImpl.doGetResourceLoaders(ModuleLoader.java:857)
at org.jboss.modules.ModuleLoader$MXBeanImpl.getModuleDescription(ModuleLoader.java:866)
at org.jboss.modules.ModuleLoader$MXBeanImpl.doDumpModuleInformation(ModuleLoader.java:737)
at org.jboss.modules.ModuleLoader$MXBeanImpl.dumpAllModuleInformation(ModuleLoader.java:725)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71)
at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source
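For reference, a minimal sketch of the kind of JMX client I have in mind, assuming EAP 7's default remote+http management endpoint (jboss-cli-client.jar must be on the classpath, and the ObjectName pattern is an assumption to verify in JConsole):
import javax.management.ObjectName
import javax.management.remote.{JMXConnectorFactory, JMXServiceURL}
import scala.collection.JavaConverters._

object ModuleQuery extends App {
  val url = new JMXServiceURL("service:jmx:remote+http://127.0.0.1:9990")
  val connector = JMXConnectorFactory.connect(url, null)
  try {
    val server = connector.getMBeanServerConnection
    // jboss-modules registers one ModuleLoader MXBean per loader
    val loaders = server.queryNames(
      new ObjectName("jboss.modules:type=ModuleLoader,*"), null).asScala
    for (loader <- loaders) {
      // queryLoadedModuleNames is the one operation that worked for me
      val names = server.invoke(loader, "queryLoadedModuleNames",
        Array.empty[AnyRef], Array.empty[String])
      println(s"$loader -> $names")
    }
  } finally connector.close()
}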

minecraft forge modding mods folder

I tried to test a Minecraft mod I'm developing right now and this error popped up in the console:
[15:31:05] [main/INFO] [FML]: Searching E:\MinecraftForgeMods\forge-1.12.2-14.23.4.2705-mdk\run\.\mods for mods
[15:31:05] [main/ERROR] [FML]: Unable to construct net.minecraftforge.fml.common.Mod container
In theory there shouldn't be a folder between 'run' and 'mods'. I tried creating such a folder, but of course that doesn't work, and I searched for a while but found nothing on this problem.
So does anyone have an idea how to get the right search path?
As per https://unix.stackexchange.com/questions/249039/what-means-the-dots-on-a-path
E:\MinecraftForgeMods\forge-1.12.2-14.23.4.2705-mdk\run\.\mods
will resolve to E:\MinecraftForgeMods\forge-1.12.2-14.23.4.2705-mdk\run\mods
. represents the current directory; when it's mid-way through a path, it doesn't do anything.
The reason it's getting displayed is that the path being output isn't the resolved/absolute path, but the relative/dynamic path that has been built from multiple pieces.
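You can reproduce the resolution with java.nio; a small sketch:
import java.nio.file.Paths

object DotInPath extends App {
  val p = Paths.get("run", ".", "mods")
  println(p)             // run/./mods (run\.\mods on Windows)
  println(p.normalize()) // run/mods -- the "." element is simply dropped
}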
this error popped up
The first line isn't an ERROR! It's an INFO; there is no reason to worry, this is normal.
Unable to construct net.minecraftforge.fml.common.Mod container
This is a problem, but unless there were lines before this it's hard, if not impossible, to tell what's going wrong.
If you have other mods in your mods directory, try removing them.
If this has only started happening after you started making your mod, Then it's likely something in your mod.
Usually there is a stack trace immediately after it. This one, for example, shows an issue with a mod's id:
Caused by: java.lang.IllegalArgumentException: The modid CraftingTableIV is not the same as it's lowercase version. Lowercasing will be enforced in 1.11
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:1.8.0_111]
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[?:1.8.0_111]
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.8.0_111]
at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[?:1.8.0_111]
at net.minecraftforge.fml.common.ModContainerFactory.build(ModContainerFactory.java:86) [ModContainerFactory.class:?]
at net.minecraftforge.fml.common.discovery.JarDiscoverer.discover(JarDiscoverer.java:87) [JarDiscoverer.class:?]
at net.minecraftforge.fml.common.discovery.ContainerType.findMods(ContainerType.java:49) [ContainerType.class:?]
at net.minecraftforge.fml.common.discovery.ModCandidate.explore(ModCandidate.java:78) [ModCandidate.class:?]
at net.minecraftforge.fml.common.discovery.ModDiscoverer.identifyMods(ModDiscoverer.java:141) [ModDiscoverer.class:?]
at net.minecraftforge.fml.common.Loader.identifyMods(Loader.java:382) [Loader.class:?]
at net.minecraftforge.fml.common.Loader.loadMods(Loader.java:522) [Loader.class:?]
at net.minecraftforge.fml.client.FMLClientHandler.beginMinecraftLoading(FMLClientHandler.java:225) [FMLClientHandler.class:?]
at net.minecraft.client.Minecraft.func_71384_a(Minecraft.java:438) [beq.class:?]
at net.minecraft.client.Minecraft.func_99999_d(Minecraft.java:350) [beq.class:?]
at net.minecraft.client.main.Main.main(SourceFile:124) [Main.class:?]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_111]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_111]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_111]
at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_111]
at net.minecraft.launchwrapper.Launch.launch(Launch.java:135) [launchwrapper-1.12.jar:?]
at net.minecraft.launchwrapper.Launch.main(Launch.java:28) [launchwrapper-1.12.jar:?]
*Caused by: java.lang.IllegalArgumentException: The modid CraftingTableIV is not the same as it's lowercase version. Lowercasing will be enforced in 1.11
at net.minecraftforge.fml.common.FMLModContainer.sanityCheckModId(FMLModContainer.java:144) ~[FMLModContainer.class:?]
at net.minecraftforge.fml.common.FMLModContainer.<init>(FMLModContainer.java:126) ~[FMLModContainer.class:?]
... 21 more
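For that particular trace, the fix would be to lowercase the modid in the @Mod annotation. A hypothetical sketch (Scala syntax for consistency; not the actual CraftingTableIV source):
import net.minecraftforge.fml.common.Mod

// the modid must be all-lowercase; the display name can keep its casing
@Mod(modid = "craftingtableiv", name = "CraftingTableIV", version = "1.0")
class CraftingTableIV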

WSConfig adding BouncyCastleProvider fails

I'm seeing the following log statements if I turn the log level to debug:
|DEBUG|service thread 1-15|ws.security.WSSConfig||The provider FirstProvider was added at position: 3
|DEBUG|service thread 1-15|security.util.Loader||org.bouncycastle.jce.provider.BouncyCastleProvider from [Module "org.jboss.as.webservices.server.integration:main" ...
at org.jboss.modules.ModuleClassLoader.findClass(ModuleClassLoader.java:213)
at org.jboss.modules.ConcurrentClassLoader.performLoadClassUnchecked(ConcurrentClassLoader.java:459)
at org.jboss.modules.ConcurrentClassLoader.performLoadClassChecked(ConcurrentClassLoader.java:408)
at org.jboss.modules.ConcurrentClassLoader.performLoadClass(ConcurrentClassLoader.java:389)
at org.jboss.modules.ConcurrentClassLoader.loadClass(ConcurrentClassLoader.java:134)
at org.apache.ws.security.util.Loader.loadClass(Loader.java:252)
at org.apache.ws.security.util.Loader.loadClass(Loader.java:245)
at org.apache.ws.security.WSSConfig.addJceProvider(WSSConfig.java:868)
at org.apache.ws.security.WSSConfig$5.run(WSSConfig.java:446)
at org.apache.ws.security.WSSConfig$5.run(WSSConfig.java:443)
at java.security.AccessController.doPrivileged(Native Method)
at org.apache.ws.security.WSSConfig.init(WSSConfig.java:443)
at org.jboss.wsf.stack.cxf.config.CXFStackConfig.<init>(CXFStackConfigFactory.java:61)
at org.jboss.wsf.stack.cxf.config.CXFStackConfigFactory.getStackConfig(CXFStackConfigFactory.java:45)
at org.jboss.ws.common.management.AbstractServerConfig.create(AbstractServerConfig.java:272)
at org.jboss.as.webservices.config.ServerConfigImpl.create(ServerConfigImpl.java:62)
at org.jboss.as.webservices.service.ServerConfigService.start(ServerConfigService.java:72)
at org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1980)
at org.jboss.msc.service.ServiceControllerImpl$StartTask.run(ServiceControllerImpl.java:1913)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
|DEBUG|service thread 1-15|ws.security.WSSConfig||The provider SecondProvider was added at position: 8
This happens because of the WSS4J library. I am wondering if I need to take any action. If I understand it correctly, two providers have been added but adding BouncyCastle fails. I know that I could add the BC libs to JBoss or the JRE, but is this really necessary? The fact that it is "only" a debug statement makes me also wonder whether it is necessary. Maybe somebody knows what this actually means and can help me.
WSS4J attempts to install the BouncyCastle provider if it is available, and logs that DEBUG level error if it is not (this behaviour will change in the next major release). There is nothing to worry about if you don't require BouncyCastle to be installed.
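If you do decide that you need BouncyCastle (say, for specific crypto algorithms), registering the provider yourself is enough; a minimal sketch, assuming the bcprov jar is on the classpath (or available as a JBoss module):
import java.security.Security
import org.bouncycastle.jce.provider.BouncyCastleProvider

object RegisterBC extends App {
  // PROVIDER_NAME is "BC"; only add the provider if it isn't installed yet
  if (Security.getProvider(BouncyCastleProvider.PROVIDER_NAME) == null)
    Security.addProvider(new BouncyCastleProvider())
}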

Why spark-shell fails with NullPointerException?

I am trying to run spark-shell on Windows 10, but I keep getting this error every time I run it.
I used both the latest version and spark-1.5.0-bin-hadoop2.4.
15/09/22 18:46:24 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
15/09/22 18:46:24 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
15/09/22 18:46:27 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
15/09/22 18:46:27 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException
15/09/22 18:46:27 WARN : Your hostname, DESKTOP-8JS2RD5 resolves to a loopback/non-reachable address: fe80:0:0:0:0:5efe:c0a8:103%net1, but we couldn't find any external IP address!
java.lang.RuntimeException: java.lang.NullPointerException
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:522)
at org.apache.spark.sql.hive.client.ClientWrapper.<init>(ClientWrapper.scala:171)
at org.apache.spark.sql.hive.HiveContext.executionHive$lzycompute(HiveContext.scala:163)
at org.apache.spark.sql.hive.HiveContext.executionHive(HiveContext.scala:161)
at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:168)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source)
at java.lang.reflect.Constructor.newInstance(Unknown Source)
at org.apache.spark.repl.SparkILoop.createSQLContext(SparkILoop.scala:1028)
at $iwC$$iwC.<init>(<console>:9)
at $iwC.<init>(<console>:18)
at <init>(<console>:20)
at .<init>(<console>:24)
at .<clinit>(<console>)
at .<init>(<console>:7)
at .<clinit>(<console>)
at $print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1340)
at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:132)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:124)
at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:324)
at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:124)
at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:974)
at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:159)
at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:108)
at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:991)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
at org.apache.spark.repl.Main$.main(Main.scala:31)
at org.apache.spark.repl.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:672)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.NullPointerException
at java.lang.ProcessBuilder.start(Unknown Source)
at org.apache.hadoop.util.Shell.runCommand(Shell.java:445)
at org.apache.hadoop.util.Shell.run(Shell.java:418)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:739)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:722)
at org.apache.hadoop.fs.FileUtil.execCommand(FileUtil.java:1097)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:559)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:534)
at org.apache.hadoop.hive.ql.session.SessionState.createRootHDFSDir(SessionState.java:599)
at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:554)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:508)
... 56 more
<console>:10: error: not found: value sqlContext
import sqlContext.implicits._
^
<console>:10: error: not found: value sqlContext
import sqlContext.sql
^
I used Spark 1.5.2 with Hadoop 2.6 and had similar problems. Solved by doing the following steps:
Download winutils.exe from the repository to some local folder, e.g. C:\hadoop\bin.
Set HADOOP_HOME to C:\hadoop.
Create c:\tmp\hive directory (using Windows Explorer or any other tool).
Open command prompt with admin rights.
Run C:\hadoop\bin\winutils.exe chmod 777 /tmp/hive
With that, I am still getting some warnings, but no ERRORs and can run Spark applications just fine.
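A quick way to sanity-check that setup before launching spark-shell; a small sketch (paths match the steps above):
import java.nio.file.{Files, Paths}

object WinutilsCheck extends App {
  sys.env.get("HADOOP_HOME") match {
    case None => println("HADOOP_HOME is not set")
    case Some(home) =>
      // the steps above put winutils.exe in %HADOOP_HOME%\bin
      println("winutils.exe present: " +
        Files.exists(Paths.get(home, "bin", "winutils.exe")))
      // and create the Hive scratch dir at c:\tmp\hive
      println("c:\\tmp\\hive exists: " +
        Files.isDirectory(Paths.get("c:/tmp/hive")))
  }
}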
I was facing a similar issue and got it resolved by putting winutils.exe inside the bin folder: HADOOP_HOME should be set to C:\Winutils, and winutils.exe placed in C:\Winutils\bin.
Winutils for 64-bit Windows 10 are available at https://github.com/steveloughran/winutils/tree/master/hadoop-2.6.0/bin
Also ensure that the command line has administrative access.
Refer https://wiki.apache.org/hadoop/WindowsProblems
My guess is that you're running into https://issues.apache.org/jira/browse/SPARK-10528. I was seeing the same issue running on Windows 7. Initially I was getting the NullPointerException as you did. When I put winutils into the bin directory and set HADOOP_HOME to point to the Spark directory, I got the error described in the JIRA issue.
Or perhaps this link below is easier to follow:
https://wiki.apache.org/hadoop/WindowsProblems
Basically, download and copy winutils.exe to your spark\bin folder, then re-run spark-shell.
If you have not set your /tmp/hive to a writable state, please do so.
You need to give permission to the /tmp/hive directory to resolve this exception.
Hopefully you already have winutils.exe and have set the HADOOP_HOME environment variable. Then open the command prompt and run the following command as administrator.
If winutils.exe is present in D:\winutils\bin and \tmp\hive is also on the D drive:
D:\winutils\bin\winutils.exe chmod 777 D:\tmp\hive
For more details, you can refer to the following links:
Frequent Issues occurred during Spark Development
How to run Apache Spark on Windows7 in standalone mode
You can resolve this issue by placing the MySQL connector jar in the spark-1.6.0/libs folder and restarting. It works.
The important thing here is that instead of running plain spark-shell, you should run:
spark-shell --driver-class-path /home/username/spark-1.6.0-libs-mysqlconnector.jar
Hope it works.
For Python - create a SparkSession in your Python script (this config section is only for Windows):
spark = SparkSession.builder.config("spark.sql.warehouse.dir", "C:/temp").appName("SparkSQL").getOrCreate()
Copy winutils.exe to C:\winutils\bin, open a command prompt in admin mode (Run as Administrator), and execute the command below:
C:\Windows\system32>C:\winutils\bin\winutils.exe chmod 777 C:/temp
My issue was having other .exe files and jars inside the winutils/bin folder. I cleared all the others and was left with winutils.exe alone. I was using Spark 2.1.1.
My issue was resolved after installing the correct Java version (in my case Java 8) and setting the environment variables. Make sure you run winutils.exe to create the temporary directory as below:
c:\winutils\bin\winutils.exe chmod 777 \tmp\hive
The above should not return any error. Use java -version to verify the version of Java you are using before invoking spark-shell.
On Windows, you need to clone "winutils":
git clone https://github.com/steveloughran/winutils.git
And set the HADOOP_HOME variable to DIR_CLONED\hadoop-{version}.
Remember to choose the version matching your Hadoop.
Setting SPARK_LOCAL_HOSTNAME to localhost (on Windows 10) resolved the problem for me.