I'm just getting started with PredictionIO and am getting an exception when running pio train. I'm using the Universal Recommender template.
pio status and pio-start-all work fine and report no errors.
The error seems to be "No Elasticsearch client configuration detected", yet curl against localhost on port 9200 responds fine.
Does anybody have a clue what the error message below is referring to?
> [INFO] [URModel] Ready to pass date fields names to closure Some(List(, , ))
> [INFO] [URModel] Converting PropertyMap into Elasticsearch style rdd
> [Stage 41:=============================> (2 + 2) / 4][INFO] [URModel] Grouping all correlators into doc + fields for writing to index
> [INFO] [URModel] Finding non-empty RDDs from a list of 2 correlators and 1 properties
> [INFO] [URModel] New data to index, performing a hot swap of the index.
> Exception in thread "main" java.lang.IllegalStateException: No Elasticsearch client configuration detected, check your pio-env.sh for proper configuration settings
> at dk.bilzonen.esClient$.client$lzycompute(esClient.scala:58)
> at dk.bilzonen.esClient$.client(esClient.scala:55)
> at dk.bilzonen.esClient$.hotSwap(esClient.scala:169)
> at dk.bilzonen.URModel.save(URModel.scala:147)
> at dk.bilzonen.URModel.save(URModel.scala:38)
> at io.prediction.controller.P2LAlgorithm.makePersistentModel(P2LAlgorithm.scala:111)
> at io.prediction.controller.Engine$$anonfun$makeSerializableModels$2.apply(Engine.scala:294)
> at io.prediction.controller.Engine$$anonfun$makeSerializableModels$2.apply(Engine.scala:293)
> at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at scala.collection.Iterator$class.foreach(Iterator.scala:727)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
> at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
> at scala.collection.AbstractTraversable.map(Traversable.scala:105)
> at io.prediction.controller.Engine.makeSerializableModels(Engine.scala:293)
> at io.prediction.controller.Engine.train(Engine.scala:185)
> at io.prediction.workflow.CoreWorkflow$.runTrain(CoreWorkflow.scala:65)
> at io.prediction.workflow.CreateWorkflow$.main(CreateWorkflow.scala:247)
> at io.prediction.workflow.CreateWorkflow.main(CreateWorkflow.scala)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:672)
> at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
> at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
> at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120)
> at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
The default configuration for PredictionIO (in pio-env.sh in your PredictionIO conf directory) has Elasticsearch on port 9300. If you update it to 9200 there, does your training step succeed?
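For reference, the Elasticsearch block in conf/pio-env.sh typically looks like the sketch below (the values shown are examples, not your actual settings). Note that 9300 is Elasticsearch's transport port, which the Java client used during training connects to, while 9200 is the REST port that curl talks to, so a successful curl on 9200 doesn't by itself prove the client configuration is right:

# conf/pio-env.sh -- example Elasticsearch storage source settings (placeholders)
PIO_STORAGE_SOURCES_ELASTICSEARCH_TYPE=elasticsearch
PIO_STORAGE_SOURCES_ELASTICSEARCH_HOSTS=localhost
PIO_STORAGE_SOURCES_ELASTICSEARCH_PORTS=9300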
I've been trying to troubleshoot this issue for a few days now and have not gotten anywhere; I would love some suggestions.
I'm running one Scala object from within a project, and it includes a UDF that uses a broadcast Scala Set (dict). It is run from the IntelliJ IDE, and the project is built with Maven.
This UDF works just fine when I run it from another object in the same project (using dummy data in a DataFrame), but when I run the same code on real data in a DataFrame, I get java.lang.ClassCastException errors.
The key UDF code is:
// does an exact match check against a set (Bool)
def listCheckExact(words: Broadcast[scala.collection.immutable.Set[String]]) = {
  udf { (s: String) => words.value.contains(s) }
}

df.withColumn("new_col",
  when(listCheckExact(sparkSession.sparkContext.broadcast(dict))($"column"), 1).otherwise(0))
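For comparison, here is a minimal self-contained sketch of the same broadcast-set UDF pattern (the object name, column name, and dictionary contents are made up for illustration); run locally, it mirrors the dummy-data case that works:

import org.apache.spark.broadcast.Broadcast
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{udf, when}

// hypothetical standalone demo of the broadcast-set UDF pattern
object BroadcastUdfDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local[*]").appName("broadcast-udf-demo").getOrCreate()
    import spark.implicits._

    // broadcast the lookup set once instead of capturing it in every task closure
    val dict: Broadcast[Set[String]] = spark.sparkContext.broadcast(Set("alpha", "beta"))

    // exact-match check against the broadcast set
    def listCheckExact(words: Broadcast[Set[String]]) =
      udf { (s: String) => words.value.contains(s) }

    val df = Seq("alpha", "gamma").toDF("column")
    df.withColumn("new_col", when(listCheckExact(dict)($"column"), 1).otherwise(0)).show()

    spark.stop()
  }
}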
I thought it might be a Spark or Scala version mismatch, but I have checked my pom file to make sure all the versions are correct and have re-downloaded them from the Maven repository.
The thing that is really doing my head in is that this runs on the dummy data but not on the real data, using the exact same project setup (same pom file and dependencies, all submitted to Spark as a fat JAR with dependencies). In the real-data code there is additional code that grabs and munges the data (and works just fine), but it fails when it reaches this exact same UDF. How can that be?
The full error stack is:
17/09/07 14:06:57 WARN TaskSetManager: Lost task 0.0 in stage 17.0 (TID 5369, 213.248.211.179, executor 1): java.lang.ClassCastException: cannot assign instance of scala.collection.immutable.List$SerializationProxy to field org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$dependencies_ of type scala.collection.Seq in instance of org.apache.spark.rdd.MapPartitionsRDD
at java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2089)
at java.io.ObjectStreamClass.setObjFieldValues(ObjectStreamClass.java:1261)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2006)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1924)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2000)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1924)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:371)
at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:80)
at org.apache.spark.scheduler.Task.run(Task.scala:108)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
17/09/07 14:06:57 INFO TaskSetManager: Starting task 0.1 in stage 17.0 (TID 5370, 213.248.211.179, executor 1, partition 0, PROCESS_LOCAL, 5058 bytes)
17/09/07 14:06:57 INFO TaskSetManager: Lost task 0.1 in stage 17.0 (TID 5370) on 213.248.211.179, executor 1: java.lang.ClassCastException (cannot assign instance of scala.collection.immutable.List$SerializationProxy to field org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$dependencies_ of type scala.collection.Seq in instance of org.apache.spark.rdd.MapPartitionsRDD) [duplicate 1]
17/09/07 14:06:57 INFO TaskSetManager: Starting task 0.2 in stage 17.0 (TID 5371, 213.248.211.179, executor 1, partition 0, PROCESS_LOCAL, 5058 bytes)
17/09/07 14:06:57 INFO TaskSetManager: Lost task 0.2 in stage 17.0 (TID 5371) on 213.248.211.179, executor 1: java.lang.ClassCastException (cannot assign instance of scala.collection.immutable.List$SerializationProxy to field org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$dependencies_ of type scala.collection.Seq in instance of org.apache.spark.rdd.MapPartitionsRDD) [duplicate 2]
17/09/07 14:06:57 INFO TaskSetManager: Starting task 0.3 in stage 17.0 (TID 5372, 213.248.211.179, executor 1, partition 0, PROCESS_LOCAL, 5058 bytes)
17/09/07 14:06:57 INFO TaskSetManager: Lost task 0.3 in stage 17.0 (TID 5372) on 213.248.211.179, executor 1: java.lang.ClassCastException (cannot assign instance of scala.collection.immutable.List$SerializationProxy to field org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$dependencies_ of type scala.collection.Seq in instance of org.apache.spark.rdd.MapPartitionsRDD) [duplicate 3]
17/09/07 14:06:57 ERROR TaskSetManager: Task 0 in stage 17.0 failed 4 times; aborting job
17/09/07 14:06:57 INFO TaskSchedulerImpl: Removed TaskSet 17.0, whose tasks have all completed, from pool
17/09/07 14:06:57 INFO TaskSchedulerImpl: Cancelling stage 17
17/09/07 14:06:57 INFO DAGScheduler: ResultStage 17 (show at getDwData.scala:704) failed in 0.111 s due to Job aborted due to stage failure: Task 0 in stage 17.0 failed 4 times, most recent failure: Lost task 0.3 in stage 17.0 (TID 5372, 213.248.211.179, executor 1): java.lang.ClassCastException: cannot assign instance of scala.collection.immutable.List$SerializationProxy to field org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$dependencies_ of type scala.collection.Seq in instance of org.apache.spark.rdd.MapPartitionsRDD
at java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2089)
at java.io.ObjectStreamClass.setObjFieldValues(ObjectStreamClass.java:1261)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2006)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1924)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2000)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1924)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:371)
at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:80)
at org.apache.spark.scheduler.Task.run(Task.scala:108)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
17/09/07 14:06:57 INFO DAGScheduler: Job 1 failed: show at getDwData.scala:704, took 0.223107 s
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 17.0 failed 4 times, most recent failure: Lost task 0.3 in stage 17.0 (TID 5372, 213.248.211.179, executor 1): java.lang.ClassCastException: cannot assign instance of scala.collection.immutable.List$SerializationProxy to field org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$dependencies_ of type scala.collection.Seq in instance of org.apache.spark.rdd.MapPartitionsRDD
at java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2089)
at java.io.ObjectStreamClass.setObjFieldValues(ObjectStreamClass.java:1261)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2006)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1924)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2000)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1924)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:371)
at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:80)
at org.apache.spark.scheduler.Task.run(Task.scala:108)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1499)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1487)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1486)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1486)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:814)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:814)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:814)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1714)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1669)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1658)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:630)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2022)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2043)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2062)
at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:336)
at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:38)
at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectFromPlan(Dataset.scala:2853)
at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2153)
at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2153)
at org.apache.spark.sql.Dataset$$anonfun$55.apply(Dataset.scala:2837)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:65)
at org.apache.spark.sql.Dataset.withAction(Dataset.scala:2836)
at org.apache.spark.sql.Dataset.head(Dataset.scala:2153)
at org.apache.spark.sql.Dataset.take(Dataset.scala:2366)
at org.apache.spark.sql.Dataset.showString(Dataset.scala:245)
at org.apache.spark.sql.Dataset.show(Dataset.scala:644)
at org.apache.spark.sql.Dataset.show(Dataset.scala:603)
at uk.nominet.renewals_analysis.uk.nominet.renewals_analysis.getDwData$.getDwData(getDwData.scala:704)
at uk.nominet.renewals_analysis.uk.nominet.renewals_analysis.getDwData$.main(getDwData.scala:863)
at uk.nominet.renewals_analysis.uk.nominet.renewals_analysis.getDwData.main(getDwData.scala)
Caused by: java.lang.ClassCastException: cannot assign instance of scala.collection.immutable.List$SerializationProxy to field org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$dependencies_ of type scala.collection.Seq in instance of org.apache.spark.rdd.MapPartitionsRDD
at java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2089)
at java.io.ObjectStreamClass.setObjFieldValues(ObjectStreamClass.java:1261)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2006)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1924)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2000)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1924)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:371)
at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:80)
at org.apache.spark.scheduler.Task.run(Task.scala:108)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
I am trying to run the following code, which leverages GraphFrames, and I am getting an error that, to the best of my knowledge and after some hours of Googling, I cannot resolve. It seems like a class cannot be loaded, but I don't really know what else I should be doing.
Can someone please have another look at the code and error below? I have followed the instructions from here, and in case you want to quickly give it a try, you can find my dataset here.
"""
Program: RUNNING GRAPH ANALYTICS WITH SPARK GRAPH-FRAMES:
Author: Dr. C. Hadjinikolis
Date: 14/09/2016
Description: This is the application's core module from where everything is executed.
The module is responsible for:
1. Loading Spark
2. Loading GraphFrames
3. Running analytics by leveraging other modules in the package.
"""
# IMPORT OTHER LIBS -------------------------------------------------------------------------------#
import os
import sys
import pandas as pd
# IMPORT SPARK ------------------------------------------------------------------------------------#
# Path to Spark source folder
USER_FILE_PATH = "/Users/christoshadjinikolis"
SPARK_PATH = "/PycharmProjects/GenesAssociation"
SPARK_FILE = "/spark-2.0.0-bin-hadoop2.7"
SPARK_HOME = USER_FILE_PATH + SPARK_PATH + SPARK_FILE
os.environ['SPARK_HOME'] = SPARK_HOME
# Append pySpark to Python Path
sys.path.append(SPARK_HOME + "/python")
sys.path.append(SPARK_HOME + "/python" + "/lib/py4j-0.10.1-src.zip")
try:
    from pyspark import SparkContext
    from pyspark import SparkConf
    from pyspark.sql import SQLContext
    from graphframes import *
except ImportError as ex:
    print "Can not import Spark Modules", ex
    sys.exit(1)
# GLOBAL VARIABLES --------------------------------------------------------------------------------#
# Configure spark properties
CONF = (SparkConf()
        .setMaster("local")
        .setAppName("My app")
        .set("spark.executor.memory", "10g")
        .set("spark.executor.instances", "4"))
# Instantiate SparkContext object
SC = SparkContext(conf=CONF)
# Instantiate SQL_SparkContext object
SQL_CONTEXT = SQLContext(SC)
# MAIN CODE ---------------------------------------------------------------------------------------#
if __name__ == "__main__":
    # Main Path to CSV files
    DATA_PATH = '/PycharmProjects/GenesAssociation/data/'
    FILE_NAME = 'gene_gene_associations_50k.csv'
    # LOAD DATA CSV USING PANDAS -----------------------------------------------------------------#
    print "STEP 1: Loading Gene Nodes -------------------------------------------------------------"
    # Read csv file and load as df
    GENES = pd.read_csv(USER_FILE_PATH + DATA_PATH + FILE_NAME,
                        usecols=['OFFICIAL_SYMBOL_A'],
                        low_memory=True,
                        iterator=True,
                        chunksize=1000)
    # Concatenate chunks into list & convert to dataFrame
    GENES_DF = pd.DataFrame(pd.concat(list(GENES), ignore_index=True))
    # Remove duplicates
    GENES_DF_CLEAN = GENES_DF.drop_duplicates(keep='first')
    # Name Columns
    GENES_DF_CLEAN.columns = ['id']
    # Output dataFrame
    print GENES_DF_CLEAN
    # Create vertices
    VERTICES = SQL_CONTEXT.createDataFrame(GENES_DF_CLEAN)
    # Show some vertices
    print VERTICES.take(5)
    print "STEP 2: Loading Gene Edges -------------------------------------------------------------"
    # Read csv file and load as df
    EDGES = pd.read_csv(USER_FILE_PATH + DATA_PATH + FILE_NAME,
                        usecols=['OFFICIAL_SYMBOL_A', 'OFFICIAL_SYMBOL_B', 'EXPERIMENTAL_SYSTEM'],
                        low_memory=True,
                        iterator=True,
                        chunksize=1000)
    # Concatenate chunks into list & convert to dataFrame
    EDGES_DF = pd.DataFrame(pd.concat(list(EDGES), ignore_index=True))
    # Name Columns
    EDGES_DF.columns = ["src", "dst", "rel_type"]
    # Output dataFrame
    print EDGES_DF
    # Create edges
    EDGES = SQL_CONTEXT.createDataFrame(EDGES_DF)
    # Show some edges
    print EDGES.take(5)
    print "STEP 3: Generating the Graph -----------------------------------------------------------"
    GENES_GRAPH = GraphFrame(VERTICES, EDGES)
    print "STEP 4: Running Various Basic Analytics ------------------------------------------------"
    print "Vertex in-Degree -----------------------------------------------------------------------"
    GENES_GRAPH.inDegrees.sort('inDegree', ascending=False).show()
    print "Vertex out-Degree ----------------------------------------------------------------------"
    GENES_GRAPH.outDegrees.sort('outDegree', ascending=False).show()
    print "Vertex degree --------------------------------------------------------------------------"
    GENES_GRAPH.degrees.sort('degree', ascending=False).show()
    print "Triangle Count -------------------------------------------------------------------------"
    RESULTS = GENES_GRAPH.triangleCount()
    RESULTS.select("id", "count").show()
    print "Label Propagation ----------------------------------------------------------------------"
    GENES_GRAPH.labelPropagation(maxIter=10).show()  # Convergence is not guaranteed
    print "PageRank -------------------------------------------------------------------------------"
    GENES_GRAPH.pageRank(resetProbability=0.15, tol=0.01)\
        .vertices.sort('pagerank', ascending=False).show()
    print "STEP 5: Find Shortest Paths w.r.t. Landmarks -------------------------------------------"
    # Shortest paths
    SHORTEST_PATH = GENES_GRAPH.shortestPaths(landmarks=["ARF3", "MAP2K4"])
    SHORTEST_PATH.select("id", "distances").show()
    print "STEP 6: Save Vertices and Edges --------------------------------------------------------"
    # Save vertices and edges as Parquet to some location.
    # Note: You can't overwrite existing vertices and edges directories.
    GENES_GRAPH.vertices.write.parquet("vertices")
    GENES_GRAPH.edges.write.parquet("edges")
print "STEP 7: Load "
# Load the vertices and edges back.
SAME_VERTICES = GENES_GRAPH.read.parquet("vertices")
SAME_EDGES = GENES_GRAPH.read.parquet("edges")
# Create an identical GraphFrame.
SAME_GENES_GRAPH = GF.GraphFrame(SAME_VERTICES, SAME_EDGES)
# END OF FILE -------------------------------------------------------------------------------------#
This is the output:
Ivy Default Cache set to: /Users/username/.ivy2/cache
The jars for the packages stored in: /Users/username/.ivy2/jars
:: loading settings :: url = jar:file:/Users/username/PycharmProjects/GenesAssociation/spark-2.0.0-bin-hadoop2.7/jars/ivy-2.4.0.jar!/org/apache/ivy/core/settings/ivysettings.xml
graphframes#graphframes added as a dependency
:: resolving dependencies :: org.apache.spark#spark-submit-parent;1.0
confs: [default]
found graphframes#graphframes;0.2.0-spark2.0-s_2.11 in list
found com.typesafe.scala-logging#scala-logging-api_2.11;2.1.2 in list
found com.typesafe.scala-logging#scala-logging-slf4j_2.11;2.1.2 in list
found org.scala-lang#scala-reflect;2.11.0 in list
[2.11.0] org.scala-lang#scala-reflect;2.11.0
found org.slf4j#slf4j-api;1.7.7 in list
:: resolution report :: resolve 391ms :: artifacts dl 14ms
:: modules in use:
com.typesafe.scala-logging#scala-logging-api_2.11;2.1.2 from list in [default]
com.typesafe.scala-logging#scala-logging-slf4j_2.11;2.1.2 from list in [default]
graphframes#graphframes;0.2.0-spark2.0-s_2.11 from list in [default]
org.scala-lang#scala-reflect;2.11.0 from list in [default]
org.slf4j#slf4j-api;1.7.7 from list in [default]
---------------------------------------------------------------------
| | modules || artifacts |
| conf | number| search|dwnlded|evicted|| number|dwnlded|
---------------------------------------------------------------------
| default | 5 | 0 | 0 | 0 || 5 | 0 |
---------------------------------------------------------------------
:: retrieving :: org.apache.spark#spark-submit-parent
confs: [default]
0 artifacts copied, 5 already retrieved (0kB/11ms)
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel).
16/09/20 11:00:29 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
OK1
Traceback (most recent call last):
File "/Users/username/PycharmProjects/GenesAssociation/main.py", line 32, in <module>
g = GraphFrame(v, e)
File "/Users/tjhunter/work/graphframes/python/graphframes/graphframe.py", line 62, in __init__
File "/Users/tjhunter/work/graphframes/python/graphframes/graphframe.py", line 34, in _java_api
File "/Users/christoshadjinikolis/PycharmProjects/GenesAssociation/spark-2.0.0-bin-hadoop2.7/python/lib/py4j-0.10.1-src.zip/py4j/java_gateway.py", line 933, in __call__
File "/Users/christoshadjinikolis/PycharmProjects/GenesAssociation/spark-2.0.0-bin-hadoop2.7/python/pyspark/sql/utils.py", line 63, in deco
return f(*a, **kw)
File "/Users/username/PycharmProjects/GenesAssociation/spark-2.0.0-bin-hadoop2.7/python/lib" \
"/py4j-0.10.1-src.zip/py4j/protocol.py", line 312, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o53.newInstance.
: java.lang.NoClassDefFoundError: com/typesafe/scalalogging/slf4j/LazyLogging
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at org.graphframes.GraphFrame$.<init>(GraphFrame.scala:677)
at org.graphframes.GraphFrame$.<clinit>(GraphFrame.scala)
at org.graphframes.GraphFramePythonAPI.<init>(GraphFramePythonAPI.scala:11)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at java.lang.Class.newInstance(Class.java:442)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:237)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:280)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:128)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:211)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ClassNotFoundException: com.typesafe.scalalogging.slf4j.LazyLogging
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 43 more
Process finished with exit code 1
I've had the same issue with Spark/Scala. I resolved it by adding the following jar to the classpath:
spark-shell --jars scala-logging_2.12-3.5.0.jar
You can find the jar here:
https://mvnrepository.com/artifact/com.typesafe.scala-logging/scala-logging_2.12/3.5.0
Source: https://github.com/graphframes/graphframes/issues/113
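Since the question launches the script through PySpark rather than spark-shell, the analogous fix there would presumably be to pass the same jar via --jars at launch time (the graphframes coordinates match the ivy output above; the jar path is a placeholder):

pyspark --packages graphframes:graphframes:0.2.0-spark2.0-s_2.11 \
        --jars /path/to/scala-logging_2.12-3.5.0.jar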
I am using "JPA Entities from table option" in order to get the entity generated form database tables, all the setup is correct even JPA is generating the entity for the database tables, but when my table consist the column type XML..
"MAPSETDETAIL" XML
the entity not getting generated. Any one have any idea. I am using JPA prespective from eclipse LUNA for entity generation.
Error in workspace .metadata/.log file.....
> !MESSAGE Error Generating Entities
> !STACK 0
> org.apache.velocity.exception.MethodInvocationException: Invocation of method 'getImportStatements' in class
> org.eclipse.jpt.jpa.gen.internal.ORMGenTable threw exception
> java.lang.NullPointerException # main.java.vm[7,9]
> at org.apache.velocity.runtime.parser.node.ASTIdentifier.execute(ASTIdentifier.java:205)
> at org.apache.velocity.runtime.parser.node.ASTReference.execute(ASTReference.java:203)
> at org.apache.velocity.runtime.parser.node.ASTReference.render(ASTReference.java:294)
> at org.apache.velocity.runtime.parser.node.SimpleNode.render(SimpleNode.java:318)
> at org.apache.velocity.Template.merge(Template.java:254)
> at org.apache.velocity.app.VelocityEngine.mergeTemplate(VelocityEngine.java:508)
> at org.apache.velocity.app.VelocityEngine.mergeTemplate(VelocityEngine.java:473)
> at org.eclipse.jpt.jpa.gen.internal.PackageGenerator.generateJavaFile(PackageGenerator.java:333)
> at org.eclipse.jpt.jpa.gen.internal.PackageGenerator.generateClass(PackageGenerator.java:310)
> at org.eclipse.jpt.jpa.gen.internal.PackageGenerator.generateInternal(PackageGenerator.java:132)
> at org.eclipse.jpt.jpa.gen.internal.PackageGenerator.doGenerate(PackageGenerator.java:106)
> at org.eclipse.jpt.jpa.gen.internal.PackageGenerator.generate(PackageGenerator.java:82)
> at org.eclipse.jpt.jpa.ui.internal.wizards.gen.GenerateEntitiesFromSchemaWizard$GenerateEntitiesJob.runInWorkspace(GenerateEntitiesFromSchemaWizard.java:285)
> at org.eclipse.core.internal.resources.InternalWorkspaceJob.run(InternalWorkspaceJob.java:38)
> at org.eclipse.core.internal.jobs.Worker.run(Worker.java:54)
> Caused by: java.lang.NullPointerException
Complete stack trace:
> !ENTRY org.eclipse.egit.ui 2 0 2016-08-31 00:21:55.373 !MESSAGE
> Warning: EGit couldn't detect the installation path "gitPrefix" of
> native Git. Hence EGit can't respect system level Git settings which
> might be configured in ${gitPrefix}/etc/gitconfig under the native Git
> installation directory. The most important of these settings is
> core.autocrlf. Git for Windows by default sets this parameter to true
> in this system level configuration. The Git installation location can
> be configured on the Team > Git > Configuration preference page's
> 'System Settings' tab. This warning can be switched off on the Team >
> Git > Confirmations and Warnings preference page.
>
> !ENTRY org.eclipse.egit.ui 2 0 2016-08-31 00:21:55.375 !MESSAGE
> Warning: The environment variable HOME is not set. The following
> directory will be used to store the Git user global configuration and
> to define the default location to store repositories:
> 'C:\Users\Katara'. If this is not correct please set the HOME
> environment variable and restart Eclipse. Otherwise Git for Windows
> and EGit might behave differently since they see different
> configuration options. This warning can be switched off on the Team >
> Git > Confirmations and Warnings preference page.
>
> !ENTRY org.eclipse.jdt.ui 4 10001 2016-08-31 00:47:41.172
> !MESSAGE Internal Error
> !STACK 0
> org.eclipse.jface.text.BadLocationException
> at org.eclipse.jface.text.AbstractDocument.addPosition(AbstractDocument.java:355)
> at org.eclipse.core.internal.filebuffers.SynchronizableDocument.addPosition(SynchronizableDocument.java:236)
> at org.eclipse.jdt.internal.ui.javaeditor.SemanticHighlightingPresenter.updatePresentation(SemanticHighlightingPresenter.java:414)
> at org.eclipse.jdt.internal.ui.javaeditor.SemanticHighlightingPresenter$1.run(SemanticHighlightingPresenter.java:347)
> at org.eclipse.swt.widgets.RunnableLock.run(RunnableLock.java:35)
> at org.eclipse.swt.widgets.Synchronizer.runAsyncMessages(Synchronizer.java:136)
> at org.eclipse.swt.widgets.Display.runAsyncMessages(Display.java:4147)
> at org.eclipse.swt.widgets.Display.readAndDispatch(Display.java:3764)
> at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine$9.run(PartRenderingEngine.java:1151)
> at org.eclipse.core.databinding.observable.Realm.runWithDefault(Realm.java:332)
> at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.run(PartRenderingEngine.java:1032)
> at org.eclipse.e4.ui.internal.workbench.E4Workbench.createAndRunUI(E4Workbench.java:148)
> at org.eclipse.ui.internal.Workbench$5.run(Workbench.java:636)
> at org.eclipse.core.databinding.observable.Realm.runWithDefault(Realm.java:332)
> at org.eclipse.ui.internal.Workbench.createAndRunWorkbench(Workbench.java:579)
> at org.eclipse.ui.PlatformUI.createAndRunWorkbench(PlatformUI.java:150)
> at org.eclipse.ui.internal.ide.application.IDEApplication.start(IDEApplication.java:135)
> at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:196)
> at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:134)
> at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:104)
> at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:380)
> at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:235)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:60)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
> at java.lang.reflect.Method.invoke(Method.java:611)
> at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:648)
> at org.eclipse.equinox.launcher.Main.basicRun(Main.java:603)
> at org.eclipse.equinox.launcher.Main.run(Main.java:1465)
>
> !ENTRY org.eclipse.jpt.jpa.gen 4 0 2016-08-31 02:12:51.390
> !MESSAGE Error Generating Entities
> !STACK 0
> org.apache.velocity.exception.MethodInvocationException: Invocation of method 'getImportStatements' in class org.eclipse.jpt.jpa.gen.internal.ORMGenTable threw exception java.lang.NullPointerException # main.java.vm[7,9]
> at org.apache.velocity.runtime.parser.node.ASTIdentifier.execute(ASTIdentifier.java:205)
> at org.apache.velocity.runtime.parser.node.ASTReference.execute(ASTReference.java:203)
> at org.apache.velocity.runtime.parser.node.ASTReference.render(ASTReference.java:294)
> at org.apache.velocity.runtime.parser.node.SimpleNode.render(SimpleNode.java:318)
> at org.apache.velocity.Template.merge(Template.java:254)
> at org.apache.velocity.app.VelocityEngine.mergeTemplate(VelocityEngine.java:508)
> at org.apache.velocity.app.VelocityEngine.mergeTemplate(VelocityEngine.java:473)
> at org.eclipse.jpt.jpa.gen.internal.PackageGenerator.generateJavaFile(PackageGenerator.java:333)
> at org.eclipse.jpt.jpa.gen.internal.PackageGenerator.generateClass(PackageGenerator.java:310)
> at org.eclipse.jpt.jpa.gen.internal.PackageGenerator.generateInternal(PackageGenerator.java:132)
> at org.eclipse.jpt.jpa.gen.internal.PackageGenerator.doGenerate(PackageGenerator.java:106)
> at org.eclipse.jpt.jpa.gen.internal.PackageGenerator.generate(PackageGenerator.java:82)
> at org.eclipse.jpt.jpa.ui.internal.wizards.gen.GenerateEntitiesFromSchemaWizard$GenerateEntitiesJob.runInWorkspace(GenerateEntitiesFromSchemaWizard.java:285)
> at org.eclipse.core.internal.resources.InternalWorkspaceJob.run(InternalWorkspaceJob.java:38)
> at org.eclipse.core.internal.jobs.Worker.run(Worker.java:54)
> Caused by: java.lang.NullPointerException
> at org.eclipse.jpt.common.utility.internal.StringTools.indexOfWhitespace(StringTools.java:697)
> at org.eclipse.jpt.common.utility.internal.StringTools.removeAllWhitespace(StringTools.java:687)
> at org.eclipse.jpt.common.utility.internal.TypeDeclarationTools.className(TypeDeclarationTools.java:215)
> at org.eclipse.jpt.jpa.db.internal.DTPColumnWrapper.getJavaType(DTPColumnWrapper.java:140)
> at org.eclipse.jpt.jpa.db.internal.DTPColumnWrapper.getJavaType(DTPColumnWrapper.java:125)
> at org.eclipse.jpt.jpa.db.internal.DTPColumnWrapper.getJavaTypeDeclaration(DTPColumnWrapper.java:119)
> at org.eclipse.jpt.jpa.gen.internal.util.DTPUtil.getJavaType(DTPUtil.java:72)
> at org.eclipse.jpt.jpa.gen.internal.BaseEntityGenCustomizer.getPropertyTypeFromColumn(BaseEntityGenCustomizer.java:90)
> at org.eclipse.jpt.jpa.gen.internal.ORMGenColumn.getPropertyType(ORMGenColumn.java:184)
> at org.eclipse.jpt.jpa.gen.internal.ORMGenTable.buildColumnTypesMap(ORMGenTable.java:204)
> at org.eclipse.jpt.jpa.gen.internal.ORMGenTable.getImportStatements(ORMGenTable.java:138)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:60)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
> at java.lang.reflect.Method.invoke(Method.java:589)
> at org.apache.velocity.runtime.parser.node.PropertyExecutor.execute(PropertyExecutor.java:137)
> at org.apache.velocity.util.introspection.UberspectImpl$VelGetterImpl.invoke(UberspectImpl.java:350)
> at org.apache.velocity.runtime.parser.node.ASTIdentifier.execute(ASTIdentifier.java:180)
> ... 14 more
I am working with the Talend tMatchGroupHadoop component and an Amazon EMR cluster,
and it is giving the error: "could only be replicated to 0 nodes, instead of 1".
A data node is actually running in the EMR cluster.
hadoop fsck
..............Status: HEALTHY
Total size: 315153 B
Total dirs: 12
Total files: 14 (Files currently being written: 1)
Total blocks (validated): 13 (avg. block size 24242 B)
Minimally replicated blocks: 13 (100.0 %)
Over-replicated blocks: 0 (0.0 %)
Under-replicated blocks: 0 (0.0 %)
Mis-replicated blocks: 0 (0.0 %)
Default replication factor: 1
Average block replication: 1.0
Corrupt blocks: 0
Missing replicas: 0 (0.0 %)
Number of data-nodes: 1
Number of racks: 1
FSCK ended at Mon Sep 08 06:07:58 UTC 2014 in 158 milliseconds
I am getting the following error:
[statistics] connecting to socket on port 3645
[statistics] connected
[INFO ]: org.apache.hadoop.hdfs.DFSClient - Exception in createBlockOutputStream 10.230.30.124:9200 java.net.ConnectException: Connection timed out: no further information
[INFO ]: org.apache.hadoop.hdfs.DFSClient - Abandoning block blk_-3580819895919001579_2135
[INFO ]: org.apache.hadoop.hdfs.DFSClient - Excluding datanode 10.230.30.124:9200
Exception in component tMatchGroupHadoop_2_GroupOut
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /in could only be replicated to 0 nodes, instead of 1
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1569)
at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:701)
at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:583)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1140)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
at org.apache.hadoop.ipc.Client.call(Client.java:1092)
[WARN ]: org.apache.hadoop.hdfs.DFSClient - DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /in could only be replicated to 0 nodes, instead of 1
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1569)
at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:701)
at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:583)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1140)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3595)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3456)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2672)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2912)
at org.apache.hadoop.ipc.Client.call(Client.java:1092)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3595)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3456)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2672)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2912)
[WARN ]: org.apache.hadoop.hdfs.DFSClient - Error Recovery for block blk_-3580819895919001579_2135 bad datanode[0] nodes == null
[WARN ]: org.apache.hadoop.hdfs.DFSClient - Could not get block locations. Source file "/in" - Aborting...
[statistics] disconnected
[ERROR]: org.apache.hadoop.hdfs.DFSClient - Exception closing file /in : org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /in could only be replicated to 0 nodes, instead of 1
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1569)
at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:701)
at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:583)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1140)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /in could only be replicated to 0 nodes, instead of 1
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1569)
at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:701)
at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:583)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1140)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
at org.apache.hadoop.ipc.Client.call(Client.java:1092)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
at com.sun.proxy.$Proxy1.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3595)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3456)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2672)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2912)
Job HadoopMatch ended at 17:54 08/09/2014. [exit code=1]
What is wrong here?
To make this work, you have to check the component's "use datanode hostname" option.
You also have to modify the Windows hosts file
C:\Windows\System32\Drivers\etc\hosts
to map each EC2 public IP to its private DNS name:
EC2PublicIP PrivateDNS
for example:
10.210.202.106 ip-101-210-141-188.ec2.internal
After that, it works.
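If you are configuring the Hadoop client yourself rather than through the Talend checkbox, the "use datanode hostname" option presumably corresponds to the standard HDFS client property dfs.client.use.datanode.hostname; a minimal sketch of the client-side hdfs-site.xml entry (adjust to your setup):

<!-- client-side hdfs-site.xml: connect to datanodes by hostname, not the private IP the namenode reports -->
<property>
  <name>dfs.client.use.datanode.hostname</name>
  <value>true</value>
</property>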
I'm running ScalaCheck tests in sbt, and if my test fails because the code under test throws an exception, the test report shows the failed test, the thrown exception, and its message, but not the entire stack trace (note the bare Exception: java.lang.NullPointerException: exception line below, where "exception" is just the exception's message).
[info] Loading global plugins from /Users/jacek/.sbt/0.13/plugins
[info] Set current project to scalacheck (in build file:/Users/jacek/sandbox/scalacheck/)
[info] Updating {file:/Users/jacek/sandbox/scalacheck/}scalacheck...
[info] Resolving jline#jline;2.11 ...
[info] Done updating.
[info] Compiling 1 Scala source to /Users/jacek/sandbox/scalacheck/target/scala-2.11/test-classes...
[info] ! String.throw exception: Exception raised on property evaluation.
[info] > ARG_0: ""
[info] > Exception: java.lang.NullPointerException: exception
[error] Error: Total 1, Failed 0, Errors 1, Passed 0
[error] Error during tests:
[error] StringSpecification
[error] (test:test) sbt.TestsFailedException: Tests unsuccessful
[error] Total time: 5 s, completed Jun 25, 2014 3:25:47 AM
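For context, the StringSpecification behind this output looks roughly like the following hypothetical reconstruction (only the property names are taken from the logs); the second property deliberately throws, so the reporter has a stack trace it could print:

import org.scalacheck.Prop.forAll
import org.scalacheck.Properties

// hypothetical reconstruction of the StringSpecification referenced in the sbt output
object StringSpecification extends Properties("String") {

  // a passing property: concatenation preserves the prefix
  property("startsWith") = forAll { (a: String, b: String) =>
    (a + b).startsWith(a)
  }

  // raises an exception during property evaluation, as in the report above
  property("throw exception") = forAll { (s: String) =>
    throw new NullPointerException("exception")
    true // unreachable; fixes the expected result type of the closure
  }
}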
I found https://groups.google.com/forum/#!msg/scalacheck/AGBgE_JlqpI/B2eSG84_QzYJ from 2008, which seems to report the same issue and indicates it should be fixed in the next release. I'm currently using the latest release, 1.11.4.
I also found http://www.scala-sbt.org/release/docs/Testing.html, which indicates sbt has a testOptions key that seems reasonable to use. I know ScalaTest has a setting for full stack traces, "-F", but that doesn't work for ScalaCheck. Even the example from the above page, testOptions in Test += Tests.Argument(TestFrameworks.ScalaCheck, "-d", "-g"), gives me an error:
[error] Could not run test org.example.myproject.MyTestClass: java.lang.Exception: [1.1] failure: option name expected
[error]
[error] -d -g
[error] ^
How do I use these test arguments? Is there a list of them anywhere? And finally, is it possible to get a stack trace out of all this, or am I chasing a red herring?
tl;dr There's no support for verbosity under sbt with the released version of ScalaCheck. You'd have to build a version from the sources yourself to have the feature.
The available options for ScalaCheck are described in Test Execution:
Available options:
-workers, -w: Number of threads to execute in parallel for testing
-minSize, -n: Minimum data generation size
-verbosity, -v: Verbosity level
-minSuccessfulTests, -s: Number of tests that must succeed in order to pass a property
-maxDiscardRatio, -r: The maximum ratio between discarded and succeeded tests allowed before ScalaCheck stops testing a property. At least minSuccessfulTests will always be tested, though.
-maxSize, -x: Maximum data generation size
The source code of org.scalacheck.util.Pretty tells us more about the different levels of verbosity:
implicit def prettyThrowable(e: Throwable) = Pretty { prms =>
  val strs = e.getStackTrace.map { st =>
    import st._
    getClassName+"."+getMethodName + "("+getFileName+":"+getLineNumber+")"
  }

  val strs2 =
    if(prms.verbosity <= 0) Array[String]()
    else if(prms.verbosity <= 1) strs.take(5)
    else strs

  e.getClass.getName + ": " + e.getMessage / strs2.mkString("\n")
}
So, 0 gives nothing, 1 takes 5 lines out of a stack trace, whereas a number greater than 1 gives you the entire stack trace as follows:
➜ scalacheck scala -cp .:/Users/jacek/.ivy2/cache/org.scalacheck/scalacheck_2.11/jars/scalacheck_2.11-1.11.4.jar StringSpecification -verbosity 3
+ String.startsWith: OK, passed 100 tests.
Elapsed time: 0.242 sec
! String.concatenate: Falsified after 0 passed tests.
> ARG_0: ""
> ARG_1: ""
Elapsed time: 0.003 sec
+ String.substring: OK, passed 100 tests.
Elapsed time: 0.126 sec
! String.throw exception: Exception raised on property evaluation.
> ARG_0: ""
> Exception: java.lang.NullPointerException: exception
StringSpecification$$anonfun$14.apply(StringSpecification.scala:19)
StringSpecification$$anonfun$14.apply(StringSpecification.scala:18)
scala.Function1$$anonfun$andThen$1.apply(Function1.scala:55)
org.scalacheck.Prop$$anonfun$forAllShrink$1$$anonfun$3.apply(Prop.scala:622)
org.scalacheck.Prop$$anonfun$forAllShrink$1$$anonfun$3.apply(Prop.scala:622)
org.scalacheck.Prop$.secure(Prop.scala:473)
org.scalacheck.Prop$$anonfun$forAllShrink$1.org$scalacheck$Prop$$anonfun$$result$1(Prop.scala:622)
org.scalacheck.Prop$$anonfun$forAllShrink$1.apply(Prop.scala:659)
org.scalacheck.Prop$$anonfun$forAllShrink$1.apply(Prop.scala:616)
org.scalacheck.Prop$$anon$1.apply(Prop.scala:309)
org.scalacheck.Test$.org$scalacheck$Test$$workerFun$1(Test.scala:335)
org.scalacheck.Test$$anonfun$org$scalacheck$Test$$worker$1$1.apply(Test.scala:316)
org.scalacheck.Test$$anonfun$org$scalacheck$Test$$worker$1$1.apply(Test.scala:316)
org.scalacheck.Test$.check(Test.scala:385)
org.scalacheck.Test$$anonfun$checkProperties$1.apply(Test.scala:402)
org.scalacheck.Test$$anonfun$checkProperties$1.apply(Test.scala:395)
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
scala.collection.immutable.List.foreach(List.scala:383)
scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
scala.collection.mutable.ListBuffer.foreach(ListBuffer.scala:45)
scala.collection.TraversableLike$class.map(TraversableLike.scala:245)
scala.collection.AbstractTraversable.map(Traversable.scala:104)
org.scalacheck.Test$.checkProperties(Test.scala:395)
org.scalacheck.Properties.mainRunner(Properties.scala:62)
org.scalacheck.Prop$class.main(Prop.scala:106)
org.scalacheck.Properties.main(Properties.scala:27)
StringSpecification.main(StringSpecification.scala:-1)
sun.reflect.NativeMethodAccessorImpl.invoke0(NativeMethodAccessorImpl.java:-2)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:606)
scala.reflect.internal.util.ScalaClassLoader$$anonfun$run$1.apply(ScalaClassLoader.scala:68)
scala.reflect.internal.util.ScalaClassLoader$class.asContext(ScalaClassLoader.scala:31)
scala.reflect.internal.util.ScalaClassLoader$URLClassLoader.asContext(ScalaClassLoader.scala:99)
scala.reflect.internal.util.ScalaClassLoader$class.run(ScalaClassLoader.scala:68)
scala.reflect.internal.util.ScalaClassLoader$URLClassLoader.run(ScalaClassLoader.scala:99)
scala.tools.nsc.CommonRunner$class.run(ObjectRunner.scala:22)
scala.tools.nsc.ObjectRunner$.run(ObjectRunner.scala:39)
scala.tools.nsc.CommonRunner$class.runAndCatch(ObjectRunner.scala:29)
scala.tools.nsc.ObjectRunner$.runAndCatch(ObjectRunner.scala:39)
scala.tools.nsc.MainGenericRunner.runTarget$1(MainGenericRunner.scala:72)
scala.tools.nsc.MainGenericRunner.process(MainGenericRunner.scala:94)
scala.tools.nsc.MainGenericRunner$.main(MainGenericRunner.scala:103)
scala.tools.nsc.MainGenericRunner.main(MainGenericRunner.scala:-1)
Elapsed time: 0.000 sec
You're right that the documentation for sbt is incorrect. There are no "-d" or "-g" options; they appear to be a copy-and-paste error in the documentation. It was already fixed in a pull request that's soon to be accepted.
The verbosity option is not supported under sbt in the current version of ScalaCheck, 1.11.4. The following is the entire build definition of a sample project.
build.sbt
scalaVersion := "2.11.1"

libraryDependencies += "org.scalacheck" %% "scalacheck" % "1.11.4" % "test"

testOptions in Test += Tests.Argument(TestFrameworks.ScalaCheck, "-verbosity", "3")
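For reference, the StringSpecification exercised in the transcripts is presumably close to the String example from the ScalaCheck user guide, plus a deliberately throwing property. A reconstruction under src/test/scala (an assumption, not the original source):
import org.scalacheck.Prop.forAll
import org.scalacheck.Properties

object StringSpecification extends Properties("String") {
  property("startsWith") = forAll((a: String, b: String) => (a+b).startsWith(a))

  // Deliberately false: "" + "" has length 0, so this is falsified immediately
  property("concatenate") = forAll((a: String, b: String) =>
    (a+b).length > a.length && (a+b).length > b.length)

  property("substring") = forAll((a: String, b: String, c: String) =>
    (a+b+c).substring(a.length, a.length+b.length) == b)

  // Deliberately throws, to show how exceptions are pretty-printed
  property("throw exception") = forAll { (a: String) =>
    (throw new NullPointerException("exception")): Boolean
  }
}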
Even when the verbosity parameter is properly specified in build.sbt, test execution won't print the stack trace.
➜ scalacheck xsbt test
[info] Loading global plugins from /Users/jacek/.sbt/0.13/plugins
[info] Set current project to scalacheck (in build file:/Users/jacek/sandbox/scalacheck/)
[info] + String.startsWith: OK, passed 100 tests.
[info] ! String.concatenate: Falsified after 0 passed tests.
[info] > ARG_0: ""
[info] > ARG_1: ""
[info] + String.substring: OK, passed 100 tests.
[info] ! String.throw exception: Exception raised on property evaluation.
[info] > ARG_0: ""
[info] > Exception: java.lang.NullPointerException: exception
[error] Error: Total 4, Failed 1, Errors 1, Passed 2
[error] Error during tests:
[error] StringSpecification
[error] (test:test) sbt.TestsFailedException: Tests unsuccessful
[error] Total time: 1 s, completed Jun 25, 2014 2:57:08 AM
The reason is that org.scalacheck.ScalaCheckFramework has the following implementation, which hard-codes the verbosity to 0:
override def onTestResult(n: String, r: Test.Result) = {
  for (l <- loggers) {
    import Pretty._
    l.info(
      (if (r.passed) "+ " else "! ") + n + ": " + pretty(r, Params(0))
    )
  }
  handler.handle(asEvent((n,r)))
}
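Conceptually, the fix threads the verbosity parsed from the framework's test arguments into Pretty.Params instead of the hard-coded 0. A sketch of the idea only, not the merged change itself; verbosity stands for whatever was parsed from the -verbosity argument:
override def onTestResult(n: String, r: Test.Result) = {
  for (l <- loggers) {
    import Pretty._
    l.info(
      // verbosity is assumed to have been parsed from the sbt test arguments
      (if (r.passed) "+ " else "! ") + n + ": " + pretty(r, Params(verbosity))
    )
  }
  handler.handle(asEvent((n,r)))
}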
A pull request to fix it has already been accepted into the repo: Support verbosity under sbt for TestFrameworks.ScalaCheck. Until a release includes it, you'd have to build ScalaCheck yourself with sbt publishLocal in the directory where you cloned the repo. Don't forget to use the 1.12.0-SNAPSHOT version in build.sbt to pick up the changes.
Since the issue that Jacek Laskowski noted has been resolved, you can now get stack traces from ScalaCheck in sbt by adding a line to your build.sbt file to set ScalaCheck's verbosity level.
testOptions in Test += Tests.Argument(TestFrameworks.ScalaCheck, "-verbosity", "2")
As Jacek Laskowski noted, a verbosity of 0 will give you just the exception.
[info] ! RandomAccessFile.readWriteInt: Exception raised on property evaluation.
[info] > ARG_0: 0
[info] > ARG_0_ORIGINAL: -2073744736
[info] > Exception: java.lang.UnsupportedOperationException: null
A verbosity of 1 will give you the first five lines of the stack trace.
[info] ! RandomAccessFile.readWriteInt: Exception raised on property evaluation.
[info] > ARG_0: 0
[info] > ARG_0_ORIGINAL: 2147483647
[info] > Exception: java.lang.UnsupportedOperationException: null
[info] java.nio.IntBuffer.array(IntBuffer.java:994)
[info] csc365a01.RandomAccessFile.readInt(RandomAccessFile.scala:22)
[info] RandomAccessFileProps$$anonfun$1$$anonfun$apply$1.apply$mcZI$sp(RandomAccessFileSpec.scala:14)
[info] RandomAccessFileProps$$anonfun$1$$anonfun$apply$1.apply(RandomAccessFileSpec.scala:11)
[info] RandomAccessFileProps$$anonfun$1$$anonfun$apply$1.apply(RandomAccessFileSpec.scala:11)
A verbosity greater than one will give you the full stack trace.
[info] ! RandomAccessFile.readWriteInt: Exception raised on property evaluation.
[info] > ARG_0: 0
[info] > ARG_0_ORIGINAL: 1693735989
[info] > Exception: java.lang.UnsupportedOperationException: null
[info] java.nio.IntBuffer.array(IntBuffer.java:994)
[info] csc365a01.RandomAccessFile.readInt(RandomAccessFile.scala:22)
[info] RandomAccessFileProps$$anonfun$1$$anonfun$apply$1.apply$mcZI$sp(RandomAccessFileSpec.scala:14)
[info] RandomAccessFileProps$$anonfun$1$$anonfun$apply$1.apply(RandomAccessFileSpec.scala:11)
[info] RandomAccessFileProps$$anonfun$1$$anonfun$apply$1.apply(RandomAccessFileSpec.scala:11)
[info] scala.Function1$$anonfun$andThen$1.apply(Function1.scala:52)
[info] org.scalacheck.Prop$$anonfun$forAllShrink$1$$anonfun$3.apply(Prop.scala:712)
[info] org.scalacheck.Prop$$anonfun$forAllShrink$1$$anonfun$3.apply(Prop.scala:712)
[info] org.scalacheck.Prop$.secure(Prop.scala:456)
[info] org.scalacheck.Prop$$anonfun$forAllShrink$1.org$scalacheck$Prop$$anonfun$$result$1(Prop.scala:712)
[info] org.scalacheck.Prop$$anonfun$forAllShrink$1$$anonfun$4.apply(Prop.scala:719)
[info] org.scalacheck.Prop$$anonfun$forAllShrink$1$$anonfun$4.apply(Prop.scala:719)
[info] scala.collection.immutable.Stream.map(Stream.scala:418)
[info] org.scalacheck.Prop$$anonfun$forAllShrink$1.getFirstFailure$1(Prop.scala:719)
[info] org.scalacheck.Prop$$anonfun$forAllShrink$1.shrinker$1(Prop.scala:729)
[info] org.scalacheck.Prop$$anonfun$forAllShrink$1.apply(Prop.scala:751)
[info] org.scalacheck.Prop$$anonfun$forAllShrink$1.apply(Prop.scala:706)
[info] org.scalacheck.Prop$$anonfun$apply$5.apply(Prop.scala:292)
[info] org.scalacheck.Prop$$anonfun$apply$5.apply(Prop.scala:291)
[info] org.scalacheck.PropFromFun.apply(Prop.scala:22)
[info] org.scalacheck.Prop$$anonfun$delay$1.apply(Prop.scala:461)
[info] org.scalacheck.Prop$$anonfun$delay$1.apply(Prop.scala:461)
[info] org.scalacheck.Prop$$anonfun$apply$5.apply(Prop.scala:292)
[info] org.scalacheck.Prop$$anonfun$apply$5.apply(Prop.scala:291)
[info] org.scalacheck.PropFromFun.apply(Prop.scala:22)
[info] org.scalacheck.Test$.org$scalacheck$Test$$workerFun$1(Test.scala:294)
[info] org.scalacheck.Test$$anonfun$3.apply(Test.scala:323)
[info] org.scalacheck.Test$$anonfun$3.apply(Test.scala:323)
[info] org.scalacheck.Platform$.runWorkers(Platform.scala:40)
[info] org.scalacheck.Test$.check(Test.scala:323)
[info] org.scalacheck.ScalaCheckRunner$$anon$2$$anonfun$execute$3$$anonfun$apply$2.apply(ScalaCheckFramework.scala:102)
[info] org.scalacheck.ScalaCheckRunner$$anon$2$$anonfun$execute$3$$anonfun$apply$2.apply(ScalaCheckFramework.scala:100)
[info] scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
[info] scala.collection.immutable.List.foreach(List.scala:381)
[info] scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
[info] scala.collection.mutable.ListBuffer.foreach(ListBuffer.scala:45)
[info] scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
[info] org.scalacheck.ScalaCheckRunner$$anon$2$$anonfun$execute$3.apply(ScalaCheckFramework.scala:100)
[info] org.scalacheck.ScalaCheckRunner$$anon$2$$anonfun$execute$3.apply(ScalaCheckFramework.scala:97)
[info] scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
[info] scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
[info] scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
[info] scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
[info] scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
[info] scala.collection.mutable.ArrayOps$ofRef.flatMap(ArrayOps.scala:186)
[info] org.scalacheck.ScalaCheckRunner$$anon$2.execute(ScalaCheckFramework.scala:97)
[info] sbt.TestRunner.runTest$1(TestFramework.scala:76)
[info] sbt.TestRunner.run(TestFramework.scala:85)
[info] sbt.TestFramework$$anon$2$$anonfun$$init$$1$$anonfun$apply$8.apply(TestFramework.scala:202)
[info] sbt.TestFramework$$anon$2$$anonfun$$init$$1$$anonfun$apply$8.apply(TestFramework.scala:202)
[info] sbt.TestFramework$.sbt$TestFramework$$withContextLoader(TestFramework.scala:185)
[info] sbt.TestFramework$$anon$2$$anonfun$$init$$1.apply(TestFramework.scala:202)
[info] sbt.TestFramework$$anon$2$$anonfun$$init$$1.apply(TestFramework.scala:202)
[info] sbt.TestFunction.apply(TestFramework.scala:207)
[info] sbt.Tests$$anonfun$9.apply(Tests.scala:216)
[info] sbt.Tests$$anonfun$9.apply(Tests.scala:216)
[info] sbt.std.Transform$$anon$3$$anonfun$apply$2.apply(System.scala:44)
[info] sbt.std.Transform$$anon$3$$anonfun$apply$2.apply(System.scala:44)
[info] sbt.std.Transform$$anon$4.work(System.scala:63)
[info] sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:226)
[info] sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:226)
[info] sbt.ErrorHandling$.wideConvert(ErrorHandling.scala:17)
[info] sbt.Execute.work(Execute.scala:235)
[info] sbt.Execute$$anonfun$submit$1.apply(Execute.scala:226)
[info] sbt.Execute$$anonfun$submit$1.apply(Execute.scala:226)
[info] sbt.ConcurrentRestrictions$$anon$4$$anonfun$1.apply(ConcurrentRestrictions.scala:159)
[info] sbt.CompletionService$$anon$2.call(CompletionService.scala:28)
[info] java.util.concurrent.FutureTask.run(FutureTask.java:266)
[info] java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
[info] java.util.concurrent.FutureTask.run(FutureTask.java:266)
[info] java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[info] java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[info] java.lang.Thread.run(Thread.java:745)
[info] Elapsed time: 0.047 sec