AttributeError: module 'pyspark.rdd' has no attribute 'V' - pyspark

py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0) (hadoop102 executor driver): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/opt/module/spark/python/lib/pyspark.zip/pyspark/worker.py", line 601, in main
func, profiler, deserializer, serializer = read_command(pickleSer, infile)
File "/opt/module/spark/python/lib/pyspark.zip/pyspark/worker.py", line 71, in read_command
command = serializer._read_with_length(file)
File "/opt/module/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 160, in _read_with_length
return self.loads(obj)
File "/opt/module/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 430, in loads
return pickle.loads(obj, encoding=encoding)
AttributeError: module 'pyspark.rdd' has no attribute 'V'

Related

pyspark gives 'KeyError: 0'

I want to do image classification with PySpark, but
spark_model = sparkdn.fit(train)
gives
PythonException:
An exception was thrown from the Python worker. Please see the stack trace below.
Traceback (most recent call last):
File "/home/esra/anaconda3/envs/env/lib/python3.9/site-packages/sparkdl/image/imageIO.py", line 158, in resizeImageAsRow
imgAsArray = imageStructToArray(imgAsRow)
File "/home/esra/anaconda3/envs/env/lib/python3.9/site-packages/sparkdl/image/imageIO.py", line 121, in imageStructToArray
imType = imageType(imageRow)
File "/home/esra/anaconda3/envs/env/lib/python3.9/site-packages/sparkdl/image/imageIO.py", line 111, in imageType
return sparkModeLookup[imageRow.mode]
KeyError: 0
Any suggestions? Thanks
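
Not part of the original question, but a hedged diagnostic that may help: the KeyError is raised when sparkdl looks up imageRow.mode, so inspecting which mode values actually occur in the training DataFrame (and dropping rows whose images failed to decode) is a reasonable first step. The sketch below assumes the image struct column is named "image" and follows Spark's ImageSchema fields; both are guesses about the questioner's data.

# Hedged diagnostic sketch -- assumes `train` is the DataFrame passed to
# sparkdn.fit(train) and that its image struct column is named "image".
from pyspark.sql import functions as F

# Which mode values occur? sparkdl raises KeyError on a mode it does not
# recognise (here: 0).
train.select(F.col("image.mode")).distinct().show()

# Guess: drop rows whose images could not be decoded before fitting.
clean_train = train.filter(F.col("image.nChannels") > 0)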

py4j.protocol.Py4JJavaError: An error occurred while calling o86.toDF: org.apache.spark.SparkException: Job aborted due to stage failure:

Code:
from pyspark.sql.types import *
from pyspark.sql import functions as f
retail_sales_transaction = glueContext.create_dynamic_frame.from_catalog(
    database="conform_main_mobconv",
    table_name="retail_sales_transaction"
).select_fields(["business week","transaction_key","dh_audit_record_type","dh_audit_active_record"])
# TODO: Implement delta logic here & exclude Deleted & Inactive records here
df_retail_sales_transaction = (retail_sales_transaction.toDF().filter((f.col('dh_audit_record_type')!='DELETE') & (f.col('dh_audit_active_record')=='1')))
The error I'm getting is:
df_retail_sales_transaction= (retail_sales_transaction.toDF().filter((f.col('dh_audit_record_type')!='DELETE') & (f.col('dh_audit_active_record')=='1')))
py4j.protocol.Py4JJavaError: An error occurred while calling o86.toDF.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 1.0 failed 4 times, most recent failure: Lost task 1.3 in stage 1.0 (TID 14, 172.35.203.73, executor 1): java.lang.UnsupportedOperationException
I had to implement something similar; the Filter transform did the job. Try it:
from awsglue.transforms import Filter

dynamicframeFiltered = Filter.apply(
    frame=retail_sales_transaction,
    f=lambda row: row["dh_audit_record_type"] != 'DELETE' and row["dh_audit_active_record"] == '1'
)
dynamicframeFiltered.toDF().show(1)
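
For completeness, a hedged sketch (not part of the original answer) of how the filtered DynamicFrame could replace the failing toDF().filter(...) chain from the question; the variable name simply mirrors the question's code.

# Hedged sketch: continue the question's pipeline from the filtered
# DynamicFrame instead of filtering after toDF().
df_retail_sales_transaction = dynamicframeFiltered.toDF()
df_retail_sales_transaction.show(5)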

Random forest classifier gives out a weird error

from pyspark.mllib.tree import RandomForest

def predict(training_data, test_data):
    # TODO: Train random forest classifier from given data
    # Result should be an RDD with the prediction of the random forest for each
    # test data point
    RANDOM_SEED = 13579
    RF_NUM_TREES = 3
    RF_MAX_DEPTH = 4
    RF_NUM_BINS = 32
    model = RandomForest.trainClassifier(training_data, numClasses=2, categoricalFeaturesInfo={},
                                         numTrees=RF_NUM_TREES, featureSubsetStrategy="auto", impurity="gini",
                                         maxDepth=RF_MAX_DEPTH, seed=RANDOM_SEED)
    predictions = model.predict(test_data.map(lambda x: x.features))
    labels_and_predictions = test_data.map(lambda x: x.label).zip(predictions)
    return predictions
I encounter the error below:
Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 122.0 failed 1 times, most recent failure: Lost task 0.0 in stage 122.0 (TID 226, localhost, executor driver): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "C:\Spark\spark-2.4.3-bin-hadoop2.7\spark-2.4.3-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\worker.py", line 377, in main
File "C:\Spark\spark-2.4.3-bin-hadoop2.7\spark-2.4.3-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\worker.py", line 372, in process
File "C:\Spark\spark-2.4.3-bin-hadoop2.7\spark-2.4.3-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\serializers.py", line 393, in dump_stream
vs = list(itertools.islice(iterator, batch))
File "C:\Spark\spark-2.4.3-bin-hadoop2.7\spark-2.4.3-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\util.py", line 99, in wrapper
return f(*args, **kwargs)
File "<ipython-input-20-170be0983095>", line 12, in <lambda>
File "C:\Spark\spark-2.4.3-bin-hadoop2.7\spark-2.4.3-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\mllib\linalg\__init__.py", line 483, in __getattr__
return getattr(self.array, item)
AttributeError: 'numpy.ndarray' object has no attribute 'features'
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:452)
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRunner.scala:588)
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRunner.scala:571)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:406)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at org.apache.spark.api.python.SerDeUtil$AutoBatchedPickler.hasNext(SerDeUtil.scala:153)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at org.apache.spark.api.python.SerDeUtil$AutoBatchedPickler.foreach(SerDeUtil.scala:148)
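
Not from the original post, but a hedged reading of the traceback: the failure happens inside test_data.map(lambda x: x.features), and the message 'numpy.ndarray' object has no attribute 'features' suggests the elements of test_data are already plain feature vectors rather than LabeledPoint objects. A minimal sketch of checking for this, assuming pyspark.mllib types:

from pyspark.mllib.regression import LabeledPoint

first = test_data.first()
print(type(first))  # LabeledPoint, or a bare vector/ndarray?

if isinstance(first, LabeledPoint):
    features = test_data.map(lambda x: x.features)
else:
    # Elements are already feature vectors, so pass them through unchanged.
    features = test_data

predictions = model.predict(features)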

Pyspark 'tzinfo' error when using the Cassandra connector

I'm reading from Cassandra using
a = sc.cassandraTable("my_keyspace", "my_table").select("timestamp", "value")
and then want to convert it to a dataframe:
a.toDF()
and the schema is correctly inferred:
DataFrame[timestamp: timestamp, value: double]
but then when materializing the dataframe I get the following error:
Py4JJavaError: An error occurred while calling o89372.showString.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 285.0 failed 4 times, most recent failure: Lost task 0.3 in stage 285.0 (TID 5243, kepler8.cern.ch): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/opt/spark-1.6.0-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/worker.py", line 111, in main
process()
File "/opt/spark-1.6.0-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/worker.py", line 106, in process
serializer.dump_stream(func(split_index, iterator), outfile)
File "/opt/spark-1.6.0-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/serializers.py", line 263, in dump_stream
vs = list(itertools.islice(iterator, batch))
File "/opt/spark-1.6.0-bin-hadoop2.6/python/pyspark/sql/types.py", line 541, in toInternal
return tuple(f.toInternal(v) for f, v in zip(self.fields, obj))
File "/opt/spark-1.6.0-bin-hadoop2.6/python/pyspark/sql/types.py", line 541, in <genexpr>
return tuple(f.toInternal(v) for f, v in zip(self.fields, obj))
File "/opt/spark-1.6.0-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/sql/types.py", line 435, in toInternal
return self.dataType.toInternal(obj)
File "/opt/spark-1.6.0-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/sql/types.py", line 190, in toInternal
seconds = (calendar.timegm(dt.utctimetuple()) if dt.tzinfo
AttributeError: 'str' object has no attribute 'tzinfo'
which sounds like a string has been given to pyspark.sql.types.TimestampType.
How could I debug this further?
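
A hedged debugging sketch (not from the original post): look at a single row returned by the connector to see what Python type the timestamp field actually carries, and, if it turns out to be a string, parse it before calling toDF(). The dict-style row access and column names below are assumptions about the connector's row type; adjust as needed.

# Hedged debugging sketch -- assumes rows behave like dicts/Rows.
row = a.first()
print(repr(row), type(row))  # is the timestamp a datetime or a str?

# If it is a str, convert it before building the DataFrame:
from dateutil import parser

b = a.map(lambda r: (parser.parse(r["timestamp"]), float(r["value"])))
df = b.toDF(["timestamp", "value"])
df.show(5)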

Error in Pyspark : Job aborted due to stage failure: Task 0 in stage 69.0 failed 1 times ; ValueError: too many values to unpack

I was attempting a simple rightOuterJoin in PySpark. The datasets I am trying to join are the following:
temp1.take(5)
Out[138]:
[u'tube_assembly_id,component_id_1,quantity_1,component_id_2,quantity_2,component_id_3,quantity_3,component_id_4,quantity_4,component_id_5,quantity_5,component_id_6,quantity_6,component_id_7,quantity_7,component_id_8,quantity_8',
u'TA-00001,C-1622,2,C-1629,2,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA',
u'TA-00002,C-1312,2,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA',
u'TA-00003,C-1312,2,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA',
u'TA-00004,C-1312,2,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA']
In [139]:
temp2.take(5)
Out[139]:
[u'tube_assembly_id,spec1,spec2,spec3,spec4,spec5,spec6,spec7,spec8,spec9,spec10',
u'TA-00001,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA',
u'TA-00002,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA',
u'TA-00003,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA',
u'TA-00004,NA,NA,NA,NA,NA,NA,NA,NA,NA,NA']
The join command is as follows
In [140]:
temp4 = temp1.rightOuterJoin(temp2)
temp4
Out[140]:
PythonRDD[191] at RDD at PythonRDD.scala:43
However, when I attempt any operation such as temp4.take(4) or temp4.count(), I get the long error listed below:
Py4JJavaError Traceback (most recent call last)
<ipython-input-141-3372dfa2c550> in <module>()
----> 1 temp4.take(5)
/usr/local/bin/spark-1.3.1-bin-hadoop2.6/python/pyspark/rdd.py in take(self, num)
1222
1223 p = range(partsScanned, min(partsScanned + numPartsToTry, totalParts))
-> 1224 res = self.context.runJob(self, takeUpToNumLeft, p, True)
1225
1226 items += res
/usr/local/bin/spark-1.3.1-bin-hadoop2.6/python/pyspark/context.py in runJob(self, rdd, partitionFunc, partitions, allowLocal)
840 mappedRDD = rdd.mapPartitions(partitionFunc)
841 port = self._jvm.PythonRDD.runJob(self._jsc.sc(), mappedRDD._jrdd, javaPartitions,
--> 842 allowLocal)
843 return list(_load_from_socket(port, mappedRDD._jrdd_deserializer))
844
/usr/local/bin/spark-1.3.1-bin-hadoop2.6/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py in __call__(self, *args)
536 answer = self.gateway_client.send_command(command)
537 return_value = get_return_value(answer, self.gateway_client,
--> 538 self.target_id, self.name)
539
540 for temp_arg in temp_args:
/usr/local/bin/spark-1.3.1-bin-hadoop2.6/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
298 raise Py4JJavaError(
299 'An error occurred while calling {0}{1}{2}.\n'.
--> 300 format(target_id, '.', name), value)
301 else:
302 raise Py4JError(
Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.runJob.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 74.0 failed 1 times, most recent failure: Lost task 0.0 in stage 74.0 (TID 78, localhost): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/usr/local/bin/spark-1.3.1-bin-hadoop2.6/python/pyspark/worker.py", line 101, in main
process()
File "/usr/local/bin/spark-1.3.1-bin-hadoop2.6/python/pyspark/worker.py", line 96, in process
serializer.dump_stream(func(split_index, iterator), outfile)
File "/usr/local/bin/spark-1.3.1-bin-hadoop2.6/python/pyspark/serializers.py", line 236, in dump_stream
vs = list(itertools.islice(iterator, batch))
File "/usr/local/bin/spark-1.3.1-bin-hadoop2.6/python/pyspark/rdd.py", line 1806, in <lambda>
map_values_fn = lambda (k, v): (k, f(v))
ValueError: too many values to unpack
at org.apache.spark.api.python.PythonRDD$$anon$1.read(PythonRDD.scala:135)
at org.apache.spark.api.python.PythonRDD$$anon$1.<init>(PythonRDD.scala:176)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:94)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.rdd.UnionRDD.compute(UnionRDD.scala:87)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
at org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply(PythonRDD.scala:243)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1618)
at org.apache.spark.api.python.PythonRDD$WriterThread.run(PythonRDD.scala:205)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1204)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1193)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1192)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1192)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:693)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:693)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:693)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1393)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1354)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
I'd appreciate help on this. I am new to PySpark.
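
Not part of the original question, but a hedged reading of the failure: rightOuterJoin operates on pair RDDs of (key, value) tuples, while temp1 and temp2 here are RDDs of raw CSV strings, so the join's internal lambda (k, v) unpacking fails with "too many values to unpack". A minimal sketch of keying both sides on tube_assembly_id first, using the column layout shown in the sample output above:

def to_pair(line):
    # Split a CSV line and key it on the first column, tube_assembly_id.
    fields = line.split(",")
    return (fields[0], fields[1:])

header1 = temp1.first()
header2 = temp2.first()

pairs1 = temp1.filter(lambda l: l != header1).map(to_pair)
pairs2 = temp2.filter(lambda l: l != header2).map(to_pair)

temp4 = pairs1.rightOuterJoin(pairs2)
temp4.take(2)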