With dill.dumps(), getting error as "can't pickle tensorflow.python._tf_stack.StackSummary objects" - dill

I have a TensorFlow model object and am trying to serialize it with dill.dumps(), but I get the following error:
can't pickle tensorflow.python._tf_stack.StackSummary objects

Related

value option is not a member of org.apache.spark.sql.DataFrame

I'm trying to create a DataFrame in Scala as below:
var olympics = spark.read.csv("/FileStore/tables/Soccer_Data_Set_c46d1.txt").option("inferSchema","true").option("header","true").option("delimiter",",")
When I submit this code, it throws the error: value option is not a member of org.apache.spark.sql.DataFrame.
However, when I modify the code as below:
var olympics = spark.read.option("inferSchema","true").option("header","true").option("delimiter",",").csv("/FileStore/tables/Soccer_Data_Set_-c46d1.txt")
the olympics DataFrame is created successfully.
Can someone please help me understand the difference between these two code snippets?
Once you've called the csv method, you already have a DataFrame; the data has already been read "into" Spark, so it doesn't make sense to set read options at that point.
In the second example, you call read to tell Spark you want to read a file, set the properties of that read, and only then actually read the file.
In the first snippet: invoking 'read.csv("/FileStore/tables/Soccer_Data_Set_c46d1.txt")' returns an 'org.apache.spark.sql.Dataset' object. This class does not define the 'option()' method you then try to invoke ('csv(..).option("inferSchema", "true")'), so the compiler rejects the code with that error.
Please refer to the Dataset class API, where you will find no definition of an 'option()' method.
In the second snippet: invoking 'spark.read' returns an 'org.apache.spark.sql.DataFrameReader' object. This class defines several overloaded 'option' methods, and since you are calling one of the valid ones, the compiler raises no error.
Please refer to the DataFrameReader class API, where the overloaded 'option()' methods are defined.
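For illustration, a minimal sketch of the types involved (assuming a Spark session named spark and the same file path as above; the intermediate val is only there to make the types visible):
// spark.read returns a DataFrameReader, which is where option(...) is defined
val reader: org.apache.spark.sql.DataFrameReader = spark.read
  .option("inferSchema", "true")
  .option("header", "true")
  .option("delimiter", ",")
// csv(...) performs the read and returns a DataFrame, which has no option(...) method
val olympics: org.apache.spark.sql.DataFrame = reader.csv("/FileStore/tables/Soccer_Data_Set_c46d1.txt")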

Reference a java nested class in Spark Scala

I'm trying to read some data from Hadoop into an RDD in Spark using the interactive Scala shell, but I'm having trouble accessing some of the classes I need to deserialize the data.
I start by importing the necessary class:
import com.example.ClassA
This works fine. ClassA is located in a jar on the 'jars' path and has ClassB as a public static nested class.
I'm then trying to use ClassB like so:
val rawData = sc.newAPIHadoopFile(dataPath, classOf[com.example.mapreduce.input.Format[com.example.ClassA$ClassB]], classOf[org.apache.hadoop.io.LongWritable], classOf[com.example.ClassA$ClassB])
This is slightly complicated by one of the other classes taking ClassB as a type parameter, but I think that should be fine.
When I execute this line, I get the following error:
<console>:17: error: type ClassA$ClassB is not a member of package com.example
I have also tried the import statement
import com.example.ClassA$ClassB
and the compiler seems fine with that as well.
Any advice as to how I could proceed to debug this would be appreciated
Thanks for reading.
update:
Changing the '$' to a '.' to reference the nested class seems to get past this problem, although I then get the following type error:
<console>:17: error: inferred type arguments [org.apache.hadoop.io.LongWritable,com.example.ClassA.ClassB,com.example.mapreduce.input.Format[com.example.ClassA.ClassB]] do not conform to method newAPIHadoopFile's type parameter bounds [K,V,F <: org.apache.hadoop.mapreduce.InputFormat[K,V]]
Notice the type parameter bounds that newAPIHadoopFile expects:
K, V, F <: org.apache.hadoop.mapreduce.InputFormat[K,V]
The important part is that F must be an InputFormat parameterized with K and V, i.e. exactly the types given as the first two type arguments of the call.
In your case, the third type argument must satisfy
F <: org.apache.hadoop.mapreduce.InputFormat[LongWritable, ClassA.ClassB]
Does your Format class extend FileInputFormat<LongWritable, V>?
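As an illustration only, here is a sketch of how the call would satisfy the bound, under the assumption (not shown in the question) that the hypothetical com.example.mapreduce.input.Format[V] is declared to extend org.apache.hadoop.mapreduce.lib.input.FileInputFormat[LongWritable, V]:
import org.apache.hadoop.io.LongWritable
import com.example.ClassA

// K = LongWritable, V = ClassA.ClassB, F = Format[ClassA.ClassB] <: InputFormat[K, V]
val rawData = sc.newAPIHadoopFile(
  dataPath,
  classOf[com.example.mapreduce.input.Format[ClassA.ClassB]],
  classOf[LongWritable],
  classOf[ClassA.ClassB])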

Use Scala Manifest to dynamically instantiate Objects

I am wondering if it is possible to instantiate an object (companion object) dynamically using a Manifest. I want to parse JSON into a MongoRecord, but to do so I have to know which type was passed.
def getCompanion[T](implicit mf: Manifest[T]) = {
  if (mf <:< classOf[MongoRecord[C]]) {
    Class[C].asInstanceOf[MongoRecord].setFieldsFromJSON(request.body.toString)
  }
}
but I get an error during compilation:
error: object Class is not a value
Class[C].asInstanceOf[MongoRecord].setFieldsFromJSON(request.body.toString)
This is a difficult topic for me. Perhaps it is not feasible, but I would like to know whether it is possible.
Thanks
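For what it's worth, a minimal sketch of one way this kind of check-and-instantiate is sometimes done, independent of Lift/MongoRecord. The helper name is an assumption for illustration only, and it presumes Scala 2.10 or later (where Manifest exposes runtimeClass) and a public no-argument constructor on the target type:
// Hypothetical helper: if T's erased class is a subtype of `parent`,
// create an instance of T via its no-arg constructor.
def instantiateIfSubclassOf[T](parent: Class[_])(implicit mf: Manifest[T]): Option[T] =
  if (parent.isAssignableFrom(mf.runtimeClass))
    Some(mf.runtimeClass.getDeclaredConstructor().newInstance().asInstanceOf[T])
  else
    None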

NSInvocation error in class which doesn't exist anymore

I had a class named "Object" and changed the name to TheObject everywhere in my project, because I thought it would cause problems. (TheObject is a Core Data generated class.)
Now I get this error:
2012-09-21 14:19:45.794 Choose3[1348:fb03] *** NSInvocation: warning: object 0x191f500 of class 'Object' does not implement methodSignatureForSelector: -- trouble ahead
2012-09-21 14:19:45.794 Choose3[1348:fb03] *** NSInvocation: warning: object 0x191f500 of class 'Object' does not implement doesNotRecognizeSelector: -- abort
Why does Xcode still think I have a class named Object when I don't?
The name of the entity in the data model was changed, but the class assigned to that entity was still Object rather than TheObject.

iPhone Coredata saving error

I'm trying to create a Core Data application.
Sometimes when trying to save data, I see the following error:
Error: NSInvalidArgumentException,
Reason: * -_referenceData64 only defined for abstract class. Define -[NSTemporaryObjectID_default _referenceData64]!,
Description: * -_referenceData64 only defined for abstract class. Define -[NSTemporaryObjectID_default _referenceData64]!
I don't understand why this error occurs or how to avoid it. Can someone help me, please?
Edit: The original answer below is technically correct but doesn't accurately describe the true source of the error. The runtime can't find the correct attribute, but the reason it can't find it is that the entity exists in another managed object context. The OP probably never had a _referenceData64 attribute on any of his entities.
See: http://www.cocoadev.com/index.pl?TemporaryObjectIdsDoNotRespondToReferenceData
Original Answer:
You have a class that has an attribute _referenceData64. In the data model, that class is marked as "abstract". Select the entity in the data model editor and check the box labeled "abstract"; if it is checked, that is your problem.
An abstract entity is never instantiated. Unless it has a subclass, you can't actually set its attributes to any value. Abstract entities exist only to provide templates for subclasses.