Calling ResultSet.getMetaData in Scala results in java.lang.SecurityException - scala

I'm using Slick with Akka to access a MySQL DB.
I want to get the table column info by calling resultSet.getMetaData, as shown below:
(I tried slick.jdbc.meta.MTable.getTables, but it always returns an empty vector, an issue someone else has also run into.)
implicit val meta2string = GetResult { row: PositionedResult =>
  val md = row.rs.getMetaData() // calling rs.getMetaData is what throws the exception
  "debugging"
}
val done = Slick.source(sql"SELECT * FROM trips".as[String](meta2string)).runForeach(println)
The problem is, whenever I call rs.getMetaData(), an exception is thrown:
java.lang.SecurityException: Prohibited package name: java.sql
at java.base/java.lang.ClassLoader.preDefineClass(ClassLoader.java:898)
at java.base/java.lang.ClassLoader.defineClass(ClassLoader.java:1014)
at java.base/java.security.SecureClassLoader.defineClass(SecureClassLoader.java:174)
at java.base/java.net.URLClassLoader.defineClass(URLClassLoader.java:550)
at java.base/java.net.URLClassLoader$1.run(URLClassLoader.java:458)
at java.base/java.net.URLClassLoader$1.run(URLClassLoader.java:452)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:451)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:575)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
at $line303.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$.$anonfun$meta2string$1(<pastie>:138)
at slick.jdbc.GetResult$$anon$2.apply(GetResult.scala:73)
at slick.jdbc.GetResult$$anon$2.apply(GetResult.scala:73)
at slick.jdbc.SQLActionBuilder$$anon$1$$anon$2.extractValue(StaticQuery.scala:100)
at slick.jdbc.StatementInvoker$$anon$2.extractValue(StatementInvoker.scala:67)
at slick.jdbc.PositionedResultIterator.fetchNext(PositionedResult.scala:176)
at slick.util.ReadAheadIterator.update(ReadAheadIterator.scala:28)
at slick.util.ReadAheadIterator.hasNext(ReadAheadIterator.scala:34)
at slick.util.ReadAheadIterator.hasNext$(ReadAheadIterator.scala:33)
at slick.jdbc.PositionedResultIterator.hasNext(PositionedResult.scala:167)
at slick.jdbc.StreamingInvokerAction.emitStream(StreamingInvokerAction.scala:31)
at slick.jdbc.StreamingInvokerAction.emitStream$(StreamingInvokerAction.scala:26)
at slick.jdbc.SQLActionBuilder$$anon$1.emitStream(StaticQuery.scala:95)
at slick.jdbc.SQLActionBuilder$$anon$1.emitStream(StaticQuery.scala:95)
at slick.basic.BasicBackend$DatabaseDef$$anon$4.run(BasicBackend.scala:342)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
I'm not sure whether there's something wrong with my usage, or whether this is simply not supported by Slick.
Could anyone please help me with this?

I'm not entirely sure about this, but I suspect the issue was caused by running the code example in the sbt console.
I ran the same code snippet from a packaged jar and everything worked fine.
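For anyone who lands here, below is a minimal sketch (my own illustration, not code from the question) of reading column metadata through the same PositionedResult.rs handle; it only uses the standard java.sql.ResultSetMetaData accessors and assumes the code runs from a packaged jar rather than the sbt console:
implicit val columnInfo = GetResult { row: PositionedResult =>
  val md = row.rs.getMetaData // a plain java.sql.ResultSetMetaData
  (1 to md.getColumnCount)
    .map(i => s"${md.getColumnName(i)}: ${md.getColumnTypeName(i)}")
    .mkString(", ")
}
// e.g. Slick.source(sql"SELECT * FROM trips".as[String](columnInfo)).runForeach(println)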

Related

"IllegalStateException: state should be: open" when using mapPartitions with Mongo connector

The setup
I have a simple Spark application that uses mapPartitions to transform an RDD. As part of this transformation, I retrieve some necessary data from a Mongo database. The connection from the Spark worker to the Mongo database is managed using the MongoDB Connector for Spark (https://docs.mongodb.com/spark-connector/current/).
I'm using mapPartitions instead of the simpler map because there is some relatively expensive setup that is only required once for all elements in a partition. If I were to use map instead, this setup would have to be repeated for every element individually.
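To illustrate the point with a toy example (this is not my actual code; expensiveSetup below is a made-up stand-in for the costly per-partition work, applied to the documentIdsRdd defined in the code further down):
def expensiveSetup(): Long = { Thread.sleep(1000); 2L } // made-up stand-in for the real setup

val withMap = documentIdsRdd.map { id => expensiveSetup() * id } // setup repeated for every element
val withMapPartitions = documentIdsRdd.mapPartitions { iter =>
  val factor = expensiveSetup() // setup runs once per partition
  iter.map(_ * factor)
}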
The problem
When one of the partitions in the source RDD becomes large enough, the transformation fails with the message
IllegalStateException: state should be: open
or, occasionally, with
IllegalStateException: The pool is closed
The code
Below is the code of a simple Scala application with which I can reproduce the issue:
package my.package

import com.mongodb.spark.MongoConnector
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.SparkSession
import org.bson.Document

object MySparkApplication {
  def main(args: Array[String]): Unit = {
    val sparkSession: SparkSession = SparkSession.builder()
      .appName("MySparkApplication")
      .master(???) // The Spark master URL.
      .config("spark.jars", ???) // The path at which the application's fat JAR is located.
      .config("spark.scheduler.mode", "FAIR")
      .config("spark.mongodb.keep_alive_ms", "86400000")
      .getOrCreate()

    val mongoConnector: MongoConnector = MongoConnector(Map(
      "uri" -> ??? // The MongoDB URI.
      , "spark.mongodb.keep_alive_ms" -> "86400000"
      , "keep_alive_ms" -> "86400000"
    ))

    val localDocumentIds: Seq[Long] = Seq.range(1L, 100L)
    val documentIdsRdd: RDD[Long] = sparkSession.sparkContext.parallelize(localDocumentIds)

    val result: RDD[Document] = documentIdsRdd.mapPartitions { documentIdsIterator =>
      mongoConnector.withMongoClientDo { mongoClient =>
        val collection = mongoClient.getDatabase("databaseName").getCollection("collectionName")
        // Some expensive query that should only be performed once for every partition.
        collection.find(new Document("_id", 99999L)).first()
        documentIdsIterator.map { documentId =>
          // An expensive operation that does not interact with the Mongo database.
          Thread.sleep(1000)
          collection.find(new Document("_id", documentId)).first()
        }
      }
    }

    val resultLocal = result.collect()
  }
}
The stack trace
Below is the stack trace returned by Spark when I run the application above:
Driver stacktrace:
[...]
at my.package.MySparkApplication.main(MySparkApplication.scala:41)
at my.package.MySparkApplication.main(MySparkApplication.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:775)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.IllegalStateException: state should be: open
at com.mongodb.assertions.Assertions.isTrue(Assertions.java:70)
at com.mongodb.connection.BaseCluster.getDescription(BaseCluster.java:152)
at com.mongodb.Mongo.getConnectedClusterDescription(Mongo.java:885)
at com.mongodb.Mongo.createClientSession(Mongo.java:877)
at com.mongodb.Mongo$3.getClientSession(Mongo.java:866)
at com.mongodb.Mongo$3.execute(Mongo.java:823)
at com.mongodb.FindIterableImpl.first(FindIterableImpl.java:193)
at my.package.MySparkApplication$$anonfun$1$$anonfun$apply$1$$anonfun$apply$2.apply(MySparkApplication.scala:36)
at my.package.MySparkApplication$$anonfun$1$$anonfun$apply$1$$anonfun$apply$2.apply(MySparkApplication.scala:33)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:310)
at scala.collection.AbstractIterator.to(Iterator.scala:1336)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:302)
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1336)
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:289)
at scala.collection.AbstractIterator.toArray(Iterator.scala:1336)
at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.apply(RDD.scala:936)
at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$13.apply(RDD.scala:936)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2069)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2069)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:108)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:338)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
The research I have done
I have found several people asking about this issue, and it seems that in all of their cases the problem turned out to be that they were using the Mongo client after it had been closed. As far as I can tell, this is not happening in my application - opening and closing the connection should be handled by the Mongo-Spark connector, and I would expect the client to only be closed after the function passed to mongoConnector.withMongoClientDo returns.
I did manage to discover that the issue does not arise for the very first element in the RDD. It seems instead that a number of elements are being processed successfully, and that the failure only occurs once the process has taken a certain amount of time. This amount of time seems to be on the order of 5 to 15 seconds.
The above leads me to believe that something is automatically closing the client once it has been active for a certain amount of time, even though it is still being used.
As you can tell from my code, I have discovered that the Mongo-Spark connector exposes a configuration property, spark.mongodb.keep_alive_ms, that, according to the connector documentation, controls "The length of time to keep a MongoClient available for sharing". Its default value is 5 seconds, so this seemed like a useful thing to try. In the application above, I attempt to set it to an entire day in three different ways, with no effect. The documentation does state that this specific property "can only be configured via a System Property". I believe that is what I'm doing (by setting the property when initialising the Spark session and/or the Mongo connector), but I'm not entirely sure. It also seems to be impossible to verify the setting once the Mongo connector has been initialised.
One other StackOverflow question mentions that I should try setting the maxConnectionIdleTime option in the MongoClientOptions, but as far as I can tell it is not possible to set these options through the connector.
As a sanity check, I tried replacing the use of mapPartitions with a functionally equivalent use of map. The issue disappeared, which is probably because the connection to the Mongo database is re-initialised for each individual element of the RDD. However, as mentioned above, this approach would have significantly worse performance because I would end up repeating expensive setup work for every element in the RDD.
Out of curiosity I also tried replacing the call to mapPartitions with a call to foreachPartition, also replacing the call to documentIdsIterator.map with documentIdsIterator.foreach. The issue also disappeared in this case. I have no idea why this would be, but because I need to transform my RDD, this is also not an acceptable approach.
The kind of answer I am looking for
"You actually are closing the client prematurely, and here's where: [...]"
"This is a known issue in the Mongo-Spark connector, and here's a link to their issue tracker: [...]"
"You are setting the spark.mongodb.keep_alive_ms property incorrectly, this is how you should do it: [...]"
"It is possible to verify the value of spark.mongodb.keep_alive_ms on your Mongo connector, and here's how: [...]"
"It is possible to set MongoClientOptions such as maxConnectionIdleTime through the Mongo connector, and here's how: [...]"
Edit
Further investigation has yielded the following insight:
The phrase 'System property' used in the connector's documentation refers to a Java system property, set using System.setProperty("spark.mongodb.keep_alive_ms", desiredValue) or the command line option -Dspark.mongodb.keep_alive_ms=desiredValue. This value is then read by the MongoConnector singleton object, and passed to the MongoClientCache. However, neither of the approaches for setting this property actually works:
Calling System.setProperty() from the driver program sets the value only in the JVM for the Spark driver program, while the value is needed in the Spark worker's JVM.
Calling System.setProperty() from the worker program sets the value only after it is read by MongoConnector.
Passing the command line option -Dspark.mongodb.keep_alive_ms to the Spark option spark.driver.extraJavaOptions again only sets the value in the driver's JVM.
Passing the command line option to the Spark option spark.executor.extraJavaOptions results in an error message from Spark:
Exception in thread "main" java.lang.Exception: spark.executor.extraJavaOptions is not allowed to set Spark options (was '-Dspark.mongodb.keep_alive_ms=desiredValue'). Set them directly on a SparkConf or in a properties file when using ./bin/spark-submit.
The Spark code that throws this error is located in org.apache.spark.SparkConf#validateSettings, where it checks for any worker option value that contains the string -Dspark.
This seems like an oversight in the design of the Mongo connector; either the property should be set through the Spark session (as I originally expected it to be), or it should be renamed to something that doesn't start with spark. I added this information to the JIRA ticket mentioned in the comments.
The core issue is that the MongoConnector uses a cache for MongoClients and follows the loan pattern for managing that cache. Once all loaned MongoClients are returned and the keep_alive_ms time has passed, the MongoClient is closed and removed from the cache.
Because of how RDDs are implemented (they follow Scala's lazy collection semantics), the code documentIdsIterator.map { documentId => ... } is only evaluated once the RDD is actioned. By that time the loaned MongoClient has already been returned to the cache, and after keep_alive_ms the MongoClient is closed. This results in a "state should be: open" exception on the client.
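To make that failure mode concrete, here is a toy illustration (no Mongo involved; Resource and withResource are made up for this sketch) of what happens when a lazy iterator escapes a loan-pattern block:
// Stand-in for a pooled client: usable only while "open".
class Resource {
  var open = true
  def use(x: Long): Long = { require(open, "state should be: open"); x * 2 }
}

// Loan pattern: the resource is closed as soon as the block returns.
def withResource[T](f: Resource => T): T = {
  val r = new Resource
  try f(r) finally r.open = false
}

val lazyResult = withResource { r => Iterator(1L, 2L, 3L).map(r.use) } // nothing is evaluated yet
// lazyResult.toList // fails: the resource was already closed when the loan-pattern block returned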
How to solve?
Once SPARK-246 is fixed, you could set keep_alive_ms high enough that the MongoClient is not closed while the RDD is being processed. However, that still breaks the contract of the loan pattern that the MongoConnector uses, so it should be avoided.
Instead, reuse the MongoConnector to get a client as needed. That way the cache is still used when a client is available, but if a client times out for any reason, a new one is created automatically:
documentIdsRdd.mapPartitions { documentIdsIterator =>
  mongoConnector.withMongoClientDo { mongoClient =>
    // Do some expensive operation
    ...
    // Return the lazy collection
    documentIdsIterator.map { documentId =>
      // Loan the mongoClient
      mongoConnector.withMongoClientDo { mongoClient => ... }
    }
  }
}
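For readers who want something closer to copy-and-paste, here is the same pattern filled in with the database and collection names from the question (a sketch following this advice, not a verified drop-in replacement):
val result: RDD[Document] = documentIdsRdd.mapPartitions { documentIdsIterator =>
  mongoConnector.withMongoClientDo { mongoClient =>
    val collection = mongoClient.getDatabase("databaseName").getCollection("collectionName")
    // Expensive per-partition setup, performed once while the outer loan is still open.
    collection.find(new Document("_id", 99999L)).first()
    // Return the lazy iterator; each element re-loans a client, so a timed-out client is simply replaced.
    documentIdsIterator.map { documentId =>
      mongoConnector.withMongoClientDo { client =>
        client.getDatabase("databaseName").getCollection("collectionName")
          .find(new Document("_id", documentId)).first()
      }
    }
  }
}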
Connection objects are in general tightly bound to the context in which they were initialized. You cannot simply serialize such objects and pass them around. Instead, they should be initialized in place, inside mapPartitions:
val result: RDD[Document] = documentIdsRdd.mapPartitions { documentIdsIterator =>
  val mongoConnector: MongoConnector = MongoConnector(Map(
    "uri" -> ??? // The MongoDB URI.
    , "spark.mongodb.keep_alive_ms" -> "86400000"
    , "keep_alive_ms" -> "86400000"
  ))
  mongoConnector.withMongoClientDo { mongoClient =>
    ...
  }
}

Is hot code replace supposed to work for Groovy in Eclipse?

I was wondering if anyone has been able to get Groovy hot replace working reliably in Eclipse. I can't find any useful info about this, so I am not sure if that's because it just works for everyone else, or if nobody is using Eclipse for Groovy development.
I have tried using the latest Eclipse (4.5 Mars) with latest Groovy-Eclipse plugin (Groovy Eclipse 2.9.2 from http://dist.springsource.org/snapshot/GRECLIPSE/e4.5/), and I still can't get reliable hot replace.
Some simple hot replace scenarios work fine. However, just a little bit of complexity leads to strange Groovy exceptions. I get different errors in different situations, but I was able to reproduce one in a simple JUnit test, so I'll demonstrate that one with some simplified domain objects.
HotSwapTests.groovy:
class HotSwapTests {
    @Test
    public void testHotReplace() {
        DefaultTxView transactionGroup = new DefaultTxView();
        List<Default> defaults = [];
        Default d1 = new Default(ProducerAccountTransactionType.REPAID_AMOUNT, ParticipantAccountType.DEFAULT);
        Default d2 = new Default(ProducerAccountTransactionType.REPAID_AMOUNT, ParticipantAccountType.DEFAULT);
        d1.setCancelledDefault(d2);
        defaults << d1;
        transactionGroup.setDefaultTransactions(defaults);
        while (true) {
            Default result = transactionGroup.getRepaymentTransaction();
            println result
        }
    }
}
DefaultTxView.groovy:
public class DefaultTxView {
    def List<Default> defaultTransactions;

    public Default getRepaymentTransaction() { return getTransactionOfType(REPAID_AMOUNT); }

    public Default getTransactionOfType(ProducerAccountTransactionType type) {
        return defaultTransactions.find { it.getType() == type };
    }
}
Default.java:
The contents of this domain object are not really important - it's a simple POJO.
Now, to test hotswap I place a breakpoint at the marked line:
while (true) {
    Default result = transactionGroup.getRepaymentTransaction(); // <<< break here
    println result
}
And then I go to DefaultTxView.groovy and modify the code inside the closure passed in to the find method:
public Default getTransactionOfType(ProducerAccountTransactionType type) {
    return defaultTransactions.find { it.getType() == type && it.getCancelledDefault() == null };
}
I don't get any warning or error messages when I save the file, but if I attempt to step over the modified line now, I get the following exception:
java.lang.ArrayIndexOutOfBoundsException: 2
at ca.gc.agr.app.web.jsf.producer.DefaultTxView$_getTransactionOfType_closure1.doCall(DefaultTxView.groovy:15)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:90)
at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:324)
at org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:278)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1016)
at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:39)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:45)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
at org.codehaus.groovy.runtime.callsite.BooleanReturningMethodInvoker.invoke(BooleanReturningMethodInvoker.java:48)
at org.codehaus.groovy.runtime.callsite.BooleanClosureWrapper.call(BooleanClosureWrapper.java:50)
at org.codehaus.groovy.runtime.DefaultGroovyMethods.find(DefaultGroovyMethods.java:3060)
at org.codehaus.groovy.runtime.dgm$175.invoke(Unknown Source)
at org.codehaus.groovy.runtime.callsite.PojoMetaMethodSite$PojoMetaMethodSiteNoUnwrapNoCoerce.invoke(PojoMetaMethodSite.java:271)
at org.codehaus.groovy.runtime.callsite.PojoMetaMethodSite.call(PojoMetaMethodSite.java:53)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:116)
at ca.gc.agr.app.web.jsf.producer.DefaultTxView.getTransactionOfType(DefaultTxView.groovy:15)
at ca.gc.agr.app.web.jsf.producer.DefaultTxView$getTransactionOfType$1.callCurrent(Unknown Source)
at ca.gc.agr.app.web.jsf.producer.DefaultTxView.getRepaymentTransaction(DefaultTxView.groovy:11)
at ca.gc.agr.app.web.jsf.producer.DefaultTxView$getRepaymentTransaction$0.call(Unknown Source)
at ca.gc.agr.app.web.jsf.temp.HotSwapTests.testHotReplace(HotSwapTests.groovy:29)
I get very similar results when running my webapp in Tomcat, with the same exception after modifying that line. Restarting the JUnit test or Tomcat makes the new line work fine, so it's definitely a hot replace issue.
So what am I doing wrong? Any advice would be appreciated.
I've successfully used hot deploy with Groovy in a web dev environment in the past using the Eclipse plugin.
IIRC, I used groovyReset.jar, DCEVM and JDK 1.7.
groovyReset.jar should be on the classpath and set as a Java agent. I used the one found inside the Groovy-Eclipse plugin folder (e.g. eclipse/plugins/org.codehaus.groovy_2.3.7.xx-201411061335-e44-RELEASE/extras/groovyReset.jar):
-javaagent:/groovyReset.jar
New closures and methods were instantly visible without a redeploy. Of course, adding a simple line of code to a method worked too. Sometimes I needed to restart the VM, but it was still a breath of fresh air.
In your case, I think at least groovyReset.jar must be present. It is responsible for resetting the metaclass. If you decompile a Groovy class you can see that method calls are made by reflection through an array of java.lang.Method; after a hot code swap this array gets out of order and needs a reset.

Apache Camel File process is resulting in TypeConversion Error

I am using akka-camel to process files. My initial tests were working great; however, when I started passing in actual XML files it started failing with type-conversion errors.
Here is my consumer (very simple, but it blows up at msg.bodyAs[String]):
class FileConsumer extends Consumer {
  def endpointUri = "file:/data/input/actor"
  val processor = context.actorOf(Props[Processor], "processor")

  def receive = {
    case msg: CamelMessage => {
      println("Parent...received %s" format msg)
      processor ! msg.bodyAs[String]
    }
  }
}
Error:
[ERROR] [04/27/2015 12:10:48.617] [ArdisSystem-akka.actor.default-dispatcher-5] [akka://ArdisSystem/user/$a] Error during type conversion from type: org.apache.camel.converter.stream.FileInputStreamCache to the required type: java.lang.String with value org.apache.camel.converter.stream.FileInputStreamCache@4611b35a due java.io.FileNotFoundException: /var/folders/dh/zfqvn9gn7cl6h63d3400y4zxp3xtzf/T/camel-tmp-807558/cos2920459202139947606.tmp (No such file or directory)
org.apache.camel.TypeConversionException: Error during type conversion from type: org.apache.camel.converter.stream.FileInputStreamCache to the required type: java.lang.String with value org.apache.camel.converter.stream.FileInputStreamCache@4611b35a due java.io.FileNotFoundException: /var/folders/dh/zfqvn9gn7cl6h63d3400y4zxp3xtzf/T/camel-tmp-807558/cos2920459202139947606.tmp (No such file or directory)
I am wondering if it has something to do with the actual contents of the XML. The files are not big at all (roughly 70 KB). I doubt I will be able to provide an actual example of the XML itself. I'm just baffled as to why something so small is having issues being converted to a string. Other dummy example XML files have worked fine.
EDIT:
One of the suggestions I received was to enable stream caching, which I did. However, it still doesn't seem to be working. As Ankush commented, the error is confusing. I am not sure if it actually is a stream issue or if it really is a conversion problem.
http://camel.apache.org/stream-caching.html
Added the below
camel.context.setStreamCaching(true)
I was finally able to figure out the problem. The issue was not bad data, but the size of the files. To account for this, you need to add additional settings to the Camel context.
http://camel.apache.org/stream-caching.html
The settings I used are below. I still need to research whether I should just turn off the stream cache, but this is a start.
camel.context.getProperties.put(CachedOutputStream.THRESHOLD, "750000");
Or turn off stream caching entirely:
camel.context.setStreamCaching(false)
Hope this helps someone else.
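For completeness, here is a hedged sketch of how those settings fit together in an akka-camel setup (the actor system name is a placeholder, and you should verify the threshold value and property key against your Camel version):
import akka.actor.ActorSystem
import akka.camel.CamelExtension
import org.apache.camel.converter.stream.CachedOutputStream

val system = ActorSystem("FileProcessing") // placeholder name
val camel = CamelExtension(system)
// Raise the threshold (in bytes) at which stream caching spools message bodies to disk...
camel.context.getProperties.put(CachedOutputStream.THRESHOLD, "750000")
// ...or disable stream caching entirely.
// camel.context.setStreamCaching(false)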
We were having the same issue; commenting out streamCaching() helped:
from(IEricssonConstant.ROUTE_USAGE_DATA_INDIVIDUAL_PROCSESS)
    //.streamCaching()
    .split(new ZipSplitter()).stopOnException()
    .streaming()
    .unmarshal().csv()
    .process(new UsageDataCSVRequestProcessor())

Is there anyone on planet earth who knows SocialAuth - SocialAuth throws java.lang.reflect.InvocationTargetException with no message

I am using the SocialAuth libraries to authenticate against Facebook in my JSF application. I am getting a java.lang.reflect.InvocationTargetException with no message from org.brickred.socialauth.SocialAuthManager.
The probable statement causing this is:
String providerUrl = manager.getAuthenticationUrl(Common.FACEBOOK_AS_ID, Common.SOCIAL_AUTH_REDIRECT_URL);
Any clues, guys? Any help will be greatly appreciated.
I just encountered the same issue today trying to authenticate via Facebook with socialauth-4.0.
The solution is really simple: just make sure that the three jars (openid4java.jar, json-20080701.jar, commons-logging-1.1.jar) inside the dependencies folder (inside the SocialAuth zip archive) are available at runtime.
In my case I had to put them in the lib folder of my Tomcat installation.
This exception is thrown if the invoked method itself threw an exception.
Just unwrap the cause within the InvocationTargetException and you'll get to the original one:
try {
    String providerUrl = manager.getAuthenticationUrl(Common.FACEBOOK_AS_ID, Common.SOCIAL_AUTH_REDIRECT_URL);
} catch (InvocationTargetException ex) {
    System.out.println("oops! " + ex.getCause());
}
This will tell you the actual problem, so you can resolve that issue.

Drools - ClassCastException while using RuleNameEndsWithAgendaFilter

Here is the snippet of code that I'm using:
AgendaFilter filter = (AgendaFilter) new RuleNameEndsWithAgendaFilter("Test");
// Gives a compile-time error if I don't cast it.

// Run the rules
int numOfRulesFired = stateFulKnowledgeSession.fireAllRules(filter);
This spits out a runtime exception:
java.lang.ClassCastException: org.drools.base.RuleNameEndsWithAgendaFilter cannot be cast to org.drools.runtime.rule.AgendaFilter
Please let me know if I'm missing something here.
Thanks,
Ashwin
It looks like you have the wrong AgendaFilter. I checked the latest Drools code, and org.drools.runtime.rule.AgendaFilter either no longer exists or has been renamed to something better.
Use org.drools.spi.AgendaFilter and it works.