MongoDB Allanbank async driver callback being called twice

I have a simple callback bound to a findAsync call. Every few requests I observe a failure even though the data in the database is static.
Some quick debugging shows that my callback is ALWAYS being called twice. The cases where my app fails are caused by a null MongoIterator<Document> being passed in to my callback. Of course I only expected a single callback when the data is ready (or if there was an exception).
Is this expected behavior? Is there something I can do to ensure my callback is only called once when the query is complete?
Here is the code snippet:
collection.findAsync(
    [
        callback: { MongoIterator<Document> v ->
            List data = []
            try {
                while (v.hasNext()) {
                    data.add(docToJson(v.next()))
                }
            } finally {
                if (v != null) v.close()
            }
            sendReply([ status: 'ok', data: data ])
        },
        exception: { Throwable t ->
            sendReply([ status: 'error', message: t.message ])
        }
    ] as Callback<MongoIterator<Document>>,
    find
)
Here is the stack trace:
Unexpected MongoDB Connection closed: Auth(MongoDB(43026-->localhost/127.0.0.1:27017)). Will try to reconnect.
Reconnected to localhost/127.0.0.1:27017
Exception in thread "MongoDB 43026<--localhost/127.0.0.1:27017" java.lang.NullPointerException: Cannot invoke method hasNext() on null object
at org.codehaus.groovy.runtime.NullObject.invokeMethod(NullObject.java:77)
at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:45)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:45)
at org.codehaus.groovy.runtime.callsite.NullCallSite.call(NullCallSite.java:32)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:45)
at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:54)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:112)
at mongoAsync$_run_closure2_closure6.doCall(mongoAsync.groovy:126)
at sun.reflect.GeneratedMethodAccessor54.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:90)
at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:233)
at org.codehaus.groovy.runtime.metaclass.ClosureMetaClass.invokeMethod(ClosureMetaClass.java:272)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:909)
at groovy.lang.Closure.call(Closure.java:411)
at org.codehaus.groovy.runtime.ConvertedMap.invokeCustom(ConvertedMap.java:50)
at org.codehaus.groovy.runtime.ConversionHandler.invoke(ConversionHandler.java:81)
at com.sun.proxy.$Proxy14.callback(Unknown Source)
at com.allanbank.mongodb.client.AbstractReplyCallback.handle(AbstractReplyCallback.java:82)
at com.allanbank.mongodb.client.AbstractValidatingReplyCallback.callback(AbstractValidatingReplyCallback.java:72)
at com.allanbank.mongodb.client.AbstractValidatingReplyCallback.callback(AbstractValidatingReplyCallback.java:33)
at com.allanbank.mongodb.connection.message.ReplyHandler.reply(ReplyHandler.java:77)
at com.allanbank.mongodb.connection.socket.SocketConnection.reply(SocketConnection.java:560)
at com.allanbank.mongodb.connection.socket.SocketConnection$ReceiveRunnable.receiveOne(SocketConnection.java:735)
at com.allanbank.mongodb.connection.socket.SocketConnection$ReceiveRunnable.run(SocketConnection.java:683)
at java.lang.Thread.run(Thread.java:722)

Thanks for the extra information.
I am pretty sure I found the bug.
There is a race between the callback/iterator for the query getting the name of the server handling the query (needed for the GetMore requests) and the receipt of a reply. I am not sure how it has not been seen by others.
There is a patched jar here: [redact]. Let me know if it solves the problem for you.
Assuming it does, I will push this fix out as a 1.2.2 tomorrow.
Rob.
Edit: The 1.2.2 version is now available with the fix: http://www.allanbank.com/mongodb-async-driver/download.html
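For anyone stuck on an affected version, one defensive option is to wrap the application callback so it can fire at most once and ignore the spurious null iterator. This is only a sketch against the driver's com.allanbank.mongodb.Callback interface as used in the question, not official driver code:
import java.util.concurrent.atomic.AtomicBoolean;
import com.allanbank.mongodb.Callback;
import com.allanbank.mongodb.MongoIterator;
import com.allanbank.mongodb.bson.Document;

public final class OnceCallback implements Callback<MongoIterator<Document>> {
    private final Callback<MongoIterator<Document>> delegate;
    private final AtomicBoolean done = new AtomicBoolean(false);

    public OnceCallback(Callback<MongoIterator<Document>> delegate) {
        this.delegate = delegate;
    }

    public void callback(MongoIterator<Document> result) {
        // Drop the spurious second invocation, which arrives with a null iterator.
        if (result != null && done.compareAndSet(false, true)) {
            delegate.callback(result);
        }
    }

    public void exception(Throwable thrown) {
        if (done.compareAndSet(false, true)) {
            delegate.exception(thrown);
        }
    }
}
Usage would then be collection.findAsync(new OnceCallback(realCallback), find).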

Related

spark: SAXParseException while writing to parquet on s3

I'm trying to read in some json, infer a schema, and write it out again as parquet to s3 (s3a). For some reason, about a third of the way through the writing portion of the run, spark always errors out with the error included below. I can't find any obvious reasons for the issue: it isn't out of memory; there are no long GC pauses. There don't seem to be any additional error messages in the logs of the individual executors.
The script runs fine on another set of data that I have, which is of a very similar structure, but several orders of magnitude smaller.
I am running spark 2.0.1-hadoop-2.7 and am using the FileOutputCommitter. The algorithm version doesn't seem to matter.
Edit:
This does not appear to be a problem of badly formed JSON or corrupted files. I have unzipped and read in each file individually with no error.
Here's a simplified version of the script:
object Foo {
  def parseJson(json: String): Option[Map[String, Any]] = {
    if (json == null)
      Some(Map())
    else
      parseOpt(json).map((j: JValue) => j.values.asInstanceOf[Map[String, Any]])
  }
}
// read in as text and parse json using json4s
val jsonRDD: RDD[String] = sc.textFile(inputPath)
  .map(row => Foo.parseJson(row))
// infer a schema that will encapsulate the most rows in a sample of size sampleRowNum
val schema: StructType = Infer.getMostCommonSchema(sc, jsonRDD, sampleRowNum)
// get documents' compatibility with schema
val jsonWithCompatibilityRDD: RDD[(String, Boolean)] = jsonRDD
  .map(js => (js, Infer.getSchemaCompatibility(schema, Infer.inferSchema(js)).toBoolean))
  .repartition(partitions)
val jsonCompatibleRDD: RDD[String] = jsonWithCompatibilityRDD
  .filter { case (js: String, compatible: Boolean) => compatible }
  .map { case (js: String, _: Boolean) => js }
// create a dataframe from documents with compatible schema
val dataFrame: DataFrame = spark.read.schema(schema).json(jsonCompatibleRDD)
It completes the earlier schema-inferring steps successfully. The error itself occurs on the last line, but I suppose that could encompass at least the immediately preceding statement, if not earlier:
org.apache.spark.SparkException: Task failed while writing rows
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:261)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(InsertIntoHadoopFsRelationCommand.scala:143)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
at org.apache.spark.scheduler.Task.run(Task.scala:86)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: Failed to commit task
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.org$apache$spark$sql$execution$datasources$DefaultWriterContainer$$commitTask$1(WriterContainer.scala:275)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply$mcV$sp(WriterContainer.scala:257)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply(WriterContainer.scala:252)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$1.apply(WriterContainer.scala:252)
at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1345)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:258)
... 8 more
Suppressed: java.lang.NullPointerException
at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:147)
at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:113)
at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:112)
at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetFileFormat.scala:569)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.org$apache$spark$sql$execution$datasources$DefaultWriterContainer$$abortTask$1(WriterContainer.scala:282)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer$$anonfun$writeRows$2.apply$mcV$sp(WriterContainer.scala:258)
at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1354)
... 9 more
Caused by: com.amazonaws.AmazonClientException: Unable to unmarshall response (Failed to parse XML document with handler class com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser$ListBucketHandler). Response Code: 200, Response Text: OK
at com.amazonaws.http.AmazonHttpClient.handleResponse(AmazonHttpClient.java:738)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:399)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3480)
at com.amazonaws.services.s3.AmazonS3Client.listObjects(AmazonS3Client.java:604)
at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:962)
at org.apache.hadoop.fs.s3a.S3AFileSystem.deleteUnnecessaryFakeDirectories(S3AFileSystem.java:1147)
at org.apache.hadoop.fs.s3a.S3AFileSystem.finishedWrite(S3AFileSystem.java:1136)
at org.apache.hadoop.fs.s3a.S3AOutputStream.close(S3AOutputStream.java:142)
at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
at org.apache.parquet.hadoop.ParquetFileWriter.end(ParquetFileWriter.java:400)
at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:117)
at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:112)
at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetFileFormat.scala:569)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.org$apache$spark$sql$execution$datasources$DefaultWriterContainer$$commitTask$1(WriterContainer.scala:267)
... 13 more
Caused by: com.amazonaws.AmazonClientException: Failed to parse XML document with handler class com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser$ListBucketHandler
at com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser.parseXmlInputStream(XmlResponsesSaxParser.java:150)
at com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser.parseListBucketObjectsResponse(XmlResponsesSaxParser.java:279)
at com.amazonaws.services.s3.model.transform.Unmarshallers$ListObjectsUnmarshaller.unmarshall(Unmarshallers.java:75)
at com.amazonaws.services.s3.model.transform.Unmarshallers$ListObjectsUnmarshaller.unmarshall(Unmarshallers.java:72)
at com.amazonaws.services.s3.internal.S3XmlResponseHandler.handle(S3XmlResponseHandler.java:62)
at com.amazonaws.services.s3.internal.S3XmlResponseHandler.handle(S3XmlResponseHandler.java:31)
at com.amazonaws.http.AmazonHttpClient.handleResponse(AmazonHttpClient.java:712)
... 29 more
Caused by: org.xml.sax.SAXParseException; lineNumber: 1; columnNumber: 2; XML document structures must start and end within the same entity.
at org.apache.xerces.util.ErrorHandlerWrapper.createSAXParseException(Unknown Source)
at org.apache.xerces.util.ErrorHandlerWrapper.fatalError(Unknown Source)
at org.apache.xerces.impl.XMLErrorReporter.reportError(Unknown Source)
at org.apache.xerces.impl.XMLErrorReporter.reportError(Unknown Source)
at org.apache.xerces.impl.XMLErrorReporter.reportError(Unknown Source)
at org.apache.xerces.impl.XMLScanner.reportFatalError(Unknown Source)
at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl.endEntity(Unknown Source)
at org.apache.xerces.impl.XMLDocumentScannerImpl.endEntity(Unknown Source)
at org.apache.xerces.impl.XMLEntityManager.endEntity(Unknown Source)
at org.apache.xerces.impl.XMLEntityScanner.load(Unknown Source)
at org.apache.xerces.impl.XMLEntityScanner.skipChar(Unknown Source)
at org.apache.xerces.impl.XMLDocumentScannerImpl$PrologDispatcher.dispatch(Unknown Source)
at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl.scanDocument(Unknown Source)
at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
at org.apache.xerces.parsers.AbstractSAXParser.parse(Unknown Source)
at com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser.parseXmlInputStream(XmlResponsesSaxParser.java:141)
... 35 more
Here's my conf:
spark.executor.extraJavaOptions -XX:+UseG1GC -XX:MaxPermSize=1G -XX:+HeapDumpOnOutOfMemoryError
spark.executor.memory 16G
spark.executor.uri https://s3.amazonaws.com/foo/spark-2.0.1-bin-hadoop2.7.tgz
spark.hadoop.fs.s3a.impl org.apache.hadoop.fs.s3a.S3AFileSystem
spark.hadoop.fs.s3a.buffer.dir /raid0/spark
spark.hadoop.fs.s3n.buffer.dir /raid0/spark
spark.hadoop.fs.s3a.connection.timeout 500000
spark.hadoop.fs.s3n.multipart.uploads.enabled true
spark.hadoop.parquet.block.size 2147483648
spark.hadoop.parquet.enable.summary-metadata false
spark.jars.packages com.databricks:spark-avro_2.11:3.0.1
spark.local.dir /raid0/spark
spark.mesos.coarse false
spark.mesos.constraints priority:1
spark.network.timeout 600
spark.rpc.message.maxSize 500
spark.speculation false
spark.sql.parquet.mergeSchema false
spark.sql.planner.externalSort true
spark.submit.deployMode client
spark.task.cpus 1
I can think of three possible reasons for this problem.
1. JVM version. The AWS SDK checks for the following ones: "1.6.0_06", "1.6.0_13", "1.6.0_17", "1.6.0_65", "1.7.0_45". If you are using one of them, try upgrading.
2. Old AWS SDK. Refer to https://github.com/aws/aws-sdk-java/issues/460 for a workaround.
3. If you have lots of files in the directory where you are writing these files, you might be hitting https://issues.apache.org/jira/browse/HADOOP-13164. Consider increasing the timeout to larger values.
A SAXParseException may indicate a badly formatted XML file. Since the job consistently fails roughly a third of the way through, it is probably failing in the same place every time (on a file whose partition is roughly a third of the way through the partition list).
Can you paste your script? It may be possible to wrap the Spark step in a try/catch block that prints out the file when this error occurs, which would let you zoom in on the problem easily.
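A minimal sketch of that idea, using Spark's Java API for illustration (probeFiles, candidatePaths, and the use of textFile are assumptions, not from the question's script): read each input file on its own inside a try/catch so the failing one gets printed instead of failing deep inside the big job:
import java.util.List;
import org.apache.spark.sql.SparkSession;

static void probeFiles(SparkSession spark, List<String> candidatePaths) {
    for (String path : candidatePaths) {  // hypothetical list of the S3 input files
        try {
            // Force a full read of just this file; a bad object fails here in isolation.
            long lines = spark.read().textFile(path).count();
            System.out.println("OK   " + path + " (" + lines + " lines)");
        } catch (Exception e) {
            System.out.println("FAIL " + path + ": " + e);
        }
    }
}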
From the logs:
Caused by: org.xml.sax.SAXParseException; lineNumber: 1; columnNumber: 2; XML document structures must start and end within the same entity.
and
Caused by: com.amazonaws.AmazonClientException: Failed to parse XML document with handler class com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser$ListBucketHandler
It looks like you have a corrupted/incorrectly formatted file, and your error is actually occurring during the read portion of the task. You could confirm this by trying another operation that will force the read, such as count().
If confirmed, the goal would then be to find the corrupted file. You could do this by listing the S3 files, calling sc.parallelize() on that list, and then trying to read each file in a custom function using map().
import boto3
from pyspark.sql import Row

def scanKeys(startKey, endKey):
    bucket = boto3.resource('s3').Bucket('bucketName')
    for obj in bucket.objects.filter(Prefix='prefix', Marker=startKey):
        if obj.key < endKey:
            yield obj.key
        else:
            return

def testFile(s3Path):
    s3obj = boto3.resource('s3').Object(bucket_name='bucketName', key=s3Path)
    body = s3obj.get()['Body']
    # ... logic to test the file format, or use a try/except and attempt to parse it ...
    if fileFormattedCorrectly:
        return Row(status='Good', key=s3Path)
    else:
        return Row(status='Fail', key=s3Path)

keys = list(scanKeys(startKey, endKey))
keyListRdd = sc.parallelize(keys, 1000)
keyListRdd.map(testFile).filter(lambda x: x.asDict().get('status') == 'Fail').collect()
This will return the S3 paths of the incorrectly formatted files.
For Googlers:
If you:
- have a versioned bucket
- use s3a://
- see ListBucketHandler and listObjects in your error message
Quick solution:
use s3:// instead of s3a://, which will use the S3 driver provided by EMR
You may see this error because older versions of s3a:// use the S3 ListObjects (v1) API instead of ListObjectsV2. The former returns extra info like the owner, and is not robust against large numbers of deletion markers. Newer versions of the s3a:// driver solve this problem, but you can always use the s3:// driver instead.
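For illustration only (Spark's Java API; the dataFrame name mirrors the question's variable, and the bucket/prefix are placeholders), the quick fix is just a change of URI scheme on the output path:
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

static void writeWithEmrS3Driver(Dataset<Row> dataFrame) {
    // s3:// selects the EMR-provided S3 filesystem instead of the failing s3a:// one.
    dataFrame.write().parquet("s3://bucket/prefix/output");
    // was: dataFrame.write().parquet("s3a://bucket/prefix/output");
}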
Quote:
the V1 list API always returns 5000 entries (as set in fs.s3a.paging.maximum), except for the final entry. If you have versioning turned on in your bucket, deleted entries retain tombstone markers with references to their versions, which will surface in the S3 side of list calls but get stripped out from the response. So, for a very large tree, you may end up with S3 having to keep a channel open while it skips over thousands to millions of deleted objects before it can find actual ones to return, which can time out connections.
Quote:
Introducing a new version of the ListObjects (ListObjectsV2) API that allows listing objects with a large number of delete markers.
Quote:
If there are thousands of delete markers, the list operation might time out.

Solution to NullPointerException in my TestNG test case

I am trying to log in to Facebook using multiple sets of login credentials from an external Excel sheet with TestNG, but the code I have written throws:
FAILED: f
java.lang.NullPointerException
at DataDriven.loginRetesting.f(loginRetesting.java:31)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.testng.internal.MethodInvocationHelper.invokeMethod(MethodInvocationHelper.java:84)
at org.testng.internal.Invoker.invokeMethod(Invoker.java:714)
at org.testng.internal.Invoker.invokeTestMethod(Invoker.java:901)
at org.testng.internal.Invoker.invokeTestMethods(Invoker.java:1231)
at org.testng.internal.TestMethodWorker.invokeTestMethods(TestMethodWorker.java:127)
at org.testng.internal.TestMethodWorker.run(TestMethodWorker.java:111)
at org.testng.TestRunner.privateRun(TestRunner.java:767)
at org.testng.TestRunner.run(TestRunner.java:617)
at org.testng.SuiteRunner.runTest(SuiteRunner.java:334)
at org.testng.SuiteRunner.runSequentially(SuiteRunner.java:329)
at org.testng.SuiteRunner.privateRun(SuiteRunner.java:291)
at org.testng.SuiteRunner.run(SuiteRunner.java:240)
at org.testng.SuiteRunnerWorker.runSuite(SuiteRunnerWorker.java:52)
at org.testng.SuiteRunnerWorker.run(SuiteRunnerWorker.java:86)
at org.testng.TestNG.runSuitesSequentially(TestNG.java:1224)
at org.testng.TestNG.runSuitesLocally(TestNG.java:1149)
at org.testng.TestNG.run(TestNG.java:1057)
at org.testng.remote.RemoteTestNG.run(RemoteTestNG.java:111)
at org.testng.remote.RemoteTestNG.initAndRun(RemoteTestNG.java:204)
at org.testng.remote.RemoteTestNG.main(RemoteTestNG.java:175)
Code:
@Test
public void f() throws Exception {
    FileInputStream fi = new FileInputStream("E:\\workspace99\\SeleniumAutomations\\testdata\\LoginData.xls");
    Workbook w = Workbook.getWorkbook(fi);
    Sheet s = w.getSheet(0);
    for (int i = 1; i < 8; i++) {
        driver.findElement(By.id("email")).sendKeys(s.getCell(0, i).getContents());
        driver.findElement(By.id("pass")).sendKeys(s.getCell(1, i).getContents());
        driver.findElement(By.id("u_0_1")).click();
        Thread.sleep(1000);
        if (selenium.isElementPresent("id=userNavigationLabel")) {
            //driver.findElement(By.cssSelector("span.gb_V.gbii")).click();
            driver.findElement(By.id("userNavigationLabel")).click();
            driver.findElement(By.cssSelector("input.uiLinkButtonInput")).click();
            Thread.sleep(1000);
        }
        else {
            System.out.println("invalid cridential");
            driver.findElement(By.id("email")).clear();
            driver.findElement(By.id("pass")).clear();
        }
    }
}
I am not able to find out where the problem is, so how do I solve it?
For one thing, don't use "throws Exception" in the method declaration. Instead, have a try block inside the method with a "not null" assertion.
Keep in mind that you never want to throw typical exceptions out of a test method, because it will drop you out of the test lifecycle too early and skip your @After phases. An exception thrown in a @Before-annotated method will probably cause the test to be skipped.
So, what you want to do instead is fail assertions. If you have an exception condition that occurs sometimes, swallow the exception (don't throw it) and use an assertion to handle it.
For example:
try {
    // code that may throw
} catch (NullPointerException e) {
    Assert.fail("There was an error in blah.", e);
}
A null pointer means the expected value of a variable is null.
Please check that the file exists at the specified path "E:\workspace99\SeleniumAutomations\testdata\LoginData.xls" and that you are reading the values from it correctly.
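For example, a quick sanity check along those lines (a sketch; the path is the one from the question):
import java.io.File;
import org.testng.Assert;

File data = new File("E:\\workspace99\\SeleniumAutomations\\testdata\\LoginData.xls");
// Fail fast with a clear message instead of an opaque NullPointerException later.
Assert.assertTrue(data.exists(), "Missing test data file: " + data.getAbsolutePath());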
The stack trace says the issue is at line 31 of your code, but your snippet does not let us work out where line 31 is.
You are calling "selenium.isElementPresent". Could "selenium" be null?

Endpoints API generation error when using Guava

We are using App Engine Endpoints Java with Guava, and when Function is used inside an endpoint method the API generator throws an exception.
Function is NOT part of the method signature. It's just used inside the method to transform a list. Comment out this part of the body and it generates OK.
Guava is definitely on the classpath, and other parts of the application use it normally.
I'm not sure whether this is specific to Guava or whether any outside API would give the same error.
The method:
@ApiMethod(
    httpMethod = "GET",
    name = "ledgers.accountgroups.get",
    path = "ledgers/{ledgerId}/accountgroups")
public Collection<IAccountGroup> listAccountGroups(@Named("ledgerId") String ledgerId, User user) throws Exception {
    collaboratorDAO.assertCollaboratorOn(ledgerId, user);
    List<Group> groups = accountGroupsDAO.getAccountGroups(ledgerId);
    List<IAccountGroup> transformedGroups = Lists.transform(groups, new Function<Group, IAccountGroup>() {
        @Override
        public IAccountGroup apply(Group group) {
            return group.createClient();
        }
    });
    return transformedGroups;
}
The exception:
INFO: Successfully processed ./war/WEB-INF/appengine-web.xml
interface com.google.api.server.spi.config.Api
interface com.google.api.server.spi.config.Api
Exception in thread "main" java.lang.NoClassDefFoundError: com/google/common/base/Function
at java.lang.Class.getDeclaredMethods0(Native Method)
at java.lang.Class.privateGetDeclaredMethods(Class.java:2436)
at java.lang.Class.getDeclaredMethods(Class.java:1793)
at com.google.api.server.spi.MethodHierarchyReader.addServiceMethods(MethodHierarchyReader.java:174)
at com.google.api.server.spi.MethodHierarchyReader.readMethodHierarchyIfNecessary(MethodHierarchyReader.java:44)
at com.google.api.server.spi.MethodHierarchyReader.getEndpointOverrides(MethodHierarchyReader.java:99)
at com.google.api.server.spi.config.annotationreader.ApiConfigAnnotationReader.readEndpointMethods(ApiConfigAnnotationReader.java:215)
at com.google.api.server.spi.config.annotationreader.ApiConfigAnnotationReader.loadEndpointMethods(ApiConfigAnnotationReader.java:92)
at com.google.api.server.spi.config.ApiConfigLoader.loadConfiguration(ApiConfigLoader.java:55)
at com.google.api.server.spi.tools.AnnotationApiConfigGenerator.generateConfigObjects(AnnotationApiConfigGenerator.java:237)
at com.google.api.server.spi.tools.AnnotationApiConfigGenerator.generateConfig(AnnotationApiConfigGenerator.java:185)
at com.google.api.server.spi.tools.GenApiConfigAction.genApiConfig(GenApiConfigAction.java:78)
at com.google.api.server.spi.tools.GetClientLibAction.getClientLib(GetClientLibAction.java:66)
at com.google.api.server.spi.tools.GetClientLibAction.execute(GetClientLibAction.java:49)
at com.google.api.server.spi.tools.EndpointsTool.execute(EndpointsTool.java:66)
at com.google.api.server.spi.tools.EndpointsTool.main(EndpointsTool.java:92)
Caused by: java.lang.ClassNotFoundException: com.google.common.base.Function
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
... 16 more
It's simple enough to avoid Guava at this point, and that works. I just want to know why this is happening, as I need heavier use of Guava in other parts.
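For reference, a sketch of what the Guava-free workaround mentioned above might look like (using the question's Group and IAccountGroup types; an eager copy with a plain loop instead of Lists.transform's lazy view):
import java.util.ArrayList;
import java.util.List;

List<IAccountGroup> transformedGroups = new ArrayList<IAccountGroup>(groups.size());
for (Group group : groups) {
    // Same per-element transform, but no Guava type reachable from the endpoint class.
    transformedGroups.add(group.createClient());
}
return transformedGroups;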
Update:
I think it's the same problem as:
Google Cloud Endpoints doesn't know about the Work class from Objectify 4 Transaction, causing ClassNotFoundException
Any tips?
Thanks!

How to change the value of com.arjuna.ats.jbossatx.jta.TransactionManagerService TransactionTimeout at runtime?

We have a JBoss [EAP] 4.3.0.GA_CP01 environment and I need to modify the TransactionTimeout property of com.arjuna.ats.jbossatx.jta.TransactionManagerService, but whenever I try to change the value via the MBean from the JMX console, the following stack trace shows up:
java.lang.IllegalStateException: Cannot set transaction timeout once MBean has started
com.arjuna.ats.jbossatx.jta.TransactionManagerService.setTransactionTimeout(TransactionManagerService.java:323)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
java.lang.reflect.Method.invoke(Method.java:597)
org.jboss.mx.interceptor.AttributeDispatcher.invoke(AttributeDispatcher.java:136)
org.jboss.mx.server.Invocation.dispatch(Invocation.java:94)
org.jboss.mx.server.Invocation.invoke(Invocation.java:86)
org.jboss.mx.interceptor.ModelMBeanAttributeInterceptor.invoke(ModelMBeanAttributeInterceptor.java:103)
org.jboss.mx.interceptor.PersistenceInterceptor.invoke(PersistenceInterceptor.java:76)
org.jboss.mx.server.Invocation.invoke(Invocation.java:88)
org.jboss.mx.server.AbstractMBeanInvoker.setAttribute(AbstractMBeanInvoker.java:461)
org.jboss.mx.server.MBeanServerImpl.setAttribute(MBeanServerImpl.java:608)
org.jboss.jmx.adaptor.control.Server.setAttributes(Server.java:206)
org.jboss.jmx.adaptor.html.HtmlAdaptorServlet.updateAttributes(HtmlAdaptorServlet.java:236)
org.jboss.jmx.adaptor.html.HtmlAdaptorServlet.processRequest(HtmlAdaptorServlet.java:98)
org.jboss.jmx.adaptor.html.HtmlAdaptorServlet.doPost(HtmlAdaptorServlet.java:82)
javax.servlet.http.HttpServlet.service(HttpServlet.java:710)
javax.servlet.http.HttpServlet.service(HttpServlet.java:803)
org.jboss.web.tomcat.filters.ReplyHeaderFilter.doFilter(ReplyHeaderFilter.java:96)
Is there a way to change the value of TransactionTimeout programmatically at runtime, without bouncing the server?
Here's a Groovy example of how to do this (using embedded Groovy inside a JBoss 4.3.0.GA_CP01 instance):
mbeanserver.getAttribute(JMXHelper.objectName("jboss:service=TransactionManager"), "TransactionManager").setTransactionTimeout(200);
Basically, the MBean service com.arjuna.ats.jbossatx.jta.TransactionManagerService does not allow you to change the default transaction timeout once it has started, but if you retrieve the attribute TransactionManager (an instance of com.arjuna.ats.jbossatx.jta.TransactionManagerDelegate), it exposes the method:
public void com.arjuna.ats.jbossatx.BaseTransactionManagerDelegate.setTransactionTimeout(int) throws javax.transaction.SystemException
Note that this will not change the value reported in the MBean's TransactionTimeout attribute, but all transactions started after this method is called will have the new transaction timeout.
More Groovy code:
def txManager = mbeanserver.getAttribute(JMXHelper.objectName("jboss:service=TransactionManager"), "TransactionManager");
TX.exec({
    println "Timeout:${txManager.getTransactionTimeout()}";
});
txManager.setTransactionTimeout(txManager.getTransactionTimeout() * 2);
TX.exec({
    println "Timeout:${txManager.getTransactionTimeout()}";
});
Output:
Timeout:200
Timeout:400
Thank you Nicholas!
Here is the Java code which can be used to change the transaction timeout at runtime:
import javax.management.MBeanServer;
import javax.management.ObjectName;
import org.jboss.mx.util.MBeanServerLocator;
import com.arjuna.ats.jbossatx.jta.TransactionManagerDelegate;

MBeanServer mBeanServer = MBeanServerLocator.locateJBoss();
TransactionManagerDelegate tmd = (TransactionManagerDelegate) mBeanServer.getAttribute(new ObjectName("jboss:service=TransactionManager"), "TransactionManager");
System.out.println("Prev: " + tmd.getTransactionTimeout());
tmd.setTransactionTimeout(200);
System.out.println("New: " + tmd.getTransactionTimeout());
And as you quoted,
Note that this will not change the value reported in the MBean's TransactionTimeout attribute; however, the new value will be effective for all transactions started after this operation.

Scala Object Serialization

I am having trouble opening multiple objects that were serialized into a single .bin file. Right now, I can only get one object to be read in when I attempt to open the file. After the first object is read, the error message is displayed (and no further objects are read). My code looks like the following:
val ois = new ObjectInputStream(new BufferedInputStream(new FileInputStream(chooser.selectedFile)))
val toRead: Int = ois.readInt()
for (i <- 0 to toRead) {
  ois.readObject() match {
    case anObject: myObject =>
      aMutableBuffer += anObject
    case _ =>
  }
  ois.close()
}
The error that I am getting is a lot of the following:
Exception in thread "AWT-EventQueue-0" java.security.PrivilegedActionException:java.security.PrivilegedActionException: java.io.IOException: Stream closed
at java.security.AccessController.doPrivileged(Native Method)
at java.security.AccessControlContext$1.doIntersectionPrivilege(AccessControlContext.java:87)
at java.awt.EventQueue.dispatchEvent(EventQueue.java:649)
at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:296)
at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:211)
at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:201)
at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:196)
at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:188)
at java.awt.EventDispatchThread.run(EventDispatchThread.java:122)
Caused by: java.security.PrivilegedActionException: java.io.IOException: Stream closed
at java.security.AccessController.doPrivileged(Native Method)
at java.security.AccessControlContext$1.doIntersectionPrivilege(AccessControlContext.java:87)
at java.security.AccessControlContext$1.doIntersectionPrivilege(AccessControlContext.java:98)
at java.awt.EventQueue$2.run(EventQueue.java:652)
at java.awt.EventQueue$2.run(EventQueue.java:650)
... 9 more
Caused by: java.io.IOException: Stream closed
at java.io.BufferedInputStream.getBufIfOpen(BufferedInputStream.java:145)
at java.io.BufferedInputStream.read(BufferedInputStream.java:241)
at java.io.ObjectInputStream$PeekInputStream.peek(ObjectInputStream.java:2248)
at java.io.ObjectInputStream$BlockDataInputStream.peek(ObjectInputStream.java:2541)
at java.io.ObjectInputStream$BlockDataInputStream.peekByte(ObjectInputStream.java:2551)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1296)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:350)
at ABC.PhotoshopApp$$anonfun$ABC$PhotoshopApp$$fileOpenPicture$1.apply$mcVI$sp(PhotoshopApp.scala:32)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:75)
at ABC.PhotoshopApp$.ABC$PhotoshopApp$$fileOpenPicture(PhotoshopApp.scala:31)
at ABC.PhotoshopApp$$anon$9$$anon$11$$anon$1$$anonfun$2.apply$mcV$sp(PhotoshopApp.scala:111)
at scala.swing.Action$$anon$2.apply(Action.scala:60)
at scala.swing.Action$$anon$1.actionPerformed(Action.scala:78)
at javax.swing.AbstractButton.fireActionPerformed(AbstractButton.java:2028)
at javax.swing.AbstractButton$Handler.actionPerformed(AbstractButton.java:2351)
at javax.swing.DefaultButtonModel.fireActionPerformed(DefaultButtonModel.java:387)
at javax.swing.DefaultButtonModel.setPressed(DefaultButtonModel.java:242)
at javax.swing.AbstractButton.doClick(AbstractButton.java:389)
at javax.swing.plaf.basic.BasicMenuItemUI.doClick(BasicMenuItemUI.java:809)
at com.apple.laf.AquaMenuItemUI.doClick(AquaMenuItemUI.java:137)
at javax.swing.plaf.basic.BasicMenuItemUI$Handler.menuDragMouseReleased(BasicMenuItemUI.java:913)
at javax.swing.JMenuItem.fireMenuDragMouseReleased(JMenuItem.java:568)
at javax.swing.JMenuItem.processMenuDragMouseEvent(JMenuItem.java:465)
at javax.swing.JMenuItem.processMouseEvent(JMenuItem.java:411)
at javax.swing.MenuSelectionManager.processMouseEvent(MenuSelectionManager.java:305)
at javax.swing.plaf.basic.BasicMenuItemUI$Handler.mouseReleased(BasicMenuItemUI.java:852)
at java.awt.Component.processMouseEvent(Component.java:6373)
at javax.swing.JComponent.processMouseEvent(JComponent.java:3267)
at java.awt.Component.processEvent(Component.java:6138)
at java.awt.Container.processEvent(Container.java:2085)
at java.awt.Component.dispatchEventImpl(Component.java:4735)
at java.awt.Container.dispatchEventImpl(Container.java:2143)
at java.awt.Component.dispatchEvent(Component.java:4565)
at java.awt.LightweightDispatcher.retargetMouseEvent(Container.java:4621)
at java.awt.LightweightDispatcher.processMouseEvent(Container.java:4282)
at java.awt.LightweightDispatcher.dispatchEvent(Container.java:4212)
at java.awt.Container.dispatchEventImpl(Container.java:2129)
at java.awt.Window.dispatchEventImpl(Window.java:2478)
at java.awt.Component.dispatchEvent(Component.java:4565)
at java.awt.EventQueue.dispatchEventImpl(EventQueue.java:679)
at java.awt.EventQueue.access$000(EventQueue.java:85)
at java.awt.EventQueue$1.run(EventQueue.java:638)
at java.awt.EventQueue$1.run(EventQueue.java:636)
... 14 more
This only happens when I read in an object to my buffer that keeps track of the objects. Moreover, I am able to save the file correctly (as I have done tests to ensure everything got there). Anyone have any ideas what is going on here?
You are closing your stream ois at the end of each iteration, inside the loop. On the next iteration you then try to read from it again at the top of the loop, which obviously fails with an IOException: Stream closed. Move ois.close() outside the loop, after all objects have been read.
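A minimal sketch of the corrected structure, in Java for illustration (file stands in for chooser.selectedFile): close the stream once after the loop, ideally in a finally block. Note also that Scala's 0 to toRead is inclusive, so the original loop attempts one more read than readInt() announced:
import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.ObjectInputStream;

ObjectInputStream ois = new ObjectInputStream(
        new BufferedInputStream(new FileInputStream(file)));
try {
    int toRead = ois.readInt();
    for (int i = 0; i < toRead; i++) {  // exclusive bound: exactly toRead objects
        Object obj = ois.readObject();
        // ... match on obj and append it to the buffer ...
    }
} finally {
    ois.close();  // moved out of the loop: close once, after all reads
}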