Vert.x: PostgreSQL result containing DateTime not getting added to JsonArray

I have a SQL query that returns a DateTime as one of its columns. I am getting an error when it is being added to the JsonArray.
Stack Trace:
SEVERE: An exception occurred
java.lang.IllegalStateException: Illegal type in JsonObject: class org.joda.time.DateTime
at io.vertx.core.json.Json.checkAndCopy(Json.java:120)
at io.vertx.core.json.JsonArray.add(JsonArray.java:437)
at io.vertx.ext.asyncsql.impl.AsyncSQLConnectionImpl$2.apply(AsyncSQLConnectionImpl.java:286)
at io.vertx.ext.asyncsql.impl.AsyncSQLConnectionImpl$2.apply(AsyncSQLConnectionImpl.java:274)
at scala.collection.Iterator$class.foreach(Iterator.scala:743)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1195)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at com.github.mauricio.async.db.general.ArrayRowData.foreach(ArrayRowData.scala:22)
at io.vertx.ext.asyncsql.impl.AsyncSQLConnectionImpl.rowToJsonArray(AsyncSQLConnectionImpl.java:274)
at io.vertx.ext.asyncsql.impl.AsyncSQLConnectionImpl.access$000(AsyncSQLConnectionImpl.java:46)
at io.vertx.ext.asyncsql.impl.AsyncSQLConnectionImpl$1.apply(AsyncSQLConnectionImpl.java:265)
at io.vertx.ext.asyncsql.impl.AsyncSQLConnectionImpl$1.apply(AsyncSQLConnectionImpl.java:262)
at scala.collection.Iterator$class.foreach(Iterator.scala:743)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1195)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at com.github.mauricio.async.db.general.MutableResultSet.foreach(MutableResultSet.scala:27)
at io.vertx.ext.asyncsql.impl.AsyncSQLConnectionImpl.rowDataSeqToJsonArray(AsyncSQLConnectionImpl.java:262)
at io.vertx.ext.asyncsql.impl.AsyncSQLConnectionImpl.queryResultToResultSet(AsyncSQLConnectionImpl.java:250)
at io.vertx.ext.asyncsql.impl.AsyncSQLConnectionImpl.lambda$null$10(AsyncSQLConnectionImpl.java:130)
at io.vertx.ext.asyncsql.impl.ScalaUtils$3.apply(ScalaUtils.java:81)
at io.vertx.ext.asyncsql.impl.ScalaUtils$3.apply(ScalaUtils.java:77)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
at io.vertx.ext.asyncsql.impl.VertxEventLoopExecutionContext.lambda$execute$5(VertxEventLoopExecutionContext.java:70)
at io.vertx.core.impl.ContextImpl.lambda$wrapTask$18(ContextImpl.java:335)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:358)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:112)
at java.lang.Thread.run(Thread.java:745)
I believe the error is occurring here:
private List<JsonArray> rowDataSeqToJsonArray(com.github.mauricio.async.db.ResultSet set) {
  List<JsonArray> list = new ArrayList<>();
  set.foreach(new AbstractFunction1<RowData, Void>() {
    @Override
    public Void apply(RowData row) {
      list.add(rowToJsonArray(row));
      return null;
    }
  });
  return list;
}
My RowData looks like this:
Some(MutableResultSet(ArrayRowData(, , hphan, 2016-04-26T00:00:00.000-07:00, 1), ArrayRowData(, , hphan, 2016-04-28T00:00:00.000-07:00, 2), ArrayRowData(BXBSVA, BLUE CROSS BLUE SHIELD VIRGINIA, null, 2016-04-26T00:00:00.000-07:00, 1)))
Does anyone know how to fix this?

This should work with the upcoming 3.3 release. However, note that the asynchronous client is marked as a technology preview, so there are rough edges.
That said, if you need more features, I'd suggest using the JDBC client instead. While not as fast, it is feature complete.
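Until then, a possible workaround is to map the rows yourself and convert Joda-Time values to strings before they reach Vert.x's JSON classes. A minimal sketch, assuming you can intercept the raw column values in your own code (the helper below is hypothetical, not part of the async client):
import io.vertx.core.json.JsonArray;
import org.joda.time.DateTime;
import org.joda.time.LocalDate;

// Hypothetical helper: build a JsonArray from raw column values, converting
// Joda-Time objects (which JsonArray rejects) into ISO-8601 strings first.
public final class JsonSafeRows {

  private JsonSafeRows() {
  }

  public static JsonArray toJsonArray(Iterable<Object> columnValues) {
    JsonArray row = new JsonArray();
    for (Object value : columnValues) {
      if (value == null) {
        row.addNull();
      } else if (value instanceof DateTime || value instanceof LocalDate) {
        row.add(value.toString()); // e.g. 2016-04-26T00:00:00.000-07:00
      } else {
        row.add(value);
      }
    }
    return row;
  }
}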

Related

Apache Beam Update side input from database

I have an Apache Beam pipeline that processes unbounded data and writes the results into MySQL. In this process, I need to look up a username from a user identifier, so I pass a map of user IDs to usernames into the pipeline as a side input.
Since we keep adding users, the side input needs to be updated periodically. I've gone through the side input patterns "Slowly updating global window side inputs" and "Slowly updating side input using windowing".
I lean towards the first because new users are not added that frequently.
Reading users from the database using JdbcIO:
final PCollection<KV<String, String>> userCollection =
    pipeline.apply("read-users-info", jdbcMgr.readUserInfo(userDsFn));
Reading data from MySQL
public PTransform<PBegin, PCollection<KV<String, String>>> readUserInfo(
    SerializableFunction<Void, DataSource> dataSourceProviderFn) {
  LOG.info("reading users");
  return JdbcIO.<KV<String, String>>read()
      .withDataSourceProviderFn(dataSourceProviderFn)
      .withQuery("select id, concat(first_name, ' ', last_name) from users")
      .withCoder(KvCoder.of(StringUtf8Coder.of(), StringUtf8Coder.of()))
      .withRowMapper(
          (JdbcIO.RowMapper<KV<String, String>>) rs -> KV.of(rs.getString(1), rs.getString(2)));
}
Updating the side input using the global window:
final PCollectionView<Map<String, String>> userMap =
    pipeline
        .apply(GenerateSequence.from(0).withRate(1, Duration.standardSeconds(30)))
        .apply(Window.into(FixedWindows.of(Duration.standardSeconds(30))))
        .apply(Sum.longsGlobally().withoutDefaults())
        .apply(
            ParDo.of(
                new DoFn<Long, Map<String, String>>() {
                  @ProcessElement
                  public void process(
                      @Element Long input,
                      @Timestamp Instant timestamp,
                      OutputReceiver<PCollection<KV<String, String>>> o) {
                    o.output(userCollection);
                  }
                }))
        .apply(
            Window.<Map<String, String>>into(new GlobalWindows())
                .triggering(Repeatedly.forever(AfterProcessingTime.pastFirstElementInPane()))
                .discardingFiredPanes())
        .apply(View.asSingleton());
I'm sure there is an issue with o.output(userCollection); can you please help me out here?
I'm running into the following issue.
org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: com.streaming.pipelines.ContactPipeline$1, @ProcessElement process(Long, Instant, OutputReceiver), @ProcessElement process(Long, Instant, OutputReceiver), parameter of type DoFn.OutputReceiver<PCollection<KV<String, String>>> at index 2: OutputReceiver should be parameterized by java.util.Map<java.lang.String, java.lang.String>
at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:372)
at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:222)
at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:114)
at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:812)
at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:246)
at org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:1054)
at org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1132)
at org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:28)
at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1132)
Caused by: java.lang.IllegalArgumentException: com.streaming.pipelines.ContactPipeline$1, @ProcessElement process(Long, Instant, OutputReceiver), @ProcessElement process(Long, Instant, OutputReceiver), parameter of type DoFn.OutputReceiver<PCollection<KV<String, String>>> at index 2: OutputReceiver should be parameterized by java.util.Map<java.lang.String, java.lang.String>
at org.apache.beam.sdk.transforms.reflect.DoFnSignatures$ErrorReporter.throwIllegalArgument(DoFnSignatures.java:2397)
at org.apache.beam.sdk.transforms.reflect.DoFnSignatures$ErrorReporter.checkArgument(DoFnSignatures.java:2403)
at org.apache.beam.sdk.transforms.reflect.DoFnSignatures.analyzeExtraParameter(DoFnSignatures.java:1406)
at org.apache.beam.sdk.transforms.reflect.DoFnSignatures.analyzeProcessElementMethod(DoFnSignatures.java:1230)
at org.apache.beam.sdk.transforms.reflect.DoFnSignatures.parseSignature(DoFnSignatures.java:638)
at org.apache.beam.sdk.transforms.reflect.DoFnSignatures.lambda$getSignature$0(DoFnSignatures.java:294)
at java.util.HashMap.computeIfAbsent(HashMap.java:1127)
at org.apache.beam.sdk.transforms.reflect.DoFnSignatures.getSignature(DoFnSignatures.java:294)
at org.apache.beam.sdk.transforms.ParDo.validate(ParDo.java:614)
at org.apache.beam.sdk.transforms.ParDo.of(ParDo.java:403)
at com.streaming.pipelines.ContactPipeline.buildPipeline(ContactPipeline.java:61)
at com.streaming.pipelines.StreamingApp.main(StreamingApp.java:28)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:355)
... 8 more
Thanks,
Suresh
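As the error suggests, the OutputReceiver must emit the Map<String, String> itself, not a PCollection, so the user lookup has to happen inside the DoFn. A minimal sketch of that idea, assuming plain JDBC inside @ProcessElement (the connection string is a placeholder, and java.sql and java.util.HashMap imports are needed in addition to the Beam ones already shown):
final PCollectionView<Map<String, String>> userMap =
    pipeline
        .apply(GenerateSequence.from(0).withRate(1, Duration.standardSeconds(30)))
        .apply(Window.into(FixedWindows.of(Duration.standardSeconds(30))))
        .apply(Sum.longsGlobally().withoutDefaults())
        .apply(
            ParDo.of(
                new DoFn<Long, Map<String, String>>() {
                  @ProcessElement
                  public void process(ProcessContext c) throws Exception {
                    // Re-read the users table on every tick and emit the whole map.
                    Map<String, String> users = new HashMap<>();
                    try (Connection conn =
                            DriverManager.getConnection("jdbc:mysql://host/db", "user", "pass");
                        Statement stmt = conn.createStatement();
                        ResultSet rs =
                            stmt.executeQuery(
                                "select id, concat(first_name, ' ', last_name) from users")) {
                      while (rs.next()) {
                        users.put(rs.getString(1), rs.getString(2));
                      }
                    }
                    c.output(users);
                  }
                }))
        .apply(
            Window.<Map<String, String>>into(new GlobalWindows())
                .triggering(Repeatedly.forever(AfterProcessingTime.pastFirstElementInPane()))
                .discardingFiredPanes())
        .apply(View.asSingleton());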

How to handle failures when publishing to pubsub using pubsub write in apache beam

I'm developing an Apache Beam pipeline to publish unbounded data to a Pub/Sub topic. Publishing is done using the Pub/Sub IO connector PubsubIO.writeMessages().
If the Pub/Sub connection fails while the pipeline is processing, I need to capture the connection failure and identify the data that was being processed at the time. But I couldn't find a straightforward failure-handling mechanism in Apache Beam's Pub/Sub write.
When I test this with a bad Pub/Sub connection, the pipeline keeps trying to connect, throwing the following exception for a while, and if the connection remains unsuccessful the pipeline execution fails.
com.google.api.gax.rpc.UnavailableException: io.grpc.StatusRuntimeException: UNAVAILABLE: io exception
at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:69)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
at com.google.api.gax.grpc.GrpcExceptionCallable$ExceptionTransformingFuture.onFailure(GrpcExceptionCallable.java:97)
at com.google.api.core.ApiFutures$1.onFailure(ApiFutures.java:68)
at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1041)
at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1215)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:983)
at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:771)
at io.grpc.stub.ClientCalls$GrpcFuture.setException(ClientCalls.java:563)
at io.grpc.stub.ClientCalls$UnaryStreamToFuture.onClose(ClientCalls.java:533)
at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:553)
at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:68)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:739)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:718)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: io.grpc.StatusRuntimeException: UNAVAILABLE: io exception
at io.grpc.Status.asRuntimeException(Status.java:535)
... 10 more
Caused by: io.grpc.netty.shaded.io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: no further information: /127.0.0.1:58843
Caused by: java.net.ConnectException: Connection refused: no further information
at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:779)
at io.grpc.netty.shaded.io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:330)
at io.grpc.netty.shaded.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:334)
at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:702)
at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
at io.grpc.netty.shaded.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.grpc.netty.shaded.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.grpc.netty.shaded.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:834)
I tried to catch this exception from the Pub/Sub write transform, and that is not working either.
So my question is: is there any way to capture the above exception and continue the pipeline until the connection is successful? My Pub/Sub write code snippet is as follows:
public class PubSubWrite extends PTransform<PCollection<String>, PDone> {

  private final String outputTopic;

  public PubSubWrite(String outputTopic) {
    this.outputTopic = outputTopic;
  }

  @Override
  public PDone expand(PCollection<String> input) {
    return input
        .apply(
            "convertMessagesToPubsubMessages",
            MapElements.into(TypeDescriptor.of(PubsubMessage.class))
                .via(
                    (String json) ->
                        new PubsubMessage(json.getBytes(Charsets.UTF_8), ImmutableMap.of("SOURCE", "TEST"))))
        .apply(
            "writePubsubMessagesToPubSub", PubsubIO.writeMessages().to(outputTopic));
  }
}
There is no native API for error handling in PubsubIO transforms, as you can see in the documentation.
I recommend opening a feature request on the issue tracker asking for an error-handling implementation in the Java library's PubsubIO connector.
In the meantime, you could return an empty error collection or catch the exception yourself.
Example for the empty error:
@Override
public WithFailures.Result<PDone, PubsubMessage> expand(PCollection<String> input) {
  PDone done = input
      .apply(
          "convertMessagesToPubsubMessages",
          MapElements.into(TypeDescriptor.of(PubsubMessage.class))
              .via(
                  (String json) ->
                      new PubsubMessage(json.getBytes(Charsets.UTF_8), ImmutableMap.of("SOURCE", "TEST"))))
      .apply(
          "writePubsubMessagesToPubSub", PubsubIO.writeMessages().to(outputTopic));
  return WithFailures.Result.of(done, EmptyErrors.in(input.getPipeline()));
}

private static class EmptyErrors extends PTransform<PBegin, PCollection<PubsubMessage>> {

  /** Creates an empty error collection in the given pipeline. */
  public static PCollection<PubsubMessage> in(Pipeline pipeline) {
    return pipeline.apply(new EmptyErrors());
  }

  @Override
  public PCollection<PubsubMessage> expand(PBegin input) {
    return input.apply(Create.empty(PubsubMessageWithAttributesCoder.of()));
  }
}
Usually such failures are retried by the runner. For example, the Dataflow runner will retry failures indefinitely for streaming jobs. Note that this is in addition to any local (VM-level) retries for errors that produce retriable HTTP error codes (for example, 5xx). So the pipeline should continue once you fix the underlying issue. But note that your backlog might grow significantly if the pipeline is unable to process data for some time, so you might see a delay.

Big number of values IN Query with ItemReader

The following ItemReader gets a list of thousands of accounts (acc).
The database the ItemReader connects to in order to retrieve the data is Hive. I don't have permission to create any tables, only to read.
@Bean
@StepScope
public ItemReader<OmsDto> omsItemReader(@Value("#{stepExecutionContext[acc]}") List<String> accountList) {
  String inParams = String.join(",", accountList.stream().map(id ->
      "'" + id + "'").collect(Collectors.toList()));
  String query = String.format("SELECT ..... account IN (%s)", inParams);
  BeanPropertyRowMapper<OmsDto> rowMapper = new BeanPropertyRowMapper<>(OmsDto.class);
  rowMapper.setPrimitivesDefaultedForNullValue(true);
  JdbcCursorItemReader<OmsDto> reader = new JdbcCursorItemReader<OmsDto>();
  reader.setVerifyCursorPosition(false);
  reader.setDataSource(hiveDataSource());
  reader.setRowMapper(rowMapper);
  reader.setSql(query);
  reader.open(new ExecutionContext());
  return reader;
}
This is the error message that I get when using ItemReader:
Caused by: org.springframework.batch.item.ItemStreamException: Failed to initialize the reader
at org.springframework.batch.item.support.AbstractItemCountingItemStreamItemReader.open(AbstractItemCountingItemStreamItemReader.java:153) ~[spring-batch-infrastructure-4.2.4.RELEASE.jar:4.2.4.RELEASE]
Caused by: java.sql.SQLException: Error executing query
at com.facebook.presto.jdbc.PrestoStatement.internalExecute(PrestoStatement.java:279) ~[presto-jdbc-0.243.2.jar:0.243.2-128118e]
at com.facebook.presto.jdbc.PrestoStatement.execute(PrestoStatement.java:228) ~[presto-jdbc-0.243.2.jar:0.243.2-128118e]
at com.facebook.presto.jdbc.PrestoPreparedStatement.<init>(PrestoPreparedStatement.java:84) ~[presto-jdbc-0.243.2.jar:0.243.2-128118e]
at com.facebook.presto.jdbc.PrestoConnection.prepareStatement(PrestoConnection.java:130) ~[presto-jdbc-0.243.2.jar:0.243.2-128118e]
at com.facebook.presto.jdbc.PrestoConnection.prepareStatement(PrestoConnection.java:300) ~[presto-jdbc-0.243.2.jar:0.243.2-128118e]
at org.springframework.batch.item.database.JdbcCursorItemReader.openCursor(JdbcCursorItemReader.java:121) ~[spring-batch-infrastructure-4.2.4.RELEASE.jar:4.2.4.RELEASE]
... 63 common frames omitted
Caused by: java.lang.RuntimeException: Error fetching next at https://prestoanalytics-ch2-p.sys.comcast.net:6443/v1/statement/executing/20201118_131314_11079_v3w47/yf55745951e0beccc234c98f36005723457073854/0 returned an invalid response: JsonResponse{statusCode=502, statusMessage=Bad Gateway, headers={cache-control=[no-cache], content-length=[107], content-type=[text/html]}, hasValue=false} [Error: <html><body><h1>502 Bad Gateway</h1>
The server returned an invalid or incomplete response.
</body></html>
]
I was sure the root cause was the driver, but I tested the driver with the same SQL, this time using DriverManager, and it ran perfectly.
@Component
public class OmsItemReader implements ItemReader<OmsDto>, StepExecutionListener {

  private ItemReader<OmsDto> delegate;

  public OmsItemReader() {
    Properties properties = new Properties();
    properties.setProperty("user", "....");
    properties.setProperty("password", "...");
    properties.setProperty("SSL", "true");
    Connection connection = null;
    try {
      connection = DriverManager.getConnection("jdbc:presto://.....", properties);
      Statement statement = connection.createStatement();
      ResultSet resultSet = statement.executeQuery(
I am not sure what the difference is. Is it the driver or Spring Batch?
I am looking for a workaround. How can I retrieve thousands of accounts via an IN clause with Spring Batch?
Thank you
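One workaround, sketched below, is to split the account list into smaller IN (...) chunks and run one query per chunk, so no single statement overwhelms the Presto gateway. The chunk size, method name, and JdbcTemplate usage are illustrative assumptions, not something from the thread:
// Hypothetical sketch: one query per chunk of accounts instead of a single huge IN list.
private static final int CHUNK_SIZE = 500; // assumption; tune to what the gateway accepts

public List<OmsDto> readAccountsInChunks(List<String> accountList, DataSource dataSource) {
  BeanPropertyRowMapper<OmsDto> rowMapper = new BeanPropertyRowMapper<>(OmsDto.class);
  rowMapper.setPrimitivesDefaultedForNullValue(true);
  JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);

  List<OmsDto> results = new ArrayList<>();
  for (int from = 0; from < accountList.size(); from += CHUNK_SIZE) {
    List<String> chunk = accountList.subList(from, Math.min(from + CHUNK_SIZE, accountList.size()));
    String inParams = chunk.stream().map(id -> "'" + id + "'").collect(Collectors.joining(","));
    // The SELECT column list is elided here, as in the original question.
    String query = String.format("SELECT ..... account IN (%s)", inParams);
    results.addAll(jdbcTemplate.query(query, rowMapper));
  }
  return results;
}
The combined list could then be handed to a delegating reader such as Spring Batch's IteratorItemReader, so the step contract stays the same.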

Create a Bigtable table using the HBase client API

I am working with Google Cloud Bigtable using the HBase client API in Scala. I am trying to create a table with a single column family, but I am getting errors.
Below is the code I wrote:
object TestBigtable {

  val columnFamilyName = Bytes.toBytes("cf1")

  def createConnection(ProjectId: String, InstanceID: String): Connection = {
    BigtableConfiguration.connect(ProjectId, InstanceID)
  }

  def createTableIfNotExists(connection: Connection, name: String) = {
    val tableName = TableName.valueOf(name)
    val admin = connection.getAdmin()
    if (!admin.tableExists(tableName)) {
      val tableDescriptor = new HTableDescriptor(tableName)
      tableDescriptor.addFamily(new HColumnDescriptor(columnFamilyName))
      admin.createTable(tableDescriptor)
    }
  }

  def runner(projectId: String, instanceId: String, tableName: String) = {
    val createTableConnection = createConnection(projectId, instanceId)
    try {
      createTableIfNotExists(createTableConnection, tableName)
    } finally {
      createTableConnection.close()
    }
  }
}
Once I execute my jar I get the following set of errors:
18/07/25 10:36:20 INFO com.google.cloud.bigtable.grpc.BigtableSession: Bigtable options: BigtableOptions{dataHost=bigtable.googleapis.com, adminHost=bigtableadmin.googleapis.com, port=443, projectId=renault-ftt, instanceId=testfordeletion, appProfileId=, userAgent=hbase-1.4.3, credentialType=DefaultCredentials, dataChannelCount=4, retryOptions=RetryOptions{retriesEnabled=true, allowRetriesWithoutTimestamp=false, statusToRetryOn=[UNAUTHENTICATED, ABORTED, DEADLINE_EXCEEDED, UNAVAILABLE], initialBackoffMillis=5, maxElapsedBackoffMillis=60000, backoffMultiplier=2.0, streamingBufferSize=60, readPartialRowTimeoutMillis=60000, maxScanTimeoutRetries=3}, bulkOptions=BulkOptions{asyncMutatorCount=2, useBulkApi=true, bulkMaxKeyCount=125, bulkMaxRequestSize=1048576, autoflushMs=0, maxInflightRpcs=40, maxMemory=97307852, enableBulkMutationThrottling=false, bulkMutationRpcTargetMs=100}, callOptionsConfig=CallOptionsConfig{useTimeout=false, shortRpcTimeoutMs=60000, longRpcTimeoutMs=600000}, usePlaintextNegotiation=false, useCachedDataPool=false}.
18/07/25 10:36:20 INFO com.google.cloud.bigtable.grpc.io.OAuthCredentialsCache: Refreshing the OAuth token
Exception in thread "grpc-default-executor-0" java.lang.IllegalAccessError: tried to access field com.google.protobuf.AbstractMessage.memoizedSize from class com.google.bigtable.admin.v2.ListTablesRequest
at com.google.bigtable.admin.v2.ListTablesRequest.getSerializedSize(ListTablesRequest.java:236)
at io.grpc.protobuf.lite.ProtoInputStream.available(ProtoInputStream.java:108)
at io.grpc.internal.MessageFramer.getKnownLength(MessageFramer.java:204)
at io.grpc.internal.MessageFramer.writePayload(MessageFramer.java:136)
at io.grpc.internal.AbstractStream.writeMessage(AbstractStream.java:52)
at io.grpc.internal.DelayedStream$5.run(DelayedStream.java:218)
at io.grpc.internal.DelayedStream.drainPendingCalls(DelayedStream.java:132)
at io.grpc.internal.DelayedStream.setStream(DelayedStream.java:101)
at io.grpc.internal.DelayedClientTransport$PendingStream.createRealStream(DelayedClientTransport.java:361)
at io.grpc.internal.DelayedClientTransport$PendingStream.access$300(DelayedClientTransport.java:344)
at io.grpc.internal.DelayedClientTransport$5.run(DelayedClientTransport.java:302)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Could anyone help me with this, please?
Reposting the comment from Solomon as an answer:
io.grpc.protobuf.lite is in the stack. The Cloud Bigtable client was never tested with protobuf lite. A dependency graph would help. As a quick fix, you can also try the bigtable-hbase-1.x-shaded artifact instead of the bigtable-hbase-1.x artifact.
It's possible that your use of io.grpc.protobuf.lite is causing issues. As I understand it, io.grpc.protobuf.lite is mainly for use on Android clients.
Using the shaded artifact should prevent dependency conflicts, at the cost of a larger JAR size and a potentially larger memory footprint. You may also want to review these similar issue reports and how they were resolved:
https://groups.google.com/forum/#!topic/protobuf/_Yq0Dar_jhk
https://github.com/grpc/grpc-java/issues/2300

Handle JDBC exception in BIRT API

I have a scheduler job which is based on a standalone RunAndRenderTask. The report design connects to a remote mysql database to fetch data. The scheduler generates a PDF and emails the report as attachment to a set of people. This works as long as the database is available.
But when the database is unavailable, I can see the error in the logs, yet the RunAndRenderTask still generates a blank, useless PDF report, which the scheduler emails anyway. I need to be able to catch this exception and instead email another set of people who can fix the DB issue. I tried various things but couldn't figure out how to do it.
In the code below, I expect the API to throw an exception, and hence print "BirtException" or "Exception", but this code prints "Success" even when there is a JDBC exception.
Any help is appreciated.
Here's the code I have.
IReportEngine engine = null;
IRunAndRenderTask runAndRenderTask = null;
try {
EngineConfig config = new EngineConfig();
config.setEngineHome("birt-runtime-4_4_0/RuntimeEngine");
Platform.startup(config);
IReportEngineFactory factory = (IReportEngineFactory) Platform
.createFactoryObject(IReportEngineFactory.EXTENSION_REPORT_ENGINE_FACTORY);
engine = factory.createReportEngine(config);
IReportRunnable reportRunnable = engine.openReportDesign(DATA_PATH + "sample.rptdesign");
runAndRenderTask = engine.createRunAndRenderTask(reportRunnable);
PDFRenderOption option = new PDFRenderOption();
option.setOutputFileName(DATA_PATH + "output.pdf");
option.setOutputFormat("pdf");
runAndRenderTask.setRenderOption(option);
runAndRenderTask.run();
System.out.println("Success!");
} catch (BirtException e) {
System.out.println("BirtException");
e.printStackTrace();
} catch (Throwable e) {
System.out.println("Exception");
e.printStackTrace();
} finally {
if (runAndRenderTask != null) {
runAndRenderTask.close();
}
if (engine != null) {
engine.destroy();
}
Platform.shutdown();
RegistryProviderFactory.releaseDefault();
}
This is the exception stacktrace, which never gets propagated back by RunAndRenderTask.run()
INFO: Loaded JDBC driver class in class path: com.mysql.jdbc.Driver
Jun 26, 2014 9:26:43 PM org.eclipse.birt.data.engine.odaconsumer.ConnectionManager openConnection
SEVERE: Unable to open connection.
org.eclipse.birt.report.data.oda.jdbc.JDBCException: There is an error in get connection, Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server..
at org.eclipse.birt.report.data.oda.jdbc.JDBCDriverManager.doConnect(JDBCDriverManager.java:336)
at org.eclipse.birt.report.data.oda.jdbc.JDBCDriverManager.getConnection(JDBCDriverManager.java:235)
at org.eclipse.birt.report.data.oda.jdbc.Connection.connectByUrl(Connection.java:252)
at org.eclipse.birt.report.data.oda.jdbc.Connection.open(Connection.java:162)
at org.eclipse.datatools.connectivity.oda.consumer.helper.OdaConnection.open(OdaConnection.java:250)
at org.eclipse.birt.data.engine.odaconsumer.ConnectionManager.openConnection(ConnectionManager.java:165)
at org.eclipse.birt.data.engine.executor.DataSource.newConnection(DataSource.java:224)
at org.eclipse.birt.data.engine.executor.DataSource.open(DataSource.java:212)
at org.eclipse.birt.data.engine.impl.DataSourceRuntime.openOdiDataSource(DataSourceRuntime.java:217)
at org.eclipse.birt.data.engine.impl.QueryExecutor.openDataSource(QueryExecutor.java:435)
at org.eclipse.birt.data.engine.impl.QueryExecutor.prepareExecution(QueryExecutor.java:322)
at org.eclipse.birt.data.engine.impl.PreparedQuery.doPrepare(PreparedQuery.java:463)
at org.eclipse.birt.data.engine.impl.PreparedDataSourceQuery.produceQueryResults(PreparedDataSourceQuery.java:190)
at org.eclipse.birt.data.engine.impl.PreparedDataSourceQuery.execute(PreparedDataSourceQuery.java:178)
at org.eclipse.birt.data.engine.impl.PreparedOdaDSQuery.execute(PreparedOdaDSQuery.java:178)
at org.eclipse.birt.report.data.adapter.impl.DataRequestSessionImpl.execute(DataRequestSessionImpl.java:637)
at org.eclipse.birt.report.engine.data.dte.DteDataEngine.doExecuteQuery(DteDataEngine.java:152)
at org.eclipse.birt.report.engine.data.dte.AbstractDataEngine.execute(AbstractDataEngine.java:275)
at org.eclipse.birt.report.engine.executor.ExtendedGenerateExecutor.executeQueries(ExtendedGenerateExecutor.java:205)
at org.eclipse.birt.report.engine.executor.ExtendedGenerateExecutor.execute(ExtendedGenerateExecutor.java:65)
at org.eclipse.birt.report.engine.executor.ExtendedItemExecutor.execute(ExtendedItemExecutor.java:62)
at org.eclipse.birt.report.engine.internal.executor.dup.SuppressDuplicateItemExecutor.execute(SuppressDuplicateItemExecutor.java:43)
at org.eclipse.birt.report.engine.internal.executor.wrap.WrappedReportItemExecutor.execute(WrappedReportItemExecutor.java:46)
at org.eclipse.birt.report.engine.internal.executor.l18n.LocalizedReportItemExecutor.execute(LocalizedReportItemExecutor.java:34)
at org.eclipse.birt.report.engine.layout.html.HTMLBlockStackingLM.layoutNodes(HTMLBlockStackingLM.java:65)
at org.eclipse.birt.report.engine.layout.html.HTMLPageLM.layout(HTMLPageLM.java:92)
at org.eclipse.birt.report.engine.layout.html.HTMLReportLayoutEngine.layout(HTMLReportLayoutEngine.java:100)
at org.eclipse.birt.report.engine.api.impl.RunAndRenderTask.doRun(RunAndRenderTask.java:181)
at org.eclipse.birt.report.engine.api.impl.RunAndRenderTask.run(RunAndRenderTask.java:77)
at test.ReportTester.test(ReportTester.java:50)
at test.ReportTester.main(ReportTester.java:19)
In addition to catching BirtException, you should be aware that the way BIRT handles JavaScript errors is, by default, browser-like. That is, BIRT tries to continue generating the report.
There are different ways to handle this for production-quality code (where task is a RunAndRenderTask or RunTask or RenderTask):
Use task.setErrorHandlingOption(CANCEL_ON_ERROR) (see BIRT docs). Personally, I have never tried this.
After task.run(...), but before task.close(), call task.getErrors(). If this list is not empty, your code should output these messages and throw an exception (a sketch of this check follows below).
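A minimal sketch of that second option, assuming a non-empty error list should abort the job so the scheduler can alert the DB team instead of mailing a blank PDF:
task.run();

// getErrors() returns the problems BIRT collected while generating the report;
// treat a non-empty list as a failure instead of reporting success.
java.util.List<?> errors = task.getErrors();
if (errors != null && !errors.isEmpty()) {
    for (Object error : errors) {
        System.err.println("Report error: " + error);
    }
    throw new RuntimeException("Report generation produced " + errors.size() + " error(s)");
}
task.close();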
You need to add a catch block that catches EngineException, not the JDBC exception.
You can find javadocs at link.