How to set Spring Kafka consumer max attempts when using Schema Registry

I am developing a Spring Boot server with Spring Kafka (1.3.2.RELEASE), Apache Avro (1.8.2) and Confluent's Schema Registry (3.1.2). Every time the Kafka listener gets a message, it finds the schema id in the message and fetches the Avro schema from the registry server by that id. The problem is that if the Schema Registry server is down, my listener keeps sending HTTP requests to the registry server to fetch the schema whenever a message arrives (and prints a large amount of error logs), and it blocks all subsequent Kafka messages because the offset never moves on.
16:56:41.541 ERROR KafkaMessageListenerContainer$ListenerConsumer - - org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1 - Container exception
org.apache.kafka.common.errors.SerializationException: Error deserializing key/value for partition trade-0 at offset 810845
Caused by: org.apache.kafka.common.errors.SerializationException: Error deserializing Avro message for id 21
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at java.net.Socket.connect(Socket.java:538)
at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:463)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:558)
at sun.net.www.http.HttpClient.<init>(HttpClient.java:242)
at sun.net.www.http.HttpClient.New(HttpClient.java:339)
at sun.net.www.http.HttpClient.New(HttpClient.java:357)
at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1202)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1138)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1032)
at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:966)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1546)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1474)
at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
at io.confluent.kafka.schemaregistry.client.rest.RestService.sendHttpRequest(RestService.java:153)
at io.confluent.kafka.schemaregistry.client.rest.RestService.httpRequest(RestService.java:187)
at io.confluent.kafka.schemaregistry.client.rest.RestService.getId(RestService.java:323)
at io.confluent.kafka.schemaregistry.client.rest.RestService.getId(RestService.java:316)
at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getSchemaByIdFromRegistry(CachedSchemaRegistryClient.java:63)
at io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getBySubjectAndID(CachedSchemaRegistryClient.java:118)
at io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer.deserialize(AbstractKafkaAvroDeserializer.java:121)
at io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer.deserialize(AbstractKafkaAvroDeserializer.java:92)
at io.confluent.kafka.serializers.KafkaAvroDeserializer.deserialize(KafkaAvroDeserializer.java:54)
at org.apache.kafka.common.serialization.ExtendedDeserializer$Wrapper.deserialize(ExtendedDeserializer.java:65)
at org.apache.kafka.common.serialization.ExtendedDeserializer$Wrapper.deserialize(ExtendedDeserializer.java:55)
at org.apache.kafka.clients.consumer.internals.Fetcher.parseRecord(Fetcher.java:918)
at org.apache.kafka.clients.consumer.internals.Fetcher.access$2600(Fetcher.java:93)
at org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.fetchRecords(Fetcher.java:1095)
at org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.access$1200(Fetcher.java:944)
at org.apache.kafka.clients.consumer.internals.Fetcher.fetchRecords(Fetcher.java:567)
at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:528)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1086)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1043)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:614)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
I have tried using a RetryTemplate to set the max attempts, but it didn't work; it seems the RetryTemplate only applies inside my listener method. I also didn't find any helpful config on Confluent's website.

For now I have replaced the KafkaAvroDeserializer with a CustomAvroDeserializer, which extends KafkaAvroDeserializer and overrides its deserialize method, wrapping the body in a try-catch, like this:
@Log4j
public class CustomAvroDeserializer extends KafkaAvroDeserializer {
    @Override
    public Object deserialize(String s, byte[] bytes) {
        try {
            return this.deserialize(bytes);
        } catch (Exception e) {
            // Log and swallow the failure so the offset can move past the record
            log.error("encountered a problem when deserializing a message with the schema registry", e);
            return null;
        }
    }
}
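
To make the workaround take effect, the custom class still has to be registered as the consumer's value deserializer. Here is a minimal wiring sketch, assuming a JavaConfig-style consumer factory; the broker address, group id and registry URL are placeholders:

@Bean
public ConsumerFactory<String, Object> consumerFactory() {
    // Hedged sketch: only the two deserializer entries matter here; the
    // other values are illustrative and must match your environment.
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, CustomAvroDeserializer.class);
    // Read by the Confluent deserializer that CustomAvroDeserializer extends
    props.put("schema.registry.url", "http://localhost:8081");
    return new DefaultKafkaConsumerFactory<>(props);
}

Note that returning null from deserialize means the listener must be prepared to receive null payloads for the affected records.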

Related

Failed to deserialize data for topic to protobuf sink connector

I can consume the produced Protobuf message data from the Kafka topic using tools like Conduktor. However, when I try to poll data using JdbcSinkConnector, it throws an exception like:
org.apache.kafka.common.errors.SerializationException: Error deserializing Protobuf message
Please take a look at the following error details, which I get when I call the Kafka Connect API:
URL: http://localhost:8083/connectors?expand=info&expand=status
JSON response and trace:
{
  "sink_postgres_03_proto":{
    "info":{
      "name":"sink_postgres_03_proto",
      "config":{
        "connector.class":"io.confluent.connect.jdbc.JdbcSinkConnector",
        "connection.password":"54321",
        "topics":"order-messages",
        "value.converter.schema.registry.url":"http://localhost:8081",
        "key.converter.schemas.enable":"false",
        "auto.evolve":"true",
        "connection.user":"postgres",
        "value.converter.schemas.enable":"true",
        "name":"sink_postgres_03_proto",
        "auto.create":"true",
        "connection.url":"jdbc:postgresql://localhost:5432/CallHistoryService",
        "value.converter":"io.confluent.connect.protobuf.ProtobufConverter",
        "insert.mode":"insert",
        "key.converter":"org.apache.kafka.connect.storage.StringConverter"
      },
      "tasks":[
        {
          "connector":"sink_postgres_03_proto",
          "task":0
        }
      ],
      "type":"sink"
    },
    "status":{
      "name":"sink_postgres_03_proto",
      "connector":{
        "state":"RUNNING",
        "worker_id":"kafka-connect:8083"
      },
      "tasks":[
        {
          "id":0,
          "state":"FAILED",
          "worker_id":"kafka-connect:8083",
"trace":"org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler\n\tat org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:196)\n\tat org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:122)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:495)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:472)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:322)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:226)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:198)\n\tat org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:185)\n\tat org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:235)\n\tat java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)\n\tat java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.base/java.lang.Thread.run(Thread.java:834)\nCaused by: org.apache.kafka.connect.errors.DataException: Failed to deserialize data for topic order-messages to Protobuf: \n\tat io.confluent.connect.protobuf.ProtobufConverter.toConnectData(ProtobufConverter.java:123)\n\tat org.apache.kafka.connect.storage.Converter.toConnectData(Converter.java:87)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.lambda$convertAndTransformRecord$1(WorkerSinkTask.java:495)\n\tat org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:146)\n\tat org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:180)\n\t... 
13 more\nCaused by: org.apache.kafka.common.errors.SerializationException: Error deserializing Protobuf message for id 1\nCaused by: java.net.ConnectException: Connection refused (Connection refused)\n\tat java.base/java.net.PlainSocketImpl.socketConnect(Native Method)\n\tat java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:399)\n\tat java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:242)\n\tat java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:224)\n\tat java.base/java.net.Socket.connect(Socket.java:609)\n\tat java.base/sun.net.NetworkClient.doConnect(NetworkClient.java:177)\n\tat java.base/sun.net.www.http.HttpClient.openServer(HttpClient.java:474)\n\tat java.base/sun.net.www.http.HttpClient.openServer(HttpClient.java:569)\n\tat java.base/sun.net.www.http.HttpClient.<init>(HttpClient.java:242)\n\tat java.base/sun.net.www.http.HttpClient.New(HttpClient.java:341)\n\tat java.base/sun.net.www.http.HttpClient.New(HttpClient.java:362)\n\tat java.base/sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1253)\n\tat java.base/sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1187)\n\tat java.base/sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1081)\n\tat java.base/sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:1015)\n\tat java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1592)\n\tat java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1520)\n\tat java.base/java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:527)\n\tat io.confluent.kafka.schemaregistry.client.rest.RestService.sendHttpRequest(RestService.java:272)\n\tat io.confluent.kafka.schemaregistry.client.rest.RestService.httpRequest(RestService.java:352)\n\tat io.confluent.kafka.schemaregistry.client.rest.RestService.getId(RestService.java:660)\n\tat io.confluent.kafka.schemaregistry.client.rest.RestService.getId(RestService.java:642)\n\tat io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getSchemaByIdFromRegistry(CachedSchemaRegistryClient.java:217)\n\tat io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getSchemaBySubjectAndId(CachedSchemaRegistryClient.java:291)\n\tat io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient.getSchemaById(CachedSchemaRegistryClient.java:276)\n\tat io.confluent.kafka.serializers.protobuf.AbstractKafkaProtobufDeserializer.deserialize(AbstractKafkaProtobufDeserializer.java:117)\n\tat io.confluent.kafka.serializers.protobuf.AbstractKafkaProtobufDeserializer.deserializeWithSchemaAndVersion(AbstractKafkaProtobufDeserializer.java:235)\n\tat io.confluent.connect.protobuf.ProtobufConverter$Deserializer.deserialize(ProtobufConverter.java:163)\n\tat io.confluent.connect.protobuf.ProtobufConverter.toConnectData(ProtobufConverter.java:107)\n\tat org.apache.kafka.connect.storage.Converter.toConnectData(Converter.java:87)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.lambda$convertAndTransformRecord$1(WorkerSinkTask.java:495)\n\tat org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:146)\n\tat org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:180)\n\tat 
org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:122)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:495)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:472)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:322)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:226)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:198)\n\tat org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:185)\n\tat org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:235)\n\tat java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)\n\tat java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.base/java.lang.Thread.run(Thread.java:834)\n"
        }
      ],
      "type":"sink"
    }
  }
}
Please advise.
Thank you!
If you take the trace node and convert the \n and \t to newlines and tabs, you get a readable stack trace which shows the problem:
Caused by: org.apache.kafka.connect.errors.DataException: Failed to deserialize data for topic order-messages to Protobuf:
at io.confluent.connect.protobuf.ProtobufConverter.toConnectData(ProtobufConverter.java:123)
at org.apache.kafka.connect.storage.Converter.toConnectData(Converter.java:87)
at org.apache.kafka.connect.runtime.WorkerSinkTask.lambda$convertAndTransformRecord$1(WorkerSinkTask.java:495)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:146)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:180)
... 13 more
Caused by: org.apache.kafka.common.errors.SerializationException: Error deserializing Protobuf message for id 1
Caused by: java.net.ConnectException: Connection refused (Connection refused)
The error is java.net.ConnectException: Connection refused, which means that either (a) you've misconfigured the location of your Schema Registry or (b) the Schema Registry is not running. (Given that the worker id is kafka-connect:8083, the connector presumably runs inside a container, where localhost:8081 would point at the container itself rather than at the host running the registry.)
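
As an aside, the trace cleanup described above is a one-liner; a minimal sketch, where the literal is a stand-in for the real trace value from the JSON response:

// Expand the escaped newline/tab sequences so the stack trace is readable.
String trace = "org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler\\n\\tat ...";
System.out.println(trace.replace("\\n", "\n").replace("\\t", "\t"));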

Apollo GraphQL with Vertx Subscription failed

I'm running Hasura GraphQL on a Docker instance (Windows machine), exposed at
http://192.168.99.100:8080/v1/graphql
I want to perform a subscription in my verticle but I'm getting the following exception:
com.apollographql.apollo.exception.ApolloNetworkException: Subscription failed
at com.apollographql.apollo.internal.RealApolloSubscriptionCall$SubscriptionManagerCallback.onNetworkError(RealApolloSubscriptionCall.java:246)
at com.apollographql.apollo.internal.subscription.RealSubscriptionManager$SubscriptionRecord.notifyOnNetworkError(RealSubscriptionManager.java:524)
at com.apollographql.apollo.internal.subscription.RealSubscriptionManager.onTransportFailure(RealSubscriptionManager.java:296)
at com.apollographql.apollo.internal.subscription.RealSubscriptionManager$SubscriptionTransportCallback$2.run(RealSubscriptionManager.java:556)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:170)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
Here's my sample code for the Apollo GraphQL client:
public class MyFirstVerticle extends AbstractVerticle {
    @Override
    public void start(Future<Void> fut) {
        OkHttpClient okHttp = new OkHttpClient().newBuilder().build();
        ApolloClient subscribe = ApolloClient.builder()
                .serverUrl("http://192.168.99.100:8080/v1/graphql")
                .subscriptionTransportFactory(new WebSocketSubscriptionTransport.Factory(
                        "wss://192.168.99.100:8080/v1/graphql", okHttp))
                .build();
        subscribe.subscribe(new MySubscription(1)).execute(new ApolloSubscriptionCall.Callback<Optional<MySubscription.Data>>() {
            .....
            ....
I found the solution; apparently it has to do with the type of connection you make. I'm not an expert on this, but have you tried changing wss to http?
:-)
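
Concretely, applying that suggestion to the builder from the question would look like the sketch below; my understanding is that OkHttp expects an http(s) URL and performs the WebSocket upgrade itself:

// Same builder as in the question, with only the subscription transport
// URL scheme changed from wss:// to http://.
ApolloClient subscribe = ApolloClient.builder()
        .serverUrl("http://192.168.99.100:8080/v1/graphql")
        .subscriptionTransportFactory(new WebSocketSubscriptionTransport.Factory(
                "http://192.168.99.100:8080/v1/graphql", okHttp))
        .build();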

How to catch SOAP request message in CXF with WSS4JOutInterceptor when connection fails

I can catch the SOAP request message according to this SO answer.
However, if the server is unavailable, I get the following exception from SAAJOutInterceptor (added by WSS4JOutInterceptor) and LoggingCallback is not invoked, hence the message is not caught:
org.apache.cxf.binding.soap.SoapFault: Connection refused: connect
at org.apache.cxf.binding.soap.saaj.SAAJOutInterceptor$SAAJOutEndingInterceptor.handleMessage(SAAJOutInterceptor.java:221)
at org.apache.cxf.binding.soap.saaj.SAAJOutInterceptor$SAAJOutEndingInterceptor.handleMessage(SAAJOutInterceptor.java:174)
at org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:308)
at org.apache.cxf.endpoint.ClientImpl.doInvoke(ClientImpl.java:531)
at org.apache.cxf.endpoint.ClientImpl.invoke(ClientImpl.java:440)
at org.apache.cxf.endpoint.ClientImpl.invoke(ClientImpl.java:355)
at org.apache.cxf.endpoint.ClientImpl.invoke(ClientImpl.java:313)
at org.apache.cxf.frontend.ClientProxy.invokeSync(ClientProxy.java:96)
at org.apache.cxf.jaxws.JaxWsClientProxy.invoke(JaxWsClientProxy.java:140)
...
Caused by: com.ctc.wstx.exc.WstxIOException: Connection refused: connect
at com.ctc.wstx.sw.BaseStreamWriter.flush(BaseStreamWriter.java:262)
at org.apache.cxf.binding.soap.saaj.SAAJOutInterceptor$SAAJOutEndingInterceptor.handleMessage(SAAJOutInterceptor.java:215)
...
Caused by: java.net.ConnectException: Connection refused: connect
at java.net.DualStackPlainSocketImpl.waitForConnect(Native Method)
at java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:85)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:172)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:463)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:558)
at sun.net.www.http.HttpClient.<init>(HttpClient.java:242)
at sun.net.www.http.HttpClient.New(HttpClient.java:339)
at sun.net.www.http.HttpClient.New(HttpClient.java:357)
at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1220)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1199)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1050)
at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:984)
at sun.net.www.protocol.http.HttpURLConnection.getOutputStream0(HttpURLConnection.java:1334)
at sun.net.www.protocol.http.HttpURLConnection.getOutputStream(HttpURLConnection.java:1309)
at org.apache.cxf.transport.http.URLConnectionHTTPConduit$URLConnectionWrappedOutputStream.setupWrappedStream(URLConnectionHTTPConduit.java:274)
at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.handleHeadersTrustCaching(HTTPConduit.java:1345)
at org.apache.cxf.transport.http.HTTPConduit$WrappedOutputStream.onFirstWrite(HTTPConduit.java:1306)
at org.apache.cxf.transport.http.URLConnectionHTTPConduit$URLConnectionWrappedOutputStream.onFirstWrite(URLConnectionHTTPConduit.java:307)
at org.apache.cxf.io.AbstractWrappedOutputStream.write(AbstractWrappedOutputStream.java:47)
at org.apache.cxf.io.AbstractThresholdOutputStream.unBuffer(AbstractThresholdOutputStream.java:89)
at org.apache.cxf.io.AbstractThresholdOutputStream.write(AbstractThresholdOutputStream.java:63)
at org.apache.cxf.io.CacheAndWriteOutputStream.write(CacheAndWriteOutputStream.java:80)
at org.apache.cxf.io.CacheAndWriteOutputStream.write(CacheAndWriteOutputStream.java:80)
at org.apache.cxf.io.AbstractWrappedOutputStream.write(AbstractWrappedOutputStream.java:51)
at com.ctc.wstx.io.UTF8Writer.flush(UTF8Writer.java:100)
at com.ctc.wstx.sw.BufferingXmlWriter.flush(BufferingXmlWriter.java:242)
at com.ctc.wstx.sw.BaseStreamWriter.flush(BaseStreamWriter.java:260)
...
I need to save the request SOAP message with all the security stuff added by WSS4JOutInterceptor (certificate, signature, ...) regardless of whether the message was successfully sent or whether the server is even up.
The thing is that you must first write to the output stream where you catch the message, and write to the network connection only in the onClose callback:
class RequestInterceptor extends AbstractPhaseInterceptor<Message> {

    OutputStream outputStream         // the stream that receives the captured message
    OutputStream originalOutputStream // the original network-bound stream

    public RequestInterceptor(OutputStream outputStream) {
        super(Phase.PRE_STREAM)
        this.outputStream = outputStream
    }

    @Override
    public void handleMessage(Message message) throws Fault {
        originalOutputStream = message.getContent(OutputStream.class)
        // Tee the outgoing bytes into the capture stream and an internal cache first
        CacheAndWriteOutputStream newOutputStream = new CacheAndWriteOutputStream(outputStream)
        message.setContent(OutputStream.class, newOutputStream)
        newOutputStream.registerCallback(new CachedOutputStreamCallback() {
            void onFlush(CachedOutputStream cos) {
            }

            void onClose(CachedOutputStream cos) {
                // Only now forward the cached message to the network connection
                cos.writeCacheTo(originalOutputStream)
                originalOutputStream.close()
            }
        })
    }
}
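
For completeness, a hedged sketch of how such an interceptor is typically attached to a CXF client; "port" stands for the JAX-WS proxy and is a placeholder:

// Hypothetical registration: the ByteArrayOutputStream receives the
// captured SOAP request, including everything WSS4JOutInterceptor added.
ByteArrayOutputStream captured = new ByteArrayOutputStream();
Client client = ClientProxy.getClient(port);
client.getOutInterceptors().add(new RequestInterceptor(captured));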

Spring Cloud Stream Kafka binder fails to publish to DLQ with a key

I'm getting the following exception when processing fails with @StreamListener and the Spring Cloud Stream Kafka binder tries to re-route messages to the DLQ. Using Spring Cloud Edgware.SR5.
org.springframework.messaging.MessageDeliveryException: failed to send Message to channel 'my.message.destination.my.message.group.errors'; nested exception is java.lang.ClassCastException: java.lang.String cannot be cast to [B
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:451) ~[spring-integration-core-4.3.17.RELEASE.jar:4.3.17.RELEASE]
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:375) ~[spring-integration-core-4.3.17.RELEASE.jar:4.3.17.RELEASE]
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:115) ~[spring-messaging-4.3.19.RELEASE.jar:4.3.19.RELEASE]
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:45) ~[spring-messaging-4.3.19.RELEASE.jar:4.3.19.RELEASE]
at org.springframework.messaging.core.AbstractMessageSendingTemplate.send(AbstractMessageSendingTemplate.java:105) ~[spring-messaging-4.3.19.RELEASE.jar:4.3.19.RELEASE]
at org.springframework.messaging.core.AbstractMessageSendingTemplate.send(AbstractMessageSendingTemplate.java:95) ~[spring-messaging-4.3.19.RELEASE.jar:4.3.19.RELEASE]
at org.springframework.integration.support.ErrorMessagePublisher.publish(ErrorMessagePublisher.java:155) ~[spring-integration-core-4.3.17.RELEASE.jar:4.3.17.RELEASE]
at org.springframework.integration.handler.advice.ErrorMessageSendingRecoverer.recover(ErrorMessageSendingRecoverer.java:83) ~[spring-integration-core-4.3.17.RELEASE.jar:4.3.17.RELEASE]
at org.springframework.retry.support.RetryTemplate.handleRetryExhausted(RetryTemplate.java:512) ~[spring-retry-1.2.2.RELEASE.jar:?]
at org.springframework.retry.support.RetryTemplate.doExecute(RetryTemplate.java:351) ~[spring-retry-1.2.2.RELEASE.jar:?]
at org.springframework.retry.support.RetryTemplate.execute(RetryTemplate.java:180) ~[spring-retry-1.2.2.RELEASE.jar:?]
at org.springframework.kafka.listener.adapter.RetryingAcknowledgingMessageListenerAdapter.onMessage(RetryingAcknowledgingMessageListenerAdapter.java:73) ~[spring-kafka-1.1.8.RELEASE.jar:?]
at org.springframework.kafka.listener.adapter.RetryingAcknowledgingMessageListenerAdapter.onMessage(RetryingAcknowledgingMessageListenerAdapter.java:39) ~[spring-kafka-1.1.8.RELEASE.jar:?]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeRecordListener(KafkaMessageListenerContainer.java:792) [spring-kafka-1.1.8.RELEASE.jar:?]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeListener(KafkaMessageListenerContainer.java:736) [spring-kafka-1.1.8.RELEASE.jar:?]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.access$2100(KafkaMessageListenerContainer.java:246) [spring-kafka-1.1.8.RELEASE.jar:?]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer$ListenerInvoker.run(KafkaMessageListenerContainer.java:1025) [spring-kafka-1.1.8.RELEASE.jar:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_192]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_192]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_192]
Caused by: java.lang.ClassCastException: java.lang.String cannot be cast to [B
at org.springframework.cloud.stream.binder.kafka.KafkaMessageChannelBinder$4.handleMessage(KafkaMessageChannelBinder.java:360) ~[spring-cloud-stream-binder-kafka-1.3.3.RELEASE.jar:1.3.3.RELEASE]
at org.springframework.integration.dispatcher.BroadcastingDispatcher.invokeHandler(BroadcastingDispatcher.java:236) ~[spring-integration-core-4.3.17.RELEASE.jar:4.3.17.RELEASE]
at org.springframework.integration.dispatcher.BroadcastingDispatcher.dispatch(BroadcastingDispatcher.java:185) ~[spring-integration-core-4.3.17.RELEASE.jar:4.3.17.RELEASE]
at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:89) ~[spring-integration-core-4.3.17.RELEASE.jar:4.3.17.RELEASE]
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:425) ~[spring-integration-core-4.3.17.RELEASE.jar:4.3.17.RELEASE]
... 19 more
I tried producing messages from kafka-console-producer and figured out that this only happens when a Kafka key is used.
Here are the relevant code snippets:
MyMessageConsumer.java:
@StreamListener(MyMessageSink.MY_MESSAGE_INPUT)
@Transactional
public void consumeMyMessage(@Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) String myMessageId,
                             @Payload MyMessage myMessage) {
    if (true) {
        throw new RuntimeException("MockRuntimeException");
    }
}
application.properties (for consumer):
spring.kafka.producer.keySerializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.consumer.keyDeserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.cloud.stream.bindings.my-message-in.destination=my.message.destination
spring.cloud.stream.bindings.my-message-in.group=my.message.group
spring.cloud.stream.bindings.my-message-in.content-type=application/json
spring.cloud.stream.bindings.my-message-in.consumer.headerMode=raw
spring.cloud.stream.bindings.my-message-in.consumer.partitioned=true
spring.cloud.stream.kafka.bindings.my-message-in.consumer.enableDlq=true
spring.cloud.stream.kafka.bindings.my-message-in.consumer.dlqName=my.message.destination.dlq
spring.cloud.stream.kafka.bindings.my-message-in.consumer.dlqProducerProperties.configuration.key.serializer=org.apache.kafka.common.serialization.StringSerializer
spring.cloud.stream.kafka.bindings.my-message-in.consumer.dlqProducerProperties.configuration.value.serializer=org.apache.kafka.common.serialization.StringSerializer
spring.cloud.stream.kafka.bindings.my-message-in.consumer.maxAttempts=3
application.properties (for producer):
spring.kafka.producer.keySerializer=org.apache.kafka.common.serialization.StringSerializer
spring.cloud.stream.bindings.my-message-out.destination=my.message.destination
spring.cloud.stream.bindings.my-message-out.content-type=application/json
spring.cloud.stream.bindings.my-message-out.producer.headerMode=raw
spring.cloud.stream.bindings.my-message-out.producer.partitionKeyExtractorClass=com.example.message.TransactionKeyExtractor
spring.cloud.stream.bindings.my-message-out.producer.partitionCount=80
Is there any way to get the DLQ re-routing to work with a message key?
Dead lettering with 1.3.x only supports Spring Cloud Stream's default key/value types (byte[]/byte[]).
Try upgrading to a more recent version.

Hazelcast needs to connect as a client to the existing cluster instead of as a member

The changes I made on the server side:
@Bean(name = {"hazelcast"})
public HazelcastInstance hazelcastInstance() {
    ClientConfig clientConfig = new ClientConfig();
    clientConfig.getGroupConfig().setName(integrationSettings.getHazelcastClusterGroupName())
            .setPassword(integrationSettings.getHazelcastClusterGroupPass());
    final ClientNetworkConfig clientNetworkConfig = new ClientNetworkConfig();
    clientNetworkConfig.addAddress("127.0.0.1:6701");
    clientConfig.setNetworkConfig(clientNetworkConfig);
    clientConfig.setInstanceName("INTEGRATION_INSTANCE");
    final String hazelcastEnterpriseLicenseKey = null;
    if (hazelcastEnterpriseLicenseKey != null) {
        clientConfig.setLicenseKey(hazelcastEnterpriseLicenseKey);
    }
    return HazelcastClient.newHazelcastClient(clientConfig);
}
I get my group name and password from my property file.
My client side code:
ClientConfig clientConfig = new ClientConfig();
clientConfig.getGroupConfig().setName(hazelcastGroupName).setPassword(hazelcastGroupPwd);
clientConfig.getNetworkConfig().addAddress(serverAddress);
hazelcastInstance = HazelcastClient.newHazelcastClient(clientConfig);
My error log:
Caused by: org.springframework.beans.BeanInstantiationException: Failed to instantiate [com.hazelcast.core.HazelcastInstance]: Factory method 'hazelcastInstance' threw exception; nested exception is java.lang.IllegalStateException: Unable to connect to any address in the config! The following addresses were tried: [[127.0.0.1]:6701]
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:189)
at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:588)
... 37 more
Caused by: java.lang.IllegalStateException: Unable to connect to any address in the config! The following addresses were tried: [[127.0.0.1]:6701]
at com.hazelcast.client.spi.impl.ClusterListenerSupport.connectToCluster(ClusterListenerSupport.java:178)
at com.hazelcast.client.spi.impl.ClientClusterServiceImpl.start(ClientClusterServiceImpl.java:189)
at com.hazelcast.client.impl.HazelcastClientInstanceImpl.start(HazelcastClientInstanceImpl.java:404)
at com.hazelcast.client.HazelcastClientManager.newHazelcastClient(HazelcastClientManager.java:78)
at com.hazelcast.client.HazelcastClient.newHazelcastClient(HazelcastClient.java:72)
at com.zafin.zrpe.integration.config.ZrpeIntegrationConfiguration.hazelcastInstance(ZrpeIntegrationConfiguration.java:85)
at com.zafin.zrpe.integration.config.ZrpeIntegrationConfiguration$$EnhancerBySpringCGLIB$$7af6798e.CGLIB$hazelcastInstance$6(<generated>)
at com.zafin.zrpe.integration.config.ZrpeIntegrationConfiguration$$EnhancerBySpringCGLIB$$7af6798e$$FastClassBySpringCGLIB$$25f010cb.invoke(<generated>)
at org.springframework.cglib.proxy.MethodProxy.invokeSuper(MethodProxy.java:228)
at org.springframework.context.annotation.ConfigurationClassEnhancer$BeanMethodInterceptor.intercept(ConfigurationClassEnhancer.java:358)
at com.zafin.zrpe.integration.config.ZrpeIntegrationConfiguration$$EnhancerBySpringCGLIB$$7af6798e.hazelcastInstance(<generated>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:162)
... 38 more
I need to connect my Hazelcast as a client, but this bean exception is failing the deployments. Is there any other way of doing it?
Looking at your code, you are creating a Hazelcast client on both the server side and the client side. On the server side, please create a Hazelcast server member instance by passing a Config object rather than a ClientConfig, more like this:
@Bean
public HazelcastInstance hazelcastInstance() throws Exception {
    Config cfg = new Config();
    ...
    ...
    HazelcastInstance instance = Hazelcast.newHazelcastInstance(cfg);
    return instance;
}
Then the Hazelcast client can connect to the Hazelcast server member. You also need to ensure the server member is started before the client connects to it.
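
A hedged sketch of what the elided Config setup could look like; the group credentials and port are illustrative and must mirror what the client-side ClientConfig sends:

// Illustrative values only: name, password and port are placeholders.
Config cfg = new Config();
cfg.setInstanceName("INTEGRATION_INSTANCE");
cfg.getGroupConfig().setName("myGroupName").setPassword("myGroupPass");
cfg.getNetworkConfig().setPort(6701);
HazelcastInstance instance = Hazelcast.newHazelcastInstance(cfg);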