Unable to send message with KafkaNull as Value - apache-kafka

I am building a Kafka application using log compaction on a topic, but I am not able to send a tombstone value (KafkaNull).
I first tried the default serializer configuration, and when that did not work I applied the changes suggested in "Publish null/tombstone message with raw headers" and set application.properties to:
spring.cloud.stream.output.producer.useNativeEncoding=true
spring.cloud.stream.kafka.binder.configuration.value.serializer=org.springframework.kafka.support.serializer.JsonSerializer
The code I use to send a message to the stream is:
this.stockTopics.compactedStocks().send(MessageBuilder
        .withPayload(KafkaNull.INSTANCE)
        .setHeader(KafkaHeaders.MESSAGE_KEY, company.getBytes())
        .build());
this.stockTopics.compactedStocks() returns a message stream that I can send messages to.
Every time I try to send that message with a KafkaNull instance as the payload, I get the error: Failed to convert message: 'GenericMessage [payload=org.springframework.kafka.support.KafkaNull@1c2d8163, headers={id=f81857e7-fbd0-56f5-8418-6a1944e7f2b1, kafka_messageKey=[B@36ec022a, contentType=application/json, timestamp=1547827957485}]' to outbound message.
I expect the message to simply be sent to the consumer with a null value, but instead it errors.

I opened a GitHub issue for this.
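For reference, with plain Spring Kafka (no Spring Cloud Stream conversion layer involved) a tombstone can be produced by passing a null value to KafkaTemplate; a minimal sketch, with the topic and key as placeholders:

// Sending a tombstone directly with KafkaTemplate: a null value for an existing
// key marks the record for removal once log compaction runs on the topic.
kafkaTemplate.send("compacted-stocks", company, null);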
EDIT
Workaround - this works...
@SpringBootApplication
@EnableBinding(Source.class)
public class So54257687Application {

    public static void main(String[] args) {
        SpringApplication.run(So54257687Application.class, args);
    }

    @Bean
    public ApplicationRunner runner(MessageChannel output) {
        return args -> output.send(new GenericMessage<>(KafkaNull.INSTANCE));
    }

    @KafkaListener(id = "foo", topics = "output")
    public void listen(@Payload(required = false) byte[] in) {
        System.out.println(in);
    }

    @Bean
    @StreamMessageConverter
    public MessageConverter kafkaNullConverter() {
        class KafkaNullConverter extends AbstractMessageConverter {

            KafkaNullConverter() {
                super(Collections.emptyList());
            }

            @Override
            protected boolean supports(Class<?> clazz) {
                return KafkaNull.class.equals(clazz);
            }

            @Override
            protected Object convertFromInternal(Message<?> message, Class<?> targetClass, Object conversionHint) {
                return message.getPayload();
            }

            @Override
            protected Object convertToInternal(Object payload, MessageHeaders headers, Object conversionHint) {
                return payload;
            }
        }
        return new KafkaNullConverter();
    }
}

Related

Spring framework integration TCP IP - Client application SSL not working and posting incomplete requests

I am new to the Spring framework. We have a requirement where our application acts as a client and needs to integrate with another application over TCP. We will send fixed-length requests and receive responses for them. We have been asked to reuse the same TCP connection for every request. Over that same open connection, our application will also receive heartbeat messages from the server application, and we do not need to send any response to them.
The request messages we send are header + body, where the header carries the message type and length details.
We will be using SSL. When we test with SSL, there is no exception during getConnection, but we never receive any heartbeat messages.
When we test without SSL, we can send requests and receive responses as well as heartbeat messages. However, after the first request/response, the client sends partial request text to the server application for subsequent messages, which causes issues, and connections are reset by peer due to the unexpected message received at their end.
I have tried many things from the documentation available online but have not been able to implement the requirement successfully.
Please find the code below. Thanks in advance.
public class ClientConfig implements ApplicationEventPublisherAware {

    protected String port;
    protected String host;
    protected String connectionTimeout;
    protected String keyStorePath;
    protected String trustStorePath;
    protected String keyStorePassword;
    protected String trustStorePassword;
    protected String protocol;

    private ApplicationEventPublisher applicationEventPublisher;

    @Override
    public void setApplicationEventPublisher(ApplicationEventPublisher applicationEventPublisher) {
        this.applicationEventPublisher = applicationEventPublisher;
    }

    @Bean
    public DefaultTcpNioSSLConnectionSupport connectionSupport() {
        if ("SSL".equalsIgnoreCase(getProtocol())) {
            DefaultTcpSSLContextSupport sslContextSupport =
                    new DefaultTcpSSLContextSupport(getKeyStorePath(),
                            getTrustStorePath(), getKeyStorePassword(), getTrustStorePassword());
            sslContextSupport.setProtocol(getProtocol());
            DefaultTcpNioSSLConnectionSupport tcpNioConnectionSupport =
                    new DefaultTcpNioSSLConnectionSupport(sslContextSupport);
            return tcpNioConnectionSupport;
        }
        return null;
    }

    @Bean
    public AbstractClientConnectionFactory clientConnectionFactory() {
        if (StringUtils.isNullOrEmptyTrim(getHost()) || StringUtils.isNullOrEmptyTrim(getPort())) {
            return null;
        }
        TcpNioClientConnectionFactory tcpNioClientConnectionFactory =
                new TcpNioClientConnectionFactory(getHost(), Integer.valueOf(getPort()));
        tcpNioClientConnectionFactory.setApplicationEventPublisher(applicationEventPublisher);
        tcpNioClientConnectionFactory.setSoKeepAlive(true);
        tcpNioClientConnectionFactory.setDeserializer(new CustomSerializerDeserializer());
        tcpNioClientConnectionFactory.setSerializer(new CustomSerializerDeserializer());
        tcpNioClientConnectionFactory.setLeaveOpen(true);
        tcpNioClientConnectionFactory.setSingleUse(false);
        if ("SSL".equalsIgnoreCase(getProtocol())) {
            tcpNioClientConnectionFactory.setSslHandshakeTimeout(60);
            tcpNioClientConnectionFactory.setTcpNioConnectionSupport(connectionSupport());
        }
        return tcpNioClientConnectionFactory;
    }

    @Bean
    public MessageChannel outboundChannel() {
        return new DirectChannel();
    }

    @Bean
    public PollableChannel receiverChannel() {
        return new QueueChannel();
    }

    @Bean
    @ServiceActivator(inputChannel = "outboundChannel")
    public TcpSendingMessageHandler outboundClient(AbstractClientConnectionFactory clientConnectionFactory) {
        TcpSendingMessageHandler outbound = new TcpSendingMessageHandler();
        outbound.setConnectionFactory(clientConnectionFactory);
        if (!StringUtils.isNullOrEmpty(getConnectionTimeout())) {
            long timeout = Long.valueOf(getConnectionTimeout());
            outbound.setRetryInterval(TimeUnit.SECONDS.toMillis(timeout));
        }
        outbound.setClientMode(true);
        return outbound;
    }

    @Bean
    public TcpReceivingChannelAdapter inboundClient(TcpNioClientConnectionFactory connectionFactory) {
        TcpReceivingChannelAdapter inbound = new TcpReceivingChannelAdapter();
        inbound.setConnectionFactory(connectionFactory);
        if (!StringUtils.isNullOrEmpty(getConnectionTimeout())) {
            long timeout = Long.valueOf(getConnectionTimeout());
            inbound.setRetryInterval(TimeUnit.SECONDS.toMillis(timeout));
        }
        inbound.setOutputChannel(receiverChannel());
        inbound.setClientMode(true);
        return inbound;
    }
}
public class CustomSerializerDeserializer implements Serializer<String>, Deserializer<String> {

    @Override
    public String deserialize(InputStream inputStream) throws IOException {
        int i = 0;
        byte[] lenbuf = new byte[8];
        String message = null;
        while ((i = inputStream.read(lenbuf)) != -1) {
            String messageType = new String(lenbuf);
            if (messageType.contains(APP_DATA_LEN)) {
                byte[] byteResp = new byte[RESP_MSG_LEN - 8];
                inputStream.read(byteResp, 0, RESP_MSG_LEN - 8);
                String readMsg = new String(byteResp);
                message = messageType + readMsg;
            } else {
                byte[] byteResp = new byte[HANDSHAKE_LEN - 8];
                inputStream.read(byteResp, 0, HANDSHAKE_LEN - 8);
                String readMsg = new String(byteResp);
                message = messageType + readMsg;
            }
        }
        return message;
    }

    @Override
    public void serialize(String object, OutputStream outputStream) throws IOException {
        outputStream.write(object.getBytes());
        outputStream.flush();
    }
}
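Note that InputStream.read(byte[]) may return fewer bytes than requested, so a header or body can arrive split across reads. A small helper that loops until a fixed-length frame is fully read looks roughly like this (a sketch, assuming the peer always sends complete frames of the expected lengths):

// Reads exactly len bytes from the stream, looping over partial reads.
private static byte[] readFully(InputStream in, int len) throws IOException {
    byte[] buf = new byte[len];
    int off = 0;
    while (off < len) {
        int n = in.read(buf, off, len - off);
        if (n == -1) {
            throw new IOException("Stream closed after " + off + " of " + len + " bytes");
        }
        off += n;
    }
    return buf;
}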
@Override
public String sendMessage(String message) {
    Message<String> request = MessageBuilder.withPayload(message).build();
    DirectChannel outboundChannel = (DirectChannel) applicationContext.getBean(DirectChannel.class);
    outboundChannel.send(request);
}

// Below code is being used to open a connection
TcpNioClientConnectionFactory cf = (TcpNioClientConnectionFactory) applicationContext.getBean(AbstractClientConnectionFactory.class);
if (cf != null) {
    TcpNioConnection conn = (TcpNioConnection) cf.getConnection();
}

Kafka: Consumer API: Regression test fails if run in a group (sequentially)

I have implemented a Kafka application using the consumer API, and I have two regression tests implemented with the streams API:
To test the happy path: the test produces data into the input topic that the application listens to; the application consumes it and produces data into the output topic, which the test consumes and validates against the expected output data.
To test the error path: the behavior is the same as above, except this time the application produces data into the output topic and the test consumes from the application's error topic and validates it against the expected error output.
My code and the regression-test code reside in the same project under the expected directory structure. For both tests, data should be picked up by the same listener on the application side.
The problem is:
When I execute the tests individually (manually), each test passes. However, if I execute them together but sequentially (for example: gradle clean build), only the first test passes. The second test fails after the test-side consumer polls for data and eventually gives up without finding any.
Observation:
From debugging, it looks like the first time everything works perfectly (test-side and application-side producers and consumers). However, during the second test it seems that the application-side consumer is not receiving any data (the test-side producer appears to be producing data, but I cannot say that for sure), and hence no data is produced into the error topic.
What I have tried so far:
After investigating, my understanding is that we are running into race conditions; to avoid that I found suggestions like:
use @DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_EACH_TEST_METHOD)
tear down the broker after each test (please see the ".destroy()" on the brokers)
use different topic names for each test
I applied all of them and still could not recover from the issue.
I am providing the code here for perusal. Any insight is appreciated.
Code for 1st test (Testing error path):
@DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_EACH_TEST_METHOD)
@EmbeddedKafka(
        partitions = 1,
        controlledShutdown = false,
        topics = {
                AdapterStreamProperties.Constants.INPUT_TOPIC,
                AdapterStreamProperties.Constants.ERROR_TOPIC
        },
        brokerProperties = {
                "listeners=PLAINTEXT://localhost:9092",
                "port=9092",
                "log.dir=/tmp/data/logs",
                "auto.create.topics.enable=true",
                "delete.topic.enable=true"
        }
)
public class AbstractIntegrationFailurePathTest {

    private final int retryLimit = 0;

    @Autowired
    protected EmbeddedKafkaBroker embeddedFailurePathKafkaBroker;

    // To produce data
    @Autowired
    protected KafkaTemplate<PreferredMediaMsgKey, SendEmailCmd> inputProducerTemplate;

    // To read from the output error topic
    @Autowired
    protected Consumer<PreferredMediaMsgKey, ErrorCmd> outputErrorConsumer;

    // Service to execute notification-preference
    @Autowired
    protected AdapterStreamProperties projectProerties;

    protected void subscribe(Consumer consumer, String topic, int attempt) {
        try {
            embeddedFailurePathKafkaBroker.consumeFromAnEmbeddedTopic(consumer, topic);
        } catch (ComparisonFailure ex) {
            if (attempt < retryLimit) {
                subscribe(consumer, topic, attempt + 1);
            }
        }
    }
}
.
@TestConfiguration
public class AdapterStreamFailurePathTestConfig {

    @Autowired
    private EmbeddedKafkaBroker embeddedKafkaBroker;

    @Value("${spring.kafka.adapter.application-id}")
    private String applicationId;

    @Value("${spring.kafka.adapter.group-id}")
    private String groupId;

    // Producer of records that the program consumes
    @Bean
    public Map<String, Object> sendEmailCmdProducerConfigs() {
        Map<String, Object> results = KafkaTestUtils.producerProps(embeddedKafkaBroker);
        results.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                AdapterStreamProperties.Constants.KEY_SERDE.serializer().getClass());
        results.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                AdapterStreamProperties.Constants.INPUT_VALUE_SERDE.serializer().getClass());
        return results;
    }

    @Bean
    public ProducerFactory<PreferredMediaMsgKey, SendEmailCmd> inputProducerFactory() {
        return new DefaultKafkaProducerFactory<>(sendEmailCmdProducerConfigs());
    }

    @Bean
    public KafkaTemplate<PreferredMediaMsgKey, SendEmailCmd> inputProducerTemplate() {
        return new KafkaTemplate<>(inputProducerFactory());
    }

    // Consumer of the error output, generated by the program
    @Bean
    public Map<String, Object> outputErrorConsumerConfig() {
        Map<String, Object> props = KafkaTestUtils.consumerProps(
                applicationId, Boolean.TRUE.toString(), embeddedKafkaBroker);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                AdapterStreamProperties.Constants.KEY_SERDE.deserializer().getClass()
                        .getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                AdapterStreamProperties.Constants.ERROR_VALUE_SERDE.deserializer().getClass()
                        .getName());
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        return props;
    }

    @Bean
    public Consumer<PreferredMediaMsgKey, ErrorCmd> outputErrorConsumer() {
        DefaultKafkaConsumerFactory<PreferredMediaMsgKey, ErrorCmd> rpf =
                new DefaultKafkaConsumerFactory<>(outputErrorConsumerConfig());
        return rpf.createConsumer(groupId, "notification-failure");
    }
}
.
@RunWith(SpringRunner.class)
@SpringBootTest(classes = AdapterStreamFailurePathTestConfig.class)
@ActiveProfiles(profiles = "errtest")
public class ErrorPath400Test extends AbstractIntegrationFailurePathTest {

    @Autowired
    private DataGenaratorForErrorPath400Test datagen;

    @Mock
    private AdapterHttpClient httpClient;

    @Autowired
    private ErroredEmailCmdDeserializer erroredEmailCmdDeserializer;

    @Before
    public void setup() throws InterruptedException {
        Mockito.when(httpClient.callApi(Mockito.any()))
                .thenReturn(
                        new GenericResponse(
                                400,
                                TestConstants.ERROR_MSG_TO_CHK));
        Mockito.when(httpClient.createURI(Mockito.any(), Mockito.any(), Mockito.any())).thenCallRealMethod();
        inputProducerTemplate.send(
                projectProerties.getInputTopic(),
                datagen.getKey(),
                datagen.getEmailCmdToProduce());
        System.out.println("producer: " + projectProerties.getInputTopic());
        subscribe(outputErrorConsumer, projectProerties.getErrorTopic(), 0);
    }

    @Test
    public void testWithError() throws InterruptedException, InvalidProtocolBufferException, TextFormat.ParseException {
        ConsumerRecords<PreferredMediaMsgKeyBuf.PreferredMediaMsgKey, ErrorCommandBuf.ErrorCmd> records;
        List<ConsumerRecord<PreferredMediaMsgKeyBuf.PreferredMediaMsgKey, ErrorCommandBuf.ErrorCmd>> outputListOfErrors = new ArrayList<>();
        int attempt = 0;
        int expectedRecords = 1;
        do {
            records = KafkaTestUtils.getRecords(outputErrorConsumer);
            records.forEach(outputListOfErrors::add);
            attempt++;
        } while (attempt < expectedRecords && outputListOfErrors.size() < expectedRecords);

        // Verify the recipient event stream size
        Assert.assertEquals(expectedRecords, outputListOfErrors.size());
        // Validate output
    }

    @After
    public void tearDown() {
        outputErrorConsumer.close();
        embeddedFailurePathKafkaBroker.destroy();
    }
}
The 2nd test is almost the same in structure, although this time the test-side consumer consumes from the application-side output topic (instead of the error topic), and I named the consumers, broker, producer, and topics differently. Like:
@DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_EACH_TEST_METHOD)
@EmbeddedKafka(
        partitions = 1,
        controlledShutdown = false,
        topics = {
                AdapterStreamProperties.Constants.INPUT_TOPIC,
                AdapterStreamProperties.Constants.OUTPUT_TOPIC
        },
        brokerProperties = {
                "listeners=PLAINTEXT://localhost:9092",
                "port=9092",
                "log.dir=/tmp/data/logs",
                "auto.create.topics.enable=true",
                "delete.topic.enable=true"
        }
)
public class AbstractIntegrationSuccessPathTest {

    private final int retryLimit = 0;

    @Autowired
    protected EmbeddedKafkaBroker embeddedKafkaBroker;

    // To produce data
    @Autowired
    protected KafkaTemplate<PreferredMediaMsgKey, SendEmailCmd> sendEmailCmdProducerTemplate;

    // To read from the regular output topic
    @Autowired
    protected Consumer<PreferredMediaMsgKey, NotifiedEmailCmd> ouputConsumer;

    // Service to execute notification-preference
    @Autowired
    protected AdapterStreamProperties projectProerties;

    protected void subscribe(Consumer consumer, String topic, int attempt) {
        try {
            embeddedKafkaBroker.consumeFromAnEmbeddedTopic(consumer, topic);
        } catch (ComparisonFailure ex) {
            if (attempt < retryLimit) {
                subscribe(consumer, topic, attempt + 1);
            }
        }
    }
}
Please let me know if I should provide any more information.
"port=9092"
Don't use a fixed port; leave that out and the embedded broker will use a random port; the consumer configs are set up in KafkaTestUtils to point to the random port.
You shouldn't need to dirty the context after each test method - use a different group.id for each test and a different topic.
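A minimal sketch of what that can look like, mirroring the question's own consumer-config bean (the group id here is a placeholder, not taken from the question):

// No fixed port in @EmbeddedKafka: the broker starts on a random port and
// KafkaTestUtils resolves it from the embeddedKafkaBroker instance.
// Each test class gets its own group.id (and its own topics).
@Bean
public Map<String, Object> outputErrorConsumerConfig() {
    Map<String, Object> props = KafkaTestUtils.consumerProps(
            "error-path-test-group",        // unique per test class
            Boolean.TRUE.toString(),
            embeddedKafkaBroker);
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    return props;
}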
In my case the consumer was not closed properly. I had to do:
@After
public void tearDown() {
    // shutdown hook to correctly close the streams application
    Runtime.getRuntime().addShutdownHook(new Thread(ouputConsumer::close));
}
to resolve.

Apache Flink - how to send and consume POJOs using AWS Kinesis

I want to consume POJOs arriving from Kinesis with Flink.
Is there any standard for how to correctly send and deserialize the messages?
Thanks
I resolved it with:
DataStream<SamplePojo> kinesis = see.addSource(new FlinkKinesisConsumer<>(
        "my-stream",
        new POJODeserializationSchema(),
        kinesisConsumerConfig));
and
public class POJODeserializationSchema extends AbstractDeserializationSchema<SamplePojo> {

    private ObjectMapper mapper;

    @Override
    public SamplePojo deserialize(byte[] message) throws IOException {
        if (mapper == null) {
            mapper = new ObjectMapper();
        }
        SamplePojo retVal = mapper.readValue(message, SamplePojo.class);
        return retVal;
    }

    @Override
    public boolean isEndOfStream(SamplePojo nextElement) {
        return false;
    }
}
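On the producing side, a symmetric schema can turn the POJO back into JSON bytes. A minimal sketch using Flink's SerializationSchema interface (the class name is an assumption, not from the answer above):

public class POJOSerializationSchema implements SerializationSchema<SamplePojo> {

    private transient ObjectMapper mapper;

    @Override
    public byte[] serialize(SamplePojo element) {
        if (mapper == null) {
            mapper = new ObjectMapper();
        }
        try {
            return mapper.writeValueAsBytes(element);
        } catch (JsonProcessingException e) {
            throw new RuntimeException("Failed to serialize SamplePojo", e);
        }
    }
}

Assuming the standard flink-connector-kinesis producer, this schema would then be passed to a FlinkKinesisProducer when adding the sink.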

Why does the Kryo serialization framework in Apache Storm overwrite data when the bolt gets the values

Most developers probably use Avro as the serialization framework in their Kafka and Apache Storm schemes. But I needed to handle more complex data, so I chose the Kryo serialization framework and successfully integrated it into our project, which runs in a Kafka and Apache Storm environment. When I went further with it, though, I ran into a strange situation.
I sent 5 messages to Kafka, and the Storm job reads the 5 messages and deserializes them successfully. But the next bolt gets the wrong data values: it prints the same value as the last message for all of them. I then added a print right after the deserialization code completes, and it actually prints 5 different messages. Why doesn't the next bolt get those values? See my code below:
KryoScheme.java
public abstract class KryoScheme<T> implements Scheme {
private static final long serialVersionUID = 6923985190833960706L;
private static final Logger logger = LoggerFactory.getLogger(KryoScheme.class);
private Class<T> clazz;
private Serializer<T> serializer;
public KryoScheme(Class<T> clazz, Serializer<T> serializer) {
this.clazz = clazz;
this.serializer = serializer;
}
#Override
public List<Object> deserialize(byte[] buffer) {
Kryo kryo = new Kryo();
kryo.register(clazz, serializer);
T scheme = null;
try {
scheme = kryo.readObject(new Input(new ByteArrayInputStream(buffer)), this.clazz);
logger.info("{}", scheme);
} catch (Exception e) {
String errMsg = String.format("Kryo Scheme failed to deserialize data from Kafka to %s. Raw: %s",
clazz.getName(),
new String(buffer));
logger.error(errMsg, e);
throw new FailedException(errMsg, e);
}
return new Values(scheme);
}}
PrintFunction.java
public class PrintFunction extends BaseFunction {
private static final Logger logger = LoggerFactory.getLogger(PrintFunction.class);
#Override
public void execute(TridentTuple tuple, TridentCollector collector) {
List<Object> data = tuple.getValues();
if (data != null) {
logger.info("Scheme data size: {}", data.size());
for (Object value : data) {
PrintOut out = (PrintOut) value;
logger.info("{}.{}--value: {}",
Thread.currentThread().getName(),
Thread.currentThread().getId(),
out.toString());
collector.emit(new Values(out));
}
}
}}
StormLocalTopology.java
public class StormLocalTopology {
public static void main(String[] args) {
........
BrokerHosts zk = new ZkHosts("xxxxxx");
Config stormConf = new Config();
stormConf.put(Config.TOPOLOGY_DEBUG, false);
stormConf.put(Config.TOPOLOGY_TRIDENT_BATCH_EMIT_INTERVAL_MILLIS, 1000 * 5);
stormConf.put(Config.TOPOLOGY_WORKERS, 1);
stormConf.put(Config.TOPOLOGY_MESSAGE_TIMEOUT_SECS, 5);
stormConf.put(Config.TOPOLOGY_TASKS, 1);
TridentKafkaConfig actSpoutConf = new TridentKafkaConfig(zk, topic);
actSpoutConf.fetchSizeBytes = 5 * 1024 * 1024 ;
actSpoutConf.bufferSizeBytes = 5 * 1024 * 1024 ;
actSpoutConf.scheme = new SchemeAsMultiScheme(scheme);
actSpoutConf.startOffsetTime = kafka.api.OffsetRequest.LatestTime();
TridentTopology topology = new TridentTopology();
TransactionalTridentKafkaSpout actSpout = new TransactionalTridentKafkaSpout(actSpoutConf);
topology.newStream(topic, actSpout).parallelismHint(4).shuffle()
.each(new Fields("act"), new PrintFunction(), new Fields());
LocalCluster cluster = new LocalCluster();
cluster.submitTopology(topic+"Topology", stormConf, topology.build());
}}
There is also another problem: the Kryo scheme can only read one message buffer at a time. Is there another way to get multiple message buffers so data can be batch-sent to the next bolt?
Also, if I send 1 message, the full flow seems to succeed.
But when I send 2 messages it goes wrong. The printed output looks like this:
56157 [Thread-18-spout0] INFO s.s.a.s.s.c.KryoScheme - 2016-02-05T17:20:48.122+0800,T6mdfEW#N5pEtNBW
56160 [Thread-20-b-0] INFO s.s.a.s.s.PrintFunction - Scheme data size: 1
56160 [Thread-18-spout0] INFO s.s.a.s.s.c.KryoScheme - 2016-02-05T17:20:48.282+0800,T(o2KnFxtGB0Tlp8
56161 [Thread-20-b-0] INFO s.s.a.s.s.PrintFunction - Thread-20-b-0.99--value: 2016-02-05T17:20:48.282+0800,T(o2KnFxtGB0Tlp8
56162 [Thread-20-b-0] INFO s.s.a.s.s.PrintFunction - Scheme data size: 1
56162 [Thread-20-b-0] INFO s.s.a.s.s.PrintFunction - Thread-20-b-0.99--value: 2016-02-05T17:20:48.282+0800,T(o2KnFxtGB0Tlp8
I'm sorry, this was my mistake. I just found a bug in my Kryo deserialization class: there was state held in a shared field rather than a local variable, so it could be overwritten in a multi-threaded environment. With that state no longer shared, the code runs well.
Reference code below:
public class KryoSerializer<T extends BasicEvent> extends Serializer<T> implements Serializable {

    private static final long serialVersionUID = -4684340809824908270L;

    // Wrong: a shared field like this gets overwritten when several threads deserialize at once
    // private T event;

    public KryoSerializer(T event) {
        // the event is no longer stored in a field (see the note above)
    }

    @Override
    public void write(Kryo kryo, Output output, T event) {
        event.write(output);
    }

    @Override
    public T read(Kryo kryo, Input input, Class<T> type) {
        // use a local instance instead of the shared field
        T event = kryo.newInstance(type);
        event.read(input);
        return event;
    }
}

Spring Cloud - Getting Retry Working In RestTemplate?

I have been migrating an existing application over to Spring Cloud's service discovery, Ribbon load balancing, and circuit breakers. The application already makes extensive use of the RestTemplate, and I have been able to successfully use the load-balanced version of the template. However, I have been testing the situation where there are two instances of a service and I drop one of those instances out of operation. I would like the RestTemplate to fail over to the next server. From the research I have done, it appears that the fail-over logic exists in the Feign client and when using Zuul, but the load-balanced RestTemplate does not appear to have fail-over logic. Diving into the code, it looks like RibbonClientHttpRequestFactory uses the Netflix RestClient (which appears to have logic for doing retries).
So where do I go from here to get this working?
I would prefer to not use the Feign client because I would have to sweep A LOT of code.
I found this link that suggested using the @Retryable annotation along with @HystrixCommand, but this seems like something that should be part of the load-balanced RestTemplate.
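For context, the annotation-based approach from that link looks roughly like this (a sketch, assuming spring-retry with @EnableRetry and Hystrix Javanica are configured; the service name and URL are placeholders):

@Service
public class MyServiceClient {

    @Autowired
    private RestTemplate restTemplate;   // the @LoadBalanced template

    @HystrixCommand(fallbackMethod = "fallback")
    @Retryable(maxAttempts = 3)
    public String fetch() {
        return restTemplate.getForObject("http://my-service/endpoint", String.class);
    }

    // invoked by Hystrix when fetch() ultimately fails
    public String fallback() {
        return "default-response";
    }
}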
I did some digging into the code for RibbonClientHttpRequestFactory.RibbonHttpRequest:
protected ClientHttpResponse executeInternal(HttpHeaders headers) throws IOException {
    try {
        addHeaders(headers);
        if (outputStream != null) {
            outputStream.close();
            builder.entity(outputStream.toByteArray());
        }
        HttpRequest request = builder.build();
        HttpResponse response = client.execute(request, config);
        return new RibbonHttpResponse(response);
    }
    catch (Exception e) {
        throw new IOException(e);
    }
}
It appears that if I override this method and change it to use "client.executeWithLoadBalancer()" that I might be able to leverage the retry logic that is built into the RestClient? I guess I could create my own version of the RibbonClientHttpRequestFactory to do this?
Just looking for guidance on the best approach.
Thanks
To answer my own question:
Before I get into the details, a cautionary tale:
Eureka's self preservation mode sent me down a rabbit hole while testing the fail-over on my local machine. I recommend turning self preservation mode off while doing your testing. Because I was dropping nodes at a regular rate and then restarting (with a different instance ID using a random value), I tripped Eureka's self preservation mode. I ended up with many instances in Eureka that pointed to the same machine, same port. The fail-over was actually working but the next node that was chosen happened to be another dead instance. Very confusing at first!
I was able to get fail-over working with a modified version of RibbonClientHttpRequestFactory. Because RibbonAutoConfiguration creates a load-balanced RestTemplate with this factory, rather than injecting that RestTemplate, I create a new one with my modified version of the request factory:
protected RestTemplate restTemplate;

@Autowired
public void customizeRestTemplate(SpringClientFactory springClientFactory, LoadBalancerClient loadBalancerClient) {
    restTemplate = new RestTemplate();
    // Use a modified version of the http request factory that leverages the load balancing in Netflix's RestClient.
    RibbonRetryHttpRequestFactory lFactory = new RibbonRetryHttpRequestFactory(springClientFactory, loadBalancerClient);
    restTemplate.setRequestFactory(lFactory);
}
The modified Request Factory is just a copy of RibbonClientHttpRequestFactory with two minor changes:
1) In createRequest, I removed the code that was selecting a server from the load balancer because the RestClient will do that for us.
2) In the inner class, RibbonHttpRequest, I changed executeInternal to call "executeWithLoadBalancer".
The full class:
@SuppressWarnings("deprecation")
public class RibbonRetryHttpRequestFactory implements ClientHttpRequestFactory {

    private final SpringClientFactory clientFactory;
    private LoadBalancerClient loadBalancer;

    public RibbonRetryHttpRequestFactory(SpringClientFactory clientFactory, LoadBalancerClient loadBalancer) {
        this.clientFactory = clientFactory;
        this.loadBalancer = loadBalancer;
    }

    @Override
    public ClientHttpRequest createRequest(URI originalUri, HttpMethod httpMethod) throws IOException {
        String serviceId = originalUri.getHost();
        IClientConfig clientConfig = clientFactory.getClientConfig(serviceId);
        RestClient client = clientFactory.getClient(serviceId, RestClient.class);
        HttpRequest.Verb verb = HttpRequest.Verb.valueOf(httpMethod.name());
        return new RibbonHttpRequest(originalUri, verb, client, clientConfig);
    }

    public class RibbonHttpRequest extends AbstractClientHttpRequest {

        private HttpRequest.Builder builder;
        private URI uri;
        private HttpRequest.Verb verb;
        private RestClient client;
        private IClientConfig config;
        private ByteArrayOutputStream outputStream = null;

        public RibbonHttpRequest(URI uri, HttpRequest.Verb verb, RestClient client, IClientConfig config) {
            this.uri = uri;
            this.verb = verb;
            this.client = client;
            this.config = config;
            this.builder = HttpRequest.newBuilder().uri(uri).verb(verb);
        }

        @Override
        public HttpMethod getMethod() {
            return HttpMethod.valueOf(verb.name());
        }

        @Override
        public URI getURI() {
            return uri;
        }

        @Override
        protected OutputStream getBodyInternal(HttpHeaders headers) throws IOException {
            if (outputStream == null) {
                outputStream = new ByteArrayOutputStream();
            }
            return outputStream;
        }

        @Override
        protected ClientHttpResponse executeInternal(HttpHeaders headers) throws IOException {
            try {
                addHeaders(headers);
                if (outputStream != null) {
                    outputStream.close();
                    builder.entity(outputStream.toByteArray());
                }
                HttpRequest request = builder.build();
                HttpResponse response = client.executeWithLoadBalancer(request, config);
                return new RibbonHttpResponse(response);
            }
            catch (Exception e) {
                throw new IOException(e);
            }
            //TODO: fix stats, now that execute is not called
            // use execute here so stats are collected
            /*
            return loadBalancer.execute(this.config.getClientName(), new LoadBalancerRequest<ClientHttpResponse>() {
                @Override
                public ClientHttpResponse apply(ServiceInstance instance) throws Exception {}
            });
            */
        }

        private void addHeaders(HttpHeaders headers) {
            for (String name : headers.keySet()) {
                // apache http RequestContent pukes if there is a body and
                // the dynamic headers are already present
                if (!isDynamic(name) || outputStream == null) {
                    List<String> values = headers.get(name);
                    for (String value : values) {
                        builder.header(name, value);
                    }
                }
            }
        }

        private boolean isDynamic(String name) {
            return name.equals("Content-Length") || name.equals("Transfer-Encoding");
        }
    }

    public class RibbonHttpResponse extends AbstractClientHttpResponse {

        private HttpResponse response;
        private HttpHeaders httpHeaders;

        public RibbonHttpResponse(HttpResponse response) {
            this.response = response;
            this.httpHeaders = new HttpHeaders();
            List<Map.Entry<String, String>> headers = response.getHttpHeaders().getAllHeaders();
            for (Map.Entry<String, String> header : headers) {
                this.httpHeaders.add(header.getKey(), header.getValue());
            }
        }

        @Override
        public InputStream getBody() throws IOException {
            return response.getInputStream();
        }

        @Override
        public HttpHeaders getHeaders() {
            return this.httpHeaders;
        }

        @Override
        public int getRawStatusCode() throws IOException {
            return response.getStatus();
        }

        @Override
        public String getStatusText() throws IOException {
            return HttpStatus.valueOf(response.getStatus()).name();
        }

        @Override
        public void close() {
            response.close();
        }
    }
}
I had the same problem, but then, out of the box, everything was working (using a @LoadBalanced RestTemplate). I am using the Finchley version of Spring Cloud, and I think my problem was that I was not explicitly adding spring-retry to my pom configuration. I'll leave my spring-retry related yml configuration here (remember this only works with a @LoadBalanced RestTemplate, Zuul, or Feign):
spring:
  # Ribbon retries on
  cloud:
    loadbalancer:
      retry:
        enabled: true

# Ribbon service config
my-service:
  ribbon:
    MaxAutoRetries: 3
    MaxAutoRetriesNextServer: 1
    OkToRetryOnAllOperations: true
    retryableStatusCodes: 500, 502
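For completeness, the missing dependency mentioned above would look roughly like this in the pom (a sketch; the version is typically managed by the Spring Boot/Cloud parent or BOM):

<dependency>
    <groupId>org.springframework.retry</groupId>
    <artifactId>spring-retry</artifactId>
</dependency>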