Spring Cloud Ilford (2020.0.3) WebClient DataBufferLimitException: Exceeded limit on max bytes to buffer - spring-cloud

Spring Boot 2.5.3
Spring Cloud 2020.0.3
My understanding is that this was fixed in Hoxton SR7?
public byte[] findImage(String guid) {
    return webClient
            .get()
            .uri(sUrl + "/findImage/" + guid)
            .header(AUTHORIZATION_PROPERTY, AUTHENTICATION_SCHEME + clientToken)
            .retrieve()
            .onStatus(
                    HttpStatus::isError,
                    it -> handleError(it.statusCode().getReasonPhrase()))
            .bodyToMono(FileResponse.class)
            .block(Duration.of(6000, ChronoUnit.MILLIS))
            .getContent();
}
nested exception is org.springframework.core.io.buffer.DataBufferLimitException: Exceeded limit on max bytes to buffer : 262144
at org.springframework.web.reactive.function.client.WebClientResponseException.create(WebClientResponseException.java:229)

https://github.com/spring-projects/spring-framework/issues/23961
https://docs.spring.io/spring-framework/docs/current/reference/html/web-reactive.html#webflux-client-builder-maxinmemorysize
@Slf4j
@Configuration
public class WebClientConfig implements WebFluxConfigurer {

    @Override
    public void configureHttpMessageCodecs(ServerCodecConfigurer configurer) {
        configurer.defaultCodecs().maxInMemorySize(16 * 1024 * 1024);
    }
}
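Note that configureHttpMessageCodecs on a WebFluxConfigurer raises the limit for the server-side codecs; per the maxInMemorySize section linked above, the WebClient keeps its own limit, which can be raised on its builder. A minimal sketch, with illustrative bean names (not from the original post):

@Configuration
public class WebClientBuilderConfig {

    @Bean
    public WebClient imageWebClient(WebClient.Builder builder) {
        return builder
                // allow response bodies up to 16 MB to be buffered in memory
                .codecs(codecs -> codecs.defaultCodecs().maxInMemorySize(16 * 1024 * 1024))
                .build();
    }
}

In Spring Boot, the spring.codec.max-in-memory-size property is another way to raise this limit for the auto-configured codecs.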

Related

Kafka consumer poll with offset 0 not returning messages

I am using spring-kafka to poll messages. When I use the annotation-based consumer and set the offset to 0, it sees all messages from the earliest offset. But when I try to use an injected ConsumerFactory to create a consumer on my own, poll only returns a few messages or no messages at all. Is there some other config I need in order to be able to pull messages? The poll timeout is already set to 10 seconds.
@Component
public class GenericConsumer {

    private static final Logger logger = LoggerFactory.getLogger(GenericConsumer.class);

    @Autowired
    ConsumerFactory<String, Record> consumerFactory;

    public ConsumerRecords<String, Record> poll(String topic, String group) {
        logger.info("---------- Polling kafka records from topic " + topic + " group " + group);
        Consumer<String, Record> consumer = consumerFactory.createConsumer(group, "");
        consumer.subscribe(Arrays.asList(topic));
        // need to make a dummy poll before we can seek
        consumer.poll(1000);
        consumer.seekToBeginning(consumer.assignment());
        ConsumerRecords<String, Record> records;
        records = consumer.poll(10000);
        logger.info("------------ Total " + records.count() + " records polled");
        consumer.close();
        return records;
    }
}
It works fine for me; this was with Boot 2.0.5 and Spring Kafka 2.1.10 ...
@SpringBootApplication
public class So52284259Application implements ConsumerAwareRebalanceListener {

    private static final Logger logger = LoggerFactory.getLogger(So52284259Application.class);

    public static void main(String[] args) {
        SpringApplication.run(So52284259Application.class, args);
    }

    @Bean
    public ApplicationRunner runner(KafkaTemplate<String, String> template, GenericConsumer consumer) {
        return args -> {
//            for (int i = 0; i < 1000; i++) { // load up the topic on first run
//                template.send("so52284259", "foo" + i);
//            }
            consumer.poll("so52284259", "generic");
        };
    }

    @KafkaListener(id = "listener", topics = "so52284259")
    public void listen(String in) {
        if ("foo999".equals(in)) {
            logger.info("#KafkaListener: " + in);
        }
    }

    @Override
    public void onPartitionsAssigned(Consumer<?, ?> consumer, Collection<TopicPartition> partitions) {
        consumer.seekToBeginning(partitions);
    }

    @Bean
    public NewTopic topic() {
        return new NewTopic("so52284259", 1, (short) 1);
    }
}
@Component
class GenericConsumer {

    private static final Logger logger = LoggerFactory.getLogger(GenericConsumer.class);

    @Autowired
    ConsumerFactory<String, String> consumerFactory;

    public void poll(String topic, String group) {
        logger.info("---------- Polling kafka records from topic " + topic + " group " + group);
        Consumer<String, String> consumer = consumerFactory.createConsumer(group, "");
        consumer.subscribe(Arrays.asList(topic));
        // need to make a dummy poll before we can seek
        consumer.poll(1000);
        consumer.seekToBeginning(consumer.assignment());
        ConsumerRecords<String, String> records;
        boolean done = false;
        while (!done) {
            records = consumer.poll(10000);
            logger.info("------------ Total " + records.count() + " records polled");
            Iterator<ConsumerRecord<String, String>> iterator = records.iterator();
            while (iterator.hasNext()) {
                String value = iterator.next().value();
                if ("foo999".equals(value)) {
                    logger.info("Consumer: " + value);
                    done = true;
                }
            }
        }
        consumer.close();
    }
}
and
2018-09-12 09:35:25.929 INFO 61390 --- [ main] com.example.GenericConsumer : ------------ Total 500 records polled
2018-09-12 09:35:25.931 INFO 61390 --- [ main] com.example.GenericConsumer : ------------ Total 500 records polled
2018-09-12 09:35:25.932 INFO 61390 --- [ main] com.example.GenericConsumer : Consumer: foo999
2018-09-12 09:35:25.942 INFO 61390 --- [ listener-0-C-1] com.example.So52284259Application : #KafkaListener: foo999
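A side note on the dummy poll: consumer.poll(long) used above is deprecated in newer kafka-clients; the Duration overload behaves the same for the poll-then-seek pattern. A small sketch under that assumption, using the same consumer and topic variables as above:

consumer.subscribe(Arrays.asList(topic));
consumer.poll(Duration.ofMillis(1000));          // dummy poll to trigger partition assignment
consumer.seekToBeginning(consumer.assignment()); // rewind to the beginning
ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(10));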

Spring Cloud Gateway: is a request size limit filter available?

I am working on a project with spring-cloud-gateway. I see that a request size limitation filter is not yet available, but I need one, so I plan to develop it. Any idea whether it is coming, or should I start my own development?
I know that it is difficult to get an answer, as apart from the developers there are only a few people working on it.
I have created a filter named RequestSizeGatewayFilterFactory. It is working fine for our application as of now, but I am not sure it can become part of the spring-cloud-gateway project.
package com.api.gateway.somename.filter;

import org.springframework.cloud.gateway.filter.GatewayFilter;
import org.springframework.cloud.gateway.filter.factory.AbstractGatewayFilterFactory;
import org.springframework.http.HttpStatus;
import org.springframework.http.server.reactive.ServerHttpRequest;
import org.springframework.util.Assert;
import org.springframework.util.StringUtils;

/**
 * This filter blocks the request if the request size is larger than
 * the permissible size. The default request size is 5 MB.
 *
 * @author Arpan
 */
public class RequestSizeGatewayFilterFactory
        extends AbstractGatewayFilterFactory<RequestSizeGatewayFilterFactory.RequestSizeConfig> {

    private static final String PREFIX = "kMGTPE";
    private static final String ERROR = "Request size is larger than permissible limit." +
            " Request size is %s where permissible limit is %s";

    public RequestSizeGatewayFilterFactory() {
        super(RequestSizeGatewayFilterFactory.RequestSizeConfig.class);
    }

    @Override
    public GatewayFilter apply(RequestSizeGatewayFilterFactory.RequestSizeConfig requestSizeConfig) {
        requestSizeConfig.validate();
        return (exchange, chain) -> {
            ServerHttpRequest request = exchange.getRequest();
            String contentLength = request.getHeaders().getFirst("content-length");
            if (!StringUtils.isEmpty(contentLength)) {
                Long currentRequestSize = Long.valueOf(contentLength);
                if (currentRequestSize > requestSizeConfig.getMaxSize()) {
                    exchange.getResponse().setStatusCode(HttpStatus.PAYLOAD_TOO_LARGE);
                    exchange.getResponse().getHeaders().add("errorMessage",
                            getErrorMessage(currentRequestSize, requestSizeConfig.getMaxSize()));
                    return exchange.getResponse().setComplete();
                }
            }
            return chain.filter(exchange);
        };
    }

    public static class RequestSizeConfig {

        // 5 MB is the default request size
        private Long maxSize = 5000000L;

        public RequestSizeGatewayFilterFactory.RequestSizeConfig setMaxSize(Long maxSize) {
            this.maxSize = maxSize;
            return this;
        }

        public Long getMaxSize() {
            return maxSize;
        }

        public void validate() {
            Assert.isTrue(this.maxSize != null && this.maxSize > 0,
                    "maxSize must be greater than 0");
            Assert.isInstanceOf(Long.class, maxSize, "maxSize must be a number");
        }
    }

    private static String getErrorMessage(Long currentRequestSize, Long maxSize) {
        return String.format(ERROR,
                getHumanReadableByteCount(currentRequestSize),
                getHumanReadableByteCount(maxSize));
    }

    private static String getHumanReadableByteCount(long bytes) {
        int unit = 1000;
        if (bytes < unit) return bytes + " B";
        int exp = (int) (Math.log(bytes) / Math.log(unit));
        String pre = Character.toString(PREFIX.charAt(exp - 1));
        return String.format("%.1f %sB", bytes / Math.pow(unit, exp), pre);
    }
}
And the configuration for the filter is:
When it works as a default filter:
spring:
  application:
    name: somename
  cloud:
    gateway:
      default-filters:
      - Hystrix=default
      - RequestSize=7000000
When it needs to be applied to a specific route:
# ===========================================
- id: request_size_route
  uri: ${test.uri}/upload
  predicates:
  - Path=/upload
  filters:
  - name: RequestSize
    args:
      maxSize: 5000000
You also need to register the bean in some component-scannable class in your project (the equivalent place in spring-cloud-gateway-core is GatewayAutoConfiguration):
@Bean
public RequestSizeGatewayFilterFactory requestSizeGatewayFilterFactory() {
    return new RequestSizeGatewayFilterFactory();
}
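As an aside, newer Spring Cloud Gateway releases ship a built-in RequestSize filter, so it is worth checking your version before maintaining a custom one. If you prefer Java route configuration over YAML, the custom factory above can also be applied through a RouteLocator bean; a rough sketch (route id and URI are placeholders):

@Bean
public RouteLocator requestSizeRoutes(RouteLocatorBuilder builder,
                                      RequestSizeGatewayFilterFactory requestSizeFilterFactory) {
    return builder.routes()
            .route("request_size_route", r -> r
                    .path("/upload")
                    // apply the custom filter with a 5 MB limit
                    .filters(f -> f.filter(requestSizeFilterFactory.apply(
                            new RequestSizeGatewayFilterFactory.RequestSizeConfig().setMaxSize(5000000L))))
                    .uri("http://example.org/upload"))
            .build();
}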

Kafka + Spring Batch Listener Flush Batch

Using Kafka Broker: 1.0.1
spring-kafka: 2.1.6.RELEASE
I'm using a batched consumer with the following settings:
// Other settings are not shown..
props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "100");
I use spring listener in the following way:
@KafkaListener(topics = "${topics}", groupId = "${consumer.group.id}")
public void receive(final List<String> data,
        @Header(KafkaHeaders.RECEIVED_PARTITION_ID) final List<Integer> partitions,
        @Header(KafkaHeaders.RECEIVED_TOPIC) Set<String> topics,
        @Header(KafkaHeaders.OFFSET) final List<Long> offsets) { // ......code... }
I always find that a few messages remain in the batch and are not received by my listener. It appears that if the remaining messages are fewer than a batch size, they aren't consumed (maybe they are held in memory and not published to my listener). Is there any way to have a setting to auto-flush the batch after a time interval so as to avoid messages not being flushed?
What's the best way to deal with this kind of situation with a batch consumer?
I just ran a test without any problems...
@SpringBootApplication
public class So50370851Application {

    public static void main(String[] args) {
        SpringApplication.run(So50370851Application.class, args);
    }

    @Bean
    public ApplicationRunner runner(KafkaTemplate<String, String> template) {
        return args -> {
            for (int i = 0; i < 230; i++) {
                template.send("so50370851", "foo" + i);
            }
        };
    }

    @KafkaListener(id = "foo", topics = "so50370851")
    public void listen(List<String> in) {
        System.out.println(in.size());
    }

    @Bean
    public NewTopic topic() {
        return new NewTopic("so50370851", 1, (short) 1);
    }
}
and
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.enable-auto-commit=false
spring.kafka.consumer.max-poll-records=100
spring.kafka.listener.type=batch
and
100
100
30
Also, the debug logs show after a while that it is polling and fetching 0 records (and this gets repeated over and over).
That implies the problem is on the sending side.
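Since the listener only sees what each poll returns, repeated fetches of 0 records suggest the missing messages never reached the topic. If the producer side uses KafkaTemplate, one thing worth checking (a sketch based on the standard KafkaTemplate API, not code from the original question) is that the template is flushed before the sending process exits:

for (int i = 0; i < 230; i++) {
    template.send("so50370851", "foo" + i);
}
// KafkaTemplate sends asynchronously; flush so nothing is left
// sitting in the producer's buffer when the sender exits
template.flush();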

Why does using the Kryo serialization framework in Apache Storm overwrite data when the bolt gets values?

Most developers probably use Avro as the serialization framework for Kafka and Apache Storm schemes. But I need to handle more complex data, so I chose the Kryo serialization framework and successfully integrated it into our project, which uses Kafka and Apache Storm. But when I tried to go further, I ran into a strange situation.
I sent 5 messages to Kafka, and the Storm job can read the 5 messages and deserialize them successfully. But the next bolt gets the wrong data values: it prints out the same value as the last message for all of them. I then added a print-out right after the deserialization code completes, and it actually prints 5 different messages. Why doesn't the next bolt get the correct values? See my code below:
KryoScheme.java
public abstract class KryoScheme<T> implements Scheme {

    private static final long serialVersionUID = 6923985190833960706L;
    private static final Logger logger = LoggerFactory.getLogger(KryoScheme.class);

    private Class<T> clazz;
    private Serializer<T> serializer;

    public KryoScheme(Class<T> clazz, Serializer<T> serializer) {
        this.clazz = clazz;
        this.serializer = serializer;
    }

    @Override
    public List<Object> deserialize(byte[] buffer) {
        Kryo kryo = new Kryo();
        kryo.register(clazz, serializer);
        T scheme = null;
        try {
            scheme = kryo.readObject(new Input(new ByteArrayInputStream(buffer)), this.clazz);
            logger.info("{}", scheme);
        } catch (Exception e) {
            String errMsg = String.format("Kryo Scheme failed to deserialize data from Kafka to %s. Raw: %s",
                    clazz.getName(),
                    new String(buffer));
            logger.error(errMsg, e);
            throw new FailedException(errMsg, e);
        }
        return new Values(scheme);
    }
}
PrintFunction.java
public class PrintFunction extends BaseFunction {

    private static final Logger logger = LoggerFactory.getLogger(PrintFunction.class);

    @Override
    public void execute(TridentTuple tuple, TridentCollector collector) {
        List<Object> data = tuple.getValues();
        if (data != null) {
            logger.info("Scheme data size: {}", data.size());
            for (Object value : data) {
                PrintOut out = (PrintOut) value;
                logger.info("{}.{}--value: {}",
                        Thread.currentThread().getName(),
                        Thread.currentThread().getId(),
                        out.toString());
                collector.emit(new Values(out));
            }
        }
    }
}
StormLocalTopology.java
public class StormLocalTopology {

    public static void main(String[] args) {
        ........
        BrokerHosts zk = new ZkHosts("xxxxxx");
        Config stormConf = new Config();
        stormConf.put(Config.TOPOLOGY_DEBUG, false);
        stormConf.put(Config.TOPOLOGY_TRIDENT_BATCH_EMIT_INTERVAL_MILLIS, 1000 * 5);
        stormConf.put(Config.TOPOLOGY_WORKERS, 1);
        stormConf.put(Config.TOPOLOGY_MESSAGE_TIMEOUT_SECS, 5);
        stormConf.put(Config.TOPOLOGY_TASKS, 1);

        TridentKafkaConfig actSpoutConf = new TridentKafkaConfig(zk, topic);
        actSpoutConf.fetchSizeBytes = 5 * 1024 * 1024;
        actSpoutConf.bufferSizeBytes = 5 * 1024 * 1024;
        actSpoutConf.scheme = new SchemeAsMultiScheme(scheme);
        actSpoutConf.startOffsetTime = kafka.api.OffsetRequest.LatestTime();

        TridentTopology topology = new TridentTopology();
        TransactionalTridentKafkaSpout actSpout = new TransactionalTridentKafkaSpout(actSpoutConf);
        topology.newStream(topic, actSpout).parallelismHint(4).shuffle()
                .each(new Fields("act"), new PrintFunction(), new Fields());

        LocalCluster cluster = new LocalCluster();
        cluster.submitTopology(topic + "Topology", stormConf, topology.build());
    }
}
There is also another question: why can the Kryo scheme only read one message buffer at a time? Is there another way to get multiple message buffers so I can batch-send data to the next bolt?
Also, if I send 1 message, the full flow seems to succeed.
But when I send 2 messages it goes wrong; the printed-out messages look like below:
56157 [Thread-18-spout0] INFO s.s.a.s.s.c.KryoScheme - 2016-02-05T17:20:48.122+0800,T6mdfEW#N5pEtNBW
56160 [Thread-20-b-0] INFO s.s.a.s.s.PrintFunction - Scheme data size: 1
56160 [Thread-18-spout0] INFO s.s.a.s.s.c.KryoScheme - 2016-02-05T17:20:48.282+0800,T(o2KnFxtGB0Tlp8
56161 [Thread-20-b-0] INFO s.s.a.s.s.PrintFunction - Thread-20-b-0.99--value: 2016-02-05T17:20:48.282+0800,T(o2KnFxtGB0Tlp8
56162 [Thread-20-b-0] INFO s.s.a.s.s.PrintFunction - Scheme data size: 1
56162 [Thread-20-b-0] INFO s.s.a.s.s.PrintFunction - Thread-20-b-0.99--value: 2016-02-05T17:20:48.282+0800,T(o2KnFxtGB0Tlp8
I'm sorry, this was my mistake. I just found a bug in my Kryo deserializer class: it kept a field at instance scope, so it could be overwritten in a multithreaded environment. After removing the shared field, the code runs well.
The reference code is below:
public class KryoSerializer<T extends BasicEvent> extends Serializer<T> implements Serializable {

    private static final long serialVersionUID = -4684340809824908270L;

    // This was the bug: a shared instance field (and the constructor that populated it)
    // was reused across threads, so concurrent deserialization overwrote the value.
    // private T event;

    @Override
    public void write(Kryo kryo, Output output, T event) {
        event.write(output);
    }

    @Override
    public T read(Kryo kryo, Input input, Class<T> type) {
        // create a fresh instance per call instead of reusing the shared field
        T event = kryo.newInstance(type);
        event.read(input);
        return event;
    }
}

Implementing MDB Pool Listener in JBoss JMS

I have an application deployed in JBoss with multiple MDBs using the JBoss JMS implementation, each with a different MDB pool size configuration. I am looking for some kind of mechanism where we can have a listener on each MDB pool so we can check whether, at any point, all instances from the MDB pool are being utilized. This will help in analyzing and configuring the appropriate MDB pool size for each MDB.
We use Jamon to monitor instances of MDBs, like this:
@MessageDriven
@TransactionManagement(value = TransactionManagementType.CONTAINER)
@TransactionAttribute(value = TransactionAttributeType.REQUIRED)
@ResourceAdapter("wmq.jmsra.rar")
@AspectDomain("YourDomainName")
public class YourMessageDrivenBean implements MessageListener
{
    // jamon package constant
    protected static final String WB_ONMESSAGE = "wb.onMessage";

    // instance counter
    private static AtomicInteger counter = new AtomicInteger(0);
    private int instanceIdentifier = 0;

    // "log" is assumed to be the bean's logger (not shown in the original answer)

    @Resource
    MessageDrivenContext ctx;

    @Override
    public void onMessage(Message message)
    {
        final Monitor monall = MonitorFactory.start(WB_ONMESSAGE);
        final Monitor mon = MonitorFactory.start(WB_ONMESSAGE + "." + toString()
                + "; mdb instance identifier=" + instanceIdentifier);
        try {
            // process your message here
        } catch (final Exception x) {
            log.error("Error onMessage " + x.getMessage(), x);
            ctx.setRollbackOnly();
        } finally {
            monall.stop();
            mon.stop();
        }
    }

    @PostConstruct
    public void init()
    {
        instanceIdentifier = counter.incrementAndGet();
        log.debug("constructed instance #" + instanceIdentifier);
    }
}
You can then see every created instance of your MDB in the JAMon monitor.
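To turn this into the pool-usage check the question asks about, the aggregated monitor can be read back from MonitorFactory: its active/max-active counters show how many onMessage calls ran concurrently, which approximates MDB pool utilization. A rough sketch, assuming the JAMon label above is unchanged; configuredPoolSize and log are placeholders:

// read the aggregated JAMon time monitor for the label used in the MDB
Monitor onMessageMonitor = MonitorFactory.getTimeMonitor("wb.onMessage");
double active = onMessageMonitor.getActive();       // onMessage calls running right now
double maxActive = onMessageMonitor.getMaxActive(); // highest concurrency observed so far
if (maxActive >= configuredPoolSize) {
    log.warn("MDB pool may be saturated: maxActive=" + maxActive
            + ", poolSize=" + configuredPoolSize);
}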