I have a client-server application and I'm using RxJava to do server requests from the client. The client should only do one request at a time, so I intend to use a thread-queue scheduler similar to the trampoline scheduler.
Now I'm trying to implement a mechanism to watch for changes on the server. For that I send a long-lived request that blocks until the server has some changes and sends back the result (long polling).
This long-poll request should only run when the job queue is idle. I'm looking for a way to automatically stop the watch request when a regular request is scheduled and start it again when the queue becomes empty. I thought about modifying the trampoline scheduler to get this behavior, but I have the feeling that this is a common problem and there might be an easier solution?
You can hold onto the Subscription returned by scheduling the long-poll task, unsubscribe it when the queue becomes non-empty, and re-schedule it when the queue becomes empty again.
Edit: here is an example with a basic ExecutorService:
import java.util.concurrent.*;
import java.util.concurrent.atomic.*;

public class IdleScheduling {

    static final class TaskQueue {
        final ExecutorService executor;
        final AtomicReference<Future<?>> idleFuture;
        final Runnable idleRunnable;
        final AtomicInteger wip;

        public TaskQueue(Runnable idleRunnable) {
            this.executor = Executors.newFixedThreadPool(1);
            this.idleRunnable = idleRunnable;
            this.idleFuture = new AtomicReference<>();
            this.wip = new AtomicInteger();
            this.idleFuture.set(executor.submit(idleRunnable));
        }

        public void shutdownNow() {
            executor.shutdownNow();
        }

        public Future<?> enqueue(Runnable task) {
            if (wip.getAndIncrement() == 0) {
                // first regular task after idle: cancel (interrupt) the long poll
                idleFuture.get().cancel(true);
            }
            return executor.submit(() -> {
                task.run();
                if (wip.decrementAndGet() == 0) {
                    // queue drained: restart the idle long poll
                    startIdle();
                }
            });
        }

        void startIdle() {
            idleFuture.set(executor.submit(idleRunnable));
        }
    }

    public static void main(String[] args) throws Exception {
        TaskQueue tq = new TaskQueue(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException ex) {
                    System.out.println("Idle interrupted...");
                    return;
                }
                System.out.println("Idle...");
            }
        });
        try {
            Thread.sleep(1500);
            tq.enqueue(() -> System.out.println("Work 1"));
            Thread.sleep(500);
            tq.enqueue(() -> {
                System.out.println("Work 2");
                try {
                    Thread.sleep(500);
                } catch (InterruptedException ex) {
                    // ignore; shutting down
                }
            });
            tq.enqueue(() -> System.out.println("Work 3"));
            Thread.sleep(1500);
        } finally {
            tq.shutdownNow();
        }
    }
}
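For reference, the same bookkeeping can be expressed with RxJava 1.x primitives, since the question is about RxJava. This is a sketch only (the Action0 bodies are placeholders and Schedulers.io() is an arbitrary choice); a Scheduler.Worker executes its scheduled actions serially, so it plays the role of the single-threaded executor above:

Scheduler.Worker worker = Schedulers.io().createWorker();
AtomicReference<Subscription> idle = new AtomicReference<>();
AtomicInteger wip = new AtomicInteger();

// start the long poll while the queue is idle
idle.set(worker.schedule(() -> { /* long poll */ }));

// when a regular request is enqueued:
if (wip.getAndIncrement() == 0) {
    idle.get().unsubscribe(); // cancel the long poll
}
worker.schedule(() -> {
    /* run the request */
    if (wip.decrementAndGet() == 0) {
        // queue drained: resume the long poll
        idle.set(worker.schedule(() -> { /* long poll */ }));
    }
});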
We are using spring-kafka 2.3.0 in our app. We have observed some processing glitches in the scenarios below, with the following code:
@Service
@EnableScheduling
public class KafkaService {

    public void sendToKafkaProducer(String data) {
        kafkaTemplate.send(configuration.getProducer().getTopicName(), data);
    }

    @KafkaListener(id = "consumer_grpA_id",
            topics = "#{__listener.getEnvironmentConfiguration().getConsumer().getTopicName()}",
            groupId = "consumer_grpA", autoStartup = "false")
    public void onMessage(ConsumerRecord<String, String> data) throws Exception {
        passA(data.value());
    }

    // note: in this excerpt 'message' is typed as String, but the getEventID()
    // calls suggest it is really an event object in the full application
    private void passB(String message) {
        // counter to keep track of retry attempts
        if (counter.containsKey(message.getEventID())) {
            // RETRY_COUNT = 5
            if (counter.get(message.getEventID()) < RETRY_COUNT) {
                retryAgain(message);
            }
        } else {
            firstRetryPass(message);
        }
    }

    private void retryAgain(String message) {
        counter.put(message.getEventID(), counter.get(message.getEventID()) + 1);
        try {
            registry.stop(); // pause the listener
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private void firstRetryPass(String message) {
        // first-time entry for count and time
        counter.put(message.getEventID(), 1);
        try {
            registry.stop(); // pause the listener
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private void passA(String message) {
        try {
            passToTarget(message); // call target processor
            LOGGER.info("Message Processed Successfully to the target");
        } catch (Exception e) {
            targetUnavailable = true;
            passB(message);
        }
    }

    private void passToTarget(String message) {
        // processor logic; if the target is not available, retry after 15 mins via passB
    }

    @Scheduled(cron = "0 0/15 * 1/1 * ?")
    public void scheduledMethod() {
        try {
            if (targetUnavailable) {
                registry.start();
                firstTimeStart = false;
            }
            LOGGER.info(">>>Scheduler Running ?>>>" + registry.isRunning());
        } catch (Exception e) {
            LOGGER.error(e.getMessage());
        }
    }
}
On receipt of the first message after a gap in processing, the consumer doesn't pick up that first message; the subsequent messages are processed.
As we don't have direct access to the Kafka topics, we aren't able to identify which events weren't picked up by the consumer. How do we track the events that are not picked up, and why does this happen?
We also configured a scheduler whose job is to keep the Kafka registry running. Is this scheduler required when we already have a listener configured?
What are the memory and CPU utilization implications if we keep the listener running? That was one of the reasons we used the Kafka registry to stop the listener explicitly whenever the target is down, so we need to validate whether this approach is sustainable. My hunch is that it goes against the basic working of a listener, whose main job is to keep listening for new events irrespective of the target's status.
You shouldn't stop the registry on the listener thread unless you use stop(Runnable) - otherwise there will be a deadlock and a delay since the container waits for the listener to exit.
Stopping the container (via the registry) won't actually take effect until any remaining records fetched by the last poll have been processed (unless you set max.poll.records=1).
When the listener exits normally, the record's offset will be committed so that record will not be redelivered on the next start.
You can use the ContainerStoppingErrorHandler for this use case. See here.
Throw an exception and the error handler will stop the container for you.
But that will stop the container on the first try.
If you want retries, use a SeekToCurrentErrorHandler and call the ContainerStoppingErrorHandler from the recoverer after retries are exhausted.
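A rough sketch of that combination (the listener id comes from the question; the back-off values and the registry-based stop are assumptions, using stop(Runnable) to avoid blocking the listener thread, per the first point above):

// on the ConcurrentKafkaListenerContainerFactory bean:
// retry each failed record 5 times, 1 second apart (values assumed);
// once retries are exhausted, stop the listener container asynchronously
SeekToCurrentErrorHandler errorHandler = new SeekToCurrentErrorHandler(
        (record, exception) -> registry.getListenerContainer("consumer_grpA_id")
                .stop(() -> LOGGER.info("listener container stopped")),
        new FixedBackOff(1000L, 5L));
factory.setErrorHandler(errorHandler);

Here FixedBackOff comes from org.springframework.util.backoff and registry is the injected KafkaListenerEndpointRegistry.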
In my Processor API I store the messages in a key-value store, and every 100 messages I make a POST request. If something fails while trying to send the messages (the API is not responding, etc.), I want to stop processing messages until there is evidence that the API calls work.
Here is my code:
public class BulkProcessor implements Processor<byte[], UserEvent> {

    private KeyValueStore<Integer, ArrayList<UserEvent>> keyValueStore;
    private BulkAPIClient bulkClient;
    private String storeName;
    private ProcessorContext context;
    private int count;

    @Autowired
    public BulkProcessor(String storeName, BulkAPIClient bulkClient) {
        this.storeName = storeName;
        this.bulkClient = bulkClient;
    }

    @Override
    public void init(ProcessorContext context) {
        this.context = context;
        keyValueStore = (KeyValueStore<Integer, ArrayList<UserEvent>>) context.getStateStore(storeName);
        count = 0;
        // check every 15 minutes whether there are remainders in the store that are not sent yet
        this.context.schedule(Duration.ofMinutes(15), PunctuationType.WALL_CLOCK_TIME, (timestamp) -> {
            if (count > 0) {
                sendEntriesFromStore();
            }
        });
    }

    @Override
    public void process(byte[] key, UserEvent value) {
        int userGroupId = Integer.valueOf(value.getUserGroupId());
        ArrayList<UserEvent> userEventArrayList = keyValueStore.get(userGroupId);
        if (userEventArrayList == null) {
            userEventArrayList = new ArrayList<>();
        }
        userEventArrayList.add(value);
        keyValueStore.put(userGroupId, userEventArrayList);
        count++; // count each buffered message (this increment is missing in the original snippet)
        if (count == 100) {
            sendEntriesFromStore();
        }
    }

    private void sendEntriesFromStore() {
        KeyValueIterator<Integer, ArrayList<UserEvent>> iterator = keyValueStore.all();
        while (iterator.hasNext()) {
            KeyValue<Integer, ArrayList<UserEvent>> entry = iterator.next();
            BulkRequest bulkRequest = new BulkRequest(entry.key, entry.value);
            if (bulkRequest.getLocation() != null) {
                URI url = bulkClient.buildURIPath(bulkRequest);
                try {
                    bulkClient.postRequestBulkApi(url, bulkRequest);
                    keyValueStore.delete(entry.key);
                } catch (BulkApiException e) {
                    logger.warn(e.getMessage(), e.fillInStackTrace());
                }
            }
        }
        iterator.close();
        count = 0;
    }

    @Override
    public void close() {
    }
}
Currently in my code, if a call to the API fails it will iterate over the next 100 (and this will keep happening as long as it fails) and add them to the keyValueStore. I don't want this to happen. Instead I would prefer to stop the stream and continue once the keyValueStore is emptied. Is that possible?
Could I throw a StreamsException?
try {
    bulkClient.postRequestBulkApi(url, bulkRequest);
    keyValueStore.delete(entry.key);
} catch (BulkApiException e) {
    throw new StreamsException(e);
}
Would that kill my streams app, so that the process dies?
You should only delete the record from the state store after you are sure the record was successfully processed by the API, so remove the first keyValueStore.delete(entry.key); and keep the second one. Otherwise you can potentially lose messages when the keyValueStore.delete is committed to the underlying changelog topic but your messages have not been successfully processed yet; that gives you only an at-most-once guarantee.
Just wrap the API-calling code in an infinite loop and keep trying until the record is successfully processed; your processor will not consume new messages from the upstream processor node because it runs on the same StreamThread:
private void sendEntriesFromStore() {
    KeyValueIterator<Integer, ArrayList<UserEvent>> iterator = keyValueStore.all();
    while (iterator.hasNext()) {
        KeyValue<Integer, ArrayList<UserEvent>> entry = iterator.next();
        // remove the state store delete code here: keyValueStore.delete(entry.key);
        BulkRequest bulkRequest = new BulkRequest(entry.key, entry.value);
        if (bulkRequest.getLocation() != null) {
            URI url = bulkClient.buildURIPath(bulkRequest);
            while (true) {
                try {
                    bulkClient.postRequestBulkApi(url, bulkRequest);
                    // only delete after successfully processing the message,
                    // to achieve an at-least-once guarantee
                    keyValueStore.delete(entry.key);
                    break;
                } catch (BulkApiException e) {
                    logger.warn(e.getMessage(), e.fillInStackTrace());
                }
            }
        }
    }
    iterator.close();
    count = 0;
}
Yes, you could throw a StreamsException; the StreamTask will be migrated to another StreamThread during re-balance, possibly on the same application instance. If the API keeps throwing exceptions until all StreamThreads have died, your application will not exit automatically; you will only see the message below. You should add a custom handler to exit your app when all stream threads have died, using KafkaStreams#setUncaughtExceptionHandler or by listening for the stream state changing to ERROR:
All stream threads have died. The instance will be in error state and should be closed.
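A minimal sketch of that shutdown wiring (the topology/props variables, logger, and exit code are assumptions; this uses the Thread.UncaughtExceptionHandler-based setUncaughtExceptionHandler available in this Kafka Streams generation):

KafkaStreams streams = new KafkaStreams(topology, props);
// log whatever killed a stream thread
streams.setUncaughtExceptionHandler((thread, throwable) ->
        logger.error("Stream thread " + thread.getName() + " died", throwable));
// once every thread is dead the instance transitions to ERROR;
// exit so the orchestrator (e.g. k8s) can restart the app
streams.setStateListener((newState, oldState) -> {
    if (newState == KafkaStreams.State.ERROR) {
        System.exit(1);
    }
});
streams.start();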
In the end I used a simple KafkaConsumer instead of KafkaStreams, but the bottom line is that I changed BulkApiException to extend RuntimeException, which I throw again after I log it. So now it looks as follows:
} catch (BulkApiException bae) {
    logger.error(bae.getMessage(), bae.fillInStackTrace());
    throw new BulkApiException();
} finally {
    consumer.close();
    int exitCode = SpringApplication.exit(ctx, () -> 1);
    System.exit(exitCode);
}
This way the application exits and k8s restarts the pod. The reasoning is that if the API I'm forwarding the requests to is down, there is no point in continuing to read messages; k8s will keep restarting the pod until the other API is back up.
I developed a project with Spring Boot and used Vert.x as an asynchronous reactive toolkit. My ServerVerticle creates an HTTP server which receives HTTP requests from an Angular app and sends messages over the event bus. When a message arrives, ServerVerticle sends it to another verticle which has a service instance in it (for connecting to the repository). I tested it with Postman and got a "No handlers for address" error as a bad request.
Here is my ServerVerticle:
HttpServerResponse res = routingContext.response();
res.setChunked(true);
EventBus eventBus = vertx.eventBus();
eventBus.request(InstrumentsServiceVerticle.FETCH_INSTRUMENTS_ADDRESS, "", result -> {
    if (result.succeeded()) {
        res.setStatusCode(200).write((Buffer) result.result().body()).end();
    } else {
        res.setStatusCode(400).write(result.cause().toString()).end();
    }
});
My InstrumentsServiceVerticle is as follows:
static final String FETCH_INSTRUMENTS_ADDRESS = "fetch.instruments.service";

// Reuse the Vert.x Mapper :)
private final ObjectMapper mapper = Json.mapper;
private final InstrumentService instrumentService;

public InstrumentsServiceVerticle(InstrumentService instrumentService) {
    this.instrumentService = instrumentService;
}

private Handler<Message<String>> fetchInstrumentsHandler() {
    return msg -> vertx.<String>executeBlocking(future -> {
        try {
            future.complete(mapper.writeValueAsString(instrumentService.getInstruments()));
        } catch (JsonProcessingException e) {
            logger.error("Failed to serialize result " + InstrumentsServiceVerticle.class.getName());
            future.fail(e);
        }
    },
    result -> {
        if (result.succeeded()) {
            msg.reply(result.result());
        } else {
            msg.reply(result.cause().toString());
        }
    });
}

@Override
public void start() throws Exception {
    super.start();
    vertx.eventBus().<String>consumer(FETCH_INSTRUMENTS_ADDRESS).handler(fetchInstrumentsHandler());
}
And I deployed both verticles in the Spring Boot application starter.
Below is my verticle:
package com.api.redis.gateway.verticle;

import java.util.UUID;

import io.vertx.core.json.JsonObject;
import io.vertx.ext.web.RoutingContext;
import io.vertx.redis.RedisClient;
import io.vertx.redis.RedisOptions;

public class SimpleRestChild extends SimpleRestServer {

    RedisClient client;

    @Override
    public void start() {
        super.start();
        client = RedisClient.create(vertx, new RedisOptions().setHost("127.0.0.1").setPort(6379));
        client.subscribe("channelForServiceToPublish", handler -> {
            if (handler.succeeded())
                System.out.println("SimpleRestServer subscribed to the channel successfully");
        });
    }

    public void handleSubscription(RoutingContext routingContext) {
        JsonObject requestAsJson = routingContext.getBodyAsJson();
        requestAsJson.put("uuid", getUUID());
        // this client object is null
        client.set("request", requestAsJson.toString(), handler -> {
            System.out.println("Simple server is setting value to redis client");
            if (handler.succeeded()) {
                System.out.println("Key and value is stored in Redis Server");
            } else if (handler.failed()) {
                System.out.println("Key and value failed to be stored on Redis Server with cause : " + handler.cause().getMessage());
            }
        });
        client.publish("channelForServerToPublish", "ServiceOne", handler -> {
            if (handler.succeeded()) {
                System.out.println("Simple Server published message successfully");
            } else if (handler.failed()) {
                System.out.println("Simple Server failed to publish message");
            }
        });
        routingContext.vertx().eventBus().consumer("io.vertx.redis.channelForServiceToPublish", handler -> {
            client.get("response", res -> {
                if (res.succeeded()) {
                    JsonObject responseAsJson = new JsonObject(res.result());
                    if (responseAsJson.getString("uuid").equalsIgnoreCase(requestAsJson.getString("uuid"))) {
                        routingContext.response().setStatusCode(200).end(res.result());
                    }
                } else if (res.failed()) {
                    System.out.println("Failed to get message from Redis Server");
                    routingContext.response().setStatusCode(500).end("Server Error ");
                }
            });
        });
    }

    private String getUUID() {
        UUID uid = UUID.randomUUID();
        return uid.toString();
    }
}
And below is the main verticle, from which the above verticle is deployed; its handler method is called on any request to the HTTP server.
package com.api.redis.gateway.verticle;

import io.vertx.core.AbstractVerticle;
import io.vertx.ext.web.Router;
import io.vertx.ext.web.handler.BodyHandler;
import io.vertx.redis.RedisClient;
import io.vertx.redis.RedisOptions;

public class SimpleRestServer extends AbstractVerticle {

    @Override
    public void start() {
        int http_port = 9001;
        vertx.deployVerticle("com.api.redis.gateway.verticle.SimpleRestChild", handler -> {
            if (handler.succeeded()) {
                System.out.println(" SimpleRestChild deployed successfully");
            }
        });
        Router router = Router.router(vertx);
        router.route().handler(BodyHandler.create());
        SimpleRestChild child = null;
        try {
            child = (SimpleRestChild) Class.forName("com.api.redis.gateway.verticle.SimpleRestChild").newInstance();
        } catch (InstantiationException | IllegalAccessException | ClassNotFoundException e) {
            e.printStackTrace();
        }
        router.route("/subscription").handler(child::handleSubscription);
        vertx.createHttpServer().requestHandler(router::accept).listen(http_port);
        System.out.println("Server started at port : " + http_port);
    }
}
When handleSubscription is called for any "/subscription" request, the client object is null.
As per my understanding, two objects are created here: one whose start() is invoked and one whose start() is not.
I want to initialize the RedisClient once, and use that object whenever handleSubscription() is called for a request to "/subscription".
How can I achieve this?
The requests may be coming in before the client initialization is actually complete.
AbstractVerticle has two variations of start():
start(), and
start(Future<Void> startFuture)
The overloaded version with the Future parameter should be used to perform potentially long-running initializations that are necessary before the verticle can be considered deployed and ready (there's a section dedicated to this topic in the docs).
So you might try changing your code as follows:
public class SimpleRestChild extends SimpleRestServer {

    RedisClient client;

    @Override
    public void start(Future<Void> startFuture) {
        client = ...
        // important point below is that this verticle's
        // deployment status depends on whether or not
        // the client initialization succeeds
        client.subscribe("...", handler -> {
            if (handler.succeeded()) {
                startFuture.complete();
            } else {
                startFuture.fail(handler.cause());
            }
        });
    }
}
and:
public class SimpleRestServer extends AbstractVerticle {

    @Override
    public void start(Future<Void> startFuture) {
        int http_port = 9001;
        vertx.deployVerticle("...", deployHandler -> {
            // if the child verticle is successfully deployed
            // then move on to completing this verticle's
            // initialization
            if (deployHandler.succeeded()) {
                Router router = ...
                ...
                // if the server is successfully installed then
                // complete the Future to signal this verticle
                // is deployed
                vertx.createHttpServer()
                        .requestHandler(router::accept)
                        .listen(http_port, listenHandler -> {
                            if (listenHandler.succeeded()) {
                                startFuture.complete();
                            } else {
                                startFuture.fail(listenHandler.cause());
                            }
                        });
            } else {
                startFuture.fail(deployHandler.cause());
            }
        });
    }
}
Using this type of approach, your verticles will only service requests once all their dependent resources are fully initialized.
This is my code. It seems to execute only one request at a time:
public class RestFulService extends AbstractVerticle {

    @Override
    public void start() throws Exception {
        Router router = Router.router(vertx);
        router.get("/test/hello/:input").handler(new Handler<RoutingContext>() {
            @Override
            public void handle(RoutingContext routingContext) {
                WorkerExecutor executor = vertx.createSharedWorkerExecutor("my-worker-pool", 10, 120000);
                executor.executeBlocking(future -> {
                    try {
                        Thread.sleep(5000);
                        future.complete();
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                }, false, res -> {
                    System.out.println("The result is: " + res.result());
                    routingContext.response().end("routing1" + res.result());
                    executor.close();
                });
            }
        });
        vertx.createHttpServer().requestHandler(router::accept).listen(9001); // server start omitted in the original snippet; port assumed
    }
}
When I call 10 requests from the browser at the same time, it takes 50000ms to complete them all.
Please guide me to fix it.
Try with curl, I suspect your browser is using the same connection for all requests (thus waiting for a response before sending the next request).
By the way, you don't need to call createSharedWorkerExecutor on each request. You can do it once when the verticle is started.
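For the second point, here is a sketch of the verticle with the executor created once in start() (the port is an assumption; the blocking body is the same placeholder sleep as in the question):

public class RestFulService extends AbstractVerticle {

    private WorkerExecutor executor;

    @Override
    public void start() throws Exception {
        // create the worker pool once, instead of on every request
        executor = vertx.createSharedWorkerExecutor("my-worker-pool", 10, 120000);
        Router router = Router.router(vertx);
        router.get("/test/hello/:input").handler(routingContext ->
                executor.<String>executeBlocking(future -> {
                    try {
                        Thread.sleep(5000); // the blocking work
                        future.complete("done");
                    } catch (InterruptedException e) {
                        future.fail(e);
                    }
                }, false, res -> routingContext.response().end("routing1" + res.result())));
        vertx.createHttpServer().requestHandler(router::accept).listen(9001); // port assumed
    }

    @Override
    public void stop() {
        executor.close(); // release the pool when the verticle is undeployed
    }
}

With ordered set to false, the blocking bodies run concurrently on the 10-thread pool, so 10 simultaneous requests should finish in roughly 5000ms instead of 50000ms.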