Apache Curator LeaderSelector: How to avoid giving up leadership by not exiting from takeLeadership() method? - apache-zookeeper

I'm trying to implement a simple leader-election-based system where the main business logic of the application runs on the elected leader node. As part of acquiring leadership, the main business logic starts various other services. I'm using the Apache Curator LeaderSelector recipe to implement the leader selection process.
In my system, the node which gets selected as the leader keeps the leadership until failure forces another leader to be selected. In other words, once I get the leadership I don't want to relinquish it.
According to the Curator LeaderSelector documentation, leadership is relinquished when the takeLeadership() method returns. I want to avoid that, so right now I just block the return with a wait loop.
My questions are:
Is this the right way to implement leadership?
Is the wait loop (as shown in the code example below) the right way to block?
public class MainBusinessLogic extends LeaderSelectorListenerAdapter {
private static final String ZK_PATH_LEADER_ROOT = "/some-path";
private final CuratorFramework client;
private final LeaderSelector leaderSelector;
public MainBusinessLogic() {
client = CuratorService.getInstance().getCuratorFramework();
leaderSelector = new LeaderSelector(client, ZK_PATH_LEADER_ROOT, this);
leaderSelector.autoRequeue();
leaderSelector.start();
}
@Override
public void takeLeadership(CuratorFramework client) throws IOException {
// Start various other internal services...
ServiceA serviceA = new ServiceA(...);
ServiceB serviceB = new ServiceB(...);
...
...
serviceA.start();
serviceB.start();
...
...
// We are done but need to keep leadership to this instance, else all the business
// logic and services will start on another node.
// Is this the right way to prevent relinquishing leadership???
while (true) {
synchronized (this) {
try {
wait();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
}
}

Instead of the wait(), you can just do:
Thread.currentThread().join();
But, yes, that's correct.
BTW - if you prefer a different method, you can use LeaderLatch with a LeaderLatchListener.
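For reference, a minimal sketch of the LeaderLatch approach, assuming the same ZooKeeper path as above; the class name and the listener bodies are illustrative:
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.recipes.leader.LeaderLatch;
import org.apache.curator.framework.recipes.leader.LeaderLatchListener;

public class LatchBasedMainBusinessLogic {
    private final LeaderLatch leaderLatch;

    public LatchBasedMainBusinessLogic(CuratorFramework client) throws Exception {
        leaderLatch = new LeaderLatch(client, "/some-path");
        leaderLatch.addListener(new LeaderLatchListener() {
            @Override
            public void isLeader() {
                // Start ServiceA, ServiceB, etc. here. Leadership is held until
                // close() is called or the ZooKeeper session is lost, so there is
                // no method that has to be kept from returning.
            }

            @Override
            public void notLeader() {
                // Stop the services here so another node can take over cleanly.
            }
        });
        leaderLatch.start();
    }

    public void shutdown() throws Exception {
        leaderLatch.close(); // relinquishes leadership explicitly
    }
}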

Related

Pattern for using properly MongoClient in Vert.x

I feel quite uncomfortable with the MongoClient class, probably because I don't exactly understand what it is or how it works.
The first call to MongoClient.createShared will actually create the
pool, and the specified config will be used.
Subsequent calls will return a new client instance that uses the same
pool, so the configuration won’t be used.
Does that mean that the pattern should be:
In the startup function, to create the pool, we make a call:
mc = MongoClient.createShared(vx, config, "poolname");
Is the returned value mc important for this first call if it succeeds? What is its value if the creation of the pool fails? The documentation doesn't say. There is a socket exception if mongod is not running, but what about the other cases?
In another place in the code (another verticle, for example), can we write mc = MongoClient.createShared(vx, new JsonObject(), "poolname"); to avoid having to systematically access shared objects?
Again, in another verticle where we need to access the database, should we define MongoClient mc as a class field, in which case it will be released to the pool only in the stop() method? Or should it be a local variable populated with MongoClient.createShared(...) and de-allocated with mc.close() once we no longer need the connection, in order to release it back to the pool?
What I would write is as follows
// Main startup Verticle
import ...
public class MainVerticle extends AbstractVerticle {
...
@Override
public void start(Future<Void> sf) throws Exception {
...
try {
MongoClient.createShared(vx, config().getJsonObject("mgcnf"), "pool");
}
catch(Exception e) {
log.error("error error...");
sf.fail("failure reason");
return;
}
...
sf.complete();
}
...some other methods
}
and then, in some other place
public class SomeVerticle extends AbstractVerticle {
public void someMethod(...) {
...
// use the database:
MongoClient mc = MongoClient.createShared(vx, new JsonObject(), "pool");
mc.save(the_coll, the_doc, res -> {
mc.close();
if(res.succeeded()) {
...
}
else {
...
}
});
...
}
...
}
Does that make sense? Yet this is not what I see in the examples I could find around the internet.
Don't worry about pools. Don't use them. They don't do what you think they do.
In the start method of any verticle, set a field (what you call a class field, but you really mean an instance field) on the inheritor of AbstractVerticle to MongoClient.createShared(getVertx(), config). Close the client in your stop method. That's it.
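A minimal sketch of that pattern, assuming Vert.x 3.x and the "mgcnf" config key from the question; the collection name and document are illustrative:
import io.vertx.core.AbstractVerticle;
import io.vertx.core.Future;
import io.vertx.core.json.JsonObject;
import io.vertx.ext.mongo.MongoClient;

public class SomeVerticle extends AbstractVerticle {
    private MongoClient mc; // instance field, one client per verticle instance

    @Override
    public void start(Future<Void> sf) throws Exception {
        // Create (or join) the shared pool once for this verticle.
        mc = MongoClient.createShared(getVertx(), config().getJsonObject("mgcnf"));
        sf.complete();
    }

    @Override
    public void stop() throws Exception {
        // Release this verticle's client when it is undeployed.
        mc.close();
    }

    private void someMethod() {
        mc.save("the_coll", new JsonObject().put("key", "value"), res -> {
            if (res.succeeded()) {
                // ...
            } else {
                // ...
            }
        });
    }
}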
The other exceptions you'll see are:
Bad username/password
Unhealthy cluster state
The Java driver has a limit of 500 or 1,000 connections (depending on version), you'll receive an exception if you exceed this connection count
All of these will be propagated up from the driver wrapped in a VertxException.

Samza: Delay processing of messages until timestamp

I'm processing messages from a Kafka topic with Samza. Some of the messages come with a timestamp in the future and I'd like to postpone the processing until after that timestamp. In the meantime, I'd like to keep processing other incoming messages.
What I tried is to have my Task queue the messages and implement WindowableTask to periodically check whether their timestamps allow them to be processed. The basic idea looks like this:
public class MyTask implements StreamTask, WindowableTask {
private HashSet<MyMessage> waitingMessages = new HashSet<>();
@Override
public void process(IncomingMessageEnvelope incomingMessageEnvelope, MessageCollector messageCollector, TaskCoordinator taskCoordinator) {
byte[] message = (byte[]) incomingMessageEnvelope.getMessage();
MyMessage parsedMessage = MyMessage.parseFrom(message);
if (parsedMessage.getValidFromDateTime().isBeforeNow()) {
// Do the processing
} else {
waitingMessages.add(parsedMessage);
}
}
@Override
public void window(MessageCollector messageCollector, TaskCoordinator taskCoordinator) {
for (MyMessage message : waitingMessages) {
if (message.getValidFromDateTime().isBeforeNow()) {
// Do the processing and remove the message from the set
}
}
}
}
This obviously has some downsides. I'd be losing my waiting messages in memory when I redeploy my task. So I'd like to know the best practice for delaying the processing of messages with Samza. Do I need to reemit the messages to the same topic again and again until I can finally process them? We're talking about delaying the processing for a few minutes up to 1-2 hours here.
It's important to keep in mind, when dealing with message queues, that they perform a very specific function in a system: they hold messages while the processor(s) are busy processing preceding messages. It is expected that a properly-functioning message queue will deliver messages on demand. What this implies is that as soon as a message reaches the head of the queue, the next pull on the queue will yield the message.
Notice that delay is not a configurable part of the equation. Instead, delay is an output variable of a system with a queue. In fact, Little's Law offers some interesting insights into this.
So, in a system where a delay is necessary (for example, to join/wait for a parallel operation to complete), you should be looking at other methods. Typically a queryable database would make sense in this particular instance. If you find yourself keeping messages in a queue for a pre-set period of time, you're actually using the message queue as a database - a function it was not designed to provide. Not only is this risky, but it also has a high likelihood of hurting the performance of your message broker.
I think you could use the key-value store of Samza to keep the state of your task instance, instead of an in-memory Set.
It should look something like:
public class MyTask implements StreamTask, WindowableTask, InitableTask {
private KeyValueStore<String, MyMessage> waitingMessages;
@SuppressWarnings("unchecked")
@Override
public void init(Config config, TaskContext context) throws Exception {
this.waitingMessages = (KeyValueStore<String, MyMessage>) context.getStore("messages-store");
}
@Override
public void process(IncomingMessageEnvelope incomingMessageEnvelope, MessageCollector messageCollector,
TaskCoordinator taskCoordinator) {
byte[] message = (byte[]) incomingMessageEnvelope.getMessage();
MyMessage parsedMessage = MyMessage.parseFrom(message);
if (parsedMessage.getValidFromDateTime().isBeforeNow()) {
// Do the processing
} else {
waitingMessages.put(parsedMessage.getId(), parsedMessage);
}
}
@Override
public void window(MessageCollector messageCollector, TaskCoordinator taskCoordinator) {
KeyValueIterator<String, MyMessage> all = waitingMessages.all();
try {
while (all.hasNext()) {
MyMessage message = all.next().getValue();
// Do the processing and remove the message from the store
}
} finally {
// KeyValueIterator holds resources and must be closed when you are done with it
all.close();
}
}
}
If you redeploy your task, Samza should recreate the state of the key-value store (Samza keeps the values in a special Kafka changelog topic associated with the store). You of course need to provide some extra configuration for your store (in the above example, for messages-store).
You can read about the key-value store here (for the latest Samza version):
https://samza.apache.org/learn/documentation/0.14/container/state-management.html
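For reference, the store configuration might look roughly like this; the property names follow the Samza state-management docs, while the serde names, factory choice, and changelog topic are illustrative assumptions:
# Register the key-value store used as "messages-store" above (RocksDB-backed, with a changelog for recovery)
stores.messages-store.factory=org.apache.samza.storage.kv.RocksDbKeyValueStorageEngineFactory
stores.messages-store.key.serde=string
stores.messages-store.msg.serde=my-message
stores.messages-store.changelog=kafka.my-task-messages-store-changelog

# Serde registrations referenced above; MyMessageSerdeFactory is a hypothetical serde for MyMessage
serializers.registry.string.class=org.apache.samza.serializers.StringSerdeFactory
serializers.registry.my-message.class=com.example.MyMessageSerdeFactory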

Consuming messages from a Hazelcast Queue only once in a distributed environment

I have a question similar to this post:
Consume message only once from Topic per listeners running in cluster
When I tried using a queue to publish messages and added an item listener in two different JVMs, I received each message in both of them. I want each message to be received only once in a clustered/distributed environment.
Here's my code snippet:
Publishing of the message:
getQueue().add("some sample message");
I have the same listener configured in two different JVMs which goes like this:
public HazelcastQueueListener(){
HazelcastInstance instance = HazelcastClient.newHazelcastClient(HazelClientConfig.getClientConfig());
IQueue<String> queue1 = instance.getQueue("SAMPLEQUEUE");
queue1.addItemListener(this, false);
}
public static void main(String args[]){
HazelcastQueueListener listener = new HazelcastQueueListener();
}
@Override
public void itemAdded(ItemEvent<String> arg0) {
// TODO Auto-generated method stub
if(arg0!=null){
System.out.println("Item coming out of queue 1" +arg0);
}
else{
System.out.println("null");
}
}
You have to poll the queue, like a standard Java BlockingQueue, in order to consume an item only once:
String item = queue1.take();
AFAIK, Hazelcast doesn't support asynchronous consumption from a queue. The ItemListener doesn't consume the item; it only notifies you that an item is available.
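A minimal consumer sketch along those lines, reusing the HazelClientConfig helper from the question and assuming a Hazelcast 3.x-era client; running the same loop in each JVM means each item is handed to exactly one of them:
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IQueue;

public class HazelcastQueueConsumer {
    public static void main(String[] args) throws InterruptedException {
        HazelcastInstance instance = HazelcastClient.newHazelcastClient(HazelClientConfig.getClientConfig());
        IQueue<String> queue = instance.getQueue("SAMPLEQUEUE");
        while (!Thread.currentThread().isInterrupted()) {
            // take() blocks until an item is available and removes it from the queue,
            // so only one consumer in the cluster receives any given item.
            String item = queue.take();
            System.out.println("Consumed: " + item);
        }
    }
}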

How to access memcached asynchronously in netty

I am writing a server in Netty, in which I need to make a call to memcached. I am using spymemcached and can easily make the memcached call synchronously. I would like this memcached call to be async. Is that possible? The examples provided with Netty do not seem to be helpful.
I tried using callbacks: I created an ExecutorService pool in my handler and submitted a callback worker to this pool, like this:
public class MyHandler extends ChannelInboundMessageHandlerAdapter<MyPOJO> implements CallbackInterface{
...
private static ExecutorService pool = Executors.newFixedThreadPool(20);
@Override
public void messageReceived(ChannelHandlerContext ctx, MyPOJO pojo) {
...
CallingbackWorker worker = new CallingbackWorker(key, this);
pool.submit(worker);
...
}
public void myCallback(Object response) {
//get response
this.ctx.nextOutboundMessageBuf().add(response);
}
}
CallingbackWorker looks like:
public class CallingbackWorker implements Callable<Object> {
private final String key;
private final CallbackInterface c;
public CallingbackWorker(String key, CallbackInterface c) {
this.c = c;
this.key = key;
}
public Object call() {
//get value from key, then hand it to the callback
c.myCallback(value);
return null;
}
}
However, when I do this, this.ctx.nextOutboundMessageBuf() in myCallback gets stuck.
So, overall, my question is: how to do async memcached calls in Netty?
There are two problems here: a small-ish issue with the way you're trying to code this, and a bigger one with many libraries that provide async service calls, but no good way to take full advantage of them in an async framework like Netty. That forces users into suboptimal hacks like this one, or a less-bad, but still not ideal approach I'll get to in a moment.
First the coding problem. The issue is that you're trying to call a ChannelHandlerContext method from a thread other than the one associated with your handler, which is not allowed. That's pretty easy to fix, as shown below. You could code it a few other ways, but this is probably the most straightforward:
private static ExecutorService pool = Executors.newFixedThreadPool(20);
public void channelRead(final ChannelHandlerContext ctx, final Object msg) {
//...
final GetFuture<String> future = memcachedClient().getAsync("foo", stringTranscoder());
// first wait for the response on a pool thread
pool.execute(new Runnable() {
public void run() {
String value;
Exception err;
try {
value = future.get(3, TimeUnit.SECONDS); // or whatever timeout you want
err = null;
} catch (Exception e) {
err = e;
value = null;
}
// put results into final variables; compiler won't let us do it directly above
final String fValue = value;
final Exception fErr = err;
// now process the result on the ChannelHandler's thread
ctx.executor().execute(new Runnable() {
public void run() {
handleResult(fValue, fErr);
}
});
}
});
// note that we drop through to here right after calling pool.execute() and
// return, freeing up the handler thread while we wait on the pool thread.
}
private void handleResult(String value, Exception err) {
// handle it
}
That will work, and might be sufficient for your application. But you've got a fixed-size thread pool, so if you're ever going to handle much more than 20 concurrent connections, that will become a bottleneck. You could increase the pool size, or use an unbounded one, but at that point you might as well be running under Tomcat, as memory consumption and context-switching overhead start to become issues, and you lose the scalability that was the attraction of Netty in the first place!
And the thing is, Spymemcached is NIO-based, event-driven, and uses just one thread for all its work, yet provides no way to fully take advantage of its event-driven nature. I expect they'll fix that before too long, just as Netty 4 and Cassandra have recently by providing callback (listener) methods on Future objects.
Meanwhile, being in the same boat as you, I researched the alternatives, and not being too happy with what I found, I wrote (yesterday) a Future tracker class that can poll up to thousands of Futures at a configurable rate, and call you back on the thread (Executor) of your choice when they complete. It uses just one thread to do this. I've put it up on GitHub if you'd like to try it out, but be warned that it's still wet, as they say. I've tested it a lot in the past day, and even with 10000 concurrent mock Future objects, polling once a millisecond, its CPU utilization is negligible, though it starts to go up beyond 10000. Using it, the example above looks like this:
// in some globally-accessible class:
public static final ForeignFutureTracker FFT = new ForeignFutureTracker(1, TimeUnit.MILLISECONDS);
// in a handler class:
public void channelRead(final ChannelHandlerContext ctx, final Object msg) {
// ...
final GetFuture<String> future = memcachedClient().getAsync("foo", stringTranscoder());
// add a listener for the Future, with a timeout in 2 seconds, and pass
// the Executor for the current context so the callback will run
// on the same thread.
Global.FFT.addListener(future, 2, TimeUnit.SECONDS, ctx.executor(),
new ForeignFutureListener<String,GetFuture<String>>() {
public void operationSuccess(String value) {
// do something ...
ctx.fireChannelRead(someval);
}
public void operationTimeout(GetFuture<String> f) {
// do something ...
}
public void operationFailure(Exception e) {
// do something ...
}
});
}
You don't want more than one or two FFT instances active at any time, or they could become a drain on CPU. But a single instance can handle thousands of outstanding Futures; about the only reason to have a second one would be to handle higher-latency calls, like S3, at a slower polling rate, say 10-20 milliseconds.
One drawback of the polling approach is that it adds a small amount of latency. For example, polling once a millisecond, on average it will add 500 microseconds to the response time. That won't be an issue for most applications, and I think is more than offset by the memory and CPU savings over the thread pool approach.
I expect within a year or so this will be a non-issue, as more async clients provide callback mechanisms, letting you fully leverage NIO and the event-driven model.

Java server framework to listen to PostgreSQL NOTIFY statements

I need to write a server which listens to PostgreSQL NOTIFY statements and considers each notification as a request to serve (actually, more like a task to process). My main requirements are:
1) A mechanism to poll on PGConnection (Ideally this would be a listener, but in the PgJDBC implementation, we are required to poll for pending notifications. Reference)
2) Execute a callback based on the "request" (using channel name in the NOTIFY notification), on a separate thread.
3) Has thread management built in (create/delete threads when a task is being processed/finished, queue tasks when too many are being processed concurrently, etc.).
Requirements 1 and 2 are easy for me to implement myself, but I would prefer not to write thread management myself.
Is there an existing framework meeting these requirements? An added advantage would be if the framework automatically generates request statistics.
To be honest, requirement 3 could probably be satisfied just by using the standard ExecutorService implementations from Executors, which allow you to, for example, get a fixed-size thread pool and submit work to it in the form of Runnable or Callable implementations. They deal with the gory details of creating threads up to the limit, etc. You can then have your listener implement a thin layer of Runnable to collect statistics, etc.
Something like:
private final ExecutorService threadPool = Executors.newFixedThreadPool(THREAD_POOL_SIZE);
private final NotificationCallback callback;
private int waiting, executing, succeeded, failed;
public void pollAndDispatch() {
Notification notification;
while ((notification = pollDatabase()) != null) {
final Notification ourNotification = notification;
incrementWaitingCount();
threadPool.submit(new Runnable() {
public void run() {
waitingToExecuting();
try {
callback.processNotification(ourNotification);
executionCompleted();
} catch (Exception e) {
executionFailed();
LOG.error("Exeception thrown while processing notification: " + ourNotification, e);
}
}
});
}
}
// check PGconn for notification and return it, or null if none received
protected Notification pollDatabase() { ... }
// maintain statistics
private synchronized void incrementWaitingCount() { ++waiting; }
private synchronized void waitingToExecuting() { --waiting; ++executing; }
private synchronized void executionCompleted() { --executing; ++succeeded; }
private synchronized void executionFailed() { --executing; ++failed; }
If you want to be fancy, put the notifications onto a JMS queue and use its infrastructure to listen for new items and process them.
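For completeness, a rough sketch of what pollDatabase() might look like with PgJDBC; the connection field and the Notification wrapper class are assumptions carried over from the snippet above:
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayDeque;
import java.util.Deque;
import org.postgresql.PGConnection;
import org.postgresql.PGNotification;

// During setup, subscribe the JDBC connection to the channel once:
// try (Statement st = connection.createStatement()) { st.execute("LISTEN my_channel"); }

private final Deque<Notification> buffered = new ArrayDeque<>();

protected Notification pollDatabase() throws SQLException {
    if (buffered.isEmpty()) {
        // Depending on driver version, you may need to issue a dummy statement
        // (e.g. SELECT 1) first so pending notifications are read off the socket.
        PGConnection pgConn = connection.unwrap(PGConnection.class);
        // getNotifications() is non-blocking and drains everything received so far
        PGNotification[] pending = pgConn.getNotifications();
        if (pending != null) {
            for (PGNotification n : pending) {
                buffered.add(new Notification(n.getName(), n.getParameter()));
            }
        }
    }
    return buffered.poll(); // null when nothing is pending
}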