When does the Curator lib do retries - apache-zookeeper

I was under the impression that the Curator lib will retry all ZooKeeper operations even if the session is lost. I was simulating a case where I had created a node and then set some data on that node. Then, while retrieving the data, I killed the session. I see that Curator is able to reconnect to the session, but I thought it would also retry and get the data, which was not the case. Is there any documentation as to when exactly, and for which operations, Curator does a retry?
Code that watches the node:
// getAsyncCuratorFramework is the question's own helper; it presumably wraps the
// synchronous client in Curator's async API. The chain sets a watch on the node and
// blocks until the watch event fires or the job timeout elapses.
getAsyncCuratorFramework(curatorFramework)
        .watched()
        .checkExists()
        .forPath(fullNodePath)
        .event()
        .toCompletableFuture()
        .get(jobTimeoutDO.getDuration(), jobTimeoutDO.getTimeUnit());
Now I am simulating a test where I am watching an ephemeral node for the node-delete event, and I schedule the following call in between:
KillSession.kill
Since the session was killed, the node will be removed and Curator will try to establish the connection again. All of this works fine and as expected. But I also thought that Curator would retry and watch the node again; of course, if the node does not exist it might throw an exception, but I do create the node again.
Just wanted to confirm that in the above scenario Curator will not retry. BTW, it throws the following exception:
AsyncEventException

But I also thought that Curator would retry and watch the node again
That's not how retries work. Retries in Curator retry individual ZooKeeper operations; they are not a high-level feature and will not reset watches for you. What you are looking for is one of Curator's high-level recipes that manages a ZNode. Have a look at PersistentNode or NodeCache.
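For illustration, here is a minimal NodeCache sketch (the connect string, retry policy, and path are placeholders, not taken from the question). Unlike a one-shot watched().checkExists() call, the recipe re-registers its own watch after connection loss or session expiry:

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.cache.NodeCache;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class NodeCacheExample {
    public static void main(String[] args) throws Exception {
        // Placeholder connection string and retry policy -- adjust for your environment.
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "localhost:2181", new ExponentialBackoffRetry(1000, 3));
        client.start();

        // NodeCache keeps a local copy of the ZNode and re-sets its own watch
        // whenever the connection is re-established.
        NodeCache cache = new NodeCache(client, "/my/node");
        cache.getListenable().addListener(() ->
                System.out.println("Node changed: " + cache.getCurrentData()));
        cache.start();
    }
}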

Related

How to handle Kafka consumer failures

I am trying to understand how to handle failed consumer records. How do we know there is a record failure? What I am seeing is that when record processing fails in the consumer with a runtime exception, the consumer keeps retrying. But when the next record is available to process, it commits the offset of the latest record, which is expected. My question is: how do we know about the failed record? In older messaging systems, failed messages are rolled back to the queue and processing stops there. Then we know the queue is down and we can take action.
I can record the failed record in some DB table, but what happens if this recording fails?
I can move failures to error/dead-letter queues, but again, what happens if this moving fails?
I am using Kafka 2.6 with Spring Boot 2.3.4. Any help would be appreciated.
Sounds like you would need to disable auto-commit and manually commit the offsets yourself once your definition of "successfully processed" is met. If you include external processes like a database, then you will also need to increase the Kafka client timeouts so it doesn't think the consumer is dead while waiting on error logging/handling.
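A minimal sketch of that pattern with the plain Kafka consumer API (the bootstrap servers, group id, and topic name are placeholders, and process() stands in for your own logic, e.g. the DB write):

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ManualCommitConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder
        props.put("group.id", "my-group");                   // placeholder
        props.put("enable.auto.commit", "false");            // commit only after successful processing
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic"));   // placeholder topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    process(record);                          // your business logic / DB write
                }
                // Offsets are committed only after the whole batch succeeded; a failure
                // above leaves them uncommitted, so the records are re-read on restart.
                consumer.commitSync();
            }
        }
    }

    private static void process(ConsumerRecord<String, String> record) {
        System.out.println(record.value());
    }
}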

KafkaTopicProvisioner failed to obtain partition

I observed my services going down with the exception below. The reason was that one of our three Kafka brokers was down, and Spring was always trying to connect to that same broker. Before it can skip the faulty broker and connect to the next available one, Kubernetes restarts the pod (due to a liveness probe failure configured at 60 seconds). Because of the restart, it again tries the same faulty broker first the next time, so the pod never comes up.
How can we configure Spring to not wait more than 10 seconds for a faulty broker?
I found the cloud.stream.binder.healthTimeout property but am not sure if this is the right one. How can I replicate the issue locally?
Kafka version: 2.2.1
{"timestamp":"2020-01-21T17:16:47.598Z","level":"ERROR","thread":"main","logger":"org.springframework.cloud.stream.binder.kafka.provisioning.KafkaTopicProvisioner","message":"Failed to obtain partition information","context":"default","exception":"org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.\n"}

How to add a health check for topics in the KafkaStreams API

I have a critical Kafka application that needs to be up and running all the time. The source topics are created by Debezium Kafka Connect for the MySQL binlog. Unfortunately, many things can go wrong with this setup. A lot of the time the Debezium connectors fail and need to be restarted, and so do my apps (because without throwing any exception they just hang and stop consuming). My manual way of testing and discovering the failure is checking the Kibana logs and then consuming the suspicious topic through a terminal. I can mimic this in code, but that is obviously far from best practice. I wonder if there is anything in the KafkaStreams API that allows me to do such a health check, and to check other parts of the Kafka cluster.
Another point that bothers me is whether I can keep the stream alive and rejoin the topics when the connectors are up again.
You can check the Kafka Streams state to see if it is rebalancing/running, which would indicate healthy operation. Although, if no data is getting into the topology, I would assume there would be no errors happening, so you then need to look at the health of your upstream dependencies.
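As a rough sketch (the class and method names here are just illustrative, and streams is assumed to be your running KafkaStreams instance), a state-based health check could look like:

import org.apache.kafka.streams.KafkaStreams;

public class StreamsHealthCheck {
    private final KafkaStreams streams;

    public StreamsHealthCheck(KafkaStreams streams) {
        this.streams = streams;
    }

    // RUNNING and REBALANCING are the "alive" states; ERROR or NOT_RUNNING mean the
    // topology has stopped and the instance likely needs to be restarted.
    public boolean isHealthy() {
        KafkaStreams.State state = streams.state();
        return state == KafkaStreams.State.RUNNING || state == KafkaStreams.State.REBALANCING;
    }
}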
Overall, it sounds like you might want to invest some time into monitoring tools like Consul or Sensu, which can run local service health checks and send out alerts when services go down, or at the very least Elasticsearch alerting.
As far as Kafka health checking goes, you can do that in several ways:
Are the broker and ZooKeeper processes running? (SSH to the node, check the processes)
Are the broker and ZooKeeper ports open? (use a socket connection)
Are there important JMX metrics you can track? (Metricbeat)
Can you find an active controller broker? (use AdminClient#describeCluster)
Do a required minimum number of brokers respond as part of the controller metadata? (which can be obtained from the AdminClient)
Do the topics that you use have the proper configuration (retention, min-isr, replication-factor, partition count, etc.)? (again, use the AdminClient; see the sketch after this list)
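A rough AdminClient sketch touching on the last three checks (the bootstrap servers and topic name are placeholders; a full configuration check would additionally use AdminClient#describeConfigs):

import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.DescribeClusterResult;
import org.apache.kafka.clients.admin.TopicDescription;

public class KafkaHealthProbe {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder

        try (AdminClient admin = AdminClient.create(props)) {
            // Is there an active controller, and how many brokers responded?
            DescribeClusterResult cluster = admin.describeCluster();
            System.out.println("Controller: " + cluster.controller().get(10, TimeUnit.SECONDS));
            System.out.println("Broker count: " + cluster.nodes().get(10, TimeUnit.SECONDS).size());

            // Does a topic we depend on exist, and with how many partitions?
            TopicDescription topic = admin.describeTopics(Collections.singletonList("my-topic"))
                    .all().get(10, TimeUnit.SECONDS).get("my-topic");
            System.out.println("Partitions: " + topic.partitions().size());
        }
    }
}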

During a rolling upgrade/restart, how to detect when a Kafka broker is "done"?

I need to automate a rolling restart of a Kafka cluster (3 Kafka brokers). I can easily do it manually: restart one after the other, while checking the log to see when it's fine (e.g., when the new process has joined the cluster).
What is a good way to automate this check? How can I ask a broker whether it's up and running, connected to its peers, with all topics up to date, and so on? In my restart script I have access to the metrics, but to be frank, I did not really see one there which gives me a clear picture.
Another approach would be to ask what a good "readiness" probe would be that does not simply check some TCP/IP port, but looks at the actual server state...
I would suggest exposing JMX metrics and tracking the following for cluster health:
the controller count (must be 1 over the whole cluster)
under-replicated partitions (should be zero for a healthy cluster)
unclean leader elections (if you don't disable these in server.properties, make sure there are none in the metric counts)
ISR shrinks within a reasonable time period, like a 10-minute window (should be none)
Also, Yelp has tooling for rolling restarts implemented in Python, which requires Jolokia JMX agents installed on the brokers; it polls the metrics to make sure some of the above conditions are true.
Assuming your cluster was healthy at the beginning of the restart operation, at a minimum, after each broker restart, you should ensure that the under-replicated partition count returns to zero before restarting the next broker.
As the previous responders mentioned, there is existing code out there to automate this. I don't use Jolokia myself, but my solution (which I'm working on now) also uses JMX metrics.
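For reference, a minimal JMX polling sketch that reads the standard under-replicated-partitions gauge (it assumes the broker was started with a remote JMX port, e.g. JMX_PORT=9999; the host and port here are placeholders):

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class UrpCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder host/port; point this at each broker in turn.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://broker1:9999/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            // Standard Kafka broker gauge for under-replicated partitions.
            ObjectName urp = new ObjectName(
                    "kafka.server:type=ReplicaManager,name=UnderReplicatedPartitions");
            Number value = (Number) mbsc.getAttribute(urp, "Value");
            System.out.println("Under-replicated partitions: " + value);
            // A rolling-restart script would poll this on every broker and only move on
            // to the next broker once the total across the cluster is back to zero.
        }
    }
}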
Kafka Utils by Yelp is one of the best tools for detecting when a Kafka broker is "done". Specifically, kafka_rolling_restart is the tool: it gets broker details from ZooKeeper and URP (under-replicated partition) metrics from each broker. When a broker is restarted, the total URP count across the Kafka cluster is collected periodically, and the next broker is restarted only once it drops back to zero. The controller broker is restarted last.

Is there a way to delete an ephemeral node only after a client has been disconnected for some time?

Our cluster nodes take action on the deletion of some ephemeral nodes, but we're having network issues at a customer site that lead to the deletion of the ephemeral nodes for some clients, even though those clients are still up and running.
I agree that the network issues should be solved, but it doesn't look like we can do that at the moment.
So, is there a way to configure ZooKeeper to delete the ephemeral node for a disconnected client only if it stays disconnected for X amount of time?
We use Apache Curator as a Zookeeper client.
Our Zookeeper version is 3.4.6.
You can play around with ZooKeeper's session timeout configuration to achieve the desired behavior. The ZooKeeper server deletes the ephemeral nodes of a session after not receiving any heartbeat from the client for the session timeout duration.
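With Curator, the session timeout is passed when the client is built. A minimal sketch (the connect string and timeout values are placeholders; the requested value is negotiated with the server and must fall within its minSessionTimeout/maxSessionTimeout bounds, which default to 2x and 20x the tickTime):

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class LongSessionClient {
    public static void main(String[] args) {
        // Placeholder connect string and timeouts -- adjust for your environment.
        CuratorFramework client = CuratorFrameworkFactory.builder()
                .connectString("zk1:2181,zk2:2181,zk3:2181")
                .sessionTimeoutMs(60_000)      // ephemeral nodes survive up to ~60s of disconnection
                .connectionTimeoutMs(15_000)
                .retryPolicy(new ExponentialBackoffRetry(1000, 3))
                .build();
        client.start();
    }
}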