zookeeper write failure handling - apache-zookeeper

I have a question about how ZooKeeper handles write failures. Let us assume there are 3 nodes and a write succeeds on 1 node but fails on the other 2. I know ZooKeeper will return an error, but what happens to the successful write on that one node? Is it rolled back, or are the changes persisted with the expectation of being replicated to the other nodes eventually?

ZooKeeper uses an atomic messaging system. It's very nicely explained in the following article:
ZooKeeper uses a variation of the two-phase-commit protocol for replicating transactions to followers. When the leader receives a change update from a client, it generates a transaction with sequence number c and the leader’s epoch e and sends the transaction to all followers. A follower adds the transaction to its history queue and sends an ACK to the leader. When the leader receives ACKs from a quorum, it sends a COMMIT for that transaction to the quorum. A follower that accepts a COMMIT commits the transaction, unless c is higher than any sequence number in its history queue; in that case it waits until it has received COMMITs for all its earlier (outstanding) transactions before committing.
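To illustrate the quorum step described above, here is a minimal sketch of the leader-side ACK counting (the class and method names are made up for illustration; this is not ZooKeeper's actual code):

// Hypothetical illustration of the quorum logic described above; not ZooKeeper's real classes.
import java.util.HashSet;
import java.util.Set;

class QuorumTracker {
    private final int ensembleSize;                 // e.g. 3 servers
    private final Set<Long> acks = new HashSet<>();

    QuorumTracker(int ensembleSize) { this.ensembleSize = ensembleSize; }

    // Called by the leader each time a follower ACKs the proposed transaction.
    synchronized boolean ack(long serverId) {
        acks.add(serverId);
        // COMMIT is sent only once a majority has acknowledged; a write that
        // reached a single server is never committed on its own.
        return acks.size() > ensembleSize / 2;
    }
}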
The official documentation can also be very useful.

Related

Is message deduplication essential on the Kafka consumer side?

Kafka documentation states the following as the top scenario:
To process payments and financial transactions in real-time, such as
in stock exchanges, banks, and insurances
Also, regarding the main concepts, right at the very top:
Kafka provides various guarantees such as the ability to process
events exactly-once.
It’s funny the document says:
Many systems claim to provide "exactly once" delivery semantics, but
it is important to read the fine print, most of these claims are
misleading…
It seems obvious that payments/financial transactions must be processed "exactly once", but the rest of the Kafka documentation doesn't make it obvious how this should be accomplished.
Let’s focus on the producer/publisher side:
If a producer attempts to publish a message and experiences a network
error it cannot be sure if this error happened before or after the
message was committed. This is similar to the semantics of inserting
into a database table with an autogenerated key. … Since 0.11.0.0, the
Kafka producer also supports an idempotent delivery option which
guarantees that resending will not result in duplicate entries in the
log.
KafkaProducer only ensures that it doesn’t incorrectly resubmit messages (resulting in duplicates) itself. Kafka cannot cover the case where the client application code crashes (along with the KafkaProducer) and, after restarting, cannot tell whether it previously invoked send (or commitTransaction in the case of a transactional producer), which means that an application-level retry will result in duplicate processing.
Exactly-once delivery for other destination systems generally
requires cooperation with such systems, but Kafka provides the offset
which makes implementing this feasible (see also Kafka Connect).
The above statement is only partially correct: while Kafka exposes offsets on the consumer side, that alone doesn’t make exactly-once feasible on the producer side.
The Kafka consume-process-produce loop enables exactly-once processing by leveraging sendOffsetsToTransaction, but again it cannot rule out duplicates from the first producer in the chain.
The provided official demo for EOS (exactly-once semantics) only covers the consume-process-produce case.
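For context, the consume-process-produce loop referred to above looks roughly like this (a condensed sketch; the topic name and the process() step are placeholders, and the consumer is assumed to use read_committed with auto-commit disabled):

producer.initTransactions();
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
    if (records.isEmpty()) continue;
    producer.beginTransaction();
    try {
        Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
        for (ConsumerRecord<String, String> record : records) {
            producer.send(new ProducerRecord<>("output-topic", record.key(), process(record.value())));
            offsets.put(new TopicPartition(record.topic(), record.partition()),
                        new OffsetAndMetadata(record.offset() + 1));
        }
        // The consumed offsets are committed in the same transaction as the produced
        // records, so outputs and offsets become visible (or are aborted) together.
        producer.sendOffsetsToTransaction(offsets, consumer.groupMetadata());
        producer.commitTransaction();
    } catch (ProducerFencedException e) {
        producer.close();            // another instance with the same transactional.id took over
        break;
    } catch (KafkaException e) {
        producer.abortTransaction(); // the whole batch will be re-polled and re-processed
    }
}

Note that this protects the process-produce step only; as stated above, it does not remove the possibility of duplicates written by the first producer in the chain.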
Solutions involving DB transaction log readers which read already committed transactions, also cannot be sure if they will produce duplicate messages in case they crash.
There is no support for a distributed transaction (XA) involving a database and the Kafka producer.
Does all of this mean that, in order to ensure exactly-once processing for payments and financial transactions (Kafka's top use case!), we absolutely must perform business-level message deduplication on the consumer side, in spite of the Kafka transport-level “guarantees”/claims?
Note: I’m aware of:
Kafka Idempotent producer
but I would like a clear answer on whether deduplication is unavoidable on the consumer side.
You must deduplicate on the consumer side, since a rebalance within a consumer group can cause events to be processed more than once, depending on the fetch size and commit interval parameters.
If a consumer exits without acknowledging back to the broker, Kafka will assign those events to another consumer in the group. For example, if you are pulling a batch of 5 events and the consumer dies or restarts after processing the first 3 (the external API/DB call fails, or in the worst case your server runs out of memory and crashes), it exits abruptly without committing back to the broker. The same batch then gets assigned to another consumer in the group (rebalance), which receives the same events again, resulting in re-processing of the same set of records, i.e. duplication. A good read here: https://quarkus.io/blog/kafka-commit-strategies/
You can make use of Kafka's internal state store (Kafka Streams) for deduplication. There is no offset/partition tracking here; it is a kind of cache (persistent and time-bound, kept on the cluster).
In my case we push the correlationId (a unique business identifier in the incoming event) into it on successful processing, and all new events are checked against it before processing to make sure they are not duplicates. Enabling a state store will create additional internal topics in the Kafka cluster, just an FYI.
https://kafka.apache.org/10/documentation/streams/developer-guide/processor-api.html#state-stores
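A rough sketch of that idea with the Processor API (the store name "dedup-store" and the use of the record key as the correlationId are just examples from my description above, not fixed names):

import org.apache.kafka.streams.processor.api.Processor;
import org.apache.kafka.streams.processor.api.ProcessorContext;
import org.apache.kafka.streams.processor.api.Record;
import org.apache.kafka.streams.state.KeyValueStore;

// Drops records whose correlationId has already been seen; here the record key carries
// the correlationId, in practice you would extract it from the event payload.
public class DeduplicationProcessor implements Processor<String, String, String, String> {

    private ProcessorContext<String, String> context;
    private KeyValueStore<String, Long> seen;

    @Override
    public void init(ProcessorContext<String, String> context) {
        this.context = context;
        this.seen = context.getStateStore("dedup-store");
    }

    @Override
    public void process(Record<String, String> record) {
        String correlationId = record.key();
        if (seen.get(correlationId) != null) {
            return;                                   // duplicate: already processed, drop it
        }
        context.forward(record);                      // first time seen: pass it downstream
        seen.put(correlationId, record.timestamp());  // remember it for future checks
    }
}

The store itself has to be registered on the topology (for example with Stores.persistentKeyValueStore), which is what creates the extra internal changelog topic mentioned above.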

Kafka transactions - why do I need to replicate?

I am using Kafka for a circular buffer of the last 24 hours of events.
I have 4 brokers that run on ephemeral cloud instances, so the disk is local; if a broker dies I lose the data for that broker. I can start the broker again and it can replicate the data from another broker. I have replicas set up for my topic and the offsets topic:
default.replication.factor=2
offsets.topic.replication.factor=2
I'm using transactions to commit the new offsets + new records atomically. My app is side-effect free, so if the transaction fails, I can poll again, get the same records, repeat the processing and produce the same resultant events.
These are the defaults for the transaction durability properties:
transaction.state.log.min.isr=2
transaction.state.log.replication.factor=3
I feel that in my setup I can set both of these properties to 1, i.e. no replication/durability (as my app is side-effect free). Yet I can't shake the niggling feeling that I'm wrong.
Am I wrong? Why are the transactions durable in the first place - what scenario does the durability help with?

detail of leader retry when follower fails in raft

I have read the Raft paper, and I am confused by:
If followers crash or run slowly, or if network packets are lost, the
leader retries AppendEntries RPCs indefinitely (even after it has
responded to the client) until all followers eventually store all log
entries.
which is written at the beginning of Section 5.3, Log Replication.
To make my confusion clearer, I have split it into two questions.
Question 1. Should the leader retry in all three of the failure situations below?
reply false if term < currentTerm (in Figure 2)
reply false if log doesn’t contain an entry at prevLogIndex whose term matches prevLogTerm (in Figure 2)
rpc error or timeout
Question 2. If the leader should retry in some situation, will the leader process block until all followers reply with success?
Below is my attempt:
In the first failure case, there is no need for the leader to retry.
In the second failure case, the leader should retry and adjust the follower's nextIndex until the follower replies with success. Also, the leader will be blocked before accepting the next client request.
In the third failure case, there is no need for the leader to retry; the failed entry can be appended again with the next client request.
The quote only describes that the leader will retry when the follower doesn't give a response for whatever reason (e.g. a follower issue or a network issue).
In scenarios 1 and 2 you listed under Question 1, followers do give a rejection response to the leader, so that's different from your initial confusion.
Now to answer what will happen in these scenarios:
If the leader's current term X < the follower's current term Y, the leader steps down to follower mode, because there must be another leader for term Y.
The follower rejects the AppendEntries RPC because it doesn't contain a log entry at the prev index. In that case, the leader should force the follower to replicate the leader's log before appending new entries, by decrementing that follower's nextIndex and retrying. The mechanism is described in the Handling of inconsistency section.
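A compact sketch of how the leader reacts to an AppendEntries reply under these rules (the field names follow Figure 2 of the paper; the helper methods are hypothetical stubs, not a full Raft implementation):

// Leader-side handling of an AppendEntries response; illustration only.
class RaftLeader {
    long currentTerm;
    long[] nextIndex;   // for each follower: index of the next log entry to send
    long[] matchIndex;  // for each follower: highest log index known to be replicated

    void onAppendEntriesReply(int follower, long replyTerm, boolean success, long followerMatchIndex) {
        if (replyTerm > currentTerm) {
            currentTerm = replyTerm;
            becomeFollower();                        // case 1: our term is stale, step down
            return;
        }
        if (!success) {
            // case 2: follower is missing the entry at prevLogIndex/prevLogTerm,
            // so back up nextIndex and resend earlier entries
            nextIndex[follower] = Math.max(1, nextIndex[follower] - 1);
            sendAppendEntries(follower);
            return;
        }
        matchIndex[follower] = followerMatchIndex;
        nextIndex[follower] = followerMatchIndex + 1;
        maybeAdvanceCommitIndex();                   // commits as soon as a majority matches
    }

    void becomeFollower() { /* ... */ }
    void sendAppendEntries(int follower) { /* ... */ }
    void maybeAdvanceCommitIndex() { /* ... */ }
}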
The answer to Question 2 is very simple: the leader can proceed and update the state machine as soon as a majority has returned success. Retrying on the remaining followers is for data consistency purposes; it doesn't slow down the system.

How to handle various failure conditions in Kafka

Issue we were facing:
In our system we were logging a ticket in the database with status NEW and also putting it on the Kafka queue for further processing. The processors pick those tickets from the Kafka queue, do the processing and update the status accordingly. We found that some tickets are left in the NEW state forever. So we were guessing whether tickets are failing to get produced to the queue or are not getting consumed.
Message loss / duplication scenarios (and some other related points):
So I started to dig exhaustively to know in what all ways we can face message loss and duplication in Kafka. Below I have listed all possible message loss and duplication scenarios that I can find in this post:
How data loss can occur in different approaches to handle all replicas down
Handle by waiting for leader to come online
Messages sent between the time all replicas went down and the time the leader comes back online are lost.
Handle by electing new broker as a leader once it comes online
If new broker is out of sync from previous leader, all data written between the
time where this broker went down and when it was elected the new leader will be
lost. As additional brokers come back up, they will see that they have committed
messages that do not exist on the new leader and drop those messages.
How data loss can occur when leader goes down, while other replicas may be up
In this case, the Kafka controller will detect the loss of the leader and elect a new leader from the pool of in sync replicas. This may take a few seconds and result in LeaderNotAvailable errors from the client. However, no data loss will occur as long as producers and consumers handle this possibility and retry appropriately.
When a consumer may fail to consume a message
If Kafka is configured to keep messages for a day and a consumer is down for a period of longer than a day, the consumer will lose messages.
Evaluating different approaches to consumer consistency
Message might not be processed when consumer is configured to receive each message at most once
Message might be duplicated / processed twice when consumer is configured to receive each message at least once
No message is processed multiple times or left unprocessed if consumer is configured to receive each message exactly once.
Kafka provides the below guarantees as long as you are producing to one partition and consuming from one partition. All guarantees are off if you are reading from the same partition using two consumers or writing to the same partition using two producers.
Kafka makes the following guarantees about data consistency and availability:
Messages sent to a topic partition will be appended to the commit log in the order they are sent,
a single consumer instance will see messages in the order they appear in the log,
a message is ‘committed’ when all in sync replicas have applied it to their log, and
any committed message will not be lost, as long as at least one in sync replica is alive.
Approach I came up with:
After reading several articles, I felt I should do the following:
If a message is not enqueued, the producer should resend it
For this, the producer should listen for an acknowledgement for each message sent. If no acknowledgement is received, it can retry sending the message
Producer should be async with callback:
As explained in the last example here.
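A minimal sketch of such an asynchronous send with a callback (the topic name, ticket fields and logger are placeholders):

ProducerRecord<String, String> record = new ProducerRecord<>("tickets", ticketId, payload);
producer.send(record, (metadata, exception) -> {
    if (exception != null) {
        // No acknowledgement from the broker: log it and decide whether to resend or alert
        log.error("Failed to produce ticket {}", ticketId, exception);
    } else {
        log.info("Ticket {} written to {}-{} at offset {}", ticketId,
                 metadata.topic(), metadata.partition(), metadata.offset());
    }
});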
How to avoid duplicates in case the producer retries sending
To avoid duplicates in the queue, set enable.idempotence=true in the producer configs. This makes the producer ensure that exactly one copy of each message is written. It requires the following properties to be set on the producer (see the snippet after this list):
max.in.flight.requests.per.connection<=5
retries>0
acks=all (obtain the ack only when all in-sync replicas have committed the message)
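Put together, the producer configuration would look roughly like this (broker address and serializers are placeholders):

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
// Idempotence requires acks=all, retries > 0 and max.in.flight.requests.per.connection <= 5
props.put("enable.idempotence", "true");
props.put("acks", "all");
props.put("retries", Integer.MAX_VALUE);
props.put("max.in.flight.requests.per.connection", 5);
KafkaProducer<String, String> producer = new KafkaProducer<>(props);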
Producer should be transactional
As explained here.
Set the transactional id to a unique id:
producerProps.put("transactional.id", "prod-1");
Because we've enabled idempotence, Kafka will use this transactional id as part of its algorithm to deduplicate any message this producer sends, ensuring idempotency.
Use the transaction semantics: init, begin, commit, close
As explained here:
producer.initTransactions();
try {
    producer.beginTransaction();
    producer.send(record1);
    producer.send(record2);
    producer.commitTransaction();
} catch (ProducerFencedException e) {
    // Fatal: another producer with the same transactional.id is active, so close this one
    producer.close();
} catch (KafkaException e) {
    // Recoverable error: abort so the transaction can be retried
    producer.abortTransaction();
}
Consumer should be transactional
consumerProps.put("isolation.level", "read_committed");
This ensures that the consumer doesn't read any transactional messages before the transaction completes.
Manually commit offset in consumer
As explained here
Process records and save offsets atomically
Say, by atomically saving both the record-processing output and the offsets to a database. For this we need to set auto-commit of the database connection to false and commit manually after persisting both the processing output and the offset. This also requires setting enable.auto.commit to false on the consumer.
Read the initial offset (say, the first read after consumer recovery) from the database
Seek the consumer to this offset and then read from that position.
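A condensed sketch of the last three points (manual commit off, atomic save of output plus offset, seek to the stored offset); the topic name, SQL helpers and process() are placeholders and error handling is omitted:

consumer.subscribe(List.of("tickets"), new ConsumerRebalanceListener() {
    public void onPartitionsRevoked(Collection<TopicPartition> partitions) { }
    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
        for (TopicPartition tp : partitions) {
            long stored = readOffsetFromDb(tp);   // e.g. SELECT "offset" FROM offsets WHERE topic=? AND partition=?
            consumer.seek(tp, stored);            // resume exactly where the database says we stopped
        }
    }
});

while (true) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
    for (ConsumerRecord<String, String> record : records) {
        dbConnection.setAutoCommit(false);
        saveProcessingResult(dbConnection, process(record));                          // business output
        saveOffset(dbConnection, record.topic(), record.partition(), record.offset() + 1);
        dbConnection.commit();                    // output and offset are committed atomically
    }
}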
Doubts I have:
(Some doubts might be basic and could be resolved by implementing the code, but I want the word of an experienced Kafka developer.)
Does the consumer need to read the offset from the database only for the initial read (i.e. the first read after consumer recovery) or for all reads? I feel it needs to read the offset from the database only on restarts, as explained here
Do we have to opt for manual partition assignment? Does this approach work only with automatic partition assignment off? I have this doubt because this example explains storing offsets in MySQL while specifying partitions explicitly.
Do we need both producer-side Kafka transactions and consumer-side database transactions (for storing offsets and processing records atomically)? I feel that for producer idempotence we need the producer to have a unique transactional id, and for that we need to use the Kafka transactional API (init, begin, commit). As a counterpart, the consumer also needs to set isolation.level to read_committed. However, can we ensure no message loss and no duplicate processing without using Kafka transactions? Or are they absolutely necessary?
Should we persist offsets to an external DB as explained above and here,
or send offsets to the transaction as explained here (also, I didn't get what exactly it means to send offsets to a transaction),
or follow the sync/async commit combo explained here?
I feel message loss / duplication scenarios 1 and 2 are handled by points 1 to 4 of the approach I explained above.
I feel message loss / duplication scenario 3 is handled by point 6 of the approach I explained above.
How do we implement the different consumer consistency approaches stated in message loss / duplication scenario 4? Is there any configuration for it, or does it need to be implemented in custom logic inside the consumer?
Message loss / duplication scenario 5 says: "Kafka provides the below guarantees as long as you are producing to one partition and consuming from one partition." Is that something to be concerned about while building a correct system?
Is any consideration in the approach I came up with above unnecessary/redundant? Also, did I miss any necessary consideration? Did I miss any message loss / duplication scenarios?
Is there any other standard / recommended / preferable approach to ensure no message loss and no duplicate processing than what I have thought of above?
Do I have to actually code the above approach using the Kafka APIs, or is there any high-level API built atop the Kafka API that makes it easy to ensure no message loss and no duplicate processing?
Looking at the issue we were facing (as stated at the very beginning), we were wondering whether we could recover any lost/unprocessed messages from the files in which Kafka stores messages. However, that isn't correct, right?
(Extremely sorry for such an exhaustive post, but I wanted to write a question that asks all the related questions in one place, allowing readers to build a big picture of how to build a system around Kafka.)

How ZooKeeper provides sequential consistency

In here
someone said:
"even if you read from a different follower every time, you'll never
see version 3 of the data after seeing version 4."
So if I have 3 nodes zookeeper quorum as below:
zk0 -- leader
zk1
zk2
Assume there is a value "3" in the quorum and I have a client connected to zk1. My client sends a write request (update "3" to "4"); zk0 (the leader) writes the value and subsequently receives the confirmation from zk1. My client can see the new value ("4"), because it is connected to zk1.
Now my question is: if I switch my client from zk1 to zk2 (the leader hasn't received the write confirmation from zk2 yet, so zk2 is behind the quorum), I will see the value as "3" rather than "4". Does that break sequential consistency?
ZooKeeper uses a special atomic messaging protocol called ZooKeeper Atomic Broadcast (ZAB), which ensures that the local replicas in the ensemble (the group of ZooKeeper servers) never diverge.
The ZAB protocol is atomic, so it guarantees that updates either succeed or fail.
In ZooKeeper every write goes through the leader; the leader generates a transaction id (called a zxid) and assigns it to the write request.
A zxid is a long (64-bit) integer split into two parts:
epoch
counter
The zxid represents the order in which the writes are applied on all replicas.
The epoch represents the changes in leadership over time. Epochs refer to the period during which a given server exercised leadership. During an epoch, a leader broadcasts proposals and identifies each one according to the counter.
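Concretely, the epoch lives in the high-order 32 bits of the zxid and the counter in the low-order 32 bits, so comparing two zxids as plain numbers orders them by epoch first and then by counter. A small illustration:

// Splitting a 64-bit zxid into its epoch (high 32 bits) and counter (low 32 bits).
long zxid = 0x0000000300000007L;   // example zxid: epoch 3, 7th proposal of that epoch

long epoch   = zxid >>> 32;        // 3
long counter = zxid & 0xFFFFFFFFL; // 7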
A write is considered successful if the leader receives the ack from the majority.
The zxid is used to keep servers synchronized and avoid the conflict you described.