What I found while using Pulsar is that BookKeeper ledgers get bigger and bigger over time until BookKeeper crashes. How can I fix this?
I didn't find a configuration in the documentation that I could change.
And I got these alarms:
[ReplicationWorker] WARN org.apache.bookkeeper.replication.ReplicationWorker - BKNotEnoughBookiesException while replicating the fragment
org.apache.bookkeeper.client.BKException$BKNotEnoughBookiesException: Not enough non-faulty bookies available
I want to be able to control the size of ledgers so that BookKeeper keeps working properly.
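For what it's worth, ledger rollover (and therefore ledger size) is normally controlled on the Pulsar broker side rather than in BookKeeper itself. A hedged sketch of the relevant broker.conf entries, with names and defaults that may differ between Pulsar versions and should be checked against your version's documentation:

managedLedgerMaxEntriesPerLedger=50000
managedLedgerMinLedgerRolloverTimeMinutes=10
managedLedgerMaxLedgerRolloverTimeMinutes=240

Smaller ledgers roll over (and can be deleted once retention allows) sooner, but they do not by themselves reduce total disk usage, which is governed by retention and TTL.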
I'm sending data daily to my ELK stack via https://metacpan.org/pod/Search::Elasticsearch::Client::7_0::Bulk
Sometimes, and more often recently, I receive a "Data too large" error. The first part of my data is received, but after this error my sending script stops and I end up with incomplete data.
As far as I understand (correct me if I'm wrong), this happens when my stack is experiencing memory issues while processing the data it has already received. I assume that after some time I could send the rest of the data, because the next day the same thing happens: the first part of my data is processed, the rest rejected with "Data too large".
I saw that I can add an "on_error" callback, but I have no clue what I could do in it. My idea would be to implement a delay and retry after some time.
Can anyone give me a hint on how to achieve that?
Are there any ideas on how to avoid the issue in the first place? I already increased the heap space some time ago, but after two months the issue reoccurred.
You'd need to check your Elasticsearch logs and the full response that Elasticsearch sends back (e.g. was it a 429?). However, heap pressure can cause this, and you'd probably need to dig into why you are experiencing that.
The other option is to reduce the size of the requests you are sending.
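If the root cause turns out to be transient back-pressure (e.g. a 429), the delay-and-retry idea from the question could look roughly like the sketch below. It is shown in Java purely for illustration; the original script uses the Perl Search::Elasticsearch bulk helper, where the equivalent logic would live in the on_error callback, and the attempt count and delays are made-up values.

import java.util.concurrent.Callable;

public final class RetryWithBackoff {

    // Retry an action with exponential backoff: wait 1s, 2s, 4s, ... between attempts.
    public static <T> T retry(Callable<T> action, int maxAttempts, long baseDelayMs) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return action.call();
            } catch (Exception e) {        // e.g. a "Data too large" style rejection
                last = e;
                Thread.sleep(baseDelayMs * (1L << (attempt - 1)));
            }
        }
        throw last;                        // give up after maxAttempts
    }
}

The important detail is to re-send only the rejected chunk, not the whole day's payload, so that the already-accepted part is not sent twice.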
Update: Remembering my "experience" with Java, I simply restarted my ELK stack and the next import went through smoothly.
So despite the fact that 512m of memory seems a bit low, it worked after a restart. I will check again today and in the coming days.
Increase memory
Schedule a nightly restart
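For the "increase memory" route, the Elasticsearch heap is set in its jvm.options file; the path and values below are illustrative, and -Xms and -Xmx should be kept equal and within the host's available RAM:

# config/jvm.options (or /etc/elasticsearch/jvm.options on package installs)
-Xms1g
-Xmx1g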
I am going through the documentation, and there seem to be a lot of moving parts with respect to message processing, like exactly-once processing and at-least-once processing, and the settings are scattered here and there. There doesn't seem to be a single place that documents, even roughly, the properties that need to be configured for exactly-once and at-least-once processing.
I know there are many moving parts involved and that it always depends. However, as I mentioned before, what are the settings that need to be configured, at a minimum, to provide exactly-once, at-most-once, and at-least-once processing?
You might be interested in the first part of the Kafka FAQ, which describes some approaches for avoiding duplication during data production (i.e. on the producer side):
Exactly once semantics has two parts: avoiding duplication during data production and avoiding duplicates during data consumption.
There are two approaches to getting exactly once semantics during data production:
1. Use a single-writer per partition and every time you get a network error check the last message in that partition to see if your last write succeeded
2. Include a primary key (UUID or something) in the message and deduplicate on the consumer.
If you do one of these things, the log that Kafka hosts will be duplicate-free. However, reading without duplicates depends on some co-operation from the consumer too. If the consumer is periodically checkpointing its position then if it fails and restarts it will restart from the checkpointed position. Thus if the data output and the checkpoint are not written atomically it will be possible to get duplicates here as well. This problem is particular to your storage system. For example, if you are using a database you could commit these together in a transaction. The HDFS loader Camus that LinkedIn wrote does something like this for Hadoop loads. The other alternative that doesn't require a transaction is to store the offset with the data loaded and deduplicate using the topic/partition/offset combination.
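As a rough illustration of that last approach (storing the offset alongside the loaded data and deduplicating on the topic/partition/offset combination), here is a hedged Java consumer sketch; the topic name, group id, and the in-memory map standing in for a real store with idempotent upserts are all assumptions for the example:

import java.time.Duration;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class OffsetDedupLoader {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "offset-dedup-loader");
        props.put("enable.auto.commit", "false"); // checkpoint only after the data is stored
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        Map<String, String> sink = new HashMap<>(); // stand-in for a store keyed by topic/partition/offset

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("events"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> r : records) {
                    // Re-processing the same record after a crash overwrites the same key
                    // instead of producing a duplicate row.
                    String dedupKey = r.topic() + "-" + r.partition() + "-" + r.offset();
                    sink.put(dedupKey, r.value());
                }
                consumer.commitSync(); // commit the position only after the batch is persisted
            }
        }
    }
}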
My team was recently considering different message brokers to use for our project. We ended up picking Apache Pulsar, but the question applies to others (e.g. Kafka) as well. Our requirement is to track the total number of messages and bytes sent to each subscriber for billing purposes.
I was reading the metrics documentation and was surprised to see that Pulsar doesn't track this; I checked Kafka and the result was the same.
My understanding of this subject is minimal, so is this some kind of anti-pattern?
I understand that counter values like this never go down and, for our use case, should not be reset, leading to potential (eventually certain) overflows. But to me this could be solved by using something like a histogram in Prometheus (the metrics format used by Pulsar). I am actually thinking about implementing such functionality, but am I wrong, and is there a better solution for our purpose?
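If we do implement it ourselves, a per-subscription counter is arguably the more natural Prometheus type for monotonically increasing totals, since rate() and increase() queries already handle counter resets. A minimal sketch with the Prometheus Java simpleclient; the metric and label names are our own invention, not anything Pulsar exposes:

import io.prometheus.client.Counter;

public class SubscriptionBillingMetrics {

    private static final Counter MESSAGES_OUT = Counter.build()
            .name("subscription_messages_out_total")
            .help("Messages delivered per subscriber")
            .labelNames("subscription")
            .register();

    private static final Counter BYTES_OUT = Counter.build()
            .name("subscription_bytes_out_total")
            .help("Bytes delivered per subscriber")
            .labelNames("subscription")
            .register();

    // Call wherever the dispatch path hands a message to a subscriber.
    public static void recordDelivery(String subscription, long payloadBytes) {
        MESSAGES_OUT.labels(subscription).inc();
        BYTES_OUT.labels(subscription).inc(payloadBytes);
    }
}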
I use Kafka Streams 2.1 and created the following stream, using the Suppressed feature, to process the aggregation of each whole minute:
originStream
        // 1-minute tumbling windows, with a 500 ms grace period for late records
        .windowedBy(TimeWindows.of(Duration.ofSeconds(60)).grace(Duration.ofMillis(500)))
        .aggregate(factory::createAggregation,
                   (k, v, a) -> a.aggregate(v),
                   materialized.withLoggingDisabled())
        // emit each window only once, after it closes
        .suppress(Suppressed.untilWindowCloses(Suppressed.BufferConfig.unbounded()))
        .toStream();
The rate of messages I receive is about 200 per second.
After a short time I see the GC starting to work very hard, and sometimes I get OOM errors.
Since I use a heap of 2GB and a record will not take more than 1KB, it is clear to me that something is wrong - there shouldn't be enough messages in a one-minute window to blow up a 2GB heap.
So I took a heap dump, in which I see 5 InMemoryTimeOrderedKeyValueBuffer objects taking more than 300MB each (more than 1.5GB in total).
I dug some more into one of them and saw that the smallest/highest timestamps in its sortedMap were 1,575,458,160,000/1,575,481,800,000. This means the buffer holds messages spanning 23,640,000 ms, i.e. 394 minutes.
To my understanding the buffer was supposed to be flushed so that only the last minute consumes memory - all other windows should have been evicted.
Am I doing something wrong?
Any help would be appreciated.
The problem should not be suppress() but the aggregation state store. By default, it has a retention time of 1 day. You can reduce the retention time by passing Materialized.withRetention(...) into aggregate().
I am surprised that your heap dump shows InMemoryTimeOrderedKeyValueBuffer though, because this is the store used by suppress(). Hence, I am not 100% sure whether reducing the retention time will fix the issue.
Btw: there are a few bugs in suppress() in the 2.1 version that are only fixed in the 2.3 release, and thus it's highly recommended to upgrade to 2.3 if you use suppress().
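A minimal sketch of the withRetention(...) suggestion, where the store name and the Aggregation value type are illustrative assumptions and the retention value is only an example (it must be at least the window size plus the grace period):

import java.time.Duration;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.WindowStore;

// Replaces the old "materialized" passed into aggregate(...)
Materialized<String, Aggregation, WindowStore<Bytes, byte[]>> materialized =
        Materialized.<String, Aggregation, WindowStore<Bytes, byte[]>>as("minute-agg-store")
                .withRetention(Duration.ofMinutes(5))   // far below the 1-day default
                .withLoggingDisabled();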
I've changed the BufferConfig to use a max-bytes bound:
Suppressed.BufferConfig.unbounded().withMaxBytes(10_000_000)
and that seemed to solve the problem. I looked at the code and I don't understand why, because as I read it, it should now have thrown an exception, but it doesn't.
So I still don't understand something here, but the problem is solved for now.
After that I applied Matthias J. Sax's suggestions too, just to be even safer (thanks).
Edit:
It happened again twice today. This means that what I did did not fix the problem (although it may have changed its frequency).
Right now, I have no solution for this problem.
We're using the latest master build (at the time of this writing: https://github.com/linkedin/Burrow/commit/12e681a3a8a61f84f17677996dc3e6a2b79fac41).
Our Kafka brokers are running 1.1.0.
We recently switched from https://github.com/Morningstar/kafka-offset-monitor to Burrow, because we're adding authorization to our clusters.
Now, most of our consumer lags are 0 most of the time according to Burrow, whereas on kafka-offset-monitor they were usually around 1K - 100K (both are fine from our point of view).
For reasons unknown to us, the consumer lag "jumps", e.g. from 0 to 1.4 billion(!), from one minute to the next, and back again after another minute. We have about 20 consumers on our main topic, and all of their lags jump, but by different amounts. Some "only" jump from 1K to 1M, others from 0 to the billions described above.
Is anybody else seeing this?
Is there a known reason, or do we have to adjust our config? We didn't change anything about the default config for the evaluation or notifications...
We use https://github.com/rgannu/burrow-graphite to report to Graphite, and our alerting is based on those metrics...
Any help is appreciated