Function Exceptions in Pulsar (PUB-SUB)

I need help with Pulsar Function Exceptions: what are User Function Exceptions, what happens if their Rate/Count is high, and what could cause a high Rate/Count?

Related

Kafka Streams: How bad is it to have side effects in filter, map?

If it depends, what side effects are OK and which are definitely BAD?
My situation is that it feels more natural to filter out some events, log them, and increment a metric (an HTTP call) in a single function passed to filter. However, the documentation says to put side effects such as logging in peek and foreach, but doesn't explain why.
The main reason against any external API calls is that many Streams API methods are time-sensitive. If you do too much work, the consumer group behind the topology will fail to heartbeat and trigger a rebalance, halting the data flow. Even peek/foreach run on that internal consumer and can hit the same problem.
That being said, HTTP/DB calls without a short client timeout can be bad. Logging or interfacing with local system resources is fine.
If you really need external TCP/UDP calls, then stream/branch the data to some output topic and use Kafka Connect for that, as sketched below.
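A minimal sketch of that layout, assuming the kafka-streams-scala wrapper (package paths as of Kafka 2.6+), hypothetical topic names (events, events.dropped, events.valid), and a placeholder isValid predicate:

import org.apache.kafka.streams.Topology
import org.apache.kafka.streams.scala.ImplicitConversions._
import org.apache.kafka.streams.scala.StreamsBuilder
import org.apache.kafka.streams.scala.serialization.Serdes._

object SideEffectFreeFilter {
  // Placeholder predicate -- stands in for your real filtering logic.
  private def isValid(value: String): Boolean = !value.contains("error")

  def topology(): Topology = {
    val builder = new StreamsBuilder()
    val events  = builder.stream[String, String]("events")

    // The filter itself stays pure and fast; the cheap local side effect
    // (logging) lives in peek, and the expensive HTTP work is deferred to
    // Kafka Connect reading from a dedicated topic.
    events
      .filterNot((_, v) => isValid(v))
      .peek((k, _) => println(s"dropped record with key $k"))
      .to("events.dropped")

    events.filter((_, v) => isValid(v)).to("events.valid")
    builder.build()
  }
}

A Connect HTTP sink (or similar) can then make the external call at its own pace, with its own retries, without ever stalling the Streams consumer.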
In addition to the timing considerations that @OneCricketeer mentioned, you may want to consider whether you are using exactly-once semantics (EOS) or at-least-once semantics (ALOS).
With EOS, the Kafka-visible results of processing happen once per record (an external call such as HTTP can still be re-executed on failure and retry); with ALOS, there's a chance that a given record may be processed multiple times.
You can read more about that here: https://kafka.apache.org/documentation/#semantics
https://docs.confluent.io/platform/current/streams/concepts.html#processing-guarantees
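For context, the processing guarantee is a single Streams config switch. A minimal sketch, with placeholder application id and broker address (exactly_once_v2 is the current name for EOS):

import java.util.Properties
import org.apache.kafka.streams.StreamsConfig

object EosConfig {
  def props(): Properties = {
    val p = new Properties()
    p.put(StreamsConfig.APPLICATION_ID_CONFIG, "filter-app")        // placeholder id
    p.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092") // placeholder broker
    // exactly_once_v2 needs brokers >= 2.5. It covers Kafka reads, state
    // updates, and writes in one transaction; an external HTTP call made
    // during processing is outside the transaction and can still repeat.
    p.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE_V2)
    p
  }
}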

Flink: posting messages to an external API: custom sink or lambda function

We are developing a pipeline in Apache Flink (DataStream API) that needs to send its messages to an external system using API calls. Sometimes such an API call will fail; in that case our message needs some extra treatment (and/or a retry).
We had a few options for doing this:
We map() our stream through a function that does the API call and returns its result, so we can act on failures afterwards (this was my original idea, and why I did it: flink scala map with dead letter queue)
We write a custom sink function that does the same.
However, both options have problems, I think:
With the map() approach I won't be able to get exactly-once (or at-most-once, which would also be fine) semantics, since Flink is free to re-execute parts of the pipeline after recovering from a crash in order to bring the state up to date.
With the custom sink approach I can't get a stream of failed API calls for further processing: a sink is a dead end from the Flink app's point of view.
Is there a better solution for this problem?
The async I/O operator is designed for this scenario. It's a better starting point than a map.
There's also been recent work to develop a generic async sink; see FLIP-171. This has been merged into master and will be released as part of Flink 1.15.
One of those should be your best way forward. Whatever you do, don't do blocking I/O in your user functions. That causes backpressure and often leads to performance problems and checkpoint failures.
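A minimal sketch of the async I/O route in the Scala DataStream API (Flink 1.x), where callApi is a hypothetical stand-in for a real non-blocking HTTP client and Either keeps failures in the stream as ordinary data:

import java.util.concurrent.TimeUnit
import scala.concurrent.{ExecutionContext, Future}
import scala.util.{Failure, Success}
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.scala.async.{AsyncFunction, ResultFuture}

class ApiCall extends AsyncFunction[String, Either[String, String]] {
  @transient private implicit lazy val ec: ExecutionContext = ExecutionContext.global

  // Placeholder for the real async client call.
  private def callApi(payload: String): Future[String] = Future.successful(payload)

  override def asyncInvoke(input: String,
                           resultFuture: ResultFuture[Either[String, String]]): Unit =
    callApi(input).onComplete {
      case Success(resp) => resultFuture.complete(Iterable(Right(resp)))
      case Failure(err)  => resultFuture.complete(Iterable(Left(s"$input: ${err.getMessage}")))
    }
}

object AsyncApiJob {
  def main(args: Array[String]): Unit = {
    val env    = StreamExecutionEnvironment.getExecutionEnvironment
    val input  = env.fromElements("a", "b", "c")
    val result = AsyncDataStream.unorderedWait(input, new ApiCall, 5, TimeUnit.SECONDS, 100)

    // Failures stay a live stream: retry them, or route them to a DLQ topic.
    result.filter(_.isLeft).print("failed")
    result.filter(_.isRight).print("ok")
    env.execute("async api sketch")
  }
}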

Must MSMQ queues be transactional?

I've just recently gotten into using Rebus, and noticed that it always creates transactional MSMQ queues, resulting in heavy disk traffic (0.5-5 MB/sec). Is this intentional, and can something be done to avoid it?
It's correctly observed that Rebus (with its default MsmqMessageQueue transport) always creates transactional MSMQ queues. It will also refuse to work with non-transactional input queues, throwing an error at startup if you have created a non-transactional queue yourself and attempt to use it.
This is because the philosophy in Rebus revolves around the idea that messages are important data, just as important as the data in your SQL Server or whichever database you're using.
And yes, the way MSMQ implements durability is that messages are written to disk when they're sent, so that probably explains the disk activity you're seeing.
If you have a different opinion as to how you'd like your system to treat its messages, there's nothing that prevents you from replacing Rebus' transport with something that can work with non-transactional MSMQ. Keep in mind though, that all of Rebus' delivery guarantees will be void if you do so ;)
We made the very same observation; the annoying part is that we see 300-500 KB/sec of disk writes even when there are no messages on the queue. It seems that merely polling the queue causes constant writes to disk.
Gian Maria.

Does MongoDB fail silently if I don't check error codes?

I'm wondering whether any persistence failure will go undetected if I don't check error codes. If so, what's the right way to write fast (asynchronously) while still detecting errors?
If you don't check for errors, your update is only fire-and-forget, and you'll indeed miss any errors that arise. Please see MongoDB WriteConcerns for the available write modes in MongoDB (sorry, I always fail to find the official, non-driver-related documentation; I really should bookmark it).
So with NORMAL you'll get at least connectivity errors; with NONE, no exceptions at all. If you want to be informed of exceptions, you have to use one of the other modes, which differ only in the persistence guarantee they give you.
You can't detect errors when writing asynchronously, as that runs against the intention: the connection that sent the write operation may already be closed or reused, so the error can't be sent back through it, and only your application code knows what to do when a write fails. Since MongoDB doesn't offer a callback to asynchronously inform you about the outcome of updates, you'll have to wait until the write has reached a given stage.
So the fastest, but least reliable, of the acknowledged modes is SAFE, where the write has only reached memory. JOURNAL gives you the guarantee that it was written at least to the journal on disk. With FSYNC the changes are flushed to the data files on disk. REPLICA means that at least two replicas have written it, and MAJORITY that more than half of your replicas have (with three replicas, which should be the default, these two don't differ). A sketch with the current driver follows.
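For orientation, here is how those guarantees look with the modern Java driver, called from Scala (connection string and names are placeholders). The legacy constants map roughly as NONE/NORMAL -> UNACKNOWLEDGED, SAFE -> ACKNOWLEDGED, JOURNAL/FSYNC -> JOURNALED, REPLICA/MAJORITY -> W2/MAJORITY:

import com.mongodb.{MongoException, WriteConcern}
import com.mongodb.client.MongoClients
import org.bson.Document

object WriteConcernDemo {
  def main(args: Array[String]): Unit = {
    val client = MongoClients.create("mongodb://localhost:27017") // placeholder URI
    val coll = client
      .getDatabase("test")
      .getCollection("events")
      .withWriteConcern(WriteConcern.MAJORITY.withJournal(true))

    // insertOne blocks until the write concern is satisfied and
    // throws on failure, which is exactly how errors become visible.
    try coll.insertOne(new Document("status", "ok"))
    catch { case e: MongoException => System.err.println(s"write failed: ${e.getMessage}") }
    finally client.close()
  }
}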
The only way I see to get something like asynchronous writes is a separate thread that performs all write operations synchronously. That thread could carry out the actual update and, on failure, invoke a handler class that performs whatever recovery is needed. But I don't think that is good application design.
Yes, depending on the error, it can fail silently if you don't check the returned error code. If you want to detect errors, you have to wait for the write result. Your only other option would be for your app to occasionally tell the user "oops, remember when I acted like I saved your data a moment ago? Well, not really."

Scala-Actors, is it good practice to avoid memory leaks?

It all started with the question "Scala, Actors, what happens to unread inbox messages?". I was thinking about how to avoid such problems in a large system with many actors.
I found myself writing something like this:
react {
  // ... all real cases above ...
  case any: AnyRef => logMessageWithoutCase(any)
}
Is this a good way to avoid memory leaks, or does it have side effects?
UPDATE 1: Thanks to @Alexey Romanov and @Luigi Plinge. What if the system has some Spam actor? Something like this:
react {
  // ... all real cases above ...
  case msg: Any => Spam ! msg
}
And finally, Spam will log the message or save it to a database. I think this is a more intuitive solution; a sketch of such a Spam actor is below.
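A minimal sketch of that Spam actor with the old scala.actors API (Scala 2.11 and earlier; the library has since been removed):

import scala.actors.Actor
import scala.actors.Actor._

object Spam extends Actor {
  def act(): Unit = loop {
    react {
      // Everything forwarded here is logged off the hot path;
      // a real version might batch writes to a database instead.
      case msg => println("unexpected message: " + msg)
    }
  }
}

object SpamDemo {
  def main(args: Array[String]): Unit = {
    Spam.start()
    Spam ! "whatever" // forwarded from the catch-all case of other actors
  }
}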
You might also investigate using Akka actors, which do not suffer from this problem because they enforce in-sequence message processing. Here, unhandled messages get passed to the unhandled() callback, which by default logs them and, depending on the Akka version, throws an exception or publishes an UnhandledMessage event.
Another thing to consider is that Akka actors will replace the current scala.actors package in the medium term. This will be especially beneficial for large systems with many actors, because the current Scala actors are not as lightweight as Akka actors.
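A minimal classic-Akka sketch of that callback (actor and message names are made up):

import akka.actor.{Actor, ActorSystem, Props}

class Worker extends Actor {
  def receive: Receive = {
    case s: String => println(s"handled: $s")
  }
  // Anything receive doesn't match lands here instead of piling up
  // in the mailbox, so it can be logged, counted, or forwarded.
  override def unhandled(message: Any): Unit = {
    println(s"unhandled: $message")
    super.unhandled(message) // keep the default behavior as well
  }
}

object UnhandledDemo {
  def main(args: Array[String]): Unit = {
    val system = ActorSystem("demo")
    val worker = system.actorOf(Props[Worker](), "worker")
    worker ! "hello"
    worker ! 42 // not matched by receive -> unhandled()
    system.terminate()
  }
}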
Logging messages is a side effect :) Since logging likely needs to write to disk or a database, having many unmatched messages may degrade performance. Otherwise, yes, this is a good way to avoid memory leaks.