I am facing the same issue as Kafka Streams Deserialization Handler: even after configuring LogAndContinueExceptionHandler, the corrupt messages still show up every time I restart the server.
It looks like this JIRA issue is still open and needs to be addressed to fix the problem you are describing: https://issues.apache.org/jira/browse/KAFKA-6502
Note that it only happens when the bad records arrive in a row: as soon as a good record comes in, the offset moves along. So, as a workaround, you could send a good record that does not cause an error, which moves the committed offset past the corrupt ones.
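For reference, here is a minimal sketch (in Java) of the handler configuration involved; the application id and bootstrap servers are placeholder values:

    import java.util.Properties;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.errors.LogAndContinueExceptionHandler;

    public class StreamsSetup {
        // Build a Streams configuration that skips records which fail to deserialize.
        static Properties streamsProps() {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");            // placeholder
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
            // Log and skip bad records instead of crashing. Until KAFKA-6502 is
            // fixed, the offset of a trailing run of bad records may never be
            // committed, so they can reappear after a restart.
            props.put(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
                    LogAndContinueExceptionHandler.class);
            return props;
        }
    }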
I want to return status = success after my Kafka consumer has read all the messages for the day, but the consumer must keep running all the time.
I tried to google this but found no solution; if you have any link or anything else that resolves it, please share it with me.
See https://docs.spring.io/spring-kafka/docs/current/reference/html/#idle-containers
While efficient, one problem with asynchronous consumers is detecting when they are idle. You might want to take some action if no messages arrive for some period of time.
You can configure the listener container to publish a ListenerContainerIdleEvent when some time passes with no message delivery. While the container is idle, an event is published every idleEventInterval milliseconds.
And
https://docs.spring.io/spring-kafka/docs/current/reference/html/#event-consumption
You can capture these events by implementing ApplicationListener, either a general listener or one narrowed to only receive this specific event. You can also use @EventListener.
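For example, here is a minimal sketch of such an event listener, assuming Spring Boot; the bean name, the log message, and the 30-second interval are illustrative:

    import org.springframework.context.event.EventListener;
    import org.springframework.kafka.event.ListenerContainerIdleEvent;
    import org.springframework.stereotype.Component;

    @Component
    public class IdleHandler {

        // Fires every idleEventInterval ms while no records arrive; enable it
        // with e.g. spring.kafka.listener.idle-event-interval=30000 in Boot.
        @EventListener
        public void onIdle(ListenerContainerIdleEvent event) {
            System.out.println("Container " + event.getListenerId()
                    + " idle for " + event.getIdleTime() + " ms");
        }
    }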
I am trying to subscribe to a Salesforce channel, and I was able to get messages from the channel.
The issue is that I am receiving the messages one after another, whereas I expected to receive them all together when bulk messages are published to the channel. I need to get the messages as a whole in the subscriber.
Let's say I publish 500 messages; I need to get all 500 messages at once on the subscriber, but instead I am getting one message after another.
I am using the following code:
    async with client:
        # subscribe to every Salesforce topic of interest
        for topic in get_topics(system='salesforce'):
            await client.subscribe(topic)
        # the client yields messages one at a time, as they arrive
        async for message in client:
            messages = message  # note: this overwrites `messages` with the latest message
The above code is called inside an async function.
I am not sure whether this is a bug in the library or whether the approach I follow is wrong; please let me know what the issue is.
I was able to figure out this issue.
It is not related to the library, which is super awesome; I was able to get the messages in real time. The issue was with my current architecture, which was causing the delay.
Thank you all for the help.
I realize it is not ideal to answer my own question this way, but I am posting this so it might give someone a heads-up when they run into such errors: they can start by debugging their architecture instead of the library.
I just tried going through the getting-started tutorial located here:
http://doc.akka.io/docs/akka/2.0.1/intro/getting-started-first-scala.html#getting-started-first-scala
When running the example, everything works, but I get a dead-letters message from each of my workers saying it can't deliver an akka.dispatch.sysmsg.Terminate from the worker to the master.
I'm guessing this is because the master gets shut down before the workers. How do I rectify this? If I comment out context.stop(self), the issue goes away, but can I be certain that everything gets closed correctly when context.system.shutdown() is called from the listener?
And say I actually wanted to shut down only the master and the workers (not the whole system): how would I do that without getting the dead-letters messages I get when using context.stop(self) as advised by the tutorial?
First: please do not use such an old version of Akka if at all possible; the current release is 2.2.3.
The messages you are seeing are not indicative of a problem (and as such they are not printed as errors), hence you should not try to fix them.
I am getting into CQRS, and I understand that you have commands (application layer) and events (from the domain).
In the simple case where events are used to update the read model, can read-model updates fail? If there is no "bug" then I cannot see how they would fail, and since I am using EventStore, I know there is a commit flag that will retry failures.
So my question is: do I have to do anything in addition to EventStore to handle failures?
Coming from a world where you do everything in one transaction, the fact that things are now done separately worries me.
Of course there may be cases where a published event fails in the read models.
You have to make sure you can detect that and resolve it.
The nice thing is that you can replay all the events again and again, so you get the chance not only to fix the error but also to test the fix by replaying every single event if you want.
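As an illustration, here is a hedged sketch in Java of that replay idea. Event, ReadModelProjector, and Replayer are invented names, and how you read "all events" out of EventStore depends on the client, so treat the list as a placeholder:

    import java.util.List;

    interface Event {}

    // The projection that builds the read model; because events may be
    // replayed, apply() must tolerate seeing the same event more than once.
    interface ReadModelProjector {
        void apply(Event e);
    }

    class Replayer {
        // Rebuild the read model by re-applying every stored event in order
        // (clearing the old read model first is assumed to happen elsewhere).
        // Replaying also doubles as a regression test for a fixed projection.
        static void rebuild(List<Event> allEvents, ReadModelProjector projector) {
            for (Event e : allEvents) {
                projector.apply(e);
            }
        }
    }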
I use NServiceBus as my publishing mechanism, which gives me an error queue. Using my other logging tools together with the error queue, I can easily determine what happened, since I have both the error log and the actual message that caused the error in the first place.
I am using C# and .NET Framework 1.1 (yes, it's old, but I inherited this stuff and can't upgrade). I place messages on a transactional queue, but they do not make it onto the queue about 50% of the time. I am running in a workgroup on Windows XP Professional with all service packs installed. I don't see any messages in the dead-letter queue either.
Any ideas where to look?
If it isn't hitting the queue at all and isn't going to the dead-letter queue, it suggests the item isn't being sent to the queue. You should be able to confirm that this is the case by switching on the journal for the queue.
Assuming it isn't hitting the queue, it is probably a transaction issue. I would check that you are definitely committing the message to the queue every time. Make sure there aren't any exceptions being thrown and swallowed that cause the transaction to roll back or never be committed (essentially the same thing). Also make sure there aren't any conditional statements that mean the commit gets skipped.
I would add some logging around every location where a transaction is started, committed, and rolled back, and also around any location where you are creating a message. You can then review your log to see the order of events and spot what's going astray.
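The original code is C# on .NET 1.1, so purely as an analogue, here is that "log around begin/commit/rollback" pattern sketched with JMS in Java; the connection factory and the queue name "orders" are placeholders:

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.JMSException;
    import javax.jms.MessageProducer;
    import javax.jms.Session;

    class TransactionalSend {
        static void send(ConnectionFactory factory, String text) throws JMSException {
            Connection conn = factory.createConnection();
            try {
                // Transacted session: nothing reaches the queue until commit().
                Session session = conn.createSession(true, Session.SESSION_TRANSACTED);
                MessageProducer producer =
                        session.createProducer(session.createQueue("orders")); // placeholder
                System.out.println("tx begin");
                try {
                    producer.send(session.createTextMessage(text));
                    session.commit();                     // the step that is easy to skip
                    System.out.println("tx committed");
                } catch (JMSException ex) {
                    session.rollback();
                    System.out.println("tx rolled back: " + ex);
                    throw ex;                             // do not swallow the failure
                }
            } finally {
                conn.close();
            }
        }
    }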
Another option would be to remove all of the transaction code and test the code against a non-transactional queue. If the messages all appear then it is a transactional problem. If not, the issue is elsewhere.
I use MSMQ a lot and the one thing I have learned through experience is that it works really well and the weak point is me :-)