I just tried going through the getting started tutorial located here:
http://doc.akka.io/docs/akka/2.0.1/intro/getting-started-first-scala.html#getting-started-first-scala
When running the example, everything works, but I get a dead-letters message from each of my workers saying it can't deliver an akka.dispatch.sysmsg.Terminate from the worker to the master.
I'm guessing this is because the master gets shut down before the workers. How do I rectify this? If I comment out context.stop(self) the issue goes away, but can I be certain that everything gets closed correctly when context.system.shutdown() is called from the listener?
And if I actually wanted to shut down only the master and the workers (not the whole system), how would I do that without getting the dead-letters messages I see when using context.stop(self) as the tutorial advises?
First: please do not use such an old version of Akka if at all possible; the current release is 2.2.3.
The messages you are seeing are not indicative of a problem (and as such they are not printed as errors), hence you should not try to fix them.
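If you do want to stop only the master and its workers while keeping the ActorSystem alive, one option is to stop the master from the outside and let Akka take care of the children. The sketch below is only an illustration: the Supervisor actor, the "shutdownMaster" message, and the masterProps parameter are hypothetical names introduced here, standing in for however you construct the tutorial's Master.

    import akka.actor.{Actor, ActorRef, Props, Terminated}

    // Hypothetical supervisor: it creates the master and can stop it (and
    // therefore its worker children) without shutting the ActorSystem down.
    // `masterProps` stands in for however you construct the tutorial's Master.
    class Supervisor(masterProps: Props) extends Actor {
      val master: ActorRef = context.actorOf(masterProps, "master")
      context.watch(master) // get a Terminated message once the master is fully stopped

      def receive: Receive = {
        case "shutdownMaster" =>
          // context.stop stops the master's children (the workers) before the
          // master itself, so nothing is left sending to an already-dead parent.
          context.stop(master)
        case Terminated(`master`) =>
          // Master and workers are gone; the ActorSystem itself keeps running.
          println("master terminated, system still running")
      }
    }

Reacting to the Terminated notification is the reliable way to know that everything below the master has stopped, rather than guessing at shutdown ordering.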
Related: I am facing the same issue as Kafka Streams Deserialization Handler. After using logAndContinue, the corrupt messages still show up every time the server restarts.
It looks like this jira issue is still open and needs to be addressed to fix the problem you are describing: https://issues.apache.org/jira/browse/KAFKA-6502
It only happens when you have a series of records in error, though. As soon as a good record comes in, the offset moves along. Therefore, as a workaround, you could probably send a good record that will not cause an error.
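For context, this is roughly what the log-and-continue configuration being discussed looks like; the application id and bootstrap server below are placeholder values, so treat it as a sketch rather than a drop-in config:

    import java.util.Properties
    import org.apache.kafka.streams.StreamsConfig
    import org.apache.kafka.streams.errors.LogAndContinueExceptionHandler

    val props = new Properties()
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app")    // placeholder
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092") // placeholder
    // Log and skip records that fail deserialization instead of crashing the app.
    // Until KAFKA-6502 is resolved, the consumer offset only advances once a
    // later record is processed successfully, so a trailing run of corrupt
    // records is re-read after a restart.
    props.put(
      StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
      classOf[LogAndContinueExceptionHandler].getName
    )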
Hello fellow programmers, I wish everyone a good morning.
The Situation
Laravel is great. Laravel Mail queues and the beanstalkd integration are great. It took me almost no time to get everything working. The sun is shining and it's not raining. It's awesome.
Except when an exception is thrown while sending an email. Then this mail is processed again and again and again, and the exception is also thrown again and again and again.
Infinite loop.
I probably wouldn't even have noticed this if I hadn't seeded the database with invalid data. Validation would usually have taken care of it, so that addresses like 361FlorindaMatthäi#gmail.com don't end up causing the following exception:
[Swift_RfcComplianceException]
Address in mailbox given [361FlorindaMatthäi#gmail.com] does not
comply with RFC 2822, 3.6.2.
But what validation wouldn't have taken care of is, for example, my Mandrill account reaching its limits or my server losing its internet connection. Any exception sends it into an infinite loop.
In the world where the sun is shining and everything is great the job has to be marked as buried or suspended and the next email should be processed. An infinite loop with an invalid email address is not great.
Basically, your application doesn't send out any emails anymore. This guy has roughly the same issue.
How can I fix this? Has anyone else encountered this error?
Any help is much appreciated.
You just need to tell Laravel how many times to try a specific job before deciding it has failed:
php artisan queue:daemon --tries=3
This way, it will stop processing that specific job after 3 tries.
The hard part of any queue-based system is dealing with the errors; I've run tens of millions of jobs through BeanstalkD and many more through other systems like SQS.
With this Swift_RfcComplianceException it's clear that the job will never be able to succeed, so trying it again would be futile.
Some other problems might be recoverable, but in either event you have to wrap the code in a try/catch block and do what you can.
Since there is no way to 'fix' this particular issue, I would record what happened (the name of the exception, any message, and the data) to a log to check on, and then delete or bury the job. If you store the job ID in the log when it is buried, you can go back and delete or kick that particular job later - after changing what happens to the job (rather than having it fail again).
I have two very basic questions on WebSphere MQ - given that I have been administering it for the past few months, I tend to think these are silly questions:
1. Is there a way to "deactivate" a queue? (for example, through a runmqsc command or through the Explorer interface) - I think not. I think what I can do is just delete it.
2. What will happen if I create a remote queue definition if the real remote queue is not in place? Will it cause any issues on the queue manager? - I think not. I think all I will have are error messages in the logs.
Please let me know your thoughts.
Thanks!
1. Is there a way to "deactivate" a queue?
Yes. You can change the queue attributes like so:
ALTER QLOCAL(QUEUE_NAME) PUT(DISABLED) GET(DISABLED)
Any connected applications will receive a return code on the next API call telling them that the queue is no longer available for PUT/GET. If these are well-behaved programs they will then report the error and either end or go into a retry loop.
2. What will happen if I create a remote queue definition if the real remote queue is not in place?
The QRemote definition will resolve to a transmit queue. If the message can successfully be placed there your application will receive a return code of zero. (Any unsuccessful PUT will be due to hitting MAXDEPTH or other local problem not connected to the fact that the remote definition does not exist.)
The problem will be visible when the channel tries to deliver the message. If the remote QMgr has a Dead Letter Queue, the message will go there. If not, it will be backed out onto the local XMitQ and the channel will stop.
I'm having an issue with the JBoss server. When I run it, it stops responding (at no fixed time, so I cannot predict when it will stop responding after startup), and after that it doesn't write anything to the log file. My problem is similar to the one described on the JBoss community site, linked below, but that thread doesn't have the answer. Please help.
http://community.jboss.org/message/526193
--Ravi
It sounds like your JBoss server is running out of threads to allocate and is waiting for a new one to become available. Try triggering a thread dump (ctrl-\) and see if you find any threads suspiciously locked and waiting in some of your code. Quite possibly you have a deadlock or memory leak somewhere in your code which is causing old threads to lock up and never be released.
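If you suspect a deadlock, one way to check (a sketch, assuming a JDK 6+ JVM and run from inside the application) is to query the JVM's ThreadMXBean directly instead of reading a full thread dump by hand:

    import java.lang.management.ManagementFactory

    // Ask the JVM whether any threads are currently deadlocked; if so, print
    // the stack and lock information for each one.
    val threadBean = ManagementFactory.getThreadMXBean
    Option(threadBean.findDeadlockedThreads()).foreach { ids =>
      threadBean.getThreadInfo(ids, true, true).foreach(info => println(info))
    }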
Alternatively, try what the guy you linked to did, i.e. increase the number of threads available.
Edit: For some more basic advice, this post might be of use to you.
I am using C# and .NET Framework 1.1 (yes, it's old, but I inherited this stuff and can't upgrade). I place messages on a transactional queue, but the message does not get onto the queue about 50% of the time. I'm running in workgroup mode on Windows XP Professional with all service packs installed. I don't see any messages in the dead-letter queue either.
Any ideas where to look?
If it isn't hitting the queue at all and isn't going to the dead-letter queue, it suggests the item isn't being sent to the queue. You should be able to confirm that this is the case by switching on the journal for the queue.
Assuming it isn't hitting the queue, it is probably a transaction issue. I would check that you are definitely committing the message to the queue every time. Make sure there aren't any exceptions being thrown and swallowed that cause the transaction to roll back or never be committed (essentially the same thing). Also make sure there aren't any conditional statements that mean the commit gets skipped.
I would add some logging around every location where a transaction is started, committed, and rolled back, and also around any location where you are creating a message. You can then review your log to see the order of events and spot what's going astray.
Another option would be to remove all of the transaction code and test the code against a non-transactional queue. If the messages all appear then it is a transactional problem. If not, the issue is elsewhere.
I use MSMQ a lot and the one thing I have learned through experience is that it works really well and the weak point is me :-)