How can I make apply_async accept the MessageGroupId parameter for an SQS FIFO queue - Celery

The problem I am having is that Celery's apply_async is not delivering a required keyword argument, MessageGroupId, to boto3's send_message method when publishing a message to an SQS FIFO queue.
I have been trying to use Celery's apply_async to publish messages to an SQS FIFO queue, and at first I got the following error:
botocore.exceptions.ClientError: An error occurred (MissingParameter) when calling the SendMessage operation: The request must contain the parameter MessageGroupId.
So I found out that for FIFO queues we need to pass MessageGroupId as a keyword argument to apply_async, as described in Celery's documentation.
I did pass the MessageGroupId parameter as a keyword argument, but I faced the same error. For some reason, the MessageGroupId parameter is not reaching the SendMessage operation.
Can someone help me solve this issue?
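For context, SQS itself rejects any send to a queue whose name ends in .fifo unless a MessageGroupId is supplied, regardless of which client library makes the call; the MissingParameter error above is that rejection surfacing through boto3. Below is a minimal sketch of that raw requirement, written with the AWS SDK for Java v2 from Scala rather than boto3, and with a hypothetical queue URL; it only illustrates what the broker must ultimately pass on each send.

    import software.amazon.awssdk.services.sqs.SqsClient
    import software.amazon.awssdk.services.sqs.model.SendMessageRequest

    object FifoSendSketch extends App {
      val sqs = SqsClient.create()

      // Hypothetical queue URL; any queue whose name ends in ".fifo" enforces the rule.
      val queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue.fifo"

      val request = SendMessageRequest.builder()
        .queueUrl(queueUrl)
        .messageBody("""{"task": "example"}""")
        // Required for FIFO queues; omitting it produces the MissingParameter error above.
        .messageGroupId("default")
        // Also required unless content-based deduplication is enabled on the queue.
        .messageDeduplicationId(java.util.UUID.randomUUID().toString)
        .build()

      sqs.sendMessage(request)
      sqs.close()
    }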

Related

How does tell() work in the Akka actor system?

I am a bit confused looking at the docs here and here.
receive() is a method that accepts no parameters and returns a partial function. tell() is a method that returns Unit and 'sends' the message. Now, in my understanding, two things have to happen for the message to be processed:
receive() should be invoked by tell()
The message is passed to the partial function returned by receive()
Now, if the partial function is returned back to the place where tell() was used, how does message-based communication work? Why isn't the operation performed inside the actor itself?
Because these are internals there is no documentation, but you can check out the source code yourself here: https://github.com/akka/akka/blob/master/akka-actor/src/main/scala/akka/actor/ActorCell.scala. Everything is there, starting with how tell delivers a message to the mailbox, how the message is extracted from it, and how receive is called.
Hope it answers your question.
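To make the sequence concrete, here is a minimal sketch with the classic (untyped) Akka API; the actor and message are made up for illustration. tell (the ! operator) only enqueues the message, and the partial function that receive defined is applied later, inside the actor, by the dispatcher:

    import akka.actor.{Actor, ActorSystem, Props}

    // Hypothetical actor: receive only *defines* the partial function.
    // The actor system applies it later, one message at a time, to messages
    // taken from this actor's mailbox.
    class Greeter extends Actor {
      def receive: Receive = {
        case name: String => println(s"Hello, $name")
      }
    }

    object TellDemo extends App {
      val system  = ActorSystem("demo")
      val greeter = system.actorOf(Props[Greeter](), "greeter")

      // tell returns Unit immediately: it only puts the message in the mailbox.
      // The println runs later on the actor's dispatcher thread, not here.
      greeter ! "world"

      Thread.sleep(500) // crude wait so the asynchronous delivery has time to run
      system.terminate()
    }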

Query node's result status in Akka Stream postStop

In an Akka Stream GraphStageLogic's postStop, is there a way to determine whether the stage has failed or completed without error? E.g., get a Try[Unit] that would be a Failure if failStage had been called, or a Success if completeStage had been called.
failStage is a final method, so there is no way to hook into it.
The documentation also doesn't say anything useful.
It seems there is not, so you need to track all of the stage's own calls to completeStage and failStage yourself, including those made from inlet/outlet handlers; a sketch follows below.
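A minimal sketch of that approach, using a made-up pass-through stage: record the outcome in a field right before every call to completeStage or failStage, then read it back in postStop.

    import akka.stream.{Attributes, FlowShape, Inlet, Outlet}
    import akka.stream.stage.{GraphStage, GraphStageLogic, InHandler, OutHandler}
    import scala.util.{Failure, Success, Try}

    // Hypothetical pass-through stage that remembers how it terminated so that
    // postStop can distinguish completion from failure. Because failStage is
    // final, the outcome has to be recorded by hand at every call site.
    final class TrackedStage[A] extends GraphStage[FlowShape[A, A]] {
      val in: Inlet[A]   = Inlet[A]("TrackedStage.in")
      val out: Outlet[A] = Outlet[A]("TrackedStage.out")
      override val shape: FlowShape[A, A] = FlowShape(in, out)

      override def createLogic(inheritedAttributes: Attributes): GraphStageLogic =
        new GraphStageLogic(shape) with InHandler with OutHandler {
          // None while running; Some(Success(())) after completeStage,
          // Some(Failure(e)) after failStage.
          private var outcome: Option[Try[Unit]] = None

          override def onPush(): Unit = push(out, grab(in))
          override def onPull(): Unit = pull(in)

          override def onUpstreamFinish(): Unit = {
            outcome = Some(Success(()))
            completeStage()
          }

          override def onUpstreamFailure(ex: Throwable): Unit = {
            outcome = Some(Failure(ex))
            failStage(ex)
          }

          // Any other handler that calls completeStage/failStage
          // (e.g. onDownstreamFinish) needs the same bookkeeping.
          override def postStop(): Unit =
            println(s"stage stopped, outcome = $outcome")

          setHandlers(in, out, this)
        }
    }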

How to manage the error queue in OpenSSL (SSL_get_error and ERR_get_error)

In OpenSSL, the man pages for the majority of SSL_* calls say that these calls indicate an error by returning a value <= 0, and suggest calling SSL_get_error() to get the extended error.
But within the man pages for these calls, as well as for other OpenSSL library calls, there are vague references to using the OpenSSL "error queue" - such is the case in the man page for SSL_get_error:
The current thread's error queue must be empty before the TLS/SSL I/O
operation is attempted, or SSL_get_error() will not work reliably.
And in that very same man page, the description for SSL_ERROR_SSL says this:
SSL_ERROR_SSL
A failure in the SSL library occurred, usually a protocol error.
The OpenSSL error queue contains more information on the error.
This implies that there is something in the error queue worth reading, and that failing to read it makes a subsequent call to SSL_get_error unreliable. Presumably, the call to make is ERR_get_error.
I plan to use non-blocking sockets in my code. As such, it's important that I reliably discover when the error condition is SSL_ERROR_WANT_READ or SSL_ERROR_WANT_WRITE so I can put the socket in the correct polling mode.
So my questions are these:
Does SSL_get_error() call ERR_get_error() implicitly for me? Or do I need to use both?
Should I be calling ERR_clear_error prior to every OpenSSL library call?
Is it possible that more than one error could be in the queue after an OpenSSL library call completes? Hence, are there circumstances where the first error in the queue is more relevant than the last error?
SSL_get_error does not call ERR_get_error. So if you just call SSL_get_error, the error stays in the queue.
You should call ERR_clear_error prior to ANY SSL call (SSL_read, SSL_write, etc.) that is followed by SSL_get_error; otherwise you may be reading a stale error that occurred previously in the current thread.

Scala 2.8Beta1 Actor

Calling the !! method from one actor to another worker actor appears to keep the channel open even after the reply was received by the caller (i.e., the future is ready).
For example, using !! to send 11 different messages from one actor to another worker actor will result in 11 messages similar to the one below showing up in the mailbox of the original caller, each with a different Channel#xxxx value.
!(scala.actors.Channel#11b456f,Exit(com.test.app.actor.QueryActor#4f7bc2,'normal))
Are these messages awaiting replies from the worker, since the original caller sends an Exit message out upon its own call to exit(), or are they generated on the other end and for some reason have the printed form shown above? By this point the worker actor has already exited, so the original caller of !! will definitely never receive any replies.
This behavior is undesirable, as the original calling actor's mailbox fills with these Exit messages (one for each channel created each time !! was used).
How can this be stopped? Is the original caller automatically "linking" to the reply channels created on each !! call?
The reason these Exit messages are sent to the original caller is that the caller links its temporary channel, which is used to receive the future result, to the callee. In particular, if the channel receives an exit signal, an Exit message is sent on that channel, which results in a message similar to the one you describe being sent to the actual caller (you can think of channels as tags on messages). This is done to allow (re-)throwing an exception inside the caller if the callee terminates before serving the future message send (the exception is thrown upon accessing the future).
The problem with the current implementation is that the caller receives an Exit message even if the future was already resolved. This is clearly a bug that should be filed on the Scala Trac. A possible fix is to send an Exit message only if the future has not been resolved yet. In that case the Exit message would be removed whenever the future is first accessed using apply or isSet.
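For reference, here is a minimal sketch of the scenario with the 2.8-era scala.actors API (the actor names and messages are made up). Every !! below creates a temporary reply channel, and once the worker exits, one Exit message per channel can still turn up in the caller's mailbox even though all the futures were already resolved:

    import scala.actors.Actor
    import scala.actors.Actor._

    // Worker doubles each Int it receives and exits (with reason 'normal) on 'stop.
    object Worker extends Actor {
      def act(): Unit = loop {
        react {
          case n: Int => reply(n * 2) // serves the future created by !!
          case 'stop  => exit()
        }
      }
    }

    object Caller extends Actor {
      def act(): Unit = {
        // Each !! creates a fresh reply channel that is linked to the callee.
        val futures = (1 to 11).map(Worker !! _)
        val results = futures.map(_.apply()) // blocks until each reply arrives
        println(results)

        Worker ! 'stop
        // Even though every future has been resolved, one Exit message per
        // channel can still end up in this actor's mailbox, as described above.
      }
    }

    object Demo extends App {
      Worker.start()
      Caller.start()
    }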

Removing all the messages from MSMQ

I have an NUnit test which adds messages into MSMQ.
In the NUnit teardown I want to remove all the messages from the queue.
Is there a direct way to remove all the messages from the queue (some kind of refresh)?
Is there a Purge() method on your queue object that would do the trick?
Edit: Yup - seems to be: http://msdn.microsoft.com/en-us/library/ms703966%28VS.85%29.aspx