Localstack with MassTransit not getting messages - publish-subscribe

I am having issues testing MassTransit with LocalStack; with the real SNS/SQS in AWS everything works fine, so I suspect it's an issue with LocalStack, unless MassTransit requires something other than configuring the ServiceURL. See https://github.com/MassTransit/MassTransit/issues/1476
I run LocalStack as follows, with just SNS and SQS:
docker run -it -e SERVICES=sns,sqs -e TEST_AWS_ACCOUNT_ID="000000000000" -e DEFAULT_REGION="us-east-1" -e LOCALSTACK_HOSTNAME="localhost" --rm --privileged --name localstack_main -p 4566:4566 -p 4571:4571 -p 8080-8081:8080-8081 -v "/tmp/localstack:/tmp/localstack" -v "/var/run/docker.sock:/var/run/docker.sock" -e DOCKER_HOST="unix:///var/run/docker.sock" -e HOST_TMP_FOLDER="/tmp/localstack" "localstack/localstack"
Now with MassTransit I create the bus and start it. The only change I make for MassTransit to work with LocalStack is setting the ServiceURL on the SNS and SQS configs. The rest should work the same way (I think):
var region = "localhost:4566"; // LocalStack endpoint (host:port), not a real AWS region
var accessKey = "test";
var secretKey = "test";
var busControl = Bus.Factory.CreateUsingAmazonSqs(c =>
{
    var hostAddress = new Uri($"amazonsqs://{region}");
    var hostConfigurator = new AmazonSqsHostConfigurator(hostAddress);
    hostConfigurator.AccessKey(accessKey);
    hostConfigurator.SecretKey(secretKey);
    // Point both the SNS and the SQS client at LocalStack
    hostConfigurator.Config(new AmazonSimpleNotificationServiceConfig { ServiceURL = $"http://{region}" });
    hostConfigurator.Config(new AmazonSQSConfig { ServiceURL = $"http://{region}" });
    c.Host(hostConfigurator.Settings);
});
When running my project I can connect and publish events with no errors. I subscribe to events, also with no errors. I can see the topics, subscriptions and queue are properly created in LocalStack.
I can also see with Commandeer that there are "invisible" messages in the queue (not sure what that means), so it seems the problem is on the receiving side.
Is there any additional requirement to configure in MassTransit to consume published events?
UPDATE 1: One interesting thing is that I can keep the subscriber listening for a long time, and during this time Commandeer shows there are invisible messages in the queue.
As soon as I stop the subscriber (and my application) I can see that Commandeer moves messages from "invisible" to "messages". I cannot peek at the messages, though.
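In SQS terms, "invisible" messages are in flight: a consumer has received them but not yet deleted them, so they stay hidden until the visibility timeout expires. To rule MassTransit out, the queue can also be read by hand; a minimal check with awslocal (assuming the queue is named sample-queue, as later in this post), using a visibility timeout of 0 so that peeking doesn't hide the message again:
# Try to read a message without hiding it from other consumers
awslocal sqs receive-message --queue-url http://localhost:4566/000000000000/sample-queue --visibility-timeout 0 --wait-time-seconds 2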

I've confirmed the problem is with the latest localstack image: I tried an older one, as per Chris's suggestion, and it works well.
With localstack/localstack:0.11.2 it works well
docker run -it -e SERVICES=sns,sqs -e TEST_AWS_ACCOUNT_ID="000000000000" -e DEFAULT_REGION="us-east-1" -e LOCALSTACK_HOSTNAME="localhost" --rm --privileged --name localstack_main -p 4566:4566 -p 4571:4571 -p 8080-8081:8080-8081 -v "/tmp/localstack:/tmp/localstack" -v "/var/run/docker.sock:/var/run/docker.sock" -e DOCKER_HOST="unix:///var/run/docker.sock" -e HOST_TMP_FOLDER="/tmp/localstack" "localstack/localstack:0.11.2"
With latest (I think it was bdfbe53666a4dd13a09dd9e4b155e2fb750b8041daf7efc69783cb4208b6cacc, but I'm not 100% sure) it doesn't work.
The following image versions don't work either:
localstack/localstack:0.12.8
localstack/localstack:0.11.6
UPDATE 2: I've created a simple repo with instructions to reproduce the issue: https://gitlab.com/sunnyatticsoftware/sandbox/issue-localstack-masstransit
Notice the repo has a wrapper abstraction over MassTransit for use with a clean architecture. This doesn't affect the issue, but it was easier to copy-paste the needed parts than to build a sample from scratch.
UPDATE 3: Verified that the latest version, localstack/localstack:0.12.9.1, works well for the above scenario (see repo).
UPDATE 4 (2021-01-12): I attempted the version localstack/localstack:0.12.9.1 again and it does not work. I'm not sure whether it really worked the previous time or the docker image had been overwritten. In any case, I'm back to using version localstack/localstack:0.11.2, because the latest is also broken, unfortunately.
I can see the messages in the queue as hidden.
awslocal sqs get-queue-attributes --queue-url http://localhost:4566/000000000000/sample-queue --attribute-names All
{
    "Attributes": {
        "ApproximateNumberOfMessages": "0",
        "ApproximateNumberOfMessagesDelayed": "0",
        "ApproximateNumberOfMessagesNotVisible": "4",
        "CreatedTimestamp": "1626087449.988218",
        "DelaySeconds": "0",
        "LastModifiedTimestamp": "1626087450.113652",
        "MaximumMessageSize": "262144",
        "MessageRetentionPeriod": "345600",
        "QueueArn": "arn:aws:sqs:us-east-1:000000000000:sample-queue",
        "Policy": "{\"Version\": \"2012-10-17\", \"Statement\": [{\"Sid\": \"0d948ac2a9ea4ed7b2c0609642107f0f\", \"Effect\": \"Allow\", \"Principal\": {\"AWS\": \"*\"}, \"Action\": \"sqs:SendMessage\", \"Resource\": \"arn:aws:sqs:us-east-1:000000000000:sample-queue\", \"Condition\": {\"ArnLike\": {\"aws:SourceArn\": \"arn:aws:sns:us-east-1:000000000000:*\"}}}]}",
        "ReceiveMessageWaitTimeSeconds": "0",
        "VisibilityTimeout": "30"
    }
}

Related

Automatically reconnect failed tasks in Kafka-Connect

I'm using a mongo-source plugin with Kafka-connect.
I checked the source task state, and it was running and listening on a mongo collection.
I manually stopped the mongod service, waited about 1 minute, then started it back up.
I checked the source task to see if anything would fix itself, and after 30 minutes nothing had recovered.
Only after restarting the connector did it start working again.
Since mongo-source doesn't have options to set retries and a backoff on timeout, I searched for a configuration that would fit a simple scenario: restart a failed task after X time using Kafka Connect configuration. I couldn't find any.
I can do that with a simple script, but there must be something in Kafka Connect that manages failed tasks, or even in mongo-source... I don't want it to fail so quickly, after just 1 minute.
There isn't any way other than using the REST API to find a failed task and submit a restart request, and then running this on a periodic basis. For example:
curl -s "http://localhost:8083/connectors?expand=status" | \
jq -c -M 'map({name: .status.name } + {tasks: .status.tasks}) | .[] | {task: ((.tasks[]) + {name: .name})} | select(.task.state=="FAILED") | {name: .task.name, task_id: .task.id|tostring} | ("/connectors/"+ .name + "/tasks/" + .task_id + "/restart")' | \
xargs -I{connector_and_task} curl -v -X POST "http://localhost:8083"\{connector_and_task\}
Source: https://rmoff.net/2019/06/06/automatically-restarting-failed-kafka-connect-tasks/
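To run this on a schedule, one option is to save the pipeline above as a script and trigger it from cron; a sketch, assuming the script is saved as /usr/local/bin/restart-failed-connect-tasks.sh:
# Check for failed tasks every 5 minutes and log the result
*/5 * * * * /usr/local/bin/restart-failed-connect-tasks.sh >> /var/log/connect-restart.log 2>&1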

Tensorflow Serving: Rest API returns "Malformed request" error

Tensorflow Serving server (run with docker) responds to my GET (and POST) requests with this:
{ "error": "Malformed request: POST /v1/models/saved_model/" }
Precisely the same problem was already reported but never solved (supposedly, this is a StackOverflow kind of question, not a GitHub issue):
https://github.com/tensorflow/serving/issues/1085
https://github.com/tensorflow/serving/issues/1095
Any ideas? Thank you very much.
I verified that this does not work before version 1.12 and does indeed work with 1.12 and later.
> docker run -it -p 127.0.0.1:9000:8500 -p 127.0.0.1:9009:8501 -v /models/55:/models/55 -e MODEL_NAME=55 --rm tensorflow/serving
> curl http://localhost:9009/v1/models/55
{ "error": "Malformed request: GET /v1/models/55" }
Now try with 1.12:
> docker run -it -p 127.0.0.1:9000:8500 -p 127.0.0.1:9009:8501 -v /models/55:/models/55 -e MODEL_NAME=55 --rm tensorflow/serving:1.12.0
> curl http://localhost:9009/v1/models/55
{
    "model_version_status": [
        {
            "version": "1541703514",
            "state": "AVAILABLE",
            "status": {
                "error_code": "OK",
                "error_message": ""
            }
        }
    ]
}
There were two issues with my approach:
1) The status check request wasn't supported by my tensorflow_model_server (see https://github.com/tensorflow/serving/issues/1085 for details)
2) More importantly, when using Windows you must escape the quotation marks in the JSON body. So instead of:
curl -XPOST http://localhost:8501/v1/models/saved_model:predict -d "{"instances":[{"features":[1,1,1,1,1,1,1,1,1,1]}]}"
I should have used this:
curl -XPOST http://localhost:8501/v1/models/saved_model:predict -d "{\"instances\":[{\"features\":[1,1,1,1,1,1,1,1,1,1]}]}"
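On Linux or macOS shells the escaping isn't needed, since the JSON body can be wrapped in single quotes instead:
curl -XPOST http://localhost:8501/v1/models/saved_model:predict -d '{"instances":[{"features":[1,1,1,1,1,1,1,1,1,1]}]}'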
It depends on your model, but this is what my body looks like:
{"inputs": {"text": ["Hello"]}}
I used Postman to help me out, so that it knew the body was JSON.
This is for the predict API, so the URL ends in ":predict".
Again, that depends on which API you're trying to use.
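Putting that together, a request against this model would look something like the following (the model name my_model is just a placeholder):
curl -XPOST http://localhost:8501/v1/models/my_model:predict -d '{"inputs": {"text": ["Hello"]}}'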
The model status API is only supported in the master branch. There is no TF Serving release that supports it yet (the API is slated for the upcoming 1.12 release). You can use the nightly docker image (tensorflow/serving:nightly) to test master branch builds.
This solution was given by netf in issue #1128 in tensorflow/serving.
I already tried this solution; it works, and I can get the model status (see the "Getting Model status" image for a demo).
Hope I can help you.
If the master branch builds are not clear to you, you can contact me and I can give you instructions.
Email: mizeshuang#gmail.com

Orion Context Broker: reset by peer when calling updateContext

Just after installing the Context Broker, I tried to test it by creating a new entity as described in the Entity Creation section of https://forge.fi-ware.org/plugins/mediawiki/wiki/fiware/index.php/Publish/Subscribe_Broker_-_Orion_Context_Broker_-_User_and_Programmers_Guide, but I'm getting a "connection reset by peer" error.
The log doesn't seem to say anything, even though I raised the trace level with the -t 0-255 option.
Additional info:
$ contextBroker --version
0.14.0
$ ps aux | grep context
/usr/bin/contextBroker -port 1026 -logDir /var/log/contextBroker -pidpath /var/log/contextBroker/contextBroker.pid -dbhost localhost -db orion -t 0-255
The issue was fixed by updating the Context Broker to a newer version, in my case to version 0.14.1, which can be found here: http://repositories.testbed.fi-ware.org/repo/rpm/x86_64/.
I can't say exactly what was wrong, since after updating everything worked fine.

How to set up messaging subsystem using CLI in Wildfly

Does anyone have an example script for setting up the messaging subsystem in Wildfly using CLI?
The perfect example would be the CLI needed to take a server running standalone.xml and, after running the CLI script, leave it with the messaging subsystem as defined in standalone-full.xml.
The examples I've found so far all start with the assumption that the messaging subsystem is already in place.
Here's the script to add messaging. This adds the messaging subsystem, and makes it look like the subsystem when running standalone-full.xml.
/extension=org.jboss.as.messaging:add()
batch
/subsystem=messaging:add
/subsystem=messaging/hornetq-server=default:add
/subsystem=messaging/hornetq-server=default/:write-attribute(name=journal-file-size, value=102400L)
/subsystem=messaging/hornetq-server=default/address-setting=#:add(address-full-policy="PAGE", \
dead-letter-address="jms.queue.DLQ", expiry-address="jms.queue.ExpiryQueue", expiry-delay=-1L, \
last-value-queue=false, max-delivery-attempts=10, max-size-bytes=10485760L, message-counter-history-day-limit=10, \
page-max-cache-size=5, page-size-bytes=2097152L, redelivery-delay=0L, redistribution-delay=-1L, send-to-dla-on-no-route=false)
/subsystem=messaging/hornetq-server=default/in-vm-connector=in-vm:add(server-id=0)
/subsystem=messaging/hornetq-server=default/in-vm-acceptor=in-vm:add(server-id=0)
/subsystem=messaging/hornetq-server=default/http-connector=http-connector:add(socket-binding="http", param={http-upgrade-endpoint="http-acceptor"})
/subsystem=messaging/hornetq-server=default/http-connector=http-connector-throughput:add(socket-binding="http", param={http-upgrade-endpoint="http-acceptor-throughput", batch-delay=50})
/subsystem=messaging/hornetq-server=default/http-acceptor=http-acceptor:add(http-listener="default")
/subsystem=messaging/hornetq-server=default/http-acceptor=http-acceptor-throughput:add(http-listener="default", param={batch-delay=50, direct-deliver=false})
/subsystem=messaging/hornetq-server=default/connection-factory=InVmConnectionFactory:add(connector={"in-vm"=>undefined}, entries = ["java:/ConnectionFactory"])
/subsystem=messaging/hornetq-server=default/connection-factory=RemoteConnectionFactory:add(connector={"http-connector"=>undefined}, entries = ["java:jboss/exported/jms/RemoteConnectionFactory"])
/subsystem=messaging/hornetq-server=default/pooled-connection-factory=hornetq-ra:add(connector={"in-vm"=>undefined}, entries=["java:/JmsXA","java:jboss/DefaultJMSConnectionFactory"])
/subsystem=messaging/hornetq-server=default/security-setting=#:add()
/subsystem=messaging/hornetq-server=default/security-setting=#/role=guest:add(consume=true, create-durable-queue=false, create-non-durable-queue=true, delete-durable-queue=false, delete-non-durable-queue=true, manage=false, send=true)
jms-queue add --queue-address=ExpiryQueue --durable=true --entries=["java:/jms/queue/ExpiryQueue"]
jms-queue add --queue-address=DLQ --durable=true --entries=["java:/jms/queue/DLQ"]
run-batch
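A typical way to run the script, assuming it is saved to a file named add-messaging.cli (the file name is just an example):
$JBOSS_HOME/bin/jboss-cli.sh --connect --file=add-messaging.cli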
Here are updated CLI commands for the newer Wildfly 10 (ActiveMQ Artemis):
>> ADD MESSAGING SUBSYSTEM
/extension=org.wildfly.extension.messaging-activemq:add()
/subsystem=messaging-activemq:add
/:reload
/subsystem=messaging-activemq/server=default:add
/subsystem=messaging-activemq/server=default/security-setting=#:add
/subsystem=messaging-activemq/server=default/address-setting=#:add(dead-letter-address="jms.queue.DLQ", expiry-address="jms.queue.ExpiryQueue", expiry-delay="-1L", max-delivery-attempts="10", max-size-bytes="10485760", page-size-bytes="2097152", message-counter-history-day-limit="10")
/subsystem=messaging-activemq/server=default/http-connector=http-connector:add(socket-binding="http", endpoint="http-acceptor")
/subsystem=messaging-activemq/server=default/http-connector=http-connector-throughput:add(socket-binding="http", endpoint="http-acceptor-throughput" ,params={batch-delay="50"})
/subsystem=messaging-activemq/server=default/in-vm-connector=in-vm:add(server-id="0")
/subsystem=messaging-activemq/server=default/http-acceptor=http-acceptor:add(http-listener="default")
/subsystem=messaging-activemq/server=default/http-acceptor=http-acceptor-throughput:add(http-listener="default", params={batch-delay="50", direct-deliver="false"})
/subsystem=messaging-activemq/server=default/in-vm-acceptor=in-vm:add(server-id="0")
/subsystem=messaging-activemq/server=default/jms-queue=ExpiryQueue:add(entries=["java:/jms/queue/ExpiryQueue"])
/subsystem=messaging-activemq/server=default/jms-queue=DLQ:add(entries=["java:/jms/queue/DLQ"])
>> Refresh needed at this point
/subsystem=messaging-activemq/server=default/connection-factory=InVmConnectionFactory:add(connectors=["in-vm"], entries=["java:/ConnectionFactory"])
/subsystem=messaging-activemq/server=default/connection-factory=RemoteConnectionFactory:add(connectors=["http-connector"], entries = ["java:jboss/exported/jms/RemoteConnectionFactory"])
/subsystem=messaging-activemq/server=default/pooled-connection-factory=activemq-ra:add(transaction="xa", connectors=["in-vm"], entries=["java:/JmsXA java:jboss/DefaultJMSConnectionFactory"])
/subsystem=ee/service=default-bindings/:write-attribute(name="jms-connection-factory", value="java:jboss/DefaultJMSConnectionFactory")
/subsystem=ejb3:write-attribute(name="default-resource-adapter-name", value="${ejb.resource-adapter-name:activemq-ra.rar}")
/subsystem=ejb3:write-attribute(name="default-mdb-instance-pool", value="mdb-strict-max-pool")
>> ADD MESSAGE QUEUE
/subsystem=messaging-activemq/server=default/jms-queue=MyQueue:add(entries=[java:/jms/queue/MyQueue])
All commands may be run as a batch or separately, like this:
$SERVER_CLI_PATH --connect --user=$SERVER_USER --password=$SERVER_PASSW --command="{{line with command}}"
To set up messaging in WildFly 14, I had to do the configuration with separate CLI script files; otherwise jboss-cli would fail with JBTHR00004: Operation was cancelled exceptions, probably due to incomplete reloads. If you still encounter these errors, add sleep commands in between to the shell script that runs the CLI scripts (see the sketch after the second script below).
Add the messaging extension, 1-add-messaging-extension-and-subsystem.cli:
batch
# Add messaging extension
/extension=org.wildfly.extension.messaging-activemq:add()
# Add messaging subsystem
/subsystem=messaging-activemq:add
run-batch
/:reload
Add the messaging server allowing only in-VM connectors, 2-add-messaging-server.cli:
batch
# Add messaging server with default configuration, allow only in-VM connectors
/subsystem=messaging-activemq/server=default:add
/subsystem=messaging-activemq/server=default/security-setting=#:add
/subsystem=messaging-activemq/server=default/address-setting=#:add( \
dead-letter-address="jms.queue.DLQ", \
expiry-address="jms.queue.ExpiryQueue", \
max-size-bytes="10485760", \
message-counter-history-day-limit="10", \
page-size-bytes="2097152")
/subsystem=messaging-activemq/server=default/in-vm-connector=in-vm:add( \
server-id="0",params=buffer-pooling=false)
/subsystem=messaging-activemq/server=default/in-vm-acceptor=in-vm:add( \
server-id="0",params=buffer-pooling=false)
/subsystem=messaging-activemq/server=default/jms-queue=ExpiryQueue:add( \
entries=["java:/jms/queue/ExpiryQueue"])
/subsystem=messaging-activemq/server=default/jms-queue=DLQ:add( \
entries=["java:/jms/queue/DLQ"])
/subsystem=messaging-activemq/server=default/connection-factory=InVmConnectionFactory:add( \
connectors=["in-vm"], \
entries=["java:/ConnectionFactory"])
/subsystem=messaging-activemq/server=default/pooled-connection-factory=activemq-ra:add( \
transaction="xa", \
connectors=["in-vm"], \
entries=["java:/JmsXA java:jboss/DefaultJMSConnectionFactory"])
# Configure default connection factory in the EE subsystem
/subsystem=ee/service=default-bindings/:write-attribute(name="jms-connection-factory", value="java:jboss/DefaultJMSConnectionFactory")
# Configure message-driven beans in the EJB subsystem
/subsystem=ejb3:write-attribute(name="default-resource-adapter-name", value="${ejb.resource-adapter-name:activemq-ra.rar}")
/subsystem=ejb3:write-attribute(name="default-mdb-instance-pool", value="mdb-strict-max-pool")
run-batch
/:reload
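For reference, a sketch of the driver shell script mentioned above (the sleep durations are arbitrary and may need tuning):
#!/bin/sh
# Run each CLI script in turn, pausing so the server can finish its reload
CLI=$JBOSS_HOME/bin/jboss-cli.sh
$CLI --connect --file=1-add-messaging-extension-and-subsystem.cli
sleep 10
$CLI --connect --file=2-add-messaging-server.cli
sleep 10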
In case you need HTTP connectors as well, see #petr-hunka's answer.

Custom Munin plugin won't report

I've built my first Munin plugin to report the size of our Redis queue, but it won't report for some reason. Every other plugin on the node, including other Redis-centric plugins, works fine.
Here's the plugin code:
#!/bin/sh
case $1 in
config)
cat <<'EOM'
multigraph redis_queue_size
graph_title Redis Queue Size
graph_info The size of Redis queue
graph_category redis
graph_vlabel Messages
redisqueue.label redisqueue
redisqueue.type GAUGE
redisqueue.min 0
EOM
exit 0;;
esac
queuelength=$(redis-cli llen mykeyname)
echo "redisqueue.value $queuelength"
The plugin is in /usr/share/munin/plugins/redis_queue_
The plugin is symlinked to /etc/munin/plugins/redis_queue_
I made sure to restart the service
$ sudo service munin-node force-reload
If I run sudo munin-run redis_queue_ I get the correct output:
redisqueue.value 1567595
If I run munin-node-config I get the following:
redis_queue_ | yes |
If I connect to the instance from the master using telnet to fetch the plugin, I get:
$ telnet 10.101.21.56 4949
Trying 10.101.21.56...
Connected to 10.101.21.56.
Escape character is '^]'.
# munin node at redis01.example.com
fetch redis_queue_
redisqueue.value 1035336
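The node's config output can be checked over the same telnet session as well (the master builds its graphs from this output, so it's worth confirming it matches what the plugin prints locally):
config redis_queue_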
The master shows an empty graph for it, but the "last updated" time isn't increasing. I initially had the plugin configured a little differently (it wasn't producing good output), so all the values are -nan. Once I fixed the output, I expected the plugin to start working, but all efforts have failed.
Everything looks right, and yet there are still no values in the graph.
Edit: Munin v1.4.6