nats-sub not found after nats-box fresh install - kubernetes

I'm trying to set up a basic NATS service on my Kubernetes cluster, following their documentation. I executed the following:
$ helm install test-nats nats/nats
NAME: test-nats
LAST DEPLOYED: Thu Jul 14 13:18:09 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
You can find more information about running NATS on Kubernetes
in the NATS documentation website:
https://docs.nats.io/nats-on-kubernetes/nats-kubernetes
NATS Box has been deployed into your cluster, you can
now use the NATS tools within the container as follows:
kubectl exec -n default -it deployment/test-nats-box -- /bin/sh -l
nats-box:~# nats-sub test &
nats-box:~# nats-pub test hi
nats-box:~# nc test-nats 4222
Thanks for using NATS!
$ kubectl exec -n default -it deployment/test-nats-box -- /bin/sh -l
             _             _
 _ __   __ _| |_ ___      | |__   _____  __
| '_ \ / _` | __/ __|_____| '_ \ / _ \ \/ /
| | | | (_| | |_\__ \_____| |_) | (_) >  <
|_| |_|\__,_|\__|___/     |_.__/ \___/_/\_\
nats-box v0.11.0
test-nats-box-84c48d46f-j7jvt:~#
So far, everything has matched their getting-started guide. However, when I try to test the connection, I run into trouble:
test-nats-box-84c48d46f-j7jvt:~# nats-sub test &
test-nats-box-84c48d46f-j7jvt:~# /bin/sh: nats-sub: not found
test-nats-box-84c48d46f-j7jvt:~# nats-pub test hi
/bin/sh: nats-pub: not found
It looks like the commands weren't found, but they should have been installed when I did the helm install. What's going on here?

I reproduced the setup on my Kubernetes cluster, successfully deployed the NATS box, and started a client subscriber program. Subscribers listen on subjects, and publishers send messages on specific subjects.
1. Create Subscriber
In a shell or command prompt session, start a client subscriber program.
nats sub <subject>
Here, <subject> is the subject to listen on. It helps to use unique, well-thought-out subject strings, because you need to ensure that messages reach the correct subscribers even when wildcards are used.
For example:
nats sub msg.test
You should see the message: Listening on [msg.test].
2. Create a Publisher and publish a message
Create a NATS publisher and send a message.
nats pub <subject> <message>
Where <subject> is the subject name and <message> is the text to publish.
For example:
nats pub msg.test nats-message-1
You'll notice that the publisher sends the message and prints: Published [msg.test] : 'nats-message-1'.
The subscriber receives the message and prints: [#1] Received on [msg.test]: 'nats-message-1'.
Here, you have used the deprecated commands nats-sub and nats-pub. Try the nats sub / nats pub commands above instead.
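For example, inside the nats-box shell from the question (the pod name and the exact CLI output below are illustrative):
test-nats-box-84c48d46f-j7jvt:~# nats sub test &
13:21:02 Subscribing on test
test-nats-box-84c48d46f-j7jvt:~# nats pub test hi
13:21:10 Published 2 bytes to "test"
[#1] Received on "test"
hi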

I had the same problem. nats-sub and nats-pub seem to be deprecated; you need to use nats sub and nats pub instead.

Related

How to connect python s3fs client to a running Minio docker container?

For test purposes, I'm trying to connect a module that introduces an abstraction layer over s3fs with custom business logic.
It seems like I have trouble connecting the s3fs client to the Minio container.
Here's how I created the container and attached the s3fs client (below describes how I validated that the container is running properly):
import s3fs
import docker

client = docker.from_env()
container = client.containers.run(
    'minio/minio',
    "server /data --console-address ':9090'",
    environment={
        "MINIO_ACCESS_KEY": "minio",
        "MINIO_SECRET_KEY": "minio123",
    },
    ports={
        "9000/tcp": 9000,
        "9090/tcp": 9090,
    },
    volumes={'/tmp/minio': {'bind': '/data', 'mode': 'rw'}},
    detach=True,
)
container.reload()  # why reload: https://github.com/docker/docker-py/issues/2681

fs = s3fs.S3FileSystem(
    anon=False,
    key='minio',
    secret='minio123',
    use_ssl=False,
    client_kwargs={
        'endpoint_url': "http://localhost:9000"  # tried 127.0.0.1:9000 with no success
    }
)
===========
>>> fs.ls('/')
[]
>>> fs.ls('/data')
(raises a "bucket does not exist" exception)
check that the container is running:
➜ ~ docker ps -a
CONTAINER ID   IMAGE         COMMAND                  CREATED          STATUS          PORTS                                                                                  NAMES
127e22c19a65   minio/minio   "/usr/bin/docker-ent…"   56 seconds ago   Up 55 seconds   0.0.0.0:9000->9000/tcp, :::9000->9000/tcp, 0.0.0.0:9090->9090/tcp, :::9090->9090/tcp   hardcore_ride
check that the relevant volume is attached:
➜ ~ docker exec -it 127e22c19a65 bash
[root@127e22c19a65 /]# ls -l /data/
total 4
-rw-rw-r-- 1 1000 1000 4 Jan 11 16:02 foo.txt
[root@127e22c19a65 /]# exit
Since I proved the volume binding is working properly by shelling into the container, I expected to see the same results when attaching to the container's filesystem via the s3fs client.
What is the bucket name that was created as part of this setup?
From the docs, you have to use <bucket_name>/<object_path> syntax to access the resources:
fs.ls('my-bucket')
['my-file.txt']
Also, the docs below show a couple of other ways to access objects, e.g. using fs.open; can you give that a try?
https://buildmedia.readthedocs.org/media/pdf/s3fs/latest/s3fs.pdf
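For instance, with this MinIO setup the top-level directories under /data act as buckets, so a file sitting at the volume root (like foo.txt above) is not visible through the S3 API. A minimal sketch, reusing the fs object from the question and a hypothetical bucket name my-bucket:
# create a bucket (a top-level directory under /data inside the container)
fs.mkdir('my-bucket')
# write an object into the bucket and read it back
with fs.open('my-bucket/hello.txt', 'wb') as f:
    f.write(b'hello from s3fs')
print(fs.ls('my-bucket'))    # expected: ['my-bucket/hello.txt']
with fs.open('my-bucket/hello.txt', 'rb') as f:
    print(f.read())          # expected: b'hello from s3fs'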

Localstack with MassTransit not getting messages

I am having issues testing MassTransit with LocalStack, but with the real SNS/SQS in AWS everything works fine, so I suspect it's an issue with LocalStack, unless MassTransit requires something more than configuring the ServiceURL. See https://github.com/MassTransit/MassTransit/issues/1476
I run LocalStack as follows, with just SNS and SQS:
docker run -it -e SERVICES=sns,sqs -e TEST_AWS_ACCOUNT_ID="000000000000" -e DEFAULT_REGION="us-east-1" -e LOCALSTACK_HOSTNAME="localhost" --rm --privileged --name localstack_main -p 4566:4566 -p 4571:4571 -p 8080-8081:8080-8081 -v "/tmp/localstack:/tmp/localstack" -v "/var/run/docker.sock:/var/run/docker.sock" -e DOCKER_HOST="unix:///var/run/docker.sock" -e HOST_TMP_FOLDER="/tmp/localstack" "localstack/localstack"
Now with MassTransit I create the bus and start it. The only change I make for MassTransit to work with LocalStack is setting the ServiceURL in SNS and SQS. The rest should work in the same way (I think)
var region = "localhost:4566";
var accessKey = "test";
var secretKey = "test";
var busControl = Bus.Factory.CreateUsingAmazonSqs(c =>
{
    var hostAddress = new Uri($"amazonsqs://{region}");
    var hostConfigurator = new AmazonSqsHostConfigurator(hostAddress);
    hostConfigurator.AccessKey(accessKey);
    hostConfigurator.SecretKey(secretKey);
    hostConfigurator.Config(new AmazonSimpleNotificationServiceConfig { ServiceURL = $"http://{region}" });
    hostConfigurator.Config(new AmazonSQSConfig { ServiceURL = $"http://{region}" });
    c.Host(hostConfigurator.Settings);
});
When running my project I can connect and publish events, no errors. I subscribe to events, no errors. I can see the topics, subscriptions and queue are properly created in LocalStack.
I can also see with Commandeer that there are "invisible" messages in the queue (in SQS terms, messages that have been received by a consumer but not yet deleted are in flight, i.e. not visible), so it seems the problem is in the receiving part.
Is there any additional requirement to configure in MassTransit to consume published events?
UPDATE 1: One interesting thing is that I can keep the subscriber listening for a long time, and during this time Commandeer shows there are invisible messages in the queue.
As soon as I stop the subscriber (and my application), I can see that Commandeer moves messages from "invisible" to "messages". I cannot peek at the messages, though.
I've confirmed the problem is with the localstack latest image: I tried an older one, per Chris's suggestion, and it works well.
With localstack/localstack:0.11.2 it works well
docker run -it -e SERVICES=sns,sqs -e TEST_AWS_ACCOUNT_ID="000000000000" -e DEFAULT_REGION="us-east-1" -e LOCALSTACK_HOSTNAME="localhost" --rm --privileged --name localstack_main -p 4566:4566 -p 4571:4571 -p 8080-8081:8080-8081 -v "/tmp/localstack:/tmp/localstack" -v "/var/run/docker.sock:/var/run/docker.sock" -e DOCKER_HOST="unix:///var/run/docker.sock" -e HOST_TMP_FOLDER="/tmp/localstack" "localstack/localstack:0.11.2"
With latest (I think it was bdfbe53666a4dd13a09dd9e4b155e2fb750b8041daf7efc69783cb4208b6cacc, but I'm not 100% sure) it doesn't work.
The following image versions don't work either:
localstack/localstack:0.12.8
localstack/localstack:0.11.6
UPDATE 1: I've created a simple repo with instructions to reproduce the issue https://gitlab.com/sunnyatticsoftware/sandbox/issue-localstack-masstransit
Notice the repo has a wrapper abstraction over MassTransit to use with a clean architecture. This doesn't affect the issue, but it was easier to copy-paste the needed parts rather than building up a sample from scratch.
UPDATE 2: Verified that the latest version localstack/localstack:0.12.9.1 works well for the above scenario (see repo).
UPDATE 3 (2021-01-12): I attempted the version localstack/localstack:0.12.9.1 again and it does not work. I'm not sure whether it really worked the previous time or the docker image was overwritten. In any case, I'm back to using localstack/localstack:0.11.2, because the latest is also broken, unfortunately.
I can see the messages in the queue as hidden.
awslocal sqs get-queue-attributes --queue-url http://localhost:4566/000000000000/sample-queue --attribute-names All
{
    "Attributes": {
        "ApproximateNumberOfMessages": "0",
        "ApproximateNumberOfMessagesDelayed": "0",
        "ApproximateNumberOfMessagesNotVisible": "4",
        "CreatedTimestamp": "1626087449.988218",
        "DelaySeconds": "0",
        "LastModifiedTimestamp": "1626087450.113652",
        "MaximumMessageSize": "262144",
        "MessageRetentionPeriod": "345600",
        "QueueArn": "arn:aws:sqs:us-east-1:000000000000:sample-queue",
        "Policy": "{\"Version\": \"2012-10-17\", \"Statement\": [{\"Sid\": \"0d948ac2a9ea4ed7b2c0609642107f0f\", \"Effect\": \"Allow\", \"Principal\": {\"AWS\": \"*\"}, \"Action\": \"sqs:SendMessage\", \"Resource\": \"arn:aws:sqs:us-east-1:000000000000:sample-queue\", \"Condition\": {\"ArnLike\": {\"aws:SourceArn\": \"arn:aws:sns:us-east-1:000000000000:*\"}}}]}",
        "ReceiveMessageWaitTimeSeconds": "0",
        "VisibilityTimeout": "30"
    }
}
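If you want to try pulling a message directly rather than peeking through Commandeer, a quick sketch with awslocal (while all messages are in flight, this should return nothing until the visibility timeout expires):
awslocal sqs receive-message --queue-url http://localhost:4566/000000000000/sample-queue --wait-time-seconds 5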

Automatically reconnect failed tasks in Kafka-Connect

I'm using a mongo-source plugin with Kafka-connect.
I checked the source task state, and it was running and listening on a mongo collection.
I manually stopped the mongod service, waited about a minute, then started it again.
I checked the source task to see if anything would fix itself, but after 30 minutes nothing had recovered.
Only after restarting the connector did it start working again.
Since mongo-source doesn't have options to set retries + backoff on timeout, I searched for a configuration that fits a simple scenario: restart a failed task after X time via Kafka Connect configuration. I couldn't find any... :/
I can do that with a simple script, but there must be something in Kafka Connect that manages failed tasks, or even in mongo-source... I don't want it to fail so quickly, after just 1 minute... :/
There isn't any way other than using the REST API to find a failed task and submit a restart request, and then running this on a periodic basis. For example:
curl -s "http://localhost:8083/connectors?expand=status" | \
jq -c -M 'map({name: .status.name } + {tasks: .status.tasks}) | .[] | {task: ((.tasks[]) + {name: .name})} | select(.task.state=="FAILED") | {name: .task.name, task_id: .task.id|tostring} | ("/connectors/"+ .name + "/tasks/" + .task_id + "/restart")' | \
xargs -I{connector_and_task} curl -v -X POST "http://localhost:8083"\{connector_and_task\}
Source: https://rmoff.net/2019/06/06/automatically-restarting-failed-kafka-connect-tasks/
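As an aside, newer Kafka Connect releases (3.0+) added a single-call variant of this via KIP-745 that restarts only a connector's failed tasks; the connector name below is hypothetical:
curl -X POST "http://localhost:8083/connectors/my-connector/restart?includeTasks=true&onlyFailed=true"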

Solaris svcs command shows wrong status

I have freshly installed an application on Solaris 5.10. When checked through ps -ef | grep hyperic | grep agent, the processes are up and running. When I check the status through the svcs hyperic-agent command, the output shows that the agent is in maintenance mode. The application is working fine and I don't have any issues with it. Please help.
There are several reasons that can lead to that behavior:
The starter (the start/exec method of the service) returned a status different from SMF_EXIT_OK (zero). Then you may check the logs:
# svcs -x ssh
...
See: /var/svc/log/network-ssh:default.log
If you check the logs, you may see messages like the following, which mean the starter script failed or is written incorrectly:
[ Aug 11 18:40:30 Method "start" exited with status 96 ]
Another reason for such behavior is that the service faults while it is working (i.e. one of its processes core-dumps or receives a kill signal, or all of its processes exit), as described here: https://blogs.oracle.com/lianep/entry/smf_5_fault_retry_models
The facility that actually provides SMF's monitoring is System Contracts. You may determine the contract ID of an online service with svcs -v (the CTID field):
# svcs -vp svc:/network/smtp:sendmail
STATE          NSTATE        STIME    CTID   FMRI
online         -             Apr_14   68     svc:/network/smtp:sendmail
                             Apr_14   1679   sendmail
                             Apr_14   1681   sendmail
Then watch events with ctwatch:
# ctwatch 68
CTID    EVID    CRIT    ACK    CTTYPE     SUMMARY
68      28      crit    no     process    contract empty
Then there are two options to handle that (see the sketch after this list):
There is a real problem with the service, so it eventually faults. Then debug the application.
It is normal behavior for the service, so you should edit and re-import your service manifest to make SMF less paranoid, i.e. configure the ignore_error and duration properties.
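A minimal sketch of the second option using svccfg directly, assuming the instance is simply named hyperic-agent (adjust the FMRI and the property values to your service):
# tell svc.startd to ignore core dumps and externally delivered kill signals
svccfg -s hyperic-agent setprop startd/ignore_error = astring: "core,signal"
# or mark the service transient so its processes are not tracked at all:
# svccfg -s hyperic-agent setprop startd/duration = astring: "transient"
svcadm refresh hyperic-agent
# clear the maintenance state so SMF re-evaluates the service
svcadm clear hyperic-agent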

Custom Munin plugin won't report

I've built my first Munin plugin to report the size of our Redis queue, but it won't report for some reason. Every other plugin on the node, including other Redis-centric plugins, works fine.
Here's the plugin code:
#!/bin/sh

case $1 in
    config)
        cat <<'EOM'
multigraph redis_queue_size
graph_title Redis Queue Size
graph_info The size of Redis queue
graph_category redis
graph_vlabel Messages
redisqueue.label redisqueue
redisqueue.type GAUGE
redisqueue.min 0
EOM
        exit 0;;
esac

queuelength=`redis-cli llen mykeyname`
printf "redisqueue.value "
echo $queuelength
The plugin is in /usr/share/munin/plugins/redis_queue_
The plugin is symlinked to /etc/munin/plugins/redis_queue_
I made sure to restart the service
$ sudo service munin-node force-reload
If I run sudo munin-run redis_queue_ I get the correct output:
redisqueue.value 1567595
If I run munin-node-config I get the following:
redis_queue_ | yes |
If I connect to the instance from the master using telnet to fetch the plugin, I get:
$ telnet 10.101.21.56 4949
Trying 10.101.21.56...
Connected to 10.101.21.56.
Escape character is '^]'.
# munin node at redis01.example.com
fetch redis_queue_
redisqueue.value 1035336
The master shows an empty graph for it, but the "last updated" time isn't increasing. I initially had the plugin configured a little differently (it wasn't producing good output), so all the values are -nan. Once I fixed the output, I expected the plugin to start working, but all efforts have failed.
Everything looks right, but yet still no values in the graph.
Edit: Munin v1.4.6