Mosquitto and IBM Watson IoT on Raspberry Pi: Connection Refused

I am new to IoT. I just installed Mosquitto on my Raspberry Pi and registered my devices and gateway in the Watson IoT Platform using this tutorial: https://developer.ibm.com/recipes/tutorials/using-mosquitto-as-a-gateway-for-watson-iot/
Mosquitto works fine in local mode. However, I am facing a problem when subscribing/publishing with these commands:
mosquitto_sub -d -h pxci52.messaging.internetofthings.ibmcloud.com -i 'g:pxci52:myfstream:gateway' -t iot-2/type/myfstream/id/gateway/evt/status/fmt/raw
and
sudo mosquitto_pub -d -h pxci52.messaging.internetofthings.ibmcloud.com -i 'g:pxci52:myfstream:gateway' -t iot-2/type/myfstream/id/gateway/evt/status/fmt/raw -m "hello"
Here is my conf file:
pid_file /var/run/mosquitto.pid
persistence true
persistence_location /var/lib/mosquitto/
log_dest topic
log_type error
log_type warning
log_type notice
log_type information
connection_messages true
log_timestamp true
include_dir /etc/mosquitto/conf.d
connection bridge-to-watsoniot
address pxci52.messaging.internetofthings.ibmcloud.com:1883
cleansession true
try_private false
bridge_attempt_unsubscribe false
notifications false
notification_topic iot-2/type/myfstream/id/gateway/evt/status/fmt/raw
remote_username token
remote_password xxxxxx
remote_clientid g:pxci52:myfstream:gateway
notifications true
topic iot-2/type/+/id/+/cmd/+/fmt/+ in iot-2/type/+/id/+/cmd/+/fmt/+
topic iot-2/type/+/id/+/evt/+/fmt/+ out iot-2/type/+/id/+/evt/+/fmt/+
connection_messages true

The log shows:
Invalid userID (token) for device auth: ClientID='g:pqci52:myfstream:gateway'
Instead of just "token", try "use-token-auth" as the remote_username. That is what is specified in the recipe example you were following.
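For reference, a minimal sketch of the corrected credential lines in the bridge section (org ID, device type, gateway ID, and token are the placeholders from the question; only remote_username changes):
# Watson IoT expects the literal string "use-token-auth" as the bridge username
remote_username use-token-auth
remote_password xxxxxx
remote_clientid g:pxci52:myfstream:gateway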

Kafka : broker has no supported SASL mechanisms on some listener

I am trying to gradually enable ACLs on an existing cluster (3.1.0 Bitnami Helm chart) which is configured like this:
listeners=INTERNAL://:9093,CLIENT://:9092
listener.security.protocol.map=INTERNAL:PLAINTEXT,CLIENT:PLAINTEXT
advertised.listeners=CLIENT://$(MY_POD_NAME)-k8s.dev.host.com:4430,INTERNAL://$(MY_POD_NAME).message-broker-dev-kafka-headless.message-broker-dev.svc.cluster.local:9093
The kafka-k8s.dev.host.com:4430 is internally forwarded to the CLIENT listener on 9092
For now, we are doing TLS termination on the LB, hence the PLAINTEXT on the CLIENT listener, while clients connect with security.protocol=SSL:
kafkacat -b kafka-k8s.dev.host.com:4430 -X security.protocol=SSL -L
The plan is to add two new listeners that will require SASL auth, migrate the clients to the new listeners, and deprecate the existing ones. The new configuration will look like this:
listeners=INTERNAL://:9093,CLIENT://:9092,SASL_INTERNAL://:9095,SASL_CLIENT://:9094
listener.security.protocol.map=INTERNAL:PLAINTEXT,CLIENT:PLAINTEXT,SASL_INTERNAL:SASL_PLAINTEXT,SASL_CLIENT:SASL_PLAINTEXT
advertised.listeners=CLIENT://$(MY_POD_NAME)-k8s.dev.host.com:4430,INTERNAL://$(MY_POD_NAME).message-broker-dev-kafka-headless.message-broker-dev.svc.cluster.local:9093,SASL_CLIENT://$(MY_POD_NAME)-sasl-k8s.dev.host.com:4430,SASL_INTERNAL://$(MY_POD_NAME).message-broker-dev-kafka-headless.message-broker-dev.svc.cluster.local:9095
allow.everyone.if.no.acl.found=true
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
sasl.enabled.mechanisms=PLAIN,SCRAM-SHA-256,SCRAM-SHA-512
sasl.mechanism.inter.broker.protocol=PLAIN
After creating some SCRAM-SHA-512 users and applying ACLs to existing topics, everything is working fine on the SASL_INTERNAL listener but not on the SASL_CLIENT one:
$ kafkacat -b message-broker-dev-kafka-headless.message-broker-dev:9095 -C -t protected-topic-v1 -X security.protocol=SASL_PLAINTEXT -X sasl.mechanisms=SCRAM-SHA-512 -X sasl.username=demo-user -X sasl.password=secret
{"userId":"1225"}
% Reached end of topic protected-topic-v1 [0] at offset 22
$ kafkacat -b kafka-sasl-k8s.dev.host.com:4430 -C -t protected-topic-v1 -X security.protocol=SASL_SSL -X sasl.mechanisms=SCRAM-SHA-512 -X sasl.username=demo-user -X sasl.password=secret
%3|1669825033.516|FAIL|rdkafka#consumer-1| [thrd:sasl_ssl://kafka-sasl-k8s.dev.host.com:4430/bootstrap]: sasl_ssl://kafka-sasl-k8s.dev.host.com:4430/bootstrap: SASL SCRAM-SHA-512 mechanism handshake failed: Broker: Request not valid in current SASL state: broker's supported mechanisms: (after 44ms in state AUTH_HANDSHAKE)
kafka-sasl-k8s.dev.host.com:4430 is internally forwarded to the SASL_CLIENT listener on 9094 (again using TLS termination on the LB, hence SASL_SSL instead of SASL_PLAINTEXT).
For now, I'm not totally sure whether I missed a Kafka configuration or messed up a network configuration.
Thanks in advance.
Answering my own question: it was a network issue.
kafka-sasl-k8s.dev.host.com:4430 was sending traffic to 9092 and not 9094 as expected.
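If anyone else hits the same symptom, one way to spot the wrong forwarding is to compare the LB/Service port mapping with the listener ports; a rough sketch assuming the brokers sit behind a Kubernetes Service (the Service name here is hypothetical):
# print name -> targetPort for each port of the externally exposed Service
kubectl -n message-broker-dev get svc kafka-sasl-external \
  -o jsonpath='{range .spec.ports[*]}{.name}{" -> "}{.targetPort}{"\n"}{end}'
# the entry backing kafka-sasl-k8s.dev.host.com:4430 should target 9094 (SASL_CLIENT), not 9092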

Kafka connect mongoDB sink with TLS

I set up my MongoDB cluster with TLS authentication.
I can successfully connect to a mongos instance using:
/opt/cluster/stacks/mongoDB/bin/mongosh --tls --host $(hostname).domain.name --tlsCAFile /opt/cluster/security/ssl/cert.pem --port 27017
I have a Kafka Connect MongoDB sink with the following configuration:
{
  "name": "client-order-request-mongodb-sink",
  "config": {
    "connector.class": "com.mongodb.kafka.connect.MongoSinkConnector",
    "database": "Trading",
    "collection": "ClientOrderRequest",
    "topics": "ClientOrderRequest",
    "connection.uri": "mongodb://hostname1.domain.name:27017,pre-hostname2.domain.name:27017",
    "mongo.errors.tolerance": "all",
    "mongo.errors.log.enable": "true",
    "errors.log.include.messages": "true",
    "writemodel.strategy": "com.mongodb.kafka.connect.sink.writemodel.strategy.ReplaceOneBusinessKeyStrategy",
    "document.id.strategy": "com.mongodb.kafka.connect.sink.processor.id.strategy.PartialValueStrategy",
    "document.id.strategy.overwrite.existing": "true",
    "document.id.strategy.partial.value.projection.type": "allowlist",
    "document.id.strategy.partial.value.projection.list": "localReceiveTime,clientId,orderId"
  }
}
It works fine if I redeploy MongoDB without authentication, but now, when I try to instantiate it with the following curl command:
curl -X POST -H "Content-Type: application/json" --data '@connect-task-sink-mongodb-client-order-request.json' $KAFKA_CONNECT_LEADER_NODE/connectors/
I have the following error:
{"error_code":400,"message":"Connector configuration is invalid and contains the following 1 error(s):\nUnable to connect to the server.\nYou can also find the above list of errors at the endpoint /connector-plugins/{connectorType}/config/validate"}
From the MongoDB Kafka Connect sink documentation I found that I needed to set the KAFKA_OPTS environment variable, so before starting the distributed Connect server I do:
export KAFKA_OPTS="\
-Djavax.net.ssl.trustStore=/opt/cluster/security/ssl/keystore.jks \
-Djavax.net.ssl.trustStorePassword=\"\" \
-Djavax.net.ssl.keyStore=/opt/cluster/security/ssl/keystore.jks \
-Djavax.net.ssl.keyStorePassword=\"\""
Notice that I put an empty password, because when I list the entries of my keystore with:
keytool -v -list -keystore key.jks
I just press Enter when the password is prompted.
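As a side note, if the truststore does not exist yet, it can be created from the same CA certificate used with mongosh; a sketch with keytool (the alias and store password are arbitrary, and keytool requires a password of at least six characters, so a truly empty one will not work):
# import the MongoDB CA certificate into a JKS truststore (the file is created if missing)
keytool -importcert -trustcacerts -noprompt \
  -alias mongoCA \
  -file /opt/cluster/security/ssl/cert.pem \
  -keystore /opt/cluster/security/ssl/keystore.jks \
  -storepass changeit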
So the issue was that the SSL connection wasn't enabled on the client side.
To enable it with the MongoDB Kafka Connect plugin, you need to state it in the connection.uri config parameter, for example:
"connection.uri":"mongodb://hostname1.domain.name:27017,pre-hostname2.domain.name:27017/?ssl=true"

I can't connect from the outside to the mongo-express

I am using mongo-express.
I installed it on AWS EC2 and started it:
$ node app
Mongo Express server listening on port 8081 at localhost
Database connected
Connecting to db...
Database db connected
However, it is not possible to connect from the browser to port 8081.
I can download the index.html of mongo-express with the wget command on the EC2 instance:
$ wget http://admin:pass@localhost:8081
--2016-02-22 02:22:25-- http://admin:*password*@localhost:8081/
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:8081... connected.
HTTP request sent, awaiting response... 401 Unauthorized
Authentication selected: Basic realm="Authorization Required"
Reusing existing connection to localhost:8081.
HTTP request sent, awaiting response... 200 OK
Length: 9319 (9.1K) [text/html]
Saving to: 'index.html'
index.html 0%[ ] 0 --.-KB/s GET / 200 218.468 ms - 9319
index.html 100%[===================>] 9.10K --.-KB/s in 0.04s
2016-02-22 02:22:26 (236 KB/s) - 'index.html' saved [9319/9319]
By the way, port 8081 in the EC2 security group is open to my IP.
The following setting in config.js was the cause:
site: {
// baseUrl: the URL that mongo express will be located at - Remember to add the forward slash at the start and end!
baseUrl: '/',
cookieKeyName: 'mongo-express',
cookieSecret: process.env.ME_CONFIG_SITE_COOKIESECRET || 'cookiesecret',
host: process.env.VCAP_APP_HOST || 'localhost',
port: process.env.VCAP_APP_PORT || 8081,
requestSizeLimit: process.env.ME_CONFIG_REQUEST_SIZE || '50mb',
sessionSecret: process.env.ME_CONFIG_SITE_SESSIONSECRET || 'sessionsecret',
sslCert: process.env.ME_CONFIG_SITE_SSL_CRT_PATH || '',
sslEnabled: process.env.ME_CONFIG_SITE_SSL_ENABLED || false,
sslKey: process.env.ME_CONFIG_SITE_SSL_KEY_PATH || '',
},
After changing the value of host to "0.0.0.0", I am now able to connect to mongo-express from the browser.
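If you start mongo-express with node directly, the same change can be made without editing config.js, since the config above already reads these environment variables; for example:
# bind mongo-express to all interfaces instead of localhost
VCAP_APP_HOST=0.0.0.0 VCAP_APP_PORT=8081 node app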
In my case, I got the issue because I wanted to expose my container on another port (4301),
but mongo-express was still listening on 8081.
To fix it, I had to specify VCAP_APP_HOST and VCAP_APP_PORT.
You can specify them directly on the run command like:
docker run --network YOUR_NETWORK --name YOUR_MONGO_EXPRESS_CONTAINER_NAME -e ME_CONFIG_MONGODB_SERVER=YOUR_MONGO_SERVER_IP -e VCAP_APP_HOST=0.0.0.0 -e VCAP_APP_PORT=4301 -p 4301:4301 mongo-express

HAProxy - basic authentication for backend server

I use the following configuration to access the internet through a local proxy on 127.0.0.1:2000:
global
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice
    #log loghost local0 info
    maxconn 4096
    #chroot /usr/share/haproxy
    user haproxy
    group haproxy
    daemon
    #debug
    #quiet

defaults
    log global
    mode http
    option httplog
    option dontlognull
    retries 3
    option redispatch
    maxconn 2000
    contimeout 5000
    clitimeout 50000
    srvtimeout 50000

listen appname 0.0.0.0:2000
    mode http
    stats enable
    acl white_list src 127.0.0.1
    tcp-request content accept if white_list
    tcp-request content reject
    stats uri /haproxy?stats
    stats realm Strictly\ Private
    stats auth special_admin:special_username
    balance roundrobin
    option httpclose
    option forwardfor
    server lamp1 23.123.1.110:3128 check
Unfortunately I need to authenticate to my external proxy 23.123.1.110 via HTTP basic authentication with "special_admin:special_username".
My question is: is there any way to use basic authentication like:
server lamp1 http://special_admin:special_username@23.123.1.110:3128 check
Thanks
In your example you only need to add the Authorization header, with the authorization method and the username:password encoded as Base64, like this:
reqadd Authorization:\ Basic\ c3BlY2lhbF9hZG1pbjpzcGVjaWFsX3VzZXJuYW1l
I created the base64 encoded string like this:
echo -n "special_admin:special_username" | base64
For more details about HTTP Basic authorization see https://en.wikipedia.org/wiki/Basic_access_authentication#Client_side
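Note that reqadd has been removed in recent HAProxy versions; on HAProxy 2.x the rough equivalent (same Base64 value as above) would be:
http-request set-header Authorization "Basic c3BlY2lhbF9hZG1pbjpzcGVjaWFsX3VzZXJuYW1l"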
The steps listed below worked for me.
# haproxy conf
global
    log 127.0.0.1 local1
    maxconn 4096

defaults
    mode http
    maxconn 2048

userlist AuthUsers
    user admin password $6$SydPP/et7BGN$C5VIhcn6OxuIaLPhCDCmzJyqDYQF8skik3J6sApkXPa6YPSVGutcgQPpdX/VEycGNi3sw7NxLSflEb53gzJtA1

frontend nginx-frontend
    bind *:5000
    mode http
    timeout connect 5s
    timeout client 5s
    timeout server 5s
    default_backend nginx-backend
    # For Path based basic authentication use this commented example
    #acl PATH_cart path_beg -i /testing
    #acl authusers_acl http_auth(AuthUsers)
    #http-request auth realm nginx-backend if PATH_cart !authusers_acl
    acl authusers_acl http_auth(AuthUsers)
    http-request auth realm nginx-backend if !authusers_acl

backend nginx-backend
    server nginx nginx:80 check inter 5s rise 2 fall 3
Install the package below to generate the hashed password:
sudo apt-get install whois
mkpasswd -m sha-512 'your_password'
mkpasswd -m sha-512 admin#456
Expected output:
$6$gnGNapo/XeXYg39A$T/7TDfMrZXUDPbv5UPYemrdxdh5xEwqBrzSbpJYs9rfxLbQtgQzxyzkSGWIVOEGze8KrsA0urh3/dG.1xOx3M0
Copy the generated hash and paste it into the haproxy.cfg file.
#Deploy the containers to test configuration
sudo docker run -d --name nginx nginx
sudo docker run -d -p 5000:5000 --name haproxy --link nginx:nginx -v /home/users/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg haproxy
Check in the browser; you will be prompted for the username and password.
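The same check can be scripted with curl against the frontend on port 5000 (using whatever password was passed to mkpasswd above):
# without credentials HAProxy should return 401; with them, the nginx default page
curl -i http://localhost:5000/
curl -i -u admin:your_password http://localhost:5000/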

FreeRadius - Failed binding to authentication address

When I run the following command, I get a successful result:
root@ubuntu:/home/can# radtest user password 127.0.0.1 1812 testing123
Sending Access-Request of id 78 to 127.0.0.1 port 1812
User-Name = "user"
User-Password = "password"
NAS-IP-Address = 127.0.1.1
NAS-Port = 1812
Message-Authenticator = 0x00000000000000000000000000000000
rad_recv: Access-Accept packet from host 127.0.0.1 port 1812, id=78, length=20
However, when I run "freeradius -X", I get the following error message:
.....
Failed binding to authentication address * port 1812: Address already in use
/etc/freeradius/radiusd.conf[250]: Error binding to port for 0.0.0.0 port 1812
Please help me.
Thank you for your efforts.
Can
radiusd is already running. sudo service freeradius stop will stop it, and allow freeradius -X to bind to the address/port that was previously used by the RADIUS daemon.
Run the 'service freeradius restart' and 'service freeradius stop' commands.
Then run the command 'freeradius -X'.
You will not face the binding issue anymore.
Even after stopping the service, there was a pending zombie process.
Searching for a zombie process, I found one:
[root@localhost sites-enabled]# ps aux | grep radi
radiusd 25042 0.0 0.7 186360 14980 ? Ssl Fev17 0:00 /usr/sbin/radiusd -d /etc/raddb
[root@localhost sites-enabled]# kill -9 25042
The service started successfully after this.
Basically, the port freeradius is trying to use is already in use by another instance of freeradius running in the background. Ending the first instance will allow the newly run instance to use that same port.
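To see which process is actually holding the port before stopping or killing it (RADIUS authentication uses UDP 1812), something like this works:
# show the process currently bound to UDP port 1812
sudo ss -ulpn 'sport = :1812'
# or, on older systems
sudo netstat -ulpn | grep 1812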