Kafka: broker has no supported SASL mechanisms on some listener

I am trying to gradually enable ACLs on an existing cluster (Bitnami Helm chart 3.1.0), which is configured like this:
listeners=INTERNAL://:9093,CLIENT://:9092
listener.security.protocol.map=INTERNAL:PLAINTEXT,CLIENT:PLAINTEXT
advertised.listeners=CLIENT://$(MY_POD_NAME)-k8s.dev.host.com:4430,INTERNAL://$(MY_POD_NAME).message-broker-dev-kafka-headless.message-broker-dev.svc.cluster.local:9093
kafka-k8s.dev.host.com:4430 is internally forwarded to the CLIENT listener on port 9092.
For now, we are doing TLS termination on the load balancer, hence PLAINTEXT on the CLIENT listener while clients use security.protocol=SSL:
kafkacat -b kafka-k8s.dev.host.com:4430 -X security.protocol=SSL -L
The plan is to add two new listeners that require SASL authentication, migrate the clients to them, and then deprecate the existing listeners. The new configuration will look like this:
listeners=INTERNAL://:9093,CLIENT://:9092,SASL_INTERNAL://:9095,SASL_CLIENT://:9094
listener.security.protocol.map=INTERNAL:PLAINTEXT,CLIENT:PLAINTEXT,SASL_INTERNAL:SASL_PLAINTEXT,SASL_CLIENT:SASL_PLAINTEXT
advertised.listeners=CLIENT://$(MY_POD_NAME)-k8s.dev.host.com:4430,INTERNAL://$(MY_POD_NAME).message-broker-dev-kafka-headless.message-broker-dev.svc.cluster.local:9093,SASL_CLIENT://$(MY_POD_NAME)-sasl-k8s.dev.host.com:4430,SASL_INTERNAL://$(MY_POD_NAME).message-broker-dev-kafka-headless.message-broker-dev.svc.cluster.local:9095
allow.everyone.if.no.acl.found=true
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
sasl.enabled.mechanisms=PLAIN,SCRAM-SHA-256,SCRAM-SHA-512
sasl.mechanism.inter.broker.protocol=PLAIN
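For reference, here is roughly how the SCRAM-SHA-512 users and ACLs mentioned below were created (a sketch, not the exact commands; the user name, password and bootstrap endpoint are placeholders, run from inside a broker pod against the plaintext INTERNAL listener):
# Create SCRAM-SHA-512 credentials for a user
kafka-configs.sh --bootstrap-server localhost:9093 \
  --alter --add-config 'SCRAM-SHA-512=[password=secret]' \
  --entity-type users --entity-name demo-user
# Allow that user to consume the protected topic
kafka-acls.sh --bootstrap-server localhost:9093 \
  --add --allow-principal User:demo-user \
  --consumer --topic protected-topic-v1 --group '*'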
After creating some SCRAM-SHA-512 users and applying ACLs to the existing topics, everything works fine on the SASL_INTERNAL listener but not on SASL_CLIENT:
$ kafkacat -b message-broker-dev-kafka-headless.message-broker-dev:9095 -C -t protected-topic-v1 -X security.protocol=SASL_PLAINTEXT -X sasl.mechanisms=SCRAM-SHA-512 -X sasl.username=demo-user -X sasl.password=secret
{"userId":"1225"}
% Reached end of topic protected-topic-v1 [0] at offset 22
$ kafkacat -b kafka-sasl-k8s.dev.host.com:4430 -C -t protected-topic-v1 -X security.protocol=SASL_SSL -X sasl.mechanisms=SCRAM-SHA-512 -X sasl.username=demo-user -X sasl.password=secret
%3|1669825033.516|FAIL|rdkafka#consumer-1| [thrd:sasl_ssl://kafka-sasl-k8s.dev.host.com:4430/bootstrap]: sasl_ssl://kafka-sasl-k8s.dev.host.com:4430/bootstrap: SASL SCRAM-SHA-512 mechanism handshake failed: Broker: Request not valid in current SASL state: broker's supported mechanisms: (after 44ms in state AUTH_HANDSHAKE)
kafka-sasl-k8s.dev.host.com:4430 is internally forwarded to the SASL_CLIENT listener on port 9094 (again with TLS termination on the load balancer, hence SASL_SSL instead of SASL_PLAINTEXT).
At this point I'm not sure whether I missed a Kafka configuration or messed up a network configuration.
Thanks in advance.

Answering my own question: it was a network issue.
kafka-sasl-k8s.dev.host.com:4430 was sending traffic to port 9092 and not 9094 as expected.
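For anyone hitting the same symptom: a quick check (just a suggestion, untested here) is to request metadata through the endpoint without SASL; if it succeeds, the PLAINTEXT CLIENT listener on 9092 is the one answering:
# Should fail against a SASL listener, but succeeds if traffic lands on 9092
kafkacat -b kafka-sasl-k8s.dev.host.com:4430 -X security.protocol=SSL -L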

Related

Kafka connect mongoDB sink with TLS

I set up my MongoDB cluster with TLS authentication.
I can successfully connect to a mongos instance using:
/opt/cluster/stacks/mongoDB/bin/mongosh --tls --host $(hostname).domain.name --tlsCAFile /opt/cluster/security/ssl/cert.pem --port 27017
I have a Kafka Connect MongoDB sink with the following configuration:
{
  "name": "client-order-request-mongodb-sink",
  "config": {
    "connector.class": "com.mongodb.kafka.connect.MongoSinkConnector",
    "database": "Trading",
    "collection": "ClientOrderRequest",
    "topics": "ClientOrderRequest",
    "connection.uri": "mongodb://hostname1.domain.name:27017,pre-hostname2.domain.name:27017",
    "mongo.errors.tolerance": "all",
    "mongo.errors.log.enable": "true",
    "errors.log.include.messages": "true",
    "writemodel.strategy": "com.mongodb.kafka.connect.sink.writemodel.strategy.ReplaceOneBusinessKeyStrategy",
    "document.id.strategy": "com.mongodb.kafka.connect.sink.processor.id.strategy.PartialValueStrategy",
    "document.id.strategy.overwrite.existing": "true",
    "document.id.strategy.partial.value.projection.type": "allowlist",
    "document.id.strategy.partial.value.projection.list": "localReceiveTime,clientId,orderId"
  }
}
It works fine if I redeploy MongoDB without authentication, but when I try to create the connector with the following curl command:
curl -X POST -H "Content-Type: application/json" --data '@connect-task-sink-mongodb-client-order-request.json' $KAFKA_CONNECT_LEADER_NODE/connectors/
I get the following error:
{"error_code":400,"message":"Connector configuration is invalid and contains the following 1 error(s):\nUnable to connect to the server.\nYou can also find the above list of errors at the endpoint /connector-plugins/{connectorType}/config/validate"}
From the MongoDB Kafka Connect sink documentation I found that I needed to set the KAFKA_OPTS environment variable, so before starting the distributed Connect server I do:
export KAFKA_OPTS="\
-Djavax.net.ssl.trustStore=/opt/cluster/security/ssl/keystore.jks \
-Djavax.net.ssl.trustStorePassword=\"\" \
-Djavax.net.ssl.keyStore=/opt/cluster/security/ssl/keystore.jks \
-Djavax.net.ssl.keyStorePassword=\"\""
Notice that I put an empty password, because when I list the entries of my keystore with:
keytool -v -list -keystore key.jks
I just press Enter when the password is prompted.
So the issue was that the SSL connection wasn't enabled on the client side.
To enable it with the MongoDB Kafka Connect plugin, you need to state it in the connection.uri config parameter, for example:
"connection.uri":"mongodb://hostname1.domain.name:27017,pre-hostname2.domain.name:27017/?ssl=true"

Kafka authorization failed only on port 9092

I use the Confluent Kafka Docker image and have enabled authentication and authorization with the following config.
KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://:9092,SASL_SSL://:9093
=> 9093 SASL_SSL
=> 9092 PLAINTEXT
Here is a part of my config:
Container environment variables
- KAFKA_ALLOW_EVERYONE_IF_NO_ACL_FOUND=false
- KAFKA_SSL_CLIENT_AUTH=required
- KAFKA_SECURITY_INTER_BROKER_PROTOCOL=SASL_SSL
- KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL=PLAIN
- KAFKA_SASL_ENABLED_MECHANISMS=PLAIN
- KAFKA_AUTHORIZER_CLASS_NAME=kafka.security.authorizer.AclAuthorizer
- KAFKA_SUPER_USERS="User:admin"
- KAFKA_OPTS=-Djava.security.auth.login.config={{ kafka_secrets_dir }}/kafka_jaas.conf
kafka_jaas.conf
KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin"
    user_admin="admin"
    user_second_user="read_user";
};
Client {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin";
};
Configure consumer ACLs
bin/kafka-acls --authorizer-properties zookeeper.connect=my.host1:2181,host2:2181,host3:2181 --add --allow-principal User:second_user --consumer --topic '*' --group '*'
Configure producer ACLs
kafka-acls --authorizer-properties zookeeper.connect=my.host1:2181,host2:2181,host3:2181 --add --allow-principal User:second_user --producer --topic '*'
I want to use Kafka over both ports: 9093 with SSL encryption and 9092 without. I tested this with a simple console consumer/producer. Port 9093 works fine; I can consume and produce messages. The problem is that it does not work over port 9092: I always get an authorization error, TopicAuthorizationException: Not authorized to access topics: [test_topic]. I tested it with "second_user" and even with the super user "admin". Why does it only work with the secured port? Did I miss any config?
Console consumer via port 9093 (working)
#consumer.properties
ssl.endpoint.identification.algorithm=
ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
ssl.truststore.location=/home/vagrant/kafka-2.8.0/ssl/kafka.truststore.jks
ssl.truststore.password=changeme
ssl.protocol=TLS
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
username="admin" \
password="admin";
# create consumer => This is working!
/bin/kafka-console-consumer.sh --bootstrap-server host1:9093,host2:9093,host3:9093 --topic test_topic --from-beginning --consumer.config consumer.properties
Console consumer via port 9092 (not working)
#consumer.properties
security.protocol=PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
username="admin" \
password="admin";
#create consumer
kafka-console-consumer.sh --bootstrap-server host1:9092,host2:9092,host3:9092 --topic test_topic --from-beginning --consumer.config consumer.properties
=>TopicAuthorizationException: Not authorized to access topics: [test_topic]
I also tested it with Python and the confluent-kafka-python package (not working).
test.py
self.consumer = Consumer({
    'bootstrap.servers': "host1:9092,host2:9092,host3:9092",
    'group.id': f"test",
    'security.protocol': "PLAINTEXT",
    'sasl.mechanism': 'PLAIN',
    'sasl.username': 'admin',
    'sasl.password': "admin"
})
=> FindCoordinator response error: Group authorization failed
You did not enable authentication on port 9092, so connections on that listener are anonymous. In combination with KAFKA_ALLOW_EVERYONE_IF_NO_ACL_FOUND=false, the anonymous principal fails authorization. To fix it, change the listener to SASL_PLAINTEXT, which allows SASL authentication without TLS encryption:
PLAINTEXT://:9092 -> SASL_PLAINTEXT://:9092
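Concretely, that would look something like this (a sketch based only on the settings already shown in the question; everything else stays unchanged):
# Broker container environment: make port 9092 a SASL listener as well
- KAFKA_ADVERTISED_LISTENERS=SASL_PLAINTEXT://:9092,SASL_SSL://:9093
# consumer.properties for port 9092: keep the JAAS login, but the protocol
# must be SASL_PLAINTEXT instead of PLAINTEXT
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="admin" \
  password="admin";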

How to send authorization header (or) access token in avro-console-producer

We have an SSL-enabled Kafka broker, and Schema Registry access goes through Keycloak. From an external machine I am able to send data using kafka-console-producer; below are my configs.
ssl.properties:
security.protocol=SASL_SSL
ssl.truststore.location=truststore.jks
ssl.truststore.password=password
sasl.mechanism=PLAIN
jaas.conf:
KafkaClient {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="<user-name>"
    password="<password>";
};
export KAFKA_OPTS="-Djavax.net.ssl.trustStore=truststore.jks -Djavax.net.ssl.trustStorePassword=password -Djava.security.auth.login.config=jaas.conf"
./kafka-console-producer --bootstrap-server broker-url:<external_port> --topic sample.data --producer.config ssl.properties
Hi sample data sent
I am able to see the messages using a consumer.
Now, for the Schema Registry I need to get a token as shown below:
curl -k --data 'grant_type=password&client_id=schema-registry-client&username=username&password=password' https://<keycloakurl>/auth/realms/<namespace>/protocol/openid-connect/token
output:
{"access_token":"<access_token>","expires_in":600,"refresh_expires_in":1800,"refresh_token":"<refresh_token>","token_type":"bearer","not-before-policy":0,"session_state":"4117e69c-afe9-43ae-9756-90b151f0b536","scope":"profile email"}
curl -k -H "Authorization: Bearer <access_token>" https://<sc_url>/schemaregistry/subjects
output:
["test.data-value"]
The question is: how can I use the access_token in avro-console-producer? I don't see a way.
Based on the source code for the Schema Registry REST client, something like these properties should be what you want (untested):
bearer.auth.credentials.source="STATIC_TOKEN"
bearer.auth.token="your token"
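An untested sketch of how those properties could be passed to the Avro console producer via --property (the Schema Registry URL, token and value schema are placeholders):
./kafka-avro-console-producer --bootstrap-server broker-url:<external_port> \
  --topic sample.data --producer.config ssl.properties \
  --property schema.registry.url=https://<sc_url>/schemaregistry \
  --property bearer.auth.credentials.source=STATIC_TOKEN \
  --property bearer.auth.token=<access_token> \
  --property value.schema='{"type":"string"}'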

How to enable Server host name verification in Kafka 2?

This is my server.properties:
listeners=PLAINTEXT://localhost:9092,SSL://localhost:9093
ssl.client.auth=required
ssl.keystore.location=/home/xrobot/kafka_2.12-2.1.0/certificate/server.keystore.jks
ssl.keystore.password=ffffdd
ssl.key.password=ffffdd
ssl.truststore.location=/home/xrobot/kafka_2.12-2.1.0/certificate/server.truststore.jks
ssl.truststore.password=ffffdd
ssl.endpoint.identification.algorithm=
security.inter.broker.protocol=SSL
If I try to set ssl.endpoint.identification.algorithm=HTTPS, then I get this error:
[2019-02-26 19:04:00,011] ERROR [Controller id=0, targetBrokerId=0] Connection to node 0 (localhost/127.0.0.1:9092) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
So how can I enable server host name verification in Kafka 2?
You have set security.inter.broker.protocol to SSL. When brokers connect to each other (or to ZooKeeper) they act as clients, so your brokers need a truststore that contains either the public key of the CA used to sign the broker certificates or the public keys of all broker certificates. For the complete steps, take a look at Encryption and Authentication using SSL in the Kafka documentation.
I am sure you already know that in two-way SSL authentication each client has to have its own public and private key.
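To illustrate the truststore part (a sketch only; ca-cert is a placeholder for the CA certificate file, while the paths and password are taken from the question):
# Import the signing CA into the broker truststore so brokers, acting as clients
# to each other, can verify the certificates they are presented with
keytool -importcert -trustcacerts -alias CARoot -file ca-cert \
  -keystore /home/xrobot/kafka_2.12-2.1.0/certificate/server.truststore.jks \
  -storepass ffffdd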

HAProxy - basic authentication for backend server

I use the following configuration to access the internet through a local proxy listening on 127.0.0.1:2000:
global
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice
    #log loghost local0 info
    maxconn 4096
    #chroot /usr/share/haproxy
    user haproxy
    group haproxy
    daemon
    #debug
    #quiet

defaults
    log global
    mode http
    option httplog
    option dontlognull
    retries 3
    option redispatch
    maxconn 2000
    contimeout 5000
    clitimeout 50000
    srvtimeout 50000

listen appname 0.0.0.0:2000
    mode http
    stats enable
    acl white_list src 127.0.0.1
    tcp-request content accept if white_list
    tcp-request content reject
    stats uri /haproxy?stats
    stats realm Strictly\ Private
    stats auth special_admin:special_username
    balance roundrobin
    option httpclose
    option forwardfor
    server lamp1 23.123.1.110:3128 check
Unfortunately I need to authenticate to my external proxy 23.123.1.110 via HTTP basic authentication ("special_admin:special_username").
My question is: is there any way to use basic authentication, something like:
server lamp1 http://special_admin:special_username@23.123.1.110:3128 check
Thanks
In your example you only need to add the Authorization header, with the authorization method and the username:password pair encoded as base64, like this:
reqadd Authorization:\ Basic\ c3BlY2lhbF9hZG1pbjpzcGVjaWFsX3VzZXJuYW1l
I created the base64 encoded string like this:
echo -n "special_admin:special_username" | base64
For more details about HTTP Basic authorization see https://en.wikipedia.org/wiki/Basic_access_authentication#Client_side
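Note that reqadd is deprecated and has been removed in newer HAProxy releases; there the equivalent would be something like:
http-request set-header Authorization "Basic c3BlY2lhbF9hZG1pbjpzcGVjaWFsX3VzZXJuYW1l"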
The steps listed below worked for me.
# haproxy conf
global
    log 127.0.0.1 local1
    maxconn 4096

defaults
    mode http
    maxconn 2048

userlist AuthUsers
    user admin password $6$SydPP/et7BGN$C5VIhcn6OxuIaLPhCDCmzJyqDYQF8skik3J6sApkXPa6YPSVGutcgQPpdX/VEycGNi3sw7NxLSflEb53gzJtA1

frontend nginx-frontend
    bind *:5000
    mode http
    timeout connect 5s
    timeout client 5s
    timeout server 5s
    default_backend nginx-backend
    # For Path based basic authentication use this commented example
    #acl PATH_cart path_beg -i /testing
    #acl authusers_acl http_auth(AuthUsers)
    #http-request auth realm nginx-backend if PATH_cart !authusers_acl
    acl authusers_acl http_auth(AuthUsers)
    http-request auth realm nginx-backend if !authusers_acl

backend nginx-backend
    server nginx nginx:80 check inter 5s rise 2 fall 3
Install the package below to generate the hashed password:
sudo apt-get install whois
mkpasswd -m sha-512 'your_password'
mkpasswd -m sha-512 admin#456
Expected output:
$6$gnGNapo/XeXYg39A$T/7TDfMrZXUDPbv5UPYemrdxdh5xEwqBrzSbpJYs9rfxLbQtgQzxyzkSGWIVOEGze8KrsA0urh3/dG.1xOx3M0
Copy the generated password and paste it into the haproxy.cfg file.
# Deploy the containers to test the configuration
sudo docker run -d --name nginx nginx
sudo docker run -d -p 5000:5000 --name haproxy --link nginx:nginx -v /home/users/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg haproxy
Check in the browser; you will be prompted for the username and password.