Unable to connect to SASL Kafka broker using kaf

I am following this article https://gitlab.com/ShowMeYourCodeYouTube/kafka-producer-consumer/-/tree/master/
My docker-compose file is https://gitlab.com/ShowMeYourCodeYouTube/kafka-producer-consumer/-/blob/master/docker-compose-ssl.yml
The SERVER_JAAS.conf file used for authentication looks like this:
KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="kafkabroker"
    password="kafkabroker-secret"
    user_kafkabroker="kafkabroker-secret"
    user_client="client-secret";
};
Client {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    username="kafka"
    password="kafka-secret";
};
The output of docker-compose ps looks like this:
NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
kafka1-ssl confluentinc/cp-kafka:6.1.1 "/etc/confluent/dock…" kafka1-ssl 38 minutes ago Up 35 minutes 0.0.0.0:9093-9094->9093-9094/tcp, 9092/tcp, 0.0.0.0:29093-29094->29093-29094/tcp
zookeeper-ssl confluentinc/cp-zookeeper:6.1.1 "/etc/confluent/dock…" zookeeper1 39 minutes ago Up 35 minutes 2888/tcp, 0.0.0.0:2181->2181/tcp, 3888/tcp
My kaf CLI config reads:
- name: ssl_local
  version: ""
  brokers:
  - localhost:29094
  SASL:
    mechanism: SCRAM-SHA-512
    username: "kafkabroker"
    password: "kafkabroker-secret"
    clientID: ""
    clientSecret: ""
    tokenURL: ""
    token: ""
  TLS: null
  security-protocol: SASL_SSL
  schema-registry-url: ""
Now when I try to connect to the broker and list the topics, I get the following error:
>> kaf topic ls
Unable to get cluster admin: kafka: client has run out of available brokers to talk to: x509: certificate is not valid for any names, but wanted to match localhost
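The error is a TLS hostname-verification failure rather than a SASL problem: the broker certificate's Subject Alternative Names do not include localhost. A quick way to see which names the certificate actually covers is the sketch below (assuming openssl is installed; the port matches the brokers entry above):

openssl s_client -connect localhost:29094 </dev/null 2>/dev/null \
  | openssl x509 -noout -text \
  | grep -A1 "Subject Alternative Name"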

Related

No JAAS configuration section named 'Client' was found in specified JAAS configuration file

Can't deploy Kafka with SASL authentication
Here is my docker-compose.yml:
version: '3.1'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_INIT_LIMIT: 5
      ZOOKEEPER_SYNC_LIMIT: 2
      KAFKA_OPTS:
        -Djava.security.auth.login.config=/home/etozhekim/IdeaProjects/veles-core/zookeeper_server_jaas.conf
        -Dquorum.auth.enableSasl=true
        -Dquorum.cnxn.threads.size=20
        -Dzookeeper.authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
        -DjaasLoginRenew=3600000
        -DrequireClientAuthScheme=sasl
    volumes:
      - /home/etozhekim/IdeaProjects/veles-core/zookeeper_server_jaas.conf
    networks:
      - kafka-cluster-network
  kafka:
    image: confluentinc/cp-kafka
    ports:
      - "9092:9092"
    depends_on:
      - zookeeper
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_SASL_ENABLED_MECHANISMS: PLAIN
      KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: PLAIN
      KAFKA_SECURITY_INTER_BROKER_PROTOCOL: SASL_PLAINTEXT
      KAFKA_LISTENERS: SASL_PLAINTEXT://kafka:9092,SASL_PLAINTEXT://kafka:9092
      KAFKA_ADVERTISED_LISTENERS: SASL_PLAINTEXT://localhost:9092,SASL_PLAINTEXT://localhost:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_OPTS:
        -Dzookeeper.sasl.client=true
        -Djava.security.auth.login.config=/home/etozhekim/IdeaProjects/veles-core/kafka_server_jaas.conf
    volumes:
      - /home/etozhekim/IdeaProjects/veles-core/kafka_server_jaas.conf
ZooKeeper deploys without problems, but Kafka logs:
[2023-02-02 11:49:24,708] WARN SASL configuration failed. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it. (org.apache.zookeeper.ClientCnxn)
javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/home/etozhekim/IdeaProjects/veles-core/kafka_server_jaas.conf'
kafka_server_jaas.conf:
KafkaServer {
    org.apache.kafka.common.security.scram.ScramLoginModule required
    username="kafkabroker"
    password="password";
};
Client {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="password";
};
zookeeper_server_jaas.conf:
Server {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    user_admin="password";
};
The error is saying it wants a ZooKeeper client JAAS section, whereas you've only configured a Kafka client login in the broker's Client block.
Also, /home/etozhekim doesn't exist in the Confluent images.
Add a valid right-hand-side container mapping to the volume, such as :/tmp/jaas.conf,
then use -Djava.security.auth.login.config=/tmp/jaas.conf
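Putting both fixes together, a minimal sketch of the relevant pieces might look like this (the host path is kept from the question; the Client section switches to ZooKeeper's DigestLoginModule, with credentials matching the user_admin entry in the ZooKeeper Server section):

# docker-compose.yml (kafka service, relevant parts only)
kafka:
  environment:
    KAFKA_OPTS:
      -Dzookeeper.sasl.client=true
      -Djava.security.auth.login.config=/tmp/jaas.conf
  volumes:
    - /home/etozhekim/IdeaProjects/veles-core/kafka_server_jaas.conf:/tmp/jaas.conf

kafka_server_jaas.conf, ZooKeeper-facing Client section:
Client {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    username="admin"
    password="password";
};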

Spring config server renew vault token auth

I am using Spring config server with two backends, git and Vault (for secrets), and I have client apps that connect to the config server to get their remote configuration (git and Vault).
I have this configuration:
Config server:
server:
  port: 8888
spring:
  profiles:
    active: git, vault
  cloud:
    config:
      server:
        vault:
          host: hostName
          kvVersion: 1
          order: 1
          backend: secret/cad
          scheme: https
          port: 443
        git:
          order: 2
          uri: git@gitlab.git_repo
          ignoreLocalSshSettings: true
          force-pull: true
          deleteUntrackedBranches: true
          privateKey: key
And client side:
spring:
  application:
    name: my_app_name
  cloud:
    vault:
      config:
        uri: http://localhost:8888
        token: s.token
        fail-fast: true
With this setup I have to change the token for every client every day (the token expires after 24h). Is there a way to renew the token with this configuration, or is there another way to authenticate to Vault?
spring.cloud.vault:
  config.lifecycle:
    enabled: true
    min-renewal: 10s
    expiry-threshold: 1440m
    lease-endpoints: Legacy
1440 minutes = 24h
Reference: https://cloud.spring.io/spring-cloud-vault/reference/html/#vault-lease-renewal
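Note that lifecycle renewal can only work if the token itself is renewable. One common approach (a sketch; the policy name is a placeholder) is to issue a periodic token, which can be renewed indefinitely as long as each renewal happens within its period:

vault token create -policy=my-app-policy -period=24h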

Kafka listener is not working! It is isolated in the intranet

My Kafka node is hosted in Google Cloud Dataproc. However, we realized that the Kafka installed through the default initialization script is set up in such a way that it only allows intranet access; it is completely isolated from the outside world. A producer outside the Google Cloud network can't publish messages to Kafka, and Kafka messages can't reach their extranet subscribers.
Remark
I have whitelisted the producer IP
After reading through other StackOverflow posts, blog posts, and documentation, I think it could be due to the advertised.listeners part of the Socket Server Settings in /usr/lib/kafka/server.properties.
First solution
I added advertised.listeners=PLAINTEXT://[External_IP]:19092
and then ran sudo /etc/init.d/kafka-server restart
OUTCOME
However, when I try kafkacat or telnet, it always fails. I also tested advertised.listeners with various ports.
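For reference, the usual quick connectivity test is a metadata listing like the sketch below (substituting the broker's external IP):

kafkacat -b [External_IP]:19092 -L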
Second solution from https://rmoff.net/2018/08/02/kafka-listeners-explained/
############################# Server Basics #############################
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0
############################# Socket Server Settings #############################
# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
# FORMAT:
# listeners = listener_name://host_name:port
# EXAMPLE:
# listeners = PLAINTEXT://your.host.name:9092
#
# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
# I added the listener config below, according to https://rmoff.net/2018/08/02/kafka-listeners-explained/
listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:19092
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
advertised.listeners=EXTERNAL://[External_IP]:19092,INTERNAL://[Internal_IP]:9092
inter.broker.listener.name=INTERNAL
OUTCOME
It's the same result as above: not working.
Firewall Rules [Updated]
This is my current firewall rules config. Am I making a mistake?
Can anyone help me to resolve this?
Here is what worked for my cluster:
I've set the following properties from the second solution:
listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:19092
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
advertised.listeners=EXTERNAL://[External_IP]:19092,INTERNAL://[Internal_IP]:9092
inter.broker.listener.name=INTERNAL
I've created a firewall rule opening port 19092 to my personal development machine's IP and applied it to the network. From my machine, I tried to telnet to the Kafka server and got:
$ telnet [EXTERNAL-IP] 19092
Trying [EXTERNAL-IP]...
Connected to [EXTERNAL-IP].
Escape character is '^]'.
I then tried to use kafkacat and got an error. Running in debug, I saw the error was because I had not set up any topics:
%7|1578351264.551|METADATA|rdkafka#producer-1| [thrd:main]: [EXTERNAL-IP]:19092/bootstrap: ===== Received metadata: application requested =====
%7|1578351264.551|METADATA|rdkafka#producer-1| [thrd:main]: [EXTERNAL-IP]:19092/bootstrap: ClusterId: jYxfi6zzR0euAovYyKCFZg, ControllerId: -1
%7|1578351264.551|METADATA|rdkafka#producer-1| [thrd:main]: [EXTERNAL-IP]:19092/bootstrap: 0 brokers, 0 topics
%7|1578351264.551|METADATA|rdkafka#producer-1| [thrd:main]: [EXTERNAL-IP]:19092/bootstrap: No brokers or topics in metadata: should retry
%7|1578351264.551|REQERR|rdkafka#producer-1| [thrd:main]: [EXTERNAL-IP]:19092/bootstrap: MetadataRequest failed: Local: Partial response: explicit actions Retry
%7|1578351264.551|RETRY|rdkafka#producer-1| [thrd:[EXTERNAL-IP]:19092/bootstrap]: [EXTERNAL-IP]:19092/bootstrap: Retrying MetadataRequest (v2, 25 bytes, retry 1/2, prev CorrId 3) in 100ms
Note that I connected to the Kafka server from outside the cluster. In the question, telnet and kafkacat were run on the same machine as the Kafka server (kafka-tng-w-0).
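For completeness, a firewall rule of that shape could be created like this (a sketch; the rule name, network, and source range are placeholders):

gcloud compute firewall-rules create allow-kafka-external \
  --network=default \
  --allow=tcp:19092 \
  --source-ranges=[Your_Public_IP]/32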
Here is a sample docker-compose.yaml file.
version: '2'
services:
  zookeeper:
    image: strimzi/kafka:0.20.0-kafka-2.6.0
    command: [
      "sh", "-c",
      "bin/zookeeper-server-start.sh config/zookeeper.properties"
    ]
    ports:
      - "2181:2181"
    environment:
      LOG_DIR: /tmp/logs
  kafka:
    image: strimzi/kafka:0.20.0-kafka-2.6.0
    command: [
      "sh", "-c",
      "bin/kafka-server-start.sh config/server.properties --override listeners=$${KAFKA_LISTENERS} --override advertised.listeners=$${KAFKA_ADVERTISED_LISTENERS} --override zookeeper.connect=$${KAFKA_ZOOKEEPER_CONNECT}"
    ]
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      LOG_DIR: "/tmp/logs"
      # Dev GQ - Laptop
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://172.23.240.1:9092
      # AWS Pre-Prod
      #KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://11.122.200.229:9092
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
And here is a sample Quarkus application.properties file with the Kafka bootstrap server configured to match the advertised listener in docker-compose.yaml.
# Configure the SmallRye Kafka connector
# Dev GQ - Laptop
mp.messaging.connector.smallrye-kafka.bootstrap.servers=172.23.240.1:9092
# AWS Pre-Prod
#mp.messaging.connector.smallrye-kafka.bootstrap.servers=11.122.200.229:9092
quarkus.kafka.health.enabled=true
# Configure the Kafka sink (we write to it)
mp.messaging.outgoing.generated-price.connector=smallrye-kafka
mp.messaging.outgoing.generated-price.topic=prices
mp.messaging.outgoing.generated-price.value.serializer=org.apache.kafka.common.serialization.IntegerSerializer
# Configure the Kafka source (we read from it)
mp.messaging.incoming.prices.connector=smallrye-kafka
mp.messaging.incoming.prices.topic=prices
# ..... more codes
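Another answer suggested a docker-compose file based on the wurstmeister images, with separate INSIDE/OUTSIDE listeners: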
version: "3"
services:
zookeeper:
image: wurstmeister/zookeeper
ports:
- "2181:2181"
kafka:
image: wurstmeister/kafka
hostname: kafka
ports:
- "9093:9093"
- "9092:9092"
environment:
TZ: CST-8
KAFKA_BROKER_ID: 3
KAFKA_ADVERTISED_HOST_NAME: kafka
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
KAFKA_LISTENERS: INSIDE://:9092,OUTSIDE://:9093
KAFKA_ADVERTISED_LISTENERS: INSIDE://kafka:9092,OUTSIDE://${Your_External_IP}:9093
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
volumes:
- /var/run/docker.sock:/var/run/docker.sock
links:
- zookeeper
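The INSIDE/OUTSIDE split here is the same pattern as the INTERNAL/EXTERNAL listeners above: clients inside the Docker network bootstrap against kafka:9092, while external clients connect to the advertised external IP on port 9093.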

Micronaut Kafka project - multiple consumers, each with a different bootstrap server and SSL certs

I am trying to set up a Micronaut project with multiple consumers, each with a different bootstrap server and SSL certs. I am not setting a global bootstrap server or certs. This does not work. Any suggestion is appreciated.
Another option would be to combine the certs into one JKS file and set a global bootstrap server and SSL configuration.
kafka:
  consumers:
    group1:
      bootstrap:
        servers: $someserver1
      ssl:
        keystore:
          location: /keystore.jks
          password: password
        truststore:
          location: /truststore.jks
          password: password
          type: PKCS12
      security:
        protocol: ssl
    group2:
      bootstrap:
        servers: $someserver2
      ssl:
        keystore:
          location: /keystore-1.jks
          password: password
        truststore:
          location: /truststore-1.jks
          password: password
          type: PKCS12
      security:
        protocol: ssl
The above config does work fine, but you need to disable the Kafka health check or provide combined SSL certs, otherwise the Micronaut Kafka health check fails. This is true with version 1.2.7 of Micronaut.
kafka:
  health.enabled: false
  consumers:
    group1:
      bootstrap:
        servers: $someserver1
      ssl:
        keystore:
          location: /keystore.jks
          password: password
        truststore:
          location: /truststore.jks
          password: password
          type: PKCS12
      security:
        protocol: ssl
    group2:
      bootstrap:
        servers: $someserver2
      ssl:
        keystore:
          location: /keystore-1.jks
          password: password
        truststore:
          location: /truststore-1.jks
          password: password
          type: PKCS12
      security:
        protocol: ssl
The issue is that the consumer classes are not annotated quite correctly. Here is what worked for me:
In application.yml (note that both servers use the same SSL certs):
kafka:
  consumers:
    group1:
      topic: some-topic-1
      bootstrap:
        servers: server-1:9092
    group2:
      topic: some-topic-2
      bootstrap:
        servers: server-2:9092
  ssl:
    keystore:
      location: /keystore.jks
      password: "password1"
    truststore:
      location: /truststore.jks
      password: "password2"
  security:
    protocol: ssl
Then, annotate your classes to set consumer-specific properties. Example in Kotlin:
import io.micronaut.configuration.kafka.annotation.KafkaListener
import io.micronaut.configuration.kafka.annotation.Topic
import io.micronaut.context.annotation.Property
import io.micronaut.messaging.Acknowledgement
import org.apache.kafka.clients.consumer.ConsumerConfig
import org.apache.kafka.clients.consumer.ConsumerRecord

@KafkaListener(
    properties = [Property(name = ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, value = "\${kafka.consumers.group1.bootstrap.servers}")],
)
class TestConsumer1 {
    @Topic("\${kafka.consumers.group1.topic}")
    fun receiveMessage(record: ConsumerRecord<String, String>, acknowledgement: Acknowledgement) {
        TODO("Handle events from server-1:9092 and some-topic-1 :)")
    }
}
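A second consumer for group2 would then follow the same pattern (a sketch mirroring the class above):

@KafkaListener(
    properties = [Property(name = ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, value = "\${kafka.consumers.group2.bootstrap.servers}")],
)
class TestConsumer2 {
    @Topic("\${kafka.consumers.group2.topic}")
    fun receiveMessage(record: ConsumerRecord<String, String>, acknowledgement: Acknowledgement) {
        TODO("Handle events from server-2:9092 and some-topic-2")
    }
}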
More information on @KafkaListener and its properties can be found here: https://micronaut-projects.github.io/micronaut-kafka/latest/guide/#kafkaListenerConfiguration

How to pass a JAAS configuration to Kafka via env variables in Kubernetes

I am trying to authenticate my Kafka REST Proxy with SASL, but I am having trouble transferring the configs from my local docker-compose to Kubernetes.
I am using JAAS configuration to achieve this.
My JAAS file looks like this:
KafkaClient {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="rest"
    password="rest-secret";
};
Client {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    username="rest"
    password="restsecret";
};
and then in my docker-compose I have:
KAFKA_OPTS: -Djava.security.auth.login.config=/etc/kafka/secrets/rest_jaas.conf
How will I transfer this same logic to Kubernetes?
I have tried passing the env variable like this:
env:
  - name: KAFKA_OPTS
    value: |
      KafkaClient {
        org.apache.kafka.common.security.plain.PlainLoginModule required
        username="rest"
        password="rest-secret";
      };
      Client {
        org.apache.zookeeper.server.auth.DigestLoginModule required
        username="rest"
        password="rest-secret";
      };
but it still fails. Here is what my logs say:
Error: Could not find or load main class KafkaClient
/bin/sh: 3: org.apache.kafka.common.security.plain.PlainLoginModule: not found
/bin/sh: 6: Syntax error: "}" unexpected
Your help will be highly appreciated.
Save your Kafka JAAS config file as rest_jaas.conf. Then execute:
kubectl create secret generic kafka-secret --from-file=rest_jaas.conf
Then, in your deployment, insert:
env:
  - name: KAFKA_OPTS
    value: -Djava.security.auth.login.config=/etc/kafka/secrets/rest_jaas.conf
volumeMounts:
  # mountPath includes the filename because subPath mounts a single file
  - name: kafka-secret
    mountPath: /etc/kafka/secrets/rest_jaas.conf
    subPath: rest_jaas.conf
volumes:
  - name: kafka-secret
    secret:
      secretName: kafka-secret
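The original env-variable attempt fails because KAFKA_OPTS ends up on the JVM command line, so it can only carry flags such as -D options. Putting the JAAS text itself into the variable makes the shell and JVM try to interpret it as a command, which is exactly what the "Could not find or load main class KafkaClient" error shows.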