Getting secure Kafka and Schema Registry to talk to each other, as well as the Spring Boot configuration for it

Hey, I have been trying to solve this for about two weeks now.
Basically I want Kafka to be on SSL and Schema Registry on HTTPS. No Kerberos is to be used.
I have two Spring services, one a producer and the other a consumer (Avro).
Below is my current docker-compose. With it, when I send a request to my producer the application throws no errors and the request times out, but the Kafka logs show:
kafka_1 | [2019-12-03 09:53:27,454] INFO [SocketServer brokerId=1] Failed authentication with /172.18.0.1 (SSL handshake failed) (org.apache.kafka.common.network.Selector)
When I uncomment the commented-out lines in the docker-compose, I instead get "PKIX path building failed", along with another error saying the Avro value cannot be serialized.
zookeeper:
  image: confluentinc/cp-zookeeper:5.3.0
  ports:
    - 2181:2181
  environment:
    ZOOKEEPER_CLIENT_PORT: "2181"
    ZOOKEEPER_TICK_TIME: "2000"
kafka:
  image: confluentinc/cp-kafka:5.3.0
  ports:
    - 29094:29094
  environment:
    KAFKA_BROKER_ID: 1
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,SSL:SSL
    KAFKA_SECURITY_PROTOCOL: SSL
    KAFKA_INTER_BROKER_PROTOCOL: SSL
    KAFKA_INTER_BROKER_LISTENER_NAME: SSL
    KAFKA_LISTENERS: SSL://kafka:29094,PLAINTEXT://kafka:9092
    KAFKA_ADVERTISED_LISTENERS: SSL://kafka:29094,PLAINTEXT://kafka:9092
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
    KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    # KAFKA_SSL_CLIENT_AUTH: required
    KAFKA_SSL_KEYSTORE_FILENAME: kafka.server.keystore.jks
    KAFKA_SSL_TRUSTSTORE_FILENAME: kafka.server.truststore.jks
    KAFKA_SSL_KEY_CREDENTIALS: key_credential
    KAFKA_SSL_KEYSTORE_CREDENTIALS: key_credential
    KAFKA_SSL_KEYSTORE_LOCATION: /etc/kafka/secrets/kafka.server.keystore.jks
    KAFKA_SSL_KEYSTORE_PASSWORD: PASSWORD
    KAFKA_SSL_KEY_PASSWORD: PASSWORD
    KAFKA_SSL_TRUSTSTORE_LOCATION: /etc/kafka/secrets/kafka.server.truststore.jks
    KAFKA_SSL_TRUSTSTORE_PASSWORD: PASSWORD
    KAFKA_SSL_TRUSTSTORE_CREDENTIALS: key_credential
    KAFKA_HEAP_OPTS: -Xmx456M
    KAFKA_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM: ""
    # KAFKA_AUTHORIZER_CLASS_NAME: kafka.security.auth.SimpleAclAuthorizer
    KAFKA_ALLOW_EVERYONE_IF_NO_ACL_FOUND: "true"
    KAFKA_SUPER_USERS: User:CN=Kafka-domain
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - /home/xubu/Documents:/etc/kafka/secrets
schema-registry:
  image: confluentinc/cp-schema-registry:5.3.0
  depends_on:
    - zookeeper
    - kafka
  ports:
    - 8181:8181
    - 8085:8085
    - 8086:8086
  environment:
    SCHEMA_REGISTRY_HOST_NAME: schema-registry
    SCHEMA_REGISTRY_LISTENERS: http://schema-registry:8085, https://schema-registry:8086
    SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: zookeeper:2181
    SCHEMA_REGISTRY_KAFKASTORE_SECURITY_PROTOCOL: SSL
    SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: SSL://kafka:29094
    SCHEMA_REGISTRY_KAFKASTORE_SSL_TRUSTSTORE_LOCATION: /etc/kafka/client/kafka.client.truststore.jks
    SCHEMA_REGISTRY_KAFKASTORE_SSL_TRUSTSTORE_PASSWORD: PASSWORD
    SCHEMA_REGISTRY_KAFKASTORE_SSL_KEYSTORE_LOCATION: /etc/kafka/client/kafka.client.keystore.jks
    SCHEMA_REGISTRY_KAFKASTORE_SSL_KEYSTORE_PASSWORD: PASSWORD
    SCHEMA_REGISTRY_KAFKASTORE_SSL_KEY_PASSWORD: PASSWORD
    SCHEMA_REGISTRY_KAFKASTORE_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM: ""
    SCHEMA_REGISTRY_SSL_TRUSTSTORE_LOCATION: /etc/kafka/client/kafka.client.truststore.jks
    SCHEMA_REGISTRY_SSL_TRUSTSTORE_PASSWORD: PASSWORD
    SCHEMA_REGISTRY_SSL_KEYSTORE_LOCATION: /etc/kafka/client/kafka.client.keystore.jks
    SCHEMA_REGISTRY_SSL_KEYSTORE_PASSWORD: PASSWORD
    SCHEMA_REGISTRY_SSL_KEY_PASSWORD: PASSWORD
    SCHEMA_REGISTRY_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM: ""
    SCHEMA_REGISTRY_SECURITY_PROTOCOL: SSL
    SCHEMA_REGISTRY_INTER_INSTANCE_PROTOCOL: "https"
    # SCHEMA_REGISTRY_SSL_CLIENT_AUTH: 'true'
  volumes:
    - /home/xubu/Documents:/etc/kafka/client
    - /home/xubu/Documents:/etc/kafka/consumer
Below is part of my Spring Boot application.yaml:
spring:
  kafka:
    bootstrap-servers: kafka:29094
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
      ssl:
        key-store-location: /home/xubu/Documents/kafka.client.keystore.jks
        key-password: PASSWORD
        key-store-password: PASSWORD
        trust-store-location: /home/xubu/Documents/kafka.client.truststore.jks
        trust-store-password: PASSWORD
        protocol: SSL
    properties:
      value:
        subject:
          name:
            strategy: io.confluent.kafka.serializers.subject.RecordNameStrategy
      value-serializer: io.confluent.kafka.serializers.KafkaAvroSerializer
      ssl.endpoint.identification.algorithm: https
      schema.registry.url: https://schema-registry:8086
    ssl:
      trust-store-location: /home/xubu/Documents/kafka.client.truststore.jks
      trust-store-password: PASSWORD
      key-store-location: /home/xubu/Documents/kafka.client.keystore.jks
      key-store-password: PASSWORD
      key-password: PASSWORD
      protocol: SSL
      key-store-type: jks
      trust-store-type: jks
This has been my pain for the last two or so weeks, and it's a prelude to trying out Access Control Lists on Schema Registry.
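For what it's worth, one likely culprit: the spring.kafka.ssl.* settings secure only the connection to the broker, not the serializer's HTTPS calls to schema.registry.url, and an untrusted registry certificate is the classic source of "PKIX path building failed". Recent Confluent serializers accept schema.registry.ssl.* client overrides; a minimal sketch, assuming a serializer version that supports them (paths and passwords taken from the question):
spring:
  kafka:
    properties:
      schema.registry.url: https://schema-registry:8086
      # trust store containing the CA that signed the registry's HTTPS certificate
      schema.registry.ssl.truststore.location: /home/xubu/Documents/kafka.client.truststore.jks
      schema.registry.ssl.truststore.password: PASSWORD
Older registry clients fall back to the default JVM trust store, in which case passing -Djavax.net.ssl.trustStore and -Djavax.net.ssl.trustStorePassword to the application is the usual workaround. The broker-side handshake failure is worth checking separately: the producer sets ssl.endpoint.identification.algorithm: https, so the broker certificate must carry "kafka" as its CN or a SAN.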

Related

No JAAS configuration section named 'Client' was found in specified JAAS configuration file

Can't deploy Kafka with SASL authentication
Here is my docker-compose.yml
version: '3.1'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_INIT_LIMIT: 5
      ZOOKEEPER_SYNC_LIMIT: 2
      KAFKA_OPTS:
        -Djava.security.auth.login.config=/home/etozhekim/IdeaProjects/veles-core/zookeeper_server_jaas.conf
        -Dquorum.auth.enableSasl=true
        -Dquorum.cnxn.threads.size=20
        -Dzookeeper.authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
        -DjaasLoginRenew=3600000
        -DrequireClientAuthScheme=sasl
    volumes:
      - /home/etozhekim/IdeaProjects/veles-core/zookeeper_server_jaas.conf
    networks:
      - kafka-cluster-network
  kafka:
    image: confluentinc/cp-kafka
    ports:
      - "9092:9092"
    depends_on:
      - zookeeper
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_SASL_ENABLED_MECHANISMS: PLAIN
      KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: PLAIN
      KAFKA_SECURITY_INTER_BROKER_PROTOCOL: SASL_PLAINTEXT
      KAFKA_LISTENERS: SASL_PLAINTEXT://kafka:9092,SASL_PLAINTEXT://kafka:9092
      KAFKA_ADVERTISED_LISTENERS: SASL_PLAINTEXT://localhost:9092,SASL_PLAINTEXT://localhost:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_OPTS:
        -Dzookeeper.sasl.client=true
        -Djava.security.auth.login.config=/home/etozhekim/IdeaProjects/veles-core/kafka_server_jaas.conf
    volumes:
      - /home/etozhekim/IdeaProjects/veles-core/kafka_server_jaas.conf
Zookeeper is deployed without problems. But Kafka logs:
[2023-02-02 11:49:24,708] WARN SASL configuration failed. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it. (org.apache.zookeeper.ClientCnxn)
javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/home/etozhekim/IdeaProjects/veles-core/kafka_server_jaas.conf'
kafka_server_jaas.conf
KafkaServer {
  org.apache.kafka.common.security.scram.ScramLoginModule required
  username="kafkabroker"
  password="password";
};
Client {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="admin"
  password="password";
};
zookeeper_server_jaas.conf
Server {
  org.apache.zookeeper.server.auth.DigestLoginModule required
  user_admin="password";
};
The error is saying it wants a ZooKeeper 'Client' JAAS section, whereas the file the broker actually loads doesn't contain one.
Also, /home/etozhekim doesn't exist in Confluent images, and your volume entries have no container-side path, so the JAAS files are never mounted where the brokers look.
Add a valid right-hand-side container mapping to the volume, such as :/tmp/jaas.conf
Then use -Djava.security.auth.login.config=/tmp/jaas.conf
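A minimal sketch of the corrected broker service, assuming the JAAS file above; the container-side path /tmp/kafka_jaas.conf is illustrative:
kafka:
  volumes:
    # mount the file to a path that actually exists inside the container
    - /home/etozhekim/IdeaProjects/veles-core/kafka_server_jaas.conf:/tmp/kafka_jaas.conf
  environment:
    KAFKA_OPTS:
      -Dzookeeper.sasl.client=true
      -Djava.security.auth.login.config=/tmp/kafka_jaas.conf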

ksqldb not failing over to standby schema registry

I am trying to test a failover scenario for the Kafka Schema Registry.
I spun up two Schema Registry Docker containers (primary and standby), and I have a ksqlDB server running in a Docker container pointing to the primary Schema Registry. A source Kafka connector streams data from the database into Kafka topics, and the ksqlDB server validates the schema of each Kafka message against the primary Schema Registry. Now I shut down the primary Schema Registry. The ksqlDB server does not fail over to the standby Schema Registry to validate schemas, so it stops receiving data from the Kafka topics.
How should the ksqlDB server know which standby Schema Registry to connect to when the primary is down?
Below is the docker-compose.yml file I used:
schema-registry:
  image: confluentinc/cp-schema-registry:${CP_VERSION}
  depends_on:
    - zookeeper
    - kafka
  ports:
    - "8081:8081"
  container_name: schema-registry
  environment:
    SCHEMA_REGISTRY_HOST_NAME: schema-registry
    SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: PLAINTEXT://kafka:9092
    SCHEMA_REGISTRY_ACCESS_CONTROL_ALLOW_ORIGIN: '*'
    SCHEMA_REGISTRY_ACCESS_CONTROL_ALLOW_METHODS: 'GET,POST,PUT,OPTIONS'
    SCHEMA_REGISTRY_LEADER_ELIGIBILITY: "true"
    SCHEMA_REGISTRY_GROUP_ID: "schema-registry-group"
    SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8081
schema-registry-2:
  image: confluentinc/cp-schema-registry:${CP_VERSION}
  depends_on:
    - kafka
    - schema-registry
  ports:
    - "8082:8082"
  container_name: schema-registry-2
  environment:
    SCHEMA_REGISTRY_HOST_NAME: schema-registry-2
    SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: PLAINTEXT://kafka:9092
    SCHEMA_REGISTRY_ACCESS_CONTROL_ALLOW_ORIGIN: '*'
    SCHEMA_REGISTRY_ACCESS_CONTROL_ALLOW_METHODS: 'GET,POST,PUT,OPTIONS'
    SCHEMA_REGISTRY_LEADER_ELIGIBILITY: "true"
    SCHEMA_REGISTRY_GROUP_ID: "schema-registry-group"
    SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8082
primary-ksqldb-server:
  image: ${KSQL_IMAGE_BASE}confluentinc/ksqldb-server:${KSQL_VERSION}
  hostname: primary-ksqldb-server
  container_name: primary-ksqldb-server
  depends_on:
    - kafka
    - schema-registry
  ports:
    - "8088:8088"
  environment:
    KSQL_CONFIG_DIR: "/etc/ksql"
    KSQL_LISTENERS: http://0.0.0.0:8088
    KSQL_BOOTSTRAP_SERVERS: kafka:9092
    KSQL_KSQL_ADVERTISED_LISTENER: http://localhost:8088
    KSQL_KSQL_SCHEMA_REGISTRY_URL: http://schema-registry:8081
    KSQL_KSQL_LOGGING_PROCESSING_STREAM_AUTO_CREATE: "true"
    KSQL_KSQL_LOGGING_PROCESSING_TOPIC_AUTO_CREATE: "true"
    KSQL_KSQL_EXTENSION_DIR: "/usr/ksqldb/ext/"
    KSQL_KSQL_SERVICE_ID: "nrt_"
    KSQL_KSQL_STREAMS_NUM_STANDBY_REPLICAS: 1
    KSQL_KSQL_QUERY_PULL_ENABLE_STANDBY_READS: "true"
    KSQL_KSQL_HEARTBEAT_ENABLE: "true"
    KSQL_KSQL_LAG_REPORTING_ENABLE: "true"
    KSQL_KSQL_QUERY_PULL_MAX_ALLOWED_OFFSET_LAG: 100
    KSQL_LOG4J_APPENDER_KAFKA_APPENDER: "org.apache.kafka.log4jappender.KafkaLog4jAppender"
    KSQL_LOG4J_APPENDER_KAFKA_APPENDER_LAYOUT: "io.confluent.common.logging.log4j.StructuredJsonLayout"
    KSQL_LOG4J_APPENDER_KAFKA_APPENDER_BROKERLIST: localhost:9092
    KSQL_LOG4J_APPENDER_KAFKA_APPENDER_TOPIC: KSQL_LOG
    KSQL_LOG4J_LOGGER_IO_CONFLUENT_KSQL: INFO,kafka_appender
    KSQL_KSQL_QUERY_PULL_METRICS_ENABLED: "true"
    KSQL_JMX_OPTS: >
      -Djava.rmi.server.hostname=localhost
      -Dcom.sun.management.jmxremote
      -Dcom.sun.management.jmxremote.port=1099
      -Dcom.sun.management.jmxremote.authenticate=false
      -Dcom.sun.management.jmxremote.ssl=false
      -Dcom.sun.management.jmxremote.rmi.port=1099
When I stop the primary Schema Registry, ksqlDB is supposed to connect to the standby Schema Registry.
How would it know the other is available if you don't provide it?
KSQL_KSQL_SCHEMA_REGISTRY_URL: http://schema-registry:8081,http://schema-registry-2:8082
In other words, you shut down the schema-registry container, so it simply won't respond. It will not forward requests or redirect clients to another server. You need to provide a URL list, or set up an external reverse proxy that round-robins requests to the active instance.
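In compose terms, that could look like this (only the changed setting shown):
primary-ksqldb-server:
  environment:
    # a comma-separated URL list lets the client try the next instance when one is down
    KSQL_KSQL_SCHEMA_REGISTRY_URL: http://schema-registry:8081,http://schema-registry-2:8082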

Unable to connect to Kafka(in docker) from Internet

I am attempting to make it so I can connect to Kafka over the internet. I have opened ports 9092 and 2181 on my router.
I have had no luck at all! I am using Offset Explorer, and I am able to ping the Kafka host from another network. The IP of the system it is running on is 10.0.1.104.
I AM able to connect to Kafka on the local network from another computer, though.
Here is my Kafka docker-compose:
version: '3.7'
services:
  zookeeper:
    image: wurstmeister/zookeeper:3.4.6
    environment:
      JVMFLAGS: "-Djava.security.auth.login.config=/etc/zookeeper/zookeeper_jaas.conf"
    volumes:
      - ./zookeeper_jaas.conf:/etc/zookeeper/zookeeper_jaas.conf
    ports:
      - 2181:2181
  kafka:
    image: wurstmeister/kafka:2.13-2.8.1
    depends_on:
      - zookeeper
    ports:
      - 9092:9092
      - 29092:29092
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENERS: INTERNAL://:9093,EXTERNAL://:9092
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka:9093,EXTERNAL://10.0.1.104:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:SASL_PLAINTEXT,EXTERNAL:SASL_PLAINTEXT
      ALLOW_PLAINTEXT_LISTENER: 'yes'
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'true'
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_SASL_ENABLED_MECHANISMS: PLAIN
      KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: PLAIN
      KAFKA_OPTS: "-Djava.security.auth.login.config=/etc/kafka/kafka_jaas.conf"
    volumes:
      - ./kafka_server_jaas.conf:/etc/kafka/kafka_jaas.conf
Connection attempts result in this on Kafka's output
Thank you very much!
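For reference, the advertised listener must be an address a remote client can dial back after the bootstrap connection, and 10.0.1.104 is only routable on the LAN. A sketch, assuming the router forwards TCP 9092 to this machine; <public-ip-or-dns> is a placeholder for the WAN address:
kafka:
  environment:
    KAFKA_LISTENERS: INTERNAL://:9093,EXTERNAL://:9092
    # internet clients reconnect to whatever address is advertised here
    KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka:9093,EXTERNAL://<public-ip-or-dns>:9092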

Set up Confluent Metrics Reporter at wurstmeister/kafka

I'm running Control Center with the wurstmeister/kafka image in Docker, but when I open Control Center I can't see the metrics for the broker; there is a message that says "Set up Confluent Metrics Reporter".
Can I set that up and get metrics with the wurstmeister/kafka image?
My docker-compose file is the following:
kafka:
  image: wurstmeister/kafka
  container_name: kafka
  hostname: kafka
  ports:
    - "9092"
    - "9999"
  environment:
    KAFKA_ADVERTISED_HOST_NAME: kafka
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    KAFKA_ADVERTISED_PORT: 9092
    KAFKA_PORT: 9092
    KAFKA_JMX_OPTS: "-Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=kafka -Dcom.sun.management.jmxremote.rmi.port=9999"
    JMX_PORT: 9999
    KAFKA_LISTENERS: PLAINTEXT://:9092
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://172.22.0.4:9092
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  depends_on:
    - zookeeper
control-center:
  image: confluentinc/cp-enterprise-control-center:6.0.0
  hostname: control-center
  container_name: control-center
  depends_on:
    - kafka
  ports:
    - "9021:9021"
  environment:
    CONTROL_CENTER_BOOTSTRAP_SERVERS: kafka:9092
    CONTROL_CENTER_REPLICATION_FACTOR: 1
    CONTROL_CENTER_INTERNAL_TOPICS_PARTITIONS: 1
    CONTROL_CENTER_MONITORING_INTERCEPTOR_TOPIC_PARTITIONS: 1
    CONFLUENT_METRICS_TOPIC_REPLICATION: 1
    PORT: 9021
The metrics reporters for the brokers aren't on the classpath in the wurstmeister container, so the metrics topic never gets created.
You'd have to download the Confluent Platform to get those reporters anyway, so there's no reason not to use their container instead.
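If switching images is an option, a sketch with Confluent's broker image, using the reporter settings as they appear in Confluent's example compose files (the image tag is an assumption):
kafka:
  image: confluentinc/cp-server:6.0.0
  environment:
    # load the enterprise metrics reporter that Control Center looks for
    KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
    CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: kafka:9092
    # single-broker cluster, so the metrics topic can only have one replica
    CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1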

Kafka with SASL_PLAINTEXT authentication

I'm using the following docker-compose configuration:
app-zookeeper:
  image: wurstmeister/zookeeper
  container_name: app-zookeeper
  ports:
    - 2181:2181
app-kafka:
  build: ../images/kafka
  container_name: app-kafka
  ports:
    - 9092:9092
  environment:
    KAFKA_ADVERTISED_HOST_NAME: ${DOCKER_LOCAL_HOST}
    KAFKA_ADVERTISED_PORT: 9092
    KAFKA_ADVERTISED_LISTENERS: SASL_PLAINTEXT://:9092
    KAFKA_LISTENERS: SASL_PLAINTEXT://:9092
    KAFKA_ZOOKEEPER_CONNECT: app-zookeepr:2181
    KAFKA_DELETE_TOPIC_ENBALE: "true"
    KAFKA_SASL_ENABLED_MECHANISMS: SCRAM-SHA-512
    KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: SCRAM-SHA-512
    KAFKA_SECURITY_INTER_BROKER_PROTOCOL: SASL_PLAINTEXT
    CUSTOM_INIT_SCRIPT: "export KAFKA_OPTS=-Djava.security.auth.login.config=/opt/kafka/config/kafka_server_jaas.conf"
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
File kafka_server_jaas.conf:
KafkaServer {
  org.apache.kafka.common.security.scram.ScramLoginModule required
  username="admin"
  password="admin123";
};
In images/kafka I have a Dockerfile:
FROM wurstmeister/kafka
# Authentication
COPY kafka_server_jaas.conf /opt/kafka/config/
# Define env vars for authentication
ENV CUSTOM_INIT_SCRIPT="export KAFKA_OPTS=-Djava.security.auth.login.config=/opt/kafka/config/kafka_server_jaas.conf"
ENV KAFKA_OPTS="-Djava.security.auth.login.config=/opt/kafka/config/kafka_server_jaas.conf"
# create user
RUN kafka-configs.sh --zookeeper <DOCKER_LOCAL_HOST>:2181 --alter --add-config='SCRAM-SHA-512=[password="admin123"]' --entity-type users --entity-name admin
# List users
RUN kafka-configs.sh --zookeeper <DOCKER_LOCAL_HOST>:2181 --describe --entity-type users
Then I start the ZooKeeper and Kafka containers.
On the Kafka container I get this error and am not able to connect:
ERROR [Controller id=1001, targetBrokerId=1001] Connection to node 1001 failed authentication due to: Authentication failed due to invalid credentials with SASL mechanism SCRAM-SHA-512 (org.apache.kafka.clients.NetworkClient)
On the Kafka container I have the env var KAFKA_OPTS defined:
KAFKA_OPTS=-Djava.security.auth.login.config=/opt/kafka/config/kafka_server_jaas.conf
Any clue?
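Two things stand out, offered as guesses rather than a definitive answer: KAFKA_ZOOKEEPER_CONNECT points at app-zookeepr (note the missing "e"), and the RUN kafka-configs.sh steps execute at image build time, when no ZooKeeper is reachable, so the SCRAM credentials for admin are likely never registered and the broker then fails authenticating to itself. A sketch that creates the user at runtime instead; the one-shot service name kafka-setup is illustrative:
kafka-setup:
  image: wurstmeister/kafka
  depends_on:
    - app-zookeeper
  # runs once, writes the SCRAM credentials straight into ZooKeeper, then exits
  command: >
    kafka-configs.sh --zookeeper app-zookeeper:2181 --alter
    --add-config 'SCRAM-SHA-512=[password=admin123]'
    --entity-type users --entity-name admin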