Provide Kafka client property at runtime to use Kafka auth

The micronaut-kafka documentation has some info on how to set custom properties, either via the application.yml file or directly on the annotation:
@KafkaClient(
    id = "product-client",
    acks = KafkaClient.Acknowledge.ALL,
    properties = @Property(name = ProducerConfig.RETRIES_CONFIG, value = "5")
)
public interface ProductClient {
    ...
}
I have to provide the sasl.jaas.config property at runtime, as the clients use authentication and the secrets are resolved on startup. After the secrets are resolved, the Kafka consumer/producer should be initialised.
What is the best way to achieve this?
Thanks!

I don't think Micronaut has this setting in the current version.
But you can just add this line under your producer config in application.yml:
sasl.jaas.config: com.sun.security.auth.module.Krb5LoginModule required blablabla;
and read it via a placeholder like:
@Property(name = "sasl.jaas.config", value = "${kafka.producers.your_producer_id.sasl.jaas.config}")
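Putting the two pieces together, a minimal sketch of the client (assuming the producer id is product-client; the placeholder path mirrors the answer above, and the actual value is whatever your secret resolution puts into configuration):

@KafkaClient(
    id = "product-client",
    properties = @Property(
        name = "sasl.jaas.config",
        value = "${kafka.producers.product-client.sasl.jaas.config}"
    )
)
public interface ProductClient {
    // ...
}

Micronaut resolves the ${...} placeholder when the client bean is created, so as long as the secret has been resolved into configuration before the client is first used, the producer should pick it up.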

Related

Create subjects automatically from Schema Registry - Subject not found

I have a Schema Registry container at http://registry-server:8081
ProducerConfig:
bootstrap.servers : [PLAINTEXT://kafka-server:9092]
value.serializer : class org.apache.kafka.common.serialization.ByteArraySerializer
and a standalone service that acts as a producer and has its properties set as below:
Producer
"schema.registry.url", "http://registry-server:8081"
"bootstrap.servers", "http://kafka-server:9092
"value.converter.value.subject.name.strategy", true
"auto.register.schemas", false
"value.serializer", DelegatingSerialzer.class
KafkaAvroSerializerConfig
value.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy
key.subject.name.strategy = class io.confluent.kafka.serializers.subject.TopicNameStrategy
But when the standalone service attempts to send a request to the Schema Registry, something like
http://registry-server:8081/subjects/topicName-value?deleted=false
I constantly receive a Subject Not Found error.
Is it because auto.register.schemas is set to false in the producer from the standalone service, and that's why it is failing to create subjects?
How can I auto-register schemas and auto-create subjects from the Schema Registry service?
BTW, kafka, schema-registry, and standalone-app are all containers.
Is it because auto.register.schemas is set to false in the producer from the standalone service, and that's why it is failing to create subjects?
Probably.
How can I auto-register schemas and auto-create subjects from the Schema Registry service?
You can't. The registry needs to have external POST HTTP requests sent to it to create schemas for subjects. You cannot create a subject without a schema.
The registry itself doesn't start with any schemas; usually only your client applications do, so there is nothing for it to (auto-)register by itself.
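For reference, clients create a subject by POSTing a schema to the registry's /subjects/{subject}/versions endpoint. A minimal sketch against the registry from the question, using a trivial Avro schema as a stand-in:

POST http://registry-server:8081/subjects/topicName-value/versions
Content-Type: application/vnd.schemaregistry.v1+json

{"schema": "{\"type\": \"string\"}"}

This is essentially the request the Avro serializer issues on first send when auto.register.schemas is set to true in the producer.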

Messages are not getting consumed

I have added the below configuration in the application.properties file of a Spring Boot application with a Camel implementation, but the messages are not getting consumed. Am I missing any configuration? Any pointers on implementing a consumer for Azure Event Hubs using the Kafka protocol and Camel?
bootstrap.servers=NAMESPACENAME.servicebus.windows.net:9093
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="{YOUR.EVENTHUBS.CONNECTION.STRING}";
The route looks like this:
from("kafka:{{topicName}}?brokers=NAMESPACENAME.servicebus.windows.net:9093" )
.log("Message received from Kafka : ${body}");
I found the solution to this issue. Since I was using the Spring Boot auto-configuration (camel-kafka-starter), the entries in the application.properties file had to be modified as given below:
camel.component.kafka.brokers=NAMESPACENAME.servicebus.windows.net:9093
camel.component.kafka.security-protocol=SASL_SSL
camel.component.kafka.sasl-mechanism=PLAIN
camel.component.kafka.sasl-jaas-config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="{YOUR.EVENTHUBS.CONNECTION.STRING}";
The consumer route for Azure Event Hubs with the Kafka protocol will look like this:
from("kafka:{{topicName}}")
.log("Message received from Kafka : ${body}");
Hope this solution helps to consume events from Azure Event Hubs in Camel using the Kafka protocol.
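One detail worth noting: with the Kafka protocol, the Event Hubs "topic" is the event hub's name, and the {{topicName}} placeholder resolves from the same application.properties file, e.g. (hypothetical value):

topicName=my-event-hub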

list all running configuration on kafka broker

I wish to list all configurations active on a Kafka broker. I can see configurations in the server.properties file, but that's not all of them: it doesn't show every configuration. I want to be able to see all configurations, even the default ones. Is this possible?
Any pointers in this direction would be greatly appreciated.
There is no command that lists the current configuration of a Kafka broker. However, if you want to see all the configuration parameters with their default values and importance, they are listed here:
https://docs.confluent.io/current/installation/configuration/broker-configs.html
You can achieve that programmatically through the Kafka AdminClient (I'm using 2.0 FWIW; the interface is still evolving):
import java.util.Arrays;
import java.util.Map;
import org.apache.kafka.clients.admin.*; // AdminClient, Config, ConfigEntry, DescribeConfigsResult
import org.apache.kafka.common.config.ConfigResource;
import org.apache.kafka.common.config.ConfigResource.Type;

// 'admin' is an AdminClient obtained from AdminClient.create(props)
final String brokerId = "1";
final ConfigResource cr = new ConfigResource(Type.BROKER, brokerId);
final DescribeConfigsResult dcr = admin.describeConfigs(Arrays.asList(cr));
final Map<ConfigResource, Config> configMap = dcr.all().get();
for (final Config config : configMap.values()) {
    for (final ConfigEntry entry : config.entries()) {
        System.out.println(entry);
    }
}
See the AdminClient Javadoc for details.
Each config entry has a 'source' property that indicates where the value comes from (for a broker it's either the default broker config or a per-broker override; for topics there are more possible values).
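Building on the loop above, the source can be used for filtering, for example to print only the values that differ from the built-in defaults (a sketch; ConfigSource.DEFAULT_CONFIG marks values coming from Kafka's defaults):

// Print only non-default broker settings, along with their origin
for (final ConfigEntry entry : config.entries()) {
    if (entry.source() != ConfigEntry.ConfigSource.DEFAULT_CONFIG) {
        System.out.println(entry.name() + " = " + entry.value() + " (" + entry.source() + ")");
    }
}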

Not able to use Kafka's JdbcSourceConnector to read data from Oracle DB to kafka topic

I am trying to write a standalone Java program using the kafka-jdbc-connect API to stream data from an Oracle table to a Kafka topic.
API used: I'm currently trying to use Kafka Connect, the JdbcSourceConnector class to be precise.
Constraint: use the Confluent Java API; don't do it through the CLI or by executing the provided shell scripts.
What I did: create an instance of the JdbcSourceConnector class and call its start(Properties) method, providing the Properties object as a parameter. This properties object has the database connection properties, the table whitelist property, the topic prefix, etc.
After starting the thread, I'm unable to read data from the "topic-prefix-tablename" topic. I am not sure how to pass the Kafka broker details to the JdbcSourceConnector. Calling start() on the JdbcSourceConnector starts a thread but doesn't do anything.
Is there a simple Java API tutorial page or example code I can refer to? All the examples I see use the CLI or shell scripts.
Any help is appreciated.
Code:
public static void main(String[] args) {
    Map<String, String> jdbcConnectorConfig = new HashMap<String, String>();
    jdbcConnectorConfig.put(JdbcSourceConnectorConfig.CONNECTION_URL_CONFIG, "<DATABASE_URL>");
    jdbcConnectorConfig.put(JdbcSourceConnectorConfig.CONNECTION_USER_CONFIG, "<DATABASE_USER>");
    jdbcConnectorConfig.put(JdbcSourceConnectorConfig.CONNECTION_PASSWORD_CONFIG, "<DATABASE_PASSWORD>");
    jdbcConnectorConfig.put(JdbcSourceConnectorConfig.POLL_INTERVAL_MS_CONFIG, "300000");
    jdbcConnectorConfig.put(JdbcSourceConnectorConfig.BATCH_MAX_ROWS_CONFIG, "10");
    jdbcConnectorConfig.put(JdbcSourceConnectorConfig.MODE_CONFIG, "timestamp");
    jdbcConnectorConfig.put(JdbcSourceConnectorConfig.TABLE_WHITELIST_CONFIG, "<TABLE_NAME>");
    jdbcConnectorConfig.put(JdbcSourceConnectorConfig.TIMESTAMP_COLUMN_NAME_CONFIG, "<TABLE_COLUMN_NAME>");
    jdbcConnectorConfig.put(JdbcSourceConnectorConfig.TOPIC_PREFIX_CONFIG, "test-oracle-jdbc-");

    JdbcSourceConnector jdbcSourceConnector = new JdbcSourceConnector();
    jdbcSourceConnector.start(jdbcConnectorConfig);
}
Assuming you are trying to do it in standalone mode:
In your application run configuration, the main class should be org.apache.kafka.connect.cli.ConnectStandalone, and you need to pass two property files as program arguments.
Your custom JdbcSourceConnector class should also extend the org.apache.kafka.connect.source.SourceConnector class.
Main class: org.apache.kafka.connect.cli.ConnectStandalone
Program arguments: .\path-to-config\connect-standalone.conf .\path-to-config\connector.properties
The connect-standalone.conf file contains all the Kafka broker details.
// Example connect-standalone.conf
bootstrap.servers=<comma-separated broker list here>
group.id=some_local_group_id
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.storage.StringConverter
key.converter.schemas.enable=false
value.converter.schemas.enable=false
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
offset.storage.file.filename=connect.offset
offset.flush.interval.ms=100
offset.flush.timeout.ms=180000
buffer.memory=67108864
batch.size=128000
producer.acks=1
"connector.properties" file will contain all details required to create and start connector
// Example connector.properties
name=some-local-connector-name
connector.class=your-custom-JdbcSourceConnector
tasks.max=3
topic=output-topic
fetchsize=10000
More info here: https://docs.confluent.io/current/connect/devguide.html#connector-example
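Since the question rules out the CLI, note that the same entry point can also be invoked from your own code rather than via the shell script; a minimal sketch (the paths are placeholders):

// Runs the standalone Connect worker in-process with the two property files
public static void main(String[] args) throws Exception {
    org.apache.kafka.connect.cli.ConnectStandalone.main(new String[] {
        "path-to-config/connect-standalone.conf",
        "path-to-config/connector.properties"
    });
}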

Kafka configure jaas using sasl.jaas.config on kubernetes

I'm using this Helm chart: https://github.com/helm/charts/tree/master/incubator/kafka
and these overrides in values.yaml:
configurationOverrides:
  advertised.listeners: |-
    EXTERNAL://kafka-${KAFKA_BROKER_ID}.host-removed:$((31090 + ${KAFKA_BROKER_ID}))
  listener.security.protocol.map: |-
    PLAINTEXT:SASL_PLAINTEXT,EXTERNAL:SASL_PLAINTEXT
  sasl.enabled.mechanisms: SCRAM-SHA-256
  auto.create.topics.enable: false
  inter.broker.listener.name: PLAINTEXT
  sasl.mechanism.inter.broker.protocol: SCRAM-SHA-256
  listener.name.EXTERNAL.scram-sha-256.sasl.jaas.config: org.apache.kafka.common.security.scram.ScramLoginModule required username="user" password="password";
based on this documentation: https://kafka.apache.org/documentation/#security_jaas_broker
(quick summary)
Brokers may also configure JAAS using the broker configuration property sasl.jaas.config. The property name must be prefixed with the listener prefix including the SASL mechanism, i.e. listener.name.{listenerName}.{saslMechanism}.sasl.jaas.config. Only one login module may be specified in the config value. If multiple mechanisms are configured on a listener, configs must be provided for each mechanism using the listener and mechanism prefix
listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
username="admin" \
password="admin-secret";
The problem is that when I start Kafka I get the following error:
java.lang.IllegalArgumentException: Could not find a 'KafkaServer' or 'plaintext.KafkaServer' entry in the JAAS configuration. System property 'java.security.auth.login.config' is not set
According to the order of precedence, it should use the static JAAS file if the above config is NOT set.
If JAAS configuration is defined at different levels, the order of precedence used is:
1. Broker configuration property listener.name.{listenerName}.{saslMechanism}.sasl.jaas.config
2. {listenerName}.KafkaServer section of static JAAS configuration
3. KafkaServer section of static JAAS configuration
The Helm chart doesn't offer a way to configure this JAAS file, so using this property seems to be the intended approach; I'm just confused as to what is configured incorrectly.
Note: The cluster works fine if I disable all SASL and just use plaintext, but that's not much good in a real environment.
You've defined 2 listeners, PLAINTEXT and EXTERNAL, and mapped both to SASL_PLAINTEXT.
Is this really what you wanted to do? Or did you want PLAINTEXT to not require SASL but just be plaintext?
If you really want both to be SASL, then both of them need a JAAS configuration. In your question, I only see a JAAS configuration for EXTERNAL:
listener.name.EXTERNAL.scram-sha-256.sasl.jaas.config: org.apache.kafka.common.security.scram.ScramLoginModule required username="user" password="password";
As you've mapped PLAINTEXT to SASL_PLAINTEXT, it also requires a JAAS configuration. You can specify it using, for example:
listener.name.PLAINTEXT.scram-sha-256.sasl.jaas.config: org.apache.kafka.common.security.scram.ScramLoginModule required username="user" password="password";
If you wanted your PLAINTEXT listener to actually be Plaintext without SASL, then you need to update the listener mapping:
listener.security.protocol.map: |-
  PLAINTEXT:PLAINTEXT,EXTERNAL:SASL_PLAINTEXT
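Putting it together, the relevant overrides from the question would then become (keeping the existing EXTERNAL JAAS entry unchanged):

configurationOverrides:
  listener.security.protocol.map: |-
    PLAINTEXT:PLAINTEXT,EXTERNAL:SASL_PLAINTEXT
  listener.name.EXTERNAL.scram-sha-256.sasl.jaas.config: org.apache.kafka.common.security.scram.ScramLoginModule required username="user" password="password";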