I wish to list all configurations active on a Kafka broker. I can see some configurations in the server.properties file, but that's not all of them; it doesn't show the full configuration. I want to be able to see every configuration, including the defaults. Is this possible?
Any pointers in this direction would be greatly appreciated.
There is no command that lists the current configuration of a Kafka broker. However, if you want to see all the configuration parameters with their default values and importance, they are listed here:
https://docs.confluent.io/current/installation/configuration/broker-configs.html
You can achieve that programmatically through the Kafka AdminClient (I'm using 2.0 FWIW; the interface is still evolving):
final String brokerId = "1";

// Describe the broker as a config resource
final ConfigResource cr = new ConfigResource(Type.BROKER, brokerId);
final DescribeConfigsResult dcr = admin.describeConfigs(Arrays.asList(cr));
final Map<ConfigResource, Config> configMap = dcr.all().get();

// Print every config entry, including the defaults
for (final Config config : configMap.values()) {
    for (final ConfigEntry entry : config.entries()) {
        System.out.println(entry);
    }
}
See the KafkaAdminClient Javadoc.
Each config entry has a source property that indicates where the value comes from (in the case of a broker it's either the default broker config or a per-broker override; for topics there are more possible values).
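For instance, here is a minimal sketch (reusing the configMap from the snippet above) that prints each entry together with its source, so defaults can be told apart from server.properties settings and dynamic overrides:

for (final Config config : configMap.values()) {
    for (final ConfigEntry entry : config.entries()) {
        // source() is e.g. DEFAULT_CONFIG, STATIC_BROKER_CONFIG (server.properties),
        // or DYNAMIC_BROKER_CONFIG for a per-broker override
        System.out.printf("%s = %s (%s)%n", entry.name(), entry.value(), entry.source());
    }
}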
I've been using the node-rdkafka npm package for working with Node and Kafka.
For creating a new topic I've been using the following code:
client.createTopic({ topic: topic.name, num_partitions: _.get(topic, "partitions", 1), replication_factor: _.get(topic, "replicas", 3) })
I need to add a topic-level retention.ms to override the default 7 days set at the broker level. Is there any way to do this using node-rdkafka?
I found the solution to this.
There is a 'config' property of type object which can be used for this purpose:
client.createTopic({
  'topic': name,
  'num_partitions': partitions,
  'replication_factor': replicas,
  'config': {
    'retention.ms': '60000'
  }
});
This will set retention.ms to 60000 ms (one minute). Note that all key-value pairs passed inside the 'config' object must be of type string.
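For comparison, here is a minimal sketch of the same topic-level override through the Java AdminClient (the topic name, partition count, replication factor, and connection properties are illustrative):

Properties props = new Properties();
props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

// Per-topic config overrides are attached to the NewTopic before creation
Map<String, String> topicConfig = new HashMap<>();
topicConfig.put("retention.ms", "60000");
NewTopic newTopic = new NewTopic("my-topic", 1, (short) 3).configs(topicConfig);

try (Admin admin = AdminClient.create(props)) {
    admin.createTopics(Collections.singletonList(newTopic)).all().get();
}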
The micronaut-kafka documentation has some info on how to set custom properties, either via the application.yml file or directly on the annotation:
@KafkaClient(
    id = "product-client",
    acks = KafkaClient.Acknowledge.ALL,
    properties = @Property(name = ProducerConfig.RETRIES_CONFIG, value = "5")
)
public interface ProductClient {
    ...
}
I have to provide the sasl.jaas.config property at runtime, as the clients use authentication and the secrets are resolved on startup. After the secrets are resolved, the Kafka consumer/producer should be initialised.
What is the best way to achieve this?
Thanks!
I don't think Micronaut has this setting in the current version.
But you can just add this line under your producer config in application.yml:
sasl.jaas.config: com.sun.security.auth.module.Krb5LoginModule required blablabla;
and read it via a placeholder like:
@Property(name = "sasl.jaas.config", value = "${kafka.producers.your_producer_id.sasl.jaas.config}")
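Put together, the application.yml side might look like this (the producer id and the JAAS line are placeholders; Micronaut flattens the nested keys into the dotted kafka.producers.your_producer_id.sasl.jaas.config path the placeholder refers to):

kafka:
  producers:
    your_producer_id:
      sasl:
        jaas:
          config: com.sun.security.auth.module.Krb5LoginModule required blablabla;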
Currently I am changing the default broker configurations in my Kafka cluster using the kafka-configs.sh script.
./kafka-configs.sh --bootstrap-server <bootstrap_server> --entity-type brokers --entity-default --alter --add-config max.connections=100
The above command sets the default value of the max.connections configuration to 100 on all brokers of the cluster. I would like to achieve the same through Java.
I tried using the alterConfigs method of the AdminClient class. Using this method I am able to set the configuration value, but the value is applied at the individual broker level.
Because of this I would have to execute alterConfigs for each and every broker in the cluster, which is not scalable.
Could anyone help me with changing the default broker configuration using the AdminClient class, similar to what I was doing with the shell script?
Thank you.
You could use the code below to set configs at the broker-default level:
Properties props = new Properties();
props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

// An empty broker ID targets the cluster-wide default level,
// equivalent to --entity-default in kafka-configs.sh
ConfigResource configResource = new ConfigResource(ConfigResource.Type.BROKER, "");
ConfigEntry entry = new ConfigEntry("max.connections", String.valueOf(100));
AlterConfigOp op = new AlterConfigOp(entry, AlterConfigOp.OpType.SET);

Map<ConfigResource, Collection<AlterConfigOp>> configs = new HashMap<>(1);
configs.put(configResource, Arrays.asList(op));

try (Admin admin = AdminClient.create(props)) {
    admin.incrementalAlterConfigs(configs).all().get();
}
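To double-check the change, you can describe the same empty-named resource back; a dynamically set default should come back with the DYNAMIC_DEFAULT_BROKER_CONFIG source. A sketch, reusing the props from above:

try (Admin admin = AdminClient.create(props)) {
    ConfigResource defaults = new ConfigResource(ConfigResource.Type.BROKER, "");
    Config config = admin.describeConfigs(Arrays.asList(defaults)).all().get().get(defaults);
    System.out.println(config.get("max.connections"));
}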
I am trying to write a standalone Java program using the kafka-jdbc-connect API to stream data from an Oracle table to a Kafka topic.
API used: I'm currently trying to use Kafka Connect, the JdbcSourceConnector class to be precise.
Constraint: use the Confluent Java API, not the CLI or the provided shell scripts.
What I did: I created an instance of the JdbcSourceConnector class and called its start method, passing a configuration map as a parameter. This map has the database connection properties, the table whitelist property, the topic prefix, etc.
After starting the thread, I'm unable to read data from the "topic-prefix-tablename" topic. I am not sure how to pass the Kafka broker details to JdbcSourceConnector. Calling the start() method on JdbcSourceConnector starts a thread but doesn't do anything.
Is there a simple Java API tutorial page or example code I can refer to? All the examples I see use the CLI/shell scripts.
Any help is appreciated.
Code:
public static void main(String[] args) {
    Map<String, String> jdbcConnectorConfig = new HashMap<String, String>();
    jdbcConnectorConfig.put(JdbcSourceConnectorConfig.CONNECTION_URL_CONFIG, "<DATABASE_URL>");
    jdbcConnectorConfig.put(JdbcSourceConnectorConfig.CONNECTION_USER_CONFIG, "<DATABASE_USER>");
    jdbcConnectorConfig.put(JdbcSourceConnectorConfig.CONNECTION_PASSWORD_CONFIG, "<DATABASE_PASSWORD>");
    jdbcConnectorConfig.put(JdbcSourceConnectorConfig.POLL_INTERVAL_MS_CONFIG, "300000");
    jdbcConnectorConfig.put(JdbcSourceConnectorConfig.BATCH_MAX_ROWS_CONFIG, "10");
    jdbcConnectorConfig.put(JdbcSourceConnectorConfig.MODE_CONFIG, "timestamp");
    jdbcConnectorConfig.put(JdbcSourceConnectorConfig.TABLE_WHITELIST_CONFIG, "<TABLE_NAME>");
    jdbcConnectorConfig.put(JdbcSourceConnectorConfig.TIMESTAMP_COLUMN_NAME_CONFIG, "<TABLE_COLUMN_NAME>");
    jdbcConnectorConfig.put(JdbcSourceConnectorConfig.TOPIC_PREFIX_CONFIG, "test-oracle-jdbc-");

    JdbcSourceConnector jdbcSourceConnector = new JdbcSourceConnector();
    jdbcSourceConnector.start(jdbcConnectorConfig);
}
Assuming you are trying to do it in standalone mode:
In your application run configuration, your main class should be "org.apache.kafka.connect.cli.ConnectStandalone" and you need to pass two property files as program arguments.
Your custom JdbcSourceConnector class should also extend the "org.apache.kafka.connect.source.SourceConnector" class.
Main Class: org.apache.kafka.connect.cli.ConnectStandalone
Program Arguments: .\path-to-config\connect-standalone.conf .\path-to-config\connector.properties
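Outside the IDE, the equivalent command line would be along these lines (the classpath is illustrative; the two files are the same ones passed as program arguments above):

java -cp <kafka-connect-and-connector-jars> org.apache.kafka.connect.cli.ConnectStandalone connect-standalone.conf connector.properties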
"connect-standalone.conf" file will contain all Kafka broker details.
// Example connect-standalone.conf
bootstrap.servers=<comma seperated brokers list here>
group.id=some_loca_group_id
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.storage.StringConverter
key.converter.schemas.enable=false
value.converter.schemas.enable=false
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
offset.storage.file.filename=connect.offset
offset.flush.interval.ms=100
offset.flush.timeout.ms=180000
buffer.memory=67108864
batch.size=128000
producer.acks=1
"connector.properties" file will contain all details required to create and start connector
// Example connector.properties
name=some-local-connector-name
connector.class=your-custom-JdbcSourceConnector
tasks.max=3
topic=output-topic
fetchsize=10000
More info here: https://docs.confluent.io/current/connect/devguide.html#connector-example
Is there a way to access the configuration values in server.properties without direct access to that file itself?
I thought that:
kafka-configs.sh --describe --entity-type topics --zookeeper localhost:2181
might give me what I want, but I did not see the values set in server.properties, just the following (I created 'ddos' as my own topic with kafka-topics.sh):
Configs for topics:ddos are
Configs for topics:__consumer_offsets are segment.bytes=104857600,cleanup.policy=compact
I was thinking I'd also see globally configured options, like this one from the default configuration I have:
log.retention.hours=168
Thanks in advance.
Since Kafka 0.11, you can use the AdminClient describeConfigs() API to retrieve the configuration of brokers.
For example, here is skeleton code to retrieve the configuration for broker 0:
Properties adminProps = new Properties();
adminProps.load(new FileInputStream("admin.properties"));
AdminClient admin = KafkaAdminClient.create(adminProps);

// Describe broker 0 as a config resource
Collection<ConfigResource> resources = new ArrayList<>();
ConfigResource cr = new ConfigResource(Type.BROKER, "0");
resources.add(cr);

DescribeConfigsResult dcr = admin.describeConfigs(resources);
System.out.println(dcr.all().get());
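For completeness, admin.properties only needs to point the client at the cluster; the host and port here are assumptions:

bootstrap.servers=localhost:9092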