I am using Spring Kafka to connect to Kafka, and to check the status of the Kafka server I am using org.apache.kafka.clients.admin.AdminClient. It works fine locally, but when I deploy to the QA environment the application doesn't start, complaining that it is not able to create the AdminClient bean. My guess is that AdminClient uses some specific port that is not open in the QA environment.
Does anyone know if this is the case, and which port KafkaAdmin connects to? Spring Kafka without KafkaAdmin seems to be working fine.
There is nothing special: KafkaAdmin is based on the config you provide:
/**
 * Create an instance with an {@link AdminClient} based on the supplied
 * configuration.
 * @param config the configuration for the {@link AdminClient}.
 */
public KafkaAdmin(Map<String, Object> config) {
This config is indeed used for the internal AdminClient instance:
adminClient = AdminClient.create(this.config);
and that one is based on the AdminClientConfig:
/**
 * Create a new AdminClient with the given configuration.
 *
 * @param conf The configuration.
 * @return The new KafkaAdminClient.
 */
public static AdminClient create(Map<String, Object> conf) {
return KafkaAdminClient.createInternal(new AdminClientConfig(conf), null);
}
So, all the properties required for the AdminClient connection can be found in that AdminClientConfig. And note that the host/port configuration is by default exactly the same as for any other Kafka client:
public static final String BOOTSTRAP_SERVERS_CONFIG = CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG;
and
private static final String BOOTSTRAP_SERVERS_DOC = CommonClientConfigs.BOOTSTRAP_SERVERS_DOC;
So, when you create a KafkaAdmin instance, you should provide at least the bootstrap.servers property.
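For illustration, a minimal KafkaAdmin bean might look like this (the broker address is a placeholder for your environment):
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.admin.AdminClientConfig;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.KafkaAdmin;

@Configuration
public class KafkaAdminConfiguration {

    @Bean
    public KafkaAdmin kafkaAdmin() {
        Map<String, Object> config = new HashMap<>();
        // Same host/port as every other Kafka client - there is no separate admin port
        config.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "qa-broker:9092");
        return new KafkaAdmin(config);
    }
}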
Also, it would be great to see the stack trace that occurs in the mentioned environment.
I'm trying to run a Micronaut test that includes Kafka via Testcontainers.
For my test I need my code and the Kafka server to share the same port, but I cannot configure the port in Kafka:
@Container
static KafkaContainer kafka =
new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:latest"));
It generates a random port, and it is not possible to configure it.
Another possibility would be to change the application.yml property that the producer uses for the Kafka server, but I cannot find a solution for that either.
properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, configuration.kafkaUrl);
Your test class needs to implement TestPropertyProvider and override getProperties():
@MicronautTest
class MySpec extends Specification implements TestPropertyProvider {

    private static final Collection<Book> received = new ConcurrentLinkedDeque<>()

    static KafkaContainer kafka = new KafkaContainer(
            DockerImageName.parse('confluentinc/cp-kafka:latest'))

    @Override
    Map<String, String> getProperties() {
        kafka.start()
        // Expose the container's mapped address to the Micronaut context
        ['kafka.bootstrap.servers': kafka.bootstrapServers]
    }

    // tests here
}
See this official Micronaut guide for a detailed tutorial:
https://guides.micronaut.io/latest/micronaut-kafka-gradle-groovy.html
I keep getting the following error when trying to produce a message to Kafka:
reactor.core.Exceptions$ErrorCallbackNotImplemented: org.apache.kafka.common.errors.TimeoutException: Topic topic not present in metadata after 60000 ms.
Caused by: org.apache.kafka.common.errors.TimeoutException: Topic topic not present in metadata after 60000 ms.
I have already made sure that the Jackson core, Jackson databind and Kafka clients dependencies are in the producer project. Also, how do I pass the security protocol in the reactor-kafka SenderOptions?
"Topic topic not present in metadata after 60000 ms" means the topic does not exist yet. You have to create the topic before you can use it - either with the command line tools, or with an AdminClient.
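For illustration, a minimal sketch that creates the topic programmatically (the broker address, partition count, and replication factor are placeholder assumptions - adjust them for your cluster):
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopicExample {

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        try (AdminClient admin = AdminClient.create(props)) {
            // Create "topic" with 1 partition and replication factor 1, then wait for the result
            admin.createTopics(Collections.singleton(new NewTopic("topic", 1, (short) 1)))
                    .all()
                    .get();
        }
    }
}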
As for the security protocol: you can set any ProducerConfig property in the map passed into create():
/**
 * Creates a sender options instance with the specified config overrides for the underlying
 * Kafka {@link Producer}.
 * @return new instance of sender options
 */
@NonNull
static <K, V> SenderOptions<K, V> create(@NonNull Map<String, Object> configProperties) {
return new ImmutableSenderOptions<>(configProperties);
}
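So the security protocol goes into that same map; here is a minimal sketch, assuming a SASL_SSL cluster (the broker address and protocol value are placeholders, and your cluster may require additional SASL/SSL properties):
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

import reactor.kafka.sender.KafkaSender;
import reactor.kafka.sender.SenderOptions;

public class SecureSenderExample {

    static KafkaSender<String, String> buildSender() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        // security.protocol is just another client property in the same map
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_SSL");
        return KafkaSender.create(SenderOptions.create(props));
    }
}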
Kafka configuration properties:
Can I have the same property key (and maybe a different value) in
(1) application.properties,
(2) the bean (ProducerFactory/ProducerConfig), and
(3) config/producer.properties?
If yes, which one wins (is applied last)?
P.S. Yes, I know, I could test it! But it would also be handy to have this question/answer on SO.
EDIT:
Example:
(1) spring.kafka.producer.properties.enable.idempotence=true
(2) props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "false");
With props defined as:
@Configuration
public class KafkaProducerConfiguration {

    @Bean
    public ProducerFactory<Object, Object> producerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "false");
        // ...
        return new DefaultKafkaProducerFactory<>(props);
    }
}
The Boot property (1) is only used when Boot auto-configures the producer factory for you. Since you are defining your own producer factory @Bean (2), Boot's auto-configuration backs off and those properties are ignored.
If you want to use the Boot application.properties, simply remove your producerFactory @Bean and let Boot configure the producer factory for you.
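For example, with the bean removed, the producer can be configured entirely from application.properties (values here are illustrative):
spring.kafka.bootstrap-servers=localhost:9092
spring.kafka.producer.properties.enable.idempotence=true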
I have no idea what config/producer.properties (3) is.
I am trying to write a standalone Java program that uses the Kafka JDBC connector API to stream data from an Oracle table to a Kafka topic.
API used: I'm currently trying to use Kafka Connect, the JdbcSourceConnector class to be precise.
Constraint: use the Confluent Java API, not the CLI or the provided shell scripts.
What I did: create an instance of the JdbcSourceConnector class and call its start(Properties) method, passing a Properties object as the parameter. This properties object holds the database connection properties, the table whitelist property, the topic prefix, etc.
After starting the thread, I'm unable to read data from the "topic-prefix-tablename" topic. I am not sure how to pass the Kafka broker details to JdbcSourceConnector. Calling start() on JdbcSourceConnector starts a thread but doesn't do anything.
Is there a simple Java API tutorial page or example code I can refer to? All the examples I see use the CLI/shell scripts.
Any help is appreciated.
Code:
public static void main(String[] args) {
Map<String, String> jdbcConnectorConfig = new HashMap<String, String>();
jdbcConnectorConfig.put(JdbcSourceConnectorConfig.CONNECTION_URL_CONFIG, "<DATABASE_URL>");
jdbcConnectorConfig.put(JdbcSourceConnectorConfig.CONNECTION_USER_CONFIG, "<DATABASE_USER>");
jdbcConnectorConfig.put(JdbcSourceConnectorConfig.CONNECTION_PASSWORD_CONFIG, "<DATABASE_PASSWORD>");
jdbcConnectorConfig.put(JdbcSourceConnectorConfig.POLL_INTERVAL_MS_CONFIG, "300000");
jdbcConnectorConfig.put(JdbcSourceConnectorConfig.BATCH_MAX_ROWS_CONFIG, "10");
jdbcConnectorConfig.put(JdbcSourceConnectorConfig.MODE_CONFIG, "timestamp");
jdbcConnectorConfig.put(JdbcSourceConnectorConfig.TABLE_WHITELIST_CONFIG, "<TABLE_NAME>");
jdbcConnectorConfig.put(JdbcSourceConnectorConfig.TIMESTAMP_COLUMN_NAME_CONFIG, "<TABLE_COLUMN_NAME>");
jdbcConnectorConfig.put(JdbcSourceConnectorConfig.TOPIC_PREFIX_CONFIG, "test-oracle-jdbc-");
JdbcSourceConnector jdbcSourceConnector = new JdbcSourceConnector();
jdbcSourceConnector.start(jdbcConnectorConfig);
}
Assuming you are trying to do it in standalone mode:
In your application run configuration, your main class should be org.apache.kafka.connect.cli.ConnectStandalone, and you need to pass two property files as program arguments.
Your custom connector class should also extend org.apache.kafka.connect.source.SourceConnector.
Main Class: org.apache.kafka.connect.cli.ConnectStandalone
Program Arguments: .\path-to-config\connect-standalone.conf .\path-to-config\connector.properties
The connect-standalone.conf file contains all the Kafka broker details.
// Example connect-standalone.conf
bootstrap.servers=<comma separated broker list here>
group.id=some_local_group_id
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.storage.StringConverter
key.converter.schemas.enable=false
value.converter.schemas.enable=false
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
offset.storage.file.filename=connect.offset
offset.flush.interval.ms=100
offset.flush.timeout.ms=180000
buffer.memory=67108864
batch.size=128000
producer.acks=1
"connector.properties" file will contain all details required to create and start connector
// Example connector.properties
name=some-local-connector-name
connector.class=your-custom-JdbcSourceConnector
tasks.max=3
topic=output-topic
fetchsize=10000
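Running that main class with those arguments is equivalent to something like the following command (the classpath is a placeholder for your own setup):
java -cp <your-connect-classpath> org.apache.kafka.connect.cli.ConnectStandalone connect-standalone.conf connector.properties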
More info here : https://docs.confluent.io/current/connect/devguide.html#connector-example
Is spring.cloud.stream.kafka.binder.zkNodes mandatory? What would happen if the value is absent?
It is no longer required (since 2.0).
For earlier versions, we had to use Zookeeper to provision topics.
/**
 * Zookeeper nodes.
 * @param zkNodes the nodes.
 * @deprecated connection to zookeeper is no longer necessary
 */
@Deprecated
@DeprecatedConfigurationProperty(reason = "No longer necessary since 2.0")
public void setZkNodes(String... zkNodes) {
    this.zkNodes = zkNodes;
}
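So you can omit zkNodes entirely and just point the binder at the Kafka brokers, e.g. (the address is a placeholder):
spring.cloud.stream.kafka.binder.brokers=localhost:9092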