Is spring.cloud.stream.kafka.binder.zkNodes mandatory? What happens if the value is absent?
It is no longer required (since 2.0).
For earlier versions, we had to use Zookeeper to provision topics.
/**
 * Zookeeper nodes.
 * @param zkNodes the nodes.
 * @deprecated connection to zookeeper is no longer necessary
 */
@Deprecated
@DeprecatedConfigurationProperty(reason = "No longer necessary since 2.0")
public void setZkNodes(String... zkNodes) {
    this.zkNodes = zkNodes;
}
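For reference, since 2.0 the binder provisions topics through the Kafka AdminClient, so a minimal configuration only needs the broker addresses; a sketch, with an illustrative address:
spring:
  cloud:
    stream:
      kafka:
        binder:
          brokers: localhost:9092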
I keep getting this log when trying to produce a message on Kafka: reactor.core.Exceptions$ErrorCallbackNotImplemented: org.apache.kafka.common.errors.TimeoutException: Topic topic not present in metadata after 60000 ms. Caused by: org.apache.kafka.common.errors.TimeoutException: Topic topic not present in metadata after 60000 ms.
I have already made sure that I have the Jackson core, Jackson databind, and Kafka clients dependencies in the producer project. Also, how do I pass the security protocol in reactor-kafka SenderOptions?
Topic topic not present in metadata after 60000 ms - you have to create the topic before you can use it, either with the command line tools or with an AdminClient.
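For example, a minimal sketch of creating it with the AdminClient (the broker address, partition count, and replication factor are illustrative):
Map<String, Object> conf = new HashMap<>();
conf.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
try (AdminClient admin = AdminClient.create(conf)) {
    // Create "topic" with 1 partition and replication factor 1, then wait for the result
    admin.createTopics(Collections.singleton(new NewTopic("topic", 1, (short) 1))).all().get();
}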
You can set any ProducerConfig property in the map passed into create().
/**
 * Creates a sender options instance with the specified config overrides for the underlying
 * Kafka {@link Producer}.
 * @return new instance of sender options
 */
@NonNull
static <K, V> SenderOptions<K, V> create(@NonNull Map<String, Object> configProperties) {
    return new ImmutableSenderOptions<>(configProperties);
}
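So, to answer the second part of the question: put the security-related client properties in that same map. A sketch, assuming SASL_SSL with the PLAIN mechanism (both are illustrative; use whatever your cluster requires):
Map<String, Object> props = new HashMap<>();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_SSL");
props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
SenderOptions<String, String> senderOptions = SenderOptions.create(props);
KafkaSender<String, String> sender = KafkaSender.create(senderOptions);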
I have followed the documentation below to handle deserialization exceptions: https://www.confluent.io/blog/spring-kafka-can-your-kafka-consumers-handle-a-poison-pill/
It works fine: the message gets logged and the consumer moves forward, but every time I restart the server the bad messages are logged again.
Is there a way I can skip/acknowledge the bad message once it has been logged, so that it doesn't get picked up again when the server restarts?
Consumer YAML
spring:
  kafka:
    bootstrap-servers: localhost:9092
    consumer:
      # Configures the Spring Kafka ErrorHandlingDeserializer that delegates to the 'real' deserializers
      key-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
      value-deserializer: org.springframework.kafka.support.serializer.ErrorHandlingDeserializer
      properties:
        # Delegate deserializers
        spring.deserializer.key.delegate.class: org.apache.kafka.common.serialization.StringDeserializer
        spring.deserializer.value.delegate.class: org.springframework.kafka.support.serializer.JsonDeserializer
Also, note: with the above, I see a warning in the logs that spring.deserializer.value.delegate.class is supplied but is not a known config.
I have also configured this:
@Configuration
@EnableKafka
public class KafkaConfiguration {

    /**
     * Boot will autowire this into the container factory.
     */
    @Bean
    public LoggingErrorHandler errorHandler() {
        return new LoggingErrorHandler();
    }
}
Could someone please advise?
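One likely cause: LoggingErrorHandler only logs; it does not commit the offset of the failed record, so the record is redelivered after a restart. A sketch of an alternative, assuming Spring Kafka 2.3+ (the KafkaTemplate wiring and back-off values are illustrative): a SeekToCurrentErrorHandler with a DeadLetterPublishingRecoverer publishes the bad record to a dead-letter topic and commits its offset.
@Bean
public SeekToCurrentErrorHandler errorHandler(KafkaTemplate<Object, Object> template) {
    // After retries are exhausted, publish the failed record to a dead-letter topic
    DeadLetterPublishingRecoverer recoverer = new DeadLetterPublishingRecoverer(template);
    // FixedBackOff(0L, 0L): no delay, no retries - recover immediately
    SeekToCurrentErrorHandler handler = new SeekToCurrentErrorHandler(recoverer, new FixedBackOff(0L, 0L));
    // Commit the recovered record's offset so it is not replayed on restart
    handler.setCommitRecovered(true);
    return handler;
}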
I wish to list all configurations active on a Kafka broker. I can see configurations in the server.properties file, but that's not all of them: it doesn't show every configuration. I want to be able to see all configurations, even the default ones. Is this possible?
Any pointers in this direction would be greatly appreciated.
There is no command that lists the current configuration of a Kafka broker. However, if you want to see all the configuration parameters with their default values and importance, they are listed here:
https://docs.confluent.io/current/installation/configuration/broker-configs.html
You can achieve that programmatically through the Kafka AdminClient (I'm using 2.0, FWIW - the interface is still evolving):
// The bootstrap address is illustrative
final Properties props = new Properties();
props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
try (final AdminClient admin = AdminClient.create(props)) {
    final String brokerId = "1";
    final ConfigResource cr = new ConfigResource(ConfigResource.Type.BROKER, brokerId);
    final DescribeConfigsResult dcr = admin.describeConfigs(Arrays.asList(cr));
    final Map<ConfigResource, Config> configMap = dcr.all().get();
    for (final Config config : configMap.values()) {
        for (final ConfigEntry entry : config.entries()) {
            System.out.println(entry);
        }
    }
}
See the KafkaAdmin Javadoc.
Each config entry has a 'source' property that indicates where the value comes from (for a broker it's the default broker config or a per-broker override; for topics there are more possible values).
I am using Spring Kafka to connect to Kafka, and to check the status of the Kafka server I am using org.apache.kafka.clients.admin.AdminClient. It works fine locally, but when I deploy into the QA environment it doesn't start, complaining that it is not able to create the AdminClient bean. My guess is that AdminClient uses some specific port which is not open in the QA environment.
Does someone know if this is the case, and which port KafkaAdmin connects to? Spring Kafka without KafkaAdmin seems to work fine.
There is nothing special: the KafkaAdmin is based on the provided config:
/**
 * Create an instance with an {@link AdminClient} based on the supplied
 * configuration.
 * @param config the configuration for the {@link AdminClient}.
 */
public KafkaAdmin(Map<String, Object> config) {
This config is indeed used for the internal AdminClient instance:
adminClient = AdminClient.create(this.config);
and that one is based on the AdminClientConfig:
/**
 * Create a new AdminClient with the given configuration.
 *
 * @param conf The configuration.
 * @return The new KafkaAdminClient.
 */
public static AdminClient create(Map<String, Object> conf) {
    return KafkaAdminClient.createInternal(new AdminClientConfig(conf), null);
}
So, all the properties required for the AdminClient connection can be found in that AdminClientConfig. And note that the host/port defaults are exactly the same as for any other client:
public static final String BOOTSTRAP_SERVERS_CONFIG = CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG;
and
private static final String BOOTSTRAP_SERVERS_DOC = CommonClientConfigs.BOOTSTRAP_SERVERS_DOC;
So, when you create a KafkaAdmin instance, you should provide at least the bootstrap.servers property.
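For example, a minimal sketch (the broker address is illustrative):
@Bean
public KafkaAdmin admin() {
    Map<String, Object> configs = new HashMap<>();
    configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "qa-broker:9092");
    return new KafkaAdmin(configs);
}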
It would also be great to see the stack trace from the mentioned environment.
Can someone please guide me on how to intercept MQTT messages on the ActiveMQ Artemis broker? I tried it as suggested in the manual, but the MQTT messages are not being intercepted. The publishing and subscribing of messages work fine, however.
Interceptor class:
public class InterceptorExample implements Interceptor {

    @Override
    public boolean intercept(Packet packet, RemotingConnection connection) throws ActiveMQException {
        System.out.println("Packet intercepted");
        return true;
    }
}
I add the interceptor to the configuration in the addMQTTConnector method:
protected void addMQTTConnector() throws Exception {
    // ...
    List<String> incomingInterceptors = new ArrayList<>();
    incomingInterceptors.add("org.apache.activemq.artemis.core.protocol.mqtt.InterceptorExample");
    server.getConfiguration().setIncomingInterceptorClassNames(incomingInterceptors);
}
The full code for the broker class is at https://codeshare.io/snZsB
I filed a feature request for interceptor support in MQTT. It has been implemented and was released in Artemis 1.4.0.
In Artemis 1.3.0, only messages sent over the core protocol (and perhaps one other, but not MQTT) could be intercepted.
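Since 1.4.0, MQTT packets can be intercepted by implementing the MQTT-specific interceptor interface rather than the core Interceptor; a sketch, assuming Artemis 1.4+ (the class name is illustrative):
public class SimpleMQTTInterceptor implements MQTTInterceptor {

    @Override
    public boolean intercept(MqttMessage packet, RemotingConnection connection) throws ActiveMQException {
        // MqttMessage is the Netty MQTT codec's representation of a packet
        System.out.println("MQTT packet intercepted: " + packet.fixedHeader().messageType());
        return true;
    }
}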