I'm trying to run a Micronaut test that includes Kafka via Testcontainers.
For my test I need my code and the Kafka server to agree on the port, but I cannot configure the port on the Kafka container:
@Container
static KafkaContainer kafka =
        new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:latest"));
The container generates a random port and it does not seem possible to configure it.
Another possibility would be to change the application.yml property that the producer uses for the Kafka server, but I cannot find a solution for that either.
properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, configuration.kafkaUrl);
Your test class needs to implement TestPropertyProvider and override getProperties():
@MicronautTest
class MySpec extends Specification implements TestPropertyProvider {

    private static final Collection<Book> received = new ConcurrentLinkedDeque<>()

    static KafkaContainer kafka = new KafkaContainer(
            DockerImageName.parse('confluentinc/cp-kafka:latest'))

    @Override
    Map<String, String> getProperties() {
        kafka.start()
        ['kafka.bootstrap.servers': kafka.bootstrapServers]
    }

    // tests here
}
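For context, the received collection above is typically filled by a listener bean in the same test setup. A minimal sketch of such a listener, shown in Java for illustration (the class name and topic are assumptions; the Micronaut guide linked below shows the full Groovy version):

import io.micronaut.configuration.kafka.annotation.KafkaListener;
import io.micronaut.configuration.kafka.annotation.OffsetReset;
import io.micronaut.configuration.kafka.annotation.Topic;
import java.util.Collection;
import java.util.concurrent.ConcurrentLinkedDeque;

// Hypothetical listener collecting records so the test can assert on them.
@KafkaListener(offsetReset = OffsetReset.EARLIEST)
public class BookListener {

    static final Collection<Book> received = new ConcurrentLinkedDeque<>();

    @Topic("books") // assumed topic name
    void receive(Book book) {
        received.add(book);
    }
}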
See this official Micronaut guide for a detailed tutorial:
https://guides.micronaut.io/latest/micronaut-kafka-gradle-groovy.html
I am using Spring Kafka with the embedded Kafka broker for JUnit tests. On Windows it gives an error for every test:
Error deleting C:\Users\LXX691\AppData\Local\Temp\kafka-1103610162480947200/.lock: The process cannot access the file because it is being used by another process.
I just did the basic configuration, like below:
@SpringBootTest(webEnvironment = RANDOM_PORT)
@RunWith(SpringRunner.class)
public class KafkaTest {

    @Autowired
    EmbeddedKafkaBroker broker;

    @Before
    public void setUp() throws Exception {
        // setup producer and consumers
    }

    @Test
    public void test() {
        producer.send(new ProducerRecord<>("topic", "content"));
    }
}
Any suggestion to resolve this, or any workaround, is appreciated.
This is a known issue in Apache Kafka: https://issues.apache.org/jira/browse/KAFKA-8145.
Unfortunately, there is nothing we can do about it in Spring Kafka.
See more info in "Kafka: unable to start Kafka - process can not access file 00000000000000000000.timeindex" and here: https://github.com/spring-projects/spring-kafka/issues/194
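As a stopgap until KAFKA-8145 is fixed, some people pin the embedded broker's log directory to a path the build controls instead of a fresh temp directory per run. A minimal sketch, assuming JUnit 4 and spring-kafka-test; the log.dir value is an assumption, and this does not fix the underlying Windows file-locking bug:

import org.junit.runner.RunWith;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.kafka.test.context.EmbeddedKafka;
import org.springframework.test.context.junit4.SpringRunner;

// Sketch: pin the embedded broker's log directory so locked temp
// directories do not pile up; clean it between builds (e.g. mvn clean).
@RunWith(SpringRunner.class)
@SpringBootTest
@EmbeddedKafka(brokerProperties = { "log.dir=target/embedded-kafka-logs" }) // path is an assumption
public class KafkaLockWorkaroundTest {
    // tests as before
}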
Using the Spring Integration Kafka DSL, I wonder why my listener does not receive messages. In the same application, if I replace the Spring Integration DSL with a method annotated with @KafkaListener, messages are consumed fine.
What am I missing with the DSL?
DSL code that does not consume:
@Configuration
@EnableKafka
class KafkaConfig {

    // consumer factory provided by Spring Boot
    @Bean
    IntegrationFlow inboundKafkaEventFlow(ConsumerFactory consumerFactory) {
        IntegrationFlows
                .from(Kafka
                        .messageDrivenChannelAdapter(consumerFactory, "kafkaTopic")
                        .configureListenerContainer({ c -> c.groupId('kafka-consumer-staging') })
                        .id("kafkaTopicListener").autoStartup(true)
                )
                .channel("logChannel")
                .get()
    }
}
logChannel (or any other channel I use) does not reflect inbound messages.
If I use a plain listener instead of the above code, messages are consumed fine:
@Component
class KafkaConsumer {

    @KafkaListener(topics = ['kafkaTopic'], groupId = 'kafka-consumer-staging')
    void inboundKafkaEvent(String message) {
        log.debug("message is {}", message)
    }
}
Both approaches use the same application.properties for the Kafka consumer.
You are missing the fact that you use Spring Integration, but you haven't enabled it in your application. (You don't need @EnableKafka for this, though, since you are not going to consume with @KafkaListener.) So, to enable the Spring Integration infrastructure, you need to add @EnableIntegration on your @Configuration class: https://docs.spring.io/spring-integration/docs/5.1.6.RELEASE/reference/html/#configuration-enable-integration
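In code, that is a one-line addition to the configuration class above (a minimal sketch, shown in Java; the Groovy version is identical apart from syntax):

import org.springframework.context.annotation.Configuration;
import org.springframework.integration.config.EnableIntegration;

@Configuration
@EnableIntegration // registers the Spring Integration infrastructure (channels, endpoints, etc.)
// @EnableKafka is only needed when @KafkaListener-based consumption is also used
class KafkaConfig {
    // ... the inboundKafkaEventFlow bean as before
}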
I am trying to write a standalone Java program using the kafka-jdbc-connect API to stream data from an Oracle table to a Kafka topic.
API used: I'm currently trying to use Kafka Connectors, the JdbcSourceConnector class to be precise.
Constraint: use the Confluent Java API, not the CLI or the provided shell scripts.
What I did: create an instance of the JdbcSourceConnector class and call its start() method, providing the configuration as a parameter. This configuration has the database connection properties, the table whitelist property, the topic prefix, etc.
After starting the thread, I'm unable to read data from the "topic-prefix-tablename" topic. I am not sure how to pass the Kafka broker details to JdbcSourceConnector. Calling start() on JdbcSourceConnector starts a thread but does not appear to do anything.
Is there a simple Java API tutorial page or example code I can refer to? All the examples I see use the CLI or shell scripts.
Any help is appreciated
Code:
public static void main(String[] args) {
    Map<String, String> jdbcConnectorConfig = new HashMap<String, String>();
    jdbcConnectorConfig.put(JdbcSourceConnectorConfig.CONNECTION_URL_CONFIG, "<DATABASE_URL>");
    jdbcConnectorConfig.put(JdbcSourceConnectorConfig.CONNECTION_USER_CONFIG, "<DATABASE_USER>");
    jdbcConnectorConfig.put(JdbcSourceConnectorConfig.CONNECTION_PASSWORD_CONFIG, "<DATABASE_PASSWORD>");
    jdbcConnectorConfig.put(JdbcSourceConnectorConfig.POLL_INTERVAL_MS_CONFIG, "300000");
    jdbcConnectorConfig.put(JdbcSourceConnectorConfig.BATCH_MAX_ROWS_CONFIG, "10");
    jdbcConnectorConfig.put(JdbcSourceConnectorConfig.MODE_CONFIG, "timestamp");
    jdbcConnectorConfig.put(JdbcSourceConnectorConfig.TABLE_WHITELIST_CONFIG, "<TABLE_NAME>");
    jdbcConnectorConfig.put(JdbcSourceConnectorConfig.TIMESTAMP_COLUMN_NAME_CONFIG, "<TABLE_COLUMN_NAME>");
    jdbcConnectorConfig.put(JdbcSourceConnectorConfig.TOPIC_PREFIX_CONFIG, "test-oracle-jdbc-");

    JdbcSourceConnector jdbcSourceConnector = new JdbcSourceConnector();
    jdbcSourceConnector.start(jdbcConnectorConfig);
}
Assuming you are trying to do it in standalone mode:
In your application run configuration, your main class should be "org.apache.kafka.connect.cli.ConnectStandalone" and you need to pass two property files as program arguments.
You should also make sure your custom JdbcSourceConnector class extends "org.apache.kafka.connect.source.SourceConnector".
Main class: org.apache.kafka.connect.cli.ConnectStandalone
Program arguments: .\path-to-config\connect-standalone.conf .\path-to-config\connector.properties
"connect-standalone.conf" file will contain all Kafka broker details.
// Example connect-standalone.conf
bootstrap.servers=<comma-separated broker list here>
group.id=some_local_group_id
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.storage.StringConverter
key.converter.schemas.enable=false
value.converter.schemas.enable=false
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
offset.storage.file.filename=connect.offset
offset.flush.interval.ms=100
offset.flush.timeout.ms=180000
buffer.memory=67108864
batch.size=128000
producer.acks=1
"connector.properties" file will contain all details required to create and start connector
// Example connector.properties
name=some-local-connector-name
connector.class=your-custom-JdbcSourceConnector
tasks.max=3
topic=output-topic
fetchsize=10000
More info here: https://docs.confluent.io/current/connect/devguide.html#connector-example
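If the constraint is to stay inside a Java program rather than launch the shell script, one option (a sketch, not an officially supported embedding API) is to invoke the standalone runner's main method directly with the same two files; the paths below are assumptions:

import org.apache.kafka.connect.cli.ConnectStandalone;

public class EmbeddedConnectRunner {

    public static void main(String[] args) throws Exception {
        // Same entry point the connect-standalone shell script uses:
        // worker config first, then one or more connector property files.
        ConnectStandalone.main(new String[] {
                "path-to-config/connect-standalone.conf",  // assumed path
                "path-to-config/connector.properties"      // assumed path
        });
    }
}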
I am using Spring Kafka to connect to Kafka and, to check the status of the Kafka server, I am using org.apache.kafka.clients.admin.AdminClient. It works fine locally, but when I deploy into the QA environment the application doesn't start, complaining that it is not able to create the AdminClient bean. My guess is that AdminClient uses some specific port which is not open in the QA environment.
Does someone know if this is the case, and which port KafkaAdmin connects to? Spring Kafka without KafkaAdmin seems to work fine.
There is nothing special. The KafkaAdmin is based on some provided config:
/**
 * Create an instance with an {@link AdminClient} based on the supplied
 * configuration.
 * @param config the configuration for the {@link AdminClient}.
 */
public KafkaAdmin(Map<String, Object> config) {
This config is indeed used for the internal AdminClient instance:
adminClient = AdminClient.create(this.config);
and that one is based on the AdminClientConfig:
/**
 * Create a new AdminClient with the given configuration.
 *
 * @param conf The configuration.
 * @return The new KafkaAdminClient.
 */
public static AdminClient create(Map<String, Object> conf) {
    return KafkaAdminClient.createInternal(new AdminClientConfig(conf), null);
}
So, you can find all the properties required for the AdminClient connection in that AdminClientConfig. And note that the host/port configuration is by default exactly the same as for any other client:
public static final String BOOTSTRAP_SERVERS_CONFIG = CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG;
and
private static final String BOOTSTRAP_SERVERS_DOC = CommonClientConfigs.BOOTSTRAP_SERVERS_DOC;
So, when you create a KafkaAdmin instance, you should provide at least that bootstrap.servers property.
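For example, a minimal KafkaAdmin bean definition (a sketch; the broker address is an assumption and must match what your producers and consumers use):

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.KafkaAdmin;

@Configuration
class KafkaAdminConfiguration {

    @Bean
    KafkaAdmin kafkaAdmin() {
        Map<String, Object> config = new HashMap<>();
        // Same host/port as any other Kafka client; there is no separate admin port.
        config.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "qa-broker:9092"); // assumed address
        return new KafkaAdmin(config);
    }
}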
Also, it would be great to see the stack trace that occurs in the mentioned environment.
Can someone please guide me on how to intercept MQTT messages on the ActiveMQ Artemis broker? I tried as suggested in the manual, but the MQTT messages are not being intercepted. However, publishing and subscribing of messages work fine.
Interceptor class:
public class InterceptorExample implements Interceptor {

    @Override
    public boolean intercept(Packet packet, RemotingConnection connection) throws ActiveMQException {
        System.out.println("Packet intercepted");
        return true;
    }
}
I add the interceptor to the configuration in the addMQTTConnector method:
protected void addMQTTConnector() throws Exception {
    // ...
    List<String> incomingInterceptors = new ArrayList<>();
    incomingInterceptors.add("org.apache.activemq.artemis.core.protocol.mqtt.InterceptorExample");
    server.getConfiguration().setIncomingInterceptorClassNames(incomingInterceptors);
}
The full code for the broker class is at https://codeshare.io/snZsB
I filed a feature request for interceptor support in MQTT. It has since been implemented and was released in Artemis 1.4.0.
In Artemis 1.3.0, only messages sent over the core protocol (and possibly one other protocol, but not MQTT) could be intercepted.
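From 1.4.0 onwards, MQTT frames are intercepted with the MQTT-specific interceptor interface rather than the core Interceptor shown above. A minimal sketch, assuming the org.apache.activemq.artemis.core.protocol.mqtt.MQTTInterceptor interface; verify the exact signature against your Artemis version:

import io.netty.handler.codec.mqtt.MqttMessage;
import org.apache.activemq.artemis.api.core.ActiveMQException;
import org.apache.activemq.artemis.core.protocol.mqtt.MQTTInterceptor;
import org.apache.activemq.artemis.spi.core.protocol.RemotingConnection;

// Sketch of an MQTT interceptor; returning true lets the packet continue.
public class MqttInterceptorExample implements MQTTInterceptor {

    @Override
    public boolean intercept(MqttMessage packet, RemotingConnection connection) throws ActiveMQException {
        System.out.println("MQTT packet intercepted: " + packet.fixedHeader().messageType());
        return true;
    }
}

It is registered the same way as above, via setIncomingInterceptorClassNames on the broker configuration.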