Currently I am using the Kafka SpoolDir connector in standalone mode. After adding the required configuration to the properties file, I start the connector using
kafka/bin/connect-standalone.sh connect-standalone.properties file-source.properties
Is there any way to start the connector (standalone/distributed) using Java code only, the way we can write consumer and producer code in Java?
ConnectStandalone is the Java class that this command starts, but the Connect framework is not meant to be run as an embedded service.
You can see the source code here that starts the server and parses the config files.
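That said, because ConnectStandalone is an ordinary Java class with a main method, you can launch it from your own code, with the caveat above that this is not a supported embedded mode. A minimal sketch, assuming the connect-runtime jar is on your classpath and using the same two properties files as the shell command (the paths are placeholders):

```java
import org.apache.kafka.connect.cli.ConnectStandalone;

public class StartConnector {
    public static void main(String[] args) throws Exception {
        // Equivalent to:
        //   connect-standalone.sh connect-standalone.properties file-source.properties
        ConnectStandalone.main(new String[] {
                "connect-standalone.properties",  // worker config
                "file-source.properties"          // connector config
        });
    }
}
```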
I would like to develop a Kafka connector, but to test quickly I would like to reload the plugin without restarting the container in my Docker stack. Is there a way to do it?
Currently I compile the connector and move the jar file into a folder that is bind-mounted into the Kafka Connect container. After moving the file I restart the container.
Thanks in advance.
It's not possible to "hot swap" plugins; they are only scanned at startup of the Connect worker process, after the CLASSPATH is also scanned.
You should use unit/integration tests for quick testing. If that's not enough, then connect-standalone can be used rather than containers.
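For instance, a connector's task can be exercised in a plain unit test with no worker or container involved, which avoids the startup-time plugin scan entirely. A rough sketch, where MySourceTask and the input.path key are placeholders for your own task class and config (JUnit and connect-api assumed as test dependencies):

```java
import java.util.List;
import java.util.Map;
import org.apache.kafka.connect.source.SourceRecord;
import org.junit.Test;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertNotNull;

public class MySourceTaskTest {
    @Test
    public void pollReturnsRecords() throws InterruptedException {
        MySourceTask task = new MySourceTask();       // your own SourceTask subclass
        task.start(Map.of("input.path", "/tmp/in"));  // hypothetical config key
        try {
            List<SourceRecord> records = task.poll();
            assertNotNull(records);
            assertFalse(records.isEmpty());
        } finally {
            task.stop();
        }
    }
}
```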
The problem is simple: I want to print all topics from Apache Kafka after installing the Kafka module on Karaf. I need to get properties from a .cfg file located in jbossfuse/etc and create a KafkaConsumer object. I want to implement BundleActivator so that my code runs at the moment the module is installed.
The question is: how can I get properties from the config file?
I found a solution here that said "you can use the ConfigAdmin service from the OSGi spec." How can I use it? All examples with code are welcome.
Karaf uses Felix-FileInstall to read config files: http://felix.apache.org/documentation/subprojects/apache-felix-file-install.html
So if there is a file named kafka.cfg, it will pick it up and register a config with the ConfigAdmin service under the PID 'kafka'.
You can fetch the ConfigAdmin service from an Activator and read the config there, but I strongly recommend using Declarative Services or Blueprint instead to interact with the OSGi framework; both support injection of the configuration when it is available (see the sketch after the list below).
Otherwise you have to deal with the following issues yourself:
there is no ConfigAdmin (yet), maybe because your bundle starts earlier
the ConfigAdmin service changes (for example due to a package refresh or update)
the configuration is not yet registered (because Felix has not read the file yet)
the configuration gets updated (for example, someone changes the file)
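To illustrate the Declarative Services route, a component bound to the 'kafka' PID could look roughly like this; DS hands the entries of etc/kafka.cfg to the activate method once ConfigAdmin has registered them. The assumption here is that kafka.cfg contains valid consumer settings (bootstrap.servers, key.deserializer, value.deserializer), and listTopics() is used to print the topic names:

```java
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;

// Activated only once the config for PID "kafka" exists, i.e. once
// Felix FileInstall has read etc/kafka.cfg and registered it.
@Component(configurationPid = "kafka")
public class TopicPrinter {

    @Activate
    public void activate(Map<String, Object> config) {
        // The injected map holds the kafka.cfg entries; pass them straight
        // to the consumer (unknown extra keys are only logged as warnings).
        Map<String, Object> props = new HashMap<>(config);
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.listTopics().keySet().forEach(System.out::println);
        }
    }
}
```

With this approach the component never runs against a missing or stale configuration, which is exactly the bookkeeping the list above would otherwise force onto an Activator.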
I was working with a Kafka download pack and was following the Kafka getting-started guide. As part of it, I created a sample topic called test.
Then I wanted to try setting some access control lists using the kafka-acls.sh script. For some reason, I did not find that script inside the bin directory of my Kafka pack.
So, I downloaded a fresh kafka pack from their website to check and this script was available. (I don't know why or how it wasn't there in the earlier pack)
However, when I started kafka from my new pack and tried to create the same topic test, I get an error saying that the topic already exists.
I am trying to figure out how this is possible even with a freshly downloaded instance. Does Kafka save topics in some common directory or something?
Shabir
Found the reason. I figured that if topics persist even across different bundles of Kafka, they must be stored somewhere on disk outside the bundle itself.
A little searching showed that ZooKeeper stores its data in the directory pointed to by dataDir in the zookeeper.properties file, which is /tmp/zookeeper by default.
Once I deleted this folder and started a fresh Kafka pack, all previous topics were gone and it behaved like a brand-new installation.
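For reference, this is the setting and cleanup involved; note that the messages themselves live under Kafka's log.dirs (/tmp/kafka-logs by default in config/server.properties), which you may want to remove as well for a truly clean slate:

```
# config/zookeeper.properties
dataDir=/tmp/zookeeper

# wipe topic metadata (ZooKeeper) and message data (Kafka)
rm -rf /tmp/zookeeper /tmp/kafka-logs
```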
Thanks
Shabir
I have the following configuration:
Spring-integration-kafka 1.3.1.RELEASE
I have a custom kafka-sink and a custom kafka-source
The configuration I want to have:
I'd like to keep using Spring-integration-kafka 1.3.1.RELEASE with my custom kafka-sink.
I'm changing my kafka-source logic to use Spring-integration-kafka 2.1.0.RELEASE. I noticed the way to implement a consumer/producer is quite different from prior versions of Spring-integration-kafka.
My question is: could I face some compatibility issues?
I'm using Rabbit.
You should be OK then; it will probably work with the newer Kafka jars in the source module's /lib directory, since each module is loaded in its own classloader, so there should be no clashes with the xd/lib jars.
However, you might have to remove the old Kafka jars from the xd/lib directory (which is why I asked about the message bus).
I am trying to run a sample producer/consumer application using Apache Kafka. I downloaded it from https://www.apache.org/dyn/closer.cgi?path=/kafka/0.10.0.0/kafka-0.10.0.0-src.tgz and then started following the steps given in http://www.javaworld.com/article/3060078/big-data/big-data-messaging-with-kafka-part-1.html.
When I try to run bin/zookeeper-server-start.sh config/zookeeper.properties, I get "Error: Could not find or load main class config.zookeeper.properties". I googled the issue but didn't find any useful information. Can anyone help me continue?
You've downloaded the source package. Download the binary package of Kafka instead and test with that.
You have to download the binary version from the official Kafka web site.
Assuming you have the correct binary version, check whether you already have CLASSPATH defined in your environment. If you do, and the defined CLASSPATH contains a space (e.g. C:\Program Files\<>), then neither ZooKeeper nor Kafka will start.
To solve this, either delete your existing CLASSPATH or modify the startup script that builds the ZooKeeper and Kafka CLASSPATH values, putting your CLASSPATH entry in double quotes before the rest of the path is built.
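For example, the quickest workaround is to clear the variable for the current shell session only, before starting the servers (Windows paths shown; the exact edit to the startup script depends on your Kafka version):

```
:: clear any pre-existing CLASSPATH for this session, then start ZooKeeper
set CLASSPATH=
bin\windows\zookeeper-server-start.bat config\zookeeper.properties
```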