Apache Camel - Apache Kafka integration - apache-kafka

I am learning how to integrate Kafka with Apache Camel and I encountered the following error.
Any help will be appreciated. I have a file created inside the C:/inbox folder and want to consume the text in it using a Kafka consumer. I am using version 3.1.0 of Apache Camel. Below is my code:
package com.javainuse;

import org.apache.camel.builder.RouteBuilder;

public class SimpleRouteBuilder extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        String topicName = "test123";
        String kafkaServer = "kafka:localhost:9092";
        String zooKeeperHost = "zookeeperHost=localhost&zookeeperPort=2181";
        String serializerClass = "serializerClass=kafka.serializer.StringEncoder";
        String toKafka = "kafka:localhost:9092;kafka:test123?brokers=localhost:9092;zookeeperHost=localhost;zookeeperPort=2181;groupId=group1";

        from("file:C:/inbox?noop=true").to(toKafka);
    }
}
And below is the error I am getting:
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
org.apache.camel.FailedToStartRouteException: Failed to start route route1 because of Route(route1)[From[file:C:/inbox?noop=true] -> [To[kafka:loc...
at org.apache.camel.impl.engine.BaseRouteService.warmUp(BaseRouteService.java:133)
at org.apache.camel.impl.engine.AbstractCamelContext.doWarmUpRoutes(AbstractCamelContext.java:3246)
at org.apache.camel.impl.engine.AbstractCamelContext.safelyStartRouteServices(AbstractCamelContext.java:3139)
at org.apache.camel.impl.engine.AbstractCamelContext.doStartOrResumeRoutes(AbstractCamelContext.java:2925)
at org.apache.camel.impl.engine.AbstractCamelContext.doStartCamel(AbstractCamelContext.java:2725)
at org.apache.camel.impl.engine.AbstractCamelContext.lambda$doStart$2(AbstractCamelContext.java:2527)
at org.apache.camel.impl.engine.AbstractCamelContext.doWithDefinedClassLoader(AbstractCamelContext.java:2544)
at org.apache.camel.impl.engine.AbstractCamelContext.doStart(AbstractCamelContext.java:2525)
at org.apache.camel.support.service.ServiceSupport.start(ServiceSupport.java:121)
at org.apache.camel.impl.engine.AbstractCamelContext.start(AbstractCamelContext.java:2421)
at com.javainuse.MainApp.main(MainApp.java:12)
Caused by: org.apache.camel.RuntimeCamelException: org.apache.kafka.common.KafkaException: Failed to construct kafka producer
at org.apache.camel.RuntimeCamelException.wrapRuntimeCamelException(RuntimeCamelException.java:52)
at org.apache.camel.support.ChildServiceSupport.start(ChildServiceSupport.java:67)
at org.apache.camel.support.service.ServiceHelper.startService(ServiceHelper.java:70)
at org.apache.camel.support.service.ServiceHelper.startService(ServiceHelper.java:87)
at org.apache.camel.processor.channel.DefaultChannel.doStart(DefaultChannel.java:144)
at org.apache.camel.support.service.ServiceSupport.start(ServiceSupport.java:121)
at org.apache.camel.support.service.ServiceHelper.startService(ServiceHelper.java:70)
at org.apache.camel.support.service.ServiceHelper.startService(ServiceHelper.java:73)
at org.apache.camel.processor.Pipeline.doStart(Pipeline.java:154)
at org.apache.camel.support.service.ServiceSupport.start(ServiceSupport.java:121)
at org.apache.camel.support.service.ServiceHelper.startService(ServiceHelper.java:70)
at org.apache.camel.support.processor.DelegateAsyncProcessor.doStart(DelegateAsyncProcessor.java:78)
at org.apache.camel.support.service.ServiceSupport.start(ServiceSupport.java:121)
at org.apache.camel.support.service.ServiceHelper.startService(ServiceHelper.java:70)
at org.apache.camel.impl.engine.BaseRouteService.startChildService(BaseRouteService.java:339)
at org.apache.camel.impl.engine.BaseRouteService.doWarmUp(BaseRouteService.java:189)
at org.apache.camel.impl.engine.BaseRouteService.warmUp(BaseRouteService.java:131)
... 10 more
Caused by: org.apache.kafka.common.KafkaException: Failed to construct kafka producer
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:432)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:298)
at org.apache.camel.component.kafka.KafkaProducer.doStart(KafkaProducer.java:119)
at org.apache.camel.support.service.ServiceSupport.start(ServiceSupport.java:121)
at org.apache.camel.support.service.ServiceHelper.startService(ServiceHelper.java:70)
at org.apache.camel.impl.engine.AbstractCamelContext.internalAddService(AbstractCamelContext.java:1455)
at org.apache.camel.impl.engine.AbstractCamelContext.addService(AbstractCamelContext.java:1391)
at org.apache.camel.processor.SendProcessor.doStart(SendProcessor.java:240)
at org.apache.camel.support.service.ServiceSupport.start(ServiceSupport.java:121)
at org.apache.camel.support.service.ServiceHelper.startService(ServiceHelper.java:70)
at org.apache.camel.support.service.ServiceHelper.startService(ServiceHelper.java:87)
at org.apache.camel.processor.errorhandler.RedeliveryErrorHandler.doStart(RedeliveryErrorHandler.java:1454)
at org.apache.camel.support.ChildServiceSupport.start(ChildServiceSupport.java:60)
... 25 more
Caused by: org.apache.kafka.common.config.ConfigException: Invalid url in bootstrap.servers: localhost:9092;zookeeperHost=localhost;zookeeperPort=2181;groupId=group1
at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:58)
at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:47)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:407)
... 37 more
Process finished with exit code 0

The error stack trace says that your Kafka producer URI is invalid (see the bottom of the stack trace). And it is indeed.
The correct form is kafka:[topicname]?[options] (check the Camel-Kafka docs).
So when I look at your URI, it should probably be
kafka:test123?brokers=localhost:9092&groupId=group1
Your URI has the following problems:
It contains kafka:[topicname] twice, which is invalid
One of the kafka:[topicname] parts is actually kafka:[brokers]; remove it
It uses semicolons (;) instead of & to delimit options
It contains Zookeeper options for old versions of camel-kafka; remove them
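Putting those fixes together, the route from the question would become (same topic, folder, and broker as above):

package com.javainuse;

import org.apache.camel.builder.RouteBuilder;

public class SimpleRouteBuilder extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // The topic name goes directly after "kafka:"; options follow
        // once, separated by "&".
        from("file:C:/inbox?noop=true")
            .to("kafka:test123?brokers=localhost:9092&groupId=group1");
    }
}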
By the way: the line SLF4J: Defaulting to no-operation (NOP) logger implementation at the top of your stack trace says that you are using the SLF4J logging interface, but you have no implementation added to your project.
If you use Maven, you can add the following dependency to add the SLF4J API as well as Logback as the implementation to your project:
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <!-- add a <version> element here unless your build already
         manages dependency versions (e.g. via Spring Boot) -->
</dependency>

Related

Flink - kafka connector OAUTHBEARER Class loader issue

I am trying to configure Kafka authentication using the SASL mechanism (OAUTHBEARER) (using Flink 1.9.2, kafka-client 2.2.0).
When using Flink with SASL authentication I get the exception below.
Kafka is shaded into a fat jar with the application.
After some remote debugging I found that my callback handler has a ChildFirstClassLoader, while org.apache.kafka.common.security.auth.AuthenticateCallbackHandler belongs to another ChildFirstClassLoader, so the following check fails (OAuthBearerSaslClientFactory):
if (!(Objects.requireNonNull(callbackHandler) instanceof AuthenticateCallbackHandler))
    throw new IllegalArgumentException(String.format(
            "Callback handler must be castable to %s: %s",
            AuthenticateCallbackHandler.class.getName(), callbackHandler.getClass().getName()));
I have no idea why these two classes have two different classloaders.
Any idea? Any workaround?
Thanks for the help.
Caused by: org.apache.kafka.common.errors.SaslAuthenticationException: Failed to configure SaslClientAuthenticator
Caused by: java.lang.IllegalArgumentException: Callback handler must be castable to org.apache.kafka.common.security.auth.AuthenticateCallbackHandler: org.apache.kafka.common.security.oauthbearer.internals.OAuthBearerSaslClientCallbackHandler
at org.apache.kafka.common.security.oauthbearer.internals.OAuthBearerSaslClient$OAuthBearerSaslClientFactory.createSaslClient(OAuthBearerSaslClient.java:182)
at javax.security.sasl.Sasl.createSaslClient(Sasl.java:420)
at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.lambda$createSaslClient$0(SaslClientAuthenticator.java:180)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.createSaslClient(SaslClientAuthenticator.java:176)
at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.<init>(SaslClientAuthenticator.java:168)
at org.apache.kafka.common.network.SaslChannelBuilder.buildClientAuthenticator(SaslChannelBuilder.java:254)
at org.apache.kafka.common.network.SaslChannelBuilder.lambda$buildChannel$1(SaslChannelBuilder.java:202)
at org.apache.kafka.common.network.KafkaChannel.<init>(KafkaChannel.java:140)
at org.apache.kafka.common.network.SaslChannelBuilder.buildChannel(SaslChannelBuilder.java:210)
at org.apache.kafka.common.network.Selector.buildAndAttachKafkaChannel(Selector.java:334)
at org.apache.kafka.common.network.Selector.registerChannel(Selector.java:325)
at org.apache.kafka.common.network.Selector.connect(Selector.java:257)
at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:920)
at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:287)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.trySend(ConsumerNetworkClient.java:474)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:255)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:236)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:215)
at org.apache.kafka.clients.consumer.internals.Fetcher.getTopicMetadata(Fetcher.java:292)
at org.apache.kafka.clients.consumer.KafkaConsumer.partitionsFor(KafkaConsumer.java:1803)
at org.apache.kafka.clients.consumer.KafkaConsumer.partitionsFor(KafkaConsumer.java:1771)
at org.apache.flink.streaming.connectors.kafka.internal.KafkaPartitionDiscoverer.getAllPartitionsForTopics(KafkaPartitionDiscoverer.java:77)
at org.apache.flink.streaming.connectors.kafka.internals.AbstractPartitionDiscoverer.discoverPartitions(AbstractPartitionDiscoverer.java:131)
at org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.open(FlinkKafkaConsumerBase.java:508)
at org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:36)
at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:102)
at org.apache.flink.streaming.runtime.tasks.StreamTask.openAllOperators(StreamTask.java:552)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:416)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:705)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:530)
at java.lang.Thread.run(Thread.java:748)
I'm not sure if you've solved this already, but I wrestled with this exact same scenario for quite a while. What ended up working for me was copying the kafka-clients jar into Flink's lib/ directory.
Sorry, I forgot to post the solution, but yes, I solved it the same way, by copying the kafka-clients jar into Flink's lib/ directory.
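For background on why the instanceof check fails: the same class bytes loaded by two different classloaders produce two distinct Class objects, and neither is assignable to the other. A minimal demonstration of this (the jar path is a placeholder and the class name ClassLoaderDemo is illustrative, not from the original posts):

import java.net.URL;
import java.net.URLClassLoader;

public class ClassLoaderDemo {
    public static void main(String[] args) throws Exception {
        // Placeholder path; point this at any jar containing the class below.
        URL[] jars = { new URL("file:/path/to/kafka-clients-2.2.0.jar") };

        // Two isolated loaders (null parent = no delegation), mimicking two
        // child-first classloaders each holding a copy of the same classes.
        ClassLoader a = new URLClassLoader(jars, null);
        ClassLoader b = new URLClassLoader(jars, null);

        Class<?> ca = a.loadClass("org.apache.kafka.common.security.auth.AuthenticateCallbackHandler");
        Class<?> cb = b.loadClass("org.apache.kafka.common.security.auth.AuthenticateCallbackHandler");

        System.out.println(ca == cb);                // false: distinct Class objects
        System.out.println(ca.isAssignableFrom(cb)); // false: "must be castable to" fails
    }
}

Copying kafka-clients into Flink's lib/ resolves this, presumably because the Kafka classes are then provided by a single classloader rather than two.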

Error while starting kafka cluster: java.lang.NoSuchMethodError

I am trying to start a Kafka cluster on my local machine running Ubuntu 18.04 with IntelliJ 2019. I have Kafka 2.3 and already started Zookeeper beforehand. I am trying to run a shell script containing the line below:
kafka-server-start.sh $KAFKA_HOME/config/server-0.properties
I am getting the following error:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/vagrant/app/apache-hive-3.0.0-bin/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/vagrant/app/kafka23/libs/slf4j-log4j12-1.7.26.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
2020-06-08T13:36:09,329 INFO [main] kafka.utils.Log4jControllerRegistration$ - Registered kafka:type=kafka.Log4jController MBean
2020-06-08T13:36:09,548 ERROR [main] kafka.Kafka$ - Exiting Kafka due to fatal exception
java.lang.NoSuchMethodError: scala.Predef$.refArrayOps([Ljava/lang/Object;)[Ljava/lang/Object;
at kafka.Kafka$.getPropsFromArgs(Kafka.scala:43) [kafka_2.12-2.3.0.jar:?]
at kafka.Kafka$.main(Kafka.scala:67) [kafka_2.12-2.3.0.jar:?]
at kafka.Kafka.main(Kafka.scala) [kafka_2.12-2.3.0.jar:?]
Can somebody please help to resolve this issue?
The issue I found was multiple SLF4J bindings coming from my .bashrc file: both the Hive and Kafka bindings were on the classpath and caused the conflict. I commented out the relevant Hive lines in my .bashrc and was able to start the Kafka cluster.
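More generally, a NoSuchMethodError on scala.Predef usually means the scala-library on the classpath does not match the Scala version the Kafka build was compiled against (kafka_2.12-2.3.0 expects a 2.12.x scala-library). A quick diagnostic sketch (not from the original answer; the class name ScalaVersionCheck is illustrative) to see which scala-library actually wins:

// Prints the Scala version on the classpath and the jar that provided it;
// compare the version against the suffix of the Kafka jar (e.g. kafka_2.12-...).
public class ScalaVersionCheck {
    public static void main(String[] args) {
        System.out.println(scala.util.Properties$.MODULE$.versionNumberString());
        System.out.println(scala.util.Properties$.MODULE$.getClass()
                .getProtectionDomain().getCodeSource().getLocation());
    }
}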

Exception on startup: NoSuchMethodException: org.springframework.kafka.core.KafkaTemplate.<init>()

I tried to update the Spring Kafka version but got an exception.
Spring Kafka version 2.3.4.RELEASE
Spring Boot version 2.2.2.RELEASE
Kafka-clients version 2.3.1
Caused by: org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.springframework.kafka.core.KafkaTemplate]: No default constructor found; nested exception is java.lang.NoSuchMethodException: org.springframework.kafka.core.KafkaTemplate.<init>()
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:83)
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateBean(AbstractAutowireCapableBeanFactory.java:1312)
... 101 more
Caused by: java.lang.NoSuchMethodException: org.springframework.kafka.core.KafkaTemplate.<init>()
at java.lang.Class.getConstructor0(Class.java:3082)
at java.lang.Class.getDeclaredConstructor(Class.java:2178)
at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:78)
... 102 more
You need to show your code and configuration and the full stack trace (you should never edit/truncate stack traces here). The error seems quite clear:
Caused by: java.lang.NoSuchMethodException: org.springframework.kafka.core.KafkaTemplate.<init>()
There is no no-arg constructor; it needs a producer factory. We need to see the code and configuration to figure out what is trying to create a template with no PF.
Normally, Spring Boot will automatically configure a KafkaTemplate for you.
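For reference, a minimal manual configuration (a sketch, assuming String keys and values and a local broker; the class name KafkaProducerConfig is illustrative) shows why the producer factory is required:

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

@Configuration
public class KafkaProducerConfig {

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return new DefaultKafkaProducerFactory<>(props);
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        // KafkaTemplate has no no-arg constructor; it must be handed a ProducerFactory
        return new KafkaTemplate<>(producerFactory());
    }
}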
Thank you! The problem was in the tests - I incorrectly specified the generic type of KafkaTemplate. I used KafkaTemplate<String, Bytes> instead of the KafkaTemplate<String, Message> that I use in the application code. So I suppose the test Spring context could not find a proper bean to autowire.

Getting exception while instantiating KafkaProducer

I am using the IBM Bluemix implementation of the Kafka broker.
I am creating the KafkaProducer with the following properties:
key.serializer=org.apache.kafka.common.serialization.ByteArraySerializer
value.serializer=org.apache.kafka.common.serialization.ByteArraySerializer
bootstrap.servers=xxxx.xxxxxx.xxxxxx.xxxxxx.bluemix.net:xxxx
client.id=messagehub
acks=-1
security.protocol=SASL_SSL
ssl.protocol=TLSv1.2
ssl.enabled.protocols=TLSv1.2
ssl.truststore.location=xxxxxxxxxxxxxxxxx
ssl.truststore.password=xxxxxxxx
ssl.truststore.type=JKS
ssl.endpoint.identification.algorithm=HTTPS
KafkaProducer<byte[], byte[]> kafkaProducer =
new KafkaProducer<byte[], byte[]>(props);
With this I got the following exception:
org.apache.kafka.common.KafkaException:
org.apache.kafka.clients.producer.internals.DefaultPartitioner is not
an instance of org.apache.kafka.clients.producer.Partitioner
After reading the following blog post:
http://blog.rocana.com/kafkas-defaultpartitioner-and-byte-arrays
I added the following line to my property file, even though I was using the new API:
partitioner.class=kafka.producer.ByteArrayPartitioner
Now I am getting this exception:
org.apache.kafka.common.KafkaException: Could not instantiate class kafka.producer.ByteArrayPartitioner Does it have a public no-argument constructor?
It looks like ByteArrayPartitioner does not have a default constructor.
Any idea what I am missing here?
Thanks
Madhu
As I was using the KafkaProducer API, I did not need the
partitioner.class=kafka.producer.ByteArrayPartitioner
property. The issue was that there were two copies of the kafka-clients jar. We have configured our installation such that all library jars are in an external shared directory, but due to a POM configuration error the WAR file also had a copy of the kafka-clients jar in its lib directory. Once I fixed this, it worked fine.
Madhu
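When you suspect duplicate jars like this, a small diagnostic (a sketch, not from the original answer; the class name WhichJar is illustrative) can print where each class was actually loaded from:

import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.clients.producer.internals.DefaultPartitioner;

public class WhichJar {
    public static void main(String[] args) {
        // If these print different jar locations, two copies of the client
        // classes are in play, which is how "X is not an instance of Y"
        // errors become possible across classloaders.
        System.out.println(Partitioner.class.getProtectionDomain()
                .getCodeSource().getLocation());
        System.out.println(DefaultPartitioner.class.getProtectionDomain()
                .getCodeSource().getLocation());
    }
}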

Apache Kafka example error: Failed to send message after 3 tries

I am running the Kafka producer example mentioned on its site.
The code:
import java.util.Date;
import java.util.Properties;
import java.util.Random;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class TestProducer {
    public static void main(String[] args) {
        long events = Long.parseLong(args[0]);
        Random rnd = new Random();

        Properties props = new Properties();
        props.put("metadata.broker.list", "host.broker-1:9093, host.broker-2:9093, host.broker-3:9095");
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        props.put("partitioner.class", "test.app.SimplePartitioner");
        props.put("request.required.acks", "1");

        ProducerConfig config = new ProducerConfig(props);
        Producer<String, String> producer = new Producer<String, String>(config);

        for (long nEvents = 0; nEvents < events; nEvents++) {
            long runtime = new Date().getTime();
            String ip = "192.168.2." + rnd.nextInt(255);
            String msg = runtime + ",www.example.com," + ip;
            KeyedMessage<String, String> data = new KeyedMessage<String, String>("page_visits", ip, msg);
            producer.send(data);
        }
        producer.close();
    }
}
import kafka.producer.Partitioner;
import kafka.utils.VerifiableProperties;

public class SimplePartitioner implements Partitioner {
    public SimplePartitioner(VerifiableProperties props) {
    }

    // Partitions on the last octet of the IP address used as the message key.
    public int partition(Object key, int a_numPartitions) {
        int partition = 0;
        String stringKey = (String) key;
        int offset = stringKey.lastIndexOf('.');
        if (offset > 0) {
            partition = Integer.parseInt(stringKey.substring(offset + 1)) % a_numPartitions;
        }
        return partition;
    }
}
More details:
I am running this application on a host (call it the producer host) which is remote to host-broker[1-3].
I can ping and ssh the broker hosts from the producer host.
I provided the advertised.host.name in server.properties (they are named server[1-3].properties on the respective brokers).
The properties:
broker.id=1
port=9093
host.name=host.broker.internal.name
advertised.host.name=host-broker1
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/data/1/kafka-logs-1,/data/2/kafka-logs-2
num.partitions=1
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
log.cleaner.enable=false
zookeeper.connect=zk1:2181,zk2:2181,zk3:2181
zookeeper.connection.timeout.ms=6000
Any idea on how to fix this error?
I got these errors when running a Kafka producer:
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Found a solution:
On my Mac, after I downloaded scala-2.10 and kafka_2.10-0.8.1, everything was fine in the kafka_2.10-0.8.1 directory when I started Zookeeper, the Kafka server, and created a test topic. Then I needed to start a producer for the test topic, but there was an error:
yhuangMac:kafka_2.10-0.8.1 yhuang$ ./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
The reason is that the Kafka release zip file only included the slf4j-api jar in its libs directory; it was missing slf4j-nop.jar. So we have to go to http://www.slf4j.org, download slf4j-1.7.7.zip, unzip it, and copy slf4j-api-1.7.7.jar and slf4j-nop-1.7.7.jar into Kafka's libs directory.
Restart the Kafka producer; now no error is reported.
Source: SOLUTION
You need to add an SLF4J logging implementation. If you are using Maven as the build tool, try adding the following to your pom.xml and see if it works:
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-log4j12</artifactId>
    <version>1.7.5</version>
</dependency>
This is a solution to the exception in the original question asked by Krish: "kafka.common.FailedToSendMessageException: Failed to send messages after 3 tries."
The FAQ here and here says that your hostname should be set correctly. I have not experienced that condition, but I found another condition in which a Kafka producer gives this error: when the partition key in the producer is wrong. If you have a topic with one partition, the partition key in the producer can be either null (the message is sent to a random partition) or 0 (partitions in Kafka are numbered starting from 0); if you use a partition key of 1, this exception is thrown. Likewise, if you have 3 partitions in the topic and you use a partition key of 3, the exception is thrown, because the valid partition numbers are 0, 1, and 2. The error occurs consistently whenever the partition number the producer uses in send() falls outside the topic's range of partitions.
I used kafka version 0.8.2. The client API I used was the package kafka.javaapi.producer.Producer.
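In other words, a custom partitioner must always return a value in the range [0, numPartitions). A hedged sketch of a safe variant of the partition method from the question, using Java 8's Math.floorMod:

// Always maps the key into the valid range [0, numPartitions),
// regardless of the sign of the key's hash code.
public int partition(Object key, int numPartitions) {
    return Math.floorMod(key.hashCode(), numPartitions);
}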
This can happen if the client cannot reach the Kafka broker by either hostname or IP.
Making an entry in the client's /etc/hosts (or C:\Windows\System32\drivers\etc\hosts on Windows) resolved this issue for me.
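For example, the entry would look like this (hypothetical address and broker name):

192.168.2.101   host-broker1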
I got the error from apache kafka:
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details
My Setup:
OS: Ubuntu 14.04
sbt: sbt launcher version 0.13.5
scala: Scala code runner version 2.9.2
Was able to fix it with these commands:
cd /home/el/apachekafka/kafka_2.10-0.8.1.1/libs
wget http://www.slf4j.org/dist/slf4j-1.7.7.tar.gz
tar -xvf slf4j-1.7.7.tar.gz
cd /home/el/apachekafka/kafka_2.10-0.8.1.1/libs/slf4j-1.7.7
cp slf4j-api-1.7.7.jar ..
cp slf4j-nop-1.7.7.jar ..
Then re-run the command and the producer doesn't throw any error.
For HDP Kafka, use broker port 6667.
For standalone Kafka, use broker port 9092.
The error was because of the port number we were using (HDP uses 6667, but we were using 9092):
bin/kafka-console-producer.sh --broker-list broker-ip:9092 --topic test //not working
bin/kafka-console-producer.sh --broker-list broker-ip:6667 --topic test //working
link: Kafka console producer Error in Hortonworks HDP 2.3 Sandbox
Not sure, but one possibility could be that the topic has not been created on Kafka.
Check the web UI for Kafka and make sure the topic you are using to send the data, i.e. "page_visits", has been created there.
If not, it is very easy to create the topic using the GUI.
I came across this error in Hortonworks HDP 2.2, where the default port is set to 6667.
If your Kafka server is running on the HDP sandbox, the resolution is to set
metadata.broker.list to 10.0.2.15:6667, as in the following code:
Properties props = new Properties();
props.put("metadata.broker.list", "10.0.2.15:6667");
props.put("serializer.class", "kafka.serializer.StringEncoder");
//props.put("producer.type", "async");
props.put("request.required.acks", "1");

ProducerConfig config = new ProducerConfig(props);
Producer<String, String> producer = new Producer<String, String>(config);

try {
    // jsonPayload is assumed to be defined elsewhere
    producer.send(new KeyedMessage<String, String>("zerg.hydra", jsonPayload));
    producer.close();
} catch (Exception e) {
    e.printStackTrace();
}