Creating partition for topic in kafka-node - apache-kafka

I have created a HighLevelProducer to publish messages to a topic stream that will be consumed by a ConsumerGroupStream using kafka-node. When I create multiple consumers from the same ConsumerGroup to consume from that same topic, only one partition is created and only one consumer is consuming. I have also tried to define the number of partitions for that topic, although I'm not sure whether it is required to define it when creating the topic, and if so, how many partitions I will need in advance. In addition, is it possible to push an object to the Transform stream rather than a string? (I currently use JSON.stringify because otherwise I get [Object object] in the consumer.)
const { KafkaClient, HighLevelProducer, ProducerStream } = require('kafka-node');
const { Transform } = require('stream');

const myProducerStream = ({ kafkaHost, highWaterMark, topic }) => {
    const kafkaClient = new KafkaClient({ kafkaHost });
    const producer = new HighLevelProducer(kafkaClient);
    const options = {
        highWaterMark,
        kafkaClient,
        producer
    };
    kafkaClient.refreshMetadata([topic], err => {
        if (err) throw err;
    });
    return new ProducerStream(options);
};
const transform = topic => new Transform({
    objectMode: true,
    decodeStrings: true,
    transform(obj, encoding, cb) {
        console.log(`pushing message ${JSON.stringify(obj)} to topic "${topic}"`);
        cb(null, {
            topic,
            messages: JSON.stringify(obj)
        });
    }
});

const publisher = (topic, kafkaHost, highWaterMark) => {
    const myTransform = transform(topic);
    const producer = myProducerStream({ kafkaHost, highWaterMark, topic });
    myTransform.pipe(producer);
    return myTransform;
};
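Because the Transform above runs in objectMode, plain objects can be written to the stream returned by publisher and are stringified on their way to the producer. A small usage sketch (topic, broker address, and highWaterMark are placeholders):
const stream = publisher('test', 'localhost:9092', 100);
stream.write({ recipient: 66, subject: 'hello' }); // arrives in Kafka as '{"recipient":66,"subject":"hello"}'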
The consumer:
const { ConsumerGroupStream } = require('kafka-node');
const { v4: uuidv4 } = require('uuid'); // uuid v7+ style import; older versions use require('uuid/v4')

const createConsumerStream = (sourceTopic, kafkaHost, groupId) => {
    const consumerOptions = {
        kafkaHost,
        groupId,
        protocol: ['roundrobin'],
        encoding: 'utf8',
        id: uuidv4(),
        fromOffset: 'latest',
        outOfRangeOffset: 'earliest',
    };
    const consumerGroupStream = new ConsumerGroupStream(consumerOptions, sourceTopic);
    consumerGroupStream.on('connect', () => {
        console.log(`Consumer id: "${consumerOptions.id}" is connected!`);
    });
    consumerGroupStream.on('error', (err) => {
        console.error(`Consumer id: "${consumerOptions.id}" encountered an error: ${err}`);
    });
    return consumerGroupStream;
};
const publisher = (func, destTopic, consumerGroupStream, kafkaHost, highWaterMark) => {
    const messageTransform = new AsyncMessageTransform(func, destTopic);
    const resultProducerStream = myProducerStream({ kafkaHost, highWaterMark, topic: destTopic });
    consumerGroupStream.pipe(messageTransform).pipe(resultProducerStream);
};

For the first question:
The maximum number of active consumers in a group equals the number of partitions of the topic.
So if TopicA has 1 partition and your consumer group has 5 consumers, 4 of them will be idle.
If TopicA has 5 partitions and your consumer group has 5 consumers, all of them will be active and consuming messages from your topic.
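To see this in action, you can start several members of the same group against a multi-partition topic. A quick sketch reusing the createConsumerStream helper from the question (broker address, topic, and group id are placeholders, and the topic is assumed to already have 3 partitions):
for (let i = 0; i < 3; i++) {
    const stream = createConsumerStream('test', 'localhost:9092', 'my-group');
    stream.on('data', message => {
        // with 3 partitions, each group member should receive messages from its own partition
        console.log(`member ${i} got partition ${message.partition}, offset ${message.offset}`);
    });
}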
To specify the number of partitions, create the topic from the CLI instead of relying on Kafka to auto-create it when you first publish messages.
To create a topic with a specific number of partitions:
bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 3 --partitions 3 --topic test
To alter the number of partitions of an existing topic:
bin/kafka-topics.sh --zookeeper zk_host:port/chroot --alter --topic test --partitions 40
Please note that you can only increase the number of partitions; you cannot decrease them.
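If you'd rather create the topic from Node instead of the CLI, kafka-node's KafkaClient also exposes a createTopics call. A minimal sketch (broker address and counts are placeholders; your brokers must permit topic creation):
const { KafkaClient } = require('kafka-node');

const client = new KafkaClient({ kafkaHost: 'localhost:9092' });
client.createTopics([
    { topic: 'test', partitions: 5, replicationFactor: 1 } // one partition per consumer you want active
], (error, result) => {
    if (error) throw error;
    console.log('created:', result); // result lists any per-topic errors
});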
Please refer to the Kafka docs: https://kafka.apache.org/documentation.html
Also, if you'd like to understand more about Kafka, check out the free book https://www.confluent.io/resources/kafka-the-definitive-guide/

Related

node-rdkafka - debug set to all but I only see broker transport failure

I am trying to connect to a Kafka server. Authentication is based on GSSAPI.
/opt/app-root/src/server/node_modules/node-rdkafka/lib/error.js:411
return new LibrdKafkaError(e);
^
Error: broker transport failure
at Function.createLibrdkafkaError (/opt/app-root/src/server/node_modules/node-rdkafka/lib/error.js:411:10)
at /opt/app-root/src/server/node_modules/node-rdkafka/lib/client.js:350:28
This is my test_kafka.js:
const Kafka = require('node-rdkafka');

const kafkaConf = {
    'group.id': 'espdev2',
    'enable.auto.commit': true,
    'metadata.broker.list': 'br01',
    'security.protocol': 'SASL_SSL',
    'sasl.kerberos.service.name': 'kafka',
    'sasl.kerberos.keytab': 'svc_esp_kafka_nonprod.keytab',
    'sasl.kerberos.principal': 'svc_esp_kafka_nonprod@INT.LOCAL',
    'debug': 'all',
    'enable.ssl.certificate.verification': true,
    //'ssl.certificate.location': 'some-root-ca.cer',
    'ssl.ca.location': 'some-root-ca.cer',
    //'ssl.key.location': 'svc_esp_kafka_nonprod.keytab',
};

const topics = 'hello1';
console.log(Kafka.features);

let readStream = Kafka.KafkaConsumer.createReadStream(kafkaConf, { 'auto.offset.reset': 'earliest' }, { topics });
readStream.on('data', function (message) {
    const messageString = message.value.toString();
    console.log(`Consumed message on Stream: ${messageString}`);
});
You can look at this issue for the explanation of this error:
https://github.com/edenhill/librdkafka/issues/1987
Taken from @edenhill:
As a general rule for librdkafka-based clients: given that the cluster and client are correctly configured, all errors can be ignored as they are most likely temporary and librdkafka will attempt to recover automatically. In this specific case; if a group coordinator request fails it will be retried (using any broker in state Up) within 500ms. The current assignment and group membership will not be affected, if a new coordinator is found before the missing heartbeats times out the membership (session.timeout.ms).
Auto offset commits will be stalled until a new coordinator is found. In a future version we'll extend the error type to include a severity, allowing applications to happily ignore non-terminal errors. At this time an application should consider all errors informational, and not terminal.
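In practice this means wiring up error listeners that log and carry on instead of crashing the process. A minimal sketch against the readStream from the question (node-rdkafka's read stream exposes the underlying consumer as .consumer):
readStream.on('error', err => {
    // stream-level errors; usually temporary, librdkafka keeps retrying internally
    console.warn(`stream error (informational): ${err.message}`);
});
readStream.consumer.on('event.error', err => {
    // per the note above, treat these as informational unless they persist
    console.warn(`consumer event.error (informational): ${err.message}`);
});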

Keyed kafka messages always seem to go to the same partition

My node application uses the kafka-node node module.
I have a kafka topic with three partitions as seen below:
Topic: NotifierTemporary PartitionCount: 3 ReplicationFactor: 3 Configs: segment.bytes=1073741824
Topic: NotifierTemporary Partition: 0 Leader: 1001 Replicas: 1001,1003,1002 Isr: 1001,1003,1002
Topic: NotifierTemporary Partition: 1 Leader: 1002 Replicas: 1002,1001,1003 Isr: 1002,1001,1003
Topic: NotifierTemporary Partition: 2 Leader: 1003 Replicas: 1003,1002,1001 Isr: 1003,1002,1001
When I write a series of keyed messages to my topic, they all appear to be written to the same partition. I would expect some of my different keyed messages to be sent to partitions 1 and 2.
Here is my log output from the consumer onMessage event function for several messages:
the message is: {"topic":"NotifierTemporary","value":"{\"recipient\":66,\"subject\":\"download complete\",\"message\":\"s3/123.jpg\"}","offset":345,"partition":0,"highWaterOffset":346,"key":"66","timestamp":"2020-03-19T00:16:57.783Z"}
the message is: {"topic":"NotifierTemporary","value":"{\"recipient\":222,\"subject\":\"download complete\",\"message\":\"s3/123.jpg\"}","offset":346,"partition":0,"highWaterOffset":347,"key":"222","timestamp":"2020-03-19T00:16:57.786Z"}
the message is: {"topic":"NotifierTemporary","value":"{\"recipient\":13,\"subject\":\"download complete\",\"message\":\"s3/123.jpg\"}","offset":347,"partition":0,"highWaterOffset":348,"key":"13","timestamp":"2020-03-19T00:16:57.791Z"}
the message is: {"topic":"NotifierTemporary","value":"{\"recipient\":316,\"subject\":\"download complete\",\"message\":\"s3/123.jpg\"}","offset":348,"partition":0,"highWaterOffset":349,"key":"316","timestamp":"2020-03-19T00:16:57.798Z"}
the message is: {"topic":"NotifierTemporary","value":"{\"recipient\":446,\"subject\":\"download complete\",\"message\":\"s3/123.jpg\"}","offset":349,"partition":0,"highWaterOffset":350,"key":"446","timestamp":"2020-03-19T00:16:57.806Z"}
the message is: {"topic":"NotifierTemporary","value":"{\"recipient\":66,\"subject\":\"download complete\",\"message\":\"s3/123.jpg\"}","offset":350,"partition":0,"highWaterOffset":351,"key":"66","timestamp":"2020-03-19T00:17:27.918Z"}
the message is: {"topic":"NotifierTemporary","value":"{\"recipient\":222,\"subject\":\"download complete\",\"message\":\"s3/123.jpg\"}","offset":351,"partition":0,"highWaterOffset":352,"key":"222","timestamp":"2020-03-19T00:17:27.920Z"}
the message is: {"topic":"NotifierTemporary","value":"{\"recipient\":13,\"subject\":\"download complete\",\"message\":\"s3/123.jpg\"}","offset":352,"partition":0,"highWaterOffset":353,"key":"13","timestamp":"2020-03-19T00:17:27.929Z"}
the message is: {"topic":"NotifierTemporary","value":"{\"recipient\":316,\"subject\":\"download complete\",\"message\":\"s3/123.jpg\"}","offset":353,"partition":0,"highWaterOffset":354,"key":"316","timestamp":"2020-03-19T00:17:27.936Z"}
Here is the kafka-node producer code to send a message:
/**
 * @description Adds a notification message to the Kafka topic that is not saved in a database.
 * @param {Int} recipientId - accountId of recipient of notification message
 * @param {String} subject - subject header of the message
 * @param {Object} message - message payload to send to recipient
 */
async sendTemporaryNotification(recipientId, subject, message) {
    const notificationMessage = {
        recipient: recipientId,
        subject,
        message,
    };
    // we need to validate this message schema - this will throw if invalid
    Joi.assert(notificationMessage, NotificationMessage);
    // partition based on the recipient
    const payloads = [
        { topic: KAFKA_TOPIC_TEMPORARY, messages: JSON.stringify(notificationMessage), key: notificationMessage.recipient },
    ];
    if (this.isReady) {
        await this.producer.sendAsync(payloads);
    }
    else {
        throw new ProducerNotReadyError('Notifier Producer not ready');
    }
}
As you can see, none of them are ever from partitions 1 & 2. This is true even after constantly sending messages with random integer keys for several minutes. What could I be doing wrong?
The correct partitionerType needs to be configured when you create the producer:
// Partitioner type (default = 0, random = 1, cyclic = 2, keyed = 3, custom = 4); default is 0
new Producer(client, { partitionerType: 3 });
See the docs: https://www.npmjs.com/package/kafka-node#producerkafkaclient-options-custompartitioner
Scarysize was correct about me not specifying the partitioner type. For anyone wondering what a complete partitioned producer looks like, you can reference this code. I've verified that it distributes messages based on the provided keys. I used a HighLevelProducer here because one of the main contributors to the kafka-node library suggested that others use it to solve partitioning issues; I have not verified whether this solution works with a regular Producer rather than the HighLevelProducer.
In this example, I'm sending notification messages to users based on their userId. That is the key on which messages are partitioned.
const { KafkaClient, HighLevelProducer, KeyedMessage } = require('kafka-node');
const Promise = require('bluebird');
const NotificationMessage = require(__dirname + '/../models/notificationMessage.js');
const ProducerNotReadyError = require(__dirname + '/../errors/producerNotReadyError.js');
const Joi = require('@hapi/joi');

const KAFKA_TOPIC_TEMPORARY = 'NotifierTemporary';

/**
 * @classdesc Producer that sends notification messages to Kafka.
 * @class
 */
class NotifierProducer {
    /**
     * Create NotifierProducer.
     * @constructor
     * @param {String} kafkaHost - address of kafka server
     */
    constructor(kafkaHost) {
        const client = Promise.promisifyAll(new KafkaClient({ kafkaHost }));
        const producerOptions = {
            partitionerType: HighLevelProducer.PARTITIONER_TYPES.keyed, // this is a keyed partitioner
        };
        this.producer = Promise.promisifyAll(new HighLevelProducer(client, producerOptions));
        this.isReady = false;
        this.producer.on('ready', async () => {
            await client.refreshMetadataAsync([KAFKA_TOPIC_TEMPORARY]);
            console.log('Notifier Producer is operational');
            this.isReady = true;
        });
        this.producer.on('error', err => {
            console.error('Notifier Producer error: ', err);
            this.isReady = false;
        });
    }

    /**
     * @description Adds a notification message to the Kafka topic that is not saved in a database.
     * @param {Int} recipientId - accountId of recipient of notification message
     * @param {String} subject - subject header of the message
     * @param {Object} message - message payload to send to recipient
     */
    async sendTemporaryNotification(recipientId, subject, message) {
        const notificationMessage = {
            recipient: recipientId,
            subject,
            message,
        };
        // we need to validate this message schema - this will throw if invalid
        Joi.assert(notificationMessage, NotificationMessage);
        // partition based on the recipient
        const messageKM = new KeyedMessage(notificationMessage.recipient, JSON.stringify(notificationMessage));
        const payloads = [
            { topic: KAFKA_TOPIC_TEMPORARY, messages: messageKM, key: notificationMessage.recipient },
        ];
        if (this.isReady) {
            await this.producer.sendAsync(payloads);
        }
        else {
            throw new ProducerNotReadyError('Notifier Producer not ready');
        }
    }
}

/**
 * Kafka topic that the producer and corresponding consumer will use to send temporary messages.
 * @type {string}
 */
NotifierProducer.KAFKA_TOPIC_TEMPORARY = KAFKA_TOPIC_TEMPORARY;

module.exports = NotifierProducer;
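For reference, a hypothetical usage of this class (broker address, recipient id, and payload are placeholders):
const NotifierProducer = require('./notifierProducer.js');

const producer = new NotifierProducer('localhost:9092');
// wait for the 'ready' handler above to flip isReady before sending;
// messages sharing a recipient key will consistently land on the same partition
setTimeout(() => {
    producer.sendTemporaryNotification(66, 'download complete', 's3/123.jpg')
        .catch(err => console.error(err));
}, 5000);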

Spring Cloud Stream > SendTo does not send to Kafka but directly via direct channel

I have two channels in my application which are bound to two Kafka topics:
input
error.input.my-group
Input is configured to send messages to the DLQ (error.input.my-group) in case of error.
I have a StreamListener on "error.input.my-group" which is configured to send the message back to the original channel.
@StreamListener(Channels.DLQ)
@SendTo(Channels.INPUT)
public Message<?> reRoute(Message<?> failed) {
    messageDeliveryService.waitUntilCanBeDelivered(failed);
    processed.incrementAndGet();

    Integer retries = failed.getHeaders().get(X_RETRIES_HEADER, Integer.class);
    retries = retries == null ? 1 : retries + 1;
    if (retries < MAX_RETRIES) {
        logger.info("Retry (count={}) for {}", retries, failed);
        return buildRetryMessage(failed, retries);
    }
    else {
        logger.error("Retries exhausted (-> sent to parking lot) for {}", failed);
        Channels.parkingLot().send(MessageBuilder.fromMessage(failed)
                .setHeader(BinderHeaders.PARTITION_OVERRIDE,
                        failed.getHeaders().get(KafkaHeaders.RECEIVED_PARTITION_ID))
                .build());
    }
    return null;
}

private Message<?> buildRetryMessage(Message<?> failed, int retries) {
    return MessageBuilder.fromMessage(failed)
            .setHeader(X_RETRIES_HEADER, retries)
            .setHeader(BinderHeaders.PARTITION_OVERRIDE,
                    failed.getHeaders().get(KafkaHeaders.RECEIVED_PARTITION_ID))
            .build();
}
Here is my Channels class
@Component
public interface Channels {

    String INPUT = "INPUT";
    // Default name used by SCS (error.<input-topic-name>.<group-name>)
    String DLQ = "error.input.my-group";
    String PARKING_LOT = "parkingLot.input.my-group";

    @Input(INPUT)
    SubscribableChannel input();

    @Input(DLQ)
    SubscribableChannel dlq();

    @Output(PARKING_LOT)
    MessageChannel parkingLot();
}
Here is my configuration
spring:
  cloud:
    stream:
      default:
        group: my-group
        binder:
          headerMode: headers
      kafka:
        binder:
          # Necessary in order to commit the message to all the Kafka brokers handling the partition -> maximum durability
          # -1 = all
          requiredAcks: -1
          brokers: bootstrap.kafka.svc.cluster.local:9092,bootstrap.kafka.svc.cluster.local:9093,bootstrap.kafka.svc.cluster.local:9094,bootstrap.kafka.svc.cluster.local:9095,bootstrap.kafka.svc.cluster.local:9096,bootstrap.kafka.svc.cluster.local:9097
        bindings:
          input:
            consumer:
              partitioned: true
              enableDlq: true
              dlqProducerProperties:
                configuration:
                  key.serializer: "org.apache.kafka.common.serialization.ByteArraySerializer"
          "[error.input.my-group]":
            consumer:
              # We cannot lose any message and we don't have any DLQ for the DLQ, therefore we only commit in case of success
              autoCommitOnError: false
              ackEachRecord: true
              partitioned: true
              enableDlq: false
      bindings:
        input:
          contentType: application/xml
          destination: input
        "[error.input.my-group]":
          contentType: application/xml
          destination: error.input.my-group
        "[parkingLot.input.my-group]":
          contentType: application/xml
          destination: parkingLot.input.my-group
The problem is that my messages are never pushed to Kafka again but are delivered directly to my input channel. Is there something I misunderstood?
In order for @SendTo to send to the Kafka destination instead of directly to the channel, you need an output binding.

Producing from localhost to Kafka in HDP Sandbox 2.6.5 not working

I am writing a Kafka client producer as follows:
public class BasicProducerExample {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092");
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        props.put(ProducerConfig.RETRIES_CONFIG, 0);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        props.put("batch.size", "16384"); // producer batch size in bytes

        Producer<String, String> producer = new KafkaProducer<String, String>(props);
        TestCallback callback = new TestCallback();
        for (long i = 0; i < 2; i++) {
            // topic and message
            //ProducerRecord<String, String> data = new ProducerRecord<String, String>("dke", "key-" + i, "message-" + i);
            ProducerRecord<String, String> data = new ProducerRecord<String, String>("dke", "" + i);
            producer.send(data, callback);
        }
        producer.close();
    }

    private static class TestCallback implements Callback {
        @Override
        public void onCompletion(RecordMetadata recordMetadata, Exception e) {
            if (e != null) {
                System.out.println("Error while producing message to topic :" + recordMetadata);
                e.printStackTrace();
            } else {
                String message = String.format("sent message to topic:%s partition:%s offset:%s",
                        recordMetadata.topic(), recordMetadata.partition(), recordMetadata.offset());
                System.out.println(message);
            }
        }
    }
}
OUTPUT:
Error while producing message to topic :null
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
NOTE:
The broker at localhost:6667 is working.
In your property for BOOTSTRAP_SERVERS_CONFIG, try changing the port number to 6667.
I use Apache Kafka on a Hortonworks (HDP 2.x release) installation. The error message encountered means that the Kafka producer was not able to push the data to the segment log file. From a command-line console, that would mean two things:
You are using an incorrect port for the brokers
Your listener config in server.properties is not working
If you encounter the error message while writing via the Scala API, additionally check the connection to the Kafka cluster using telnet <cluster-host> <broker-port>
NOTE: If you are using the Scala API to create a topic, it takes some time for the brokers to know about the newly created topic. So, immediately after topic creation, the producers might fail with the error Failed to update metadata after 60000 ms.
I did the following checks to resolve this issue:
The first difference, once I checked via Ambari, is that Kafka brokers listen on port 6667 on HDP 2.x (Apache Kafka defaults to 9092):
listeners=PLAINTEXT://localhost:6667
Next, use the IP instead of localhost.
I executed netstat -na | grep 6667
tcp 0 0 192.30.1.5:6667 0.0.0.0:* LISTEN
tcp 1 0 192.30.1.5:52242 192.30.1.5:6667 CLOSE_WAIT
tcp 0 0 192.30.1.5:54454 192.30.1.5:6667 TIME_WAIT
So, I modified the producer call to use the IP and not localhost:
./kafka-console-producer.sh --broker-list 192.30.1.5:6667 --topic rdl_test_2
To monitor if you have new records being written, monitor the /kafka-logs folder.
cd /kafka-logs/<topic name>/
ls -lart
-rw-r--r--. 1 kafka hadoop 0 Feb 10 07:24 00000000000000000000.log
-rw-r--r--. 1 kafka hadoop 10485756 Feb 10 07:24 00000000000000000000.timeindex
-rw-r--r--. 1 kafka hadoop 10485760 Feb 10 07:24 00000000000000000000.index
Once the producer successfully writes, the segment log file 00000000000000000000.log will grow in size.
See the size below:
-rw-r--r--. 1 kafka hadoop 10485760 Feb 10 07:24 00000000000000000000.index
-rw-r--r--. 1 kafka hadoop       45 Feb 10 09:16 00000000000000000000.log
-rw-r--r--. 1 kafka hadoop 10485756 Feb 10 07:24 00000000000000000000.timeindex
At this point, you can run kafka-console-consumer.sh:
./kafka-console-consumer.sh --bootstrap-server 192.30.1.5:6667 --topic rdl_test_2 --from-beginning
The response is hello world.
After this step, if you want to produce messages via the Scala API, then change the listeners value (from localhost to a public IP) and restart the Kafka brokers via Ambari:
listeners=PLAINTEXT://192.30.1.5:6667
A Sample producer will be as follows:
package com.scalakafka.sample

import java.util.Properties
import java.util.concurrent.TimeUnit

import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}
import org.apache.kafka.common.serialization.{StringDeserializer, StringSerializer}

class SampleKafkaProducer {

  case class KafkaProducerConfigs(brokerList: String = "192.30.1.5:6667") {
    val properties = new Properties()
    val batchsize: java.lang.Integer = 1
    properties.put("bootstrap.servers", brokerList)
    properties.put("key.serializer", classOf[StringSerializer])
    properties.put("value.serializer", classOf[StringSerializer])
    // properties.put("serializer.class", classOf[StringDeserializer])
    properties.put("batch.size", batchsize)
    // properties.put("linger.ms", 1)
    // properties.put("buffer.memory", 33554432)
  }

  val producer = new KafkaProducer[String, String](KafkaProducerConfigs().properties)

  def produce(topic: String, messages: Iterable[String]): Unit = {
    messages.foreach { m =>
      println(s"Sending $topic and message is $m")
      val result = producer.send(new ProducerRecord(topic, m)).get()
      println(s"the write status is ${result}")
    }
    producer.flush()
    producer.close(10L, TimeUnit.MILLISECONDS)
  }
}
Hope this helps someone.

kafka spout can't connect to kafka topic

I'm trying to connect a KafkaSpout, belonging to a Storm topology running on a LocalCluster object, to a Kafka topic. I wrote this code according to the documentation I found at https://github.com/apache/storm/tree/master/external/storm-kafka.
private static final String brokerZkStr = "localhost:2181";
private static final String topic = "/test-topic-multi";

public void startTopology() {
    BrokerHosts hosts = new ZkHosts(brokerZkStr);
    SpoutConfig conf = new SpoutConfig(hosts, topic, "localhost:2181", UUID
            .randomUUID().toString());
    KafkaSpout kafkaSpout = new KafkaSpout(conf);

    TopologyBuilder builder = new TopologyBuilder();
    builder.setSpout("kafka-spout", kafkaSpout);

    Config topConfig = new Config();
    topConfig.setDebug(true);

    LocalCluster cluster = new LocalCluster();
    cluster.submitTopology("HelloStorm", topConfig, builder.createTopology());
}
I want to use a ZooKeeper instance running at localhost:2181, but when I try to run the code I get the following error:
java.lang.RuntimeException: java.lang.RuntimeException: java.lang.IllegalArgumentException: Invalid path string "/brokers/topics//test-topic-multi/partitions" caused by empty node name specified @16
at storm.kafka.DynamicBrokersReader.getBrokerInfo(DynamicBrokersReader.java:81)
at storm.kafka.trident.ZkBrokerReader.<init>(ZkBrokerReader.java:42)
at storm.kafka.KafkaUtils.makeBrokerReader(KafkaUtils.java:57)
at storm.kafka.KafkaSpout.open(KafkaSpout.java:87)
It seems to be just a problem of wrong settings, but I can't solve it.
PS: The Kafka configuration is the following: 1 instance of ZooKeeper and 2 brokers running on localhost:9092 and localhost:9093.
I think I solved it. I had just messed up the configuration code. The correct version is:
private static final String topic = "test-topic-multi";
....
SpoutConfig conf = new SpoutConfig(hosts, topic, "/" + topic, UUID
        .randomUUID().toString());
Your kafka topic name is not valid. Why are you attempting to connect to a topic which does not exist?
sql@injection:~$ kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic /test-topic
Error while executing topic command topic name /test-topic is illegal, contains a character other than ASCII alphanumerics, '.', '_' and '-'
kafka.common.InvalidTopicException: topic name /test-topic is illegal, contains a character other than ASCII alphanumerics, '.', '_' and '-'
at kafka.common.Topic$.validate(Topic.scala:42)
at kafka.admin.AdminUtils$.createOrUpdateTopicPartitionAssignmentPathInZK(AdminUtils.scala:181)
at kafka.admin.AdminUtils$.createTopic(AdminUtils.scala:172)
at kafka.admin.TopicCommand$.createTopic(TopicCommand.scala:93)
at kafka.admin.TopicCommand$.main(TopicCommand.scala:55)
at kafka.admin.TopicCommand.main(TopicCommand.scala)
Are you really sure the forward slash belongs in the topic name?