Connect to a Cloudera Kafka cluster using Kafka Connect from a RedHat client machine - apache-kafka

I have to connect to a Cloudera Kafka cluster using Kafka Connect.
Kafka (Apache) version => 0.11
Cloudera version = CDK 3.1
The Cloudera cluster is secured with Kerberos.
I am able to connect when the Kafka Connect client runs on CentOS 7.
I am not able to connect when the Kafka Connect client runs on RedHat 8.
The error is as below:
[2021-05-25 04:54:56,855] WARN [AdminClient clientId=adminclient-1] Connection to node -2 (xxxxxxxx:9092) terminated during authentication. This may happen due to any of the following reasons: (1) Authentication failed due to invalid credentials with brokers older than 1.0.0, (2) Firewall blocking Kafka TLS traffic (eg it may only allow HTTPS traffic), (3) Transient network issue. (org.apache.kafka.clients.NetworkClient:769)
SSL is not enabled on the Cloudera cluster; the brokers run on port 9092.
I then tried the console producer as well, which gave the error below, again only on RedHat 8:
[2021-05-29 09:16:31,772] WARN [Producer clientId=console-producer] Connection to node -4 (xxxxk4.domain/xx.xxx.xx.47:9092) terminated during authentication. This may happen due to any of the following reasons: (1) Authentication failed due to invalid credentials with brokers older than 1.0.0, (2) Firewall blocking Kafka TLS traffic (eg it may only allow HTTPS traffic), (3) Transient network issue. (org.apache.kafka.clients.NetworkClient)
[2021-05-29 09:16:31,772] WARN [Producer clientId=console-producer] Bootstrap broker xxxxk4.domain:9092 (id: -4 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
[2021-05-29 09:16:31,895] WARN [Producer clientId=console-producer] Connection to node -1 (xxxxk1.domain/xx.xxx.xx.44:9092) terminated during authentication. This may happen due to any of the following reasons: (1) Authentication failed due to invalid credentials with brokers older than 1.0.0, (2) Firewall blocking Kafka TLS traffic (eg it may only allow HTTPS traffic), (3) Transient network issue. (org.apache.kafka.clients.NetworkClient)
JAAS file for the console producer:
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="/home/test/keytabs/username.keytab"
principal="user1#DOMAIN.DOMAINaABC.COM";
};
client.properties file:
security.protocol=SASL_PLAINTEXT
sasl.kerberos.service.name=kafka
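For context, the JAAS file and client.properties above are wired into the console clients through KAFKA_OPTS on the client host; a minimal sketch, using placeholder paths based on the ones in this post:
export KAFKA_OPTS="-Djava.security.auth.login.config=/home/test/jaas.conf -Djava.security.krb5.conf=/etc/krb5.conf"
sh kafka-console-producer.sh --broker-list xxxxk1.domain:9092 --producer.config /opt/analytics/xxxx/client.properties --topic sample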
Kafka Connect properties file:
consumer.security.protocol=SASL_PLAINTEXT
consumer.sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
useKeyTab=true \
storeKey=true \
keyTab="/home/test/keytabs/user.keytab" \
principal="username#DOMAIN.XXXXXABC.COM";
How we ran it:
bin/connect-standalone.sh config/connect-standalone.properties default-connectortest.properties
Console producer command:
sh kafka-console-producer.sh --broker-list namek1.domain.lk:9092,xxxxk2.xx.company.lk:9092, --producer.config /opt/analytics/xxxx/client.properties --topic sample --property print.key=true --property key.separator=" -> "
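To rule out a keytab or ticket problem on the RedHat 8 host itself, a quick sanity check is to request a ticket directly from the same keytab and then list it (principal and path are the placeholders used above):
kinit -kt /home/test/keytabs/username.keytab user1@DOMAIN.DOMAINaABC.COM
klist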
cat /etc/krb5.conf
# Other applications require this directory to perform krb5 configuration.
includedir /etc/krb5.conf.d/
[libdefaults]
default_realm = DOMAIN.NAME1ABC.COM
dns_lookup_kdc = true
dns_lookup_realm = false
ticket_lifetime = 86400
renew_lifetime = 604800
forwardable = true
default_tgs_enctypes = aes256-cts
default_tkt_enctypes = aes256-cts
permitted_enctypes = aes256-cts
udp_preference_limit = 1
kdc_timeout = 9000
ignore_acceptor_hostname = true
[realms]
NAME1.NAME1ABC.COM = {
kdc = name.DOMAIN.NAME1ABC.COM
admin_server = name2.name1.nameabc.com
}

Related

Kafka Producer Connection Issue

I am trying to send messages through a Kafka producer but am getting the following error. I am using the following command to establish the connection:
export KAFKA_OPTS="-Djava.security.auth.login.config=$OSI_HOME/jaas.conf -Djava.security.krb5.conf=$OSI_HOME/krb5.conf"
$OSI_HOME/Custom/confluent/bin/kafka-console-producer --topic testTopic --bootstrap-server <server list> --producer.config producer.properties
Here's the JAAS config I am using
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="[Location of keytabfile]"
serviceName="kafka"
useTicketCache=true
principal="svc_account@REALM";
};
I can see producer was able to connect. But when I try to send messages I get following error.
[2022-02-23 12:45:56,562] WARN [Producer clientId=console-producer] Connection to node -7 terminated during authentication. This may happen due to any of the following reasons: (1) Authentication failed due to invalid credentials with brokers older than 1.0.0, (2) Firewall blocking Kafka TLS traffic (eg it may only allow HTTPS traffic), (3) Transient network issue. (org.apache.kafka.clients.NetworkClient)
[2022-02-23 12:45:56,563] WARN [Producer clientId=console-producer] Bootstrap broker (id: -7 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
[2022-02-23 12:45:56,707] WARN [Producer clientId=console-producer] Connection to node -13 terminated during authentication. This may happen due to any of the following reasons: (1) Authentication failed due to invalid credentials with brokers older than 1.0.0, (2) Firewall blocking Kafka TLS traffic (eg it may only allow HTTPS traffic), (3) Transient network issue. (org.apache.kafka.clients.NetworkClient)
[2022-02-23 12:45:56,707] WARN [Producer clientId=console-producer] Bootstrap broker (id: -13 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
[2022-02-23 12:45:56,863] WARN [Producer clientId=console-producer] Connection to node -12 terminated during authentication. This may happen due to any of the following reasons: (1) Authentication failed due to invalid credentials with brokers older than 1.0.0, (2) Firewall blocking Kafka TLS traffic (eg it may only allow HTTPS traffic), (3) Transient network issue. (org.apache.kafka.clients.NetworkClient)
[2022-02-23 12:45:56,863] WARN [Producer clientId=console-producer] Bootstrap broker (id: -12 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
Does anyone know how to resolve this? I tried the telnet command for the broker list and it connects, so it doesn't seem to be a firewall issue. I can also do kinit from the server, so it doesn't seem to be an authentication issue.
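When the connection terminates during authentication like this, one way to get more detail is to turn on Kerberos debug output in the client JVM before rerunning the producer; a sketch reusing the KAFKA_OPTS line from above with the standard JVM debug flag added:
export KAFKA_OPTS="-Djava.security.auth.login.config=$OSI_HOME/jaas.conf -Djava.security.krb5.conf=$OSI_HOME/krb5.conf -Dsun.security.krb5.debug=true"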

Mirror Maker2 not able to connect to target cluster broker

I have two Kafka clusters on AWS MSK (in the same environment and region). I have a Kafka Connect cluster set up on the destination cluster and have set up a MirrorMaker connector to run. The submission of the connector is fine and there are no errors.
When I try to check the status of the connector, it says RUNNING:
{"name":"mirror-maker-test-connector","connector":{"state":"RUNNING","worker_id":"<ip>:<port>"},"tasks":[task_list],"type":"source"}
However, I see the following exception:
[2022-01-12 19:46:33,772] DEBUG [Producer clientId=connector-producer-mirror-maker-test-connector-0] Connection with b-2.<broker_ip> disconnected (org.apache.kafka.common.network.Selector)
java.io.EOFException
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:120)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402)
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576)
at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:551)
at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243)
at java.base/java.lang.Thread.run(Thread.java:829)
[2022-01-12 19:46:33,773] DEBUG [Producer clientId=connector-producer-mirror-maker-test-connector-0] Node -1 disconnected. (org.apache.kafka.clients.NetworkClient)
[2022-01-12 19:46:33,773] WARN [Producer clientId=connector-producer-mirror-maker-test-connector-0] Bootstrap broker b-2.<broker_ip>:9094 (id: -1 rack: null) disconnected (org.apache.kafka.clients.NetworkClient)
I am able to connect to the specified broker with netcat from within the Kafka Connect k8s pod.
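As a further check beyond netcat (which only shows that the TCP port is open): the EOFException above is what a plaintext client typically sees when it hits a TLS-only listener (MSK uses port 9094 for TLS), so probing the port with openssl from the same pod can confirm it (broker name is the placeholder from above):
openssl s_client -connect b-2.<broker_ip>:9094 </dev/null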
Has anyone faced this issue before?
I got it to work - I had to add SSL properties for both the consumer and producer sides when submitting the MirrorMaker connector:
"target.cluster.security.protocol": "SSL",
"target.cluster.ssl.truststore.location":"<certs_path>",
"target.cluster.ssl.truststore.password": "<password>"
"source.cluster.security.protocol": "SSL",
"source.cluster.ssl.truststore.location": "<certs_path>",
"source.cluster.ssl.truststore.password": "<password>"

Invalid credentials with SASL_PLAINTEXT mechanism SCRAM-SHA-256 - InvalidACL for /config/users/admin

I am using kafka_2.3.0 on Ubuntu 16.04.
Below are the configurations for the Kafka brokers and the ZooKeeper nodes. Currently I am testing this on a single machine, so the IP stays the same throughout and only the port differs.
kafka-broker 1 configuration.
broker.id=1
listeners=SASL_PLAINTEXT://192.168.1.172:9092
advertised.listeners=SASL_PLAINTEXT://192.168.1.172:9092
log.dirs=/home/emgda/data/kafka/1/kafka-logs
zookeeper.connect=192.168.1.172:2181,192.168.1.172:2182,192.168.1.172:2183
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
sasl.enabled.mechanisms=SCRAM-SHA-256
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
security.inter.broker.protocol=SASL_PLAINTEXT
listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="admin-secret";
kafka-broker 2 configuration.
broker.id=2
listeners=SASL_PLAINTEXT://192.168.1.172:9093
advertised.listeners=SASL_PLAINTEXT://192.168.1.172:9093
log.dirs=/home/emgda/data/kafka/2/kafka-logs
zookeeper.connect=192.168.1.172:2181,192.168.1.172:2182,192.168.1.172:2183
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
sasl.enabled.mechanisms=SCRAM-SHA-256
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
security.inter.broker.protocol=SASL_PLAINTEXT
listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="admin-secret";
kafka-broker 3 configuration.
broker.id=3
listeners=SASL_PLAINTEXT://192.168.1.172:9094
advertised.listeners=SASL_PLAINTEXT://192.168.1.172:9094
log.dirs=/home/emgda/data/kafka/3/kafka-logs
zookeeper.connect=192.168.1.172:2181,192.168.1.172:2182,192.168.1.172:2183
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
sasl.enabled.mechanisms=SCRAM-SHA-256
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
security.inter.broker.protocol=SASL_PLAINTEXT
listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="admin" password="admin-secret";
kafka_jaas.config
KafkaServer {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="admin"
password="admin-secret";
};
Client {
org.apache.zookeeper.server.auth.DigestLoginModule required
username="super"
password="adminsecret";
};
zookeeper_jaas.conf
Server {
org.apache.zookeeper.server.auth.DigestLoginModule required
user_super="adminsecret";
};
Zookeeper-node 1 configuration
dataDir=/home/emgda/data/zookeeper/1/
clientPort=2181
server.1=localhost:2666:3666
server.2=localhost:2667:3667
server.3=localhost:2668:3668
requireClientAuthScheme=sasl
Zookeeper-node 2 configuration
dataDir=/home/emgda/data/zookeeper/2/
clientPort=2182
server.1=localhost:2666:3666
server.2=localhost:2667:3667
server.3=localhost:2668:3668
requireClientAuthScheme=sasl
Zookeeper-node 3 configuration
dataDir=/home/emgda/data/zookeeper/3/
clientPort=2183
server.1=localhost:2666:3666
server.2=localhost:2667:3667
server.3=localhost:2668:3668
requireClientAuthScheme=sasl
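On Linux, the two JAAS files above are handed to the ZooKeeper and Kafka JVMs through KAFKA_OPTS before starting each process; a minimal sketch, with the file locations as placeholders:
export KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/zookeeper_jaas.conf"
./bin/zookeeper-server-start.sh config/zookeeper.properties
export KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/kafka_jaas.config"
./bin/kafka-server-start.sh config/server.properties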
The ZooKeeper nodes in the cluster start properly, and Kafka is also able to authenticate to ZooKeeper. The ZooKeeper logs below show what happens when the first Kafka broker comes up:
[2019-12-30 13:35:29,465] INFO Accepted socket connection from /192.168.1.172:42362 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2019-12-30 13:35:29,480] INFO Client attempting to establish new session at /192.168.1.172:42362 (org.apache.zookeeper.server.ZooKeeperServer)
[2019-12-30 13:35:29,487] INFO Established session 0x10000d285210003 with negotiated timeout 6000 for client /192.168.1.172:42362 (org.apache.zookeeper.server.ZooKeeperServer)
[2019-12-30 13:35:29,529] INFO Successfully authenticated client: authenticationID=super; authorizationID=super. (org.apache.zookeeper.server.auth.SaslServerCallbackHandler)
[2019-12-30 13:35:29,529] INFO Setting authorizedID: super (org.apache.zookeeper.server.auth.SaslServerCallbackHandler)
[2019-12-30 13:35:29,530] INFO adding SASL authorization for authorizationID: super (org.apache.zookeeper.server.ZooKeeperServer)
[2019-12-30 13:36:54,999] INFO Closed socket connection for client /192.168.1.172:42362 which had sessionid 0x10000d285210003 (org.apache.zookeeper.server.NIOServerCnxn)
Error while starting the first Kafka broker:
[2019-12-30 13:35:58,417] ERROR [Controller id=1, targetBrokerId=1] Connection to node 1 (192.168.1.172/192.168.1.172:9092) failed authentication due to: Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-256 (org.apache.kafka.clients.NetworkClient)
[2019-12-30 13:35:58,421] INFO [SocketServer brokerId=1] Failed authentication with /192.168.1.172 (Authentication failed during authentication due to invalid credentials with SASL mechanism SCRAM-SHA-256) (org.apache.kafka.common.network.Selector)
When I try to create a Kafka broker user with the command below, I get the following error:
emgda#ubuntu:~/softwares/kafka_2.12-2.3.0$ ./bin/kafka-configs.sh --zookeeper 192.168.1.172:2181 --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=admin-secret],SCRAM-SHA-512=[password=admin-secret]' --entity-type users --entity-name admin
Error while executing config command with args '--zookeeper 192.168.1.172:2181 --alter --add-config SCRAM-SHA-256=[iterations=8192,password=admin-secret],SCRAM-SHA-512=[password=admin-secret] --entity-type users --entity-name admin'
org.apache.zookeeper.KeeperException$InvalidACLException: KeeperErrorCode = InvalidACL for /config/users/admin
at org.apache.zookeeper.KeeperException.create(KeeperException.java:124)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
at kafka.zookeeper.AsyncResponse.maybeThrow(ZooKeeperClient.scala:560)
at kafka.zk.KafkaZkClient.createRecursive(KafkaZkClient.scala:1610)
at kafka.zk.KafkaZkClient.createOrSet$1(KafkaZkClient.scala:357)
at kafka.zk.KafkaZkClient.setOrCreateEntityConfigs(KafkaZkClient.scala:367)
at kafka.zk.AdminZkClient.changeEntityConfig(AdminZkClient.scala:378)
at kafka.zk.AdminZkClient.changeUserOrUserClientIdConfig(AdminZkClient.scala:312)
at kafka.zk.AdminZkClient.changeConfigs(AdminZkClient.scala:276)
at kafka.admin.ConfigCommand$.alterConfig(ConfigCommand.scala:153)
at kafka.admin.ConfigCommand$.processCommandWithZk(ConfigCommand.scala:104)
at kafka.admin.ConfigCommand$.main(ConfigCommand.scala:80)
at kafka.admin.ConfigCommand.main(ConfigCommand.scala)
Question:
What am I missing so that the Kafka broker authenticates properly?

Kafka2.2.0 with SASL-SCRAM - SSL peer is not authenticated, returning ANONYMOUS instead

I get the Kafka "SSL peer is not authenticated, returning ANONYMOUS instead" error when a client connects to the broker's SASL port; connections on the PLAINTEXT and SSL ports work fine.
I have Kafka 2.2.0 on Windows with SSL enabled, where the broker's PLAINTEXT listener runs on 9092 and SSL on 9093. On top of that, I configured SASL with the SCRAM mechanism on listener port 9094, and I end up with the error mentioned in the problem summary when running the producer as kafka-console-producer.bat --broker-list localhost:9094 --topic xxx
Here are the SASL configurations; other settings, such as the basic and SSL configuration, are not shown here.
zookeeper.properties
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
jaasLoginRenew=3600000
server.properties
listeners=PLAINTEXT://0.0.0.0:9092,SSL://0.0.0.0:9093,SASL_SSL://0.0.0.0:9094
advertised.listeners=PLAINTEXT://localhost:9092,SSL://localhost:9093,SASL_SSL://localhost:9094
sasl.enabled.mechanisms=SCRAM-SHA-256
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
zookeeper_server_jaas.conf
Server {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="admin"
password="admin-pwd"
user_admin="admin-pwd"
user_other1="other1-pwd"
user_other2="other2-pwd";
};
producer.properties
security.protocol=SSL
kafka_server_jaas.conf
KafkaServer {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="admin"
password="admin-pwd";
};
Client {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin-pwd";
};
kafka_client_jaas.conf
KafkaClient {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="admin"
password="admin-pwd";
};
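For the SASL_SSL listener on 9094, the client-side properties file would also normally have to name the protocol and the SCRAM mechanism and trust the broker certificate; a sketch with placeholder truststore values:
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-256
ssl.truststore.location=<path to truststore.jks>
ssl.truststore.password=<password>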
Start the Zookeeper as
SET ZOO_LOG_DIR=C:/Work/kafka_2.11-2.2.0-for-ssl/zookeeper-data
SET KAFKA_HOME=C:/Work/kafka_2.11-2.2.0-for-ssl
set KAFKA_OPTS=-Djava.security.auth.login.config=%KAFKA_HOME%/config/zookeeper_server_jaas.conf
zookeeper-server-start.bat %KAFKA_HOME%/config/zookeeper.properties
Start the kafka as
set KAFKA_HOME=C:/Work/kafka_2.11-2.2.0-for-ssl
set KAFKA_OPTS=-Djava.security.auth.login.config=%KAFKA_HOME%/config/kafka_server_jaas.conf
kafka-server-start.bat %KAFKA_HOME%/config/server.properties
Start the Producer as
SET KAFKA_HOME=C:/Work/kafka_2.11-2.2.0-for-ssl
set KAFKA_OPTS=-Djava.security.auth.login.config=%KAFKA_HOME%/config/kafka_client_jaas.conf
kafka-console-producer.bat --broker-list localhost:9094 --topic xxx
The producer only works if I use broker port 9092. Did I miss something and end up with a misconfiguration? Any inputs?
Updated:
Here is the error when connecting the producer/consumer:
[2019-10-14 15:39:42,108] DEBUG [SslTransportLayer channelId=127.0.0.1:9094-127.0.0.1:63848-0 key=sun.nio.ch.SelectionKeyImpl@222a223c] SSL peer is not authenticated, returning ANONYMOUS instead (org.apache.kafka.common.network.SslTransportLayer)
[2019-10-14 15:39:42,108] DEBUG [SslTransportLayer channelId=127.0.0.1:9094-127.0.0.1:63848-0 key=sun.nio.ch.SelectionKeyImpl@222a223c] SSL handshake completed successfully with peerHost '127.0.0.1' peerPort 63848 peerPrincipal 'User:ANONYMOUS' cipherSuite 'TLS_DHE_DSS_WITH_AES_256_CBC_SHA256' (org.apache.kafka.common.network.SslTransportLayer)
[2019-10-14 15:39:42,108] DEBUG Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication (org.apache.kafka.common.security.authenticator.SaslServerAuthenticator)
[2019-10-14 15:39:42,108] DEBUG Handling Kafka request API_VERSIONS during authentication (org.apache.kafka.common.security.authenticator.SaslServerAuthenticator)
[2019-10-14 15:39:42,108] DEBUG Set SASL server state to HANDSHAKE_REQUEST during authentication (org.apache.kafka.common.security.authenticator.SaslServerAuthenticator)
[2019-10-14 15:39:42,108] DEBUG Set SASL server state to FAILED during authentication (org.apache.kafka.common.security.authenticator.SaslServerAuthenticator)
[2019-10-14 15:39:42,108] INFO [SocketServer brokerId=0] Failed authentication with 127.0.0.1/127.0.0.1 (Unexpected Kafka request of type METADATA during SASL handshake.) (org.apache.kafka.common.network.Selector)
I had the same problem. Authentication with SASL/SCRAM wasn't working on Kafka versions 2.2.x and 2.3.x; on 2.1 it was OK.
In the end I resolved the issue by providing the ZooKeeper chroot path (/kafkaTest) when creating the principals:
./kafka-configs --zookeeper zookeeper-01:2181/kafkaTest --alter --add-config 'SCRAM-SHA-256=[password=admin-secret],SCRAM-SHA-512=[password=admin-secret]' --entity-type users --entity-name admin
It seems that when the credentials are created in the ZooKeeper root path, Kafka can't find them to validate against.
I hope it will solve your issue as well!
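If the credentials are created under a chroot like this, the brokers' zookeeper.connect must point at the same chroot so they look up the SCRAM users under the path where they were written, e.g.:
zookeeper.connect=zookeeper-01:2181/kafkaTest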

Native Apache Kafka and Zookeeper with Confluent components?

Could you please tell me about the compatibility of Apache Kafka and ZooKeeper (the native Apache distributions) with some of Confluent's components? I have already installed Kafka and ZooKeeper in my environment as multi-node clusters, but now I need to add schema-registry and kafka-connect.
So I tried to deploy the Confluent Schema Registry from their official Docker image. I logged in and was able to successfully telnet to the Kafka broker on port 9093:
root@schema-0:/usr/bin# telnet kafka-0.kafka-hs 9093
Trying 10.244.3.47...
Connected to kafka-0.kafka-hs.log-platform.svc.cluster.local.
Escape character is '^]'.
After that I tried to do some tests:
# /usr/bin/kafka-avro-console-producer \
--broker-list localhost:9093 --topic bar \
--property value.schema='{"type":"record","name":"myrecord","fields":[{"name":"f1","type":"string"}]}'
Add some values:
{"f1": "value1"}
But no luck. I got the following errors:
value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer (org.apache.kafka.clients.producer.ProducerConfig)
[2018-01-28 11:23:23,561] INFO Kafka version : 1.0.0-cp1 (org.apache.kafka.common.utils.AppInfoParser)
[2018-01-28 11:23:23,561] INFO Kafka commitId : ec61c5e93da662df (org.apache.kafka.common.utils.AppInfoParser){"f1": "value1"}
[2018-01-28 11:23:36,233] WARN [Producer clientId=console-producer] Connection to node -1 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2018-01-28 11:23:36,335] WARN [Producer clientId=console-producer] Connection to node -1 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2018-01-28 11:23:36,486] WARN [Producer clientId=console-producer] Connection to node -1 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
The entire system is running on Kubernetes.
Confluent Platform is Apache Kafka with additional components (such as Schema Registry) bundled with it. The error you're getting is related to the network configuration: you need to make sure that your broker is reachable from other nodes, including Schema Registry. In your Schema Registry setup you've specified broker-list localhost:9093, but this should be your Kafka broker's address. In addition, as Dmitry Minkovsky mentions, make sure you've set the advertised listeners on your broker. This article might help.
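Concretely, in the Schema Registry configuration the Kafka store should point at the broker's reachable, advertised address rather than localhost; a minimal sketch, assuming the hostname from the telnet test above and a plaintext listener on that port:
listeners=http://0.0.0.0:8081
kafkastore.bootstrap.servers=PLAINTEXT://kafka-0.kafka-hs:9093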
The error you're getting is related to the network configuration. You need to make sure that your Broker is available to other nodes, including Schema Registry. In your Schema Registry config you've specified broker-list localhost:9093 but this should be your Kafka broker. In addition as Dmitry Minkovsky mentions, make sure you've set the advertised listener in your broker. This article might help.