Register a Debezium connector in Kafka Connect and start the service - apache-kafka

I installed a Kafka broker on a server. Inside the config folder I added a connector1.properties file with the connector configuration, which points to another server that hosts the database to be read; from that other server I only need the database.
I also created a directory called libs containing another directory with the Debezium MySQL connector, and configured plugin.path in connect-standalone.properties with the location of the directory that contains the connector.
I want to start the Kafka Connect service in standalone mode, that is, on a single server, and register a connector with this command:
bin/connect-standalone.sh config/connect-standalone.properties connector1.properties
The properties of connector1 are:
connector.class=io.debezium.connector.mysql.MySqlConnector
tasks.max=1
database.hostname=(host port)
database.port=3306
database.user=userdbz
database.password=12345
database.include.list=sbsdigdb_migra
database.server.id=184054
database.server.name=qaservermysql
database.history.kafka.bootstrap.servers=(ip):6667
database.history.kafka.topic=sbsmigra
My connect-standalone.properties file contains:
plugin.path=/usr/hdp/current/kafka-broker/libs/connect-plugins/
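For reference, a standalone Connect worker normally needs more than plugin.path; the following is only a sketch of what the rest of connect-standalone.properties commonly looks like (the bootstrap.servers value is an assumption based on the (ip):6667 broker referenced above, and the offsets path is a placeholder):
# Sketch of a minimal connect-standalone.properties; adjust all values to your cluster
bootstrap.servers=(ip):6667
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=false
value.converter.schemas.enable=false
# File where the standalone worker stores connector offsets
offset.storage.file.filename=/tmp/connect.offsets
offset.flush.interval.ms=10000
# Directory that contains the Debezium MySQL connector jars
plugin.path=/usr/hdp/current/kafka-broker/libs/connect-plugins/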
My problem is that when I try to run it, it generates this error:
(org.apache.kafka.clients.admin.AdminClientConfig:279)
[2021-02-02 09:01:01,125] WARN The configuration 'offset.flush.interval.ms' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:287)
[2021-02-02 09:01:01,126] WARN The configuration 'key.converter.schemas.enable' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:287)
[2021-02-02 09:01:01,126] WARN The configuration 'offset.storage.file.filename' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:287)
[2021-02-02 09:01:01,126] WARN The configuration 'value.converter.schemas.enable' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:287)
[2021-02-02 09:01:01,126] WARN The configuration 'plugin.path' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:287)
[2021-02-02 09:01:01,126] WARN The configuration 'value.converter' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:287)
[2021-02-02 09:01:01,126] WARN The configuration 'key.converter' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:287)
[2021-02-02 09:01:01,127] INFO Kafka version: 2.0.0.3.1.4.0-315 (org.apache.kafka.common.utils.AppInfoParser:109)
[2021-02-02 09:01:01,127] INFO Kafka commitId: 4243d589e2b33433 (org.apache.kafka.common.utils.AppInfoParser:110)
[2021-02-02 09:01:01,153] WARN [AdminClient clientId=adminclient-1] Connection to node -1 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient:671)
Any information would be of great help. I appreciate it.

Related

Kafka SASL_SSL No JAAS configuration section named 'Client' was found in specified JAAS configuration file

I'm trying to enable authentication using SASL/PLAIN in my Kafka broker.
The JAAS configuration file is as follows:
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin-secret"
user_admin="admin-secret"
user_alice="alice-secret";
};
I launch the Kafka service using the following commands:
export KAFKA_OPTS="-Djava.security.auth.login.config=<PATH>kafka_server_jaas.conf"
/bin/kafka-server-start.sh /config/server.properties
The Kafka service does not start properly and I get these errors in the log:
javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/home/kafka/kafka/config/kafka_server_jaas.conf'.
at org.apache.zookeeper.client.ZooKeeperSaslClient.<init>(ZooKeeperSaslClient.java:189)
at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1161)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1210)
[2022-03-16 12:13:16,587] INFO Opening socket connection to server localhost/127.0.0.1:2181. (org.apache.zookeeper.ClientCnxn)
[2022-03-16 12:13:16,588] ERROR [ZooKeeperClient Kafka server] Auth failed, initialized=false connectionState=CONNECTING (kafka.zookeeper.ZooKeeperClient)
[2022-03-16 12:13:16,592] INFO Socket connection established, initiating session, client: /127.0.0.1:46706, server: localhost/127.0.0.1:2181 (org.apache.zookeeper.ClientCnxn)
[2022-03-16 12:13:16,611] INFO Session establishment complete on server localhost/127.0.0.1:2181, session id = 0x100002dd98c0000, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)
[2022-03-16 12:13:16,612] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
[2022-03-16 12:13:16,752] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
[2022-03-16 12:13:16,786] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener)
[2022-03-16 12:13:16,788] INFO Cleared cache (kafka.server.FinalizedFeatureCache)
[2022-03-16 12:13:16,957] INFO Cluster ID = 6WTadNCMRAW4dHoc_JUnIg (kafka.server.KafkaServer)
[2022-03-16 12:13:16,968] ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
kafka.common.InconsistentClusterIdException: The Cluster ID 6WTadNCMRAW4dHoc_JUnIg doesn't match stored clusterId Some(RJXzPwJeRfawIa_yA0B26A) in meta.properties. The broker is trying to join the wrong cluster. Configured zookeeper.connect may be wrong.
at kafka.server.KafkaServer.startup(KafkaServer.scala:228)
at kafka.Kafka$.main(Kafka.scala:109)
at kafka.Kafka.main(Kafka.scala)
I already added the following lines to server.properties
listeners=SASL_SSL://localhost:9092
security.protocol=SASL_SSL
security.inter.broker.protocol=SASL_SSL
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN
[2022-03-16 12:13:16,968] ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
kafka.common.InconsistentClusterIdException: The Cluster ID 6WTadNCMRAW4dHoc_JUnIg doesn't match stored clusterId Some(RJXzPwJeRfawIa_yA0B26A) in meta.properties. The broker is trying to join the wrong cluster. Configured zookeeper.connect may be wrong.
at kafka.server.KafkaServer.startup(KafkaServer.scala:228)
at kafka.Kafka$.main(Kafka.scala:109)
at kafka.Kafka.main(Kafka.scala)
This issue occurs when there is a mismatch between the cluster ID stored in Zookeeper and the one in the Kafka data directory for the broker.
In this case, the cluster ID of the broker stored in:
Zookeeper data is 6WTadNCMRAW4dHoc_JUnIg
Kafka meta.properties is RJXzPwJeRfawIa_yA0B26A
Reason:
The Zookeeper data directory got deleted.
Deleting the Zookeeper dataDir and restarting both the Zookeeper and Kafka services will not work, because Zookeeper creates a new cluster ID and assigns it to the broker when it registers if there is no entry already. This new cluster ID will be different from the one in meta.properties.
This issue can be fixed with one of the steps below (a shell sketch of the last option follows):
Delete both Kafka log.dirs and the Zookeeper dataDir - results in data loss; both the Kafka and Zookeeper services need to be restarted.
Delete meta.properties in the Kafka log.dirs directory - no data loss; the Kafka service needs to be started anyway.
Update the cluster ID in meta.properties with the value stored in the Zookeeper data; in this case, replace RJXzPwJeRfawIa_yA0B26A with 6WTadNCMRAW4dHoc_JUnIg - no data loss; the Kafka service needs to be started anyway.
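A minimal shell sketch of the last option, assuming the broker's log.dirs points to /var/lib/kafka-logs (check log.dirs in your server.properties first):
# Inspect the stored cluster ID (path is an assumption; use your own log.dirs value)
cat /var/lib/kafka-logs/meta.properties
# Replace the stored cluster ID with the one reported by Zookeeper, then start the broker again
sed -i 's/^cluster.id=.*/cluster.id=6WTadNCMRAW4dHoc_JUnIg/' /var/lib/kafka-logs/meta.properties
bin/kafka-server-start.sh config/server.properties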
javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file
The Client section is used to authenticate a SASL connection with ZooKeeper. The javax.security.auth.login.LoginException above is effectively a warning: Kafka will connect to the Zookeeper server without SASL authentication if Zookeeper allows it.
[2022-03-16 12:13:16,587] INFO Opening socket connection to server localhost/127.0.0.1:2181. (org.apache.zookeeper.ClientCnxn)
[2022-03-16 12:13:16,588] ERROR [ZooKeeperClient Kafka server] Auth failed, initialized=false connectionState=CONNECTING (kafka.zookeeper.ZooKeeperClient)
[2022-03-16 12:13:16,592] INFO Socket connection established, initiating session, client: /127.0.0.1:46706, server: localhost/127.0.0.1:2181 (org.apache.zookeeper.ClientCnxn)
[2022-03-16 12:13:16,611] INFO Session establishment complete on server localhost/127.0.0.1:2181, session id = 0x100002dd98c0000, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)
The KafkaServer section is used by the broker and provides SASL configuration options for inter-broker connections. The username and password are used by the broker to initiate connections to other brokers. The set of user_<username> properties defines the passwords for all users that connect to the broker.
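If the broker should also authenticate to Zookeeper, a sketch of kafka_server_jaas.conf with a Client section added would look like the following; the DigestLoginModule credentials here are assumptions and must match whatever SASL users Zookeeper itself is configured with:
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin-secret"
user_admin="admin-secret"
user_alice="alice-secret";
};
Client {
org.apache.zookeeper.server.auth.DigestLoginModule required
username="kafka"
password="kafka-secret";
};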

Kafka Snowflake Connector: org.apache.kafka.common.network.InvalidReceiveException: Invalid receive

Worker Node distributed-connector log:
[2021-11-23 09:05:22,605] WARN The configuration 'config.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:362)
[2021-11-23 09:05:22,606] WARN The configuration 'rest.advertised.host.name' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:362)
[2021-11-23 09:05:22,606] WARN The configuration 'status.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:362)
[2021-11-23 09:05:22,606] WARN The configuration 'group.id' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:362)
[2021-11-23 09:05:22,606] WARN The configuration 'rest.host.name' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:362)
[2021-11-23 09:05:22,606] WARN The configuration 'rest.advertised.port' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:362)
[2021-11-23 09:05:22,606] WARN The configuration 'plugin.path' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:362)
[2021-11-23 09:05:22,606] WARN The configuration 'config.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:362)
[2021-11-23 09:05:22,606] WARN The configuration 'offset.flush.interval.ms' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:362)
[2021-11-23 09:05:22,606] WARN The configuration 'rest.port' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:362)
[2021-11-23 09:05:22,606] WARN The configuration 'key.converter.schemas.enable' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:362)
[2021-11-23 09:05:22,606] WARN The configuration 'status.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:362)
[2021-11-23 09:05:22,606] WARN The configuration 'value.converter.schemas.enable' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:362)
[2021-11-23 09:05:22,606] WARN The configuration 'offset.storage.replication.factor' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:362)
[2021-11-23 09:05:22,606] WARN The configuration 'topic' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:362)
[2021-11-23 09:05:22,606] WARN The configuration 'offset.storage.topic' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:362)
[2021-11-23 09:05:22,607] WARN The configuration 'value.converter' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:362)
[2021-11-23 09:05:22,607] WARN The configuration 'key.converter' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:362)
[2021-11-23 09:05:22,607] INFO Kafka version: 2.6.0 (org.apache.kafka.common.utils.AppInfoParser:117)
[2021-11-23 09:05:22,607] INFO Kafka commitId: 62abe01bee039651 (org.apache.kafka.common.utils.AppInfoParser:118)
[2021-11-23 09:05:22,607] INFO Kafka startTimeMs: 1637658322607 (org.apache.kafka.common.utils.AppInfoParser:119)
[2021-11-23 09:05:22,991] INFO Kafka cluster ID: zojXCfzxQum_fc3mC6WN_A (org.apache.kafka.connect.util.ConnectUtils:65)
[2021-11-23 09:05:23,008] INFO Logging initialized #10836ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log:169)
[2021-11-23 09:05:23,076] INFO Added connector for http://**masternodename**:8083 (org.apache.kafka.connect.runtime.rest.RestServer:132)
[2021-11-23 09:05:23,076] INFO Initializing REST server (org.apache.kafka.connect.runtime.rest.RestServer:204)
[2021-11-23 09:05:23,083] INFO jetty-9.4.24.v20191120; built: 2019-11-20T21:37:49.771Z; git: 363d5f2df3a8a28de40604320230664b9c793c16; jvm 1.8.0_192-BellSoft-b12 (org.eclipse.jetty.server.Server:359)
[2021-11-23 09:05:23,120] ERROR Stopping due to error (org.apache.kafka.connect.cli.ConnectDistributed:84)
org.apache.kafka.connect.errors.ConnectException: Unable to initialize REST server
at org.apache.kafka.connect.runtime.rest.RestServer.initializeServer(RestServer.java:216)
at org.apache.kafka.connect.cli.ConnectDistributed.startConnect(ConnectDistributed.java:99)
at org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:78)
Caused by: java.io.IOException: Failed to bind to MasterServerName/MasterIP:8083
at org.eclipse.jetty.server.ServerConnector.openAcceptChannel(ServerConnector.java:346)
at org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:307)
at org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:80)
at org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:231)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
at org.eclipse.jetty.server.Server.doStart(Server.java:385)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
at org.apache.kafka.connect.runtime.rest.RestServer.initializeServer(RestServer.java:214)
... 2 more
Caused by: java.net.BindException: Cannot assign requested address
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:433)
at sun.nio.ch.Net.bind(Net.java:425)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.eclipse.jetty.server.ServerConnector.openAcceptChannel(ServerConnector.java:342)
... 9 more
Master Node: Server.log:
[2021-11-23 09:23:04,041] WARN [SocketServer brokerId=0] Unexpected error from /**workernode-ip**; closing connection (org.apache.kafka.common.network.Selector)
org.apache.kafka.common.network.InvalidReceiveException: Invalid receive (size = -720899)
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:103)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:447)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:397)
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:678)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:580)
at org.apache.kafka.common.network.Selector.poll(Selector.java:485)
at kafka.network.Processor.poll(SocketServer.scala:913)
at kafka.network.Processor.run(SocketServer.scala:816)
at java.lang.Thread.run(Thread.java:748)
[2021-11-23 09:30:35,461] INFO [GroupMetadataManager brokerId=0] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
A BindException means you have some networking properties misconfigured, or there is already something running on conflicting ports. For example, bootstrap.servers=...localhost:9092... and rest.advertised.port=9092 would indicate you have a Kafka broker already running on port 9092 and are trying to make Kafka Connect start an HTTP server on that same port, which will not work.
Regarding the other issues I can see:
In server.properties, listeners should use 0.0.0.0 as the host/IP, not the machine hostname, if you want external clients to reach that machine.
If possible, don't run Kafka Connect on the brokers, so localhost:9092 should never be added to bootstrap.servers in connect-distributed.properties.
In connect-distributed.properties, rest.advertised.port should not be 9092, since the Connect worker is not a broker. The default of 8083 is fine...
You should start with one broker and one Connect worker on separate hosts. If you don't have access to multiple physical machines, using Docker-Compose rather than VMs would be simplest.
I suspect these last two are the cause of your error, because Connect is trying to speak the Kafka TCP protocol to itself, so the "Invalid receive" refers to the bytes in the request/response. To correctly set up a Kafka cluster and its clients, listeners should not be just the hostname of the local machine running the process; that is what advertised.listeners on the brokers is for.
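A sketch of the relevant settings under these recommendations; broker-host and worker-host are placeholder hostnames for two separate machines:
# server.properties on the broker
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://broker-host:9092
# connect-distributed.properties on the separate Connect worker
bootstrap.servers=broker-host:9092
rest.host.name=0.0.0.0
rest.port=8083
rest.advertised.host.name=worker-host
rest.advertised.port=8083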

Kafka to Snowflake connecting issue

(Submitting on behalf of a Snowflake client...)
I am trying to connect Kafka to snowflake using Snowflake Connector for Kafka.
Referring to this document: https://docs.snowflake.net/manuals/user-guide/kafka-connector.html
When I run Kafka, it initializes the Snowflake plugins, e.g.:
[2019-08-31 21:52:09,448] INFO Added aliases 'SnowflakeSinkConnector' and 'SnowflakeSink' to plugin 'com.snowflake.kafka.connector.SnowflakeSinkConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:396)
[2019-08-31 21:52:09,456] INFO Added aliases 'SnowflakeJsonConverter' and 'SnowflakeJson' to plugin 'com.snowflake.kafka.connector.records.SnowflakeJsonConverter' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:396)
But after that it is unable to read other worker config attributes.
[2019-08-31 21:52:10,373] WARN The configuration 'connector.class' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:287)
[2019-08-31 21:52:10,373] WARN The configuration 'snowflake.topic2table.map' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:287)
[2019-08-31 21:52:10,375] WARN The configuration 'tasks.max' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:287)
[2019-08-31 21:52:10,378] WARN The configuration 'topics' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:287)
[2019-08-31 21:52:10,381] WARN The configuration 'snowflake.private.key.passphrase' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:287)
[2019-08-31 21:52:10,385] WARN The configuration 'plugin.path' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:287)
[2019-08-31 21:52:10,386] WARN The configuration 'buffer.flush.time' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:287)
[2019-08-31 21:52:10,386] WARN The configuration 'snowflake.url.name' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:287)
[2019-08-31 21:52:10,387] WARN The configuration 'value.converter.basic.auth.credentials.source' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:287)
[2019-08-31 21:52:10,387] WARN The configuration 'snowflake.database.name' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:287)
[2019-08-31 21:52:10,387] WARN The configuration 'snowflake.schema.name' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:287)
[2019-08-31 21:52:10,387] WARN The configuration 'value.converter.schema.registry.url' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:287)
[2019-08-31 21:52:10,389] WARN The configuration 'offset.storage.file.filename' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:287)
[2019-08-31 21:52:10,392] WARN The configuration 'value.converter.basic.auth.user.info' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:287)
[2019-08-31 21:52:10,392] WARN The configuration 'buffer.count.records' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:287)
[2019-08-31 21:52:10,393] WARN The configuration 'snowflake.private.key' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:287)
[2019-08-31 21:52:10,393] WARN The configuration 'snowflake.user.name' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:287)
[2019-08-31 21:52:10,393] WARN The configuration 'name' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:287)
[2019-08-31 21:52:10,394] WARN The configuration 'value.converter' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:287)
[2019-08-31 21:52:10,394] WARN The configuration 'key.converter' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:287)
[2019-08-31 21:52:10,394] WARN The configuration 'buffer.size.bytes' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig:287)
I realize these are warnings, but after that we're getting failures, so I assume it is failing because it is unable to initialize the above config values.
WARNING: A provider org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource will be ignored.
Sep 04, 2019 11:55:52 AM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
WARNING: A provider org.apache.kafka.connect.runtime.rest.resources.ConnectorPluginsResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider org.apache.kafka.connect.runtime.rest.resources.ConnectorPluginsResource will be ignored.
Sep 04, 2019 11:55:52 AM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
WARNING: A provider org.apache.kafka.connect.runtime.rest.resources.RootResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider org.apache.kafka.connect.runtime.rest.resources.RootResource will be ignored.
Sep 04, 2019 11:55:52 AM org.glassfish.jersey.internal.Errors logErrors
WARNING: The following warnings have been detected: WARNING: The (sub)resource method listConnectors in org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains empty path annotation.
WARNING: The (sub)resource method createConnector in org.apache.kafka.connect.runtime.rest.resources.ConnectorsResource contains empty path annotation.
WARNING: The (sub)resource method listConnectorPlugins in org.apache.kafka.connect.runtime.rest.resources.ConnectorPluginsResource contains empty path annotation.
WARNING: The (sub)resource method serverInfo in org.apache.kafka.connect.runtime.rest.resources.RootResource contains empty path annotation.
[2019-09-04 11:55:52,788] INFO Started o.e.j.s.ServletContextHandler#2be818da{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler:850)
[2019-09-04 11:55:52,800] INFO Started http_8083#798deee8{HTTP/1.1,[http/1.1]}{0.0.0.0:8083} (org.eclipse.jetty.server.AbstractConnector:292)
[2019-09-04 11:55:52,801] INFO Started #9514ms (org.eclipse.jetty.server.Server:408)
[2019-09-04 11:55:52,802] INFO Advertised URI: http://10.10.25.86:8083/ (org.apache.kafka.connect.runtime.rest.RestServer:267)
[2019-09-04 11:55:52,802] INFO REST server listening at http://10.10.25.86:8083/, advertising URL http://10.10.25.86:8083/ (org.apache.kafka.connect.runtime.rest.RestServer:217)
[2019-09-04 11:55:52,802] INFO Kafka Connect started (org.apache.kafka.connect.runtime.Connect:55)
[2019-09-04 11:55:52,807] ERROR Stopping after connector error (org.apache.kafka.connect.cli.ConnectStandalone:113)
org.apache.kafka.common.config.ConfigException: Must configure one of topics or topics.regex
at org.apache.kafka.connect.runtime.SinkConnectorConfig.validate(SinkConnectorConfig.java:96)
at org.apache.kafka.connect.runtime.AbstractHerder.validateConnectorConfig(AbstractHerder.java:269)
at org.apache.kafka.connect.runtime.standalone.StandaloneHerder.putConnectorConfig(StandaloneHerder.java:189)
at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:107)
[2019-09-04 11:55:52,808] INFO Kafka Connect stopping (org.apache.kafka.connect.runtime.Connect:65)
[2019-09-04 11:55:52,808] INFO Stopping REST server (org.apache.kafka.connect.runtime.rest.RestServer:223)
[2019-09-04 11:55:52,820] INFO Stopped http_8083#798deee8{HTTP/1.1,[http/1.1]}{0.0.0.0:8083} (org.eclipse.jetty.server.AbstractConnector:341)
[2019-09-04 11:55:52,821] INFO node0 Stopped scavenging (org.eclipse.jetty.server.session:167)
[2019-09-04 11:55:52,827] INFO Stopped o.e.j.s.ServletContextHandler#2be818da{/,null,UNAVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler:1040)
[2019-09-04 11:55:52,829] INFO REST server stopped (org.apache.kafka.connect.runtime.rest.RestServer:241)
[2019-09-04 11:55:52,829] INFO Herder stopping (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:95)
[2019-09-04 11:55:52,829] INFO Worker stopping (org.apache.kafka.connect.runtime.Worker:184)
[2019-09-04 11:55:52,829] INFO Stopped FileOffsetBackingStore (org.apache.kafka.connect.storage.FileOffsetBackingStore:66)
[2019-09-04 11:55:52,830] INFO Worker stopped (org.apache.kafka.connect.runtime.Worker:205)
[2019-09-04 11:55:52,830] INFO Herder stopped (org.apache.kafka.connect.runtime.standalone.StandaloneHerder:112)
[2019-09-04 11:55:52,830] INFO Kafka Connect stopped (org.apache.kafka.connect.runtime.Connect:70)
I understand that this may be a configuration issue, in that in newer versions of Kafka the "topic" configuration was renamed to "topics", but are there any other or additional explanations, corrective actions, or recommended workarounds?
Thank you!
You can ignore all those config warnings, they are just that—warnings (albeit noisy & confusing ones!).
The reason it's failed is as you've identified:
Must configure one of topics or topics.regex
You have to specify one of these in your configuration.
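For example, the sink connector's properties file would need one (and only one) of these lines; the topic name here is just a placeholder:
# List the topics explicitly...
topics=my_topic
# ...or match them with a regular expression instead
# topics.regex=my_topics_.*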

kafka confluent error java.lang.IllegalArgumentException: /tmp/confluent.PVghAKRg/zookeeper/data/myid file is missing

I am running Kafka via the Confluent Platform on 3 nodes, but when I run confluent start I get this error:
[2018-04-09 10:54:25,995] INFO Reading configuration from: /tmp/confluent.SVNfiLFU/zookeeper/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2018-04-09 10:54:26,011] INFO Resolved hostname: 0.0.0.0 to address: /0.0.0.0 (org.apache.zookeeper.server.quorum.QuorumPeer)
[2018-04-09 10:54:26,011] INFO Resolved hostname: 192.168.0.36 to address: /192.168.0.36 (org.apache.zookeeper.server.quorum.QuorumPeer)
[2018-04-09 10:54:26,011] INFO Resolved hostname: 192.168.0.22 to address: /192.168.0.22 (org.apache.zookeeper.server.quorum.QuorumPeer)
[2018-04-09 10:54:26,011] INFO Defaulting to majority quorums (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2018-04-09 10:54:26,012] ERROR Invalid config, exiting abnormally (org.apache.zookeeper.server.quorum.QuorumPeerMain)
org.apache.zookeeper.server.quorum.QuorumPeerConfig$ConfigException: Error processing /tmp/confluent.SVNfiLFU/zookeeper/zookeeper.properties
at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:154)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:101)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:78)
Caused by: java.lang.IllegalArgumentException: /tmp/confluent.SVNfiLFU/zookeeper/data/myid file is missing
at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parseProperties(QuorumPeerConfig.java:406)
at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:150)
... 2 more
This is zookeeper.properties:
dataDir=/var/lib/zookeeper/
clientPort=2181
initLimit=5
syncLimit=2
tickTime=2000
server.1=192.168.0.21:2888:3888
server.2=192.168.0.22:2888:3888
server.3=192.168.0.36:2888:3888
Also, I created a myid file containing an integer ID in the /var/lib/zookeeper/ directory.
You need to run zookeeper-server-start.sh zookeeper.properties individually on each server.
The confluent command is only for single-node testing (emphasis added):
meant for development purposes only and is not suitable for a production environment. The data that are produced are transient and are intended to be temporary
That explains why you're getting errors about files in /tmp.
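A sketch of what running it manually on each node could look like; paths are assumptions, and the ID written to myid must match the server.N entry for that host in zookeeper.properties:
# On each Zookeeper node, write that node's ID into dataDir (here node 1, i.e. 192.168.0.21)
echo 1 > /var/lib/zookeeper/myid
# Start Zookeeper on every quorum member with the shared configuration
bin/zookeeper-server-start.sh zookeeper.properties
# Then start the Kafka broker on each Kafka node
bin/kafka-server-start.sh server.properties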

kafka cant connect to zookeeper- FATAL Fatal error during KafkaServerStable startup

Well... every service in the world can connect to my zookeeper except kafka. Below is my connection string in the server.properties file:
zk.connect=1.dzk.syd.druid.neo.com:2181, 2.dzk.syd.druid.neo.com:2181
I have all ports open on the two zookeeper servers ... total promiscuous mode. I can even telnet into the zookeeper server from the kafka server:
telnet 2.dzk.syd.druid.neo.com 2181
Trying 54.252.183.218...
Connected to 2.dzk.syd.druid.neo.com.
Escape character is '^]'.
So... I'm rather confused about why kafka will not connect to zookeeper.
I am using Ubuntu 12.04 and Kafka 0.7.2.
[2013-07-16 04:36:49,915] INFO Client environment:user.home=/root (org.apache.zookeeper.ZooKeeper)
[2013-07-16 04:36:49,915] INFO Client environment:user.dir=/etc/sv/kafka (org.apache.zookeeper.ZooKeeper)
[2013-07-16 04:36:49,916] INFO Initiating client connection, connectString=1.dzk.syd.druid.neo.com:2181, 2.dzk.syd.druid.neo.com:2181 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient#39cc65b1 (org.apache.zookeeper.ZooKeeper)
[2013-07-16 04:36:49,935] INFO Terminate ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
[2013-07-16 04:36:49,938] FATAL Fatal error during KafkaServerStable startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
org.I0Itec.zkclient.exception.ZkException: Unable to connect to 1.dzk.syd.druid.neo.com:2181, 2.dzk.syd.druid.neo.com:2181
at org.I0Itec.zkclient.ZkConnection.connect(ZkConnection.java:66)
at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:872)
at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:98)
at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:84)
at kafka.server.KafkaZooKeeper.startup(KafkaZooKeeper.scala:44)
at kafka.log.LogManager.<init>(LogManager.scala:93)
at kafka.server.KafkaServer.startup(KafkaServer.scala:58)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:34)
at kafka.Kafka$.main(Kafka.scala:47)
at kafka.Kafka.main(Kafka.scala)
Caused by: java.net.UnknownHostException: 2.dzk.syd.druid.neo.com: Name or service not known
at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:894)
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1286)
at java.net.InetAddress.getAllByName0(InetAddress.java:1239)
at java.net.InetAddress.getAllByName(InetAddress.java:1155)
at java.net.InetAddress.getAllByName(InetAddress.java:1091)
at org.apache.zookeeper.ClientCnxn.<init>(ClientCnxn.java:387)
at org.apache.zookeeper.ClientCnxn.<init>(ClientCnxn.java:332)
at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:383)
at org.I0Itec.zkclient.ZkConnection.connect(ZkConnection.java:64)
... 9 more
[2013-07-16 04:36:49,942] INFO Shutting down Kafka server (kafka.server.KafkaServer)
[2013-07-16 04:36:49,943] INFO shutdown scheduler kafka-logcleaner- (kafka.utils.KafkaScheduler)
[2013-07-16 04:36:49,944] INFO Kafka server shut down completed (kafka.server.KafkaServer)
In your kafka/config/server.properties, there should be a property
#host.name=localhost
If you have uncommented this, or set it to another name, then that name should be in the /etc/hosts file.
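For example, if host.name is set to a custom name, a matching /etc/hosts entry would look something like this; the name and address are placeholders:
# kafka/config/server.properties
host.name=my-kafka-host
# /etc/hosts
127.0.0.1 localhost
192.168.0.10 my-kafka-host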
It's been a while since this was answered, but in case it could help someone, here is how I fixed it:
I am using an Ansible playbook to install the Kafka cluster, and the server entries generated in the zookeeper.properties file were not correctly ordered:
server.1=0.0.0.0:2888:3888
server.2=kafka-4:2888:3888
server.3=kafka-5:2888:3888
server.4=kafka-3:2888:3888
server.5=kafka-2:2888:3888
Putting them in the right order,
server.1=0.0.0.0:2888:3888
server.2=kafka-2:2888:3888
server.3=kafka-3:2888:3888
server.4=kafka-4:2888:3888
server.5=kafka-5:2888:3888
Then restarting the Kafka service fixed it.
Change this in zookeeper.properties
maxClientCnxns=0 to maxClientCnxns=1