Failed to start schema registry

I have followed this demo and am getting the following error.
https://github.com/confluentinc/cp-demo
https://github.com/confluentinc/cp-demo/blob/7.0.1-post/docker-compose.yml
I replaced KSQL_BOOTSTRAP_SERVERS with my own Kafka server and got the following error. What could be the cause of this issue?
[2022-02-08 11:03:09,095] INFO Logging initialized @838ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log)
[2022-02-08 11:03:09,130] INFO Initial capacity 128, increased by 64, maximum capacity 2147483647. (io.confluent.rest.ApplicationServer)
[2022-02-08 11:03:09,204] INFO Adding listener with HTTP/2: https://0.0.0.0:8085 (io.confluent.rest.ApplicationServer)
[2022-02-08 11:03:09,491] ERROR Error starting the schema registry (io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication)
io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryException: No listener configured with requested scheme http
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.getSchemeAndPortForIdentity(KafkaSchemaRegistry.java:303)
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.<init>(KafkaSchemaRegistry.java:148)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.initSchemaRegistry(SchemaRegistryRestApplication.java:71)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.configureBaseApplication(SchemaRegistryRestApplication.java:90)
at io.confluent.rest.Application.configureHandler(Application.java:271)
at io.confluent.rest.ApplicationServer.doStart(ApplicationServer.java:245)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain.main(SchemaRegistryMain.java:44)
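For reference, the substitution was of this shape; this is a hedged sketch of the relevant docker-compose environment entries, not a verbatim quote (the broker address is a placeholder, and the schema-registry lines follow the cp-demo file and the Confluent Schema Registry docs):

ksqldb-server:
  environment:
    KSQL_BOOTSTRAP_SERVERS: "my-kafka-host:9092"   # placeholder for my own Kafka server
schemaregistry:
  environment:
    SCHEMA_REGISTRY_LISTENERS: "https://0.0.0.0:8085"
    SCHEMA_REGISTRY_INTER_INSTANCE_PROTOCOL: "https"

Note that the startup log above shows only an https listener on 8085 while the exception complains about scheme http, so some setting still requesting http against an https-only listener set would produce exactly this error.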

[ERROR][o.o.a.a.AlertIndices] info deleteOldIndices

I am running an OpenSearch 2.3 cluster, and in the log I can see the following error:
[2023-02-13T03:37:44,711][ERROR][o.o.a.a.AlertIndices ] [opensearch-node1] info deleteOldIndices
What triggers this error? I've never set up any alerts, and while in the past I had some ISM policies for the indices on this same cluster, I have since removed them all; could that be linked to the error I am seeing?
Thanks.
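If it helps narrow this down: the alerting plugin is bundled with the default OpenSearch distribution and maintains its own alert and alert-history indices even when no monitors are defined, so its periodic index cleanup can run on an otherwise untouched cluster. A quick way to see whether such indices exist (standard _cat API; the index pattern and the localhost address are assumptions):

curl -s 'http://localhost:9200/_cat/indices/.opendistro-alerting-*?v'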

"SchemaRegistryException: Failed to get Kafka cluster ID" for LOCAL setup

I downloaded the .tar.gz (I am on macOS) for Confluent version 7.0.0 from the official Confluent site and was following the LOCAL (single-node) setup. Kafka and ZooKeeper start fine, but the Schema Registry keeps failing. (Note: I am behind a corporate VPN.)
The exception message in the Schema Registry logs is:
[2021-11-04 00:34:22,492] INFO Logging initialized @1403ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log)
[2021-11-04 00:34:22,543] INFO Initial capacity 128, increased by 64, maximum capacity 2147483647. (io.confluent.rest.ApplicationServer)
[2021-11-04 00:34:22,614] INFO Adding listener: http://0.0.0.0:8081 (io.confluent.rest.ApplicationServer)
[2021-11-04 00:35:23,007] ERROR Error starting the schema registry (io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication)
io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryException: Failed to get Kafka cluster ID
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.kafkaClusterId(KafkaSchemaRegistry.java:1488)
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.<init>(KafkaSchemaRegistry.java:166)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.initSchemaRegistry(SchemaRegistryRestApplication.java:71)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.configureBaseApplication(SchemaRegistryRestApplication.java:90)
at io.confluent.rest.Application.configureHandler(Application.java:271)
at io.confluent.rest.ApplicationServer.doStart(ApplicationServer.java:245)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain.main(SchemaRegistryMain.java:44)
Caused by: java.util.concurrent.TimeoutException
at java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1784)
at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1928)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:180)
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.kafkaClusterId(KafkaSchemaRegistry.java:1486)
... 7 more
My schema-registry.properties file has the bootstrap URL set to
kafkastore.bootstrap.servers=PLAINTEXT://localhost:9092
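As a quick sanity check that a broker is actually reachable at that address, the CLI tools shipped in the same tarball can be pointed at it; a sketch (the script name is an assumption based on the Confluent Platform bin directory):

bin/kafka-broker-api-versions --bootstrap-server localhost:9092

If that also hangs and times out, the problem is connectivity to the broker rather than anything Schema Registry specific.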
I saw some posts saying it's the Schema Registry being unable to connect to the Kafka cluster URL, potentially because of the localhost address. I am fairly new to Kafka and basically just need this local setup to run a git repo that uses some topics, so my questions are:
How can I fix this? (I am behind a corporate VPN, but I figured that shouldn't affect this.)
Do I even need the Schema Registry?
I ended up just going with the local Docker setup instead. The only change I had to make to the docker-compose YAML was the schema-registry port (I changed it to 8082 or 8084, I don't remember exactly; just an unused port not already taken by another Confluent service listed in docker-compose.yml), and my local setup is working fine now.
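For illustration, the change was of this shape; a minimal sketch assuming the standard confluentinc images and service names, not the exact compose file used:

schema-registry:
  image: confluentinc/cp-schema-registry:7.0.0
  ports:
    - "8082:8082"                                     # moved off 8081, which was already in use
  environment:
    SCHEMA_REGISTRY_HOST_NAME: schema-registry
    SCHEMA_REGISTRY_LISTENERS: "http://0.0.0.0:8082"  # must match the container-side port
    SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: "broker:29092"

If only the host side of the mapping changes (e.g. 8082:8081), the listener can stay on 8081; if the container port changes too, SCHEMA_REGISTRY_LISTENERS must follow it.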

Mesos Kafka task failed memory limit

I am trying to set up a Kafka cluster on Apache Mesos.
I followed the instructions for kafka-mesos on GitHub. I installed a Mesos cluster (using Mesosphere, without Marathon) with 3 nodes, each with 2 CPUs and 4 GB of memory. I tested the cluster with the hello-world examples successfully.
I can run the kafka-mesos scheduler on it and can add brokers to it.
But when I want to start the broker, a memory limit issue appears:
broker-191-.... TASK_FAILED slave:#c3-S1 reason:REASON_MEMORY_LIMIT
The cluster has 12 GB of memory in total, and the broker needs just 3 GB with a 1 GB heap. (I tested various configurations from 512 MB up to 3 GB, but none worked.)
What is the problem, and what is the solution?
The complete log is here:
2015-10-17 17:39:24,748 [Jetty-17] INFO ly.stealth.mesos.kafka.HttpServer$ - handling - http://192.168.11.191:7000/api/broker/start
2015-10-17 17:39:28,202 [Thread-605] INFO ly.stealth.mesos.kafka.Scheduler$ - [resourceOffers]
mesos-2#O1160 cpus:2.00 mem:4098.00 disk:9869.00 ports:[31000..32000]
mesos-3#O1161 cpus:2.00 mem:4098.00 disk:9869.00 ports:[31000..32000]
mesos-1#O1162 cpus:2.00 mem:4098.00 disk:9869.00 ports:[31000..32000]
2015-10-17 17:39:28,204 [Thread-605] INFO ly.stealth.mesos.kafka.Scheduler$ - Starting broker 191: launching task broker-191-0abe9e57-b0fb-4d87-a1b4-529acb111940 by offer mesos-2#O1160
broker-191-0abe9e57-b0fb-4d87-a1b4-529acb111940 slave:#c6-S3 cpus:1.00 mem:3096.00 ports:[31000..31000] data:defaults=broker.id\=191\,log.dirs\=kafka-logs\,port\=31000\,zookeeper.connect\=192.168.11.191:2181\\\,192.168.11.192:2181\\\,192.168.11.193:2181\,host.name\=mesos-2\,log.retention.bytes\=10737418240,broker={"stickiness" : {"period" : "10m"\, "stopTime" : "2015-10-17 13:43:29.278"}\, "id" : "191"\, "mem" : 3096\, "cpus" : 1.0\, "heap" : 1024\, "failover" : {"delay" : "1m"\, "maxDelay" : "10m"}\, "active" : true}
2015-10-17 17:39:28,417 [Thread-606] INFO ly.stealth.mesos.kafka.Scheduler$ - [statusUpdate] broker-191-0abe9e57-b0fb-4d87-a1b4-529acb111940 TASK_FAILED slave:#c6-S3 reason:REASON_MEMORY_LIMIT
2015-10-17 17:39:28,418 [Thread-606] INFO ly.stealth.mesos.kafka.Scheduler$ - Broker 191 failed 1, waiting 1m, next start ~ 2015-10-17 17:40:28+03
2015-10-17 17:39:29,202 [Thread-607] INFO ly.stealth.mesos.kafka.Scheduler$ - [resourceOffers]
I found the following in the Mesos master log:
...validation.cpp:422] Executor broker-191-... for task broker-191-... uses less CPUs (None) than the minimum required (0.01). Please update your executor, as this will be mandatory in future releases.
...validation.cpp:434] Executor broker-191-... for task broker-191-... uses less memory (None) than the minimum required (32MB). Please update your executor, as this will be mandatory in future releases.
But I did set the CPU and memory for the broker via broker add (update):
broker updated:
id: 191
active: false
state: stopped
resources: cpus:1.00, mem:2048, heap:1024, port:auto
failover: delay:1m, max-delay:10m
stickiness: period:10m, expires:2015-10-19 11:15:53+03
The executor doesn't get the heap setting, only the broker does. I opened an issue for this: https://github.com/mesos/kafka/issues/137. Please increase the mem until a patch is available.
I suspect this hasn't shown up as a problem before because mem usually gets set to a much larger value (the size of the data set you don't want to hit disk for when reading), so that there is page cache available for maximum efficiency: http://kafka.apache.org/documentation.html#maximizingefficiency
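As a concrete workaround, bumping mem while keeping the heap at 1 GB looks roughly like this (a sketch; broker id 191 is from the logs above, and the flag names follow the kafka-mesos README of that era):

# leave the task plenty of non-heap headroom until the executor patch lands
./kafka-mesos.sh broker update 191 --mem 3072 --heap 1024
./kafka-mesos.sh broker start 191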

JBoss 5.1 cluster fails with EAR

We have 2 servers that cluster just fine when we do not deploy any EAR files.
Server 1:
2015-03-26 08:23:00,339 INFO [org.jboss.messaging.core.impl.postoffice.GroupMember] (Incoming-13,10.200.51.14:62610) org.jboss.messaging.core.impl.postoffice.GroupMember$ControlMembershipListener@1f15227c got new view [10.200.51.14:62610|1] [10.200.51.14:62610, 10.200.51.16:58992], old view is [10.200.51.14:62610|0] [10.200.51.14:62610]
2015-03-26 08:23:00,339 INFO [org.jboss.messaging.core.impl.postoffice.GroupMember] (Incoming-13,10.200.51.14:62610) I am (10.200.51.14:62610)
2015-03-26 08:23:00,339 INFO [org.jboss.messaging.core.impl.postoffice.GroupMember] (Incoming-13,10.200.51.14:62610) New Members : 1 ([10.200.51.16:58992])
2015-03-26 08:23:00,355 INFO [org.jboss.messaging.core.impl.postoffice.GroupMember] (Incoming-13,10.200.51.14:62610) All Members : 2 ([10.200.51.14:62610, 10.200.51.16:58992])
2015-03-26 08:24:32,140 INFO [org.jboss.cache.RPCManagerImpl] (Incoming-16,10.200.51.14:62610) Received new cluster view: [10.200.51.14:62610|2] [10.200.51.14:62610]
Server2:
2015-03-26 08:23:00,011 INFO [org.jboss.messaging.core.impl.postoffice.GroupMember] (main) All Members : 2 ([10.200.51.14:62610, 10.200.51.16:58992])
Multicast is successfully configured and working.
The clustering does NOT occur when EAR files are included at JBoss startup.
We see NAKACK messages (on Server1) when Server2 starts... but clustering does not complete.
2015-03-26 14:28:41,105 ERROR [org.jgroups.protocols.pbcast.NAKACK] (Incoming-2,10.200.51.14:7900) sender 10.200.51.16:7900 not found in xmit_table
2015-03-26 14:28:41,105 ERROR [org.jgroups.protocols.pbcast.NAKACK] (Incoming-2,10.200.51.14:7900) range is null
We see multiple NAKACK messages (on Server2) when it starts:
2015-03-26 14:27:47,488 WARN [org.jgroups.protocols.pbcast.NAKACK] (OOB-4,10.200.51.16:50648) 10.200.51.16:50648] discarded message from non-member 10.200.51.14:59139, my view is [10.200.51.16:50648|0] [10.200.51.16:50648]
2015-03-26 14:27:47,675 WARN [org.jgroups.protocols.pbcast.NAKACK] (OOB-4,10.200.51.16:50648) 10.200.51.16:50648] discarded message from non-member 10.200.51.14:59139, my view is [10.200.51.16:50648|0] [10.200.51.16:50648]
and
2015-03-26 14:28:34,038 WARN [org.jgroups.protocols.pbcast.NAKACK] (Incoming-4,10.200.51.16:50648) 10.200.51.16:50648] discarded message from non-member 10.200.51.14:59139, my view is [10.200.51.16:50648|0] [10.200.51.16:50648]
2015-03-26 14:28:34,038 ERROR [org.jgroups.protocols.pbcast.NAKACK] (Incoming-9,10.200.51.16:50648) sender 10.200.51.14:59139 not found in xmit_table
2015-03-26 14:28:34,038 ERROR [org.jgroups.protocols.pbcast.NAKACK] (Incoming-9,10.200.51.16:50648) range is null
2015-03-26 14:28:40,356 ERROR [org.jgroups.protocols.pbcast.NAKACK] (Incoming-2,10.200.51.16:7900) sender 10.200.51.14:7900 not found in xmit_table
2015-03-26 14:28:40,356 ERROR [org.jgroups.protocols.pbcast.NAKACK] (Incoming-2,10.200.51.16:7900) range is null
We also see a JBoss messaging error on Server2 after the EAR file completes its deployment:
2015-03-26 14:32:38,557 ERROR [org.jboss.messaging.util.ExceptionUtil] (WorkManager(2)-7) SessionEndpoint[ej-53z3kq7i-1-gg3wjq7i-1d1bnk-gf1k5a] createConsumerDelegate [4k-dcm4kq7i-1-gg3wjq7i-1d1bnk-gf1k5a]
java.lang.IllegalStateException: org.jboss.messaging.core.impl.postoffice.GroupMember@692dba20 response not received from 10.200.51.14:59139 - there may be others
at org.jboss.messaging.core.impl.postoffice.GroupMember.multicastControl(GroupMember.java:262)
This application is CA Identity Manager R12.6 SP4, with 2 EAR files involved.
We have discussed this clustering issue with CA, and they indicate something is misconfigured within the JBoss AS.
Does anyone have any idea how we might troubleshoot and resolve this problem?
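Since the symptoms point at JGroups group formation rather than the EAR contents, one low-level check is the multicast send/receive test utilities that ship with JGroups (a sketch; the jar path and the multicast address/port must be adapted to the channel configuration in use, e.g. the 7900 port visible in the NAKACK errors above):

# on Server2 (receiver)
java -cp jgroups.jar org.jgroups.tests.McastReceiverTest -mcast_addr 228.10.10.10 -port 5555
# on Server1 (sender); lines typed here should appear on the receiver
java -cp jgroups.jar org.jgroups.tests.McastSenderTest -mcast_addr 228.10.10.10 -port 5555

Running the pair in both directions helps separate a network-level multicast problem from a JBoss/JGroups configuration problem.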

Why does server fail with "java.net.SocketException: invalid argument" upon server's startup?

Kafka 0.8
I followed the quickstart guide, and when I came to Step 2 and ran bin/kafka-server-start.sh config/server.properties, I hit the following exception:
[2013-08-06 09:55:14,603] INFO 0 successfully elected as leader (kafka.server.ZookeeperLeaderElector)
[2013-08-06 09:55:14,657] ERROR Error while electing or becoming leader on broker 0 (kafka.server.ZookeeperLeaderElector)
java.net.SocketException: invalid argument
at sun.nio.ch.Net.connect0(Native Method)
at sun.nio.ch.Net.connect(Net.java:465)
at sun.nio.ch.Net.connect(Net.java:457)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:639)
at kafka.network.BlockingChannel.connect(BlockingChannel.scala:57)
at kafka.controller.ControllerChannelManager.kafka$controller$ControllerChannelManager$$addNewBroker(ControllerChannelManager.scala:84)
at kafka.controller.ControllerChannelManager$$anonfun$1.apply(ControllerChannelManager.scala:35)
at kafka.controller.ControllerChannelManager$$anonfun$1.apply(ControllerChannelManager.scala:35)
at scala.collection.immutable.Set$Set1.foreach(Set.scala:81)
at kafka.controller.ControllerChannelManager.<init>(ControllerChannelManager.scala:35)
at kafka.controller.KafkaController.startChannelManager(KafkaController.scala:503)
at kafka.controller.KafkaController.initializeControllerContext(KafkaController.scala:467)
at kafka.controller.KafkaController.onControllerFailover(KafkaController.scala:215)
at kafka.controller.KafkaController$$anonfun$1.apply$mcV$sp(KafkaController.scala:89)
at kafka.server.ZookeeperLeaderElector.elect(ZookeeperLeaderElector.scala:53)
at kafka.server.ZookeeperLeaderElector$LeaderChangeListener.handleDataDeleted(ZookeeperLeaderElector.scala:106)
at org.I0Itec.zkclient.ZkClient$6.run(ZkClient.java:549)
at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71)
What could I be doing wrong? Please advise.
It is likely to be either a problem with name resolution, or with leftover settings from 0.7.
If you are migrating from 0.7, see the migration guide.
If you're starting fresh, ensure that there is an accurate entry in /etc/hosts for your hostname.
e.g. given a /etc/hostname file with
yourhostname
and an interface (per /sbin/ifconfig) listening on an example IP of 10.181.11.14,
/etc/hosts should correctly map that name to the listening interface:
10.181.11.14 yourhostname.yourdomain.com yourhostname someotheralias
You can test it by telnetting to the Kafka port and ensuring that there is no timeout:
telnet yourhostname.yourdomain.com 9092
Trying 10.181.11.14...
Connected to yourhostname.yourdomain.com.
Escape character is '^]'.
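A quicker check of just the name-to-address mapping, with no Kafka involved (assumes a Linux resolver; getent is not available everywhere):

getent hosts yourhostname.yourdomain.com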