I am trying to start up a Kafka server locally on macOS with an M1 chip. I followed the official Kafka quickstart guide (https://kafka.apache.org/quickstart). ZooKeeper starts up fine, but bin/kafka-server-start.sh config/server.properties gives me the socket invalid argument exception below:
[2023-01-30 09:22:55,790] ERROR Encountered an error while configuring the connection, closing it. (kafka.network.DataPlaneAcceptor)
java.net.SocketException: Invalid argument
at java.base/sun.nio.ch.Net.setIntOption0(Native Method)
at java.base/sun.nio.ch.Net.setSocketOption(Net.java:373)
at java.base/sun.nio.ch.SocketChannelImpl.setOption(SocketChannelImpl.java:234)
at java.base/sun.nio.ch.SocketAdaptor.setBooleanOption(SocketAdaptor.java:270)
at java.base/sun.nio.ch.SocketAdaptor.setTcpNoDelay(SocketAdaptor.java:305)
at kafka.network.Acceptor.configureAcceptedSocketChannel(SocketServer.scala:759)
at kafka.network.Acceptor.accept(SocketServer.scala:737)
at kafka.network.Acceptor.acceptNewConnections(SocketServer.scala:703)
at kafka.network.Acceptor.run(SocketServer.scala:645)
at java.base/java.lang.Thread.run(Thread.java:829)
I have tried:
Double-checking that no other application is using the same port (see the check below)
Using a different JDK (from OpenJDK 17 to OpenJDK 11 and back to 17)
Rebooting my machine
Clearing out the Kafka-related log folders under /tmp
Using a lower version (3.2.1) of the Kafka tarball (that one worked for me before, but now it also runs into the same socket issue)
Changing the ZooKeeper port from 2181 to something else
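To rule out a port conflict quickly, lsof can show whatever is already bound to the broker port (assuming the default 9092; adjust if your listeners config differs):
# List any process listening on the default broker port
lsof -nP -iTCP:9092 -sTCP:LISTEN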
Turns out to be an antivirus issue. Sorry for the false alarm.
Related
I downloaded the .tar.gz (I am on Mac) for Confluent version 7.0.0 from the official Confluent site and was following the setup for LOCAL (1 node). Kafka and ZooKeeper are starting fine, but the Schema Registry keeps failing. (Note: I am behind a corporate VPN.)
The exception message in the Schema Registry logs is:
[2021-11-04 00:34:22,492] INFO Logging initialized #1403ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log)
[2021-11-04 00:34:22,543] INFO Initial capacity 128, increased by 64, maximum capacity 2147483647. (io.confluent.rest.ApplicationServer)
[2021-11-04 00:34:22,614] INFO Adding listener: http://0.0.0.0:8081 (io.confluent.rest.ApplicationServer)
[2021-11-04 00:35:23,007] ERROR Error starting the schema registry (io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication)
io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryException: Failed to get Kafka cluster ID
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.kafkaClusterId(KafkaSchemaRegistry.java:1488)
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.<init>(KafkaSchemaRegistry.java:166)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.initSchemaRegistry(SchemaRegistryRestApplication.java:71)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.configureBaseApplication(SchemaRegistryRestApplication.java:90)
at io.confluent.rest.Application.configureHandler(Application.java:271)
at io.confluent.rest.ApplicationServer.doStart(ApplicationServer.java:245)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain.main(SchemaRegistryMain.java:44)
Caused by: java.util.concurrent.TimeoutException
at java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1784)
at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1928)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:180)
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.kafkaClusterId(KafkaSchemaRegistry.java:1486)
... 7 more
My schema-registry.properties file has the bootstrap URL set to:
kafkastore.bootstrap.servers=PLAINTEXT://localhost:9092
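As a quick sanity check that the broker is actually reachable at that address, you can ask it for its API versions with the CLI bundled in the Confluent tarball (assuming the broker really is listening on localhost:9092):
# From the Confluent installation directory; a timeout here mirrors the
# TimeoutException the Schema Registry hits
bin/kafka-broker-api-versions --bootstrap-server localhost:9092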
I saw some posts saying it's the Schema Registry being unable to connect to the Kafka cluster URL, potentially because of the localhost address. I am fairly new to Kafka and basically just need this local setup to run a git repo that uses some topics, so my questions:
How can I fix this? (I am behind a corporate VPN, but I figured that shouldn't affect this.)
Do I even need the Schema Registry?
I ended up just going with the local Docker setup instead, and the only change I had to make to the Docker Compose YAML was the schema-registry port (I changed it to 8082 or 8084, I don't remember exactly, but just an unused port not taken by some other Confluent service listed in the docker-compose.yaml), and my local setup is working fine now.
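For anyone making the same change, the edit amounts to remapping the host port for the schema-registry service. A minimal sketch, assuming the usual Confluent cp-all-in-one layout (service names, image tag, and broker address are taken from that example and may differ in your file):
  schema-registry:
    image: confluentinc/cp-schema-registry:7.0.0
    depends_on:
      - broker
    ports:
      - "8082:8081"   # host port moved off 8081; the container still listens on 8081
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: "broker:29092"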
I'm getting the below error while starting the Kafka server on a Windows machine. I downloaded Scala 2.11 - kafka_2.11-2.1.0.tgz from https://kafka.apache.org/downloads and did the following steps:
Go to the config folder in Apache Kafka (C:\Apache-Kafka\kafka_2.11-2.1.0\config) and edit “server.properties” using any text editor.
Find log.dirs and replace the value after “=” from /tmp/kafka-logs to C:\Apache-Kafka\kafka_2.11-2.1.0\kafka-logs.
Now simply start the server:
>kafka-server-start.bat C:\Apache-Kafka\kafka_2.11-2.1.0\config
Error:
C:\Apache-Kafka\kafka_2.11-2.1.0\bin\windows>kafka-server-start.bat C:\Apache-Kafka\kafka_2.11-2.1.0\config
[2018-12-14 21:09:34,566] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2018-12-14 21:09:34,583] ERROR Exiting Kafka due to fatal exception (kafka.Kafka$)
java.nio.file.AccessDeniedException: C:\Apache-Kafka\kafka_2.11-2.1.0\config
at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:83)
at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
at sun.nio.fs.WindowsFileSystemProvider.newByteChannel(WindowsFileSystemProvider.java:230)
at java.nio.file.Files.newByteChannel(Files.java:361)
at java.nio.file.Files.newByteChannel(Files.java:407)
at java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:384)
at java.nio.file.Files.newInputStream(Files.java:152)
at org.apache.kafka.common.utils.Utils.loadProps(Utils.java:560)
at kafka.Kafka$.getPropsFromArgs(Kafka.scala:42)
at kafka.Kafka$.main(Kafka.scala:58)
at kafka.Kafka.main(Kafka.scala)
C:\Apache-Kafka\kafka_2.11-2.1.0\bin\windows>
Note: I've already set up Apache ZooKeeper on my Windows machine, and it's running on port 2181.
I ran the command using Run as administrator.
The problem is that you are passing the config directory to kafka-server-start.bat instead of the server.properties file, which is why Java fails with AccessDeniedException when it tries to open the directory as a file. Pass the properties file itself, using a relative path with a slash between each pair of dots. In my case, running from bin\windows, this worked:
kafka-server-start.bat ..\..\config\server.properties
In general, we should not use the C: drive to store kafka-logs. Try a drive other than C: for the Kafka logs; that should work.
Change the property log.dirs={drive other than C:}/tmp/kafka-logs in KafkaHome/config/server.properties.
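Whichever drive you pick, remember that server.properties is a Java properties file, so backslashes must be escaped (or replaced with forward slashes, which Java accepts on Windows). A hypothetical example:
# Either escape every backslash...
log.dirs=C:\\Apache-Kafka\\kafka_2.11-2.1.0\\kafka-logs
# ...or use forward slashes instead
log.dirs=D:/tmp/kafka-logs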
I installed OrientDB 2.2.0 on my server but can't start it by running the server.sh script. The previous version started fine on the server, and the current version runs fine on my notebook. The server is a DigitalOcean droplet with Ubuntu 15.10 32-bit. The error I get is below.
Invalid maximum direct memory size: -XX:MaxDirectMemorySize=512g
The specified size exceeds the maximum representable size.
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
Update
The problem was with -XX:MaxDirectMemorySize=512g. I changed it to -XX:MaxDirectMemorySize=512m and the error disappeared. The problem now is that the server tries to start but gives me the message below:
Creating the system database 'OSystem' for current server [OSystemDatabase]
Exception in thread "main" java.lang.OutOfMemoryError: Direct buffer memory
at java.nio.Bits.reserveMemory(Bits.java:693)
at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
at com.orientechnologies.common.directmemory.OByteBufferPool.allocateBuffer(OByteBufferPool.java:309)
at com.orientechnologies.common.directmemory.OByteBufferPool.acquireDirect(OByteBufferPool.java:228)
at com.orientechnologies.orient.core.storage.cache.local.OWOWCache.cacheFileContent(OWOWCache.java:1255)
at com.orientechnologies.orient.core.storage.cache.local.OWOWCache.load(OWOWCache.java:617)
at com.orientechnologies.orient.core.storage.cache.local.twoq.O2QCache.updateCache(O2QCache.java:1200)
at com.orientechnologies.orient.core.storage.cache.local.twoq.O2QCache.doLoad(O2QCache.java:439)
at com.orientechnologies.orient.core.storage.cache.local.twoq.O2QCache.allocateNewPage(O2QCache.java:489)
at com.orientechnologies.orient.core.storage.impl.local.paginated.atomicoperations.OAtomicOperation.commitChanges(OAtomicOperation.java:426)
at com.orientechnologies.orient.core.storage.impl.local.paginated.atomicoperations.OAtomicOperationsManager.endAtomicOperation(OAtomicOperationsManager.java:420)
at com.orientechnologies.orient.core.storage.impl.local.paginated.base.ODurableComponent.endAtomicOperation(ODurableComponent.java:118)
at com.orientechnologies.orient.core.storage.impl.local.paginated.OPaginatedCluster.create(OPaginatedCluster.java:197)
at com.orientechnologies.orient.core.storage.impl.local.OAbstractPaginatedStorage.addClusterInternal(OAbstractPaginatedStorage.java:3349)
at com.orientechnologies.orient.core.storage.impl.local.OAbstractPaginatedStorage.doAddCluster(OAbstractPaginatedStorage.java:3330)
at com.orientechnologies.orient.core.storage.impl.local.OAbstractPaginatedStorage.create(OAbstractPaginatedStorage.java:381)
at com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage.create(OLocalPaginatedStorage.java:120)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.create(ODatabaseDocumentTx.java:378)
at com.orientechnologies.orient.server.OSystemDatabase.init(OSystemDatabase.java:106)
at com.orientechnologies.orient.server.OSystemDatabase.<init>(OSystemDatabase.java:42)
at com.orientechnologies.orient.server.OServer.initSystemDatabase(OServer.java:1217)
at com.orientechnologies.orient.server.OServer.activate(OServer.java:343)
at com.orientechnologies.orient.server.OServerMain.main(OServerMain.java:41)
(Posted on behalf of the OP).
Everything is okay now. I just changed -XX:MaxDirectMemorySize=512m to -XX:MaxDirectMemorySize=2g.
I suppose you have a 32-bit JVM. You should set this value to -XX:MaxDirectMemorySize=2g.
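To confirm the JVM's bitness and the direct-memory cap it actually ends up with, you can ask the JVM directly (these are standard HotSpot options):
# The version banner says "64-Bit Server VM" on a 64-bit JVM
java -version
# Show the effective MaxDirectMemorySize (0 means the JVM picks its default)
java -XX:+PrintFlagsFinal -version | grep MaxDirectMemorySize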
I installed Apache Storm 1.0 by following this tutorial, but I am not able to access the Storm UI from the Internet. Accessing localhost:8080 gives the following error:
org.apache.storm.utils.NimbusLeaderNotFoundException: Could not find leader nimbus from seed hosts [localhost]. Did you specify a valid list of nimbus hosts for config nimbus.seeds?
at org.apache.storm.utils.NimbusClient.getConfiguredClientAs(NimbusClient.java:90)
at org.apache.storm.ui.core$cluster_configuration.invoke(core.clj:343)
at org.apache.storm.ui.core$fn__12106.invoke(core.clj:929)
at org.apache.storm.shade.compojure.core$make_route$fn__2467.invoke(core.clj:93)
at org.apache.storm.shade.compojure.core$if_route$fn__2455.invoke(core.clj:39)
at org.apache.storm.shade.compojure.core$if_method$fn__2448.invoke(core.clj:24)
at org.apache.storm.shade.compojure.core$routing$fn__2473.invoke(core.clj:106)
at clojure.core$some.invoke(core.clj:2570)
at org.apache.storm.shade.compojure.core$routing.doInvoke(core.clj:106)
at clojure.lang.RestFn.applyTo(RestFn.java:139)
at clojure.core$apply.invoke(core.clj:632)
at org.apache.storm.shade.compojure.core$routes$fn__2477.invoke(core.clj:111)
at org.apache.storm.shade.ring.middleware.json$wrap_json_params$fn__11576.invoke(json.clj:56)
at org.apache.storm.shade.ring.middleware.multipart_params$wrap_multipart_params$fn__3543.invoke(multipart_params.clj:103)
at org.apache.storm.shade.ring.middleware.reload$wrap_reload$fn__4286.invoke(reload.clj:22)
at org.apache.storm.ui.helpers$requests_middleware$fn__3770.invoke(helpers.clj:46)
at org.apache.storm.ui.core$catch_errors$fn__12301.invoke(core.clj:1230)
at org.apache.storm.shade.ring.middleware.keyword_params$wrap_keyword_params$fn__3474.invoke(keyword_params.clj:27)
at org.apache.storm.shade.ring.middleware.nested_params$wrap_nested_params$fn__3514.invoke(nested_params.clj:65)
at org.apache.storm.shade.ring.middleware.params$wrap_params$fn__3445.invoke(params.clj:55)
at org.apache.storm.shade.ring.middleware.multipart_params$wrap_multipart_params$fn__3543.invoke(multipart_params.clj:103)
at org.apache.storm.shade.ring.middleware.flash$wrap_flash$fn__3729.invoke(flash.clj:14)
at org.apache.storm.shade.ring.middleware.session$wrap_session$fn__3717.invoke(session.clj:43)
at org.apache.storm.shade.ring.middleware.cookies$wrap_cookies$fn__3645.invoke(cookies.clj:160)
at org.apache.storm.shade.ring.util.servlet$make_service_method$fn__3351.invoke(servlet.clj:127)
at org.apache.storm.shade.ring.util.servlet$servlet$fn__3355.invoke(servlet.clj:136)
at org.apache.storm.shade.ring.util.servlet.proxy$javax.servlet.http.HttpServlet$ff19274a.service(Unknown Source)
at org.apache.storm.shade.org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:654)
at org.apache.storm.shade.org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1320)
at org.apache.storm.logging.filters.AccessLoggingFilter.handle(AccessLoggingFilter.java:47)
at org.apache.storm.logging.filters.AccessLoggingFilter.doFilter(AccessLoggingFilter.java:39)
at org.apache.storm.shade.org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1291)
at org.apache.storm.shade.org.eclipse.jetty.servlets.CrossOriginFilter.handle(CrossOriginFilter.java:247)
at org.apache.storm.shade.org.eclipse.jetty.servlets.CrossOriginFilter.doFilter(CrossOriginFilter.java:210)
at org.apache.storm.shade.org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1291)
at org.apache.storm.shade.org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:443)
at org.apache.storm.shade.org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1044)
at org.apache.storm.shade.org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:372)
at org.apache.storm.shade.org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:978)
at org.apache.storm.shade.org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at org.apache.storm.shade.org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.apache.storm.shade.org.eclipse.jetty.server.Server.handle(Server.java:369)
at org.apache.storm.shade.org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:486)
at org.apache.storm.shade.org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:933)
at org.apache.storm.shade.org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:995)
at org.apache.storm.shade.org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:644)
at org.apache.storm.shade.org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
at org.apache.storm.shade.org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
at org.apache.storm.shade.org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:668)
at org.apache.storm.shade.org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:52)
at org.apache.storm.shade.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at org.apache.storm.shade.org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Thread.java:745)
Content of storm.yaml:
########### These MUST be filled in for a storm configuration
storm.zookeeper.servers:
    - "localhost"
storm.local.dir: "/var/storm"
nimbus.host: "localhost"
supervisor.slots.ports:
    - 6700
    - 6701
    - 6702
    - 6703
I resolved the two problems by myself.
For the first problem: I had to restart ZooKeeper after installing Apache Storm.
For the second problem: it was not a Storm problem at all. The cause was the Azure platform, where port 8080 is closed by default (see the sketch below).
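If you are on Azure too, opening the port on the VM usually comes down to one CLI call. A sketch with the Azure CLI (the resource group and VM name are placeholders):
# Allow inbound traffic to the Storm UI port
az vm open-port --resource-group myResourceGroup --name myStormVm --port 8080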
So, I thank myself for this effort. If it were allowed, I would give you (myself) +1M points.
I had the same error and the answer I needed is not here, so here I am.
Before going further: I am using Storm version 2.1.0. In this version, nimbus.host has been replaced by the array option nimbus.seeds.
The log says: Did you specify a valid list of nimbus hosts for config nimbus.seeds? Since nimbus.seeds is a mandatory option, I fixed the problem by simply adding the host IP address as the only element of that list:
nimbus.seeds: ["HOST IP"]
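For completeness, a minimal storm.yaml for Storm 2.x might look like this sketch (the addresses are placeholders; use your actual host IPs):
storm.zookeeper.servers:
    - "127.0.0.1"
nimbus.seeds: ["127.0.0.1"]
storm.local.dir: "/var/storm"
supervisor.slots.ports:
    - 6700
    - 6701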
Kafka 0.8
I followed the quickstart guide, and when I got to Step 2 and ran bin/kafka-server-start.sh config/server.properties, I hit the exception below:
[2013-08-06 09:55:14,603] INFO 0 successfully elected as leader (kafka.server.ZookeeperLeaderElector)
[2013-08-06 09:55:14,657] ERROR Error while electing or becoming leader on broker 0 (kafka.server.ZookeeperLeaderElector)
java.net.SocketException: invalid argument
at sun.nio.ch.Net.connect0(Native Method)
at sun.nio.ch.Net.connect(Net.java:465)
at sun.nio.ch.Net.connect(Net.java:457)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:639)
at kafka.network.BlockingChannel.connect(BlockingChannel.scala:57)
at kafka.controller.ControllerChannelManager.kafka$controller$ControllerChannelManager$$addNewBroker(ControllerChannelManager.scala:84)
at kafka.controller.ControllerChannelManager$$anonfun$1.apply(ControllerChannelManager.scala:35)
at kafka.controller.ControllerChannelManager$$anonfun$1.apply(ControllerChannelManager.scala:35)
at scala.collection.immutable.Set$Set1.foreach(Set.scala:81)
at kafka.controller.ControllerChannelManager.<init>(ControllerChannelManager.scala:35)
at kafka.controller.KafkaController.startChannelManager(KafkaController.scala:503)
at kafka.controller.KafkaController.initializeControllerContext(KafkaController.scala:467)
at kafka.controller.KafkaController.onControllerFailover(KafkaController.scala:215)
at kafka.controller.KafkaController$$anonfun$1.apply$mcV$sp(KafkaController.scala:89)
at kafka.server.ZookeeperLeaderElector.elect(ZookeeperLeaderElector.scala:53)
at kafka.server.ZookeeperLeaderElector$LeaderChangeListener.handleDataDeleted(ZookeeperLeaderElector.scala:106)
at org.I0Itec.zkclient.ZkClient$6.run(ZkClient.java:549)
at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71)
What could I be doing wrong? Please advise.
It is likely to be either a problem with name resolution, or with leftover settings from 0.7.
If you are migrating from 0.7, see the migration guide.
If you're starting fresh, ensure that there is an accurate entry in /etc/hosts for your hostname.
e.g. given a /etc/hostname file with
yourhostname
and an interface (/sbin/ifconfig) listening with an example IP of 10.181.11.14
/etc/hosts should correctly map that name to the listening interface:
10.181.11.14 yourhostname.yourdomain.com yourhostname someotheralias
You can test it by telnetting to the Kafka port and ensuring that there is no timeout:
telnet yourhostname.yourdomain.com 9092
Trying 10.181.11.14...
Connected to yourhostname.yourdomain.com.
Escape character is '^]'.
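If the telnet attempt hangs instead, it is worth confirming what the name actually resolves to before digging into Kafka itself (the hostnames below are the placeholders from the example above):
# What the OS resolver returns for the name
getent hosts yourhostname.yourdomain.com
# What the machine believes its own fully-qualified hostname is
hostname -f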