Why does server fail with "java.net.SocketException: invalid argument" upon server's startup? - apache-kafka

Kafka 0.8
I'm following the quickstart guide, and when I get to Step 2 and run bin/kafka-server-start.sh config/server.properties, I hit this exception:
[2013-08-06 09:55:14,603] INFO 0 successfully elected as leader (kafka.server.ZookeeperLeaderElector)
[2013-08-06 09:55:14,657] ERROR Error while electing or becoming leader on broker 0 (kafka.server.ZookeeperLeaderElector)
java.net.SocketException: invalid argument
at sun.nio.ch.Net.connect0(Native Method)
at sun.nio.ch.Net.connect(Net.java:465)
at sun.nio.ch.Net.connect(Net.java:457)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:639)
at kafka.network.BlockingChannel.connect(BlockingChannel.scala:57)
at kafka.controller.ControllerChannelManager.kafka$controller$ControllerChannelManager$$addNewBroker(ControllerChannelManager.scala:84)
at kafka.controller.ControllerChannelManager$$anonfun$1.apply(ControllerChannelManager.scala:35)
at kafka.controller.ControllerChannelManager$$anonfun$1.apply(ControllerChannelManager.scala:35)
at scala.collection.immutable.Set$Set1.foreach(Set.scala:81)
at kafka.controller.ControllerChannelManager.<init>(ControllerChannelManager.scala:35)
at kafka.controller.KafkaController.startChannelManager(KafkaController.scala:503)
at kafka.controller.KafkaController.initializeControllerContext(KafkaController.scala:467)
at kafka.controller.KafkaController.onControllerFailover(KafkaController.scala:215)
at kafka.controller.KafkaController$$anonfun$1.apply$mcV$sp(KafkaController.scala:89)
at kafka.server.ZookeeperLeaderElector.elect(ZookeeperLeaderElector.scala:53)
at kafka.server.ZookeeperLeaderElector$LeaderChangeListener.handleDataDeleted(ZookeeperLeaderElector.scala:106)
at org.I0Itec.zkclient.ZkClient$6.run(ZkClient.java:549)
at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71)
What could I be doing wrong? Please advise.

It is likely to be either a problem with name resolution, or with leftover settings from 0.7.
If you are migrating from 0.7, see the migration guide.
If you're starting fresh, ensure that there is an accurate entry in /etc/hosts for your hostname.
e.g. given an /etc/hostname file containing
yourhostname
and an interface (see /sbin/ifconfig) listening on an example IP of 10.181.11.14,
/etc/hosts should correctly map that name to the listening interface:
10.181.11.14 yourhostname.yourdomain.com yourhostname someotheralias
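On Linux you can confirm the resolver actually picks up the new entry (getent goes through NSS, which consults /etc/hosts); the output below assumes the example mapping above:
getent hosts yourhostname
10.181.11.14    yourhostname.yourdomain.com yourhostname someotheralias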
You can test it by telnetting to the Kafka port and ensuring that the connection does not time out:
telnet yourhostname.yourdomain.com 9092
Trying 10.181.11.14...
Connected to yourhostname.yourdomain.com.
Escape character is '^]'.
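If name resolution looks right but the broker still misbehaves, Kafka 0.8 also lets you pin the broker's hostname explicitly in config/server.properties. A hedged sketch (the host.name property exists in 0.8 but is commented out in the stock config; the value here is the example name from above):
# config/server.properties
host.name=yourhostname.yourdomain.com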

Related

Kafka server running into java.net.SocketException: Invalid argument exception

I am trying to start up a Kafka server locally on macOS with an M1 chip. I followed the guide from the official Kafka quickstart (https://kafka.apache.org/quickstart). ZooKeeper starts up fine, but bin/kafka-server-start.sh config/server.properties is giving me the socket invalid argument exception below:
[2023-01-30 09:22:55,790] ERROR Encountered an error while configuring the connection, closing it. (kafka.network.DataPlaneAcceptor)
java.net.SocketException: Invalid argument
at java.base/sun.nio.ch.Net.setIntOption0(Native Method)
at java.base/sun.nio.ch.Net.setSocketOption(Net.java:373)
at java.base/sun.nio.ch.SocketChannelImpl.setOption(SocketChannelImpl.java:234)
at java.base/sun.nio.ch.SocketAdaptor.setBooleanOption(SocketAdaptor.java:270)
at java.base/sun.nio.ch.SocketAdaptor.setTcpNoDelay(SocketAdaptor.java:305)
at kafka.network.Acceptor.configureAcceptedSocketChannel(SocketServer.scala:759)
at kafka.network.Acceptor.accept(SocketServer.scala:737)
at kafka.network.Acceptor.acceptNewConnections(SocketServer.scala:703)
at kafka.network.Acceptor.run(SocketServer.scala:645)
at java.base/java.lang.Thread.run(Thread.java:829)
I have tried:
Double-checking that no other application is using the same port (one way to do this is shown after this list)
Using a different JDK (from OpenJDK 17 to OpenJDK 11 and back to 17)
Rebooting my machine
Clearing the Kafka-related log folders under /tmp
Using a lower version (3.2.1) of the Kafka tarball (that one worked for me before, but now it also runs into the same socket issue)
Changing the ZooKeeper port from 2181 to something else
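For reference, the port check in the first item can be done with lsof (one common way; the flags below work on macOS and Linux):
lsof -nP -iTCP:9092 -sTCP:LISTEN
No output means nothing else is listening on port 9092.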
Turns out to be an antivirus issue. Sorry for the false alarm.

"SchemaRegistryException: Failed to get Kafka cluster ID" for LOCAL setup

I downloaded the tarball (I am on macOS) for Confluent version 7.0.0 from the official Confluent site and was following the LOCAL (1 node) setup. Kafka and ZooKeeper start fine, but the Schema Registry keeps failing. (Note: I am behind a corporate VPN.)
The exception message in the SchemaRegistry logs is:
[2021-11-04 00:34:22,492] INFO Logging initialized #1403ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log)
[2021-11-04 00:34:22,543] INFO Initial capacity 128, increased by 64, maximum capacity 2147483647. (io.confluent.rest.ApplicationServer)
[2021-11-04 00:34:22,614] INFO Adding listener: http://0.0.0.0:8081 (io.confluent.rest.ApplicationServer)
[2021-11-04 00:35:23,007] ERROR Error starting the schema registry (io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication)
io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryException: Failed to get Kafka cluster ID
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.kafkaClusterId(KafkaSchemaRegistry.java:1488)
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.<init>(KafkaSchemaRegistry.java:166)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.initSchemaRegistry(SchemaRegistryRestApplication.java:71)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.configureBaseApplication(SchemaRegistryRestApplication.java:90)
at io.confluent.rest.Application.configureHandler(Application.java:271)
at io.confluent.rest.ApplicationServer.doStart(ApplicationServer.java:245)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain.main(SchemaRegistryMain.java:44)
Caused by: java.util.concurrent.TimeoutException
at java.util.concurrent.CompletableFuture.timedGet(CompletableFuture.java:1784)
at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1928)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:180)
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.kafkaClusterId(KafkaSchemaRegistry.java:1486)
... 7 more
My schema-registry.properties file has bootstrap URL set to
kafkastore.bootstrap.servers=PLAINTEXT://localhost:9092
I saw some posts saying it's the Schema Registry being unable to connect to the Kafka cluster URL, potentially because of the localhost address. I am fairly new to Kafka and basically just need this local setup to run a git repo that uses some topics/Kafka, so my questions:
How can I fix this? (I am behind a corporate VPN, but I figured that shouldn't affect this.)
Do I even need the Schema Registry?
I ended up just going with the local Docker setup instead. The only change I had to make to the docker-compose YAML was the schema-registry port (I changed it to 8082 or 8084, I don't remember exactly; just an unused port not taken by another Confluent service listed in docker-compose.yaml), and my local setup is working fine now.
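For reference, the kind of change involved is a one-line port remap in docker-compose.yaml. A hedged sketch, assuming the stock Confluent compose file (service name, image tag, and the original 8081 mapping are assumptions):
schema-registry:
    image: confluentinc/cp-schema-registry:7.0.0
    ports:
        # host port moved from 8081 to 8082 to avoid the clash; container port unchanged
        - "8082:8081"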

Datastax Cassandra Driver always attempts to connect to localhost, even though it's not configured to do so

So I have the following Client code:
def getCluster: Session = {
  import collection.JavaConversions._
  val endpoints = config.getStringList("cassandra.server")
  val keyspace = config.getString("cassandra.keyspace")
  val clusterBuilder = Cluster.builder
  endpoints.toTraversable.map { x =>
    clusterBuilder.addContactPoint(x)
  }
  val cluster = clusterBuilder.build
  cluster
    .getConfiguration
    .getProtocolOptions
    .setCompression(ProtocolOptions.Compression.LZ4)
  cluster.connect(keyspace)
}
which is shamelessly borrowed from the examples in DataStax's driver documentation.
When I attempt to execute queries with it, it always tries to connect to localhost, even though it's not configured for that...
In some cases, it will connect (basic reads) but for writes I get the following log message:
2016-07-07 11:34:31 DEBUG Connection:157 - Connection[/127.0.0.1:9042-10, inFlight=0, closed=false] Error connecting to /127.0.0.1:9042 (Connection refused: /127.0.0.1:9042)
2016-07-07 11:34:31 DEBUG STATES:404 - Defuncting Connection[/127.0.0.1:9042-10, inFlight=0, closed=false] because: [/127.0.0.1] Cannot connect
2016-07-07 11:34:31 DEBUG STATES:108 - [/127.0.0.1:9042] Connection[/127.0.0.1:9042-10, inFlight=0, closed=false] failed, remaining = 0
2016-07-07 11:34:31 DEBUG Connection:629 - Connection[/127.0.0.1:9042-10, inFlight=0, closed=true] closing connection
2016-07-07 11:34:31 DEBUG Cluster:1802 - Aborting onDown because a reconnection is running on DOWN host /127.0.0.1:9042
2016-07-07 11:34:31 DEBUG Cluster:1872 - Failed reconnection to /127.0.0.1:9042 ([/127.0.0.1] Cannot connect), scheduling retry in 512000 milliseconds
2016-07-07 11:34:31 DEBUG STATES:196 - [/127.0.0.1:9042] next reconnection attempt in 512000 ms
I can't figure out where or what I need to configure on the driver side (there is no local client, it's just the driver) to correct this issue.
My guess is that this is caused by configuration of the cassandra.yaml file on your cassandra node(s). The two main settings that would impact this are broadcast_rpc_address and rpc_address, from The cassandra.yaml configuration reference:
broadcast_rpc_address
(Default: unset) RPC address to broadcast to drivers and other Cassandra nodes. This cannot be set to 0.0.0.0. If blank, it is set to the value of rpc_address or rpc_interface. If rpc_address or rpc_interface is set to 0.0.0.0, this property must be set.
rpc_address
(Default: localhost) The listen address for client connections (Thrift RPC service and native transport).
If you leave both of these at their defaults, localhost is the address Cassandra will advertise for client connections.
Note also that after the driver connects to a contact point, it queries that node's system.local and system.peers tables to determine which hosts to connect to, and the addresses in those tables come from rpc_address/broadcast_rpc_address.
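Concretely, a hedged sketch of the two settings on a node whose client-facing address is, say, 10.0.0.5 (the IP is purely illustrative):
# cassandra.yaml
rpc_address: 0.0.0.0              # listen for client connections on all interfaces
broadcast_rpc_address: 10.0.0.5   # address advertised to drivers; must not be 0.0.0.0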

Apache Storm: Could not find leader nimbus from seed hosts

I installed Apache Storm 1.0 by following this tutorial, but I am not able to access the Storm UI from the Internet. Accessing localhost:8080 gives the following error:
org.apache.storm.utils.NimbusLeaderNotFoundException: Could not find leader nimbus from seed hosts [localhost]. Did you specify a valid list of nimbus hosts for config nimbus.seeds?
at org.apache.storm.utils.NimbusClient.getConfiguredClientAs(NimbusClient.java:90)
at org.apache.storm.ui.core$cluster_configuration.invoke(core.clj:343)
at org.apache.storm.ui.core$fn__12106.invoke(core.clj:929)
at org.apache.storm.shade.compojure.core$make_route$fn__2467.invoke(core.clj:93)
at org.apache.storm.shade.compojure.core$if_route$fn__2455.invoke(core.clj:39)
at org.apache.storm.shade.compojure.core$if_method$fn__2448.invoke(core.clj:24)
at org.apache.storm.shade.compojure.core$routing$fn__2473.invoke(core.clj:106)
at clojure.core$some.invoke(core.clj:2570)
at org.apache.storm.shade.compojure.core$routing.doInvoke(core.clj:106)
at clojure.lang.RestFn.applyTo(RestFn.java:139)
at clojure.core$apply.invoke(core.clj:632)
at org.apache.storm.shade.compojure.core$routes$fn__2477.invoke(core.clj:111)
at org.apache.storm.shade.ring.middleware.json$wrap_json_params$fn__11576.invoke(json.clj:56)
at org.apache.storm.shade.ring.middleware.multipart_params$wrap_multipart_params$fn__3543.invoke(multipart_params.clj:103)
at org.apache.storm.shade.ring.middleware.reload$wrap_reload$fn__4286.invoke(reload.clj:22)
at org.apache.storm.ui.helpers$requests_middleware$fn__3770.invoke(helpers.clj:46)
at org.apache.storm.ui.core$catch_errors$fn__12301.invoke(core.clj:1230)
at org.apache.storm.shade.ring.middleware.keyword_params$wrap_keyword_params$fn__3474.invoke(keyword_params.clj:27)
at org.apache.storm.shade.ring.middleware.nested_params$wrap_nested_params$fn__3514.invoke(nested_params.clj:65)
at org.apache.storm.shade.ring.middleware.params$wrap_params$fn__3445.invoke(params.clj:55)
at org.apache.storm.shade.ring.middleware.multipart_params$wrap_multipart_params$fn__3543.invoke(multipart_params.clj:103)
at org.apache.storm.shade.ring.middleware.flash$wrap_flash$fn__3729.invoke(flash.clj:14)
at org.apache.storm.shade.ring.middleware.session$wrap_session$fn__3717.invoke(session.clj:43)
at org.apache.storm.shade.ring.middleware.cookies$wrap_cookies$fn__3645.invoke(cookies.clj:160)
at org.apache.storm.shade.ring.util.servlet$make_service_method$fn__3351.invoke(servlet.clj:127)
at org.apache.storm.shade.ring.util.servlet$servlet$fn__3355.invoke(servlet.clj:136)
at org.apache.storm.shade.ring.util.servlet.proxy$javax.servlet.http.HttpServlet$ff19274a.service(Unknown Source)
at org.apache.storm.shade.org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:654)
at org.apache.storm.shade.org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1320)
at org.apache.storm.logging.filters.AccessLoggingFilter.handle(AccessLoggingFilter.java:47)
at org.apache.storm.logging.filters.AccessLoggingFilter.doFilter(AccessLoggingFilter.java:39)
at org.apache.storm.shade.org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1291)
at org.apache.storm.shade.org.eclipse.jetty.servlets.CrossOriginFilter.handle(CrossOriginFilter.java:247)
at org.apache.storm.shade.org.eclipse.jetty.servlets.CrossOriginFilter.doFilter(CrossOriginFilter.java:210)
at org.apache.storm.shade.org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1291)
at org.apache.storm.shade.org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:443)
at org.apache.storm.shade.org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1044)
at org.apache.storm.shade.org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:372)
at org.apache.storm.shade.org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:978)
at org.apache.storm.shade.org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at org.apache.storm.shade.org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.apache.storm.shade.org.eclipse.jetty.server.Server.handle(Server.java:369)
at org.apache.storm.shade.org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:486)
at org.apache.storm.shade.org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:933)
at org.apache.storm.shade.org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:995)
at org.apache.storm.shade.org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:644)
at org.apache.storm.shade.org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
at org.apache.storm.shade.org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
at org.apache.storm.shade.org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:668)
at org.apache.storm.shade.org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:52)
at org.apache.storm.shade.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at org.apache.storm.shade.org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Thread.java:745)
Content of storm.yaml:
########### These MUST be filled in for a storm configuration
storm.zookeeper.servers:
    - "localhost"
storm.local.dir: "/var/storm"
nimbus.host: "localhost"
supervisor.slots.ports:
    - 6700
    - 6701
    - 6702
    - 6703
I resolved the two problems by myself.
For the first problem: I had to restart ZooKeeper after installing Apache Storm.
For the second problem: it was not a problem with Storm at all. The cause was the Azure platform: port 8080 was closed by default.
So, I thank myself for this effort. If it were allowed, I would give you (myself) +1M points.
I had the same error and the answer I needed is not here, so here I am.
Before going further: I am using Storm version 2.1.0. In this version nimbus.host has been replaced by the array option nimbus.seeds.
The log says Did you specify a valid list of nimbus hosts for config nimbus.seeds?. Since nimbus.seeds is a mandatory option, to fix the problem I simply added the host IP address as the only element of this list:
nimbus.seeds: ["HOST IP"]

The Datastax cassandra community server 2.1.10 service on local computer started and then stopped

I am trying to configure a two-node Cassandra cluster on Windows Server 2008 R2.
So I installed the Cassandra Community version on each server (10.xxx.0.1, 10.xxx.0.2).
I then stopped the service and edited the cassandra.yaml file in the conf folder.
The changes are:
set cluster_name
commented out num_tokens
set the tokens in initial_token
set seeds to 10.xxx.0.1,10.xxx.0.2
set listen_address on each node to its own IP address (10.xxx.0.1 and 10.xxx.0.2 respectively)
set rpc_address to 0.0.0.0
set endpoint_snitch to the gossiping snitch (GossipingPropertyFileSnitch)
I also changed the cassandra-rackdc.properties file to dc=DC1 rack=RAC1.
I then saved, started the service back up, and opened cqlsh, but it does not connect. Below is the error:
2015-10-12 16:20:13 Commons Daemon procrun stderr initialized
If rpc_address is set to a wildcard address (0.0.0.0), then you must set broadcast_rpc_address to a value other than 0.0.0.0
Fatal configuration error; unable to start. See log for stacktrace.
..
ERROR 21:20:14 Fatal configuration error
org.apache.cassandra.exceptions.ConfigurationException: If rpc_address is set to a wildcard address (0.0.0.0), then you must set broadcast_rpc_address to a value other than 0.0.0.0
at org.apache.cassandra.config.DatabaseDescriptor.applyAddressConfig(DatabaseDescriptor.java:285) ~[apache-cassandra-2.1.10.jar:2.1.10]
at org.apache.cassandra.config.DatabaseDescriptor.applyConfig(DatabaseDescriptor.java:443) ~[apache-cassandra-2.1.10.jar:2.1.10]
at org.apache.cassandra.config.DatabaseDescriptor.<clinit>(DatabaseDescriptor.java:136) ~[apache-cassandra-2.1.10.jar:2.1.10]
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:168) [apache-cassandra-2.1.10.jar:2.1.10]
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:562) [apache-cassandra-2.1.10.jar:2.1.10]
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:651) [apache-cassandra-2.1.10.jar:2.1.10]
If you put 0.0.0.0 in rpc_address, you have to set broadcast_rpc_address as well, as described in http://docs.datastax.com/en/cassandra/2.1/cassandra/configuration/configCassandra_yaml_r.html. I think the right broadcast_rpc_address is the node's own IP address.
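For example, on the first node the relevant cassandra.yaml lines might look like this (a hedged sketch; the masked octets are kept as in the question):
# cassandra.yaml on node 10.xxx.0.1
listen_address: 10.xxx.0.1
rpc_address: 0.0.0.0
broadcast_rpc_address: 10.xxx.0.1   # the node's own IP, advertised to clients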