On Ubuntu 22.04 with CouchDB 3.2.2 and OpenJDK 8, I am trying to get Lucene search working by installing Clouseau 2.21.2. I followed this tutorial. The service is available, but when I try to use it (as described in the tutorial), the log indicates that Clouseau disconnects during the handshake with the following:
INFO scalang.node.ErlangHandler - channel disconnected org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext#7495b1fb [id: 0x5e0a511c, /127.0.0.1:40416 :> /127.0.0.1:40743] DISCONNECTED. peer: null
ERROR scalang.node.ServerHandshakeHandler - Channel closed during handshake
while CouchDB asks "ou est clouseau?" ("where is clouseau?") in its own log.
Seems like I am almost there. Am I?
I tried verifying that I really have OpenJDK 8 and not a newer version. I tried installing Scala utils. I tried turning it off and on again.
Related
I am trying to start up a Kafka server locally on macOS with an M1 chip. I followed the guide from the official Kafka quickstart (https://kafka.apache.org/quickstart). ZooKeeper starts up fine, but bin/kafka-server-start.sh config/server.properties gives me the socket "Invalid argument" exception below:
[2023-01-30 09:22:55,790] ERROR Encountered an error while configuring the connection, closing it. (kafka.network.DataPlaneAcceptor)
java.net.SocketException: Invalid argument
at java.base/sun.nio.ch.Net.setIntOption0(Native Method)
at java.base/sun.nio.ch.Net.setSocketOption(Net.java:373)
at java.base/sun.nio.ch.SocketChannelImpl.setOption(SocketChannelImpl.java:234)
at java.base/sun.nio.ch.SocketAdaptor.setBooleanOption(SocketAdaptor.java:270)
at java.base/sun.nio.ch.SocketAdaptor.setTcpNoDelay(SocketAdaptor.java:305)
at kafka.network.Acceptor.configureAcceptedSocketChannel(SocketServer.scala:759)
at kafka.network.Acceptor.accept(SocketServer.scala:737)
at kafka.network.Acceptor.acceptNewConnections(SocketServer.scala:703)
at kafka.network.Acceptor.run(SocketServer.scala:645)
at java.base/java.lang.Thread.run(Thread.java:829)
I have tried:
Double-checking that no other application is using the same port
Using a different JDK (from OpenJDK 17 to OpenJDK 11 and back to 17)
Rebooting my machine
Clearing up the Kafka-related log folders under /tmp
Rebooting my machine again
Using a lower version (3.2.1) of the Kafka tarball (that one worked for me before, but now it also runs into the same socket issue)
Changing the ZooKeeper port from 2181 to something else
Turns out to be an antivirus issue. Sorry for the false alarm.
I received the following log file messages using Xdebug and PHP Debug for remote debugging with:
Client (Windows 8.1, Chrome Browser)
Server (Linux CentOS 7, PHP 7.2.32, Apache 2.4.6, Xdebug - 2.9.6).
I have tried many options but cannot solve this problem.
[20273] I: Connecting to configured address/port: 127.0.0.1:9003.
[20273] W: Creating socket for '127.0.0.1:9003', poll success, but error: Operation now in progress (29).
[20273] E: Could not connect to client. :-(
[20273] Log closed at 2020-10-25 05:35:27
I would appreciate it if anyone could help me.
In one of the ActiveMQ instances (v5.13.3), when I try to create a queue manually using the web console, I get the error shown below.
Within the ActiveMQ logs there are no errors, only the warning below. This instance works fine for the existing queues and topics; the problem only occurs when creating a new queue. Even a restart doesn't help. With a similar setup it works fine on other instances.
2018-07-05 14:42:49,700 | WARN | /admin/createDestination.action | org.eclipse.jetty.servlet.ServletHandler | qtp2076549461-5084797
java.io.EOFException
at java.io.RandomAccessFile.readFully(RandomAccessFile.java:438)[:1.8.0_112]
at java.io.RandomAccessFile.readFully(RandomAccessFile.java:416)[:1.8.0_112]
at org.apache.activemq.util.RecoverableRandomAccessFile.readFully(RecoverableRandomAccessFile.java:75)[activemq-kahadb-store-5.13.3.jar:5.13.3]
at org.apache.activemq.store.kahadb.disk.page.PageFile.readPage(PageFile.java:878)[activemq-kahadb-store-5.13.3.jar:5.13.3]
at org.apache.activemq.store.kahadb.disk.page.Transaction$2.readPage(Transaction.java:456)[activemq-kahadb-store-5.13.3.jar:5.13.3]
at org.apache.activemq.store.kahadb.disk.page.Transaction$2.<init>(Transaction.java:447)[activemq-kahadb-store-5.13.3.jar:5.13.3]
at org.apache.activemq.store.kahadb.disk.page.Transaction.openInputStream(Transaction.java:444)[activemq-kahadb-store-5.13.3.jar:5.13.3]
at org.apache.activemq.store.kahadb.disk.page.Transaction.load(Transaction.java:420)[activemq-kahadb-store-5.13.3.jar:5.13.3]
at org.apache.activemq.store.kahadb.disk.page.Transaction.load(Transaction.java:377)[activemq-kahadb-store-5.13.3.jar:5.13.3]
at org.apache.activemq.store.kahadb.disk.index.BTreeIndex.loadNode(BTreeIndex.java:266)[activemq-kahadb-store-5.13.3.jar:5.13.3]
at org.apache.activemq.store.kahadb.disk.index.BTreeNode.getChild(BTreeNode.java:233)[activemq-kahadb-store-5.13.3.jar:5.13.3]
at org.apache.activemq.store.kahadb.disk.index.BTreeNode.getLast(BTreeNode.java:624)[activemq-kahadb-store-5.13.3.jar:5.13.3]
at org.apache.activemq.store.kahadb.disk.index.BTreeIndex.getLast(BTreeIndex.java:248)[activemq-kahadb-store-5.13.3.jar:5.13.3]
at org.apache.activemq.store.kahadb.MessageDatabase$MessageOrderIndex.addLast(MessageDatabase.java:3298)[activemq-kahadb-store-5.13.3.jar:5.13.3]
at org.apache.activemq.store.kahadb.MessageDatabase$MessageOrderIndex.configureLast(MessageDatabase.java:3289)[activemq-kahadb-store-5.13.3.jar:5.13.3]
at org.apache.activemq.store.kahadb.MessageDatabase.loadStoredDestination(MessageDatabase.java:2346)[activemq-kahadb-store-5.13.3.jar:5.13.3]
at org.apache.activemq.store.kahadb.MessageDatabase.getStoredDestination(MessageDatabase.java:2298)[activemq-kahadb-store-5.13.3.jar:5.13.3]
at org.apache.activemq.store.kahadb.KahaDBStore$KahaDBMessageStore$7.execute(KahaDBStore.java:722)[activemq-kahadb-store-5.13.3.jar:5.13.3]
at org.apache.activemq.store.kahadb.KahaDBStore$KahaDBMessageStore$7.execute(KahaDBStore.java:716)[activemq-kahadb-store-5.13.3.jar:5.13.3]
at org.apache.activemq.store.kahadb.disk.page.Transaction.execute(Transaction.java:802)[activemq-kahadb-store-5.13.3.jar:5.13.3]
at org.apache.activemq.store.kahadb.KahaDBStore$KahaDBMessageStore.recoverMessageStoreStatistics(KahaDBStore.java:716)[activemq-kahadb-store-5.13.3.jar:5.13.3]
at org.apache.activemq.store.AbstractMessageStore.start(AbstractMessageStore.java:45)[activemq-broker-5.13.3.jar:5.13.3]
at org.apache.activemq.store.kahadb.KahaDBStore$KahaDBMessageStore.start(KahaDBStore.java:671)[activemq-kahadb-store-5.13.3.jar:5.13.3]
at org.apache.activemq.store.ProxyMessageStore.start(ProxyMessageStore.java:75)[activemq-broker-5.13.3.jar:5.13.3]
at org.apache.activemq.broker.region.Queue.initialize(Queue.java:385)[activemq-broker-5.13.3.jar:5.13.3]
at org.apache.activemq.broker.region.DestinationFactoryImpl.createDestination(DestinationFactoryImpl.java:87)[activemq-broker-5.13.3.jar:5.13.3]
at org.apache.activemq.broker.region.AbstractRegion.createDestination(AbstractRegion.java:629)[activemq-broker-5.13.3.jar:5.13.3]
at org.apache.activemq.broker.jmx.ManagedQueueRegion.createDestination(ManagedQueueRegion.java:56)[activemq-broker-5.13.3.jar:5.13.3]
at org.apache.activemq.broker.region.AbstractRegion.addDestination(AbstractRegion.java:155)[activemq-broker-5.13.3.jar:5.13.3]
at org.apache.activemq.broker.region.RegionBroker.addDestination(RegionBroker.java:348)[activemq-broker-5.13.3.jar:5.13.3]
at org.apache.activemq.broker.BrokerFilter.addDestination(BrokerFilter.java:173)[activemq-broker-5.13.3.jar:5.13.3]
at org.apache.activemq.advisory.AdvisoryBroker.addDestination(AdvisoryBroker.java:239)[activemq-broker-5.13.3.jar:5.13.3]
at org.apache.activemq.broker.BrokerFilter.addDestination(BrokerFilter.java:173)[activemq-broker-5.13.3.jar:5.13.3]
at org.apache.activemq.broker.BrokerFilter.addDestination(BrokerFilter.java:173)[activemq-broker-5.13.3.jar:5.13.3]
at org.apache.activemq.broker.MutableBrokerFilter.addDestination(MutableBrokerFilter.java:178)[activemq-broker-5.13.3.jar:5.13.3]
at org.apache.activemq.broker.jmx.BrokerView.addQueue(BrokerView.java:405)[activemq-broker-5.13.3.jar:5.13.3]
at org.apache.activemq.web.DestinationFacade.addDestination(DestinationFacade.java:56)[activemq-web-5.13.3.jar:5.13.3]
at org.apache.activemq.web.controller.CreateDestination.handleRequest(CreateDestination.java:38)[file:/C:/Talend/6.2.1/esb/activemq/webapps/admin/WEB-INF/classes/:]
at org.springframework.web.servlet.mvc.SimpleControllerHandlerAdapter.handle(SimpleControllerHandlerAdapter.java:50)[spring-webmvc-4.1.9.RELEASE.jar:4.1.9.RELEASE]
at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:959)[spring-webmvc-4.1.9.RELEASE.jar:4.1.9.RELEASE]
at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:893)[spring-webmvc-4.1.9.RELEASE.jar:4.1.9.RELEASE]
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:965)[spring-webmvc-4.1.9.RELEASE.jar:4.1.9.RELEASE]
at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:867)[spring-webmvc-4.1.9.RELEASE.jar:4.1.9.RELEASE]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:648)[tomcat-servlet-api-8.0.24.jar:]
at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:841)[spring-webmvc-4.1.9.RELEASE.jar:4.1.9.RELEASE]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:729)[tomcat-servlet-api-8.0.24.jar:]
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:808)[jetty-all-9.2.13.v20150730.jar:9.2.13.v20150730]
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1669)[jetty-all-9.2.13.v20150730.jar:9.2.13.v20150730]
at org.apache.activemq.web.AuditFilter.doFilter(AuditFilter.java:59)[activemq-web-5.13.3.jar:5.13.3]
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)[jetty-all-9.2.13.v20150730.jar:9.2.13.v20150730]
at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:99)[spring-web-4.1.9.RELEASE.jar:4.1.9.RELEASE]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)[spring-web-4.1.9.RELEASE.jar:4.1.9.RELEASE]
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)[jetty-all-9.2.13.v20150730.jar:9.2.13.v20150730]
at org.apache.activemq.web.filter.ApplicationContextFilter.doFilter(ApplicationContextFilter.java:102)[file:/C:/Talend/6.2.1/esb/activemq/webapps/admin/WEB-INF/classes/:]
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)[jetty-all-9.2.13.v20150730.jar:9.2.13.v20150730]
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)[jetty-all-9.2.13.v20150730.jar:9.2.13.v20150730]
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)[jetty-all-9.2.13.v20150730.jar:9.2.13.v20150730]
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:542)[jetty-all-9.2.13.v20150730.jar:9.2.13.v20150730]
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)[jetty-all-9.2.13.v20150730.jar:9.2.13.v20150730]
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)[jetty-all-9.2.13.v20150730.jar:9.2.13.v20150730]
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)[jetty-all-9.2.13.v20150730.jar:9.2.13.v20150730]
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)[jetty-all-9.2.13.v20150730.jar:9.2.13.v20150730]
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)[jetty-all-9.2.13.v20150730.jar:9.2.13.v20150730]
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)[jetty-all-9.2.13.v20150730.jar:9.2.13.v20150730]
at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)[jetty-all-9.2.13.v20150730.jar:9.2.13.v20150730]
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:542)[jetty-all-9.2.13.v20150730.jar:9.2.13.v20150730]
at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)[jetty-all-9.2.13.v20150730.jar:9.2.13.v20150730]
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)[jetty-all-9.2.13.v20150730.jar:9.2.13.v20150730]
at org.eclipse.jetty.server.Server.handle(Server.java:499)[jetty-all-9.2.13.v20150730.jar:9.2.13.v20150730]
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)[jetty-all-9.2.13.v20150730.jar:9.2.13.v20150730]
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)[jetty-all-9.2.13.v20150730.jar:9.2.13.v20150730]
at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)[jetty-all-9.2.13.v20150730.jar:9.2.13.v20150730]
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)[jetty-all-9.2.13.v20150730.jar:9.2.13.v20150730]
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)[jetty-all-9.2.13.v20150730.jar:9.2.13.v20150730]
at java.lang.Thread.run(Thread.java:745)[:1.8.0_112]
Screenshot of the error shown by the web console: https://i.stack.imgur.com/gNbCT.png
This is most likely caused by a Jetty version that is incompatible with JDK 1.8. You can resolve this either by downgrading the JDK to 1.7 and setting JAVA_HOME accordingly, or by upgrading to a Jetty version that is compatible with JDK 1.8.
I ran into a similar issue with ActiveMQ 5.10.0: the web console worked fine with JDK 1.7 but threw an exception with JDK 1.8.
The root cause of the problem was corrupted KahaDB database files in ActiveMQ. I copied the kahadb folder from another runtime instance (with a similar setup) and restarted the ActiveMQ service. The issue is now fixed.
I installed OrientDB 2.2.0 on my server but can't start it using the server.sh script. The previous version started fine on the server, and the current version runs fine on my notebook. The server is a DigitalOcean droplet with Ubuntu 15.10 32-bit. The error I get is below.
Invalid maximum direct memory size: -XX:MaxDirectMemorySize=512g
The specified size exceeds the maximum representable size.
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
Update
The problem was with -XX:MaxDirectMemorySize=512g. I changed it to -XX:MaxDirectMemorySize=512m and the error disappeared. The problem now is that the server tries to start but gives me the message below:
Creating the system database 'OSystem' for current server [OSystemDatabase]
Exception in thread "main" java.lang.OutOfMemoryError: Direct buffer memory
at java.nio.Bits.reserveMemory(Bits.java:693)
at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123)
at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311)
at com.orientechnologies.common.directmemory.OByteBufferPool.allocateBuffer(OByteBufferPool.java:309)
at com.orientechnologies.common.directmemory.OByteBufferPool.acquireDirect(OByteBufferPool.java:228)
at com.orientechnologies.orient.core.storage.cache.local.OWOWCache.cacheFileContent(OWOWCache.java:1255)
at com.orientechnologies.orient.core.storage.cache.local.OWOWCache.load(OWOWCache.java:617)
at com.orientechnologies.orient.core.storage.cache.local.twoq.O2QCache.updateCache(O2QCache.java:1200)
at com.orientechnologies.orient.core.storage.cache.local.twoq.O2QCache.doLoad(O2QCache.java:439)
at com.orientechnologies.orient.core.storage.cache.local.twoq.O2QCache.allocateNewPage(O2QCache.java:489)
at com.orientechnologies.orient.core.storage.impl.local.paginated.atomicoperations.OAtomicOperation.commitChanges(OAtomicOperation.java:426)
at com.orientechnologies.orient.core.storage.impl.local.paginated.atomicoperations.OAtomicOperationsManager.endAtomicOperation(OAtomicOperationsManager.java:420)
at com.orientechnologies.orient.core.storage.impl.local.paginated.base.ODurableComponent.endAtomicOperation(ODurableComponent.java:118)
at com.orientechnologies.orient.core.storage.impl.local.paginated.OPaginatedCluster.create(OPaginatedCluster.java:197)
at com.orientechnologies.orient.core.storage.impl.local.OAbstractPaginatedStorage.addClusterInternal(OAbstractPaginatedStorage.java:3349)
at com.orientechnologies.orient.core.storage.impl.local.OAbstractPaginatedStorage.doAddCluster(OAbstractPaginatedStorage.java:3330)
at com.orientechnologies.orient.core.storage.impl.local.OAbstractPaginatedStorage.create(OAbstractPaginatedStorage.java:381)
at com.orientechnologies.orient.core.storage.impl.local.paginated.OLocalPaginatedStorage.create(OLocalPaginatedStorage.java:120)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.create(ODatabaseDocumentTx.java:378)
at com.orientechnologies.orient.server.OSystemDatabase.init(OSystemDatabase.java:106)
at com.orientechnologies.orient.server.OSystemDatabase.<init>(OSystemDatabase.java:42)
at com.orientechnologies.orient.server.OServer.initSystemDatabase(OServer.java:1217)
at com.orientechnologies.orient.server.OServer.activate(OServer.java:343)
at com.orientechnologies.orient.server.OServerMain.main(OServerMain.java:41)
(Posted on behalf of the OP).
Everything is okay now. I just changed -XX:MaxDirectMemorySize=512m to -XX:MaxDirectMemorySize=2g.
I suppose you have a 32-bit JVM. You should set this value to -XX:MaxDirectMemorySize=2g.
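For context, the "Direct buffer memory" error above is the JVM refusing further off-heap allocations once the running total passes -XX:MaxDirectMemorySize, which is why raising the limit makes it go away. A minimal standalone sketch (a hypothetical demo class, not OrientDB code) that reproduces the error:

    // DirectMemoryDemo.java -- run with: java -XX:MaxDirectMemorySize=16m DirectMemoryDemo
    import java.nio.ByteBuffer;
    import java.util.ArrayList;
    import java.util.List;

    public class DirectMemoryDemo {
        public static void main(String[] args) {
            List<ByteBuffer> buffers = new ArrayList<>();
            // Keep allocating 1 MB off-heap buffers and hold references so they
            // are not collected; once the total exceeds -XX:MaxDirectMemorySize
            // the JVM throws java.lang.OutOfMemoryError: Direct buffer memory
            while (true) {
                buffers.add(ByteBuffer.allocateDirect(1024 * 1024));
                System.out.println("Allocated " + buffers.size() + " MB of direct memory");
            }
        }
    }

OrientDB allocates its disk cache from direct buffers (see OByteBufferPool in the stack trace above), so the limit has to cover the configured cache size, which is presumably why 512m was not enough here.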
We have a small test cluster with 3 nodes on Amazon. Everything seems to be working with cqlsh, but when I try to debug my app from my laptop (outside of Amazon, of course), I get 'Channel has been closed' errors and it starts retrying forever. I know it's likely caused by the config in cassandra.yaml, as some private IPs show up in my Eclipse console. I have tried many different things but still get the same problem. I'd appreciate any input on this. How do I get rid of the private IPs 10.251.x.x on the client side?
Here is some context.
Versions:
[cqlsh 4.0.1 | Cassandra 2.0.4 | CQL spec 3.1.1 | Thrift protocol 19.39.0]
cassandra-driver-core-2.0.0-rc1.jar
In cassandra.yaml:
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "54.203.x.x,54.203.x.y"
listen_address: 10.251.a.b
broadcast_address: 54.203.x.x
native_transport_port: 9042
endpoint_snitch: Ec2MultiRegionSnitch
In Eclipse console:
DEBUG [main] (ControlConnection.java:145) - [Control connection] Successfully connected to /54.203.x.x
DEBUG [Cassandra Java Driver worker-0] (Session.java:379) - Adding /54.203.x.x to list of queried hosts
DEBUG [Cassandra Java Driver worker-1] (Session.java:379) - Adding /10.251.a.c to list of queried hosts
DEBUG [Cassandra Java Driver worker-1] (Connection.java:103) - [/10.251.a.c-1] Error connecting to /10.251.a.c (connection timed out: /10.251.a.c:9042)
DEBUG [Cassandra Java Driver worker-1] (Session.java:390) - Error creating pool to /10.251.a.c ([/10.251.a.c] Cannot connect)
DEBUG [Cassandra Java Driver worker-1] (Cluster.java:1064) - /10.251.a.c is down, scheduling connection retries
DEBUG [New I/O worker #4] (Connection.java:194) - Defuncting connection to /10.251.a.c
com.datastax.driver.core.TransportException: [/10.251.a.b] Channel has been closed
at com.datastax.driver.core.Connection$Dispatcher.channelClosed(Connection.java:548)
...
It seems that your Java driver is using auto-discovery, calling "describe cluster" to get a list of all nodes in your cluster. In AWS, using Ec2Snitch, that yields private IPs, which obviously won't work from outside of AWS. There is a discussion on this topic here:
https://datastax-oss.atlassian.net/browse/JAVA-145
The last comment got my attention. It says you can use the driver's LoadBalancingPolicy to limit the nodes. I hope this includes specifying specific IPs so it does not auto-discover.
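One way to do that with the Java driver is to wrap a child policy in a WhiteListPolicy, so the driver only ever opens connections to the public addresses you list and ignores the privately addressed peers it discovers. A rough sketch (WhiteListPolicy ships with later 2.0.x releases of cassandra-driver-core, so it may require upgrading from the rc1 jar above; the addresses are the masked public IPs from the question):

    import java.net.InetSocketAddress;
    import java.util.Arrays;
    import java.util.List;

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.policies.RoundRobinPolicy;
    import com.datastax.driver.core.policies.WhiteListPolicy;

    public class PublicIpOnlyClient {
        public static void main(String[] args) {
            // Only the public addresses of the nodes; hosts outside this list
            // (e.g. the discovered 10.251.x.x peers) are treated as ignored
            // and never get a connection pool.
            List<InetSocketAddress> whiteList = Arrays.asList(
                    new InetSocketAddress("54.203.x.x", 9042),
                    new InetSocketAddress("54.203.x.y", 9042));

            Cluster cluster = Cluster.builder()
                    .addContactPoint("54.203.x.x")
                    .withLoadBalancingPolicy(new WhiteListPolicy(new RoundRobinPolicy(), whiteList))
                    .build();

            Session session = cluster.connect();
            System.out.println("Connected to cluster: " + cluster.getMetadata().getClusterName());
            session.close();
            cluster.close();
        }
    }

With the white list in place, the driver should stop scheduling retries against the 10.251.x.x addresses, since those hosts are never handed to the connection pool.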