What causes this OrientDB startup error? - orientdb

I just started getting this error when running OrientDB. Any advice on what to do?
2015-11-10 08:09:52:003 INFO OrientDB auto-config DISKCACHE=489MB (heap=491MB os=52,339MB disk=978MB) [orientechnologies]
2015-11-10 08:09:52:111 INFO Loading configuration from: /home/ubuntu/workspace/orient215/config/orientdb-server-config.xml... [OServerConfigurationLoaderXml]
2015-11-10 08:09:52:391 INFO OrientDB Server v2.1.5 (build 2.1.x#r; 2015-10-29 16:54:25+0000) is starting up... [OServer]
2015-11-10 08:09:52:430 INFO Databases directory: /home/ubuntu/workspace/orient215/databases [OServer]
2015-11-10 08:09:52:484 INFO Listening binary connections on 0.0.0.0:2424 (protocol v.32, socket=default) [OServerNetworkListener]
Exception in thread "main" java.lang.NegativeArraySizeException
at com.orientechnologies.orient.server.network.OServerNetworkListener.getPorts(OServerNetworkListener.java:113)
at com.orientechnologies.orient.server.network.OServerNetworkListener.listen(OServerNetworkListener.java:305)
at com.orientechnologies.orient.server.network.OServerNetworkListener.<init>(OServerNetworkListener.java:79)
at com.orientechnologies.orient.server.OServer.activate(OServer.java:334)
at com.orientechnologies.orient.server.OServerMain.main(OServerMain.java:41)

Well, I deleted this question quickly because I found my own answer. But on second thought, if someone else makes a typo in the port specification in orientdb-server-config.xml, they might find this post useful: the NegativeArraySizeException above was caused by a typo in the port specification of a listener entry. A corrected listener entry is sketched below.
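For reference, a minimal sketch of well-formed listener entries in orientdb-server-config.xml (the ranges below are the stock defaults; the key point is that the port-range value is numeric and the lower bound comes first):

<!-- sketch only: stock default ranges; a reversed or non-numeric range (e.g. "2430-2424") is the kind of typo that would presumably produce the NegativeArraySizeException in getPorts() seen above -->
<listener protocol="binary" ip-address="0.0.0.0" port-range="2424-2430" socket="default"/>
<listener protocol="http" ip-address="0.0.0.0" port-range="2480-2490" socket="default"/>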

Related

KAFKA - ERROR Disk error while locking directory

I got the error "ERROR Disk error while locking directory" while trying to start Kafka with kafka-server-start.sh config/server.properties:
[2020-05-21 23:44:11,323] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-05-21 23:44:11,323] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-05-21 23:44:11,324] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-05-21 23:44:11,340] ERROR Disk error while locking directory /opt/kafka2.13/kafka_2.13-2.5.0/data_log_tu_tao/kafka_data (kafka.server.LogDirFailureChannel)
java.nio.file.AccessDeniedException: /opt/kafka2.13/kafka_2.13-2.5.0/data_log_tu_tao/kafka_data/.lock
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:177)
at java.nio.channels.FileChannel.open(FileChannel.java:287)
at java.nio.channels.FileChannel.open(FileChannel.java:335)
at kafka.utils.FileLock.<init>(FileLock.scala:31)
at kafka.log.LogManager.$anonfun$lockLogDirs$1(LogManager.scala:235)
at scala.collection.StrictOptimizedIterableOps.flatMap(StrictOptimizedIterableOps.scala:118)
at scala.collection.StrictOptimizedIterableOps.flatMap$(StrictOptimizedIterableOps.scala:105)
at scala.collection.mutable.ArraySeq.flatMap(ArraySeq.scala:38)
at kafka.log.LogManager.lockLogDirs(LogManager.scala:233)
at kafka.log.LogManager.<init>(LogManager.scala:104)
at kafka.log.LogManager$.apply(LogManager.scala:1084)
at kafka.server.KafkaServer.startup(KafkaServer.scala:253)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:44)
at kafka.Kafka$.main(Kafka.scala:82)
at kafka.Kafka.main(Kafka.scala)
[2020-05-21 23:44:11,344] ERROR [KafkaServer id=0] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
java.nio.file.AccessDeniedException: /opt/kafka2.13/kafka_2.13-2.5.0/data_log_tu_tao/kafka_data/recovery-point-offset-checkpoint
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
at java.nio.file.Files.newByteChannel(Files.java:361)
at java.nio.file.Files.createFile(Files.java:632)
at kafka.server.checkpoints.CheckpointFile.<init>(CheckpointFile.scala:45)
at kafka.server.checkpoints.OffsetCheckpointFile.<init>(OffsetCheckpointFile.scala:57)
at kafka.log.LogManager.$anonfun$recoveryPointCheckpoints$1(LogManager.scala:106)
at scala.collection.StrictOptimizedIterableOps.map(StrictOptimizedIterableOps.scala:100)
at scala.collection.StrictOptimizedIterableOps.map$(StrictOptimizedIterableOps.scala:87)
at scala.collection.mutable.ArraySeq.map(ArraySeq.scala:38)
at kafka.log.LogManager.<init>(LogManager.scala:105)
at kafka.log.LogManager$.apply(LogManager.scala:1084)
at kafka.server.KafkaServer.startup(KafkaServer.scala:253)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:44)
at kafka.Kafka$.main(Kafka.scala:82)
at kafka.Kafka.main(Kafka.scala)
This is a known issue with the Kafka distribution for Windows. Refer to: https://issues.apache.org/jira/browse/KAFKA-13391
Either use Kafka 2.8 (kafka_2.12-2.8.1.tgz) or wait for Kafka 3.0.1 or Kafka 3.1.0.
For people using Kafka on Windows and hitting a related error such as
java.nio.file.AccessDeniedException:
this is a common error when log retention kicks in. Kafka does not have good support for the Windows filesystem; you can use WSL2 or Docker to work around these limitations.
Try to use kafka_2.12-2.8.1.tgz. This resolved the issue for me.
As the error states, the user that starts the Kafka Server process does not have access to your log.dirs:
[2020-05-21 23:44:11,344] ERROR [KafkaServer id=0] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
java.nio.file.AccessDeniedException: /opt/kafka2.13/kafka_2.13-2.5.0/data_log_tu_tao/kafka_data/recovery-point-offset-checkpoint
You can either:
change log.dirs (make sure NOT to use /tmp/),
or grant read/write access to /opt/kafka2.13/kafka_2.13-2.5.0/data_log_tu_tao/kafka_data/ (both options are sketched below).
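For illustration only (the directory path is taken from the log above; the new path in option 1 and the kafka user/group name in option 2 are assumptions about your setup):

# Option 1: in config/server.properties, point log.dirs at a directory the Kafka process can write to
log.dirs=/var/lib/kafka-data
# Option 2: keep the current directory but hand ownership to the user that runs Kafka
sudo chown -R kafka:kafka /opt/kafka2.13/kafka_2.13-2.5.0/data_log_tu_tao/kafka_data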
If neither of the above options works for you, it is worth checking whether the directory actually exists. If not, simply create it by running:
mkdir -p /opt/kafka2.13/kafka_2.13-2.5.0/data_log_tu_tao/kafka_data
As a side note, I wouldn't say that /opt/ is the best place to store data.

Unable to start Drill in distributed mode

I am trying to set up Drill v1.18 and am facing the error below.
The drill-override.conf points to the ZooKeeper instance that runs on port 12181. When starting in distributed mode, it fails with the following log output, but embedded mode has no issues.
It looks like a permission issue, but ZooKeeper, Drill, and the ZooKeeper data directory all run under the same user.
2020-05-10 16:23:01,160 [main] DEBUG o.apache.drill.exec.server.Drillbit - Construction started.
2020-05-10 16:23:01,448 [main] DEBUG o.a.d.e.c.zk.ZKClusterCoordinator - Connect localhost:12181, zkRoot drill, clusterId: drillbits1
2020-05-10 16:23:01,531 [main] INFO o.a.d.e.s.s.PersistentStoreRegistry - Using the configured PStoreProvider class: 'org.apache.drill.exec.store.sys.store.provider.ZookeeperPersistentStoreProvider'.
2020-05-10 16:23:01,718 [main] DEBUG o.a.drill.exec.ssl.SSLConfigServer - Using Hadoop configuration for SSL
2020-05-10 16:23:01,718 [main] DEBUG o.a.drill.exec.ssl.SSLConfigServer - Hadoop SSL configuration file: ssl-server.xml
2020-05-10 16:23:01,731 [main] DEBUG org.apache.drill.exec.ssl.SSLConfig - Initialized SSL context.
2020-05-10 16:23:01,731 [main] INFO o.a.drill.exec.rpc.user.UserServer - Rpc server configured to use TLS protocol 'TLSv1.2'
2020-05-10 16:23:01,738 [main] INFO o.apache.drill.exec.server.Drillbit - Construction completed (577 ms).
2020-05-10 16:23:01,738 [main] DEBUG o.apache.drill.exec.server.Drillbit - Startup begun.
2020-05-10 16:23:01,738 [main] DEBUG o.a.d.e.c.zk.ZKClusterCoordinator - Starting ZKClusterCoordination.
2020-05-10 16:23:03,775 [main] ERROR o.apache.drill.exec.server.Drillbit - Failure during initial startup of Drillbit.
org.apache.zookeeper.KeeperException$UnimplementedException: KeeperErrorCode = Unimplemented for /drill
at org.apache.zookeeper.KeeperException.create(KeeperException.java:106)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:1538)
at org.apache.curator.utils.ZKPaths.mkdirs(ZKPaths.java:351)
at org.apache.curator.framework.imps.ExistsBuilderImpl$2.call(ExistsBuilderImpl.java:230)
at org.apache.curator.framework.imps.ExistsBuilderImpl$2.call(ExistsBuilderImpl.java:224)
at org.apache.curator.connection.StandardConnectionHandlingPolicy.callWithRetry(StandardConnectionHandlingPolicy.java:67)
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:81)
at org.apache.curator.framework.imps.ExistsBuilderImpl.pathInForeground(ExistsBuilderImpl.java:221)
at org.apache.curator.framework.imps.ExistsBuilderImpl.forPath(ExistsBuilderImpl.java:206)
at org.apache.curator.framework.imps.ExistsBuilderImpl.forPath(ExistsBuilderImpl.java:35)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.createContainers(CuratorFrameworkImpl.java:265)
at org.apache.curator.framework.EnsureContainers.internalEnsure(EnsureContainers.java:69)
at org.apache.curator.framework.EnsureContainers.ensure(EnsureContainers.java:53)
at org.apache.curator.framework.recipes.cache.PathChildrenCache.ensurePath(PathChildrenCache.java:596)
at org.apache.curator.framework.recipes.cache.PathChildrenCache.rebuild(PathChildrenCache.java:327)
at org.apache.curator.framework.recipes.cache.PathChildrenCache.start(PathChildrenCache.java:304)
at org.apache.curator.framework.recipes.cache.PathChildrenCache.start(PathChildrenCache.java:252)
at org.apache.curator.x.discovery.details.ServiceCacheImpl.start(ServiceCacheImpl.java:99)
at org.apache.drill.exec.coord.zk.ZKClusterCoordinator.start(ZKClusterCoordinator.java:145)
at org.apache.drill.exec.server.Drillbit.run(Drillbit.java:220)
at org.apache.drill.exec.server.Drillbit.start(Drillbit.java:584)
at org.apache.drill.exec.server.Drillbit.start(Drillbit.java:554)
at org.apache.drill.exec.server.Drillbit.main(Drillbit.java:550)
Version 1.17 has no issues starting in distributed mode.
The issue here is with the ZooKeeper version. You are probably using a 3.4.x version, but the current version of Drill requires 3.5.x. As a workaround, you may replace jars/ext/zookeeper-3.5.7.jar and jars/ext/zookeeper-jute-3.5.7.jar with the jars that correspond to your ZooKeeper version.
In addition to Vova Vysotskyi's answer, you can find more information about this issue in the Drill documentation:
https://drill.apache.org/docs/distributed-mode-prerequisites/
Starting in Drill 1.18 the bundled ZooKeeper libraries are upgraded to version 3.5.7, preventing connections to older (< 3.5) ZooKeeper clusters. In order to connect to a ZooKeeper < 3.5 cluster, replace the ZooKeeper library JARs in ${DRILL_HOME}/jars/ext with zookeeper-3.4.x.jar then restart the cluster.
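A rough sketch of that jar swap, assuming your cluster runs ZooKeeper 3.4.14 (the version number and the source path of the replacement jar are illustrative; adjust to your actual ZooKeeper release):

# back up the bundled 3.5.7 jars that ship with Drill 1.18
cd $DRILL_HOME/jars/ext
mkdir -p zk-3.5-backup
mv zookeeper-3.5.7.jar zookeeper-jute-3.5.7.jar zk-3.5-backup/
# drop in the jar matching your cluster (3.4.x releases bundle the jute classes in the main jar)
cp /path/to/zookeeper-3.4.14.jar .
# then restart the drillbits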

OrientDB login fails all of a sudden

I've been using orientdb-2.2.19 for a while with no problem. After restarting my computer today the http://xxx.xxx.x.x:2480/studio/index.html#/ login page does not let me log into my database. The bin/server.sh command does not show any error:
(OrientDB "GRAPH DATABASE" ASCII-art startup banner, orientdb.com)
/hdd/orientdb-2.2.19
removing old pid file /hdd/orientdb-2.2.19/bin/orient.pid
2018-01-04 12:40:30:080 INFO Loading configuration from: /hdd/orientdb-2.2.19/config/orientdb-server-config.xml... [OServerConfigurationLoaderXml]
2018-01-04 12:40:30:250 INFO OrientDB Server v2.2.19 (build 1758205114667e967bba32d61adda4022bbad08e) is starting up... [OServer]
2018-01-04 12:40:30:253 INFO Databases directory: /hdd/orientdb-2.2.19/databases [OServer]
2018-01-04 12:40:30:297 INFO OrientDB auto-config DISKCACHE=11,969MB (heap=1,963MB direct=524,288MB os=15,980MB) [OMemoryAndLocalPaginatedEnginesInitializer]
2018-01-04 12:40:30:509 INFO Listening binary connections on 0.0.0.0:2424 (protocol v.36, socket=default) [OServerNetworkListener]
2018-01-04 12:40:30:512 INFO Listening http connections on 0.0.0.0:2480 (protocol v.10, socket=default) [OServerNetworkListener]
2018-01-04 12:40:30:519 INFO Installing dynamic plugin 'orientdb-studio-2.2.19.zip'... [OServerPluginManager]
2018-01-04 12:40:30:521 INFO ODefaultPasswordAuthenticator is active [ODefaultPasswordAuthenticator]
2018-01-04 12:40:30:522 INFO OServerConfigAuthenticator is active [OServerConfigAuthenticator]
2018-01-04 12:40:30:523 INFO OSystemUserAuthenticator is active [OSystemUserAuthenticator]
2018-01-04 12:40:30:537 INFO Installed GREMLIN language v.2.6.0 - graph.pool.max=50 [OGraphServerHandler]
2018-01-04 12:40:30:537 INFO [OVariableParser.resolveVariables] Error on resolving property: distributed [orientechnologies]
2018-01-04 12:40:30:540 WARNI Authenticated clients can execute any kind of code into the server by using the following allowed languages: [sql] [OServerSideScriptInterpreter]
2018-01-04 12:40:30:541 INFO OrientDB Studio available at http://xxx.xxx.x.x:2480/studio/index.html [OServer]
2018-01-04 12:40:30:541 INFO OrientDB Server is active v2.2.19 (build 1758205114667e967bba32d61adda4022bbad08e). [OServer]
I can log into the default "GreatfulDeadConcerts" database with the same username/password though. So I know I'm using the correct username and pass.
What could cause this problem?
One possible cause is that I copied the entire orientdb-2.2.19 folder from another directory into the one I'm using now. But that did not cause any problems for almost three weeks, so I'm not sure it is the real reason.
Update:
As suggested, I set up a new user and password, but the login still fails. This seems to be a more serious problem than just a login issue.
I ran this command:
orientdb>CONNECT plocal:../databases/Test HUser HPass
which gave me the following error:
Error: java.lang.NumberFormatException: For input string: "public"

OrientDB & .Net driver: Unable to read data from the transport connection

Getting an error while reading the network stream from a successful socket connection. Please see the debug log from OrientDB:
2016-04-08 18:08:51:590 WARNI Not enough physical memory available for DISKCACHE: 1,977MB (heap=494MB). Set lower Maximum Heap (-Xmx setting on JVM) and restart OrientDB. Now running with DISKCACHE=256MB [orientechnologies]
2016-04-08 18:08:51:606 INFO OrientDB config DISKCACHE=-566MB (heap=494MB os=1,977MB disk=16,656MB) [orientechnologies]
2016-04-08 18:08:51:809 INFO Loading configuration from: C:/inetpub/wwwroot/orientdb-2.1.5/config/orientdb-server-config.xml... [OServerConfigurationLoaderXml]
2016-04-08 18:08:52:292 INFO OrientDB Server v2.1.5 (build 2.1.x#r${buildNumber}; 2015-10-29 16:54:25+0000) is starting up... [OServer]
2016-04-08 18:08:52:370 INFO Databases directory: C:\inetpub\wwwroot\orientdb-2.1.5\databases [OServer]
2016-04-08 18:08:52:495 INFO Listening binary connections on 127.0.0.1:2424 (protocol v.32, socket=default) [OServerNetworkListener]
2016-04-08 18:08:52:511 INFO Listening http connections on 127.0.0.1:2480 (protocol v.10, socket=default) [OServerNetworkListener]
2016-04-08 18:08:52:573 INFO Installing dynamic plugin 'studio-2.1.zip'... [OServerPluginManager]
2016-04-08 18:08:52:838 INFO Installing GREMLIN language v.2.6.0 - graph.pool.max=50 [OGraphServerHandler]
2016-04-08 18:08:52:838 INFO [OVariableParser.resolveVariables] Error on resolving property: distributed [orientechnologies]
2016-04-08 18:08:52:854 INFO Installing Script interpreter. WARN: authenticated clients can execute any kind of code into the server by using the following allowed languages: [sql] [OServerSideScriptInterpreter]
2016-04-08 18:08:52:854 INFO OrientDB Server v2.1.5 (build 2.1.x#r${buildNumber}; 2015-10-29 16:54:25+0000) is active. [OServer]
2016-04-08 18:08:57:986 INFO /127.0.0.1:49243 - Connected [OChannelBinaryServer]
2016-04-08 18:08:58:002 INFO /127.0.0.1:49243 - Writing short (2 bytes): 32 [OChannelBinaryServer]
2016-04-08 18:08:58:002 INFO /127.0.0.1:49243 - Flush [OChannelBinaryServer]
2016-04-08 18:08:58:002 INFO /127.0.0.1:49243 - Reading byte (1 byte)... [OChannelBinaryServer]
I am using the OrientDB .NET binary (C#) driver on Windows Vista. This was working fine until recently; not sure what broke it.
Resetting TCP/IP using the NetShell (netsh) utility did not help.
Any help is highly appreciated.
The problem was the AVG anti-virus program blocking the socket. Adding an exception for localhost in AVG fixed the problem.

Unable to start JBoss from within Eclipse

I am unable to start JBoss server 5.1.0.GA from Eclipse Indigo.
Eclipse shows me a message box saying 'Server JBoss v5.0 at localhost was unable to start within 500 seconds. If the server requires more time, try increasing the timeout in the server editor.', but in the console window I can see that JBoss has actually started.
Here is part of the log I can see in the Eclipse console window:
SecureDeploymentManager/remote - EJB3.x Default Remote Business Interface
SecureDeploymentManager/remote-org.jboss.deployers.spi.management.deploy.DeploymentManager - EJB3.x Remote Business Interface
15:14:20,212 INFO [SessionSpecContainer] Starting jboss.j2ee:jar=profileservice-secured.jar,name=SecureManagementView,service=EJB3
15:14:20,212 INFO [EJBContainer] STARTED EJB: org.jboss.profileservice.ejb.SecureManagementView ejbName: SecureManagementView
15:14:20,222 INFO [JndiSessionRegistrarBase] Binding the following Entries in Global JNDI:
SecureManagementView/remote - EJB3.x Default Remote Business Interface
SecureManagementView/remote-org.jboss.deployers.spi.management.ManagementView - EJB3.x Remote Business Interface
15:14:20,252 INFO [SessionSpecContainer] Starting jboss.j2ee:jar=profileservice-secured.jar,name=SecureProfileService,service=EJB3
15:14:20,262 INFO [EJBContainer] STARTED EJB: org.jboss.profileservice.ejb.SecureProfileServiceBean ejbName: SecureProfileService
15:14:20,272 INFO [JndiSessionRegistrarBase] Binding the following Entries in Global JNDI:
SecureProfileService/remote - EJB3.x Default Remote Business Interface
SecureProfileService/remote-org.jboss.profileservice.spi.ProfileService - EJB3.x Remote Business Interface
15:14:20,362 INFO [TomcatDeployment] deploy, ctxPath=/admin-console
15:14:20,412 INFO [config] Initializing Mojarra (1.2_12-b01-FCS) for context '/admin-console'
15:14:23,486 INFO [TomcatDeployment] deploy, ctxPath=/BannedListSearch
15:14:27,532 INFO [TomcatDeployment] deploy, ctxPath=/IWorkWebApp
15:14:27,813 INFO [TomcatDeployment] deploy, ctxPath=/
15:14:29,155 INFO [TomcatDeployment] deploy, ctxPath=/TestWebProject
15:14:30,036 INFO [TomcatDeployment] deploy, ctxPath=/displaytag-examples-1.2
15:14:30,136 INFO [TomcatDeployment] deploy, ctxPath=/jmx-console
15:14:30,276 INFO [TomcatDeployment] deploy, ctxPath=/HelloWebService
15:14:30,407 ERROR [EngineConfigurationFactoryServlet] Unable to find config file. Creating new servlet engine config file: /WEB-INF/server-config.wsdd
15:14:30,687 INFO [Http11Protocol] Starting Coyote HTTP/1.1 on http-127.0.0.1-8081
15:14:30,707 INFO [AjpProtocol] Starting Coyote AJP/1.3 on ajp-127.0.0.1-8009
15:14:30,707 INFO [ServerImpl] JBoss (Microcontainer) [5.1.0.GA (build: SVNTag=JBoss_5_1_0_GA date=200905221053)] Started in 48s:110ms
I have increased the server start timeout to 500 seconds, but I still get the same error. I have not changed anything else.
I am able to start JBoss from the command prompt successfully, but the same server does not start from Eclipse.
Please help me start the JBoss server.
Sounds to me like the HTTP port you have configured in JBoss is different from the port in the Eclipse configuration for JBoss.
Eclipse polls the configured port to determine that JBoss has actually started. If the ports differ, Eclipse thinks JBoss never started, although according to the console log it actually has. Make the ports match and it will probably work.
Updated: According to your log, JBoss is using port 8081 for HTTP:
Starting Coyote HTTP/1.1 on http-127.0.0.1-8081
Now you have to tell Eclipse to listen to that port so that it can figure out whether JBoss has started (default is 8080 and therefore Eclipse will never be aware of it!). Go to your servers view, double click on your JBoss server, and the configuration screen will come up:
You have to edit the HTTP port (in the 'Port' box) and set it to 8081 so that it matches your server's.
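Alternatively, you can change JBoss back to the port Eclipse expects. In a stock JBoss 5.1 profile the HTTP connector is typically defined in server/<profile>/deploy/jbossweb.sar/server.xml (location quoted from memory, so verify it against your installation); the relevant entry looks roughly like this:

<!-- illustrative sketch: either set this back to 8080, or point Eclipse's 'Port' field at whatever value is used here -->
<Connector protocol="HTTP/1.1" port="8081" address="${jboss.bind.address}"
           connectionTimeout="20000" redirectPort="8443" />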