I want to create a server plugin in Neo4j. For that I created a plugin with the package name dummy.test.neo4j.NodeExploration.exploring and registered it in META-INF/services, but when I add a reference to this package in the neo4j.conf file as
dbms.unmanaged_extension_classes=dummy.test.neo4j.NodeExploration=/dummy/exploring
I get the following error:
Okt 16 16:59:58 gaurav-GB-BSi3-6100 neo4j[23592]: Caused by: javax.servlet.ServletException: org.neo4j.server.web.NeoServletContainer-70a4d58d#3429429d==org.neo4j.server.web.NeoServletContainer,-1,false
Okt 16 16:59:58 gaurav-GB-BSi3-6100 neo4j[23592]: at org.eclipse.jetty.servlet.ServletHolder.initServlet(ServletHolder.java:633)
Okt 16 16:59:58 gaurav-GB-BSi3-6100 neo4j[23592]: at org.eclipse.jetty.servlet.ServletHolder.initialize(ServletHolder.java:395)
Okt 16 16:59:58 gaurav-GB-BSi3-6100 neo4j[23592]: at org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:871)
Okt 16 16:59:58 gaurav-GB-BSi3-6100 neo4j[23592]: at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:298)
Okt 16 16:59:58 gaurav-GB-BSi3-6100 neo4j[23592]: at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:741)
Okt 16 16:59:58 gaurav-GB-BSi3-6100 neo4j[23592]: at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
Okt 16 16:59:58 gaurav-GB-BSi3-6100 neo4j[23592]: at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:132)
Okt 16 16:59:58 gaurav-GB-BSi3-6100 neo4j[23592]: at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:114)
Okt 16 16:59:58 gaurav-GB-BSi3-6100 neo4j[23592]: at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)
I'm not sure whether I'm adding the correct reference in the neo4j.conf file.
The above exception is caused by using both the server plugin and the unmanaged extension concepts at the same time. So I just removed the line
dbms.unmanaged_extension_classes=dummy.test.neo4j.NodeExploration=/dummy/exploring
from neo4j.conf and simply sent a GET request to http://localhost:7474/db/data/ext/exploring/graphdb/exploring, where exploring is both my class name and my method name.
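For reference, here is a minimal sketch of what such a server plugin class might look like with the legacy Neo4j 3.x ServerPlugin API (the class body is an assumption, since the original class isn't shown; only the package and the exploring name come from the question). A class like this is registered via META-INF/services/org.neo4j.server.plugins.ServerPlugin, while dbms.unmanaged_extension_classes is meant for JAX-RS unmanaged extensions instead.

package dummy.test.neo4j.NodeExploration;

import java.util.ArrayList;
import java.util.List;

import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.Transaction;
import org.neo4j.server.plugins.Description;
import org.neo4j.server.plugins.Name;
import org.neo4j.server.plugins.PluginTarget;
import org.neo4j.server.plugins.ServerPlugin;
import org.neo4j.server.plugins.Source;

// Hypothetical plugin class, exposed at /db/data/ext/exploring/graphdb/exploring
public class exploring extends ServerPlugin {

    @Name("exploring")
    @Description("Returns every node in the graph")
    @PluginTarget(GraphDatabaseService.class)
    public Iterable<Node> exploring(@Source GraphDatabaseService graphDb) {
        List<Node> nodes = new ArrayList<>();
        // Reading the graph requires an open transaction
        try (Transaction tx = graphDb.beginTx()) {
            for (Node node : graphDb.getAllNodes()) {
                nodes.add(node);
            }
            tx.success();
        }
        return nodes;
    }
}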
MemoryError: mitmproxy has crashed
Jul 16 13:02:40 cubedev-PowerEdge-R330 bash[21256]: MemoryError
Jul 16 13:02:40 cubedev-PowerEdge-R330 bash[21256]: mitmproxy has crashed!
Jul 16 13:02:40 cubedev-PowerEdge-R330 bash[21256]: Please lodge a bug report at: https://github.com/mitmproxy/mitmproxy
Jul 16 13:02:40 cubedev-PowerEdge-R330 bash[21256]: 192.168.50.117:60549: Traceback (most recent call last):
Jul 16 13:02:40 cubedev-PowerEdge-R330 bash[21256]: File "/usr/local/lib/python3.6/site-packages/mitmproxy/proxy/server.py", line 121, in handle
Jul 16 13:02:40 cubedev-PowerEdge-R330 bash[21256]: root_layer()
Jul 16 13:02:40 cubedev-PowerEdge-R330 bash[21256]: File "/usr/local/lib/python3.6/site-packages/mitmproxy/proxy/modes/transparent_proxy.py", line 19, in __call__
Jul 16 13:02:40 cubedev-PowerEdge-R330 bash[21256]: layer()
Jul 16 13:02:40 cubedev-PowerEdge-R330 bash[21256]: File "/usr/local/lib/python3.6/site-packages/mitmproxy/proxy/protocol/tls.py", line 285, in __call__
Jul 16 13:02:40 cubedev-PowerEdge-R330 bash[21256]: layer()
Jul 16 13:02:40 cubedev-PowerEdge-R330 bash[21256]: File "/usr/local/lib/python3.6/site-packages/mitmproxy/proxy/protocol/http1.py", line 83, in __call__
Jul 16 13:02:40 cubedev-PowerEdge-R330 bash[21256]: layer()
Jul 16 13:02:40 cubedev-PowerEdge-R330 bash[21256]: File "/usr/local/lib/python3.6/site-packages/mitmproxy/proxy/protocol/http.py", line 190, in __call__
Jul 16 13:02:40 cubedev-PowerEdge-R330 bash[21256]: if not self._process_flow(flow):
Jul 16 13:02:40 cubedev-PowerEdge-R330 bash[21256]: File "/usr/local/lib/python3.6/site-packages/mitmproxy/proxy/protocol/http.py", line 397, in _process_flow
Jul 16 13:02:40 cubedev-PowerEdge-R330 bash[21256]: self.read_response_body(f.request, f.response)
Jul 16 13:02:40 cubedev-PowerEdge-R330 bash[21256]: MemoryError
Jul 16 13:02:40 cubedev-PowerEdge-R330 bash[21256]: Exception in response:[Errno 12] Cannot allocate memory
Jul 16 13:02:40 cubedev-PowerEdge-R330 bash[21256]: message repeated 37 times: [ Exception in response:[Errno 12] Cannot allocate memory]
Steps to reproduce the behavior:
The issue is observed when 2 clients are connected to mitmdump. The crash occurs within 10 minutes.
System Information
mitmdump --version
Mitmproxy: 5.1.1
Python: 3.6.0
OpenSSL: OpenSSL 1.1.1g 21 Apr 2020
Platform: Linux-4.15.0-45-generic-x86_64-with-debian-stretch-sid
mitmproxy processes all requests/responses in memory, so if you do a large download, mitmproxy requires at least the same amount of RAM.
You can configure mitmproxy to "stream" (directly pass through) large requests and response bodies using the option stream_large_bodies:
mitmproxy --set stream_large_bodies=10m
This streams all bodies that are larger than 10MB (AFAIR a streamed body will not be processed by any filter and is also not captured).
Additionally, you should save the collected requests/responses to a file using the -w option.
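For example, both suggestions combined in one invocation (the file name is just an illustration):
mitmdump --set stream_large_bodies=10m -w capture.flows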
I'm getting this weird error message from Keycloak.
I'm running two Keycloak instances on the same host; one is fine and the second one gives me this:
Caused by: java.lang.IndexOutOfBoundsException: fromIndex < 0: -1
Aug 14 17:23:58 keycloak.srv2 sh[2512]: [Server:server-one] at java.util.BitSet.nextClearBit(BitSet.java:744)
Aug 14 17:23:58 keycloak.srv2 sh[2512]: [Server:server-one] at org.keycloak.models.sessions.infinispan.initializer.InitializerState.getNextUnfinishedSegmentFromIndex(InitializerState.java:102)
Aug 14 17:23:58 keycloak.srv2 sh[2512]: [Server:server-one] at org.keycloak.models.sessions.infinispan.initializer.InitializerState.updateLowestUnfinishedSegment(InitializerState.java:98)
Aug 14 17:23:58 keycloak.srv2 sh[2512]: [Server:server-one] at org.keycloak.models.sessions.infinispan.initializer.InitializerState.markSegmentFinished(InitializerState.java:94)
Aug 14 17:23:58 keycloak.srv2 sh[2512]: [Server:server-one] at org.keycloak.models.sessions.infinispan.initializer.InfinispanCacheInitializer.startLoadingImpl(InfinispanCacheInitializer.java:187)
Aug 14 17:23:58 keycloak.srv2 sh[2512]: [Server:server-one] at org.keycloak.models.sessions.infinispan.initializer.InfinispanCacheInitializer.startLoading(InfinispanCacheInitializer.java:108)
Aug 14 17:23:58 keycloak.srv2 sh[2512]: [Server:server-one] at org.keycloak.models.sessions.infinispan.initializer.CacheInitializer.loadSessions(CacheInitializer.java:41)
Aug 14 17:23:58 keycloak.srv2 sh[2512]: [Server:server-one] at org.keycloak.models.sessions.infinispan.InfinispanUserSessionProviderFactory$7.run(InfinispanUserSessionProviderFactory.java:317)
Aug 14 17:23:58 keycloak.srv2 sh[2512]: [Server:server-one] at org.keycloak.models.utils.KeycloakModelUtils.runJobInTransaction(KeycloakModelUtils.java:227)
Aug 14 17:23:58 keycloak.srv2 sh[2512]: [Server:server-one] at org.keycloak.models.sessions.infinispan.InfinispanUserSessionProviderFactory.loadSessionsFromRemoteCache(InfinispanUserSessionProviderFactory.java:306)
Aug 14 17:23:58 keycloak.srv2 sh[2512]: [Server:server-one] at org.keycloak.models.sessions.infinispan.InfinispanUserSessionProviderFactory.loadSessionsFromRemoteCaches(InfinispanUserSessionProviderFactory.java:298)
Aug 14 17:23:58 keycloak.srv2 sh[2512]: [Server:server-one] at org.keycloak.models.sessions.infinispan.InfinispanUserSessionProviderFactory.access$500(InfinispanUserSessionProviderFactory.java:68)
Aug 14 17:23:58 keycloak.srv2 sh[2512]: [Server:server-one] at org.keycloak.models.sessions.infinispan.InfinispanUserSessionProviderFactory$1.lambda$onEvent$0(InfinispanUserSessionProviderFactory.java:127)
Aug 14 17:23:58 keycloak.srv2 sh[2512]: [Server:server-one] at org.keycloak.models.utils.KeycloakModelUtils.runJobInTransaction(KeycloakModelUtils.java:227)
Aug 14 17:23:58 keycloak.srv2 sh[2512]: [Server:server-one] at org.keycloak.models.utils.KeycloakModelUtils.runJobInTransactionWithTimeout(KeycloakModelUtils.java:267)
Aug 14 17:23:58 keycloak.srv2 sh[2512]: [Server:server-one] at org.keycloak.models.sessions.infinispan.InfinispanUserSessionProviderFactory$1.onEvent(InfinispanUserSessionProviderFactory.java:121)
Aug 14 17:23:58 keycloak.srv2 sh[2512]: [Server:server-one] at org.keycloak.services.DefaultKeycloakSessionFactory.publish(DefaultKeycloakSessionFactory.java:69)
Aug 14 17:23:58 keycloak.srv2 sh[2512]: [Server:server-one] at org.keycloak.services.resources.KeycloakApplication.<init>(KeycloakApplication.java:170)
Aug 14 17:23:58 keycloak.srv2 sh[2512]: [Server:server-one] at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
Aug 14 17:23:58 keycloak.srv2 sh[2512]: [Server:server-one] at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
Aug 14 17:23:58 keycloak.srv2 sh[2512]: [Server:server-one] at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
Aug 14 17:23:58 keycloak.srv2 sh[2512]: [Server:server-one] at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
Aug 14 17:23:58 keycloak.srv2 sh[2512]: [Server:server-one] at org.jboss.resteasy.core.ConstructorInjectorImpl.construct(ConstructorInjectorImpl.java:152)
Aug 14 17:23:58 keycloak.srv2 sh[2512]: [Server:server-one] ... 31 more
Aug 14 17:23:58 keycloak.srv2 sh[2512]: [Server:server-one]
I've already lost a bunch of hours trying things out, but I can't figure out what's wrong.
Any piece of advice would be more than welcome.
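For context, the immediate JDK-level trigger visible in the trace is that java.util.BitSet.nextClearBit rejects negative indexes, and Keycloak's InitializerState apparently passes it -1. A minimal sketch reproducing only that JDK behavior (not the Keycloak session-initializer logic itself):

import java.util.BitSet;

public class NextClearBitDemo {
    public static void main(String[] args) {
        BitSet segments = new BitSet(16);
        // Throws java.lang.IndexOutOfBoundsException: fromIndex < 0: -1,
        // the same message as in the Keycloak stack trace above.
        segments.nextClearBit(-1);
    }
}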
When I saw this error message:
ERROR Shutdown broker because all log dirs in /tmp/kafka-logs have failed (kafka.log.LogManager)
My first thought was "well, the /tmp directory probably got cleared out by the OS (Linux), so I should update the Kafka config to point to something permanent." However, the directory is present and has not been wiped:
ll /tmp/kafka-logs/
total 20
drwxrwxr-x 2 ec2-user ec2-user 38 Apr 7 16:56 __consumer_offsets-0
drwxrwxr-x 2 ec2-user ec2-user 38 Apr 7 16:56 __consumer_offsets-7
drwxrwxr-x 2 ec2-user ec2-user 38 Apr 7 16:56 __consumer_offsets-42
..
drwxrwxr-x 2 ec2-user ec2-user 38 Apr 7 16:56 __consumer_offsets-32
drwxrwxr-x 2 ec2-user ec2-user 141 Apr 12 02:49 flights_raw-0
drwxrwxr-x 2 ec2-user ec2-user 178 Apr 12 08:25 air2008-0
drwxrwxr-x 2 ec2-user ec2-user 141 Apr 12 13:38 testtopic-0
-rw-rw-r-- 1 ec2-user ec2-user 1244 Apr 17 22:29 recovery-point-offset-checkpoint
-rw-rw-r-- 1 ec2-user ec2-user 4 Apr 17 22:29 log-start-offset-checkpoint
-rw-rw-r-- 1 ec2-user ec2-user 1248 Apr 17 22:30 replication-offset-checkpoint
So what does this actually mean, why is it happening, and what should be done to correct or avoid the error?
In a related question, the best answer suggests deleting the log directories for both Kafka (/tmp/kafka-logs) and ZooKeeper (/tmp/zookeeper).
It's probably because of a Kafka issue that was resolved in August 2018.
Hope this will help.
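If /tmp cleanup does turn out to be the cause in your setup, pointing Kafka at a persistent directory avoids it entirely; a sketch for config/server.properties (the path is only an example):

# server.properties: use a directory that survives reboots and tmp cleaners
log.dirs=/var/lib/kafka-logs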
I was trying to install Liquibase on my CentOS 6.9 machine. It needs Java (I have openjdk version "1.8.0_121"), but I got this error:
[root#sampleliquibase-3.6.2-bin..z]# ll
total 11412
drwxrwxrwx. 2 db2inst1 db2iadm1 4096 Dec 17 13:10 lib
-rwxrwxrwx. 1 db2inst1 db2iadm1 11358 Jul 3 23:27 LICENSE.txt
-rwxrwxrwx. 1 db2inst1 db2iadm1 1251 Jul 3 23:27 liquibase
-rwxrwxrwx. 1 db2inst1 db2iadm1 9406606 Dec 17 09:42 liquibase-3.6.2-bin.zip
-rwxrwxrwx. 1 db2inst1 db2iadm1 884 Jul 3 23:27 liquibase.bat
-rwxrwxrwx. 1 db2inst1 db2iadm1 2167086 Jul 3 23:30 liquibase.jar
-rwxrwxrwx. 1 db2inst1 db2iadm1 7174 Jul 3 23:27 liquibase.spec
-rwxrwxrwx. 1 db2inst1 db2iadm1 3046 Jul 3 23:27 README.txt
drwxrwxrwx. 6 db2inst1 db2iadm1 4096 Dec 17 13:10 sdk
-rwxrwxrwx. 1 root root 41203 Mar 16 2017 slf4j-api-1.7.25.jar
[root#sampleliquibase-3.6.2-bin..z]# java -jar liquibase.jar
Error: A JNI error has occurred, please check your installation and try again
Exception in thread "main" java.lang.NoClassDefFoundError: ch/qos/logback/core/filter/Filter
at java.lang.Class.getDeclaredMethods0(Native Method)
at java.lang.Class.privateGetDeclaredMethods(Class.java:2701)
at java.lang.Class.privateGetMethodRecursive(Class.java:3048)
at java.lang.Class.getMethod0(Class.java:3018)
at java.lang.Class.getMethod(Class.java:1784)
at sun.launcher.LauncherHelper.validateMainClass(LauncherHelper.java:544)
at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:526)
Caused by: java.lang.ClassNotFoundException: ch.qos.logback.core.filter.Filter
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 7 more
[root#sampleliquibase-3.6.2-bin..z]#
I think some kind of class is missing, but I don't know which one it is or how to add it to the Java classpath. What should I do?
ch/qos/logback/core/filter/Filter, a.k.a. ch.qos.logback.core.filter.Filter, is missing. It is part of Logback; you can add it to the application using your build script, or manually install it into your lib folder from https://mvnrepository.com/artifact/ch.qos.logback/logback-core
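For example, with Maven the build-script route would look roughly like this (the version is only an illustration; pick whatever matches your setup):

<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-core</artifactId>
    <version>1.2.3</version>
</dependency>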
Maybe you are launching the wrong file. Instead of running the jar directly, you might have to run the "launcher" script instead.
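For example, from the extracted directory shown in the listing above (a sketch):

./liquibase --version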
I have an error in the catalina log:
AVERTISSEMENT (WARNING): Failed to scan [file:/usr/share/java/postgresql-jdbc.jar] from classloader hierarchy
java.util.zip.ZipException: invalid END header (bad central directory offset)
at java.util.zip.ZipFile.open(Native Method)
at java.util.zip.ZipFile.<init>(ZipFile.java:215)
at java.util.zip.ZipFile.<init>(ZipFile.java:145)
at java.util.jar.JarFile.<init>(JarFile.java:154)
at java.util.jar.JarFile.<init>(JarFile.java:91)
at sun.net.www.protocol.jar.URLJarFile.<init>(URLJarFile.java:93)
at sun.net.www.protocol.jar.URLJarFile.getJarFile(URLJarFile.java:69)
at sun.net.www.protocol.jar.JarFileFactory.get(JarFileFactory.java:99)
at sun.net.www.protocol.jar.JarURLConnection.connect(JarURLConnection.java:122)
at sun.net.www.protocol.jar.JarURLConnection.getJarFile(JarURLConnection.java:89)
at org.apache.tomcat.util.scan.FileUrlJar.<init>(FileUrlJar.java:41)
at org.apache.tomcat.util.scan.JarFactory.newInstance(JarFactory.java:34)
at org.apache.catalina.startup.ContextConfig$FragmentJarScannerCallback.scan(ContextConfig.java:2615)
at org.apache.tomcat.util.scan.StandardJarScanner.process(StandardJarScanner.java:258)
at org.apache.tomcat.util.scan.StandardJarScanner.scan(StandardJarScanner.java:220)
at org.apache.catalina.startup.ContextConfig.processJarsForWebFragments(ContextConfig.java:1871)
at org.apache.catalina.startup.ContextConfig.webConfig(ContextConfig.java:1259)
at org.apache.catalina.startup.ContextConfig.configureStart(ContextConfig.java:876)
at org.apache.catalina.startup.ContextConfig.lifecycleEvent(ContextConfig.java:374)
at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:117)
at org.apache.catalina.util.LifecycleBase.fireLifecycleEvent(LifecycleBase.java:90)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5355)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:901)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:877)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:632)
at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1247)
at org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:1898)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
I copied postgresql-9.2-1004.jdbc41.jar into /usr/share/java after seeing this error in ginco.log:
Caused by: org.apache.commons.dbcp.SQLNestedException: Cannot load JDBC driver class 'org.postgresql.Driver'
Caused by: java.lang.UnsupportedClassVersionError: org/postgresql/Driver : Unsupported major.minor version 52.0
I created a link in /usr/share/apache-tomcat-6.0.47/:
lib -> /usr/share/java/tomcat
In /usr/share/java/tomcat/ I have:
-rw-r--r--. 1 tomcat tomcat 579874 26 oct. 12:17 postgresql-9.2-1004.jdbc41.jar
lrwxrwxrwx. 1 tomcat tomcat 35 18 oct. 10:17 postgresql-jdbc.jar -> /usr/share/java/postgresql-jdbc.jar
and in /usr/share/java I have:
-rw-r--r--. 1 tomcat tomcat 579866 25 oct. 11:28 postgresql-9.2-1004.jdbc41.jar
lrwxrwxrwx. 1 tomcat tomcat 19 17 oct. 18:12 postgresql-jdbc2ee.jar -> postgresql-jdbc.jar
lrwxrwxrwx. 1 tomcat tomcat 19 17 oct. 18:12 postgresql-jdbc2.jar -> postgresql-jdbc.jar
lrwxrwxrwx. 1 tomcat tomcat 19 17 oct. 18:12 postgresql-jdbc3.jar -> postgresql-jdbc.jar
-rw-r--r--. 1 tomcat tomcat 579866 26 oct. 12:34 postgresql-jdbc.jar
-rw-r--r--. 1 tomcat tomcat 515140 25 oct. 11:30 postgresql-jdbc.jar_sav
I copied postgresql-9.2-1004.jdbc41.jar to postgresql-jdbc.jar.
My Java version:
java -version
java version "1.7.0_111"
OpenJDK Runtime Environment (rhel-2.6.7.2.el7_2-x86_64 u111-b01)
OpenJDK 64-Bit Server VM (build 24.111-b01, mixed mode)
I use Maven and configured pom.xml with:
<postgresql.jdbc.version>9.2-1004.jdbc41</postgresql.jdbc.version>
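For completeness, that property would typically be referenced from a dependency entry along these lines (the groupId/artifactId here are assumptions, since the actual dependency block isn't shown):

<dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
    <version>${postgresql.jdbc.version}</version>
</dependency>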