I am not able to start the Kafka server because of the error below.
java.io.IOException: Map failed
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:940)
at kafka.log.AbstractIndex.<init>(AbstractIndex.scala:61)
at kafka.log.TimeIndex.<init>(TimeIndex.scala:55)
at kafka.log.LogSegment.<init>(LogSegment.scala:73)
at kafka.log.Log.loadSegments(Log.scala:267)
at kafka.log.Log.<init>(Log.scala:116)
at kafka.log.LogManager$$anonfun$createLog$1.apply(LogManager.scala:365)
at kafka.log.LogManager$$anonfun$createLog$1.apply(LogManager.scala:361)
at scala.Option.getOrElse(Option.scala:121)
at kafka.log.LogManager.createLog(LogManager.scala:361)
at kafka.cluster.Partition$$anonfun$getOrCreateReplica$1.apply(Partition.scala:109)
at kafka.cluster.Partition$$anonfun$getOrCreateReplica$1.apply(Partition.scala:106)
at kafka.utils.Pool.getAndMaybePut(Pool.scala:70)
at kafka.cluster.Partition.getOrCreateReplica(Partition.scala:105)
at kafka.cluster.Partition$$anonfun$4$$anonfun$apply$3.apply(Partition.scala:166)
at kafka.cluster.Partition$$anonfun$4$$anonfun$apply$3.apply(Partition.scala:166)
at scala.collection.mutable.HashSet.foreach(HashSet.scala:78)
at kafka.cluster.Partition$$anonfun$4.apply(Partition.scala:166)
at kafka.cluster.Partition$$anonfun$4.apply(Partition.scala:160)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:213)
at kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:221)
at kafka.cluster.Partition.makeLeader(Partition.scala:160)
at kafka.server.ReplicaManager$$anonfun$makeLeaders$4.apply(ReplicaManager.scala:754)
at kafka.server.ReplicaManager$$anonfun$makeLeaders$4.apply(ReplicaManager.scala:753)
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
at scala.collection.mutable.HashMap.foreach(HashMap.scala:99)
at kafka.server.ReplicaManager.makeLeaders(ReplicaManager.scala:753)
at kafka.server.ReplicaManager.becomeLeaderOrFollower(ReplicaManager.scala:698)
at kafka.server.KafkaApis.handleLeaderAndIsrRequest(KafkaApis.scala:148)
at kafka.server.KafkaApis.handle(KafkaApis.scala:84)
at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:62)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.OutOfMemoryError: Map failed
at sun.nio.ch.FileChannelImpl.map0(Native Method)
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:937)
... 34 more
I tried the options below, but none of them helped. Please help:
Upgraded the OS from 32-bit to 64-bit.
Increased the Java heap size to 1 GB.
Uninstalled and reinstalled Apache Kafka.
If this doesn't resolve the problem, you could try increasing vm.max_map_count. The default value is 65536 (check it with sysctl vm.max_map_count).
With cat /proc/[kafka-pid]/maps | wc -l you can see how many maps are in use.
Increase the setting with:
sysctl -w vm.max_map_count=262144
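To make the change survive a reboot, something like this should work (a sketch; on some distros the setting belongs in a file under /etc/sysctl.d/ instead):
echo "vm.max_map_count=262144" >> /etc/sysctl.conf
sysctl -p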
Upgrading the JVM to 64-bit resolved the issue.
This helped me: changing KAFKA_HEAP_OPTS="-Xmx256M -Xms256M" (originally 512M) in the kafka-server-start script. Thanks!
This helped me:
Change:
export KAFKA_HEAP_OPTS="-Xmx512M -Xms512M"
(originally 1G)
in the kafka-server-start script.
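For reference, the block to edit in bin/kafka-server-start.sh looks roughly like this (quoted from memory, so treat it as a sketch):
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
    export KAFKA_HEAP_OPTS="-Xmx512M -Xms512M"   # lowered from the default -Xmx1G -Xms1G
fi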
I had the same issue on Windows. Kafka takes up a fair amount of memory while running, so we need to increase the heap to avoid throttling the application's performance. This can be done graphically through the Java Control Panel.
Inside Runtime Parameters, you can change the amount of memory allocated to the JVM:
-Xmx512m assigns 512 MB,
-Xmx1024m assigns 1 GB,
-Xmx2048m assigns 2 GB,
-Xmx3072m assigns 3 GB of memory, and so on.
I have tried almost every solution I found on the internet, but my problem is still not solved.
I opened gradle.properties and added the line below, but it didn't help:
org.gradle.jvmargs=-Xmx1024m -XX:MaxPermSize=512m
I also deleted the .gradle directory and tried building again, but the problem still exists. Please help.
FAILURE: Build failed with an exception.
* What went wrong:
Unable to start the daemon process.
This problem might be caused by incorrect configuration of the daemon.
For example, an unrecognized jvm option is used.
Please refer to the User Manual chapter on the daemon at https://docs.gradle.org/6.7/userguide/gradle_daemon.html
Process command line: C:\Program Files\Java\jdk1.8.0_291\bin\java.exe -Xmx1024M -Dfile.encoding=windows-1252 -Duser.country=US -Duser.language=en -Duser.variant -cp C:\Users\likec\.gradle\wrapper\dists\gradle-6.7-all\cuy9mc7upwgwgeb72wkcrupxe\gradle-6.7\lib\gradle-launcher-6.7.jar org.gradle.launcher.daemon.bootstrap.GradleDaemon 6.7
Please read the following process output to find out more:
-----------------------
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 1048576 bytes for AllocateHeap
# An error report file with more information is saved as:
# C:\Users\likec\.gradle\daemon\6.7\hs_err_pid14492.log
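Given that the daemon's JVM failed in a plain malloc, the machine itself seems short on free memory, so a smaller heap in gradle.properties may be worth trying; a minimal sketch, with the value a guess for a low-memory machine:
org.gradle.jvmargs=-Xmx512m
# -XX:MaxPermSize is ignored on Java 8 and later, so it is dropped here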
I am trying to get JBoss WildFly 15.0.1 Final to start on a rather small Ubuntu 14.04 vServer. The server has only 2 GB of RAM.
I have tried to start WildFly many times without success. The JVM seems to require a lot more RAM than I ever expected.
Here's the console output:
root@t2g55:~# service wildfly start
* Starting WildFly Application Server wildfly
* WildFly Application Server failed to start within the timeout allowed.
root@t2g55:~# cat /var/log/wildfly/console.log
=========================================================================
JBoss Bootstrap Environment
JBOSS_HOME: /opt/wildfly
JAVA: /usr/bin/java
JAVA_OPTS: -server -Xms768m -Xmx1536m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true
=========================================================================
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000a0000000, 536870912, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 536870912 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /opt/wildfly-15.0.1.Final/hs_err_pid1379.log
1379
root@t2g55:~# free
total used free shared buffers cached
Mem: 2097152 258748 1838404 64 0 38644
-/+ buffers/cache: 220104 1877048
Swap: 2097152 0 2097152
root@t2g55:~# java -version
openjdk version "1.8.0_222"
OpenJDK Runtime Environment (build 1.8.0_222-8u222-b10-1~14.04-b10)
OpenJDK 64-Bit Server VM (build 25.222-b10, mixed mode)
root@t2g55:~#
As you can see, I specified JAVA_OPTS: -server -Xms768m -Xmx1536m ..., which I thought should be sufficient for a WildFly server to start. Please note that standalone.xml has a datasource defined for a MySQL DB.
Here's the start of the dump .log file:
root@t2g55:~# cat /opt/wildfly-15.0.1.Final/hs_err_pid1379.log
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 536870912 bytes for committing reserved memory.
# Possible reasons:
# The system is out of physical RAM or swap space
# The process is running with CompressedOops enabled, and the Java Heap may be blocking the growth of the native heap
# Possible solutions:
# Reduce memory load on the system
# Increase physical memory or swap space
# Check if swap backing store is full
# Decrease Java heap size (-Xmx/-Xms)
# Decrease number of Java threads
# Decrease Java thread stack sizes (-Xss)
# Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
#
# Out of Memory Error (os_linux.cpp:2757), pid=1379, tid=0x00007f62486c6700
#
# JRE version: (8.0_222-b10) (build )
# Java VM: OpenJDK 64-Bit Server VM (25.222-b10 mixed mode linux-amd64 compressed oops)
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
--------------- T H R E A D ---------------
.
.
.
QUESTION:
Can this be solved with this amount of memory, or do I simply have too little RAM?
What else could I try?
I don't really want my provider to keep increasing the memory only to find there's some other problem with Java, the JVM, or anything else...
Thanks
EDIT 1:
The vServer provider uses OpenVZ for its virtualization.
Info: they just bumped me up to 4 GB, and I once got JBoss up and running. After a reboot, WildFly again refuses to start: same thing, not enough memory (even though I switched between the Java 8 and Java 11 runtimes).
Command to start JBoss WildFly: sh /opt/wildfly/bin/standalone.sh &. The standalone.xml appears to be OK; I removed the ExampleDS (three entries commented out).
It was indeed a server virtualization issue with OpenVZ.
Quote from the provider (translated from German):
Hi,
the problem was with the user_beancounters, more precisely with privvmpages, which were set too low.
https://wiki.openvz.org/UBC_secondary_parameters#privvmpages
Best regards,
Mr X
I don't know exactly what he did in detail, but that resolved it.
I now run on a 2 GB machine without any problems; memory usage of mysqld + standalone.sh (WildFly + webapp) is around 800 MB.
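For anyone hitting the same thing: from inside an OpenVZ container you can inspect the beancounters yourself (a diagnostic sketch; requires root, and the column layout can vary by kernel version):
grep -E "failcnt|privvmpages" /proc/user_beancounters
A non-zero failcnt on the privvmpages row means allocations were denied at that limit, which is what the JVM reports as errno=12 / "Cannot allocate memory".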
EDIT: I have tried several things to get this to run to completion, and nothing seems to work:
Updated the neo4j-community.vmoptions:
-server
-Xms4096m
-Xmx4096m
-XX:NewSize=1024m
Also updated my Mac to handle more threads; roughly as sketched after the ulimit output below.
C02RH2U9G8WM:~ meuser$ ulimit -u
709
C02RH2U9G8WM:~ meuser$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 256
pipe size (512 bytes, -p) 1
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 709
virtual memory (kbytes, -v) unlimited
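The sketch I mean (illustrative values; requires sudo, and the launchctl syntax varies across macOS versions):
sudo launchctl limit maxproc 2048 4096
ulimit -u 2048   # applies to the current shell session only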
Finally, I ran a test to make sure it wasn't limited hard drive space causing the issues. Everything seemed to check out, and during execution I would check my thread usage in another window:
jstack -l 'pid' | grep tid | wc -l
and
ps -elfT | wc -l
Nothing seemed out of hand, so I am really confused about why I am getting the errors below when running Spark code in Scala that connects to Neo4j: it runs fine with a small set of answers but blows up when I let it rip on everything.
Note: in my code I do not call sc.stop() or anything similar, even though I think resources need to be released somehow while running a decently sized loop from within Spark/Scala.
I did check the logs; it turns out it might be memory-related:
2017-10-13 01:17:45.073+0000 ERROR [o.n.b.t.SocketTransportHandler] Fatal error occurred when handling a client connection: unable to create new native thread unable to create new native thread
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:714)
at org.neo4j.kernel.impl.util.Neo4jJobScheduler.schedule(Neo4jJobScheduler.java:94)
at org.neo4j.bolt.v1.runtime.concurrent.ThreadedWorkerFactory.newWorker(ThreadedWorkerFactory.java:68)
at org.neo4j.bolt.v1.runtime.MonitoredWorkerFactory.newWorker(MonitoredWorkerFactory.java:54)
at org.neo4j.bolt.BoltKernelExtension.lambda$newVersions$1(BoltKernelExtension.java:234)
at org.neo4j.bolt.transport.ProtocolChooser.handleVersionHandshakeChunk(ProtocolChooser.java:95)
at org.neo4j.bolt.transport.SocketTransportHandler.chooseProtocolVersion(SocketTransportHandler.java:109)
at org.neo4j.bolt.transport.SocketTransportHandler.channelRead(SocketTransportHandler.java:58)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:349)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:341)
at io.netty.handler.codec.ByteToMessageDecoder.handlerRemoved(ByteToMessageDecoder.java:219)
at io.netty.channel.DefaultChannelPipeline.callHandlerRemoved0(DefaultChannelPipeline.java:631)
at io.netty.channel.DefaultChannelPipeline.remove(DefaultChannelPipeline.java:468)
at io.netty.channel.DefaultChannelPipeline.remove(DefaultChannelPipeline.java:428)
at org.neo4j.bolt.transport.TransportSelectionHandler.switchToSocket(TransportSelectionHandler.java:126)
at org.neo4j.bolt.transport.TransportSelectionHandler.decode(TransportSelectionHandler.java:81)
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:411)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:248)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:349)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:341)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:363)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:349)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:129)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:642)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:565)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:479)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:441)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
at java.lang.Thread.run(Thread.java:745)
I am running these tests on a single MacBook Pro (not clustered or anything yet): an i7 with 16 GB of RAM.
I am running some Spark Scala code that uses the Neo4j connector. When I run it on small data sets it works fine; when I let it rip on all the data in a Neo4j database, it times out after running for a good while (several hours), usually leaving me to restart the Neo4j Community Edition server. Is there a setting somewhere where I can adjust this timeout, either in Spark/Scala or in the Neo4j configs?
org.neo4j.driver.v1.exceptions.ClientException: Failed to establish connection with server. Make sure that you have a server with bolt enabled on localhost:7687
at org.neo4j.driver.internal.connector.socket.SocketClient.negotiateProtocol(SocketClient.java:197)
at org.neo4j.driver.internal.connector.socket.SocketClient.start(SocketClient.java:76)
at org.neo4j.driver.internal.connector.socket.SocketConnection.<init>(SocketConnection.java:63)
at org.neo4j.driver.internal.connector.socket.SocketConnector.connect(SocketConnector.java:52)
at org.neo4j.driver.internal.pool.InternalConnectionPool.acquire(InternalConnectionPool.java:113)
at org.neo4j.driver.internal.InternalDriver.session(InternalDriver.java:53)
at org.neo4j.spark.Executor$.execute(Neo4j.scala:360)
at org.neo4j.spark.Neo4jRDD.compute(Neo4j.scala:408)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:108)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
I'm getting error messages in Eclipse (Mac):
Invalid maximum heap size: -Xmx5120m-Djavax.net.ssl.trustStore=/usr/local/keystore/JavaKeyStore.jks-Djavax.net.ssl.trustStorePassword=changeit-Dtls.key.store=/usr/local/keystore/JavaKeyStore.jks-Dtls.trusted.store=/usr/local/keystore/JavaKeyStore.jks-Dawsmock.directory=/ijmeang/elis/s3-mock-Dcatalina.base=/usr/local/workspace/workspace-eclipse/.metadata/.plugins/org.eclipse.wst.server.core/tmp0-Dcatalina.home=/Library/Tomcat-Dwtp.deploy=/usr/local/workspace/workspace-eclipse/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps-Djava.endorsed.dirs=/Library/Tomcat/endorsed
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
How can I fix this?
Here are the arguments in Eclipse:
-Dcatalina.base="/usr/local/workspace/workspace-eclipse/.metadata/.plugins/org.eclipse.wst.server.core/tmp0" -Dcatalina.home="/usr/local/Tomcat-7.0.6.9" -Dwtp.deploy="/usr/local/workspace/workspace-eclipse/.metadata/.plugins/org.eclipse.wst.server.core/tmp0/wtpwebapps" -Djava.endorsed.dirs="/usr/local/Tomcat-7.0.6.9/endorsed" -Xss768m -Xmx5120m -XX:MaxPermSize=5120m -DmasterPropertiesLinux
I would suggest trying to decrease your value of -Xmx5120m, as described here:
https://wiki.eclipse.org/FAQ_How_do_I_increase_the_heap_size_available_to_Eclipse%3F
Some JVMs put restrictions on the total amount of memory available on the heap.
It depends on which Java you are using. If you are using 32-bit Java, you may not be able to use more than 2 GB.
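Also, judging by the first error line, the VM arguments appear to be run together with no whitespace between them; each option must be separated by a space or a newline. For illustration, a trimmed set with guessed sizes (note that -Xss768m is an enormous per-thread stack size, and -XX:MaxPermSize is ignored on Java 8 and later):
-Xmx2048m
-Xss1m
-Djavax.net.ssl.trustStore=/usr/local/keystore/JavaKeyStore.jks
-Djavax.net.ssl.trustStorePassword=changeit
-Dcatalina.base=/usr/local/workspace/workspace-eclipse/.metadata/.plugins/org.eclipse.wst.server.core/tmp0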
We are running on 32-bit Windows, and since upgrading from 1.4.1 to 2.2.2 we are seeing the following memory message in stdout (numbers not exact):
INFO: Database 'BLAH' uses 770MB/912MB of DISKCACHE memory, while Heap is not completely used (usedHeap=123MB maxHeap=512MB). To improve performance set maxHeap to 124MB and DISKCACHE to 1296MB
With 32-bit, we can only set a maximum of Xmx + storage.diskCache.bufferSize of roughly 1.4 GB without getting OOM or performance issues. Any combination of different sizes for these two configurable variables results in a variant of the above message.
Is there a way to suppress the above profiler/memory checker messages?
You can disable the profiler with:
java ... -Dprofiler.enabled=false ...
Set that configuration in your server.sh or in the last section of the config/orientdb-server-config.xml file.
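For example, in the <properties> section near the end of config/orientdb-server-config.xml (a sketch; adapt it to your install):
<entry name="profiler.enabled" value="false"/>
Or append -Dprofiler.enabled=false to the Java invocation in server.sh.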