I'm trying to tune an OrientDB 2.1.5 embedded application, reducing disk I/O to a minimum.
I've read the documentation (http://orientdb.com/docs/last/Performance-Tuning.html) and used the storage.diskCache.bufferSize flag to increase the disk cache size.
Looking at htop and top (I'm on Linux), I haven't noticed any increase in the Java process's memory usage, though. Even the two MBeans exposed by Orient (O2QCacheMXBean and OWOWCacheMXBean) show no evidence of the increase. So how can I be sure of the current disk cache size?
This is part of my java command line:
java -server \
  -Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false \
  -Dcom.sun.management.jmxremote.port=9999 \
  -Djava.rmi.server.hostname=192.168.20.154 \
  -Dstorage.useWAL=false \
  -Dstorage.wal.syncOnPageFlush=false \
  -Dstorage.diskCache.bufferSize=16384 \
  -Dtx.useLog=false \
  -Xms4g -Xmx4g \
  -XX:+PerfDisableSharedMem \
  -XX:+PrintGCDetails -XX:+PrintTenuringDistribution -XX:+PrintGCTimeStamps \
  -Xloggc:/home/nexse/local/gc.log \
  -jar my.jar
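One way to see where the memory actually goes is JDK Native Memory Tracking; this is a minimal sketch, assuming you can restart with the NMT flag added to the line above (jcmd and the flag are standard JDK tools, but the exact NMT category names vary by JDK version):

# Sketch: add Native Memory Tracking to the startup line above
# (NMT adds a small runtime overhead):
java -XX:NativeMemoryTracking=summary ... -jar my.jar

# Then, from another shell, inspect off-heap allocations:
jcmd $(pgrep -f my.jar) VM.native_memory summary
# OrientDB's disk cache lives off-heap (direct memory), so it won't appear
# as Java heap; it may also fill lazily, so resident memory grows only as
# pages are actually read and cached.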
Thanks a lot.
I want to set the maximum and minimum memory values for the Kafka server (port 9092).
Say the max value is 2 GB; then memory usage should not exceed 2 GB, but currently it does.
Here is the link: https://kafka.apache.org/documentation/#java
Config from the Apache site:
-Xmx6g -Xms6g -XX:MetaspaceSize=96m -XX:+UseG1GC
-XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:G1HeapRegionSize=16M
-XX:MinMetaspaceFreeRatio=50 -XX:MaxMetaspaceFreeRatio=80 -XX:+ExplicitGCInvokesConcurrent
But I don't know how to configure it.
My goal is to set a max memory limit such that the memory value shown in the Kubernetes Dashboard does not exceed it.
Note:
The max memory limit should not be set at the Kubernetes pod level; it should be set while starting ZooKeeper, the Kafka server, and Kafka Connect.
Proof of the current settings: -Xmx1G -Xms256M
Depending on the image you are using for Kafka, you can supply these settings via the environment variable KAFKA_OPTS.
The documentation you are referring to supplies these options to the java call. Kafka, ZooKeeper, etc. are JARs and are therefore started via java.
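As a sketch, assuming the bitnami/kafka image that appears later in this thread (variable names differ between images, and heap flags specifically are usually honored via KAFKA_HEAP_OPTS rather than KAFKA_OPTS):

docker run -e KAFKA_HEAP_OPTS="-Xms2g -Xmx2g" bitnami/kafka:latest
# Note: -Xmx caps only the Java heap; total process memory also includes
# metaspace, thread stacks, and native buffers, so the value shown in the
# Kubernetes Dashboard can legitimately sit somewhat above -Xmx.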
I am installing Kafka on an AWS t2 instance (one that has 1 GB of memory).
(1) I download kafka_2.11-0.9.0.0
(2) I run ZooKeeper: bin/zookeeper-server-start.sh config/zookeeper.properties
(3) I try running bin/kafka-server-start.sh config/server.properties
and I get:
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000c0000000, 1073741824, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 1073741824 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /home/machine_name/kafka_2.11-0.9.0.0/hs_err_pid9161.log
I checked all properties in the server.properties config file, and looked through the documentation for a property that could cause something like this, but couldn't find anything.
Does anyone know why Kafka is trying to allocate 1 GB when starting?
Kafka defaults to the following JVM memory parameters, which means Kafka will allocate 1 GB at startup and use a maximum of 1 GB of memory:
-Xmx1G -Xms1G
Just set the KAFKA_HEAP_OPTS environment variable to whatever you want to use instead. You may also just edit ./bin/kafka-server-start.sh and replace the values.
Also, if you have less memory available, try reducing the heap size:
-Xmx400M -Xms400M for both ZooKeeper and Kafka
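A minimal sketch with the stock Apache scripts (both start scripts read KAFKA_HEAP_OPTS; the 400M values are the ones suggested above):

export KAFKA_HEAP_OPTS="-Xmx400M -Xms400M"
bin/zookeeper-server-start.sh config/zookeeper.properties &
bin/kafka-server-start.sh config/server.properties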
This issue might also relate to the maximum number of memory map areas allocated. It throws exactly the same error.
To remedy this you can run the following:
sysctl -w vm.max_map_count=200000
You want to set this in relation to your file descriptor limits. In summary: for every log segment on a broker, you require two map areas, one for the index and one for the time index.
For reference see the Kafka OS section: https://kafka.apache.org/documentation/#os
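To make that setting survive a reboot, a sketch (the file location is the conventional one; distributions vary):

echo "vm.max_map_count=200000" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
# Rough sizing: about two map areas per log segment, so e.g. 50,000
# segments need ~100,000 areas; 200,000 leaves headroom.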
I was getting java.io.IOException: Map failed while starting the Kafka server. Analyzing the earlier logs, it appeared to fail because of insufficient memory in the Java heap while loading logs. I changed the maximum memory size, but that did not fix it. Finally, after more research, I learned that I had downloaded the 32-bit version of Java; downloading the 64-bit version solved my problem.
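A quick way to check which JVM you actually have:

java -version
# A 64-bit JVM prints a banner containing "64-Bit", e.g.:
#   Java HotSpot(TM) 64-Bit Server VM (build ..., mixed mode)
# If that is missing, you are on a 32-bit JVM with a much smaller
# addressable-memory limit.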
Pass the KAFKA_HEAP_OPTS argument with the memory value you want to run with.
Make sure to pass it in quotes: KAFKA_HEAP_OPTS="-Xmx512M -Xms512M"
docker run -it --rm --network app-tier \
  -e KAFKA_HEAP_OPTS="-Xmx512M -Xms512M" \
  -e KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper-server:2181 \
  bitnami/kafka:latest \
  kafka-topics.sh --list --bootstrap-server kafka-server:9092
I am running Redis 2.8.19 on Windows Server 2008.
I get an error saying that I have insufficient disk space for my Redis heap (the memory-mapped file used instead of fork()).
I can only get Redis running if I have 'maxheap 1024M' in the config, even though I have ~50 GB of free space on the drive I have set 'heapdir' to.
If I try to run it with a higher maxheap, or with no maxheap at all, I get this error (PowerShell):
PS C:\Users\admasgve> cd D:\redis-2.8.19
PS D:\redis-2.8.19> .\redis-server.exe
[7476] 25 Feb 09:32:38.419 #
The Windows version of Redis allocates a large memory mapped file for sharing
the heap with the forked process used in persistence operations. This file
will be created in the current working directory or the directory specified by
the 'heapdir' directive in the .conf file. Windows is reporting that there is
insufficient disk space available for this file (Windows error 0x70).
You may fix this problem by either reducing the size of the Redis heap with
the --maxheap flag, or by moving the heap file to a local drive with sufficient
space.
Please see the documentation included with the binary distributions for more
details on the --maxheap and --heapdir flags.
Redis can not continue. Exiting.
Screenshot: http://i.stack.imgur.com/Xae0f.jpg
Free space on D: 49.4 GB
Free space on C: 2.71 GB
Total RAM: 16 GB
Free RAM: ~9 GB
redis.windows.conf:
# Generated by CONFIG REWRITE
loglevel verbose
logfile "stdout"
save 900 1
save 300 10
save 60 10000
dir "D:\\redis-2.8.19"
maxmemory 1024M
# maxheap 2048M
heapdir "D:\\redis-2.8.19"
Everything besides the last three lines was generated by Redis with the CONFIG REWRITE command. I have tried various things with maxmemory, maxheap, and heapdir.
From the Redis documentation:
maxmemory / maxheap - the maxheap flag controls the maximum size of this memory mapped file, as well as the total usable space for the Redis heap. Running Redis without either maxheap or maxmemory will result in a memory mapped file being created that is equal to the size of physical memory. The Redis heap must be larger than the value specified by maxmemory.
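So I would expect a config along these lines to be consistent with that rule (maxheap above maxmemory, heapdir on the drive with free space; values are illustrative):

maxmemory 1024M
maxheap 2048M
heapdir "D:\\redis-2.8.19"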
Has anybody encountered this problem before? What am I doing wrong?
Redis doesn't use the conf file in its home directory by default. You have to pass the file in on the command line:
.\redis-server.exe redis.windows.conf
This is what is in my conf file:
maxheap 2048M
heapdir D:\\redisheap
These settings resolved my issue.
This is how to use the maxheap flag, which is more convenient than using a config file:
redis-server --maxheap 2gb
To back up Michael's response, I've had the same problem.
I had ~40 GB of free space, and the paging file set to 4-8 GB.
Redis did not want to start until I set the paging file to the amount recommended by Windows itself, which was 12 GB.
Really odd behaviour.
After passing the same parameters as above (.\redis-server.exe redis.windows.conf, with maxheap 2048M and heapdir D:\\redisheap in the conf file), the service started for me. Thanks for the solution.
maxheap 2048M
heapdir "D:\\<location where your server is>"
This should solve the problem. Please ping me if you have the same question.
In Eclipse, JBoss 7.1 VM arguments.
My RAM is 8 GB.
The VM arguments contain statements like these:
-server -Xms64m
-Xmx512m
-XX:MaxPermSize=1024m
How do I calculate these numbers?
Caused by: java.lang.OutOfMemoryError: Java heap space
You are getting that error because your server used up all of its available memory (in your case, 512 MB). You can increase the -Xmx param, which sets the maximum amount of memory your server can use.
An OutOfMemoryError can happen because of insufficient memory assignment, or because of memory leaks (objects that Java's garbage collector can't free, despite no longer being needed, because they are still referenced).
There is no magic rule to calculate those params; they depend on what you are deploying to JBoss, how many concurrent users you have, etc.
You can try increasing the -Xmx param and checking the memory usage with jvisualvm to see how it behaves.
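As a starting point only (illustrative numbers to iterate on while watching jvisualvm, not a recommendation):

-server -Xms512m -Xmx1024m -XX:MaxPermSize=256m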
We have a Windows PC with 2048 MB of RAM.
We tried to use the following memory settings for JBoss:
-Xms256M -Xmx768M -XX:MaxPermSize=256M
But it cannot start:
Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.
JBoss starts only if we change -Xmx768M to -Xmx512M.
What could be the problem?
Update:
Now we use the following settings:
-Xms512M -Xmx768M -XX:MaxPermSize=156M
http://javahowto.blogspot.in/2006/06/6-common-errors-in-setting-java-heap.html
The error seems to be saying that the virtual memory size of the machine is smaller than the maximum heap size being defined via "-Xms1303m -Xmx1303m". I changed it to "-Xms256m -Xmx512m" and it started working on my local Windows box.
Interesting. What happens when you set max memory to 513M?
If that fails, it's possibly a problem I haven't seen in quite a while. An ancient COBOL compiler I used refused to work on PCs with 640K of RAM because it used a signed-number check to decide if there was enough memory.
And in that world, 640K actually had the high bit set, hence was a negative number, so the check always failed.
I find it hard to believe that this would be the case in today's world, but it may be worth looking at.
If it doesn't fail at 513M, then it may just be that you're trying to allocate too much memory. It's not necessarily physical memory that matters; address space may be the issue, but you should have at least 2 GB of that as well on 32-bit Windows.
With the settings you've shown, you use 1 GB just for permgen and heap. Try adjusting these until it works, and post the figures you have.
There are two possible causes:
the JVM couldn't find a 768 MiB contiguous region in the address space, or
the total size of the free area in your RAM and paging file is less than 1 GiB.
(The JVM checks these using -Xmx and -XX:MaxPermSize on startup because of the GC implementation.)
Since you could start with -Xmx768m -XX:MaxPermSize=156m, the latter is doubtful.
If so, the problem might be resolved by freeing RAM (e.g., stopping unused services) or extending the paging file.
Maybe you can reboot your PC and try again. You can't allocate more memory than your total physical memory.
-Xms256M -Xmx768M -XX:MaxPermSize=256M should only be trying to grab 512M at most on initialization, plus about 100M for the JVM process itself. Do you have that much free memory on the machine? We always run JBoss on machines with 4 GB because the DB takes up quite a bit as well.
Here's a trick you can use to find the max amount you can set. You can simply run:
java -Xms256M -Xmx768M -XX:MaxPermSize=256M -version
Then increase/decrease the values until you find the max the JVM will let you set, as in the sketch below.
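A small sketch automating that search (the candidate values are illustrative):

for mx in 768 896 1024 1152 1300; do
  java -Xms256M -Xmx${mx}M -XX:MaxPermSize=256M -version >/dev/null 2>&1 \
    && echo "OK at ${mx}M" || echo "failed at ${mx}M"
done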
BTW, on a 4 GB 32-bit Windows box, we usually set -Xms768M -Xmx1300M -XX:MaxPermSize=256M.