How to limit the Kafka server memory? - Kubernetes

I want to set the maximum and minimum memory values for the Kafka server [port 9092].
Let's say the max value is 2 GB; then memory usage should not exceed 2 GB, but currently it does.
I have this link: https://kafka.apache.org/documentation/#java
Config from the Apache site:
-Xmx6g -Xms6g -XX:MetaspaceSize=96m -XX:+UseG1GC
-XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:G1HeapRegionSize=16M
-XX:MinMetaspaceFreeRatio=50 -XX:MaxMetaspaceFreeRatio=80 -XX:+ExplicitGCInvokesConcurrent
But I don't know how to configure it.
My goal is to set a max memory limit value, so that the memory value shown in the Kubernetes Dashboard does not exceed that limit.
Note:
The max memory limit should not be set at the Kubernetes pod level; it should be set while starting ZooKeeper, the Kafka server and Kafka Connect, for example:
-Xmx1G -Xms256M

Depending on the image you are using for Kafka, you can supply these settings via the environment variable KAFKA_OPTS.
The documentation you are referring to passes these options to the java invocation. Kafka, ZooKeeper etc. are JARs and are therefore started via java.
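For example, with a Kubernetes Deployment you could pass the flags through the container environment. This is only a sketch, assuming an image such as bitnami/kafka that forwards KAFKA_HEAP_OPTS (heap size) and KAFKA_OPTS (extra JVM flags) to the Kafka start scripts; check which variables your image actually honours:

  env:
    - name: KAFKA_HEAP_OPTS        # heap limits read by kafka-server-start.sh
      value: "-Xmx2G -Xms2G"
    - name: KAFKA_OPTS             # additional JVM flags appended by kafka-run-class.sh
      value: "-XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:MetaspaceSize=96m"

Keep in mind that -Xmx only caps the Java heap; the broker also uses off-heap memory and the OS page cache, so the figure shown in the Kubernetes Dashboard can still sit somewhat above the heap limit.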

Related

Updating yarn.scheduler memory

Total noob here: I installed Cloudera Manager on a single node on AWS EC2. I followed the install wizard, but when I try running
spark-shell or pyspark I get the following error message:
ERROR spark.SparkContext: Error initializing SparkContext.
java.lang.IllegalArgumentException: Required executor memory (1024+384
MB) is above the max threshold (1024 MB) of this cluster! Please check
the values of 'yarn.scheduler.maximum-allocation-mb' and/or
'yarn.nodemanager.resource.memory-mb'.
Can somebody explain to me what is going on or where to begin reading? Total noob here, so any help or direction is greatly appreciated.
The required executor memory is above the maximum threshold. You need to increase the YARN memory.
The values of yarn.scheduler.maximum-allocation-mb and yarn.nodemanager.resource.memory-mb both live in the config file yarn-site.xml which is managed by Cloudera Manager in your case.
yarn.nodemanager.resource.memory-mb is the amount of physical memory, in MB, that can be allocated for containers.
yarn.scheduler.maximum-allocation-mb is the maximum memory, in MB, that can be allocated per YARN container. It is the maximum allocation for every container request at the ResourceManager; memory requests higher than this won't take effect and will get capped to this value.
You can read more on the definitions and default values here: https://hadoop.apache.org/docs/r2.4.1/hadoop-yarn/hadoop-yarn-common/yarn-default.xml
In the Cloudera Manager user interface, go to the YARN service > Configuration, search for these two properties and increase their values.
Restart YARN for the changes to take effect.
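As a worked example for the error above: the request is the executor memory plus its overhead, 1024 + 384 = 1408 MB, so yarn.scheduler.maximum-allocation-mb must be at least 1408 MB (and yarn.nodemanager.resource.memory-mb at least as large). The numbers below are only an illustration for a small single-node setup; size them to your instance:

yarn.nodemanager.resource.memory-mb = 4096    (total MB the NodeManager may hand out to containers)
yarn.scheduler.maximum-allocation-mb = 2048   (largest single container request the ResourceManager will grant)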

Yarn cluster doesn't equally manage vcores, queue resource limit exceeded

I have 3 YARN NodeManagers working in a YARN cluster, and an issue connected with vcore availability per YARN node.
For example, I have:
on the first node: 15 vcores available,
on the second node: no vcores available,
on the third node: 37 vcores available.
And now a job tries to start and fails with the error:
"Queue's AM resource limit exceeded"
Is this connected with no vcores being available on the second node, or can I somehow increase the resource limit on the queue?
I also want to mention that I have the following setting:
yarn.scheduler.capacity.maximum-am-resource-percent=1.0
That means that your drivers have exceeded the max memory configured in Max Application Master Resources. You can either increase the max memory for the AM or decrease the driver memory in your jobs.
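If these are Spark jobs, the practical fix, given that maximum-am-resource-percent is already 1.0, is usually to shrink the driver/AM containers. A minimal sketch with spark-submit (the memory figures and the job name my_job.py are just placeholders):

# smaller driver (which is the YARN ApplicationMaster in cluster mode) and executors
spark-submit --master yarn --deploy-mode cluster \
    --driver-memory 512m --executor-memory 1g my_job.py

In client mode the AM size is governed by spark.yarn.am.memory rather than --driver-memory.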

Kafka failed to map 1073741824 bytes for committing reserved memory

I am installing Kafka on an AWS t2 instance (one that has 1 GB of memory).
(1) I download kafka_2.11-0.9.0.0
(2) I run ZooKeeper: bin/zookeeper-server-start.sh config/zookeeper.properties
(3) I try running bin/kafka-server-start.sh config/server.properties
and I get
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000c0000000, 1073741824, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 1073741824 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /home/machine_name/kafka_2.11-0.9.0.0/hs_err_pid9161.log
I checked all properties in the server.properties config file and in the documentation for properties that could cause something like this, but couldn't find anything.
Does anyone know why Kafka is trying to allocate 1 GB when starting?
Kafka defaults to the following JVM memory parameters, which mean that Kafka will allocate 1 GB at startup and use a maximum of 1 GB of memory:
-Xmx1G -Xms1G
Just set the KAFKA_HEAP_OPTS env variable to whatever you want to use instead. You may also just edit ./bin/kafka-server-start.sh and replace the values.
Also, if you have less memory available, try reducing the heap size, e.g.
-Xmx400M -Xms400M for both ZooKeeper and Kafka.
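For example, on a 1 GB instance you might start both processes with smaller heaps; the sizes here are only illustrative:

# ZooKeeper (zookeeper-server-start.sh reads KAFKA_HEAP_OPTS as well)
KAFKA_HEAP_OPTS="-Xmx128M -Xms128M" bin/zookeeper-server-start.sh -daemon config/zookeeper.properties
# Kafka broker
KAFKA_HEAP_OPTS="-Xmx400M -Xms400M" bin/kafka-server-start.sh -daemon config/server.properties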
This issue might also relate to the maximum number of memory map areas allocated. It throws exactly the same error.
To remedy this you can run the following:
sysctl -w vm.max_map_count=200000
You want to set this in relation to your file descriptor limits. In summary, for every log segment on a broker, you require two map areas - one for the index and one for the time index.
For reference see the Kafka OS section: https://kafka.apache.org/documentation/#os
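sysctl -w only lasts until the next reboot; to make the change permanent you would typically also persist it (run as root, and treat 200000 as just the example value from above):

# apply immediately
sysctl -w vm.max_map_count=200000
# persist across reboots
echo "vm.max_map_count=200000" >> /etc/sysctl.conf
sysctl -p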
I was getting java.io.IOException: Map failed while starting the Kafka server. Analyzing the previous logs, it looks like it failed because of insufficient memory in the Java heap while loading logs. I changed the maximum memory size, but that did not fix it. Finally, after more research on Google, I found out that I had downloaded the 32-bit version of Java, so downloading the 64-bit version of Java solved my problem.
Pass the KAFKA_HEAP_OPTS environment variable with the memory value you want to run with.
Make sure to pass it in quotes - KAFKA_HEAP_OPTS="-Xmx512M -Xms512M"
docker run -it --rm --network app-tier -e KAFKA_HEAP_OPTS="-Xmx512M -Xms512M" -e KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper-server:2181 bitnami/kafka:latest kafka-topics.sh --list --bootstrap-server kafka-server:9092

How to check orientdb disk cache size at runtime?

I'm trying to tune an OrientDB 2.1.5 embedded application, reducing disk I/O to a minimum.
I've read the documentation (http://orientdb.com/docs/last/Performance-Tuning.html) and used the storage.diskCache.bufferSize flag to increase the disk cache size.
Looking at htop and top (I'm on Linux), I haven't noticed any increase in the Java process's memory usage, though. Even the two MBeans exposed by Orient (O2QCacheMXBean and OWOWCacheMXBean) don't show any evidence of the increase. So how can I be sure of the current disk cache size?
This is part of my java command line:
java -server -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.port=9999 -Djava.rmi.server.hostname=192.168.20.154 -Dstorage.useWAL=false -Dstorage.wal.syncOnPageFlush=false -Dstorage.diskCache.bufferSize=16384 -Dtx.useLog=false -Xms4g -Xmx4g -XX:+PerfDisableSharedMem -XX:+PrintGCDetails -XX:+PrintTenuringDistribution -XX:+PrintGCTimeStamps -Xloggc:/home/nexse/local/gc.log -jar my.jar
Thanks a lot.

Cannot start JBoss 5 with memory setting -Xmx768M

We have a PC with Windows with 2048 MB of RAM.
We try to use the following memory settings for JBoss:
-Xms256M -Xmx768M -XX:MaxPermSize=256M
But it cannot start:
Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.
JBoss starts only if we change -Xmx768M to -Xmx512M.
What can be the problem?
Update:
Now we use the following settings:
-Xms512M -Xmx768M -XX:MaxPermSize=156M
http://javahowto.blogspot.in/2006/06/6-common-errors-in-setting-java-heap.html
The error seems to be saying that the virtual memory size of the machine is smaller than the maximum heap size we are defining via "-Xms1303m -Xmx1303m". I changed it to "-Xms256m -Xmx512m" and it started working on my local Windows box.
Interesting. What happens when you set max memory to 513M?
If that fails, it's possibly a problem I haven't seen in quite a while. An ancient COBOL compiler I used refused to work on PCs with 640K of RAM because they used a signed-number check to decide if there was enough memory.
And in that world, 640K actually had the high bit set, hence was a negative number, so the check always failed.
I find it hard to believe that this would be the case in today's world, but it may be worth looking at.
If it doesn't fail at 513M, then it may just be that you're trying to allocate too much memory. It's not necessarily physical memory that matters; address space may be the issue, but you should have at least 2 GB of that as well on 32-bit Windows.
With your settings shown, you use 1G just for permgen and heap. Try adjusting these until it works and post the figures you have.
There are two possible causes:
the JVM couldn't find a 768 MiB contiguous region in the address space, or
the total free space in your RAM and paging file is less than 1 GiB.
(The JVM checks these at startup, using -Xmx and -XX:MaxPermSize, because of the GC implementation.)
Since you could start with -Xmx768m -XX:MaxPermSize=156m, the latter is doubtful. If it does turn out to be the cause, though, the problem might be resolved by freeing RAM (e.g., stopping unused services) or extending the paging file.
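To check whether the second cause applies, you could look at how much physical memory and paging-file space is actually free on the box, for example (output labels assume an English-locale Windows):

REM shows "Available Physical Memory" and the "Virtual Memory" (paging file) figures
systeminfo | findstr /C:"Available Physical Memory" /C:"Virtual Memory"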
Maybe you can reboot your PC and try again. You can't allocate more memory than your total physical memory.
-Xms256M -Xmx768M -XX:MaxPermSize=256M should only be trying to grab about 512M at most on initialization, plus about 100M for the JVM process itself. Do you have that much free memory on the machine? We always run JBoss on machines with 4G because the DB takes up quite a bit as well.
Here's a trick you can use to find the max amount you can set. You can simply run
java -version -Xms256M -Xmx768M -XX:MaxPermSize=256M
And then increase/decrease values until you find the max the JVM will let you set.
BTW, on a 4G 32-bit Windows box, we usually set -Xms768M -Xmx1300M -XX:MaxPermSize=256M.