I use embedded-kafka and I have some problems with its stability, reflected in errors like
Connection to node 0 (localhost/127.0.0.1:6001) could not be established. Broker may not be available. and in timeouts.
I think that increasing the memory might help, but I can't find any property for that. I tried searching the code for the words memory or RAM, but no luck. Is it possible to increase the RAM available to embedded-kafka?
There may also be other causes for this error; check the port and the server properties.
I think you should increase the heap size of the Java process that starts the embedded Kafka broker.
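Since embedded-kafka runs the broker (and ZooKeeper) inside the same JVM as the tests, raising that JVM's heap is typically done through the build tool. A minimal sketch for sbt, assuming sbt 1.x and forked tests (the values are illustrative, not taken from the question):

// build.sbt -- illustrative heap settings for the forked test JVM that hosts embedded-kafka
Test / fork := true              // run tests in a separate JVM so javaOptions take effect
Test / javaOptions ++= Seq(
  "-Xms512m",                    // initial heap for the test JVM
  "-Xmx2g"                       // maximum heap; raise this if the embedded broker runs out of memory
)

As a side note, 6001 is embedded-kafka's default Kafka port, so the "could not be established" error can also simply mean the in-process broker died or never started; the broker log lines in the test output usually show why.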
Related
I have a Kafka Streams app that currently takes 3 topics and aggregates them into a KTable. This app resides inside a Scala microservice on Marathon which has been allocated 512 MB of memory to work with. After implementing this, I've noticed that the Docker container running the microservice eventually runs out of memory, and I was trying to debug the cause.
My current theory (while reading the sizing guide https://docs.confluent.io/current/streams/sizing.html) is that over time, the growing number of records stored in the KTable, and by extension the underlying RocksDB, is causing the OOM for the microservice. Is there any way to find out the memory used by the underlying default RocksDB implementation?
In case anyone runs into a similar issue, setting the environment variable MALLOC_ARENA_MAX=2 seems to have fixed it for me. For a more detailed explanation as to why, please refer to the sections "Why memory allocators make a difference?" and "Tuning glibc" here: https://github.com/prestodb/presto/issues/8993.
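Beyond the allocator setting, the per-store RocksDB memory in Kafka Streams can also be bounded with a custom RocksDBConfigSetter registered under rocksdb.config.setter. A rough Scala sketch with illustrative sizes (the class name and values are mine, not from the original answer, and some RocksDB option setters vary between versions):

import java.util
import org.apache.kafka.streams.StreamsConfig
import org.apache.kafka.streams.state.RocksDBConfigSetter
import org.rocksdb.{BlockBasedTableConfig, Options}

// Illustrative: caps the memtables and block cache of each state store.
class SmallRocksDBConfig extends RocksDBConfigSetter {
  override def setConfig(storeName: String,
                         options: Options,
                         configs: util.Map[String, AnyRef]): Unit = {
    val tableConfig = new BlockBasedTableConfig()
    tableConfig.setBlockCacheSize(8L * 1024 * 1024)   // 8 MB block cache per store (example value)
    options.setTableFormatConfig(tableConfig)
    options.setWriteBufferSize(4L * 1024 * 1024)      // 4 MB per memtable (example value)
    options.setMaxWriteBufferNumber(2)                // keep at most two memtables per store
  }
}

// Registered in the Streams configuration, for example:
// props.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG, classOf[SmallRocksDBConfig])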
I have a question based on my experience trying to implement memory requests/limits correctly in an OpenShift OKD cluster. I started by setting no request, then watching what the cluster metrics reported for memory use, and then setting something close to that as the request. I ended up with high-memory-pressure nodes, thrashing, and OOM kills. I have found I need to set the requests to something closer to the VIRT size in ‘top’ (which includes the program binary size) to keep performance up. Does this make sense? I'm confused by the asymmetry between the request (and apparent need) and the reported use in metrics.
You always need to leave a bit of memory headroom for overhead and memory spills. If for some reason the container exceeds that memory, whether from your application, from your binary, or from some garbage collection system, it will get killed. For example, this is common in Java apps, where you specify a heap size and then need extra overhead for the garbage collector and other things such as the following (see the sketch after this list):
Native JRE
Perm / metaspace
JIT bytecode
JNI
NIO
Threads
This blog explains some of them.
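To make the headroom concrete, here is a small illustrative sketch (the object name and the 512 MB figure are assumptions, the latter mirroring a typical pod limit) that compares the JVM's configured heap ceiling with a planned container limit; note that the reported non-heap usage excludes thread stacks, NIO buffers and JNI allocations, which is exactly why extra headroom is needed:

import java.lang.management.ManagementFactory

// Illustrative check: how much of an assumed container limit remains once the heap is full.
object MemoryHeadroomCheck extends App {
  val heapMax        = Runtime.getRuntime.maxMemory()                                  // effective -Xmx in bytes
  val nonHeapUsed    = ManagementFactory.getMemoryMXBean.getNonHeapMemoryUsage.getUsed // metaspace, code cache, ...
  val containerLimit = 512L * 1024 * 1024                                              // assumed pod memory limit

  println(f"heap max:        ${heapMax / 1e6}%.0f MB")
  println(f"non-heap used:   ${nonHeapUsed / 1e6}%.0f MB (thread stacks, NIO and JNI not included)")
  println(f"container limit: ${containerLimit / 1e6}%.0f MB")
  println(f"headroom left if the heap fills up: ${(containerLimit - heapMax) / 1e6}%.0f MB")
}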
I'm testing running MongoDB on the Kubernetes platform, where I can limit the resources used by the running container.
Say I set the memory limit to 256 MB. The problem is that, for example, while making a backup, memory consumption increases to the limit and the container gets restarted by Kubernetes.
So the question is: is there a way to limit MongoDB's memory consumption in my case so that it would not cause the crash by exceeding the memory limit set by the platform?
I could of course increase the limit, but I'm interested in a principled solution and would like to understand this process better, because I don't really know how memory is consumed by MongoDB and the container OS. Is it possible to tune MongoDB/the underlying Linux OS to work inside the existing limits?
The limits that you have set are good enough for a MongoDB pod; these are the limits used by the community as well.
The only way I think you can get around this for backups is to increase the memory limit, but even then it might fail, because elsewhere on Stack Overflow people have reported OOM kills on VMs with gigabytes of memory. MongoDB basically tries to eat any and all memory that is made available to it.
There are also other ways to back up MongoDB: https://dba.stackexchange.com/questions/76130/how-to-backup-large-mongodb-database
I am not sure how these translate to the k8s world.
I tried updating server.xml and deleted the dumps and temporary cache files from C:\Users\username\AppData\Local\javasharedresources,
and I am still not able to start the server.
This is the error message I get:
JVMDUMP010I Java dump written to C:\WAS8\profiles\AppSrv02\bin\javacore.20150210.094417.6468.0009.txt
JVMDUMP013I Processed dump event "systhrow", detail "java/lang/OutOfMemoryError".
There could be multiple causes for the OutOfMemoryError exceptions. It could be that there is a memory leak in one of the applications that is loaded on startup, or the maximum heap size is not set high enough to support all of the components loaded on startup.
It is best to go through a troubleshooting exercise. I suggest you download the heap analyzer tool from here and analyze the javacore file to see where the potential leak, if any, could be.
If you can't find a memory leak, try increasing the JVM maximum heap size. Check that your host system has enough RAM to support the chosen maximum JVM heap size.
Earlier I hadn't updated server.xml with the correct arguments; once I updated it with genericJvmArguments="-Xms1024M -Xmx2048M" and InitialHeapSize="1024" maximumHeapSize="2048", I was able to start the server.
All,
I am running CentOS 6.0 with PostgreSQL 8.4 and can't seem to figure out how to prevent so much disk swapping from occurring. I have 12 GB of RAM and 4 processors, and I am doing some simple updates (1 table at a time). I thought for a minute that the inserts happening in parallel from a script I wrote were causing the large memory usage, but when I saw a simple update causing it too, I basically threw in the towel and decided to ask for help.
I pasted the conf file here: http://pastebin.com/e0jdBu0J
You can see that I set the buffers relatively low and the connection count high. The DB service will not start if I set the shared buffers any higher than 64 MB. Anyone have an idea what may be causing this for me?
Thanks,
Adam
If you're going into swap, increasing shared_buffers will make the problem worse; you'll be taking RAM away from the part that's running out and swapping, and instead dedicating that memory to database caching. It's worth fixing SHMMAX etc. just on general principle and for later tuning work, but that's not going to help with this problem.
Guessing at the identity of your memory-gobbling source is a crapshoot. It's far better to look at the data from "top -c" and ps to find which processes are using a lot of it. It's possible for a really bad query to consume far more memory than it should. If you see memory use spike for a PostgreSQL process running something, check the process ID against the information in pg_stat_activity to see what it's doing.
There are a couple of things that can cause this sort of issue that often surprise people. If you are doing a large number of row updates in a single transaction, and there are foreign key checks or triggers involved, you can run out of memory. The queue of things to check in each of those cases is kept in RAM and can be surprisingly big.
There are two problems with your PostgreSQL settings that might be related. Databases don't actually work very well if you have many more active connections than cores in the server; best performance is normally with 2 to 3 active clients per core. And all sorts of things go wrong once you've got more than a few hundred connections. There is some connections^2 behavior that gets ugly there performance-wise, and there are some memory issues too. If you really need 1250 connections, you should be using a connection pooler such as pgBouncer or pgpool-II.
And effective_io_concurrency = 1000 is way too high for any hardware on the planet. Useful values for it are a small multiple of how many disks you have in the server. I have no idea what happens as far as memory usage goes when you set it that high, but it hasn't been tested very well in that range. Normal settings are more like 1 to 25. The parameters outlined at Tuning Your PostgreSQL Server are much more important than this one; the concurrency value only impacts one particular type of table scan.
CentOS 6 seems to ship with a very conservative SHMMAX by default.
Set your shared buffers to the value recommended by PostgreSQL tuning resources;
see the PostgreSQL documentation on kernel resources for an explanation and how to set it.
To experiment you can (as root) use sysctl -w kernel.shmmax=n
where n is the value that the PostgreSQL startup error message says it was trying to allocate. Once you have identified the value you wish to use permanently, set it in /etc/sysctl.conf.