Redis server can't run more than 1024M maxheap - powershell

I am running Redis 2.8.19 on Windows Server 2008.
I get an error saying that I have insufficient disk space for my Redis heap (the memory-mapped file the Windows port uses instead of fork()).
I can only get Redis running if I set 'maxheap 1024M' in the config, even though I have ~50 GB of free space on the drive I have set 'heapdir' to.
If I try to run it with a higher maxheap, or with no maxheap at all, I get this error (PowerShell):
PS C:\Users\admasgve> cd D:\redis-2.8.19
PS D:\redis-2.8.19> .\redis-server.exe
[7476] 25 Feb 09:32:38.419 #
The Windows version of Redis allocates a large memory mapped file for sharing
the heap with the forked process used in persistence operations. This file
will be created in the current working directory or the directory specified by
the 'heapdir' directive in the .conf file. Windows is reporting that there is
insufficient disk space available for this file (Windows error 0x70).
You may fix this problem by either reducing the size of the Redis heap with
the --maxheap flag, or by moving the heap file to a local drive with sufficient
space.
Please see the documentation included with the binary distributions for more
details on the --maxheap and --heapdir flags.
Redis can not continue. Exiting.
Screenshot: http://i.stack.imgur.com/Xae0f.jpg
Free space on D: 49.4 GB
Free space on C: 2.71 GB
Total RAM: 16 GB
Free RAM: ~9 GB
redis.windows.conf:
# Generated by CONFIG REWRITE
loglevel verbose
logfile "stdout"
save 900 1
save 300 10
save 60 10000
dir "D:\\redis-2.8.19"
maxmemory 1024M
# maxheap 2048M
heapdir "D:\\redis-2.8.19"
Everything except the last 3 lines was generated by Redis with the 'CONFIG REWRITE' command. I have tried various combinations of maxmemory, maxheap and heapdir.
From Redis documentation:
maxmemory / maxheap - the maxheap flag controls the maximum size of this memory mapped file, as well as the total usable space for the Redis heap. Running Redis without either maxheap or maxmemory will result in a memory mapped file being created that is equal to the size of physical memory. The Redis heap must be larger than the value specified by maxmemory.
Has anybody encountered this problem before? What am I doing wrong?

Redis doesn't use the conf file in its home directory by default. You have to pass the file in on the command line:
.\redis-server.exe redis.windows.conf
This is what is in my conf file:
maxheap 2048M
heapdir D:\\redisheap
These settings resolved my issue.

This is how to use the maxheap flag, which is more convenient than using a config file:
redis-server --maxheap 2gb
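If you also need the heap file on a different drive, the error message above mentions a matching --heapdir flag; a possible combined invocation (the D:\redisheap path is just an example):
.\redis-server.exe --maxheap 2gb --heapdir D:\redisheap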

To back up Michael's response, I've had the same problem.
I had ~40 GB of free space, and the paging file set to 4 GB-8 GB.
Redis did not want to start until I set the paging file to the amount recommended by Windows itself, which was 12 GB.
Really odd behaviour.
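For reference, one way to pin the paging file to a fixed size from an elevated command prompt (cmd) is via wmic; this is only a sketch, and the 12288 MB size and C:\pagefile.sys path are assumptions to adapt:
wmic computersystem where name="%computername%" set AutomaticManagedPagefile=False
wmic pagefileset where name="C:\\pagefile.sys" set InitialSize=12288,MaximumSize=12288
A reboot is usually required before the new size takes effect.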

.\redis-server.exe redis.windows.conf
This is what is in my conf file:
maxheap 2048M
heapdir D:\\redisheap
After passing the above parameters to redis-server.exe via redis.windows.conf, the service started for me. Thanks for the solution.

maxheap 2048M
heapdir D:\"location where your server is
This Should Solve problem Please Ping me if you have Same Question

Related

kubernetes pod high cache memory usage

I have a Java process which is running on k8s.
I set Xms and Xmx for the process.
java -Xms512M -Xmx1G -XX:SurvivorRatio=8 -XX:NewRatio=6 -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+CMSParallelRemarkEnabled -jar automation.jar
My expectation is that the pod should consume 1.5 or 2 GB of memory, but it consumes much more, nearly 3.5 GB. That's too much.
If I run my process on a virtual machine, it consumes much less memory.
When I check the memory stats for the pod, I realise that the pod allocates too much cache memory.
RSS of nearly 1.5 GB is OK, because Xmx is 1 GB. But why is the cache nearly 3 GB?
Is there any way to tune or control this usage?
/app $ cat /sys/fs/cgroup/memory/memory.stat
cache 2881228800
rss 1069154304
rss_huge 446693376
mapped_file 1060864
swap 831488
pgpgin 1821674
pgpgout 966068
pgfault 467261
pgmajfault 47
inactive_anon 532504576
active_anon 536588288
inactive_file 426450944
active_file 2454777856
unevictable 0
hierarchical_memory_limit 16657932288
hierarchical_memsw_limit 9223372036854771712
total_cache 2881228800
total_rss 1069154304
total_rss_huge 446693376
total_mapped_file 1060864
total_swap 831488
total_pgpgin 1821674
total_pgpgout 966068
total_pgfault 467261
total_pgmajfault 47
total_inactive_anon 532504576
total_active_anon 536588288
total_inactive_file 426450944
total_active_file 2454777856
total_unevictable 0
A Java process may consume much more physical memory than specified in -Xmx - I explained it in this answer.
However, in your case, it's not even the memory of a Java process, but rather the OS-level page cache. Typically you don't need to care about the page cache, since it's shared, reclaimable memory: when an application wants to allocate more memory but there are not enough immediately available free pages, the OS will likely free a part of the page cache automatically. In this sense, the page cache should not be counted as "used" memory - it's more like spare memory the OS puts to good use while the application does not need it.
The page cache often grows when an application does a lot of file I/O, and this is fine.
Async-profiler may help to find the exact source of growth:
run it with -e filemap:mm_filemap_add_to_page_cache
I demonstrated this approach in my presentation.
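For example, a possible invocation, assuming the async-profiler scripts are available inside the container and the JVM's pid is known (the duration and output file name are placeholders):
# record page-cache insertions for 60 seconds and write a flame graph
./profiler.sh -d 60 -e filemap:mm_filemap_add_to_page_cache -f pagecache.html <jvm-pid>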

Jenkins and PostgreSQL is consuming a lot of memory

We have a data warehouse server running on Debian Linux. We are using PostgreSQL, Jenkins and Python.
For the past few days a lot of memory has been consumed by Jenkins and Postgres. I have tried to find and check all the ways from Google, but the issue is still there.
Can anyone give me a lead on how to reduce this memory consumption? It would be very helpful.
Below is the output from free -m:
total used free shared buff/cache available
Mem: 63805 9152 429 16780 54223 37166
Swap: 0 0 0
Below are the postgresql.conf file, the system configuration, and the results from htop (attached as screenshots).
Please don't post text as images. It is hard to read and process.
I don't see your problem.
Your machine has 64 GB RAM, 16 GB are used for PostgreSQL shared memory as you configured, 9 GB are private memory used by processes, and 37 GB are free (the available entry).
Linux uses available memory for the file system cache, which boosts PostgreSQL performance. The low value for free just means that the cache is in use.
For Jenkins, run it with these JAVA Options
JAVA_OPTS=-Xms200m -Xmx300m -XX:PermSize=68m -XX:MaxPermSize=100m
For postgres, start it with option
-c shared_buffers=256MB
These values are the ones I use on a small homelab with 8 GB of memory; you might want to increase them to match your hardware. A sketch of where to put them follows.
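As a sketch of where those settings usually live on a Debian-style install (the paths below are typical package defaults, not taken from the question):
# /etc/default/jenkins (Debian package) - heap limits for the Jenkins JVM
JAVA_ARGS="-Xms200m -Xmx300m"
# /etc/postgresql/<version>/main/postgresql.conf - PostgreSQL shared memory
shared_buffers = 256MB
Restart both services afterwards, e.g. systemctl restart jenkins postgresql.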

Kafka failed to map 1073741824 bytes for committing reserved memory

I am installing Kafka on an AWS t2 instance (one that has 1 GB of memory).
(1) I download kafka_2.11-0.9.0.0
(2) I run zookeeper bin/zookeeper-server-start.sh config/zookeeper.properties
(3) I try running bin/kafka-server-start.sh config/server.properties
and I get
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000c0000000, 1073741824, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 1073741824 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /home/machine_name/kafka_2.11-0.9.0.0/hs_err_pid9161.log
I checked all the properties in the server.properties config file and in the documentation for properties that could do something like this, but couldn't find anything.
Does anyone know why Kafka is trying to allocate 1 GB when starting?
Kafka defaults to the following JVM memory parameters, which mean that Kafka will allocate 1 GB at startup and use a maximum of 1 GB of memory:
-Xmx1G -Xms1G
Just set KAFKA_HEAP_OPTS env variable to whatever you want to use instead. You may also just edit ./bin/kafka-server-start.sh and replace the values.
Also, if you have less memory available, try reducing the heap size, e.g. -Xmx400M -Xms400M, for both ZooKeeper and Kafka (see the sketch below).
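A minimal sketch of both approaches on a 1 GB instance; the 400M values follow the suggestion above and are only an example:
# option 1: override the heap via the environment before starting the scripts
export KAFKA_HEAP_OPTS="-Xmx400M -Xms400M"
bin/zookeeper-server-start.sh config/zookeeper.properties &
bin/kafka-server-start.sh config/server.properties
# option 2: edit the KAFKA_HEAP_OPTS default directly in bin/kafka-server-start.sh
# (and bin/zookeeper-server-start.sh)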
This issue might also relate to the maximum number of memory map areas allocated. It throws exactly the same error.
To remedy this you can run the following:
sysctl -w vm.max_map_count=200000
You want to set this in relation to your File Descriptor Limits. In summary, for every log segment on a broker, you require two map areas - one for index and one for time index.
For reference see the Kafka OS section: https://kafka.apache.org/documentation/#os
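To make that setting survive a reboot, it can also be persisted in sysctl configuration; the file name under /etc/sysctl.d/ below is just a convention:
sysctl -w vm.max_map_count=200000                                        # apply immediately
echo "vm.max_map_count=200000" | sudo tee /etc/sysctl.d/99-kafka.conf    # persist
sudo sysctl --system                                                     # reload settings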
I was getting "Java IO Exception: Map failed" while starting the Kafka server. By analyzing the previous logs, it looked like it had failed because of insufficient memory in the Java heap while loading the logs. I changed the maximum memory size, but that did not fix it. Finally, after more research on Google, I learned that I had downloaded the 32-bit version of Java, so downloading the 64-bit version of Java solved my problem.
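A quick way to check which JVM is installed; a 64-bit build reports a 64-Bit Server VM in the last line:
java -version
# e.g. Java HotSpot(TM) 64-Bit Server VM (build ..., mixed mode)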
Pass the KAFKA_HEAP_OPTS argument with your required memory value to run with.
Make sure to pass it in quotes - KAFKA_HEAP_OPTS="-Xmx512M -Xms512M"
docker run -it --rm --network app-tier -e KAFKA_HEAP_OPTS="-Xmx512M -Xms512M" -e KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper-server:2181 bitnami/kafka:latest kafka-topics.sh --list --bootstrap-server kafka-server:9092

100GB free space on NFS server, but can't write even an empty file

My production NFS server has more than 100 GB free, but I can't write even an empty file to that drive. Please find the attached image for clarification. I have since fixed the issue by removing some folders on that drive.
Use both df & df -i after reading df(1); perhaps you have too many inodes in your file system. See also stat(1) so run stat -f
Perhaps you have reached some disk quota. See also quota(1)
Consider using strace(1) to find the failing syscall and its errno
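For example, a quick diagnostic sequence along those lines (the /mnt/nfs mount point and test file name are placeholders):
df -h /mnt/nfs        # free blocks
df -i /mnt/nfs        # free inodes; 100% IUse% means no new files can be created
stat -f /mnt/nfs      # file system details
quota -s              # per-user quota, if quotas are enabled
strace -f -e trace=file touch /mnt/nfs/testfile   # show the failing syscall and its errno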

How do I set the maximum memory size that Redis can use?

To be specific, I only have 1GB of free memory and would like to use only 300MB for Redis. How can I configure it so that it is only uses up to 300MB of memory?
Out of curiosity, what happens when you try to insert new data and Redis has already used all of the allocated memory?
maxmemory is the correct configuration option to prevent Redis from using too much RAM.
If an insert causes maxmemory to be exceeded, the insert operation will sometimes fail.
Redis will do everything in its power to prevent the operation from failing, though. In the newer versions of Redis, you can configure the memory reclaiming policies in the configuration, as well by setting the maxmemory-policy option.
Also, if you have virtual memory options turned on, Redis will begin to store stale data to the disk.
More info:
What does Redis do when it runs out of memory?
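For instance, a minimal redis.conf sketch that caps Redis at 300 MB and picks an eviction policy (allkeys-lru here is just one possible choice):
maxmemory 300mb
maxmemory-policy allkeys-lru
The same settings can also be applied at runtime with redis-cli config set maxmemory 300mb and config set maxmemory-policy allkeys-lru.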
You can do that using the maxmemory option: maxmemory 314572800 means 300 MB.
Since the last answer is from 2011, here is some updated information for users reading in 2019 on Ubuntu 18.04. The configuration file is located at /etc/redis/redis.conf, and if you installed Redis using the default/recommended method (apt install redis-server), the default memory limit is set to "0", which practically means there is no limit. That can be troublesome if the machine has a limited/small amount of RAM. To set a custom memory limit, simply edit the configuration file and add "maxmemory 1gb" as the very first line. Restart the redis service for the changes to take effect. To verify the change, use redis-cli config get maxmemory. A full sequence is sketched below.
Ubuntu 18.04 users may read more here: How to install and configure REDIS on Ubuntu 18.04
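A minimal sketch of that sequence on Ubuntu 18.04 (the 1gb limit is just an example value):
sudo nano /etc/redis/redis.conf        # add the line: maxmemory 1gb
sudo systemctl restart redis-server    # restart for the change to take effect
redis-cli config get maxmemory         # verify; should report 1073741824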