mkfifo tmp
The tmp named pipe accommodates only 8192 bytes.
According to this answer, the default size should be 16384 bytes, switching to 65536 bytes on large writes.
That does not seem to be the case on macOS Sierra 10.12.4.
Is there a way to increase the capacity of the named pipe?
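For reference, one way to check the figure empirically is to fill the FIFO until a write would block; a rough sketch to run at an interactive prompt (assumes bash or zsh, and a BSD or GNU dd that reports its byte count when interrupted):
exec 3<>tmp                                 # open the FIFO read/write so writes buffer even with no reader attached
dd if=/dev/zero bs=1 >&3 2>capacity.log &   # write one byte at a time until the buffer fills and the write blocks
sleep 2; kill -INT "$!"; wait "$!"          # interrupt dd; it prints how many bytes it managed to copy
cat capacity.log                            # e.g. "8192 bytes transferred ..."
exec 3>&-                                   # close the FIFO
On Linux the capacity can apparently be raised per pipe with fcntl(fd, F_SETPIPE_SZ, size), but I am not aware of an equivalent knob on macOS.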
I have this output:
root@hostname:/home/admin# perl -V:ptrsize
ptrsize='4';
According to this answer, ptrsize='4' means that perl is able to address 4GB of memory.
However, while loading huge amounts of data into memory, I was consistently able to reach exactly 4190924 kB (roughly 4.19 million kB) before hitting an "Out of memory" error.
Why did it not fail at 4000000 (4GB) as expected?
For the sake of completeness, I checked the amount of memory used by running qx{ grep VmSize /proc/$$/status };
The limit for a 32-bit pointer is 2^32 = 4,294,967,296 bytes, properly expressed as 4 GiB but commonly called 4 GB. This is 4,194,304 kiB (the unit that VmSize reports in), so your measured 4,190,924 kiB is within about 0.1% of that ceiling.
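Spelling the arithmetic out in VmSize's units:
4,294,967,296 bytes / 1,024 = 4,194,304 kiB
4,194,304 kiB - 4,190,924 kiB = 3,380 kiB of 32-bit address space left at the last reading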
How can I control the size of MongoDB's journal files, given that they take up a large amount of space? How can space be saved by using small files?
After doing some research:
Setting the smallfiles option for journaling does not seem to control the size.
However, for reference, the flag is --smallfiles.
Here are some ways to control journal file size (a command-line sketch follows this list):
Use the --smallfiles option to mongod, which caps each journal file at 128MB instead of 1GB.
ulimit is the Unix way to control system limits; it lets you set a maximum file size on the system. Consider this when you have sharding in place.
Reduce commitIntervalMs to a lower value, i.e. flush data to disk more frequently. Use this option only when you can tolerate periodic heavy I/O load.
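As a rough sketch of how the first and third options go together on the mongod command line (the dbpath is a placeholder, and --journalCommitInterval is, as far as I know, the command-line spelling of commitIntervalMs):
# cap each journal file at 128MB and commit the journal every 50 ms
mongod --dbpath /data/db --journal --smallfiles --journalCommitInterval 50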
Is there a way to determine the journal file size based on a data file size?
For example, I've arrived at a data file size of 10 GB (approximately) based on data + index length considerations and preallocation.
I understand the journal is also preallocated (in 1 GB files). So, for a 10 GB data file, is it reasonable to assume the journal will also be 10 GB? Or is there another way to calculate it?
The MongoDB journal files are fixed size 1GB files (unless you use the smallfiles option). There will be at most three 1GB journal files, so you will never have more than 3GB of journal.
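Applied to your example: regardless of the roughly 10 GB of data files, the journal should top out at 3 x 1 GB = 3 GB, or 3 x 128 MB = 384 MB if you run with the smallfiles option.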
http://docs.mongodb.org/manual/core/journaling/
I am on Ubuntu Linux 11 and PostgreSQL 9.1. I use CREATE TABLE ... SELECT over a dblink, and with a table of around 2 million rows I get
ERROR: out of memory
DETAIL: Failed on request of size 432.
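For reference, the statement has roughly this shape (the connection string, table, and column list below are placeholders, not my real schema):
psql -d targetdb <<'SQL'
CREATE TABLE big_table_copy AS
  SELECT *
    FROM dblink('dbname=sourcedb', 'SELECT id, payload FROM big_table')
         AS t(id integer, payload text);
SQL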
So I am taking the contents of an entire table from one database and inserting (or creating) them in another database on the same machine. I am using PostgreSQL's default settings, though I experimented with values from pgtune as well, to no avail. During the insert I do see memory usage going up, but the error occurs before my machine's limit is reached. ulimit -a says:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 30865
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 30865
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
If I do create table as ... select inside the same database, then it works without problems. Any ideas?
Edit: I tried adjusting the various memory settings in the postgresql.conf and it didn't help. What am I missing?
My guess from this is that the intermediate result set is being allocated entirely in memory and cannot be materialized (spilled to disk) as such. Your best options are to find a workaround or to work with the dblink developers to correct the problem. Some potential workarounds are:
Create a CSV file with COPY and insert that into your database (see the sketch after this list).
Chunk the query to, say, 100k rows at a time.
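For the COPY route, a rough sketch from the shell (database names, the table name, and the CSV path are placeholders; the target table is assumed to already exist with a matching definition):
# dump the source table to CSV, then load the CSV into the target database
psql -d sourcedb -c "\copy big_table TO '/tmp/big_table.csv' CSV"
psql -d targetdb -c "\copy big_table FROM '/tmp/big_table.csv' CSV"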
To be clear, my guess is that dblink handles things by allocating a result set, allocating the memory required, and handing the data on to PostgreSQL. It is possible this is done in a way that lets requests be proxied through quickly (and transferred over the network connection) without being held entirely in memory in the dblink module itself.
However for INSERT ... SELECT it may be allocating the entire result set in memory first, and then trying to process it and insert it into the table at once.
However, this is a gut feeling without a detailed review of the code (I did open dblink.c and scan it quickly). You have to remember that PostgreSQL is acting simultaneously as a database client to the other server and as a database server itself, so the memory gotchas of both libpq and the backend come together.
Edit: after a little more review, it looks like this is mostly right. dblink uses cursors internally. My guess is that everything is fetched from the cursor before the insert so that it can all be processed at once.
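That cursor machinery is also exposed directly, so the chunked workaround can be done by hand; a rough sketch (the connection name, connection string, table, and column definitions are placeholders, the target table is assumed to exist, and you would re-run the INSERT until it inserts zero rows):
psql -d targetdb <<'SQL'
SELECT dblink_connect('src', 'dbname=sourcedb');
SELECT dblink_open('src', 'cur', 'SELECT id, payload FROM big_table');
-- re-run this INSERT until it reports 0 rows; only ~100k rows are in memory at a time
INSERT INTO big_table_copy
  SELECT * FROM dblink_fetch('src', 'cur', 100000) AS t(id integer, payload text);
SELECT dblink_close('src', 'cur');
SELECT dblink_disconnect('src');
SQL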
I am opening files using memory mapping. The files are apparently too big (6 GB on a 32-bit PC) to be mapped in one go, so I am thinking of mapping part of the file each time and adjusting the offset for the next mapping.
Is there an optimal number of bytes for each mapping or is there a way to determine such a figure?
Thanks.
There is no optimal size. With a 32-bit process, there is only 4 GB of address space in total, and usually only 2 GB is available to user mode processes. This 2 GB is then fragmented by code and data from the EXE and DLLs, heap allocations, thread stacks, and so on. Given this, you will probably not find more than 1 GB of contiguous space to map a file into memory.
The optimal number depends on your app, but I would be concerned about mapping more than 512 MB into a 32-bit process. Even when limiting yourself to 512 MB, you might run into some issues depending on your application. Alternatively, if you can go 64-bit, there should be no issues mapping multiple gigabytes of a file into memory; your address space is so large that this shouldn't cause any problems.
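As a rough worked example for your 6 GB file: 6 GB / 512 MB per view = 12 views to walk the whole file, and each view's file offset must be a multiple of the system's allocation granularity (typically 64 KiB on Windows), which offsets of 0, 512 MB, 1024 MB, and so on satisfy automatically.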
You could use an API like VirtualQuery to find the largest contiguous space, but then you're actually forcing out-of-memory errors to occur, since you are removing large amounts of address space.
EDIT: I just realized my answer is Windows-specific, but you didn't mention which platform you are discussing. I presume other platforms have similar limiting factors for memory-mapped files.
Does the file need to be memory mapped?
I've edited 8 GB video files on a 733 MHz PIII (not pleasant, but doable).