What is the relocating allocator for buffers in Emacs?

What is this option, and how do I enable it when running ./configure? The configure summary reports:
Should Emacs use a relocating allocator for buffers? no

From the glibc documentation:
Any system of dynamic memory allocation has overhead: the amount of space it uses is more than the amount the program asks for. The relocating memory allocator achieves very low overhead by moving blocks in memory as necessary, on its own initiative.
Reading through the configure script, it looks like this memory allocator is used only when better ones are not available. In particular, on my system "Doug Lea's new malloc from the GNU C Library" takes precedence over the relocating allocator.

Related

Do instruction sets like x86 get updated? If so, how is backwards compatibility guaranteed?

How would an older processor know how to decode new instructions it doesn't know about?
New instructions use previously-unused opcodes, or other ways to find more "coding space" (e.g. prefixes that didn't previously mean anything for a given opcode).
How would an older processor know how to decode new instructions it doesn't know about?
It won't. A binary that wants to work on old CPUs as well as new ones has to either limit itself to a baseline feature set, or detect CPU features at run-time and set function pointers to select versions of a few important functions. (aka "runtime dispatching".)
x86 has a good mechanism (the cpuid instruction) for letting code query CPU features without any kernel support needed. Some other ISAs need CPU info hard-coded into the OS or detected via I/O accesses so the only viable method is for the kernel to export info to user-space in an OS-specific way.
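To make the dispatch pattern concrete in a high-level language, here is a sketch in Perl. It is Linux-specific (it reads /proc/cpuinfo rather than executing cpuid directly), and the two sum_* subs are hypothetical stand-ins for, say, a SIMD and a scalar version of a hot function:
use strict;
use warnings;

# Linux-specific: check the CPU feature flags the kernel exports.
sub cpu_has_flag {
    my ($flag) = @_;
    open my $fh, '<', '/proc/cpuinfo' or return 0;
    while (my $line = <$fh>) {
        return 1 if $line =~ /^flags\s*:.*\b\Q$flag\E\b/;
    }
    return 0;
}

# Hypothetical stand-ins for a fast (e.g. SSE4.2) and a baseline version.
sub sum_fast  { my $t = 0; $t += $_ for @_; $t }
sub sum_plain { my $t = 0; $t += $_ for @_; $t }

# Dispatch once at startup, then always call through the chosen code ref.
my $sum = cpu_has_flag('sse4_2') ? \&sum_fast : \&sum_plain;
print $sum->(1 .. 10), "\n";    # 55 either way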
Or if you're building from source on a machine with a newer CPU and don't care about old CPUs, you can use gcc -O3 -march=native to let GCC use all the ISA extensions the current CPU supports, making a binary that will fault on old CPUs. (e.g. x86 #UD (UnDefined instruction) hardware exception, resulting in the OS delivering a SIGILL or equivalent to the process.)
Or in some cases, a new instruction may decode as something else on old CPUs, e.g. x86 lzcnt decodes as bsr with an ignored REP prefix on older CPUs, because x86 has basically no unused opcodes left (in 32-bit mode). Sometimes this "decode as something else" is actually useful as a graceful fallback to allow transparent use of new instructions, notably pause = rep nop = nop on old CPUs that don't know about it. So code can use it in spin loops without checking CPUID.
-march=native is common for servers where you're setting things up to just run on that server, not making a binary to distribute.
Most of the time, an old processor will raise an "Undefined Instruction" exception, because the instruction is not defined on the old CPU.
In rarer cases, the instruction will execute as a different instruction. This happens when the new instruction is encoded via a mandatory prefix. For example, PAUSE is encoded as REP NOP, so it executes as a plain NOP on older CPUs.

How to set undo-outer-limit in emacs gdb

I used gdb 7.6's undo feature in emacs 24.3 and got the following warning. It suggests that I set `undo-outer-limit' to a larger value. How should I set the variable to a correct value? How large a value can emacs support?
Warning (undo): Buffer `*gud-foo*' undo info was 13351087 bytes long.
The undo info was discarded because it exceeded `undo-outer-limit'.
This is normal if you executed a command that made a huge change
to the buffer. In that case, to prevent similar problems in the
future, set `undo-outer-limit' to a value that is large enough to
cover the maximum size of normal changes you expect a single
command to make, but not so large that it might exceed the
maximum memory allotted to Emacs.
If you did not execute any such command, the situation is
probably due to a bug and you should report it.
You can disable the popping up of this buffer by adding the entry
(undo discard-info) to the user option `warning-suppress-types',
which is defined in the `warnings' library.
You can increase undo-outer-limit from the default value of 24,000,000 (see docs) by setting it in your .emacs or .emacs.d/init.el file:
;; increase undo-outer-limit to 72MB, 3x default value
(setq undo-outer-limit 72000000)
64-bit Emacs should be able to handle much larger values (see this discussion of supported file sizes), but as the warning says, very high undo memory usage may indicate a bug, so it's probably more sensible to set it only as high as you find yourself needing and not much further.
Note that if you habitually work with files that would have been considered large in 2003 (i.e., files > 10MB), you may also want to increase large-file-warning-threshold:
;; increase large-file-warning-threshold to 30MB, 3x default value
(setq large-file-warning-threshold 30000000)

restrict perl to do memory allocation from a fixed range of memory

I have a C program with perl running as a thread. I would like to restrict the perl interpreter to use memory from a chunk that I pre-allocated (about 2GB). I wonder whether this is possible and, if so, how to do it.
Thanks.
I'm reasonably certain there is no way to do that in a normal Perl binary, but all of perl's memory allocation code is nicely packaged in the malloc.c file in the source code. That file also has lots of comments on how Perl's memory allocation works under the hood. It shouldn't be too hard to create a locally modified perl that does what you want, I think.
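Note that malloc.c is only compiled in when perl was built to use its own allocator (usemymalloc); many distribution perls just use the system malloc. You can check how your perl was built via the Config module:
# Check whether this perl uses its own allocator (i.e. malloc.c).
use Config;
print "usemymalloc = $Config{usemymalloc}\n";   # 'y' means perl's malloc.c is in use
The same information is available from the command line with perl -V:usemymalloc.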

How to identify places accumulating memory use in a Perl script?

In my Perl script, occupied memory grows rapidly as it runs. I have tried clearing suspect variables as soon as they are no longer needed, but that has not fixed the problem. Is there any way to monitor the change in memory occupation before and after executing a block?
I have recently had to troubleshoot an out-of-memory situation in one of my programs. While I do not claim to be an expert in this matter by any means, I'm going to share my findings in the hope that they will benefit someone.
1. High, but stable, memory usage
First, you should ensure that you do not just have a case of high, but stable, memory usage. If memory usage is stable, even if your process does not fit in available memory, the discussion below won't be of much help. Here are some notes worth reading in Perl's documentation here and here, in this SO question, in this PerlMonks discussion. There is an interesting analysis here if you're familiar with Perl internals. A lot of deep information is to be found in Tim Bunce's presentation. You should be aware that Perl may not return memory to the system even if you undef stuff. Finally, there's this opinion from a Perl developer that you shouldn't worry too much about memory usage.
2. Steadily growing memory usage
In case memory usage steadily grows, this may eventually cause an out-of-memory situation. My problem turned out to be a case of circular references. According to this answer on StackOverflow, circular references are a common source of memory leaks in Perl. The underlying reason is that Perl uses a reference-counting mechanism and cannot release circularly referenced memory until program exit. (Note: I haven't been able to find a more up-to-date statement of that last claim in Perl's documentation.)
You can use Scalar::Util::weaken to 'weaken' a circular reference chain (see also http://perlmaven.com/eliminate-circular-reference-memory-leak-using-weaken).
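A minimal sketch of the fix: create the cycle, then weaken the back-reference so it no longer contributes to the reference count:
use Scalar::Util qw(weaken);

my $parent = { name => 'parent' };
my $child  = { name => 'child' };
$parent->{child} = $child;      # parent -> child
$child->{parent} = $parent;     # child -> parent: a reference cycle

# Without this, neither hash can be freed until program exit, because
# each keeps the other's reference count above zero.
weaken($child->{parent});

# Now, when the last ordinary reference to $parent goes away, both
# structures can be released; the weak reference simply becomes undef.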
3. Further reading
Tim Bunce's presentation (slides here); also in this blog post
http://www.perlmonks.org/?node_id=472366
Perl memory usage profiling and leak detection?
and of course the link given by @mpapec: http://perlmaven.com/how-much-memory-does-the-perl-application-use
4. Tools
on Unix, you could do system("ps -p $$ -o vsz,rsz,sz,size") to print the process's memory statistics (see the sketch after this list). Caution: as explained in Tim Bunce's presentation, you'll want to track VSIZE rather than RSS
How to find the amount of physical memory occupied by a hash in Perl?
https://metacpan.org/pod/Devel::Size
and a more recent take by Tim Bunce, which adds the possibility of estimating the total interpreter memory size: https://metacpan.org/pod/Devel::SizeMe
in test scripts, you can use https://metacpan.org/pod/Test::LeakTrace and https://metacpan.org/pod/Test::Memory::Cycle; an example here
https://metacpan.org/pod/Devel::InterpreterSize
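Putting the ps approach from above into a helper gives a quick way to bracket a suspect block. This is a Unix-only sketch; it assumes a ps that understands -o vsz= and reports VSZ in KB:
# Return this process's virtual memory size in KB (Unix-only sketch).
sub vsz_kb {
    my ($vsz) = `ps -p $$ -o vsz=` =~ /(\d+)/;
    return $vsz;
}

my $before = vsz_kb();
# ... run the suspect block here ...
my $after = vsz_kb();
printf "VSZ grew by %d KB\n", $after - $before;
Keep in mind that perl recycles freed memory internally, so a block that only churns perl's own free lists may not show up here until the process actually has to grow.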

Is there a standard way for perl to behave when it runs out of memory?

Is there a standard(ish) way for a Perl interpreter (aka "perl") to behave when it runs out of memory? Is it documented/specced-out in any way? Coded in some uniform way?
I'm especially interested in any standards which are expressed as a covenant to the Perl code being run - e.g., will die be called? Will END blocks be executed? Etc...
I'm fine with either a "theoretical" answer (e.g. some sort of generic "this is what perl code ought to do in general on out-of-memory" mission-statement document from Larry/P5P/etc., even if not 100% of malloc() calls follow this rule) or a "practical" statement (e.g. all malloc() calls in Perl are wrapped in a generic "allocate_memory" function which uniformly handles all failures).
The answer may well depend on what specifically causes the out-of-memory condition (e.g. a request for more memory for Perl code's data structures vs. memory allocated by internal Perl code unrelated to any explicit "need to store more data" logic in the Perl program).
If the answer is extremely implementation-dependent, assume perl for Solaris/Linux; narrowing down to any recent stable version (5.8 to 5.16) is acceptable.
The question is limited to the standard Perl interpreter, however you wish to define that as far as pre-compile configuration goes (e.g. the perl that comes with a major Linux distribution, or one compiled with all defaults left alone, etc.).
NOTE: This question came out of Gilles's comment to another Q
Taking a look at the perldiag manual page (the various diagnostic messages that Perl will issue when the "use diagnostics" pragma is enabled), you can see several different types of "out of memory" errors and what they mean.
So you can infer the "standard" behavior from these messages; the one with an exclamation point ("Out of memory!") sounds like the one you're asking about:
Out of memory!
(X) The malloc() function returned 0, indicating there was
insufficient remaining memory (or virtual memory) to satisfy the
request. Perl has no option but to exit immediately.
An "X" level error is labeled as "A very fatal error (nontrappable)."
However, if it's a "large request" (for more than 64K), the error is trappable (I guess Perl assumes it'll have enough memory to shut down cleanly).
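A sketch of what that means in practice. Note that the exact size that counts as "large", and whether the OS lets the allocation fail cleanly rather than overcommitting or OOM-killing the process, both depend on your platform:
# Try an allocation big enough to fail, and trap the trappable form.
my $ok = eval {
    my $huge = 'x' x (500 * 1024 * 1024 * 1024);   # ~500 GB string
    1;
};
if (!$ok && $@ =~ /Out of memory/) {
    warn "large allocation failed; continuing with a fallback\n";
}
The nontrappable "Out of memory!" form, by contrast, exits immediately and cannot be caught with eval.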