Is there a way to improve Emacs TRAMP performance? For me it's faster to open an external FTP client (FileZilla), transfer files to the local disk, and open them in an external editor (Notepad) than to open them with Emacs.
I use Emacs 23.1 under Windows XP.
I tried different values of tramp-default-method (telnet, pscp, ftp); all of them perform about the same.
Profiling results from elp-instrument-package are the following (I opened 3 remote files of 1.5 MB each):
tramp-file-name-handler 1461 350.41599999 0.2398466803
tramp-sh-file-name-handler 1461 350.02699999 0.2395804243
tramp-send-command 227 179.63400000 0.7913392070
tramp-send-command-and-check 205 177.77600000 0.8672000000
tramp-wait-for-regexp 227 176.47800000 0.7774361233
tramp-wait-for-output 226 176.40000000 0.7805309734
tramp-barf-unless-okay 18 133.46699999 7.4148333333
tramp-handle-insert-file-contents 3 132.046 44.015333333
tramp-handle-file-local-copy 3 131.281 43.760333333
tramp-accept-process-output 2375 112.95100000 0.0475583157
So the actual file transfer takes 132 sec, about 1/3 of the total time. Why does it spend so much time in tramp-sh-file-name-handler? I tried to advise the function tramp-sh-file-name-handler to store and return cached results, but it does not work; probably this function has some side effects.
Any ideas on how to improve TRAMP performance? (I use Emacs 23.1 under Windows XP.)
I've found that mounting the remote filesystem over SSH with FUSE (sshfs) is far better than TRAMP, if you can set it up that way.
If your use case allows it, use a remote client! This reminds me that I have resorted to editing remotely with Emacs.
My experience has led me to believe the machine hosting Emacs would be the bottleneck.
However, a better SSH client may help; try the list at OpenSSH.org (low in the left nav). I like PuTTY on Windows, where selection copies and right-click pastes.
I'm not sure of ways to improve the remote performance, though. The default build of Emacs has a lot of Lisp, but it takes more disk than RAM, and has always been efficient for me except with large files and network/system lag.
If your setup has highlighting and auto-features you don't want, then configuring it minimally might help; you should be able to do that without rebuilding.
Emacs is so vast; I noticed this most when I found out it can send and receive e-mail. I have hardly explored the tip of the iceberg.
In this case, though, vi may be better. Even with relatively more Emacs experience, I've only used small portions of each. I rarely script or seek out a new feature; the digging is tough, but there are handy command guides for both.
I resolved the problem with a couple of scripts that let me mget/mput and mirror files or directories. These scripts use lftp (the version installed with Cygwin) and perform very well.
There were requests to publish my solution. Unfortunately, I only have a prototype of it and no time to finish it. It serves me well, but it's not in a state to be published.
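A minimal, hypothetical sketch of that kind of lftp wrapper (not the actual scripts, just the general idea; it assumes the Cygwin build of lftp is on PATH and that authentication for the sftp URL is already set up):
use strict;
use warnings;

# Hypothetical wrapper around lftp (not the scripts described above): mirror a
# remote directory to a local one in a single lftp session.
my ($host, $remote_dir, $local_dir) = @ARGV;
die "usage: $0 host remote_dir local_dir\n" unless @ARGV == 3;

my @cmd = (
    'lftp', '-e',
    "mirror --only-newer --verbose $remote_dir $local_dir; quit",
    "sftp://$host",
);
system(@cmd) == 0 or die "lftp failed: $?\n";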
I have this problem:
I run some large calculations before going to sleep (or work).
When I return, sometimes the RAM is already full and the program starts writing to disk, which is a problem since the computer then becomes almost unresponsive; also, the "Interrupt the current operation" button doesn't stop mserver.exe from executing the task.
This is what I saw 10 minutes after I pressed the "Interrupt the current operation" button:
Not to mention that the calculations are probably 100 or even 1000 times slower once it starts using the disk instead of RAM (so it's pointless anyway).
Another problem is that I was unable to save some variables to a file, since in Maple I couldn't type anything while mserver.exe was executing a task, and after I killed the mserver.exe process I was still unable to save those variables because Maple commands don't work once the connection to the kernel is lost.
So, my question: can I make it so that mserver.exe won't use the disk at all (I mean from within Maple alone, not by disabling the page file in Windows) and instead stops execution automatically when RAM is full (just like Classic Maple does when it hits its 2 GB limit)?
It would also be nice to be able to limit Maple's processor usage, for example to 75% or so, so that I could work on that computer without problems.
You might experiment with a few of the options available for specifying limits on the Maple (kernel, mserver) engine.
In particular,
--init-reserve-mem=memorysize
(or, possibly, the -T option). See here for more detail:
https://www.maplesoft.com/support/help/MapleSim/view.aspx?path=maple
On Linux/OSX you could pass that in a call to the maple script that launches Maple. On MS-Windows you could add that to the command string/Property in the launcher (icon).
You might try setting it to a fraction of your total RAM, e.g. 50-75%, and see how it goes. Presumably you'll have some other processes running.
As far as restricting CPU use goes, that's more of an OS issue. On Linux/OSX you could use the system's nice facility. I don't know what's available on MS-Windows (built-in or 3rd party). You might be able to set the priority of the running mserver process from the Task Manager. Or you might look at something like the START facility:
https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/start
In Emacs sql mode or lisp mode connected to an inferior SQLi or Lisp process, there's a limit to how much data I can send to the process buffer in one go. Over the limit some or all data is lost.
I've tried to find what the limit is by running a comint process which simply reads input and echoes it. The limit seems to be around 760 characters.
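For reference, a rough sketch of such an echo process (purely illustrative; the exact program doesn't matter) in Perl:
use strict;
use warnings;

# Illustrative echo process for testing comint input limits: read lines from
# stdin and write back their length followed by the line itself.
$| = 1;    # unbuffered output so the reply appears immediately in the buffer
while (defined(my $line = <STDIN>)) {
    print length($line), ": ", $line;
}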
I'm running Emacs 25.4.1 on a very old HP-UX B.11.11 box. I built Emacs myself using GCC 4.2.3, which is the best I can get on this machine. As far as I can tell, no one else has reported a similar problem, so I'm wondering whether something went amiss in the build process, but I don't know where to look.
I would appreciate it very much if you could help me with the following very annoying problem:
I'm using PyDev in Eclipse on my Ubuntu 14.04 machine, and every time I run my code in debug mode, it takes around 3-4 minutes to start.
My research showed that it takes a very long time to run each "import" statement (without the import statements, the problem vanishes).
Can anyone tell me how I can overcome this problem?
Thanks!
I'm attaching:
1) my import statements.
2) my file tree (the file I'm running is in the folder "Gil").
3) the debug window (during these 3-4 minutes, Eclipse keeps adding more lines there that just say "light.py" (the file I'm running)).
I'm only guessing here, but from your output in PyDev it seems you're executing something with multiprocessing or something else that creates Python subprocesses (which is why I think you're getting a new light.py entry in the debugger every time).
Without looking at your code it's a bit hard guessing on what's actually happening, but I can give you some suggestions here:
Make your imports lazier. If you're always executing a new process which has to re-execute all the imports, that can indeed add quite a bit of time; imports in Python are usually slow, even more so with a debugger in place. Maybe do a profile in regular mode to find out what's actually going on; if your code is open source or you can afford the price, http://www.pyvmmonitor.com/ can probably help you quite a bit here. If you haven't profiled your code before, you probably have low-hanging fruit which can give you a nice speedup.
Use only programmatic breakpoints with the remote debugger (see: http://pydev.org/manual_adv_remote_debugger.html); this will make your code run at regular speed until it hits the programmatic breakpoint.
If none of those help, please add more details on your code (are you using Stackless, greenlets, threads, multiple processes, etc.?). Also, 3-4 minutes may or may not be much; without knowing the time it takes outside the debugger, it's hard to say...
Is there a way to save a compiled version of my perl scripts?
Or a way to do a JavaScript-style "compile" where you just remove comments, whitespace, etc.?
You're trying to optimize in the wrong place. If you are running scripts in a web/CGI environment, there is no need to take a compile hit every time the script is executed. The scripts should be running persistently, which you can do with Apache mod_perl, FastCGI, or a number of newer technologies and frameworks such as Plack and Catalyst. If you are more specific about your needs, you will discover that there are a number of options available to you.
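As an illustration (a minimal sketch, not tied to any particular framework above), a PSGI application compiles your Perl code once when the server starts and then serves every request from the same persistent process; save it as app.psgi and run it with plackup:
use strict;
use warnings;

# Minimal PSGI application: all modules are compiled once at server start-up,
# not on every request, which is what removes the per-request compile hit.
my $app = sub {
    my $env = shift;    # PSGI environment hash for the current request
    return [
        200,
        [ 'Content-Type' => 'text/plain' ],
        [ "Served by a persistent Perl process (pid $$)\n" ],
    ];
};

$app;    # a .psgi file must evaluate to the application code reference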
Do you realize that JavaScript is minified to save bandwidth, not startup time or runtime? And that the practice of minifying JavaScript started in the days of dial-up connections?
Sure, there was a time when interpreted programs were often minified like that, but back then typical CPUs were Z80s and 8086s running at 4-8 MHz and using loads of cycles to execute a single instruction. To illustrate: my Athlon XP-M 2400 is ~10,000 times faster than my 8 MHz 8086 for CPU-bound programs.
Try the Perl compiler: B::C (compiles to C) or B::Bytecode (similar to Python's .pyc).
http://search.cpan.org/dist/B-C/perlcompile.pod
You could use PPI to strip out comments and POD.
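For example, a small sketch using PPI to strip comments and POD from a script (assuming PPI is installed from CPAN; file names are made up):
use strict;
use warnings;
use PPI;

# Parse the source, prune comment and POD tokens, and write the result out.
my ($in, $out) = @ARGV;
die "usage: $0 in.pl out.pl\n" unless @ARGV == 2;

my $doc = PPI::Document->new($in)
    or die "could not parse $in: " . PPI::Document->errstr;
$doc->prune('PPI::Token::Comment');
$doc->prune('PPI::Token::Pod');

open my $fh, '>', $out or die "cannot write $out: $!";
print {$fh} $doc->serialize;
close $fh;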
Perl::Squish is the "minifier" you're looking for. Caveat: It's not going to help you at all. You're trying to optimize on the wrong end.
If you're doing this for fun, you might want to check out the Parrot VM.
If not... see my comment. ;)
SOLVED see Edit 2
Hello,
I've been writing a Perl program to handle automatic upgrading of local (proprietary) programs (for the company I work for).
Basically, it runs via cron and unfortunately has a memory leak (or something similar). The problem is that the leak only happens when I'm not looking (i.e., when run via cron, not from the command line).
My code does not contain any circular (or other) references, so the commonly cited tools will not help me (Devel::Cycle, Devel::Peek).
How would I go about figuring out what is using so much memory that the kernel kills it?
Basically, the code SFTPs into a server (using backticks: `sftp ...`), calls OpenSSL to verify the file, then SFTPs more files if needed and installs them (untars them).
I have seen delays (~15 sec) before the first SFTP session, but it has never used so much memory as to be killed (in my presence).
If I can't sort this out, I'll need to re-write in a different language, and that will take precious time.
Edit: The following message, printed by the kernel, is what led me to believe it was a memory leak:
[100023.123] Out of memory: kill process 9568 (update.pl) score 325406 or a child
[100023.123] Killed Process 9568 (update.pl)
I don't believe it is an issue with cron, because of the stalling (for ~15 sec, sometimes) when running it from the command line. Also, there are no environment variables used (at least not by what I've written; maybe the underlying tools use some?).
Edit 2: I found the issue myself, with help from the comment below by mobrule (in response to this question). It turns out that the script was called from a non-root user's crontab just once a day, and that (the lack of root privileges) caused a special infinite-loop situation.
Sorry guys, I feel kind of stupid for not finding this before, but thanks.
mobrule, if you submit your comment as an answer, I will accept it, as it led me to finding the problem.
End Edits
Thanks,
Brian
P.S. I may be able to post small snippets of code, but not the whole thing due to company policy.
You could try using Devel::Size to profile some of your objects. e.g. in the main:: scope (the .pl file itself), do something like this:
use Devel::Size qw(total_size);

# Look up each package variable by name (a symbolic reference, so this must
# not run under "use strict 'refs'") and report how much memory its value uses.
foreach my $varname (qw(varname1 varname2))
{
    no strict 'refs';
    print "size used for variable \$$varname: " . total_size(${$varname}) . "\n";
}
Compare the actual size used to what you think is a reasonable value for each object. Something suspicious might pop out immediately (e.g. a cache that is massively bloated beyond anything that sounds reasonable).
Other things to try:
Eliminate bits of functionality one at a time to see if suddenly things get a lot better; I'd start with the use of any external libraries
Is the bad behaviour localized to just one particular machine, or one particular operating system? Move the program to other systems to see how its behaviour changes.
(In a separate installation) try upgrading to the latest Perl (5.10.1), and also upgrade all your CPAN modules
How do you know that it's a memory leak? I can think of many other reasons why the OS would kill a program.
The first question I would ask is "Does this program always work correctly from the command line?". If the answer is "No" then I'd fix these issues first.
On the other hand if the answer is "Yes", I would investigate all the differences between having the program executed under cron and from the command line to find out why it is misbehaving.
If it is run by cron, shouldn't it die after each iteration? If that is the case, it's hard for me to see how a memory leak would be a big deal...
Are you sure it is the script itself, and not the child processes, that is using the memory? Perhaps it ends up creating a whole lot of ssh sessions, instead of doing a bunch of stuff in one session?
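To illustrate the one-session idea, a hypothetical sketch (not your code; the host and file names are made up, and it assumes Net::SFTP::Foreign from CPAN) that keeps a single SFTP connection open and reuses it for every transfer instead of shelling out to sftp each time:
use strict;
use warnings;
use Net::SFTP::Foreign;

# Open one SFTP session and reuse it for all downloads.
my $sftp = Net::SFTP::Foreign->new('upgrade.example.com');
die "cannot connect: " . $sftp->error if $sftp->error;

for my $file (qw(pkg1.tar.gz pkg2.tar.gz)) {
    $sftp->get("/releases/$file", $file)
        or die "get $file failed: " . $sftp->error;
}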