I'm quite new to Common Lisp and could use some help with the functions in the "osicat" system. What I am trying to accomplish is to get the size of a file. To do that, I would like to use the result returned by the function "stat" of the osicat system. When I try to get information about a file in the same directory I launched sbcl from, the function either takes forever to collect information from the file or it does nothing and hangs for an unknown reason. I'm not quite sure how to go about this, and I have no clue what might be causing the issue.
Here is the sequence of actions undertaken until I encounter the issue:
Open PowerShell.
Execute cygwin.
Execute sbcl.
(ql:quickload :osicat) (which is loaded without any complaints)
(in-package :osicat-posix)
(osicat-posix:stat "env.db")
After executing the last command, the REPL just hangs. I scoured the internet for clues as to why this might happen, but only found tutorials giving the same instructions that are giving me a hard time. Any insight, clues, hints, or help would be greatly appreciated.
There's nothing wrong with your sequence of actions, and you should get the result you expect. Here is the output for me in SLIME:
CL-USER> (osicat-posix:stat "/etc/passwd")
#<OSICAT-POSIX:STAT {1039159BB3}>
CL-USER> (describe (osicat-posix:stat "/etc/passwd"))
#<OSICAT-POSIX:STAT {103916B4F3}>
[standard-object]
Slots with :INSTANCE allocation:
DEV = 64769
INO = 25166054
MODE = 33188
NLINK = 1
UID = 0
GID = 0
RDEV = 0
SIZE = 2324
BLKSIZE = 4096
BLOCKS = 8
ATIME = 1576246741
MTIME = 1575707407
CTIME = 1575707407
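As the SIZE slot above shows, the file size is in the returned stat object. Here's a minimal sketch of pulling it out, assuming the stat-size reader that osicat-posix exports for that slot:
(ql:quickload :osicat)

;; Minimal sketch: return the size of PATH in bytes, read from the
;; SIZE slot of the object that osicat-posix:stat returns.
(defun file-size-in-bytes (path)
  (osicat-posix:stat-size (osicat-posix:stat path)))

;; (file-size-in-bytes "/etc/passwd") => 2324 for the file above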
Your problem may be connected with Cygwin interaction. You might get some clues about what's happening by running sbcl under strace.
I would appreciate it very much if you could help me with the following most annoying problem:
I'm using PyDev in Eclipse on my Ubuntu 14.04 machine, and every time I run my code in debug mode, it takes around 3-4 minutes to start.
My research showed that each "import" statement takes a very long time to run (without the import statements, the problem vanishes).
Can anyone tell me how I can overcome this problem?
Thanks!
I'm attaching:
1) my import statements.
2) my file tree (the file I'm running is in the folder "Gil").
3) and the debug window (during these 3-4 minutes, Eclipse adds more and more lines there that just say "light.py" (this is the file I'm running))
I'm only guessing here, but from your output in PyDev it seems you're executing something with multiprocessing or something else that creates Python subprocesses (which is why I think you're getting a new light.py entry in the debugger every time).
Without looking at your code it's a bit hard guessing on what's actually happening, but I can give you some suggestions here:
Make your imports lazier (see the first sketch after this list). If you're always executing a new process which has to re-execute all the imports, that can indeed add quite a bit of time -- imports in Python are usually slow, even more so with a debugger in place. Maybe do a profile in regular mode to find out what's actually going on -- if it's open source or you can afford the price, http://www.pyvmmonitor.com/ can probably help you quite a bit here. If you haven't profiled your code before, you probably have low-hanging fruit that can give you a nice speedup.
Use only programmatic breakpoints with the remote debugger (see: http://pydev.org/manual_adv_remote_debugger.html and the second sketch after this list) -- this will make your code run at regular speed until it hits the programmatic breakpoint.
If none of those help, please add more details on your code (are you using stackless, greenlets, threads, multiple processes, etc.?). Also, 3-4 minutes may or may not be a lot; without knowing the original time to get there, it's hard to say...
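To illustrate the lazy-import idea, here's a minimal sketch (numpy is just a stand-in for whatever heavy dependency you have):
# Sketch of a lazy import: the heavy module is imported on first
# call instead of at module load time, so any subprocess that never
# calls this function skips the cost entirely.
def load_matrix(path):
    import numpy as np  # deferred; cached in sys.modules after the first call
    return np.loadtxt(path)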
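And a sketch of a programmatic breakpoint (the host and port shown are pydevd's usual defaults; adjust them to your setup):
import pydevd  # ships with PyDev; its directory must be on sys.path

def risky_section(values):
    # Runs at normal speed until this line, then suspends and
    # connects back to the debug server listening in Eclipse.
    pydevd.settrace('localhost', port=5678)
    return [v * 2 for v in values]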
A few times each day, I receive an ispell error (like the following) that is corrected by restarting Emacs. Any ideas on how to further troubleshoot this type of error would be greatly appreciated.
Debugger entered--Lisp error: (wrong-type-argument number-or-marker-p nil)
ispell-command-loop(("Brae" "Br ea" "Br-ea" "Bra" "Bread" "Break" "Bream"
"Brew" "Bret" "Bred" "Area" "Urea") nil "Brea" 2229 2233)
ispell-process-line("^Brea, CA ~ 92821\n" nil)
ispell-region(1 6771)
ispell-buffer()
ispell()
call-interactively(ispell nil nil)
command-execute(ispell)
The document being spell-checked is in tex-mode (built-in -- i.e., not using AUCTeX). The error (today) comes from a simple address at flush-left:
242 S. Orange Avenue\\
Brea, CA ~ 92821
Try loading ispell.el and then:
Try to provoke the error. After loading the source file (not the byte-compiled file), you will perhaps get a more detailed backtrace, which will tell you better what causes the error. (You apparently already have debug-on-error non-nil.)
If that doesn't tell you enough, then do M-x debug-on-entry ispell-command-loop, and walk through the execution in the debugger. That should show you just what goes wrong - where that function expects a number and has nil instead.
Based on your better understanding, you will likely know what to do, to either avoid or fix the problem.
If you cannot reproduce the error easily then #2 will probably not be of much help. In that case, you can try examining the code of ispell-command-loop to see if you can figure out where the problem is.
You can also copy that code and insert calls to message at various places, to try to determine where things go wrong when they do. In other words, provide yourself with more info than that sparse backtrace.
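A quicker variant of the same idea, sketched with advice (assumes an Emacs with advice-add; the argument names just follow the backtrace above):
;; Log the arguments ispell-command-loop receives each time it's
;; called, so you can see which one is nil when the error fires.
(defun my/log-ispell-args (miss guess word start end)
  (message "ispell-command-loop: word=%S start=%S end=%S" word start end))
(advice-add 'ispell-command-loop :before #'my/log-ispell-args)

;; When you're done: (advice-remove 'ispell-command-loop #'my/log-ispell-args)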
Maybe someone else has a better idea - mine is pretty much brute force here.
I'm stuck on a problem on our live server.
I have a Perl script that runs almost 15 to 18 hours a day and creates 100+ subprocesses every day. In one place it has a command (a product command that we run from the command line on a Solaris box) that is triggered with backticks inside the Perl code.
It looks like the backticks command gets skipped or fails randomly.
For example, if I need to run it for 50 customers, 2 or 3 fail randomly.
I don't see any evidence anywhere that the command was ever triggered.
Since it's a live server, we can't make many code changes until we're sure about the problem.
Here is the code:
my $comm = "inventory -noX customer1"; # sample command I have given here
my $newLogFile = "/path/where/the/file/gets/created"; # to capture the command's output
my $piddy = `$comm 2>&1 > $newLogFile`; # note: this ordering sends stdout to the file and stderr to the backticks
Is it happening because of the backticks? I am really not sure :(.
I also tried various analyses, like memory/CPU/disk space, adding librtld_db.so to LD_LIBRARY_PATH, etc., but no luck. The Perl is 64-bit. What else can I try? :(
I suspect you are not checking for errors (and perl doesn't make that easy to do correctly for backticks).
Consider using IPC::System::Simple's capture in place of your backticks/qx.
As its doc says, "If there's an error, it will die with a detailed description of what went wrong."
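A minimal sketch of that replacement (the command string is the sample from the question):
use strict;
use warnings;
use IPC::System::Simple qw(capture);

my $comm = "inventory -noX customer1";    # sample command from the question

# capture() returns the command's output and dies with a detailed
# message if the command can't be started or exits non-zero, so a
# skipped or failed run can no longer pass silently.
my $output = eval { capture("$comm 2>&1") };
if ($@) {
    warn "inventory failed: $@";
} else {
    print $output;
}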
It shouldn't fail just because of backticks; however, because it is spawning a new process, that process may be periodically subject to failure due to system conditions (e.g. system load). Backticks are really a "fire and forget" method and should never be used for anything critical in a production environment. As previously suggested, there are far more detailed ways to manage spawning external processes.
If the command's output is being lost due to buffering, you might try turning off buffering, but keep an eye on it for performance degradation (it's usually not significant).
Output buffering (on the currently selected filehandle, which is STDOUT by default) can be turned off for an entire script by adding this near the top:
$| = 1;  # enable autoflush so output is written immediately
When calling external commands, I use system from IPC::System::Simple or open3 from IPC::Open3.
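For the IPC::Open3 route, here's a minimal sketch (fine for small outputs; reading both pipes to completion like this can deadlock on very large ones):
use strict;
use warnings;
use IPC::Open3;
use Symbol qw(gensym);

my $err = gensym;  # open3 needs a pre-created handle to keep stderr separate
my $pid = open3(my $in, my $out, $err, "inventory", "-noX", "customer1");
close $in;  # we're not sending the child any input

my @stdout = <$out>;
my @stderr = <$err>;
waitpid($pid, 0);

my $status = $? >> 8;
warn "command exited with status $status: @stderr" if $status != 0;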
From time to time I get a "nesting exceeds `max-lisp-eval-depth'" error.
What does it mean?
When I get one, is there something I can do, other than "killall emacs"?
Edit:
You can get the error if you evaluate:
(defun func ()
  (func))

(func)
However, in this case emacs remains responsive.
An immediate remedy can be to simply increase the maximum. Its default value is 500, but you could set it to, say, 10000 like this:
(setq max-lisp-eval-depth 10000)
But that's generally not a great idea, because the fact that you run into a nesting exceeds `max-lisp-eval-depth' error in the first place is a sign that some part of your code is taking up too much stack space. But at least increasing the maximum temporarily can help you analyze the problem without getting the same error message over and over again.
Basically, it means that some Lisp code used up more stack than Emacs was compiled to permit.
In practice, it's a sign of a bug in Lisp code. Correctly written code should avoid nesting this deeply, even if the algorithm and the input data were "correct"; but more frequently, it happens because of an unhandled corner case or unexpected input.
In other words, you have probably created an endless loop via recursion, or perhaps e.g. a regular expression with exponential backtracking.
If you are lucky, repeated control-G keypresses could get you out of the conundrum without killing Emacs.
If you are developing Emacs Lisp code, you might want to tweak down the value of max-lisp-eval-depth artificially to help find spots where your code might need hardening or bug fixing. And of course, having debug-on-error set to t should help by showing you a backtrace of the stack.
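For example, a minimal sketch of that development-time setup:
;; Tighten the limit so runaway recursion surfaces early, and get a
;; backtrace when it does. (500 is the default mentioned above.)
(setq max-lisp-eval-depth 200)
(setq debug-on-error t)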
SOLVED see Edit 2
Hello,
I've been writing a Perl program to handle automatic upgrading of local (proprietary) programs (for the company I work for).
Basically, it runs via cron and unfortunately has a memory leak (or something similar). The problem is that the leak only happens when I'm not looking (i.e., when run via cron, not via the command line).
My code does not contain any circular (or other) references, so the commonly cited tools will not help me (Devel::Cycle, Devel::Peek).
How would I go about figuring out what is using so much memory that the kernel kills it?
Basically, the code SFTPs into a server (using backticks: `sftp ...`), calls OpenSSL to verify the file, then SFTPs more files if more are needed, and installs them (untars them).
I have seen delays (~15 sec) before the first SFTP session, but it has never used so much memory as to be killed (in my presence).
If I can't sort this out, I'll need to re-write in a different language, and that will take precious time.
Edit: The following message is printed out by the kernel which led me to believe it was a memory leak:
[100023.123] Out of memory: kill process 9568 (update.pl) score 325406 or a child
[100023.123] Killed Process 9568 (update.pl)
I don't believe it is an issue with cron, because of the stalling (for ~15 sec, sometimes) when running it via the command line. Also, there are no environment variables used (at least by what I've written; maybe underlying things use them?)
Edit 2: I found the issue myself, with help from the comment below by mobrule (in response to this question). It turns out that the script was called from the crontab of a (non-root) user just once a day, and that (non-root privileges) caused a special infinite-loop situation.
Sorry guys, I feel kind of stupid for not finding this before, but thanks.
mobrule, if you submit your comment as an answer, I will accept it, as it led to me finding the problem.
End Edits
Thanks,
Brian
P.S. I may be able to post small snippets of code, but not the whole thing due to company policy.
You could try using Devel::Size to profile some of your objects. e.g. in the main:: scope (the .pl file itself), do something like this:
use Devel::Size qw(total_size);

# $$varname is a symbolic reference, so this only works on package
# (our) variables, and strict 'refs' must be disabled for the block.
no strict 'refs';
foreach my $varname (qw(varname1 varname2)) {
    print "size used for variable $varname: " . total_size($$varname) . "\n";
}
Compare the actual size used to what you think is a reasonable value for each object. Something suspicious might pop out immediately (e.g. a cache that is massively bloated beyond anything that sounds reasonable).
Other things to try:
Eliminate bits of functionality one at a time to see if suddenly things get a lot better; I'd start with the use of any external libraries
Is the bad behaviour localized to just one particular machine, or one particular operating system? Move the program to other systems to see how its behaviour changes.
(In a separate installation) try upgrading to the latest Perl (5.10.1), and also upgrade all your CPAN modules
How do you know that it's a memory leak? I can think of many other reasons why the OS would kill a program.
The first question I would ask is "Does this program always work correctly from the command line?". If the answer is "No" then I'd fix these issues first.
On the other hand if the answer is "Yes", I would investigate all the differences between having the program executed under cron and from the command line to find out why it is misbehaving.
If it is run by cron, shouldn't it die after each iteration? If that is the case, it's hard for me to see how a memory leak would be a big deal...
Are you sure it is the script itself, and not the child processes, that is using the memory? Perhaps it ends up creating a whole lot of ssh sessions instead of doing a bunch of stuff in one session?
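If that turns out to be the case, one option is to do all transfers over a single session. A hypothetical sketch with Net::SFTP::Foreign (the host and file names here are made up):
use strict;
use warnings;
use Net::SFTP::Foreign;

my @files = ('pkg1.tar.gz', 'pkg2.tar.gz');   # hypothetical file names

# One connection for every transfer, instead of one `sftp` child per file.
my $sftp = Net::SFTP::Foreign->new('updates.example.com');
die "SFTP connection failed: " . $sftp->error if $sftp->error;

for my $file (@files) {
    $sftp->get($file, "/tmp/$file")
        or warn "couldn't fetch $file: " . $sftp->error;
}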