I am having recurring, intermittent problems with "losing" my environment variables, most onerously %windir% and %path%. The problem occurs when I have locked the keyboard and log back in. Rebooting the system (cold- or warm-boot) does not reliably bring them back, but multiple iterations of booting have (so far) eventually brought everything back.
If I open a command window, type echo %windir% and echo %path%, and find that the variables exist and are properly defined, and if I leave that command window open, I can leave my system running for days without a problem.
I have captured the results of set (listing all environment variables) both when the system is broken and when it is fixed. The broken list is much shorter: %windir% is not even defined, and %path% contains the definition from the registry key HKCU\Environment, but not the one from HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Environment.
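For anyone wanting to make the same comparison, plain redirection from a command prompt is enough (the file names are just examples):

set > envars-broken.txt
rem ...later, from a working session:
set > envars-ok.txt
fc envars-broken.txt envars-ok.txt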
I am guessing that the boot-up process is getting sidetracked.
I spent all morning with Geek Squad, but they had no concrete suggestions. (They did suggest "taking the computer back to a previous restore point", but I fear that could cause more problems... and they didn't have confidence it would help.)
Do I have any options beyond possibly reinstalling everything?
I did finally find the answer, and although I understand this subject was closed by the commenter, I thought others might want to know. This link explains it pretty well:
https://superuser.com/questions/355594/windows-7s-path-and-environment-variables-are-corrupted
The short version is this: My system PATH exceeded a Windows maximum of 2048 bytes (it was more than 2200 bytes long). When that happens, the boot process fails to instantiate PATH and WINDIR.
The "fix" was to run c:\windows\system32\systempropertiesadvanced.exe from a command prompt (because without WINDIR, you can't open the Control Panel app for environment variables), and manually extract anything from the PATH I thought I could live without, until I whittled the PATH string down to under 2048 bytes.
I have this problem:
I run some large calculations before going to sleep (or work).
When I return, RAM is sometimes already full and the program has started writing to disk, which is a problem: the computer becomes almost unresponsive, and the "Interrupt the current operation" button doesn't stop mserver.exe from executing a task.
This is what I saw 10 minutes after I pressed the "Interrupt the current operation" button: [screenshot not included]
Not to mention that calculations are probably 100 or even 1000 times slower once it starts using disk instead of RAM (so letting it continue is pointless anyway).
Another problem: I was unable to save some variables to a file, since in Maple I couldn't type anything while mserver.exe was executing a task, and after I killed mserver.exe I was still unable to save them, since Maple commands don't work once the connection to the kernel is lost.
So, my question: can I make it so that mserver.exe won't use the disk at all (from within Maple alone, I mean, not by disabling the page file in Windows) and instead stops execution automatically when RAM is full (just as Classic Maple does when it hits its 2 GB limit)?
It would also be nice to be able to limit Maple's processor use, for example to 75% or so, so that I could keep working on the computer without problems.
You might experiment with a few of the options available for specifying limits on the Maple (kernel, mserver) engine.
In particular,
--init-reserve-mem=memorysize
(or, possibly, the -T option). See here for some more detail:
https://www.maplesoft.com/support/help/MapleSim/view.aspx?path=maple
On Linux/OSX you could pass that in the call to the maple script that launches Maple. On MS-Windows you could add it to the command string in the launcher icon's Properties (the Target field).
You might try setting it to a fraction of your total RAM, e.g. 50-75%, and see how it goes. Presumably you'll have some other processes running.
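For example, on Linux/OSX a launch might look like the following; note that the exact memorysize syntax isn't something I've verified, so check the help page above before copying it:

# hypothetical value: reserve roughly 8 GB for the kernel
maple --init-reserve-mem=8000m

On MS-Windows the same text would be appended to the Target field in the shortcut's Properties.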
As far as restricting CPU use goes, that's more of an OS issue. On Linux/OSX you could use the system's nice facility. I don't know what's available on MS-Windows (built-in or 3rd party). You might be able to set the priority of the running mserver process from the Task Manager. Or you might look at something like the START facility:
https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/start
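For instance, launching Maple at reduced priority from a cmd prompt might look like this (the install path is hypothetical; adjust for your version):

start "Maple" /belownormal "C:\Program Files\Maple 2015\bin.X86_64_WINDOWS\maplew.exe"

Note that priority classes don't cap usage at a fixed 75%; they just let your interactive work win the CPU when you need it.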
I was writing a batch file that was supposed to automatically set up a computer to receive "psexec" remote commands. Unluckily, I didn't really pay attention to what I was writing, and I wrote this command and then ran it:
setx /M Path "C:\Windows\System32\PSTools"
You can imagine what happened... I erased all the other Path entries! Then, panicking and mis-reading an online forum, I restarted the computer. I had no backups, no restore points, and, obviously, no still-open cmd or PowerShell sessions either. My questions are:
Is there still a way to recover the Path entries I lost, or are they gone forever?
If they're gone, is there a way for me to "re-write" them, or at least get a list of the missing ones?
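For reference, I've learned that the machine-wide Path that setx /M overwrites lives in the registry; querying it now (this is a standard command, nothing specific to my setup) only shows the PSTools entry:

reg query "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Environment" /v Path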
I know that my questions will seem stupid to experienced programmers, and I apologise for that, but I started this project with almost zero skill in batch, cmd, and the other tools...
Thanks to everyone who will help <3
I am getting the above error when trying to run a script to produce a report. It is a pre-existing script that has run successfully many times before. Research has told me that it is something to do with the stack size? I'm running 10.2B02 in WRQ Reflections. Can anyone tell me what this statement means and how I can look up the value of my -s?
Thanks,
Paul
-s is a client startup parameter. You mention "Reflections", so you are probably using a character terminal session. The -s parameter goes on the command line used to start Progress (which might be inside a script); if there is a -pf somefile.pf on the command line, it may instead be set inside that "parameter file". If it is not specified, the default value is 40. The maximum value is limited by available memory, but setting it in the hundreds or even the thousands is not unheard of.
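For example, a character-client startup line with an explicit -s might look like this (script, database, and value are all hypothetical):

_progres mydb -p start.p -s 128

or, if a parameter file is used, as a line inside somefile.pf:

-s 128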
You can also get the startup values by sending a SIGUSR1 to the _progres process that is running the session, i.e. kill -USR1 <pid>. That will (safely) create a "protrace.<pid>" file that includes the startup parameters and a 4GL stack trace. The file will appear in either the current directory, the home directory, or the temp-file directory (I forget which; just look for protrace*).
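For example (the pid is hypothetical; use whatever ps reports for your session):

ps -ef | grep _progres
kill -USR1 12345
ls protrace*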
This error usually means that your code is manipulating a field that is too large. (Like the error says.) That might be for a lot of reasons.
One common possibility is string concatenation in a loop (sketched below).
Or you might be calling lots of sub-procedures and passing parameters around.
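As an illustration of the first of those possibilities, a loop like this (hypothetical table and field) grows a single CHARACTER value without bound and will eventually blow through -s:

/* each pass makes cList longer; a big table means a big string */
DEFINE VARIABLE cList AS CHARACTER NO-UNDO.
FOR EACH customer NO-LOCK:
    cList = cList + "," + customer.name.
END.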
If "nothing has changed" in the code then it probably just means that some data structure has grown slightly larger over time and increasing -s is really no big deal so long as it solves the problem.
If you keep having to increase it, then it is more likely that you have some sort of coding issue. Maybe you're passing things by value that ought to be passed by reference, or maybe you have runaway recursion. Or something else. You'd need to provide a lot more detail to say for sure.
It is also possible (but unlikely) that you have a corrupt data record that appears to have a field in it that is too large. You could run "proutil dbName -C dbanalys" as an initial step to see if that is true.
Part of the error message is non-standard -- I'm not certain which log file it is coming from or how it got there (applications can write their own messages) but it seems that it might have something to do with trying to send an e-mail. So I'd be suspicious that either the list of recipients got too long or that the body of the e-mail is too large.
I've read a lot of similar questions, but I can't seem to find an answer to exactly what my problem is.
I've got a set of minidumps from a 32-bit application that was running on 64-bit Windows 2008. The 32-bit Visual Studio on my 32-bit Vista Business machine wouldn't touch them at all, so I've been trying to open them in WinDbg.
I don't have the EXACT corresponding .pdb files (we only started saving them AFTER this particular release), but I have .pdbs built by the same machine with the same code. I also have access to the exact executable that created the minidumps.
I found a nifty little application called ChkMatch that can make .pdbs match an executable... the only difference (according to ChkMatch) was age, so I matched my newer .pdbs to the original executable.
However, when I load it in WinDbg, it still says that it is a "mismatched pdb"; then, since I had set .symopt+ 0x40, it tries to load them anyway. I then get the warning:
*** WARNING: Unable to verify checksum for myexe.exe
I ran !lmi myexe and saw that, indeed, the checksum of the executable was in fact zero. From poking around a bit, I've found that the executable should have been built with the /release flag to have a checksum. That's all well and good, but I can't exactly go back in time and rebuild (if I did, though, I'd definitely save the original .pdbs :-P ).
Is there anything I can do here? Seems a little ridiculous I can't make things match here at least enough to get a call stack.
you don't need the checksum to get a call stack - this warning can be safely ignored.
to get the stack you need to issue the stack command (any variant of k).
if the minidumps are any good (i.e. describe an actual fault), you should first try the auto analysis !analyze -v which will get you started.
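for example, a first pass might look like this (the symbol cache directory is up to you):

$$ point at the microsoft public symbol server, then reload
.symfix c:\symbols
.reload
$$ automated analysis of the recorded fault, then stacks for every thread
!analyze -v
~* k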
come back when you have exhausted your expertise :o)
If you're working with minidumps, then you have to set your image path (Ctrl+I) to point to a location containing the images referenced in the dump. The trouble with minidumps is that they don't contain any code or data from the executables on the target, so you have to supply the binaries yourself.
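In command form that's, for example (the directory is hypothetical):

$$ append a folder containing the exe/dlls from the target, then reload
.exepath+ c:\dumps\bin
.reload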
-scott
SOLVED: see Edit 2
Hello,
I've been writing a Perl program to handle automatic upgrading of local (proprietary) programs (for the company I work for).
Basically, it runs via cron, and unfortunately has a memory leak (or something similar). The problem is that the leak only happens when I'm not looking (aka when run via cron, not via command line).
My code does not contain any circular (or other) references, so the commonly cited tools will not help me (Devel::Cycle, Devel::Peek).
How would I go about figuring out what is using so much memory that the kernel kills it?
Basically, the code SFTPs into a server (shelling out via backticks, i.e. `sftp ...`), calls OpenSSL to verify the file, then SFTPs more if more files are needed, and installs them (untars them).
I have seen delays (~15 sec) before the first SFTP session, but it has never used so much memory as to be killed (in my presence).
If I can't sort this out, I'll need to re-write in a different language, and that will take precious time.
Edit: The following message is printed out by the kernel which led me to believe it was a memory leak:
[100023.123] Out of memory: kill process 9568 (update.pl) score 325406 or a child
[100023.123] Killed Process 9568 (update.pl)
I don't believe it is an issue with cron, because of the stalling (for ~15 sec, sometimes) when running it via the command line. Also, there are no environment variables used (at least by what I've written; maybe underlying things use them?)
Edit 2: I found the issue myself, with help from the comment below by mobrule (in response to this question). It turns out that the script was called from the crontab of a (non-root) user just once a day, and the lack of root privileges caused a special infinite-loop situation.
Sorry guys, I feel kinda stupid for not finding this before, but thanks.
mobrule, if you submit your comment as an answer, I will accept it, as it led me to finding the problem.
End Edits
Thanks,
Brian
P.S. I may be able to post small snippets of code, but not the whole thing due to company policy.
You could try using Devel::Size to profile some of your objects. e.g. in the main:: scope (the .pl file itself), do something like this:
use Devel::Size qw(total_size);

# $$varname is a symbolic reference to a package variable in main::,
# so 'strict refs' must be lifted for this to run under use strict
no strict 'refs';
foreach my $varname (qw(varname1 varname2)) {
    print "size used for variable $varname: " . total_size($$varname) . "\n";
}
Compare the actual size used to what you think is a reasonable value for each object. Something suspicious might pop out immediately (e.g. a cache that is massively bloated beyond anything that sounds reasonable).
Other things to try:
Eliminate bits of functionality one at a time to see if suddenly things get a lot better; I'd start with the use of any external libraries
Is the bad behaviour localized to just one particular machine, or one particular operating system? Move the program to other systems to see how its behaviour changes.
(In a separate installation) try upgrading to the latest Perl (5.10.1), and also upgrade all your CPAN modules
How do you know that it's a memory leak? I can think of many other reasons why the OS would kill a program.
The first question I would ask is "Does this program always work correctly from the command line?". If the answer is "No" then I'd fix these issues first.
On the other hand if the answer is "Yes", I would investigate all the differences between having the program executed under cron and from the command line to find out why it is misbehaving.
If it is run by cron, shouldn't it die after each iteration? If that is the case, it's hard for me to see how a memory leak would be a big deal...
Are you sure it is the script itself, and not its child processes, that is using the memory? Perhaps it ends up creating a whole lot of ssh sessions, instead of doing a bunch of work in one session?
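A quick way to check is to snapshot the process tree while the cron job is running and compare resident sizes (RSS, in KB; GNU ps assumed):

# run this while update.pl is active; look for fat children such as ssh/sftp
ps -o pid,ppid,rss,vsz,args --forest -u $(whoami)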