I wrote a simple script that calls a test that takes about 3 days. I redirect the test's output to a log file, so when running the script there's nothing on the screen to indicate progress. It's very simple:
CD C:\Test
test.exe > log.txt
I can check the log file every once in a while, sure, but if the machine freezes (which happens) I wouldn't notice right away.
So I need a nice way to show progress. Outputting a dot every now and then isn't great, I think, since the run takes 3 days! Any other ideas? As a beginner in PowerShell, I'd also appreciate an implementation of any suggested idea.
Much appreciated,
Yotam
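One possible approach (a sketch, not from the thread; the paths and the 5-minute interval are assumptions): run the test as a background job and print a timestamped heartbeat with the log's last line. Progress is visible, and if the heartbeat's timestamp stops advancing, the machine has likely frozen.

Set-Location C:\Test
# Run the test as a background job; its output still goes to the log file.
$job = Start-Job -ScriptBlock { & C:\Test\test.exe > C:\Test\log.txt }

while ($job.State -eq 'Running') {
    Start-Sleep -Seconds 300   # heartbeat every 5 minutes; adjust to taste
    $last = Get-Content C:\Test\log.txt -Tail 1 -ErrorAction SilentlyContinue
    Write-Host ("[{0}] still running; last log line: {1}" -f (Get-Date -Format 'yyyy-MM-dd HH:mm'), $last)
}
Write-Host ("Job finished in state: {0}" -f $job.State)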
I have two PS1 script files (the screenshots I refer to below aren't included here):
main.ps1 (from where we call the function in call.ps1)
call.ps1
Both are in the same folder.
When we call main.ps1 the first time in a new PowerShell window, it works fine.
Now the real issue: if we call the same function a second time, it doesn't work and gives an error.
Am I missing something or misunderstanding how this works? Or is this not the way to call one PS1 file from another?
Surprisingly, if I close the existing terminal and open a new one, it again works fine the first time!
Does anyone have a clue what we're doing wrong?
Someone might word this better than me.
You need to rename call.ps1 to call.psm1
Then change the code in main.ps1 to the following:
Import-Module '.\call.psm1'
callingthefunction
Also, in your main.ps1 you used './call.ps1'; the slash direction is wrong there. Look at my code above.
For more information about modules (which is what you are creating), please read:
https://learn.microsoft.com/en-us/powershell/scripting/developer/module/how-to-write-a-powershell-script-module?view=powershell-7.2
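For illustration, call.psm1 might then look like this (the function name and body are assumptions, since the screenshots aren't included here):

# call.psm1 -- hypothetical contents
function callingthefunction {
    Write-Host "Hello from call.psm1"
}
Export-ModuleMember -Function callingthefunction

Note that Import-Module loads the module into the session only once; if you later edit the .psm1 file, re-import it with Import-Module '.\call.psm1' -Force.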
My problem is as described. My script downloads files through an external call to cmd (using the system function and then .NET to send keypresses). The issue is that when the script tries to fopen the files it just downloaded (filenames from a text file I write as I download), it doesn't find them, causing an error. When I run the script again after seeing it fail, it works, but only up to the point where it tries to download and open new files again, where it runs into the same problem.
Are new files downloaded while a script is running somehow not visible on the search path? The folder is most definitely on my search path (seeing as it works outside of during-script downloads). It's not that the files aren't arriving fast enough either, because they appear in my folder almost instantly, and I've tried adding a delay to give it time to recognize them, but that didn't work either.
I'm not sure if it's important to note that the script calls an external function which tries to read the files from the .txt list I create in the main script.
Any ideas?
The script to download the files looks like so:
NET.addAssembly('System.Windows.Forms');  % load the .NET assembly that provides SendKeys
sendkey = @(strkey) System.Windows.Forms.SendKeys.SendWait(strkey);  % helper: send keystrokes to the active window
system('start cygwinbatch.bat')  % launch the batch file in a new window
pause(.1)
sendkey(callStr1)   % callStr1/callStr2 are download commands defined elsewhere
sendkey('{ENTER}')
pause(.1)
sendkey(callStr2)
sendkey('{ENTER}')
pause(.1)
sendkey('exit')
pause(2)
sendkey('{ENTER}')
But that is not the main reason I am asking: I am confident that the downloads are occurring when the script triggers them, because I see the files appearing in my folder as they are called for. I am more confused as to why MATLAB doesn't seem to know they are there while the script is running, and why I have to stop it and run it again for it to recognize the ones I've already downloaded.
Thank you,
Aaron
The answer here is probably to run the rehash function. MATLAB does not look for new files while executing an operation, and in some environments it misses new files even during interactive activity.
Running the rehash function forces MATLAB to search through its full path and determine whether there are any new files.
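A sketch of how that might fit into the download sequence (rehash itself is a documented MATLAB function; newFileName is a placeholder for one of the downloaded files):

system('start cygwinbatch.bat')   % trigger the download as before
pause(2)                          % give the file a moment to land on disk
rehash                            % force MATLAB to rescan its path for new files
fid = fopen(newFileName, 'r');    % should now find the freshly downloaded file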
I've never tried to run rehash in the middle of an operation though. ...
My guess is that the MATLAB interpreter is trying to look ahead and is throwing errors based on a snapshot of what the filesystem looked like before the files were downloaded. Do you get different behavior if you run it one line at a time using F9? If that's the case then you may be able to prevent the interpreter from looking ahead by using eval().
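If you try that suggestion, the wrapped call might look like this (again with newFileName as a placeholder):

eval('fid = fopen(newFileName, ''r'');')   % built and resolved only at runtime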
I'd like to create a simple error log for a Python program which runs on startup (through rc.local on a Raspberry Pi). Since I want to use this for debugging my files, the error logs should include date and time in their names.
This is what I got:
sudo python myprogram.py > /home/pi/errorlogs/myprogram.txt 2>&1
So far so good, but: how can I include the actual time and date in "myprogram.txt" (so it becomes, let's say, "myprogram_2014-02-10_19:45:00.txt"), and how do I keep it from being wiped every time I reboot? I played around with .strftime("%Y-%m-%d_%H-%M") but didn't get it to work.
Not really perfect is the fact that I don't get continuous output in the file; that is something I could live with, since I don't need it during the run. But maybe there is a whole different approach for what I need anyway?
Just let the shell do that for you.
sudo python myprogram.py > /home/pi/errorlogs/myprogram-$(date +%Y-%m-%d_%H-%M).txt 2>&1
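The $(...) part is shell command substitution: the shell runs date once when the command starts and splices its output into the filename, so every boot gets a fresh file instead of overwriting the old one. You can check the format on its own (the timestamp shown is illustrative):

$ date +%Y-%m-%d_%H-%M
2014-02-10_19-45

As an aside, the lack of continuous output is usually stdout buffering; running the interpreter as python -u myprogram.py (unbuffered) is one common fix.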
I have 8 batch scripts which I run one by one from PowerShell. Let's call them script1.bat, script2.bat, ..., script8.bat.
Now I need a script which runs all of the .bat files one by one, but not simultaneously.
And is there a way to check if each script was successful?
./script1.bat
./script2.bat
./script3.bat
...
You'll get the picture, I guess. This will run them in sequence. Whether you can determine if they were successful depends very much on how those batch files signal errors or successful completion. If they exit with exit /b 234 or something similar on an error, then you can use $LastExitCode or also $? to check for that. If there is no other way of figuring out whether they succeeded, you could also check whether the changes made by those batch files actually happened.
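For instance, a wrapper along these lines runs them in order and stops at the first failure (a sketch assuming the scripts set a non-zero exit code on error):

# run-all.ps1 -- run the batch files in sequence, stop on the first failure
foreach ($i in 1..8) {
    & ".\script$i.bat"
    if ($LastExitCode -ne 0) {
        Write-Host "script$i.bat failed with exit code $LastExitCode"
        break
    }
    Write-Host "script$i.bat finished successfully"
}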
I have a few Perl scripts on a Solaris SunOS system which basically connect to other nodes on the network and fetch/process command logs. They run correctly 99% of the time when run manually, but sometimes they get stuck. In that case, I simply interrupt the script and run it again.
Now I intend to cron them, and I would like to know if there is a way to detect that a script got stuck in the middle of execution (for whatever reason), and preferably to make it exit as soon as that happens, in order to release any system resources it may be occupying.
Any help much appreciated.
TMTOWTDI, but one possibility:
At the start of your script, write the process id to a temporary file.
At the end of the script, remove the temporary file.
In another script, also run from cron, see if any of these temporary files are more than a few minutes/hours old, and kill the corresponding process; a rough sketch of both pieces follows.
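A rough Perl sketch, with assumptions flagged: the PID directory, the .pid naming, and the two-hour threshold are all illustrative, not from the thread.

At the top of each fetch script:

use strict;
use warnings;
my $pidfile = "/tmp/fetchlog-pids/$$.pid";   # hypothetical PID directory
open(my $fh, '>', $pidfile) or die "Cannot write $pidfile: $!";
print $fh "$$\n";
close $fh;
END { unlink $pidfile; }                     # removed again on normal exit

And the watchdog, scheduled separately:

# watchdog.pl -- terminate fetch scripts whose PID files look stale
use strict;
use warnings;

my $piddir  = '/tmp/fetchlog-pids';
my $max_age = 2 / 24;                        # two hours, expressed in days

opendir(my $dh, $piddir) or die "Cannot open $piddir: $!";
for my $name (grep { /\.pid$/ } readdir $dh) {
    my $path = "$piddir/$name";
    next unless -M $path > $max_age;         # -M gives the file's age in days
    open(my $pf, '<', $path) or next;
    chomp(my $pid = <$pf>);
    close $pf;
    kill 'TERM', $pid if $pid =~ /^\d+$/;    # ask the stuck process to exit
    unlink $path;                            # clean up the stale PID file
}
closedir $dh;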