Sending commands to a detached GNU screen via Python subprocess

Hi, I've been struggling with an issue in some of my thesis code for a couple of days. What I'm trying to do is run a Python script inside screen, launched from a PuTTY session with:
$ screen ./Top.py
Top.py is a dummy script I made so I could debug this without waiting eight hours for the real code to hit the error. The issue it encounters is that subprocess.call() cannot start new screen sessions from within a detached screen.
Contents of Top.py:
#!/usr/bin/env python
import time
import subprocess
time.sleep(10)
subprocess.call(["screen", "nohup", "./Call1.py", "&"])
subprocess.call(["screen", "nohup", "./Call2.py", "&"])
time.sleep(10)
There aren't any issues with Call1.py and Call2.py themselves, and the whole thing runs smoothly if I never detach the screen. (But the full code will take a couple of days, so I can't leave it attached.) Another note: the nohup is just there so I get a nohup.out file for later reference; my actual code changes the directory each one runs in so the output files don't overwrite each other.
I don't have any issue with not using screen to run Call1 and Call2, but they need to run in parallel and in the background so the rest of my code can continue.
This is the closest I have come to a solution, I think.
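One thing worth noting: with an argument list, subprocess.call() runs no shell, so the "&" is passed to screen as a literal argument rather than backgrounding anything; screen's own -dm flag starts a session already detached (e.g. subprocess.call(["screen", "-dmS", "call1", "./Call1.py"])), which may sidestep the problem. If screen isn't strictly required, a sketch using subprocess.Popen can do the parallel-and-background part directly. The commands below are placeholders standing in for the real Call1.py/Call2.py:

```python
import subprocess
import sys

def launch(cmd, logname):
    # Popen returns immediately, so both children run in parallel while
    # the parent script carries on; output goes to a per-process log file
    # (taking the place of nohup.out).
    log = open(logname, "w")
    return subprocess.Popen(cmd, stdout=log, stderr=subprocess.STDOUT)

procs = [
    launch([sys.executable, "-c", "print('Call1 done')"], "call1.log"),
    launch([sys.executable, "-c", "print('Call2 done')"], "call2.log"),
]
for p in procs:
    p.wait()  # or skip this and poll() later while doing other work
```

Because the children are plain child processes of Top.py, they keep running whether or not the outer screen session is attached.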

Related

MATLAB doesn't find files I downloaded while the script is running

My problem is as described. My script downloads files through an external call to cmd (using the system function and then .NET to make keypresses). The issue is that when it tries to fopen these files I downloaded (filenames from a text file I write as I download), it doesn't find them, causing an error. When I run the script again after seeing it fail, it works but only up to the point where it's trying to download/call new files again, where it runs into the same problem.
Are files downloaded while a script is running somehow not visible on the search path? The folder is most definitely on my search path (seeing as it works when the downloads happen outside of a script). It's not that the files arrive too slowly either, because they appear in my folder almost instantly, and I've tried adding a delay to let MATLAB recognize them, but that didn't work either.
I'm not sure if it's important to note that the script calls an external function which tries to read the files from the .txt list I create in the main script.
Any ideas?
The script to download the files looks like so:
NET.addAssembly('System.Windows.Forms');
sendkey = @(strkey) System.Windows.Forms.SendKeys.SendWait(strkey);
system('start cygwinbatch.bat')
pause(.1)
sendkey(callStr1)
sendkey('{ENTER}')
pause(.1)
sendkey(callStr2)
sendkey('{ENTER}')
pause(.1)
sendkey('exit')
pause(2)
sendkey('{ENTER}')
But that is not the main reason I am asking: I am confident that the downloads occur when the script calls them, because I see them appearing in my folder as they are called. I am more confused as to why MATLAB doesn't seem to know they are there while the script is running, and why I have to stop it and run it again for it to recognize the ones I've already downloaded.
Thank you,
Aaron
The answer here is probably to run the 'rehash' function. MATLAB does not look for new files while executing an operation, and in some environments it misses new files even during interactive activity.
Running rehash forces MATLAB to search through its full path and determine whether there are any new files.
I've never tried to run rehash in the middle of an operation though. ...
My guess is that the MATLAB interpreter is trying to look ahead and is throwing errors based on a snapshot of what the filesystem looked like before the files were downloaded. Do you get different behavior if you run it one line at a time using F9? If that's the case then you may be able to prevent the interpreter from looking ahead by using eval().

Powershell: show progress for a 3 days script

I wrote a simple script that calls a test that takes about 3 days. I redirect the test's output to a log file, so when running the script there's nothing on the screen to indicate progress. It's very simple:
CD C:\Test
test.exe > log.txt
I can check the log file every once in a while, sure, but if the machine freezes (which happens) I wouldn't notice right away.
So, I need an idea for a nice way to show progress. Printing a dot every now and then isn't great, I think, since the run takes 3 days! Any other ideas? As a PowerShell beginner, I'd also appreciate an implementation of any suggested idea.
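One language-agnostic approach (sketched here in Python, since no PowerShell answer appears in the thread) is a heartbeat: periodically print a timestamped summary of the log file's size and last line. If the size stops growing between heartbeats, the test has probably hung. The file name and the formatting are placeholders:

```python
import os
import time

def heartbeat(logfile):
    # One-line summary of the log: current size and last line.
    # Run inside a loop (e.g. every few minutes with time.sleep);
    # a size that stops growing suggests the machine has frozen.
    size = os.path.getsize(logfile)
    with open(logfile) as f:
        lines = f.readlines()
    last = lines[-1].rstrip() if lines else "(empty)"
    msg = "%s  %d bytes  %s" % (time.strftime("%H:%M:%S"), size, last)
    print(msg)
    return msg
```

For example, `heartbeat("log.txt")` called every five minutes gives a visible pulse on the console without touching the redirected test output.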
Much appreciated,
Yotam

How to detect if cronned script is stuck

I have a few Perl scripts on a Solaris SunOS system which basically connect to other nodes on the network and fetch/process command logs. They run correctly 99% of the time when run manually, but sometimes they get stuck. When that happens, I simply interrupt them and run them again.
Now, I intend to cron them, and I would like to know if there is a way to detect if the script got stuck in the middle of execution (for whatever reason), and preferably exit as soon as that happens, in order to release any system resources it may be occupying.
Any help much appreciated.
TMTOWTDI, but one possibility:
At the start of your script, write the process id to a temporary file.
At the end of the script, remove the temporary file.
In another script, see if there are any of these temporary files more than a few minutes/hours old.
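The three steps above can be sketched as follows (in Python rather than Perl; the pidfile path and the age threshold are placeholder values):

```python
import os
import time

PIDFILE = "/tmp/fetchlogs.pid"  # hypothetical path for the marker file

def start_marker(pidfile=PIDFILE):
    # Step 1: at the start of the script, record our process id.
    with open(pidfile, "w") as f:
        f.write(str(os.getpid()))

def finish_marker(pidfile=PIDFILE):
    # Step 2: at the end of the script, remove the marker.
    os.remove(pidfile)

def is_stuck(pidfile=PIDFILE, max_age=3600):
    # Step 3: run from a separate cron job. A marker older than
    # max_age seconds means the script is presumed stuck; the pid
    # stored inside tells the watchdog which process to signal.
    return (os.path.exists(pidfile)
            and time.time() - os.path.getmtime(pidfile) > max_age)
```

A watchdog cron entry could then read the pid out of a stale marker and send it SIGTERM (os.kill(pid, signal.SIGTERM)) so the stuck script releases whatever resources it holds.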

Run time error in Xcode for having too many lines of code

I have written a large program (a calculator) with about 220,000 lines of code, and growing, in my base implementation file. The program builds and runs well, but whenever execution reaches a specific line number and beyond, the program stops and gives a runtime error. There is no problem with the code itself: I have tried it at a smaller scale (I mean deleting some lines), and when the file is smaller it runs fine.
My question is this: does Xcode have a limit on the size of each file at runtime?
And if the answer is no, what should I do?

Why doesn't "coverage.py run -a" always increase my code coverage percent?

I have a GUI application for which I am trying to determine what is being used and what isn't. I have a number of test suites that have to be run manually to exercise the user interface. Sometimes I run the same file a couple of times with "coverage.py run file_name -a" and perform different actions each time to check different interface tools. I would expect that each run with the -a argument could only increase the covered line count reported by coverage.py (at least unless new files are pulled in). However, sometimes it reports lower coverage after an additional run — what could be causing this?
I am not editing source between runs and no new files are being pulled in as far as I can tell. I am using coverage.py version 3.5.1.
That sounds odd indeed. If you can provide source code and a list of steps to reproduce the problem, I'd like to take a look at it: you can create a ticket for it here: https://bitbucket.org/ned/coveragepy/issues