Non-buffering of output of a program run in an AppVeyor Windows environment

As far as I can see, the stdout output of a command in an AppVeyor job is buffered and only shows up in the log when the command finishes.
This is a bit of a nuisance, as the output is lost when the AppVeyor job is killed :-(.
Is there a way to show the output directly in the log file?
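One thing worth checking first (a guess, since I can't see your build script): many runtimes switch stdout from line buffering to block buffering when it is not attached to a console, so the fix is often on the program's side rather than AppVeyor's. For example, if the build step were a Python script (hypothetical here), you could force unbuffered output like this:

# hypothetical appveyor.yml test step: -u disables Python's stdout buffering
test_script:
- python -u run_tests.py

Setting PYTHONUNBUFFERED=1 in the build environment has the same effect for Python; other languages have their own autoflush switches.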

Print Pre-action script in console

For testing purposes I am using a Pre-Action Script for my tests (it cleans a database for web-service tests).
For now I am printing the output to a file located in my test folder, following this SO post ( Xcode scheme pre-action script not running ).
# Pre Build Output
exec > "${PROJECT_DIR}/MyProjectTests/TestLogs.log" 2>&1
echo "=== PRE-ACTION SCRIPT ==="
...
However, it is not really handy to have to open a log file from the command line every time a test set is launched just to see what happened.
Is there a way to redirect the Pre-Action script's output directly into the Xcode console?
Thanks a lot
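A variant worth trying (a sketch, assuming the pre-action script runs under bash, so process substitution is available): duplicate the output with tee, so it still lands in the log file but also reaches whatever terminal or console stream is attached:

# Pre Build Output, duplicated to the log and to the original stdout
exec > >(tee "${PROJECT_DIR}/MyProjectTests/TestLogs.log") 2>&1
echo "=== PRE-ACTION SCRIPT ==="

Xcode may still not surface pre-action output in its console, but at least the log file is no longer the only copy.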

Perl script file runs manually but not from crontab

I have a Perl script file that was running fine from crontab, but it suddenly stopped running, without any modification.
cd /home/user/public_html/crons && ./script.pl 2>&1 >/dev/null
The top of the script file is #!/usr/bin/perl -X
The output expected from this script is a set of changes in a database.
I have another script file with the same modifications, and it still works fine.
When I run the file in the browser it works fine and executes all lines without any problem.
I tried the full path /usr/bin/perl but it didn't work.
I tried putting perl at the beginning of the command but it didn't work.
I ran the command over SSH using PuTTY but nothing happened.
I checked the log file /var/log/cron but there were no errors at all.
I created a temporary log file, cd /home/user/public_html/crons && ./script.pl > /tmp/temp.log 2>&1, to see the errors, but the log is empty.
Here is the solution:
I found the issue: there was a stuck process for the same cron file, so I killed that process and it is fixed.
You can find your script's process like this:
ps aux | grep 'your cron file here'
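and then, once you have confirmed which PID is the stuck one (the 12345 below is just a placeholder), end it:

kill 12345
kill -9 12345    # only if it ignores the plain TERM signal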
This is a really common antipattern people tend toward with cron.
Cron sends you an email with the output of your script, if it generates any output. People often redirect output to /dev/null to prevent cron from sending the email. This is bad because now the output of your script is lost entirely. Even if the script has some built-in logging, it might generate errors before it gets the log file opened and those are lost. It also might crash in a way that doesn't get written to the logging mechanism.
At a bare minimum, you should just remove 2>&1 >/dev/null to start receiving the email. (And also test your mail setup using a temporary cron job like 1 * * * * echo "Test".)
The next better solution is to change it to >> /var/log/myscript/current.log, set up something to rotate the log files (like logrotate), and make sure to create that directory with permissions such that the user the script runs as is allowed to write to it. By redirecting only STDOUT of the script, any errors or warnings it writes to STDERR still cause you to get an email, and if there are no errors/warnings, the output goes to the log file and no email gets sent.
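For instance (schedule and paths hypothetical, adapted from the crontab line above):

*/10 * * * * cd /home/user/public_html/crons && ./script.pl >> /var/log/myscript/current.log

together with a minimal logrotate sketch, e.g. in /etc/logrotate.d/myscript:

/var/log/myscript/current.log {
    weekly
    rotate 4
    compress
    missingok
}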
Neither of those changes solve the root problem though, which is that when cron runs your script it does so with a different environment than you have on the command line. What you really want is a way to run the script with a consistent environment, and log it. The "ultimate solution" is to define your task in some kind of service manager, and then use cron to occasionally start it. For instance, you could use systemd and define a service that doesn't restart, then use systemctl start my_custom.service in your cron job. Now you can test independent of cron, and your tests will have the same exact environment, and be logged by the service manager. As extra bonuses, you are protected from accidentally running your script twice at once, and you get a clean way to stop a running cron job without the danger of stale pid files.
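A minimal sketch of that setup (unit name and paths are placeholders): save something like this as /etc/systemd/system/my_custom.service:

[Unit]
Description=Clean the database for webservice tests

[Service]
Type=oneshot
WorkingDirectory=/home/user/public_html/crons
ExecStart=/home/user/public_html/crons/script.pl

and the crontab entry shrinks to:

0 * * * * systemctl start my_custom.service

Type=oneshot gives the run-to-completion, no-restart behaviour described above, and the script's output ends up in the journal (journalctl -u my_custom.service).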
I don't particularly advocate systemd myself, but thankfully there are lots of alternatives:
Runit : http://smarden.org/runit/runsvdir.8.html
S6 : https://skarnet.org/software/s6/
Perp : http://b0llix.net/perp/site.cgi?page=perpd.8
(but installing and configuring a service manager is a bigger task than just using systemd if your distro is based on systemd) Each of these allows you to define a service that doesn't restart. Then you use a shell command to issue a "run once" directive to the supervisor, which runs the task as a child. Now you can easily launch the jobs yourself and see all the errors in the log, and then add that command to the crontab and know that it will run identically when cron starts it.
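With runit, for example, a sketch of the equivalent (paths are placeholders; the run script and the down file follow runit's standard service-directory layout):

# /etc/service/my_custom/run
#!/bin/sh
exec /home/user/public_html/crons/script.pl

# touch /etc/service/my_custom/down so runsv doesn't start or restart it on its own;
# then the crontab issues the "run once" directive:
0 * * * * sv once my_custom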
Back to your original problem: once you get some logging, you are likely to discover it is a permission problem or an upgraded module in the system Perl.

Cannot redirect EXE output to file

I have an exe program I'm running on Windows 10 using PowerShell. When I run it, I get the following output.
> .\Program.exe
Unlocked level 7/10
When trying to redirect all output or just stdout to a file, the program stops giving output. For example
PS > .\Program.exe > .\out.txt
PS > cat .\out.txt
PS >
I did not write the program but what I know is that it was written in C++.
Is there any trick to get the output into a file? I tried running it from Python and writing the output to a file, running it from Python without fetching the output and redirecting, running it from another PowerShell, and lots of other combinations, but none of them seem to work. Also, when running from Git Bash, I get no output at all.
I was thinking the program might be doing some checks on its file descriptors, but I'm not sure, since I don't have the source code, only the asm code.
It looks like Program.exe is actually generating an error, not output; the first commenter is trying to get you to see that, but not really explaining that part...
(Note: you aren't actually using any PowerShell here besides an implied Invoke-Expression.) I think you might be dealing with STDERR vs. STDOUT. When I invoke reg.exe in that fashion from PowerShell, I get no output to the text file, because the text I was seeing was an error message (the contents of STDERR) from reg.exe, not the output (the contents of STDOUT) of the command. When I passed proper parameters to it ( reg query HKLM\Software\Microsoft > C:\Users\foo\Documents\foo.txt ), it wrote the contents of STDOUT to the text file instead of the screen.
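For the original command, the standard redirection operators are the quickest test (nothing here is specific to Program.exe):

.\Program.exe > .\out.txt 2>&1     # merge STDERR into STDOUT and capture both
.\Program.exe 2> .\err.txt         # capture STDERR alone
.\Program.exe *> .\out.txt         # PowerShell 3+: redirect all output streams

If the text shows up in err.txt, the program was writing to STDERR all along.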
Here is an article that explains it better than I just did:
https://support.microsoft.com/en-us/help/110930/redirecting-error-messages-from-command-prompt-stderr-stdout

Can I make Rundeck read a log file on the remote node as job output?

I'm using Rundeck to run remote jobs through the SSH executor. Some of the jobs I run log to specific files on the host, rather than STDOUT, and I don't have the ability to change this.
Is there any way to tell Rundeck to read those files as they get written (using something like tail -f), and treat what appears there as the job output?
Adding tail -f itself as a step wouldn't work, since it will never terminate.
If need be, a 'hacky' solution will do (like adding extra job steps for copying and reading logs), but ideally I'd like it to be neater. So if you could give me some guidelines on how to build a plugin that will take the filename as a parameter and read the output from there, that would be better.
If you just want to read a file and print it on STDOUT, then just use this inline script as an additional step in the workflow.
#!/usr/bin/env python3
import os
import sys

# Print the file named in the first argument to STDOUT so Rundeck captures it.
file_name = sys.argv[1]
if os.path.isfile(file_name):
    with open(file_name) as f:
        for line in f:
            print(line, end='')  # lines already carry their newline
else:
    print("file doesn't exist")
Give the file name as an argument.
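For example, assuming the script were saved as print_log.py (name hypothetical) and the job logs to /var/log/myjob/output.log, running it by hand looks like:

python print_log.py /var/log/myjob/output.log

In the Rundeck step itself, /var/log/myjob/output.log goes in the step's arguments field.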

STDOUT redirected externally and no output seen at the console

I have a program which reads the output from an external application. The external app produces a set of output lines. My program reads the output from the external app with while ($line = <handle to external app>) and prints it to STDOUT. But print STDOUT $line prints only some of the lines: when the error occurs, printing to STDOUT stops working, yet my other logging statement, push @arr, $line, has stored the complete output from the external app. From this I got to know that STDOUT is not working properly when the error happens.
Eg:
If the external app's output is like:
Starting command
First command executed successfully
Error:123 :next command failed
Program terminated
Here, STDOUT prints only:
Starting command
First command executed successfully
But if I check the array, it has the complete output, including the error details. So I guessed STDOUT has been redirected or lost.
So I tried saving STDOUT at the beginning of the program to $old_handle using open, and then tried to restore it before the print statement using select($old_handle) (thinking something redirects STDOUT when the error happens).
But I was not successful, and I don't know what is wrong here. Please help me.
It's possible the output is being buffered. Try setting
$| = 1;
at the start of your program. This will cause the output to be displayed straight away, rather than being buffered for later.
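In context, a sketch (the handle name and the array are assumed from the question):

#!/usr/bin/perl
$| = 1;                        # autoflush STDOUT: each print is flushed immediately

while (my $line = <$external_app>) {
    print STDOUT $line;        # now appears as soon as it is read
    push @arr, $line;
}

Without autoflush, STDOUT is block-buffered when it is not a terminal, so an abnormal exit can discard whatever is still sitting in the buffer; that would explain the array holding more lines than the screen showed.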
Just a guess: maybe it's because the error output doesn't go to STDOUT. Use a redirect:
first_program |& perl_program
or
first_program 2>&1 | perl_program
(In Bash, |& is shorthand for 2>&1 |; both send the external app's STDERR down the pipe along with its STDOUT.)