Does anyone have, or has anyone run into, a FitNesse Windows command-line fixture? I need to run executables before running my test suites, and I would appreciate it if someone has such a fixture lying around.
Basically, what I am trying to do is the following:
|CommandlineFixture|
|C:\dev\myFileImporter.exe -f c:\dev\data\file.txt|
If you're using Fit, you can try Bob Martin's CommandLineFixture. You can use it by creating a simple test table as follows:
| com.objectmentor.fixtures.CommandLineFixture |
| command | C:\dev\myFileImporter.exe -f c:\dev\data\file.txt |
It also has some nice functionality like being able to search stderr/stdout for certain messages, wait for forked processes to finish, etc.
It's written in Java, and source code is available in case you have to customize it (when I used this, I customized it fairly heavily to add new functionality).
I'm trying to automate the creation of remote repos using PowerShell and gh repo create. The first thing that happens after running that command is a prompt asking whether to create a new repo on GitHub or push an existing local repo up. I want to select the former, which should just require hitting Enter, since that is the option highlighted by default. I'm trying to use this in my .ps1 script:
Add-Type -AssemblyName System.Windows.Forms
[System.Windows.Forms.SendKeys]::SendWait("{ENTER}");
When I try that without the gh repo create command, it works as expected, creating a new line in the PowerShell console. But when it follows gh repo create, it appears to do nothing. The console just sits on the following text, which is output from the gh repo create command:
What would you like to do? [Use arrows to move, type to filter]
> Create a new repository on GitHub from scratch
Push an existing local repository to GitHub
I have tried countless combinations of the following commands:
gh repo create
[Microsoft.VisualBasic.Interaction]::AppActivate("Administrator: Microsoft Powershell")
Add-Type -AssemblyName System.Windows.Forms
Start-Sleep 3
[System.Windows.Forms.SendKeys]::SendWait("{ENTER}")
and
$wshell = New-Object -ComObject wscript.shell;
$wshell.SendKeys("{ENTER}")
I'm new to PowerShell and can't tell if I'm doing something wrong or if SendKeys just doesn't work with gh commands for some reason; it seems to be the latter. Any ideas would be appreciated.
Expanding on @mclayton's comments ...
SendKeys is designed for scenarios that are similar to a multithreaded execution environment: you've got some bit of UI waiting for input on one thread but also have a script running on a different thread, and you can use SendKeys in the script to send keystrokes to the UI.
The problem is, as @mclayton points out, that console applications tend to behave more like a single-threaded environment: so, in this case, the gh command is blocking everything after it.
If you want to go this route, try piping the output of SendKeys to the gh command, something like
[System.Windows.Forms.SendKeys]::SendWait("{ENTER}") | gh repo create
I'm not exactly sure how that would work (as I understand it, pipes behave somewhat differently in PowerShell than in the regular command line interface).
You might not even need to use SendKeys; you might just be able to use Write-Host or similar.
Note that this was the original use case for pipes: to be able to send the output of one command to another, i.e. command1 [options] | command2 [options] ..., and therefore be able to communicate between programs, even in a "single-threaded" command-line interface.
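For what it's worth, SendWait() returns nothing, so there is no output to send down the pipe; if gh does read its answer from standard input (an assumption on my part, since interactive prompts often detect that stdin isn't a terminal and behave differently), the more direct form would be to pipe a newline string to it:
"`n" | gh repo create
If that doesn't work, check gh repo create --help: passing a repository name and flags on the command line avoids the interactive prompt altogether, which sidesteps the problem.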
I am developing a Qt application in Python. It uses a resource file, which needs to be compiled. I am using autotools to manage compilation and installation of my projects.
Now, in order for the resource file to be usable by the application, it needs to be compiled with a certain version of the compilation program (pyrcc). I can get the version by putting the output of pyrcc -version in a variable in configure.ac. But then, I don't know how to check whether the string pyrcc5 is present in the output. If it is not present, I want to tell the user that their PyRCC program has the wrong version, and abort configure.
Additionally, I would like to avoid the need for an extra variable for the program output, but instead do it like this (pseudocode):
if "pyrcc5" not in output of "pyrcc -version":
    say "pyrcc has wrong version"
    exit 1
How can I do this?
When writing a configure.ac for Autoconf, always remember that you are basically writing a shell script. Autoconf provides a host of macros that afford you a lot of leverage, but you can usually at least get an idea about basic "How can I do X in Autoconf?" questions by asking instead "How would I do X in a portable shell script?"
In particular, for ...
I would like to avoid the need for an extra variable for the program
output, but instead do it like this (pseudocode):
if "pyrcc5" not in output of "pyrcc -version":
    say "pyrcc has wrong version"
    exit 1
... the usual tool for a portable shell script to use for such a task is grep, and, happily, the easiest way to apply it to the task does not require an intermediate variable. For example, this implements exactly your pseudocode (without emitting any extraneous messaging to the console):
if ! pyrcc -version | grep pyrcc5 >/dev/null 2>/dev/null; then
    echo "pyrcc has wrong version"
    exit 1
fi
That pipes the output of pyrcc -version into grep, and relies on the fact that grep exits with a success status if and only if it finds any matches.
You could, in fact, put exactly that in your configure.ac, but it would be more idiomatic to
Use the usual Autoconf mechanisms to locate pyrcc and grep, and to use the versions discovered that way;
Use the Autoconf AS_IF macro to write the if construct, instead of writing it literally;
Use standard Autoconf mechanisms for emitting a "checking..." message and reporting on its result; and
Use the standard Autoconf mechanism for outputting a failure message and terminating.
Of course, all of that makes the above considerably more complex, but also more flexible and portable. It might look like this:
AC_ARG_VAR([PYRCC], [The name or full path of pyrcc. Version 5 is required.])
# ...
AC_PROG_GREP
AC_CHECK_PROGS([PYRCC], [pyrcc5 pyrcc], [])
AS_IF([test "x${PYRCC}" = x],
    [AC_MSG_ERROR([Required program pyrcc was not found])])
# ...
AC_MSG_CHECKING([whether ${PYRCC} has an appropriate version])
AS_IF([! "${PYRCC}" -version | "${GREP}" pyrcc5 >/dev/null 2>/dev/null], [
    AC_MSG_RESULT([no])
    AC_MSG_ERROR([pyrcc version 5 is required, but ${PYRCC} is a different version])
], [
    AC_MSG_RESULT([yes])
])
In addition to portability and conventional Autoconf progress messaging, that also gets the builder a way to specify a particular pyrcc executable to configure (by setting variable PYRCC in its environment), documents that in configure's help text, and exports PYRCC as a make variable.
Oh, and I snuck in a check for pyrcc under the name pyrcc5, too, though I don't know whether that's useful in practice.
The final result no longer looks much like the shell script fragment I offered first, I grant. But again, the pure shell script fragment could be used as is, and also, the fully Autoconfiscated version is derived directly from the pure script.
I'm running into problems testing a new addition to a module. (Specifically, the ~ operator seems not to be working in Math::Complex for this new feature only.) It's too bizarre to be what it appears, but the ideal scheme would be to add the -d option to the top line of the .t program.
Well, I was quickly disabused of that idea! It does not invoke the debugger.
If I wanted to use the debugger, I'd need to create an edit of the .t program that:
Uses the module directly (with the use command), not in the form of
BEGIN { use_ok('My::Module') };
Does not "use Test::More;"
A few other edits that cause gluteal pains
The problem with doing that is that any changes I make in the edited test program still need to be transferred back to the true test program used in "make test". Error prone at best.
I am already using "make test TEST_VERBOSE=1" so that my stdio output shows up. But there's GOT to be a simpler way to invoke the debugger on the .t file.
Thanks for ideas here.
-- JS
use_ok tests are great, but you should have them in test files of their own, not test files that also test other things.
I'm not sure why you would need to avoid Test::More or use_ok to run the debugger, though. What happens when you try running your test directly?
perl -d -Mblib t/yourtestfile.t
If all else fails, you can try using Enbugger in your test script.
I'm completely new to PowerShell and I know that a number of people use it to automate tasks much in the way bash and C-shell scripting are used on *NIX. I've successfully recompiled some ancient analysis software written in FORTRAN that takes individual input files. I now need to somehow run just under 1000 cases with only slightly varied input files. The analysis software writes intermediate files, so for concurrent runs, every run has to be in a different directory. Each case can take up to 40 minutes to solve, so running these individually will take a lot of time and be prone to error.
So now for the question: can PowerShell automate this, and is there a similar script out there that I can modify to do it?
The automation would need to do the following (as I see it):
Take in an input file with the various runs that have to be run
Create a subdirectory relative to the run name/number
Save a version of the input files with the variables switched in the subdirectory
Run the analysis software in the subdirectory
Look at standard/error output of analysis software to confirm it was successful
Append to a file success or failure of a run
Ideally would be able to run up to some number of analyses concurrently (4-6 for my machine)
If IT reboots the machine (as they do whenever they choose), I'd like to be able to restart where it left off, though I expect the loss of anything that the analysis software was running during the forced reboot.
I've tried recompiling the software with vectorization and automated parallelization and on the tested cases, the convergence time was only minimally reduced, so it is safe to assume that this is effectively single threaded.
PowerShell has lots of familiar aliases for Unix users. ls, cat, cp, etc. are implemented as aliases for native PowerShell commands. Commands are not case-sensitive. What's more, you can search the help even by an alias's name. That is,
man ls <=> get-help get-childitem
apropos <=> get-help <keyword>
get-help loop
about_Break
about_Continue
about_do
about_For
about_Foreach
about_Language_Keywords
...
This should help when converting an existing script. For the rest, I'll give some hints, as the description is somewhat vague, and a rough sketch after them.
Get-Content is used to read file contents into a variable: $myVar = cat c:\some\file.txt.
Directory creation is just md.
Capturing exe output is done by assigning to a variable: $exeOutput = c:\myApp.exe
Adding stuff to a file is Add-Content.
Background jobs are started with Start-Job.
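Putting those pieces together, here is a rough sketch of the kind of driver loop you could build. The file names, the solver executable, the RUN_NAME placeholder, and the concurrency limit are all hypothetical; adjust them to your actual input format.
# read the list of run names, one per line (hypothetical file)
$runs = Get-Content C:\analysis\runs.txt

foreach ($run in $runs) {
    # skip runs that already have a result recorded, so the script can resume after a reboot
    if ((Test-Path C:\analysis\results.log) -and
        (Select-String -Path C:\analysis\results.log -Pattern $run -Quiet)) { continue }

    # throttle to roughly 4 concurrent solver runs
    while ((Get-Job -State Running).Count -ge 4) { Start-Sleep -Seconds 30 }

    # create a per-run working directory
    $dir = "C:\analysis\$run"
    md $dir -Force | Out-Null

    # write a per-run copy of the input file, swapping in the run name
    (Get-Content C:\analysis\template.inp) -replace 'RUN_NAME', $run |
        Set-Content (Join-Path $dir 'case.inp')

    # run the solver as a background job inside its own directory
    Start-Job -Name $run -ArgumentList $dir, $run -ScriptBlock {
        param($dir, $run)
        Set-Location $dir
        $output = & C:\analysis\solver.exe case.inp 2>&1
        $status = if ($LASTEXITCODE -eq 0) { 'success' } else { 'FAILED' }
        Add-Content C:\analysis\results.log "$run : $status"
        $output | Out-File (Join-Path $dir 'run.log')
    } | Out-Null
}

Get-Job | Wait-Job | Out-Null   # wait for the remaining jobs to finish
Here success is judged purely by the solver's exit code; if your program signals failure some other way, inspect run.log (or $output) for the relevant message instead.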
I have an scp command that I want to run repeatedly and also watch the output. Putting one or multiple scp commands in a script allows me to see the output but each has to finish before moving on to the next. I can open multiple PowerShell windows but I want to scale it up past the point where that is practical.
I've played with start-job and that allows me to run multiples but when using get-job/receive-job/wait-job I haven't been able to get what I want. I've also tried invoke-command and invoke-expression.
My command is something like this:
& 'C:\temp\scp.exe' -i c:\temp\key c:\temp\test100.dat user1`@10.10.0.100:/tmp/
The output is updated regularly (once/second I guess) and looks like this:
Transfering c:\temp\test100.dat
c:\temp\test100.dat | 5.55mB | 812.13kB/s | 00:01:59 | 5%
Is it possible to do this?
You could pipe the output to a file with Out-File and monitor the file with http://www.baremetalsoft.com/wintail/
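For example, a minimal sketch combining Start-Job with Out-File (the file list is hypothetical, and whether scp's progress display keeps updating once it is redirected to a file depends on the scp build you're using):
$files = 'test100.dat', 'test101.dat', 'test102.dat'   # hypothetical list of files to transfer
foreach ($f in $files) {
    Start-Job -Name $f -ArgumentList $f -ScriptBlock {
        param($f)
        # each transfer writes to its own log, which WinTail or Get-Content -Wait can follow
        & 'C:\temp\scp.exe' -i C:\temp\key "C:\temp\$f" "user1@10.10.0.100:/tmp/" 2>&1 |
            Out-File "C:\temp\$f.log"
    } | Out-Null
}
Get-Job | Wait-Job | Receive-Job   # block until all transfers finish
You can then point WinTail (or Get-Content C:\temp\test100.dat.log -Wait in another window) at each log file to watch the transfers as they run.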