SendKeys doesn't work inside the GitHub CLI (command line) - PowerShell

I'm trying to automate the creation of remote repos using PowerShell and gh repo create. The first thing that happens after running that command is a prompt to either create a new repo on GitHub or push an existing local repo up. I want to select the former, which should just require hitting Enter, since that is the option highlighted by default. I'm trying to use this in my ps1 script:
Add-Type -AssemblyName System.Windows.Forms
[System.Windows.Forms.SendKeys]::SendWait("{ENTER}");
When I try that without the gh repo create command, it works as expected, creating a new line in the PowerShell console. But when it follows gh repo create, it appears to do nothing. The console just sits on the following text, which is output from the gh repo create command:
What would you like to do? [Use arrows to move, type to filter]
> Create a new repository on GitHub from scratch
Push an existing local repository to GitHub
I have tried countless combinations of the following commands:
gh repo create
[Microsoft.VisualBasic.Interaction]::AppActivate("Administrator: Microsoft Powershell")
Add-Type -AssemblyName System.Windows.Forms
Start-Sleep 3
[System.Windows.Forms.SendKeys]::SendWait("{ENTER}")
and
$wshell = New-Object -ComObject wscript.shell;
$wshell.SendKeys("{ENTER}")
I'm new to PowerShell and can't tell if I'm doing something wrong or if SendKeys just doesn't work with gh commands for some reason; it seems to be the latter. Any ideas would be appreciated.

Expanding on @mclayton's comments ...
SendKeys is designed for scenarios that are similar to a multithreaded execution environment: you've got some bit of UI waiting for input on one thread but also have a script running on a different thread, and you can use SendKeys in the script to send keystrokes to the UI.
The problem is, as @mclayton points out, that console applications tend to behave more like a single-threaded environment: so, in this case, the gh command is blocking everything after it.
If you want to go this route, try piping the output of SendKeys to the gh command, something like
[System.Windows.Forms.SendKeys]::SendWait("{ENTER}") | gh repo create
I'm not exactly sure how that would work (as I understand it, pipes behave somewhat differently in PowerShell than in the regular command line interface).
You might not even need SendKeys; you might just be able to pipe input from Write-Output or similar. (Note that Write-Host writes to the console rather than to the pipeline, so it can't feed a downstream command.)
Note that this was the original use case for pipes: to be able to send the output of one command to another, i.e. command1 [options] | command2 [options] ..., and therefore be able to communicate between programs, even in a "single-threaded" command-line interface.
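For example, since accepting the highlighted default just means submitting an empty line, piping one empty string into the command might be enough. This is untested and assumes gh reads its prompt responses from stdin rather than directly from the console device:
Write-Output "" | gh repo create   # pipes one empty line, i.e. a single Enter keypress
If gh instead talks to the console device directly (as many interactive prompt libraries do), piped stdin won't reach the prompt, and a non-interactive flag or dedicated parameters would be the better route.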

Related

Restarting an updated powershell script

I'm making a makeshift CI/CD system for my app. The app stops itself when notified of a push to a GitHub repo, and the script automatically runs git pull to bring in the changes, plus some more commands depending on what changed. Some of the changes could be to the script itself.
I want the script to restart itself, without infinite nesting where it could hog resources.
While ($true) {
    git pull
    # check for changes...
    If ($runScriptChanged) {
        Break
    }
    node index.js
}
# ???
I've omitted the error-checking and other updating parts for brevity.
Calling itself would probably work, but again, it could hog resources indefinitely until stopped.
Making a new file to run the above script still leaves a file in the repo that cannot be updated automatically.
Start-Process is the best I've found for this, but I'm not sure about its behavior on Linux.
When does the launching shell close? Is it the same as on Windows with -NoNewWindow (where it stays open as long as something is using it)? (Currently I'm running it on Windows Server, so compatibility with Linux isn't a big concern, but it would be nice to have.)
Which way should I use? Thanks
You may consider using PowerShell jobs and the Start-Job cmdlet. It will start your process in the background, and it also has some monitoring and management capabilities through the other -Job cmdlets such as Get-Job, Wait-Job, Stop-Job, etc.
See about_Jobs for more information.
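A minimal sketch of that approach (the script name and the outer relaunch loop are assumptions for illustration, not part of the original answer):
$scriptPath = Join-Path $PSScriptRoot 'run.ps1'  # hypothetical name of the self-updating script
while ($true) {
    $job = Start-Job -FilePath $scriptPath  # run the current copy as a background job
    Wait-Job $job | Out-Null                # block until it breaks out of its loop
    Receive-Job $job                        # surface the job's output in this console
    Remove-Job $job                         # clean up before relaunching the updated copy
}
Because each iteration starts a fresh job only after the previous one has exited, there is no nesting and no resource build-up.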

How to Automate scripts with options in Powershell?

I'm not a native English speaker, so please pardon any discrepancies in my question. I'm looking for a way to automate option selection in programs/scripts run via PowerShell.
For example:
Start-Process -FilePath "velociraptor.exe" -ArgumentList "config generate -i"
In the above snippet, PowerShell will run Velociraptor and initiate the configuration wizard from a ps1 file. The wizard has a few options. After running, it will generate some YAML files.
So what would be the way to have a PowerShell script automate the option-selection process? I know what the options should be. I looked around, but I don't know the proper terms to find what I need, nor am I sure this can be done with PowerShell.
The end goal is to have the ps1 download the exe, run the config command, and continue by choosing the selections based on predefined choices. So far I've gotten the download and launching of velociraptor.exe working, but I'm not sure how to skip the wizard window and have the PowerShell script answer it instead.
I couldn't find a CLI reference for Velociraptor at https://www.velocidex.com/, but, generally speaking, your best bet is to find a non-interactive way to provide the information of interest, via dedicated parameters (possibly pointing to an input file).
Absent that, you can use the following technique to provide successive responses to an external program's interactive prompts, assuming that the program reads the responses from stdin (the standard input stream):
$responses = 'windows', 'foo', 'bar' # List all responses here
$responses | velociraptor.exe config generate -i
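Each element of $responses is written to velociraptor.exe's standard input followed by a newline, exactly as if you had typed it at the prompt and pressed Enter. Note that this only works if the program actually reads its answers from stdin; a wizard that reads directly from the console device won't see the piped values.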

PowerShell Script to Execute CMD.exe with Arguments

So I have surfed this site and the web, and I feel as though I am missing something simple.
I find related questions, but none that combine a script block and a remote call of a third-party app (not simply a Windows function or app).
I have the following string that I can copy into a command window and run without issue:
"C:\Program Files (x86)\Vizient\Vizient Secure Channel v2.1\VizientSC.exe" UID=me#musc.edu PWD=XXXXXXXXX HCOID=123456 PRODTYPE=PRO-UHCSECURECHANNEL-CDB PACKAGETYPE=OTH FOLDERPATH="\\da\db5\MyFiles\Viz\20180413"
To simplify this, let's just assume I want to run this same string every time, BUT with a REMOTE call.
I have written this many different ways, but to no avail, using
Invoke-Command -ComputerName "edwsql" -ScriptBlock { .........
I simply want to run the designated string using cmd.exe on a remote machine.
The exe being run in the string is third-party software that I do not want to install in all possible locations. It's much simpler to run it remotely from the box where it is already installed and secured.
Can someone point me in the right direction, please? I'm new to PowerShell. I am trying to phase out some old Perl, as the folks who can support that on the client site are few and far between these days.
You don't need to try so hard. PowerShell can run commands. If the command you want to run contains spaces, enclose it in " (as you have done) and invoke it with the & (call or invocation) operator. This is all you need to do:
& "C:\Program Files (x86)\Vizient\Vizient Secure Channel v2.1\VizientSC.exe" UID=me#musc.edu PWD=XXXXXXXXX HCOID=123456 PRODTYPE=PRO-UHCSECURECHANNEL-CDB PACKAGETYPE=OTH FOLDERPATH="\\da\db5\MyFiles\Viz\20180413"
If a parameter on the executable's command line contains any characters that PowerShell interprets in a special way, you will need to quote it.
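And to run it on the remote machine as you originally intended, the same call operator works inside the Invoke-Command script block. A sketch, assuming PowerShell remoting is enabled on edwsql and the exe exists at that path on the remote box:
Invoke-Command -ComputerName "edwsql" -ScriptBlock {
    & "C:\Program Files (x86)\Vizient\Vizient Secure Channel v2.1\VizientSC.exe" UID=me@musc.edu PWD=XXXXXXXXX HCOID=123456 PRODTYPE=PRO-UHCSECURECHANNEL-CDB PACKAGETYPE=OTH FOLDERPATH="\\da\db5\MyFiles\Viz\20180413"
}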

How do I detect if the test run was successful in a Team Build 2013 Post-Test script?

I have a build configuration in TFS 2013 that produces versioned build artifacts. This build uses the out-of-the-box process template workflow. I want to destroy the build artifacts in the event that unit tests fail, leaving only the log files. I have a post-test PowerShell script. How do I detect the test failure in this script?
Here is the relevant cleanup method in my post-test script:
function Clean-Files($dir) {
    if (Test-Path -Path $dir) { rmdir $dir\* -Recurse -Force -Exclude logs, "$NewVersion" }
    if (0 -eq 1) { rmdir $dir\* -Recurse -Force -Exclude logs }  # placeholder condition: how do I detect test failure here?
}
Clean-Files "$Env:TF_BUILD_BINARIESDIRECTORY\"
How do I test for test success in the function?
(Updated based on more information)
The way to do this is to use environment variables and read them in your PowerShell script. Unfortunately, the PowerShell scripts are run in a new process each time, so you can't rely on the environment variables being populated.
That said, there is a workaround so you can still get those values. It involves calling a small utility at the start of your powershell script as described in this blog post: http://blogs.msmvps.com/vstsblog/2014/05/20/getting-the-compile-and-test-status-as-environment-variables-when-extending-tf-build-using-scripts/
This isn't a direct answer, but... We just set the retention policy to only keep x number of builds. If tests fail, the artifacts aren't pushed out to the next step.
With our Jenkins setup, it wipes the artifacts every new build anyway, so that isn't a problem. Only the passing builds fire the step to push the artifacts to the Octopus NuGet server.
The simplest possible way (without customizing the build template, etc.) is do something like this in your post-test script:
$testRunSucceeded = (sqlcmd -S .\sqlexpress -U sqlloginname -P passw0rd -d Tfs_DefaultCollection -Q "select State from tbl_TestRun where BuildNumber='$Env:TF_BUILD_BUILDURI'" -h-1)[0].Trim() -eq "3"
Let's pull this apart:
sqlcmd.exe is required; it's installed with SQL Server and is in the path by default. If you're doing builds on a machine without SQL Server, install the Command Line Utilities for SQL Server.
-S parameter is server + instance name of your TFS server, e.g. "sqlexpress" instance on the local machine
Either use a SQL login name/password combo like my example, or give the TFS build account an account on SQL Server (preferred). Grant the account read-only access to the TFS instance database.
The TFS instance database is named something like "Tfs_DefaultCollection".
The "-h-1" part at the end of the sqlcmd statement tells sqlcmd to output the results of the query without headers; the [0] selects the first result; Trim() is required to remove leading spaces; State of "3" indicates all tests passed.
Maybe someday Microsoft will publish a nice REST API that will offer access to test run/result info. Don't hold your breath though -- I've been waiting six years so far. In the meantime, hitting up the TFS DB directly is a safe and reliable way to do it.
Hope this is of some use.

Proper use of Invoke-Expression?

I've just recently completed my first nightly build script (my first significant script of any kind, really) in PowerShell. I seem to have things working well, if not yet robustly (I haven't handled significant error-checking yet), but I found myself falling into an idiom around the Invoke-Expression cmdlet, and I'm wondering if I'm using it properly.
Specifically, I use a series of variables to build up command-lines that I will use to build the solution, then run the solution's unit tests. e.g., something like:
$tmpDir = "C:\Users\<myuser>\Development\Autobuild"
$solutionPath=$tmpDir+"\MyProj\MyProj.sln"
$devenv="C:\Program Files (x86)\Microsoft Visual Studio 10.0\common7\ide\devenv"
$releaseProfile="Release"
$releaseCommandLine="`"$devenv`" `"$solutionPath`" /build `"$releaseProfile`""
This works well enough, $releaseCommandLine contains the command line that I want to execute when I'm done. I then execute it via this line:
$output = Invoke-Expression "& $releaseCommandLine"
Is this the proper way to execute a manually-built command line from a powershell script? I thought initially that Invoke-Command would do it, but I must have been doing something wrong because I couldn't get that working at all for half an hour, and I got this working almost immediately.
I've followed this same pattern a few other times in this same script. Is this a best-practice?
Looks fine to me. The only thing I'd change is to use more PowerShell features in place of fragile assumptions (a short sketch follows this list). E.g.:
use Join-Path instead of string concatenation
use the Env:\ provider to look up the %ProgramFiles(x86)% dir (or better yet, use the HKLM:\ provider to find the path - it's in SOFTWARE\Microsoft\VisualStudio\<version>\InstallDir)
when I have to write a string that contains literal double quotes and variable expansion, I usually fall back to the syntax below. Personal preference, obviously.
'"{0}" "{1}" /build "{2}"' -f $devenv, $solutionPath, $releaseProfile
In some cases I'd be inclined to use Process.Start() so that I could capture the stdout & stderr streams independently (and maybe even control stdin interactively, depending on the application).
PS - the '&' is not strictly necessary.
I think it is unnecessary to use Invoke-Expression here. I've done this with a lot of build scripts and it usually looks like this:
$vsroot = "$env:ProgramFiles(x86)\Microsoft Visual Studio 9.0"
$devenv = "$vsroot\Common7\IDE\devenv.exe"
$sln = Join-Path <source_root> Source\MyProj\MyProj.sln
& $devenv $sln /build Release
or
& $devenv $sln /build "Release|Any CPU"
Although lately I have had some trouble with devenv.exe (misbehaving add-ins, etc.), so now I use msbuild.exe:
$msbuild = 'C:\Windows\Microsoft.NET\Framework\v3.5\MSBuild.exe'
& $msbuild $sln /p:Configuration=Release
Currently MSBuild can handle C#, VB and C++ (it invokes vcbuild), but it can't handle solutions with setup & deployment projects in them. However, I have found it to be more reliable than devenv.exe.
BTW, you typically need to invoke other tools (sn.exe, signtool.exe, mt.exe, etc.) in a build script that are specific to the version of Visual Studio/.NET you want to build against, so it is usually best to configure your environment variables the same way the VS 2008 command prompt does. With the PowerShell Community Extensions installed, you can set one line in the PSCX profile header to enable this for .NET 3.5/VS 2008 settings:
$Pscx:Preferences["ImportVisualStudioVars"] = $true