I'm trying to run a PowerShell script stored in AWS CodeCommit from Jenkins, but without success.
I've tried all the possible solutions I can think of.
Is there any tested way that I can follow to do this?
Regards,
For the environment I am in, I had to do it the following way to get it to work.
Build Step - Windows Powershell with the following code
# Build the full path to the script checked out into the Jenkins workspace
$File_Path_Name = $ENV:WORKSPACE + "\Folder\ScriptName.ps1"
# Invoke the script in a child PowerShell process
Powershell -File $File_Path_Name
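A slightly more defensive variant of the same build step (just a sketch, assuming the same Folder\ScriptName.ps1 layout inside the workspace) checks that the script exists and propagates its exit code so the build fails when the script fails:

# Build the full path to the checked-out script inside the Jenkins workspace
$scriptPath = Join-Path $env:WORKSPACE "Folder\ScriptName.ps1"
if (Test-Path $scriptPath) {
    # Run the script in a child PowerShell process and fail the build on a non-zero exit code
    powershell.exe -NoProfile -ExecutionPolicy Bypass -File $scriptPath
    exit $LASTEXITCODE
}
else {
    Write-Error "Script not found at $scriptPath"
    exit 1
}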
You will also need to make sure you have Git configured properly under Source Code Management: Repository URL, Credentials, and the Branch. Also make sure the credentials used have the proper access to GitHub (in my case).
I have created some global credentials in Jenkins, and I want to pass them to a PowerShell command that starts the execution of a Protractor test suite.
The credentials are created properly, and the bindings are also done, as you can see in the image below:
The thing is, I need these credentials when running the automation tests. The PowerShell command that I execute is the following:
npm run test -- --userName=${env:ECASUSER} --password=${env:ECASPWD}
After I start the job, the command is called inside the Windows agent as expected, triggering the tests. But when the tests use those credentials to authenticate, they seem to be empty strings.
In the job console log, both credentials appear to be there but displayed like this (****).
I have tried a lot of solutions; none of them work. Am I doing something wrong here?
After some time spent on this, I reached the conclusion that this never worked. Or maybe it did at some point, but it simply stopped.
Basically, Jenkins is not passing the credentials to Windows batch or PowerShell commands. My automation tests were running on blanks.
The only solution that I found to this problem is this one:
Create a new binding: Username and Password (conjoined) - here create a new job parameter containing the credentials. Let's say the parameter is named USERPASS;
Create a new 'Windows batch command' section where you save the credentials to a file in your test project (amazingly, this one works) - your credentials will be saved to the file like this: "username:password". This is how you save the credentials:
echo %USERPASS% > "%WORKSPACE%\password.txt"
Change the automation test project to retrieve the credentials from the file (see the sketch after this list);
Delete the file after the tests are completed.
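For the third step, here is a minimal PowerShell sketch of how the conjoined value could be read back and split (the actual test project would do the equivalent in its own language; password.txt is the file name used in the batch step above):

# Read the "username:password" line written by the batch step and split it into two values
$raw = (Get-Content "$env:WORKSPACE\password.txt" -Raw).Trim()
$userName, $password = $raw -split ':', 2
# ...hand $userName and $password to the test run from here...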
I believe it should have been
npm run test --userName=${ECASUSER} --password=${ECASPWD}
and make sure you don't pass unnecessary --
Let me know if that doesn't work, I have other ideas
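If it still doesn't work, a quick way to confirm whether the bindings reach the PowerShell step at all is to check the variables without printing their values (a sketch, assuming the ECASUSER/ECASPWD bindings from the question):

# Fail fast if the credential bindings did not reach this process
foreach ($name in 'ECASUSER', 'ECASPWD') {
    $value = [Environment]::GetEnvironmentVariable($name)
    if ([string]::IsNullOrEmpty($value)) {
        Write-Error "Credential variable $name is empty"
        exit 1
    }
    Write-Output "$name is set (length $($value.Length))"
}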
I'm completely new to Bamboo, so thank you in advance for the help.
I'm trying to create a Bamboo plan that zips files from a git repo and uploads the zip to Artifactory. Currently my build contains 2 tasks - source code checkout and a simple PowerShell script. The first time I run it, it builds perfectly fine, but without any modifications, any subsequent runs fail.
The error I'm getting in the log is the following:
Failing task since return code of [powershell -ExecutionPolicy bypass -Command /bin/sh /opt/bamboo/agent/temp/OR-J8U-JOB1-4-ScriptBuildTask-539645121146088515.ps1] was -1 while expected 0
Replacing the PowerShell script with empty space does not resolve the issue - only removing the script completely allows the build to succeed, but I cannot reinsert a new script or it will fail. I read other online questions suggesting that I "merge the user-level PATH environment information into the system-level PATH", but I cannot find the user-level environment information; my environment variables section is completely empty.
Like Vlad, I found it more efficient to implement my PowerShell script as a batch script instead.
I'm having difficulties trying to setup a startup task in an Azure role.
The ultimate goal is to disable RC4 cipher, along with other SSL configurations. In my (VS2012Express) project (solution partially achieved following another answer here in SO that led me to https://gist.github.com/sidshetye/29d6d48dfa0c2f5488a4 ) I created a Startup.cmd file like this:
REM Execute PowerShell command to disable RC4 and improve SSL security settings
ECHO Batch started >> "StartupLog.txt" 2>&1
PowerShell -ExecutionPolicy Unrestricted .\HardenSSL.ps1 >> log-HardenSSL.txt 2>&1
EXIT /B 0
HardenSSL.ps1 is the PowerShell script from the previous link. Both the .cmd and .ps1 scripts are placed in the application root directory, marked as "Content" with properties set to "CopyLocal=Always".
In my service definition, I put this:
<Startup>
<Task commandLine="Startup.cmd" executionContext="elevated" taskType="background"></Task>
</Startup>
Now, when I deploy the application to Azure, "nothing" happens. I configured the role instance to allow Remote Desktop and connected to the machine. I verified the scripts were published, but there were no log files and RC4 was still enabled. I tried to manually run the .cmd, and the machine ran the scripts to completion, disabled RC4, and restarted. So the scripts are actually "correct".
The problem is that the scripts are not getting fired at startup. I may be wrong, but I don't see anything related when looking at the Windows events. Actually, the server now keeps all the configurations, but I have to be sure the scripts get executed in case I have to publish to new instances/cloud services.
I also tried to:
1. place the scripts in a child directory
2. create 2 other "simpler" .cmd files that just create a log file with "script started", to exclude problems related to the .cmd calling the PowerShell script.
None of those scripts got executed.
Hope I've been sufficiently clear, any help would be greatly appreciated.
Thank you in advance,
Alberto
UPDATE 1
Reading through various discussions, I missed one very important thing: the script files are actually published in 2 distinct places, one being inside the /bin folder.
Ex: I placed my scripts in a /StartupScripts folder in my project, and when I connect via Remote Desktop to the Azure server I find the scripts both in "approot/StartupScripts" and in "approot/bin/StartupScripts".
The scripts that are actually executed are those placed inside the "bin" folder. The real problem is that I probably have a path problem inside the .cmd, since I have now found the execution logs with an error.
Now I will try to change it up and update the question here on SO.
Ok.
In the end it was indeed a problem with a path in my Startup.cmd file: .\HardenSSL.ps1 could not be found if the Startup Task pointed to a subfolder.
The solution was to place both the Startup.cmd and HardenSSL.ps1 files in the application root and remove the ".\" part when calling the PowerShell script; then all worked well.
Anyway, I would suggest that anyone pick this other solution I found on Stack Exchange:
https://security.stackexchange.com/a/79957
It links to a NuGet package that does the same thing as the script from the GitHub link in the original post, just "better"; mainly:
Better configuration of cipher suites, with support for Forward Secrecy for all reference browsers on SSL Labs
Retains SSL support for Internet Explorer 8 on Windows XP (unfortunately still a necessity for us)
Alberto.
I have a build configuration in TFS 2013 that produces versioned build artifacts. This build uses the out of the box process template workflow. I want to destroy the build artifacts in the event that unit tests fail leaving only the log files. I have a Post-Test powershell script. How do I detect the test failure in this script?
Here is the relevant cleanup method in my post-test script:
function Clean-Files($dir){
    if (Test-Path -Path $dir) { rmdir $dir\* -Recurse -Force -Exclude logs,"$NewVersion" }
    # Placeholder condition: this should run only when the test run failed
    if (0 -eq 1) { rmdir $dir\* -Recurse -Force -Exclude logs }
}
Clean-Files "$Env:TF_BUILD_BINARIESDIRECTORY\"
How do I test for test success in the function?
(Updated based on more information)
The way to do this is to use environment variables and read them in your PowerShell script. Unfortunately, the PowerShell scripts are run in a new process each time, so you can't rely on the environment variables being populated.
That said, there is a workaround so you can still get those values. It involves calling a small utility at the start of your PowerShell script, as described in this blog post: http://blogs.msmvps.com/vstsblog/2014/05/20/getting-the-compile-and-test-status-as-environment-variables-when-extending-tf-build-using-scripts/
This isn't a direct answer, but... We just set the retention policy to only keep x number of builds. If tests fail, the artifacts aren't pushed out to the next step.
With our Jenkins setup, it wipes the artifacts every new build anyway, so that isn't a problem. Only the passing builds fire the step to push the artifacts to the Octopus NuGet server.
The simplest possible way (without customizing the build template, etc.) is do something like this in your post-test script:
$testRunSucceeded = (sqlcmd -S .\sqlexpress -U sqlloginname -P passw0rd -d Tfs_DefaultCollection -Q "select State from tbl_TestRun where BuildNumber='$Env:TF_BUILD_BUILDURI'" -h-1)[0].Trim() -eq "3"
Let's pull this apart:
sqlcmd.exe is required; it's installed with SQL Server and is in the path by default. If you're doing builds on a machine without SQL Server, install the Command Line Utilities for SQL Server.
-S parameter is server + instance name of your TFS server, e.g. "sqlexpress" instance on the local machine
Either use a SQL login name/password combo like my example, or give the TFS build account an account on SQL Server (preferred). Grant the account read-only access to the TFS instance database.
The TFS instance database is named something like "Tfs_DefaultCollection".
The "-h-1" part at the end of the sqlcmd statement tells sqlcmd to output the results of the query without headers; the [0] selects the first result; Trim() is required to remove leading spaces; State of "3" indicates all tests passed.
Maybe someday Microsoft will publish a nice REST API that will offer access to test run/result info. Don't hold your breath though -- I've been waiting six years so far. In the meantime, hitting up the TFS DB directly is a safe and reliable way to do it.
Hope this is of some use.
I am trying to install an MSI from a network share remotely.
# Bind to the Win32_Product WMI class on the remote machine
$app = [WMICLASS]"\\$pcname\ROOT\CIMV2:Win32_Product"
# Ask the remote WMI provider to install the package
$app.Install($AppPath)
I am getting error 1619. Some sources say that WMI cannot install remotely without first copying the MSI to the local computer and running it there, and some sources use this command to do exactly that.
That way works great, but I want to install from the share, so when the developer updates this MSI it will update the installed instances automagically. If I install them locally, the update would not be detected (not sure of this).
So I have tried using methods along these lines:
Invoke-Command -ComputerName $pcname -ScriptBlock { msiexec /quiet /i "\\appsvr\apps\theapp.msi" }
Those commands seem to go off into a black hole, though the command works when run locally.
Anyone have a method for doing this that works?
In your last scenario, your credentials are likely getting lost. This is known as the "double-hop authentication" (or maybe it's "second-hop") problem. You're using creds from ServerA to run something on ServerB, but in the end it has to connect to ServerC.
There's a fix if you have PowerShell v2 installed everywhere and are willing to accept the implications:
http://blogs.msdn.com/powershell/archive/2008/06/05/credssp-for-second-hop-remoting-part-i-domain-account.aspx
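For reference, a rough sketch of the CredSSP setup that the post describes (the computer names and share path are the ones from the question or placeholders; be aware of the security implications mentioned above):

# On the machine you run the command from: allow delegating fresh credentials to the target
Enable-WSManCredSSP -Role Client -DelegateComputer $pcname -Force

# On the target machine: accept delegated credentials
Enable-WSManCredSSP -Role Server -Force

# The remote session can now pass your credentials on to the file share
Invoke-Command -ComputerName $pcname -Authentication CredSSP -Credential (Get-Credential) -ScriptBlock {
    msiexec /quiet /i "\\appsvr\apps\theapp.msi"
}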