I am trying to add machines to a deployment group. I executed the given PowerShell script on the Azure host.
* I ticked the "Use the personal token ..." option, just in case.
** The script is auto-generated by VSTS (Azure).
But the script stops responding after a minute or so, as you can see in the image below.
FYI:
I'm an admin on the VM as well as in VSTS.
After a little googling, it looks like the script should get somewhere like the picture below, but it never gets there.
Note: I never reach the screenshot below!
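For what it's worth, running the configuration step by hand makes any prompt or error visible instead of the script just sitting there. A rough sketch of the manual equivalent, with placeholder values for the agent folder, account URL, project, deployment group and PAT (the flags are the standard config.cmd options, so double-check them against your generated script):

    # Sketch only: register this machine in a deployment group without prompts.
    # All values below are placeholders.
    cd C:\vstsagent    # folder where the agent package was extracted (assumption)
    .\config.cmd --unattended --deploymentgroup `
        --url "https://myaccount.visualstudio.com/" `
        --projectname "MyProject" `
        --deploymentgroupname "MyDeploymentGroup" `
        --agent $env:COMPUTERNAME `
        --auth PAT --token "<personal access token>" `
        --runasservice --work "_work"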
First off, I didn't have this issue until setting up my agent to run as a Windows service.
My company has custom cmdlets we have built that are part of the default profile loaded when running PowerShell. I am using Jenkins to execute a batch file that iterates a command over a series of machines. After setting up Jenkins as a service, it no longer has access to those cmdlets, leading me to believe the profile isn't being loaded. If I load the profile manually by running the profile script, it only seems to work on the first machine.
When setting up Jenkins as a service, I configured it to run as the same user I would use if I logged in to the computer and ran these scripts manually. I have verified it is using the proper user with $env:UserName.
I am at a loss as to why setting up Jenkins as a Windows service broke this. I could revert to using the command line to connect to Jenkins, but that doesn't always reconnect after server maintenance or a power outage.
Did I configure something wrong, or is there a way to load profiles instead of Jenkins always running with -NoProfile?
Update - I noticed when inspecting $PROFILE that it was set to a default profile location that did not exist. It seems that when opening PowerShell manually on the machine it loads the AllUsersCurrentHost profile, but this doesn't happen when PowerShell is invoked by Jenkins running as a service. I created the file location it said it was using, copied the default profile there, and it works. I am still not sure why the behavior differs, but at least I found a solution.
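For anyone hitting the same thing, a workaround that avoids relying on the profile being auto-loaded is to dot-source it explicitly at the top of the script Jenkins runs. A minimal sketch, assuming the custom cmdlets live in the AllUsersCurrentHost profile:

    # Dot-source the AllUsersCurrentHost profile so the custom cmdlets are
    # available even when the host did not load any profile (e.g. -NoProfile).
    $allUsersProfile = $PROFILE.AllUsersCurrentHost
    if (Test-Path $allUsersProfile) {
        . $allUsersProfile
    }
    else {
        Write-Warning "Profile not found at $allUsersProfile"
    }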
I have a Jenkins job, and the first thing that runs in it is a PowerShell script. I want it to capture user input values and set them as global variables that are used throughout the Jenkins job.
I want the user to be able to enter these values from their machine and then run the job with them.
How can I do this?
EDIT: In case anybody else finds this answer, please see the comments below. This should not be used for credentials! Even if the communication is secured by TLS, the credentials will still be visible in build logs etc.
You need to check the "This project is parameterized" checkbox in the settings of your job in Jenkins, then define the name, type, etc.
The parameter is then accessible under the given name via the standard syntax.
In a shell script, use ${nameOfParam}, or %nameOfParam% in a Windows batch step (depending on your shell/OS).
In pipelines they are also accessible via params.nameOfParam.
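In a PowerShell build step the parameter is also exposed as an environment variable, so the script can read it from there. A minimal sketch, assuming a string parameter named nameOfParam:

    # Jenkins exposes build parameters to build steps as environment variables.
    $value = $env:nameOfParam
    if ([string]::IsNullOrEmpty($value)) {
        throw "Parameter 'nameOfParam' was not supplied."
    }
    Write-Host "Running with nameOfParam = $value"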
You can set these variables via the GUI using Build with parameters, or via an API call to http://<JENKINS_URL>/job/<JOB_NAME>/buildWithParameters?nameOfParam=foo
See also: https://www.baeldung.com/ops/jenkins-parameterized-builds
The only thing I don't quite get from your question is what exactly you want to do with the PowerShell script. A pipeline script in Jenkins is executed on a node, so once the job starts it should run without any user interaction. To set values from user input as global variables in a PowerShell script, they already have to be available on the Jenkins node, so there is no point in setting them in the PowerShell script: they are already available.
I am working on a TFS CI build pipeline. The build includes execution of functional UI tests (Run Functional Tests) and the required accompanying preparatory test agent deployment step (Deploy Test Agent).
This build executed successfully in the past but spontaneously stopped working recently.
I initially ran into difficulty with the DTA task hanging during execution:
Task 'SetupTestMachineForUiTests' on machine '[testVM]:5985' is taking time. Please Wait
I had encountered this issue with this build task before, albeit intermittently. However, this time the step would not complete no matter how many times it executed. Eventually (~20 minutes), the step crashed out with the following error:
Task 'SetupTestMachineForUiTests' for machine [testVM]:5985's Error :
System.Exception: Stopping test machine setup as it exceeded maximum number of reboots. If you are running test agent in interactive mode, please make sure that autologon is enabled and no legal notice is displayed on logon in test machines.
Unfortunately, the DTA task only writes logs to the usual location on the test VM when the DTAExecutionHost.exe is manually closed on the server after the step has failed. The logs offer no clue as to what the problem might be.
One of the prerequisites for the DTA step to execute successfully is that AutoLogon is enabled on the test VM; I had done this with a simple PowerShell script, executed prior to the DTA task. In order to confirm that the test VM registry values had been correctly assigned (to enable auto logon, disable legal notice, screensaver etc) during my PowerShell script execution, I added a further PowerShell debug script to the build to output each relevant registry value to the build console (all are correctly assigned).
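(For reference, the registry values involved look roughly like the following; this is only a sketch with placeholder credentials, not the exact script.)

    # Sketch: enable AutoLogon and clear the interactive logon legal notice.
    # Domain, user name and password below are placeholders.
    $winlogon = 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon'
    Set-ItemProperty -Path $winlogon -Name AutoAdminLogon    -Value '1'
    Set-ItemProperty -Path $winlogon -Name DefaultDomainName -Value 'MYDOMAIN'
    Set-ItemProperty -Path $winlogon -Name DefaultUserName   -Value 'testuser'
    Set-ItemProperty -Path $winlogon -Name DefaultPassword   -Value '<password>'

    $policies = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System'
    Set-ItemProperty -Path $policies -Name legalnoticecaption -Value ''
    Set-ItemProperty -Path $policies -Name legalnoticetext    -Value ''

    # Debug: dump the values back out so the build console shows what was set.
    Get-ItemProperty -Path $winlogon |
        Select-Object AutoAdminLogon, DefaultDomainName, DefaultUserName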
However, when I tested remote login on the test VM using the test username, the credentials were accepted but the following warning message was shown:
To sign in remotely, you need the right to sign in through Remote Desktop Services. By default members of the Administrators group have this right. If the group you're in does not have the right, or if the right has been removed from the Administrators group, you need to be granted the right manually.
I believe this is the problem. However, the solution has so far eluded me.
I double- and triple-checked; the test user has been added to the Remote Desktop Users group (and also to the Administrators group). I've also confirmed that both the Administrators and Remote Desktop Users groups have been granted the 'Allow log on through Remote Desktop Services' user right.
In testing, I substituted my own username for the test user in the build definition (my username is also in the Remote Desktop Users and Administrators groups on the server, and I can successfully remote onto the box with my own credentials); this build executed successfully.
I also inspected the other, possibly related, user groups:
Srv_SeDenyInteractiveLogonRight (test user is absent)
Srv_SeDenyRemoteInteractiveLogonRight (test user is absent)
Srv_SeInteractiveLogonRight (test user present)
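(For reference, the user-rights assignments themselves can also be dumped with secedit rather than inferred from group names; a sketch:)

    # Export the local security policy and inspect the logon-right assignments.
    # Note: accounts are listed by SID in the exported file.
    secedit /export /cfg "$env:TEMP\secpol.cfg" | Out-Null
    Select-String -Path "$env:TEMP\secpol.cfg" `
        -Pattern 'SeRemoteInteractiveLogonRight|SeDenyInteractiveLogonRight|SeDenyRemoteInteractiveLogonRight'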
I've been fighting with this problem for days; it has become a major headache. I'd be very grateful for any insights that might help find a resolution.
Thanks for looking.
The problem was that the account had been added to the AD domain 'DenyInteractiveLogon' group. Adding the account to the local 'Remote Desktop Users' and/or 'Srv_SeInteractiveLogonRight' groups had no effect.
Removing the user account from the domain group resolved the problem.
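For anyone debugging the same symptom: the domain-side deny membership is easy to miss if you only look at local groups. One way to surface it (a sketch, assuming the RSAT ActiveDirectory module is installed and the account is called testuser):

    # List every domain group the account belongs to, including deny-logon groups.
    Import-Module ActiveDirectory
    Get-ADPrincipalGroupMembership -Identity 'testuser' |
        Sort-Object Name |
        Select-Object Name

    # Or, logged on as the account itself, dump the effective token groups:
    whoami /groups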
I've got a PowerShell script to archive log files. The script is intended to be run daily from a scheduled task as a specified user, 'LogArchiver'.
The script uses the AWS CLI to copy the files to S3 and needs sufficient credentials to access the bucket, which are stored in the user directory C:\Users\LogArchiver\.aws as recommended in the AWS docs.
When I run the script from a PowerShell terminal running as that user, it recognises the credentials and successfully copies files to S3. But when it is run from the scheduled task, it doesn't recognise the AWS credentials, and the transcript written to file shows the message:
Unable to locate credentials. You can configure credentials by running "aws configure".
Does anyone know why this is, and how to fix it? I've read in another post about scheduled tasks doing funny things to environment variables, but I'm not sure if that would cause the problems I'm having.
It turns out it was a bug in Server 2012, which is fixed by this patch:
https://support.microsoft.com/en-gb/kb/3133689
The 'fix' for me was to set the USERPROFILE environment variable at the top of the script with
$env:USERPROFILE = "C:\Users\LogArchiver"
Not elegant, but it works.
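An alternative that avoids touching USERPROFILE is to point the AWS CLI directly at the credentials file through its own environment variables; a sketch using the same path as above (bucket and file names are placeholders):

    # Tell the AWS CLI exactly where its credentials/config live, so it no
    # longer depends on whatever USERPROFILE the scheduled task runs with.
    $env:AWS_SHARED_CREDENTIALS_FILE = 'C:\Users\LogArchiver\.aws\credentials'
    $env:AWS_CONFIG_FILE             = 'C:\Users\LogArchiver\.aws\config'

    aws s3 cp 'C:\Logs\app.log' 's3://my-log-bucket/app.log'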
I'm trying to automate the deployment of the solution my team is working on through the TFS build server. One of the steps, which executes a PowerShell script on the target machine, fails with the following error:
Microsoft ODBC Driver 11 for SQL Server : Login failed for user 'sa'..
The PowerShell script I'm trying to execute does in fact connect to multiple databases using the sa credentials. When I execute the same script by hand, passing it the exact same arguments (i.e. running the script from the target VM itself), it works like a charm. But when it is executed as part of the build steps, it fails with the aforementioned error.
Is there a way to further debug the issue? It would be great if there were a way to output trace statements from the script so I could get some insight into what is actually going on.
Usually all related errors show up in the TFS build log. To narrow down the issue, you can try connecting to the TFS build agent machine with the credentials used by the build service and running the PowerShell script manually.
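To get trace output from the script into a log you can read afterwards, one option is to start a transcript and wrap the work in a try/catch that prints the full error; a sketch (the log path is a placeholder):

    # Capture everything the script writes to a log file on the target machine.
    Start-Transcript -Path 'C:\Temp\deploy-trace.log' -Append
    try {
        Write-Host "Running as: $env:USERDOMAIN\$env:USERNAME on $env:COMPUTERNAME"
        # ... existing deployment/database work goes here ...
    }
    catch {
        # Print the full exception so it also shows up in the TFS build log.
        Write-Error ($_ | Out-String)
        throw
    }
    finally {
        Stop-Transcript
    }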
Executing the PowerShell script with your own account will not help diagnose this. Usually this kind of problem is related to permissions: the build service account lacks the relevant permission. Try adding it to the Administrators group or a SQL administrator role and running the build again.
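To confirm whether it really is a credential or permission issue rather than something else in the script, you can also test the connection in isolation while logged on as the build service account. A minimal sketch with placeholder server name and password (this uses .NET SqlClient rather than the ODBC driver, but it is enough to check whether 'sa' can log in from that machine):

    # Quick connectivity check using the same 'sa' credentials the script uses.
    $connectionString = 'Server=MYDBSERVER;Database=master;User Id=sa;Password=<password>;'
    $connection = New-Object System.Data.SqlClient.SqlConnection $connectionString
    try {
        $connection.Open()
        Write-Host "Login succeeded as 'sa'."
    }
    catch {
        Write-Error ($_ | Out-String)
    }
    finally {
        $connection.Dispose()
    }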