Is there a simpler way of deploying Windows Services from TFS than using a PowerShell script, run on the TFS server, which:
Stops the existing Windows Service on the remote server
Copies the files to a shared folder on the remote server (Copy-Item)
Starts the Windows Service on the remote server
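For reference, the current script amounts to something like this minimal sketch (the server name, service name, and share path are placeholders):

# Minimal sketch of the three steps above; 'RemoteServer', 'MyService'
# and the share path are placeholders for your own values.
$server  = 'RemoteServer'
$service = 'MyService'

# 1. Stop the existing Windows Service on the remote server and wait for it
$svc = Get-Service -ComputerName $server -Name $service
$svc.Stop()
$svc.WaitForStatus('Stopped')

# 2. Copy the new binaries to the shared folder on the remote server
Copy-Item -Path '.\bin\Release\*' -Destination "\\$server\Deploy\MyService" -Recurse -Force

# 3. Start the Windows Service again
$svc.Start()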
If not, can any other continuous integration/deployment tool do this better?
As the TFS server uses a different domain controller than the remote server, can we share a folder for a specific user? I tried to run the PowerShell script as a user from the target domain, but of course that user is not recognized as valid on the TFS server.
Finally, is there any difference between deploying to a hosted remote server and deploying to the cloud?
Thanks,
In a task-based build system (TFS 2015+), you can try installing Windows Service Release Tasks, an extension that contains tasks to start and stop Windows services as well as change their startup type.
One of the requirements is to keep the remote Windows Server intact.
No third-party software is allowed (no WinSCP, etc.).
So we configured the Windows Server with WinRM and allowed remote access (AllowUnencrypted=true, Basic auth=true, etc.).
Then we created a job and successfully executed a command such as "ipconfig" on the Windows server.
When it comes to executing an inline script or copying a file, Rundeck tries to copy the script/file to the remote Windows server.
By default:
plugin.script-copy.default.command=get-services
where "get-services" seems to be free-form text rather than executable.
If we want to use SCP or SSH instead, we have a problem: Windows Server doesn't have WinSCP, SSH, or Python installed by default.
Is there any way to copy/deliver a script to the target/remote Windows Server 2008 using built-in capabilities only (no third-party software allowed)?
Versions:
Rundeck 2.6.2 running on Linux
Windows Server 2008 R2 Enterprise, Service Pack 1
Thank you.
You can use the WinRM plugin (AKA "Overthere WinRM"): configure it and use the copy file step in your job workflow (keep in mind that you need at least version 1.3.4 of the WinRM plugin, which supports file copy).
You need to download the plugin and put it in the Rundeck libext directory.
Add the Windows node entry to resources.xml (for the "Overthere" WinRM plugin):
<node name="windows" description="Windows node" tags="" hostname="192.168.1.81" osArch="x86" osFamily="windows" osName="Windows 2008R2" osVersion="2008" username="user" winrm-protocol="http" winrm-auth-type="basic" winrm-cmd="CMD" winrm-password-storage-path="keys/winpasswd"/>
Set WinRM as your default node executor and default node file copier, then use the copy file step in your workflow.
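In the project's project.properties that means something like the following (the provider name "overthere-winrm" is my assumption here; check the plugin's README for the exact identifier):

service.NodeExecutor.default.provider=overthere-winrm
service.FileCopier.default.provider=overthere-winrm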
Important: the WinRM plugin is no longer in active development (and the Rundeck 2.6 branch is out of support/maintenance). The best way forward is to move to the latest Rundeck version and use the PyWinRM plugin (shipped out of the box with Rundeck, actively developed, and easier to configure than the old "Overthere" WinRM plugin), then use the copy step in the same way.
We are using a CI/CD pipeline in OneITVSO. Earlier we had an internally created agent pool; now we are asked to use "Hosted VS 2017". We have a database solution, an ETL solution, and a Tabular Model solution that need to be deployed, plus certain scope scripts.
We are able to build the solutions using "Hosted VS 2017", but we are not able to deploy with it. In the release pipeline we have a "Windows Machine File Copy" task, which copies artifacts (dacpac/ispac/.sql files) from the build server to the dev/UAT servers.
Using the earlier agent pool this pipeline deployed successfully, but with "Hosted VS 2017" we get the error below:
Failed to connect to the path \\AZDEVSERVERSQL01 with the user domain\servicecredentialdwd for copying. System error 53 has occurred.
1) Can "Hosted VS 2017" be used for task like "Windows Machine File Copy" (We are using Microsoft Azure Virtual Machine(Iaas) )
2) If we can use "Hosted VS 2017" even for Iaas Azure machines, are we missing any credential access. Should we give any access to domain\servicecredentialdwd for the agent pool "Hosted VS 2017". If so what permissions has to be given and how.
NOTE: Same pipeline gets deployed when "private" agent is used. gets failed when "Hosted VS 2017" is used.
If your IaaS server has a public IP configured, then yes. If not, then no. The build agent has to be able to establish a network route to your virtual machine. If the VM is isolated in a private network, then the build server can't send traffic to it.
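As a quick check, you can test from any machine that can run PowerShell whether the file-share port on the target is reachable (System error 53 means "network path not found"; the host name below is taken from your error message):

# SMB file copy uses TCP 445; a failure here confirms a routing/firewall problem.
Test-NetConnection -ComputerName AZDEVSERVERSQL01 -Port 445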
I have servers on Azure and I am using OMS to patch them, which is working fine. However, there are many machines that are not in Azure, such as laptops. Is it possible to patch the non-Azure VMs from OMS?
Could you please help?
Is it possible to patch the Non-Azure VMs from OMS?
Yes, it is possible. You need to install the OMS agent on your local machines.
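For a Windows machine, that means downloading the Microsoft Monitoring Agent installer and supplying the workspace ID and key from the OMS portal. A hedged sketch of the documented silent install (the placeholders are yours to fill in):

REM Extract the installer, then run a silent install wired to your workspace.
MMASetup-AMD64.exe /c /t:C:\MMA
C:\MMA\setup.exe /qn ADD_OPINSIGHTS_WORKSPACE=1 OPINSIGHTS_WORKSPACE_ID="<workspace-id>" OPINSIGHTS_WORKSPACE_KEY="<workspace-key>" AcceptEndUserLicenseAgreement=1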
However, if you want to use Update Management, there are some prerequisites that need to be satisfied. You can refer to this link:
1. The solution supports performing update assessments against Windows Server 2008 and higher, and update deployments against Windows Server 2012 and higher. Server Core and Nano Server installation options are not supported.
2. Windows client operating systems are not supported.
3. Windows agents must either be configured to communicate with a Windows Server Update Services (WSUS) server or have access to Microsoft Update.
More information about connecting Windows computers to OMS can be found at this link. Note that, given item 2 above, Windows client machines such as laptops fall outside Update Management's support even once the agent is installed.
I have followed this guide to install a Jenkins slave on Windows 8 as a service:
https://wiki.jenkins-ci.org/display/JENKINS/Installing+Jenkins+as+a+Windows+service#InstallingJenkinsasaWindowsservice-InstallSlaveasaWindowsservice%28require.NET2.0framework%29
I need to run a job that interacts with the desktop (it runs an application that opens a browser, etc.). So after installing the slave as a service (running the JNLP downloaded from the master), I changed the service "Log on" setting to "Allow service to interact with desktop".
For some reason it is only possible to enable this for the "Local System account", even though it is recommended to run the service as a specific user, e.g. jenkins.
But nothing happens when I execute the job: the browser is not opened. If I instead stop the service and just launch the slave through the JNLP file, the job runs fine and the browser is opened.
Has anybody had any luck interacting with the desktop when running a Jenkins Windows slave as a service?
Since Vista, services run in Session 0 while the first logged-on user is in Session 1, so a service can no longer interact with the desktop. This is called Session 0 Isolation.
Microsoft explains this here and here. You have to use a second program that communicates with the service via IPC.
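A minimal sketch of that pattern in PowerShell, assuming a small helper app that runs in the logged-on user's session (all names here are illustrative, not part of Jenkins):

# Service side: host a named pipe and send a command to the helper.
$server = New-Object System.IO.Pipes.NamedPipeServerStream('JenkinsDesktopPipe')
$server.WaitForConnection()
$writer = New-Object System.IO.StreamWriter($server)
$writer.WriteLine('open-browser')
$writer.Flush()

# Helper side (runs in the interactive session): receive and do the desktop work.
$client = New-Object System.IO.Pipes.NamedPipeClientStream('.', 'JenkinsDesktopPipe')
$client.Connect()
$reader = New-Object System.IO.StreamReader($client)
if ($reader.ReadLine() -eq 'open-browser') { Start-Process 'http://example.com' }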
I had lots of issues running Jenkins on Windows using the service.
Instead I now disable the service and run it from CMD.
So open CMD:
cd C:\Program Files (x86)\Jenkins
java -Xrs -Xmx256m -Dhudson.lifecycle=hudson.lifecycle.WindowsServiceLifecycle -jar jenkins.war --httpPort=9091
To resolve it, first configure Windows auto-logon as I explain here:
https://serverfault.com/questions/269832/windows-server-2008-automatic-user-logon-on-power-on/606130#606130
Then create a startup batch file for the Jenkins agent (place it in the Jenkins directory). This will launch the agent console on the desktop and should allow Jenkins to interact with the Windows GUI:
java -jar slave.jar -jnlpUrl http://{Your Jenkins Server}:8080/computer/{Your Jenkins Node}/slave-agent.jnlp
(slave.jar can be downloaded from http://{Your Jenkins Server}:8080/jnlpJars/slave.jar)
EDIT:
If you're getting black screenshots (when using Selenium or Sikuli, for example), create a batch file that disconnects Remote Desktop instead of closing the RDP session with the regular X button:
%windir%\system32\tscon.exe %SESSIONNAME% /dest:console
Consider running the Java slave server directly at startup and then using something to monitor and restart it should the server go down (e.g., Kiwi Restarter).
Please check the services on the test node and make sure the "Interactive Services Detection" service is started. By default its startup type is set to Manual; you may want to set it to Automatic as well.
After the service has started, when you run your test on the test node, an Interactive Services Detection prompt will appear.
Click on it and choose to view the message.
You will see the activity happening there. Hope this helps :D
Note: if you log in with another account and cannot see the Interactive Services Detection prompt, restart the service again.
My Jenkins service runs as user "jenkins", and all I did was create Desktop folders in C:\Windows\System32\config\systemprofile\Desktop and, on 64-bit Windows, also in C:\Windows\SysWOW64\config\systemprofile\Desktop; then it runs perfectly.
Make sure that the Desktop folders are created as such:
%WINDIR%\System32\config\systemprofile\Desktop
%WINDIR%\SysWOW64\config\systemprofile\Desktop
The presence of those folders can sometimes be mandatory when running some Java software as a service.
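If you want to script it, something like this should do (same paths as above):

# Create the system-profile Desktop folders; -Force makes this idempotent.
New-Item -ItemType Directory -Force -Path "$env:windir\System32\config\systemprofile\Desktop"
New-Item -ItemType Directory -Force -Path "$env:windir\SysWOW64\config\systemprofile\Desktop"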
I am working on an automated deployment process for a web application. The deployment will need to:
Deploy DB changes to the database using sqlpackage.exe
Deploy Reporting Services reports to the report server using the web service
Deploy the web app to the web server(s)
Deploy fonts for the reports
among other things
The first two are reasonably straightforward to run from the web server, as the web service and DB are contactable and the deployment tools work over the network.
From reading around, it appears that PowerShell remoting should be the way to go, and internally this would not be a problem. However, production deployments are carried out in a datacentre where the machines (2 web, 1 DB) are not on a domain at all. I'd like to come up with a generic process that can run both internally and externally with the appropriate configuration. PowerShell remoting with machines not in a domain appears to require a fair bit of configuration (HTTPS listeners etc.), as NT credentials can't be validated.
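For reference, the minimal client-side setup for non-domain remoting looks roughly like this (the host name is a placeholder; a hardened setup would use an HTTPS listener with a certificate instead of TrustedHosts):

# Trust the remote host on the client, then connect with explicit credentials.
Set-Item WSMan:\localhost\Client\TrustedHosts -Value 'WEB01' -Force
Invoke-Command -ComputerName WEB01 -Credential (Get-Credential) -ScriptBlock { hostname }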
Should I battle through configuring PowerShell remoting, or would the best way be to just use psexec to execute a PowerShell script directly on the remote machine, after copying the deployment artifacts to a drop folder there?
psexec seems to "just work"; PowerShell remoting appears to come with a lot more pain.
Why not use psexec then? You can restrict its role to just getting you onto the remote machine, and not let it infect your scripts. I have not attempted to get PS remoting working outside a domain, but in general I have found it to be fairly high effort to get going. psexec, as you say, can often be simpler.
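A hedged sketch of that approach (the host, share, account, and script paths are all placeholders):

# Copy the deployment artifacts to a drop folder on the remote machine...
Copy-Item -Path '.\artifacts\*' -Destination '\\WEB01\Drop\MyApp' -Recurse -Force
# ...then use PsExec only to launch the deployment script on that machine.
psexec \\WEB01 -u WEB01\deploy -p <password> powershell.exe -ExecutionPolicy Bypass -File C:\Drop\MyApp\deploy.ps1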
Excuse the peddling, but the open source framework I helped build, called PowerUp, essentially does all this for you. It uses a model in which the PowerShell (well, psake) scripts can move execution to another machine by calling a specific function. This can be done with either PowerShell remoting or psexec; you wouldn't need to change the script, it just requires a per-environment setting to say which you would like to use.
Check out the sample at https://github.com/AffinityID/PowerUpSamples/tree/master/SimpleWebsite.
Hopefully that shows you enough, but if not, let me know and we can go into more detail.