Can I control which Windows Services are started on the VM where my Azure instance runs?

Recently I tried to enumerate the Windows Services on the VM where my Azure web role instance runs using ServiceController.GetServices(). There are a lot of them, including Telephony and CloudDrive, which I don't need, so having them started is a waste of resources.
Is it possible to have them not started?

Yes, but you'll need a startup task to do this. Here is what you'll do to stop and disable the Telephony service:
sc.exe stop TapiSrv
sc.exe config TapiSrv start= disabled
As you can see, I'm not using the display name (Telephony) but the service name (TapiSrv). If you want to get a list of service names for your system, you can simply execute this command (in Azure you can do this via RDP):
sc.exe query
Executing this command will also give you the state of each service (running, stopped, ...).
Note: When calling sc.exe config, you need to put a space after the equals sign.
Note: Stopping services can take some time, so I suggest you use a background task to stop/disable the services, in order to keep the startup time of your instance to a minimum.
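For completeness, here is a minimal sketch of how such a background startup task might be registered in ServiceDefinition.csdef (the role and script names are placeholders; DisableServices.cmd would contain the two sc.exe lines above followed by exit /b 0):
<WebRole name="MyWebRole">
  <Startup>
    <!-- taskType="background" lets the instance start without waiting for the slow service stop/disable commands to finish -->
    <Task commandLine="DisableServices.cmd" executionContext="elevated" taskType="background" />
  </Startup>
</WebRole>
The executionContext="elevated" part matters: stopping and reconfiguring services requires administrative rights.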

Related

Using SC.exe post installUtil

I am trying to understand the correct way to uninstall a service.
According to Microsoft here, there are two ways to uninstall a service:
# Using installutil
installutil -u <yourproject>.exe
# Or using PowerShell
Remove-Service -Name "YourServiceName"
In both examples, Microsoft states the following:
After the executable for a service is deleted, the service might still be present in the registry. If that's the case, use the command sc delete to remove the entry for the service from the registry.
sc.exe delete "YourServiceName"
Assuming most people will run a script to uninstall a given service using one of the above methods, how can we determine (from a batch or PowerShell script) whether the uninstall worked properly, so that sc.exe is only run conditionally? In other words, how can we check whether a service is still in the registry, as per the Microsoft quote?
Also, I have so far been using sc.exe delete without uninstalling first, and it seems to have the desired effect: my service gets removed from the list of Windows services and I am able to start with a fresh new installation of my service.
Is this a bad approach? If so, why?
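(For what it's worth, one way to do that check is via the registry: a service that is still registered has a key under HKLM:\SYSTEM\CurrentControlSet\Services. A PowerShell sketch, with the service name as a placeholder:)
# Only call sc.exe delete if the service is still registered
if (Test-Path 'HKLM:\SYSTEM\CurrentControlSet\Services\YourServiceName') {
    sc.exe delete "YourServiceName"
}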

Unable to start service in Windows 10 by using NSSM

I have created a small script file to test with.
This is my script.bat file:
sc create myService binpath= C:\Users\Admin\Desktop\test.bat start= auto
This is my test.bat file.
echo "Welcome to Wizard"
Problem Statement
I am unable to start the service from the Services section of Control Panel.
I get the following error:
[SC] StartService FAILED 1053:
The service did not respond to the start or control request in a timely fashion.
That is why I am using NSSM.
Now what happens is that when I run the following command in PowerShell:
.\nssm install myService
a dialogue box appears. I give it the path of my script file and click on Install service.
After successful installation of the service, I go to Control Panel -> Services -> click Start against myService, but it gets paused and the following dialog box appears.
How can I fix this?
Is there any other way to do it without manual steps and without using a third-party tool?
I am doing all this on Windows 10. Do I need any server to perform this task?
NOTE: I cannot use AlwaysUp or the Windows Task Scheduler in my case.
The NSSM behaviour is caused by the script terminating almost instantly. Try the following script:
echo Hello World
pause
This should allow the service to start, but you will not necessarily see a console window. Even if you tick 'allow service to interact with desktop', it will not be your desktop that it interacts with!
Windows implements 'session zero isolation' as a security feature, and this essentially prevents services interacting with end user desktops.
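If you just need the script itself to keep running under NSSM, a deliberately long-running loop also works. A minimal PowerShell sketch (the file name and log path are hypothetical; you would point NSSM at powershell.exe with arguments -ExecutionPolicy Bypass -File C:\scripts\run-forever.ps1):
# run-forever.ps1 - loops forever so the service process never exits on its own
while ($true) {
    Add-Content -Path 'C:\scripts\service.log' -Value "Heartbeat at $(Get-Date)"
    Start-Sleep -Seconds 60
}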
In terms of a solution, it's possible to write Windows 'service' applications fairly simply using Visual Studio. It's outside my area of expertise, but based on the Windows applications I'm familiar with, you would generally have a user-mode application running to provide desktop interaction. The user-mode application can interact with services hosted by the service application.
Probably this is resolved by now, but in case it helps anyone: what saved the day for me was re-checking my input in the Arguments field in NSSM. I had an extra "-" which caused the error. To edit my service, I used nssm edit <servicename>.
I would also add the fix that worked for me: I put quotes ("") around the argument path, and that solved the issue.

Running additional service in Safe Mode on Windows 8.1

The problem is that I need to install a program that starts an additional service in the middle of installation. When trying to start the service manually, it returns error 1084: cannot start the service in Safe Mode.
What should I add to registry in order to run this service?
To start services in Safe Mode you have to run regedit.exe and go to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SafeBoot.
Windows has two types of Safe Mode: Minimal (plain Safe Mode) and Network (Safe Mode with Networking).
Under the subkey for your type, create a new key named after the short (service) name of your service or driver and set its default string value to Service. Here you can also whitelist other Windows services/drivers that you want to be loaded in Safe Mode.
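Scripted, the same change might look like this in PowerShell (MyService is a placeholder; use the Minimal subkey instead of Network for plain Safe Mode):
# Whitelist MyService for Safe Mode with Networking
$key = 'HKLM:\SYSTEM\CurrentControlSet\Control\SafeBoot\Network\MyService'
New-Item -Path $key -Force | Out-Null
Set-ItemProperty -Path $key -Name '(default)' -Value 'Service'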

How do I deploy service fabric application from VSTS release pipeline?

I have configured a CI build for a Service Fabric application, in Visual Studio Team Services, according to this documentation: https://azure.microsoft.com/en-us/documentation/articles/service-fabric-set-up-continuous-integration
But instead of having my CI build do the publishing, I only perform the Build and Package tasks, and include all Service Fabric related output (pkg folder, scripts, publish profiles and application parameters) in the drop. This way I can pass it along to the new Release pipeline (agent-based releases) to do the actual deployment of my Service Fabric application.
In my release definition I have a single Azure Powershell task, that uses an ARM endpoint (with proper service principals configured).
When I deploy my app to an existing service fabric cluster, I use the default Deploy-FabricApplication cmdlet passing along the pkg folder and a publish profile that is configured with a connection to the existing cluster.
The release fails with the error message "Cluster connection instance is null", and I cannot understand why.
Doing some debugging I have found that:
The Deploy-FabricApplication cmdlet executes the Connect-ServiceFabricCluster cmdlet just fine, but as soon as the Publish-NewServiceFabricApplication cmdlet takes over execution, the cluster connection is lost.
I would expect this scenario to be possible using the Service Fabric cmdlets, but I cannot figure out how to keep the cluster connection open during deployment.
UPDATE: The link to the documentation no longer refers to the Service Fabric PowerShell scripts, so the precondition for this question is no longer documented. The article now refers to the VSTS build and release tasks, which are preferable to the PowerShell cmdlets I tried to use.
When Connect-ServiceFabricCluster is called (from Deploy-FabricApplication.ps1), a local $clusterConnection variable is set after the call. You can see that using Get-Variable.
Unfortunately there is logic in some of the SDK scripts that expect that variable to be set but because they run in a different scope, that local variable isn't available.
It works in Visual Studio because the Deploy-FabricApplication.ps1 script is called using dot source notation, which puts the $clusterConnection variable in the current scope.
I'm not sure if there is a way to use dot sourcing when running a script through the release pipeline, but as a workaround you could make the $clusterConnection variable global right after it's been set via the Connect-ServiceFabricCluster call. Edit your Deploy-FabricApplication.ps1 script and add the following line after the connection logic (~line 169):
$global:clusterConnection = $clusterConnection
By the way, you might want to consider setting up custom build/release tasks that deploy a Service Fabric application, rather than using the various Deploy-FabricApplication.ps1 scripts.
There now exists a built-in VSTS task for deploying a Service Fabric app, so you no longer need to execute the PowerShell script on your own. The task documentation page is at https://www.visualstudio.com/docs/build/steps/deploy/service-fabric-deploy. The original CI article has also been updated and provides details on how to set everything up: https://azure.microsoft.com/en-us/documentation/articles/service-fabric-set-up-continuous-integration/.
Try using the "PowerShell" task instead of the "Azure PowerShell" task.
I hit the same bug today and opened a GitHub issue here.
On a side note, the VS-generated script Deploy-FabricApplication.ps1 uses the module
"$((Get-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Service Fabric SDK" -Name "FabricSDKPSModulePath").FabricSDKPSModulePath)\ServiceFabricSDK.psm1"
That's where Publish-NewServiceFabricApplication comes from. You can check the deployment logic and rewrite it in a saner way using the lower-level Service Fabric SDK cmdlets (potentially getting the connection via Get-ServiceFabricClusterConnection instead of making it global).
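As a rough sketch, that lower-level sequence could look like the following (the endpoint, package path and application names are placeholders, not taken from the question):
# Connect, upload the package, register the type and create the application
Connect-ServiceFabricCluster -ConnectionEndpoint 'mycluster.westus.cloudapp.azure.com:19000'
Copy-ServiceFabricApplicationPackage -ApplicationPackagePath '.\pkg' -ImageStoreConnectionString 'fabric:ImageStore' -ApplicationPackagePathInImageStore 'MyApp'
Register-ServiceFabricApplicationType -ApplicationPathInImageStore 'MyApp'
New-ServiceFabricApplication -ApplicationName 'fabric:/MyApp' -ApplicationTypeName 'MyAppType' -ApplicationTypeVersion '1.0.0'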

Deployment not in a domain - psexec.exe or powershell remoting

I am working on an automated deployment process for a web application. The deployment will need to:
Deploy DB changes to database using sqlpackage.exe
Deploy reporting services reports to the reports server using the web service
Deploy web app to web server(s)
Deploy fonts for reports
among other things
The first two are reasonably straightforward to run from the web server, as the web service and the database are contactable and the deployment tools operate over the network.
From reading, it appears that PowerShell remoting should be the way to go, and internally this would not be a problem. However, the production deployment will be carried out in a datacentre where the machines (2 web, 1 DB) are not on a domain at all. I'd like to come up with a generic process that can run both internally and externally with the appropriate configuration. PowerShell remoting with machines not in a domain appears to require a fair bit of configuration (HTTPS, etc.), as NT credentials can't be validated.
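(For reference, the minimal non-domain remoting setup being described looks something like this on the deploying machine; the host name is hypothetical, and this TrustedHosts shortcut skips server authentication, so proper HTTPS listeners on the targets are extra work on top:)
# Trust the remote machine, then run a command on it with explicit credentials
Set-Item WSMan:\localhost\Client\TrustedHosts -Value 'prod-web-01' -Concatenate
Invoke-Command -ComputerName 'prod-web-01' -Credential (Get-Credential) -ScriptBlock { hostname }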
Should I battle through configuring PowerShell remoting, or would it be better to have psexec execute a PowerShell script directly on the remote machine, after copying the deployment artifacts to a drop folder on that machine?
psexec seems to "just work", whereas PowerShell remoting appears to come with a lot more pain.
Why not use psexec then? You can restrict its role to just getting you onto the remote machine, and not let it infect your scripts. I have not attempted to get PS remoting working on non-domain machines, but in general I have found it to be fairly high effort to get going. psexec, as you say, can often be simpler.
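For example, once the artifacts are in a drop folder on the target, kicking off the deployment can be as simple as this (machine name, credentials and paths are illustrative):
psexec \\webserver01 -u deploy -p <password> powershell.exe -ExecutionPolicy Bypass -File C:\drop\Deploy.ps1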
Excuse the peddling, but the open source framework I helped build, called PowerUp, essentially does all this for you. It uses a model in which the PowerShell (well, psake) scripts can move execution to another machine by calling a specific function. This can be done either with PowerShell remoting or with psexec - you wouldn't need to change the script; it just requires a per-environment setting to say which you would like to use.
Check out the sample at https://github.com/AffinityID/PowerUpSamples/tree/master/SimpleWebsite.
Hopefully that shows you enough, but if not let me know and we can go into more detail.