There is a Java-based server component responsible for remote management of Amazon virtual machines. I need to write an Azure adapter for this component.
I thought I would be better off using the Node.js-based command-line utilities for Azure management.
I want to know how to invoke scripts from C# or Java and then process the output, so that I can pass it back to the calling server component.
For example, an instruction to create a new VM should return the instance ID to the calling method.
Basically I would need to script the logic into the adapter methods.
Any directions will be of great help.
-Sharath
Depending on the technology you choose, you have a few options:
Using the System.Management.Automation assembly, you can call any PowerShell script from a C#/.NET application.
In Java you can call a batch file that runs a PowerShell script (where you would invoke the Azure cmdlets; see the sketch below). There's an interesting discussion on the MSDN forum.
And why not use the Service Management API? This is a REST API, which makes it callable from .NET, Java, Node.js, and more.
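To make option 2 concrete, here is a rough sketch of the PowerShell script the Java side would invoke: it creates a VM and writes only the instance name to standard output so the calling process can capture it. The cmdlet and parameter names (New-AzureQuickVM, Get-AzureVM) are from the classic Service Management module and may vary by version; all values are placeholders.

param(
    [Parameter(Mandatory=$true)] [string]$ServiceName,
    [Parameter(Mandatory=$true)] [string]$VmName,
    [Parameter(Mandatory=$true)] [string]$ImageName,
    [Parameter(Mandatory=$true)] [string]$Password
)

# Assumes the subscription has already been configured, e.g. via
# Import-AzurePublishSettingsFile.
New-AzureQuickVM -Windows -ServiceName $ServiceName -Name $VmName `
    -ImageName $ImageName -Password $Password -Location "West US" | Out-Null

# Emit only the identifier, so parsing stdout on the Java/C# side is trivial.
(Get-AzureVM -ServiceName $ServiceName -Name $VmName).InstanceName

On the Java side you would launch this with ProcessBuilder (or System.Diagnostics.Process in C#), read standard output, and treat the last line as the instance ID.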
I have configured a CI build for a Service Fabric application in Visual Studio Team Services, according to this documentation: https://azure.microsoft.com/en-us/documentation/articles/service-fabric-set-up-continuous-integration
But instead of having my CI build do the publishing, I only perform the Build and Package tasks and include all Service Fabric-related output, such as the pkg folder, scripts, publish profiles, and application parameters, in the drop. This way I can pass it along to the new Release pipeline (agent-based releases) to do the actual deployment of my Service Fabric application.
In my release definition I have a single Azure Powershell task, that uses an ARM endpoint (with proper service principals configured).
When I deploy my app to an existing Service Fabric cluster, I use the default Deploy-FabricApplication script, passing along the pkg folder and a publish profile that is configured with a connection to the existing cluster.
The release fails with the error message "Cluster connection instance is null", and I cannot understand why.
Doing some debugging I have found that:
The Deploy-FabricApplication cmdlet executes the Connect-ServiceFabricCluster cmdlet just fine, but as soon as the Publish-NewServiceFabricApplication cmdlet takes over execution, then the cluster connection is lost.
I would expect that this scenario is possible using the Service Fabric cmdlets, but I cannot figure out how to keep the cluster connection open during deployment.
UPDATE: The link to the documentation no longer refers to the Service Fabric PowerShell scripts, so the precondition for this question is no longer documented. The article now refers to the VSTS build and release tasks, which may be preferred over the PowerShell cmdlets I tried to use.
When the Connect-ServiceFabricCluster function is called (from Deploy-FabricApplication.ps1), a local $clusterConnection variable is set after the call. You can see this using Get-Variable.
Unfortunately there is logic in some of the SDK scripts that expects that variable to be set, but because they run in a different scope, that local variable isn't available to them.
It works in Visual Studio because the Deploy-FabricApplication.ps1 script is called using dot source notation, which puts the $clusterConnection variable in the current scope.
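To illustrate the difference (the script path is illustrative):

# Invoked normally (parameters omitted for brevity), the script runs in its
# own scope and $clusterConnection disappears when it returns:
.\Deploy-FabricApplication.ps1
# Dot sourced, the script runs in the caller's scope, so $clusterConnection
# stays visible to the SDK scripts that expect it:
. .\Deploy-FabricApplication.ps1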
I'm not sure if there is a way to use dot sourcing when running a script through the release pipeline, but as a workaround you could make the $clusterConnection variable global right after it's been set via the Connect-ServiceFabricCluster call. Edit your Deploy-FabricApplication.ps1 script and add the following line after the connection logic (~line 169):
$global:clusterConnection = $clusterConnection
By the way, you might want to consider setting up custom build/release tasks that deploy a Service Fabric application, rather than using the various Deploy-FabricApplication.ps1 scripts.
There is now a built-in VSTS task for deploying a Service Fabric app, so you no longer need to execute the PowerShell script on your own. The task documentation page is at https://www.visualstudio.com/docs/build/steps/deploy/service-fabric-deploy. The original CI article has also been updated with details on how to set everything up: https://azure.microsoft.com/en-us/documentation/articles/service-fabric-set-up-continuous-integration/.
Try using the "PowerShell" task instead of the "Azure PowerShell" task.
I hit the same bug today and opened a GitHub issue here.
On a side note, the VS-generated script Deploy-FabricApplication.ps1 uses the module
"$((Get-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Service Fabric SDK" -Name "FabricSDKPSModulePath").FabricSDKPSModulePath)\ServiceFabricSDK.psm1"
That's where Publish-NewServiceFabricApplication comes from. You can check the deployment logic and rewrite it in a saner way using the lower-level Service Fabric SDK cmdlets (potentially getting the connection via Get-ServiceFabricClusterConnection instead of making it global); a rough sketch follows.
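Something along these lines (untested; the endpoint and paths are placeholders, and the Publish-NewServiceFabricApplication parameters are as I remember them):

# Import the SDK module the same way Deploy-FabricApplication.ps1 does.
$sdkModulePath = (Get-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Service Fabric SDK" `
    -Name "FabricSDKPSModulePath").FabricSDKPSModulePath
Import-Module "$sdkModulePath\ServiceFabricSDK.psm1"

Connect-ServiceFabricCluster -ConnectionEndpoint "mycluster.westus.cloudapp.azure.com:19000"
# Re-acquire the connection in the current scope instead of relying on the
# script-scoped $clusterConnection set inside Deploy-FabricApplication.ps1.
$clusterConnection = Get-ServiceFabricClusterConnection

Publish-NewServiceFabricApplication -ApplicationPackagePath ".\pkg\Release" `
    -ApplicationParameterFilePath ".\ApplicationParameters\Cloud.xml" -Action RegisterAndCreate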
I'm running my deployments with the Release Management tool (currently in preview) in VSO.
When you configure a new release (with the new Release Management tool in VSO), you can add to the flow a task named "Azure PowerShell" (Run a PowerShell script within an Azure environment).
What I'm trying to do is make some changes to the web.config using Get-WebApplication and then Set-WebConfigurationProperty.
The errors I get from the log are:
Process should have elevated status to access IIS configuration data.
##[error]Cannot find a provider with the name 'WebAdministration'.
Is it even possible to run those kinds of commands there, or do I need to use another kind of command to update my web.config?
There is no Azure API to make arbitrary transforms to your web.config.
Instead, the way this is typically done is to use the deployment-time transform engine (e.g. via Web.Debug.config or chained config transforms).
If you're trying to set the web.config of an Azure Web App, then you need to use the Set-AzureWebsite cmdlet or the Set-AzureRMWebApp cmdlet.
Which one you need depends on which Azure cmdlets are installed on the machine running the script. The hosted servers for RM may still have the 0.9.x cmdlets (which use Set-AzureWebsite); the Set-AzureRMWebApp cmdlet is in the 1.x cmdlets. Either will work to set the config; you just need to use the appropriate cmdlet for what's installed.
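For example, app settings can be set like this with either module (a minimal sketch; the resource group and site names are placeholders):

# 1.x (ARM) cmdlets:
Set-AzureRMWebApp -ResourceGroupName "my-rg" -Name "my-webapp" `
    -AppSettings @{ "Setting1" = "Value1"; "Setting2" = "Value2" }

# 0.9.x cmdlets:
Set-AzureWebsite -Name "my-webapp" -AppSettings @{ "Setting1" = "Value1" }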
I am working on an automated deployment process for a web application. The deployment will need to:
Deploy DB changes to database using sqlpackage.exe
Deploy reporting services reports to the reports server using the web service
Deploy web app to web server(s)
Deploy fonts for reports
among other things
The first two are reasonably straightforward to run from the web server, as the web service and db are contactable, and the tools to deploy run over the network.
From my reading it appears that PowerShell remoting should be the way to go, and internally this would not be a problem. However, the production deployment will be carried out in a datacentre, where the machines (2 web, 1 DB) are not on a domain at all. I'd like to come up with a generic process that can run both internally and externally with the appropriate configuration. PowerShell remoting with machines not in a domain appears to require a fair bit of configuration using HTTPS etc., as NT credentials can't be validated.
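For reference, my rough understanding is that the non-domain setup involves something like the following (plus issuing a certificate for the HTTPS listener on each target), which is the configuration overhead I'd like to avoid:

# On each target machine: create an HTTPS WinRM listener (needs a certificate).
winrm quickconfig -transport:https
# On the deploying machine: trust the targets explicitly and authenticate
# with local credentials, since NT/Kerberos validation isn't available.
Set-Item WSMan:\localhost\Client\TrustedHosts -Value "webserver01,webserver02,dbserver01" -Force
Invoke-Command -ComputerName webserver01 -UseSSL `
    -Credential (Get-Credential webserver01\deploy) -ScriptBlock { hostname }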
Should I battle through configuring PowerShell remoting, or would the best way be to use psexec to execute a PowerShell script directly on the remote machine, after copying the deployment artifacts to a drop folder on that machine?
psexec seems to "just work", while PowerShell remoting appears to come with a lot more pain.
Why not use psexec then? You can restrict its role to just getting you onto the remote machine, and not let it infect your scripts. I have not attempted to get PS remoting working in a non-domain setup, but in general I have found it to be fairly high effort to get going. psexec, as you say, can often be simpler.
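A minimal sketch of that approach (the machine name, share, and credentials are placeholders):

# Copy the deployment artifacts to a drop folder on the remote machine.
Copy-Item -Path .\artifacts\* -Destination \\webserver01\drop -Recurse -Force

# Run the deployment script remotely under a local account; psexec handles
# the remote execution, and the script itself stays plain PowerShell.
& psexec \\webserver01 -u webserver01\deploy -p $password `
    powershell.exe -ExecutionPolicy Bypass -File C:\drop\Deploy-WebApp.ps1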
Excuse the peddling, but the open-source framework I helped build, called PowerUp, essentially does all this for you. It uses a model in which the PowerShell (well, psake) scripts can move execution to another machine by calling a specific function. This can be done either with PowerShell remoting or with psexec; you wouldn't need to change the script, it just requires a setting per environment to say which you would like to use.
Check out the sample at https://github.com/AffinityID/PowerUpSamples/tree/master/SimpleWebsite.
Hopefully that shows you enough, but if not let me know and we can go into more detail.
Using Powershell, how can I find out if my server has NUMA enabled and how many CPUs are assigned to each NUMA node?
Update:
I found out here that the Microsoft.SqlServer.Management.Smo.Server object has an AffinityInfo field. However, that field doesn't exist on my server object in PowerShell when I create it (SQL Server 2005 on Windows XP).
Update:
It appears that the AffinityInfo field only exists in SQL Server 2008 R2 and later.
There are APIs available that will get you this information, but they are unmanaged, which means they are not easily callable from PowerShell (.NET). To call them directly, you have to use the Add-Type cmdlet to compile C# code into an in-memory assembly, which you would then instantiate or invoke a static method from. I have an example of what this looks like on my blog.
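To give a feel for the pattern, here's a minimal sketch that P/Invokes a simpler NUMA API (GetNumaHighestNodeNumber) via Add-Type; mapping CPUs to nodes would use GetLogicalProcessorInformation as in the linked example:

Add-Type -Namespace Native -Name NumaApi -MemberDefinition @"
[DllImport("kernel32.dll", SetLastError = true)]
public static extern bool GetNumaHighestNodeNumber(out uint HighestNodeNumber);
"@

$highest = [uint32]0
if ([Native.NumaApi]::GetNumaHighestNodeNumber([ref]$highest)) {
    # Node numbers are zero-based, so the count is highest + 1.
    "NUMA nodes: $($highest + 1)"
}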
Writing the C# is the tricky part, because there is a lot of unfriendly-looking code associated with it; check out this example. If you are familiar with C#, you might be able to adapt it to what you want. If not, Mark Russinovich has a tool called Coreinfo that looks like it will get you the information you are looking for. It actually calls the same unmanaged API that the linked P/Invoke code does (GetLogicalProcessorInformation). You can just call it from PowerShell and process its STDOUT.
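For example (assuming Coreinfo.exe is in the current directory; if I remember the switch correctly, -n restricts the output to NUMA node information):

& .\Coreinfo.exe -n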
I don't think the native OS APIs in Windows 7 and Windows Server 2008 R2 for working with more than 64 logical processors are available in .NET; have a look at ".NET Support for More Than 64 Processors". That author wrote a .NET wrapper for the OS APIs, which you could perhaps use from PowerShell.
I am not a developer so please keep that in mind when reading the following message:
I need to be able to use Windows PowerShell to connect to a JMX RMI agent on a host. Is this even possible?
The example string from the Java client I have been given is below:
JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:7979/jmxrmi");
The reason for this is that I am doing other work in my PowerShell script and would like to keep it all in one place.
Thanks!
This is an unusual mix of two technologies, but it is possible.
On the off-chance that you are attempting to connect to a JBoss server, the quickest way may be to call twiddle, a command-line tool that dispatches JMX requests to the target JBoss server and returns the results to standard out.
Another way is to deploy the Jolokia agent on the target servers. This allows you to issue JMX requests over REST; responses come back as JSON, which you can process in PowerShell using one of these solutions.
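As a sketch of the PowerShell side (PowerShell 3.0+ for Invoke-RestMethod; the host, port, and MBean are placeholders):

# Read one MBean attribute through Jolokia's REST endpoint; the JSON
# response is parsed into objects automatically.
$resp = Invoke-RestMethod -Uri "http://appserver01:8080/jolokia/read/java.lang:type=Memory/HeapMemoryUsage"
$resp.value.used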
Thirdly, you can also deploy the JMX-WS service on your target servers, which will allow you to communicate with the JMX server using web services. This document provides some VBScript examples of this.
None of the above actually uses the JMXServiceURL syntax you outlined, and I cannot think of a way to cleanly integrate this RMI-based protocol into PowerShell, but hopefully one of the above will work for you.
========== UPDATE ==========
There may be a way to use the RMI implementation. Take a look at IKVM, a Java bytecode to .NET compiler. I have successfully compiled JMX/RMI Java code into a .NET assembly and used it from C#. I think PowerShell can do the same; a rough sketch follows.
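Something like this, assuming you have compiled your JMX client classes with ikvmc into MyJmxClient.dll (the assembly names and the set of IKVM runtime DLLs you need vary by version):

# IKVM maps Java packages to .NET namespaces, so the JMX types load as-is.
Add-Type -Path .\IKVM.OpenJDK.Core.dll
Add-Type -Path .\MyJmxClient.dll

$url = New-Object javax.management.remote.JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:7979/jmxrmi")
$connector = [javax.management.remote.JMXConnectorFactory]::connect($url)
# From here it's the standard JMX API, just called from PowerShell.
$connection = $connector.getMBeanServerConnection()
$connection.getMBeanCount()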