I'm writing some PowerShell to gather data from our vSphere clusters. We have some VMs paired as Windows/SQL failover clusters, and I only want to gather data from the primary nodes. Is there a way in the VMware PowerShell CLI to distinguish between the primary and secondary? I've looked through the extended properties of the VMs and haven't found anything, but thought maybe I'd missed it.
Thanks for reading!
The first question is: how do you define which is the primary node and which is the secondary node? From a VMware/CLI perspective, there is no "hardware"/"virtual hardware" level difference; the only thing that would differ is the networking, and which node owns the "primary" IP address.
Using the VMware PowerShell CLI module, it would look like this:
# Load the PowerCLI module and connect to vCenter
Import-Module VMware.VimAutomation.Core
$cred = Get-Credential
Connect-VIServer -Server VC01 -Credential $cred

# Pull the guest IP addresses reported by VMware Tools
$computer = Get-VM -Name 'NODE01'
$IpAddresses = $computer.Guest.IPAddress
You end up having to visit each machine and pull a list of IP addresses, and then you have to iterate through them to match up primary IP addresses, etc. This is a lot of work and, in my opinion, not the best way to find the primary node. The best way is to query the actual failover cluster and let it tell you which node is primary, using the FailoverClusters PowerShell module:
Import-Module FailoverClusters
Get-Cluster -Name CLUSTER | Get-ClusterGroup
Name OwnerNode State
---- --------- -----
Available Storage NODE01 Offline
Cluster Group NODE01 Online
SQL01 NODE02 Online
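Putting the two together for the original question, here is a minimal sketch; the cluster name 'CLUSTER', the group name 'SQL01', and the assumption that the cluster node name matches the vSphere VM name are all placeholders for your environment:
Import-Module FailoverClusters

# Ask the failover cluster which node currently owns the SQL role
$primaryNode = (Get-ClusterGroup -Cluster 'CLUSTER' -Name 'SQL01').OwnerNode.Name

# Gather vSphere data only from the VM backing the primary node
$vm = Get-VM -Name $primaryNode
$vm | Select-Object Name, NumCpu, MemoryGB, @{ N = 'IPAddress'; E = { $_.Guest.IPAddress -join ', ' } }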
I am trying to extract the cluster active node using PowerShell for data collection purposes. First, do "cluster active node" and "current host server" refer to the same thing?
Get-ClusterGroup -Name 'Cluster Group' | Select-Object *
I am using the above script to extract the owner node. Let me know if this is the correct way to proceed, or whether there is another script I should use to get the active node in the cluster.
The following command will give you the active node for the cluster group directly:
(Get-ClusterGroup -Name 'Cluster Group').OwnerNode.Name
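For example, a minimal sketch that only collects data when the local node owns the group (the group name and the output message are placeholders):
$activeNode = (Get-ClusterGroup -Name 'Cluster Group').OwnerNode.Name
if ($activeNode -eq $env:COMPUTERNAME) {
    # This node owns the core cluster group, i.e. it is the active node
    Write-Output "Collecting data on active node $activeNode"
}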
I deployed a Service Fabric cluster running a single application, with 3 node types of 5 machines each, each with its own placement constraint.
I need to add 2 more node types (virtual machine scale sets). How can I do that from the Azure portal?
The Add-AzureRmServiceFabricNodeType command can add a new node type to an existing Service Fabric cluster.
Note that the process can take roughly an hour to complete, since it creates one resource at a time starting with the cluster. It will create a new load balancer, public IP address, storage accounts, and virtual machine scale set.
$password = ConvertTo-SecureString -String 'Password$123456' -AsPlainText -Force
Add-AzureRmServiceFabricNodeType `
-ResourceGroupName "resource-group" `
-Name "cluster-name" `
-NodeType "nodetype2" `
-Capacity 2 `
-VmUserName "user" `
-VmPassword $password
Things to consider:
Check your quotas beforehand to ensure you can create the new virtual machine scale set instances; otherwise you will get an error and the whole process will roll back
Node type names have a limit of nine characters when creating a cluster via the portal blade; this same restriction may apply when using the PowerShell command
The command was introduced as part of v4.2.0 of the AzureRM PowerShell module, so you may need to update your module
You can also add a new node type by creating a new cluster using the Azure portal wizard and updating your DNS records, or by modifying the ARM template, but the PowerShell command is obviously the best option.
For those reading this in 2022 or later, there is a newer version of the PowerShell command to do this:
Add-AzServiceFabricNodeType
There is also an Azure CLI command: az sf cluster node-type add
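A minimal sketch, assuming the Az cmdlet takes the same parameters as the AzureRM example above (resource group, cluster and node type names are placeholders):
$password = ConvertTo-SecureString -String 'Password$123456' -AsPlainText -Force
Add-AzServiceFabricNodeType `
-ResourceGroupName "resource-group" `
-Name "cluster-name" `
-NodeType "nodetype3" `
-Capacity 2 `
-VmUserName "user" `
-VmPassword $password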
Another option is to use New-AzureRmResourceGroupDeployment with an updated ARM template that includes the new node types as well as all the resources they need.
What's nice about using the PowerShell command is that it takes care of the manual work of creating and associating resources with the new node types.
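For the ARM template route mentioned above, a minimal sketch; the template and parameter file names are placeholders:
New-AzureRmResourceGroupDeployment `
-ResourceGroupName "resource-group" `
-TemplateFile ".\cluster.template.json" `
-TemplateParameterFile ".\cluster.parameters.json" `
-Mode Incremental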
We currently have multiple Azure VMs, all on the same virtual network. We would like to run a script which, in case of failure, restarts the services on a VM, but we want to run that script on all VMs at the same time (in parallel).
I have tried a runbook, which works, but it is not an option since it takes about 5 minutes to complete.
Another option seems to be Invoke-Command, but that would mean opening some ports (I am not sure whether the endpoint needs to be opened, since the machines are on the same virtual network), which is not very convenient.
Does anyone have another idea?
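For what it's worth, if WinRM connectivity is available between the VMs, Invoke-Command already fans out in parallel when given several computer names; a minimal sketch, with VM and service names as placeholders:
$vms = 'VM01', 'VM02', 'VM03'
Invoke-Command -ComputerName $vms -ScriptBlock {
    # Runs on all listed VMs in parallel (default throttle is 32)
    Restart-Service -Name 'MyService'
}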
I'm trying to determine whether my Microsoft failover cluster has quorum (in PowerShell).
The cmdlet Get-ClusterQuorum gives me the quorum configuration, but I need the state.
Get-Cluster | fl * gives me a lot of cluster properties, but I cannot find the one I need there. (DynamicQuorum is a configuration parameter, and I would be happy if someone could explain to me what FixQuorum and PreventQuorum exactly mean, but they probably relate to the Start-ClusterNode -FixQuorum command.)
Since I have AlwaysOn high availability installed, I can run a query:
select cluster_name, quorum_type_desc, quorum_state_desc from sys.dm_hadr_cluster
and get something like:
myclustername,NODE_MAJORITY,NORMAL_QUORUM
and it seems to be what I need, but how can I get this without SQL?
Thanks a lot in advance.
quorum_state_desc shows whether your Cluster has NORMAL_QUORUM or FORCED_QUORUM state.
See: https://msdn.microsoft.com/en-us/library/hh212952.aspx
Therefore, if you use
Get-Cluster | Select FixQuorum
you would get the same information.
FixQuorum can be 0, which equals NORMAL_QUORUM, or 1, which equals FORCED_QUORUM. See: https://msdn.microsoft.com/en-us/library/ee342505(v=vs.85).aspx
And it is indeed related to:
Start-ClusterNode -FixQuorum
See: https://msdn.microsoft.com/en-us/library/hh270275.aspx
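Putting that together, a minimal sketch that reports the quorum state in the same terms as the SQL DMV (assuming the FailoverClusters module is available on the node):
Import-Module FailoverClusters

# FixQuorum is 0/False for NORMAL_QUORUM and 1/True for FORCED_QUORUM
if ((Get-Cluster).FixQuorum) { 'FORCED_QUORUM' } else { 'NORMAL_QUORUM' }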
Having received no replies on the Couchbase forum after nearly 2 months, I'm bringing this question to a broader audience.
I'm configuring CB Server 2.2.0 XDCR between two different Openstack (Essex, eek) installations. I've done some reading on using a DNS FQDN trick in the couchbase-server file to add a -name ns_1#(hostname) value in the start() function. I've tried that with absolutely zero success. There's already a flag in the start() function that says -name 'babysitter_of_ns_1#127.0.0.1' so I don't know if I need to replace that line, comment it out, or keep it. I've tried all 3 of those; none of them seemed to have any positive effect.
The FQDNs are pointing to the Openstack floating_ip addresses (in amazon-speak, the "public" ones). Should they be pointed to the fixed_ip addresses (amazon: private/local) for the nodes? Between Openstack installations, I'm not convinced pointing to an unreachable (potentially duplicate) class-C private IP is of any use.
When I create a remote cluster reference using the floating_ip address to a node in the other cluster, of course it'll create the cluster reference just fine. But when I create a Replication using that reference, I always get one of two distinct errors: Save request failed because of timeout or Failed to grab remote bucket 'bucket' from any of known nodes.
What I think is happening is that the Openstack floating_ip isn't being recognized or translated to its fixed_ip address prior to surfing the cluster nodes for the bucket. I know the -name ns_1#(hostname) modification is supposed to fix this, but I wonder if anyone has had success configuring XDCR between Openstack installations that may be able to provide some tips or hacks.
I know this "works" in AWS. It's my belief that AWS uses some custom DNS enabling queries to return an instance's fixed_ip ("private" IP) when going between availability zones, possibly between regions. There may be other special sauce in AWS that makes this work.
This blog post on AWS Couchbase XDCR replication should help! There are quite a few steps, so I won't paste them all here.
http://blog.couchbase.com/cross-data-center-replication-step-step-guide-amazon-aws