I am trying to extract the cluster active node using PowerShell for data collection purposes. Firstly, do "cluster active node" and "current host server" refer to the same thing?
Get-ClusterGroup -Name 'Cluster Group' | Select-Object *
I am using the above script to extract the owner node. Let me know if this is the correct way to proceed, or whether there is another script I should use to get the active node in the cluster.
The following command will give you the active node for the cluster group directly:
(get-clustergroup -Name 'Cluster Group').ownernode.name
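On the first question: the owner node of the 'Cluster Group' group is the node that currently hosts the core cluster resources, which is generally what is shown as the "Current Host Server". A minimal sketch for a collection script (assuming the FailoverClusters module is installed and the script runs on a cluster node) could compare that value to the local machine name:
Import-Module FailoverClusters
# Node that currently owns the core cluster resources
$activeNode = (Get-ClusterGroup -Name 'Cluster Group').OwnerNode.Name
if ($activeNode -eq $env:COMPUTERNAME) {
    "This host is the active node"
} else {
    "Active node is $activeNode"
}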
I deployed Rundeck (rundeck/rundeck:4.2.0), importing and discovering my inventory using the Ansible Resource Model Source. I have 300 nodes, of which statistically ~150 are accessible/online; the rest are offline (IoT devices). All working fine.
My challenge is that when creating jobs I can assign only those nodes which are online, while I want to assign ALL nodes (including the offline ones) and keep retrying the job for the failed ones only. Only this way can I track the completeness of my deployment. Ideally I would love Rundeck to be intelligent enough to automatically run the job as soon as a node comes back online.
Any ideas/hints on how to achieve that?
Thanks,
The easiest way is to use the health checks feature (only available in PagerDuty Process Automation On-Prem, formerly "Rundeck Enterprise"); that way you can use a node filter that targets only "healthy" (up) nodes.
Using this approach (e.g. configuring a command health check against all nodes), you can dispatch your jobs only to "up" nodes from a global set of nodes. This is possible by using .* as the node filter and !healthcheck:status: HEALTHY as the exclude node filter. If any "offline" node turns on, the filter/exclude filter should pick it up automatically.
For the Ansible/Rundeck integration, it works by setting the environment variable ANSIBLE_HOST_KEY_CHECKING=False, or by setting host_key_checking=false in the ansible.cfg file (in the [defaults] section).
That way, you can see all Ansible hosts among your Rundeck nodes, and your commands/jobs should be dispatched only to online nodes; if any "offline" node changes its status, the filter should take effect.
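For reference, a minimal ansible.cfg sketch with the host key setting mentioned above (the file location depends on your setup) would be:
[defaults]
host_key_checking = False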
I'm writing some PowerShell to gather data from our vSphere clusters. We have some VMs paired as Windows/SQL failover clusters, and I only want to gather data from the primary nodes. Is there a way in the VMware PowerShell CLI to distinguish between the primary and secondary? I've looked through the extended properties of the VMs and haven't found anything, but thought maybe I'd missed it.
Thanks for reading!
The first question is: how do you define which is the primary node and which is the secondary node? From a VMware/CLI perspective, there is no "hardware"/"virtual hardware" level difference; the only thing that would differ is networking, and which node owns the "primary" IP address.
Using VMWare PowerShell CLI module, it would look like:
Get-Module VMware.VimAutomation.Core
$cred = Get-Credential
Connect-VIServer -Server VC01 -Credential $cred
$computer = Get-VM -Name 'NODE01'
$IpAddresses = $computer.Guest.IPAddress
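A rough sketch of pulling the guest IP addresses for both cluster VMs (node names are hypothetical, assuming the same PowerCLI session as above) might look like:
# Hypothetical node names; adjust to your environment
'NODE01', 'NODE02' | ForEach-Object {
    $vm = Get-VM -Name $_
    [pscustomobject]@{
        Name        = $vm.Name
        IPAddresses = $vm.Guest.IPAddress -join ', '
    }
}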
You end up having to visit each machine and pull a list of IP addresses, and then you have to iterate through them to match up primary IP addresses, etc. This is a lot of work and, in my opinion, not the best way to find the primary node. The best way is to query the actual failover cluster itself for the primary node, using the FailoverClusters PowerShell module:
Import-Module FailoverClusters
Get-Cluster -Name CLUSTER | Get-ClusterGroup
Name OwnerNode State
---- --------- -----
Available Storage NODE01 Offline
Cluster Group NODE01 Online
SQL01 NODE02 Online
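From that output, the owner of a specific role can also be pulled directly; for instance, using the SQL01 group from the example above:
(Get-Cluster -Name CLUSTER | Get-ClusterGroup -Name 'SQL01').OwnerNode.Name
# NODE02 in the example output above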
I have installed the latest MongoDB MMS agent (6.5.0.456) on Ubuntu 16.04 and initialised the replica set, so I am running a single-node replica set with the monitoring agent enabled. The agent works fine; however, it does not seem to actually find the replica set member:
[2018/05/26 18:30:30.222] [agent.info] [components/agent.go:Iterate:170] Received new configuration: Primary agent, Assigned 0 out of 0 plus 0 chunk monitor(s)
[2018/05/26 18:30:30.222] [agent.info] [components/agent.go:Iterate:182] Nothing to do. Either the server detected the possibility of another monitoring agent running, or no Hosts are configured on the Group.
[2018/05/26 18:30:30.222] [agent.info] [components/agent.go:Run:199] Done. Sleeping for 55s...
[2018/05/26 18:30:30.222] [discovery.monitor.info] [components/discovery.go:discover:746] Performing discovery with 0 hosts
[2018/05/26 18:30:30.222] [discovery.monitor.info] [components/discovery.go:discover:803] Received discovery responses from 0/0 requests after 891ns
I can see two processes for the monitoring agent:
/bin/sh -c /usr/bin/mongodb-mms-monitoring-agent -conf /etc/mongodb-mms/monitoring-agent.config >> /var/log/mongodb-mms/monitoring-agent.log 2>&1
/usr/bin/mongodb-mms-monitoring-agent -conf /etc/mongodb-mms/monitoring-agent.config
However if I terminate one, it also tears down the other, so I do not think that is the problem.
So, the question is: what is the Group that the agent is referring to? Where is that configured? Or how do I find out which Group the agent refers to, and how do I check whether the group is configured correctly?
The rs.config() output looks fine, with one replica set member whose host field looks just fine. I can use that value to connect to the instance using the mongo command. No auth is configured.
EDIT
It looks as if Cloud Manager now needs to be configured with the seed host; then it starts to discover all the other nodes in the replica set. This seems to be different from pre-Cloud-Manager days, where the agent was able to track the replica set on its own, if I remember correctly... Probably there is still an easier way to get this done, so I am leaving this question open for now...
So, the question is: what is the Group that the agent is referring to? Where is that configured? Or how do I find out which Group the agent refers to, and how do I check whether the group is configured correctly?
Configuration values for the Cloud Manager agent (such as mmsGroupId and mmsApiKey) are set in the config file, which is /etc/mongodb-mms/monitoring-agent.config by default. The agent needs this information in order to communicate with the Cloud Manager servers.
For more details, see Install or Update the Monitoring Agent and Monitoring Agent Configuration in the Cloud Manager documentation.
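As an illustration (placeholder values, not real credentials), the relevant lines in /etc/mongodb-mms/monitoring-agent.config would look something like:
mmsGroupId=<your Cloud Manager group/project id>
mmsApiKey=<your agent api key>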
It looks as if Cloud Manager now needs to be configured with the seed host; then it starts to discover all the other nodes in the replica set.
Unless a MongoDB process is already managed by Cloud Manager automation, I believe it has always been the case that you need to add an existing MongoDB process to monitoring to start the process of initial topology discovery. Once a deployment is monitored, any changes in deployment membership should automatically be discovered by the Cloud Manager agent.
Production deployments should have authentication and access control enabled, so in addition to adding a seed hostname and port via the Cloud Manager UI you usually need to provide appropriate credentials.
I want to know if there is a way to walk ZooKeeper's (ZK) in-memory database and find whether a particular node exists. Something similar to find . -name file inside ZK.
I am logged in to ZK using zkCli.
Once you have your ZK cluster running, you can connect to a node and query the cluster.
For example:
$ZK_HOME/bin/zkCli.sh -server localhost
List of nodes:
ls /
List of commands:
?
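There is no direct equivalent of find . -name, but you can check whether a specific znode exists by stat-ing it (the path below is hypothetical), and newer ZooKeeper versions support a recursive listing:
Check a specific node (prints its stat, or "Node does not exist" if missing):
stat /path/to/znode
Recursive listing (newer ZooKeeper versions):
ls -R /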
I deployed a Service Fabric cluster running a single application, with 3 node types of 5 machines each, and each node type has its own placement constraint.
I need to add another 2 node types (virtual machine scale sets). How can I do that from the Azure portal?
The Add-AzureRmServiceFabricNodeType command can add a new node type to an existing Service Fabric cluster.
Note that the process can take roughly an hour to complete, since it creates one resource at a time starting with the cluster. It will create a new load balancer, public IP address, storage accounts, and virtual machine scale set.
$password = ConvertTo-SecureString -String 'Password$123456' -AsPlainText -Force
Add-AzureRmServiceFabricNodeType `
-ResourceGroupName "resource-group" `
-Name "cluster-name" `
-NodeType "nodetype2" `
-Capacity 2 `
-VmUserName "user" `
-VmPassword $password
Things to consider:
Check your quotas beforehand to ensure you can create the new virtual machine scale set instances, or you will get an error and the whole process will roll back
Node type names have a limit of nine characters when creating a cluster via portal blade; this same restriction may apply using the PowerShell command
The command was introduced as part of v4.2.0 of the AzureRM PowerShell module, so you may need to update your module
You can also add a new node type by creating a new cluster using the Azure portal wizard and updating your DNS records, or by modifying the ARM template, but the PowerShell command is obviously the best option.
For those reading this in 2022 or later, there is a newer version of the PowerShell command to do this:
Add-AzServiceFabricNodeType
And there is also an Azure CLI command: az sf cluster node-type add
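A sketch using the newer Az module, mirroring the AzureRM example above (same placeholder values), might look like:
$password = ConvertTo-SecureString -String 'Password$123456' -AsPlainText -Force
Add-AzServiceFabricNodeType `
    -ResourceGroupName "resource-group" `
    -Name "cluster-name" `
    -NodeType "nodetype2" `
    -Capacity 2 `
    -VmUserName "user" `
    -VmPassword $password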
Another option is to use New-AzureRmResourceGroupDeployment with an updated ARM template that includes the new node types as well as all the resources they need.
What's nice about using the PS command is that it takes care of any manual work that you may otherwise need to do to create and associate resources with the new node types.
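A minimal sketch of that deployment call (the template and parameter file names are hypothetical) would be:
New-AzureRmResourceGroupDeployment `
    -ResourceGroupName "resource-group" `
    -TemplateFile ".\cluster-template.json" `
    -TemplateParameterFile ".\cluster-parameters.json"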