We need to change the VLAN of our DB servers.
What steps should I take to change the IP addresses of an OS cluster (Windows Server 2008 R2) that also hosts a SQL Server 2008 R2 DB cluster?
What are the risks?
Thanks!
No risks, only downtime. Change the IPs on each node, change the DNS entries to the new IPs, and then open Failover Cluster Manager (from Server Manager) on one of the servers and change the cluster IP addresses there. Changing a node name is a much bigger issue than changing the IP addresses.
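If you prefer to script the Failover Cluster Manager step, a rough sketch with the FailoverClusters PowerShell module might look like the following; the resource name, network name, and addresses are assumptions, so check yours with Get-ClusterResource first:

Import-Module FailoverClusters

# list the cluster's IP Address resources to find the one to change
Get-ClusterResource | Where-Object { $_.ResourceType -like "IP Address" }

# point the core cluster IP resource at the new VLAN (placeholder values)
Get-ClusterResource "Cluster IP Address" | Set-ClusterParameter -Multiple @{
    "Address"    = "192.168.50.10"
    "SubnetMask" = "255.255.255.0"
    "Network"    = "Cluster Network 1"
}

# cycle the name resource so it picks up the new dependent IP
Stop-ClusterResource "Cluster Name"
Start-ClusterResource "Cluster Name"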
We've just shipped a standalone Service Fabric cluster to a customer site with a misconfiguration. Our setup:
Service Fabric 6.4
2 Windows servers, each running 3 Hyper-V virtual machines that host the cluster
We configured the cluster locally using static IP addresses for the nodes. When the servers arrived, the IP addresses of the Hyper-V machines were changed to conform to the customer's available IP addresses. Now we can't connect to the cluster, since every IP in the clusterConfig is wrong. Is there any way we can recover from this without re-installing the cluster? We'd prefer to keep the new IPs assigned to the VMs if possible.
I've only tested this in my test environment (I've never done this in production before, so do it at your own risk), but since you can't connect to the cluster anyway, I think it is worth a try.
Connect to each virtual machine that is part of the cluster and do the following steps:
Locate the Service Fabric cluster files (usually C:\ProgramData\SF\{nodeName}\Fabric)
Take the ClusterManifest.current.xml file and copy it to a temp folder (for example C:\temp)
Go to the Fabric.Data subfolder, take the InfrastructureManifest.xml file and copy it to the same temp folder
Inside each copied file, change the node IP addresses to the correct values
Stop the FabricHostSvc process by running the net stop FabricHostSvc command in PowerShell
After it stops successfully, run this PowerShell (admin mode) command to update the node's cluster configuration:
New-ServiceFabricNodeConfiguration -ClusterManifestPath C:\temp\ClusterManifest.current.xml -InfrastructureManifestPath C:\temp\InfrastructureManifest.xml
Once the config is updated, start FabricHostSvc again with net start FabricHostSvc
Do this for each node and pray for the best.
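If it helps, here is a rough per-node PowerShell sketch of the steps above, run as admin on each node; the node name and the old/new addresses are placeholders to replace with your own values:

# placeholders: adjust for each node
$nodeName  = "vm0"
$oldIp     = "10.0.0.4"
$newIp     = "192.168.1.4"
$fabricDir = "C:\ProgramData\SF\$nodeName\Fabric"

New-Item -ItemType Directory -Path C:\temp -Force | Out-Null
Copy-Item "$fabricDir\ClusterManifest.current.xml" C:\temp\
Copy-Item "$fabricDir\Fabric.Data\InfrastructureManifest.xml" C:\temp\

# swap the old address for the new one in both copies
foreach ($file in "C:\temp\ClusterManifest.current.xml", "C:\temp\InfrastructureManifest.xml") {
    (Get-Content $file) -replace [regex]::Escape($oldIp), $newIp | Set-Content $file
}

net stop FabricHostSvc
New-ServiceFabricNodeConfiguration -ClusterManifestPath C:\temp\ClusterManifest.current.xml -InfrastructureManifestPath C:\temp\InfrastructureManifest.xml
net start FabricHostSvc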
I'm trying to create a Windows Server Failover Cluster on Windows Server 2016 in Azure, using this article https://clusteringformeremortals.com/2016/04/23/deploying-microsoft-sql-server-2014-failover-clusters-in-azure-resource-manager-arm/
However, when I execute New-Cluster -Name sql-sql-cls -Node sql-sql-0,sql-sql-1 -StaticAddress 10.0.192.101 -NoStorage I get: New-Cluster : Static address '10.0.192.101' was not found on any cluster network. VM1 has IP 10.0.192.5 and VM2 has IP 10.0.192.6. How can I fix this?
Add a load balancer to the same subnet as the network cards the cluster is on, and use the IP address that gets assigned to the load balancer.
The fix seems to be that all the nodes must have a Default Gateway IP address. It doesn't have to be a real gateway, just an IP in the same range.
With a gateway set on every node, the cluster is created with no problems. After the cluster is running, you can remove the gateway IP address again.
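A minimal sketch of that workaround; the "Ethernet" interface alias and the 10.0.192.1 gateway address are assumptions, so use whatever fits your subnet:

# on each node: add a throwaway default gateway in the same range
New-NetRoute -InterfaceAlias "Ethernet" -DestinationPrefix "0.0.0.0/0" -NextHop "10.0.192.1"

# cluster creation should now succeed (run once, from one node)
New-Cluster -Name sql-sql-cls -Node sql-sql-0,sql-sql-1 -StaticAddress 10.0.192.101 -NoStorage

# once the cluster is up, remove the throwaway gateway on each node
Remove-NetRoute -InterfaceAlias "Ethernet" -DestinationPrefix "0.0.0.0/0" -Confirm:$false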
I've set up a MongoDB replica set with 3 nodes. All servers live in the same VPC, but in different availability zones. Thanks to the /etc/hosts file, in which I describe where to find the other nodes, my replica set is able to communicate between nodes. My /etc/hosts file looks like this on all 3 nodes:
127.0.0.1 localhost mongo0.example.com
Private IP 1 mongo0.example.com
Private IP 2 mongo1.example.com
Private IP 3 mongo2.example.com
Now, the app server needs to connect to the replica set. Should I use the IP addresses of the nodes in the connection string or should I use the hostnames?
mongodb://private_ip1:27017,private_ip2:27017,private_ip3:27017/dbname?replicaSet=rs0
or
mongodb://mongo0.example.com:27017,mongo1.example.com:27017,mongo2.example.com:27017/dbname?replicaSet=rs0
If it's the latter (hostnames), should I configure the app server's /etc/hosts like each of the mongo nodes?
Using IP addresses is usually a bad idea, as they may need to be changed over time. If at all possible, I would stick with hostnames.
And yes, you will need to ensure that all replica members and any app servers or client machines can resolve the names (using /etc/hosts if necessary).
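For the app server that means carrying the same three entries as the replica set members (minus the 127.0.0.1 alias, which only makes sense on the node itself):

Private IP 1 mongo0.example.com
Private IP 2 mongo1.example.com
Private IP 3 mongo2.example.com

After that, the hostname-based connection string from the question can be used unchanged.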
See also this thread for a more thorough explanation.
I'm trying to run http://typesafe.com/activator/template/akka-distributed-workers on a few machines connected to a local network.
I want the host configuration to be as transparent as possible, so in my project configuration I just set linux.local (as netty.tcp.hostname and in the seed nodes), and on each machine an Avahi daemon resolves linux.local to the appropriate IP address.
Should akka-cluster/akka-remote discover the other machines automatically using the gossip protocol, or won't the above configuration work, meaning I need to explicitly set the IP address on each machine, e.g. by passing it as an argument?
You need to set the hostname configuration on each machine to be an address where that machine can be contacted by the other nodes in the cluster.
So unfortunately, the configuration does need to be different on each node. One way to do this is to override the host configuration programmatically in your application code.
The seed nodes list, however, should be the same for all the nodes, and also should be the externally accessible addresses.
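A minimal sketch of such a programmatic override, assuming the classic akka.remote.netty.tcp transport the template uses; the system name "ClusterSystem" and the command-line argument for the address are assumptions:

import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

object Main {
  def main(args: Array[String]): Unit = {
    // this node's externally reachable address, e.g. passed as the first argument
    val host = args.headOption.getOrElse("127.0.0.1")
    val config = ConfigFactory
      .parseString(s"""akka.remote.netty.tcp.hostname = "$host" """)
      .withFallback(ConfigFactory.load()) // seed nodes etc. stay in application.conf
    // the system name must match the one used in the seed-node addresses
    val system = ActorSystem("ClusterSystem", config)
  }
}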
I've been following this tutorial to create an Azure SQL AlwaysOn Availability Group using Powershell:
Tutorial: AlwaysOn Availability Groups in Windows Azure (PowerShell)
When I get to the command that invokes the CreateAzureFailoverCluster PowerShell script, I check the state of the failover cluster. In Failover Cluster Manager, it is shown as "the cluster network name is not online".
When I look at the Cluster Events, I see this:
Cluster network name resource 'Cluster Name' cannot be brought online. Ensure that the network adapters for dependent IP address resources have access to at least one DNS server. Alternatively, enable NetBIOS for dependent IP addresses.
Each of the 3 servers in the cluster has access to the DC via ping. All of the preceding setup steps execute correctly. The servers are all on the 10.10.2.x/24 IP range except the DC, which is on 10.10.0.0/16 (with IP of 10.10.0.4)
All of the settings have been validated by prior execution of the tutorial on a different Azure subscription to create a failover cluster that works fine.
Cluster validation reveals this warning:
The "Cluster Group" does not contain an Cluster IP Address resource. This is a required resource for the group. It may be difficult to manage the cluster with this resource missing
(sic)
How do I add a Cluster IP Address resource?
There was nothing wrong with the configuration or the steps taken.
It took the cluster 3 hours to come online.
Attempting to bring the cluster online manually failed for at least 20 minutes after creating the cluster.
The Windows Event logs on all 4 servers showed nothing to say when the cluster moved into the online state.
It seems the correct solution was to work on something else until the cluster started under its retry policy.
Did you set up a fixed IP address for the cluster, using the cluster manager? There's a bug where DHCP will assign the cluster the IP address of one of the SQL Server instances. I just assigned a high enough number (x.x.x.171, I think), and the problem went away.
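If the core cluster group really is missing its IP Address resource, as the validation warning suggests, something along these lines should add one and hook it up; this is only a sketch, and the resource, group, and network names plus the 10.10.2.171 address are assumptions to adjust:

# add a static IP Address resource to the core cluster group
Add-ClusterResource -Name "Cluster IP Address" -ResourceType "IP Address" -Group "Cluster Group"
Get-ClusterResource "Cluster IP Address" | Set-ClusterParameter -Multiple @{
    "Address"    = "10.10.2.171"
    "SubnetMask" = "255.255.255.0"
    "Network"    = "Cluster Network 1"
    "EnableDhcp" = 0
}

# make the network name depend on the new IP resource, then bring it online
Set-ClusterResourceDependency -Resource "Cluster Name" -Dependency "[Cluster IP Address]"
Start-ClusterResource "Cluster Name"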