On my Ambari server I have added seven slaves to the master. The problem is that I have since changed the hostnames on the slaves, so the master node can no longer identify them.
Can anyone help me change those hostnames for hosts that are already added?
Thank You.
As of Ambari 2.2.2 you can change hostnames using ambari-server update-host-names <hostnames.json>
The basic steps are:
Back up your Ambari DB
Disable Kerberos
Stop ambari-server and ambari-agent on all hosts
Create hostnames.json to map old names to new names. For example:
{"clusterName":{"oldhost1.example.com":"newhost1.example.com","oldhost2.example.com":"newhost2.example.com"}}
On Ambari Server: ambari-server update-host-names hostnames.json
Update the hostnames on all nodes at the OS level
If the hostname of Ambari Server changed, update ambari-agent.ini on every Ambari Agent node
On Ambari Server: ambari-server start
On all Agents: ambari-agent start
Re-enable Kerberos if needed
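Put together, the flow looks roughly like this. This is a minimal sketch: the hostnames.json location, the ambari-agent.ini path, and the example hostname are assumptions, not values from the docs:

ambari-agent stop    # on every host
ambari-server stop   # on the Ambari Server host
# apply the old-name -> new-name mapping (the server must be stopped)
ambari-server update-host-names hostnames.json
# if the Ambari Server's own hostname changed, point every agent at the new name
sed -i 's/^hostname=.*/hostname=newambari.example.com/' /etc/ambari-agent/conf/ambari-agent.ini
ambari-server start
ambari-agent start   # on every host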
http://docs.hortonworks.com/HDPDocuments/Ambari-2.2.2.0/bk_ambari_reference_guide/content/ch_changing_host_names.html
I'm facing a similar problem. What I have found so far is a mechanism for setting up custom hostnames:
https://ambari.apache.org/1.2.3/installing-hadoop-using-ambari/content/ambari-chap7a.html
It should solve the issue of changing hostnames; however, I'm afraid it won't be that straightforward because of the first two steps:
On the Install Options screen, select Perform Manual Registration for Ambari Agents.
Install the Agents manually as described in Installing Ambari Agents Manually.
which you probably can't redo long after you've set up the whole cluster.
I have a cluster with 2 servers in HA. Is there some configuration so that when I make a change, for example to a user's password or a role assignment, the change is applied immediately on both servers?
The problem is that when a user's password is changed, it does not update on the other server immediately; the same happens when a user is assigned a role mapping. It only updates on both servers when the server is rebooted.
OS: Linux (ubuntu 16.04)
keycloak version: 11.0
Thanks for the help
I can't tell from the first paragraph of your question whether Keycloak has ever propagated the changes to the other servers or not.
Does the setup you use usually propagate changes?
If not, it sounds like there are issues with your cluster setup.
Do the nodes discover each other? You can check the logs on startup; there is a good illustration on the Keycloak blog of how to check this.
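For example, on a WildFly-based Keycloak you can grep the server log for Infinispan's "received new cluster view" message (the log path is an assumption for a default standalone-ha setup):

grep "ISPN000094" standalone/log/server.log
# a healthy two-node cluster should log a view that contains both members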
In general it would be a good idea to look over the recommended clustering setup in the docs.
You could change the number of owners in the cluster so that both nodes own the data before it is put in the database. That might help if the issue is that the changes are not propagated quickly enough.
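A minimal sketch of raising the owner count with jboss-cli, assuming a WildFly-based Keycloak 11 and the default keycloak cache container; the cache name and the value 2 are illustrative:

bin/jboss-cli.sh --connect --command='/subsystem=infinispan/cache-container=keycloak/distributed-cache=sessions:write-attribute(name=owners, value=2)'
# repeat for the other distributed caches (e.g. offlineSessions, clientSessions), then reload the server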
We have a cluster of 3 nodes; 2 of them are offline (missing) and I cannot get them to rejoin the cluster automatically. Only the master is online.
Usually, you can use the MySQL Shell AdminAPI:
var cluster = dba.getCluster();
but I cannot use the cluster instance because the metadata is not up to date. And I cannot upgrade the metadata because the missing members must be online to use dba.upgradeMetadata(). (Catch-22.)
I tried to dissolve the cluster by using:
var cluster = dba.rebootClusterFromCompleteOutage();
cluster.dissolve({force:true});
but this requires the metadata to be updated as well.
The question is: how do I dissolve the cluster completely, or upgrade the metadata, so that I can use the cluster.* methods?
This "chicken-egg" issue was fixed in MySQL Shell 8.0.20. dba.rebootClusterFromCompleteOutage() is now allowed in such situation:
BUG#30661129 – DBA.UPGRADEMETADATA() AND DBA.REBOOTCLUSTERFROMCOMPLETEOUTAGE() BLOCK EACH OTHER
More info at: https://mysqlserverteam.com/mysql-shell-adminapi-whats-new-in-8-0-20/
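With 8.0.20 or newer, the recovery can be as short as the following sketch; root@primary:3306 is a placeholder for whichever member is still reachable, and both calls may still prompt interactively:

# reboot the cluster from the surviving member, then bring the metadata up to date
mysqlsh --uri root@primary:3306 --js -e 'var cluster = dba.rebootClusterFromCompleteOutage(); dba.upgradeMetadata();'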
If you have a cluster where each node has been upgraded to the latest version of MySQL, the cluster isn't fully operational, and you need to update your metadata for mysqlsh, you'll need to use an older version of mysqlsh (for example, from https://downloads.mysql.com/archives/shell/) to get the cluster back up and running. Once it is up and running, you can run dba.upgradeMetadata() on the R/W node. Make sure you update all of your routers as well, or they will lose their connection.
I've installed Mesos-DNS in our cluster and it is running OK. We can resolve the domains of the apps installed in Marathon, but I would like to know on which host Marathon itself is installed. If I dig marathon.domain, it doesn't resolve anything.
According to the doc of mesos-dns: "A records ({framework}.domain) and SRV records (_framework._tcp.{framework}.domain) - for every known Mesos master"
Thanks.
It's marathon.mesos unless you've used a different TLD. The Marathon scheduler runs on the Master.
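For example, assuming the default .mesos TLD and a Mesos-DNS server reachable at 10.10.0.2 (both values are placeholders for your setup):

dig +short marathon.mesos @10.10.0.2
# should print the IP of the master currently running the Marathon scheduler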
You can use my mesosdns-resolver bash script to get the endpoint from Mesos DNS.
You can use it like:
mesosdns-resolver.sh -sn <service-name>.marathon.mesos -s <IP_ADDRESS_OF_MESOS_DNS_SERVER>
I am new to pgbouncer 1.7 and I want to configure it in a master-slave setup.
I have configured PostgreSQL 9.3 streaming replication using repmgr, and I want to use PgBouncer for load balancing and connection pooling, so that it automatically switches to the slave if the master goes down. How should I configure it for this? The master and slave are on different servers, and PgBouncer is on yet another server. Do I need to install PgBouncer on both the master and slave servers for it to work, or is installing it on a separate server enough?
I have tried many online tutorials but sadly didn't find any suggestions. Please, if anyone can help.
Thanks in advance,
Mohit
PgBouncer does not have automatic failover, propagation, or ex-master rebuild handling. You can, however, fail over by changing the IP behind the same hostname:
https://pgbouncer.github.io/faq.html
How to failover
PgBouncer does not have internal failover-host configuration nor detection. It is possible via some external tools:
DNS reconfiguration - when the IP behind a DNS name is reconfigured, PgBouncer will reconnect to the new server. This behaviour can be tuned via 2 config parameters: dns_max_ttl tunes the lifetime for one hostname, and dns_zone_check_period tunes how often the zone SOA will be queried for changes. If the zone SOA record has changed, PgBouncer will re-query all hostnames under that zone.
Write the new host to the config and let PgBouncer reload it - send SIGHUP or use the RELOAD; command on the console. PgBouncer will detect the changed host config and reconnect to the new server.
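As a sketch of the second option, assuming a typical install (the config path, the host names, the admin user, and the listen port 6432 are all placeholders):

# point the relevant database entry in pgbouncer.ini at the new primary
sed -i 's/host=old-master.example.com/host=new-master.example.com/' /etc/pgbouncer/pgbouncer.ini
# then either signal the running daemon ...
kill -HUP "$(pidof pgbouncer)"
# ... or issue RELOAD; on the admin console (the user must be listed in admin_users)
psql -h 127.0.0.1 -p 6432 -U pgbouncer pgbouncer -c 'RELOAD;'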
Pgpool has automatic failover, if you want to try that.
I connected three servers to form an HPC cluster using Condor as middleware. When I run condor_status from the central manager, it does not show the other nodes. I can run jobs on the central manager and connect to the other nodes via SSH, but it seems something is missing in the Condor configuration files, where I set the central manager as the Condor host and allow reading and writing for everyone. I kept the MASTER and STARTD daemons in the daemon list on the worker nodes.
When I run condor_status on the central manager it just shows the central manager, and when I run it on a compute node it gives me the error "CEDAR:6001:Failed to connect to" followed by the central manager's IP and port number.
I managed to solve it. The problem was the central manager's firewall (in my case iptables), which was running.
When I stopped the firewall (su -c "service iptables stop"), all the nodes appeared normally when typing condor_status.
The firewall status can be checked using "service iptables status".
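Rather than disabling the firewall entirely, you could open just the port Condor needs. A minimal sketch, assuming the default collector/shared port 9618 and an iptables-based setup:

# allow worker nodes to reach the collector on the central manager
iptables -I INPUT -p tcp --dport 9618 -j ACCEPT
service iptables save   # persist the rule on RHEL/CentOS-style systems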
There are a number of things that could be going on here. I'd suggest you follow this tutorial and see if it resolves your problems -
http://spinningmatt.wordpress.com/2011/06/12/getting-started-creating-a-multiple-node-condor-pool/
In my case the "condor.exe" service was not running on the server; I had stopped it manually. I just started it and everything went fine.