OpenMQ clustering not supported for loopback addresses

If I start up a single instance of the broker on a loopback address I get the following:
[05/Sep/2014:16:45:11 BST] WARNING [B3236]: Bad bind address of portmapper service for cluster, please change imq.portmapper.hostname: Loopback IP address is not allowed in broker address localhost[localhost/127.0.0.1] for cluster
[05/Sep/2014:16:45:11 BST] WARNING [B1137]: Cluster initialization failed. Disabling the cluster service.
I have a setup (actually the Azure Compute Emulator) that allows multiple VMs/processes to be started with their own unique IP addresses of the form 127.X.X.X, which are loopback addresses as far as java.net.InetAddress is concerned. So although I am successfully using these addresses for socket-to-socket communication between those VMs/processes, I cannot use them to run an OpenMQ cluster.
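For example, java.net.InetAddress classes any 127.x.x.x address as loopback, not just 127.0.0.1:

import java.net.InetAddress;

public class LoopbackCheck {
    public static void main(String[] args) throws Exception {
        // isLoopbackAddress() is true for the whole 127.0.0.0/8 range
        System.out.println(InetAddress.getByName("127.0.0.5").isLoopbackAddress()); // prints true
    }
}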
As a workaround I have set up the brokers to bind to a single non-loopback address and use different ports, and that works. So it's not the case that you can't cluster on one IP address.
Why was loopback disallowed?
If it is theoretically possible, is there a setting to enable it for clustering?

According to Amy Kang on the Oracle OpenMQ users mailing list, this is by design, since clustering is intended to run across multiple servers. You can, however, bind several brokers to one non-loopback address and use different ports, as sketched below.
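A minimal sketch of that workaround, assuming 192.168.1.10 is the shared non-loopback address (the ports and property layout here are illustrative, not taken from the mailing list post):

# broker 1: config.properties
imq.portmapper.hostname=192.168.1.10
imq.portmapper.port=7676
imq.cluster.brokerlist=192.168.1.10:7676,192.168.1.10:7677

# broker 2: config.properties
imq.portmapper.hostname=192.168.1.10
imq.portmapper.port=7677
imq.cluster.brokerlist=192.168.1.10:7676,192.168.1.10:7677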

Related

Egress IP address selection

We are running a SaaS service that we are looking to migrate to Kubernetes, preferably at one of the hyperscalers. One specific issue I have not yet found a clean solution for is the need for egress IP address selection from within the application.
We deal with a large number of upstream providers that apply access control and rate limiting based on source IP address. Also, a portion of our customers use their own accounts with some of the upstream providers. To access an upstream provider in the context of their account, we need to control the source IP used for the connection from within the application.
We are currently running our services in a DMZ behind a load balancer, so direct network interface selection is already impossible. We use some iptables rules on our load balancers/gateways to do address selection based on mapped port numbers (e.g. egress connections to port 1081 are mapped to source address B and target port 80, and port 1082 to source address C, port 80); a sketch of these rules follows below.
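Roughly what those rules look like, with placeholder addresses (198.51.100.2 and 198.51.100.3 stand in for source addresses B and C; the mark values are arbitrary):

# tag egress connections by their mapped port and rewrite the destination port to 80
iptables -t mangle -A PREROUTING -p tcp --dport 1081 -j MARK --set-mark 1
iptables -t nat -A PREROUTING -p tcp --dport 1081 -j DNAT --to-destination :80
iptables -t mangle -A PREROUTING -p tcp --dport 1082 -j MARK --set-mark 2
iptables -t nat -A PREROUTING -p tcp --dport 1082 -j DNAT --to-destination :80
# pick the source address based on the mark
iptables -t nat -A POSTROUTING -m mark --mark 1 -j SNAT --to-source 198.51.100.2
iptables -t nat -A POSTROUTING -m mark --mark 2 -j SNAT --to-source 198.51.100.3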
This, however, is quite a fragile setup that also does not map nicely onto more standardized *aaS offerings when migrating.
Looking for suggestions for a better setup.
One of the things that could help you solve this is the Istio Egress Gateway, so I suggest you look into it.
Otherwise, it still depends on the particular platform and the way you deploy your cluster. For example, on AWS you can make sure your egress traffic always leaves from a predefined, known set of IPs by using instances with Elastic IPs assigned to forward your traffic (be it regular EC2 instances or AWS NAT Gateways). Even with the egress gateway above, you need some way to pin a fixed IP, so an AWS Elastic IP (or equivalent) is a must.
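For illustration, pinning an instance's outbound address with an Elastic IP via the AWS CLI looks roughly like this (the instance ID and allocation ID are placeholders):

# allocate a new Elastic IP; the command returns an AllocationId
aws ec2 allocate-address --domain vpc
# attach it to the instance that forwards your egress traffic
aws ec2 associate-address --instance-id i-0123456789abcdef0 --allocation-id eipalloc-0abc12345678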

Raspberry Pi MQTT broker access via Wi-Fi and Ethernet without interference

I would like to run an MQTT broker (Mosquitto) on a Pi2.
The Pi is connected to two networks, Ethernet and Wi-Fi. Neither network is administrated by me; each has its own independent DHCP server.
How can I make the broker available in both networks without interfering with the network infrastructure?
Dumb question?
Cheers
By default Mosquitto binds to 0.0.0.0, a special address that represents all IP addresses of the host machine. There is no need to run two separate brokers; one will work just fine.
This means the broker will be accessible from both networks. The only problem is that if the Pi is getting addresses from DHCP on both interfaces, you will need to know which IP addresses have been assigned in order to reach the broker from each network.
I suggest you look at a program called Avahi, which can provide an mDNS service that lets you refer to the Pi by a .local domain name from both networks.
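If you want the listener to be explicit rather than implicit, a minimal mosquitto.conf entry mirroring that default would be:

# listen on port 1883 (the standard MQTT port) on all interfaces
listener 1883 0.0.0.0

And on Raspbian, setting up Avahi is typically just:

sudo apt-get install avahi-daemon
# the Pi is then reachable as <hostname>.local from both networks, e.g. raspberrypi.local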

Google Container Engine: assign static IP to nodes for outbound traffic

I am using Google Container Engine to launch a cluster that connects to remote services (in a different data center / provider). The containers that make these connections may not have a Kubernetes service associated with them and don't need external inbound IP addresses. However, I want to set up firewall rules on the remote machines and have a known subnet that the nodes will be within when I expand/reduce the cluster or if a node goes down and is rebuilt.
Looking at Google networks, they appear to relate to internal networks (e.g. 10.128.0.0, etc.). An external IP lets me set up a single static IP address but not a range, and I don't see how to apply that to a node; applying it to a load balancer won't change the outbound IP address.
Is there a way I can reserve a block of IP addresses for my cluster to use in my firewall rules on my remote servers? Or is there some other solution I'm missing for this kind of thing?
The proper solution for this is to use a VPN to connect the two networks. Google Cloud VPN lets you create the Google side of such a connection.
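A rough sketch of the Google side using the classic VPN gcloud commands, where the gateway/tunnel names, region, peer address, secret, and remote range are all placeholders (the forwarding rules for UDP 500/4500 and the remote end of the tunnel are omitted):

gcloud compute target-vpn-gateways create my-gateway --network default --region us-central1
gcloud compute addresses create vpn-ip --region us-central1
gcloud compute forwarding-rules create fr-esp --region us-central1 --ip-protocol ESP --address <vpn-ip> --target-vpn-gateway my-gateway
gcloud compute vpn-tunnels create my-tunnel --region us-central1 --peer-address 203.0.113.10 --shared-secret MY_SECRET --target-vpn-gateway my-gateway
gcloud compute routes create route-to-remote --network default --destination-range 192.168.0.0/24 --next-hop-vpn-tunnel my-tunnel --next-hop-vpn-tunnel-region us-central1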

Akka-cluster discovering other machines in local network

I'm trying to run http://typesafe.com/activator/template/akka-distributed-workers on a few machines connected to a local network.
I want the host configuration to be as transparent as possible, so in my project configuration I just set linux.local (as netty.tcp.hostname and as the seed nodes), and on each machine an Avahi daemon resolves linux.local to the appropriate IP address.
Should akka-cluster/akka-remote discover the other machines automatically using the gossip protocol, or will the above configuration not work, so that I need to explicitly set the IP address on each machine, e.g. by passing it as an argument?
You need to set the hostname configuration on each machine to be an address where that machine can be contacted by the other nodes in the cluster.
So unfortunately, the configuration does need to be different on each node. One way to do this is to override the host configuration programmatically in your application code, as sketched below.
The seed nodes list, however, should be the same for all the nodes, and also should be the externally accessible addresses.
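A minimal sketch of that programmatic override (the system name and the way the host is obtained are assumptions, not taken from the template):

import akka.actor.ActorSystem;
import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;

public class ClusterNode {
    public static void main(String[] args) {
        // the address the other nodes can reach this machine on, e.g. passed as an argument
        String host = args[0];
        Config config = ConfigFactory
                .parseString("akka.remote.netty.tcp.hostname = \"" + host + "\"")
                .withFallback(ConfigFactory.load()); // seed nodes etc. stay in application.conf
        ActorSystem system = ActorSystem.create("ClusterSystem", config);
    }
}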

Getting ZooKeeper to run on Google's Compute Engine using external IPs

I have been trying to setup a ZooKeeper cluster on the Google Compute Engine and have run into some issues when using the external IPs of the machines. My cluster consists of 3 nodes on their own separate instances on GCE.
Now, when I configure each node to use the external IP of the instance they seem to be unable to communicate with each other.
zoo.cfg
tickTime=2000
dataDir=/var/lib/zookeeper
clientPort=2181
initLimit=5
syncLimit=2
server.1=externalIp1:2888:3888
server.2=externalIp2:2888:3888
server.3=externalIp3:2888:3888
If I configure them with their internal IPs, however, everything works perfectly fine. My guess is that when ZooKeeper starts up, it binds itself to the internal IP of the instance regardless of the configuration. Because of this, when each node tries to look for the other two using the external IPs it was configured with, it can't find them.
So my question is: is there any way to make ZooKeeper use the external IP of the machine instead of the internal one? I'm relatively new to the Google Cloud Platform and to setting up hardware in general, so I'm not really sure whether something like IP forwarding, firewall rules, or something else would achieve what I'm trying to do (assuming it's even possible).
According to the ZooKeeper 3.4.5 docs, you need to specify the following option:
clientPortAddress
New in 3.3.0: the address (ipv4, ipv6 or hostname) to listen for client connections; that is, the address that clients attempt to connect to. This is optional, by default we bind in such a way that any connection to the clientPort for any address/interface/nic on the server will be accepted.
That said, by default it binds to all available IPs on the server, so in theory it should have worked as you set it up.
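One workaround commonly used on NATed platforms like GCE (an assumption to verify, not something from the docs quoted above) is to have each node list itself as 0.0.0.0 so it binds locally, while keeping the external IPs for its peers. On node 1, for example, zoo.cfg would contain:

server.1=0.0.0.0:2888:3888
server.2=externalIp2:2888:3888
server.3=externalIp3:2888:3888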
Important note: if ZooKeeper instances talk to each other using external IPs rather than internal IPs, you will be charged for data egress, whereas if all communication stays on the internal network (using internal IPs) within the same zone, you won't.