iSCSI multipath settings on CentOS 6.5

All,
I want to know whether it is possible to use only one network card to configure iSCSI multipath for the backend iSCSI storage. For example, I have a NIC eth0 with IP address 192.168.10.100, and I create a virtual NIC eth0:1 with IP address 192.168.11.100. The two IPs correspond to the IP addresses of the two controllers of the iSCSI storage. Or must one use two separate physical NICs for iSCSI multipath? I tried the above settings but found that only one path is available for any volume attached to the server. I can ping both controller IPs (192.168.10.10 and 192.168.11.10) without problems.
Cheers,
Doan

To use one network card for multipathing, you need two interfaces on that card, each on a different subnet, i.e. each going through a different switch. It's still not great to do this with just one card, since the card itself is a single point of failure. For maximum robustness, each path should be as independent of the other as possible.
So I believe the answer is that it is possible, but not recommended.
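For reference, a rough sketch of the open-iscsi/dm-multipath side of the recommended two-interface layout might look like this. The iface0/iface1 labels are arbitrary, eth1 is an assumed second port, and the portal addresses are the ones from the question:

# Create two iSCSI interface definitions and bind each to a network interface
# on its own subnet (iface0/iface1 are arbitrary labels, eth1 is assumed).
iscsiadm -m iface -I iface0 --op=new
iscsiadm -m iface -I iface1 --op=new
iscsiadm -m iface -I iface0 --op=update -n iface.net_ifacename -v eth0
iscsiadm -m iface -I iface1 --op=update -n iface.net_ifacename -v eth1

# Discover each controller portal through its matching interface, then log in.
iscsiadm -m discovery -t sendtargets -p 192.168.10.10 -I iface0
iscsiadm -m discovery -t sendtargets -p 192.168.11.10 -I iface1
iscsiadm -m node --login

# Both paths should now show up under one multipath device.
multipath -ll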

Related

custom outgoing network path for kubernetes pod

Say I have two sites, S1 and S2, each with at least one Kubernetes worker. The two sites are geographically apart and have different public IPs on the nodes/workers.
Does Kubernetes offer any existing mechanism to route outgoing internet traffic from a pod/container in S1 via S2?
The goal is to be able to use the public IP(s) in S2 for pods in S1.
If k8s federation is a prerequisite for a solution, then that is fine.
Kubernetes doesn't have any input on this; it's up to your network design and structure, just like it would be with traditional VMs or whatnot. That said, this sounds like a very bad network design given what you described, so I would be surprised if it were easy to set up. Calico runs normal BGP under the hood, so you can probably set up two ASes and force one to route via the other.
With Calico you can introduce a non-default IPPool that does not NAT outgoing traffic, and then annotate the pods and/or namespaces that should use that IPPool.
At that point you are left with non-working outgoing traffic, since your cluster IPs are "leaking" to the outside world but no return path is known to any upstream router between your cluster and the internet.
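As a rough sketch (the pool name, CIDR and namespace are made up for illustration, and calicoctl plus the Calico CNI plugin are assumed), the non-NAT pool and the annotation could look like this:

# Hypothetical non-NAT pool; name and CIDR are placeholders.
cat <<'EOF' | calicoctl apply -f -
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: no-nat-pool
spec:
  cidr: 10.123.0.0/16     # range you intend to advertise upstream
  natOutgoing: false      # do not NAT traffic leaving the cluster
  ipipMode: Never         # plain routing so upstream routers see pod IPs
EOF

# Tell Calico to allocate addresses from that pool for a namespace
# (the same annotation also works on individual pods).
kubectl annotate namespace my-ns cni.projectcalico.org/ipv4pools='["no-nat-pool"]'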
You have to let your router, which should sit between your cluster and the internet, know about the cluster ranges. You can use Calico's global BGPPeer concept for this; once you've done that, also set up BGP on your router (see [1] for more info).
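A global BGPPeer (no node selector, so it applies cluster-wide) is just another Calico resource; the name, peer address and AS number below are placeholders:

# Hypothetical router peer; peerIP and asNumber are placeholders.
cat <<'EOF' | calicoctl apply -f -
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: edge-router
spec:
  peerIP: 192.0.2.1
  asNumber: 64512
EOF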
From there your router has all the flexibility to route that traffic differently based on the non-default IPPool's subnet, and/or to first tunnel it, e.g. to the questioner's 'S2'.
Note that unless you are truly using public IP space, i.e. non-RFC-1918 IPs (plus some other reserved ranges), you now have to introduce NAT somewhere yourself. You can do so in S1 or S2; if you opt for the latter, that site also needs to know the return path back to your router.
This is not really a cloud-native solution, since you're just moving the problem from Kubernetes to the "old-school" domain of policy-based routing on fixed subnets. It is also not quite what the questioner asked for, since he implied that there is also a Kubernetes process in 'S2'; in the possible solution above, a k8s process is not needed in S2.
This is what #coderanger's answer to "custom outgoing network path for kubernetes pod" was suggesting.

Raspberry Pi MQTT broker access via Wi-Fi and Ethernet without interference

I would like to run an MQTT broker (Mosquitto) on a Pi2.
The Pi is connected to two networks, Ethernet and Wi-Fi. Neither network is administered by me.
There are two independent DHCP servers, one in each network.
How can I make the broker available in both networks without interfering with the network infrastructure?
Dumb question?
Cheers
By default Mosquitto binds to 0.0.0.0, a special address that represents all IP addresses of the host machine. There is no need to run two separate brokers; one will work just fine.
This means that the broker will be accessible from both networks. The only problem is that if the Pi is getting addresses from DHCP on both interfaces, you will need to know which IP addresses have been assigned in order to reach the broker from each network.
I suggest you look up a program called avahi, which can provide an mDNS service allowing you to refer to the Pi by a .local domain name from both networks.
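As a rough sketch, assuming a Debian/Raspbian install and the default "raspberrypi" hostname:

# Install the broker and the mDNS responder (Debian/Raspbian packages).
sudo apt-get install mosquitto avahi-daemon

# Mosquitto listens on 0.0.0.0 by default; to make it explicit you could
# add this line to /etc/mosquitto/mosquitto.conf:
#   listener 1883 0.0.0.0

# With avahi running, clients on either network can reach the broker by its
# mDNS name (hostname assumed to be "raspberrypi" here):
mosquitto_sub -h raspberrypi.local -t 'test/#'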

Google Container Engine: assign static IP to nodes for outbound traffic

I am using Google Container Engine to launch a cluster that connects to remote services (in a different data center / provider). The containers that are connecting may not have a Kubernetes service associated with them and don't need external inbound IP addresses. However, I want to set up firewall rules on the remote machines and have a known subnet that the nodes will be within when I expand/reduce the cluster or when a node goes down and is rebuilt.
Looking at Google Cloud networks, they appear to be internal networks (e.g. 10.128.0.0, etc.). An external IP lets me set up a single static IP address but not a range, and I don't see how to apply one to a node; applying it to a load balancer won't change the outbound IP address.
Is there a way I can reserve a block of IP addresses for my cluster to use in my firewall rules on my remote servers? Or is there some other solution I'm missing for this kind of thing?
The proper solution for this is to use a VPN to connect the two networks. Google Cloud VPN allows you to create this on the Google side.
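As a very rough sketch of the classic route-based Cloud VPN setup on the Google side (resource names, the us-central1 region, the peer address, the destination range and the shared secret are all placeholders; the remote data center still needs its own IPsec configuration):

# Reserve a static IP and create the VPN gateway (placeholders throughout).
gcloud compute addresses create vpn-ip --region us-central1
gcloud compute target-vpn-gateways create vpn-gw --network default --region us-central1

# Forwarding rules so IKE/IPsec traffic reaches the gateway.
gcloud compute forwarding-rules create vpn-esp --region us-central1 \
    --ip-protocol ESP --address vpn-ip --target-vpn-gateway vpn-gw
gcloud compute forwarding-rules create vpn-udp500 --region us-central1 \
    --ip-protocol UDP --ports 500 --address vpn-ip --target-vpn-gateway vpn-gw
gcloud compute forwarding-rules create vpn-udp4500 --region us-central1 \
    --ip-protocol UDP --ports 4500 --address vpn-ip --target-vpn-gateway vpn-gw

# The tunnel to the remote data center, plus a route for its subnet.
gcloud compute vpn-tunnels create dc-tunnel --region us-central1 \
    --peer-address 203.0.113.10 --shared-secret 'REPLACE_ME' --ike-version 2 \
    --local-traffic-selector 0.0.0.0/0 --remote-traffic-selector 0.0.0.0/0 \
    --target-vpn-gateway vpn-gw
gcloud compute routes create dc-route --network default \
    --destination-range 10.50.0.0/16 \
    --next-hop-vpn-tunnel dc-tunnel --next-hop-vpn-tunnel-region us-central1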

Akka-cluster discovering other machines in local network

I'm trying to run http://typesafe.com/activator/template/akka-distributed-workers on few machines connected to local network.
I want the host configuration to be as transparent as possible, so in my project configuration I set just linux.local (as netty.tcp.hostname and in the seed nodes), and on each machine an Avahi daemon resolves linux.local to the appropriate IP address.
Should akka-cluster/akka-remote discover the other machines automatically using the gossip protocol, or will the above configuration not work, so that I need to explicitly set the IP address on each machine, e.g. by passing it as an argument?
You need to set the hostname configuration on each machine to be an address where that machine can be contacted by the other nodes in the cluster.
So unfortunately, the configuration does need to be different on each node. One way to do this is to override the host configuration programmatically in your application code.
The seed nodes list, however, should be the same for all the nodes, and it should also use the externally accessible addresses.
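One low-tech way to do the per-machine override from the launch script instead of application code, as a sketch (distributed-workers.jar is a placeholder name; the property keys are the classic akka-remote netty.tcp ones the question already uses):

# Pass this machine's reachable address as a system property at launch.
HOST_ADDR=$(hostname -f)              # or hard-code the machine's reachable IP
java -Dakka.remote.netty.tcp.hostname="$HOST_ADDR" \
     -Dakka.remote.netty.tcp.port=2551 \
     -jar distributed-workers.jar     # placeholder jar name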

OpenMQ clustering not supported for loopback addresses

If I start up a single instance of the broker on a loopback address I get the following:
[05/Sep/2014:16:45:11 BST] WARNING [B3236]: Bad bind address of portmapper service for cluster, please change imq.portmapper.hostname: Loopback IP address is not allowed in broker address localhost[localhost/127.0.0.1] for cluster
[05/Sep/2014:16:45:11 BST] WARNING [B1137]: Cluster initialization failed. Disabling the cluster service.
I have a setup (actually the Azure Compute Emulator) which allows multiple VMs/processes to be started up with their own unique IP addresses of the form 127.X.X.X, which are loopback addresses as far as java.net.InetAddress is concerned. So despite the fact that I am successfully using these addresses for socket-to-socket communication between those VMs/processes, I cannot use them to run an OpenMQ cluster.
As a workaround I have set up the brokers to bind to a SINGLE non-loopback address and use different ports, and that works. So it's not the case that you can't cluster on one IP address.
Why was loopback disallowed?
If it is theoretically possible, is there a setting to enable it for clustering?
According to Amy Kang on the Oracle OpenMQ users mailing list, this is by design since clustering is intended to be across multiple servers. You can, however, bind several brokers to one non-loopback address and use different ports.
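A rough sketch of that workaround, starting two brokers clustered on one non-loopback address with different ports (the hostname and ports are placeholders):

# Two brokers clustered on one non-loopback host, different ports.
imqbrokerd -name broker1 -port 7676 \
    -Dimq.portmapper.hostname=myhost \
    -Dimq.cluster.brokerlist=myhost:7676,myhost:7677
imqbrokerd -name broker2 -port 7677 \
    -Dimq.portmapper.hostname=myhost \
    -Dimq.cluster.brokerlist=myhost:7676,myhost:7677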