I want to communicate with a Raspberry Pi sensor node connected to a DHCP network. Since the node has a dynamically allocated IP, I can't address it directly. How can I do that?
You'll need a zero-configuration networking service that resolves a fixed name to your changing IP. On other operating systems the Bonjour service handles this. On Raspbian you can install Avahi:
sudo apt-get install avahi-daemon avahi-browse
This should make your Pi available on the network as raspberrypi.local, unless you changed the hostname to something else in the Raspbian configuration. avahi-browse is optional, but handy for listing the names of devices on your network.
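Once the daemon is running, you can reach the Pi by name and enumerate mDNS-advertised devices; a quick sketch (raspberrypi is the Raspbian default hostname, and the client machine must itself be able to resolve mDNS names, e.g. via Bonjour or nss-mdns):

# Resolve the Pi by its mDNS name instead of its DHCP-assigned IP
ping raspberrypi.local

# List all mDNS services announced on the local network
avahi-browse --all --resolve --terminate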
TL;DR: kubectl hangs indefinitely when calling the controller node from itself, but operates normally when called from another machine.
I'm getting started with learning Kubernetes. I've installed Raspbian 64-bit OS on a Raspberry Pi 4 with 8 GB of RAM, and then installed k3s (and nothing else) onto it as the master node (and installed k3s onto a 4 GB Pi 4 as a worker node). Both Pis are fast and responsive when executing non-Kubernetes commands (and are connected to the same network via Ethernet), but kubectl get nodes from the controller node will almost-always (but not always-always) hang indefinitely, with no output.
Strangely, though, when I copy the configuration file from the controller node to my dev laptop and change the clusters[0].cluster.server address from 127.0.0.1 to the appropriate domain name (as advised in "Accessing K3s from my dev laptop", here), kubectl commands from my dev laptop complete reliably and quickly.
Is this the expected performance of k8s/k3s? If not, what are some areas I can look into to make kubectl complete reliably from my controller node? I've tried replacing 127.0.0.1 in the controller node's kubeconfig with localhost or with the machine's assigned domain name; neither made a difference.
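One diagnostic I can try (a sketch using standard kubectl/k3s invocations) is raising kubectl's verbosity to see where the call stalls:

# Log every HTTP request/response so the hang point is visible
kubectl get nodes -v=9

# k3s also ships an embedded kubectl that reads the server's own kubeconfig
sudo k3s kubectl get nodes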
I was looking at this URL.
It says: "If you already have a way to configure hosting resources, use kubeadm to easily bring up a cluster with a single command per machine."
What do you mean by "If you already have a way to configure hosting resources"?
If I have a few Ubuntu machines within my office LAN, can I set up a Kubernetes cluster on them using kubeadm?
It just means that you already have a way of installing an OS on these machines, booting them, assigning IPs on your LAN, and so on. If you can SSH into your nodes-to-be, you are ready!
Follow the guide carefully and you will have a demo cluster in no time.
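For a rough sketch of what "a single command per machine" looks like in practice (the token and hash below are placeholders that kubeadm init prints for you):

# On the machine chosen as the control plane (run as root)
kubeadm init

# kubeadm init prints a join command; run it on each of the other machines
kubeadm join <control-plane-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>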
I would like to run an MQTT broker (Mosquitto) on a Pi2.
The Pi is connected to two networks, Ethernet and WiFi. Neither network is administered by me.
Each network has its own independent DHCP server.
How can I make the broker available on both networks without interfering with the network infrastructure?
Dumb question?
Cheers
By default Mosquitto binds to the 0.0.0.0 address, a special address that represents all IP addresses of the host machine. There is no need to run two separate brokers; one will work just fine.
This means that the broker will be accessible from both networks. The only problem is that if the pi is getting addresses from DHCP on both interfaces then you will need to know what IP addresses have been assigned in order to access the broker from each network.
I suggest you look up a program called Avahi, which can be used to provide an mDNS service, allowing you to refer to the Pi by a .local domain name from both networks.
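For illustration, a minimal sketch of the relevant mosquitto.conf lines (0.0.0.0 is already the default bind address; it is spelled out here only to make the behaviour explicit):

# /etc/mosquitto/mosquitto.conf
# Listen on port 1883 on all interfaces (both Ethernet and WiFi)
listener 1883 0.0.0.0
# On Mosquitto 2.x remote clients additionally require either
# authentication or: allow_anonymous true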
I'm trying to run http://typesafe.com/activator/template/akka-distributed-workers on a few machines connected to a local network.
I want the host configuration to be as transparent as possible, so in my project configuration I set just linux.local (as netty.tcp.hostname and as the seed nodes), and on each machine an Avahi daemon resolves linux.local to the appropriate IP address.
Should akka-cluster/akka-remote discover other machines automatically using the gossip protocol, or won't the above configuration work, meaning I need to explicitly set the IP address on each machine, e.g. by passing it as an argument?
You need to set the hostname configuration on each machine to be an address where that machine can be contacted by the other nodes in the cluster.
So unfortunately, the configuration does need to be different on each node. One way to do this is to override the host configuration programmatically in your application code.
The seed nodes list, however, should be the same for all the nodes, and also should be the externally accessible addresses.
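One hedged sketch of the per-node override, without editing application.conf on each machine: Typesafe Config lets JVM system properties override file settings, so each node can supply its own resolvable address at launch (worker.jar is a placeholder for your application's jar):

# Run on each machine; hostname -f yields that machine's own name
java -Dakka.remote.netty.tcp.hostname=$(hostname -f) -jar worker.jar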
I have been trying to setup a ZooKeeper cluster on the Google Compute Engine and have run into some issues when using the external IPs of the machines. My cluster consists of 3 nodes on their own separate instances on GCE.
Now, when I configure each node to use the external IP of the instance they seem to be unable to communicate with each other.
zoo.cfg
tickTime=2000
dataDir=/var/lib/zookeeper
clientPort=2181
initLimit=5
syncLimit=2
server.1=externalIp1:2888:3888
server.2=externalIp2:2888:3888
server.3=externalIp3:2888:3888
If I configure them with their internal IPs, however, everything works perfectly fine. My guess is that when ZooKeeper starts up, it binds itself to the internal IP of the instance regardless of the configuration. Because of this, when each node tries to look for the other two using the external IPs they were configured with, it's unable to find them.
So my question is: is there any way to make ZooKeeper use the external IP of the machine instead of the internal one? I'm relatively new to the Google Cloud Platform and to setting up infrastructure in general, so I'm not really sure whether IP forwarding, firewall rules, or something else would achieve what I'm trying to do (assuming it's even possible).
According to the Zookeeper 3.4.5 docs, you need to specify the following option:
clientPortAddress
New in 3.3.0: the address (ipv4, ipv6 or hostname) to listen for client connections; that is, the address that clients attempt to connect to. This is optional, by default we bind in such a way that any connection to the clientPort for any address/interface/nic on the server will be accepted.
Although it appears that, by default, it will bind to all available IPs on the server, so theoretically it should have worked as you set it up.
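For completeness, a sketch of how that option would slot into the zoo.cfg above (externalIp1 stands for this particular node's own external address, as in the question's config):

# zoo.cfg on node 1
clientPort=2181
clientPortAddress=externalIp1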
Important note: if ZooKeeper instances talk to each other using external IPs rather than internal IPs, you will be charged for data egress, whereas if all communication stays on the internal network (using internal IPs) within the same zone, you won't.