LoRa and LoRaWAN - server

I am trying to understand the LoRa and LoRaWAN technologies.
I want to establish communication between my end nodes and a gateway, and I want the gateways to communicate with my own non-LoRaWAN server.
What rules apply? For example, must the uplink/downlink count per day, duty cycle, etc. comply with ETSI EU863-870 or the LoRa Alliance specifications?

Basically, after a device has joined a network it is free to send and receive telegrams on its own schedule. The receive windows of the end nodes differ depending on the device class.
With respect to rules:
All end nodes should adhere to the 1% duty cycle.
The regional specifications can give you more insights into the rules. You can find the information here - https://lora-alliance.org/resource_hub/rp2-101-lorawan-regional-parameters-2/
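To make the 1% rule concrete, here is a small back-of-the-envelope sketch in C++ (the 61.7 ms time-on-air value is an assumption for a short SF7 / 125 kHz EU868 uplink; plug in the airtime of your own payload and data rate):

#include <cstdio>

int main()
{
    const double dutyCycle     = 0.01;    // 1% per sub-band (EU868)
    const double airtime_s     = 0.0617;  // assumed time-on-air per uplink
    const double secondsPerDay = 86400.0;

    // Total airtime budget per day and the number of uplinks that fit in it.
    double budget_s   = secondsPerDay * dutyCycle;   // 864 s of airtime/day
    double maxUplinks = budget_s / airtime_s;        // ~14000 uplinks/day

    // Equivalent minimum off-time after each transmission:
    // Toff = ToA * (1 / dutyCycle - 1)
    double minOff_s = airtime_s * (1.0 / dutyCycle - 1.0);  // ~6.1 s

    printf("airtime budget: %.0f s/day, max uplinks: %.0f/day, "
           "min off-time: %.2f s\n", budget_s, maxUplinks, minOff_s);
    return 0;
}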

Related

Omnetpp application sends multiple streams

Let's say I have a car with different sensors: several cameras, LIDAR and so on. The data from these sensors is going to be sent to some host over a 5G network (omnetpp + inet + simu5g). For video it is about 5000 packets of 1400 bytes each, for LIDAR 7500 packets of 1240 bytes, and so on. Each flow is encoded in UDP packets.
So in my OMNeT++ module's handleMessage() method I have two send() calls, each scheduled "as soon as possible", i.e., with no delay - that corresponds to the idea of multiple parallel streams. How does OMNeT++ handle the situation when it needs to send two different packets at the same time from the same module to the same module (some client which receives the sensor data streams)? Does it create some inner buffer on the sender or receiver side, thereby really allowing only one packet send per handleMessage() call, or is that wrong? I want to optimize data transmission and play with packet sizes and maybe with sending intervals, so I want to know how OMNeT++ handles multiple simultaneous streams, because if it actually buffers, then maybe it makes sense to form a single packet from multiple streams, with each such packet containing a certain amount of data from each stream.
There is some confusion here that needs to be clarified first:
OMNeT++ is a discrete event simulator framework. An OMNeT++ model contains modules that communicate with each other using OMNeT++ API calls like send() and handleMessage(). Any call to send() just queues the provided message into the future event set (an internal, time-ordered queue). So if you send more than one packet from a single handleMessage() call, they will be queued in that order. The packets will be delivered one by one to the requested destination modules when the requested simulation time is reached. So you can send as many packets as you wish, and those packets will be delivered one by one to the destination's handleMessage() method. But beware! Even if the different packets are delivered one by one sequentially in the program's logic, they can still be delivered simultaneously in terms of simulation time. There are two time concepts here: real time, which describes the execution order of the code, and simulation time, which describes the time passing from the point of view of the simulated system. That is why OMNeT++, while being a single-threaded application that executes events sequentially, can still simulate any number of parallel running systems.
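A minimal sketch of that behavior (assuming the usual OMNeT++ project layout with a matching NED file that declares an output gate "out" connected without a transmission-rate channel, so two sends at the same simtime are legal; module and gate names are made up for illustration): both send() calls happen at the same simulation time, and the future event set merely preserves their program order.

#include <omnetpp.h>
using namespace omnetpp;

class SensorSource : public cSimpleModule
{
  protected:
    virtual void initialize() override {
        // Kick off the periodic sending with a self-message.
        scheduleAt(simTime(), new cMessage("tick"));
    }
    virtual void handleMessage(cMessage *msg) override {
        // Two "parallel" streams: both packets leave at the same simtime.
        send(new cPacket("video", 0, 1400 * 8), "out");  // 1400 B in bits
        send(new cPacket("lidar", 0, 1240 * 8), "out");  // 1240 B in bits
        scheduleAt(simTime() + 0.001, msg);              // next burst in 1 ms
    }
};

Define_Module(SensorSource);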
BUT:
You are not modeling directly with OMNeT++ modules, but rather using the INET Framework, which is a model library created to simulate internet protocols and networks. INET's core entity is a node, which is something that has network interface(s) (and queues belonging to them). Transmissions between nodes are properly modeled, and only a single packet can travel on an Ethernet line at a time. Other packets must wait in the network interface queue for an opportunity to be transmitted.
This is actually the core problem of Time-Sensitive Networking: given a lot of pre-defined data streams in a network, how do the various packets interfere with and affect each other, how do they change the delay and jitter statistics of the various streams at the destination, and how can you configure the source and network gate scheduling to achieve desired upper bounds on those statistics?
The INET master branch (to be released as INET 4.4) contains a lot of TSN code, so I highly recommend trying it if you want to model in-vehicle networks.
If you are not interested in in-vehicle communication, but rather want to stream some data over 5G, then TSN is not what you need, but you should NOT start to multiplex/demultiplex data streams at the application level. The communication layers below your UDP application will fragment/defragment and queue the packets exactly as it is done in the real world. You will not gain anything by doing mux/demux at the application layer.

Single Channel LoRaWAN systematically accepts just one packet out of 3 sent by node

I just built and tested a single-channel LoRaWAN gateway which is connected to TTN, following the instructions of thing4U/esp-1ch-gateway, with a single-channel node, both based on the TTGO ESP32 LoRa board, and eventually configured both on www.thethingsnetwork.org. Everything works nicely, but I do not understand why, despite the node sending data every 2 minutes, the gateway receives just one packet out of three. Of the packets I transmit, only packets 0, 3, 6, 9, etc. arrive, so the data at TTN is updated every 6 minutes instead of every 2.
That is correct. LoRaWAN uses the first three channels as the main channels for communication; more can be configured for use. These three exist partly so that they can always be used for OTAA.
So if you have a single-channel gateway and it is listening on 868.100 MHz and your node sends on 868.300 MHz, then your gateway won't hear it, because it is listening on the wrong frequency.
There are several solutions:
Configure your node to only send on the single frequency your gateway is listening on (see the sketch below).
Add two more single-channel gateways that listen on the other main frequencies.
Add a multi-channel gateway.
The frequencies above are only meant as an example; they apply to the EU region and may differ in yours, but the principle still stands.
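For the first option, a hedged sketch using the Arduino LMIC library (assuming an EU868 build; the channel numbering and LMIC_disableChannel() come from the LMIC API): disable every channel except the one your gateway listens on.

#include <lmic.h>

// Pin the node to channel 0 (868.100 MHz in the EU868 band) by disabling
// the other channels; call this after LMIC_reset() during setup.
void forceSingleChannel()
{
    for (int ch = 1; ch <= 8; ch++) {   // EU868 defines channels 0..8
        LMIC_disableChannel(ch);
    }
}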

How to effectively establish point to point channel using ZeroMQ?

I am having trouble establishing an asynchronous point-to-point channel using ZeroMQ.
My approach to building a point-to-point channel was to create one ZMQ_PAIR socket for each peer in the network. Because a ZMQ_PAIR socket ensures an exclusive connection between two peers, a peer needs as many sockets as it has peers. My first attempt is shown in the following diagram, which represents the pairing connections between two peers.
But the problem with the above approach is that each pairing socket needs a distinct bind address. For example, if four peers are in the network, then each peer needs at least three (TCP) addresses to bind for the rest of the peers, which is very unrealistic and inefficient.
( I assume that each peer has exactly one unique address among the others, e.g. tcp://*:5555 )
It seems that there is no way other than using different patterns that involve some set of message brokers, such as XREQ/XREP.
( I intentionally avoid a broker-based approach, because my application will exchange messages heavily between peers, which would often result in a performance bottleneck at the broker processes. )
But I wonder whether anybody uses ZMQ_PAIR sockets to efficiently build point-to-point channels? Or is there a way to avoid needing distinct host IP addresses for multiple ZMQ_PAIR sockets to bind to?
Q: How to effectively establish ... well,
Given the above narrative, the story of "How to effectively ..." ( where a metric of what and how actually measures the desired effectiveness may get some further clarification later ) turns into another question - "Can we re-factor the ZeroMQ signalling / messaging infrastructure so as to work without using as many IP-address:port#-s as a tcp://-transport-class based topology would actually need?"
Given the explicitly expressed limit of having no more than just one IP:PORT# per host/node ( this being the architecture's / design's most expensive resource ), one will have to overcome a lot of trouble on the way forward.
It is fair to note that any such attempt will come at an extra cost. There is no magic wand to "bypass" the principal limit expressed above, so get ready to indeed pay the costs.
It reminds me of one project in TELCO, where a distributed system was operated in a similar manner with a similar original motivation. Each node had an ssh/sshd service set up, where local port forwarding exposed just one publicly accessible IP:PORT# access point, and all the rest was implemented "inside" a mesh of topological links going through ssh tunnels - not just for the encryption service, but for the comfort of maintaining local-port forwarding towards specific remote ports as a means of setting up and operating exclusive peer-to-peer links between all the service nodes, while still having just a single public access IP:PORT# per node.
If no other approach seems feasible ( PUB/SUB being ruled out either because traffic actually flows to every terminal node in older ZeroMQ/API versions, where topic filtering is processed on the SUB side - something neither the security nor the network department will like to support - or because of the concentrated workloads and immense resource needs on the PUB side in newer ZeroMQ/API versions, where the topic filter is processed on the sender's side; and addressing, dynamic network peer (re-)discovery, maintenance, resource planning, fault resilience, ... - no, there is no easy shortcut anywhere near to just grab and (re-)use ), the above-mentioned "stone-age" ssh/sshd port forwarding, with ZeroMQ running against such local ports only, may save you - a sketch of that setup follows.
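To illustrate the idea, a minimal sketch with the plain libzmq C API (the tunnel command, port numbers, and peer names are assumptions): after something like `ssh -N -L 6001:localhost:5555 peerB` is running, the ZMQ_PAIR socket only ever talks to localhost, so each host still exposes just a single public IP:PORT#.

#include <zmq.h>
#include <string.h>

int main(void)
{
    void *ctx  = zmq_ctx_new();
    void *pair = zmq_socket(ctx, ZMQ_PAIR);

    // Connect to the local end of the ssh tunnel instead of peer B's real
    // address; sshd on peer B forwards to its zmq_bind() on port 5555.
    zmq_connect(pair, "tcp://127.0.0.1:6001");

    const char msg[] = "hello peer B";
    zmq_send(pair, msg, strlen(msg), 0);

    char buf[256];
    zmq_recv(pair, buf, sizeof buf, 0);  // blocking read, assumes a reply

    zmq_close(pair);
    zmq_ctx_destroy(ctx);
    return 0;
}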
Anyway - Good Luck on the hunt!

Is there a maintained alternative for libnids?

As libnids seems to be two years old and there are no current updates, does anyone know of an alternative to libnids, or a better library? It seems to drop packets at higher speeds, more than 1 Gbit/s.
Moreover, it has no support for 64-bit IP addresses.
An alternative to libnids is Bro. It comes with a robust TCP reassembler which has been thoroughly tested and used by the network security monitoring community over the years. It ships with a bunch of protocol analyzers for common protocols, such as HTTP, DNS, FTP, SMTP, and SSL.
Bro is "the Python of network processing:" it has its own domain-specific scripting language with first-class types and functions for IP addresses (both v4 and v6), subnets, ports. The programming style has an asynchronous event-based flavor: users write callback functions for events that reflect network activity. The analysis operates at connection granularity. Here is an example:
event connection_established(c: connection)
{
    if ( c$id$orig_h == 1.2.3.4 && c$id$resp_p == 31337/tcp )
        # IP 1.2.3.4 successfully connected to the remote host at port 31337.
        print fmt("%s connected to %s:%s", c$id$orig_h, c$id$resp_h, c$id$resp_p);
}
Moreover, Bro supports a cluster mode that allows for line-rate monitoring of 10 Gbps links. Because most analyses do not require sharing of inter-connection state, Bro scales very well across cores (using PF_RING) as well as across multiple nodes. There exist Bro installations with >= 140 nodes. A typical deployment looks as follows:
[diagram of a typical Bro cluster deployment (source: bro.org)]
Due to this high scalability, there is typically no longer a need to grapple with low-level details and fine-tune C implementations. Put differently, with Bro you spend your time working on the analysis and not on the implementation.

What is the fastest (lowest latency) messaging queue solution for sending a message from host A to host B?

OK folks, NOT counting exotic interconnects (InfiniBand), kernel bypass, or any other fancy stuff - just plain TCP/IP (TCP/UDP over Ethernet) networking. What is the fastest messaging queue implementation that can deliver a message from host A to host B?
Let's assume 10 Gigabit Ethernet cards connect both machines, with up-to-date architectures and CPUs. What latency in microseconds are we talking about here for a 1472-byte message (MTU minus IP/UDP headers)?
As @Sachin described very well, what I am looking for is the messaging queue and the latency number for sending a message from A to B, like below:
Host A <-------TCP-------> Messaging queue (process, route, etc) <-------TCP-------> Host B
If you do not require a broker in between, 0MQ gave us the best performance (you will need to test the numbers on your platform / use case). If using a broker in between, both ActiveMQ and RabbitMQ performed in the same range. Using Redis as a messaging server did not hold up for us.
If you are not using a messaging server, options such as Netty, JGroups, etc. might be useful (not sure about your programming language).
You could also look into reliable UDP if going with straight socket connectivity.
Hope it helps.
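As a starting point for producing your own numbers, here is a minimal round-trip probe with libzmq (a sketch, not a benchmark harness: the endpoint is an assumption, the remote side is assumed to run a REP socket that echoes the message back, and one-way latency is estimated as half the round trip; in a real test you would warm up the connection and average many iterations, and ZeroMQ also ships local_lat / remote_lat perf tools for exactly this):

#include <zmq.h>
#include <chrono>
#include <cstdio>
#include <vector>

int main()
{
    void *ctx = zmq_ctx_new();
    void *req = zmq_socket(ctx, ZMQ_REQ);
    zmq_connect(req, "tcp://hostB:5555");   // assumed REP echo service

    std::vector<char> payload(1472, 'x');   // MTU minus IP/UDP headers
    char buf[2048];

    auto t0 = std::chrono::steady_clock::now();
    zmq_send(req, payload.data(), payload.size(), 0);
    zmq_recv(req, buf, sizeof buf, 0);
    auto t1 = std::chrono::steady_clock::now();

    double rtt_us = std::chrono::duration<double, std::micro>(t1 - t0).count();
    std::printf("round trip: %.1f us, one-way estimate: %.1f us\n",
                rtt_us, rtt_us / 2.0);

    zmq_close(req);
    zmq_ctx_destroy(ctx);
    return 0;
}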
The lower bound would be at least 2 TCP connections plus the routing time inside the messaging queue server (meaning the delays associated with these):
Host A <-------TCP-------> Messaging queue (process, route, etc) <-------TCP-------> Host B
Of course, if you build in redundancy, fault tolerance, etc., then you are going to end up well above this lower bound.
It looks like you are talking about a UDP-based MQ, because you mentioned the MTU. Well, for UDP-based MQs this time is usually measured as the time required to publish a message and see it back on the message bus. So it is a round-trip time, not the one-way time you described. This can usually be done in less than 6 microseconds, depending of course on your choice of LAN.