Single channel LoRaWAN gateway systematically accepts just one packet out of three sent by the node

I just built and tested a single channel LoRaWAN gateway, connected to TTN as per the instructions of thing4U/esp-1ch-gateway, with a single channel node; both are based on the TTGO-ESP32Lora board and both were eventually configured on www.thethingsnetwork.org. Everything works nicely, but I do not understand why, despite the node sending data every 2 minutes, the gateway receives just one packet out of three. So if I transmit, only packets 0, 3, 6, 9, etc. get through, and the data at TTN is updated every 6 minutes instead of every 2.

That is correct. LoRaWAN uses the first three channels as the main channels for communication; more channels can be configured for use. These three exist in part so that they can always be used for OTAA.
So if you have a single channel gateway that is listening on 868.100 MHz and your node sends on 868.300 MHz, then your gateway won't hear it because it is listening on the wrong frequency.
There are several solutions:
Configure your node to only send on the single frequency your gateway is listening on (a sketch of this option follows below this list).
Add two more single channel gateways that listen on the other main frequencies.
Add a multi channel gateway.
The frequencies above are only meant as an example; they apply to the EU region and may differ in your own region, but the principle still stands.
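For the first option, here is a minimal sketch of what pinning the node to one channel could look like with the widely used arduino-lmic library (the frequency plan, channel indices and SF7 data rate below are assumptions for EU868; adapt them to your gateway's configuration):

```cpp
// Hypothetical setup snippet for an EU868 node built on arduino-lmic.
// It disables every default channel except channel 0 (868.1 MHz) so that
// all uplinks land on the single frequency the gateway listens on.
#include <lmic.h>
#include <hal/hal.h>

void pinToSingleChannel() {
    // EU868 defines channels 0..8 by default; keep only channel 0.
    for (int ch = 1; ch <= 8; ch++) {
        LMIC_disableChannel(ch);
    }
    // Fix the data rate / TX power so they match the gateway's settings
    // (SF7 at 14 dBm is an assumption, not a requirement).
    LMIC_setDrTxpow(DR_SF7, 14);
}
```

Note that this turns the node into a test-only device: a compliant LoRaWAN node is expected to hop over all enabled channels, which is exactly why a single channel gateway only hears a fraction of its packets.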

Related

OMNeT++ application sends multiple streams

Let's say I have a car with different sensors: several cameras, a LIDAR and so on. The data from these sensors is going to be sent to some host over a 5G network (omnetpp + inet + simu5g). For video it is something like 5000 packets of 1400 bytes each, for the LIDAR 7500 packets of 1240 bytes, and so on. Each flow is encoded in UDP packets.
So in my OMNeT++ module's handleMessage method I have two sendTo calls, each scheduled "as soon as possible", i.e., with no delay, which corresponds to the idea of multiple parallel streams. How does OMNeT++ handle situations where it needs to send two different packets at the same time from the same module to the same module (some client which receives the sensor data streams)? Does it create some inner buffer on the sender or receiver side, thereby really allowing only one packet to be sent per handleMessage call, or is that wrong? I want to optimize the data transmission and play with packet sizes and maybe with sending intervals, so I want to know how OMNeT++ handles multiple streams at the same time, because if it actually buffers, then maybe it makes sense to form a single packet from multiple streams, where each such packet consists of a certain amount of data from each stream.
There is some confusion here that needs to be clarified first:
OMNeT++ is a discrete event simulator framework. An OMNeT++ model contains modules that communicate with each other using OMNeT++ API calls such as send() and handleMessage(). A call to send() just queues the provided message into the future event queue (an internal, time-ordered queue). So if you send more than one packet in a single handleMessage() call, they will be queued in that order, and the packets will be delivered one by one to the requested destination modules when the requested simulation time is reached. You can therefore send as many packets as you wish, and those packets will be delivered one by one to the destination's handleMessage() method.
But beware! Even if the different packets are delivered one by one, sequentially, in the program's logic, they can still be delivered simultaneously in terms of simulation time. There are two time concepts here: real time, which describes the execution order of the code, and simulation time, which describes the time that passes from the point of view of the simulated system. That is why OMNeT++, although it is a single-threaded application that executes all events sequentially, can still simulate any number of systems running in parallel.
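To make the event-queue behaviour concrete, here is a minimal sketch (the module name, gate name, packet sizes and interval are made up for illustration, not taken from the question) of a cSimpleModule that sends two packets from a single handleMessage() call; both leave in the same event, i.e. at the same simulation time, but are handled by the kernel as separate future events:

```cpp
// Illustrative OMNeT++ module; all names and numbers are assumptions.
#include <omnetpp.h>

using namespace omnetpp;

class SensorStreamer : public cSimpleModule
{
  protected:
    virtual void initialize() override {
        // Kick off the first "send a batch" self-event.
        scheduleAt(simTime(), new cMessage("sendBatch"));
    }

    virtual void handleMessage(cMessage *msg) override {
        // Two packets handed to the kernel in one event: they are inserted
        // into the future event set one after the other, but both carry the
        // same simulation timestamp.
        send(new cPacket("videoChunk", 0, 1400 * 8), "out");
        send(new cPacket("lidarChunk", 0, 1240 * 8), "out");

        // Schedule the next batch 10 ms later (arbitrary interval).
        scheduleAt(simTime() + 0.01, msg);
    }
};

Define_Module(SensorStreamer);
```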
BUT:
You are not modeling directly with OMNeT++ modules, but rather using the INET Framework, which is a model library created specifically to simulate internet protocols and networks. INET's core entity is a node, which is something that has network interface(s) (and the queues belonging to them). Transmissions between nodes are properly modeled, and only a single packet can travel on an Ethernet line at a time. Other packets must wait in the network interface queue for an opportunity to be transmitted from there.
This is actually the core problem of Time-Sensitive Networking (TSN): given a lot of pre-defined data streams in a network, how do the various packets interfere with and affect each other, how do they change the delay and jitter statistics of the various streams at the destination, and how can you configure the source and network gate scheduling to achieve some desired upper bounds on those statistics?
The INET master branch (to be released as INET 4.4) contains a lot of TSN code, so I highly recommend trying it if you want to model in-vehicle networks.
If you are not interested in in-vehicle communication, but rather want to stream some data over 5G, then TSN is not what you need; either way, you should NOT start to multiplex/demultiplex data streams at the application level. The communication layers below your UDP application will fragment/defragment and queue the packets exactly as it is done in the real world. You will not gain anything by doing mux/demux at the application layer.

LoRa and LoRaWAN

I am trying to understand LoRa and LoRaWAN technologies.
I want to establish communication between my end nodes and a gateway, and I want the gateways to communicate with my own non-LoRaWAN server.
What should the rules be? For example, must the uplink/downlink count per day, the duty cycle, etc. comply with ETSI EU863-870 or with the LoRa Alliance specifications?
Basically, after a device has joined a network it is free to send or receive telegrams in its own time frames. The receive windows of the end nodes differ depending on the class type.
With respect to rules:
All end nodes should adhere to a 1% duty cycle (a worked example follows below).
The regional specifications can give you more insight into the rules. You can find the information here - https://lora-alliance.org/resource_hub/rp2-101-lorawan-regional-parameters-2/
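As a rough illustration of what a 1% duty cycle means in practice, the sketch below computes the mandatory off-time and the resulting uplink budget; the 61.7 ms time-on-air is an assumed value for a small SF7/125 kHz payload, so recalculate it for your own payload size and data rate:

```cpp
#include <iostream>

int main() {
    // Assumed time-on-air of one uplink (e.g. a small SF7/125 kHz frame).
    const double timeOnAirSec = 0.0617;
    const double dutyCycle    = 0.01;          // 1% limit in the sub-band

    // After each transmission the device must stay silent in that sub-band
    // for Toff = ToA * (1/dutyCycle - 1), i.e. 99 times the airtime at 1%.
    const double offTimeSec = timeOnAirSec * (1.0 / dutyCycle - 1.0);
    const double maxPerHour = 3600.0 / (timeOnAirSec + offTimeSec);

    std::cout << "Minimum off-time after each uplink: " << offTimeSec << " s\n";
    std::cout << "Maximum uplinks per hour in this sub-band: " << maxPerHour << "\n";
    return 0;
}
```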

SCTP : transmitting with both interfaces at the same time

On my machine, I have 2 interfaces connected to another machine that has 2 interfaces as well. I want to use both interfaces at the same time to transfer data. From SCTP's point of view, each machine is an endpoint, so I used a one-to-one socket. On the server side, I tried binding INADDR_ANY as well as bind() for the first address and bindx() for the second. On the client side, I tried connect() and connectx(). Whatever I tried, SCTP uses only one of the two interfaces at a given time.
I also tested the SCTP mode of iperf and the test app in the source code. Nothing works.
What am I missing here? Do you have to send each packet by hand from one address or the other, and to one address or the other?
Surely there must be a function where you can build several streams, where each stream allows communication between a specific pair of addresses; then, when you send a packet, SCTP automatically chooses which stream to send it on.
Thanks in advance!
What you are asking for is called concurrent multipath transfer (CMT), a feature that isn't supported by SCTP (at least not per RFC 4960).
As described in RFC 4960, by default SCTP transmits data over the primary path. Other paths are monitored with heartbeats and are only used when transmission over the primary path fails.
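To illustrate what standard SCTP does give you on a multihomed host, here is a sketch (Linux with lksctp-tools is assumed; the addresses, port and missing error handling are placeholders) that binds both local interfaces with sctp_bindx() and then picks which peer address is the primary path; the second path is only heartbeat-monitored and used for failover:

```cpp
// Sketch of a multihomed one-to-one SCTP client on Linux (lksctp-tools).
// Addresses are placeholders; real code needs error handling on every call.
#include <cstring>
#include <arpa/inet.h>
#include <netinet/sctp.h>
#include <sys/socket.h>
#include <unistd.h>

int main() {
    int sd = socket(AF_INET, SOCK_STREAM, IPPROTO_SCTP);

    // Bind both local addresses to the one endpoint.
    sockaddr_in locals[2] = {};
    locals[0].sin_family = AF_INET;
    inet_pton(AF_INET, "192.168.1.10", &locals[0].sin_addr);
    locals[1].sin_family = AF_INET;
    inet_pton(AF_INET, "10.0.0.10", &locals[1].sin_addr);
    sctp_bindx(sd, reinterpret_cast<sockaddr *>(locals), 2, SCTP_BINDX_ADD_ADDR);

    // Connect to the peer; the peer's other address is learned during setup.
    sockaddr_in peer = {};
    peer.sin_family = AF_INET;
    peer.sin_port = htons(5000);
    inet_pton(AF_INET, "192.168.1.20", &peer.sin_addr);
    connect(sd, reinterpret_cast<sockaddr *>(&peer), sizeof(peer));

    // Optionally choose which peer address is the primary path. All data
    // flows over this path; the other one is only a failover candidate.
    sctp_prim prim = {};
    std::memcpy(&prim.ssp_addr, &peer, sizeof(peer));
    setsockopt(sd, IPPROTO_SCTP, SCTP_PRIMARY_ADDR, &prim, sizeof(prim));

    // ... send()/recv() as usual, all carried over the primary path ...
    close(sd);
    return 0;
}
```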

Simulating multiple Modbus slave devices using Node-RED

I've managed to simulate a single slave device on my Raspberry Pi with Node-RED, using function nodes to send random values to the Modbus flex server. However, now I want to be able to simulate multiple Modbus slave devices on the same port number, and I'm unsure how to do this.
I've tried creating another Modbus flex server with the same port number, but this causes the whole Node-RED application to crash when it's deployed. Secondly, I've tried using different Modbus flex-write nodes to simulate different slave devices, but I'm unsure whether this is correct and, if so, how I'd configure them to appear as different slave devices. So far my Raspberry Pi appears as slave 1, but I'm unsure where this comes from. I'm guessing it has to do with the unit-id of the Modbus flex server, but when I change the unit-id to a different number and type that number as the address in the master, it says there is no connection.
In conclusion, is it possible to use a single Raspberry Pi to simulate multiple slave devices in Node-RED using node-red-contrib-modbus, and if so, how do you do it?
The concept of slaves in Modbus TCP differs somewhat from Modbus RTU, as set out in the Modbus TCP spec:
The MODBUS ‘slave address’ field usually used on MODBUS Serial Line is replaced by a single byte ‘Unit Identifier’ within the MBAP Header. The ‘Unit Identifier’ is used to communicate via devices such as bridges, routers and gateways that use a single IP address to support multiple independent MODBUS end units.
So there is a difference in terminology between Modbus RTU and TCP, as well as a difference in the intended use of this field. The solution suggested by the spec would be to set up multiple servers on different ports (you cannot run multiple servers on a single port).
Having said that, some TCP-to-RTU gateways (and other devices) use the unit-id as the slave ID, so I'm assuming you are trying to simulate something like this?
The first issue is that there appears to be a bug in the Modbus flex server (reported): when you change the unit-id it is stored as a string rather than a number. If you export the flow you will see something like "unitId": "3"; changing this to "unitId": 3 (no quotes around the 3) and re-importing fixes the issue (so that probably explains why you could not get this working).
Having said that, changing the unit-id like this does not help you, because it only supports one ID. However, if you set the unit-id to 255 then it will listen on all unit-ids (this is a feature of the modbus-serial module used internally). Remember that you will currently need to manually fix the config to get this to work due to the bug.
Having done that, you can do something like the following to respond to requests to different unit-ids (the example will return the unit-id (1 or 2) for all addresses, so it is not useful in itself, but it shows the concept):

What is the fastest (lowest latency) messaging queue solution for sending a message from host A to host B?

OK folks, NOT counting Ethernet speed (InfiniBand), kernel bypass or any other fancy stuff, just plain TCP/IP (TCP/UDP over Ethernet) networking: what is the fastest messaging queue implementation that can deliver a message from host A to host B?
Let's assume 10 Gigabit Ethernet cards connecting both machines, with an up-to-date architecture and CPUs. What latency in microseconds are we talking about here for a 1472-byte message (MTU minus IP/UDP headers)?
As #Sachin described very well, what I am looking for is the messaging queue and the latency number to send a message from A to B like below:
Host A <-------TCP-------> Messaging queue (process, route, etc) <-------TCP-------> Host B
If you do not require a broker in between, 0MQ gave us the best performance (you will need to test the numbers on your platform/use case). If you use a broker in between, both ActiveMQ and RabbitMQ performed in the same range. Using Redis as a messaging server did not hold up for us.
If you are not using a messaging server, options such as Netty, JGroups, etc. might be useful (not sure about your programming language).
You could look into reliable UDP as well if you are going with straight socket connectivity.
Hope it helps.
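If you go the 0MQ route, the usual way to get a number for your own hardware is a simple REQ/REP round-trip test. Below is a minimal sketch using the libzmq C API (the endpoint, message size and iteration count are placeholders, and it assumes a matching REP echo server is running on host B); one-way latency is then estimated as half the measured round-trip:

```cpp
// Minimal ZeroMQ round-trip latency probe; run a REP socket on host B that
// echoes every message back. Endpoint and sizes are assumptions.
#include <zmq.h>
#include <chrono>
#include <cstdio>
#include <vector>

int main() {
    void *ctx = zmq_ctx_new();
    void *req = zmq_socket(ctx, ZMQ_REQ);
    zmq_connect(req, "tcp://hostB:5555");      // placeholder endpoint

    std::vector<char> payload(1472, 'x');      // roughly one MTU worth of data
    char reply[1472];

    const int iterations = 10000;
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < iterations; ++i) {
        zmq_send(req, payload.data(), payload.size(), 0);
        zmq_recv(req, reply, sizeof(reply), 0);    // block until the echo
    }
    auto end = std::chrono::steady_clock::now();

    double rttUs = std::chrono::duration<double, std::micro>(end - start).count()
                   / iterations;
    std::printf("avg round-trip: %.1f us, est. one-way: %.1f us\n",
                rttUs, rttUs / 2.0);

    zmq_close(req);
    zmq_ctx_term(ctx);
    return 0;
}
```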
The lower bound would be at least two TCP legs plus the routing time inside the messaging queue server (meaning the delays associated with these):
Host A <-------TCP-------> Messaging queue (process, route, etc) <-------TCP-------> Host B
Of course, if you build in redundancy, fault tolerance, etc., then you are certainly going to be way above this lower bound.
It looks like you are talking about a UDP-based MQ, because you mentioned the MTU. Well, for UDP-based MQs this time is usually measured as the time required to publish a message and see it come back on the message bus. So it is a round-trip time, not a one-way time as you described. This can usually be done in less than 6 microseconds, depending of course on your choice of LAN.