Receiving or transmitting other protocols alongside LoRaWAN - specifications

There is a description in the LoRaWAN Specification 1.0.2:
3.3.7 Receiving or transmitting other protocols
The node may listen or transmit other protocols or do any transactions between the LoRaWAN transmission and reception windows, as long as the end-device remains compatible with the local regulation and compliant with the LoRaWAN specification.
I'm puzzled about this “between the LoRaWAN transmission and reception windows”. Does it mean that we could do any transactions in the slot which is after the LoRaWAN transmission and before the reception windows?

To the best of my knowledge and understanding, this section is talking about using other protocols, such as Bluetooth, WiFi, ZigBee and so on.
Does it mean that we could do any transactions in the slot which is after LoRaWAN transmission and before reception windows?
Yes, that means that you can use other protocols in these slots, as long as the "other protocols" you are using do not violate any local regulations.
BTW: I think you can even use other protocols during LoRa transmissions, but you should be aware of the interference it could bring to your LoRa signal.

“between the LoRaWAN transmission and reception windows”
Every transmission is followed by two reception windows. By default, in the EU868 region, the delay between transmission and reception is 5 seconds for a join and 1 second for a normal uplink.
In this window you may do other things, such as blink an LED or anything you like. The current consumption over time looks like this:

current
  ▲
  │ ┌───────┐
  │ │       │
  │ │  tx   │
  │ │       │           rx1       rx2
  │ │       │           ┌──┐      ┌─┐
  │ │       │   idle    │  │      │ │
  ├─┘       └───────────┘  └──────┘ └──────► time

Does it mean that we could do any transactions in the slot which is after LoRaWAN transmission and before reception windows?
That's exactly what it means! As long as you comply with the specification - i.e. open the Rx1 reception window (and Rx2 if nothing is received during Rx1) on time and with the right parameters - you can do whatever you want outside of these slots.
For instance, if you look at the SX126x description, you'll see that the chip is capable of both LoRa and GFSK modulation. Between Tx and Rx1 (and between Rx1 and Rx2, if it occurs), you can interleave a transmission/reception using GFSK modulation with the same chip.
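For illustration only, here is a rough sketch of that interleaving against a hypothetical radio driver. The Radio trait and every name in it are assumptions invented for this sketch, not a real SX126x driver API:

// Hypothetical driver API, invented for this sketch; real SX126x drivers differ.
trait Radio {
  def setLoRaMode(): Unit
  def setGfskMode(): Unit
  def transmit(payload: Array[Byte]): Unit
  def receive(timeoutMs: Long): Option[Array[Byte]]
}

def uplinkWithGfskInterleave(radio: Radio,
                             loraUplink: Array[Byte],
                             gfskPayload: Array[Byte],
                             rx1DelayMs: Long = 1000): Unit = {
  radio.setLoRaMode()
  radio.transmit(loraUplink)                       // LoRaWAN Tx
  val rx1At = System.currentTimeMillis() + rx1DelayMs

  // Use the idle gap before Rx1 for a GFSK exchange, keeping a safety
  // margin so the radio can be switched back to LoRa in time.
  radio.setGfskMode()
  radio.transmit(gfskPayload)
  radio.receive(math.max(0, rx1At - System.currentTimeMillis() - 100))

  // Rx1 must still open on time and with the right parameters.
  radio.setLoRaMode()
  radio.receive(500)                               // Rx1 window
}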

Related

Synchronization of Named pipes server and clients

I want to send data between 1 server (overlapped) and 3 clients using named pipes. At a high level, I am using the named pipe to toggle 3 different GPIO pins on a microcontroller. When I do that, the first client is fast, the second client is slower, and the third client is slowest.
Speed: Client 1 > Client 2 > Client 3
I want the 3 clients to run at the same speed, or in synchronization.

Describe how you communicate with an external peripheral device on the I2C bus

I am trying to summarize the general description and can't come up with a way to say it. How do you communicate with an external peripheral device on the I2C bus? Maybe with steps.
There is plenty of material available throughout the web. For example, you will find good information on https://i2c.info/. Also, if you look at the datasheets of microcontrollers like the ATmega328P, you can find very detailed descriptions.
The usual procedure for reading a register looks like this (a code sketch follows the list):
Master sends START condition (HIGH to LOW transition of SDA while SCL is HIGH)
Master sends I2C device address (usually a 7 bit address + bit0 = 0 to write)
Slave sends: ACK
Master sends the I2C register address that you want to read (8 bits)
Slave sends: ACK
Master sends repeated START (HIGH to LOW transition of SDA while SCL is HIGH)
Master sends I2C device address (7 bit address + bit0 = 1 to read)
Slave sends: ACK
Slave sends: MSB of the requested register
Master sends: ACK
Slave sends: LSB of the requested register (if the register actually contains more than one byte)
Master sends: NACK (to inform the slave that it has received all expected data)
Master sends STOP (a LOW to HIGH transition of SDA while SCL is HIGH)
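As a sketch of that sequence in code, against a hypothetical low-level I2cBus interface (the trait and its method names are invented for illustration, not a real library API):

// Hypothetical low-level interface, invented for this sketch.
trait I2cBus {
  def start(): Unit               // START or repeated START condition
  def stop(): Unit                // STOP condition
  def writeByte(b: Int): Boolean  // returns true if the slave ACKed
  def readByte(ack: Boolean): Int // reads one byte; master sends ACK or NACK
}

// Read a 16-bit register, following the step list above.
def readRegister16(bus: I2cBus, devAddr: Int, regAddr: Int): Int = {
  bus.start()                                 // START
  require(bus.writeByte((devAddr << 1) | 0))  // device address + write bit, expect ACK
  require(bus.writeByte(regAddr))             // register address, expect ACK
  bus.start()                                 // repeated START
  require(bus.writeByte((devAddr << 1) | 1))  // device address + read bit, expect ACK
  val msb = bus.readByte(ack = true)          // MSB, master ACKs
  val lsb = bus.readByte(ack = false)         // LSB, master NACKs: all data received
  bus.stop()                                  // STOP
  (msb << 8) | lsb
}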

How much higher is the packet delivery rate of MQTT compared to CoAP?

I would like to know about the comparison of the packet delivery rate between MQTT and CoAP transmission. I know that TCP is more reliable than UDP, so MQTT should have a higher packet delivery rate. I just want to know: if 2000 packets are sent using both protocols separately, what would be the approximate percentage in the two cases?
Please help with an example if possible.
If you dig a little, you will find that both TCP and UDP ultimately send IP messages, and some of these messages may be lost. For TCP, retransmission is handled by the TCP protocol without your influence; that works quite well in many cases. For CoAP, when you use CON (confirmable) messages, CoAP does the retransmission for you, so again not much is lost.
When it comes to transmissions with more message loss (e.g. bad connectivity), the reliability may also depend on the amount of data. If it fits into one IP message, the probability that it reaches the destination is higher than the probability of 4 messages all reaching their destination. That is where the difference starts:
Using TCP requires that all the messages reach the destination without gaps (e.g. 1, 2, (drop 3), 4 will not work). CoAP will deliver the messages even with gaps. It depends on your application whether you can benefit from that or not.
I've been testing CoAP over DTLS 1.2 (Connection ID) for almost a year now, using an Android phone and just moving around sending requests (about 400 bytes) over WiFi and the mobile network. It works very reliably. Current statistics: 2000 requests, 143 retransmissions, 4 lost. Please note: 4 lost mainly means "no connection", so rest assured that TCP would score below that, especially when moving around, where new TCP/TLS handshakes frequently become required.
So my conclusion:
If you have a stable network connection, both should work.
If you have a less stable network connection and gaps are not acceptable by your application, both have trouble.
If you have a less stable network connection and gaps are acceptable by your application, then CoAP will do a better job.
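As a small illustration of the CON retransmission behaviour described above, a minimal sketch using the Eclipse Californium Java client could look like this (the URI points at the public coap.me test server; treat it as an assumption and adjust for your setup):

import org.eclipse.californium.core.CoapClient

val client = new CoapClient("coap://coap.me/test")
client.useCONs() // confirmable messages: Californium retransmits until ACKed

val response = client.get()
if (response != null)
  println(s"delivered: ${response.getResponseText}")
else
  println("lost: all retransmission attempts timed out")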

How to read the full range of Modbus RTU holding registers (addresses 40001:49999)?

I'm using j2mod to communicate with hardware over Modbus RTU, and my goal is to read holding registers from address 40001 to 49999.
The problem is that the maximum number of registers in a Modbus frame is 125 per request,
and I want to read almost 10000 registers. If I use a for loop where each iteration reads only 125 registers, the full scan cycle will take far too long.
So what are the best practices for this case?
Regards
Hani
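For what it's worth, a chunked read with j2mod might look like the sketch below; the port name, baud rate, and unit ID are assumptions, and holding register 40001 maps to protocol address 0. Since the serial line itself is usually the bottleneck, raising the baud rate or polling only the blocks that actually change is often the practical fix:

import com.ghgande.j2mod.modbus.Modbus
import com.ghgande.j2mod.modbus.facade.ModbusSerialMaster
import com.ghgande.j2mod.modbus.util.SerialParameters

val params = new SerialParameters()
params.setPortName("/dev/ttyUSB0") // assumption: adjust to your serial port
params.setBaudRate(115200)         // assumption: a higher baud rate shortens the scan
params.setEncoding(Modbus.SERIAL_ENCODING_RTU)

val master = new ModbusSerialMaster(params)
master.connect()

val unitId = 1    // assumption: slave unit id
val total  = 9999 // 40001..49999 -> protocol addresses 0..9998
val chunk  = 125  // Modbus limit per read request

// Read the whole range in 125-register chunks.
val values = (0 until total by chunk).flatMap { ref =>
  val count = math.min(chunk, total - ref)
  master.readMultipleRegisters(unitId, ref, count).map(_.getValue)
}

master.disconnect()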

Akka Streams: How do I model capacity/rate limiting within a system of 2 related streams?

Let's say I have a pizza oven and a line of pizzas I need to bake. My oven only has capacity to bake 4 pizzas at a time, and it's reasonable to expect that over the course of a day there are always at least 4 in the queue, so the oven will need to be at full capacity as often as possible.
Every time I put a pizza in the oven, I set a timer on my phone. Once that goes off, I take the pizza out of the oven, give it to whoever wants it, and capacity becomes available.
I have 2 Sources here: one being the queue of pizzas to be cooked, the other being the egg timer that goes off when a pizza has been cooked. There are also 2 Sinks in the system: one being the destination for the cooked pizza, the other being a place to send confirmation that a pizza has been put into the oven.
I'm currently representing these very naively, as follows:
Source.fromIterator(() => pizzas)
  .map(putInOven) // puts in oven and sets a timer
  .runWith(Sink.actorRef(confirmationDest, EndSignal))

Source.fromIterator(() => timerAlerts)
  .map(removePizza)
  .runWith(Sink.actorRef(pizzaDest, EndSignal))
However, these two streams are currently completely independent of each other. The eggTimer functions correctly, removing a pizza whenever it is collected. But it can't signal to the previous flow that capacity has become available. In fact, the first flow has no concept of capacity at all, and will just try to cram pizzas into the oven as soon as they join the line.
What Akka concepts can be used to compose these flows in such a way that the first only takes pizzas from the queue when there's capacity, and that the second flow can "alert" the first one to a change in capacity when a pizza is removed from the oven.
My initial impression is to implement a flow graph like this:
            ┌─────────────┐
         ┌─>│CapacityAvail│>──┐
         │  └─────────────┘   │   ┌─────────────┐    ┌─────────────┐    ┌─────────────┐
         │  ┌─────────────┐   ├──>│     Zip     │>──>│  PutInOven  │>──>│   Confirm   │
         │  │    Queue    │>──┘   └─────────────┘    └─────────────┘    └─────────────┘
         │  └─────────────┘
         │  ┌─────────────┐       ┌─────────────┐
         │  │    Done     │>─────>│  SendPizza  │
         │  └─────────────┘       └─────────────┘
         │         v
         │         │
         └─────────┘
The principle that underpins this is that there is a fixed number of CapacityAvailable objects which populate the CapacityAvail Source. They're zipped with events that come into the pizza queue, meaning that if none are available, no pizza processing starts, as the zip operation will wait for them.
Then, once a pizza is done, a CapacityAvailable object is pushed back into the pool.
The main barrier I'm seeing to this implementation is that I'm not sure how to create and populate a pool for the CapacityAvail source, and I'm also not sure whether a Source can also act as a Sink. Are there any Source/Sink/Flow types that would be a suitable implementation for this?
This use case does not generally map well to Akka Streams. Under the hood an Akka Stream is a reactive stream; from the documentation:
Akka Streams implementation uses the Reactive Streams interfaces
internally to pass data between the different processing stages.
Your pizza example doesn't apply to streams because you have some external event that is just as much a broadcaster of demand as the sink of your stream. The fact that you openly state "the first flow has no concept of capacity at all" means that you aren't using streams for their intended purpose.
It is always possible to use some weird coding ju-jitsu to awkwardly bend streams to solve a concurrency problem, but you'll likely have difficulties maintaining this code down the line. I recommend you consider using Futures, Actors, or plain old Threads as your concurrency mechanism. If your oven has infinite capacity to hold cooking pizzas then there's no need for streams to begin with.
I would also re-examine your entire design, since you are using the passage of clock time as the signaler of demand (i.e. your "egg timer"). This usually indicates a flaw in the process design. If you can't get around this requirement then you should evaluate other design patterns (a sketch follows this list):
Periodic Message Scheduling
Non Thread Block Timeouts
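For instance, a non-thread-blocking timeout can be had from Akka's scheduler. This is only a sketch, and OvenActor is an invented name:

import scala.concurrent.duration._
import akka.actor.{Actor, ActorSystem, Props}

case class PizzaDone(id: Int)

// Instead of blocking on an egg timer, schedule a message to self
// when the pizza goes into the oven.
class OvenActor extends Actor {
  import context.dispatcher
  def receive = {
    case id: Int =>
      context.system.scheduler.scheduleOnce(10.minutes, self, PizzaDone(id))
    case PizzaDone(id) =>
      println(s"take pizza $id out of the oven")
  }
}

val system = ActorSystem("pizzeria")
val oven = system.actorOf(Props[OvenActor], "oven")
oven ! 1 // pizza 1 goes in; PizzaDone(1) arrives 10 minutes later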
You can represent the oven with a mapAsyncUnordered stage with parallelism = 4. Completion of the Future can come from a timer (http://doc.akka.io/docs/akka/2.4/scala/futures.html#After) or from your decision to take the pizza out of the oven for some other reason.
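A minimal sketch of that suggestion, using a fixed bake time via akka.pattern.after as a stand-in for the egg timer:

import scala.concurrent.Future
import scala.concurrent.duration._
import akka.actor.ActorSystem
import akka.pattern.after
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Sink, Source}

implicit val system = ActorSystem("oven")
implicit val materializer = ActorMaterializer()
import system.dispatcher

case class Pizza(id: Int)

// The "oven": at most 4 pizzas bake concurrently. Each Future completes
// when its timer fires, freeing a slot and pulling the next pizza in.
def bake(p: Pizza): Future[Pizza] =
  after(10.seconds, system.scheduler)(Future.successful(p))

Source(1 to 100).map(Pizza(_))
  .mapAsyncUnordered(parallelism = 4)(bake)
  .runWith(Sink.foreach(p => println(s"pizza ${p.id} is done")))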
This is what I ended up using. It's pretty much an exact implementation of the faux-state machine in the question. The mechanics of Source.queue are much clumsier than I would have hoped, but it's otherwise pretty clean. The real sinks and sources are provided as parameters and are constructed elsewhere, so the actual implementation has a little less boilerplate than this.
RunnableGraph.fromGraph(GraphDSL.create() {
  implicit builder: GraphDSL.Builder[NotUsed] =>
    import GraphDSL.Implicits._

    // Our Capacity Bucket. Can be refilled by passing CapacityAvailable objects
    // into capacitySrc. Can be consumed by using capacity as a Source.
    val (capacity, capacitySrc) =
      peekMatValue(Source.queue[CapacityAvailable.type](CONCURRENT_CAPACITY,
        OverflowStrategy.fail))

    // Set initial capacity
    capacitySrc.foreach(c =>
      Seq.fill(CONCURRENT_CAPACITY)(CapacityAvailable).foreach(c.offer))

    // Pull pizzas from the RabbitMQ queue
    val cookQ = RabbitSource(rabbitControl, channel(qos = CONCURRENT_CAPACITY),
      consume(queue("pizzas-to-cook")), body(as[TaskRun]))

    // Take the blocking events stream and turn it into a source
    // (blocking in a separate dispatcher)
    val cookEventsQ = Source.fromIterator(() => oven.events().asScala)
      .withAttributes(ActorAttributes.dispatcher("blocking-dispatcher"))

    // Split the events stream into two sources so 2 flows can be attached
    val bc = builder.add(Broadcast[PizzaEvent](2))

    // Zip pizzas with the capacity pool. Stops cooking pizzas when the oven is full.
    // When cooking starts, send the confirmation back to RabbitMQ.
    cookQ.zip(AckedSource(capacity)).map(_._1)
      .mapAsync(CONCURRENT_CAPACITY)(pizzaOven.cook)
      .map(Message.queue(_, "pizzas-started-cooking"))
      .acked ~> Sink.actorRef(rabbitControl, HostDied)

    // Send the cook events stream into two flows
    cookEventsQ ~> bc.in

    // The first tops up the capacity pool
    bc.out(0)
      .mapAsync(CONCURRENT_CAPACITY)(e =>
        capacitySrc.flatMap(cs => cs.offer(CapacityAvailable))
      ) ~> Sink.ignore

    // The second sends out cooked events
    bc.out(1)
      .map(p => Message.queue(Cooked(p.id()), "pizzas-cooked")
      ) ~> Sink.actorRef(rabbitControl, HostDied)

    ClosedShape
}).run()