My gateway uses the Raspberry Pi and RFM95 configuration and operates at 915 MHz. I am using the single channel packet forwarder code by tftelkamp (https://github.com/tftelkamp/single_chan_pkt_fwd).
My gateway only detects the first message it receives and ignores all messages afterwards. It is still connected to the TTN server but does not receive any more messages.
Can anyone explain what might be the cause of this? Might it be because the RFM95 is sleeping, or because the code is no longer forwarding messages from the transceiver?
Thanks
I experienced a similar issue. Note that your sender uses several different channels, but starts with channel 0. That is why you receive the first message successfully: your single channel receiver can only receive on channel 0. There is a workaround for this issue on the sender side, explained here.
This sounds like your transmitter sends the messages using frequency-hopping, while your receiver does not handle it correctly (or the other way around).
Definition of frequency-hopping found in chapter 4.1.1.8 of Semtech's SX1272 datasheet:
Frequency hopping spread spectrum (FHSS) is typically employed when
the duration of a single packet could exceed regulatory requirements
relating to the maximum permissible channel dwell time. This is most
notably the case in US operation where the 902 to 928 MHz ISM band
which makes provision for frequency hopping operation. [...]
If you're using the LMIC-Arduino library for your node then yes, by default it transmits across a range of channels, while the single_chan_pkt_fwd gateway only receives on the frequency you specify in global_conf.json or the .cpp source (depending on your chosen library).
Assuming you're using the arduino-lmic library, make the changes/additions mentioned in the TTN forum post linked by Rainer, which covers the same issue I ran into.
Also... you'll find this further down the thread: in src > lmic > lmic.c edit the following:
void LMIC_disableChannel (u1_t channel) {
    if( channel < 72+MAX_XCHANNELS )
        //LMIC.channelMap[channel>>4] &= ~(1<<(channel&0xF)); // comment this one
        LMIC.channelMap[channel/16] &= ~(1<<(channel&0xF));   // add this one
}
Then pick a frequency on channel 0 and set that for both node and packet forwarder. Here's a table snip from this page. I went with 902300000 and it's working fine.
"freq": 902300000,
"spread_factor": 7,
We're using DPDK (version 20.08 on Ubuntu 20.04, C++ application) to receive UDP packets at high throughput (>2 Mpps). We use a Mellanox ConnectX-5 NIC (and a Mellanox ConnectX-3 in an older system; it would be great if the solution worked there as well).
In contrast, since we only need to send a few configuration messages, we send those through the default network stack. This way, we can use lots of readily available tools to send configuration messages; however, since all received data is consumed by DPDK, these tools never get back any replies.
The most prominent issue arises with ARP negotiation: the host tries to resolve addresses and the clients respond properly, but these responses are all consumed by DPDK, so the host cannot resolve the addresses and refuses to send the actual UDP packets.
Our idea would be to filter out the high-throughput packets in our application and somehow "forward" everything else (e.g. ARP responses) to the default network stack. Does DPDK have a built-in solution for that? I unfortunately couldn't find anything in the examples.
I've recently heard about packet sockets, which allow injecting packets into SOCK_DGRAM sockets and may be a possible solution. I couldn't find a sample implementation for our use case, though. Any help is greatly appreciated.
Theoretically, if the NIC in question supports the embedded switch feature, it should be possible to intercept the packets of interest in the hardware and redirect them to a virtual function (VF) associated with the physical function (PF), with the PF itself receiving everything else.
The user configures SR-IOV feature on the NIC / host as well as virtualisation support;
For a given NIC PF, the user adds a VF and binds it to the corresponding Linux driver;
The DPDK application is run with the PF ethdev and a representor ethdev for the VF;
To handle the packets in question, the application adds the corresponding flow rules.
The PF (ethdev 0) and the VF representor (ethdev 1) have to be explicitly specified by the corresponding EAL argument in the application: -a [pci:dbdf],representor=vf0.
As for the flow rules, there should be a pair of them.
The first rule's components are as follows:
Attribute transfer (demands that matching packets be handled in the embedded switch);
Pattern item REPRESENTED_PORT with port_id = 0 (instructs the NIC to intercept packets coming to the embedded switch from the network port represented by the PF ethdev);
Pattern items matching on network headers (these provide narrower match criteria);
Action REPRESENTED_PORT with port_id = 1 (redirects packets to the VF).
In the second rule, item REPRESENTED_PORT has port_id = 1, and action REPRESENTED_PORT has port_id = 0 (that is, this rule is inverse). Everything else should remain the same.
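For illustration, here is the first rule expressed in code as a minimal sketch. It assumes DPDK 21.11 or newer (where item/action REPRESENTED_PORT exist), a driver that supports the item (see the caveat below), ethdev 0 as the PF and ethdev 1 as the VF representor; error handling is trimmed. The second, inverse rule is built the same way with the port IDs swapped.

/* Sketch (assumes DPDK >= 21.11): steer ARP frames coming from the
 * network port (PF, ethdev 0) to the VF representor (ethdev 1). */
#include <stdint.h>
#include <rte_byteorder.h>
#include <rte_ether.h>
#include <rte_flow.h>

static struct rte_flow *steer_arp(uint16_t pf_id, uint16_t vf_id)
{
    struct rte_flow_attr attr = { .ingress = 1, .transfer = 1 };

    struct rte_flow_item_ethdev from_pf = { .port_id = pf_id };
    struct rte_flow_item_eth arp_spec = {
        .hdr.ether_type = RTE_BE16(RTE_ETHER_TYPE_ARP),
    };
    struct rte_flow_item_eth arp_mask = {
        .hdr.ether_type = RTE_BE16(0xFFFF),
    };
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT, .spec = &from_pf },
        { .type = RTE_FLOW_ITEM_TYPE_ETH, .spec = &arp_spec, .mask = &arp_mask },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };

    struct rte_flow_action_ethdev to_vf = { .port_id = vf_id };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT, .conf = &to_vf },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

    struct rte_flow_error err;
    return rte_flow_create(pf_id, &attr, pattern, actions, &err);
}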
It is important to note that some drivers do not support item REPRESENTED_PORT at the moment. Instead, they expect that the rules be added via the corresponding ethdevs. This way, for the provided example: the first rule goes to ethdev 0, the second one goes to ethdev 1.
As per the OP update, the adapter in question might indeed support the embedded switch feature. However, as noted above, item REPRESENTED_PORT might not be supported. The rules should be inserted via specific ethdevs. Also, one more attribute, ingress, might need to be specified.
In order to check whether this scheme works, one should be able to deploy a VF (as described above) and run testpmd with the aforementioned EAL argument. In the command line of the application, the two flow rules can be tested as follows:
flow create 0 ingress transfer pattern eth type is 0x0806 / end actions represented_port ethdev_port_id 1 / end
flow create 1 ingress transfer pattern eth type is 0x0806 / end actions represented_port ethdev_port_id 0 / end
Once done, that should pass ARP packets to the VF (and thus to the network interface) in question. The rest of the packets should be seen by testpmd in active forwarding mode (start command).
NOTE: it is recommended to switch to the most recent DPDK release.
For the current use case, the best option is to make use of the DPDK TAP PMD (which is part of Linux DPDK). You can use software or hardware filtering to pick out the specific packets and send them to the desired TAP interface.
A simple way to demonstrate this is to use the DPDK skeleton example:
Build the DPDK example via cd [root folder]/examples/skeleton; make static
Pass the desired physical DPDK NIC using the EAL options: ./build/basicfwd -l 1 -w [pcie id of DPDK NIC] --vdev=net_tap0,iface=dpdkTap
In a second terminal, execute ifconfig dpdkTap 0.0.0.0 promisc up
Use tcpdump to capture ingress and egress packets: tcpdump -eni dpdkTap -Q in and tcpdump -eni dpdkTap -Q out respectively.
Note: you can configure an IP address and set up TC on dpdkTap, and you can run your custom socket programs too. You do not need to invest time in TLDK, ANS, or VPP; for your requirement you just need a mechanism to inject packets into and receive packets from the kernel network stack.
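If you go this route, the application-side split is straightforward. Below is a minimal sketch of my own (not part of the skeleton example), assuming ethdev 0 is the physical NIC, ethdev 1 is the net_tap0 vdev, ports are already initialized skeleton-style, and handle_udp() is a hypothetical stand-in for your fast path:

/* Sketch: keep the high-rate IPv4/UDP traffic in the DPDK fast path and
 * push everything else (e.g. ARP) to the kernel through the TAP port. */
#include <rte_ethdev.h>
#include <rte_ether.h>
#include <rte_mbuf.h>

#define BURST 32

/* hypothetical fast-path consumer, not a DPDK API */
static void handle_udp(struct rte_mbuf *m) { rte_pktmbuf_free(m); }

static void rx_loop(uint16_t nic_port, uint16_t tap_port)
{
    struct rte_mbuf *bufs[BURST];

    for (;;) {
        uint16_t n = rte_eth_rx_burst(nic_port, 0, bufs, BURST);
        for (uint16_t i = 0; i < n; i++) {
            struct rte_ether_hdr *eth =
                rte_pktmbuf_mtod(bufs[i], struct rte_ether_hdr *);

            /* treat all IPv4 as fast-path traffic here; parse the IP
             * header as well if you need to single out UDP */
            if (eth->ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4)) {
                handle_udp(bufs[i]);
            } else if (rte_eth_tx_burst(tap_port, 0, &bufs[i], 1) == 0) {
                rte_pktmbuf_free(bufs[i]);  /* TAP queue full: drop */
            }
        }
    }
}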
As we all know, in the CAN bus communication protocol, the sender knows whether the data was successfully sent. I send SocketCAN data as follows:
ret = write(socket, frame, sizeof(struct can_frame));
However, even if the CAN communication cable is disconnected, the return value is still 16 (= sizeof(struct can_frame)). I looked into this and found that it is due to the tx_queue of the network stack used by SocketCAN. When write() is called many times, the buffer eventually fills up and the return value becomes -1.
But this is not the behavior I expect; I want every frame sent to immediately report success or failure.
I tried to disable the tx_queue with
echo 0 > /sys/class/net/can0/tx_queue_len
but it does not work.
What I want to ask is: is there a way to disable the tx_queue of SocketCAN, or to get the transmission status of each frame from the controller through an API (such as libsocketcan)?
Thanks.
You cannot use write() itself to discover whether a CAN frame was successfully put on the bus, because all it does is write the frame to the in-kernel socket buffer. The kernel then moves the frame to the transmit queue of the SocketCAN network interface, followed by the driver moving it to the transmit buffer of the CAN controller, which finally puts the frame on the bus. What you want is a direct write which bypasses all those buffers, but that's not possible with SocketCAN, even if you set the transmit queue length to 0.
However, there is another way to get confirmation. If you enable the CAN_RAW_RECV_OWN_MSGS socket option (see sections 4.1.4 and 4.1.7 in the SocketCAN documentation), you will receive the frames that were successfully sent. You'll need to use recvmsg() so you get the message flags: msg_flags will have the MSG_CONFIRM bit set for a frame that was successfully sent by the same socket on which it is received. You won't be informed of failures, but you can detect them by using a timeout for the confirmation.
It's not an ideal solution because it mixes the read and write logic in your application. One way to avoid this would be to use two sockets. One for writing and reading MSG_CONFIRM frames, the other for reading all other frames. You could then create a (blocking) write function that does a write() followed by multiple calls to recvmsg() with an appropriate timeout.
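As a rough illustration of that blocking write, here is a minimal sketch (the helper name and the 1-second timeout are my own choices, not from the SocketCAN docs); s is a raw CAN socket already bound to the interface:

/* Sketch: send a frame and block until the kernel confirms it was put
 * on the bus (MSG_CONFIRM), or until the receive timeout expires. */
#include <linux/can.h>
#include <linux/can/raw.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <sys/uio.h>
#include <unistd.h>

/* returns 0 on confirmed transmission, -1 on timeout or error */
static int send_and_confirm(int s, const struct can_frame *frame)
{
    const int recv_own = 1;
    setsockopt(s, SOL_CAN_RAW, CAN_RAW_RECV_OWN_MSGS,
               &recv_own, sizeof(recv_own));

    /* confirmation timeout: a frame that never made it onto the bus
     * simply never comes back */
    struct timeval tv = { .tv_sec = 1, .tv_usec = 0 };
    setsockopt(s, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

    if (write(s, frame, sizeof(*frame)) != sizeof(*frame))
        return -1;

    for (;;) {
        struct can_frame rx;
        struct iovec iov = { .iov_base = &rx, .iov_len = sizeof(rx) };
        struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1 };

        if (recvmsg(s, &msg, 0) < 0)
            return -1;  /* timeout: no confirmation received */
        if ((msg.msg_flags & MSG_CONFIRM) && rx.can_id == frame->can_id)
            return 0;   /* our frame made it onto the bus */
        /* anything else is regular traffic; a real implementation would
         * hand it to the read path instead of dropping it here */
    }
}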
Finally, it is useful to enable error frames (through the CAN_RAW_ERR_FILTER socket option). If you send a frame on a socket with a disconnected cable, this will typically result in a bus off state, which will be reported in an error frame.
I was trying to manipulate SOME/IP messages by falsifying their content (payload) sent between two ECUs at run time.
I set up the VN6510A hardware in MAC bypassing mode and integrated it into the data traffic path between those two ECUs to monitor and control all Ethernet data streams:
ECU A ---> eth1 interface --VN6510A-- eth2 interface ---> ECU B
I successfully catch our target SOME/IP messages and I also successfully manipulate their payload.
But in the end we get two SOME/IP messages: the original incoming message and the falsified message, forwarded at the same time.
How could we tie those two SOME/IP messages, the real message and the falsified one, together, so that only a single falsified SOME/IP message is forwarded, given that I am using the same SOME/IP message handle?
I registered the callback function void OnEthPacket(LONG channel, LONG dir, LONG packet) to receive Ethernet packets.
Probably by setting your VN.... to "Direct" and not "MAC Bypassing"
Well, we could not manipulate messages at run time using the Vector VN6510A solution, simply because the box does not support this feature.
I have read that the combination of three things causes something like a 200ms delay with TCP: Nagle's algorithm, delayed acknowledgement, and the "write-write-read" combination. However, I cannot reproduce this delay with Java sockets and I am therefore not sure if I have understood correctly.
I am running a test on Windows 7 with Java 7 with two threads using sockets over the loopback address. I have not touched the tcpNoDelay option on any socket (false by default) nor played around with any TCP settings on the OS. The main piece of the code in the client is as below. The server is responding with a byte after each two bytes it receives from the client.
for (int i = 0; i < 100; i++) {
    client.getOutputStream().write(1);
    client.getOutputStream().write(2);
    System.out.println(client.getInputStream().read());
}
I do not see any delay. Why not?
I believe delayed acknowledgment explains what you see.
You write two bytes per iteration to the socket (each OutputStream.write(int) call sends a single byte, not four). The server's TCP stack receives a segment containing those bytes and wakes up the server application thread. This thread writes a byte back to the stream, and that byte is sent to the client within the ACK segment. In other words, the TCP stack gives the application a chance to send its reply immediately, so you see no delay.
You can capture a dump of the traffic and also run the experiment between two computers to see what really happens.
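For completeness, if you repeat the experiment outside the JVM, this is the C counterpart of Java's socket.setTcpNoDelay(true); a minimal sketch, with fd assumed to be a connected TCP socket:

/* Disable Nagle's algorithm on an existing TCP socket, equivalent to
 * Java's Socket.setTcpNoDelay(true). */
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

static int disable_nagle(int fd)
{
    int one = 1;
    return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
}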
Is it possible to read and send data with TComPort for the Modbus RTU protocol?
I have read the Modbus wiki (http://en.wikipedia.org/wiki/Modbus), but what do the start and end markers of 3.5 characters of idle time mean?
I use C++Builder2009
Of course it's possible.
In Modbus ASCII it is easy to determine the end of a message, since two bytes are transmitted over the communication line for every single data byte (each byte is transmitted as its ASCII hexadecimal representation). In Modbus RTU, one byte on the line corresponds to one data byte, which means there had to be some other way of knowing that a message has ended. So, bytes are appended to the current message as long as the pause between them is less than 3.5 character times. When the pause is greater than 3.5 character times, you have the end of a message and you can parse it, process it, and get ready for a new one. This idle time is measured in characters because that is the only constant: the time to transmit one character at 9600 baud is not the same as at 115200 baud, and it also differs between 9600-8N1 and 9600-8E2, so you have to derive the actual time from the COM port settings, as sketched below.
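To make that concrete, here is a small sketch of the timeout calculation (my own illustration); a character consists of the start bit, the data bits, an optional parity bit, and the stop bits. Note that for baud rates above 19200 the Modbus specification recommends a fixed 1.75 ms instead.

/* Compute the Modbus RTU end-of-frame silence (3.5 character times)
 * from the COM port settings. 8N1 = 1+8+0+1 = 10 bits per character,
 * 8E2 = 1+8+1+2 = 12 bits per character. */
#include <stdio.h>

static double rtu_silence_us(unsigned baud, unsigned data_bits,
                             unsigned parity_bits, unsigned stop_bits)
{
    unsigned bits_per_char = 1 + data_bits + parity_bits + stop_bits;
    return 3.5 * 1e6 * bits_per_char / baud;
}

int main(void)
{
    printf("9600-8N1:   %.0f us\n", rtu_silence_us(9600, 8, 0, 1));    /* ~3646 */
    printf("9600-8E2:   %.0f us\n", rtu_silence_us(9600, 8, 1, 2));    /* ~4375 */
    printf("115200-8N1: %.0f us\n", rtu_silence_us(115200, 8, 0, 1));  /* ~304 */
    return 0;
}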
Yes, it's possible to send data over a COM port using the Modbus protocol.
There are various packages for that, like RXTXcomm.jar and comm.jar, which provide functions to communicate with a slave device over a COM port.