I'm trying to learn to use Contiki 3.x.
When launching the "rpl-collect" example in Cooja with one udp-sink and several udp-senders, I see that each sender node regularly unicasts a DIO message to its preferred parent (in addition to the multicast DIOs).
The only explanations I can see are a response to a DIS or a probing mechanism, but there are no DIS messages, and I disabled probing and saw no change.
It may be worth noting that every unicast DIO a child node sends to its parent seems to come just after that child received an 802.15.4 ACK for a previous transmission.
Does anybody know why child nodes unicast DIOs to their parents?
It turns out it was the probing mechanism still running; I hadn't disabled it properly the first time!
Yes, this is caused by the probing mechanism. Probing is done with either DIO or DIS messages; in your case DIO probing must be enabled. The default probing interval is 120 system clock seconds.
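For reference, disabling it at build time can look like this in project-conf.h (a minimal sketch, assuming the stock option names from Contiki 3.x's rpl-conf.h):

/* project-conf.h: sketch assuming Contiki 3.x's rpl-conf.h option names. */

/* Turn RPL probing off entirely... */
#define RPL_CONF_WITH_PROBING 0

/* ...or keep probing but change its interval (default: 120 * CLOCK_SECOND). */
/* #define RPL_CONF_PROBING_INTERVAL (60 * CLOCK_SECOND) */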
We are using gRPC (version 1.37.1) for inter-process communication between our C# process and our C++ process. Each process acts as both a server and a client to the other, and they run on the same machine over localhost using the HTTP/2 transport. All of the calls are blocking synchronous unary calls, not bidirectional streaming. Some average(ish) stats:
From C++->C#: 0-2 calls per second, 0-40 calls per minute
From C#->C++: 0-5 calls per second, 0-200 calls per minute
Intermittently, we were getting one of three issues:
C# client call to C++ server comes back with an RpcException, usually “HTTP2/Parse Error”, “Endpoint Read Failed”, or “Transport Closed”
C++ client call to C# server comes back with Unavailable or Unknown
C++ client WaitForConnected call to check the channel fails after 500ms
The first one is the most frequent and the one we have the most information about. Usually what we’ll see is that the client receives the RPC call and runs into an unknown frame type. Then the subchannel goes into shutdown, and everything usually reconnects fine. We also generally see an embedded error like the following (note that we replaced all FILE instances with FUNCTION in our gRPC source):
win_read","file_line":307,"os_error":"The system detected an invalid pointer address in attempting to use a pointer argument in a call.\r\n","syscall":"WSARecv","wsa_error":10014}]},{"created":"#1622120588.494000000","description":"frame of size 262404 overflows local window of 65535","file":"grpc_core::chttp2::TransportFlowControl::ValidateRecvData","file_line":213}]}
What we’ve seen with the unknown frame type is that the client parses the HEADERS, WINDOW_UPDATE, DATA, WINDOW_UPDATE, then gets a TCP on_read without a corresponding READ, and then tries to parse again. It’s this parse where the parser appears to be at the wrong offset in the buffer: the unknown frame type, incoming frame size, and incoming stream_id all map to the middle of the RPC call that it just parsed.
The above is what we were encountering before we changed the code to create a new channel for each RPC call. While we realize this is not great from a performance standpoint, we have seen increased stability since making the change. However, we still occasionally get RPC exceptions; now the most common is “Unknown”/”Stream Removed” rather than the ones listed above.
Any ideas on what might be going wrong are appreciated. We've turned on all gRPC tracing and have even added to it, and we've captured the issue in Wireshark, but so far we aren't getting a clear indication of what's causing the transport to close. Are there any good tools to monitor the socket/port for failure?
I am new to STM32 and FreeRTOS. I need to write a program to send and receive data from a module via a UART port. I have to send (transmit) data to that module (e.g. an M66). Then I return to do some other tasks. Once the M66 sends a response, my serial-port receive function (HAL_UART_Receive_IT) has to be invoked to receive it. How can I achieve this?
The way HAL_UART_Receive_IT works is that you configure it to receive a specified amount of data into a given buffer. You give it the buffer into which it will read the received data and the number of bytes you want to receive. It then starts receiving data. Once exactly that many bytes have been received, the callback HAL_UART_RxCpltCallback is called (from the IRQ), where you can do whatever you want with the data, e.g. add it to some kind of queue for later processing in task context.
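A minimal sketch of that flow (huart1, RX_LEN and the stm32f4xx header are assumptions; adjust to your setup):

/* Receive a fixed-size chunk and get notified from the IRQ when done. */
#include "stm32f4xx_hal.h"   /* adjust to your STM32 family */

#define RX_LEN 8
static uint8_t rx_buf[RX_LEN];
extern UART_HandleTypeDef huart1;

void start_reception(void)
{
    /* Arm interrupt-driven reception of exactly RX_LEN bytes. */
    HAL_UART_Receive_IT(&huart1, rx_buf, RX_LEN);
}

/* Called from the UART IRQ once RX_LEN bytes have arrived. */
void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart)
{
    if (huart == &huart1) {
        /* Hand rx_buf off to a task (e.g. via a queue), then re-arm. */
        HAL_UART_Receive_IT(&huart1, rx_buf, RX_LEN);
    }
}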
In my experience, HAL's UART module is not the greatest for generic use where you don't know in advance how much data you will receive. With the M66 modem you mention, that will be the case all the time.
To solve this you have two choices:
Simply don't use the HAL functions at all for UART, other than for initialization. Implement your own UART interrupt handler (most of the code can be copied from the handler in HAL), and upon receiving data, place the received bytes in a receive queue handled by your RTOS task. In that task, implement the protocol parsing. This is the approach I use personally.
If you really want to use HAL but also need to work with a module that sends a varying amount of data, call HAL_UART_Receive_IT and specify that you want to receive 1 byte each time (a sketch of this option follows below). This will work, but will be (potentially much) slower than the first approach. Assuming you'll later want to implement some TCP/IP communication (you mentioned the M66 GPRS module), you probably don't want to do it this way.
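A sketch of the 1-byte variant (the queue size, huart1 and the F4 header are assumptions):

/* 1-byte interrupt reception feeding a FreeRTOS queue for task-side parsing. */
#include "stm32f4xx_hal.h"
#include "FreeRTOS.h"
#include "queue.h"

extern UART_HandleTypeDef huart1;
static QueueHandle_t rx_queue;
static uint8_t rx_byte;

void rx_start(void)
{
    rx_queue = xQueueCreate(256, sizeof(uint8_t));
    HAL_UART_Receive_IT(&huart1, &rx_byte, 1);   /* arm for a single byte */
}

void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart)
{
    BaseType_t woken = pdFALSE;
    xQueueSendFromISR(rx_queue, &rx_byte, &woken);
    HAL_UART_Receive_IT(huart, &rx_byte, 1);     /* re-arm for the next byte */
    portYIELD_FROM_ISR(woken);
}

/* Parser task: blocks until bytes arrive, then feeds them to the parser. */
void rx_task(void *arg)
{
    uint8_t b;
    for (;;) {
        if (xQueueReceive(rx_queue, &b, portMAX_DELAY) == pdTRUE) {
            /* feed b into your AT-command/response parser here */
        }
    }
}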
You could try the following approach:
Enable the UARTx RX interrupt in the NVIC.
Set the interrupt priority.
Unmask the receive interrupt in the peripheral itself (for a USART, set RXNEIE in the control register).
Then define the USARTx interrupt handler function named in your vector table.
Whenever data is received on USARTx, this handler is called automatically and you can copy the data out of the USARTx receive data register.
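To make that concrete, a bare-register handler might look like this (sketched for an STM32F1-class part; the register and handler names vary by family and are assumptions here):

#include "stm32f10x.h"

volatile uint8_t last_byte;

void USART1_IRQHandler(void)
{
    if (USART1->SR & USART_SR_RXNE) {    /* a byte is waiting */
        last_byte = (uint8_t)USART1->DR; /* reading DR clears RXNE */
        /* push last_byte into a ring buffer/queue for the task to parse */
    }
}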
I would rather suggest another approach. You probably want to achieve higher speeds (let's say 921600 baud), and the interrupt-per-byte way is far too slow for that.
You need to implement DMA reception with end-of-data detection. Run your USART in DMA circular mode. You will have two events to serve: the DMA transfer-complete interrupt (where you copy the data from the current tail pointer to the end of the buffer, so it is not overwritten) and the USART IDLE interrupt, which detects the end of the reception.
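A hedged sketch of the IDLE-interrupt half of that scheme (the HAL macros are real; huart1, the buffer size and the byte handling are assumptions):

#include "stm32f4xx_hal.h"   /* adjust to your STM32 family */

#define DMA_BUF_LEN 256
extern UART_HandleTypeDef huart1;    /* RX DMA must be configured circular */
static uint8_t dma_buf[DMA_BUF_LEN];
static volatile size_t tail;         /* next unread position in dma_buf */

void dma_rx_start(void)
{
    __HAL_UART_ENABLE_IT(&huart1, UART_IT_IDLE);  /* line-idle detection */
    HAL_UART_Receive_DMA(&huart1, dma_buf, DMA_BUF_LEN);
}

/* Call this from USART1_IRQHandler, before HAL_UART_IRQHandler(). */
void on_uart_idle(void)
{
    if (__HAL_UART_GET_FLAG(&huart1, UART_FLAG_IDLE)) {
        __HAL_UART_CLEAR_IDLEFLAG(&huart1);
        /* head = how far the DMA has written into the circular buffer */
        size_t head = DMA_BUF_LEN - __HAL_DMA_GET_COUNTER(huart1.hdmarx);
        while (tail != head) {           /* consume bytes tail..head */
            uint8_t b = dma_buf[tail];
            tail = (tail + 1) % DMA_BUF_LEN;
            (void)b;                     /* hand b to the parser/queue */
        }
    }
}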
I am using the fsl_elbc_nand [1] driver for my NAND device. This is a NAND IC connected to the enhanced Local Bus Controller (eLBC), part of the SoC. I also have an Ethernet MAC (ASIX) connected to the same eLBC bus. The NAND driver works as follows:
set up the operation via MMIO
wait_event_timeout() for an interrupt from the eLBC signalling that the operation (read/write/whatever) has finished
process the result
The problem is that during wait_event_timeout() (which can take up to 0.5 s), the ASIX cannot talk over the eLBC. It does so in three ways:
interrupt handling for incoming frames
start_xmit from softirq for outgoing frames
MDIO from the ethtool ioctl (e.g. link status polled every second)
I can disable the particular IRQ, resolving the first case. I can return NETDEV_TX_BUSY from start_xmit, resolving the second case. But I cannot find any way to resolve the third case. I tried spinlocks and mutexes, but I've learned that spinlocks are prohibited when sleeping.
Is there some way to achieve proper locking? Maybe I should replace wait_event_timeout() with something else?
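To make the mutex idea concrete, here is a minimal sketch of serializing the bus with a sleeping lock; elbc_bus_lock and the hook points are hypothetical, but unlike a spinlock, holding a mutex across wait_event_timeout() is legal:

#include <linux/mutex.h>

static DEFINE_MUTEX(elbc_bus_lock);  /* would live in the shared eLBC code */

/* NAND side: hold the bus for the whole MMIO + wait sequence. */
static int nand_do_op(void)
{
    int ret = 0;
    mutex_lock(&elbc_bus_lock);
    /* set up the operation via MMIO ... */
    /* wait_event_timeout(...); sleeping while holding a mutex is allowed */
    mutex_unlock(&elbc_bus_lock);
    return ret;
}

/* MDIO side (process context via phylib/ethtool): take the same mutex. */
static int asix_mdio_read(int phy_id, int reg)
{
    int val = 0;
    mutex_lock(&elbc_bus_lock);
    /* perform the MDIO read over the eLBC ... */
    mutex_unlock(&elbc_bus_lock);
    return val;
}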
[1] http://lxr.free-electrons.com/source/drivers/mtd/nand/fsl_elbc_nand.c?v=3.10
I've been using ZMQ in some Python applications for a while, but only very recently did I decide to reimplement one of them in Go, and I realized that ZMQ sockets are not thread-safe.
The original Python implementation uses an event loop that looks like this:
while running:
    socks = dict(poller.poll(TIMEOUT))
    if socks.get(router) == zmq.POLLIN:
        client_id = router.recv()
        _ = router.recv()
        data = router.recv()
        requests.append((client_id, data))
    pending = []
    for req in requests:
        rep = handle_request(req)
        if rep:
            replies.append(rep)
        else:
            pending.append(req)  # reply not ready yet; keep it pending
    requests = pending  # avoid removing from the list while iterating over it
    for client_id, data in replies:
        router.send(client_id, zmq.SNDMORE)
        router.send(b'', zmq.SNDMORE)
        router.send(data)
    del replies[:]
The problem is that the reply might not be ready on the first pass, so whenever I have pending requests I have to poll with a very short timeout, or the clients will wait longer than they should, and the application ends up using a lot of CPU for polling.
When I decided to reimplement it in Go, I thought it would be as simple as this, avoiding the problem by using infinite timeout on polling:
for {
    sockets, _ := poller.Poll(-1)
    for _, socket := range sockets {
        switch s := socket.Socket; s {
        case router:
            msg, _ := s.RecvMessage(0)
            client_id := msg[0]
            data := msg[2]
            go handleRequest(router, client_id, data)
        }
    }
}
But that ideal implementation only works when I have a single client connected or under light load. Under heavy load I get random assertion errors inside libzmq. I tried the following:
Following the zmq4 docs I tried adding a sync.Mutex and lock/unlock on all socket operations. It fails. I assume it's because ZMQ uses its own threads for flushing.
Creating one goroutine for polling/receiving and one for sending, and use channels in the same way I used the req/rep queues in the Python version. It fails, as I'm still sharing the socket.
Same as 2, but setting GOMAXPROCS=1. It fails, and throughput was very limited because replies were being held back until the Poll() call returned.
Use the req/rep channels as in 2, but use runtime.LockOSThread to keep all socket operations in the same thread as the socket. It doesn't fail, but it has the same problem as above: throughput was very limited.
Same as 4, but using the poll timeout strategy from the Python version. It works, but has the same problem the Python version does.
Share the context instead of the socket and create one socket for sending and one for receiving in separate goroutines, communicating with channels. It works, but I'll have to rewrite the client libs to use two sockets instead of one.
Get rid of zmq and use raw TCP sockets, which are thread-safe. It works perfectly, but I'll also have to rewrite the client libs.
So, it looks like 6 is how ZMQ was really intended to be used, as that's the only way I got it to work seamlessly with goroutines, but I wonder if there's any other way I haven't tried. Any ideas?
Update
With the answers here I realized I can just add an inproc PULL socket to the poller and have a goroutine connect and push a byte to break out of the infinite wait. It's not as versatile as the solutions suggested here, but it works and I can even backport it to the Python version.
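For reference, the wake-up pattern itself is not Go-specific; here it is sketched in C with libzmq (the endpoint name is made up), with the poller owning a PULL socket and any other thread waking it through its own PUSH socket:

#include <zmq.h>

void poll_loop(void *ctx, void *router)
{
    /* Wake-up socket owned by the polling thread. */
    void *wake = zmq_socket(ctx, ZMQ_PULL);
    zmq_bind(wake, "inproc://wake");

    zmq_pollitem_t items[2] = {
        { router, 0, ZMQ_POLLIN, 0 },
        { wake,   0, ZMQ_POLLIN, 0 },
    };

    for (;;) {
        zmq_poll(items, 2, -1);          /* infinite timeout is now safe */
        if (items[1].revents & ZMQ_POLLIN) {
            char buf[1];
            zmq_recv(wake, buf, 1, 0);   /* drain the wake-up byte */
            /* a reply became ready elsewhere: send pending replies here */
        }
        if (items[0].revents & ZMQ_POLLIN) {
            /* receive and queue the request as before */
        }
    }
}

/* Any other thread: connect a PUSH and send one byte to wake the poller.
 * Each thread uses its own socket, so this stays thread-safe. */
void wake_poller(void *ctx)
{
    void *push = zmq_socket(ctx, ZMQ_PUSH);
    zmq_connect(push, "inproc://wake");
    zmq_send(push, "x", 1, 0);
    zmq_close(push);
}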
I opened an issue about 1.5 years ago to introduce a port of https://github.com/vaughan0/go-zmq/blob/master/channels.go to pebbe/zmq4. Ultimately the author decided against it, but we have used this in production (under VERY heavy workloads) for a long time now.
This is a gist of the file that had to be added to the pebbe/zmq4 package (since it adds methods to the Socket). This could be re-written in such a way that the methods on the Socket receiver instead took a Socket as an argument, but since we vendor our code anyway, this was an easy way forward.
The basic usage is to create your Socket like normal (call it s for example) then you can:
channels := s.Channels()
outBound := channels.Out()
inBound := channels.In()
Now you have two channels of type [][]byte that you can use between goroutines, but a single goroutine, managed within the channels abstraction, is responsible for managing the Poller and communicating with the socket.
The blessed way to do this with pebbe/zmq4 is with a Reactor. Reactors have the ability to listen on Go channels, but you don't want to do that because they do so by polling the channel periodically using a poll timeout, which reintroduces the same exact problem you have in your Python version. Instead you can use zmq inproc sockets, with one end held by the reactor and the other end held by a goroutine that passes data in from a channel. It's complicated, verbose, and unpleasant, but I have used it successfully.
My goal is to send a TCP packet with an empty data field, in order to test the connection with the remote machine.
I am using the OutputStream class's write(byte[] b) method.
My attempt:
outClient = ClientSocket.getOutputStream();
outClient.write(("").getBytes());
Doing so, the packet never shows up on the wire. It works fine if "" is replaced by " " or any non-empty string.
I tried jpcap, which worked for me, but it didn't serve my goal.
I thought of extending the OutputStream class and implementing my own OutputStream.write method, but this is beyond my knowledge. Please give me advice if someone has already done such a thing.
If you just want to quickly detect silently dropped connections, you can use Socket.setKeepAlive(true). This method asks the TCP/IP stack to handle heartbeat probing without any data packets or application-level programming. If you want more control over the frequency of the heartbeat, however, you should implement heartbeats with application-level packets.
See here for more details: http://mindprod.com/jgloss/socket.html#DISCONNECT
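For comparison, at the raw socket level (outside Java) both keepalive and its frequency are tunable; a hedged C sketch for Linux, where the TCP_KEEPIDLE/TCP_KEEPINTVL/TCP_KEEPCNT option names are Linux-specific:

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Enable keepalive on a connected socket fd and tune the probe timing.
 * Java's Socket.setKeepAlive(true) only exposes the first option. */
int enable_keepalive(int fd)
{
    int on = 1, idle = 60, interval = 10, count = 5;
    if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof on) < 0)
        return -1;
    /* Probe after 60 s of silence, every 10 s, give up after 5 failures. */
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE,  &idle,     sizeof idle);
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof interval);
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT,   &count,    sizeof count);
    return 0;
}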
There is no such thing as an empty TCP packet, because there is no such thing as a TCP packet. TCP is a byte-stream protocol. If you send zero bytes, nothing happens.
To detect broken connections, short of keepalive, which by default only helps you after two hours, you must rely on read timeouts and write exceptions.