Using USART on STM32L4R5 - stm32

I am using an asynchronous USART on an STM32L4R5 for communication with a PC. I am able to receive data on the PC side, but I am not able to receive any data sent by the PC on the Nucleo board. The following is the code I am using for transmission:
while (1)
{
    HAL_GPIO_TogglePin(LD2_GPIO_Port, LD2_Pin); /* toggle LED */
    HAL_Delay(1000);
    for (i = 0; i < 5; i++)
    {
        USART1->TDR = p[i];
        while ((USART1->ISR & 0x40) == 0);  /* wait for TC (transmission complete) */
    }
    while ((USART1->ISR & 0x20) == 0);      /* wait for RXNE (data received) */
    uint32_t receivedByte = (uint32_t)(USART1->RDR);
}
In the above, the sending part works fine but receiving does not. I have checked the wiring and it is correct.

Why don't you just use the USART receive interrupt? It will help you capture received data, rather than polling for USART reception.
There are two likely reasons for the lack of a response:
You might not be sending the end-of-data or carriage-return characters when sending data over the USART. Many USART-based protocols depend on these characters; the module will listen on the USART until they are received.
Your hardware connections are not right. Make sure you connect TX of the host to RX of the slave and vice versa.
I suggest the USART interrupt because polling is not a good approach when writing code.
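A likely culprit when polled reception "never works" is an overrun: the loop in the question reads at most one byte per second, so any additional incoming byte sets the ORE flag, and reception can appear dead until that flag is cleared. Below is a minimal sketch of an overrun-aware polled read. The register struct and helper function are illustrative stand-ins, not the real CMSIS definitions; the bit masks follow the STM32L4 reference manual (RXNE = 0x20 and TC = 0x40, matching the masks in the question).

```c
#include <assert.h>
#include <stdint.h>

/* Mock of the relevant USART registers (illustrative stand-in,
 * not the real peripheral definition). */
typedef struct {
    volatile uint32_t ISR;
    volatile uint32_t ICR;
    volatile uint32_t RDR;
    volatile uint32_t TDR;
} usart_regs;

/* ISR bit masks per the STM32L4 reference manual. */
#define USART_ISR_ORE   (1u << 3)  /* overrun error                    */
#define USART_ISR_RXNE  (1u << 5)  /* read data register not empty     */
#define USART_ISR_TC    (1u << 6)  /* transmission complete            */
#define USART_ISR_TXE   (1u << 7)  /* transmit data register empty     */
#define USART_ICR_ORECF USART_ISR_ORE  /* write 1 here to clear ORE    */

/* One polled receive step: clears a pending overrun so reception does
 * not stall, then returns 1 and stores a byte if data is available. */
static int usart_poll_rx(usart_regs *u, uint8_t *out)
{
    if (u->ISR & USART_ISR_ORE)
        u->ICR = USART_ICR_ORECF;   /* clear overrun, or RX appears dead */
    if (u->ISR & USART_ISR_RXNE) {
        *out = (uint8_t)u->RDR;
        return 1;
    }
    return 0;
}
```

In real firmware the interrupt-driven approach suggested above avoids the problem entirely, since bytes are drained as they arrive instead of once per second.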


How to implement coap observer with block2 in Contiki OS

I will explain the setup first.
Setup: I have a microcontroller board running a CoAP REST server (using Contiki OS) with an observable resource, and a client (using CoAPthon, a Python library for CoAP) observing that resource, running on a Linux SoM. I am successfully able to observe a small amount of data (64 bytes) from the server (microcontroller) on the client (Linux SoM). I will add the code at the end, after describing everything.
Question: I need help sending a big chunk of data (say 1024 bytes) from the CoAP server to the observing client. How can I do that? Thanks in advance; I will appreciate any help I can get regarding this.
I am posting the Contiki observable-resource code and the CoAPthon client code (this is the code that does not send big data).
Contiki Code:
char * temp_payload = "Behold empty data";
PERIODIC_RESOURCE(res_periodic_ext_temp_data,
"title=\"Temperature\";rt=\"Temperature\";obs",
res_get_handler_of_periodic_ext_temp_data,
NULL,
NULL,
res_delete_handler_ext_temp_data,
(15*CLOCK_SECOND),
res_periodic_handler_of_ext_temp_data);
static void
res_get_handler_of_periodic_ext_temp_data(void *request, void *response, uint8_t *buffer, uint16_t preferred_size, int32_t *offset)
{
    /*
     * For minimal complexity, request query and options should be ignored for GET on observable resources.
     * Otherwise the requests must be stored with the observer list and passed by REST.notify_subscribers().
     * This would be a TODO in the corresponding files in contiki/apps/erbium/!
     */
    /* Check the offset for boundaries of the resource data. */
    if(*offset >= 1024) {
        REST.set_response_status(response, REST.status.BAD_OPTION);
        /* A block error message should not exceed the minimum block size (16). */
        const char *error_msg = "BlockOutOfScope";
        REST.set_response_payload(response, error_msg, strlen(error_msg));
        return;
    }
    REST.set_header_content_type(response, REST.type.TEXT_PLAIN);
    REST.set_response_payload(response, (temp_payload + *offset), MIN((int32_t)strlen(temp_payload) - *offset, preferred_size));
    REST.set_response_status(response, REST.status.OK);
    /* IMPORTANT for chunk-wise resources: Signal chunk awareness to REST engine. */
    *offset += preferred_size;
    /* Signal end of resource representation. */
    if(*offset >= (int32_t)strlen(temp_payload) + 1) {
        *offset = -1;
    }
    REST.set_header_max_age(response, MAX_AGE);
}
I am not adding the code for the periodic handler; the GET handler is notified by the periodic handler periodically.
Coapthon code:
def ext_temp_data_callback_observe(response):
    print response.pretty_print()

def observe_ext_temp_data(host, callback):
    client = HelperClient(server=(host, port))
    request = Request()
    request.code = defines.Codes.GET.number
    request.type = defines.Types["CON"]
    request.destination = (host, port)
    request.uri_path = "data/res_periodic_ext_temp_data"
    request.content_type = defines.Content_types["text/plain"]
    request.observe = 0
    request.block2 = (0, 0, 64)
    try:
        response = client.send_request(request, callback)
        print response.pretty_print()
    except Empty as e:
        print("listener_post_observer_rate_of_change({0}) timed out".format(host))
Again, I need help implementing an observer with CoAP block-wise transfer (https://www.rfc-editor.org/rfc/rfc7959#page-26).
I can't tell much about the particular systems you use, but in general the combination of block-wise transfer and observe works in that the server only sends the first block of the updated resource. It is then up to the client to ask for the remaining blocks, and to verify that their ETag options match.
The Contiki code looks like it should be sufficient, as it sets the offset to -1, which presumably controls the "more data" bit in the block header.
On the CoAPthon side, you may need to do the reassembly manually, or check whether CoAPthon can do the reassembly automatically (its code does not indicate that it supports the combination of block-wise and observe, at least not at a short glance).
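For reference, the request.block2 = (0, 0, 64) tuple in the client code maps onto the Block2 option arithmetic of RFC 7959: the option value packs a block number NUM, a "more" bit M, and a size exponent SZX, with block size 2^(SZX+4). A small sketch of that arithmetic (the struct and function names here are invented for illustration, not from any CoAP library):

```c
#include <assert.h>
#include <stdint.h>

/* Decoded Block2 option fields per RFC 7959:
 * value = NUM << 4 | M << 3 | SZX, block size = 2^(SZX + 4). */
typedef struct {
    uint32_t num;   /* block number                       */
    int      more;  /* 1 if more blocks follow            */
    uint16_t size;  /* block size in bytes (16..1024)     */
} block2;

static block2 block2_decode(uint32_t value)
{
    block2 b;
    b.num  = value >> 4;
    b.more = (int)((value >> 3) & 1u);
    b.size = (uint16_t)(1u << ((value & 0x7u) + 4));
    return b;
}

/* Byte offset of a block within the full representation. */
static uint32_t block2_offset(block2 b)
{
    return b.num * b.size;
}
```

So a 1024-byte resource served in 64-byte blocks needs 16 blocks, and the client's follow-up GETs simply increment NUM until the "more" bit is clear.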
To "bootstrap" your development, you may consider using Eclipse Californium.
The simple client in demo-apps/cf-helloworld-client requires some changes for observe. If you need help, just open an issue on GitHub.
With two years of experience with that feature, let me mention that if your data changes faster than your "bandwidth" can transfer it (including the RTT for the blocks), you may send a lot of blocks in vain.
If the data changes just before your last block could be sent, that invalidates the complete transfer so far. Some people then start to develop work-arounds, but from there you are on very thin ice :-).

how to detect disconnection in IOCP server using zero byte recv

I'm currently implementing an IOCP server for a game, and I'm trying the zero-byte recv technique.
I have three questions.
If you detect disconnection from the client by checking whether bytesTransferred is 0, how do you distinguish a normal receive completion from a disconnection?
I'm not performing a non-blocking recv() when I process the actual receive, because clients send the byte count of the actual data first, so I know how many bytes I'm about to receive. Do I still need to use a non-blocking recv()?
I'm doing it like so.
InputMemoryBitStream incomming;
std::string data;
uint32_t strLen = 0;
recv(socket, reinterpret_cast<char*>(&strLen), sizeof(uint32_t), 0);
incomming.resize(strLen);
recv(socket, reinterpret_cast<char*>(incomming.getBufferPtr()), strLen, 0);
incomming.read(data, strLen);
(InputMemoryBitStream is for reading compressed data.)
I'm dynamically allocating a per-IO data structure every time I call WSARecv() and WSASend(), and I free it as soon as I finish processing the completed I/O. Is it inefficient to do that, or is it acceptable? Should I reuse the per-IO data instead?
Thank you in advance.
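One thing worth noting about the snippet above: recv() on a stream socket may return fewer bytes than requested, and its return value is ignored here, so the length prefix and payload can be read short. A sketch of a loop that tolerates short reads and treats a 0 return as disconnect, shown with POSIX read() on a descriptor so it is self-contained; the same pattern applies to recv() on a socket, and the function name is made up:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>

/* Read exactly `len` bytes from `fd`, looping over short reads.
 * Returns 0 on success, -1 on error or peer disconnect (a 0 return
 * from read()/recv() is the stream analogue of a zero-byte
 * completion: the peer closed the connection). */
static int read_full(int fd, void *buf, size_t len)
{
    uint8_t *p = (uint8_t *)buf;
    while (len > 0) {
        ssize_t n = read(fd, p, len);
        if (n <= 0)
            return -1;      /* 0 = disconnect, < 0 = error */
        p   += (size_t)n;
        len -= (size_t)n;
    }
    return 0;
}
```

With the length prefix read this way first, the second read_full() for the payload is guaranteed to deliver exactly strLen bytes or report a failure.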

Sending data from STM32F401 MCU to ESP8266 and getting response from ESP8266 to MCU

I am working with an STM32F401 Nucleo board and an ESP8266 Wi-Fi module. I am using the Eclipse GCC ARM toolchain and CubeMX to generate code. I can transmit and receive data perfectly with USART/UART DMA.
Now I am stuck with the ESP8266. I cannot send data from the MCU to the ESP, and I'm not getting a response from the ESP to the MCU. I have already tested the ESP module's communication on its own: I can connect to the Wi-Fi with AT commands through USB, and can also receive data on the web via a socket connection.
I configured USART1_TX/USART1_RX on PA9/PA10.
Thanks in advance.
I'm not an expert, but I'll try to help you.
Which baud rate are you using? Does it match the ESP8266 documentation?
Check the power supply and the connections.
Also, remember that the AT commands are case-sensitive (they must be written in capital letters only) and must be terminated with carriage return and line feed, i.e. "\r\n".
That's right: first check that the baud rates match.
Then, do you use DMA for both the TX and RX directions?
For DMA RX, note that the "completion" callback is called only when the full buffer has been filled.
If you need to "break" reception on a terminating "\r\n", you might use the interrupt RX method, one character at a time, and inspect each byte as it arrives in the callback, which keeps asking for one more byte until done.
Alternatively, with DMA, keep polling the DMA count and analyze the current RX buffer for "\r\n"; abort/stop the DMA when done.
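To make the terminator point concrete, here is a tiny sketch of building an AT command line with the required "\r\n" ending before handing it to the UART. The helper name and buffer handling are illustrative, not from any ESP8266 SDK:

```c
#include <assert.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Append the CRLF terminator the ESP8266 AT firmware expects.
 * Returns the total length written, or -1 if `out` is too small.
 * (Illustrative helper; pass the result to HAL_UART_Transmit or
 * your DMA send routine.) */
static int at_build(char *out, size_t cap, const char *cmd)
{
    int n = snprintf(out, cap, "%s\r\n", cmd);
    return (n < 0 || (size_t)n >= cap) ? -1 : n;
}
```

Sending "AT" without the trailing "\r\n" is a classic reason the module never answers, even though the wiring and baud rate are fine.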

Socket programming Client Connect

I am working on client-server programming. I am referring to this link, and my server runs successfully.
I need to send data continuously to the server.
I don't want to call connect() before sending each packet. So the first time I just create a socket and send the first packet; for the rest of the data I just use write() to write to the socket.
But my problem is that while sending data continuously, if the server is gone or my Ethernet is disabled, write() still successfully writes data to the socket.
Is there a method by which I can create the socket only once and send data continuously while detecting server failure?
The main reason for doing it this way is that on the server side I am using a GPRS modem, and each time connect() is called for a packet, the modem hangs.
For creating the socket I use the code below:
Gprs_sockfd = socket(AF_INET, SOCK_STREAM, 0);
if (Gprs_sockfd < 0)
{
    Display("ERROR opening socket");
    return 0;
}
server = gethostbyname((const char*)ip_address);
if (server == NULL)
{
    Display("ERROR, no such host");
    return 0;
}
bzero((char *) &serv_addr, sizeof(serv_addr));
serv_addr.sin_family = AF_INET;
bcopy((char *)server->h_addr, (char *)&serv_addr.sin_addr.s_addr, server->h_length);
serv_addr.sin_port = htons(portno);
if (connect(Gprs_sockfd, (struct sockaddr *) &serv_addr, sizeof(serv_addr)) < 0)
{
    Display("ERROR connecting");
    return 0;
}
And each time I write to the socket using the code below:
n = write(Gprs_sockfd, data, length);
if (n < 0)
{
    Display("ERROR writing to socket");
    return 0;
}
Thanks in advance.
TCP was designed to tolerate temporary failures. It does byte sequencing, acknowledgments and, if necessary, retransmissions. All unacknowledged data is buffered inside the kernel network stack. If I remember correctly, the default is three retransmission attempts (somebody correct me if I'm wrong) with exponential back-off timeouts. That quickly adds up to dozens of seconds, if not minutes.
My suggestion would be to design application-level acknowledgments into your protocol, meaning the server would send a short reply saying how much data it has received so far, say every second. If the client does not receive such an ack within, say, 3 seconds, the client knows the connection is unusable and can close it. By the way, this is easier done with non-blocking sockets and polling functions like select(2) or poll(2).
Edit 0:
I think this would be very relevant here - "The ultimate SO_LINGER page, or: why is my tcp not reliable".
Nikolai is correct here; the behaviour you are experiencing is desirable, since it lets you continue transferring data after a network outage without any extra logic in your application. If your application needs to detect outages longer than a specified amount of time, you need to add heartbeating to your protocol. This is the standard way of solving the problem. It also lets you detect the situation where the network is fine and the receiver is alive, but it has deadlocked (due to a software bug).
Heartbeating can be as simple as Nikolai mentioned: send a small packet every X seconds; if the server doesn't see a packet for N*X seconds, the connection is dropped.
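The "no packet for N*X seconds" rule above can be sketched as a pure function, independent of the socket API. The names and parameters here are illustrative; in a real client you would call this from your select()/poll() loop, updating last_ack whenever an application-level ack arrives:

```c
#include <assert.h>
#include <time.h>

/* Return 1 if the connection should be declared dead: no ack (or
 * heartbeat) has been seen for more than n * interval_s seconds.
 * Pure bookkeeping, usable alongside any socket API. */
static int connection_stale(time_t now, time_t last_ack,
                            unsigned interval_s, unsigned n)
{
    return (now - last_ack) > (time_t)(n * interval_s);
}
```

When this returns 1, the client closes the socket and reconnects; that is the moment you actually learn the server is gone, rather than waiting for the kernel's retransmission timeouts to expire.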

sendto not working on VxWorks

I asked this question before and got no resolution (I am still having the problem). I am stumped because the function returns without error and NO DATA is sent! This code works on Linux; the VxWorks version does not (sendto does not send, though it returns without an ERROR).
The synopsis: I am writing a simple echo server. The server successfully receives the data (from an x86 box) and claims it successfully SENT it back. However, NO DATA is received by the client (netcat on an x86). This code is running on VxWorks 5.4 on a PowerPC box.
Is the UDP data being buffered somehow?
Could another task be preventing sendto from sending? (Not to go off on a wild goose chase here: I taskSpawn my application with a normal priority, i.e. below critical tasks like the network task, so this should be fine.)
Could VxWorks be buffering my UDP data?
I HAVE set up my routing table; pinging works!
There is NO firewall AFAIK.
What are the nuances of sendto, and what could prevent my data from reaching the client?
while (1)
{
    readlen = recvfrom(sock, buf, BUFLEN, 0, (struct sockaddr *) &client_address, &slen);
    if (readlen == ERROR)
    {
        printf("RECVFROM FAILED()\n");
        return (ERROR);
    }
    printf("Received %d bytes FROM %s:%d\nData: %s\n\n",
           readlen, inet_ntoa(client_address.sin_addr),
           ntohs(client_address.sin_port), buf);
    // Send it right back to the client using the open UDP socket,
    // but send it to OUTPORT
    client_address.sin_port = htons(OUTPORT);
    // Remember slen is a value (not an address ... in, NOT in-out)
    sendlen = sendto(sock, buf, BUFLEN, 0, (struct sockaddr*)&client_address, slen);
    // more code ....
}
I trust ERROR is defined as -1, right? Then are you checking the return value of the sendto(2) call? What about the errno(3) value?
One obvious problem I see in the code is that you pass BUFLEN as the length of the message to be sent, while it should actually be readlen, the number of bytes you received.
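To illustrate the readlen-versus-BUFLEN point, here is a self-contained loopback sketch of a UDP echo that sends back exactly the number of bytes received. This uses POSIX sockets on a desktop OS, not VxWorks, and all names and the self-test structure are illustrative:

```c
#include <assert.h>
#include <arpa/inet.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

/* One recvfrom/sendto round trip over loopback UDP.
 * Returns 0 if the client gets back exactly what it sent. */
static int echo_demo(void)
{
    int srv = socket(AF_INET, SOCK_DGRAM, 0);
    int cli = socket(AF_INET, SOCK_DGRAM, 0);
    if (srv < 0 || cli < 0) return -1;

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = 0;                        /* let the kernel pick */
    if (bind(srv, (struct sockaddr *)&addr, sizeof addr) < 0) return -1;
    socklen_t alen = sizeof addr;
    if (getsockname(srv, (struct sockaddr *)&addr, &alen) < 0) return -1;

    char buf[64];
    if (sendto(cli, "ping", 4, 0, (struct sockaddr *)&addr, alen) != 4)
        return -1;

    struct sockaddr_in client_address;
    socklen_t slen = sizeof client_address;   /* in-out for recvfrom */
    ssize_t readlen = recvfrom(srv, buf, sizeof buf, 0,
                               (struct sockaddr *)&client_address, &slen);
    if (readlen < 0) return -1;
    /* Key point: echo readlen bytes, not the full buffer size. */
    if (sendto(srv, buf, (size_t)readlen, 0,
               (struct sockaddr *)&client_address, slen) != readlen)
        return -1;

    ssize_t n = recvfrom(cli, buf, sizeof buf, 0, NULL, NULL);
    close(srv);
    close(cli);
    return (n == 4 && memcmp(buf, "ping", 4) == 0) ? 0 : -1;
}
```

Sending BUFLEN bytes would put trailing garbage in the echoed datagram; on some stacks it can also fail outright if BUFLEN exceeds what the peer or path will accept.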