I found the following in the example code appserver.cpp distributed with UDT:
hints.ai_flags = AI_PASSIVE;
hints.ai_family = AF_INET;
hints.ai_socktype = SOCK_STREAM;
Why would UDT use SOCK_STREAM and not SOCK_DGRAM?
This can be perfectly normal.
If I didn't know anything about UDT, then I would assume that "hints" is likely an instance of struct addrinfo used as the second parameter of getaddrinfo().
If the code is just trying to get the IP address of a server (i.e. a DNS lookup), then it has to pass something into the hints structure for ai_socktype. Otherwise, getaddrinfo is likely to return three times the number of results: one for SOCK_STREAM, another for SOCK_DGRAM, and a third for SOCK_RAW, but the ai_addr member of each will be the same address.
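To illustrate (a quick sketch, independent of UDT): leaving ai_socktype at 0 typically yields one addrinfo entry per socket type, while setting SOCK_STREAM narrows the list to one.
#include <string.h>
#include <netdb.h>
#include <sys/socket.h>

/* Sketch: count how many results getaddrinfo returns for one host/port.
 * Pass socktype = 0 or SOCK_STREAM to compare the two cases. */
int count_results(const char *host, const char *port, int socktype)
{
    struct addrinfo hints, *res, *p;
    int n = 0;

    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_INET;
    hints.ai_socktype = socktype;

    if (getaddrinfo(host, port, &hints, &res) != 0)
        return -1;
    for (p = res; p != NULL; p = p->ai_next)
        n++;
    freeaddrinfo(res);
    return n;
}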
Now, I did just take a peek at the UDT code. Never heard of it until now. But it does appear to have some code doing SOCK_STREAM-style setup, using getaddrinfo as a formal way to initialize a sockaddr for the subsequent connect.
memset(&hints, 0, sizeof(struct addrinfo));
hints.ai_flags = AI_PASSIVE;
hints.ai_family = AF_INET;
hints.ai_socktype = SOCK_STREAM;
UDTSOCKET fhandle = UDT::socket(hints.ai_family, hints.ai_socktype, hints.ai_protocol);

if (0 != getaddrinfo(argv[1], argv[2], &hints, &peer))
{
    cout << "incorrect server/peer address. " << argv[1] << ":" << argv[2] << endl;
    return -1;
}

// connect to the server, implicit bind
if (UDT::ERROR == UDT::connect(fhandle, peer->ai_addr, peer->ai_addrlen))
But you'll have to ask the UDT developers what it's all about.
UDT is the UDP-based Data Transfer protocol, so underneath it is all just UDP.
Check the UDT manual link. It says:
UDT is connection oriented, for both of its SOCK_STREAM and SOCK_DGRAM mode. connect must be called in order to set up a UDT connection.
So whichever mode we use, we have to make a connect() call. So what's the difference?
In SOCK_STREAM mode we can use UDT's send() API, while in SOCK_DGRAM mode we can only use UDT's sendmsg() API.
Check the manual's "transfer data" and "Messaging with Partial Reliability" sections; I think they may help a little.
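Roughly, the difference looks like this. This is only a sketch from memory of the UDT4 API (check udt.h for the exact signatures), assuming u is an already-connected UDT socket:
#include <udt.h>

void send_example(UDTSOCKET u, bool stream_mode, const char *data, int len)
{
    if (stream_mode) {
        // SOCK_STREAM: a byte stream with no message boundaries.
        UDT::send(u, data, len, 0);
    } else {
        // SOCK_DGRAM: one call is one message; ttl -1 means never drop it,
        // inorder true means deliver messages in the order they were sent.
        UDT::sendmsg(u, data, len, -1, true);
    }
}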
Related
I'm going to read/write under the Modbus TCP specification.
So I'm trying to code the client and the server in a Linux environment.
(Eventually I would communicate with the Windows program, as a client, using Modbus TCP.)
But it doesn't work as I want, so I'm asking here.
I'm testing the Linux code as the client and EasyModbus as the server.
I used the libmodbus code.
I'd like to read coils (function 0x01) and write a coil (function 0x05).
When the code is executed using libmodbus, 'ff' is printed for the Unit ID part (according to the manual, 01 should be output for Modbus TCP).
I don't know why 'ff' is printed (photo attached).
Wrong result: (screenshot attached)
Expected result: (screenshot attached)
'[00] [00] .... [00]'; do you know where to control this part?
Do you have, or do you know of, sample code that implements read/write using libmodbus?
Please let me know if you have any information.
ctx = modbus_new_tcp("192.168.0.99", 502);
modbus_set_debug(ctx, TRUE);

if (modbus_connect(ctx) == -1) {
    fprintf(stderr, "Connection failed: %s\n", modbus_strerror(errno));
    modbus_free(ctx);
    return -1;
}

tab_rq_bits = (uint8_t *) malloc(nb * sizeof(uint8_t));
memset(tab_rq_bits, 0, nb * sizeof(uint8_t));

tab_rp_bits = (uint8_t *) malloc(nb * sizeof(uint8_t));
memset(tab_rp_bits, 0, nb * sizeof(uint8_t));

nb_loop = nb_fail = 0;

/* WRITE BIT */
rc = modbus_write_bit(ctx, addr, tab_rq_bits[0]);
if (rc != 1) {
    printf("ERROR modbus_write_bit (%d)\n", rc);
    printf("Address = %d, value = %d\n", addr, tab_rq_bits[0]);
    nb_fail++;
} else {
    rc = modbus_read_bits(ctx, addr, 1, tab_rp_bits);
    if (rc != 1 || tab_rq_bits[0] != tab_rp_bits[0]) {
        printf("ERROR modbus_read_bits single (%d)\n", rc);
        printf("address = %d\n", addr);
        nb_fail++;
    }
}

printf("Test: ");
if (nb_fail)
    printf("%d FAILS\n", nb_fail);
else
    printf("SUCCESS\n");

free(tab_rq_bits);
free(tab_rp_bits);

/* Close the connection */
modbus_close(ctx);
modbus_free(ctx);
return 0;
That FF you see right before the Modbus function is actually correct. Quoting the Modbus Implementation Guide, page 23:
On TCP/IP, the MODBUS server is addressed using its IP address; therefore, the
MODBUS Unit Identifier is useless. The value 0xFF has to be used.
So libmodbus is just sticking to the Modbus specification. I'm assuming, then, that the problem is in EasyModbus, which is apparently expecting you to use 0x01 as the unit id in your queries.
I imagine you don't want to mess with easymodbus, so you can fix this problem pretty easily from libmodbus: just change the default unit id:
modbus_set_slave(ctx, 1);
You could also go with:
rc = modbus_set_slave(ctx, MODBUS_BROADCAST_ADDRESS);
ASSERT_TRUE(rc != -1, "Invalid broadcast address");
to make your client address all slaves within the network, if you have more than one.
You have more info, and a short explanation of where this problem comes from, in the libmodbus man page for the modbus_set_slave function.
For a very comprehensive example, you can check the libmodbus unit tests.
And regarding your question number 5, I'm not sure how to answer it: the zeros you mean are supposed to be the states (true or false) you want to write to (or read from) the coils. For writing, you can change them with the value argument of modbus_write_bit(ctx, address, value).
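Putting it together, a minimal read/write sketch (assumptions: your EasyModbus server is at 192.168.0.99:502, it expects unit id 1, and coil 0 is a valid address on it):
#include <stdio.h>
#include <errno.h>
#include <modbus.h>

int main(void)
{
    modbus_t *ctx;
    uint8_t bit;
    int rc;

    ctx = modbus_new_tcp("192.168.0.99", 502);
    if (ctx == NULL) {
        fprintf(stderr, "Unable to create context\n");
        return -1;
    }
    modbus_set_debug(ctx, TRUE);

    /* Put 0x01 in the MBAP unit identifier instead of the default 0xFF. */
    modbus_set_slave(ctx, 1);

    if (modbus_connect(ctx) == -1) {
        fprintf(stderr, "Connection failed: %s\n", modbus_strerror(errno));
        modbus_free(ctx);
        return -1;
    }

    /* Write coil 0 ON (function 0x05), then read it back (function 0x01). */
    rc = modbus_write_bit(ctx, 0, TRUE);
    if (rc == 1)
        rc = modbus_read_bits(ctx, 0, 1, &bit);
    if (rc == -1)
        fprintf(stderr, "%s\n", modbus_strerror(errno));
    else
        printf("coil 0 = %d\n", bit);

    modbus_close(ctx);
    modbus_free(ctx);
    return 0;
}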
I'm very grateful for your reply.
I tested the read/write function using the 'unit-test-server/client' code you recommended.
I've reviewed the code, but there are still many things I don't know.
However, when I test the unit-test-server and unit-test-client code against each other, some address values work and some do not
(do you know why?).
- I checked and found that the UT_BITS_ADDRESS range works from 0x130 to 0x150.
- An 'Illegal data address' error occurs for values below 0x130 and above 0x150.
- The addresses I want to read/write are 0x0001 to 0x0004 (do you know how to do that?).
I want to know how to process and transmit data like the TX part of the picture on the right.
I'm running both client and server in my Linux environment and I'm doing read/write testing.
In the wrong picture, ...[06][FF]... appears; I want to know how to modify the FF part (to change the value to 01, as shown in the other picture).
and "modbus_set_slave" is the function for modbus rtu?
I'd like to communicate PC Program and Linux device in the end.
so Which part do I use that function?
I thanks for your concern again.
I am new to Modbus. I have spent hours reading the Help(?) files, which never seem to give you an example! I am using C on a Raspberry Pi model 3 and have installed libmodbus. I am trying to talk to an epSolar solar panel controller via an FTDI USB-to-RS485 converter.
The epSolar docs say that the Read Input registers start at address 3000 and continue to 311D. I am trying to read 3104.
I modified the code below. It connects to the device but trying to read input register 0x04 always returns -1:
#include <stdio.h>
#include <unistd.h>
#include <string.h>
#include <stdlib.h>
#include <errno.h>
#include <modbus.h>

enum {TCP, RTU};

int main(int argc, char *argv[])
{
    int socket;
    modbus_t *ctx;
    modbus_mapping_t *mb_mapping;
    int rc;
    int use_backend;
    int i;
    uint16_t tab_reg[64];

    use_backend = RTU;
    printf("Waiting for Serial connection\n");

    ctx = modbus_new_rtu("/dev/SOLAR", 115200, 'N', 8, 1);
    modbus_set_slave(ctx, 0);
    //modbus_connect(ctx);
    if (modbus_connect(ctx) == -1)
    {
        fprintf(stderr, "Serial connection failed: %s\n", modbus_strerror(errno));
        modbus_free(ctx);
        return -1;
    }
    printf("Serial connection started!\n");

    mb_mapping = modbus_mapping_new(MODBUS_MAX_READ_BITS, 0,
                                    MODBUS_MAX_READ_REGISTERS, 0);
    if (mb_mapping == NULL)
    {
        fprintf(stderr, "Failed to allocate the mapping: %s\n",
                modbus_strerror(errno));
        modbus_free(ctx);
        return -1;
    }

    rc = modbus_read_input_registers(ctx, 1, 0x0A, tab_reg);
    if (rc == -1)
    {
        fprintf(stderr, "%s\n", modbus_strerror(errno));
        return -1;
    }
    for (i = 0; i < rc; i++)
        printf("reg[%d]=%d (0x%X)\n", i, tab_reg[i], tab_reg[i]);

    modbus_mapping_free(mb_mapping);
    modbus_close(ctx);
    modbus_free(ctx);
    return 0;
}
It connects fine and allocates the mapping, but rc is always -1, with an error message saying the port has timed out.
I have run out of ideas and feel like I am navigating through treacle!
Any help most appreciated.
I am also new to Modbus. With my current experience: make sure you are allocating enough memory in tab_reg for storing the results. Also try turning debug mode on, i.e. modbus_set_debug(ctx, TRUE);, to check the request and response bytes.
I know this is a really old question, but hopefully this answer will help anyone who lands here via a Google search.
I can see a few points that need some help.
As commented by Saad above, the modbus server ID above is incorrect. ID 0 is reserved for broadcast messages, which a slave will not respond to. Find out what the Modbus ID for the target device is, and use that.
I think what's tricking you is that you'll also always get a proper "connect" as long as the serial port you provided is valid. This isn't a connection to any particular device so much as it's a connection to the Modbus network port. You're getting a timeout because a response was expected by libmodbus, but no response was received on the wire.
There are several other little troubles in the code presented, but given the age of this post I almost feel like I'm nitpicking something the OP probably already solved. The big problem is the unworkable slave ID. Other minor problems include: unnecessary use of modbus_mapping (struct for use on server/slaves), possible misallocation of modbus_mapping (no space allocated for input registers).
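For what it's worth, a minimal corrected sketch of the client side (assumptions: the controller actually answers on slave ID 1, which you would need to confirm in the epSolar docs, and 0x3104 is the input register you want):
#include <stdio.h>
#include <errno.h>
#include <modbus.h>

int main(void)
{
    modbus_t *ctx;
    uint16_t tab_reg[16];
    int rc, i;

    ctx = modbus_new_rtu("/dev/SOLAR", 115200, 'N', 8, 1);
    if (ctx == NULL) {
        fprintf(stderr, "Unable to create RTU context\n");
        return -1;
    }

    modbus_set_debug(ctx, TRUE);
    modbus_set_slave(ctx, 1);   /* assumed slave ID; 0 is broadcast and never answers */

    if (modbus_connect(ctx) == -1) {
        fprintf(stderr, "Serial connection failed: %s\n", modbus_strerror(errno));
        modbus_free(ctx);
        return -1;
    }

    /* A client only reads; no modbus_mapping_t is needed here. */
    rc = modbus_read_input_registers(ctx, 0x3104, 1, tab_reg);
    if (rc == -1) {
        fprintf(stderr, "%s\n", modbus_strerror(errno));
    } else {
        for (i = 0; i < rc; i++)
            printf("reg[%d]=%d (0x%X)\n", i, tab_reg[i], tab_reg[i]);
    }

    modbus_close(ctx);
    modbus_free(ctx);
    return 0;
}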
The situation is that I have a blocking pipe or socket fd to which I want to write() without blocking, so I do a select() first, but that still doesn't guarantee that write() will not block.
Here is the data I have gathered. Even if select() indicates that
writing is possible, writing more than PIPE_BUF bytes can block.
However, writing at most PIPE_BUF bytes doesn't seem to block in
practice, but it is not mandated by the POSIX spec.
That only specifies atomic behavior. Python(!) documentation states that:
Files reported as ready for writing by select(), poll() or similar
interfaces in this module are guaranteed to not block on a write of up
to PIPE_BUF bytes. This value is guaranteed by POSIX to be at least
512.
In the following test program, set BUF_BYTES to say 100000 to block in
write() on Linux, FreeBSD or Solaris following a successful select. I
assume that named pipes have similar behavior to anonymous pipes.
Unfortunately the same can happen with blocking sockets. Call
test_socket() in main() and use a largish BUF_BYTES (100000 is good
here too). It's unclear whether there is a safe buffer size like
PIPE_BUF for sockets.
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <errno.h>
#include <string.h>
#include <sys/types.h>
#include <limits.h>
#include <sys/select.h>

#define BUF_BYTES PIPE_BUF
char buf[BUF_BYTES];
int
probe_with_select(int nfds, fd_set *readfds, fd_set *writefds,
fd_set *exceptfds)
{
struct timeval timeout = {0, 0};
int n_found = select(nfds, readfds, writefds, exceptfds, &timeout);
if (n_found == -1) {
perror("select");
}
return n_found;
}
void
check_if_readable(int fd)
{
fd_set fdset;
FD_ZERO(&fdset);
FD_SET(fd, &fdset);
printf("select() for read on fd %d returned %d\n",
fd, probe_with_select(fd + 1, &fdset, 0, 0));
}
void
check_if_writable(int fd)
{
fd_set fdset;
FD_ZERO(&fdset);
FD_SET(fd, &fdset);
int n_found = probe_with_select(fd + 1, 0, &fdset, 0);
printf("select() for write on fd %d returned %d\n", fd, n_found);
/* if (n_found == 0) { */
/* printf("sleeping\n"); */
/* sleep(2); */
/* int n_found = probe_with_select(fd + 1, 0, &fdset, 0); */
/* printf("retried select() for write on fd %d returned %d\n", */
/* fd, n_found); */
/* } */
}
void
test_pipe(void)
{
int pipe_fds[2];
ssize_t written;
int i;
if (pipe(pipe_fds)) {
perror("pipe failed");
_exit(1);
}
printf("read side pipe fd: %d\n", pipe_fds[0]);
printf("write side pipe fd: %d\n", pipe_fds[1]);
for (i = 0; ; i++) {
printf("i = %d\n", i);
check_if_readable(pipe_fds[0]);
check_if_writable(pipe_fds[1]);
written = write(pipe_fds[1], buf, BUF_BYTES);
if (written == -1) {
perror("write");
_exit(-1);
}
printf("written %d bytes\n", written);
}
}
void
serve()
{
int listenfd = 0, connfd = 0;
struct sockaddr_in serv_addr;
listenfd = socket(AF_INET, SOCK_STREAM, 0);
memset(&serv_addr, '0', sizeof(serv_addr));
serv_addr.sin_family = AF_INET;
serv_addr.sin_addr.s_addr = htonl(INADDR_ANY);
serv_addr.sin_port = htons(5000);
bind(listenfd, (struct sockaddr*)&serv_addr, sizeof(serv_addr));
listen(listenfd, 10);
connfd = accept(listenfd, (struct sockaddr*)NULL, NULL);
sleep(10);
}
int
connect_to_server()
{
int sockfd = 0, n = 0;
struct sockaddr_in serv_addr;
if((sockfd = socket(AF_INET, SOCK_STREAM, 0)) < 0) {
perror("socket");
exit(-1);
}
memset(&serv_addr, '0', sizeof(serv_addr));
serv_addr.sin_family = AF_INET;
serv_addr.sin_port = htons(5000);
if(inet_pton(AF_INET, "127.0.0.1", &serv_addr.sin_addr) <= 0) {
perror("inet_pton");
exit(-1);
}
if (connect(sockfd, (struct sockaddr *)&serv_addr, sizeof(serv_addr)) < 0) {
perror("connect");
exit(-1);
}
return sockfd;
}
void
test_socket(void)
{
if (fork() == 0) {
serve();
} else {
int fd;
int i;
int written;
sleep(1);
fd = connect_to_server();
for (i = 0; ; i++) {
printf("i = %d\n", i);
check_if_readable(fd);
check_if_writable(fd);
written = write(fd, buf, BUF_BYTES);
if (written == -1) {
perror("write");
_exit(-1);
}
printf("written %d bytes\n", written);
}
}
}
int
main(void)
{
test_pipe();
/* test_socket(); */
}
Unless you wish to send one byte at a time whenever select() says the fd is ready for writes, there is really no way to know how much you will be able to send. And even then it is theoretically possible (at least in the documentation, if not in the real world) for select to say the fd is ready for writes and then for the condition to change in the time between select() and write().
Non-blocking sends are the solution here, and you don't need to switch your file descriptor to non-blocking mode to send one message in non-blocking form if you change from write() to send(). The only change needed is to add the MSG_DONTWAIT flag to the send call; that makes the single send non-blocking without altering your socket's properties. You don't even need select() at all in this case, since the send() call gives you all the information you need in its return code: if you get a return code of -1 and errno is EAGAIN or EWOULDBLOCK, then you know you can't send any more.
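A sketch of that pattern (try_send is just an illustrative helper name):
#include <errno.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Attempt one non-blocking send on an otherwise blocking socket.
 * Returns the number of bytes queued, 0 if the send buffer is full,
 * or -1 on a real error. */
ssize_t try_send(int sockfd, const void *buf, size_t len)
{
    ssize_t n = send(sockfd, buf, len, MSG_DONTWAIT);
    if (n == -1 && (errno == EAGAIN || errno == EWOULDBLOCK))
        return 0;       /* would have blocked; try again later */
    return n;           /* may be less than len (a partial send) */
}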
The Posix section you cite clearly states:
[for pipes] If the O_NONBLOCK flag is clear, a write request may cause the thread to block, but on normal completion it shall return nbyte.
[for streams, which presumably includes streaming sockets] If O_NONBLOCK is clear, and the STREAM cannot accept data (the STREAM write queue is full due to internal flow control conditions), write() shall block until data can be accepted.
The Python documentation you quoted can therefore only apply to non-blocking mode. But as you're not using Python, it has no relevance anyway.
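For completeness: if you want write() itself to return instead of blocking, the usual approach is to put the descriptor into non-blocking mode, something like this sketch:
#include <fcntl.h>

/* Put an existing descriptor into non-blocking mode so that write()
 * returns -1 with errno == EAGAIN/EWOULDBLOCK instead of blocking. */
int set_nonblocking(int fd)
{
    int flags = fcntl(fd, F_GETFL, 0);
    if (flags == -1)
        return -1;
    return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
}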
The answer by ckolivas is the correct one but, having read this post, I thought I could add some test data for interest's sake.
I quickly wrote a slow reading tcp server (sleeping 100ms between reads) which did a read of 4KB on each cycle. Then a fast writing client which I used for testing various scenarios on write. Both were using select before read (server) or write (client).
This was on Linux Mint 18 running under a Windows 7 VM (VirtualBox) with 1GB of memory assigned.
For the blocking case:
If a write of a "certain number of bytes" became possible, select returned and the write either completed in full immediately or blocked until it completed. On my system, this "certain number of bytes" was at least 1MB. On the OP's system, it was clearly much less (less than 100,000).
So select did not return until a write of at least 1MB was possible. There was never a case (that I saw) where select would return if a smaller write would subsequently block. Thus select + write(x), where x was 4K, 8K or 128K, never blocked on write on this system.
This is all very well of course but this was an unloaded VM with 1GB of memory. Other systems would be expected to be different. However, I would expect that writes below a certain magic number (PIPE_BUF perhaps), issued subsequent to a select, would never block on all POSIX compliant systems. However (again) I don't see any documentation to that effect so one can't rely on that behaviour (even though the Python documentation clearly does). As the OP says, it's unclear whether there is a safe buffer size like PIPE_BUF for sockets. Which is a pity.
Which is what ckolivas' post says even though I'd argue that no rational system would return from a select when only a single byte was available!
Extra information:
At no point (in normal operation) did write return anything other than the full amount requested (or an error).
If the server was killed (ctrl-c), the client side write would immediately return a value (usually less than was requested - no normal operation!) with no other indication of error. The next select call would return immediately and the subsequent write would return -1 with errno saying "Connection reset by peer". Which is what one would expect - write as much as you can this time, fail the next time.
This (and EINTR) appears to be the only time write returns a number > 0 but less than requested.
If the server side was reading and the client was killed, the server continued to read all available data until it ran out. Then it read a zero and closed the socket.
For the non-blocking case:
The behaviour below some magic value is the same as above. select returns, write doesn't block (of course) and the write completes in its totality.
My issue was what happens otherwise. The send(2) man page says that in non-blocking mode, send fails with EAGAIN or EWOULDBLOCK. Which might imply (depending on how you read it) that it's all or nothing. Except that it also says select may be used to determine when it is possible to send more data. So it can't be all or nothing.
write() (which is the same as send() with no flags) says it can return less than requested. This nitpicking seems pedantic, but the man pages are the gospel, so I read them as such.
In testing, a non-blocking write with a value larger than some particular value returned less than requested. This value wasn't constant, it changed from write to write but it was always pretty large (> 1 to 2MB).
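Which means that once the descriptor is non-blocking you have to be prepared for short writes; a typical sketch (write_all is a hypothetical helper) looks like this:
#include <errno.h>
#include <unistd.h>

/* Keep writing until the whole buffer is queued, stopping early if the
 * descriptor would block. Returns bytes written, or -1 on a real error. */
ssize_t write_all(int fd, const char *buf, size_t len)
{
    size_t off = 0;
    while (off < len) {
        ssize_t n = write(fd, buf + off, len - off);
        if (n == -1) {
            if (errno == EINTR)
                continue;                 /* interrupted; retry */
            if (errno == EAGAIN || errno == EWOULDBLOCK)
                break;                    /* would block; caller retries later */
            return -1;                    /* real error */
        }
        off += (size_t)n;                 /* partial or full write */
    }
    return (ssize_t)off;
}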
I have been exploring the select() function to check whether some sockets are ready to read, and I must admit I'm a bit confused. MSDN says "The select function returns the total number of socket handles that are ready and contained in the fd_set structures".
Suppose I have 3 sockets and 2 of them are ready: select() returns 2, but that gives me no information about which 2 of the 3 sockets are ready to read, so how can I check?
On Stack Overflow I came across this: "When select returns, it has updated the sets to show which file descriptors have become ready for read/write/exception."
So I put breakpoints in my program to track my fd_set structure. What I have observed (with just one socket in the fd_set) is the following. If the socket is ready to read, select():
returns 1
leaves fd_count (The number of sockets in the set) untouched
leaves fd_array (An array of sockets that are in the set.) untouched
If the client did not send any data addressed to that socket, select():
returns 0
decreases fd_count to 0
leaves fd_array untouched
If I call select() again and the client again sent no data, it:
returns -1 (I think this is because fd_count is now 0)
I guess I'm missing some crucial rules about how select() works and what this function does, but I can't figure it out.
Here is some code snippet to show what I do to call select():
CServer::CServer(char *ipAddress, short int portNumber)
{
    // Creating socket
    ServerSocket = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
    if (ServerSocket == INVALID_SOCKET)
        std::cout << "I was not able to create ServerSocket\n";
    else
        std::cout << "ServerSocket created successfully\n";

    // Initialization of ServerSocket Address
    ServerSocketAddress.sin_family = AF_INET;
    ServerSocketAddress.sin_addr.S_un.S_addr = inet_addr(ipAddress);
    ServerSocketAddress.sin_port = htons(portNumber);

    // Binding ServerSocket to ServerSocket Address
    if (bind(ServerSocket, (SOCKADDR*)&ServerSocketAddress, sizeof(ServerSocketAddress)) == 0)
        std::cout << "Binding ServersSocket and ServerSocketAddress ended with success\n";
    else
        std::cout << "There were problems with binding ServerSocket and ServerSocket Address\n";

    // Initialization of the set of sockets
    ServerSet.fd_count = 1;
    ServerSet.fd_array[0] = ServerSocket;
}
In main:
CServer Server(IP_LOOPBACK_ADDRESS, 500);
tmp = select(0, &Server.ServerSet, NULL, NULL, &TimeOut);
Shouldn't the fd_array be filled with 0 values after the select() call when there is no socket that can be read?
You're supposed to use the FD_SET macro and friends (FD_ZERO, FD_ISSET). You're not doing that.
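A minimal sketch of the intended pattern (IsReadable is just an illustrative helper; pass it your ServerSocket):
#include <winsock2.h>

// Returns true if sock has data waiting within timeoutSeconds.
// The fd_set must be rebuilt before every call, because select()
// overwrites it to report which sockets are actually ready.
bool IsReadable(SOCKET sock, long timeoutSeconds)
{
    fd_set readset;
    FD_ZERO(&readset);
    FD_SET(sock, &readset);

    timeval timeout = { timeoutSeconds, 0 };

    // The first parameter is ignored by Winsock; after select() returns,
    // FD_ISSET is the supported way to test readiness, not fd_array.
    int n = select(0, &readset, NULL, NULL, &timeout);
    return n > 0 && FD_ISSET(sock, &readset);
}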
I am trying to broadcast data, but the output is "udp send failed". I chose a random port, 33333. What's wrong with my code?
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main()
{
    struct sockaddr_in udpaddr = { sin_family : AF_INET };
    int xudpsock_fd, sock, len = 0, ret = 0, optVal = 0;
    char buffer[255];
    char szSocket[64];

    memset(buffer, 0x00, sizeof(buffer));
    memset(&udpaddr, 0, sizeof(udpaddr));

    udpaddr.sin_addr.s_addr = INADDR_BROADCAST;
    udpaddr.sin_port = htons(33333);

    xudpsock_fd = socket(PF_INET, SOCK_DGRAM, IPPROTO_UDP);

    optVal = 1;
    ret = setsockopt(xudpsock_fd, SOL_SOCKET, SO_BROADCAST, (char*)&optVal, sizeof(optVal));

    strcpy(buffer, "this is a test msg");
    len = sizeof(buffer);

    ret = sendto(xudpsock_fd, buffer, len, 0, (struct sockaddr*)&udpaddr, sizeof(udpaddr));
    if (ret == -1)
        printf("udp send failed\n");
    else
        printf("udp send succeed\n");

    return (0);
}
One problem is that the address family you are trying to send to is zero (AF_UNSPEC). Although you initialize the family to AF_INET at the top of the function, you later zero it out with memset.
On the system I tested with, the send actually works anyway for some strange reason despite the invalid address family, but you should definitely try fixing that first.
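A sketch of that fix: zero the structure first, then fill in the fields:
struct sockaddr_in udpaddr;

memset(&udpaddr, 0, sizeof(udpaddr));           /* zero first...          */
udpaddr.sin_family = AF_INET;                   /* ...then set the family */
udpaddr.sin_addr.s_addr = htonl(INADDR_BROADCAST);
udpaddr.sin_port = htons(33333);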
You probably had a problem with your default route (e.g., you didn't have one). sendto needs to pick an interface to send the packet on, but the destination address was probably outside the Destination/Genmask for each defined interface (see the 'route' command-line tool).
The default route catches this type of packet and sends it through an interface despite this mismatch.
Setting the destination to 127.255.255.255 will usually cause the packet to be sent through the loopback interface (127.0.0.1), meaning it will be able to be read by applications that (in this case) are run on the local machine.
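For example (a sketch reusing the variables from your code):
/* Loopback broadcast: always has a route and stays on the local machine. */
udpaddr.sin_addr.s_addr = inet_addr("127.255.255.255");
ret = sendto(xudpsock_fd, buffer, len, 0,
             (struct sockaddr *)&udpaddr, sizeof(udpaddr));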