TLS connection problem on iOS when socket file descriptor values exceed 1023

We are using the reSIProcate library on iOS, where Socket.hxx uses select() to wait on file descriptors.
We are facing an issue where the file descriptor value increases with every new TLS connection that is established.
Socket.hxx: https://github.com/resiprocate/resiprocate/blob/master/rutil/Socket.hxx
int select(struct timeval& tv)
{
    return numReady = ::select(size, &read, &write, &except, &tv);
}

int selectMilliSeconds(unsigned long ms)
{
    struct timeval tv;
    tv.tv_sec = (ms/1000);
    tv.tv_usec = (ms%1000)*1000;
    return select(tv);
}
The Linux manual page states:
select() can monitor only file descriptors numbers that are less than FD_SETSIZE (1024)—an unreasonably low limit for many modern applications—and this limitation will not change.
(https://www.man7.org/linux/man-pages/man2/select.2.html)
We faced this problem on the iOS platform, when the size parameter passed to select() becomes greater than 1024.
Because of this check:
    resip_assert(read.fd_count < FD_SETSIZE); // Ensure there is room to add new FD
    FD_SET(fd, &read);
    size = ( int(fd+1) > size ) ? int(fd+1) : size;
the TLS connection is interrupted and does not proceed further.
So, is there any way in current reSIProcate to overcome this situation when the FD value becomes greater than 1023?
In InternalTransport.cxx: fd = ::socket(ipVer == V4 ? PF_INET : PF_INET6, SOCK_STREAM, 0);
When fd becomes greater than 1023, we face the problem described above.
Is there any way to control this value so that it always stays under 1023 on iOS?
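(For reference: poll() does not have the FD_SETSIZE limitation, since it takes descriptor values directly instead of a fixed-size bitmap. A minimal sketch of an equivalent millisecond wait, as a hypothetical standalone helper, not part of reSIProcate's API:

#include <poll.h>

// Wait up to 'ms' milliseconds for 'fd' to become readable.
// Unlike select(), any descriptor value is acceptable, even >= 1024.
int waitReadableMs(int fd, int ms)
{
    struct pollfd pfd;
    pfd.fd = fd;
    pfd.events = POLLIN;
    pfd.revents = 0;
    return poll(&pfd, 1, ms); // >0: ready, 0: timeout, -1: error
}
)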
One more thing: is there any relationship between the server and client connection lifetimes that could cause the FD value to increase on the client side for each new set of TCP/TLS connections created with socket(), even though the previous socket was closed?
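(For context: POSIX requires socket() to return the lowest-numbered unused descriptor, and that number is freed as soon as close() returns, even if the underlying TCP connection lingers in TIME_WAIT. So steadily climbing FD values usually mean earlier descriptors are still open somewhere. A minimal sketch illustrating the reuse behavior:

#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int a = socket(PF_INET, SOCK_STREAM, 0);
    int b = socket(PF_INET, SOCK_STREAM, 0);
    printf("a=%d b=%d\n", a, b);             // e.g. a=3, b=4

    close(a);                                // number 'a' is free again immediately
    int c = socket(PF_INET, SOCK_STREAM, 0);
    printf("c=%d\n", c);                     // c == a: the lowest free number is reused

    close(b);
    close(c);
    return 0;
}

If the numbers never come back down, the previous sockets are not actually being closed.)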

C select is overwriting timeout value

In a very simple C program that uses select() to check for new data to read on a socket, the optional timeout parameter is being overwritten by select(). It appears to be reset to the time actually waited: when data arrives sooner than the timeout, it holds much smaller values, leading to smaller and smaller timeouts when select() is called in a loop, unless the timeout is reset.
I could not find any information on this behavior in the select() description. I am testing on Ubuntu 18.04. Do I have to reset the timeout value before every call to select() to keep the same timeout?
The code snippet is this:
void *main_udp_loop(void *arg)
{
    struct UDP_CTX *ctx = (UDP_CTX *)arg;
    fd_set readfds = {};
    struct sockaddr peer_addr = { 0 };
    int peer_addr_len = sizeof(peer_addr);

    while (1)
    {
        struct timeval timeout;
        timeout.tv_sec = 0;
        timeout.tv_usec = 850000; // wait 0.85 second
        FD_ZERO(&readfds);
        FD_SET(ctx->udp_socketfd, &readfds);

        int activity = select(ctx->udp_socketfd + 1, &readfds, NULL, NULL, &timeout);
        if ((activity < 0) && (errno != EINTR))
        {
            printf("Select error: Exiting main thread\n");
            return NULL;
        }
        if (timeout.tv_usec != 850000)
        {
            printf("Timeout changed: %ld %ld\n", (long)timeout.tv_sec, (long)timeout.tv_usec);
        }
        if (activity == 0)
        {
            printf("No activity from select: %ld\n", (long)time(0));
            continue;
        }
        ...
}
This is documented behavior in the Linux select() man page:
On Linux, select() modifies timeout to reflect the amount of time not slept; most other implementations do not do this. (POSIX.1 permits either behavior.) This causes problems both when Linux code which reads timeout is ported to other operating systems, and when code is ported to Linux that reuses a struct timeval for multiple select()s in a loop without reinitializing it. Consider timeout to be undefined after select() returns.
So, yes, you have to reset the timeout value every time you call select().
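For completeness, a sketch of the portable pattern (sockfd is a placeholder for your descriptor); the key point is that the struct timeval is reinitialized inside the loop, which the code above already does:

for (;;) {
    fd_set readfds;
    struct timeval timeout;

    FD_ZERO(&readfds);
    FD_SET(sockfd, &readfds);
    timeout.tv_sec = 0;        // fresh 0.85 s timeout on every iteration,
    timeout.tv_usec = 850000;  // since Linux may have overwritten it

    int activity = select(sockfd + 1, &readfds, NULL, NULL, &timeout);
    if (activity < 0 && errno != EINTR)
        break; // real error
    // ... handle readable data or the timeout ...
}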

NetworkingDriverKit - How can I access packet data?

I've been creating a virtual ethernet interface. I've opened asynchronous communication with a controlling application and every time there are new packets, the controlling app is notified and then asks for the packet data. The packet data is stored in a simple struct, with uint8_t[1600] for the bytes, and uint32_t for the length. The dext is able to populate this struct with dummy data every time a packet is available, with the dummy data visible on the controlling application. However, I'm struggling to fill it with the real packet data.
The IOUserNetworkPacket provides metadata about a packet. It contains a packet's timestamp, size, etc., but it doesn't seem to contain the packet's data. There are the GetDataOffset() and GetMemorySegmentOffset() methods, which seem to return byte offsets for where the packet data is located in its memory buffer. My instinct tells me to add this offset to the pointer of wherever the packet data is stored. The problem is I have no idea where the packets are actually stored.
I know they are managed by the IOUserNetworkPacketBufferPool, but I don't think that's where their memory is. There is the CopyMemoryDescriptor() method which gives an IOMemoryDescriptor of its contents. I tried using the descriptor to create an IOMemoryMap, using it to call GetAddress(). The pointers to all the mentioned objects lead to junk data.
I must be approaching this entirely wrong. If anyone knows how to access the packet data, or has any ideas, I would appreciate any help. Thanks.
Code snippet within IOUserClient::ExternalMethod:
case GetPacket:
{
    IOUserNetworkPacket *packet = ivars->m_provider->getPacket();
    GetPacket_Output output;
    output.packet_size = packet->getDataLength();

    IOUserNetworkPacketBufferPool *pool;
    packet->GetPacketBufferPool(&pool);

    IOMemoryDescriptor *memory = nullptr;
    pool->CopyMemoryDescriptor(&memory);

    IOMemoryMap *map = nullptr;
    memory->CreateMapping(0, 0, 0, 0, 0, &map);
    uint64_t address = map->GetAddress()
        + packet->getMemorySegmentOffset();

    memcpy(output.packet_data, (void *)address, packet->getDataLength());
    in_arguments->structureOutput = OSData::withBytes(
        &output, sizeof(GetPacket_Output));
    // free stuff
} break;
The problem was caused by an IOUserNetworkPacketBufferPool bug. My bufferSize was set to 1600, but this value was ignored and replaced with 2048. The IOUserNetworkPackets acted as though the bufferSize was 1600, and so they returned an invalid offset.
Creating the buffer pool and mapping it:
kern_return_t
IMPL(FooDriver, Start)
{
    // ...
    IOUserNetworkPacketBufferPool::Create(this, "FooBuffer",
        32, 32, 2048, &ivars->packet_buffer);
    ivars->packet_buffer->CopyMemoryDescriptor(&ivars->packet_buffer_md);
    ivars->packet_buffer_md->Map(0, 0, 0, IOVMPageSize,
        &ivars->packet_buffer_addr, &ivars->packet_buffer_length);
    // ...
}
Getting the packet data:
void FooDriver::getPacketData(
    IOUserNetworkPacket *packet,
    uint8_t *packet_data,
    uint32_t *packet_size)
{
    uint8_t packet_head;
    uint64_t packet_offset;
    packet->GetHeadroom(&packet_head);
    packet->GetMemorySegmentOffset(&packet_offset);

    uint8_t *buffer = (uint8_t *)(ivars->packet_buffer_addr
        + packet_offset + packet_head);
    *packet_size = packet->getDataLength();
    memcpy(packet_data, buffer, *packet_size);
}

How to implement read/write functions using libmodbus? (C code)

I want to read and write under the Modbus TCP specification, so I'm trying to code a client and a server in a Linux environment.
(In the end I want to communicate with a Windows program, acting as a client, over Modbus TCP.)
It doesn't work as I want, so I'm asking here.
I'm testing my Linux client code against easymodbus as the server, using libmodbus.
I'd like to read coils (function 0x01) and write a single coil (function 0x05).
When the code is executed using libmodbus, 'FF' is printed for the Unit ID part, although according to the manual 01 should be output for Modbus TCP.
I don't know why 'FF' is printed (photo attached).
Wrong result:
Expected result:
'[00] [00] .... [00]': do you know where to control this part?
Do you have, or do you know of, sample code that implements read/write using libmodbus?
Please let me know if you have any information.
ctx = modbus_new_tcp("192.168.0.99", 502);
modbus_set_debug(ctx, TRUE);

if (modbus_connect(ctx) == -1) {
    fprintf(stderr, "Connection failed: %s\n",
            modbus_strerror(errno));
    modbus_free(ctx);
    return -1;
}

tab_rq_bits = (uint8_t *) malloc(nb * sizeof(uint8_t));
memset(tab_rq_bits, 0, nb * sizeof(uint8_t));
tab_rp_bits = (uint8_t *) malloc(nb * sizeof(uint8_t));
memset(tab_rp_bits, 0, nb * sizeof(uint8_t));
nb_loop = nb_fail = 0;

/* WRITE BIT */
rc = modbus_write_bit(ctx, addr, tab_rq_bits[0]);
if (rc != 1) {
    printf("ERROR modbus_write_bit (%d)\n", rc);
    printf("Address = %d, value = %d\n", addr, tab_rq_bits[0]);
    nb_fail++;
} else {
    rc = modbus_read_bits(ctx, addr, 1, tab_rp_bits);
    if (rc != 1 || tab_rq_bits[0] != tab_rp_bits[0]) {
        printf("ERROR modbus_read_bits single (%d)\n", rc);
        printf("address = %d\n", addr);
        nb_fail++;
    }
}

printf("Test: ");
if (nb_fail)
    printf("%d FAILS\n", nb_fail);
else
    printf("SUCCESS\n");

free(tab_rq_bits);
free(tab_rp_bits);

/* Close the connection */
modbus_close(ctx);
modbus_free(ctx);
return 0;
That FF you see right before the Modbus function is actually correct. Quoting the Modbus Implementation Guide, page 23:
On TCP/IP, the MODBUS server is addressed using its IP address; therefore, the
MODBUS Unit Identifier is useless. The value 0xFF has to be used.
So libmodbus is just sticking to the Modbus specification. I'm assuming, then, that the problem is in easymodbus, which is apparently expecting you to use 0x01 as the unit id in your queries.
I imagine you don't want to mess with easymodbus, so you can fix this problem pretty easily from libmodbus: just change the default unit id:
modbus_set_slave(ctx, 1);
You could also go with:
rc = modbus_set_slave(ctx, MODBUS_BROADCAST_ADDRESS);
ASSERT_TRUE(rc != -1, "Invalid broadcast address");
to make your client address all slaves within the network, if you have more than one.
There is more info, and a short explanation of where this problem comes from, in the libmodbus man page for the modbus_set_slave function.
For a very comprehensive example, you can check the libmodbus unit tests.
And regarding your question number 5: the zeros you mean are the states (true or false) you want to write to (or read from) the coils. For writing, you can change them with the value argument of modbus_write_bit(ctx, address, value).
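Putting it together, a minimal sketch of the client setup with the unit id changed (IP and port taken from the question; error handling trimmed):

#include <errno.h>
#include <stdio.h>
#include <modbus.h>

int main(void)
{
    modbus_t *ctx = modbus_new_tcp("192.168.0.99", 502);
    if (ctx == NULL)
        return -1;

    modbus_set_slave(ctx, 1); // unit id 0x01 instead of the default 0xFF

    if (modbus_connect(ctx) == -1) {
        fprintf(stderr, "Connection failed: %s\n", modbus_strerror(errno));
        modbus_free(ctx);
        return -1;
    }

    /* ... modbus_write_bit() / modbus_read_bits() as in the question ... */

    modbus_close(ctx);
    modbus_free(ctx);
    return 0;
}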
I'm very grateful for your reply.
I tested the read/write functions using the unit-test-server/client code you recommended.
I've reviewed the code, but there are still many things I don't know.
When I run the unit-test-server and unit-test-client against each other, some address values work and some do not (do you know why?):
- I checked and found that UT_BITS_ADDRESS (the address value) works from 0x130 to 0x150.
- An 'Illegal data address' error occurs at values below 0x130 and above 0x150.
- The addresses I want to read/write are 0x0001 to 0x0004 (do you know how to do that?).
I want to know how to process and transmit data like the TX part of the right picture.
I'm running both client and server in my Linux environment and doing read/write testing.
Among the wrong pictures, ...[06][FF]... <-- I want to know how to modify the FF part (to change the value to 01, as shown in the picture).
Also, isn't modbus_set_slave the function for Modbus RTU? I'd like to communicate between a PC program and a Linux device in the end, so where should I use that function?
Thanks again for your concern.

Detecting CAN bus errors under socketCAN linux driver

Our products use a well-known CANopen stack, which uses socketCAN, on an embedded BeagleBone Black based system running Ubuntu 14.04 LTS. For some reason, even though the stack we're using detects when the CAN bus goes into a PASSIVE state or even a BUS OFF state, it never indicates when the CAN bus recovers from errors, leaves the PASSIVE or warning state, and returns to a non-error state.
If I were to query the socketCAN driver directly (via ioctl calls), would I be able to detect when the CAN bus goes in and out of a warning state (which is less than 127 errors), in and out of a PASSIVE state (greater than 127 errors) or goes BUS OFF (greater than 255 errors)?
I'd like to know if I'd be wasting my time doing this or is there a better way to detect, accurately and in real-time, all conditions of a CAN bus?
I have only a partial solution to that problem.
As you are using socketCAN, the interface is seen as a standard network interface, on which we can query the status.
Based on How to check Ethernet in Linux? (replacing "eth0" with "can0"), you can check the link status.
This is not real-time, but can be executed in a periodic thread to check the bus state.
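A minimal sketch of that approach (assuming checking the IFF_RUNNING flag is enough for your purposes; it will not distinguish error-active from error-passive):

#include <net/if.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

// Returns 1 if the interface is operational, 0 if not, -1 on error.
int canLinkIsRunning(const char *ifname) // e.g. "can0"
{
    struct ifreq ifr;
    int s = socket(AF_INET, SOCK_DGRAM, 0); // any socket will do for the ioctl
    if (s < 0)
        return -1;
    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
    if (ioctl(s, SIOCGIFFLAGS, &ifr) < 0) {
        close(s);
        return -1;
    }
    close(s);
    return (ifr.ifr_flags & IFF_RUNNING) ? 1 : 0;
}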
So while this is an old question, I just happened to stumble upon it (while searching for something only mildly related).
SocketCAN provides all the means for detecting error frames out of the box.
Assuming your code looks similar to this:
int readFromCan(int socketFd, unsigned char *data, unsigned int *rxId) {
    int32_t bytesRead = -1;
    struct can_frame canFrame = {0};
    bytesRead = (int32_t)read(socketFd, &canFrame, sizeof(canFrame));
    if (bytesRead >= 0) {
        bytesRead = canFrame.can_dlc;
        if (data) {
            memcpy(data, canFrame.data, bytesRead); // was 'readBytes', a typo
        }
        if (rxId) {
            *rxId = canFrame.can_id; // This will come in handy
        }
    }
    return bytesRead;
}
void doStuffWithMessage() {
    int32_t mySocketFd = fooGetSocketFd();
    unsigned int receiveId = 0; // matches readFromCan's rxId parameter
    unsigned char myData[8] = {0};
    int32_t dataLength = 0;

    if ((dataLength = readFromCan(mySocketFd, myData, &receiveId)) == -1) {
        // Handle error
        return;
    }
    if ((receiveId & CAN_ERR_FLAG) != 0) { // error frames carry CAN_ERR_FLAG in the id
        // Handle error frame
        return;
    }
    // Do stuff with your data
}
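One caveat worth adding: a raw CAN socket does not deliver error frames unless you subscribe to them, so a sketch like the following would need to run once after the socket is opened (the mask here subscribes to every error class):

#include <linux/can.h>
#include <linux/can/error.h>
#include <linux/can/raw.h>
#include <sys/socket.h>

// Ask the kernel to pass error frames to this CAN_RAW socket.
int enableErrorFrames(int socketFd)
{
    can_err_mask_t errMask = CAN_ERR_MASK; // all error classes
    return setsockopt(socketFd, SOL_CAN_RAW, CAN_RAW_ERR_FILTER,
                      &errMask, sizeof(errMask));
}

Error frames then show up as regular reads with CAN_ERR_FLAG set in can_id, with the class details (controller state, bus-off, etc.) encoded in the id and data bytes as described in linux/can/error.h.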

Broadcast sendto failed

I am trying to broadcast data, but the output is "udp send failed". I chose the arbitrary port 33333. What's wrong with my code?
int main()
{
    struct sockaddr_in udpaddr = { sin_family : AF_INET };
    int xudpsock_fd, sock, len = 0, ret = 0, optVal = 0;
    char buffer[255];
    char szSocket[64];

    memset(buffer, 0x00, sizeof(buffer));
    memset(&udpaddr, 0, sizeof(udpaddr));
    udpaddr.sin_addr.s_addr = INADDR_BROADCAST;
    udpaddr.sin_port = htons(33333);

    xudpsock_fd = socket(PF_INET, SOCK_DGRAM, IPPROTO_UDP);
    optVal = 1;
    ret = setsockopt(xudpsock_fd, SOL_SOCKET, SO_BROADCAST, (char *)&optVal, sizeof(optVal));

    strcpy(buffer, "this is a test msg");
    len = sizeof(buffer);
    ret = sendto(xudpsock_fd, buffer, len, 0, (struct sockaddr *)&udpaddr, sizeof(udpaddr));
    if (ret == -1)
        printf("udp send failed\n");
    else
        printf("udp send succeed\n");
    return (0);
}
One problem is that the address family you are trying to send to is zero (AF_UNSPEC). Although you initialize the family to AF_INET at the top of the function, you later zero it out with memset.
On the system I tested with, the send actually works anyway for some strange reason despite the invalid address family, but you should definitely try fixing that first.
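A sketch of that fix: zero the structure first, then fill in the fields, so sin_family keeps its value:

struct sockaddr_in udpaddr;

memset(&udpaddr, 0, sizeof(udpaddr));       // zero first ...
udpaddr.sin_family = AF_INET;               // ... then set the fields
udpaddr.sin_addr.s_addr = INADDR_BROADCAST;
udpaddr.sin_port = htons(33333);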
You probably had a problem with your default route (e.g., you didn't have one). sendto() needs to pick an interface to send the packet on, but the destination address was probably outside the Destination/Genmask of every defined interface (see the 'route' command-line tool).
The default route catches this type of packet and sends it through an interface despite the mismatch.
Setting the destination to 127.255.255.255 will usually cause the packet to be sent through the loopback interface (127.0.0.1), meaning it will be able to be read by applications that (in this case) are run on the local machine.