How often can you write to a socket when uploading through FTP?

I'm working on an integrated FTP upload module. In some code I found online, this method is called at intervals of 300 milliseconds:
var uploadInterval:int = 300;
var bufferSize:uint = 4096;

private function sendData():void {
    var sourceBytes:ByteArray = new ByteArray();
    sourceFile.readBytes(sourceBytes, 0, bufferSize);
    passiveSocket.writeBytes(sourceBytes, 0, sourceBytes.bytesAvailable);
    passiveSocket.flush();
    if (sourceFile.bytesAvailable < bufferSize) {
        bufferSize = sourceFile.bytesAvailable;
    }
}

interval = setInterval(sendData, uploadInterval);
If I set the interval to 5 ms the file uploads in 10 seconds; at 300 ms it takes around 37 seconds. Is it all right to set it to 5 ms instead of 300?
Update:
It looks like there is a commented-out method that uses blocking mode. From more searching online, it appears the interval-based version exists so the code can report upload progress. Anyway, here is the original commented-out method; I haven't tested it:
private function sendBlockData():void {
    var bytes:int;
    var sourceBytes:ByteArray = new ByteArray();
    sourceFile.readBytes(sourceBytes, bytes, Math.min(bufferSize, sourceFile.bytesAvailable - bytes));
    bytes += bufferSize;
    bufferSize = Math.min(bufferSize, sourceFile.bytesAvailable - bytes);
    passiveSocket.writeBytes(sourceBytes, 0, sourceBytes.bytesAvailable);
    passiveSocket.flush();
}

If the socket is blocking, which is usually the default, you can write as fast as you like: the write() call will block until the OS can accept more data.
So I don't know why the code you found waits some milliseconds between write() calls. In the general case you just loop and send all your available data without waiting; the OS takes care of blocking the sender when needed.
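A minimal sketch of that loop in C, assuming a blocking TCP socket (send_all is a hypothetical helper name, not part of any library):

```c
#include <sys/types.h>
#include <sys/socket.h>
#include <stddef.h>

/* Write the whole buffer. On a blocking socket, send() simply blocks
   until the kernel can accept more data, so no timer is needed. */
ssize_t send_all(int fd, const char *buf, size_t len)
{
    size_t sent = 0;
    while (sent < len) {
        ssize_t n = send(fd, buf + sent, len - sent, 0);
        if (n < 0)
            return -1;          /* caller inspects errno */
        sent += (size_t)n;
    }
    return (ssize_t)sent;
}
```

The kernel's socket buffer provides the pacing that the 300 ms timer was approximating.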


C select is overwriting timeout value [duplicate]

In a very simple C program that uses select() to check for new readable data on a socket, when I pass the optional timeout parameter it is overwritten by select(). It looks like select() resets it to the seconds and microseconds it actually waited, so when data arrives sooner than the timeout the values become much smaller, leading to smaller and smaller timeouts unless the timeout is reset before select() is called again in a loop.
I could not find any information on this behavior in the select() documentation. I am testing on Ubuntu 18.04. Do I have to reset the timeout value before every call to select() to keep the same timeout?
The code snippet is this:
void *main_udp_loop(void *arg)
{
    struct UDP_CTX *ctx = (UDP_CTX *)arg;
    fd_set readfds = {};
    struct sockaddr peer_addr = { 0 };
    int peer_addr_len = sizeof(peer_addr);

    while (1)
    {
        struct timeval timeout;
        timeout.tv_sec = 0;
        timeout.tv_usec = 850000; // wait 0.85 second

        FD_ZERO(&readfds);
        FD_SET(ctx->udp_socketfd, &readfds);

        int activity = select(ctx->udp_socketfd + 1, &readfds, NULL, NULL, &timeout);
        if ((activity < 0) && (errno != EINTR))
        {
            printf("Select error: Exiting main thread\n");
            return NULL;
        }
        if (timeout.tv_usec != 850000)
        {
            printf("Timeout changed: %ld %ld\n", (long)timeout.tv_sec, (long)timeout.tv_usec);
        }
        if (activity == 0)
        {
            printf("No activity from select: %ld\n", (long)time(0));
            continue;
        }
        ...
    }
}
This is documented behavior in the Linux select() man page:
On Linux, select() modifies timeout to reflect the amount of time not slept; most other implementations do not do this. (POSIX.1 permits either behavior.) This causes problems both when Linux code which reads timeout is ported to other operating systems, and when code is ported to Linux that reuses a struct timeval for multiple select()s in a loop without reinitializing it. Consider timeout to be undefined after select() returns.
So, yes, you have to reset the timeout value every time you call select().

NetworkingDriverKit - How can I access packet data?

I've been creating a virtual ethernet interface. I've opened asynchronous communication with a controlling application and every time there are new packets, the controlling app is notified and then asks for the packet data. The packet data is stored in a simple struct, with uint8_t[1600] for the bytes, and uint32_t for the length. The dext is able to populate this struct with dummy data every time a packet is available, with the dummy data visible on the controlling application. However, I'm struggling to fill it with the real packet data.
The IOUserNetworkPacket provides metadata about a packet: its timestamp, size, and so on, but it doesn't seem to contain the packet's data. There are the GetDataOffset() and GetMemorySegmentOffset() methods, which seem to return byte offsets for where the packet data is located in its memory buffer. My instinct is to add this offset to a pointer to wherever the packet data is stored. The problem is I have no idea where the packets are actually stored.
I know they are managed by the IOUserNetworkPacketBufferPool, but I don't think that's where their memory is. There is the CopyMemoryDescriptor() method, which gives an IOMemoryDescriptor of its contents. I tried using the descriptor to create an IOMemoryMap and calling GetAddress() on it. The pointers to all the mentioned objects lead to junk data.
I must be approaching this entirely wrong. If anyone knows how to access the packet data, or has any ideas, I would appreciate any help. Thanks.
Code snippet within IOUserClient::ExternalMethod:
case GetPacket:
{
    IOUserNetworkPacket *packet = ivars->m_provider->getPacket();

    GetPacket_Output output;
    output.packet_size = packet->getDataLength();

    IOUserNetworkPacketBufferPool *pool;
    packet->GetPacketBufferPool(&pool);

    IOMemoryDescriptor *memory = nullptr;
    pool->CopyMemoryDescriptor(&memory);

    IOMemoryMap *map = nullptr;
    memory->CreateMapping(0, 0, 0, 0, 0, &map);

    uint64_t address = map->GetAddress() + packet->getMemorySegmentOffset();
    memcpy(output.packet_data, (void *)address, packet->getDataLength());

    in_arguments->structureOutput = OSData::withBytes(&output, sizeof(GetPacket_Output));
    // free stuff
} break;
The problem was caused by an IOUserNetworkPacketBufferPool bug. I had set bufferSize to 1600, but that value was ignored and replaced with 2048. The IOUserNetworkPackets behaved as though the bufferSize were 1600, and so they returned an invalid offset.
Creating the buffer pool and mapping it:
kern_return_t
IMPL(FooDriver, Start)
{
    // ...
    IOUserNetworkPacketBufferPool::Create(this, "FooBuffer",
        32, 32, 2048, &ivars->packet_buffer);
    ivars->packet_buffer->CopyMemoryDescriptor(&ivars->packet_buffer_md);
    ivars->packet_buffer_md->Map(0, 0, 0, IOVMPageSize,
        &ivars->packet_buffer_addr, &ivars->packet_buffer_length);
    // ...
}
Getting the packet data:
void FooDriver::getPacketData(
    IOUserNetworkPacket *packet,
    uint8_t *packet_data,
    uint32_t *packet_size)
{
    uint8_t packet_head;
    uint64_t packet_offset;
    packet->GetHeadroom(&packet_head);
    packet->GetMemorySegmentOffset(&packet_offset);

    uint8_t *buffer = (uint8_t *)(ivars->packet_buffer_addr
        + packet_offset + packet_head);
    *packet_size = packet->getDataLength();
    memcpy(packet_data, buffer, *packet_size);
}

TLS connection problem on iOS when the socket file descriptor value exceeds 1023

We are using the reSIProcate library on iOS; in Socket.hxx, select() is used to wait on the file descriptors.
We are facing an issue where the FD value increases with every new TLS connection that is established.
Socket.hxx: https://github.com/resiprocate/resiprocate/blob/master/rutil/Socket.hxx
int select(struct timeval& tv)
{
    return numReady = ::select(size, &read, &write, &except, &tv);
}

int selectMilliSeconds(unsigned long ms)
{
    struct timeval tv;
    tv.tv_sec = (ms / 1000);
    tv.tv_usec = (ms % 1000) * 1000;
    return select(tv);
}
The Linux manual page says:
select() can monitor only file descriptors numbers that are less than FD_SETSIZE (1024)—an unreasonably low limit for many modern applications—and this limitation will not change.
(https://www.man7.org/linux/man-pages/man2/select.2.html)
We hit this problem on the iOS platform when the size parameter passed to select() exceeds 1024.
Because of this check:
resip_assert(read.fd_count < FD_SETSIZE); // Ensure there is room to add a new FD
#endif
FD_SET(fd, &read);
size = (int(fd + 1) > size) ? int(fd + 1) : size;
the TLS connection is interrupted and does not proceed further.
So, is there any way in current reSIProcate to handle the situation where an FD value exceeds 1023?
In internalTransport.cxx: fd = ::socket(ipVer == V4 ? PF_INET : PF_INET6, SOCK_STREAM, 0);
When fd becomes larger than 1023, we hit the problem described above.
Is there any way to control this value so that it always stays under 1024 on iOS?
One more thing:
Is there any relation between the server and client connection lifetimes that could cause the FD value to increase on the client side with each new set of TCP/TLS connections, even though the previous socket was closed?
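For what it's worth, the classic way around the FD_SETSIZE ceiling is to replace select() with poll() (or kqueue on Apple platforms), since poll() takes an array of pollfd structs rather than a fixed-size bitmask. A minimal sketch of the poll() shape (wait_readable is a hypothetical helper, not a reSIProcate API):

```c
#include <poll.h>

/* poll() has no FD_SETSIZE restriction: the descriptor number can be
   arbitrarily large, as long as the process's rlimit allows it. */
int wait_readable(int fd, int timeout_ms)
{
    struct pollfd pfd;
    pfd.fd = fd;
    pfd.events = POLLIN;
    pfd.revents = 0;
    return poll(&pfd, 1, timeout_ms);   /* 1 = ready, 0 = timeout, -1 = error */
}
```

Whether current reSIProcate can be configured to use a poll-based event loop on iOS is a separate question; the sketch only shows why the 1023 limit is specific to select().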

How to read all the data of unknown length from a StreamSocket in WinRT using DataReader

I have configured my socket to read partial data too like this:
#socket = new Windows.Networking.Sockets.StreamSocket()
hostName = new Windows.Networking.HostName(#ip)
#ensureConnection = #socket.connectAsync(hostName, #port.toString())
  .then () =>
    #writer = new DataWriter(#socket.outputStream)
    #reader = new DataReader(#socket.inputStream)
    #reader.inputStreamOptions = InputStreamOptions.partial
Then my function to read from the socket looks like this:
readLineAsync = (reader, buffer = "") ->
  while reader.unconsumedBufferLength
    byte = reader.readByte()
    if byte is 0
      return WinJS.Promise.as(buffer)
    buffer += String.fromCharCode(byte)
  reader.loadAsync(1024).then (readBytes) ->
    if readBytes is 0
      WinJS.Promise.as(buffer)
    else
      while reader.unconsumedBufferLength
        byte = reader.readByte()
        if byte is 0
          return WinJS.Promise.as(buffer)
        buffer += String.fromCharCode(byte)
      readLineAsync(reader, buffer)
There are two problems with this function:
1. With very large responses, the stack builds up with recursive readLineAsync calls. How can I prevent that? Should I use the WinJS Scheduler API or similar to queue the next call to readLineAsync?
2. Sometimes reader.loadAsync does not complete when there is no more data on the socket. Sometimes it does, and readByte() then returns 0. Why is that?
Why do I loop over the reader's unconsumedBufferLength in two places in that function? I initially had this code only in the loadAsync continuation handler, but since a response can contain a terminating \0 character I need to check for unread data in the reader's buffer on function entry too.
That's the pseudo-loop to send to and receive from the socket:
readResponseAsync = (reader) ->
  return readLineAsync(#reader).then (line) ->
    result = parseLine(line)
    if result.unknown then return readResponseAsync(reader)
    return result

#ensureConnection.then () =>
  sendCommand(...)
  readResponseAsync(#reader).then (response) ->
    # handle response
All the WinRT samples from Microsoft deal with a known amount of data on the socket, so they don't really fit my scenario.
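On the first problem: the stack growth disappears if the read-until-terminator scan is a loop rather than a recursion, so each read only schedules the next one instead of nesting inside it. The control flow, expressed as a plain C loop over a descriptor for clarity (read_line is a hypothetical helper, and the blocking read() stands in for reader.loadAsync):

```c
#include <stddef.h>
#include <unistd.h>

/* Accumulate bytes into buf until a 0 terminator or EOF.  A single
   loop, so the call stack stays flat however large the response is. */
ssize_t read_line(int fd, char *buf, size_t cap)
{
    size_t used = 0;
    while (used < cap) {
        char c;
        ssize_t n = read(fd, &c, 1);   /* stand-in for reader.loadAsync */
        if (n <= 0)                    /* EOF or error: return what we have */
            break;
        if (c == 0)                    /* terminating \0 ends the line */
            break;
        buf[used++] = c;
    }
    return (ssize_t)used;
}
```

In the WinJS version the same flattening can be done by resolving each loadAsync continuation into a state check instead of calling readLineAsync recursively.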

Simplest way to process a list of items in a multi-threaded manner

I've got a piece of code that opens a data reader and, for each record (which contains a URL), downloads and processes that page.
What's the simplest way to make it multi-threaded so that, say, there are 10 slots that can be used to download and process pages simultaneously, and as slots become available the next rows are read, and so on?
I can't use WebClient.DownloadDataAsync
Here's what I have tried, but it hasn't worked (i.e. the "worker" is never run):
using (IDataReader dr = q.ExecuteReader())
{
    ThreadPool.SetMaxThreads(10, 10);
    int workerThreads = 0;
    int completionPortThreads = 0;

    while (dr.Read())
    {
        do
        {
            ThreadPool.GetAvailableThreads(out workerThreads, out completionPortThreads);
            if (workerThreads == 0)
            {
                Thread.Sleep(100);
            }
        } while (workerThreads == 0);

        Database.Log l = new Database.Log();
        l.Load(dr);

        ThreadPool.QueueUserWorkItem(delegate(object threadContext)
        {
            Database.Log log = threadContext as Database.Log;
            Scraper scraper = new Scraper();
            dc.Product p = scraper.GetProduct(log, log.Url, true);
            ManualResetEvent done = new ManualResetEvent(false);
            done.Set();
        }, l);
    }
}
You do not normally need to touch the max thread counts (I believe the defaults are around 25 worker threads per processor and 1000 for I/O). You might consider setting the minimum thread count instead, to ensure a reasonable number is always available.
You don't need to call GetAvailableThreads either; just start calling QueueUserWorkItem and let the pool do the work. Can you reproduce your problem by simply calling QueueUserWorkItem?
You could also look into the Task Parallel Library, which has helper methods that make this kind of work more manageable and easier.
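The "10 slots" idea itself is language-agnostic: a counting semaphore initialized to 10 gates admission, and the producer blocks whenever all slots are busy. A sketch of that shape using POSIX threads and unnamed semaphores (Linux); the per-item work is a stand-in for the download-and-process step, not the original Scraper code:

```c
#include <pthread.h>
#include <semaphore.h>
#include <stdint.h>

/* Bounded fan-out: at most SLOTS items are in flight at once.  The
   producer loop blocks in sem_wait() when all slots are busy, which
   replaces the GetAvailableThreads/Sleep polling in the question. */
#define SLOTS 10
#define ITEMS 100

static sem_t slots;
static int processed[ITEMS];

static void *worker(void *arg)
{
    int i = (int)(intptr_t)arg;
    processed[i] = 1;             /* stand-in for download + parse */
    sem_post(&slots);             /* free the slot for the next item */
    return NULL;
}

int process_all(void)
{
    sem_init(&slots, 0, SLOTS);
    pthread_t tids[ITEMS];
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&slots);         /* blocks while SLOTS items are in flight */
        pthread_create(&tids[i], NULL, worker, (void *)(intptr_t)i);
    }
    for (int i = 0; i < ITEMS; i++)
        pthread_join(tids[i], NULL);
    sem_destroy(&slots);

    int done = 0;
    for (int i = 0; i < ITEMS; i++)
        done += processed[i];
    return done;
}
```

In .NET the same gate is a SemaphoreSlim (or SemaphoreSlim plus Task.Run); the structure carries over directly.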