iOS: throttle bandwidth of e.g. Alamofire - swift

Is it possible to throttle the bandwidth of an upload operation in, e.g., Alamofire? I would like to upload data in the background while the user is using the app and uploading/downloading more important content. Therefore, I would like to throttle the bandwidth of the background upload under specific circumstances.
The only possibility I have found so far is ASIHTTPRequest, which has a maxBandwidthPerSecond property, but that library is far too old; I would like to use something newer.

The Chilkat API provides a CkoSocket class that makes it possible to throttle the bandwidth used, like this:
// To use bandwidth throttling, the connection should be made using the socket API.
// This provides numerous properties to customize the connection, such as
// BandwidthThrottleDown, BandwidthThrottleUp, ClientIpAddress, ClientPort, HTTP proxy,
// KeepAlive, PreferIpv6, RequireSslCertVerify, SoRcvBuf, SoSndBuf, SoReuseAddr,
// SOCKS proxy, TcpNoDelay, TlsPinSet, TlsCipherSuite, SslAllowedCiphers, etc.
let socket = CkoSocket()
var maxWaitMs: Int = 5000
var success: Bool = socket.Connect("content.dropboxapi.com", port: 443, ssl: true, maxWaitMs: maxWaitMs)
if success != true {
    print("\(socket.LastErrorText)")
    print("Connect Fail Reason: \(socket.ConnectFailReason.integerValue)")
    return
}
// Set the upload bandwidth throttle rate to 50000 bytes per second.
socket.BandwidthThrottleUp = 50000
Check this for further documentation.
The example in the documentation demonstrates how to use upload bandwidth throttling with the REST API. It uploads a file to Dropbox using a file stream, with a limit on the bandwidth that can be used for the transfer.

I cannot find anything for Alamofire (the Swift side), but you could use AFNetworking.
As you can see from this link (AFNetworking/AFNetworking/AFURLRequestSerialization.h), the source says:
/**
Throttles request bandwidth by limiting the packet size and adding a delay for each chunk read from the upload stream.
When uploading over a 3G or EDGE connection, requests may fail with "request body stream exhausted". Setting a maximum packet size and delay according to the recommended values (`kAFUploadStream3GSuggestedPacketSize` and `kAFUploadStream3GSuggestedDelay`) lowers the risk of the input stream exceeding its allocated bandwidth. Unfortunately, there is no definite way to distinguish between a 3G, EDGE, or LTE connection over `NSURLConnection`. As such, it is not recommended that you throttle bandwidth based solely on network reachability. Instead, you should consider checking for the "request body stream exhausted" in a failure block, and then retrying the request with throttled bandwidth.
@param numberOfBytes Maximum packet size, in number of bytes. The default packet size for an input stream is 16kb.
@param delay Duration of delay each time a packet is read. By default, no delay is set.
*/
- (void)throttleBandwidthWithPacketSize:(NSUInteger)numberOfBytes
delay:(NSTimeInterval)delay;
@end
You could build your own custom "NetworkManager" class in Objective-C and easily import it into your Swift project.

Related

How can socketcan get send failure status?

As we all know, in the CAN bus communication protocol the sender knows whether the data was successfully sent. I send SocketCAN data as follows:
ret = write(socket, frame, sizeof(struct can_frame));
However, even if the CAN communication cable is disconnected, the return value of ret is still 16 (= sizeof(struct can_frame)). I looked into this and found that the problem is due to the tx_queue of the network stack used by SocketCAN. Only when write is called many times and the buffer is full does ret become -1.
But this is not the behavior I expect; I want every frame I send to immediately report success or failure.
With
echo 0 > /sys/class/net/can0/tx_queue_len
I tried to disable the tx_queue, but it does not work.
What I want to ask is: is there a way to disable the tx_queue of SocketCAN, or to get the send status of each frame from the controller through an API (such as libsocketcan)?
Thanks.
You cannot use write() itself to discover whether a CAN frame was successfully put on the bus, because all it does is write the frame to the in-kernel socket buffer. The kernel then moves the frame to the transmit queue of the SocketCAN network interface, followed by the driver moving it to the transmit buffer of the CAN controller, which finally puts the frame on the bus. What you want is a direct write which bypasses all those buffers, but that's not possible with SocketCAN, even if you set the transmit queue length to 0.
However, there is another way to get confirmation. If you enable the CAN_RAW_RECV_OWN_MSGS socket option (see sections 4.1.4 and 4.1.7 in the SocketCAN documentation), you will receive frames that were successfully sent. You'll need to use recvmsg() so you get the message flags: msg_flags will have the MSG_CONFIRM bit set for a frame that was successfully sent by the same socket on which it is received. You won't be informed of failures, but you can detect them by using a timeout for the confirmation.
It's not an ideal solution because it mixes the read and write logic in your application. One way to avoid this would be to use two sockets. One for writing and reading MSG_CONFIRM frames, the other for reading all other frames. You could then create a (blocking) write function that does a write() followed by multiple calls to recvmsg() with an appropriate timeout.
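To make that concrete, here is a minimal sketch of such a blocking confirmed send. The interface name "can0", the timeout value, and matching the echoed frame by CAN ID are assumptions for illustration, and error handling is kept short:
#include <linux/can.h>
#include <linux/can/raw.h>
#include <net/if.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>
#include <poll.h>
#include <cstring>

// Open a raw CAN socket on the given interface that also receives its own
// frames, so successful transmissions come back flagged with MSG_CONFIRM.
int openConfirmingCanSocket(const char *ifname) {
    int sock = socket(PF_CAN, SOCK_RAW, CAN_RAW);
    if (sock < 0) return -1;
    int on = 1;
    setsockopt(sock, SOL_CAN_RAW, CAN_RAW_RECV_OWN_MSGS, &on, sizeof on);
    ifreq ifr {};
    std::strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
    if (ioctl(sock, SIOCGIFINDEX, &ifr) < 0) { close(sock); return -1; }
    sockaddr_can addr {};
    addr.can_family = AF_CAN;
    addr.can_ifindex = ifr.ifr_ifindex;
    if (bind(sock, reinterpret_cast<sockaddr *>(&addr), sizeof addr) < 0) { close(sock); return -1; }
    return sock;
}

// Write one frame, then wait up to timeout_ms for its MSG_CONFIRM echo.
// Returns true only if the kernel confirms the frame was put on the bus.
bool sendFrameConfirmed(int sock, const can_frame &frame, int timeout_ms) {
    if (write(sock, &frame, sizeof frame) != (ssize_t)sizeof frame)
        return false;
    pollfd pfd { sock, POLLIN, 0 };
    while (poll(&pfd, 1, timeout_ms) > 0) {
        can_frame rx {};
        iovec iov { &rx, sizeof rx };
        msghdr msg {};
        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        if (recvmsg(sock, &msg, 0) <= 0)
            break;
        if ((msg.msg_flags & MSG_CONFIRM) && rx.can_id == frame.can_id)
            return true;        // our own frame echoed back: it went out on the bus
        // anything else is ordinary received traffic; keep waiting
    }
    return false;               // no confirmation within the timeout
}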
Finally, it is useful to enable error frames (through the CAN_RAW_ERR_FILTER socket option). If you send a frame on a socket with a disconnected cable, this will typically result in a bus off state, which will be reported in an error frame.

Lowest TCP receive window size

What is the lowest possible TCP receive window size the Linux kernel TCP/IP stack can announce, and how can I configure it to announce such a window? My goal is to achieve low latency while sacrificing throughput.
Right now my test client application reads about 128 bytes per second from the buffer, and the TCP stack waits until SO_RCVBUF/2 bytes are free before notifying the server of a new window size. I'd like to make my client announce a window size lower than SO_RCVBUF/2 if possible.
I do not think that changing the receive window size will have any effect on latency.
However, since your messages are small (128 bytes), you may want to disable the Nagle algorithm on the sender, which makes the sender wait when sending TCP payloads smaller than the MSS:
Applications that expect real-time responses and low latency can react poorly with Nagle's algorithm. Applications such as networked multiplayer video games or the movement of the mouse in a remotely controlled operating system, expect that actions are sent immediately, while the algorithm purposefully delays transmission, increasing bandwidth efficiency at the expense of latency. For this reason applications with low-bandwidth time-sensitive transmissions typically use TCP_NODELAY to bypass the Nagle delay.
On the receiver, you may want to disable TCP delayed acknowledgements:
The additional wait time introduced by the delayed ACK can cause further delays when interacting with certain applications and configurations. If Nagle's algorithm is being used by the sending party, data will be queued by the sender until an ACK is received. If the sender does not send enough data to fill the maximum segment size (for example, if it performs two small writes followed by a blocking read) then the transfer will pause up to the ACK delay timeout. Linux 2.4.4+ supports a TCP_QUICKACK socket option that disables delayed ACK.
C++ code:
#include <netinet/tcp.h>
#include <sys/socket.h>
#include <system_error>
#include <cerrno>

void disableTcpNagle(int sock) {
    int value = 1;
    if(::setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &value, sizeof value))
        throw std::system_error(errno, std::system_category(), "setsockopt(TCP_NODELAY)");
}
void enableTcpQuickAck(int sock) {
    int value = 1;
    if(::setsockopt(sock, IPPROTO_TCP, TCP_QUICKACK, &value, sizeof value))
        throw std::system_error(errno, std::system_category(), "setsockopt(TCP_QUICKACK)");
}
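One practical note: on Linux, TCP_QUICKACK is not a sticky setting; the kernel may fall back to delayed ACKs on its own (see tcp(7)), so it is commonly re-applied after each read. A rough usage sketch, assuming sock is an already connected socket and using the helper above:
void receiveLowLatency(int sock) {
    char buf[4096];
    for (;;) {
        ssize_t n = ::recv(sock, buf, sizeof buf, 0);
        if (n <= 0)
            break;
        enableTcpQuickAck(sock);   // re-arm quick ACKs; the option is not permanent
        // ... process n bytes of data ...
    }
}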

Fiddler - add latency on single response without modifying response

In Fiddler how can I slow down the response of a specific request only, while passing through the response from the server?
I'm aware I can simulate a slow speed for all requests - that's not what I want.
Using the AutoResponder with a specific rule forces me to choose what to respond with.
How can I use the "Latency" feature without modifying the response? Is this possible in Fiddler?
I understand your question as wanting to delay either the request or the response for a specific request.
You can do this with FiddlerScript by updating the oSession object.
OnBeforeRequest
// Delay sends by 300ms per KB uploaded.
oSession["request-trickle-delay"] = "300";
OnBeforeResponse
// Delay receives by 150ms per KB downloaded.
oSession["response-trickle-delay"] = "150";
You would also need to filter for the correct request inside the chosen handler.
Filtering
// Sample Rule: Delay responses for URLs containing "/path/"
if (oSession.uriContains("/path/")) {
    oSession["response-trickle-delay"] = "150";
}
// Alternatively, match on the hostname or the full URL:
if (oSession.hostname == "some.hostname") {
    oSession["response-trickle-delay"] = "150";
}
if (oSession.url == "some.url") {
    oSession["response-trickle-delay"] = "150";
}
Additional info can be found here
Hope it helps
Rather than using the Latency feature, you can enter *delay:5000 as the "then respond with..." value, instead of a file path.
If you really want to use the Latency column: I noticed that rules with a blank response are ignored, so you can set the rule's command/path to *action (which isn't a real action) just so the rule executes and the latency takes effect.

Single channel gateway only detects first message

My gateway uses the Raspi and RFM95 configuration and operates at 915 MHz. I am using the single channel packet forwarder code by tftelkamp (https://github.com/tftelkamp/single_chan_pkt_fwd).
My gateway only detects the first message it receives and ignores all messages afterwards. It is still connected to the TTN server but does not receive any more messages.
Can anyone explain what might be the cause of this? Might it be because the RFM95 is sleeping, or because the code is no longer forwarding messages from the transceiver?
Thanks
I experienced a similar issue. Note that your sender uses different channels but starts with channel 0; that is the first (and only) message you successfully receive, because your single channel receiver can only receive on channel 0. There is a workaround for this issue on the sender side, explained here.
This sounds like your transmitter sends the messages using frequency-hopping, while your receiver does not handle it correctly (or the other way around).
Definition of frequency-hopping found in chapter 4.1.1.8 of Semtech's SX1272 datasheet:
Frequency hopping spread spectrum (FHSS) is typically employed when the duration of a single packet could exceed regulatory requirements relating to the maximum permissible channel dwell time. This is most notably the case in US operation, where the 902 to 928 MHz ISM band makes provision for frequency hopping operation. [...]
If you're using the LMIC-Arduino library for your node, then yes: by default it transmits across a range of channels, while the single_chan_pkt_fwd gateway only receives on the frequency you specify in global_conf.json or the .cpp source (depending on your chosen library).
Assuming you're using the arduino-lmic library, make the changes/additions mentioned in the TTN forum post linked by Rainer, which is the same issue I ran into.
Also... you'll find this further down the thread: in src > lmic > lmic.c edit the following:
void LMIC_disableChannel (u1_t channel) {
    if( channel < 72+MAX_XCHANNELS )
        //LMIC.channelMap[channel>>4] &= ~(1<<(channel&0xF)); // comment this one
        LMIC.channelMap[channel/16] &= ~(1<<(channel&0xF)); // add this one
}
Then pick a frequency on channel 0 and set that for both node and packet forwarder. Here's a table snip from this page. I went with 902300000 and it's working fine.
"freq": 902300000,
"spread_factor": 7,

Packet Size, Window Size and Socket Buffer in TCP

After studying the "window size" concept, my understanding is that it holds packets before they are sent over the wire, until the acknowledgement arrives for the earliest packet; once it fills up, subsequent packets are dropped. I have also read that TCP is a streaming protocol, and that a "packet" is something that belongs to the IP protocol at the network layer.
What I have assumed so far is that I declare a buffer (in code), fill it with some data, and send this buffer using a socket. I declared a buffer of 10000 bytes and send it repeatedly over a socket on a 10 Gbps link.
I have the following assumptions and questions. Please verify and help.
If I want to send a packet of 64, 256, 512, etc. bytes, I declare a buffer of that size in code and send it over the socket. Will each call to send() transmit one packet of that size?
So if I want to study the effect of packet size on throughput, what do I have to do? Do I need to vary the buffer size in code?
What are the socket buffers that we set using SO_SNDBUF and SO_RCVBUF? Google says they are buffer space for the socket. Are they the same as the TCP window size or something different? Which parameter is more suitable to vary to increase throughput?
Also, there are three socket buffer parameters: min, default and max. Which one should I vary in my experiment to get the most relevant results?
If I want to send a packet of 64, 256, 512, etc. bytes, I declare a buffer of that size in code and send it over the socket. Each call to send() will send one packet of that size.
Only if you disable the Nagle algorithm and the size is less than the path MTU. You mustn't rely on this.
So if I want to study the effect of packet size on throughput, what do I have to do? Vary the buffer size in code?
No. Vary SO_RCVBUF at the receiver. This is the single biggest determinant of throughput, as it determines the maximum receive window.
What are the socket buffers which we set using SO_SNDBUF and SO_RCVBUF?
Send buffer size at the sender, and receive buffer size at the receiver. In the kernel.
Are they the same as the TCP window size?
See above.
Or are they different? Which parameter is more suitable to vary to increase throughput?
See above.
Also there are three socket buffer parameters: min, default and max. Which one should I vary for my experiment to get more relevance?
None of them. These are the system-wide parameters. Just play with SO_SNDBUF and SO_RCVBUF for the specific sockets in your application.
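For illustration, a minimal sketch of doing that per socket, in the same style as the earlier setsockopt helpers (the sizes are arbitrary; on Linux the values should be set before connect()/listen() so they can influence the window negotiated at handshake time, and the kernel doubles the value you pass, per socket(7)):
#include <sys/socket.h>
#include <system_error>
#include <cerrno>

// Set the per-socket send/receive buffer sizes. Call this before
// connect()/listen(), since SO_RCVBUF feeds into the receive window
// (and window scaling) negotiated during the handshake.
void setSocketBuffers(int sock, int sndBytes, int rcvBytes) {
    if (::setsockopt(sock, SOL_SOCKET, SO_SNDBUF, &sndBytes, sizeof sndBytes))
        throw std::system_error(errno, std::system_category(), "setsockopt(SO_SNDBUF)");
    if (::setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &rcvBytes, sizeof rcvBytes))
        throw std::system_error(errno, std::system_category(), "setsockopt(SO_RCVBUF)");
}

// Read back what the kernel actually allocated (on Linux, usually twice
// the requested size, to account for bookkeeping overhead).
int effectiveRcvBuf(int sock) {
    int value = 0;
    socklen_t len = sizeof value;
    ::getsockopt(sock, SOL_SOCKET, SO_RCVBUF, &value, &len);
    return value;
}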
TCP does not directly expose a way to control how packets are sent, since it is a stream protocol. But you can make the TCP stack send packets immediately by disabling the Nagle algorithm; that way all data you send is sent out right away instead of being buffered. Data will be split into packets of MTU size, roughly ~1400 bytes, depending on the link.
To answer (2): Disable nagling and invoke send with buffers of < 1400 bytes. Use Wireshark to make sure you got what you wanted.
The buffer settings have nothing to do with any of this. I know of no valid reason to touch them.
In general this question is probably moot since you seem to want to send a lot of data. Just leave Nagling enabled and send big buffers (such as 64KB).
I did some experiments on Windows 10 using:
code from https://docs.python.org/3/library/socketserver.html#asynchronous-mixins,
RawCap for loopback capture,
Wireshark for inspecting the result.
The primary client code is:
import socket

def client(ip, port, message):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 100000)
    sock.connect((ip, port))
    sock.sendall(bytes(message, 'ascii'))
    response = str(sock.recv(1024), 'ascii')
    print("Received: {}".format(response))
Here is the result (the server port is 11111):
As you can see, the TCP receive window size is the same as SO_RCVBUF. This may be platform independent; you can verify it on other platforms.
The documentation at https://msdn.microsoft.com/en-us/library/windows/hardware/ff570832(v=vs.85).aspx states:
The SO_RCVBUF socket option determines the size of a socket's receive buffer that is used by the underlying transport.
which confirms this.
Also, when I set SO_SNDBUF = 100000, it had no effect on the TCP transmission between client and server, since the server can simply discard data if the client sends too much at once.
So if you want to tune SO_RCVBUF for maximum throughput, you can refer to http://packetbomb.com/understanding-throughput-and-tcp-windows/; the OS may offer a function to detect the ideal send backlog (ISB).