How to simulate modem speed? - Fiddler

I am trying to configure Fiddler to simulate a download and upload speed of 2 Mbps.
How should I change CustomRules.js accordingly?
if (m_SimulateModem) {
    // Delay sends by 300ms per KB uploaded.
    oSession["request-trickle-delay"] = "300";
    // Delay receives by 150ms per KB downloaded.
    oSession["response-trickle-delay"] = "150";
}

2 Mbps is quite fast; I trust you understand that for this to work, the network in question must be faster than 2 Mbps down / 2 Mbps up?
The closest you can get to approximating this would be as follows:
if (m_SimulateModem) {
    oSession["request-trickle-delay"] = "2";
    oSession["response-trickle-delay"] = "2";
}
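For reference, the arithmetic behind these values (a hedged sketch, not an official Fiddler recipe): the trickle delays are milliseconds of delay per KB transferred, as the stock comments above state, so delay ≈ 1000 / (target speed in KB/s). Read as 2 megabytes/s (~2048 KB/s) the target delay is under 1 ms per KB; read as 2 megabits/s (~250 KB/s) it is about 4 ms per KB. The targetKBps helper below is hypothetical, purely for illustration:

if (m_SimulateModem) {
    // Hypothetical helper: express the target as KB/s and derive the per-KB delay.
    // For example, 2 megabit/s is roughly 250 KB/s -> 1000 / 250 = 4 ms per KB.
    var targetKBps = 250;
    var delayPerKB = Math.floor(1000 / targetKBps);
    oSession["request-trickle-delay"] = "" + delayPerKB;
    oSession["response-trickle-delay"] = "" + delayPerKB;
}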

Related

Akka Http: How to handle high throughput through a proxy?

I would like to handle 3,000 requests per second through a proxy.
The following code, without a proxy, works without problems and handles 3,000 requests per second with an average send-response time of ~50 ms per request:
Http(context.system)
  .singleRequest(request) // live, without proxies -> speed boost of over 100%
  .flatMap(_.toStrict(2.seconds)) // locally it might take longer than 2 seconds
  .flatMap { resp =>
    Unmarshal(resp.entity).to[String].map((resp.status, _))
  }
But now I would like to achieve the same through a proxy:
val httpsProxyTransport = ClientTransport.httpsProxy(InetSocketAddress.createUnresolved("proxyip", 8888))
val settings: ConnectionPoolSettings = ConnectionPoolSettings(context.system)
  .withTransport(httpsProxyTransport)
  .withMaxRetries(0)
  .withMaxConnectionBackoff(2.seconds)

Http(context.system)
  .singleRequest(request, settings = settings)
  .flatMap(_.toStrict(2.seconds))
  .flatMap { resp =>
    Unmarshal(resp.entity).to[String].map((resp.status, _))
  }
The problem is that the proxy version degrades:
1,000 requests/second -> average send-response time of ~50 ms per request, just like the solution without a proxy
1,500 requests/second -> average send-response time of ~80 ms per request
2,000 requests/second -> average send-response time of ~150 ms per request
The proxy version gets worse in terms of average response time the more I try to send through it. I doubt the problem lies at the proxy's end, as I have also tried using 3 different proxies at the same time to balance about 1,000 requests per proxy, but it showed the same increase in average response time.
Any ideas what could be limiting the HTTPS proxy requests under the hood?
(It feels as if the proxy version isn't using multiple cores in parallel.)
My relevant configuration:
host-connection-pool {
  max-connections = 650
  max-retries = 0
  max-open-requests = 1024
  idle-timeout = 5 s
}

dispatcher-io {
  type = Dispatcher
  executor = "thread-pool-executor"
  thread-pool-executor {
    fixed-pool-size = 350
    keep-alive-time = 60s
    allow-core-timeout = on
  }
  shutdown-timeout = 60s
  throughput = 1
}
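For reference, a hedged sketch (not from the question itself) of how such pool limits can also be applied programmatically via ConnectionPoolSettings, mirroring the host-connection-pool block above; the proxy address is the placeholder from the question:

import java.net.InetSocketAddress
import akka.actor.ActorSystem
import akka.http.scaladsl.{ClientTransport, Http}
import akka.http.scaladsl.settings.ConnectionPoolSettings

implicit val system: ActorSystem = ActorSystem("proxy-client")

// All connections made by this pool are tunneled through the proxy
// via HTTP CONNECT; ("proxyip", 8888) is the placeholder from above.
val proxyTransport = ClientTransport.httpsProxy(
  InetSocketAddress.createUnresolved("proxyip", 8888))

val poolSettings = ConnectionPoolSettings(system)
  .withTransport(proxyTransport)
  .withMaxConnections(650)   // host-connection-pool.max-connections
  .withMaxOpenRequests(1024) // host-connection-pool.max-open-requests
  .withMaxRetries(0)

// Http(system).singleRequest(request, settings = poolSettings)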

UDP server consuming high CPU

I am observing high CPU usage in my UDP server implementation, which runs an infinite loop expecting fifteen 1.5 KB packets every millisecond. It looks like this:
struct RecvContext
{
    enum { BufferSize = 1600 };

    RecvContext()
    {
        senderSockAddrLen = sizeof(sockaddr_storage);
        memset(&overlapped, 0, sizeof(OVERLAPPED));
        overlapped.hEvent = CreateEvent(NULL, FALSE, FALSE, NULL);
        memset(&sendersSockAddr, 0, sizeof(sockaddr_storage));
        buffer.clear();
        buffer.resize(BufferSize);
        wsabuf.buf = (char*)buffer.data();
        wsabuf.len = ULONG(buffer.size());
    }

    void CloseEventHandle()
    {
        if (overlapped.hEvent != INVALID_HANDLE_VALUE)
        {
            CloseHandle(overlapped.hEvent);
            overlapped.hEvent = INVALID_HANDLE_VALUE;
        }
    }

    OVERLAPPED overlapped;
    int senderSockAddrLen;
    sockaddr_storage sendersSockAddr;
    std::vector<uint8_t> buffer;
    WSABUF wsabuf;
};
void Receive()
{
    DWORD flags = 0, bytesRecv = 0;
    SOCKET sockHandle = ...;
    while (/* stopping condition */)
    {
        std::shared_ptr<RecvContext> _recvContext = std::make_shared<IO::RecvContext>();
        if (SOCKET_ERROR == WSARecvFrom(sockHandle, &_recvContext->wsabuf, 1, nullptr, &flags,
                                        (sockaddr*)&_recvContext->sendersSockAddr,
                                        (LPINT)&_recvContext->senderSockAddrLen,
                                        &_recvContext->overlapped, nullptr))
        {
            if (WSAGetLastError() != WSA_IO_PENDING)
            {
                // error
            }
            else
            {
                if (WSA_WAIT_FAILED == WSAWaitForMultipleEvents(1, &_recvContext->overlapped.hEvent, FALSE, INFINITE, FALSE))
                {
                    // error
                }
                if (!WSAGetOverlappedResult(sockHandle, &_recvContext->overlapped, &bytesRecv, FALSE, &flags))
                {
                    // error
                }
            }
        }
        _recvContext->CloseEventHandle();
        // async task to process _recvContext->buffer
    }
}
The CPU consumption of this UDP server is very high even when the packets are not processed after receipt. How can the CPU consumption be reduced here?
You've chosen about the most inefficient combination of mechanisms imaginable.
Why use overlapped I/O if you're only going to pend one operation and then wait for it to complete?
Why use an event, which is about the slowest notification scheme Windows has?
Why do you only pend one operation at a time? You're forcing the implementation to stash datagrams in its own buffers and then copy them into yours.
Why do you post the receive operation right before you're going to wait for it to complete rather than right after the previous one completes?
Why do you create a new receive context each time instead of re-using the existing buffer, event, and so on?
Use IOCP. Windows events are very slow and heavy.
Post lots of operations. You want the operating system to be able to put the datagram right in your buffer rather than having to allocate another buffer that it copies data into and out of.
Re-use your buffers and allocate all your receive buffers from a contiguous pool rather than fragmenting them throughout process memory. The memory used for your buffers has to be pinned and you want to minimize the amount of pinning needed.
Re-post operations as soon as they complete. Don't process them and then re-post. There's no reason to delay starting the operation. You can probably ignore this if you followed all the other suggestions because you wouldn't have a "spare" buffer to post anyway.
Alternatively, you can probably get away with having a thread that spins on a blocking receive operation. Just make sure your code has a loop that is as tight as possible, posting a different (already-allocated) buffer as soon as it returns after dispatching another thread to process the buffer it just filled with the receive operation.
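To make those suggestions concrete, here is a minimal sketch of the IOCP variant, under stated assumptions: WSAStartup, socket creation, binding, shutdown, and real error handling are elided, and the pool of 64 pending receives is an arbitrary choice:

#include <winsock2.h>
#include <windows.h>
#include <vector>

struct RecvOp
{
    OVERLAPPED overlapped{};
    WSABUF wsabuf{};
    sockaddr_storage from{};
    int fromLen = sizeof(sockaddr_storage);
    char buffer[1600];
};

void PostRecv(SOCKET s, RecvOp* op)
{
    DWORD flags = 0;
    op->fromLen = sizeof(op->from);
    op->wsabuf.buf = op->buffer;
    op->wsabuf.len = sizeof(op->buffer);
    ZeroMemory(&op->overlapped, sizeof(op->overlapped));
    if (WSARecvFrom(s, &op->wsabuf, 1, nullptr, &flags,
                    (sockaddr*)&op->from, &op->fromLen,
                    &op->overlapped, nullptr) == SOCKET_ERROR
        && WSAGetLastError() != WSA_IO_PENDING)
    {
        // error
    }
}

void ReceiveLoop(SOCKET s)
{
    // Create an IOCP and associate the socket with it in one call.
    HANDLE iocp = CreateIoCompletionPort((HANDLE)s, nullptr, 0, 0);
    // Pre-post many receives from one contiguous pool so the kernel can
    // deposit datagrams directly into our buffers instead of copying.
    std::vector<RecvOp> pool(64);
    for (auto& op : pool) PostRecv(s, &op);
    for (;;)
    {
        DWORD bytes = 0;
        ULONG_PTR key = 0;
        OVERLAPPED* ov = nullptr;
        if (!GetQueuedCompletionStatus(iocp, &bytes, &key, &ov, INFINITE))
        {
            if (!ov) break; // the port itself failed
            // otherwise the individual operation failed; fall through and re-post
        }
        RecvOp* op = CONTAINING_RECORD(ov, RecvOp, overlapped);
        // Hand off op->buffer[0..bytes) for processing here; copy it out
        // if processing is asynchronous, since the buffer is reused.
        PostRecv(s, op); // re-post immediately, reusing the same buffer
    }
}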

Flutter Dart Dio Get Request is so slow

I am hosting a Space on DigitalOcean - it is basically DigitalOcean's equivalent of Amazon S3. My problem with dio: I am making a GET request with dio for a 10 MB file. The request takes around 9 seconds on my phone but 3 seconds in my browser. I also had this issue with my custom backend. GET requests made with dio (which uses the http module of Dart) seem to be extremely slow. I need to solve this, as I have to transfer 50 MB of data to the user from time to time. Why is dio so slow on GET requests?
I suspect this might be the underlying cause; check here.
await Dio().get(
    "Remote_Url_I_can_not_share",
    onReceiveProgress: (int downloaded, int total) {
        listener
            .call((downloaded.toDouble() / total.toDouble() * metadataPerc));
    },
    cancelToken: _cancelToken,
).catchError((err) => throw err);
I believe the reason for this is that the buffer size is limited to 8 KB somewhere in the underlying implementation.
I tried the whole day to increase it, still with no success. Let me share my experience with that buffer size.
Imagine you're downloading a file of 16 MB.
Assume the remote server is faster than your download speed (that is, just forget about server load, etc.).
If the buffer size is:
128 bytes, downloading the 16 MB file takes 10.820 seconds
1024 bytes, downloading the 16 MB file takes 6.276 seconds
8192 bytes, downloading the 16 MB file takes 4.776 seconds
16384 bytes, downloading the 16 MB file takes 3.759 seconds
32768 bytes, downloading the 16 MB file takes 2.956 seconds
After this point, increasing the chunk size increases the download time again:
65536 bytes, downloading the 16 MB file takes 4.186 seconds
131072 bytes, downloading the 16 MB file takes 5.250 seconds
524288 bytes, downloading the 16 MB file takes 7.460 seconds
So somehow, if you can set that buffer size to 16 KB or 32 KB rather than 8 KB, I believe the download speed will increase.
Please feel free to test your own results (I did 3 runs and averaged them for the timings above):
package dltest;

import java.io.InputStream;
import java.net.URL;
import java.net.URLConnection;

public class DLTest
{
    public static void main(String[] args) throws Exception
    {
        String filePath = "http://hcmaslov.d-real.sci-nnov.ru/public/mp3/Metallica/Metallica%20'...And%20Justice%20For%20All'.mp3";
        URL url = new URL(filePath);
        URLConnection uc = url.openConnection();
        InputStream is = uc.getInputStream();
        long start = System.currentTimeMillis();
        int total = uc.getContentLength();
        int partialRead = 0;
        // byte chunk[] = new byte[128];
        // byte chunk[] = new byte[1024];
        // byte chunk[] = new byte[4096];
        // byte chunk[] = new byte[8192];
        byte chunk[] = new byte[16384];
        // byte chunk[] = new byte[32768];
        // byte chunk[] = new byte[524288];
        while ((partialRead = is.read(chunk)) != -1)
        {
            // Print if you like..
        }
        is.close();
        long end = System.currentTimeMillis();
        System.out.println("Chunk Size [" + chunk.length + "] Time To Complete : " + (end - start));
    }
}
My experience with DigitalOcean Spaces has been very fluctuating. DO Spaces is, in my opinion, not production-ready. I was using their CDN feature for a website, and sometimes the response times would be about 20 ms, but sometimes they would exceed 6 seconds. This was in the AMS3 datacenter region.
Can you confirm this happens with other S3-compatible servers as well, such as gstatic or the Amazon CloudFront CDN?
This fluctuating behaviour happened constantly, which is why we transferred all our assets to Amazon S3 + CloudFront. It provides much more consistent results.
It could also be that the phone you are testing on takes a very unoptimized route to the DigitalOcean datacenters, which is why you should try different servers.

PubNub to server data transfer

I am building an IoT application. I am using PubNub to communicate between the hardware and the user.
Now I need to store all the messages and data coming from the hardware and from the user in a central server. We want to do a bit of machine learning.
Is there a way to do this other than having the server subscribe to all the output channels (There will be a LOT of them)?
I was hoping for some kind of once-a-day data dump involving the storage and playback module in PubNub
Thanks in advance
PubNub to Server Data Transfer
Yes, you can perform once-a-day data dumps using the Storage and Playback feature.
But first check this out! You can subscribe to Wildcard Channels like a.* and a.b.* to receive all messages in the hierarchy below. That way you can receive messages on all channels if you prefix each channel with a root channel like: root.chan_1 and root.chan_2. Now you can subscribe to root.* and receive all messages in the root.
To enable once-a-day data dumps using Storage and Playback, first enable Storage and Playback on your account. PubNub will store all your messages on disk across multiple data centers for reliability and a read-latency performance boost. Lastly, you can use the History API on your server to fetch all stored data, going back as far as you need, as long as you know which channels to fetch.
Here is a JavaScript function that will fetch all messages from a channel.
Get All Messages Usage
get_all_history({
    limit    : 1000,
    channel  : "my_channel_here",
    error    : function(e) { },
    callback : function(messages) {
        console.log(messages);
    }
});
Get All Messages Code
function get_all_history(args) {
    var channel  = args['channel']
      , callback = args['callback']
      , error    = args['error'] || function() {}
      , limit    = +args['limit'] || 5000
      , start    = 0
      , count    = 100
      , history  = []
      , params   = {
            channel  : channel,
            count    : count,
            callback : function(messages) {
                var msgs = messages[0];
                start = messages[1];
                params.start = start;
                PUBNUB.each(msgs.reverse(), function(m) { history.push(m) });
                callback(history);
                if (history.length >= limit) return;
                if (msgs.length < count) return;
                add_messages();
            },
            error : error
        };

    add_messages();

    function add_messages() { PUBNUB.history(params) }
}

Perform long-polling from nodejs (possible memory leak)

I wrote a piece of code that performs a request to Facebook.
I wrapped this code in an infinite loop which sends those requests every 10 seconds using timeouts.
Code:
var poll = function(socket, userProvider) {
    var lastCallTime = new Date();
    var polling = true;

    // The stream itself, non-blocking
    function performPoll() {
        feed(function (err, data) {
            lastCallTime = new Date();
            // PROCESS DATA
            // Check new posts
            if (polling) {
                setTimeout(performPoll, 1000 * 10);
            }
        });
    }

    // Start the polling loop
    performPoll();
};
feed(cb) just makes a request to Facebook for data. This works 100% and does what I want it to do; the only problem I am having now is that this piece of code keeps increasing my memory usage. After a few minutes it had already increased by 50 MB (from 50 to 100).
Can anybody help me identify the cause of this?
V8 does not collect memory immediately. If it stabilizes at 100 MB, then this is to be expected. For more information, check out "nodejs setTimeout memory leak?".
If you really, really want to clear the memory, use global.gc(). Read this blog about how to call the garbage collector manually.
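As a hedged illustration of that second point: Node only exposes global.gc() when started with the --expose-gc flag, so you can force a collection periodically and watch the heap to confirm the growth is deferred collection rather than a real leak:

// Run with: node --expose-gc app.js
setInterval(function () {
    if (global.gc) {
        global.gc(); // force a full collection
        console.log('heapUsed after gc:', process.memoryUsage().heapUsed);
    }
}, 30 * 1000);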