Flutter Dart Dio GET request is so slow - flutter

I am hosting a Space on DigitalOcean - it is basically DigitalOcean's equivalent of Amazon S3. My problem with dio is: I am making a GET request with dio for a file of 10 MB size. The request takes around 9 seconds on my phone but 3 seconds in my browser. I also had this issue with my custom backend. GET requests made with dio (which uses the http module of Dart) seem to be extremely slow. I need to solve this issue as I need to transfer 50 MB of data to the user from time to time. Why is dio so slow on GET requests?
I suspect this might be the underlying cause; check the code here:
await Dio().get(
  "Remote_Url_I_can_not_share",
  onReceiveProgress: (int downloaded, int total) {
    // Report download progress, scaled by metadataPerc.
    listener.call(downloaded.toDouble() / total.toDouble() * metadataPerc);
  },
  cancelToken: _cancelToken,
).catchError((err) => throw err);

I believe the reason for this is that the buffer size is limited to 8 KB somewhere in the underlying implementation.
I spent the whole day trying to increase it, still with no success. Let me share my experience with that buffer size.
Imagine you're downloading a file which is 16 MB.
Assume the remote server's bandwidth is higher than your download speed (in other words, forget about server load, etc.).
If the buffer size is:
128 bytes, downloading the 16 MB file takes: 10.820 seconds
1024 bytes, downloading the 16 MB file takes: 6.276 seconds
8192 bytes, downloading the 16 MB file takes: 4.776 seconds
16384 bytes, downloading the 16 MB file takes: 3.759 seconds
32768 bytes, downloading the 16 MB file takes: 2.956 seconds
------- Beyond this point, increasing the chunk size makes the download time increase again
65536 bytes, downloading the 16 MB file takes: 4.186 seconds
131072 bytes, downloading the 16 MB file takes: 5.250 seconds
524288 bytes, downloading the 16 MB file takes: 7.460 seconds
So if you can somehow set that buffer size to 16 KB or 32 KB rather than 8 KB, I believe the download speed will increase.
Please feel free to run your own tests (each timing above is the average of 3 runs). Here is the Java program I used:
package dltest;

import java.io.InputStream;
import java.net.URL;
import java.net.URLConnection;

public class DLTest
{
    public static void main(String[] args) throws Exception
    {
        String filePath = "http://hcmaslov.d-real.sci-nnov.ru/public/mp3/Metallica/Metallica%20'...And%20Justice%20For%20All'.mp3";
        URL url = new URL(filePath);
        URLConnection uc = url.openConnection();
        InputStream is = uc.getInputStream();

        long start = System.currentTimeMillis();
        int downloaded = 0;
        int total = uc.getContentLength();
        int partialRead;

        // Uncomment one chunk size at a time to measure its effect:
        // byte chunk[] = new byte[128];
        // byte chunk[] = new byte[1024];
        // byte chunk[] = new byte[4096];
        // byte chunk[] = new byte[8192];
        byte chunk[] = new byte[16384];
        // byte chunk[] = new byte[32768];
        // byte chunk[] = new byte[524288];

        // Read the whole stream, discarding the data; we only care about timing.
        while ((partialRead = is.read(chunk)) != -1)
        {
            downloaded += partialRead;
            // Print progress here if you like (downloaded / total).
        }
        is.close();

        long end = System.currentTimeMillis();
        System.out.println("Chunk Size [" + chunk.length + "] Time To Complete : " + (end - start) + " ms");
    }
}
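Since I never managed to change the library's internal 8 KB buffer, a possible workaround is to split the transfer into several parallel HTTP Range requests, so that each connection's small buffer matters less. Below is a minimal Java sketch of that idea; it is not code from the question, it assumes the server supports Range requests, and the URL and total size are placeholders (in practice you would take the size from a HEAD request's Content-Length):
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class RangeDownload
{
    // Fetch bytes [from, to] (inclusive) of the given URL.
    static byte[] downloadRange(String fileUrl, long from, long to) throws IOException
    {
        HttpURLConnection conn = (HttpURLConnection) new URL(fileUrl).openConnection();
        conn.setRequestProperty("Range", "bytes=" + from + "-" + to);
        try (InputStream in = conn.getInputStream())
        {
            return in.readAllBytes(); // Java 9+
        }
        finally
        {
            conn.disconnect();
        }
    }

    public static void main(String[] args) throws Exception
    {
        String fileUrl = "https://example.com/large-file"; // placeholder URL
        long totalSize = 16L * 1024 * 1024;                // placeholder; normally from Content-Length
        int parts = 4;
        long partSize = (totalSize + parts - 1) / parts;

        ExecutorService pool = Executors.newFixedThreadPool(parts);
        List<Future<byte[]>> futures = new ArrayList<>();
        for (int i = 0; i < parts; i++)
        {
            long from = i * partSize;
            long to = Math.min(from + partSize, totalSize) - 1;
            futures.add(pool.submit(() -> downloadRange(fileUrl, from, to)));
        }

        // Collect the parts in order; append them to a file or buffer here.
        for (Future<byte[]> part : futures)
        {
            byte[] data = part.get();
            System.out.println("got " + data.length + " bytes");
        }
        pool.shutdown();
    }
}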

My experience with DigitalOcean Spaces has been very inconsistent. DO Spaces is, in my opinion, not production ready. I was using their CDN feature for a website, and sometimes the response times would be about 20 ms, but sometimes they would exceed 6 seconds. This was in the AMS3 datacenter region.
Can you confirm this happens with other S3-compatible services/servers as well, such as gstatic or the Amazon CloudFront CDN?
This fluctuating behaviour happened constantly, which is why we transferred all our assets to Amazon S3 + CloudFront. It provides much more consistent results.
It could also be that the phone you are testing on takes a very unoptimized route to the DigitalOcean datacenters. That's why you should try different servers.

Related

Transferring large data (>100 MB) over Mirror in Unity

string SerializedFileString contains a serialized file, potentially hundreds of MB in size. The server tries to copy it into the client's local string ClientSideSerializedFileString. Too good to be true, which is why it throws an exception. Is there a Mirror-way to do this?
[TargetRpc]
private void TargetSendFile(NetworkConnection target, string SerializedFileString)
{
    if (!hasAuthority) { return; }
    ClientSideSerializedFileString = SerializedFileString;
}
ArgumentException: The output byte buffer is too small to contain the encoded data, encoding 'Unicode (UTF-8)' fallback 'System.Text.EncoderExceptionFallback'.

Compact Framework - Upload file via REST

I am looking for the best way to transfer files from the Compact Framework to a server via REST. I have a web service I created using .NET Web API. I've looked at several SO questions and other sites that dealt with sending files, but none of them seem to work for what I need.
I am trying to send media files from WM 6 and 6.5 devices to my REST service. While most of the files are less than 300k, an odd few may be 2-10 or so megabytes. Does anyone have some snippets I could use to make this work?
Thanks!
I think this is the minimum for sending a file:
using (var fileStream = File.Open(@"\file.txt", FileMode.Open, FileAccess.Read, FileShare.Read))
{
    HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create("http://www.destination.com/path");
    request.Method = "POST"; // or PUT, depending on what the server expects
    request.ContentLength = fileStream.Length; // see the note below

    // Copy the file into the request body in small chunks.
    using (var requestStream = request.GetRequestStream())
    {
        int bytes;
        byte[] buffer = new byte[1024]; // any reasonable buffer size will do
        while ((bytes = fileStream.Read(buffer, 0, buffer.Length)) > 0)
        {
            requestStream.Write(buffer, 0, bytes);
        }
    }

    try
    {
        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        {
            // success; inspect the response here if needed
        }
    }
    catch (WebException ex)
    {
        // failure
    }
}
Note: HTTP needs a way to know when you're "done" sending data. There are three ways to achieve this (a chunked-upload sketch follows after this list):
Set request.ContentLength, as used in the example, because we know the size of the file before sending anything.
Set request.SendChunked, to send chunks of data including their individual sizes.
You could also set request.AllowWriteStreamBuffering to write to an in-memory buffer, but I wouldn't recommend wasting that much memory on the Compact Framework.
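For comparison, here is a minimal sketch of the chunked option in Java rather than Compact Framework C# (the URL and file path are placeholders). Java's HttpURLConnection.setChunkedStreamingMode plays the same role as request.SendChunked: the body is sent in sized chunks, so the total length never has to be announced up front.
import java.io.FileInputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class ChunkedUpload
{
    public static void main(String[] args) throws Exception
    {
        URL url = new URL("http://www.destination.com/path"); // placeholder URL
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        // Send the body with chunked transfer encoding (8 KB chunks),
        // so the total length never has to be known in advance.
        conn.setChunkedStreamingMode(8192);

        try (InputStream in = new FileInputStream("file.txt"); // placeholder path
             OutputStream out = conn.getOutputStream())
        {
            byte[] buffer = new byte[8192];
            int bytes;
            while ((bytes = in.read(buffer)) > 0)
            {
                out.write(buffer, 0, bytes);
            }
        }

        // Reading the response code completes the exchange.
        System.out.println("HTTP " + conn.getResponseCode());
        conn.disconnect();
    }
}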

How to simulate modem speed?

I am trying to configure Fiddler for a download and upload speed of 2 MBPS.
How should I change CustomRules.js accordingly?
if (m_SimulateModem) {
    // Delay sends by 300ms per KB uploaded.
    oSession["request-trickle-delay"] = "300";
    // Delay receives by 150ms per KB downloaded.
    oSession["response-trickle-delay"] = "150";
}
2 mb/sec is quite fast; I trust you understand that for this to work, the network in question must be faster than 2 mb down / 2 mb up?
The trickle-delay values are milliseconds of delay per KB transferred, so a delay of N ms per KB works out to roughly 1000/N KB per second (a value of 2 gives about 500 KB/s). The closest you can get to approximating this would be as follows:
if (m_SimulateModem) {
    oSession["request-trickle-delay"] = "2";
    oSession["response-trickle-delay"] = "2";
}

Perform long-polling from nodejs (possible memory leak)

I wrote a piece of code that performs a request to Facebook.
I then wrapped this code in an infinite loop which sends those requests every 10 seconds using timeouts.
Code:
var poll = function(socket, userProvider) {
    var lastCallTime = new Date();
    var polling = true;

    // The stream itself, non blocking
    function performPoll() {
        var results = feed(function (err, data) {
            lastCallTime = new Date();
            // PROCESS DATA
            // Check new posts
            if (polling) {
                setTimeout(performPoll, 1000 * 10);
            }
        });
    };

    // Start infinite loop
    performPoll();
};
feed(cb) just makes a request to Facebook for data. This works 100% and does what I want it to do; the only problem I am having now is that this piece of code keeps increasing my memory usage. After a few minutes it had already increased by 50 MB (from 50 to 100).
Can anybody help me identify the cause of this?
V8 does not collect memory immediately. If usage stabilizes at 100 MB, then this is to be expected. For more information, check out "nodejs setTimeout memory leak?".
If you really, really want to free the memory, use global.gc(). Read this blog about how to call the garbage collector manually.

Spymemcached - Memcached/Membase Failover

Platform: 64 Bit windows OS, spymemcached-2.7.3.jar, J2EE
We want to use two memcached/Membase servers for our caching solution. We want to allocate 1 GB of memory to each server, so in total we can cache 2 GB of data.
We are using the spymemcached Java client for setting and getting data from memcached. We are not using any replication between the two Membase servers.
We load the MemcachedClient object at the time of our J2EE application's startup:
URI server1 = new URI("http://192.168.100.111:8091/pools");
URI server2 = new URI("http://127.0.0.1:8091/pools");
ArrayList<URI> serverList = new ArrayList<URI>();
serverList.add(server1);
serverList.add(server2);
client = new MemcachedClient(serverList, "default", "");
After that, we use the client to get and set values in the memcached/Membase servers:
Object obj = client.get("spoon");
client.set("spoon", 50, "Hello World!");
It looks like the client is setting and getting values only from server1.
If we stop server1, it fails to get/set values. Shouldn't it use server2 when server1 is down? Please let me know if we are doing anything wrong here...
The spymemcached Java client does not handle Membase failover for a particular node.
Ref: https://blog.serverdensity.com/handling-memcached-failover/
We need to handle it manually (in our own code).
We can do this by using a ConnectionObserver.
Here is my code:
import java.net.SocketAddress;
import java.net.URI;
import java.util.ArrayList;

import net.spy.memcached.ConnectionObserver;
import net.spy.memcached.MemcachedClient;
import net.spy.memcached.MemcachedNode;

public class FailoverExample {
    public static void main(String[] a) throws InterruptedException {
        try {
            URI server1 = new URI("http://192.168.100.111:8091/pools");
            URI server2 = new URI("http://127.0.0.1:8091/pools");
            final ArrayList<URI> serverList = new ArrayList<URI>();
            serverList.add(server1);
            serverList.add(server2);
            final MemcachedClient client = new MemcachedClient(serverList, "bucketName", "");
            client.addObserver(new ConnectionObserver() {
                @Override
                public void connectionLost(SocketAddress arg0) {
                    // Called when a connection is lost.
                    for (MemcachedNode node : client.getNodeLocator().getAll()) {
                        if (!node.isActive()) {
                            client.shutdown();
                            // Re-init your client here; after re-init it will
                            // connect to your secondary node.
                            break;
                        }
                    }
                }

                @Override
                public void connectionEstablished(SocketAddress arg0, int arg1) {
                    // Called when a connection is established.
                }
            });
            Object obj = client.get("spoon");
            client.set("spoon", 50, "Hello World!");
        } catch (Exception e) {
            // handle/log the error
        }
    }
}
client.get() would use the first available node for a key, and therefore your value would be stored/updated on one node only.
You seem to be a bit contradictory in your requirements: first you say "we want to allocate 1 GB memory to each memcache/membase server so in total we can cache 2 GB data", which implies a distributed cache model (a particular key is stored on one node in the cache farm), and then you expect to fetch it if that node is down, which obviously won't happen.
If you need your cache farm to survive a node failure without losing the data cached on that node, you should use replication, which is available in Membase. But you would obviously pay the price of storing the same values multiple times, so your desired "1 GB per server... total 2 GB of cache" won't be possible.
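To make the distributed model above concrete, here is a tiny illustrative sketch. It is not spymemcached's actual node locator (which uses consistent hashing); this naive modulo version just shows why each key lives on exactly one node and becomes unreachable when that node goes down. The node addresses are the ones from the question.
import java.util.Arrays;
import java.util.List;

public class NaiveKeyDistribution
{
    // Each key maps to exactly one node, so without replication the
    // cached value becomes unreachable when that node goes down.
    static String nodeFor(String key, List<String> nodes)
    {
        return nodes.get(Math.abs(key.hashCode() % nodes.size()));
    }

    public static void main(String[] args)
    {
        List<String> nodes = Arrays.asList("192.168.100.111", "127.0.0.1");
        for (String key : Arrays.asList("spoon", "fork", "knife"))
        {
            System.out.println("'" + key + "' is stored on " + nodeFor(key, nodes));
        }
    }
}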