How can I expand buffer space or slow down gsutil upload speed? - google-cloud-storage

I'm trying to upload some files to Google Cloud Storage.
There's no problem when the upload speed is about 2 MiB/s.
But when the upload speed reaches about 9–10 MiB/s, gsutil stops with this error: "[Errno 55] No buffer space available".
So I'm looking for a way to either expand the buffer space or slow down the upload speed.
How can I do that?
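A couple of knobs that seem worth trying, assuming the error comes from the OS socket buffer (errno 55 is ENOBUFS on macOS/BSD) and that reducing gsutil's parallelism keeps the transfer under that limit. The sysctl value, file paths, and bucket name below are only placeholders:

    # Raise the kernel's socket-buffer ceiling (macOS/BSD sysctl; the value
    # shown is just an example)
    sudo sysctl -w kern.ipc.maxsockbuf=8388608

    # Make a single gsutil invocation less aggressive: disable parallel
    # composite uploads and run one upload thread/process
    gsutil \
      -o "GSUtil:parallel_composite_upload_threshold=0" \
      -o "GSUtil:parallel_thread_count=1" \
      -o "GSUtil:parallel_process_count=1" \
      cp ./myfiles/* gs://my-bucket/

On Linux, a userspace shaper such as trickle (for example, trickle -u 1024 gsutil cp ...) may also cap the upload rate, though it does not work in every environment.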

Related

Used space displayed for my Google disk is 4 times more than total size of the stored files

As the notification says, I have used almost all of my available Google Drive space (96%), while the total size of my files is only 3.5 GB. An additional 1 GB of deleted files is sitting in the bin. What is the reason, and how can I fix it? I also have a lot of files shared with me from other accounts, but according to the Google Drive documentation those should not count against my quota. Additionally, I have 0.8 GB in Gmail and no files in Google Photos.
Go to this link and check which files are consuming the most storage.
Delete the files in your bin, as they still count towards your quota.
After you delete the files, it usually takes some time for the free space on your Drive to update; it's a propagation matter.
Make sure you don't have a lot of photos taking up quota in your account.

Google storage operations extremely slow when using Customer Managed Encryption Key

We're planning on switching from Google-managed keys to our own keys (we work with sensitive medical data) but are struggling with the performance degradation when we turn on CMEK. Our application moves many big files (5–200 GB) around storage, both with the Java Storage API and with gsutil. The former stops working even on 2 GB files (it times out, and when the timeout is raised it silently fails to copy the files), and the latter just takes about 100x longer.
Any insights into this behaviour?
When using CMEK, you are actually adding a layer of encryption on top of Google-managed encryption keys rather than replacing them. As for gsutil, if your moving process involves the objects' hashes, gsutil performs an additional operation per object, which might explain why moving the big files takes so much longer than usual.
As a workaround, you may instead use resumable uploads. This type of upload works best with large files, since it can send a file in multiple chunks and lets you resume the operation even if the flow of data is interrupted.
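As a rough illustration of a chunked, resumable upload, here is a minimal sketch using the Go client library (cloud.google.com/go/storage) rather than the Java Storage API or gsutil mentioned in the question; the bucket, object, and file names are placeholders. When ChunkSize is greater than zero, the writer uses the resumable upload protocol and sends the object in chunk-sized pieces, retrying a failed chunk instead of restarting the whole transfer.

    package main

    import (
        "context"
        "io"
        "log"
        "os"

        "cloud.google.com/go/storage"
    )

    func main() {
        ctx := context.Background()
        client, err := storage.NewClient(ctx)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        f, err := os.Open("big-file.bin") // placeholder path
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        // With ChunkSize > 0 the writer performs a resumable upload,
        // sending the object in 16 MiB requests.
        w := client.Bucket("my-bucket").Object("big-file.bin").NewWriter(ctx)
        w.ChunkSize = 16 * 1024 * 1024
        // w.KMSKeyName = "projects/.../cryptoKeys/..." // only if the CMEK is not already the bucket default

        if _, err := io.Copy(w, f); err != nil {
            log.Fatal(err)
        }
        // Close flushes the final chunk and surfaces any upload error.
        if err := w.Close(); err != nil {
            log.Fatal(err)
        }
    }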

S3 uploading high disk I/O and CPU usage

I faced high CPU and I/O usage when I tried to upload 100 GB of small files (PNG images) to an S3 bucket via a very simple Go S3 uploader.
Is there any way to limit bandwidth (e.g. via the aws-sdk-go config), or something else I can do to make the upload process less intensive, in order to reduce CPU and I/O usage?
I've tried nice/ionice, but it doesn't actually help.
Have you tried S3Manager, https://docs.aws.amazon.com/sdk-for-go/api/service/s3/s3manager/? From the docs:
Package s3manager provides utilities to upload and download objects from S3 concurrently. Helpful for when working with large objects.
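To make that suggestion concrete, here is a small sketch using aws-sdk-go's s3manager, with Concurrency lowered and PartSize raised so fewer requests and buffers are in flight at once. The region, bucket, key, file path, and the specific numbers are placeholders to tune, not recommendations.

    package main

    import (
        "log"
        "os"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/s3/s3manager"
    )

    func main() {
        sess := session.Must(session.NewSession(&aws.Config{
            Region: aws.String("us-east-1"), // placeholder region
        }))

        // Fewer concurrent part uploads and larger parts mean fewer
        // simultaneous requests competing for CPU and disk I/O.
        uploader := s3manager.NewUploader(sess, func(u *s3manager.Uploader) {
            u.Concurrency = 2             // default is 5
            u.PartSize = 16 * 1024 * 1024 // 16 MiB parts (default is 5 MiB)
        })

        f, err := os.Open("image.png") // placeholder path
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        _, err = uploader.Upload(&s3manager.UploadInput{
            Bucket: aws.String("my-bucket"),
            Key:    aws.String("images/image.png"),
            Body:   f,
        })
        if err != nil {
            log.Fatal(err)
        }
        // For many small files, also bound how many Upload calls run at once
        // (e.g. a fixed-size worker pool) instead of firing them all in parallel.
    }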

ruby google-api-client storage slow upload speed

I am consuming the ruby google-api-client v0.9.pre1, which I have recently upgraded from v0.7.1.
I have been aware that the upload time of my files from my Rails server using the Ruby library was slow. However, I was uploading file by file instead of batching, and I assumed that added some time. When I upgraded to 0.9.pre1 I refactored to the batch_upload APIs, and I still have very slow upload times.
The last several attempts have come out to about 0.23 MB/s upload; it is taking 12–13 seconds to upload 2–3 MB. My server is hosted on Google Compute Engine and has access to my Google Storage bucket.
Can anyone give me an idea why it is so slow to upload files from a server within Google's hosting to Google Storage? Both AWS and Rackspace blow Google out of the water on storage upload speeds. I can't help but think I'm missing something. If not, I may head back in those directions.
Anyone getting better speeds?
Any help or ideas?

Ideal Chunk Size for Writing Streamed Content to Disk on iPhone

I am writing an app that caches streaming content from the web on the iPhone. Right now, I'm saving data to disk as it arrives (in chunk sizes ranging from 1KB to about 60KB), but application response is somewhat sluggish (better than I was expecting, but still pretty bad).
My question is: does anyone have a rule of thumb for how frequent and large writes to the device memory should be to maximize performance?
I realize this seems application-specific, and I intend to do performance tuning for my scenario, but this applies generally to any app on the iPhone downloading a lot of data because there is probably a sweet spot (given sufficient incoming data availability) for write frequency/size.
These are the resources I've already read on the issue, but none of them addresses the specific question of how much data to accumulate before flushing:
Best way to download large files from web to iPhone for writing to disk
The Joy in Discovering You are an Idiot
One year later, I finally got around to writing a test harness to test chunking performance of streaming downloads.
Here's the set-up: Use an iPhone 4 to download a large file over a Wi-Fi connection* with an asynchronous NSURLConnection. Periodically flush downloaded data to disk (atomically), whenever the amount of data downloaded exceeds a threshold.
And the results: it doesn't make a difference. The performance difference between 32 kB and 512 kB chunks (and several sizes in between) is smaller than the variance between runs with the same chunk size. The file download time, as expected, consists almost entirely of time spent waiting on the network.
*Average throughput was approximately 8Mbps.
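For anyone wanting to reproduce the idea, below is a minimal, platform-neutral sketch of the accumulate-then-flush pattern described above. It is written in Go to match the other examples here rather than the NSURLConnection/iOS code the original harness used; the URL, file name, and the 512 KiB threshold are arbitrary, and it appends on each flush whereas the original harness rewrote the file atomically.

    package main

    import (
        "bytes"
        "io"
        "log"
        "net/http"
        "os"
    )

    const flushThreshold = 512 * 1024 // flush once 512 KiB has accumulated in memory

    func main() {
        resp, err := http.Get("https://example.com/large-file") // placeholder URL
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()

        out, err := os.Create("large-file") // placeholder destination
        if err != nil {
            log.Fatal(err)
        }
        defer out.Close()

        var buf bytes.Buffer
        chunk := make([]byte, 32*1024) // read in ~32 KiB network-sized pieces
        for {
            n, readErr := resp.Body.Read(chunk)
            buf.Write(chunk[:n])

            // Flush whenever the in-memory buffer crosses the threshold,
            // or when the download has finished.
            if buf.Len() >= flushThreshold || readErr == io.EOF {
                if _, err := buf.WriteTo(out); err != nil {
                    log.Fatal(err)
                }
            }
            if readErr == io.EOF {
                break
            }
            if readErr != nil {
                log.Fatal(readErr)
            }
        }
    }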