I need to upload data from a public source to one of my Cloud Storage buckets. Currently, I download the data to my machine and then upload it to GCS. Because these are huge data sources (60GB in all, this week), I've started running into problems doing it this way.
Is there a way to do this programmatically, straight into GCS, without the whole local download step?
UPDATE: I have tried using curl http://originaladdress | gsutil cp - gs://bucket. The problem is that it would take 21 hours to do the whole process in 100 MB chunks, which is longer than downloading and re-uploading the files takes me. Is that right? Did I miss some parameter?
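One alternative I'm considering is running the same pipeline from a Compute Engine VM in the bucket's region, so the data never crosses my local connection; a rough sketch (the instance name and zone are examples, and the VM must already exist):

# run the streaming copy on the VM instead of on my machine
gcloud compute ssh transfer-vm --zone us-central1-a \
  --command 'curl -L http://originaladdress | gsutil cp - gs://bucket'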
Related
I was hoping to use s3fs to upload new files to S3. In the documentation I saw that it doesn't work well when there are multiple clients uploading/syncing to the same bucket.
I really don't care about syncing files from the bucket to my local drive; I only want the opposite: to upload new files to S3 as they are created.
Is there a way to achieve that with s3fs? It wasn't clear from the docs whether they offer that functionality via flags.
s3fs does not synchronize files. Instead it intercepts the open, read, write, etc. calls and relays them to the S3 server. Thus it will work for your upload-only use case. Note that s3fs does use some temporary storage to stage the upload.
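For example, a minimal session might look like this (the bucket name, mount point, and credentials file are examples):

# mount the bucket; the passwd file holds the key in s3fs's ACCESS_KEY:SECRET_KEY format
s3fs my-bucket /mnt/s3 -o passwd_file=${HOME}/.passwd-s3fs
# any file written under the mount point is relayed to S3 as an upload
cp new-report.csv /mnt/s3/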
As stated, I'm trying to download this dataset of zip files containing images: https://data.broadinstitute.org/bbbc/BBBC006/ and store them in an S3 bucket so I can later unzip them in the bucket, reorganize them, and pull them in smaller chunks into a VM for some computation. The problem is, I don't know how to get the data from https://data.broadinstitute.org/bbbc/BBBC006/BBBC006_v1_images_z_00.zip (for example), or any of the others, and send it to S3.
This is my first time using AWS, or really any cloud platform, so please bear with me :]
Amazon EC2 provides a virtual computer just like a normal Linux or Windows computer.
Amazon S3 is an object storage service where you can upload/download files.
If you wish to copy files from a website to Amazon S3, you will need to write an application or script that will:
Download the files from the website
Upload them to Amazon S3
If you wish to do it from a script, you could use the AWS Command-Line Interface (CLI).
Or, you could do it from a programming language, see: SDKs and Programming Toolkits for AWS
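For example, with the AWS CLI you can even stream each file straight from the website into S3 without storing it locally; a minimal sketch (the bucket name is an example, and the CLI must be configured with credentials):

# stream one of the zips from the website directly into an S3 object
curl -sL https://data.broadinstitute.org/bbbc/BBBC006/BBBC006_v1_images_z_00.zip \
  | aws s3 cp - s3://my-bucket/BBBC006/BBBC006_v1_images_z_00.zip

The same command works for each of the other zip URLs in the dataset.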
I'm building a customer management system using Rails that requires CSV files containing customer information to be imported into/diffed with a Postgres database. I'm hosting the application on Heroku. I moved the import processing to the background with Sidekiq, but I need advice on where to upload the file in the first place. Is hosting the file on S3 really the best solution, or is there a simpler approach that doesn't involve a third-party storage service? The application will be used daily by up to 10 employees, and the largest CSV file being uploaded is around 100,000 rows.
Thanks.
Yes, I do think S3 is the best solution.
We faced the same problem at Storemapper (we use Resque instead of Sidekiq, but that's not a problem). The limiting factor here is the Heroku request timeout: you only have 30s to finish your upload to Heroku, which puts a hard limit on how big your CSV can be. This is where S3 comes in. Basically, what we do is:
The user uploads the CSV directly to S3 via JavaScript, bypassing our app server on Heroku.
Once the upload completes, the JavaScript makes a request to the app server, which launches a background worker and tells it where the file is on S3.
The worker downloads the CSV from S3, then processes it as necessary.
I found the carrierwave_direct gem very helpful for steps 1 and 2. For step 3, I use the smarter_csv gem. Check out our complete story here:
https://tylertringas.com/very-large-csv-import-in-rails-on-heroku/
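As a very rough sketch of step 3 (the bucket, key, and importer class are hypothetical stand-ins; in our app this runs inside a Resque job):

# fetch the uploaded CSV from S3, then hand it to the import code
aws s3 cp s3://my-app-uploads/customers.csv /tmp/customers.csv
rails runner 'CustomerImport.new("/tmp/customers.csv").run'   # hypothetical importer class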
We've been using gsutil -m rsync -r to keep dev and deploy boxes in sync with a GCS bucket for nearly 2 years without any problem. There are about 85k objects in the bucket.
Until recently, this worked perfectly: we'd run a deploy-box -> GCS rsync every 15 minutes or so, to keep all newly uploaded resources backed up, and then a GCS -> dev-box rsync whenever we wanted to refresh the local dev data (running on OS X El Capitan).
Within the last couple of months, though, the GCS->dev rsync has started to bloat, downloading more and more images.
Initially I just thought "great, we're getting more resources uploaded", but it's been growing way faster than the data, until today, when it seems to be downloading all 85k images.
I've double-checked that I'm in the right place, the command is correct, the paths are correct, etc. Yet even as the gsutil output scrolls by with reams of "Copying..." and "Downloading..." messages, making good parallel use of our 100 Mbps connection, running find . -type f | wc -l on the destination directory in another terminal every 10 seconds shows that barely 2 or 3 new files are being added per minute. Looking at the modification times of files gsutil says it's downloading right now, the large majority are old; plenty haven't changed in a year or more. Meaning: it's re-downloading all the data, using tons of time and bandwidth, all for the sake of a few hundred files.
Has something changed in recent OSX gsutil versions? Is there possibly a bug? How would I even start to go about tracking this down? Or reporting it? The newsgroups gsutil-discuss and gs-discussion have been archived, and the talk in gce-discussion is all about using gsutil from GCE instances.
Thanks!
I had a similar issue where the same files were synced over and over. I don't have that many files, so you might need to check the performance impact, but I decided to use the -c option to force comparison by checksum instead of mtime, since mtime was being modified locally by my build process.
I think (and hope) the documentation is slightly wrong in stating that it will "compare checksums for files if the size of source and destination as well as mtime match", as it seems to use the checksum even when mtime does not match.
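In command form, that looks like this (bucket and local path are examples):

# compare by checksum instead of size+mtime; slower per file, but immune to mtime churn
gsutil -m rsync -c -r gs://my-bucket ./local-dir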
gsutil 4.20 (released 2016-07-20) modified the change detection algorithm for rsync. Instead of comparing only the size of the local file with its cloud counterpart, it now compares both the size and the file modification time of local files. The file modification time is stored in the custom user metadata for the file when it is uploaded with rsync. If that doesn't exist, the object creation time is used.
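If you want to check what was recorded for a given object, gsutil stat prints its metadata, including any stored mtime entry (the object path is an example):

gsutil stat gs://my-bucket/path/to/object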
I have been using the Google Cloud Storage Manager link on the Google APIs console in order to upload my files.
This works great for most files: 1KB, 10KB, 1MB, 10MB, 100MB. However, yesterday I could not upload a 3GB file. Any idea what is wrong?
What is the best way to upload large files to Google Cloud Storage?
The web UI only supports uploads smaller than 2^32 bytes (4 gigabytes). I believe this is a JavaScript limitation.
If you need to transfer many or large files, consider using gsutil:
gsutil uploads and downloads files of any size.
gsutil resumes uploads and downloads that fail partway through.
gsutil calculates an MD5 checksum to verify that the contents of each file transferred correctly.
gsutil can upload and download many files at the same time.
gsutil -m cp /path/to/*thousands-of-files* gs://my-bucket/
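For very large individual files it can also be worth enabling parallel composite uploads, which split one big file into parts uploaded in parallel (the 150M threshold here is just an example value):

gsutil -o GSUtil:parallel_composite_upload_threshold=150M cp /path/to/big-file gs://my-bucket/

Note that composite objects are validated with crc32c rather than MD5.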
In my experience, the accepted answer is not correct; maybe it was once, but something has changed.
I just uploaded a file of size 2.2GB to GCS using the web interface on Chrome 42 on Windows 8.1.
I would also point out that the question is about files larger than 2GB; an earlier version of the answer said 2GB, but derived that from 2^32, which is 4GB, not 2. So maybe the limit really is 2^32 (4GB); I haven't tried anything that big.
(It is still a good idea to use gsutil for large files.)