I am working on a project in which a Raspberry Pi generates data as CSV files on a laptop connected to it. My goal is to send these CSV files to GCS regularly (in real time or every 15 minutes).
Then I will use Google Cloud Functions to send the data from GCS to BigQuery.
The Raspberry Pi is registered on the network (I am not sure whether that helps).
My question: how do I send CSV files from the laptop connected to the Raspberry Pi to Google Cloud Storage buckets?
You have to use the GCS client libraries:
https://cloud.google.com/storage/docs/reference/libraries
The Python one may be the best fit for the Raspberry Pi: https://pypi.org/project/google-cloud-storage/
You will need a GCP project; create a bucket and a service account with permission to upload files. Download the service account key file to your Raspberry Pi and use it as specified in the client library you chose: basically, specify the credentials file location in your script or as an environment variable.
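For illustration, here is a minimal sketch of that upload with the Python client library; the key path, bucket name and file name below are placeholders, not values from your setup:

# pip install google-cloud-storage
from google.cloud import storage

KEY_FILE = "key.json"        # downloaded service account key (placeholder path)
BUCKET_NAME = "my-bucket"    # existing GCS bucket (placeholder name)
LOCAL_FILE = "data.csv"      # CSV produced on the laptop (placeholder name)

# Build a client from the service account key; alternatively set the
# GOOGLE_APPLICATION_CREDENTIALS env var and call storage.Client() with no args.
client = storage.Client.from_service_account_json(KEY_FILE)
bucket = client.bucket(BUCKET_NAME)

# Upload the file; the object name may include a "folder" prefix.
blob = bucket.blob("csv/" + LOCAL_FILE)
blob.upload_from_filename(LOCAL_FILE)
print("Uploaded to gs://{}/{}".format(BUCKET_NAME, blob.name))

You could run a script like this from cron every 15 minutes to match your schedule.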
BTW, to insert the file into BigQuery you could use Cloud Storage Pub/Sub notifications, which create messages when new files are uploaded; with a push subscription, your Cloud Function can then load the file into BigQuery using the BigQuery client library. Take a look at: https://cloud.google.com/storage/docs/pubsub-notifications
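As a rough sketch of that last step, a Pub/Sub-triggered Cloud Function (Python runtime) could look something like this; the dataset/table name and the CSV load options are placeholders you would adapt:

# requirements.txt: google-cloud-bigquery
from google.cloud import bigquery

def gcs_csv_to_bq(event, context):
    # GCS Pub/Sub notifications put the bucket and object names in the
    # message attributes (eventType, bucketId, objectId, ...).
    attrs = event.get("attributes", {})
    if attrs.get("eventType") != "OBJECT_FINALIZE":
        return  # only react to newly uploaded objects

    uri = "gs://{}/{}".format(attrs["bucketId"], attrs["objectId"])

    client = bigquery.Client()
    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,   # assumes a header row
        autodetect=True,       # or pass an explicit schema
    )
    # "my_dataset.my_table" is a placeholder destination table.
    load_job = client.load_table_from_uri(uri, "my_dataset.my_table", job_config=job_config)
    load_job.result()  # wait for the load job to finish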
I am a newbie at cloud servers and I've created a Google Cloud Storage bucket to host image files. I've verified my domain and configured it so the images can be viewed via my domain. The problem is that the same file is accessible both via my domain, example.com/images/tiny.png, and via storage.googleapis.com/example.com/images/tiny.png. Is there any solution to disable access via storage.googleapis.com and use only my domain?
Google Cloud Platform Support Version:
NOTE: This is the reply from Google Cloud Platform Support when contacted via email...
I understand that you have set up a domain name for one of your Cloud Storage buckets and you want to make sure only URLs starting with your domain name have access to this bucket.
I am afraid that this is not possible because of how Cloud Storage permissions work.
Making a Cloud Storage bucket publicly readable also gives each of its files a public link. And currently this public link can’t be disabled.
A workaround would be to implement a proxy program and run it on a Compute Engine virtual machine. This VM will need a static external IP so that you can map your domain to it. The proxy program will be in charge of returning the requested file from a predefined Cloud Storage bucket while the bucket itself remains inaccessible to the public.
You may find these documents helpful if you are interested in this workaround (a minimal sketch of such a proxy follows the list):
1. Quick start to set up a Linux VM (1).
2. Python API for accessing Cloud Storage files (2).
3. How to download service account keys to grant a program access to a set of services (3).
4. Pricing calculator for getting a picture on how much a VM may cost (4).
(1) https://cloud.google.com/compute/docs/quickstart-linux
(2) https://pypi.org/project/google-cloud-storage/
(3) https://cloud.google.com/iam/docs/creating-managing-service-account-keys
(4) https://cloud.google.com/products/calculator/
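For illustration, a minimal sketch of such a proxy using Flask and the Python Cloud Storage client; the bucket name is a placeholder, and a real deployment would also want caching, HTTPS and proper error handling:

# pip install flask google-cloud-storage
import mimetypes
from flask import Flask, Response, abort
from google.cloud import storage

app = Flask(__name__)
client = storage.Client()              # uses the VM's service account credentials
bucket = client.bucket("example.com")  # placeholder: your private bucket

@app.route("/<path:object_name>")
def serve(object_name):
    blob = bucket.blob(object_name)
    if not blob.exists():
        abort(404)
    data = blob.download_as_bytes()
    mime = mimetypes.guess_type(object_name)[0] or "application/octet-stream"
    return Response(data, mimetype=mime)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=80)

You would then point your domain's DNS at the VM's static IP and keep the bucket itself private, granting read access only to the proxy's service account.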
My Version:
It seems the solution to this question is really simple: just FUSE-mount Google Cloud Storage on a VM instance.
After the FUSE mount, private files from GCS can be accessed through the VM's IP address. It makes the Cloud Storage bucket act like a local directory.
The detailed documentation about how to set up FUSE in Google Cloud is here.
There is, but it requires you to do more work.
Your current solution works because you've made access to the GCS bucket (example.com) public and you're then DNS-aliasing it from your domain.
An alternative approach would be for you to limit access to the GCS bucket to one (or possibly several) accounts and then run a web server that uses one of those accounts to access your image files. You could then either permit anyone to access your web server or limit access to it as well.
More work for you (and possibly cost) but more control.
I have a Cloud Object Storage instance created from the Bluemix (IBM Cloud) console. Now, for some reason, I need to back up all the data in the existing object storage into another object storage instance.
I'm not able to find anything proper on this. Any help on this would be much appreciated.
You can use a tool like rclone or s3cmd to sync the Cloud Object Storage buckets to a Linux server and then use those same tools to send them to the new Cloud Object Storage account.
https://mpcarl.github.io/rclone-cos
https://knowledgelayer.softlayer.com/procedure/connecting-cos-s3-using-s3cmd
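If you would rather script the copy than configure rclone or s3cmd, COS also exposes an S3-compatible API, so something like the following boto3 sketch could stream objects from one instance to the other. The endpoints, bucket names and HMAC credentials below are placeholders:

# pip install boto3
import boto3

# Placeholder endpoints and HMAC credentials for the source and destination
# COS instances; substitute the values from your service credentials.
src = boto3.client(
    "s3",
    endpoint_url="https://s3.us-south.cloud-object-storage.appdomain.cloud",
    aws_access_key_id="SRC_HMAC_ACCESS_KEY_ID",
    aws_secret_access_key="SRC_HMAC_SECRET_ACCESS_KEY",
)
dst = boto3.client(
    "s3",
    endpoint_url="https://s3.us-south.cloud-object-storage.appdomain.cloud",
    aws_access_key_id="DST_HMAC_ACCESS_KEY_ID",
    aws_secret_access_key="DST_HMAC_SECRET_ACCESS_KEY",
)

# Stream every object from the source bucket to the destination bucket.
paginator = src.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="source-bucket"):
    for obj in page.get("Contents", []):
        body = src.get_object(Bucket="source-bucket", Key=obj["Key"])["Body"]
        dst.upload_fileobj(body, "destination-bucket", obj["Key"])
        print("copied", obj["Key"])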
I have a number of files that I transferred into Azure Blob Storage via Azure Data Factory. Unfortunately, this tool doesn't appear to set the Content-MD5 value for any of the files, so when I pull that value from the Blob Storage API, it's empty.
I'm aiming to transfer these files out of Azure Blob Storage and into Google Storage. The documentation I'm seeing for Google's Storage Transfer service at https://cloud.google.com/storage/transfer/reference/rest/v1/TransferSpec#HttpData indicates that I can easily initiate such a transfer if I supply a list of the files with their URL, length in bytes and an MD5 hash of each.
Well, I can easily pull the first two from Azure Storage, but the third doesn't appear to automatically get populated by Azure Storage, nor can I find any way to get it to do so.
Unfortunately, my other options look limited. The possibilities so far:
Download each file to a local machine, determine the hash and update the blob's MD5 value
See if I can't write an Azure Functions app in the same region that can calculate the hash value and write it to the blob for each file in the container
Use an Amazon S3 egress from Data Factory and then use Google's support for importing from S3 to pull it from there, per https://cloud.google.com/storage/transfer/reference/rest/v1/TransferSpec#AwsS3Data but this really seems like a waste of bandwidth (and I'd have to set up an Amazon account).
Ideally, I want to be able to write a script, hit go and leave it alone. I don't have the fastest download rate from Azure, so #1 would be less than desirable as it'd take a long time.
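For reference, a rough sketch of what that hash-and-update loop might look like with the azure-storage-blob Python client; the connection string and container name are placeholders, and it still has to download each blob to compute the hash, so it would ideally run in the same region:

# pip install azure-storage-blob
import hashlib
from azure.storage.blob import BlobServiceClient, ContentSettings

service = BlobServiceClient.from_connection_string("PLACEHOLDER_CONNECTION_STRING")
container = service.get_container_client("my-container")  # placeholder name

for props in container.list_blobs():
    if props.content_settings.content_md5:
        continue  # hash already present, skip
    blob = container.get_blob_client(props.name)
    data = blob.download_blob().readall()
    md5 = hashlib.md5(data).digest()
    # set_http_headers replaces the blob's content settings, so preserve at
    # least the content type alongside the new MD5.
    blob.set_http_headers(content_settings=ContentSettings(
        content_type=props.content_settings.content_type,
        content_md5=bytearray(md5),
    ))
    print("updated", props.name)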
Have any other approaches?
May 2020 update: Google Cloud Data Transfer now supports Azure Blob storage as a source. This is a no-code solution.
We used this to transfer ~ 1TB of files from Azure Blob storage to Google Cloud Storage. We also have a daily refresh so any new files in Azure Blob are automatically copied to Cloud Storage.
I know it's a bit late to answer this question for you, but it might help others who are trying to migrate data from Azure Blob Storage to Google Cloud Storage.
Google Cloud Storage and Azure Blob Storage are both storage services and don't give you a command line where you can simply log in and run transfer commands. For that, we need an intermediate compute instance that can actually run the required commands. We will follow the steps below to achieve the cloud-to-cloud transfer.
First and foremost, create a Compute Engine instance in Google Cloud Platform. You needn't create a computationally powerful instance; all you need is a Debian machine with 10 GB of disk, a 2-core CPU and 4 GB of memory.
In the early days, you would have downloaded the data to the compute instance in GCP and then moved it on to Google Cloud Storage. But now, with the introduction of gcsfuse, we can simply mount a Cloud Storage bucket as a file system.
Once the compute instance is created, simply log in to that instance using SSH from the Google Cloud Console and install the following packages.
Install Google Cloud Storage Fuse
export GCSFUSE_REPO=gcsfuse-`lsb_release -c -s`
echo "deb http://packages.cloud.google.com/apt $GCSFUSE_REPO main" | sudo tee /etc/apt/sources.list.d/gcsfuse.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-get update -y
sudo apt-get install gcsfuse -y
# Create local folder
mkdir local_folder_name
# Mount the Storage Account as a bucket
gcsfuse <bucket_name> <local_folder_path>
Install Azcopy
wget https://aka.ms/downloadazcopy-v10-linux
tar -xvf downloadazcopy-v10-linux
sudo cp ./azcopy_linux_amd64_*/azcopy /usr/bin/
Once these packages are installed, the next step is to create a Shared Access Signature (SAS) token. If you have Azure Storage Explorer, just right-click on the storage account name in the directory tree and select Generate Shared Access Signature.
Now you will have to create a URL to your blob objects. To achieve this, simply right-click on any of your blob objects, select Properties and copy the URL from the dialogue box.
Your final URL should look like this:
<https://URL_to_file> + <SAS Token>
https://myaccount.blob.core.windows.net/sascontainer/sasblob.txt?sv=2015-04-05&st=2015-04-29T22%3A18%3A26Z&se=2015-04-30T02%3A23%3A26Z&sr=b&sp=rw&sip=168.1.5.60-168.1.5.70&spr=https&sig=Z%2FRHIX5Xcg0Mq2rqI3OlWTjEg2tYkboXr1P9ZUXDtkk%3D
Now, use the following command to start copying the files from Azure to GCP storage.
azcopy cp --recursive=true "<-source url->" "<-destination url->"
If your job fails, you can list your jobs using:
azcopy jobs list
and to resume failed jobs:
azcopy jobs resume <job-id> --source-sas "<sas-token>"
You can collate all the steps into one bash script and leave it running until your data transfer is complete.
And that's all! I hope it helps others.
We migrated about 3 TB of files from Azure to Google Storage. We started a cheap Linux server with a few TB of local disk in Google Compute Engine, transferred the Azure files to the local disk with blobxfer, then copied the files from the local disk to Google Storage with gsutil rsync (gsutil cp works too).
You can use other tools to transfer files from Azure; you could even start a Windows server in GCE and use gsutil on Windows.
It took a few days, but was simple and straightforward.
Did you think about using Azure Data Factory's custom activity support, which is used for data transformation? On the back end you can use Azure Batch for downloading, updating and uploading your files into Google Storage, if you go with an ADF custom activity.
I'm trying to use Cloud SQL from my VM instance.
When creating the VM Instance I activated Cloud SQL Option for it.
The Cloud SQL instance authorizes my Compute Engine Project to access it.
At first I was expecting to have some tools like google_sql.sh installed on my VM since I had activated Cloud SQL on it but no :-/
The Cloud SQL docs say that I should copy my local access token to my VM instance.
My local machine is Mac OS X, so the tokens are stored in:
~user/Library/Preferences/com.google.cloud.plist
but on my Linux VM it's stored in:
~user/.java/.userPrefs/com/google/cloud/sqlservice/oauth2/prefs.xml.
Do I have to create a prefs.xml and copy it to my VM? (But I guess the XML schema is not the same between com.google.cloud.plist and prefs.xml?)
Does someone have a prefs.xml example I could use as a template (unless the schema is exactly the same as com.google.cloud.plist, which I doubt)?
Thanks all for your help.
The simplest thing is actually to include service account scopes when you create your instance. This page in the Compute Engine docs describes how to do it. Doing so maintains an access token in the Compute Engine instance's metadata server, which the Cloud SQL tools can then access when they need to authenticate. A similar technique works for Cloud Storage and other products.
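For illustration, here is a small sketch of what that looks like from inside the VM; any process on an instance created with the right scopes can ask the metadata server for a token (the client tools and libraries do this for you):

# Runs on a Compute Engine VM; no credentials file needed.
import json
import urllib.request

METADATA_URL = (
    "http://metadata.google.internal/computeMetadata/v1/"
    "instance/service-accounts/default/token"
)

req = urllib.request.Request(METADATA_URL, headers={"Metadata-Flavor": "Google"})
with urllib.request.urlopen(req) as resp:
    token = json.load(resp)

# token contains access_token, expires_in and token_type fields.
print("got a token that expires in", token["expires_in"], "seconds")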