How to create a Blob from a Google Cloud Storage URL in Python - google-cloud-storage

I have a blob created using the Google Cloud Storage API, and I have saved its path using blob.path. The path is of the form
/b/bucketname/o/some%2Fobject%2Fid
How do I recreate the blob from this URL?

It's unfortunate that the GCS API doesn't provide a factory method to go from the path URL back to a blob, since saving blob paths in databases etc. is quite common.
Here is a factory method that allows you to go from a blob.path back to a blob:
def blob_from_blobpath(blob_path):
    import google.cloud.storage as gcs
    from urllib.parse import unquote

    blob_path = blob_path[3:]  # strip the leading '/b/'
    slash_loc = blob_path.index('/')
    bucket_name = blob_path[:slash_loc]
    # skip the '/o/' separator; blob.path URL-encodes the object name, so undo that
    blob_name = unquote(blob_path[slash_loc + 3:])
    bucket = gcs.Client().get_bucket(bucket_name)
    return bucket.blob(blob_name)
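For example, round-tripping the path from the question (the bucket and object names are just placeholders):

blob = blob_from_blobpath('/b/bucketname/o/some%2Fobject%2Fid')
print(blob.bucket.name)  # bucketname
print(blob.name)         # some/object/id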

Related

How can I access a file in storage with the Dart Google Storage API

I'm trying to get information about a file that is inside a folder in Google Cloud Storage (not Firebase Storage). But the Dart API for it is not complete: it doesn't have a function to show the blob (file) information like the equivalent Python API does. I just need to access the name of the file. Here's my code:
var credentials = auth.ServiceAccountCredentials.fromJson({
  "type": "service_account",
  ...
});
List<String> scopes = []..addAll(Storage.SCOPES);
var client = await auth.clientViaServiceAccount(credentials, scopes);
var storage = Storage(client, "project_name");
var bucket = storage.bucket("Bucket_name");
var list = await bucket.read("folder_name/");
list.forEach((element) {
  print(element.toString());
});
It has a lot of options like toList(), toSet(), asBroadcastStream(), etc., but none of them return what I need. Some just return an empty list, which doesn't make sense to me.
Anyway, if someone knows how to read data from a folder in GCP Storage, please answer me. Sorry for my English, and thanks!
The API docs: https://pub.dev/documentation/gcloud/latest/gcloud.storage/gcloud.storage-library.html
For your backend, you can use bucket.list(prefix: "folder_name/", delimiter: "") as described in the documentation:
Listing operates like a directory listing, despite the object namespace being flat. Unless delimiter is specified, the character / is being used to separate object names into directory components. To list objects recursively, the delimiter can be set to empty string.
For the front end, forget this! You can't ship a service account key file in your frontend: if you share the secret publicly, it's as if you had made your bucket public!
So, for this, you need a backend that authenticates the user and generates a signed URL for reading and writing inside the bucket.
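As a point of comparison, here is what generating a short-lived, read-only signed URL looks like with the Python google-cloud-storage client (the bucket and object names are placeholders, and a credential capable of signing, such as a service account, is assumed):

import datetime
from google.cloud import storage

client = storage.Client()
blob = client.bucket("my-bucket").blob("folder_name/file.txt")
# V4 signed URL, valid for 15 minutes, GET only
url = blob.generate_signed_url(
    version="v4",
    expiration=datetime.timedelta(minutes=15),
    method="GET",
)
print(url)  # hand this to the frontend instead of any credentials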
bucket.read() gets the contents of individual objects. If you want to get the object names, use bucket.list().
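For reference, the same prefix/delimiter idea in the Python google-cloud-storage client looks like this (the bucket and folder names are placeholders):

from google.cloud import storage

client = storage.Client()
# With no delimiter, the flat namespace is listed recursively;
# pass delimiter="/" to emulate a non-recursive directory listing.
for blob in client.list_blobs("my-bucket", prefix="folder_name/"):
    print(blob.name)  # object names only; nothing is downloaded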

Displaying images from the Azure File service in an external system + REST API

I have created a method using the Get File operation of the Azure File service. Reference: https://learn.microsoft.com/en-us/rest/api/storageservices/get-file
public void getImage(){
    string storageKey = 'xxxxStorageKeyxxx';
    string storageName = '<storageName>';
    Datetime dt = Datetime.now();
    string formattedDate = dt.formatGMT('EEE, dd MMM yyyy HH:mm:ss') + ' GMT';
    string CanonicalizedHeaders = 'x-ms-date:' + formattedDate + '\nx-ms-version:2016-05-31';
    string CanonicalizedResource = '/' + storageName + '/<shareName>/<dirName>/<File Name>\ntimeout:20';
    string StringToSign = 'GET\n\n\n\n\napplication/octet-stream\n\n\n\n\n\n\n' + CanonicalizedHeaders + '\n' + CanonicalizedResource;
    Blob temp = EncodingUtil.base64Decode(storageKey);
    Blob hmac = Crypto.generateMac('HmacSHA256', Blob.valueOf(StringToSign), temp); // sign StringToSign
    system.debug('oo-' + EncodingUtil.base64Encode(hmac));
    HttpRequest req = new HttpRequest();
    req.setMethod('GET');
    req.setHeader('x-ms-version', '2016-05-31');
    req.setHeader('x-ms-date', formattedDate);
    req.setHeader('content-type', 'application/octet-stream');
    string signature = EncodingUtil.base64Encode(hmac);
    string authHeader = 'SharedKey <storageName>' + ':' + signature;
    req.setHeader('Authorization', authHeader);
    req.setEndpoint('https://<storageName>.file.core.windows.net/<shareName>/<dirName>/<file Name>?timeout=20');
    Http http = new Http();
    HTTPResponse res;
    res = http.send(req);
}
The above works fine and gives 200 as the response code. But my main goal is to display/download the image that I retrieved through the REST API. How can I achieve that?
So, a few things before I answer your question:
File storage is not really suitable for what you're trying to accomplish (it is possible, though).
You should look at Blob storage instead, as it is more suitable for this kind of scenario.
Assuming you go with Blob storage, there are a few things you could do:
If the blob container (the equivalent of a share in File storage) has its ACL set to Blob or Container (i.e. blobs in the container are publicly available), you can simply return the blob's URL (the same as your request URL in the code above) in your response and then create a link in your application with its href set to this URL.
If the blob container has its ACL set to Private (i.e. blobs are not publicly available), you need to create a Shared Access Signature (SAS) token for that blob with at least Read permission and build a SAS URL. A SAS URL is simply the blob URL plus the SAS token; return this SAS URL in your response and create a link in your application with its href set to it.
Since an Azure File share is always private, if you were to use the Azure File service to serve a file, you would do the same as the 2nd option above: create a SAS token on the file with at least Read permission, return the SAS URL in the response, and create a link in your application with its href set to this URL.
To read about Shared Access Signatures, you may find this link helpful: https://learn.microsoft.com/en-us/azure/storage/common/storage-dotnet-shared-access-signature-part-1.
To create a Shared Access Signature using the REST API, you may find this link helpful: https://learn.microsoft.com/en-us/rest/api/storageservices/Constructing-a-Service-SAS?redirectedfrom=MSDN
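For illustration, here is a minimal sketch of building a read-only SAS URL with the Python azure-storage-blob SDK; the account name, key, container, and blob names are all placeholders:

import datetime
from azure.storage.blob import generate_blob_sas, BlobSasPermissions

sas = generate_blob_sas(
    account_name="mystorageaccount",
    container_name="images",
    blob_name="picture.png",
    account_key="xxxxStorageKeyxxx",
    permission=BlobSasPermissions(read=True),
    expiry=datetime.datetime.utcnow() + datetime.timedelta(hours=1),
)
url = f"https://mystorageaccount.blob.core.windows.net/images/picture.png?{sas}"
# put this URL in an <img src=...> or <a href=...> in your application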

Pass Credentials to StorageOptions to Create Bucket

I currently have a very simple function which creates a Google Storage bucket using the Java API (from Scala). However, it currently infers the gcloud credentials from the environment.
def createBucket(bucketName: String) = {
  val storage = StorageOptions.getDefaultInstance.getService // May need to change. Creds inferred from environment.
  val bucket = storage.create(BucketInfo.of(bucketName))
}
Is there any way to pass a GoogleCredential object into the constructor? I've been stepping through the API at https://github.com/GoogleCloudPlatform/google-cloud-java/tree/master/google-cloud-storage/src/main/java/com/google/cloud/storage but haven't had any luck.
Found the answer outlined at https://github.com/GoogleCloudPlatform/google-cloud-java#authentication
Storage storage = StorageOptions.newBuilder()
    .setCredentials(new GoogleCredentials(new AccessToken(accessToken, expirationTime)))
    .build()
    .getService();
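For comparison, the Python client accepts explicit credentials in much the same way; a minimal sketch assuming a service account key file (the path, project, and bucket names are placeholders):

from google.cloud import storage
from google.oauth2 import service_account

credentials = service_account.Credentials.from_service_account_file(
    "/path/to/key.json"
)
client = storage.Client(project="my-project", credentials=credentials)
bucket = client.create_bucket("my-new-bucket")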

Is it possible to stream data(Upload) to store on bucket of Google cloud storage and allow to download at the same time?

I have tried to use the Cloud API to upload a 100 MB file to the bucket using the code below, but during the upload, when I refresh the bucket in the Google Cloud console, I cannot see the new file until the upload is finished. I would like to upload realtime video encoded in H.264 to Cloud Storage, so the size is unknown, and at the same time other users should be able to start downloading the file even while it is still uploading. So is it possible?
Test code:
File tempFile = new File("StorageSample");
RandomAccessFile raf = new RandomAccessFile(tempFile, "rw");
try
{
    raf.setLength(1000 * 1000 * 100);
}
finally
{
    raf.close();
}
uploadFile(TEST_FILENAME, "text/plain", tempFile, bucketName);

public static void uploadFile(
        String name, String contentType, File file, String bucketName)
        throws IOException, GeneralSecurityException
{
    InputStreamContent contentStream = new InputStreamContent(
            contentType, new FileInputStream(file));
    // Setting the length improves upload performance
    contentStream.setLength(file.length());
    StorageObject objectMetadata = new StorageObject()
            // Set the destination object name
            .setName(name)
            // Set the access control list to publicly read-only
            .setAcl(Arrays.asList(
                    new ObjectAccessControl().setEntity("allAuthenticatedUsers").setRole("READER"))); // or "allUsers"
    // Do the insert
    Storage client = StorageFactory.getService();
    Storage.Objects.Insert insertRequest = client.objects().insert(
            bucketName, objectMetadata, contentStream);
    insertRequest.getMediaHttpUploader().setDirectUploadEnabled(false);
    insertRequest.execute();
}
Unfortunately it's not possible, as stated in the documentation:
Objects are immutable, which means that an uploaded object cannot
change throughout its storage lifetime. An object's storage lifetime
is the time between successful object creation (upload) and successful
object deletion.
This means that an object in Cloud Storage only starts to exist once the upload is finished, so you cannot access the object until your upload is complete.

jclouds : how do I update metadata for an existing blob?

I've got a few thousand blobs at Rackspace's Cloud Files for which I need to update the content type. However, I can't figure out how to do that using the jclouds API.
How can I go about updating metadata on an existing blob?
Assuming you have the whole setup running for your Rackspace account, using jclouds is easy.
First, initialize with the following details:
BlobStoreContext context = ContextBuilder.newBuilder(provider)
    .credentials(username, apiKey)
    .buildView(BlobStoreContext.class);
BlobStore blobStore = context.getBlobStore();
You can now build a new blob to put on Rackspace:
Blob blob = blobStore.blobBuilder(key)
    .userMetadata(metadata)
    .payload(value)
    .build();
blobStore.putBlob(container, blob);
value is the input byte[] and metadata is a hash map of metadata associated with the blob, such as the content type.
If you want to do operations like updating metadata:
RegionScopedBlobStoreContext context = ContextBuilder.newBuilder("openstack-swift")
    .endpoint(config.getAuthUrl())
    .credentials(config.getUser(), config.getPasswd())
    .overrides(p)
    .buildView(RegionScopedBlobStoreContext.class);
SwiftApi swift = (SwiftApi) ((org.jclouds.rest.internal.ApiContextImpl) context.unwrap()).getApi();
boolean success = swift.objectApiInRegionForContainer(config.getRegion(), container)
    .updateMetadata(filename, metaData);
I know this is just an overview, but I hope it points you in a good direction.
As of jclouds 2.1.0 (and 1.9.3 at least) the API to change object custom metadata looks like this:
BlobStoreContext context = contextBuilder.buildView(BlobStoreContext.class);
SwiftApi api = (SwiftApi) ((org.jclouds.rest.internal.ApiContextImpl) context.unwrap()).getApi();
ObjectApi objectApi = api.getObjectApi(region, container);
Map<String, String> meta = new HashMap<>();
meta.put("some-meta", value); // Java string literals need double quotes
objectApi.updateMetadata(blobName, meta);
Content type cannot be updated this way; only metadata with keys starting with X-Object-Meta- can be updated. updateMetadata automatically prefixes all keys passed to it with X-Object-Meta-, so in the example above, custom data with the key X-Object-Meta-some-meta would be added to the blob.
Theoretically, updateRawMetadata should be able to update the content type (it does not add the X-Object-Meta- prefix to keys and passes them verbatim), but due to a bug in jclouds it fails for the content-type key with the error:
configuration error please use request.getPayload().getContentMetadata().setContentType(value) as opposed to adding a content type header
I've checked updating the content type via curl and it works fine, so it is a bug in jclouds:
curl -X POST -H "X-Auth-Token: $TOKEN" -H "Content-Type: $CONTENT_TYPE" "$PUBLIC_URL/$CONTAINER/$BLOB_NAME"
The workaround is to use the copy operation to copy the blob onto itself, as described in the documentation for the API:
You can use COPY as an alternate to the POST operation by copying to the same object
This can be done using the vendor/API-independent jclouds API like this:
Blob blob = blobStore.getBlob(container, blobName);
MutableContentMetadata contentMetadata = blob.getPayload().getContentMetadata();
contentMetadata.setContentType(mimeType);
blobStore.copyBlob(container, blobName, container, blobName,
    CopyOptions.builder().contentMetadata(contentMetadata).build());
Or via the SwiftApi (this does not require fetching the blob's metadata first):
BlobStoreContext context = contextBuilder.buildView(BlobStoreContext.class);
SwiftApi api = (SwiftApi) ((org.jclouds.rest.internal.ApiContextImpl) context.unwrap()).getApi();
ObjectApi objectApi = api.getObjectApi(region, container);
Map<String, String> meta = new HashMap<>();
meta.put(HttpHeaders.CONTENT_TYPE, mimeType);
objectApi.copy(blobName, container, blobName, new HashMap<String, String>(), meta);
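For completeness, the same update can also be issued without jclouds; here is a minimal sketch using the Python python-swiftclient package, mirroring the curl POST above (the connection parameters, container, and object names are placeholders):

import swiftclient

conn = swiftclient.Connection(
    authurl="https://auth.example.com/v1.0",  # placeholder auth endpoint
    user="username",
    key="apiKey",
)
# Note: in Swift a POST typically replaces the object's custom metadata,
# so include any X-Object-Meta-* headers you want to keep alongside
# the new content type.
conn.post_object("my-container", "my-blob", headers={"Content-Type": "image/png"})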