How can I access a file in Google Cloud Storage with the Dart Google Storage API - Flutter

I'm trying to get information about a file that is inside a folder in Google Cloud Storage (not Firebase Storage). But the Dart version of the API is not complete: it doesn't have a function to show the blob (file) information like the equivalent Python API does. I just need to access the name of the file. Here's my code:
var credentials = auth.ServiceAccountCredentials.fromJson({
  "type": "service_account",
  ...
});
List<String> scopes = []..addAll(Storage.SCOPES);
var client = await auth.clientViaServiceAccount(credentials, scopes);
var storage = Storage(client, "project_name");
var bucket = storage.bucket("Bucket_name");
var list = await bucket.read("folder_name/");
list.forEach((element) {
  print(element.toString());
});
The result has a lot of options like toList(), toSet(), asBroadcastStream(), etc., but none of them return what I need. Some just return an empty list, which doesn't make sense to me.
Anyway, if anyone knows how to read data from a folder in GCP Storage, please answer. Sorry for my English, and thanks!
The API docs: https://pub.dev/documentation/gcloud/latest/gcloud.storage/gcloud.storage-library.html

For your backend, you can use bucket.list(prefix: "folder_name/") as described in the documentation:
Listing operates like a directory listing, despite the object namespace being flat. Unless delimiter is specified, the character / is being used to separate object names into directory components. To list objects recursively, the delimiter can be set to empty string.
For the front end, forget this! You can't ship a service account key file in your frontend; if you share the secret publicly, it's as if you made your bucket public!
So, for this, you need a backend that authenticates the user and generates a signed URL to read from and write to the bucket.

bucket.read() gets the contents of individual objects. If you want to get the object names, use bucket.list().
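To make that concrete, here is a minimal sketch of the listing approach both answers suggest, assuming the same gcloud package setup as the question and the question's placeholder bucket/prefix names:
import 'package:gcloud/storage.dart';

// Assumes `storage` was created exactly as in the question.
Future<void> printObjectNames(Storage storage) async {
  final bucket = storage.bucket('Bucket_name');

  // list() returns a Stream<BucketEntry>; each entry is either an object
  // or a "directory" component (when a delimiter is in effect).
  await for (final entry in bucket.list(prefix: 'folder_name/')) {
    if (entry.isObject) {
      print(entry.name); // e.g. folder_name/my_file.txt
    }
  }
}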

Related

Azure Media Services - Download Transient Error

I have a lot of audios in my database whose URLs are like:
https://mystorage.blob.core.windows.net/mycontainer/uploaded%2F735fe9dc-e568-4920-a3ed-67230ce01991%2F5998d1f8-1795-4776-a19c-f1bc4a0d4786%2F2020-08-13T13%3A09%3A13.0996703Z?sv=2020-02-10&se=2022-01-05T16%3A58%3A50Z&sr=b&sp=r&sig=hQBPyOE92%2F67MqU%2Fe5V2NsqGzgPxogVeXQT%2BOlvbayw%3D
I am using these URLs as my JobInput, and submitting an encoding job, because I want to migrate the audio distribution to a streaming approach.
However, every time I use this kind of URL, it fails with DownloadTransientError and a message along the lines of "while trying to download the input files, the files were not accessible".
If I manually upload a file to the blob storage with a simpler URL (https://mystorage.blob.core.windows.net/mycontainer/my-audio.wav) and use it as the JobInput, it works seamlessly. I suspect it has something to do with the special characters in the longer URL, but I am not sure. What could be the problem?
Here is the part of the code that submits the job:
var jobInput = new JobInputHttp(new[]
{
    audio.AudioUrl.ToString()
});

JobOutput[] jobOutput =
{
    new JobOutputAsset(outputAssetName),
};

var job = await client.Jobs.CreateAsync(
    resourceGroupName: _azureMediaServicesSettings.ResourceGroup,
    accountName: _azureMediaServicesSettings.AccountName,
    transformName: TransformName,
    jobName: jobName,
    new Job
    {
        Input = jobInput,
        Outputs = jobOutput
    });
You need to include the file name in the URL you're providing. I'll use your URL as an example, but unescape it as well so that it is more clear. The URL should be something like https://mystorage.blob.core.windows.net/mycontainer/uploaded/735fe9dc-e568-4920-a3ed-67230ce01991/5998d1f8-1795-4776-a19c-f1bc4a0d4786/2020-08-13T13:09:13.0996703Z/my-audio.wav?sv=2020-02-10&se=2022-01-05T16:58:50Z&sr=b&sp=r&sig=hQBPyOE92/67MqU/e5V2NsqGzgPxogVeXQT+Olvbayw=
Just include the actual blob name of the input video or audio file with the associated file extension.
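As a rough sketch of what that means in the job-submission code above (the variable names, blob path, and SAS token below are placeholders, not values from the question):
// Sketch only: the input URL must end with the actual blob name (including
// its file extension) before the SAS query string.
var containerUrl = "https://mystorage.blob.core.windows.net/mycontainer";
var blobName = "uploaded/some-guid/another-guid/my-audio.wav"; // full blob path + file name (placeholder)
var sasQuery = "?sv=2020-02-10&se=...&sr=b&sp=r&sig=...";      // read SAS for that blob (placeholder)

var jobInput = new JobInputHttp(new[]
{
    $"{containerUrl}/{blobName}{sasQuery}"
});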

Store the path to an uploaded file on the client side, or the file outside the browser, for offline use

Is there a way to store the path to a file that the user wants to upload but can't because there's no internet connection (it's a PWA), and re-upload it when the connection is back? Or maybe not store the path, but save the file outside browser storage, somewhere on the user's machine (even if it requires the user's consent to let the browser read/write files)? I'm not sure whether that's even allowed.
Currently, I'm storing the whole file as a Base64 string in IndexedDB, but it crashes/slows down the browser when it comes to reading big files (around 100 MB). Also, I don't want to overload browser storage.
There are a couple of things to consider.
Storing the data you need to upload in IndexedDB and then reading it back later is the most widely supported approach. As you say, though, it means taking up extra browser storage. One thing that might help is to skip the step of encoding the file in Base64 first: in all modern browsers, IndexedDB will gladly store the bytes directly for you as a Blob.
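As a rough sketch of that first approach (reusing the same idb-keyval helper as the snippet further below; the element selector, key name, and upload endpoint are placeholders):
import { get, set } from 'https://unpkg.com/idb-keyval@5.0.2/dist/esm/index.js';

const input = document.querySelector('#file-input'); // placeholder selector

// A File is a Blob, so IndexedDB can store it directly, no Base64 needed.
input.addEventListener('change', async () => {
  const [file] = input.files;
  if (file) {
    await set('pending-upload', file);
  }
});

// Later (e.g. once the network is back), read the Blob back out and upload it.
async function uploadPending() {
  const blob = await get('pending-upload');
  if (blob) {
    await fetch('/upload', { method: 'POST', body: blob }); // endpoint is a placeholder
  }
}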
A more modern approach, but one that's not currently supported by non-Chromium browsers, would be to use the File System Access API. As described in this article, once you get the user's permission, you can save a handle to a file in IndexedDB, and then read the file later on (assuming the underlying file hasn't changed in the interim). This has the advantage of not duplicating the file's contents in IndexedDB, saving on storage space. Here's a code snippet, borrowed from the article:
import { get, set } from 'https://unpkg.com/idb-keyval@5.0.2/dist/esm/index.js';

const pre = document.querySelector('pre');
const button = document.querySelector('button');

button.addEventListener('click', async () => {
  try {
    // Try retrieving the file handle.
    const fileHandleOrUndefined = await get('file');
    if (fileHandleOrUndefined) {
      pre.textContent =
        `Retrieved file handle "${fileHandleOrUndefined.name}" from IndexedDB.`;
      return;
    }
    // This always returns an array, but we just need the first entry.
    const [fileHandle] = await window.showOpenFilePicker();
    // Store the file handle.
    await set('file', fileHandle);
    pre.textContent =
      `Stored file handle for "${fileHandle.name}" in IndexedDB.`;
  } catch (error) {
    // alert() takes a single string, so combine the name and message.
    alert(`${error.name}: ${error.message}`);
  }
});
Regardless of how you store the file, it would be helpful to use the Background Sync API when available (again, currently limited to Chromium browsers) to handle automating the upload once the network is available again.
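A minimal sketch of that last piece, assuming a hypothetical 'pending-upload' tag and an uploadPending() helper like the one above:
// Page code (inside an async function): ask the service worker to retry the
// upload when connectivity returns. Feature-detect first, since Background Sync
// is limited to Chromium-based browsers at the time of writing.
async function scheduleRetry() {
  if ('serviceWorker' in navigator && 'SyncManager' in window) {
    const registration = await navigator.serviceWorker.ready;
    await registration.sync.register('pending-upload');
  }
}

// service-worker.js: the browser fires 'sync' once the network is available again.
self.addEventListener('sync', (event) => {
  if (event.tag === 'pending-upload') {
    event.waitUntil(uploadPending()); // placeholder for your actual upload logic
  }
});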

To complete this transfer, you need the 'storage.buckets.setIamPolicy' permission for the source bucket

I am getting this error when trying to create a "transfer" to transfer the contents of one bucket in Google Cloud to another bucket in Google Cloud under the same owner:
To complete this transfer, you need the 'storage.buckets.setIamPolicy' permission for the source bucket. Ask the bucket's administrator to grant you the required permission and try again.
I have no idea what I'm supposed to do. I tried going to "Bucket -> Permissions -> Add Members -> myemail.com for Storage -> ...Admin" but I just keep getting "IAM policy update failed".
Please help on what to do to get this working so I can make my files publicly accessible.
I am using Node.js if that helps.
Even if I try to fetch a photo and pipe it through directly, I can't do that either :/
const { Storage } = require('@google-cloud/storage')

const storage = new Storage({
  projectId: 'my-bucket'
})
const bucket = storage.bucket('my.bucket')

app.get('/photo/:photo.:ext', (req, res) => {
  const remoteFile = bucket.file(`photo/${req.params.photo}.${req.params.ext}`)
  remoteFile.createReadStream().pipe(res)
})
Can't do this either:
const opts = {
  includeFiles: true
};
bucket.makePublic(opts, function(err, files) {
  // `err`:
  //    The first error to occur, otherwise null.
  //
  // `files`:
  //    Array of files successfully made public in the bucket.
  console.log(arguments)
});
Cannot get legacy ACLs for a bucket that has enabled Bucket Policy Only. Read more at https://cloud.google.com/storage/docs/bucket-policy-only.
$ gsutil iam ch allUsers:objectViewer gs://my.bucket
ServiceException: 401 Anonymous caller does not have storage.buckets.getIamPolicy access to my.bucket.
The error clearly indicates a missing permission on the source bucket. I recommend you confirm that the owner of the source bucket has the permission storage.objects.getIamPolicy (IAM & admin --> IAM --> filter by the owner's email address --> check the role assigned to it). Then check whether that role includes the permission: go to IAM & admin --> Roles, search for the specific role, and click on it to see the list of assigned permissions. Ensure that storage.objects.getIamPolicy is one of the permissions listed for the role.
Meanwhile, to be able to grant access to specific buckets, your account must have the Storage Admin role. So, if your account does not have that role, you need someone who has it to grant access or exercise other control over the bucket.
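If a role really is missing, one way to grant it is with gsutil, as a sketch (the email address and bucket name are placeholders, and the command has to be run by an account that is already allowed to change the bucket's IAM policy):
$ gsutil iam ch user:someone@example.com:roles/storage.admin gs://my-bucket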
I expect you have found a solution to this on your own by now.
But for anyone getting the same error message: I got this when trying to set up a transfer from the Cloud Console. I had done a transfer before between these two storage buckets and not changed the name of the transfer, so the second time the suggested name was the same as the previous one. I changed the transfer name to a unique one and that solved the issue for me.
No idea why I kept getting this error message, as permissions were not part of the problem.

How to create a Blob from a Google Cloud Storage URL in Python

I have a blob created using the google cloud storage API, and have saved its path using blob.path. The path is of the form
/b/bucketname/o/some%2Fobject%2Fid
How do I recreate the blob from this URL?
It's unfortunate that the GCS API doesn't provide a factory method to go from the path URL back to a blob, since saving blob paths in databases etc. is quite common.
Here is a factory method that allows you to go from a blob.path back to a blob:
from urllib.parse import unquote

import google.cloud.storage as gcs

def blob_from_blobpath(blob_path):
    blob_path = blob_path[3:]            # strip the leading "/b/"
    slash_loc = blob_path.index('/')
    bucket_name = blob_path[:slash_loc]
    # The object name in blob.path is URL-encoded (e.g. "/" appears as "%2F"),
    # so decode it before recreating the blob.
    blob_name = unquote(blob_path[(slash_loc + 3):])   # skip the "/o/"
    bucket = gcs.Client().get_bucket(bucket_name)
    return bucket.blob(blob_name)
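For example, round-tripping a saved path looks like this (the bucket and object names are hypothetical):
# Hypothetical usage: the bucket and object names are placeholders.
blob = blob_from_blobpath('/b/my-bucket/o/some%2Fobject%2Fid')
print(blob.bucket.name)   # my-bucket
print(blob.name)          # some/object/id (decoded back from the URL-encoded path)
data = blob.download_as_bytes()  # works like any other Blob from here on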

Multiple s3 buckets in Filepicker.io

I need to upload to multiple s3 buckets with filepicker.io. I found a tweet that indicated that there was a hacky, but possible, way to do this. Support hasn't gotten back to me yet, so I'm hoping that someone here already knows the answer!
Have you tried generating a second application/API key? It looks like they lock your S3/AWS credentials to an application/API key rather than directly to the account.
Support just got back to me. There's no way to do this besides creating multiple applications, which is okay if you are just switching between prod/staging/dev, but not a good solution if you have to upload to arbitrary buckets.
My solution is to execute a PUT request with the x-amz-copy-source header after the file has been uploaded, which copies it to the correct bucket.
This is pretty hacky, as it requires two extra requests per file -- one filepicker.stat call and one more call to S3 (or your server).
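For reference, the raw S3 copy described here has roughly the following shape (bucket and key names are placeholders, and the request still has to be signed like any other S3 call):
PUT /destination-key HTTP/1.1
Host: destination-bucket.s3.amazonaws.com
x-amz-copy-source: /source-bucket/source-key
Authorization: <AWS signature>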
@Ben
I am developing code with the same issue of files needing to go into many buckets. I'm working in ASP.NET.
What I have done is have one Filepicker 'application' with its own S3 bucket.
I already had a callback to the server in the JavaScript onSuccess() function (which is passed as a parameter to filepicker.store()). This callback needed to be there to do some bookkeeping anyway.
So I have just added an extra bit to the server-side callback code which uses the AWS SDK to copy the object from the bucket Filepicker uploads it to, to its final destination bucket.
This is my C# code for moving, or rather copying, an object between buckets:
public bool MoveObject(string bucket1, string key1, string bucket2, string key2 = null)
{
    bool success = false;
    if (key2 == null) key2 = key1;
    Logger logger = new Logger(); // my logging system

    try
    {
        RegionEndpoint region = RegionEndpoint.EUWest1; // use your region here
        using (AmazonS3Client s3Client = new AmazonS3Client(region))
        {
            // TODO: CheckForBucketFunction
            CopyObjectRequest request = new CopyObjectRequest();
            request.SourceBucket = bucket1;
            request.SourceKey = key1;
            request.DestinationBucket = bucket2;
            request.DestinationKey = key2;
            S3Response response = s3Client.CopyObject(request);
            logger.Info2Log("response xml = \n{0}\n", response.ResponseXml);
            response.Dispose();
            success = true;
        }
    }
    catch (AmazonS3Exception ex)
    {
        logger.Info2Log("Error copying file between buckets: {0} - {1}",
            ex.ErrorCode, ex.Message);
        success = false;
    }
    return success;
}
There are AWS SDKs for other server languages and the good news is Amazon doesn't charge for copying objects between buckets in the same region.
Now I just have to decide how to delete the object from the filepicker application bucket. I could do it on the server using more AWS SDK code but that will be messy as it leaves links to the object in the filepicker console. Or I could do it from the browser using filepicker code.
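If you do go the server-side route, a minimal sketch of the delete step (assuming the current AWSSDK.S3 package with its async API; the bucket and key names are placeholders, and note the Filepicker console will still show a now-dangling link) would be:
// Minimal sketch: remove the temporary object from the Filepicker staging bucket
// after the copy has succeeded. Assumes the modern AWSSDK.S3 package.
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System.Threading.Tasks;

public static class StagingCleanup
{
    public static async Task DeleteStagedObjectAsync(string bucket, string key)
    {
        using (var s3Client = new AmazonS3Client(RegionEndpoint.EUWest1)) // use your region here
        {
            var request = new DeleteObjectRequest
            {
                BucketName = bucket,
                Key = key
            };
            await s3Client.DeleteObjectAsync(request);
        }
    }
}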