How to access an S3 bucket using Flutter Amplify without AWS Cognito authentication

I've created an S3 bucket and set its access level to public. I don't have AWS Cognito configured in the project. I need to use amplify_storage_s3 to get and put files to the bucket without a Cognito user pool. Is this possible?

According to the official response here, this is not supported:
Currently it's not possible to use the S3 plugin without Cognito, as Cognito is used to sign the S3 requests. You can enable guest access while configuring Auth through the CLI, so that your app's users won't have to sign in to use S3 resources (by using guest access).
But I found a possible workaround here.
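For context on why Cognito is required: the plugin signs every S3 request, and Cognito is what vends the signing credentials (guest access just means an unauthenticated identity from a Cognito identity pool). If the bucket is genuinely public, anonymous unsigned requests do work outside Amplify; here is a minimal Python sketch with boto3, where the bucket and key names are placeholders:

    import boto3
    from botocore import UNSIGNED
    from botocore.config import Config

    # Anonymous client: no credentials, no request signing.
    s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

    # Works only because the bucket's objects are publicly readable.
    s3.download_file("my-public-bucket", "path/to/file.txt", "file.txt")

Note that anonymous uploads would require a bucket policy allowing public PutObject, which is rarely advisable; for writes, the guest-access route above is the safer path.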

Related

How to use amplify-flutter without cli

Our client provided the required Cognito User Pool ID and related tokens, but is not giving us the AWS credentials. Is there any way to set up the official AWS Amplify Flutter library without using the CLI?
You should be able to modify the CFN file. See aws-exports.js and the CloudFormation template basicauthentication5fbc10de-cloudformation-template.yml in this solution for an idea of how to do this.
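In a Flutter project the relevant artifact is amplifyconfiguration.dart, which just wraps a JSON document, so it can also be written by hand from the pool IDs your client supplies. Below is a rough sketch of its shape; every ID, region, and bucket value is a placeholder, and you should compare against a CLI-generated project for the exact keys:

    {
      "auth": {
        "plugins": {
          "awsCognitoAuthPlugin": {
            "CognitoUserPool": {
              "Default": {
                "PoolId": "us-east-1_XXXXXXXXX",
                "AppClientId": "xxxxxxxxxxxxxxxxxxxxxxxxxx",
                "Region": "us-east-1"
              }
            },
            "CredentialsProvider": {
              "CognitoIdentity": {
                "Default": {
                  "PoolId": "us-east-1:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
                  "Region": "us-east-1"
                }
              }
            }
          }
        }
      },
      "storage": {
        "plugins": {
          "awsS3StoragePlugin": {
            "bucket": "my-bucket",
            "region": "us-east-1",
            "defaultAccessLevel": "guest"
          }
        }
      }
    }

As the first question above notes, the S3 plugin still needs the CredentialsProvider (identity pool) block, since Cognito identities are what sign the S3 requests.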

API for creating Service credentials in IBM COS

I am using IBM COS for various bucket operations. While I could find various ways of programmatically performing bucket operations, I was wondering if there are any ways of programmatically (any SDK or REST APIs) creating Service credentials, as well as editing the policy for a service ID?
Yes, there are APIs available to access and manage Cloud IAM.
Go to the following API docs to review the available APIs:
IAM Identity Services API
IAM Access Groups API
IAM Policy Management API
Gaurav,
See this doc page to provision an instance of IBM Cloud Object Storage:
https://cloud.ibm.com/docs/services/cloud-object-storage/basics/developers.html#provision-an-instance-of-ibm-cloud-object-storage
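To make the first answer concrete: a COS "Service credential" is a resource key created through the Resource Controller API, after exchanging an IBM Cloud API key for an IAM bearer token. A hedged Python sketch follows; the API key, instance GUID, role, and credential name are all placeholders, and the exact request shapes should be checked against the Resource Controller and IAM Identity API docs:

    import requests

    API_KEY = "YOUR_IBM_CLOUD_API_KEY"        # needs sufficient IAM rights
    COS_INSTANCE_GUID = "YOUR_COS_INSTANCE_GUID"

    # 1. Exchange the API key for an IAM bearer token.
    token = requests.post(
        "https://iam.cloud.ibm.com/identity/token",
        data={"grant_type": "urn:ibm:params:oauth:grant-type:apikey",
              "apikey": API_KEY},
    ).json()["access_token"]

    # 2. Create the service credential ("HMAC": True also issues
    #    S3-style access/secret keys alongside the IAM API key).
    resp = requests.post(
        "https://resource-controller.cloud.ibm.com/v2/resource_keys",
        headers={"Authorization": "Bearer " + token},
        json={"name": "my-cos-credential",
              "source": COS_INSTANCE_GUID,
              "role": "Writer",
              "parameters": {"HMAC": True}},
    )
    print(resp.status_code, resp.json())

Editing policies for a service ID goes through the IAM Policy Management API in the same token-then-request pattern.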

Powershell for Google Cloud: Authenticate with a service account

I'm trying to build an automatic sync solution that uses a Google Cloud Storage bucket for storing data.
When I install the Cloud SDK it asks me to authenticate, but obviously I don't want to use my own credentials on the client's server; it should be done with a service account with specific permissions, right?
The documentation just says to authenticate with your credentials. What is the security best practice here?
Found it, it's this simple command:
gcloud auth activate-service-account --key-file=credentials.json
And it works! I can upload files with PowerShell.
The doc is here
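As a usage note: once the service account is activated, subsequent gcloud and gsutil invocations run as that account, and the Cloud Tools for PowerShell cmdlets (e.g. New-GcsObject) share the same Cloud SDK credential store. A sketch with placeholder key-file, bucket, and file names:

    gcloud auth activate-service-account --key-file=credentials.json
    gsutil cp backup.zip gs://my-backup-bucket/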

Authorizing GCE to Access GCS

I have a Django app running on a Google Compute Engine instance, and it needs to upload video files to my bucket in Google Cloud Storage. When searching for authentication methods, I found this doc. Under the "Setting the scope of service account access for instances" section, it says I need to enable Cloud Platform access in the settings when creating the VM. I wonder if this is a must, and if there's any other way I can access my Cloud Storage bucket from my apps on the Compute Engine instance, because creating a new VM and setting up the environment is very time-consuming. Any input would be greatly appreciated. Thanks in advance.
As documented on the page you linked to, to authenticate from Google Compute Engine to Google Cloud Storage, you have several options:
Use VM scopes: this must be set before creating the VM, because scopes are immutable once the VM is created. If you want read-only access, you need to add the scope devstorage.read_only (short form) or https://www.googleapis.com/auth/devstorage.read_only (full path). If you want read-write access, you should use the scope devstorage.read_write (short form) or https://www.googleapis.com/auth/devstorage.read_write (full path).
Note: there's also a feature gcloud beta compute instances set-scopes to update GCE VM scopes at runtime.
An alternative to using scopes is to use JSON authentication tokens, such as service accounts, which can be used by the Google API client libraries to connect to Google Cloud Storage.
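To make the second option concrete: from a Django app, the google-cloud-storage client library resolves Application Default Credentials automatically, either from the VM's built-in service account (via the metadata server, subject to the scopes above) or from a key file pointed to by the GOOGLE_APPLICATION_CREDENTIALS environment variable, so you don't need a new VM in the key-file case. A minimal sketch; the bucket, object, and file names are placeholders:

    # pip install google-cloud-storage
    from google.cloud import storage

    # On GCE this picks up the VM's service account; elsewhere, set
    # GOOGLE_APPLICATION_CREDENTIALS to a service-account key file.
    client = storage.Client()

    bucket = client.bucket("my-video-bucket")
    blob = bucket.blob("uploads/video.mp4")
    blob.upload_from_filename("/tmp/video.mp4")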

Google Cloud Storage authentication: restrict permissions without creating additional Google Accounts

I want to back up multiple servers to Google Cloud Storage. Each server needs access to the bucket containing its backups. A server should not have access to the bucket of any other server.
I have used Amazon S3 before and simply created one user per server in IAM and assigned a policy to the user that allows accessing a specific bucket.
On Google Cloud Storage it seems like authentication is based on Google Accounts and OAuth 2.0, so every server uses the same Google Account (mine), and the result is that every server has full access to all buckets.
How can I give each server its own access credentials (with access to its own bucket only) without the need to create a new Google Account for each server?
You can create/use a service account for each bucket and use it to authenticate access to that storage bucket from your Google Compute Engine instance. You can limit access to your bucket by changing the bucket ACLs to allow access to the service account only.
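A hedged sketch of that setup with the gcloud/gsutil tools; the project, service-account, bucket, and key-file names are all placeholders:

    # As an admin: one service account per server.
    gcloud iam service-accounts create backup-server1 \
        --display-name="backups for server1"

    # Grant it write access on that server's bucket only (bucket ACL).
    gsutil acl ch -u backup-server1@my-project.iam.gserviceaccount.com:W \
        gs://server1-backups

    # Issue a key and activate it on server1.
    gcloud iam service-accounts keys create server1-key.json \
        --iam-account=backup-server1@my-project.iam.gserviceaccount.com
    gcloud auth activate-service-account --key-file=server1-key.json

Each server then authenticates only as its own service account, so it can reach only the bucket whose ACL includes that account.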