Store Access Key ID and Secret Access Key to disk using golang aws sdk - aws-sdk-go

I am trying to persist aws credentials to disk using the AWS SDK for GO. I want these credentials to get stored in the standard file used in the credential chain.
The credentials themselves are user-supplied. They enter their Access Key ID and Secret Access Key. Those are what I want to persist to disk.
I could create this file myself, but I would prefer to let the SDK do it. The SDK knows the rules of the credential chain better than I do, and whenever those rules change, the SDK will handle them for me.

The format is basic:
[default]
aws_access_key_id = ...
aws_secret_access_key = ...
On Windows the file is %USERPROFILE%\.aws\credentials
On Linux it is $HOME/.aws/credentials
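In case the SDK turns out not to expose a writer for this file, here is a minimal sketch of writing it by hand in Go, assuming the format and default paths above; the writeSharedCredentials helper, the hard-coded default profile, and the placeholder values are illustrative, not part of the SDK:

package main

import (
    "fmt"
    "os"
    "path/filepath"
)

// writeSharedCredentials writes a [default] profile to the shared
// credentials file in the standard per-OS location described above.
func writeSharedCredentials(accessKeyID, secretAccessKey string) error {
    home, err := os.UserHomeDir() // %USERPROFILE% on Windows, $HOME on Linux
    if err != nil {
        return err
    }
    dir := filepath.Join(home, ".aws")
    if err := os.MkdirAll(dir, 0o700); err != nil {
        return err
    }
    body := fmt.Sprintf("[default]\naws_access_key_id = %s\naws_secret_access_key = %s\n",
        accessKeyID, secretAccessKey)
    return os.WriteFile(filepath.Join(dir, "credentials"), []byte(body), 0o600)
}

func main() {
    // The user-supplied values would come from your own input handling.
    if err := writeSharedCredentials("<access key id>", "<secret access key>"); err != nil {
        panic(err)
    }
}

A fuller version might also honor the AWS_SHARED_CREDENTIALS_FILE environment variable, which the credential chain uses to override the default file location.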

Related

IBM App ID: What are the parameters to configure custom encryption during provisioning?

I know how to create an App ID instance using the IBM Cloud browser UI, via the CLI, and even with Terraform. But what are the parameters for Terraform (and the Resource Controller API) to specify that a root key from my Key Protect instance should be used for encryption?
It seems that a parameter for the KMS instance and one for the root key are required. But what are their names?
I created an App ID instance with custom encryption in the browser UI, then retrieved the details using the CLI with --output JSON. That output shows the parameters that need to be passed on the CLI / API / Terraform:
kms_info: a JSON object with the KMS (Key Protect or Hyper Protect Crypto Services) ID and a url field.
tek_id: the actual CRN of the crypto key.
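To illustrate the shape, the provisioning parameters would look roughly like this (the exact field names inside kms_info are an assumption based on the description above, and all values are placeholders):
{
  "kms_info": { "id": "<Key Protect / HPCS instance ID>", "url": "<KMS endpoint URL>" },
  "tek_id": "<CRN of the root key>"
}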

Edit SQL file to secure credentials during deployment of project in Azure DevOps

I am using an open source tool for deploying the schema for my Snowflake warehouse. I have successfully done it for tables, views and procedures. Currently I'm facing an issue: I have to deploy Snowflake stages the same way, but a stage requires a URL and an Azure SAS token when you define it in your SQL file, like this:
CREATE or replace STAGE myStage
URL = 'azure://xxxxxxxxx.blob.core.windows.net/'
CREDENTIALS = ( AZURE_SAS_TOKEN = 'xxxxxxxxxxxxxxxxxxxx' )
file_format = myFileFormat;
Since it is not encouraged to put credentials in a file that will be published to version control and accessed by others: is there a way/task in Azure DevOps so I can keep a template SQL file in the repo, substitute the real values before compilation and execution (maybe via Azure Key Vault), and change it back to the template afterwards? That way the credentials and token always remain secure.
Have you considered using a STORAGE INTEGRATION instead? If you use the storage integration credentials and grant them access to your Blob storage, then you'd be able to create STAGE objects without passing any credentials at all.
https://docs.snowflake.net/manuals/sql-reference/sql/create-storage-integration.html
For this issue, you can use credential-less stages to secure your cloud storage without sharing secrets.
I agree with Mike here: storage integrations, a new object type, allow a Snowflake administrator to create a trust policy between Snowflake and the cloud provider. When Snowflake connects to the organization's cloud storage, the cloud provider authenticates and authorizes access through this trust policy.
Storage integrations and credential-less external stages put into the administrator's hands the power of connecting to storage in a secure and manageable way. This functionality is now generally available in Snowflake.
For details, please refer to this document. In addition, you can also use Azure Key Vault, which provides a secure place for accessing and storing secrets.

How to make Keycloak use its database password from keyvault instead of env file

Currently I am using Keycloak on a Postgres DB, and the DB credentials are provided via environment files. I wanted to know how I can make Keycloak obtain the DB credentials from a key vault, something like Azure Key Vault. Is there any documentation / guideline around it?
As per the official documentation, part of this is already in place, but it looks like it is still a work in progress:
To use a vault, a vault provider must be registered within Keycloak.
It is possible to either use a built-in provider described below or
implement your own provider. See the Server Developer Guide for more
information.
To obtain a secret from a vault instead of entering it directly, enter the following specially crafted string into the appropriate field: ${vault.entry-name} where you replace the entry-name with the name of the secret as recognized by the vault.
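For example, if the vault entry for the database password were named keycloak_db_password (a hypothetical entry name), the database password field would contain ${vault.keycloak_db_password} instead of the literal secret.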
https://www.keycloak.org/docs/latest/server_admin/#_vault-administration
https://issues.redhat.com/browse/KEYCLOAK-3205

Google Cloud Storage 500 Internal Server Error 'Google::Cloud::Storage::SignedUrlUnavailable'

Trying to get Google Cloud Storage working on my app. I successfully saved an image to a bucket, but when trying to retrieve the image, I receive this error:
GCS Storage (615.3ms) Generated URL for file at key: 9A95rZATRKNpGbMNDbu7RqJx ()
Completed 500 Internal Server Error in 618ms (ActiveRecord: 0.2ms)
Google::Cloud::Storage::SignedUrlUnavailable (Google::Cloud::Storage::SignedUrlUnavailable):
Any idea of what's going on? I can't find an explanation for this error in their documentation.
To provide some explanation here...
Google App Engine (as well as Google Compute Engine, Kubernetes Engine, and Cloud Run) provides "ambient" credentials associated with the VM or instance being run, but only in the form of OAuth tokens. For most API calls, this is sufficient and convenient.
However, there are a small number of exceptions, and Google Cloud Storage is one of them. Recent Storage clients (including the google-cloud-storage gem) may require a full service account key to support certain calls that involve signed URLs. This full key is not provided automatically by App Engine (or other hosting environments). You need to provide one yourself. So as a previous answer indicated, if you're using Cloud Storage, you may not be able to depend on the "ambient" credentials. Instead, you should create a service account, download a service account key, and make it available to your app (for example, via the ActiveStorage configs, or by setting the GOOGLE_APPLICATION_CREDENTIALS environment variable).
I was able to figure this out. I had been following Rails' guide on Active Storage with Google Cloud Storage, and was unclear on how to generate my credentials file.
google:
  service: GCS
  credentials: <%= Rails.root.join("path/to/keyfile.json") %>
  project: ""
  bucket: ""
Initially, I thought I didn't need a keyfile due to this sentence in Google's Cloud Storage authentication documentation:
If you're running your application on Google App Engine or Google
Compute Engine, the environment already provides a service account's
authentication information, so no further setup is required.
(I am using Google App Engine)
So I commented out the credentials line and started testing. Strangely, I was able to write to Google Cloud Storage without issue. However, when retrieving the image I would receive the 500 server error Google::Cloud::Storage::SignedUrlUnavailable.
I fixed this by generating my private key and adding it to my rails app.
Another possible solution, as of google-cloud-storage gem version 1.27 (August 2020), is documented here. In my case Google::Auth.get_application_default, as shown in the documentation, returned an empty object, but using Google::Cloud::Storage::Credentials.default.client instead worked.
If you get a Google::Apis::ClientError: badRequest: Request contains an invalid argument response when signing, check that you have a dash as the project name in the signing URL (i.e. projects/-/serviceAccounts; an explicit project name in the path is deprecated and no longer valid) and that the "issuer" string is correct: it must be the full email address identifier of the service account, not just the service account name.
If you get Google::Apis::ClientError: forbidden: The caller does not have permission, verify the roles your Service Account has:
gcloud projects get-iam-policy <project-name> \
  --filter="bindings.members:<sa_name>" \
  --flatten="bindings[].members" --format='table(bindings.role)'
=> ROLE
roles/iam.serviceAccountTokenCreator
roles/storage.admin
serviceAccountTokenCreator is required to call the signBlob service, and you need storage.admin to have ownership of the thing you need to sign. I think these are project-global rights; I couldn't get it to work with more fine-grained permissions, unfortunately (i.e. one app being admin for a certain Storage bucket).

How can an external app access IBM Cloud Object Storage

I have an IBM COS service and am able to use a curl command via the CLI to retrieve objects, using IAM tokens. But how do I let an external web app (e.g., Node) access this service?
What value should go in the authorization for external app access?
External apps come in the form of something like the AWS CLI, or any other app that uses either an HTTP library coupled with the IBM Cloud Object Storage API, or an SDK for languages like Python, Java or Node.js.
All of the above will ask you for an access key and a secret key.
You can get both of them from the IBM Cloud console by generating new HMAC Credentials [1]:
Navigate to your Cloud Object Storage account
Click Service credentials
Click the New credentials button on the right
In the "Add Inline Configuration Parameters (Optional)" text box, enter the following JSON:
{"HMAC":true}
[1] https://console.bluemix.net/docs/services/cloud-object-storage/iam/service-credentials.html#service-credentials
Well, you could use the ibm-cos-sdk Node library https://www.npmjs.com/package/ibm-cos-sdk. You'll need to use your HMAC credentials.
var ibm = require('ibm-cos-sdk');

var config = {
    endpoint: '<endpoint>',
    ibmAuthEndpoint: 'https://iam.ng.bluemix.net/oidc/token',
    serviceInstanceId: '<resource-instance-id>',
    accessKeyId: '<HMAC access_key>',
    secretAccessKey: '<HMAC secret access key>'
};

// S3-compatible client built from the HMAC credentials above
var cos = new ibm.S3(config);