Cloud Custodian - AWS IAM Policies - Extract KMS Blanket Allow - aws-iam-policy

I am new to Cloud Custodian and AWS, and I am trying to extract all AWS IAM policies that have a blanket allow for KMS decryption (for example, an Allow statement for kms:Decrypt, kms:*, or * with Resource "*"). The filter syntax and formatting just don't make sense to me yet. Any advice on how to construct the filters to find all these policies?
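In the meantime, a plain boto3 script can serve as a cross-check for what a Custodian policy should eventually match. To be clear, this is not Custodian's filter DSL, just a sketch under stated assumptions: it enumerates customer-managed policies (Scope="Local") and flags any default policy version with an Allow statement for kms:Decrypt (or kms:*, or *) on Resource "*". (Custodian's iam-policy resource does appear to ship a has-allow-all filter, but as I understand it that targets fully open Action: * / Resource: * policies rather than KMS-specific ones.)

# Cross-check outside Custodian: flag customer-managed IAM policies whose
# default version allows kms:Decrypt (or kms:* / *) on Resource "*".
# Sketch only: ignores NotAction/NotResource and condition keys.
import boto3

BLANKET_ACTIONS = {"*", "kms:*", "kms:decrypt"}

def as_list(value):
    # IAM policy fields may be a single value or a list; normalize to a list.
    if value is None:
        return []
    return value if isinstance(value, list) else [value]

def has_blanket_kms_decrypt(document):
    for stmt in as_list(document.get("Statement")):
        if stmt.get("Effect") != "Allow":
            continue
        actions = {a.lower() for a in as_list(stmt.get("Action"))}
        if actions & BLANKET_ACTIONS and "*" in as_list(stmt.get("Resource")):
            return True
    return False

iam = boto3.client("iam")
for page in iam.get_paginator("list_policies").paginate(Scope="Local"):
    for policy in page["Policies"]:
        version = iam.get_policy_version(
            PolicyArn=policy["Arn"], VersionId=policy["DefaultVersionId"]
        )
        if has_blanket_kms_decrypt(version["PolicyVersion"]["Document"]):
            print(policy["PolicyName"], policy["Arn"])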

Related

Can't use AWS IAM Roles with KMS Providers for MongoDB Client Side Field Level Encryption?

I am using EC2 instance profile credentials to allow the AWS EC2 instance to access other AWS services.
Recently, I implemented MongoDB Client-Side Field-Level Encryption (CSFLE), for which AWS KMS is used as the KMS provider. The MongoDB documentation for CSFLE mentions that the KMS provider should have a secret key and access key that map to an IAM user.
This means I will have to create another IAM user and then maintain those credentials separately. A simpler (and more secure) way would have been to use the DefaultCredentialsProvider from software.amazon.awssdk:auth, which could have picked up the instance profile credentials and used them to access KMS. But this does not work: MongoClient fails because KMS rejects the security token used.
Is there any reason behind not allowing this way of accessing KMS?
As with all projects, the initial implementation of CSFLE had a defined scope. This scope did not include the ability to use instance roles for credential identification.
I suggest you submit your request to https://feedback.mongodb.com/ for consideration.
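For reference, the supported path at the time is to pass a static IAM user's keys in the kmsProviders map. A minimal Java sketch under that assumption (the environment variable names and key vault namespace are illustrative, and the usual CSFLE dependencies such as mongodb-crypt still apply):

// Pass an IAM user's keys to the "aws" KMS provider explicitly.
import com.mongodb.AutoEncryptionSettings;
import com.mongodb.MongoClientSettings;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import java.util.HashMap;
import java.util.Map;

public class CsfleKmsExample {
    public static void main(String[] args) {
        Map<String, Object> awsCreds = new HashMap<>();
        awsCreds.put("accessKeyId", System.getenv("CSFLE_AWS_ACCESS_KEY_ID"));
        awsCreds.put("secretAccessKey", System.getenv("CSFLE_AWS_SECRET_ACCESS_KEY"));

        Map<String, Map<String, Object>> kmsProviders = new HashMap<>();
        kmsProviders.put("aws", awsCreds);

        AutoEncryptionSettings encryptionSettings = AutoEncryptionSettings.builder()
                .keyVaultNamespace("encryption.__keyVault") // illustrative namespace
                .kmsProviders(kmsProviders)
                .build();

        MongoClientSettings settings = MongoClientSettings.builder()
                .autoEncryptionSettings(encryptionSettings)
                .build();

        try (MongoClient client = MongoClients.create(settings)) {
            // Use the client; encrypted fields are handled transparently.
        }
    }
}

Note that this map carries only the two static keys, which lines up with the answer above: temporary instance-profile credentials come with a session token that this path did not support.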

Edit SQL file to secure credentials during deployment of project in Azure DevOps

I am using an open-source tool to deploy schemas for my Snowflake warehouse. I have successfully done this for tables, views, and procedures. Currently I am facing an issue: I have to deploy Snowflake stages the same way, but stages require a URL and an Azure SAS token when you define them in your SQL file, like this:
CREATE or replace STAGE myStage
URL = 'azure://xxxxxxxxx.blob.core.windows.net/'
CREDENTIALS = ( AZURE_SAS_TOKEN = 'xxxxxxxxxxxxxxxxxxxx' )
file_format = myFileFormat;
It is not good practice to put credentials in a file that will be published to version control and accessed by others. Is there a way (or a task) in Azure DevOps to keep a template SQL file in the repo, substitute the real values before compilation and execution (perhaps via Azure Key Vault), and then change it back to the template, so the credentials and token always remain secure?
Have you considered using a STORAGE INTEGRATION instead? If you create a storage integration and grant it access to your Blob storage, you can create STAGE objects without passing any credentials at all.
https://docs.snowflake.net/manuals/sql-reference/sql/create-storage-integration.html
For this issue, you can use credential-less stages to secure your cloud storage without sharing secrets.
I agree with Mike here: storage integrations, a new object type, allow a Snowflake administrator to create a trust policy between Snowflake and the cloud provider. When Snowflake connects to the organization's cloud storage, the cloud provider authenticates and authorizes access through this trust policy.
Storage integrations and credential-less external stages put the power of connecting to storage in a secure and manageable way into the administrator's hands. This functionality is now generally available in Snowflake.
For details, please refer to this document. In addition, you can also use Azure Key Vault, which provides a secure place for storing and accessing secrets.
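A sketch of what that looks like, based on the CREATE STORAGE INTEGRATION documentation linked above (the tenant ID and container URL are placeholders):

-- One-time setup by an account admin: a trust policy to your Azure tenant.
CREATE STORAGE INTEGRATION azure_int
  TYPE = EXTERNAL_STAGE
  STORAGE_PROVIDER = AZURE
  ENABLED = TRUE
  AZURE_TENANT_ID = '<your-azure-tenant-id>'
  STORAGE_ALLOWED_LOCATIONS = ('azure://xxxxxxxxx.blob.core.windows.net/');

-- The stage references the integration; no SAS token in the SQL file.
CREATE OR REPLACE STAGE myStage
  URL = 'azure://xxxxxxxxx.blob.core.windows.net/'
  STORAGE_INTEGRATION = azure_int
  FILE_FORMAT = myFileFormat;

After creating the integration, DESC STORAGE INTEGRATION azure_int should return the Azure consent URL and the multi-tenant app name to which you grant access on the storage account.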

Encrypt Mongodb with Google Cloud Key Management Service

Is it possible to use Google KMS with a MongoDB server on Ubuntu 18.04 (GCP) to encrypt data at rest? What are the requirements, and how is it done? I want to use MongoDB's encryption feature for additional security.
The documentation mentions the KMIP protocol; does Google provide such a service?
PS: I have installed MongoDB Enterprise edition on my server, along with other services such as the backend.
From your comment, and assuming your question is about how to use the KMS integration with MongoDB:
For a start, it is possible to use KMS with MongoDB. Google even provides an out-of-the-box solution, MongoDB Atlas, which integrates with KMS via the Marketplace.
However, this integration is not available on Atlas M0, M2 and M5.
You can follow the same link for details on how to use the integration. If you have any specific question on this integration, please edit your question to include it.
Data on GCP is always encrypted at rest. You can optionally use your own KMS keys to encrypt the disks.
gcloud compute disks create encrypted-disk \
--kms-key projects/[KMS_PROJECT_ID]/locations/[REGION]/keyRings/[KEY_RING]/cryptoKeys/[KEY]
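If you don't have a key yet, one can be created first with the KMS commands (the names and location below are placeholders; note that the Compute Engine service agent also needs permission to use the key, e.g. the Cloud KMS CryptoKey Encrypter/Decrypter role):

gcloud kms keyrings create my-keyring --location us-central1
gcloud kms keys create my-key --keyring my-keyring \
  --location us-central1 --purpose encryption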

API for creating Service credentials in IBM COS

I am using IBM COS for various bucket operations. While I could find various ways of programmatically performing bucket operations, I was wondering if there are any ways of programmatically (via any SDK or REST API) creating service credentials, as well as editing the policy for a service ID?
Yes, there are APIs available to access and manage Cloud IAM.
See the following API docs to review what is available:
IAM Identity Services API
IAM Access Groups API
IAM Policy Management API
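As a hedged sketch of how these fit together: the token exchange below uses the IAM Identity Services API, while creating the credential itself goes through the Resource Controller API (not listed above). The instance GUID, key name, and role are placeholders, so check the exact payload fields against the docs:

# 1) Exchange an API key for an IAM bearer token (requires curl and jq).
TOKEN=$(curl -s -X POST "https://iam.cloud.ibm.com/identity/token" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "grant_type=urn:ibm:params:oauth:grant-type:apikey&apikey=$IBMCLOUD_API_KEY" \
  | jq -r .access_token)

# 2) Create a service credential (a "resource key") for a COS instance.
#    Add "parameters": {"HMAC": true} if you need HMAC-style COS keys.
curl -X POST "https://resource-controller.cloud.ibm.com/v2/resource_keys" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"name": "my-cos-credential", "source": "<cos-instance-guid>", "role": "Writer"}'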
Gaurav,
See this doc page to provision an instance of IBM Cloud Object Storage
https://cloud.ibm.com/docs/services/cloud-object-storage/basics/developers.html#provision-an-instance-of-ibm-cloud-object-storage

Using AWS KMS and/or credstash with non AWS server

Is it possible to use AWS KMS and a tool like credstash without the use of EC2 or equivalent, or does it rely solely on IAM roles?
I've got a server elsewhere where I am testing some things out, and ultimately I will be looking at migrating an app to EC2 etc. to make use of scaling. But for now, while I'm setting up my deployment pipeline, I wondered if it is still possible to make use of KMS on my non-AWS-provisioned server.
The only possible way I can think of is by installing the AWS CLI tools on the server in question. Does this sound like the right approach?
What @Viccari said in the comments is correct. In terms of what you want to do (store passwords), the AWS Parameter Store would be a good fit for you. See https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-paramstore.html for more information. The guide explicitly calls out your use case:
Parameter Store offers the following benefits and features.
Use a secure, scalable, hosted secrets management service (No servers to manage).
In the end, whether you use Parameter Store or KMS, you will need some sort of credentials stored somewhere in order to obtain an AWS STS token with which to call the underlying AWS services. When working outside of AWS EC2, those are the AWS access key and AWS secret key of an IAM user. Inside EC2, the IAM instance role transparently provides the credentials used to call those AWS services; the AWS SDK does this for you behind the scenes.
But, as you state, you don't want to run this in EC2 (to save money, or for other reasons). The quickest way to store these credentials is in an untracked file (added to your .gitignore) that you can source as environment variables, which your program will then read. This allows you to do local testing and to run the same code in EC2 with zero changes. See https://docs.aws.amazon.com/cli/latest/userguide/cli-environment.html for which variables to set. Note that this doc covers the CLI; the SDKs follow the same behavior.
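A minimal sketch of that pattern, with placeholder names and values (put-parameter / get-parameter are standard AWS CLI calls; SecureString parameters are encrypted with KMS):

# env.sh -- untracked (add to .gitignore); source it before running your app.
export AWS_ACCESS_KEY_ID="AKIA..."        # the IAM user's access key
export AWS_SECRET_ACCESS_KEY="..."        # the IAM user's secret key
export AWS_DEFAULT_REGION="us-east-1"

# Then, from any server with the AWS CLI installed:
source env.sh
# Store a secret, encrypted with your KMS key.
aws ssm put-parameter --name /myapp/db-password --type SecureString \
  --key-id alias/myapp --value 'example-password'
# Read it back, decrypted.
aws ssm get-parameter --name /myapp/db-password --with-decryption \
  --query Parameter.Value --output text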