Can values passed in as parameters be retrieved from CloudFormation for other uses? - powershell

I have Windows user account credentials passed in as parameters in a CloudFormation template. Using SSM/EC2Config, I will need to execute commands on the instances associated with this template, but since only one specific Windows user account has been granted access to the resources I need, I have to specify these same credentials when I execute my PowerShell commands via SSM (just running as Administrator will not have the proper access).
The commands will be run later, not at instance launch. Is there any way for me to grab these credentials from CloudFormation? Or any other way to achieve this or something similar?

As long as the parameters in question do not have the NoEcho property explicitly set to true (it defaults to false), you can retrieve the parameter values using the describe-stacks call from any of the various tools (e.g. the AWS API, CLI, or SDK of your choice). If NoEcho is set to true, you won't be able to retrieve those parameter values.
To run the call, you will need to either run it from an instance with an IAM role / instance profile that has permission to call describe-stacks, or configure the tool with AWS security credentials (i.e. an Access Key ID and Secret Access Key) that have that permission.
AWS CLI examples:
aws cloudformation describe-stacks --region <region> --stack-name <stack-name>
By default, you'll notice the parameters are embedded in a JSON response, along with a bunch of other information about the stack. To make this more useful in scripting, you can use a JMESPath query to narrow the data returned down to just the parameter's value:
aws cloudformation describe-stacks --region <region> --stack-name <stack-name> --query 'Stacks[*].Parameters[?ParameterKey == `<parameter-name>`].ParameterValue' --output text
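If you need this from code rather than the CLI, the same call is available through the SDKs. A minimal sketch using boto3 (the stack name, parameter key, and region below are placeholders, not values from the question):
import boto3

def get_stack_parameter(stack_name, parameter_key, region="us-east-1"):
    """Look up one parameter value from a stack via describe_stacks."""
    cfn = boto3.client("cloudformation", region_name=region)
    stack = cfn.describe_stacks(StackName=stack_name)["Stacks"][0]
    for param in stack.get("Parameters", []):
        if param["ParameterKey"] == parameter_key:
            return param["ParameterValue"]
    return None

# Hypothetical usage (NoEcho parameters come back masked as asterisks)
value = get_stack_parameter("my-stack", "WindowsAdminUser")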


Can't connect to GCS bucket from Python despite being logged in

I have a GCS bucket set up that contains data that I want to access remotely. As per the instructions, I have logged in via gcloud auth login, and have confirmed that I have an active, credentialed account via gcloud auth list. However, when I try to access my bucket (using the Python google.cloud.storage API), I get the following:
HttpError: Anonymous caller does not have storage.objects.list access to <my-bucket-name>.
I'm not sure why it is being accessed anonymously, since I am clearly logged in. Is there something obvious I am missing?
The Python GCP library (and others) uses a different authentication mechanism from the gcloud command.
Follow this guide to set up your environment and have access to GCS with Python.
gcloud auth login sets up the gcloud command-line tool with your credentials.
However, when executing code, the way forward is to have a Service Account. Once the environment variable GOOGLE_APPLICATION_CREDENTIALS has been set, Python will use the Service Account credentials.
Edit
import os
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "path_to_your_.json_credential_file"
Edit
And then, to download gs://my_bucket/my_file.csv to a local file (from the python-docs-samples):
from google.cloud import storage

def download_blob(bucket_name, source_blob_name, destination_file_name):
    """Downloads a blob from the bucket."""
    storage_client = storage.Client()
    bucket = storage_client.get_bucket(bucket_name)
    blob = bucket.blob(source_blob_name)
    blob.download_to_filename(destination_file_name)
    print('Blob {} downloaded to {}.'.format(
        source_blob_name, destination_file_name))

download_blob('my_bucket', 'my_file.csv', 'local/path/to/file.csv')
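If you prefer not to rely on the environment variable, the client can also be pointed at the key file directly. A minimal sketch, assuming a reasonably recent google-cloud-storage and a placeholder key path and bucket name:
from google.cloud import storage

# Build a client from an explicit service-account key instead of GOOGLE_APPLICATION_CREDENTIALS
client = storage.Client.from_service_account_json("path_to_your_.json_credential_file")
for blob in client.list_blobs("my_bucket"):
    print(blob.name)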

How to pass a role to cli command "aws cloudformation deploy" or "sam deploy"?

I am creating a CloudFormation stack using a SAM template and the CLI. I have successfully done this using an account that gets all the required permissions from policies attached directly to it. It's poor security practice to give this account all these permissions, so I've created a role with the same policies attached and want to use that for deployment instead. However, even though I pass my role through the --role-arn parameter, the command still looks to the account for the permissions.
Here are the commands I've tried using:
aws cloudformation deploy --template-file TemplatePackaged.yaml --stack-name TestStack --capabilities CAPABILITY_IAM --region us-east-1 --role-arn arn:aws:iam::666488004797:role/LambdaApplicationCreateRole
or
sam deploy --template-file TemplatePackaged.yaml --stack-name TestStack --capabilities CAPABILITY_IAM --region us-east-1 --role-arn arn:aws:iam::666488004797:role/LambdaApplicationCreateRole
Unless the user logged into the CLI has the required permissions, I get this error with either command:
An error occurred (AccessDenied) when calling the DescribeStacks
operation: User: arn:aws:iam::666488004797:user/DummyUser1 is not
authorized to perform: cloudformation:DescribeStacks on resource:
arn:aws:cloudformation:us-east-1:666488004797:stack/Melissa/*
How do I get the deploy command to use the role passed in the --role-arn parameter to get the permissions it needs?
After a lot of reading and trial and error I found that Manoj's answer is correct, but the tricky part is the argument that one needs to pass as xyz in his answer. Here is what I had to do in order to pass a role:
I had to configure the role that I wanted to pass as a profile in the AWS CLI's config file. The --profile parameter that Manoj mentioned only works with profiles configured in this file (to the best of my knowledge). The way to configure a role as a profile is with the command:
aws configure --profile arbitraryName
What follows --profile is just a label that you will use to refer to your role when you want to pass it; you can give it any name, but ideally you would name it after the role it will hold. Running this command will prompt you for a few fields. Roles don't have an access_key or secret_access_key, so just hit Enter to skip those, as well as the region and output; you don't need them for your role. Next, set the fields that roles actually need with these commands:
aws configure set profile.arbitraryName.role_arn roleArn
aws configure set profile.arbitraryName.source_profile cliProfile
Here roleArn is the ARN of the role you are configuring into the CLI, and cliProfile is a user already configured in the CLI that has rights to assume the role. Once this is done, whenever you want to pass the configured role to a command, just add --profile arbitraryName as the last parameter, and the command will use the permissions of the role that was passed.
*Interesting to know: passing a role this way performs an implicit aws sts assume-role. If you look inside your .aws folder you will see a folder named cli, which contains a JSON file with the temporary credentials that are created when the role is assumed.
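For reference, that implicit assume-role is roughly what the following boto3 sketch does explicitly (the role ARN, session name, and stack name are placeholders):
import boto3

# Assume the role ourselves, as the CLI profile does behind the scenes
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/LambdaApplicationCreateRole",
    RoleSessionName="cli-deploy-example",
)["Credentials"]

# Use the temporary credentials for CloudFormation calls
cfn = boto3.client(
    "cloudformation",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(cfn.describe_stacks(StackName="TestStack"))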
I had to do a lot of reading to figure this out, I hope this answer will save someone else some time.
There could be multiple approaches.
Assume the role and use a profile for deploying the CloudFormation template:
aws cloudformation deploy --template-file TemplatePackaged.yaml --stack-name TestStack --profile xyz
Launch an EC2 instance with an instance profile that has access to CloudFormation; then you don't have to explicitly specify a role ARN or profile details:
aws cloudformation deploy --template-file TemplatePackaged.yaml --stack-name TestStack

Recovering access after initially provisioning wrong scopes for an instance

I recently created a VM, but mistakenly gave the default service account Storage: Read Only permissions instead of the intended Read Write under "Identity & API access", so GCS write operations from the VM are now failing.
I realized my mistake, so following the advice in this answer, I stopped the VM, changed the scope to Read Write and started the VM. However, when I SSH in, I'm still getting 403 errors when trying to create buckets.
$ gsutil mb gs://some-random-bucket
Creating gs://some-random-bucket/...
AccessDeniedException: 403 Insufficient OAuth2 scope to perform this operation.
Acceptable scopes: https://www.googleapis.com/auth/cloud-platform
How can I fix this? I'm using the default service account, and don't have the IAM permissions to be able to create new ones.
$ gcloud auth list
Credentialed Accounts
ACTIVE ACCOUNT
* (projectnum)-compute@developer.gserviceaccount.com
I suggest you try adding the scope "cloud-platform" to the instance by running the gcloud command below:
gcloud alpha compute instances set-scopes INSTANCE_NAME [--zone=ZONE]
[--scopes=[SCOPE,…]] [--service-account=SERVICE_ACCOUNT]
As the scope, put "https://www.googleapis.com/auth/cloud-platform", since it gives full access to all Google Cloud Platform resources.
Here is the gcloud documentation.
Try creating the Google Cloud Storage bucket with your user account.
Type gcloud auth login and open the link you are provided; once there, copy the code and paste it into the command line.
Then do gsutil mb gs://bucket-name.
The security model has two things at play, API scopes and IAM permissions, and access is determined by the AND of them. So you need an acceptable scope and sufficient IAM privileges in order to perform a given action.
API scopes are bound to the credentials. They are represented by a URL such as https://www.googleapis.com/auth/cloud-platform.
IAM permissions are bound to the identity. These are setup in the Cloud Console's IAM & admin > IAM section.
This means you can have 2 VMs with the default service account but both have different levels of access.
For simplicity you generally want to just set the IAM permissions and use the cloud-platform API auth scope.
To check if you have this setup go to the VM in cloud console and you'll see something like:
Cloud API access scopes
Allow full access to all Cloud APIs
When you SSH into the VM, gcloud will by default be logged in as the VM's service account. I'd discourage logging in as yourself; otherwise you more or less break gcloud's configuration for using the default service account.
Once you have this setup you should be able to use gsutil properly.
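You can also check the scopes from inside the VM itself by querying the metadata server. A minimal Python sketch, assuming the requests package is installed on the instance:
import requests

# The metadata server lists the OAuth scopes granted to the VM's service account
resp = requests.get(
    "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/scopes",
    headers={"Metadata-Flavor": "Google"},
)
print(resp.text)  # should include .../auth/cloud-platform when full access is enabled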

Adding roles to service accounts on Google Cloud Platform using REST API

I want to create a service account on GCP using a python script calling the REST API and then give it specific roles - ideally some of these, such as roles/logging.logWriter.
First I make a request to create the account which works fine and I can see the account in Console/IAM.
Second, I want to give it the role, and this seems like the right method. However, it is not accepting roles/logging.logWriter, saying HttpError 400, "Role roles/logging.logWriter is not supported for this resource."
Conversely, if I set the desired policy in console, then try the getIamPolicy method (using the gcloud tool), all I get back is response etag: ACAB, no mention of the actual role I set. Hence I think these roles refer to different things.
Any idea how to go about scripting a role/scope for a service account using the API?
You can grant permissions to a GCP service account in a GCP project without having to rewrite the entire project policy!
Use the gcloud projects add-iam-policy-binding ... command for that (docs).
For example, given the environment variables GCP_PROJECT_ID and GCP_SVC_ACC the following command grants all privileges in the container.admin role to the chosen service account:
gcloud projects add-iam-policy-binding ${GCP_PROJECT_ID} \
--member=serviceAccount:${GCP_SVC_ACC} \
--role=roles/container.admin
To review what you've done:
$ gcloud projects get-iam-policy $GCP_PROJECT_ID \
--flatten="bindings[].members" \
--format='table(bindings.role)' \
--filter="bindings.members:${GCP_SVC_ACC}"
Output:
ROLE
roles/container.admin
(or more roles, if those were granted before)
Notes:
The environment variable GCP_SVC_ACC is expected to contain the email notation for the service account.
Kudos to this answer for the nicely formatted readout.
You appear to be trying to set a role on the service account (as a resource). That's for setting who can use the service account.
If you want to give the service account (as an identity) a particular role on the project and its resources, see this method: https://cloud.google.com/resource-manager/reference/rest/v1/projects/setIamPolicy
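If you want to keep it in Python rather than shell out to gcloud, one way is the Resource Manager API's getIamPolicy/setIamPolicy read-modify-write cycle. A minimal sketch using google-api-python-client, with a placeholder project ID and service account email, and assuming Application Default Credentials with enough rights to set the project policy:
from googleapiclient import discovery

project_id = "my-project"  # placeholder
member = "serviceAccount:my-sa@my-project.iam.gserviceaccount.com"  # placeholder

crm = discovery.build("cloudresourcemanager", "v1")

# Read the current project policy, append the new binding, and write it back
# (real code should merge the member into an existing binding for the role if one exists)
policy = crm.projects().getIamPolicy(resource=project_id, body={}).execute()
policy.setdefault("bindings", []).append(
    {"role": "roles/logging.logWriter", "members": [member]}
)
crm.projects().setIamPolicy(resource=project_id, body={"policy": policy}).execute()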

Google Cloud Platform: Logging in to GCP from commandline

I was sure it would be simple, but I couldn't find any documentation or resolution.
I'm trying to write a script using gcloud to perform some operations in my GCP instances.
Is there any way to log in/authenticate using gcloud via the command line only?
Thanks
You have a couple of options here (depending on what exactly you're trying to do).
The first option is to log in using the --no-launch-browser option. This still requires interaction from a human user, but doesn't require a browser on the machine you're using:
> gcloud auth login --no-launch-browser
Go to the following link in your browser:
https://accounts.google.com/o/oauth2/auth?redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&prompt=select_account&response_type=code&client_id=32555940559.apps.googleusercontent.com&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.email+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fappengine.admin+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcompute&access_type=offline
Enter verification code: *********************************************
Saved Application Default Credentials.
You are now logged in as [user@example.com].
Your current project is [None]. You can change this setting by running:
$ gcloud config set project PROJECT_ID
The non-interactive option involves service accounts. The linked documentation explains them better than I can, but the short version of what you need to do is as follows:
Create a service account in the Google Developers Console. Make sure it has the appropriate "scopes" (these are permissions that determine what this service account can do). Download the corresponding JSON key file.
Run gcloud auth activate-service-account --key-file <path to key file>.
Note that Google Compute Engine VMs come with a slightly different service account; the difference is described here.
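If the script eventually uses a client library instead of (or alongside) gcloud commands, the same key file works non-interactively there too. A minimal Python sketch, with a placeholder key path and project, assuming google-auth and google-cloud-storage are installed:
from google.oauth2 import service_account
from google.cloud import storage

# Load the service-account key downloaded from the console (path is a placeholder)
credentials = service_account.Credentials.from_service_account_file(
    "path/to/key.json",
    scopes=["https://www.googleapis.com/auth/cloud-platform"],
)
client = storage.Client(project="my-project", credentials=credentials)
print([bucket.name for bucket in client.list_buckets()])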