Where is PowerShell for AWS getting its credentials from? - powershell

I recently installed the AWS .NET SDK, which came with the AWS Tools for Windows PowerShell.
I went ahead and added an IAM user and generated a key pair, then installed it into the SDK Store:
Set-AWSCredentials -AccessKey AAAAAAAAAAAAAA -SecretKey AAAAAAAAAA/AAAA -StoreAs default
I then tested my credentials by making a request that I knew I didn't have access to:
Get-EC2Instance
... and was surprised when it printed out three EC2 instances. Instances I don't own! I tried this as well:
Get-EC2Instance -Profile default
Which produced the desired result: insufficient access. To continue testing, I added EC2FullAccess to my user and repeated the last line. It correctly printed my personal-use EC2 instance:
GroupNames : {}
Groups : {}
Instances : {aws_personal}
OwnerId : 835586800000
RequesterId :
ReservationId : r-0e625fd77d0000000
However, whenever I run a command without -Profile default, I am accessing another account. Without going into too much detail, I disabled my access to that account in the AWS dashboard. Now commands produce this output:
Get-EC2Instance : AWS was not able to validate the provided access credentials
At line:1 char:1
+ Get-EC2Instance
I do not have a .AWS directory in my %UserProfile%. Searching my computer for .aws or credentials fails to find a credential file which would explain this.

I can't explain why you are seeing different behavior between specifying the -ProfileName parameter and not, but I can shed light on where credentials are coming from.
The PowerShell tools can read from two credential locations (as well as environment variables and EC2 instance metadata when running on an EC2 instance).
Firstly, there is the encrypted SDK credential store file, which is located at C:\Users\userid\AppData\Local\AWSToolkit\RegisteredAccounts.json - this one is shared between the PowerShell tools, the AWS SDK for .NET and the AWS Toolkit for Visual Studio. The tools can also read from the ini-format shared credentials file (shared with the AWS CLI and other AWS SDKs). Note that although the shared credentials file can be moved between accounts and machines, the encrypted SDK file can be used only by the owning user and only on that single machine.
The PowerShell tools currently only write to one store though - the encrypted file used by the .NET tools exclusively. So when you set up credentials and used the -StoreAs option, the profile would have been written to the RegisteredAccounts.json file. If you open this file in a text editor you should see your profile named 'default' along with two encrypted blobs that are your access and secret keys.
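If you want to confirm this on your machine, here is a quick sketch (the path is just the default location mentioned above; the listing switch belongs to the classic AWSPowerShell module, and newer versions expose Get-AWSCredential -ListProfileDetail instead):
# List the profile names registered in the encrypted SDK store
Get-AWSCredentials -ListStoredCredentials
# Peek at the store itself - the keys appear only as encrypted blobs
Get-Content "$env:LOCALAPPDATA\AWSToolkit\RegisteredAccounts.json"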
When a profile name is given with a command, the tools look for a profile with that name first in RegisteredAccounts.json and, if it's not found there, they attempt to read it from the ini-format file at %USERPROFILE%\.aws\credentials (to bypass the encrypted store, you can use the -ProfilesLocation parameter to point at the ini-format file you want to load credentials from, if it's not at its default location under your user profile).
If no profile name is given, the tools probe to find the closest set of credentials - the search 'path' is described in a blog post at https://blogs.aws.amazon.com/net/post/Tx2HQ4JRYLO7OC4/. Where you see references to loading a profile, remember that the tools check for the profile first in RegisteredAccounts.json and then in the shared credentials file.
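For example (a minimal sketch using the parameters described above; the file path is just the default location), you can take the search order out of the picture by being explicit about both the profile and the file it comes from:
# Explicitly select the 'default' profile from the shared ini-format file,
# bypassing the encrypted SDK store and the credential search order
Get-EC2Instance -ProfileName default -ProfilesLocation "$env:USERPROFILE\.aws\credentials"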
Hope this helps you track down where the tools are finding credentials.

Related

How to log into AWS CLI with a federated SAML (AzureAD) login

I'm connecting to AWS using a SAML login with AzureAD as my IdP. This is all working great and I can use the SAML response in the browser to generate a temporary session token that gives me an hour to work in the AWS CLI. However, according to this blog, I should be able to use the AWS Tools for Windows PowerShell to capture the SAML response, but all I ever get is the error Unable to set credentials: Root element is missing. All my googling leads to possible transient issues with certificates being near expiration, or to using older versions of the PowerShell module (I'm using the latest as of this writing: v4.1.13.0) or PowerShell itself (I'm using 7.1.3).
Anybody successfully got the AWS Tools for Windows PowerShell to work with AzureAD as an IdP?
I found this somewhat more recent post, which has a ton more information about this kind of setup, some detail about how to configure it, and a note about why it may not be working (as of Jan 2020).
Try using the AWSPowerShell command Use-STSRoleWithSAML (AWS docs) to generate some temporary credentials. The doc page also goes into a lot of detail on what is required for your IdP and in IAM, including links to the relevant IAM guides. For example:
# Returns a set of temporary security credentials
Use-STSRoleWithSAML `
-RoleArn 'arn:aws:iam::ACCOUNTNUMBER:role/IAMROLE' `
-PrincipalArn 'arn:aws:iam::ACCOUNTNUMBER:saml-provider/SAMLPROVIDER' `
-SAMLAssertion 'BASE64ENCODEDRESPONSE' `
-DurationInSeconds 3600
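If that call succeeds, one way to use the result is to push the temporary keys into your session (a sketch; depending on the module version the keys may be exposed directly or under a Credentials property, as assumed here):
# Capture the response and make its temporary keys the session credentials
$response = Use-STSRoleWithSAML `
    -RoleArn 'arn:aws:iam::ACCOUNTNUMBER:role/IAMROLE' `
    -PrincipalArn 'arn:aws:iam::ACCOUNTNUMBER:saml-provider/SAMLPROVIDER' `
    -SAMLAssertion 'BASE64ENCODEDRESPONSE' `
    -DurationInSeconds 3600
Set-AWSCredential -AccessKey $response.Credentials.AccessKeyId `
    -SecretKey $response.Credentials.SecretAccessKey `
    -SessionToken $response.Credentials.SessionToken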

PowerShell - automated connection to Power BI service without hardcoding a password

We have a PowerShell script to pull Power BI activity data (using Get-PowerBIActivityEvent), and I have been trying to automate it so that it can pull this data daily using an unattended account. The problem is the script must necessarily use the Connect-PowerBIServiceAccount cmdlet, which requires a credential. I don't want to have the passwords hard-coded anywhere (obviously) and ideally don't want to be passing it into the script as a plaintext parameter in case of memory leaks.
I've tried using SSIS as a scheduling mechanism since it allows for encrypted parameters in script tasks, but can't call the PS script with a SecureString parameter since the System.Management.Automation namespace isn't in the GAC (a commandline call wouldn't be possible).
I don't believe task scheduler would offer the functionality needed.
Does anyone know of any elegant ways to connect to the Power BI service using encrypted credentials?
In the docs of Connect-PowerBIServiceAccount there are 2 options for unattended sign-in:
Using -Credential, where you pass AAD client ID as username and application secret key as password
Using -CertificateThumbprint and -ApplicationId
For both options you need to configure a service principal and add the proper permissions. I'm not going into the details of how to configure that, but most probably you'd need (at least) the relevant application permissions granted to it.
I'm not really sure what functionality you need in the script, but in my experience the majority of cases can be covered by a scheduled task, so the explanation below applies to that solution.
How can you secure the credentials?
There are various possible solutions, depending on your preferences. I'd consider certificate-based authentication the more secure option (the certificate is available only to the current user or to all users of the machine).
What's important in certificate-based authentication: make sure that the certificate is available to the account running the script (in many cases that's a service account, not your user account).
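A minimal sketch of the certificate-based sign-in (the thumbprint, application ID and tenant below are placeholders, not real values):
# Sign in as the service principal with a certificate installed for the
# account that runs the scheduled task
Connect-PowerBIServiceAccount `
    -ServicePrincipal `
    -CertificateThumbprint '0123456789ABCDEF0123456789ABCDEF01234567' `
    -ApplicationId '00000000-0000-0000-0000-000000000000' `
    -Tenant 'yourtenant.onmicrosoft.com'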
How can I secure it further?
If you want, you can store the application ID as a secure string (I don't have SSIS to test with, so I'm not sure whether there's a workaround to make it work there) or use Export-CliXml. Both use the Windows Data Protection API (DPAPI), so the file can be decrypted only by the account that was used to encrypt it.
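For example, a sketch of the Export-CliXml approach (the path and tenant are placeholders; the credential's username is the application/client ID and its password is the application secret):
# One-time step, run as the account that will execute the scheduled task;
# DPAPI ties the exported secret to that account and machine
Get-Credential | Export-CliXml -Path 'C:\Secure\pbi-sp.xml'
# In the scheduled script: import the credential and sign in
$cred = Import-CliXml -Path 'C:\Secure\pbi-sp.xml'
Connect-PowerBIServiceAccount -ServicePrincipal -Credential $cred -Tenant 'yourtenant.onmicrosoft.com'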
To add one more level of security (I'm not even mentioning setting correct access rights on the files, as that's obvious) you might put the file in an encrypted folder (you may already have a disk-encryption solution, so use it if you wish).
There are probably solutions to secure the keys even better, but these should do the job. I'm using other Microsoft 365 modules with a similar approach (Outlook, SharePoint PnP) and it works quite well.
NOTE: If you need to use a user account instead of a service principal, make sure that Multi-Factor Authentication is disabled on that account for that specific application.
The relevant documentation for this (https://learn.microsoft.com/en-us/power-bi/developer/embedded/embed-service-principal) states that admin APIs (i.e. those served via Get-PowerBIActivityEvent) do not currently support service principals. This means it's not currently possible to use a registered app to run these cmdlets unattended.
There is a feature request open to provide this at the moment: https://ideas.powerbi.com/forums/265200-power-bi-ideas/suggestions/39641572-need-service-principle-support-for-admin-api

Google Cloud Storage 500 Internal Server Error 'Google::Cloud::Storage::SignedUrlUnavailable'

Trying to get Google Cloud Storage working on my app. I successfully saved an image to a bucket, but when trying to retrieve the image, I receive this error:
GCS Storage (615.3ms) Generated URL for file at key: 9A95rZATRKNpGbMNDbu7RqJx ()
Completed 500 Internal Server Error in 618ms (ActiveRecord: 0.2ms)
Google::Cloud::Storage::SignedUrlUnavailable (Google::Cloud::Storage::SignedUrlUnavailable):
Any idea of what's going on? I can't find an explanation for this error in their documentation.
To provide some explanation here...
Google App Engine (as well as Google Compute Engine, Kubernetes Engine, and Cloud Run) provides "ambient" credentials associated with the VM or instance being run, but only in the form of OAuth tokens. For most API calls, this is sufficient and convenient.
However, there are a small number of exceptions, and Google Cloud Storage is one of them. Recent Storage clients (including the google-cloud-storage gem) may require a full service account key to support certain calls that involve signed URLs. This full key is not provided automatically by App Engine (or other hosting environments). You need to provide one yourself. So as a previous answer indicated, if you're using Cloud Storage, you may not be able to depend on the "ambient" credentials. Instead, you should create a service account, download a service account key, and make it available to your app (for example, via the ActiveStorage configs, or by setting the GOOGLE_APPLICATION_CREDENTIALS environment variable).
I was able to figure this out. I had been following Rails' guide on Active Storage with Google Cloud Storage, and was unclear on how to generate my credentials file.
google:
service: GCS
credentials: <%= Rails.root.join("path/to/keyfile.json") %>
project: ""
bucket: ""
Initially, I thought I didn't need a keyfile due to this sentence in Google's Cloud Storage authentication documentation:
If you're running your application on Google App Engine or Google Compute Engine, the environment already provides a service account's authentication information, so no further setup is required.
(I am using Google App Engine)
So I commented out the credentials line and started testing. Strangely, I was able to write to Google Cloud Storage without issue. However, when retrieving the image I would receive the 500 server error Google::Cloud::Storage::SignedUrlUnavailable.
I fixed this by generating my private key and adding it to my Rails app.
Another possible solution, as of google-cloud-storage gem version 1.27 (August 2020), is documented here. Google::Auth.get_application_default, as shown in the documentation, returned an empty object for me, but using Google::Cloud::Storage::Credentials.default.client instead worked.
If you get a Google::Apis::ClientError: badRequest: Request contains an invalid argument response when signing, check that you use a dash for the project in the signing URL (i.e. projects/-/serviceAccounts; an explicit project name in the path is deprecated and no longer valid) and that you have the "issuer" string correct: the full email address of the service account, not just the service account name.
If you get Google::Apis::ClientError: forbidden: The caller does not have permission, verify the roles your service account has:
gcloud projects get-iam-policy <project-name> \
  --filter="bindings.members:<sa_name>" \
  --flatten="bindings[].members" --format='table(bindings.role)'
=> ROLE
roles/iam.serviceAccountTokenCreator
roles/storage.admin
serviceAccountTokenCreator is required to call the signBlob service, and you need storage.admin to have ownership of the object you need to sign. I think these are project-global rights; I couldn't get it to work with more fine-grained permissions, unfortunately (i.e. one app being admin for a specific Storage bucket).
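If either role is missing, granting it looks roughly like this (a sketch; the project name and service-account email are placeholders):
gcloud projects add-iam-policy-binding <project-name> \
  --member="serviceAccount:<sa_name>@<project-name>.iam.gserviceaccount.com" \
  --role="roles/iam.serviceAccountTokenCreator"
gcloud projects add-iam-policy-binding <project-name> \
  --member="serviceAccount:<sa_name>@<project-name>.iam.gserviceaccount.com" \
  --role="roles/storage.admin"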

Get-AzureSubscription -ExtendedDetails in PowerShell doesn't include certificate

I'm trying to revoke a VPN certificate using Microsoft's byzantine Azure Powershell commands, as described here: https://blogs.technet.microsoft.com/keithmayer/2014/12/09/step-by-step-revoking-and-reinstating-client-vpn-certificates-for-azure-point-to-site-vpns/. (Don't get me started on why you should need to write a 20-line script that makes a manually-constructed REST API call to do basic user management - that's a separate issue for now.)
One of the key bits is getting the appropriate management certificate. You're supposed to use this command:
$cert = (Get-AzureSubscription -SubscriptionName BizSpark -ExtendedDetails).Certificate
On some machines this works. But on my main client machine, the one I need to run it on, the Certificate property is always blank. I've tried re-importing my .publishsettings file, upgrading the Azure PowerShell cmdlets, deleting the C:\Users\user\AppData\Roaming\Windows Azure Powershell directory, and so forth, to no avail.
Any suggestions?

How to authorize clients in non-default directory to KeyVault

I created a KeyVault in my Azure subscription and a client application in one of my Azure AD directories. However, the client application is not registered in the default directory of the subscription.
When I run the following PowerShell cmdlet, it tries to look up the service principal in the default directory of the subscription and fails to find it.
PS > Set-AzureKeyVaultAccessPolicy -VaultName <vaultname> `
       -ServicePrincipalName <principal guid> -PermissionsToSecrets Get
I found an article describing how to change the default directory for a subscription in the management portal, but was wondering how to do the same using PowerShell.
The 'Select-AzureSubscription' cmdlet does not seem to support changing the default directory.
Nor does the 'Set-AzureKeyVaultAccessPolicy' support a parameter to indicate in which directory it should look.
Key Vault can only authorize applications (clients) registered in the directory associated with the Azure subscription, and the only way (currently) to change the 'home' directory associated with a subscription is through the Azure management portal.
I would imagine this is by-design behaviour, and I can't imagine how or why it would change.
An Azure subscription has an Azure Active Directory attached to it; this is the directory it will use to authenticate against whenever someone tries to access resources.
While you can create trusts to other Active Directories, simply creating a new AAD does not automatically enable that domain to be trusted by Azure.
Key Vault is designed to be accessible only to authenticated users, and it is designed to provide secure data to those users. Since there is no authentication mechanism between the multiple directories you have created, there is no mechanism for Key Vault to determine who those directories belong to.
Key Vault needs to resolve a service principal through the Active Directory attached to the subscription it is running under (whether that is directly through that directory, or through another domain that it trusts). Anything else would create additional attack vectors and considerably weaken the product.