Create policy with CloudFormation

I am able to create a policy this way with the AWS CLI:
aws iam create-policy --policy-name "alpha-policy" --policy-document file:///tmp/policy.json
The content of policy.json is the following:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Statement",
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": "*"
        }
    ]
}
I convert it into the following CloudFormation file:
Resources:
  SimplePolicy:
    Type: AWS::IAM::Policy
    Properties:
      PolicyName: OfficialSimplePolicy
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Resource: "*"
            Action:
              - cloudformation:Describe*
And the command to create the policy with CloudFormation is:
aws cloudformation create-stack --stack-name bucket-policy --template-body file://BucketPolicy.yaml --capabilities CAPABILITY_IAM
When I run this command, I only get a stack ID back. However, no policy is created. What am I missing?
I would have expected the policy to be available in the AWS console or via the following command:
aws iam list-policies
It's nowhere to be found.
I checked the event list with:
aws cloudformation describe-stack-events --stack-name bucket-policy
This reveals the error: "At least one of [Groups,Roles,Users] must be non-empty."
So my question is: why can I create a policy without a user, group, or role when using the CLI directly, but not when using CloudFormation?

The following article explains my problem: https://cloudkatha.com/iam-policy-at-least-one-of-groupsrolesusers-must-be-non-empty/
Basically, for standalone policies I should use AWS::IAM::ManagedPolicy. Also, PolicyName is not a supported field on that resource type. These two changes solved my problem.
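For reference, here is a minimal sketch of what the corrected template could look like, reusing the same Describe* action as above; ManagedPolicyName (note: not PolicyName) is optional and can be omitted to let CloudFormation generate a name:
Resources:
  SimplePolicy:
    Type: AWS::IAM::ManagedPolicy
    Properties:
      ManagedPolicyName: OfficialSimplePolicy
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Resource: "*"
            Action:
              - cloudformation:Describe*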

Related

Minimal permissions for Kubernetes backup with Kasten on AWS S3

I would like to set up a Location profile for Kasten to perform backups to an S3 bucket on AWS. The documentation recommends giving minimal permissions to the user/role designated to perform the backup, but I keep getting an error when trying to add the profile with only those permissions in my IAM policy.
https://docs.kasten.io/latest/usage/configuration.html#profile-creation
When I give full S3 access to the user, the profile is added correctly, but I don't want to do that.
The correct minimal permissions are a combination of the permissions specified on these two pages of the documentation:
https://docs.kasten.io/latest/usage/configuration.html#profile-creation
https://docs.kasten.io/latest/install/aws/using_aws_iam_roles.html#using-aws-iam-roles
Here is what your minimal-permissions policy should look like (just replace your bucket name at the end):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:GetBucketObjectLockConfiguration",
                "s3:PutObject",
                "s3:GetObject",
                "s3:GetObjectRetention",
                "s3:PutObjectRetention",
                "s3:PutBucketPolicy",
                "s3:ListBucket",
                "s3:DeleteObject",
                "s3:DeleteBucketPolicy",
                "s3:GetBucketLocation",
                "s3:GetBucketPolicy"
            ],
            "Resource": [
                "arn:aws:s3:::{BUCKET_NAME}",
                "arn:aws:s3:::{BUCKET_NAME}/*"
            ]
        }
    ]
}

Azure DevOps pipeline error: Tenant ID, application ID, principal ID, and scope are not allowed to be updated

I am trying to create a SQL Server with an ARM template on Azure DevOps.
The pipeline successfully creates the SQL Server resource in the Azure Portal, but I'm getting strange errors in Azure DevOps. Why does this occur, and how can I fix it?
ERROR:
There were errors in your deployment. Error code: DeploymentFailed.
##[error]RoleAssignmentUpdateNotPermitted: Tenant ID, application ID, principal ID, and scope are not allowed to be updated.
##[error]Check out the troubleshooting guide to see if your issue is addressed: https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/deploy/azure-resource-group-deployment?view=azure-devops#troubleshooting
##[error]Task failed while creating or updating the template deployment.
YML:
- task: AzureResourceManagerTemplateDeployment@3
  inputs:
    deploymentScope: 'Resource Group'
    azureResourceManagerConnection: 'TestRG-Conn'
    subscriptionId: '1111753a-501e-4e46-9aff-6120ed561111'
    action: 'Create Or Update Resource Group'
    resourceGroupName: 'TestRG'
    location: 'North Europe'
    templateLocation: 'Linked artifact'
    csmFile: '$(System.DefaultWorkingDirectory)/CreateSQLServer/azuredeploy.json'
    csmParametersFile: '$(System.DefaultWorkingDirectory)/CreateSQLServer/azuredeploy.parameters.json'
    deploymentMode: 'Incremental'
VARIABLE IN TEMPLATE:
"variables": {
"StorageBlobContributor": "[subscriptionResourceId('Microsoft.Authorization/roleDefinitions', '111111111111111111111-')]"
},
RESOURCE IN TEMPLATE:
"resources": [
    {
        "condition": "[parameters('enableADS')]",
        "type": "Microsoft.Storage/storageAccounts/providers/roleAssignments",
        "apiVersion": "2018-09-01-preview",
        "name": "[concat(variables('storageName'), '/Microsoft.Authorization/', variables('uniqueRoleGuid'))]",
        "dependsOn": [
            "[resourceId('Microsoft.Sql/servers', parameters('serverName'))]",
            "[resourceId('Microsoft.Storage/storageAccounts', variables('storageName'))]"
        ],
        "properties": {
            "roleDefinitionId": "[variables('StorageBlobContributor')]",
            "principalId": "[reference(resourceId('Microsoft.Sql/servers', parameters('serverName')), '2018-06-01-preview', 'Full').identity.principalId]",
            "scope": "[resourceId('Microsoft.Storage/storageAccounts', variables('storageName'))]",
            "principalType": "ServicePrincipal"
        }
    }
]
Chances are you have deployed and deleted the resources, but the role assignment is still there, and that is what it is clashing with (which is what 4c7... is saying). So, go check the permissions on the storage account. If you use managed identities, the identity will be deleted but the role assignment will persist and show the user as 'unknown', which will also cause the above error when trying to deploy again. I had the same issue, but with a managed identity I was using for an AKS cluster. Frustrating.
When you delete a managed identity, the role assignments created for it are not deleted. I wish it cleaned up properly.
In my case, it was the name of the RoleAssignment. It was unique at the Resource Group level but not at the subscription level. I am not sure what the scope for the uniqueness of the name is.
Bouncing off #Richard's answer: I didn't have permission to delete the "ghost" managed identities, so I deployed the same role assignment under a different GUID by adding an additional string to the guid() function (see the String functions for ARM templates docs).
To do this, I changed my roleNameGuid's value from
"[guid(resourceGroup().id)]" to
"[guid(resourceGroup().id, parameters('guid_seed'))]", where parameters('guid_seed') is an arbitrary string that is passed from DevOps.

Serverless deploy resource does not support attribute type Arn in Fn::GetAtt

Error: The CloudFormation template is invalid: Template error: resource <Policy in serverless.yml> does not support attribute type Arn in Fn::GetAtt
When deploying my project, I get the above error. It seems the Fn::GetAtt happens when converting to CloudFormation, as I haven't explicitly defined any usage of that function.
functions:
  myfn:
    handler: lambda/handler.my
    role: DataIamPolicy
    environment:
      DynamoTableName: "my-data"
I've previously defined my table as MyData. My policy resource looks like:
DataIamPolicy:
  Type: AWS::IAM::Policy
  DependsOn: MyData
  Properties:
    PolicyName: "my-data-dynamodb-policy"
    PolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: "Allow"
          Action:
            - "dynamodb:DescribeTable"
            - "dynamodb:GetItem"
          Resource:
            Fn::Join:
              - ""
              - - "arn:aws:dynamodb:::"
                - "Ref": "MyData"
I thought it might be the Resource in the policy, but changing that around doesn't seem to help.
So the issue has to do with defining a specific role for your function. By default, Serverless applies the roles and policies to all functions.
I applied:
role: DataIamPolicy
which doesn't work, because in the background it fetches the ARN of a policy instead of a role, which we hadn't created yet.
You need to set a role with a custom policy for this method to work, i.e.:
role: DataIamRole
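For reference, a minimal sketch of what such a role resource could look like, reusing the DynamoDB permissions from above; the Lambda trust policy and the Fn::GetAtt on the table are assumptions based on a typical setup, not the asker's exact configuration:
DataIamRole:
  Type: AWS::IAM::Role
  DependsOn: MyData
  Properties:
    AssumeRolePolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Principal:
            Service: lambda.amazonaws.com
          Action: sts:AssumeRole
    Policies:
      - PolicyName: "my-data-dynamodb-policy"
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
            - Effect: Allow
              Action:
                - dynamodb:DescribeTable
                - dynamodb:GetItem
              # DynamoDB tables do support GetAtt on Arn, unlike IAM policies
              Resource:
                Fn::GetAtt: [MyData, Arn]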

Amazon EKS: generate/update kubeconfig via python script

When using Amazon's K8s offering, the EKS service, at some point you need to connect the Kubernetes API and configuration to the infrastructure established within AWS. In particular, we need a kubeconfig with proper credentials and URLs to connect to the k8s control plane provided by EKS.
The Amazon command-line tool aws provides a routine for this task:
aws eks update-kubeconfig --kubeconfig /path/to/kubecfg.yaml --name <EKS-cluster-name>
Question: how can the same be done through Python/boto3?
When looking at the boto3 API documentation, I seem to be unable to spot the equivalent of the above-mentioned aws routine. Maybe I am looking in the wrong place.
Is there a ready-made function in boto3 to achieve this?
Otherwise, how would you approach this directly within Python (other than calling out to aws in a subprocess)?
There isn't a built-in method to do this, but you can build the configuration file yourself like this:
# region, cluster_name and config_file are assumed to be defined by the caller
import boto3
import yaml

# Set up the client
s = boto3.Session(region_name=region)
eks = s.client("eks")

# Get cluster details
cluster = eks.describe_cluster(name=cluster_name)
cluster_cert = cluster["cluster"]["certificateAuthority"]["data"]
cluster_ep = cluster["cluster"]["endpoint"]

# Build the cluster config dict
cluster_config = {
    "apiVersion": "v1",
    "kind": "Config",
    "clusters": [
        {
            "cluster": {
                "server": str(cluster_ep),
                "certificate-authority-data": str(cluster_cert)
            },
            "name": "kubernetes"
        }
    ],
    "contexts": [
        {
            "context": {
                "cluster": "kubernetes",
                "user": "aws"
            },
            "name": "aws"
        }
    ],
    "current-context": "aws",
    "preferences": {},
    "users": [
        {
            "name": "aws",
            "user": {
                "exec": {
                    "apiVersion": "client.authentication.k8s.io/v1alpha1",
                    "command": "heptio-authenticator-aws",
                    "args": [
                        "token", "-i", cluster_name
                    ]
                }
            }
        }
    ]
}

# Write in YAML
config_text = yaml.dump(cluster_config, default_flow_style=False)
with open(config_file, "w") as f:
    f.write(config_text)
This is explained in the Create kubeconfig manually section of https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html, which is in fact referenced from the boto3 EKS docs. The manual method there is very similar to #jaxxstorm's answer, except that it doesn't show the Python code you would need; however, it also does not assume the heptio authenticator (it shows token and IAM authenticator approaches).
I faced the same problem and decided to implement it as a Python package.
It can be installed via
pip install eks-token
and then simply do
from eks_token import get_token
response = get_token(cluster_name='<value>')
More details and examples here
Amazon's aws tool is included in the Python package awscli, so one option is to add awscli as a Python dependency and just call it from Python. The code below assumes that kubectl is installed (but you can remove the test if you want).
kubeconfig depends on ~/.aws/credentials
One challenge here is that the kubeconfig file generated by aws has a users section like this:
users:
- name: arn:aws:eks:someregion:1234:cluster/somecluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - --region
      - someregion
      - eks
      - get-token
      - --cluster-name
      - somecluster
      command: aws
So if you mount it into a container or move it to a different machine, you'll get this error when you try to use it:
Unable to locate credentials. You can configure credentials by running "aws configure".
Based on that user section, kubectl is running aws eks get-token and it's failing because the ~/.aws dir doesn't have the credentials that it had when the kubeconfig file was generated.
You could get around this by also staging the ~/.aws dir everywhere you want to use the kubeconfig file, but I have automation that takes a lone kubeconfig file as a parameter, so I'll be modifying the user section to include the necessary secrets as env vars.
Be aware that this makes it possible for whoever gets that kubeconfig file to use the secrets we've included for other things. Whether this is a problem will depend on how much power your aws user has.
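For clarity, the patched user entry that the script below produces ends up looking roughly like this (the credential values are placeholders for the assumed-role secrets):
users:
- name: arn:aws:eks:someregion:1234:cluster/somecluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws
      args:
      - --region
      - someregion
      - eks
      - get-token
      - --cluster-name
      - somecluster
      # added so the kubeconfig works without ~/.aws/credentials
      env:
      - name: AWS_ACCESS_KEY_ID
        value: <assumed access key id>
      - name: AWS_SECRET_ACCESS_KEY
        value: <assumed secret access key>
      - name: AWS_SESSION_TOKEN
        value: <assumed session token>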
Assume Role
If your cluster uses RBAC, you might need to specify which role you want for your kubeconfig file. The code below does this by first generating a separate set of creds and then using them to generate the kubeconfig file.
Role assumption has a timeout (I'm using 12 hours below), so you'll need to call the script again if you can't manage your mischief before the token times out.
The Code
You can generate the file like:
pip install awscli boto3 pyyaml sh
python mkkube.py > kubeconfig
...if you put the following in mkkube.py
from pathlib import Path
from tempfile import TemporaryDirectory
from time import time

import boto3
import yaml
from sh import aws, sh

aws_access_key_id = "AKREDACTEDAT"
aws_secret_access_key = "ubREDACTEDaE"
role_arn = "arn:aws:iam::1234:role/some-role"
cluster_name = "mycluster"
region_name = "someregion"

# assume a role that has access
sts = boto3.client(
    "sts",
    aws_access_key_id=aws_access_key_id,
    aws_secret_access_key=aws_secret_access_key,
)
assumed = sts.assume_role(
    RoleArn=role_arn,
    RoleSessionName="mysession-" + str(int(time())),
    DurationSeconds=(12 * 60 * 60),  # 12 hrs
)

# these will be different than the ones you started with
credentials = assumed["Credentials"]
access_key_id = credentials["AccessKeyId"]
secret_access_key = credentials["SecretAccessKey"]
session_token = credentials["SessionToken"]

# make sure our cluster actually exists
eks = boto3.client(
    "eks",
    aws_session_token=session_token,
    aws_access_key_id=access_key_id,
    aws_secret_access_key=secret_access_key,
    region_name=region_name,
)
clusters = eks.list_clusters()["clusters"]
if cluster_name not in clusters:
    raise RuntimeError(f"configured cluster: {cluster_name} not found among {clusters}")

with TemporaryDirectory() as kube:
    kubeconfig_path = Path(kube) / "config"

    # let awscli generate the kubeconfig
    result = aws(
        "eks",
        "update-kubeconfig",
        "--name",
        cluster_name,
        _env={
            "AWS_ACCESS_KEY_ID": access_key_id,
            "AWS_SECRET_ACCESS_KEY": secret_access_key,
            "AWS_SESSION_TOKEN": session_token,
            "AWS_DEFAULT_REGION": region_name,
            "KUBECONFIG": str(kubeconfig_path),
        },
    )

    # read the generated file
    with open(kubeconfig_path, "r") as f:
        kubeconfig_str = f.read()
    kubeconfig = yaml.load(kubeconfig_str, Loader=yaml.SafeLoader)

    # the generated kubeconfig assumes that upon use it will have access to
    # `~/.aws/credentials`, but maybe this filesystem is ephemeral,
    # so add the creds as env vars on the aws command in the kubeconfig
    # so that even if the kubeconfig is separated from ~/.aws it is still
    # useful
    users = kubeconfig["users"]
    for i in range(len(users)):
        kubeconfig["users"][i]["user"]["exec"]["env"] = [
            {"name": "AWS_ACCESS_KEY_ID", "value": access_key_id},
            {"name": "AWS_SECRET_ACCESS_KEY", "value": secret_access_key},
            {"name": "AWS_SESSION_TOKEN", "value": session_token},
        ]

    # write the updates to disk
    with open(kubeconfig_path, "w") as f:
        f.write(yaml.dump(kubeconfig))

    awsclipath = str(Path(sh("-c", "which aws").stdout.decode()).parent)
    kubectlpath = str(Path(sh("-c", "which kubectl").stdout.decode()).parent)
    pathval = f"{awsclipath}:{kubectlpath}"

    # test the modified file without a ~/.aws/ dir
    # this will throw an exception if we can't talk to the cluster
    sh(
        "-c",
        "kubectl cluster-info",
        _env={
            "KUBECONFIG": str(kubeconfig_path),
            "PATH": pathval,
            "HOME": "/no/such/path",
        },
    )

print(yaml.dump(kubeconfig))

Logging for public hosted zone Route53

I'm trying to set up query logging for a public hosted zone on AWS Route53. The template looks like this:
Resources:
  HostedZonePublic1:
    Type: AWS::Route53::HostedZone
    Properties:
      HostedZoneConfig:
        Comment: !Join ['', ['Hosted zone for ', !Ref 'DomainNamePublic']]
      Name: !Ref DomainNamePublic
      QueryLoggingConfig:
        CloudWatchLogsLogGroupArn: !GetAtt Route531LogGroup.Arn
  Route531LogGroup:
    Type: AWS::Logs::LogGroup
    Properties:
      LogGroupName: Route531-AWSLogGroup
      RetentionInDays: 7
But when I try to launch the stack I'm getting the following message:
The ARN for the CloudWatch Logs log group is invalid. (Service: AmazonRoute53; Status Code: 400; Error Code: InvalidInput; Request ID: 6c02db60-ef62-11e8-bce8-d14210c1b0cd)
Does anybody have an idea what could be wrong with this setup?
merci A
I encountered the same issue. The CloudWatch logs log group needs to be created in a specific region to be valid.
See the following:
You must create the log group in the us-east-1 region.
You must use the same AWS account to create the log group and the hosted zone that you want to configure query logging for.
When you create log groups for query logging, we recommend that you use a consistent prefix.
You can find the full documentation here.
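For illustration, here is a minimal sketch of how the log group resource might look, assuming it is deployed in a stack in us-east-1 and using a /aws/route53/ prefix along the lines the docs suggest; the !Sub naming is just one option:
Route531LogGroup:
  Type: AWS::Logs::LogGroup
  Properties:
    # Route53 query logging requires the log group to live in us-east-1
    LogGroupName: !Sub '/aws/route53/${DomainNamePublic}'
    RetentionInDays: 7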