I am configuring a CodePipeline in Account 00000000000.
I would like to deploy a CloudFormation stack
by executing a CloudFormation template via the CodePipeline,
but in account 123456789123, not in 00000000000.
Question
How do I configure the CodePipeline action of type "Deploy" to do so?
Especially how do I point it to the account 123456789123 ?
What I did so far
I assume it works via roles.
I created an IAM role in account 123456789123,
with trust to the account 00000000000,
with trust to the service cloudformation.
I named it arn:aws:iam::123456789123:role/CFDep
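Roughly, the trust policy I attached to it (encoding the two trusts above) looks like this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::00000000000:root",
        "Service": "cloudformation.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}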
Below is the configuration of my CodePipeline-Action.
I am getting the error "The role name is invalid. Check that the specified role exists and can be assumed by AWS CloudFormation." Why?
From the docs:
You cannot use the AWS CodePipeline console to create or edit a
pipeline that uses resources associated with another AWS account.
However, you can use the console to create the general structure of
the pipeline, and then use the AWS CLI to edit the pipeline and add
those resources. Alternatively, you can use the structure of an
existing pipeline and manually add the resources to it.
You can do one of the following two things:
Use the AWS CodePipeline CLI to edit the pipeline:
aws codepipeline update-pipeline --cli-input-json file://pipeline.json
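For the cross-account deploy action in pipeline.json, two separate role fields matter: the action-level roleArn is assumed by CodePipeline (so it needs the trust to account 00000000000), while the RoleArn inside configuration is handed to CloudFormation (so it needs the trust to cloudformation.amazonaws.com). Since CFDep as described in the question carries both trusts, it can appear in both places. A minimal sketch (stack name and artifact names are placeholders):

{
  "name": "DeployToTarget",
  "actionTypeId": {
    "category": "Deploy",
    "owner": "AWS",
    "provider": "CloudFormation",
    "version": "1"
  },
  "configuration": {
    "ActionMode": "CREATE_UPDATE",
    "StackName": "MyStack",
    "TemplatePath": "BuildOutput::template.yaml",
    "Capabilities": "CAPABILITY_IAM",
    "RoleArn": "arn:aws:iam::123456789123:role/CFDep"
  },
  "roleArn": "arn:aws:iam::123456789123:role/CFDep",
  "inputArtifacts": [{ "name": "BuildOutput" }],
  "runOrder": 1
}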
OR
Create the pipeline itself using CloudFormation
You can use this pipeline definition from the AWS reference architecture for cross-account pipelines as a starting point for your template.
Related
Are there any tutorials for creating a service account for GCP Artifact Registry?
I have tried this: https://cloud.google.com/architecture/creating-cicd-pipeline-vsts-kubernetes-engine
... but it is using GCP Container Registry.
I do not imagine it should be much different, but I keep on getting this:
##[error]denied: Permission "artifactregistry.repositories.downloadArtifacts" denied on resource
But the service account I created has the permissions needed (albeit those roles are in beta). I even gave it a very elevated role and am still getting this error.
When I created the service connection I followed these steps from the documentation linked above:
Docker Registry: https://gcr.io/PROJECT_ID, replacing PROJECT_ID with the name of your project (for example, https://gcr.io/azure-pipelines-test-project-12345).
Docker ID: _json_key
Password: Paste the content of azure-pipelines-publisher-oneline.json.
Service connection name: gcr-tutorial
Any advice on this would be appreciated.
I was having the same issue. As @Mexicoder points out, the service account needs the ArtifactRegistryWriter role. In addition, the following wasn't clear to me initially:
The service connection needs to be in the format: https://REGION-docker.pkg.dev/PROJECT-ID (where region is something like 'us-west2')
The repository parameter to the Docker task (Docker#2) needs to be in the form: PROJECT-ID/REPO/IMAGE
I was able to get it working with the documentation for Container Registry; my issue was with the repository name.
Also, the main difference when using Artifact Registry is the role you need to give the IAM service account: use ArtifactRegistryWriter. StorageAdmin will be useless. A working Docker task is sketched below.
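Putting that together, a Docker task along these lines worked (service connection, project, repo, and image names are placeholders):

- task: Docker@2
  inputs:
    # service connection created against https://us-west2-docker.pkg.dev/my-project-id
    containerRegistry: 'gar-tutorial'
    # PROJECT-ID/REPO/IMAGE, as noted above
    repository: 'my-project-id/my-repo/my-image'
    command: 'buildAndPush'
    Dockerfile: '**/Dockerfile'
    tags: 'latest'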
We have automated the following steps using Azure DevOps:
a release pipeline which creates a website in Azure, and a next step which deploys the code. All is well and working so far.
The next step that we need is to create an Azure SQL DB and an Azure Storage Account in the release pipeline and then configure these values in the appsettings.json file.
Questions
Creating the Storage account is the easy part, but how do we get the storage account key back in the pipeline and associate that value in the appsettings.json file?
Similarly, for the SQL DB, how do we get the IP address and add it to the exclusions list?
Also, it would help if you could point us to any documentation on this.
how do we get the storage account key back in the pipeline?
You can use the Azure CLI command az storage account keys list to get the storage account keys.
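For example, in an Azure CLI task you can read the first key and expose it as a secret pipeline variable, which a later step can substitute into appsettings.json (service connection, storage account, and resource group names are placeholders):

- task: AzureCLI@2
  inputs:
    azureSubscription: 'my-service-connection'
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    inlineScript: |
      key=$(az storage account keys list \
        --account-name mystorageacct \
        --resource-group my-rg \
        --query "[0].value" --output tsv)
      # expose the key to later steps as a secret variable
      echo "##vso[task.setvariable variable=StorageKey;issecret=true]$key"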
I'm currently working on a pipeline job that requires kubernetes access through powershell.
The only issue is that I need to sign in to the Azure CLI. For testing I'm using my personal credentials, which is clearly not a good permanent option. Are there any other options for Azure CLI login that could be used instead?
I'm guessing you are working with hosted agents; therefore, you need to configure kube.config on the hosted agent.
In order to do that, run az aks get-credentials --name $(CLUSTER_NAME) --resource-group $(RESOURCE_GROUP_NAME). The easiest way is to use the Azure CLI task. Be aware that this task requires authorization from Azure DevOps to Azure.
More info can be found here.
In case you are the subscription owner, select your subscription and click Authorize.
Once kube.config is configured on the hosted agent, you can run any kubectl command you wish (using PowerShell/Bash/CMD).
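As a sketch, an Azure CLI task like this avoids personal credentials entirely, because the az session comes from the service connection (connection and variable names are placeholders):

- task: AzureCLI@2
  inputs:
    azureSubscription: 'my-service-connection'   # the authorized service connection
    scriptType: 'pscore'                         # PowerShell Core
    scriptLocation: 'inlineScript'
    inlineScript: |
      # writes kube.config on the hosted agent
      az aks get-credentials --name $(CLUSTER_NAME) --resource-group $(RESOURCE_GROUP_NAME)
      kubectl get pods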
I'm new to Pulumi so I'm struggling at the moment trying to run it in my Azure release pipeline in order to create my infrastructure.
During development I've used the local storage to store my pulumi state (pulumi login --local), I've created my stacks (dev being one of them) and I was able to easily test my deployment script against my azure subscription.
Now I've pushed my code to source control, created my build pipeline (which works), and I'm trying to create my infrastructure from the release pipeline by using the Pulumi Azure Pipelines Task.
I've managed to configure it to use the blob storage for the state file, but when running pulumi up --yes --skip-preview for the dev stack I get an error that the dev stack does not exist.
Do I need to do a pulumi stack init dev on every "store" that I use? Aren't the Pulumi.stack_name.yaml files enough?
Any advice on how to proceed is welcome, as the documentation on this is non-existent or unclear.
Thank you!
The error is probably caused by the stack not existing in your blob storage.
If you use pulumi login --local, the stack is managed on your local machine and is not synced to Azure blob storage. Check here for more login options.
In my test pipeline I got the error no stack named 'dev' found when dev did not exist on app.pulumi.com. Once I created dev on app.pulumi.com (I use pulumi.com for storage), it worked as expected.
So please check in your Azure blob storage whether the dev stack exists; if it does not, you need to create one there for your account.
If you want to migrate your local stacks to Azure blob storage, please check the steps here.
Once the stack exists in your Azure blob storage, you can run pulumi up --yes --skip-preview directly in the Pulumi task of your Azure DevOps pipeline; there is no need to run pulumi stack init dev.
Please make sure the login args field is empty to use the online stack. If you specify --local you will get the error too, because the stack does not exist on the agent machine.
You can also enable the option Create the stack if it does not exist to let the Pulumi task create the stack if it is not found in your Azure blob storage; see the sketch below.
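A sketch of the Pulumi task with those settings (service connection, container, and storage account names are placeholders; I'm also assuming createStack is the YAML name of the "Create the stack if it does not exist" checkbox, and that the azblob backend picks up its credentials from environment variables):

- task: Pulumi@1
  inputs:
    azureSubscription: 'my-service-connection'
    command: 'up'
    args: '--yes --skip-preview'
    stack: 'dev'
    createStack: true   # "Create the stack if it does not exist"
    # for an azblob backend the login args would point at the container,
    # e.g. azblob://my-container; leave empty for the pulumi.com backend
  env:
    AZURE_STORAGE_ACCOUNT: 'mystorageacct'
    AZURE_STORAGE_KEY: $(StorageKey)
    PULUMI_CONFIG_PASSPHRASE: $(PulumiPassphrase)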
Here is an example from the official Pulumi documentation on integrating with Azure DevOps. Hope it helps!
I am creating a cloudformation stack using a SAM template and the CLI. I have successfully done this using an account that gets all the required permissions from policies directly attached to it. It's poor security practice to give this account all these permissions so I've created a role with the same policies attached and want to use that for deployment instead. However, even though I pass my role through the --role-arn parameter the command is still looking to the account for the permissions.
Here are the commands I've tried using:
aws cloudformation deploy --template-file TemplatePackaged.yaml --stack-name TestStack --capabilities CAPABILITY_IAM --region us-east-1 --role-arn arn:aws:iam::666488004797:role/LambdaApplicationCreateRole
or
sam deploy --template-file TemplatePackaged.yaml --stack-name TestStack --capabilities CAPABILITY_IAM --region us-east-1 --role-arn arn:aws:iam::666488004797:role/LambdaApplicationCreateRole
Unless the user logged into the CLI has the required permissions, I get this error with either command:
An error occurred (AccessDenied) when calling the DescribeStacks
operation: User: arn:aws:iam::666488004797:user/DummyUser1 is not
authorized to perform: cloudformation:DescribeStacks on resource:
arn:aws:cloudformation:us-east-1:666488004797:stack/Melissa/*
How do I get the deploy command to use the role passed in the --role-arn parameter to get the permissions it needs?
After a lot of reading and trial and error I found that Manoj's answer is correct, but the tricky part is the argument that one needs to pass as xyz in his answer. (The --role-arn parameter only sets the service role that CloudFormation itself assumes to create resources; CLI calls such as DescribeStacks are still made with the caller's own credentials, which is why the error above appears.) Here is what I had to do in order to pass a role:
I had to configure the role that I wanted to pass in the AWS CLI's config file as a profile. The --profile parameter that Manoj mentioned only works with profiles configured in this file (to the best of my knowledge). The way to configure a role as a profile is with the command:
aws configure --profile arbitraryName
What follows after profile is just a label that you will use to refer to your role when you want to pass it; you can give it any name, but ideally you would name it after the role it will hold. Running this command will prompt you for a couple of fields. As far as I know roles don't have an access_key or secret_access_key, so just hit Enter to skip these, as well as the region and output; you don't need those for your role. Next you will set the fields that roles actually need with these commands:
aws configure set profile.arbitraryName.role_arn roleArn
aws configure set profile.arbitraryName.source_profile cliProfile
The roleArn is the ARN of the role you are configuring into the CLI; the cliProfile is a user already configured in the CLI that has rights to assume the role. Once this is done, whenever you want to pass the configured role in a command you just need to add --profile arbitraryName as the last parameter of your command, and the command will use permissions from the role that was passed.
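After those two commands, the ~/.aws/config file contains an entry along these lines (role ARN taken from the question; cliProfile is whatever existing profile you chose):

[profile arbitraryName]
role_arn = arn:aws:iam::666488004797:role/LambdaApplicationCreateRole
source_profile = cliProfile

and the deploy command becomes, for example:

aws cloudformation deploy --template-file TemplatePackaged.yaml --stack-name TestStack --capabilities CAPABILITY_IAM --region us-east-1 --profile arbitraryName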
Interesting to know: passing a role this way does an implicit aws sts assume-role. If you know where your .aws folder is, you can go in and see a folder named cli, which contains a JSON file with the temporary credentials that are created when a role is assumed.
I had to do a lot of reading to figure this out, I hope this answer will save someone else some time.
There could be multiple approaches.
Assume the role and use a profile for deploying the AWS CloudFormation stack:
aws cloudformation deploy --template-file TemplatePackaged.yaml --stack-name TestStack --profile xyz
Launch an EC2 instance with an instance profile that has access to CloudFormation; then you don't have to explicitly specify a role ARN or profile details:
aws cloudformation deploy --template-file TemplatePackaged.yaml --stack-name TestStack