Disable Deploy API for specific Stages - aws-api-gateway

I'm building an API using AWS API Gateway, and I will have two or more stages such as dev, production, etc.
What I want to do is allow only a group of users to deploy to the production stage.
What I have managed so far is to deny deployment to all stages, but I can't figure out how to target specific stages.
Here is my policy that denies deployment to every stage; if there is a better way to control this, I'd be glad to hear it.
{
  "Sid": "VisualEditor2",
  "Effect": "Deny",
  "Action": "apigateway:POST",
  "Resource": "arn:aws:apigateway:us-east-1::/restapis/{APIID}/deployments"
}

Did you try something like this, to block the whole stage?
"Resource": [
  "arn:aws:apigateway:us-east-1::/restapis/{APIID}/stages",
  "arn:aws:apigateway:us-east-1::/restapis/{APIID}/stages/production"
]
Source: https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-iam-policy-examples.html#api-gateway-policy-example-apigateway-stage-full-access
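For example, a full statement that scopes the deny to the production stage resources might look like the sketch below ({APIID} stays a placeholder, and the POST/PATCH/DELETE action list is an assumption about which verbs you want to block for that stage):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyChangesToProductionStage",
      "Effect": "Deny",
      "Action": [
        "apigateway:POST",
        "apigateway:PATCH",
        "apigateway:DELETE"
      ],
      "Resource": [
        "arn:aws:apigateway:us-east-1::/restapis/{APIID}/stages",
        "arn:aws:apigateway:us-east-1::/restapis/{APIID}/stages/production"
      ]
    }
  ]
}
Users who should be allowed to deploy to production simply don't get this policy attached; an explicit Deny always wins over an Allow.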

cloudformation template applied to all resources, using a wildcard

I am trying to use a JSON script as a CloudFormation template, but I am being asked to add a resource member, even though the JSON script is already running in AWS.
The policy is meant to apply to all resources, and it's currently defined in IAM:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": [
        "*"
      ],
      "Resource": [
        "*"
      ],
      "Condition": {
        "DateLessThan": {
          "aws:TokenIssueTime": "[policy creation time]"
        }
      }
    }
  ]
}
All I want to do is copy that code (which currently sits in the IAM > Roles > Revoke Sessions tab)
and put it into a CloudFormation template, but I cannot figure out how to tell CloudFormation that the JSON script is meant to apply to ALL resources.
Is there any way to specify that the policy should apply to all resources in the JSON script? Any help would be much appreciated. Thank you!
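One way to express this in CloudFormation is to wrap the statement in an AWS::IAM::Policy (or AWS::IAM::ManagedPolicy) resource and keep "Resource": "*" inside the PolicyDocument. A minimal sketch (the resource name RevokeOldSessionsPolicy and the role name my-existing-role are hypothetical, and the [policy creation time] placeholder still needs a real timestamp):
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Resources": {
    "RevokeOldSessionsPolicy": {
      "Type": "AWS::IAM::Policy",
      "Properties": {
        "PolicyName": "revoke-old-sessions",
        "Roles": ["my-existing-role"],
        "PolicyDocument": {
          "Version": "2012-10-17",
          "Statement": [
            {
              "Effect": "Deny",
              "Action": ["*"],
              "Resource": ["*"],
              "Condition": {
                "DateLessThan": {
                  "aws:TokenIssueTime": "[policy creation time]"
                }
              }
            }
          ]
        }
      }
    }
  }
}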

Trigger a DAG in Amazon Managed Workflows for Apache Airflow (MWAA) as part of CI/CD

Wondering if there is any way (blueprint) to trigger an Airflow DAG in MWAA on the merge of a pull request (preferably via GitHub Actions)? Thanks!
You need to create a role in AWS:
Set its permissions with a policy granting airflow:CreateCliToken:
{
  "Action": "airflow:CreateCliToken",
  "Effect": "Allow",
  "Resource": "*"
}
Add a trust relationship (scoped to your account and repo):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::{account_id}:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringLike": {
          "token.actions.githubusercontent.com:sub": "repo:{repo-name}:*"
        }
      }
    }
  ]
}
In the GitHub Actions workflow you need to give the job id-token permission and configure AWS credentials with role-to-assume:
permissions:
  id-token: write
  contents: read

- name: Configure AWS credentials
  uses: aws-actions/configure-aws-credentials@v1
  with:
    role-to-assume: arn:aws:iam::{account_id}:role/{role-name}
    aws-region: {region}
Call MWAA using the CLI; see the AWS reference on how to create a CLI token and run the DAG.
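As a rough sketch of that last step (the environment name my-mwaa-env and the DAG id my_dag are placeholders, and jq is assumed to be available on the runner):
# Get a short-lived CLI token and the web server hostname for the environment
CLI_JSON=$(aws mwaa create-cli-token --name my-mwaa-env)
CLI_TOKEN=$(echo "$CLI_JSON" | jq -r '.CliToken')
WEB_SERVER=$(echo "$CLI_JSON" | jq -r '.WebServerHostname')

# Send an Airflow CLI command ("dags trigger <dag_id>") to the MWAA CLI endpoint
curl --request POST "https://$WEB_SERVER/aws_mwaa/cli" \
  --header "Authorization: Bearer $CLI_TOKEN" \
  --header "Content-Type: text/plain" \
  --data-raw "dags trigger my_dag"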
(Answering for Airflow in general, without MWAA-specific context.)
Airflow offers a REST API with a trigger-DAG endpoint, so you can configure a GitHub Action that runs after a PR is merged and triggers a DAG run via a REST call. In theory this should work.
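For reference, such a call against Airflow's stable REST API might look like the following (a sketch only: the host, the basic-auth credentials, and the DAG id my_dag are placeholders, and your deployment may use a different auth backend):
# Trigger a DAG run via the stable REST API (Airflow 2)
curl -X POST "https://my-airflow-host/api/v1/dags/my_dag/dagRuns" \
  --user "username:password" \
  -H "Content-Type: application/json" \
  -d '{"conf": {}}'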
In practice this will not work as you expect.
Airflow is not synchronous with your merges (even if the merge dumps code straight into the DAG folder and there is no extra wait for GitSync). Airflow has a DAG file processing service that scans the DAG folder and looks for changes in files. It processes the changes, and only then is the DAG registered in the database; only after that can Airflow use the new code. This serialization process is important: it ensures the different parts of Airflow (webserver, etc.) don't need direct access to your DAG folder.
This means that if you trigger a DAG run right after the merge, you risk executing an older version of your code.
I don't know why you need such a mechanism (it's not a very typical requirement), but I'd advise you not to try to force this idea into your deployment.
To clarify:
If, for a specific deployment, you can confirm that the code you deployed has been parsed and registered as a DAG in the database, then there is no risk in doing what you are after. That is probably a very rare and unique case.

How can I help Azure Network Security Group rules recognize Service Tags for Azure DevOps

I am attempting to set up inbound Network Security Group rules to permit controlled access from Azure DevOps Pipelines to a public Azure vnet, which interfaces with a private Azure vnet containing Azure Container Instances running SonarQube. I've crafted this according to the Azure documents here.
The NSG rule for the inbound traffic from Azure DevOps is leveraging Service Tags, specifically the ‘AzureDevOps’ service tag. My ARM template currently contains these two NSG rules:
{
  "name": "inbound-devops-rule",
  "properties": {
    "description": "Inbound Azure DevOps",
    "protocol": "*",
    "sourcePortRange": "*",
    "destinationPortRange": "*",
    "sourceAddressPrefix": "AzureDevOps",
    "destinationAddressPrefix": "*",
    "access": "Allow",
    "priority": 100,
    "direction": "Inbound"
  }
},
{
  "name": "InboundRequiredGatewayPorts",
  "properties": {
    "description": "Inbound AZ admin",
    "protocol": "TCP",
    "sourcePortRange": "*",
    "destinationPortRange": "65200-65535",
    "sourceAddressPrefix": "GatewayManager",
    "destinationAddressPrefix": "*",
    "access": "Allow",
    "priority": 115,
    "direction": "Inbound"
  }
},
Currently, this does not permit traffic through the NSG rules and into the vnets. The only thing I've been able to do to resolve this in my testing has been to create a fully open rule allowing all traffic from all sources in my NSG, which is obviously not an ideal scenario from a security perspective. I have combed through the documentation and attempted dozens of different configurations up to this point, and nothing but an open/unprotected NSG configuration has allowed Azure DevOps traffic.
The error I am currently getting when attempting to connect to my containerized Azure SonarQube resources from the Azure DevOps pipelines is:
2022-12-02T19:46:46.7098999Z ##[section]Starting: SonarQubePrepare
2022-12-02T19:46:46.7231846Z ==============================================================================
2022-12-02T19:46:46.7232191Z Task : Prepare Analysis Configuration
2022-12-02T19:46:46.7232466Z Description : Prepare SonarQube analysis configuration
2022-12-02T19:46:46.7232684Z Version : 5.8.0
2022-12-02T19:46:46.7232868Z Author : sonarsource
2022-12-02T19:46:46.7233232Z Help : Version: 5.8.0. [More Information](http://redirect.sonarsource.com/doc/install-configure-scanner-tfs-ts.html)
2022-12-02T19:46:46.7233633Z ==============================================================================
2022-12-02T19:47:08.4683602Z ##[error][SQ] API GET '/api/server/version' failed, error was: {"errno":"ETIMEDOUT","code":"ETIMEDOUT","syscall":"connect","address":"<REDACTED>","port":80}
2022-12-02T19:47:08.4811763Z ##[section]Finishing: SonarQubePrepare
Based on the rule sample above, is this properly configured to allow traffic from Azure DevOps using Service Tags? Can any additional guidance be provided for setting up NSG rules to allow traffic from my Azure DevOps pipelines to Azure containerized resources?
Azure DevOps Microsoft-hosted agents use the AzureCloud.<region> service tag rather than AzureDevOps, as addressed here: doc
First, check your DevOps organization's region in the DevOps UI.
Then look up that region in this Azure Geography list.
Pay attention to this: To obtain the complete list of possible IP ranges for your agent, you must use the IP ranges from all of the regions that are contained in your geography.
For your reference: if your DevOps organization is located in the UK, you must add the service tags for both AzureCloud.uksouth and AzureCloud.ukwest to the NSG inbound security rules to ensure the Microsoft-hosted agents have access to the Azure resources.
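Using the structure of the rule from the question, one of those rules might look like the sketch below (UK South is just the example geography from above; a sibling rule with AzureCloud.ukwest at another priority would cover the second tag, since sourceAddressPrefix takes a single service tag):
{
  "name": "inbound-devops-hosted-agents-uksouth",
  "properties": {
    "description": "Inbound Azure DevOps Microsoft-hosted agents (UK South)",
    "protocol": "*",
    "sourcePortRange": "*",
    "destinationPortRange": "*",
    "sourceAddressPrefix": "AzureCloud.uksouth",
    "destinationAddressPrefix": "*",
    "access": "Allow",
    "priority": 100,
    "direction": "Inbound"
  }
}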

Adding Secrets and access policy to existing shared keyvault using ARM

I've been searching the web for information on my question: how to add secrets and access policies to an existing key vault in Azure, shared by other applications, using ARM.
I read this documentation.
What I'm worried about is whether anything existing will be overwritten or deleted, as I'm creating a new template and parameter file in my service's "solution", so to speak.
I know that my CI/CD pipelines in DevOps are set to "incremental" with regard to what they should be updating and creating.
Anyone have a crystal clear understanding regarding this?
Thanks in advance!
UPDATE:
So I think I managed to get it right here after all.
I created a new key vault resource and added a couple of secrets and some access policies, to emulate the situation of an already existing resource that I want to add new secrets to.
Then I created this template:
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "keyVault": {
      "type": "string"
    },
    "Credentials1": {
      "type": "secureString"
    },
    "SecretName1": {
      "type": "string"
    },
    "Credentials2": {
      "type": "secureString"
    },
    "SecretName2": {
      "type": "string"
    }
  },
  "variables": {},
  "resources": [
    {
      "type": "Microsoft.KeyVault/vaults/secrets",
      "name": "[concat(parameters('keyVault'), '/', parameters('SecretName1'))]",
      "apiVersion": "2015-06-01",
      "properties": {
        "contentType": "text/plain",
        "value": "[parameters('Credentials1')]"
      }
    },
    {
      "type": "Microsoft.KeyVault/vaults/secrets",
      "name": "[concat(parameters('keyVault'), '/', parameters('SecretName2'))]",
      "apiVersion": "2015-06-01",
      "properties": {
        "contentType": "text/plain",
        "value": "[parameters('Credentials2')]"
      }
    }
  ],
  "outputs": {}
}
What I've learned is that if there is an existing shared key vault I want to add secrets to, I only have to define the sub-resources, in this case the secrets to be added to the existing key vault.
So this worked and resulted in nothing in the existing key vault being modified except for the new secrets being added.
Even though this is not a fully automated way of setting up a whole new key vault for a new service, since it doesn't connect the new resources correctly by adding their principal IDs (identities), it's good for now, as I don't have to add each secret manually. I do, however, still have to add the principal IDs manually.
When using incremental mode to deploy the template, it should not overwrite what is already in the key vault.
But to be foolproof, I recommend you first back up your key vault keys, secrets, and certificates. For the access policies, you can also export the key vault's template first and save the accessPolicies section so you can restore it if needed.
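As a rough sketch of that backup step with the Azure CLI (the vault, secret, key, and certificate names are placeholders):
# Back up individual items from the shared vault
az keyvault secret backup --vault-name my-shared-vault --name MySecret --file MySecret.backup
az keyvault key backup --vault-name my-shared-vault --name MyKey --file MyKey.backup
az keyvault certificate backup --vault-name my-shared-vault --name MyCert --file MyCert.backup

# Save the current access policies so they can be restored if needed
az keyvault show --name my-shared-vault --query properties.accessPolicies > accessPolicies.json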
If you redeploy the existing key vault in incremental mode, any child properties, such as access policies, will be configured as they're defined in the template. That could result in the loss of some access policies if you haven't been careful to define them all in your template. The documentation linked above gives a full list of the properties that would be affected, and as per the docs this can affect properties even if they're not explicitly defined.
KeyVault Secrets aren’t a child property of the KeyVault resource so won’t get overwritten. They can be defined in ARM either as a separate resource in the same template or in a different template file. You can define some, all or none of the existing secrets in ARM. Any that aren’t defined in the ARM template will be left as is.
If you’re using CI/CD to manage your deployments it’s worth considering setting up a test environment to apply the changes to first so you can validate that the result is as expected before applying them to your production environment.
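On the access-policy side, the same "define only the sub-resource" approach used for secrets in the update above can also work: access policies can be appended via the vaults/accessPolicies sub-resource with the add operation, without redeploying the vault itself. A sketch (the apiVersion is an assumption about a version that supports this, and the objectId is a placeholder):
{
  "type": "Microsoft.KeyVault/vaults/accessPolicies",
  "name": "[concat(parameters('keyVault'), '/add')]",
  "apiVersion": "2019-09-01",
  "properties": {
    "accessPolicies": [
      {
        "tenantId": "[subscription().tenantId]",
        "objectId": "00000000-0000-0000-0000-000000000000",
        "permissions": {
          "secrets": [ "get", "list" ]
        }
      }
    ]
  }
}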

Restrict gcloud service account to specific bucket

I have 2 buckets, prod and staging, and I have a service account. I want to restrict this account to only have access to the staging bucket. Now I saw on https://cloud.google.com/iam/docs/conditions-overview that this should be possible. I created a policy.json like this
{
  "bindings": [
    {
      "role": "roles/storage.objectCreator",
      "members": "serviceAccount:staging-service-account@lalala-co.iam.gserviceaccount.com",
      "condition": {
        "title": "staging bucket only",
        "expression": "resource.name.startsWith(\"projects/_/buckets/uploads-staging\")"
      }
    }
  ]
}
But when I run gcloud projects set-iam-policy lalala policy.json I get:
The specified policy does not contain an "etag" field identifying a
specific version to replace. Changing a policy without an "etag" can
overwrite concurrent policy changes.
Replace existing policy (Y/n)?
ERROR: (gcloud.projects.set-iam-policy) INVALID_ARGUMENT: Can't set conditional policy on policy type: resourcemanager_projects and id: /lalala
I feel like I misunderstood how roles, policies and service-accounts are related. But in any case: is it possible to restrict a service account in that way?
Following the comments, I was able to solve my problem. Apparently bucket permissions are somewhat special, but I was able to set a policy on the bucket itself that allows access for my service account, using gsutil:
gsutil iam ch serviceAccount:staging-service-account@lalala.iam.gserviceaccount.com:objectCreator gs://lalala-uploads-staging
After running this, access works as expected. I found it a little confusing that this is not reflected in the service account's policy:
% gcloud iam service-accounts get-iam-policy staging-service-account@lalala.iam.gserviceaccount.com
etag: ACAB
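That's expected: the binding you added lives on the bucket's IAM policy rather than on the service account's own policy (which only controls who can act on the service account), so you can confirm it from the bucket instead, for example:
# Show the bucket-level IAM policy, which now contains the objectCreator binding
gsutil iam get gs://lalala-uploads-staging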
Thanks everyone