cdk bootstrap fails - "Invalid principal in policy... Status Code: 400; Error Code: MalformedPolicyDocument" - aws-cloudformation

Using the AWS CDK (2.10.0 (build e5b301f)), I'm attempting to bootstrap 'Account-B' with --trust permissions so that 'Account-A' can deploy into it.
cdk bootstrap --trust aws://Account-A/ap-southeast-2 aws://Account-B/ap-southeast-2 --cloudformation-execution-policies "arn:aws:iam::aws:policy/AdministratorAccess"
The CloudFormation template deployment (in Account-B) fails with this error:
The following resource(s) failed to update: [ImagePublishingRole,
FilePublishingRole, CdkBootstrapVersion, LookupRole,
CloudFormationExecutionRole, ContainerAssetsRepository].
Invalid principal in policy: "AWS":"aws://Account-A/ap-southeast-2"
(Service: AmazonIdentityManagement; Status Code: 400; Error Code:
MalformedPolicyDocument; Request ID:
65543239-69db-4b09-a70e-52732b7620cf; Proxy: null)
It seems to me that the CloudFormation template I'm consuming has an error, but then I would assume everyone else is running into this issue too?
Actual Error:
Bootstrapping environment aws://711219499793/ap-southeast-2...
Trusted accounts for deployment: aws://864771865616/ap-southeast-2
Trusted accounts for lookup: (none)
Execution policies: arn:aws:iam::aws:policy/AdministratorAccess
CDKToolkit: creating CloudFormation changeset...
UPDATE_FAILED | AWS::IAM::Role | ImagePublishingRole
Invalid principal in policy: "AWS":"aws://864771865616/ap-southeast-2" (Service: AmazonIdentityManagement; Status Code: 400; Error Code: MalformedPolicyDocument; Request ID: 72b1b1d4-b904-47ee-808c-deaf399fcdc9; Proxy: null)
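Judging from the failure, IAM is being handed the literal string "aws://864771865616/ap-southeast-2" as a role principal. The --trust option expects bare account IDs rather than aws:// environment URLs, so a likely corrected invocation (a sketch using the account IDs from the log above) would be:
cdk bootstrap aws://711219499793/ap-southeast-2 \
  --trust 864771865616 \
  --cloudformation-execution-policies "arn:aws:iam::aws:policy/AdministratorAccess"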

Related

GitLab Kubernetes runner failing to authenticate to private Docker registry during prepare stage

I'm setting up a GitLab runner in my Kubernetes cluster.
The runner is properly deployed and running. When I trigger any pipeline, it fails during the prepare stage with an authentication error while pulling from my private Docker registry:
Preparing the "kubernetes" executor 00:00
Using Kubernetes namespace: gitlab-runner
Using Kubernetes executor with image myprivaterepo.com/terraform:light ...
Using attach strategy to execute scripts...
Preparing environment 00:04
Waiting for pod gitlab-runner/runner-d8cjrcgf-project-2156-concurrent-0nhsjb to be running, status is Pending
ContainersNotInitialized: "containers with incomplete status: [init-permissions]"
ContainersNotReady: "containers with unready status: [build helper]"
ContainersNotReady: "containers with unready status: [build helper]"
WARNING: Failed to pull image with policy "": image pull failed: rpc error: code = Unknown desc = failed to pull and unpack image "myprivaterepo.com/terraform:light": failed to resolve reference "myprivaterepo.com/terraform:light": pulling from host myprivaterepo.com failed with status code [manifests light]: 401 Unauthorized
ERROR: Job failed: prepare environment: waiting for pod running: pulling image "myprivaterepo.com/terraform:light": image pull failed: rpc error: code = Unknown desc = failed to pull and unpack image "myprivaterepo.com/terraform:light": failed to resolve reference "myprivaterepo.com/terraform:light": pulling from host myprivaterepo.com failed with status code [manifests light]: 401 Unauthorized. Check https://docs.gitlab.com/runner/shells/index.html#shell-profile-loading for more information
I already tried adding imagePullSecrets (kubernetes.io/dockerconfigjson) to the runner deployment, and also setting DOCKER_AUTH_CONFIG as an environment variable under GitLab -> Settings -> CI/CD, but neither worked.
Where is the correct place to add it? I'm using the Helm chart.
my .gitlab-ci.yaml:
.base-terraform:
  image:
    name: myprivaterepo.com/terraform:light
In my DOCKER_AUTH_CONFIG I had a domain with a different port than the actual one. GitLab CI picked this project environment variable up automatically, as it should.
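For comparison, a minimal DOCKER_AUTH_CONFIG sketch (hypothetical credentials): the key under "auths" has to match the registry host in the image reference exactly, port included, so an entry for myprivaterepo.com:5000 would not match the image myprivaterepo.com/terraform:light.
# Hypothetical user and password; "auth" is base64 of "username:password".
DOCKER_AUTH_CONFIG='{"auths":{"myprivaterepo.com":{"auth":"'$(printf '%s' 'myuser:mypassword' | base64)'"}}}'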

InvalidIdentityToken: Couldn't retrieve verification key from your identity provider

I am new to AWS and kubectl, and I need to deploy one of our apps to AWS. After deploying it to the EKS cluster, I edited the Ingress with kubectl, but unfortunately it returned 404 Not Found. (I am pretty sure the new service container works fine.)
After checking with kubectl describe ingress, here are some of the event reports:
Warning FailedBuildModel 40m ingress Failed build model due to WebIdentityErr: failed to retrieve credentials
caused by: InvalidIdentityToken: Couldn't retrieve verification key from your identity provider, please reference AssumeRoleWithWebIdentity documentation for requirements
status code: 400, request id: xxxxxxxx-4a93-4e27-9d6b-xxxxxxxx
Warning FailedBuildModel 22m ingress Failed build model due to WebIdentityErr: failed to retrieve credentials
caused by: InvalidIdentityToken: Couldn't retrieve verification key from your identity provider, please reference AssumeRoleWithWebIdentity documentation for requirements
status code: 400, request id: xxxxxxxx-5368-41e1-8a4d-xxxxxxxx
Warning FailedBuildModel 5m8s ingress Failed build model due to WebIdentityErr: failed to retrieve credentials
caused by: InvalidIdentityToken: Couldn't retrieve verification key from your identity provider, please reference AssumeRoleWithWebIdentity documentation for requirements
status code: 400, request id: xxxxxxxx-20ea-4bd0-b1cb-xxxxxxxx
Does anyone have ideas about this issue?
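This error usually means STS could not fetch the verification keys for the cluster's OIDC issuer. A first diagnostic step (a sketch, assuming a configured AWS CLI and a hypothetical cluster name my-cluster) is to compare the cluster's issuer URL against the OIDC providers actually registered in IAM:
aws eks describe-cluster --name my-cluster \
  --query "cluster.identity.oidc.issuer" --output text
aws iam list-open-id-connect-providers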

Serverless error UPDATE_FAILED - AWS::IAM::Role - IamRoleLambdaExecution

When deploying a Lambda using Serverless, I get the following error when running sls deploy -s dev -v:
UPDATE_FAILED - AWS::IAM::Role - IamRoleLambdaExecution
Serverless Error ----------------------------------------
An error occurred: IamRoleLambdaExecution - The policy failed legacy parsing (Service: AmazonIdentityManagement; Status Code: 400; Error Code: MalformedPolicyDocument; Request ID: b80c863c-1a49-43ae-887c-9611c869e08c; Proxy: null)
This error was caused by an incorrectly written resource ARN.
I had a resource as:
arn:aws:sqs:us-west-2:982028098624/bigtier-dev-sqs
and it had to be:
arn:aws:sqs:us-west-2:982028098624:bigtier-dev-sqs
It took me hours to find (the error message didn't help), which is why I wanted to share it. If you get this error, read every resource ARN character by character!
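One way to avoid hand-typing ARNs entirely is to ask AWS for the canonical one (a sketch, assuming a configured AWS CLI and the queue from the example); note the colon, not a slash, before the queue name:
aws sqs get-queue-attributes \
  --queue-url https://sqs.us-west-2.amazonaws.com/982028098624/bigtier-dev-sqs \
  --attribute-names QueueArn
# Returns "QueueArn": "arn:aws:sqs:us-west-2:982028098624:bigtier-dev-sqs"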

HashiCorp Vault: "Code: 400. Errors" error message

When using Vault Agent with a secret ID file, I received the following error message:
$ ./vault agent --config auth_config.hcl
==> Vault server started! Log data will stream in below:
==> Vault agent configuration:
Api Address 1: http://127.0.0.1:8300
Cgo: disabled
Log Level: info
Version: Vault v1.3.0
2020-02-04T14:08:28.352-0800 [INFO] auth.handler: starting auth handler
2020-02-04T14:08:28.352-0800 [INFO] auth.handler: authenticating
2020-02-04T14:08:28.352-0800 [INFO] sink.server: starting sink server
2020-02-04T14:08:28.352-0800 [INFO] template.server: starting template server
2020-02-04T14:08:28.352-0800 [INFO] template.server: no templates found
2020-02-04T14:08:28.352-0800 [INFO] template.server: template server stopped
2020-02-04T14:08:28.354-0800 [ERROR] auth.handler: error authenticating: error="Error making API request.
URL: PUT http://127.0.0.1:8200/v1/auth/approle/login
Code: 400. Errors:
* invalid secret id" backoff=2.190384035
The command I executed was:
vault agent --config auth_config.hcl
The contents of my auth_config.hcl file are:
vault {
  address = "http://127.0.0.1:8200"
}

auto_auth {
  method "approle" {
    config {
      role_id_file_path = "./role_id"
      secret_id_file_path = "./secret_id"
      remove_secret_id_file_after_reading = false
    }
  }
}

cache {
  use_auto_auth_token = true
}

listener "tcp" {
  address = "127.0.0.1:8300"
  tls_disable = true
}
My secret ID was generated using the following command:
vault write -f auth/approle/role/payments_service/secret-id -format=json | sed -E -n 's/.*"secret_id": "([^"]*).*/\1/p' > secret_id
Why is this error happening?
I found that the usual reason this happens is that the secret ID file wasn't generated correctly in the first place; see this GitHub thread for an example. Unfortunately, in my case the file was generated: the secret_id file referenced in auth_config.hcl contained the secret ID.
In my case, the problem was that after I generated the secret_id file, I executed vault write -f auth/approle/role/payments_service/secret-id a second time. This second command didn't overwrite the original file with a new secret ID; instead it issued a fresh secret ID, which invalidated the one already written to the secret_id file.
My solution was to rerun the command that writes the secret ID to the secret_id file and then immediately run the Vault Agent. Problem solved.
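Concretely, the sequence that worked was the same two commands from above, run back to back so that no second secret ID is issued in between:
vault write -f auth/approle/role/payments_service/secret-id -format=json | sed -E -n 's/.*"secret_id": "([^"]*).*/\1/p' > secret_id
vault agent --config auth_config.hcl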
In my case it was because the app (kes) was trying to use HTTP instead of HTTPS to connect to Vault, while TLS was enabled in both Vault and the app (kes). Once the address was updated, the app could connect to Vault without any issue:
Error: failed to connect to Vault: Error making API request.
URL: PUT http://vault.vault:8200/v1/auth/approle/login
Code: 400. Raw Message:
Client sent an HTTP request to an HTTPS server.
Authenticating to Hashicorp Vault 'http://vault.vault:8200'
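A quick way to confirm this mismatch (a sketch; vault-ca.pem is a hypothetical CA bundle): a TLS-enabled listener answers a plaintext request with exactly this message, while the HTTPS form succeeds.
curl http://vault.vault:8200/v1/sys/health
# -> Client sent an HTTP request to an HTTPS server.
curl --cacert vault-ca.pem https://vault.vault:8200/v1/sys/health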

Azure CD issue: Failed to fetch App Service 'myAppServiceName' publishing credentials

I'm trying to deploy my release to an Azure Web App. It's not working and I don't know what to do. Maybe I'm missing something in the configuration of my App Service or of my release pipeline. I get the following error:
Failed to fetch App Service 'myAppServiceName' publishing credentials. Error: Could not fetch access token for Managed Service Principal.
And here is a block of my debug log:
2019-04-11T08:25:35.4761242Z ##[debug]Predeployment Step Started
2019-04-11T08:25:35.4776374Z ##[debug]33ddf4aa-03c4-4031-95fa-e2083d49cc9e data subscriptionid = xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
2019-04-11T08:25:35.4776793Z ##[debug]33ddf4aa-03c4-4031-95fa-e2083d49cc9e data subscriptionname = Paiement à l’utilisation
2019-04-11T08:25:35.4777798Z ##[debug]33ddf4aa-03c4-4031-95fa-e2083d49cc9e auth param serviceprincipalid = null
2019-04-11T08:25:35.4778094Z ##[debug]33ddf4aa-03c4-4031-95fa-e2083d49cc9e data environmentAuthorityUrl = https://login.windows.net/
2019-04-11T08:25:35.4781237Z ##[debug]33ddf4aa-03c4-4031-95fa-e2083d49cc9e auth param tenantid = ***
2019-04-11T08:25:35.4782509Z ##[debug]33ddf4aa-03c4-4031-95fa-e2083d49cc9e=https://management.azure.com/
2019-04-11T08:25:35.4782769Z ##[debug]33ddf4aa-03c4-4031-95fa-e2083d49cc9e data environment = AzureCloud
2019-04-11T08:25:35.4785012Z ##[debug]33ddf4aa-03c4-4031-95fa-e2083d49cc9e auth scheme = ManagedServiceIdentity
2019-04-11T08:25:35.4785626Z ##[debug]33ddf4aa-03c4-4031-95fa-e2083d49cc9e data msiclientId = undefined
2019-04-11T08:25:35.4785882Z ##[debug]33ddf4aa-03c4-4031-95fa-e2083d49cc9e data activeDirectoryServiceEndpointResourceId = https://management.core.windows.net/
2019-04-11T08:25:35.4786107Z ##[debug]33ddf4aa-03c4-4031-95fa-e2083d49cc9e data AzureKeyVaultServiceEndpointResourceId = https://vault.azure.net
2019-04-11T08:25:35.4786348Z ##[debug]33ddf4aa-03c4-4031-95fa-e2083d49cc9e data AzureKeyVaultDnsSuffix = vault.azure.net
2019-04-11T08:25:35.4786525Z ##[debug]33ddf4aa-03c4-4031-95fa-e2083d49cc9e auth param authenticationType = null
2019-04-11T08:25:35.4786735Z ##[debug]33ddf4aa-03c4-4031-95fa-e2083d49cc9e data EnableAdfsAuthentication = false
2019-04-11T08:25:35.4792324Z ##[debug]{"subscriptionID":"mysubscriptionID","subscriptionName":"Paiement à l’utilisation","servicePrincipalClientID":null,"environmentAuthorityUrl":"https://login.windows.net/","tenantID":"***","url":"https://management.azure.com/","environment":"AzureCloud","scheme":"ManagedServiceIdentity","activeDirectoryResourceID":"https://management.azure.com/","azureKeyVaultServiceEndpointResourceId":"https://vault.azure.net","azureKeyVaultDnsSuffix":"vault.azure.net","authenticationType":null,"isADFSEnabled":false,"applicationTokenCredentials":{"clientId":null,"domain":"***","baseUrl":"https://management.azure.com/","authorityUrl":"https://login.windows.net/","activeDirectoryResourceId":"https://management.azure.com/","isAzureStackEnvironment":false,"scheme":0,"isADFSEnabled":false}}
2019-04-11T08:25:35.4809400Z Got service connection details for Azure App Service:'myAppServiceName'
2019-04-11T08:25:35.4846967Z ##[debug][GET]http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/
2019-04-11T08:25:35.5443632Z ##[debug]Deployment Failed with Error: Error: Failed to fetch App Service 'myAppServiceName' publishing credentials. Error: Could not fetch access token for Managed Service Principal. Please configure Managed Service Identity (MSI) for virtual machine 'https://aka.ms/azure-msi-docs'. Status code: 400, status message: Bad Request
2019-04-11T08:25:35.5444488Z ##[debug]task result: Failed
2019-04-11T08:25:35.5501745Z ##[error]Error: Failed to fetch App Service 'myAppServiceName' publishing credentials. Error: Could not fetch access token for Managed Service Principal. Please configure Managed Service Identity (MSI) for virtual machine 'https://aka.ms/azure-msi-docs'. Status code: 400, status message: Bad Request
2019-04-11T08:25:35.5511780Z ##[debug]Processed: ##vso[task.issue type=error;]Error: Failed to fetch App Service 'myAppServiceName' publishing credentials. Error: Could not fetch access token for Managed Service Principal. Please configure Managed Service Identity (MSI) for virtual machine 'https://aka.ms/azure-msi-docs'. Status code: 400, status message: Bad Request
2019-04-11T08:25:35.5512729Z ##[debug]Processed: ##vso[task.complete result=Failed;]Error: Failed to fetch App Service 'myAppServiceName' publishing credentials. Error: Could not fetch access token for Managed Service Principal. Please configure Managed Service Identity (MSI) for virtual machine 'https://aka.ms/azure-msi-docs'. Status code: 400, status message: Bad Request
2019-04-11T08:25:35.5512828Z Failed to add release annotation. Error: Failed to get App service 'myAppServiceName' application settings. Error: Could not fetch access token for Managed Service Principal. Please configure Managed Service Identity (MSI) for virtual machine 'https://aka.ms/azure-msi-docs'. Status code: 400, status message: Bad Request
2019-04-11T08:25:35.5645194Z (node:5004) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 1): Error: Failed to fetch App Service 'myAppServiceName' publishing profile. Error: Could not fetch access token for Managed Service Principal. Please configure Managed Service Identity (MSI) for virtual machine 'https://aka.ms/azure-msi-docs'. Status code: 400, status message: Bad Request
2019-04-11T08:25:35.5759915Z ##[section]Finishing: Deploy Azure App Service
And some screenshots (images omitted): the Azure configuration that may be missing, and release pipeline configs 1, 2, and 3.
Let me know if you need more information. I'm new at this, so I may be missing something simple. Best regards.
Do you have the identity Status set to On, like below? (screenshot omitted)
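If this refers to the Web App's system-assigned identity, one way to check it from the CLI instead of the portal (a sketch; myResourceGroup is a hypothetical resource group name):
az webapp identity show --name myAppServiceName --resource-group myResourceGroup
# Prints principalId/tenantId when the identity is enabled.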
In my case, we had just moved our App Service to a new resource group, but the pipeline was still referencing the old resource group. Correcting the resource group fixed the issue.
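A sketch of how to find which resource group the App Service actually lives in now (assuming the Azure CLI):
az webapp list --query "[?name=='myAppServiceName'].resourceGroup" --output tsv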
A simple typo can also be the reason for this error message.
You will get this error even if it's just a typo or a wrong value in your "slotName".
Please ensure that the "slotName" you've given is the actual slot name (the default is 'production'). So if you've added a slot called 'stage', then inside the portal it will appear as '/stage' or '-stage', but it's still just called 'stage'. See the sketch below for one way to list the real slot names.
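One way to list the real slot names (a sketch; myResourceGroup is a hypothetical resource group name):
az webapp deployment slot list --name myAppServiceName \
  --resource-group myResourceGroup --query "[].name" --output table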
I know several people have seen this error message and none of the above helped them (I faced the same issue the first time).
My research indicated this to be an intermittent problem.
I redeployed twice and it worked.
The first redeploy just seemed to wait for ages to connect to an available agent, so I cancelled that too and redeployed again, and that attempt worked without any issue.
If this is still an issue for someone, all I did was rerun the release and it went well. Hopefully re-releasing saves someone some time; if that doesn't work, then try something else.