AWS CodePipeline GitHub webhook not triggering on commit

I set up an AWS CodePipeline that uses GitHub as a source, CodeBuild for the build, and deploys to Elastic Beanstalk.
I was able to get it working when everything was set up in the console and I was an admin of the GitHub account (I used a different account for testing).
The actual code I need to deploy belongs to an account where I am not an admin, so, following this guide, I received a personal access token and updated the CodePipeline using the CLI.
Once I updated the project using the CLI, it no longer gets triggered when the code is committed.
I'm not sure what changed, because it still doesn't work even when I use the console and set up the webhook directly as an admin of the GitHub account I was testing with.
This is the JSON I updated the pipeline with:
{
  "pipeline": {
    "roleArn": "arn:aws:iam::xxxxxxx:role/service-role/AWSCodePipelineServiceRole-us-west-2-xxxxx-xxxx",
    "stages": [
      {
        "name": "Source",
        "actions": [
          {
            "inputArtifacts": [],
            "name": "Source",
            "actionTypeId": {
              "category": "Source",
              "owner": "ThirdParty",
              "version": "1",
              "provider": "GitHub"
            },
            "outputArtifacts": [
              {
                "name": "SourceArtifact"
              }
            ],
            "configuration": {
              "Owner": "xxx",
              "Repo": "xxx",
              "PollForSourceChanges": "false",
              "Branch": "stage"
            },
            "runOrder": 1
          }
        ]
      },
      {
        "name": "Build",
        "actions": [
          {
            "inputArtifacts": [
              {
                "name": "SourceArtifact"
              }
            ],
            "name": "Build",
            "actionTypeId": {
              "category": "Build",
              "owner": "AWS",
              "version": "1",
              "provider": "CodeBuild"
            },
            "outputArtifacts": [
              {
                "name": "BuildArtifact"
              }
            ],
            "configuration": {
              "ProjectName": "xxx-stage-codebuild"
            },
            "runOrder": 1
          }
        ]
      },
      {
        "name": "Deploy",
        "actions": [
          {
            "inputArtifacts": [
              {
                "name": "BuildArtifact"
              }
            ],
            "name": "Deploy",
            "actionTypeId": {
              "category": "Deploy",
              "owner": "AWS",
              "version": "1",
              "provider": "ElasticBeanstalk"
            },
            "outputArtifacts": [],
            "configuration": {
              "ApplicationName": "xxx",
              "EnvironmentName": "xxx-stage"
            },
            "runOrder": 1
          }
        ]
      }
    ],
    "artifactStore": {
      "type": "S3",
      "location": "xxx-artifacts-stage"
    },
    "name": "xxx-stage",
    "version": 15
  }
}
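(For context: a definition like this is applied with the CLI's update-pipeline command. The exact invocation isn't shown in the question; it would be something like the following, after removing the "metadata" block that get-pipeline returns:)

aws codepipeline get-pipeline --name xxx-stage > pipeline.json
# edit pipeline.json and delete the top-level "metadata" key, then:
aws codepipeline update-pipeline --cli-input-json file://pipeline.json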

To fix the webhook for the updated GitHub source, you need to perform the following steps:
Use the steps in [1] to deregister and delete the existing webhook that is associated with the old GitHub repository.
Use the steps in [2] to recreate the webhook (a CLI sketch follows the references below).
Ref:
[1] Delete the Webhook for Your GitHub Source - https://docs.aws.amazon.com/codepipeline/latest/userguide/pipelines-webhooks-delete.html
[2] Create a Webhook for a GitHub Source - https://docs.aws.amazon.com/codepipeline/latest/userguide/pipelines-webhooks-create.html
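For reference, a rough CLI sketch of [1] and [2]; the webhook name and the contents of webhook.json are placeholders, not taken from the question:

# [1] Deregister the webhook from GitHub, then delete it from CodePipeline:
aws codepipeline deregister-webhook-with-third-party --webhook-name my-pipeline-webhook
aws codepipeline delete-webhook --name my-pipeline-webhook

# [2] Recreate the webhook (webhook.json defines name, targetPipeline,
# targetAction, filters, and the GITHUB_HMAC authentication) and register
# it with GitHub, which requires a token with the admin:repo_hook scope:
aws codepipeline put-webhook --cli-input-json file://webhook.json
aws codepipeline register-webhook-with-third-party --webhook-name my-pipeline-webhook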
Let me know if you come across any challenges.


Can't see my custom extension on the Azure DevOps Marketplace

My issue
I created an Azure DevOps extension task, deployed it under a publisher, and shared it. But I can't find it on the Marketplace.
What I did
This is my task.json:
{
  "id": "0f6ee401-2a8e-4110-b51d-c8d05086c1d0",
  "name": "deployRG",
  "category": "Utility",
  "visibility": [
    "Build",
    "Release"
  ],
  "demands": [],
  "version": {
    "Major": "0",
    "Minor": "1",
    "Patch": "0"
  },
  "instanceNameFormat": "DeployRG $(name)",
  "groups": [],
  "inputs": [
    {
      "name": "Name",
      "type": "string",
      "label": "RG name",
      "defaultValue": "",
      "required": true
    }
  ],
  "execution": {
    "PowerShell3": {
      "target": "CreateRG.ps1"
    }
  }
}
My manifest vss-extension.json:
{
  "manifestVersion": 1,
  "id": "deployRG",
  "version": "0.1.0",
  "name": "Deploy RG",
  "publisher": "Amethyste-MyTasks",
  "public": false,
  "categories": [
    "Azure Pipelines"
  ],
  "tags": [
    "amethyste"
  ],
  "contributions": [
    {
      "id": "DeployRG",
      "type": "ms.vss-distributed-task.task",
      "targets": [
        "ms.vss-distributed-task.tasks"
      ],
      "properties": {
        "name": "DeployRG"
      }
    }
  ],
  "targets": [
    {
      "id": "Microsoft.VisualStudio.Services"
    }
  ],
  "files": [
    {
      "path": "DeployRG",
      "packagePath": "DeployRG"
    },
    {
      "path": "VstsTaskSdk"
    }
  ]
}
What I checked
I am an owner of the organization and belong to the Project Collection Administrators group.
What I need
I checked some tutorials on the Internet and can't see what I'm doing wrong.
Does anybody have an idea?
Thank you
Aargh, I have just found it, and it's easy.
After sharing, one should install the extension into the organization, as indicated here:
https://learn.microsoft.com/en-us/azure/devops/extend/publish/overview?view=azure-devops
I don't know why so many tutorials skip this step.
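For reference, here is a minimal tfx-cli sequence for this flow; the organization name is a made-up placeholder, and authentication details (e.g. --token) are omitted:

npm install -g tfx-cli

# Package the extension from the manifest shown in the question:
tfx extension create --manifest-globs vss-extension.json

# Publish it and share it with one organization:
tfx extension publish --manifest-globs vss-extension.json --share-with my-org

# Sharing is not installing: after this, go to Organization settings ->
# Extensions -> Shared and install the extension into the organization.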

AWS CloudFormation stuck in UPDATE_ROLLBACK_FAILED

I deploy my AWS Lambdas via the AWS Serverless Application Model (SAM). One of my Lambdas uses NumPy, which I reference via a third-party layer from Klayers by keithRozario. I was using Klayers-python38-numpy:16 but discovered that it was deprecated after I deployed today, which left my stack in an UPDATE_ROLLBACK_FAILED state.
One recommendation is to use Stack actions -> Continue update rollback from the AWS console, which I tried, but it didn't work. The other solution is to delete the stack. However, this would be my first time deleting a stack, and what I'd like to know is: if I delete my stack via the console, will my stack get recreated when I redeploy it? I've looked for answers to my question, but I'm only finding references to deleting resources within the stack.
What I'd also like to know is: my stack is the first stack of many in an AWS CodePipeline, so will my pipeline still work if I delete my stack? Further, will I experience any more failed stacks as I proceed to subsequent stacks within my pipeline?
Lastly, the plan is to update to Klayers-python38-numpy:19 when I redeploy.
EDIT: as per @marcin:
The problem is that Klayers-python38-numpy:16, which is already deployed throughout my stack, is no longer available. When I tried deploying a change to my code this morning, my pipeline failed during the CreateChangeSet step. The fact that this layer is no longer available is, I'm assuming, the reason my stack is unable to roll back.
My pipeline looks like this:
{
  "pipeline": {
    "name": "my-pipeline",
    "roleArn": "arn:aws:iam::123456789:role/my-pipeline-CodePipelineExecutionRole-4O8PAUJGLXYZ",
    "artifactStore": {
      "type": "S3",
      "location": "my-pipeline-buildartifactsbucket-62byf2xqaa8z"
    },
    "stages": [
      {
        "name": "Source",
        "actions": [
          {
            "name": "SourceCodeRepo",
            "actionTypeId": {
              "category": "Source",
              "owner": "ThirdParty",
              "provider": "GitHub",
              "version": "1"
            },
            "runOrder": 1,
            "configuration": {
              "Branch": "master",
              "OAuthToken": "****",
              "Owner": "hugo",
              "Repo": "my-pipeline"
            },
            "outputArtifacts": [
              {
                "name": "SourceCodeAsZip"
              }
            ],
            "inputArtifacts": []
          }
        ]
      },
      {
        "name": "Build",
        "actions": [
          {
            "name": "CodeBuild",
            "actionTypeId": {
              "category": "Build",
              "owner": "AWS",
              "provider": "CodeBuild",
              "version": "1"
            },
            "runOrder": 1,
            "configuration": {
              "ProjectName": "my-pipeline"
            },
            "outputArtifacts": [
              {
                "name": "BuildArtifactAsZip"
              }
            ],
            "inputArtifacts": [
              {
                "name": "SourceCodeAsZip"
              }
            ]
          }
        ]
      },
      {
        "name": "CI",
        "actions": [
          {
            "name": "CreateChangeSet",
            "actionTypeId": {
              "category": "Deploy",
              "owner": "AWS",
              "provider": "CloudFormation",
              "version": "1"
            },
            "runOrder": 1,
            "configuration": {
              "ActionMode": "CHANGE_SET_REPLACE",
              "Capabilities": "CAPABILITY_IAM",
              "ChangeSetName": "my-pipeline-ChangeSet-ci",
              "ParameterOverrides": "{\n \"MyEnvironment\" : \"ci\"\n}\n",
              "RoleArn": "arn:aws:iam::123456789:role/my-pipeline-CloudFormationExecutionRole-1O8GOB5C2VXYZ",
              "StackName": "my-pipeline-ci",
              "TemplatePath": "BuildArtifactAsZip::packaged.yaml"
            },
            "outputArtifacts": [],
            "inputArtifacts": [
              {
                "name": "BuildArtifactAsZip"
              }
            ]
          },
          {
            "name": "ExecuteChangeSet",
            "actionTypeId": {
              "category": "Deploy",
              "owner": "AWS",
              "provider": "CloudFormation",
              "version": "1"
            },
            "runOrder": 2,
            "configuration": {
              "ActionMode": "CHANGE_SET_EXECUTE",
              "ChangeSetName": "my-pipeline-ChangeSet-ci",
              "RoleArn": "arn:aws:iam::123456789:role/my-pipeline-CloudFormationExecutionRole-1O8GOB5C2VXYZ",
              "StackName": "my-pipeline-ci"
            },
            "outputArtifacts": [
              {
                "name": "my-pipelineCIChangeSet"
              }
            ],
            "inputArtifacts": []
          }
        ]
      },
      {
        "name": "Staging",
        "actions": [
          {
            "name": "CreateChangeSet",
            "actionTypeId": {
              "category": "Deploy",
              "owner": "AWS",
              "provider": "CloudFormation",
              "version": "1"
            },
            "runOrder": 1,
            "configuration": {
              "ActionMode": "CHANGE_SET_REPLACE",
              "Capabilities": "CAPABILITY_IAM",
              "ChangeSetName": "my-pipeline-ChangeSet-staging",
              "ParameterOverrides": "{\n \"MyEnvironment\" : \"staging\"\n}\n",
              "RoleArn": "arn:aws:iam::123456789:role/my-pipeline-CloudFormationExecutionRole-1O8GOB5C2VXYZ",
              "StackName": "my-pipeline-staging",
              "TemplatePath": "BuildArtifactAsZip::packaged.yaml"
            },
            "outputArtifacts": [],
            "inputArtifacts": [
              {
                "name": "BuildArtifactAsZip"
              }
            ]
          },
          {
            "name": "ExecuteChangeSet",
            "actionTypeId": {
              "category": "Deploy",
              "owner": "AWS",
              "provider": "CloudFormation",
              "version": "1"
            },
            "runOrder": 2,
            "configuration": {
              "ActionMode": "CHANGE_SET_EXECUTE",
              "ChangeSetName": "my-pipeline-ChangeSet-staging",
              "RoleArn": "arn:aws:iam::123456789:role/my-pipeline-CloudFormationExecutionRole-1O8GOB5C2VXYZ",
              "StackName": "my-pipeline-staging"
            },
            "outputArtifacts": [
              {
                "name": "my-pipelineStagingChangeSet"
              }
            ],
            "inputArtifacts": []
          }
        ]
      },
      {
        "name": "Prod",
        "actions": [
          {
            "name": "DeploymentApproval",
            "actionTypeId": {
              "category": "Approval",
              "owner": "AWS",
              "provider": "Manual",
              "version": "1"
            },
            "runOrder": 1,
            "configuration": {},
            "outputArtifacts": [],
            "inputArtifacts": []
          },
          {
            "name": "CreateChangeSet",
            "actionTypeId": {
              "category": "Deploy",
              "owner": "AWS",
              "provider": "CloudFormation",
              "version": "1"
            },
            "runOrder": 2,
            "configuration": {
              "ActionMode": "CHANGE_SET_REPLACE",
              "Capabilities": "CAPABILITY_IAM",
              "ChangeSetName": "my-pipeline-ChangeSet-prod",
              "ParameterOverrides": "{\n \"MyEnvironment\" : \"prod\"\n}\n",
              "RoleArn": "arn:aws:iam::123456789:role/my-pipeline-CloudFormationExecutionRole-1O8GOB5C2VXYZ",
              "StackName": "my-pipeline-prod",
              "TemplatePath": "BuildArtifactAsZip::packaged.yaml"
            },
            "outputArtifacts": [],
            "inputArtifacts": [
              {
                "name": "BuildArtifactAsZip"
              }
            ]
          },
          {
            "name": "ExecuteChangeSet",
            "actionTypeId": {
              "category": "Deploy",
              "owner": "AWS",
              "provider": "CloudFormation",
              "version": "1"
            },
            "runOrder": 3,
            "configuration": {
              "ActionMode": "CHANGE_SET_EXECUTE",
              "ChangeSetName": "my-pipeline-ChangeSet-prod",
              "RoleArn": "arn:aws:iam::123456789:role/my-pipeline-CloudFormationExecutionRole-1O8GOB5C2VXYZ",
              "StackName": "my-pipeline-prod"
            },
            "outputArtifacts": [
              {
                "name": "my-pipelineProdChangeSet"
              }
            ],
            "inputArtifacts": []
          }
        ]
      }
    ],
    "version": 1
  }
}
if I delete my stack via the console, will my stack get recreated when I redeploy it?
Yes. You can try to deploy the same stack again, but you should probably investigate why it failed in the first place.
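A minimal sketch of that delete-and-redeploy path from the CLI (the stack name is taken from the pipeline above; whether you rerun the pipeline or run sam deploy manually afterwards is up to you):

aws cloudformation delete-stack --stack-name my-pipeline-ci
aws cloudformation wait stack-delete-complete --stack-name my-pipeline-ci
# Rerunning the pipeline (or a manual `sam deploy`) then recreates the
# stack from scratch.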
What I'd also like to know is, my stack is the first stack of many in an AWS CodePipeline, will my pipeline still work if I delete my stack?
I don't know, but probably not. It's use-case specific, and you haven't provided any info about the CodePipeline.
Further, will I experience anymore failed stacks as I proceed to subsequent stacks within my pipeline?
If one action fails, you can't proceed with further actions. Even if you could, other stacks can depend on the first one, and they will fail as well.
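Also, before deleting anything, it may be worth retrying the rollback from the CLI, which, unlike the console button, lets you skip the resource that cannot be rolled back (the logical ID below is a made-up placeholder; find the real one in the stack's failed events):

aws cloudformation continue-update-rollback \
    --stack-name my-pipeline-ci \
    --resources-to-skip MyNumpyFunction
# Skipped resources are left in their current state; the rest of the
# stack rolls back, clearing the UPDATE_ROLLBACK_FAILED status.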

How to create an ETL from BigQuery to Google Storage using CDAP?

I'm setting up CDAP in my Google Cloud environment, but I'm having problems executing the following pipeline: run a query on BigQuery and save the result as a CSV file on Google Storage.
My process was:
Install CDAP using the CDAP OSS image from the Google Cloud Marketplace.
Build the following pipeline:
{
  "artifact": {
    "name": "cdap-data-pipeline",
    "version": "6.0.0",
    "scope": "SYSTEM"
  },
  "description": "Data Pipeline Application",
  "name": "cdap_dsc_test",
  "config": {
    "resources": {
      "memoryMB": 2048,
      "virtualCores": 1
    },
    "driverResources": {
      "memoryMB": 2048,
      "virtualCores": 1
    },
    "connections": [
      {
        "from": "BigQuery",
        "to": "Google Cloud Storage"
      }
    ],
    "comments": [],
    "postActions": [],
    "properties": {},
    "processTimingEnabled": true,
    "stageLoggingEnabled": true,
    "stages": [
      {
        "name": "BigQuery",
        "plugin": {
          "name": "BigQueryTable",
          "type": "batchsource",
          "label": "BigQuery",
          "artifact": {
            "name": "google-cloud",
            "version": "0.12.2",
            "scope": "SYSTEM"
          },
          "properties": {
            "project": "bi-data-science",
            "serviceFilePath": "/home/ubuntu/bi-data-science-cdap-4cbf526de374.json",
            "schema": "{\"type\":\"record\",\"name\":\"etlSchemaBody\",\"fields\":[{\"name\":\"destination_name\",\"type\":[\"string\",\"null\"]},{\"name\":\"destination_country\",\"type\":[\"string\",\"null\"]},{\"name\":\"timestamp\",\"type\":[\"double\",\"null\"]},{\"name\":\"desktop\",\"type\":[\"double\",\"null\"]},{\"name\":\"tablet\",\"type\":[\"double\",\"null\"]},{\"name\":\"mobile\",\"type\":[\"double\",\"null\"]}]}",
            "referenceName": "test_tables",
            "dataset": "google_trends",
            "table": "devices"
          }
        },
        "outputSchema": [
          {
            "name": "etlSchemaBody",
            "schema": "{\"type\":\"record\",\"name\":\"etlSchemaBody\",\"fields\":[{\"name\":\"destination_name\",\"type\":[\"string\",\"null\"]},{\"name\":\"destination_country\",\"type\":[\"string\",\"null\"]},{\"name\":\"timestamp\",\"type\":[\"double\",\"null\"]},{\"name\":\"desktop\",\"type\":[\"double\",\"null\"]},{\"name\":\"tablet\",\"type\":[\"double\",\"null\"]},{\"name\":\"mobile\",\"type\":[\"double\",\"null\"]}]}"
          }
        ]
      },
      {
        "name": "Google Cloud Storage",
        "plugin": {
          "name": "GCS",
          "type": "batchsink",
          "label": "Google Cloud Storage",
          "artifact": {
            "name": "google-cloud",
            "version": "0.12.2",
            "scope": "SYSTEM"
          },
          "properties": {
            "project": "bi-data-science",
            "suffix": "yyyy-MM-dd",
            "format": "json",
            "serviceFilePath": "/home/ubuntu/bi-data-science-cdap-4cbf526de374.json",
            "schema": "{\"type\":\"record\",\"name\":\"etlSchemaBody\",\"fields\":[{\"name\":\"destination_name\",\"type\":[\"string\",\"null\"]},{\"name\":\"destination_country\",\"type\":[\"string\",\"null\"]},{\"name\":\"timestamp\",\"type\":[\"double\",\"null\"]},{\"name\":\"desktop\",\"type\":[\"double\",\"null\"]},{\"name\":\"tablet\",\"type\":[\"double\",\"null\"]},{\"name\":\"mobile\",\"type\":[\"double\",\"null\"]}]}",
            "delimiter": ",",
            "referenceName": "gcs_cdap",
            "path": "gs://hurb_sandbox/cdap_experiments/"
          }
        },
        "outputSchema": [
          {
            "name": "etlSchemaBody",
            "schema": "{\"type\":\"record\",\"name\":\"etlSchemaBody\",\"fields\":[{\"name\":\"destination_name\",\"type\":[\"string\",\"null\"]},{\"name\":\"destination_country\",\"type\":[\"string\",\"null\"]},{\"name\":\"timestamp\",\"type\":[\"double\",\"null\"]},{\"name\":\"desktop\",\"type\":[\"double\",\"null\"]},{\"name\":\"tablet\",\"type\":[\"double\",\"null\"]},{\"name\":\"mobile\",\"type\":[\"double\",\"null\"]}]}"
          }
        ],
        "inputSchema": [
          {
            "name": "BigQuery",
            "schema": "{\"type\":\"record\",\"name\":\"etlSchemaBody\",\"fields\":[{\"name\":\"destination_name\",\"type\":[\"string\",\"null\"]},{\"name\":\"destination_country\",\"type\":[\"string\",\"null\"]},{\"name\":\"timestamp\",\"type\":[\"double\",\"null\"]},{\"name\":\"desktop\",\"type\":[\"double\",\"null\"]},{\"name\":\"tablet\",\"type\":[\"double\",\"null\"]},{\"name\":\"mobile\",\"type\":[\"double\",\"null\"]}]}"
          }
        ]
      }
    ],
    "schedule": "0 * * * *",
    "engine": "mapreduce",
    "numOfRecordsPreview": 100,
    "description": "Data Pipeline Application",
    "maxConcurrentRuns": 1
  }
}
The credential key has owner privileges and I'm able to access the query result using the "preview" option.
Pipeline result:
Files:
_SUCCESS (empty)
part-r-00000 (query result)
No CSV file has been generated, and I also haven't found a place in CDAP where I can set a name for my output file. Did I miss any configuration step?
Update:
We eventually gave up on CDAP and are using Google Dataflow instead.
When configuring the GCS sink in the pipeline, there is a 'format' field, which you have set to JSON. You can set this to CSV to achieve the format you would like.
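Concretely, in the pipeline JSON above, only the format value on the 'Google Cloud Storage' sink stage needs to change (the schema property is omitted here for brevity):

"properties": {
  "project": "bi-data-science",
  "suffix": "yyyy-MM-dd",
  "format": "csv",
  "serviceFilePath": "/home/ubuntu/bi-data-science-cdap-4cbf526de374.json",
  "delimiter": ",",
  "referenceName": "gcs_cdap",
  "path": "gs://hurb_sandbox/cdap_experiments/"
}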

How do I create Virtual Machine with WinRM from an ARM Template?

I'm running into an issue when I attempt to run the 'Azure Resource Group Deploy' release task to create/update a resource group and the resources within it via an ARM template. In particular, I need the virtual machine created by the ARM template to be accessible via WinRM; this needs to be done so that I can copy files (specifically, a ZIP file containing the results of a build) to the VM in a later step.
Currently, I have the 'Template' portion of this task set up as follows: https://i.imgur.com/mvZDIMK.jpg (I can't post images since I don't have reputation here yet...)
Unless I've misunderstood (which is definitely possible), the "Configure with WinRM" option should allow the release step to create a WinRM Listener on any Virtual Machines created by this step.
I currently have the following resources in the ARM Template:
{
  "type": "Microsoft.Storage/storageAccounts",
  "sku": {
    "name": "Standard_LRS",
    "tier": "Standard"
  },
  "kind": "Storage",
  "name": "[variables('StorageAccountName')]",
  "apiVersion": "2018-02-01",
  "location": "[parameters('LocationPrimary')]",
  "scale": null,
  "tags": {},
  "properties": {
    "networkAcls": {
      "bypass": "AzureServices",
      "virtualNetworkRules": [],
      "ipRules": [],
      "defaultAction": "Allow"
    },
    "supportsHttpsTrafficOnly": false,
    "encryption": {
      "services": {
        "file": {
          "enabled": true
        },
        "blob": {
          "enabled": true
        }
      },
      "keySource": "Microsoft.Storage"
    }
  },
  "dependsOn": []
},
{
  "name": "[variables('NetworkInterfaceName')]",
  "type": "Microsoft.Network/networkInterfaces",
  "apiVersion": "2018-04-01",
  "location": "[parameters('LocationPrimary')]",
  "dependsOn": [
    "[concat('Microsoft.Network/networkSecurityGroups/', variables('NetworkSecurityGroupName'))]",
    "[concat('Microsoft.Network/virtualNetworks/', variables('VNetName'))]",
    "[concat('Microsoft.Network/publicIpAddresses/', variables('PublicIPAddressName'))]"
  ],
  "properties": {
    "ipConfigurations": [
      {
        "name": "ipconfig1",
        "properties": {
          "subnet": {
            "id": "[variables('subnetRef')]"
          },
          "privateIPAllocationMethod": "Dynamic",
          "publicIpAddress": {
            "id": "[resourceId(resourceGroup().name, 'Microsoft.Network/publicIpAddresses', variables('PublicIPAddressName'))]"
          }
        }
      }
    ],
    "networkSecurityGroup": {
      "id": "[variables('nsgId')]"
    }
  },
  "tags": {}
},
{
  "name": "[variables('NetworkSecurityGroupName')]",
  "type": "Microsoft.Network/networkSecurityGroups",
  "apiVersion": "2018-08-01",
  "location": "[parameters('LocationPrimary')]",
  "properties": {
    "securityRules": [
      {
        "name": "RDP",
        "properties": {
          "priority": 300,
          "protocol": "TCP",
          "access": "Allow",
          "direction": "Inbound",
          "sourceAddressPrefix": "*",
          "sourcePortRange": "*",
          "destinationAddressPrefix": "*",
          "destinationPortRange": "3389"
        }
      }
    ]
  },
  "tags": {}
},
{
  "name": "[variables('VNetName')]",
  "type": "Microsoft.Network/virtualNetworks",
  "apiVersion": "2018-08-01",
  "location": "[parameters('LocationPrimary')]",
  "properties": {
    "addressSpace": {
      "addressPrefixes": [ "10.0.0.0/24" ]
    },
    "subnets": [
      {
        "name": "default",
        "properties": {
          "addressPrefix": "10.0.0.0/24"
        }
      }
    ]
  },
  "tags": {}
},
{
  "name": "[variables('PublicIPAddressName')]",
  "type": "Microsoft.Network/publicIpAddresses",
  "apiVersion": "2018-08-01",
  "location": "[parameters('LocationPrimary')]",
  "properties": {
    "publicIpAllocationMethod": "Dynamic"
  },
  "sku": {
    "name": "Basic"
  },
  "tags": {}
},
{
  "name": "[variables('VMName')]",
  "type": "Microsoft.Compute/virtualMachines",
  "apiVersion": "2018-06-01",
  "location": "[parameters('LocationPrimary')]",
  "dependsOn": [
    "[concat('Microsoft.Network/networkInterfaces/', variables('NetworkInterfaceName'))]",
    "[concat('Microsoft.Storage/storageAccounts/', variables('StorageAccountName'))]"
  ],
  "properties": {
    "hardwareProfile": {
      "vmSize": "Standard_A7"
    },
    "storageProfile": {
      "osDisk": {
        "createOption": "fromImage",
        "managedDisk": {
          "storageAccountType": "Standard_LRS"
        }
      },
      "imageReference": {
        "publisher": "MicrosoftWindowsDesktop",
        "offer": "Windows-10",
        "sku": "rs4-pro",
        "version": "latest"
      }
    },
    "networkProfile": {
      "networkInterfaces": [
        {
          "id": "[resourceId('Microsoft.Network/networkInterfaces', variables('NetworkInterfaceName'))]"
        }
      ]
    },
    "osProfile": {
      "computerName": "[variables('VMName')]",
      "adminUsername": "[parameters('AdminUsername')]",
      "adminPassword": "[parameters('AdminPassword')]",
      "windowsConfiguration": {
        "enableAutomaticUpdates": true,
        "provisionVmAgent": true
      }
    },
    "licenseType": "Windows_Client",
    "diagnosticsProfile": {
      "bootDiagnostics": {
        "enabled": true,
        "storageUri": "[concat('https://', variables('StorageAccountName'), '.blob.core.windows.net/')]"
      }
    }
  },
  "tags": {}
}
This ARM Template currently works if I do not attempt to configure the VM to have the WinRM Listener.
When I attempt to run the release, I get the following error message:
Error number: -2144108526 0x80338012
The client cannot connect to the destination specified in the request. Verify that the service on the destination is running and is accepting requests. Consult the logs and documentation for the WS-Management service running on the destination, most commonly IIS or WinRM. If the destination is the WinRM service, run the following command on the destination to analyze and configure the WinRM service: "winrm quickconfig".
In all honesty, my problem is likely a lack of understanding, as this is my first time working with VM Setup in any real capacity. Any insight and advice would be greatly appreciated.
You just need to add this to the "windowsConfiguration" block:
"winRM": {
"listeners": [
{
"protocol": "http"
},
{
"protocol": "https",
"certificateUrl": "<URL for the certificate you got in Step 4>"
}
]
}
You also need to provision certificates (the certificateUrl above) if you use the HTTPS listener.
References: https://learn.microsoft.com/en-us/rest/api/compute/virtualmachines/createorupdate#winrmconfiguration
https://learn.microsoft.com/en-us/azure/virtual-machines/windows/winrm
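Put together, the osProfile of the VM resource above would look roughly like this; a sketch with the HTTP listener only, since HTTPS additionally needs the certificate provisioning described in the linked docs:

"osProfile": {
  "computerName": "[variables('VMName')]",
  "adminUsername": "[parameters('AdminUsername')]",
  "adminPassword": "[parameters('AdminPassword')]",
  "windowsConfiguration": {
    "enableAutomaticUpdates": true,
    "provisionVmAgent": true,
    "winRM": {
      "listeners": [
        {
          "protocol": "http"
        }
      ]
    }
  }
}

Note that the network security group in the template only opens 3389 (RDP); WinRM also needs an inbound rule for port 5985 (HTTP) or 5986 (HTTPS), for example alongside the existing RDP rule:

{
  "name": "WinRM",
  "properties": {
    "priority": 310,
    "protocol": "TCP",
    "access": "Allow",
    "direction": "Inbound",
    "sourceAddressPrefix": "*",
    "sourcePortRange": "*",
    "destinationAddressPrefix": "*",
    "destinationPortRange": "5985"
  }
}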

AWS CloudFormation stuck in REVIEW_IN_PROGRESS

I was trying to set up AWS CodePipeline with AWS SAM for Lambda using Java 8, as mentioned in the documentation:
http://docs.aws.amazon.com/lambda/latest/dg/automating-deployment.html
(the example is in Node.js, though).
However, my Staging stage's CloudFormation stack has been stuck in REVIEW_IN_PROGRESS for a long time. Is there any way to debug this issue?
I don't see any further events coming in the console. Are there any specific things to check for?
The pipeline definition is as follows:
$ aws codepipeline get-pipeline --region us-east-1 --name aws-lexbot-facebook-pipeline
{
  "pipeline": {
    "roleArn": "arn:aws:iam::XXXXXXXXXXXX:role/AWS-CodePipeline-Service",
    "stages": [
      {
        "name": "Source",
        "actions": [
          {
            "inputArtifacts": [],
            "name": "Source",
            "actionTypeId": {
              "category": "Source",
              "owner": "ThirdParty",
              "version": "1",
              "provider": "GitHub"
            },
            "outputArtifacts": [
              {
                "name": "MyApp"
              }
            ],
            "configuration": {
              "Owner": "xxxxxxx",
              "Repo": "lexbot",
              "PollForSourceChanges": "true",
              "Branch": "master",
              "OAuthToken": "****"
            },
            "runOrder": 1
          }
        ]
      },
      {
        "name": "Build",
        "actions": [
          {
            "inputArtifacts": [
              {
                "name": "MyApp"
              }
            ],
            "name": "CodeBuild",
            "actionTypeId": {
              "category": "Build",
              "owner": "AWS",
              "version": "1",
              "provider": "CodeBuild"
            },
            "outputArtifacts": [
              {
                "name": "MyAppBuild"
              }
            ],
            "configuration": {
              "ProjectName": "aws-lexbot-facebook-codebuild"
            },
            "runOrder": 1
          }
        ]
      },
      {
        "name": "Staging",
        "actions": [
          {
            "inputArtifacts": [
              {
                "name": "MyAppBuild"
              }
            ],
            "name": "LexBotBetaStack",
            "actionTypeId": {
              "category": "Deploy",
              "owner": "AWS",
              "version": "1",
              "provider": "CloudFormation"
            },
            "outputArtifacts": [],
            "configuration": {
              "ActionMode": "CHANGE_SET_REPLACE",
              "ChangeSetName": "LexBotChangeSet",
              "RoleArn": "arn:aws:iam::XXXXXXXXXXX:role/cloudformation-lambda-execution-role",
              "Capabilities": "CAPABILITY_IAM",
              "StackName": "LexBotBetaStack",
              "TemplatePath": "MyAppBuild::SamTemplate.yaml"
            },
            "runOrder": 1
          }
        ]
      }
    ],
    "artifactStore": {
      "type": "S3",
      "location": "XXXXXX-us-east-1-987802409920"
    },
    "name": "aws-lexbot-facebook-pipeline",
    "version": 1
  }
}
Overview
In your CodePipeline stage, you're using the CHANGE_SET_REPLACE action mode. This creates a change set on the CloudFormation stack but does not automatically execute it, which is why the stack sits in REVIEW_IN_PROGRESS. You need a second action that executes the change set using CHANGE_SET_EXECUTE. Alternatively, you can change the action mode on your action to CREATE_UPDATE, which directly updates your stack.
One reason you might want to use CHANGE_SET_REPLACE and CHANGE_SET_EXECUTE in CodePipeline is if you want to have an approval step between them. If you are expecting this to complete automatically, I'd recommend CREATE_UPDATE.
CREATE_UPDATE example
Below is your CodePipeline Staging stage, but using CREATE_UPDATE instead of CHANGE_SET_REPLACE. This creates a new stack with the given name, or updates the existing one if a stack with that name already exists.
{
  "inputArtifacts": [
    {
      "name": "MyAppBuild"
    }
  ],
  "name": "LexBotBetaStack",
  "actionTypeId": {
    "category": "Deploy",
    "owner": "AWS",
    "version": "1",
    "provider": "CloudFormation"
  },
  "outputArtifacts": [],
  "configuration": {
    "ActionMode": "CREATE_UPDATE",
    "ChangeSetName": "LexBotChangeSet",
    "RoleArn": "arn:aws:iam::XXXXXXXXXXX:role/cloudformation-lambda-execution-role",
    "Capabilities": "CAPABILITY_IAM",
    "StackName": "LexBotBetaStack",
    "TemplatePath": "MyAppBuild::SamTemplate.yaml"
  },
  "runOrder": 1
}
CHANGE_SET_REPLACE and CHANGE_SET_EXECUTE example
Below is an example of how you could use CHANGE_SET_REPLACE and CHANGE_SET_EXECUTE together. It first creates a change set on the named stack, then executes that change set. This is really useful if you want to have a CodePipeline approval step between creating the change set and executing it, so you can review the intended changes.
{
  "inputArtifacts": [
    {
      "name": "MyAppBuild"
    }
  ],
  "name": "LexBotBetaStackChangeSet",
  "actionTypeId": {
    "category": "Deploy",
    "owner": "AWS",
    "version": "1",
    "provider": "CloudFormation"
  },
  "outputArtifacts": [],
  "configuration": {
    "ActionMode": "CHANGE_SET_REPLACE",
    "ChangeSetName": "LexBotChangeSet",
    "RoleArn": "arn:aws:iam::XXXXXXXXXXX:role/cloudformation-lambda-execution-role",
    "Capabilities": "CAPABILITY_IAM",
    "StackName": "LexBotBetaStack",
    "TemplatePath": "MyAppBuild::SamTemplate.yaml"
  },
  "runOrder": 1
},
{
  "name": "LexBotBetaStackExecute",
  "actionTypeId": {
    "category": "Deploy",
    "owner": "AWS",
    "version": "1",
    "provider": "CloudFormation"
  },
  "configuration": {
    "ActionMode": "CHANGE_SET_EXECUTE",
    "ChangeSetName": "LexBotChangeSet",
    "StackName": "LexBotBetaStack"
  },
  "runOrder": 2
}
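If the change set already exists and is just sitting unexecuted, you can also execute it from the CLI (names taken from the pipeline above):

aws cloudformation execute-change-set \
    --stack-name LexBotBetaStack \
    --change-set-name LexBotChangeSet
# Then watch the stack events to confirm the update starts:
aws cloudformation describe-stack-events --stack-name LexBotBetaStack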
I went to the change set and hit the Execute button, so it now shows CREATE_IN_PROGRESS.
Someone has already answered, but for more clarity: click on Change Sets, then select the change set and hit Execute.
This can also be due to an error in your template file or troposphere code. Make sure you can visualize the CloudFormation resource tree to check how the services relate to each other.