Making Reset Passwords functionality I got [Illuminate\Contracts\Mail\Factory] is not instantiable error - Lumen

In my Lumen 8.0 app I want to add Reset Passwords functionality, following the article
Trying to reset Passwords in Lumen,
but I got this error:
{
"message": "Target [Illuminate\\Contracts\\Mail\\Factory] is not instantiable while building [Illuminate\\Notifications\\Channels\\MailChannel].",
"exception": "Illuminate\\Contracts\\Container\\BindingResolutionException",
"file": "/mnt/_work_sdb8/wwwroot/LumenProjects/PublishPagesAPI/vendor/illuminate/container/Container.php",
"line": 1089,
"trace": [
{
"file": "/mnt/_work_sdb8/wwwroot/LumenProjects/PublishPagesAPI/vendor/illuminate/container/Container.php",
"line": 882,
"function": "notInstantiable",
"class": "Illuminate\\Container\\Container",
"type": "->"
},
{
"file": "/mnt/_work_sdb8/wwwroot/LumenProjects/PublishPagesAPI/vendor/illuminate/container/Container.php",
"line": 754,
"function": "build",
"class": "Illuminate\\Container\\Container",
"type": "->"
},
{
"file": "/mnt/_work_sdb8/wwwroot/LumenProjects/PublishPagesAPI/vendor/illuminate/container/Container.php",
"line": 692,
"function": "resolve",
"class": "Illuminate\\Container\\Container",
"type": "->"
},
In my composer.json:
{
"name": "laravel/lumen",
"description": "The Laravel Lumen Framework.",
"keywords": ["framework", "laravel", "lumen"],
"license": "MIT",
"type": "project",
"require": {
"php": "^7.3|^8.0",
"anik/form-request": "^4.2",
"cviebrock/eloquent-sluggable": "^8.0",
"dusterio/lumen-passport": "^0.3.4",
"flipbox/lumen-generator": "^8.2",
"guzzlehttp/guzzle": "^7.3",
"illuminate/mail": "^8.48",
"illuminate/notifications": "^8.49",
"intervention/image": "^2.5",
"laravel/lumen-framework": "^8.0",
"league/flysystem": " ~1.0"
},
"require-dev": {
"fakerphp/faker": "^1.9.1",
"mockery/mockery": "^1.3.1",
"phpunit/phpunit": "^9.3"
},
"autoload": {
"files": [
"app/library/helper.php"
],
"psr-4": {
"App\\": "app/",
"Database\\Factories\\": "database/factories/",
"Database\\Seeders\\": "database/seeders/"
}
},
"autoload-dev": {
"classmap": [
"tests/"
]
},
"config": {
"preferred-install": "dist",
"sort-packages": true,
"optimize-autoloader": true
},
"minimum-stability": "dev",
"prefer-stable": true,
"scripts": {
"post-root-package-install": [
"#php -r \"file_exists('.env') || copy('.env.example', '.env');\""
]
}
}
Do I need to install additional packages, and how do I initialize them in my app?

Reading the issue thread https://github.com/laravel/lumen-framework/issues/1057 I found the solution: adding these lines
$app->alias('mail.manager', Illuminate\Mail\MailManager::class);
$app->alias('mail.manager', Illuminate\Contracts\Mail\Factory::class);
to the file bootstrap/app.php. And it works!
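For context, the full mail wiring in bootstrap/app.php then looks roughly like this (a hedged sketch: the configure/register lines are the standard illuminate/mail and illuminate/notifications setup, but verify the class names against your installed versions):
// bootstrap/app.php
$app->configure('mail'); // loads config/mail.php (copy one from a Laravel app if missing)
$app->register(Illuminate\Mail\MailServiceProvider::class);
$app->register(Illuminate\Notifications\NotificationServiceProvider::class);
// Bind the contracts that MailChannel resolves from the container:
$app->alias('mail.manager', Illuminate\Mail\MailManager::class);
$app->alias('mail.manager', Illuminate\Contracts\Mail\Factory::class);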

Related

How to resolve "Validation error: The task.json file was not found in contribution task." when validating Azure DevOps extension?

I am attempting to build and validate an Azure DevOps extension that contains a task contribution, but it fails to validate and returns "Validation error: The task.json file was not found in contribution task."
This is an example extension I have created that contains two things:
The root vss-extension.json file
A folder called "task" containing the "task.json" file
The command that I am running to validate the extension is as follows. I am running this from the root folder containing the vss-extension.json file.
tfx extension isvalid --publisher my-publisher --extension-id my-id --service-url https://marketplace.visualstudio.com/
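As an aside, packaging the extension locally often surfaces manifest problems in more detail than isvalid does; a hedged sketch, assuming a current tfx-cli:
tfx extension create --manifest-globs vss-extension.json --output-path out
If the package builds, the files/contributions mapping is at least internally consistent.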
vss-extension.json
{
"manifestVersion": 1,
"id": "demo-extension",
"name": "Demo Extension",
"version": "1.0.0",
"publisher": "demo-publisher",
"targets": [
{
"id": "Microsoft.VisualStudio.Services"
}
],
"description": "Demo extension",
"categories": ["Azure Pipelines"],
"files": [
{
"path": "task"
}
],
"contributions": [
{
"id": "build-task",
"type": "ms.vss-distributed-task.task",
"targets": ["ms.vss-distributed-task.tasks"],
"properties": {
"name": "task"
}
}
]
}
task/task.json
{
"$schema": "https://raw.githubusercontent.com/Microsoft/azure-pipelines-task-lib/master/tasks.schema.json",
"id": "a81df1d3-750f-4d60-a8dc-21970f1956e2",
"name": "DemoBuild",
"friendlyName": "Demo task",
"instanceNameFormat": "Demo task",
"description": "Demo task",
"helpMarkDown": "",
"category": "Build",
"author": "Demo Company",
"version": {
"Major": 0,
"Minor": 1,
"Patch": 0
},
"groups": [
{
"name": "someinput",
"displayName": "SomeInput",
"isExpanded": true
}
],
"inputs": [
{
"name": "someinput",
"type": "string",
"label": "Some Input",
"defaultValue": "",
"required": true,
"helpMarkDown": "Input something",
"groupName": "someinput"
}
],
"execution": {
"Node10": {
"target": "index.js"
}
}
}
In addition to the above I have also downloaded and attempted to validate these official Microsoft extensions with the same error message:
https://github.com/microsoft/azure-devops-extension-tasks
https://github.com/microsoft/PR-Metrics
How might I go about fixing this issue, please?

AWS Cloudformation stuck in UPDATE_ROLLBACK_FAILED

I deploy my AWS Lambdas via the AWS Serverless Application Model (SAM). One of my Lambdas uses NumPy, which I reference via a third-party layer from Klayers by @keithRozario. I was using Klayers-python38-numpy:16 but discovered it was deprecated after I deployed today, which left my stack in an UPDATE_ROLLBACK_FAILED state.
One recommendation is to use Stack actions -> Continue update rollback from the AWS console, which I tried, but it didn't work. The other solution is to delete the stack. However, this would be my first time deleting a stack, and what I'd like to know is: if I delete my stack via the console, will it get recreated when I redeploy? I've looked for answers to my question, but I'm only finding references to deleting resources within the stack.
I'd also like to know: my stack is the first of many in an AWS CodePipeline; will my pipeline still work if I delete my stack? Further, will I experience any more failed stacks as I proceed to subsequent stacks within my pipeline?
Lastly, the plan is to update to Klayers-python38-numpy:19 when I redeploy.
EDIT: as per @marcin
The problem is that Klayers-python38-numpy:16, which is already deployed throughout my stack, is no longer available. I tried deploying a change to my code this morning, and my pipeline failed during the CreateChangeSet step. The fact that this layer is no longer available is, I'm assuming, the reason my stack is unable to roll back.
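(A hedged aside for anyone else stuck here: the console's Continue update rollback has a CLI equivalent that can skip the resource that can no longer roll back, which is the usual way out of UPDATE_ROLLBACK_FAILED; MyNumpyFunction is a placeholder logical resource ID for the function pinned to the deprecated layer:)
aws cloudformation continue-update-rollback --stack-name my-pipeline-ci --resources-to-skip MyNumpyFunction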
My pipeline looks like this:
{
"pipeline": {
"name": "my-pipeline",
"roleArn": "arn:aws:iam::123456789:role/my-pipeline-CodePipelineExecutionRole-4O8PAUJGLXYZ",
"artifactStore": {
"type": "S3",
"location": "my-pipeline-buildartifactsbucket-62byf2xqaa8z"
},
"stages": [
{
"name": "Source",
"actions": [
{
"name": "SourceCodeRepo",
"actionTypeId": {
"category": "Source",
"owner": "ThirdParty",
"provider": "GitHub",
"version": "1"
},
"runOrder": 1,
"configuration": {
"Branch": "master",
"OAuthToken": "****",
"Owner": "hugo",
"Repo": "my-pipeline"
},
"outputArtifacts": [
{
"name": "SourceCodeAsZip"
}
],
"inputArtifacts": []
}
]
},
{
"name": "Build",
"actions": [
{
"name": "CodeBuild",
"actionTypeId": {
"category": "Build",
"owner": "AWS",
"provider": "CodeBuild",
"version": "1"
},
"runOrder": 1,
"configuration": {
"ProjectName": "my-pipeline"
},
"outputArtifacts": [
{
"name": "BuildArtifactAsZip"
}
],
"inputArtifacts": [
{
"name": "SourceCodeAsZip"
}
]
}
]
},
{
"name": "CI",
"actions": [
{
"name": "CreateChangeSet",
"actionTypeId": {
"category": "Deploy",
"owner": "AWS",
"provider": "CloudFormation",
"version": "1"
},
"runOrder": 1,
"configuration": {
"ActionMode": "CHANGE_SET_REPLACE",
"Capabilities": "CAPABILITY_IAM",
"ChangeSetName": "my-pipeline-ChangeSet-ci",
"ParameterOverrides": "{\n \"MyEnvironment\" : \"ci\"\n}\n",
"RoleArn": "arn:aws:iam::123456789:role/my-pipeline-CloudFormationExecutionRole-1O8GOB5C2VXYZ",
"StackName": "my-pipeline-ci",
"TemplatePath": "BuildArtifactAsZip::packaged.yaml"
},
"outputArtifacts": [],
"inputArtifacts": [
{
"name": "BuildArtifactAsZip"
}
]
},
{
"name": "ExecuteChangeSet",
"actionTypeId": {
"category": "Deploy",
"owner": "AWS",
"provider": "CloudFormation",
"version": "1"
},
"runOrder": 2,
"configuration": {
"ActionMode": "CHANGE_SET_EXECUTE",
"ChangeSetName": "my-pipeline-ChangeSet-ci",
"RoleArn": "arn:aws:iam::123456789:role/my-pipeline-CloudFormationExecutionRole-1O8GOB5C2VXYZ",
"StackName": "my-pipeline-ci"
},
"outputArtifacts": [
{
"name": "my-pipelineCIChangeSet"
}
],
"inputArtifacts": []
}
]
},
{
"name": "Staging",
"actions": [
{
"name": "CreateChangeSet",
"actionTypeId": {
"category": "Deploy",
"owner": "AWS",
"provider": "CloudFormation",
"version": "1"
},
"runOrder": 1,
"configuration": {
"ActionMode": "CHANGE_SET_REPLACE",
"Capabilities": "CAPABILITY_IAM",
"ChangeSetName": "my-pipeline-ChangeSet-staging",
"ParameterOverrides": "{\n \"MyEnvironment\" : \"staging\"\n}\n",
"RoleArn": "arn:aws:iam::123456789:role/my-pipeline-CloudFormationExecutionRole-1O8GOB5C2VXYZ",
"StackName": "my-pipeline-staging",
"TemplatePath": "BuildArtifactAsZip::packaged.yaml"
},
"outputArtifacts": [],
"inputArtifacts": [
{
"name": "BuildArtifactAsZip"
}
]
},
{
"name": "ExecuteChangeSet",
"actionTypeId": {
"category": "Deploy",
"owner": "AWS",
"provider": "CloudFormation",
"version": "1"
},
"runOrder": 2,
"configuration": {
"ActionMode": "CHANGE_SET_EXECUTE",
"ChangeSetName": "my-pipeline-ChangeSet-staging",
"RoleArn": "arn:aws:iam::123456789:role/my-pipeline-CloudFormationExecutionRole-1O8GOB5C2VXYZ",
"StackName": "my-pipeline-staging"
},
"outputArtifacts": [
{
"name": "my-pipelineStagingChangeSet"
}
],
"inputArtifacts": []
}
]
},
{
"name": "Prod",
"actions": [
{
"name": "DeploymentApproval",
"actionTypeId": {
"category": "Approval",
"owner": "AWS",
"provider": "Manual",
"version": "1"
},
"runOrder": 1,
"configuration": {},
"outputArtifacts": [],
"inputArtifacts": []
},
{
"name": "CreateChangeSet",
"actionTypeId": {
"category": "Deploy",
"owner": "AWS",
"provider": "CloudFormation",
"version": "1"
},
"runOrder": 2,
"configuration": {
"ActionMode": "CHANGE_SET_REPLACE",
"Capabilities": "CAPABILITY_IAM",
"ChangeSetName": "my-pipeline-ChangeSet-prod",
"ParameterOverrides": "{\n \"MyEnvironment\" : \"prod\"\n}\n",
"RoleArn": "arn:aws:iam::123456789:role/my-pipeline-CloudFormationExecutionRole-1O8GOB5C2VXYZ",
"StackName": "my-pipeline-prod",
"TemplatePath": "BuildArtifactAsZip::packaged.yaml"
},
"outputArtifacts": [],
"inputArtifacts": [
{
"name": "BuildArtifactAsZip"
}
]
},
{
"name": "ExecuteChangeSet",
"actionTypeId": {
"category": "Deploy",
"owner": "AWS",
"provider": "CloudFormation",
"version": "1"
},
"runOrder": 3,
"configuration": {
"ActionMode": "CHANGE_SET_EXECUTE",
"ChangeSetName": "my-pipeline-ChangeSet-prod",
"RoleArn": "arn:aws:iam::123456789:role/my-pipeline-CloudFormationExecutionRole-1O8GOB5C2VXYZ",
"StackName": "my-pipeline-prod"
},
"outputArtifacts": [
{
"name": "my-pipelineProdChangeSet"
}
],
"inputArtifacts": []
}
]
}
],
"version": 1
}
}
if I delete my stack via the console, will my stack get recreated when I redeploy it?
Yes. You can deploy the same stack again, but you should probably investigate why it failed in the first place.
What I'd also like to know is, my stack is the first stack of many in an AWS CodePipeline, will my pipeline still work if I delete my stack?
Don't know, but probably not. It's use-case specific, and you haven't provided any info about the CodePipeline.
Further, will I experience anymore failed stacks as I proceed to subsequent stacks within my pipeline?
If one action fails, you can't proceed with further actions. Even if you could, other stacks can depend on the first one, and they would fail as well.
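A hedged sketch of the delete-and-redeploy path from the CLI (the stack name is taken from the pipeline above):
aws cloudformation delete-stack --stack-name my-pipeline-ci
aws cloudformation wait stack-delete-complete --stack-name my-pipeline-ci
The next run of the pipeline's CreateChangeSet/ExecuteChangeSet actions will then recreate the stack from packaged.yaml.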

Google Cloud Data Fusion produces inconsistent output data

I am creating a Data Fusion pipeline to ingest a CSV file from an S3 bucket, apply wrangler directives, and store the result in a GCS bucket. The input CSV file has 18 columns; however, the output CSV file has only 8 columns. I suspect this could be due to the CSV encoding format, but I am not sure. What could be the reason here?
Pipeline JSON
{
"name": "aws_fusion_v1",
"description": "Data Pipeline Application",
"artifact": {
"name": "cdap-data-pipeline",
"version": "6.1.2",
"scope": "SYSTEM"
},
"config": {
"resources": {
"memoryMB": 2048,
"virtualCores": 1
},
"driverResources": {
"memoryMB": 2048,
"virtualCores": 1
},
"connections": [
{
"from": "Amazon S3",
"to": "Wrangler"
},
{
"from": "Wrangler",
"to": "GCS2"
},
{
"from": "Argument Setter",
"to": "Amazon S3"
}
],
"comments": [],
"postActions": [],
"properties": {},
"processTimingEnabled": true,
"stageLoggingEnabled": true,
"stages": [
{
"name": "Amazon S3",
"plugin": {
"name": "S3",
"type": "batchsource",
"label": "Amazon S3",
"artifact": {
"name": "amazon-s3-plugins",
"version": "1.11.0",
"scope": "SYSTEM"
},
"properties": {
"format": "text",
"authenticationMethod": "Access Credentials",
"filenameOnly": "false",
"recursive": "false",
"ignoreNonExistingFolders": "false",
"schema": "{\"type\":\"record\",\"name\":\"etlSchemaBody\",\"fields\":[{\"name\":\"body\",\"type\":\"string\"}]}",
"referenceName": "aws_source",
"path": "${input.bucket}",
"accessID": "${input.access_id}",
"accessKey": "${input.access_key}"
}
},
"outputSchema": [
{
"name": "etlSchemaBody",
"schema": "{\"type\":\"record\",\"name\":\"etlSchemaBody\",\"fields\":[{\"name\":\"body\",\"type\":\"string\"}]}"
}
],
"type": "batchsource",
"label": "Amazon S3",
"icon": "icon-s3"
},
{
"name": "Wrangler",
"plugin": {
"name": "Wrangler",
"type": "transform",
"label": "Wrangler",
"artifact": {
"name": "wrangler-transform",
"version": "4.1.5",
"scope": "SYSTEM"
},
"properties": {
"field": "*",
"precondition": "false",
"threshold": "1",
"workspaceId": "804a2995-7c06-4ab2-b342-a9a01aa03a3d",
"schema": "${output.schema}",
"directives": "${directive}"
}
},
"outputSchema": [
{
"name": "etlSchemaBody",
"schema": "${output.schema}"
}
],
"inputSchema": [
{
"name": "Amazon S3",
"schema": "{\"type\":\"record\",\"name\":\"etlSchemaBody\",\"fields\":[{\"name\":\"body\",\"type\":\"string\"}]}"
}
],
"type": "transform",
"label": "Wrangler",
"icon": "icon-DataPreparation"
},
{
"name": "GCS2",
"plugin": {
"name": "GCS",
"type": "batchsink",
"label": "GCS2",
"artifact": {
"name": "google-cloud",
"version": "0.14.2",
"scope": "SYSTEM"
},
"properties": {
"project": "auto-detect",
"suffix": "yyyy-MM-dd-HH-mm",
"format": "csv",
"serviceFilePath": "auto-detect",
"location": "us",
"referenceName": "gcs_sink",
"path": "${output.path}",
"schema": "${output.schema}"
}
},
"outputSchema": [
{
"name": "etlSchemaBody",
"schema": "${output.schema}"
}
],
"inputSchema": [
{
"name": "Wrangler",
"schema": ""
}
],
"type": "batchsink",
"label": "GCS2",
"icon": "fa-plug"
},
{
"name": "Argument Setter",
"plugin": {
"name": "ArgumentSetter",
"type": "action",
"label": "Argument Setter",
"artifact": {
"name": "argument-setter-plugins",
"version": "1.1.1",
"scope": "USER"
},
"properties": {
"method": "GET",
"connectTimeout": "60000",
"readTimeout": "60000",
"numRetries": "0",
"followRedirects": "true",
"url": "${argfile}"
}
},
"outputSchema": [
{
"name": "etlSchemaBody",
"schema": ""
}
],
"type": "action",
"label": "Argument Setter",
"icon": "fa-plug"
}
],
"schedule": "0 * * * *",
"engine": "spark",
"numOfRecordsPreview": 100,
"description": "Data Pipeline Application",
"maxConcurrentRuns": 1
}
}
Edit:
The missing columns in the output file were due to spaces in the column names. But I am facing another issue. In wrangler, when I pass the directive "parse-as-csv :body ',' false", the output file is empty; but when I pass "parse-as-csv :body ',' true", the output file has all the data, without the header row, as expected.

How to delete a connector plugin in Kafka

Here's the list of my connector plugins:
> curl localhost:8083/connector-plugins
[{
"class": "io.confluent.connect.activemq.ActiveMQSourceConnector",
"type": "source",
"version": "4.1.0"
}, {
"class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
"type": "sink",
"version": "4.1.0"
}, {
"class": "io.confluent.connect.hdfs.HdfsSinkConnector",
"type": "sink",
"version": "4.1.0"
}, {
"class": "io.confluent.connect.hdfs.tools.SchemaSourceConnector",
"type": "source",
"version": "1.1.0-cp1"
}, {
"class": "io.confluent.connect.ibm.mq.IbmMQSourceConnector",
"type": "source",
"version": "4.1.0"
}, {
"class": "io.confluent.connect.jdbc.JdbcSinkConnector",
"type": "sink",
"version": "4.1.0"
}, {
"class": "io.confluent.connect.jdbc.JdbcSourceConnector",
"type": "source",
"version": "4.1.0"
}, {
"class": "io.confluent.connect.jms.JmsSourceConnector",
"type": "source",
"version": "4.1.0"
}, {
"class": "io.confluent.connect.replicator.ReplicatorSourceConnector",
"type": "source",
"version": "4.1.0"
}, {
"class": "io.confluent.connect.s3.S3SinkConnector",
"type": "sink",
"version": "4.1.0"
}, {
"class": "io.confluent.connect.storage.tools.SchemaSourceConnector",
"type": "source",
"version": "1.1.0-cp1"
}, {
"class": "io.debezium.connector.mysql.MySqlConnector",
"type": "source",
"version": "0.7.5"
}, {
"class": "org.apache.kafka.connect.file.FileStreamSinkConnector",
"type": "sink",
"version": "1.1.0-cp1"
}, {
"class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
"type": "source",
"version": "1.1.0-cp1"
}]
How can I delete the following connector?
{
"class": "io.debezium.connector.mysql.MySqlConnector",
"type": "source",
"version": "0.7.5"
}
Remove it from your plugin.path.
plugin.path is the path where all connector plugins are installed; for example, it could be /opt/confluent-7.0.0/share/confluent-hub-components/.
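Concretely, a hedged sketch (the properties file location and the plugin directory name are examples; check your own worker config):
grep plugin.path /etc/kafka/connect-distributed.properties
rm -rf /opt/confluent-7.0.0/share/confluent-hub-components/debezium-debezium-connector-mysql
Then restart the Connect worker; the plugin list is only rescanned at startup. Note this removes the plugin itself; a running connector instance created from it is deleted separately with curl -X DELETE localhost:8083/connectors/<name>.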

AWS Cloud Formation Stuck in Review_In_Progress

I was trying to set up AWS CodePipeline with AWS SAM for Lambda using Java 8, as described in the documentation
http://docs.aws.amazon.com/lambda/latest/dg/automating-deployment.html
(the example is in Node.js, though).
However, my Staging stage is stuck: the CloudFormation stack remains in REVIEW_IN_PROGRESS for a long time. Is there any way to debug this issue?
I don't see any further events in the console. Are there any specific things to check for?
The pipeline definition is as follows:
$ aws codepipeline get-pipeline --region us-east-1 --name aws-lexbot-facebook-pipeline
{
"pipeline": {
"roleArn": "arn:aws:iam::XXXXXXXXXXXX:role/AWS-CodePipeline-Service",
"stages": [
{
"name": "Source",
"actions": [
{
"inputArtifacts": [],
"name": "Source",
"actionTypeId": {
"category": "Source",
"owner": "ThirdParty",
"version": "1",
"provider": "GitHub"
},
"outputArtifacts": [
{
"name": "MyApp"
}
],
"configuration": {
"Owner": “xxxxxxx”,
"Repo": "lexbot",
"PollForSourceChanges": "true",
"Branch": "master",
"OAuthToken": "****"
},
"runOrder": 1
}
]
},
{
"name": "Build",
"actions": [
{
"inputArtifacts": [
{
"name": "MyApp"
}
],
"name": "CodeBuild",
"actionTypeId": {
"category": "Build",
"owner": "AWS",
"version": "1",
"provider": "CodeBuild"
},
"outputArtifacts": [
{
"name": "MyAppBuild"
}
],
"configuration": {
"ProjectName": "aws-lexbot-facebook-codebuild"
},
"runOrder": 1
}
]
},
{
"name": "Staging",
"actions": [
{
"inputArtifacts": [
{
"name": "MyAppBuild"
}
],
"name": "LexBotBetaStack",
"actionTypeId": {
"category": "Deploy",
"owner": "AWS",
"version": "1",
"provider": "CloudFormation"
},
"outputArtifacts": [],
"configuration": {
"ActionMode": "CHANGE_SET_REPLACE",
"ChangeSetName": "LexBotChangeSet",
"RoleArn": "arn:aws:iam::XXXXXXXXXXX:role/cloudformation-lambda-execution-role",
"Capabilities": "CAPABILITY_IAM",
"StackName": "LexBotBetaStack",
"TemplatePath": "MyAppBuild::SamTemplate.yaml"
},
"runOrder": 1
}
]
}
],
"artifactStore": {
"type": "S3",
"location": “XXXXXX-us-east-1-987802409920"
},
"name": "aws-lexbot-facebook-pipeline",
"version": 1
}
}
Overview
In your CodePipeline stage, you're using the CHANGE_SET_REPLACE action mode. This creates a change set on the CloudFormation stack, but does not automatically execute it. You need a second action that executes the change set using CHANGE_SET_EXECUTE. Alternatively, you can change the action mode to CREATE_UPDATE, which updates your stack directly.
One reason you might want to use CHANGE_SET_REPLACE and CHANGE_SET_EXECUTE in CodePipeline is if you want an approval step between them. If you expect this to complete automatically, I'd recommend CREATE_UPDATE.
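Equivalently, you can inspect and execute the pending change set from the CLI (a hedged sketch; the names are taken from the pipeline above):
aws cloudformation describe-change-set --stack-name LexBotBetaStack --change-set-name LexBotChangeSet
aws cloudformation execute-change-set --stack-name LexBotBetaStack --change-set-name LexBotChangeSet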
CREATE_UPDATE example
Below is your CodePipeline Staging stage, but using CREATE_UPDATE instead of CHANGE_SET_REPLACE. This creates a new stack with the given StackName, or updates the existing one if a stack with that name already exists.
{
"inputArtifacts": [
{
"name": "MyAppBuild"
}
],
"name": "LexBotBetaStack",
"actionTypeId": {
"category": "Deploy",
"owner": "AWS",
"version": "1",
"provider": "CloudFormation"
},
"outputArtifacts": [],
"configuration": {
"ActionMode": "CREATE_UPDATE",
"ChangeSetName": "LexBotChangeSet",
"RoleArn": "arn:aws:iam::XXXXXXXXXXX:role/cloudformation-lambda-execution-role",
"Capabilities": "CAPABILITY_IAM",
"StackName": "LexBotBetaStack",
"TemplatePath": "MyAppBuild::SamTemplate.yaml"
},
"runOrder": 1
}
CHANGE_SET_REPLACE and CHANGE_SET_EXECUTE example
Below is an example of how you could use CHANGE_SET_REPLACE and CHANGE_SET_EXECUTE together. It first creates a change set on the named stack, then executes it. This is really useful if you want a CodePipeline approval step between creating the change set and executing it, so you can review the intended changes.
{
"inputArtifacts": [
{
"name": "MyAppBuild"
}
],
"name": "LexBotBetaStackChangeSet",
"actionTypeId": {
"category": "Deploy",
"owner": "AWS",
"version": "1",
"provider": "CloudFormation"
},
"outputArtifacts": [],
"configuration": {
"ActionMode": "CHANGE_SET_REPLACE",
"ChangeSetName": "LexBotChangeSet",
"RoleArn": "arn:aws:iam::XXXXXXXXXXX:role/cloudformation-lambda-execution-role",
"Capabilities": "CAPABILITY_IAM",
"StackName": "LexBotBetaStack",
"TemplatePath": "MyAppBuild::SamTemplate.yaml"
},
"runOrder": 1
},
{
"name": "LexBotBetaStackExecute",
"actionTypeId": {
"category": "Deploy",
"owner": "AWS",
"version": "1",
"provider": "CloudFormation"
},
"configuration": {
"ActionMode": "CHANGE_SET_EXECUTE",
"ChangeSetName": "LexBotChangeSet",
"StackName": "LexBotBetaStack",
},
"runOrder": 2
}
I went to the change set and hit the Execute button, so it now shows CREATE_IN_PROGRESS.
As someone has already answered: in the CloudFormation console, click Change Sets, select the change set, and hit Execute.
This can also be caused by a bug in your template file or troposphere code. Visualize the CloudFormation resource tree to check how the services interact.