Database persistence in ECS Fargate - MongoDB

We are running a MongoDB container in ECS Fargate. The ECS service starts up fine and the database is accessible.
But how do I make it persistent, given that container storage is ephemeral? I have tried to mount an EFS file system following the AWS documentation. The service gets created and the task runs fine, but I am no longer able to access the database - I cannot even log in.
As far as EFS goes, I have tried both variants: unencrypted as well as encrypted.
My task definition for MongoDB is as follows (I have removed the part where the username and password parameters are passed):
{
"taskDefinitionArn": "arn:aws:ecs:us-east-2:1234567890:task-definition/mongo_efs_test_1215:2",
"containerDefinitions": [
{
"name": "mongo_efs_container_1215",
"image": "public.ecr.aws/docker/library/mongo:latest",
"cpu": 0,
"portMappings": [
{
"containerPort": 27017,
"hostPort": 27017,
"protocol": "tcp"
}
],
"essential": true,
"entryPoint": [],
"command": [],
"environment": [],
"mountPoints": [
{
"sourceVolume": "efs-disk",
"containerPath": "/data/db"
}
],
"volumesFrom": [],
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "/ecs/mongo_efs_test_1215",
"awslogs-region": "us-east-2",
"awslogs-stream-prefix": "ecs"
}
}
}
],
"family": "mongo_efs_test_1215",
"taskRoleArn": "arn:aws:iam::1234567890:role/ecsTaskExecutionRole",
"executionRoleArn": "arn:aws:iam::1234567890:role/ecsTaskExecutionRole",
"networkMode": "awsvpc",
"revision": 2,
"volumes": [
{
"name": "efs-disk",
"efsVolumeConfiguration": {
"fileSystemId": "fs-dvveweo837af981fa",
"rootDirectory": "/",
"transitEncryption": "DISABLED",
"authorizationConfig": {
"iam": "DISABLED"
}
}
}
],
"status": "ACTIVE",
"requiresAttributes": [
{
"name": "com.amazonaws.ecs.capability.logging-driver.awslogs"
},
{
"name": "ecs.capability.execution-role-awslogs"
},
{
"name": "ecs.capability.efsAuth"
},
{
"name": "com.amazonaws.ecs.capability.docker-remote-api.1.19"
},
{
"name": "ecs.capability.efs"
},
{
"name": "com.amazonaws.ecs.capability.task-iam-role"
},
{
"name": "com.amazonaws.ecs.capability.docker-remote-api.1.25"
},
{
"name": "com.amazonaws.ecs.capability.docker-remote-api.1.18"
},
{
"name": "ecs.capability.task-eni"
}
],
"placementConstraints": [],
"compatibilities": [
"EC2",
"FARGATE"
],
"requiresCompatibilities": [
"FARGATE"
],
"cpu": "256",
"memory": "512",
"runtimePlatform": {
"operatingSystemFamily": "LINUX"
},
"registeredAt": "2022-12-15T12:06:34.477Z",
"registeredBy": "arn:aws:iam::1234567890:root",
"tags": []
}
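For reference, a rough sketch of how the same volume could instead be declared with transit encryption and an EFS access point (the access point ID is a placeholder; the idea is that the access point could own /data/db as uid/gid 999, which the official mongo image runs as - this is an assumption on my part, not something I have verified):
"volumes": [
  {
    "name": "efs-disk",
    "efsVolumeConfiguration": {
      "fileSystemId": "fs-dvveweo837af981fa",
      "transitEncryption": "ENABLED",
      "authorizationConfig": {
        "accessPointId": "fsap-EXAMPLE0123456789",
        "iam": "DISABLED"
      }
    }
  }
]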
Can someone please guide me? I have been struggling with this for a really long time. Any help is really appreciated.
Thank you.

Related

Can't write to bind mount on ECS Fargate when using non-root user

I'm using ECS with Fargate and trying to create a bind mount on ephemeral storage, but my user (id 1000) is unable to write to the volume.
According to the documentation, it should be possible.
However, the documentation mentions:
By default, the volume permissions are set to 0755 and the owner as root. These permissions can be customized in the Dockerfile.
So in my Dockerfile I have
ARG PHP_VERSION=8.1.2-fpm-alpine3.15
FROM php:$PHP_VERSION as php_base
ENV APP_USER=app
ENV APP_USER_HOME=/home/app
ENV APP_USER_UID=1000
ENV APP_USER_GID=1000
ENV APP_HOME=/srv/app
# create the app user
RUN set -eux; \
addgroup -g $APP_USER_GID -S $APP_USER; \
adduser -S -D -h "$APP_USER_HOME" -u $APP_USER_UID -s /sbin/nologin -G $APP_USER -g $APP_USER $APP_USER
RUN set -eux; \
mkdir -p /var/run/php; \
chown -R ${APP_USER}:${APP_USER} /var/run/php; \
# TODO THIS IS A TEST
chmod 777 /var/run/php
# ...
FROM php_base as php_prod
# ...
VOLUME ["/var/run/php"]
USER $APP_USER
WORKDIR "${APP_HOME}"
ENTRYPOINT ["/usr/local/bin/docker-php-entrypoint"]
CMD ["php-fpm"]
And in my task definition I have:
{
"taskDefinitionArn": "arn:aws:ecs:us-east-1:999999999999:task-definition/app:2",
"containerDefinitions": [
{
"name": "app-php",
"image": "999999999999.dkr.ecr.us-east-1.amazonaws.com/php:latest",
"cpu": 0,
"portMappings": [],
"essential": true,
"environment": [
{
"name": "DATABASE_PORT",
"value": "3306"
},
{
"name": "DATABASE_USERNAME",
"value": "app"
},
{
"name": "DATABASE_NAME",
"value": "app"
},
{
"name": "DATABASE_HOST",
"value": "db.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com"
}
],
"mountPoints": [
{
"sourceVolume": "php_socket",
"containerPath": "/var/run/php",
"readOnly": false
}
],
"volumesFrom": [],
"secrets": [
{
"name": "DATABASE_PASSWORD",
"valueFrom": "arn:aws:ssm:us-east-1:999999999999:parameter/db-password"
}
],
"readonlyRootFilesystem": false,
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "app",
"awslogs-region": "us-east-1",
"awslogs-stream-prefix": "app"
}
},
"healthCheck": {
"command": [
"docker-healthcheck"
],
"interval": 10,
"timeout": 3,
"retries": 3,
"startPeriod": 15
}
},
{
"name": "app-proxy",
"image": "999999999999.dkr.ecr.us-east-1.amazonaws.com/proxy:latest",
"cpu": 0,
"portMappings": [
{
"containerPort": 80,
"hostPort": 80,
"protocol": "tcp"
}
],
"essential": true,
"environment": [],
"mountPoints": [
{
"sourceVolume": "php_socket",
"containerPath": "/var/run/php",
"readOnly": false
}
],
"volumesFrom": [],
"dependsOn": [
{
"containerName": "app-php",
"condition": "HEALTHY"
}
],
"readonlyRootFilesystem": false,
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "app",
"awslogs-region": "us-east-1",
"awslogs-stream-prefix": "app"
}
},
"healthCheck": {
"command": [
"curl",
"-s",
"localhost/status-nginx"
],
"interval": 10,
"timeout": 3,
"retries": 3,
"startPeriod": 15
}
}
],
"family": "bnc-stage-remises-app",
"taskRoleArn": "arn:aws:iam::999999999999:role/app-task",
"executionRoleArn": "arn:aws:iam::999999999999:role/app-exec",
"networkMode": "awsvpc",
"revision": 2,
"volumes": [
{
"name": "php_socket",
"host": {}
}
],
"status": "ACTIVE",
"requiresAttributes": [
{
"name": "com.amazonaws.ecs.capability.logging-driver.awslogs"
},
{
"name": "ecs.capability.execution-role-awslogs"
},
{
"name": "com.amazonaws.ecs.capability.ecr-auth"
},
{
"name": "com.amazonaws.ecs.capability.docker-remote-api.1.19"
},
{
"name": "com.amazonaws.ecs.capability.task-iam-role"
},
{
"name": "ecs.capability.container-health-check"
},
{
"name": "ecs.capability.container-ordering"
},
{
"name": "ecs.capability.execution-role-ecr-pull"
},
{
"name": "ecs.capability.secrets.ssm.environment-variables"
},
{
"name": "com.amazonaws.ecs.capability.docker-remote-api.1.18"
},
{
"name": "ecs.capability.task-eni"
},
{
"name": "com.amazonaws.ecs.capability.docker-remote-api.1.29"
}
],
"placementConstraints": [],
"compatibilities": [
"EC2",
"FARGATE"
],
"requiresCompatibilities": [
"FARGATE"
],
"cpu": "256",
"memory": "2048",
"registeredAt": "2022-02-15T15:54:47.452Z",
"registeredBy": "arn:aws:sts::999999999999:assumed-role/OrganizationAccountAccessRole/9999999999999999999",
"tags": [
{
"key": "Project",
"value": "project-name"
},
{
"key": "Environment",
"value": "stage"
},
{
"key": "ManagedBy",
"value": "Terraform"
},
{
"key": "Client",
"value": "ClientName"
},
{
"key": "Namespace",
"value": "client-name"
},
{
"key": "Name",
"value": "app"
}
]
}
However, in ECS I keep getting
2022-02-15T20:36:14.679Z [15-Feb-2022 20:36:14] ERROR: unable to bind listening socket for address '/var/run/php/php-fpm.sock': Permission denied (13) app-php
2022-02-15T20:36:14.679Z [15-Feb-2022 20:36:14] ERROR: FPM initialization failed app-php
It turns out /var/run is a symlink to /run in my container, and ECS wasn't able to handle this. I changed my setup to use /run/php instead of /var/run/php and everything works perfectly.
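For illustration only (the exact values are assumptions, not taken from the final setup), the working mount then looks something like this, with php-fpm configured to create its socket under /run/php:
"mountPoints": [
  {
    "sourceVolume": "php_socket",
    "containerPath": "/run/php",
    "readOnly": false
  }
]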

AWS CloudFormation stuck in UPDATE_ROLLBACK_FAILED

I deploy my AWS Lambdas via the AWS Serverless Application Model (SAM). One of my Lambdas uses NumPy, which I reference via a third-party layer from Klayers by @keithRozario. I was using Klayers-python38-numpy:16 but discovered that it was deprecated after I deployed today, which left my stack in an UPDATE_ROLLBACK_FAILED state.
One recommendation is to use Stack actions -> Continue update rollback from the AWS console, which I tried, but it didn't work. The other solution is to delete the stack. However, this would be my first time deleting a stack, and what I'd like to know is: if I delete my stack via the console, will my stack get recreated when I redeploy it? I've looked for answers to my question, but I'm only finding references to deleting resources within the stack.
What I'd also like to know is: my stack is the first of many in an AWS CodePipeline, so will my pipeline still work if I delete my stack? Further, will I experience any more failed stacks as I proceed to subsequent stacks within my pipeline?
Lastly, the plan is to update to Klayers-python38-numpy:19 when I redeploy.
EDIT: as per @Marcin
The problem is that Klayers-python38-numpy:16, which is already deployed throughout my stack, is no longer available. When I tried deploying a change to my code this morning, my pipeline failed during the CreateChangeSet step. The fact that this layer is no longer available is, I'm assuming, the reason my stack is unable to roll back.
My pipeline looks like this:
{
"pipeline": {
"name": "my-pipeline",
"roleArn": "arn:aws:iam::123456789:role/my-pipeline-CodePipelineExecutionRole-4O8PAUJGLXYZ",
"artifactStore": {
"type": "S3",
"location": "my-pipeline-buildartifactsbucket-62byf2xqaa8z"
},
"stages": [
{
"name": "Source",
"actions": [
{
"name": "SourceCodeRepo",
"actionTypeId": {
"category": "Source",
"owner": "ThirdParty",
"provider": "GitHub",
"version": "1"
},
"runOrder": 1,
"configuration": {
"Branch": "master",
"OAuthToken": "****",
"Owner": "hugo",
"Repo": "my-pipeline"
},
"outputArtifacts": [
{
"name": "SourceCodeAsZip"
}
],
"inputArtifacts": []
}
]
},
{
"name": "Build",
"actions": [
{
"name": "CodeBuild",
"actionTypeId": {
"category": "Build",
"owner": "AWS",
"provider": "CodeBuild",
"version": "1"
},
"runOrder": 1,
"configuration": {
"ProjectName": "my-pipeline"
},
"outputArtifacts": [
{
"name": "BuildArtifactAsZip"
}
],
"inputArtifacts": [
{
"name": "SourceCodeAsZip"
}
]
}
]
},
{
"name": "CI",
"actions": [
{
"name": "CreateChangeSet",
"actionTypeId": {
"category": "Deploy",
"owner": "AWS",
"provider": "CloudFormation",
"version": "1"
},
"runOrder": 1,
"configuration": {
"ActionMode": "CHANGE_SET_REPLACE",
"Capabilities": "CAPABILITY_IAM",
"ChangeSetName": "my-pipeline-ChangeSet-ci",
"ParameterOverrides": "{\n \"MyEnvironment\" : \"ci\"\n}\n",
"RoleArn": "arn:aws:iam::123456789:role/my-pipeline-CloudFormationExecutionRole-1O8GOB5C2VXYZ",
"StackName": "my-pipeline-ci",
"TemplatePath": "BuildArtifactAsZip::packaged.yaml"
},
"outputArtifacts": [],
"inputArtifacts": [
{
"name": "BuildArtifactAsZip"
}
]
},
{
"name": "ExecuteChangeSet",
"actionTypeId": {
"category": "Deploy",
"owner": "AWS",
"provider": "CloudFormation",
"version": "1"
},
"runOrder": 2,
"configuration": {
"ActionMode": "CHANGE_SET_EXECUTE",
"ChangeSetName": "my-pipeline-ChangeSet-ci",
"RoleArn": "arn:aws:iam::123456789:role/my-pipeline-CloudFormationExecutionRole-1O8GOB5C2VXYZ",
"StackName": "my-pipeline-ci"
},
"outputArtifacts": [
{
"name": "my-pipelineCIChangeSet"
}
],
"inputArtifacts": []
}
]
},
{
"name": "Staging",
"actions": [
{
"name": "CreateChangeSet",
"actionTypeId": {
"category": "Deploy",
"owner": "AWS",
"provider": "CloudFormation",
"version": "1"
},
"runOrder": 1,
"configuration": {
"ActionMode": "CHANGE_SET_REPLACE",
"Capabilities": "CAPABILITY_IAM",
"ChangeSetName": "my-pipeline-ChangeSet-staging",
"ParameterOverrides": "{\n \"MyEnvironment\" : \"staging\"\n}\n",
"RoleArn": "arn:aws:iam::123456789:role/my-pipeline-CloudFormationExecutionRole-1O8GOB5C2VXYZ",
"StackName": "my-pipeline-staging",
"TemplatePath": "BuildArtifactAsZip::packaged.yaml"
},
"outputArtifacts": [],
"inputArtifacts": [
{
"name": "BuildArtifactAsZip"
}
]
},
{
"name": "ExecuteChangeSet",
"actionTypeId": {
"category": "Deploy",
"owner": "AWS",
"provider": "CloudFormation",
"version": "1"
},
"runOrder": 2,
"configuration": {
"ActionMode": "CHANGE_SET_EXECUTE",
"ChangeSetName": "my-pipeline-ChangeSet-staging",
"RoleArn": "arn:aws:iam::123456789:role/my-pipeline-CloudFormationExecutionRole-1O8GOB5C2VXYZ",
"StackName": "my-pipeline-staging"
},
"outputArtifacts": [
{
"name": "my-pipelineStagingChangeSet"
}
],
"inputArtifacts": []
}
]
},
{
"name": "Prod",
"actions": [
{
"name": "DeploymentApproval",
"actionTypeId": {
"category": "Approval",
"owner": "AWS",
"provider": "Manual",
"version": "1"
},
"runOrder": 1,
"configuration": {},
"outputArtifacts": [],
"inputArtifacts": []
},
{
"name": "CreateChangeSet",
"actionTypeId": {
"category": "Deploy",
"owner": "AWS",
"provider": "CloudFormation",
"version": "1"
},
"runOrder": 2,
"configuration": {
"ActionMode": "CHANGE_SET_REPLACE",
"Capabilities": "CAPABILITY_IAM",
"ChangeSetName": "my-pipeline-ChangeSet-prod",
"ParameterOverrides": "{\n \"MyEnvironment\" : \"prod\"\n}\n",
"RoleArn": "arn:aws:iam::123456789:role/my-pipeline-CloudFormationExecutionRole-1O8GOB5C2VXYZ",
"StackName": "my-pipeline-prod",
"TemplatePath": "BuildArtifactAsZip::packaged.yaml"
},
"outputArtifacts": [],
"inputArtifacts": [
{
"name": "BuildArtifactAsZip"
}
]
},
{
"name": "ExecuteChangeSet",
"actionTypeId": {
"category": "Deploy",
"owner": "AWS",
"provider": "CloudFormation",
"version": "1"
},
"runOrder": 3,
"configuration": {
"ActionMode": "CHANGE_SET_EXECUTE",
"ChangeSetName": "my-pipeline-ChangeSet-prod",
"RoleArn": "arn:aws:iam::123456789:role/my-pipeline-CloudFormationExecutionRole-1O8GOB5C2VXYZ",
"StackName": "my-pipeline-prod"
},
"outputArtifacts": [
{
"name": "my-pipelineProdChangeSet"
}
],
"inputArtifacts": []
}
]
}
],
"version": 1
}
}
if I delete my stack via the console, will my stack get recreated when I redeploy it?
Yes. You can try to deploy the same stack again, but you should probably investigate why it failed in the first place.
What I'd also like to know is, my stack is the first stack of many in an AWS CodePipeline, will my pipeline still work if I delete my stack?
I don't know, but probably not. It's use-case specific, and you haven't provided any info about the CodePipeline.
Further, will I experience anymore failed stacks as I proceed to subsequent stacks within my pipeline?
If one action fails, you can't proceed with further actions. Even if you could, other stacks can depend on the first one, and they would fail as well.
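For what it's worth, the same recovery can also be attempted from the CLI. A rough sketch, assuming the stuck stack is the CI stack from the pipeline above and that the failed resource's logical ID is MyNumpyFunction (both names are placeholders):
# Retry the rollback, skipping the resource that can no longer roll back
# (take the logical ID from the stack's failed events).
aws cloudformation continue-update-rollback \
  --stack-name my-pipeline-ci \
  --resources-to-skip MyNumpyFunction

# Or delete the stack entirely and let the next pipeline run recreate it.
aws cloudformation delete-stack --stack-name my-pipeline-ci
aws cloudformation wait stack-delete-complete --stack-name my-pipeline-ci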

Push data from Bitbucket pipeline to ECS as a volume

I'm trying to set up a Bitbucket pipeline to run a PHP application. The application itself will be running in separate containers for nginx and php-fpm, so both will need the application source code directory in order to operate, similar to this snippet:
{
"AWSEBDockerrunVersion": 2,
"volumes": [
{
"name": "php-app",
"host": {
"sourcePath": "/var/app/current/php-app"
}
},
{
"name": "nginx-proxy-conf",
"host": {
"sourcePath": "/var/app/current/proxy/conf.d"
}
}
],
"containerDefinitions": [
{
"name": "php-app",
"image": "php:fpm",
"essential": true,
"memory": 128,
"mountPoints": [
{
"sourceVolume": "php-app",
"containerPath": "/var/www/html",
"readOnly": true
}
]
},
{
"name": "nginx-proxy",
"image": "nginx",
"essential": true,
"memory": 128,
"portMappings": [
{
"hostPort": 80,
"containerPort": 80
}
],
"links": [
"php-app"
],
"mountPoints": [
{
"sourceVolume": "php-app",
"containerPath": "/var/www/html",
"readOnly": true
},
{
"sourceVolume": "nginx-proxy-conf",
"containerPath": "/etc/nginx/conf.d",
"readOnly": true
},
{
"sourceVolume": "awseb-logs-nginx-proxy",
"containerPath": "/var/log/nginx"
}
]
}
]
}
(source: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_ecstutorial.html )
For this reason I don't want to embed the application source within the images (I could COPY it into both the nginx and php-fpm images, but that wouldn't be nice), and I would like to have the source stored within a volume.
The question is: how can I push the application, after it's built (I have a custom Bitbucket agent with Composer and so on), to ECR so that I can use it in a task definition?
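One pattern that is sometimes suggested (an assumption on my part, not something confirmed here) is to bake the built source into its own small image that declares /var/www/html as a VOLUME, push that image to ECR, and share that directory with the other containers via volumesFrom, roughly like this (account, region, and image name are placeholders):
"containerDefinitions": [
  {
    "name": "php-app-src",
    "image": "<account>.dkr.ecr.<region>.amazonaws.com/php-app-src:latest",
    "essential": false,
    "memory": 16
  },
  {
    "name": "php-app",
    "image": "php:fpm",
    "essential": true,
    "memory": 128,
    "volumesFrom": [
      {
        "sourceContainer": "php-app-src",
        "readOnly": true
      }
    ]
  }
]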

How do I create Virtual Machine with WinRM from an ARM Template?

I'm running into an issue when I attempt to run the 'Azure Resource Group Deploy' release task to create/update a resource group and the resources within it via an ARM template. In particular, I need the virtual machine created by the ARM template to be accessible via WinRM; this is needed so that I can copy files (specifically a ZIP file containing the results of a build) to the VM in a later step.
Currently, I have the 'Template' portion of this task set up as follows: https://i.imgur.com/mvZDIMK.jpg (I can't post images since I don't have enough reputation here yet.)
Unless I've misunderstood (which is definitely possible), the "Configure with WinRM" option should allow the release step to create a WinRM listener on any virtual machines created by this step.
I currently have the following resources in the ARM Template:
{
"type": "Microsoft.Storage/storageAccounts",
"sku": {
"name": "Standard_LRS",
"tier": "Standard"
},
"kind": "Storage",
"name": "[variables('StorageAccountName')]",
"apiVersion": "2018-02-01",
"location": "[parameters('LocationPrimary')]",
"scale": null,
"tags": {},
"properties": {
"networkAcls": {
"bypass": "AzureServices",
"virtualNetworkRules": [],
"ipRules": [],
"defaultAction": "Allow"
},
"supportsHttpsTrafficOnly": false,
"encryption": {
"services": {
"file": {
"enabled": true
},
"blob": {
"enabled": true
}
},
"keySource": "Microsoft.Storage"
}
},
"dependsOn": []
},
{
"name": "[variables('NetworkInterfaceName')]",
"type": "Microsoft.Network/networkInterfaces",
"apiVersion": "2018-04-01",
"location": "[parameters('LocationPrimary')]",
"dependsOn": [
"[concat('Microsoft.Network/networkSecurityGroups/', variables('NetworkSecurityGroupName'))]",
"[concat('Microsoft.Network/virtualNetworks/', variables('VNetName'))]",
"[concat('Microsoft.Network/publicIpAddresses/', variables('PublicIPAddressName'))]"
],
"properties": {
"ipConfigurations": [
{
"name": "ipconfig1",
"properties": {
"subnet": {
"id": "[variables('subnetRef')]"
},
"privateIPAllocationMethod": "Dynamic",
"publicIpAddress": {
"id": "[resourceId(resourceGroup().name, 'Microsoft.Network/publicIpAddresses', variables('PublicIPAddressName'))]"
}
}
}
],
"networkSecurityGroup": {
"id": "[variables('nsgId')]"
}
},
"tags": {}
},
{
"name": "[variables('NetworkSecurityGroupName')]",
"type": "Microsoft.Network/networkSecurityGroups",
"apiVersion": "2018-08-01",
"location": "[parameters('LocationPrimary')]",
"properties": {
"securityRules": [
{
"name": "RDP",
"properties": {
"priority": 300,
"protocol": "TCP",
"access": "Allow",
"direction": "Inbound",
"sourceAddressPrefix": "*",
"sourcePortRange": "*",
"destinationAddressPrefix": "*",
"destinationPortRange": "3389"
}
}
]
},
"tags": {}
},
{
"name": "[variables('VNetName')]",
"type": "Microsoft.Network/virtualNetworks",
"apiVersion": "2018-08-01",
"location": "[parameters('LocationPrimary')]",
"properties": {
"addressSpace": {
"addressPrefixes": [ "10.0.0.0/24" ]
},
"subnets": [
{
"name": "default",
"properties": {
"addressPrefix": "10.0.0.0/24"
}
}
]
},
"tags": {}
},
{
"name": "[variables('PublicIPAddressName')]",
"type": "Microsoft.Network/publicIpAddresses",
"apiVersion": "2018-08-01",
"location": "[parameters('LocationPrimary')]",
"properties": {
"publicIpAllocationMethod": "Dynamic"
},
"sku": {
"name": "Basic"
},
"tags": {}
},
{
"name": "[variables('VMName')]",
"type": "Microsoft.Compute/virtualMachines",
"apiVersion": "2018-06-01",
"location": "[parameters('LocationPrimary')]",
"dependsOn": [
"[concat('Microsoft.Network/networkInterfaces/', variables('NetworkInterfaceName'))]",
"[concat('Microsoft.Storage/storageAccounts/', variables('StorageAccountName'))]"
],
"properties": {
"hardwareProfile": {
"vmSize": "Standard_A7"
},
"storageProfile": {
"osDisk": {
"createOption": "fromImage",
"managedDisk": {
"storageAccountType": "Standard_LRS"
}
},
"imageReference": {
"publisher": "MicrosoftWindowsDesktop",
"offer": "Windows-10",
"sku": "rs4-pro",
"version": "latest"
}
},
"networkProfile": {
"networkInterfaces": [
{
"id": "[resourceId('Microsoft.Network/networkInterfaces', variables('NetworkInterfaceName'))]"
}
]
},
"osProfile": {
"computerName": "[variables('VMName')]",
"adminUsername": "[parameters('AdminUsername')]",
"adminPassword": "[parameters('AdminPassword')]",
"windowsConfiguration": {
"enableAutomaticUpdates": true,
"provisionVmAgent": true
}
},
"licenseType": "Windows_Client",
"diagnosticsProfile": {
"bootDiagnostics": {
"enabled": true,
"storageUri": "[concat('https://', variables('StorageAccountName'), '.blob.core.windows.net/')]"
}
}
},
"tags": {}
}
This ARM template currently works if I do not attempt to configure the VM with a WinRM listener.
When I attempt to run the release, I get the following error message:
Error number: -2144108526 0x80338012
The client cannot connect to the destination specified in the request. Verify that the service on the destination is running and is accepting requests. Consult the logs and documentation for the WS-Management service running on the destination, most commonly IIS or WinRM. If the destination is the WinRM service, run the following command on the destination to analyze and configure the WinRM service: "winrm quickconfig".
In all honesty, my problem is likely a lack of understanding, as this is my first time working with VM setup in any real capacity. Any insight and advice would be greatly appreciated.
You just need to add this to the "windowsConfiguration":
"winRM": {
"listeners": [
{
"protocol": "http"
},
{
"protocol": "https",
"certificateUrl": "<URL for the certificate you got in Step 4>"
}
]
}
You also need to provision certificates.
References:
https://learn.microsoft.com/en-us/rest/api/compute/virtualmachines/createorupdate#winrmconfiguration
https://learn.microsoft.com/en-us/azure/virtual-machines/windows/winrm
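For context, here is a rough sketch of how the listener and its certificate fit into the existing osProfile (the Key Vault references are placeholders, and the Key Vault has to be enabled for deployment; also note that the NSG above only opens 3389, so an inbound rule for 5986 would be needed as well):
"osProfile": {
  "computerName": "[variables('VMName')]",
  "adminUsername": "[parameters('AdminUsername')]",
  "adminPassword": "[parameters('AdminPassword')]",
  "windowsConfiguration": {
    "enableAutomaticUpdates": true,
    "provisionVmAgent": true,
    "winRM": {
      "listeners": [
        {
          "protocol": "https",
          "certificateUrl": "<Key Vault secret URL of the WinRM certificate>"
        }
      ]
    }
  },
  "secrets": [
    {
      "sourceVault": {
        "id": "<resource ID of the Key Vault holding the certificate>"
      },
      "vaultCertificates": [
        {
          "certificateUrl": "<Key Vault secret URL of the WinRM certificate>",
          "certificateStore": "My"
        }
      ]
    }
  ]
}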

AWS CloudFormation stuck in REVIEW_IN_PROGRESS

I was trying to set up AWS CodePipeline with AWS SAM for Lambda using Java 8, as described in the documentation
http://docs.aws.amazon.com/lambda/latest/dg/automating-deployment.html
(the example is in Node.js, though).
However, my Staging stage is stuck because its CloudFormation stack has been in REVIEW_IN_PROGRESS for a long time. Is there any way to debug this issue?
I don't see any further events coming in the console. Is there anything specific to check for?
The pipeline definition is as follows:
$ aws codepipeline get-pipeline --region us-east-1 --name aws-lexbot-facebook-pipeline
{
"pipeline": {
"roleArn": "arn:aws:iam::XXXXXXXXXXXX:role/AWS-CodePipeline-Service",
"stages": [
{
"name": "Source",
"actions": [
{
"inputArtifacts": [],
"name": "Source",
"actionTypeId": {
"category": "Source",
"owner": "ThirdParty",
"version": "1",
"provider": "GitHub"
},
"outputArtifacts": [
{
"name": "MyApp"
}
],
"configuration": {
"Owner": “xxxxxxx”,
"Repo": "lexbot",
"PollForSourceChanges": "true",
"Branch": "master",
"OAuthToken": "****"
},
"runOrder": 1
}
]
},
{
"name": "Build",
"actions": [
{
"inputArtifacts": [
{
"name": "MyApp"
}
],
"name": "CodeBuild",
"actionTypeId": {
"category": "Build",
"owner": "AWS",
"version": "1",
"provider": "CodeBuild"
},
"outputArtifacts": [
{
"name": "MyAppBuild"
}
],
"configuration": {
"ProjectName": "aws-lexbot-facebook-codebuild"
},
"runOrder": 1
}
]
},
{
"name": "Staging",
"actions": [
{
"inputArtifacts": [
{
"name": "MyAppBuild"
}
],
"name": "LexBotBetaStack",
"actionTypeId": {
"category": "Deploy",
"owner": "AWS",
"version": "1",
"provider": "CloudFormation"
},
"outputArtifacts": [],
"configuration": {
"ActionMode": "CHANGE_SET_REPLACE",
"ChangeSetName": "LexBotChangeSet",
"RoleArn": "arn:aws:iam::XXXXXXXXXXX:role/cloudformation-lambda-execution-role",
"Capabilities": "CAPABILITY_IAM",
"StackName": "LexBotBetaStack",
"TemplatePath": "MyAppBuild::SamTemplate.yaml"
},
"runOrder": 1
}
]
}
],
"artifactStore": {
"type": "S3",
"location": “XXXXXX-us-east-1-987802409920"
},
"name": "aws-lexbot-facebook-pipeline",
"version": 1
}
}
Overview
In your CodePipeline stage, you're using the CHANGE_SET_REPLACE action mode. This creates a change set on the CloudFormation stack, but does not automatically execute it. You need a second action that executes the change set using CHANGE_SET_EXECUTE. Alternatively, you can change the action mode on your action to CREATE_UPDATE, which creates or updates the stack directly.
One reason you might want to use CHANGE_SET_REPLACE and CHANGE_SET_EXECUTE in CodePipeline is if you want to have an approval step between them. If you expect the deployment to complete automatically, I'd recommend CREATE_UPDATE.
CREATE_UPDATE example
Below is your CodePipeline Staging stage, but using CREATE_UPDATE instead of CHANGE_SET_REPLACE. This creates a new stack with the configured StackName, or updates the existing one if a stack with that name already exists.
{
"inputArtifacts": [
{
"name": "MyAppBuild"
}
],
"name": "LexBotBetaStack",
"actionTypeId": {
"category": "Deploy",
"owner": "AWS",
"version": "1",
"provider": "CloudFormation"
},
"outputArtifacts": [],
"configuration": {
"ActionMode": "CREATE_UPDATE",
"ChangeSetName": "LexBotChangeSet",
"RoleArn": "arn:aws:iam::XXXXXXXXXXX:role/cloudformation-lambda-execution-role",
"Capabilities": "CAPABILITY_IAM",
"StackName": "LexBotBetaStack",
"TemplatePath": "MyAppBuild::SamTemplate.yaml"
},
"runOrder": 1
}
CHANGE_SET_REPLACE and CHANGE_SET_EXECUTE example
Below is an example of how you could use CHANGE_SET_REPLACE and CHANGE_SET_EXECUTE together. It first creates a change set on the named stack, then executes that change set. This is really useful if you want to have a CodePipeline approval step between creating the change set and executing it, so you can review the intended changes.
{
"inputArtifacts": [
{
"name": "MyAppBuild"
}
],
"name": "LexBotBetaStackChangeSet",
"actionTypeId": {
"category": "Deploy",
"owner": "AWS",
"version": "1",
"provider": "CloudFormation"
},
"outputArtifacts": [],
"configuration": {
"ActionMode": "CHANGE_SET_REPLACE",
"ChangeSetName": "LexBotChangeSet",
"RoleArn": "arn:aws:iam::XXXXXXXXXXX:role/cloudformation-lambda-execution-role",
"Capabilities": "CAPABILITY_IAM",
"StackName": "LexBotBetaStack",
"TemplatePath": "MyAppBuild::SamTemplate.yaml"
},
"runOrder": 1
},
{
"name": "LexBotBetaStackExecute",
"actionTypeId": {
"category": "Deploy",
"owner": "AWS",
"version": "1",
"provider": "CloudFormation"
},
"configuration": {
"ActionMode": "CHANGE_SET_EXECUTE",
"ChangeSetName": "LexBotChangeSet",
"StackName": "LexBotBetaStack",
},
"runOrder": 2
}
I went to the change set and hit the Execute button, so it now shows CREATE_IN_PROGRESS.
Someone has already answered, but for more clarity: in the CloudFormation console, open the stack's Change Sets tab, select the change set, and hit Execute.
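The same can also be done from the CLI (the names below are taken from the pipeline configuration above):
# List the change sets on the stuck stack, then execute the pending one.
aws cloudformation list-change-sets --stack-name LexBotBetaStack
aws cloudformation execute-change-set \
  --stack-name LexBotBetaStack \
  --change-set-name LexBotChangeSet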
This can also be caused by a bug in your template file or troposphere code. Make sure you can visualize the CloudFormation resource tree to check how the services relate to each other.