I'm experimenting with AWS CDK by converting a console-developed application (just API Gateway and Lambdas for now). All is well--I can hit the API's resources and methods and the appropriate lambdas get executed.
I'm trying to understand what triggers a deployment (and what doesn't). For example, if I try to change the API's endpoint type from the default (EDGE) to REGIONAL:
const api = new apigateway.RestApi(this, "cy-max-api", {
  restApiName: "CY Max Service",
  description: "CDK version of Max AWS demo app.",
  endpointConfiguration: [EndpointType.REGIONAL] // <-- add only this line and deploy
});
and deploy (cdk deploy), nothing is deployed (I checked the logs, console says no stack changes). I even tried forcing the deploy (cdk deploy -f)--no joy.
I suspect this is the expected behavior, but would like to understand why this change doesn't trigger a deploy (and what would be necessary to force one).
Update based on the response by @balu-vyamajala (thanks for taking the time to test it).
I am using version 1.82.0 of the CDK. Here's the result of cdk diff when the only change is adding the endpointConfiguration line:
Stack CyMaxStack
Resources
[-] AWS::ApiGateway::Deployment CyMaxcymaxapiDeploymentD64E3EA0186ed2bffe1dbc3004a8457d0ce5eb28 destroy
[+] AWS::ApiGateway::Deployment CyMax/cy-max-api/Deployment CyMaxcymaxapiDeploymentD64E3EA0cd62c1e6cd1229987f977199cc5906ea
[~] AWS::ApiGateway::RestApi CyMax/cy-max-api CyMaxcymaxapi48ECF39D
└─ [+] EndpointConfiguration
└─ {}
[~] AWS::ApiGateway::Stage CyMax/cy-max-api/DeploymentStage.prod CyMaxcymaxapiDeploymentStageprod5291AAF0
└─ [~] DeploymentId
└─ [~] .Ref:
├─ [-] CyMaxcymaxapiDeploymentD64E3EA0186ed2bffe1dbc3004a8457d0ce5eb28
└─ [+] CyMaxcymaxapiDeploymentD64E3EA0cd62c1e6cd1229987f977199cc5906ea
and here's what cdk deploy says:
CyMaxStack: deploying...
[0%] start: Publishing 6280a7c7fbc87dd62aeb85e098d6de2f0b644eea442dcbfc67063a56c08ce151:current
[100%] success: Published 6280a7c7fbc87dd62aeb85e098d6de2f0b644eea442dcbfc67063a56c08ce151:current
CyMaxStack: creating CloudFormation changeset...
[█████████████████████████████·····························] (5/10)
✅ CyMaxStack
Outputs:
CyMaxStack.CyMaxcymaxapiEndpoint52D905B0 = https://...my URL...
Stack ARN:
arn:aws:cloudformation:us-west-1:...my ARN...
When I check the console, the API has not been updated to REGIONAL. Also, endpointConfiguration is either missing or {} in cdk.out/tree.json; I never see {REGIONAL} in that file.
I am guessing you are asking about updates to AWS::ApiGateway::Deployment, which don't happen automatically; the CDK generates a hash of the methods and resources and appends it to the resource name to force a new deployment.
But in your case, EndpointConfiguration is a property of AWS::ApiGateway::RestApi, which is directly referenced by AWS::ApiGateway::Deployment. Irrespective of any other changes, it should trigger a new deployment.
Which version of the CDK are you using?
I just tested with 1.80.0; it triggered a change in three resources: AWS::ApiGateway::Deployment, AWS::ApiGateway::Stage, and AWS::ApiGateway::RestApi.
Please try cdk synth and compare the generated CloudFormation for the AWS::ApiGateway::RestApi resource before and after your change.
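As a follow-up observation, the empty EndpointConfiguration {} in the diff above suggests the property may not be in the shape the construct expects. Here is a minimal sketch, assuming the CDK v1 TypeScript API, where endpointConfiguration is an object with a types array rather than a bare array:

import * as apigateway from "@aws-cdk/aws-apigateway";

// Inside the stack, as in the question; only the endpointConfiguration shape differs.
const api = new apigateway.RestApi(this, "cy-max-api", {
  restApiName: "CY Max Service",
  description: "CDK version of Max AWS demo app.",
  endpointConfiguration: {
    types: [apigateway.EndpointType.REGIONAL], // endpoint types go in a `types` array
  },
});

If cdk synth then shows a non-empty EndpointConfiguration (Types: [REGIONAL]) on the AWS::ApiGateway::RestApi resource, the change should also surface as a new deployment.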
Terraform apply error 'The number of path segments is not divisible by 2' for Azure App Feature Flag
Why am I seeing this error? It's hard to find any answer to this anywhere. I am using Terraform v2.93.0, and I also tried 2.90.0 and 2.56.0, and got the same problem. I was adding configs just fine, but as soon as I tried to configure a Feature Flag it broke the Terraform project AND I was forced to rebuild and re-init from scratch. Terraform is not able to recover on its own if I remove the config and run plan again.
╷
│ Error: while parsing resource ID: while parsing resource ID:
│ The number of path segments is not divisible by 2 in
│ "subscriptions/{key}/resourceGroups/my-config-test/providers/Microsoft.AppConfiguration/configurationStores/my-app-configuration/AppConfigurationFeature/.appconfig.featureflag/DEBUG/Label/my-functions-test"
│
│ while parsing resource ID: while parsing resource ID:
│ The number of path segments is not divisible by 2 in
│ "subscriptions/{key}/resourceGroups/my-config-test/providers/Microsoft.AppConfiguration/configurationStores/my-app-configuration/AppConfigurationFeature/.appconfig.featureflag/DEBUG/Label/my-functions-test"
╵
╷
│ Error: obtaining auth token for "https://my-app-configuration.azconfig.io": getting authorization token for endpoint https://my-app-configuration.azconfig.io:
│ obtaining Authorization Token from the Azure CLI: parsing json result from the Azure CLI: waiting for the Azure CLI: exit status 1: ERROR: The command failed with an unexpected error. Here is the traceback:
│ ERROR: [Errno 2] No such file or directory
Why is the slash missing from the front of the ID?
And here is the config that breaks it:
resource "azurerm_app_configuration_feature" "my_functions_test_DEBUG" {
configuration_store_id = azurerm_app_configuration.my_app_configuration.id
description = "Debug Flag"
name = "DEBUG"
label = "my-functions-test"
enabled = false
}
When it is healthy, the apply on configs works, and looks like this:
Plan: 4 to add, 0 to change, 0 to destroy.
Do you want to perform these actions in workspace "my-app-config-test"?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
azurerm_resource_group.my_config_rg_test: Creating...
azurerm_resource_group.my_config_rg_test: Creation complete after 0s [id=/subscriptions/{key}/resourceGroups/my-config-test]
Ok, I figured it out. There is a bug: when you create an azurerm_app_configuration_key resource, the key can be path-like, e.g. key = "/application/config.EXSTREAM_DOMAIN", BUT when you create an azurerm_app_configuration_feature, you will HOSE your Terraform config if you try to set the name field to name = ".appconfig.featureflag/DEBUG". Instead, just set the name field to DEBUG. If you don't do that, you have to completely reset your Terraform state and re-initialize all the resources. I had to learn that the hard way. The error message was not helpful, but it could be updated to be helpful in this respect.
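To make the contrast concrete, here is a sketch of the two resources based on the above (the key's value is a hypothetical placeholder; the assumption is that the provider adds the .appconfig.featureflag/ prefix to feature names itself):

# azurerm_app_configuration_key accepts a path-like key:
resource "azurerm_app_configuration_key" "example_key" {
  configuration_store_id = azurerm_app_configuration.my_app_configuration.id
  key                    = "/application/config.EXSTREAM_DOMAIN"
  value                  = "example-value" # hypothetical value for illustration
}

# azurerm_app_configuration_feature wants only the bare flag name,
# not the ".appconfig.featureflag/" prefix:
resource "azurerm_app_configuration_feature" "my_functions_test_DEBUG" {
  configuration_store_id = azurerm_app_configuration.my_app_configuration.id
  description            = "Debug Flag"
  name                   = "DEBUG"
  label                  = "my-functions-test"
  enabled                = false
}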
I've deployed a Node.js-based IBM Cloud Function using a manifest file. I'll have a few other functions which may share some common code. Here is the folder structure:
manifest.yml
actions/
  myFunction1/
    index.js
    package.json
  myFunction2/
    index.js
    package.json
  common/
    utils.js
Here is my manifest.yml:
packages:
  myfunctions:
    version: 1.0
    license: Apache-2.0
    actions:
      myFunction1:
        function: actions/myFunction1
        runtime: nodejs:10
        include:
          - ["actions/common/*.js", "./common/"]
      myFunction2:
        function: actions/myFunction2/index.js
        runtime: nodejs:10
I deployed the functions using the following command from cmd:
ibmcloud fn deploy --manifest manifest.yml
The deployment was successful and both functions were created. The second function (myFunction2) executes properly, but the first function throws an error when I try to execute it. Here is the error message:
{
"error": "Initialization has failed due to: There was an error uncompressing the action archive."
}
I even tried including the dependencies in the manifest and in the code, but it throws the same error. I was following this article:
https://medium.com/openwhisk/whisk-deploy-zip-actions-with-include-exclude-30ba6d96ad8b
Still struggling, appreciate any help.
Thanks
Musa
I wanted to create a CodePipeline which builds a container image from a CodeCommit source and afterwards deploys the new image in Blue/Green fashion to my ECS service (EC2 launch type).
- The source stage is CodeCommit, which already includes appspec.json as well as taskdef.json.
- The build stage builds the new container and pushes it to ECR successfully; the file imagedefinition.json is the BuildArtifact created at this step, containing the container and the recently created image with its tag corresponding to the CodeCommit commit ID.
- The deploy stage is made of the action "Amazon ECS (Blue/Green)", using the SourceArtifact and BuildArtifact as InputArtifacts, to take the appspec and taskdef from the SourceArtifact and the image description from the BuildArtifact, and finally deploy the new container in a Blue/Green manner.
The problem is with the image definition from the BuildArtifact. The pipeline fails in the Deploy phase with error:
""
Invalid action configuration
Exception while trying to read the image artifact file from the artifact: BuildArtifact.
""
How do I properly configure the "Amazon ECS (Blue/Green)" deploy phase so that it can use the recently created image and deploy it by replacing the placeholder IMAGE_NAME inside taskdef.json?
Any hint highly appreciated :D
Answering my own question here; hopefully it helps others facing the same situation.
The file imagedefinitions.json is not suitable for the deploy action "Amazon ECS (Blue/Green)". For that you have to create the file imageDetail.json within the build step and provide it as an artifact to the deploy step. How? This is what the bottom of my buildspec.yaml looks like:
- printf '{"ImageURI":"%s"}' $REPOSITORY_URI:$IMAGE_TAG > imageDetail.json
artifacts:
files:
- 'image*.json'
- 'appspec.yaml'
- 'taskdef.json'
secondary-artifacts:
DefinitionArtifact:
files:
- appspec.yaml
- taskdef.json
ImageArtifact:
files:
- imageDetail.json
In the Deploy phase of CodePipeline, use DefinitionArtifact and ImageArtifact as input artifacts and configure them in the corresponding sections "Amazon ECS task definition" and "AWS CodeDeploy AppSpec file".
Ensure that your appspec.yaml contains a placeholder for the task definition. Here is my appspec.yaml:
version: 0.0
Resources:
  - TargetService:
      Type: AWS::ECS::Service
      Properties:
        TaskDefinition: <TASK_DEFINITION>
        LoadBalancerInfo:
          ContainerName: "my-test-container"
          ContainerPort: 8000
Also ensure that your taskdef.json contains a placeholder for the final image, like
...
"image": <IMAGE1_NAME>,
...
Use that placeholder in the CodePipeline configuration of your Blue/Green deploy phase, in the section "Dynamically update task definition image - optional", by choosing "ImageArtifact" as the input artifact and <IMAGE1_NAME> as the placeholder.
The Amazon ECS Blue/Green (or CodeDeployToECS) CodePipeline action requires the TaskDefinitionTemplateArtifact parameter (see [1]).
In addition to the above file, note that an imageDetail.json is required for ECS Blue/Green deployments (not 'imagedefinition.json'). The file structure and details are available at [2]. Add this file to the root of your deployment artifact/version control. If you do not want to add this file manually, you can add an ECR source action to the CodePipeline and configure it with the image you are using in the ECS service/taskdef.json. This is all discussed at [2] for clarity.
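For reference, imageDetail.json is a small JSON document with a single ImageURI field, as also shown in the buildspec above; the account, region, repository, and tag below are placeholders:

{
  "ImageURI": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-repo:latest"
}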
To see how this is all brought together, you can also follow the step-by-step instructions for ECS Blue/Green deployments at [3].
References:
[1] https://docs.aws.amazon.com/codepipeline/latest/userguide/reference-pipeline-structure.html#action-requirements : CodePipeline Pipeline Structure Reference - Action Structure Requirements in CodePipeline
[2] https://docs.aws.amazon.com/codepipeline/latest/userguide/file-reference.html#file-reference-ecs-bluegreen : Image Definitions File Reference - imageDetail.json File for Amazon ECS Blue/Green Deployment Actions
[3] https://docs.aws.amazon.com/codepipeline/latest/userguide/tutorials-ecs-ecr-codedeploy.html : Tutorial: Create a Pipeline with an Amazon ECR Source and ECS-to-CodeDeploy Deployment
I ran into the same problem.
tl;dr
I was not passing the correct input artefact with the imageDetail.json to the pipeline's CodeDeployToECS action.
Summary:
Instead of checking in a version of the task definition with the '<IMAGE1_NAME>' placeholder, I'm dynamically generating the task definition input to CodeDeploy inside the pipeline.
Early in the project the task definition is quite volatile, with new variables etc. being passed to the container. It's generated and registered within the pipeline (CloudFormation) and then read out via a CodeBuild project, which substitutes the image placeholder with '<IMAGE1_NAME>' and passes it to the next stage in the pipeline via a pipeline artefact.
Fixing it:
I have a CodeBuild project within the pipeline that produces the imageDetail.json:
{"ImageURI":"########.dkr.ecr.eu-west-1.amazonaws.com/##/#####:2739511dd87d4e4e1f65ed69c9e779b63fb72e36-master-fbe73fdc-6213-4bd6-a784-dcc3d2ae7845"}
Its pipeline output is named 'BuildDockerOutput'.
I have another CodeBuild project that produces:
taskdef.json
{
  "containerDefinitions": [
    {
      "name": "ronantest1",
      "image": "<IMAGE1_NAME>"
    }
  ]
}
appspec.json
{
  "version": 0.0,
  "Resources": [
    {
      "TargetService": {
        "Type": "AWS::ECS::Service",
        "Properties": {
          "TaskDefinition": "<TASK_DEFINITION>",
          "LoadBalancerInfo": {
            "ContainerName": "ronantest1",
            "ContainerPort": "8080"
          }
        }
      }
    }
  ],
  "Hooks": [
    {
      "AfterAllowTestTraffic": "arn:aws:lambda:eu-west-1:######:function:code-deploy-after-allow-test-traffic"
    }
  ]
}
Its pipeline output is named 'PrepareCodeDeployOutputTesting'.
My final CodeDeploy action looks like the following:
- Name: BlueGreenDeploy
  InputArtifacts:
    - Name: BuildDockerOutput
    - Name: PrepareCodeDeployOutputTesting
  Region: !Ref DeployRegion1
  ActionTypeId:
    Category: Deploy
    Owner: AWS
    Version: '1'
    Provider: CodeDeployToECS
  RoleArn: !Sub arn:aws:iam::${TestingAccountId}:role/######/CrossAccountsDeploymentRole
  Configuration:
    AppSpecTemplateArtifact: PrepareCodeDeployOutputTesting
    AppSpecTemplatePath: appspec.json
    ApplicationName: !Ref ApplicationName
    DeploymentGroupName: !Ref ApplicationName
    TaskDefinitionTemplateArtifact: PrepareCodeDeployOutputTesting
    TaskDefinitionTemplatePath: taskdef.json
    Image1ArtifactName: BuildDockerOutput
    Image1ContainerName: "IMAGE1_NAME"
  RunOrder: 4
Note that the different parts of the CodeDeployToECS configuration take their artefacts from different InputArtifacts, specifically 'Image1ArtifactName'.
Thanks to all; this gives me some light on solving the issue.
I would like to add that when you use the AWS CLI, CloudFormation, or Terraform to configure CodePipeline, some parameters and options are not available in the console, and setting some variables in these tools to the empty string "" will cause an exception error.
Always check the CodePipeline settings in the console when you deploy using these tools.
So the error occurs when you define the Image Artifact but do not define the placeholder.
imageDetail.json can be passed into CodeDeploy using the following methods:
- Git source (CodeCommit or GitHub): the file exists in your app codebase.
- ECR source: the file will be autogenerated by ECR, but it will use the SHA256 digest instead of the image tag.
- CodeBuild source: you update the file in your CodeBuild buildspec.yml (as shown in the accepted answer above) and pass it down to the CodeDeploy stage.
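As an illustration of the ECR case described above, the autogenerated file references the image by digest rather than by tag; the account, region, repository, and digest below are placeholders:

{
  "ImageURI": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-repo@sha256:<digest>"
}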
Using Terraform 11.14
My terraform file contains the following resource:
resource "google_storage_bucket" "assets-bucket" {
name = "${local.assets_bucket_name}"
storage_class = "MULTI_REGIONAL"
force_destroy = true
}
And this bucket has already been created (it exists in the infrastructure from a previous apply).
However, the state (remote, on GCS) is inconsistent and doesn't seem to include this bucket.
As a result, terraform apply fails with the following error:
google_storage_bucket.assets-bucket: googleapi: Error 409: You already own this bucket. Please select another name., conflict
How can I reconcile the state? (terraform refresh doesn't help)
EDIT
Following @ydaetskcoR's response, I did:
terraform import module.bf-nathan.google_storage_bucket.assets-bucket my-bucket
The output:
module.bf-nathan.google_storage_bucket.assets-bucket: Importing from ID "my-bucket"...
module.bf-nathan.google_storage_bucket.assets-bucket: Import complete! Imported google_storage_bucket (ID: next-assets-bf-nathan-botfront-cloud)
module.bf-nathan.google_storage_bucket.assets-bucket: Refreshing state... (ID: next-assets-bf-nathan-botfront-cloud)
Error: module.bf-nathan.provider.kubernetes: 1:11: unknown variable accessed: var.cluster_ip in:
https://${var.cluster_ip}
The refreshing step doesn't work. I ran the command from the project's root where a terraform.tfvars file exists.
I tried adding -var-file=terraform.tfvars but no luck. Any idea?
You need to import it into the existing state file. You can do this with the terraform import command for any resource that supports it.
Thankfully the google_storage_bucket resource does support it:
Storage buckets can be imported using the name or project/name. If the project is not passed to the import command it will be inferred from the provider block or environment variables. If it cannot be inferred it will be queried from the Compute API (this will fail if the API is not enabled).
e.g.
$ terraform import google_storage_bucket.image-store image-store-bucket
$ terraform import google_storage_bucket.image-store tf-test-project/image-store-bucket
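When the resource lives inside a module, as in the question, the module path goes in front of the resource address. Combining that with the project/name form above gives something like the following (project and bucket name are illustrative placeholders):

$ terraform import module.bf-nathan.google_storage_bucket.assets-bucket my-project/my-bucket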
I am using kubebuilder to create a Kubernetes operator project. After running the project init command described in the quickstart guide,
kubebuilder init --domain k8s.io --license apache2 --owner "The Kubernetes Authors"
dep ensure fails with the error log given below.
Solving failure: No versions of k8s.io/client-go met constraints:
v8.0.0: Could not introduce k8s.io/client-go#v8.0.0, as it is not allowed by constraints from the following projects:
kubernetes-1.10.1 from (root)
kubernetes-1.10.1 from sigs.k8s.io/controller-runtime#master
v7.0.0: Could not introduce k8s.io/client-go#v7.0.0, as it is not allowed by constraints from the following projects:
kubernetes-1.10.1 from (root)
kubernetes-1.10.1 from sigs.k8s.io/controller-runtime#master
v6.0.0: Could not introduce k8s.io/client-go#v6.0.0, as it is not allowed by constraints from the following projects:
kubernetes-1.10.1 from (root)
kubernetes-1.10.1 from sigs.k8s.io/controller-runtime#master
Try using the latest kubebuilder release. It's likely that the dependencies for the version in the quick start are out of date.
It works fine for me with v1.0.3:
~/go/src/github.com $ kubebuilder init --domain k8s.io --license apache2 --owner "The Kubernetes Authors"
Run `dep ensure` to fetch dependencies (Recommended) [y/n]?
y
dep ensure
Running make...
make
go generate ./pkg/... ./cmd/...
go fmt ./pkg/... ./cmd/...
go vet ./pkg/... ./cmd/...
go run vendor/sigs.k8s.io/controller-tools/cmd/controller-gen/main.go all
CRD manifests generated under '/root/go/src/github.com/config/crds'
RBAC manifests generated under '/root/go/src/github.com/config/rbac'
go test ./pkg/... ./cmd/... -coverprofile cover.out
? github.com/pkg/apis [no test files]
? github.com/pkg/controller [no test files]
ok github.com/pkg/errors 0.207s coverage: 100.0% of statements
? github.com/cmd/manager [no test files]
go build -o bin/manager github.com/cmd/manager
Next: Define a resource with:
$ kubebuilder create api