How to solve Serverless split stack plugin failure around resourceConcurrency - aws-cloudformation

I have a stack exceeding 500 resources and found the serverless-plugin-split-stacks plugin, which splits the stack according to several configuration options.
Below is my configuration for splitting the stack. Using it, I was able to split the stack in two, but I also got the warning: Serverless: Recoverable error occurred (TooManyRequestsException: Rate exceeded).
custom:
  splitStacks:
    nestedStackCount: 2 # Controls the number of created nested stacks
    perFunction: false
    perType: false
    perGroupFunction: true
To resolve the API rate limit, I added the resourceConcurrency property as below:
custom:
  splitStacks:
    nestedStackCount: 2 # Controls the number of created nested stacks
    perFunction: false
    perType: false
    perGroupFunction: true
    resourceConcurrency: 20 # Controls how much resources are deployed in parallel. Disabled if absent.
Upon deployment, I received the following error:
ServerlessError: The CloudFormation template is invalid: ValidationError: Circular dependency between resources: [GetAllUsersLambdaFunction,.....
Is there any workaround to resolve this issue? Is resourceConcurrency even in a working state?
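One workaround that may be worth trying (a sketch only, not a confirmed fix; the nestedStackCount value here is an assumption to illustrate the idea) is to leave resourceConcurrency out and instead spread resources over more nested stacks, keeping only the split itself:

custom:
  splitStacks:
    nestedStackCount: 4 # assumed value; tune for your own stack
    perFunction: false
    perType: false
    perGroupFunction: true
    # resourceConcurrency intentionally omitted while the circular-dependency error persists

Whether this avoids the TooManyRequestsException depends on the account's CloudFormation rate limits, so treat it as an experiment rather than a definitive answer.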

Related

Metrics from spring batch are not pushed to prometheus push gateway

I followed the approaches mentioned in this post. Basically, I have my local Prometheus and push gateway set up using Docker, from the Spring Batch examples.
I have the below dependencies added in my build.gradle, which means the PrometheusPushGatewayManager bean is auto-configured and should push metrics to the configured gateway.
implementation("io.micrometer:micrometer-registry-prometheus:1.8.4")
implementation("io.prometheus:simpleclient_pushgateway:0.16.0")
My application.yml looks like this:
metrics:
  export:
    prometheus:
      enabled: true
      pushgateway:
        enabled: true
        base-url: http://0.0.0.0:9091
        job: main-job
        push-rate: 5s
      descriptions: true
But when I navigate to the /metrics endpoint, the metrics all have a value of 0.
Example:
spring_batch_step_seconds_max{instance="",job="job",job_name="job-job-flow",name="process-5.csv",status="FAILED"} 0
spring_batch_step_seconds_max{instance="",job="job",job_name="job-job-flow",name="process-6.csv",status="COMPLETED"} 0
spring_batch_step_seconds_max{instance="",job="job",job_name="job-job-flow",name="process-7.csv",status="FAILED"} 0
spring_batch_step_seconds_max{instance="",job="job",job_name="job-job-flow",name="process-2csv",status="FAILED"} 0
spring_batch_step_seconds_max{instance="",job="job",job_name="job-job-flow",name="start-job-job",status="COMPLETED"} 0
I've checked this post, which indicates that we need to configure a registry, but if I'm using the auto-configured PrometheusPushGatewayManager by adding the simpleclient_pushgateway dependency, how do I configure a registry?
Setting a breakpoint and inspecting the value of Metrics.globalRegistry.meters[1] shows values like SampleImpl{duration(seconds)=392.074203242, duration(nanos)=3.92074203242E11, startTimeNanos=1098399187818886}, so the metrics are captured but not pushed properly.
Am I missing some configuration to get the metrics pushed properly to the gateway?
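For reference, in Spring Boot 2.x the push gateway settings are bound from the management prefix, so a fully qualified version of the snippet above (same values, just nested under management:) would look like this. This is only relevant if the metrics: block shown in the question is not already nested under management: in the real file:

management:
  metrics:
    export:
      prometheus:
        enabled: true
        descriptions: true
        pushgateway:
          enabled: true
          base-url: http://0.0.0.0:9091
          job: main-job
          push-rate: 5s

If the block is already under management:, this is equivalent and the problem lies elsewhere.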

Cloudformation root stack resources are not split properly with serverless-plugin-split-stacks

We use serverless-plugin-split-stacks to break resources into nested stacks and have set it up in serverless.yml as follows.
custom:
  splitStacks:
    perFunction: false
    perType: true
    perGroupFunction: false
  webpack:
    webpackConfig: ./webpack.config.js
    includeModules: true
Everything was going well until we were greeted with the following error:
Error: The CloudFormation template is invalid: Template format error: Number of resources, 206, is greater than maximum allowed, 200
When this error happens, the state of the nested stacks is as follows:
Serverless: [serverless-plugin-split-stacks]: Resources per stack:
Serverless: [serverless-plugin-split-stacks]: - (root): 206
Serverless: [serverless-plugin-split-stacks]: - APINestedStack: 55
Serverless: [serverless-plugin-split-stacks]: - PermissionsNestedStack: 49
My problem is that even though we have set up split-stacks properly, it does not move the resources in the root stack into new nested stacks. Why is that?
If there's anything I have missed here, please let me know. Thanks for any helpful suggestions.
The npm package serverless-plugin-split-stacks is not working; it has been deprecated. For more information: https://www.npmjs.com/package/serverless-plugin-split-stacks
To resolve your problem, read these tips on Serverless workarounds for CloudFormation's 200-resource limit:
https://www.serverless.com/blog/serverless-workaround-cloudformation-200-resource-limit
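Separately from those links, it may be worth experimenting with the plugin's other grouping modes, since the per-type migrations may leave many resources in the root stack. A sketch (not verified against this particular project) that groups resources per function instead:

custom:
  splitStacks:
    perFunction: false
    perType: false
    perGroupFunction: true
  webpack:
    webpackConfig: ./webpack.config.js
    includeModules: true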

SAM deployment failed with error: Waiter StackCreateComplete failed: Waiter encountered a terminal failure state

When I try to deploy the package with SAM, the very first status that appears in the CloudFormation console is ROLLBACK_IN_PROGRESS, after which it changes to ROLLBACK_COMPLETE.
I have tried deleting the stack and trying again, but the same issue occurs every time.
The error in the terminal looks like this:
Sourcing local options from ./SAMToolkit.devenv
SAM_PARAM_PKG environment variable not set
SAMToolkit will operate in legacy mode.
Please set SAM_PARAM_PKG in your .devenv file to run modern packaging.
Run 'sam help package' for more information
Runtime: java
Attempting to assume role from AWS Identity Broker using account 634668058279
Assumed role from AWS Identity Broker successfully.
Deploying stack sam-dev* from template: /home/***/1.0/runtime/sam/template.yml
sam-additional-artifacts-url.txt was not found, which is fine if there is no additional artifacts uploaded
Replacing BATS::SAM placeholders in template...
Uploading template build/private/tmp/sam-toolkit.yml to s3://***/sam-toolkit.yml
make_bucket failed: s3://sam-dev* An error occurred (BucketAlreadyOwnedByYou) when calling the CreateBucket operation: Your previous request to create the named bucket succeeded and you already own it.
upload: build/private/tmp/sam-toolkit.yml to s3://sam-dev*/sam-toolkit.yml
An error occurred (ValidationError) when calling the DescribeStacks operation: Stack with id sam-dev* does not exist
sam-dev* will be created.
Creating ChangeSet ChangeSet-2020-01-20T12-25-56Z
Deploying stack sam-dev*. Follow in console: https://aws-identity-broker.amazon.com/federation/634668058279/CloudFormation
ChangeSet ChangeSet-2020-01-20T12-25-56Z in sam-dev* succeeded
"StackStatus": "REVIEW_IN_PROGRESS",
sam-dev* reached REVIEW_IN_PROGRESS
Deploying stack sam-dev*. Follow in console: https://console.aws.amazon.com/cloudformation/home?region=us-west-2
Waiting for stack-create-complete
Waiter StackCreateComplete failed: Waiter encountered a terminal failure state
Command failed.
Please see the logs above.
I had set SQS as the event source for the Lambda function, but did not provide permissions like this
- Effect: Allow
  Action:
    - sqs:ReceiveMessage
    - sqs:DeleteMessage
    - sqs:GetQueueAttributes
  Resource: "*"
in the Lambda's policies.
I found this error in the "Events" tab of the CloudFormation console.
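For illustration only, here is a minimal sketch of how such a statement could be attached to the function in a SAM template; the function name, handler, and queue resource below are hypothetical, not taken from the question:

MyQueueConsumerFunction:
  Type: AWS::Serverless::Function
  Properties:
    Handler: com.example.Handler::handleRequest # hypothetical handler
    Runtime: java8
    Policies:
      - Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Action:
              - sqs:ReceiveMessage
              - sqs:DeleteMessage
              - sqs:GetQueueAttributes
            Resource: "*"
    Events:
      QueueEvent:
        Type: SQS
        Properties:
          Queue: !GetAtt MyQueue.Arn # hypothetical queue defined elsewhere in the template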

Concourse 3.3.0 spitting hard to debug error: "json: unsupported type: map[interface {}]interface {}"

We are using some community custom resource types (https://github.com/ljfranklin/terraform-resource and https://github.com/cloudfoundry/bosh-deployment-resource). After upgrading to Concourse 3.3.0, we've begun consistently seeing the following error on a few of our jobs, at the same step: json: unsupported type: map[interface {}]interface {}.
This is fairly hard to debug as there is no other log output other than that. We are unsure what is incompatible between those resources and Concourse.
Notes about our pipeline:
We originally substituted all of our usages of {{}} with (()), but reverting that did not make the error go away.
We upgraded concourse from v3.0.1.
The failing step can be found here: https://github.com/cloudfoundry/capi-ci/blob/6a73764d09f544820ce39f16dca166d6d6861996/ci/pipeline.yml#L731-L739
We are using a resource called elsa-aws-storage-terraform, found here: https://github.com/cloudfoundry/capi-ci/blob/6a73764d09f544820ce39f16dca166d6d6861996/ci/pipeline.yml#L731-L739
That resource is of a custom resource-type terraform found here: https://github.com/cloudfoundry/capi-ci/blob/6a73764d09f544820ce39f16dca166d6d6861996/ci/pipeline.yml#L45-L48
A similar failing step can be found here: https://github.com/cloudfoundry/capi-ci/blob/6a73764d09f544820ce39f16dca166d6d6861996/ci/pipeline.yml#L871-L886
This is related to the issue of not being able to define nested maps in resource configuration: https://github.com/concourse/concourse/issues/1345 (see the sketch below).
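As a purely hypothetical illustration (resource names and keys made up, not taken from the pipeline above), the kind of nested map inside a resource's source that the linked issue describes looks like this:

resources:
- name: example-terraform-state # hypothetical
  type: terraform
  source:
    storage:
      bucket: example-bucket
    vars:
      tags: # nested map inside the resource configuration
        team: example
        env: ci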

JMeter performance plugin report always showing 100% of errors on 200 success response code

After the build completes, the error column in the Performance Trend report displays 100% errors, whereas the HTTP response code is 200 (successful).
Expected result: 0% errors in the error column.
We have Performance Plugin 1.13 in Jenkins 1.607.
My .jtl file contains:
1434631428652,2082,Deactivate_Enrollee,200,OK,setUp Thread Group 1-1,text,true,536,2073
1434631430748,574,Activate_Enrollee,200,OK,setUp Thread Group 1-1,text,true,536,574
1434631431323,315,User_Status,200,OK,setUp Thread Group 1-1,text,true,1317,315
1434631431711,1,Debug Sampler,200,OK,setUp Thread Group 1-1,text,true,807,0
Console output:
Started by user anonymous
Building in workspace /results/jtls
Performance: Percentage of errors greater or equal than 0% sets the build as unstable
Performance: Percentage of errors greater or equal than 0% sets the build as failure
Performance: Recording JMeter reports '*.jtl'
Performance: Parsing JMeter report file APITest_JMeter.jtl
Performance: File APITest_JMeter.jtl reported 100.0% of errors [FAILURE]. Build status is: FAILURE
Build step 'Publish Performance test result report' changed build result to FAILURE
Finished: FAILURE
Can anyone solve this for Jenkins?
It seems to be due to a defect in Performance Plugin version 1.13.
You may use Performance Plugin version 1.9 or below and let us know if this resolves your issue.
Your .jtl file is in the wrong format for the plugin; see:
https://github.com/jenkinsci/performance-plugin/blob/master/src/main/java/hudson/plugins/performance/JMeterCsvParser.java#L157
This leads to a failure when parsing that value to a boolean in this code:
sample.setSuccessful(Boolean.valueOf(values[successIdx]));
I think your saveservice configuration is not well suited for the plugin; you should set:
jmeter.save.saveservice.response_message=false
I think you may be facing the well-known critical bug 28426 of the Performance Plugin.