Terraform resource Elastic Load Balancer - reduce ConnectionDrainingPolicy timeout - aws-cloudformation

I would like to reduce the ConnectionDrainingPolicy (CloudFormation term) Timeout (default is 300), but I cannot find an argument in the list at https://www.terraform.io/docs/providers/aws/r/lb.html that allows me to do it. Any suggestions?

In AWS, Classic ELBs are different from the v2 load balancers (ALB/NLB, the aws_lb resource). ConnectionDrainingPolicy is an ELB concept.
Look at the aws_elb resource type and its
connection_draining and connection_draining_timeout arguments.
Resources:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-elb-connectiondrainingpolicy.html
https://www.terraform.io/docs/providers/aws/r/elb.html#connection_draining
https://www.terraform.io/docs/providers/aws/r/elb.html#connection_draining_timeout
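For illustration, a minimal sketch of an aws_elb resource with a shortened drain timeout (the resource name, availability zone, and ports are placeholders):
resource "aws_elb" "example" {
  name               = "example-elb"
  availability_zones = ["us-east-1a"]

  listener {
    instance_port     = 80
    instance_protocol = "http"
    lb_port           = 80
    lb_protocol       = "http"
  }

  # Enable draining and lower the timeout from the 300-second default
  connection_draining         = true
  connection_draining_timeout = 60
}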

Related

How to solve Serverless split stack plugin failure around resourceConcurrency

I have a stack exceeding 500 resources and found a Serverless plugin that splits the stack according to several configurations.
Below is my configuration for splitting the stack. With this configuration I was able to split the stack in two, but I also got the warning Serverless: Recoverable error occurred (TooManyRequestsException: Rate exceeded
custom:
  splitStacks:
    nestedStackCount: 2 # Controls the number of created nested stacks
    perFunction: false
    perType: false
    perGroupFunction: true
To work around the API rate limit I used the resourceConcurrency property as below:
custom:
  splitStacks:
    nestedStackCount: 2 # Controls the number of created nested stacks
    perFunction: false
    perType: false
    perGroupFunction: true
    resourceConcurrency: 20 # Controls how many resources are deployed in parallel. Disabled if absent.
Upon deployment, I received the following error:
ServerlessError: The CloudFormation template is invalid: ValidationError: Circular dependency between resources: [GetAllUsersLambdaFunction,.....
Is there any workaround for this issue? Is resourceConcurrency even in a working state?

Consul agent on kubernetes, on node or pod?

I deployed an AWS EKS cluster via Terraform. I also deployed Consul following HashiCorp's tutorial, and I can see the nodes in Consul's UI.
Now I'm wondering: how will all the Consul agents know about the pods I deploy? I deploy something and it isn't shown anywhere in Consul.
I can't find any documentation on how to register pods (services) with Consul via the node's Consul agent. Do I need to configure that somewhere? Should I skip the node's agent and register the service straight from the pod? HashiCorp discourages that, since it may increase resource utilization depending on how many pods you deploy on a given node. But then how does the node's agent learn about the services deployed on that node?
Moreover, when I deploy a pod on a node, SSH into the node, and install Consul there, that Consul agent can't find the Consul server (unlike the node's own agent, which can find it).
EDIT:
Bottom line is I can't find WHERE to add the configuration. If I execute ON THE POD:
consul members
It works properly and I get:
Node Address Status Type Build Protocol DC Segment
consul-consul-server-0 10.0.103.23:8301 alive server 1.10.0 2 full <all>
consul-consul-server-1 10.0.101.151:8301 alive server 1.10.0 2 full <all>
consul-consul-server-2 10.0.102.112:8301 alive server 1.10.0 2 full <all>
ip-10-0-101-129.ec2.internal 10.0.101.70:8301 alive client 1.10.0 2 full <default>
ip-10-0-102-175.ec2.internal 10.0.102.244:8301 alive client 1.10.0 2 full <default>
ip-10-0-103-240.ec2.internal 10.0.103.245:8301 alive client 1.10.0 2 full <default>
ip-10-0-3-223.ec2.internal 10.0.3.249:8301 alive client 1.10.0 2 full <default>
But if I execute:
# consul agent -datacenter=voip-full -config-dir=/etc/consul.d/ -log-file=log-file -advertise=$(wget -q -O - http://169.254.169.254/latest/meta-data/local-ipv4)
I get the following error:
==> Starting Consul agent...
Version: '1.10.1'
Node ID: 'f10070e7-9910-06c7-0e12-6edb6cc4c9b9'
Node name: 'ip-10-0-3-223.ec2.internal'
Datacenter: 'voip-full' (Segment: '')
Server: false (Bootstrap: false)
Client Addr: [127.0.0.1] (HTTP: 8500, HTTPS: -1, gRPC: -1, DNS: 8600)
Cluster Addr: 10.0.3.223 (LAN: 8301, WAN: 8302)
Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false, Auto-Encrypt-TLS: false
==> Log data will now stream in as it occurs:
2021-08-16T18:23:06.936Z [WARN] agent: skipping file /etc/consul.d/consul.env, extension must be .hcl or .json, or config format must be set
2021-08-16T18:23:06.936Z [WARN] agent: Node name "ip-10-0-3-223.ec2.internal" will not be discoverable via DNS due to invalid characters. Valid characters include all alpha-numerics and dashes.
2021-08-16T18:23:06.946Z [WARN] agent.auto_config: skipping file /etc/consul.d/consul.env, extension must be .hcl or .json, or config format must be set
2021-08-16T18:23:06.947Z [WARN] agent.auto_config: Node name "ip-10-0-3-223.ec2.internal" will not be discoverable via DNS due to invalid characters. Valid characters include all alpha-numerics and dashes.
2021-08-16T18:23:06.948Z [INFO] agent.client.serf.lan: serf: EventMemberJoin: ip-10-0-3-223.ec2.internal 10.0.3.223
2021-08-16T18:23:06.948Z [INFO] agent.router: Initializing LAN area manager
2021-08-16T18:23:06.950Z [INFO] agent: Started DNS server: address=127.0.0.1:8600 network=udp
2021-08-16T18:23:06.950Z [WARN] agent.client.serf.lan: serf: Failed to re-join any previously known node
2021-08-16T18:23:06.950Z [INFO] agent: Started DNS server: address=127.0.0.1:8600 network=tcp
2021-08-16T18:23:06.951Z [INFO] agent: Starting server: address=127.0.0.1:8500 network=tcp protocol=http
2021-08-16T18:23:06.951Z [WARN] agent: DEPRECATED Backwards compatibility with pre-1.9 metrics enabled. These metrics will be removed in a future version of Consul. Set `telemetry { disable_compat_1.9 = true }` to disable them.
2021-08-16T18:23:06.953Z [INFO] agent: started state syncer
2021-08-16T18:23:06.953Z [INFO] agent: Consul agent running!
2021-08-16T18:23:06.953Z [WARN] agent.router.manager: No servers available
2021-08-16T18:23:06.954Z [ERROR] agent.anti_entropy: failed to sync remote state: error="No known Consul servers"
2021-08-16T18:23:34.169Z [WARN] agent.router.manager: No servers available
2021-08-16T18:23:34.169Z [ERROR] agent.anti_entropy: failed to sync remote state: error="No known Consul servers"
So where do I add the config?
I also tried adding a Kubernetes Service pointing to the pod, but the service doesn't show up in Consul's UI...
What do you guys recommend?
Thanks
Consul knows where these services are located because each service registers with its local Consul client. Operators can register services manually, configuration management tools can register services when they are deployed, or container orchestration platforms can register services automatically via integrations.
If you are planning to use the manual option, you have to register the service with Consul yourself.
Something like:
echo '{
  "service": {
    "name": "web",
    "tags": [
      "rails"
    ],
    "port": 80
  }
}' > ./consul.d/web.json
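After writing the file, the local agent still has to load it. A sketch, assuming the agent's -config-dir points at that directory (as in the agent command above):
consul reload                                 # re-read the config directory on a running agent
# or register the definition via the local agent's HTTP API:
consul services register ./consul.d/web.json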
You can find a good example at: https://thenewstack.io/implementing-service-discovery-of-microservices-with-consul/
This is also a very useful document for detailed configuration of health checks and service discovery: https://cloud.spring.io/spring-cloud-consul/multi/multi_spring-cloud-consul-discovery.html
Official tutorial: https://learn.hashicorp.com/tutorials/consul/get-started-service-discovery
BTW, I was finally able to figure out the issue.
consul-dns is not deployed by default; I had to deploy it manually, then forward all .consul requests from CoreDNS to consul-dns.
All is working now. Thanks!
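For reference, a hedged sketch of the CoreDNS server block that forwards .consul lookups to the consul-dns service (the IP is a placeholder for that service's ClusterIP, e.g. from kubectl get svc):
consul:53 {
    errors
    cache 30
    forward . 10.100.20.30   # placeholder: ClusterIP of the consul-dns service
}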

Kubernetes cluster working but getting this error from the NGINX controller

Although the cluster is working as expected, this error is somewhat troublesome.
Kubernetes Version: v1.17.3
E0407 17:57:54.426952 1 reflector.go:123]
github.com/nginxinc/kubernetes-ingress/nginx-ingress/internal/k8s/controller.go:341:
Failed to list *v1.VirtualServerRoute:
virtualserverroutes.k8s.nginx.org is forbidden: User
"system:serviceaccount:kube-system:default" cannot list resource
"virtualserverroutes" in API group "k8s.nginx.org" at the cluster
scope
To fix the problem, disable list/watch operations on the virtualserver and virtualserverroutes custom resources by setting the --enable-custom-resources flag to false in your Deployment/DaemonSet manifest.
--enable-custom-resources
Enables custom resources (default true)
Take a look also at: nginx-ingress-controller-configuration, disabling-list-watch-virtualserver.
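A minimal sketch of where the flag lands in the controller Deployment (the container name and image tag are placeholders for a typical nginx-ingress install):
spec:
  template:
    spec:
      containers:
        - name: nginx-ingress                  # placeholder container name
          image: nginx/nginx-ingress:1.7.0     # placeholder image/tag
          args:
            - --enable-custom-resources=false  # stop list/watch of VirtualServer/VirtualServerRoute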

Configure Connection Draining for AWS Load Balancer v2 in CloudFormation

This blog post (here specifically) details how to configure connection draining for a 'classic' version 1 load balancer using the AWS::ElasticLoadBalancing::LoadBalancer type, like so:
"ElasticLoadBalancer": {
"Type": "AWS::ElasticLoadBalancing::LoadBalancer",
"Properties": {
"ConnectionDrainingPolicy": {
"Enabled": "true",
"Timeout": "300"
},
...
}
}
How can I do this using the version 2 load balancer with type AWS::ElasticLoadBalancingV2::LoadBalancer?
My best guess from the documentation is that I should use LoadBalancerAttributes, but I can't find anything related to connection draining in the list of attributes here.
In an Application Load Balancer (ELB v2) this is configured on the target group, using TargetGroupAttributes, and is called deregistration delay rather than connection draining.
deregistration_delay.timeout_seconds - The amount of time for Elastic Load Balancing to wait before changing the state of a deregistering target from draining to unused. The range is 0-3600 seconds. The default value is 300 seconds.
TargetGroup:
  Type: AWS::ElasticLoadBalancingV2::TargetGroup
  Properties:
    TargetGroupAttributes:
      - Key: deregistration_delay.timeout_seconds
        Value: '20'

Hystrix.stream and management.context

Setting
management.context-path = /admin
and using
@EnableCircuitBreaker
makes the Hystrix endpoint /admin/hystrix.stream.
This becomes an issue when using Turbine to aggregate metrics, since it looks for
server:port/hystrix.stream
on each instance when discovering instances via Eureka.
Any suggestions?
Full config for turbine:
server.port=8082
spring.application.name=turbine
management.endpoint.health.enabled=true
management.endpoints.jmx.exposure.include=*
management.endpoints.web.exposure.include=*
management.endpoints.web.base-path=/actuator
management.endpoints.web.cors.allowed-origins=true
management.endpoint.health.show-details=always
eureka.client.serviceUrl.defaultZone=${EUREKA_URI:http://localhost:8761/eureka}
eureka.instance.lease-expiration-duration-in-seconds=5
eureka.instance.lease-renewal-interval-in-seconds=5
turbine.aggregator.cluster-config=default
turbine.app-config=google
turbine.cluster-name-expression= new String("default")
turbine.combine-host-port=true
turbine.instanceUrlSuffix.default: actuator/hystrix.stream
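Since Turbine builds the metrics URL from turbine.instanceUrlSuffix, one possible adjustment (an assumption based on the property already used above, not a confirmed fix) is to point the suffix at the relocated endpoint:
# match the management.context-path (/admin) of the monitored services
turbine.instanceUrlSuffix.default=admin/hystrix.stream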