How Do I Attach an ASG to an ALB Target Group? - aws-cloudformation

In AWS CloudFormation, how do I attach an Auto Scaling Group (ASG) to an Application Load Balancer Target Group?
There does not appear to be any way to do that directly in a CloudFormation template (CFT), though it is possible using the AWS CLI or API. The AWS::ElasticLoadBalancingV2::TargetGroup resource only offers these target types:
instance. Targets are specified by instance ID.
ip. Targets are specified by IP address.
lambda. The target group contains a single Lambda function.

That is because, apparently, one does not attach an ASG to a target group; instead, one attaches a target group or groups to an ASG.
Seems a little backwards to me, but I'm sure it has to do with the ASG needing to register/deregister its instances as it scales in and out.
See the documentation for the AWS::AutoScaling::AutoScalingGroup resource for details.
Example:
TargetGroup:
  Type: AWS::ElasticLoadBalancingV2::TargetGroup
  Properties:
    VpcId: !Ref VPC
    TargetType: instance
    Port: 80
    Protocol: HTTP

AutoScalingGroup:
  Type: AWS::AutoScaling::AutoScalingGroup
  Properties:
    AvailabilityZones:
      Fn::GetAZs: !Ref "AWS::Region"
    MaxSize: "3"
    MinSize: "1"
    TargetGroupArns:
      - !Ref TargetGroup
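For completeness, the target group itself is wired to the ALB through a listener. Below is a minimal sketch that assumes an AWS::ElasticLoadBalancingV2::LoadBalancer resource named LoadBalancer is defined elsewhere in the template (a real AutoScalingGroup would also need a LaunchConfigurationName or LaunchTemplate, omitted above for brevity):

# Sketch only: assumes a LoadBalancer resource exists elsewhere in this template.
Listener:
  Type: AWS::ElasticLoadBalancingV2::Listener
  Properties:
    LoadBalancerArn: !Ref LoadBalancer
    Port: 80
    Protocol: HTTP
    DefaultActions:
      - Type: forward
        TargetGroupArn: !Ref TargetGroup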

Related

How to set Inbound Rule Name via Cloudformation in AWS

I'm trying to set the name of this Ingress Rule in my Security Group:
I've tried two methods and looked at the documentation and can't find a way to do it. I've tried:
SecurityGroupIngress:
  - IpProtocol: icmp
    FromPort: 0
    ToPort: -1
    Name: Allow ICMP
    Description: Allow ICMP
    CidrIp: 0.0.0.0/0
And I've tried this:
SecurityGroupIngress:
  - IpProtocol: icmp
    FromPort: 0
    ToPort: -1
    Description: Allow ICMP
    CidrIp: 0.0.0.0/0
    Tags:
      - Key: Name
        Value: Allow ICMP
I've looked for examples, and I've looked through the documentation and I don't see a reference to this. Any ideas?
The Name that you see in the console is the Name tag of the resource. Currently, neither AWS::EC2::SecurityGroupIngress nor the embedded ingress objects in AWS::EC2::SecurityGroup support tags in CloudFormation (tags for individual rules are a recent feature, added in July 2021, and CloudFormation doesn't support every new feature on release). If this is a crucial requirement, you can use a Lambda-backed custom CloudFormation resource, backed by an AWS Lambda function that tags the rule for you.
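If you go that route, the template side is just a custom resource that hands the rule's details to your function. A rough sketch, where the logical IDs (TagRuleFunction, MySecurityGroup) and property names are assumptions for illustration, not an existing API:

# Hypothetical custom resource; your Lambda would look up the rule and call ec2:CreateTags.
TagIngressRule:
  Type: Custom::TagIngressRule
  Properties:
    ServiceToken: !GetAtt TagRuleFunction.Arn   # ARN of your own Lambda (assumed name)
    GroupId: !Ref MySecurityGroup               # assumed logical ID of the security group
    RuleDescription: Allow ICMP                 # used by the function to locate the rule
    NameTag: Allow ICMP                         # value the function writes as the Name tag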

strimzi operator 0.20 kafka 'useServiceDnsDomain' has no effect

PROBLEM:
For some reason, the client pod can only resolve fully-qualified DNS names that include the cluster service suffix.
That problem is stated in this question:
AKS, Windows Node, dns does not resolve service until fully qualified name is used
To work around this issue, I'm using the useServiceDnsDomain flag. The documentation (https://strimzi.io/docs/operators/master/using.html#type-GenericKafkaListenerConfiguration-schema-reference ) explains it as
Configures whether the Kubernetes service DNS domain should be used or not. If set to true, the generated addresses will contain the service DNS domain suffix (by default .cluster.local, can be configured using the environment variable KUBERNETES_SERVICE_DNS_DOMAIN). Defaults to false. This field can be used only with internal type listener.
Part of my YAML is as follows:
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: tt-kafka
  namespace: shared
spec:
  kafka:
    version: 2.5.0
    replicas: 3
    listeners:
      - name: local
        port: 9092
        type: internal
        tls: false
        useServiceDnsDomain: true
This didn't do anything, so I also tried adding KUBERNETES_SERVICE_DNS_DOMAIN like below
template:
  kafkaContainer:
    env:
      - name: KUBERNETES_SERVICE_DNS_DOMAIN
        value: .cluster.local
The strimzi/operator:0.20.0 image is being used.
In my client (.net Confluent.Kafka 1.4.4), I used tt-kafka-kafka-bootstrap.shared.svc.cluster.local as BootstrapServers.
It gives me error as
Error: GroupCoordinator: Failed to resolve
'tt-kafka-kafka-2.tt-kafka-kafka-brokers.shared.svc:9092': No such
host is known.
I'm expecting the broker service to provide full names to the client but from the error it seems useServiceDnsDomain has no effect.
Any help is appreciated. Thanks.
As explained in https://github.com/strimzi/strimzi-kafka-operator/issues/3898, there is a typo in the docs. The right YAML is:
listeners:
  - name: plain
    port: 9092
    type: internal
    tls: false
    configuration:
      useServiceDnsDomain: true
If your domain is different from .cluster.local, you can use the KUBERNETES_SERVICE_DNS_DOMAIN env var to override it. But you have to configure it on the Strimzi Cluster Operator pod, not on the Kafka pods: https://strimzi.io/docs/operators/latest/full/using.html#ref-operator-cluster-str
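For illustration, that override goes on the Cluster Operator's own Deployment rather than on the Kafka custom resource; a sketch, where the domain value is an assumption:

# Fragment of the strimzi-cluster-operator Deployment (not the Kafka CR)
spec:
  template:
    spec:
      containers:
        - name: strimzi-cluster-operator
          env:
            - name: KUBERNETES_SERVICE_DNS_DOMAIN
              value: my-domain.example   # assumed non-default cluster domain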

How to Sync K8s Service to Consul Cluster which is outside the K8s?

From the consul-k8s document:
The Consul server cluster can run either in or out of a Kubernetes cluster.
The Consul server cluster does not need to be running on the same machine or same platform as the sync process.
The sync process needs to be configured with the address to the Consul cluster as well as any additional access information such as ACL tokens.
The Consul cluster I am trying to sync is outside the k8s cluster, so based on the document, I must pass the address of the Consul cluster to the sync process. However, the Helm chart for installing the sync process doesn't contain any value for configuring the Consul cluster's IP address.
syncCatalog:
  # True if you want to enable the catalog sync. "-" for default.
  enabled: false
  image: null
  default: true # true will sync by default, otherwise requires annotation

  # toConsul and toK8S control whether syncing is enabled to Consul or K8S
  # as a destination. If both of these are disabled, the sync will do nothing.
  toConsul: true
  toK8S: true

  # k8sPrefix is the service prefix to prepend to services before registering
  # with Kubernetes. For example "consul-" will register all services
  # prepended with "consul-". (Consul -> Kubernetes sync)
  k8sPrefix: null

  # consulPrefix is the service prefix which prepends itself
  # to Kubernetes services registered within Consul.
  # For example, "k8s-" will register all services prepended with "k8s-".
  # (Kubernetes -> Consul sync)
  consulPrefix: null

  # k8sTag is an optional tag that is applied to all of the Kubernetes services
  # that are synced into Consul. If nothing is set, defaults to "k8s".
  # (Kubernetes -> Consul sync)
  k8sTag: null

  # syncClusterIPServices syncs services of the ClusterIP type, which may
  # or may not be broadly accessible depending on your Kubernetes cluster.
  # Set this to false to skip syncing ClusterIP services.
  syncClusterIPServices: true

  # nodePortSyncType configures the type of syncing that happens for NodePort
  # services. The valid options are: ExternalOnly, InternalOnly, ExternalFirst.
  # - ExternalOnly will only use a node's ExternalIP address for the sync
  # - InternalOnly uses the node's InternalIP address
  # - ExternalFirst will preferentially use the node's ExternalIP address, but
  #   if it doesn't exist, it will use the node's InternalIP address instead.
  nodePortSyncType: ExternalFirst

  # aclSyncToken refers to a Kubernetes secret that you have created that contains
  # an ACL token for your Consul cluster which allows the sync process the correct
  # permissions. This is only needed if ACLs are enabled on the Consul cluster.
  aclSyncToken:
    secretName: null
    secretKey: null

  # nodeSelector labels for syncCatalog pod assignment, formatted as a multi-line string.
  # ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
  # Example:
  #   nodeSelector: |
  #     beta.kubernetes.io/arch: amd64
  nodeSelector: null
So how can I set the Consul cluster IP address for the sync process?
It looks like the sync service runs via the consul agent on the k8s host.
env:
  - name: HOST_IP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
command:
  - consul-k8s sync-catalog \
    -http-addr=${HOST_IP}:8500
That can't be configured directly, but Helm can configure the agent/client via client.join (YAML source):
If this is null (default), then the clients will attempt to automatically join the server cluster running within Kubernetes. This means that with server.enabled set to true, clients will automatically join that cluster. If server.enabled is not true, then a value must be specified so the clients can join a valid cluster.
This value is passed to the consul agent as the --retry-join option.
client:
  enabled: true
  join:
    - consul1
    - consul2
    - consul3
syncCatalog:
  enabled: true

Metricbeat kubernetes module can’t connect to kubelet

We have a setup, where Metricbeat is deployed as a DaemonSet on a Kubernetes cluster (specifically -- AWS EKS).
All seems to be functioning properly except the kubelet connection.
To clarify, the following module:
- module: kubernetes
  enabled: true
  metricsets:
    - state_pod
  period: 10s
  hosts: ["kube-state-metrics.system:8080"]
works properly (the events flow into logstash/elastic).
This module configuration, however, doesn't work in any variants of hosts value (localhost/kubernetes.default/whatever):
- module: kubernetes
  period: 10s
  metricsets:
    - pod
  hosts: ["localhost:10255"]
  enabled: true
  add_metadata: true
  in_cluster: true
NOTE: using the cluster IP instead of localhost (so that it goes to the control plane) also works (although it doesn't retrieve the needed information, of course).
The configuration above was taken directly from the Metricbeat documentation and immediately struck me as odd -- how does localhost get translated (from within the Metricbeat container) to the corresponding kubelet?
The error is, as one would expect, in light of the above:
error making http request: Get http://localhost:10255/stats/summary:
dial tcp [::1]:10255: connect: cannot assign requested address
which indicates some sort of connectivity issue.
However, when SSH-ing to any node Metricbeat is deployed on, http://localhost:10255/stats/summary provides the correct output:
{
  "node": {
    "nodeName": "...",
    "systemContainers": [
      {
        "name": "pods",
        "startTime": "2018-12-06T11:22:07Z",
        "cpu": {
          "time": "2018-12-23T06:54:06Z",
          ...
        },
        "memory": {
          "time": "2018-12-23T06:54:06Z",
          "availableBytes": 17882275840,
          ....
I must be missing something very obvious. Any suggestion would do.
NOTE: I cross-posted the same question on the Elasticsearch forums and got no response for a couple of days.
Inject the Pod's Node's IP via the valueFrom provider in the env: list:
env:
  - name: HOST_IP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
and then update the metricbeat config file to use the host's IP:
hosts: ["${HOST_IP}:10255"]
which Metricbeat will resolve via its environment-variable config injection.
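Put together, the relevant DaemonSet fragment would look roughly like this sketch (the container name and image tag are assumptions; only the env block matters here):

# DaemonSet container fragment exposing the node's IP to the Metricbeat pod
containers:
  - name: metricbeat                                  # assumed container name
    image: docker.elastic.co/beats/metricbeat:6.5.4   # assumed image tag
    env:
      - name: HOST_IP
        valueFrom:
          fieldRef:
            fieldPath: status.hostIP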

Kubernetes error building cluster, utility subnet can't be found

Why is it that when I try to update a new Kubernetes cluster it gives the following error:
$ kops update cluster --name k8s-web-dev
error building tasks: could not find utility subnet in zone: "us-east-1b"
I have not been able to deploy it into AWS yet; it only creates configs in S3.
Also, because I have private and public subnets, I am manually updating the kops config to point to the correct subnet IDs, e.g. the IDs below were added manually.
subnets:
  - cidr: 10.0.0.0/19
    id: subnet-3724bb40
    name: us-east-1b
    type: Private
    zone: us-east-1b
  - cidr: 10.0.64.0/19
    id: subnet-918a35c8
    name: us-east-1c
    type: Private
    zone: us-east-1c
  - cidr: 10.0.32.0/20
    id: subnet-4824bb3f
    name: utility-us-east-1b
    type: Public
    zone: us-east-1b
  - cidr: 10.0.96.0/20
    id: subnet-908a35c9
    name: utility-us-east-1c
    type: Public
    zone: us-east-1c
Also, interestingly enough, I made no change to my config, but when I run kops update once and then once more, I get two different results. How is that possible?
kops update cluster --name $n
error building tasks: could not find utility subnet in zone: "us-east-1c"
and then this
kops update cluster --name $n
error building tasks: could not find utility subnet in zone: "us-east-1b"
Using the --bastion parameter within kops command-line options assumes that a bastion instance group is already in place. To create a bastion instance group you can use the --role flag:
kops create instancegroup bastions --role Bastion --subnet $SUBNET
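For reference, that command generates an InstanceGroup manifest along these lines; the machine type and subnet are assumptions and should match a utility-* (public) subnet from your cluster spec:

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  name: bastions
  labels:
    kops.k8s.io/cluster: k8s-web-dev   # assumed to match the cluster name above
spec:
  role: Bastion
  machineType: t2.micro                # assumed instance size
  minSize: 1
  maxSize: 1
  subnets:
    - utility-us-east-1b               # must be an existing utility (public) subnet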
Check this link for more information.