How to set Inbound Rule Name via Cloudformation in AWS - aws-cloudformation

I'm trying to set the name of an ingress rule in my Security Group. I've tried two methods and looked through the documentation, but I can't find a way to do it. I've tried:
SecurityGroupIngress:
  - IpProtocol: icmp
    FromPort: 0
    ToPort: -1
    Name: Allow ICMP
    Description: Allow ICMP
    CidrIp: 0.0.0.0/0
And I've tried this:
SecurityGroupIngress:
  - IpProtocol: icmp
    FromPort: 0
    ToPort: -1
    Description: Allow ICMP
    CidrIp: 0.0.0.0/0
    Tags:
      - Key: Name
        Value: Allow ICMP
I've looked for examples, and I've looked through the documentation and I don't see a reference to this. Any ideas?

The Name that you see in the console is the Name tag of the resource. Currently, neither AWS::EC2::SecurityGroupIngress nor the Ingress objects embedded in AWS::EC2::SecurityGroup support tags in CloudFormation (tags on individual rules are a recent feature, added in July 2021, and CloudFormation doesn't support every new feature on release). If this is a crucial requirement, you can use a Lambda-backed custom CloudFormation resource: an AWS Lambda function that tags the rule for you.
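A minimal sketch of such a custom resource is below. This is only an illustration: TagIngressRuleFunction, MySecurityGroup, and the Custom::SecurityGroupRuleTag type are hypothetical names, and the Lambda behind ServiceToken is something you would write yourself (for example, looking up the rule with ec2:DescribeSecurityGroupRules and tagging it with ec2:CreateTags).
# Hypothetical sketch only; the Lambda referenced by ServiceToken must be provided separately.
TagIcmpRule:
  Type: Custom::SecurityGroupRuleTag
  Properties:
    ServiceToken: !GetAtt TagIngressRuleFunction.Arn   # your Lambda (placeholder name)
    GroupId: !Ref MySecurityGroup                      # placeholder security group logical ID
    RuleIpProtocol: icmp
    RuleCidrIp: 0.0.0.0/0
    TagKey: Name
    TagValue: Allow ICMP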

Related

Envoy (Service mesh) and the term Clusters

We need to learn Envoy well enough to create a service mesh. In the Envoy documentation they talk about "Clusters" without defining the term. Are they talking about Kubernetes clusters, or does this term have a specific meaning when configuring Envoy (for a cluster of servers)?
You can find the definition in the terminology documentation:
Cluster: A cluster is a group of logically similar upstream hosts that Envoy connects to. Envoy discovers the members of a cluster via service discovery. It optionally determines the health of cluster members via active health checking. The cluster member that Envoy routes a request to is determined by the load balancing policy.
Only the first sentence ("A cluster is a group of logically similar upstream hosts that Envoy connects to.") is needed to understand what a cluster is. It has nothing to do with Kubernetes; "cluster" is an Envoy term.
Let's say you have two hosts running the same service, and you want Envoy to connect to one of these hosts (load-balancing the traffic); then you would define a cluster with these two hosts:
static_resources:
  listeners:
    - address:
        socket_address:
          address: 0.0.0.0
          port_value: 8080
      filter_chains:
        - filters:
            - name: envoy.filters.network.http_connection_manager
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                codec_type: AUTO
                stat_prefix: ingress_http
                route_config:
                  name: local_route
                  virtual_hosts:
                    - name: backend
                      domains:
                        - "*"
                      routes:
                        - match:
                            prefix: "/"
                          route:
                            cluster: service
  clusters:
    - name: service
      connect_timeout: 15s
      type: LOGICAL_DNS
      lb_policy: ROUND_ROBIN
      load_assignment:
        cluster_name: service
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: 10.0.0.43
                      port_value: 80
              - endpoint:
                  address:
                    socket_address:
                      address: 10.0.0.44
                      port_value: 80
In this example, a request made by a client to Envoy on port 8080 will be forwarded to one of the cluster hosts (10.0.0.43:80 or 10.0.0.44:80).
You can find more documentation about clusters here: https://www.envoyproxy.io/docs/envoy/v1.21.1/intro/arch_overview/upstream/upstream.

K8s Network policy endPort can not be applied

I'm trying to apply egress port range for my k8s network policy like this:
egress:
  - to:
      - ipBlock:
          cidr: 10.0.0.0/24
    ports:
      - protocol: TCP
        port: 32000
        endPort: 32768
It applies fine, but when I describe the policy I only see that port 32000 is allowed.
Am I missing something, or have I made a mistake?
Thanks.
It seems you took this example from Targeting a range of Ports. Two things to check:
endPort works only with the NetworkPolicyEndPort feature gate enabled. Although the documentation states that this feature is enabled by default, can you please check whether it is actually turned on for you?
What is your CNI plugin, and does it support endPort in the NetworkPolicy spec?
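For reference, a complete manifest around the egress snippet above would look roughly like this. It is only a sketch: the policy name and the pod selector label are placeholders, and it assumes a cluster where the NetworkPolicyEndPort feature gate is on and the CNI plugin implements endPort.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: egress-port-range          # placeholder name
spec:
  podSelector:
    matchLabels:
      app: my-app                  # placeholder label; selects the pods the policy applies to
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.0/24
      ports:
        - protocol: TCP
          port: 32000
          endPort: 32768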

haproxy behavior when dns lookup returns multiple IP addresses for a server domain

Honestly, I've tested this and I think HAProxy picks the first IP address, though I'm not sure. I also thought there might be some configuration I'm not aware of that could balance the traffic over those IPs, so I decided to ask here. This is the backend block defined in my HAProxy config file:
backend lb_webapp_backend
    http-send-name-header Host
    server webapp webapp:80 check
and this is the DNS lookup output from the haproxy server shell:
/ $ nslookup webapp
Server: 127.0.0.11
Address: 127.0.0.11:53
Non-authoritative answer:
*** Can't find webapp: No answer
Non-authoritative answer:
Name: webapp
Address: 172.23.0.5
Name: webapp
Address: 172.23.0.3
Name: webapp
Address: 172.23.0.4
but the result is that all the traffic is going through one of these IP addresses.
Is there any solution to balance the traffic in a circumstance like this?
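One direction worth trying (a sketch only, under the assumption that you are running HAProxy 1.8 or newer and that 127.0.0.11 is Docker's embedded DNS, as in the nslookup output above) is to let HAProxy resolve the name at runtime with a resolvers section and expand the DNS answer into several servers with server-template:
# Hedged sketch, assuming HAProxy >= 1.8 and Docker's embedded DNS at 127.0.0.11
resolvers docker_dns
    nameserver dns1 127.0.0.11:53
    hold valid 10s

backend lb_webapp_backend
    balance roundrobin
    http-send-name-header Host
    # expands "webapp" into up to 3 servers taken from the DNS answer
    server-template webapp 3 webapp:80 check resolvers docker_dns init-addr none
With this, each A record returned for webapp becomes its own server entry and traffic is balanced across them; adjust the server count to the maximum number of replicas you expect.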

How Do I Attach an ASG to an ALB Target Group?

In AWS' Cloudformation, how do I attach an Autoscaling Group (ASG) to an Application Load Balancer Target Group?
There does not appear to be any way to do that directly in a CloudFormation template (CFT), though it is possible using the AWS CLI or API. The AWS::ElasticLoadBalancingV2::TargetGroup resource only offers these target types:
instance. Targets are specified by instance ID.
ip. Targets are specified by IP address.
lambda. The target group contains a single Lambda function.
That is because, apparently, one does not attach an ASG to a target group; instead, one attaches a target group or groups to an ASG.
Seems a little backwards to me, but I'm sure it has to do with the ASG needing to register/deregister its instances as it scales in and out.
See the documentation for the AWS::AutoScaling::AutoScalingGroup resource for details.
Example:
TargetGroup:
  Type: AWS::ElasticLoadBalancingV2::TargetGroup
  Properties:
    VpcId: !Ref VPC
    TargetType: instance
    Port: 80
    Protocol: HTTP
AutoScalingGroup:
  Type: AWS::AutoScaling::AutoScalingGroup
  Properties:
    AvailabilityZones:
      Fn::GetAZs: !Ref "AWS::Region"
    MaxSize: "3"
    MinSize: "1"
    TargetGroupArns:
      - !Ref TargetGroup

How to expose kubernetes service to public without hardcoding to minion IP?

I have a kubernetes cluster running with 2 minions.
Currently I make my service accessible in 2 steps:
Start replication controller & pod
Get the minion IPs (using kubectl get minions) and set them as the publicIPs for the Service.
What is the suggested practice for exposing a service to the public? My approach seems wrong because I hard-code the IPs of individual minions. It also seems to bypass the load-balancing capabilities of Kubernetes services, because clients would have to access services running on individual minions directly.
To set up the replication controller & pod I use:
id: frontend-controller
kind: ReplicationController
apiVersion: v1beta1
desiredState:
  replicas: 2
  replicaSelector:
    name: frontend-pod
  podTemplate:
    desiredState:
      manifest:
        version: v1beta1
        id: frontend-pod
        containers:
          - name: sinatra-docker-demo
            image: madisn/sinatra_docker_demo
            ports:
              - name: http-server
                containerPort: 4567
    labels:
      name: frontend-pod
To set up the service (after getting minion ip-s):
kind: Service
id: frontend-service
apiVersion: v1beta1
port: 8000
containerPort: http-server
selector:
  name: frontend-pod
labels:
  name: frontend
publicIPs: [10.245.1.3, 10.245.1.4]
As I mentioned in the comment above, the createExternalLoadBalancer is the appropriate abstraction that you are looking for, but unfortunately it isn't yet implemented for all cloud providers, and in particular for vagrant, which you are using locally.
One option would be to use the public IPs for all minions in your cluster for all of the services you want to be externalized. The traffic destined for the service will end up on one of the minions, where it will be intercepted by the kube-proxy process and redirected to a pod that matches the label selector for the service. This could result in an extra hop across the network (if you land on a node that doesn't have the pod running locally) but for applications that aren't extremely sensitive to network latency this will probably not be noticeable.
As Robert said in his reply this is something that is coming up, but unfortunately isn't available yet.
I am currently running a Kubernetes cluster on our datacenter network. I have 1 master and 3 minions all running on CentOS 7 virtuals (vcenter). The way I handled this was to create a dedicated "kube-proxy" server. I basically am just running the Kube-Proxy service (along with Flannel for networking) and then assigning "public" IP addresses to the network adapter attached to this server. When I say public I mean addresses on our local datacenter network. Then when I create a service that I would like to access outside of the cluster I just set the publicIPs value to one of the available IP addresses on the kube-proxy server. When someone or something attempts to connect to this service from outside the cluster it will hit the kube-proxy and then be redirected to the proper minion.
While this might seem like a work around, this is actually similar to what I would expect to be happening once they come up with a built in solution to this issue.
If you're running a cluster locally, a solution I used was to expose the service on your kubernetes nodes using the nodeport directive in your service definition and then round robin to every node in your cluster with HAproxy.
Here's what exposing the nodeport looks like:
apiVersion: v1
kind: Service
metadata:
  name: nginx-s
  labels:
    name: nginx-s
spec:
  type: NodePort
  ports:
    # must match the port your container is on in your replication controller
    - port: 80
      nodePort: 30000
  selector:
    name: nginx-s
Note: the value you specify must be within the configured range for node ports. (default: 30000-32767)
This exposes the service on the given nodeport on every node in your cluster. Then I set up a separate machine on the internal network running haproxy and a firewall that's reachable externally on the specified nodeport(s) you want to expose.
If you look at your nat table on one of your hosts, you can see what it's doing.
root@kube01:~# kubectl create -f nginx-s.yaml
You have exposed your service on an external port on all nodes in your
cluster. If you want to expose this service to the external internet, you may
need to set up firewall rules for the service port(s) (tcp:30000) to serve traffic.
See http://releases.k8s.io/HEAD/docs/user-guide/services-firewalls.md for more details.
services/nginx-s
root@kube01:~# iptables -L -t nat
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
KUBE-PORTALS-CONTAINER all -- anywhere anywhere /* handle ClusterIPs; NOTE: this must be before the NodePort rules */
DOCKER all -- anywhere anywhere ADDRTYPE match dst-type LOCAL
KUBE-NODEPORT-CONTAINER all -- anywhere anywhere ADDRTYPE match dst-type LOCAL /* handle service NodePorts; NOTE: this must be the last rule in the chain */
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
KUBE-PORTALS-HOST all -- anywhere anywhere /* handle ClusterIPs; NOTE: this must be before the NodePort rules */
DOCKER all -- anywhere !127.0.0.0/8 ADDRTYPE match dst-type LOCAL
KUBE-NODEPORT-HOST all -- anywhere anywhere ADDRTYPE match dst-type LOCAL /* handle service NodePorts; NOTE: this must be the last rule in the chain */
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
MASQUERADE all -- 172.17.0.0/16 anywhere
Chain DOCKER (2 references)
target prot opt source destination
RETURN all -- anywhere anywhere
Chain KUBE-NODEPORT-CONTAINER (1 references)
target prot opt source destination
REDIRECT tcp -- anywhere anywhere /* default/nginx-s: */ tcp dpt:30000 redir ports 42422
Chain KUBE-NODEPORT-HOST (1 references)
target prot opt source destination
DNAT tcp -- anywhere anywhere /* default/nginx-s: */ tcp dpt:30000 to:169.55.21.75:42422
Chain KUBE-PORTALS-CONTAINER (1 references)
target prot opt source destination
REDIRECT tcp -- anywhere 192.168.3.1 /* default/kubernetes: */ tcp dpt:https redir ports 51751
REDIRECT tcp -- anywhere 192.168.3.192 /* default/nginx-s: */ tcp dpt:http redir ports 42422
Chain KUBE-PORTALS-HOST (1 references)
target prot opt source destination
DNAT tcp -- anywhere 192.168.3.1 /* default/kubernetes: */ tcp dpt:https to:169.55.21.75:51751
DNAT tcp -- anywhere 192.168.3.192 /* default/nginx-s: */ tcp dpt:http to:169.55.21.75:42422
root@kube01:~#
Particularly this line
DNAT tcp -- anywhere anywhere /* default/nginx-s: */ tcp dpt:30000 to:169.55.21.75:42422
And finally, if you look at netstat, you can see kube-proxy is listening and waiting for that service on that port.
root@kube01:~# netstat -tupan | grep 42422
tcp6       0      0 :::42422            :::*             LISTEN      20748/kube-proxy
root@kube01:~#
Kube-proxy will listen on a port for each service, and do network address translation into your virtual subnet that your containers reside in. (I think?) I used flannel.
For a two-node cluster, that HAProxy configuration might look similar to this:
listen sampleservice 0.0.0.0:80
    mode http
    stats enable
    balance roundrobin
    option httpclose
    option forwardfor
    option httpchk HEAD /index.html HTTP/1.0
    server node1 10.120.216.196:30000 check
    server node2 10.155.236.122:30000 check
And your service is now reachable on port 80 via HAproxy. If any of your nodes go down, the containers will be moved to another node thanks to replication controllers and HAproxy will only route to your nodes that are alive.
I'm curious what methods others have used though, that's just what I came up with. I don't usually post on stack overflow, so apologies if I'm not following conventions or proper formatting.
This is for MrE. I did not have enough space in the comments area to post this answer so I had to create another answer. Hope this helps:
We have actually moved away from Kubernetes since posting this reply. If I remember correctly, though, all I really had to do was run the kube-proxy executable on a dedicated CentOS VM. Here is what I did:
First I removed Firewalld and put iptables in place. Kube-proxy relies on iptables to handle its NAT and redirections.
Second, you need to install flanneld so you can have a bridge adapter on the same network as the Docker services running on your minions.
Then I assigned multiple IP addresses to the local network adapter on that machine. These are the IP addresses you can use when setting up a service, and they are the addresses available OUTSIDE your cluster.
Once that is all taken care of, you can start the proxy service. It will connect to the master and grab an IP address for the flannel bridge network. Then it will sync up all the iptables rules and you should be set. Every time a new service is added, it will create the proxy rules and replicate them across all minions (and your proxy). As long as you specified an IP address that is available on your proxy server, that proxy server will forward all traffic for that IP address to the proper minion.
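A rough shell sketch of those steps on the proxy VM might look like the following; the interface name, the extra addresses, and the master URL are placeholders for your own environment, and the commands reflect the CentOS 7 / early-Kubernetes setup described above rather than anything current.
# Replace firewalld with plain iptables (kube-proxy manages iptables rules itself)
systemctl stop firewalld && systemctl disable firewalld
yum install -y iptables-services

# Install and start flannel so this VM joins the same overlay network as the minions
yum install -y flannel
systemctl enable flanneld && systemctl start flanneld

# Assign extra "public" (datacenter-reachable) addresses to the NIC; these are the
# addresses you later list under publicIPs in your service definitions (placeholders)
ip addr add 10.10.0.50/24 dev eth0
ip addr add 10.10.0.51/24 dev eth0

# Run kube-proxy against the master so it syncs the service rules onto this VM
kube-proxy --master=http://kube-master:8080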
Hope this is a little more clear. Remember though, I have not been part of the Kubernetes project for about 6 months now, so I am not sure what changes have been made since I left. They might even have a feature in place that handles this sort of thing. If not, hopefully this helps you get it taken care of.
You can use an Ingress resource to allow external connections from outside of a Kubernetes cluster to reach the cluster's services.
Assuming that you already have a Pod deployed, you now need a Service resource, e.g.:
apiVersion: v1
kind: Service
metadata:
  name: frontend-service
  labels:
    tier: frontend
spec:
  type: ClusterIP
  selector:
    name: frontend-pod
  ports:
    - name: http
      protocol: TCP
      # the port that will be exposed by this service
      port: 8000
      # port in a docker container; defaults to what "port" has set
      targetPort: 8000
And you need an Ingress resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: frontend-ingress
spec:
  rules:
    - host: foo.bar.com
      http:
        paths:
          - path: /
            backend:
              serviceName: frontend-service
              # the targetPort from service (the port inside a container)
              servicePort: 8000
In order to be able to use Ingress resources, you need some ingress controller deployed.
Now, providing that you know your Kubernetes master IP, you can access your application from outside of a Kubernetes cluster with:
curl http://<master_ip>:80/ -H 'Host: foo.bar.com'
If you use some DNS server, you can add this record: foo.bar.com IN A <master_ip> or add this line to your /etc/hosts file: <master_ip> foo.bar.com and now you can just run:
curl foo.bar.com
Notice that this way you will always access foo.bar.com using port 80. If you want to use some other port, I recommend using a Service of type NodePort, just for that one non-80 port. It will make that port reachable no matter which Kubernetes VM IP you use (any master or any minion IP is fine). Example of such a Service:
apiVersion: v1
kind: Service
metadata:
  name: frontend-service-ssh
  labels:
    tier: frontend
spec:
  type: NodePort
  selector:
    name: frontend-pod
  ports:
    - name: ssh
      targetPort: 22
      port: 22
      nodePort: 2222
      protocol: TCP
And if you have <master_ip> foo.bar.com in your /etc/hosts file, then you can access: foo.bar.com:2222