Kubernetes Ingress whitelist-source-range Allow all IPs - deployment

I am setting an IP whitelist in my Kubernetes ingress config via the annotation ingress.kubernetes.io/whitelist-source-range. But I have different whitelists to apply for different environments (dev, staging, production, etc.), so I have a placeholder variable in my deploy.yaml that I find/replace with the appropriate IP whitelist based on the environment.
At the moment, for dev, I am passing an empty string as the whitelist because dev should be able to receive traffic from all IPs, and that currently works as intended. But the nginx-ingress-controller logs complain about the following:
error reading Whitelist annotation in Ingress default/{service}: the
annotation does not contain a valid IP address or network: invalid CIDR
address:
How do I set a proper whitelist-source-range to allow all IPs but not create noisy error logs like the one above?

Yes, the ingress controller will log that error because of how it reads the whitelist annotation.
The annotation expects one or more IP ranges in CIDR notation, so even an empty value fails to parse and produces that read error. To get rid of the error and still allow all IPs, set the value to 0.0.0.0/0; that matches every address and fixes the issue.
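For reference, a minimal sketch of what that looks like in the manifest, assuming a hypothetical Ingress named my-service and the same annotation key used in the question (older clusters may use the extensions/v1beta1 API instead of networking.k8s.io/v1):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-service
  annotations:
    # 0.0.0.0/0 matches every IPv4 source address, so all clients are allowed
    # and the controller no longer logs an invalid-CIDR error.
    ingress.kubernetes.io/whitelist-source-range: "0.0.0.0/0"
spec:
  rules:
    - host: my-service.dev.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80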

Related

K8s, apparent IP of a pod that tries to connect to an external database

I am trying to access an external postgres from my pod. The problem is that in order to do this, I need to allow in the external database pg_hba.conf the "host address"/ "IP" of the pod. It is clear that I can temporarily use the address of the node, e.g. someNode.online-server.cloud.
The problem is that, of course, if the pod restarts, it might restart on another node. For the converse (inbound) problem, I could use a service/endpoint that would provide a stable anchor for all external traffic to go through... Is there a way to do something like this in my case? I am thinking that port forwarding on a host can work in both directions, but I am not sure what to do in K8s.
It's documented that the address field can be a CIDR.
Specifies the client machine address(es) that this record matches. This field can contain either a host name, an IP address range, or one of the special key words mentioned below.
Therefore, you can add the CIDR of your cluster's subnet, under the assumption that it is within a private network.
This might look something like this:
# TYPE DATABASE USER ADDRESS METHOD
host app app 192.168.0.0/16 scram-sha-256
If this goes through the public web, all pods are likely to go through the same gateway and therefore get the same IP assigned, which you can use.
Alternatively, you can also use a hostname.
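If you go the hostname route instead, the entry could look like the sketch below, where pods-nat.example.com is a hypothetical name that resolves to the address the pods appear to come from (PostgreSQL must be able to resolve it both reverse and forward for the match to succeed):
# TYPE DATABASE USER ADDRESS METHOD
host app app pods-nat.example.com scram-sha-256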

Kubernetes internal wildcard DNS record

I'd like to create a wildcard DNS record that maps to a virtual IP inside my k8s cluster. This is because I want requests from my pods to any subdomain of a given name to map to a specific set of endpoints. I.e. requests from:
something.my-service.my-namespace.svc.cluster.local
something-else.my-service.my-namespace.svc.cluster.local
any-old-thing-my-pod-came-up-with.my-service.my-namespace.svc.cluster.local
to all resolve to the same virtual IP, and therefore reach the same set of endpoints (i.e. I would like these requests to be routed to endpoints the same way a Service routes them).
I've seen some other solutions that involve creating and modifying the cluster DNS service (i.e. kube-dns or CoreDNS) config. This doesn't work for me; the main reason I'm asking this question is to achieve declarative config.
What I've tried:
Service .metadata.name: '*.my-service'. Failed because '*.my-service' is not a valid service name.
Service .spec.ports.name: '*'. Not a valid port name.
Not an option:
Ingress. I cannot expose these services to the wider internet.
Pod hostname/subdomain. AFAIK DNS entries created by pod hostname/subdomain will not have a virtual IP that may resolve to any of a number of pods. (Quoting from https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-hostname-and-subdomain-fields) "DNS serves an A record at that name, pointing to the Pod’s IP."
Wildcard DNS is not supported for Kubernetes Services. What you can do is front the service with an ingress controller; with Ingress you can use wildcard DNS. Refer to the PR below:
https://github.com/kubernetes/kubernetes/pull/29204
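A minimal sketch of that approach, assuming a hypothetical domain my-service.example.com; wildcard hosts in Ingress rules require Kubernetes 1.18 or later, and the ingress controller can typically be exposed on an internal load balancer if it must not reach the wider internet:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wildcard-my-service
spec:
  rules:
    # matches something.my-service.example.com, something-else.my-service.example.com, ...
    - host: "*.my-service.example.com"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80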

DNS problem on AWS EKS when running in private subnets

I have an EKS cluster setup in a VPC. The worker nodes are launched in private subnets. I can successfully deploy pods and services.
However, I'm not able to perform DNS resolution from within the pods. (It works fine on the worker nodes, outside the container.)
Troubleshooting using https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/ results in the following from nslookup (timeout after a minute or so):
Server: 172.20.0.10
Address 1: 172.20.0.10
nslookup: can't resolve 'kubernetes.default'
When I launch the cluster in an all-public VPC, I don't have this problem. Am I missing any necessary steps for DNS resolution from within a private subnet?
Many thanks,
Daniel
I feel like I have to give this a proper answer, because coming upon this question was the answer to 10 straight hours of debugging for me. As #Daniel said in his comment, the issue I found was with my ACL blocking outbound traffic on UDP port 53, which Kubernetes uses to resolve DNS records.
The process was especially confusing for me because one of my pods actually worked the entire time, since (I think?) it happened to be in the same zone as the Kubernetes DNS resolver.
To elaborate on the comment from #Daniel, you need:
1. an ingress rule for UDP port 53
2. an ingress rule for UDP on ephemeral ports (e.g. 1025–65535)
I hadn't added (2) and was seeing CoreDNS receiving requests and trying to respond, but the response wasn't getting back to the requester.
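Because network ACLs are stateless, rule (2) is what lets the DNS responses back in. A minimal CloudFormation sketch of the two inbound entries, assuming a hypothetical NACL resource named PrivateSubnetAcl (the rule numbers and the 10.0.0.0/16 CIDR are placeholders):
Resources:
  DnsUdpInbound:
    Type: AWS::EC2::NetworkAclEntry
    Properties:
      NetworkAclId: !Ref PrivateSubnetAcl   # assumed to be defined elsewhere in the template
      RuleNumber: 100
      Protocol: 17                          # UDP
      RuleAction: allow
      CidrBlock: 10.0.0.0/16                # traffic originating inside the VPC
      PortRange:
        From: 53
        To: 53
  EphemeralUdpInbound:
    Type: AWS::EC2::NetworkAclEntry
    Properties:
      NetworkAclId: !Ref PrivateSubnetAcl
      RuleNumber: 110
      Protocol: 17
      RuleAction: allow
      CidrBlock: 10.0.0.0/16
      PortRange:
        From: 1025                          # return traffic for queries the nodes send out
        To: 65535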
A tip for others dealing with these kinds of issues: turn on CoreDNS logging by adding the log plugin to the ConfigMap, which I was able to do with kubectl edit configmap -n kube-system coredns. See the CoreDNS docs: https://github.com/coredns/coredns/blob/master/README.md#examples. This can help you figure out whether the issue is CoreDNS receiving the queries or sending the responses back.
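For reference, the edited ConfigMap might look roughly like the sketch below; the default Corefile on your cluster may differ slightly, and the only added line is log. Once applied, you can watch queries with kubectl logs -n kube-system -l k8s-app=kube-dns.
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        log            # print every query and its response code to stdout
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }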
I ran into this as well. I have multiple node groups, and each one was created from a CloudFormation template. The CloudFormation template created a security group for each node group that allowed the nodes in that group to communicate with each other.
The DNS error resulted from Pods running in separate node groups from the CoreDNS Pods, so the Pods were unable to reach CoreDNS (network communications were only permitted within node groups). I will make a new CloudFormation template for the node security group so that all the node groups in my cluster can share the same security group.
I resolved the issue for now by allowing inbound UDP traffic on port 53 for each of my node group security groups.
I had been struggling with this issue as well for a couple of hours, I think; I lost track of time.
Since I am using the default VPC but with the worker nodes inside a private subnet, it wasn't working.
I went through amazon-vpc-cni-k8s and found the solution:
we have to set the environment variable AWS_VPC_K8S_CNI_EXTERNALSNAT=true on the aws-node DaemonSet.
You can either apply the updated YAML or just fix it through the dashboard. However, for it to work you have to restart the worker node instances so the IP route tables are refreshed.
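A minimal sketch of the relevant fragment of the aws-node DaemonSet in kube-system; the env entry is the only change being described, and on a live cluster kubectl set env daemonset/aws-node -n kube-system AWS_VPC_K8S_CNI_EXTERNALSNAT=true does the same thing:
spec:
  template:
    spec:
      containers:
        - name: aws-node
          env:
            # tell the CNI that SNAT is handled externally (e.g. by a NAT gateway),
            # so pod traffic is not rewritten to the node's primary IP
            - name: AWS_VPC_K8S_CNI_EXTERNALSNAT
              value: "true"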
The issue link is here.
Thanks.
Re: AWS EKS Kube Cluster and Route53 internal/private Route53 queries from pods
Just wanted to post a note on what we needed to do to resolve our issues. Noting that YMMV and everyone has different environments and resolutions, etc.
Disclaimer:
We're using the community Terraform EKS module to deploy/manage the VPCs and the EKS clusters. We didn't need to modify any security groups. We are working with multiple clusters, regions, and VPCs.
ref:
Terraform EKS module
CoreDNS Changes:
We have a DNS relay for private internal DNS, so we needed to modify the coredns ConfigMap and add in the DNS relay IP address:
...
ec2.internal:53 {
    errors
    cache 30
    forward . 10.1.1.245
}
foo.dev.com:53 {
    errors
    cache 30
    forward . 10.1.1.245
}
foo.stage.com:53 {
    errors
    cache 30
    forward . 10.1.1.245
}
...
VPC DHCP option sets:
Update the option set with the IP of the above relay server if applicable. This requires creating a new option set, as existing option sets cannot be modified.
Our DHCP options set looks like this:
["AmazonProvidedDNS", "10.1.1.245", "169.254.169.253"]
ref: AWS DHCP Option Sets
Route-53 Updates:
Associate every Route 53 zone with the VPC ID that needs to use it (the VPC where our kube cluster resides and where the pods will make queries from).
There is also a Terraform resource for that:
https://www.terraform.io/docs/providers/aws/r/route53_zone_association.html
We ran into a similar issue where DNS resolution timed out on some of the pods, but re-creating the pod a couple of times resolved the problem. Also, it was not every pod on a given node showing issues, only some pods.
It turned out to be due to a bug in version 1.5.4 of Amazon VPC CNI, more details here -- https://github.com/aws/amazon-vpc-cni-k8s/issues/641.
Quick solution is to revert to the recommended version 1.5.3 - https://docs.aws.amazon.com/eks/latest/userguide/update-cluster.html
As many others, I've been struggling with this bug a few hours.
In my case the issue was this bug, https://github.com/awslabs/amazon-eks-ami/issues/636, which basically sets up an incorrect DNS IP when you specify the cluster endpoint and certificate but not the DNS cluster IP.
To confirm, check:
1. That you have connectivity (NACL and security groups) allowing DNS on TCP and UDP. For me, the easiest way was to SSH into a node in the cluster and see if names resolve there (nslookup). If they don't resolve, the cause is most likely either the NACL or a security group, but also check that the DNS nameserver configured on the node is correct.
2. If you can get name resolution on the node but not inside the pod, check that the nameserver in the pod's /etc/resolv.conf points to an IP in your service network (if you see 172.20.0.10, your service network should be 172.20.0.0/24 or so).
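For comparison, /etc/resolv.conf inside a pod on such a cluster typically looks something like this sketch (the exact search domains depend on the pod's namespace and the VPC's DNS suffix):
nameserver 172.20.0.10
search default.svc.cluster.local svc.cluster.local cluster.local ec2.internal
options ndots:5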

Kubernetes: reserving subrange of service-cluster-ip-range for manual allocation

When creating a service, I can either specify a static IP address from the cluster IP range or not specify any IP address, in which case an address will be dynamically assigned.
But when specifying a static IP address, how can I make sure that it will not conflict with an existing dynamically assigned IP address? I could, for example, programmatically query whether such an IP address is already in use. What I would prefer, though, is to specify an IP range that is cluster-wide reserved for manual allocation. For example:
Service cluster IP range: 10.20.0.0/16
Service cluster IP manual range: 10.20.5.0/24
Now I can manage IP addresses in the range 10.20.5.0-10.20.5.255 myself, and Kubernetes can use the remaining pool for dynamic allocation, sort of how a DHCP pool plus a static range usually works on home routers.
Is this scenario possible in kubernetes?
The service IP you manually select has to be part of the configured range, or you'll receive an invalid (422) response from Kubernetes. The Kubernetes documentation has a "choosing your own IP address" section for Services. If you have admin rights on the cluster, the easiest option is to run kubectl get services --all-namespaces, which will show every service provisioned in your cluster with its corresponding CLUSTER-IP.
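A minimal sketch of picking a static address, assuming a hypothetical Service named my-service and the 10.20.5.0/24 sub-range from the question; the address still has to fall inside the apiserver's --service-cluster-ip-range, and Kubernetes itself does not know about your manual sub-range, so avoiding conflicts remains up to you:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  clusterIP: 10.20.5.10   # manually chosen from the range you manage yourself
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080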

Pass IP of kube-dns as env to container

Is there a way to pass the kube-dns server IP to the container so that services inside the container can resolve the names properly?
I am trying to run nginx and it needs a resolver directive to be specified to resolve names against a DNS server.
I do not want to use public DNS servers; only the one provided by kube-dns.
Also, I need a dynamic way to pass the IP as the DNS server IP can change across various cloud platforms or bare-metal configurations. So, I cannot use a hardcoded 10.0.0.10 IP.
Alright, it seems quite simple.
A few points I had missed:
kube-dns runs as a Kubernetes Service in the kube-system namespace.
The DNS name for the service is kube-dns.kube-system.svc.cluster.local.
We can pass this name to the container using env.
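A minimal sketch of that, assuming a hypothetical nginx Deployment's container spec; DNS_SERVER is an arbitrary variable name that an entrypoint script could substitute into the nginx resolver directive:
containers:
  - name: nginx
    image: nginx:1.25
    env:
      - name: DNS_SERVER
        # the cluster DNS Service's stable name, resolvable via the pod's /etc/resolv.conf
        value: kube-dns.kube-system.svc.cluster.local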
EDIT:
It seems I was looking in the wrong place. The container does indeed use the local resolver (resolv.conf). The problem is that I hit a 'feature' in NGINX which caches lookups for 300 seconds and causes the name resolution failures, while I had been investigating k8s instead.