How to point my domain to my EKS cluster? - kubernetes

I have followed the AWS getting started guide to provision an EKS cluster (3 public subnets and 3 private subnets). After creating it, I get the following API server endpoint https://XXXXXXXXXXXXXXXXXXXX.gr7.us-east-2.eks.amazonaws.com (replaced the URL with X's for privacy reasons).
Accessing the URL in the browser I get the expected output from the cluster endpoint.
Question: How do I point my registered domain in Route 53 to my cluster endpoint?
I can't use a CNAME record because my domain is a root (apex) domain, so I get an apex domain error.
I don't have access to a static IP, and I don't believe my EKS cluster has a public IP address I can use directly. That would mean I can't use an A record (since I'd need an IP address).
Can I please get help/instructions as to how I can point my domain straight to my cluster?
Don't try to assign a pretty name to the API endpoint. Your cluster endpoint is the address that's used to talk to the control plane; when you configure your kubectl tool, the API endpoint is what kubectl talks to.
Once you've got an application running on your EKS cluster and have a load balancer, Ingress, or something else handling incoming connections, that's when you worry about creating pretty names.
And yes, if you're dealing with AWS load balancers, you only get a DNS name rather than a stable IP, so you can't use a plain A record for the apex of the domain. The exception is when you're hosting DNS in Route 53, in which case you can use "alias" records to point the apex of a domain at a load balancer.
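For example, once you do have an ALB or NLB in front of your workloads, the apex alias record can be created with a short boto3 script. This is only a rough sketch: the hosted zone IDs and the load balancer DNS name below are placeholders you'd look up yourself.

```python
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="ZXXXXXXXXXXXXX",  # placeholder: your own Route 53 hosted zone ID
    ChangeBatch={
        "Comment": "Point the zone apex at the load balancer",
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "example.com.",  # the apex of your domain
                    "Type": "A",
                    "AliasTarget": {
                        # The load balancer's own canonical hosted zone ID
                        # (a per-region value published by AWS), NOT your zone's ID.
                        "HostedZoneId": "ZYYYYYYYYYYYYY",
                        "DNSName": "my-ingress-123456789.us-east-2.elb.amazonaws.com.",
                        "EvaluateTargetHealth": False,
                    },
                },
            }
        ],
    },
)
```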
Kubernetes is a massively complex thing to try to understand and get running. Given the type of question you're asking, it sounds like you don't have the full picture yet. I recommend (1) joining the Kubernetes Slack, which will be a much faster way to get help than SO, and (2) taking in Jeff Geerling's excellent Kubernetes 101 course on YouTube.

Related

Restrict IP-range in GKE cluster when using VPN?

We're integrating with a new partner that requires us to use VPN when communicating with them (over HTTPS). We're running all of our services in a (non-private) Google Kubernetes Engine (GKE) cluster and it's only a single pod that needs to communicate with the partner's API.
The problem we face is that our partner's VPN provider won't allow us to use the private IP-range provided by GKE, 10.244.0.0/14, because the subnet is too large.
Preferably, we don't want to deploy something outside our GKE cluster, like a Compute Engine instance, that is somehow used to proxy our traffic (we will of course do it if this is the only/best way to proceed). We're hoping that, perhaps, it'll be possible to create a new node pool in the same cluster with a different (smaller) subnet, but so far we haven't found a way to do this. We've also looked briefly at CloudVPN, but if we understand it correctly, it only works with private GKE clusters.
Question:
What's the recommended way to obtain a smaller subnet/IP-range for a pod in an existing (public) GKE cluster to allow it to communicate with a third-party API over VPN?
The problem I see is that you would have to maintain the VPN connection within your pod; that's possible, but it looks like an antipattern.
I would recommend using Cloud VPN in a separate GCP project (for cost separation and security) to establish the connection with a specific, limited VPC, and then route that traffic to the pod, which can sit in a specific IP range as you mentioned.
Take a look at the docs on how to create the VPN:
https://cloud.google.com/network-connectivity/docs/vpn/concepts/overview
Route traffic between the VPCs with VPC peering:
https://cloud.google.com/vpc/docs/vpc-peering
Create the node pool with its own IP range: https://cloud.google.com/sdk/gcloud/reference/container/node-pools/create
Then assign your deployment to that node pool with a nodeSelector (see the sketch below):
https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector
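As a rough sketch of that last step, assuming a hypothetical node pool named vpn-pool and relying on the cloud.google.com/gke-nodepool label that GKE puts on every node, the deployment could be pinned with the Kubernetes Python client like this (names and image are made up):

```python
from kubernetes import client, config

config.load_kube_config()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="partner-client"),  # hypothetical name
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "partner-client"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "partner-client"}),
            spec=client.V1PodSpec(
                # GKE labels every node with its pool name, so a nodeSelector
                # pins the pod to the pool created with the smaller range.
                node_selector={"cloud.google.com/gke-nodepool": "vpn-pool"},
                containers=[
                    client.V1Container(
                        name="app",
                        image="gcr.io/my-project/partner-client:latest",  # placeholder
                    )
                ],
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```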

kops kubernetes cluster with multiple DNS

There are two parts to this.
I am using kops v1.17.0 to stand up a Kubernetes cluster on EC2 instances, following these docs: https://kubernetes.io/docs/setup/production-environment/tools/kops/
One of the points goes as follows:
kops has a strong opinion on the cluster name: it should be a valid
DNS name.
This got me confused. Can my cluster only serve requests for one DNS name and its subdomains?
I tried this with the domain example.com: I created a hosted zone for it and created a cluster named example.com.k8s.local.
I pointed the domain at my cluster's load balancer, and I can access example.com. All good so far.
Now I want one of the services in my cluster to be served on abc.com. I created another hosted zone and a new record set within it that points to the same load balancer. I expected to visit abc.com and see this service, but all I see is an nginx 404 Not Found.
Is this happening because of the first point I mentioned, or is it a totally separate issue? If it is because of the first point, is there a way around it, or is one cluster always tied to one domain in the kops world?
As far as the first part is concerned: yes, I can serve multiple domains from the same Kubernetes cluster with this setup. Up to a certain version there was a hard requirement that the domain name match the cluster name; that is no longer the case.
There are a couple of things you need to consider. While issuing a certificate from ACM, make sure all your domains are listed, for example:
example.com
*.example.com
bar.com
*.bar.com
Make sure that all of the domains are validated and are not in a pending or any other non-issued state.
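If you're scripting the certificate request, something like this boto3 sketch covers the "list every domain" point; the domain names are the examples above, and the region is an assumption that should match wherever your load balancer lives:

```python
import boto3

# Request the certificate in the region of the load balancer that will use it.
acm = boto3.client("acm", region_name="us-east-1")

response = acm.request_certificate(
    DomainName="example.com",
    ValidationMethod="DNS",
    SubjectAlternativeNames=[
        "*.example.com",
        "bar.com",
        "*.bar.com",
    ],
)
print("Certificate ARN:", response["CertificateArn"])
```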
I think the reason for my second issue was that one of the domains in my ACM-generated certificate was in an invalid state and thus stuck pending.

Access restrictions when using Gcloud vpn with Kubernetes

This is my first question on Stack Overflow:
We are using Gcloud Kubernetes.
A customer specifically requested a VPN tunnel to scrape a single service in our cluster (I know an Ingress would be better suited for this).
Since the VPN is IP-based and Kubernetes changes those IPs, I can only configure the VPN for the whole service IP range.
I'm worried that the customer will get full access to all services if I do so.
I have been searching for days on how to treat incoming VPN traffic, but haven't found anything.
How can I restrict the access? Or is it already restricted, and I need network policies to open it up?
Incoming VPN traffic can either be terminated at the service itself, or at the ingress - as far as I see it. Termination at the ingress would probably be better though.
I hope this is not too confusing. Thank you so much in advance.
As you mentioned, an external load balancer (or Ingress) would be ideal here, but if you must use GCP Cloud VPN then you can restrict access into your GKE cluster (and your GCP VPC in general) by using GCP firewall rules together with GKE internal HTTP(S) or TCP load balancers.
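For the internal load balancer part, a minimal sketch with the Kubernetes Python client might look like this. The service name, selector, and ports are made up, and the exact internal-LB annotation depends on your GKE version, so double-check it:

```python
from kubernetes import client, config

config.load_kube_config()

service = client.V1Service(
    metadata=client.V1ObjectMeta(
        name="partner-scrape",  # hypothetical service name
        annotations={
            # Ask GKE for an internal (VPC-only) TCP load balancer.
            # Newer GKE versions use networking.gke.io/load-balancer-type instead.
            "cloud.google.com/load-balancer-type": "Internal",
        },
    ),
    spec=client.V1ServiceSpec(
        type="LoadBalancer",
        selector={"app": "partner-scrape"},  # hypothetical pod label
        ports=[client.V1ServicePort(port=443, target_port=8443)],
    ),
)

client.CoreV1Api().create_namespaced_service(namespace="default", body=service)
```

The VPN traffic then only ever reaches the internal LB's address and port, and your firewall rules can limit which source ranges may reach it.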
As an example of adding such firewall rules to dedicated networks (say, project-a-network and project-b-network): go to Networking -> VPC networks, click the network, then click "Add firewall rule". The first rule allows SSH traffic from the public internet so that you can SSH into the instances; the second rule allows ICMP traffic (ping uses the ICMP protocol) between the two networks.

Kubernetes cluster outgoing traffic IP

I have a Kubernetes cluster on Google Kubernetes Engine. I want to assign a static IP for all outgoing traffic of a cluster.
I already have reserved external IPs but I can't assign them to a cluster with the GCP console.
I found a solution to do it with the CLI:
Static outgoing IP in Kubernetes
but it targets the VM, and I would need to set it each time I deploy, so it's not targeting the cluster.
Can anybody provide any pointers? Thanks.
GKE currently doesn't have an option to create the cluster with all of your nodes using a reserved public IP; the advanced networking options at cluster creation don't cover this.
You will have to use the gcloud API that you mentioned, which should be easy to put in a script.
Or you can also use the console UI by editing the instance(s) and changing the external address under "Network interfaces".
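If you want to script that per-node IP swap instead of clicking through the console, a rough sketch with the google-cloud-compute client could look like the following. The project, zone, instance name, and reserved address are placeholders, and you should verify the exact field names against the client version you use:

```python
from google.cloud import compute_v1

instances = compute_v1.InstancesClient()

# Remove the ephemeral external IP from the node's primary interface.
instances.delete_access_config(
    project="my-project",            # placeholder
    zone="us-central1-a",            # placeholder
    instance="gke-node-abc1",        # placeholder node name
    access_config="External NAT",    # default access config name
    network_interface="nic0",
)

# Re-attach an access config that uses the reserved static address.
instances.add_access_config(
    project="my-project",
    zone="us-central1-a",
    instance="gke-node-abc1",
    network_interface="nic0",
    access_config_resource=compute_v1.AccessConfig(
        name="External NAT",
        type_="ONE_TO_ONE_NAT",
        nat_i_p="203.0.113.10",      # your reserved static IP (placeholder)
    ),
)
```

Keep in mind this still has the drawback described above: nodes that are recreated by the managed instance group will come back with ephemeral IPs until the script runs again.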
I agree with the previous answer that you can't do something like this directly on the cluster, but you can use another service to do what you are looking for: a NAT gateway that uses a fixed public IP.
For more resilience, you can even deploy the gateways in multiple zones for redundancy, and your cluster's outgoing traffic will always go through the gateways.
I won't explain how it works here, because Google already provides a tutorial for exactly what you want to do: https://cloud.google.com/solutions/using-a-nat-gateway-with-kubernetes-engine
Enjoy.

Deterministic connection to cloud-internal IP of K8S service or its underlying endpoint?

I have a Kubernetes cluster (1.3.2) in GKE and I'd like to connect VMs and services from my Google Cloud project, which shares the same network as the cluster.
Is there a way for a VM that's internal to the subnet but not internal to the cluster itself to connect to the service without hitting the external IP?
I know there's a ton of things you can do to unambiguously determine the IP and port of services, such as the ENVs and DNS...but the clusterIP is not reachable outside of the cluster (obviously).
Is there something I'm missing? An important component of this is that the service is meant to be "public" to the project, such that I don't know which VMs in the project will want to connect to it (this could rule out loadBalancerSourceRanges). I understand that the endpoint the service actually wraps is an internal IP I can hit, but the only good way to get to that IP is through the Kube API or kubectl, neither of which is a prod-ideal way of hitting my service.
Check out my more thorough answer here, but the most common solution to this is to create bastion routes in your GCP project.
In the simplest form, you can create a single GCE Route to direct all traffic with a destination IP in your cluster's service IP range to land on one of your GKE nodes. If that SPOF scares you, you can create several routes pointing to different nodes, and traffic will round-robin between them.
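For illustration, creating one such bastion route with the google-cloud-compute client might look roughly like this; the project, network, node name, and service CIDR are placeholders rather than values from the question:

```python
from google.cloud import compute_v1

# All names and ranges below are placeholders.
route = compute_v1.Route(
    name="k8s-service-range-bastion",
    network="projects/my-project/global/networks/default",
    dest_range="10.99.240.0/20",  # your cluster's service IP range
    next_hop_instance="projects/my-project/zones/us-central1-a/instances/gke-node-abc1",
    priority=1000,
)

compute_v1.RoutesClient().insert(project="my-project", route_resource=route)
```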
If that management overhead isn't something you want to do going forward, you could write a simple controller in your GKE cluster to watch the Nodes API endpoint, and make sure that you have a live bastion route to at least N nodes at any given time.
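A very stripped-down version of that watcher, using the official Kubernetes Python client, could start out like this; the actual route reconciliation is left as a stub, since it depends on how you create the routes:

```python
from kubernetes import client, config, watch

config.load_kube_config()
v1 = client.CoreV1Api()


def reconcile_routes():
    # Placeholder: ensure a bastion route exists for at least N Ready nodes.
    nodes = v1.list_node().items
    ready = [
        n.metadata.name
        for n in nodes
        if any(c.type == "Ready" and c.status == "True" for c in n.status.conditions or [])
    ]
    print("Ready nodes that could serve as next hops:", ready)
    # ...create or delete GCE routes here so enough of them point at these nodes...


# React to node churn instead of polling on a timer.
for event in watch.Watch().stream(v1.list_node, timeout_seconds=300):
    print("Node event:", event["type"], event["object"].metadata.name)
    reconcile_routes()
```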
GCP internal load balancing was just released as alpha, so in the future, kube-proxy on GCP could be implemented using that, which would eliminate the need for bastion routes to handle internal services.