I've got a Kubernetes cluster with nginx ingress set up for public endpoints. That works great, but I have one service that I don't want to expose to the public, though I do want to expose it to people who have VPC access via VPN. The people who will need to access this route will not have kubectl set up, so they can't use port-forward to send it to localhost.
What's the best way to set up ingress for a service that will be restricted to only people on the VPN?
Edit: thanks for the responses. As a few people guessed I'm running an EKS cluster in AWS.
It depends a lot on your Ingress Controller and cloud host, but roughly speaking you would probably set up a second copy of your controller using an internal load balancer Service rather than a public LB, and then restrict that Service and/or Ingress to allow traffic only from the VPN's IPs.
Since you're talking about a "VPC", and assuming your cluster is in AWS, you probably need to do what #coderanger said.
Deploy a new ingress controller with "LoadBalancer" as the service type and add the annotation service.beta.kubernetes.io/aws-load-balancer-internal: "true".
Check here what are the possible annotations that you can add to a Load Balancer in AWS: https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/#load-balancers
You can also, for example, create a security group and attach it to the load balancer with service.beta.kubernetes.io/aws-load-balancer-security-groups.
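As a minimal sketch (the controller name, namespace, security group ID, and VPN CIDR below are all placeholders), the Service for the second, internal controller could look like this:

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-internal
  namespace: ingress-internal
  annotations:
    # Provision an internal AWS load balancer instead of a public one
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    # Optionally attach a security group that only allows your VPN range
    service.beta.kubernetes.io/aws-load-balancer-security-groups: sg-0123456789abcdef0
spec:
  type: LoadBalancer
  selector:
    app: nginx-ingress-internal
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
  # Alternatively, restrict sources in Kubernetes itself:
  loadBalancerSourceRanges:
    - 10.8.0.0/16   # hypothetical VPN client CIDR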
Related
I used to work with an Openshift/OKD cluster deployed in AWS, and there it was possible to connect the cluster to a domain name from Route53. Then, as soon as I deployed an ingress with host mappings (where the hosts defined in the ingress were subdomains of the base domain), all the necessary LB rules (Routes in Openshift) and the subdomain itself were created by Openshift and were directly available. For example: Openshift is connected to the domain "somedomain.com", which is registered in Route53. In the ingress I have a host mapping like:
hosts:
  - host: sub1.somedomain.com
    paths:
      - path: /
After deployment I can reach sub1.somedomain.com. Is this kind of functionality available in GKE?
So far I have seen only mapping to static IP.
Also, I read here https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-http2 that if I need to connect a service to an ingress, the service has to be of type NodePort. Is it really so? In Openshift that was not required; any normal ClusterIP service could be connected to an ingress.
Thanks in advance!
I think you should consider other Ingress Controllers for your use case.
I'm not an expert on GKE, but from what I can see in Best practices for enterprise multi-tenancy (quoted below), you additionally need to consider how to route multiple Ingress hostnames through a wildcard subdomain, as OpenShift does.
Set up HTTP(S) Load Balancing with Ingress:
You can create and configure an HTTP(S) load balancer by creating a Kubernetes Ingress resource, which defines how traffic reaches your Services and how the traffic is routed to your tenant's application. By registering Services with the Ingress resource, the Services' naming convention becomes consistent, showing a single ingress, such as tenanta.example.com and tenantb.example.com.
The routing feature basically depends on the Ingress Controller. From what I've found, the default GKE Ingress Controller just creates a Google Cloud HTTP(S) Load Balancer, and it does not handle multi-tenancy by default the way OpenShift does.
In contrast, in OpenShift the Ingress Controller is implemented using HAProxy with a dynamic configuration feature, roughly as follows:
LB --tenanta.example.com--> HAProxy (directly forwards the tenanta.example.com traffic to the target pod IPs) ---> Target Pods
How a service is exposed depends on the K8S implementation of each cloud provider.
If the ingress controller is a component running inside your cluster, a ClusterIP is enough to make your service reachable, since the traffic comes from inside the cluster itself.
If the ingress definition configures an external element (in the case of GKE, a load balancer), that element isn't part of the cluster and can't reach a ClusterIP (because it is only accessible internally). A NodePort is required in this case.
So, in your case, either you expose your service as a NodePort, or you configure GKE with another ingress controller, installed locally in the cluster, instead of the default one.
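For illustration, here is a minimal sketch of the default GKE pattern, assuming a hypothetical app called web; the Service is NodePort so the external HTTP(S) load balancer can reach it:

apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort            # required by the default GKE ingress controller
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  rules:
    - host: sub1.somedomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80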
So far GKE does not provide the possibility to dynamically create subdomains. The wished-for situation would be for a GKE cluster to be tied to a DNS zone managed in GCP, with a mimic of OpenShift Routes driven by, for example, ingress annotations. But the reality right now is that you have to create the subdomain or domain yourself, as well as the IP address which you connect the domain to. That particular GCP IP address can then be connected to an ingress (by name) using annotations, or used in a LoadBalancer Service.
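For example, assuming you reserved a global static IP in GCP and named it web-ip (a made-up name), connecting it to the ingress is just an annotation:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
  annotations:
    # "web-ip" is the name of the reserved global static IP in GCP
    kubernetes.io/ingress.global-static-ip-name: web-ip
spec:
  defaultBackend:
    service:
      name: web
      port:
        number: 80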
I am deploying the kubernetes nginx-ingress controller on AWS. Is there any way to prevent the automatic creation of a Network Load Balancer and instead assign an already existing load balancer in the config?
If not, is there any way to provide a custom name for the AWS NLB from within the nginx ingress configuration?
No, what you're asking for is not supported. There's no way to configure nginx ingress controller to create NLBs with specific names or use existing NLBs.
You can, however, do this manually if you set the nginx ingress controller's serviceType to NodePort and then register the targets into the NLB yourself (via the console, CLI, etc.).
Note that it's not ideal to have things configured outside of Kubernetes in this way because there is a tendency to forget that ingress changes aren't synced to the NLB.
Make sure to check the security groups of your nodes to allow traffic from the load balancer to the exposed ports on your nodes.
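As a rough sketch of the manual approach (the namespace, labels, and node ports below are assumptions), expose the controller on fixed node ports, then register your nodes into the existing NLB's target groups yourself, e.g. with aws elbv2 register-targets:

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30080   # point the NLB's HTTP target group at this port
    - name: https
      port: 443
      targetPort: 443
      nodePort: 30443   # and the TLS target group at this one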
We have a request to expose certain pods in an AKS environment to the internet for 3rd party use.
Currently we have a private AKS cluster with a managed standard-SKU load balancer in front, using advanced Azure networking (basically Calico), where each Pod gets its own private IP from the VNet IP space. All private IPs currently route through a firewall via a user-defined route in order to reach the internet, and vice versa. Traffic to and from on-prem routes over a VPN connection through Azure Virtual WAN. I don't want to change any existing routing behavior unless 100% necessary.
My question is: how do you expose specific Pods of an existing private AKS cluster to the internet? The entire cluster does not need to be exposed. The issue I foresee is that the ephemeral Pods and ever-changing IPs make simple NATing in the firewalls a non-starter. I've also thought about simply making a new AKS cluster with a public load balancer. The issue there, though, is security: traffic must still go through the firewalls, which it likely could with the existing user-defined routes.
What is the recommended way to set up an architecture where certain Pods in AKS are accessible over the internet, while still allowing those Pods to reach other Pods over the private network? I want to avoid exposing all Pods to the internet.
There are a couple of options you can use in order to expose your application outside your network. Service types:
NodePort: Exposes the Service on each Node’s IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You’ll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
LoadBalancer: Exposes the Service externally using a cloud provider’s load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
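For example, a minimal LoadBalancer Service (the app name and ports are made up); on AKS this provisions a frontend IP on the cluster's load balancer:

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080   # the container port your pods listen on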
There is also another option: use an Ingress. IMO this is the best way to expose HTTP applications externally, because it's possible to create rules by path and host, which gives you much more flexibility than Services. Ingress supports only HTTP/HTTPS; if you need TCP, go with Services.
I'd recommend you take a look at these links to understand in depth how services and ingress work:
Kubernetes Services
Kubernetes Ingress
NGINX Ingress
AKS network concepts
Deploy the nginx ingress controller and bind the controller's Service to a public load balancer. Then define Ingress rules for the Kubernetes Services that you want to be accessible from the internet. Note that the ingress controller provides the entry point to the services running inside Kubernetes.
Several years later and wanted to update.
We did successfully implement a scalable ingress option into our private AKS cluster using NGINX as the ingress. The basic flow was:
Public IP > NAT to frontend private IP of NGINX > NGINX path rules that point to your pod/service
Taking as an example a URL for a microservice, www.example.com/service1: the public DNS entry you create is what resolves www.example.com to the public IP that you will NAT to the private IP of NGINX. Then, the rules you create within NGINX take the specific /service1 path of the URL and use it to route to the specific service you pointed it at. It behaves much like URL switching in other load balancers; that is really all NGINX is doing for you. In NGINX syntax, this involves specifying a hostname (URL) and an associated rule with a backend path and service name. The service name in this example is service1, and the backend path is / because service1 sits just behind the root.
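In Kubernetes Ingress terms, that rule might look roughly like this (the service name, port, and host are from the example above; treat the exact paths as assumptions):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: service1
spec:
  ingressClassName: nginx
  rules:
    - host: www.example.com
      http:
        paths:
          - path: /service1       # the URL path NGINX matches on
            pathType: Prefix
            backend:
              service:
                name: service1    # the backend Service to route to
                port:
                  number: 80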
A setup like this saves cost by using fewer public IPs. For example, you can use a subdomain to easily NAT traffic to a separate test environment: www.test.example.com and www.example.com can point to separate public IPs, which you can NAT to separate AKS clusters running NGINX. In this way, your NGINX rules can be identical, because each cluster is only looking for /service1, which hopefully matches across your mirrored test and prod environments.
There are many ways to do this, but a few recommendations from lessons learned:
use subdomains to break out multiple environments
standardize your NGINX private frontend IP across environments (make them all end in .100, for example)
create a standard NGINX ingress template where you really only need to modify the serviceName; your hostName should be static within an environment (see the sketch after this list)
have your devs include this template and deploy their microservices with helm rather than relying on an infrastructure team to update NGINX services, which sort of defeats the devops mentality and the speed gains
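To make the last two points concrete, here is a hypothetical Helm-templated Ingress where only the service name varies per microservice (the host and path scheme are illustrative):

# templates/ingress.yaml -- only .Values.serviceName changes per service
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ .Values.serviceName }}
spec:
  ingressClassName: nginx
  rules:
    - host: www.example.com            # static within an environment
      http:
        paths:
          - path: /{{ .Values.serviceName }}
            pathType: Prefix
            backend:
              service:
                name: {{ .Values.serviceName }}
                port:
                  number: 80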
I have a Kubernetes services that I would like to be accessible from outside the cluster.
I've set up Traefik, created an Ingress file for that service, and am able to go to 'somemadeupdomain.com' and access the service fine (having locally added a line to my hosts file).
However, my question is about the service type, which I've currently set to ClusterIP. I can access the service fine, so is it OK to continue using that, or should I use NodePort?
Of course, I'm aware that if I use NodePort, minikube service list will give me a specific URL created by Kubernetes to access that service, but I feel I don't need that since I have the ingress file.
Any explanation would be appreciated.
Thanks
As you are already using an ingress, it does not make much sense to use NodePort, since you already have a way to access your application. It's totally fine to keep whatever service type you need at the service level for internal access (within Kubernetes).
Ingress will redirect your external traffic to your service within the cluster, so ClusterIP is a good choice. No need to use NodePort.
From the documentation:
Ingress, added in Kubernetes v1.1, exposes HTTP and HTTPS routes from outside the cluster to services within the cluster.
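For illustration, a plain ClusterIP Service is all Traefik needs to route to (the name and ports are made up):

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: ClusterIP        # only reachable inside the cluster; Traefik handles external traffic
  selector:
    app: my-service
  ports:
    - port: 80
      targetPort: 8080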
I am new to Kubernetes and wanted to understand how I can expose a service running in Kubernetes to the outside world. I have exposed it using a NodePort on the cluster.
So, for example: A service exposes port 31234 on the host and I can get to the service from another server through https://kubeserverIP:31234.
What I want to achieve is to serve this service through nginx (on a different server, outside Kubernetes' control) via a URL, say, http://service.example.com. I have tried deploying nginx with an upstream pointing to the service, but that is not working and I get a bad gateway error.
Is there something I am missing here, or is there a neater way of achieving this?
I have a bare-metal installation of a Kubernetes cluster and have no access to a GCE load balancer or other vendor LBs.
Thanks
Thanks for pointing me in the right direction.
Essential steps broadly were:
Create an app and its service definition.
Create a namespace for ingress.
Create a default backend deployment and service to handle all requests that don't match any Ingress rule. Create these in the ingress namespace.
Create the nginx ingress controller deployment.
Create RBAC rules.
Finally, create the ingress rules for the applications with their paths and ports.
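As a sketch of that final step (the host, service name, and port are assumptions), the rule for the app could look like:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app
spec:
  ingressClassName: nginx
  rules:
    - host: service.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app          # the application's ClusterIP Service
                port:
                  number: 80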
Found a very useful guide which explained things in detail:
https://akomljen.com/kubernetes-nginx-ingress-controller/
You're almost there! Your next step will be to set up an ingress controller. There is an NGINX Ingress controller plugin that you can check out here.
Edit: Here's an example configuration: https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/complete-example