How to create an additional Istio ingress gateway? - kubernetes

We are using the default ingress gateway for Istio. We would like to create two different ingress gateways, one using a private and one a public external load balancer.
Is there any way to achieve this?

See this example, step 3: Deploy a private ingress gateway and mount the new secrets as data volumes with the following command. You may want to edit the Helm values from the example, for instance removing the mounted certificate volumes, changing the name of the gateway, or changing the namespace it is deployed to.
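Exactly how you edit those Helm values depends on your Istio version; on more recent versions the same idea is usually expressed as an IstioOperator overlay instead. A rough sketch, where the second gateway's name, its label, and the AWS-internal annotation are all illustrative (use whatever annotation your cloud provider expects for internal load balancers):

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
    # keep the default public gateway as-is
    - name: istio-ingressgateway
      enabled: true
    # additional private gateway; name, label, and annotation are illustrative
    - name: istio-private-ingressgateway
      enabled: true
      namespace: istio-system
      label:
        istio: private-ingressgateway
      k8s:
        serviceAnnotations:
          service.beta.kubernetes.io/aws-load-balancer-internal: "true"

Your Gateway resources can then select one deployment or the other via the istio: private-ingressgateway (or istio: ingressgateway) label, so private and public hosts are served by different load balancers.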

Related

AWS EKS - Load Balancer (Ingress)

I am deploying the Kubernetes nginx-ingress controller on AWS. Is there any way to prevent the automatic creation of a Network Load Balancer and instead assign an already existing load balancer in the config?
If not, is there any way to provide a custom name for the AWS NLB from within the nginx ingress configuration?
No, what you're asking for is not supported. There's no way to configure the nginx ingress controller to create NLBs with specific names or to use existing NLBs.
You can, however, do this manually if you set the nginx ingress controller serviceType to NodePort and then manually register the targets into the NLB (via the console, CLI, etc.).
Note that it's not ideal to have things configured outside of Kubernetes in this way, because there is a tendency to forget that ingress changes aren't synced to the NLB.
Make sure to check the security groups of your nodes to allow traffic from the load balancer to the exposed ports on your nodes.
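As a rough illustration of that manual route (the value keys follow the community ingress-nginx chart, and the target group ARN and instance ID are placeholders), you could pin the controller to fixed NodePorts and register the worker nodes yourself:

# expose the controller on fixed NodePorts instead of a LoadBalancer service
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --set controller.service.type=NodePort \
  --set controller.service.nodePorts.http=30080 \
  --set controller.service.nodePorts.https=30443

# register the worker nodes in the existing NLB's target group (placeholder ARN and instance ID)
aws elbv2 register-targets \
  --target-group-arn arn:aws:elasticloadbalancing:eu-west-1:111111111111:targetgroup/my-existing-tg/abc123 \
  --targets Id=i-0123456789abcdef0

Remember that any node you add or remove later has to be registered or deregistered by hand, which is exactly the drift the answer above warns about.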

Is it possible to have multiple ingress resources with a single GKE ingress controller

The GKE Ingress documentation states that:
When you create an Ingress object, the GKE Ingress controller creates a Google Cloud HTTP(S) Load Balancer and configures it according to the information in the Ingress and its associated Services.
To me it seems that I cannot have multiple Ingress resources with a single GCP ingress controller. Instead, GKE creates a new ingress controller for every Ingress resource.
Is this really so, or is it possible to have multiple ingress resources with a single ingress controller in GKE?
I would like to have one GCP LoadBalancer as the ingress controller, with a static IP and DNS configured, and then have multiple applications running in the cluster, each application registering its own ingress resource with application-specific host and/or path specifications.
Please note that I'm very new to GKE, GCP and Kubernetes in general, so it might be that I have misunderstood something.
I think the question you're actually asking is slightly different than what you have written. You want to know if multiple Ingress resources can be linked to a single GCP Load Balancer, not GKE Ingress controller. Based on the concept of a controller, there is only one GKE Ingress controller in a cluster, which is responsible for fulfilling multiple resources and provisioning multiple load balancers.
So, to answer the question directly (because I've been searching for a straight answer for a long time!):
Combining multiple Ingress resources into a single Google Cloud load balancer is not supported.
Source: https://cloud.google.com/kubernetes-engine/docs/concepts/ingress
Sad.
However, using the nginx-ingress controller is one way to at least minimize the number of external (GCP) load balancers provisioned (it only provisions a single TCP load balancer). But since that load balancer is for TCP traffic, it cannot terminate SSL or apply firewall rules for you (Cloud Armor cannot be used, for instance).
The only way I know of to have a single HTTPS load-balancer in GCP terminate SSL and route traffic to multiple services in GKE is to combine the ingresses into a single resource with all paths and certificates defined in one place.
(If anybody figures out a way to do it with multiple separate ingress resources, I'd love to hear it!)
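For illustration, the "combine everything into one resource" approach looks roughly like this on GKE. The hosts, service names, and static-IP name are made up, and older clusters would use the extensions/v1beta1 Ingress API instead:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: combined-ingress
  annotations:
    # reference a reserved global static IP by name (illustrative name)
    kubernetes.io/ingress.global-static-ip-name: my-static-ip
spec:
  rules:
  - host: app1.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app1-svc
            port:
              number: 80
  - host: app2.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app2-svc
            port:
              number: 80

The downside is obvious: every team's host, path, and certificate ends up in this one resource, which is exactly what the question was trying to avoid.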
Yes, it is possible to have a single ingress controller for multiple ingress resources.
You can create multiple ingress resources according to your path requirements, and all of them will be managed by the single ingress controller.
There are also multiple ingress controller options available; you can use Nginx, which will create one LB and manage the paths.
Inside Kubernetes, if you create a service of type LoadBalancer it will create a new LB resource in GCP, so make sure your microservices use type ClusterIP and all traffic enters the K8s cluster via the ingress path.
When you set up the ingress controller it will create one service of type LoadBalancer; you can use that IP in your DNS servers to forward the subdomain and path to the K8s cluster.
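A sketch of that pattern: each application ships its own Ingress that targets the shared nginx controller, while the app's own Service stays ClusterIP. The class name, host, and service name here are illustrative:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app1
spec:
  ingressClassName: nginx          # every app's Ingress points at the one nginx controller
  rules:
  - host: app1.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app1             # ClusterIP service for the app
            port:
              number: 80

Each team can create and update its own resource like this independently, and the single nginx controller (with its single external LB) merges them into one routing table.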

How would I set up Kubernetes ingress for VPN-only access?

I've got a Kubernetes cluster with nginx ingress set up for public endpoints. That works great, but I have one service that I don't want to expose to the public, while I do want to expose it to people who have VPC access via VPN. The people who will need to access this route will not have kubectl set up, so they can't use port-forward to send it to localhost.
What's the best way to set up ingress for a service that will be restricted to only people on the VPN?
Edit: thanks for the responses. As a few people guessed I'm running an EKS cluster in AWS.
It depends a lot on your Ingress Controller and cloud host, but roughly speaking you would probably set up a second copy of your controller using an internal load balancer service rather than a public LB, and then set that service and/or ingress to only allow traffic from the IP of the VPN pods.
Since you are talking about "VPC", and assuming you have your cluster in AWS, you probably need to do what @coderanger said.
Deploy a new ingress controller with "LoadBalancer" as the service type and add the annotation service.beta.kubernetes.io/aws-load-balancer-internal: "true".
Check here what are the possible annotations that you can add to a Load Balancer in AWS: https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/#load-balancers
You can also create a security group for example and add it to the load balancer with service.beta.kubernetes.io/aws-load-balancer-security-groups.
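Putting those pieces together, a sketch of the second, internal-only controller's Service (the namespace, selector, port numbers, and security group ID are placeholders; only the two annotations come from the answer above):

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-internal
  namespace: ingress-internal
  annotations:
    # provision an internal (VPC-only) load balancer instead of a public one
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    # optionally attach a security group that only allows the VPN CIDR
    service.beta.kubernetes.io/aws-load-balancer-security-groups: sg-0123456789abcdef0
spec:
  type: LoadBalancer
  selector:
    app: nginx-ingress-internal   # pods of the second controller deployment
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443

DNS for the VPN-only hostname then points at this internal LB, while the existing public controller keeps serving everything else.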

What is an ingress controller and how do I create it?

Good morning guys, so I took down a staging environment for a product on GCP and ran the deployment scripts again; the backend and frontend services have been set up. I have an ingress resource and a load balancer up, however the service is not running. A look at the production app revealed there was something like an nginx-ingress-controller. I really don't understand all this and how it was created. Can someone help me understand, because I have not seen anything online that makes it clear for me. Am I missing something?
loadBalancer: https://gist.github.com/davidshare/5a571e56febe7dacd580282b373f3095
Ingress Resource: https://gist.github.com/davidshare/d0f53912bc7da8310ec3d64f1c8a44f1
Ingress allows access to your Kubernetes services from outside the Kubernetes cluster. There are alternative Kubernetes (aka K8s) resources you can use to expose a service, such as NodePort or LoadBalancer.
Ingress is a resource independent of your service; you specify routing rules declaratively, so each URL with some context path can be mapped to a different service.
This makes it decoupled and isolated from the services you want to expose.
For Ingress to work, the cluster needs an Ingress Controller.
Like a Deployment resource in K8s, an Ingress can be created simply by
kubectl create -f ingress.yaml
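where ingress.yaml could be as small as the following, mapping two URL paths to two different services (all names, paths, and ports here are examples, not from the question):

# ingress.yaml -- example only
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - http:
      paths:
      - path: /web
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 8080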
First, you have to implement an Ingress Controller in order for an Ingress resource to be applied, as described in @Shubhu's answer. The Ingress controller, acting as an edge router, applies a specific logical structure with the aim of routing external traffic to your Kubernetes cluster's underlying services via the path-based routing rules defined in the Ingress resource.
If you select the Nginx Ingress Controller, it might be useful to follow the installation guide, which covers specific prerequisites for each cloud provider environment. To simplify the Nginx Ingress controller installation procedure, it is also possible to use the Helm package manager and install the appropriate stable/nginx-ingress Helm chart.
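For example, at the time this was written the chart lived in the old stable repository, so the install was roughly the following (Helm 3 syntax; on Helm 2 pass --name instead, and newer setups use the ingress-nginx/ingress-nginx chart):

helm install nginx-ingress stable/nginx-ingress --set rbac.create=true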

Exposing a service in Kubernetes using nginx reverse proxy

I am new to Kubernetes and wanted to understand how I can expose a service running in Kubernetes to the outside world. I have exposed it using a NodePort on the cluster.
So, for example: A service exposes port 31234 on the host and I can get to the service from another server through https://kubeserverIP:31234.
What I want to achieve is to serve this service through nginx (on a different server, outside of Kube's control) via a URL, say, http://service.example.com. I have tried deploying nginx with an upstream pointing to the service, but that is not working and I get a bad gateway error.
Is there something I am missing here? Or is there a neater way of achieving this?
I have a bare-metal installation of a Kubernetes cluster and have no access to the GCE load balancer or other vendor LBs.
Thanks
Thanks for pointing in the right direction.
Essential steps broadly were:
Create an app and its service definition.
Create a namespace for ingress.
Create a default backend deployment and service to handle all requests not matched by any Ingress rule. Create these in the ingress namespace.
Create the nginx ingress controller deployment.
Create RBAC rules.
Finally create the ingress rule for the applications with the paths and the ports.
Found a very useful guide which explained things in details:
https://akomljen.com/kubernetes-nginx-ingress-controller/
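The last step from that list, the ingress rule itself, might look roughly like this for the URL in the question. The kubernetes.io/ingress.class annotation was the usual way of targeting the nginx controller before ingressClassName existed; the service name and port are assumptions:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-service
  annotations:
    kubernetes.io/ingress.class: nginx   # route through the nginx ingress controller
spec:
  rules:
  - host: service.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service             # ClusterIP service created in the first step
            port:
              number: 8080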
You're almost there! Your next step will be to set up an ingress controller. There is an NGINX Ingress controller plugin that you can check out here.
Edit: Here's an example configuration: https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/complete-example