How to specify custom Istio ingress gateway in Kubernetes ingress - kubernetes

I deployed Istio using the operator and added a custom ingress gateway which is only accessible from a certain source range (our VPN).
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: ground-zero-ingressgateway
spec:
  profile: empty
  components:
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
    - name: istio-vpn-ingressgateway
      label:
        app: istio-vpn-ingressgateway
        istio: vpn-ingressgateway
      enabled: true
      k8s:
        serviceAnnotations:
          ...
        service:
          loadBalancerSourceRanges:
          - "x.x.x.x/x"
Now I want to configure Istio to expose a service outside of the service mesh cluster, using the Kubernetes Ingress resource. I use the kubernetes.io/ingress.class annotation to tell the Istio gateway controller that it should handle this Ingress.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: istio
spec:
  ...
Kubernetes version (EKS): 1.19
Istio version: 1.10.3
Which ingress gateway controller is now used (istio-ingressgateway or istio-vpn-ingressgateway)? Is there a way to specify which one should be used?
P.S. I know that I could create a VirtualService and specify the correct gateway but we want to write a manifest that also works without Istio by specifying the correct ingress controller with an annotation.

You can create an ingress class that references the ingress controller deployed by default in the istio-system namespace. This configuration with Ingress will work; however, to my current knowledge, it exists mainly for backwards compatibility. If you want the full Istio ingress functionality, you should use an Istio Gateway and VirtualService instead:
Using the Istio Gateway, rather than Ingress, is recommended to make use of the full feature set that Istio offers, such as rich traffic management and security features.
If this solution is not optimal for you, you can use e.g. the nginx ingress controller and bind it either with annotations (deprecated) or with an IngressClass. To my present knowledge, it is not possible to bind an ingress class to an additional Istio ingress gateway. If you need an explanation or documentation for this, you should create an issue on GitHub.
Summary: the recommended option is to use a Gateway with a VirtualService. Another possibility is to use a standalone nginx ingress with different classes and an Ingress resource for each.
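For reference, the first option could be sketched like this: an IngressClass that points at Istio's ingress controller (the controller string istio.io/ingress-controller is the one Istio's documentation uses; the class name istio is an arbitrary choice here), referenced from the Ingress via spec.ingressClassName instead of the deprecated annotation:

```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: istio
spec:
  controller: istio.io/ingress-controller
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  ingressClassName: istio
  ...
```

Note that, as described above, this still routes through the default istio-ingressgateway; it does not let you pick the custom VPN gateway.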

Related

Should annotations from IngressClass be applied to Ingress itself?

I am a bit confused about how IngressClass works. I moved all ALB annotations to the IngressClass and made it the default one; however, I noticed that the load balancer cannot be created because the certificate couldn't be found.
Default IngressClass:
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  annotations:
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:ap-southeast-2:000045211111:certificate/ee65c0af-044b-4c48-abc6-b4b44d4a3c76
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/wafv2-acl-arn: arn:aws:wafv2:ap-southeast-2:000045211111:regional/webacl/waf-api-regional-1e3042/d495cc4f-b24f-4465-afb4-ae5df32acb56
    ingressclass.kubernetes.io/is-default-class: "true"
  labels:
    app.kubernetes.io/component: controller
  name: alb-default
spec:
  controller: ingress.k8s.aws/alb
When I move all these annotations to the Ingress itself, the load balancer is created successfully. I thought that annotations are taken from the IngressClass and applied to the Ingress when it is created.
I manage the IngressClass from Terraform and populate these values during infra provisioning, so that I don't need to copy the resource ARNs again and provide them when deploying a service to k8s with Helm.
Am I missing anything? Is there any way to fix this?
Thank you.
This was solved in the following pull request:
https://github.com/kubernetes-sigs/aws-load-balancer-controller/pull/2963
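For what it's worth, the direction that work went is to move such settings off annotations and onto the controller's IngressClassParams custom resource, which the IngressClass then references via spec.parameters. A rough sketch (which fields are supported, including whether certificate ARNs can be set this way, depends on your controller version; the name alb-default and the values are placeholders):

```yaml
apiVersion: elbv2.k8s.aws/v1beta1
kind: IngressClassParams
metadata:
  name: alb-default
spec:
  scheme: internet-facing
  ipAddressType: ipv4
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: alb-default
spec:
  controller: ingress.k8s.aws/alb
  parameters:
    apiGroup: elbv2.k8s.aws
    kind: IngressClassParams
    name: alb-default
```

Unlike annotations on the IngressClass, IngressClassParams settings are actually read by the controller and applied to every Ingress using that class.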

how to use ALB Ingress with api networking.k8s.io/v1 in EKS

Previously I was using the extensions/v1beta1 API to create an ALB on Amazon EKS. After upgrading EKS to v1.19 I started getting warnings:
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
So I updated my ingress configuration accordingly and deployed it, but the ALB is not launching in AWS and I'm also not getting the ALB address.
Ingress configuration -->
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: "pub-dev-alb"
  namespace: "dev-env"
  annotations:
    kubernetes.io/ingress.class: "alb"
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
  - host: "dev.test.net"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: "dev-test-tg"
            port:
              number: 80
Node port configuration -->
apiVersion: v1
kind: Service
metadata:
  name: "dev-test-tg"
  namespace: "dev-env"
spec:
  ports:
  - port: 80
    targetPort: 3001
    protocol: TCP
  type: NodePort
  selector:
    app: "dev-test-server"
Results: (screenshot of the kubectl output omitted)
I used this documentation to create the ALB ingress controller.
Could anyone help me here?
Your ingress should work fine even with the newest Ingress API. The warning only indicates which API version is being used to read the object; you don't have to worry about it.
Here is the explanation of why this warning occurs even if you use apiVersion: networking.k8s.io/v1:
This is working as expected. When you create an ingress object, it can be read via any version (the server handles converting into the requested version). kubectl get ingress is an ambiguous request, since it does not indicate what version is desired to be read.
When an ambiguous request is made, kubectl searches the discovery docs returned by the server to find the first group/version that contains the specified resource.
For compatibility reasons, extensions/v1beta1 has historically been preferred over all other api versions. Now that ingress is the only resource remaining in that group, and is deprecated and has a GA replacement, 1.20 will drop it in priority so that kubectl get ingress would read from networking.k8s.io/v1, but a 1.19 server will still follow the historical priority.
If you want to read a specific version, you can qualify the get request (like kubectl get ingresses.v1.networking.k8s.io ...) or can pass in a manifest file to request the same version specified in the file (kubectl get -f ing.yaml -o yaml)
You can also see a similar question.

Istio's default gateway is not a gateway, it is a service

I am trying to understand Istio traffic routing. I installed Istio in demo mode and started playing around with the samples. The samples have you install a few gateways (I installed bookinfo-gateway and httpbin-gateway).
But it seems all my traffic goes through the "http2" port defined on the istio-ingressgateway in the istio-system namespace.
The documentation makes reference to this:
Istio provides some preconfigured gateway proxy deployments (istio-ingressgateway and istio-egressgateway) that you can use - both are deployed if you use our demo installation
But when I run: kubectl -n istio-system get service istio-ingressgateway -o yaml the result shows kind: Service.
The other gateways the demos had me made show kind: Gateway.
So I am left confused...
Is there a difference between a service and a gateway?
How would I use the sample application gateways instead of the istio-ingressgateway (which is really a service)?
How does Istio connect my VirtualService to the istio-ingressgateway? Is it just looking for all VirtualServices?
Is there a difference between a service and a gateway?
Yes.
The istio-ingressgateway is a Kubernetes service of type LoadBalancer (or NodePort, depending on your setup) that serves as the entry point into your cluster. The ingressgateway is Istio's ingress controller and it is completely optional.
The Gateway is an Istio custom resource that serves as an entry into your mesh. It is bound to an ingressgateway by a selector, e.g. see https://github.com/istio/istio/blob/master/samples/httpbin/httpbin-gateway.yaml
kind: Gateway
[...]
spec:
  selector:
    istio: ingressgateway
How would I use the sample application gateways instead of the istio-ingressgateway (which is really a service)?
You need both (or another form of ingress controller, and then route all traffic through the mesh gateway; more on that below).
How does Istio connect my VirtualService to the istio-ingressgateway? Is it just looking for all VirtualServices?
See this yaml file again: https://github.com/istio/istio/blob/master/samples/httpbin/httpbin-gateway.yaml
The gateway is bound to the ingressgateway.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: httpbin-gateway
spec:
  selector:
    istio: ingressgateway
[...]
A VirtualService like the one in the file is bound to a gateway.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  gateways:
  - httpbin-gateway
[...]
So if the traffic enters through your gateway, the VirtualService is applied.
Besides the gateways you configure, there is always the mesh gateway. So if you want your internal cluster traffic to use the Istio configuration too, you need to either add the mesh gateway to your virtual service:
gateways:
- httpbin-gateway
- mesh
or create a separate virtual service for that. If you don't set any gateway, the mesh gateway is used, since it is the default.
See: https://istio.io/latest/docs/reference/config/networking/virtual-service/#VirtualService -> the gateways entry
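Putting the pieces together, a sketch of a VirtualService that applies both to traffic entering through httpbin-gateway and to in-mesh traffic (the host and destination here are illustrative, not from the sample):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
  - httpbin.example.com
  gateways:
  # applies to external traffic entering through the Gateway resource
  - httpbin-gateway
  # applies to sidecar-to-sidecar traffic inside the mesh
  - mesh
  http:
  - route:
    - destination:
        host: httpbin
        port:
          number: 8000
```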

Kubernetes ingress with default config does not work

I have a problem with ingress. It just doesn't work. How can I figure out what is wrong?
I have Kubernetes on bare metal.
Installed helm chart
helm install stable/nginx-ingress --name ingress --namespace nginx-ingress
In the same namespace I deployed this ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /test
        backend:
          serviceName: efk-kibana
          servicePort: 5601
Changed the service type of the ingress controller from LoadBalancer to NodePort, because the load balancer was not created otherwise.
After installation, curl http://example.com returns the default example page.
Now all services work for me through NodePort, for example myweb.com:31555.
No tutorial mentions that I need to add something to /etc/hosts or anything like that.
Thanks for the help.
If you're using a bare-metal cluster, you're missing a piece of the puzzle.
Ingresses sit behind an ingress controller - you still need to expose that controller using a Service with type LoadBalancer, which isn't possible by default without a cloud provider.
There is, however, a solution. MetalLB is a load balancer implementation that lets you assign IPs to Services of type LoadBalancer on bare metal.
If you deploy it with a layer 2 configuration and update your ingress controller's Service accordingly, it will work without needing NodePort.
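As a sketch, a minimal MetalLB layer 2 configuration using the legacy ConfigMap format (newer MetalLB releases use CRDs instead; the address range is a placeholder for free IPs on your LAN):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250
```

Once this is applied, a Service of type LoadBalancer (such as the ingress controller's) receives an external IP from that pool.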

kubernetes v1.1 baremetal => how to connect ingress to outside world

I have a Kubernetes setup on CoreOS bare metal.
Until now I connected the outside world to services with an nginx reverse proxy.
I'm trying the new Ingress resource.
For now I have added a simple ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kube-ui
spec:
  backend:
    serviceName: kube-ui
    servicePort: 80
that starts like this:
INGRESS
NAME RULE BACKEND ADDRESS
kube-ui - kube-ui:80
My question is: how do I connect from the outside internet to that ingress point, given that this resource has no ADDRESS?
POSTing this to the API server will have no effect if you have not configured an Ingress controller. You need to choose the ingress controller implementation that is the best fit for your cluster, or implement one. Examples and instructions can be found here.
Check this gist.
This is for ingress-nginx, not kubernetes-ingress.
Prerequisite:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
Exposing via hostNetwork (make sure you know what you are doing; as documented, you can alternatively use a NodePort or a LoadBalancer):
kubectl edit deployment.apps/nginx-ingress-controller -n ingress-nginx
and add:
template:
  spec:
    hostNetwork: true
Port forwarding for TCP services:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  9000: "default/example-go:8080"
Also, you can use an Ingress object to expose the service.