Expose open-source Helm charts through Istio Gateway/VirtualService - kubernetes

I want to expose some Helm Charts through Istio ingress.
For example, today I can expose the Kubernetes Dashboard via an Ingress (with the NGINX ingress controller):
helm install stable/kubernetes-dashboard --set ingress.enabled=true
However, for Istio would I have to fork the Kubernetes Dashboard Helm chart to add the required Gateway and VirtualService YAML?
Or is there a better way to patch open-source charts to work with Istio ingress?

You could create your own chart that includes stable/kubernetes-dashboard as a dependency in its requirements.yaml. You then effectively have a wrapper chart that includes the dashboard, and you can add the Istio ingress configuration at the wrapper level.
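For example, a minimal sketch of such a wrapper chart (Helm v2-era layout, since requirements.yaml is mentioned above; the chart name, version constraint, and repository URL are assumptions to adjust to your setup):

# Chart.yaml
apiVersion: v1
name: dashboard-istio            # hypothetical wrapper chart name
version: 0.1.0

# requirements.yaml -- pulls the upstream chart in as a dependency
dependencies:
- name: kubernetes-dashboard
  version: "~1.10.0"             # pin to whichever chart version you actually use
  repository: "https://kubernetes-charts.storage.googleapis.com"

# templates/istio.yaml would then hold your Gateway and VirtualService
# (see the next answer for an example), rendered alongside the dashboard.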

Actually, you can do this without wrapping. In my case I had to expose Keycloak via a VirtualService, and Keycloak was also in another namespace.
I wrote a Gateway:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: keycloak-gateway
  namespace: keycloak
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
I wrote a VirtualService:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: demo-keycloak-http
  namespace: keycloak
spec:
  gateways:
  - keycloak-gateway
  hosts:
  - '*'
  http:
  - match:
    - uri:
        prefix: /auth
    route:
    - destination:
        host: demo-keycloak-http.keycloak.svc.cluster.local
        port:
          number: 80
Notice that I am routing to the service name.
As you can see, it is also possible to expose a Helm chart that lives in another namespace. In your case you may not even need to write a Gateway.
You just need to find the name of the service and write a VirtualService for it.
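For example, a quick way to find the Service to route to (the grep pattern is just an illustration; use whatever matches your release):

# List Services in all namespaces and look for the one created by the chart
kubectl get svc --all-namespaces | grep -i dashboard
# Then use <service-name>.<namespace>.svc.cluster.local as the destination host in the VirtualService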

Related

Accessing Jaeger /tracing from k8s cluster returns index.html and 503 Service Unavailable

I have a Kubernetes cluster which runs with Istio as a service mesh and load balancing provided by MetalLB. I have 4 Istio addons (Prometheus, Kiali, Grafana, and Jaeger) running on the cluster in the istio namespace, but running Firefox on the virtual machine is relatively slow and I also don't want to rely on the "istioctl dashboard" command in order to access my monitoring tools.
I've successfully been able to access Kiali and Grafana by tunneling in with PuTTY and utilizing the Istio ingressgateway with Gateway/VirtualService resources similar to those found in the Istio documentation here - https://istio.io/latest/docs/tasks/observability/gateways/. The Istio ingressgateway pod is listening on 10.10.1.10 and my PuTTY tunnel is directed to 10.10.1.10:80 with a source port of 90. Everything is done over HTTP for testing at this time.
I've listed my specific configuration below -
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: tracing-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http-tracing
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: tracing-vs
  namespace: istio-system
spec:
  hosts:
  - "*"
  gateways:
  - tracing-gateway
  http:
  - route:
    - destination:
        host: tracing
        port:
          number: 80
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: tracing
  namespace: istio-system
spec:
  host: tracing
  trafficPolicy:
    tls:
      mode: DISABLE
---
Whenever I attempt to access Jaeger by hitting /tracing, however, I always receive a 503 Service Unavailable error. I know the application is functional, though, because if I run the istioctl dashboard jaeger command I can access it through the VM's Firefox browser. I'm wondering what I need to configure within Jaeger to allow me to access it.
Initially, when working with Jaeger, I attempted to use a Gateway/VirtualService configuration identical to what worked for Grafana and Kiali, replacing only names/ports/prefixes, which is shown below:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: grafana-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana
spec:
  hosts:
  - "*"
  gateways:
  - grafana-gateway
  http:
  - match:
    - uri:
        prefix: /grafana
    route:
    - destination:
        host: grafana
        port:
          number: 3000
When running this for Jaeger I only ever received HTTP 503 responses. After trying different combinations of ports I used the YAML definition from the Istio page linked above, changing only the hosts line since I don't have a domain and everything is IP-based.
At this point, when I navigate to /tracing using my PuTTY tunnel, it returns a blank page which, if inspected, is Jaeger's index.html page. Inspecting the page shows that it attempts to redirect to jaeger_tracing but returns the net::ERR_ABORTED 503 (Service Unavailable) code shown in the screenshot below: /tracing_error_image
A workaround was found by running the kubectl port-forward command on port 16686, though the same can be done with the istioctl dashboard jaeger command.
I ran one of them in the background and tunneled to localhost with my PuTTY instance, using the URL /jaeger_tracing which is defined in my Jaeger manifest. From there I could hit Jaeger from my local machine without the sluggish VM performance.
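For reference, a minimal sketch of that workaround, assuming the standard Istio addon deploys Jaeger as a Deployment named jaeger in istio-system (adjust names and namespace to your cluster):

# Forward local port 16686 to the Jaeger UI port inside the cluster
kubectl -n istio-system port-forward deployment/jaeger 16686:16686
# Or let istioctl set up the forwarding for you
istioctl dashboard jaeger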

How to expose Traefik v2 dashboard with Kubernetes Ingress

Currently I use Traefik IngressRoute to expose the Traefik dashboard. I am using this configuration:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: traefik-dashboard
  namespace: my-namespace
spec:
  routes:
  - match: Host(`traefik.example.com`) && (PathPrefix(`/api`) || PathPrefix(`/dashboard`))
    kind: Rule
    services:
    - name: api@internal
      kind: TraefikService
    middlewares:
    - name: traefik-dashboard-https-redirect
    - name: traefik-dashboard-basic-auth
  tls:
    certResolver: le
and it works fine.
However, I would like to expose it with a native Kubernetes Ingress. I can't find any resource which shows how to access api@internal from an Ingress. Is it even possible?
It is not possible to reference api@internal from an Ingress.
There is a workaround, I think, which could be the following (see the sketch after this list):
- expose the API as insecure; it then exposes the dashboard by default on an entrypoint called traefik on port 8080,
- update the entrypoint manually in the static conf if needed: entrypoints.traefik.address=<what-you-want>,
- create a Service pointing to the traefik entrypoint (port 8080 by default),
- create an Ingress pointing to that Service.
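A minimal sketch of those steps, assuming Traefik runs in the kube-system namespace and its pods carry the label app: traefik (names, namespace, and hostname are illustrative):

# Static configuration (CLI flag or traefik.yml): expose the dashboard insecurely
# --api.insecure=true   # dashboard is then served on the "traefik" entrypoint, port 8080 by default
---
apiVersion: v1
kind: Service
metadata:
  name: traefik-dashboard
  namespace: kube-system
spec:
  selector:
    app: traefik              # assumption: must match your Traefik pods' labels
  ports:
  - name: dashboard
    port: 8080
    targetPort: 8080
---
apiVersion: extensions/v1beta1   # same Ingress API version as used elsewhere in this thread
kind: Ingress
metadata:
  name: traefik-dashboard
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: traefik.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: traefik-dashboard
          servicePort: 8080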

How can istio mesh-virtual-service manage traffic from ingress-virtual-service?

I am defining canary routes in a mesh virtual service and wondering whether I can make them apply to ingress traffic (via an ingress virtual service) as well, with something like the below; but it does not work (all traffic from the ingress goes to the non-canary version).
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: test-deployment-app
  namespace: test-ns
spec:
  gateways:
  - mesh
  hosts:
  - test-deployment-app.test-ns.svc.cluster.local
  http:
  - name: canary
    match:
    - headers:
        x-canary:
          exact: "true"
    - port: 8080
    headers:
      response:
        set:
          x-canary: "true"
    route:
    - destination:
        host: test-deployment-app-canary.test-ns.svc.cluster.local
        port:
          number: 8080
      weight: 100
  - name: stable
    route:
    - destination:
        host: test-deployment-app.test-ns.svc.cluster.local
        port:
          number: 8080
      weight: 100
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: test-deployment-app-internal
  namespace: test-ns
spec:
  gateways:
  - istio-system/default-gateway
  hosts:
  - myapp.dev.bla
  http:
  - name: default
    route:
    - destination:
        host: test-deployment-app.test-ns.svc.cluster.local
        port:
          number: 8080
      weight: 100
So I am expecting the x-canary: "true" response header when I call myapp.dev.bla, but I don't see it.
Well, the answer is only partially inside the link you included. I think the essential thing to realize when working with Istio is 'what even is the Istio service mesh'. The service mesh is every pod with an Istio envoy-proxy sidecar plus all the gateways (a gateway is a standalone envoy-proxy). They all know about each other because of istiod, so they can cooperate.
Any pod without an Istio sidecar (including ingress pods or, for example, kube-system pods) in your k8s cluster doesn't know anything about Istio or the service mesh. If such a pod wants to send traffic into the service mesh (so that traffic-management rules like yours are applied), it must send it through an Istio gateway. A gateway is an object that creates a standard deployment + service; the pods in that deployment run a standalone envoy-proxy container.
The Gateway object is a very similar concept to a k8s Ingress, but it doesn't necessarily have to listen on a nodePort; you can also use it as an 'internal' gateway. A gateway serves as an entry point into your service mesh, for either external or even internal traffic.
If you're using, for example, Nginx as the Ingress solution, you must reconfigure the Ingress rule to send traffic to one of the gateways instead of the target service, most likely to your mesh gateway. That gateway is nothing more than a k8s Service inside the istio-gateway or istio-system namespace; see the sketch below.
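A minimal sketch of that re-pointing, assuming an NGINX ingress class and the default istio-ingressgateway Service in istio-system (hostname and ports are illustrative, and the Ingress must live in the gateway's namespace so it can reference that Service):

apiVersion: extensions/v1beta1   # same Ingress API version as used elsewhere in this thread
kind: Ingress
metadata:
  name: myapp-to-istio
  namespace: istio-system
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: myapp.dev.bla
    http:
      paths:
      - path: /
        backend:
          serviceName: istio-ingressgateway   # hand traffic over to the mesh's gateway Service
          servicePort: 80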
Alternatively, you can configure the Istio gateway as the 'new' Ingress. As I'm not sure whether the default Istio gateway listens on a nodePort, you need to check it (again in the istio-gateway or istio-system namespace). Or you can create a new Gateway just for your application and attach the VirtualService to that gateway as well; see the sketch below.
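A minimal sketch of that last option, reusing the names from the question and the existing istio-system/default-gateway (a dedicated Gateway would be referenced the same way); the point is that one VirtualService is bound to both mesh and the ingress Gateway, and lists the external hostname, so the same canary rules apply to traffic entering from outside:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: test-deployment-app
  namespace: test-ns
spec:
  gateways:
  - mesh                             # in-mesh (sidecar) traffic
  - istio-system/default-gateway     # traffic entering through the ingress gateway
  hosts:
  - test-deployment-app.test-ns.svc.cluster.local
  - myapp.dev.bla                    # the external hostname must be listed here too
  http:
  - name: canary
    match:
    - headers:
        x-canary:
          exact: "true"
    headers:
      response:
        set:
          x-canary: "true"
    route:
    - destination:
        host: test-deployment-app-canary.test-ns.svc.cluster.local
        port:
          number: 8080
  - name: stable
    route:
    - destination:
        host: test-deployment-app.test-ns.svc.cluster.local
        port:
          number: 8080

If you go this way, fold the rules of test-deployment-app-internal into this single resource so two VirtualServices don't compete for the same host on the same gateway.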

Why Traefik HTTPS config does not work in Kubernetes cluster

I am trying to configure HTTPS with Traefik (v2.1.6) in a Kubernetes cluster (v1.15.2) by following this documentation.
My traefik deployment YAML looks like this:
And this is my IngressRoute config:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: traefik-dashboard-route
  namespace: kube-system
spec:
  entryPoints:
  - websecure
  tls:
    certresolver: ali
  routes:
  - match: Host(`traefik.example.com`)
    kind: Rule
    services:
    - name: traefik
      port: 8080
When I access the website, it gives me the following message: "Not secure".
What should I do to make it work?
Since this certificate is from the ACME staging environment, its root CA is not present in browsers. You need to add it to your system's trust store.
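If you do want the staging certificate trusted on a given machine, a minimal sketch for Debian/Ubuntu, assuming you have already saved the staging root certificate as staging-root.pem (note that Firefox keeps its own trust store and may need the certificate imported separately):

# Copy the staging root CA into the system trust store and refresh it
sudo cp staging-root.pem /usr/local/share/ca-certificates/acme-staging-root.crt
sudo update-ca-certificates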

Route traffic to a service in a different namespace with Traefik and Kubernetes

Using Traefik as an ingress controller (on a kube cluster in GCP).
Is it possible to create an ingress rule that uses a backend service from a different namespace?
We have a namespace for each of our "major" versions of code.
1-service.com -> 1-service.com ingress in the 1-service ns -> 1-service svc in the same ns
2-service.com -> 2-service.com ingress in the 2-service ns... and so on
I also would like another ingress rule in the "unversioned" namespace that will route traffic to one of the major releases.
service.com -> service.com ingress in the "service" ns -> X-service in the X-service namespace
I would like to keep major versions separate in k8s using versioned host names (1-service.com etc), but still have a "latest" that points to the latest of the releases.
I believe Voyager can do cross-namespace ingress -> svc. Can Traefik do the same?
You can use a workaround like this:
Create a Service of type ExternalName in the namespace where you want to create the ingress:
apiVersion: v1
kind: Service
metadata:
  name: service-1
  namespace: unversioned
spec:
  type: ExternalName
  externalName: service-1.service-1-ns.svc.cluster.local
  ports:
  - name: http
    port: 8080
    protocol: TCP
Create an ingress that points to this service:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: traefik
  name: ingress-to-other-ns
  namespace: unversioned
spec:
  rules:
  - host: latest.example.com
    http:
      paths:
      - backend:
          serviceName: service-1
          servicePort: 8080
        path: /
I just tested with the following example on EKS. Traefik is deployed in the default namespace. This is the config used for the k8s Service:
---
apiVersion: v1
kind: Service
metadata:
  name: 1-service
  namespace: 1-service
  labels:
    app: 1-service
spec:
  selector:
    app: 1-service
  ports:
  - name: http
    port: 80
    targetPort: 80
And this is the config used for the Traefik service that will send the request to the other namespace:
services:
  1-service:
    loadBalancer:
      servers:
      - url: http://1-service.1-service.svc.cluster.local:80
      # - url: http://1-service.1-service:80 # This should work perfectly as well, didn't test it explicitly
As you can probably already tell, you can reference services from a different namespace by using the SERVICE.NAMESPACE notation instead of just SERVICE, which would automatically assume you are referencing a service from the current namespace.