Can we trace external API calls with Istio behind proxy via Kiali?

We have Node.js-based microservices running in our on-prem Kubernetes v1.19 cluster with Istio v1.8.0. What I would like to achieve is to trace or display external API calls in Kiali. We have Jaeger clients for each microservice and are able to trace internal traffic.
But so far I have not been able to trace any external API calls made from any of the microservices. The only thing I can see is the traffic to the proxy in Kiali's graph overview.
We have a corporate proxy, and each container has the http_proxy and https_proxy environment variables set. Any external service is only accessible via the corporate proxy, so traffic has to go through our corporate proxy first. We have a secured gateway with TLS, and we do not have an egress gateway; we only have the istio-ingressgateway.
So is there any way to trace external traffic the same way as the internal traffic inside the cluster? If yes, what might be missing?
$ kubectl get pods -n dev
NAME READY STATUS RESTARTS AGE
api-dev-74896ff4f9-slxt5 3/3 Running 0 7h1m
auth-dev-98f77d487-qt5zd 3/3 Running 0 3d5h
backend-dev-bb7765464-b7bpr 2/2 Running 0 7d3h
mp-dev-86d6b8b978-slqp7 3/3 Running 0 5d9h
ui-dev-d5667946b-sdvlc 2/2 Running 0 5d4h
Here are the ServiceEntries and VirtualServices that I created. I would like to use the retry feature as well, for the calls to both the proxy and the external API.
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: company-proxy
  namespace: dev
spec:
  hosts:
  - foo-proxy.net
  ports:
  - number: PORT
    name: tcp
    protocol: TCP
  location: MESH_EXTERNAL
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: proxy
  namespace: dev
spec:
  hosts:
  - "foo-proxy.net"
  http:
  - name: "company-proxy"
    match:
    - uri:
        prefix: "/"
    route:
    - destination:
        host: "foo-proxy.com"
    timeout: 90s
    retries:
      retryOn: "5xx"
      attempts: 3
      perTryTimeout: 30s
---
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: foo-example.com
  namespace: dev
spec:
  hosts:
  - "foo-example.com"
  ports:
  - number: 80
    name: http
    protocol: HTTP
  - number: 443
    name: https
    protocol: HTTPS
  location: MESH_EXTERNAL
  resolution: DNS
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: foo-example.com
  namespace: dev
spec:
  hosts:
  - "foo-example.com"
  http:
  - name: "developer-api"
    match:
    - uri:
        prefix: "/"
    route:
    - destination:
        host: "foo-example.com"
    timeout: 90s
    retries:
      retryOn: "5xx"
      attempts: 3
      perTryTimeout: 30s

I am not sure why Istio doesn't automatically trace your calls to external APIs; perhaps it requires an egress gateway to be used. Note also that Istio creates traces for HTTP(S) traffic, not TCP.
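One thing worth checking against the configuration above: the company-proxy ServiceEntry declares its port as TCP, so Envoy treats all traffic to the corporate proxy as opaque TCP and produces no HTTP spans for it. As a minimal, unverified sketch: if the services speak plain HTTP to the proxy, declaring the port as HTTP should at least let the sidecar parse and report those requests (the port number below is still the placeholder from the question; HTTPS requests tunneled via CONNECT would remain opaque to Envoy).
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: company-proxy
  namespace: dev
spec:
  hosts:
  - foo-proxy.net
  ports:
  - number: PORT       # placeholder port, as in the question
    name: http-proxy   # declared as HTTP instead of TCP so Envoy can parse it
    protocol: HTTP
  location: MESH_EXTERNAL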
However, this is something you can still do programmatically. You can use any of the Jaeger client libraries to augment the traces already created by Envoy by appending your own spans.
To do so, you first need to extract the trace context from the HTTP headers of the incoming request (assuming that your external API calls are consecutive to an incoming request), and then create a new span as a child of that previous span context. A good idea would be to use the OpenTracing semantic conventions when you tag your new spans. Tools like Kiali will be able to leverage some of that information if it follows these conventions.
I've found this blog post that explains how to do it with the nodejs jaeger client: https://rhonabwy.com/2019/01/06/adding-tracing-with-jaeger-to-an-express-application/

Related

Accessing Jaeger /tracing from k8s cluster returns index.html and 503 Service Unavailable

I have a Kubernetes cluster which runs with Istio as a service mesh and load balancing provided by Metallb. I have 4 Istio addons (Prometheus, Kiali, Grafana, and Jaeger) running on the cluster in the istio namespace, but running firefox on the virtual machine is relatively slow and I also don't want to rely on the "istioctl dashboard" command in order to access my monitoring tools.
I've successfully been able to access Kiali and Grafana by tunneling in with PuTTY and utilizing the Istio ingressgateway with Gateway/VirtualService resources similar to those found in the Istio documentation here - https://istio.io/latest/docs/tasks/observability/gateways/. The istio ingressgateway pod is listening on 10.10.1.10 and my PuTTY tunnel is directed to 10.10.1.10:80 with a source port of 90. Everything is done over HTTP for testing at this time.
I've listed my specific configuration below -
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: tracing-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http-tracing
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: tracing-vs
  namespace: istio-system
spec:
  hosts:
  - "*"
  gateways:
  - tracing-gateway
  http:
  - route:
    - destination:
        host: tracing
        port:
          number: 80
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: tracing
  namespace: istio-system
spec:
  host: tracing
  trafficPolicy:
    tls:
      mode: DISABLE
---
Whenever I attempt to access Jaeger by hitting /tracing, however, I always receive a 503 Service Unavailable error. I know that the application can be functional, though, because if I run the istioctl dashboard jaeger command I can access it through the VM's Firefox browser. I'm wondering what I need to configure within Jaeger to allow me to access it.
Initially, when working with Jaeger, I attempted to use a Gateway/VirtualService configuration that was identical to what worked for Grafana and Kiali but with the names/ports/prefixes replaced, which is shown below -
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: grafana-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana
spec:
  hosts:
  - "*"
  gateways:
  - grafana-gateway
  http:
  - match:
    - uri:
        prefix: /grafana
    route:
    - destination:
        host: grafana
        port:
          number: 3000
When running this for Jaeger I only ever received HTTP 503 responses. After trying different combinations of ports, I used the YAML definition from the Istio page linked above, changing only the hosts line since I don't have a domain and everything is IP-based.
At this point, when I navigate to /tracing using my PuTTY tunnel, it returns a blank page which, if inspected, is Jaeger's index.html page. Inspecting the page shows that it attempts to redirect to jaeger_tracing but returns the net::ERR_ABORTED 503 (Service Unavailable) code shown in the screenshot below /tracing_error_image
A workaround was found by running the kubectl port-forward command on port 16686, but the same can be done with the istioctl dashboard jaeger command.
I ran one of them in the background and tunneled to localhost from my PuTTY instance using the URL /jaeger_tracing, which is defined in my Jaeger manifest. From there I could hit Jaeger from my local instance without the sluggish VM performance.
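For reference, the finding above about the /jaeger_tracing base path suggests what the gateway route would have needed; an untested sketch, reusing only the tracing-gateway and tracing service names from the earlier configuration, would be a VirtualService matching that prefix instead of /tracing:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: tracing-vs
  namespace: istio-system
spec:
  hosts:
  - "*"
  gateways:
  - tracing-gateway
  http:
  - match:
    - uri:
        prefix: /jaeger_tracing   # match the UI's actual base path
    route:
    - destination:
        host: tracing
        port:
          number: 80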

Use istio ServiceEntry resource to send traffic to internal kubernetes FQDN over external connection

CONTEXT:
I'm in the middle of planning a migration of kubernetes services from one cluster to another, the clusters are in separate GCP projects but need to be able to communicate across the clusters until all apps are moved across. The projects have VPC peering enabled to allow internal traffic to an internal load balancer (tested and confirmed that's fine).
We run Anthos service mesh (v1.12) in GKE clusters.
PROBLEM:
I need to find a way to do the following:
PodA needs to be migrated, and references a hostname in its ENV which is simply 'serviceA'
Running in the same cluster this resolves fine as the pod resolves 'serviceA' to 'serviceA.default.svc.cluster.local' (the internal kubernetes FQDN).
However, when I run PodA on the new cluster I need serviceA's hostname to actually resolve back to the internal load balancer on the other cluster, and not on its local cluster (and namespace), seeing as serviceA is still running on the old cluster.
I'm using an istio ServiceEntry resource to try and achieve this, as follows:
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: serviceA
  namespace: default
spec:
  hosts:
  - serviceA.default.svc.cluster.local
  location: MESH_EXTERNAL
  ports:
  - number: 50051
    name: grpc
    protocol: GRPC
  resolution: STATIC
  endpoints:
  - address: 'XX.XX.XX.XX' # IP Redacted
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: resources
  namespace: default
spec:
  hosts:
  - 'serviceA.default.svc.cluster.local'
  gateways:
  - mesh
  http:
  - timeout: 5s
    route:
    - destination:
        host: serviceA.default.svc.cluster.local
This doesn't appear to work and I'm getting Error: 14 UNAVAILABLE: upstream request timeout errors on PodA running in the new cluster.
I can confirm that running telnet to the hostname from another pod on the mesh appears to work (i.e. I don't get a connection timeout or connection refused).
Is there a limitation on what you can use in the hosts field of a ServiceEntry? Does it have to be a .com or .org address?
The only way I've got this to work properly is to use a hostAlias in PodA to add a hosts-file entry for the hostname, but I really want to avoid doing this, as it means making the same change in lots of files. I would rather use Istio's ServiceEntry to achieve this.
Any ideas/comments appreciated, thanks.
Fortunately I came across someone with a similar (but not identical) issue, and the answer in this stackoverflow post gave me the outline of what kubernetes (and istio) resources I needed to create.
I was heading in the right direction, just needed to really understand how istio uses Virtual Services and Service Entries.
The end result was this:
apiVersion: v1
kind: Service
metadata:
  name: serviceA
  namespace: default
spec:
  type: ExternalName
  externalName: serviceA.example.com
  ports:
  - name: grpc
    protocol: TCP
    port: 50051
---
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: serviceA
  namespace: default
spec:
  hosts:
  - serviceA.example.com
  location: MESH_EXTERNAL
  ports:
  - number: 50051
    name: grpc
    protocol: TCP
  resolution: DNS
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: serviceA
  namespace: default
spec:
  hosts:
  - serviceA.default.svc.cluster.local
  http:
  - timeout: 5s
    route:
    - destination:
        host: serviceA.default.svc.cluster.local
    rewrite:
      authority: serviceA.example.com

How can I proxy an external site through my Kubernetes(OpenShift) Ingress?

I have a website that needs to be proxied through my web app.
Traditionally we've accomplished it via apache proxy with proxy directives.
The proxy also rewrites some of the headers and adds a couple of new ones.
Now the app has moved to OpenShift (Kubernetes) and I'm trying to avoid deploying another pod with apache.
Can I perform this header rewriting and proxying via a K8s Ingress? Or the router?
I've tried this approach, but it didn't work.
I also don't know how to get OpenShift Ingress logs; nothing seems to happen in there.
I tried using an ExternalName service, but it doesn't work:
apiVersion: v1
kind: Service
metadata:
  name: es3
spec:
  externalName: google.com
  type: ExternalName
---
kind: Route
apiVersion: route.openshift.io/v1
spec:
  host: host.my-cluster-url.net
  to:
    kind: Service
    name: es3
  port:
    targetPort: es3
I also tried using Endpoints, with the same result:
apiVersion: v1
kind: Service
metadata:
  name: mysvc
spec:
  ports:
  - name: app
    port: 80
    protocol: TCP
    targetPort: 80
  clusterIP: None
  type: ClusterIP
---
apiVersion: v1
kind: Endpoints
metadata:
  name: mysvc
subsets:
- addresses:
  - ip: my.ip.address
  ports:
  - name: app
    port: 80
    protocol: TCP
You want to proxy a non-Kubernetes service, right? If yes, use an Endpoints object and create a Service from it. I have used this with Kubernetes; my wild guess is it will work with OpenShift too.
https://kubernetes.io/docs/concepts/services-networking/endpoint-slices/
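As a rough, untested sketch of that suggestion applied to this case (the name external-site, the IP 203.0.113.10 and the port are placeholders, and this only covers the proxying part, not the header rewriting):
apiVersion: v1
kind: Service
metadata:
  name: external-site
spec:
  ports:              # selector-less Service, backed by the Endpoints below
  - name: https
    port: 443
    targetPort: 443
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-site   # must match the Service name
subsets:
- addresses:
  - ip: 203.0.113.10    # IP of the external site (placeholder)
  ports:
  - name: https
    port: 443
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: external-site
spec:
  host: host.my-cluster-url.net
  to:
    kind: Service
    name: external-site
  port:
    targetPort: https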

How can istio mesh-virtual-service manage traffic from ingress-virtual-service?

I am defining canary routes in a mesh virtual service and wondering whether I can make them apply to ingress traffic (with an ingress virtual service) as well, with something like below, but it does not work (all traffic from the ingress goes to the non-canary version):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: test-deployment-app
  namespace: test-ns
spec:
  gateways:
  - mesh
  hosts:
  - test-deployment-app.test-ns.svc.cluster.local
  http:
  - name: canary
    match:
    - headers:
        x-canary:
          exact: "true"
    - port: 8080
    headers:
      response:
        set:
          x-canary: "true"
    route:
    - destination:
        host: test-deployment-app-canary.test-ns.svc.cluster.local
        port:
          number: 8080
      weight: 100
  - name: stable
    route:
    - destination:
        host: test-deployment-app.test-ns.svc.cluster.local
        port:
          number: 8080
      weight: 100
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: test-deployment-app-internal
  namespace: test-ns
spec:
  gateways:
  - istio-system/default-gateway
  hosts:
  - myapp.dev.bla
  http:
  - name: default
    route:
    - destination:
        host: test-deployment-app.test-ns.svc.cluster.local
        port:
          number: 8080
      weight: 100
So I am expecting the x-canary: true response header when I call myapp.dev.bla, but I don't see it.
Well, the answer is only partially inside the link you included. I think the essential thing to realize when working with Istio is 'what even is the Istio service mesh'. The service mesh is every pod with an Istio envoy-proxy sidecar plus all the gateways (a gateway is a standalone envoy-proxy). They all know about each other because of istiod, so they can cooperate.
Any pod without an Istio sidecar (including ingress pods or, e.g., kube-system pods) in your k8s cluster doesn't know anything about Istio or the service mesh. If such a pod wants to send traffic into the service mesh (to have traffic management rules like yours applied), it must send it through an Istio Gateway. A Gateway is an object that creates a standard Deployment + Service; the pods in that Deployment consist of a standalone envoy-proxy container.
The Gateway object is a very similar concept to a k8s Ingress, but it doesn't necessarily have to listen on a NodePort; you can also use it as an 'internal' gateway. A Gateway serves as an entry point into your service mesh, either for external or even internal traffic.
If you're using e.g. Nginx as the Ingress solution, you must reconfigure the Ingress rule to send traffic to one of the gateways instead of the target service, most likely to your mesh gateway. That gateway is nothing other than a k8s Service inside the istio-gateway or istio-system namespace.
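A rough sketch of such a reconfigured Ingress rule (the istio-ingressgateway Service name, the istio-system namespace, and the nginx ingress class are assumptions based on a default installation):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  namespace: istio-system        # the backend Service must live in the same namespace
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: myapp.dev.bla
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: istio-ingressgateway   # send traffic to the mesh gateway, not the app
            port:
              number: 80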
Alternatively, you can configure an Istio Gateway as the 'new' Ingress. As I'm not sure whether some default Istio Gateway listens on a NodePort, you need to check it (again in the istio-gateway or istio-system namespace). Or you can create a new Gateway just for your application and apply the VirtualService to that new gateway as well.
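A minimal sketch of that last option, reusing the names from the question and assuming the default istio: ingressgateway selector; the canary VirtualService is then bound to both the mesh and the new gateway and matches the external hostname too:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: test-deployment-app-gateway
  namespace: test-ns
spec:
  selector:
    istio: ingressgateway   # assumes the default Istio ingress gateway deployment
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - myapp.dev.bla
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: test-deployment-app
  namespace: test-ns
spec:
  gateways:
  - mesh
  - test-deployment-app-gateway
  hosts:
  - test-deployment-app.test-ns.svc.cluster.local
  - myapp.dev.bla
  http:
  # ...same canary/stable rules as in the question's first VirtualService...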

Use https protocol for endpoints in Kubernetes Services

I tried creating a Kubernetes Endpoints-backed Service to invoke a resource hosted outside the cluster via static IPs over the HTTPS protocol.
Below is the Endpoints code:
kind: Service
apiVersion: v1
metadata:
  name: serviceRequest
spec:
  ports:
  - port: 8081
    targetPort: 8094
---
kind: Endpoints
apiVersion: v1
metadata:
  name: serviceRequest
subsets:
- addresses:
  - ip: XX.XX.XX.XX // **external IP which is accessible as https://XX.XX.XX.XX:8094**
  ports:
  - port: 8094
But the above configuration gives a 400 Bad Request with the message "This combination of host and port requires TLS."
The same works for an HTTP-exposed IP, but not for HTTPS. Could someone please guide me on how to achieve this?
Update 1
This is how the flow is configured:
Ingress -> Service -> Endpoints
This is the error message you get when calling an HTTPS endpoint with HTTP. Are you sure that whoever is calling your service is calling it with https:// at the beginning?
A Kubernetes Service is no more than a set of forwarding rules in iptables (most often), and it knows nothing about TLS.
If you want to enforce HTTPS redirection, you can use an ingress controller for this. All major ingress controllers have this capability.
For example, check for nginx-ingress.
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#server-side-https-enforcement-through-redirect.
Basically, all you need is to add this annotation to your ingress rule.
nginx.ingress.kubernetes.io/ssl-redirect: "true"
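For illustration, a minimal Ingress using that annotation (the host, TLS secret, and ingress class are placeholders; the backend service name and port are taken from the question):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: service-request
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"   # redirect HTTP to HTTPS
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - example.com                 # placeholder host
    secretName: example-com-tls   # placeholder TLS secret
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: serviceRequest
            port:
              number: 8081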
Easy peasy, just add port 443 to the Service; that will make the request TLS/HTTPS:
kind: Service
apiVersion: v1
metadata:
  name: serviceRequest
spec:
  ports:
  - port: 443 # <-- this is the way
    targetPort: 8094
---
kind: Endpoints
apiVersion: v1
metadata:
  name: serviceRequest
subsets:
- addresses:
  - ip: XX.XX.XX.XX # **external IP which is accessible as https://XX.XX.XX.XX:8094**
  ports:
  - port: 8094
So you can reach your serviceRequest from your containers at the https://serviceRequest URL.
Also keep in mind that in YAML the # character is the comment sign, not //.