TCP Ingress with Istio 0.8 and v1alpha3 Gateway - kubernetes

I am attempting to open a TCP connection into an Istio service mesh using the v1alpha3 routing. I can successfully open a connection with the external load balancer. That traffic is making it into the default IngressGateway as expected; I have verified this with tcpdump on the IngressGateway pod.
Unfortunately, the traffic is never forwarded into the service mesh; it seems to die in the IngressGateway.
The following is an example of my configuration:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: echo-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 31400
      protocol: TCP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: echo-gateway
spec:
  hosts:
  - "*"
  gateways:
  - echo-gateway
  tcp:
  - match:
    - port: 31400
    route:
    - destination:
        host: echo.default.svc.cluster.local
        port:
          number: 6060
I have verified that the IngressGateway can reach the Service via netcat on the specified port. Running tcpdump on the Service pod with the envoy indicates that there is never a communication attempt with the pod or proxy.
I've read over the documentation several times and I'm at a loss as to how to proceed. This line from the documentation is suspicious to me:
While Istio will configure the proxy to listen on these ports, it is the responsibility of the user to ensure that external traffic to these ports are allowed into the mesh.
Any thoughts?

You should give the Gateway port a name, for example:
port:
  name: not_http
  number: 31400
  protocol: TCP
(When I tried to create your Gateway without a port name in Istio 1.0 it was rejected.) Using a name like "not_http" helps to remind us that this is a TCP gateway and won't have access to all of the Istio configuration features.
The VirtualService looks correct. Make sure that you have only one VirtualService for the host "*" (use istioctl get all --all-namespaces).
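If your istioctl no longer has the get command, an equivalent check with plain kubectl (a sketch against the Istio CRDs, which I assume are installed) lists every VirtualService and Gateway so you can spot a duplicate binding for host "*":
kubectl get virtualservices.networking.istio.io --all-namespaces
kubectl get gateways.networking.istio.io --all-namespaces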

Related

Issue running KNative with MicroK8S on Multipass

I'm trying to get Knative to be able to create services on my Multipass VM with macOS as the host OS, and I am using MicroK8s. I have DNS enabled and I am using MetalLB as my load balancer for ingress. I have also changed Multipass to use hyperkit instead of VirtualBox. I don't know what hasn't been configured or what's misconfigured. The error I get when I try to create a new service is pasted below:
ubuntu@uncommon-javelin:~/sandbox/sessions/serverless_k8s/yaml$ kn service create nginx --image nginx --port 80
Error: Internal error occurred: failed calling webhook "webhook.serving.knative.dev": failed to call webhook: Post "https://webhook.knative-serving.svc:443/defaulting?timeout=10s": dial tcp 10.152.183.167:443: connect: connection refused
Run 'kn --help' for usage
When I ping that IP, it times out. So it seems like that IP address is either locked down or doesn't exist. Port 443 is configured in my ingress-service.yaml file
apiVersion: v1
kind: Service
metadata:
  name: ingress
  namespace: ingress
spec:
  selector:
    name: nginx-ingress-microk8s
  type: LoadBalancer
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  - name: https
    protocol: TCP
    port: 443
    targetPort: 443
And here is what I have configured for metallb address pool
apiVersion: v1
kind: Service
metadata:
  name: test-service
  annotations:
    metallb.universe.tf/address-pool: custom-addresspool
spec:
  selector:
    name: nginx
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
And here's another file, address-pool.yaml, which I have configured for my cluster. I'm pretty sure I either have some networking misconfigured or am missing a configuration somewhere.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: custom-addresspool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.1-192.168.1.100
Knative uses validating admission webhooks to ensure that the resources in the cluster are valid. It seems like the Knative webhooks are not running on your cluster, but the validatingwebhookconfiguration has been created, as has the service in front of the webhook (the IP in the error message is a ClusterIP of a Kubernetes service on your cluster).
I'd look at the webhook pods in the knative-serving namespace for more details.
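As a first pass, something like the following standard kubectl checks (a sketch; it assumes the default Knative Serving install, where the webhook Deployment and Service are both named webhook) will show whether the webhook pods are running, whether the Service has endpoints, and what the pods are logging:
kubectl get pods -n knative-serving
kubectl describe svc webhook -n knative-serving   # Endpoints should not be empty
kubectl logs -n knative-serving deployment/webhook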

Use https protocol for endpoints in Kubernetes Services

I tried creating a Kubernetes Endpoints + Service to invoke a resource hosted outside the cluster via static IPs over the HTTPS protocol.
Below is the endpoint code
kind: Service
apiVersion: v1
metadata:
  name: serviceRequest
spec:
  ports:
  - port: 8081
    targetPort: 8094
---
kind: Endpoints
apiVersion: v1
metadata:
  name: serviceRequest
subsets:
- addresses:
  - ip: XX.XX.XX.XX // **external IP which is accessible as https://XX.XX.XX.XX:8094**
  ports:
  - port: 8094
But the above configuration is giving 400 Bad Request with message as "This combination of host and port requires TLS."
The same setup works when the exposed IP serves http, but not https. Could someone please guide me on how to achieve this?
Update 1:
This is how the flow is configured.
Ingress->service->endpoints
This is the error message you get when calling an https endpoint with http. Are you sure that whoever is calling your service is calling it with https:// at the beginning?
A Kubernetes Service is no more than a set of forwarding rules in iptables (most often), and it knows nothing about TLS.
If you want to enforce https redirection you might use an ingress controller for this. All major ingress controllers have this capability.
For example, check for nginx-ingress.
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#server-side-https-enforcement-through-redirect.
Basically, all you need is to add this annotation to your ingress rule.
nginx.ingress.kubernetes.io/ssl-redirect: "true"
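For context, a minimal Ingress carrying that annotation might look like the sketch below (the ingress name, host, and ingressClassName are placeholder assumptions; the backend reuses the serviceRequest Service from the question):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: servicerequest-ingress          # hypothetical name
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"   # redirect plain http to https
spec:
  ingressClassName: nginx
  rules:
  - host: example.mydomain.com          # placeholder host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: serviceRequest
            port:
              number: 8081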
Easy peasy: just add port 443 to the Service, which will make the request TLS/https:
kind: Service
apiVersion: v1
metadata:
  name: serviceRequest
spec:
  ports:
  - port: 443 # <-- this is the way
    targetPort: 8094
---
kind: Endpoints
apiVersion: v1
metadata:
  name: serviceRequest
subsets:
- addresses:
  - ip: XX.XX.XX.XX # **external IP which is accessible as https://XX.XX.XX.XX:8094**
  ports:
  - port: 8094
So you can reach your serviceRequest from your containers at the https://serviceRequest URL.
Also keep in mind that in YAML the # character is the comment sign, not //.
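A quick in-cluster check could look like this (a sketch: it assumes you run it in the same namespace as the Service, and that the backend's certificate may not be trusted, hence -k):
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl --command -- \
  curl -vk https://serviceRequest/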

Unable to open Istio ingress-gateway for gRPC

This question is about my inability to connect a gRPC client to a gRPC service hosted in Kubernetes (AWS EKS), with an Istio ingress gateway.
On the kubernetes side: I have a container with a Go process listening on port 8081 for gRPC. The port is exposed at the container level. I define a kubernetes service and expose 8081. I define an istio gateway which selects istio: ingressgateway and opens port 8081 for gRPC. Finally I define an istio virtualservice with a route for anything on port 8081.
On the client side: I have a Go client which can send gRPC requests to the service.
It works fine when I kubectl port-forward -n mynamespace service/myservice 8081:8081 and call my client via client -url localhost:8081.
When I close the port forward, and call with client -url [redacted]-[redacted].us-west-2.elb.amazonaws.com:8081 my client fails to connect. (That url is the output of kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.status.loadBalancer.ingress[0].hostname}' with :8081 appended.)
Logs:
I looked at the istio-system/istio-ingressgateway service logs. I do not see an attempted connection.
I do see the bookinfo connections I made earlier when going over the istio bookinfo tutorial. That tutorial worked, I was able to open a browser and see the bookinfo product page, and the ingressgateway logs show "GET /productpage HTTP/1.1" 200. So the Istio ingress-gateway works, it's just that I don't know how to configure it for a new gRPC endpoint.
Istio's Ingress-Gateway
kubectl describe service -n istio-system istio-ingressgateway
outputs the following, which I suspect is the problem: port 8081 is not listed despite my efforts to open it. I'm puzzled by how many ports are opened by default; I didn't open them (comments on how to close ports I don't use would be welcome, but they aren't the reason for this SO question).
Name: istio-ingressgateway
Namespace: istio-system
Labels: [redacted]
Annotations: [redacted]
Selector: app=istio-ingressgateway,istio=ingressgateway
Type: LoadBalancer
IP: [redacted]
LoadBalancer Ingress: [redacted]
Port: status-port 15021/TCP
TargetPort: 15021/TCP
NodePort: status-port 31125/TCP
Endpoints: 192.168.101.136:15021
Port: http2 80/TCP
TargetPort: 8080/TCP
NodePort: http2 30717/TCP
Endpoints: 192.168.101.136:8080
Port: https 443/TCP
TargetPort: 8443/TCP
NodePort: https 31317/TCP
Endpoints: 192.168.101.136:8443
Port: tcp 31400/TCP
TargetPort: 31400/TCP
NodePort: tcp 31102/TCP
Endpoints: 192.168.101.136:31400
Port: tls 15443/TCP
TargetPort: 15443/TCP
NodePort: tls 30206/TCP
Endpoints: 192.168.101.136:15443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
So I think I did not properly open port 8081 for gRPC. What other logs or tests can I run to help identify where this is coming from?
Here is the relevant yaml:
Kubernetes Istio virtualservice: whose intent is to route anything on port 8081 to myservice
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myservice
  namespace: mynamespace
spec:
  hosts:
  - "*"
  gateways:
  - myservice
  http:
  - match:
    - port: 8081
    route:
    - destination:
        host: myservice
Kubernetes Istio gateway: whose intent is to open port 8081 for GRPC
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: myservice
  namespace: mynamespace
spec:
  selector:
    istio: ingressgateway
  servers:
  - name: myservice-plaintext
    port:
      number: 8081
      name: grpc-svc-plaintext
      protocol: GRPC
    hosts:
    - "*"
Kubernetes service: showing port 8081 is exposed at the service level, which I confirmed with the port-forward test mentioned earlier
apiVersion: v1
kind: Service
metadata:
  name: myservice
  namespace: mynamespace
  labels:
    app: myservice
spec:
  selector:
    app: myservice
  ports:
  - protocol: TCP
    port: 8081
    targetPort: 8081
    name: grpc-svc-plaintext
Kubernetes deployment: showing port 8081 is exposed at the container level, which I confirmed with the port-forward test mentioned earlier
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myservice
  namespace: mynamespace
  labels:
    app: myservice
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myservice
  template:
    metadata:
      labels:
        app: myservice
    spec:
      containers:
      - name: myservice
        image: [redacted]
        ports:
        - containerPort: 8081
Re checking DNS works on the client:
getent hosts [redacted]-[redacted].us-west-2.elb.amazonaws.com
outputs 3 IP's, I'm assuming that's good.
[IP_1 redacted] [redacted]-[redacted].us-west-2.elb.amazonaws.com
[IP_2 redacted] [redacted]-[redacted].us-west-2.elb.amazonaws.com
[IP_3 redacted] [redacted]-[redacted].us-west-2.elb.amazonaws.com
Checking Istio Ingressgateway's routes:
istioctl proxy-status istio-ingressgateway-[pod name]
istioctl proxy-config routes istio-ingressgateway-[pod name]
returns
Clusters Match
Listeners Match
Routes Match (RDS last loaded at Wed, 23 Sep 2020 13:59:41)
NOTE: This output only contains routes loaded via RDS.
NAME          DOMAINS     MATCH                  VIRTUAL SERVICE
http.8081     *           /*                     myservice.mynamespace
              *           /healthz/ready*
              *           /stats/prometheus*
Port 8081 is routed to myservice.mynamespace, seems good to me.
UPDATE 1:
I am starting to understand I can't open port 8081 using the default istio ingress gateway. That service does not expose that port, and I was assuming creating a gateway would update the service "under the hood" but that's not the case.
The external ports that I can pick from are: 80, 443, 31400, 15443, 15021 and I think my gateway needs to rely only on those. I've updated my gateway and virtual service to use port 80 and the client then connects to the server just fine.
That means I have to differentiate between multiple services not by port (can't route from the same port to two services obviously), but by SNI, and I'm unclear how to do that in gRPC, I'm guessing I can add a Host:[hostname] in the gRPC header. Unfortunately, if that's how I can route, it means headers need to be read on the gateway, and that mandates terminating TLS at the gateway when I was hoping to terminate at the pod.
I am starting to understand I can't open port 8081 using the default istio ingress gateway. That service does not expose that port, and I was assuming creating a gateway would update the service "under the hood" but that's not the case. The external ports that I can pick from are: 80, 443, 31400, 15443, 15021 and I think my gateway needs to rely only on those. I've updated my gateway and virtual service to use port 80 and the client then connects to the server just fine.
I'm not sure exactly how you tried to add a custom port to the ingress gateway, but it's possible.
As far as I checked here, it's possible to do in 3 ways; here are the options, with links to examples provided by @A_Suh, @Ryota and @peppered.
Kubectl edit
Helm
Istio Operator
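For reference, the Istio Operator route might look like the sketch below (assumptions: an operator/istioctl-based install, and that the default ports are re-listed because overriding the port list replaces it; the gRPC entry mirrors the 8081 port from the question). Apply it with istioctl install -f <file>:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
      k8s:
        service:
          ports:
          # keep the defaults you still need
          - name: status-port
            port: 15021
            targetPort: 15021
          - name: http2
            port: 80
            targetPort: 8080
          - name: https
            port: 443
            targetPort: 8443
          # the extra gRPC port from the question
          - name: grpc-svc-plaintext
            port: 8081
            targetPort: 8081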
Additional resources:
How to create custom istio ingress gateway controller?
How to configure ingress gateway in istio?
That means I have to differentiate between multiple services not by port (can't route from the same port to two services obviously), but by SNI, and I'm unclear how to do that in gRPC, I'm guessing I can add a Host:[hostname] in the gRPC header. Unfortunately, if that's how I can route, it means headers need to be read on the gateway, and that mandates terminating TLS at the gateway when I was hoping to terminate at the pod.
I see you have already created a new question here, so let's just move there.
I have added the port to the ingress gateway successfully, but I still couldn't get the client connected to the server. For me too, port-forwarding works, but when I try to connect through the ingress I get the error below. Here the istio ingressgateway is on GKE, so it's using the global HTTPS load balancer.
Jun 14, 2021 8:28:08 PM com.manning.mss.ch12.grpc.sample01.InventoryClient updateInventory
WARNING: RPC failed: Status{code=INTERNAL, description=http2 exception, cause=io.grpc.netty.shaded.io.netty.handler.codec.http2.Http2Exception: First received frame was not SETTINGS. Hex dump for first 5 bytes: 485454502f
at io.grpc.netty.shaded.io.netty.handler.codec.http2.Http2Exception.connectionError(Http2Exception.java:85)
at io.grpc.netty.shaded.io.netty.handler.codec.http2.Http2ConnectionHandler$PrefaceDecoder.verifyFirstFrameIsSettings(Http2ConnectionHandler.java:350)
at io.grpc.netty.shaded.io.netty.handler.codec.http2.Http2ConnectionHandler$PrefaceDecoder.decode(Http2ConnectionHandler.java:251)
at io.grpc.netty.shaded.io.netty.handler.codec.http2.Http2ConnectionHandler.decode(Http2ConnectionHandler.java:450)
at io.grpc.netty.shaded.io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:502)
at io.grpc.netty.shaded.io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:441)
at io.grpc.netty.shaded.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:278)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:345)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:337)
at io.grpc.netty.shaded.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1408)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
at io.grpc.netty.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:345)
at io.grpc.netty.shaded.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:930)
at io.grpc.netty.shaded.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:677)
at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:612)
at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:529)
at io.grpc.netty.shaded.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:491)
at io.grpc.netty.shaded.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:905)
at io.grpc.netty.shaded.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)

Istio Custom Ingress Gateway Works 80 only

I want to create my own ingress gateway with Istio. Here's my intention:
traffic on 4000 > my-gateway > my-virtualservice > web service (listening on 4000)
I've deployed the following YAML:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: my-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 4000
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-virtualservice
spec:
  hosts:
  - "*"
  gateways:
  - my-gateway
  http:
  - route:
    - destination:
        host: web
        port:
          number: 4000
This doesn't work, but changing the gateway port number: 4000 to number: 80 does work.
Presumably because the istio-ingressgateway is open on 80.
Which leads me to believe that this chain is actually:
traffic on 4000 > my-gateway > my-virtualservice > istio-ingressgateway > web service
I assume I can fix this by opening 4000 on the istio-ingressgateway but doesn't that defeat the point of creating a custom gateway?
I thought the whole point of creating my-gateway was to avoid using the istio-ingressgateway?
Help me understand! :D
Traffic flow: Client -> LoadBalancer (Ingress Gateway Service) -> Ingress Gateway Envoy -> Sidecar Envoy for your application -> Your application.
The ingress gateway is an Envoy proxy deployed at the edge of a Kubernetes cluster. All incoming requests (HTTP, TCP) to the services inside the cluster arrive at the ingress gateway. The Gateway and VirtualService kinds are what let you configure the Envoy proxy of the ingress gateway.
Creating a Gateway object does not really deploy a new gateway; it just configures the same Envoy proxy running as the ingress gateway.
Here is a good reference
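One way to see this in practice (a sketch, reusing the istioctl style shown earlier in this thread) is to dump the listeners of the existing ingress gateway pod after applying my-gateway, and to check which ports the gateway Service actually exposes; a port 4000 listener in Envoy is only reachable from outside if the Service/LoadBalancer also forwards that port:
istioctl proxy-config listeners istio-ingressgateway-[pod name] -n istio-system
kubectl get svc istio-ingressgateway -n istio-system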

istio - tracing egress traffic

I installed Istio with
gateways.istio-egressgateway.enabled = true
I have a service that consumes external services, so I define the following egress rule.
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-service1
spec:
  hosts:
  - external-service1.com
  ports:
  - number: 80
    name: http
    protocol: HTTP
  - number: 443
    name: https
    protocol: HTTPS
  resolution: DNS
  location: MESH_EXTERNAL
But using Jaeger I cannot see the traffic to the external service, so I am not able to detect problems in the network.
I'm forwarding the appropriate headers to the external service (x-request-id, x-b3-traceid, x-b3-spanid, b3-parentspanid, x-b3-sampled, x-b3-flags, x-ot-span-context)
Is this the correct behavior?
what is happening?
Can I only have statistics of internal calls?
How can I have statistics for egress traffic?
This assumes that your services are defined in Istio's internal service registry. If not, please configure them according to the service-defining instructions.
With HTTPS, all the HTTP-related information (method, URL path, response code) is encrypted, so Istio cannot see or monitor that information.
If you need to monitor HTTP-related information in access to external HTTPS services, you may want to let your applications issue plain HTTP requests and configure Istio to perform TLS origination.
First you have to redefine your ServiceEntry, create a VirtualService to rewrite the HTTP request port, and add a DestinationRule to perform TLS origination.
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-service1
spec:
  hosts:
  - external-service1.com
  ports:
  - number: 80
    name: http-port
    protocol: HTTP
  - number: 443
    name: http-port-for-tls-origination
    protocol: HTTP
  resolution: DNS
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: external-service1
spec:
  hosts:
  - external-service1.com
  http:
  - match:
    - port: 80
    route:
    - destination:
        host: external-service1.com
        port:
          number: 443
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: external-service1
spec:
  host: external-service1.com
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN
    portLevelSettings:
    - port:
        number: 443
      tls:
        mode: SIMPLE # initiates HTTPS when accessing external-service1.com
EOF
The VirtualService redirects HTTP requests on port 80 to port 443 where the corresponding DestinationRule then performs the TLS origination. Unlike the previous ServiceEntry, this time the protocol on port 443 is HTTP, instead of HTTPS, because clients will only send HTTP requests and Istio will upgrade the connection to HTTPS.
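To verify the setup, a hypothetical check from inside the mesh (assuming a test pod such as Istio's sleep sample is running in the same namespace) issues a plain HTTP request and should get back the response served over HTTPS:
kubectl exec "$(kubectl get pod -l app=sleep -o jsonpath='{.items[0].metadata.name}')" -c sleep -- \
  curl -sS -o /dev/null -w "%{http_code}\n" http://external-service1.com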
I hope it helps.