Why do secure gRPC calls not reach the ingress gateway? - kubernetes

I have installed istio 1.22.2 inside kubernetes (1.12.x) with SDS enabled. I have been following this guide, and I am able to do SSL termination at the ingress gateway for normal services (on HTTP/1.1). I can see the requests in the access logs of the gateway.
Gateway:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: mygateway
spec:
  selector:
    istio: ingressgateway # use istio default ingress gateway
  servers:
  - port:
      number: 31400
      name: tcp
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: "review-this-co" # must be the same as secret
    hosts:
    - "xyz.example.com"
However, when gRPC is used over a secure channel, I could not see any access logs (the gRPC client fails). I was expecting similar behavior for gRPC as well (i.e. SSL termination at the ingress gateway).
NOTE: the same gRPC client works (the call reaches the ingress gateway and is visible in the access logs) with plaintext, if the gateway is configured like the following:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: mygateway
spec:
  selector:
    istio: ingressgateway # use istio default ingress gateway
  servers:
  - port:
      number: 31400
      name: tcp
      protocol: GRPC
    hosts:
    - "xyz.example.com"
A network load balancer (pass-through) has been used.

If I understand you correctly, the thing here is that:
- gRPC currently works over an HTTP/2-type transport
- the current ingress is not capable of HTTP/2
So are you sure your client is using HTTP/1? Because otherwise it might not work.
Please let me know if that helped.
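For reference, Envoy-based Istio ingress gateways can negotiate HTTP/2 over TLS via ALPN, so TLS termination for gRPC should in principle be possible with a server block like the sketch below (port 443 is an assumption; the host and credentialName are copied from the question):
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: mygateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      # 443 is an assumed port; any free gateway port works
      number: 443
      name: https
      # HTTPS terminates TLS at the gateway; gRPC (HTTP/2) is then
      # negotiated with the client via ALPN
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: "review-this-co"
    hosts:
    - "xyz.example.com"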

Try out the gRPC greeter example with Istio; it works for me.
# greeter.yaml
apiVersion: v1
kind: Service
metadata:
  name: greeter
  labels:
    app: greeter
spec:
  ports:
  - name: grpc
    port: 50051
  selector:
    app: greeter
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: greeter
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: greeter
        version: v1
    spec:
      containers:
      - image: tobegit3hub/grpc-helloworld
        imagePullPolicy: IfNotPresent
        name: greeter
        ports:
        - containerPort: 50051
# gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: greeter-gateway
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - 'xyz.example.com'
# virtualservice.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: greeter
spec:
  hosts:
  - 'xyz.example.com'
  gateways:
  - greeter-gateway
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: greeter
        port:
          number: 50051
# grpc greeter client
docker run -it tobegit3hub/grpc-helloworld /greeter_client.py xyz.example.com:80

Related

Access log entries are not logged in istio sidecar for ingress traffic

I have an ALB ingress which routes its traffic to the istio-ingressgateway.
From there I have a gateway:
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: "X-gateway"
  namespace: dev
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "dev.xxx.com"
Also I have the virtual service in place:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: vs-istio-ingress
  namespace: dev
spec:
  gateways:
  - X-gateway
  hosts:
  - "dev.xxx.com"
  http:
  - route:
    - destination:
        host: serviceX
        port:
          number: 8080
From there I have the service defined:
apiVersion: v1
kind: Service
metadata:
  namespace: dev
  name: serviceX
  labels:
    app: appX
spec:
  selector:
    app: podX
  ports:
  - port: 8080
I have access logging enabled in the operator by setting:
spec:
  meshConfig:
    accessLogFile: /dev/stdout
The issue: when I hit the service through the ingress, the ingress gateway itself has the access log entry, but the sidecar of the service does not (it's a single pod). However, when the request comes to the service from within the service mesh, the log entry does appear in the sidecar proxy's access log.
Istio version: 1.10.0
k8s version: v1.21.4
The service port needs a name with a protocol prefix, so that Istio can detect the protocol:
spec:
  selector:
    app: podX
  ports:
  - name: http
    port: 8080
This solves the issue.
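For context: Istio infers the protocol of a Service port from the port name prefix (http, grpc, tcp, ...); with no name, traffic to the pod is treated as plain TCP, which can explain why the HTTP access-log entry never shows up in the sidecar. On recent versions (as here, k8s 1.21/Istio 1.10) the appProtocol field is an alternative to the naming convention; a sketch:
apiVersion: v1
kind: Service
metadata:
  namespace: dev
  name: serviceX
  labels:
    app: appX
spec:
  selector:
    app: podX
  ports:
  - name: http
    # appProtocol is a Kubernetes field that Istio also honors for
    # protocol selection, equivalent to the http- name prefix
    appProtocol: http
    port: 8080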

How to access Prometheus & Grafana via the Istio ingress gateway? I have installed Prometheus and Grafana through Helm

I used the below command to bring up the pod:
kubectl create deployment grafana --image=docker.io/grafana/grafana:5.4.3 -n monitoring
Then I used the below command to create the ClusterIP service:
kubectl expose deployment grafana --type=ClusterIP --port=80 --target-port=3000 --protocol=TCP -n monitoring
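For reference, the kubectl expose command above generates a Service roughly equivalent to this sketch (the app=grafana selector comes from kubectl create deployment):
apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: monitoring
spec:
  type: ClusterIP
  selector:
    app: grafana
  ports:
  # port 80 is what clients connect to; 3000 is Grafana's container port
  - port: 80
    targetPort: 3000
    protocol: TCP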
Then I applied the below VirtualService:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana
spec:
  hosts:
  - "*"
  gateways:
  - cogtiler-gateway.skydeck
  http:
  - match:
    - uri:
        prefix: /grafana
    route:
    - destination:
        port:
          number: 3000
        host: grafana
kubectl apply -f grafana-virtualservice.yaml -n monitoring
Output:
virtualservice.networking.istio.io/grafana created
Now, when I try to access it, I get the below error from Grafana:
If you're seeing this Grafana has failed to load its application files
1. This could be caused by your reverse proxy settings.
2. If you host grafana under subpath make sure your grafana.ini root_path setting includes subpath
3. If you have a local dev build make sure you build frontend using: npm run dev, npm run watch, or npm run build
4. Sometimes restarting grafana-server can help
The easiest solution that works out of the box would be to configure that with a grafana host and a / prefix:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: grafana-gateway
  namespace: monitoring
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http-grafana
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana-vs
  namespace: monitoring
spec:
  hosts:
  - "grafana.example.com"
  gateways:
  - grafana-gateway
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: grafana
        port:
          number: 80
As you mentioned in the comments, you want to use path-based routing, something like my.com/grafana; that's also possible to configure. You can use an Istio rewrite for that:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: grafana-gateway
  namespace: monitoring
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http-grafana
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana-vs
  namespace: monitoring
spec:
  hosts:
  - "*"
  gateways:
  - grafana-gateway
  http:
  - match:
    - uri:
        prefix: /grafana
    rewrite:
      uri: /
    route:
    - destination:
        host: grafana
        port:
          number: 80
But, according to this GitHub issue, you would additionally have to configure Grafana itself for that, as without the proper Grafana configuration it won't work correctly.
I found a way to configure Grafana with a different root URL using the GF_SERVER_ROOT_URL environment variable in the Grafana deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: grafana
  name: grafana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: grafana
    spec:
      containers:
      - image: docker.io/grafana/grafana:5.4.3
        name: grafana
        env:
        - name: GF_SERVER_ROOT_URL
          value: "%(protocol)s://%(domain)s/grafana/"
        resources: {}
There is also a VirtualService and a Gateway for that deployment:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: grafana-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http-grafana
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana-vs
spec:
  hosts:
  - "*"
  gateways:
  - grafana-gateway
  http:
  - match:
    - uri:
        prefix: /grafana/
    rewrite:
      uri: /
    route:
    - destination:
        host: grafana
        port:
          number: 80
You need to create a Gateway to allow routing between the istio-ingressgateway and your VirtualService.
Something along the lines of:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: ingress
  namespace: istio-system
spec:
  selector:
    # Make sure that the istio-ingressgateway pods have this label
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - my.domain.com
You also need a DNS entry for your domain (my.domain.com) that points to the IP address of your istio-ingressgateway.
When your browser hits my.domain.com, it will be directed to the istio-ingressgateway. The istio-ingressgateway will inspect the Host field of the request and redirect the request to Grafana (according to the VirtualService rules).
You can check kubectl get svc -n istio-system | grep istio-ingressgateway to get the public IP of your ingress gateway.
If you want to enable TLS, then you need to provision a TLS certificate for your domain (easiest with cert-manager). Then you can use an HTTPS redirect in your gateway, like so:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: ingress
  namespace: whatever
spec:
  selector:
    # Make sure that the istio-ingressgateway pods have this label
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - my.domain.com
    tls:
      httpsRedirect: true
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - my.domain.com
    tls:
      mode: SIMPLE
      # Name of the secret containing the TLS certificate + keys.
      # The secret must exist in the same namespace as the
      # istio-ingressgateway (probably the istio-system namespace).
      # This secret can be created by cert-manager, or you can create
      # a self-signed certificate and add it manually to the browser's
      # trusted certificates.
      credentialName: my-domain-tls
Then your VirtualService:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana
spec:
  hosts:
  - "my.domain.com"
  gateways:
  - ingress
  http:
  - match:
    - uri:
        prefix: /grafana
    route:
    - destination:
        port:
          number: 3000
        host: grafana
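As a side note, the my-domain-tls secret referenced above can be created declaratively with cert-manager; a sketch of a Certificate resource, assuming cert-manager is installed and a ClusterIssuer named letsencrypt-prod exists (both names are assumptions):
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: my-domain-tls
  # must live in the same namespace as the ingress gateway
  namespace: istio-system
spec:
  # the secret that the Gateway's credentialName points at
  secretName: my-domain-tls
  dnsNames:
  - my.domain.com
  issuerRef:
    kind: ClusterIssuer
    name: letsencrypt-prod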

Kubernetes Istio exposure not working with Virtualservice and Gateway

So we have the following use case running on Istio 1.8.2/Kubernetes 1.18:
Our cluster is exposed via an external load balancer on Azure. When we expose the app the following way, it works:
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  annotations:
    ...
  name: frontend
  namespace: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: applicationname
  template:
    metadata:
      labels:
        app: appname
        name: frontend
        customer: customername
    spec:
      imagePullSecrets:
      - name: yadayada
      containers:
      - name: frontend
        image: yadayada
        imagePullPolicy: Always
        ports:
        - name: https
          protocol: TCP
          containerPort: 80
        resources: {}
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
---
apiVersion: v1
kind: Service
metadata:
  name: frontend-svc
  namespace: frontend
  labels:
    name: frontend-svc
    customer: customername
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 80
  selector:
    name: frontend
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: frontend
  namespace: frontend
  annotations:
    kubernetes.io/ingress.class: istio
    kubernetes.io/tls-acme: "true"
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  rules:
  - host: "customer.domain.com"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          serviceName: frontend-svc
          servicePort: 80
  tls:
  - hosts:
    - "customer.domain.com"
    secretName: certificate
When we start using a VirtualService and Gateway, however, we fail to make it work for some reason. We want to use VirtualServices and Gateways because they offer more flexibility and options (like URL rewriting). Other apps running on Istio don't have this issue (they are much simpler as well), and we don't have a NetworkPolicy in place (yet). We simply cannot reach the webpage. Does anyone have an idea? The VirtualService and Gateway are below, with the other two ReplicaSets not included because they are not the problem:
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  creationTimestamp: null
  name: virtualservice-name
  namespace: frontend
spec:
  gateways:
  - frontend
  hosts:
  - customer.domain.com
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: frontend
        port:
          number: 80
      weight: 100
  - match:
    - uri:
        prefix: /api/
    route:
    - destination:
        host: backend
        port:
          number: 8080
      weight: 100
  - match:
    - uri:
        prefix: /auth/
    route:
    - destination:
        host: keycloak
        port:
          number: 8080
      weight: 100
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: frontend
  namespace: frontend
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http2
      protocol: HTTP
    tls:
      httpsRedirect: true
    hosts:
    - "customer.domain.com"
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: PASSTHROUGH
      credentialName: customer-cert
    hosts:
    - "customer.domain.com"
Your Gateway specifies PASSTHROUGH, however your VirtualService provides an HTTP route. This means the TLS connection is not terminated by the Gateway, but the VirtualService expects terminated TLS. See also this somewhat similar question:
How do I properly HTTPS secure an application when using Istio?
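A minimal sketch of the HTTPS server block with TLS terminated at the gateway instead, assuming customer-cert is a Kubernetes TLS secret in the ingress gateway's namespace:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      # SIMPLE terminates TLS at the gateway, so the HTTP routes in
      # the VirtualService can be applied to the decrypted traffic
      mode: SIMPLE
      credentialName: customer-cert
    hosts:
    - "customer.domain.com"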
#user140547 Correct, we changed that now. But we still couldn't access the application.
We found out that one of the important services was not receiving gateway traffic, since that one wasn't set up correctly. It was our first time having an Istio deployment with multiple services, so we thought each of them needed its own Gateway. Little did we know that one Gateway was more than enough...

K8S: Routing with Istio returns 404

I'm new to the k8s world.
In my dev environment, I use nginx as a proxy (with CORS configs and header forwarding) for the different microservices I have (all made with Spring Boot). In a k8s cluster, do I have to replace it with Istio?
I'm trying to run a simple microservice (for now) and use Istio for routing to it. I've installed Istio with Google Cloud.
If I navigate to IstioIP/auth/api/v1, it returns 404.
This is my yaml file:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
  - port:
      name: http
      number: 80
      protocol: HTTP
    hosts:
    - '*'
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: virtual-service
spec:
  hosts:
  - "*"
  gateways:
  - gateway
  http:
  - match:
    - uri:
        prefix: /auth
    route:
    - destination:
        host: auth-srv
        port:
          number: 8082
---
apiVersion: v1
kind: Service
metadata:
  name: auth-srv
  labels:
    app: auth-srv
spec:
  ports:
  - name: http
    port: 8082
  selector:
    app: auth-srv
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: auth-srv
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: auth-srv
        version: v1
    spec:
      containers:
      - name: auth-srv
        image: gcr.io/{{MY_PROJECT_ID}}/auth-srv:1.5
        imagePullPolicy: IfNotPresent
        env:
        - name: JAVA_OPTS
          value: '-DZIPKIN_SERVER=http://zipkin:9411'
        ports:
        - containerPort: 8082
        livenessProbe:
          httpGet:
            path: /api/v1
            port: 8082
          initialDelaySeconds: 60
          periodSeconds: 5
It looks like Istio doesn't know anything about that URL, and therefore you are getting a 404 error response.
If you look closer at the configuration in the virtual service, you have configured Istio to match on the path prefix /auth.
So if you request ISTIOIP/auth, you will reach your microservice application. (The original answer included an image describing the traffic flow and why you get a 404 response.)
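If the Spring Boot service itself only serves paths under /api/v1 (an assumption), a rewrite can strip the /auth prefix before forwarding, so that ISTIOIP/auth/api/v1 reaches the app as /api/v1; a sketch:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: virtual-service
spec:
  hosts:
  - "*"
  gateways:
  - gateway
  http:
  - match:
    - uri:
        prefix: /auth/
    # strip the /auth prefix before forwarding to the service
    rewrite:
      uri: /
    route:
    - destination:
        host: auth-srv
        port:
          number: 8082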

How to expose redis to outside with istio sidecar?

I'm using Redis with k8s 1.15.0 and Istio 1.4.3; it works well inside the network.
However, when I tried to use the Istio gateway and sidecar to expose it to the outside network, it failed.
Then I removed the Istio sidecar and just started the Redis server in k8s, and it worked.
After searching, I added a DestinationRule to the config, but it didn't help.
So what's the problem? Thanks for any tips!
Here is my redis.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: docker.io/redis:5.0.5-alpine
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 16379
          protocol: TCP
          name: redis-port
        volumeMounts:
        - name: redis-data
          mountPath: /data
        - name: redis-conf
          mountPath: /etc/redis
        command:
        - "redis-server"
        args:
        - "/etc/redis/redis.conf"
        - "--protected-mode"
        - "no"
      volumes:
      - name: redis-conf
        configMap:
          name: redis-conf
          items:
          - key: redis.conf
            path: redis.conf
      - name: redis-data
        nfs:
          path: /data/redis
          server: 172.16.8.34
---
apiVersion: v1
kind: Service
metadata:
  name: redis-svc
  labels:
    app: redis-svc
spec:
  type: ClusterIP
  ports:
  - name: redis-port
    port: 16379
    protocol: TCP
  selector:
    app: redis
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: redis-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: tcp
      protocol: TCP
    hosts:
    - "redis.basic.svc.cluster.local"
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: redis-svc
spec:
  host: redis-svc
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: redis-vs
spec:
  hosts:
  - "redis.basic.svc.cluster.local"
  gateways:
  - redis-gateway
  tcp:
  - route:
    - destination:
        host: redis-svc.basic.svc.cluster.local
        port:
          number: 16379
Update:
This is how I make the request:
[root]# redis-cli -h redis.basic.svc.cluster.local -p 80
redis.basic.svc.cluster.local:80> get Test
Error: Protocol error, got "H" as reply type byte
There are a few things that need to be different when exposing a TCP application with Istio:
- The hosts: field needs to be "*", as the TCP protocol works only with IP:PORT; there are no Host headers at L4.
- There needs to be a TCP port match in your VirtualService that matches the Gateway. I suggest naming it in a unique way and matching the Deployment's port name.
- I suggest avoiding port 80, as it is already used in the default ingress configuration and could result in a port conflict (that is also most likely why redis-cli got "H" as a reply type byte: the default HTTP listener on port 80 answered with an HTTP response, whose first byte is "H"). So I changed it to 11337.
So your Gateway should look something like this:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: redis-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 11337
      name: redis-port
      protocol: TCP
    hosts:
    - "*"
And your VirtualService like this:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: redis-vs
spec:
  hosts:
  - "*"
  gateways:
  - redis-gateway
  tcp:
  - match:
    - port: 11337
    route:
    - destination:
        host: redis-svc
        port:
          number: 16379
Note that I removed namespaces for clarity.
Then add the custom port to the default ingress gateway using the following command:
kubectl edit svc istio-ingressgateway -n istio-system
And add the following next to the other port definitions:
- name: redis-port
  nodePort: 31402
  port: 11337
  protocol: TCP
  targetPort: 16379
To access the exposed application, use the Istio gateway's external IP and the port that we just set up.
To get your gateway's external IP, you can use:
export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
redis-cli -h $INGRESS_HOST -p 11337
If your istio-ingressgateway does not have an external IP assigned, use one of your nodes' IP addresses and port 31402.
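Note that a manual kubectl edit of the gateway Service may be reverted when Istio is upgraded or re-installed; on operator-managed installations (newer Istio versions than the 1.4.3 in the question) the extra port can instead be declared in the installation spec. A sketch of an IstioOperator overlay, which assumes an operator-managed istio-ingressgateway:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
      k8s:
        service:
          ports:
          # custom TCP port forwarded to redis
          - name: redis-port
            port: 11337
            targetPort: 16379
            nodePort: 31402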
Hope this helps.
Thanks for suren's answer.
But I think redis.basic.svc.cluster.local is the external DNS host matched by the VirtualService, and the VirtualService's destination host routes to the service redis-svc with its full namespace path.
Maybe it's not for that reason.