How to expose redis to outside with istio sidecar? - kubernetes

I'm using redis with k8s 1.15.0 and istio 1.4.3; it works well inside the network.
However, when I tried to use the istio gateway and sidecar to expose it to the outside network, it failed.
Then I removed the istio sidecar and just started the redis server in k8s, and it worked.
After searching I added a DestinationRule to the config, but it didn't help.
So what's the problem? Thanks for any tips!
Here is my redis.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: docker.io/redis:5.0.5-alpine
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 16379
          protocol: TCP
          name: redis-port
        volumeMounts:
        - name: redis-data
          mountPath: /data
        - name: redis-conf
          mountPath: /etc/redis
        command:
        - "redis-server"
        args:
        - "/etc/redis/redis.conf"
        - "--protected-mode"
        - "no"
      volumes:
      - name: redis-conf
        configMap:
          name: redis-conf
          items:
          - key: redis.conf
            path: redis.conf
      - name: redis-data
        nfs:
          path: /data/redis
          server: 172.16.8.34
---
apiVersion: v1
kind: Service
metadata:
  name: redis-svc
  labels:
    app: redis-svc
spec:
  type: ClusterIP
  ports:
  - name: redis-port
    port: 16379
    protocol: TCP
  selector:
    app: redis
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: redis-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: tcp
      protocol: TCP
    hosts:
    - "redis.basic.svc.cluster.local"
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: redis-svc
spec:
  host: redis-svc
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: redis-vs
spec:
  hosts:
  - "redis.basic.svc.cluster.local"
  gateways:
  - redis-gateway
  tcp:
  - route:
    - destination:
        host: redis-svc.basic.svc.cluster.local
        port:
          number: 16379
Update:
This is how I make the request:
[root]# redis-cli -h redis.basic.svc.cluster.local -p 80
redis.basic.svc.cluster.local:80> get Test
Error: Protocol error, got "H" as reply type byte
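For context: redis-cli speaks the RESP protocol, so a reply starting with "H" usually means the peer answered with an HTTP response (e.g. "HTTP/1.1 ..."), i.e. the request hit an HTTP listener instead of the intended TCP route. A quick diagnostic sketch, assuming the same host and port as above:

# If this returns an HTTP response, port 80 on the gateway is serving HTTP,
# not the TCP route configured for redis.
curl -v http://redis.basic.svc.cluster.local:80/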

There are a few things that need to be done differently when exposing a TCP application with Istio.
The hosts: needs to be "*", as the TCP protocol works only with IP:PORT; there are no headers at L4 (the transport layer).
Your VirtualService needs a TCP port match that matches the Gateway port. I suggest naming it in a unique way and matching the Deployment's port name.
I suggest avoiding port 80, as it is already used in the default ingress configuration and could result in a port conflict, so I changed it to 11337.
So your Gateway should look something like this:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: redis-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 11337
      name: redis-port
      protocol: TCP
    hosts:
    - "*"
And your VirtualService like this:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: redis-vs
spec:
  hosts:
  - "*"
  gateways:
  - redis-gateway
  tcp:
  - match:
    - port: 11337
    route:
    - destination:
        host: redis-svc
        port:
          number: 16379
Note that I removed namespaces for clarity.
Then add the custom port to the default ingress gateway using the following command:
kubectl edit svc istio-ingressgateway -n istio-system
And add the following next to the other port definitions:
- name: redis-port
  nodePort: 31402
  port: 11337
  protocol: TCP
  targetPort: 16379
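If you prefer a non-interactive change, the same port can be appended with kubectl patch instead of an editor session (a sketch; the values mirror the snippet above):

# Append the redis port to the ingress gateway Service in one shot.
kubectl -n istio-system patch svc istio-ingressgateway --type=json \
  -p='[{"op":"add","path":"/spec/ports/-","value":{"name":"redis-port","nodePort":31402,"port":11337,"protocol":"TCP","targetPort":16379}}]'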
To access the exposed application, use the istio gateway's external IP and the port we just set up.
To get your gateway's external IP you can use:
export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
redis-cli -h $INGRESS_HOST -p 11337
If your istio-ingressgateway does not have an external IP assigned, use one of your nodes' IP addresses and port 31402.
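The assigned nodePort can also be read back with jsonpath rather than hard-coding it (a sketch, assuming the port name redis-port used above; <node-ip> is a placeholder):

export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="redis-port")].nodePort}')
redis-cli -h <node-ip> -p $INGRESS_PORT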
Hope this helps.

Thanks for suren's answer.
But I think redis.basic.svc.cluster.local is the external DNS host to be matched by the VirtualService, and VirtualService.host routes to the service redis-svc with its full namespace path.
Maybe that's not the reason.

Related

Istio using IPv6 instead of IPv4

I am using Kubernetes with Minikube on a Windows 10 Home machine to "host" a gRPC service. I am working on getting Istio working in the cluster and have been running into the same issue over and over and I cannot figure out why. The problem is that once everything is up and running, the Istio gateway uses IPv6, seemingly for no reason at all. IPv6 is even disabled on my machine (via regedit) and network adapters. My other services are accessible from IPv4. Below are my steps for installing my environment:
minikube start
kubectl create namespace abc
kubectl apply -f service.yml -n abc
kubectl apply -f gateway.yml
istioctl install --set profile=default -y
kubectl label namespace abc istio-injection=enabled
Nothing is accessible over the network at this point, until I run the following in its own terminal:
minikube tunnel
Now I can access the gRPC service directly using IPv4: 127.0.0.1:5000. However, the gateway is inaccessible at 127.0.0.1:443 and is instead only reachable at [::1]:443.
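One thing worth checking (a diagnostic sketch) is whether the ingress gateway Service itself was created single-stack IPv4, which would rule the Service spec out as the source of the IPv6 address:

kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.spec.ipFamilies}'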
Here is the service.yml:
apiVersion: v1
kind: Service
metadata:
  name: account-grpc
spec:
  ports:
  - name: grpc
    port: 5000
    protocol: TCP
    targetPort: 5000
  selector:
    service: account
    ipc: grpc
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    service: account
    ipc: grpc
  name: account-grpc
spec:
  replicas: 1
  selector:
    matchLabels:
      service: account
      ipc: grpc
  template:
    metadata:
      labels:
        service: account
        ipc: grpc
    spec:
      containers:
      - image: account-grpc
        name: account-grpc
        imagePullPolicy: Never
        ports:
        - containerPort: 5000
Here is the gateway.yml:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: grpc
      protocol: GRPC
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: virtual-service
spec:
  hosts:
  - "*"
  gateways:
  - gateway
  http:
  - match:
    - uri:
        prefix: /account
    route:
    - destination:
        host: account-grpc
        port:
          number: 5000
And here are the results of kubectl get service istio-ingressgateway -n istio-system -o yaml:
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: ...
  creationTimestamp: "2021-08-27T01:21:21Z"
  labels:
    app: istio-ingressgateway
    install.operator.istio.io/owning-resource: unknown
    install.operator.istio.io/owning-resource-namespace: istio-system
    istio: ingressgateway
    istio.io/rev: default
    operator.istio.io/component: IngressGateways
    operator.istio.io/managed: Reconcile
    operator.istio.io/version: 1.11.1
    release: istio
  name: istio-ingressgateway
  namespace: istio-system
  resourceVersion: "4379"
  uid: b4db0e2f-0f45-4814-b187-287acb28d0c6
spec:
  clusterIP: 10.97.4.216
  clusterIPs:
  - 10.97.4.216
  externalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: status-port
    nodePort: 32329
    port: 15021
    protocol: TCP
    targetPort: 15021
  - name: http2
    nodePort: 31913
    port: 80
    protocol: TCP
    targetPort: 8080
  - name: https
    nodePort: 32382
    port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    app: istio-ingressgateway
    istio: ingressgateway
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 127.0.0.1
Changing the port number to port 80 resolved my issue. The problem was that my gRPC service was not using HTTPS. I will report back if I have trouble once I change the service to use HTTPS.
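For reference, the working server block presumably ended up like this (a sketch based on the fix described; only the port changed from 443 to 80, since the service was speaking plaintext gRPC):

servers:
- port:
    number: 80  # plaintext gRPC; the backing service was not serving HTTPS
    name: grpc
    protocol: GRPC
  hosts:
  - "*"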

cannot hit pod in kubernetes cluster from other pod but can from ingress

I'm able to hit a pod from outside my k8s cluster using an ingress but cannot from within the cluster and am getting a "connection refused" error. I tried to shell into the pod that's refusing connections and run the following curls which work just fine when running in my local/host environment:
curl localhost:4000/api/v1/users
curl 127.0.0.1:4000/api/v1/users
curl 0.0.0.0:4000/api/v1/users
curl :4000/api/v1/users
to no avail. The cluster IP is 10.99.224.173, but that times out, and I'd prefer not to bypass DNS anyway since this is dynamically assigned by k8s. The service is a Node.js based one. I can add more information, but figured I'd err on the side of too little information rather than too much. To isolate the issue as a k8s problem, I've run the two services locally outside of k8s with no issues. I think a good starting point would be to identify why I can't curl the server from within the same pod. Thanks!
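One way to check which address the process is actually bound to inside the pod (a diagnostic sketch; the pod name is a placeholder):

# If this shows 127.0.0.1:4000 rather than 0.0.0.0:4000, the server only
# accepts loopback connections and will refuse traffic from other pods.
# Use ss -tlnp instead if netstat is not present in the image.
kubectl exec -it <auth-pod-name> -- netstat -tlnp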
EDIT 2: closing the cluster from skaffold and re-running skaffold dev resolved this issue and I'm now able to run the following just fine:
curl localhost:4000/api/v1/users
curl 127.0.0.1:4000/api/v1/users
curl 0.0.0.0:4000/api/v1/users
curl :4000/api/v1/users
I found that the tchannel-node library does not accept 0.0.0.0 as a valid ip address to listen to, and the closest I can pass is 127.0.0.1. Unfortunately, this means that calling to the cluster ip 10.99.224.173:9090 will never be registered by the server as 127.0.0.1:9090 the way 0.0.0.0:9090 will. I'm wondering how I can fix my understanding to pass the correct ip address.
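Since tchannel-node won't accept 0.0.0.0, one possible workaround is to bind to the pod's own IP, which Kubernetes can inject through the Downward API (a sketch; TCHANNEL_HOST is a hypothetical variable the server would have to read as its bind address):

env:
  - name: TCHANNEL_HOST  # hypothetical: the server must read this as its listen address
    valueFrom:
      fieldRef:
        fieldPath: status.podIP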
EDIT (requested yaml files):
client
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tickets-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tickets
  template:
    metadata:
      labels:
        app: tickets
    spec:
      containers:
      - name: tickets
        image: mine/tickets-go
---
apiVersion: v1
kind: Service
metadata:
  name: tickets-svc
spec:
  selector:
    app: tickets
  ports:
  - name: tickets
    protocol: TCP
    port: 4004
    targetPort: 4004
server that refuses connections
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
      - name: auth
        image: mine/auth
        env:
        - name: PORT
          value: "4000"
        - name: TCHANNEL_PORT
          value: "9090"
---
apiVersion: v1
kind: Service
metadata:
  name: auth-svc
spec:
  selector:
    app: auth
  ports:
  - name: auth
    protocol: TCP
    port: 4000
    targetPort: 4000
  - name: auth-thrift
    protocol: TCP
    port: 9090
    targetPort: 9090
ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-svc
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
  - host: foo.com
    http:
      paths:
      - path: /api/v1/users/?(.*)
        backend:
          service:
            name: auth-svc
            port:
              number: 4000
        pathType: Prefix
      - path: /api/v1/tickets/?(.*)
        backend:
          service:
            name: tickets-svc
            port:
              number: 4004
        pathType: Prefix

Can't access kubernetes cluster with Istio gateway

I have a k8s cluster with Istio ingress.
I deployed a deployment, service, gateway and a virtual service but I still can't access my service from outside the cluster.
I'm able to access my service by hitting the workers on the specified nodePort, but I'd expect the Istio gateway to also listen on port 80 on my master, and it doesn't look like it does.
What am I doing wrong here?
service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: microservices-service
spec:
  type: NodePort
  selector:
    app: microservices-deployment
  ports:
  - port: 5001
    targetPort: 5001
    nodePort: 30007
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: microservices-deployment
  labels:
    app: microservices-deployment
spec:
  replicas: 3
  template:
    metadata:
      name: microservices-deployment
      labels:
        app: microservices-deployment
    spec:
      containers:
      - name: microservices-deployment
        image: *** private docker registry ***
        imagePullPolicy: Always
        ports:
        - containerPort: 5001
      restartPolicy: Always
      imagePullSecrets:
      - name: regcred
  selector:
    matchLabels:
      app: microservices-deployment
ingress.yaml:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: microservices-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: microservices
spec:
  hosts:
  - "*"
  gateways:
  - microservices-gateway
  http:
  - match:
    route:
    - destination:
        host: *** master hostname ***
        port:
          number: 5001
Thanks a lot!
I checked your configuration and everything looks set up correctly. There is only one little mistake to fix, which is in your virtual service.
Change it from
http:
- match:
  route:
  - destination:
      host: *** master hostname ***
      port:
        number: 5001
to
http:
- route:
  - destination:
      host: microservices-service
      port:
        number: 5001
And you should be able to access it with your Istio gateway's external IP (LoadBalancer) or NodePort.
More about it here.
kubectl get svc -n istio-system | grep istio-ingress
Quick example with nginx, note that I'm using LoadBalancer instead of NodePort.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx1
spec:
  selector:
    matchLabels:
      run: nginx1
  replicas: 1
  template:
    metadata:
      labels:
        run: nginx1
        app: frontend
    spec:
      containers:
      - name: nginx1
        image: nginx
        ports:
        - containerPort: 80
        lifecycle:
          postStart:
            exec:
              command: ["/bin/sh", "-c", "echo Hello nginx1 > /usr/share/nginx/html/index.html"]
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: frontend
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: frontend
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: nginx-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: nginx-virtual
spec:
  gateways:
  - nginx-gateway
  hosts:
  - "*"
  http:
  - route:
    - destination:
        host: nginx.default.svc.cluster.local
        port:
          number: 80
kubectl get svc -n istio-system | grep ingress
istio-ingressgateway LoadBalancer xx.x.xx.xxx xx.xx.xx.xx 15021:30880/TCP,80:31983/TCP,443:31510/TCP,15443:32267/TCP 2d2h
Test with curl
curl -v xx.xx.xx.xx/
GET / HTTP/1.1
HTTP/1.1 200 OK
Hello nginx1

Why Secure GRPC calls do not reach ingress gateway?

I have installed istio 1.22.2 inside kubernetes (1.12.x) with SDS enabled. I have been following this and I am able to do SSL termination at the ingress gateway for normal services (on HTTP/1.1), and I could see it in the access logs of the gateway.
gateway
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: mygateway
spec:
  selector:
    istio: ingressgateway # use istio default ingress gateway
  servers:
  - port:
      number: 31400
      name: tcp
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: "review-this-co" # must be the same as secret
    hosts:
    - "xyz.example.com"
However, when gRPC is used over a secure channel, I could not see any access logs (the gRPC client fails). I was expecting similar behavior for gRPC as well (i.e. SSL termination at the ingress gateway).
NOTE: the same gRPC client works (the call reaches the ingress gateway and is visible in the access logs) with plaintext if the gateway is configured like the following:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: mygateway
spec:
  selector:
    istio: ingressgateway # use istio default ingress gateway
  servers:
  - port:
      number: 31400
      name: tcp
      protocol: GRPC
    hosts:
    - "xyz.example.com"
A network load balancer is used (pass-through).
If I understand you correctly, the thing here is that:
gRPC works over an HTTP/2 transport.
The current ingress is not capable of HTTP/2.
So are you sure your client is using HTTP/1? Because otherwise it might not work.
Please let me know if that helped.
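As a side check, one way to see whether TLS termination itself is the problem is to probe the gateway with grpcurl (a diagnostic sketch; host and port come from the question's gateway, and the list call assumes the server exposes gRPC reflection):

# A TLS handshake error here points at the gateway's TLS config rather than the route.
grpcurl -servername xyz.example.com xyz.example.com:31400 list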
Try out the gRPC greeter with Istio; it works for me.
# greeter.yaml
apiVersion: v1
kind: Service
metadata:
  name: greeter
  labels:
    app: greeter
spec:
  ports:
  - name: grpc
    port: 50051
  selector:
    app: greeter
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: greeter
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: greeter
        version: v1
    spec:
      containers:
      - image: tobegit3hub/grpc-helloworld
        imagePullPolicy: IfNotPresent
        name: greeter
        ports:
        - containerPort: 50051
# gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: greeter-gateway
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - 'xyz.example.com'
# virtualservice.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: greeter
spec:
  hosts:
  - 'xyz.example.com'
  gateways:
  - greeter-gateway
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: greeter
        port:
          number: 50051
# grpc greeter client
docker run -it tobegit3hub/grpc-helloworld /greeter_client.py xyz.example.com:80

How to configure ingress gateway in istio?

I'm new to istio, and I want to access my app through istio ingress gateway, but I do not know why it does not work.
This is my kubenetes_deploy.yaml file content:
apiVersion: v1
kind: Service
metadata:
  name: batman
  labels:
    run: batman
spec:
  #type: NodePort
  ports:
  - port: 8000
    #nodePort: 32000
    targetPort: 7000
    #protocol: TCP
    name: batman
  selector:
    run: batman
    #version: v1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: batman-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      run: batman
  template:
    metadata:
      labels:
        run: batman
        version: v1
    spec:
      containers:
      - name: batman
        image: leowu/batman:v1
        ports:
        - containerPort: 7000
        env:
        - name: MONGODB_URL
          value: mongodb://localhost:27017/articles_demo_dev
      - name: mongo
        image: mongo
And here is my istio ingress_gateway.yaml config file:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: batman-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 15000
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: batman
spec:
  hosts:
  - "*"
  gateways:
  - batman-gateway
  http:
  - match:
    route:
    - destination:
        host: batman
        port:
          number: 7000
I created the ingress gateway from the example and it looks fine, but when I run kubectl get svc istio-ingressgateway -n istio-system I can't see the listening port 15000 in the output. I do not know why.
Is there anyone who can help me? Thanks.
First of all, as @Abhyudit Jain mentioned, you need to correct the port in the VirtualService to 8000.
And then you just add another port to your istio-ingressgateway service
kubectl edit svc istio-ingressgateway -n istio-system
add section:
ports:
- name: http
  nodePort: 30001
  port: 15000
  protocol: TCP
  targetPort: 80
This will accept HTTP traffic on port 15000 and route it to your destination service on port 8000.
simple schema as follows:
incoming traffic --> istio-gateway service --> istio-gateway --> virtual service --> service --> pod
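Once the port is added, a quick end-to-end check (a sketch; replace the placeholder with your ingress gateway's external IP or a node IP):

# Expect a response from the batman service if the whole chain works.
curl -v http://<ingress-gateway-ip>:15000/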
Your batman service listens on port 8000 and forwards traffic to the container's port 7000.
The istio traffic works like this:
ingress-gateway -> virtual-service -> destination-rule [optional] -> service
So your virtual service should be like:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: batman
spec:
  hosts:
  - "*"
  gateways:
  - batman-gateway
  http:
  - match:
    route:
    - destination:
        host: batman
        port:
          number: 8000 # <--- change