Traefik 2.0 IPWhitelist for TCP - Kubernetes CRD

We are using Kubernetes along with Traefik 2.0, with the Kubernetes CRD (IngressRoute) as the Traefik provider.
From the Traefik documentation, it doesn't look like middlewares can be used with TCP routers.
We would like to use the IP whitelist middleware with a TCP router, but so far it has only worked with an HTTP router.
Here is our ipWhitelist definition:
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: test-ip-whitelist # Kubernetes object names must be lowercase
spec:
  ipWhiteList:
    sourceRange:
      - 127.0.0.1/32
      - 192.168.1.7
Here is Traefik Service Definition:
apiVersion: v1
kind: Service
metadata:
  name: traefik
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
    - protocol: TCP
      name: web
      port: 8000
    - protocol: TCP
      name: admin
      port: 8080
    - protocol: TCP
      name: websecure
      port: 4443
    - protocol: TCP
      name: mongodb
      port: 27017
  selector:
    app: traefik
IngressRoute definitions:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: simpleingressroute
  namespace: default
spec:
  entryPoints:
    - web
  routes:
    - match: PathPrefix(`/who`)
      kind: Rule
      services:
        - name: whoami
          port: 80
      middlewares:
        - name: test-ip-whitelist
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: ingressroute.mongo
spec:
  entryPoints:
    - mongodb
  routes:
    # Match is the rule corresponding to an underlying router.
    - match: HostSNI(`*`)
      services:
        - name: mongodb
          port: 27017
      middlewares:
        - name: test-ip-whitelist
Is there any way of restricting IPs with a Traefik TCP router?
For more resources on Traefik with the Kubernetes CRD provider, you can go here.

You are right: middlewares can't be used with TCP routers. IP whitelisting through the middleware concept is available for HTTP routers only.
You can follow the issue on GitHub requesting middleware support for TCP routers.
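As a side note (my suggestion, not part of the original answer), TCP-level IP filtering can sometimes be pushed down a layer to the Service that exposes Traefik. If your load balancer implementation honors it, loadBalancerSourceRanges restricts which client CIDRs may reach the Service at all, for example:
apiVersion: v1
kind: Service
metadata:
  name: traefik
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  # Only these CIDRs may connect; enforcement depends on the cloud
  # provider's load balancer implementation.
  loadBalancerSourceRanges:
    - 127.0.0.1/32
    - 192.168.1.7/32
  ports:
    - protocol: TCP
      name: mongodb
      port: 27017
  selector:
    app: traefik
Note that this applies to the whole Service rather than to an individual TCP router, and not every provider enforces it.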

Related

Connect local Docker Kubernetes to localhost's app

I have macOS and local Docker Desktop with Kubernetes enabled.
I want to have a service in local Kubernetes connected to my local Java app running on port 8087.
Here is what I have so far:
Service
apiVersion: v1
kind: Service
metadata:
  name: auth
spec:
  selector:
    app: app-auth
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8087
---
kind: Endpoints
apiVersion: v1
metadata:
  name: auth
subsets:
  - addresses:
      - ip: <127.0.0.1 outside of cluster>
    ports:
      - port: 8087
Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: router
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: localhost
      http:
        paths:
          - path: /api/user
            pathType: Prefix
            backend:
              service:
                name: auth
                port:
                  number: 80
I already checked these, but without success, since I am using neither Minikube nor VirtualBox:
access-mysql-running-on-localhost-from-minikube
how-to-access-hosts-localhost-from-inside-kubernetes-cluster
The question: what IP should I use for my to-os-localhost-service?
Thank you
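A sketch of one commonly suggested approach on Docker Desktop (an assumption on my part, not confirmed in this thread): instead of maintaining Endpoints by hand, point an ExternalName Service at host.docker.internal, which Docker Desktop resolves to the host machine. Note also that manually managed Endpoints only work for a Service without a selector; with selector app: app-auth set, the endpoints controller will overwrite them.
apiVersion: v1
kind: Service
metadata:
  name: auth
spec:
  # No selector: this Service is only a DNS alias (CNAME) for the host.
  type: ExternalName
  externalName: host.docker.internal
In-cluster clients would then reach the app at auth:8087; an ExternalName Service does no port mapping, so the app's real port must be used.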

Istio using IPv6 instead of IPv4

I am using Kubernetes with Minikube on a Windows 10 Home machine to "host" a gRPC service. I am working on getting Istio working in the cluster and have been running into the same issue over and over and I cannot figure out why. The problem is that once everything is up and running, the Istio gateway uses IPv6, seemingly for no reason at all. IPv6 is even disabled on my machine (via regedit) and network adapters. My other services are accessible from IPv4. Below are my steps for installing my environment:
minikube start
kubectl create namespace abc
kubectl apply -f service.yml -n abc
kubectl apply -f gateway.yml
istioctl install --set profile=default -y
kubectl label namespace abc istio-injection=enabled
Nothing is accessible over the network at this point, until I run the following in its own terminal:
minikube tunnel
Now I can access the gRPC service directly using IPv4: 127.0.0.1:5000. However, the gateway is unreachable at 127.0.0.1:443 and is instead only accessible at [::1]:443.
Here is the service.yml:
apiVersion: v1
kind: Service
metadata:
  name: account-grpc
spec:
  ports:
    - name: grpc
      port: 5000
      protocol: TCP
      targetPort: 5000
  selector:
    service: account
    ipc: grpc
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    service: account
    ipc: grpc
  name: account-grpc
spec:
  replicas: 1
  selector:
    matchLabels:
      service: account
      ipc: grpc
  template:
    metadata:
      labels:
        service: account
        ipc: grpc
    spec:
      containers:
        - image: account-grpc
          name: account-grpc
          imagePullPolicy: Never
          ports:
            - containerPort: 5000
Here is the gateway.yml:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 443
        name: grpc
        protocol: GRPC
      hosts:
        - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: virtual-service
spec:
  hosts:
    - "*"
  gateways:
    - gateway
  http:
    - match:
        - uri:
            prefix: /account
      route:
        - destination:
            host: account-grpc
            port:
              number: 5000
And here are the results of kubectl get service istio-ingressgateway -n istio-system -o yaml:
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: ...
  creationTimestamp: "2021-08-27T01:21:21Z"
  labels:
    app: istio-ingressgateway
    install.operator.istio.io/owning-resource: unknown
    install.operator.istio.io/owning-resource-namespace: istio-system
    istio: ingressgateway
    istio.io/rev: default
    operator.istio.io/component: IngressGateways
    operator.istio.io/managed: Reconcile
    operator.istio.io/version: 1.11.1
    release: istio
  name: istio-ingressgateway
  namespace: istio-system
  resourceVersion: "4379"
  uid: b4db0e2f-0f45-4814-b187-287acb28d0c6
spec:
  clusterIP: 10.97.4.216
  clusterIPs:
    - 10.97.4.216
  externalTrafficPolicy: Cluster
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  ports:
    - name: status-port
      nodePort: 32329
      port: 15021
      protocol: TCP
      targetPort: 15021
    - name: http2
      nodePort: 31913
      port: 80
      protocol: TCP
      targetPort: 8080
    - name: https
      nodePort: 32382
      port: 443
      protocol: TCP
      targetPort: 8443
  selector:
    app: istio-ingressgateway
    istio: ingressgateway
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
      - ip: 127.0.0.1
Changing the gateway port number to port 80 resolved my issue. The problem was that my gRPC service was not using HTTPS. I will return if I have trouble once I change the service to use HTTPS.
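For reference, a minimal sketch of the adjusted gateway.yml under that fix (my reading of the resolution, assuming the plaintext gRPC listener simply moves to the gateway's existing port 80):
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        # Port 80 maps to the ingress gateway's http2 entry shown above;
        # gRPC rides on HTTP/2, so plaintext gRPC can share this listener.
        number: 80
        name: grpc
        protocol: GRPC
      hosts:
        - "*"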

How to open custom port in Kubernetes

I deployed RabbitMQ on the cluster, and so far it is running well on port 15672: http://test.website.com/
But some other ports need to be opened as well (25672, 15672, 15674). I have defined the following in YAML:
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
spec:
  selector:
    app: rabbitmq # selector must match the pod labels below
  ports:
    - port: 80
      name: http
      targetPort: 15672
      protocol: TCP
    - port: 443
      name: https
      targetPort: 15672
      protocol: TCP
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: rabbitmq
spec:
  selector:
    matchLabels:
      app: rabbitmq
  strategy:
    type: RollingUpdate
  template:
    metadata:
      name: rabbitmq
      labels:
        app: rabbitmq # must match spec.selector.matchLabels
    spec:
      containers:
        - name: rabbitmq
          image: rabbitmq:latest
          ports:
            - containerPort: 15672
              name: http
              protocol: TCP
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: rabbitmq
spec:
  hosts:
    - "test.website.com" # straight quotes; curly quotes break YAML parsing
  gateways:
    - gateway
  http:
    - match:
        - uri:
            prefix: /
      route:
        - destination:
            port:
              number: 80
            host: rabbitmq
How do I set this up in the YAML files to open the other ports?
Assuming that the Istio Gateway is serving TCP network connections, you might be able to combine the two external ports in one Gateway configuration.
Here is an example:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
    - port:
        number: 80
        name: port1
        protocol: TCP
      hosts:
        - example.myhost.com
    - port:
        number: 443
        name: port2
        protocol: TCP
      hosts:
        - example.myhost.com
The hosts field here identifies a list of target addresses that are exposed by this Gateway.
To route the network traffic to the backing Pods, specify a VirtualService with a matching set of ports:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: rabbitmq-virtual-service
spec:
  hosts:
    - example.myhost.com
  gateways:
    - gateway
  tcp:
    - match:
        - port: 80
      route:
        - destination:
            host: app.example.svc.cluster.local
            port:
              number: 15672
    - match:
        - port: 443
      route:
        - destination:
            host: app.example.svc.cluster.local
            port:
              number: 15674
The VirtualService above defines the rules that route network traffic arriving on ports 80 and 443 for example.myhost.com to the backing service's ports 15672 and 15674 respectively.
You can adjust these files to your needs (substituting test.website.com and the rabbitmq service) to open the other ports.
Take a look: virtualservice-for-a-service-which-exposes-multiple-ports.
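As a quick check (a generic command, not part of the original answer), you can list which ports the istio-ingressgateway Service actually exposes; any port referenced by a Gateway must also be present there:
kubectl -n istio-system get svc istio-ingressgateway \
  -o jsonpath='{range .spec.ports[*]}{.name}{"\t"}{.port}{"\n"}{end}'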

How to expose redis to outside with istio sidecar?

I'm using Redis with k8s 1.15.0 and Istio 1.4.3; it works well inside the network.
However, when I tried to use the Istio gateway and sidecar to expose it to the outside network, it failed.
Then I removed the Istio sidecar and just started the Redis server in k8s, and it worked.
After searching, I added a DestinationRule to the config, but it didn't help.
So what's the problem? Thanks for any tips!
Here is my redis.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: docker.io/redis:5.0.5-alpine
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 16379
              protocol: TCP
              name: redis-port
          volumeMounts:
            - name: redis-data
              mountPath: /data
            - name: redis-conf
              mountPath: /etc/redis
          command:
            - "redis-server"
          args:
            - "/etc/redis/redis.conf"
            - "--protected-mode"
            - "no"
      volumes:
        - name: redis-conf
          configMap:
            name: redis-conf
            items:
              - key: redis.conf
                path: redis.conf
        - name: redis-data
          nfs:
            path: /data/redis
            server: 172.16.8.34
---
apiVersion: v1
kind: Service
metadata:
  name: redis-svc
  labels:
    app: redis-svc
spec:
  type: ClusterIP
  ports:
    - name: redis-port
      port: 16379
      protocol: TCP
  selector:
    app: redis
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: redis-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: tcp
        protocol: TCP
      hosts:
        - "redis.basic.svc.cluster.local"
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: redis-svc
spec:
  host: redis-svc
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: redis-vs
spec:
  hosts:
    - "redis.basic.svc.cluster.local"
  gateways:
    - redis-gateway
  tcp:
    - route:
        - destination:
            host: redis-svc.basic.svc.cluster.local
            port:
              number: 16379
Update:
This is how I make the request:
[root]# redis-cli -h redis.basic.svc.cluster.local -p 80
redis.basic.svc.cluster.local:80> get Test
Error: Protocol error, got "H" as reply type byte
There are a few things that need to be different when exposing a TCP application with Istio.
The hosts: needs to be "*", as the TCP protocol works only with IP:PORT; there are no host headers at L4. (Incidentally, the "H" in the protocol error above is the start of an HTTP response, which shows the request was answered by an HTTP listener rather than by Redis.)
There needs to be a TCP port match in your VirtualService that matches the Gateway. I suggest naming it in a unique way and matching the Deployment port name.
I suggest avoiding port 80, as it is already used in the default ingress configuration and could result in a port conflict, so I changed it to 11337.
So your Gateway should look something like this:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: redis-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 11337
        name: redis-port
        protocol: TCP
      hosts:
        - "*"
And your VirtualService like this:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: redis-vs
spec:
  hosts:
    - "*"
  gateways:
    - redis-gateway
  tcp:
    - match:
        - port: 11337
      route:
        - destination:
            host: redis-svc
            port:
              number: 16379
Note that I removed namespaces for clarity.
Then add the custom port to the default ingress gateway using the following command:
kubectl edit svc istio-ingressgateway -n istio-system
And add the following next to the other port definitions:
- name: redis-port
  nodePort: 31402
  port: 11337
  protocol: TCP
  targetPort: 16379
To access the exposed application, use the Istio gateway's external IP and the port that we just set up.
To get your gateway's external IP you can use:
export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
redis-cli -h $INGRESS_HOST -p 11337
If your istio-ingressgateway does not have an external IP assigned, use one of your nodes' IP addresses and port 31402.
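For the NodePort fallback, a sketch of the equivalent lookup (assuming the first node's InternalIP is reachable from your client):
export INGRESS_HOST=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
redis-cli -h $INGRESS_HOST -p 31402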
Hope this helps.
Thanks for suren's answer.
But I think redis.basic.svc.cluster.local is the outside DNS host to be matched by the VirtualService, and the VirtualService's destination host routes to the redis-svc service with its full namespace path.
Maybe it failed for a different reason.

Why Secure GRPC calls do not reach ingress gateway?

I have installed Istio 1.22.2 inside Kubernetes (1.12.x) with SDS enabled. I have been following this and I am able to do SSL termination at the ingress gateway for normal services (on HTTP/1.1), and I can see it in the access logs of the gateway.
gateway
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: mygateway
spec:
  selector:
    istio: ingressgateway # use istio default ingress gateway
  servers:
    - port:
        number: 31400
        name: tcp
        protocol: HTTPS
      tls:
        mode: SIMPLE
        credentialName: "review-this-co" # must be the same as secret
      hosts:
        - "xyz.example.com"
However, when gRPC is used over a secure channel, I cannot see any access logs (the gRPC client fails). I was expecting similar behavior for gRPC as well (i.e. SSL termination at the ingress gateway).
NOTE: the same gRPC client works (the call reaches the ingress gateway and is visible in the access logs) with plaintext if the gateway is configured like the following:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: mygateway
spec:
  selector:
    istio: ingressgateway # use istio default ingress gateway
  servers:
    - port:
        number: 31400
        name: tcp
        protocol: GRPC
      hosts:
        - "xyz.example.com"
A network load balancer has been used (pass-through).
If I understand you correctly, the thing here is that:
gRPC currently works over an HTTP/2 transport.
The current ingress is not capable of HTTP/2.
So are you sure your client is using HTTP/1? Because otherwise it might not work.
Please let me know if that helped.
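For what it's worth, here is a minimal sketch (my assumption, not something stated in either answer) of a Gateway that keeps the GRPC protocol while terminating TLS, combining the two configurations from the question:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: mygateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 31400
        name: grpc
        # GRPC is carried over HTTP/2; the tls block asks the gateway to
        # terminate TLS and forward plaintext gRPC to the backend.
        protocol: GRPC
      tls:
        mode: SIMPLE
        credentialName: "review-this-co" # must match the TLS secret name
      hosts:
        - "xyz.example.com"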
Try out this gRPC greeter with Istio; it works for me.
# greeter.yaml
apiVersion: v1
kind: Service
metadata:
  name: greeter
  labels:
    app: greeter
spec:
  ports:
    - name: grpc
      port: 50051
  selector:
    app: greeter
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: greeter
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: greeter
        version: v1
    spec:
      containers:
        - image: tobegit3hub/grpc-helloworld
          imagePullPolicy: IfNotPresent
          name: greeter
          ports:
            - containerPort: 50051
# gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: greeter-gateway
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - 'xyz.example.com'
# virtualservice.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: greeter
spec:
  hosts:
    - 'xyz.example.com'
  gateways:
    - greeter-gateway
  http:
    - match:
        - uri:
            prefix: /
      route:
        - destination:
            host: greeter
            port:
              number: 50051
# grpc greeter client
docker run -it tobegit3hub/grpc-helloworld /greeter_client.py xyz.example.com:80