How to open custom port in Kubernetes

I deployed RabbitMQ on a cluster, and so far it is running well on port 15672: http://test.website.com/
But I need to open some other ports (25672, 15672, 15674). I have defined them in YAML like this:
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
spec:
  selector:
    app: rabbitmq
  ports:
  - port: 80
    name: http
    targetPort: 15672
    protocol: TCP
  - port: 443
    name: https
    targetPort: 15672
    protocol: TCP
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: rabbitmq
spec:
  selector:
    matchLabels:
      app: rabbitmq
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: rabbitmq
    spec:
      containers:
      - name: rabbitmq
        image: rabbitmq:latest
        ports:
        - containerPort: 15672
          name: http
          protocol: TCP
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: rabbitmq
spec:
  hosts:
  - "test.website.com"
  gateways:
  - gateway
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        port:
          number: 80
        host: rabbitmq
How do I set this up in the YAML file to open the other ports?

Assuming that the Istio Gateway is serving TCP network connections, you can combine the two external ports in a single Gateway configuration.
Here is an example:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: port1
      protocol: TCP
    hosts:
    - example.myhost.com
  - port:
      number: 443
      name: port2
      protocol: TCP
    hosts:
    - example.myhost.com
The hosts field here identifies the list of target addresses that are exposed by this Gateway.
To route the network traffic to the underlying Pods, specify a VirtualService with a matching rule for each port:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: rabbitmq-virtual-service
spec:
  hosts:
  - example.myhost.com
  gateways:
  - gateway
  tcp:
  - match:
    - port: 80
    route:
    - destination:
        host: app.example.svc.cluster.local
        port:
          number: 15672
  - match:
    - port: 443
    route:
    - destination:
        host: app.example.svc.cluster.local
        port:
          number: 15674
The above VirtualService defines the rules that route network traffic arriving on ports 80 and 443 for example.myhost.com to the backing service's ports 15672 and 15674, respectively.
You can adjust these files to your needs to open other ports.
Take a look: virtualservice-for-a-service-which-exposes-multiple-ports.
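As a sanity check, note that a Gateway only takes effect on ports that the ingress gateway Service actually forwards, so it can help to list those first. A quick sketch, assuming the default installation in the istio-system namespace:
kubectl -n istio-system get svc istio-ingressgateway \
  -o jsonpath='{range .spec.ports[*]}{.name}{"\t"}{.port}{" -> "}{.targetPort}{"\n"}{end}'
If 80 and 443 do not appear in this output, add them to the istio-ingressgateway Service before testing the Gateway.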

Related

How to use Istio Ingress to forward STOMP protocol of RabbitMQ in Kubernetes?

I tried with this Gateway and VirtualService, but it didn't work.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: stomp
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: stomp
      protocol: TCP
    hosts:
    - rmq-stomp.mycompany.com
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: rmq-stomp
spec:
  hosts:
  - rmq-stomp.mycompany.com
  gateways:
  - stomp
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        port:
          number: 61613
        host: rabbitmq.default.svc.cluster.local
There's no problem with the service itself: when I tried to connect from another pod, it connected.
Use a tcp match, not an http match. Here is the example I found in the Istio gateway docs and in the Istio virtualservice docs:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo-mongo
  namespace: bookinfo-namespace
spec:
  hosts:
  - mongosvr.prod.svc.cluster.local # name of internal Mongo service
  gateways:
  - some-config-namespace/my-gateway # can omit the namespace if gateway is in same namespace as virtual service.
  tcp:
  - match:
    - port: 27017
    route:
    - destination:
        host: mongo.prod.svc.cluster.local
        port:
          number: 5555
So yours would look something like this:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: rmq-stomp
spec:
  hosts:
  - rmq-stomp.mycompany.com
  gateways:
  - stomp
  tcp:
  - match:
    - port: 80
    route:
    - destination:
        host: rabbitmq.default.svc.cluster.local
        port:
          number: 61613
Here is a similar question answered: how-to-configure-istios-virtualservice-for-a-service-which-exposes-multiple-por
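Once the VirtualService uses a tcp match, you can sanity-check the route from outside the cluster with a raw TCP probe before pointing a STOMP client at it. A sketch, assuming the ingress gateway has an external IP:
export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
nc -vz $INGRESS_HOST 80   # should report the connection as open if the TCP route is wired up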

How to expose redis to outside with istio sidecar?

I'm using Redis with k8s 1.15.0 and istio 1.4.3; it works well inside the network.
However, when I tried to use the Istio gateway and sidecar to expose it to the outside network, it failed.
Then I removed the Istio sidecar and just started the Redis server in k8s, and it worked.
After searching, I added a DestinationRule to the config, but it didn't help.
So what's the problem? Thanks for any tips!
Here is my redis.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: docker.io/redis:5.0.5-alpine
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 16379
          protocol: TCP
          name: redis-port
        volumeMounts:
        - name: redis-data
          mountPath: /data
        - name: redis-conf
          mountPath: /etc/redis
        command:
        - "redis-server"
        args:
        - "/etc/redis/redis.conf"
        - "--protected-mode"
        - "no"
      volumes:
      - name: redis-conf
        configMap:
          name: redis-conf
          items:
          - key: redis.conf
            path: redis.conf
      - name: redis-data
        nfs:
          path: /data/redis
          server: 172.16.8.34
---
apiVersion: v1
kind: Service
metadata:
  name: redis-svc
  labels:
    app: redis-svc
spec:
  type: ClusterIP
  ports:
  - name: redis-port
    port: 16379
    protocol: TCP
  selector:
    app: redis
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: redis-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: tcp
      protocol: TCP
    hosts:
    - "redis.basic.svc.cluster.local"
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: redis-svc
spec:
  host: redis-svc
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: redis-vs
spec:
  hosts:
  - "redis.basic.svc.cluster.local"
  gateways:
  - redis-gateway
  tcp:
  - route:
    - destination:
        host: redis-svc.basic.svc.cluster.local
        port:
          number: 16379
Update:
This is how I make the request:
[root]# redis-cli -h redis.basic.svc.cluster.local -p 80
redis.basic.svc.cluster.local:80> get Test
Error: Protocol error, got "H" as reply type byte
There are a few things that need to be different when exposing a TCP application with Istio. (In the protocol error above, the "H" is most likely the first byte of an HTTP response, which suggests the request hit an HTTP listener instead of a raw TCP route.)
The hosts: field needs to be "*", since plain TCP routing works only on IP:PORT; there are no host headers at L4.
Your VirtualService needs a TCP port match that matches the Gateway port. I suggest naming the port uniquely and matching the Deployment's port name.
I also suggest avoiding port 80, as it is already used in the default ingress configuration and could result in a port conflict, so I changed it to 11337.
So your Gateway should look something like this:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: redis-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 11337
      name: redis-port
      protocol: TCP
    hosts:
    - "*"
And your VirtualService like this:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: redis-vs
spec:
  hosts:
  - "*"
  gateways:
  - redis-gateway
  tcp:
  - match:
    - port: 11337
    route:
    - destination:
        host: redis-svc
        port:
          number: 16379
Note that I removed namespaces for clarity.
Then add the custom port to the default ingress gateway using the following command:
kubectl edit svc istio-ingressgateway -n istio-system
And add the following next to the other port definitions:
- name: redis-port
  nodePort: 31402
  port: 11337
  protocol: TCP
  targetPort: 16379
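If you prefer a one-liner to kubectl edit, the same port entry can be appended with a JSON patch (a sketch; the values simply mirror the snippet above):
kubectl -n istio-system patch svc istio-ingressgateway --type json \
  -p '[{"op": "add", "path": "/spec/ports/-", "value": {"name": "redis-port", "nodePort": 31402, "port": 11337, "protocol": "TCP", "targetPort": 16379}}]'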
To access the exposed application, use the Istio gateway's external IP and the port that we just set up.
To get your gateway's external IP you can use:
export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
redis-cli -h $INGRESS_HOST -p 11337
If your istio-ingressgateway does not have an external IP assigned, use one of your nodes' IP addresses and port 31402.
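For that NodePort fallback, a minimal sketch that grabs the first node's internal IP (adjust if you need an external address):
export NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
redis-cli -h $NODE_IP -p 31402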
Hope this helps.
Thanks for suren's answer.
But I think redis.basic.svc.cluster.local is the outside DNS host to be matched by the VirtualService, and VirtualService.host routes to the service redis-svc by its full namespace path.
Maybe that is not the reason.

Traefik 2.0 IPWhitelist for TCP - Kubernetes CRD

We are using Kubernetes along with Traefik 2.0.
We are using the Kubernetes CRD (IngressRoute) as the provider with Traefik.
From the Traefik documentation, it doesn't look like middlewares can be used with TCP routers.
We would like to use the IP whitelist middleware with a TCP router, but so far it has worked with HTTP routers only.
Here is our ipWhiteList definition:
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: testIPwhitelist
spec:
  ipWhiteList:
    sourceRange:
    - 127.0.0.1/32
    - 192.168.1.7
Here is the Traefik Service definition:
apiVersion: v1
kind: Service
metadata:
  name: traefik
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
  - protocol: TCP
    name: web
    port: 8000
  - protocol: TCP
    name: admin
    port: 8080
  - protocol: TCP
    name: websecure
    port: 4443
  - protocol: TCP
    name: mongodb
    port: 27017
  selector:
    app: traefik
IngressRoute definitions:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: simpleingressroute
  namespace: default
spec:
  entryPoints:
  - web
  routes:
  - match: PathPrefix(`/who`)
    kind: Rule
    services:
    - name: whoami
      port: 80
    middlewares:
    - name: testIPwhitelist
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: ingressroute.mongo
spec:
  entryPoints:
  - mongodb
  routes:
  # Match is the rule corresponding to an underlying router.
  - match: HostSNI(`*`)
    services:
    - name: mongodb
      port: 27017
    middlewares:
    - name: testIPwhitelist
Is there any way of restricting IPs with a Traefik TCP router?
For more resources on Traefik with the Kubernetes CRD, you can go here.
You are right: middlewares can't be used for TCP routers. IP whitelisting through the middleware concept is available only for HTTP routers.
You can follow the issue on GitHub requesting middlewares for TCP routers.
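Since the middleware can't help here, one workaround operates at the Kubernetes level rather than in Traefik itself: restrict source IPs on the LoadBalancer Service with loadBalancerSourceRanges, assuming your cloud provider supports it. A sketch based on the Service above:
apiVersion: v1
kind: Service
metadata:
  name: traefik
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  # Only these CIDRs may reach the Service; enforcement is done by the
  # cloud provider's load balancer / firewall, not by Traefik.
  loadBalancerSourceRanges:
  - 127.0.0.1/32
  - 192.168.1.7/32
  ports:
  - protocol: TCP
    name: mongodb
    port: 27017
  selector:
    app: traefik
Note that this filters traffic for every port on the Service, not per TCP router.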

How to configure ingress gateway in istio?

I'm new to Istio, and I want to access my app through the Istio ingress gateway, but I do not know why it does not work.
This is the content of my kubenetes_deploy.yaml file:
apiVersion: v1
kind: Service
metadata:
  name: batman
  labels:
    run: batman
spec:
  #type: NodePort
  ports:
  - port: 8000
    #nodePort: 32000
    targetPort: 7000
    #protocol: TCP
    name: batman
  selector:
    run: batman
    #version: v1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: batman-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      run: batman
  template:
    metadata:
      labels:
        run: batman
        version: v1
    spec:
      containers:
      - name: batman
        image: leowu/batman:v1
        ports:
        - containerPort: 7000
        env:
        - name: MONGODB_URL
          value: mongodb://localhost:27017/articles_demo_dev
      - name: mongo
        image: mongo
And here is my istio ingress_gateway.yaml config file:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: batman-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 15000
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: batman
spec:
  hosts:
  - "*"
  gateways:
  - batman-gateway
  http:
  - match:
    route:
    - destination:
        host: batman
        port:
          number: 7000
I created the ingress gateway from the example, and it looks fine, but when I run kubectl get svc istio-ingressgateway -n istio-system I can't see listening port 15000 in the output. I don't know why.
Can anyone help me? Thanks.
First of all, as @Abhyudit Jain mentioned, you need to correct the port in the VirtualService to 8000.
Then you just add another port to your istio-ingressgateway service:
kubectl edit svc istio-ingressgateway -n istio-system
and add this section:
ports:
- name: http
  nodePort: 30001
  port: 15000
  protocol: TCP
  targetPort: 80
This will accept HTTP traffic on port 15000 and route it to your destination service on port 8000.
The simple schema is as follows:
incoming traffic --> istio-gateway service --> istio-gateway --> virtual service --> service --> pod
Your batman service listens on port 8000 and forwards traffic to the container's port 7000.
Istio traffic works like this:
ingress-gateway -> virtual-service -> destination-rule [optional] -> service
So your virtual service should look like this:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: batman
spec:
  hosts:
  - "*"
  gateways:
  - batman-gateway
  http:
  - match:
    route:
    - destination:
        host: batman
        port:
          number: 8000  # <--- changed from 7000
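Once the port from the first answer has been added to the istio-ingressgateway Service, a quick end-to-end test might look like this (a sketch; assumes the gateway has an external IP):
export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -v http://$INGRESS_HOST:15000/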

How to expose multiple ports using a load balancer service in Kubernetes

I have created a cluster using the Google Cloud Platform (Container Engine) and deployed a pod using the following YAML file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: deployment-name
spec:
  replicas: 1
  template:
    metadata:
      name: pod-name
      labels:
        app: app-label
    spec:
      containers:
      - name: container-name
        image: gcr.io/project-id/image-name
        resources:
          requests:
            cpu: 1
        ports:
        - name: port80
          containerPort: 80
        - name: port443
          containerPort: 443
        - name: port6001
          containerPort: 6001
Then I want to create a service that enables the pod to listen on all these ports. I know that the following YAML file works to create a service that listens on one port:
apiVersion: v1
kind: Service
metadata:
  name: service-name
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: app-label
  type: LoadBalancer
However, when I want the pod to listen on multiple ports like this, it doesn't work:
apiVersion: v1
kind: Service
metadata:
  name: service-name
spec:
  ports:
  - port: 80
    targetPort: 80
  - port: 443
    targetPort: 443
  - port: 6001
    targetPort: 6001
  selector:
    app: app-label
  type: LoadBalancer
How can I make my pod listen to multiple ports?
You have two options:
You could have multiple services, one for each port. As you pointed out, each service will end up with a different IP address.
You could have a single service with multiple ports. In this particular case, you must give all ports a name.
In your case, the service becomes:
apiVersion: v1
kind: Service
metadata:
  name: service-name
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
  - name: something
    port: 6001
    targetPort: 6001
  selector:
    app: app-label
  type: LoadBalancer
This is necessary so that endpoints can be disambiguated.
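To verify that all three ports were wired up and resolve to the pod, a quick check (the service name matches the example above):
kubectl get service service-name -o jsonpath='{range .spec.ports[*]}{.name}{": "}{.port}{" -> "}{.targetPort}{"\n"}{end}'
kubectl get endpoints service-name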