I am trying to do rate limiting with Ambassador, following this tutorial. I am using minikube and a local Docker image. I have tested that all the APIs respond correctly after deploying to Kubernetes; only the rate limiting isn't working.
Here is my deploy.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: nodejs-deployment
spec:
  ports:
  - name: http
    port: 80
    targetPort: 3000
  - name: https
    port: 443
    targetPort: 3000
  selector:
    app: nodejs-deployment
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodejs-deployment
spec:
  selector:
    matchLabels:
      app: nodejs-deployment
  replicas: 2
  template:
    metadata:
      labels:
        app: nodejs-deployment
    spec:
      containers:
      - name: nodongo
        image: soham/nodejs-starter
        imagePullPolicy: "Never"
        ports:
        - containerPort: 3000
Here is my rate-limit.yaml
---
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: nodejs-backend
spec:
  prefix: /delete/
  service: nodejs-deployment
  labels:
    ambassador:
    - request_label_group:
      - delete
---
apiVersion: getambassador.io/v2
kind: RateLimit
metadata:
  name: backend-rate-limit
spec:
  domain: ambassador
  limits:
  - pattern: [{generic_key: delete}]
    rate: 1
    unit: minute
    injectResponseHeaders:
    - name: "x-test-1"
      value: "my-rl-test"
When I execute the command curl -vLk 10.107.60.125/delete/, it returns:
* Trying 10.107.60.125:80...
* TCP_NODELAY set
* Connected to 10.107.60.125 (10.107.60.125) port 80 (#0)
> GET /delete/ HTTP/1.1
> Host: 10.107.60.125
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< X-Powered-By: Express
< Content-Type: text/html; charset=utf-8
< Content-Length: 11
< ETag: W/"b-CgqQ9sWpkiO3HKmStsUvuC/rZLU"
< Date: Tue, 03 Nov 2020 17:13:00 GMT
< Connection: keep-alive
<
* Connection #0 to host 10.107.60.125 left intact
Delete User
The response I am getting is 200, but I am expecting a 429 status code.
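To see the limit trip, it helps to send several requests back to back: with a limit of 1 per minute, everything after the first request in a given minute should come back as 429 once rate limiting is actually applied. A quick sketch using plain curl against the same address as above:
for i in 1 2 3; do
  curl -s -o /dev/null -w "%{http_code}\n" http://10.107.60.125/delete/
done
If every request still returns 200, it is worth double-checking that the requests really pass through Ambassador rather than hitting the Node.js service directly, and that the rate-limit service backing the ambassador domain is actually running.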
I have a StatefulSet as follows:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: jgroups-leader-poc
  labels:
    app: jgroups-leader-poc
spec:
  serviceName: jgroups-leader-poc
  replicas: 3
  selector:
    matchLabels:
      app: jgroups-leader-poc
  template:
    metadata:
      labels:
        app: jgroups-leader-poc
    spec:
      containers:
      - name: jgroups-leader-poc-container
        image: localhost:5001/jgroups-leader-ui:1.0
        imagePullPolicy: Always
        env:
        - name: jgroups.gossip_routers
          value: "localhost[12001]"
        - name: jgroups.tcp.ip
          value: "site_local,match-interface:eth0"
        - name: jgroups.tcp.ntfnport
          value: "7800"
        - name: JGROUPS_EXTERNAL_ADDR
          value: "match-interface:eth0"
        ports:
        - name: http
          containerPort: 8080
          protocol: TCP
        - containerPort: 7800
          name: k8sping-port
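Note that serviceName on a StatefulSet is expected to refer to a governing headless Service; a minimal sketch of what that Service could look like for the pods above (assuming one does not exist yet) is:
apiVersion: v1
kind: Service
metadata:
  name: jgroups-leader-poc
spec:
  clusterIP: None          # headless: gives each pod its own DNS record
  selector:
    app: jgroups-leader-poc
  ports:
  - name: k8sping-port
    port: 7800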
TcpGossip.xml is as follows:
<config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="urn:org:jgroups"
xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/jgroups.xsd">
<TCP external_addr="${JGROUPS_EXTERNAL_ADDR:match-interface:eth0}"
bind_addr="${jgroups.tcp.ip}" bind_port="${jgroups.tcp.ntfnport:0}"
sock_conn_timeout="300"
max_bundle_size="60000"
enable_diagnostics="false"
thread_naming_pattern="cl"
thread_pool.enabled="true"
thread_pool.min_threads="1"
thread_pool.max_threads="25"
thread_pool.keep_alive_time="5000" />
<TCPGOSSIP initial_hosts="${jgroups.gossip_routers:localhost[12001]}" reconnect_interval="3000"/>
<MERGE3 min_interval="10000" max_interval="30000"/>
<FD_SOCK/>
<FD_ALL timeout="60000" interval="15000" timeout_check_interval="5000"/>
<FD_HOST check_timeout="5000" interval="15000" timeout="60000"/>
<VERIFY_SUSPECT timeout="5000"/>
<pbcast.NAKACK2 use_mcast_xmit="false" use_mcast_xmit_req="false" xmit_interval="1000"/>
<UNICAST3 xmit_table_max_compaction_time="3000" />
<pbcast.STABLE desired_avg_gossip="50000" max_bytes="4M"/>
<pbcast.GMS print_local_addr="true" join_timeout="5000" view_bundling="true"/>
<MFC max_credits="2M"/>
<FRAG2 frag_size="30K"/>
<pbcast.STATE buffer_size="1048576" max_pool="10"/>
<pbcast.FLUSH timeout="0"/>
</config>
And I have started Gossip Router as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gossiprouter
  labels:
    run: gossiprouter
spec:
  replicas: 1
  selector:
    matchLabels:
      run: gossiprouter
  template:
    metadata:
      labels:
        run: gossiprouter
    spec:
      containers:
      - image: belaban/gossiprouter:latest
        name: gossiprouter
        ports:
        - containerPort: 8787
        - containerPort: 9000
        - containerPort: 12001
        env:
        - name: LogLevel
          value: "TRACE"
---
apiVersion: v1
kind: Service
metadata:
  name: gossiprouter
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
  - port: 8787
    targetPort: 8787
    name: debug
    protocol: TCP
  - port: 9000
    targetPort: 9000
    name: netcat
    protocol: TCP
  - port: 12001
    targetPort: 12001
    name: gossiprouter
    protocol: TCP
  selector:
    run: gossiprouter
when I do kubectl get pods
kubectl get svc shows
Source code for reference:
public class Chatter extends ReceiverAdapter {

    JChannel channel;

    @Value("${app.jgroups.config:jgroups-config.xml}")
    private String jGroupsConfig;

    @Value("${app.jgroups.cluster:chat-cluster}")
    private String clusterName;

    @PostConstruct
    public void init() {
        try {
            channel = new JChannel(jGroupsConfig);
            channel.setReceiver(this);
            channel.connect(clusterName);
            checkLeaderStatus();
            channel.getState(null, 10000);
        } catch (Exception ex) {
            log.error("registering the channel in JMX failed: {}", ex);
        }
    }

    public void close() {
        channel.close();
    }

    public void viewAccepted(View newView) {
        log.info("view: " + newView);
        checkLeaderStatus();
    }

    private void checkLeaderStatus() {
        Address address = channel.getView().getMembers().get(0);
        if (address.equals(channel.getAddress())) {
            log.info("Nodes are started, I'm the master!");
        } else {
            log.info("Nodes are started, I'm a slave!");
        }
    }
}
The issue here is that none of the pods connect to the running gossip router, even though localhost[12001] is given as jgroups.gossip_routers in the StatefulSet. As a result, each pod forms its own separate JGroups cluster instead of all pods joining a single cluster.
Gossip router service details as follows:
Sorry for the delay! Do you have a public image for jgroups-leader-ui:1.0?
I suspect that the members don't communicate with each other because they connect directly to each other, possibly some issue with external_addr.
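Another thing worth checking: localhost inside a pod refers to the pod itself, not to the GossipRouter pod, so jgroups.gossip_routers should probably point at the gossiprouter Service instead. A sketch of the env entry, assuming the Service lives in the default namespace:
env:
- name: jgroups.gossip_routers
  value: "gossiprouter.default.svc.cluster.local[12001]"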
Why are you using TCP:TCPGOSSIP instead of TUNNEL:PING?
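For reference, a TUNNEL-based stack would replace the TCP/TCPGOSSIP pair at the top of the XML with something along these lines (a sketch only, not the exact gossip.xml used below):
<TUNNEL gossip_router_hosts="${jgroups.gossip_routers:localhost[12001]}"/>
<PING/>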
OK, so I tried this out with a sample app/image (chat.sh) / belaban/jgroups.
Try the following steps:
kubectl apply -f gossiprouter.yaml (your existing YAML)
Find the eth0 address of the GossipRouter pod (in my case below: 172.17.0.3); a kubectl one-liner for this is shown after the YAML below
kubectl apply -f jgroups.yaml (the YAML is pasted below)
kubectl exec jgroups-0 probe.sh should list 3 members:
/Users/bela$ kubectl exec jgroups-0 probe.sh
#1 (180 bytes):
local_addr=jgroups-1-42392
physical_addr=172.17.0.5:7800
view=[jgroups-0-13785|2] (3) [jgroups-0-13785, jgroups-1-42392, jgroups-2-14656]
cluster=chat
version=4.2.4.Final (Julier)
#2 (180 bytes):
local_addr=jgroups-0-13785
physical_addr=172.17.0.4:7800
view=[jgroups-0-13785|2] (3) [jgroups-0-13785, jgroups-1-42392, jgroups-2-14656]
cluster=chat
version=4.2.4.Final (Julier)
#3 (180 bytes):
local_addr=jgroups-2-14656
physical_addr=172.17.0.6:7800
view=[jgroups-0-13785|2] (3) [jgroups-0-13785, jgroups-1-42392, jgroups-2-14656]
cluster=chat
version=4.2.4.Final (Julier)
3 responses (3 matches, 0 non matches)
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: jgroups
  labels:
    run: jgroups
spec:
  replicas: 3
  selector:
    matchLabels:
      run: jgroups
  serviceName: "jgroups"
  template:
    metadata:
      labels:
        run: jgroups
    spec:
      containers:
      - image: belaban/jgroups
        name: jgroups
        command: ["chat.sh"]
        args: ["-props gossip.xml"]
        env:
        - name: DNS_QUERY
          value: "jgroups.default.svc.cluster.local"
        - name: DNS_RECORD_TYPE
          value: A
        - name: TCPGOSSIP_INITIAL_HOSTS
          value: "172.17.0.3[12001]"
---
apiVersion: v1
kind: Service
metadata:
#  annotations:
#    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
  name: jgroups
  labels:
    run: jgroups
spec:
  publishNotReadyAddresses: true
  clusterIP: None
  ports:
  - name: ping
    port: 7800
    protocol: TCP
    targetPort: 7800
  - name: probe
    port: 7500
    protocol: UDP
    targetPort: 7500
  - name: debug
    port: 8787
    protocol: TCP
    targetPort: 8787
  - name: stomp
    port: 9000
    protocol: TCP
    targetPort: 9000
  selector:
    run: jgroups
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
---
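For step 2 above, one way to read the GossipRouter pod's eth0 address without exec-ing into the pod is a kubectl one-liner (a sketch; it assumes the run=gossiprouter label from the Deployment earlier):
kubectl get pod -l run=gossiprouter -o jsonpath='{.items[0].status.podIP}'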
I have two services, one serving static files and the other serving APIs, and I have created a single Ingress for both.
I want to serve / from service1 and /api from service2. My services are running fine,
but I am getting a 404 for the /api path.
Below is my Kubernetes YAML:
---
apiVersion: v1
kind: Namespace
metadata:
  name: "myapp"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "service2"
  namespace: "myapp"
  labels:
    project: "myapp"
    env: "prod"
spec:
  replicas: 2
  selector:
    matchLabels:
      project: "myapp"
      run: "service2"
    matchExpressions:
    - {key: project, operator: In, values: ["myapp"]}
  template:
    metadata:
      labels:
        project: "myapp"
        env: "prod"
        run: "service2"
    spec:
      securityContext:
        sysctls:
        - name: net.ipv4.ip_local_port_range
          value: "1024 65535"
      imagePullSecrets:
      - name: tildtr
      containers:
      - name: "node-container"
        image: "images2"
        imagePullPolicy: Always
        ports:
        - containerPort: 3000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "service1"
  namespace: "myapp"
  labels:
    project: "myapp"
    env: "prod"
spec:
  replicas: 2
  selector:
    matchLabels:
      project: "myapp"
      run: "service1"
    matchExpressions:
    - {key: project, operator: In, values: ["myapp"]}
  template:
    metadata:
      labels:
        project: "myapp"
        env: "prod"
        run: "service1"
    spec:
      securityContext:
        sysctls:
        - name: net.ipv4.ip_local_port_range
          value: "1024 65535"
      imagePullSecrets:
      - name: tildtr
      containers:
      - name: "nginx-container"
        image: "image1"
        imagePullPolicy: Always
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: "service1"
  namespace: "myapp"
  labels:
    project: "myapp"
    env: "prod"
    run: "service1"
spec:
  selector:
    project: "myapp"
  type: ClusterIP
  ports:
  - name: "service1"
    port: 80
    targetPort: 80
  selector:
    run: "service1"
---
apiVersion: v1
kind: Service
metadata:
  name: "service2"
  namespace: "myapp"
  labels:
    project: "myapp"
    env: "prod"
    run: "service2"
spec:
  selector:
    project: "myapp"
  type: ClusterIP
  ports:
  - name: "service2"
    port: 80
    targetPort: 3000
  selector:
    run: "service2"
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: "myapp"
  namespace: "myapp"
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/server-alias: "*.xyz.in"
    nginx.ingress.kubernetes.io/server-snippet: |
      sendfile on;
      tcp_nopush on;
      tcp_nodelay on;
      keepalive_timeout 50;
      keepalive_requests 100000;
      reset_timedout_connection on;
      client_body_timeout 20;
      send_timeout 2;
      types_hash_max_size 2048;
      client_max_body_size 20M;
      gzip on;
      gzip_min_length 10240;
      gzip_proxied expired no-cache no-store private auth;
      gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/xml application/json;
      gzip_disable "MSIE [1-6]\.";
spec:
  rules:
  - host: "myhost.in"
    http:
      paths:
      - path: /api
        backend:
          serviceName: "service2"
          servicePort: 80
      - path: /
        backend:
          serviceName: "service1"
          servicePort: 80
And this is my ingress describe output:
Name: myapp
Namespace: myapp
Address: 10.100.160.106
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
ds-vaccination.timesinternet.in
/api service2:3000 (10.0.1.113:3000,10.0.2.123:3000)
/ service1:80 (10.0.1.37:80,10.0.2.59:80)
Annotations: nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/server-alias: *.xyz.in
nginx.ingress.kubernetes.io/server-snippet:
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 50;
keepalive_requests 100000;
reset_timedout_connection on;
client_body_timeout 20;
send_timeout 2;
types_hash_max_size 2048;
client_max_body_size 20M;
gzip on;
gzip_min_length 10240;
gzip_proxied expired no-cache no-store private auth;
gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/xml application/json;
gzip_disable "MSIE [1-6]\.";
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 6s (x5 over 31m) nginx-ingress-controller Scheduled for sync
Normal Sync 6s (x5 over 31m) nginx-ingress-controller Scheduled for sync
Remove this annotation and try:
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
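A minimal sketch of the same path rules without the rewrite annotation (server-snippet and the other annotations omitted here for brevity):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: "myapp"
  namespace: "myapp"
spec:
  rules:
  - host: "myhost.in"
    http:
      paths:
      - path: /api
        backend:
          serviceName: "service2"
          servicePort: 80
      - path: /
        backend:
          serviceName: "service1"
          servicePort: 80
With rewrite-target: / in place, a request such as /api/whatever is rewritten to / before it reaches service2, so the backend never sees the /api routes it expects, which is the likely cause of the 404.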
If your cluster is still on the old extensions/v1beta1 API:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: service-ingress
spec:
  rules:
  - host: service1.example.com
    http:
      paths:
      - backend:
          serviceName: service1
          servicePort: 80
  - host: service2.example.com
    http:
      paths:
      - backend:
          serviceName: service2
          servicePort: 80
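With host-based rules like these, a quick way to test without DNS is to set the Host header explicitly (a sketch; <ingress-ip> stands for your ingress controller's address):
curl -H "Host: service1.example.com" http://<ingress-ip>/
curl -H "Host: service2.example.com" http://<ingress-ip>/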
I have a problem setting up ingress paths for a backend service. For example, I want to set up:
a frontend app with Angular (path: /)
a backend service with NodeJS (path: /webservice).
NodeJS : Index.js
const express = require('express')
const app = express()
const port = 4000
app.get('/', (req, res) => res.send('Welcome to myApp!'))
app.use('/data/office', require('./roffice'));
app.listen(port, () => console.log(`Example app listening on port ${port}!`))
Another Route:roffice.js
var express = require('express')
var router = express.Router()
router.get('/getOffice', async function (req, res) {
  res.send('Get Data Office')
});
module.exports = router
Deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ws-stack
spec:
  selector:
    matchLabels:
      run: ws-stack
  replicas: 2
  template:
    metadata:
      labels:
        run: ws-stack
    spec:
      containers:
      - name: ws-stack
        image: wsstack/node/img
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 4000
Service.yaml
apiVersion: v1
kind: Service
metadata:
  name: service-wsstack
  labels:
    run: service-wsstack
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
    nodePort: 30009
    targetPort: 4000
  selector:
    run: ws-stack
ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: stack-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
  - host: hello-world.info
  - http:
      paths:
      - path: /
        backend:
          serviceName: service-ngstack   # frontend
          servicePort: 80
      - path: /webservice
        backend:
          serviceName: service-wsstack   # backend
          servicePort: 80
I set up the deployment, service and ingress successfully, but when I call them with curl:
curl http://<minikubeip>/webservice --> Welcome to myApp! => correct
curl http://<minikubeip>/webservice/data/office/getOffice --> Welcome to myApp! => not correct
If I call any other route, the result is the same 'Welcome to myApp!'. But if I use the NodePort:
curl http://<minikubeip>:30009/data/office/getOffice => 'Get Data Office', which works properly.
What is the problem? Any solution? Thank you.
TL;DR
nginx.ingress.kubernetes.io/rewrite-target: /$2
path: /webservice($|/)(.*)
Explanation
The problem comes from this line in your ingress:
nginx.ingress.kubernetes.io/rewrite-target: /
You're telling nginx to rewrite your URL to /, whatever path it matched:
/webservice => /
/webservice/data/office/getOffice => /
To do what you're trying to do, use a regex; here is a simple example:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: stack-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
  - host: hello-world.info
  - http:
      paths:
      - path: /
        backend:
          serviceName: service-ngstack   # frontend
          servicePort: 80
      - path: /webservice($|/)(.*)
        backend:
          serviceName: service-wsstack   # backend
          servicePort: 80
This way you're asking nginx to rewrite your url with the second matching group.
Finally it gives you:
/webservice => /
/webservice/data/office/getOffice => /data/office/getOffice
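A quick way to confirm the behaviour, reusing the curls from the question (assuming the regex Ingress above has been applied):
curl http://<minikubeip>/webservice                        # rewritten to /                      -> 'Welcome to myApp!'
curl http://<minikubeip>/webservice/data/office/getOffice  # rewritten to /data/office/getOffice -> 'Get Data Office'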
I'm setting up Traefik 2.0-alpha with Let's Encrypt certificates inside GKE, but now I'm stuck on the error "server.go:3012: http: TLS handshake error from 10.32.0.1:2244: remote error: tls: bad certificate" in the container logs.
Connections via HTTP work fine. When I try to connect via HTTPS, Traefik returns a 404 with its own default certificate.
I found the same problem for Traefik v1 on GitHub. The solution there was to add the following to the config:
InsecureSkipVerify = true
passHostHeader = true
It doesn't help in my case.
Here is my ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: traefik-ingress-configmap
  namespace: kube-system
data:
  traefik.toml: |
    [Global]
    sendAnonymousUsage = true
    debug = true
    logLevel = "DEBUG"
    [ServersTransport]
    InsecureSkipVerify = true
    [entrypoints]
      [entrypoints.web]
        address = ":80"
      [entryPoints.web-secure]
        address = ":443"
      [entrypoints.mongo-port]
        address = ":11111"
    [providers]
      [providers.file]
    [tcp] # YAY!
      [tcp.routers]
        [tcp.routers.everything-to-mongo]
          entrypoints = ["mongo-port"]
          rule = "HostSNI(`*`)" # Catches every request
          service = "database"
      [tcp.services]
        [tcp.services.database.LoadBalancer]
          [[tcp.services.database.LoadBalancer.servers]]
            address = "mongodb-service.default.svc:11111"
    [http]
      [http.routers]
        [http.routers.for-jupyterx-https]
          entryPoints = ["web-secure"] # won't listen to entrypoint mongo-port
          # rule = "Host(`clients-ui.ddns.net`)"
          # rule = "Path(`/jupyterx`)" # abo /jupyterx/*
          rule = "PathPrefix(`/jupyterx`)"
          service = "jupyterx"
          [http.routers.for-jupyterx.tls]
        [http.routers.for-jupyterx-http]
          entryPoints = ["web"] # won't listen to entrypoint mongo-port
          # rule = "Host(`clients-ui.ddns.net`)"
          # rule = "Path(`/jupyterx`)" # abo /jupyterx/*
          rule = "PathPrefix(`/jupyterx`)"
          service = "jupyterx"
      [http.services]
        [http.services.jupyterx.LoadBalancer]
          PassHostHeader = true
          # InsecureSkipVerify = true
          [[http.services.jupyterx.LoadBalancer.servers]]
            url = "http://jupyter-service.default.svc/"
            weight = 100
    [acme] # every router with TLS enabled will now be able to use ACME for its certificates
      email = "account@mail.com"
      storage = "acme.json"
      # onHostRule = true # dynamic generation based on the Host() & HostSNI() matchers
      caServer = "https://acme-staging-v02.api.letsencrypt.org/directory"
      [acme.httpChallenge]
        entryPoint = "web" # used during the challenge
And DaemonSet yaml:
# ---
# apiVersion: v1
# kind: ServiceAccount
# metadata:
#   name: traefik-ingress-controller
#   namespace: kube-system
---
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
      name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      volumes:
      # - name: traefik-ui-tls-cert
      #   secret:
      #     secretName: traefik-ui-tls-cert
      - name: traefik-ingress-configmap
        configMap:
          name: traefik-ingress-configmap
      containers:
      - image: traefik:2.0 # The official v2.0 Traefik docker image
        name: traefik-ingress-lb
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        - name: web-secure
          containerPort: 443
          hostPort: 443
        - name: admin
          containerPort: 8080
        - name: mongodb
          containerPort: 11111
        volumeMounts:
        - mountPath: "/config"
          name: "traefik-ingress-configmap"
        args:
        - --api
        - --configfile=/config/traefik.toml
---
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - protocol: TCP
    port: 80
    name: web
  - protocol: TCP
    port: 443
    name: web-secure
  - protocol: TCP
    port: 8080
    name: admin
  - port: 11111
    protocol: TCP
    name: mongodb
  type: LoadBalancer
  loadBalancerIP: 1.1.1.1
Do you have any suggestions on how to fix this?
Due to the lack of documentation for Traefik 2.0-alpha, the config file was written using only the manual on the official Traefik page. There is a "routers for HTTP & HTTPS" configuration example at https://docs.traefik.io/v2.0/routing/routers/ that looks like this:
[http.routers]
  [http.routers.Router-1-https]
    rule = "Host(`foo-domain`) && Path(`/foo-path/`)"
    service = "service-id"
    [http.routers.Router-1.tls] # will terminate the TLS request
  [http.routers.Router-1-http]
    rule = "Host(`foo-domain`) && Path(`/foo-path/`)"
    service = "service-id"
But working config looks like:
[http.routers]
  [http.routers.Router-1-https]
    rule = "Host(`foo-domain`) && Path(`/foo-path/`)"
    service = "service-id"
    [http.routers.Router-1-https.tls] # will terminate the TLS request
  [http.routers.Router-1-http]
    rule = "Host(`foo-domain`) && Path(`/foo-path/`)"
    service = "service-id"
So, in my config the line
[http.routers.for-jupyterx.tls]
should be changed to
[http.routers.for-jupyterx-https.tls]
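Applied to the ConfigMap from the question, the HTTPS router section would then read (only the name of the .tls sub-table changes):
[http.routers.for-jupyterx-https]
  entryPoints = ["web-secure"]
  rule = "PathPrefix(`/jupyterx`)"
  service = "jupyterx"
  [http.routers.for-jupyterx-https.tls]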
I was able to achieve load balancing with the sample Istio applications:
https://github.com/piomin/sample-istio-services
https://istio.io/docs/guides/bookinfo/
But I was not able to get Istio load balancing working with a single private service that has 2 versions, for example 2 Consul servers with different versions.
Service and pod definitions:
apiVersion: v1
kind: Service
metadata:
  name: consul-test
  labels:
    app: test
spec:
  ports:
  - port: 8500
    name: http
  selector:
    app: test
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: consul-test-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test
        version: v1
    spec:
      containers:
      - name: consul-test-v1
        image: consul:latest
        ports:
        - containerPort: 8500
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: consul-test-v2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test
        version: v2
    spec:
      containers:
      - name: consul-test-v2
        image: consul:1.1.0
        ports:
        - containerPort: 8500
Gateway definition:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: http-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: con-gateway
spec:
  hosts:
  - "*"
  gateways:
  - http-gateway
  http:
  - match:
    - uri:
        exact: /catalog
    route:
    - destination:
        host: consul-test
        port:
          number: 8500
Routing rules in virtual service:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: consul-test
spec:
  hosts:
  - consul-test
  gateways:
  - con-gateway
  - mesh
  http:
  - route:
    - destination:
        host: consul-test
        subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: consul-test
spec:
  host: consul-test
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
Even though I route all traffic (HTTP requests) to Consul server version v1, my HTTP requests to consul-test land on v1 and v2 alternately, i.e. they follow the round-robin rule.
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
consul-test ClusterIP 10.97.200.140 <none> 8500/TCP 9m
$ curl -L http://10.97.200.140:8500/v1/catalog/nodes
[
  {
    "ID": "ebfa341b-4557-a392-9f8a-8ee307113faa",
    "Node": "consul-test-v1-765dd566dd-6cmj9",
    "Address": "127.0.0.1",
    "Datacenter": "dc1",
    "TaggedAddresses": {
      "lan": "127.0.0.1",
      "wan": "127.0.0.1"
    },
    "Meta": {
      "consul-network-segment": ""
    },
    "CreateIndex": 9,
    "ModifyIndex": 10
  }
]
$ curl -L http://10.97.200.140:8500/v1/catalog/nodes
[
  {
    "ID": "1b60a5bd-9a17-ff18-3a65-0ff95b3a836a",
    "Node": "consul-test-v2-fffd475bc-st4mv",
    "Address": "127.0.0.1",
    "Datacenter": "dc1",
    "TaggedAddresses": {
      "lan": "127.0.0.1",
      "wan": "127.0.0.1"
    },
    "Meta": {
      "consul-network-segment": ""
    },
    "CreateIndex": 5,
    "ModifyIndex": 6
  }
]
I have the above-mentioned issue when curl is done against the service ClusterIP and port:
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
consul-test ClusterIP 10.97.200.140 <none> 8500/TCP 9m
$ curl -L http://10.97.200.140:8500/v1/catalog/nodes
But load balancing works as expected when curl is done against INGRESS_HOST and INGRESS_PORT (determining INGRESS_HOST and INGRESS_PORT is described here):
$ curl -L http://$INGRESS_HOST:$INGRESS_PORT/v1/catalog/nodes --- WORKS
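One way to check the mesh-side behaviour is to send the request from a pod that has the Istio sidecar injected, since the consul-test VirtualService with the mesh gateway is only evaluated by the Envoy sidecars, not by plain ClusterIP traffic from outside the mesh. A sketch, where <sleep-pod> stands for any sidecar-injected pod in the same namespace:
kubectl exec -it <sleep-pod> -- curl -sL http://consul-test:8500/v1/catalog/nodes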