Exposing virtual service with istio and mTLS globally enabled - kubernetes

I have this configuration on my service mesh:
mTLS globally enabled and a default MeshPolicy
simple-web deployment exposed as a ClusterIP service on port 8080
HTTP gateway for port 80 and a VirtualService routing to my service
Here are the Gateway and VirtualService YAML:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: http-gateway
spec:
  selector:
    istio: ingressgateway # Specify the ingressgateway created for us
  servers:
  - port:
      number: 80 # Service port to watch
      name: http-gateway
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: simple-web
spec:
  gateways:
  - http-gateway
  hosts:
  - '*'
  http:
  - match:
    - uri:
        prefix: /simple-web
    rewrite:
      uri: /
    route:
    - destination:
        host: simple-web
        port:
          number: 8080
Both the VirtualService and the Gateway are in the same namespace.
The deployment was created and exposed with these commands:
k create deployment --image=yeasy/simple-web:latest simple-web
k expose deployment simple-web --port=8080 --target-port=80 --name=simple-web
and with k get pods I receive this:
pod/simple-web-9ffc59b4b-n9f85 2/2 Running
What happens is that from outside, pointing to the ingress-gateway load balancer, I receive an HTTP 503 error.
If I curl from the ingressgateway pod, I can reach the simple-web service.
Why can't I reach the website with mTLS enabled? What's the correct configuration?

As @suren mentioned in his answer, this issue is not present in Istio version 1.3.2, so one of the solutions is to use a newer version.
If you choose to upgrade Istio to a newer version, please review the 1.3 Upgrade Notice and Upgrade Steps documentation, as Istio is still in development and changes drastically with each version.
Also, as mentioned in the comments by @Manuel Castro, this is most likely the issue addressed in Avoid 503 errors while reconfiguring service routes, and the newer version simply handles it better.
Creating both the VirtualServices and DestinationRules that define the corresponding subsets using a single kubectl call (e.g., kubectl apply -f myVirtualServiceAndDestinationRule.yaml) is not sufficient because the resources propagate (from the configuration server, i.e., Kubernetes API server) to the Pilot instances in an eventually consistent manner. If the VirtualService using the subsets arrives before the DestinationRule where the subsets are defined, the Envoy configuration generated by Pilot would refer to non-existent upstream pools. This results in HTTP 503 errors until all configuration objects are available to Pilot.
It should be possible to avoid this issue by temporarily disabling mTLS or by using permissive mode during the deployment.
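For example, a minimal sketch of switching the mesh-wide policy to permissive mode during the rollout (this assumes the pre-1.5 MeshPolicy API that the question refers to; switch back to strict mTLS once everything has propagated):
apiVersion: authentication.istio.io/v1alpha1
kind: MeshPolicy
metadata:
  name: default
spec:
  peers:
  - mtls:
      mode: PERMISSIVE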

I just installed istio-1.3.2 and k8s 1.15.1 to reproduce your issue, and it worked without any modifications. This is what I did:
0.- create a namespace called istio and enable sidecar injection automatically (see the sketch after these steps).
1.- $ kubectl run nginx --image nginx -n istio
2.- $ kubectl expose deploy nginx --port 8080 --target-port 80 --name simple-web -n istio
3.- $ kubectl create -f gw.yaml -f vs.yaml
Note: these are your files.
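For reference, a sketch of step 0 using the standard namespace label for automatic sidecar injection (assuming the default sidecar injector is installed):
kubectl create namespace istio
kubectl label namespace istio istio-injection=enabled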
The test:
$ curl a.b.c.d:31380/simple-web -I
HTTP/1.1 200 OK
server: istio-envoy
date: Fri, 11 Oct 2019 10:04:26 GMT
content-type: text/html
content-length: 612
last-modified: Tue, 24 Sep 2019 14:49:10 GMT
etag: "5d8a2ce6-264"
accept-ranges: bytes
x-envoy-upstream-service-time: 4
[2019-10-11T10:04:26.101Z] "HEAD /simple-web HTTP/1.1" 200 - "-" "-" 0 0 6 4 "10.132.0.36" "curl/7.52.1" "4bbc2609-a928-9f79-9ae8-d6a3e32217d7" "a.b.c.d:31380" "192.168.171.73:80" outbound|8080||simple-web.istio.svc.cluster.local - 192.168.171.86:80 10.132.0.36:37078 - -
And to be sure mTLS was enabled, this is from the ingress-gateway describe output:
--controlPlaneAuthPolicy
MUTUAL_TLS
So, I don't know what is wrong, but you might want to go through these steps and rule things out.
Note: the reason I am hitting the istio gateway on port 31380 is that my k8s runs on VMs right now, and I didn't want to spin up a GKE cluster for a test.
EDIT
Just deployed another deployment with your image, exposed it as simple-web-2, and it worked again. Maybe I'm lucky with istio:
$ curl a.b.c.d:31380/simple-web -I
HTTP/1.1 200 OK
server: istio-envoy
date: Fri, 11 Oct 2019 10:28:45 GMT
content-type: text/html
content-length: 354
last-modified: Fri, 11 Oct 2019 10:28:46 GMT
x-envoy-upstream-service-time: 4
[2019-10-11T10:28:46.400Z] "HEAD /simple-web HTTP/1.1" 200 - "-" "-" 0 0 5 4 "10.132.0.36" "curl/7.52.1" "df0dd00a-875a-9ae6-bd48-acd8be1cc784" "a.b.c.d:31380" "192.168.171.65:80" outbound|8080||simple-web-2.istio.svc.cluster.local - 192.168.171.86:80 10.132.0.36:42980 - -
What's your k8s environment?
EDIT2
# istioctl authn tls-check curler-6885d9fd97-vzszs simple-web.istio.svc.cluster.local -n istio
HOST:PORT STATUS SERVER CLIENT AUTHN POLICY DESTINATION RULE
simple-web.istio.svc.cluster.local:8080 OK mTLS mTLS default/ default/istio-system

Related

Kubernetes Nginx ingress appears to remove headers before sending requests to backend service

I am trying to deploy a SpringBoot Java application hosted on Apache Tomcat on a Kubernetes cluster, using Nginx Ingress for URL routing. More specifically, I am deploying on Minikube on my local machine, exposing the SpringBoot application as a ClusterIP Service, and executing the
minikube tunnel
command to expose services to my local machine. Visually, the process is the following...
Browser -> Ingress -> Service -> Pod hosting docker container of Apache Tomcat Server housing SpringBoot Java API
The backend service requires a header "SM_USER", which for the moment can be any value. When running the backend application as a Docker container with port forwarding, accessing the backend API works great. However, when deploying to a Kubernetes cluster behind an Nginx ingress, I get 403 errors from the API stating that the SM_USER header is missing. I suspect the following. My guess is that the header is included with the request to the ingress, but removed when being routed to the backend service.
My setup is the following.
Deploying on Minikube
minikube start
minikube addons enable ingress
eval $(minikube docker-env)
docker build . -t api -f Extras/DockerfileAPI
docker build . -t ui -f Extras/DockerfileUI
kubectl apply -f Extras/deployments-api.yml
kubectl apply -f Extras/deployments-ui.yml
kubectl expose deployment ui --type=ClusterIP --port=8080
kubectl expose deployment api --type=ClusterIP --port=8080
kubectl apply -f Extras/ingress.yml
kubectl apply -f Extras/ingress-config.yml
edit /etc/hosts file to resolve mydomain.com to localhost
minikube tunnel
Ingress YAML
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # Tried forcing the header by manual addition, no luck (tried with and without)
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header SM_USER $http_Admin;
spec:
  rules:
  - host: mydomain.com
    http:
      paths:
      - path: /ui
        pathType: Prefix
        backend:
          service:
            name: ui
            port:
              number: 8080
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api
            port:
              number: 8080
Ingress Config YAML
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
  namespace: default
  labels:
    app.kubernetes.io/name: my-ingress
    app.kubernetes.io/part-of: my-ingress
data:
  # Tried allowing underscores and also using proxy protocol (tried with and without)
  enable-underscores-in-headers: "true"
  use-proxy-protocol: "true"
When navigating to mydomain.com/api, I expect to receive the root API interface but instead receive the 403 error page indicating that SM_USER is missing. Note, this is not a 403 Forbidden error regarding the ingress or access from outside the cluster; the specific SpringBoot error page I receive is from within my application and custom-built to indicate that the header is missing. In other words, my routing is definitely correct and I am able to access the API, it's just that the header is missing.
Are there configs or parameters I am missing? Possibly an annotation?
This is resolved. The issue was that the config was being applied in the application namespace. Note that even if the Ingress object is in the application namespace, if you are using minikube's built-in ingress functionality, the ConfigMap must be applied in the Nginx ingress namespace.
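A hedged sketch of the fix, assuming a recent minikube where the ingress addon runs in the ingress-nginx namespace and the controller reads a ConfigMap named ingress-nginx-controller (older minikube versions use kube-system and a different ConfigMap name, so check the controller's --configmap flag in your cluster first):
kind: ConfigMap
apiVersion: v1
metadata:
  name: ingress-nginx-controller   # assumption: must match the controller's --configmap flag
  namespace: ingress-nginx         # assumption: the minikube ingress addon namespace
data:
  enable-underscores-in-headers: "true"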

cetic-nifi Invalid host header issue

Helm version: v3.5.2
Kubernetes version: v1.20.4
nifi chart version: latest (1.0.2 release)
Issue: [cetic/nifi]-issue
I'm trying to connect to the NiFi UI deployed in Kubernetes.
I have set the following properties in the values yaml:
properties:
  # use externalSecure for when inbound SSL is provided by nginx-ingress or other external mechanism
  sensitiveKey: changeMechangeMe # Must have at least a 12 character key
  algorithm: NIFI_PBKDF2_AES_GCM_256
  externalSecure: false
  isNode: false
  httpsPort: 8443
  webProxyHost: 10.0.39.39:30666
  clusterPort: 6007

# ui service
service:
  type: NodePort
  httpsPort: 8443
  nodePort: 30666
  annotations: {}
  # loadBalancerIP:
  ## Load Balancer sources
  ## https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
  ##
  # loadBalancerSourceRanges:
  # - 10.10.10.0/24
  ## OIDC authentication requires "sticky" session on the LoadBalancer for JWT to work properly...but AWS doesn't like it on creation
  # sessionAffinity: ClientIP
  # sessionAffinityConfig:
  #   clientIP:
  #     timeoutSeconds: 10800
10.0.39.39 is the Kubernetes master node's internal IP.
When NiFi gets started I get the following:
WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /home/k8sadmin/.kube/config
WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /home/k8sadmin/.kube/config
NAME: nifi
LAST DEPLOYED: Thu Nov 25 12:38:00 2021
NAMESPACE: jeed-cluster
STATUS: deployed
REVISION: 1
NOTES:
Cluster endpoint IP address will be available at:
kubectl get svc nifi -n jeed-cluster -o jsonpath='{.status.loadBalancer.ingress[*].ip}'
Cluster endpoint domain name is: 10.0.39.39:30666 - please update your DNS or /etc/hosts accordingly!
Once you are done, your NiFi instance will be available at:
https://10.0.39.39:30666/nifi
and when I do a curl:
curl https://10.0.39.39:30666 put sample.txt -k
<h1>System Error</h1>
<h2>The request contained an invalid host header [<code>10.0.39.39:30666</
the request [<code>/</code>]. Check for request manipulation or third-part
t.</h2>
<h3>Valid host headers are [<code>empty
<ul><li>127.0.0.1</li>
<li>127.0.0.1:8443</li>
<li>localhost</li>
<li>localhost:8443</li>
<li>[::1]</li>
<li>[::1]:8443</li>
<li>nifi-0.nifi-headless.jeed-cluste
<li>nifi-0.nifi-headless.jeed-cluste
<li>10.42.0.8</li>
<li>10.42.0.8:8443</li>
<li>0.0.0.0</li>
<li>0.0.0.0:8443</li>
</ul>
I tried a lot of things but still cannot whitelist the master node IP in the proxy hosts.
Ingress is not used.
Edit: it looks like the properties set in values.yaml are not applied to nifi.properties inside the pod. Is there any reason for this?
Appreciate the help!
As a NodePort service, you can also assign a port number from the 30000-32767 range.
You can apply values when you install your chart with:
properties:
  webProxyHost: localhost
  httpsPort:
This should let NiFi whitelist your https://localhost: address.
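For instance, a hedged sketch of setting those values at install time (the release name, namespace and chart are assumed from the question; the value paths come from the values.yaml shown above):
helm upgrade --install nifi cetic/nifi \
  -n jeed-cluster \
  --set properties.webProxyHost=localhost \
  --set service.type=NodePort \
  --set service.nodePort=30666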

How to expose a service to outside Kubernetes cluster via ingress?

I'm struggling to expose a service in an AWS cluster to the outside world and access it via a browser. Since my previous question hasn't drawn any answers, I decided to simplify the issue in several aspects.
First, I've created a deployment which should work without any configuration. Based on this article, I did:
kubectl create namespace tests
created the file probe-service.yaml based on paulbouwer/hello-kubernetes:1.8 and deployed it with kubectl create -f probe-service.yaml -n tests:
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes-first
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: hello-kubernetes-first
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes-first
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-kubernetes-first
  template:
    metadata:
      labels:
        app: hello-kubernetes-first
    spec:
      containers:
      - name: hello-kubernetes
        image: paulbouwer/hello-kubernetes:1.8
        ports:
        - containerPort: 8080
        env:
        - name: MESSAGE
          value: Hello from the first deployment!
created ingress.yaml and applied it (kubectl apply -f .\probes\ingress.yaml -n tests)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-kubernetes-ingress
spec:
  rules:
  - host: test.projectname.org
    http:
      paths:
      - pathType: Prefix
        path: "/test"
        backend:
          service:
            name: hello-kubernetes-first
            port:
              number: 80
  - host: test2.projectname.org
    http:
      paths:
      - pathType: Prefix
        path: "/test2"
        backend:
          service:
            name: hello-kubernetes-first
            port:
              number: 80
  ingressClassName: nginx
Second, I can see that DNS actually points to the cluster and the ingress rules are applied:
if I open http://test.projectname.org/test or any irrelevant path (http://test.projectname.org/test3), I'm shown NET::ERR_CERT_AUTHORITY_INVALID, but
if I use "open anyway" in the browser, irrelevant paths give ERR_TOO_MANY_REDIRECTS while http://test.projectname.org/test gives Cannot GET /test
Now, TLS issues aside (those deserve a separate question), why do I get Cannot GET /test? It looks like the ingress controller (ingress-nginx) got the rules (otherwise it wouldn't discriminate between paths; that's why I don't show the DNS settings, although they are described in the previous question), but instead of showing the simple hello-kubernetes page at /test it returns this simple 404 message. Why is that? What could possibly go wrong? How do I debug this?
Some debug info:
kubectl version --short tells Kubernetes Client Version is v1.21.5 and Server Version is v1.20.7-eks-d88609
kubectl get ingress -n tests shows that hello-kubernetes-ingress exists indeed, with nginx class, 2 expected hosts, address equal to that shown for load balancer in AWS console
kubectl get all -n tests shows
NAME READY STATUS RESTARTS AGE
pod/hello-kubernetes-first-6f77d8ff99-gjw5d 1/1 Running 0 5h4m
pod/hello-kubernetes-first-6f77d8ff99-ptwsn 1/1 Running 0 5h4m
pod/hello-kubernetes-first-6f77d8ff99-x8w87 1/1 Running 0 5h4m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/hello-kubernetes-first ClusterIP 10.100.18.189 <none> 80/TCP 5h4m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/hello-kubernetes-first 3/3 3 3 5h4m
NAME DESIRED CURRENT READY AGE
replicaset.apps/hello-kubernetes-first-6f77d8ff99 3 3 3 5h4m
ingress-nginx was installed before me via the following chart:
apiVersion: v2
name: nginx
description: A Helm chart for Kubernetes
type: application
version: 4.0.6
appVersion: "1.0.4"
dependencies:
- name: ingress-nginx
  version: 4.0.6
  repository: https://kubernetes.github.io/ingress-nginx
and the values overrides applied with the chart differ from the original ones (well, those got updated since the installation) mostly in extraArgs: default-ssl-certificate: "nginx-ingress/dragon-family-com" is uncommented
PS To answer Andrew, I indeed tried to set up HTTPS but it seemingly didn't help, so I haven't included what I tried in the initial question. Yet, here's what I did:
installed cert-manager, currently without a custom chart: kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.5.4/cert-manager.yaml
based on cert-manager's tutorial and an SO question, created a ClusterIssuer with the following config:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-backoffice
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    # use https://acme-v02.api.letsencrypt.org/directory after everything is fixed and works
    privateKeySecretRef: # this secret will be created in the namespace of cert-manager
      name: letsencrypt-backoffice-private-key
    # email: <will be used for urgent alerts about expiration etc>
    solvers:
    # TODO: add for each domain/second-level domain/*.projectname.org
    - selector:
        dnsZones:
        - test.projectname.org
        - test2.projectname.org
      # haven't made it to work yet, so switched to the simpler to configure http01 challenge
      # dns01:
      #   route53:
      #     region: ... # that of load balancer (but we also have ...)
      #     accessKeyID: <of IAM user with access to Route53>
      #     secretAccessKeySecretRef: # created that
      #       name: route53-credentials-secret
      #       key: secret-access-key
      #     role: arn:aws:iam::645730347045:role/cert-manager
      http01:
        ingress:
          class: nginx
and applied it via kubectl apply -f issuer.yaml
created 2 certificates in the same file and applied it again:
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: letsencrypt-certificate
spec:
  secretName: tls-secret
  issuerRef:
    kind: ClusterIssuer
    name: letsencrypt-backoffice
  commonName: test.projectname.org
  dnsNames:
  - test.projectname.org
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: letsencrypt-certificate-2
spec:
  secretName: tls-secret-2
  issuerRef:
    kind: ClusterIssuer
    name: letsencrypt-backoffice
  commonName: test2.projectname.org
  dnsNames:
  - test2.projectname.org
made sure that the certificates are issued correctly (skipping the painful part; the result is: kubectl get certificates shows that both certificates have READY = true and both tls secrets are created)
figured out that my ingress is in another namespace and secrets for tls in the ingress spec can only be referred to in the same namespace (haven't tried the wildcard certificate and --default-ssl-certificate option yet), so I copied each one to the tests namespace (a one-command sketch of this copy follows the tls snippet below):
opened the existing secret, like kubectl edit secret tls-secret-2, copied data and annotations
created an empty (Opaque) secret in tests: kubectl create secret generic tls-secret-2-copy -n tests
opened it (kubectl edit secret tls-secret-2-copy -n tests) and inserted the data and annotations
in the ingress spec, added the tls bit:
tls:
- hosts:
  - test.projectname.org
  secretName: tls-secret-copy
- hosts:
  - test2.projectname.org
  secretName: tls-secret-2-copy
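As referenced above, each manual secret copy can also be done with a single hedged command (a sketch that assumes jq is available and that the cert-manager-created secrets live in the default namespace; adjust the source namespace to wherever your Certificates were created):
kubectl get secret tls-secret-2 -n default -o json \
  | jq 'del(.metadata.namespace, .metadata.resourceVersion, .metadata.uid, .metadata.creationTimestamp)' \
  | kubectl apply -n tests -f -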
I hoped that this would help, but actually it made no difference (I get ERR_TOO_MANY_REDIRECTS for irrelevant paths, a redirect from http to https, NET::ERR_CERT_AUTHORITY_INVALID at https, and Cannot GET /test if I insist on getting to the page)
Since you've used your own answer to complement the question, I'll kind of answer all the things you asked, while providing a divide-and-conquer strategy for troubleshooting Kubernetes networking.
At the end I'll give you some nginx and IP answers.
This is correct:
- host: test3.projectname.org
  http:
    paths:
    - pathType: Prefix
      path: "/"
      backend:
        service:
          name: hello-kubernetes-first
          port:
            number: 80
Breaking down troubleshooting with Ingress
DNS
Ingress
Service
Pod
Certificate
1. DNS
You can use the command dig to query the DNS:
dig google.com
2. Ingress
The ingress controller doesn't look for the IP, it just looks at the headers.
You can force a host using any tool that lets you change the headers, like curl:
curl --header 'Host: test3.projectname.com' http://123.123.123.123 (your public IP)
3. Service
You can be sure that your service is working by creating an ubuntu/centos pod, using kubectl exec -it podname -- bash, and trying to curl your service from within the cluster (see the sketch below).
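For example, a minimal sketch of that in-cluster check (the pod name, namespace and image are arbitrary assumptions):
# start a throwaway pod and open a shell in it
kubectl run curl-test -n tests --image=ubuntu -it --rm --restart=Never -- bash
# inside the pod:
apt-get update && apt-get install -y curl
curl -v http://hello-kubernetes-first.tests.svc.cluster.local:80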
4. Pod
You're getting this:
192.168.14.57 - - [14/Nov/2021:12:02:58 +0000] "GET /test2 HTTP/2.0" 404 144
"-" "<browser's user-agent header value>" 448 0.002
This part, GET /test2, means that the request got the address from the DNS, went all the way from the internet, found your cluster, found your ingress controller, got through the service and reached your pod. Congrats! Your ingress is working!
But why is it returning 404?
The path that was passed to the service, and from the service to the pod, is /test2.
Do you have a file called test2 that nginx can serve? Do you have an upstream config in nginx that has a test2 prefix?
That's why you're getting a 404 from nginx, not from the ingress controller.
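If the goal is to serve the app at /test while the app itself only knows about /, one common option is the ingress-nginx rewrite annotation; here is a hedged sketch (based on the standard rewrite-target pattern, not necessarily the exact behaviour you want for every path):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-kubernetes-ingress
  annotations:
    # strip the /test prefix before the request reaches the service
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - host: test.projectname.org
    http:
      paths:
      - path: /test(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: hello-kubernetes-first
            port:
              number: 80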
Those IPs are internal. Remember, the internet traffic ended at the cluster border; now you're in an internal network. Here's a rough sketch of what's happening.
Let's say that you're accessing it from your laptop. Your laptop has the IP 192.168.123.123, but your home has the address 7.8.9.1, so when your request hits the cluster, the cluster sees 7.8.9.1 requesting test3.projectname.com.
The cluster looks for the ingress controller, which finds a suitable configuration and passes the request down to the service, which passes the request down to the pod.
So,
your router can see your private IP (192.168.123.123)
Your cluster(ingress) can see your router's IP (7.8.9.1)
Your service can see the ingress's IP (192.168.?.?)
Your pod can see the service's IP (192.168.14.57)
It's a game of pass around.
If you want to see the public IP in your nginx logs, you need to customize it to get the X-Real-IP header, which is usually where load-balancers/ingresses/ambassador/proxies put the actual requester public IP.
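A hedged sketch of that customization for ingress-nginx (use-forwarded-headers and log-format-upstream are documented ConfigMap keys; the ConfigMap name and namespace below are assumptions, so match them to your own controller):
kind: ConfigMap
apiVersion: v1
metadata:
  name: ingress-nginx-controller   # assumption: your controller's ConfigMap name
  namespace: nginx                 # assumption: the namespace the chart was installed into
data:
  # trust X-Forwarded-For / X-Real-IP set by the load balancer in front of the controller
  use-forwarded-headers: "true"
  # put the forwarded client IP at the start of each access-log line
  log-format-upstream: '$http_x_forwarded_for - $remote_addr [$time_local] "$request" $status $body_bytes_sent'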
Well, I haven't figured this out for ArgoCD yet (edit: I have since figured it out, but the solution is ArgoCD-specific), but for this test service it seems that path resolution is the source of the issue. It may not be the only source (to be retested on the test2 subdomain), but when I created a new subdomain in the hosted zone (test3, not used anywhere before), pointed it via an A entry to the load balancer (as an "alias" in the AWS console), and then added to the ingress a new rule with a / path, like this:
- host: test3.projectname.org
  http:
    paths:
    - pathType: Prefix
      path: "/"
      backend:
        service:
          name: hello-kubernetes-first
          port:
            number: 80
I finally got the hello-kubernetes thing at http://test3.projectname.org. I succeeded with TLS after a number of attempts and some research, plus some help in a separate question.
But I haven't succeeded with the actual debugging: looking at kubectl logs -n nginx <pod name, see kubectl get pod -n nginx> doesn't really help to understand what path was passed to the service, and the logs are rather difficult to understand (I can't even find where those IPs come from: they are not mine, the LB's, or the cluster IP of the service; neither do I understand what tests-hello-kubernetes-first-80 stands for; it's just a concatenation of namespace, service name and port, and no object has such a name, including the ingress):
192.168.14.57 - - [14/Nov/2021:12:02:58 +0000] "GET /test2 HTTP/2.0" 404 144
"-" "<browser's user-agent header value>" 448 0.002
[tests-hello-kubernetes-first-80] [] 192.168.49.95:8080 144 0.000 404 <some hash>
Any more pointers on debugging will be helpful; also, suggestions regarding correct path rewriting for nginx-ingress are welcome.

control headers and routing deprecated in istio 1.6

I need to route my traffic based on headers using Istio, but this option is deprecated in Istio 1.6. Why are control headers and routing deprecated in Istio?
As mentioned in the istio documentation:
The mixer policy is deprecated in Istio 1.5 and not recommended for production usage.
Consider using Envoy ext_authz filter, lua filter, or write a filter using the Envoy-wasm sandbox.
Control headers and routing are not deprecated; it's just Mixer, which was used to do that, that is deprecated. There are different ways to do it now, as mentioned above.
I'm not sure what exactly you want to do, but take a look at envoy filter and virtual service.
Envoy filter
Here is an envoy filter which adds a custom header to all outbound responses:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: lua-filter
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      istio: ingressgateway
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: GATEWAY
      listener:
        filterChain:
          filter:
            name: "envoy.http_connection_manager"
            subFilter:
              name: "envoy.router"
    patch:
      operation: INSERT_BEFORE
      value:
        name: envoy.lua
        typed_config:
          "@type": "type.googleapis.com/envoy.config.filter.http.lua.v2.Lua"
          inlineCode: |
            function envoy_on_response(response_handle)
              response_handle:logInfo(" ========= XXXXX ========== ")
              response_handle:headers():add("X-User-Header", "worked")
            end
And a test with curl:
$ curl -s -I -X HEAD x.x.x.x/
HTTP/1.1 200 OK
server: istio-envoy
date: Mon, 06 Jul 2020 08:35:37 GMT
content-type: text/html
content-length: 13
last-modified: Thu, 02 Jul 2020 12:11:16 GMT
etag: "5efdcee4-d"
accept-ranges: bytes
x-envoy-upstream-service-time: 2
x-user-header: worked
A few links worth checking about that:
https://blog.opstree.com/2020/05/27/ip-whitelisting-using-istio-policy-on-kubernetes-microservices/
https://github.com/istio/istio/wiki/EnvoyFilter-Samples
https://istio.io/latest/docs/reference/config/networking/envoy-filter/
Virtual Service
Another thing worth checking here would be a virtual service; you can do header-based routing with matches there.
Take a look at this example from the istio documentation:
HttpMatchRequest specifies a set of criterion to be met in order for the rule to be applied to the HTTP request. For example, the following restricts the rule to match only requests where the URL path starts with /ratings/v2/ and the request contains a custom end-user header with value jason.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings-route
spec:
  hosts:
  - ratings.prod.svc.cluster.local
  http:
  - match:
    - headers:
        end-user:
          exact: jason
      uri:
        prefix: "/ratings/v2/"
      ignoreUriCase: true
    route:
    - destination:
        host: ratings.prod.svc.cluster.local
Additionally, here is my older example with header-based routing in a virtual service.
Let me know if you have any more questions.

Kubernetes service is reachable from node but not from my machine

I have a timeout problem with my site hosted on a Kubernetes cluster provided by DigitalOcean.
u#macbook$ curl -L fork.example.com
curl: (7) Failed to connect to fork.example.com port 80: Operation timed out
I have tried everything listed on the Debug Services page. I use a k8s service named df-stats-site.
u#pod$ nslookup df-stats-site
Server: 10.245.0.10
Address: 10.245.0.10#53
Name: df-stats-site.deepfork.svc.cluster.local
Address: 10.245.16.96
It gives the same output when I do it from the node:
u#node$ nslookup df-stats-site.deepfork.svc.cluster.local 10.245.0.10
Server: 10.245.0.10
Address: 10.245.0.10#53
Name: df-stats-site.deepfork.svc.cluster.local
Address: 10.245.16.96
With the help of Does the Service work by IP? part of the page, I tried the following command and got the expected output.
u#node$ curl 10.245.16.96
*correct response*
Which should mean that everything is fine with DNS and the service. I confirmed that kube-proxy is running with the following command:
u#node$ ps auxw | grep kube-proxy
root 4194 0.4 0.1 101864 17696 ? Sl Jul04 13:56 /hyperkube proxy --config=...
But I have something wrong with iptables rules:
u#node$ iptables-save | grep df-stats-site
(unfortunately, I was not able to copy the output from node, see the screenshot below)
It is recommended to restart kube-proxy with the -v flag set to 4, but I don't know how to do it with a DigitalOcean provided cluster.
That's the configuration I use:
apiVersion: v1
kind: Service
metadata:
  name: df-stats-site
spec:
  ports:
  - port: 80
    targetPort: 8002
  selector:
    app: df-stats-site
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: df-stats-site
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - fork.example.com
    secretName: letsencrypt-prod
  rules:
  - host: fork.example.com
    http:
      paths:
      - backend:
          serviceName: df-stats-site
          servicePort: 80
Also, I have an NGINX Ingress Controller set up with the help of this answer.
I must note that it worked fine before. I'm not sure what caused this, but restarting the cluster would be great, though I don't know how to do it without removing all the resources.
The solution for me was to add HTTP and HTTPS inbound rules in the Firewall (these are missing by default).
For DigitalOcean provided Kubernetes cluster, you can open it at https://cloud.digitalocean.com/networking/firewalls/.
UPDATE: Make sure to create a new firewall record rather than editing an existing one. Otherwise, your rules will be automatically removed in a couple of hours/days, because DigitalOcean k8s persists its own set of rules in the firewall.
ClusterIP services are only accessible from within the cluster. If you want to access it from outside the cluster, it needs to be configured as NodePort or LoadBalancer.
If you are just trying to test something locally, you can use kubectl port-forward to forward a port on your local machine to a ClusterIP service on a remote cluster. Here's an example of creating a deployment from an image, exposing it as a ClusterIP service, then accessing it via kubectl port-forward:
$ kubectl run --image=rancher/hello-world hello-world --replicas 2
$ kubectl expose deployment hello-world --type=ClusterIP --port=8080 --target-port=80
$ kubectl port-forward svc/hello-world 8080:8080
This service is now accessible from my local computer at http://127.0.0.1:8080