How to deny by default but allow HTTP and TCP traffic in an Istio Kubernetes cluster?

I have a cluster with Istio injection enabled and a CockroachDB StatefulSet defined:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cockroachdb-serviceaccount
---
apiVersion: v1
kind: Service
metadata:
  # This service is meant to be used by clients of the database. It exposes a ClusterIP that will
  # automatically load balance connections to the different database pods.
  name: cockroachdb-public
  labels:
    app: cockroachdb
spec:
  ports:
  # The main port, served by gRPC, serves Postgres-flavor SQL, internode
  # traffic and the cli.
  - port: 26257
    targetPort: 26257
    name: tcp
  # The secondary port serves the UI as well as health and debug endpoints.
  - port: 8080
    targetPort: 8080
    name: http
  selector:
    app: cockroachdb
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cockroachdb-statefulset
  labels:
    version: v20.1.2
spec:
  serviceName: cockroachdb
  replicas: 3
  selector:
    matchLabels:
      app: cockroachdb
  template:
    metadata:
      labels:
        app: cockroachdb
        version: v20.1.2
    spec:
      serviceAccountName: cockroachdb-serviceaccount
      containers:
      - name: cockroachdb
        image: cockroachdb/cockroach:v20.1.2
        ports:
        - containerPort: 26257
          name: tcp
        - containerPort: 8080
          name: http
        volumeMounts:
        - name: datadir
          mountPath: /cockroach/cockroach-data
        env:
        - name: COCKROACH_CHANNEL
          value: kubernetes-insecure
        command:
        - "/bin/bash"
        - "-ecx"
        # The use of qualified `hostname -f` is crucial:
        # Other nodes aren't able to look up the unqualified hostname.
        - "exec /cockroach/cockroach start --logtostderr --insecure --advertise-host $(hostname -f) --http-addr 0.0.0.0 --join cockroachdb-statefulset-0.cockroachdb,cockroachdb-statefulset-1.cockroachdb,cockroachdb-statefulset-2.cockroachdb --cache 25% --max-sql-memory 25%"
      # No pre-stop hook is required, a SIGTERM plus some time is all that's
      # needed for graceful shutdown of a node.
      terminationGracePeriodSeconds: 5
      volumes:
      - name: datadir
        persistentVolumeClaim:
          claimName: datadir
  podManagementPolicy: Parallel
  updateStrategy:
    type: RollingUpdate
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes:
      - "ReadWriteOnce"
      resources:
        requests:
          storage: 4Gi
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: cockroachdb-public
spec:
  host: cockroachdb-public
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: cockroachdb-public
spec:
  hosts:
  - cockroachdb-public
  http:
  - match:
    - port: 8080
    route:
    - destination:
        host: cockroachdb-public
        port:
          number: 8080
  tcp:
  - match:
    - port: 26257
    route:
    - destination:
        host: cockroachdb-public
        port:
          number: 26257
and a service that accesses it:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: downstream-serviceaccount
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: downstream-deployment-v1
  labels:
    app: downstream
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: downstream
      version: v1
  template:
    metadata:
      labels:
        app: downstream
        version: v1
    spec:
      serviceAccountName: downstream-serviceaccount
      containers:
      - name: downstream
        image: downstream:0.1
        ports:
        - containerPort: 80
        env:
        - name: DATABASE_URL
          value: postgres://roach@cockroachdb-public:26257/roach?sslmode=disable
---
apiVersion: v1
kind: Service
metadata:
  name: downstream-service
  labels:
    app: downstream
spec:
  type: ClusterIP
  selector:
    app: downstream
  ports:
  - port: 80
    targetPort: 80
    name: http
    protocol: TCP
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: downstream-service
spec:
  host: downstream-service
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: downstream-service
spec:
  hosts:
  - downstream-service
  http:
  - name: "downstream-service-routes"
    match:
    - port: 80
    route:
    - destination:
        host: downstream-service
        port:
          number: 80
Now I'd like to restrict access to CockroachDB to only downstream-service and to CockroachDB itself (since the nodes need to communicate with each other).
I'm trying to restrict the traffic with something like this:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: default-deny-all
  namespace: default
spec:
  {}
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: cockroachdb-authorizationpolicy-allow-from-downstream
  namespace: default
spec:
  selector:
    matchLabels:
      app: cockroachdb
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/default/sa/downstream-serviceaccount"]
  - to:
    - operation:
        ports: ["26257"]
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: cockroachdb-authorizationpolicy-allow-from-cockroachdb
  namespace: default
spec:
  selector:
    matchLabels:
      app: cockroachdb
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/default/sa/cockroachdb-serviceaccount"]
  - to:
    - operation:
        ports: ["26257"]
but it doesn't seem to do anything: I can still access, for example, the cluster HTTP UI at cockroachdb-public:8080 from downstream-service.
Now when I add the following:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: default-deny-all-to-cockroachdb
  namespace: default
spec:
  selector:
    matchLabels:
      app: cockroachdb
  action: DENY
  rules:
  - to:
    - operation:
        ports: ["26257"]
then all the traffic is blocked (including the traffic between cockroachdb nodes).
What am I doing wrong here?

You are running into the same problem someone else had a couple of days ago. Each of your AuthorizationPolicy objects actually contains two separate rules:
- the service account downstream-serviceaccount (or cockroachdb-serviceaccount in the other policy) from the default namespace can access the workloads labelled app: cockroachdb in the default namespace, on any port; and
- any service account, from any namespace, can access the workloads labelled app: cockroachdb on port 26257.
Rules in a policy are OR-ed together, so either condition alone is enough to allow a request. To combine them into a single AND condition, put the from and to sections in the same rule:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: cockroachdb-authorizationpolicy-allow-from-cockroachdb
  namespace: default
spec:
  selector:
    matchLabels:
      app: cockroachdb
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/default/sa/cockroachdb-serviceaccount"]
    to:   # <- remove the dash from here
    - operation:
        ports: ["26257"]
Same with the other AuthorizationPolicy object. Also note that you don't need to explicitly create a DENY policy. When you create an ALLOW one, it automatically denies everything else.
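For completeness, here is how the downstream policy from the question might look with the same fix applied (a sketch based on the manifests above; if the HTTP UI on port 8080 should also be reachable from downstream, add that port to the same operation):
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: cockroachdb-authorizationpolicy-allow-from-downstream
  namespace: default
spec:
  selector:
    matchLabels:
      app: cockroachdb
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/default/sa/downstream-serviceaccount"]
    to:
    - operation:
        ports: ["26257"]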

Related

Ingress creating health check on HTTP instead of TCP

I am trying to run 3 containers in my GKE cluster. I have them exposed via a network load balancer, and on top of that I am using an Ingress so I can reach my services from different domains, with SSL certificates on them.
Here is the complete manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: us-east4-docker.pkg.dev/web:e856485 # docker image
        ports:
        - containerPort: 3000
        env:
        - name: NODE_ENV
          value: production
---
# DEPLOYMENT MANIFEST #
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cms
spec:
  replicas: 3
  selector:
    matchLabels:
      app: cms
  template:
    metadata:
      labels:
        app: cms
    spec:
      containers:
      - name: cms
        image: us-east4-docker.pkg.dev/cms:4e1fe2f # docker image
        ports:
        - containerPort: 8055
        env:
        - name: DB
          value: "postgres"
        - name: DB_HOST
          value: 10.142.0.3
        - name: DB_PORT
          value: "5432"
---
# DEPLOYMENT MANIFEST #
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: us-east4-docker.pkg.dev/api:4e1fe2f # docker image
        ports:
        - containerPort: 8080
        env:
        - name: HOST
          value: "0.0.0.0"
        - name: PORT
          value: "8080"
        - name: NODE_ENV
          value: production
---
# SERVICE MANIFEST #
apiVersion: v1
kind: Service
metadata:
  name: web-lb
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
  labels:
    app: web
spec:
  ports:
  - port: 3000
    protocol: TCP
    targetPort: 3000
  selector:
    app: web
  type: NodePort
---
# SERVICE MANIFEST #
apiVersion: v1
kind: Service
metadata:
  name: cms-lb
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
  labels:
    app: cms
spec:
  ports:
  - port: 8055
    protocol: TCP
    targetPort: 8055
  selector:
    app: cms
  type: NodePort
---
# SERVICE MANIFEST #
apiVersion: v1
kind: Service
metadata:
  name: api-lb
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
  labels:
    app: api
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: api
  type: NodePort
---
apiVersion: v1
data:
  tls.crt: abc
  tls.key: abc
kind: Secret
metadata:
  name: web-cert
type: kubernetes.io/tls
---
apiVersion: v1
data:
  tls.crt: abc
  tls.key: abc
kind: Secret
metadata:
  name: cms-cert
type: kubernetes.io/tls
---
apiVersion: v1
data:
  tls.crt: abc
  tls.key: abc
kind: Secret
metadata:
  name: api-cert
type: kubernetes.io/tls
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  annotations:
    # If the class annotation is not specified it defaults to "gce".
    kubernetes.io/ingress.class: "gce"
spec:
  tls:
  - secretName: api-cert
  - secretName: cms-cert
  - secretName: web-cert
  rules:
  - host: web-gke.dev
    http:
      paths:
      - pathType: ImplementationSpecific
        backend:
          service:
            name: web-lb
            port:
              number: 3000
  - host: cms-gke.dev
    http:
      paths:
      - pathType: ImplementationSpecific
        backend:
          service:
            name: cms-lb
            port:
              number: 8055
  - host: api-gke.dev
    http:
      paths:
      - pathType: ImplementationSpecific
        backend:
          service:
            name: api-lb
            port:
              number: 8080
The containers are accessible through the (network) load balancer, but from the Ingress (L7 load balancer) the health check is failing.
I tried manually editing the health checks from HTTP:80 to TCP:8080/8055/3000 for the three services, and that works.
But eventually the Ingress reverts them back to HTTP health checks and they fail again. I also tried using NodePort instead of LoadBalancer as the service type, but no luck.
Any help?
The first thing I would like to mention is that you should recheck your implementation. From what I see, you are creating an Ingress (which creates its own load balancer) on top of three services of type LoadBalancer, each of which will also create its own load balancer (assuming the default behaviour, unless you applied the well-known workaround of deleting the service's load balancer manually after it is created).
I don't think this is correct unless you need that design for some reason, so my suggestion is to change your service types to NodePort.
As for your actual question, what you are missing is a BackendConfig with a custom health-check configuration.
1- Create the BackendConfig:
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: api-lb-backendconfig
spec:
  healthCheck:
    checkIntervalSec: INTERVAL
    timeoutSec: TIMEOUT
    healthyThreshold: HEALTH_THRESHOLD
    unhealthyThreshold: UNHEALTHY_THRESHOLD
    type: PROTOCOL
    requestPath: PATH
    port: PORT
2- Reference this config in your service(s):
apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/backend-config: '{"ports": {
      "PORT_NAME_1":"api-lb-backendconfig"
      }}'
spec:
  ports:
  - name: PORT_NAME_1
    port: PORT_NUMBER_1
    protocol: TCP
    targetPort: TARGET_PORT
Once you apply these configurations, your Ingress's load balancer will be created with the BackendConfig "api-lb-backendconfig".
Consider this documentation page as your reference.
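As an illustration only, a filled-in version for the api-lb service from the question could look like this (the /health path, the interval values, and keying the annotation by the port number are assumptions; use whatever health endpoint the API actually exposes):
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: api-lb-backendconfig
spec:
  healthCheck:
    checkIntervalSec: 15
    timeoutSec: 5
    healthyThreshold: 1
    unhealthyThreshold: 2
    type: HTTP
    requestPath: /health   # assumed health endpoint
    port: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: api-lb
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
    cloud.google.com/backend-config: '{"ports": {"8080": "api-lb-backendconfig"}}'
  labels:
    app: api
spec:
  type: NodePort
  selector:
    app: api
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080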

How to create an HTTPS route to a Service that is listening on HTTPS with Traefik and Kubernetes

I'm a newbie with Kubernetes and Traefik.
I followed this tutorial:
https://docs.traefik.io/user-guides/crd-acme/
and changed it to use my Scala service, which runs over HTTPS on port 9463.
I'm trying to deploy my Scala service with Kubernetes and Traefik.
When I forward directly to the service :
kubectl port-forward service/core-service 8001:9463
And I perform a curl -k 'https://localhost:8001/health' :
I get the "{Message:Ok}"
But when I perform a port forward to traefik
kubectl port-forward service/traefik 9463:9463 -n default
And perform a curl -k 'https://ejemplo.com:9463/tls/health'
I get an "Internal server error"
I guess the problem is that my "core-service" is listening over HTTPS, which is why I added scheme: https.
I tried to find the solution in the documentation but it is confusing.
Those are my yml files:
Services.yaml
apiVersion: v1
kind: Service
metadata:
  name: traefik
spec:
  ports:
  - protocol: TCP
    name: admin
    port: 8080
  - protocol: TCP
    name: websecure
    port: 9463
  selector:
    app: traefik
---
apiVersion: v1
kind: Service
metadata:
  name: core-service
spec:
  ports:
  - protocol: TCP
    name: websecure
    port: 9463
  selector:
    app: core-service
Deployment.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: default
  name: traefik-ingress-controller
---
kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: default
  name: traefik
  labels:
    app: traefik
spec:
  replicas: 1
  selector:
    matchLabels:
      app: traefik
  template:
    metadata:
      labels:
        app: traefik
    spec:
      serviceAccountName: traefik-ingress-controller
      containers:
      - name: traefik
        image: traefik:v2.0
        args:
        - --api.insecure
        - --accesslog
        - --entrypoints.websecure.Address=:9463
        - --providers.kubernetescrd
        - --certificatesresolvers.default.acme.tlschallenge
        - --certificatesresolvers.default.acme.email=foo@you.com
        - --certificatesresolvers.default.acme.storage=acme.json
        # Please note that this is the staging Let's Encrypt server.
        # Once you get things working, you should remove that whole line altogether.
        - --certificatesresolvers.default.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory
        ports:
        - name: websecure
          containerPort: 9463
        - name: admin
          containerPort: 8080
---
kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: default
  name: core-service
  labels:
    app: core-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: core-service
  template:
    metadata:
      labels:
        app: core-service
    spec:
      containers:
      - name: core-service
        image: core-service:0.1.4-SNAPSHOT
        ports:
        - name: websecure
          containerPort: 9463
        livenessProbe:
          httpGet:
            port: 9463
            scheme: HTTPS
            path: /health
          initialDelaySeconds: 10
IngressRoute2.yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: ingressroutetls
  namespace: default
spec:
  entryPoints:
  - websecure
  routes:
  - match: Host(`ejemplo.com`) && PathPrefix(`/tls`)
    kind: Rule
    services:
    - name: core-service
      port: 9463
      scheme: https
  tls:
    certResolver: default
From the docs
A TLS router will terminate the TLS connection by default. However,
the passthrough option can be specified to set whether the requests
should be forwarded "as is", keeping all data encrypted.
In your case SSL passthrough needs to be enabled, because the pod is expecting HTTPS traffic.
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: ingressroutetls
  namespace: default
spec:
  entryPoints:
  - websecure
  routes:
  - match: Host(`ejemplo.com`) && PathPrefix(`/tls`)
    kind: Rule
    services:
    - name: core-service
      port: 9463
      scheme: https
  tls:
    certResolver: default
    passthrough: true
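After re-applying the IngressRoute, the setup can be re-tested with the same commands from the question (assuming the backend still serves /health over HTTPS):
kubectl apply -f IngressRoute2.yaml
kubectl port-forward service/traefik 9463:9463 -n default
curl -k 'https://ejemplo.com:9463/tls/health'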

Enable HTTPS on a local domain with Kubernetes / Traefik Ingress

When I test my Spring boot app without docker, I test it with:
https://localhost:8081/points/12345/search
And it works great. I get an error if I use http
Now I want to deploy it locally with Kubernetes, at the URL https://sge-api.local.
When I use http, I get the same error as when I don't use Docker.
But when I use https, I get:
<html><body><h1>404 Not Found</h1></body></html>
Here is my deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: sge-api-local
  name: sge-api-local
  namespace: sge
spec:
  selector:
    matchLabels:
      app: sge-api-local
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: sge-api-local
    spec:
      containers:
      - image: sge_api:local
        name: sge-api-local
Here is my ingress:
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: sge-ingress
  namespace: sge
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: sge-api.local
    http:
      paths:
      - backend:
          serviceName: sge-api-local
          servicePort: 8081
  tls:
  - secretName: sge-api-tls-cert
with:
kubectl -n kube-system create secret tls sge-api-tls-cert --key=../certs/privkey.pem --cert=../certs/cert1.pem
Finally, here is my service:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: sge-api-local
  name: sge-api-local
  namespace: sge
spec:
  ports:
  - name: "8081"
    port: 8081
  selector:
    app: sge-api-local
What should I do?
EDIT:
traefik-config.yml:
kind: ConfigMap
apiVersion: v1
metadata:
  name: traefik-config
data:
  traefik.toml: |
    # traefik.toml
    defaultEntryPoints = ["http","https"]
    [entryPoints]
      [entryPoints.http]
        address = ":80"
        [entryPoints.http.redirect]
          entryPoint = "https"
      [entryPoints.https]
        address = ":443"
        [entryPoints.https.tls]
traefik-deployment:
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: traefik-ingress-controller
  labels:
    k8s-app: traefik-ingress-lb
spec:
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      containers:
      - image: traefik:1.7
        name: traefik-ingress-lb
        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        - name: https
          containerPort: 443
          hostPort: 443
        - name: admin
          containerPort: 8080
          hostPort: 8080
        securityContext:
          capabilities:
            drop:
            - ALL
            add:
            - NET_BIND_SERVICE
        args:
        - --api
        - --kubernetes
        - --logLevel=INFO
traefik-service.yml
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - protocol: TCP
    port: 80
    name: web
  - protocol: TCP
    port: 8080
    name: admin
Please make sure that you have enabled TLS. Let's Encrypt is a free TLS Certificate Authority (CA) and you can use it to automatically request and renew certificates for public domain names. Make sure that you have created the ConfigMap, and check that you followed every step of the Traefik setup: traefik-ingress-controller.
Then you have to specify which hosts the created secret applies to, e.g.:
tls:
- secretName: sge-api-tls-cert
  hosts:
  - sge-api.local
Remember to include the port assigned to the host when opening the link.
In your case it should be: https://sge-api.local:8081
When using SSL offloading outside of the cluster, it may be useful to enforce a redirect to HTTPS even when there is no TLS certificate available.
You could also add these annotations to the Ingress configuration file:
traefik.ingress.kubernetes.io/frontend-entry-points: http, https
traefik.ingress.kubernetes.io/redirect-entry-point: https
to enable a redirect to another entry point for that frontend (e.g. HTTPS).
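Putting both suggestions together, the Ingress from the question might end up roughly like this (a sketch; the annotations assume Traefik 1.7 as in the DaemonSet above):
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: sge-ingress
  namespace: sge
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/frontend-entry-points: http,https
    traefik.ingress.kubernetes.io/redirect-entry-point: https
spec:
  rules:
  - host: sge-api.local
    http:
      paths:
      - backend:
          serviceName: sge-api-local
          servicePort: 8081
  tls:
  - secretName: sge-api-tls-cert
    hosts:
    - sge-api.local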
Let me know if it helps.

Deploying ambassador to kubernetes

I've been learning how to deploy Ambassador on Kubernetes with Minikube from this tutorial, and that works: I can see the page for a successfully installed Ambassador. The main problem is that when I change the image of the UI so that the link should open another app, it still opens the same Ambassador success page.
Previous tour.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: tour
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      kind: Mapping
      name: tour-ui_mapping
      prefix: /
      service: tour:5000
      ---
      apiVersion: ambassador/v1
      kind: Mapping
      name: tour-backend_mapping
      prefix: /backend/
      service: tour:8080
      labels:
        ambassador:
        - request_label:
          - backend
spec:
  ports:
  - name: ui
    port: 5000
    targetPort: 5000
  - name: backend
    port: 8080
    targetPort: 8080
  selector:
    app: tour
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tour
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tour
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: tour
    spec:
      containers:
      - name: tour-ui
        image: quay.io/datawire/tour:ui-0.2.1
        ports:
        - name: http
          containerPort: 5000
      - name: quote
        image: quay.io/datawire/tour:backend-0.2.1
        ports:
        - name: http
          containerPort: 8080
        resources:
          limits:
            cpu: "0.1"
            memory: 100Mi
Modified tour.yaml (removed the backend and changed the image)
---
apiVersion: v1
kind: Service
metadata:
  name: tour
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      kind: Mapping
      name: tour-ui_mapping
      prefix: /
      service: tour:5000
spec:
  ports:
  - name: ui
    port: 5000
    targetPort: 5000
  selector:
    app: tour
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tour
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tour
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: tour
    spec:
      containers:
      - name: tour-ui
        image: quay.io/integreatly/tutorial-web-app:2.10.5
        ports:
        - name: http
          containerPort: 5000
        resources:
          limits:
            cpu: "0.1"
            memory: 100Mi
ambassador-service.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: ambassador
spec:
  type: NodePort
  externalTrafficPolicy: Local
  ports:
  - port: 80
    targetPort: 8080
  selector:
    service: ambassador
Please help; I'm really confused about what is causing this and how I can resolve it.
What you're doing above is replacing the tour Kubernetes service and deployment with your own alternative. This is a bit of an unusual pattern; I'd suspect that there's probably a typo somewhere which means Kubernetes isn't picking up on your change.
I'd suggest creating a unique test Kubernetes service and deployment, and pointing the image in your deployment to your new image. Then you can register a new prefix with Ambassador.
You can also look at the Ambassador diagnostics (see https://www.getambassador.io/reference/diagnostics/) which will tell you which routes are registered with Ambassador.
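As a sketch of that approach (the name, prefix, and image below are placeholders; the container port is taken from your modified manifest, so adjust it if the image listens elsewhere):
---
apiVersion: v1
kind: Service
metadata:
  name: webapp-test
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      kind: Mapping
      name: webapp-test_mapping
      prefix: /webapp/
      service: webapp-test:5000
spec:
  ports:
  - name: ui
    port: 5000
    targetPort: 5000
  selector:
    app: webapp-test
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp-test
  template:
    metadata:
      labels:
        app: webapp-test
    spec:
      containers:
      - name: webapp-test
        image: quay.io/integreatly/tutorial-web-app:2.10.5
        ports:
        - name: http
          containerPort: 5000
With that in place, the new app should be reachable under /webapp/ while the original tour service keeps serving /.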

Istio Origin Authentication Using JWT does not work

I've been applying an authentication policy with JWT to my test service. I followed this guide and it worked as expected. But when I tried using a different pod image, it did not work, even though almost everything is the same.
Is anyone else facing this issue, or does anyone know why it did not work in my case?
Thank you very much!
These are my configuration files:
Deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hostname
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hostname
      version: v1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: hostname
        version: v1
    spec:
      containers:
      - image: rstarmer/hostname:v1
        imagePullPolicy: Always
        name: hostname
        resources: {}
      restartPolicy: Always
Service
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: hostname
  name: hostname
spec:
  ports:
  - name: http
    port: 8001
    targetPort: 80
  selector:
    app: hostname
Gateway
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: hostname-gateway
  namespace: foo
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
VirtualService
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: hostname-vs
  namespace: foo
spec:
  hosts:
  - "*"
  gateways:
  - hostname-gateway
  http:
  - route:
    - destination:
        port:
          number: 8001
        host: hostname.foo.svc.cluster.local
Policy
---
apiVersion: "authentication.istio.io/v1alpha1"
kind: "Policy"
metadata:
  name: "jwt-example"
  namespace: foo
spec:
  targets:
  - name: hostname
  origins:
  - jwt:
      issuer: "testing@secure.istio.io"
      jwksUri: "https://raw.githubusercontent.com/istio/istio/release-1.0/security/tools/jwt/samples/jwks.json"
  principalBinding: USE_ORIGIN
As stated by the OP on the Istio forums, you need to respect the naming convention for the port name of your service.
It can be either "http" or "http2".
For instance, this is valid:
apiVersion: v1
kind: Service
metadata:
  name: somename
  namespace: auth
spec:
  selector:
    app: someapp
  ports:
  - port: 80
    targetPort: 3000
    name: http
And this is not:
apiVersion: v1
kind: Service
metadata:
  name: somename
  namespace: auth
spec:
  selector:
    app: someapp
  ports:
  - port: 80
    targetPort: 3000
Not specifying a name for the port is not valid.
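More generally, Istio's convention accepts either the bare protocol name or a <protocol>-<suffix> form, so a port name like http-web (the suffix here is arbitrary) is also recognized:
ports:
- port: 80
  targetPort: 3000
  name: http-web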