I am using Istio 1.0.6 to implement authentication/authorization with JSON Web Tokens (JWT). I followed most of the examples from the documentation, but I am not getting the expected outcome. Here are my settings:
Service
kubectl describe services hello
Name: hello
Namespace: agud
Selector: app=hello
Type: ClusterIP
IP: 10.247.173.177
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
Endpoints: 172.16.0.193:8080
Session Affinity: None
Gateway
kubectl describe gateway
Name: hello-gateway
Namespace: agud
Kind: Gateway
Metadata:
Cluster Name:
Creation Timestamp: 2019-03-15T13:40:43Z
Resource Version: 1374497
Self Link:
/apis/networking.istio.io/v1alpha3/namespaces/agud/gateways/hello-gateway
UID: ee483065-4727-11e9-a712-fa163ee249a9
Spec:
Selector:
Istio: ingressgateway
Servers:
Hosts:
*
Port:
Name: http
Number: 80
Protocol: HTTP
Virtual Service
kubectl describe virtualservices
Name: hello
Namespace: agud
API Version: networking.istio.io/v1alpha3
Kind: VirtualService
Metadata:
Cluster Name:
Creation Timestamp: 2019-03-18T07:38:52Z
Generation: 0
Resource Version: 2329507
Self Link:
/apis/networking.istio.io/v1alpha3/namespaces/agud/virtualservices/hello
UID: e099b560-4950-11e9-82a1-fa163ee249a9
Spec:
Gateways:
hello-gateway
Hosts:
*
Http:
Match:
Uri:
Exact: /hello
Uri:
Exact: /secured
Route:
Destination:
Host: hello.agud.svc.cluster.local
Port:
Number: 8080
Policy
kubectl describe policies
Name: jwt-hello
Namespace: agud
API Version: authentication.istio.io/v1alpha1
Kind: Policy
Metadata:
Cluster Name:
Creation Timestamp: 2019-03-18T07:45:33Z
Generation: 0
Resource Version: 2331381
Self Link:
/apis/authentication.istio.io/v1alpha1/namespaces/agud/policies/jwt-hello
UID: cf9ed2aa-4951-11e9-9f64-fa163e804eca
Spec:
Origins:
Jwt:
Audiences:
hello
Issuer: testing#secure.istio.io
Jwks Uri: https://raw.githubusercontent.com/istio/istio/release-1.0/security/tools/jwt/samples/jwks.json
Principal Binding: USE_ORIGIN
Targets:
Name: hello.agud.svc.cluster.local
RESULT
I am expecting to get a 401 error but I am getting a 200. What is wrong with my configuration and how do I fix this?
curl $INGRESS_HOST/hello -s -o /dev/null -w "%{http_code}\n"
200
You have:
Port: <unset> 8080/TCP
For Istio routing and security, you must set the port name to http or http-<something>.
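A minimal sketch of the corrected Service (assuming the existing app=hello selector and port 8080 shown in the describe output above):

apiVersion: v1
kind: Service
metadata:
  name: hello
  namespace: agud
spec:
  type: ClusterIP
  selector:
    app: hello
  ports:
  - name: http        # naming the port http lets Istio treat the traffic as HTTP
    port: 8080
    targetPort: 8080
    protocol: TCP

After re-applying the Service, the same curl without a token should return 401 instead of 200.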
I tried with Istio 1.1. I got a 503 rather than a 401.
Related
I have created a Kubernetes Ingress with a FrontendConfig and an ECDSA P-384 TLS certificate on Google Cloud Platform. A few seconds into the creation process I received the following error:
Error syncing to GCP: error running load balancer syncing routine:
loadbalancer -default--ingress-****** does not exist:
Cert creation failures -
k8s2-cr---***** Error:googleapi:
Error 400: The ECDSA curve is not supported.,
sslCertificateUnsupportedCurve
Why is the ECDSA curve not supported? Is there any way to enable support for it?
Command used to create the TLS secret:
kubectl create secret tls tls --key [key-path] --cert [cert-path]
Frontend-config:
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: frontend-config
spec:
  redirectToHttps:
    enabled: true
    responseCodeName: MOVED_PERMANENTLY_DEFAULT
Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  namespace: default
  labels:
    kind: ingress
  annotations:
    networking.gke.io/v1beta1.FrontendConfig: frontend-config
spec:
  tls:
  - hosts:
    - '*.mydomain.com'
    secretName: tls
  rules:
  - host: mydomain.com
    http:
      paths:
      - path: /*
        pathType: ImplementationSpecific
        backend:
          service:
            name: spa-ingress-service
            port:
              number: 80
  - host: api.mydomain.com
    http:
      paths:
      - path: /*
        pathType: ImplementationSpecific
        backend:
          service:
            name: api-ingress-service
            port:
              number: 80
spa services:
# SERVICE LOAD BALANCER
apiVersion: v1
kind: Service
metadata:
  name: spa-service
  labels:
    app/name: spa
spec:
  type: LoadBalancer
  selector:
    app/template: spa
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: http
---
# SERVICE NODE PORT - FOR INGRESS
apiVersion: v1
kind: Service
metadata:
  name: spa-ingress-service
  labels:
    app/name: ingress.spa
spec:
  type: NodePort
  selector:
    app/template: spa
  ports:
  - name: https
    protocol: TCP
    port: 80
    targetPort: http
api services:
# SERVICE LOAD BALANCER
apiVersion: v1
kind: Service
metadata:
  name: api-service
  labels:
    app/name: api
spec:
  type: LoadBalancer
  selector:
    app/template: api
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: http
---
# SERVICE NODE PORT - FOR INGRESS
apiVersion: v1
kind: Service
metadata:
  name: api-ingress-service
  labels:
    app/name: ingress.api
spec:
  type: NodePort
  selector:
    app/template: api
  ports:
  - name: https
    protocol: TCP
    port: 80
    targetPort: http
kubectl describe ingress response:
The GCP load balancer supports RSA-2048 or ECDSA P-256 certificates. DownstreamTlsContexts also support multiple TLS certificates, which may be a mix of RSA and P-256 ECDSA certificates.
The error is caused by the P-384 certificate currently in use, which is not supported; it needs to be replaced with a P-256 certificate.
For additional information refer to the Load Balancing Overview.
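One way to fix it is to regenerate the key pair on the supported P-256 curve and recreate the secret. A rough sketch with openssl (a self-signed certificate purely for illustration; file names and the domain are placeholders):

# Generate a P-256 (prime256v1) private key instead of P-384
openssl ecparam -name prime256v1 -genkey -noout -out tls.key
# Issue a certificate for the domain (self-signed here only for illustration)
openssl req -new -x509 -key tls.key -out tls.crt -days 365 -subj "/CN=mydomain.com"
# Recreate the Kubernetes TLS secret referenced by the Ingress
kubectl delete secret tls
kubectl create secret tls tls --key tls.key --cert tls.crt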
I used the command below to bring up the pod:
kubectl create deployment grafana --image=docker.io/grafana/grafana:5.4.3 -n monitoring
Then I used the command below to create the ClusterIP service:
kubectl expose deployment grafana --type=ClusterIP --port=80 --target-port=3000 --protocol=TCP -n monitoring
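For reference, that expose command produces roughly this Service (a sketch; kubectl create deployment labels the pods app=grafana, which the Service selector reuses):

apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: monitoring
spec:
  type: ClusterIP
  selector:
    app: grafana
  ports:
  - protocol: TCP
    port: 80          # service port
    targetPort: 3000  # Grafana container port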
Then I applied the following VirtualService:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana
spec:
  hosts:
  - "*"
  gateways:
  - cogtiler-gateway.skydeck
  http:
  - match:
    - uri:
        prefix: /grafana
    route:
    - destination:
        port:
          number: 3000
        host: grafana
kubectl apply -f grafana-virtualservice.yaml -n monitoring
Output:
virtualservice.networking.istio.io/grafana created
Now, when I try to access it, I get the error below from Grafana:
If you're seeing this, Grafana has failed to load its application files
1. This could be caused by your reverse proxy settings.
2. If you host Grafana under a subpath, make sure your grafana.ini root_url setting includes the subpath.
3. If you have a local dev build, make sure you build the frontend using: npm run dev, npm run watch, or npm run build.
4. Sometimes restarting grafana-server can help.
The easiest, out-of-the-box solution is to configure it with a dedicated Grafana host and a / prefix.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: grafana-gateway
  namespace: monitoring
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http-grafana
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana-vs
  namespace: monitoring
spec:
  hosts:
  - "grafana.example.com"
  gateways:
  - grafana-gateway
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: grafana
        port:
          number: 80
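With that in place, you can test it by pointing DNS for grafana.example.com at the ingress gateway, or by overriding the Host header; the IP below is a placeholder:

curl -s -o /dev/null -w "%{http_code}\n" -H "Host: grafana.example.com" http://<ingress-gateway-ip>/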
As you mentioned in the comments, you want to use path-based routing, something like my.com/grafana; that is also possible to configure. You can use an Istio rewrite for that.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: grafana-gateway
  namespace: monitoring
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http-grafana
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana-vs
  namespace: monitoring
spec:
  hosts:
  - "*"
  gateways:
  - grafana-gateway
  http:
  - match:
    - uri:
        prefix: /grafana
    rewrite:
      uri: /
    route:
    - destination:
        host: grafana
        port:
          number: 80
But, according to this GitHub issue, you would additionally have to configure Grafana for that; without the proper Grafana configuration it won't work correctly.
I found a way to configure Grafana with a different URL using the GF_SERVER_ROOT_URL environment variable in the Grafana deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: grafana
  name: grafana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: grafana
    spec:
      containers:
      - image: docker.io/grafana/grafana:5.4.3
        name: grafana
        env:
        - name: GF_SERVER_ROOT_URL
          value: "%(protocol)s://%(domain)s/grafana/"
        resources: {}
There is also a VirtualService and Gateway for that deployment.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: grafana-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http-grafana
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana-vs
spec:
  hosts:
  - "*"
  gateways:
  - grafana-gateway
  http:
  - match:
    - uri:
        prefix: /grafana/
    rewrite:
      uri: /
    route:
    - destination:
        host: grafana
        port:
          number: 80
You need to create a Gateway to allow routing between the istio-ingressgateway and your VirtualService.
Something along the lines of:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: ingress
  namespace: istio-system
spec:
  selector:
    # Make sure that the istio-ingressgateway pods have this label
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - my.domain.com
You also need a DNS entry for your domain (my.domain.com) that points to the IP address of your istio-ingressgateway.
When your browser hits my.domain.com, the request reaches the istio-ingressgateway, which inspects the Host header of the request and forwards it to Grafana according to the VirtualService rules.
You can run kubectl get svc -n istio-system | grep istio-ingressgateway to get the public IP of your ingress gateway.
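For example, to pull out just the external IP (assuming the istio-ingressgateway service is of type LoadBalancer):

kubectl -n istio-system get svc istio-ingressgateway \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'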
If you want to enable TLS, you need to provision a TLS certificate for your domain (easiest with cert-manager). Then you can use an HTTPS redirect in your Gateway, like so:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: ingress
  namespace: whatever
spec:
  selector:
    # Make sure that the istio-ingressgateway pods have this label
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - my.domain.com
    tls:
      httpsRedirect: true
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - my.domain.com
    tls:
      mode: SIMPLE
      # Name of the secret containing the TLS certificate + key. The secret must exist
      # in the same namespace as the istio-ingressgateway (probably istio-system).
      # This secret can be created by cert-manager, or you can create a self-signed
      # certificate and add it manually to the browser's trusted certificates.
      credentialName: my-domain-tls
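As a rough sketch, a cert-manager Certificate that would populate the my-domain-tls secret could look like this (cert-manager must already be installed; the letsencrypt-prod ClusterIssuer name is a placeholder):

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: my-domain-tls
  namespace: istio-system   # must live next to the istio-ingressgateway
spec:
  secretName: my-domain-tls # referenced by credentialName in the Gateway
  dnsNames:
  - my.domain.com
  issuerRef:
    name: letsencrypt-prod  # placeholder ClusterIssuer
    kind: ClusterIssuer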
Then your VirtualService:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana
spec:
  hosts:
  - "my.domain.com"
  gateways:
  - ingress
  http:
  - match:
    - uri:
        prefix: /grafana
    route:
    - destination:
        port:
          number: 3000
        host: grafana
Today, when I try to expose my service using a route, I am getting a 502 Bad Gateway. My OpenShift cluster version is 3.11. I used oc expose my-service to expose my service via a route. The route is described below.
Name: hello-world
Namespace: uvindu-k8soperator
Labels: app=hello-world
Annotations: openshift.io/host.generated: true
API Version: route.openshift.io/v1
Kind: Route
Metadata:
Creation Timestamp: 2020-03-31T05:45:05Z
Resource Version: 15860504
Self Link: /apis/route.openshift.io/v1/namespaces/uvindu-k8soperator/routes/hello-world
UID: c5e6e8cc-7312-11ea-b6ad-fa163e41f92e
Spec:
Host: hello-world-uvindu-k8soperator.apps.novalocal
Port:
Target Port: port-9095
To:
Kind: Service
Name: hello-world
Weight: 100
Wildcard Policy: None
Status:
Ingress:
Conditions:
Last Transition Time: 2020-03-31T05:45:05Z
Status: True
Type: Admitted
Host: hello-world-uvindu-k8soperator.apps.novalocal
Router Name: router
Wildcard Policy: None
Events: <none>
I am running a service on an Azure AKS Kubernetes cluster.
Istio version: 1.3.2
My service listens on both port 80 and 443:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-kubernetes ClusterIP 10.0.43.233 <none> 80/TCP,443/TCP 28h
The istio-gateway.yaml file looks like this:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: my-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
    #tls:
    #  httpsRedirect: true
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - "*"
    tls:
      mode: SIMPLE
      credentialName: "mycert" # must be the same as secret
      privateKey: sds
      serverCertificate: sds
      #serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
      #privateKey: /etc/istio/ingressgateway-certs/tls.key
The secret was created with the command below; I have a custom certificate that I uploaded to the cluster:
kubectl create -n istio-system secret generic mycert \
--from-file=key=/home/user/istio-1.3.2/ssl/myprivate.key \
--from-file=cert=/home/user/istio-1.3.2/ssl/mycert.pem
The mycert.pem file includes both the certificate and the intermediate certificate.
The VirtualService file is:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: hello-kubernetes
spec:
  hosts:
  - "mydomain.com"
  gateways:
  - my-gateway
  http:
  - match:
    - uri:
        prefix: /hello-k8s
    route:
    - destination:
        host: hello-kubernetes
If I curl it over HTTP, I get a 200 OK response; however, when I curl it on the HTTPS port, it gives HTTP/1.1 503 Service Unavailable.
The error message in the browser is:
NET::ERR_CERT_AUTHORITY_INVALID
Any idea what is missing?
The error was fixed by adding:
port:
  number: 80
to the destination section of the VirtualService file.
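For completeness, a sketch of the corrected VirtualService with the destination port spelled out (all other fields exactly as in the question):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: hello-kubernetes
spec:
  hosts:
  - "mydomain.com"
  gateways:
  - my-gateway
  http:
  - match:
    - uri:
        prefix: /hello-k8s
    route:
    - destination:
        host: hello-kubernetes
        port:
          number: 80   # the missing piece: pick the service port explicitly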
I am using Helm to install Istio 1.0.0 with --set grafana.enabled=true.
To access the Grafana dashboard, I have to do port forwarding with kubectl, which works fine. However, I want to access it via a public IP, so I am using this Gateway YAML file:
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: grafana-gateway
  namespace: agung-ns
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
  - port:
      number: 15031
      name: http-grafana
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana-global-route
  namespace: agung-ns
spec:
  hosts:
  - "grafana.domain"
  gateways:
  - grafana-gateway
  - mesh
  http:
  - route:
    - destination:
        host: "grafana.istio-system"
        port:
          number: 3000
      weight: 100
I tried to curl it, but it returns an error status, which means something is wrong with the routing logic and/or my configuration above.
curl -HHost:grafana.domain http://<my-istioingressgateway-publicip>:15031 -I
HTTP/1.1 503 Service Unavailable
date: Tue, 14 Aug 2018 13:04:27 GMT
server: envoy
transfer-encoding: chunked
Any idea?
I think the problem is that you are referring to a service in a different namespace. You need to use the FQDN: grafana.istio-system.svc.cluster.local.
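In other words, the destination in your VirtualService would become something like this (a sketch of just the changed part):

route:
- destination:
    host: "grafana.istio-system.svc.cluster.local"
    port:
      number: 3000
  weight: 100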
If you need Istio, Grafana, Prometheus, and Jaeger integrated, exposed through a gateway, and with security enabled, you can check out the project I am working on:
https://github.com/kyma-project/kyma
I exposed it like this:
grafana.yml
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: grafana-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "my.dns.com"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana-vts
  namespace: istio-system
spec:
  hosts:
  - "my.dns.com"
  gateways:
  - grafana-gateway
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: grafana
        port:
          number: 3000
then:
kubectl apply -f grafana.yml
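After applying, you can verify the route through the ingress gateway (assuming DNS for my.dns.com points at the gateway's public IP, or by overriding the Host header as in the question):

curl -H "Host: my.dns.com" http://<my-istioingressgateway-publicip>/ -I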