I just set up Istio for the first time on a service, and I cannot get the Gateway/VirtualService working.
Here is my configuration, which follows the docs:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: dragon-gateway
spec:
  selector:
    # use Istio default gateway implementation
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: dragon
spec:
  hosts:
  - "vtest.westus.cloudapp.azure.com"
  gateways:
  - dragon-gateway
  http:
  - match:
    - uri:
        prefix: /
    - uri:
        prefix: /status
    - uri:
        prefix: /delay
    - uri:
        prefix: /api/values
    route:
    - destination:
        host: dragon
        port:
          number: 80
The kubectl describe looks fine:
Name:         dragon-gateway
Namespace:    default
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"networking.istio.io/v1alpha3","kind":"Gateway","metadata":{"annotations":{},"name":"dragon-gateway","namespace":"default"},...
API Version:  networking.istio.io/v1alpha3
Kind:         Gateway
Metadata:
  Creation Timestamp:  2019-09-22T22:54:31Z
  Generation:          1
  Resource Version:    723889
  Self Link:           /apis/networking.istio.io/v1alpha3/namespaces/default/gateways/dragon-gateway
  UID:                 f0738082-dd8b-11e9-b099-e259debf6109
Spec:
  Selector:
    Istio:  ingressgateway
  Servers:
    Hosts:
      *
    Port:
      Name:      http
      Number:    80
      Protocol:  HTTP

Name:         dragon
Namespace:    default
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"networking.istio.io/v1alpha3","kind":"VirtualService","metadata":{"annotations":{},"name":"dragon","namespace":"default"},"...
API Version:  networking.istio.io/v1alpha3
Kind:         VirtualService
Metadata:
  Creation Timestamp:  2019-09-22T22:54:31Z
  Generation:          1
  Resource Version:    723891
  Self Link:           /apis/networking.istio.io/v1alpha3/namespaces/default/virtualservices/dragon
  UID:                 f0988c3c-dd8b-11e9-b099-e259debf6109
Spec:
  Gateways:
    dragon-gateway
  Hosts:
    vtest.westus.cloudapp.azure.com
  Http:
    Match:
      Uri:
        Prefix:  /
      Uri:
        Prefix:  /status
      Uri:
        Prefix:  /delay
      Uri:
        Prefix:  /api/values
    Route:
      Destination:
        Host:  dragon
        Port:
          Number:  80
The service is configured as follows:
apiVersion: v1
kind: Service
metadata:
  namespace: flight
  name: dragon
  labels:
    app: dragon
    release: r1
    version: 1.0.0
spec:
  type: ClusterIP
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 80
  selector:
    app: dragon
    release: r1
The Dockerfile is quite simple:
FROM microsoft/dotnet:latest AS runtime
# ports
EXPOSE 80
EXPOSE 443
WORKDIR /
COPY /publish /app
RUN dir /app
WORKDIR /app
FROM runtime AS final
ENTRYPOINT ["dotnet", "dragon.dll"]
Please let me know if you have any idea. I tried to curl from another pod, and it works. The problem is when using the external IP, or the internal IP assigned to the gateway: neither works.
Thanks in advance for any clue.
Edit:
Adding more info about the curl:
curl 40.118.228.111/api/values -v
* Trying 40.118.228.111...
* TCP_NODELAY set
* Connected to 40.118.228.111 (40.118.228.111) port 80 (#0)
> GET /api/values HTTP/1.1
> Host: 40.118.228.111
> User-Agent: curl/7.55.1
> Accept: */*
>
< HTTP/1.1 404 Not Found
< date: Sun, 22 Sep 2019 23:27:54 GMT
< server: istio-envoy
< content-length: 0
<
* Connection #0 to host 40.118.228.111 left intact
Adding proxy status as well:
NAME CDS LDS EDS RDS PILOT VERSION
dragon-dc789456b-g9fxb.flight SYNCED SYNCED SYNCED (50%) SYNCED istio-pilot-689d75bc8-j7j8m 1.1.3
istio-ingressgateway-5c4f9f859d-nj9sq.istio-system SYNCED SYNCED SYNCED (100%) SYNCED istio-pilot-689d75bc8-j7j8m 1.1.3
Looks like you put the dragon VirtualService and the dragon-gateway in the default namespace?
Because short service names are resolved relative to the local namespace (much like a pod's resolv.conf search path), the host dragon resolves to dragon.default.svc.cluster.local, but your Service lives in the flight namespace. Instead, use the FQDN for the dragon service:
...
route:
- destination:
    host: dragon.flight.svc.cluster.local
    port:
      number: 80
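To confirm the ingress gateway actually picked up the route, you can dump its route table; this is a diagnostic sketch using the gateway pod name from your proxy-status output (http.80 is Istio's conventional route name for an HTTP server on port 80):
istioctl proxy-config routes istio-ingressgateway-5c4f9f859d-nj9sq.istio-system --name http.80 -o json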
You have also configured Istio to route based on hostname, but your curl command is using the IP address, so the request carries Host: 40.118.228.111 and matches no route (hence the 404 from istio-envoy). Either configure DNS with an A record (vtest.westus.cloudapp.azure.com -> 40.118.228.111), or force curl to send the correct Host header:
curl http://vtest.westus.cloudapp.azure.com/api/values --resolve vtest.westus.cloudapp.azure.com:80:40.118.228.111
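Equivalently, you can set the Host header by hand against the same IP:
curl -H "Host: vtest.westus.cloudapp.azure.com" http://40.118.228.111/api/values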
Hi, I am not an expert on Istio, but after some investigation: when working with hosts on an Istio Gateway and VirtualService, you should pass the hostname in the Host HTTP header,
like this:
curl -I -HHost:httpbin.example.com http://$INGRESS_HOST:$INGRESS_PORT/
This is needed because your ingress Gateway is configured to handle "httpbin.example.com", but in your test environment you have no DNS binding for that host and are simply sending your request to the ingress IP.
From another point of view, this setting must match the VirtualService:
a VirtualService must be bound to the gateway and must have one or more hosts that match the hosts specified in a server.
Specifying '*' binds all hostnames.
You can also restrict VirtualServices, or specify multiple host rules per server, using this approach, as in the sketch below.
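For example, a server entry restricted to a single hostname (a sketch reusing the host from this question; the hosts of any VirtualService bound to this gateway must then match it):
servers:
- port:
    number: 80
    name: http
    protocol: HTTP
  hosts:
  - "vtest.westus.cloudapp.azure.com"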
You can find more advanced examples in the Istio Server reference.
Hope this helps.
Related
We have a requirement to forward requests to a service outside of the cluster:
/ -> some service outside cluster (someapi.com)
/api -> service inside cluster
When I hit https://someapi.com/health directly, it gives me a proper response, but not through the ingress.
Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: custom-ingress
  annotations:
    kubernetes.io/ingress.class: haproxy
status:
  loadBalancer: {}
spec:
  tls:
  - hosts:
    - mytenant.com
    secretName: tenant-secret
  rules:
  - host: mytenant.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: external-service
            port:
              number: 80
Service
apiVersion: v1
kind: Service
metadata:
  name: external-service
status:
  loadBalancer: {}
spec:
  type: ExternalName
  sessionAffinity: None
  externalName: someapi.com
curl -ikv https://mytenant.com/health is giving me
503 Service Unavailable
No server is available to handle this request.
Connection #0 to host mytenant.com left intact
I tried nslookup and it does resolve to an IP:
/usr/src/app # nslookup external-service
Server: 901.63.1.11
Address: 901.63.1.11:53
external-service.default.svc.cluster.local canonical name = someapi.com
someapi.com canonical name = proxy-aws-can-55.elb.eu-central-1.amazonaws.com
Name: proxy-aws-can-55.elb.eu-central-1.amazonaws.com
Address: 92.220.220.137
Name: proxy-aws-can-55.elb.eu-central-1.amazonaws.com
Address: 33.43.161.163
Name: proxy-aws-can-55.elb.eu-central-1.amazonaws.com
Address: 98.200.178.250
external-service.default.svc.cluster.local canonical name = someapi.com
someapi.com canonical name = proxy-aws-can-55.elb.eu-central-1.amazonaws.com
When I changed the external-service port to 80 (I also tried changing the service's targetPort to 443):
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ExternalName
  sessionAffinity: None
  externalName: someapi.com
It keeps looping with a 301:
< HTTP/2 301
< content-length: 0
< location: https://mytenant.com/health
< strict-transport-security: max-age=15768000
(With the same setup, if I just change the externalName to httpbin.org, it works fine.)
When I changed the ingress (port) and the service (port and targetPort) to 443, I got:
REFUSED_STREAM, retrying a fresh connect
Connection died, tried 5 times before giving up
Closing connection 5
curl: (56) Connection died, tried 5 times before giving up
I also tried setting the Host header as mentioned here, https://www.haproxy.com/documentation/kubernetes/latest/configuration/ingress/#set-host, but no luck; still 301.
Please help me understand how I should make this work. Many thanks!
I got a working configuration: I changed the ingress (port) and the service (port/targetPort) to 443, and added the annotation ingress.kubernetes.io/backend-protocol: h1-ssl on the ingress.
I believe I was getting the 301 because the upstream service expected an HTTPS request; with the backend-protocol annotation, the call initiated after SSL termination at the HAProxy controller was HTTPS, which satisfied the upstream. Also, I think the value of the Service targetPort doesn't matter for an ExternalName service.
Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: custom-ingress
  annotations:
    ingress.kubernetes.io/backend-protocol: h1-ssl
    kubernetes.io/ingress.class: haproxy
status:
  loadBalancer: {}
spec:
  tls:
  - hosts:
    - mytenant.com
    secretName: tenant-secret
  rules:
  - host: mytenant.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: external-service
            port:
              number: 443
Service
apiVersion: v1
kind: Service
metadata:
  name: external-service
status:
  loadBalancer: {}
spec:
  ports:
  - protocol: TCP
    port: 443
    targetPort: 443
  type: ExternalName
  sessionAffinity: None
  externalName: someapi.com
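With this in place, the original check goes through end to end (same curl as before; this assumes someapi.com itself serves /health over HTTPS):
curl -ikv https://mytenant.com/health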
Facing an issue when accessing a service deployed in AKS through Istio using a Host header.
Below is the configuration for the Gateway and VirtualService in Istio (based on the httpbin sample); I am testing the sample service using a Host header as described here: https://istio.io/latest/docs/tasks/traffic-management/ingress/ingress-control/#before-you-begin
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: httpbin-gateway
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "httpbin.example.com"
    - "40.76.148.29"
    - "52.171.230.140"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
  - "httpbin.example.com"
  - "40.76.148.29"
  - "52.171.230.140"
  gateways:
  - httpbin-gateway
  http:
  - match:
    - uri:
        prefix: /status
    - uri:
        prefix: /delay
    route:
    - destination:
        port:
          number: 8000
        host: httpbin
The issue is that accessing the above service with curl works fine without the Host header, whereas including the Host header throws an HTTP 403 error:
curl -s -I "http://52.171.230.140:80/status/200"
curl -s -I -HHost:httpbin.example.com "http://52.171.230.140:80/status/200"
Note: There is no Mutual TLS (PeerAuthentication) set on any of the namespaces.
Can someone point out if anything is wrong in the setup, and the steps to fix the issue?
I have Istio installed and can see it on Rancher. I have Keycloak installed as well. I am trying to connect the two and have a gateway set up so I can access the Keycloak front end through a URL.
In my Keycloak manifest I have:
# Source: keycloak/templates/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: keycloak
.
. #Many other lines here
.
        ports:
        - name: http
          containerPort: 8080
          protocol: TCP
I then set up a gateway with the command:
kubectl apply -f networking/custom-gateway.yaml
And in my custom-gateway.yaml file I have:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: keycloak-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: keycloak
spec:
  hosts:
  - "*"
  gateways:
  - keycloak-gateway
  http:
  - match:
    - uri:
        exact: /keycloak
    rewrite:
      uri: "/" # Non context aware backend
    route:
    - destination:
        host: keycloak
        port:
          number: 80
    websocketUpgrade: true
Now when I try to access the URL http://node_ip_address:port/keycloak, I find that I cannot reach the front end. I have verified that Keycloak is installed and the pod is up and running on Rancher.
I also have my Istio instance connected to the bookinfo application and am able to run the bookinfo-gateway and reach http://node_ip_address:port/productpage with a gateway that looks like the one described in the docs. I am trying to set up the same kind of gateway, only for Keycloak.
What am I doing wrong in my YAML files, and how do I fix it? Do I have the ports connected correctly? Any help is appreciated.
As far as I can see, you should fix your VirtualService.
I prepared a small example with Helm and the Keycloak Helm chart.
Save this as keycloak.yaml; you can configure your Keycloak admin password here:
keycloak:
  service:
    type: ClusterIP
  password: mykeycloakadminpasswd
  persistence:
    deployPostgres: true
    dbVendor: postgres
Install Keycloak with Helm and the values prepared above:
helm upgrade --install keycloak stable/keycloak -f keycloak.yaml
Create the Gateway and VirtualService:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: keycloak-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: keycloak
spec:
  hosts:
  - "*"
  gateways:
  - keycloak-gateway
  http:
  - match:
    - uri:
        prefix: /auth
    - uri:
        prefix: /keycloak
    rewrite:
      uri: /auth
    route:
    - destination:
        host: keycloak-http
        port:
          number: 80
The VirtualService's route destination host is the name of the Keycloak Kubernetes Service:
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
keycloak-http ClusterIP 10.0.14.36 <none> 80/TCP 22m
You should be able to connect to Keycloak via ingress_gateway_ip/keycloak or ingress_gateway_ip/auth and log in with the Keycloak credentials; in my example that's login keycloak and password mykeycloakadminpasswd.
Note that you need the /auth prefix match as well, since /auth is Keycloak's default web context; the /keycloak prefix here simply rewrites to /auth.
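As a quick check from outside the cluster (this assumes a default Istio install, where istio-ingressgateway is a LoadBalancer Service):
INGRESS_IP=$(kubectl -n istio-system get svc istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -I http://$INGRESS_IP/keycloak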
I am running a service on an Azure AKS Kubernetes cluster.
Istio version: 1.3.2
My service is listening on both ports 80 and 443:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-kubernetes ClusterIP 10.0.43.233 <none> 80/TCP,443/TCP 28h
The istio-gateway.yaml file looks like this:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: my-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
    #tls:
    #  httpsRedirect: true
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - "*"
    tls:
      mode: SIMPLE
      credentialName: "mycert" # must be the same as secret
      privateKey: sds
      serverCertificate: sds
      #serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
      #privateKey: /etc/istio/ingressgateway-certs/tls.key
The secret was created with the command below; I have a custom certificate that I uploaded to the cluster:
kubectl create -n istio-system secret generic mycert \
--from-file=key=/home/user/istio-1.3.2/ssl/myprivate.key \
--from-file=cert=/home/user/istio-1.3.2/ssl/mycert.pem
The mycert.pem file includes both the server certificate and the intermediate certificate.
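To check which chain the gateway actually serves, this kind of probe helps (a sketch; it assumes mydomain.com resolves to the ingress IP):
openssl s_client -connect mydomain.com:443 -servername mydomain.com </dev/null | openssl x509 -noout -subject -issuer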
The VirtualService looks like this:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: hello-kubernetes
spec:
  hosts:
  - "mydomain.com"
  gateways:
  - my-gateway
  http:
  - match:
    - uri:
        prefix: /hello-k8s
    route:
    - destination:
        host: hello-kubernetes
If I curl it over HTTP, it gives me a 200 OK response; however, when I curl the HTTPS port, it gives HTTP/1.1 503 Service Unavailable.
The error message in the browser is:
NET::ERR_CERT_AUTHORITY_INVALID
Any idea what is missing?
The error is fixed by adding:
port:
  number: 80
to the destination part of the VirtualService. The hello-kubernetes Service exposes two ports (80 and 443), and when a Service has more than one port, the route's destination must specify which one to use.
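In context, the route section of the VirtualService above becomes:
route:
- destination:
    host: hello-kubernetes
    port:
      number: 80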
I am using Helm to install Istio 1.0.0 with --set grafana.enabled=true.
To access the Grafana dashboard, I have to do port forwarding with kubectl, which works okay. However, I want to access it via a public IP, hence I am using this gateway YAML file:
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: grafana-gateway
  namespace: agung-ns
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
  - port:
      number: 15031
      name: http-grafana
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana-global-route
  namespace: agung-ns
spec:
  hosts:
  - "grafana.domain"
  gateways:
  - grafana-gateway
  - mesh
  http:
  - route:
    - destination:
        host: "grafana.istio-system"
        port:
          number: 3000
      weight: 100
I tried to curl it, but it returns a 503 status, which means something is wrong with the routing logic and/or my configuration above:
curl -HHost:grafana.domain http://<my-istioingressgateway-publicip>:15031 -I
HTTP/1.1 503 Service Unavailable
date: Tue, 14 Aug 2018 13:04:27 GMT
server: envoy
transfer-encoding: chunked
Any idea?
I think the problem is that you refer to a service in a different namespace. You need to use the FQDN: grafana.istio-system.svc.cluster.local.
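Applied to your VirtualService, the destination would read:
http:
- route:
  - destination:
      host: "grafana.istio-system.svc.cluster.local"
      port:
        number: 3000
    weight: 100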
If you need Istio, Grafana, Prometheus, and Jaeger integrated, exposed through a gateway, and with security enabled, you can check the project I am working on:
https://github.com/kyma-project/kyma
I exposed it like this:
grafana.yml
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: grafana-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "my.dns.com"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana-vts
  namespace: istio-system
spec:
  hosts:
  - "my.dns.com"
  gateways:
  - grafana-gateway
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: grafana
        port:
          number: 3000
then:
kubectl apply -f grafana.yml
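Then, without a DNS record for the host, you can test by forcing the Host header (replace <ingress_ip> with the external IP of istio-ingressgateway):
curl -I -H "Host: my.dns.com" http://<ingress_ip>/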