domain registry with Istio ServiceEntry - kubernetes

I am testing a scenario running apps on Istio.
I can't modify the legacy code, so I can't change the request URLs.
For that, I made some simple test apps.
I am not sure whether this scenario is possible with Istio.
I have two applications (order and customer).
In the order app, there is code that calls the customer app with the URL "http://customer-app:8080/customer".
Now I want to run both apps on Kubernetes with Istio, and I don't want to change my code, especially the calling URLs.
(I know I can call each service by its service name,
but I want the customer service to be named "customer-service", not "customer-app".)
I found that a ServiceEntry can register a MESH_INTERNAL host.
I made a YAML file like this:
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: customer-service-entry
spec:
  hosts:
  - http://customer-app:8080
  location: MESH_INTERNAL
  ports:
  - number: 8080
    name: http
    protocol: http
  endpoints:
  - address: customer-service
    ports:
      http: 8080
Is this scenario possible using a virtual domain?

You should be able to do this using a VirtualService:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: customer-app
spec:
  hosts:
  - customer-app
  http:
  - match:
    - port: 8080
    route:
    - destination:
        host: customer-service
        port:
          number: 9080
You will also need to define a dummy K8S service for customer-app, so that it's resolvable:
apiVersion: v1
kind: Service
metadata:
  name: customer-app
spec:
  ports:
  - port: 8080
    name: http
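For completeness, this also assumes a real customer-service Service exists in front of the customer pods. A minimal sketch (the app: customer selector, the target port 8080, and port 9080 matching the VirtualService destination above are all assumptions):
apiVersion: v1
kind: Service
metadata:
  name: customer-service
spec:
  selector:
    app: customer        # assumption: the customer pods carry this label
  ports:
  - port: 9080           # matches the destination port in the VirtualService above
    targetPort: 8080     # assumption: the customer container listens on 8080
    name: http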

Related

Istio: How to redirect to HTTPS except for /.well-known/acme-challenge

I want the traffic that comes to my cluster as HTTP to be redirected to HTTPS. However, the cluster receives requests from hundreds of domains that change dynamically (creating new certs with cert-manager). So I want the redirect to happen only when the URI doesn't have the prefix /.well-known/acme-challenge.
I am using a gateway that listens on 443 and another gateway that listens on 80 and sends the HTTP traffic to an acme-solver virtual service.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: default-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - site1.com
    port:
      name: https-site1.com
      number: 443
      protocol: HTTPS
    tls:
      credentialName: cert-site1.com
      mode: SIMPLE
  - hosts:
    - site2.com
    port:
      name: https-site2.com
      number: 443
      protocol: HTTPS
    tls:
      credentialName: cert-site2.com
      mode: SIMPLE
  ...
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: acme-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts:
    - '*'
    port:
      name: http
      number: 80
      protocol: HTTP
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: acme-solver
  namespace: istio-system
spec:
  hosts:
  - "*"
  gateways:
  - acme-gateway
  http:
  - match:
    - uri:
        prefix: /.well-known/acme-challenge
    route:
    - destination:
        host: acme-solver.istio-system.svc.cluster.local
        port:
          number: 8089
  - redirect:
      authority: # Should redirect to https://$HOST, but I don't know how to get the $HOST
How can I do that using Istio?
Looking into the documentation:
The HTTP-01 challenge can only be done on port 80. Allowing clients to specify arbitrary ports would make the challenge less secure, and so it is not allowed by the ACME standard.
As a workaround, please consider using the DNS-01 challenge:
a) it only makes sense to use DNS-01 challenges if your DNS provider has an API you can use to automate updates.
b) with this approach you should consider the additional security risks stated in the docs:
Pros:
You can use this challenge to issue certificates containing wildcard domain names.
It works well even if you have multiple web servers.
Cons:
Keeping API credentials on your web server is risky.
Your DNS provider might not offer an API.
Your DNS API may not provide information on propagation times.
As mentioned here:
In order to be automatic, though, the software that requests the certificate will also need to be able to modify the DNS records for that domain. In order to modify the DNS records, that software will also need to have access to the credentials for the DNS service (e.g. the login and password, or a cryptographic token), and those credentials will have to be stored wherever the automation takes place. In many cases, this means that if the machine handling the process gets compromised, so will the DNS credentials, and this is where the real danger lies.
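If you do go the DNS-01 route, a cert-manager issuer might look roughly like the sketch below (assuming cert-manager v1 and a Cloudflare-managed zone; the issuer name, email address, and secret names are placeholders):
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-dns01            # placeholder name
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com         # placeholder contact address
    privateKeySecretRef:
      name: letsencrypt-dns01-account-key
    solvers:
    - dns01:
        cloudflare:
          apiTokenSecretRef:         # Secret holding the DNS provider API token
            name: cloudflare-api-token
            key: api-token
With such an issuer in place, cert-manager can solve challenges (including for wildcard certificates) without anything having to listen on port 80.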
I would also suggest another approach: use a simple nginx pod that redirects all HTTP traffic to HTTPS.
There is a tutorial on Medium with an nginx configuration you might try to use.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  nginx.conf: |
    server {
      listen 80 default_server;
      server_name _;
      return 301 https://$host$request_uri;
    }
---
apiVersion: v1
kind: Service
metadata:
  name: redirect
  labels:
    app: redirect
spec:
  ports:
  - port: 80
    name: http
  selector:
    app: redirect
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redirect
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redirect
  template:
    metadata:
      labels:
        app: redirect
    spec:
      containers:
      - name: redirect
        image: nginx:stable
        resources:
          requests:
            cpu: "100m"
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /etc/nginx/conf.d
          name: config
      volumes:
      - name: config
        configMap:
          name: nginx-config
Additionally, you would have to change your virtual service to send all traffic except the prefix /.well-known/acme-challenge to the nginx redirect service.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: acme-solver
  namespace: istio-system
spec:
  hosts:
  - "*"
  gateways:
  - acme-gateway
  http:
  - name: "acmesolver"
    match:
    - uri:
        prefix: /.well-known/acme-challenge
    route:
    - destination:
        host: acme-solver.istio-system.svc.cluster.local
        port:
          number: 8089
  - name: "nginx"
    route:
    - destination:
        host: redirect

Istio VirtualService always 404s

I've had a single VirtualService running through a simple http Gateway and everything works fine:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: satc-gateway
spec:
selector:
istio: ingressgateway # use istio default controller
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: report
spec:
hosts:
- report-create.default.svc.cluster.local
gateways:
- satc-gateway
http:
- match:
- uri:
prefix: /v1/report
route:
- destination:
host: report-create
port:
number: 8080
I have since added a second VirtualService for the second service in the cluster, through the same Gateway:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: routing
spec:
  hosts:
  - routing-svc.default.svc.cluster.local
  gateways:
  - satc-gateway
  http:
  - match:
    - uri:
          prefix: /v1/routing
    route:
    - destination:
        host: routing-svc.default.svc.cluster.local
        port:
          protcol: http
          number: 8080
Whilst the first VirtualService still seems to serve, the second consistently 404s. I have added a simple endpoint "/v1/routing/test" that returns a Hello World message, to make sure the issue doesn't stem from application logic.
Both VirtualServices seem to be running as expected, though only requests to report return anything other than a 404:
report [satc-gateway] [report-create.default.svc.cluster.local] 2h
routing [satc-gateway] [routing-svc.default.svc.cluster.local] 18m
I have tried removing the first deployment altogether to make sure it isn't hoovering up all the traffic coming into the cluster, and I still get 404s. I've also tried hitting the routes from inside the pod, with a successful response; both services use container port 8080, which I have also triple-checked.
I seem to have hit a bit of a wall with this, and I'm unsure what the next best steps are to debug correctly.
I have analyzed both of your YAMLs and found a few issues in the second one:
there are 2 extra spaces in the prefix: line
there is a typo in the 'protocol:' line (it's missing an o)
there are some differences in the destination: part
Please make the adjustments and let me know if that helped.
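For reference, a corrected version of the second VirtualService might look like this (a sketch, assuming routing-svc really serves plain HTTP on port 8080):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: routing
spec:
  hosts:
  - routing-svc.default.svc.cluster.local
  gateways:
  - satc-gateway
  http:
  - match:
    - uri:
        prefix: /v1/routing   # normal indentation, no extra spaces
    route:
    - destination:
        host: routing-svc.default.svc.cluster.local
        port:
          number: 8080        # the port selector only takes a number, so the misspelled line can simply be dropped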

Granular policy over Istio egress traffic

I have a Kubernetes cluster with Istio installed. I have two pods, for example sleep1 and sleep2 (containers with curl installed). I want to configure Istio to permit traffic from sleep1 to www.google.com and forbid traffic from sleep2 to www.google.com.
So, I created a ServiceEntry:
---
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: google
spec:
  hosts:
  - www.google.com
  - google.com
  ports:
  - name: http-port
    protocol: HTTP
    number: 80
  resolution: DNS
a Gateway:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: istio-egressgateway
spec:
  selector:
    istio: egressgateway
  servers:
  - port:
      number: 80
      name: http-port
      protocol: HTTP
    hosts:
    - "*"
and two VirtualServices (mesh -> egress, egress -> google):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: mesh-to-egress
spec:
  hosts:
  - www.google.com
  - google.com
  gateways:
  - mesh
  http:
  - match:
    - gateways:
      - mesh
      port: 80
    route:
    - destination:
        host: istio-egressgateway.istio-system.svc.cluster.local
        port:
          number: 80
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: egress-to-google-int
spec:
  hosts:
  - www.google.com
  - google.com
  gateways:
  - istio-egressgateway
  http:
  - match:
    - gateways:
      - istio-egressgateway
      port: 80
    route:
    - destination:
        host: google.com
        port:
          number: 80
      weight: 100
As a result, I can curl Google from both pods.
So, the question again: can I permit traffic from sleep1 to www.google.com and forbid traffic from sleep2 to www.google.com? I know this is possible with Kubernetes NetworkPolicy and black/white lists (https://istio.io/docs/tasks/policy-enforcement/denial-and-list/), but both methods forbid (permit) traffic only to specific IPs. Or maybe I missed something?
You can create different service accounts for sleep1 and sleep2. Then you create an RBAC policy limiting access to the istio-egressgateway, so sleep2 will not be able to send any egress traffic through the egress gateway. This should be combined with blocking any egress traffic from the cluster that does not originate from the egress gateway; see https://istio.io/docs/tasks/traffic-management/egress/egress-gateway/#additional-security-considerations.
If you want to allow sleep2 to access other services, but not www.google.com, you can use Mixer rules and handlers; see this blog post. It shows how to allow a certain URL path only to a specific service account.
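A rough sketch of that first suggestion, using the rbac.istio.io/v1alpha1 API that matches this Istio version (the namespaces and the sleep1 service account name are assumptions, and RBAC also has to be enabled with a ClusterRbacConfig):
apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRole
metadata:
  name: egressgateway-access
  namespace: istio-system
spec:
  rules:
  - services: ["istio-egressgateway.istio-system.svc.cluster.local"]
    methods: ["*"]
---
apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRoleBinding
metadata:
  name: egressgateway-access-binding
  namespace: istio-system
spec:
  subjects:
  # assumption: sleep1 runs under the "sleep1" service account in the "default" namespace
  - user: "cluster.local/ns/default/sa/sleep1"
  roleRef:
    kind: ServiceRole
    name: egressgateway-access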
I think you're probably on the right track with the denial option.
It is also not limited to IPs, as you can see from the attribute-based examples in Simple Denial and Attribute-based Denial.
So, for example, we can write a simple denial rule for sleep2 -> www.google.com:
apiVersion: "config.istio.io/v1alpha2"
kind: handler
metadata:
name: denySleep2Google
spec:
compiledAdapter: denier
params:
status:
code: 7
message: Not allowed
---
apiVersion: "config.istio.io/v1alpha2"
kind: instance
metadata:
name: denySleep2GoogleRequest
spec:
compiledTemplate: checknothing
---
apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
name: denySleep2
spec:
match: destination.service.host == "www.google.com" && source.labels["app"]=="sleep2"
actions:
- handler: denySleep2Google
instances: [ denySleep2GoogleRequest ]
Please check and see if this helps.
Also, the "match" field in the "rule" entry is based on the Istio expression language over request attributes. Some of the vocabulary can be found in this doc.

Cannot allow external traffic through ISTIO

I am trying to set up Istio, and I need to whitelist a few ports to allow non-mTLS traffic from the outside world to come in through specific ports to a few pods running in a local Kubernetes cluster.
I am unable to find a successful way of doing it.
I tried a ServiceEntry, a Policy and a DestinationRule and didn't succeed.
Help is highly appreciated.
version.BuildInfo{Version:"1.1.2", GitRevision:"2b1331886076df103179e3da5dc9077fed59c989", User:"root", Host:"35adf5bb-5570-11e9-b00d-0a580a2c0205", GolangVersion:"go1.10.4", DockerHub:"docker.io/istio", BuildStatus:"Clean", GitTag:"1.1.1"}
Service Entry
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-traffic
  namespace: cloud-infra
spec:
  hosts:
  - "*.cluster.local"
  ports:
  - number: 50506
    name: grpc-xxx
    protocol: TCP
  location: MESH_EXTERNAL
  resolution: NONE
You need to add a DestinationRule and a Policy:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: destinationrule-test
spec:
  host: service-name
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
    portLevelSettings:
    - port:
        number: 8080
      tls:
        mode: DISABLE
---
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: policy-test
spec:
  targets:
  - name: service-name
    ports:
    - number: 8080
  peers:
This has been tested with Istio 1.0, but it will probably also work with Istio 1.1. It is heavily inspired by the documentation: https://istio.io/help/ops/setup/app-health-check/
From your question, I understood that you want to control your ingress traffic and allow certain ports to reach the services running in your mesh/cluster from outside, but your configuration is for egress traffic.
In order to control and allow ports to your services from outside, you can follow these steps.
1. Make sure that a containerPort is included in your deployment/pod configuration. For more info
2. You need a Service pointing to your backends/pods (a sketch of the Deployment and Service for steps 1 and 2 is shown after step 4). For more info about Kubernetes Services.
3. Then, in your Istio-enabled cluster, create a Gateway similar to the configuration below:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: your-service-gateway
  namespace: foo-namespace # Use same namespace with backend service
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
  - port:
      number: 80
      name: HTTP
      protocol: HTTP
    hosts:
    - "*"
4. Then configure a route to your service for traffic entering via this gateway by creating a VirtualService:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: your-service
  namespace: foo-namespace # Use same namespace with backend service
spec:
  hosts:
  - "*"
  gateways:
  - your-service-gateway # define gateway name
  http:
  - match:
    - uri:
        prefix: "/"
    route:
    - destination:
        port:
          number: 3000 # Backend service port
        host: your-service # Backend service name
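As referenced in step 2, the backend pieces for steps 1 and 2 might look roughly like this (a sketch; the image, labels, and port 3000 are assumptions chosen to match the VirtualService destination above):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: your-service
  namespace: foo-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: your-service
  template:
    metadata:
      labels:
        app: your-service
    spec:
      containers:
      - name: your-service
        image: your-image:latest     # placeholder image
        ports:
        - containerPort: 3000        # step 1: expose the container port
---
apiVersion: v1
kind: Service
metadata:
  name: your-service
  namespace: foo-namespace
spec:
  selector:
    app: your-service                # step 2: Service selecting the pods above
  ports:
  - port: 3000
    targetPort: 3000
    name: http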
Hope it helps.

Mounting applications on different paths with Istio

We have installed Istio on our Kubernetes cluster on AWS (we are using EKS). We have deployed several applications, such as Airflow, Jenkins, Grafana, etc., and we are able to reach them with port-forward, so they are working as expected.
Now, what we would like to achieve is the possibility of mounting the applications on specific paths, so that we can access them via a unique entrypoint.
Here is an example to explain what I mean by "unique entrypoint":
http://a517f7.eu-west-1.elb.amazonaws.com/airflow/ and you can see the Airflow dashboard
http://a517f7.eu-west-1.elb.amazonaws.com/jenkins/ and you can see the Jenkins dashboard
http://a517f7.eu-west-1.elb.amazonaws.com/grafana/ and you can see the Grafana dashboard
What we tried is the following:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: apps-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: airflow-virtual-service
spec:
  hosts:
  - "*"
  gateways:
  - apps-gateway
  http:
  - match:
    - uri:
        prefix: /airflow
    route:
    - destination:
        host: webserver.airflow.svc.cluster.local
        port:
          number: 8080
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana-virtual-service
spec:
  hosts:
  - "*"
  gateways:
  - apps-gateway
  http:
  - match:
    - uri:
        prefix: /grafana
    route:
    - destination:
        host: grafana.grafana.svc.cluster.local
        port:
          number: 3306
---
and so on
In this way, I keep getting an Airflow 404, lots of redirects, or similar, depending on the service.
Do you know how to achieve such a result with Istio? We are also open to using Nginx or Traefik.
So, based on the comments:
The best way to expose a frontend and a backend is to separate both with a wildcard domain like *.domain.com.
For the frontend:
my-app.domain.com/
For the backend services, it will always start with a subdomain like api:
api.domain.com/foo
For a backend service you will need to use a rewrite if the service is not serving the path itself:
rewrite:
  uri: "/"
Check the Istio docs for more info about rewrite: https://istio.io/docs/reference/config/istio.networking.v1alpha3/#Destination
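For example, combining the question's path-based VirtualService with the rewrite mentioned above might look roughly like this (a sketch; whether Grafana works behind a stripped prefix also depends on its own root URL configuration, which is an assumption here):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana-virtual-service
spec:
  hosts:
  - "*"
  gateways:
  - apps-gateway
  http:
  - match:
    - uri:
        prefix: /grafana
    rewrite:
      uri: "/"                # strip the /grafana prefix before forwarding
    route:
    - destination:
        host: grafana.grafana.svc.cluster.local
        port:
          number: 3306        # port taken from the config in the question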