How do I segregate internal and external loads using Istio Ingress?

On my Kubernetes cluster I would like to segregate access to internal and external apps. In my example below I have app1 and app2 both exposed to the internet, but I would like only app1 exposed to the internet and app2 available only to users in the internal vnet.
My initial thought was to just make a new service (blue box) and use the "internal=true" attribute so my cloud provider creates the internal IP and I'm good. The issue is that the gateway points to the deployment (pods), so it seems like, to create an internal ingress, I need to copy all 3 blue boxes.
Is there an easy way to tie in a new service and gateway without a new deployment (blue boxes), or maybe restrict external access via policy?

Based on my knowledge, you can create a VirtualService to do that.
The reserved word mesh is used to imply all the sidecars in the mesh. When this field is omitted, the default gateway (mesh) will be used, which would apply the rule to all sidecars in the mesh. If a list of gateway names is provided, the rules will apply only to the gateways. To apply the rules to both gateways and sidecars, specify mesh as one of the gateway names.
You can check my other answer on Stack Overflow, which contains a full reproduction of someone's problem: I made a VirtualService with a gateway to allow access from outside (in the example, just a curl). If you want to make the service reachable only inside the mesh, just delete that gateway and leave only the mesh one, like in the example below.
Specifically, the VirtualService:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: nginxvirt
spec:
  gateways:
  - mesh # inside cluster
  hosts:
  - nginx.default.svc.cluster.local # inside cluster
  http:
  - name: match-myuid
    match:
    - uri:
        prefix: /
    rewrite:
      uri: /
    route:
    - destination:
        host: nginx.default.svc.cluster.local
        port:
          number: 80
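For comparison, the app that should stay reachable from the internet (app1 in the question) would keep the same kind of VirtualService but also list an Istio Gateway in its gateways field. A minimal sketch, assuming a hypothetical Gateway named nginx-gateway bound to the default istio-ingressgateway and an external host nginx.com:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: nginx-gateway
spec:
  selector:
    istio: ingressgateway # default Istio ingress gateway pods
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "nginx.com"
In the VirtualService, the external host and the gateway are simply listed alongside the mesh entry:
  gateways:
  - mesh          # sidecars inside the cluster
  - nginx-gateway # traffic entering through the ingress gateway
  hosts:
  - nginx.default.svc.cluster.local # inside cluster
  - nginx.com                       # outside cluster
This way app2 keeps only mesh in its VirtualService and is never exposed on the ingress gateway, while app1 is reachable both ways.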
And some external and internal tests
External
With an additional gateway to allow external traffic:
curl -v -H "host: nginx.com" loadbalancer_istio_ingress_gateway_ip/
HTTP/1.1 200 OK
Without an additional gateway to allow external traffic, just the mesh one:
curl -v -H "host: nginx.com" loadbalancer_istio_ingress_gateway_ip/
HTTP/1.1 404 Not Found
Internal
I created a basic Ubuntu pod for tests:
kubectl exec -ti ubu1 -- /bin/bash
With the mesh gateway:
curl -v nginx/
HTTP/1.1 200 OK
Without the mesh gateway:
curl -v nginx/
HTTP/1.1 404 Not Found
Based on that, you can use the "mesh" gateway, which will work only inside the mesh and won't allow external requests.
I can provide a full set of YAMLs if you want to test it yourself.
Let me know if that answers your question or if you have any more questions.

Related

How to use grpcurl to query a remote service behind an Ambassador/Emissary gateway?

I am able to use the following command, locally, to work with my gRPC service:
grpcurl -vv -plaintext localhost:9999 describe echok8s.EchoK8sService
echok8s.EchoK8sService is a service:
service EchoK8sService {
  rpc Echo ( .echok8s.EchoRequest ) returns ( .echok8s.EchoResponse ) {
    option (.google.api.http) = { post:"/v1/echok8s" body:"*" };
  }
}
When I create a listener, host, and mapping for Emissary-Ingress and deploy my service, I cannot invoke it, nor can I simply query it via reflection. I've verified that the services are working properly, so I must not be sending the request properly with grpcurl. Assuming the following mapping to a simple, plaintext Echo service, can you please indicate how to tell the gateway to route the message to the backend service? This seems like a simple question, but I have not found any working examples online and the documentation is sparse.
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: echok8s-mapping
  namespace: mikie
spec:
  grpc: true # Informs Emissary to use HTTP/2 so requests can communicate with the backend pod.
  hostname: "*"
  prefix: /echok8s.EchoK8sService/ # The prefix is built from the grpc package and service names -- see the proto file.
  rewrite: /echok8s.EchoK8sService/ # Tells Emissary to forward requests to the grpc backend pod.
  service: echok8s # This is the name of the grpc backend pod in Kubernetes.
Assuming that the Emissary gateway is at 123.456.789.111:9999, I've tried a lot of variations, but the following is the one that makes sense to me. The docs don't provide an actual example. In Emissary's non-gRPC mapping, the prefix simply sits between the IP address:port and the path. So, I assume that with gRPC, the prefix, which is made of the protobuf package and service name, would either be a suffix to the IP address and port or a prefix to the package and service name or ... ?
grpcurl -vv -plaintext 123.456.789.111:9999 describe /echok8s.EchoK8sService/echok8s.EchoK8sService
grpcurl -vv -plaintext 123.456.789.111:9999/echok8s.EchoK8sService/ describe echok8s.EchoK8sService
I'd appreciate your help to resolve this question. Thanks!
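Purely as an illustration of where the path goes: grpcurl itself only ever takes a bare host:port as the address and derives the HTTP/2 request path (/echok8s.EchoK8sService/Echo) from the fully-qualified method name passed as an argument, so the prefix is never appended to the address. A hedged example using the hypothetical gateway address from the question:
grpcurl -vv -plaintext 123.456.789.111:9999 describe echok8s.EchoK8sService
grpcurl -vv -plaintext -d '{}' 123.456.789.111:9999 echok8s.EchoK8sService/Echo
Note that describe relies on server reflection, which lives under its own path prefix (/grpc.reflection.v1alpha.ServerReflection/...), so a Mapping that only matches /echok8s.EchoK8sService/ would not route reflection traffic.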

How to expose ingress for Consul

I'm trying to add a Consul ingress to my project, and I'm using this GitHub repo as a doc for the UI and ingress: here. As you can see, unfortunately there is no ingress in the doc; there is an ingressGateways setting, which is not useful because it doesn't create an Ingress inside Kubernetes (it can just expose a URL to the outside).
I have searched a lot, and there are 2 possible options:
1: create an extra deployment for ingress
2: create a Consul Helm chart to add an ingress deployment
(unfortunately I couldn't find a proper solution for this on the Internet)
Here is an example Docker compose file which configures Traefik to expose an entrypoint named web which listens on TCP port 8000, and integrates Traefik with Consul's service catalog for endpoint discovery.
# docker-compose.yaml
---
version: "3.8"
services:
  consul:
    image: consul:1.8.4
    ports:
      - "8500:8500/tcp"
  traefik:
    image: traefik:v2.3.1
    ports:
      - "8000:8000/tcp"
    environment:
      TRAEFIK_PROVIDERS_CONSULCATALOG_CACHE: 'true'
      TRAEFIK_PROVIDERS_CONSULCATALOG_STALE: 'true'
      TRAEFIK_PROVIDERS_CONSULCATALOG_ENDPOINT_ADDRESS: http://consul:8500
      TRAEFIK_PROVIDERS_CONSULCATALOG_EXPOSEDBYDEFAULT: 'false'
      TRAEFIK_ENTRYPOINTS_web: 'true'
      TRAEFIK_ENTRYPOINTS_web_ADDRESS: ":8000"
Below is a Consul service registration file which registers an application named web that is listening on port 80. The service registration includes a couple of tags which instruct Traefik to expose traffic to the service (traefik.enable=true) over the entrypoint named web, and create the associated routing config for the service.
service {
  name = "web"
  port = 80
  tags = [
    "traefik.enable=true",
    "traefik.http.routers.web.entrypoints=web",
    "traefik.http.routers.web.rule=Host(`example.com`) && PathPrefix(`/myapp`)"
  ]
}
This can be registered into Consul using the CLI (consul service register web.hcl). Traefik will then discover this via the catalog integration, and configure itself based on the routing config specified in the tags.
HTTP requests received by Traefik on port 8000 with a Host header of example.com and a path of /myapp will be routed to the web service that was registered with Consul.
Example curl command.
curl --header "Host: example.com" http://127.0.0.1:8000/myapp
This is a relatively basic example that is suitable for dev/test. You will need to define additional Traefik config parameters if you are deploying into a production Consul environment which is typically secured by access control lists (ACLs).
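As a hedged example of one such parameter (the token value below is hypothetical), the Consul Catalog provider can be given an ACL token through the environment-variable form of Traefik's --providers.consulcatalog.endpoint.token option, added to the traefik service in the Compose file above:
    environment:
      # hypothetical ACL token with read access to the service catalog
      TRAEFIK_PROVIDERS_CONSULCATALOG_ENDPOINT_TOKEN: 'your-consul-acl-token'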
The ingressGateways config in the Helm chart is for deploying a Consul ingress gateway (powered by Envoy) for Consul service mesh. This is different from a Kubernetes Ingress.
Consul's ingress enables routing to applications running inside the service mesh, and is configured using an ingress-gateway configuration entry (or in the future using Consul CRDs). It cannot route to endpoints that exist outside the service mesh, such as Consul's API/UI endpoints.
If you need a generic ingress that can route to applications outside the mesh, I recommend using a solution such as Ambassador, Traefik, or Gloo. All three of these also support integrations with Consul for service discovery or service mesh.

Ambassador Edge Stack Questions

I'm getting a "no healthy upstream" error when accessing Ambassador. Pods/Services and the LoadBalancer all seem to be fine and healthy. Ambassador is running on top of AKS.
At the moment I have multiple services running in the Kubernetes cluster, and each service has its own Mapping with its own prefix. Is it possible to point multiple k8s services to the same Mapping so that I don't have too many prefixes, and all my k8s services sit under the same Ambassador prefix?
By default Ambassador is taking me through HTTPS, which is creating certificate issues. Although I will be bringing in HTTPS in the near future, for now I'm just looking to prove the concept, so how can I disable HTTPS and run Ambassador over HTTP only?
No healthy upstream typically means that, for whatever reason, Ambassador cannot find the service listed in the mapping. The first thing I usually do when I see this is to run kubectl exec -it -n ambassador {my_ambassador_pod_name} -- sh and try to curl -v my-service where "my-service" is the Kube DNS name of the service you are trying to hit. Depending on the response, it can give you some hints on why Ambassador is failing to see the service.
Mappings work on a 1-1 basis with services. If your goal, however, is to avoid prefix usage, there are other ways Ambassador can match to create routes. One common way I've seen is to use host-based routing (https://www.getambassador.io/docs/latest/topics/using/headers/host/) and create subdomains for either individual or logical sets of services.
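A minimal sketch of that approach, assuming a hypothetical service app1 that should be reached via the subdomain app1.example.com:
---
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: app1-by-host
spec:
  host: app1.example.com # route on the Host header instead of a long path prefix
  prefix: /
  service: app1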
AES defaults to redirecting to HTTPS, but this behavior can be overridden by applying a Host with insecure routing behavior. A very simple one that I commonly use is this:
---
apiVersion: getambassador.io/v2
kind: Host
metadata:
  name: wildcard
  namespace: ambassador
spec:
  hostname: "*"
  acmeProvider:
    authority: none
  requestPolicy:
    insecure:
      action: Route
  selector:
    matchLabels:
      hostname: wildcard

How to create https endpoint in Google Cloud from http based server for Kubernetes Engine?

I have been trying to create HTTPS endpoint in Google Cloud K8s environment.
I have built a Flask application in Python that is served by the Waitress production server on port 5000.
serve(app, host='0.0.0.0', port=5000, ipv6=False, threads=30)
I created a Dockerfile and pushed the image to the Google Cloud repository. Then I created a Kubernetes cluster with one workload containing this image. After that, I exposed it via an external IP by creating a LoadBalancer. (After pushing the image to the Google repository, everything is managed through the Google Cloud Console. I do not have any configuration file; it should all be done through the Google Cloud Console.)
Now, I do have an exposed IP and port number to access my application. Let's say this IP address and the port is: 11.111.11.222:1111. Now, I can access this IP via Postman and get a result.
My goal is, if possible, to expose this IP address via HTTPS as well, using any Google Cloud resources (redirection, creating an ingress, etc.).
So, in the end I want to reach the application through both http://11.111.11.222:1111 and https://11.111.11.222:1111
Any suggestions?
A LoadBalancer translates to a network load balancer. You can configure multiple ports for this e.g. 80 and 443. Then your application must handle the TLS part.
The ingress resource creates an HTTP(S) LB
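A minimal sketch of the LoadBalancer option, with hypothetical names and ports (the pod itself would have to terminate TLS on whatever port backs 443):
apiVersion: v1
kind: Service
metadata:
  name: flask-lb
spec:
  type: LoadBalancer
  selector:
    app: flask # hypothetical pod label
  ports:
  - name: http
    port: 80
    targetPort: 5000
  - name: https
    port: 443
    targetPort: 5443 # the app must serve TLS itself on this port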
From the GKE perspective you can try to configure Ingress resource with HTTPS enabled:
Steps:
Create a basic flask app inside a pod (for example purposes only)
Expose an app via service object of type nodePort
Create a certificate
Create an Ingress resource
Test
Additional information (added by EDIT)
Create a basic flask app inside a pod (for example purposes only)
Below is a flask script which will respond with <h1>Hello!</h1>:
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "<h1>Hello!</h1>"

if __name__ == "__main__":
    from waitress import serve
    serve(app, host="0.0.0.0", port=8080)
By default it will respond on port 8080.
Link to an answer with the above script.
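For completeness, a minimal Deployment sketch that could run such an app (the image name is hypothetical; note the app: ubuntu label, which matches the Service selector used in the next step):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ubuntu
  template:
    metadata:
      labels:
        app: ubuntu
    spec:
      containers:
      - name: flask
        image: gcr.io/YOUR_PROJECT/flask-app:latest # hypothetical image built from the script above
        ports:
        - containerPort: 8080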
Expose an app via service object of type nodePort
Assuming that the deployment is configured correctly with a working app inside, you can expose it via a Service object of type NodePort with the following YAML definition:
apiVersion: v1
kind: Service
metadata:
  name: flask-service
spec:
  type: NodePort
  selector:
    app: ubuntu
  ports:
    - name: flask-port
      protocol: TCP
      port: 80
      targetPort: 8080
Please make sure that:
selector is configured correctly
targetPort is pointing to the port the app is running on
Create a certificate
For the Ingress object to work with HTTPS you will need to provide a certificate. You can create it by following the official GKE documentation: Cloud.google.com: Managed certificates
Be aware that you will need a domain name to do that.
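A minimal sketch of such a certificate, assuming the name flask-certificate referenced by the Ingress annotation below (depending on your GKE version the apiVersion may be networking.gke.io/v1 or one of its beta predecessors):
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: flask-certificate
spec:
  domains:
  - DOMAIN.NAME # the domain you control, pointed at the static IP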
Create an Ingress resource
Below is an example Ingress resource which will point your requests to your flask application:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: flask-ingress
  annotations:
    networking.gke.io/managed-certificates: flask-certificate
    kubernetes.io/ingress.global-static-ip-name: flask-static-ip
spec:
  rules:
  - host: DOMAIN.NAME
    http:
      paths:
      - path: /
        backend:
          serviceName: flask-service
          servicePort: flask-port
Please take a specific look at the part of the YAML definition below and change it according to your case:
networking.gke.io/managed-certificates: flask-certificate
kubernetes.io/ingress.global-static-ip-name: flask-static-ip
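The static IP referenced by that second annotation can be reserved up front; a hedged example, assuming the name flask-static-ip:
gcloud compute addresses create flask-static-ip --global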
Please wait for everything to configure correctly.
After that you will have access to your application by domain.name with ports:
80(http)
443(https)
Currently the Ingress only supports a single TLS port, 443, and assumes TLS termination.
-- Kubernetes.io: Ingress TLS
Test
You can check if the above steps are configured correctly by:
entering https://DOMAIN.NAME in your web browser and checking if it responds with Hello over HTTPS
using a tool such as curl: curl -v https://DOMAIN.NAME
Please let me know if this solution works for you.
Additional information (added by EDIT)
You can try to configure a Service object of type LoadBalancer, which will operate at layer 4, as @Florian said in his answer.
Please refer to official documentation: Kubernetes.io: Create external load balancer
You can also use Nginx Ingress controller and either:
Expose a TCP/UDP service by following: Kubernetes.github.io: Ingress nginx: Exposing tcp udp services, which will operate at L4.
Create an Ingress resource that has SSL Passthrough configured by following: Kubernetes.github.io: Ingress nginx: Ssl passthrough (see the annotation sketch after this list).
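A hedged sketch of what that second option looks like on the Ingress object (the NGINX Ingress controller itself must also be started with the --enable-ssl-passthrough flag):
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true" # TLS is passed through untouched to the pod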
After researching, I found the answer in Google Cloud Run. It is very simple to deploy an HTTP-based Flask app in a container, e.g. served with serve(app, host='0.0.0.0', port=5000, ipv6=False, threads=30) (no need for a self-signed certificate or HTTPS in this part, just make sure the HTTP app works), and then push it to Cloud Run.
Adjust the service parameters depending on how many resources you need to run it. In the machine settings, set the port that you are using in the Docker container to be mapped; for instance, in my case it is 5000. When you create the service, Google provides you with a domain address over HTTPS. You can use that URL to access your resources.
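A hedged example of such a deployment with the gcloud CLI (service and image names are hypothetical):
gcloud run deploy flask-app \
  --image gcr.io/YOUR_PROJECT/flask-app \
  --port 5000 \
  --allow-unauthenticated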
That's it!
For more information on Cloud Run:
https://cloud.google.com/serverless-options
The differences between computing platforms: https://www.signalfx.com/blog/gcp-serverless-comparison/

GKE/Istio: outside world cannot connect to service in private cluster

I've created a private GKE cluster with Istio through the Cloud Console UI. The cluster is set up with VPC Peering to be able to reach another private GKE cluster in another Google Cloud Project.
I've created a Deployment (called website) with a Service in Kubernetes in the staging namespace. My goal is to expose this service to the outside world with Istio, using the Envoy proxy. I've created the necessary VirtualService and Gateway to do so, following this guide.
When running "kubectl exec ..." to access a pod in the private cluster, I can successfully connect to the internal IP address of the website service, and see the output of that service with "curl".
I have set up a NAT Gateway so pods in the private cluster can connect to the Internet. I confirmed this by curl-ing various non-Google web pages from within the website pod.
However, I can't connect to the website service from the outside, using the External IP of the istio-ingressgateway service, as the guide above mentions. Instead, curl-ing that External IP leads to a timeout.
I've put the full YAML config for all related resources in a private Gist, here: https://gist.github.com/marceldegraaf/0f36ca817a8dba45ac97bf6b310ca282
I'm wondering if I'm missing something in my config here, or if my use case is actually impossible?
Looking at your Gist I suspect the problem lies in the joining up of the Gateway to the istio-ingressgateway.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: website-gateway
  namespace: staging
  labels:
    version: v1
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
In particular I'm not convinced the selector part is correct.
You should be able to do something like
kubectl describe po -n istio-system istio-ingressgateway-rrrrrr-pppp
to find out what the selector is trying to match in the Istio Ingress Gateway pod.
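Alternatively, a quick way to list the ingress gateway pods together with their labels, so you can confirm what the selector should match:
kubectl get pods -n istio-system -l istio=ingressgateway --show-labels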
I had the same problem. In my case, the Istio VirtualService didn't find my service.
Try this in your VirtualService:
route:
- destination:
    host: website
    port:
      number: 80
After verifying all options, the only way to have the private GKE cluster with Istio exposed externally is to use Cloud NAT.
Since the master node within GKE is a managed service, there are currently limits when using Istio with a private cluster. The only workaround that would accomplish your use case is to use Cloud NAT. I have also attached an article on how to get started using Cloud NAT here.
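For reference, a hedged sketch of how a Cloud Router plus Cloud NAT can be created with gcloud (network, region, and resource names are hypothetical):
gcloud compute routers create nat-router \
  --network YOUR_VPC --region YOUR_REGION
gcloud compute routers nats create nat-config \
  --router nat-router --region YOUR_REGION \
  --auto-allocate-nat-external-ips \
  --nat-all-subnet-ip-ranges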