Currently, I want to introduce Istio as the service-mesh framework for our microservices. I have played with it for a short time (< 1 week), and my understanding is that Istio provides an easy way to secure service-to-service communication. Most (or all?) of the Istio docs and articles give an example of how a client and a server that both have istio-proxy (Envoy) installed as a sidecar container can establish secure communication using mTLS.
However, our existing client (over which I have no control), which consumes our service (the one that will be migrated to Istio), doesn't have Istio, and I still don't understand well how we should best handle this.
Is there a tutorial or example that covers my use case?
How can a non-Istio client use mTLS to consume our Istio-based service? Think of using a basic curl command to simulate this.
Also, I am thinking of distributing a specific service account (Kubernetes, GCP IAM service account, etc.) to the client to limit the client's privileges when calling our service. I have many questions about how these things (GCP IAM service accounts, Istio, RBAC, mTLS, JWT tokens, etc.) contribute to securing our service API.
Any advice?
You want to add a third party to your Istio mesh, outside of your network, via SSL over the public internet?
I don't think Istio is really meant for federating external services, but you could have an Istio ingress gateway proxy sitting at the edge of your network for routing into and back out of your application.
https://istio.io/docs/tasks/traffic-management/ingress/
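A hedged sketch of what that could look like: an Istio ingress gateway terminating mutual TLS at the edge, so an external client only needs a client certificate and plain curl, not a sidecar. The hostname, secret name, and port here are illustrative assumptions, not values from the question.

```yaml
# Istio Gateway terminating mTLS for external (non-Istio) clients.
# credentialName must reference a secret holding the server cert/key
# plus the CA cert used to verify client certificates.
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: external-mtls-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: MUTUAL
      credentialName: api-example-com-certs   # assumed secret name
    hosts:
    - "api.example.com"                       # assumed hostname
```

An external client could then call the service with something like `curl --cert client.pem --key client-key.pem --cacert ca.pem https://api.example.com/` (file paths are placeholders).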
If you're building microservices then surely you have an endpoint or gateway; that seems more sensible to me. Try Apigee or something.
Related
I would like to know the difference(s) between an API gateway and an Ingress controller. People tend to use these terms interchangeably due to the similar functionality they offer. When I say 'Ingress controller', don't confuse it with the Ingress objects provided by Kubernetes. Also, it would be nice if you could explain a scenario where one is more useful than the other.
Is 'API gateway' a generic term for traffic routers in the cloud-native world, and is 'Ingress controller' the implementation of an API gateway in the Kubernetes world?
An Ingress controller allows a single IP and port to reach all services running in Kubernetes through ingress rules. The ingress controller's service is of type LoadBalancer, so it is accessible from the public internet.
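As a hedged illustration of such ingress rules, a minimal Ingress resource fanning one external entry point out to two in-cluster services by path (the hostname and service names are assumptions, not taken from the question):

```yaml
# One external entry point routing /orders and /users to two
# different in-cluster services.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: app.example.com          # assumed hostname
    http:
      paths:
      - path: /orders
        pathType: Prefix
        backend:
          service:
            name: orders           # assumed service name
            port:
              number: 80
      - path: /users
        pathType: Prefix
        backend:
          service:
            name: users            # assumed service name
            port:
              number: 80
```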
An API gateway is used for application routing, rate limiting, security, request and response handling, and other application-related tasks. Say you have a microservice-based application in which a request needs information collected from multiple microservices. You need a way to distribute user requests to different services, gather the responses from all the microservices, and prepare the final response to send back to the user. An API gateway is what does this kind of work for you.
Ingress
Ingress manages and routes traffic into Kubernetes services.
Ingress rules are configured in YAML and backed by an Ingress controller (the NGINX ingress controller is a famous one).
The Ingress controller runs as a Kubernetes service that gets exposed as a LoadBalancer.
Other ingress controllers are listed here: https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/
For a simple mental model, you can think of an ingress as an Nginx server that just forwards traffic to services based on a ruleset.
An ingress doesn't have as much functionality as an API gateway. Some ingress controllers don't support authentication, rate limiting, application routing, security, merging of requests and responses, or other add-on/plugin options.
API gateway
An API gateway can also do simple routing, but it is mostly used when you need more flexibility, security, and configuration options.
There are lots of parameters to compare when choosing between an ingress and an API gateway, but it depends mostly on your use case.
API gateways like KrakenD and Kong are way better than an ingress when it comes to security integrations: OAuth plugins, API key options, rate limiting, and API aggregation.
The Kong API gateway also has good plugin options which you can use to configure logging and monitoring of traffic.
There are as many API gateways available on the market as there are ingress controllers; you can check API gateway features and comparisons below.
Read more at: https://medium.com/@harsh.manvar111/api-gateway-identity-server-comparison-ec439468cc8a
If your use case is small and you are sure about your requirements, you can also use an ingress in production; an API gateway is not necessary.
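As one concrete, hedged example of the Kong plugin options mentioned above, a rate-limiting plugin declared through the Kong Ingress Controller's CRD (the name and limits are illustrative assumptions):

```yaml
# Limit matching routes to 5 requests per minute per client,
# counted locally in each Kong node.
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rate-limit-5-per-minute
plugin: rate-limiting
config:
  minute: 5
  policy: local
```

It can then be attached to an Ingress or Service with the `konghq.com/plugins` annotation.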
Indeed, both have a set of features that intersect: path mapping, path conversion, load balancing, etc.
However, they do differ. I may be wrong, but you create an Ingress (1) to run it in Kubernetes and (2) to act more like a "Kubernetes-native" reverse proxy.
An API gateway can be installed anywhere (although there are now many that run natively in Kubernetes, like Ambassador, Gloo, and Kong), and gateways do have more functionality available, like a developer portal, rate limiting, etc.
Personally, I use an ingress as a reverse proxy for a website and an API gateway for APIs. This does not mean you can't use an ingress for APIs; however, you would not be taking full advantage of it.
Say you are using microservices with Docker containers and Kubernetes.
If you use an API gateway (e.g. Azure API Gateway) in front of your microservices to handle composite UI and authentication, do you still need a service mesh to handle service discovery and circuit breaking? Is there any functionality in Azure API Gateway to handle these kinds of challenges? How?
API gateways operate at Layer 7 of the OSI model and manage traffic coming from outside the network (sometimes called north/south traffic), whereas a service mesh operates at Layer 4 and manages inter-service communication (sometimes called east/west traffic). Some examples of API gateway features are reverse proxying, load balancing, authentication and authorization, IP allow/deny listing, rate limiting, etc.
A service mesh, on the other hand, works as a proxy in a sidecar pattern: it decouples the communication responsibility from the service and handles other concerns such as circuit breaking, timeouts, retries, service discovery, etc.
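As a hedged sketch of those east/west concerns in Istio terms (the service name and the timeout/threshold values are illustrative assumptions):

```yaml
# Retries and timeouts for calls to an in-cluster service.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews                  # assumed in-cluster service name
  http:
  - route:
    - destination:
        host: reviews
    timeout: 10s             # overall deadline per request
    retries:
      attempts: 3
      perTryTimeout: 2s
---
# Basic circuit breaking: eject a backend after consecutive 5xx errors.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  trafficPolicy:
    outlierDetection:
      consecutive5xxErrors: 5
      interval: 30s
      baseEjectionTime: 60s
```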
If you happen to use Kubernetes and microservices, then you might want to explore other solutions such as Ambassador + Istio, or Kong, which works as a gateway as well as a service mesh.
An API gateway only handles the entry point into your Kubernetes cluster, e.g. it sends a request to your frontend microservice. However, it can do nothing after the request enters your cluster. There might still be multiple calls between microservices. You still want to verify authentication for those requests, you still want to make sure there are circuit breakers between the services, etc. Theoretically, you could route all service-to-service calls through the API gateway, but I do not think that is what you want.
In short: no. An API gateway is only an entry point; service-to-service communication is better handled with a service mesh.
You can use an API gateway to handle service discovery and circuit breaking, but that would make it a central point in your deployment, i.e. all calls, external and internal, would have to be routed via the gateway.
A service mesh deploys an additional edge component (a "sidecar") alongside each service, making the overall behavior distributed (but also more complex).
Depending on your particular requirements, you may use one, the other, both, or none.
Nicely explained by fatcook above. See Azure Front Door, as it is attempting to do the same as Kong on Azure: an API gateway plus control-plane-level features.
I have a cluster on AWS installed via kops. Now I need to expose a WebSocket service (with security enabled, i.e. wss://) to the outside world. There are different ingress controllers: nginx, traefik, ELBs, ALBs. Which one is suggested, given that it should be:
easy to deploy and configure
able to support http://, https://, ws://, and wss://
In my opinion this question is opinion-based and too broad; please try to avoid such questions, as there is no single best solution.
That said, I was able to find plenty of resources about nginx and WebSockets. I do not have production experience with configuring this, but I think you might find the following helpful.
NGINX is a popular choice for an Ingress controller for a variety of features:
Websocket, which allows you to load balance WebSocket applications.
SSL Services, which allows you to load balance HTTPS applications.
Rewrites, which allows you to rewrite the URI of a request before sending it to the application.
Session Persistence (NGINX Plus only), which guarantees that all requests from the same client are always passed to the same backend container.
Support for JWTs (NGINX Plus only), which allows NGINX Plus to authenticate requests by validating JSON Web Tokens (JWTs).
The most important part with nginx is the annotation that specifies which services are WebSocket services. Some more information about usage and configuration can be found in the docs. There is also a useful tutorial about configuring nginx ingress; although it is about GKE, it might still be useful.
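A hedged sketch of that annotation, which (as far as I know) applies to the NGINX Inc. (nginx.org) ingress controller specifically; the hostname, service name, and secret name are assumptions:

```yaml
# Flag "ws-svc" as a WebSocket service; the TLS section is what
# gives you wss:// from outside.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ws-ingress
  annotations:
    nginx.org/websocket-services: "ws-svc"
spec:
  tls:
  - hosts:
    - ws.example.com           # assumed hostname
    secretName: ws-example-tls # assumed TLS secret
  rules:
  - host: ws.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: ws-svc       # assumed service name
            port:
              number: 8080
```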
We have a microservice architecture in which REST services interact with each other over HTTP. All of these services are hosted on a Kubernetes cluster. Do we need explicit authentication for such service interaction, or does Kubernetes provide enough security for it?
Kubernetes provides only orchestration for your containerized applications. It helps you run, update, and scale your services, and provides a way of delivering traffic to them inside the cluster. Most of Kubernetes' security relates to traffic management and role-based administration of the cluster.
Additional tools like Istio can provide secure communication between pods, along with some other traffic management capabilities.
Applications in pods should have their own capabilities for authentication and authorization, based on local files/databases or network services like LDAP, OpenID, etc.
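For the Istio option mentioned above, a minimal, hedged sketch of enforcing mutual TLS between pods (the namespace name is an assumption):

```yaml
# Require mTLS for all workloads in the "prod" namespace; plaintext
# pod-to-pod traffic to these workloads is then rejected.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: prod              # assumed namespace
spec:
  mtls:
    mode: STRICT
```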
It's purely based on how you design and architect your system and how you create the SDD (software design document) for it. While designing one, security hardening must be considered and given priority. The software and tools bring their features, but how you adopt them is what matters. Kubernetes is no exception.
You are running your microservices over HTTP, and in a production system you cannot assume your system is secure just because it is running in a Kubernetes cluster. Kubernetes brings cool security features such as RBAC, CRDs, etc., as described in Kubernetes 1.8 Security, Workloads and Feature Depth. But leveraging only these features is not sufficient; internal services should be as secure as external ones. The following are a few things you should take care of once you are running your workload in a Kubernetes cluster:
Scan all your Docker images for vulnerabilities.
Use RBAC over ABAC and assign the minimum necessary privileges to the respective teams.
Configure a security context for pods running your services.
Prevent unauthorized internal access to service data and protect all microservice endpoints.
Rotate encryption keys over a certain period of time.
Secure the datastore (etcd) backing your Kubernetes cluster.
Only admins should have access to kubectl.
Use token-based validation and enable authentication on all REST API calls.
Continuously monitor all services, analyze logs, run health checks, and watch the processes running inside containers.
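As a hedged illustration of the security-context item in the list above (the pod name, image, and UID are placeholders, not from the answer):

```yaml
# A pod-level and container-level security context that drops root,
# all Linux capabilities, and write access to the root filesystem.
apiVersion: v1
kind: Pod
metadata:
  name: hardened-service       # placeholder name
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 10001           # placeholder non-root UID
  containers:
  - name: app
    image: example/app:1.0     # placeholder image
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]
```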
Hope this helps.
I have a Kubernetes cluster with services, and I use Ambassador as an API gateway between the outside world and my services.
With Ambassador I know that I can use a service of mine to check authentication and authorization for incoming requests, but does this only apply to requests coming from outside the cluster?
I want to intercept service-to-service calls as well.
I would be surprised if you cannot.
This answer needs some terminology, to avoid getting lost in word-soup.
App-A is the consumer of an in-cluster Service, and the one that will be authenticating to Ambassador.
App-Z is the provider of an in-cluster Service (the selector would target its Pods).
The k8s Service for app-Z we'll call z-service, in the z namespace, for an FQDN of z-service.z.svc.cluster.local.
It seems like you can use its virtual-host support and teach it to honor the in-cluster virtual host (the aforementioned FQDN), then update the z-service selector to target the Ambassador Pods rather than the underlying app-Z Pods.
From app-A's point of view, the only thing that changes is that it must now provide authentication when contacting z-service.z.svc.cluster.local.
Without studying Ambassador's setup more, it's hard to know whether Ambassador would Just Work™ at that point, or whether you would then need an "implementation" Service, such as z-for-real.z.svc.cluster.local, so that Ambassador knows how to find the actual app-Z Pods.
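A hedged sketch of that repointing, using the hypothetical z-service / z-for-real names from above; the selector labels and ports are assumptions for illustration only:

```yaml
# z-service now selects the Ambassador pods, so in-cluster callers
# go through the gateway instead of hitting app-Z directly.
apiVersion: v1
kind: Service
metadata:
  name: z-service
  namespace: z
spec:
  selector:
    service: ambassador        # assumed Ambassador pod label
  ports:
  - port: 80
    targetPort: 8080
---
# The "implementation" Service, so Ambassador can still reach the
# real app-Z pods.
apiVersion: v1
kind: Service
metadata:
  name: z-for-real
  namespace: z
spec:
  selector:
    app: app-z                 # assumed app-Z pod label
  ports:
  - port: 80
    targetPort: 8080
```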
I have the same problem at the moment. Ambassador routes every request to an auth service (if one is provided), and the auth service can be anything, so you can set up HTTP basic auth, OAuth, JWT auth, and so on.
The next important thing to mention is that your services may use header-based routing (https://www.getambassador.io/reference/headers). Only if a bearer token (or something similar) is present will the request hit your service; otherwise it will fail. In your service you can then check permissions and so on. So, all in all, Ambassador can help you, but you still have to program something yourself.
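A hedged sketch of wiring such an auth service into Ambassador (the auth service name, port, and path prefix are assumptions):

```yaml
# Ambassador forwards each incoming request to this external auth
# service first; a non-2xx response rejects the request.
apiVersion: getambassador.io/v2
kind: AuthService
metadata:
  name: authentication
spec:
  auth_service: "example-auth.default:3000"   # assumed auth service
  path_prefix: "/extauth"                     # assumed prefix
  allowed_request_headers:
  - "authorization"
```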
If you want something ready from the start, or something more advanced, you can try
https://github.com/ory/oathkeeper or https://istio.io.
If you have already found a solution, it would be interesting to know.