We have a set of microservice APIs hosted on AWS EKS behind an Istio service mesh, which is exposed through ALB ingress.
We have two ALB ingresses for Istio: one for external traffic (from the internet) and one for internal traffic (within the VPC).
The APIs are mostly meant for internal traffic. We also want to create an AWS API Gateway route to the internal Istio ALB for these APIs (the API Gateway will handle authentication).
Here are the steps we have completed:
We are using an AWS API Gateway HTTP API. We can't use a REST API, since its VPC link only works with NLBs, and we have an ALB in front of our Istio workloads.
We have created a VPC link to allow the HTTP API to reach our internal ALBs.
We can see that requests from the API Gateway reach the Istio ingress gateway (Envoy) but are not forwarded any further. This is because the API Gateway hits our ALB without passing a meaningful Host header, so Istio doesn't know where to send the request.
So, how do we achieve the following:
Route multiple internal APIs hosted behind a single ALB from AWS API Gateway?
Ensure Istio forwards requests from the API Gateway to the appropriate service?
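To make the Host header problem concrete, here is a minimal sketch of the Istio configuration involved, with made-up values (internal-api.example.com, an orders service on port 8080): both the Gateway and the VirtualService match on the incoming Host header, so if the API Gateway doesn't send (or overwrite) that header, nothing matches and the request stops at the ingress gateway.

```yaml
# Sketch only: host name, service name and port are placeholders.
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: internal-api-gateway
spec:
  selector:
    istio: ingressgateway            # the ingress gateway behind the internal ALB
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "internal-api.example.com" # matched against the incoming Host header
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: orders
spec:
  hosts:
    - "internal-api.example.com"     # must also match the Host header
  gateways:
    - internal-api-gateway
  http:
    - match:
        - uri:
            prefix: /orders
      route:
        - destination:
            host: orders.default.svc.cluster.local
            port:
              number: 8080
```

Two directions worth exploring, depending on what your setup allows: getting the correct Host header onto the request before it reaches the ALB, or relaxing the Istio side to match hosts: "*" and route purely on URI prefix.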
Related
I deployed NiFi using the cetic Helm charts in GKE, and its service works over HTTPS.
Now I don't want to expose it as a LoadBalancer; I'm using the Istio ingress gateway to expose it on my DNS name.
I have used Istio for HTTP services, but now I'm confused about how to handle HTTPS.
Please help me with this.
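One possible approach, sketched with assumed values (host nifi.example.com, a nifi service on HTTPS port 8443 in the nifi namespace), is to let the ingress gateway pass the TLS connection through so NiFi keeps terminating HTTPS itself:

```yaml
# Sketch only; host name, namespace and port are assumptions.
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: nifi-gateway
  namespace: nifi
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 443
        name: https
        protocol: TLS
      tls:
        mode: PASSTHROUGH            # NiFi terminates TLS, Istio just forwards it
      hosts:
        - "nifi.example.com"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: nifi
  namespace: nifi
spec:
  hosts:
    - "nifi.example.com"
  gateways:
    - nifi-gateway
  tls:
    - match:
        - sniHosts:
            - "nifi.example.com"
      route:
        - destination:
            host: nifi.nifi.svc.cluster.local
            port:
              number: 8443
```

The alternative is terminating TLS at the gateway (tls.mode: SIMPLE with a certificate) and speaking plain HTTP or re-encrypted HTTPS to NiFi, which changes where the certificates live.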
I have microservices that are deployed in Kubernetes. These microservices have two endpoints: a direct endpoint that has been exposed via an ingress, and an endpoint exposed through the API gateway.
I am trying to configure my ingress so that the microservices can only be reached through the API Gateway and not via the direct endpoint.
I am also using NGINX as my ingress controller.
Any suggestions would be great.
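One direction that may fit, sketched with placeholder values: the NGINX ingress controller supports the nginx.ingress.kubernetes.io/whitelist-source-range annotation, which restricts an ingress rule to a given set of source CIDRs, so the direct endpoint would only accept traffic originating from the API Gateway's address range.

```yaml
# Sketch: limit the "direct" ingress to the API gateway's address range.
# The host, service name and CIDR are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-microservice
  annotations:
    nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/24"
spec:
  ingressClassName: nginx
  rules:
    - host: my-microservice.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-microservice
                port:
                  number: 80
```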
I have microservices and an SPA app. All of them run in Docker with docker-compose. I have an Ocelot API gateway, but the gateway has to know the IP addresses or container names of the microservices in order to reach them. I added an aggregator service inside the Ocelot app, and I can reach all the services from the aggregator service via their IPs.
But I want to move to Kubernetes, where I can scale services and there are no static IPs. How can I configure this?
I also have an identity service. This service knows the clients' IP addresses. Again, the same problem.
I searched for hours and found some keywords: Envoy, Ingress, Consul, Ocelot. Can someone explain these things?
It sounds like your question is related to Service Discovery.
In Kubernetes, the native way an "API Gateway" is implemented is with Ingress resources and Ingress controllers. If you use a cloud provider, they usually have a product for this, or you can use a custom controller deployed within the cluster.
Service discovery the Kubernetes way is done by referring to Service resources; e.g. an Ingress resource maps URLs (in your public API) to Services. Your app is deployed as a Deployment resource, and all of its replicas (instances) are exposed via a Service resource. An app can also send requests to other apps, and it should address those requests to the other app's Service resource. The Service resource load-balances across the replicas of the receiving app.
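A minimal sketch of that pattern, with made-up names: the Service gives the orders Deployment's pods a stable name that other apps (and the Ingress) can address, regardless of how many replicas there are or which IPs they get.

```yaml
# Sketch with placeholder names: the Service is the stable address,
# the Ingress maps a public URL path to that Service.
apiVersion: v1
kind: Service
metadata:
  name: orders
spec:
  selector:
    app: orders                  # matches the labels on the Deployment's pods
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: public-api
spec:
  rules:
    - http:
        paths:
          - path: /orders
            pathType: Prefix
            backend:
              service:
                name: orders
                port:
                  number: 80
```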
You can use the service name to connect to the service instead of the client IP, for example:
curl http://<service-name>.<namespace>.svc.cluster.local
Now, if you are looking for a list of API gateways and identity servers for Kubernetes,
there are several options; it all depends on your requirements.
For basic requirements, the NGINX ingress and other ingress controllers are available, while if you are looking for an API gateway:
Kong API gateway
Ambassador API gateway
Tyk API gateway
A service mesh can also be useful as part of this, but not in every scenario, because it is mostly used for managing internal (east-west) traffic.
An API gateway is mostly used for managing edge traffic.
List of identity servers:
Keycloak
Amazon Cognito (AWS)
Ingress controllers:
GCE ingress
NGINX ingress controller
Kong ingress controller
Gloo
HAProxy
AKS Application Gateway ingress
Istio ingress
I am planning to deploy to an AKS cluster and use an NGINX ingress controller, so that my microservices will be internal to the cluster and the NGINX ingress controller will be the entry point to them.
One of my microservices acts as an API gateway using the Ocelot library, and it implements the BFF pattern. So my ingress controller will have only one rule, which routes requests made to the path "/(.*)" to the API gateway microservice.
My question is: is this the conventional way to use an ingress controller together with an API gateway microservice? Somehow it feels redundant, although I can see that the two have different responsibilities.
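For reference, the single catch-all rule described above might look roughly like this with the NGINX ingress controller (the service name api-gateway and port 80 are placeholders):

```yaml
# Sketch of the single catch-all rule; backend name and port are assumed.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: bff-entrypoint
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: api-gateway
                port:
                  number: 80
```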
I don't think you would need an ingress controller in this case. We use an API gateway (Ambassador), and we simply have a public IP assigned to its Kubernetes Service.
If you don't expect other pods to expose themselves using Ingress objects, and all traffic will be coming in through your API gateway, I would simply drop the ingress controller and use a Service of type LoadBalancer for your API gateway pods.
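A minimal sketch of that Service, assuming the gateway pods carry a hypothetical label app: api-gateway and listen on port 8080:

```yaml
# Sketch: expose the API gateway pods directly via a cloud load balancer.
# The selector label and ports are assumptions.
apiVersion: v1
kind: Service
metadata:
  name: api-gateway
spec:
  type: LoadBalancer             # the cloud provider assigns an external IP
  selector:
    app: api-gateway
  ports:
    - port: 80
      targetPort: 8080
```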
We are trying to figure out a microservice architecture where we have an API gateway (Zuul in this case). Would all the services that Zuul redirects requests to also need to be exposed externally? That seems counterintuitive, since all these services can have private/local/cluster access and the gateway is the one that should be exposed externally. Is this a correct assessment? In what scenarios would you want these backend services to be exposed externally?
Normally, you would not expose your backend services externally. The gateway (or the ingress) serves as the external entry point and proxies requests to the internal network.
I am familiar with one use case where I expose some services directly: I do not want to expose some admin services running on my cluster to the external world, but I do want to expose them to my VPN. So I have an ingress forwarding traffic between the external network and the cluster, and NodePort services that expose the admin apps to my VPN.
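For illustration, a NodePort Service along these lines (name, label and ports are made up) exposes an admin app on a fixed port on every node's IP, so it is reachable from a VPN that can route to the nodes without going through the public ingress:

```yaml
# Sketch: expose an admin app on a fixed node port, reachable over the VPN.
# Name, selector and ports are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: admin-console
spec:
  type: NodePort
  selector:
    app: admin-console
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080            # reachable at <node-ip>:30080 from the VPN
```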