I want to connect to my ECS cluster in a private VPC and am a bit confused about the best way to do so.
As I understand it, my options are:
API Gateway -> VPC Link -> Private NLB -> Private ECS cluster
Public ALB -> Private ECS Cluster
API Gateway HTTP API -> Private ALB -> Private ECS cluster
Ideally I want Cognito authorization, and from what I understand, all three options would support that.
What option should I go with and why?
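For concreteness, here is a minimal CDK (TypeScript) sketch of the third option (HTTP API -> VPC Link -> private ALB) with a Cognito authorizer, written inside a Stack. It assumes an existing vpc, a listener on the private ALB, and a Cognito userPool; all names are illustrative:

import * as apigwv2 from 'aws-cdk-lib/aws-apigatewayv2';
import { HttpAlbIntegration } from 'aws-cdk-lib/aws-apigatewayv2-integrations';
import { HttpUserPoolAuthorizer } from 'aws-cdk-lib/aws-apigatewayv2-authorizers';

// `vpc`, `listener` (private ALB listener), and `userPool` (Cognito user
// pool) are assumed to exist already in the surrounding Stack.
const vpcLink = new apigwv2.VpcLink(this, 'VpcLink', { vpc });

const httpApi = new apigwv2.HttpApi(this, 'HttpApi', {
  // Every route requires a valid Cognito token by default.
  defaultAuthorizer: new HttpUserPoolAuthorizer('CognitoAuth', userPool),
});

httpApi.addRoutes({
  path: '/{proxy+}',
  methods: [apigwv2.HttpMethod.ANY],
  integration: new HttpAlbIntegration('AlbIntegration', listener, { vpcLink }),
});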
We have created a completely private EKS cluster, via a CloudFormation template, in a VPC with private subnets only, i.e. without any NAT gateway. We created an EC2 instance in the private subnet with the same IAM role that was used to create the EKS cluster, and we created all of the VPC endpoints needed for a private EKS cluster. We have added allow rules for all traffic to the cluster security group, as well as to the EC2 instance's outbound rules. Our VPC meets all the prerequisites: enableDnsHostnames and enableDnsSupport are set to true, and the VPC's DHCP options include AmazonProvidedDNS. Still, we are unable to generate a kubeconfig file with the command below:
aws eks update-kubeconfig --region region-code --name my-cluster
Instead, we receive the following timeout error:
Connect timeout on endpoint URL: "https://eks.ap-south-1.amazonaws.com/clusters/<cluster-name>"
Please help us resolve this problem.
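One detail that often causes exactly this timeout: aws eks update-kubeconfig calls the EKS management API (the eks.<region>.amazonaws.com endpoint in the error), which is separate from the cluster's private Kubernetes API endpoint. Without a NAT gateway, the VPC also needs a com.amazonaws.<region>.eks interface endpoint for that call to succeed. If your endpoint list does not include it, here is a minimal CDK (TypeScript) sketch, assuming vpc refers to the existing VPC:

import * as ec2 from 'aws-cdk-lib/aws-ec2';

declare const vpc: ec2.IVpc; // assumed existing private VPC

// The EKS *management* API (used by `aws eks update-kubeconfig`) is not
// covered by the cluster's private API-server endpoint; it needs its own
// interface endpoint in a VPC with no internet path.
vpc.addInterfaceEndpoint('EksManagementApi', {
  service: ec2.InterfaceVpcEndpointAwsService.EKS,
  privateDnsEnabled: true,
});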
I'm integrating API Gateway with EKS using CDK.
I'm following the architecture below.
I'm using the AWS Load Balancer Controller, which provisions an NLB when I deploy a k8s Service of type LoadBalancer.
The problem is: how can I know when the NLB has been provisioned and get an "object reference" to it, so I can connect it to a VPC link for API Gateway resources to access?
Can you please help with a CDK sample?
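Not an official pattern, but one workaround in CDK (TypeScript): since the controller creates the NLB asynchronously, deploy the k8s Service first, then let CDK find the NLB at synth time with fromLookup and hand it to a VPC link. The tag key and value below are an assumption based on how the AWS Load Balancer Controller tags the load balancers it manages; verify them against the actual tags on your NLB:

import * as elbv2 from 'aws-cdk-lib/aws-elasticloadbalancingv2';
import * as apigw from 'aws-cdk-lib/aws-apigateway';

// Import the controller-provisioned NLB by tag lookup. Lookups run at
// synth time, so the NLB must already exist (deploy the Service first).
const nlb = elbv2.NetworkLoadBalancer.fromLookup(this, 'ServiceNlb', {
  loadBalancerTags: {
    // Hypothetical tag; check the real tags in the EC2 console.
    'service.k8s.aws/stack': 'my-namespace/my-service',
  },
});

// REST API VPC link targeting the imported NLB.
const vpcLink = new apigw.VpcLink(this, 'EksVpcLink', {
  targets: [nlb],
});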
We have a set of microservice APIs hosted on AWS EKS behind the Istio Service Mesh (which is exposed as an ALB ingress).
We have two ALB ingresses for Istio: one for external traffic (from the internet) and one for internal traffic (within the VPC).
The APIs are mostly meant for internal traffic. We also want to create an AWS API Gateway route to the internal Istio ALB for these APIs (the API Gateway will manage authentication).
Here are the steps we have completed:
We are using an HTTP API Gateway. We can't use a REST API Gateway, since REST API VPC links only work with NLBs, and we have an ALB in front of our Istio workloads.
We have created a VPC link to allow the HTTP API Gateway to reach our internal ALB.
We can see that requests from the API Gateway reach the Istio envoy service but are not forwarded further. The API Gateway hits our ALB, but without a Host header that Istio recognizes, Istio doesn't know where to send the request.
So, how do we achieve the following:
Host multiple internal APIs behind a single ALB, routed from AWS API Gateway?
Ensure Istio forwards requests from the API Gateway to the appropriate service?
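A hedged CDK (TypeScript) sketch of one approach: use HTTP API parameter mapping to overwrite the Host header on the integration request so Istio can match it against a VirtualService host. If your setup does not allow overwriting Host, map a custom header instead and match on that header in the VirtualService. The hostname and route path here are illustrative, and listener, vpcLink, and httpApi are assumed to exist:

import * as apigwv2 from 'aws-cdk-lib/aws-apigatewayv2';
import * as elbv2 from 'aws-cdk-lib/aws-elasticloadbalancingv2';
import { HttpAlbIntegration } from 'aws-cdk-lib/aws-apigatewayv2-integrations';

declare const listener: elbv2.IApplicationListener; // internal Istio ALB listener
declare const vpcLink: apigwv2.IVpcLink;            // existing VPC link
declare const httpApi: apigwv2.HttpApi;             // existing HTTP API

// Rewrite the Host header so Istio's envoy can match a VirtualService.
const integration = new HttpAlbIntegration('IstioAlb', listener, {
  vpcLink,
  parameterMapping: new apigwv2.ParameterMapping()
    .overwriteHeader('Host',
      apigwv2.MappingValue.custom('api.internal.example.com')),
});

httpApi.addRoutes({
  path: '/orders/{proxy+}', // one route per internal API (illustrative)
  methods: [apigwv2.HttpMethod.ANY],
  integration,
});

On the Istio side, a VirtualService whose hosts list includes api.internal.example.com would then route to the appropriate backend, which addresses both questions above.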
I have created an EKS cluster with 2 nodes and deployed a few microservices behind ClusterIP services.
Since a ClusterIP is only accessible internally, I want to wire it up to AWS API Gateway using an ELB.
When you create an Ingress in Kubernetes (with an ingress controller such as the AWS Load Balancer Controller installed), a load balancer is automatically provisioned for the Ingress.
If you are using Route 53 as your DNS manager, then after you have created the Ingress you can add an A record pointing to the newly created Application Load Balancer.
Please refer to the AWS documentation here on creating ingress controllers:
https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html
One solution to the problem is to change the Service to a private NLB using the AWS Load Balancer Controller, and then link this NLB to API Gateway via a VPC link.
Further details and documentation for the process can be found here.
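As a sketch of what that Service change looks like, these are the AWS Load Balancer Controller annotations that request an internal NLB, expressed as a CDK manifest for consistency with the CDK examples above (plain YAML is equivalent). The service, selector, and ports are illustrative; check the controller docs for the annotation set matching your controller version:

import * as eks from 'aws-cdk-lib/aws-eks';

declare const cluster: eks.Cluster; // assumed existing CDK EKS cluster

// Ask the AWS Load Balancer Controller for an *internal* NLB with IP targets.
cluster.addManifest('InternalNlbService', {
  apiVersion: 'v1',
  kind: 'Service',
  metadata: {
    name: 'my-service',
    annotations: {
      'service.beta.kubernetes.io/aws-load-balancer-type': 'external',
      'service.beta.kubernetes.io/aws-load-balancer-scheme': 'internal',
      'service.beta.kubernetes.io/aws-load-balancer-nlb-target-type': 'ip',
    },
  },
  spec: {
    type: 'LoadBalancer',
    selector: { app: 'my-app' },
    ports: [{ port: 80, targetPort: 8080 }],
  },
});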
So the idea is that the Kubernetes Dashboard accesses the Kubernetes API to give us visualizations of the different 'kinds' running in the cluster, and the way we access the Dashboard is through the proxy mechanism of the Kubernetes API, which can then be exposed to a public host for public access.
My question: is it possible to use that Kubernetes API proxy mechanism to reach some other service inside the cluster via the publicly exposed address of the Kubernetes Dashboard?
Sure you can. After you set up your proxy with kubectl proxy, you can access services with URLs of this format:
http://localhost:8001/api/v1/namespaces/kube-system/services/<service-name>:<port-name>/proxy/
For example, for the service http-svc with port name http:
http://localhost:8001/api/v1/namespaces/default/services/http-svc:http/proxy/
Note: this is not necessarily for public access, but rather a proxy that lets you connect from a public machine (say, your laptop) to a private Kubernetes cluster.
You can do it by changing your service to NodePort:
$ kubectl -n kube-system edit service kubernetes-dashboard
You should see the YAML representation of the service. Change type: ClusterIP to type: NodePort and save the file.
Note: accessing the Dashboard this way is only possible if you install your user certificates in the browser. The certificates that the kubeconfig file uses to contact the API server can be reused for this.
Please check the following articles and URLs for better understanding:
Stackoverflow thread
Accessing Dashboard 1.7.X and above
Deploying a publicly accessible Kubernetes Dashboard
How to access kubernetes dashboard from outside cluster
Hope this helps!
Exposing the Kubernetes Dashboard is not secure at all, but your question is about a K8s API server that needs to be accessible by external services.
The right answer differs according to your platform and infrastructure, but as general points:
[Network Security] Limit public IP reachability to the K8s API server(s) / load balancer, if one exists, as a whitelist mechanism.
[Network Security] Private-to-private reachability is better, e.g. a VPN or AWS PrivateLink.
[API Security] Limit privileges via ClusterRole/Role to enforce RBAC; it is better to keep it read-only with the verbs get and list (see the sketch below).
[API Security] Enable audit logging for K8s components to keep track of events and actions.
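To make the read-only RBAC point concrete, here is a minimal sketch of a ClusterRole limited to the get and list verbs, again expressed as a CDK manifest for consistency with the examples above (plain YAML is equivalent). All names and the resource list are illustrative:

import * as eks from 'aws-cdk-lib/aws-eks';

declare const cluster: eks.Cluster; // assumed existing CDK EKS cluster

// Read-only ClusterRole plus a binding for a hypothetical external user.
cluster.addManifest('ReadOnlyRbac', {
  apiVersion: 'rbac.authorization.k8s.io/v1',
  kind: 'ClusterRole',
  metadata: { name: 'external-readonly' },
  rules: [{
    apiGroups: ['', 'apps'],
    resources: ['pods', 'services', 'deployments'],
    verbs: ['get', 'list'], // read-only, per the point above
  }],
}, {
  apiVersion: 'rbac.authorization.k8s.io/v1',
  kind: 'ClusterRoleBinding',
  metadata: { name: 'external-readonly-binding' },
  subjects: [{
    kind: 'User',
    name: 'external-service', // hypothetical principal
    apiGroup: 'rbac.authorization.k8s.io',
  }],
  roleRef: {
    kind: 'ClusterRole',
    name: 'external-readonly',
    apiGroup: 'rbac.authorization.k8s.io',
  },
});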