I have an EKS cluster with ASG.
I want to allow only pods in a specific namespace to connect to specific RDS instances.
Is this possible in AWS? Are there any suggestions on how to do so?
Looking for best practices that are already running in production.
I don't believe there is a way to whitelist just a single namespace for access to your RDS instance. This is mainly because clusters are shared and AWS services don't really understand what a Kubernetes namespace is.
In order to achieve connectivity, you can either set up private VPC peering or use a publicly available RDS instance on which you whitelist the Elastic IP attached to your VPC's NAT gateway. I would strongly advise private VPC peering, so that you at least know the connections stay private.
Finally, RDS access is going to be allowed for the entire cluster, as you can't really limit it to a single set of resources. However, because your RDS instance requires user credentials to access any data inside, I don't believe it is such a big issue to have your whole cluster whitelisted against RDS.
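As an illustration of the whitelisting step, here is a minimal sketch of allowing the NAT gateway's Elastic IP through the RDS instance's security group with the AWS CLI (the security group ID, Elastic IP, and port are hypothetical placeholders):
# Allow database traffic (PostgreSQL in this sketch) from the NAT gateway's Elastic IP
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 5432 \
    --cidr 203.0.113.10/32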
You can use Pod Security Groups to achieve this.
Blog Post explaining Pod Security Groups
Tutorial for setting up Pod Security Groups
If you want all Pods of a namespace to be part of an EC2 Security Group, you should be able to use an empty podSelector in your SecurityGroupPolicy:
apiVersion: vpcresources.k8s.aws/v1beta1
kind: SecurityGroupPolicy
metadata:
  name: my-security-group-policy
  namespace: my-namespace
spec:
  securityGroups:
    groupIds:
      - my_pod_security_group_id
  podSelector: {}   # an empty selector matches all pods in the namespace
I have deployed my Kubernetes cluster on AWS EKS and am using an ingress gateway to block certain public IPs from accessing some of my services. Is there a way I can block those public IPs from accessing my Kubernetes cluster from inside the cluster (say, using the ingress gateway)? If not, is there a way to whitelist certain IPs for access to the cluster, again from inside the cluster?
I am already aware that AWS security groups can do this, but I want to implement it from inside the cluster.
You can try using Calico on EKS.
Network policies are similar to AWS security groups in that you can create network ingress and egress rules. Instead of assigning instances to a security group, you assign network policies to pods using pod selectors and labels.
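For example, a minimal sketch of a NetworkPolicy that only allows ingress from a whitelisted CIDR (the namespace, labels, CIDR, and port below are hypothetical placeholders):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-only-whitelisted-ips
  namespace: my-namespace
spec:
  # apply the policy to the pods behind the service you want to protect
  podSelector:
    matchLabels:
      app: my-service
  policyTypes:
    - Ingress
  ingress:
    - from:
        # only this CIDR may reach the selected pods
        - ipBlock:
            cidr: 203.0.113.0/24
      ports:
        - protocol: TCP
          port: 80
Note that for traffic arriving through a load balancer, the original client IP is only preserved if the Service uses externalTrafficPolicy: Local; otherwise the policy will see the load balancer's address instead.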
We have an application and for each customer we provision a new namespace. There are two Deployments running inside each customer's namespace:
front-end Deployment
back-end Deployment
The front-end should be accessible to users, hence we are using a LoadBalancer Service for each customer (we have a VM-based k8s cluster).
The problem is that as the business grows, the number of customers, and therefore namespaces, will keep increasing.
For example, if there are 100 customers, we would need 100 LoadBalancers. This is not practical. Can we have a single LoadBalancer instead and let all 100 customers access the application through it?
Can we do this using Ingress?
Yes, Ingress is the right way to handle your case.
You've essentially already mentioned why Ingress should be used: it gives you a single entry point to the cluster instead of a lot of load balancers, which is inconvenient and can be expensive in a cloud environment.
The main benefits of using Ingress are:
TLS termination
host/path based routing
it acts as a load balancer itself
and many more.
You can choose an ingress controller which best fits your use case: Ingress options
The most common are:
nginx ingress supported by the Kubernetes community
nginx ingress supported by Nginx Inc. and the community
Please consider getting familiar with the general concepts and examples of Kubernetes Ingress.
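As an illustration, a minimal sketch of the per-customer setup: a single shared ingress controller (and therefore a single LoadBalancer), with one Ingress object in each customer's namespace routing a customer-specific host to that customer's front-end Service (the host, namespace, Service name, and port are hypothetical placeholders):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: front-end
  namespace: customer-a              # one Ingress per customer namespace
spec:
  ingressClassName: nginx
  rules:
    - host: customer-a.example.com   # customer-specific host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: front-end      # front-end Service in the same namespace
                port:
                  number: 80
All such Ingress objects are served by the same ingress controller, so only one LoadBalancer is needed regardless of how many customers you have.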
I've got a Kubernetes cluster with an nginx ingress set up for public endpoints. That works great, but I have one service that I don't want to expose to the public, while I do want to expose it to people who have VPC access via VPN. The people who will need to access this route will not have kubectl set up, so they can't use port-forward to send it to localhost.
What's the best way to setup ingress for a service that will be restricted to only people on the VPN?
Edit: thanks for the responses. As a few people guessed I'm running an EKS cluster in AWS.
It depends a lot on your ingress controller and cloud host, but roughly speaking you would probably set up a second copy of your controller using an internal load balancer Service rather than a public LB, and then restrict that Service and/or Ingress to only allow traffic from the IP of the VPN pods.
Since you are talking about "VPC" and assuming you have your cluster in AWS, you probably need to do what #coderanger said.
Deploy a new ingress controller with "LoadBalancer" as the Service type and add the annotation service.beta.kubernetes.io/aws-load-balancer-internal: "true".
Check here what are the possible annotations that you can add to a Load Balancer in AWS: https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/#load-balancers
You can also create a security group, for example, and attach it to the load balancer with service.beta.kubernetes.io/aws-load-balancer-security-groups.
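A minimal sketch of such a Service for a second, internal nginx ingress controller (the names, labels, and security group ID are hypothetical placeholders; the annotations are the AWS-specific ones mentioned above):
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-internal
  namespace: ingress-nginx
  annotations:
    # provision an internal (VPC-only) load balancer instead of a public one
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    # optionally attach an existing security group to the load balancer
    service.beta.kubernetes.io/aws-load-balancer-security-groups: sg-0123456789abcdef0
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx-internal
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443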
So the idea is that the Kubernetes Dashboard accesses the Kubernetes API to give us nice visualizations of the different 'kinds' running in the cluster, and the way we access the Dashboard itself is through the proxy mechanism of the Kubernetes API, which can then be exposed on a public host for public access.
My question is: can we use that Kubernetes API proxy mechanism for some other service inside the cluster, via that publicly exposed address of the Kubernetes Dashboard?
Sure you can. After you set up your proxy with kubectl proxy, you can access services with this URL format:
http://localhost:8001/api/v1/namespaces/kube-system/services/<service-name>:<port-name>/proxy/
For example for http-svc and port name http:
http://localhost:8001/api/v1/namespaces/default/services/http-svc:http/proxy/
Note: it's not necessarily for public access, but rather a proxy for you to connect from your own machine (say, your laptop) to a private Kubernetes cluster.
You can do it by changing your service to NodePort:
$ kubectl -n kube-system edit service kubernetes-dashboard
You should see a YAML representation of the service. Change type: ClusterIP to type: NodePort and save the file.
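Alternatively, assuming the same service name and namespace, the change can be made non-interactively with a patch:
$ kubectl -n kube-system patch service kubernetes-dashboard -p '{"spec": {"type": "NodePort"}}'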
Note: This way of accessing the Dashboard is only possible if you choose to install your user certificates in the browser. The certificates used by the kubeconfig file to contact the API server can be used.
Please check the following articles and URLs for better understanding:
Stackoverflow thread
Accessing Dashboard 1.7.X and above
Deploying a publicly accessible Kubernetes Dashboard
How to access kubernetes dashboard from outside cluster
Hope it will help you!
Exposing the Kubernetes Dashboard is not secure at all, but the real issue here is the K8s API server that needs to be accessible by external services.
The right answer differs according to your platform and infrastructure, but as general points:
[Network Security] Limit public IP reachability to the K8s API server(s) / load balancer (if one exists) via a whitelist mechanism
[Network Security] Private-to-private reachability is better, e.g. a VPN or AWS PrivateLink
[API Security] Limit privileges with ClusterRole/Role to enforce RBAC; it is better to keep to read-only verbs { get, list } (see the sketch after this list)
[API Security] Enable audit logging for the k8s components to keep track of events and actions
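As an illustration of the read-only RBAC point, a minimal sketch of a Role and RoleBinding limited to the get and list verbs (the namespace, resource list, and ServiceAccount are hypothetical placeholders):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-only
  namespace: my-namespace
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "services", "deployments"]
    verbs: ["get", "list"]          # read-only access
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-only-binding
  namespace: my-namespace
subjects:
  - kind: ServiceAccount
    name: external-consumer
    namespace: my-namespace
roleRef:
  kind: Role
  name: read-only
  apiGroup: rbac.authorization.k8s.io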
We are looking at setting up network policies for our Kubernetes cluster. However, in at least one of our namespaces we have an ExternalName service (kubernetes reference - service types) for an AWS RDS instance. We would like to restrict traffic to this ExternalName service to come from a particular set of pods, or, if that is not possible, from a particular namespace. Neither the namespace isolation policy nor the NetworkPolicy resource seems to apply to ExternalName services. After searching the documentation for both Weave and Project Calico, there doesn't seem to be any mention of such functionality.
Is it possible to restrict network traffic to an ExternalName service to be from a specific set of pods or from a particular namespace?
You can't really do that. ExternalName services are a DNS construct. A client performs a DNS lookup for the service and kube-dns returns the CNAME record for, in your case, the RDS instance. Then the client connects to RDS directly.
There are two possible ways to tackle this:
Block just DNS lookups (pods can still connect to the DB if they know the IP or fully qualified RDS hostname):
change namespace isolation to support ExternalName services
make kube-dns figure out the client pod behind each request it gets
make kube-dns aware of namespace isolation settings and apply them, so it only returns CNAME records to authorized pods
Return DNS lookups, but block RDS connections:
extend NetworkPolicy somehow to also control egress traffic
blacklist/whitelist RDS IPs wholesale (easier said than done, since they are dynamic) or make the network controllers track the results from DNS lookups and block/allow connections accordingly.
In either case, you'll have to file a number of feature requests in Kubernetes and downstream.
Source: I wrote the EN support code.
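For what it's worth, NetworkPolicy has since gained egress support, so the second approach can now be expressed directly, provided your network plugin enforces egress rules. A minimal sketch (the namespace, label, CIDR, and port are hypothetical placeholders, and the CIDR still has to be kept in sync with the RDS instance's actual addresses):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-rds-egress
  namespace: my-namespace
spec:
  # selected pods are limited to the egress rules below;
  # pair this with a default-deny egress policy so unselected pods cannot reach RDS at all
  podSelector:
    matchLabels:
      rds-access: "true"
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.50.0/24       # subnet(s) where the RDS instance lives
      ports:
        - protocol: TCP
          port: 5432                 # PostgreSQL
    # allow DNS so the ExternalName CNAME can still be resolved
    - ports:
        - protocol: UDP
          port: 53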