I'm trying to test connectivity from an EKS pod to RDS Postgres, but I'm unable to connect. Details below:
VPC: 10.0.0.0/16
EKS:
- private subnets: 10.0.1.0/24 (AZ-a) and 10.0.2.0/24 (AZ-b)
- security groups: control-plane and worker-node security groups
RDS Postgres:
- DB subnet group: 10.0.3.0/24 (AZ-a) and 10.0.4.0/24 (AZ-b)
- the DB is deployed in 10.0.3.0/24 (AZ-a)
- security group: db-subnet-group
  - allows traffic from both EKS security groups, and also from the whole VPC (10.0.0.0/16), on port 5432
NACL and route table entries look fine; everything is allowed.
Is anything missing that still needs to be configured?
The outbound rule was missing in the EKS worker node security group. I updated it to allow outbound traffic on port 5432 with the DB security group as the destination.
I was under the impression that security groups are stateful by default, meaning everything is allowed out to 0.0.0.0/0. Security groups are indeed stateful (responses to allowed inbound traffic can always leave), and a new security group does allow all outbound traffic by default, but those outbound rules can be modified or deleted.
In my case, I was deploying EKS with the Terraform EKS module, and that module only opens the outbound traffic it considers required.
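For reference, here is a minimal Terraform sketch of the kind of egress rule that was missing. The referenced IDs (module.eks.node_security_group_id and aws_security_group.rds.id) are placeholders for illustration; your module outputs and resource names may differ.

```
# Minimal sketch: allow EKS worker nodes to reach RDS Postgres on 5432.
# The referenced security group IDs are placeholders; adjust to your setup.
resource "aws_security_group_rule" "workers_to_rds" {
  type        = "egress"
  from_port   = 5432
  to_port     = 5432
  protocol    = "tcp"
  description = "Worker nodes to RDS Postgres"

  # Worker-node security group created by the EKS module (output name may differ)
  security_group_id = module.eks.node_security_group_id

  # For an egress rule, source_security_group_id is the destination group
  source_security_group_id = aws_security_group.rds.id
}
```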
Related
In my OVH Managed Kubernetes cluster I'm trying to expose a NodePort service, but it looks like the port is not reachable via <node-ip>:<node-port>.
I followed this tutorial: Creating a service for an application running in two pods. I can successfully access the service on localhost:<target-port> via kubectl port-forward, but it doesn't work on <node-ip>:<node-port> (request timeout), though it does work from inside the cluster.
The tutorial says that I may have to "create a firewall rule that allows TCP traffic on your node port" but I can't figure out how to do that.
The security group seems to allow all traffic.
The solution is to NOT enable "Private network attached" ("réseau privé attaché") when you create the managed Kubernetes cluster.
If you have already paid for your nodes or configured DNS or anything else, you can select your current Kubernetes cluster, choose "Reset your cluster" ("réinitialiser votre cluster"), then "Keep and reinstall nodes" ("conserver et réinstaller les noeuds"), and at the "Private network attached" ("Réseau privé attaché") option choose "None (public IPs)" ("Aucun (IPs publiques)").
I faced the same use case and problem, and after some research and experimentation, got the hint from the small comment on this dialog box:
By default, your worker nodes have a public IPv4. If you choose a private network, the public IPs of these nodes will be used exclusively for administration/linking to the Kubernetes control plane, and your nodes will be assigned an IP on the vLAN of the private network you have chosen
Now I run my Traefik ingress as a DaemonSet using hostNetwork, and every node is reachable directly, even on low ports (as you saw yourself, the default security group is open).
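If it helps, here is a rough sketch of that kind of hostNetwork DaemonSet, written with the Terraform kubernetes provider for illustration; the image tag, ports and namespace are placeholders rather than my exact setup.

```
# Illustration only: a DaemonSet bound to the node network, so each node
# serves traffic directly. Image, ports and namespace are placeholders.
resource "kubernetes_daemonset" "traefik" {
  metadata {
    name      = "traefik"
    namespace = "kube-system"
  }

  spec {
    selector {
      match_labels = {
        app = "traefik"
      }
    }

    template {
      metadata {
        labels = {
          app = "traefik"
        }
      }

      spec {
        # Use the node's network namespace so low ports on the node IP work
        host_network = true

        container {
          name  = "traefik"
          image = "traefik:v2.10"

          port {
            container_port = 80
            host_port      = 80
          }
        }
      }
    }
  }
}
```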
Well, I can't help much further I guess, but I would check the following:
- Are you using the public node IP address?
- Did you configure your service as a LoadBalancer properly? (see the sketch at the end of this answer)
  https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
- Do you have a load balancer, and is it set up properly?
- Did you install an ingress controller (e.g. ingress-nginx)? You may need to run the ingress controller as a DaemonSet so that an ingress-controller pod runs on each node in your cluster.
- Otherwise, I would suggest an Ingress (if this works, you can rule out any firewall-related issues).
This page explains very well:
What's the difference between ClusterIP, NodePort and LoadBalancer service types in Kubernetes?
In AWS, you have things called security groups... you may have the same kind of thing in your k8s provider (or even on your local machine). Please add those ports to the security groups or local firewalls. In AWS you may need to bind those security groups to your EC2 instance (the ingress node) as well.
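For the LoadBalancer check above, a minimal Service of type LoadBalancer might look roughly like this (sketched with the Terraform kubernetes provider; the name, selector labels and ports are placeholders for your app):

```
# Minimal LoadBalancer Service sketch; name, labels and ports are placeholders.
resource "kubernetes_service" "app" {
  metadata {
    name = "my-app"
  }

  spec {
    type = "LoadBalancer"

    selector = {
      app = "my-app"
    }

    port {
      port        = 80   # port exposed by the load balancer
      target_port = 8080 # port the pods listen on
    }
  }
}
```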
I have a small Fargate cluster with a service running, and I found that if I disable the public IP, the container won't start because it doesn't have a route to pull the image.
The ELB for ECS Fargate is part of a subnet which has:
internet gateway configured and attached
route table allowing unrestricted outgoing
security policy on the ECS service allows unrestricted outgoing
DNS enabled
My understanding is that the internet gateway acts as a NAT and that the above should permit outgoing internet access, however I can't make it work. What else is missing?
Just like all other resources in your AWS VPC, if you don't attach a public IP address, then it either needs to be placed in a subnet with a route to a NAT Gateway to access things outside the VPC, or it needs VPC endpoints to access those resources.
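The NAT Gateway option can be sketched in Terraform roughly like this; the subnet and route table references are placeholders for your own resources.

```
# Rough sketch of the NAT Gateway route; subnet/route-table names are placeholders.
resource "aws_eip" "nat" {
  domain = "vpc" # use `vpc = true` on older AWS provider versions
}

resource "aws_nat_gateway" "this" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public.id # the NAT Gateway itself lives in a public subnet
}

# Default route for the private subnet(s) hosting the Fargate tasks
resource "aws_route" "private_to_nat" {
  route_table_id         = aws_route_table.private.id
  destination_cidr_block = "0.0.0.0/0"
  nat_gateway_id         = aws_nat_gateway.this.id
}
```

The alternative is VPC endpoints for the services the task needs to reach; for pulls from ECR that means the ECR and S3 endpoints, while images from public registries like Docker Hub still need a NAT or a public IP.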
I have set up an ELB for a persistent public & subnet IP. As far as I can tell my subnet has unrestricted outgoing internet access (internet gateway attached, and a route that opens all outgoing traffic to 0.0.0.0/0). I'm unsure if the service setup will configure the EC2 instance to use this first and then attempt to set up the container. If not, then it probably doesn't apply.
An ELB is for inbound traffic only; it does not provide any sort of outbound networking functionality for your EC2 or Fargate instance. The ELB is not in any way involved when ECS tries to pull a container image.
Having a volatile public IP address is a bit annoying, as my understanding is that the security policy will apply to both the ELB/Elastic-provided IP and this one.
What "security policy" are you referring to? I'm not aware of security policies on AWS that are applied directly to IP addresses. Assuming you mean the Security Group when you say "security policy", your understanding is incorrect. Both the EC2 or Fargate instance and the ELB should have different security groups assigned to them. The ELB would have a security group allowing all inbound traffic, if you want it to be public on the Internet. The EC2 or Fargate instance should have a security group only allowing inbound traffic from the ELB (by specifying the ELB's security group ID in the inbound rule).
I want to point out you say "EC2" in your question and never mention Fargate, but you tagged your question with Fargate twice and didn't tag it with EC2. EC2 and Fargate are separate compute services on AWS. You would either be using one or the other. It doesn't really matter in this case given the issue you are encountering, but it helps to be clear in your questions.
I have two GKE clusters, dev and stg let's say, and I want apps running in pods on the stg nodes to connect to the dev master and execute some commands against that cluster. I have all the setup I need, and when I add the node IP addresses by hand everything works fine, but those IPs keep changing.
So my question is: how can I add the ever-changing default-pool node IPs from the other cluster to the master authorized networks?
EDIT: I think I found the solution. It's not the node IPs but the NAT IP that I have to add to the authorized networks, so assuming that doesn't change, I just need to add the NAT IP, I guess, unless someone knows a better solution?
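For what it's worth, pinning the authorized-network entry to that NAT address could be sketched in Terraform like this; the NAT IP, cluster name and location are placeholders, and the rest of the cluster configuration is omitted.

```
# Placeholder sketch: allow the stg cluster's NAT address to reach the dev master.
resource "google_container_cluster" "dev" {
  name     = "dev"
  location = "europe-west1"

  master_authorized_networks_config {
    cidr_blocks {
      cidr_block   = "203.0.113.10/32" # NAT IP used by the stg cluster (placeholder)
      display_name = "stg-nat"
    }
  }

  # ... remaining cluster configuration omitted ...
}
```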
I'm not sure you are going about this the right way. In Kubernetes, communication happens between services, which represent deployed pods running on one or several nodes.
When you communicate with the outside world, you reach an endpoint (an API or a specific port). That endpoint is materialized by a load balancer that routes the traffic.
Only the Kubernetes master cares about nodes, as providers of resources (CPU, memory, GPU, ...) inside the cluster. You should never have to reach a cluster node directly instead of going through the standard mechanisms.
At most, you can reach a NodePort service on the node IP plus the node port.
What you really need to do is configure kubectl in your Jenkins pipeline to connect to the GKE master IP. The master is responsible for accepting your commands (rollback, deployment, etc.). See Configuring cluster access for kubectl.
The master IP is available in the Kubernetes Engine console along with the Certificate Authority certificate. A good approach is to use a service account token to authenticate with the master. See how to log in to GKE via a service account token.
I'm exposing an application run on a GKE cluster using a LoadBalancer service. By default, the LoadBalancer creates a rule in the Google VPC firewall with IP range 0.0.0.0/0. With this configuration, I'm able to reach the service in all situations.
I'm using an OpenVPN server inside my default network to prevent outside access to GCE instances on a certain IP range. By modifying the loadBalancerSourceRanges value in the service .yaml file to match the IP range of my VPN server, I expected to be able to connect to the Kubernetes application while connected to the VPN, but not otherwise. This updated the Google VPC firewall rule with the range I entered in the .yaml file, but it didn't allow me to connect to the service endpoint. The Kubernetes cluster is located in the same network as the OpenVPN server. Is there some additional configuration needed beyond setting loadBalancerSourceRanges to the desired ingress IP range for the service?
You didn't mention the version of this GKE cluster; however, it might be helpful to know that, beginning with Kubernetes version 1.9.x, the automatic firewall rules changed such that workloads in your Google Kubernetes Engine cluster cannot communicate with other Compute Engine VMs that are on the same network but outside the cluster. This change was made for security reasons. You can replicate the behavior of older clusters (1.8.x and earlier) by setting a new firewall rule on your cluster. You can see this notification in the Release Notes published in the official documentation.
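Such a firewall rule can be sketched like this; the network name and the cluster's pod address range are placeholders that you would replace with your own values.

```
# Placeholder sketch: re-allow traffic from the cluster's pod range to other
# VMs on the same network, as clusters before 1.9.x did automatically.
resource "google_compute_firewall" "allow_pods_to_vms" {
  name    = "allow-gke-pods-to-vms"
  network = "default"

  allow {
    protocol = "tcp"
  }
  allow {
    protocol = "udp"
  }
  allow {
    protocol = "icmp"
  }

  # Pod address range of the GKE cluster (placeholder)
  source_ranges = ["10.4.0.0/14"]
}
```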
I have a situation where my MongoDB is running on a separate EC2 instance and my app is running inside a Kubernetes cluster created by kops. Now I want to access the DB from the app running inside k8s.
For this, I tried VPC peering between the k8s VPC and the EC2 instance's VPC, with the k8s VPC as the requester and the instance's VPC as the accepter. After that, I also added an ingress rule to the EC2 instance's security group allowing access from the k8s cluster's security group on port 27017.
But when I SSH'd into a k8s node and tried with telnet, the connection failed.
Is there anything incorrect in the procedure? Is there a better way to handle this?
CIDR blocks:
K8S VPC - 172.20.0.0/16
MongoDB VPC - 172.16.0.0/16
What are the CIDR blocks of the two VPCs? They mustn't overlap. In addition, you need to make sure that communication is allowed to travel both ways when modifying the security groups. That is, in addition to modifying your MongoDB VPC to allow inbound traffic from the K8s VPC, you need to make sure the K8s VPC allows inbound traffic from the MongoDB VPC.
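For the security-group side, the ingress rule on the MongoDB instance can be expressed roughly like this; the referenced security group IDs are placeholders for your own groups.

```
# Placeholder sketch: allow the k8s node security group to reach MongoDB on 27017.
resource "aws_security_group_rule" "k8s_to_mongodb" {
  type                     = "ingress"
  from_port                = 27017
  to_port                  = 27017
  protocol                 = "tcp"
  security_group_id        = aws_security_group.mongodb.id   # SG attached to the EC2 instance
  source_security_group_id = aws_security_group.k8s_nodes.id # kops node SG in the peered VPC
}
```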
First, this does not seem to be a Kubernetes issue.
- Make sure you have the proper routes from the Kubernetes VPC to the MongoDB node and vice versa (see the sketch after this list).
- Make sure the required ports are open in the security groups of both VPCs:
  - allow inbound traffic from the Kubernetes VPC to the MongoDB VPC
  - allow inbound traffic from the MongoDB VPC to the Kubernetes VPC
- Make sure any namespace-level network policies allow the inbound and outbound traffic.
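For the routing part above, here is a minimal sketch of the peering connection and the routes in both directions; the CIDRs are the ones from the question, while the VPC and route table references are placeholders.

```
# Placeholder sketch: peering plus a route in each VPC's route table.
resource "aws_vpc_peering_connection" "k8s_to_mongodb" {
  vpc_id      = aws_vpc.k8s.id     # requester VPC, 172.20.0.0/16
  peer_vpc_id = aws_vpc.mongodb.id # accepter VPC, 172.16.0.0/16
  auto_accept = true               # same account and region
}

resource "aws_route" "k8s_to_mongodb" {
  route_table_id            = aws_route_table.k8s.id
  destination_cidr_block    = "172.16.0.0/16"
  vpc_peering_connection_id = aws_vpc_peering_connection.k8s_to_mongodb.id
}

resource "aws_route" "mongodb_to_k8s" {
  route_table_id            = aws_route_table.mongodb.id
  destination_cidr_block    = "172.20.0.0/16"
  vpc_peering_connection_id = aws_vpc_peering_connection.k8s_to_mongodb.id
}
```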