I have a microservice architecture, and one of my services needs to reach a specific IP (3.*.*.*:63815) over a WebSocket connection. On the provider's side, I have whitelisted my Ingress external IP.
But when I try to connect, the connection is not established.
Do I need to update any firewall rules or add custom IP/Port access inside via Ingress?
Any help on this will be appreciated!
Edit:
I'm using GCP for this.
I need to connect to an external FIX API client from the Pod.
I have to agree with, and give more visibility to, the comment made by user dishant makwana:
Most probably you will need to whitelist the IP address of the nodes that your pods are running on
Assuming that you want to send a request from your GKE Pod to a service located outside of your project/organization/GCP, you will need to allow that traffic at your "on-premise" location, coming from your GCP resources.
The source IP of the traffic that you are creating could be either:
GKE nodes' external IP addresses - if the cluster is not created as private.
Cloud NAT IP address - if you've configured Cloud NAT for your private cluster.
A side note!
If you haven't created a Cloud NAT for your private cluster, you won't be able to reach external sources.
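To check which of these source IPs applies in your case, something like the following should work (a rough sketch; my-nat, my-router and us-central1 are placeholder names for your own resources):

```bash
# External IPs of the GKE nodes - these are the egress source IPs of a non-private cluster
kubectl get nodes -o wide        # see the EXTERNAL-IP column
# or, from the GCP side:
gcloud compute instances list \
  --format="table(name, networkInterfaces[0].accessConfigs[0].natIP)"

# If the cluster is private and egress goes through Cloud NAT,
# check which IP(s) the NAT gateway uses instead:
gcloud compute routers nats describe my-nat \
  --router=my-router --region=us-central1
```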
Answering the following question:
Do I need to update any firewall rules or add custom IP/Port access inside via Ingress?
If we are speaking about a GKE/GCP environment, then no, you don't need to modify any firewall rules on the GCP side (assuming that you haven't reconfigured your firewall rules in any way).
An Ingress resource (not rule) in Kubernetes/GKE is used to expose your own application to external access (it's for inbound traffic, not outbound).
Citing the official documentation:
Implied rules
Every VPC network has two implied firewall rules. These rules exist, but are not shown in the Cloud Console:
Implied allow egress rule. An egress rule whose action is allow, destination is 0.0.0.0/0, and priority is the lowest possible (65535) lets any instance send traffic to any destination, except for traffic blocked by Google Cloud. A higher priority firewall rule may restrict outbound access. Internet access is allowed if no other firewall rules deny outbound traffic and if the instance has an external IP address or uses a Cloud NAT instance. For more information, see Internet access requirements.
Implied deny ingress rule. An ingress rule whose action is deny, source is 0.0.0.0/0, and priority is the lowest possible (65535) protects all instances by blocking incoming connections to them. A higher priority rule might allow incoming access. The default network includes some additional rules that override this one, allowing certain types of incoming connections.
-- Cloud.google.com: VPC: Docs: Firewall: Default firewall rules
Additional resources:
Cloud.google.com: NAT: Docs: GKE example
Cloud.google.com: Kubernetes Engine: Docs: Concepts: Ingress
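As a quick sanity check, you can also test the outbound connection from inside a Pod. A hedged sketch (nicolaka/netshoot is just one commonly used debugging image, and 3.x.x.x 63815 stands in for the provider's real address and port):

```bash
# Try to open a plain TCP connection from a throwaway Pod to the external endpoint
kubectl run egress-test --rm -it --restart=Never \
  --image=nicolaka/netshoot -- nc -vz -w 5 3.x.x.x 63815
# "succeeded"/"open" means the TCP connection can be established from the Pod;
# a timeout usually points at the provider-side whitelist rather than at GKE.
```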
Related
I have a Kubernetes cluster with multiple nodes in two different subnets (x and y). I have an IPsec VPN tunnel setup between my x subnet and an external network. Now my problem is that the pods that get scheduled in the nodes on the y subnet can't send requests to the external network because they're in nodes not covered by the VPN tunnel. Creating another VPN to cover the y subnet isn't possible right now. Is there a way in k8s to force all pods' traffic to go through a single source? Or any clean solution even if outside of k8s?
Posting this as a community wiki, feel free to edit and expand.
There is no built-in functionality in kubernetes that can do it. However there are two available options which can help to achieve the required setup:
Istio
If the services are well known, then it's possible to use an Istio egress gateway. We are interested in this use case:
Another use case is a cluster where the application nodes don't have public IPs, so the in-mesh services that run on them cannot access the Internet. Defining an egress gateway, directing all the egress traffic through it, and allocating public IPs to the egress gateway nodes allows the application nodes to access external services in a controlled way.
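For completeness, a heavily trimmed sketch of what that looks like, based on the Istio egress-gateway documentation (api.example.com and the port are placeholders; a VirtualService, and usually a DestinationRule, are also required to actually route in-mesh traffic through the gateway - see the Istio "Egress Gateways" task):

```yaml
# Register the external host with the mesh
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-api
spec:
  hosts:
  - api.example.com          # placeholder external host
  ports:
  - number: 443
    name: tls
    protocol: TLS
  resolution: DNS
---
# Accept that host's traffic on the default egress gateway
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: istio-egressgateway
spec:
  selector:
    istio: egressgateway     # the stock egress gateway deployment
  servers:
  - port:
      number: 443
      name: tls
      protocol: TLS
    hosts:
    - api.example.com
    tls:
      mode: PASSTHROUGH
```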
Antrea egress
There's another solution which can be used - Antrea Egress. Its use cases are quoted below, followed by a sketch of an Egress resource:
You may be interested in using this capability if any of the following apply:
A consistent IP address is desired when specific Pods connect to services outside of the cluster, for source tracing in audit logs, or for filtering by source IP in external firewall, etc.
You want to force outgoing external connections to leave the cluster via certain Nodes, for security controls, or due to network topology restrictions.
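A rough sketch of what an Antrea Egress resource looks like per the Antrea documentation (the API version depends on your Antrea release, and the label and 10.0.10.1 are placeholders; the egress IP must be one Antrea can assign to an Egress Node):

```yaml
apiVersion: crd.antrea.io/v1alpha2
kind: Egress
metadata:
  name: egress-web
spec:
  appliedTo:
    podSelector:
      matchLabels:
        app: web             # placeholder: the pods whose egress should be fixed
  egressIP: 10.0.10.1        # placeholder: the SNAT source IP for those pods
```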
This is my first question on Stack Overflow:
We are using Google Cloud Kubernetes (GKE).
A customer specifically requested a VPN Tunnel to scrape a single service in our Cluster (I know ingress would be more suited for this).
Since the VPN is IP-based and Kubernetes changes these IPs, I can only configure the VPN for the whole IP range of services.
I'm worried that the customer will get full access to all services if I do so.
I have been searching for days on how to treat incoming VPN traffic, but haven't found anything.
How can I restrict the access? Or is it restricted and I need netpols to unrestrict it?
Incoming VPN traffic can either be terminated at the service itself, or at the ingress - as far as I see it. Termination at the ingress would probably be better though.
I hope this is not too confusing; thank you so much in advance.
As you mentioned, an external Load Balancer would be ideal here, but if you must use GCP Cloud VPN then you can restrict access into your GKE cluster (and your GCP VPC in general) by using GCP firewall rules along with GKE internal load balancers (HTTP(S) or TCP).
As a general picture, something like this:
Second, we need to add two firewall rules to the dedicated networks (project-a-network and project-b-network) we created. Go to Networking-> Networks and click the project-[a|b]-network. Click “Add firewall rule”. The first rule we create allows SSH traffic from the public so that we can SSH into the instances we just created. The second rule allows icmp traffic (ping uses the icmp protocol) between the two networks.
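As an illustration, a hedged sketch of exposing only that one service through a GKE internal TCP load balancer and restricting which VPN-side addresses may reach it (the app label, ports and the 10.10.0.0/24 range are placeholders for your service and the customer's VPN subnet):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: scrape-target
  annotations:
    networking.gke.io/load-balancer-type: "Internal"   # GKE internal LB
spec:
  type: LoadBalancer
  selector:
    app: scrape-target          # placeholder: the one service the customer may scrape
  ports:
  - port: 443
    targetPort: 8443
  loadBalancerSourceRanges:
  - 10.10.0.0/24                # placeholder: only the customer's VPN subnet
```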
Consider a microservice X which is containerized and deployed in a kubernetes cluster. X communicates with a Payment Gateway PG. However, the payment gateway requires a static IP for services contacting it as it maintains a whitelist of IP addresses which are authorized to access the payment gateway. One way for X to contact PG is through a third party proxy server like QuotaGuard which will provide a static IP address to service X which can be whitelisted by the Payment Gateway.
However, is there an inbuilt mechanism in kubernetes which can enable a service deployed in a kube-cluster to obtain a static IP address?
There's no mechanism in Kubernetes for this yet.
Other possible solutions:
If the cluster's nodes are in a private network behind a NAT, just add your network's gateway (i.e. the NAT's public IP) to the PG's whitelist.
If the whitelist accepts CIDR ranges as well as single IPs (like 86.34.0.0/24, for example), add your cluster's network CIDR to the whitelist.
If every node of the cluster has a public IP and you can't add a CIDR to the whitelist, then it gets more complicated:
A naive way would be to add every node's IP to the whitelist, but that doesn't scale beyond tiny clusters of just a few nodes.
If you have administrative access to your network, then even though the nodes have public IPs, you can set up a NAT for the network anyway that targets only packets with the PG's IP as their destination (see the iptables sketch after this list).
If you don't have administrative access to the network, then another way is to allocate a machine with a static IP somewhere and make it act as a proxy using iptables NAT, similarly to the above. This introduces a single point of failure though. In order to make it highly available, you could deploy it on a Kubernetes cluster with a few (2-3) replicas (this can be the same cluster where X is running: see below). Instead of using their node's IP to communicate with the PG, the replicas would share a VIP using keepalived, and that VIP would be added to the PG's whitelist. (You can have a look at easy-keepalived and either try to use it directly or learn from it how it does things.) This requires high privileges on the cluster: you need to be able to grant your proxy's pods the NET_ADMIN and NET_RAW capabilities so that they can add iptables rules and set up a VIP.
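For illustration, a minimal sketch of the SNAT idea from the list above, run on the box that owns the static IP (all addresses are placeholders: 203.0.113.10 is the whitelisted static IP, 198.51.100.7 is the PG's IP):

```bash
# Rewrite the source address of any packet destined for the PG to the static IP
iptables -t nat -A POSTROUTING -d 198.51.100.7 -j SNAT --to-source 203.0.113.10
# Allow the box to forward the nodes' PG-bound traffic
sysctl -w net.ipv4.ip_forward=1
```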
Update:
While waiting for builds and deployments during the last few days, I've polished my old VIP-iptables scripts that I used to use as a replacement for external load balancers on bare-metal clusters, so now they can also be used to provide an egress VIP as described in the last point of my original answer. You can give them a try: https://github.com/morgwai/kevip
There are two answers to this question. For the pod IP itself, it depends on your CNI plugin: some allow it with special pod annotations. However, most CNI plugins also involve a NAT when talking to the internet, so the pod IP being static on the internal network is kind of moot; what you care about is the public IP the connection ends up coming from. So the second answer is "it depends on how your node networking and NAT is set up". This is usually up to the tool you used to deploy Kubernetes (or OpenShift in your case, I guess). With Kops it's pretty easy to tweak the VPC routing table.
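As an example of the CNI-annotation route, Calico lets a Pod request a specific IP via an annotation (a sketch; 192.168.0.50 is a placeholder that must fall inside a Calico IP pool, and as noted above this only fixes the cluster-internal IP, not the public source IP):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fixed-ip-pod
  annotations:
    cni.projectcalico.org/ipAddrs: '["192.168.0.50"]'   # Calico-specific request for a fixed Pod IP
spec:
  containers:
  - name: app
    image: nginx
```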
I have a k8s cluster on DigitalOcean using Traefik 1.7 as the Ingress controller.
Our domain points to the load balancer IP created by Traefik.
All incoming requests go through the load balancer IP and are routed by Traefik to the proper service.
Now I want to perform HTTP requests from my services to an external system which only accepts registered IPs.
Can I give them the load balancer's IP and make all outbound requests go through it, or do I need to provide all of the nodes' public IPs?
Thanks.
You can do either of them.
But the best solution would be to make all the traffic go through the load balancer (assuming it is some proxy server with tunnelling capabilities) and open communication through the load balancer IP on your external system. Because, imagine: right now you might have a dozen nodes running 100 microservices, and you have opened your external system's security group to allow traffic from that dozen.
But in the next few months you might go from 12 to 100 nodes, and you would face the overhead of updating your external system's security group every time you add a node in DigitalOcean.
But you can also try a different approach by adding a standalone proxy server and routing traffic through it from your pods. Something like this: Kubernetes outbound calls to an external endpoint with IP whitelisting.
Just a note: these are not the only options; there are several ways one can achieve this. Another approach would be associating a NAT IP with all your nodes and keeping every node behind a private network. It all depends on how you want to set it up and the purpose of the system you are planning to build.
Hope this helps.
Unfortunately, Ingress resources don't handle outbound requests.
So you need to provide all of the nodes' public IPs.
Another idea: if you use a forward proxy (e.g. NGINX or HAProxy), you can limit the nodes where the forward proxy pods are scheduled by setting a nodeSelector.
By doing so, I think you can limit which nodes' public IP addresses need to be provided.
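A hedged sketch of that idea: pin the forward-proxy pods to a couple of labelled nodes, so only those nodes' public IPs need to be registered with the external system (the label, image and port are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: forward-proxy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: forward-proxy
  template:
    metadata:
      labels:
        app: forward-proxy
    spec:
      nodeSelector:
        egress: "true"        # label the chosen nodes first: kubectl label node <node> egress=true
      containers:
      - name: proxy
        image: haproxy:2.4    # placeholder proxy image
        ports:
        - containerPort: 3128
```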
Egress packets from a k8s cluster to cluster-external services have the node's IP as the source IP. So, you can register the k8s nodes' IPs in the external system to allow egress packets from the k8s cluster.
https://kubernetes.io/docs/tutorials/services/source-ip/ says egress packets from k8s get source-NAT'ed with the node's IP:
Source NAT: replacing the source IP on a packet, usually with a node’s IP
The following can be used to manage egress traffic from a k8s cluster:
Kubernetes network policies (a sketch follows below)
Calico CNI egress policies
Istio egress gateway
nirmata/kube-static-egress-ip GitHub project
kube-static-egress-ip provides a solution with which a cluster operator can define an egress rule where a set of pods whose outbound traffic to a specified destination is always SNAT'ed with a configured static egress IP. kube-static-egress-ip provides this functionality in a Kubernetes-native way using custom resources.
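For the plain "Kubernetes network policies" option above, a minimal sketch: it restricts which destinations the selected pods may reach, but it does not change the source IP of their traffic (the label and 203.0.113.25/32 are placeholders for your workload and the external system):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-external-system
spec:
  podSelector:
    matchLabels:
      app: my-app              # placeholder: the pods this policy applies to
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 203.0.113.25/32  # placeholder: the external system's address
    ports:
    - protocol: TCP
      port: 443
```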
I have created an Ingress service that forwards TCP port 22 to a service in my cluster. As is, all inbound traffic is allowed.
What I would like to know is if it is possible to define NSG rules to prevent access to a certain subnet only. I was able to define that rule using the Azure interface. However, every time that Ingress service is edited, those Network Security Group rules get reverted.
Thanks!
I think there might be some misunderstanding about the NSG in AKS. So first, let us take a look at AKS networking: Kubernetes uses Services to logically group a set of pods together and provide network connectivity. See AKS Services for more details. When you create Services, the Azure platform automatically configures any network security group rules that are needed.
Don't manually configure network security group rules to filter traffic for pods in an AKS cluster.
See NSG in AKS for more details. So in this situation, you do not need to manage the rule in the NSG manually.
But don't worry, you can also manage the rules for your pods manually as you want. See Secure traffic between pods using network policies in Azure Kubernetes Service. You can install the Calico network policy engine and create Kubernetes network policies to control the flow of traffic between pods in AKS. Although it is just a preview feature, it can still help you with what you want. But remember, network policy can only be enabled when the cluster is created.
Yes! This is most definitely possible. Azure NSGs apply to subnets and NICs. You can define a CIDR in the NSG rule to allow or deny traffic on the desired port, and apply it to the NIC and the subnet. A word of caution: make sure to have matching rules at the subnet and NIC levels if the cluster is within the same subnet; otherwise the traffic would be blocked internally and won't go out. This doc describes them best: https://blogs.msdn.microsoft.com/igorpag/2016/05/14/azure-network-security-groups-nsg-best-practices-and-lessons-learned/.
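For reference, creating such a rule with the Azure CLI looks roughly like this (the resource names, priority and the 10.0.2.0/24 subnet are placeholders):

```bash
az network nsg rule create \
  --resource-group my-rg \
  --nsg-name my-aks-nsg \
  --name deny-ssh-from-subnet \
  --priority 200 \
  --direction Inbound \
  --access Deny \
  --protocol Tcp \
  --source-address-prefixes 10.0.2.0/24 \
  --destination-port-ranges 22
```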