We're increasing the safety of our recently developed software (running on Service Fabric), and we want all traffic to go through API Management. In the load balancer of the SF cluster you can restrict access at the port level, but where do I restrict access to my cluster at the IP-address level? We want to allow only incoming traffic from API Management and block everything else, i.e. block all IP addresses except the API Management IP.
Thanks!
You can use a Network Security Group for this.
A network security group (NSG) contains a list of security rules that allow or deny network traffic to resources connected to Azure Virtual Networks (VNet). NSGs can be associated to subnets, individual VMs (classic), or individual network interfaces (NIC) attached to VMs (Resource Manager). When an NSG is associated to a subnet, the rules apply to all resources connected to the subnet. Traffic can further be restricted by also associating an NSG to a VM or NIC.
This quick start template describes how to deploy one.
More about networking here.
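As a rough sketch with the Azure CLI, assuming the NSG from that template is attached to the node type's subnet, API Management has a static public IP of 203.0.113.10, and the app listens on 8080 (all placeholders):

    # Allow the application port only from the API Management IP...
    az network nsg rule create \
      --resource-group my-rg \
      --nsg-name my-cluster-nsg \
      --name AllowApiManagementOnly \
      --priority 100 \
      --direction Inbound \
      --access Allow \
      --protocol Tcp \
      --source-address-prefixes 203.0.113.10/32 \
      --destination-port-ranges 8080

    # ...and deny that port from everything else on the internet.
    az network nsg rule create \
      --resource-group my-rg \
      --nsg-name my-cluster-nsg \
      --name DenyInternetToApp \
      --priority 200 \
      --direction Inbound \
      --access Deny \
      --protocol Tcp \
      --source-address-prefixes Internet \
      --destination-port-ranges 8080

Take care not to lock out the Service Fabric management endpoints (19000/19080 by default) or other ports the cluster infrastructure needs, or the cluster can become unhealthy.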
Related
I have a Kubernetes cluster with multiple nodes in two different subnets (x and y). I have an IPsec VPN tunnel set up between my x subnet and an external network. My problem is that pods scheduled on nodes in the y subnet can't send requests to the external network, because those nodes aren't covered by the VPN tunnel. Creating another VPN to cover the y subnet isn't possible right now. Is there a way in Kubernetes to force all pods' traffic to go through a single source? Or any clean solution, even if it's outside of Kubernetes?
Posting this as a community wiki, feel free to edit and expand.
There is no built-in functionality in Kubernetes that can do this. However, there are two available options that can help achieve the required setup:
Istio
If the external services are well known, then it's possible to use an Istio egress gateway (a sketch follows the quoted use case). We are interested in this use case:
Another use case is a cluster where the application nodes don't have public IPs, so the in-mesh services that run on them cannot access the Internet. Defining an egress gateway, directing all the egress traffic through it, and allocating public IPs to the egress gateway nodes allows the application nodes to access external services in a controlled way.
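A minimal sketch of that pattern, assuming Istio is installed with its default egress gateway in istio-system and the well-known external host is external.example.com (both placeholders): it registers the host, binds it to the egress gateway, and routes mesh traffic for it through the gateway.

    # Hypothetical host; adjust names, namespaces and ports to your setup.
    kubectl apply -f - <<EOF
    apiVersion: networking.istio.io/v1beta1
    kind: ServiceEntry
    metadata:
      name: external-example
    spec:
      hosts:
      - external.example.com
      ports:
      - number: 80
        name: http
        protocol: HTTP
      resolution: DNS
    ---
    apiVersion: networking.istio.io/v1beta1
    kind: Gateway
    metadata:
      name: istio-egressgateway
    spec:
      selector:
        istio: egressgateway
      servers:
      - port:
          number: 80
          name: http
          protocol: HTTP
        hosts:
        - external.example.com
    ---
    apiVersion: networking.istio.io/v1beta1
    kind: VirtualService
    metadata:
      name: route-via-egressgateway
    spec:
      hosts:
      - external.example.com
      gateways:
      - mesh
      - istio-egressgateway
      http:
      - match:
        - gateways:
          - mesh
          port: 80
        route:
        - destination:
            host: istio-egressgateway.istio-system.svc.cluster.local
            port:
              number: 80
      - match:
        - gateways:
          - istio-egressgateway
          port: 80
        route:
        - destination:
            host: external.example.com
            port:
              number: 80
    EOF

With the egress gateway pods pinned (for example via node affinity) to nodes in the x subnet, all mesh traffic to that host would leave through the VPN-covered subnet.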
Antrea egress
There's another solution that can be used: Antrea Egress (a minimal example follows the quoted use cases). Its use cases are:
You may be interested in using this capability if any of the following apply:
A consistent IP address is desired when specific Pods connect to services outside of the cluster, for source tracing in audit logs, or for filtering by source IP in external firewall, etc.
You want to force outgoing external connections to leave the cluster via certain Nodes, for security controls, or due to network topology restrictions.
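If you run the Antrea CNI, a minimal sketch of such an Egress resource (the label and IP are placeholders, and the CRD version may differ between Antrea releases):

    # SNAT traffic from the selected pods through a fixed egress IP;
    # Antrea expects the IP to be assigned to (or allocatable on) a Node.
    kubectl apply -f - <<EOF
    apiVersion: crd.antrea.io/v1beta1
    kind: Egress
    metadata:
      name: egress-via-x-subnet
    spec:
      appliedTo:
        podSelector:
          matchLabels:
            app: my-app
      egressIP: 10.0.1.50
    EOF

If 10.0.1.50 is an address in the x subnet, traffic from the selected pods would exit through a node there and thus through the existing VPN tunnel.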
I have a microservice architecture, and one of my services needs to access a specific IP (3.*.*.*:63815) to open a WebSocket connection. So on the provider side, I have whitelisted my ingress external IP.
But when I try to connect, the connection is not established.
Do I need to update any firewall rules or add custom IP/Port access inside via Ingress?
Any help on this will be appreciated!
Edit:
I'm using GCP for this
I need to connect an external FIX API client from the Pod
I'd like to second and give more visibility to the comment made by user dishant makwana:
Most probably you will need to whitelist the IP address of the nodes that your pods are running on
Assuming that you want to send a request from your GKE Pod to a service located outside of your project/organization/GCP, you should allow the traffic from your GCP resources at your "on-premises" location.
The source IP of the traffic that you are creating could be either:
GKE nodes' external IP addresses - if the cluster was not created as private.
Cloud NAT IP address - if you've configured Cloud NAT for your private cluster.
A side note!
If you haven't created a Cloud NAT for your private cluster, you won't be able to reach external sources.
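To check which of those applies, and to set up Cloud NAT if it's missing, here's a sketch (names and region are placeholders):

    # See whether your nodes have external IPs (the EXTERNAL-IP column is
    # empty for a private cluster).
    kubectl get nodes -o wide

    # Create a Cloud Router plus a Cloud NAT so private nodes get a stable
    # path (and source IP) to the internet.
    gcloud compute routers create nat-router \
      --network my-vpc \
      --region us-central1

    gcloud compute routers nats create nat-config \
      --router nat-router \
      --region us-central1 \
      --nat-all-subnet-ip-ranges \
      --auto-allocate-nat-external-ips

Whichever address ends up being the source (the node external IPs or the NAT IP) is what the FIX API provider needs to whitelist.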
Answering the following question:
Do I need to update any firewall rules or add custom IP/Port access inside via Ingress?
If we are speaking about the GKE/GCP environment, then no, you don't need to modify any firewall rules on the GCP side (assuming that you haven't reconfigured your firewall rules in any way).
An Ingress resource (not rule) in Kubernetes/GKE is used to expose your own application to external access (it's for inbound traffic, not outbound).
Citing the official documentation:
Implied rules
Every VPC network has two implied firewall rules. These rules exist, but are not shown in the Cloud Console:
Implied allow egress rule. An egress rule whose action is allow, destination is 0.0.0.0/0, and priority is the lowest possible (65535) lets any instance send traffic to any destination, except for traffic blocked by Google Cloud. A higher priority firewall rule may restrict outbound access. Internet access is allowed if no other firewall rules deny outbound traffic and if the instance has an external IP address or uses a Cloud NAT instance. For more information, see Internet access requirements.
Implied deny ingress rule. An ingress rule whose action is deny, source is 0.0.0.0/0, and priority is the lowest possible (65535) protects all instances by blocking incoming connections to them. A higher priority rule might allow incoming access. The default network includes some additional rules that override this one, allowing certain types of incoming connections.
-- Cloud.google.com: VPC: Docs: Firewall: Default firewall rules
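To double-check that nothing explicit overrides the implied allow-egress rule, you can list the rules in your network (my-vpc is a placeholder; the two implied rules won't appear in the output):

    gcloud compute firewall-rules list --filter="network:my-vpc"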
Additional resources:
Cloud.google.com: NAT: Docs: GKE example
Cloud.google.com: Kubernetes Engine: Docs: Concepts: Ingress
This is my first question on Stack Overflow:
We are using Gcloud Kubernetes.
A customer specifically requested a VPN Tunnel to scrape a single service in our Cluster (I know ingress would be more suited for this).
Since VPN is IP-based and Kubernetes changes service IPs, I can only configure the VPN for the whole IP range of the services.
I'm worried that the customer will get full access to all services if I do so.
I have been searching for days on how to treat incoming VPN traffic, but haven't found anything.
How can I restrict the access? Or is it restricted and I need netpols to unrestrict it?
Incoming VPN traffic can either be terminated at the service itself, or at the ingress - as far as I see it. Termination at the ingress would probably be better though.
I hope this is not too confusing. Thank you so much in advance!
An external load balancer would be ideal here, as you mentioned, but if you must use GCP Cloud VPN then you can restrict access into your GKE cluster (and the GCP VPC in general) by using GCP firewall rules along with GKE internal HTTP(S) or TCP load balancers.
As a general picture, something like this.
Second, we need to add two firewall rules to the dedicated networks (project-a-network and project-b-network) we created. Go to Networking-> Networks and click the project-[a|b]-network. Click “Add firewall rule”. The first rule we create allows SSH traffic from the public so that we can SSH into the instances we just created. The second rule allows icmp traffic (ping uses the icmp protocol) between the two networks.
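Putting that together for the VPN case, a hedged sketch: publish only the one service on an internal TCP load balancer, and allow the customer's VPN range to reach just that port (all names, CIDRs, ports, and tags are placeholders):

    # Expose the single service on an internal load balancer, so it is only
    # reachable from inside the VPC / over the VPN. The annotation name varies
    # with GKE version; older clusters use cloud.google.com/load-balancer-type.
    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: scrape-target
      annotations:
        networking.gke.io/load-balancer-type: "Internal"
    spec:
      type: LoadBalancer
      loadBalancerSourceRanges:
      - 192.168.100.0/24
      selector:
        app: scrape-target
      ports:
      - port: 9100
        targetPort: 9100
    EOF

    # Belt and braces: allow the customer's VPN range to reach only that port
    # on the cluster nodes; the implied deny-ingress rule blocks the rest.
    gcloud compute firewall-rules create allow-vpn-scrape \
      --network my-vpc \
      --direction INGRESS \
      --action ALLOW \
      --rules tcp:9100 \
      --source-ranges 192.168.100.0/24 \
      --target-tags gke-my-cluster-node

This way the customer never gets a usable route to the whole service IP range, only to the one internal load balancer IP and port.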
I have created an Ingress service that forwards TCP port 22 to a service in my cluster. As is, all inbound traffic is allowed.
What I would like to know is whether it is possible to define NSG rules that restrict access to a certain subnet only. I was able to define such a rule using the Azure interface; however, every time the Ingress service is edited, those Network Security Group rules get reverted.
Thanks!
I think there is some misunderstanding about NSGs in AKS, so first let us take a look at AKS networking. Kubernetes uses Services to logically group a set of pods together and provide network connectivity; see the AKS Services documentation for more details. When you create Services, the Azure platform automatically configures any network security group rules that are needed.
Don't manually configure network security group rules to filter traffic for pods in an AKS cluster.
See NSG in AKS for more details. So in this situation, you do not need to manage the NSG rules manually.
But don't worry, you can still manage the traffic rules for your pods yourself. See Secure traffic between pods using network policies in Azure Kubernetes Service: you can install the Calico network policy engine and create Kubernetes network policies to control the flow of traffic between pods in AKS. Although this feature is still in preview, it can achieve what you want. But remember, network policy can only be enabled when the cluster is created.
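As a minimal sketch of such a policy, assuming network policy is enabled on the cluster and the pods behind your port-22 service carry an app: sshd label (a placeholder):

    # Hypothetical NetworkPolicy: allow TCP/22 to the selected pods only from
    # 10.240.1.0/24; all other ingress to those pods is dropped.
    kubectl apply -f - <<EOF
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-ssh-from-subnet
      namespace: default
    spec:
      podSelector:
        matchLabels:
          app: sshd
      policyTypes:
      - Ingress
      ingress:
      - from:
        - ipBlock:
            cidr: 10.240.1.0/24
        ports:
        - protocol: TCP
          port: 22
    EOF

Unlike NSG rules, this policy lives in the cluster itself, so editing the Ingress service won't revert it.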
Yes! This is most definitely possible. Azure NSGs apply to subnets and NICs. You can define a CIDR in an NSG rule to allow or deny traffic on the desired port, and apply the rule to the NIC and the subnet. A word of caution: make sure to have matching rules at the subnet and NIC levels if the cluster is within the same subnet; otherwise the traffic will be blocked internally and won't go out. This doc describes the best practices: https://blogs.msdn.microsoft.com/igorpag/2016/05/14/azure-network-security-groups-nsg-best-practices-and-lessons-learned/.
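For the NSG-level approach from this answer, a hedged Azure CLI sketch (resource names, CIDR, and port are all placeholders):

    # Allow traffic on port 22 only from one subnet; sources outside the
    # allowed CIDR fall through to the NSG's default deny rules.
    az network nsg rule create \
      --resource-group my-rg \
      --nsg-name my-subnet-nsg \
      --name AllowSshFromSubnet \
      --priority 200 \
      --direction Inbound \
      --access Allow \
      --protocol Tcp \
      --source-address-prefixes 10.240.1.0/24 \
      --destination-port-ranges 22

Just keep in mind the caveat from the previous answer: in AKS the platform may overwrite manually added rules on the NSG it manages, so prefer applying this to an NSG you control rather than the AKS-managed one.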
I have a Service Fabric cluster with two node types, Frontend and Backend. Each node type has a single application that listens on a REST interface. The frontend app should be accessible from the outside world, but the backend node type should only be accessible from the frontend app.
Each node type has an associated load balancer, and I have set up rules to allow access to each of the apps; this all works fine. However, I would like to make sure that the load balancer only allows communication to the backend node type if it originates from the frontend app. I cannot see a way to configure this in the load balancer rules.
Can someone tell me how to prevent public access to my backend application?
I believe you can solve this problem by using Network Security Groups.
A network security group (NSG) contains a list of security rules that allow or deny network traffic to resources connected to Azure Virtual Networks (VNet).
Here's an example of how to deploy this.
Use this template as a sample for setting up a three nodetype secure cluster and to control the inbound and outbound network traffic using Network Security Groups. The template has a Network Security Group for each of the VMSS to control the traffic in and out of the VMSS.
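As a rough sketch of the key rules on top of that template, assuming the backend node type sits in its own subnet with its own NSG, the frontend subnet is 10.0.1.0/24, and the backend app listens on 8081 (all placeholders):

    # On the backend NSG: allow the REST port only from the frontend subnet...
    az network nsg rule create \
      --resource-group my-rg \
      --nsg-name backend-nsg \
      --name AllowFrontendToBackend \
      --priority 100 \
      --direction Inbound \
      --access Allow \
      --protocol Tcp \
      --source-address-prefixes 10.0.1.0/24 \
      --destination-port-ranges 8081

    # ...and explicitly deny the same port from the internet.
    az network nsg rule create \
      --resource-group my-rg \
      --nsg-name backend-nsg \
      --name DenyInternetToBackend \
      --priority 200 \
      --direction Inbound \
      --access Deny \
      --protocol Tcp \
      --source-address-prefixes Internet \
      --destination-port-ranges 8081

Be careful not to block the ports Service Fabric itself needs between nodes and for management (for example 19000/19080 by default), or the cluster can become unhealthy.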