Kubernetes Network Policy and External Load Balancer [closed]

Suppose I have a deployment in my cluster that is exposed to the outside world via a LoadBalancer service (with a static IP and some external firewall rules). On top of this, I now want to apply internal firewall rules to the same deployment: I want to limit it to connecting to only a few other pods, in case it is compromised. Can I apply a LoadBalancer service and an egress network policy to the same deployment simultaneously without messing things up? Is there a clean separation between load balancers and network policies (one for external traffic, the other for internal), or is it not like that?
Thanks in advance!
For the sake of argument let's assume this is the network policy I want to apply:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: bridge-egress-access
  namespace: default
spec:
  podSelector:
    matchLabels:
      name: mqtt-lb-service
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: kafka1
    - podSelector:
        matchLabels:
          app: kafka2
    - podSelector:
        matchLabels:
          app: kafka3
    - podSelector:
        matchLabels:
          app: redis

Kubernetes network policy is used to enforce layer-3/layer-4 segmentation for applications deployed on the platform. Network policies lack the advanced features of modern firewalls, such as layer-7 controls and threat detection, but they do provide a basic level of network security that is a good starting point. Kubernetes network policies specify the access permissions for groups of pods, much like security groups in the cloud are used to control access to VM instances.
So yes, you can use both at the same time: Kubernetes network policies control traffic within your pod network, while external firewall rules control traffic within the VM/host network. An egress-only network policy does not affect the ingress traffic admitted by the LoadBalancer service, so the two do not interfere.
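For illustration, here is a minimal sketch of the LoadBalancer Service that could sit in front of the same pods; only the pod label is taken from the policy above, while the Service name, static IP, and MQTT port are assumptions for the example:
apiVersion: v1
kind: Service
metadata:
  name: mqtt-lb-service
spec:
  type: LoadBalancer
  loadBalancerIP: 203.0.113.10   # hypothetical pre-reserved static IP
  selector:
    name: mqtt-lb-service        # same label the egress policy's podSelector matches
  ports:
  - protocol: TCP
    port: 1883                   # assumed MQTT port
    targetPort: 1883
Both objects select the same pods: the Service admits external traffic, while the NetworkPolicy restricts what those pods may initiate, and neither overrides the other.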

Related

Kubernetes Network Policy - allow Google managed services

My Setup
I have a GKE cluster with network policy enforcement enabled.
I have a network policy to block all ingress and egress:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all-traffic
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
In my cluster I have multiple deployments that use Google managed services such as Pub/Sub and Datastore.
I want to allow those connections.
Suggested Solution
The only way I found to do this is to get all of Google's IP ranges and allow all of them. An example of how to get them can be found here: https://gist.github.com/n0531m/f3714f6ad6ef738a3b0a
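For example, such an allow-list might look roughly like this sketch; the CIDRs below are illustrative placeholders (in practice you would generate the full list with the script above), and the DNS rule is needed because the default-deny policy also blocks name resolution:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-google-apis
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  # HTTPS to (placeholder) Google IP ranges
  - to:
    - ipBlock:
        cidr: 172.217.0.0/16   # placeholder, not a verified or complete Google range
    - ipBlock:
        cidr: 142.250.0.0/15   # placeholder
    ports:
    - protocol: TCP
      port: 443
  # DNS, which the default-deny policy would otherwise block
  - to:
    - namespaceSelector: {}
    ports:
    - protocol: UDP
      port: 53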
This is problematic for two main reasons:
1. If those IPs change, my cluster will fail to contact Google services.
2. Security-wise this is bad, because I am allowing any Google IP here, including GCP customer workloads, and not only the specific Google services I use.
My Question
How can I allow connections to these services using a network policy? What is the best practice in such a case?

EKS block a single IP [closed]

Please ease my suffering here: I'm trying to block a single IP from reaching one of the sites hosted on EKS. I've tried the server-snippet annotation, but it didn't work. I've also tried creating a network policy to block it, with no luck. Any idea how to set up a list of restricted IPs?
Here's the Network Policy:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-dev-network-policy
  namespace: target_namespace
spec:
  podSelector:
    matchLabels:
      app: php
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: source_ip_value/32
    ports:
    - protocol: TCP
      port: 80
And here's the server-snippet:
nginx.ingress.kubernetes.io/server-snippet: |
  location / {
    deny source_ip;
  }
Edit:
When monitoring incoming requests for the domain, I can see that CoreDNS rewrites the requests (I suppose) to match the name of the service where the site is hosted. I guess that's why location / doesn't match the request and it is allowed, e.g.:
source.ip.address - - [time/date] "HEAD / HTTP/2.0" 200 0 "-" "curl/7.58.0" 54 0.382 [service-name-service-name-80] [] private.ip:80 0 0.384 200 7a06748e7395fbsssceb737723399919
This is a community wiki answer. Feel free to expand it.
It is worth noting that according to the official docs:
Cluster ingress and egress mechanisms often require rewriting the source or destination IP of packets. In cases where this happens, it is not defined whether this happens before or after NetworkPolicy processing, and the behavior may be different for different combinations of network plugin, cloud provider, Service implementation, etc.
In the case of ingress, this means that in some cases you may be able to filter incoming packets based on the actual original source IP, while in other cases, the "source IP" that the NetworkPolicy acts on may be the IP of a LoadBalancer or of the Pod's node, etc.
However, as you already mentioned in the comments:
We've put this on hold for the moment as it seems there's no way of filtering client IP addresses without enabling the proxy protocol, which would mean a rework of a production Ingress. For now we'll have to satisfy with AWS WAF
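If the original client IP does survive as far as the pods (for example with externalTrafficPolicy: Local on the Service), one possible workaround is to invert the logic: NetworkPolicy rules are allow-lists, so instead of denying one address you allow everything except it. A sketch, with a documentation IP standing in for the offending address:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-dev-block-single-ip
  namespace: target_namespace
spec:
  podSelector:
    matchLabels:
      app: php
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 203.0.113.7/32   # hypothetical IP to block
    ports:
    - protocol: TCP
      port: 80
Whether this works still depends on the caveat quoted above: if the policy only ever sees the load balancer's or node's IP, the except clause never matches.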

Kubernetes NGINX Ingress controller TCP / MQTT config [closed]

I have a Kubernetes cluster with a working Ingress config for one REST API. Now I want to add a port forward for my MQTT adapter to this config, but I am having trouble finding a way to add a TCP rule to it. The Kubernetes docs only show an HTTP example: https://kubernetes.io/docs/concepts/services-networking/ingress/
I'm pretty new to Kubernetes and have trouble adapting other configs, because whatever I find looks totally different from what I found in the Kubernetes docs.
I have used a regular nginx webserver with Let's Encrypt to secure TCP connections before; I hope this works with the ingress controller too.
My goal is to send messages via MQTT with TLS to my cluster.
Does someone have the right docs for this, or know how to add the config?
My config looks like this:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ratings-web-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  tls:
  - hosts:
    - example.com
    secretName: ratings-web-cert
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          serviceName: test-api
          servicePort: 8080
        path: /
The Ingress system only handles HTTP traffic in general. A few ingress controllers support custom extensions for non-HTTP packet handling, but it is different for each one. https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/ shows how to do this specifically for ingress-nginx; as shown there, you configure it entirely out of band via some ConfigMaps, not via the Ingress object(s).
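A minimal sketch of that ConfigMap approach, assuming a hypothetical mqtt-adapter Service in the default namespace listening on 8883 (MQTT over TLS); the controller also has to be started with the --tcp-services-configmap flag pointing at this ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # format: "<external port>": "<namespace>/<service name>:<service port>"
  "8883": "default/mqtt-adapter:8883"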
What you probably actually want instead is a Service object of type LoadBalancer.
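A sketch of that alternative, again assuming a hypothetical mqtt-adapter app on port 8883:
apiVersion: v1
kind: Service
metadata:
  name: mqtt-adapter
spec:
  type: LoadBalancer
  selector:
    app: mqtt-adapter   # hypothetical pod label
  ports:
  - name: mqtts
    protocol: TCP
    port: 8883
    targetPort: 8883
With this approach TLS termination has to happen in the MQTT adapter itself (for example with the cert-manager certificate mounted into the pod), since the traffic bypasses the ingress controller entirely.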

Kubernetes Operator Vs Helm for Pub-Sub Model application [closed]

I have a Publisher-Subscriber (pub-sub model) application in C# and I want to host it on Kubernetes for high availability. Is it better to go with Helm, or should I use an operator for my application?
What is best suited for pub-sub model applications?
If you have a (dockerized) application and you want to run it in Kubernetes, it is enough to create a Kubernetes Deployment configuration.
So the simplest thing you can do is to create a file deployment.yaml with the following content.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
  labels:
    app: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: <your-docker-image>
And then deploy it in Kubernetes with the following command.
kubectl apply -f deployment.yaml
As for Helm and operators, you generally use them for more complex deployments: to organize and template multiple Kubernetes configurations, to interact with your application, to perform backups, and for other operational tasks.
As already mentioned in the previous answer, a simple Deployment is enough to launch an application in Kubernetes.
The idea of Helm is to have reusable YAML artifacts through templates. It allows you to define Kubernetes YAML files with placeholders for certain properties, with the values for those properties stored in a separate file. The most common use case for Helm is creating custom YAML for the same application workload with different configuration, or deploying it to different environments; see the sketch below.
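As a minimal sketch (the chart layout and value names here are assumptions, not taken from the question), a templated Deployment and its values file might look like this:
# templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-my-app
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Values.appLabel }}
  template:
    metadata:
      labels:
        app: {{ .Values.appLabel }}
    spec:
      containers:
      - name: my-app
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
# values.yaml
replicaCount: 3
appLabel: my-app
image:
  repository: <your-docker-image>
  tag: latest
Running, say, helm install my-release ./my-chart -f values-production.yaml would then render the same template with environment-specific values.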
A Kubernetes operator, on the other hand, is an application-specific controller that extends the functionality of the Kubernetes API to create, configure, and manage instances of complex applications on behalf of a Kubernetes user. It builds upon the basic Kubernetes resource and controller concepts, but includes domain- or application-specific knowledge to automate the entire life cycle of the software it manages.
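For illustration only, an operator typically watches a custom resource; every name and field in this sketch is invented for the example:
apiVersion: pubsub.example.com/v1alpha1   # hypothetical CRD group/version
kind: PubSubApp                           # hypothetical custom resource kind
metadata:
  name: my-pubsub-app
spec:
  subscribers: 3                # the operator would reconcile this into Deployments
  topic: orders
  backupSchedule: "0 2 * * *"   # e.g. the operator could run scheduled backups
The operator's controller loop watches PubSubApp objects and creates or updates the underlying Deployments, Services, and backup jobs to match the declared spec.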
So if your application has special requirements, you may be more interested in creating a custom operator.
To sum up, one could say that Helm is a sort of package manager for Kubernetes, whereas a Kubernetes operator is a controller that manages the life cycle of a particular Kubernetes resource, application, or piece of software.
Here's a good article on how the two differ and what they have in common.

Understanding subnetting in Kubernetes cluster

When using GKE, I found that all the nodes in a Kubernetes cluster must be in the same network and the same subnet. So I wanted to understand the correct way to design the networking.
I have two services, A and B, and there is no relation between them. My plan was to use a single cluster in a single region and have two nodes for each of the services A and B, in different subnets of the same network.
However, it seems that can't be done. The other way to partition a cluster is using namespaces; however, I am already using namespaces to partition development environments.
I read about cluster federation (https://kubernetes.io/docs/concepts/cluster-administration/federation/); however, my services are small and I don't need them in multiple clusters and kept in sync.
What is the correct way to set up networking for these services? Should I just use the same network and subnet for all four nodes to serve the two services A and B?
You can restrict the incoming (or outgoing) traffic by making use of labels and network policies.
In this way the pods are able to receive traffic only if it was generated by a pod belonging to the same application, or with any other logic you want to implement.
You can follow this step-by-step tutorial, which guides you through the implementation of a POC. For example, it starts by creating a labeled deployment and exposing it:
kubectl run hello-web --labels app=hello \
--image=gcr.io/google-samples/hello-app:1.0 --port 8080 --expose
Example of a network policy:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: hello-allow-from-foo
spec:
  policyTypes:
  - Ingress
  podSelector:
    matchLabels:
      app: hello
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: foo
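To verify the policy, one simple check (the test pod names are made up for the example) is to run a throwaway pod with the allowed label and one without it, and see which can reach the service created by the kubectl run ... --expose command above:
kubectl run test-allowed --labels app=foo --image=alpine --restart=Never --rm -i -t \
  -- wget -qO- --timeout=2 http://hello-web:8080   # should succeed
kubectl run test-denied --labels app=other --image=alpine --restart=Never --rm -i -t \
  -- wget -qO- --timeout=2 http://hello-web:8080   # should time out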