Restrict AWS security groups on Kubernetes cluster

I created my Kubernetes cluster with a dedicated security group for each EC2 server type; for example, the backend servers have backend-sg associated with them, plus the node-sg that is created with the cluster.
Now I am trying to restrict access to my backend EC2 instances, opening only port 8090 inbound and port 8080 outbound to a specific security group (let's call it frontend-sg).
I managed to do so, but when I changed the inbound port to 8081 in order to check that the restrictions actually worked, I was still able to access port 8080 from the frontend-sg EC2 instance.
I think I am missing something...
Any help would be appreciated

I will try to illustrate the situation in this answer to make it clearer. If I'm understanding your case correctly, this is what you have so far:
Now if you test ports from the frontend EC2 instance to the backend EC2 instance and they are both in the same security group (node-sg), traffic will flow between them. If you want to check group isolation, you need one instance that is outside node-sg and only in frontend-sg targeting an instance in backend-sg (assuming that neither node-sg nor backend-sg permits the ports in question for inbound traffic).
Finally, a small note: Kubernetes closes off all traffic by default (you need an Ingress, a load balancer, an upstream proxy, a NodePort, or some other means to actually expose your front-facing services), so the traditional fine-grained split between backend and frontend instances and security groups is not that "clear-cut" with k8s, especially since you don't really want to schedule manually (or by labels, for that matter) which instances your pods will run on, but rather leave that to the k8s scheduler for better utilization of resources.
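As a rough illustration of that last point, here is a minimal NodePort Service sketch (the name, label, and ports are hypothetical, assuming a backend Deployment listening on 8090). Whatever node port you pin (or Kubernetes assigns from 30000-32767) is the port your security groups actually need to allow between instances, not the container port itself:

apiVersion: v1
kind: Service
metadata:
  name: backend                 # hypothetical service name
spec:
  type: NodePort
  selector:
    app: backend                # must match the labels on your backend pods
  ports:
    - port: 8090                # cluster-internal service port
      targetPort: 8090          # container port the pods listen on
      nodePort: 30090           # opened on every node; this is what the SG rule must allow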

Related

Access restrictions when using Gcloud vpn with Kubernetes

This is my first question on Stack Overflow:
We are using Gcloud Kubernetes.
A customer specifically requested a VPN Tunnel to scrape a single service in our Cluster (I know ingress would be more suited for this).
Since VPN is IP based and Kubernetes changes these, I can only configure the VPN to the whole IP range of services.
I'm worried that the customer will get full access to all services if I do so.
I have been searching for days on how to treat incoming VPN traffic, but haven't found anything.
How can I restrict the access? Or is it restricted and I need netpols to unrestrict it?
Incoming VPN traffic can either be terminated at the service itself, or at the ingress - as far as I see it. Termination at the ingress would probably be better though.
I hope this is not too confusing. Thank you so much in advance.
As you mentioned, an external load balancer would be ideal here, but if you must use GCP Cloud VPN then you can restrict access into your GKE cluster (and your GCP VPC in general) by using GCP firewall rules along with GKE internal load balancers (HTTP or TCP).
As a general picture, something like this.
For the firewall part, we need to add two firewall rules to the dedicated networks (project-a-network and project-b-network) we created. Go to Networking -> Networks and click the project-[a|b]-network, then click "Add firewall rule". The first rule we create allows SSH traffic from the public internet so that we can SSH into the instances we just created. The second rule allows ICMP traffic (ping uses the ICMP protocol) between the two networks.
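On the Kubernetes side, a hedged sketch of the internal load balancer piece might look like the following (the names, ports, and source range are hypothetical; the annotation shown is the one GKE has used for internal TCP load balancers, so check the GKE docs for the exact form your cluster version expects). The idea is to expose only the single service the customer needs on an internal IP reachable over the VPN, and to firewall that IP/port to the VPN tunnel's source range:

apiVersion: v1
kind: Service
metadata:
  name: scrape-target                                  # hypothetical: the one service the customer may reach
  annotations:
    cloud.google.com/load-balancer-type: "Internal"    # request an internal (VPC-only) load balancer
spec:
  type: LoadBalancer
  selector:
    app: scrape-target
  ports:
    - port: 443                   # port exposed on the internal LB IP
      targetPort: 8443            # hypothetical container port of the scraped service
  loadBalancerSourceRanges:
    - 10.240.0.0/24               # hypothetical: the customer's VPN peer range

Combined with a firewall rule that only admits the VPN peer range, the customer can reach this one endpoint without getting access to the whole service IP range.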

Can I create a cluster whose nodes are connected via HTTP by kubeadm?

I want to create a cluster whose nodes are connected via HTTP(not HTTPS).
Is it possible to create the cluster with kubeadm?
I didn't see any command that can do it in the kubeadm documentation.
If it is impossible, can I change a cluster over HTTPS to one over HTTP?
Current best practice is to make your cluster as secure as possible, and Kubernetes provides many ways to do so.
However, if you have an isolated environment you can try setting the --insecure-port flag, which sets a port that is served without TLS or authentication, and --insecure-bind-address, which sets the address that insecure port binds to. Details about them can be found here.
For example, insecure-bind-address: "0.0.0.0" together with insecure-port: "8080" should allow access to the unauthenticated API endpoint on port 8080.
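With kubeadm specifically, a minimal sketch would be to pass those flags to the API server through a ClusterConfiguration file (this assumes a Kubernetes version old enough that the insecure port has not yet been removed; the file name is arbitrary):

# cluster-config.yaml -- used as: kubeadm init --config cluster-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  extraArgs:
    insecure-bind-address: "0.0.0.0"   # serve the unauthenticated endpoint on all interfaces
    insecure-port: "8080"              # plain-HTTP port; only reasonable in an isolated lab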
You can also check whether the vulnerability scenarios that are based on Kind contain something interesting for your testing.

Change Kubernetes Instance Template to open HTTPS port

I was using NodePort to host a webapp on Google Container Engine (GKE). It allows you to point your domains directly at the node IP address instead of paying for an expensive Google load balancer. Unfortunately, instances are created with HTTP ports blocked by default, and an update locked out manually changing the nodes, as they are now created using an Instance Group and an immutable Instance Template.
I need to open port 443 on my nodes, how do I do that with Kubernetes or GCE? Preferably in an update resistant way.
Related github question: https://github.com/nginxinc/kubernetes-ingress/issues/502
Using port 443 on your Kubernetes nodes is not standard practice. If you look at the docs you will see the API server option --service-node-port-range, which defaults to 30000-32767. You could change it to 443-32767 or something similar. Note that every port under 1024 is restricted to root.
In summary, it's not a good idea/practice to run your Kubernetes services on port 443. A more typical scenario would be an external nginx/haproxy proxy that sends traffic to the NodePorts of your service. The other option you mentioned is using a cloud load balancer but you'd like to avoid that due to costs.
Update: a DaemonSet with a NodePort can handle the port opening for you. nginx/k8s-ingress has a NodePort on 443 which gets exposed by a custom firewall rule. The GCE UI will not show "Allow HTTPS traffic" as checked, because it's not using the default rule.
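For reference, a hedged sketch of what that looks like on the Kubernetes side (the names are hypothetical, and nodePort: 443 only works if the API server's --service-node-port-range has been extended to include 443, as discussed above):

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress            # hypothetical name for the ingress controller's service
spec:
  type: NodePort
  selector:
    app: nginx-ingress           # assumes the controller pods carry this label
  ports:
    - name: https
      port: 443
      targetPort: 443            # port the controller listens on inside the pod
      nodePort: 443              # requires --service-node-port-range to include 443

The custom firewall rule then just has to allow TCP 443 to the nodes, since the "Allow HTTPS traffic" checkbox only manages the default https-server tag rule.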
You can do everything you can do in the Google Cloud Console GUI using the Cloud SDK, most easily through the Google Cloud Shell. Here is the command for adding a network tag to a running instance; this works even though the GUI has disabled the ability to do so:
gcloud compute instances add-tags gke-clusty-pool-0-7696af58-52nf --zone=us-central1-b --tags https-server,http-server
This also works on the beta, meaning it should continue to work for a bit.
See https://cloud.google.com/sdk/docs/scripting-gcloud for examples of how to automate this. Perhaps consider running it from a webhook when downtime is detected. Obviously none of this is ideal.
Alternatively, you can change the templates themselves. With this method you can also add a startup script to new nodes, which allows you to do things like fire a webhook with the new IP address for round-robin, low-downtime dynamic DNS.
Source (he had the opposite problem, his problem is our solution): https://stackoverflow.com/a/51866195/370238
If I understand correctly: if nodes can be destroyed and recreated at any time, how are you going to rest assured that a certain service behind that port stays reliably available in production without some sort of load balancer that takes care of routing and diverting the port traffic to the new node(s)?

Firewall at container level in GCP

I have my Jenkins slaves running dynamically on GKE. I need to allow those containers to access my Nexus server, which is running on port 8080 on a different instance in the same network. In the firewall I have to allow those containers to reach the Nexus port 8080, but I don't want to put 0.0.0.0/0 in the source IP ranges. What IP range should I allow to make this work? I tried the internal IPs and the cluster endpoint as the source, and for targets I allowed all instances in the network. It is not working as expected. I need some help.
What you want to use to achieve this is not fiddling directly with the firewall, but Kubernetes' native way of limiting traffic between pods: Network Policies.
https://kubernetes.io/docs/concepts/services-networking/network-policies/
In addition to Radek 'Goblin' Pieczonka's answer, I think it's worth adding that traditional firewall rules are no longer sufficient for containerized environments.
Kubernetes NetworkPolicy allows you to specify what connectivity is allowed within your cluster and what should be blocked. It is not based on the traditional IP firewall concept; it works with label selectors rather than IP addresses and ports.
Here you can read about the foundations of this new philosophy of security; you will probably find it interesting for your project.
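As a minimal sketch of what such a NetworkPolicy might look like for this case (the labels, namespace, and CIDR are hypothetical; it assumes the Jenkins agent pods carry an app: jenkins-slave label and that the Nexus instance has the given internal IP), you could restrict egress from the agent pods to only the Nexus port:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: jenkins-slaves-to-nexus     # hypothetical policy name
  namespace: ci                     # hypothetical namespace of the Jenkins agents
spec:
  podSelector:
    matchLabels:
      app: jenkins-slave            # assumes the agent pods carry this label
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 10.128.0.10/32    # hypothetical internal IP of the Nexus instance
      ports:
        - protocol: TCP
          port: 8080                # the Nexus port

Note that egress rules like this are only enforced if the cluster runs a network plugin that supports NetworkPolicy; on GKE that means enabling network policy enforcement for the cluster.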

How to access Kubernetes pod in local cluster?

I have set up an experimental local Kubernetes cluster with one master and three slave nodes. I have created a deployment for a custom service that listens on port 10001. The goal is to access an exemplary endpoint /hello with a stable IP/hostname, e.g. http://<master>:10001/hello.
After deploying the deployment, the pods are created fine and are accessible through their cluster IPs.
I understand the solution for cloud providers is to create a load balancer service for the deployment, so that you can just expose a service. However, this is apparently not supported for a local cluster. Setting up Ingress seems overkill for this purpose. Is it not?
It seems more like kubectl proxy is the way to go. However, when I run kubectl proxy --port <port> on the master node, I can access http://<master>:<port>/api/..., but not the actual pod.
There are many related questions (e.g. How to access services through kubernetes cluster ip?), but no (accepted) answers. The Kubernetes documentation on the topic is rather sparse as well, so I am not even sure about what is the right approach conceptually.
I am hence looking for a straight-forward solution and/or a good tutorial. It seems to be a very typical use case that lacks a clear path though.
If an Ingress Controller is overkill for your scenario, you may want to try using a service of type NodePort. You can specify the port, or let the system auto-assign one for you.
A NodePort service exposes your service at the same port on all Nodes in your cluster. If you have network access to your Nodes, you can access your service at the node IP and port specified in the configuration.
Obviously, this does not load balance between nodes. You can add an external service to help you do this if you want to emulate what a real load balancer would do. One simple option is to run something like rocky-cli.
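If you go the NodePort route, a minimal sketch for the example above might look like this (the names and the pinned node port are hypothetical; the selector is assumed to match the deployment's pod labels):

apiVersion: v1
kind: Service
metadata:
  name: hello-service             # hypothetical name for the example deployment's service
spec:
  type: NodePort
  selector:
    app: hello                    # assumes the deployment's pods are labelled app: hello
  ports:
    - port: 10001                 # cluster-internal service port
      targetPort: 10001           # container port the pods listen on
      nodePort: 30001             # pinned node port from the default 30000-32767 range

With this in place, http://<master>:30001/hello should respond from any node in the cluster, since every node forwards the node port to the service.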
An Ingress is probably your simplest bet.
You can schedule the creation of an Nginx IngressController quite simply; here's a guide for that. Note that this setup uses a DaemonSet, so there is an IngressController on each node. It also uses the hostPort config option, so the IngressController will listen on the node's IP, instead of a virtual service IP that will not be stable.
Now you just need to get your HTTP traffic to any one of your nodes. You'll probably want to define an external DNS entry for each Service, each pointing to the IPs of your nodes (i.e. multiple A/AAAA records). The ingress will disambiguate and route inside the cluster based on the HTTP hostname, using name-based virtual hosting.
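A hedged sketch of such a name-based Ingress (the hostname, service name, and port are hypothetical, and the exact apiVersion depends on your Kubernetes version):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-ingress                  # hypothetical
spec:
  rules:
    - host: hello.example.com          # the DNS record pointing at your node IPs
      http:
        paths:
          - path: /hello
            pathType: Prefix
            backend:
              service:
                name: hello-service    # the Service in front of the example deployment
                port:
                  number: 10001

The IngressController listening on each node's hostPort matches the Host header against these rules and routes to the right Service inside the cluster.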
If you need to expose non-HTTP services, this gets a bit more involved, but you can look in the nginx ingress docs for more examples (e.g. UDP).