Change Kubernetes Instance Template to open HTTPS port - kubernetes

I was using NodePort to host a webapp on Google Container Engine (GKE). It lets you point your domains directly at the node IP address instead of paying for an expensive Google load balancer. Unfortunately, instances are created with HTTP(S) ports blocked by default, and an update locked down manually changing the nodes, since they are now created from an Instance Group and an immutable Instance Template.
I need to open port 443 on my nodes. How do I do that with Kubernetes or GCE, preferably in an update-resistant way?
Related github question: https://github.com/nginxinc/kubernetes-ingress/issues/502

Using port 443 on your Kubernetes nodes is not standard practice. If you look at the docs you can see the kube-apiserver option --service-node-port-range, which defaults to 30000-32767. You could change it to 443-32767 or something similar. Note that every port under 1024 is restricted to root.
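If you did widen the range, a Service could then request the low NodePort explicitly. A minimal sketch, assuming your pods carry the label app: web (the names and ports here are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: web                # placeholder name
spec:
  type: NodePort
  selector:
    app: web               # assumes your pods carry this label
  ports:
  - port: 443              # cluster-internal port
    targetPort: 8443       # adjust to your container port
    nodePort: 443          # only valid once --service-node-port-range includes 443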
In summary, it's not a good idea/practice to run your Kubernetes services on port 443. A more typical scenario would be an external nginx/haproxy proxy that sends traffic to the NodePorts of your service. The other option you mentioned is using a cloud load balancer but you'd like to avoid that due to costs.

Update: A DaemonSet with a NodePort can handle the port opening for you. nginx/k8s-ingress has a NodePort on 443 which gets exposed by a custom firewall rule. The GCE UI will not show "Allow HTTPS traffic" as checked, because it's not using the default rule.
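For reference, such a custom rule can be created from the command line. A sketch, assuming your nodes carry a network tag like gke-clusty-pool-0 (substitute your own tag and tighten the source range if you can):

gcloud compute firewall-rules create allow-nodeport-443 \
  --network default \
  --allow tcp:443 \
  --source-ranges 0.0.0.0/0 \
  --target-tags gke-clusty-pool-0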
Everything you can do in the Google Cloud Console GUI can also be done with the Cloud SDK, most easily from the Google Cloud Shell. Here is the command for adding a network tag to a running instance. This works even though the GUI has disabled the ability to do so:
gcloud compute instances add-tags gke-clusty-pool-0-7696af58-52nf --zone=us-central1-b --tags https-server,http-server
This also works on the beta, meaning it should continue to work for a bit.
See https://cloud.google.com/sdk/docs/scripting-gcloud for examples of how to automate this. Perhaps consider running it from a webhook when downtime is detected. Obviously none of this is ideal.
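A rough sketch of automating the re-tagging across the whole node pool; the managed instance group name gke-clusty-pool-0-grp is hypothetical, use the one behind your pool:

# re-apply the network tags to every instance in the managed instance group
for uri in $(gcloud compute instance-groups managed list-instances \
    gke-clusty-pool-0-grp --zone=us-central1-b --format="value(instance)"); do
  gcloud compute instances add-tags "${uri##*/}" \
    --zone=us-central1-b --tags=https-server,http-server
done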
Alternatively, you can change the templates themselves. With this method you can also add a startup script to new nodes, which allows you to do things like fire a webhook with the new IP address for a round-robin, low-downtime dynamic DNS.
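For the dynamic-DNS idea, the startup script can be as small as reading the node's external IP from the metadata server and posting it to your webhook. A sketch; the webhook URL is made up:

#!/bin/bash
# ask the GCE metadata server for this node's external IP
EXTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip")
# notify the (hypothetical) DNS updater so it can rotate the round-robin records
curl -s -X POST "https://dns-updater.example.com/update" -d "ip=${EXTERNAL_IP}"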
Source (he had the opposite problem, his problem is our solution): https://stackoverflow.com/a/51866195/370238

If I understand correctly: if nodes can be destroyed and recreated at any time, how are you going to rest assured that a service behind a given port is reliably available in production without some sort of load balancer that takes care of diverting traffic to the new node(s)?

Related

Having 1 outgoing IP for kubernetes egress traffic

Current set-up
Cluster specs: Managed Kubernetes on Digital Ocean
Goal
My pods are accessing some websites but I want to use a proxy first.
Problem
The proxy I need to use is only taking 1 IP address in an "allow-list".
My cluster is using different nodes, with node-autoscaler so I have multiple and changing IP addresses.
Solutions I am thinking about
Setting up a proxy (Squid? nginx?) outside of the cluster (currently not working when I access an HTTPS website)
Istio could let me set up a gateway? (no knowledge of Istio)
Use GCP managed K8s, and follow the answers on Kubernetes cluster outgoing traffic IP. But all our stack is on Digital Ocean and the pricing is better there.
I am curious to know what is the best practice, easiest solution or if anyone experienced such use-case before :)
Best
You could set up all your traffic to go through istio-egressgateway.
Then you could manipulate the istio-egressgateway to always be deployed on the same node of the cluster, and whitelist that IP address.
Pros: super easy. BUT if you are not using Istio already, setting up Istio just for this may be killing a mosquito with a bazooka.
Cons: you need to make sure the node doesn't change its IP address. Otherwise the istio-egressgateway itself might not get deployed (if the labels are not added to the new node), and you will need to reconfigure everything for the new node (new IP address). Another con is that if traffic goes up and there is an HPA, it will deploy more replicas of the gateway, and all of them will land on the same node. So if you are going to have lots of traffic, it may be a good idea to isolate one node just for this purpose.
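For completeness, pinning the egress gateway to one labeled node could look roughly like this. The label egress=true is an assumption; you would add it yourself with kubectl label node <node-name> egress=true:

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    egressGateways:
    - name: istio-egressgateway
      enabled: true
      k8s:
        nodeSelector:
          egress: "true"   # schedule the gateway only on the node carrying this label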
Another option would be, as you are suggesting, a proxy. I would recommend an Envoy proxy directly. I mean, Istio is going to be using Envoy anyway, right? So just get the proxy directly, put it in a pod, and do the same thing I mentioned before: node affinity, so it will always run on the same node and go out with the same IP.
Pros: you are not installing an entire service mesh control plane for one tiny thing.
Cons: same as before; you still have the issue of the node IP changing if something goes wrong, plus you will need to manage your own Deployment object, HPA, Envoy proxy configuration, etc. instead of using Istio objects (like a Gateway and a VirtualService).
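A bare-bones sketch of that idea, assuming the same egress=true node label and an envoy.yaml you ship in a ConfigMap named envoy-config (the Envoy listener/cluster config itself is not shown):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: egress-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: egress-proxy
  template:
    metadata:
      labels:
        app: egress-proxy
    spec:
      nodeSelector:
        egress: "true"            # pin the proxy to the whitelisted node
      containers:
      - name: envoy
        image: envoyproxy/envoy:v1.24.0
        ports:
        - containerPort: 8080     # whatever listener port your envoy.yaml defines
        volumeMounts:
        - name: envoy-config
          mountPath: /etc/envoy   # Envoy reads /etc/envoy/envoy.yaml by default
      volumes:
      - name: envoy-config
        configMap:
          name: envoy-config      # ConfigMap holding your envoy.yaml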
Finally, I see a third option: set up a NAT gateway outside the cluster, and configure your traffic to go through it.
Pros: you won't have to configure any Kubernetes object, therefore there will be no need to set up any node affinity, and therefore no node overwhelming or IP change. Plus you can remove the external IP addresses from your cluster, so it will be more secure (unless you have other workloads that need to reach the internet directly). Also, a single node configured as a NAT will probably be more resilient than a Kubernetes pod running on a node.
Cons: maybe a little bit more complicated to set up?
And there is this general con: you can whitelist only 1 IP address, so you will always have a single point of failure. Even a NAT gateway can still fail.
The GCP static IP won't help you. What the other post is suggesting is to reserve an IP address so you can always re-use it. But it's not as if that IP address will automatically be added to a random node that goes down; human intervention is needed. I don't think you can have one specific node hold a static IP address such that, if it goes down, the newly created node will pick up the same IP. That service, to my knowledge, doesn't exist.
Now, GCP does offer a very resilient NAT gateway. It is managed by Google, so shouldn't fail. Not cheap though.
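If you did go the GCP route, Cloud NAT with a reserved address looks roughly like this; the names, region and network are placeholders:

# reserve a static address to put on the proxy's allow-list
gcloud compute addresses create nat-ip --region=us-central1
# create a Cloud Router and a NAT that sends all egress through that address
gcloud compute routers create nat-router --network=default --region=us-central1
gcloud compute routers nats create nat-config \
  --router=nat-router --region=us-central1 \
  --nat-external-ip-pool=nat-ip \
  --nat-all-subnet-ip-ranges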

Change NodePort to 80 in baremetal

If I use a NodePort in my YAML file, it gives me a port above 30000,
but my users do not want to remember that port and want to use 80 instead. My Kubernetes cluster is on bare metal.
How can I solve that?
Kubernetes doesn't allow you to expose low ports via the NodePort service type by design. The idea is that there is a significant chance of a port conflict if users are allowed to set low port numbers for their NodePort services.
If you really want to use port 80, you're going to have to either use a LoadBalancer service type or route your traffic through an Ingress. If you were on a cloud service, either option would be fairly straightforward. However, since you're on bare metal, both options are going to be much more involved: you're going to have to configure the load balancer or ingress functionality yourself, and it's going to be rough, sorry.
If you want to go forward with this, you'll have to read through a bunch of documentation to figure out what you want to implement and how to implement it.
https://www.weave.works/blog/kubernetes-faq-how-can-i-route-traffic-for-kubernetes-on-bare-metal
According to the kube-apiserver docs you can use the --service-node-port-range parameter for the API server, or specify it in the kubeadm configuration when bootstrapping your cluster (see the related GitHub issue).
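For example, with kubeadm the range can be set in the ClusterConfiguration you pass to kubeadm init; a sketch, pick whatever range suits you:

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  extraArgs:
    service-node-port-range: "80-32767"   # lets a Service request nodePort: 80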

Restrict aws security groups on kubernetes cluster

I created my Kubernetes cluster with a specific security group for each EC2 server type; for example, for the backend servers I have a backend-sg associated with them, plus a node-sg which is created with the cluster.
Now I am trying to restrict access to my backend EC2 instances and open only port 8090 as an inbound and port 8080 as an outbound to a specific security group (let's call it frontend-sg).
I managed to do so, but when I changed the inbound port to 8081 in order to check that those restrictions actually worked, I was still able to access port 8080 from the frontend-sg EC2 instance.
I think I am missing something...
Any help would be appreciated.
I will try to illustrate the situation in this answer to make it more clear. If I'm understanding your case correctly, this is what you have so far:
Now, if you try those ports from the frontend EC2 instance to the backend EC2 instance and they are both in the same security group (node-sg), you will have traffic there. If you want to check group isolation, you should have one instance that is outside node-sg and only in frontend-sg targeting an instance in backend-sg (supposing that neither node-sg nor backend-sg permits said ports for inbound traffic).
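One quick way to see where the traffic is actually being allowed is to dump the inbound rules of the shared group; a sketch, with a made-up group ID you would replace with your node-sg's ID:

# list the inbound rules of the security group both instances share
aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0 \
  --query "SecurityGroups[].IpPermissions"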
Finally, a small note... Kubernetes by default closes off all traffic (you need an ingress, load balancer, upstream proxy, NodePort or some other means to actually expose your front-facing services), so the traditional fine-graining of backend/frontend instances and security groups is not that clear-cut when using k8s, especially since you don't really want to schedule manually (or by labels, for that matter) which instances your pods will actually run on (but instead leave that to the k8s scheduler for better utilization of resources).

access k8s apis from outside

I want to access k8s API resources. My cluster is a 1-node cluster. The kube-apiserver is listening on ports 8080 and 6443. curl localhost:8080/api/v1 works inside the node. If I hit :8080 it does not work, because some other service (eureka) is running on that port. That leaves me the option of accessing :6443. In order to make the API accessible, there are 2 ways:
1- Create a service for the kube-apiserver with some specific port which will target 6443. For that, ca.crt, key, token etc. are required. How do I create and configure such things so that I will be able to access the API?
2- Make a change in weave (weave is available as a service in the k8s setup) so that my server can access the k8s APIs.
Either option is fine with me. Any help will be appreciated.
my cluster is a 1-node cluster
One of those words does not mean what you think it does. If you haven't already encountered it, you will eventually discover that the memory and CPU pressure of attempting to run all the components of a kubernetes cluster on a single Node will cause memory exhaustion, and then lots of things won't work right with some pretty horrible error messages.
I can deeply appreciate wanting to start simple, but you will be much happier with a 3 machine cluster than trying to squeeze everything into a single machine. Not to mention the fact that only having a single machine won't surface any networking misconfigurations, which can be a separate frustration when you think everything is working correctly and only then go to scale your cluster up to more Nodes.
some other service (eureka) is running on this port.
Well, at the very real risk of stating the obvious: why not move one of those two services to listen on a separate port from one another? Many cluster provisioning tools (I love kubespray) have a configuration option that allows one to very easily adjust the insecure port used by the apiserver to be a port of your choosing. It can even be a privileged port (that is: less than 1024) because docker runs as root and thus can --publish a port using any number it likes.
If having the :8080 is so important to both pieces of software that it would be prohibitively costly to relocate the port, then consider binding the "eureka" software to the machine's IP and bind the kubernetes apiserver's insecure port to 127.0.0.1 (which is certainly the intent, anyway). If "eureka" is also running in docker, you can change its --publish to include an IP address on the "left hand side" to very cheaply do what I said: --publish ${the_ip}:8080:8080 (or whatever). If it is not using docker, there is still a pretty good chance that the software will accept a "bind address" or "bind host" through which you can enter the ip address, versus "0.0.0.0".
1- Create a service for the kube-apiserver with some specific port which will target 6443. For that, ca.crt, key, token etc. are required. How do I create and configure such things so that I will be able to access the API?
Every Pod running in your cluster has the option of declaring a serviceAccountName, which by default is default, and the effect of having a serviceAccountName is that every container in the Pod has access to those components you mentioned: the CA certificate and a JWT credential that enables the Pod to invoke the kubernetes API (which from within the cluster one can always access via the kubernetes Service IP, the environment variable $KUBERNETES_SERVICE_HOST, or the hostname https://kubernetes -- assuming you are using kube-dns). Those serviceAccount credentials are automatically projected into the container at /var/run/secrets/kubernetes.io/serviceaccount without requiring that your Pod declare those volumeMounts explicitly.
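To make that concrete, from inside any container that uses a service account you can already call the API along these lines (the paths are the standard projected locations; what you are allowed to read is still subject to RBAC):

# run inside the container
SA=/var/run/secrets/kubernetes.io/serviceaccount
curl --cacert ${SA}/ca.crt \
  -H "Authorization: Bearer $(cat ${SA}/token)" \
  https://kubernetes.default.svc/api/v1/namespaces/default/pods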
So, if your concern is that one must have credentials from within the cluster, that concern can go away pretty quickly. If your concern is access from outside the cluster, there are a lot of ways to address that concern which don't directly involve creating all 3 parts of that equation.

Can I make calls directly to pods from outside Kubernetes?

I'm attempting to transition existing applications to Kubernetes that work as follows:
An outside service calls our application through a load balancer with a new session.
Our application returns the ip of the server that processed the request.
All subsequent calls from the outside service for that session are made directly to the same server (bypassing the load balancer)
Is there any way to do this in Kubernetes? I understand that pod IPs are not exposed externally; is there some way to expose them directly?
Also, I don't think I can use sessionAffinity="ClientIP" because the requests will all come in from the same place. Is there a way to write a custom sessionAffinity type?
It kind of depends on how your network is set up and what you mean by an "outside service", but the answer is most likely "no".
If you're running using one of the default cluster creation scripts in a cloud environment, pod IP addresses are not routable from the Internet, so any service not in the same private network as your cluster won't be able to talk directly to pods.
However, depending on what cloud provider you're on, you'll likely get the behavior that you want anyways by just continuing to make all calls through to the external IP of a service of type LoadBalancer. For instance, on the Google Cloud Platform, the cloud load balancer that gets created for such services by default maintains connection affinity by 5-tuple (src ip and port, dst ip and port, L4 protocol), which sounds like it's what you want, since you want balancing per session rather than per IP.
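In other words, something as plain as the following (illustrative names and ports) would give you the 5-tuple stickiness described above on GCP:

apiVersion: v1
kind: Service
metadata:
  name: my-app            # placeholder
spec:
  type: LoadBalancer
  selector:
    app: my-app           # assumes your pods carry this label
  ports:
  - port: 443
    targetPort: 8443      # adjust to your container port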
As for creating a new sessionAffinity type, that's not an easy thing to extend, since it requires changing Kubernetes source code. If that's really a path you want to take, it's likely that you'd want to run your own load balancer within your cluster rather than relying on the built-in load balancing.