I am doing the CKAD (Certified Kubernetes Application Developer) 2019 using GCP (Google Cloud Platform), and I am facing timeout issues when trying to curl the pod from another node. I set up a simple Pod with a simple Service.
It looks like the firewall is blocking something (an IP, port, or protocol), but I cannot find any documentation about it.
Any ideas?
So, after some heavy investigation with tshark and the Google Cloud firewall, I was able to unblock myself.
If you add a new firewall rule to GCP allowing the IPIP protocol (IP protocol 4) for your node networks (in my case 10.128.0.0/9), the curl works!
Source: https://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml
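For reference, something like this should create the rule (the rule name and network are placeholders on my side; adjust the source range to your node network, and protocol number 4 can be used instead of the ipip name):
gcloud compute firewall-rules create allow-ipip \
  --network default \
  --allow ipip \
  --source-ranges 10.128.0.0/9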
You can create a NodePort Service and use the command below to set a firewall rule.
gcloud compute firewall-rules create test-node-port --allow tcp:[NODE_PORT]
Then you can access the service even from outside the cluster.
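For example (the deployment name, port, and node IP below are placeholders):
# Expose an existing Deployment as a NodePort Service
kubectl expose deployment hello-web --type=NodePort --port=8080 --name=hello-web
# Find the node port that was allocated, open it with the firewall rule above, then curl any node's external IP
kubectl get svc hello-web -o jsonpath='{.spec.ports[0].nodePort}'
curl http://<node-external-ip>:<node-port>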
In my OVH Managed Kubernetes cluster I'm trying to expose a NodePort service, but it looks like the port is not reachable via <node-ip>:<node-port>.
I followed this tutorial: Creating a service for an application running in two pods. I can successfully access the service on localhost:<target-port>, as well as via kubectl port-forward, but it doesn't work on <node-ip>:<node-port> (request timeout), although it does work from inside the cluster.
The tutorial says that I may have to "create a firewall rule that allows TCP traffic on your node port" but I can't figure out how to do that.
The security group seems to allow any traffic:
The solution is to NOT enable "Private network attached" ("réseau privé attaché") when you create the managed Kubernetes cluster.
If you have already paid for your nodes or configured DNS or anything else, you can select your current Kubernetes cluster, choose "Reset your cluster" ("réinitialiser votre cluster"), then "Keep and reinstall nodes" ("conserver et réinstaller les noeuds"), and at the "Private network attached" ("Réseau privé attaché") option choose "None (public IPs)" ("Aucun (IPs publiques)").
I faced the same use case and problem, and after some research and experimentation, got the hint from the small comment on this dialog box:
By default, your worker nodes have a public IPv4. If you choose a private network, the public IPs of these nodes will be used exclusively for administration/linking to the Kubernetes control plane, and your nodes will be assigned an IP on the vLAN of the private network you have chosen
Now I run my Traefik ingress as a DaemonSet using hostNetwork, and every node is reachable directly, even on low ports (as you saw yourself, the default security group is open).
Well, I can't help much further I guess, but I would check the following:
Are you using the public node IP address?
Did you configure your Service as type LoadBalancer properly?
https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
Do you have a load balancer, and is it set up properly?
Did you install an Ingress controller (e.g. ingress-nginx)? You may need to run the ingress controller as a DaemonSet so that its pod is duplicated on each node in your cluster.
Otherwise, I would suggest an Ingress (if this works, you can rule out any firewall-related issues).
This page explains it very well:
What's the difference between ClusterIP, NodePort and LoadBalancer service types in Kubernetes?
In AWS, you have things called security groups... you may have the same kind of thing in your k8s provider (or even on your local machine). Please add those ports to the security groups or local firewalls. In AWS you may need to bind those security groups to your EC2 instance (the Ingress node) as well.
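For example, with the AWS CLI (the security group ID, node port, and source CIDR below are placeholders):
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 30080 \
  --cidr 0.0.0.0/0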
I am developing an application where users can spin up compute pods running Jupyter notebooks and can SSH into their pod. We are using Kubernetes to provision and manage the compute. Since we run our infrastructure in AWS using EKS and Elastic IPs are scarce, we need to route SSH traffic through a bastion instance which forwards SSH traffic (and also HTTP for the Jupyter notebooks) to the correct pod. I am hoping for any suggestions on how to implement this. From my understanding so far, I need a separate port for each user for SSH on the bastion instance. This seems unwieldy, but AFAIK SSH traffic cannot be routed in any other way. For HTTP, we can have routing rules, which should be much more straightforward.
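To make the per-user-port idea concrete, this is roughly what I picture on the bastion (the pod IPs, ports, and user name are hypothetical, and it assumes pod IPs are reachable from the bastion, as with the VPC CNI on EKS):
# On the bastion: one listening port per user, forwarded to that user's pod
socat TCP-LISTEN:2201,fork,reuseaddr TCP:10.0.42.17:22 &   # user A's pod
socat TCP-LISTEN:2202,fork,reuseaddr TCP:10.0.42.18:22 &   # user B's pod
# From a user's machine
ssh -p 2201 jovyan@<bastion-public-ip>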
I have created a simple hello world service in my Kubernetes cluster. I am not using any cloud provider and have set it up on a plain Ubuntu 16.04 server from scratch.
I am able to access the service inside the cluster but now when I want to expose it to the internet, it does not work.
Here is the yml file - deployment.yml
And this is the result of the command - kubectl get all:
Now, when I try to access the external IP with the port in my browser, i.e., 172.31.8.110:8080, it does not work.
NOTE: I also tried the NodePort service type, but that does not provide me with any external IP either; the state remains pending under the "EXTERNAL-IP" column when I run "kubectl get services".
How can I resolve this?
I believe you might have a mix of networking problems tied together.
First of all, 172.31.8.110 belongs to a private network and is not routable over the Internet, so make sure that the location you are browsing from can reach the destination (i.e., it is on the same private network).
As a quick test, you can make an SSH connection to your master node and then check whether you can open the page:
curl 172.31.8.110:8080
In order to expose it to the Internet, you need to use a public IP for your master node, not an internal one. Then update your Service's externalIPs accordingly.
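For example, a minimal sketch of such a Service (the name, label, ports, and the public IP 203.0.113.10 are placeholders):
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  selector:
    app: hello-world          # must match your Deployment's pod labels
  ports:
    - port: 8080              # port reachable on the external IP
      targetPort: 8080        # container port
  externalIPs:
    - 203.0.113.10            # the node's public IP, not the 172.31.x.x internal one
EOF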
Also make sure that your firewall allows network connections from the public Internet to port 8080 on the master node.
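For example, if the server uses ufw (an assumption on my part):
sudo ufw allow 8080/tcp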
In any case, I suggest using this configuration for testing purposes only, as it is generally a bad idea to use the master node for service exposure: it puts extra networking load on the master and widens the attack surface. Use something like an Ingress controller (NGINX or another) plus an Ingress resource instead.
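For illustration, a minimal Ingress resource could look like this on recent Kubernetes versions (the Service name and port are placeholders, and it assumes an ingress controller is already installed):
cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello-world   # assumed Service name
                port:
                  number: 8080
EOF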
One option is also to do SSH local port forwarding.
ssh -L <local-port>:<private-ip-on-your-server>:<remote-port> <ip-of-your-server>
So in your case for example:
ssh -L 8888:172.31.8.110:8080 <ip-of-your-ubuntu-server>
Then you can simply go to your browser and access the site at http://localhost:8888 (no proxy configuration is needed for a plain -L local forward; a SOCKS proxy would only come into play with dynamic forwarding via ssh -D).
I have a Kubernetes cluster on Google Kubernetes Engine. I want to assign a static IP for all outgoing traffic of the cluster.
I already have reserved external IPs but I can't assign them to a cluster with the GCP console.
I found a solution to do it with the CLI:
Static outgoing IP in Kubernetes
but it targets the VM, and I would need to set it each time I deploy, so it does not target the cluster.
Can anybody provide any pointers? Thanks.
GKE currently doesn't have an option to create the cluster with all your nodes using a reserved public IP. All you get in advanced networking options is something like this:
You will have to use the gcloud commands that you mentioned, which should be easy to put in a script.
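For example, something along these lines per node (the instance name, zone, and reserved address are placeholders; the access-config name can be checked with gcloud compute instances describe):
gcloud compute instances delete-access-config gke-mycluster-node-1 \
  --zone us-central1-a --access-config-name "external-nat"
gcloud compute instances add-access-config gke-mycluster-node-1 \
  --zone us-central1-a --access-config-name "external-nat" \
  --address <your-reserved-ip>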
Or you can also use the UI by editing the instance(s) and going into 'Network Interfaces' like this:
I agree with the previous answer that you can't do something like this directly on the cluster, but you can use another service to do what you are looking for: a NAT gateway that uses a fixed public IP.
To be safer, you can even deploy the gateways in multiple zones for redundancy, and your cluster's outgoing traffic will always go through the gateways.
I won't explain how it works here, because Google has already provided a tutorial for exactly what you want to do: https://cloud.google.com/solutions/using-a-nat-gateway-with-kubernetes-engine
Enjoy.
Right now I'm accessing my pods (Postgres, port 5432) through a service that is exposed, but since GCP charges for every forwarding rule created, the number of pods I need to monitor or execute things in is costing me more and more. Is there a way to create a single exposed service for all of my pods? Or can I create some sort of VPN, PuTTY tunnel, or something similar? Any help would be appreciated!
I'm also using kubectl exec.
If you are looking for a managed solution, then Google offers Cloud VPN for that:
https://console.cloud.google.com/networking/vpn/
If you are happy to roll your own, then you can create a new Compute instance on the same network as your nodes and set up OpenVPN there. This gives you a fixed IP as a freebie.
A more advanced solution is to run OpenVPN as a pod (or pods) and expose it with a NodePort Service. (Optionally, manually create a single load balancer on Google Cloud to get a static IP for it.)
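For example (the deployment name is a placeholder; OpenVPN's default port is 1194/UDP):
kubectl expose deployment openvpn --type=NodePort --port=1194 --protocol=UDP --name=openvpn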
At the end of the day, the ideal solution depends very much on your environment and goals.