How can I access a NodePort service in GKE?

This is my Service YAML. After creating the Service on GKE, I don't know how to access it. I can't find an external IP for it. What is the standard way to access this Service? Do I need to create an Ingress?
apiVersion: v1
kind: Service
metadata:
  namespace: dev
  name: ui-svc
  labels:
    targetEnv: dev
    app: ui-svc
spec:
  selector:
    app: ui
    targetEnv: dev
  ports:
    - name: ui
      port: 8080
      targetPort: 8080
      nodePort: 30080
  type: NodePort

If you don't use a private cluster, whose nodes don't have public IP addresses, you can access your NodePort service using any node's public IP address.
What you see in the Services & Ingress section in the Endpoints column is the internal, cluster IP address of your NodePort service.
If you want to know the public IP addresses of your GKE nodes, go to Compute Engine > VM instances:
You will see the list of all your Compute Engine VMs, which also includes your GKE nodes. Note the address in the External IP column. Use it along with the port number, which you can check in your NodePort service details. Simply click on its name "ui-svc" to see the details. At the very bottom of the page you should see the Ports section, which may look as follows:
So in my case I should use <any_node's_public_ip_address>:31251.
One more important thing: don't forget to allow traffic to this port in the firewall, as it is blocked by default. You need to explicitly allow traffic to your nodes, e.g. on port 31251, to be able to access it from the public internet. Simply go to VPC Network > Firewall and set the appropriate rule:
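If you prefer the CLI, a rule like the following should do the same (the rule name here is made up; adjust the network, port and source range to your setup):
gcloud compute firewall-rules create allow-nodeport-31251 \
    --network=default \
    --allow=tcp:31251 \
    --source-ranges=0.0.0.0/0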
UPDATE:
If you created an Autopilot Cluster, by default it is a public one, which means its nodes have public IP addresses:
If during the cluster creation you selected the second option, i.e. "Private cluster", your nodes won't have public IPs by design and you won't be able to access your NodePort service on any public IP. The only option that remains in such a scenario is exposing your workload via a LoadBalancer service or an Ingress, where a single public IP endpoint is created for you, so you can access your workload externally.
However, if you've chosen the default option, i.e. "Public cluster", you can use your nodes' public IPs to access your NodePort service in the very same way as with a Standard (non-Autopilot) cluster.
Of course, in Autopilot mode you won't see your nodes as Compute Engine VMs in your GCP console, but you can still get their public IPs by running:
kubectl get nodes -o wide
They will be shown in the EXTERNAL-IP column.
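If you only want the addresses themselves, a jsonpath query along these lines should also work:
kubectl get nodes \
    -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}'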
To connect to your cluster, simply click on the 3 dots to the right of the cluster name ("Kubernetes Engine" > "Clusters") > click "Connect" > click "RUN IN CLOUD SHELL".
Since you don't know which network tags (if any) have been assigned to your GKE Autopilot nodes, because you don't manage them and they are not shown in your GCP console, you won't be able to target specific network tags when defining a firewall rule to allow access to your NodePort service port (e.g. 30543). Instead, you have to choose the option "All instances in the network":
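In gcloud terms that simply means omitting any target tags, so the rule applies to all instances in the network (the rule name and port are examples):
gcloud compute firewall-rules create allow-nodeport-30543 \
    --network=default \
    --allow=tcp:30543 \
    --source-ranges=0.0.0.0/0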

Related

Access the Kubernetes cluster/node from outside

I am new to Kubernetes. I have created a database cluster on Kubernetes with 2 nodes. I can access those pods from a thin client like DBeaver to check the data, but I cannot access those Kubernetes nodes externally. I am currently trying to run a thick client which will load data into the cluster on Kubernetes.
kubectl describe svc <svc>
I can see a cluster IP assigned to the service. My service is of type LoadBalancer. I tried to use that, but it is still not connecting. I read about using NodePort, but without an IP address, how do I access it?
So what is the best way to connect to a node or cluster from outside?
Thank you in advance
Regards
@KrishnaChaurasia is right, but I would like to explain it in more detail with the help of the official docs.
I strongly recommend going through the following sources:
NodePort Type Service: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>. Here is an example of the NodePort Service:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: MyApp
  ports:
    # By default and for convenience, the `targetPort` is set to the same value as the `port` field.
    - port: 80
      targetPort: 80
      # Optional field
      # By default and for convenience, the Kubernetes control plane will allocate a port from a range (default: 30000-32767)
      nodePort: 30007
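Once a Service like this is applied, you would reach it from outside the cluster by requesting any node's external IP on the node port, e.g. (the address is a placeholder):
curl http://<node-external-ip>:30007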
Accessing services running on the cluster: You have several options for connecting to nodes, pods and services from outside the cluster:
Access services through public IPs.
Use a service with type NodePort or LoadBalancer to make the service reachable outside the cluster. See the services and kubectl expose documentation.
Depending on your cluster environment, this may just expose the service to your corporate network, or it may expose it to the internet. Think about whether the service being exposed is secure. Does it do its own authentication?
Place pods behind services. To access one specific pod from a set of replicas, such as for debugging, place a unique label on the pod and create a new service which selects this label.
In most cases, it should not be necessary for an application developer to directly access nodes via their node IPs.
A supplement example: Use a Service to Access an Application in a Cluster: This page shows how to create a Kubernetes Service object that external clients can use to access an application running in a cluster.
These will help you to better understand the concepts of different Service Types, how to expose and access them from outside the cluster.

How to get client ip from Google Network Load Balancer with kubernetes service

I created a Kubernetes service in GKE with type: LoadBalancer.
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      targetPort: http
  selector:
    app: nginx
It's an nginx service, and I try to get the original client IP, like:
location / {
    echo $remote_addr;
    echo $http_x_forwarded_for;
}
But the result I get is:
10.140.0.97
$remote_addr is an internal Kubernetes IP.
$http_x_forwarded_for is empty.
I don't know why this doesn't work as the documentation says.
What I've read:
https://cloud.google.com/load-balancing/docs/network
Network Load Balancing is a pass-through load balancer, which means that your firewall rules must allow traffic from the client source IP addresses.
https://cloud.google.com/kubernetes-engine/docs/concepts/network-overview#ext-lb
If your Service needs to be reachable from outside the cluster and outside your VPC network, you can configure your Service as a LoadBalancer, by setting the Service's type field to LoadBalancer when defining the Service. GKE then provisions a Network Load Balancer in front of the Service. The Network Load Balancer is aware of all nodes in your cluster and configures your VPC network's firewall rules to allow connections to the Service from outside the VPC network, using the Service's external IP address. You can assign a static external IP address to the Service. Visit Configuring Domain Names with Static IP Addresses for more information.
Just add externalTrafficPolicy: Local
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
Packets sent to Services with Type=LoadBalancer are source NAT’d by default, because all schedulable Kubernetes nodes in the Ready state are eligible for load-balanced traffic. So if packets arrive at a node without an endpoint, the system proxies it to a node with an endpoint, replacing the source IP on the packet with the IP of the node (as described in the previous section).
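For completeness, here is a sketch of the Service from the question with that field added (name, ports and selector are taken from the original manifest):
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  # Preserve the client source IP instead of SNAT-ing it to a node IP
  externalTrafficPolicy: Local
  ports:
    - name: http
      port: 80
      targetPort: http
  selector:
    app: nginx
With this in place, $remote_addr in nginx should show the real client address; the trade-off is that traffic is only routed to nodes that actually run an nginx pod.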
Reference
https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-typeloadbalancer
https://github.com/kubernetes/kubernetes/issues/10921
That's because the Network Load Balancer handles all incoming traffic and redirects it to your GKE cluster. Inside the k8s cluster, everything runs on a virtual IP network, which is why you get 10.140.0.97.
The 1st document says you need to set up the firewall to accept traffic from client source IPs; otherwise, by GCP default, you won't get any incoming traffic. But the 2nd document indicates that GKE will set this up for you automatically. All you need to do is find out your external IP and give it a try. You should be able to see your nginx welcome page.
P.S. The default external IP is dynamic; if you want a static IP, you can get one via the console or the gcloud CLI.
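For example, something along these lines reserves a regional static address (the name and region are placeholders):
gcloud compute addresses create nginx-static-ip --region=us-central1
gcloud compute addresses describe nginx-static-ip --region=us-central1 --format='value(address)'
You can then put the returned address into the Service's spec.loadBalancerIP field.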

GKE 1 load balancer with multiple apps on different assigned ports

I want to be able to deploy several single-pod apps and access them on a single IP address, leaning on Kubernetes to assign the ports as it does when you use a NodePort service.
Is there a way to use NodePort with a load balancer?
Honestly, NodePort might work by itself, but GKE seems to block direct access to the nodes. There don't seem to be firewall controls like on their unmanaged VMs.
Here's a service, if we need something to base an answer on. In this case, I want to deploy 10 of these services, each a different application, on the same IP, each publicly accessible on a different port, and each proxying port 80 of the nginx container.
---
apiVersion: v1
kind: Service
metadata:
  name: foo-svc
spec:
  selector:
    app: nginx
  ports:
    - name: foo
      protocol: TCP
      port: 80
  type: NodePort
GKE seems to block direct access to the nodes.
GCP allows creating firewall rules that allow incoming traffic either to 'All instances in the network' or to 'Specified target tags/service accounts' in your VPC network.
Rules are persistent unless the opposite is specified under the organization's policies.
Nodes' external IP addresses can be checked at Cloud Console --> Compute Engine --> VM instances or with kubectl get nodes -o wide.
I run GKE (managed k8s) and can access all my assets externally.
I have opened all the needed ports in my setup. Below you can find the quickest example:
$ kubectl get nodes -o wide
NAME        AGE   VERSION           INTERNAL-IP   EXTERNAL-IP
gke--mnnv   43d   v1.14.10-gke.27   10.156.0.11   34.89.x.x
gke--nw9v   43d   v1.14.10-gke.27   10.156.0.12   35.246.x.x
$ kubectl get svc -o wide
NAME     TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)                         SELECTOR
knp-np   NodePort   10.0.11.113   <none>        8180:30008/TCP,8180:30009/TCP   app=server-go
$ curl 35.246.x.x:30008/test
Hello from ServerGo. You requested: /test
That is why it looks like a bunch of NodePort type Services would be sufficient (each one serving requests for a particular selector).
If for some reason it's not possible to set up the FW rules to allow traffic directly to your nodes, it's possible to configure a GCP TCP Load Balancer.
Cloud Console --> Network Services --> Load Balancing --> Create LB --> TCP Load Balancing.
There you can select your GKE nodes (or pool of nodes) as a 'Backend' and specify all the needed ports for the 'Frontend'. For the Frontend, you can reserve a static IP right during the configuration and specify the 'Port' range as two port numbers separated by a dash (assuming you have multiple ports to be forwarded to your node pool). Additionally, you can create multiple 'Frontends' if needed.
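If you'd rather script it, a rough gcloud equivalent might look like this (names, region, zone and port range are placeholders; the instances are your GKE nodes):
# Create a target pool and add the GKE nodes to it
gcloud compute target-pools create gke-nodeport-pool --region=europe-west3
gcloud compute target-pools add-instances gke-nodeport-pool \
    --instances=gke--mnnv,gke--nw9v --instances-zone=europe-west3-a
# Reserve a static address and forward a range of node ports to the pool
gcloud compute addresses create gke-nodeport-ip --region=europe-west3
gcloud compute forwarding-rules create gke-nodeport-fr \
    --region=europe-west3 --address=gke-nodeport-ip \
    --ip-protocol=TCP --ports=30000-30010 \
    --target-pool=gke-nodeport-pool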
I hope that helps.
Is there a way to use NodePort with a load balancer?
The Kubernetes LoadBalancer type Service builds on top of NodePort. So internally, LoadBalancer uses NodePort, meaning that when a LoadBalancer type Service is created, it automatically maps to a NodePort. Although it's tricky, it is possible to create a NodePort type Service and manually configure the Google-provided load balancer to point to the NodePorts, as in the gcloud sketch above.

Kubernetes + GCP TCP Load balancing: How can I assign a static IP to a Kubernetes Service?

I want to assign a static (i.e. non-ephemeral) regional IP to a Kubernetes service. Currently the service is of type "LoadBalancer", which GCP exposes as a regional TCP load balancer. By default the IP address of the forwarding rule is ephemeral. Is there any way I can use an existing static IP or assign my own address by name (as is possible with an Ingress/HTTP(S) load balancer)?
I have also tried to create my own forwarding rule with a custom static regional IP using the NodePort of the service. I have only succeeded in building the forwarding rule using the actual NodePort, but how does the Kubernetes/GCP magic work that maps port 80 to the NodePort when using type "LoadBalancer"?
I have found a way to set the static IP. After that I needed to delete the service object and re-create it.
apiVersion: v1
kind: Service
spec:
  loadBalancerIP: '<static ip>'
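In context, a complete manifest might look like this (the name, ports, selector and address are placeholders; the regional static IP must be reserved first, e.g. with gcloud compute addresses create):
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  # Must be a reserved regional static IP in the same region as the cluster
  loadBalancerIP: '203.0.113.20'
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: my-app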
But I am still curious about the second part of my question.

Accessing GCP Internal Load Balancer from another region

I need to access an internal application running behind a GKE nginx ingress service riding on an Internal Load Balancer, from another GCP region.
I am fully aware that it is not possible using direct Google networking and it is a huge limitation (GCP Feature Request).
The Internal Load Balancer can be accessed perfectly well via a VPN tunnel from AWS, but I am not sure that creating such a tunnel between GCP regions under the same network is a good idea.
Workarounds are welcome!
In the release notes from GCP, it is stated that:
Global access is an optional parameter for internal LoadBalancer Services that allows clients from any region in your VPC to access the internal TCP/UDP Load Balancer IP address.
Global access is enabled per-Service using the following annotation:
networking.gke.io/internal-load-balancer-allow-global-access: "true".
UPDATE: The Service below works for GKE v1.16.x and newer versions:
apiVersion: v1
kind: Service
metadata:
  name: ilb-global
  annotations:
    # Required to assign an internal IP address
    cloud.google.com/load-balancer-type: "Internal"
    # Required to enable global access
    networking.gke.io/internal-load-balancer-allow-global-access: "true"
  labels:
    app: hello
spec:
  type: LoadBalancer
  selector:
    app: hello
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
For GKE v1.15.x and older versions:
Accessing the internal load balancer IP from a VM sitting in a different region will not work. But this helped me to make the internal load balancer global.
As we know, an internal load balancer is nothing but a forwarding rule, so we can use a gcloud command to enable global access.
Firstly, get the internal IP address of the load balancer using kubectl and save its IP as below:
# COMMAND:
kubectl get services/ilb-global
# OUTPUT:
NAME         TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
ilb-global   LoadBalancer   10.0.12.12   10.123.4.5    80:32400/TCP   18m
Note the value of "EXTERNAL-IP" or simply run the below command to make it even simpler:
# COMMAND:
kubectl get service/ilb-global \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
# OUTPUT:
10.123.4.5
GCP gives a randomly generated ID to the forwarding rule created for this Load Balancer. If you have multiple forwarding rules, use the following command to figure out which one is the internal load balancer you just created:
# COMMAND:
gcloud compute forwarding-rules list | grep 10.123.4.5
# OUTPUT:
NAME                             REGION        IP_ADDRESS   IP_PROTOCOL   TARGET
a26cmodifiedb3f8252484ed9d0192   asia-south1   10.123.4.5   TCP           asia-south1/backendServices/a26cmodified44904b3f8252484ed9d019
NOTE: If you are not working on Linux or grep is not installed, simply run gcloud compute forwarding-rules list and manually look for the forwarding rule with the IP address we are looking for.
Note the name of the forwarding rule and run the following command to update it with --allow-global-access (remember to add beta, as it is still a beta feature):
# COMMAND:
gcloud beta compute forwarding-rules update a26cmodified904b3f8252484ed9d0192 \
--region asia-south1 --allow-global-access
# OUTPUT:
Updated [https://www.googleapis.com/compute/beta/projects/PROJECT/regions/REGION/forwardingRules/a26hehemodifiedhehe490252484ed9d0192].
And it's done. Now you can access this internal IP (10.123.4.5) from any instance in any region (within the same VPC network).
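If you want to double-check that the flag took effect, describing the rule should print True (same rule name and region as above):
# COMMAND:
gcloud beta compute forwarding-rules describe a26cmodified904b3f8252484ed9d0192 \
    --region asia-south1 --format='value(allowGlobalAccess)'
# OUTPUT:
True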
Another possible way is to implement an nginx reverse proxy server on a Compute Engine instance in the same region as the GKE cluster, and use the internal IP of the Compute Engine instance to communicate with the services of the GKE cluster.
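A minimal sketch of such a proxy config, assuming the internal load balancer answers on 10.123.4.5:80 as in the example above:
server {
    listen 80;
    location / {
        # Forward everything to the internal load balancer in the same region
        proxy_pass http://10.123.4.5;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}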
First of all, note that the only way to connect to any GCP resource (in this case your GKE cluster) from an on-premises location is through either a Cloud Interconnect or a VPN setup, and they must be in the same region and VPC to be able to communicate with each other.
Having said that, I see you'd rather not do that under the same VPC, so a workaround for your scenario could be:
Creating a Service of type LoadBalancer, so your cluster can be reachable through an external (public) IP by exposing this service. If you are worried about security, you can use Istio to enforce access policies, for example.
Or, creating an HTTP(S) load balancer with Ingress, so your cluster can be reachable through its external (public) IP. Again, for security purposes you can use GCP Cloud Armor, which so far works only for HTTP(S) Load Balancing.