How to access service through ingress from inside and outside server - kubernetes

Using a NodePort service with an ingress, I successfully exposed the service to the outside world.
--- service
NAMESPACE       NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)
default         kubernetes      ClusterIP   10.96.0.1        <none>        443/TCP
default         postgres        ClusterIP   10.106.182.170   <none>        5432/TCP
default         user-api        NodePort    10.99.12.136     <none>        3000:32099/TCP
ingress-nginx   ingress-nginx   NodePort    10.110.104.0     <none>        80:31691/TCP,443:30593/TCP
--- ingress
NAME          HOSTS         ADDRESS        PORTS   AGE
app-ingress   example.com   10.110.104.0   80      3h27m
The ingress rules are shown below.
Host          Path         Backends
----          ----         --------
example.com
              /user-api    user-api:3000 (172.16.117.201:3000)
If my user-api exposes a RESTful endpoint /v1/health, how can I access this API from inside and from outside the cluster?

From the inside, go through the service directly: http://user-api.default:3000. From the outside, use any node's external IP (see kubectl get node -o wide for a list of them) together with the ingress controller's NodePort, 31691.
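For example (a sketch; <node-external-ip> comes from kubectl get node -o wide, and the paths assume the ingress strips the /user-api prefix; adjust them if your app itself serves under /user-api):

# from inside the cluster (e.g. from another pod), hit the service directly:
curl http://user-api.default:3000/v1/health

# from outside, go through the ingress controller's NodePort (80:31691 above)
# and send the Host header so the example.com rule matches:
curl -H "Host: example.com" http://<node-external-ip>:31691/user-api/v1/health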

Related

Change the IP address of my LoadBalancer on GKE

I want to change the IP address of my LoadBalancer ingress-nginx-controller in Google Cloud. I have assigned the IP address to the LoadBalancer (see the screenshot below), but unfortunately it is not picked up in GKE. Why? Is that a bug?
(screenshot: GKE LB IP address change)
I have verified this on my GKE test cluster.
When you reserve a static external IP address, it isn't assigned to any of your VMs. Depending on how you created the cluster and reserved the IP (Standard or Premium network tier), you can get an error like the one below:
Error syncing load balancer: failed to ensure load balancer: failed to create forwarding rule for load balancer (a574130f333b143a2a62281ef47c8dbb(default/nginx-ingress-controller)): googleapi: Error 400: PREMIUM network tier (the project's default network tier) is not supported: The network tier of specified IP address is STANDARD, that of Forwarding Rule must be the same., badRequest
In this scenario I used a cluster based in us-central1-c and reserved the IP with Network Service Tier: Premium and Type: Regional, in the region where my cluster is based (us-central1). My external IP: 34.66.79.1X8.
NOTE: The reserved IP must be in the same region as your cluster.
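For reference, a sketch of reserving and inspecting such an address with gcloud (the name nginx-ingress-ip is hypothetical; use the region your cluster runs in):

# reserve a regional static address, then confirm its tier and status
gcloud compute addresses create nginx-ingress-ip --region us-central1
gcloud compute addresses describe nginx-ingress-ip --region us-central1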
Option 1 - Use the Helm chart
Deploy Nginx
helm install nginx-ingress stable/nginx-ingress --set controller.service.loadBalancerIP=34.66.79.1X8,rbac.create=true
Service output:
$ kubectl get svc
NAME                            TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)                      AGE
kubernetes                      ClusterIP      10.8.0.1      <none>        443/TCP                      5h49m
nginx-ingress-controller        LoadBalancer   10.8.5.158    <pending>     80:31898/TCP,443:30554/TCP   27s
nginx-ingress-default-backend   ClusterIP      10.8.13.209   <none>        80/TCP                       27s
Service describe output:
$ kubectl describe svc nginx-ingress-controller
...
Events:
Type    Reason                Age   From                Message
----    ------                ----  ----                -------
Normal  EnsuringLoadBalancer  32s   service-controller  Ensuring load balancer
Normal  EnsuredLoadBalancer   5s    service-controller  Ensured load balancer
Final output:
$ kubectl get svc
NAME                            TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)                      AGE
kubernetes                      ClusterIP      10.8.0.1      <none>         443/TCP                      5h49m
nginx-ingress-controller        LoadBalancer   10.8.5.158    34.66.79.1X8   80:31898/TCP,443:30554/TCP   35s
nginx-ingress-default-backend   ClusterIP      10.8.13.209   <none>         80/TCP                       35s
Option 2 - Editing Nginx YAMLs before deploying Nginx
As per docs:
Initialize your user as a cluster-admin with the following command:
kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole cluster-admin \
--user $(gcloud config get-value account)
Download the YAML:
$ wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.35.0/deploy/static/provider/cloud/deploy.yaml
Edit the LoadBalancer service and add loadBalancerIP: <your-reserved-ip> like below:
# Source: ingress-nginx/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    helm.sh/chart: ingress-nginx-2.13.0
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.35.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  loadBalancerIP: 34.66.79.1X8   # this line
  externalTrafficPolicy: Local
  ports:
Deploy it with kubectl apply -f deploy.yaml. Service output below:
$ kubectl get svc -A
NAMESPACE       NAME                                 TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)                      AGE
default         kubernetes                           ClusterIP      10.8.0.1     <none>        443/TCP                      6h6m
ingress-nginx   ingress-nginx-controller             LoadBalancer   10.8.5.165   <pending>     80:31226/TCP,443:31161/TCP   17s
ingress-nginx   ingress-nginx-controller-admission   ClusterIP      10.8.9.216   <none>        443/TCP                      18s
...
Describe output:
$ kubectl describe svc ingress-nginx-controller -n ingress-nginx
Events:
Type    Reason                Age   From                Message
----    ------                ----  ----                -------
Normal  EnsuringLoadBalancer  40s   service-controller  Ensuring load balancer
Normal  EnsuredLoadBalancer   2s    service-controller  Ensured load balancer
Service with reserved IP:
$ kubectl get svc -n ingress-nginx
NAME                                 TYPE           CLUSTER-IP   EXTERNAL-IP    PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.8.5.165   34.66.79.1X8   80:31226/TCP,443:31161/TCP   2m22s
ingress-nginx-controller-admission   ClusterIP      10.8.9.216   <none>         443/TCP                      2m23s
In addition
Also keep in mind that you should add the annotation kubernetes.io/ingress.class: nginx to your ingress resource when you want to force GKE to use NGINX Ingress features, like rewrites.
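A minimal sketch of such an ingress (the name, host, path and backend are placeholders; the API version and backend syntax match the v1beta1 ingress used by this NGINX Ingress release):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-ingress                  # placeholder name
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /app
        backend:
          serviceName: my-service   # placeholder backend service
          servicePort: 80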

How to get FQDN DNS name of a kubernetes service?

How do I get the full FQDN of a service inside Kubernetes?
➜ k get svc -o wide
NAME                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE   SELECTOR
airflow-flower-service   ClusterIP   172.20.119.107   <none>        5555/TCP   20d   app=edna-airflow
airflow-service          ClusterIP   172.20.76.63     <none>        80/TCP     20d   app=edna-airflow
backend-service          ClusterIP   172.20.39.154    <none>        80/TCP     20d   app=edna-backend
So how can I query the internal Kubernetes DNS to get the FQDN of, for example, the backend-service?
Exec into any pod in the same namespace with kubectl exec -ti <your pod> -- bash, then run nslookup <your service>. Unless you have changed the cluster's DNS configuration, the FQDN will typically be yourservice.yournamespace.svc.cluster.local.
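For example, for the backend-service above (a sketch; the namespace is assumed to be default and the DNS server address is illustrative):

$ kubectl exec -ti <any-pod-in-namespace> -- nslookup backend-service
Server:    172.20.0.10
Address:   172.20.0.10#53

Name:      backend-service.default.svc.cluster.local
Address:   172.20.39.154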

How to assign an IP to istio-ingressgateway on localhost?

I am using kubespray to run a Kubernetes cluster on my laptop. The cluster runs on 7 VMs, and the roles of the VMs are spread as follows:
NAME    STATUS   ROLES    AGE     VERSION
k8s-1   Ready    master   2d22h   v1.16.2
k8s-2   Ready    master   2d22h   v1.16.2
k8s-3   Ready    master   2d22h   v1.16.2
k8s-4   Ready    master   2d22h   v1.16.2
k8s-5   Ready    <none>   2d22h   v1.16.2
k8s-6   Ready    <none>   2d22h   v1.16.2
k8s-7   Ready    <none>   2d22h   v1.16.2
I've installed Istio (https://istio.io/) to build a microservices environment.
I have 2 services running that I'd like to access from outside:
k get services
NAME              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
greeter-service   ClusterIP   10.233.50.109   <none>        3000/TCP   47h
helloweb          ClusterIP   10.233.8.207    <none>        3000/TCP   47h
and the running pods:
NAMESPACE   NAME                                 READY   STATUS    RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
default     greeter-service-v1-8d97f9bcd-2hf4x   2/2     Running   0          47h   10.233.69.7   k8s-6   <none>           <none>
default     greeter-service-v1-8d97f9bcd-gnsvp   2/2     Running   0          47h   10.233.65.3   k8s-2   <none>           <none>
default     greeter-service-v1-8d97f9bcd-lkt6p   2/2     Running   0          47h   10.233.68.9   k8s-7   <none>           <none>
default     helloweb-77c9476f6d-7f76v            2/2     Running   0          47h   10.233.64.3   k8s-1   <none>           <none>
default     helloweb-77c9476f6d-pj494            2/2     Running   0          47h   10.233.69.8   k8s-6   <none>           <none>
default     helloweb-77c9476f6d-tnqfb            2/2     Running   0          47h   10.233.70.7   k8s-5   <none>           <none>
The problem is, I cannot access the services from outside, because I do not have an EXTERNAL-IP address (remember, the cluster is running on my laptop).
k get svc istio-ingressgateway -n istio-system
NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                                                                                     AGE
istio-ingressgateway   LoadBalancer   10.233.61.112   <pending>     15020:31311/TCP,80:30383/TCP,443:31494/TCP,15029:31383/TCP,15030:30784/TCP,15031:30322/TCP,15032:30823/TCP,15443:30401/TCP   47h
As you can see, the value in the EXTERNAL-IP column is <pending>.
The question is: how do I assign an EXTERNAL-IP to the istio-ingressgateway?
First of all, you can't make Kubernetes assign you an external IP address, as the LoadBalancer service type is cloud-provider specific. You could map your router's external IP address to it, I guess, but it is not trivial.
To reach the service, you can do this:
kubectl edit svc istio-ingressgateway -n istio-system
Change the type of the service from LoadBalancer to ClusterIP. You can also use NodePort. Actually, you can skip this step, as a LoadBalancer service already includes a NodePort and a ClusterIP; this is just to get rid of the pending status.
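If you prefer a non-interactive version of that type change, a one-line sketch:

kubectl -n istio-system patch svc istio-ingressgateway -p '{"spec":{"type":"ClusterIP"}}'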
kubectl port-forward svc/istio-ingressgateway YOUR_LAPTOP_PORT:INGRESS_CLUSTER_IP_PORT -n istio-system
I don't know which port you want to access from your localhost. Say it's 80; you can do:
kubectl port-forward svc/istio-ingressgateway 8080:80 -n istio-system
Now port 8080 of your laptop (localhost:8080) will be mapped to port 80 of the istio-ingressgateway service.
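A quick check from the laptop (what you get back depends on the Gateway and VirtualService routes you have configured):

curl -v http://localhost:8080/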
By default, there is no way for Kubernetes to assign an external IP to a LoadBalancer service.
This service type needs infrastructure support, which exists in cloud offerings like GKE, AKS, EKS, etc.
As you are running this cluster on your laptop, deploy the MetalLB load balancer to get an EXTERNAL-IP.
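A minimal sketch of a MetalLB layer-2 configuration (this is the ConfigMap style used by pre-0.13 MetalLB releases; the address range is a placeholder and must be a free range on the network your VMs share):

# assumes MetalLB is already installed in the metallb-system namespace
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250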
It's not possible, as Suresh explained.
But if you want access from your laptop, you can use type: NodePort in your service, which gives you access from outside the cluster.
You should first obtain the IP of one of your nodes, then create your service with something like this:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  type: NodePort
  ports:
  - name: http
    protocol: TCP
    port: 3000
    targetPort: 3000
    nodePort: 30000
After that, you can access it from your laptop at http://<node-ip>:30000.
There is no need to create an ingress for that.
You should use a port in the range 30000-32767, as stated below:
If you set the type field to NodePort, the Kubernetes control plane allocates a port from a range specified by --service-node-port-range flag (default: 30000-32767).
If you are using minikube, just run:
$ minikube tunnel
$ k get svc -n istio-system
NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                                      AGE
istio-ingressgateway   LoadBalancer   10.111.187.167   127.0.0.1     15021:31949/TCP,80:32215/TCP,443:30585/TCP   9m48s

Access services on k8s on prem

I have 3 virtual machines (Ubuntu 18 LTS) on my local PC: 1 is a master and 2 are nodes. I was able to install Kubernetes and also to set up my application.
My application consists of 3 parts: database, backend and frontend. For each of these parts I've created and deployed a service. I want to expose the FE service outside the cluster to be able to access it from one of the nodes.
The service description looks like this:
apiVersion: v1
kind: Service
metadata:
  name: fe-deployment
  labels:
    run: fe-srv
spec:
  ports:
  - protocol: TCP
    port: 8085
    targetPort: 80
  selector:
    app: fe
  type: NodePort
The output of
kubectl get node -o wide
NAME      STATUS   ROLES    AGE     VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
k8node1   Ready    <none>   2d22h   v1.16.0   172.17.199.105   <none>        Ubuntu 18.04.3 LTS   5.0.0-29-generic   docker://18.9.7
k8node2   Ready    <none>   2d22h   v1.16.0   172.17.199.110   <none>        Ubuntu 18.04.3 LTS   5.0.0-29-generic   docker://18.9.7
kubectl get service -o wide
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE     SELECTOR
be-deployment   ClusterIP   10.96.169.225   <none>        8080/TCP         2d22h   app=be
db-deployment   ClusterIP   10.110.14.88    <none>        3306/TCP         2d22h   app=db
fe-deployment   NodePort    10.104.211.32   <none>        8085:32476/TCP   2d21h   app=fe
I would have expected that using one node's IP and the node port I would be able to access my FE from a browser, but it doesn't work.
What am I missing? How can I access my FE from outside the cluster?
Edit
Based on the documentation, the NodePort service type:
Exposes the Service on each Node’s IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You’ll be able to contact the NodePort Service, from outside the cluster, by requesting NodeIP:NodePort
I understand that I will access my service from outside the cluster using a node IP and a static port. From the "node IP" statement I understand that it refers to the machine's (the VM's, in my case) IP.
Later Edit
I've checked the firewall and it seems it is disabled on all my machines:
sudo ufw status
Status: inactive
Later later edit
As I said in a comment, trying to telnet to the IPv4 address didn't work. Trying with IPv6 works on localhost and also using the ethernet interface's IPv6 address.
The netstat output is:
netstat -6 -a | grep 324
tcp6 1 0 [::]:32476 [::]:* LISTEN
Despite the fact that it should work (based on the information I've read on the internet), it doesn't work with IPv4. Is there a way to change this?
Later later later edit
It seems that this is a bug
You can assign an EXTERNAL-IP to the fe service, using the IP address of a node.
Then you can check it with curl -k http://EXTERNAL-IP:PORT,
where EXTERNAL-IP is the node's IP address.
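A sketch of that approach using the service's externalIPs field (assuming 172.17.199.105 is one of your node IPs; with an externalIP set, the service answers on its own port, 8085, at that address):

# patch the external IP onto the existing service, then test it
kubectl patch svc fe-deployment -p '{"spec":{"externalIPs":["172.17.199.105"]}}'
curl -k http://172.17.199.105:8085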
In your case, since you didn't define a nodePort, Kubernetes randomly assigned port 32476 to your service. To access that service, go to <EXTERNAL-NODE-IP>:32476 (kubernetes-docs).
If you want to assign a specific port, you need to define nodePort in the service definition (example for an nginx-based ingress):
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: ingress-nginx
  name: ingress-nginx
spec:
  ports:
  - name: http
    nodePort: 30080
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app.kubernetes.io/name: ingress-nginx
  type: NodePort
You will not get an external IP when exposing a service as a NodePort.
Exposing a service on a NodePort means that your service is available externally via the node IP of any node in the cluster, at a random port between 30000-32767 (default behaviour).
Each node in the cluster proxies that port (the same port number on every node) into the pods backing your service.
From your kubectl get service -o wide output:
NAME            TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE     SELECTOR
fe-deployment   NodePort   10.104.211.32   <none>        8085:32476/TCP   2d21h   app=fe
We can see that the port on which your service is exposed is 32476.
From your kubectl get node -o wide output:
NAME      STATUS   ROLES    AGE     VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
k8node1   Ready    <none>   2d22h   v1.16.0   172.17.199.105   <none>        Ubuntu 18.04.3 LTS   5.0.0-29-generic   docker://18.9.7
k8node2   Ready    <none>   2d22h   v1.16.0   172.17.199.110   <none>        Ubuntu 18.04.3 LTS   5.0.0-29-generic   docker://18.9.7
We can see that your node IPs are 172.17.199.105 and 172.17.199.110.
You can now access your service externally using <Node-IP>:<Node-Port>.
So in your case these are 172.17.199.105:32476 and 172.17.199.110:32476, depending on which node you use to access your service.
Additionally, if you want a fixed node port, you can specify it in the YAML.
You need to make sure you add a security rule on your nodes to allow traffic on that particular port.
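For example, a quick check from a machine that can reach the nodes (a sketch, assuming the FE serves plain HTTP and no firewall blocks the port):

curl http://172.17.199.105:32476/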

ingress-nginx No IP Address

I've created a test k8s cluster using kubespray (3 nodes, VirtualBox CentOS VM based) and have been trying to follow the guide for setting up nginx ingress, but I never seem to get an external address assigned to my service.
I can see that the ingress controller is apparently installed:
[root@k8s-01 ~]# kubectl get pods --all-namespaces -l app=ingress-nginx
NAMESPACE       NAME                                        READY   STATUS    RESTARTS   AGE
ingress-nginx   nginx-ingress-controller-58c9df5856-v6hml   1/1     Running   0          28m
And following the prerequisites docs, I have set up the http-svc sample service:
[root@k8s-01 ~]# kubectl get po
NAME                       READY   STATUS    RESTARTS   AGE
http-svc-794dc89f5-f2vlx   1/1     Running   0          27m
[root@k8s-01 ~]# kubectl get svc http-svc
NAME       TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
http-svc   LoadBalancer   10.233.25.131   <pending>     80:30055/TCP   27m
[root@k8s-01 ~]# kubectl describe svc http-svc
Name:                     http-svc
Namespace:                default
Labels:                   app=http-svc
Annotations:              <none>
Selector:                 app=http-svc
Type:                     LoadBalancer
IP:                       10.233.25.131
Port:                     http  80/TCP
TargetPort:               8080/TCP
NodePort:                 http  30055/TCP
Endpoints:                10.233.65.5:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason  Age  From                Message
  ----    ------  ---  ----                -------
  Normal  Type    27m  service-controller  ClusterIP -> LoadBalancer
As far as I know, I should see a LoadBalancer Ingress entry, but the external IP for the service still appears to be pending. Something isn't working, and I'm at a loss as to where to start diagnosing what has gone wrong.
Since you are creating your cluster locally, exposing your service as type LoadBalancer will not provision a load balancer for you. Use the LoadBalancer type if you are creating your cluster in a cloud environment such as AWS or GKE; in AWS it will auto-provision a load balancer (ELB) and assign an external IP for the service.
To make your service work with the current settings and environment, change your service type from LoadBalancer to NodePort.
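A sketch of that change patched in place (the NodePort allocated above, 30055, then becomes the port to use):

# switch the service type, then reach it on any node IP at the allocated port
kubectl patch svc http-svc -p '{"spec":{"type":"NodePort"}}'
curl http://<node-ip>:30055/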