External IP is <none> after exposing my service - Kubernetes

I followed the steps at the Kubernetes URL exactly, but at the last step I can't get an external IP with the command kubectl get svc nginx:
root@XXX:~# kubectl expose rc nginx --port=80
service "nginx" exposed
root@in28-051:~# kubectl get svc nginx
NAME      CLUSTER_IP   EXTERNAL_IP   PORT(S)   SELECTOR    AGE
nginx     10.0.0.109   <none>        80/TCP    run=nginx   5m

The service documentation at https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/services.md#type-loadbalancer starts with:
"On cloud providers which support external load balancers..."
I agree we should better document which cloud providers support this. I'll file an issue.
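Note also that kubectl expose creates a ClusterIP service by default, which never gets an external IP. A minimal sketch of requesting a load balancer instead (assuming a cloud provider that supports external load balancers):
$ kubectl expose rc nginx --port=80 --type=LoadBalancer
$ kubectl get svc nginx
The EXTERNAL_IP column will show <pending> until the cloud provider finishes provisioning.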

Related

How do I actually connect to botfront on kubernetes?

I tried deploying on EKS, and my config.yaml follows this suggested format:
botfront:
  app:
    # The complete external host of the Botfront application (e.g. botfront.yoursite.com).
    # It must be set even if running on a private or local DNS (it populates the ROOT_URL).
    host: botfront.yoursite.com
mongodb:
  enabled: true  # disable to use an external MongoDB host
  # Username of the MongoDB user that will have read-write access to the Botfront database.
  # This is not the root user.
  mongodbUsername: username
  # Password of the MongoDB user that will have read-write access to the Botfront database.
  # This is not the root user.
  mongodbPassword: password
  # MongoDB root password
  mongodbRootPassword: rootpassword
And I ran this command:
helm install -f config.yaml -n botfront --namespace botfront botfront/botfront
and the deployment appeared successful with all pods listed as running.
But botfront.yoursite.com goes nowhere. I checked the ingress and it matches, but there are no external IP addresses or anything. I don't know how to actually access my Botfront site once it's deployed on Kubernetes.
What am I missing?
EDIT:
With an NGINX load balancer installed, kubectl get ingresses -n botfront now returns:
NAME                   CLASS    HOSTS                ADDRESS                                                                         PORTS   AGE
botfront-app-ingress   <none>   botfront.cream.com   a182b0b24e4fb4a0f8bd6300b440e5fa-423aebd224ce20ac.elb.us-east-2.amazonaws.com   80      4d1h
and
kubectl get svc -n botfront returns:
NAME                        TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)           AGE
botfront-api-service        NodePort   10.100.207.27   <none>        80:31723/TCP      4d1h
botfront-app-service        NodePort   10.100.26.173   <none>        80:30873/TCP      4d1h
botfront-duckling-service   NodePort   10.100.75.248   <none>        80:31989/TCP      4d1h
botfront-mongodb-service    NodePort   10.100.155.11   <none>        27017:30358/TCP   4d1h
If you run kubectl get svc -n botfront, it will show you all the Services that expose your Botfront components:
$ kubectl get svc -n botfront
NAME                        TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)           AGE
botfront-api-service        NodePort   10.3.252.32    <none>        80:32077/TCP      63s
botfront-app-service        NodePort   10.3.249.247   <none>        80:31201/TCP      63s
botfront-duckling-service   NodePort   10.3.248.75    <none>        80:31209/TCP      63s
botfront-mongodb-service    NodePort   10.3.252.26    <none>        27017:31939/TCP   64s
Each of them is of type NodePort, which means it exposes your app on a specific port of the external IP address of each of your EKS cluster nodes.
So if your node1 IP happens to be 1.2.3.4, you can access botfront-api-service at 1.2.3.4:32077. Don't forget to allow access to this port in your firewall/security groups. If you have a registered domain, e.g. yoursite.com, you can configure a subdomain botfront.yoursite.com and point it at one of your EKS nodes. Then you'll be able to access it using your domain. This is the simplest way; a quick lookup of the node IPs is sketched below.
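To find the nodes' addresses, check the EXTERNAL-IP column of (output varies per cluster):
$ kubectl get nodes -o wide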
To access it in a more effective way than by a specific node's IP and a non-standard port, you may want to expose it via an Ingress, which will create an external load balancer and make your NodePort services available under one external IP address and the standard HTTP port.
Update: I see that this chart already comes with an ingress that exposes your app:
$ kubectl get ingresses -n botfront
NAME                   HOSTS                   ADDRESS   PORTS   AGE
botfront-app-ingress   botfront.yoursite.com             80      70m
If you retrieve its yaml definition by:
$ kubectl get ingresses -n botfront -o yaml
you'll see that it uses the following annotation:
kubernetes.io/ingress.class: nginx
which means you need the nginx-ingress controller installed on your EKS cluster. This might be one reason why it fails. As you can see in my example, this ingress doesn't get any external IP; that's because nginx-ingress wasn't installed on my GKE cluster. I'm not sure about EKS, but as far as I know it doesn't come with nginx-ingress preinstalled. One common way to install it is sketched below.
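A minimal sketch of installing it with Helm (assuming Helm 3; the repo URL and release name are the upstream defaults and may differ in your setup):
$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
$ helm repo update
$ helm install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace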
One more thing: I assume that in your config.yaml you put a real domain name that you have registered instead of botfront.yoursite.com. Supposing your domain is yoursite.com and you successfully created the subdomain botfront.yoursite.com, you should point it at the address of your load balancer (the one used by your ingress).
If you run kubectl get ingresses -n botfront and the ADDRESS is empty, you probably don't have nginx-ingress installed and the underlying load balancer cannot be created. If an external address does show up there, point your registered domain at it; a quick way to verify the DNS record is sketched below.
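On EKS the ingress ADDRESS is typically an ELB hostname rather than an IP, so a CNAME record is the usual choice. To check that your subdomain resolves (hostname taken from the earlier ingress output):
$ dig +short botfront.cream.com
The answer should include the ELB hostname from the ADDRESS column.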

LoadBalancer EXTERNAL-IP is in pending state after I installed K8s using Helm charts

I installed K8s with Helm charts on EKS, but the load balancer EXTERNAL-IP is in a pending state. I see that EKS does support the service type LoadBalancer now.
Is it something I will have to check at the network outgoing-traffic level? Please share your experience if any.
Thanks.
The load balancer usually takes a few seconds to a few minutes to provision an IP.
If the IP isn't provisioned after 5 minutes:
- run kubectl get svc <SVC_NAME> -o yaml and check whether any unusual annotations are set.
By default, services with type: LoadBalancer are automatically provisioned with Classic Load Balancers.
If you wish to use Network Load Balancers instead, you have to use the annotation:
service.beta.kubernetes.io/aws-load-balancer-type: nlb
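A minimal Service manifest using that annotation (a sketch; the name, selector, and ports are placeholders):
apiVersion: v1
kind: Service
metadata:
  name: my-service          # placeholder name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  type: LoadBalancer
  selector:
    app: my-app             # placeholder selector
  ports:
    - port: 80
      targetPort: 8080      # placeholder container port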
The process is really automatic; you don't have to check network traffic.
You can check whether there is an issue with the Helm chart you are deploying by manually creating a service of type LoadBalancer and checking whether it gets provisioned:
$ kubectl run --generator=run-pod/v1 nginx --image=nginx --port=80
pod/nginx created
$ kubectl get pod nginx
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          34s
$ kubectl expose pod nginx --type=LoadBalancer
service/nginx exposed
$ kubectl get svc nginx -w
NAME    TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)        AGE
nginx   LoadBalancer   10.1.63.178   <pending>        80:32522/TCP   7s
nginx   LoadBalancer   10.1.63.178   35.238.146.136   80:32522/TCP   42s
In this example the load balancer took 42s to be provisioned. This way you can verify whether the issue is in the Helm chart or somewhere else.
If Kubernetes is running in an environment that doesn't support LoadBalancer services, the load balancer will not be provisioned, but the service will still behave like a NodePort service; your cloud/K8s engine has to support the LoadBalancer service type.
In that case, if you manage to add an EIP or VIP to your node, you can attach it as the EXTERNAL-IP of your type: LoadBalancer service in the K8s cluster, for example attaching the EIP/VIP address to the node 172.16.2.13:
kubectl patch svc ServiceName -p '{"spec": {"type": "LoadBalancer", "externalIPs":["172.16.2.13"]}}'
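After the patch, the address shows up right away, since no cloud provisioning is involved (ServiceName and the IP are the placeholders from the command above):
$ kubectl get svc ServiceName
The EXTERNAL-IP column should now list 172.16.2.13.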

Traefik ingress controller on minikube: External IP pending

I am trying to deploy a Traefik Ingress controller in my minikube environment by following this:
helm install stable/traefik --name-template traefik --set dashboard.enabled=true,dashboard.domain=dashboard.traefik,rbac.enabled=true --namespace kube-system
Even after half an hour I still see that the external IP is pending:
pascals@pascals:~$ kubectl get svc -l app=traefik -n kube-system
NAME                TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
traefik             LoadBalancer   10.96.172.128   <pending>     443:30812/TCP,80:31078/TCP   20m
traefik-dashboard   ClusterIP      10.96.56.105    <none>        80/TCP                       20m
Ideally I would like to reach http://dashboard.traefik but I am not able to do so.
I tried to assign an external IP using the kubectl patch API:
kubectl patch svc traefik -n kube-system -p '{"spec":{"externalIPs":["192.168.99.107"]}}'
where 192.168.99.107 is the minikube IP. This, however, still did not solve my problem.
Appreciate any nudge in the right direction!
The external IP is assigned by the service controller only if a cloud provider is used in the cluster, which is usually the case in managed clusters.
In a minikube cluster, a LoadBalancer-type Service will never get an external IP. You can access Services through <minikube-ip>:<nodeport>, or by running minikube service, as sketched below. For the Service traefik-dashboard, it would first have to be a NodePort-type Service.
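A minimal sketch of both approaches (service name and namespace from the output above):
$ minikube ip
$ minikube service traefik -n kube-system --url
The first prints the node IP to combine with a node port; the second prints ready-made URLs for the service's node ports.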
Alternatively, you can install a Kubernetes bare-metal load balancer, like MetalLB.

How to access Eureka by DNS name in a Kubernetes cluster

I deployed Eureka in a Kubernetes (v1.15.2) cluster. Now I want my app pods to register with Eureka by domain name. First I tried this:
http://eureka-0.eureka.dabai-fat.svc.cluster.local:8761
It doesn't work. I logged into my app pod using this command:
/opt/k8s/bin/kubectl exec -ti soa-room-service-6c4448dfb6-grhtb -n dabai-fat /bin/sh
and used the curl command to access the cluster's Eureka this way:
/ # curl http://eureka-0.eureka.dabai-fat.svc.cluster.local:8761
curl: (6) Could not resolve host: eureka-0.eureka.dabai-fat.svc.cluster.local
but using this way works:
/ # curl http://172.30.224.17:8761
{"timestamp":"2020-02-03T17:10:23.037+0000","status":401,"error":"Unauthorized","message":"Unauthorized","path":"/"}
But I think the domain/DNS way is better, because the IP could change in the future. So what is the right way to register with Eureka using DNS? My CoreDNS is in the kube-system namespace, and my Eureka service and app pod are in the dabai-fat namespace. By the way, my Eureka service info in Kubernetes is in the service check output below.
Do you have a service named eureka-0.eureka in the dabai-fat namespace? You can check via kubectl get svc -n dabai-fat.
Check the service and change the URL to http://eureka.dabai-fat.svc.cluster.local:8761. It works:
/ # curl http://eureka.dabai-fat.svc.cluster.local:8761
{"timestamp":"2020-02-03T17:29:08.045+0000","status":401,"error":"Unauthorized","message":"Unauthorized","path":"/"}
This is the service check command output:
[root@ops001 ~]# kubectl get svc -n dabai-fat
NAME               TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)             AGE
eureka             ClusterIP   None           <none>        8761/TCP,8081/TCP   2d
soa-auth-service   ClusterIP   10.254.84.39   <none>        11013/TCP           2d8h
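For context: the eureka Service above is headless (CLUSTER-IP is None). Per-pod DNS names of the form eureka-0.eureka.dabai-fat.svc.cluster.local only resolve when the pods belong to a StatefulSet whose serviceName points at such a headless Service; the service-level name eureka.dabai-fat.svc.cluster.local resolves either way. A minimal sketch of the pairing (names follow this question; the image and labels are placeholders):
apiVersion: v1
kind: Service
metadata:
  name: eureka
  namespace: dabai-fat
spec:
  clusterIP: None            # headless: gives each pod its own DNS record
  selector:
    app: eureka
  ports:
    - port: 8761
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: eureka
  namespace: dabai-fat
spec:
  serviceName: eureka        # must match the headless Service name
  replicas: 1
  selector:
    matchLabels:
      app: eureka
  template:
    metadata:
      labels:
        app: eureka
    spec:
      containers:
        - name: eureka
          image: springcloud/eureka   # placeholder image
          ports:
            - containerPort: 8761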

Publishing the Application

I followed the instructions found here:
https://schoolofdevops.github.io/ultimate-kubernetes-bootcamp/quickdive/
As you can see, the NodePort-type service does not have an external IP like wordpress does, so I cannot connect.
# /usr/local/bin/kubectl --kubeconfig="padhaku2.yaml" get service
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)        AGE
kubernetes   ClusterIP      10.245.0.1      <none>         443/TCP        38m
vote         NodePort       10.245.33.151   <none>         81:31876/TCP   6m20s
wordpress    LoadBalancer   10.245.170.65   139.59.49.69   80:31820/TCP   21m
How do I publish the app using external IP?
You can access the application using the node port:
try http://<NODE_IP>:<NODEPORT>
in your case, http://<NODE_IP>:31876
Follow these steps to change the service type:
kubectl delete svc vote
kubectl expose deployment vote --type=LoadBalancer --port 80
You might need to deploy the rest of the voting services:
kubectl run redis --image=redis:alpine
kubectl expose deployment redis --port 6379
kubectl run worker --image=schoolofdevops/worker
kubectl run db --image=postgres:9.4
kubectl expose deployment db --port 5432
kubectl run result --image=schoolofdevops/vote-result
kubectl expose deployment result --type=NodePort --port 80
If your service type is NodePort, you can connect to your service at <protocol>://<Node_IP>:<NodePort>, where:
- protocol may be http or https
- Node_IP is the IP of the node where your application is running
- NodePort is the value of the nodePort field used in your service manifest file (also shown after the colon in the PORT(S) column)
A minimal manifest illustrating the field is sketched below.
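A minimal NodePort Service pinning the port from this question (a sketch; the selector and target port are placeholders):
apiVersion: v1
kind: Service
metadata:
  name: vote
spec:
  type: NodePort
  selector:
    app: vote              # placeholder selector
  ports:
    - port: 81             # cluster-internal port, as in the output above
      targetPort: 80       # placeholder container port
      nodePort: 31876      # must be in the cluster's node-port range (default 30000-32767)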