I tried deploying on EKS, and my config.yaml follows this suggested format:
botfront:
  app:
    # The complete external host of the Botfront application (eg. botfront.yoursite.com). It must be set even if running on a private or local DNS (it populates the ROOT_URL).
    host: botfront.yoursite.com
mongodb:
  enabled: true # disable to use an external mongoDB host
  # Username of the MongoDB user that will have read-write access to the Botfront database. This is not the root user
  mongodbUsername: username
  # Password of the MongoDB user that will have read-write access to the Botfront database. This is not the root user
  mongodbPassword: password
  # MongoDB root password
  mongodbRootPassword: rootpassword
And I ran this command:
helm install -f config.yaml -n botfront --namespace botfront botfront/botfront
and the deployment appeared successful with all pods listed as running.
But botfront.yoursite.com goes nowhere. I checked the ingress and it matches, but there are no external IP addresses or anything. I don't know how to actually access my Botfront site once it is deployed on Kubernetes.
What am I missing?
EDIT:
With the nginx ingress controller installed, kubectl get ingresses -n botfront now returns:
NAME CLASS HOSTS ADDRESS PORTS AGE
botfront-app-ingress <none> botfront.cream.com a182b0b24e4fb4a0f8bd6300b440e5fa-423aebd224ce20ac.elb.us-east-2.amazonaws.com 80 4d1h
and
kubectl get svc -n botfront returns:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
botfront-api-service NodePort 10.100.207.27 <none> 80:31723/TCP 4d1h
botfront-app-service NodePort 10.100.26.173 <none> 80:30873/TCP 4d1h
botfront-duckling-service NodePort 10.100.75.248 <none> 80:31989/TCP 4d1h
botfront-mongodb-service NodePort 10.100.155.11 <none> 27017:30358/TCP 4d1h
If you run kubectl get svc -n botfront, it will show you all the Services that expose your Botfront deployment:
$ kubectl get svc -n botfront
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
botfront-api-service NodePort 10.3.252.32 <none> 80:32077/TCP 63s
botfront-app-service NodePort 10.3.249.247 <none> 80:31201/TCP 63s
botfront-duckling-service NodePort 10.3.248.75 <none> 80:31209/TCP 63s
botfront-mongodb-service NodePort 10.3.252.26 <none> 27017:31939/TCP 64s
Each of them is of type NodePort, which means it exposes your app on the external IP address of each of your EKS cluster nodes on a specific port.
So if your node1 IP happens to be 1.2.3.4, you can access botfront-api-service on 1.2.3.4:32077. Don't forget to allow access to this port on your firewall/security groups. If you have a registered domain, e.g. yoursite.com, you can configure a subdomain botfront.yoursite.com for it and point it at one of your EKS nodes. Then you'll be able to access it using your domain. This is the simplest way.
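For example, on EKS you could open that NodePort on the worker nodes' security group with the AWS CLI. This is only a sketch: the security group ID below is a placeholder, 32077 is the botfront-api-service NodePort from the output above, and you should restrict the CIDR to your own IP range rather than 0.0.0.0/0 if possible.
$ aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 32077 \
    --cidr 0.0.0.0/0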
To access it in a more convenient way than via a specific node's IP and a non-standard port, you may want to expose it via an Ingress, which will create an external load balancer, making your NodePort services available under one external IP address and the standard http port.
Update: I see that this chart already comes with ingress that exposes your app:
$ kubectl get ingresses -n botfront
NAME HOSTS ADDRESS PORTS AGE
botfront-app-ingress botfront.yoursite.com 80 70m
If you retrieve its yaml definition by:
$ kubectl get ingresses -n botfront -o yaml
you'll see that it uses the following annotation:
kubernetes.io/ingress.class: nginx
which means you need the nginx-ingress controller installed on your EKS cluster. This might be one reason why it fails. As you can see in my example, this ingress doesn't get any external IP; that's because nginx-ingress wasn't installed on my GKE cluster. I'm not sure about EKS, but as far as I know it doesn't come with nginx-ingress preinstalled.
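If that's the case, installing the controller with Helm is usually enough. A minimal sketch using the community ingress-nginx chart (Helm 3 syntax; the repo, release and namespace names are just the conventional ones):
$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
$ helm repo update
$ helm install ingress-nginx ingress-nginx/ingress-nginx -n ingress-nginx --create-namespace
On EKS this creates a Service of type LoadBalancer, and the DNS name of the resulting AWS load balancer should then show up in the ingress ADDRESS column.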
One more thing: I assume that in your config.yaml you put some real domain name that you have registered instead of botfront.yoursite.com. Suppose your domain is yoursite.com and you have successfully created the subdomain botfront.yoursite.com; you should point it at the IP of your load balancer (the one used by your ingress).
If you run kubectl get ingresses -n botfront and the ADDRESS column is empty, you probably don't have nginx-ingress installed and the underlying load balancer cannot be created. If an external address does appear there, point your registered domain at that address.
Related
I have the below Service running in my k8s cluster. I want to access and ping the service name "service-plt-mediator" from another pod. What needs to be added to the manifest file of the pod so that the service name ends up in the /etc/hosts file and can be pinged from inside the pod?
/home/ravi>kubectl get svc | grep
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
plt service-plt-mediator ClusterIP 10.108.188.147 <none> 4561/TCP,4562/TCP 3h47m
I tried to add an entry using "hostAliases" in the manifest file, but that also needs a static IP, which I can't provide since the service IP will change after a reboot.
You don't need to add a mapping in /etc/hosts. Your pod's /etc/resolv.conf is configured by the kubelet to send DNS queries to the CoreDNS service running in the cluster (you can see the default config in the pod spec as dnsPolicy: ClusterFirst). The DNS response will be the ClusterIP of the Service.
You can use <service-name>.<namespace> as the DNS request name in the other pod.
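For example, from a pod in another namespace you could check it like this (a sketch; it assumes nslookup and nc are available in the image, and uses the plt namespace and port 4561 from the output above). Note that pinging a Service usually won't get a reply, because the ClusterIP is a virtual IP handled by kube-proxy, but DNS resolution and TCP connections work fine:
$ kubectl exec -it <some-other-pod> -- nslookup service-plt-mediator.plt
$ kubectl exec -it <some-other-pod> -- nc -vz service-plt-mediator.plt 4561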
You can debug your DNS in the cluster as described here.
I installed K8s with Helm charts on EKS, but the LoadBalancer EXTERNAL-IP is stuck in the pending state. I see that EKS does support Service type: LoadBalancer now.
Is it something I have to check at the network/outgoing-traffic level? Please share your experience, if any.
Tx,
The LoadBalancer usually takes a few seconds to a few minutes to provision an IP for you.
If after 5 minutes the IP still isn't provisioned:
- run kubectl get svc <SVC_NAME> -o yaml and check whether any unusual annotations are set.
By default, Services with type: LoadBalancer are automatically provisioned with Classic Load Balancers. Learn more here.
If you wish to use Network Load Balancers, you have to use the annotation:
service.beta.kubernetes.io/aws-load-balancer-type: nlb
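For reference, a minimal sketch of a Service carrying that annotation (the name, selector and ports are placeholders):
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080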
The process is really automatic, you don't have to check for network traffic.
You can check whether there is an issue with the Helm chart you are deploying by manually creating a service of type LoadBalancer and checking if it gets provisioned:
$ kubectl run --generator=run-pod/v1 nginx --image=nginx --port=80
pod/nginx created
$ kubectl get pod nginx
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 34s
$ kubectl expose pod nginx --type=LoadBalancer
service/nginx exposed
$ kubectl get svc nginx -w
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx LoadBalancer 10.1.63.178 <pending> 80:32522/TCP 7s
nginx LoadBalancer 10.1.63.178 35.238.146.136 80:32522/TCP 42s
In this example the LoadBalancer took 42s to be provisioned. This way you can verify whether the issue is in the Helm chart or somewhere else.
If Kubernetes is running in an environment that doesn't support LoadBalancer services, the load balancer will not be provisioned, but the service will still behave like a NodePort service; your cloud/K8s engine has to support the LoadBalancer Service type.
In that case, if you manage to assign an EIP or VIP to one of your nodes, you can attach it as the EXTERNAL-IP of your type: LoadBalancer service in the K8s cluster. For example, to attach the EIP/VIP address to the node 172.16.2.13:
kubectl patch svc ServiceName -p '{"spec": {"type": "LoadBalancer", "externalIPs":["172.16.2.13"]}}'
I have a testing Kubernetes cluster running on remote VMs (on vSphere). I have full access to the VMs through SSH (they have private IPs). How can I expose services and access them from outside the cluster (from my remote laptop), given that I can remotely run all kubectl commands?
For example: I tried with the dashboard. I installed it, changed the service to NodePort, and tried to access it from my laptop using the URL http://master-private-ip:exposedport, and also with the worker IPs, but it does not work. The browser only returns � (binary output). When I try to connect through https, it throws a certificate error.
$ kubectl get svc -n kube-system -l k8s-app=kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard NodePort 10.97.143.110 <none> 443:30714/TCP 42m
$ kubectl proxy -p 8001
$ curl http://172.16.5.226:30714 --output -
I expected the output to show me the HTML of the Kubernetes dashboard UI.
NOTE: The Dashboard should not be exposed publicly over HTTP. For domains accessed over HTTP it will not be possible to sign in; nothing will happen after clicking the Sign in button on the login page.
If you have done everything correctly, it should work over HTTPS,
as explained in Accessing Dashboard 1.7.X and above.
In order to expose Dashboard using NodePort you need to edit kubernetes-dashboard service.
kubectl -n kube-system edit service kubernetes-dashboard
Find type: ClusterIP and change it to type: NodePort, then save the file.
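If you prefer a one-liner over the interactive edit, patching the service should achieve the same thing (a sketch):
$ kubectl -n kube-system patch service kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'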
Then, check which port was the Dashboard exposed to:
kubectl -n kube-system get service kubernetes-dashboard
which might look like this:
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard 10.100.124.90 <nodes> 443:31707/TCP 21h
To access the Dashboard, navigate your browser to https://<server_IP>:31707.
EDIT:
In your case, with a self-signed certificate, you need to put it into a secret. It has to be named kubernetes-dashboard-certs and it has to be in the kube-system namespace.
You have to save the certificate and key as dashboard.crt and dashboard.key and store them under $HOME/certs.
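If you don't have a certificate yet, one way to generate a self-signed pair is with openssl (a sketch; adjust the CN and validity to your setup):
$ mkdir -p $HOME/certs
$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout $HOME/certs/dashboard.key \
    -out $HOME/certs/dashboard.crt \
    -subj "/CN=kubernetes-dashboard"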
kubectl create secret generic kubernetes-dashboard-certs --from-file=$HOME/certs -n kube-system
This installation process is explained here.
I am trying to deploy multiple identical docker containers on Google Container Engine. I am using kubectl for that following the instructions here: https://cloud.google.com/container-engine/docs/tutorials/hello-node
The instructions describe how to run a redundant service managed by the load balancer, so when I contact the balancer, it sends my request to one of my redundant pods. And in that mode, it works fine.
But I need to do this differently. I need to be able to contact individual pods directly from the client. So I am trying to use --type=NodePort with my "kubectl expose deployment" command:
mac-124307:hellonode ivm$ kubectl expose deployment hello-world --type=NodePort --port 9000 --target-port 9000
service "hello-world" exposed
mac-124307:hellonode ivm$ kubectl get service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-world 10.15.253.149 <nodes> 9000:30513/TCP 21m
kubernetes 10.15.240.1 <none> 443/TCP 46m
The command does not complain, and I can use "gcloud compute instances list" to see the external IP addresses of the individual nodes:
mac-124307:hellonode ivm$ gcloud compute instances list
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
gke-hello-cluster-default-pool-402030b2-j60q us-central1-a n1-standard-1 10.128.0.3 104.197.72.212 RUNNING
gke-hello-cluster-default-pool-402030b2-q86r us-central1-a n1-standard-1 10.128.0.4 35.192.4.43 RUNNING
gke-hello-cluster-default-pool-402030b2-tf7t us-central1-a n1-standard-1 10.128.0.2 146.148.72.137 RUNNING
but when I try to connect to port 9000 at any of these IP addresses, my connection times out.
mac-124307:hellonode ivm$ curl http://104.197.72.212:9000/
... <time-out>
What am I doing wrong ?
Note that the node port that was allocated is 30513. You are using 9000, which is the port for the ClusterIP (10.15.253.149) that was assigned.
You also need to have port 30513 open on the firewall, as suggested by Eric.
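On GKE that would look something like this (a sketch; the firewall rule name is arbitrary and 30513 is the NodePort shown above):
$ gcloud compute firewall-rules create hello-world-nodeport --allow tcp:30513
$ curl http://104.197.72.212:30513/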
If you only need one-off access to this pod, you can also use kubectl port-forward name-of-a-hello-pod 9000, which will forward 127.0.0.1:9000 on your workstation directly to the pod. Of course, this only works as long as kubectl port-forward is running.
I have a local Kubernetes installation with a master node and two worker nodes. Is there a way to access all services/pods that will be installed on Kubernetes through the master node's IP?
What I mean is: say I have a test service running on port 30001 on each worker, and I want to access this service like http://master-node:30001. Any help is appreciated.
You can use "the proxy verb" to acces nodes, pods, or services through the master. Only HTTP and HTTPS can be proxied. See these docs and these docs.
There are some ways to do it:
Define a NodePort Kubernetes service
Use kubefwd or the kubectl port-forward command
Use the proxy command (only supports HTTP & HTTPS)
In this answer, I explain how to define a NodePort Service.
The NodePort Service is explained as follows (Service - Kubernetes):
NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
Here is an example of the NodePort service for PostgreSQL:
apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: postgres
  labels:
    app: postgres
spec:
  ports:
    - port: 5432
  type: NodePort
  selector:
    app: postgres
The port field stands for both the service port and the default target port. There is also a nodePort field that lets you choose the port used to access the service from outside the cluster (via the node's IP and the nodePort).
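For example, to pin the node port instead of letting Kubernetes pick one (a sketch; the value must fall in the cluster's NodePort range, 30000-32767 by default), the spec would become:
spec:
  ports:
    - port: 5432
      targetPort: 5432
      nodePort: 30864
  type: NodePort
  selector:
    app: postgres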
To view the nodePort (if you don't specify it in the manifest), you can run the command:
kubectl get services -n postgres
The output should look similar to:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
postgres NodePort 10.96.156.75 <none> 5432:30864/TCP 6d9h app=postgres
In this case, the nodePort is 30864; this is the port used to access the service from outside the cluster.
To find out the node's IP, the command to use is:
kubectl get nodes -o wide
The output should look similar to:
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
homedev-control-plane Ready master 30d v1.19.1 172.18.0.2 <none> Ubuntu Groovy Gorilla (development branch) 5.9.1-arch1-1 containerd://1.4.0
If what you need is the IP only:
kubectl get nodes -o wide --no-headers | awk '{print $6}'
In this case, the node's IP is 172.18.0.2. Hence, to connect to Postgres in the local Kubernetes cluster from your host machine, the command would look like this:
psql -U postgres -h 172.18.0.2 -p 30864 -d postgres