Unable to access service deployed in AWS EKS - Kubernetes

I have deployed a user management service in EKS but am unable to access it.
Input : curl -v http://54.253.213.152:30077/usermgmt/health-status
Output : curl: (7) Failed to connect to 54.253.213.152 port 30077: Connection refused
Pods Running
mysql-cc94644fc-5g6wq 1/1 Running 0 19h 192.168.22.212
usermgt-microsevice-694d677968-bnlsv 1/1 Running 6 19h 192.168.56.62
Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 24h
mysql ClusterIP None <none> 3306/TCP 20h
usermgmt-restapp-service NodePort 10.100.202.28 <none> 8095:30077/TCP 20h
Nodes with external IPs
NAME STATUS EXTERNAL-IP
ip-192-168-0-106.ap-southeast-2.compute.internal Ready 54.253.26.10
ip-192-168-29-240.ap-southeast-2.compute.internal Ready 54.79.163.174
ip-192-168-62-151.ap-southeast-2.compute.internal Ready 54.253.213.152
Security groups Added
HTTP TCP 80 175.45.149.101/32
All TCP TCP 0 - 65535 175.45.149.101/32
SSH TCP 22 175.45.149.101/32
I have tried with the IP range 0.0.0.0/0 as well but am still unable to reach the service.
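One check worth running before blaming the security groups: confirm the NodePort service actually has a Ready pod behind it. A minimal sketch, assuming kubectl is pointed at this cluster and using the service name from the output above:
kubectl get endpoints usermgmt-restapp-service
kubectl get svc usermgmt-restapp-service -o jsonpath='{.spec.ports[0].nodePort}'
If Endpoints comes back empty, kube-proxy has nothing to forward to and the NodePort refuses connections even when the security groups are open.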

Related

Unable to reach service/API from outside the cluster - Kubernetes (Metallb+HAProxy Ingress Controller)

I've created a bare-metal multi-master k8s cluster using kubekey.
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
master1 Ready control-plane,master 23h v1.23.10
master2 Ready control-plane,master 23h v1.23.10
master3 Ready control-plane,master 23h v1.23.10
worker1 Ready worker 23h v1.23.10
worker2 Ready worker 23h v1.23.10
worker3 Ready worker 23h v1.23.10
$ curl localhost:10249/healthz
ok
Added the MetalLB load balancer and the HAProxy Ingress Controller. The haproxy-controller gets the external IP address from MetalLB correctly:
$ kubectl get svc -n haproxy-controller
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
haproxy-kubernetes-ingress LoadBalancer 10.233.59.120 10.30.2.81 80:32244/TCP,443:30908/TCP,1024:32666/TCP 21h
Deployed a microservice, and exposed the service via ingress:
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.233.0.1 <none> 443/TCP 23h
ms-login-http ClusterIP 10.233.3.180 <none> 80/TCP 21h
$ kubectl describe ing
Name: ms-login-http
Labels: <none>
Namespace: default
Address: 10.30.2.81
Default backend: ms-login-http:80 (10.233.103.1:8080)
Rules:
Host Path Backends
---- ---- --------
api.mydomain.in
/api/sc ms-login-http:80 (10.233.103.1:8080)
Annotations: haproxy.org/load-balance: roundrobin
haproxy.org/src-ip-header: True-Client-IP
Events: <none>
The issue is reachability of the deployed API:
[✓] Accessing the API from within any of the cluster nodes works fine
$ curl api.mydomain.in/api/sc/healthcheck
success
[✕] Same API from outside the cluster nodes fails
$ curl api.mydomain.in/api/sc/healthcheck
curl: (7) Failed to connect to api.mydomain.in port 80 after 0 ms: Connection refused
Seems to be a firewall issue, but I'm unable to narrow down what may be blocking the traffic. The iptables on the master nodes have several Calico forward rules. The rules list is shared in this gist.
Any direction/insight would greatly help, as I'm missing something basic here. I did not face this issue when I created a similar cluster some months back; it seems the latest version of Calico has something to do with it.
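One way to narrow down where the connection is dropped is to bypass DNS and test each hop directly from a machine outside the cluster, using the MetalLB IP and NodePort from the output above (replace <worker-ip> with any worker node address; the Host header stands in for the DNS name):
curl -v http://10.30.2.81/api/sc/healthcheck -H 'Host: api.mydomain.in'
curl -v http://<worker-ip>:32244/api/sc/healthcheck -H 'Host: api.mydomain.in'
If the NodePort responds but the MetalLB address does not, the problem sits in front of MetalLB (layer 2 announcement or host firewall) rather than in the ingress rules.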

Kubernetes service created via exposed deployment is not responding to curl

I deployed my application using the Deployment construct. The state of my pod is Running, and curling the pod's IP returns the application content. However, when I create a service using kubectl expose deployment and curl the service's IP, curl throws a Connection refused error. Why is that?
My pod
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
cge-frontend-5d4595469b-qvcsd 0/1 Running 0 19s 10.40.0.4 compute04 <none> <none>
My service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
cge-frontend ClusterIP 10.98.212.184 <none> 80/TCP 16m
Error
$ curl 10.98.212.184
curl: (7) Failed connect to 10.98.212.184:80; Connection refused
After investigating my service with the kubectl describe svc command, I figured out that my service has no Endpoints - the Endpoints section should list the pod's IP.
$ kubectl describe svc cge-frontend
Name: cge-frontend
Namespace: default
Labels: app=cge-frontend
Annotations: <none>
Selector: app=cge-frontend
Type: ClusterIP
IP: 10.98.212.184
Port: <unset> 80/TCP
TargetPort: 80/TCP
Endpoints:
Session Affinity: None
It turned out that the error was caused by one of my probes, which was keeping my pod in the Running state but not in the Ready state. Fixing the probes fixed my pods, and that fixed the service.
After fixing the probes, my pod is now in the correct state, READY 1/1:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
cge-frontend-5d4595469b-qvcsd 1/1 Running 0 19s 10.40.0.5 compute04 <none> <none>
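For anyone hitting the same thing, a quick way to see which probe is failing and why the pod never becomes Ready (pod name taken from the listing above):
kubectl describe pod cge-frontend-5d4595469b-qvcsd
kubectl get events --field-selector involvedObject.name=cge-frontend-5d4595469b-qvcsd
The Conditions section and the probe failure events point at the container and probe that keep the pod out of the service's Endpoints.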

How to assign an IP to istio-ingressgateway on localhost?

I am using Kubespray to run a Kubernetes cluster on my laptop. The cluster is running on 7 VMs, and the roles of the VMs are spread as follows:
NAME STATUS ROLES AGE VERSION
k8s-1 Ready master 2d22h v1.16.2
k8s-2 Ready master 2d22h v1.16.2
k8s-3 Ready master 2d22h v1.16.2
k8s-4 Ready master 2d22h v1.16.2
k8s-5 Ready <none> 2d22h v1.16.2
k8s-6 Ready <none> 2d22h v1.16.2
k8s-7 Ready <none> 2d22h v1.16.2
I've installed Istio (https://istio.io/) to build a microservices environment.
I have 2 services running that I'd like to access from outside:
k get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
greeter-service ClusterIP 10.233.50.109 <none> 3000/TCP 47h
helloweb ClusterIP 10.233.8.207 <none> 3000/TCP 47h
and the running pods:
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default greeter-service-v1-8d97f9bcd-2hf4x 2/2 Running 0 47h 10.233.69.7 k8s-6 <none> <none>
default greeter-service-v1-8d97f9bcd-gnsvp 2/2 Running 0 47h 10.233.65.3 k8s-2 <none> <none>
default greeter-service-v1-8d97f9bcd-lkt6p 2/2 Running 0 47h 10.233.68.9 k8s-7 <none> <none>
default helloweb-77c9476f6d-7f76v 2/2 Running 0 47h 10.233.64.3 k8s-1 <none> <none>
default helloweb-77c9476f6d-pj494 2/2 Running 0 47h 10.233.69.8 k8s-6 <none> <none>
default helloweb-77c9476f6d-tnqfb 2/2 Running 0 47h 10.233.70.7 k8s-5 <none> <none>
The problem is that I cannot access the services from outside, because I do not have an EXTERNAL-IP address (remember, the cluster is running on my laptop).
k get svc istio-ingressgateway -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway LoadBalancer 10.233.61.112 <pending> 15020:31311/TCP,80:30383/TCP,443:31494/TCP,15029:31383/TCP,15030:30784/TCP,15031:30322/TCP,15032:30823/TCP,15443:30401/TCP 47h
As you can see, the value of the EXTERNAL-IP column is <pending>.
The question is how to assign an EXTERNAL-IP to the istio-ingressgateway.
First of all, you can't make Kubernetes assign you an external IP address, as the LoadBalancer service type is cloud-provider specific. You could map your router's external IP address to it, I guess, but it is not trivial.
To reach the service, you can do this:
kubectl edit svc istio-ingressgateway -n istio-system
Change the type of the service from LoadBalancer to ClusterIP. You can also use NodePort. Actually, you can skip this step, as a LoadBalancer service already contains a NodePort and a ClusterIP; it is just to get rid of that pending status.
kubectl port-forward svc/istio-ingressgateway YOUR_LAPTOP_PORT:INGRESS_CLUSTER_IP_PORT -n istio-system
I don't know which port you want to access from your localhost. Say it's 80; you can do:
kubectl port-forward svc/istio-ingressgateway 8080:80 -n istio-system
Now port 8080 of your laptop (localhost:8080) will be mapped to port 80 of the istio-ingressgateway service.
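With the forward in place, the gateway is reachable from the laptop; for example (the Host header depends on whatever Gateway/VirtualService hosts you configured, so treat it as a placeholder):
curl -v http://localhost:8080/ -H 'Host: helloweb.example.com'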
By default, there is no way Kubernetes can assign an external IP to a LoadBalancer service.
This service type needs infrastructure support, which exists in cloud offerings like GKE, AKS, EKS, etc.
As you are running this cluster on your laptop, deploy the MetalLB load balancer to get an EXTERNAL-IP.
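For reference, a minimal sketch of the layer 2 address pool that the older, ConfigMap-based MetalLB releases used (newer releases configure this with IPAddressPool custom resources instead); MetalLB itself must already be installed in the metallb-system namespace, and the address range here is an assumption to adapt to your laptop's VM network:
kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.122.240-192.168.122.250
EOF
Once the pool is in place, the istio-ingressgateway service should pick up one of those addresses as its EXTERNAL-IP.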
It's not possible, as Suresh explained.
But if you want access from your laptop, you can use type: NodePort in your service, which gives you access from outside the cluster.
You should first obtain the IP of one of your cluster nodes, then create your service with something like this:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  type: NodePort
  ports:
  - name: http
    protocol: TCP
    port: 3000
    targetPort: 3000
    nodePort: 30000
After that, you can access it from your laptop with: http://<node-ip>:30000
There is no need to create an ingress for that.
You should use a port in the range 30000-32767, as stated below:
If you set the type field to NodePort, the Kubernetes control plane allocates a port from a range specified by --service-node-port-range flag (default: 30000-32767).
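Once the service is applied, you can confirm the allocated node port (my-service is the name from the sample above); since the manifest pins nodePort, it should print 30000:
kubectl get svc my-service -o jsonpath='{.spec.ports[0].nodePort}'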
If you are using minikube, just run:
$ minikube tunnel
$ k get svc -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway LoadBalancer 10.111.187.167 127.0.0.1 15021:31949/TCP,80:32215/TCP,443:30585/TCP 9m48s

curl: (7) Failed to connect to 192.168.99.100 port 31591: Connection refused

These are my pods
hello-kubernetes-5569fb7d8f-4rkhs 0/1 ImagePullBackOff 0 5d2h
hello-minikube-5857d96c67-44kfg 1/1 Running 1 5d2h
hello-minikube2 1/1 Running 0 3m24s
hello-minikube2-74654c8f6f-trrrw 1/1 Running 0 4m8s
hello-newkubernetes 0/1 ImagePullBackOff 0 5d1h
If I try
curl $(minikube service hello-minikube2 --url)
curl: (7) Failed to connect to 192.168.99.100 port 31591: Connection refused
Let's check VBox
inet 192.168.99.1/24 brd 192.168.99.255 scope global vboxnet0
valid_lft forever preferred_lft forever
inet6 fe80::800:27ff:fe00:0/64 scope link
valid_lft forever preferred_lft forever
Why is my connection refused?
kubectl get svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
hello-kubernetes NodePort 10.98.65.138 <none> 8080:30062/TCP 5d2h run=hello-kubernetes
hello-minikube NodePort 10.105.166.56 <none> 8080:30153/TCP 5d3h run=hello-minikube
hello-minikube2 NodePort 10.96.94.39 <none> 8080:31591/TCP 42m run=hello-minikube2
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 5d4h <none>
tomcat-deployment NodePort 10.96.205.228 <none> 8080:30613/TCP 2m13s app=tomcat
kubectl get ep -o wide
NAME ENDPOINTS AGE
hello-kubernetes 5d14h
hello-minikube 172.17.0.7:8080 5d14h
hello-minikube2 172.17.0.4:8080,172.17.0.5:8080 12h
kubernetes 192.168.99.100:8443 5d16h
tomcat-deployment 172.17.0.6:8080 11h
I want to show the service endpoint:
minikube service tomcat-deployment --url
http://192.168.99.100:30613
Why is this URL different from the get ep -o wide output?
Apparently, you are trying to reach your service from outside the cluster, so you need to expose your service for external connections.
Run kubectl edit svc hello-minikube2 and change
type: NodePort
to
type: LoadBalancer
Or
kubectl expose deployment hello-minikube2 --type=LoadBalancer --port=8080
On cloud providers that support load balancers, an external IP address would be provisioned to access the Service. On Minikube, the LoadBalancer type makes the Service accessible through the minikube service command.
Run the following command:
minikube service hello-minikube2
It seems that you are trying to access a pod, and it needs to be via a Kubernetes service and not directly to the pod.
Can you also show the output of kubectl get svc -o wide?
If the service indeed exists, try using kubectl get ep -o wide to check that the pod is indeed discovered by the service.

Accessing Kubernetes dashboard on Compute instance in Oracle Cloud

I have deployed Kubernetes and the dashboard onto a compute instance in Oracle Cloud.
The dashboard is installed together with Grafana on my compute instance.
NAME READY STATUS RESTARTS AGE
po/etcd-mst-instance1 1/1 Running 0 1h
po/heapster-7856f6b566-rkfx5 1/1 Running 0 1h
po/kube-apiserver-mst-instance1 1/1 Running 0 1h
po/kube-controller-manager-mst-instance1 1/1 Running 0 1h
po/kube-dns-d879d6bcb-b9zjf 3/3 Running 0 1h
po/kube-flannel-ds-lgklw 1/1 Running 0 1h
po/kube-proxy-g6vxm 1/1 Running 0 1h
po/kube-scheduler-mst-instance1 1/1 Running 0 1h
po/kubernetes-dashboard-dd5c889c-6vphq 1/1 Running 0 1h
po/monitoring-grafana-5d4d76cd65-p7n5l 1/1 Running 0 1h
po/monitoring-influxdb-787479f6fd-8qkg2 1/1 Running 0 1h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/heapster ClusterIP 10.98.200.184 <none> 80/TCP 1h
svc/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 1h
svc/kubernetes-dashboard ClusterIP 10.107.155.3 <none> 443/TCP 1h
svc/monitoring-grafana ClusterIP 10.96.130.226 <none> 80/TCP 1h
svc/monitoring-influxdb ClusterIP 10.105.163.213 <none> 8086/TCP 1h
I am trying to access the dashboard via SSH and ran the below from my local computer:
ssh -L localhost:8001:172.31.4.117:6443 opc@xxxxxxxx
However, it gives me this error:
Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
I'm not sure what the best way to access the dashboard is. I am new to k8s and still at a beginner stage, so I wanted to ask. I have also tried running kubectl proxy on my local computer, but when I try to access 127.0.0.1 it gives me this error:
I0804 17:01:28.902675 77193 logs.go:41] http: proxy error: dial tcp [::1]:8080: connect: connection refused
Would really appreciate any help, thank you.
Kubernetes includes a web dashboard that can be used for basic management operations.
Once Dashboard is installed on your Kubernetes cluster, it can be accessed in a few different ways.
I prefer to use the kubectl proxy from the command line to access Kubernetes Dashboard.
Kubectl handles authentication with the API server and forwards traffic between your cluster (with the Dashboard deployed inside) and your web browser.
Please note that this only works for a locally running web browser, as the proxy listens on localhost.
From the command line:
kubectl proxy
Next, start browsing this address:
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
In case the Kubernetes API server is exposed and accessible, you may try:
https://<master-ip>:<apiserver-port>/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
where master-ip is the IP address of your Kubernetes master node where the API is running.
On a single-node setup, another way is to use a NodePort configuration to access the Dashboard.
I found it on the dashboard wiki.
Here is a sample configuration to consider and adapt to your needs:
apiVersion: v1
...
  name: kubernetes-dashboard
  namespace: kube-system
  resourceVersion: "343478"
  selfLink: /api/v1/namespaces/kube-system/services/kubernetes-dashboard-head
spec:
  clusterIP: <your-cluster-ip>
  externalTrafficPolicy: Cluster
  ports:
  - port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
  type: NodePort
After applying the configuration, check the exposed port for HTTPS using the command:
kubectl -n kube-system get service kubernetes-dashboard
If it returns, for example, 31707, you can start your browser with:
https://<master-ip>:31707
I was inspired by the web UI dashboard guide and the accessing dashboard wiki.