I'm unable to access a k8s Service through Postman - Scala

I created a simple Scala application that uses Akka HTTP to add and get user data. Akka HTTP is bound to "localhost" on port 8080:
Http().newServerAt("localhost", 8080).bind(route)
After that I created the Deployment and Service given below:
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: docker-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: docker-label
  template:
    metadata:
      name: docker-pod
      labels:
        app: docker-label
    spec:
      containers:
        - name: docker-container
          image: akkahttp-k8s:0.1.0-SNAPSHOT
          ports:
            - containerPort: 80
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: docker-service
spec:
  type: NodePort
  selector:
    app: docker-label
  ports:
    - port: 8080
      targetPort: 80
      nodePort: 32100
My Service and Deployment are in the "Running" state and show no errors.
pods:
NAME READY STATUS RESTARTS AGE
docker-deployment-79756959c6-rdknq 1/1 Running 0 6s
docker-deployment-79756959c6-thjzt 1/1 Running 0 6s
service:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
docker-service NodePort 10.103.170.1 <none> 8080:32100/TCP 4m35s
But when I try to access the Service through Postman, it throws an error: "Error: connect ECONNREFUSED 192.168.49.2:32100".
When I tried kubectl port-forward docker-deployment-79756959c6-rdknq 8080:8080, I could interact with my pod successfully through Postman using http://localhost:8080. Why am I unable to interact with my pod through the Service? Where is my mistake?
Kindly help me deal with this issue.

I'm using a generic container that listens on 8080:
https://github.com/kubernetes-up-and-running/kuard
deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: docker-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: docker-label
  template:
    metadata:
      name: docker-pod
      labels:
        app: docker-label
    spec:
      containers:
        - name: docker-container
          image: gcr.io/kuar-demo/kuard-amd64:blue
          ports:
            - containerPort: 8080
NOTE containerPort: 8080
and service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: docker-service
spec:
  type: NodePort
  selector:
    app: docker-label
  ports:
    - port: 8080
      targetPort: 8080
NOTE targetPort: 8080 (matches the Pod's port which is the container's port)
Then:
NAMESPACE="75488823"
kubectl create namespace ${NAMESPACE}
kubectl apply --filename=./deployment.yaml \
--namespace=${NAMESPACE}
kubectl apply --filename=./service.yaml \
--namespace=${NAMESPACE}
Then access the service using kubectl port-forward
kubectl port-forward deployment/docker-deployment \
--namespace=${NAMESPACE} \
8080:8080
And:
curl \
--silent \
--get \
--output /dev/null \
--write-out '%{response_code}' \
http://localhost:8080
Should yield 200 (success)
Then access the service using any (!) node's IP address. You may need to determine the {HOST} value for yourself; you want any node's public IP address.
# Get any node's IP
HOST=$(\
kubectl get nodes \
--output=jsonpath="{.items[0].status.addresses[0].address}")
# Get the service's `nodePort`
PORT=$(\
kubectl get service/docker-service \
--namespace=${NAMESPACE} \
--output=jsonpath="{.spec.ports[0].nodePort}")
curl \
--silent \
--get \
--output /dev/null \
--write-out '%{response_code}' \
http://${HOST}:${PORT}
Should also yield 200 (success).
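If the cluster is minikube (the 192.168.49.2 address in the question suggests it is), the node IP is not always routable from the host, depending on the driver; in that case it may be easier to let minikube resolve a reachable URL for the NodePort Service. A sketch, assuming the names used above:
# Ask minikube for a URL that reaches the NodePort Service
minikube service docker-service --namespace ${NAMESPACE} --url
# Then curl the printed URL as above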
Tidy:
kubectl delete namespace/${NAMESPACE}
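One more note on the original Akka HTTP setup, since the kuard example only demonstrates the port wiring: binding the server to "localhost" means it only accepts connections from inside its own container, and the question's manifests declare containerPort: 80 / targetPort: 80 while the app listens on 8080, so the NodePort forwards to a port nothing is listening on (hence ECONNREFUSED). A minimal sketch of the adjusted bind call, reusing the route from the question:
// Listen on all interfaces so traffic forwarded into the container reaches the server,
// and keep the port consistent with containerPort / targetPort (8080 here).
Http().newServerAt("0.0.0.0", 8080).bind(route)
Pair this with containerPort: 8080 and targetPort: 8080, as in the kuard manifests above.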

Related

RabbitMQ on Kubernetes - Can't login to the management UI

I'm trying to switch an existing app from docker-compose to Kubernetes (first time using it).
My app is deployed on AWS EKS using Fargate nodes. It runs well, but I would like to access the RabbitMQ management UI for debugging purposes.
The rabbit deployment/services files I am using are the following:
# rabbit-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.26.0 (HEAD)
  creationTimestamp: null
  labels:
    io.kompose.service: rabbit
  name: rabbit
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: rabbit
  strategy: {}
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert
        kompose.version: 1.26.0 (HEAD)
      creationTimestamp: null
      labels:
        io.kompose.service: rabbit
    spec:
      containers:
        - image: rabbitmq:3.9.13-management
          name: rabbit
          ports:
            - containerPort: 15672
            - containerPort: 5672
            - containerPort: 8080
          resources: {}
          env:
            - name: RABBITMQ_DEFAULT_USER
              value: "guest"
            - name: RABBITMQ_DEFAULT_PASS
              value: "guest"
      restartPolicy: Always
status: {}
and
# rabbit-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.26.0 (HEAD)
  creationTimestamp: null
  labels:
    io.kompose.service: rabbit
  name: rabbit
spec:
  type: NodePort
  ports:
    - name: "15672"
      port: 15672
      targetPort: 15672
    - name: "5672"
      port: 5672
      targetPort: 5672
    - name: "8080"
      port: 8080
      targetPort: 8080
  selector:
    io.kompose.service: rabbit
status:
  loadBalancer: {}
I also followed the instructions to create a new user:
kubectl exec $(kubectl get pods --selector=io.kompose.service=rabbit -o template --template="{{(index .items 0).metadata.name}}") -- rabbitmqctl add_user test test
kubectl exec $(kubectl get pods --selector=io.kompose.service=rabbit -o template --template="{{(index .items 0).metadata.name}}") -- rabbitmqctl set_user_tags test administrator
kubectl exec $(kubectl get pods --selector=io.kompose.service=rabbit -o template --template="{{(index .items 0).metadata.name}}") -- rabbitmqctl set_permissions -p / test ".*" ".*" ".*"
I can access the web UI on
http://localhost:8001/api/v1/namespaces/default/services/rabbit:15672/proxy/
after activating the proxy with kubectl proxy. However, logging in with test / test still gives me a Login failed message.
Posting the answer out of comments.
First what kubectl proxy is:
Creates a proxy server or application-level gateway between localhost
and the Kubernetes API server. It also allows serving static content
over specified HTTP path. All incoming data enters through one port
and gets forwarded to the remote Kubernetes API server port, except
for the path matching the static content path.
Also, kubectl proxy works with HTTP requests; it does not work with TCP traffic (this is probably the reason why it did not work).
You can read more in a good answer - kubectl proxy vs kubectl port-forward
Common options to access a Service running inside the cluster are:
use kubectl port-forward - for local development and testing purposes (see the sketch below)
use a LoadBalancer or NodePort Service type - more advanced options which can be used across clusters and production environments. Find more about Service types.
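For example, a quick way to reach the RabbitMQ management UI locally via port-forward (a minimal sketch, assuming the rabbit Service above lives in the default namespace):
# Forward local port 15672 to the Service's management port
kubectl port-forward svc/rabbit 15672:15672
# Then open http://localhost:15672 and log in with the user created via rabbitmqctl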

getting 502 Bad Gateway on eks aws-alb-ingress

I created an EKS cluster in AWS via Terraform, using terraform-aws-modules/eks/aws as the module. This cluster has one pod (a Golang app) exposed through a NodePort Service and an Ingress. The problem I have is that I'm getting a 502 Bad Gateway when I hit the endpoint.
My config:
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: golang-deployment
  labels:
    app: golang-app
spec:
  replicas: 1
  selector:
    matchLabels:
      name: golang-app
  template:
    metadata:
      labels:
        name: golang-app
    spec:
      containers:
        - name: golang-app
          image: 019496914213.dkr.ecr.eu-north-1.amazonaws.com/goland:1.0
          ports:
            - containerPort: 9000
Service:
kind: Service
apiVersion: v1
metadata:
  name: golang-service
spec:
  type: NodePort
  selector:
    app: golang-app
  ports:
    - protocol: TCP
      port: 9000
      targetPort: 9000
Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    kubernetes.io/ingress.class: alb
  labels:
    app: app
spec:
  rules:
    - http:
        paths:
          - path: /api/v2
            pathType: ImplementationSpecific
            backend:
              service:
                name: golang-service
                port:
                  number: 9000
kubectl get service
golang-service NodePort 172.20.44.34 <none> 9000:32184/TCP 106m
The security groups for the cluster and nodes were created by terraform-aws-modules/eks/aws module.
I checked several things:
kubectl port-forward golang-deployment-5894d8d6fc-ktmmb 9000:9000
WORKS! I can see the golang app using localhost:9000 on my computer.
kubectl exec curl -i --tty nslookup golang-app
Server: 172.20.0.10
Address 1: 172.20.0.10 kube-dns.kube-system.svc.cluster.local
Name: golang-app
Address 1: 172.20.130.130 golang-app.default.svc.cluster.local
WORKS!
kubectl exec curl -i --tty curl golang-app:9000
curl: (7) Failed to connect to golang-app port 9000: Connection refused
DOES NOT WORK
Any idea?
You should be calling the Service, not the Deployment.
golang-service is the Service name, not the Deployment name.
kubectl exec -it curl -- curl golang-service:9000
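It may also be worth confirming that the Service actually selects the Pods: the Deployment above labels its Pods with name: golang-app while the Service selector uses app: golang-app, and a mismatch leaves the Service with no endpoints (which the ALB reports as 502). A quick check, assuming the names from the question:
# If ENDPOINTS shows <none>, the Service selector does not match the Pod labels
kubectl get endpoints golang-service
kubectl describe svc golang-service | grep -i endpoints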

how to access service on rpi k8s cluster

I built a k8s cluster with the help of this guide: rpi+k8s. I got a basic nginx service up and running, and I can curl from the master node to the worker node to get the nginx welcome page content using:
k exec nginx-XXX-XXX -it -- curl localhost:80
I tried following suggestions in the following SO posts:
link 1
link 2
However, I still can't access a simple nginx service on the worker node from my local computer (Linux). I used NODE_IP:NODE_PORT. I also installed kubefwd and ran sudo kubefwd svc -n nginx-ns, but I don't see the expected output where it would show the port forwards. Any help would be appreciated. Thanks.
Output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/nginx-svc NodePort 10.101.19.230 <none> 80:32749/TCP 168m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx 3/3 3 3 168m
NAME DESIRED CURRENT READY AGE
replicaset.apps/nginx-54485b444f 3 3 3 168m
And here is the yaml file:
kind: Namespace
apiVersion: v1
metadata:
  name: nginx-ns
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: nginx-ns
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.19-alpine
          ports:
            - name: nginxport
              containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  namespace: nginx-ns
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      name: nginxport
      port: 80
      targetPort: 80
      nodePort: 32749
  type: NodePort
  selector:
    app: backend
You need to update your Service nginx-svc, where you have used two selectors.
Remove the part below:
selector:
  app: backend
Updated service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  namespace: nginx-ns
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      name: nginxport
      port: 80
      targetPort: 80
      nodePort: 32749
  type: NodePort
Then try this for port-forwarding:
kubectl port-forward -n nginx-ns svc/nginx-svc 8080:80
The template is:
kubectl port-forward -n <namespace> svc/<svc_name> <local_port>:<svc_port>
Then try in the browser with 127.0.0.1:8080 or localhost:8080
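Once the duplicate selector is removed, the NodePort route the question originally tried should also work from the local computer; a sketch, assuming the node IPs are reachable from your machine:
# Find a node IP (INTERNAL-IP / EXTERNAL-IP columns)
kubectl get nodes -o wide
# Hit the NodePort from the local machine (32749 from the Service above)
curl http://<node-ip>:32749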

Kubernetes AWS EKS Failed to load resource: net::ERR_NAME_NOT_RESOLVED

I have an issue with the following AWS EKS deployment, where the frontend always gets a Failed to load resource: net::ERR_NAME_NOT_RESOLVED from the backend:
Failed to load resource: net::ERR_NAME_NOT_RESOLVED
The reason appears to be that the frontend app running in the browser has no access from the internet to the backend API in Kubernetes, http://restapi-auth-nginx/api/
(see attached browser image)
Here are the details of the configuration
- file: restapi-auth-api.yaml
  Description: Backend API using GUNICORN
  Details: Correctly downloads the image and creates the pods.
  I can do kubectl exec -it <podId> /bin/bash
  Port 5000
- file: restapi-auth-nginx.yaml
  Description: NGINX proxy for the API
  Details: Correctly downloads the image and creates the pods.
  I can do kubectl exec -it <podId> /bin/bash. I can also reach the API pod from the nginx pod, so this part is working fine.
- file: frontend.yaml
  Description: NGINX proxy plus Angular app in a multistage deployment
  Details: Correctly downloads the image and creates the pods.
  I can do kubectl exec -it <podId> /bin/ash
  I can also reach the API pod from the frontend pod, so this part is working fine.
However, from the browser, I still get the above error even if all the components appear to be working fine
(see the image of the web site working from the browser)
Let me show you how I can access the API through its NGINX pod from the frontend pod. Here are our pods:
kubectl get pods
NAME READY STATUS RESTARTS AGE
frontend-7674f4d9bf-jbd2q 1/1 Running 0 35m
restapi-auth-857f94b669-j8m7t 1/1 Running 0 39m
restapi-auth-nginx-5d885c7b69-xt6hf 1/1 Running 0 38m
udagram-frontend-5fbc78956c-nvl8d 1/1 Running 0 41m
Now let's take the frontend pod and do a curl to the NGINX proxy that serves the API.
Let's try this curl request directly from the frontend pod to the Nginx backend:
curl --location --request POST 'http://restapi-auth-nginx/api/users/auth/login' \
--header 'Content-Type: application/json' \
--data-raw '{
    "email":"david#me.com",
    "password":"SuperPass"
}'
Now let's log into the frontend pod and see if it works
kubectl exec -it frontend-7674f4d9bf-jbd2q /bin/ash
/usr/share/nginx/html # curl --location --request POST 'http://restapi-auth-nginx/api/users/auth/login' \
> --header 'Content-Type: application/json' \
> --data-raw '{
> "email":"david#me.com",
> "password":"SuperPass"
> }'
{
    "auth": true,
    "token": "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ1c2VyIjoiZGF2aWRAcHlta455aS5vcmciLCJleHAiOjE1OTYwNTk7896.OIkuwLsyLhrlCMTVlccg8524OUMnkJ2qJ5fkj-7J5W0",
    "user": "david#me.com"
}
It works perfectly, meaning that the frontend pod is correctly communicating with the restapi-auth-nginx API reverse proxy.
Here in this image, you have the output of multiple commands
Here are the .yaml files
LOAD BALANCER and FRONT END
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  labels:
    app: udagram
spec:
  replicas: 1
  selector:
    matchLabels:
      app: udagram
      tier: frontend
  template:
    metadata:
      labels:
        app: udagram
        tier: frontend
    spec:
      containers:
        - name: udagram-frontend
          image: pythonss/frontend_udacity_app
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
          imagePullPolicy: Always
          ports:
            - containerPort: 80
      imagePullSecrets:
        - name: regcred
---
apiVersion: v1
kind: Service
metadata:
  name: frontend-lb
  labels:
    app: udagram
    tier: frontend
spec:
  type: LoadBalancer
  ports:
    - port: 80
  selector:
    app: udagram
    tier: frontend
Nginx reverse proxy for API backend
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.21.0 ()
  creationTimestamp: null
  labels:
    io.kompose.service: restapi-auth-nginx
  name: restapi-auth-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: restapi-auth-nginx
  strategy: {}
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert
        kompose.version: 1.21.0 ()
      creationTimestamp: null
      labels:
        io.kompose.service: restapi-auth-nginx
    spec:
      containers:
        - image: pythonss/restapi_auth_microservice_nginx
          imagePullPolicy: Always
          name: restapi-auth-nginx-nginx
          ports:
            - containerPort: 80
          resources: {}
      imagePullSecrets:
        - name: regcred
      restartPolicy: Always
      serviceAccountName: ""
      volumes: null
status: {}
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.21.0 ()
  creationTimestamp: null
  labels:
    io.kompose.service: restapi-auth-nginx
  name: restapi-auth-nginx
spec:
  ports:
    - name: "80"
      port: 80
      targetPort: 80
  selector:
    io.kompose.service: restapi-auth-nginx
status:
  loadBalancer: {}
For brevity, I will not share the API app server .yaml file.
So my questions are:
How could I grant access from the internet to the backend API gateway without exposing the API to the world?
Or should I expose the API through an LB, like so:
apiVersion: v1
kind: Service
metadata:
  name: backend-lb
  labels:
    io.kompose.service: restapi-auth-nginx
spec:
  type: LoadBalancer
  ports:
    - port: 80
  selector:
    io.kompose.service: restapi-auth-nginx
This will solve the issue, as it will expose the API.
However, I then need to add the frontend LB to CORS in the API and the backend LB to the frontend so it can make the calls.
Could someone explain how to do this differently, without exposing the APIs?
What are the common patterns for this architecture?
BR
The solution is to just expose the NGINX pods (note that the docker-compose setup creates 2 images) through a LoadBalancer Service.
We need this additional YAML:
apiVersion: v1
kind: Service
metadata:
  name: backend-lb
  labels:
    io.kompose.service: restapi-auth-nginx
spec:
  type: LoadBalancer
  ports:
    - port: 80
  selector:
    io.kompose.service: restapi-auth-nginx
Now we have 3 Services and 2 Deployments. This latest Service exposes the API to the world.
Below is an image for this deployment.
As you can see, the world has access to this latest Service only.
The NGINX and GUNICORN pods are unreachable from the internet.
Now your frontend application can access the API through the exposed LB (represented in black) inside Kubernetes.
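To find the address the new backend-lb Service exposes (a sketch, assuming the default namespace; on AWS the EXTERNAL-IP column shows the ELB's DNS name):
# The EXTERNAL-IP / hostname column is the load balancer address
kubectl get service backend-lb
# Point the frontend's API base URL (and the API's CORS settings) at that hostname, e.g.
# curl http://<elb-dns-name>/api/users/auth/login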

Kubernetes's LoadBalancer yaml not working even though CLI `expose` function works

This is my Service and Deployment yaml that I am running on minikube:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-hello-world
  labels:
    app: node-hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: node-hello-world
  template:
    metadata:
      labels:
        app: node-hello-world
    spec:
      containers:
        - name: node-hello-world
          image: node-hello-world:latest
          imagePullPolicy: Never
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: node-hello-world-load-balancer
spec:
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 9000
      targetPort: 8080
      nodePort: 30002
  selector:
    name: node-hello-world
Results:
$ minikube service node-hello-world-load-balancer --url
http://192.168.99.101:30002
$ curl http://192.168.99.101:30002
curl: (7) Failed to connect to 192.168.99.101 port 30002: Connection refused
However, running the following CLI worked:
$ kubectl expose deployment node-hello-world --type=LoadBalancer
$ minikube service node-hello-world --url
http://192.168.99.101:30130
$ curl http://192.168.99.101:30130
Hello World!
What am I doing wrong with my LoadBalancer yaml config?
You have configured the Service selector incorrectly:
selector:
  name: node-hello-world
It should be:
selector:
  app: node-hello-world
https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/
You can debug this by describing the Service and seeing that the Endpoints list is empty, meaning no Pods are mapped to your Service:
kubectl describe svc node-hello-world-load-balancer | grep -i endpoints
Endpoints: <none>
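For reference, a corrected version of the Service manifest with the fixed selector (a sketch based on the YAML above); after applying it, the Endpoints line should list the Pod's IP:
apiVersion: v1
kind: Service
metadata:
  name: node-hello-world-load-balancer
spec:
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 9000
      targetPort: 8080
      nodePort: 30002
  selector:
    app: node-hello-world  # matches the Deployment's Pod label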