RabbitMQ on Kubernetes - Can't log in to the management UI

I'm trying to switch an existing app from docker-compose to Kubernetes (first time using it).
My app is deployed on AWS EKS using Fargate nodes. It runs well, but I would like to access the RabbitMQ management UI for debugging purposes.
The rabbit deployment/service files I am using are the following:
# rabbit-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.26.0 (HEAD)
  creationTimestamp: null
  labels:
    io.kompose.service: rabbit
  name: rabbit
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: rabbit
  strategy: {}
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert
        kompose.version: 1.26.0 (HEAD)
      creationTimestamp: null
      labels:
        io.kompose.service: rabbit
    spec:
      containers:
        - image: rabbitmq:3.9.13-management
          name: rabbit
          ports:
            - containerPort: 15672
            - containerPort: 5672
            - containerPort: 8080
          resources: {}
          env:
            - name: RABBITMQ_DEFAULT_USER
              value: "guest"
            - name: RABBITMQ_DEFAULT_PASS
              value: "guest"
      restartPolicy: Always
status: {}
and
# rabbit-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.26.0 (HEAD)
  creationTimestamp: null
  labels:
    io.kompose.service: rabbit
  name: rabbit
spec:
  type: NodePort
  ports:
    - name: "15672"
      port: 15672
      targetPort: 15672
    - name: "5672"
      port: 5672
      targetPort: 5672
    - name: "8080"
      port: 8080
      targetPort: 8080
  selector:
    io.kompose.service: rabbit
status:
  loadBalancer: {}
I also followed the instructions to create a new user:
kubectl exec $(kubectl get pods --selector=io.kompose.service=rabbit -o template --template="{{(index .items 0).metadata.name}}") -- rabbitmqctl add_user test test
kubectl exec $(kubectl get pods --selector=io.kompose.service=rabbit -o template --template="{{(index .items 0).metadata.name}}") -- rabbitmqctl set_user_tags test administrator
kubectl exec $(kubectl get pods --selector=io.kompose.service=rabbit -o template --template="{{(index .items 0).metadata.name}}") -- rabbitmqctl set_permissions -p / test ".*" ".*" ".*"
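A quick way to confirm the new user actually exists (same pod selector as above) is to list the users:
kubectl exec $(kubectl get pods --selector=io.kompose.service=rabbit -o template --template="{{(index .items 0).metadata.name}}") -- rabbitmqctl list_users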
I can access the web UI at
http://localhost:8001/api/v1/namespaces/default/services/rabbit:15672/proxy/
after starting the proxy with kubectl proxy; however, logging in with test / test still gives me a Login failed message.

Posting the answer out of comments.
First, what kubectl proxy is:
Creates a proxy server or application-level gateway between localhost
and the Kubernetes API server. It also allows serving static content
over specified HTTP path. All incoming data enters through one port
and gets forwarded to the remote Kubernetes API server port, except
for the path matching the static content path.
Also, kubectl proxy only works with HTTP requests; it does not work with arbitrary TCP traffic (this is probably the reason why it did not work).
You can read more in a good answer - kubectl proxy vs kubectl port-forward
Common options to access a service inside the cluster are:
- use kubectl port-forward - for local development and testing purposes
- use a LoadBalancer or NodePort service type - more advanced options which can be used across clusters and production environments. Find more about service types.
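For local debugging, a minimal sketch (assuming the rabbit Service and the test/test user from the question) would be to port-forward the management port and log in from the browser:
kubectl port-forward svc/rabbit 15672:15672
# then open http://localhost:15672 and log in with test / test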

Related

Cannot Access Application Deployment from Outside in Kubernetes

I'm trying to access my Golang microservice that is running in the Kubernetes cluster and has the following manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: email-application-service
  namespace: email-namespace
spec:
  selector:
    matchLabels:
      run: internal-service
  template:
    metadata:
      labels:
        run: internal-service
    spec:
      containers:
        - name: email-service-application
          image: some_image
          ports:
            - containerPort: 8000
              hostPort: 8000
              protocol: TCP
          envFrom:
            - secretRef:
                name: project-secrets
          imagePullPolicy: IfNotPresent
So to access this Deployment from outside the cluster I'm using a Service as well.
I've set up an external IP for test purposes, which is supposed to forward HTTP requests to port 8000, where my application is actually running:
apiVersion: v1
kind: Service
metadata:
  name: email-internal-service
  namespace: email-namespace
spec:
  type: ClusterIP
  externalIPs:
    - 192.168.0.10
  selector:
    run: internal-service
  ports:
    - name: http
      port: 8000
      targetPort: 8000
      protocol: TCP
So the problem is that when I try to send a GET request from outside the cluster by executing curl -f http://192.168.0.10:8000/, it just hangs until the timeout.
I've checked the state of the pods, the application logs, the matching of the selector/template labels in the Service and Deployment manifests, and the namespaces, but all of this is fine and working properly...
(There is also a secret config, but it deployed and is also working fine.)
Thanks...
Making reference to jordanm's solution: you want to put it back to ClusterIP and then use port-forward with kubectl -n email-namespace port-forward svc/email-internal-service 8000:8000. You will then be able to access the service via http://localhost:8000. You may also be interested in github.com/txn2/kubefwd
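A minimal sketch of that flow (assuming the Service from the question is switched back to a plain ClusterIP service, i.e. the externalIPs entry is removed):
kubectl -n email-namespace port-forward svc/email-internal-service 8000:8000
# in a second terminal:
curl -f http://localhost:8000/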

I expose my pod in Kubernetes but I can't seem to establish a connection with it

I am trying to expose a deployment I made on minikube:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-test
  labels:
    app: debian
spec:
  replicas: 1
  selector:
    matchLabels:
      app: debian
  strategy: {}
  template:
    metadata:
      labels:
        app: debian
    spec:
      containers:
        - image: agracia10/debian_bash:latest
          name: debian
          ports:
            - containerPort: 8006
          resources: {}
      restartPolicy: Always
status: {}
I decided to follow what is written here.
I try to expose the deployment using the following command:
kubectl expose pod deployment-test-8497d6f458-xxhgm --type=NodePort --port=8080 --target-port=80
but when I then try to access the service created by the expose command, using the URL provided by
minikube service deployment-test-8497d6f458-xxhgm --url
it throws an error when I use Packet Sender to try to connect to the service:
packet sender log
I'm not really sure what the reason for this could be. I think it has something to do with the fact that when I get the services it says <none> in the external IP field. Also, when I try to retrieve the node IP using minikube ip it gives an address, but minikube service --url gives the 127.0.0.1 address. In any case, using either one does not work.
It's not working due to a port configuration mismatch.
Your deployment's container is running on 8006, but you have exposed 8080 and your target port is --target-port=80.
So due to this it's not working.
The ideal flow of traffic goes like:
service (NodePort, ClusterIP or any) > Deployment > Pods
Below is an example of the deployment and service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blog-app-server-instance
  labels:
    app: blog-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: blog-app
  template:
    metadata:
      labels:
        app: blog-app
    spec:
      containers:
        - name: blog-app-server
          image: agracia10/debian_bash:latest
          ports:
            - containerPort: 8006
---
apiVersion: v1
kind: Service
metadata:
  name: blog-app-service
  labels:
    app: blog-app
spec:
  selector:
    app: blog-app
  type: NodePort
  ports:
    - port: 80
      nodePort: 31364
      targetPort: 8006
      protocol: TCP
      name: http
So the things I have changed are the image and the target port.
Once your NodePort service is up and running, you send the request on port 80 (service port) or 31364 (node port).
It will redirect the request internally to the target port, which is 8006 on the container.
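For example, from the machine running minikube (a hypothetical check, assuming the NodePort manifest above is applied), something like this should reach the container:
curl http://$(minikube ip):31364/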
With this command you exposed your deployment on the wrong target port:
kubectl expose pod deployment-test-8497d6f458-xxhgm --type=NodePort --port=8080 --target-port=80
Ideally it should be 8006.
As far as I know, the simplest way to expose the deployment as a service is to run this command - you don't expose the pod, you expose the deployment:
kubectl expose deployment deployment-test --port 80
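A slightly fuller sketch (assuming the container really does listen on 8006) that also sets the service type and the matching target port:
kubectl expose deployment deployment-test --type=NodePort --port=80 --target-port=8006
minikube service deployment-test --url   # prints a URL you can curl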

Access redis by service name in Kubernetes

I created a Redis deployment and service in Kubernetes.
I can access Redis from another pod by service IP, but I can't access it by service name.
The Redis yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-deployment
  namespace: myapp-ns
spec:
  replicas: 1
  selector:
    matchLabels:
      component: redis
  template:
    metadata:
      labels:
        component: redis
    spec:
      containers:
        - name: redis
          image: redis
          ports:
            - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: myapp-ns
spec:
  type: ClusterIP
  selector:
    component: redis
  ports:
    - port: 6379
      targetPort: 6379
I applied your file, and I am able to ping and telnet to the service both from within the same namespace and from a different namespace. To test this, I created pods in the same namespace and in a different namespace and installed telnet and ping. Then I exec'ed into them and did the below tests:
Same Namespace
kubectl exec -it <same-namespace-pod> /bin/bash
# ping redis
PING redis.<redis-namespace>.svc.cluster.local (172.20.211.84) 56(84) bytes of data.
# telnet redis 6379
Trying 172.20.211.84...
Connected to redis.<redis-namespace>.svc.cluster.local.
Escape character is '^]'.
Different Namespace
kubectl exec -it <different-namespace-pod> /bin/bash
# ping redis.<redis-namespace>.svc.cluster.local
PING redis.test.svc.cluster.local (172.20.211.84) 56(84) bytes of data.
# telnet redis.<redis-namespace>.svc.cluster.local 6379
Trying 172.20.211.84...
Connected to redis.<redis-namespace>.svc.cluster.local.
Escape character is '^]'.
If you are not able to do that due to DNS resolution issues, you could look at /etc/resolv.conf in your pod to make sure it has the search prefixes svc.cluster.local and cluster.local.
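A quick way to check that (the pod name below is a placeholder):
kubectl exec -it <your-pod> -- cat /etc/resolv.conf
# the search line should look something like:
# search <your-namespace>.svc.cluster.local svc.cluster.local cluster.local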
I created a redis deployment and service in kubernetes, I can access redis from another pod by service ip, but I can't access it by service name
Keep in mind that you can use the bare Service name to access the backend Pods it exposes only within the same namespace. Looking at your Deployment and Service yaml manifests, we can see they're deployed within the myapp-ns namespace. This means that only from a Pod deployed within this namespace can you access your Service by using its name.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-deployment
  namespace: myapp-ns ### 👈
spec:
  replicas: 1
  selector:
    matchLabels:
      component: redis
  template:
    metadata:
      labels:
        component: redis
    spec:
      containers:
        - name: redis
          image: redis
          ports:
            - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: myapp-ns ### 👈
spec:
  type: ClusterIP
  selector:
    component: redis
  ports:
    - port: 6379
      targetPort: 6379
So if you deploy the following Pod:
apiVersion: v1
kind: Pod
metadata:
  name: redis-client
  namespace: myapp-ns ### 👈
spec:
  containers:
    - name: redis-client
      image: debian
      # keep the container running so you can exec into it and install the tools
      command: ["sleep", "infinity"]
you will be able to access your Service by its name, so the following commands (provided you've installed all required tools) will work:
redis-cli -h redis
telnet redis 6379
However, if your redis-client Pod is deployed to a completely different namespace, you will need to use the fully qualified domain name (FQDN), which is built according to the rule described here:
redis-cli -h redis.myapp-ns.svc.cluster.local
telnet redis.myapp-ns.svc.cluster.local 6379
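If you just want to confirm name resolution from another namespace, a throwaway busybox pod works as well (the namespace name below is a placeholder):
kubectl run -n some-other-ns dns-test --rm -it --restart=Never --image=busybox:1.28 -- nslookup redis.myapp-ns.svc.cluster.local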

Kubernetes AWS EKS Failed to load resource: net::ERR_NAME_NOT_RESOLVED

I have an issue with the following AWS EKS deployment, where the front end always gets a Failed to load resource: net::ERR_NAME_NOT_RESOLVED error from the backend:
Failed to load resource: net::ERR_NAME_NOT_RESOLVED
The reason appears to be that the frontend app running in the browser
has no access from the internet to the backend API in Kubernetes at http://restapi-auth-nginx/api/
(see attached browser image)
Here are the details of the configuration:
- file: restapi-auth-api.yaml
  Description: Backend API using GUNICORN
  Details: Correctly downloads the image and creates the pods
  I can do kubectl exec -it <podId> /bin/bash
  Port 5000
- file: restapi-auth-nginx.yaml
  Description: NGINX proxy for the API
  Details: Correctly downloads the image and creates the pods
  I can do kubectl exec -it <podId> /bin/bash. I can also reach the api pod from the nginx pod, so this part is working fine
- file: frontend.yaml
  Description: NGINX proxy plus Angular app in a multistage deployment
  Details: Correctly downloads the image and creates the pods
  I can do kubectl exec -it <podId> /bin/ash
  I can also reach the api pod from the frontend pod, so this part is working fine
However, from the browser, I still get the above error even if all the components appear to be working fine
(see the image of the web site working from the browser)
Let me show you how I can access the API through its NGINX pod from the frontend pod. Here are our pods:
kubectl get pods
NAME                                  READY   STATUS    RESTARTS   AGE
frontend-7674f4d9bf-jbd2q             1/1     Running   0          35m
restapi-auth-857f94b669-j8m7t         1/1     Running   0          39m
restapi-auth-nginx-5d885c7b69-xt6hf   1/1     Running   0          38m
udagram-frontend-5fbc78956c-nvl8d     1/1     Running   0          41m
Now let's log into the frontend pod and curl the NGINX proxy that serves the API.
Let's try this curl request directly from the frontend pod to the NGINX backend:
curl --location --request POST 'http://restapi-auth-nginx/api/users/auth/login' \
> --header 'Content-Type: application/json' \
> --data-raw '{
> "email":"david#me.com",
> "password":"SuperPass"
> }'
Now let's log into the frontend pod and see if it works
kubectl exec -it frontend-7674f4d9bf-jbd2q /bin/ash
/usr/share/nginx/html # curl --location --request POST 'http://restapi-auth-nginx/api/users/auth/login' \
> --header 'Content-Type: application/json' \
> --data-raw '{
> "email":"david#me.com",
> "password":"SuperPass"
> }'
{
"auth": true,
"token": "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ1c2VyIjoiZGF2aWRAcHlta455aS5vcmciLCJleHAiOjE1OTYwNTk7896.OIkuwLsyLhrlCMTVlccg8524OUMnkJ2qJ5fkj-7J5W0",
"user": "david#me.com"
}
It works perfectly, meaning that the frontend pod is correctly communicating with the restapi-auth-nginx API reverse proxy.
Here in this image, you have the output of multiple commands
Here are the .yaml files
LOAD BALANCER and FRONT END
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  labels:
    app: udagram
spec:
  replicas: 1
  selector:
    matchLabels:
      app: udagram
      tier: frontend
  template:
    metadata:
      labels:
        app: udagram
        tier: frontend
    spec:
      containers:
        - name: udagram-frontend
          image: pythonss/frontend_udacity_app
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
          imagePullPolicy: Always
          ports:
            - containerPort: 80
      imagePullSecrets:
        - name: regcred
---
apiVersion: v1
kind: Service
metadata:
  name: frontend-lb
  labels:
    app: udagram
    tier: frontend
spec:
  type: LoadBalancer
  ports:
    - port: 80
  selector:
    app: udagram
    tier: frontend
Nginx reverse proxy for API backend
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.21.0 ()
  creationTimestamp: null
  labels:
    io.kompose.service: restapi-auth-nginx
  name: restapi-auth-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: restapi-auth-nginx
  strategy: {}
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert
        kompose.version: 1.21.0 ()
      creationTimestamp: null
      labels:
        io.kompose.service: restapi-auth-nginx
    spec:
      containers:
        - image: pythonss/restapi_auth_microservice_nginx
          imagePullPolicy: Always
          name: restapi-auth-nginx-nginx
          ports:
            - containerPort: 80
          resources: {}
      imagePullSecrets:
        - name: regcred
      restartPolicy: Always
      serviceAccountName: ""
      volumes: null
status: {}
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.21.0 ()
  creationTimestamp: null
  labels:
    io.kompose.service: restapi-auth-nginx
  name: restapi-auth-nginx
spec:
  ports:
    - name: "80"
      port: 80
      targetPort: 80
  selector:
    io.kompose.service: restapi-auth-nginx
status:
  loadBalancer: {}
For brevity, I will not share the API app server .yaml file.
So my questions are:
How could I grant access from the internet to the backend API gateway without exposing the API to the world?
Or should I expose the API through an LB, like so:
apiVersion: v1
kind: Service
metadata:
  name: backend-lb
  labels:
    io.kompose.service: restapi-auth-nginx
spec:
  type: LoadBalancer
  ports:
    - port: 80
  selector:
    io.kompose.service: restapi-auth-nginx
This will solve the issue, as it will expose the API.
However, I would then need to add the frontend LB to CORS in the API and the backend LB to the frontend so it can make the calls.
Could someone explain how to do this otherwise, without exposing the APIs?
What are the common patterns for this architecture?
BR
The solution to this is to just expose the NGINX pods (note that the docker-compose file creates 2 images) through a LoadBalancer service.
We need this additional yaml:
apiVersion: v1
kind: Service
metadata:
  name: backend-lb
  labels:
    io.kompose.service: restapi-auth-nginx
spec:
  type: LoadBalancer
  ports:
    - port: 80
  selector:
    io.kompose.service: restapi-auth-nginx
Now we will have 3 services and 2 deployments. This latest service exposes the API to the world.
Below is an image for this deployment.
As you can see, the world has access to this latest service only.
The NGINX and GUNICORN pods themselves are not reachable directly from the internet.
Now your frontend application can access the API through the exposed LB (represented in black) inside the Kubernetes cluster.
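Once AWS has provisioned the load balancer, you can look up the public hostname the frontend should call (standard kubectl, nothing specific to this setup):
kubectl get svc backend-lb
# the EXTERNAL-IP column shows the ELB hostname once it is ready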

Issue with monitoring custom service on prometheus in kubernetes namespace

My goal is to monitor services with Prometheus, so I was following a guide located at:
https://github.com/coreos/prometheus-operator/blob/master/Documentation/user-guides/getting-started.md
I am relatively new to all of this, so please forgive my naivety. I tried looking into the error, but all the answers were convoluted. I have no idea where to start with the debugging process (perhaps look into the YAMLs?).
I wanted to monitor a custom Service, so I deployed the following service.yaml into a custom namespace (t):
kind: Service
apiVersion: v1
metadata:
  namespace: t
  name: example-service-test
  labels:
    app: example-service-test
spec:
  selector:
    app: example-service-test
  type: NodePort
  ports:
    - name: http
      nodePort: 30901
      port: 8080
      protocol: TCP
      targetPort: http
---
apiVersion: v1
kind: Pod
metadata:
  name: example-service-test
  namespace: t
  labels:
    app: example-service-test
spec:
  containers:
    - name: example-service-test
      image: python:2.7
      imagePullPolicy: IfNotPresent
      command: ["/bin/bash"]
      args: ["-c", "echo \"<p>This is POD1 $(hostname)</p>\" > index.html; python -m SimpleHTTPServer 8080"]
      ports:
        - name: http
          containerPort: 8080
And deployed a service monitor into the namespace:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-service-test
  labels:
    team: frontendtest1
  namespace: t
spec:
  selector:
    matchLabels:
      app: example-service-test
  endpoints:
    - port: http
So far, the service monitor is detecting the service, as shown:
Prometheus Service Discovery.
However, there is an error obtaining the metrics from the service: Prometheus Targets.
From what I know, Prometheus isn't able to access /metrics on the sample service - in that case, do I need to expose the metrics? If so, could I get a step-by-step guide on how to expose metrics? If not, what route should I take?
I'm afraid you might have missed the key thing in the tutorial you're following on the CoreOS website, about how metrics from an app get to Prometheus:
First, deploy three instances of a simple example application, which
listens and exposes metrics on port 8080
Yes, your application (website) listens on port 8080, but it does not expose any metrics on the '/metrics' endpoint in a format known to Prometheus.
You can see what kind of metrics I'm talking about by hitting the endpoint from inside the Pod/container where it's hosted.
kubectl exec -it $(kubectl get po -l app=example-app -o jsonpath='{.items[0].metadata.name}') -c example-app -- curl localhost:8080/metrics
You should see similar output to this one:
# HELP codelab_api_http_requests_in_progress The current number of API HTTP requests in progress.
# TYPE codelab_api_http_requests_in_progress gauge
codelab_api_http_requests_in_progress 1
# HELP codelab_api_request_duration_seconds A histogram of the API HTTP request durations in seconds.
# TYPE codelab_api_request_duration_seconds histogram
codelab_api_request_duration_seconds_bucket{method="GET",path="/api/bar",status="200",le="0.0001"} 0
codelab_api_request_duration_seconds_bucket{method="GET",path="/api/bar",status="200",le="0.00015000000000000001"} 0
codelab_api_request_duration_seconds_bucket{method="GET",path="/api/bar",status="200",le="0.00022500000000000002"} 0
codelab_api_request_duration_seconds_bucket{method="GET",path="/api/bar",status="200",le="0.0003375"} 0
codelab_api_request_duration_seconds_bucket{method="GET",path="/api/bar",status="200",le="0.00050625"} 0
codelab_api_request_duration_seconds_bucket{method="GET",path="/api/bar",status="200",le="0.000759375"} 0
Please read more here on ways of exposing metrics.
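To see the same gap on your own service (a quick check, assuming the Service and the t namespace from the question), a port-forward plus curl shows that the sample pod currently serves the HTML index rather than Prometheus-format metrics:
kubectl -n t port-forward svc/example-service-test 8080:8080
# in another terminal:
curl -s http://localhost:8080/metrics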