Kubernetes: cannot access API web service

I'm new to Kubernetes and I converted my environment from docker-compose.
I have a pod that runs Python code. If I run my Docker container on the same host I can access it,
but when it runs in a pod no traffic gets inside.
kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://10.10.10.130:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
api-service.yaml:
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: /snap/kompose/19/kompose-linux-amd64 convert --volumes hostPath
    kompose.version: 1.21.0 (992df58d8)
  creationTimestamp: null
  labels:
    io.kompose.service: api
  name: api
spec:
  ports:
  - name: "5001"
    port: 5001
    targetPort: 5001
  selector:
    io.kompose.service: api
status:
  loadBalancer: {}
api-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: /snap/kompose/19/kompose-linux-amd64 convert --volumes hostPath
    kompose.version: 1.21.0 (992df58d8)
  creationTimestamp: null
  labels:
    io.kompose.service: api
  name: api
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: api
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        kompose.cmd: /snap/kompose/19/kompose-linux-amd64 convert --volumes hostPath
        kompose.version: 1.21.0 (992df58d8)
      creationTimestamp: null
      labels:
        io.kompose.service: api
    spec:
      containers:
      - image: 127.0.0.1:5000/api:latest
        imagePullPolicy: "Never"
        name: api
        ports:
        - containerPort: 5001
        resources: {}
        volumeMounts:
        - mountPath: /base
          name: api-hostpath0
      restartPolicy: Always
      serviceAccountName: ""
      volumes:
      - hostPath:
          path: /root/ansible/api/base
        name: api-hostpath0
status: {}
pod log:
* Serving Flask app 'server' (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: on
* Running on all addresses.
WARNING: This is a development server. Do not use it in a production deployment.
* Running on http://10.244.0.17:5001/ (Press CTRL+C to quit)
* Restarting with stat
* Debugger is active!
* Debugger PIN: 553-272-086
I tried reaching the address the config view shows and I get this:
https://10.10.10.130:6443/
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}
The path I try to reach through the container is:
https://10.10.10.130:5001/
It does not reach the container and behaves as if the site does not exist.
Again, this works with the Docker container, so what am I missing?
Thanks
--EDIT--
If I curl http://10.244.0.17:5001/ (the address of the api pod) from the host I get in. Why can't I get in from outside?
I also tried adding this to the nginx + api pod deployments:
template:
  spec:
    hostNetwork: true
Still cannot reach it, please help.

Found the solution!
I needed to add externalIPs to my pods' service.yaml (api and nginx):
spec:
  ports:
  - name: "8443"
    port: 8443
    targetPort: 80
  externalIPs:
  - 10.10.10.130
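For reference, applying the same fix to the api Service from the question might look roughly like this. This is only a sketch: it assumes 10.10.10.130 (the node address from the kubeconfig above) is the IP you curl from outside, and a NodePort or LoadBalancer Service would be the more conventional way to expose the pod.
apiVersion: v1
kind: Service
metadata:
  labels:
    io.kompose.service: api
  name: api
spec:
  ports:
  - name: "5001"
    port: 5001
    targetPort: 5001
  selector:
    io.kompose.service: api
  externalIPs:
  - 10.10.10.130   # node IP from the kubeconfig above (assumption)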

Related

Open Telemetry python-FastAPI auto instrumentation in k8s not collecting data

So I am trying to instrument a FastAPI python server with Open Telemetry. I installed the dependencies needed through poetry:
opentelemetry-api = "^1.11.1"
opentelemetry-distro = {extras = ["otlp"], version = "^0.31b0"}
opentelemetry-instrumentation-fastapi = "^0.31b0"
When running the server locally, with opentelemetry-instrument --traces_exporter console uvicorn src.main:app --host 0.0.0.0 --port 5000 I can see the traces printed out to my console whenever I call any of my endpoints.
The main issue I face is that when running the app in k8s I see no logs in the collector.
I have added cert-manager (needed by the OTel Operator) with kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.8.0/cert-manager.yaml, and the OTel Operator itself with kubectl apply -f https://github.com/open-telemetry/opentelemetry-operator/releases/latest/download/opentelemetry-operator.yaml.
Then, I added a collector with the following config:
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otel
spec:
  config: |
    receivers:
      otlp:
        protocols:
          grpc:
          http:
    processors:
    exporters:
      logging:
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: []
          exporters: [logging]
And finally, an Instrumentation CR to enable auto-instrumentation:
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: my-instrumentation
spec:
  exporter:
    endpoint: http://otel-collector:4317
  propagators:
    - tracecontext
    - baggage
  sampler:
    type: parentbased_traceidratio
    argument: "0.25"
My app's deployment contains:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert -f docker-compose-run-local-with-aws.yml -c
    kompose.version: 1.26.1 (HEAD)
  creationTimestamp: null
  labels:
    io.kompose.service: api
  name: api
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: api
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert -f docker-compose-run-local-with-aws.yml -c
        kompose.version: 1.26.1 (HEAD)
        sidecar.opentelemetry.io/inject: "true"
        instrumentation.opentelemetry.io/inject-python: "true"
      creationTimestamp: null
      labels:
        io.kompose.network/backend: "true"
        io.kompose.service: api
        app: api
    spec:
      containers:
      - env:
        - name: APP_ENV
          value: docker
        - name: RELEASE_VERSION
          value: "1.0.0"
        - name: OTEL_RESOURCE_ATTRIBUTES
          value: "service.name=fastapiApp"
        - name: OTEL_LOG_LEVEL
          value: "debug"
        - name: OTEL_TRACES_EXPORTER
          value: otlp_proto_http
        - name: OTEL_EXPORTER_OTLP_ENDPOINT
          value: http://otel-collector:4317
        image: my_org/api:1.0.0
        name: api
        ports:
        - containerPort: 5000
        resources: {}
        imagePullPolicy: IfNotPresent
      restartPolicy: Always
What am I missing? I have double-checked everything a thousand times and cannot figure out what might be wrong.
You have incorrectly configured the exporter endpoint setting OTEL_EXPORTER_OTLP_ENDPOINT. The endpoint value for an OTLP over HTTP exporter should use port 4318; port 4317 is used by OTLP/gRPC exporters.
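A minimal sketch of the corrected container env entries, assuming the collector Service from the question is reachable as otel-collector and exposes the default OTLP/HTTP port 4318:
- name: OTEL_TRACES_EXPORTER
  value: otlp_proto_http
- name: OTEL_EXPORTER_OTLP_ENDPOINT
  # OTLP over HTTP listens on 4318; 4317 is the gRPC port
  value: http://otel-collector:4318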

Access consul-api from consul-connect-service-mesh

In a consul-connect-service-mesh (using k8s), how do you get to the Consul API itself?
For example, to access the consul-kv.
I'm working through this tutorial, and I'm wondering how
you can bind the Consul (HTTP) API in a service to localhost.
Do you have to configure the Helm chart further?
I would have expected the consul-agent to always be an upstream service.
The only way I found to access the API is via the k8s service consul-server.
Environment:
k8s (1.22.5, docker-desktop)
helm consul (0.42)
consul (1.11.3)
Helm values used:
global:
  name: consul
  datacenter: dc1
server:
  replicas: 1
  securityContext:
    runAsNonRoot: false
    runAsGroup: 0
    runAsUser: 0
    fsGroup: 0
ui:
  enabled: true
  service:
    type: 'NodePort'
connectInject:
  enabled: true
controller:
  enabled: true
You can access the Consul API on the local agent by using the Kubernetes downward API to inject an environment variable in the pod with the IP address of the host. This is documented on Consul.io under Installing Consul on Kubernetes: Accessing the Consul HTTP API.
You will also need to exclude port 8500 (or 8501) from transparent proxy redirection using the consul.hashicorp.com/transparent-proxy-exclude-outbound-ports annotation.
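A minimal sketch of what that can look like on a connect-injected pod template, assuming the agent's HTTP API is on the default port 8500 (the HOST_IP variable name is just an example):
template:
  metadata:
    annotations:
      'consul.hashicorp.com/connect-inject': 'true'
      # keep traffic to the agent's HTTP port out of the transparent proxy
      'consul.hashicorp.com/transparent-proxy-exclude-outbound-ports': '8500'
  spec:
    containers:
      - name: app
        image: curlimages/curl:latest
        env:
          - name: HOST_IP          # hypothetical variable name
            valueFrom:
              fieldRef:
                fieldPath: status.hostIP
        # the agent API is then reachable from the container at http://$(HOST_IP):8500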
My current final solution is a (Connect) service based on a reverse proxy (nginx) that targets Consul.
apiVersion: v1
kind: ConfigMap
metadata:
  name: consul-kv-proxy
data:
  nginx.conf.template: |
    error_log /dev/stdout info;
    server {
      listen 8500;
      location / {
        access_log off;
        proxy_pass http://${MY_NODE_IP}:8500;
        error_log /dev/stdout;
      }
    }
---
apiVersion: v1
kind: Service
metadata:
  # This name will be the service name in Consul.
  name: consul-kv-proxy
spec:
  selector:
    app: consul-kv-proxy
  ports:
    - protocol: TCP
      port: 8500
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: consul-kv-proxy
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: consul-kv-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: consul-kv-proxy
  template:
    metadata:
      name: consul-kv-proxy
      labels:
        app: consul-kv-proxy
      annotations:
        'consul.hashicorp.com/connect-inject': 'true'
    spec:
      containers:
        - name: consul-kv-proxy
          image: nginx:1.14.2
          volumeMounts:
            - name: config
              mountPath: "/usr/local/nginx/conf"
              readOnly: true
          command: ['/bin/bash']
          # we have to transform the nginx config to use the node IP address
          args:
            - -c
            - envsubst < /usr/local/nginx/conf/nginx.conf.template > /etc/nginx/conf.d/consul-kv-proxy.conf && nginx -g 'daemon off;'
          ports:
            - containerPort: 8500
              name: http
          env:
            - name: MY_NODE_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
      volumes:
        - name: config
          configMap:
            name: consul-kv-proxy
      # If ACLs are enabled, the serviceAccountName must match the Consul service name.
      serviceAccountName: consul-kv-proxy
A downstream service (called static-client) can now be declared like this:
apiVersion: v1
kind: Service
metadata:
  name: static-client
spec:
  selector:
    app: static-client
  ports:
    - port: 80
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: static-client
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: static-client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: static-client
  template:
    metadata:
      name: static-client
      labels:
        app: static-client
      annotations:
        'consul.hashicorp.com/connect-inject': 'true'
        'consul.hashicorp.com/connect-service-upstreams': 'consul-kv-proxy:8500'
    spec:
      containers:
        - name: static-client
          image: curlimages/curl:latest
          # Just spin & wait forever, we'll use `kubectl exec` to demo
          command: ['/bin/sh', '-c', '--']
          args: ['while true; do sleep 30; done;']
      serviceAccountName: static-client
Assume we have a key-value in Consul called "test".
From a pod of the static-client we can now access the Consul web API with:
curl http://localhost:8500/v1/kv/test
This solution still lacks fine-tuning (I have not tried HTTPS or ACLs).

Bad Gateway in Rails app with Kubernetes setup

I'm trying to set up a Rails app within a Kubernetes cluster (which is created with k3d on my local machine).
k3d cluster create --api-port 6550 -p "8081:80#loadbalancer" --agents 2
kubectl create deployment nginx --image=nginx
kubectl create service clusterip nginx --tcp=80:80
# apiVersion: networking.k8s.io/v1beta1 # for k3s < v1.19
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80
I can get an Ingress running which correctly exposes a running nginx Deployment ("Welcome to nginx!").
(I took this example from here: https://k3d.io/usage/guides/exposing_services/)
So I know my setup is working (with nginx).
Now, I simply want to point that Ingress to my Rails app, but I always get a "Bad Gateway". (I also tried to point it to my other services (elasticsearch, kibana, pgadminer), but I always get a "Bad Gateway".)
I can see my Rails app running at http://localhost:62333/.
last lines of my Dockerfile:
EXPOSE 3001:3001
CMD rm -f tmp/pids/server.pid && bundle exec rails s -b 0.0.0.0 -p 3001
Why does my API get the "Bad Gateway" while nginx does not?
Does it have something to do with my selectors and labels, which were created by kompose convert?
This is my complete Rails-API Deployment:
kubectl apply -f api-deployment.yml -f api.service.yml -f ingress.yml
api-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert -c --file metashop-backend/docker-compose.yml --file metashop-backend/docker-compose.override.yml
    kompose.version: 1.22.0 (HEAD)
  creationTimestamp: null
  labels:
    io.kompose.service: api
  name: api
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: api
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert -c --file metashop-backend/docker-compose.yml --file metashop-backend/docker-compose.override.yml
        kompose.version: 1.22.0 (HEAD)
      creationTimestamp: null
      labels:
        io.kompose.network/metashop-net: 'true'
        io.kompose.service: api
    spec:
      containers:
        - env:
            - name: APPLICATION_URL
              valueFrom:
                configMapKeyRef:
                  key: APPLICATION_URL
                  name: env
            - name: DEBUG
              value: 'true'
            - name: ELASTICSEARCH_URL
              valueFrom:
                configMapKeyRef:
                  key: ELASTICSEARCH_URL
                  name: env
          image: metashop-backend-api:DockerfileJeanKlaas
          name: api
          ports:
            - containerPort: 3001
          resources: {}
          # volumeMounts:
          #   - mountPath: /usr/src/app
          #     name: api-claim0
      # restartPolicy: Always
      # volumes:
      #   - name: api-claim0
      #     persistentVolumeClaim:
      #       claimName: api-claim0
status: {}
api-service.yml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert -c --file metashop-backend/docker-compose.yml --file metashop-backend/docker-compose.override.yml
    kompose.version: 1.22.0 (HEAD)
  creationTimestamp: null
  labels:
    app: api
    io.kompose.service: api
  name: api
spec:
  type: ClusterIP
  ports:
    - name: '3001'
      protocol: TCP
      port: 3001
      targetPort: 3001
  selector:
    io.kompose.service: api
status:
  loadBalancer: {}
ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api
            port:
              number: 3001
configmap.yml
apiVersion: v1
data:
  APPLICATION_URL: localhost:3001
  ELASTICSEARCH_URL: elasticsearch
  RAILS_ENV: development
  RAILS_MAX_THREADS: '5'
kind: ConfigMap
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: api-env
  name: env
I hope I didn't miss anything.
Thank you in advance.
EDIT: added the Endpoints object, requested in a comment:
kind: Endpoints
apiVersion: v1
metadata:
  name: api
  namespace: default
  labels:
    app: api
    io.kompose.service: api
  selfLink: /api/v1/namespaces/default/endpoints/api
subsets:
  - addresses:
      - ip: 10.42.1.105
        nodeName: k3d-metashop-cluster-server-0
        targetRef:
          kind: Pod
          namespace: default
          apiVersion: v1
    ports:
      - name: '3001'
        port: 3001
        protocol: TCP
The problem was within the Dockerfile:
I had not defined ENV RAILS_LOG_TO_STDOUT true, so I was not able to see any errors in the pod logs.
After I added ENV RAILS_LOG_TO_STDOUT true, I saw errors like database xxxx does not exist.
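As an alternative to baking the flag into the image, the same setting could be supplied from the Deployment's container env. A sketch based on the api Deployment above (assuming no other changes):
spec:
  containers:
    - name: api
      image: metashop-backend-api:DockerfileJeanKlaas
      env:
        - name: RAILS_LOG_TO_STDOUT   # same effect as the Dockerfile ENV line
          value: 'true'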

Kubernetes AWS EKS Failed to load resource: net::ERR_NAME_NOT_RESOLVED

I have an issue with the following AWS EKS deployment, where the front end always gets a Failed to load resource: net::ERR_NAME_NOT_RESOLVED from the backend:
Failed to load resource: net::ERR_NAME_NOT_RESOLVED
The reason appears to be that the frontend app running in the browser
has no access from the internet to the backend API in Kubernetes at http://restapi-auth-nginx/api/
(see attached browser image)
Here are the details of the configuration
- file: restapi-auth-api.yaml
  Description: Backend API using GUNICORN
  Details: Correctly downloads the image and creates the pods
  I can do kubectl exec -it <podId> /bin/bash
  Port 5000
- file: restapi-auth-nginx.yaml
  Description: NGINX proxy for the API
  Details: Correctly downloads the image and creates the pods
  I can do kubectl exec -it <podId> /bin/bash. I can also reach the api pod from the nginx pod, so this part is working fine
- file: frontend.yaml
  Description: NGINX proxy plus Angular app in a multistage deployment
  Details: Correctly downloads the image and creates the pods
  I can do kubectl exec -it <podId> /bin/ash
  I can also reach the api pod from the frontend pod, so this part is working fine
However, from the browser I still get the above error even though all the components appear to be working fine
(see the image of the web site working from the browser).
Let me show you how I can access the API through its NGINX pod from the frontend pod. Here are our pods:
kubectl get pods
NAME READY STATUS RESTARTS AGE
frontend-7674f4d9bf-jbd2q 1/1 Running 0 35m
restapi-auth-857f94b669-j8m7t 1/1 Running 0 39m
restapi-auth-nginx-5d885c7b69-xt6hf 1/1 Running 0 38m
udagram-frontend-5fbc78956c-nvl8d 1/1 Running 0 41m
Now let's take the frontend pod and do a curl to the NGINX proxy that serves the API.
Let's try this curl request directly from the frontend pod to the NGINX backend:
curl --location --request POST 'http://restapi-auth-nginx/api/users/auth/login' \
> --header 'Content-Type: application/json' \
> --data-raw '{
>     "email": "david@me.com",
>     "password": "SuperPass"
> }'
Now let's log into the frontend pod and see if it works
kubectl exec -it frontend-7674f4d9bf-jbd2q /bin/ash
/usr/share/nginx/html # curl --location --request POST 'http://restapi-auth-nginx/api/users/auth/login' \
> --header 'Content-Type: application/json' \
> --data-raw '{
>     "email": "david@me.com",
>     "password": "SuperPass"
> }'
{
  "auth": true,
  "token": "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ1c2VyIjoiZGF2aWRAcHlta455aS5vcmciLCJleHAiOjE1OTYwNTk7896.OIkuwLsyLhrlCMTVlccg8524OUMnkJ2qJ5fkj-7J5W0",
  "user": "david@me.com"
}
It works perfectly, meaning that the frontend is correctly communicating with the restapi-auth-nginx API reverse proxy.
Here in this image you have the output of multiple commands.
Here are the .yaml files
LOAD BALANCER and FRONT END
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  labels:
    app: udagram
spec:
  replicas: 1
  selector:
    matchLabels:
      app: udagram
      tier: frontend
  template:
    metadata:
      labels:
        app: udagram
        tier: frontend
    spec:
      containers:
      - name: udagram-frontend
        image: pythonss/frontend_udacity_app
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        imagePullPolicy: Always
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: regcred
---
apiVersion: v1
kind: Service
metadata:
  name: frontend-lb
  labels:
    app: udagram
    tier: frontend
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: udagram
    tier: frontend
Nginx reverse proxy for API backend
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.21.0 ()
  creationTimestamp: null
  labels:
    io.kompose.service: restapi-auth-nginx
  name: restapi-auth-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: restapi-auth-nginx
  strategy: {}
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert
        kompose.version: 1.21.0 ()
      creationTimestamp: null
      labels:
        io.kompose.service: restapi-auth-nginx
    spec:
      containers:
      - image: pythonss/restapi_auth_microservice_nginx
        imagePullPolicy: Always
        name: restapi-auth-nginx-nginx
        ports:
        - containerPort: 80
        resources: {}
      imagePullSecrets:
      - name: regcred
      restartPolicy: Always
      serviceAccountName: ""
      volumes: null
status: {}
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.21.0 ()
  creationTimestamp: null
  labels:
    io.kompose.service: restapi-auth-nginx
  name: restapi-auth-nginx
spec:
  ports:
  - name: "80"
    port: 80
    targetPort: 80
  selector:
    io.kompose.service: restapi-auth-nginx
status:
  loadBalancer: {}
For brevity, I will not share the API app server .yaml file.
So my questions are:
How could I grant access from the internet to the backend API gateway without exposing the API to the world?
Or should I expose the API through an LB, like so:
apiVersion: v1
kind: Service
metadata:
  name: backend-lb
  labels:
    io.kompose.service: restapi-auth-nginx
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    io.kompose.service: restapi-auth-nginx
This would solve the issue, as it would expose the API.
However, I would then need to add the frontend LB to CORS in the API and the backend LB to the frontend so it can make the calls.
Could someone explain how to do this otherwise, without exposing the APIs?
What are the common patterns for this architecture?
BR
The solution to this is to just expose the NGINX pods (note that the docker-compose creates 2 images) through a LoadBalancer Service.
We need this additional YAML:
apiVersion: v1
kind: Service
metadata:
  name: backend-lb
  labels:
    io.kompose.service: restapi-auth-nginx
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    io.kompose.service: restapi-auth-nginx
Now we will have 3 services and 2 deployments. This latest Service exposes the API to the world.
Below is an image for this deployment.
As you can see, the world has access to this latest Service only.
The NGINX and GUNICORN pods are unreachable from the internet.
Now your frontend application can access the API through the exposed LB (represented in black) inside Kubernetes.

Kubernetes service without a external IP, not able to access the service

I'm trying to deploy my container in a Kubernetes cluster, but I'm not getting an external IP and hence I'm not able to access the server.
This is my .yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.21.0 (992df58d8)
  creationTimestamp: null
  labels:
    io.kompose.service: service-app
  name: service-app
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: service-app
  strategy: {}
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert
        kompose.version: 1.21.0 (992df58d8)
      creationTimestamp: null
      labels:
        io.kompose.service: service-app
    spec:
      containers:
      - image: magneto-challengue:1.0
        imagePullPolicy: ""
        name: magneto-challengue
        ports:
        - containerPort: 8080
        resources: {}
      restartPolicy: Always
      serviceAccountName: ""
      volumes: null
status: {}
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.21.0 (992df58d8)
  creationTimestamp: null
  labels:
    io.kompose.service: service-app
  name: service-app
spec:
  ports:
  - name: http
    protocol: TCP
    port: 8080
    targetPort: 8080
  selector:
    io.kompose.service: service-app
  type: NodePort
When I use the kubectl get svc,deployment,pods command, I get the following response:
As you can see, I'm not getting an external IP. With the kubectl describe service service-app command I get the following response:
I tried with the 10.107.179.10 IP, but it didn't work.
Any idea?
You cannot use the 10.107.179.10 IP to access a pod from outside the Kubernetes cluster, because that IP is the clusterIP; it is only valid inside the cluster and can be used from another pod, for example.
A NodePort-type Service does not get an EXTERNAL-IP. To access a pod from outside the Kubernetes cluster via a NodePort Service, you can use NodeIP:NodePort, where NodeIP is the IP address of any of your Kubernetes cluster nodes.
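For illustration, a sketch of the same Service with an explicit nodePort (30080 is an arbitrary example within the default 30000-32767 range; if nodePort is omitted, Kubernetes picks one, which kubectl get svc shows):
apiVersion: v1
kind: Service
metadata:
  name: service-app
spec:
  type: NodePort
  selector:
    io.kompose.service: service-app
  ports:
  - name: http
    protocol: TCP
    port: 8080
    targetPort: 8080
    nodePort: 30080   # example value; must be in the cluster's NodePort range
The app would then be reachable from outside the cluster at http://<NodeIP>:30080/.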