Four different errors when using kubectl apply command - kubernetes

My docker-compose file is as below:
version: '3'
services:
  nginx:
    build: .
    container_name: nginx
    ports:
    - '80:80'
And my Dockerfile:
FROM nginx:alpine
I used kompose convert and it created two files called nginx-deployment.yml and nginx-service.yml with the below contents.
nginx-deployment.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.22.0 (955b78124)
  creationTimestamp: null
  labels:
    io.kompose.service: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: nginx
  strategy: {}
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert
        kompose.version: 1.22.0 (955b78124)
      creationTimestamp: null
      labels:
        io.kompose.service: nginx
    spec:
      containers:
      - image: nginx:alpine
        name: nginx
        ports:
        - containerPort: 80
        resources: {}
      restartPolicy: Always
status: {}
And nginx-service.yml:
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.22.0 (955b78124)
  creationTimestamp: null
  labels:
    io.kompose.service: nginx
  name: nginx
spec:
  ports:
  - name: "80"
    port: 80
    targetPort: 80
  selector:
    io.kompose.service: nginx
status:
  loadBalancer: {}
My kustomization.yml:
resources:
- nginx-deployment.yml
- nginx-service.yml
All files are in the same path /home.
I ran the following four commands, and each of them gave a different error:
kubectl apply -f kustomization.yml:
error: error validating "kustomization.yml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
kubectl apply -k .:
error: accumulating resources: accumulation err='accumulating resources from 'nginx-deployment.yml': evalsymlink failure on '/home/nginx-deployment.yml' : lstat /home/nginx-deployment.yml: no such file or directory': evalsymlink failure on '/home/nginx-deployment.yml' : lstat /home/nginx-deployment.yml: no such file or directory
kubectl apply -f kustomization.yml --validate=false:
error: unable to decode "kustomization.yml": Object 'Kind' is missing in '{"resources":["nginx-deployment.yml","nginx-service.yml"]}'
kubectl apply -k . --validate=false:
error: accumulating resources: accumulation err='accumulating resources from 'nginx-deployment.yml': evalsymlink failure on '/home/nginx-deployment.yml' : lstat /home/nginx-deployment.yml: no such file or directory': evalsymlink failure on '/home/nginx-deployment.yml' : lstat /home/nginx-deployment.yml: no such file or directory
The Kubernetes cluster is a single node.
Why am I getting these errors, and how can I run my container in this environment?

Your kustomization.yml file has two errors: the files generated by kompose have .yaml extensions but you are referring to them as .yml, and the kind and apiVersion lines are missing.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- nginx-deployment.yaml
- nginx-service.yaml
kubectl apply -k .
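If you want to sanity-check what kustomize will render before applying (assuming a kubectl version with built-in kustomize, 1.14+), you can run:
kubectl kustomize .
and then apply with kubectl apply -k . as above once the output looks right.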

Related

Open Telemetry python-FastAPI auto instrumentation in k8s not collecting data

So I am trying to instrument a FastAPI Python server with OpenTelemetry. I installed the needed dependencies through Poetry:
opentelemetry-api = "^1.11.1"
opentelemetry-distro = {extras = ["otlp"], version = "^0.31b0"}
opentelemetry-instrumentation-fastapi = "^0.31b0"
When running the server locally, with opentelemetry-instrument --traces_exporter console uvicorn src.main:app --host 0.0.0.0 --port 5000 I can see the traces printed out to my console whenever I call any of my endpoints.
The main issue I face is that when running the app in k8s, I see no logs in the collector.
I have added cert-manager (needed by the OTel Operator) with kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.8.0/cert-manager.yaml and the OTel Operator itself with kubectl apply -f https://github.com/open-telemetry/opentelemetry-operator/releases/latest/download/opentelemetry-operator.yaml.
Then, I added a collector with the following config:
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otel
spec:
  config: |
    receivers:
      otlp:
        protocols:
          grpc:
          http:
    processors:
    exporters:
      logging:
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: []
          exporters: [logging]
And finally, an Instrumentation CR to enable auto-instrumentation:
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: my-instrumentation
spec:
  exporter:
    endpoint: http://otel-collector:4317
  propagators:
  - tracecontext
  - baggage
  sampler:
    type: parentbased_traceidratio
    argument: "0.25"
My app's deployment contains:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert -f docker-compose-run-local-with-aws.yml -c
    kompose.version: 1.26.1 (HEAD)
  creationTimestamp: null
  labels:
    io.kompose.service: api
  name: api
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: api
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert -f docker-compose-run-local-with-aws.yml -c
        kompose.version: 1.26.1 (HEAD)
        sidecar.opentelemetry.io/inject: "true"
        instrumentation.opentelemetry.io/inject-python: "true"
      creationTimestamp: null
      labels:
        io.kompose.network/backend: "true"
        io.kompose.service: api
        app: api
    spec:
      containers:
      - env:
        - name: APP_ENV
          value: docker
        - name: RELEASE_VERSION
          value: "1.0.0"
        - name: OTEL_RESOURCE_ATTRIBUTES
          value: "service.name=fastapiApp"
        - name: OTEL_LOG_LEVEL
          value: "debug"
        - name: OTEL_TRACES_EXPORTER
          value: otlp_proto_http
        - name: OTEL_EXPORTER_OTLP_ENDPOINT
          value: http://otel-collector:4317
        image: my_org/api:1.0.0
        name: api
        ports:
        - containerPort: 5000
        resources: {}
        imagePullPolicy: IfNotPresent
      restartPolicy: Always
What am I missing? I have double-checked everything a thousand times and cannot figure out what might be wrong.
You have incorrectly configured the exporter endpoint setting OTEL_EXPORTER_OTLP_ENDPOINT. The endpoint for an OTLP over HTTP exporter should use port 4318; port 4317 is for OTLP/gRPC exporters.
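With the OTLP/HTTP exporter already selected via OTEL_TRACES_EXPORTER=otlp_proto_http, the relevant env entries in the deployment would become something like this (a sketch; it assumes the collector Service created by the operator is reachable as otel-collector, as in the original manifest):
        - name: OTEL_TRACES_EXPORTER
          value: otlp_proto_http
        - name: OTEL_EXPORTER_OTLP_ENDPOINT
          value: http://otel-collector:4318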

Bad Gateway in Rails app with Kubernetes setup

I am trying to set up a Rails app within a Kubernetes cluster (which is created with k3d on my local machine).
k3d cluster create --api-port 6550 -p "8081:80#loadbalancer" --agents 2
kubectl create deployment nginx --image=nginx
kubectl create service clusterip nginx --tcp=80:80
# apiVersion: networking.k8s.io/v1beta1 # for k3s < v1.19
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80
I can get an Ingress running which correctly exposes a running Nginx Deployment ("Welcome to nginx!")
(I took this example from here: https://k3d.io/usage/guides/exposing_services/)
So I know my setup is working (with nginx).
Now I simply want to point that ingress to my Rails app, but I always get a "Bad Gateway". (I also tried to point it to my other services (elasticsearch, kibana, pgadminer), but I always get a "Bad Gateway".)
I can see my Rails app running at http://localhost:62333/.
last lines of my Dockerfile:
EXPOSE 3001:3001
CMD rm -f tmp/pids/server.pid && bundle exec rails s -b 0.0.0.0 -p 3001
Why does my API give the "Bad Gateway" but nginx does not?
Does it have something to do with my selectors and labels which are created by kompose convert?
This is my complete Rails-API Deployment:
kubectl apply -f api-deployment.yml -f api.service.yml -f ingress.yml
api-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert -c --file metashop-backend/docker-compose.yml --file metashop-backend/docker-compose.override.yml
    kompose.version: 1.22.0 (HEAD)
  creationTimestamp: null
  labels:
    io.kompose.service: api
  name: api
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: api
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert -c --file metashop-backend/docker-compose.yml --file metashop-backend/docker-compose.override.yml
        kompose.version: 1.22.0 (HEAD)
      creationTimestamp: null
      labels:
        io.kompose.network/metashop-net: 'true'
        io.kompose.service: api
    spec:
      containers:
      - env:
        - name: APPLICATION_URL
          valueFrom:
            configMapKeyRef:
              key: APPLICATION_URL
              name: env
        - name: DEBUG
          value: 'true'
        - name: ELASTICSEARCH_URL
          valueFrom:
            configMapKeyRef:
              key: ELASTICSEARCH_URL
              name: env
        image: metashop-backend-api:DockerfileJeanKlaas
        name: api
        ports:
        - containerPort: 3001
        resources: {}
        # volumeMounts:
        # - mountPath: /usr/src/app
        #   name: api-claim0
      # restartPolicy: Always
      # volumes:
      # - name: api-claim0
      #   persistentVolumeClaim:
      #     claimName: api-claim0
status: {}
api-service.yml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert -c --file metashop-backend/docker-compose.yml --file metashop-backend/docker-compose.override.yml
    kompose.version: 1.22.0 (HEAD)
  creationTimestamp: null
  labels:
    app: api
    io.kompose.service: api
  name: api
spec:
  type: ClusterIP
  ports:
  - name: '3001'
    protocol: TCP
    port: 3001
    targetPort: 3001
  selector:
    io.kompose.service: api
status:
  loadBalancer: {}
ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api
            port:
              number: 3001
configmap.yml
apiVersion: v1
data:
  APPLICATION_URL: localhost:3001
  ELASTICSEARCH_URL: elasticsearch
  RAILS_ENV: development
  RAILS_MAX_THREADS: '5'
kind: ConfigMap
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: api-env
  name: env
I hope I didn't miss anything.
Thank you in advance.
EDIT: added the Endpoints object, as requested in a comment:
kind: Endpoints
apiVersion: v1
metadata:
  name: api
  namespace: default
  labels:
    app: api
    io.kompose.service: api
  selfLink: /api/v1/namespaces/default/endpoints/api
subsets:
- addresses:
  - ip: 10.42.1.105
    nodeName: k3d-metashop-cluster-server-0
    targetRef:
      kind: Pod
      namespace: default
      apiVersion: v1
  ports:
  - name: '3001'
    port: 3001
    protocol: TCP
The problem was within the Dockerfile:
I had not defined ENV RAILS_LOG_TO_STDOUT true, so I was not able to see any errors in the pod logs.
After I added ENV RAILS_LOG_TO_STDOUT true, I saw errors like "database xxxx does not exist".
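If rebuilding the image is inconvenient, the same effect can be achieved by setting the variable in the deployment's env section instead of the Dockerfile (a sketch, assuming the standard Rails config that checks ENV["RAILS_LOG_TO_STDOUT"]):
      containers:
      - env:
        - name: RAILS_LOG_TO_STDOUT
          value: 'true'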

Converting docker-compose to kubernetes using Kompose

I'm new to Kubernetes and I saw that there is a way of running kompose up, but I get this error:
root#master-node:kompose --file docker-compose.yml --volumes hostPath up
INFO We are going to create Kubernetes Deployments, Services and PersistentVolumeClaims for your Dockerized application. If you need different kind of resources, use the 'kompose convert' and 'kubectl create -f' commands instead.
FATA Error while deploying application: Error loading config file "/etc/kubernetes/admin.conf": open /etc/kubernetes/admin.conf: permission denied
ls -l /etc/kubernetes/admin.conf
-rw------- 1 root root 5593 Jun 14 08:59 /etc/kubernetes/admin.conf
How can I make this work?
System info:
Ubuntu 18
Client Version: version.Info{Major:"1", Minor:"21"}
I also tried making it work like this:
kompose convert --volumes hostPath
INFO Kubernetes file "postgres-1-service.yaml" created
INFO Kubernetes file "postgres-2-service.yaml" created
INFO Kubernetes file "postgres-1-deployment.yaml" created
INFO Kubernetes file "postgres-2-deployment.yaml" created
kubectl create -f postgres-1-service.yaml,postgres-2-service.yaml,postgres-1-deployment.yaml,postgres-1-claim0-persistentvolumeclaim.yaml,postgres-1-claim1-persistentvolumeclaim.yaml,postgres-2-deployment.yaml,postgres-2-claim0-persistentvolumeclaim.yaml
I get this error only on postgres-1-deployment.yaml and postgres-2-deployment.yaml.
service/postgres-1 created
service/postgres-2 created
persistentvolumeclaim/postgres-1-claim0 created
persistentvolumeclaim/postgres-1-claim1 created
persistentvolumeclaim/postgres-2-claim0 created
Error from server (BadRequest): error when creating "postgres-1-deployment.yaml": Deployment in version "v1" cannot be handled as a Deployment: v1.Deployment.Spec: v1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: v1.Container.Env: []v1.EnvVar: v1.EnvVar.Value: ReadString: expects " or n, but found t, error found in #10 byte of ...|,"value":true},{"nam|..., bigger context ...|sword"},{"name":"REPMGR_PGHBA_TRUST_ALL","value":true},{"name":"REPMGR_PRIMARY_HOST","value":"postgr|...
Error from server (BadRequest): error when creating "postgres-2-deployment.yaml": Deployment in version "v1" cannot be handled as a Deployment: v1.Deployment.Spec: v1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: v1.Container.Env: []v1.EnvVar: v1.EnvVar.Value: ReadString: expects " or n, but found t, error found in #10 byte of ...|,"value":true},{"nam|..., bigger context ...|sword"},{"name":"REPMGR_PGHBA_TRUST_ALL","value":true},{"name":"REPMGR_PRIMARY_HOST","value":"postgr|...
Example of postgres-1-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: /snap/kompose/19/kompose-linux-amd64 convert --volumes hostPath
    kompose.version: 1.21.0 (992df58d8)
  creationTimestamp: null
  labels:
    io.kompose.service: postgres-1
  name: postgres-1
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: postgres-1
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        kompose.cmd: /snap/kompose/19/kompose-linux-amd64 convert --volumes hostPath
        kompose.version: 1.21.0 (992df58d8)
      creationTimestamp: null
      labels:
        io.kompose.service: postgres-1
    spec:
      containers:
      - env:
        - name: BITNAMI_DEBUG
          value: "true"
        - name: POSTGRESQL_PASSWORD
          value: password
        - name: POSTGRESQL_POSTGRES_PASSWORD
          value: adminpassword
        - name: POSTGRESQL_USERNAME
          value: user
        - name: REPMGR_NODE_NAME
          value: postgres-1
        - name: REPMGR_NODE_NETWORK_NAME
          value: postgres-1
        - name: REPMGR_PARTNER_NODES
          value: postgres-1,postgres-2:5432
        - name: REPMGR_PASSWORD
          value: repmgrpassword
        - name: REPMGR_PGHBA_TRUST_ALL
          value: yes
        - name: REPMGR_PRIMARY_HOST
          value: postgres-1
        - name: REPMGR_PRIMARY_PORT
          value: "5432"
        image: bitnami/postgresql-repmgr:11
        imagePullPolicy: ""
        name: postgres-1
        ports:
        - containerPort: 5432
        resources: {}
        volumeMounts:
        - mountPath: /bitnami/postgresql
          name: postgres-1-hostpath0
        - mountPath: /docker-entrypoint-initdb.d
          name: postgres-1-hostpath1
      restartPolicy: Always
      serviceAccountName: ""
      volumes:
      - hostPath:
          path: /db4_data
        name: postgres-1-hostpath0
      - hostPath:
          path: /root/ansible/api/posrgres11/cluster
        name: postgres-1-hostpath1
status: {}
Did kompose translate the deployment files the wrong way? I did everything as described in the kompose guide.
Figured out the issue: kompose translated the env value without quotes, so YAML parsed it as a boolean rather than a string.
I used this verifier: https://kubeyaml.com/
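For reference, a hand-edited sketch of the offending entry in postgres-1-deployment.yaml with the value quoted so YAML keeps it a string:
        - name: REPMGR_PGHBA_TRUST_ALL
          value: "yes"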

After running Kompose I get: pod has unbound immediate PersistentVolumeClaims

What's the problem?
I can't get my pods that use a volume running. In the Kubernetes Dashboard I got the following error:
running "VolumeBinding" filter plugin for pod "influxdb-6979bff6f9-hpf89": pod has unbound immediate PersistentVolumeClaims
What did I do?
After running kompose convert on my docker-compose.yml file, I tried to start the pods with microk8s kubectl apply -f . (I am using MicroK8s). I had to replace the apiVersion of the NetworkPolicy YAML files with networking.k8s.io/v1 (see here), but apart from this change, I didn't change anything.
YAML Files
influxdb-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: ./kompose convert
    kompose.version: 1.21.0 (992df58d8)
  creationTimestamp: null
  labels:
    io.kompose.service: influxdb
  name: influxdb
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: influxdb
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        kompose.cmd: ./kompose convert
        kompose.version: 1.21.0 (992df58d8)
      creationTimestamp: null
      labels:
        io.kompose.network/cloud-net: "true"
        io.kompose.network/default: "true"
        io.kompose.service: influxdb
    spec:
      containers:
      - env:
        - name: INFLUXDB_HTTP_LOG_ENABLED
          value: "false"
        image: influxdb:1.8
        imagePullPolicy: ""
        name: influxdb
        ports:
        - containerPort: 8086
        resources: {}
        volumeMounts:
        - mountPath: /var/lib/influxdb
          name: influx
      restartPolicy: Always
      serviceAccountName: ""
      volumes:
      - name: influx
        persistentVolumeClaim:
          claimName: influx
status: {}
influxdb-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: ./kompose convert
    kompose.version: 1.21.0 (992df58d8)
  creationTimestamp: null
  labels:
    io.kompose.service: influxdb
  name: influxdb
spec:
  ports:
  - name: "8087"
    port: 8087
    targetPort: 8086
  selector:
    io.kompose.service: influxdb
status:
  loadBalancer: {}
influx-persistentvolumeclaim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: influx
  name: influx
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
status: {}
The PersistentVolumeClaim will stay unbound if the cluster has neither a StorageClass that can dynamically provision a PersistentVolume nor a manually created PersistentVolume that satisfies the claim.
Here is a guide on how to configure a pod to use a PersistentVolume.
To solve the current scenario, you can manually create a PV:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 100Mi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
Please note that hostPath is used here only as an example; it is not recommended for production use. Consider using external block or file storage from the supported types here.
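For the influx claim above to bind to a manually created PV like this, the claim generally needs to request the same storage class; a sketch of the adjusted influx-persistentvolumeclaim.yaml, assuming the manual class from the example PV:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    io.kompose.service: influx
  name: influx
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi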

Error: `selector` does not match template `labels` after adding selector

I am trying to deploy this Kubernetes Deployment; however, whenever I run kubectl apply -f es-deployment.yaml it throws the error: Error: `selector` does not match template `labels`
I have already tried to add the selector matchLabels under the spec section, but it seems that did not work. Below is my YAML file.
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert -f docker-compose.yaml --volumes hostPath
    kompose.version: 1.19.0 (f63a961c)
  creationTimestamp: null
  labels:
    io.kompose.service: elasticsearchconnector
  name: elasticsearchconnector
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert -f docker-compose.yaml --volumes hostPath
        kompose.version: 1.19.0 (f63a961c)
      creationTimestamp: null
      labels:
        io.kompose.service: elasticsearchconnector
    spec:
      selector:
        matchLabels:
          app: elasticsearchconnector
      containers:
      - env:
        - [env stuff]
        image: confluentinc/cp-kafka-connect:latest
        name: elasticsearchconnector
        ports:
        - containerPort: 28082
        resources: {}
        volumeMounts:
        - mountPath: /etc/kafka-connect
          name: elasticsearchconnector-hostpath0
        - mountPath: /etc/kafka-elasticsearch
          name: elasticsearchconnector-hostpath1
        - mountPath: /etc/kafka
          name: elasticsearchconnector-hostpath2
      restartPolicy: Always
      volumes:
      - hostPath:
          path: /home/centos/Sprint0Demo/DockerScriptsAWS/ServersCompose/home/centos/Sprint0Demo/Servers/elasticSearchConnector/etc/kafka-connect
        name: elasticsearchconnector-hostpath0
      - hostPath:
          path: /home/centos/Sprint0Demo/DockerScriptsAWS/ServersCompose/home/centos/Sprint0Demo/Servers/elasticSearchConnector/etc/kafka-elasticsearch
        name: elasticsearchconnector-hostpath1
      - hostPath:
          path: /home/centos/Sprint0Demo/DockerScriptsAWS/ServersCompose/home/centos/Sprint0Demo/Servers/elasticSearchConnector/etc/kafak
        name: elasticsearchconnector-hostpath2
status: {}
Your labels and selectors are misplaced.
First, you need to specify which pods the deployment will control:
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearchconnector
Then you need to label the pod properly:
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert -f docker-compose.yaml --volumes hostPath
        kompose.version: 1.19.0 (f63a961c)
      creationTimestamp: null
      labels:
        io.kompose.service: elasticsearchconnector
        app: elasticsearchconnector
    spec:
      containers:
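Putting the two pieces together, the top of the corrected Deployment would look roughly like this (a sketch: the selector block that was nested under the pod template's spec is removed, and matchLabels matches a label that actually exists on the pod template):
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearchconnector
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        io.kompose.service: elasticsearchconnector
        app: elasticsearchconnector
    spec:
      containers: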