So I am trying to instrument a FastAPI Python server with OpenTelemetry. I installed the required dependencies through Poetry:
opentelemetry-api = "^1.11.1"
opentelemetry-distro = {extras = ["otlp"], version = "^0.31b0"}
opentelemetry-instrumentation-fastapi = "^0.31b0"
When running the server locally with opentelemetry-instrument --traces_exporter console uvicorn src.main:app --host 0.0.0.0 --port 5000, I can see the traces printed to my console whenever I call any of my endpoints.
The main issue I face is that when running the app in k8s, I see no traces in the collector logs.
I have added cert-manager (needed by the OTel Operator) with kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.8.0/cert-manager.yaml and installed the OTel Operator itself with kubectl apply -f https://github.com/open-telemetry/opentelemetry-operator/releases/latest/download/opentelemetry-operator.yaml.
Then, I added a collector with the following config:
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otel
spec:
  config: |
    receivers:
      otlp:
        protocols:
          grpc:
          http:
    processors:
    exporters:
      logging:
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: []
          exporters: [logging]
And finally, an Instrumentation CR to enable auto-instrumentation:
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: my-instrumentation
spec:
  exporter:
    endpoint: http://otel-collector:4317
  propagators:
    - tracecontext
    - baggage
  sampler:
    type: parentbased_traceidratio
    argument: "0.25"
My app's deployment contains:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert -f docker-compose-run-local-with-aws.yml -c
    kompose.version: 1.26.1 (HEAD)
  creationTimestamp: null
  labels:
    io.kompose.service: api
  name: api
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: api
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert -f docker-compose-run-local-with-aws.yml -c
        kompose.version: 1.26.1 (HEAD)
        sidecar.opentelemetry.io/inject: "true"
        instrumentation.opentelemetry.io/inject-python: "true"
      creationTimestamp: null
      labels:
        io.kompose.network/backend: "true"
        io.kompose.service: api
        app: api
    spec:
      containers:
        - env:
            - name: APP_ENV
              value: docker
            - name: RELEASE_VERSION
              value: "1.0.0"
            - name: OTEL_RESOURCE_ATTRIBUTES
              value: "service.name=fastapiApp"
            - name: OTEL_LOG_LEVEL
              value: "debug"
            - name: OTEL_TRACES_EXPORTER
              value: otlp_proto_http
            - name: OTEL_EXPORTER_OTLP_ENDPOINT
              value: http://otel-collector:4317
          image: my_org/api:1.0.0
          name: api
          ports:
            - containerPort: 5000
          resources: {}
          imagePullPolicy: IfNotPresent
      restartPolicy: Always
What am I missing? I have double-checked everything a thousand times and cannot figure out what might be wrong.
You have configured the exporter endpoint (OTEL_EXPORTER_OTLP_ENDPOINT) incorrectly. The endpoint for an OTLP-over-HTTP exporter should use port 4318; port 4317 is for OTLP/gRPC exporters.
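For reference, a minimal sketch of the corrected env entries (assuming the operator-created collector Service is still reachable as otel-collector):
- name: OTEL_TRACES_EXPORTER
  value: otlp_proto_http
- name: OTEL_EXPORTER_OTLP_ENDPOINT
  value: http://otel-collector:4318  # OTLP over HTTP; keep 4317 only if you switch to the gRPC exporter
If the Instrumentation CR's exporter endpoint is also consumed by an OTLP/HTTP exporter, the same port change applies there.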
Related
I've tried to make a custom metric exporter for my Kubernetes cluster, and I'm still failing to get the right value out of it in order to exploit it within an HPA with this simplistic "a" variable.
I don't really know how to export it according to the different routes: if I return {"a":"1"}, the error message on the HPA tells me it's missing kind, then other fields.
So I've summed up the complete reproducible experiment below, to make it as simple as possible.
Any idea how to complete this task?
Thanks a lot for any clue, advice, enlightenment, comment, or notice.
apiVersion: v1
kind: ConfigMap
metadata:
  name: exporter
  namespace: test
data:
  os.php: |
    <?php // If anyone knows a simple replacement other than swoole .. he's welcome --- Somehow I only think I'll need to know what the json output is expected for theses routes
    $server = new Swoole\HTTP\Server("0.0.0.0",443,SWOOLE_PROCESS,SWOOLE_SOCK_TCP | SWOOLE_SSL);
    $server->set(['worker_num' => 1,'ssl_cert_file' => __DIR__ . '/example.com+5.pem','ssl_key_file' => __DIR__ . '/example.com+5-key.pem']);
    $server->on('Request', 'onMessage');
    $server->start();
    function onMessage($req, $res){
        $value=1;
        $url=$req->server['request_uri'];
        file_put_contents('monolog.log',"\n".$url,8);//Log
        if($url=='/'){
            $res->end('{"status":"healthy"}');return;
        } elseif($url=='/metrics'){
            $res->end('a '.$value);return;
        } elseif($url=='/apis/custom.metrics.k8s.io/v1beta1'){ // <-- This url is called lots of time in the logs
            $res->end('{"kind": "APIResourceList","apiVersion": "v1","groupVersion": "custom.metrics.k8s.io/v1beta1","resources": [{"name": "namespaces/a","singularName": "","namespaced": false,"kind": "MetricValueList","verbs": ["get"]}]}');return;
        } elseif($url=='/apis/custom.metrics.k8s.io/v1beta1/namespaces/test/services/test-metrics-exporter/a'){
            $res->end('{"kind": "MetricValueList","apiVersion": "custom.metrics.k8s.io/v1beta1","metadata": {"selfLink": "/apis/custom.metrics.k8s.io/v1beta1/namespaces/test/services/test-metrics-exporter-svc/a"},"items": [{"describedObject": {"kind": "Service","namespace": "test","name": "test-metrics-exporter-svc","apiVersion": "/v1"},"metricName": "a","timestamp": "2020-06-21T08:35:58Z","value": "'.$value.'","selector": null}]}');return;
        }
        $res->status(404);return;
    }
---
apiVersion: v1
kind: Service
metadata:
  name: test-metrics-exporter
  namespace: test
  annotations:
    prometheus.io/port: '443'
    prometheus.io/scrape: 'true'
spec:
  ports:
    - port: 443
      protocol: TCP
  selector:
    app: test-metrics-exporter
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-metrics-exporter
  namespace: test
spec:
  selector:
    matchLabels:
      app: test-metrics-exporter
  template:
    metadata:
      labels:
        app: test-metrics-exporter
    spec:
      terminationGracePeriodSeconds: 1
      volumes:
        - name: exporter
          configMap:
            name: exporter
            defaultMode: 0744
            items:
              - key: os.php
                path: os.php
      containers:
        - name: test-metrics-exporter
          image: openswoole/swoole:latest
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: exporter
              mountPath: /var/www/os.php
              subPath: os.php
          command:
            - /bin/sh
            - -c
            - |
              touch monolog.log;
              apt update && apt install wget -y && wget https://github.com/FiloSottile/mkcert/releases/download/v1.4.3/mkcert-v1.4.3-linux-amd64
              cp mkcert-v1.4.3-linux-amd64 /usr/local/bin/mkcert && chmod +x /usr/local/bin/mkcert
              mkcert example.com "*.example.com" example.test localhost 127.0.0.1 ::1
              php os.php &
              tail -f monolog.log
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: alpine
  namespace: test
spec:
  selector:
    matchLabels:
      app: alpine
  template:
    metadata:
      labels:
        app: alpine
    spec:
      terminationGracePeriodSeconds: 1
      containers:
        - name: alpine
          image: alpine
          imagePullPolicy: IfNotPresent
          command:
            - /bin/sh
            - -c
            - |
              tail -f /dev/null
---
kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2beta1
metadata:
  name: alpine
  namespace: test
spec:
  scaleTargetRef:
    kind: Deployment
    name: alpine
    apiVersion: apps/v1
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Object
      object:
        target:
          kind: Service
          name: test-metrics-exporter
        metricName: a
        targetValue: '1'
---
# This is the Api hook for custom metrics
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.custom.metrics.k8s.io
  namespace: test
spec:
  insecureSkipTLSVerify: true
  group: custom.metrics.k8s.io
  groupPriorityMinimum: 1000
  versionPriority: 5
  version: v1beta1
  service:
    name: test-metrics-exporter
    namespace: test
    port: 443
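A quick way to check what the aggregated API actually returns for the metric route (using the path from the manifests above) is:
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/test/services/test-metrics-exporter/a"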
In a Consul Connect service mesh (using k8s), how do you get to the Consul API itself?
For example, to access the Consul KV store.
I'm working through this tutorial, and I'm wondering how you can bind the Consul (HTTP) API in a service to localhost.
Do you have to configure the Helm chart further?
I would have expected the Consul agent to always be an upstream service.
The only way I found to access the API is via the k8s service consul-server.
Environment:
k8s (1.22.5, docker-desktop)
helm consul (0.42)
consul (1.11.3)
Helm values used:
global:
  name: consul
  datacenter: dc1
server:
  replicas: 1
  securityContext:
    runAsNonRoot: false
    runAsGroup: 0
    runAsUser: 0
    fsGroup: 0
ui:
  enabled: true
  service:
    type: 'NodePort'
connectInject:
  enabled: true
controller:
  enabled: true
You can access the Consul API on the local agent by using the Kubernetes downward API to inject an environment variable into the pod with the IP address of the host. This is documented on Consul.io under Installing Consul on Kubernetes: Accessing the Consul HTTP API.
You will also need to exclude port 8500 (or 8501) from transparent-proxy redirection using the consul.hashicorp.com/transparent-proxy-exclude-outbound-ports annotation.
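A minimal sketch of that pattern in a pod template, under the assumptions above (the container name is illustrative; CONSUL_HTTP_ADDR is the standard Consul client variable):
metadata:
  annotations:
    consul.hashicorp.com/connect-inject: 'true'
    consul.hashicorp.com/transparent-proxy-exclude-outbound-ports: '8500'
spec:
  containers:
    - name: my-app  # hypothetical container
      image: curlimages/curl:latest
      env:
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP  # downward API: the node (host) IP
        - name: CONSUL_HTTP_ADDR
          value: http://$(HOST_IP):8500  # references HOST_IP defined above
From inside such a pod, curl $CONSUL_HTTP_ADDR/v1/kv/test should then reach the local agent directly.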
My current final solution is a (Connect) service based on a reverse proxy (nginx) that targets Consul.
apiVersion: v1
kind: ConfigMap
metadata:
  name: consul-kv-proxy
data:
  nginx.conf.template: |
    error_log /dev/stdout info;
    server {
      listen 8500;
      location / {
        access_log off;
        proxy_pass http://${MY_NODE_IP}:8500;
        error_log /dev/stdout;
      }
    }
---
apiVersion: v1
kind: Service
metadata:
  # This name will be the service name in Consul.
  name: consul-kv-proxy
spec:
  selector:
    app: consul-kv-proxy
  ports:
    - protocol: TCP
      port: 8500
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: consul-kv-proxy
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: consul-kv-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: consul-kv-proxy
  template:
    metadata:
      name: consul-kv-proxy
      labels:
        app: consul-kv-proxy
      annotations:
        'consul.hashicorp.com/connect-inject': 'true'
    spec:
      containers:
        - name: consul-kv-proxy
          image: nginx:1.14.2
          volumeMounts:
            - name: config
              mountPath: "/usr/local/nginx/conf"
              readOnly: true
          command: ['/bin/bash']
          # we have to transform the nginx config to use the node ip address
          args:
            - -c
            - envsubst < /usr/local/nginx/conf/nginx.conf.template > /etc/nginx/conf.d/consul-kv-proxy.conf && nginx -g 'daemon off;'
          ports:
            - containerPort: 8500
              name: http
          env:
            - name: MY_NODE_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
      volumes:
        - name: config
          configMap:
            name: consul-kv-proxy
      # If ACLs are enabled, the serviceAccountName must match the Consul service name.
      serviceAccountName: consul-kv-proxy
A downstream service (called static-client) can now be declared like this:
apiVersion: v1
kind: Service
metadata:
  name: static-client
spec:
  selector:
    app: static-client
  ports:
    - port: 80
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: static-client
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: static-client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: static-client
  template:
    metadata:
      name: static-client
      labels:
        app: static-client
      annotations:
        'consul.hashicorp.com/connect-inject': 'true'
        'consul.hashicorp.com/connect-service-upstreams': 'consul-kv-proxy:8500'
    spec:
      containers:
        - name: static-client
          image: curlimages/curl:latest
          # Just spin & wait forever, we'll use `kubectl exec` to demo
          command: ['/bin/sh', '-c', '--']
          args: ['while true; do sleep 30; done;']
      serviceAccountName: static-client
Assume we have a key/value in Consul called "test".
From a pod of the static-client we can now access the Consul HTTP API with:
curl http://localhost:8500/v1/kv/test
This solution still lacks fine-tuning (I have not tried HTTPS or ACLs).
I'm new to Kubernetes and I converted my environment from docker-compose.
I have a pod that runs Python code; if I use my Docker container on the same host I can access it, but when it runs in a pod no traffic gets inside.
kubectl config view:
apiVersion: v1
clusters:
  - cluster:
      certificate-authority-data: DATA+OMITTED
      server: https://10.10.10.130:6443
    name: kubernetes
contexts:
  - context:
      cluster: kubernetes
      user: kubernetes-admin
    name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
  - name: kubernetes-admin
    user:
      client-certificate-data: REDACTED
      client-key-data: REDACTED
api-service.yaml:
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: /snap/kompose/19/kompose-linux-amd64 convert --volumes hostPath
    kompose.version: 1.21.0 (992df58d8)
  creationTimestamp: null
  labels:
    io.kompose.service: api
  name: api
spec:
  ports:
    - name: "5001"
      port: 5001
      targetPort: 5001
  selector:
    io.kompose.service: api
status:
  loadBalancer: {}
api-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: /snap/kompose/19/kompose-linux-amd64 convert --volumes hostPath
    kompose.version: 1.21.0 (992df58d8)
  creationTimestamp: null
  labels:
    io.kompose.service: api
  name: api
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: api
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        kompose.cmd: /snap/kompose/19/kompose-linux-amd64 convert --volumes hostPath
        kompose.version: 1.21.0 (992df58d8)
      creationTimestamp: null
      labels:
        io.kompose.service: api
    spec:
      containers:
        - image: 127.0.0.1:5000/api:latest
          imagePullPolicy: "Never"
          name: api
          ports:
            - containerPort: 5001
          resources: {}
          volumeMounts:
            - mountPath: /base
              name: api-hostpath0
      restartPolicy: Always
      serviceAccountName: ""
      volumes:
        - hostPath:
            path: /root/ansible/api/base
          name: api-hostpath0
status: {}
pod log:
* Serving Flask app 'server' (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: on
* Running on all addresses.
WARNING: This is a development server. Do not use it in a production deployment.
* Running on http://10.244.0.17:5001/ (Press CTRL+C to quit)
* Restarting with stat
* Debugger is active!
* Debugger PIN: 553-272-086
I tried reaching what the config view shows, and I get this:
https://10.10.10.130:6443/
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}
The path to reach through the container is:
https://10.10.10.130:5001/
It does not reach the container and responds as if the site does not exist. Again, this works in a Docker container, so what am I missing?
Thanks
--EDIT--
If I curl http://10.244.0.17:5001/ (the address of the api pod) from the host I get in, so why can I not get in from outside?
I also tried adding this to the nginx and api pod deployments:
template:
  spec:
    hostNetwork: true
Still cannot reach it. Please help.
Found the solution!
I needed to add externalIPs to my pods' service.yaml (api and nginx):
spec:
  ports:
    - name: "8443"
      port: 8443
      targetPort: 80
  externalIPs:
    - 10.10.10.130
I'm trying to set up a Rails app within a Kubernetes cluster (created with k3d on my local machine):
k3d cluster create --api-port 6550 -p "8081:80@loadbalancer" --agents 2
kubectl create deployment nginx --image=nginx
kubectl create service clusterip nginx --tcp=80:80
# apiVersion: networking.k8s.io/v1beta1 # for k3s < v1.19
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx
                port:
                  number: 80
With this I can get an Ingress running which correctly exposes a running nginx Deployment ("Welcome to nginx!").
(I took this example from here: https://k3d.io/usage/guides/exposing_services/)
So I know my setup is working (with nginx).
Now I simply want to point that Ingress to my Rails app, but I always get a "Bad Gateway". (I also tried to point to my other services (elasticsearch, kibana, pgadminer), but I always get a "Bad Gateway".)
I can see my Rails app running at http://localhost:62333/.
last lines of my Dockerfile:
EXPOSE 3001:3001
CMD rm -f tmp/pids/server.pid && bundle exec rails s -b 0.0.0.0 -p 3001
Why does my API give a "Bad Gateway" but nginx does not?
Does it have something to do with the selectors and labels created by kompose convert?
This is my complete Rails API deployment:
kubectl apply -f api-deployment.yml -f api.service.yml -f ingress.yml
api-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert -c --file metashop-backend/docker-compose.yml --file metashop-backend/docker-compose.override.yml
    kompose.version: 1.22.0 (HEAD)
  creationTimestamp: null
  labels:
    io.kompose.service: api
  name: api
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: api
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert -c --file metashop-backend/docker-compose.yml --file metashop-backend/docker-compose.override.yml
        kompose.version: 1.22.0 (HEAD)
      creationTimestamp: null
      labels:
        io.kompose.network/metashop-net: 'true'
        io.kompose.service: api
    spec:
      containers:
        - env:
            - name: APPLICATION_URL
              valueFrom:
                configMapKeyRef:
                  key: APPLICATION_URL
                  name: env
            - name: DEBUG
              value: 'true'
            - name: ELASTICSEARCH_URL
              valueFrom:
                configMapKeyRef:
                  key: ELASTICSEARCH_URL
                  name: env
          image: metashop-backend-api:DockerfileJeanKlaas
          name: api
          ports:
            - containerPort: 3001
          resources: {}
          # volumeMounts:
          #   - mountPath: /usr/src/app
          #     name: api-claim0
      # restartPolicy: Always
      # volumes:
      #   - name: api-claim0
      #     persistentVolumeClaim:
      #       claimName: api-claim0
status: {}
api-service.yml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert -c --file metashop-backend/docker-compose.yml --file metashop-backend/docker-compose.override.yml
    kompose.version: 1.22.0 (HEAD)
  creationTimestamp: null
  labels:
    app: api
    io.kompose.service: api
  name: api
spec:
  type: ClusterIP
  ports:
    - name: '3001'
      protocol: TCP
      port: 3001
      targetPort: 3001
  selector:
    io.kompose.service: api
status:
  loadBalancer: {}
ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api
                port:
                  number: 3001
configmap.yml
apiVersion: v1
data:
  APPLICATION_URL: localhost:3001
  ELASTICSEARCH_URL: elasticsearch
  RAILS_ENV: development
  RAILS_MAX_THREADS: '5'
kind: ConfigMap
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: api-env
  name: env
I hope I didn't miss anything.
Thank you in advance.
EDIT: added the Endpoints object, as requested in a comment:
kind: Endpoints
apiVersion: v1
metadata:
  name: api
  namespace: default
  labels:
    app: api
    io.kompose.service: api
  selfLink: /api/v1/namespaces/default/endpoints/api
subsets:
  - addresses:
      - ip: 10.42.1.105
        nodeName: k3d-metashop-cluster-server-0
        targetRef:
          kind: Pod
          namespace: default
          apiVersion: v1
    ports:
      - name: '3001'
        port: 3001
        protocol: TCP
The problem was within the Dockerfile: I had not defined ENV RAILS_LOG_TO_STDOUT true, so I was not able to see any errors in the pod logs.
After I added ENV RAILS_LOG_TO_STDOUT true, I saw errors like database xxxx does not exist.
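The fix above went into the Dockerfile; as an equivalent sketch, if you prefer not to rebuild the image, the variable could instead be added to the kompose-generated Deployment's env block (this entry is an addition, not part of the original manifest):
- name: RAILS_LOG_TO_STDOUT
  value: 'true'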
I'm new to Kubernetes and I saw that there is a way of running kompose up, but I get this error:
root@master-node: kompose --file docker-compose.yml --volumes hostPath up
INFO We are going to create Kubernetes Deployments, Services and PersistentVolumeClaims for your Dockerized application. If you need different kind of resources, use the 'kompose convert' and 'kubectl create -f' commands instead.
FATA Error while deploying application: Error loading config file "/etc/kubernetes/admin.conf": open /etc/kubernetes/admin.conf: permission denied
ls -l /etc/kubernetes/admin.conf
-rw------- 1 root root 5593 Jun 14 08:59 /etc/kubernetes/admin.conf
How can I make this work?
System info:
Ubuntu 18
Client Version: version.Info: Major:"1", Minor:"21"
I also tried making it work like this:
kompose convert --volumes hostPath
INFO Kubernetes file "postgres-1-service.yaml" created
INFO Kubernetes file "postgres-2-service.yaml" created
INFO Kubernetes file "postgres-1-deployment.yaml" created
INFO Kubernetes file "postgres-2-deployment.yaml" created
kubectl create -f postgres-1-service.yaml,postgres-2-service.yaml,postgres-1-deployment.yaml,postgres-1-claim0-persistentvolumeclaim.yaml,postgres-1-claim1-persistentvolumeclaim.yaml,postgres-2-deployment.yaml,postgres-2-claim0-persistentvolumeclaim.yaml
I get this error only for postgres-1-deployment.yaml and postgres-2-deployment.yaml:
service/postgres-1 created
service/postgres-2 created
persistentvolumeclaim/postgres-1-claim0 created
persistentvolumeclaim/postgres-1-claim1 created
persistentvolumeclaim/postgres-2-claim0 created
Error from server (BadRequest): error when creating "postgres-1-deployment.yaml": Deployment in version "v1" cannot be handled as a Deployment: v1.Deployment.Spec: v1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: v1.Container.Env: []v1.EnvVar: v1.EnvVar.Value: ReadString: expects " or n, but found t, error found in #10 byte of ...|,"value":true},{"nam|..., bigger context ...|sword"},{"name":"REPMGR_PGHBA_TRUST_ALL","value":true},{"name":"REPMGR_PRIMARY_HOST","value":"postgr|...
Error from server (BadRequest): error when creating "postgres-2-deployment.yaml": Deployment in version "v1" cannot be handled as a Deployment: v1.Deployment.Spec: v1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: v1.Container.Env: []v1.EnvVar: v1.EnvVar.Value: ReadString: expects " or n, but found t, error found in #10 byte of ...|,"value":true},{"nam|..., bigger context ...|sword"},{"name":"REPMGR_PGHBA_TRUST_ALL","value":true},{"name":"REPMGR_PRIMARY_HOST","value":"postgr|...
Example of postgres-1-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: /snap/kompose/19/kompose-linux-amd64 convert --volumes hostPath
    kompose.version: 1.21.0 (992df58d8)
  creationTimestamp: null
  labels:
    io.kompose.service: postgres-1
  name: postgres-1
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: postgres-1
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        kompose.cmd: /snap/kompose/19/kompose-linux-amd64 convert --volumes hostPath
        kompose.version: 1.21.0 (992df58d8)
      creationTimestamp: null
      labels:
        io.kompose.service: postgres-1
    spec:
      containers:
        - env:
            - name: BITNAMI_DEBUG
              value: "true"
            - name: POSTGRESQL_PASSWORD
              value: password
            - name: POSTGRESQL_POSTGRES_PASSWORD
              value: adminpassword
            - name: POSTGRESQL_USERNAME
              value: user
            - name: REPMGR_NODE_NAME
              value: postgres-1
            - name: REPMGR_NODE_NETWORK_NAME
              value: postgres-1
            - name: REPMGR_PARTNER_NODES
              value: postgres-1,postgres-2:5432
            - name: REPMGR_PASSWORD
              value: repmgrpassword
            - name: REPMGR_PGHBA_TRUST_ALL
              value: yes
            - name: REPMGR_PRIMARY_HOST
              value: postgres-1
            - name: REPMGR_PRIMARY_PORT
              value: "5432"
          image: bitnami/postgresql-repmgr:11
          imagePullPolicy: ""
          name: postgres-1
          ports:
            - containerPort: 5432
          resources: {}
          volumeMounts:
            - mountPath: /bitnami/postgresql
              name: postgres-1-hostpath0
            - mountPath: /docker-entrypoint-initdb.d
              name: postgres-1-hostpath1
      restartPolicy: Always
      serviceAccountName: ""
      volumes:
        - hostPath:
            path: /db4_data
          name: postgres-1-hostpath0
        - hostPath:
            path: /root/ansible/api/posrgres11/cluster
          name: postgres-1-hostpath1
status: {}
Did kompose translate the deployment files the wrong way? I did everything as described in the kompose guide.
Figured out the issue: kompose translated the env value true without quotes ("true").
I used this verifier: https://kubeyaml.com/
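For example, a sketch of the corrected env entry in the generated deployment, with the value quoted so it stays a string rather than being parsed as a boolean (the same quoting applies in the docker-compose file):
- name: REPMGR_PGHBA_TRUST_ALL
  value: "yes"  # quoted: parsed as a string instead of the boolean true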