Kubernetes NiFi Cluster setup in AKS

Posting this because it might help someone: I couldn't find a Kubernetes NiFi setup that doesn't use a Helm chart, so I have prepared the configuration YAML below for a Kubernetes NiFi cluster.
Here's the link for the ZooKeeper cluster setup in AKS.

Please comment if you see any issues in the configuration or if you have suggestions. Increase the disk storage sizes according to your usage.
apiVersion: v1
kind: Service
metadata:
  name: nifi-hs
  labels:
    app: nifi
spec:
  ports:
  - port: 1025
    name: nodeport
  - port: 8080
    name: client
  clusterIP: None
  selector:
    app: nifi
---
apiVersion: v1
kind: Service
metadata:
  name: nifi-cs
  labels:
    app: nifi
  annotations:
    service.beta.kubernetes.io/azure-dns-label-name: nifi
spec:
  ports:
  - port: 80
    targetPort: 8080
    name: client
  selector:
    app: nifi
  type: LoadBalancer
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: nifi-pdb
spec:
  selector:
    matchLabels:
      app: nifi
  maxUnavailable: 1
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nifi-sc
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
    kubernetes.io/cluster-service: 'true'
provisioner: kubernetes.io/azure-disk
parameters:
  cachingmode: ReadOnly
  kind: Managed
  storageaccounttype: StandardSSD_LRS
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nifi
spec:
  selector:
    matchLabels:
      app: nifi
  serviceName: nifi-hs
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: OrderedReady
  template:
    metadata:
      labels:
        app: nifi
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - k: "app"
                operator: In
                values:
                - nifi
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: nifi
        image: "apache/nifi:1.13.0"
        env:
        - name: NIFI_CLUSTER_IS_NODE
          value: "true"
        - name: HOSTNAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: NIFI_CLUSTER_ADDRESS
          value: $(HOSTNAME).nifi-hs
        - name: NIFI_CLUSTER_NODE_PROTOCOL_PORT
          value: "1025"
        - name: NIFI_WEB_HTTP_HOST
          value: $(HOSTNAME).nifi-hs.ns1.svc.cluster.local
        #- name: NIFI_WEB_HTTP_PORT
        #  value: "80"
        - name: NIFI_CLUSTER_NODE_PROTOCOL_MAX_THREADS
          value: "100"
        - name: NIFI_ZK_CONNECT_STRING
          value: "zk-cs:2181"
        - name: NIFI_ELECTION_MAX_CANDIDATES
          value: "3"
        ports:
        - containerPort: 8080
          name: client
        - containerPort: 1025
          name: nodeport
        volumeMounts:
        - name: nifi-database
          mountPath: "/opt/nifi/nifi-current/database_repository"
        - name: nifi-flowfile
          mountPath: "/opt/nifi/nifi-current/flowfile_repository"
        - name: nifi-content
          mountPath: "/opt/nifi/nifi-current/content_repository"
        - name: nifi-provenance
          mountPath: "/opt/nifi/nifi-current/provenance_repository"
        - name: nifi-state
          mountPath: "/opt/nifi/nifi-current/state"
      securityContext:
        runAsUser: 1000
        runAsGroup: 1000
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: nifi-database
    spec:
      storageClassName: "nifi-sc"
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 5Gi
  - metadata:
      name: nifi-flowfile
    spec:
      storageClassName: "nifi-sc"
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 5Gi
  - metadata:
      name: nifi-content
    spec:
      storageClassName: "nifi-sc"
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 5Gi
  - metadata:
      name: nifi-provenance
    spec:
      storageClassName: "nifi-sc"
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 5Gi
  - metadata:
      name: nifi-state
    spec:
      storageClassName: "nifi-sc"
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 5Gi

The labelSelector in the affinity block above is missing a few characters (k: should be key:). Below is the updated, working YAML block for the StatefulSet.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nifi
spec:
  selector:
    matchLabels:
      app: nifi
  serviceName: nifi-hs
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: OrderedReady
  template:
    metadata:
      labels:
        app: nifi
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: "app"
                operator: In
                values:
                - nifi
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: nifi
        image: "apache/nifi:1.13.0"
        env:
        - name: NIFI_CLUSTER_IS_NODE
          value: "true"
        - name: HOSTNAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: NIFI_CLUSTER_ADDRESS
          value: $(HOSTNAME).nifi-hs
        - name: NIFI_CLUSTER_NODE_PROTOCOL_PORT
          value: "1025"
        - name: NIFI_WEB_HTTP_HOST
          value: $(HOSTNAME).nifi-hs.ns1.svc.cluster.local
        #- name: NIFI_WEB_HTTP_PORT
        #  value: "80"
        - name: NIFI_CLUSTER_NODE_PROTOCOL_MAX_THREADS
          value: "100"
        - name: NIFI_ZK_CONNECT_STRING
          value: "zk-cs:2181"
        - name: NIFI_ELECTION_MAX_CANDIDATES
          value: "3"
        ports:
        - containerPort: 8080
          name: client
        - containerPort: 1025
          name: nodeport
        volumeMounts:
        - name: nifi-database
          mountPath: "/opt/nifi/nifi-current/database_repository"
        - name: nifi-flowfile
          mountPath: "/opt/nifi/nifi-current/flowfile_repository"
        - name: nifi-content
          mountPath: "/opt/nifi/nifi-current/content_repository"
        - name: nifi-provenance
          mountPath: "/opt/nifi/nifi-current/provenance_repository"
        - name: nifi-state
          mountPath: "/opt/nifi/nifi-current/state"
      securityContext:
        runAsUser: 1000
        runAsGroup: 1000
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: nifi-database
    spec:
      storageClassName: "nifi-sc"
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 5Gi
  - metadata:
      name: nifi-flowfile
    spec:
      storageClassName: "nifi-sc"
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 5Gi
  - metadata:
      name: nifi-content
    spec:
      storageClassName: "nifi-sc"
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 5Gi
  - metadata:
      name: nifi-provenance
    spec:
      storageClassName: "nifi-sc"
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 5Gi
  - metadata:
      name: nifi-state
    spec:
      storageClassName: "nifi-sc"
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 5Gi
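For completeness, a minimal way to deploy and sanity-check the cluster. This is only a sketch: it assumes the manifests above are saved as nifi.yaml and deployed to the ns1 namespace that the NIFI_WEB_HTTP_HOST value points at; adjust both to your environment.
# Create the namespace referenced by NIFI_WEB_HTTP_HOST (assumed: ns1)
kubectl create namespace ns1
# Apply the manifests (assumed to be saved as nifi.yaml)
kubectl apply -n ns1 -f nifi.yaml
# Watch the pods come up one at a time (podManagementPolicy: OrderedReady)
kubectl get pods -n ns1 -l app=nifi -w
# Get the public IP assigned to the LoadBalancer service and open the NiFi UI on port 80
kubectl get svc nifi-cs -n ns1
# Because the StorageClass sets allowVolumeExpansion: true, a repository disk can later
# be grown by patching its PVC, e.g. the content repository of the first node:
kubectl patch pvc nifi-content-nifi-0 -n ns1 -p '{"spec":{"resources":{"requests":{"storage":"10Gi"}}}}'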

Related

MongoDB Prometheus exporter not scraping all metrics

I have a Mongo deployment with a metrics exporter sidecar. When I load the associated dashboard in Grafana, the metrics are not showing; it appears the exporter is not scraping all metrics.
What is working
Only the Mongo UP metric, i.e. mongodb_up{env=~""}, and part of the Server Metrics.
What is not working
All of the following dashboard metrics show no data: opscounters, Replication Set Metrics, Cursor Metrics.
My configuration:
Deployment.yaml (using Percona MongoDB Exporter)
apiVersion: apps/v1
kind: Deployment
metadata:
name: mongodb-prom
namespace: "labs"
labels:
app.kubernetes.io/name: mongodb
helm.sh/chart: mongodb-12.1.12
app.kubernetes.io/instance: mongodb-prom
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: mongodb
spec:
replicas: 1
strategy:
type: RollingUpdate
selector:
matchLabels:
app.kubernetes.io/name: mongodb
app.kubernetes.io/instance: mongodb-prom
app.kubernetes.io/component: mongodb
template:
metadata:
labels:
app.kubernetes.io/name: mongodb
helm.sh/chart: mongodb-12.1.12
app.kubernetes.io/instance: mongodb-prom
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: mongodb
spec:
serviceAccountName: mongodb-prom
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchLabels:
app.kubernetes.io/name: mongodb
app.kubernetes.io/instance: mongodb-prom
app.kubernetes.io/component: mongodb
namespaces:
- "labs"
topologyKey: kubernetes.io/hostname
weight: 100
securityContext:
fsGroup: 1001
sysctls: []
containers:
- name: mongodb
image: docker.io/bitnami/mongodb:5.0.9-debian-10-r15
imagePullPolicy: "IfNotPresent"
securityContext:
runAsNonRoot: true
runAsUser: 1001
env:
- name: BITNAMI_DEBUG
value: "false"
- name: MONGODB_ROOT_USER
value: "root"
- name: MONGODB_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mongodb-prom
key: mongodb-root-password
- name: ALLOW_EMPTY_PASSWORD
value: "no"
- name: MONGODB_SYSTEM_LOG_VERBOSITY
value: "0"
- name: MONGODB_DISABLE_SYSTEM_LOG
value: "no"
- name: MONGODB_DISABLE_JAVASCRIPT
value: "no"
- name: MONGODB_ENABLE_JOURNAL
value: "yes"
- name: MONGODB_PORT_NUMBER
value: "27017"
- name: MONGODB_ENABLE_IPV6
value: "no"
- name: MONGODB_ENABLE_DIRECTORY_PER_DB
value: "no"
ports:
- name: mongodb
containerPort: 27017
volumeMounts:
- name: datadir
mountPath: /bitnami/mongodb
- name: datadir
mountPath: /tmp
- name: metrics
image: percona/mongodb_exporter:0.35
imagePullPolicy: "IfNotPresent"
args:
- "--mongodb.direct-connect=false"
- "--mongodb.uri=mongodb://username:password#mongodb-prom/admin"
ports:
- name: metrics
containerPort: 9216
resources:
requests:
memory: 128Mi
cpu: 250m
volumes:
- name: datadir
persistentVolumeClaim:
claimName: mongodb
metrics-svc.yaml
apiVersion: v1
kind: Service
metadata:
name: mongodb-metrics
namespace: "labs"
labels:
app.kubernetes.io/name: mongodb
helm.sh/chart: mongodb-12.1.12
app.kubernetes.io/instance: mongodb-prom
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: metrics
annotations:
prometheus.io/path: /metrics
prometheus.io/port: '9216'
prometheus.io/scrape: "true"
spec:
type: ClusterIP
ports:
- port: 9216
targetPort: metrics
protocol: TCP
name: http-metrics
selector:
app.kubernetes.io/name: mongodb
app.kubernetes.io/instance: mongodb-prom
app.kubernetes.io/component: mongodb
service.yaml
apiVersion: v1
kind: Service
metadata:
name: mongodb-prom
namespace: "labs"
labels:
app.kubernetes.io/name: mongodb
helm.sh/chart: mongodb-12.1.12
app.kubernetes.io/instance: mongodb-prom
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: mongodb
spec:
type: ClusterIP
sessionAffinity: None
ports:
- name: "mongodb"
port: 27017
targetPort: mongodb
selector:
app.kubernetes.io/name: mongodb
app.kubernetes.io/instance: mongodb-prom
app.kubernetes.io/component: mongodb
What I have tried
I have tried using the Bitnami MongoDB exporter image for the sidecar instead. It yields exactly the same results:
- name: metrics
image: docker.io/bitnami/mongodb-exporter:0.32.0-debian-11-r5
imagePullPolicy: "IfNotPresent"
securityContext:
runAsNonRoot: true
runAsUser: 1001
command:
- /bin/bash
- -ec
args:
- |
/bin/mongodb_exporter --web.listen-address ":9216" --mongodb.uri "mongodb://$MONGODB_ROOT_USER:$(echo $MONGODB_ROOT_PASSWORD | sed -r "s/@/%40/g;s/:/%3A/g")@localhost:27017/admin?"
env:
- name: MONGODB_ROOT_USER
value: "root"
- name: MONGODB_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mongodb-prom
key: mongodb-root-password
ports:
- name: metrics
containerPort: 9216
livenessProbe:
failureThreshold: 3
initialDelaySeconds: 15
periodSeconds: 5
successThreshold: 1
timeoutSeconds: 5
httpGet:
path: /
port: metrics
readinessProbe:
failureThreshold: 3
initialDelaySeconds: 5
periodSeconds: 5
successThreshold: 1
timeoutSeconds: 1
httpGet:
path: /
port: metrics
resources:
limits: {}
requests: {}
I am following the reference implementation given here and I have added the clusterMonitor role to the db user as below:
db.grantRolesToUser("root",[{ role: "clusterMonitor", db: "admin" }, { role: "read", db: "local" }])
and when I execute db.getUsers(); I get
[
{
"_id" : "admin.root",
"userId" : UUID("d8e181fc-6429-447e-bbcb-cec252f0792f"),
"user" : "root",
"db" : "admin",
"roles" : [
{
"role" : "clusterMonitor",
"db" : "admin"
},
{
"role" : "root",
"db" : "admin"
},
{
"role" : "read",
"db" : "local"
}
],
"mechanisms" : [
"SCRAM-SHA-1",
"SCRAM-SHA-256"
]
}
]
Even after granting these roles the dashboard is still not loading the missing metrics.
I have updated the Prometheus and Grafana versions in the dashboard's JSON to match my installed versions (Could this affect anything?)
The default dashboard I am using is here.
What am I missing?
For anyone struggling with this: I had to add the --collect-all and --compatible-mode flags for the exporter container to pull all the metrics, as in the configuration below:
apiVersion: apps/v1
kind: Deployment
metadata:
name: mongodb-prom
namespace: "labs"
labels:
app.kubernetes.io/name: mongodb
helm.sh/chart: mongodb-12.1.12
app.kubernetes.io/instance: mongodb-prom
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: mongodb
spec:
replicas: 1
strategy:
rollingUpdate:
maxUnavailable: 1
type: RollingUpdate
selector:
matchLabels:
app.kubernetes.io/name: mongodb
app.kubernetes.io/instance: mongodb-prom
app.kubernetes.io/component: mongodb
template:
metadata:
labels:
app.kubernetes.io/name: mongodb
helm.sh/chart: mongodb-12.1.12
app.kubernetes.io/instance: mongodb-prom
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: mongodb
spec:
serviceAccountName: mongodb-prom
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchLabels:
app.kubernetes.io/name: mongodb
app.kubernetes.io/instance: mongodb-prom
app.kubernetes.io/component: mongodb
namespaces:
- "labs"
topologyKey: kubernetes.io/hostname
weight: 100
securityContext:
fsGroup: 1001
sysctls: []
containers:
- name: mongodb
image: docker.io/bitnami/mongodb:5.0.9-debian-10-r15
imagePullPolicy: "IfNotPresent"
securityContext:
runAsNonRoot: true
runAsUser: 1001
env:
- name: BITNAMI_DEBUG
value: "false"
- name: MONGODB_ROOT_USER
value: "root"
- name: MONGODB_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mongodb-prom
key: mongodb-root-password
- name: ALLOW_EMPTY_PASSWORD
value: "no"
- name: MONGODB_SYSTEM_LOG_VERBOSITY
value: "0"
- name: MONGODB_DISABLE_SYSTEM_LOG
value: "no"
- name: MONGODB_DISABLE_JAVASCRIPT
value: "no"
- name: MONGODB_ENABLE_JOURNAL
value: "yes"
- name: MONGODB_PORT_NUMBER
value: "27017"
- name: MONGODB_ENABLE_IPV6
value: "no"
- name: MONGODB_ENABLE_DIRECTORY_PER_DB
value: "no"
ports:
- name: mongodb
containerPort: 27017
volumeMounts:
- name: datadir
mountPath: /bitnami/mongodb
- name: datadir
mountPath: /tmp
- name: metrics
image: docker.io/bitnami/mongodb-exporter:0.32.0-debian-11-r5
imagePullPolicy: "IfNotPresent"
securityContext:
runAsNonRoot: true
runAsUser: 1001
command:
- /bin/bash
- -ec
args:
- |
/bin/mongodb_exporter --web.listen-address ":9216" --mongodb.uri "mongodb://$MONGODB_ROOT_USER:$(echo $MONGODB_ROOT_PASSWORD | sed -r "s/@/%40/g;s/:/%3A/g")@localhost:27017/admin?" --collect-all --compatible-mode
env:
- name: MONGODB_ROOT_USER
value: "root"
- name: MONGODB_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mongodb-prom
key: mongodb-root-password
ports:
- name: metrics
containerPort: 9216
livenessProbe:
failureThreshold: 3
initialDelaySeconds: 15
periodSeconds: 5
successThreshold: 1
timeoutSeconds: 5
httpGet:
path: /
port: metrics
readinessProbe:
failureThreshold: 3
initialDelaySeconds: 5
periodSeconds: 5
successThreshold: 1
timeoutSeconds: 1
httpGet:
path: /
port: metrics
resources:
limits: {}
requests: {}
volumes:
- name: datadir
persistentVolumeClaim:
claimName: mongodb
The solution is from the following thread:
GitHub issue
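As a quick way to confirm the fix (a suggested check, not from the original answer), port-forward the metrics service and count how many mongodb_* series the exporter now exposes; with --collect-all and --compatible-mode the number should be far higher than before:
# Forward the metrics service locally
kubectl -n labs port-forward svc/mongodb-metrics 9216:9216
# In another shell, count the exported MongoDB series
curl -s http://localhost:9216/metrics | grep -c '^mongodb_'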

How to change Kubernetes / PostgreSQL connection credentials and host

I want to adapt the example here:
https://www.enterprisedb.com/blog/how-deploy-pgadmin-kubernetes
I want to change the following:
the host should be localhost
the user / password used to access the PostgreSQL server via pgAdmin 4 should be postgres / admin
But I don't know what should be changed in the files.
pgadmin-secret.yaml file :
(the password is SuperSecret)
apiVersion: v1
kind: Secret
type: Opaque
metadata:
name: pgadmin
data:
pgadmin-password: U3VwZXJTZWNyZXQ=
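The value under pgadmin-password is just the base64-encoded pgAdmin login password, so if you want a different one you can re-encode it yourself (example commands, not part of the original files):
# The existing value decodes to "SuperSecret"
echo 'U3VwZXJTZWNyZXQ=' | base64 -d; echo
# Encode a replacement password, e.g. "admin", and put the output into the Secret
echo -n 'admin' | base64   # -> YWRtaW4=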
pgadmin-configmap.yaml file :
apiVersion: v1
kind: ConfigMap
metadata:
name: pgadmin-config
data:
servers.json: |
{
"Servers": {
"1": {
"Name": "PostgreSQL DB",
"Group": "Servers",
"Port": 5432,
"Username": "postgres",
"Host": "postgres.domain.com",
"SSLMode": "prefer",
"MaintenanceDB": "postgres"
}
}
}
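For the database connection itself, the fields to touch are Host and Username in servers.json. A sketch of the modified ConfigMap with the values asked for might look like the following; note that servers.json cannot store the PostgreSQL password (pgAdmin prompts for it on first connect), and "localhost" inside the pgAdmin pod refers to the pod itself, so it only reaches PostgreSQL if it runs in the same pod:
apiVersion: v1
kind: ConfigMap
metadata:
  name: pgadmin-config
data:
  servers.json: |
    {
      "Servers": {
        "1": {
          "Name": "PostgreSQL DB",
          "Group": "Servers",
          "Port": 5432,
          "Username": "postgres",
          "Host": "localhost",
          "SSLMode": "prefer",
          "MaintenanceDB": "postgres"
        }
      }
    }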
pgadmin-service.yaml file
apiVersion: v1
kind: Service
metadata:
name: pgadmin-service
spec:
ports:
- protocol: TCP
port: 80
targetPort: http
selector:
app: pgadmin
type: NodePort
The pgadmin-statefulset.yaml :
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: pgadmin
spec:
serviceName: pgadmin-service
podManagementPolicy: Parallel
replicas: 1
updateStrategy:
type: RollingUpdate
selector:
matchLabels:
app: pgadmin
template:
metadata:
labels:
app: pgadmin
spec:
terminationGracePeriodSeconds: 10
containers:
- name: pgadmin
image: dpage/pgadmin4:5.4
imagePullPolicy: Always
env:
- name: PGADMIN_DEFAULT_EMAIL
value: user@domain.com
- name: PGADMIN_DEFAULT_PASSWORD
valueFrom:
secretKeyRef:
name: pgadmin
key: pgadmin-password
ports:
- name: http
containerPort: 80
protocol: TCP
volumeMounts:
- name: pgadmin-config
mountPath: /pgadmin4/servers.json
subPath: servers.json
readOnly: true
- name: pgadmin-data
mountPath: /var/lib/pgadmin
volumes:
- name: pgadmin-config
configMap:
name: pgadmin-config
volumeClaimTemplates:
- metadata:
name: pgadmin-data
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 3Gi
The commands work OK with the original YAML files and I get the pgAdmin 4 login form in a web page.
But even with the unchanged files, I get a bad username/password error.
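One check worth doing (a suggestion, not from the original post) is to compare what you type into the login form against what pgAdmin was actually started with:
# Decode the password the pgAdmin container received
kubectl get secret pgadmin -o jsonpath='{.data.pgadmin-password}' | base64 -d; echo
# The login email must match PGADMIN_DEFAULT_EMAIL from the StatefulSet
kubectl get statefulset pgadmin -o yaml | grep -A1 PGADMIN_DEFAULT_EMAIL
Also note that pgAdmin only applies PGADMIN_DEFAULT_EMAIL / PGADMIN_DEFAULT_PASSWORD when it first initializes /var/lib/pgadmin; since that directory sits on a persistent volume here, changing the Secret after the first start does not change an already-created login.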

telepresence: error: workload "xxx-service.default" not found

I have this chart of a personal project deployed in minikube:
---
# Source: frontend/templates/service.yaml
kind: Service
apiVersion: v1
metadata:
name: xxx-app-service
spec:
selector:
app: xxx-app
ports:
- protocol: TCP
port: 3000
targetPort: 3000
---
# Source: frontend/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: '3'
creationTimestamp: '2022-06-19T21:57:01Z'
generation: 3
labels:
app: xxx-app
name: xxx-app
namespace: default
resourceVersion: '43299'
uid: 7c43767a-abbd-4806-a9d2-6712847a0aad
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: xxx-app
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: xxx-app
spec:
containers:
- image: "registry.gitlab.com/esavara/xxx/wm:staging"
name: frontend
imagePullPolicy: Always
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
ports:
- containerPort: 3000
livenessProbe:
httpGet:
path: /
port: 3000
initialDelaySeconds: 10
periodSeconds: 3
env:
- name: PORT
value: "3000"
resources: {}
dnsPolicy: ClusterFirst
imagePullSecrets:
- name: regcred
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
---
# Source: frontend/templates/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
creationTimestamp: '2022-06-19T22:28:58Z'
generation: 1
name: xxx-app
namespace: default
resourceVersion: '44613'
uid: b58dcd17-ee1f-42e5-9dc7-d915a21f97b5
spec:
ingressClassName: nginx
rules:
- http:
paths:
- backend:
service:
name: "xxx-app-service"
port:
number: 3000
path: /
pathType: Prefix
status:
loadBalancer:
ingress:
- ip: 192.168.39.80
---
# Source: frontend/templates/gitlab-registry-sealed.json
{
"kind": "SealedSecret",
"apiVersion": "bitnami.com/v1alpha1",
"metadata": {
"name": "regcred",
"namespace": "default",
"creationTimestamp": null
},
"spec": {
"template": {
"metadata": {
"name": "regcred",
"namespace": "default",
"creationTimestamp": null
},
"type": "kubernetes.io/dockerconfigjson",
"data": null
},
"encryptedData": {
".dockerconfigjson": "AgBpHoQw1gBq0IFFYWnxlBLLYl1JC23TzbRWGgLryVzEDP8p+NAGjngLFZmtklmCEHLK63D9pp3zL7YQQYgYBZUjpEjj8YCTOhvnjQIg7g+5b/CPXWNoI5TuNexNJFKFv1mN5DzDk9np/E69ogRkDJUvUsbxvVXs6/TKGnRbnp2NuI7dTJ18QgGXdLXs7S416KE0Yid9lggw06JrDN/OSxaNyUlqFGcRJ6OfGDAHivZRV1Kw9uoX06go3o+AjVd6eKlDmmvaY/BOc52bfm7pY2ny1fsXGouQ7R7V1LK0LcsCsKdAkg6/2DU3v32mWZDKJgKkK5efhTQr1KGOBoLuuHKX6nF5oMA1e1Ii3wWe77lvWuvlpaNecCBTc7im+sGt0dyJb4aN5WMLoiPGplGqnuvNqEFa/nhkbwXm5Suke2FQGKyzDsMIBi9p8PZ7KkOJTR1s42/8QWVggTGH1wICT1RzcGzURbanc0F3huQ/2RcTmC4UvYyhCUqr3qKrbAIYTNBayfyhsBaN5wVRnV5LiPxjLbbOmObSr/8ileJgt1HMQC3V9pVLZobaJvlBjr/mmNlrF118azJOFY+a/bqzmtBWwlcvVuel/EaOb8uA8mgwfnbnShMinL1OWTHoj+D0ayFmUdsQgMYwRmStnC7x/6OXThmBgfyrLguzz4V2W8O8qbdDz+O5QoyboLUuR9WQb/ckpRio2qa5tidnKXzZzbWzFEevv9McxvCC1+ovvw4IullU8ra3FutnTentmPHKU2OPr1EhKgFKIX20u8xZaeIJYLCiZlVQohylpjkHnBZo1qw+y1CTiDwytunpmkoGGAXLx++YQSjEJEo889PmVVlSwZ8p/Rdusiz1WbgKqFt27yZaOfYzR2bT++HuB5x6zqfK6bbdV/UZndXs"
}
}
}
I'm trying to use Telepresence to redirect traffic from the deployed application to a Docker container that has my project mounted inside with hot reloading, so that I can continue developing it inside Kubernetes. But running telepresence intercept xxx-app-service --port 3000:3000 --env-file /tmp/xxx-app-service.env fails with the following error:
telepresence: error: workload "xxx-app-service.default" not found
Why is this happening and how do I fix it?
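Not an answer from the original thread, but one way to narrow this down: as far as I understand, the name passed to telepresence intercept refers to a workload (the Deployment, here named xxx-app), not to the Service (xxx-app-service). Listing what Telepresence can see shows the accepted names:
# Show the workloads Telepresence can intercept in the current namespace
telepresence list
# A hypothetical intercept against the Deployment name from the manifests above
telepresence intercept xxx-app --port 3000:3000 --env-file /tmp/xxx-app.env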

MongoDB with PVC on an NFS volume restarting with "file-rename: rename: Permission denied"

I'm having problems with my MongoDB cluster in a custom Kubernetes cluster.
I have an NFS volume, and in my MongoDB StatefulSet I create a volumeClaimTemplate that uses that NFS storage to save the data on that disk.
It works and I can see the files saved there, but my MongoDB pods restart often, around 140 times in 25 days.
So I'm looking at the logs of one of the pods with kubectl logs mongo-0 mongodb --previous to see the logs before the last restart.
And every time I see this error here:
2020-02-14T08:52:39.133+0000 E STORAGE [WTCheckpointThread] WiredTiger error (13) [1581670359:133838][1:0x7f6e8137b700], file:WiredTiger.wt, WT_SESSION.checkpoint: /data/db/WiredTiger.turtle.set to /data/db/WiredTiger.turtle: file-rename: rename: Permission denied
2020-02-14T08:52:39.137+0000 E STORAGE [WTCheckpointThread] WiredTiger error (0) [1581670359:137663][1:0x7f6e8137b700], file:WiredTiger.wt, WT_SESSION.checkpoint: WiredTiger.turtle: encountered an illegal file format or internal value: (__wt_turtle_update, 391)
2020-02-14T08:52:39.137+0000 E STORAGE [WTCheckpointThread] WiredTiger error (-31804) [1581670359:137717][1:0x7f6e8137b700], file:WiredTiger.wt, WT_SESSION.checkpoint: the process must exit and restart: WT_PANIC: WiredTiger library panic
2020-02-14T08:52:39.137+0000 F - [WTCheckpointThread] Fatal Assertion 28558 at src/mongo/db/storage/wiredtiger/wiredtiger_util.cpp 366
2020-02-14T08:52:39.137+0000 F - [WTCheckpointThread]
***aborting after fassert() failure
2020-02-14T08:52:39.157+0000 F - [WTCheckpointThread] Got signal: 6 (Aborted).
0x562363d30251 0x562363d2f469 0x562363d2f94d 0x7f6e86f41890 0x7f6e86bbc067 0x7f6e86bbd448 0x56236248b12e 0x562362558c1e 0x5623625c5f91 0x562362427d13 0x56236242803c 0x56236258c05c 0x562362589bac 0x5623626694bf 0x562362669bff 0x5623625d6306 0x5623625d8049 0x5623625d99b3 0x5623625d9c83 0x5623625bf50a 0x56236253e62f 0x562363c22961 0x562363e3f470 0x7f6e86f3a064 0x7f6e86c6f62d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"562361AF7000","o":"2239251","s":"_ZN5mongo15printStackTraceERSo"},{"b":"562361AF7000","o":"2238469"},{"b":"562361AF7000","o":"223894D"},{"b":"7F6E86F32000","o":"F890"},{"b":"7F6E86B87000","o":"35067","s":"gsignal"},{"b":"7F6E86B87000","o":"36448","s":"abort"},{"b":"562361AF7000","o":"99412E","s":"_ZN5mongo32fassertFailedNoTraceWithLocationEiPKcj"},{"b":"562361AF7000","o":"A61C1E"},{"b":"562361AF7000","o":"ACEF91"},{"b":"562361AF7000","o":"930D13","s":"__wt_err"},{"b":"562361AF7000","o":"93103C","s":"__wt_panic"},{"b":"562361AF7000","o":"A9505C","s":"__wt_turtle_update"},{"b":"562361AF7000","o":"A92BAC","s":"__wt_metadata_update"},{"b":"562361AF7000","o":"B724BF"},{"b":"562361AF7000","o":"B72BFF","s":"__wt_meta_ckptlist_set"},{"b":"562361AF7000","o":"ADF306"},{"b":"562361AF7000","o":"AE1049","s":"__wt_checkpoint"},{"b":"562361AF7000","o":"AE29B3"},{"b":"562361AF7000","o":"AE2C83","s":"__wt_txn_checkpoint"},{"b":"562361AF7000","o":"AC850A"},{"b":"562361AF7000","o":"A4762F","s":"_ZN5mongo18WiredTigerKVEngine26WiredTigerCheckpointThread3runEv"},{"b":"562361AF7000","o":"212B961","s":"_ZN5mongo13BackgroundJob7jobBodyEv"},{"b":"562361AF7000","o":"2348470"},{"b":"7F6E86F32000","o":"8064"},{"b":"7F6E86B87000","o":"E862D","s":"clone"}],"processInfo":{ "mongodbVersion" : "3.6.7", "gitVersion" : "2628472127e9f1826e02c665c1d93880a204075e", "compiledModules" : [], "uname" : { "sysname" : "Linux", "release" : "4.15.0-74-generic", "version" : "#84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019", "machine" : "x86_64" }, "somap" : [ { "b" : "562361AF7000", "elfType" : 3, "buildId" : "EAE95F4BE97BD38029D21836039E6398277FF9BC" }, { "b" : "7FFE42B82000", "path" : "linux-vdso.so.1", "elfType" : 3, "buildId" : "053A6959CC589646998C0B1448414A8AF5B51F67" }, { "b" : "7F6E880D0000", "path" : "/lib/x86_64-linux-gnu/libresolv.so.2", "elfType" : 3, "buildId" : "C0E9A6CE03F960E690EA8F72575FFA29570E4A0B" }, { "b" : "7F6E87CD3000", "path" : "/usr/lib/x86_64-linux-gnu/libcrypto.so.1.0.0", "elfType" : 3, "buildId" : "CFDB319C26A6DB0ED14D33D44024ED461D8A5C23" }, { "b" : "7F6E87A72000", "path" : "/usr/lib/x86_64-linux-gnu/libssl.so.1.0.0", "elfType" : 3, "buildId" : "90275AC4DD8167F60BC7C599E0DBD63741D8F191" }, { "b" : "7F6E8786E000", "path" : "/lib/x86_64-linux-gnu/libdl.so.2", "elfType" : 3, "buildId" : "D70B531D672A34D71DB42EB32B68E63F2DCC5B6A" }, { "b" : "7F6E87666000", "path" : "/lib/x86_64-linux-gnu/librt.so.1", "elfType" : 3, "buildId" : "A63C95FB33CCA970E141D2E13774B997C1CF0565" }, { "b" : "7F6E87365000", "path" : "/lib/x86_64-linux-gnu/libm.so.6", "elfType" : 3, "buildId" : "152C93BA3E8590F7ED0BCDDF868600D55EC4DD6F" }, { "b" : "7F6E8714F000", "path" : "/lib/x86_64-linux-gnu/libgcc_s.so.1", "elfType" : 3, "buildId" : "BAC839560495859598E8515CBAED73C7799AE1FF" }, { "b" : "7F6E86F32000", "path" : "/lib/x86_64-linux-gnu/libpthread.so.0", "elfType" : 3, "buildId" : "9DA9387A60FFC196AEDB9526275552AFEF499C44" }, { "b" : "7F6E86B87000", "path" : "/lib/x86_64-linux-gnu/libc.so.6", "elfType" : 3, "buildId" : "48C48BC6ABB794461B8A558DD76B29876A0551F0" }, { "b" : "7F6E882E7000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "1D98D41FBB1EABA7EC05D0FD7624B85D6F51C03C" } ] }}
mongod(_ZN5mongo15printStackTraceERSo+0x41) [0x562363d30251]
mongod(+0x2238469) [0x562363d2f469]
mongod(+0x223894D) [0x562363d2f94d]
libpthread.so.0(+0xF890) [0x7f6e86f41890]
libc.so.6(gsignal+0x37) [0x7f6e86bbc067]
libc.so.6(abort+0x148) [0x7f6e86bbd448]
mongod(_ZN5mongo32fassertFailedNoTraceWithLocationEiPKcj+0x0) [0x56236248b12e]
mongod(+0xA61C1E) [0x562362558c1e]
mongod(+0xACEF91) [0x5623625c5f91]
mongod(__wt_err+0x9D) [0x562362427d13]
mongod(__wt_panic+0x33) [0x56236242803c]
mongod(__wt_turtle_update+0x1BC) [0x56236258c05c]
mongod(__wt_metadata_update+0x1BC) [0x562362589bac]
mongod(+0xB724BF) [0x5623626694bf]
mongod(__wt_meta_ckptlist_set+0x29F) [0x562362669bff]
mongod(+0xADF306) [0x5623625d6306]
mongod(__wt_checkpoint+0xA9) [0x5623625d8049]
mongod(+0xAE29B3) [0x5623625d99b3]
mongod(__wt_txn_checkpoint+0x1C3) [0x5623625d9c83]
mongod(+0xAC850A) [0x5623625bf50a]
mongod(_ZN5mongo18WiredTigerKVEngine26WiredTigerCheckpointThread3runEv+0x23F) [0x56236253e62f]
mongod(_ZN5mongo13BackgroundJob7jobBodyEv+0x131) [0x562363c22961]
mongod(+0x2348470) [0x562363e3f470]
libpthread.so.0(+0x8064) [0x7f6e86f3a064]
libc.so.6(clone+0x6D) [0x7f6e86c6f62d]
----- END BACKTRACE -----
If I connect to a pod, for example with kubectl exec -it mongo-0 -- /bin/sh, and list the files inside /data/db/, I see this:
# ls -la
total 7832895
drwxrwxrwx 2 root root 81920 Feb 14 09:52 .
drwxr-xr-x 4 root root 4096 Sep 5 2018 ..
-rw------- 1 root root 46 Nov 21 08:21 WiredTiger
-rw------- 1 root root 21 Nov 21 08:21 WiredTiger.lock
-rw------- 1 root root 1105 Feb 14 09:52 WiredTiger.turtle
-rw------- 1 root root 737280 Feb 14 09:48 WiredTiger.wt
-rw------- 1 root root 4096 Feb 14 08:52 WiredTigerLAS.wt
-rw------- 1 root root 57344 Feb 14 08:52 _mdb_catalog.wt
-rw------- 1 root root 36864 Feb 14 09:51 collection-0--2261811786273811131.wt
-rw------- 1 root root 36864 Feb 14 09:52 collection-0--3926443192316967741.wt
-rw------- 1 root root 200704 Feb 14 08:53 collection-0-5585625509523565893.wt
-rw------- 1 root root 36864 Feb 14 08:52 collection-0-6343563410859966760.wt
-rw------- 1 root root 16384 Feb 14 08:52 collection-100-6343563410859966760.wt
All files are owned by root, but the folder itself is world-writable.
What am I missing here? Why does the process that creates all those files have no permission to rename one?
This is the definition of the statefulSet by the way:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: mongo
spec:
serviceName: "mongo"
replicas: 3
template:
metadata:
labels:
app: mongo
spec:
containers:
- name: mongodb
image: mongo:3-jessie
command:
- mongod
- --replSet
- rs0
- --bind_ip_all
ports:
- containerPort: 27017
name: web
volumeMounts:
- name: database
mountPath: /data/db
- name: init-mongo
image: mongo:3-jessie
command:
- bash
- /config/init.sh
volumeMounts:
- name: config
mountPath: /config
volumes:
- name: config
configMap:
name: "mongo-init"
volumeClaimTemplates:
- metadata:
name: database
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: nfs.raid5
resources:
requests:
storage: 30Gi
and this one for the configMap:
apiVersion: v1
kind: ConfigMap
metadata:
name: mongo-init
data:
init.sh: |
#!/bin/bash
until /usr/bin/mongo --eval 'printjson(db.serverStatus())'; do
echo "connecting to local mongo..."
sleep 2
done
echo "connected to local."
HOST=mongo-0.mongo:27017
until /usr/bin/mongo --host=${HOST} --eval 'printjson(db.serverStatus())'; do
echo "connecting to remote mongo..."
sleep 2
done
echo "connected to remote."
if [[ "${HOSTNAME}" != 'mongo-0' ]]; then
until /usr/bin/mongo --host=${HOST} --eval="printjson(rs.status())" | grep -v "no replset config has been received"; do
echo "waiting for replication set initialization"
sleep 2
done
echo "adding self to mongo-0"
/usr/bin/mongo --host=${HOST} --eval="printjson(rs.add('${HOSTNAME}.mongo'))"
fi
if [[ "${HOSTNAME}" == 'mongo-0' ]]; then
echo "initializing replica set"
/usr/bin/mongo --eval="printjson(rs.initiate({'_id': 'rs0', 'members': [{'_id': 0, 'host': 'mongo-0.mongo:27017'}]}))"
fi
echo "initialized"
while true; do
sleep 3600
done
This is my NFS storage configuration, shown using the command kubectl get pvc -A -o yaml
apiVersion: v1
items:
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
annotations:
pv.kubernetes.io/bind-completed: "yes"
pv.kubernetes.io/bound-by-controller: "yes"
volume.beta.kubernetes.io/storage-provisioner: nfs.raid5
creationTimestamp: "2020-01-13T14:50:11Z"
finalizers:
- kubernetes.io/pvc-protection
labels:
app: cassandra
name: cassandra-data-cassandra-0
namespace: default
resourceVersion: "59562447"
selfLink: /api/v1/namespaces/default/persistentvolumeclaims/cassandra-data-cassandra-0
uid: ffc81929-3613-11ea-96eb-00155d01e331
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
storageClassName: nfs.raid5
volumeMode: Filesystem
volumeName: pvc-ffc81929-3613-11ea-96eb-00155d01e331
status:
accessModes:
- ReadWriteOnce
capacity:
storage: 20Gi
phase: Bound
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
annotations:
pv.kubernetes.io/bind-completed: "yes"
pv.kubernetes.io/bound-by-controller: "yes"
volume.beta.kubernetes.io/storage-provisioner: nfs.raid5
creationTimestamp: "2020-01-13T14:50:16Z"
finalizers:
- kubernetes.io/pvc-protection
labels:
app: cassandra
name: cassandra-data-cassandra-1
namespace: default
resourceVersion: "59562488"
selfLink: /api/v1/namespaces/default/persistentvolumeclaims/cassandra-data-cassandra-1
uid: 02c23282-3614-11ea-96eb-00155d01e331
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
storageClassName: nfs.raid5
volumeMode: Filesystem
volumeName: pvc-02c23282-3614-11ea-96eb-00155d01e331
status:
accessModes:
- ReadWriteOnce
capacity:
storage: 20Gi
phase: Bound
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
annotations:
pv.kubernetes.io/bind-completed: "yes"
pv.kubernetes.io/bound-by-controller: "yes"
volume.beta.kubernetes.io/storage-provisioner: nfs.raid5
creationTimestamp: "2020-01-13T14:50:21Z"
finalizers:
- kubernetes.io/pvc-protection
labels:
app: cassandra
name: cassandra-data-cassandra-2
namespace: default
resourceVersion: "59562528"
selfLink: /api/v1/namespaces/default/persistentvolumeclaims/cassandra-data-cassandra-2
uid: 05d251fc-3614-11ea-96eb-00155d01e331
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
storageClassName: nfs.raid5
volumeMode: Filesystem
volumeName: pvc-05d251fc-3614-11ea-96eb-00155d01e331
status:
accessModes:
- ReadWriteOnce
capacity:
storage: 20Gi
phase: Bound
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
annotations:
pv.kubernetes.io/bind-completed: "yes"
pv.kubernetes.io/bound-by-controller: "yes"
volume.beta.kubernetes.io/storage-provisioner: nfs.raid5
creationTimestamp: "2020-02-05T10:17:49Z"
finalizers:
- kubernetes.io/pvc-protection
labels:
service: elasticsearch
name: data-elasticsearch-0
namespace: default
resourceVersion: "64027300"
selfLink: /api/v1/namespaces/default/persistentvolumeclaims/data-elasticsearch-0
uid: c2f91498-4800-11ea-96eb-00155d01e331
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 50Gi
storageClassName: nfs.raid5
volumeMode: Filesystem
volumeName: pvc-c2f91498-4800-11ea-96eb-00155d01e331
status:
accessModes:
- ReadWriteOnce
capacity:
storage: 50Gi
phase: Bound
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
annotations:
pv.kubernetes.io/bind-completed: "yes"
pv.kubernetes.io/bound-by-controller: "yes"
volume.beta.kubernetes.io/storage-provisioner: nfs.raid5
creationTimestamp: "2020-02-05T10:18:46Z"
finalizers:
- kubernetes.io/pvc-protection
labels:
service: elasticsearch
name: data-elasticsearch-1
namespace: default
resourceVersion: "64027480"
selfLink: /api/v1/namespaces/default/persistentvolumeclaims/data-elasticsearch-1
uid: e4a434c9-4800-11ea-96eb-00155d01e331
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 50Gi
storageClassName: nfs.raid5
volumeMode: Filesystem
volumeName: pvc-e4a434c9-4800-11ea-96eb-00155d01e331
status:
accessModes:
- ReadWriteOnce
capacity:
storage: 50Gi
phase: Bound
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
annotations:
pv.kubernetes.io/bind-completed: "yes"
pv.kubernetes.io/bound-by-controller: "yes"
volume.beta.kubernetes.io/storage-provisioner: nfs.raid5
creationTimestamp: "2020-02-05T10:19:46Z"
finalizers:
- kubernetes.io/pvc-protection
labels:
service: elasticsearch
name: data-elasticsearch-2
namespace: default
resourceVersion: "64027629"
selfLink: /api/v1/namespaces/default/persistentvolumeclaims/data-elasticsearch-2
uid: 089e3867-4801-11ea-96eb-00155d01e331
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 50Gi
storageClassName: nfs.raid5
volumeMode: Filesystem
volumeName: pvc-089e3867-4801-11ea-96eb-00155d01e331
status:
accessModes:
- ReadWriteOnce
capacity:
storage: 50Gi
phase: Bound
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
annotations:
pv.kubernetes.io/bind-completed: "yes"
pv.kubernetes.io/bound-by-controller: "yes"
volume.beta.kubernetes.io/storage-provisioner: nfs.raid5
creationTimestamp: "2019-12-17T10:19:12Z"
finalizers:
- kubernetes.io/pvc-protection
labels:
app: minio
name: data-minio-0
namespace: default
resourceVersion: "54242369"
selfLink: /api/v1/namespaces/default/persistentvolumeclaims/data-minio-0
uid: ab9f0e15-20b6-11ea-922b-00155d01e331
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 30Gi
storageClassName: nfs.raid5
volumeMode: Filesystem
volumeName: pvc-ab9f0e15-20b6-11ea-922b-00155d01e331
status:
accessModes:
- ReadWriteOnce
capacity:
storage: 30Gi
phase: Bound
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
annotations:
pv.kubernetes.io/bind-completed: "yes"
pv.kubernetes.io/bound-by-controller: "yes"
volume.beta.kubernetes.io/storage-provisioner: nfs.raid5
creationTimestamp: "2019-12-17T10:36:28Z"
finalizers:
- kubernetes.io/pvc-protection
labels:
app: minio
name: data-minio-1
namespace: default
resourceVersion: "54244889"
selfLink: /api/v1/namespaces/default/persistentvolumeclaims/data-minio-1
uid: 152ad7c7-20b9-11ea-922b-00155d01e331
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 30Gi
storageClassName: nfs.raid5
volumeMode: Filesystem
volumeName: pvc-152ad7c7-20b9-11ea-922b-00155d01e331
status:
accessModes:
- ReadWriteOnce
capacity:
storage: 30Gi
phase: Bound
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
annotations:
pv.kubernetes.io/bind-completed: "yes"
pv.kubernetes.io/bound-by-controller: "yes"
volume.beta.kubernetes.io/storage-provisioner: nfs.raid5
creationTimestamp: "2019-12-17T10:36:49Z"
finalizers:
- kubernetes.io/pvc-protection
labels:
app: minio
name: data-minio-2
namespace: default
resourceVersion: "54244969"
selfLink: /api/v1/namespaces/default/persistentvolumeclaims/data-minio-2
uid: 21cdfb6a-20b9-11ea-922b-00155d01e331
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 30Gi
storageClassName: nfs.raid5
volumeMode: Filesystem
volumeName: pvc-21cdfb6a-20b9-11ea-922b-00155d01e331
status:
accessModes:
- ReadWriteOnce
capacity:
storage: 30Gi
phase: Bound
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
annotations:
pv.kubernetes.io/bind-completed: "yes"
pv.kubernetes.io/bound-by-controller: "yes"
volume.beta.kubernetes.io/storage-provisioner: nfs.raid5
creationTimestamp: "2019-12-17T10:37:04Z"
finalizers:
- kubernetes.io/pvc-protection
labels:
app: minio
name: data-minio-3
namespace: default
resourceVersion: "54245023"
selfLink: /api/v1/namespaces/default/persistentvolumeclaims/data-minio-3
uid: 2ade3c07-20b9-11ea-922b-00155d01e331
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 30Gi
storageClassName: nfs.raid5
volumeMode: Filesystem
volumeName: pvc-2ade3c07-20b9-11ea-922b-00155d01e331
status:
accessModes:
- ReadWriteOnce
capacity:
storage: 30Gi
phase: Bound
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
annotations:
pv.kubernetes.io/bind-completed: "yes"
pv.kubernetes.io/bound-by-controller: "yes"
volume.beta.kubernetes.io/storage-provisioner: nfs.raid5
creationTimestamp: "2020-01-20T09:54:32Z"
finalizers:
- kubernetes.io/pvc-protection
labels:
app: redis
name: data-redis-0
namespace: default
resourceVersion: "60898171"
selfLink: /api/v1/namespaces/default/persistentvolumeclaims/data-redis-0
uid: dbc61d61-3b6a-11ea-96eb-00155d01e331
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
storageClassName: nfs.raid5
volumeMode: Filesystem
volumeName: pvc-dbc61d61-3b6a-11ea-96eb-00155d01e331
status:
accessModes:
- ReadWriteOnce
capacity:
storage: 10Gi
phase: Bound
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
annotations:
pv.kubernetes.io/bind-completed: "yes"
pv.kubernetes.io/bound-by-controller: "yes"
volume.beta.kubernetes.io/storage-provisioner: nfs.raid5
creationTimestamp: "2019-11-21T08:20:39Z"
finalizers:
- kubernetes.io/pvc-protection
labels:
app: mongo
name: database-mongo-0
namespace: default
resourceVersion: "49142141"
selfLink: /api/v1/namespaces/default/persistentvolumeclaims/database-mongo-0
uid: cd2bc437-0c37-11ea-922b-00155d01e331
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 30Gi
storageClassName: nfs.raid5
volumeMode: Filesystem
volumeName: pvc-cd2bc437-0c37-11ea-922b-00155d01e331
status:
accessModes:
- ReadWriteOnce
capacity:
storage: 30Gi
phase: Bound
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
annotations:
pv.kubernetes.io/bind-completed: "yes"
pv.kubernetes.io/bound-by-controller: "yes"
volume.beta.kubernetes.io/storage-provisioner: nfs.raid5
creationTimestamp: "2019-11-21T08:20:57Z"
finalizers:
- kubernetes.io/pvc-protection
labels:
app: mongo
name: database-mongo-1
namespace: default
resourceVersion: "49142205"
selfLink: /api/v1/namespaces/default/persistentvolumeclaims/database-mongo-1
uid: d7d7d7c6-0c37-11ea-922b-00155d01e331
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 30Gi
storageClassName: nfs.raid5
volumeMode: Filesystem
volumeName: pvc-d7d7d7c6-0c37-11ea-922b-00155d01e331
status:
accessModes:
- ReadWriteOnce
capacity:
storage: 30Gi
phase: Bound
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
annotations:
pv.kubernetes.io/bind-completed: "yes"
pv.kubernetes.io/bound-by-controller: "yes"
volume.beta.kubernetes.io/storage-provisioner: nfs.raid5
creationTimestamp: "2019-11-21T08:21:13Z"
finalizers:
- kubernetes.io/pvc-protection
labels:
app: mongo
name: database-mongo-2
namespace: default
resourceVersion: "49142269"
selfLink: /api/v1/namespaces/default/persistentvolumeclaims/database-mongo-2
uid: e1add0ee-0c37-11ea-922b-00155d01e331
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 30Gi
storageClassName: nfs.raid5
volumeMode: Filesystem
volumeName: pvc-e1add0ee-0c37-11ea-922b-00155d01e331
status:
accessModes:
- ReadWriteOnce
capacity:
storage: 30Gi
phase: Bound
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
annotations:
pv.kubernetes.io/bind-completed: "yes"
pv.kubernetes.io/bound-by-controller: "yes"
volume.beta.kubernetes.io/storage-provisioner: nfs.raid5
creationTimestamp: "2020-01-14T08:09:50Z"
finalizers:
- kubernetes.io/pvc-protection
labels:
app: mash-firstupload-protel
name: storage-mash-firstupload-protel-0
namespace: default
resourceVersion: "59703924"
selfLink: /api/v1/namespaces/default/persistentvolumeclaims/storage-mash-firstupload-protel-0
uid: 3cb79ca5-36a5-11ea-96eb-00155d01e331
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
storageClassName: nfs.raid5
volumeMode: Filesystem
volumeName: pvc-3cb79ca5-36a5-11ea-96eb-00155d01e331
status:
accessModes:
- ReadWriteOnce
capacity:
storage: 10Gi
phase: Bound
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
annotations:
pv.kubernetes.io/bind-completed: "yes"
pv.kubernetes.io/bound-by-controller: "yes"
volume.beta.kubernetes.io/storage-provisioner: nfs.raid5
creationTimestamp: "2020-01-14T08:09:53Z"
finalizers:
- kubernetes.io/pvc-protection
labels:
app: mash-firstupload-protel
name: storage-mash-firstupload-protel-1
namespace: default
resourceVersion: "59703971"
selfLink: /api/v1/namespaces/default/persistentvolumeclaims/storage-mash-firstupload-protel-1
uid: 3ec97b97-36a5-11ea-96eb-00155d01e331
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
storageClassName: nfs.raid5
volumeMode: Filesystem
volumeName: pvc-3ec97b97-36a5-11ea-96eb-00155d01e331
status:
accessModes:
- ReadWriteOnce
capacity:
storage: 10Gi
phase: Bound
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
annotations:
pv.kubernetes.io/bind-completed: "yes"
pv.kubernetes.io/bound-by-controller: "yes"
volume.beta.kubernetes.io/storage-provisioner: nfs.raid5
creationTimestamp: "2020-01-14T08:10:06Z"
finalizers:
- kubernetes.io/pvc-protection
labels:
app: mash-firstupload-protel
name: storage-mash-firstupload-protel-2
namespace: default
resourceVersion: "59704015"
selfLink: /api/v1/namespaces/default/persistentvolumeclaims/storage-mash-firstupload-protel-2
uid: 464dd3d0-36a5-11ea-96eb-00155d01e331
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
storageClassName: nfs.raid5
volumeMode: Filesystem
volumeName: pvc-464dd3d0-36a5-11ea-96eb-00155d01e331
status:
accessModes:
- ReadWriteOnce
capacity:
storage: 10Gi
phase: Bound
kind: List
metadata:
resourceVersion: ""
selfLink: ""
Thank you in advance for the replies! If you need any other information, please let me know and I'll edit the question.
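Not from the original post, but since the failing operation is a rename on the NFS-backed /data/db while everything is owned by root, two things worth checking are the effective uid mongod runs as and whether the NFS export squashes root (root_squash remaps uid 0 to nobody, which commonly breaks exactly this kind of operation):
# Which uid/gid does mongod actually run as inside the pod?
kubectl exec mongo-0 -c mongodb -- id
# What options was the volume mounted with?
kubectl exec mongo-0 -c mongodb -- sh -c 'mount | grep /data/db'
# On the NFS server: does the export use root_squash or no_root_squash?
cat /etc/exports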

Skipper HTTPS REST endpoint requests returning HTTP URLs

I am trying a POC with Spring Cloud Data Flow streams and have the application running in Pivotal Cloud Foundry. Trying the same in Kubernetes, the Spring Data Flow server dashboard is not loading. I debugged the issue and found the root cause: when the dashboard loads, it hits the Skipper REST endpoint /api, which returns a response containing the URLs of the other Skipper endpoints, but the returned URLs are all HTTP. How can I force Skipper to return HTTPS URLs instead of HTTP? Below is the response when I curl the same endpoint.
C:\>curl -k https://<skipper_url>/api
RESPONSE FROM SKIPPER
{
"_links" : {
"repositories" : {
"href" : "http://<skipper_url>/api/repositories{?page,size,sort}",
"templated" : true
},
"deployers" : {
"href" : "http://<skipper_url>/api/deployers{?page,size,sort}",
"templated" : true
},
"releases" : {
"href" : "http://<skipper_url>/api/releases{?page,size,sort}",
"templated" : true
},
"packageMetadata" : {
"href" : "**http://<skipper_url>/api/packageMetadata{?page,size,sort,projection}**",
"templated" : true
},
"about" : {
"href" : "http://<skipper_url>/api/about"
},
"release" : {
"href" : "http://<skipper_url>/api/release"
},
"package" : {
"href" : "http://<skipper_url>/api/package"
},
"profile" : {
"href" : "http://<skipper_url>/api/profile"
}
}
}
Kubernetes deployment YAML
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: skipper-server-network-policy
spec:
podSelector:
matchLabels:
app: skipper-server
ingress:
- from:
- namespaceSelector:
matchLabels:
gkp_namespace: ingress-nginx
egress:
- {}
policyTypes:
- Ingress
- Egress
---
apiVersion: v1
kind: Secret
metadata:
name: poc-secret
data:
.dockerconfigjson: ewogICJhdXRocyI6
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: skipper-server
labels:
app: skipper-server
spec:
replicas: 1
selector:
matchLabels:
app: skipper-server
template:
metadata:
labels:
app: skipper-server
annotations:
kubernetes.io/psp: nonroot
spec:
containers:
- name: skipper-server
image: <image_path>
imagePullPolicy: Always
ports:
- containerPort: 7577
protocol: TCP
resources:
limits:
cpu: "4"
memory: 2Gi
requests:
cpu: 25m
memory: 1Gi
securityContext:
runAsUser: 99
imagePullSecrets:
- name: poc-secret
serviceAccount: spark
serviceAccountName: spark
---
apiVersion: v1
kind: Service
metadata:
name: skipper-server
labels:
app: skipper-server
spec:
ports:
- port: 80
targetPort: 7577
protocol: TCP
name: http
selector:
app: skipper-server
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: skipper-server
annotations:
ingress.kubernetes.io/ssl-passthrough: "true"
ingress.kubernetes.io/secure-backends: "true"
kubernetes.io/ingress.allow.http: "true"
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
rules:
- host: "<skipper_url>"
http:
paths:
- path: /
backend:
serviceName: skipper-server
servicePort: 80
tls:
- hosts:
- "<skipper_url>"
SKIPPER APPLICATION.properties
spring.datasource.url=jdbc:h2:mem:testdb
spring.datasource.driverClassName=org.h2.Driver
spring.datasource.username=sa
spring.datasource.password=
spring.server.use-forward-headers=true
The root cause was the Skipper /api endpoint returning HTTP URLs for the deployers, and the Kubernetes ingress then trying to redirect and getting blocked with a 308 error. Adding the settings below to the Skipper deployment's environment and to the ingress fixed the issue.
DEPLOYMENT
apiVersion: apps/v1
kind: Deployment
metadata:
name: skipper-server
spec:
containers:
env:
- name: "server.tomcat.internal-proxies"
value: ".*"
- name: "server.use-forward-headers"
value: "true"**
INGRESS
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: skipper-server
annotations:
nginx.ingress.kubernetes.io/ssl-redirect: "false"
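After redeploying with the forwarded-header properties and the updated ingress annotation, the links returned by Skipper can be re-checked; the hrefs should now carry the https scheme of the original request:
curl -k https://<skipper_url>/api | grep '"href"'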