I want to deploy Keycloak with the custom configuration below applied before it starts:
a new realm
a role
a client
an admin user under the new realm
I am using the deployment file below to create the Keycloak pod:
apiVersion: apps/v1
kind: Deployment
metadata:
name: keycloak
namespace: default
labels:
app: keycloak
spec:
replicas: 1
selector:
matchLabels:
app: keycloak
template:
metadata:
labels:
app: keycloak
spec:
containers:
- name: keycloak
image: quay.io/keycloak/keycloak:10.0.1
env:
- name: KEYCLOAK_USER
value: "admin"
- name: KEYCLOAK_PASSWORD
value: "admin"
- name: REALM
value: "ntc"
- name: PROXY_ADDRESS_FORWARDING
value: "true"
volumeMounts:
- mountPath: /opt/jboss/keycloak/startup/elements
name: elements
ports:
- name: http
containerPort: 8080
- name: https
containerPort: 443
readinessProbe:
httpGet:
path: /auth/realms/master
port: 8080
volumes:
- name: elements
configMap:
name: keycloak-elements
and I am using the client.json and realm.json files below to generate the ConfigMap for Keycloak.
client.json
{
"id": "7ec4ccce-d6ed-461f-8e95-ea98e4912b8c",
"clientId": "ntc-app",
"enabled": true,
"clientAuthenticatorType": "client-secret",
"secret": "0b360a88-df24-48fa-8e96-bf6577bbee95",
"directAccessGrantsEnabled": true
}
realm.json
{
"realm": "ntc",
"id": "ntc",
"enabled": "true",
"revokeRefreshToken" : true,
"accessTokenLifespan" : 900,
"passwordPolicy": "length(8) and digits(1) and specialChars(1)",
"roles" : {
"realm" : [ {
"id": "c9253f52-1960-4c9d-af99-5facca0c0846",
"name": "admin",
"description" : "admin role",
"scopeParamRequired": false,
"composite": false,
"clientRole": false,
"containerId": "ntc"
}, {
"id" : "1e7ed0c8-9585-44b0-92f8-59e472573461",
"name" : "user",
"description" : "user role",
"scopeParamRequired" : false,
"composite" : false,
"clientRole" : false,
"containerId" : "ntc"
}
]
}
}
Both files are saved under the elements folder and used in the command below to generate the ConfigMap:
kubectl create configmap keycloak-elements --from-file=elements
Still, I don't see any new realm, role, or client created in the Keycloak console.
I think you need to import the realm and client as described here: http://www.mastertheboss.com/jboss-frameworks/keycloak/keycloak-with-docker
Some environment variables might help accomplish the task.
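As a minimal sketch against the legacy WildFly-based image used above (the file path assumes the elements ConfigMap mount from the deployment, and the realm file would need to embed the client under a "clients" array for it to be imported as well), the container's env block could gain a KEYCLOAK_IMPORT entry:
env:
  - name: KEYCLOAK_USER
    value: "admin"
  - name: KEYCLOAK_PASSWORD
    value: "admin"
  # KEYCLOAK_IMPORT tells the legacy image to import this realm file on first startup;
  # the path matches the ConfigMap mount declared in the deployment above.
  - name: KEYCLOAK_IMPORT
    value: "/opt/jboss/keycloak/startup/elements/realm.json"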
When you set up Keycloak on Kubernetes, you only need to import the new realm (realm.json) and the corresponding clients (client.json) during the first run, so a Job should be created instead of adding the import to the Deployment.
Once the Job has run, the JSON will be imported into the Keycloak database and the Job can be removed. Adding the import to the Deployment would cause Keycloak to try to import the JSON files on every restart. A sketch of such a Job is shown below.
Please follow the steps in this blog post: https://blog.knoldus.com/migrate-keycloak-h2-database-to-postgres-on-kubernetes/
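For illustration only, a one-shot import Job might look roughly like the sketch below. It reuses the keycloak-elements ConfigMap and the admin CLI (kcadm.sh) bundled with the image; the in-cluster service name keycloak, the /elements mount path, and the ntc-admin user with its password are assumptions, not values from the question:
apiVersion: batch/v1
kind: Job
metadata:
  name: keycloak-import
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: import
          image: quay.io/keycloak/keycloak:10.0.1
          command: ["/bin/bash", "-c"]
          args:
            - |
              set -e
              KCADM=/opt/jboss/keycloak/bin/kcadm.sh
              # Log in to the master realm with the bootstrap admin credentials.
              $KCADM config credentials --server http://keycloak:8080/auth --realm master --user admin --password admin
              # Import the realm and the client from the mounted ConfigMap.
              $KCADM create realms -f /elements/realm.json
              $KCADM create clients -r ntc -f /elements/client.json
              # Create an admin user inside the new realm and grant it the realm-level admin role.
              $KCADM create users -r ntc -s username=ntc-admin -s enabled=true
              $KCADM set-password -r ntc --username ntc-admin --new-password changeme
              $KCADM add-roles -r ntc --uusername ntc-admin --rolename admin
          volumeMounts:
            - name: elements
              mountPath: /elements
      volumes:
        - name: elements
          configMap:
            name: keycloak-elements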
We are trying to deploy a Logic App as a containerized workload in AKS. The following is our Dockerfile:
FROM mcr.microsoft.com/azure-functions/dotnet:3.0.14492-appservice
ENV AzureWebJobsStorage=<StorageAccount connection string>
ENV AZURE_FUNCTIONS_ENVIRONMENT Development
ENV AzureWebJobsScriptRoot=/home/site/wwwroot
ENV AzureFunctionsJobHost__Logging__Console__IsEnabled=true
ENV FUNCTIONS_V2_COMPATIBILITY_MODE=true
COPY ./bin/release/netcoreapp3.1/publish/ /home/site/wwwroot
The following is our deployment manifest file:
apiVersion: apps/v1
kind: Deployment
metadata:
name: pfna-pgt-sf-pdfextract
namespace: canary
labels:
app: pfna-pgt-sf-pdfextract
spec:
replicas: 1
selector:
matchLabels:
app: pfna-pgt-sf-pdfextract
template:
metadata:
labels:
app: pfna-pgt-sf-pdfextract
spec:
nodeSelector:
beta.kubernetes.io/os: linux
containers:
- name: pfna-pgt-sf-pdfextract
image: "image_link"
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 250m
memory: 256Mi
ports:
- containerPort: 80
env:
- name: AzureBlob_connectionString
value: <connection_string>
- name: AzureWebJobsStorage
value: <connection_string>
imagePullSecrets:
- name: sbx-acr-secret
---
apiVersion: v1
kind: Service
metadata:
name: pfna-pgt-sf-pdfextract
namespace: canary
labels:
app: pfna-pgt-sf-pdfextract
spec:
type: LoadBalancer
ports:
- port: 80
targetPort: 80
protocol: TCP
name: http-pfna-pgt-sf-pdfextract
selector:
app: pfna-pgt-sf-pdfextract
The following is connections.json:
{
"serviceProviderConnections": {
"AzureBlob": {
"parameterValues": {
"connectionString": "#appsetting('AzureWebJobsStorage')"
},
"serviceProvider": {
"id": "/serviceProviders/AzureBlob"
},
"displayName": "localAzureBlob"
}
},
"managedApiConnections": {}
}
The following is host.json:
{
"version": "2.0",
"logging": {
"applicationInsights": {
"samplingSettings": {
"isEnabled": true,
"excludedTypes": "Request"
}
}
},
"extensionBundle": {
"id": "Microsoft.Azure.Functions.ExtensionBundle.Workflows",
"version": "[1.*, 2.0.0)"
},
"extensions": {
"workflow": {
"settings": {
"Runtime.Backend.VariableOperation.MaximumStatelessVariableSize": "5000000"
}
}
}
}
The image runs successfully in Docker Desktop, but when deployed to AKS we get 'Function host is not running'.
Please help resolve this.
You need to specify WEBSITE_HOSTNAME as well (it doesn't matter what the value is; it just needs to be set).
That being said, as of today there is another issue that causes the function host not to start (libadvapi32.dll).
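For example, a minimal addition to the container's env block in the deployment above might look like this (the value is an arbitrary placeholder):
env:
  - name: WEBSITE_HOSTNAME
    # Any non-empty value is enough; the host only checks that the variable is set.
    value: "localhost"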
I want to adapt the example here:
https://www.enterprisedb.com/blog/how-deploy-pgadmin-kubernetes
I want to change the following:
the host should be localhost
the user/password to access the PostgreSQL server via pgAdmin 4 should be:
postgres / admin
But I don't know what should be changed in the files.
The pgadmin-secret.yaml file (the password is SuperSecret):
apiVersion: v1
kind: Secret
type: Opaque
metadata:
name: pgadmin
data:
pgadmin-password: U3VwZXJTZWNyZXQ=
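(For reference, the data value is just the base64 encoding of the password, so a different password such as admin can be encoded the same way:)
$ echo -n 'SuperSecret' | base64
U3VwZXJTZWNyZXQ=
$ echo -n 'admin' | base64
YWRtaW4=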
The pgadmin-configmap.yaml file:
apiVersion: v1
kind: ConfigMap
metadata:
name: pgadmin-config
data:
servers.json: |
{
"Servers": {
"1": {
"Name": "PostgreSQL DB",
"Group": "Servers",
"Port": 5432,
"Username": "postgres",
"Host": "postgres.domain.com",
"SSLMode": "prefer",
"MaintenanceDB": "postgres"
}
}
}
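(For reference, Username and Host in servers.json are the fields that control which PostgreSQL server and role pgAdmin connects with. Note that "localhost" inside the pgAdmin pod refers to the pod itself, so a PostgreSQL instance running elsewhere in the cluster would normally be addressed by its Service name; the name below is a placeholder, and the database password is never stored in servers.json, pgAdmin prompts for it at connection time.)
servers.json: |
  {
    "Servers": {
      "1": {
        "Name": "PostgreSQL DB",
        "Group": "Servers",
        "Port": 5432,
        "Username": "postgres",
        "Host": "postgres-service",
        "SSLMode": "prefer",
        "MaintenanceDB": "postgres"
      }
    }
  }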
The pgadmin-service.yaml file:
apiVersion: v1
kind: Service
metadata:
name: pgadmin-service
spec:
ports:
- protocol: TCP
port: 80
targetPort: http
selector:
app: pgadmin
type: NodePort
The pgadmin-statefulset.yaml file:
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: pgadmin
spec:
serviceName: pgadmin-service
podManagementPolicy: Parallel
replicas: 1
updateStrategy:
type: RollingUpdate
selector:
matchLabels:
app: pgadmin
template:
metadata:
labels:
app: pgadmin
spec:
terminationGracePeriodSeconds: 10
containers:
- name: pgadmin
image: dpage/pgadmin4:5.4
imagePullPolicy: Always
env:
- name: PGADMIN_DEFAULT_EMAIL
value: user@domain.com
- name: PGADMIN_DEFAULT_PASSWORD
valueFrom:
secretKeyRef:
name: pgadmin
key: pgadmin-password
ports:
- name: http
containerPort: 80
protocol: TCP
volumeMounts:
- name: pgadmin-config
mountPath: /pgadmin4/servers.json
subPath: servers.json
readOnly: true
- name: pgadmin-data
mountPath: /var/lib/pgadmin
volumes:
- name: pgadmin-config
configMap:
name: pgadmin-config
volumeClaimTemplates:
- metadata:
name: pgadmin-data
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 3Gi
The following commands work fine with the original YAML files and I get the pgAdmin 4 login form in a web page.
But even with the unchanged files, I get a bad username/password error.
I have an existing configuration file in JSON format, something like the one below:
{
"maxThreadCount": 10,
"trackerConfigs": [{
"url": "https://example1.com/",
"username": "username",
"password": "password",
"defaultLimit": 1
},
{
"url": "https://example2.com/",
"username": "username",
"password": "password",
"defaultLimit": 1
}
],
"repoConfigs": [{
"url": "https://github.com/",
"username": "username",
"password": "password",
"type": "GITHUB"
}],
"streamConfigs": [{
"url": "https://example.com/master.json",
"type": "JSON"
}]
}
I understand that I can pass a key/value pair properties file with the --from-file option when creating a ConfigMap or Secret.
But what about a JSON-formatted file? Does Kubernetes accept a JSON file as input to create a ConfigMap or Secret as well?
$ kubectl create configmap demo-configmap --from-file=example.json
When I run this command, it says configmap/demo-configmap created. But how can I refer to the ConfigMap's values in another pod?
When you create a ConfigMap or Secret using --from-file, by default the file name becomes the key and the content of the file becomes the value.
For example, the ConfigMap you created will look like this:
apiVersion: v1
data:
test.json: |
{
"maxThreadCount": 10,
"trackerConfigs": [{
"url": "https://example1.com/",
"username": "username",
"password": "password",
"defaultLimit": 1
},
{
"url": "https://example2.com/",
"username": "username",
"password": "password",
"defaultLimit": 1
}
],
"repoConfigs": [{
"url": "https://github.com/",
"username": "username",
"password": "password",
"type": "GITHUB"
}],
"streamConfigs": [{
"url": "https://example.com/master.json",
"type": "JSON"
}]
}
kind: ConfigMap
metadata:
creationTimestamp: "2020-05-07T09:03:55Z"
name: demo-configmap
namespace: default
resourceVersion: "5283"
selfLink: /api/v1/namespaces/default/configmaps/demo-configmap
uid: ce566b36-c141-426e-be30-eb843ab20db6
You can mount the ConfigMap into your pod as a volume, where the key name becomes the file name and the value becomes the content of the file, like the following:
apiVersion: v1
kind: Pod
metadata:
name: test-pod
spec:
containers:
- name: test-container
image: k8s.gcr.io/busybox
command: [ "/bin/sh", "-c", "ls /etc/config/" ]
volumeMounts:
- name: config-volume
mountPath: /etc/config
volumes:
- name: config-volume
configMap:
name: demo-configmap
restartPolicy: Never
When the pod runs, the command ls /etc/config/ produces the output below:
test.json
ConfigMaps are containers for key/value pairs. So if you create a ConfigMap from a file containing JSON, it will be stored with the file name as the key and the JSON as the value.
To access such a ConfigMap from a Pod, you have to mount it into your Pod as a volume:
How to mount config maps
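As a minimal sketch (assuming the demo-configmap created above, with its example.json key, and a hypothetical /app/config path the application reads from), a subPath mount places just that one file at an exact location instead of shadowing a whole directory:
containers:
  - name: app
    image: k8s.gcr.io/busybox
    command: [ "/bin/sh", "-c", "cat /app/config/example.json" ]
    volumeMounts:
      - name: config-volume
        # Mount only the example.json key at this exact path
        mountPath: /app/config/example.json
        subPath: example.json
volumes:
  - name: config-volume
    configMap:
      name: demo-configmap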
Unfortunately the solution as stated by hoque did not work for me. In my case the app terminated with a very suspicious message:
Could not execute because the application was not found or a compatible .NET SDK is not installed.
Possible reasons for this include:
* You intended to execute a .NET program:
The application 'myapp.dll' does not exist.
* You intended to execute a .NET SDK command:
It was not possible to find any installed .NET SDKs.
Install a .NET SDK from:
https://aka.ms/dotnet-download
I could see that appsettings.json was deployed, but something had gone wrong here. In the end, the following solution worked for me (similar, but with some extras):
spec:
containers:
- name: webapp
image: <my image>
volumeMounts:
- name: appconfig
# "mountPath: /app" only doesn't work (app crashes)
mountPath: /app/appsettings.json
subPath: appsettings.json
volumes:
- name: appconfig
configMap:
name: my-config-map
# Required since "mountPath: /app" only doesn't work (app crashes)
items:
- key: appsettings.json
path: appsettings.json
apiVersion: v1
kind: ConfigMap
metadata:
name: my-config-map
labels:
app: swpq-task-02-team5
data:
appsettings.json: |
{
...
}
I had this issue for a couple of days, as I wanted to map a JSON config file (config.production.json) from my local directory to a specific location inside the pod's containers (/var/lib/ghost). The config below worked for me; note the mountPath and subPath keys, which did the trick. The snippet below is from a Deployment, shortened for readability:
spec:
volumes:
- name: configmap-volume
configMap:
name: configmap
containers:
- env:
- name: url
value: https://www.example.com
volumeMounts:
- name: configmap-volume
mountPath: /var/lib/ghost/config.production.json
subPath: config.production.json
I am trying to configure a Kafka cluster behind Traefik, but my producers and consumers (which are outside Kubernetes) don't connect to the bootstrap servers. They keep saying:
"no resolvable bootstrap servers in the given url"
Here is the Traefik ingress:
{
"apiVersion": "extensions/v1beta1",
"kind": "Ingress",
"metadata": {
"name": "nppl-ingress",
"annotations": {
"kubernetes.io/ingress.class": "traefik",
"traefik.frontend.rule.type": "PathPrefixStrip"
}
},
"spec": {
"rules": [
{
"host": "" ,
"http": {
"paths": [
{
"path": "/zuul-gateway",
"backend": {
"serviceName": "zuul-gateway",
"servicePort": "zuul-port"
}
},
{
"path": "/kafka",
"backend": {
"serviceName": "kafka-broker",
"servicePort": "kafka-port"
}
[..]
}
What I give to the Kafka consumers/producers is the public IP of Traefik.
Here is the flow: [Kafka producers/consumers] -> Traefik (exposed as LoadBalancer) -> [Kafka cluster]
Is there any solution? Otherwise I was thinking of adding a Kafka REST proxy (https://docs.confluent.io/current/kafka-rest/docs/index.html) between Traefik and the Kafka brokers, but I don't think that's the ideal solution.
I did this; you can refer to it. In Kubernetes, the kafka.yaml deployment:
env:
- name: KAFKA_BROKER_ID
value: "1"
- name: KAFKA_CREATE_TOPICS
value: "test:1:1"
- name: KAFKA_ZOOKEEPER_CONNECT
value: "zookeeper:2181"
- name: KAFKA_ADVERTISED_LISTENERS
value: "INSIDE://:9092,OUTSIDE://kafka-com:30322"
- name: KAFKA_LISTENERS
value: "INSIDE://:9092,OUTSIDE://:30322"
- name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
value: "INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT"
- name: KAFKA_INTER_BROKER_LISTENER_NAME
value: "INSIDE"
The Kafka Service (the external service invocation address, or the Traefik proxy address):
---
kind: Service
apiVersion: v1
metadata:
name: kafka-com
namespace: dev
labels:
k8s-app: kafka
spec:
selector:
k8s-app: kafka
ports:
- port: 9092
name: innerport
targetPort: 9092
protocol: TCP
- port: 30322
name: outport
targetPort: 30322
protocol: TCP
nodePort: 30322
type: NodePort
Ensure that the Kafka external port and the nodePort are consistent. Other services call kafka-com:30322. My blog post config_kafka_in_kubernetes describes this setup; hope it helps! An example of an external client call is sketched below.
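For clients outside the cluster, the bootstrap address would then be a node IP plus the NodePort, and the hostname advertised by the OUTSIDE listener (kafka-com here) must be resolvable from the client machine. For example, with the stock console tools (the address is a placeholder):
kafka-console-producer.sh --broker-list <node-ip>:30322 --topic test
kafka-console-consumer.sh --bootstrap-server <node-ip>:30322 --topic test --from-beginning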
I am trying to deploy MinIO in Kubernetes using the Helm stable charts,
and when I check the status of the release with
helm status minio
the desired pod capacity is 4, but the current is 0.
I looked through the journalctl logs for anything from the kubelet, but found none.
I have attached all the rendered Helm charts; can someone please point out what I am doing wrong?
---
# Source: minio/templates/secrets.yaml
apiVersion: v1
kind: Secret
metadata:
name: RELEASE-NAME-minio
labels:
app: minio
chart: minio-1.7.0
release: RELEASE-NAME
heritage: Tiller
type: Opaque
data:
accesskey: RFJMVEFEQU1DRjNUQTVVTVhOMDY=
secretkey: bHQwWk9zWmp5MFpvMmxXN3gxeHlFWmF5bXNPUkpLM1VTb3VqeEdrdw==
---
# Source: minio/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: RELEASE-NAME-minio
labels:
app: minio
chart: minio-1.7.0
release: RELEASE-NAME
heritage: Tiller
data:
initialize: |-
#!/bin/sh
set -e ; # Have script exit in the event of a failed command.
# connectToMinio
# Use a check-sleep-check loop to wait for Minio service to be available
connectToMinio() {
ATTEMPTS=0 ; LIMIT=29 ; # Allow 30 attempts
set -e ; # fail if we can't read the keys.
ACCESS=$(cat /config/accesskey) ; SECRET=$(cat /config/secretkey) ;
set +e ; # The connections to minio are allowed to fail.
echo "Connecting to Minio server: http://$MINIO_ENDPOINT:$MINIO_PORT" ;
MC_COMMAND="mc config host add myminio http://$MINIO_ENDPOINT:$MINIO_PORT $ACCESS $SECRET" ;
$MC_COMMAND ;
STATUS=$? ;
until [ $STATUS = 0 ]
do
ATTEMPTS=`expr $ATTEMPTS + 1` ;
echo \"Failed attempts: $ATTEMPTS\" ;
if [ $ATTEMPTS -gt $LIMIT ]; then
exit 1 ;
fi ;
sleep 2 ; # 2 second intervals between attempts
$MC_COMMAND ;
STATUS=$? ;
done ;
set -e ; # reset `e` as active
return 0
}
# checkBucketExists ($bucket)
# Check if the bucket exists, by using the exit code of `mc ls`
checkBucketExists() {
BUCKET=$1
CMD=$(/usr/bin/mc ls myminio/$BUCKET > /dev/null 2>&1)
return $?
}
# createBucket ($bucket, $policy, $purge)
# Ensure bucket exists, purging if asked to
createBucket() {
BUCKET=$1
POLICY=$2
PURGE=$3
# Purge the bucket, if set & exists
# Since PURGE is user input, check explicitly for `true`
if [ $PURGE = true ]; then
if checkBucketExists $BUCKET ; then
echo "Purging bucket '$BUCKET'."
set +e ; # don't exit if this fails
/usr/bin/mc rm -r --force myminio/$BUCKET
set -e ; # reset `e` as active
else
echo "Bucket '$BUCKET' does not exist, skipping purge."
fi
fi
# Create the bucket if it does not exist
if ! checkBucketExists $BUCKET ; then
echo "Creating bucket '$BUCKET'"
/usr/bin/mc mb myminio/$BUCKET
else
echo "Bucket '$BUCKET' already exists."
fi
# At this point, the bucket should exist, skip checking for existence
# Set policy on the bucket
echo "Setting policy of bucket '$BUCKET' to '$POLICY'."
/usr/bin/mc policy $POLICY myminio/$BUCKET
}
# Try connecting to Minio instance
connectToMinio
# Create the bucket
createBucket bucket none false
config.json: |-
{
"version": "26",
"credential": {
"accessKey": "DR06",
"secretKey": "lt0ZxGkw"
},
"region": "us-east-1",
"browser": "on",
"worm": "off",
"domain": "",
"storageclass": {
"standard": "",
"rrs": ""
},
"cache": {
"drives": [],
"expiry": 90,
"maxuse": 80,
"exclude": []
},
"notify": {
"amqp": {
"1": {
"enable": false,
"url": "",
"exchange": "",
"routingKey": "",
"exchangeType": "",
"deliveryMode": 0,
"mandatory": false,
"immediate": false,
"durable": false,
"internal": false,
"noWait": false,
"autoDeleted": false
}
},
"nats": {
"1": {
"enable": false,
"address": "",
"subject": "",
"username": "",
"password": "",
"token": "",
"secure": false,
"pingInterval": 0,
"streaming": {
"enable": false,
"clusterID": "",
"clientID": "",
"async": false,
"maxPubAcksInflight": 0
}
}
},
"elasticsearch": {
"1": {
"enable": false,
"format": "namespace",
"url": "",
"index": ""
}
},
"redis": {
"1": {
"enable": false,
"format": "namespace",
"address": "",
"password": "",
"key": ""
}
},
"postgresql": {
"1": {
"enable": false,
"format": "namespace",
"connectionString": "",
"table": "",
"host": "",
"port": "",
"user": "",
"password": "",
"database": ""
}
},
"kafka": {
"1": {
"enable": false,
"brokers": null,
"topic": ""
}
},
"webhook": {
"1": {
"enable": false,
"endpoint": ""
}
},
"mysql": {
"1": {
"enable": false,
"format": "namespace",
"dsnString": "",
"table": "",
"host": "",
"port": "",
"user": "",
"password": "",
"database": ""
}
},
"mqtt": {
"1": {
"enable": false,
"broker": "",
"topic": "",
"qos": 0,
"clientId": "",
"username": "",
"password": "",
"reconnectInterval": 0,
"keepAliveInterval": 0
}
}
}
}
---
# Source: minio/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
name: RELEASE-NAME-minio
labels:
app: minio
chart: minio-1.7.0
release: RELEASE-NAME
heritage: Tiller
spec:
type: ClusterIP
clusterIP: None
ports:
- name: service
port: 9000
targetPort: 9000
protocol: TCP
selector:
app: minio
release: RELEASE-NAME
---
# Source: minio/templates/statefulset.yaml
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: RELEASE-NAME-minio
labels:
app: minio
chart: minio-1.7.0
release: RELEASE-NAME
heritage: Tiller
spec:
serviceName: RELEASE-NAME-minio
replicas: 4
selector:
matchLabels:
app: minio
release: RELEASE-NAME
template:
metadata:
name: RELEASE-NAME-minio
labels:
app: minio
release: RELEASE-NAME
spec:
containers:
- name: minio
image: node1:5000/minio/minio:RELEASE.2018-09-01T00-38-25Z
imagePullPolicy: IfNotPresent
command: [ "/bin/sh",
"-ce",
"cp /tmp/config.json &&
/usr/bin/docker-entrypoint.sh minio -C server
http://RELEASE-NAME-minio-0.RELEASE-NAME-minio.default.svc.cluster.local/export
http://RELEASE-NAME-minio-1.RELEASE-NAME-minio.default.svc.cluster.local/export
http://RELEASE-NAME-minio-2.RELEASE-NAME-minio.default.svc.cluster.local/export
http://RELEASE-NAME-minio-3.RELEASE-NAME-minio.default.svc.cluster.local/export" ]
volumeMounts:
- name: export
mountPath: /export
- name: minio-server-config
mountPath: "/tmp/config.json"
subPath: config.json
- name: minio-config-dir
mountPath:
ports:
- name: service
containerPort: 9000
env:
- name: MINIO_ACCESS_KEY
valueFrom:
secretKeyRef:
name: RELEASE-NAME-minio
key: accesskey
- name: MINIO_SECRET_KEY
valueFrom:
secretKeyRef:
name: RELEASE-NAME-minio
key: secretkey
livenessProbe:
tcpSocket:
port: service
initialDelaySeconds: 5
periodSeconds: 30
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 3
readinessProbe:
tcpSocket:
port: service
periodSeconds: 15
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 3
resources:
requests:
cpu: 250m
memory: 256Mi
volumes:
- name: minio-user
secret:
secretName: RELEASE-NAME-minio
- name: minio-server-config
configMap:
name: RELEASE-NAME-minio
- name: minio-config-dir
emptyDir: {}
volumeClaimTemplates:
- metadata:
name: export
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: local-fast
resources:
requests:
storage: 49Gi
---
# Source: minio/templates/ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: RELEASE-NAME-minio
labels:
app: minio
chart: minio-1.7.0
release: RELEASE-NAME
heritage: Tiller
annotations:
nginx.ingress.kubernetes.io/affinity: cookie
nginx.ingress.kubernetes.io/session-cookie-hash: sha1
nginx.ingress.kubernetes.io/session-cookie-name: route
spec:
tls:
- hosts:
- minio.sample.com
secretName: tls-secret
rules:
- host: minio.sample.com
http:
paths:
- path: /
backend:
serviceName: RELEASE-NAME-minio
servicePort: 9000
I suspect you are not getting the persistent volume. Check your kube-controller-manager logs on your active master. This will vary depending on the cloud you are using: AWS, GCP, Azure, OpenStack, etc. The kube-controller-manager usually runs in a Docker container on the master, so you can do something like:
docker logs <kube-controller-manager-container>
Also, check:
kubectl get pvc
kubectl get pv
Hope it helps.
A bit more digging gave me the answer: the StatefulSet was deployed, but the pods were not created.
kubectl describe statefulset -n <namespace> minio
The events said it was looking for a mount path that was "" (in previous versions of the chart); changing it solved my issue.
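That empty mount path corresponds to the minio-config-dir volume mount in the StatefulSet above. As an illustration only (the exact directory depends on the chart and image version; /root/.minio/ is an assumption for this image generation), the fixed mount would look something like:
- name: minio-config-dir
  # Give the config-dir volume a real target instead of an empty mountPath
  mountPath: /root/.minio/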