I followed the instructions here to create a MongoDB instance using Helm. This requires some adaptation, as the instance is being created through GitLab with gitlab-ci.
Unfortunately, the part about values.yaml gets skimmed over, and I haven't found complete examples for MongoDB through Helm. [Many of the examples also seemed deprecated.]
Being unsure how to address the issue, I am using this as a values.yaml file:
global:
  mongodb:
    DBName: 'example'
    mongodbUsername: "user"
    mongodbPassword: "password"
    mongodbDatabase: "database"
    mongodbrootPassword: "password"
    auth.rootPassword: "password"
The following error is returned:
'auth.rootPassword' must not be empty, please add '--set auth.rootPassword=$MONGODB_ROOT_PASSWORD' to the command. To get the current value:
export MONGODB_ROOT_PASSWORD=$(kubectl get secret --namespace angular-23641052-review-86-sh-mousur review-86-sh-mousur-mongodb -o jsonpath="{.data.mongodb-root-password}" | base64 --decode)
Given that I'm only using Helm as called by gitlab-ci, I am unsure how to pass that --set flag or otherwise set the root password.
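For reference, outside of GitLab CI the value would normally be supplied on the command line, roughly like this (a sketch only; the release name and chart reference are placeholders, and I am assuming the Bitnami MongoDB chart, which produces this error message):

# Hypothetical manual invocation; GitLab CI normally runs helm for you,
# so this is only to show where --set would go.
helm upgrade --install my-mongo bitnami/mongodb \
  --set auth.rootPassword="password" \
  -f values.yaml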
From what I could tell, I thought setting variables in env in values.yaml had solved the problem:
env:
  - name: "auth.rootPassword"
    value: "password"
The problem went away, but it has since returned.
GitLab is making a move towards using .gitlab/auto-deploy-values.yaml.
I created a local version of autodeploy.sh and added the value auth.rootPassword="password" to .gitlab/auto-deploy-values.yaml, and the value seems to work. cf. [Gitlab's documentation][1]
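For anyone comparing notes, the extra values file ended up looking roughly like this; a sketch, assuming the file is handed straight to the chart, so the dotted --set path becomes nested YAML:

# .gitlab/auto-deploy-values.yaml (sketch)
auth:
  rootPassword: "password"   # equivalent to --set auth.rootPassword=...; prefer a CI/CD variable over a literal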
Related
Good Morning.
I'm currently using a Helm chart to deploy Camunda inside an OpenShift namespace/cluster.
For your information, Camunda has a default process called "Invoice", and that process is responsible for creating a default user called "demo".
I would like to avoid that user creation, and I was able to do so through Docker with the following command:
docker run -d --name camunda -p 8080:8080 \
  -v /tmp/empty:/camunda/webapps/camunda-invoice \
  camunda/camunda-bpm-platform:latest
But now my Helm chart uses a custom "values.yaml" that calls the Camunda image and then issues a command to start it:
image:
  name: camunda/camunda-bpm-platform
  tag: run-latest
  command: ['./camunda.sh']
So, is it possible to get the same behavior as the Docker command shown above and empty the "webapps" directory after calling camunda.sh?
I know that I can pass the "--webapps" argument through args: [ ], but the issue is that it will remove the "tasklist" and "cockpit" webapps, which allow users to access the Camunda UI.
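For what it's worth, the Docker -v trick corresponds to mounting an empty volume over the invoice webapp in the pod spec. Whether the chart offers a hook for extra volumes depends on the chart, so the following is only a sketch of the raw Kubernetes equivalent, not the chart's own values:

# Raw Kubernetes equivalent of the docker -v /tmp/empty:... trick (sketch)
containers:
  - name: camunda
    image: camunda/camunda-bpm-platform:latest
    volumeMounts:
      - name: empty-invoice
        mountPath: /camunda/webapps/camunda-invoice   # hides the bundled invoice webapp
volumes:
  - name: empty-invoice
    emptyDir: {}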
Thank you everyone.
Have a nice day!
EDIT:
While speaking with the Camunda team, I learned that I can pass the "--webapps --swaggerui --rest" arguments in order to start the application without the default BPMN process (Invoice).
So I'm currently trying to use multiple arguments in my Helm chart values.yaml, like this:
image:
  name: camunda/camunda-bpm-platform
  tag: run-latest
  command: ['./camunda.sh']
  args: ["--webapps", "--rest", "--swaggerui"]
Unfortunately, it's not working this way. What am I doing wrong?
If I send just one argument, like "--webapps", it reads the argument and creates the container.
But if I send multiple arguments, like in the example shown above, it just doesn't create the container.
Am I doing something wrong?
The different start arguments for the Camunda 7 RUN distribution are documented here: https://docs.camunda.org/manual/7.18/user-guide/camunda-bpm-run/#start-script-arguments
Here is a Helm values file example using these parameters:
image:
  name: camunda/camunda-bpm-platform
  tag: run-latest
  command: ['./camunda.sh']
  args: ['--production', '--webapps', '--rest', '--swaggerui']
extraEnvs:
  - name: DB_VALIDATE_ON_BORROW
    value: "false"
I am trying to set claims_map in HASURA_GRAPHQL_JWT_SECRET in my Docker Compose file,
using the config below:
HASURA_GRAPHQL_JWT_SECRET: '{"type":"HS256","key":"***************************","claims_namespace":"p-clamis-allow","claims_map":{"x-hasura-user-id":{"path":"$.user.id"}}}'
I get the following error:
Invalid interpolation format for "environment" option in service "graphql-engine":"{"type":"HS256","key":"*************************","claims_namespace":"p-clamis-allow","claims_map":{"x-hasura-user-id":{"path":"$.user.id"}}}"**
Replace $ with $$ and things should work! Docker Compose treats $ in the environment section as the start of a variable interpolation, and $$ is the escape for a literal dollar sign, which the $.user.id JSONPath needs.
HASURA_GRAPHQL_JWT_SECRET: '{"type":"HS256","key":"***************************","claims_namespace":"p-clamis-allow","claims_map":{"x-hasura-user-id":{"path":"$$.user.id"}}}'
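In context, the service block would look roughly like this (a sketch; the image tag is arbitrary and the key stays masked as in the question):

# docker-compose.yml (sketch)
services:
  graphql-engine:
    image: hasura/graphql-engine:latest   # pin a real version in practice
    environment:
      # $$ keeps Compose from interpolating the literal $ in the JSONPath
      HASURA_GRAPHQL_JWT_SECRET: '{"type":"HS256","key":"***************************","claims_namespace":"p-clamis-allow","claims_map":{"x-hasura-user-id":{"path":"$$.user.id"}}}'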
I use Traefik 1.7.14 and I want to use basic auth for my Grafana docker-compose service.
I followed e.g. https://medium.com/@xavier.priour/secure-traefik-dashboard-with-https-and-password-in-docker-5b657e2aa15f
but I also looked at other sources.
In my docker-compose.yml I have for grafana:
grafana:
  image: grafana/grafana
  labels:
    - "traefik.enable=true"
    - "traefik.backend=grafana"
    - "traefik.port=3000"
    - "traefik.frontend.rule=Host:grafana.my-domain.io"
    - "traefik.frontend.entryPoints=http,https"
    - "traefik.frontend.auth.basic.users=${ADMIN_CREDS}"
ADMIN_CREDS is in my .env file. I created its content with htpasswd -nbm my_user my_password; I also tried htpasswd -nbB my_user my_password to use bcrypt instead of MD5.
In .env
ADMIN_CREDS=test:$apr1$f0uSe/rs$KGSQaPMD.352XdXIzsfyY0
You see: I did not escape $ signs in the .env file.
When I inspect my container at runtime I see exactly the same encrypted password as in my .env file!
docker inspect 47aa3dbc3623 | grep test
gives me:
"traefik.frontend.auth.basic.users": "test:$apr1$f0uSe/rs$KGSQaPMD.352XdXIzsfyY0",
I also tried to put the user/password string directly into the docker-compose.yml, this time escaping the $ signs.
The inspect command was successful too.
BUT: When I call my Grafana URL, I get a basic auth dialog box, and when I type in my user/password combination, I always get a
{"message":"Invalid username or password"}
What could still be wrong here? I currently have no idea.
This message actually means that you passed Traefik's basic auth: if you had entered invalid credentials, the basic auth window would simply pop up again.
Grafana itself also uses basic auth, and that is what is failing here.
DO NOT DO THIS IN PRODUCTION: To prove it, you could configure Grafana to expect the same user and password. It would then accept the basic auth forwarded by Traefik and allow access.
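A sketch of that test (again, not for production), using Grafana's standard admin environment variables:

# Test only: make Grafana's admin credentials match the ones behind ADMIN_CREDS
grafana:
  image: grafana/grafana
  environment:
    - GF_SECURITY_ADMIN_USER=my_user
    - GF_SECURITY_ADMIN_PASSWORD=my_password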
However, you should either set up basic auth in Traefik OR use Grafana's own basic auth, not both.
You also might want to check the information on running Grafana behind a reverse proxy: https://grafana.com/tutorials/run-grafana-behind-a-proxy/#1
and especially https://grafana.com/docs/grafana/latest/auth/auth-proxy/
Another option, besides forwarding the auth headers, would be to disable forwarding them:
labels:
  ...
  - "traefik.http.middlewares.authGrafana.basicauth.removeheader=true"
Now you should see the grafana login page.
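Note that the label above uses Traefik v2 middleware syntax, while the question runs Traefik 1.7. If I remember the 1.7 label set correctly (treat this as an assumption and verify against the 1.7 docs), the equivalent frontend labels would be:

labels:
  ...
  - "traefik.frontend.auth.basic.users=${ADMIN_CREDS}"
  # Strip the Authorization header before the request is forwarded to Grafana
  - "traefik.frontend.auth.basic.removeHeader=true"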
I am trying to run KeyCloak on Kubernetes using PostgreSQL as a database.
The files I am referring to are on the peterzandbergen/keycloak-kubernetes project on GitHub.
I used kompose to generate the yaml files as a starting point, using the files that JBoss published.
PostgreSQL is started first using:
./start-postgres.sh
Then I try to start KeyCloak:
kubectl create -f keycloak-deployment.yaml
The KeyCloak pod stops because it cannot connect to the database with the error:
10:00:40,652 SEVERE [org.postgresql.Driver] (ServerService Thread Pool -- 58) Error in url: jdbc:postgresql://172.17.0.4:tcp://10.101.187.192:5432/keycloak
The full log can be found on github. This is also the place to look at the yaml files that I use to create the deployment and the services.
After some experimenting I found out that using the name postgres in the keycloak-deployment.yaml file
- env:
    - name: DB_ADDR
      value: postgres
messes things up and results in a strange expansion, most likely because Kubernetes injects Docker-link-style service environment variables (e.g. POSTGRES_PORT=tcp://10.101.187.192:5432) for a Service named postgres, and those leak into the JDBC URL the image builds. After replacing this part of the yaml file with:
- env:
    - name: DB_ADDR
      value: postgres-keycloak
makes it work fine. This also requires changing the postgres-service.yaml file. The new versions of the files are on GitHub.
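For illustration, the renamed Service looks something like this (a sketch; the selector labels are an assumption about the setup):

# postgres-service.yaml (sketch): any name other than "postgres" avoids the
# POSTGRES_PORT=tcp://... service env var collision described above
apiVersion: v1
kind: Service
metadata:
  name: postgres-keycloak
spec:
  selector:
    app: postgres
  ports:
    - port: 5432
      targetPort: 5432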
I'm trying to get Keycloak set up as a Helm chart requirement to run some integration tests. I can bring it up and run it, but I can't figure out how to set up the realm and client I need. I've switched over to the 1.0.0 stable release that came out today:
https://github.com/kubernetes/charts/tree/master/stable/keycloak
I wanted to use the keycloak.preStartScript defined in the chart and use the /opt/jboss/keycloak/bin/kcadm.sh admin script to do this, but apparently by "pre start" they mean before the server is brought up, so kcadm.sh can't authenticate. If I leave out the keycloak.preStartScript I can shell into the keycloak container and run the kcadm.sh scripts I want to use after it's up and running, but they fail as part of the pre start script.
Here's my requirements.yaml for my chart:
dependencies:
  - name: keycloak
    repository: https://kubernetes-charts.storage.googleapis.com/
    version: 1.0.0
Here's my values.yaml file for my chart:
keycloak:
  keycloak:
    persistence:
      dbVendor: H2
      deployPostgres: false
    username: 'admin'
    password: 'test'
    preStartScript: |
      /opt/jboss/keycloak/bin/kcadm.sh config credentials --server http://localhost:8080/auth --realm master --user admin --password 'test'
      /opt/jboss/keycloak/bin/kcadm.sh create realms -s realm=foo -s enabled=true -o
      CID=$(/opt/jboss/keycloak/bin/kcadm.sh create clients -r foo -s clientId=foo -s 'redirectUris=["http://localhost:8080/*"]' -i)
      /opt/jboss/keycloak/bin/kcadm.sh get clients/$CID/installation/providers/keycloak-oidc-keycloak-json
  persistence:
    dbVendor: H2
    deployPostgres: false
Also, a side annoyance is that I need to define the persistence settings in both places, or it either fails or brings up PostgreSQL in addition to Keycloak.
I tried this too and also hit this problem, so I have raised an issue. I prefer to use -Dimport with a realm .json file, but your points suggest a postStartScript option would make sense, so I've included both in the PR on that issue.
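For the record, the realm-import route I mean looks roughly like this; a sketch only, since whether the chart exposes extraArgs and volume hooks depends on the chart version (that part is an assumption), while -Dkeycloak.import itself is the legacy Keycloak/WildFly mechanism:

keycloak:
  keycloak:
    # Assumption: the chart appends extraArgs to the server start command
    extraArgs: "-Dkeycloak.import=/realm/foo-realm.json"
    # The realm JSON still has to be mounted into the container,
    # e.g. from a ConfigMap, via whatever volume hooks the chart provides.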
The Keycloak chart has since been updated. Have a look at these PRs:
https://github.com/kubernetes/charts/pull/5887
https://github.com/kubernetes/charts/pull/5950