InfluxDB2 on Kubernetes not using existing admin password/token secret - kubernetes

I'm installing InfluxDB2 on a Kubernetes cluster (AWS EKS), and in the Helm chart I specify an existing secret name "influxdb-auth" for the admin user credentials. When I try to log in to the web admin interface, it does not accept the password or token from that secret. If I don't specify an existing secret, the chart automatically creates a secret "influxdb2-auth" and I can retrieve and use that password successfully, but it will not use my existing secret. Also, when I specify the existing secret "influxdb-auth", the chart does not create an "influxdb2-auth" secret, so I can't retrieve whatever password it has generated. I have tried naming the existing secret "influxdb2-auth", but that also did not work. Any ideas on what the problem might be?
Section from values.yaml:
## Create default user through docker entrypoint
## Defaults indicated below
##
adminUser:
  organization: "test"
  bucket: "default"
  user: "admin"
  retention_policy: "0s"
  ## Leave empty to generate a random password and token.
  ## Or fill any of these values to use fixed values.
  password: ""
  token: ""
  ## The password and token are obtained from an existing secret. The expected
  ## keys are `admin-password` and `admin-token`.
  ## If set, the password and token values above are ignored.
  existingSecret: influxdb-auth

To anyone coming here from the future: make sure you run
echo $(kubectl get secret influxdb-influxdb2-auth -o "jsonpath={.data['admin-password']}" --namespace monitoring | base64 --decode)
after the first installation. The first time InfluxDB2 starts it runs its setup task; subsequent helm install/upgrade runs seem to save a new password in the secret that doesn't match what is on the file system.
I had to delete the contents of the PVC for InfluxDB and rerun the installation.
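For reference, the existing secret has to contain both keys named in the values.yaml comment (`admin-password` and `admin-token`). A sketch of creating it before installing the chart (the namespace and the literal values are placeholders, not from the original question):

```shell
kubectl create secret generic influxdb-auth \
  --namespace monitoring \
  --from-literal=admin-password='ChangeMe123' \
  --from-literal=admin-token='ChangeMeToken'
```

If the chart still ignores it, compare the key names in the secret against the chart version's expectations, since a mismatch there fails silently.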

Related

Vault plugin for GoCD fails to retrieve secret

I am running GoCD installed via Helm in my kubernetes cluster. I've installed the Vault plugin (https://github.com/gocd/gocd-vault-secret-plugin), and have configured it successfully in the "Secrets Management" tab under Admin. My configuration looks like:
id: vault
Vault URL: https://myvaultserver.com
Vault Path: /my/path/to/secrets
Auth Method: Token
Token: MY_AUTH_TOKEN
Rules:
Allow All *
However when I assign a secret {{SECRET:[vault][password]}} to an environment variable my jobs fail after agent registration with this error:
com.thoughtworks.go.plugin.access.exceptions.SecretResolutionFailureException: Expected plugin to resolve secret param(s) `password` using secret config `vault` but plugin failed to resolve secret param(s) `password`. Please make sure that secret(s) with the same name exists in your secret management tool.
I can retrieve this just fine with vault CLI:
=========== Data ===========
Key         Value
---         -----
password    My_Password
What am I missing?
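One thing worth checking (an editor's note, not from the original thread): if the secrets engine at that mount is KV version 2, the raw HTTP API path contains an extra data/ segment that the vault CLI hides, so a plugin configured with the CLI-style path can fail to find the secret even though the CLI reads it fine. A sketch, assuming a mount named secret:

```shell
# KV v1: CLI path and API path are identical
vault kv get secret/myapp     # API: GET /v1/secret/myapp

# KV v2: the CLI silently inserts "data/" after the mount point
vault kv get secret/myapp     # API: GET /v1/secret/data/myapp
```

If the plugin talks to the raw API, try configuring the Vault Path with the data/ segment included.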

How to add grafana's admin password in values yaml

I want to ask how to add Grafana admin password configuration in helm chart.
I have followed this link github
From the link, I put the values below (taken from the GitHub page above)
[security]
# default admin user, created on startup
admin_user = admin
# default admin password, can be changed before first start of grafana, or in profile settings
;admin_password = bullebolle
# used for signing
;secret_key = SW2YcwTIb9zpOOhoPsMm
# Auto-login remember days
;login_remember_days = 7
;cookie_username = grafana_user
;cookie_remember_name = grafana_remember
in grafana-config.yml. After this I re-applied it with
kubectl apply -f grafana-config.yml
but the password didn't change.
If I follow this link stackoverflow - How to reset grafana's admin password (installed by helm) I can change the admin password, but the data is lost after restarting the deployment.
How can I solve this problem?
Thanks for answering.
You need to manually trigger a new deployment after you update the config map for the pod to get the new values.
Or you can use this feature https://helm.sh/docs/howto/charts_tips_and_tricks/#automatically-roll-deployments to automate it.
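The trick on that Helm page boils down to putting a checksum of the config into the pod template annotations, so any change to the ConfigMap changes the pod spec and forces a rollout. A sketch from the linked docs (the template path /configmap.yaml is an assumption about your chart layout):

```yaml
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
```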

Custom password for kubernetes dashboard when using eks

Is it possible to configure a custom password for the Kubernetes dashboard when using eks without customizing "kube-apiserver"?
This URL mentions changes in "kube-apiserver"
https://techexpert.tips/kubernetes/kubernetes-dashboard-user-authentication/
In K8s, requests go through Authentication and Authorization (so the API server can determine whether this user can perform the requested action). K8s doesn't have users in the simple meaning of that word (Kubernetes users are just strings associated with a request through credentials). The credential strategy is a choice you make while you install the cluster (you can choose from x509 certificates, password files, bearer tokens, etc.).
Without valid credentials, the K8s API server automatically falls back to an anonymous user, and there is no way to check whether the provided credentials are valid.
You can do something like this (not tested):
Create a new credential using OpenSSL:
export NEW_CREDENTIAL=USER:$(echo PASSWORD | openssl passwd -apr1 -noverify -stdin)
Append the previously created credentials to /opt/bitnami/kubernetes/auth:
echo $NEW_CREDENTIAL | sudo tee -a /opt/kubernetes/auth
Replace the cluster basic-auth secret:
kubectl delete secret basic-auth -n kube-system
kubectl create secret generic basic-auth --from-file=/opt/kubernetes/auth -n kube-system
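To make the OpenSSL step concrete, here is a runnable sketch of the credential line it builds (the user "admin" and password "PASSWORD" are placeholders, not values from the answer; assumes openssl is installed):

```shell
# Build a user:hash entry using openssl's apr1 (htpasswd-compatible) hashing,
# as the export NEW_CREDENTIAL step above does
NEW_USER=admin
HASH=$(echo 'PASSWORD' | openssl passwd -apr1 -stdin)
NEW_CREDENTIAL="$NEW_USER:$HASH"
# apr1 hashes look like $apr1$<salt>$<digest>
echo "$NEW_CREDENTIAL"
```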

Openshift 3.11 How to set up a permanent token for pulling from the integrated docker registry

I'm using OpenShift 3.11 and I'm having a very hard time figuring out how to set up a permanent token for image pull and push.
After I do docker login it is OK, but eventually that token expires.
According to the documentation, it seems that the service accounts default and builder should have access.
As you can see, each of them has some default dockercfg:
Labels:
Annotations:
Image pull secrets:  default-dockercfg-ttjml
Mountable secrets:   default-token-q4x4w
                     default-dockercfg-ttjml
Tokens:              default-token-729xq
                     default-token-q4x4w
Events:
default-dockercfg-ttjml has a really weird username and password. I've read the documentation many times and I still can't understand how to set up a permanent token. Can someone explain the procedure in plain terms?
AFAIK, a serviceAccount token does not expire until you create it again. Look at [0] for details. If you want to create a docker authentication secret against an external docker registry, refer to [1] for details.
[0]Managing Service Accounts
The generated API token and registry credentials do not expire, but they can be revoked by deleting the secret.
[1]Allowing Pods to Reference Images from Other Secured Registries
$ oc create secret generic <pull_secret_name> \
--from-file=.dockerconfigjson=<path/to/.docker/config.json> \
--type=kubernetes.io/dockerconfigjson
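The same documentation also covers linking the created secret to a service account so pods can use it for pulls; a sketch, assuming the default service account is the one doing the pulling:

```shell
$ oc secrets link default <pull_secret_name> --for=pull
```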

Unable to access Kubernetes Dashboard via kubeconfig

I'm trying to access the Kubernetes Dashboard via a kubeconfig file, but I don't know how to create a kubeconfig file that can access it.
I can access it via a token, but I want to be able to access it via a kubeconfig file, too.
Thanks.
Can you explain what you mean when you say you can access it by token but not through a kubeconfig? Kubeconfigs simply store authentication information in them, which can include authentication via a token.
Assuming the rest of your kubeconfig file is populated, you just need to modify it so that your user information contains the token, like so:
users:
- name: my-user
  user:
    token: <token-here>
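For completeness, a minimal kubeconfig sketch showing where that user entry fits (the cluster name, server URL, and all credential values here are placeholders):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: my-cluster
  cluster:
    server: https://my-cluster.example.com
    certificate-authority-data: <base64-encoded-ca-cert>
contexts:
- name: my-context
  context:
    cluster: my-cluster
    user: my-user
current-context: my-context
users:
- name: my-user
  user:
    token: <token-here>
```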