How to connect to OpenLDAP created by the official Helm chart?

I installed OpenLDAP using Helm 3:
helm install openldap stable/openldap
and got this message:
NAME: openldap
LAST DEPLOYED: Sun Apr 12 13:54:45 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
OpenLDAP has been installed. You can access the server from within the k8s cluster using:
openldap.default.svc.cluster.local:389
You can access the LDAP adminPassword and configPassword using:
kubectl get secret --namespace default openldap -o jsonpath="{.data.LDAP_ADMIN_PASSWORD}" | base64 --decode; echo
kubectl get secret --namespace default openldap -o jsonpath="{.data.LDAP_CONFIG_PASSWORD}" | base64 --decode; echo
You can access the LDAP service, from within the cluster (or with kubectl port-forward) with a command like (replace password and domain):
ldapsearch -x -H ldap://openldap.default.svc.cluster.local:389 -b dc=example,dc=org -D "cn=admin,dc=example,dc=org" -w $LDAP_ADMIN_PASSWORD
Test server health using Helm test:
helm test openldap
You can also consider installing the helm chart for phpldapadmin to manage this instance of OpenLDAP, or install Apache Directory Studio, and connect using kubectl port-forward.
However, I can't use this command to search the LDAP server in the k8s cluster:
export LDAP_ADMIN_PASSWORD=[REAL_PASSWORD_GET_ABOVE]
ldapsearch -x -H ldap://openldap.default.svc.cluster.local:389 -b dc=example,dc=org -D "cn=admin,dc=example,dc=org" -w $LDAP_ADMIN_PASSWORD
I got this error:
ldap_sasl_bind(SIMPLE): Can't contact LDAP server (-1)
I also logged into the pod to run:
kubectl exec -it openldap -- /bin/bash
# export LDAP_ADMIN_PASSWORD=[REAL_PASSWORD_GET_ABOVE]
# ldapsearch -x -H ldap://openldap.default.svc.cluster.local:389 -b dc=example,dc=org -D "cn=admin,dc=example,dc=org" -w $LDAP_ADMIN_PASSWORD
The result was the same.

As stated in the notes, you can access the LDAP service from within the cluster, or from outside it with kubectl port-forward.
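Before reaching for port-forward, it is worth confirming that the Service resolves and actually has endpoints; a minimal check (the app label selector is an assumption and may differ between chart versions):
kubectl get svc openldap --namespace default
kubectl get endpoints openldap --namespace default
kubectl get pods --namespace default -l app=openldap
If the endpoints list is empty, the pod is not ready yet and ldapsearch will typically fail with exactly the "Can't contact LDAP server (-1)" error shown above.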
You can do:
$ kubectl port-forward services/openldap 3389:389
Forwarding from 127.0.0.1:3389 -> 389
Forwarding from [::1]:3389 -> 389
Handling connection for 3389
From another shell, outside the Kubernetes cluster:
$ kubectl get secret --namespace default openldap -o jsonpath="{.data.LDAP_ADMIN_PASSWORD}" | base64 --decode; echo
l3dkQByvzKKboCWQRyyQl96ulnGLScIx
$ ldapsearch -x -H ldap://localhost:3389 -b dc=example,dc=org -D "cn=admin,dc=example,dc=org" -w l3dkQByvzKKboCWQRyyQl96ulnGLScIx
This was also mentioned in a comment by @Totem.
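If you also want to test from inside the cluster, as the notes suggest, another option is a throw-away client pod; a minimal sketch, assuming the Alpine image and its openldap-clients package:
$ kubectl run ldap-client --rm -it --restart=Never --image=alpine -- sh
# apk add --no-cache openldap-clients
# ldapsearch -x -H ldap://openldap.default.svc.cluster.local:389 -b dc=example,dc=org -D "cn=admin,dc=example,dc=org" -w [REAL_PASSWORD_GET_ABOVE]
(The first command runs on your machine; the two prefixed with # run inside the pod, mirroring the kubectl exec session in the question.)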

Related

Error installing TimescaleDB with K8S / Helm : MountVolume.SetUp failed for volume "certificate" : secret "timescaledb-certificate" not found

I just tried to install TimescaleDB (the timescaledb-single chart) with Helm in Minikube on Ubuntu 20.04.
After installing via:
helm install timescaledb timescaledb/timescaledb-single --namespace espace-client-v2
I got this message:
NAME: timescaledb
LAST DEPLOYED: Fri Aug 7 17:17:59 2020
NAMESPACE: espace-client-v2
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
TimescaleDB can be accessed via port 5432 on the following DNS name from within your cluster:
timescaledb.espace-client-v2.svc.cluster.local
To get your password for superuser run:
# superuser password
PGPASSWORD_POSTGRES=$(kubectl get secret --namespace espace-client-v2 timescaledb-credentials -o jsonpath="{.data.PATRONI_SUPERUSER_PASSWORD}" | base64 --decode)
# admin password
PGPASSWORD_ADMIN=$(kubectl get secret --namespace espace-client-v2 timescaledb-credentials -o jsonpath="{.data.PATRONI_admin_PASSWORD}" | base64 --decode)
To connect to your database, chose one of these options:
1. Run a postgres pod and connect using the psql cli:
# login as superuser
kubectl run -i --tty --rm psql --image=postgres \
--env "PGPASSWORD=$PGPASSWORD_POSTGRES" \
--command -- psql -U postgres \
-h timescaledb.espace-client-v2.svc.cluster.local postgres
# login as admin
kubectl run -i --tty --rm psql --image=postgres \
--env "PGPASSWORD=$PGPASSWORD_ADMIN" \
--command -- psql -U admin \
-h timescaledb.espace-client-v2.svc.cluster.local postgres
2. Directly execute a psql session on the master node
MASTERPOD="$(kubectl get pod -o name --namespace espace-client-v2 -l release=timescaledb,role=master)"
kubectl exec -i --tty --namespace espace-client-v2 ${MASTERPOD} -- psql -U postgres
It seemed to have installed well.
But then, when executing:
PGPASSWORD_POSTGRES=$(kubectl get secret --namespace espace-client-v2 timescaledb-credentials -o jsonpath="{.data.PATRONI_SUPERUSER_PASSWORD}" | base64 --decode)
Error from server (NotFound): secrets "timescaledb-credentials" not found
After that, I realized the pod had not even been created, and it gave me the following errors:
MountVolume.SetUp failed for volume "certificate" : secret "timescaledb-certificate" not found
Unable to attach or mount volumes: unmounted volumes=[certificate], unattached volumes=[storage-volume wal-volume patroni-config timescaledb-scripts certificate socket-directory timescaledb-token-svqqf]: timed out waiting for the condition
What should I do?
I managed to do it. The page https://github.com/timescale/timescaledb-kubernetes doesn't give many details about the installation process, but you can go here:
https://github.com/timescale/timescaledb-kubernetes/tree/master/charts/timescaledb-single
I had to use kustomize to generate content:
./generate_kustomization.sh my-release
and it generated several files:
credentials.conf kustomization.yaml pgbackrest.conf timescaledbMap.yaml tls.crt tls.key
Then I ran:
kubectl kustomize ./
which generated a Kubernetes YAML file that I saved as timescaledbMap.yaml.
Finally, I did:
kubectl apply -f timescaledbMap.yaml
Then it created all the necessary secrets, and I could install the chart. Hope it helps others.
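For reference, here is the whole sequence condensed into one sketch; the clone path, the kustomize directory and the namespace handling are assumptions based on the repository layout at the time:
git clone https://github.com/timescale/timescaledb-kubernetes.git
cd timescaledb-kubernetes/charts/timescaledb-single/kustomize   # directory name is an assumption
./generate_kustomization.sh timescaledb                         # use the same release name you pass to helm install
kubectl kustomize ./ > timescaledbMap.yaml
kubectl create namespace espace-client-v2                       # if it doesn't exist yet
kubectl apply --namespace espace-client-v2 -f timescaledbMap.yaml
helm install timescaledb timescaledb/timescaledb-single --namespace espace-client-v2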

Can't connect to postgres installed from stable/postgresql helm chart

I'm trying to install postgresql via helm. I'm not overriding any settings, but when I try to connect, I get a "password authentication failed" error:
$ helm install goatsnap-postgres stable/postgresql
NAME: goatsnap-postgres
LAST DEPLOYED: Mon Jan 27 12:44:22 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
** Please be patient while the chart is being deployed **
PostgreSQL can be accessed via port 5432 on the following DNS name from within your cluster:
goatsnap-postgres-postgresql.default.svc.cluster.local - Read/Write connection
To get the password for "postgres" run:
export POSTGRES_PASSWORD=$(kubectl get secret --namespace default goatsnap-postgres-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)
To connect to your database run the following command:
kubectl run goatsnap-postgres-postgresql-client --rm --tty -i --restart='Never' --namespace default --image docker.io/bitnami/postgresql:11.6.0-debian-9-r0 --env="PGPASSWORD=$POSTGRES_PASSWORD" --command -- psql --host goatsnap-postgres-postgresql -U postgres -d postgres -p 5432
To connect to your database from outside the cluster execute the following commands:
kubectl port-forward --namespace default svc/goatsnap-postgres-postgresql 5432:5432 &
PGPASSWORD="$POSTGRES_PASSWORD" psql --host 127.0.0.1 -U postgres -d postgres -p 5432
$ kubectl get secret --namespace default goatsnap-postgres-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode
DCsSy0s8hM
$ kubectl run goatsnap-postgres-postgresql-client --rm --tty -i --restart='Never' --namespace default --image docker.io/bitnami/postgresql:11.6.0-debian-9-r0 --env="PGPASSWORD=DCsSy0s8hM" --command -- psql --host goatsnap-postgres-postgresql -U postgres -d postgres -p 5432
psql: FATAL: password authentication failed for user "postgres"
pod "goatsnap-postgres-postgresql-client" deleted
pod default/goatsnap-postgres-postgresql-client terminated (Error)
I've tried a few other things, all of which get the same error:
Run kubectl run [...] bash, launch psql, type the password at the prompt
Run kubectl port-forward [...], launch psql locally, type the password
Un/re-install the chart a few times
Use helm install --set postgresqlPassword=[...], use the explicitly-set password
I'm using OSX 10.15.2, k8s 1.15.5 (via Docker Desktop 2.2.0.0), helm 3.0.0, postgres chart postgresql-7.7.2, postgres 11.6.0
I think I had it working before (although I don't see evidence in my scrollback). I believe I updated Docker Desktop since then, and the notes for that Docker update mentioned a Kubernetes update. So if I'm remembering all of that correctly, maybe it's related to the k8s update?
Whoops, never mind—I found the resolution in an issue on helm/charts. The password ends up in a persistent volume, so if you uninstall and reinstall the chart, the new version retains the password from the old one, not the value from kubectl get secret. I uninstalled the chart, deleted the old PVC and PV, and reinstalled, and now I'm able to connect.
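A minimal sketch of that cleanup, with the PVC name as an assumption (check kubectl get pvc for the actual names):
helm uninstall goatsnap-postgres
kubectl get pvc --namespace default
kubectl delete pvc data-goatsnap-postgres-postgresql-0 --namespace default   # name is an assumption
# also delete the released PV if its reclaim policy is Retain
helm install goatsnap-postgres stable/postgresql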

How to reset grafana's admin password (installed by helm)

My password once worked, but I don't remember if I changed it or not.
However, I can't reset it.
I tried with no success:
kubectl get secret --namespace default grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
> DpveUuOyxNrandompasswordYuB5Fs2cEKKOmG <-- does not work (anymore?)
PS: I did not set any admin email for web-based reset
OK, found it. The best way is to run grafana-cli inside Grafana's pod:
kubectl exec --namespace default -it $(kubectl get pods --namespace default -l "app=grafana,release=grafana" -o jsonpath="{.items[0].metadata.name}") -- grafana-cli admin reset-admin-password yourNewPasswordHere
INFO[01-21|10:24:17] Connecting to DB logger=sqlstore dbtype=sqlite3
INFO[01-21|10:24:17] Starting DB migration logger=migrator
Admin password changed successfully ✔
Okay, try this.
kubectl get pod -n monitoring
kubectl exec -it grafana-00000000aa-lpwkk -n monitoring -- sh
grafana-cli admin reset-admin-password NEWPASSWORD
If you installed it via the kube-prometheus-stack Helm chart, then the admin password is stored in a secret named kube-prometheus-stack-grafana. You need to set it there and restart the Grafana pod.
Alternatively, you can just decode the password and use:
kubectl get secrets/kube-prometheus-stack-grafana -o json | jq '.data | map_values(@base64d)'
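And if you want to change the password rather than just read it, here is a sketch of patching the secret and restarting Grafana (the secret key, deployment name and namespace are assumptions based on kube-prometheus-stack defaults):
kubectl patch secret kube-prometheus-stack-grafana -n monitoring -p '{"data":{"admin-password":"'"$(echo -n NEWPASSWORD | base64)"'"}}'
kubectl rollout restart deployment kube-prometheus-stack-grafana -n monitoring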

How to login to Kubernetes using service account?

I am trying to perform a simple operation of logging into my cluster to update image of a deployment. I am stuck at the first step. I get an error that connection to localhost:8080 is refused. Please help.
$ chmod u+x kubectl && mv kubectl /bin/kubectl
$ $KUBE_CERT > ca.crt
$ kubectl config set-cluster cfc --server=$KUBE_URL --certificate-authority=ca.crt
Cluster "cfc" set.
$ kubectl config set-context cfc --cluster=cfc
Context "cfc" created.
$ kubectl config set-credentials gitlab-admin --token=$KUBE_TOKEN
User "gitlab-admin" set.
$ kubectl config set-context cfc --user=gitlab-admin
Context "cfc" modified.
$ kubectl config use-context cfc
Switched to context "cfc".
$ echo "Deploying dashboard with version extracted from tag ${CI_COMMIT_TAG}"
Deploying dashboard with version extracted from tag dev-1.0.4-22
$ kubectl get pods
The connection to the server localhost:8080 was refused - did you specify the right host or port?
The reason you get connection refused is that your proxy is not started. Try executing the command below so kubectl can access the cluster via the proxy on localhost:8080:
kubectl proxy --port 8080 --address 0.0.0.0 --accept-hosts '.*' &
Another approach is to use curl to interact with your cluster directly, as in the following example:
curl --cacert /path/to/cert -H "Authorization: Bearer {your token}" "${KUBE_URL}/api"
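For example, with the variables from the job above, a quick check that the service account token is accepted by the API server (the resource path is just an illustration):
curl --cacert ca.crt -H "Authorization: Bearer ${KUBE_TOKEN}" "${KUBE_URL}/api/v1/namespaces/default/pods"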

Configure Client commands from command line

In IBM Cloud Private EE, I need to go to the Web UI User > Configure client, copy the kubectl config commands and then run these 5 commands on my client machine.
I deployed the IBM Cloud private EE on 5 VMs and have access to the master node. I am wondering if there is a way to capture these kubectl config commands directly from the docker containers without having a need to go to the Web UI.
For example, I did not want to download the kubectl client from Google (I just want to use the same kubectl version that ships in the ICP containers), so I used the following command to get it from the container itself:
docker run --rm -v $(pwd):/data -e LICENSE=accept \
ibmcom/icp-inception:2.1.0.1-ee \
cp -r /usr/local/bin/kubectl /data
Then, I copied this to all VM guests so that I could access kubectl from any guest.
chmod +x kubectl
for host in $(awk '/192.168.142/ {print $3}' /etc/hosts)
do
scp kubectl $host:/bin
done
where 192.168.142 is the subnet of my VM guests.
But I could not figure out how to get the Configure Client commands without going to the Web UI. I need this to automate the client kubectl configuration so that my environment is ready for kubectl commands through simple scripts.
You should use Vagrant to automate those steps.
For instance, IBM/deploy-ibm-cloud-private/Vagrantfile has this section:
install_kubectl = <<SCRIPT
echo "Pulling #{image_repo}/kubernetes:v#{k8s_version}..."
sudo docker run -e LICENSE=#{license} --net=host -v /usr/local/bin:/data #{image_repo}/kubernetes:v#{k8s_version} cp /kubectl /data &> /dev/null
kubectl config set-credentials icpadmin --username=admin --password=admin &> /dev/null
kubectl config set-cluster icp --server=http://127.0.0.1:8888 --insecure-skip-tls-verify=true &> /dev/null
kubectl config set-context icp --cluster=icp --user=admin --namespace=default &> /dev/null
kubectl config use-context icp &> /dev/null
SCRIPT
See more at "Kubernetes, IBM Cloud Private, and Vagrant, oh my!", from Tim Pouyer.
@VonC provided useful tips. This is how the service account token can be obtained.
Get the token from a running container - Tip from this link.
RUNNINGCONTAINER=$(docker ps | grep k8s_cloudiam-apikeys_auth | awk '{print $1}')
TOKEN=$(docker exec -t $RUNNINGCONTAINER cat /var/run/secrets/kubernetes.io/serviceaccount/token)
I already know the name of the IBM Cloud Private cluster, the master node, and the default user name. The only missing link was the token. Note that the script used by Tim uses a password; the only difference is that I wanted to use a token instead of the password.
So use these scripts:
kubectl config set-cluster ${CLUSTERNAME}.icp --server=https://$MASTERNODE:8001 --insecure-skip-tls-verify=true
kubectl config set-context ${CLUSTERNAME}.icp-context --cluster=${CLUSTERNAME}.icp
kubectl config set-credentials admin --token=$TOKEN
kubectl config set-context ${CLUSTERNAME}.icp-context --user=$DEFAULTUSERNAME --namespace=default
kubectl config use-context ${CLUSTERNAME}.icp-context
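A quick way to verify the resulting context before building scripts on top of it (purely illustrative):
kubectl config current-context
kubectl get nodes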
# get token
icp_auth_token=`curl -s -k -H "Content-Type: application/x-www-form-urlencoded;charset=UTF-8" \
-d "grant_type=password&username=${myuser}&password=${mypass}&scope=openid" \
https://${icp_server}:8443/idprovider/v1/auth/identitytoken --insecure | \
sed 's/{//g;s/}//g;s/\"//g' | \
awk -F ':' '{print $7}'`
# setup context
kubectl config set-cluster ${icp_server} --server=https://${icp_server}:8001 --insecure-skip-tls-verify=true
kubectl config set-credentials ${icp_server}-user --token=${icp_auth_token}
kubectl config set-context ${icp_server}-context --cluster=${icp_server} --user=${icp_server}-user
kubectl config use-context ${icp_server}-context
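If jq is available, the token extraction above can be done without the sed/awk chain; a sketch (which JSON field holds the token, e.g. id_token vs access_token, depends on the ICP version, so adjust it to match what the original awk picks out):
icp_auth_token=$(curl -s -k -H "Content-Type: application/x-www-form-urlencoded;charset=UTF-8" \
  -d "grant_type=password&username=${myuser}&password=${mypass}&scope=openid" \
  "https://${icp_server}:8443/idprovider/v1/auth/identitytoken" | jq -r '.id_token')   # field name is an assumption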