I have deployed cassandra in my local k3s cluster with the below command:
helm install monitoring-cassandra bitnami/cassandra --set dbUser.password="cassandra" --set dbUser.user="cassandra" --set image.debug=true
Now, following the instructions in the output of the above command, I create a Cassandra client pod using:
kubectl run --namespace default monitoring-cassandra-client --rm --tty -i --restart='Never' --image docker.io/bitnami/cassandra:4.0.3-debian-10-r59 -- bash
Then, at the shell prompt inside the client pod, when I attempt to connect to the deployed Cassandra instance, I get the error below:
$ cqlsh -u cassandra -p cassandra monitoring-cassandra
Connection error: ('Unable to connect to any servers', {'192.168.96.199:9042': AuthenticationFailed('Failed to authenticate to 192.168.96.199:9042: Error from server: code=0100 [Bad credentials] message="Provided username cassandra and/or password are incorrect"')})
I have no name!@monitoring-cassandra-client:/$ cqlsh -u cassandra -p $CASSANDRA_PASSWORD monitoring-cassandra
Connection error: ('Unable to connect to any servers', {'192.168.96.199:9042': AuthenticationFailed('Failed to authenticate to 192.168.96.199:9042: Error from server: code=0100 [Bad credentials] message="Provided username cassandra and/or password are incorrect"')})
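For reference, the chart's NOTES get the password into the client pod roughly as follows; the secret name and key below follow the usual Bitnami convention for this release and are my assumption:
# read the password stored by the chart (assumed secret name and key)
export CASSANDRA_PASSWORD=$(kubectl get secret --namespace default monitoring-cassandra -o jsonpath="{.data.cassandra-password}" | base64 --decode)
# start the client pod with the variable set
kubectl run --namespace default monitoring-cassandra-client --rm --tty -i --restart='Never' --env CASSANDRA_PASSWORD=$CASSANDRA_PASSWORD --image docker.io/bitnami/cassandra:4.0.3-debian-10-r59 -- bash
# then, inside the pod:
cqlsh -u cassandra -p $CASSANDRA_PASSWORD monitoring-cassandra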
Can someone please help me with this?
I just tried to install TimescaleDB Single with Helm in Minikube on Ubuntu 20.04.
After installing via:
helm install timescaledb timescaledb/timescaledb-single --namespace espace-client-v2
I got the message:
➜ ~ helm install timescaledb timescaledb/timescaledb-single --namespace espace-client-v2
NAME: timescaledb
LAST DEPLOYED: Fri Aug 7 17:17:59 2020
NAMESPACE: espace-client-v2
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
TimescaleDB can be accessed via port 5432 on the following DNS name from within your cluster:
timescaledb.espace-client-v2.svc.cluster.local
To get your password for superuser run:
# superuser password
PGPASSWORD_POSTGRES=$(kubectl get secret --namespace espace-client-v2 timescaledb-credentials -o jsonpath="{.data.PATRONI_SUPERUSER_PASSWORD}" | base64 --decode)
# admin password
PGPASSWORD_ADMIN=$(kubectl get secret --namespace espace-client-v2 timescaledb-credentials -o jsonpath="{.data.PATRONI_admin_PASSWORD}" | base64 --decode)
To connect to your database, chose one of these options:
1. Run a postgres pod and connect using the psql cli:
# login as superuser
kubectl run -i --tty --rm psql --image=postgres \
--env "PGPASSWORD=$PGPASSWORD_POSTGRES" \
--command -- psql -U postgres \
-h timescaledb.espace-client-v2.svc.cluster.local postgres
# login as admin
kubectl run -i --tty --rm psql --image=postgres \
--env "PGPASSWORD=$PGPASSWORD_ADMIN" \
--command -- psql -U admin \
-h timescaledb.espace-client-v2.svc.cluster.local postgres
2. Directly execute a psql session on the master node
MASTERPOD="$(kubectl get pod -o name --namespace espace-client-v2 -l release=timescaledb,role=master)"
kubectl exec -i --tty --namespace espace-client-v2 ${MASTERPOD} -- psql -U postgres
It seemed to have installed well.
But then, when executing:
PGPASSWORD_POSTGRES=$(kubectl get secret --namespace espace-client-v2 timescaledb-credentials -o jsonpath="{.data.PATRONI_SUPERUSER_PASSWORD}" | base64 --decode)
Error from server (NotFound): secrets "timescaledb-credentials" not found
After that, I realized the pod had not even been created, and it gave me the following errors:
MountVolume.SetUp failed for volume "certificate" : secret "timescaledb-certificate" not found
Unable to attach or mount volumes: unmounted volumes=[certificate], unattached volumes=[storage-volume wal-volume patroni-config timescaledb-scripts certificate socket-directory timescaledb-token-svqqf]: timed out waiting for the condition
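(These errors come from the pod's events, visible with kubectl describe; listing the secrets in the namespace also shows what the chart did or did not create. The pod name below is a placeholder.)
kubectl describe pod --namespace espace-client-v2 timescaledb-0
kubectl get secret --namespace espace-client-v2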
What should I do?
I got it working. The page https://github.com/timescale/timescaledb-kubernetes doesn't give many details about the installation process, but you can find more here:
https://github.com/timescale/timescaledb-kubernetes/tree/master/charts/timescaledb-single
I had to use kustomize to generate content:
./generate_kustomization.sh my-release
and then it generated several files:
credentials.conf kustomization.yaml pgbackrest.conf timescaledbMap.yaml tls.crt tls.key
then I did:
kubectl kustomize ./
which generated a Kubernetes YAML file that I saved as timescaledbMap.yaml.
Finally, I did:
kubectl apply -f timescaledbMap.yaml
That created all the necessary secrets, and I could then install the chart. Hope this helps others.
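Putting it all together, a rough sketch of the whole sequence (release name, namespace and the intermediate file name are illustrative):
# generate the kustomization inputs (credentials, certificate, backup config)
./generate_kustomization.sh timescaledb
# render the manifests and save them
kubectl kustomize ./ > generated-secrets.yaml
# create the secrets in the target namespace
kubectl apply --namespace espace-client-v2 -f generated-secrets.yaml
# now the chart can find timescaledb-credentials and timescaledb-certificate
helm install timescaledb timescaledb/timescaledb-single --namespace espace-client-v2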
I am trying to follow the instructions here: https://github.com/bitnami/charts/tree/master/bitnami/mongodb
1) helm install mongorelease --set mongodbRootPassword=secretpassword,mongodbUsername=my-user,mongodbPassword=my-password,mongodbDatabase=my-database bitnami/mongodb
which says:
To connect to your database run the following command:
kubectl run --namespace default mongorelease-mongodb-client --rm --tty -i --restart='Never' --image docker.io/bitnami/mongodb:4.2.5-debian-10-r44 --command -- mongo admin --host mongorelease-mongodb --authenticationDatabase admin -u root -p $MONGODB_ROOT_PASSWORD
I run the command above (replacing $MONGODB_ROOT_PASSWORD with my password) and I see this error:
error: invalid restart policy: 'Never'
See 'kubectl run -h' for help and examples
I remove the single quotes around Never and see this:
MongoDB shell version v4.2.5
connecting to: mongodb://mongorelease-mongodb:27017/admin?authSource=admin&compressors=disabled&gssapiServiceName=mongodb
2020-04-11T10:04:52.187+0000 E QUERY [js] Error: Authentication failed. :
connect#src/mongo/shell/mongo.js:341:17
#(connect):2:6
2020-04-11T10:04:52.189+0000 F - [main] exception: connect failed
2020-04-11T10:04:52.189+0000 E - [main] exiting with code 1
pod "mongorelease-mongodb-client" deleted
pod default/mongorelease-mongodb-client terminated (Error)
I then remove --restart=Never from the command and run it again. It then works as expected and I can interact with MongoDB; however, I am presented with this warning:
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
What is the command I should be using?
--restart=Never creates a pod. You can instead run the command with --generator=run-pod/v1 to create a pod directly; this avoids --restart=Never and gets rid of the deprecation warning.
kubectl run --rm --grace-period=1 --force=true --generator=run-pod/v1 --namespace default mongorelease-mongodb-client --tty -i --image docker.io/bitnami/mongodb:4.2.5-debian-10-r44 --command -- mongo admin --host mongorelease-mongodb --authenticationDatabase admin -u root -p $MONGODB_ROOT_PASSWORD
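If you are pasting the password by hand, it may be easier to read it from the release secret first so that $MONGODB_ROOT_PASSWORD is set in your shell; the secret name and key below follow the usual Bitnami convention and are an assumption:
# read the root password stored by the chart (assumed secret name and key)
export MONGODB_ROOT_PASSWORD=$(kubectl get secret --namespace default mongorelease-mongodb -o jsonpath="{.data.mongodb-root-password}" | base64 --decode)
The client command above can then be used unchanged.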
I'm trying to install postgresql via helm. I'm not overriding any settings, but when I try to connect, I get a "password authentication failed" error:
$ helm install goatsnap-postgres stable/postgresql
NAME: goatsnap-postgres
LAST DEPLOYED: Mon Jan 27 12:44:22 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
** Please be patient while the chart is being deployed **
PostgreSQL can be accessed via port 5432 on the following DNS name from within your cluster:
goatsnap-postgres-postgresql.default.svc.cluster.local - Read/Write connection
To get the password for "postgres" run:
export POSTGRES_PASSWORD=$(kubectl get secret --namespace default goatsnap-postgres-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)
To connect to your database run the following command:
kubectl run goatsnap-postgres-postgresql-client --rm --tty -i --restart='Never' --namespace default --image docker.io/bitnami/postgresql:11.6.0-debian-9-r0 --env="PGPASSWORD=$POSTGRES_PASSWORD" --command -- psql --host goatsnap-postgres-postgresql -U postgres -d postgres -p 5432
To connect to your database from outside the cluster execute the following commands:
kubectl port-forward --namespace default svc/goatsnap-postgres-postgresql 5432:5432 &
PGPASSWORD="$POSTGRES_PASSWORD" psql --host 127.0.0.1 -U postgres -d postgres -p 5432
$ kubectl get secret --namespace default goatsnap-postgres-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode
DCsSy0s8hM
$ kubectl run goatsnap-postgres-postgresql-client --rm --tty -i --restart='Never' --namespace default --image docker.io/bitnami/postgresql:11.6.0-debian-9-r0 --env="PGPASSWORD=DCsSy0s8hM" --command -- psql --host goatsnap-postgres-postgresql -U postgres -d postgres -p 5432
psql: FATAL: password authentication failed for user "postgres"
pod "goatsnap-postgres-postgresql-client" deleted
pod default/goatsnap-postgres-postgresql-client terminated (Error)
I've tried a few other things, all of which get the same error:
Run kubectl run [...] bash, launch psql, type the password at the prompt
Run kubectl port-forward [...], launch psql locally, type the password
Un/re-install the chart a few times
Use helm install --set postgresqlPassword=[...], use the explicitly-set password
I'm using OSX 10.15.2, k8s 1.15.5 (via Docker Desktop 2.2.0.0), helm 3.0.0, postgres chart postgresql-7.7.2, postgres 11.6.0
I think I had it working before (although I don't see evidence in my scrollback), and I think I updated Docker Desktop since then, and I think I saw something about a K8s update in the notes for the Docker update. So if I'm remembering all those things correctly, maybe it's related to the k8s update?
Whoops, never mind—I found the resolution in an issue on helm/charts. The password ends up in a persistent volume, so if you uninstall and reinstall the chart, the new version retains the password from the old one, not the value from kubectl get secret. I uninstalled the chart, deleted the old PVC and PV, and reinstalled, and now I'm able to connect.
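The cleanup looked roughly like this; the PVC name is the usual StatefulSet-generated one and is an assumption, so check kubectl get pvc for the real name:
helm uninstall goatsnap-postgres
# the data PVC survives the uninstall and keeps the old password
kubectl get pvc --namespace default
kubectl delete pvc --namespace default data-goatsnap-postgres-postgresql-0
# if the bound PV has a Retain reclaim policy, delete it as well
helm install goatsnap-postgres stable/postgresql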
I have a pod running a Java application. The Java application talks to a MySQL server that is on-prem, and the MySQL server accepts connections from 192.* IPs.
The pod runs on EKS worker nodes whose IPs are in the 192.* range, and I am able to telnet to MySQL from those nodes. When the pod starts, the Java application tries to connect to MySQL from the pod IP (some random 172.* address) and fails with a MySQL connection error.
How can I solve this?
Try to execute a shell inside the pod and connect to the MySQL server from there.
kubectl exec -it -n <namespace-name> <pod-name> -c <container-name> -- COMMAND [args...]
E.g.:
kubectl exec -it -n default mypod -c container1 -- bash
Then check the MySQL connectivity:
#/> mysql --host mysql.dns.name.or.ip --port 3306 --user root --password --verbose
Or start another pod with usual tools and check MySQL port connectivity:
$ kubectl run busybox --rm -it --image busybox --restart=Never
#/> ping mysql.dns.name.or.ip
#/> telnet mysql.dns.name.or.ip 3306
You should see some connection-related information that helps you resolve the issue.
My guess is that you just need to add a route to your cluster's pod network on the MySQL host, or on its default network router.
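A minimal sketch of such a route on the MySQL host, assuming the pod network is 172.16.0.0/16 and that a worker node or router at 192.168.1.10 can forward to it (both values are placeholders; use your cluster's real pod CIDR and next hop):
# on the MySQL host (or its router): send pod-network traffic via a reachable next hop
sudo ip route add 172.16.0.0/16 via 192.168.1.10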
I have a kubernetes cluster. I installed a mongodb pod using this link https://github.com/kubernetes/charts/blob/master/stable/mongodb/README.md#configuration
But when I try to connect to the MongoDB server, it gives me the following error:
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x140c46e]
This is the command I am running to access the server:
kubectl run mydb-mongodb-client --rm --tty -i --image bitnami/mongodb --command -- mongo --host mydb-mongodb -p password
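For comparison, a sketch of the connection command following the chart README's pattern, reading the root password from the release secret and passing the user explicitly (the secret name and key are assumptions based on the chart's usual conventions):
MONGODB_ROOT_PASSWORD=$(kubectl get secret --namespace default mydb-mongodb -o jsonpath="{.data.mongodb-root-password}" | base64 --decode)
kubectl run mydb-mongodb-client --rm --tty -i --image bitnami/mongodb --command -- mongo admin --host mydb-mongodb --authenticationDatabase admin -u root -p $MONGODB_ROOT_PASSWORD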